The technology underlying AI systems is ultimately software and algorithms, so AI vulnerabilities are often familiar ones: input-handling and logic flaws.
Training a model on an insufficient amount of data can lead to unexpected and undesired AI behavior.
Data poisoning occurs when the training data has been deliberately tainted, for example by injecting mislabeled or malicious records so the model learns the wrong behavior.
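As a minimal sketch of the idea, the toy Python function below silently flips a fraction of labels in a binary-labeled training set. The dataset, the poison_labels name, and the flip fraction are illustrative assumptions, not part of any specific attack tool.

    import random

    def poison_labels(dataset, flip_fraction=0.1, seed=0):
        """Return a copy of the dataset with a fraction of binary labels flipped."""
        rng = random.Random(seed)
        poisoned = list(dataset)
        n_flip = int(len(poisoned) * flip_fraction)
        for i in rng.sample(range(len(poisoned)), n_flip):
            features, label = poisoned[i]
            poisoned[i] = (features, 1 - label)  # silently corrupt the label
        return poisoned

    # Example: 10% of the training labels are corrupted before training.
    clean = [((x, x + 1), x % 2) for x in range(100)]
    tainted = poison_labels(clean, flip_fraction=0.1)
    print(sum(c[1] != t[1] for c, t in zip(clean, tainted)), "labels flipped")

A model trained on the tainted copy inherits whatever bias the attacker planted, even though the code and pipeline around it are unchanged.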
Evasion attacks are designed to defeat an AI model's ability to perform classification and identification, typically by crafting inputs that the model misclassifies.
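The sketch below shows one common evasion pattern on a toy linear classifier: nudging the input against the sign of the model's weights until the prediction flips. The weights, the input, and the epsilon value are made up for illustration; this is not a specific attack framework.

    import numpy as np

    def predict(w, b, x):
        """Return 1 if the linear score is positive, else 0."""
        return int(w @ x + b > 0)

    def evade(w, b, x, epsilon=0.5):
        """Perturb x in the direction that pushes the score across the boundary."""
        direction = -np.sign(w) if predict(w, b, x) == 1 else np.sign(w)
        return x + epsilon * direction

    w = np.array([1.0, -2.0, 0.5])
    b = -0.1
    x = np.array([0.8, 0.1, 0.3])

    print("original prediction:", predict(w, b, x))   # classified as 1
    x_adv = evade(w, b, x, epsilon=0.5)
    print("perturbed prediction:", predict(w, b, x_adv))  # now classified as 0

The perturbation is small relative to the input, which is what makes evasion attacks hard to spot: the data still looks legitimate to a human reviewer while the model's answer changes.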
Inference attacks occur when an attacker unravels the connection between a model's inputs and outputs to deduce information the system was meant to keep hidden, such as whether a specific record appeared in the training data.
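A common example is membership inference: overfitted models tend to be unusually confident on data they were trained on, and an attacker can exploit that signal. The sketch below assumes only black-box access to confidence scores; the threshold and the recorded values are illustrative assumptions.

    def membership_inference(confidence, threshold=0.9):
        """Guess that a record was in the training set if the model is unusually confident."""
        return confidence >= threshold

    # Confidences an attacker might have collected by querying a model (illustrative values).
    observed = {"record_a": 0.97, "record_b": 0.55, "record_c": 0.93}
    for record, conf in observed.items():
        guess = "likely member" if membership_inference(conf) else "likely non-member"
        print(record, guess)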
An extraction attack occurs when a malicious actor feeds data into a model and tracks how the model responds, using those responses to reconstruct the model's logic or the data it was trained on.
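The sketch below shows the pattern in miniature: the attacker can only query target_model as a black box, but by recording its answers to chosen inputs and fitting a surrogate, they approximate the hidden decision rule. The target function, the number of queries, and the least-squares surrogate are assumptions chosen for illustration.

    import numpy as np

    def target_model(x):
        """Hidden model the attacker can only query: a secret linear rule."""
        secret_w = np.array([2.0, -1.0])
        return int(secret_w @ x > 0)

    # Step 1: feed chosen inputs into the model and record its answers.
    rng = np.random.default_rng(0)
    queries = rng.uniform(-1, 1, size=(200, 2))
    answers = np.array([target_model(x) for x in queries])

    # Step 2: fit a surrogate from the recorded answers that mimics the
    # stolen decision boundary (simple least squares plus a bias column).
    X = np.hstack([queries, np.ones((len(queries), 1))])
    coef, *_ = np.linalg.lstsq(X, answers, rcond=None)

    surrogate = lambda x: int(np.append(x, 1.0) @ coef > 0.5)
    agreement = np.mean([surrogate(x) == target_model(x) for x in queries])
    print(f"surrogate agrees with target on {agreement:.0%} of queries")

The attacker never sees the model's internals; the queries and responses alone are enough to build a working copy, which is why rate limiting and query monitoring are common defenses.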