Why is artificial intelligence so easy to deceive?


On October 16, 2023, according to NetEase Smart News, fraud has long been one of the world’s oldest and most innovative "professions." Now, it may have a new target: artificial intelligence. Recent research suggests that AI systems can be vulnerable to manipulation by cybercriminals, and as their role in daily life grows, attacks on them could become more frequent.

The core issue lies in how AI algorithms perceive the world differently from humans. Small changes in input data can drastically alter an AI's output, even if the change is imperceptible to people. This makes AI systems susceptible to what experts call “adversarial attacks.”

Many studies focus on image recognition systems, especially those using deep learning neural networks. These systems are trained on thousands of images, learning to identify patterns that help them classify objects in new images. However, the features they detect are often not the high-level concepts humans expect—like the word “stop” on a sign or the tail of a dog. Instead, they analyze pixel-level details, which may look like random noise to us but are highly predictive for the system.

This means attackers can exploit these hidden patterns to trick AI into misclassifying images. For example, they can create images that appear normal to humans but cause the AI to see something entirely different. These manipulations are known as “adversarial examples.”
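To make this concrete, here is a minimal sketch in Python, assuming a toy linear classifier built with NumPy rather than a real image model (the weights and numbers are purely illustrative). It applies a gradient-sign-style perturbation: every "pixel" gets a small, fixed nudge in the direction that most lowers the score, and the prediction flips. In a real image classifier with thousands of pixels, the per-pixel change needed is far smaller and effectively invisible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: logistic regression over 64 "pixels".
# A real deep network is far more complex, but the mechanism is similar in spirit.
w = rng.normal(size=64)              # model weights (random here, purely illustrative)

def class1_probability(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

# A clean input the model assigns to class 1 with high confidence.
x_clean = 0.15 * w + rng.normal(size=64) * 0.05

# Gradient-sign perturbation: nudge every pixel a small, fixed amount in the
# direction that lowers the class-1 score. For a linear model, the gradient
# with respect to the input is simply w. Real images have thousands of pixels,
# so a far smaller per-pixel nudge is enough to flip a deep network's answer.
epsilon = 0.35
x_adv = x_clean - epsilon * np.sign(w)

print(f"clean score:       {class1_probability(x_clean):.3f}")   # near 1.0
print(f"adversarial score: {class1_probability(x_adv):.3f}")     # near 0.0
print(f"max pixel change:  {np.max(np.abs(x_adv - x_clean)):.3f}")
```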

Initially, such attacks required access to the internal structure of the AI model. But in 2016, researchers developed a “black box” attack method, allowing them to fool AI without knowing its inner workings. By analyzing the system's responses to modified inputs, they could generate deceptive images that look normal to human eyes.
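The sketch below illustrates one simple flavor of black-box attack, assuming a score-based setting in which the attacker can only query the model and read back a confidence value. It is not necessarily the exact method used in the research described, just a toy demonstration that the model's responses alone can guide the search for a deceptive input.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pretend this is a remote service: we can send an input and read back a
# confidence score, but we never see its weights or gradients.
_hidden_w = rng.normal(size=32)          # unknown to the attacker

def query_model(x):
    """Black-box oracle: returns the model's confidence for class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ _hidden_w)))

def black_box_attack(x, step=0.1, max_queries=2000):
    """Greedy score-based attack: try small random nudges, keep any nudge
    that lowers the returned confidence, and stop once the label flips."""
    x_adv = x.copy()
    best = query_model(x_adv)
    for _ in range(max_queries):
        if best < 0.5:                   # predicted label has flipped
            break
        candidate = x_adv + step * rng.choice([-1.0, 1.0], size=x.size)
        score = query_model(candidate)
        if score < best:                 # keep only changes that help
            x_adv, best = candidate, score
    return x_adv, best

x_clean = 0.2 * _hidden_w                # an input the service labels as class 1
x_adv, final_score = black_box_attack(x_clean)
print(f"clean score: {query_model(x_clean):.3f}, adversarial score: {final_score:.3f}")
print(f"average change per feature: {np.mean(np.abs(x_adv - x_clean)):.3f}")
```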

Tests were once limited to digital environments, but real-world demonstrations have since emerged. Last year, researchers showed that an image subtly modified on a smartphone could trick image recognition systems. Similarly, specially patterned glasses were shown to confuse facial recognition systems, causing them to mistake the wearer for a celebrity.

These developments raise serious concerns. If a self-driving car misses a stop sign due to an adversarial attack, it could lead to accidents, insurance fraud, or even harm to people. As facial recognition becomes more common in security systems, the risk of impersonation grows.

In response, researchers are developing countermeasures. Some deep learning models can now detect adversarial examples. However, this creates an ongoing arms race between attackers and defenders, where each side constantly improves their tactics.
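One defensive idea explored in the research literature is to compare the model's answer on the raw input with its answer on a coarsened copy, since finely tuned adversarial noise often does not survive the coarsening. The snippet below is a rough sketch of that heuristic; the function names, thresholds, and scorer are illustrative assumptions, not any specific product's defense.

```python
import numpy as np

def flag_if_suspicious(predict, x, threshold=0.3):
    """Heuristic check: compare the model's score on the raw input with its
    score on a coarsely quantized copy. Finely tuned adversarial noise often
    does not survive the quantization, so a large gap is a warning sign."""
    x_squeezed = np.round(x * 4) / 4          # keep only coarse value levels
    gap = abs(predict(x) - predict(x_squeezed))
    return gap > threshold, gap

# Tiny usage example with an illustrative linear scorer and a benign input.
w = np.linspace(-1.0, 1.0, 16)
predict = lambda x: 1.0 / (1.0 + np.exp(-(x @ w)))
suspicious, gap = flag_if_suspicious(predict, np.full(16, 0.1))
print(suspicious, round(gap, 3))              # benign input: no large gap
```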

Adversarial attacks aren’t limited to images. Chinese researchers found that adding specific words or misspelling a single word in a sentence can disrupt text analysis systems. Audio-based attacks have also been demonstrated, where distorted sounds can trick voice assistants into performing malicious actions.
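The sketch below shows why such tiny text edits can work, using a deliberately naive keyword-based scorer as a stand-in (an assumption for illustration, not the systems the researchers actually attacked): a single character swap removes the surface feature the scorer depends on, while a human reader still understands the sentence.

```python
# Deliberately naive keyword-based text classifier, purely for illustration.
NEGATIVE_WORDS = {"terrible", "awful", "refund", "broken"}

def negativity_score(text):
    """Fraction of words that match the negative-keyword list."""
    words = text.lower().split()
    return sum(word.strip(".,!?") in NEGATIVE_WORDS for word in words) / max(len(words), 1)

original = "This product is terrible and broken, I want a refund!"
perturbed = "This product is terrib1e and brok3n, I want a refvnd!"  # same meaning to a human

print(negativity_score(original))   # high: flagged as a complaint
print(negativity_score(perturbed))  # zero: slips past the keyword check
```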

Perhaps the most alarming application is bypassing cybersecurity defenses. Many companies use AI to detect malware, but these systems are also vulnerable. At a recent hacking conference, a company demonstrated how AI could be used to evade anti-malware systems by modifying code until it passed undetected.

Another major threat is “data poisoning,” where attackers inject flawed data into training sets to corrupt AI models. This is especially dangerous for systems that continuously update with new information, like antivirus software. Attackers can flood the system with misleading data, forcing it to make incorrect decisions and creating opportunities for intrusion.
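As a rough illustration, the sketch below poisons a toy malware "detector" that learns a single score threshold from its training data. The detector, the scores, and the numbers are all assumptions for illustration, but they show how a flood of mislabeled feedback can drag a continuously retrained model's decision boundary.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_threshold(scores, labels):
    """Tiny 'model': pick the threshold that best separates malicious (1)
    from benign (0) samples by their suspicion score."""
    candidates = np.linspace(0, 1, 101)
    accuracies = [np.mean((scores > t) == labels) for t in candidates]
    return candidates[int(np.argmax(accuracies))]

# Clean training data: benign files score low, malicious files score high.
benign = rng.uniform(0.0, 0.4, 200)
malicious = rng.uniform(0.6, 1.0, 200)
scores = np.concatenate([benign, malicious])
labels = np.concatenate([np.zeros(200), np.ones(200)])
print("threshold from clean data:", train_threshold(scores, labels))

# Poisoning: the attacker floods the feedback channel with high-scoring
# samples falsely reported as benign, and the model retrains on them.
poison_scores = rng.uniform(0.6, 0.9, 400)
poison_labels = np.zeros(400)
scores = np.concatenate([scores, poison_scores])
labels = np.concatenate([labels, poison_labels])
print("threshold after poisoning:", train_threshold(scores, labels))
# The threshold is pushed sharply upward, so moderately malicious files now pass as benign.
```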

While many of these techniques require technical expertise, the tools needed are becoming more accessible. Just as spammers once outsmarted early spam filters, we may soon see similar tactics targeting AI-driven systems. As AI becomes more embedded in our lives, the rewards for deception could outweigh the risks.

Stay informed about the latest developments in AI security by following NetEase Smart News on WeChat (smartman163). The future of AI depends on both innovation and vigilance.

SnSbCu Babbitt Wire

Babbitt wire is a tin-based product to which controlled amounts of antimony, copper, or other alloying elements are added.

SnSb7Cu3 Babbitt metal is suitable for metal spraying on the end faces of metallized film capacitors. With strong adhesion, good weldability, and a low loss angle, it is an ideal spraying material for laminated capacitors.

Other grades are suitable for building up sliding-bearing bush layers using CMT, TIG, and MIG processes. The deposits bond strongly to the substrate, achieve a material utilization rate of 70~80%, and show little compositional segregation, with no porosity or slag inclusions. The internal control standard for the alloy composition exceeds the requirements of GB/T 8740-2013. Babbitt metal with added alloying elements can substantially extend service life.





Shaoxing Tianlong Tin Materials Co., Ltd., https://www.tianlongspray.com