When it comes to Artificial Intelligence, one of the topics most commonly raised is ethics: using this technology in a responsible and wise manner.
Despite this vision, AI may be (and, if it may, it will be) used both ways, ethically and unethically: I like to refer to these two options as White Hat AI and Black Hat AI.
Here are some definitions and examples to make this clear:
White Hat AI refers to the usage of Artificial Intelligence for ethical and lawful purposes, with the intention of providing value for society (but also for corporations). Here are some examples:
- Fraud detection systems: AI algorithms that can detect and prevent fraudulent activities, such as credit card fraud or insurance fraud;
- Healthcare diagnosis and treatment: AI models that help doctors and medical professionals diagnose diseases and prescribe treatments;
- Autonomous vehicles: Self-driving cars that use AI to navigate roads and avoid accidents;
- Disaster response: AI models that help emergency responders predict, prevent, and mitigate the effects of natural (or human-caused) disasters.
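To give a feel for the first example, here is a minimal sketch of the statistical idea behind fraud detection: flag a transaction whose amount deviates strongly from a customer's usual spending. The sample data and the 3-sigma threshold are illustrative assumptions on my part; real fraud systems use many more features and trained models.

```python
# Toy fraud-detection sketch: flag a transaction as suspicious if its
# amount is an outlier with respect to the customer's past transactions.
# Illustrative only; real systems combine many signals and trained models.
from statistics import mean, stdev

def flag_suspicious(amounts, new_amount, threshold=3.0):
    """Return True if new_amount lies more than `threshold` standard
    deviations away from the mean of the historical amounts."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Hypothetical past transactions for one customer (made-up numbers)
history = [42.0, 55.5, 48.2, 60.1, 51.7, 45.3]

print(flag_suspicious(history, 49.0))    # → False (typical amount)
print(flag_suspicious(history, 2500.0))  # → True (far outside the usual range)
```

The same outlier-scoring principle, with far richer features (merchant, location, time of day), underpins many production fraud models.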
Black Hat AI, on the other hand, refers to the usage of AI for malicious or illegal purposes.
Here are a few examples:
- Malware creation: AI models that can create new and more effective malware, able to elude traditional antivirus software and infiltrate operating systems;
- Cyberattacks: AI algorithms that can scan for and exploit vulnerabilities in computer systems to steal sensitive information or cause damage;
- Deepfakes: AI-generated videos or images that can be used to spread disinformation or harm individuals;
- Social engineering: AI-powered chatbots that can trick people into revealing sensitive information, clicking on malicious links, or undertaking harmful actions.
There is a third area, which I'd call Red Hat AI, that falls somewhere between the two above: the area where some White Hat AI initiatives, if wrongly used, might end up despite all the good initial intentions.
Whoever works on White Hat AI will have to make sure that the technology they are developing is inherently safe and cannot be stretched into the Red Hat AI area.