The following series of posts (entitled “Becoming Artificially Human”) was written for a digital writing course at Old Dominion University. The posts are thus unedited from the original submissions and may contain errors.


Image — Left: Fake (Steve Buscemi) | Right: Real (Jennifer Lawrence)

While previous posts have been somewhat optimistic, or at least left room for optimistic scenarios, this post will look at the darker side of artificial intelligence. Malwarebytes, in a recent report, estimated that we will “see AI implemented or used against itself for malicious purposes in the next 1-3 years”. Malwarebytes believes this sudden adoption of AI in malware will take place because “if cybercriminals know one thing, it’s how to profit off a trend.”

Although no AI malware has cropped up in the wild yet, I would be remiss not to briefly mention IBM’s AI-malware proof of concept, DeepLocker. The malware uses existing AI models to identify targets through facial recognition, geolocation, and voice recognition. Once a target has been identified, the malware conducts its malicious attack (which can range from a traditional attack on the system to ransomware or something more complex). This level of sophistication did not previously exist; earlier malware was less picky and less stealthy in its movements. Security software companies such as Symantec, creators of Norton Anti-Virus, as well as others, are already preparing for the day AI malware and malicious AI sprout up. Norton’s solution looks to AI to help predict risk and mitigate incoming malware, essentially using AI to defend against AI.
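To make that defensive idea a bit more concrete, here is a minimal, purely illustrative sketch of the general approach: training a model to score how risky a file looks from a handful of features. The feature names, the tiny synthetic training set, and the choice of scikit-learn’s RandomForestClassifier are assumptions made for illustration, not anything Symantec has published.

# Hypothetical sketch: scoring files for malware risk with a learned model.
# Feature names and the synthetic training data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [file size in KB, count of suspicious API calls, entropy of the binary]
X_train = np.array([
    [120,  0, 4.1],   # benign examples
    [300,  1, 4.8],
    [950, 14, 7.6],   # malicious examples
    [410,  9, 7.2],
])
y_train = np.array([0, 0, 1, 1])  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score an unseen file: estimated probability that it is malicious.
new_file = np.array([[500, 11, 7.4]])
risk = model.predict_proba(new_file)[0][1]
print(f"Estimated malware risk: {risk:.2f}")

In practice the defender’s model would be trained on millions of labeled samples and far richer features, but the basic shape is the same: learn from past malware, then predict risk for whatever arrives next.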

Not only is AI malware on the horizon, but DeepFakes are already here. DeepFakes are photos or videos manipulated automatically using AI systems called neural networks. DeepFakes are essentially the next step up from “fake news” and “photoshops”. PBS put out a great piece explaining DeepFakes and the worries surrounding the software, which also showcases the Department of Defense’s (DoD) solution to the problem. That solution is a computer that reasons, which has its own inherent complications (computer scientists have been working on that problem for the past 40 to 50 years). DeepFakes are not just going to affect politicians; they are already affecting women, as the same technologies used to place Nicolas Cage in the place of different actors are also being used to place women in porn they were never in. According to Sophos, “96% of the deepfakes being created in the first half of the year were pornography, mostly being nonconsensual, mostly casting celebrities – without compensation to the actors, let alone their permission.”
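For readers curious what “using neural networks” actually looks like here, face-swap deepfakes are commonly described as a shared encoder paired with one decoder per identity. The PyTorch sketch below is an assumed, heavily simplified illustration of that idea (arbitrary layer sizes, no training loop), not the actual software used to make these videos.

# Minimal sketch (assumed architecture): shared-encoder / two-decoder
# autoencoder design commonly described for face-swap deepfakes.
# Layer sizes are arbitrary placeholders, not a working deepfake system.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid())
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()      # learns a shared representation of "a face"
decoder_a = Decoder()    # learns to reconstruct person A's face
decoder_b = Decoder()    # learns to reconstruct person B's face

# Training (not shown) reconstructs A through decoder_a and B through decoder_b.
# The swap: encode a frame of person A, then decode it with person B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])

The swap happens in the last step: a frame of person A is encoded into the shared face representation and then decoded with person B’s decoder, producing B’s face with A’s pose and expression.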

The problem of DeepFakes is compounded by a lack of public awareness. According to Kantar, which conducted a survey in April 2019, 45% of those surveyed said they “were not aware at all” of how “AI is affecting their life presently.” On October 5th, 2019, NBC Bay Area reported that California is looking to ban the distribution of DeepFakes in advance of the 2020 presidential election. Whether the law should or should not pass is irrelevant to this conversation; the fact that a state would take DeepFakes this seriously, or see the technology as this threatening, speaks volumes.
