Artificial Intelligence (AI) has not always been portrayed as benevolent. Science fiction novels, movies, and series often cast it as sinister: most sci-fi classics depict AI as a dystopian technology that displaces and enslaves humans. HAL in 2001: A Space Odyssey and the Cylons of Battlestar Galactica show AI as ruthless, and even WALL-E depicts a humanity helplessly dependent on technology.
What is the real-world application and implementation of artificial intelligence (AI) like?
The real-world application and implementation of artificial intelligence (AI) is far less dramatic than entertainment portrays it; in fact, it has been quite subtle all these years. Currently, the hottest topics in AI are generative AI and, of course, ChatGPT.
ChatGPT gained 100 million monthly active users within its first two months, making it one of the fastest-growing consumer apps ever. Users were fascinated by its advanced capabilities, and it caused ripples across several business sectors.
Yet it still has a lot of work to do on data privacy. Many businesses and private citizens used it without understanding its data privacy and ownership practices.
The consequences were surprising. Italy banned the software for roughly a month before lifting the ban, on the condition that OpenAI make its data protection policy clear and understandable.
The privacy risks ChatGPT poses to individuals carry serious implications and are being discussed in ever greater detail. Regulators enforcing the E.U.'s General Data Protection Regulation (GDPR) are monitoring it and similar software.
Google, too, recently unveiled its own conversational AI bot, known as Bard, and other companies are following suit. Tech companies working on artificial intelligence have entered an arms race of sorts, yet tech figures like Elon Musk and Steve Wozniak have called for a pause on advanced AI development.
In short, the problem is exacerbated by how freely these systems consume users' personal data.
Out of the 300 billion words ChatGPT was trained on, how many do users own?
ChatGPT is powered by a large language model. Such a model requires huge amounts of data to train on and improve with: the more data it is trained on, the better it gets at detecting patterns, anticipating what will come next, and generating plausible text.
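To make the idea of "anticipating what comes next" concrete, here is a deliberately tiny sketch: a bigram frequency model that predicts the most likely next word from counts seen in training text. This is an illustration of the statistical principle only, not ChatGPT's actual architecture (which uses transformer neural networks trained on vastly more data); all names below are ours.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word in the text."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# The "training data" -- the model can only predict from what it has seen.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

The key point the toy shows is the same one driving the data-hunger of real systems: predictions come entirely from patterns in whatever text was collected for training, which is why the sourcing of that text matters.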
OpenAI, the company behind ChatGPT, systematically fed the tool almost 300 billion words gathered from various sources: websites, blog posts, social media posts, journals, magazines, books, and more. In the process, it even obtained personal information about users without their consent.
Hence, if people wrote a blog post, reviewed a product, or commented on an article online, there is a good chance ChatGPT ingested that content without their consent or permission.
Why is it such an issue?
The data collection used to train ChatGPT is problematic for several reasons. First and foremost, no one was asked for permission to use their data. This is a bona fide violation of privacy, especially where sensitive data was obtained that can identify people, along with their family members, relatives, friends, coworkers, and even their location.
Even when the data was publicly available, using it this way breaches contextual integrity, a key principle in legal discussions about privacy. It requires that individuals' information not be revealed outside the context in which it was originally created.
Second, OpenAI did not offer any method for users to check whether the company stored their personal information, nor did it explain how users could have it deleted.
This is a fundamental right guaranteed under the European Union's GDPR, and it was partly on these grounds that Italy banned ChatGPT before lifting the ban conditionally. Whether ChatGPT complies with the GDPR's requirements and stipulations is still under debate.
This principle is colloquially known as the 'right to be forgotten' and is especially important where the stored information is inaccurate or misleading.
Inaccuracies have been a regular occurrence with ChatGPT. Brian Hood, a former mayor of the Hepburn Shire council near Melbourne, Australia, threatened to sue OpenAI after ChatGPT wrongly described him as a criminal; he was in fact a whistleblower in a major political scandal.
Lastly, OpenAI did not pay for the data it took from the internet. The website owners, individuals, companies, and other entities that produced it were not compensated, even though OpenAI was recently valued at USD $29 billion, roughly double its 2021 valuation.
OpenAI's privacy policy for ChatGPT is flimsy
Another privacy risk involves the prompts users feed ChatGPT. When asking it to answer a question or perform a task, users may inadvertently hand over sensitive information that can end up in the public domain.
Suppose a corporate attorney prompts ChatGPT to review a fraud settlement agreement, a coder asks it to check some code, or a student uses it to answer a complex question: the agreement, the code, and the answers all become part of ChatGPT's database. Such data may then be used to further train the chatbot and can surface in responses to other users' prompts.
Beyond this, OpenAI gathers other user information on a wider scope. Per the company's privacy policy, it collects users' IP addresses, browser type and settings, and data on their interactions with the site, including the content they engage with, the features they use, and the actions they take. This is cause for concern.
That is not all: OpenAI also collected information about users' browsing activities over time and across various websites.
Alarmingly, OpenAI states that it may share users' personal information with unspecified third parties without informing them first. Doing so in pursuit of business objectives is a troubling practice and may run afoul of numerous cyber laws and data protection acts.
Conclusion
Keeping all the aforementioned facts in mind, ChatGPT is unfortunately a problem child when it comes to user privacy. Its parent company OpenAI is not only under intense scrutiny but may also be sued over privacy, data harvesting, defamation, and other legal violations.