Constant technological advances raise many questions regarding human rights. Do artificial intelligence and technology play a role in human rights? If so, do they help to protect human rights, or do they infringe upon them in certain ways?

Artificial Intelligence (AI) is “the capability of a machine to imitate intelligent human behavior.” This encompasses many forms of technology, from security camera systems to computers and phones. On the one hand, technology leads to economic growth and new discoveries; on the other, technological advances increasingly open up access to the personal information of most people in society. The Australian Human Rights Commission, for example, fears that both the government and the general public lag behind the speed of technological development; in other words, neither fully recognizes the force of technology.

The overarching question is: how do we ensure technology is used to deliver what people want instead of what people fear? The Australian Human Rights Commission argues that AI makes certain automated decisions that disadvantage individuals based on characteristics such as race, gender and age. These issues tend to arise in situations relating to policing, home loans and social security. In such fast-paced decision making, AI can make discriminatory decisions based on the aforementioned characteristics, often through disparate impact, a doctrine in United States (US) anti-discrimination law under which a practice counts as discriminatory if it disproportionately harms one group of people relative to another, even without discriminatory intent. A recent example is the case Texas Department of Housing and Community Affairs v. Inclusive Communities Project, in which the Inclusive Communities Project “used a statistical analysis of housing patterns to show that a tax credit program effectively segregated Texans by race.”
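As a simplified, hypothetical illustration of how disparate impact can be measured, the sketch below compares the rate of favorable automated decisions across two groups and applies the “four-fifths” (80%) rule of thumb used in US employment guidance. The numbers and the code are illustrative assumptions only, not the analysis used in the Inclusive Communities case.

    # Hypothetical illustration: measuring disparate impact with the "four-fifths" (80%) rule.
    def selection_rate(approved, applicants):
        """Share of applicants in a group receiving a favorable decision."""
        return approved / applicants

    # Assumed outcomes of an automated home-loan model for two groups (made-up numbers).
    rate_a = selection_rate(approved=720, applicants=1000)  # 72% approved
    rate_b = selection_rate(approved=450, applicants=1000)  # 45% approved

    impact_ratio = rate_b / rate_a  # 0.625
    print(f"Disparate impact ratio: {impact_ratio:.2f}")
    if impact_ratio < 0.8:  # below the four-fifths threshold
        print("Possible disparate impact on the less favored group.")

In practice, regulators and courts rely on far richer statistical evidence, but the underlying question is the same: does the automated decision fall more heavily on one group than on another?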

How broad is privacy as a human right?

Article 12 of the Universal Declaration of Human Rights states that no one shall be subjected to arbitrary interference with their privacy. However, “arbitrary interference” is never defined and is instead left to the interpretation of those enforcing it. Many countries have domestic privacy laws protecting their citizens; however, the extent of coverage varies drastically. For example, the European Union (EU) is considered to have strong privacy laws, with transparency as a core tenet. A recent attempt to strengthen privacy protections in the EU is the General Data Protection Regulation (GDPR). The GDPR is an attempt “to give individuals more control over how their data are collected, used, and protected online.” It also creates more oversight of, and restrictions on, how companies use these data.

The US is also attempting to strengthen its privacy protections, for example through the introduction of the EU-US and Swiss-US Privacy Shield frameworks. These were designed by the US Department of Commerce, the Swiss Administration and the European Commission to give companies a mechanism for complying with data protection laws when moving personal data between the EU or Switzerland and the US. Many countries have already adopted privacy laws, and others, such as Peru, Costa Rica, Uruguay and Mexico, are attempting to implement them.

Forms of privacy infringement 

A study undertaken by Gallup Research found that “80% of millennials place ‘some’ or ‘a lot’ of trust in companies to keep their data secure.” In reality, these companies, apps and other technology services often seem to do little, if anything, to protect privacy (although it is questionable to what extent violations of privacy rights are caused by app owners or by users’ lack of attention to privacy policies).

A major category of privacy infringement centers on phone and computer privacy: listening through microphones, watching through cameras and accessing private information such as GPS location or data from social media sites. Certain companies, such as Apple, have issued apologies for listening in on private conversations, for example when one uses Siri. The likelihood of a conversation being analyzed by a real person is small; it is more likely to be processed by AI. Other studies have found that apps can record a phone’s screen and send the information to third parties. These companies have tremendous access to “personal data including almost everything we post, share and search for online.” Technology companies appear to monitor this online activity in order to create personalized advertisements. But do they use the personal data for more?

FaceApp scandal

“FaceApp” is a smartphone application made by “Wireless Lab,” a company based in Russia. According to the terms and conditions of the app, “your photos could be used in unexpected ways.” The privacy policy states that “its affiliates and service providers may transfer information that we collect about you, including personal information across borders and from your country or jurisdiction to other countries or jurisdictions around the world.” In order to use the app, people need to upload pictures to the cloud. Once a picture is uploaded, “FaceApp” effectively owns the rights to the photo and can use it in any way the company sees fit, for example for advertising on billboards or for developing facial recognition technology. Users who download the app grant “FaceApp” the right “to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform, and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed.” Moreover, even after the app is deleted, as many people have done out of concern for their privacy, “FaceApp” retains the rights to the photos and data for which it was previously authorized.

What can we do?

There are many routes one can take when discussing technology and how best to make use of it. For example, some propose “a moratorium on certain uses of facial recognition technology until an appropriate legal framework that protects human rights has been established.” Others stress the need for transparency in decisions relating to individuals and for the ability to challenge such decisions.

One outspoken critic of privacy infringement is Steve Wozniak, co-founder of Apple Inc. Wozniak stated, “I’m worried about everything. I don’t think we can stop it though.” A piece of advice offered by Wozniak for protecting privacy “is to figure out a way to get off Facebook.” However, even deleting one’s Facebook account, or other social media accounts, does not erase the data that has previously been shared on one’s network.

There are also numerous legal means of protecting human rights against privacy infringements, for example the aforementioned GDPR and the EU-US Privacy Shield. It is of utmost importance, however, that these legal instruments are continually reevaluated and updated so that they keep pace with the current state of technology. Additionally, various legal requirements for app owners could be instituted; more importantly, the transparency of apps and technology services must be increased, and people should be made more aware of the privacy policies of the apps they use. Laws could require that users be better informed of how their information will be used and what rights they are granting to app and technology companies. It could be argued, however, that this transparency should not depend on people reading privacy policies in their current form, given how long and time-consuming they are. Instead, laws could, for example, group apps under a common privacy policy or require shorter, more concise privacy policies.

Going forward, there are many questions one must ask. What do the privacy laws cover? Do they protect against the collection of personal information? Do they prevent companies from listening to conversations through phones and computers? Do they ensure privacy in the public sphere against facial recognition technologies? Do people have the right to private online searches?

As technology and AI continue to advance at ever-increasing rates, the importance of answering questions like these continues to grow as well.


References

Baker, Nick. Why Australia's peak human rights body is worried about artificial intelligence, SBS News, 17 December 2019, accessed 19 December 2019 (https://www.sbs.com.au/news/why-australia-s-peak-human-rights-body-is-worried-about-artificial-intelligence).

CBS News. Are smartphones listening and targeting us with ads?, CBS News. 27 February 2018, accessed 5 January 2020 (https://www.cbsnews.com/news/phone-listening-facebook-google-ads/).

Chanthadavong, Aimee. Human Rights Commission wants privacy laws adjusted for an AI future, ZDNet, 17 December 2019, accessed 19 December 2019 (https://www.zdnet.com/article/human-rights-commission-wants-privacy-laws-adjusted-for-an-ai-future/).

Davis Wright Tremaine LLP. Discrimination and Algorithms in Financial Services: Unintended Consequences of AI, 6 March 2018, accessed 18 February 2020 (https://www.dwt.com/blogs/payment-law-advisor/2018/03/discrimination-and-algorithms-in-financial-service).

GDPR. EU. Does the GDPR apply to companies outside of the EU?, accessed 18 February 2020 (https://gdpr.eu/companies-outside-of-europe/).

Gustke, Constance. Which countries are better at protecting privacy?, BBC, 25 June 2013, accessed 19 February 2019 (https://www.bbc.com/worklife/article/20130625-your-private-data-is-showing).

International Trade Administration, U.S. Department of Commerce. Welcome to the Privacy Shield, accessed 18 February 2020 (https://www.privacyshield.gov/welcome).

Merriam-Webster. artificial intelligence (https://www.merriam-webster.com/dictionary/artificial%20intelligence).

Passy, Jacob. Read this before using FaceApp - you give up more personal data than you realize on this Russian-made app, MarketWatch, 22 July 2019, accessed 18 February 2020 (https://www.marketwatch.com/story/having-fun-using-faceapp-think-again-you-give-up-more-data-than-you-think-with-this-russian-made-app-2019-07-17).

Pettijohn, Nathan. Of Course Your Phone Is Listening To You, Forbes, 3 September 2019, accessed 05 January 2020 (https://www.forbes.com/sites/nathanpettijohn/2019/09/03/of-course-your-phone-is-listening-to-you/#7b4c03366a3f).

UN General Assembly, Universal Declaration of Human Rights, 10 December 1948, 217 A (III).

US Norton. What are some of the laws regarding internet and data security, accessed 19 December 2019 (https://us.norton.com/internetsecurity-privacy-laws-regarding-internet-data-security.html).

Photograph

Artificial intelligence. Illustrative photo, author: Markus Spiske, Unsplash, public domain, edits: cropped.