Saturday, April 27, 2024

Decoding deepfakes: A conversation with Anushka Jain on the intersection of technology, privacy, and the digital frontier

The present century is witnessing an overflow of digital manipulation through deep generative methods. What you see is not always real, and the growth of deepfakes makes unreal things look increasingly convincing. Do deepfakes affect only an individual, or do they demean an entire society? Recent deepfakes of the actresses Rashmika Mandanna, Kajol, and Katrina Kaif have shaken the internet and drawn the central government’s attention to the subject. Following the tensions, India’s Minister of Electronics and Information Technology clarified that the government would soon make rules on deepfake prevention. That, however, may be only temporary solace for cyber users. Does technological evolution ignore women and their privacy?

Concerned about the generation of deepfakes, I spoke with Anushka Jain, an Artificial Intelligence expert and a lawyer interested in technology, facial recognition, and machine learning. Anushka is currently working as a Research Associate at Digital Futures Lab, a multidisciplinary research collective that examines the complex interaction between technology and society in the Global South.

Sofia: The invasion of synthetic media has added a layer of fakeness to the world of content creation. In a technologically advanced era like this, how much more dangerous can the misuse of synthetic media become?

Anushka: It can be really harmful, especially as it becomes easier and easier to create this kind of media. The technology is becoming more advanced, so when we see a visual it is difficult to identify what is real and what is fake. In that situation it is easier to misuse; especially when it is deployed in the context of elections, it will lead to the spread of a lot of misinformation about candidates and other election-related activities. So careful use is needed in the world of content creation; otherwise synthetic media has the tendency to make things more dangerous.

Do we need aggressive collective action to prevent the misuse of deepfakes?

Definitely. Different stakeholders need to be consulted on methods of deepfake prevention. Action must be taken to reduce the harm produced by deepfakes, and collective action is essential at this point. Each stakeholder contributes something different: the government contributes its reach and its ability to make regulation, while private companies are the ones building the technology, so they are well placed to implement it. Whether it is Artificial Intelligence experts, civil society, or other stakeholders, they are the best players to actually point out what is right, what is wrong, and how regulation should happen to prevent deepfakes. So the solutions to this problem will come from the stakeholders; they need to be involved when it comes to regulation and collective action.

Deepfakes pose a clear and formidable threat to women. Does the malicious dissemination of non-consensual pornography targeting women reveal how misogynistic the technology has become? Is technological evolution destabilizing women’s privacy?

See, at the end of the day we cannot say technology is inherently misogynistic, but its use definitely is. It is just an expression of what society is. Technology is providing an avenue for misogynistic actions to take place. Deepfakes are a problem; however, the bigger issue is their use by certain actors to target women. First of all, it is very important to understand that blaming technology will not produce solutions unless and until you understand that it is the use of technology that is problematic. If the creators did not have deepfakes, they would find another avenue to express their misogynistic thoughts and actions.

Definitely, this is a big issue; it is also significant not just to blame the technology but to highlight the people who use it.

People who are building the technology also need to be aware of the harm it can do. I think one of the biggest issues is that technology is not being built with women in mind. Technological evolution is not taking into account the female perspective or the female experience. Civil society stakeholders, NGOs, and experts who work on women’s rights issues would have been able to tell that this is one of the first things the technology would be used for.

I think this is also an issue with the people who are building the technology: they are not thinking about the harms that could come out of it. They are building it without understanding the possibility of its consequences for society. Technology is not a person, so it cannot hold views, whereas misogyny is a view that people hold. At the end of the day, people are making the deepfakes; it is not as if deepfakes are generated automatically.

Can the government in India keep surveillance on DF creators’ social media handles, since DFs do not come under free speech? If it does so, will there be a concern about creators’ privacy in cyberspace?

Privacy does not protect illegal activities. If any content creator is doing something illegal under Indian law, then the government can take action, and that is necessary. Social media is not a private forum; it is a public space, and the use of social media is regulated by social media companies and the government. So the question of privacy doesn’t really arise in this situation. It is not as if the government is going to interfere with creators’ private space by looking at what kind of videos they hold. If they post something on X or Instagram or other social media platforms, then that is a public forum, and free speech does not mean absolute free speech. There are conditions on freedom of expression under Article 19 of the Indian Constitution. Free speech is regulated to the extent that it cannot be used to defame people. The government may monitor content creators’ handles, but that does not mean complete surveillance. If users post content in a public forum, then the question of privacy does not come in.

People who are not even technologically inclined are using DFs now, and there are a lot of apps available too. In such a situation, will the prevention of DFs be difficult? Can Artificial Intelligence mimic biometric data?

Yes, for sure. It is now easier and easier to make these deepfakes. People who are not well aware of the proper use of technology now have access to deepfake-making apps, which are also technically advanced. So definitely it will pose a greater threat. And yes, AI can be dangerous in dealing with biometric data; it can pose a threat to privacy and personal details.

Now people are talking about the harm posed by shallow fakes, cheap fakes, and deepfakes. Is this the right time to have a detailed discussion about digital rights?

It is not that the creation of deepfakes goes against digital rights. But if creators are doing something illegal, then they are not allowed to do it, and stopping them does not mean their digital rights are suspended. Digital rights are extremely significant, and it is very important that freedom of expression online is protected. At the same time, deepfake pornographic content created by superimposing someone’s face or body does not come under digital rights. One cannot defame or abuse another person in cyberspace. In fact, such content affects the digital rights of the person about whom the deepfakes are being created, because their experience in the online space is adversely affected.

Is DF technology shaping a zero-trust culture in which people doubt the functionality of technology that can even destroy their personal lives?

I don’t think so. A lot of people still depend on and trust technology for daily uses such as banking and medical needs. I am not of the opinion that a zero-trust culture is arising in society now. It depends entirely on the situations in which deepfakes are used.

So far we have been speaking about the harmful side of DFs. But on social media there is a growing trend where users laugh at audio deepfakes, such as well-known personalities artificially made to sing or speak. Is there any positive side to highlight, especially in the entertainment industry?

I don’t think deepfakes are helping society in any way. But when they are used for entertainment purposes, where people know it’s fake and no harm results, then I don’t think deepfakes are a problem. For example, there are people who mimic celebrities in real life; we are not going to say they are harming society or bringing positivity. They just exist for entertainment. There is no social benefit that deepfakes are going to bring. But if it is not harming anyone, and if people know it’s fake or a joke, then I don’t see why it could not just be allowed to exist in that situation.

How can a person spot the fakeness of a visual? What legal help is currently available for the victims of DFs? Where should they turn first?

Victims can report to the cybercrime cell of the police station under their jurisdiction. In terms of identifying deepfakes, I think it’s very important to have some level of public awareness that what you see on social media can be fabricated, and not to believe everything you see without checking. One major step is to cross-check content when we see it: search for credible news reports about the incident instead of just believing the video.

In terms of how to spot deepfakes technically: the image does sometimes break around the face, and it has some kind of lag. The face will not blink as much as a normal person does. So there are basic ways to identify them. But the thing is, technology is always developing and improving, so there will be other ways to identify deepfakes in the years to come. One way to protect yourself is simply to take a step back, ask whether what you are seeing is true, and cross-check the facts.
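The blink cue Anushka mentions can be turned into a crude automated heuristic. The sketch below is purely illustrative: it assumes you already have a per-frame eye-aspect-ratio (EAR) series from a face-landmark tool, and the threshold and blink-rate cutoff are invented placeholder values, not calibrated constants.

```python
def count_blinks(ear_series, closed_threshold=0.2):
    """Count blinks in a series of per-frame eye-aspect-ratio (EAR) values.

    A blink is counted each time the EAR drops below the threshold
    after having been above it (i.e. a run of closed-eye frames).
    """
    blinks = 0
    eyes_closed = False
    for ear in ear_series:
        if ear < closed_threshold and not eyes_closed:
            blinks += 1          # eye just closed: start of a new blink
            eyes_closed = True
        elif ear >= closed_threshold:
            eyes_closed = False  # eye reopened
    return blinks


def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=8):
    """Flag a clip whose blink rate is far below a typical human rate.

    Humans blink roughly 15-20 times per minute; early deepfakes often
    blinked far less. The cutoff of 8 per minute is an assumption for
    illustration, not a validated detector.
    """
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_minute
```

For example, a 60-second clip at 30 fps containing only two short dips in the EAR series would be flagged, while one with sixteen dips would pass. This is only a single weak signal; as the answer above notes, the technology improves, so such tells should supplement cross-checking, not replace it.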

How tough is it to identify a high-quality deepfake? Can we identify the fakeness of a video from eye or lip movements?

When you see a video, be critical of the information you receive; beyond that, there is no straightforward solution to this problem. The technology is improving, so instead of trying to find technical tells, just be a little self-aware about the information you have and question whether it makes sense. Deepfakes do tend to glitch when the face moves, and there will be telltale eye and lip movements; when we watch closely, these can be identifiable. As the technology improves, it will become harder to spot the workings of high-quality deepfakes. Cross-check everything; a sense of public awareness is needed.

To what extent can deepfake technology shape workflows in healthcare? What will be the role of deepfakes in the health sector?

The technology behind deepfakes is proving beneficial in the medical sector. It can be used to recognise tumours and to detect issues that were not identified before. It can also be used to generate synthetic data in a way that protects the privacy of individuals. So in health procedures, deepfake technology is proving to be quite beneficial.
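The privacy-preserving synthetic data Anushka describes is typically produced with generative models (the same family of techniques behind deepfakes). The toy sketch below only mimics the idea with independent Gaussian sampling per column, so that no real record is ever released; all field names and numbers are invented for illustration and real tools are far more sophisticated.

```python
import random
import statistics


def fit_columns(records):
    """Learn the mean and std-dev of each numeric column from real records."""
    columns = {}
    for key in records[0]:
        values = [r[key] for r in records]
        columns[key] = (statistics.mean(values), statistics.pstdev(values))
    return columns


def synthesize(columns, n, seed=None):
    """Sample n synthetic records from the learned per-column distributions.

    Each output row is drawn independently, so no row corresponds to a
    real patient -- a (very) simplified version of what generative
    synthetic-data tools aim for.
    """
    rng = random.Random(seed)
    return [
        {key: rng.gauss(mean, std) for key, (mean, std) in columns.items()}
        for _ in range(n)
    ]


# Invented example data: age and systolic blood pressure of four "patients".
real = [
    {"age": 54, "systolic_bp": 130},
    {"age": 61, "systolic_bp": 142},
    {"age": 47, "systolic_bp": 125},
    {"age": 58, "systolic_bp": 138},
]
fake = synthesize(fit_columns(real), n=100, seed=42)
```

The synthetic rows preserve rough aggregate statistics (means, spreads) for research use while revealing no individual's actual record, which is the privacy property the answer above points to.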

Do you think that social media tech giants are taking proper measures to prevent the use of DFs on their platforms?

I don’t think all the social media platforms have rolled out specific regulations with regard to deepfakes; that is still being worked out. Definitely, social media companies can do more to prevent the harm that can be caused to users. One such way is to have a faster grievance-redressal mechanism: they must respond to users as swiftly as possible, verify the video or photo, and take it down if it is found to be fake. I think social media platforms can do more, and so can governments. At this point, everyone is coming to understand the core problem and working towards a solution.

Technological evolution is the need of the hour. But when cyber attacks using DFs happen, whom should we blame: the technology or the creators of the fake content? And where does the accountability lie?

Accountability lies mainly with the creators. But in general, to some extent, social media platforms, the government, and the people who are building this technology are also responsible for these problems. It is their responsibility to monitor the content. So it is not only one group that is accountable; many groups are answerable for the harms occurring in cyberspace. The government should think about how deepfakes harm individuals through their publicly available images. There is an urgent requirement to control the spread of deepfakes in the larger context.
