Last November, the Chinese army attempted to deceive India by morphing content and publicizing images that showed "surrendered Indian soldiers captured by the Chinese PLA crouching in humiliating postures". It was later found that neither the video nor the photographs were real. India neutralized the Chinese narrative about who controlled Galwan by releasing authentic images of the Indian army performing its New Year flag-raising exercise. After thoroughly examining the photos, Indian defense analysts concluded that the texture of the mountains in the footage released by China did not match the geographic features of Galwan. Moreover, the PLA troops resembled Chinese movie stars, clearly indicating that the scene was staged rather than real. Observers also noted that the images of ostensibly supplicant Indians in some earlier Chinese photos appeared to be photoshopped.
It seems as if AI-driven 'deepfakes' are the next big tool of the Chinese disinformation campaign. The debate this raises is whether democracies like India should stay on the defensive or start exposing autocratic China's lies, even producing forged content of their own to deflate and deter Beijing.
A closer look: Deepfakes are fabricated videos, audio clips, or images that look eerily realistic. Like other deep-learning systems, deepfakes are built on neural networks, software frameworks that loosely mimic the functioning of the human brain. Producing one requires samples of source data plus an encoder and a decoder. The encoder is trained to compress the important properties of the source data, which might be an image, video, text, or audio file, into a lower-dimensional latent space, learning the patterns that characterize a face or voice. The decoder is then trained to reconstruct data from that latent space using the specifications of the target. As a result, the algorithm superimposes the source's attributes onto the target image, generating fake data. On close inspection a deepfake can sometimes be spotted, but it is quite easy to mistake a good one for a real video. Today, deepfakes are widely used for misinformation, and a well-made one can fool thousands of people. “The capacity to generate deepfakes is proceeding much faster than the ability to detect them.”
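The encode-compress-decode pipeline described above can be sketched in a few lines. This is a deliberately minimal linear "autoencoder" on random stand-in data: real deepfake systems train deep convolutional encoders and decoders on face images, and every size, value, and training setting below is an illustrative assumption, not an actual deepfake implementation.

```python
import numpy as np

# Toy illustration of the encoder/decoder idea behind deepfakes:
# an encoder compresses data into a low-dimensional latent space,
# and a decoder reconstructs it from that space. All numbers here
# are illustrative assumptions.

rng = np.random.default_rng(0)

n_samples, n_features, n_latent = 200, 64, 8
X = rng.normal(size=(n_samples, n_features))   # stand-in for face data

# In this toy version, encoder and decoder are just weight matrices.
W_enc = rng.normal(scale=0.1, size=(n_features, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_features))

def reconstruction_loss():
    Z = X @ W_enc            # encode: compress into the latent space
    X_hat = Z @ W_dec        # decode: rebuild from the latent code
    return float(np.mean((X_hat - X) ** 2))

loss_init = reconstruction_loss()

lr = 0.01
for _ in range(500):         # plain gradient descent on squared error
    Z = X @ W_enc
    err = (Z @ W_dec) - X
    grad_dec = Z.T @ err / n_samples
    grad_enc = X.T @ (err @ W_dec.T) / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss_final = reconstruction_loss()
print(f"loss before: {loss_init:.3f}, after: {loss_final:.3f}")
```

Training drives the reconstruction error down, which is the whole point: once a shared encoder and two per-face decoders are trained this way on real faces, decoding one person's latent code with another person's decoder is what superimposes the source's attributes onto the target.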
Democratic India is extremely vulnerable to such fake-news attacks and must act quickly to stop them, or Indians will suffer the consequences.
Deepfakes: A challenge to truth in politics
Deepfakes are a global problem and, when mixed with politics, can have spectacular or even disastrous results.
Recently, BJP member Manoj Tiwari used deepfakes to make English and Haryanvi dubbed versions of his videos to reach a larger audience.
An alleged deepfake of Gabonese president Ali Bongo was widely distributed and helped trigger an attempted military coup, the country's first in 55 years.
Evidence suggests that a fake news item including fictitious remarks from Qatar's emir may have caused the diplomatic spat between Saudi Arabia and Qatar.
Many deepfakes of president Joe Biden and former presidents Barack Obama and Donald Trump are being used to spread misinformation.
A political party in Belgium made a deepfake video depicting President Donald Trump intervening in the country's internal affairs. “As you know,” the video falsely depicted Trump as saying, “I had the balls to withdraw from the Paris climate agreement—and so should you”. A political storm erupted and died down only when the party's media team admitted to the high-tech counterfeit. A deepfake depicting President Trump ordering the deployment of U.S. forces against North Korea could trigger a nuclear war.
As a consequence, even the truth will not be believed. Nixon's taped phone calls cost him the presidency; images of atrocities from the concentration camps prompted the world to act. If the very notion of believing what you see is under attack, that is a huge problem.
More people should learn about deepfakes, since a shocking proportion of individuals fall for these videos.
What can we do about deepfakes? Can they be detected? Sometimes it is possible to detect text, images, or audio synthesized by a machine.
Is it the same for deepfakes?
Well, yes and no. In the early days, deepfakes were quite easy to catch: the videos had irregular blinking patterns, the skin tone on the face and on the neck didn't match, and so on. However, it didn't take long for these small tells to vanish as well.
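One family of detection techniques researchers have explored looks at the frequency domain: generators that upsample images tend to leave tell-tale spectral artifacts. The sketch below illustrates the idea on synthetic data; the noise "image", the nearest-neighbour upsampling used as a stand-in for a generator, and the energy-ratio statistic are all illustrative assumptions, not a working detector.

```python
import numpy as np

def high_freq_ratio(img: np.ndarray) -> float:
    """Fraction of spectral energy outside the central low-frequency band."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    ch, cw = h // 2, w // 2
    low = spec[ch - h // 4: ch + h // 4, cw - w // 4: cw + w // 4].sum()
    return float(1.0 - low / spec.sum())

rng = np.random.default_rng(1)
natural = rng.normal(size=(64, 64))        # stand-in for a real texture

# Simulate a generator's upsampling step: shrink the image, then
# enlarge it again by repeating pixels (nearest-neighbour).
small = natural[::2, ::2]
upsampled = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

r_real = high_freq_ratio(natural)
r_fake = high_freq_ratio(upsampled)
print(f"real: {r_real:.3f}, upsampled: {r_fake:.3f}")
```

The upsampled copy shows a noticeably smaller share of high-frequency energy, which is the kind of statistical fingerprint detectors hunt for. The arms-race caveat from above still applies: as soon as a fingerprint like this is published, generator authors can train against it.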
Several governments, including many U.S. states, have recognized the danger and adopted legislation to prevent the misuse of deepfakes. India, however, currently has no rules governing them. India needs to think and act smarter by bringing in legislation and training skilled cyber specialists who can stop similar false narratives from taking shape in the future.
“If AI is reaching the point where it will be virtually impossible to detect audio and video representations of individuals speaking things they never uttered…, seeing will no longer be believing.” Without reliable evidence, we will each have to decide for ourselves who or what to believe. Is that a good idea? Would it end well?