Amber Sinha is a Tech Policy Press Fellow.
This is the sixth issue of our Technology and Democracy Report, which examines the use of artificial intelligence (AI) and other technologies in the Indian general elections. For the past seven weeks, this report has provided news and analysis on the ongoing elections in India, with all but one phase now concluded. In this issue, we take a closer look at the use of WhatsApp in electioneering in India.
WATCH: WhatsApp in Indian election campaign
In our last Correspondent interview, we mentioned a new report from Rest of World that analyzed the BJP's use of WhatsApp groups in its election campaign. Based on research conducted in collaboration with the Pulitzer Center's AI Accountability Network and Princeton University's Digital Witness Lab, the report analyzes activity across BJP-affiliated WhatsApp groups in Mandi, a small town in northern India, to understand the app's role in the party's 2024 election campaign. The report notes that at least 5 million WhatsApp groups are run by the BJP in India, and that this tight-knit network allows the party to disseminate information across the country “within 12 minutes.”
While Facebook and Twitter have been the primary focus of misinformation concerns in the West, in India (as in Brazil and other countries) messaging services such as WhatsApp play a major role. WhatsApp has more than 535 million monthly active users among India's roughly 820 million internet users, and its average monthly user numbers surpassed Facebook's in September 2018.
In particular, the transfer of image and video files over WhatsApp is one of the main ways information and news spread in India. As a result, a large amount of misinformation circulates on the app in the form of misleading images and videos, often accompanied by textual claims. Such videos and images appeal to people's raw emotions more readily than text messages do, and they can be recirculated with false context or captions that have little to do with the actual image or video.
Misinformation and extremism spread on both social media platforms and WhatsApp, but the user experience on the two is very different. On platforms like Facebook, the content users see is selected and ranked by algorithms; WhatsApp simply displays messages in the order they arrive, with no algorithmic curation. User behavior is also shaped by Facebook's design: users are prompted to comment on posts, share them, and react with emojis. On WhatsApp, interactions are not structured in the same way; in a WhatsApp group, it is up to each user to decide how to engage.
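To make that contrast concrete, here is a minimal, purely illustrative sketch of the difference between a chronological message list and an engagement-ranked feed. The `Post` fields and scores below are hypothetical and do not reflect any platform's actual data model or ranking logic.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int           # arrival time (seconds since some epoch); hypothetical
    engagement_score: float  # likes/shares/comments rolled into one number; hypothetical

posts = [
    Post("Good morning message", timestamp=300, engagement_score=0.5),
    Post("Inflammatory political forward", timestamp=100, engagement_score=9.0),
    Post("Family wedding photos", timestamp=200, engagement_score=2.0),
]

# WhatsApp-style presentation: messages appear strictly in arrival order,
# so what users see depends only on who sent what, and when.
chronological = sorted(posts, key=lambda p: p.timestamp)

# Facebook-style presentation: an engagement-ranked feed pushes the most
# reacted-to content to the top, regardless of when it was posted.
ranked = sorted(posts, key=lambda p: p.engagement_score, reverse=True)

print([p.text for p in chronological])  # ordered by time
print([p.text for p in ranked])         # ordered by engagement
```

The point is simply that in the first list no intermediary decides what rises to the top; in the second, the platform does.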
Norms and conventions on WhatsApp have also evolved without the shaping influence of a personalization algorithm. Each group has its own shared identity and purpose, and violating them often provokes a backlash. Many groups have clear rules about what may and may not be shared: group administrators actively remove users who violate those rules, and members police content they feel does not belong. In this respect, WhatsApp groups are reminiscent of the closely moderated forum discussions of the web's first decade.
In a group with extended family, “good morning” messages, jokes, and funny forwards may be welcome, but messages that go against the group's political leanings may not be. Messages and forwards sent to one group do not necessarily travel easily to others, and people are quite conscious of which messages belong where.
It is also not easy to post or forward the same message to every group or contact, since it must be sent to each group or individual separately. Messages can still spread widely on WhatsApp, but their spread is harder to control or monitor. At the same time, this structure makes WhatsApp useful for mobilizing small geographic groups or communities: WhatsApp messages are often used, for example, to spread rumors that prey on communal prejudices and tip local sentiment into violence.
Groups may enforce their own content restrictions, but as late as 2018 WhatsApp itself provided no way to report abuse or misinformation. In September 2018, it finally appointed a grievance officer in India whom users can contact with concerns or complaints. The grievance officer cannot be reached through WhatsApp itself, and complaints sent by email require a digital signature.
In countries like India, WhatsApp messages are the largest source of misinformation and the hardest to track. Communication on the app is end-to-end encrypted and takes place in relatively private settings, making it much harder to trace content back to its source. Users can only see what is happening in their own groups, which greatly reduces visibility into the spread of information compared to Facebook and other platforms with more public groups and feeds.
Political campaigns can overcome these limitations by spreading content through large, layered networks of campaigners, and in this respect the BJP has an advantage in India. The Rest of World article uses the scale of the BJP's WhatsApp presence in Mandi to show how tightly the party has organized its digital network across the country.
In Mandi, the BJP runs WhatsApp groups that anyone can join. There is a hierarchy of groups, from the national level down through states, districts, and sub-districts to individual “booths” representing the communities of people who vote at the same polling station. In addition, there are groups targeted at particular demographics and interests. In Mandi, farmers can join at least two agriculture-specific WhatsApp groups, and there are groups for young people, doctors, veterans, traders, and intellectuals. Women can join the “Mahila Morcha” group, Hindi for “women's wing.” There are also groups organized around official caste and tribal classifications. Some groups are open only to BJP functionaries or members, while others are open to the general public. There are also BJP-affiliated groups that are not explicitly political: one is dedicated to keeping Mandi clean, though its administrators are BJP members.
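As a rough illustration of how such a tiered structure lets content cascade from the national level down to individual booths, here is a minimal sketch. The tier names, fan-out, and message below are simplified assumptions for illustration, not details drawn from the Rest of World report.

```python
from dataclasses import dataclass, field

@dataclass
class GroupTier:
    """One level in a hypothetical party WhatsApp hierarchy."""
    name: str
    children: list["GroupTier"] = field(default_factory=list)

def cascade(tier: GroupTier, message: str, depth: int = 0) -> None:
    """Forward a message from this tier to every tier below it."""
    print("  " * depth + f"[{tier.name}] received: {message}")
    for child in tier.children:
        cascade(child, message, depth + 1)

# A single national -> state -> district -> sub-district -> booth chain;
# in practice each level would fan out to many child groups.
booth = GroupTier("Booth group (one polling station)")
sub_district = GroupTier("Sub-district group", [booth])
district = GroupTier("District group (e.g., Mandi)", [sub_district])
state = GroupTier("State group", [district])
national = GroupTier("National group", [state])

cascade(national, "Today's campaign talking points")
```

Because each tier only has to forward to the tier below it, a message can reach every booth-level group after a handful of hops, which is what makes the claimed nationwide dissemination “within 12 minutes” plausible.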
Though WhatsApp was conceived as a private messaging service, today it is hard to think of it as anything other than a breeding ground for group conversations. All WhatsApp groups are built around common interests and connections. These can be personal (relatives, friends, weddings, vacation planning), work-related (company-wide, departmental, project-related), hobby-related (cricket, movies, quizzes) or other communities (housing complexes, alumni groups, new parents). Each group is bound together by a common identity.
An earlier report on fake news in India, commissioned by the BBC, identified this shared identity as the main reason WhatsApp groups behave like cohorts: the app connects people into homogenous, tight-knit networks of the like-minded. Common identities, connections, and beliefs leave group members prone to confirmation bias, the well-known tendency to seek out or interpret information in ways that are consistent with one's existing beliefs. This makes WhatsApp an ideal medium for mobilizing group members.
WhatsApp groups often bring together people with similar beliefs, which reinforces confirmation bias, and because the information is being shared by people one knows, it is often accepted without question.
Other developments
There has been much talk about the use of deepfakes in the ongoing Indian elections, but little rigorous research to back it up. Nilesh Christopher and Varsha Bansal attempt to fill this gap in a long-form article in Wired, shedding light on the players, practices, and trends in India's burgeoning deepfake industry. The article profiles some of the better-known companies producing synthetic video content for elections and describes how they work with major political parties such as the BJP and Congress. Beyond deepfake content intended for online dissemination, the article also discusses other uses of synthetic voice and video, such as AI-based voice calls for logistics coordination and campaigning, and how AI-based services are integrated into the BJP's Saral app used for intra-party coordination.
In our last Dispatch, we mentioned a Boom Live article on the ineffectiveness in India of X's crowdsourced fact-checking program, Community Notes. Last week, The Hindu also ran an article on the feature's inability to effectively flag misleading content posted on X by the ruling BJP. The article noted that the feature appears to have recently stopped displaying fact-check notes on controversial BJP content. This is not because users have stopped writing them: notes contesting such posts are still being proposed, but they are not being approved and so never become visible to X users. The article quoted a former X executive as saying that “X's rollout of the Community Notes feature in India without human moderators weeks before the election was expected to be deeply flawed and could be seen as simply a brand boosting move.”
In a recent report, India Civil Watch International (ICWI) and Ekō documented Meta's approval of AI-manipulated political ads that spread disinformation and incited religious violence during the Indian elections. To test Meta's mechanisms for detecting and blocking inflammatory or harmful political content during the elections, the researchers created a series of such ads and submitted them to Meta's advertising platform. According to the report, the ads were based on real examples of hate speech and disinformation already circulating in India. In total, they submitted 22 ads in English, Hindi, Bengali, Gujarati, and Kannada, of which 14 were approved by Meta's review process.
A new investigative report by Check My Ads documents how Google continues to make money from the Hindu nationalist media site OpIndia, despite the site's repeated violations of Google's policies against incitement to hatred and misinformation. Wired also covered the report in detail, documenting OpIndia's history of spreading conspiracy theories, hate speech, and misinformation, as well as its presence across other platforms, including Facebook and X.
Additional Information
- Article 19, the Center for Democracy and Technology, and eight other organizations have published a letter expressing concern about recent actions taken by the Indian central government against journalists, political opponents, and the media.
- A recent article in The Quint discusses the differences between political content on WhatsApp and YouTube.
We welcome your feedback. Is there a story or important development we've missed? Send us your suggestions, comments and criticisms at contributions@techpolicy.press.