#### Digital Aggression Against Women
Women, particularly those in political leadership, are the primary victims of AI misuse. AI-generated explicit and misleading content is disrupting their political and social activities. A recent study by Empowerment Network Bangladesh found that 72% of women surveyed had faced online harassment, with 82% of those cases involving AI-generated offensive images or videos. This not only jeopardizes personal safety but also deepens societal divisions and fuels political unrest.
#### Misinformation During National Crises
On July 10, a tragic bus accident in Savar claimed numerous lives, plunging the nation into grief. In the aftermath, AI-generated fake videos spread rapidly on social media, exacerbating public distress. Fact-checking group TruthGuard identified the videos as fabricated with AI tools, citing inconsistencies in location details and visuals. The incident underscores how dangerous AI misuse can be during national emergencies.
#### Administrative and Legal Gaps
The government has yet to implement effective measures to curb AI misuse. Rezaul Karim, an advisor at the Ministry of ICT, stated, “While the Cyber Security Act addresses AI misuse, enforcement remains weak due to limited digital awareness among the public.”
Dr. Farzana Akter, Professor of Computer Science at Jahangirnagar University, emphasized, “Developed nations are deploying advanced technologies and policies to combat AI misuse. Bangladesh must establish monitoring units and educate the public urgently.”
UK-based tech analyst Ayesha Siddique warned, “The spread of AI-driven misinformation on social media is spiraling out of control. Without stringent laws, fact-checking platforms, and media literacy programs, digital violence, especially against women, will escalate.”
#### Spread of AI-Driven Misinformation
Recent data indicates that 92% of posts on platforms like Facebook, X, and YouTube are influenced by AI recommendations. Over the past six months, misinformation has increased roughly 350-fold, intensifying political propaganda and social polarization. The easy availability of deepfake, synthetic-video, and voice-cloning tools allows even ordinary individuals to create convincing fake content, raising fears of a massive spike in digital violence before the election.
#### Legal and Policy Void
Bangladesh lacks specific legislation to regulate AI misuse, and neither the Ministry of ICT nor the Election Commission has taken proactive steps. Experts recommend criminalizing AI-generated false content, deploying advanced deepfake detection tools, and forming dedicated monitoring teams for social media oversight.
Dr. Farzana Akter added, “Without robust monitoring, misinformation will continue unchecked. The Election Commission must deploy specialized teams to safeguard the digital space.”
Ayesha Siddique noted, “Platforms like Facebook, X, and YouTube need stronger content moderation. The government must collaborate with them to enforce stricter regulations.”
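The deepfake detection tools and monitoring teams the experts call for would, in practice, pair automated screening with human review. The sketch below is a minimal illustration of that idea, not a description of any system mentioned in this article: it assumes a hypothetical Hugging Face model ID (`example-org/deepfake-detector`) and a label convention in which synthetic images are tagged "fake"; both are placeholders, since no specific tool is named here.

```python
# Minimal monitoring sketch: score incoming images with an image-classification
# model and escalate likely synthetic media to human fact-checkers.
# NOTE: "example-org/deepfake-detector" is a hypothetical model ID used only for
# illustration; a real monitoring unit would substitute a vetted detection model.
from transformers import pipeline

FLAG_THRESHOLD = 0.8  # escalate anything rated >= 80% likely synthetic

detector = pipeline("image-classification", model="example-org/deepfake-detector")

def review_queue(image_paths):
    """Return (path, score) pairs that should be escalated to human reviewers."""
    flagged = []
    for path in image_paths:
        results = detector(path)  # list of {"label": ..., "score": ...}
        # Assumes the model tags synthetic images with a "fake"-style label.
        fake_score = max(
            (r["score"] for r in results if "fake" in r["label"].lower()),
            default=0.0,
        )
        if fake_score >= FLAG_THRESHOLD:
            flagged.append((path, fake_score))
    return flagged

if __name__ == "__main__":
    for path, score in review_queue(["suspect_post.jpg"]):
        print(f"Escalate {path}: synthetic-media score {score:.2f}")
```

The design point mirrors the experts' recommendation: automation only narrows the review queue, while final judgments about political content remain with trained reviewers and fact-checking platforms.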
#### Recommendations for the Future
Experts call for a multi-pronged approach to address AI misuse:
- Enact and enforce stringent laws
- Deploy advanced fact-checking and deepfake detection technologies
- Establish monitoring teams with tech experts
- Promote digital and media literacy among the public
- Strengthen coordination with social media platforms
A Jahangirnagar University professor warned, “The rise of AI-driven misinformation could push Bangladesh into chaos before the election. Immediate and coordinated action is critical to prevent a digital disaster.”
#### Conclusion
The misuse of AI technology in Bangladesh is escalating into a profound threat to social stability, political integrity, and human rights. Without swift government intervention and widespread public awareness, the upcoming election and the nation’s future could be severely undermined, paving the way for a catastrophic digital crisis.