
How AI Tools Are Being Misused to Fuel Gender-Based Violence

March 17, 2026

Content note: This post discusses image-based sexual abuse, technology-facilitated harassment, and surveillance. If you or someone you know needs support, contact MOSAIC’s Violence Prevention Programs or BC Victim Services at 1-800-563-0808.

This blog post is contributed by Kriti Thapa, Women’s Support Worker and Scam Awareness Coordinator with MOSAIC’s Violence Prevention Programs. Drawing from her work supporting women and newcomers navigating online safety risks, Kriti shares insights on how emerging technologies like generative AI are shaping new forms of technology-facilitated gender-based violence.


The International Women’s Day 2026 theme, “Give to Gain,” calls us to share knowledge, resources, and advocacy with one another. One of the most urgent things we can share right now is awareness about technology-facilitated violence against women and girls (TF-VAWG) and how generative AI tools are being used to harm women, gender-diverse people, and youth.

For those working in community and nonprofit organizations, knowing about these risks is important. Many clients are directly affected, and understanding the issue helps us advocate for safer AI use, stronger protections, and accountability from tech companies and policy makers.

 

What is TF‑VAWG and why does generative AI matter?

Technology-facilitated violence against women and girls (TF-VAWG) refers to violence or harassment carried out through digital tools, causing physical, emotional, sexual, financial, or social harm. The UN explains that these acts often stem from deep power imbalances and sexism. As AI technology grows rapidly, new forms of abuse involving deepfakes and image-based harm are spreading faster.

The UN and global partners are now working to track how common TF‑VAWG is and what forms it takes in different countries, showing that this problem is serious and must be measured consistently.


Join us on March 26, 2026 for a day of connection, support, and care, with legal guidance and creative wellness activities. Reserve your spot today.


From online abuse to AI‑made deepfakes

In early 2026, investigations revealed that some AI tools were being used to make sexually explicit and fake images of women and minors without consent. In Canada, the Privacy Commissioner announced it was reviewing concerns related to AI image generation and privacy protections on platforms such as X (formerly Twitter) and its AI system Grok.

Government departments in Canada have been discussing how to respond to this misuse. In BC, the Intimate Images Protection Act protects people whose private sexual photos or videos are shared by perpetrators without consent, and individuals may be able to seek protection through a tribunal or court process.

Similar actions are happening worldwide: Spain has launched legal investigations, and media outlets including CNET and TIME have reported growing pressure on AI companies to strengthen their safety rules.

Public‑health experts remind us that TF‑VAWG isn’t just a tech issue; it is also a serious health and safety problem that needs prevention, survivor‑support services, and better data and policies.

Workplace risks: Harassment and AI misuse

AI-generated sexualized images can also create risks in workplace settings. Reports in 2026 warned that deepfakes targeting co-workers may contribute to workplace harassment and can create legal and reputational risks for employers, particularly those without clear policies or reporting processes in place.

Human‑resources experts now recommend that organizations update their policies to prohibit the creation or sharing of AI‑generated sexual or demeaning content, and to establish clear reporting and investigation steps.

Additional risks

Beyond deepfakes, AI can also be used to spread scams and misinformation, or to deploy stalkerware: software secretly installed on a person’s device to monitor their location, messages, and calls without their knowledge. The misuse of AI tools can contribute to harm for vulnerable people and communities, including immigrant, newcomer, refugee, and other women with precarious status.

As conversations about generative AI grow, it is also important to consider how these technologies affect the communities around us. The large data centres that power AI systems require significant energy and water to operate. In some regions, facilities use millions of litres of water daily for cooling, which can place pressure on local water supplies, particularly in areas already experiencing drought. These environmental impacts are increasingly part of broader conversations about responsible technology and the wellbeing of communities.

What we can do now

For individuals and community members

  • Use AI tools thoughtfully. Be cautious when uploading personal photos, documents, or sensitive information to AI tools, as these may be stored, used for training, or exposed in a data breach. This is especially important for clients in vulnerable situations.
  • Include image‑based abuse in safety planning. Talk with clients about deepfakes, surveillance apps, doxxing, and non‑consensual sharing. Help them document evidence (screenshots, URLs, dates) and seek support from trusted organizations or legal authorities where appropriate.
  • Share concerns and stay informed. When safe to do so, raising awareness about these issues in community conversations can help bring attention to emerging risks.

For community organizations and service providers:

  • Advocate for safer platforms. Support practices and policies that require AI tools to have safety checks, strong limits on sexualized imagery, fast removal processes, clear consequences for violations, and independent oversight from civil society or external auditors.
  • Update workplace policies. Make sure harassment policies cover AI‑made content. Train staff to spot deepfakes, save evidence, and respond with care.
  • Support better laws and data. Work with groups pushing for clear laws on deepfakes and better protection for survivors.
  • Promote digital awareness. Thinking critically about images, news, and videos, and checking them against reliable sources, can help reduce harm and misinformation.

At MOSAIC, services provided by the Violence Prevention Programs support women and newcomers in navigating online safety risks and connecting with resources when technology-facilitated harm occurs.


Honouring International Women’s Day at MOSAIC

We believe that community connection is itself a form of safety. In recognition of International Women’s Day, MOSAIC is hosting a community event on March 26, 2026, bringing together community members for a day of shared experiences, creative activities, and access to legal information.

Participants will have the opportunity to book 20-minute private consultations with a family lawyer from Sinclair Centre Law LLP to ask questions and receive guidance on family law matters.

Spots are limited; reserve yours today.

Written By: Kriti Thapa