What is image-based abuse?
A recent study from the American Sunlight Project found that about 1 in 6 congresswomen have been the victims of sexually explicit “deepfakes.” Deepfakes are AI-generated images that appear real and can be difficult to distinguish from genuine photos. In this case, they depicted victims in “compromised, intimate situations” without their consent.
Image-based abuse can include sharing these AI-generated deepfakes of people in compromising situations, as well as sharing real images that were intended to stay private. This kind of abuse, sometimes referred to as “revenge porn” (sharing sexually explicit images or videos of someone online without their consent), isn’t just affecting politicians: in 2020, an estimated 1 in 12 Americans (and 1 in 3 people in the UK and Australia) were victims of image-based abuse. That number has likely grown over the past five years.
With the ever-expanding capabilities of generative AI, it’s easier than ever for abusers to use these tools to create and distribute fake images of people without their consent. Anyone can be impacted by image-based abuse. For trans folks in particular, image-based abuse may be part of broader anti-trans tactics to create fear and cause physical and emotional harm.
How does image-based abuse impact trans people?
While we don’t have clear data on trans and nonbinary peoples’ experiences with image-based abuse, we know that trans communities experience high rates of other forms of technology-facilitated abuse. Data from the Cyber-Abuse Research Initiative tells us that 9 in 10 trans/nonbinary people have experienced some form of technology-facilitated abuse, such as cyberstalking or online harassment.
When we think about image-based abuse, we may think of sexual images being shared first. But for trans and nonbinary people, sharing pre-transition photos is another form of privacy violation that may be just as harmful to someone’s mental health and physical safety. Outing someone in this way, especially if they are not public about their trans identity, could put them at risk of losing their job or housing, or experiencing physical harm.
Fae Johnstone (they/she), a trans activist who was featured in a Hershey’s ad for International Women’s Day, spoke out about their experience with cyberstalking and technology-facilitated abuse in response to the ad:
“I was inundated with hate mail and online harassment. My privacy was violated. My deadname was leaked, pre and early transition photos shared. Far right groups dug into my family and my personal life for anything they could use to throw mud at me.”
What Johnstone describes is not only a horrifying example of online harassment and cyberstalking, but also image-based abuse. Stalkers shared pre-transition photos of Johnstone, which had the potential to put their safety at risk.
The impact is huge. Johnstone writes:
“Words cannot convey the mental health impact of being turned into an object to be loathed & vilified, simply for being yourself + speaking up for your community.”
Trans people may experience image-based abuse in the forms we’ve seen primarily target cisgender women: leaking sexual images or sharing AI deepfakes created and distributed without the victim’s consent. Trans people may also face anti-trans cyber-abuse tactics that attempt to out someone against their will, cause emotional harm, or direct further hate or abuse toward them.
Whether images shared are sexual or not, no one deserves to have their privacy violated. Image-based abuse, along with all forms of cyber-abuse, contributes to lasting harm to individuals and communities.
What can we do about it?
It can be scary to read about the prevalence of image-based abuse. Many of us probably know someone who has been targeted, or have at least heard of a high-profile person being targeted. Many of us are also struggling with additional fear about the incoming administration and how trans communities may be further impacted by anti-trans hate.
But there ARE actions we can take to protect ourselves and our communities from image-based abuse.
Many of the strategies we can use to protect ourselves from image-based abuse are strategies we’re already using to avoid anti-trans online hate. Some of these include reporting, blocking, and documenting harassment when it happens, adjusting our privacy settings to restrict who can access our social media accounts, and seeking connection offline.
The National Domestic Violence Hotline also suggests the following security tips:
- Select the strongest security settings on all your devices; use strong, unique passwords; and enable multi-factor authentication (MFA).
- Physically cover your webcams when not in use, whether with a commercial webcam cover, a sticky note, or a piece of tape.
- If possible, do not make your friends or contacts list visible or accessible to others. This list is often the first thing that perpetrators copy for blackmail purposes.
  - How to do this on Facebook: https://www.facebook.com/help/115450405225661/
  - How to do this on Instagram: https://www.wikihow.com/Hide-Followers-on-Instagram
- Lock and/or log off devices when not in use.
- If you create intimate images of yourself, try to leave out details that may be used to identify you. That includes your face, unique tattoos, and scars, as well as unique furnishings, décor, jewelry, or other personal items.
If you are currently experiencing image-based abuse and are in a safe place to do so, the Safety Center on the Cyber Civil Rights Initiative (CCRI) provides guidance on how to document the abuse, request that images be removed, and get legal support. You do not have to take any step that feels unsafe or uncomfortable, but you may find it helpful to know your options.
Outside of taking direct action against the abuse, you may need other forms of emotional support. This blog post includes a list of helplines for trans and nonbinary folks that will not contact law enforcement without the caller’s consent. Here are some options for quick access:
- BlackLine: 1-800-604-5841
- Trans Lifeline: 1-877-565-8860
- TrevorLifeline: 1-866-488-7386
- RAINN: 1-800-656-4673
If you are not being directly impacted by image-based abuse, there are still ways to get involved in prevention. Start conversations in your community about image-based abuse. Share resources on social media about what to do if you or someone else is targeted. Learn about the DEFIANCE Act or other legislation that would hold perpetrators accountable.
Support those you know who experience image-based abuse. Too often, victims and survivors are blamed for the harm done to them. One way we can offer support is to remind others that the perpetrator and anyone who shares the image(s) are to blame, not the person whose image is being shared. You can also help a friend assess their online safety or follow any of the tips above for reporting abuse.
–
While taking precautions with your online security may help reduce your risk, remember that you are never at fault for experiencing image- or technology-based abuse. Sharing photos with someone privately does not mean you are consenting to their being shared publicly. Posting selfies online does not give anyone permission to use your photos to create harmful AI images. It is always the perpetrators of abuse who are causing the harm.