If you’re active on social media, you’ve probably noticed friends, celebrities, and favorite brands turning into action figures using ChatGPT prompts.
Artificial intelligence chatbots like ChatGPT have recently evolved beyond helping you brainstorm writing ideas: they can now generate realistic doll images.
Upload a photo of yourself and instruct ChatGPT to design an action figure, complete with accessories, based on that image, and the tool will produce a plastic-doll version of you that resembles a typical toy in its packaging.
Though the AI action figure phenomenon initially gained traction on LinkedIn, it has since gone viral across social media sites. Actor Brooke Shields, for instance, recently shared a picture of her action-figure likeness on Instagram, complete with a needlepoint kit, shampoo, and a Broadway ticket.
Proponents of this trend argue, “It’s enjoyable, free, and extremely simple!” However, before you post your own action figure for everyone to see, experts caution you to consider potential data privacy risks.
One significant drawback? Sharing extensive information about your interests can make you more vulnerable to hackers.
The more you reveal to ChatGPT, the more detailed your action figure “starter pack” becomes — which poses a considerable immediate privacy risk if shared on social media.
In my own prompt, I uploaded an image of myself and asked ChatGPT to “Create an action figure toy of the individual in this picture. The figure should be full-bodied and showcased in its original blister pack.” I mentioned that my action figure “always includes an orange cat, a cake, and daffodils,” reflecting my hobbies of cat ownership, baking, and gardening.
However, these action figure accessories can disclose more about you than you might be comfortable revealing publicly, noted Dave Chronister, the CEO of cybersecurity firm Parameter Security.
“It’s risky to showcase ‘Here are three or four things I’m currently most passionate about’ to the public, because that could make you a target,” he explained. “Social engineering attacks remain the easiest and most prevalent method for attackers to prey on individuals and employees.”
Hackers often exploit heightened emotions to cloud your rational thinking. These attacks are more effective when the attacker knows what will trigger your anxiety or excitement, leading you to click on unsafe links, according to Chronister.
For example, if you disclose that one of your action figure accessories is a U.S. Open ticket, a hacker might craft an email that makes it easier to trick you into giving away your banking and personal details. In my case, if a bad actor personalized their phishing attempt around opportunities for orange-cat fostering, I could be more inclined to engage than I would with a generic scam email.
So perhaps you should, like me, reconsider joining this trend of sharing personal hobbies and interests on well-known networking platforms like LinkedIn, where job scammers often roam.
An even larger concern is the normalization of sharing so much personal information with AI systems.
Another potential data hazard is how ChatGPT, or any AI-based image generation tool, may capture your photo for storage and future model training, according to Jennifer King, a fellow in privacy and data policy at the Stanford University Institute for Human-Centered Artificial Intelligence.
She pointed out that with OpenAI, which developed ChatGPT, you need to explicitly opt out and instruct the tool to “not train on my content” so that anything you upload or type won’t be utilized for further training purposes.
However, many users are likely to stick with the default setting and never disable this feature, because they may not realize the option exists, Chronister observed.
Why might sharing your images with OpenAI be a problem? The long-term consequences of OpenAI training its models on your likeness are still unknown, and that uncertainty is itself a privacy concern.
OpenAI states on its website: “We don’t use your content to promote our services or create advertising profiles about you — we use it to enhance our models’ capabilities.” Nevertheless, the specifics of how your images contribute to future improvements aren’t clearly outlined. “The concern is that you simply don’t know what occurs after you provide the data,” remarked King.
“Consider whether you’re comfortable with assisting OpenAI in building and profiting from these tools. Some individuals may be fine with this, while others may not,” King said.
Chronister described the AI doll trend as a “slippery slope” because it normalizes sharing personal information with companies like OpenAI. You might think, “What’s the harm in sharing a little more data?” and eventually find yourself disclosing something that should remain private, he warned.
Contemplating these privacy issues can take some of the fun out of seeing yourself as an action figure. Nonetheless, it’s the kind of risk assessment that helps keep you safe online.
The real cost of AI memes
This trend is good fun, but it raises some concerns.
To begin with, AI image generation comes with its own costs. There is, of course, the fee for a ChatGPT Plus subscription (approximately $20 / £16 / AU$30 per month), though you can create about three images daily on the free plan, depending on demand. More critically, there’s the environmental expense associated with AI models like GPT-4o.
A report from Queen’s University Library notes, “Artificial Intelligence models utilize a significant amount of water and release substantial amounts of carbon during their creation, training, functioning, and upkeep.” Another study, from Cornell University, highlights AI’s growing consumption of freshwater, stating that “training the GPT-3 language model in Microsoft’s advanced U.S. data centers can directly evaporate 700,000 liters of clean freshwater.”
If you think these AI trends and the memes they generate aren’t attracting widespread attention, stressing our systems, and potentially depleting natural resources, look at the remarks from OpenAI CEO Sam Altman.
We have a saying in my home that every time we produce one of these AI memes, a tree is harmed. That’s an exaggeration, of course, but it’s fair to say that AI content creation carries its own costs, and perhaps we should reconsider how we think about and use it.
That raises another issue: with so many users employing AI mainly for trendy image creation, are they missing its real purpose?
AI is advancing so rapidly that it could soon match or exceed human intelligence (Artificial General Intelligence could arrive as soon as next year, by some predictions). Meanwhile, we’re all using it to craft amusing movie posters, only to be surprised when AI replaces our jobs.
The better path into AI, and perhaps the way to keep some control of the conversation, is to use it as a practical tool. It might not be as entertaining, but ChatGPT, Gemini, Copilot, Apple Intelligence, and the upcoming Alexa+ all include numerous tools that enhance everyday activities, such as writing, creating outlines, refining presentations, and summarizing lengthy articles. Yes, they can generate complete images and short videos, but tools like Adobe Firefly offer more utility for fine-tuning existing images in subtle ways, like removing backgrounds or distracting elements.
People do use models like GPT-4o for these tasks, but that everyday work is overshadowed by the surge of AI trend images and memes, such as the ones featuring my lifelike action figure in plastic packaging.
It is kind of impressive, isn’t it?