From Dolls to Deepfakes: How AI Can Invade Your Privacy and Turn Against You

If you’ve been active on social media recently, you may have noticed a flood of dolls and action figures in your feed: friends and relatives rendered as cartoonish toys, sealed in blister packaging complete with their favorite accessories. It’s the latest social media craze powered by artificial intelligence.

This trend involves using generative AI tools such as ChatGPT to design dolls or action figures along with their accessories. Although it offers an entertaining method for expressing yourself to friends, cybersecurity professionals caution about potential privacy issues.

“It’s fun, but when you upload a really high-quality photo to a company that helps you build these images or dolls, you’re essentially granting it an irrevocable license to use your image,” explained Rishabh Das, an assistant professor specializing in emerging communication technologies at Ohio University in Athens.

He mentioned that you can never predict when a firm might get acquired by another entity, leading to the transfer of those licenses.

He has closely monitored advancements in artificial intelligence and their potential to optimize processes, improve communications, and increase internet safety. However, he has also observed how malicious individuals have exploited these technologies for nefarious purposes.

As the technology for mimicking a person’s appearance has advanced, deepfakes have become more prevalent. These tools build a simulation of someone from their images, videos, or snippets of audio.

“A mere handful of seconds from a video, image, or audio clip of yours can be sufficient for criminals to recreate your actions using deepfakes,” Das stated.

He mentioned that social media is also a place from which a significant amount of information can be gathered.

“We enjoy posting and sharing our life experiences with friends, but sadly, criminals exploit this to gather information about us,” he stated.

At present, there is no specific federal legislation safeguarding your data from exploitation by scammers or preventing that information from being transformed into a deepfake without your knowledge.

At the Better Business Bureau of Central Ohio, deepfake scams are reviving the classic “Grandma! Help!” cons, but now scammers are employing AI-generated voices that mimic a loved one’s tones to make their deceptions even harder to resist.

“It’s the same strategy: ‘I need funds for bail, I’m incarcerated, I have to cover legal fees,’” explained Lee Anne Lanigan, investigative director at the BBB of Central Ohio. “Feel free to end the call whenever you want. You can reach out to someone else instead. There’s absolutely no obligation to be courteous with them. Just hang up and call your trusted contact. They’ll answer and reassure you that everything is okay.”

She mentioned that certain families have devised secret codes or expressions to verify their true identity when someone claiming to be part of the family asks for things they wouldn’t typically request over the phone.

Consumer Reports examined the utilization of generative artificial intelligence and deepfake technology and discovered that completely removing your digital trail from the internet is almost impossible.

Nevertheless, the simplest methods to safeguard yourself and your loved ones from deepfake scams involve being cognizant of their existence, activating two-step verification for all monetary accounts, and simply trusting your instincts. If something seems amiss, it likely is.
