AI Trends Spark Privacy Concerns, Experts Warn

If you’ve been active on social media recently, you may have noticed doll or action figure images of friends and relatives: cartoonish figures sealed inside packaging, complete with their favorite accessories. The images are the latest social media trend powered by artificial intelligence.

This trend encourages individuals to utilize generative AI tools such as ChatGPT to design dolls or action figures along with their accompanying accessories. Although it offers an entertaining method for expressing yourself to your peers, cybersecurity specialists caution about potential privacy issues.

“It’s enjoyable, however, when you upload a really high-quality photo to a firm assisting with constructing these images or avatars, you’re granting them an irreversible permit to utilize your image,” explained Rishabh Das, an assistant professor specializing in emerging communication technologies at Ohio University in Athens.

“You can’t predict when that firm might get purchased by a new entity, at which point those licensing rights could shift as well,” he explained.

He has closely monitored advancements in artificial intelligence and their potential to optimize processes, improve communications, and increase online safety. However, he has also observed how malicious individuals have exploited these technologies for harmful purposes.

As technology for replicating someone’s appearance has advanced, deepfakes have become more prevalent. These tools create simulations of an individual by using images, videos, or snippets of audio.

“A few seconds of video, a photo, or an audio snippet can be enough for criminals to recreate your likeness using deepfakes,” Das stated.

He mentioned that social media is also a source from which much of this information can be harvested.

“We enjoy posting and sharing our life experiences with friends, but sadly, criminals take advantage of this to gather information about us,” he stated.

At present, there is no specific federal legislation safeguarding your data from exploitation by scammers or preventing unauthorized creation of deepfakes using that information.

At the Better Business Bureau of Central Ohio, deepfake scams are reviving the classic “Grandma! Help!” cons, but now fraudsters are employing AI-generated voices that mimic a beloved family member’s tones to make their schemes even more persuasive.

“It’s the same strategy: ‘I need funds for bail, I’m incarcerated, I have to cover legal fees,’” explained Lee Anne Lanigan, investigative director at the BBB of Central Ohio. “Feel free to end the call whenever you want. You can reach out to someone else instead. There’s absolutely no obligation to be courteous with them. Just disconnect. Contact your trusted contact. They will answer and assure you that everything is okay.”

She mentioned that certain families have created secret codes or expressions to verify their true identity when someone claiming to be part of the family asks for things they wouldn’t typically request over the phone.

Consumer Reports examined the utilization of generative AI and deepfake technology and discovered that completely removing your digital trail from the internet is almost impossible.

Nevertheless, the simplest methods to safeguard yourself and your loved ones from deepfake scams involve being conscious of their existence, activating two-step verification on all monetary accounts, and simply trusting your instincts. Should something seem amiss, it likely is.
