A hot potato: Deepfakes rely on AI to create highly convincing images or videos of someone saying or doing something they never actually said or did. Some examples produced for entertainment purposes amount to harmless fun, but other people are using the tech for nefarious purposes.
In a recent public service announcement from the FBI's Internet Crime Complaint Center (IC3), the agency warned of an increase in the number of complaints received regarding the use of deepfakes and stolen personal information to apply for remote and work-from-home jobs.
While deepfakes have come a long way in a relatively short period of time, there are still some rough edges that attentive employers can occasionally pick up on. During live online interviews, for example, the lip movements of the person being interviewed aren't always in sync with the audio of the voice being heard. Likewise, actions like coughing or sneezing that don't match what is seen on screen are another indicator that something fishy is going on.
The FBI said the positions applied for in the reports included information technology and computer programming, database, and software-related job functions. Some of these positions would grant the applicant access to customer personally identifiable information, corporate financial data, IT databases, and/or proprietary information, all of which could be valuable on the black market.
Companies and individuals victimized by this sort of activity are encouraged to report it to the FBI's IC3 division.
Image credit: Anna Shvets