Approx. read time: 4.1 min.
Post: Deepfakes and the Legal Labyrinth: Navigating the Challenges of AI-Generated Scams in Canada
The authenticity of this article is not in question, but AI-generated deepfakes that bear a striking resemblance to reality are being used to deceive people, and experts caution that artificial intelligence is advancing faster than the legal system can adapt. Consider the astonishing premise: Canadian celebrities such as TV chef Mary Berg, singer Michael BublĂ©, comedian Rick Mercer and hockey legend Sidney Crosby are supposedly sharing their financial success secrets, while the Bank of Canada’s efforts to silence them have been in vain.
Naturally, none of these stories are true. Instead, they represent a deceptive tactic employed by scammers on social media, who lure people with eye-catching posts—depicting Berg being arrested or Bublé being escorted away by authorities—only to lead them to a seemingly credible news article on the CTV News website.
Those who delve deeper into what seems to be an article generated by AI will find numerous links, approximately 225 on a single page, encouraging them to invest $350 with the promise of a tenfold return in just a week. This strategy marks the latest in a series of deepfake advertisements, articles, and videos that misuse the identities, images, recordings, and even voices of well-known Canadians to promote investment or cryptocurrency ventures.
Legal experts specializing in deepfake and AI-generated content highlight the current lack of effective legal measures, pointing out that Canadian legislation has not kept pace with technological advancements.
Financial frauds exploiting celebrities are hardly novel, but cutting-edge generative AI adds a new twist to an age-old scheme, notes Molly Reynolds, a partner at Torys LLP in Toronto. She predicts the situation will get worse before it gets better, pointing to the ongoing struggle to develop tools and legislation capable of preventing such scams, a battle that is seemingly being lost.
Detecting deepfakes is becoming increasingly challenging. WonSook Lee, a computer science professor at the University of Ottawa, observes that while some AI-generated content is easily identifiable, the quality of certain programs has reached a level where distinguishing real from fake is much more difficult. Even when imperfections exist, they can be rectified with photo and video editing software, Lee adds, underscoring the continuous improvement of AI technology.
The platform X has managed to reduce the frequency of scam ads featuring Canadian celebrities by suspending some accounts responsible for sharing them. However, CBC News’s attempt to reach out to X Corp.’s spokesperson only resulted in an automated reply.
Platforms like X face the dilemma of balancing moral and legal responsibilities, with limited legal obligations to remove fraudulent content, as explained by Reynolds. This situation leaves affected individuals without substantial legal support or assistance from technology companies, unlike high-profile individuals such as Taylor Swift, who can leverage their significant social influence.
Following the spread of sexualized AI-generated images of Taylor Swift, swift action by social media platforms and legislative proposals by U.S. lawmakers highlighted the potential for rapid response to such issues. Nonetheless, Reynolds emphasizes the broader harm that can arise from unauthorized use of one’s likeness, not just in cases of non-consensual, sexualized imagery.
Efforts to secure interviews with Mary Berg and Rick Mercer regarding the misuse of their images were unsuccessful. Pablo Tseng, a Vancouver-based intellectual property lawyer at McMillan LLP, asserts that the law recognizes the wrongful use of an individual’s image, regardless of their fame, presenting the potential for legal action.
Despite the absence of specific Canadian legislation targeting deepfakes, existing legal frameworks could offer avenues for recourse. However, pursuing legal action involves a lengthy and costly process, though it may ultimately prove beneficial, as seen in a recent class-action lawsuit against Meta.
Attributing responsibility for deepfake scams will only grow harder as generative AI advances, making the task of identifying perpetrators nearly insurmountable, according to Lee. The widespread availability of AI research and source code compounds the problem, allowing sophisticated programs to be built without traceable origins.