OpenAI's AI system raises deepfake concerns

OpenAI's new artificial intelligence (AI) system, Sora, is raising concerns about the spread of deepfakes, as well as questions about copyright for AI-generated content. Experts are assessing how convincingly the system can generate videos that simulate reality.

OpenAI acknowledges Sora's limitations, such as difficulty in accurately simulating the physics of a complex scene and in understanding specific instances of cause and effect. The company highlights, however, Sora's ability to create complex scenes with multiple characters and to generate specific movements and fine detail in settings and backgrounds.

However, OpenAI also released videos showing Sora's limitations, such as scenes that don't make sense and unusual sequences. Experts point out that the videos presented by the company are probably just a few among thousands generated before a presentable result was achieved, notes O Globo. It is not yet known when Sora will be made available to all users, or whether it will be free or paid.

[The model] may have difficulty accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person may take a bite of a cookie, but the cookie may later have no bite mark.

OpenAI, about Sora

Sora Risks

  • The executive director of the Institute of Technology and Society of Rio de Janeiro (ITS-Rio), Fabro Steibel, notes that creating these videos is not as simple as it seems: many outputs must be generated to obtain a single high-quality one;
  • OpenAI says Sora will have safeguards against violent, sexual and celebrity content, and will embed a cryptographic marker indicating that the content was generated by AI;
  • However, experts warn that such markers will not solve the deepfake problem, since they can be removed or faked;
  • The rapid advance of these synthetic-media systems makes it necessary to develop tools for identifying AI-generated content.
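The experts' warning that markers can be removed can be illustrated with a minimal sketch. This is a hypothetical simplification, not OpenAI's actual scheme: real provenance standards such as C2PA embed cryptographically signed metadata in the media container, but the weakness is the same — if the marker lives in metadata rather than in the pixels themselves, re-encoding or stripping the metadata discards it.

```python
# Toy illustration of why metadata-based provenance markers are fragile.
# The marker byte string "c2pa" here is a stand-in for a real embedded
# provenance manifest; it is NOT the actual C2PA wire format.

def has_provenance_marker(data: bytes, marker: bytes = b"c2pa") -> bool:
    """Naively check whether a provenance marker appears in the file bytes."""
    return marker in data

def strip_metadata(data: bytes, marker: bytes = b"c2pa") -> bytes:
    """Simulate what re-encoding or metadata stripping does: the marker
    is simply no longer present in the output bytes."""
    return data.replace(marker, b"")

# A fake "video file": payload bytes with an embedded marker.
video = b"\x00\x01video-payload" + b"c2pa" + b"\x00trailer"
print(has_provenance_marker(video))                  # marker present
print(has_provenance_marker(strip_metadata(video)))  # marker gone
```

This is why detection tools that inspect the content itself (rather than trusting attached metadata) are considered a necessary complement to embedded markers.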

We already knew that the capabilities of these systems would increase, and that GPT-3 and GPT-4 were the beginning of an improvement curve. As part of that process, the risks we saw in previous generations of AI are repeated or accentuated. Discussions about the creation of false or violent content, and the risks of generating content from copyrighted material, are appearing again.

Francisco Brito Cruz, executive director of InternetLab, in an interview with O Globo

The lack of regulation and the resulting legislative vacuum leave open questions about the legality and ownership of AI-generated output.

The launch of Sora shows that producing deepfakes will become increasingly easy, underscoring the need for legislative parameters before that happens. Big tech companies, including OpenAI, Microsoft and Google, have committed to curbing possible abuse of AI in electoral contexts.

However, the issue of copyright is likely to generate more controversy as these systems advance.
