A picture of Pope Francis in a fashionable long puffer jacket has created the latest Internet storm and raised questions about the development of artificial intelligence technologies. This is not the only piece of AI-related news that has graced the pages of newspapers and magazines — from
protests against AI-generated art to a
New York Times columnist's worrisome conversation with the Bing chatbot, AI has entered our daily lives in increasingly complex ways.
The coverage of the AI-generated picture of Pope Francis highlights that AI is developing at a rapid pace. In fact, experts suggest that the best AI-generated images
have become nearly impossible to distinguish from real ones. It takes a close inspection of the pictures to notice the skewed shadows or smudged facial features that are the most common indicators of both Photoshop-edited and AI-created images.
With the already heightened presence of fake news and propaganda materials online, the issue of AI-generated content, which may be indistinguishable from real materials, becomes central to the conversation about media literacy. The Pope Francis image may be a fairly low-stakes incident, but other artificially created images have the potential to cause social disturbances. Such is the case with the recent
fake images of Donald Trump’s arrest. Following this incident,
AI experts warn that while digitally altered images are nothing new, the speed at which the technology is developing and the possibility of misuse by major media outlets are real threats.
Not every image created with artificial intelligence is or will be published for nefarious purposes. Many are generated for comedic effect, as the creator of the Pope Francis image told
BuzzFeed in the only interview he has given so far, or simply for aesthetic purposes. Many defend AI images by calling them
a new form of art. However, there are already many disputes over whether they can really be called art and, if so, who holds the rights to the work. Recently, the U.S. Copyright Office ruled that
AI-generated images are not protected by copyright law and fall into the public domain. What remains an open issue, however, is the copyright on the images used to train these programs.
Out of fear of both copyright issues and programs going rogue and breaking their protocols, AI experts from the Future of Life Institute, an organization backed by tech executives including Elon Musk, have issued an
open letter urging at least a six-month pause in the development of artificial intelligence. Their criticisms have been met with doubt, since the people behind the letter are themselves developing such technologies and are simply against their public use. Advancing the technology behind closed doors may prove more harmful than having it publicly available,
say educators and university researchers.
Yana Peeva is Senior Columns Editor. Email her at feedback@thegazelle.org