The chat about AI

The subject of artificial intelligence has garnered a great deal of attention over the last few months, with AI text and image generators and voice reproductions sweeping across social media, and AI pioneer Geoffrey Hinton recently leaving Google while voicing regrets about his role in pushing the technology forward.

With the understanding that developments in AI don’t exactly mean that a robot uprising will happen anytime soon, it’s worth considering what the technology is capable of at this point.

As with many developments in technology, AI discussions can often involve plenty of exaggeration, so it’s important for those not typically in the loop to understand what the technology really looks like.

An article from Los Angeles Times business columnist Michael Hiltzik titled “The artificial intelligence field is infected with hype. Here’s how not to get duped” delved into this issue.

In the October 2022 article, Hiltzik described how AI researchers and other experts generally consider this technology to simply be a tool for humans to use – comparable to a truck or a telescope.

Because AI is such a broad term, it’s easy to say people interact with it every day. Spell check, facial recognition and social media or search algorithms are all, in a sense, AI.

While certain hopes and fears about AI changing the world could well be realized in the future, Hiltzik notes these sorts of promises are, for the moment, largely just based on hype and marketing.

The current technology known as generative AI, such as text generation software like ChatGPT, grew out of large language models, which in turn grew out of the neural networks that rose to prominence in the early 2010s.

New York Times columnists Kevin Roose and Cade Metz discussed this development in their April 2023 series on AI.

The term neural network essentially describes a system that has been fed a tremendous amount of information and, in the process, has learned to recognize certain patterns within that information.

Roose and Metz note how, after viewing thousands of cat pictures, a neural network might be able to identify other pictures which contain a cat. They also mention that this technology, when applied to audio, is what allows software like Apple’s Siri to recognize its user’s voice.

Large language models were the next step, with large tech companies feeding the vast amounts of text available across the internet into these neural networks.

The current generative AI followed, with neural networks applying that pattern recognition to produce text, pictures, audio and even video.
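To make that idea a little more concrete, here is a deliberately tiny sketch in Python. It counts which words follow which in a short sample of text and then strings new text together from those counts. Real large language models rely on neural networks trained on vastly more data, but the basic notion of producing text by predicting a plausible next word from patterns in past text is similar.

```python
import random
from collections import defaultdict

# Toy illustration: learn which words tend to follow which in a sample text,
# then generate new text from those learned patterns. This is not how systems
# like ChatGPT actually work internally, but it captures the idea of producing
# text by predicting a likely next word.
sample_text = (
    "the cat sat on the mat and the cat chased the dog "
    "while the dog slept on the mat near the door"
)

words = sample_text.split()
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)

# Start from a word and repeatedly pick one of the words that followed it
# in the sample text.
word = "the"
output = [word]
for _ in range(12):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```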

Some of the results of this technology are already quite prominent, with so-called AI artists developing a presence on social media and voice replication resulting in a variety of memes and viral videos.

Besides the artistic potential of generative AI, proponents of the technology commonly point to ways it might streamline day-to-day responsibilities such as drafting emails.

However, a number of problems have also developed as generative AI has exploded in popularity over the last few months.

The long-held concern that AI could impact a wide range of jobs in the same way machine automation did may yet come to fruition, though because the technology still has a great many problems and remains heavily reliant on human input, it’s unclear how or when this could truly become an issue.

AI text generation has certainly caused waves in this area, as seen in the ongoing Writers Guild of America strike. Among the union’s many demands are strict regulations regarding the use of AI and how writers are credited.

As detailed in an April 2023 Harvard Business Review article by Gil Appel, Juliana Neelbauer and David Schweidel, generative AI also raises difficult questions of intellectual property.

Since this software relies on the vast amounts of data needed to train a neural network, the work of countless individuals is swept up in the process.

This has been a particular concern among online artists, many of whom have pushed back against their work being used to produce AI-generated pieces.

Plagiarism is also a concern for AI-written content, particularly in the realm of academia, where some students have submitted AI-generated papers.

Locally, Waterloo and Columbia school districts have considered possible approaches to handling the rise in this use of technology, though no specific policies have been set.

Perhaps the largest issue AI might lead to is misinformation, a problem that has already arisen in a variety of ways.

Somewhat more niche are scams involving AI-generated voices of friends or loved ones requesting financial assistance in phone calls.

AI generated text could also contribute to scams, as even less effort is necessary to write and send mass emails.

One of the more concrete ways AI can contribute to misinformation takes the form of AI-generated photos.

An example of this happened last week, when an AI-generated picture of a plume of smoke next to a building circulated on Twitter with the caption “Large Explosion near The Pentagon Complex in Washington D.C. – Initial Report.”

The tweet was made by a verified account seemingly related to Bloomberg News, though both the account and image were fake.

Given the impact AI can have on misinformation, it’s worth considering how you might be able to better recognize AI-generated pictures in particular.

NBC News published an online article this past December titled “You against the machine: Can you spot which image was created by AI?” The article features 21 pairs of images, allowing the reader to select which one is real and which one was AI generated.

While they can sometimes be difficult to differentiate, common mistakes found in AI-generated images include an abnormal number of fingers or distorted facial features.

Other subjects like shadows or furniture can also sometimes have clear mistakes or inconsistencies.

While similar awkwardness or inconsistencies exist when it comes to AI writing or video generation, it can often be difficult to be sure what’s real and what’s not.

As with any misinformation, it’s generally important to question content encountered on the internet. Consider where the information is coming from and whether, on closer inspection, the source can be trusted.

As AI technology develops, further benefits and concerns will likely emerge as well. For now, it’s worth viewing the hype surrounding AI, as well as the content it generates, with a healthy amount of skepticism.


Andrew Unverferth
