It's Thalen's Turn column: Just because we can, does it mean we should?

The following is an opinion column written by an Echo Press editorial staff member. It does not necessarily reflect the views of the Echo Press.


“Your scientists were so preoccupied with whether they could, they didn't stop to think if they should.”

That was the line Jeff Goldblum delivered as Dr. Ian Malcolm in Steven Spielberg's 1993 film, "Jurassic Park."

It's a line that rings through my head when I think of the rapid advancement of artificial intelligence (A.I.). I am concerned that A.I. could affect jobs, culture and livelihoods. And I am not alone.

Geoffrey Hinton was dubbed the "Godfather of A.I." after he and two of his graduate students at the University of Toronto developed technology in 2012 that became the foundation for today's A.I. systems. In a May 1, 2023, New York Times article, Hinton told NYT journalist Cade Metz that he quit his job at Google to speak out about the risks of A.I. and that he now regrets his life's work.

Hinton isn't alone either. According to another NYT article, in late March of this year, more than 1,000 technology researchers and leaders signed a letter expressing concerns about A.I. and its "profound risks to society and humanity." As of Wednesday, May 3, the letter had a total of 27,565 signatures. Notable names like Elon Musk and Apple co-founder Steve Wozniak were among the signatories, along with university professors and CEOs of tech companies.

"I console myself with the normal excuse: If I hadn’t done it, somebody else would have," Hinton told the New York Times.

The article says that "generative A.I. can already be a tool used for misinformation." Hinton expressed how hard it will be to prevent "bad actors" from abusing the technology's power.

Hinton also expressed concerns that A.I. may "upend the job market" and "that the internet will be flooded with false photos, videos and text, and the average person will 'not be able to know what is true anymore.'"

Personally, while scrolling through social media, I have already seen posts of images and videos where commenters question whether the work is an original piece by a human or a product of A.I.

One example is an A.I.-generated movie trailer I saw on Facebook for a fake movie called "The Galactic Menagerie." It imagines the story of Star Wars as told by Wes Anderson. Anyone familiar with Anderson's films knows he has a very distinct style of filmmaking and that he tends to cast the same actors. In the fake trailer, the A.I., in my opinion, perfectly replicates how a Star Wars movie by Anderson would look. The worst part is that, as a fan of both Star Wars and Anderson, I am disappointed it's not real.

I worry that with A.I. technology able to replicate art like movies and images, artists will one day become obsolete. Or, rather than hone the skills of their craft, they will become too reliant on A.I., and people will begin to lose their creativity. I think we are already beginning to see that. Too many movies released in the last decade are remakes, sequels, prequels or adaptations of previous works. Rarely, it seems, do we get to be entertained by something truly original. Even music is lacking creativity and meaning.

Will we ever get lyrics like, "Orders from the D.A. Look out kid, don't matter what you did. Walk on your tip toes, don’t tie any bows. Better stay away from those that carry around a fire hose. Keep a clean nose, watch the plainclothes. You don't need a weather man to know which way the wind blows," again? God, I hope so. That is a line from Bob Dylan's "Subterranean Homesick Blues" by the way.

This all makes me wonder if this is the beginning of the technological singularity, a hypothetical future point when the development of technology becomes uncontrollable and irreversible and may cause unforeseeable changes to human civilization.

The last concern Hinton listed is that A.I. learning to write its own computer code, and to operate on that code, may lead to the development of "truly autonomous weapons — those killer robots."

Even Stephen Hawking expressed similar concerns. In a Dec. 2, 2014 BBC article, he said, “The development of full artificial intelligence could spell the end of the human race... It would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Thalen Zimmerman of Alexandria joined the Echo Press team as a full-time reporter in Aug. 2021, after graduating from Bemidji State University with a bachelor of science degree in mass communication in May of 2021.