Artificial Intelligence is rapidly transforming the business world, with many companies embracing its potential to increase efficiency, reduce costs, and improve decision-making. However, the implementation of AI also has significant implications for the labor market, with the potential to automate many routine jobs, leading to job displacement for workers.
While AI has the potential to increase productivity and profitability for businesses, it could also lead to job displacement for those in routine and repetitive jobs. This displacement could be especially problematic for workers in industries such as manufacturing, which is a significant sector in the Dayton region.
White-collar jobs and jobs requiring a bachelor’s degree can also be automated. In fact, AI is already being used to automate tasks such as data entry, document analysis, and customer service, which were previously performed by humans. This means that workers in fields such as finance, law, and healthcare, who were once considered safe from automation, may now be at risk of job displacement.
The last three paragraphs you just read were not written by a human journalist. They were written by ChatGPT, the viral large language model developed by OpenAI, and edited for length and style by the journalist now writing these words.
For transparency’s sake, the following prompt was given to ChatGPT to generate the above three paragraphs: “Write an opening paragraph in the style of a newspaper article giving a brief overview of the applications of AI in business settings. Include both positive and negative elements of AI implementation in the labor market.”
The model was then asked to expand on how this would affect Dayton in particular, followed by how it would affect those with bachelor’s degrees.
Since its release in November, ChatGPT has garnered millions of users and has already disrupted many areas of life and work. The generative AI chatbot responds conversationally, answering questions and synthesizing information into readable answers.
At the same time, the explosion of ChatGPT usage has raised significant questions about the future of work — and the ethics of artificial intelligence and machine learning as a whole.
What is AI?
Machine learning models, a core form of artificial intelligence, are programs that have been trained to recognize patterns and to predict outcomes from those patterns, often patterns that humans can't see.
The effort to build machines that think like we do is nothing new, said Pablo Iannello, professor of law and technology at the University of Dayton. But for the first time in history, machines can communicate with one another, and learn from one another, without any kind of human input.
“Artificial intelligence becomes really important when you combine different things: one is machine learning, another is the internet of things, and the third one is blockchain,” Iannello said.
“If you combine those three things at the very high speed of programming and learning, then you have the situation in which we are today: You have computers that can learn by themselves.”
The “internet of things” is the idea that everyday objects, like smart refrigerators or car sensors, can collect and transmit data over the internet. Blockchain, the technology most famously associated with cryptocurrency, decentralizes the record of digital transactions across computational “nodes.”
Large language models like ChatGPT, as well as image generators like Midjourney and Dall-E, draw their data from the billions of words and images that exist on the internet.
ChatGPT has already been used to write everything from children’s books to code. It can also be manipulated into producing incorrect answers for basic math problems, and will fabricate facts and “evidence” with confidence, said Wright State computer science professor Krishnaprasad Thirunarayan.
“That leaves me with mixed feelings,” he said. “These tools promise a fertile area of research on trustworthy information processing but, on the other hand, they are not yet ready for prime-time deployment as a personal assistant.”
Like any tool, artificial intelligence can be used for good, or it can be used for malicious purposes. Facial recognition software that can help apprehend criminals can also be misused by governments to track and harass citizens, either deliberately or through mistaken identities, Thirunarayan said.
“Premature overreliance on these not-yet-fool-proof-technologies without sufficient safeguards can have dire consequences,” Thirunarayan said.
Law and ethics
Artificial intelligence tools stand to disrupt the practice of law in multiple ways. Paralegals and other legal professionals are among those at risk of having their jobs automated by large language models.
But the legal world also faces a major challenge: developing laws and regulations that protect the humans who interact with AI tools.
Laws tend to lag behind technological change, and behind the shifts in societal values that come with it, Iannello said.
“Artificial intelligence is changing the way we see life. Law is going to change because the world is changing,” Iannello said.
Current data-gathering law is built around the concept of consent, Iannello said. Any time you visit a website or create an account on Facebook or Google, you accept the terms and conditions, which include consent to data collection.
According to the most recent AI Impacts Survey, nearly half of 731 leading AI researchers think there is at least a 10% chance that an AI capable of learning at the same level as a human being would lead to an “extremely negative outcome.”
“The worst thing is that it looks nice,” Iannello said. “We don’t have to worry about politicians. We don’t have to worry about corrupt people. We don’t have to worry about corruption because machines will solve the problems.”
“But if that happens, who’s going to control the machines?”
In March, OpenAI released a report that found about 80% of the U.S. workforce could have at least 10% of their tasks affected by AI, while nearly 20% of workers may see at least 50% of their tasks impacted.
A March report by investment banking giant Goldman Sachs found that generative AI could expose the equivalent of 300 million full-time jobs worldwide to automation.
“If it is trained on an extensive code base, (AI) can lead to mundane programming tasks being templatized and eliminated. This can mean more time to do non-trivial and potentially more interesting tasks, but can also simultaneously mean loss of routine jobs,” Thirunarayan said.
The influence spans all wage levels, with higher-income jobs potentially facing greater exposure, according to OpenAI researchers. Among the most affected occupations are office and administrative support, finance and accounting, healthcare, customer service, and creative fields like public relations and art.
“A lot of people were aware that AI is trending towards maybe supplementing or impacting many jobs, perhaps in areas like truck driving, for example, and I think a lot of folks thought white collar workers were more immune,” said David Wright, director of academic technology and curriculum innovation at the University of Dayton.
“But almost everyone who’s had any sense of what AI is today and what it can look like tomorrow, we knew that this is going to affect everyone.”
The Goldman Sachs report posited that while many jobs would be exposed to automation, new jobs supporting machine learning and information technology would be created to offset them.
However, other research suggests the wage declines that hit blue-collar workers over the last 40 years are now headed for white-collar workers as well. In 2021, researchers at the National Bureau of Economic Research argued that automation technology has been the primary driver of U.S. income inequality, attributing 50% to 70% of the growth in wage inequality since 1980 to wage declines among blue-collar workers displaced by automation.
“All these issues can have far-reaching consequences: They can increase the social divide between the haves and the have-nots, and between the technologically savvy and those without comparable skills. On the other hand, these changes can relieve us of mundane chores and make time for the pursuit of higher goals,” Thirunarayan said.
In March, researchers reported that ChatGPT passed the bar exam with flying colors, scoring near the 90th percentile of aspiring lawyers who take the test. As yet, however, ChatGPT’s most recent iteration, GPT-4, has not been able to pass the exam to become a Certified Public Accountant.
That’s because, in part, ChatGPT struggles with computations and critical thinking, said David Rich, a senior manager and CPA with Clark Schaefer Hackett.
Rich said he uses GPT-4 two to three times a week for everything from accounting research to writing memos, though the output takes a fair amount of editing.
“I’m a pretty picky writer, but it’s always nice to have a good starting place, even if it’s just ideas. It’s probably saved me about 80% of the time I would have spent getting that initial first draft,” Rich said.
ChatGPT isn’t the only artificial intelligence disrupting the accounting world. The American Institute of CPAs is one of several organizations developing what’s called the Dynamic Audit Solution, a tool intended to improve how audits are performed.
The reasons businesses value CPAs include personal relationships, critical thinking, and the accountant’s ability to be intimately familiar with the ins and outs of their business, something a machine can’t replicate, Rich said.
“If it’s a large manufacturing company, I’m familiar with how the CEO interacts with the CFO, how they interact with the board. That’s just something that AI is never going to be able to do. I won’t say never, but it would have a hard time really capturing the value proposition that we’re bringing,” Rich said.
ChatGPT has thrown a wrench into higher education. Used skillfully, the software can produce essays virtually indistinguishable from those of a human college student. Students at the University of Dayton are among the many now doing their homework with ChatGPT, forcing the university to reckon with how it teaches classes across all disciplines.
“AI is something that looms very large for us, both in terms of how it impacts learning, and how it affects students and how they’re learning today,” Wright said.
The phenomenon has been met with mixed reception by educators nationwide. While some have called for better anti-cheating software, others have said this is indicative of a broader shift in work.
“Another challenge is how to incorporate AI so that when the students graduate, they have the skills needed to succeed in the workplace, wherever and whatever they do,” Wright said.
While AI may be good enough for college essays, it falls short at producing practical, professional written work, said Gery Deer, who owns and operates GLD Communications in Jamestown and the newspaper the Jamestown Comet.
“I think where I can really smell it is that it’s a little too formulaic,” he said.
Despite this, ChatGPT is poised to take a sizeable chunk of public relations work. Deer said he has already lost work to ChatGPT, but that’s not his biggest worry.
“There’s enough work to go around, so I’m less worried about that. The downside is there’s nobody proofing it. There’s no regard for the audience in this material,” he said.
Quality work costs money, but creative work is seen as one of the easiest to cut costs from, Deer said.
“I’m not so much worried about losing my job,” Deer said. “I am more concerned with the level of junk that I’m going to have to now compete with.”
A group of artists filed a class-action lawsuit against image generators Stable Diffusion and Midjourney in January. AI image generators train on millions of images created by thousands of artists who post their work on the internet. As the model learns from the art contributed to the dataset, users are able to generate images in those artists’ styles in seconds — but as it stands, the artist whose style is referenced will never see a cent.
“Style is all an artist has,” Deer said. “As a writer, all I can do is rearrange the words, but it’s my style that creates that.”
Among the occupations most exposed to large language models like ChatGPT, as rated by human researchers:
Financial Quantitative Analysts
Writers and Authors
Web and Digital Interface Designers
Interpreters and Translators
Public Relations Specialists
Poets, Lyricists and Creative Writers
Among the occupations most exposed to large language models, as rated by ChatGPT itself:
Accountants and Auditors
News Analysts, Reporters, and Journalists
Legal Secretaries and Administrative Assistants
Clinical Data Managers
Climate Change Policy Analysts
Court Reporters and Simultaneous Captioners
Proofreaders and Copy Markers