The Artificial Intelligence Dilemma

The classical concept of Artificial Intelligence has been around since the 1950s. But newly emerging innovations have put AI’s capabilities in the hands of everyday users, opening new possibilities—and new risks.
By Robert Lerose
Throughout history, numerous technological breakthroughs have transformed our society. From landline telephones, radio, and television to personal computers, the Internet, and smartphones—each has marked a turning point in the way we live, work, socialize, become informed, and relate to one another.

There is little doubt that generative artificial intelligence, or generative AI, will join that roster, making a dramatic impact on our lives—and on every facet of business—while also raising complex questions about its promise and ethical implications. One thing is clear: AI has caught the attention of Wall Street. According to Pitchbook, in 2022, venture capitalists poured $483.6 million into generative AI companies in New York alone—a 1,096.6% surge from $41.1 million in 2018. Nationwide, almost $10 billion has been invested in the AI industry in the last two years.

Institutions in nearly every sector of society are experimenting with generative AI, from school systems to emergency response teams to media organizations. In the summer of 2023, researchers at the University of California, San Francisco, announced a milestone by helping a stroke victim regain some of her speech using AI. Meanwhile, California firefighters are testing AI technology to spot blazes before they erupt.

The misuses of AI are equally noteworthy. Scammers are deploying AI to mimic individuals’ voices and siphon funds from their bank accounts, and in June 2023, a New York City lawyer was fined for submitting a legal brief suffused with AI-generated falsehoods. The threat to existing working conditions is so great that the striking Writers Guild of America fought to regulate the film and TV studios’ use of AI-produced material.

AI has been around for decades, powering technologies such as robotics and voice recognition systems. But with OpenAI’s release of ChatGPT in 2022, public attention soared. (ChatGPT, short for Chat Generative Pre-trained Transformer, is a computer program that simulates human dialogue.) Now, instead of merely responding to prompts, the generative technology behind ChatGPT can produce new content at lightning speed for users who simply ask a question or issue a command.

That advance ushered in pressing questions. How will AI improve the efficiency of organizations? How can digital technologies elevate a business’s core competencies? What do businesses need to know to use these tools in socially responsible ways? And most importantly, what is the human cost of these seismic changes?

Managing Transformation

Fordham’s Gabelli School of Business is home to the Center for Digital Transformation (CDT), which is exploring the expanding opportunities AI affords, along with the most pressing questions it poses. Founded in 2011, the CDT gives faculty members, students, and researchers opportunities to understand how digital technologies relate to, and affect, business and the global marketplace. In particular, the center focuses on how these new discoveries can reimagine business and social orders for the better.

“The center is a focal point for faculty research in information systems, analytics, and upcoming technologies and [exploring] how they are transforming businesses,” said Aditya Saharia, professor of information systems and director of the center, who holds a Ph.D. in theoretical physics. “It’s about how we manage technology, how we adopt technology, and how technology is helping us manage businesses.”

The CDT’s virtual Design Lab fosters research in AI, deep machine learning, and solutions for business and social problems. RP Raghupathi, Ph.D., professor of information technology and director of the lab, coordinates student projects and the advanced research they conduct with faculty—challenging them to excel and preparing them to face the competitive environment outside the school walls.

Author of more than 70 refereed journal articles, Raghupathi estimates that over 600 students have utilized the lab to pursue advanced research projects since 2014. “As we train the next generation of leaders, we wanted to provide experiential learning to students that goes beyond the classes and the courses they take,” he said.


A Customized Interview App

The Design Lab at the Gabelli School lets students gain practical, hands-on experience working with AI. Equally important, students can use their projects as concrete examples of their knowledge and skill to show employers when they apply for jobs after graduation.

To that end, a recent project by two students addressed an issue familiar to nearly all job seekers: how to prepare for an interview with a prospective employer. Under Raghupathi’s supervision at the Design Lab, Master of Business Analytics candidates Haoxiang Jia and Zicheng Wang created the AI Interviewer, a tool that uses generative AI to simulate interview scenarios.

“We hope AI Interviewer will have the capability to play the role of professional interviewers in real life, having the knowledge base for specific roles and companies, and seamless voice interaction,” Jia said.

Users can hone their skills in multiple ways. For example, by cutting and pasting a job description into AI Interviewer, the user will get a series of relevant questions that might be asked during the actual interview. Users can also have their résumé analyzed for feedback on whether they would be qualified for a particular position.

Today, AI Interviewer is in beta and available for the public to test and give feedback on. The students are fine-tuning the app with the intention of ultimately providing a platform for everyone to use for free or at a very low cost.

“We hope AI Interviewer will enhance the accessibility to trending technologies for people, especially [those] who don’t have access to abundant career resources and nontechnical major students,” Jia said. “In other words, we hope AI Interviewer lowers the bar to use AI applications.”

Decisions With AI

AI’s ability to sift through and analyze vast amounts of data in the blink of an eye has given marketing leaders a turbocharged tool—the implications and opportunities of which are still unfolding.

“It can create a new space for competition and change the dynamics of the industry,” said Navid Asgari, Ph.D., associate professor of strategy at the Gabelli School and Grose Family Endowed Chair in Business. “It has the potential to reduce the barrier to entry in certain industries.”

By way of analogy, Asgari cited how biotechnology redefined drug development in the 1970s, “enabling new entrants such as Genentech to rise to prominence in the pharmaceutical industry and compete alongside incumbents,” he explained. “Likewise, AI can also usher in new firms’ entry. However, despite the hype, AI is unlikely to dethrone the incumbents and leave their employees jobless. In fact, AI might reinforce the established firms’ position because they own large volumes of data, which, in turn, benefit AI algorithms.”

Asgari said that managers need to be aware of what’s happening both within their industry and within their company itself. AI is all about “excellent predictions.” Rather than viewing this as a threat, managers should look at it as an opportunity to assert their authority: only human managers have the ability to render judgments and make informed choices on AI-generated predictions.

“You are unlikely to see AI making decisions at the helm of a company,” Asgari said. “In fact, it makes humans—whatever makes us human—even more important. AI should be viewed not as a substitute, but as complementary [to human intelligence].”

Still, he noted, managers need to be on guard about putting their faith uncritically in data at the expense of missing the bigger picture. As good as AI models and insights might be, they come with their own weaknesses, such as biases, which need to be compensated for. For example, AI systems have replied to prompts with racial, gender, and economic inaccuracies and stereotypes that were embedded in the datasets that AI trained on. “Truth is a more comprehensive notion,” he said.

Ethical Obligations

Champions of AI laud its data analysis and modeling capabilities to enhance everything from diagnosing patients to managing natural resources to creating personalized education programs—but critics abound. In March 2023, more than 1,100 technologists urged a six-month moratorium on AI development because of the possibility of “profound risks to society and humanity.”

Miguel Alzola, Ph.D., associate professor of law and ethics at the Gabelli School, said that AI has the potential to deepen the inequality that already exists between shareholders and workers in business. To bridge that divide, corporations need to consider how each member of their labor force can benefit from the advances brought about by AI. Business leaders, he noted, also have an obligation to apply “general moral principles” when using AI, such as accountability and respecting individuals’ privacy.

“AI does not help with value conflicts, which is what ethics is about,” Alzola said. “AI cannot meaningfully tell students how to live their lives or what moral choices to make in the classroom while completing assignments or doing research.”

With pressure from the marketplace to come up with ever more sophisticated AI tools, safety standards might not keep up with developments. In that case, Alzola explained, government will need to step in and regulate AI tools for safety before the public has access to them, just as it does with pharmaceutical products.

As impactful as AI appears to be, it is worth remembering that it is still a human-made tool, subject to human judgment.

“AI lacks the psychological capacities necessary to empathize with other human beings,” Alzola added. “In the end, AI forces us to reconsider what it means to be a human, but AI will never be human.”

—Robert Lerose is a freelance writer based on Long Island, New York.