The Artificial Intelligence Dilemma
There is little doubt that generative artificial intelligence (AI) will join that roster, making a dramatic impact on our lives, and on every facet of business, while also raising complex questions about its promise and ethical implications. One thing is clear: AI has caught the attention of Wall Street. According to PitchBook, venture capitalists poured $483.6 million into generative AI companies in New York alone in 2022, a 1,096.6% surge from the $41.1 million invested in 2018. Nationwide, almost $10 billion has flowed into the AI industry in the last two years.
The misuses of AI are equally noteworthy. Scammers are deploying AI to mimic individuals' voices and siphon funds from their bank accounts, and in June 2023, a New York City lawyer was fined for submitting a legal brief suffused with AI-generated falsehoods. The threat to existing working conditions is so great that the striking Writers Guild of America fought to regulate the film and TV studios' use of AI-produced material.
That advance ushered in pressing questions. How would AI improve the efficiency of organizations? How could digital technologies take a business's core competencies to the next level? What do businesses need to know to use these tools in socially responsible ways? And most importantly, what is the human cost of these seismic changes?
Managing Transformation
“The center is a focal point for faculty research in information systems, analytics, and upcoming technologies and [exploring] how they are transforming businesses,” said Aditya Saharia, professor of information systems and director of the center, who holds a Ph.D. in theoretical physics. “It’s about how we manage technology, how we adopt technology, and how technology is helping us manage businesses.”
The CDT’s virtual Design Lab fosters research in AI, deep machine learning, and solutions for business and social problems. RP Raghupathi, Ph.D., professor of information technology and director of the lab, coordinates student projects and the advanced research they conduct with faculty—challenging them to excel and preparing them to face the competitive environment outside the school walls.
Author of more than 70 refereed journal articles, Raghupathi estimates that over 600 students have utilized the lab to pursue advanced research projects since 2014. “As we train the next generation of leaders, we wanted to provide experiential learning to students that goes beyond the classes and the courses they take,” he said.
A Customized Interview App
To that end, a recent project by two students addressed an issue familiar to nearly all job seekers: how to prepare for an interview with a prospective employer. Under Raghupathi’s supervision at the Design Lab, Master of Business Analytics candidates Haoxiang Jia and Zicheng Wang created the AI Interviewer, a tool that uses generative AI to simulate interview scenarios.
“We hope AI Interviewer will have the capability to play the role of professional interviewers in real life, having the knowledge base for specific roles and companies, and seamless voice interaction,” Jia said.
Users can hone their skills in multiple ways. For example, after pasting a job description into AI Interviewer, a user receives a series of relevant questions that might be asked during the actual interview. Users can also have their résumés analyzed for feedback on whether they would be qualified for a particular position.
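The workflow described here, paste in a job description and get back tailored practice questions, can be illustrated in miniature. The sketch below is not the students' implementation (AI Interviewer relies on a generative model); it swaps in simple keyword extraction and question templates so the pipeline is self-contained and runnable, and every name in it (`extract_keywords`, `practice_questions`) is hypothetical.

```python
import re
from collections import Counter

# Common filler words to ignore when mining a job description for key terms.
STOPWORDS = {
    "the", "and", "a", "an", "to", "of", "in", "with", "for", "on",
    "is", "are", "as", "we", "you", "our", "your", "will", "be",
}

def extract_keywords(job_description: str, top_n: int = 3) -> list[str]:
    """Return the most frequent non-stopword terms in the description."""
    words = re.findall(r"[a-zA-Z][a-zA-Z+#-]*", job_description.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

def practice_questions(job_description: str) -> list[str]:
    """Template-based stand-in for the generative step: one question per key term."""
    return [
        f"Can you describe a project where you used {kw}?"
        for kw in extract_keywords(job_description)
    ]

posting = (
    "We seek an analyst with strong Python skills. The analyst will build "
    "Python dashboards, communicate findings, and support analytics projects."
)
for question in practice_questions(posting):
    print(question)
```

A production tool would hand the extracted context to a large language model rather than a template, but the shape of the pipeline, job posting in, role-specific questions out, is the same.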
Today, AI Interviewer is in beta, available for the public to test and give feedback on. The students are fine-tuning the app with the goal of ultimately offering the platform to everyone for free or at very low cost.
“We hope AI Interviewer will enhance the accessibility to trending technologies for people, especially [those] who don’t have access to abundant career resources and nontechnical major students,” Jia said. “In other words, we hope AI Interviewer lowers the bar to use AI applications.”
Decisions With AI
“It can create a new space for competition and change the dynamics of the industry,” said Navid Asgari, Ph.D., associate professor of strategy at the Gabelli School and Grose Family Endowed Chair in Business. “It has the potential to reduce the barrier to entry in certain industries.”
Asgari said that managers need to be aware of what's happening both within their industry and within their own companies. AI, he noted, is all about "excellent predictions." Rather than viewing this as a threat, managers should see it as an opportunity to assert their authority: only human managers have the ability to render judgments and make informed choices based on AI-generated predictions.
“You are unlikely to see AI making decisions at the helm of a company,” Asgari said. “In fact, it makes humans—whatever makes us human—even more important. AI should be viewed not as a substitute, but as complementary [to human intelligence].”
Still, he noted, managers must guard against putting their faith uncritically in data and missing the bigger picture. As good as AI models and their insights might be, they come with weaknesses, such as biases, that need to be compensated for. AI systems, for example, have responded to prompts with racial, gender, and economic inaccuracies and stereotypes embedded in the datasets they were trained on. "Truth is a more comprehensive notion," he said.
Ethical Obligations
Miguel Alzola, Ph.D., associate professor of law and ethics at the Gabelli School, said that AI has the potential to deepen the inequality that already exists between shareholders and workers in business. To bridge that divide, corporations need to consider how each member of their labor force can benefit from the advances brought about by AI. Business leaders, he noted, also have an obligation to apply “general moral principles” when using AI, such as accountability and respecting individuals’ privacy.
As impactful as AI appears to be, it is worth remembering that it is still a human-made tool, subject to human judgment.
“AI does not help with value conflicts, which is what ethics is about,” Alzola said. “AI cannot meaningfully tell students how to live their lives or what moral choices to make in the classroom while completing assignments or doing research. AI lacks the psychological capacities necessary to empathize with other human beings. In the end, AI forces us to reconsider what it means to be a human, but AI will never be human.”
—Robert Lerose is a freelance writer based on Long Island, New York.