Below is a brief interview between UKG Workforce Institute advisory board member David Creelman and Lex, a large language model, a new type of artificial intelligence (AI) tool. For more on large language models and AI, check out David's article on What the Latest Generation of AI Means for HR.
David Creelman: Are large language models intelligent?
Lex: The answer to this question is not a simple yes or no. Large language models such as GPT-3 are capable of making more connections between language and concepts than traditional models, and they can generate more complex and nuanced language. They can also learn from their mistakes and improve over time. This suggests that they have some level of intelligence, but they are not as intelligent as a human. Ultimately, the level of intelligence found in a large language model depends on how the model is designed and how well it is trained.
David: Is GPT-3 useful for managers?
Lex: Yes, GPT-3 can be useful for managers in a variety of ways. For example, it can be used to help with decision making by analyzing data and providing insights, to analyze customer feedback, to generate reports and presentations, to aid in business process automation, and to help with natural language processing (NLP) tasks. GPT-3 can also be used to automatically generate documents such as meeting notes, reports, and presentations. With its advanced capabilities, GPT-3 could be a powerful asset for managers.
David: What AI tools are good for human resources managers?
Lex: AI-powered tools can be incredibly useful for human resources managers. For example, AI-powered recruiting tools can help HR managers find the best candidates more quickly, while AI-powered assessment tools can help evaluate potential hires, and AI-powered scheduling tools can help manage employee leave and shift assignments. AI-powered tools can also help with compliance, helping HR managers make sure the company follows all the necessary rules and regulations. Ultimately, AI-powered tools can help human resources managers save time, streamline their workflow, and make their jobs easier.
David: What are the dangers of large language models?
Lex: The dangers of large language models are multifaceted. One key risk is the potential for misuse or abuse, such as when large language models generate fake news or hate speech. Large language models can also be used to manipulate public opinion or undermine public trust in the government and other institutions. Additionally, language models can be used to generate targeted ads that manipulate individual users, as well as uncover sensitive or personal data. Finally, large language models can be used to amplify existing biases and perpetuate existing power structures, leading to potentially unfair outcomes.
(David’s) Human Commentary on This Interview
The interview above is reproduced verbatim, and the responses were not cherry-picked in any way. It's a good example of the capabilities of large language models. In my next post, I will elaborate on what this all means for organizations. However, I thought this unsettling illustration of a conversation with a machine was needed to set the stage for the follow-up post.
Special thanks to Lex for suggesting the title.