ChatGPT was released in November 2022, roughly two years ago. The tool caught the world’s imagination and hinted at great changes to come. HR is still grappling with what those changes will mean for the function. Several tensions make it difficult to decide how to approach AI; here are four of them:
1. The gap between amazing and useful
There is no question that current AI tools like ChatGPT are amazing, but that doesn’t always mean they are useful. For example, it’s easy to imagine an AI chatbot answering employee queries; in practice, however, a chatbot might make too many mistakes to be useful.
If you don’t try AI tools, you may miss out on something amazing, yet the effort you put into trying them may prove to be a waste.
2. The risk of a “fast follower” strategy becoming a “far behind” strategy
Most HR departments won’t have the resources to be leaders in applying AI, so it’s tempting to adopt a “fast follower” strategy. The trouble is that as you sit back and watch what is working elsewhere, you may find that you are not following fast but merely falling further and further behind.
3. AI is moving so fast that your well-laid plans might turn out to be foolish
Let’s imagine you recognize a great opportunity to deploy an AI chatbot, but you will have to invest in developing some tools so that it keeps information private. You make those investments, but long before you are done, the next release of ChatGPT or one of its competitors has the features you need already built in. Your carefully planned investment now seems foolish.
4. AI experts assure you AI can or can’t do something, but experts are often wrong
When we are considering AI safety or potential job losses related to AI, it’s natural to look to experts. Unfortunately, there are so many unknowns about AI, especially where AI will be in a few years, that even the experts are just guessing. With AI we are living in a world of unknown unknowns.
What to do about these tensions
These four tensions can be summarized by saying that deploying AI tools creates hard-to-assess risks but not using AI at all is an even bigger risk. There isn’t an option to sit back and wait until things settle down because that’s not likely to happen anytime soon.
I recommend implementing a sustainable program for learning about and experimenting with AI while being cautious about making any big bets. The important words in that sentence are sustainable, learning, experimenting, and cautious. The idea of sustainability is easily overlooked: you cannot give everyone a workshop on AI and say “Okay, now we are done”; you need mechanisms so that people stay at least somewhat up-to-date month by month. Learning should be the emphasis rather than deploying, and the word “experimenting” is there to remind us that it’s not enough to hold workshops; we need to try things out. Finally, there is the word “cautious,” and that one needs no explanation.
AI researcher and YouTuber Károly Zsolnai-Fehér always ends his videos with an enthusiastic salute of “What a time to be alive!” That captures a lot. AI progress is astonishing while creating difficult tensions for HR professionals. Perhaps like Zsolnai-Fehér we should embrace the tensions with enthusiasm and enjoy that we are in the midst of a remarkable time.