While ChatGPT is helping benefits communications become more efficient, detailed and reliable, the risks may outweigh the rewards when it comes to full automation, says Tom Milne, a principal at Normandin Beaudry.
The possibilities for generative artificial intelligence in the benefits space are infinite, he says, noting the algorithms are helping expedite the research and planning stages of communications and plan design. Currently, the platform is primarily used for product branding, themes and ideas, particularly in communications around the launch of a new pension plan or benefits offering.
ChatGPT can tell plan sponsors how a particular benefits product works and can be programmed to access plan sponsors’ servers to generate proposals and benefits summaries and to create strategies based on plan information.
However, Milne believes AI technologies still have a long way to go in personalizing that data. Unless it’s provided with detailed plan information, ChatGPT still struggles to connect the specifics of a benefits program or offering. For example, it can’t yet determine whether a plan sponsor’s health-care spending account rolls over each year or operates on a use-it-or-lose-it basis, he says, nor can it apply those rules, because it’s unable to decipher the specifics of what the program allows. It also lacks the ability to express authentic human emotion in its output.
While some people in the industry worry ChatGPT will put them out of work, the human touch is still needed to validate its output, says Milne. As well, privacy concerns linger over the use of the technology, so people are still needed at the front end to craft the right prompts and at the back end to review the output.
“The one thing [the benefits industry] has is massive amounts of data on how employees use benefits programs or overall drug usage,” he says.
As an example, Milne notes plan sponsors can use AI to run summaries on the top five drugs being used by plan members or chart year-over-year usage data to see any increases or decreases in utilization.
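The kind of claims summary Milne describes can be sketched in a few lines of pandas. The column names and figures below are purely illustrative, a hypothetical stand-in for a real plan's claims data, not any actual schema:

```python
import pandas as pd

# Illustrative claims data: each row is one drug claim in a given year.
claims = pd.DataFrame({
    "drug": ["A", "B", "A", "C", "B", "A", "D", "C", "A", "B"],
    "year": [2022, 2022, 2022, 2022, 2023, 2023, 2023, 2023, 2023, 2023],
    "cost": [120, 80, 130, 60, 90, 125, 40, 65, 110, 85],
})

# Top five drugs ranked by total claim cost across all years
top5 = (claims.groupby("drug")["cost"].sum()
              .sort_values(ascending=False)
              .head(5))

# Year-over-year change in utilization (claim count per year)
yearly = claims.groupby("year").size()
yoy_change = yearly.pct_change()

print(top5)
print(yoy_change)
```

In practice the heavy lifting an AI assistant would add is the natural-language layer on top of a summary like this; the underlying aggregation is conventional analytics.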
“ChatGPT learns information based off a number . . . of things and then applies [that knowledge] to other things. Who knows what someone put into that system before you or who knows how smart that AI is going to be in connecting the dots? Think of the security concerns you have around someone accessing your personal information. ChatGPT [can potentially] put all of these pieces together.”
As well, there are legal considerations around how AI is being adopted, he says, noting there’s currently a lawsuit against ChatGPT’s owner alleging much of the tool’s training data was gleaned by scraping online content from a host of copyright holders. It raises the question of liability when it comes to the information used in AI programming. Indeed, Normandin Beaudry has an oversight board that recently provided guidance to its employees on using the platform.
While AI is upending benefits communications and its output will only grow more authentically “real” over time, says Milne, for now the human touch remains vital to ensuring the technology is used responsibly.