For super funds, with great AI power comes great responsibility

Asset owners integrating artificial intelligence (AI) models into their workplaces and processes need to carefully consider the issue of potential harm – to employees, fund members and society generally – and the guardrails that are necessary to minimise unforeseen consequences of emerging and rapidly developing technologies.
But for all the challenges and big, thorny issues AI throws up, there are clear and significant benefits to be gained from doing it well.
KPMG associate director and responsible AI lead Melinda Rankin told the Investment Magazine Chair Forum earlier this month that AI is reaching into almost every aspect of society and “it’s really important that we all start thinking together as a society, not just about how it works, but how a responsible AI should function in your workplace”.

“But I think it has broader questions about the type of society that we all want to live in,” Rankin said.
Rankin said it was natural to be hesitant about using AI in a business setting, particularly for super funds, and “I don’t want to be somebody that sort of says you’re behind if you haven’t done that already”.
“If you’re hesitant and sceptical, that’s really healthy,” she said.
“I think that you should be really asking all your colleagues across the organisation how they’re going to manage it, and what are the risk-management processes and controls that they’re going to establish, and what’s the transparency and accountability around that.”

KPMG national sector leader, wealth and asset management, Linda Elkins said an issue around the use of AI and algorithms in business processes is that “if something goes wrong, it’s going to go wrong at scale”.
“It’s going to be every case,” Elkins said.
“That’s important when you think about things like FAR [Financial Accountability Regime] and [CPS] 230. Who is going to be the responsible executive for the algorithms that you’re putting in place? Are there critical processes that these algorithms are now supporting? And if so, how are you going to document and mitigate those risks? How are you going to think about possible outage or harm, and risk tolerance?”

Endless potential

Even so, the potential of AI to enhance business processes, improve efficiency and massively benefit the member experience is absolutely real, even if it’s still only in its infancy. 
Aware Super chair Sam Mostyn said AI will be central to the technology changes taking place at the fund, including its role in a feature called the “member listening engine”, which can monitor every call that comes into the member services team. 
“It tracks and highlights why people are calling us and what we can learn from that,” Mostyn said. 

“The most recent data shows that…a quarter of a million calls come in each year about one issue, and it’s following up on a form. It’s people ringing to say, ‘I think I’ve put a form in, can you tell me the status of it?’” 

Mostyn said Aware took that insight and designed into its member app a feature called ‘my activities’, where a member can be updated on the progress of a form through the system. 
“It’s created an efficiency benefit in the way of members not having to ring, not having teams of people answering a call about a transaction,” Mostyn said.
ART chair Andrew Fraser said in his role as Chancellor of Griffith University, the rise and rapid take-up of AI has educators asking a fundamental question: If AI can do it, why are we teaching it? 
“Which is one of the really challenging questions for us as a society, and that is, as technology develops, why are we actually making people do these things when someone else can do it?” Fraser said. 

“Is that a useful skill that will be of value into the future?” 

Fraser also said there’s a sort of technology arms race taking place, where students are using AI to produce things like written assignments, and technology is being developed to detect the use of AI. As the detection algorithms improve, so does the AI used to avoid detection. 

“Which kind of takes you back to the first question, which is, do we need to deny it? Or do we need to embrace it?” Fraser said. 

Interpersonal skills a bonus

One irony of the rise of AI is that a premium is now being placed on interpersonal skills. Fraser said new ways of testing educational attainment, such as oral exams, are assuming greater importance. 

“Universities have become places where students attend lectures at 1am, lying in bed, listening to lectures at two-times speed on their AirPods, because no one’s got the time to listen to someone talk in real life anymore,” he said. 

“From that extreme, atomised online experience, and online examinations, the one way to really test whether someone has comprehended and gained the knowledge is to sit with an oral examination.  

“And the benefit of that is, for you to be able to do that, it’s not just about the demonstration of educational achievement; it’s a skill to be able to talk to another human being.” 

KPMG’s Rankin said the deployment of AI inside organisations does not follow established innovation paths, which usually involve centres of excellence developing solutions to a problem before turning them loose inside a business. 

She said another dimension that is “really, really important to note [and] which is also different…is that the nature of this technology is to be democratised, which is to say that it’s decentralised”. 

“Before…there would be a [chief technology officer], and you have very clear authority over that technology, and maybe a business owner, but not everybody could access it,” Rankin said. 

“But the nature of these products, especially with Microsoft, is for everybody to have them. Copilot, everybody can access it. And there is a sort of libertarian, liberal idea in that, which is wonderful. The only problem is that your risk-management processes have to be extremely good. 

“And you have to have a much better notion of the values that you want to harness, more than you probably ever did before, not only because this technology gives you the speed, autonomy and opacity you might want; but the other side of that is that society is demanding more accountability and transparency.  

“So you’ve kind of got these different tensions, which I think are really healthy.”