Workday Innovation Summit - what AI product risks are unacceptable? Workday explains, and customers react (2024)


Generative AI for the enterprise has shifted. It's no longer acceptable to come back from events with happy roadmaps. Talk of "revolutionizing" some type of industry/job/role doesn't cut it anymore, nor do reassuring platitudes about AI ethics or customer data privacy.

We need customer validation and gut checks. We need architectural specifics on how gen AI accuracy is improved - and customer data protected. Oh, and if we make assurances about responsible AI, we need transparency there too. What products were canned or altered? What areas of AI are off limits?

We need pricing details too. If we don't come back with that type of info, what the heck am I supposed to write about? I can best serve diginomica readers by pressing further; thus my series of AI deep dives.

Workday addresses growth priorities - CEO Carl Eschenbach frames the issues

Next up: Workday. I'm fresh back from Workday's annual analyst event, aka the Workday Innovation Summit. With a panel of customers mingling at the event, and some of Workday's top AI execs on hand, it was time to dig in.

To set the stage, Workday is at a big crossroads on a number of levels - not just AI. For more on that, check my Innovation Summit takeaway/surprises video review with Constellation's Holger Mueller:

Event Takeaways - @Workday Innovation Summit 2024 - catch @JonERP and me on our key takeaways #WDaySummit https://t.co/nONw2IvZSR

— Holger Müller #NextGenApps #FutureofWork #EntAcc (@holgermu) April 23, 2024

On February 1, 2024, Carl Eschenbach was named Workday's sole CEO. At Workday's Innovation Summit, Eschenbach framed Workday's overall growth and challenges. Workday's growth (1.3x user growth, 1.4x transaction growth this year) brings Eschenbach's prioritization issue front and center: sell deeper into the install base? Push further into promising international markets like Germany and Japan? Go downmarket via Accelerate with Workday's streamlined implementations for the midmarket? Double down on industry sweet spots like health care and public sector, and/or push into new verticals/micro-verticals?

Push for the international spread of Workday Financials, which recently hit a $1 billion annual run rate? Or deepen a popular next-gen talent approach based on Workday's Skills Cloud (2,300 customers), and build on the AI-driven talent approach a deep skills ontology enables? The likely answer is all of the above, but tough go-to-market choices are still inevitable.

Workday on Responsible AI - a stark contrast from move-fast-and-break-things

These are good problems to have - but in the meantime, Eschenbach has two key areas in his CEO wheelhouse: hone Workday's go-to-market with easier-to-consume pricing and delivery, and promulgate Workday's "Responsible AI" strategy (which Workday often shortens to RAI).

Workday's AI approach can be summed up in two words: "responsible" and "measured." Yes, you hear a phrase similar to "responsible AI" in just about every vendor keynote, but you don't hear "measured" in connection with AI very often.

For emerging tech, we idealize a move-fast-and-break-things mentality instead. That's proven to be a pretty horrendous mentality for AI, with embarrassing AI gaffes and AI overreach everywhere. So how is Workday different?

My AI agenda at the Innovation Summit went like this:

  1. Examine "responsible AI," and better understand how Workday puts their version of Responsible AI into action.
  2. Dig into the architecture Workday is using to produce/support enterprise-grade AI applications.

On the first day of the show, I kicked off the responsible AI questions by venting about the ethical AI platitudes we've heard from the keynote stage across the 2024 event circuit: Pretty much everyone in this room has been hearing an earful from vendors about responsible AI. It can be very difficult to figure out exactly what that means. Most enterprise vendors do have a lot of good intent, but I wanted to share a few things that I'm looking for, and hear how you would say you differentiate around this topic.

When you think about the EU AI Act and its risk level framework, a lot of HR practices are in the high risk area. What I want to hear more of is:

1. Workday has already checked a box with its active participation around AI regulatory frameworks and public policy, so I want to hear more about that story.

2. But then I want to hear more about when you decided against going with something that violates your practices, because I'm sure you come up with use cases that don't fit your responsible AI framework.

3. I'd also like to hear more about partner accountability. The Workday AI marketplace certification program is great. But what happens when partners don't honor Workday's vision? What I want to hear about responsible AI is the messy stuff and the difficult stuff, because that will show me that you're on that path.

Workday's AI risk framework - how do you tie risk assessment into AI development?

Over the next day, I talked in depth with several members of Workday's AI leadership. Workday itself uses a risk framework nearly identical to the EU AI Act's, as well as what is spelled out in NIST's AI Risk Management Framework. (These risk areas span from "low risk" to "high risk" and "unacceptable risk".) During the Workday Responsible AI analyst session, Kelly Trindel, Chief Responsible AI Officer at Workday, explained how Workday's risk assessment is integrated into AI development:

We've found a way to make this scalable and efficient for our development teams. It's basically like a questionnaire that goes out to our development teams, and they do an ideation stage of the product. They can find out within minutes: what's the risk level of this technology? And then if it's a higher risk level, what do I have to do? We want to give that information to them as soon as possible, so that they know they can build a new process.

Examples of low risk Workday AI might include embedded functions like detecting budget anomalies. Higher risk areas include HR processes tied to promotions. A high risk area is not off limits for Workday, but it requires, as Trindel put it, more "guidance and guardrails." Then Trindel answered my 'what is off limits' question: what is considered "unacceptable risk" by Workday?

We've chosen not to build things that would assist with intrusive productivity monitoring, or any kind of [workforce] surveillance. We steer away from those things, and then think about that principle: positively impact society. We could build things like that - and choose not to.

Before Workday could scale gen AI across its platform, Responsible AI needed to be fused into product development. Trindel:

These points are what really drives our risk assessment. First, we're looking at the OECD definition of AI, for example, and figuring out: does this fit within our AI scope, or could it possibly have an impact on workers' economic opportunities? If so, it shifts towards higher risk.

Is this targeted for individuals? Does it make predictions or categorize people, or is it intended to do that? If so, it's higher risk, as opposed to larger populations. And then finally, those instances where we're building something based on sensitive or emerging technology, we just want to give it a minute and think a little harder about that. So there's higher points towards a higher risk technology.
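
To make the mechanics concrete, here is a minimal sketch - mine, not Workday's actual tooling - of how a points-based ideation questionnaire like the one Trindel describes could map answers to the low/high/unacceptable risk tiers. All field names, thresholds, and scoring below are invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low risk"
    HIGH = "high risk"                  # allowed, but needs guidance and guardrails
    UNACCEPTABLE = "unacceptable risk"  # Workday chooses not to build these

@dataclass
class IdeationAnswers:
    """Hypothetical answers a development team submits at the ideation stage."""
    fits_ai_scope: bool                 # fits the OECD definition of AI?
    affects_economic_opportunity: bool  # could impact workers' economic opportunities?
    targets_individuals: bool           # predicts or categorizes individual people?
    sensitive_or_emerging_tech: bool    # built on sensitive or emerging technology?
    workforce_surveillance: bool        # intrusive productivity monitoring?

def triage(answers: IdeationAnswers) -> RiskTier:
    """Map questionnaire answers to a risk tier within minutes of ideation.
    The one-point threshold below is an assumption, not Workday's rule."""
    if answers.workforce_surveillance:
        return RiskTier.UNACCEPTABLE
    if not answers.fits_ai_scope:
        return RiskTier.LOW
    points = sum([
        answers.affects_economic_opportunity,
        answers.targets_individuals,
        answers.sensitive_or_emerging_tech,
    ])
    return RiskTier.HIGH if points >= 1 else RiskTier.LOW
```

Under this toy scoring, a promotion-related HR feature would land in the high risk tier and trigger the extra "guidance and guardrails" step, while anything resembling workforce surveillance is rejected outright - which matches the tiers Trindel describes above.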

Trindel says that with this risk framework in place, development can shift gears too:

When you have frameworks like this across product and technology, that speeds up our ability to develop, because you're not just wondering what you should or shouldn't build here, going over it again and again, within your own team - you've got a team that's got your back to help figure it out.

Customers react to Workday's generative AI approach

Of course, there is more to responsible AI than risk assessment (accuracy and explainability come immediately to mind). But for now, the burning question is: what do Workday's customers think about Workday's "measured" RAI approach? I had a chance to ask three of them last week, via the assembled customer panel. Stacy Davis, CPA, VP and Assistant Controller at Blackbaud, noted that keeping an element of human oversight is crucial for them:

We're careful about how we want to use AI; it's human-centric for us. I think someone mentioned earlier, it's kind of nice [for the system] to present something, and then a human reviews it. We're in the process of implementing with a Workday partner who was mentioned yesterday - Auditoria AI.

They're going to help us in our collections phase, automating some of those routine transactions. We're interested in other things - I think there's a [Workday] release coming out that provides data for variance analysis. Now, it still requires a human to understand the 'why' - it can present the 'what'. So I think we're interested; we're happy Workday is being thoughtful, because we're being thoughtful as well.

Comments from Lynn Rice, SVP Chief Accounting, and Rich Lappin, AVP of HR Connect at Unum, served as a reminder that some useful aspects of AI are already available in Workday products, via well-tested machine learning scenarios like anomaly detection:

We're very similar to what Stacy said. We certainly are interested in learning more about [gen AI] use cases, especially on the finance side. Having a partner like Workday also pushes that towards us. That's helpful for sure. But it's something that we definitely want to take our time with.

The whole discussion around trust and Responsible AI resonated. Some of the areas we're looking at: some of the functionality Workday has added around anomaly detection, journal insights, invoice automation. If we're able to utilize that functionality, that can make a big difference. It puts you on the offensive.

The Unum team brought up the other essential aspect of responsible/trustworthy AI: output accuracy. Data quality plays a key role:

When you're closing out, it's super important for the data to be accurate, and be on time... The more we can get ahead of issues, that's where we want to be.

Human oversight is a key component of responsible AI, but as Kyle Arnold, CHRO, Bon Secours Mercy Health pointed out, there are scenarios where no human is available. Can a well-designed AI system fill the gap responsibly? Sometimes a copilot or digital assistant may be useful. Other times, a well-trained bot may be your 24/7 presence, when no human is around to help:

From just working on HR, what we're most excited about is generative AI, and how we can better support our associates without a human touch. There's still going to be a human touch, but there's also an AI. So we're really excited about that, and always just being there 24/7 for the clinicians. Our workers are 9 to 5. HR - we're not seven days a week, but healthcare is 24 hours a day. So anything that we're going to do in generative AI is huge for us.

Arnold says that Workday's "methodical" AI approach is better than chasing shiny new tech toys:

I would like to add that I appreciate Workday's stance and their methodical approach to AI and generative AI, as they're trying to figure out how to do this well, how to do this responsibly, and how to do this with trust. We're doing that ourselves. There's always the race for the next big thing out there, but Workday is actually moving at the pace that we are willing to adopt it, and they're working in the same way we are. Again, while there's always that race for the next big 'Aha,' I actually think it's the ability of the platform and the thoughtful approach to AI - and we really appreciate that.

My take - AI trust lies in the specifics

Explainability is not a strength of any type of deep learning AI - generative AI included. But Trindel says Workday is making strides here. She showed us screenshots where Workday is embedding AI notices and documentation "at the point of user interaction" with Workday systems.

Some of the guidelines and guardrails that we have for the higher risk technology would be: you have to give notice in your interface to workers, during their interaction with AI. So you can see that indicates what's AI, and then you can see instances where we're showing here in the user interface where these AI outputs were derived from... There's explainability at the point of user interaction; there's also explainability, documentation and other types of things.
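
As a rough illustration of that pattern - my own sketch, not Workday's implementation - an application can carry the notice and provenance alongside the generated value itself, so every surface that renders the output can also render the disclosure. All names here are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AIOutput:
    """Hypothetical envelope wrapped around any AI-generated value, so the UI
    can show a notice and provenance at the point of user interaction."""
    value: str                                        # the AI-generated content
    is_ai_generated: bool = True                      # drives the in-interface notice
    sources: list[str] = field(default_factory=list)  # where the output was derived from
    documentation_url: str | None = None              # link to fuller model documentation

def render_with_notice(output: AIOutput) -> str:
    """Attach the user-facing disclosure to the output before display."""
    if not output.is_ai_generated:
        return output.value
    derived = f" Derived from: {', '.join(output.sources)}." if output.sources else ""
    return f"{output.value}\n[AI-generated.{derived}]"
```

The design point is simply that the disclosure travels with the output - a generated job description, say, arrives at the UI already tagged with what produced it - rather than being bolted on screen by screen.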

I won't lie: I went into this event with a bit of a chip on my shoulder. I've just heard too many feel-good generic statements about ethical AI this spring. Mueller and I got into a back and forth on this during our video; Mueller isn't sure the AI ethical talk in our industry is going to hold up in the longer term. Point taken, but what this really comes down to isn't just ethics, but trust. "Responsible AI," as I see it, is the sum total of all your AI endeavors, as they contribute to customer trust.

I happen to think that output accuracy and consistency is the most important part of this - a topic I was able to get more specifics on, via Workday's generative AI architecture. That will have to wait for another installment.

Workday's self-described "measured" approach to gen AI should not be taken as a shortfall in AI development. I'll save the details for now, but Workday is pushing into gen AI across development and product functionality. Three main feature areas jump out: findability (search), assistance (e.g. co-pilots), and content creation (in particular automation of high volume aspects like job descriptions). Workday is using machine learning suggestions and automation to expedite an infusion of new UX "modernization" features; 280 tasks have been upgraded on the UX side. Workday says they are on track for 10x this year (more than 2,000 tasks across the platform).

During the event, every time I looked around, it seemed like there was an AI expert or Responsible AI team member to talk with me. This was no accident; Angela Barbato's Workday analyst relations team has a very clever way of making sure that whoever is on site is matched up with the right Workday leaders. When it comes to vendor assessments, nothing defuses me more than talking to people who know their stuff inside and out.

A big part of getting AI right is having a team that brings big picture experience. Kelly Trindel is just one example on a diverse team - her deep background in workplace discrimination law and AI public policy informs these conversations. Perhaps that contributed to the type of Workday customer sentiment I quoted in this piece.

No sugar-coating: Workday's AI success will ultimately be judged by user adoption and the market's bottom line. But as more AI regulation comes to the US, Workday is about as ready as a vendor could be; they are in plenty of these public policy discussions already.

There's a strong case to be made that with AI especially, trust is a much bigger factor than in other emerging tech we've seen before. On a more practical note, Workday also has some of the better AI pricing policies in the industry - another component of earning (or losing) AI trust that many software vendors are not getting right. It's a topic that is really bothering me, as monetizing customer data is a pretty ironic move for enterprise software vendors. I'll pick that up next time.
