Just because generative AI is trendy right now doesn’t mean it has to be applied to every application. But try telling that to Workday.
The enterprise management platform vendor today announced a suite of new generative AI features aimed at “increasing productivity” and “streamlining business processes.” Soon, Workday customers will be able to automatically compare signed contracts against contracts in Workday to surface discrepancies, create personalized knowledge management articles and generate statements of work for service procurement.
The announcements were made at Workday Rising, Workday’s annual customer conference, which is taking place in San Francisco this year.
Some of the additions seem genuinely useful — or harmless at worst. But one is a bit concerning to this reporter: AI-generated employee work plans.
“Managers [will be able] to quickly create a summary of employees’ strengths and areas of growth, pulling from stored data including performance reviews, employee feedback, contribution goals, skills, employee sentiment and more,” Workday writes in a press release.
I see a few problems with this.
Studies have shown that text-analyzing AI can exhibit biases against people who use expressions and vernacular that fall outside the “norm” (i.e. the majority).
For example, some AI models trained to detect toxicity see phrases in African-American Vernacular English (AAVE), the informal grammar used by some Black Americans, as disproportionately “toxic.” And Black Americans aren’t the only minority group that suffers. In a recent study, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models.
So what if Workday’s models fail to understand the nuance of a performance review or employee feedback because of how it’s written, leading the models to draw the wrong conclusions about someone? Good question.
Then, there’s the “employee sentiment” piece.
Biases can once again rear their ugly head in AI models trained to detect sentiment from a sentence. Research has shown that, for instance, text-based sentiment analysis systems can exhibit prejudices along racial, ethnic and gender lines — e.g. associating Black people with more negative emotions like anger, fear and sadness.
In response to these concerns, Workday says that it’s “transparent about how its AI models are designed” (albeit unwilling to reveal the exact data used to train its models) and built the work plan feature to show managers “how the data inputs contribute to a strength or area of growth.”
“As with other Workday generative AI use cases and our human-in-the-loop approach, users are encouraged to review the results as a strong first draft that they should edit, iterate on and finalize,” Shane Luke, head of AI and machine learning at Workday, told TechCrunch via email. Let’s hope that managers using Workday heed that advice.
As for Workday’s other new generative AI features, they’re less problematic on their face.
AI-generated job descriptions in Workday leverage information already stored in the platform, including the skills needed for a role and job location details, to simplify the process of writing job listings. This reporter was initially worried that Workday might be training its description-generating models on HR employees’ copy without their knowledge or permission, but Luke assures me that this isn’t the case.
“We don’t train on dedicated job description data,” he said. “Workday customers control and configure how their data is used for AI and machine learning purposes, including whether data is used for training purposes … We make it clear from the product fields what data is being used in generation, and users are encouraged to evaluate responses as first drafts and apply their own judgment.”
Elsewhere, Workday will soon be able to automatically craft “past due” notices with recommendations on the tone of the correspondence, driven by how late a customer is or how often they’re late. (Finance teams will be able to use the capability to automate letters in bulk, Luke says.) And procurement leaders will be able to get suggestions for relevant clauses to include in procurement contracts, depending on the type of project, project location and type of deliverables.
With the aforementioned contract analysis feature, powered by generative AI, Workday customers will be alerted to potential errors in contracts and receive proposed corrections. And with the knowledge article generation feature, users will be able to draft articles like talking points for managers and takeaways from company videos with suggestions on tone and length. (Workday stresses users are more than welcome to ignore those suggestions.)
Developer Copilot marks Workday’s first foray into generative coding, introducing text-to-code capabilities to Workday Extend, its platform for creating custom apps that run on Workday. Developer Copilot — embedded in Workday’s app builder — delivers “contextually aware” code suggestions for Workday apps, complete with curated content and search results, similar to code-generating services like GitHub Copilot and Amazon CodeWhisperer.
And — piggybacking on the popularity of ChatGPT — Workday is piloting a number of conversational AI experiences. Luke says that they’ll “enhance users’ ability to interact with information and tasks” in a natural way, taking advantage of generative AI capabilities such as summarization, search and context maintenance.
“We believe that when used responsibly, generative AI can drive impactful business outcomes,” Luke said. “Fundamentally, our AI approach is focused on human-machine teaming. The user is always the final decision maker and moderator.”
Workday expects the new generative AI features to launch within the next six to 12 months. Unfortunately, it didn’t provide a more specific timeline than that.