How startups can use generative AI from ideation to implementation

The day ChatGPT debuted, this transformational technology captured the imaginations of business leaders and changed decision-making for good. Today’s C-suite sees incredible upside in generative AI. Projected to drive a roughly $7 trillion increase in global GDP and lift productivity growth by 1.5 percentage points, generative AI and its tangible economic consequences have reshaped business priorities for decades, and potentially generations, to come.

ChatGPT and other generative AI technologies have opened the door to breakthrough thinking across all industries. At the same time, some technology and business leaders are weighing the unintended consequences, in particular the “hallucination” problem. Sometimes ChatGPT’s hallucinations are innocuous and easily corrected by improving training data or adding a human into the loop. As the world races to adopt this technology, we have to keep working to reduce error rates and hallucinations.

Above all else, financial decision-making and compliance are predicated on data accuracy and confidence in the information. So, while it’s annoying to have ChatGPT generate a wrong answer for noncritical prompts, data errors across an investment portfolio could translate to lost revenue, missed regulatory filings and a complete distrust of the technology.

Fortunately, technologists can take a step back and ask the following questions to unlock the potential power of generative AI.

Do we have a phased approach?

Generative AI will have far-reaching consequences across a business’s workflow and the products it brings to its customers. R&D and go-to-market teams should follow a playbook so every part of the organization can innovate responsibly and efficiently. To start, tech teams must take a “square one” approach and examine their use cases, infrastructure needs, goals, and next steps.

What use cases can be solved by using gen AI technologies?

As we automate internal workflows and provide new functionality to our customers, what types of processes, such as human-in-the-loop review (sketched below), can we put in place to ensure customers get accurate responses to their queries?

Do we have talent with the essential skills in transformer-based models? Do we have a comprehensive view of compliance restrictions?

This represents only a fraction of the questions and planning involved in generative AI development and deployment. Technology teams have much to gain from a strategic plan that guides their AI use case development, rather than answering these questions as they go.
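To make the human-in-the-loop question concrete, here is a minimal sketch of a review gate that routes low-confidence or high-stakes answers to a person before they reach a customer. The function names, confidence score and keyword list are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of a human-in-the-loop gate for generated answers.
# All names, thresholds and keywords here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class DraftAnswer:
    text: str
    confidence: float  # 0.0-1.0 score supplied by the generation pipeline


def answer_query(
    query: str,
    generate: Callable[[str], DraftAnswer],
    send_to_reviewer: Callable[[str, DraftAnswer], str],
    high_stakes_terms: tuple = ("filing", "portfolio", "compliance"),
    confidence_floor: float = 0.8,
) -> str:
    """Return the model's draft only when it clears both gates; otherwise
    route it to a human reviewer before it reaches the customer."""
    draft = generate(query)
    high_stakes = any(term in query.lower() for term in high_stakes_terms)
    if high_stakes or draft.confidence < confidence_floor:
        # A person approves, edits or rejects the draft before release.
        return send_to_reviewer(query, draft)
    return draft.text
```

Gating on both topic and confidence keeps routine queries fast, while anything touching filings or portfolios always gets a second set of eyes.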

Are we correctly ring-fenced for internal and external stakeholders?

Fintech companies need to think long and hard about who has access to what data and how to oversee that access. ChatGPT’s propensity for mimicking human interaction, digesting massive amounts of data and boosting productivity deepens our engagement with this technology, but blurring the lines between access levels can create compliance issues. For example, internal stakeholders should have appropriate access to sales, marketing and R&D data, while external users should not. If confidential client data leaks to unauthorized internal users, data in the wrong hands can result in unintended disclosure, security breaches, noncompliance and potential fines.
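One way to make that ring-fencing concrete is to filter data by classification before it ever reaches a prompt or an index. The labels and roles below are hypothetical; a real deployment would tie them to the firm’s existing entitlement system.

```python
# Illustrative sketch of ring-fencing data before it reaches a model prompt.
# The roles, classification labels and filter rules are hypothetical.
from dataclasses import dataclass

INTERNAL_ONLY = {"sales", "marketing", "r_and_d"}
RESTRICTED = {"client_confidential"}


@dataclass
class Document:
    text: str
    label: str  # data classification applied upstream, e.g. "sales"


def documents_for(role: str, docs: list) -> list:
    """Return only the documents a given role may expose to the model."""
    allowed = []
    for doc in docs:
        if doc.label in RESTRICTED:
            continue  # client-confidential data never enters shared prompts
        if doc.label in INTERNAL_ONLY and role != "internal":
            continue  # external users never see internal business data
        allowed.append(doc)
    return allowed
```

Keeping the check in one place also makes it auditable: who saw what becomes a function of role and label, not of whichever prompt happened to be written.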

Is our technology transparent and accurate?

Since AI’s arrival, one of the foremost concerns businesses and skeptics have put forward about the technology revolves around the transparency and explainability of its decision-making. Generative AI has added more urgency to the push to change AI from a “black box” into a “glass box,” especially concerning financial reporting.

Accuracy is of the utmost importance in the financial world. Transparency lets customers verify that data and reporting are accurate, and it aligns with compliance as regulators place more emphasis on explainability and oversight. Fintech companies can look to international frameworks such as the EU’s Artificial Intelligence Act for guidance on the rules around more transparent AI and the disclosures around AI-generated content, and use them to anticipate how similar standards might take shape in the U.S.
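One small, practical step toward that transparency is attaching an audit record and an explicit disclosure to every generated output. The field names below are assumptions about what such a record might contain, not a prescribed format.

```python
# Sketch of an audit record attached to every AI-generated output so that
# disclosures and traceability are built in. Field names are assumptions.
import datetime
import json


def audit_record(prompt: str, response: str, model_name: str, data_sources: list) -> str:
    """Package a generated response with the context needed to explain it later."""
    record = {
        "generated_by": model_name,    # which model produced the output
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,              # what was asked
        "response": response,          # what the model returned
        "data_sources": data_sources,  # which datasets informed the answer
        "disclosure": "This content was generated with the assistance of AI.",
    }
    return json.dumps(record, indent=2)
```

Stored alongside the report or message it describes, a record like this is what lets a compliance team reconstruct why a number appeared where it did.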

For generative AI to mine deeply buried patterns across enterprise data and synthesize accurate answers, tech leaders have to train their AI on accurate data. Decision-makers everywhere need confidence in the data powering generative AI and the outcomes it yields.
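In practice, that starts with a data-quality gate in front of the training or retrieval pipeline. The sketch below assumes hypothetical record fields and trusted sources; the point is that only complete, well-formed records from systems the firm trusts should ever reach the model.

```python
# Minimal sketch of a data-quality gate applied before records are used to
# train or ground a model. Fields, sources and checks are illustrative only.
def is_fit_for_training(record: dict) -> bool:
    """Accept a record only if the fields the model depends on are present and sane."""
    required = ("account_id", "value", "as_of_date", "source")
    if any(record.get(field) in (None, "") for field in required):
        return False  # incomplete records invite hallucinated fills downstream
    if not isinstance(record["value"], (int, float)):
        return False  # malformed numbers are worse than missing ones
    return record["source"] in {"general_ledger", "custodian_feed"}  # trusted systems only


raw_records = [
    {"account_id": "A-100", "value": 2500.0, "as_of_date": "2024-03-31", "source": "general_ledger"},
    {"account_id": "A-101", "value": "n/a", "as_of_date": "2024-03-31", "source": "spreadsheet"},
]
clean = [r for r in raw_records if is_fit_for_training(r)]  # keeps only the first record
```

None of this removes the need for human judgment, but it gives decision-makers a defensible basis for the confidence the technology demands.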