How Did Companies Use ChatGPT in 2023?

Business use cases for generative AI: the good, the bad, and what to expect in 2024



OpenAI launched ChatGPT just over a year ago. Prior to its release, even the tech community was only half-paying attention to large language models. GPT-2, GPT-3…barely anyone cared. Now, it feels like AI chatter is everywhere.

Is it all just talk? Are companies really using AI? What do they do with it? FinText is very nosy about these questions.

All year, we’ve been paying close attention to how businesses were using ChatGPT. We took notice whenever companies messed up, and whenever they scored a win. Taken collectively, the first year delivered an astonishing amount of change.

Today, we’ll breeze through the successes, the pitfalls, and the patterns we’re seeing in businesses’ adoption – one year in.

Right after the big bang

FinText is not unbiased: well before ChatGPT’s release, we had been using OpenAI models in our own product, to classify investment articles by topic.

But as consumers rushed to try ChatGPT, we believed this would quickly create a conflict between companies and their own employees. Back in January, we said:

We’re seeing three things happening. For a start, some companies are burying their heads in the sand, pretending this has nothing to do with them.

Others, currently the minority, are grabbing the bull by its horns and working out how they can safely use generative AI in their day-to-day. (Disclaimer: we provide training to investment marketing teams on productively using large language models.)

Lastly, and most interestingly, we see employees using it on their own initiative. They do so mostly privately. And they use it to cut down the workload, or tackle urgent, unexpected tasks.

This year, we’ll see some companies move from the first group to the second, and a whole lot of companies – unwittingly – move to the third.

True to our word, the financial sector’s knee-jerk response was to ban ChatGPT outright. Their concerns mostly stemmed from sharing sensitive internal data with an unvetted external provider.

JPMorgan was first but others quickly followed, including Citigroup, Goldman Sachs, Deutsche Bank, and Wells Fargo.

What success looks like in AI adoption

Some companies didn’t just grab the bull, they became zealot ranchers:

FactSet gave its employees access to generative AI tools by wrapping their own user interface around OpenAI’s models – they called it Chat. All employees, not just engineers, were encouraged to participate in an internal hackathon to discover ways the company could put AI to use.

Within 48 hours, 1,000 employees were using Chat, and that number later grew sixfold – to roughly 50% of the company’s workforce. Best of all, the company could track what its employees were using the tool for.

They bucketed the user questions into key themes (including technical, finance, communication, research and personal). Personal questions, incidentally, amounted to less than 2% of the usage.
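FactSet hasn't published how it bucketed questions, but the idea is easy to sketch. Here's a minimal, hypothetical version using keyword matching over a usage log; the theme names echo the article, while the keyword lists and sample questions are invented for illustration (a real system might well use a language model as the classifier instead):

```python
from collections import Counter

# Hypothetical keyword lists per theme -- invented for illustration only.
THEMES = {
    "technical": ["python", "sql", "error", "code"],
    "finance": ["bond", "equity", "yield", "portfolio"],
    "research": ["summarise", "summarize", "compare", "explain"],
    "personal": ["recipe", "holiday", "gift"],
}

def bucket(question: str) -> str:
    """Assign a question to the first theme whose keywords it mentions."""
    q = question.lower()
    for theme, keywords in THEMES.items():
        if any(k in q for k in keywords):
            return theme
    return "other"

# Tally themes across a (made-up) usage log.
questions = [
    "How do I fix this SQL error?",
    "Summarise this earnings call",
    "Best gift for a colleague?",
]
print(Counter(bucket(q) for q in questions))
```

The payoff is the tally at the end: once questions land in buckets, a company can see at a glance what its workforce actually uses the tool for – which is exactly the visibility FactSet gained.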

Employees reported as much as a 20% improvement in research time when working in an unfamiliar area. FactSet, which had spent only about $5,000 on AI and cloud costs, is delighted.

The immediate trap: poor automation

Make no mistake: to most companies, the irresistible appeal of automation is in cutting back on operating expenses, especially wages. No surprise, then, that companies rushed to automate work this year. And they discovered things aren’t quite so simple.

Sports Illustrated began publishing articles written by fake, AI-generated writers, but once questioned on the matter, these authors mysteriously vanished from the website. The publisher, The Arena Group, remains tight-lipped about their use of AI-generated content.

In this case, the scoop was leaked by a person tasked with creating the fake reviews. But in a second instance, it was employees who sensed something was off:

Writers and editors at Reviewed, a product recommendation website, say that the company used artificial intelligence to create reviews. Gannett, the parent company of Reviewed, has denied using artificial intelligence to write product reviews.

The choice to involve employees in automation efforts (the way FactSet has) isn’t about guaranteeing job security. It’s the company recognising that job automation is a tricky effort, and may take a while to nail.

The definitive AI use-case: ‘Fuzzy to database’

Even though it’s only been a few months, it’s already becoming clear that the low-hanging fruit in AI adoption lies wherever businesses can reliably turn fuzzy inputs into database entries or queries.

Think about what this means: humans hate inputting data into systems, and they’re not that good at it – results are often error-prone even for simple, repetitive tasks.

The reason humans do it to begin with is because the input is fuzzy. A tiny bit of judgment is required to transform an input into what “the system” can accept.

For example, Hiscox teamed up with Google to create an AI model for automating the core underwriting process for specialist risks. The AI digests the draft contract provided by the broker. It then passes the information to an existing system to determine whether Hiscox will take on the risk. Finally, the AI writes up a nice summary.
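The middle step of a pipeline like this – fuzzy text in, database-ready record out – can be sketched in a few lines. Everything below is hypothetical: the field names are invented, and `call_llm` is a stand-in for a real model API call, returning a canned response so the sketch is self-contained:

```python
import json

# Fields our (invented) downstream system requires.
SCHEMA_FIELDS = {"broker", "risk_type", "coverage_limit"}

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; a production version would send
    # `prompt` to a hosted LLM and return its text output.
    return ('{"broker": "Acme Brokers", "risk_type": "marine cargo", '
            '"coverage_limit": 2000000}')

def fuzzy_to_record(text: str) -> dict:
    """Ask the model to emit JSON matching our schema, then validate it."""
    prompt = (
        "Extract broker, risk_type and coverage_limit from this draft "
        f"contract as JSON:\n{text}"
    )
    record = json.loads(call_llm(prompt))
    missing = SCHEMA_FIELDS - record.keys()
    if missing:
        raise ValueError(f"model output missing fields: {missing}")
    return record

record = fuzzy_to_record("Acme Brokers asks us to cover marine cargo up to $2m…")
print(record["risk_type"])  # marine cargo
```

The validation step matters: the model handles the judgment call a human used to make, but the existing system still only accepts well-formed records, so the sketch checks the output against the schema before passing it on.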

In Hiscox’s case, the desired outcome is to do more with current resources. But sometimes, doing more ends up making the product itself better:

Sweden’s biggest daily news outlet, Aftonbladet, used generative AI to create summaries, and discovered audiences spend longer reading articles that have summaries than those without.

Competing for attention is a sinking ship

The problem marketers have is that generative AI fundamentally changes attention economics. To see exactly how things will play out, look no further than markets where attention equals money, i.e. the media business.

In previous years, the Google-Facebook ad duopoly had killed a slew of small- and medium-sized media outlets; this year, ChatGPT showed the bigger ones are no longer safe:

MailOnline – one of the UK’s most popular news sites – announced it will implement a paywall for a small selection of articles starting next year, in an effort to boost revenue for the Daily Mail. With almost 24 million monthly users, the MailOnline has traditionally relied on strong ad revenue, and was always free. Not anymore.

Longevity is no panacea. Popular Science, the beloved 151-year-old magazine, is coming to an end. National Geographic and Gizmodo have also been dismantling their science-focused teams.

Digital natives fare no better. BuzzFeed continued to falter, and was in talks to sell Complex Networks for a meagre $140 million, down from the $300 million it paid just two years ago.

If nothing else, publishers have become more vigilant, and many began exploring a licensing arrangement with OpenAI. Already, at least two deals have been struck:

First, The Associated Press signed a two-year deal that will let OpenAI train its generative AI tools on the news agency’s historical content. Then, Axel Springer announced a global partnership to allow users of OpenAI’s ChatGPT to access summaries of selected news content from its news outlets, including POLITICO and Business Insider. Again, OpenAI will use Axel Springer’s content to train their language models.

We’re just getting started

What you don’t want to do with tech is fixate on specifics. Remember how excited Steve Jobs was about the mouse? Point and Click! Awesome and transformative as it was, we bought in and moved on.

The companies that took action this year are some of the largest in the world in their fields – and they were chiefly responding to ChatGPT (and the models powering it). What’s still to come is the response to all the progress that has happened since.

We expect OpenAI to remain the default choice for experimentation, mostly due to its aggressive cost subsidies. Running large language models remains expensive and cumbersome, and OpenAI takes much of this pain away.

But concerns over data safety aren’t going away – we’ll see more use of local models. We’ll also see more licensing deals struck between content owners and commercial LLM companies.

Large language models aren’t going away anytime soon. We expect the shake-up in industry adoption in 2024 to be orders of magnitude more pronounced than anything we’ve seen this year.