Q&A: How Thomson Reuters used genAI to enable a citizen developer workforce

Thomson Reuters spent years building an AI platform to cull through massive troves of data and documents for its legal, global trade and compliance clients. But when generative AI came along, the company was forced to up its game.


"So, how do you do things at pace? What I didn’t want to do with the platform was have each development team figure out on its own how best to safely and securely access the LLMs, and the content that feeds them, in a responsible way. Because even with the best of intentions, if they’re not doing it by design, something could go wrong.

"This is where the genAI platform comes in. By having building blocks, we can ensure we are doing things in the right way. Building blocks ensure data residency and data privacy are respected. Building blocks ensure ethical concerns are being evaluated against the models we create. By doing all that in the platform, now all I have to do is tell these developers, ‘You have to use the platform.’ If they use the platform, then I know they have privacy, security, safety all built in by design."
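The "building blocks" idea described above can be sketched as a gateway that every team routes LLM calls through, so residency and privacy checks happen by design rather than by each team's discipline. This is an illustrative sketch only; the class, function, and region names are hypothetical, not Thomson Reuters APIs.

```python
# Hypothetical platform "building block": all LLM access goes through this
# gateway, so data residency and PII redaction are enforced by design.

ALLOWED_REGIONS = {"us-east", "eu-west"}  # hypothetical residency policy

class PolicyViolation(Exception):
    pass

class LLMGateway:
    def __init__(self, region: str, redact_pii):
        # Residency is checked once, at construction time.
        if region not in ALLOWED_REGIONS:
            raise PolicyViolation(f"data residency: {region} not allowed")
        self.region = region
        self.redact_pii = redact_pii  # callable applied to every prompt

    def complete(self, prompt: str, model_call) -> str:
        # Redaction runs before the prompt ever leaves the platform.
        safe_prompt = self.redact_pii(prompt)
        return model_call(safe_prompt)

# Usage: a team supplies its model call, but cannot skip the checks.
gateway = LLMGateway(
    "eu-west",
    redact_pii=lambda p: p.replace("SSN 123-45-6789", "[REDACTED]"),
)
reply = gateway.complete(
    "Summarize case for SSN 123-45-6789",
    model_call=lambda p: f"echo: {p}",
)
```

The point of the design is that developers only have to be told "use the platform"; the safety properties travel with the building block, not with each team's code.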

How have you trained your tech and business workers to use genAI? "Change management is just as important to us as it is to our customers. So we went through a process of creating foundational AI training for every member of our company. This is training we built with our Thomson Reuters data science experts and our technology experts, geared toward a broad audience. So these were fundamentals that we thought everybody needs to know in order to best serve our customers. But then we created bespoke training programs for specific parts of the company.

"As an example, in my development organization, we rolled out a much more extensive AI training that was aimed at developers. These are people building products, so obviously AI training is going to go into greater depth than that foundational training. It’s geared toward someone who’s going to be building an AI application. We then rolled that out across the entire team and we track progress to ensure everyone gets through that training.

"And we have similar types of targeted training for other segments of the organization. What a salesperson in front of our customer is going to need to know about AI is going to be different from what a developer on my team will need to know about AI, but they both need to know about AI. So, we’re investing a lot in training and development."

When did you start the AI training and how did you execute it? "We have dedicated training days. Last year, we had a dedicated AI training day. And that’s not the first time we’ve done that. While we change the material to adapt it to the AI world, AI training is not a new concept for us. We’ve had multiple times over the years where we’ve had our AI experts create training material for us. That’s done in the context of our customers and our business.

"Then we built new training modules to help with training on generative AI models. We created those at the beginning of 2023 as it became something with more broad interest. But the other AI training models we’ve been using for years."

How many training modules did you create? "There’s so many levels of AI training available to our employees, you could spend weeks trying to get through all of it. Some of it is optional and some of it is mandatory. It’s not optional [as a whole]. The only optional element to it is what level of your PhD do you want to achieve through the training?

"The AI Skills Factory goes back to the generative AI platform we have for clients. So, the Westlaw Research [tool] is part of the AI Skills Factory. They’re all built off this genAI platform. So, the Skills Factory – especially that low-code, no-code environment – [is] where we can develop these new AI skills to rapidly bring them to market."

What are the costs, the power requirements and the time that goes into building an AI platform like yours? Do you train up your own LLMs? "There are a variety of ways you can train an AI model. The largest models in the world are built by providers who invest the time and resources to build those gigantic models that can serve virtually any purpose in the world. That’s not something we’d build ourselves. We’d access those models ourselves just like any of our customers would.

"On us building our own models, we experiment with a variety of things. This goes back to R&D that I was talking about earlier. We experiment with using off-the-shelf models. We experiment with building our own models, which will not be as large as those gigantic models that come from the hyperscalers of the world.

"Again, all of it comes back down to the fact that there will not be a one-size-fits-all in the future. I imagine a future where, depending on the customer problem, we’re going to employ a different kind of model. I suspect our content and subject matter expertise will allow us to provide unique value with custom-built models, but they’re not going to be the size of these gigantic models you see from hyperscalers.

"For us the sweet spot is discovering the smallest model that provides the best response to the customer’s problem. And the smaller the model, the more efficient it is in many aspects, like run time, like cost, like efficiency in all its forms. That’s what our R&D function does. It helps us identify how to apply our content and subject-matter expertise to the smallest model that will solve that problem for that customer."
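The "smallest model that solves the problem" sweet spot can be sketched as a simple selection loop: try candidate models from smallest to largest and keep the first one whose quality on a domain evaluation set clears a bar. The model names, sizes, and scoring function below are hypothetical placeholders, not actual Thomson Reuters models or numbers.

```python
# Illustrative sketch: pick the smallest candidate model whose evaluation
# score meets a quality threshold. All names and scores are made up.

def pick_smallest_model(candidates, evaluate, threshold=0.9):
    """candidates: list of (name, size_in_params); evaluate: name -> score."""
    for name, size in sorted(candidates, key=lambda c: c[1]):
        if evaluate(name) >= threshold:
            return name  # smallest model that clears the bar
    return None  # nothing is good enough; fall back to a larger model

# Toy quality scores, as if measured on a domain-specific eval set.
scores = {"small-1b": 0.72, "medium-7b": 0.93, "large-70b": 0.95}

best = pick_smallest_model(
    [("large-70b", 70e9), ("small-1b", 1e9), ("medium-7b", 7e9)],
    evaluate=scores.get,
)
```

Here the mid-size model wins: it is the smallest one that meets the threshold, so it is preferred over the larger model despite the larger one scoring slightly higher.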

Your November announcement mentioned a multi-year strategy. Where do you see AI at Thomson Reuters going? "I think you’re going to see the pace at which we deliver go up. We had one product release in November, we have a couple more coming up and we had our acquisition of Casetext.

"I go back to speed again. I think what you’re going to see is that this isn’t something that’s ever going to be a point solution where we have one generative AI solution. In that diagram I shared, there’s something called AI Assistant, and its generative AI power is something that exists across all of our products ... where it’s helping you get more value, leverage those AI skills, and solve the problems you’re trying to solve more effectively. And we’re going to keep upping the pace at which we develop those skills and we integrate them into products.

"You’ll see more and more of that as we move ahead. That’s why that investment in the foundational platform was so important at this time, because we feel that will give us a differentiated advantage in the future."

Your announcement stated Thomson Reuters will invest $100 million in AI. Is that this year or multiple years into the future?  "That’s what we’re organically investing in building our own AI solutions. That is a minimum of $100 million we’re going to put in, that’s the build part of our strategy. But I also want to talk about our recent acquisition of Casetext. If we see something that’s a great fit with Thomson Reuters in every way — culturally, technologically, and most importantly that can help solve customer problems — we’ll make acquisitions as part of our buy strategy.

"Then, there’s the partner strategy, as well. We recently announced a partnership with Microsoft, where back during their Build conference we were one of the first organizations to talk about what an integration with Microsoft Copilot could look like. So we have teams hard at work on that vision [of] helping lawyers more efficiently draft contracts in Microsoft Word by leveraging Copilot add-ins; we’re busy making that a reality.

"So, you’ll see us looking at all three of those angles: the $100 million to build; where we can buy; and we’ll continue to partner where we can use that to help service our customers."

Copyright © 2024 IDG Communications, Inc.
