John Williams Revolutionizes Customer Experiences at Amplience with GenAI

John Williams, CTO at Amplience

John Williams is responsible for Amplience’s research and development and has a track record of matching technical innovation with business strategy. John was previously CTO and Head of Technology at LBi, the global digital agency, and has an MBA from Imperial College.


Our customers use Amplience to deliver content to their websites, their mobile sites and, in fact, any digital touchpoint. One of the biggest challenges we've faced is that the macro environment is affecting our customers and, therefore, everyone else in the ecosystem, and what customers want from Amplience now is very different from what they wanted before.

We've been working with DataStax for twelve years now, right from the inception of the product, and we're looking at using DataStax to help drive our AI strategy, using database technology and capabilities like vector search to support generative content.

With the emergence of generative AI, it was crucial that Amplience have an AI strategy, because what really mattered was not bolting AI features onto the platform; it was delivering value to customers using AI. So we had to truly understand the problems our customers were facing and how we could solve them with things like generative content.

Generative content has to be applicable, meaning it solves a real problem. It has to be contextual, meaning it fits the industry context and is the content that customers actually want to generate.

It also has to have high efficacy, meaning it actually delivers the outcome customers want. This is where I believe DataStax fits in. There are three ways you can deliver those kinds of results.

The first is great prompt engineering. The second is fine-tuning a model. But crucially, the middle piece is retrieval-augmented generation (RAG). To deliver RAG, you need really good vector search and a vector database, which is exactly what DataStax delivers.

The reason RAG is so important is that it delivers context. And the reason context matters so much in retail and e-commerce is that things like product information change regularly.

If your business is seasonal, that product information is going to change, and baking it into an LLM is not the right approach. To deliver that context, you put the information into a vector store, retrieve the relevant pieces, and feed them into the LLM, so you get contextual generation of content. We chose DataStax because we needed a fully scalable, highly available database. We have customers loading a million images every week, and many more loading tens of thousands of images every week and every month.
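The retrieval flow described above can be sketched in a few lines of Python. This is a toy, self-contained illustration, not Amplience's or DataStax's actual implementation: the hashed bag-of-words "embedding" and the in-memory `VectorStore` class are stand-ins for a real embedding model and a real vector database, and the product entries are invented examples.

```python
import math
import re
import zlib

def embed(text: str, dims: int = 256) -> list[float]:
    # Toy hashed bag-of-words embedding, normalized to unit length.
    # A real pipeline would call a proper embedding model instead.
    vec = [0.0] * dims
    for word in re.findall(r"[a-z0-9]+", text.lower()):
        vec[zlib.crc32(word.encode()) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

class VectorStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

# Index current product information; when the season changes, you
# re-index here instead of retraining or re-prompting a model.
store = VectorStore()
store.add("Aurora parka: waterproof winter coat, sizes S-XL, new this autumn")
store.add("Breeze sandal: lightweight summer sandal, sizes 36-42")
store.add("Trail runner: all-terrain running shoe with a grippy sole")

# Retrieve the most relevant product facts and build the LLM prompt.
context = store.search("write a product description for a winter coat", k=1)
prompt = ("Use only these facts:\n" + "\n".join(context) +
          "\n\nWrite a short product description for the winter coat.")
# `prompt` would now be sent to the LLM of your choice.
```

The point of the sketch is the shape of the flow, not the components: fresh product data goes into the store, retrieval pulls back only the facts relevant to the request, and the LLM generates from that retrieved context rather than from whatever it memorized at training time.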

We have half a billion images in our system right now, and we would not have been able to do that without DataStax.