
GoDaddy has 50 Large Language Models; its CTO Offers an Explanation (Ishraq Ahmed Hashmi)

 Why GoDaddy is Investing in AI with Large Language Models


GoDaddy has recently unveiled another chatbot, Airo, which can design company logos, websites, emails, and social media campaigns in a few seconds. Beyond helping customers build out their businesses, GoDaddy is weighing other ways to leverage AI, according to CTO Charles Beadnall.



A year ago, GoDaddy did not have a single large language model running against its backends. Today, the internet domain registrar and web hosting firm has more than 50, some powering customer-facing automation and others still in pilot development for internal, employee-facing uses.

The company’s first generative AI initiative was a bot that automatically designs company logos, websites, and email and social media campaigns for small businesses and organizations. It launched that tool, a customer-facing chatbot called GoDaddy Airo, earlier this year. With AI experiments driving innovation at GoDaddy, the company has set out to document its more than 1,000 AI experiments systematically, because “innovation without some kind of hypothesis and some kind of measurement is a novelty,” said GoDaddy CTO Charles Beadnall.

Beadnall has led GoDaddy’s engineering organization through its push into AI; below, he discusses that process and its hurdles with Computerworld.

Share your AI story and what others can expect. “We have been focused on AI for several years now, and we have used different flavors of it. AI is a big term with lots of different subcomponents: machine learning, generative AI, and so on.”

“What we have been working on for the last three to four years is establishing a common data foundation across our divisions, so that inputs from our various touchpoints and businesses improve our understanding of customer behavior. That, alongside a culture of experimentation, has allowed us to really leverage generative AI, look at the incremental uplift it delivers for our customers and our bottom line, and continue to evolve that cycle on a regular cadence.”

 




GoDaddy CTO Charles Beadnall (Credit: GoDaddy)

“We are all about driving outcomes, either for our business and our profit or for our customers, so we need a concrete hypothesis about what generative AI is going to provide to those. That is what has been developing over the last several years: using common data platforms, creating a culture of experimentation, and now putting generative AI into practice.”

What does it mean to say there should be tangible outcomes when implementing AI? “Ultimately, if you don’t have a way of measuring whatever you expect to deliver, the effort may well turn out to be successful, but you won’t know whether it is for your organization. It has therefore been very useful to have a controlled environment that lets us launch a new feature and then measure its outcomes against a control in an A/B test. If you don’t have some form of data, you can’t do that.”

Is your data prepared, or do you need to build new data or clean data repositories before using generative AI? “That raises the well-known proverb, ‘garbage in, garbage out.’ It certainly brings a host of implications with it, and it is an issue people should be concerned about. Much of the quality assurance on the large language models being sold is done by the vendors.”
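Beadnall’s point about measuring against a control maps onto a familiar A/B-testing pattern. The sketch below is a minimal, hypothetical illustration in Python, not GoDaddy’s tooling: users are bucketed deterministically, conversions are tallied per variant, and a two-proportion z-test judges whether the uplift is real. All names and numbers are made up for the example.

```python
# Minimal A/B measurement sketch: deterministic bucketing plus a two-proportion
# z-test on conversion rates. Hypothetical data, not GoDaddy's system.
import hashlib
import math

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "treatment" if digest % 2 else "control"

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z statistic for treatment (b) vs. control (a)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return ((conv_b / n_b) - (conv_a / n_a)) / se if se else 0.0

# Hypothetical experiment results: did the AI-assisted flow lift conversions?
control = {"visitors": 10_000, "conversions": 820}     # 8.20%
treatment = {"visitors": 10_000, "conversions": 905}   # 9.05%

print(assign_variant("user-123"))  # which bucket this user would land in
z = z_score(control["conversions"], control["visitors"],
            treatment["conversions"], treatment["visitors"])
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```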

“What we have done is create a unified entry point that communicates with all the different large language models at the backend, and right now there are over 50 different models; they can be for images, text chat, or anything else. That gateway is responsible not only for implementing the guardrails…, but also for examining the responses coming back from the LLMs to see if there is some pattern we need to know about suggesting it is not working as intended.

“Quite evidently, this space is moving at light speed. A year ago we had no LLMs; today we have 50. That should give you some idea of the pace of change. Different models will have different features, and that is something we have to keep an eye on. Nevertheless, by having that mechanism at our disposal, we can at least monitor what we are sending out and what is coming back.”
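For illustration, here is a rough sketch of the gateway pattern Beadnall describes: a single entry point that routes each request to a registered backend model, applies input guardrails, and inspects responses before returning them. The provider wiring, guardrail checks, and names are assumptions made for the example, not GoDaddy’s implementation.

```python
# Hypothetical LLM gateway: one entry point, pluggable backends, input guardrails,
# and response inspection. Illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class LLMRequest:
    model: str          # logical model name, e.g. "text-default" or "image-logo"
    prompt: str
    user_id: str

BLOCKED_TERMS = {"credit card number", "password"}   # stand-in input guardrail

def passes_input_guardrails(req: LLMRequest) -> bool:
    return not any(term in req.prompt.lower() for term in BLOCKED_TERMS)

def looks_malformed(response: str) -> bool:
    """Stand-in response check: flag empty or suspiciously short completions."""
    return len(response.strip()) < 10

class LLMGateway:
    def __init__(self) -> None:
        # Each backend is just a callable here; in practice these would wrap
        # vendor SDKs (OpenAI, Anthropic, Gemini, Bedrock, ...).
        self.backends: Dict[str, Callable[[str], str]] = {}

    def register(self, model: str, backend: Callable[[str], str]) -> None:
        self.backends[model] = backend

    def complete(self, req: LLMRequest) -> str:
        if not passes_input_guardrails(req):
            raise ValueError("request blocked by input guardrails")
        response = self.backends[req.model](req.prompt)
        if looks_malformed(response):
            # In a real gateway this would be logged and alerted on, per Beadnall's
            # point about spotting patterns that suggest a model is misbehaving.
            raise RuntimeError(f"suspicious response from {req.model!r}")
        return response

# Usage with a fake backend standing in for a real model call.
gateway = LLMGateway()
gateway.register("text-default", lambda prompt: f"[draft copy based on: {prompt}]")
print(gateway.complete(LLMRequest("text-default", "Describe a handmade ceramic mug", "u1")))
```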

How come you have 50 LLMs?

“The space is evolving rapidly, with the various models outcompeting each other on cost, accuracy, reliability, and security. The vast majority of these are currently in sandbox or test environments, and only a small number run in production behind Airo. Some will never see production, while others will be made obsolete by models that are more accurate or cheaper to build.”

Can you share details about this gateway? How does it work, and did you develop the software yourself or purchase it from a third party? “It’s something we built, and it’s been running for going on a year now. It’s useful for orchestrating the unpredictability of the technology.”

“It came out of our first push into the space, because we needed a way to coordinate across what the different LLMs need. If you think back a year, there was only one vendor at that time, but everyone could see there were going to be many players wanting to get into this space, and no one knew who was going to win. Naturally, the discussion of who was going to win has since become a bit more refined.”

How did you go about training the workforce on AI and, more crucially, getting them on board to embrace it? “This is one of the areas that has not really proven difficult for me in this role, which has been rather refreshing. We have a business unit that originated our first use case: assisting customers in creating content to host on their site and selecting the appropriate domain name for that site. That is something many customers get caught up on: choosing a domain name, deciding what sort of content they will post, and, if they want to start selling products, writing the descriptions of those items. So it was a customer need we wanted to fulfill.

“Once we had outlined how AI was going to assist on that identified course, the business unit appropriately prioritized it and mobilized resources against it to develop some of the initial trials in this space. It’s so important to have that clear, compelling use case to help the team rally behind something. We are conducting various tests and receiving results; not all of the trials turned out as planned. It’s a learning process.

“Some of the most interesting experiments, perhaps, are the ones in which things do not work out, because those are the experiments that make you ask ‘what will work?’ in a different way. As teams started getting these results and realizing the impact this was going to have on customers, they were willing to spend more time with the technology and really start to focus on outcomes.”

Is it possible to employ AI to develop actual, tangible products you can offer to customers? Or is it more assistive in nature, like suggesting the type of content to post, proofreading code, or making a video? “I do believe it is ready for prime time. It really depends on what use we are looking at, and this is where the ability to test is critical for determining whether it is ready for prime time in that particular usage scenario. It is creating positive value in the customer interaction, because it handles a set of steps that are not required but that a majority of our customers are using. There are many different forms…

What is GoDaddy Airo? What does it do? “I would say it is what the ‘AI’ in the name stands for: the integration of artificial intelligence into our products and services.

“It’s our core AI technology, built on our data layer, our experimentation layer, and the gateway we use against our LLMs. It could evolve into further new products in the future, but for now its purpose is to make the products we sell today even better. It will grow with time; it is very much a work in progress as we pioneer our way into it.”

Does Airo help your clients directly, or do you use Airo yourselves and then provide your clients with the AI results? “As soon as you register a domain name and a website, we boot you right into that experience. We help you build out the site further, and if you’re uploading inventory items to the site, Airo will automatically fill out the [textual] description for you. If we can get people from the brainstorming phase to a live business online, that’s our big goal. That’s where we want to provide value to our customers.”

How accurate is Airo? “I believe it is quite accurate based on the experiments, in which we set parameters that require a relatively high accuracy level. Although there may be certain combinations that we learn about over time, I would say that overall it has been more accurate than we initially imagined.”
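As a rough illustration of the inventory-description autofill Beadnall mentions, the sketch below turns the structured fields a merchant might upload into a prompt and passes it to a generic completion function, with a plain fallback if generation fails. The field names, prompt wording, and the complete() callable are hypothetical, not GoDaddy’s.

```python
# Hypothetical "auto-fill a product description" flow: structured item fields in,
# generated description out, with a conservative fallback.
from typing import Callable, Dict

def build_description_prompt(item: Dict[str, str]) -> str:
    return (
        "Write a two-sentence product description for an online store.\n"
        f"Name: {item['name']}\n"
        f"Category: {item['category']}\n"
        f"Key details: {item['details']}\n"
        "Tone: friendly and concise."
    )

def describe_item(item: Dict[str, str], complete: Callable[[str], str]) -> str:
    try:
        return complete(build_description_prompt(item))
    except Exception:
        # Fallback so the merchant still gets usable text if generation fails.
        return f"{item['name']}: {item['details']}."

# Usage with a stub completion function standing in for a real model call.
item = {"name": "Hand-thrown ceramic mug", "category": "Kitchen", "details": "12 oz, dishwasher safe"}
print(describe_item(item, complete=lambda p: "[generated description]"))
```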

Where do you get the LLMs that drive the generative AI? “The actual LLMs we’re using are ChatGPT, Anthropic, Gemini, and AWS’s Titan. So we are leveraging several different models on the backend to do the generative AI itself. But all the integrations into our flows and products are work we do ourselves.”
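The answer names the vendors but not the wiring. One common pattern, sketched below under the assumption that the official OpenAI and Anthropic Python SDKs are used, is a thin adapter per provider exposing a single prompt-in, text-out function; the model names are examples only, and the remaining providers (Gemini, Titan via Amazon Bedrock) would be wrapped the same way.

```python
# Illustrative per-vendor adapters behind one function shape. Model names are
# examples; API keys are read from the environment by each SDK.
from openai import OpenAI          # pip install openai
import anthropic                   # pip install anthropic

def complete_openai(prompt: str, model: str = "gpt-4o-mini") -> str:
    client = OpenAI()  # uses OPENAI_API_KEY
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def complete_anthropic(prompt: str, model: str = "claude-3-5-sonnet-latest") -> str:
    client = anthropic.Anthropic()  # uses ANTHROPIC_API_KEY
    resp = client.messages.create(
        model=model,
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

# Either adapter can be handed to higher-level code (e.g., the gateway or the
# describe_item() helper sketched earlier) without that code knowing the vendor.
ADAPTERS = {"openai": complete_openai, "anthropic": complete_anthropic}
```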

What are some of the challenges you have faced in trying to adopt AI at your organization? “Where we innovated, we moved relatively swiftly, while making sure we properly weighed the security and privacy implications. That is the area I think we dedicated, or at least I dedicated, more than the usual amount of time and thought to. Although, as I mentioned earlier, the biggest challenge is finding ways these LLMs can be deployed and structuring the experiments to meet those requirements. In other words, I believe building out the capabilities is the responsibility of the organization: being able to take the common platform approach and then being able to explain the security.

“It has to be about those factors along with the customer’s needs. And you can spend an enormous amount of money without much benefit. So it has always been about those factors, and balancing them has been a major concern for us.”

What’s next? “I believe the biggest frontier for us is expanding the use of AI into other areas of the company and for our employees, so we have quite a journey ahead of us in those directions. For the most part, that means adding this type of capability, built on our own data, for further internal use cases. We are in the midst of deciding which successful experiments to roll out inside the organization now.”



 

 

 
