Babak Hodjat is Vice President of Evolutionary AI at Cognizant, and former co-founder and CEO of Sentient. He is responsible for the core technology behind the world’s largest distributed artificial intelligence system. Babak was also the founder of the world’s first AI-driven hedge fund, Sentient Investment Management. He is a serial entrepreneur, having started a number of Silicon Valley companies as their main inventor and technologist.
Prior to co-founding Sentient, Babak was senior director of engineering at Sybase iAnywhere, where he led mobile solutions engineering. He was also co-founder, CTO and board member of Dejima Inc. Babak is the primary inventor of Dejima’s patented, agent-oriented technology applied to intelligent interfaces for mobile and enterprise computing – the technology behind Apple’s Siri.
A published scholar in the fields of artificial life, agent-oriented software engineering and distributed artificial intelligence, Babak has 31 granted or pending patents to his name. He is an expert in numerous fields of AI, including natural language processing, machine learning, genetic algorithms and distributed AI and has founded multiple companies in these areas. Babak holds a Ph.D. in machine intelligence from Kyushu University, in Fukuoka, Japan.
Looking back at your career, from founding multiple AI-driven companies to leading Cognizant’s AI Lab, what are the most important lessons you’ve learned about innovation and leadership in AI?
Innovation needs patience, investment, and nurturing, and it should be fostered without restriction. If you’ve built the right team of innovators, you can trust them and give them full artistic freedom to choose how and what they research. The results will often amaze you. From a leadership perspective, research and innovation should not be a nice-to-have or an afterthought. I’ve set up research teams early on when building start-ups and have always been a strong advocate of investing in research, and it has paid off. In good times, research keeps you ahead of the competition, and in bad times, it helps you diversify and survive, so there is no excuse for underinvesting in it, restricting it, or overburdening it with short-term business priorities.
As one of the primary inventors of Apple’s Siri, how has your experience with developing intelligent interfaces shaped your approach to leading AI initiatives at Cognizant?
The natural language technology I originally developed for Siri was agent-based, so I have been working with the concept for a long time. AI wasn’t as powerful in the ’90s, so I used a multi-agent system to tackle the understanding and mapping of natural language commands to actions. Each agent represented a small subset of the domain of discourse, so the AI in each agent had a simple environment to master. Today, AI systems are powerful, and one LLM can do many things, but we still benefit from treating it as a knowledge worker in a box: restricting its domain, giving it a job description and linking it to other agents with different responsibilities. The AI is thus able to augment and improve any business workflow.
As part of my remit as CTO of AI at Cognizant, I run our Advanced AI Lab in San Francisco. Our core research principle is agent-based decision-making. Today, we have 56 U.S. patents on core AI technology based on that principle. We’re all in.
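As a rough illustration of the “knowledge worker in a box” approach described above, here is a minimal Python sketch in which each agent masters only a small slice of the domain of discourse and a simple dispatcher links the agents together. It is a hypothetical sketch for this article, not the Dejima or Cognizant implementation; the agent names, vocabularies and keyword-matching heuristic are all assumptions.

```python
# Hypothetical sketch: each agent owns a narrow slice of the "domain of
# discourse" and only claims commands it can handle; a simple dispatcher
# links the agents together. Names, vocabularies and scoring are illustrative.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str                                      # the agent's "job description"
    vocabulary: set = field(default_factory=set)   # the small domain it masters

    def claim(self, command: str) -> int:
        # Score = how many of the agent's domain words appear in the command.
        return len(set(command.lower().split()) & self.vocabulary)

    def act(self, command: str) -> str:
        return f"[{self.name}] handling: {command}"


def route(command: str, agents: list[Agent]) -> str:
    # Hand the command to the agent whose restricted domain fits it best.
    best = max(agents, key=lambda a: a.claim(command))
    if best.claim(command) == 0:
        return "No agent claims this command; escalate to a human."
    return best.act(command)


agents = [
    Agent("calendar", {"meeting", "schedule", "tomorrow", "calendar"}),
    Agent("email", {"email", "send", "reply", "inbox"}),
]
print(route("schedule a meeting tomorrow at 10", agents))
```

In a modern system, each `act` method might wrap an LLM call whose prompt carries that agent’s narrow job description, but the routing idea stays the same.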
Could you elaborate on the cutting-edge research and innovations currently being developed at Cognizant’s AI Lab? How are these developments addressing the specific needs of Fortune 500 companies?
We have several AI studios and innovation centers. Our Advanced AI Lab in San Francisco focuses on extending the state of the art in AI. This is part of our commitment announced last year to invest $1 billion in generative AI over the next three years.
More specifically, we’re focused on developing new algorithms and technologies to serve our clients. Trust, explainability and multi-objective decision-making are among the areas we’re pursuing that are vital for Fortune 500 enterprises.
Around trust, we’re interested in research and development that deepens our understanding of when we can trust AI’s decision-making enough to defer to it, and when a human should get involved. We have several patents related to this type of uncertainty modeling. Similarly, neural networks, generative AI and LLMs are inherently opaque. We want to be able to evaluate an AI decision and ask it questions about why it recommended something – essentially making it explainable. Finally, we understand that the decisions companies want to make often have more than one objective: reducing costs while increasing revenues, balanced with ethical considerations, for example. AI can help us achieve the best balance of all of these outcomes by optimizing decision strategies in a multi-objective manner. This is another very important area in our AI research.
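To illustrate the multi-objective idea in the simplest terms, the sketch below scores a few hypothetical decision strategies on cost savings, revenue lift and an ethics score, then keeps the Pareto-optimal set: the strategies that no other strategy beats on every objective at once. The data and scoring are invented for illustration and do not reflect Cognizant’s patented methods.

```python
# Toy multi-objective selection (illustrative only, with made-up numbers).
# Each candidate strategy is scored on objectives we want to maximize.

def dominates(a: dict, b: dict, objectives: list[str]) -> bool:
    # a dominates b if it is at least as good on every objective
    # and strictly better on at least one.
    return all(a[k] >= b[k] for k in objectives) and any(a[k] > b[k] for k in objectives)


def pareto_front(strategies: list[dict], objectives: list[str]) -> list[dict]:
    # Keep only the strategies that nothing else dominates.
    return [s for s in strategies
            if not any(dominates(other, s, objectives) for other in strategies)]


candidates = [
    {"name": "aggressive cost cuts", "cost_savings": 9, "revenue_lift": 3, "ethics": 4},
    {"name": "balanced plan",        "cost_savings": 6, "revenue_lift": 6, "ethics": 8},
    {"name": "status quo",           "cost_savings": 2, "revenue_lift": 2, "ethics": 9},
    {"name": "dominated option",     "cost_savings": 5, "revenue_lift": 5, "ethics": 7},
]

for s in pareto_front(candidates, ["cost_savings", "revenue_lift", "ethics"]):
    print(s["name"])
```

A real system would optimize over far larger strategy spaces (for example with evolutionary search), but the principle of presenting decision-makers with the trade-off frontier rather than a single answer is the same.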
The next two years are considered critical for generative AI. What do you believe will be the pivotal changes in this period, and how should enterprises prepare?
We’re heading into an explosive period for the commercialization of AI technologies. Today, AI’s primary uses are improving productivity, creating better natural language-driven user interfaces, summarizing data and helping with coding. During this acceleration period, we believe that organizing overall technology and AI strategies around the core tenet of multi-agent systems and decision-making will best enable enterprises to succeed. At Cognizant, our emphasis on innovation and applied research will help our clients leverage AI to increase strategic advantage as it becomes further integrated into business processes.
How will Generative AI reshape industries, and what are the most exciting use cases emerging from Cognizant’s AI Lab?
Generative AI has been a huge step forward for businesses. You now have the ability to create a series of knowledge workers that can assist humans in their day-to-day work. Whether it’s streamlining customer service through intelligent chatbots or managing warehouse inventory through a natural language interface, LLMs are very good at specialized tasks.
But what comes next is what will truly reshape industries, as agents gain the ability to communicate with each other. The future will be about companies embedding agents in their devices and applications that can address users’ needs and interact with other agents on their behalf. They will work across entire businesses to assist humans in every role, from HR and finance to marketing and sales. In the near future, businesses will gravitate naturally towards becoming agent-based.
Notably, we already have a multi-agent system that was developed in our lab in the form of Neuro AI, an AI use case generator that allows clients to rapidly build and prototype AI decisioning use cases for their business. It is already delivering some exciting results, and we’ll be sharing more on this soon.
What role will multi-agent architectures play in the next wave of Gen AI transformation, particularly in large-scale enterprise environments?
In our research and conversations with corporate leaders, we’re getting more and more questions about how they can make Generative AI impactful at scale. We believe the transformative promise of multi-agent artificial intelligence systems is central to achieving that impact. A multi-agent AI system brings together AI agents built into software systems in various areas across the enterprise. Think of it as a system of systems that allows LLMs to interact with one another. Today, the challenge is that, even though business objectives, activities, and metrics are deeply interwoven, the software systems used by disparate teams are not, creating problems. For example, supply chain delays can affect distribution center staffing. Onboarding a new vendor can impact Scope 3 emissions. Customer turnover could indicate product deficiencies. Siloed systems mean actions are often based on insights drawn from merely one program and applied to one function. Multi-agent architectures will light up insights and integrated action across the business. That’s real power that can catalyze enterprise transformation.
In what ways do you see multi-agent systems (MAS) evolving in the next few years, and how will this impact the broader AI landscape?
A multi-agent AI system functions as a virtual working group, analyzing prompts and drawing information from across the business to produce a comprehensive solution not just for the original requestor, but for other teams as well. If we zoom in and look at a particular industry, this could revolutionize operations in areas like manufacturing, for example. A Sourcing Agent would analyze existing processes and recommend more cost-effective alternative components based on seasons and demand. This Sourcing Agent would then connect with a Sustainability Agent to determine how the change would impact environmental goals. Finally, a Regulatory Agent would oversee compliance activity, ensuring teams submit complete, up-to-date reports on time.
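The manufacturing example above can be pictured as a small pipeline of agents enriching a shared context. The sketch below uses plain Python stubs in place of LLM-backed agents; the agent names come from the example, while the functions, fields and values are hypothetical.

```python
# Illustrative pipeline for the manufacturing example; stubs stand in for
# LLM-backed agents, and all data values below are invented.

def sourcing_agent(context: dict) -> dict:
    # Recommend a more cost-effective alternative component for the season.
    context["recommended_component"] = "supplier-B polymer housing"
    context["estimated_savings_pct"] = 12
    return context


def sustainability_agent(context: dict) -> dict:
    # Assess how the recommended change affects environmental goals.
    context["scope3_emissions_delta_pct"] = -3  # assumed improvement
    return context


def regulatory_agent(context: dict) -> dict:
    # Check that the change keeps compliance reporting complete and on time.
    context["compliance_report_required"] = True
    context["report_deadline"] = "end of quarter"
    return context


def run_pipeline(initial: dict) -> dict:
    # The agents act as a virtual working group, each enriching a shared context.
    context = dict(initial)
    for agent in (sourcing_agent, sustainability_agent, regulatory_agent):
        context = agent(context)
    return context


print(run_pipeline({"product_line": "industrial pumps", "season": "Q4"}))
```

In practice each stub would be an LLM-backed service with its own data sources and guardrails, but the handoff pattern between agents is the same.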
The good news is that many companies have already begun to organically integrate LLM-powered chatbots, but they need to be intentional about how they connect these interfaces. Care must be taken with the granularity of agentification, the types of LLMs being used, and when and how to fine-tune them to make them effective. Organizations should start from the top, consider their needs and goals, and work down from there to decide what can be agentified.
What are the main challenges holding enterprises back from fully embracing AI, and how does Cognizant address these obstacles?
Despite leadership’s backing and investment, many enterprises fear falling behind on AI. According to our research, there’s a gap between leaders’ strategic commitment and their confidence to execute well. The cost and availability of talent and the perceived immaturity of current Gen AI solutions are two significant inhibitors holding enterprises back from fully embracing AI.
Cognizant plays an integral role in helping enterprises traverse the AI productivity-to-growth journey. In fact, recent data from a study we conducted with Oxford Economics points to the need for outside expertise to help with AI adoption, with 43% of companies indicating they plan to work with external consultants to develop a plan for generative AI. Traditionally, Cognizant has owned the last mile with clients – we did this with data storage and cloud migration, and agentification will be no different. This is work that must be highly customized; it’s not a one-size-fits-all journey. We’re the experts who can help identify the business goals and implementation plan, and then bring in the right custom-built agents to address business needs. We are, and have always been, the people to call.
Many companies struggle to see immediate ROI from their AI investments. What common mistakes do they make, and how can these be avoided?
Generative AI is far more effective when companies bring it into their own data context – that is to say, customize it on their own strong foundation of enterprise data. Also, sooner or later, enterprises will have to take the challenging step of reimagining their fundamental business processes. Today, many companies are using AI to automate and improve existing processes. Bigger results come when they start to ask questions like: what are the constituents of this process, how can they be changed, and how do we prepare for the emergence of something that doesn’t exist yet? Yes, this will necessitate a culture change and accepting some risk, but it seems inevitable if you want to orchestrate the many parts of the organization into one powerful whole.
What advice would you give to emerging AI leaders who are looking to make a significant impact in the field, especially within large enterprises?
Business transformation is complex by nature. Emerging AI leaders within larger enterprises should focus on breaking down processes, experimenting with changes, and innovating. This requires a shift in mindset and calculated risks, but it can create a more powerful organization.
Thank you for the great interview. Readers who wish to learn more should visit Cognizant.