Chris Wray, our lead engineer, along with members of our engineering team, attended Big Data London 2024. Here are the key takeaways from the event that highlight the latest trends and insights in the world of data and AI:
Generative AI is Here to Stay
Despite the fading hype, Generative AI (GenAI) has become more advanced over the past year and is here to stay. However, only a small number of companies are using it in production due to associated risks. Smaller companies can still leverage GenAI effectively, though likely in non-customer-facing use cases. For example, companies could use models to improve internal processes like peer-reviewing code, generating content, or enhancing documentation.
There are two main avenues for safely implementing GenAI: boosting business value and improving employee experiences. For business value, it could act as a virtual storefront assistant, recommending products based on customer prompts or purchase history, or it could automate repetitive tasks like generating marketing copy or personalising customer outreach. However, when deploying GenAI in customer-facing use cases, it is critical to implement robust safeguards to ensure data privacy, prevent misuse, and maintain transparency, as any mismanagement could lead to reputational and legal risks.
Looking at how GenAI can be utilised to improve employee experiences, it can streamline workflows by assisting with code review, offering styling suggestions for frontend development, or organising and summarising documentation. These implementations carry little risk and offer a large benefit: a level of human oversight is still required, but the work is completed significantly faster than if employees did it entirely themselves. We should consider GenAI a tool for enhancing our workflows, particularly those that are repetitive and time-consuming.
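As an illustration of the internal-workflow idea, here is a minimal sketch of how a team might wrap a code diff in a peer-review prompt before sending it to whatever LLM they use. The prompt wording and the `style_guide` parameter are illustrative assumptions, not a specific vendor's API.

```python
def build_review_prompt(diff: str, style_guide: str = "PEP 8") -> str:
    """Wrap a code diff in a peer-review prompt for an LLM.

    The prompt text and the style_guide parameter are illustrative
    assumptions; swap in whatever conventions your team follows.
    """
    return (
        "You are a senior engineer performing a code review.\n"
        f"Check the change against {style_guide}, flag potential bugs, "
        "and suggest clearer names where helpful.\n\n"
        f"--- diff ---\n{diff}\n--- end diff ---"
    )

# Example: a trivial one-line change to review
prompt = build_review_prompt("+ def add(a, b):\n+     return a + b")
```

Because a human still reads the model's suggestions before acting on them, this kind of use keeps the oversight the article describes while removing the slowest part of the task.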
It’s important to acknowledge that any implementation carries inherent risks. The key lies in carefully weighing the potential benefits against those risks, determining whether the implementation aligns with the business’s objectives, and developing strong guardrails to limit or mitigate the risks of adopting any new technology, GenAI especially.
Agentic AI
Agentic AI is an emerging concept that, while new, draws parallels to the established idea of microservices: each tool is designed for a specific task. The novelty lies in applying this principle to large language models (LLMs), enabling more accurate and task-focused responses. Each agent makes its own decisions within the specific area it focuses on, making this a more autonomous implementation of Artificial Intelligence (AI) tooling. While hallucinations in GenAI remain an issue, and likely always will, agentic methods significantly boost accuracy, potentially bringing it on par with a human co-worker’s performance.
Humans can be wrong or misinformed, and if we can demonstrate that GenAI performs as accurately as a human counterpart, it opens new possibilities for trust in AI within business contexts. For example, businesses might trust AI to respond to natural language queries about data, such as interpreting dashboards or analysing figures. With agentic implementations of large language models improving reliability, the potential for AI to play a trusted role in data interactions grows significantly, reshaping how companies leverage these technologies for decision-making.
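The microservices analogy above can be sketched in a few lines: a router dispatches each query to a task-specific tool, just as a gateway routes requests to specialised services. The tool names and keyword routing below are illustrative assumptions, not a specific agent framework's API; in practice each placeholder function would call an LLM tuned or prompted for that task.

```python
def summarise_dashboard(query: str) -> str:
    # Placeholder for an agent specialised in interpreting dashboards.
    return f"dashboard summary for: {query}"

def analyse_figures(query: str) -> str:
    # Placeholder for an agent specialised in numeric analysis.
    return f"figure analysis for: {query}"

# Each tool handles one narrow task, mirroring the microservices idea.
TOOLS = {
    "dashboard": summarise_dashboard,
    "figures": analyse_figures,
}

def route(query: str) -> str:
    """Dispatch the query to the tool whose keyword it mentions.

    Falls back to the figures tool when no keyword matches; a real
    agentic system would use an LLM, not keywords, to pick the tool.
    """
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool(query)
    return analyse_figures(query)
```

Constraining each agent to a narrow, well-defined task is what drives the accuracy gains the article describes: a focused prompt and toolset leaves less room for hallucination than one general-purpose model answering everything.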
The Importance of Quality Data
The saying “good data in means good data out” remains as relevant as ever, especially in the age of AI and ML. Training models on poor-quality data is a recipe for disaster, which is why robust data governance and thoughtful strategies are crucial for companies looking to get the most out of their AI initiatives. While implementing good data practices doesn’t have to be overly complex, it does need to be intentional and integrated into any onboarding process for AI. This also applies when considering advanced topics like data catalogues and data marts, especially within a data mesh framework.
Even on-premises or legacy infrastructure that may not be as modernised can see significant benefits from ensuring their data is clean and well-managed. Consistent data governance lays a strong foundation for any business, enabling more accurate decision-making, reducing errors, and enhancing the effectiveness of AI and ML models as well as data pipeline efficiency. Investing time in this area now will pay dividends in the long run, as high-quality data drives more reliable insights, accurate predictions, and overall business value.
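To make "intentional" data-quality practice concrete, here is a minimal sketch of validation checks applied at ingestion time, before records reach a model or pipeline. The schema and rules are hypothetical examples, not a specific governance tool; dedicated frameworks exist for this, but the principle is the same.

```python
def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality problems found in one record.

    The fields and rules below are illustrative assumptions about a
    sales record; a real schema would come from the data governance
    process the article describes.
    """
    problems = []
    if not record.get("customer_id"):
        problems.append("missing customer_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("amount must be a non-negative number")
    if record.get("currency") not in {"GBP", "USD", "EUR"}:
        problems.append("unknown currency")
    return problems

clean = {"customer_id": "C-1001", "amount": 9.99, "currency": "GBP"}
dirty = {"customer_id": "", "amount": -5, "currency": "BTC"}
```

Running checks like these at the point of entry, and quarantining records that fail, is a small, low-cost step that keeps poor-quality data from ever reaching training sets or dashboards.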
AI in the Boardroom
AI has become a prominent topic of discussion among the C-suite, stakeholders, and business leaders. However, due to the rapid advancements in GenAI, it can be difficult for non-technical individuals to stay fully informed about its evolving capabilities. This can lead to either unrealistic expectations or a lack of clear expectations altogether, with statements such as “GenAI will solve all our problems” or “GenAI can’t address any of our challenges” being common misconceptions.
To successfully drive AI initiatives, it is essential for professionals to be well-versed in the key factors required for effective GenAI implementation. A thorough understanding of its requirements, risks, and opportunities enables meaningful dialogue with senior leadership.
Once equipped with this knowledge, it is essential that leadership clearly communicates these insights throughout the organisation. This ensures alignment, manages expectations, and helps teams comprehend the practical steps needed for AI integration.
If you wish to learn more about AI and the impact it can have on your business, contact Optima Partners here.