The Future of AI: How Startups are Crafting Agentic Solutions with Intel® Liftoff


There’s no arguing that the transformative potential of agentic AI is vast: Gartner estimates that by 2028, 33% of enterprise software applications will incorporate agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously. This rapid adoption is evident in software development, where agentic coding startups like Cline and Roo are quickly gaining traction. The LLM endpoint marketplace OpenRouter.ai* has shown that these two open source plugins consumed over 1.5 trillion tokens in March alone.

Additionally, developer appetite for agent tooling is high, with much industry attention on frameworks that standardize interactions between LLMs and external tools, such as Anthropic’s open source Model Context Protocol* (MCP), LlamaIndex*, and LangChain*.

The March 7th launch of Manus.im highlighted the potential of agentic AI to execute tasks requiring iteration and discovery. OpenManus, an open source project released three hours after Manus.im, demonstrates the rapid progress of open source in the agentic AI space.

All this recent progress has created excitement among startups to pave the way for the next GPT moment, where AI agents deliver significant customer value through complex, iterative task execution.

Startups Laying the Foundation for Agentic AI

Intel® Liftoff currently has a portfolio of over 300 startups and helps them tackle their AI projects with open source technology, providing hands-on engineering, compute, and workshops hosted on the Intel® Tiber™ AI Cloud. In a recent Intel® Liftoff Days event—a virtual five-day developer sprint—about half of the startups were focused on building their technological foundations for agentic AI, and we saw them work on four foundational areas that we’ll cover in this post.

1. Designing Better Agents

In our Intel® Liftoff Days agentic AI workshops, startups discussed the toolsets and types of agency required in their product design—both critical elements for startups to create differentiated and impactful agentic AI products for customers.

Toolset Buildout

A robust toolset forms the backbone of any agentic AI system. Startups have two primary avenues for toolset development: leveraging open source resources or creating custom solutions tailored to specific domains. Open source platforms like LlamaIndex* and LangChain* offer a wealth of functionality that can be integrated into agentic AI systems, providing a solid foundation for innovation.

Tool standardization is also advancing in the open. For example, the Anthropic Model Context Protocol (MCP) and the Open Platform for Enterprise AI (OPEA) let startups leverage common agentic tools and orchestrate them seamlessly without reinventing the wheel. Adopting open source frameworks allows startups to keep pace by upgrading their models and agentic tooling while focusing on the challenges unique to delivering value in their domain.
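To make the standardization idea concrete, here is a minimal sketch of the pattern MCP formalizes—tools published with machine-readable metadata so any agent can discover and invoke them without bespoke glue code. This is not the actual MCP SDK; the registry, decorator, and `search_docs` tool are all hypothetical illustrations.

```python
import json

# Hypothetical registry illustrating the pattern MCP standardizes:
# each tool publishes JSON-schema-style metadata so an agent (or LLM)
# can discover and invoke it uniformly.
TOOL_REGISTRY = {}

def register_tool(name, description, parameters):
    """Register a function along with machine-readable metadata."""
    def decorator(fn):
        TOOL_REGISTRY[name] = {
            "description": description,
            "parameters": parameters,
            "handler": fn,
        }
        return fn
    return decorator

@register_tool(
    name="search_docs",
    description="Search internal documentation for a query string.",
    parameters={"query": {"type": "string"}},
)
def search_docs(query):
    # Stand-in for a real search backend.
    return [f"doc matching '{query}'"]

def list_tools():
    """What an agent would see when it asks a server for its tool catalog."""
    return json.dumps(
        {name: {k: v for k, v in tool.items() if k != "handler"}
         for name, tool in TOOL_REGISTRY.items()},
        indent=2,
    )

def call_tool(name, **kwargs):
    """Dispatch an agent's tool call by name."""
    return TOOL_REGISTRY[name]["handler"](**kwargs)
```

Because discovery (`list_tools`) is decoupled from invocation (`call_tool`), a new tool added to the registry becomes available to every agent without changing agent code—the property that makes standardized protocols like MCP attractive.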

Agency in Product Design

Developers must consider the degree of agency embedded within workflows as a crucial factor in agentic AI product design. Tasks that are discovery-based, such as searching, analyzing, and expanding contexts, benefit from greater agency because they can be resource-intensive for humans. In situations where workflows are uncertain or ambiguous, agentic AI can transform the process by autonomously defining, refining, and executing tasks. 

On the other hand, some workflows require more control to ensure compliance or preserve unique or proprietary processes. In these cases, it's important to constrain workflows, toolsets, or prompts to adhere to standards and maintain the integrity of the intended output. Balancing agency and control is essential for creating effective and reliable AI-driven solutions, highlighting the importance of thoughtful design. 
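One simple way to implement the control side of this balance is a per-workflow tool allowlist. The sketch below is illustrative (the class and tool names are hypothetical): the same agent machinery can be granted broad agency in a discovery workflow or a restricted toolset in a compliance-sensitive one.

```python
# Illustrative sketch: constraining an agent's toolset per workflow so
# high-control tasks cannot reach tools with risky side effects.
class ConstrainedAgent:
    def __init__(self, tools, allowlist):
        self._tools = tools              # name -> callable
        self._allowlist = set(allowlist) # tools this workflow may use

    def act(self, tool_name, *args):
        if tool_name not in self._allowlist:
            raise PermissionError(
                f"tool '{tool_name}' is outside this workflow's allowlist"
            )
        return self._tools[tool_name](*args)

tools = {
    "summarize": lambda text: text[:40] + "...",
    "send_email": lambda to: f"emailed {to}",  # high-risk side effect
}

# A compliance review workflow grants only read-only analysis tools;
# a discovery workflow could hand the same agent a larger allowlist.
review_agent = ConstrainedAgent(tools, allowlist=["summarize"])
```

Constraining at the dispatch layer, rather than relying on prompting alone, gives a hard guarantee that the intended output integrity is preserved.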

Startup Example: Pixel ML’s AgenticFlow.ai

Pixel ML’s AgenticFlow.ai* balances structure and flexibility in its design. Users can define precise workflows using the visual builder or delegate tasks to AI agents equipped with MCP tools. These agents can perform complex actions like story generation combined with asset creation (images/video), streamlining demanding processes in sales, marketing, and creative domains. This design ensures reliable execution through predefined plans while empowering agents with the tools needed for autonomous task completion within those plans. 

AgenticFlow has been recognized as an official Model Context Protocol (MCP) Client by Anthropic (https://modelcontextprotocol.io/clients#agenticflow).

This commitment to MCP allows users to securely connect and orchestrate over 2,500 APIs and 10,000 tools, enabling AI agents to leverage a vast ecosystem of capabilities. 

2. Keeping Your Facts Straight

AI developers need solid strategies in place for knowledge organization and retrieval to create powerful agentic AI systems, especially in fields where precision and context are critical, like legal, medical, and education sectors. AI agents’ ability to access and leverage well-structured contextual knowledge significantly boosts their ability to plan, evaluate, and execute tasks effectively. Given that knowledge can be highly specific to institutions or domains, developers must adopt tailored approaches to ensure efficient retrieval and organization. 

Startups actively explored strategies to integrate robust knowledge management capabilities into their agentic AI roadmaps. Two Liftoff startups stood out in their efforts: 

 

  • Kneogin Igmisarch explored solutions like GraphRAG to optimize the organization of domain-specific knowledge and facilitate faster and more relevant information access for their LLMs by focusing on concept-based retrieval rather than traditional keyword searches. They found that extracting and organizing domain-specific knowledge based on concept relationships can be expensive with traditional language models. To address this, they’re exploring ways to index the concept-relation data more efficiently using custom models, aiming to enhance knowledge retrieval for applications that require autonomous decision-making. 

  • Qdrant is pioneering advancements in vector databases for AI systems. They recently demonstrated an approach (https://qdrant.tech/documentation/examples/graphrag-qdrant-neo4j/) that integrates vector databases with graph databases (Neo4j) to enhance knowledge organization and retrieval. By combining graph structures that organize knowledge with vector search, they show how to pinpoint where in the knowledge graph to search, boosting the speed and relevance of retrieval results available to LLMs and agents. This approach ensures AI agents can quickly leverage the most pertinent and contextually rich data for their tasks.
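The hybrid retrieval pattern both startups are exploring can be sketched in a few lines of plain Python. This toy example (all vectors, concepts, and relations are hypothetical; a real system would use a vector database and a graph database) shows the two-step idea: vector similarity pinpoints where in the knowledge graph to enter, then graph traversal expands that entry point into related concepts.

```python
import math

# Toy knowledge base: embeddings locate concepts, adjacency relates them.
CONCEPT_VECTORS = {
    "solar panel":  [0.9, 0.1, 0.0],
    "inverter":     [0.8, 0.2, 0.1],
    "wind turbine": [0.1, 0.9, 0.2],
}
CONCEPT_GRAPH = {  # concept -> related concepts
    "solar panel":  ["inverter", "grid connection"],
    "inverter":     ["grid connection"],
    "wind turbine": ["grid connection"],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def hybrid_retrieve(query_vec, hops=1):
    # Step 1: vector search picks the graph entry point.
    entry = max(CONCEPT_VECTORS,
                key=lambda c: cosine(query_vec, CONCEPT_VECTORS[c]))
    # Step 2: graph expansion pulls in related concepts for richer context.
    context = {entry}
    frontier = [entry]
    for _ in range(hops):
        frontier = [n for c in frontier for n in CONCEPT_GRAPH.get(c, [])]
        context.update(frontier)
    return entry, context
```

The graph hop keeps retrieval concept-based rather than purely similarity-based: related concepts reach the LLM's context even when their embeddings are far from the query.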

3. Developing Domain-Specific Models

Fine-tuning large language models (LLMs) is important for developing agentic AI systems that operate effectively within specific domains, producing deliverables that align with industry conventions and knowledge.

Several key considerations emerged about domain-specific models as we collaborated and conducted training sessions with startups: 

High-Quality Synthetic Data: A recurring challenge for early-stage startups is acquiring an initial dataset for model tuning. We observed that high-quality synthetic data that accurately reflects real-world scenarios can be instrumental in overcoming this hurdle. Intel® Liftoff provided a training workshop on tools to generate and manage synthetic data, and found that startups were interested in using open frameworks for fine-tuning, e.g., OPEA and Ray*, on Intel® Tiber™ AI Cloud (https://ai.cloud.intel.com/). By generating and validating synthetic datasets, startups can simulate diverse conditions and interactions to fine-tune their models before they accumulate real-world data.
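A minimal sketch of the generate-then-validate loop looks like the following. The templates, asset names, and validation rule are hypothetical stand-ins; real pipelines typically generate candidates with an LLM and apply much richer quality gates.

```python
import random

# Hypothetical domain templates for (prompt, response) fine-tuning pairs.
TEMPLATES = [
    ("What is the output of asset {asset}?",
     "Asset {asset} produced {mwh} MWh."),
    ("Is asset {asset} compliant?",
     "Asset {asset} filed its report covering {mwh} MWh."),
]

def generate_examples(n, seed=0):
    """Generate n candidate examples; seeded for reproducible datasets."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        q_t, a_t = rng.choice(TEMPLATES)
        asset = f"WIND-{rng.randint(1, 99):02d}"
        mwh = rng.randint(10, 500)
        examples.append({
            "prompt": q_t.format(asset=asset),
            "response": a_t.format(asset=asset, mwh=mwh),
        })
    return examples

def validate(example):
    # Toy quality gate: the response must mention the asset named
    # in the prompt; real gates also check factuality and style.
    asset = example["prompt"].split("asset ")[1].split()[0].rstrip("?")
    return asset in example["response"]

dataset = [ex for ex in generate_examples(100) if validate(ex)]
```

The key design point is that validation is a separate, cheap filter over generated candidates, so the generator can be swapped for an LLM later without changing the quality gate.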

Future-Proofing Models: As AI architectures and hardware continue to evolve and new data is gathered, startups must ensure their models are adaptable and can be easily deployed within existing implementations. At our Liftoff Days event, sessions highlighted the importance of designing systems that can integrate new technologies and methodologies to maintain relevance and performance over time.

Domain-Specific Reasoning and Decision-Making: To effectively serve each vertical, AI systems must have the ability to reason and make decisions within the specific context of the domain. This requires a deep understanding of domain-specific knowledge and processes, allowing AI to provide accurate and context-specific outputs. Two startups doing interesting work on domain-specific fine-tuning at the Liftoff Days event included: 
 

  • Reama AI develops AI products to manage alternative energy assets, such as solar and wind power. By fine-tuning their models, Reama AI streamlines operational chat queries about energy assets and uses AI to prepare regulatory filings that require LLMs fine-tuned to industry conventions.  

  • ParaWave uses vision models with LLMs to help coordinate real-time search and rescue actions. By fine-tuning vision models for specific search and rescue operations, their AI systems enable first responders to query their drones using LLMs. This helps to provide more relevant notifications and updates, and to more easily coordinate actions for search and rescue teams. 

4. Exploring Agentic AI Explainability and Validation

As agentic AI systems become more autonomous, it's essential to develop innovations that enhance the explainability, auditability, and validation of the decision-making processes behind their outputs.

For example, if AI agents are deployed for research workflows, we’d want to know why certain production data, medical literature, and model statistics were queried, how the AI prioritized and weighed the evidence, and how it justified its output. Techniques such as chain-of-thought reasoning and LLM-as-a-judge evaluation allow executed workflows to be explained, documented, and audited at speed and scale. Such innovations flip the script—what used to be considered a “bug” of unexplainable black-box AI now becomes a “feature,” delivering explainability to workflows where it may not have previously existed.
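One lightweight building block for this kind of auditability is a decision trace: a structured record of each step an agent takes, its stated rationale, and how heavily its evidence was weighed. The sketch below is hypothetical (class and field names are illustrative, not a specific product's API), but it shows how a workflow's reasoning can be captured for later review.

```python
import json
import time

# Illustrative sketch: record each reasoning step an agent takes --
# what it queried, why, and how the evidence was weighed -- so the
# workflow can be audited after the fact.
class DecisionTrace:
    def __init__(self, task):
        self.task = task
        self.steps = []

    def record(self, action, rationale, evidence_weight):
        self.steps.append({
            "timestamp": time.time(),
            "action": action,
            "rationale": rationale,
            "evidence_weight": evidence_weight,
        })

    def audit_report(self):
        # Rank steps by how heavily their evidence shaped the output,
        # and emit JSON for documentation or an LLM-as-a-judge review.
        ranked = sorted(self.steps, key=lambda s: -s["evidence_weight"])
        return json.dumps({"task": self.task, "steps": ranked}, indent=2)

trace = DecisionTrace("summarize trial results")
trace.record("query medical literature", "find prior efficacy data", 0.7)
trace.record("query production data", "check batch consistency", 0.3)
```

Because the report is structured data rather than free text, it can be validated automatically—for example, by an LLM-as-a-judge pass that flags steps whose rationale does not support their weight.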

Startup Example: AiCella

For AiCella, which is accelerating cancer cell therapy research and manufacturing, agentic systems need to create transparency so users can understand the rationale behind the AI system's decisions, including: 
 

  • Data source prioritization and analysis 

  • Data quality and reliability 

  • Identification of major risks and assumptions made during the research process 

The Next “GPT” Moment: Intel® Liftoff Startups Are Ready for It

The Intel® Liftoff Days event offered invaluable insights into the transformative potential of agentic AI, highlighting four foundational agentic AI capabilities that Intel® Liftoff startups are actively developing. As we saw, agentic AI is poised to revolutionize industries by enabling LLMs to plan, execute, and iterate on complex tasks, offering unprecedented advancements in efficiency and innovation. Just as hunter-gatherers transitioned to farmers, shaping their environment instead of just reacting to it, the emergence of agentic AI represents a similar shift in the digital realm. The journey towards the next "GPT moment" for AI agents is underway.

As AI solution engineers at Intel® Liftoff, we’re excited to support and see the remarkable progress of these startups, and we look forward to the continued evolution of agentic AI as it reshapes industries and drives innovation across the globe.

If you are a startup that would like to join the Intel® Liftoff community, you can find more information here: Intel® Liftoff for Startups

About the Authors

Ed Lee, Sr. AI Software Solutions Engineer, Intel 

Ed Lee works with startups as part of the Intel® Liftoff program, which accelerates AI startups building innovative products with open ecosystems. At Intel, he works with several open ecosystems such as OPEA, OpenVINO™, and vLLM, and on a range of hardware and cloud services. His background spans finance, tech, and healthcare, and he has delivered solutions including LLMs, graphs, reinforcement learning, and cloud services. Ed is a co-inventor on patents about AI for chronic kidney disease and modular data pipelines. He holds a master’s degree in financial engineering from UC Berkeley. 


Kelli Belcher, AI Software Solutions Engineer, Intel 

Kelli Belcher is an AI software solutions engineer at Intel with a background in risk analytics and decision science across the financial services, healthcare, and tech industries. In her current role at Intel, Kelli helps build machine learning solutions using Intel’s portfolio of open AI software tools. Kelli has experience with Python, R, SQL, and Tableau and holds a Master of Science in data analytics from the University of Texas. 


Alex Sin, AI Software Solutions Engineer, Intel 

Alex Sin is an AI software solutions engineer at Intel who consults and enables customers, partners, and developers to build generative AI applications using Intel software solutions on Intel® Xeon® Scalable Processors and Gaudi AI Accelerators. He is familiar with Intel-optimized PyTorch, DeepSpeed, and the Open Platform for Enterprise AI (OPEA) to reduce time-to-solution on popular generative AI tasks. Alex also delivers Gaudi workshops, hosts webinars, and shows demos to beginner and advanced developers alike on how to get started with generative AI and LLMs. Alex holds bachelor’s and master's degrees in electrical engineering from UCLA, where his background is in embedded systems and deep learning. 
