Venture capitalist and businessman Peter Thiel offered a stark comparison between the cryptocurrency and AI industries during a recent interview with Joe Rogan. Thiel suggested that while crypto embraced decentralization, AI is poised to become a highly centralized technology, marking a reversal in the tech industry’s trajectory from decentralization toward concentrated power.
Moreover, Sam Altman recently tweeted that OpenAI reached an agreement with the US AI Safety Institute for pre-release testing of its future models. He emphasized the importance of this happening at the national level, stating, “The US needs to continue to lead.” While this move aims to bolster safety and regulatory standards, it also risks creating barriers to healthy competition by favoring established players with the resources to navigate complex regulatory landscapes, potentially stifling innovation from smaller, decentralized AI developers.
Over the past few years, the AI landscape has been shaped by tech giants like OpenAI, Google, and Microsoft, whose systems have led the AI revolution. Yet, the growing interest in decentralized AI—where models are developed and deployed in a distributed and often open-source manner—brings up questions about privacy, security, and the democratization of technology. While decentralized AI holds the potential for a more inclusive and diverse AI landscape, it also underscores the urgent need for military-grade standards in AI development to ensure reliability, security, and trustworthiness.
But here’s the twist: the answer isn’t picking a side; it’s finding a balance. In this article, I make the case for why a decentralized AI landscape still requires the muscle of military-grade systems.
The Risks of DIY AI: Lessons from the CrowdStrike Incident
As more companies explore building their own AI applications internally, driven by the desire for customized solutions tailored to their specific needs, the risks associated with these efforts are becoming increasingly apparent. The recent CrowdStrike incident underscores just how easily a bug in the system can cause massive disruption, even for a company widely considered to have military-grade capabilities. If even top-tier firms can face such challenges, imagine the level of disruption if every company started building AI applications in-house without the stringent oversight and quality assurance typically found in enterprise environments.
The reality is that developing AI solutions internally can be fraught with risks if not backed by rigorous quality assurance processes and military-grade standards. This is especially concerning as companies use open platforms and tools provided by tech giants to create their own “copilots” or AI-driven applications. Security researcher Michael Bargury, a former senior security architect in Microsoft’s Azure Security CTO office, points out that bots created or modified with these open services aren’t secure by default, leading to potential security vulnerabilities. Without military-grade quality controls, the probability of encountering critical issues increases exponentially.
What Makes an AI System “Military-Grade”?
A military-grade AI system is not just about functionality; it’s about robustness, security, scalability, and compliance. These systems are designed to support mission-critical operations, where downtime or errors can have significant consequences. Unlike experimental or ad-hoc AI models, military-grade AI solutions undergo extensive testing and validation, adhere to stringent compliance standards, and incorporate best practices for risk mitigation.
Key features of military-grade AI systems include:
Rigorous Quality Assurance: Extensive testing at every stage of development, from data collection and preprocessing to model training, deployment, and monitoring. This ensures that the system can handle real-world complexities and edge cases.
Security and Privacy: Robust measures to secure data and prevent unauthorized access. Techniques like federated learning and differential privacy are used to safeguard sensitive information while allowing for powerful, decentralized AI applications (a minimal sketch of the latter follows this list).
Scalability and Reliability: The ability to scale effectively across different environments and workloads, ensuring consistent performance and minimal downtime.
Compliance and Governance: Adherence to industry regulations and standards, ensuring that AI solutions meet legal and ethical requirements, particularly in data-sensitive industries like healthcare, finance, and government.
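To make the privacy technique referenced above concrete, here is a minimal Python sketch of differential privacy applied to a simple aggregate. The dataset, bounds, and epsilon value are illustrative assumptions; a production system would rely on a vetted privacy library rather than hand-rolled noise.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper] so a single record's influence
    (the sensitivity) is bounded, then Laplace noise calibrated to that
    sensitivity and the privacy budget epsilon is added to the mean.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    sensitivity = (upper - lower) / len(clipped)  # max change from one record
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return true_mean + noise

# Hypothetical example: sharing an average repair time without exposing
# any single customer's record.
repair_hours = [1.5, 2.0, 0.75, 3.25, 2.5, 1.0, 4.0, 2.25]
print(dp_mean(repair_hours, lower=0.0, upper=8.0, epsilon=1.0))
```

The smaller the epsilon, the stronger the privacy guarantee and the noisier the released statistic; choosing that trade-off is itself a governance decision.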
Decentralized AI Demands Military-Grade Quality
The decentralized AI movement—aimed at breaking away from the control of a few tech giants and democratizing AI capabilities—also needs to align with these enterprise-grade principles. While decentralized AI can offer enhanced privacy, diversity, and innovation by operating on the edge and using localized data, it must also meet the high standards of reliability and security that enterprises demand.
Open-source frameworks and collaborative AI platforms are exciting developments in the field, but they should not come at the cost of robustness. Just as enterprises cannot afford to deploy half-baked AI solutions that could lead to outages, data breaches, or biased outcomes, decentralized AI platforms must be developed with the same level of care.
A Hybrid Approach: Centralized Governance with Decentralized Innovation
The future of AI might lie in a hybrid approach that combines the benefits of centralized governance with the flexibility of decentralized innovation. Centralized AI models provide a level of security, compliance, and reliability that decentralized systems currently lack. However, decentralized AI can bring forward new forms of innovation and localized solutions, reducing bottlenecks and fostering greater diversity.
For organizations looking to explore decentralized AI, partnering with experts who understand the importance of military-grade standards is crucial. This includes investing in robust quality assurance frameworks, developing strong governance models, and ensuring all stakeholders are aligned on compliance and risk mitigation strategies.
Ensuring the Future of AI is Secure, Reliable, and Equitable
As AI continues to evolve, both centralized and decentralized systems will play a role in shaping the future of intelligence. However, the rise of decentralized AI must not lead us into a Wild West of unregulated, insecure, and unstable AI solutions. Instead, the focus should be on developing military-grade AI systems that combine the best of both worlds—offering the flexibility and innovation of decentralization with the security, robustness, and compliance of centralized systems.
By embracing military-grade standards in AI development, we can ensure that the future of AI is not only democratized but also secure, reliable, and equitable for all. The need for robust oversight, rigorous testing, and strategic partnerships has never been more critical in the AI journey. The time to act is now.
Assaf Melochna is the President and co-founder of Aquant, where his blend of decisive leadership and technical expertise drives the company’s mission. An expert in service and enterprise software, Assaf’s comprehensive business and technical insight has been instrumental in shaping Aquant.
Recently, Paul Graham, the legendary computer scientist and startup investor, retweeted a thought-provoking statement by Mckay Wrigley, AI thought leader and founder of Takeoff AI: “We’re at the point with AI codegen where Cursor + @Claude 3.5 Sonnet is a legit technical cofounder.” While Graham expressed skepticism, noting that the technology might not yet be advanced enough to fully replace a technical cofounder, he acknowledged Wrigley as a trustworthy voice. I agree with Graham’s cautious stance.
However, if AI can take on much of the coding and technical workload traditionally handled by a technical cofounder, it could free up valuable time for these founders to focus on becoming strategic architects of the company’s vision. This shift would require less emphasis on technical skills and a greater focus on decision-making, strategic oversight, and guiding the company through complex challenges. Moreover, if there is a shift in the role of the technical cofounder, we should also expect a significant shift in how decisions are made.
From Insight to Action: A Shift in Decision-Making Paradigms
AI represents a significant evolution in the journey from traditional analytics to more advanced, automated decision-making. While traditional analytics has enabled organizations to extract valuable insights from data, AI takes this further by automating decision processes, uncovering patterns beyond human capacity, and enabling real-time, context-aware decisions.
Historically, decision-making within organizations relied heavily on human judgment, informed by data analytics. Executives and managers would interpret reports and dashboards, using their experience and intuition to make strategic choices. However, as AI systems become more sophisticated, we are witnessing a paradigm shift. The role of humans is evolving from making data-supported decisions to overseeing decisions made by AI systems. This transition from human-led to AI-driven decision-making has profound implications for business leaders.
Automation of Routine Decisions: Freeing Human Capacity for Strategic Work
One of the most immediate impacts of AI is its ability to automate routine, operational decisions. For example, AI can streamline supply chain optimizations, handle customer service interactions, and automate financial forecasting. By taking over these time-consuming tasks, AI allows human workers, particularly those in leadership positions, to focus on more strategic and creative endeavors.
A notable example is Amazon’s use of AI-driven supply chain management, which continuously optimizes inventory levels based on real-time data, reducing costs and improving efficiency. 3D Systems Corporation leverages AI to instantly provide service managers with key insights about customers at risk or workforce performance, guiding them on what to focus on without the need to sift through hours of data. This allows managers to make quicker, more informed decisions by highlighting the most critical information and trends, optimizing both time and resource allocation.
Enhanced Predictive Capabilities: Anticipating the Future with Greater Accuracy
AI’s ability to analyze vast amounts of data in real-time provides organizations with enhanced predictive capabilities. Unlike traditional analytics, which often focuses on past performance, AI systems can forecast future trends, customer behaviors, and operational risks with unprecedented accuracy.
For instance, Netflix uses AI algorithms to predict viewer preferences, guiding content creation and curation decisions that have significantly boosted user engagement and retention. In manufacturing, AI analyzes data from machinery to detect patterns indicating potential faults. This allows for proactive maintenance, reducing downtime and repair costs while extending equipment lifespan. For example, industrial machinery companies like Terex Corporation use AI to optimize machine performance and reliability.
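As a rough illustration of the fault-pattern detection described above (not any specific vendor’s implementation), the sketch below trains an off-the-shelf anomaly detector on simulated sensor readings; the feature choices, ranges, and thresholds are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated machinery telemetry: vibration (mm/s) and temperature (°C).
normal = np.column_stack([
    rng.normal(2.0, 0.3, 500),    # typical vibration
    rng.normal(65.0, 3.0, 500),   # typical temperature
])
# A handful of readings drifting toward a hypothetical failure signature.
failing = np.column_stack([
    rng.normal(4.5, 0.5, 10),
    rng.normal(82.0, 4.0, 10),
])

# Fit the detector on known-healthy data only.
model = IsolationForest(contamination=0.02, random_state=0).fit(normal)

# Score new readings: -1 flags a likely anomaly worth a maintenance check.
new_readings = np.vstack([normal[:5], failing[:5]])
print(model.predict(new_readings))
```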
Augmented Decision-Making: Enhancing Human Judgment with AI Insights
While AI can automate many decisions, its most significant contribution may be in augmenting human decision-making. AI systems can provide decision-makers with deeper insights, identify previously unseen risks, and suggest alternative scenarios. This augmentation improves the quality of strategic decisions, ensuring that leaders can navigate complex challenges with greater confidence.
In healthcare, for example, AI is being used to assist doctors in diagnosing diseases and recommending treatments, combining data-driven insights with the physician’s expertise to improve patient outcomes. For teams responsible for medical device uptime, AI can predict equipment failures and recommend preemptive maintenance, reducing the time it takes to troubleshoot an issue and, in turn, reducing equipment downtime. It can also optimize maintenance schedules based on usage patterns, ensuring devices are consistently available and performing well.
AI’s capacity for real-time data processing enables organizations to make decisions that adapt dynamically to changing conditions. Whether it’s adjusting pricing based on real-time demand or reconfiguring supply chains in response to unexpected disruptions, AI provides a level of agility that was previously unimaginable.
For example, Uber uses AI to dynamically adjust pricing in response to demand fluctuations, maintaining an optimal balance between supply and demand.
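As a toy illustration of demand-responsive pricing (a deliberately simplified heuristic, not Uber’s actual algorithm), consider the following sketch:

```python
def surge_multiplier(open_requests: int, available_drivers: int,
                     base: float = 1.0, cap: float = 3.0) -> float:
    """Scale price with the ratio of demand to supply, within a cap.

    Deliberately simple: real systems blend many signals (location, time,
    forecasts) and smooth price changes over time.
    """
    if available_drivers <= 0:
        return cap
    ratio = open_requests / available_drivers
    return min(cap, max(base, base * ratio))

def price(base_fare: float, open_requests: int, available_drivers: int) -> float:
    return round(base_fare * surge_multiplier(open_requests, available_drivers), 2)

print(price(12.50, open_requests=40, available_drivers=50))   # normal demand
print(price(12.50, open_requests=120, available_drivers=50))  # demand spike
```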
Challenges and Considerations: Navigating the Complexities of AI Integration
Despite its potential, integrating AI into decision-making processes presents significant challenges. Data quality remains a critical concern; AI systems are only as effective as the data they are trained on. Poor-quality data can lead to flawed decisions, making it essential for organizations to invest in robust data management practices.
Trust is crucial for the acceptance of AI-driven decisions. Organizations must make these decisions explainable and transparent, clearly communicating the reasoning behind them to build confidence among stakeholders. In a recent blog, I discuss how experienced professionals may resist AI, believing their expertise surpasses it. Leaders must address this defensiveness and promote adaptability, encouraging teams to embrace new insights and foster a more resilient organization.
Moreover, the role of human decision-makers is changing. While AI can take over certain tasks, human intuition, ethical considerations, and strategic vision remain irreplaceable. Leaders must find the right balance between human and AI input, ensuring that AI augments rather than replaces human judgment. This balance will be crucial in navigating ethical dilemmas and maintaining a human-centric approach to business strategy.
The Future of Decision-Making: Embracing AI as a Strategic Partner
Looking ahead, AI is poised to redefine the very nature of decision-making within organizations. Companies that successfully integrate AI into their decision processes will not only become faster and more efficient but will also gain a competitive edge in navigating complex, rapidly changing environments. The key to success lies in embracing AI as a strategic partner—leveraging its strengths while recognizing the irreplaceable value of human insight and creativity.
Going all in and allowing AI to make decisions can be a pivotal moment for many companies. Transitioning from people making data-driven decisions to AI making decisions with human oversight marks a significant shift. Roelof Botha of Sequoia Capital calls these “Crucible Moments”: inflection points where a choice you make today has an outsized bearing on your trajectory for years or even decades. It’s a concept that has deeply resonated with me since I completed the dy/dx program at Stanford a few weeks ago.
As we stand on the brink of this new era, the companies that will thrive are those that embrace AI not just as a tool, but as a transformative force reshaping decision-making from the ground up. By thoughtfully integrating AI into their strategic framework, businesses can unlock unprecedented efficiencies, foresee challenges before they arise, and pivot with agility in an increasingly complex world. The future isn’t about choosing between human or AI decision-making—it’s about creating a harmonious partnership where each complements the other, leading to more informed, impactful, and innovative outcomes.
Assaf Melochna is the President and co-founder of Aquant, where his blend of decisive leadership and technical expertise drives the company’s mission. An expert in service and enterprise software, Assaf’s comprehensive business and technical insight has been instrumental in shaping Aquant.
Aquant is proud to announce its mention in the 2024 Gartner Emerging Tech: Demand Growth Insights for Generative AI report. The report identifies Aquant as a vendor in the GenAI Virtual Assistant space.
Gartner highlights that, “Demand for generative AI is currently driven by large enterprises, with communications, media and services leading industry interest.” According to this Gartner research, “As the market matures, Gartner expects small and domain-specific language models will mostly replace their larger counterparts.” Gartner also stated that “GenAI products and capabilities that are designed with the use case and user in mind will outcompete their generalized counterparts.”
“We are thrilled to be mentioned by Gartner as a vendor in this rapidly evolving and competitive space, where our focus on delivering domain-specific intelligence sets us apart and drives tangible value for the organizations we serve,” said Assaf Melochna, President and Co-founder of Aquant. “We think this mention underscores our commitment to developing AI solutions that are not only filling a critical gap in the service industry but also highly attuned and tailored to the specific needs of our customers.”
Aquant’s generative AI solutions empower service leaders by transforming data into actionable insights, enhancing troubleshooting capabilities, decision-making, and operational efficiency. By mining a company’s service history, documentation, and expert insights, Aquant’s AI-driven platform closes the Service Expertise Gap™—the shortage of talent needed to solve complex industry issues—by providing tailored recommendations based on user skills, problem complexity, machine condition, and customer needs. This equips teams with expert-level capabilities, enabling exceptional, personalized service that delights customers, reduces costs, and allows businesses to focus on revenue growth rather than customer escalations.
Gartner, Emerging Tech: Demand Growth Insights for Generative AI, Danielle Casey, 27 August 2024. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
The Shift Left Strategy: A Game Changer for Service Organizations
In the manufacturing industry, where complex machinery and costly downtime can wreak havoc on operations, service leaders are constantly searching for ways to boost efficiency and cut costs. Enter the Shift Left strategy, a powerful approach that emphasizes resolving service issues closer to their origin through remote and self-service options, reducing the reliance on expensive field dispatches. Aquant’s latest report, “Unlock Hidden Savings: The Power of Shifting Left in Field Service,” dives into this strategy and reveals how companies implementing it are seeing significant financial benefits.
Based on data from over 100 service operations teams in the manufacturing sector, the report highlights that companies investing in personalized AI to implement the Shift Left strategy can achieve over $4 million in annual cost savings. These savings stem from reducing unnecessary technician dispatches, boosting First-Time Fix Rates, and enhancing other key performance indicators.
The Numbers Speak: Top Performers Reap Big Rewards
The report categorizes companies into top, average, and bottom performers, with top performers setting themselves apart through proactive use of personalized AI, which drives superior operational outcomes and cost reductions. Here are some key findings:
First-Time Fix Rate: Top performers boast an 88% first-time fix rate, significantly outpacing the median rate of 75%, and far surpassing the bottom performers at 59%.
Remote Solve vs. Remote Investigations: Top performers resolve 75% of issues remotely, compared to a median of 54% and just 27% among the bottom performers.
Avoidable Dispatch Potential: Leading companies have minimized avoidable dispatches to 3%, whereas the median stands at 8%, and the least efficient report 17%.
The Rise of Personalized AI: Meeting the Unique Demands of the Service Industry
The service industry is unique, and the tools it uses must be tailored to its needs. Gartner highlights that, “Demand for generative AI is currently driven by large enterprises, with communications, media and services leading industry interest.” According to this Gartner research, “As the market matures, Gartner expects small and domain-specific language models will mostly replace their larger counterparts.” Gartner also stated that “GenAI products and capabilities that are designed with the use case and user in mind will outcompete their generalized counterparts.”
Assaf Melochna, President and Co-Founder of Aquant, points out a critical gap in the market for such specialized AI tools. “Manufacturing relies on complex machinery, and breakdowns can lead to costly downtime. Field service teams need solutions that enhance human capabilities to troubleshoot intricate machines. Generic AI tools often fall short, delivering one-size-fits-all answers that lack the nuanced understanding of expert service professionals. Only personalized AI, crafted to replicate the unique insights of seasoned experts, can guide workers to the precise solution for each issue, ensuring a customized approach to every service challenge,” says Melochna.
Case Study: Shifting Left Saves $3 Million for a Medical Device Company
Aquant’s report also highlights a compelling case study of a leading medical device company that successfully implemented the Shift Left strategy. Despite its success in developing, manufacturing, and marketing medical devices, the company faced challenges like frequent equipment downtime, extended resolution times, inconsistent technician training, and limited visibility into critical service data.
By adopting the Shift Left strategy, the company reduced field events by 5% between 2021 and 2023, thanks to enhanced remote diagnostics and pre-visit troubleshooting. This led to a 3% increase in the First-Time Fix Rate, which improved efficiency, cut costs, and boosted customer satisfaction. The financial impact was substantial: the 3% boost in First-Time Fix Rate alone saved the company $3 million. Further minor adjustments, like a 1% shift toward early-stage resolutions and a 1% increase in remote resolutions, could save an additional $1.1 million and $1.5 million annually, respectively. These figures demonstrate the powerful financial and operational benefits of continued optimization.
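The report’s underlying inputs aren’t reproduced here, but the structure of such savings estimates is straightforward. The back-of-the-envelope sketch below uses entirely hypothetical volumes and costs; only the shape of the calculation, not the figures, is meant to be illustrative.

```python
# Hypothetical inputs: not the figures behind the report's results.
annual_service_events = 50_000
truck_roll_cost = 400.0          # cost of one field dispatch (USD)
remote_resolution_cost = 60.0    # cost of resolving the same issue remotely (USD)

def savings_from_ftfr_gain(events, dispatch_cost, ftfr_gain):
    """Each percentage-point gain in First-Time Fix Rate avoids repeat visits."""
    avoided_revisits = events * ftfr_gain
    return avoided_revisits * dispatch_cost

def savings_from_shift_left(events, dispatch_cost, remote_cost, shift):
    """Each percentage point shifted from field dispatch to remote resolution."""
    shifted_events = events * shift
    return shifted_events * (dispatch_cost - remote_cost)

print(f"FTFR +3 pts:  ${savings_from_ftfr_gain(annual_service_events, truck_roll_cost, 0.03):,.0f}")
print(f"Remote +1 pt: ${savings_from_shift_left(annual_service_events, truck_roll_cost, remote_resolution_cost, 0.01):,.0f}")
```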
Getting Started: Aquant’s Four Recommendations
To harness the power of the Shift Left strategy, Aquant recommends the following steps:
Commit to a Shift Left approach: Embrace early-stage issue resolution, prioritize remote diagnostics, and aim to reduce on-site interventions.
Organize existing data: Gather and organize all available data, even if incomplete, as a foundation for integrating AI models and uncovering immediate insights.
Invest in personalized AI tools: Choose AI solutions tailored to your industry’s specific needs. Personalized AI provides targeted guidance, outperforming generic tools in addressing service challenges.
Capture expert knowledge: Ensure that the expertise of your top service professionals is integrated into your AI system to retain and utilize critical, nuanced insights.
By adopting these strategies, service organizations can not only unlock hidden savings but also revolutionize their approach to field service, driving significant operational efficiencies and enhancing customer satisfaction. To learn more about the Shift Left Strategy, download Aquant’s latest report.
The increasing prevalence of AI-generated content on the internet is raising alarms within the AI community. Aatish Bhatia’s recent New York Times article highlights a growing concern: the risk of AI models collapsing when trained on data that includes their outputs. This phenomenon, known as “model collapse,” can lead to a degradation in the quality, accuracy, and diversity of AI-generated results.
As AI systems become more advanced and widespread, ensuring the integrity of their outputs becomes increasingly challenging. Bhatia’s article explains that as AI models ingest AI-generated content during their training process, a feedback loop can occur, leading to a significant decline in the quality of future AI outputs. Over time, this can result in AI systems producing less accurate, less diverse, and more error-prone results, ultimately threatening the technology’s effectiveness.
In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality. This drift can manifest in various ways, such as blurred images, repetitive and incoherent text, and a general loss of diversity in the generated content. For instance, an AI model trained on AI-generated images may start producing distorted visuals, while a language model might lose linguistic richness and begin repeating phrases.
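A minimal numerical sketch can illustrate the feedback loop: a toy “model” repeatedly fits a distribution to data generated by the previous generation’s fit, favoring typical outputs over rare ones, and the diversity of the data collapses within a few generations. This is an assumption-laden toy example, not a simulation of any particular AI system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data with plenty of diversity.
data = rng.normal(loc=0.0, scale=1.0, size=5000)

for generation in range(1, 11):
    # "Train" a toy model on the current data: estimate mean and std.
    mu, sigma = data.mean(), data.std()
    # The model generates the next training set itself, and (like many
    # generative models) it favors typical outputs over rare ones, so the
    # tails of the distribution are under-represented.
    samples = rng.normal(mu, sigma, size=5000)
    data = samples[np.abs(samples - mu) < 1.5 * sigma]
    print(f"gen {generation:2d}: std={sigma:.3f}")

# The printed std shrinks generation after generation: diversity is lost
# when a model keeps training on its own filtered outputs instead of
# fresh real-world data.
```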
The implications of this phenomenon are significant, especially as AI-generated content continues to flood the internet. Companies that rely on AI for critical tasks, such as generating medical advice or financial predictions, could see their models degrade if they don’t take proactive steps to avoid model collapse.
How to Avoid AI Model Collapse: Proven Strategies
Companies must adopt strategies that prioritize high-quality, diverse data to avoid model collapse. Here are key approaches:
1. Use High-Quality Synthetic Data
Synthetic data, when generated deliberately and validated by humans rather than scraped from AI outputs, is more reliable, diverse, and accurate than recycled AI-generated content. By grounding training data in human-produced and human-validated sources, companies can ensure that their AI models are trained on a solid foundation that reflects real-world complexities.
Aquant’s Approach: We enhance our models by combining historical and synthetic data. This approach enriches the dataset, allowing the model to learn more diverse patterns and scenarios, which improves its accuracy and robustness. By carefully generating synthetic data that complements the historical data, we prevent overfitting and ensure the model remains generalizable to real-world applications.
2. Careful Data Curation
Curating data carefully ensures that AI models learn from the most relevant and accurate sources. This helps maintain the quality and diversity of the AI’s output, preventing the model from drifting away from its intended purpose.
Aquant’s Approach: We carefully curate the data used to train our models, focusing only on what is necessary and relevant to each specific machine or business and avoiding irrelevant or noisy data. Beyond structured sources like service manuals and knowledge articles, we recognize that 30% of solutions to service challenges come directly from the expertise of seasoned technicians. To capture this valuable insight, we have a process for incorporating their knowledge. This targeted approach ensures a robust model that avoids collapse and stays highly effective.
3. Develop Industry-Specific NLP Models
Industry-specific natural language processing (NLP) models are tailored to understand a particular field’s unique language and context. This leads to more accurate and reliable AI outputs that are directly applicable to the industry’s needs.
Aquant’s Approach: We have developed an NLP model specifically designed to understand the language of the service manufacturing business. Our AI provides more relevant and accurate insights by focusing on industry-specific terminology and context. Our proprietary model is called “Service Language Processing.”
4. Continuous Human Oversight and Feedback
Human oversight is essential for identifying and correcting errors or biases in AI models. Continuous feedback from experts ensures that the AI remains aligned with real-world data and expectations, preventing unintended drift in its outputs.
Aquant’s Approach: At Aquant, our AI models are continuously refined with feedback from human experts, like technicians; their input is seamlessly integrated into the system each time they use the tool. This ongoing process keeps our AI accurate and aligned with real-world needs without requiring users to spend significant time on training or adjustments.
5. Limit AI’s Self-Referential Training
Avoiding the excessive use of AI-generated content in training future models is critical to prevent the feedback loop that leads to model collapse. By limiting self-referential training, companies can maintain the quality and diversity of their AI models.
Aquant’s Approach: We avoid training our models on AI-generated outputs, relying instead on synthetic data when the data is missing or lacking. This approach ensures that our AI models do not degrade over time.
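As a rough sketch of what limiting self-referential training can look like in a data pipeline (the record schema and source tags below are hypothetical, and a real pipeline would pair provenance metadata with automated detectors and human review):

```python
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    text: str
    source: str  # e.g. "technician_note", "service_manual", "synthetic", "ai_generated"

# Only human-produced or deliberately generated synthetic sources are allowed.
ALLOWED_SOURCES = {"technician_note", "service_manual", "expert_interview", "synthetic"}

def filter_for_training(records: list[TrainingRecord]) -> list[TrainingRecord]:
    """Drop records whose provenance marks them as model-generated output,
    so future training rounds don't feed the model its own text."""
    return [r for r in records if r.source in ALLOWED_SOURCES]

corpus = [
    TrainingRecord("Replace the inlet valve seal before re-pressurizing.", "technician_note"),
    TrainingRecord("Error E42 usually indicates a blocked condensate line.", "service_manual"),
    TrainingRecord("Auto-drafted summary of ticket #1234.", "ai_generated"),
]
print(len(filter_for_training(corpus)))  # 2: the AI-generated record is excluded
```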
Aquant’s approach to AI development exemplifies how to avoid the risks of model collapse. By leveraging high-quality data from expert technicians, carefully curating data to include only what is necessary and relevant, and developing industry-specific NLP models, Aquant ensures that its AI models deliver precise, actionable insights tailored to the unique needs of the service manufacturing industry.
In an era where the risk of AI model collapse looms, Aquant’s commitment to quality, relevance, and industry-specific expertise positions us as a leader in creating robust, reliable AI systems that stand the test of time.
Oded is the VP of Product and R&D at Aquant, where his passion for user experience, technology, and engineering guides his leadership of R&D teams and product design. With a clear vision for future products and cutting-edge technologies, Oded is dedicated to delivering compelling experiences that align with Aquant’s mission.
In 1991, Chris Argyris published his seminal article “Teaching Smart People How to Learn” in Harvard Business Review, exploring the challenges that highly intelligent and successful professionals often face when it comes to learning. Argyris argues that smart people, particularly those in leadership or specialized roles, can struggle with learning because they are not used to being wrong and tend to resist feedback that challenges their established ways of thinking.
Argyris introduced the concept of double-loop learning—where individuals must question and alter their underlying assumptions to truly change the way they think and learn.
Fast forward three decades, his insights remain profoundly relevant, particularly as we navigate the complexities of an AI-driven workplace.
AI: Amplifying Old Challenges, Introducing New Ones
AI is transforming the way we work, offering unprecedented opportunities for efficiency, innovation, and decision-making. However, with these opportunities come significant challenges, especially for the very individuals who have been traditionally celebrated for their intelligence and success. As Argyris noted, smart professionals often struggle to learn because their past successes have solidified their existing ways of thinking. This can lead them to become defensive, avoiding situations that challenge their perspectives or expose their mistakes. In the age of AI, this challenge is magnified.
AI systems are designed to optimize outcomes by processing vast amounts of data, but they operate within the boundaries of the algorithms and data they are trained on. For many smart professionals, this introduces a cognitive dissonance—they may find it difficult to trust a machine’s judgment over their own, especially when the AI’s recommendations challenge their established ways of thinking. Defensive reasoning, a concept Argyris explored, is now more relevant than ever. Professionals may resist AI not out of fear of the technology but out of a belief that their expertise surpasses that of the AI; individuals often rely on this type of reasoning to avoid the discomfort of admitting mistakes or ignorance.
Double-Loop Learning: The Path to AI-Augmented Success
Argyris’s concept of double-loop learning is critical in overcoming this resistance. Single-loop learning—where individuals make adjustments without challenging their underlying assumptions—will no longer suffice. In today’s AI-driven world, professionals must engage in double-loop learning, which requires a deeper examination of the assumptions behind both their decisions and the AI systems they use.
This isn’t just about accepting AI outputs at face value. It’s about professionals using their expertise to question and refine AI’s recommendations, ensuring that the insights generated are both relevant and ethical. By embracing double-loop learning, professionals can actively participate in training AI systems, personalizing and augmenting their outputs to better serve their unique needs. This approach not only enhances AI’s effectiveness but also empowers professionals to maintain their role as critical thinkers and innovators.
Leadership’s Role in Fostering a Learning Organization
Argyris emphasized that learning requires a culture where individuals feel safe to question themselves and their processes. In the age of AI, this cultural foundation is more crucial than ever. As leaders, we have a responsibility to create an environment where AI is not seen as a threat, but as a tool for growth and innovation.
Leaders must address the defensiveness that can arise when AI challenges long-held expertise. This involves promoting a culture of openness, where mistakes are viewed not as failures, but as opportunities for learning and improvement. By modeling curiosity and a willingness to adapt, leaders can encourage their teams to engage in double-loop thinking, thereby fostering a more dynamic and resilient organization.
A Call to Action for Leaders
The insights from Chris Argyris’s original work remain a powerful guide for today’s leaders. In the face of AI’s growing influence, the need for double-loop learning has never been more urgent. Smart professionals must not only adapt to working with AI but must also actively engage in shaping how AI is integrated into their work. This requires overcoming the natural defensiveness that comes with the disruption of established expertise.
While AI excels at data processing and pattern recognition, it lacks the nuanced understanding of human emotions, ethics, and social dynamics. Critical thinking, emotional intelligence, and interpersonal skills are essential for complementing AI’s capabilities, enabling more effective collaboration between humans and machines. This synergy can lead to outcomes that surpass what either humans or AI could achieve alone.
As leaders, we play a critical role in cultivating a learning organization that embraces AI. By encouraging double-loop thinking and fostering a culture of openness and inquiry, we can ensure that AI serves as a catalyst for growth rather than a source of resistance. In doing so, we prepare our organizations not just to survive, but to thrive in the ever-evolving landscape of the AI age.
Artificial General Intelligence (AGI) and Artificial Superhuman Intelligence (ASI) are coming. There’s a lot of excitement in the air about AGI and ASI: AI capable of handling any task at least as well as a human, and eventually far better. Some experts even predict we’ll hit this milestone in the next five years.
In the last decade or so, the world of AI has been on a path toward achieving superhuman intelligence. At the start of 2014, Google paid more than $600 million for DeepMind, a machine learning startup specializing in reinforcement learning, and DeepMind’s subsequent development of AlphaGo marked a significant milestone. More recently, since the onset of ChatGPT, large language models (LLMs) have surged into our lives, demonstrating impressive capabilities. However, the journey toward superhuman intelligence continues. The key to reaching this goal lies in combining reinforcement learning (RL) with LLMs, as LLMs alone are insufficient.
The integration of super-scaled RL with LLMs provides a clear pathway to achieving superhuman intelligence. As a result, AI models will become more personalized and less generic. We call this “Personalized AI.”
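How RL signals get combined with an LLM varies widely in practice. As a deliberately simplified illustration of the core idea, a reward signal steering generation toward preferred outputs, here is a toy best-of-n selection loop with a mocked generator and a hand-written reward function. Every name and heuristic in it is hypothetical; real systems learn the reward model from human or domain feedback and fine-tune the LLM against it rather than just filtering its outputs.

```python
import random

def mock_llm_generate(prompt: str, n: int) -> list[str]:
    """Stand-in for an LLM sampling n candidate answers for a prompt."""
    canned = [
        "Reset the controller and retry.",
        "Check the hydraulic pressure sensor, then inspect hose H-12 for leaks.",
        "It might be broken.",
        "Inspect hose H-12; if pressure is low, replace the sensor per manual 4.3.",
    ]
    return random.sample(canned, k=min(n, len(canned)))

def reward(prompt: str, answer: str) -> float:
    """Toy reward: prefer specific, actionable answers.
    In a real system this would be a learned reward model trained on
    expert or user feedback, not a hand-written heuristic."""
    score = 0.1 * len(answer.split())            # specificity proxy
    score += 1.0 if "H-12" in answer else 0.0    # mentions the relevant component
    return score

def best_of_n(prompt: str, n: int = 4) -> str:
    candidates = mock_llm_generate(prompt, n)
    return max(candidates, key=lambda a: reward(prompt, a))

print(best_of_n("Excavator won't lift its boom under load. What should I check?"))
```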
Understanding Personalized AI
Unlike generalist AI models, which are built to handle a broad range of tasks using massive datasets, Personalized AI models are fine-tuned for specific industries, tasks, and, in many cases, specific users. These models use curated datasets to perform specialized functions more accurately and efficiently. The strength of Personalized AI lies in its ability to understand and adapt to the unique needs of different situations, making it highly practical for industry-specific applications, with behavior that adapts dynamically to each specific situation. Consider a personalized virtual AI agent helping a customer rebook a delayed or canceled flight: the service this virtual agent provides must be tailored to that customer’s specific needs. Similarly, imagine a patient arriving at the ER and receiving fast triage from a virtual doctor; the patient’s medical history must be considered when assessing the situation.
Why Personalized AI Matters for AGI and Superhuman Intelligence
I’m a big believer in the role of Personalized AI in our quest for superhuman intelligence. While generalist LLMs give us a broad knowledge base, they often lack the depth needed for nuanced, industry-specific decision-making and automation. Personalized AI models, on the other hand, are designed from the ground up to tackle the complexities of specific tasks and situations, offering a solid foundation for developing superhuman intelligence.
By perfecting these specialized models, we can equip next-generation AI algorithms with high-quality, relevant data to make informed decisions and handle complex tasks and situations.
The Benefits of Superhuman Intelligence for Industries
Superhuman Intelligence isn’t just about matching human performance; it promises to revolutionize industries by enhancing human abilities and driving incredible efficiency and innovation. It could lead to quicker and more accurate diagnoses, personalized treatment plans, and better patient outcomes in healthcare. Superhuman Intelligence could also streamline manufacturing production processes, cut waste, and improve quality control. The possibilities are endless, and the impact on workers and businesses could be profound.
At Aquant, we’re excited about this next phase of AI because it will allow the folks responsible for repairing and maintaining complex equipment and machinery to do their jobs more effectively and reduce the need for field visits. This means faster diagnostics, quicker solutions, and less downtime for critical equipment. By harnessing the power of Personalized AI, we can empower service teams to anticipate issues before they become problems, streamline repair processes, and ultimately enhance operational efficiency. This saves time and resources and ensures that machinery always runs at peak performance, benefiting businesses and their customers.
Challenges and Ethical Considerations
While pursuing the next step in AI is thrilling, it comes with its own challenges and ethical considerations. Developing Superhuman Intelligence means overcoming significant technical hurdles and addressing concerns about data privacy, security, and the potential for AI to displace human jobs. It’s important to approach this journey responsibly, ensuring that Superhuman Intelligence’s benefits are realized in a way that safeguards societal well-being and promotes ethical use of technology.
Looking Ahead
As we push toward AGI and superhuman intelligence, focusing on Personalized AI models offers a promising pathway. By leveraging specialized data and industry-specific insights and handling unique situations, we can create AI systems that excel in their designated tasks and pave the way for AI’s broader capabilities. At Aquant, we’re committed to driving this innovation forward, harnessing the power of AI to transform industries and improve lives.
Get in touch to learn more about how we’re doing this!
Assaf Melochna is the President and co-founder of Aquant, where his blend of decisive leadership and technical expertise drives the company’s mission. An expert in service and enterprise software, Assaf’s comprehensive business and technical insight has been instrumental in shaping Aquant.
Aquant is proud to announce its recognition as one of the 2024 CRM Top 100 Companies in Customer Service, Marketing, and Sales by Destination CRM. This recognition highlights Aquant’s commitment to elevating customer interactions and operational efficiencies in the manufacturing industry through AI.
The sixth annual “CRM Top 100” issue by Destination CRM recognizes the leading technology providers across the three pillars of CRM—customer service, marketing, and sales. This year, the spotlight remains on generative AI and its practical impact on CRM, from crafting instant customer responses to providing real-time support for agents, delivering insightful marketing analytics, and automating sales processes.
Aquant’s inclusion in this list underscores its key role in revolutionizing AI in customer service. Aquant’s platform provides actionable troubleshooting recommendations and business insights that enable service teams to anticipate and address customer needs proactively. By transforming raw data into valuable intelligence, Aquant helps service teams make informed decisions, resolve equipment issues in the most efficient way possible, and drive revenue growth.
“Repairing complex equipment like MRI machines or tractors is no easy task,” said Assaf Melochna, President and Co-founder of Aquant. “And as service pros retire and the skilled talent pool shrinks, more service leaders need AI to help their teams do more with less. However, the market is flooded with tools offering generic, unreliable recommendations that worsen service quality. What they need is a platform that operates like their best service pros—providing answers that consider the complexity, context, environment, and history of each machine, personalized for every user and scenario. That’s why we created Aquant.”
Aquant’s advanced AI technology ingests and analyzes your machines’ service history, documentation like service manuals and tutorial videos, unstructured data like technician notes, and the knowledge of your top performers. The platform then delivers trusted answers, tailored to fit the skill set of every user – from service leaders to field technicians to contact center reps to your end customers for self-service.
In a recent episode of Emerj’s AI in Business Podcast, Matthew DeMello interviewed Edwin Pahk, Aquant’s SVP of Customer Success and Pre-Sales, about how AI can help enterprises retain tribal knowledge amidst turnover and expertise gaps. They discussed the challenges of digital transformation, emphasizing the need for AI to be integrated behind the scenes before becoming customer-facing.
Edwin highlighted the importance of personalizing AI for employees first, especially in complex industries like healthcare and heavy machinery, where understanding the unique needs of different equipment and environments is crucial. Capturing and converting the valuable knowledge of subject matter experts (SMEs) into data for AI use is essential to ensure accurate and practical solutions.
He also addressed the organizational changes needed to adopt AI effectively. With the labor force in technical roles shrinking, companies should focus on hiring individuals with excellent soft skills and customer service experience. AI tools can make jobs more enjoyable by reducing routine tasks and helping to retain employees.
Edwin emphasized the importance of looking for candidates with problem-solving skills, curiosity, and strong communication abilities, as these are becoming increasingly important in evolving technical roles. The conversation underscores the need for a people-focused approach in AI implementation, integrating human expertise to enhance employee and customer experiences.
Matthew: Welcome everyone to the AI in Business podcast. I’m Matthew DeMello, senior editor here at Emerj Technology Research. Today’s guest is Edwin Pahk, senior vice president of customer success and pre-sales at Aquant. Aquant is an AI-powered tech company that builds a co-pilot platform for service workflows.
Edwin joins us on today’s show to discuss how AI can help enterprises from every industry retain their organizational knowledge, or tribal knowledge as it is called, in the face of turnover and other expertise challenges. Throughout the episode, Edwin draws from his experience in the service industry to emphasize the importance of personalization on both sides of B2B workflows.
He also shares how these same tools can often enhance the work-life experiences of subject matter experts across workflows. Today’s episode marks the first in a series sponsored by Aquant, and without further ado, here’s our conversation.
Matthew: Edwin, thank you so much for being on the program with us this week.
Edwin: So excited to be here. Thank you for having me.
Matthew: Absolutely. I’m a managing editor here at Emerj. So, aside from lending my game show voice to these podcasts, that’s the predominant amount of my day. We had a writer pen an introduction for an article that cited reports from the OECD and APQR, stating that around 70-80% of digital transformations either fail or don’t meet expectations in a way that can’t even be reconciled with a philosophical approach to ROI.
We’re seeing across the industry and the hype cycle that folks were too quick to jump into the pool’s deep end. We want everybody to jump into the pool, but ensuring it’s the shallow end is essential. And once you jump in, maybe take some time.
But what do you think is going on there regarding the larger dynamic?
Edwin: The last 12 to 24 months have been very telling for us both as an organization and, quite honestly, the entire market. I don’t wanna say the inception, but the emergence of technologies like ChatGPT has created a bit of a feeding frenzy on, “Hey, how are we gonna use AI? Let’s use AI here, let’s use AI there.” It’s created this giant storm of figuring out where to plug something like this without much thought.
So, first of all, the question would be: are you delivering on a use case that makes sense and will improve the experience from an employee and customer perspective? Data security and whether the predictions are accurate should also be considered. These types of things are the second question you ask after you go down this journey, for the most part.
The best way I can create an analogy around this is, let’s say you look at some of the best use cases of machine learning and AI—for example, a recommendation engine to tell you what the next best episode or best series for you to watch, or Amazon recommending to you what you should add to your shopping cart. In both examples, if you get it right, that’s amazing. If you show another video that someone likes, it’s a positive outcome. But if you don’t show something someone likes, it’s not the end of the world either.
In our space, especially in customer service, and when you’re solving problems and troubleshooting equipment, getting it wrong is bad. We’re talking about supporting large construction equipment, diesel engines, and medical devices used in operations. So, getting it wrong—or getting a false positive response—is extremely consequential.
As a result, many of those types of outcomes cannot be tolerated. When it comes to digital transformation in our space, there is often a higher bar for the application of AI. That’s the first thing.
The second thing—a bit more of a standard thing that everyone can appreciate and agree upon—is that when you introduce something like AI, many people think of the Terminator or think about their jobs being removed.
A lot of the failure in digital transformation also comes from this: how do you show the people who are supposed to use it what’s in it for them? I always like to say that it doesn’t matter if you create the best technology in the world. If no one uses it, it doesn’t mean anything. So there’s a big piece of this that is not just about changing the mindsets of the executives, the leaders, and the IT folks and getting them to use AI. You must convince the call center agent that it will help them do their job and improve their quality of life.
So these are some of the critical things and themes that we, as an organization, see when it comes to why digital transformations fail, especially with this massive word called AI that seems to be sitting out there for everyone to try to pounce on.
Matthew: Yeah, in an industry-agnostic sense. This is the take-home for everybody listening. If you’re expecting a model that works right out of the box and is customer-facing, that’s a dream world that doesn’t exist—that should not be trusted. You want to be skeptical walking into that room and want a long sales process so they can explain it.
From what we hear across the board, not only from your last answer but with so many of our guests—if you’re not integrating it behind the curtain first… if you’re not acclimating that model to your organization first, before you acclimate it to your customers—doesn’t that kind of also fly in the face of how you train your human employees? You have to train them first on what the organization’s all about. They’ve got to know what McDonald’s is before they’re working the phones in the aisle, offering people extra fries. They’ve got to know what McDonald’s is all about. They’ve got to know the golden arches. They’ve got to know Ronald.
You also mentioned how important personalization is, as it tends to get called in a few other industries—at least once you get those models in front of customers and tailor them to their experience.
Let’s narrow this down to the B2B crowd. In B2B, especially heavy industry field services, if you’re supplying machines that keep people’s hearts beating, those machines can’t fail. Those machines cannot enter a three-week-long customer service queue and be put at the bottom of the list. They need those machines never to break down. You’re forced to come to real grips with the fact that you don’t need a personalized system for your customers first. You need a personalized system for your employees first.
Are there reasons beyond that why a generic AI model—one that’s not personalized to the employees and the customer—might not work for service teams dealing with complex machinery?
Edwin: Yeah, it’s a really good point. When you think about the concept of generic AI, there is an understanding that AI is being fed by a specific fuel. It ultimately drives AI’s decision-making process.
What it comes down to—especially in our space, and it does apply to other spaces as well—is looking at personalization from a slightly different angle than how you might experience it in your daily life.
As a consumer, you get personalized ads sent to you, given your location in the world. A whole number of different factors are added up to create the experience that you feel. For us, our customers, the machines that they’re supporting, and all of these technologies that we’re supporting, it’s slightly different.
What is personalization? Let’s explore the concept. For example, an MRI machine in a Southern California hospital versus a rural Texas clinic will differ. The things you need to do to operate and support those machines are very different—even though they are the same.
We like to talk about servicing and troubleshooting problems on many of these devices as a beautiful orchestration of chaos. There’s the call center agent asking the customer’s questions. There’s the machine itself, the service history of that machine, and what it’s being used for. There are the parts that have been replaced in that machine, and whether those parts are high quality or not, or from one place or another. Plus, consider the field technician that goes out there. These different factors create a unique situation whenever someone calls about a problem.
It’s not as simple as saying, “Hey, my phone’s broken. Did you try turning it off and on?” — you know how everyone says that. It’s much more about, “What have you been doing on your phone? When did you last go into Best Buy to get it repaired?” These things factor into a decision-making process that AI needs to take advantage of to provide the most effective recommendation.
Personalization is about taking all of these unique events and creating a situation or some assistive technology that allows the employee to make the best decision possible given a specific situation. That’s the critical aspect of personalization in our space here. Hopefully, that gives you some sense of the different variables that are factored into creating the right experience for our customers.
Matthew: Makes a lot of sense. I think it’s even essential for our listeners. This jargon—bespoke models and foundational models—used to be commonplace a year ago. I’ve since heard these terms fall by the wayside. When we talk about the generic models playing into that old jargon, you know, you do want a foundational model, a more general model for your organization, still tailored to the organization. My question for everything you were saying a moment ago about personalization to the employees is about getting that buy-in from the factory shop and the SMEs on the ground floor. Is that the front door? Is that the only front door, or how best do we tap into that organizational or tribal knowledge through data that avoids these problems?
Edwin: It’s a really good question. In the end, if you think about the value of an organization or a company, the unsung heroes are those subject matter experts who have been in your organization for the last 20 years. They seem able to whisper into any machine or system, figure out the problem, and get it back up to running.
The thought process and idea is this: we’ve been in business for a few years, and roughly 20-30% of the identified problems and solutions within your organization exist only in the experts’ minds. The experts add their specific way of solving a problem. Their specific way of observing how to get things done doesn’t exist in the data anywhere—it exists in people’s minds.
One of the critical things we realized going forward—and if you remember when I talked about the concept of a higher bar here—is that false positives are unacceptable for us. Our ability to convert expert thought processes, translating what they deem the right solutions, into data that can be interpreted by AI is critical. This separates what we do from other organizations to drive success in these situations.
The entire industry is facing it. People are retiring. People aren’t staying in the same jobs as they used to—you don’t have the 15- to 20-year veterans anymore, the subject matter experts who will be there forever. People typically come in for a couple of years and maybe leave. So, how do you institutionalize that tribal knowledge and transform it into data that an AI can then sustain through the course of people flowing in and out of the organization?
It goes beyond just getting a manual. If you talk to any customer service agent or technician, someone who’s servicing something, they’ll tell you the manual is good, but 50% of the time, it doesn’t represent what happens in real life. As a result, this is the reason behind promoting and creating this bespoke model that is accurate and produces what we call best practice here in terms of how to solve these types of problems. A critical portion of that is consuming that tribal knowledge and converting it into data that AI can then interpret.
So, that’s a big part of why we feel strongly about why it’s needed in this industry.
Matthew: Absolutely. Let me ask you a question about that 20-30%. Is that permanent, or does that wax and wane with more advanced technologies? Can you make a dent with personalized systems and AI geared toward gaining that tribal knowledge?
Edwin: Absolutely. You’re right. So, when I’m usually talking about it, that’s the initial instance of getting the model up and running. But once it’s actually live, the ongoing improvement comes from how the model reacts to the feedback being leveraged within it.
With ongoing use of our tools, that gap decreases over time. That’s not to say there won’t always be something someone figures out on their own that never shows up in the data. We try to make it as easy as possible to continuously add that knowledge. So it’s a living, breathing thing rather than a static manual.
Matthew: To gain this tribal knowledge, as it gets called, from the organization, we need to personalize our systems—and you want to personalize them to the employees first; the customer-facing piece usually comes last. Tell us a little bit about what that process is like: adopting a model that will glean that information and produce something that can be customer-facing.
Edwin: It’s really important to understand how we’ve evolved in our thought process around using AI. And quite honestly, it’s lessons learned from our journey as a company as well.
We used to think, almost naively, that we could put all of your historical service ticket data, case data, contact center scripts, chats, and all this stuff into our engine and create a recommendation or problem-solving engine out of it—that was the original hypothesis. That approach of using your basic historical data—similar to how a generic AI system or model would use it—did not get the desired results. That was a big part of how we realized the need for tribal knowledge and for incorporating it into something more specific and approachable.
I mentioned the context of a false positive as an example. We examine all of your historical data first. We then start to understand which problems are stated in a particular case and what solutions were employed. We make that our foundation.
Then we go through a process of leveraging subject matter experts. We get their opinions: “Hey, when you see this type of problem, how often will this fix it? What’s the best thing to do here, given this situation?” We go through the process of capturing that opinion and expertise in a way that translates into data for AI to interpret. So it’s not what you would call a hard-coded manual or step-by-step process. It’s more of a, “Hey, given the set of circumstances you’re seeing, what do you think is the most likely solution to this problem?”
We then combine all these things and factor in that asset’s history, part replacements, and quality. We factor in the technician who performed the fixes because we all know that there will always be people who are great at what they do—and there are some people who are newer and not so great at what they do. We factor everything in to help you ultimately make the best decision possible.
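To make that concrete, here is a minimal, hypothetical sketch of the kind of blending Edwin describes: historical fix rates from ticket data, expert-elicited priors, and contextual signals such as asset history and technician skill combined into one ranked list of candidate fixes. This is not Aquant’s actual implementation; every name, field, and weight below is an illustrative assumption.

```python
from dataclasses import dataclass

# Hypothetical illustration only: blend historical success rates, SME-elicited
# priors, and contextual signals into a single score per candidate fix.
# All field names and weights are assumptions, not Aquant's real model.

@dataclass
class CandidateFix:
    name: str
    historical_success_rate: float  # share of past tickets this fix resolved
    expert_prior: float             # SME estimate: "how often will this fix it?"
    asset_history_penalty: float    # higher if this fix already failed on this asset
    technician_skill: float         # 0-1 proficiency of the assigned technician

def score_fix(fix: CandidateFix,
              w_history: float = 0.4,
              w_expert: float = 0.4,
              w_context: float = 0.2) -> float:
    """Weighted blend of data-driven and expert-driven signals."""
    context = fix.technician_skill * (1.0 - fix.asset_history_penalty)
    return (w_history * fix.historical_success_rate
            + w_expert * fix.expert_prior
            + w_context * context)

def rank_fixes(candidates: list[CandidateFix]) -> list[CandidateFix]:
    # Highest score first: the fix the assistant would surface to the technician.
    return sorted(candidates, key=score_fix, reverse=True)

if __name__ == "__main__":
    candidates = [
        CandidateFix("Replace control board", 0.35, 0.70, 0.0, 0.8),
        CandidateFix("Recalibrate sensor", 0.55, 0.30, 0.2, 0.9),
        CandidateFix("Power cycle unit", 0.60, 0.10, 0.5, 1.0),
    ]
    for fix in rank_fixes(candidates):
        print(f"{fix.name}: {score_fix(fix):.2f}")
```

The point of the sketch is the design choice Edwin outlines: expert judgment is treated as data with its own weight rather than as a hard-coded rulebook, so it can be combined with, and corrected by, what the historical record actually shows.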
You see an evolution here: from very basic, generic approaches of taking data and producing the most common result, to incorporating a deeper level of tribal knowledge and inserting the additional context of the work for the worker, the call center agent, the part, the asset…
That creates a personalized experience that ultimately allows our customers to get value from our actions. Hopefully, that gives you some context regarding our philosophy and how we approach the process.
Matthew: Now, going back to the use case you mentioned before about retaining that expertise—I think even your CEO, Shahar, might have mentioned it when he was on the program—that can make for an even tougher sell to subject matter experts, right? Because it sounds like, “Okay, so you want to clone me into a ghost that has all my knowledge and expertise, and then you don’t need…”
So, I’m wondering what you find works to get that SME buy-in and to let people know that this will enhance their work lives, not hold them back.
Edwin: I would say it’s a challenge for many folks—especially, to your point, when they hear AI and are asked to put a lot of their thinking into it. It’s like, “Are you just trying to clone me so you don’t need me anymore?”
It does depend. There are various motivations and desires, depending on who you’re talking to from a subject matter expert perspective, but I can tell you a couple of the common themes. For example, subject matter experts love to work on difficult problems and difficult puzzles. To a certain extent, that’s the motivation that keeps them in their job; they just love to work on these things. What they don’t like is getting called up to deal with situations where the answer is just “try turning it off and on again.”
So one of the ways we talk about it with subject matter experts is, “Hey, if you’d like people to stop calling you about those kinds of things—so you can focus on the really interesting stuff—this is what this tool is supposed to help you with. It can free you up to work on truly unique and difficult fixes. AI is not going to solve everything in the world, you know, but we propose AI to handle all the simple things. That way, you can work on the cooler things.”
But others have different motivations. We have subject matter experts who are passionate about building their knowledge into the system because they see it as creating a more significant impact throughout the organization: all of a sudden their learnings are being shared and making a positive impact, which elevates them as well. We’ve also seen people get promoted for participating in projects like this. So there are various desires and wants. And quite honestly, some people simply don’t want to do it—so there are those elements as well. It’s an interesting journey, but a people-focused one, and I think that’s what gets lost in this a lot of the time.
Matthew: Yeah, absolutely. When I was at an AI vendor specializing in global taxes, we used to say that if we could take away a lot of those manual tasks, the work would feel like an art form rather than plug-and-chug. You can take a more artisan view of your workflows when you think of yourself as an artisan rather than just the guy who plugs things in.
Now, let’s assume that’s a compelling sell. This means an entirely different kind of organization, right? It’s a fairly radical change of workflows, especially because you no longer have to worry about expertise being lost as people retire, change jobs, and so on.
Let’s take it from the top on down. What does the organization look like from management’s perspective, or at least from the top? We can then work our way down to the subject matter experts. For an organization that’s taking full advantage of tribal knowledge, what does that look like?
Edwin: We have a few customers undergoing this transformation process. One of the critical things faced across the board is that the labor force in this space is shrinking; fewer people want to take on these types of jobs. So from a leadership perspective, the question becomes: I can’t hire someone who already has medical device experience, because there’s only a finite pool of those individuals, so I need to change my thinking about who I hire. With the concept of AI-assisted Co-Pilots, it’s less about figuring out how to turn a wrench and more about other kinds of skills, like interacting with customers and building relationships.
We often hear customers and leaders discussing how to find someone. They say, “I’d be more than willing to hire someone from Chick-fil-A who has great customer service skills and some affinity for turning a wrench, and I’ll turn them into a technician.” That’s one of the significant mindset changes our customers are now pushing forward with, because they see the future here: a shrinking labor supply, growing demand, and growing complexity in the products.
If you go further down the pipe, looking at the individuals and the technicians themselves, you’ll see a younger generation with an affinity for technology. Smartphones and AI are alluring to them because they’re using something that can assist them and help them get better. I don’t think there’s anything worse than being in a position where you’re asked to solve a problem and don’t know how—it’s not a great feeling. These tools make that job more enjoyable and also potentially keep them around.
You’d think it might not be that big of a motivator. Still, it’s very interesting to see how, in different organizations, this is one of the major attraction points or retention factors. It’s really about the tools and the things you can do and the cool stuff you can work with to make your job somewhat enjoyable.
Matthew: Absolutely. Well, you were bringing up problem-solving before. For the best SMEs—the ones who do stay with an organization forever—it’s not that they’re the best technicians or that they have all these metrics and wins up on the board. It’s that they love solving problems.
You mentioned hiring somebody from Chick-fil-A or any kind of background. Now it seems more critical to look for that zeal for problem-solving on a resume than for technical expertise. Are there any other skills organizations should seek in their future hires?
Edwin: What used to be a very technical role has become an experience or customer service role. From that perspective, many of the things you don’t find on a resume are soft skills: communication, presentation, social skills. The world is becoming more contactless, so the touchpoints these folks have with customers often end up being the only human contact the customer has with that organization. They represent a significant portion of the brand experience now, more than ever before, and they’re being asked to do things like upselling and increasing wallet share. You’ll also start seeing sales skills come into play here.
It’s a combination of these soft skills that don’t always appear on a resume but should be factored into your decision-making process. With technology and the ability to train and coach individuals, you can always coach the technical skills; coaching things like curiosity is very difficult. I think many Human Resources and People departments are trying to figure out how hiring can create a competitive advantage.
Matthew: We know that with Co-Pilot technology—AI-enhanced software behind you, guiding what you do—the technical details can be worked out. Thank you so much, Edwin, for being with us on the show this week. I think it’s been illuminating for the audience.
Edwin: As always, Matt, I appreciate the time. I’m excited to see what the future holds for us!
Matthew: To wrap up today’s episode, stay tuned for a conversation with Scott Burdett, Global Division CIO of Measurement and Analytics at ABB, about AI’s role in retaining organizational knowledge despite workforce challenges.
Scott expands quite a bit on what Edwin had to say in today’s show about what that will mean for field operation workflows, especially as we bring more visual mediums into them. Very fascinating stuff. And as an excellent primer for today’s episode, don’t forget to check out the November 22, 2023, episode of the AI in Business podcast featuring Edwin’s boss and friend of the show.
The episode, AI Solutions for B2B Customer Experiences, features Shahar Chen, CEO of Aquant. Shahar discusses generative AI-enhanced co-pilot platforms’ role in improving B2B customer experiences in field service across multiple industrial sectors, from heavy industry to healthcare.
On behalf of Daniel Faggella, our CEO and Head of Research, and the rest of the team here at Emerj Technology Research, thank you so much for joining us today. We’ll catch you next time on the AI in Business Podcast.
About the Author
Micaela McPadden, Public Relations and Communications Manager, Aquant
Micaela is the Public Relations Manager at Aquant, overseeing the company’s PR, media, and organic social media strategies. She also produces Aquant’s Service Intel Podcast, contributing to the brand’s voice and outreach in the industry. For all media and press inquiries, don’t hesitate to get in touch with her at [email protected].