Over the past year, we have frequently received inquiries about the impact of generative artificial intelligence (AI) on engineering. As a company dedicated to developing advanced knowledge systems for engineering, we have given this topic significant attention. Although generative AI is evolving rapidly, with notable new developments appearing daily, we have begun to draw some important conclusions.
Generative AI is highly proficient at interpreting natural-language prompts and summarizing answers, provided that the underlying large language model (LLM) has access to relevant information in the domain. Applied to engineering, for instance, if an engineer asks a technical question about a preferred design technique, the LLM can provide a useful technical summary as long as it has access to the relevant engineering specifications, standards, and lessons learned: in other words, a domain-specific knowledge base.
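To make that pattern concrete, here is a minimal sketch of grounding an engineer's question in excerpts retrieved from such a knowledge base before asking the model to summarize. The example documents, the naive keyword scoring, and the `llm_complete` placeholder are illustrative assumptions for this sketch, not any particular product's API; a real deployment would plug in its own search or vector index and LLM client.

```python
# Minimal sketch: ground an engineer's question in excerpts from a
# domain-specific knowledge base before asking the model to summarize.
# The documents, scoring, and `llm_complete` helper are illustrative
# placeholders, not a specific product integration.

KNOWLEDGE_BASE = [
    "Spec A-102: preferred fastener torque values for aluminum joints.",
    "Lesson 2021-14: avoid adhesive X above 80 C; bond strength degrades.",
    "Standard S-7: weld inspection criteria for pressure vessels.",
]

def retrieve_relevant_excerpts(question: str, top_k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a production system would use
    search or vector similarity over the curated knowledge base."""
    words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def llm_complete(prompt: str) -> str:
    """Placeholder for whichever LLM API the organization uses."""
    raise NotImplementedError("Plug in your LLM client here.")

def answer_engineering_question(question: str) -> str:
    excerpts = retrieve_relevant_excerpts(question)
    context = "\n".join(f"[{i + 1}] {text}" for i, text in enumerate(excerpts))
    prompt = (
        "Answer the engineering question using only the numbered excerpts "
        "below. Cite excerpts by number, and say so explicitly if they do "
        "not cover the question.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```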
In the near term, however, three main challenges arise in this generalized use of AI for engineering support:
- Coverage Gaps: If the content an engineer needs isn’t included in the generative AI’s accessible knowledge base, how will this affect the response?
- Handling Conflicting Information: In engineering, conflicting constraints are common. If the AI has access to relevant materials that contain conflicting information, how will it summarize those conflicts, and how readily can the engineer discern and understand them?
- Outdated Information: A critical issue is how the AI handles outdated standards or lessons that remain in its knowledge base but are no longer valid. How does it discern and omit obsolete information?
In the near term, our conclusion is that while generative AI and LLMs are valuable for engineering, they still require a well-architected, curated knowledge base to draw from. A curated knowledge base ensures comprehensive coverage of important topics, provides discrete threads that make conflicts visible, and, most importantly, systematically obsoletes outdated information. With today’s generative AI tools and libraries, integrating a private structured or semi-structured knowledge base is straightforward.
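As a rough illustration of what that curation buys you, the sketch below assumes each knowledge-base record carries metadata such as a status flag, an effectivity date, and a conflict-thread identifier. Filtering and grouping on that metadata before anything reaches the LLM is what systematically excludes obsolete content and keeps conflicting guidance visible as discrete threads. The field and helper names are assumptions made for this sketch, not the schema of any particular system.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative record schema for a curated engineering knowledge base.
# Field names are assumptions made for this sketch, not a product schema.
@dataclass
class KnowledgeRecord:
    record_id: str
    text: str
    status: str                      # e.g. "active" or "obsolete"
    effective_until: Optional[date]  # None means still in effect
    conflict_thread: Optional[str]   # groups records that disagree on a topic

def current_records(records: list[KnowledgeRecord], today: date) -> list[KnowledgeRecord]:
    """Drop obsolete or expired records before retrieval, so superseded
    standards and lessons never reach the LLM."""
    return [
        r for r in records
        if r.status == "active"
        and (r.effective_until is None or r.effective_until >= today)
    ]

def group_conflicts(records: list[KnowledgeRecord]) -> dict[str, list[KnowledgeRecord]]:
    """Group records by conflict thread so disagreements are surfaced to
    the engineer as discrete threads rather than silently blended."""
    threads: dict[str, list[KnowledgeRecord]] = {}
    for r in records:
        if r.conflict_thread:
            threads.setdefault(r.conflict_thread, []).append(r)
    return threads
```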
In the long term, LLMs will be purpose-built for engineering, pre-equipped with fundamental engineering knowledge that lets them provide out-of-the-box value. Companies could then supplement these engineering-tuned LLMs with their own knowledge bases, tailored to their unique experiences and desired outcomes.
Another future prospect for generative AI in engineering involves non-textual interfaces, such as working directly with 3D models, where the AI could suggest significant model adjustments. It is unclear when this type of AI will become widely useful and valuable in engineering practice.
In both the near term and the long term, the necessity for a private, well-curated, reliable knowledge base is clear. These private knowledge bases should encapsulate a company’s ongoing experience with its specific engineering processes and will offer a competitive edge when effectively deployed through generative AI techniques.
To learn more about Auros Knowledge System and the ways it can benefit your organization, feel free to contact us today!