
AI-Powered Edge Computing Driving Tailored Specialized LLMs

9/24/2025

As technology and the world around us continue to evolve, the manufacturing sector has advanced steadily—from the early days of mechanization in Industry 1.0 to today’s intelligence-driven Industry 4.0. In recent years, the growing maturity and widespread use of Artificial Intelligence (AI) and Large Language Models (LLMs) have not only elevated the smart capabilities of Industry 4.0 but also built new pathways for applying AI across a wide range of industries. 

However, for AI to truly deliver value across a wide range of industrial applications, Edge AI is essential. Most industrial AI deployments take place in edge environments, where Agentic AI frameworks are used to support collaborative tasks. As a result, the growing adoption of AI has already increased demand for Edge AI solutions—creating new business opportunities for Industrial PC (IPC) providers investing in this space.

Edge AI Formation Depends on Industry-Specific LLM Training

Advantech observes that the recent trend of AI adoption in industry reflects a shift in focus from cloud-based model training to practical on-site implementation. As a result, Edge AI and edge computing have become essential. Because each environment requires specific application services, LLMs must be fine-tuned to fit the needs of particular industries. This fine-tuning enables LLMs to better understand industry-specific data and terminology, allowing them to perform accurate inference at the edge.

Particularly with the emergence of open-source models such as DeepSeek, pre-trained LLMs built on open-source foundations have flourished in the market. Competition and ongoing model evolution have lowered hardware barriers, making AI deployment and inference at the edge more accessible. Currently, Advantech observes that many customer applications focus on image recognition, such as defect inspection on motherboard or PCBA production lines, as well as on using LLMs to enhance customer service and streamline internal administrative workflows.
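One reason fine-tuning has become practical on modest hardware is parameter-efficient methods such as LoRA, which train only small low-rank adapter matrices rather than the full model. The article does not name a specific method; the arithmetic below is an illustrative sketch, with layer count, hidden size, and rank chosen as generic assumptions for a ~70B-parameter model.

```python
# Illustrative LoRA parameter count: each adapted d_model x d_model weight
# matrix gains two low-rank factors (d_model x r and r x d_model), so only
# a tiny fraction of the base model's parameters is trained.
# All figures here are generic assumptions, not Advantech-published specs.

def lora_params(d_model: int, rank: int, n_layers: int,
                mats_per_layer: int = 4) -> int:
    """Trainable parameters added by LoRA adapters across the model."""
    # each adapted matrix contributes d_model*rank + rank*d_model parameters
    return n_layers * mats_per_layer * 2 * d_model * rank

base = 70e9  # ~70B-parameter base model (assumed size)
adapter = lora_params(d_model=8192, rank=16, n_layers=80)
fraction = adapter / base  # roughly 0.1% of the base model
```

Because only the adapter weights receive gradients and optimizer state, the memory and compute needed for fine-tuning shrink accordingly, which is what makes edge-scale fine-tuning feasible at all.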

For client-facing applications, Advantech cites the example of a law firm customer aiming to build a 24/7 AI-powered customer service portal. The goal is for the AI to filter and respond to routine inquiries autonomously while escalating complex issues to human agents. Given the legal sector’s specialized terminology and domain knowledge, generic public LLMs fall short of these requirements, so fine-tuning is essential to equip the model with industry-specific expertise and deliver tangible benefits.
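The routing behavior described above, answering routine questions automatically and escalating the rest, can be sketched as a confidence-threshold dispatcher. The function names, the toy keyword heuristic standing in for the fine-tuned LLM, and the 0.8 threshold are all illustrative assumptions, not Advantech's or the customer's implementation.

```python
# Minimal sketch of a routine-vs-escalate dispatcher for an AI service portal.
# The model call is stubbed: in practice a fine-tuned LLM would score each
# inquiry. All names and the 0.8 threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    escalated: bool

def classify_confidence(inquiry: str) -> float:
    """Stub for the fine-tuned LLM's confidence that it can answer alone.
    Here: a toy heuristic that scores short FAQ-like questions high."""
    faq_terms = {"hours", "fees", "contact", "address"}
    words = {w.strip("?,.!") for w in inquiry.lower().split()}
    return 0.9 if words & faq_terms else 0.3

def handle(inquiry: str, threshold: float = 0.8) -> Reply:
    """Answer autonomously above the threshold; otherwise hand off."""
    if classify_confidence(inquiry) >= threshold:
        return Reply(text=f"[auto] Answering: {inquiry}", escalated=False)
    return Reply(text="[queued] Forwarded to a human agent.", escalated=True)
```

For example, `handle("What are your office hours?")` would be answered automatically, while `handle("Can you review my merger contract?")` would be escalated. The threshold is the lever that trades automation rate against the risk of a wrong autonomous answer.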

LLM Retraining: Balancing Compute Requirements, Costs, and Security Risks

For organizations adopting such solutions, optimizing the fine-tuning process and inference deployment must prioritize cost-effectiveness. These workloads require significant computational resources. While fine-tuning and inference can be outsourced to professional cloud service providers (CSPs), the associated costs and potential data security risks—particularly when handling sensitive corporate information—can be prohibitive. Understandably, companies are often hesitant to upload proprietary or confidential data to the cloud and give up control over it. 

When cloud deployment is not viable, on-premises compute infrastructure provides an alternative, ensuring both computational self-sufficiency and data sovereignty. As LLM inference continues to shift toward the edge, semiconductor companies have introduced specialized AI inference acceleration modules. However, post-training (fine-tuning) still depends heavily on high-performance GPUs.

Given the high capital investment and growing demands for compute power and energy, building on-premises infrastructure may not be a sustainable long-term solution for many enterprises. Advantech advises that companies should clearly define their problems and objectives before adopting AI. For non-real-time applications—such as customer service portals or internal administrative tasks—delayed responses through asynchronous communication (e.g., email) are often sufficient. In such cases, Advantech’s aiSSD solution can support effective AI deployment.

Advantech’s aiSSD integrates Phison’s aiDAPTIV+ technology, which offloads data traditionally stored in GPU VRAM to aiSSD storage, reducing the number of GPUs needed for fine-tuning. Leveraging this innovation, Advantech’s next-generation edge AI systems and workstations, such as the AIR-520 powered by AMD EPYC™ server-grade processors and the AIR-420 powered by AMD Ryzen™ processors, can efficiently run the Llama 70B model with only 2–4 GPUs. While computation time increases, Advantech compares the trade-off to choosing between a high-speed train and a conventional train from Taipei to Kaohsiung: both reach the destination, but at different costs and travel times. Additionally, Advantech offers a range of products to meet diverse inference acceleration needs and support higher GPU requirements for edge AI model fine-tuning, giving customers a comprehensive hardware lineup for real-world edge deployment scenarios.
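A rough back-of-envelope calculation shows why moving optimizer state and gradients out of VRAM shrinks the GPU count for a 70B-parameter model. The bytes-per-parameter figures below follow the common rule of thumb for mixed-precision Adam training (2 bytes for FP16 weights, 2 for gradients, 12 for FP32 master weights and optimizer moments); the 80 GB-per-GPU assumption and the exact savings delivered by aiDAPTIV+ are illustrative, not vendor-measured numbers.

```python
# Back-of-envelope VRAM estimate for fine-tuning a 70B-parameter model.
# Bytes-per-parameter figures are the common mixed-precision/Adam rule of
# thumb, not measured values; activations and framework overhead are ignored.

import math

PARAMS = 70e9
BYTES_WEIGHTS = 2   # FP16 weights
BYTES_GRADS = 2     # FP16 gradients
BYTES_OPTIM = 12    # FP32 master weights + Adam first/second moments

def gpus_needed(bytes_per_param: float, vram_per_gpu_gb: float = 80) -> int:
    """GPUs required to hold the given per-parameter state in VRAM."""
    total_gb = PARAMS * bytes_per_param / 1e9
    return math.ceil(total_gb / vram_per_gpu_gb)

# Everything resident in VRAM: 16 bytes/param -> 1120 GB
full = gpus_needed(BYTES_WEIGHTS + BYTES_GRADS + BYTES_OPTIM)
# Gradients and optimizer state offloaded to SSD-backed storage: 2 bytes/param
offloaded = gpus_needed(BYTES_WEIGHTS)
```

Under these assumptions, keeping all training state in VRAM needs on the order of fourteen 80 GB GPUs, while holding only the FP16 weights resident fits in two, which is consistent with the 2–4 GPU figure above at the cost of slower SSD-bound steps.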

While extended computation times add to overall costs, this approach is well-suited to applications without strict real-time constraints. A key advantage is that data is processed on-premises, enhancing security and ensuring confidentiality, an increasingly important benefit. Advantech emphasizes that the strength of this solution lies in its balance of cost efficiency and data security.

As Edge AI Expands in Industry, Integrated Software-Hardware Solutions Become Essential

Advantech points out that while aiDAPTIV+ technology and related products are available through other channels, aiDAPTIV+ alone cannot deliver the full range of benefits described. AI operations require a complete ecosystem, and aiDAPTIV+ is just one part of it. Without collaboration from peripheral partners, including ISVs, the system cannot function effectively. Advantech’s value lies in delivering a fully integrated solution that enables rapid AI deployment.

As AI technology and applications continue to evolve, Advantech anticipates increased adoption of models like DeepSeek to drive greater industrial efficiency. This evolution will also create a wide range of demands related to both scale and speed. To address these challenges, Advantech is investing in multiple hardware acceleration platforms and expanding AI applications by integrating additional data sources, such as imaging. This enables the extension of AI use cases to autonomous mobile robots (AMRs) and robotics. In addition, Advantech collaborates with ISVs to build a robust edge AI and edge computing ecosystem, helping customers harness the benefits of AI more easily, securely, effectively, and cost-efficiently. 

To learn more, visit Advantech’s Edge AI & Intelligence Solutions page.