Target Markets and Mission
In today’s rapidly changing technological environment, Generative Artificial Intelligence (Generative AI) is becoming a key trend in business and research. The rise of this technology opens broad possibilities spanning natural language processing, image generation, and other creative tasks. Faced with this potential revolution, we have decided to create a next-generation technology company focused on Generative AI – Tranxform.com (千逢科技股份有限公司).
The mission of Tranxform.com (千逢科技股份有限公司) is to drive technological advancement through innovative generative AI technology, providing efficient, flexible, and competitive solutions for businesses and research institutions. We place special emphasis on applications in biomedical equipment and medical care instruments, promoting progress in medical technology and elevating the level of life science research. Additionally, the advantage of the Neural Processing Unit (NPU) lies in significantly reduced power consumption: it achieves more efficient energy utilization than GPUs and conserves water resources, in response to global environmental protection needs.
Background of Innovation

Historically, CPUs handled general-purpose computing tasks. The invention of SIMD machines then enabled parallel computation over flat data vectors, a model later embodied in GPUs. As AI and machine learning progressed, workloads increasingly demanded higher-dimensional tensors. This is an opportune moment to devise a processor capable of higher-dimensional tensor and advanced graph computing.
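To illustrate what "higher-dimensional" means here, the sketch below shows a batched matrix multiply, a contraction over rank-3 tensors, one step beyond the flat 1-D vectors that classic SIMD units operate on. It is a minimal pure-Python illustration; the shapes and the nested-list representation are assumptions for clarity, not a description of our NPU.

```python
# Toy illustration: a batched matrix multiply contracts rank-3 tensors,
# which goes beyond the 1-D vector parallelism of classic SIMD units.
# Pure Python, nested lists only; shapes are illustrative assumptions.

def batched_matmul(a, b):
    """a: [batch][m][k] nested lists, b: [batch][k][n] -> [batch][m][n]."""
    batch, m, k = len(a), len(a[0]), len(a[0][0])
    n = len(b[0][0])
    out = [[[0] * n for _ in range(m)] for _ in range(batch)]
    for bi in range(batch):          # batch dimension (beyond a 2-D matrix)
        for i in range(m):
            for j in range(n):
                for p in range(k):   # contraction (reduction) dimension
                    out[bi][i][j] += a[bi][i][p] * b[bi][p][j]
    return out

# Two batches: identity times a constant matrix returns the matrix.
ident = [[1, 0], [0, 1]]
m2 = [[2, 3], [4, 5]]
print(batched_matmul([ident, ident], [m2, m2])[0])  # [[2, 3], [4, 5]]
```

A hardware unit that understands such multi-dimensional loop nests natively, rather than as a stream of scalar or 1-D vector operations, is the motivation behind the design.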
AI Inference and Future Planning
AI inference is poised to have a much larger market than AI training. While training is essential for building models, real-time inference—where AI makes instant decisions or predictions—is crucial for practical applications. It enables AI to function as personal assistants, doctors, intelligent advisors, and more, offering immediate, context-aware solutions. With advancements in edge computing and cloud integration, real-time AI inference will drive growth across industries, making it far more important for everyday use than the training phase.
Low Power Design is a Key
Low power consumption will be crucial for real-time AI applications, enabling continuous, efficient operation on devices like smartphones, wearables, and IoT systems. As AI-powered assistants, healthcare tools, and intelligent advisors rely on real-time inference, minimizing energy use is essential for sustained performance without draining resources. Advances in edge computing and hardware optimization will play a key role in delivering powerful AI solutions with low energy demands, making them more accessible and practical for everyday tasks.
Reduce Complexity of Instructions
To reduce power consumption in real-time AI applications, we can optimize the process by using one instruction per layer in neural networks. This method simplifies execution by minimizing the complexity of instructions, reducing the energy wasted on tasks such as instruction fetching, control management, looping, and waiting times.
By issuing a single instruction per layer, the AI model can process data more efficiently, eliminating the need for constant re-fetching of instructions for each small operation. This streamlined approach cuts down on overhead, allowing the system to focus on actual computation rather than control mechanisms, ultimately lowering power consumption and improving performance in real-time scenarios.
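The savings described above can be sketched with a back-of-the-envelope model that counts instruction fetches: a fine-grained design fetches one instruction per multiply-accumulate, while a layer-granularity design fetches one per layer. The energy constants below are illustrative assumptions, not measurements of any real hardware or of our ISA.

```python
# Back-of-the-envelope model of instruction-fetch overhead for one
# fully connected layer of shape (inputs, outputs). The cost numbers
# are illustrative assumptions, not measurements of real hardware.

FETCH_ENERGY = 1.0   # assumed energy units per instruction fetch/decode
MAC_ENERGY = 0.2     # assumed energy units per multiply-accumulate

def fetches_fine_grained(inputs, outputs):
    """One instruction per multiply-accumulate (scalar-style ISA)."""
    return inputs * outputs

def fetches_per_layer(inputs, outputs):
    """One instruction describes the whole layer (layer-granularity ISA)."""
    return 1

def layer_energy(fetches, inputs, outputs):
    macs = inputs * outputs          # compute work is identical either way
    return fetches * FETCH_ENERGY + macs * MAC_ENERGY

inp, out = 1024, 1024
fine = layer_energy(fetches_fine_grained(inp, out), inp, out)
coarse = layer_energy(fetches_per_layer(inp, out), inp, out)
print(f"fine-grained: {fine:.0f} energy units")
print(f"per-layer   : {coarse:.0f} energy units")
```

Under these assumed costs, control overhead falls from the dominant share of the energy budget to a negligible fraction, while the arithmetic work stays the same; this is the effect the one-instruction-per-layer approach targets.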