Artificial intelligence

Industrial PCs for applications with artificial intelligence

InoNet is the right partner for hardware solutions in the field of artificial intelligence. By combining different computing resources, optimum performance can be delivered exactly where it is needed, depending on the application. Because of the particular mathematical operations involved in implementing neural network structures, there are several hardware approaches for executing these operations quickly and efficiently. The most important computing resources are explained in more detail below. Do you need a high-performance inference system, or do you have questions? We will be happy to advise you!

Our powerful industrial PCs make it easy to integrate artificial intelligence into your applications.

Powerful

Scalable

Efficient

Green IT

Central Processing Unit (CPU)

The CPU (central processing unit) is the heart of every computer and is characterized by a complex hardware architecture and a universal instruction set. This makes the CPU well suited to processing a wide variety of algorithms with different objectives flexibly. However, this universality comes at the cost of sub-optimal performance for dedicated tasks, which also limits its suitability for AI workloads. How much depends on the specific CPU (type and manufacturer): modern high-performance server CPUs with many cores and multithreading achieve very good performance on AI tasks and are also used for model training in data centers. For an inference scenario, i.e. applying an already trained model with low to medium performance requirements, less powerful CPUs can also be used without any problems.
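
As a rough illustration of such an inference scenario, the following Python sketch runs an already trained model on the CPU with PyTorch; the model file name and input shape are placeholders, not part of a specific InoNet configuration.

```python
# Minimal CPU inference sketch (assumed TorchScript model file and input shape).
import torch

model = torch.jit.load("trained_model.pt", map_location="cpu")  # placeholder file name
model.eval()

batch = torch.randn(1, 3, 224, 224)   # example input: one 224x224 RGB image
with torch.no_grad():                  # inference only, no gradient bookkeeping
    prediction = model(batch)
print(prediction.shape)
```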

Graphics Processing Unit (GPU)

Nevertheless, there is hardware better suited to deep learning scenarios, such as GPUs (graphics processing units). The GPU can be part of the CPU, sit as a separate chip on the mainboard, or be connected to the mainboard as a plug-in card, usually via PCIe. Compared to a CPU, computing power is increased immensely by parallelizing computing tasks across a much larger number of available computing units. Both consumer and professional graphics cards can be used: the former are cheaper to purchase initially, while the latter have a significantly longer service life.
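
The same workload can be moved to a GPU with minimal changes. The sketch below assumes a CUDA-capable card and the same placeholder TorchScript file as above; it falls back to the CPU if no GPU is found.

```python
# Minimal GPU inference sketch; larger batches exploit the GPU's parallel computing units.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.jit.load("trained_model.pt", map_location=device)  # placeholder file name
model.eval()

batch = torch.randn(32, 3, 224, 224, device=device)  # example batch of 32 images
with torch.no_grad():
    prediction = model(batch)
print(prediction.shape)
```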


Vision Processing Unit (VPU)

VPUs (vision processing units) have recently become increasingly popular in the inference environment for deep learning scenarios based on image and video data. Designed for industrial use, this inference hardware is more durable and withstands extended ambient temperature ranges. Manufacturers of VPU modules include Nvidia, for example with the Jetson TX2 module, and Intel (Movidius) with the Myriad X. Simply adding VPU modules to industrial hardware enables medium to high performance for inference machines at relatively low power consumption. In most cases, VPUs are offered on plug-in modules whose performance scales with the number of VPUs (currently from one to eight) and which use standard interfaces such as PCIe, mPCIe, M.2 or USB. Thanks to their compact size, VPU modules can be easily integrated into industrial PCs and thus perform their duties in edge computing.
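
As an illustration, a Myriad X based module can be addressed through Intel's OpenVINO toolkit. The sketch below assumes an OpenVINO version that still ships the MYRIAD device plugin and a model already converted to OpenVINO IR format; the file name and input shape are placeholders.

```python
# Minimal VPU inference sketch with OpenVINO; device availability depends on the
# installed OpenVINO version and the attached hardware.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # placeholder IR file produced by the OpenVINO converter

# Target a Myriad X VPU if one is present, otherwise fall back to the CPU plugin.
device = "MYRIAD" if "MYRIAD" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example camera frame
result = compiled([frame])[compiled.output(0)]
print(result.shape)
```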

Field Programmable Gate Arrays (FPGA)

FPGAs (field programmable gate arrays) are programmable digital components whose hardware structure (logic circuits) can itself be programmed. These add-on modules combine the flexibility and programmability of software running on a general-purpose processor (CPU) with the speed and energy efficiency of an application-specific integrated circuit. The configuration of an FPGA, and thus its functionality and purpose, can be adapted repeatedly after deployment. While a CPU is configured mainly through software, an FPGA is configured at the hardware level. Although FPGAs have been in use since the 1980s, demand has grown strongly in recent years and is set to increase further. FPGA cards have low to medium power consumption, yet deliver maximum efficiency and performance for the respective (AI) application thanks to their individual configuration options, because the hardware can be adapted directly to the application. Performance can be maximized further by implementing parallel hardware structures. This is offset, however, by high individual development costs, which generally only pay off for applications with larger quantities.
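
The following sketch is purely conceptual: it emulates in NumPy the parallel multiply-accumulate structure that an FPGA can instantiate directly as logic circuits, which is where the efficiency gains for neural network layers come from. It is not FPGA code; real designs are written in a hardware description language or generated by vendor toolchains, and the layer dimensions here are placeholders.

```python
# Conceptual illustration only: the multiply-accumulate (MAC) chains of one
# neural network layer, which an FPGA can implement as parallel hardware units.
import numpy as np

weights = np.random.rand(64, 128).astype(np.float32)  # placeholder layer dimensions
inputs = np.random.rand(128).astype(np.float32)

# A CPU works through these chains with a fixed set of general-purpose ALUs;
# an FPGA can instantiate one dedicated MAC pipeline per output neuron and
# compute all of them concurrently.
outputs = np.array([np.dot(row, inputs) for row in weights])
print(outputs.shape)
```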

