Plan of Attack

 

Let your plans be dark and impenetrable as night, and when you move, fall like a thunderbolt.

- Sun Tzu -

RedSparx is deeply driven by its internal plans and procedures. Everything is done with purpose and conviction. Here, we outline our plan to create edge-compute systems and infrastructure for large-scale AI systems.

 
[Image: Sentinel_Main_v1.0.png]

1 Remote Data Aggregation for AI Systems

AI systems will always be limited by the data sets used to train them.

The specific questions that a data-driven AI model can answer will always be limited by the scope and context of its training data. Currently, data acquisition from physical processes must be custom-built and can only take place in situ. Collecting physically distributed variables such as acceleration, magnetic flux, or pH is simply impossible unless the engineering challenge of building a data collection infrastructure is met.

We plan to master the design, production and deployment of:

  1. AI end-points for model deployment and data acquisition.

  2. Wide-area data collection infrastructures.

  3. Server-side data aggregation for AI systems.

Whether you’re collecting temperature data in a single room, throughout an entire building, or across a city, a specialized wireless data collection infrastructure will allow you to produce aggregate data sets from spatially dispersed regions that would otherwise be unattainable.
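To make the idea concrete, here is a minimal sketch of what server-side aggregation of endpoint readings might look like. All names (`TemperatureAggregator`, `ingest`, the zone labels) are hypothetical illustrations, not an actual RedSparx API:

```python
import statistics
from collections import defaultdict

class TemperatureAggregator:
    """Server-side pool for readings reported by distributed endpoints."""

    def __init__(self):
        # Readings grouped by the endpoint's reported zone
        # (a room, a floor, a city district, ...).
        self._readings = defaultdict(list)

    def ingest(self, endpoint_id, zone, celsius):
        """Record one reading sent in by a remote endpoint."""
        self._readings[zone].append((endpoint_id, celsius))

    def summarize(self):
        """Aggregate per-zone statistics suitable for downstream training."""
        return {
            zone: {
                "n": len(vals),
                "mean": statistics.mean(v for _, v in vals),
                "spread": max(v for _, v in vals) - min(v for _, v in vals),
            }
            for zone, vals in self._readings.items()
        }

# Example: three endpoints across two zones.
agg = TemperatureAggregator()
agg.ingest("node-01", "lab", 21.5)
agg.ingest("node-02", "lab", 22.5)
agg.ingest("node-07", "rooftop", 14.0)
summary = agg.summarize()
```

In a deployed system the `ingest` calls would arrive over the wireless infrastructure rather than in-process, but the aggregation step on the server side has this shape: pool by region, then reduce to training-ready statistics.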

[Image: PCB_Sparc_Parametric_MODULE_v1.jpg]

2 Modularized AI Edge Compute Elements

Domain-specific physical data requires custom sensors and custom pre-processing to work effectively on the edge.

Patient biometrics, seasonal soil conditions, structural stresses, and vehicle dynamics are difficult to model without a facilitating data collection infrastructure. Once that infrastructure is established, endpoints have direct access to data and can contribute compute time to accelerate AI training. Decentralized artificial intelligence will be key to building complex, next-generation hierarchical models.

We plan to master the design, production and deployment of:

  1. Embedded sensor data preprocessors.

  2. Embedded AI accelerators.

  3. Efficient edge-compute interfaces.

Unlocking the transformational potential of AI across industries is only possible with domain-specific hardware to support them. Modular endpoints that can share compute loads with their data collection and processing core make it possible to produce more complex AI models with domain-specific context and purpose.
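The kind of on-endpoint pre-processing described above can be sketched in a few lines: reduce a raw sensor stream to compact per-window features (here, mean and RMS) before anything is shipped upstream. The function name and window size are illustrative assumptions, not a fixed interface:

```python
import math

def preprocess_window(samples, window=4):
    """Reduce raw sensor samples to per-window (mean, RMS) features,
    so the endpoint transmits compact features instead of the raw stream."""
    features = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        mean = sum(chunk) / window
        rms = math.sqrt(sum(x * x for x in chunk) / window)
        features.append((mean, rms))
    return features

# Example: eight raw accelerometer samples -> two feature pairs.
accel = [0.0, 1.0, 0.0, -1.0, 2.0, 2.0, 2.0, 2.0]
feats = preprocess_window(accel)
```

On a real endpoint this reduction would run in embedded code close to the sensor; the design point is the same: compress early, so the wide-area link and the server-side aggregator only ever see features sized for training.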


3 Expand

Students of this discipline are already familiar with data pipelines and concurrency timing deadlines, and are adept at writing efficient embedded code. The next generation of GPU and TPU hardware used by software systems such as TensorFlow will need to be designed for the exploding AI economy. Furthermore, high-speed, low-power electronics will pave the way for ubiquitous AI, and a workforce needs to be trained to produce it. Students in these programs are already tooled for FPGA and microcontroller system design; what they need are rapid development platforms that accelerate embedded AI design, since the hardware design cycle is significantly longer than that of software systems and subject to tighter constraints.