What are LEO satellites? How are they used globally?
As of July 16, 2024, there are 10,257 satellites orbiting our planet, and 8,033 of them are low Earth orbit (LEO) satellites, according to the satellite-tracking website Orbiting Now. LEO satellites are popular, and for good reason. They orbit the Earth at altitudes of 2,000 km or less and can circle the globe in roughly 90 minutes, allowing them to revisit sites quickly and provide updated information. LEO satellites are primarily used for communication, as their proximity to Earth allows for low-latency links between the satellites and ground stations. Beyond communication, LEO satellites are also used for Earth observation and navigation. These satellites don't operate without challenges, however. They face the same issues that affect most spacecraft, such as managing their thermal footprint and avoiding other objects in orbit. Many proposals have been put forward to overcome these obstacles; one of them is to implement onboard processing using AI inference accelerators. More specifically, low-power, high-TOPS AI can enable new capabilities and improve existing ones. In this blog, we'll take a look at the ways AI can address the issues that LEO satellites face.
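For a sense of scale, the roughly 90-minute figure follows from Kepler's third law for a circular orbit. The minimal Python sketch below (altitudes chosen purely for illustration) shows the period climbing from about 96 minutes at 550 km to about 127 minutes at the 2,000 km upper edge of LEO.

```python
import math

# Standard gravitational parameter of Earth (m^3/s^2) and mean Earth radius (m)
MU_EARTH = 3.986004418e14
R_EARTH = 6.371e6

def orbital_period_minutes(altitude_km: float) -> float:
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_km * 1e3  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# Illustrative altitudes: ~96 min at 550 km, ~127 min at the 2,000 km edge of LEO.
print(f"{orbital_period_minutes(550):.1f} min")
print(f"{orbital_period_minutes(2000):.1f} min")
```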
Benefits of AI in LEO satellites
Space is an isolated place; power is limited and needs to be both generated (primarily through solar panels) and stored within the small frame of the LEO satellite. Low-power, high-TOPS AI helps the satellite use that power efficiently: through onboard processing (analyzing and filtering data before transmission), autonomous functions (fewer transmissions required), and resource management (powering systems only when needed and scheduling high-energy tasks for periods of abundant solar energy). This allows the satellite to be designed with smaller solar panels and back-up batteries (for use when the satellite is in Earth's shadow), reducing its weight and thereby lowering launch costs. More important, however, is the satellite's thermal footprint. The electronics onboard have an optimal operating temperature, and engineers often struggle to keep them cool. The challenge is that space is a vacuum with almost no particles travelling around, so convection and conduction are unavailable; radiation is the only way for an object to shed excess heat. Having a low-power AI inference accelerator onboard reduces the demands on the satellite's thermal-control design, with the additional benefit of reduced weight.
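To make the resource-management idea concrete, here is a minimal sketch of power-aware task scheduling. The task names, power figures, and sunlight flag are hypothetical placeholders; a real flight scheduler would work from telemetry and a detailed power budget, but the basic logic of deferring high-energy work until solar power is abundant looks roughly like this.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    power_w: float      # estimated power draw while running
    deferrable: bool    # can this task wait for a sunlit period?

def schedule(tasks: list[Task], in_sunlight: bool, budget_w: float) -> list[Task]:
    """Run only the tasks that fit the current power budget.

    High-energy, deferrable tasks are postponed while the satellite is in
    Earth's shadow and running on batteries.
    """
    runnable = []
    remaining = budget_w
    for task in sorted(tasks, key=lambda t: t.power_w):
        if not in_sunlight and task.deferrable:
            continue  # wait until solar power is abundant
        if task.power_w <= remaining:
            runnable.append(task)
            remaining -= task.power_w
    return runnable

# Hypothetical workload: keep housekeeping and collision monitoring always on,
# defer the power-hungry inference batch until the panels are lit.
tasks = [
    Task("housekeeping", 2.0, deferrable=False),
    Task("collision_monitor", 5.0, deferrable=False),
    Task("image_batch_inference", 20.0, deferrable=True),
]
print([t.name for t in schedule(tasks, in_sunlight=False, budget_w=15.0)])
```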
Another benefit that comes with AI-enabled onboard processing is low latency, which is essential for a variety of time-sensitive applications. Onboard processing minimizes the delay associated with transmitting data to ground stations for processing, which in turn reduces the time required for the satellite to receive instructions. AI processing does this by sending a condensed downlink transmission, eliminating the need to send a large volume of raw data to stations on Earth. This is extremely beneficial because a LEO satellite has only roughly 20 minutes per pass to transmit to a ground station, since LEO satellites have a small coverage area due to their proximity to Earth. Another bonus is that, by processing data onboard, the satellite can send only the relevant information back to Earth, optimizing bandwidth usage. One example of a situation that requires low latency is collision avoidance, where reacting early enough to change trajectory improves satellite safety and reduces fuel consumption.
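A rough sketch of the "send only what matters" idea follows. The run_detector function is a stand-in for an inference call on the onboard accelerator, and the confidence threshold is an assumed value; the point is simply that filtering onboard shrinks what has to fit through a short ground-station pass.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # placeholder cut-off for "relevant" detections

def run_detector(frame: np.ndarray) -> float:
    """Placeholder for onboard inference; a real system would invoke the
    AI accelerator's runtime here and return a detection confidence."""
    return float(frame.mean() > 0.5)  # stand-in logic only

def select_for_downlink(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Keep only frames whose onboard inference result is worth transmitting."""
    return [f for f in frames if run_detector(f) >= CONFIDENCE_THRESHOLD]

# With a ~20-minute ground-station pass, transmitting a handful of flagged
# frames instead of the full raw capture makes far better use of the link.
frames = [np.random.rand(256, 256) for _ in range(100)]
print(f"{len(select_for_downlink(frames))} of {len(frames)} frames queued for downlink")
```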
As for the quality of the data a LEO satellite sends, a satellite with onboard processing can perform image and signal processing, which improves the quality and accuracy of the data collected by its sensors. Often, those sensors gather more data than the satellite can consistently transmit. This transmission bottleneck can be alleviated by AI inference accelerators handling the processing onboard, which especially benefits applications such as Earth observation, weather forecasting, and telecommunications, where large amounts of raw data must be collected.
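As a simple illustration (not any specific flight pipeline), the sketch below smooths a noisy sensor trace and decimates it before downlink, improving signal quality while cutting transmitted volume by roughly the decimation factor. A deployed system might instead run a learned denoiser or compressor on the inference accelerator.

```python
import numpy as np

def denoise_and_decimate(signal: np.ndarray, window: int = 8) -> np.ndarray:
    """Smooth a raw sensor trace with a moving average, then keep every
    `window`-th sample, cutting downlink volume ~8x while suppressing noise."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="same")
    return smoothed[::window]

# Hypothetical raw trace: a slow sinusoid buried in sensor noise.
t = np.linspace(0, 10, 8000)
raw = np.sin(t) + 0.3 * np.random.randn(t.size)
processed = denoise_and_decimate(raw)
print(f"Raw samples: {raw.size}, transmitted samples: {processed.size}")
```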
In addition to the benefits listed above, AI inference at the edge allows LEO satellites to operate with more flexibility than a traditional LEO satellite running only basic automation algorithms. Running AI on the satellite itself means it can make decisions independently, without human intervention. Advanced collision avoidance becomes available and can operate without relying on ground stations, reducing latency and communication delays. Onboard AI can also enhance mission reliability by predicting and mitigating potential system failures before they occur. The ability to adapt to changing conditions in space ensures that LEO satellites can maintain optimal performance and achieve their mission objectives more effectively.
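To illustrate the kind of onboard decision-making involved, here is a toy conjunction-screening sketch. The state vectors, straight-line propagation, and alert threshold are all simplifying assumptions; real collision avoidance propagates full orbits with uncertainty data, but the early-warning logic an onboard system would run is similar in spirit.

```python
import numpy as np

ALERT_DISTANCE_KM = 5.0  # illustrative screening threshold

def miss_distance_km(r_sat, v_sat, r_obj, v_obj, horizon_s=600, step_s=1.0):
    """Propagate both objects on constant-velocity paths and return the minimum
    separation (km) over the horizon. A simplification of real conjunction analysis."""
    times = np.arange(0, horizon_s, step_s)
    rel_r = np.asarray(r_obj) - np.asarray(r_sat)
    rel_v = np.asarray(v_obj) - np.asarray(v_sat)
    separations = np.linalg.norm(rel_r[None, :] + times[:, None] * rel_v[None, :], axis=1)
    return separations.min() / 1e3

# Illustrative state vectors in metres and metres/second (not real ephemerides).
sat_r, sat_v = [7.0e6, 0.0, 0.0], [0.0, 7.5e3, 0.0]
deb_r, deb_v = [7.0e6, 1.5e5, 0.0], [0.0, -7.5e3, 0.0]

if miss_distance_km(sat_r, sat_v, deb_r, deb_v) < ALERT_DISTANCE_KM:
    print("Conjunction risk detected onboard: plan avoidance manoeuvre early")
```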
AI on LEO satellites can enable sophisticated applications such as edge computing in space, real-time analytics, advanced communication systems, and more secure satellite networks. Adding these capabilities broadens the ways a satellite can be used, which can extend its operational life and reduce the number of replacement satellites required. Considering that launching a satellite into LEO is both environmentally and financially expensive, extending mission life is extremely beneficial.
Untether AI delivers best-in-class AI acceleration solutions for space
Compute workloads have changed radically in the past several years with the deployment and increasing complexity of AI. This has elevated the demand for heterogeneous computing platforms built on domain-specific architectures. The trend, especially pronounced in LEO satellites, is driving the ideal computing platform toward a mix of CPUs, AI accelerators, and memory.
As AI technologies develop further, high AI compute performance, low latency, a low power and thermal envelope, and a small form factor have become the cornerstone requirements for all things space. Untether AI's innovative at-memory compute architecture for neural network inference delivers the high performance and energy efficiency needed to support the most demanding next-generation edge computing in space.
Untether AI® provides energy-centric AI inference acceleration from the edge to the cloud, supporting any type of neural network model. Untether AI has solved the data movement bottleneck that costs energy and performance in traditional CPUs and GPUs, resulting in high-performance, low-latency neural network inference acceleration without sacrificing accuracy.