While your data continues to grow, and that growth keeps accelerating, much of it still goes unused: most data teams wait in line for access or remain stuck behind capacity barriers. The limitations of traditional processors lead to missed SLAs and spiraling cloud costs, and they prevent transformational analytics projects from getting off the ground at all.
To overcome this price-performance burden, Speedata is building the world’s first Analytics Processing Unit (APU), an ASIC chip deployed on a PCIe board and architected specifically for analytics on platforms like Apache Spark and Presto. Speedata’s APU uniquely decompresses, decodes, and processes millions (or even billions) of records from Parquet or ORC files per second, eliminating the I/O, compute, and capacity bottlenecks created by other chips that must write intermediate results back to memory.
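To ground that in a concrete workload, the sketch below shows the kind of job the APU targets: an ordinary scan-decode-filter-aggregate query over a compressed Parquet dataset. It is plain PySpark with no Speedata-specific APIs; the paths and column names are hypothetical placeholders.

```python
# A representative Spark analytics job of the kind described above:
# scan a compressed Parquet dataset, decode it, filter, and aggregate.
# Paths and column names are hypothetical; the code is standard PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("parquet-scan-aggregate").getOrCreate()

# Hypothetical dataset location; any columnar Parquet/ORC source applies.
orders = spark.read.parquet("s3://example-bucket/warehouse/orders/")

daily_revenue = (
    orders
    .filter(F.col("order_date") >= "2024-01-01")          # predicate over millions of rows
    .groupBy("order_date", "region")                       # wide group-by typical of BI queries
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("buyers"),
    )
)

daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/warehouse/daily_revenue/")
```

Every stage of this pipeline, from decompression and decoding of the columnar files through filtering and aggregation, is exactly the work the APU is built to keep on-chip rather than spilling to memory.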
Today, the hardware and compute costs of analytics grow in direct proportion to the growth of your data. The CPU, historically the semiconductor workhorse and the hero of Moore’s Law, was designed for general-purpose tasks and is rapidly approaching its processing ceiling, having improved performance only about 3% year over year for the past three years, an especially slow pace in the face of today’s analytics and AI demands. Run head to head, the APU accelerates real enterprise analytics workloads by up to 100x compared with CPUs.
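Those two figures compound very differently; a quick back-of-the-envelope check, using only the numbers quoted above, makes the gap concrete.

```python
# Back-of-the-envelope comparison of the figures cited above.
cpu_yoy_gain = 0.03                        # ~3% per year, the CPU trend quoted
cpu_three_year = (1 + cpu_yoy_gain) ** 3   # compounds to ~1.09x over three years
apu_speedup = 100                          # the "up to 100x" APU claim

print(f"CPU cumulative gain over 3 years: {cpu_three_year:.2f}x")  # ~1.09x
print(f"Claimed APU speedup vs. CPU:      {apu_speedup}x")
```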
Similarly, the cost of GPU performance, and the trade-off of diverting GPU capacity away from AI and ML workloads, make GPUs, which were designed specifically for graphics processing, a low-ROI mismatch for most analytics priorities. In a head-to-head comparison, the APU delivers an average 11x cost reduction versus GPUs on analytics workloads.
The APU is custom-designed to execute a broad range of tasks in parallel and to handle any data type and field length. Just as important, the APU automatically intercepts work that would otherwise go to the CPU and reroutes it with zero code change, improving hardware price-performance with minimal overhead. Data engineers are spared from migrating their workloads and managing testing and debugging just to achieve modest speedups from processors that were never designed for analytics, as the deployment sketch below illustrates.
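As a sketch of what "zero code change" typically looks like in practice, acceleration of this kind is usually wired in at deployment time through Spark’s standard plugin mechanism rather than in application code. The plugin class and the second config key below are hypothetical placeholders, not Speedata’s published interface; only `spark.plugins` is a standard Spark setting.

```python
# The analytics job itself is unchanged; offload is enabled entirely through
# deployment-time configuration. The plugin class name and the "example.apu"
# key are illustrative assumptions, not Speedata's documented interface.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("parquet-scan-aggregate")
    # Spark's standard plugin hook; the class name here is hypothetical.
    .config("spark.plugins", "com.example.apu.ApuSparkPlugin")
    # Hypothetical toggle for routing scans and decodes to the accelerator.
    .config("spark.example.apu.enabled", "true")
    .getOrCreate()
)

# Application code (reads, filters, joins, aggregations) stays exactly as before.
```

The design point is that the acceleration decision lives in cluster configuration, so the same query code runs on plain CPUs or on APU-equipped nodes without rewrites, retesting, or forked pipelines.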