Accelerate Analytics on Apache Spark: Built for Public Sector Security

Data volumes continue to explode, yet much of that data goes unused: most data teams wait in line for access or remain stuck behind capacity barriers. The limitations of traditional processors lead to missed SLAs and spiraling cloud costs, and they prevent transformational analytics projects from getting off the ground at all.

To overcome this price-performance burden, Speedata is building the world’s first Analytics Processing Unit (APU), an ASIC deployed on a PCIe board and architected specifically for analytics on platforms like Apache Spark and Presto. Speedata’s APU uniquely decompresses, decodes, and processes millions (or even billions) of records from Parquet or ORC files per second, eliminating the I/O, compute, and capacity bottlenecks created by other chips, which have to write intermediate data back to memory between stages.
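To make that pipeline concrete, here is a toy software sketch of the three stages (decompress, decode, process) that the APU fuses in hardware. This is an illustration only, not Speedata’s implementation: it uses zlib-compressed packed integers as a stand-in for a real Parquet column chunk (which would use Snappy/ZSTD compression and richer encodings), and on a CPU each stage materializes its output in memory before the next stage can start.

```python
import struct
import zlib

# Toy stand-in for one compressed column chunk: 1,000 little-endian
# int32 values, zlib-compressed. (Real Parquet chunks use Snappy/ZSTD
# and encodings like dictionary or RLE; this is purely illustrative.)
values = list(range(1000))
chunk = zlib.compress(struct.pack(f"<{len(values)}i", *values))

# Stage 1: decompress the chunk (on a CPU, the result lands in memory).
raw = zlib.decompress(chunk)

# Stage 2: decode the raw bytes back into typed records.
records = struct.unpack(f"<{len(raw) // 4}i", raw)

# Stage 3: process — a simple filter + aggregate, as a Spark scan might run.
total = sum(r for r in records if r % 2 == 0)

print(total)  # → 249500 (sum of even values 0..998)
```

Each arrow in decompress → decode → process is a round trip through memory in software; the APU’s claimed advantage is streaming records through all three stages without that intermediate traffic.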

Speedata’s APU offers:

  • A 50x average performance improvement across financial services, pharmaceutical, adtech, and hyperscale cloud customer use cases on Spark
  • A 91% capital reduction, 94% space savings, and 86% energy savings compared to CPUs and GPUs
  • Anticipated 100x price-performance improvement for critical long-running analytics workloads

Speedata’s APU enables data teams to:

  • Continually refresh and expand their data, especially for complex domain-specific jobs and pre-processing for AI
  • Prioritize key projects that once were unfeasible, unblock data team innovation, and prevent missed SLAs

Today, the hardware and compute costs of analytics grow in direct proportion to the growth of your data. The CPU, historically the semiconductor workhorse and the engine of Moore’s Law, was designed for general-purpose tasks and is rapidly approaching its processing ceiling, having improved performance only about 3% year-over-year for the past three years, far too slowly for today’s analytics and AI demands. By contrast, the APU accelerates real enterprise analytics workloads by up to 100x over CPUs.

Similarly, GPUs, which were designed for graphics processing, are a low-ROI mismatch for most analytics priorities: their performance comes at a steep cost, and allocating them to analytics diverts scarce resources away from AI and ML workloads. In a head-to-head comparison, the APU demonstrates an average 11x cost reduction versus GPUs on analytics workloads.

The APU is custom-designed to execute a broad range of tasks in parallel and to handle any data type and field length. Importantly, the APU automatically intercepts work that would otherwise go to the CPU and reroutes it with zero code changes, improving hardware price-performance with minimal overhead. This spares data engineers from migrating their workloads and managing testing and debugging just to achieve modest speedups from processors that were never designed for analytics.
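As a sketch of what “zero code change” typically looks like in practice, Apache Spark (3.0+) exposes a plugin mechanism via the `spark.plugins` configuration property that accelerators commonly use to intercept query execution at the cluster level, so existing jobs run unmodified. The plugin class name below is hypothetical, for illustration only; the source does not specify how Speedata’s integration is packaged.

```properties
# spark-defaults.conf — cluster-wide settings, no application code changes.
# NOTE: io.speedata.spark.ApuPlugin is a hypothetical class name used for
# illustration; spark.plugins itself is a standard Spark 3.0+ property.
spark.plugins                io.speedata.spark.ApuPlugin
spark.executor.resource.apu.amount   1
```

Under this pattern, the plugin inspects each query plan at runtime and routes supported operators (scans, filters, joins, aggregations) to the accelerator, falling back to the CPU for anything unsupported.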


Upcoming Events

Webcast
Speedata

Modernizing Big Data Platforms

Hosted By: Speedata & Carahsoft
June 05, 2024
1:00 PM ET