Compute APIs are a new evolution in the mobile and embedded market space, and have followed quite different conceptual approaches compared to the evolution of 3D graphics APIs. The mobile-optimised OpenGL ES API for 3D graphics evolved from the market-tested and well-proven OpenGL API for desktop PCs; this allowed a rapid API evolution with fairly little risk. In the mobile space, two offerings are currently competing for developer attention: the Khronos-developed OpenCL API (OS-agnostic) and the Google Android-specific Renderscript API. While the goal of both APIs is the same, exposing parallel compute functionality, the approach and vision behind each is very different, as I’ll discuss below.

Compute is far newer as an API concept, and the desktop market is still in the early days of API evolution, where a hardware-specific proprietary API called CUDA plays a significant role. Custom APIs, though, are an approach long abandoned in the 3D space. As a result, compute APIs are all still very young and are going through the usual growing pains as software and hardware vendors search for optimal approaches and balanced feature sets.

Renderscript – a top-down development approach

Renderscript is a proprietary API developed by Google and specific to the Android OS. Unlike OpenCL, there is no consortium-style approach where multiple hardware and software vendors agree and vote on features; instead, the API is driven and set by a single company. As Google has mainly a software background, their approach to API development is driven by a somewhat different vision than that of the Khronos Group, whose membership gives it a hardware-centric approach.

In their API design, Google focussed on ease of application development. This means their approach was to create an API with a relatively extreme feature set. The idea behind this vision is that by exposing all possible features, a developer can write an application once, and over time hardware will evolve and provide better and higher-performance implementations of the API. Thus, older applications will get faster without additional software development effort.

This is an interesting approach, but it is completely different from most APIs available today that expose hardware functionality. It can also only work if all of the features can always be supported: a software fallback safety net is required to enable all functionality, even on hardware-limited devices. With Renderscript, the fallback option is to execute parallel compute operations on the CPU. This is unlikely to be the fastest path, but the CPU is the most flexible compute unit in mobile devices, and extreme functionality can always be emulated, slowly, using software paths.

The initial version of Renderscript was released with Android 3.0 and focussed on supporting only a single type of compute device – the CPU. Later versions of Android enabled more devices, starting with the NEON unit, a SIMD floating-point unit available on most ARM CPUs. The issue was that this hardware unit lacked the ALU precision required by the Renderscript API, and hence Google had to make the first concession in their API design approach: pragma options that relax the precision and feature-set requirements, allowing the faster-than-the-CPU NEON unit to speed up the Renderscript experience on devices.
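To illustrate what that concession looks like in practice, here is a minimal sketch of a Renderscript kernel that opts into relaxed floating-point precision via a pragma (the file, package and function names below are hypothetical, chosen purely for illustration):

```c
// brighten.rs - minimal Renderscript sketch; file, package and kernel names are hypothetical
#pragma version(1)
#pragma rs java_package_name(com.example.compute)   // placeholder package name
#pragma rs_fp_relaxed   // relax IEEE 754 precision requirements so SIMD (NEON) or GPU paths can be used

// Classic pointer-based root() function, invoked once per element of the input allocation
void root(const uchar4 *v_in, uchar4 *v_out) {
    float4 pixel = rsUnpackColor8888(*v_in);           // unpack RGBA_8888 into normalised floats
    pixel.rgb = clamp(pixel.rgb + 0.1f, 0.0f, 1.0f);   // brighten and clamp to [0, 1]
    *v_out = rsPackColorTo8888(pixel);                  // pack back to RGBA_8888
}
```

Without the rs_fp_relaxed pragma the runtime has to honour full precision which, as described above, in practice meant staying on the CPU path.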

Android version 4.1 jumped onto the GPU compute bandwagon, enabling Renderscript acceleration on GPU devices. However, it quickly became clear that performance was an issue, and GPU-based Renderscript acceleration turned into an unpredictable experience. Comparing one GPU with another, or a GPU with a CPU, could lead to rather unexpected behaviours: a software developer might not know whether their script would run fast on the GPU, or would end up on the fallback path and thus run quite slowly.

Again, the pragmas brought some help: relaxing the precision requirements enabled much more extensive fast-path modes on GPUs (as well as on the ARM NEON SIMD unit). However, pragmas are something developers easily forget, as they are used to the OpenCL model, where they expect everything to work, be supported and run fast. This was not what Renderscript offered; instead it presented a wide fragmentation of performance levels across CPU types, core counts and numerous different GPUs.

Filterscript – a sensible selection of features and precision requirements

As a result, with Android version 4.2, we have seen the introduction of Filterscript, a sensible selection of features and precision requirements that aligns with the balanced feature sets offered by mobile GPUs. Developers now have two options:

  • they can develop a Renderscript script and deal with massive fragmentation, where a script is unlikely to run GPU-accelerated on most devices (remember, the low-cost and mainstream markets are far larger than the superphone/tablet market); or
  • they can develop a Filterscript script, which has a very high chance of running at the speeds expected from GPU acceleration.

In a simplified way, Filterscript is a bit like a bottom-up approach, where the feature set matches the capabilities of the majority of the hardware out there, thus offering developers a consistently high-performance, accelerated experience, whereas Renderscript is a superset approach that is mostly there to abstract between different CPU types (MIPS, x86, ARM).
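To make the difference concrete, here is a minimal Filterscript sketch (the .fs file name, package and kernel below are hypothetical): the script must use the relaxed-precision model, raw pointers are not allowed, and kernels take and return values instead of dereferencing allocations, which keeps everything within the comfort zone of mobile GPUs:

```c
// desaturate.fs - minimal Filterscript sketch; file, package and kernel names are hypothetical
#pragma version(1)
#pragma rs java_package_name(com.example.compute)   // placeholder package name
#pragma rs_fp_relaxed   // Filterscript requires the relaxed floating-point model

// Value-based kernel: inputs and outputs are passed by value, no pointers allowed
uchar4 __attribute__((kernel)) desaturate(uchar4 in) {
    float4 pixel = rsUnpackColor8888(in);
    float grey = dot(pixel.rgb, (float3){0.299f, 0.587f, 0.114f});  // Rec. 601 luma weights
    pixel.rgb = (float3){grey, grey, grey};
    return rsPackColorTo8888(pixel);
}
```

On the host side, both variants are reflected into the same kind of generated ScriptC_ class with forEach_ entry points, so moving a script from Renderscript to Filterscript is largely a matter of staying within the restricted feature set.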

The top-down, software-development-centric concept behind Android’s Renderscript is shown below, illustrating how the API was brought in line with a feature set optimal for power-efficient execution on mobile GPUs, first through pragmas and ultimately through the introduction of the Filterscript variant:

[Figure: PowerVR mobile GPU computing – the evolution from Renderscript to Filterscript]

The initial top-down approach has led to some confusion among developers regarding the level of support across CPU/GPU architectures. The key point to remember is that all PowerVR devices, whether based on the SGX or ‘Rogue’ architecture, support both Renderscript and Filterscript. GPU acceleration will remain difficult to predict, as it depends not only on the features used in the script but also on how busy the GPU is (e.g. with graphics or other compute tasks). By using Filterscript you will, in general, have a higher chance of acceleration on GPUs from Imagination and other vendors.

Have you used Renderscript/Filterscript acceleration in your applications? What do you think about Google’s API for mobile compute? For more articles, news and announcements on GPU compute, keep coming back to our blog and follow us on Twitter (@ImaginationPR, @GPUCompute and @PowerVRInsider).


About the author: Kristof Beets


Kristof Beets is Senior Business Development Manager for PowerVR Graphics at Imagination Technologies where he leads the in-house demo development team and works on product messaging. He has a background in electrical engineering and received a master's degree in artificial intelligence. Prior to joining the Business Development Group he worked on SDKs and tools for both PC and mobile products as a member of the PowerVR Developer Relations Team. Previous work has been published in ShaderX2, X5 & X6, ARM IQ Magazine, and online by the Khronos Group, Beyond3D and 3Dfx Interactive. Kristof has spoken at GDC, SIGGRAPH, Embedded Technology, MWC and too many other conferences to remember.
