3D Graphics

RenderScript provides a number of graphics APIs for 3D rendering, both at the Android framework level and at the native level. For instance, the Android framework APIs let you create meshes and define shaders to customize the graphics rendering pipeline, while the native RenderScript graphics APIs let you draw the actual meshes to render your scene. In general, you need to be familiar with both sets of APIs to render 3D graphics appropriately on an Android-powered device.

Creating a Graphics RenderScript

Because a RenderScript application spans several layers of code, it is useful to create the following files for a scene that you want to render:

  • The native RenderScript .rs file. This file contains the logic to do the graphics rendering.
  • The RenderScript entry point class that allows your view to interact with the code defined in the .rs file. This class contains a RenderScript object (an instance of ScriptC_renderscript_file), which allows your Android framework code to call the native RenderScript code. This class also creates the RenderScriptGL context object, which holds the current rendering state of the RenderScript, such as the programs (vertex and fragment shaders, for example) that you want to define and bind to the graphics pipeline. The context object attaches to the RenderScript object that does the rendering. Our example names this class HelloWorldRS.
  • A class that extends RSSurfaceView to provide a surface to render on. If you want to implement callbacks for events inherited from View, such as onTouchEvent() and onKeyDown(), do so in this class as well.
  • A main Activity class, like in any Android application. This class sets your RSSurfaceView as the content view for the Activity.

The following sections describe how to implement these classes by using the HelloWorld RenderScript sample that is provided in the SDK as a guide (some code has been modified from its original form for simplicity).

Creating the native RenderScript file

Your native RenderScript code resides in a .rs file in the <project_root>/src/ directory. You can also define .rsh header files. This code contains the logic to render your graphics and declares all necessary variables and pointers. Every graphics .rs file generally contains the following items:

  • A pragma (#pragma rs java_package_name(package.name)) that declares the package name of the .java reflection of this RenderScript.
  • A pragma (#pragma version(1)) that declares the version of RenderScript that you are using (1 is the only value for now).
  • A #include of the rs_graphics.rsh header file.
  • A root() function. This is the main worker function for your RenderScript and calls RenderScript graphics APIs to draw meshes to the surface. This function is called every time a frame refresh occurs; how often that happens is specified by its return value. A return value of 0 renders the frame only when a property of the scene that you are rendering changes. A non-zero positive integer specifies the refresh rate of the frame in milliseconds.

    Note: The RenderScript runtime makes its best effort to refresh the frame at the specified rate. For example, if you are creating a live wallpaper and set the return value to 50, the runtime renders the wallpaper at 20fps if it has enough resources to do so, and renders as fast as it can otherwise.

    For more information on using the RenderScript graphics functions, see Using the Graphics APIs.

  • An init() function. This allows you to do any initialization of your RenderScript before the root() function runs, such as initializing variables. This function runs once and is called automatically when the RenderScript starts, before anything else in your RenderScript. Creating this function is optional.
  • Any variables, pointers, and structures that you wish to use in your RenderScript code (these can be declared in .rsh files if desired).

The following code shows how the file is implemented:

#pragma version(1)

// Declare the Java package name that the reflected files should belong to
// (replace with your application's package name)
#pragma rs java_package_name(com.example.helloworld)

// Built-in header with graphics APIs
#include "rs_graphics.rsh"

// gTouchX and gTouchY are variables that are reflected for use
// by the Android framework API. This RenderScript uses them to be notified of touch events.
int gTouchX;
int gTouchY;

// This is invoked automatically when the script is created and initializes the variables
// in the Android framework layer as well.
void init() {
    gTouchX = 50;
    gTouchY = 50;
}

int root(int launchID) {

    // Clear the background color
    rsgClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    // Tell the runtime what the font color should be
    rsgFontColor(1.0f, 1.0f, 1.0f, 1.0f);
    // Introduce ourselves to the world by drawing a greeting
    // at the position the user touched on the screen
    rsgDrawText("Hello World!", gTouchX, gTouchY);

    // Return value tells RS roughly how often to redraw
    // in this case 20 ms
    return 20;
}

Creating the RenderScript entry point class

When you create a RenderScript (.rs) file, it is helpful to create a corresponding Android framework class that is an entry point into the .rs file. In this entry point class, you create a RenderScript object by instantiating a ScriptC_rs_filename and binding it to the RenderScript context. The RenderScript object is attached to the RenderScript bytecode, which is platform-independent and gets compiled on the device when the RenderScript application runs. Both the ScriptC_rs_filename class and the bytecode are generated by the Android build tools and packaged with the .apk file. The bytecode file is located in the <project_root>/res/raw/ directory and is named rs_filename.bc. You refer to the bytecode as a resource (R.raw.rs_filename) when creating the RenderScript object.

You then bind the RenderScript object to the RenderScript context, so that the surface view knows what code to use to render graphics. The following code shows how the HelloWorldRS class is implemented:


import android.content.res.Resources;
import android.renderscript.*;

public class HelloWorldRS {
    //context and resources are obtained from RSSurfaceView, which calls init()
    private Resources mRes;
    private RenderScriptGL mRS;

    //Declare the RenderScript object
    private ScriptC_helloworld mScript;

    public HelloWorldRS() {
    }

    /**
     * This provides us with the RenderScript context and resources
     * that allow us to create the RenderScript object
     */
    public void init(RenderScriptGL rs, Resources res) {
        mRS = rs;
        mRes = res;
        initRS();
    }

    /**
     * Calls native RenderScript functions (set_gTouchX and set_gTouchY)
     * through the reflected layer class ScriptC_helloworld to pass in
     * touch point data.
     */
    public void onActionDown(int x, int y) {
        mScript.set_gTouchX(x);
        mScript.set_gTouchY(y);
    }

    /**
     * Binds the RenderScript object to the RenderScript context
     */
    private void initRS() {
        //create the RenderScript object
        mScript = new ScriptC_helloworld(mRS, mRes, R.raw.helloworld);
        //bind the RenderScript object to the RenderScript context
        mRS.bindRootScript(mScript);
    }
}

Creating the surface view

To create a surface view to render graphics on, create a class that extends RSSurfaceView. This class also creates a RenderScript context object (RenderScriptGL) and passes it to the RenderScript entry point class to bind the two. The following code shows how the HelloWorldView class is implemented:


import android.renderscript.RSSurfaceView;
import android.renderscript.RenderScriptGL;
import android.content.Context;
import android.view.MotionEvent;

public class HelloWorldView extends RSSurfaceView {
    // RenderScript context
    private RenderScriptGL mRS;
    // RenderScript entry point object that does the rendering
    private HelloWorldRS mRender;

    public HelloWorldView(Context context) {
        super(context);
        initRS();
    }

    private void initRS() {
        if (mRS == null) {
            // Initialize RenderScript with default surface characteristics.
            RenderScriptGL.SurfaceConfig sc = new RenderScriptGL.SurfaceConfig();
            //Create the RenderScript context
            mRS = createRenderScriptGL(sc);
            // Create an instance of the RenderScript entry point class
            mRender = new HelloWorldRS();
            // Call the entry point class to bind it to this context
            mRender.init(mRS, getResources());
        }
    }

    /**
     * Rebind everything when the window becomes attached
     */
    @Override
    protected void onAttachedToWindow() {
        super.onAttachedToWindow();
        initRS();
    }

    /**
     * Stop rendering when window becomes detached
     */
    @Override
    protected void onDetachedFromWindow() {
        // Handle the system event and clean up
        mRender = null;
        if (mRS != null) {
            mRS = null;
            destroyRenderScriptGL();
        }
    }

    /**
     * Use callbacks to relay data to RenderScript entry point class
     */
    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // Pass touch events from the system to the rendering script
        if (ev.getAction() == MotionEvent.ACTION_DOWN) {
            mRender.onActionDown((int)ev.getX(), (int)ev.getY());
            return true;
        }
        return false;
    }
}

Creating the Activity

Applications that use RenderScript still adhere to the activity lifecycle, and are part of the same view hierarchy as traditional Android applications, which is handled by the Android VM. This Activity class sets its view to be the RSSurfaceView and handles lifecycle callback events appropriately. The following code shows how the HelloWorldActivity class is implemented:

public class HelloWorldActivity extends Activity {

    //Custom view to use with RenderScript
    private HelloWorldView mView;

    @Override
    public void onCreate(Bundle icicle) {
        super.onCreate(icicle);
        // Create the surface view and set it as the content of our Activity
        mView = new HelloWorldView(this);
        setContentView(mView);
    }

    @Override
    protected void onResume() {
        // Ideally an app should implement onResume() and onPause()
        // to take appropriate action when the activity loses focus
        super.onResume();
        mView.resume();
    }

    @Override
    protected void onPause() {
        // Ideally an app should implement onResume() and onPause()
        // to take appropriate action when the activity loses focus
        super.onPause();
        mView.pause();
    }
}

Drawing using the rsgDraw functions

The native RenderScript APIs provide a few convenient functions to easily draw a polygon to the screen. You call these in your root() function to have them render to the surface view. These functions are available for simple drawing and should not be used for complex graphics rendering:

  • rsgDrawRect(): Sets up a mesh and draws a rectangle to the screen. It uses the top left vertex and bottom right vertex of the rectangle to draw.
  • rsgDrawQuad(): Sets up a mesh and draws a quadrilateral to the screen.
  • rsgDrawQuadTexCoords(): Sets up a mesh and draws a textured quadrilateral to the screen.
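For example, a minimal root() function that clears the screen and draws a single rectangle with rsgDrawRect() might look like the following sketch (the package name is a placeholder):

#pragma version(1)
#pragma rs java_package_name(com.example.simpledraw)
#include "rs_graphics.rsh"

int root() {
    // Clear the background to opaque black
    rsgClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    // Draw a rectangle from top-left (100, 100) to bottom-right (300, 300);
    // the last argument is the z value
    rsgDrawRect(100.0f, 100.0f, 300.0f, 300.0f, 0.0f);
    return 0; // redraw only when the scene changes
}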

Drawing with a mesh

When you want to draw complex shapes and textures to the screen, instantiate a Mesh and draw it to the screen with rsgDrawMesh(). A Mesh is a collection of allocations that represent vertex data (positions, normals, texture coordinates) and index data such as triangles and lines. You can build a Mesh in three different ways:

  • Build the mesh with the Mesh.TriangleMeshBuilder class, which allows you to specify a set of vertices and indices for each triangle that you want to draw. The downside of doing it this way is there is no way to specify the vertices in your native RenderScript code.
  • Build the mesh using an Allocation or a set of Allocations with the Mesh.AllocationBuilder class. This allows you to build a mesh with vertices already stored in memory, which allows you to set the vertices in native or Android code.
  • Build the mesh with the Mesh.Builder class. This is a convenience method for when you know what data types you want to use to build your mesh, but don't want to make separate memory allocations like with Mesh.AllocationBuilder. You can specify the types that you want and this mesh builder automatically creates the memory allocations for you.

To create a mesh using the Mesh.TriangleMeshBuilder, you need to supply it with a set of vertices and the indices of the vertices that make up each triangle. For example, the following code specifies three vertices, which are added to an internal array, indexed in the order they were added. The call to addTriangle() defines a triangle from vertices 0, 1, and 2 (the vertices are wound counter-clockwise).

int float2VtxSize = 2;
Mesh.TriangleMeshBuilder triangles = new Mesh.TriangleMeshBuilder(renderscriptGL,
        float2VtxSize, Mesh.TriangleMeshBuilder.COLOR);
triangles.addVertex(300.f, 300.f);
triangles.addVertex(150.f, 450.f);
triangles.addVertex(450.f, 450.f);
triangles.addTriangle(0, 1, 2);
Mesh smP = triangles.create(true);

To draw a mesh using the Mesh.AllocationBuilder, you need to supply it with one or more allocations that contain the vertex data:

Allocation vertices;

Mesh.AllocationBuilder triangle = new Mesh.AllocationBuilder(mRS);
triangle.addVertexAllocation(vertices);
triangle.addIndexSetType(Mesh.Primitive.TRIANGLE);
Mesh smP = triangle.create();

In your native RenderScript code, draw the built mesh to the screen:

rs_mesh mesh;

int root() {
    rsgDrawMesh(mesh);
    return 0; // specify a non-zero, positive integer to set the frame refresh rate in ms;
              // 0 refreshes the frame only when the mesh changes.
}

Programs

You can attach four program objects to the RenderScriptGL context to customize the rendering pipeline. For example, you can create vertex and fragment shaders in GLSL, or build a raster program object with the provided methods without writing GLSL code. The four program objects mirror the stages of a traditional graphics rendering pipeline:

ProgramVertex (native type: rs_program_vertex)

The RenderScript vertex program, also known as a vertex shader, describes the stage in the graphics pipeline responsible for manipulating geometric data in a user-defined way. The object is constructed by providing RenderScript with the following data:

  • An Element describing its varying inputs or attributes
  • GLSL shader string that defines the body of the program
  • a Type that describes the layout of an Allocation containing constant or uniform inputs

Once the program is created, bind it to the RenderScriptGL graphics context by calling bindProgramVertex(). It is then used for all subsequent draw calls until you bind a new program. If the program has constant inputs, you need to bind an Allocation containing those inputs. The Allocation's type must match the one provided during creation. The RenderScript library then does all the necessary plumbing to send those constants to the graphics hardware. Varying inputs to the shader, such as position, normal, and texture coordinates, are matched by name between the input Element and the Mesh object being drawn. The signatures don't have to be exact or in any strict order; as long as the input name in the shader matches a channel name and size available on the mesh, the runtime takes care of connecting the two. Unlike OpenGL, there is no need to link the vertex and fragment programs.

To bind shader constants to the program, declare a struct containing the necessary shader constants in your native RenderScript code. This struct is generated into a reflected class that you can use as a constant input element during the program's creation, which makes it easy to create an instance of the struct as an Allocation. You then bind this Allocation to the program, and the RenderScript system sends the data contained in the struct to the hardware when necessary. To update the shader constants, you change the values in the Allocation and notify the native RenderScript code of the change.
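For example, the native side of such a struct might be declared as follows (the names VpConsts and vpConstants here are illustrative, not part of any API):

// Declared in the .rs file; reflected into Java as ScriptField_VpConsts
typedef struct VpConsts {
    rs_matrix4x4 MVP;   // model-view-projection matrix read by the vertex shader
} VpConsts_t;
VpConsts_t *vpConstants; // bound from Java through the reflected bind_vpConstants()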

ProgramFragment (native type: rs_program_fragment)

The RenderScript fragment program, also known as the fragment shader, is responsible for manipulating pixel data in a user-defined way. It is constructed from a GLSL shader string containing the program body, texture inputs, and a Type object describing the constants used by the program. Like vertex programs, when an Allocation with constant input values is bound to the shader, its values are sent to the graphics program automatically. Note that the values inside the Allocation are not explicitly tracked; if they change between two draw calls using the same program object, notify the runtime of that change by calling rsgAllocationSyncAll() so it can send the new values to the hardware. Communication between the vertex and fragment programs is handled internally in the GLSL code. For example, if the fragment program expects a varying input called varTex0, the GLSL code inside the vertex program must provide it.

To bind shader constants to this program, declare a struct containing the necessary shader constants in your native RenderScript code. This struct is generated into a reflected class that you can use as a constant input element during the program's creation, which makes it easy to create an instance of the struct as an Allocation. You then bind this Allocation to the program, and the RenderScript system sends the data contained in the struct to the hardware when necessary. To update the shader constants, you change the values in the Allocation and notify the native RenderScript code of the change.
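As a sketch of this pattern (the GLSL body and variable names are illustrative; UNI_Tex0 is the name RenderScript reflects for the first texture input), a textured fragment program could be built and bound like this:

ProgramFragment.Builder fb = new ProgramFragment.Builder(glRenderer);
fb.setShader("varying vec2 varTex0;\n" +
             "void main() {\n" +
             "  lowp vec4 col = texture2D(UNI_Tex0, varTex0);\n" +
             "  gl_FragColor = col;\n" +
             "}\n");
fb.addTexture(Program.TextureType.TEXTURE_2D);
ProgramFragment pf = fb.create();
pf.bindTexture(textureAllocation, 0); // textureAllocation is a hypothetical Allocation
glRenderer.bindProgramFragment(pf);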

ProgramStore (native type: rs_program_store)

The RenderScript ProgramStore contains a set of parameters that control how the graphics hardware writes to the framebuffer. It can be used to enable and disable depth writes and depth testing, set up various blending modes for effects like transparency, and define write masks for color components.

ProgramRaster (native type: rs_program_raster)

A raster program is primarily used to specify whether point sprites are enabled and to control the culling mode. By default, back faces are culled.
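For instance, a ProgramStore that enables standard alpha blending and ignores the depth buffer might be built as in this sketch against the android.renderscript builder APIs (glRenderer is an assumed RenderScriptGL context):

ProgramStore.Builder b = new ProgramStore.Builder(glRenderer);
b.setBlendFunc(ProgramStore.BlendSrcFunc.SRC_ALPHA,
               ProgramStore.BlendDstFunc.ONE_MINUS_SRC_ALPHA); // transparency
b.setDepthFunc(ProgramStore.DepthFunc.ALWAYS); // always pass the depth test
b.setDepthMaskEnabled(false);                  // don't write to the depth buffer
glRenderer.bindProgramStore(b.create());
// Similarly, a raster program that disables back-face culling:
glRenderer.bindProgramRaster(ProgramRaster.CULL_NONE(glRenderer));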

The following example defines a vertex shader in GLSL and binds it to the RenderScript context:

    private RenderScriptGL glRenderer;      //rendering context
    private ScriptField_Point mPoints;      //vertices
    private ScriptField_VpConsts mVpConsts; //shader constants

        ProgramVertex.Builder sb = new ProgramVertex.Builder(glRenderer);
        String t =  "varying vec4 varColor;\n" +
                    "void main() {\n" +
                    "  vec4 pos = vec4(0.0, 0.0, 0.0, 1.0);\n" +
                    "  pos.xy = ATTRIB_position;\n" +
                    "  gl_Position = UNI_MVP * pos;\n" +
                    "  varColor = vec4(1.0, 1.0, 1.0, 1.0);\n" +
                    "  gl_PointSize = ATTRIB_size;\n" +
                    "}\n";
        sb.setShader(t);
        sb.addConstant(mVpConsts.getAllocation().getType());
        sb.addInput(mPoints.getElement());
        ProgramVertex pvs = sb.create();
        pvs.bindConstants(mVpConsts.getAllocation(), 0);
        glRenderer.bindProgramVertex(pvs);

The RsRenderStatesRS sample contains many examples of how to create a shader without writing GLSL.

Shader bindings

You can also set four pragmas that control the shaders' default bindings to the RenderScriptGL context when the script is executing:

  • stateVertex
  • stateFragment
  • stateRaster
  • stateStore

The possible values for each pragma are parent or default. Using default binds the shaders to the graphics context with the system defaults. The default vertex shader is defined below:

varying vec4 varColor;
varying vec2 varTex0;
void main() {
  gl_Position = UNI_MVP * ATTRIB_position;
  gl_PointSize = 1.0;
  varColor = ATTRIB_color;
  varTex0 = ATTRIB_texture0;
}

Using parent binds the shaders in the same manner as they are bound in the calling script. If this is the root script, the parent state is taken from the bind points that are set by the RenderScriptGL bind methods.

For example, you can define the following at the top of your native graphics RenderScript code to have the vertex and store programs inherit the bind properties from their parent script:

#pragma stateVertex(parent)
#pragma stateStore(parent)

Defining a sampler

A Sampler object defines how data is extracted from textures. Samplers are bound to Program objects (currently only a fragment program) alongside the texture whose sampling they control. These objects are used to specify such things as edge clamping behavior, whether mip-maps are used, and the amount of anisotropy required. There might be situations where the hardware does not support the desired behavior of the sampler; in these cases, the runtime attempts to provide the closest possible approximation. For example, if you request 16x anisotropy but the hardware only supports 8x, 8x is used.
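As a sketch, a sampler that uses trilinear filtering and wrap-around texture addressing could be created with Sampler.Builder and bound, together with the texture it controls, to a fragment program (glRenderer, fragmentProgram, and slot 0 are assumptions for illustration):

Sampler.Builder sb = new Sampler.Builder(glRenderer);
sb.setMinification(Sampler.Value.LINEAR_MIP_LINEAR); // trilinear when mip-maps exist
sb.setMagnification(Sampler.Value.LINEAR);
sb.setWrapS(Sampler.Value.WRAP);
sb.setWrapT(Sampler.Value.WRAP);
Sampler trilinear = sb.create();
// Bind alongside the texture whose sampling it controls
fragmentProgram.bindSampler(trilinear, 0);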

The RsRenderStatesRS sample contains many examples of how to create a sampler and bind it to a fragment program.
