Geometries are sets of vertices (points) that together form triangles. They are the most basic rendering primitives in Zaplib. With them, we can create everything from interactive UI elements to 3D meshes.

A Geometry describes the shape of a renderable item, represented as triangles. Create a Geometry using Geometry::new, which takes both vertex attributes and indices that map vertices to triangle faces.

For example, take a look at our internal representation of QuadIns. To represent the quad shape, consider two right triangles that share a hypotenuse to form a square.

    pub fn build_geom() -> Geometry {
        // First, represent each corner of the quad as a vertex,
        // with each side having a length of 1.
        let vertex_attributes = vec![
            // top left vertex
            vec2(0., 0.),
            // top right vertex
            vec2(1., 0.),
            // bottom right vertex
            vec2(1., 1.),
            // bottom left vertex
            vec2(0., 1.),
        ];
        // Group the vertices into two triangles: right triangles
        // in opposing corners that come together to share a hypotenuse.
        let indices = vec![
            // top-right triangle
            [0, 1, 2],
            // bottom-left triangle
            [2, 3, 0],
        ];
        Geometry::new(vertex_attributes, indices)
    }


A GpuGeometry registers a Geometry with our application context; create one via GpuGeometry::new(cx, geometry). Under the hood, it is reference counted and can be cheaply cloned to add a new reference to the same geometry. When all references are dropped, the buffer will get reused in the next call to GpuGeometry::new.


You can statically assign a Geometry to a Shader by passing in a build_geom when creating the Shader. To render such a shader, use add_instances.

It's also possible to omit a build_geom when creating a Shader, and instead dynamically assign it a GpuGeometry when drawing. In that case, use add_mesh_instances.

See Drawing for more information on different APIs for drawing.