LightsprintSDK 2021.08.08


A mesh is a set of triangles and vertices with fixed positions in local space.

An object is an instance of a mesh, with position, rotation, scale and material properties. A static object never moves, rotates, deforms or significantly changes material properties. A dynamic object freely changes these properties.

A scene is a set of objects.

A lightmap is a per-pixel buffer (texture) or a per-vertex buffer with irradiance values (incoming light), not modulated by material color.
A lightmap can contain direct, indirect or global (both) illumination.
A lightmap with indirect illumination is sometimes called an ambient map.
Textures are typically created in the sRGB color space with 8bit per channel precision, while vertex buffers are typically created in linear space with float precision; however, this is completely user defined and all combinations are supported.

Automatic library linking

By default, including a library header automatically links the Lightsprint DLLs using #pragma comment(lib,name). The debug library is linked to a debug exe, the release library to a release exe.
#define RR_STATIC / RR_GL_STATIC / RR_ED_STATIC / RR_IO_STATIC selects a static library instead of a DLL.
#define RR_MANUAL_LINK / RR_GL_MANUAL_LINK / RR_ED_MANUAL_LINK / RR_IO_MANUAL_LINK disables automatic library linking, so you can e.g. manually link a release library with a debug exe.
(The RR_ defines affect LightsprintCore, RR_GL_ affects LightsprintGL, RR_ED_ affects LightsprintEd, RR_IO_ affects LightsprintIO.)
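For example, to link LightsprintCore statically, the define must appear before the first Lightsprint include (a minimal sketch; the header path shown is an assumption and may differ in your installation):

```cpp
// Sketch: request static linking of LightsprintCore.
// The define must precede the first Lightsprint include.
#define RR_STATIC
#include "Lightsprint/RRSolver.h" // header path is an assumption, adjust to your installation

// With RR_MANUAL_LINK instead, no #pragma comment(lib,...) is emitted
// and you add the correct .lib yourself in your project settings.
```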

Illumination units (radiometry, photometry, screen)

Lightsprint solvers calculate in physically based units, with 32bit float per component precision. However, it is possible to feed solvers and receive results in screen colors or other units. All you need to do is set up an appropriate converter, see RRSolver::setColorSpace(). RRColorSpace converts your values to and from our linear colors.

In typical situations, it is most straightforward to think and communicate in screen colors. This means you can set nearly all inputs in screen colors (scaled to 0-1 range) and read all outputs in screen colors. To set up this mode, call RRSolver::setColorSpace(RRColorSpace::create_sRGB()). The RealtimeRadiosity sample demonstrates it.
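What such a converter does can be illustrated with the standard sRGB transfer function (a self-contained sketch of the IEC 61966-2-1 formulas, not the SDK's actual implementation):

```cpp
#include <cmath>

// Convert one sRGB component (0..1 screen color) to a linear value,
// the kind of physically based quantity the solvers calculate with.
float srgbToLinear(float c)
{
    return c <= 0.04045f ? c / 12.92f : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// Inverse: linear value (0..1) back to an sRGB screen color.
float linearToSrgb(float c)
{
    return c <= 0.0031308f ? c * 12.92f : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```

Note that mid gray on screen (0.5) corresponds to only about 0.214 in linear units, which is why mixing up the two spaces visibly distorts lighting.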

Geometry units

All sizes and positions in space are expressed in generic units; Lightsprint doesn't need to know whether it's meters, inches or parsecs. However, some functions have default parameter values calibrated for human-sized scenes specified in meters (feature sizes roughly between 0.01 and 100 units), so using meters may give you an advantage in typical scenes.

Where possible, existing scene adapters convert your scene from custom units to meters. (Source code of the adapters is in src/LightsprintIO/ *.*)

Scaled objects

Lightsprint libraries support all types of scaled objects: positive, negative, uniform and non-uniform scale. Negative scale, however, may produce inconsistent lighting in some GI solvers.
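Whether a transformation contains negative scale can be detected from the sign of the determinant of its 3x3 rotation/scale part (a minimal self-contained sketch, not SDK code):

```cpp
// Determinant of the 3x3 rotation/scale block of a transformation.
float det3x3(const float m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// A negative determinant means the transformation mirrors the object
// (an odd number of negatively scaled axes) - the case that may
// produce inconsistent lighting in some GI solvers.
bool hasNegativeScale(const float m[3][3])
{
    return det3x3(m) < 0;
}
```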

Memory management

Dynamically created objects (using new, create() etc.) are nearly never adopted; ownership does not change.
This means that parameters that need to be destructed are not destructed inside the called function, and responsibility for the object is not passed to anyone else. So when you create an object, delete it yourself when it is no longer needed.

The only exception to this rule is RRBuffer: the RRObjectIllumination and RRMaterial classes adopt textures and delete them in their destructors.

Reference counting

There is no internal reference counting in the majority of the Lightsprint SDK, so if you create e.g. an RRCollider out of an RRMesh, you are not allowed to destroy the mesh before destroying the collider. This constraint is noted in all appropriate places in the documentation.
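In code, the required destruction order looks like this (a pseudocode-style sketch using SDK class names; the creation helpers are hypothetical and the real create() signatures differ):

```cpp
// Sketch only: createMySceneMesh/createColliderForMesh are hypothetical helpers.
rr::RRMesh*     mesh     = createMySceneMesh();
rr::RRCollider* collider = createColliderForMesh(mesh);

// ... use collider ...

// No reference counting: the collider still points at the mesh,
// so it must be destroyed first.
delete collider;
delete mesh;     // only now is it safe to destroy the mesh
```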

The only exception to this rule is RRBuffer, which has a built-in reference counter.

Finite numbers

Unless otherwise specified, all inputs must be finite numbers. With an Inf or NaN input, the result of any operation is undefined.
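A defensive caller can validate values before handing them to the SDK (a minimal self-contained sketch; the helper name is hypothetical):

```cpp
#include <cmath>

// Returns true only if every component is a finite number
// (no Inf, no NaN), as required for SDK inputs.
bool isFiniteVec3(float x, float y, float z)
{
    return std::isfinite(x) && std::isfinite(y) && std::isfinite(z);
}
```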

Matrices

Lightsprint uses 3x4 matrices to describe object transformations. See RRMatrix3x4 for an explanation of why we found this format optimal.
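A 3x4 matrix is sufficient because the bottom row of a 4x4 affine transformation is always (0,0,0,1); three columns hold rotation/scale and the fourth holds translation. A minimal self-contained sketch of applying such a matrix to a point (not the SDK's RRMatrix3x4 implementation):

```cpp
// 3 rows x 4 columns: m[r][0..2] is the rotation/scale part,
// m[r][3] is the translation part.
struct Matrix3x4
{
    float m[3][4];

    // Transform a position: rotate/scale, then translate.
    void transformPoint(const float in[3], float out[3]) const
    {
        for (int r = 0; r < 3; r++)
            out[r] = m[r][0] * in[0] + m[r][1] * in[1] + m[r][2] * in[2] + m[r][3];
    }
};
```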

Up vector

Although there is no limitation on the orientation of the 'up' vector (lighting works the same in all directions), all importers try to convert data so that 'up' is the positive Y axis (0,1,0), and camera navigation in the samples expects the same. There is no special reason for the Y axis, but it helps to be consistent across the whole SDK.
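If your data is Z-up, one common conversion to the Y-up convention is a -90 degree rotation around the X axis (a self-contained sketch; handedness conventions vary, so verify against your data):

```cpp
// Convert a point from a Z-up coordinate system to Y-up:
// a -90 degree rotation around the X axis.
void zUpToYUp(const float in[3], float out[3])
{
    out[0] =  in[0]; // x unchanged
    out[1] =  in[2]; // old 'up' (z) becomes new 'up' (y)
    out[2] = -in[1];
}
```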