
News - PA - May 30, 2022

‘Blackspace Engine’ – Developing a Next Generation Engine for future projects – GCON Pearl Abyss

Subject : Next Generation Engine Development Postmortem
Lecturer : Kwang-Hyun Koh – Pearl Abyss / Lead Engine Programmer
Presentation area: development, engine
Lecture Hours: 2021.11.19 (Fri) 14:00 ~ 14:50
Lecture Summary: Lead engine programmer Kwang-Hyun Koh worked on R2, Black Desert Remaster, Black Desert Mobile, and Black Desert Console, and is currently developing Pearl Abyss's next-generation engine. The lecture introduces the various technologies used and developed in the engine: what it aims for, why those technologies were developed, and how they are used. It also covers how the company's various new projects are being developed and which technologies will be applied in the future.


■ Why did Pearl Abyss start developing the next-generation engine? – “We will provide high-quality games to more users quickly.”


Before the main announcement, lead engine programmer Kwang-Hyun Koh explained why Pearl Abyss started developing its next-generation engine. The basic value Pearl Abyss considers important when preparing a new game is whether it can make the game quickly and efficiently. As the quality of games improves, each one requires more and more time, effort, manpower, and cost. Because this is a huge risk, many parts of the pipeline had to be automated to stay efficient.

In addition, since the projects are built around an open-world structure, the ability to render distant views so that users can feel the expansiveness of the scene cannot be neglected, and the ability to render close-up detail has likewise become an important factor.

The engine also had to serve not only the several new projects in progress at the same time but future projects as well, and it was important that it run on various platforms. In short, 'providing high-quality games to more users, quickly' is the core value of Pearl Abyss's next-generation game engine. Koh started building the engine with this mindset, and said that the various features introduced in the lecture are all the result of these demands.

▲ Kwang-Hyun Koh

■ ‘Lighting’ of Pearl Abyss’ next-generation engine


The first feature of the Pearl Abyss next-generation engine to be introduced is 'Lighting'. Lighting is the core of rendering, an essential element and foundation for everything from the most trivial details to realistic final output. Koh explained that to achieve 'rapid development', one of Pearl Abyss's core values, time-consuming pre-processes such as light pre-baking had to be minimized so that results could be checked immediately.

For 'Direct Lighting', light arriving directly from a light source, Clustered Shading is applied to handle a large number of lights and to keep them easy to manage; all light sources are dynamic. The frustum region of the screen is divided into a grid of 'clusters'. Light sources that intersect a cluster's volume are registered in that cluster's storage, and when calculating lighting for a pixel, its coordinates are used to look up the cluster and fetch the lights that affect it. Several types of light source are supported, and the BRDF is GGX with the Smith visibility term.
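As a rough illustration of the clustered-shading flow described above, here is a minimal CPU-side sketch. The grid size, the normalized cluster-space coordinates, and all names are illustrative assumptions, not Pearl Abyss's implementation:

```python
GRID = (16, 8, 4)  # cluster counts along screen X, screen Y, and depth (illustrative)

def build_clusters():
    """One empty light list per cluster cell."""
    nx, ny, nz = GRID
    return {(x, y, z): [] for x in range(nx) for y in range(ny) for z in range(nz)}

def assign_light(clusters, light_index, aabb_min, aabb_max):
    """Register a light's bounding box (given in normalized 0..1 cluster space)
    with every cluster cell it overlaps."""
    lo = [max(0, int(aabb_min[i] * GRID[i])) for i in range(3)]
    hi = [min(GRID[i] - 1, int(aabb_max[i] * GRID[i])) for i in range(3)]
    for x in range(lo[0], hi[0] + 1):
        for y in range(lo[1], hi[1] + 1):
            for z in range(lo[2], hi[2] + 1):
                clusters[(x, y, z)].append(light_index)

def lights_for_pixel(clusters, u, v, depth):
    """Look up the light list for a pixel via its cluster coordinates."""
    cell = tuple(min(GRID[i] - 1, int((u, v, depth)[i] * GRID[i])) for i in range(3))
    return clusters[cell]
```

At shading time, each pixel only iterates over the lights registered in its own cluster rather than every light in the scene, which is what makes large dynamic light counts manageable.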

Area lights are heavier than point or spot lights, but they are needed for realistic results. The next-generation engine uses an optimized form of Approximate Importance Sampling and provides Quad, Disk, Tube, and Sphere shapes. Unlike point and spot lights, their shadows cannot be cast directly with a shadow map, so shadows are applied using ray marching or ray tracing.

▲ Image with lighting applied after arranging an area light source on a surface with various roughness
▲ Some of the shapes supported by area lights: Tube, Disk, and Quad are arranged.

A Tube can be used for a fluorescent lamp or a lightsaber, and a Quad for an electric signboard.
▲ Images without shadow processing, and with shadows expressed using ray tracing and ray marching

The IES profile is a file format that records a light source's distribution of light in space. When placing a light source, an IES profile can be used to reproduce its realistic shape and intensity. IES profiles are often published by lighting manufacturers, so they are easy to obtain.

Pearl Abyss also adopted physical units to express light sources more realistically. A great deal of measured data already exists, including the photometric specifications of products released by lighting manufacturers, so there is no need to guess whether a light source is plausible, and physical units are suitable as an absolute reference for light when rendering.
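The photometric relations behind such physical units are standard; a small sketch (not from the talk) showing the inverse-square law and the candela-to-lumen conversion for an idealized point source:

```python
import math

def illuminance_lux(intensity_cd, distance_m):
    """Illuminance from a point source: E = I / d^2 (inverse-square law).
    Intensity in candela, distance in meters, result in lux."""
    return intensity_cd / (distance_m ** 2)

def point_light_lumens(intensity_cd):
    """An isotropic point source emits over the full sphere (4*pi steradians):
    luminous flux (lm) = luminous intensity (cd) * solid angle (sr)."""
    return intensity_cd * 4.0 * math.pi
```

For example, a 100 cd source measured 2 m away yields 25 lux, which is the kind of absolute, verifiable number that makes physical units useful as a rendering reference.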

For direct-lighting shadows, a Cascaded Shadow Map (CSM) is used for directional light, and the 'Massive Shadow' method was introduced for point and spot lights, with the Raymarched Shadow method mixed in and ray-traced shadows also supported. For Massive Shadow, texture space is provided in the form of a pool; when the camera moves, the distance between the camera and each light source is checked and the pool is reconfigured so that closer shadows receive a larger area. In theory this can represent the shadows of hundreds of light sources. Point-light shadows are laid out as a tetrahedron (four faces), which performs better than a cube map (six faces).

▲ The screen displaying the Massive Shadow pool. The area provided varies by distance.

It can also be seen that the shadow of the Point Light is composed of a tetrahedron.

Massive Shadow still runs into performance problems when there are too many dynamic objects and light sources. To prevent this, Pearl Abyss devised the 'Raymarched Shadow' method. A signed distance can be obtained from the voxel structure used for indirect lighting, and a ray is traced toward the light source to check whether it hits a voxel before reaching the light. Details that are hard to capture in the voxel structure are shadow-tested by ray marching the screen depth instead. This allowed up to 32 shadowed lights per cluster, effectively a shadow for almost every light.
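The core of an SDF shadow test, sphere tracing from the surface toward the light, can be sketched as follows. This is a generic illustration under the usual sphere-tracing assumptions, not the engine's actual code:

```python
def sphere_trace_shadow(sdf, origin, light_dir, light_dist, eps=1e-3, max_steps=64):
    """March from a surface point toward the light, advancing by the signed
    distance each step; if the ray gets within eps of geometry before
    reaching the light, the point is in shadow. `sdf` maps a 3D point to
    the distance to the nearest surface."""
    t = 10 * eps  # start slightly off the surface to avoid self-shadowing
    for _ in range(max_steps):
        p = tuple(origin[i] + light_dir[i] * t for i in range(3))
        d = sdf(p)
        if d < eps:
            return 0.0          # occluded: in shadow
        t += d
        if t >= light_dist:
            return 1.0          # reached the light unoccluded
    return 1.0
```

Because each step jumps by the distance to the nearest surface, empty space is crossed in few iterations, which is what makes SDF shadows cheap enough for many lights.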

Characters are approximated with additional capsules, and this information is processed together when evaluating the signed distance. Area lights generate rays according to their shape, and point or spot lights according to their size, making natural penumbrae possible. During gathering, intermediate results are denoised using adjacent samples and stabilized with Temporal Reprojection.

Raytraced Shadow uses the same structure as ray marching but uses ray tracing for the intersection test. An any-hit query is relatively fast, and local-light shadows are usually hit from fairly close by, so it is actually quite cheap.

▲ If you apply Massive Shadow and Raymarched Shadow, more detailed expression is possible.
▲ When Raytraced Shadow is applied. 
Depending on whether the hardware is supported, you can select and use the appropriate one.

Indirect lighting uses a voxel-based representation, is calculated dynamically, and is reflected immediately without any pre-processing. The addition of an SDF (Signed Distance Field), improved denoising, and integration with ray tracing are among the recent changes.

The voxel-based format consists of clipmaps, and each clipmap has a size of 128 x 64 x 128 in consideration of performance and quality. Various voxel clipmaps exist according to usage, and these include Irradiance Voxel, Indirect Irradiance Cache Voxel, Sky Visibility Voxel, Signed Distance Voxel, and Axis-Aligned Distance Voxel.

First, the Irradiance Voxel stores diffuse lighting results. Using the on-screen lighting result plus distributed off-screen processing, the result is fed back over several frames to create a multi-bounce effect. Off-screen distributed processing refers to processing areas that are not visible on screen.

▲ The final result of the scene’s diffuse is configured like this in Irradiance Voxel

The Indirect Irradiance Cache stores only diffuse indirect lighting, excluding albedo, and is used for forward-lit elements such as particles and translucency. Spherical harmonics provide directionality, but storing them for every channel would take too much memory, so the color is encoded as YCoCg and only the Y channel is given spherical-harmonic directionality.
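The YCoCg transform referenced here is a common, exactly invertible color transform; a sketch of the standard definition (the surrounding storage scheme is the engine's, but this conversion is generic):

```python
def rgb_to_ycocg(r, g, b):
    """RGB -> YCoCg: Y carries luminance, Co/Cg carry chroma. In the scheme
    described above, only the Y channel would receive directional (SH) storage."""
    y  =  0.25 * r + 0.5 * g + 0.25 * b
    co =  0.5  * r           - 0.5  * b
    cg = -0.25 * r + 0.5 * g - 0.25 * b
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    """Exact inverse of the transform above."""
    return y + co - cg, y + cg, y - co - cg
```

Since most perceived lighting detail lives in luminance, giving only Y full directionality keeps the cache small at little visible cost.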

Density and material are stored separately for other uses, and the cache is likewise built through scene lighting and off-screen distributed processing. The Sky Visibility Voxel is used when forward lighting or other passes need a sky-visibility check, and it is created together with the Indirect Irradiance Cache Voxel.

Signed Distance Voxel has the same form as the commonly known SDF (Signed Distance Field) structure, and is used for collision handling or shadow check in multiple places.

▲ Example image of Indirect Irradiance Cache Voxel
▲ Example images of the Sky Visibility Voxel and Signed Distance Voxel

For fast tracing, Pearl Abyss also devised the 'Axis-Aligned Distance Voxel'. By storing the number of empty voxels along each axis direction, an AABB region can be obtained, and ray marching is accelerated by skipping straight to the intersection of the ray and the AABB. Additional information such as opacity and character-capsule indices is stored in the alpha channel. The structure is built by merging hierarchically over two passes, and it is especially advantageous for rays that travel parallel to a surface. Even a single trace yields a rough occlusion estimate, which is used for distant diffuse indirect lighting.

Ray tracing improves performance by reading the voxel cache instead of doing separate shading on a closest-hit query. Importance sampling is used here as well: only the hit distance is stored, and the hit position is then reconstructed from it to sample the irradiance.

The results above are stored as 3-band spherical harmonics in screen tiles; SVGF (Spatio-Temporal Variance Guided Filtering) is applied, followed by bilateral upsampling. If a sample cannot be obtained because it is too far from the tile center, the Indirect Irradiance Cache Voxel is used instead, with bilateral sampling based on the SH dominant direction.

▲ Screen using ray marching. Below is the screen when denoising, SSAO, and ray tracing are applied here.

Specular indirect lighting prioritizes SSR (Screen Space Reflection), which gives the most detail. When no information is available from SSR, voxels are used as in the diffuse path, and failing that, a cubemap. Cubemaps also record depth, and values are obtained by ray marching against it; they are mainly used at a distance to minimize parallax errors. A different BRDF is used per material with importance sampling for each BRDF, and for hair or cloth an anisotropic PDF is precomputed and used.

Denoising of specular indirect lighting uses Recurrent Blur and Wavelet Filter and stabilizes it through Temporal Reprojection. Recurrent Blur is used to process Pre Blur and Disocclusion Fix, and Post Blur is replaced with Wavelet Filter to preserve detail and reduce noise.

▲ If you combine both Recurrent Blur and Wavelet Filter, you can see that details are processed better.

■ ‘Atmosphere’ processing of Pearl Abyss next-generation engine


For the Atmosphere, frame-distributed processing and improved upscaling were applied to increase cloud resolution. The key is not to process each atmospheric phenomenon individually, but to handle everything required for atmospheric expression at once: the sky, the sun and moon, clouds, fog, and light shafts can all be expressed through it. It supports multi-scattering and mixes a short-distance method with a long-distance one.

For close range, a frustum-aligned voxel structure called a 'Froxel' is used. In addition to directional light, lighting and shadows from local light sources are included, with a Cascaded Shadow Map used for directional-light shadows. The shadow cast by each light source through the atmosphere or fog is calculated by accumulating the fog density injected into the froxels toward the light source. Fog volumes of different shapes can be placed as desired, and they can be optimized by registering the fog-volume information in clusters.

▲ Example of sunlight expression image using Froxel

Since expressing long distances with froxels is unsuitable in terms of memory and performance, the remaining range is handled by long-distance ray marching. Air/aerosol density gives the attenuation of directional light, which is used to approximate the directional-light intensity for direct lighting.

Long-distance ray marching uses various tables: cloud density, height-fog information, planet parameters, atmospheric thickness and condition, air density, moisture density, sun angle, and so on. Integrating these with the scattering formula produces aerial perspective, clouds, height fog, light shafts, and even the blue sky. Handling everything in one pass avoids the seams and awkwardness that can occur when separately processed elements are mixed afterward, allowing a natural atmosphere.
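One building block of such an integrator, the Beer-Lambert transmittance accumulated during ray marching, can be sketched generically (a midpoint-rule illustration, not the engine's integrator):

```python
import math

def transmittance(density_fn, origin, direction, length, steps=64):
    """Beer-Lambert transmittance along a ray: T = exp(-optical depth),
    where optical depth is the extinction density integrated along the ray.
    Here it is integrated numerically with the midpoint rule."""
    dt = length / steps
    optical_depth = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        p = tuple(origin[k] + direction[k] * t for k in range(3))
        optical_depth += density_fn(p) * dt
    return math.exp(-optical_depth)
```

Evaluating this against varying air, aerosol, cloud, and fog densities along the march is what attenuates sunlight and produces effects like aerial perspective and light shafts.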

▲ Example images expressing sunny days, sunsets, and high humidity days at the same location

Since ray marching is a heavy task, it is performed separately at a lower resolution, and since everything is computed at once, upscaling to full-screen resolution is necessary. Pearl Abyss devised a new upscaling method for this purpose.

Pixels are grouped into 4×4 tile areas, and 16 depth slices are created per tile. During ray marching, results are recorded into the slice whose depth is near the pixel depths within the tile; at full resolution, sampling the slice that matches each pixel's depth produces a naturally upscaled image.
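The per-pixel lookup at upscale time might look like this minimal sketch (the nearest-depth selection and all names are illustrative assumptions about the scheme described):

```python
def nearest_slice(slice_depths, pixel_depth):
    """Pick the index of the low-resolution result slice whose recorded
    depth best matches the full-resolution pixel's depth, so foreground
    and background pixels read different marched results."""
    best, best_err = 0, float("inf")
    for i, depth in enumerate(slice_depths):
        err = abs(depth - pixel_depth)
        if err < best_err:
            best, best_err = i, err
    return best
```

Because each full-resolution pixel picks the slice matching its own depth, thin silhouettes like tree branches stay sharp instead of smearing atmosphere across depth discontinuities.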

▲ It can be seen that even complex parts like tree branches are sharply upscaled.

Next up was 'Fluid'. Koh said they tried to express the natural movement of fog and smoke through fluid simulation: when density and velocity are injected by the particle system, the rest of the motion emerges naturally from the simulation. Rendering goes through the froxels, and the fluid interacts with surrounding objects, such as characters, as they move.

▲ An example of the screen where the emitter set to be injected through the particle system is installed. Realistic movement can be confirmed
▲ An example of interaction with a character. You can see the surface of the object repels the fluid.

■ ‘Geometry’ and ‘Physics’ of Pearl Abyss’ next-generation engine


As a solution to the 'distant-view expressiveness that makes users feel the expansiveness' Pearl Abyss considers important, the engine added features for easily placing objects in large quantities in a level and rendering them at a distance. In particular, when composing fields or forests, rather than arranging individual objects one by one, mass placement is possible simply by setting areas or materials, rendered with subdivided LODs and improved impostors.

▲ Most of the trees and crops shown on the screen are not placed directly, but are automatically created by setting

Beyond the mass-placement system, Pearl Abyss automated level composition wherever possible. Even for close-up detail, content is configured automatically in real time based on conditions such as material and area, and changes to the settings can be confirmed immediately. If necessary, density can be adjusted according to the performance options.

▲ Example image with real-time setting change applied in near expression

In addition, various looks can be achieved with a function that creates objects on the surface of another object. By adjusting settings such as which object to spawn and its density, the changes can be checked in real time.

▲ With a simple setup, you can make a house full of grass. 
It is used to add details or to express old materials.

Extremely distant views play a very important role in creating a sense of space in a huge world, but rendering them with the methods introduced so far would be very heavy. To solve this, the Pearl Abyss engine builds proxies per level and organizes them hierarchically. Searching from the upper layers, if a proxy region falls within the load range, the lower-layer proxies are searched and loaded in turn, making it possible to render the entire world.

The close-up view is just as important as the distant view, so more detailed geometric expression and height-map displacement were added, along with shadowing on the displaced geometry for extreme shadow detail.

▲ You can see the difference in detail that enhances realism by applying Displacement

Next, an introduction to the ‘physics environment implementation’ followed. Several games to be produced with the next-generation engine of Pearl Abyss are expected to support various means of transportation.

Pearl Abyss uses Havok as its physics engine and improved Havok's vehicle system to meet its needs in various respects. Koh explained that simulation stability, such as overlap checks and roll control, had to be secured additionally. Various other improvements were applied, including speed-dependent steering stiffness, simulation of a character sitting in a vehicle, and vehicle destruction.

▲ Vehicle testing in the new project ‘Plan 8’ with the improved vehicle system

Pose rigging can be set freely by the modeler with Animation Constraints or Wire Parameters in the DCC tool. For example, the secondary motion of gears or pistons attached to a mechanical character comes from pose rigging. The rig data is extracted and evaluated in the engine in real time, so even when the character's gross motion is corrected by the game's various IK and pose modifiers, the connected gears and pistons still move accurately. It is also used for helper-bone setups that remove skinning artifacts such as candy wrapping.

▲ Example applied to ‘Plan 8’. The piston connected through IK is moving while satisfying the constraint according to the setting

'Breakable' is a feature for marking objects as destructible. If shards are pre-baked for a regular object, it automatically becomes a breakable object. A physical material can be set per sub-mesh, and destruction can also be triggered by impacts from rigid-body simulation.

When a large object is destroyed by an impact, the portion to which the impact was applied is destroyed, but the remaining portion tends to retain its original shape. Afterwards, additional destruction by fragments occurs in the remaining parts. In order to naturally implement this series of processes, a hierarchical breakable structure was applied in the Pearl Abyss engine.

▲ The image of the siege of ‘Crimson Desert’. Destruction of an individual entity and its chain shock result in another destruction
▲ You can check the hierarchical breakable structure applied to the Pearl Abyss engine.

For cloth simulation, PBD (Position Based Dynamics) was used for greater stability, and the Jacobi method was adopted to run it on the GPU. Long Range Attachments minimize solver iterations, and collision computation is simplified through contact-plane prediction, which raised the success rate of collision detection at high relative speeds.
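A Jacobi-style PBD distance-constraint step, where all corrections are computed from the same positions and only then applied together, can be sketched like this (a generic illustration of the method, not Pearl Abyss's solver):

```python
def pbd_jacobi_step(positions, inv_mass, constraints, stiffness=1.0):
    """One Jacobi iteration of PBD distance constraints. Corrections are
    accumulated from the current positions and applied at the end, so the
    result is order-independent and maps well to parallel GPU execution.
    constraints: list of (particle_a, particle_b, rest_length)."""
    n = len(positions)
    delta = [[0.0, 0.0, 0.0] for _ in range(n)]
    counts = [0] * n
    for a, b, rest in constraints:
        d = [positions[b][k] - positions[a][k] for k in range(3)]
        dist = sum(c * c for c in d) ** 0.5
        w = inv_mass[a] + inv_mass[b]
        if dist == 0.0 or w == 0.0:
            continue
        corr = stiffness * (dist - rest) / (dist * w)
        for k in range(3):
            delta[a][k] += inv_mass[a] * corr * d[k]
            delta[b][k] -= inv_mass[b] * corr * d[k]
        counts[a] += 1
        counts[b] += 1
    for i in range(n):
        if counts[i]:
            for k in range(3):
                positions[i][k] += delta[i][k] / counts[i]
    return positions
```

A Gauss-Seidel solver would apply each correction immediately and converge faster, but the Jacobi form trades that for parallelism, which matches the GPU-based approach described in the talk.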

▲ The same driving method was applied not only to the fabric, but also to the hair and accessories.

After the physics section, several other systems in the next-generation engine were introduced. First was the Material System, used to let artists control shading and turn it into a resource. Vertices and pixels can be controlled, parameters that need to change can be exposed and driven through various routes, and various staged scenes can be composed through the sequence editor, which includes a timeline.

▲ This scene in ‘DokeV’ was also produced with the Material System.

Not only lighting and time but also 'weather changes' are very important factors in expressing the world. Various weather effects are possible in the next-generation engine, and artists can shape them as desired by linking them with the particle system. Since muddy or snowy terrain systems can also be implemented, a wide variety of worlds can be configured.

▲ The surface of the accumulated snow changes according to the character’s trace. The same applies to mud.


The engine also needs solutions for pathfinding and AI. Koh introduced the voxel navigation announced at GDC 2019, since further developed and improved, with an obstacle-overcoming function and a 'flow field' function added so that large numbers of objects can move in groups. This can be used in situations where a crowd is chasing a target.
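A flow field in its simplest form is a per-cell direction map built by a single search from the goal; a minimal BFS sketch on a 2D grid (illustrative only; the engine's version builds on its voxel navigation data):

```python
from collections import deque

def flow_field(grid, goal):
    """Build a flow field on a 4-connected grid (0 = walkable, 1 = blocked).
    One BFS from the goal assigns every reachable cell a step direction
    toward the goal, so any number of agents can follow the field without
    running individual path searches."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    field = {}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                field[(nr, nc)] = (-dr, -dc)  # point back toward the goal side
                queue.append((nr, nc))
    return field
```

The cost of the search is paid once per goal rather than once per agent, which is exactly why flow fields suit crowds of pursuers moving as a group.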

In addition, collision avoidance and movement paths between characters were improved, and the data structure was improved to handle dynamic agent sizes. The road movement and traffic-signal systems for vehicles and pedestrians seen in the previously released 'DokeV' trailer are also newly added features, making it possible to express the flow of traffic.

The ability to interact with surrounding objects is also needed for natural expression. This applies to things like trees, grass, and particles that are hard to drive with rigid-body simulation. A character's movement can trample grass or scatter piled fallen leaves, and shock waves from actions, magic, or explosions can deliver a physical impulse. In the Pearl Abyss engine, when a force is applied to the air, the airflow is computed through fluid simulation and that flow is applied back to the objects.

▲ You can set how much the leaves will be affected and what will be the physical quantity of the leaves themselves.


In addition, the surrounding environment and the movement of objects are affected by the wind. Trees, cloth and hair are all affected, too, and the particle system refers to the wind value to handle the effect, and the artist can set how much or how it will be affected.

▲ You can see how the leaves, clothes, and hair are affected by the strength of the wind.

Water ripples were also implemented. In the Pearl Abyss engine, ripples are computed numerically with the discrete wave equation using the finite difference method, and for a more natural look, waves are generated at the surface of moving objects. Koh explained that the moving-object information used by the fluid system is reused here as-is.
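The discrete wave equation solved with finite differences can be sketched on a small height-field grid (the coefficient, damping, and boundary handling are illustrative assumptions):

```python
def wave_step(prev, curr, c=0.3, damping=0.999):
    """One finite-difference step of the 2D discrete wave equation:
    h_next = 2*h - h_prev + c * laplacian(h), with mild damping.
    `c` folds the wave speed and step sizes into one coefficient.
    Heights outside the grid are treated as 0 (a simple fixed boundary),
    so waves reflect at the borders."""
    rows, cols = len(curr), len(curr[0])
    nxt = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            up    = curr[i-1][j] if i > 0 else 0.0
            down  = curr[i+1][j] if i < rows - 1 else 0.0
            left  = curr[i][j-1] if j > 0 else 0.0
            right = curr[i][j+1] if j < cols - 1 else 0.0
            lap = up + down + left + right - 4.0 * curr[i][j]
            nxt[i][j] = damping * (2.0 * curr[i][j] - prev[i][j] + c * lap)
    return nxt
```

Injecting a height disturbance where a moving object touches the surface, then stepping this update each frame, produces ripples that spread outward and bounce back from boundaries, matching the behavior shown in the captioned image.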

▲ Ripples from the surface of the object are reflected back from the water boundary, causing another ripple.

Koh introduced some of the features developed in Pearl Abyss's next-generation engine through the talk, but said these are only part of the whole and that much work remains. He added that optimization for 'smooth gameplay' matters above all, and expressed his ambition to put more effort into it. Pearl Abyss plans to keep reflecting requests from its new projects in the engine and to continue development so that it runs on more platforms.

Source: Inven