|3DS (3D Studio): A file format used by Autodesk 3ds Max software for storing 3D models, materials, and animations.
|ABC (Alembic): A file format designed for efficient storage and interchange of animated 3D geometry and other visual effects data.
|Alembic: A file format developed for storing and sharing large amounts of animation and simulation data in the film and visual effects industry. Alembic is efficient in terms of storage and allows the transfer of complex information such as geometry, animations, cameras, and physics simulations.
|Alpha Blending: Technique used to simulate transparency in computer graphics, where the alpha value determines the degree of transparency of a pixel.
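The standard "over" operation can be sketched in Python (the helper name `alpha_blend` is hypothetical, for illustration only):

```python
def alpha_blend(src_rgb, src_alpha, dst_rgb):
    """Source-over blending: out = src * a + dst * (1 - a), per channel."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

# A 50%-opaque red pixel over a white background yields pink:
# alpha_blend((1.0, 0.0, 0.0), 0.5, (1.0, 1.0, 1.0)) -> (1.0, 0.5, 0.5)
```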
|Alpha Channel: Additional channel in a digital image that represents the opacity of each pixel.
|Alpha Compositing: Process of combining an image with a background to create the appearance of partial or full transparency.
|Alpha Testing: Rendering technique that discards certain pixels based on their alpha value.
|Ambient Light: Light assumed to reach all parts of a scene equally, regardless of location or orientation.
|Ambient Occlusion (AO): Shading technique that simulates how enclosed or sheltered areas have less ambient light.
|AMF (Additive Manufacturing File Format): A format used for storing data related to 3D printing and additive manufacturing processes.
|Animation: It is the process of designing, drawing, modeling, and preparing sequences of static images that, when displayed rapidly in sequence, create the illusion of movement.
|Anisotropic Filtering: Method for improving the quality of textures mapped onto surfaces that are far away and viewed at acute angles.
|Anti-Aliasing Methodologies: Techniques like SSAA, MSAA, FXAA, TXAA, SMAA, MLAA that aim to reduce aliasing (jagged edges) in an image.
|Anti-aliasing: It is a technique used to smooth jagged edges that may appear in rasterized graphics.
|AoE (Area of Effect): The range or area in which an ability, spell, or attack affects multiple targets in video games and RPGs.
|Area Light: Light source that has an area and can produce soft shadows.
|Artifact: Generally refers to unwanted errors or distortions in a digital image or video that result from a process being applied, such as data compression.
|Backface Culling: Technique used in 3D graphics to improve performance by not drawing polygons that are facing away from the camera.
|Baking: In video games, the process of precalculating and storing certain elements, such as lighting, shadows, textures, and physics, so they can be applied in real time. This improves performance and visual quality by reducing the computational load during execution.
|Bézier Curve/Surface: Mathematically defined curve/surface used in computer graphics to model smooth shapes.
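A cubic Bézier curve can be evaluated with de Casteljau's algorithm, which is just repeated linear interpolation between control points; a minimal sketch (helper names are hypothetical):

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t via de Casteljau's algorithm."""
    lerp = lambda a, b, s: tuple(x + (y - x) * s for x, y in zip(a, b))
    # First level: interpolate between adjacent control points.
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    # Second level, then the final point on the curve.
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# bezier_point((0, 0), (0, 1), (1, 1), (1, 0), 0.5) -> (0.5, 0.75)
```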
|Bilinear Filtering: Texture filtering technique that uses the four nearest texels in the same mipmap.
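A minimal sketch of the idea, assuming the texture is a plain 2D grid of scalar texels and coordinates are given in texel space (names hypothetical):

```python
def bilinear_sample(tex, u, v):
    """Sample a 2D grid (list of rows) with bilinear interpolation of
    the four nearest texels."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    t00, t10 = tex[y0][x0], tex[y0][x0 + 1]
    t01, t11 = tex[y0 + 1][x0], tex[y0 + 1][x0 + 1]
    top = t00 * (1 - fx) + t10 * fx          # blend along x on the top row
    bottom = t01 * (1 - fx) + t11 * fx       # blend along x on the bottom row
    return top * (1 - fy) + bottom * fy      # blend the two rows along y
```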
|Billboarding: Technique in which an object always faces the camera, often used for 2D objects in a 3D scene.
|Bloom: Effect that simulates light spilling over from bright areas to darker areas in an image, giving the impression of extremely bright light or glow.
|Bone Animation / Skeletal Animation: Animation method that uses a hierarchy of “bones” to deform a model, often used for characters.
|Boolean Operations: Operations used to combine 3D objects, including union, intersection, and difference.
|Bounce: The reflection of light rays off surfaces, changing their direction. Bounced (indirect) light is crucial for creating realistic lighting in virtual scenes: light may reflect off a mirror, bounce between multiple surfaces, or interact with materials that have different reflective properties.
|Bounding Box: It is an invisible rectangular volume that completely encloses a 3D object and is used to simplify certain collision detection and culling operations.
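A common use of bounding boxes is a cheap axis-aligned overlap test: two boxes intersect exactly when their ranges overlap on every axis. A hypothetical sketch:

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Axis-aligned bounding boxes overlap iff their intervals overlap
    on every axis (boxes given as min/max corner tuples)."""
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b in zip(min_a, max_a, min_b, max_b))
```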
|BRDF (Bidirectional Reflectance Distribution Function): Function that defines how light is reflected from a surface, used in physically based rendering.
|BSDF (Bidirectional Scattering Distribution Function): A function used in computer graphics that describes how light is distributed when it interacts with a surface. It defines how light is reflected, refracted, or scattered in different directions, allowing realistic simulation of materials in virtual environments.
|Bump Mapping: Technique that increases the realism of 3D graphics by simulating small surface details, such as bumps and wrinkles, by perturbing the surface's normals without changing its geometry.
|Caustics: Patterns of light created when light is focused by refraction through a transparent surface or by reflection from a shiny or rippled surface.
|Cel Shading: Rendering technique that gives 3D graphics a hand-drawn or comic book appearance.
|Chroma Keying: Technique for combining two images based on color, commonly used in film and television effects (such as “green screen”).
|Clipping Plane: A plane used to discard parts of the scene that are outside a certain range to optimize the rendering process.
|Clipping: It is the process of removing objects or parts of objects that are outside the field of view, or masking certain regions of the image.
|COLLADA (COLLAborative Design Activity): An XML-based format for exchanging digital assets between various 3D software applications.
|Collision Detection: Process of determining if two or more objects have collided or are about to collide.
|Collision Response: Process of deciding what to do once a collision has been detected, such as bouncing, sticking, sliding, etc.
|Color Grading: Process of altering or correcting the color of an image, often used to give a certain mood or style to a scene.
|Constructive Solid Geometry (CSG): Modeling technique that combines solid objects using boolean operations.
|Coordinate Transformation: It refers to the alteration of the coordinates of an object to move, rotate, or scale it.
|Crop: Trim or cut a specific region of an image or rendering to focus on a desired area of interest.
|Cube Map: Texture that contains six 2D squares that can be mapped to the six faces of a cube, typically used for skies or reflections.
|Culling: Process of removing objects from rendering when they will not be visible, to improve performance.
|DAE (Digital Asset Exchange): A format based on COLLADA for storing 3D models, animations, and scenes, widely supported by various software applications.
|DCC (Digital Content Creation): Tools and software used in the production of visual content, such as 3D graphics, animations, and special effects.
|Decal: Texture or shader applied to the surface of an object to add details such as burn marks, graffiti, etc.
|Decimation: It is a process that reduces the number of polygons in a 3D object, often to increase rendering or animation efficiency.
|Deferred Rendering: Rendering technique that first stores information about objects in buffers before calculating the final pixel colors.
|Deferred Shading: Shading technique in computer graphics that allows for high performance by deferring the computation of shading data until after all scene geometry information is known.
|Depth Buffer / Z-Buffer: Map of the depth of each pixel in the scene, used to resolve object visibility.
|Depth Map: It is an image or map that contains information related to the distance of objects in a scene from a particular viewpoint.
|Depth of Field: Technique that simulates the focus of a real camera, causing nearby or distant objects to appear blurred while the subject is in focus.
|Depth Write (in shaders): The ability of a fragment to modify the depth buffer during the rendering process.
|Digital Compositing: Process of creating an image by combining multiple different images. In visual effects, this often involves combining actors shot on a set with CGI backgrounds or effects.
|Directional Light: Light that has a direction but is effectively at an infinite distance, such as sunlight.
|Displacement Map: Type of texture map used to push the vertices of a 3D model to add detail.
|Displacement Mapping: Technique that displaces the geometry of a model based on a texture, adding more realistic details.
|Dithering: Technique used to create the illusion of colors or shades that are not present in the original color palette, usually using dot patterns.
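Ordered (Bayer) dithering is one classic variant: each pixel is compared against a position-dependent threshold from a small tiled matrix. A minimal sketch (names hypothetical):

```python
BAYER_2X2 = [[0, 2],
             [3, 1]]  # classic 2x2 Bayer threshold pattern

def ordered_dither(gray, x, y):
    """Quantize a grayscale value in [0, 1] to black (0) or white (1)
    using an ordered threshold that varies with pixel position."""
    threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4.0
    return 1 if gray > threshold else 0

# A uniform 50% gray dithers to an alternating pattern rather than
# a flat color, creating the illusion of a mid-tone.
```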
|Double Buffering: Technique in which the CPU prepares the next frame in a back buffer while the GPU is rendering the current frame from the front buffer.
|DXF (Drawing Exchange Format): A file format developed by Autodesk for exchanging 2D and 3D CAD (Computer-Aided Design) data between different software applications.
|Edge: An edge is a line that connects two vertices in a three-dimensional object. Edges define the boundaries and shape of the faces (polygons) in a model. Each edge has two endpoint vertices and may have additional properties such as edge smoothing or texture information.
|Emissive Map: Texture map that allows certain areas of an object to emit light in a render.
|Environment Mapping: Rendering technique that simulates reflective or refractive surfaces by using texture maps representing the surrounding environment.
|Euler Angles: Way of representing rotation in 3D space using three angles, although it can have issues with gimbal lock.
|Event / Message (state machine): A specific action or occurrence detected by the program, often caused by user interaction. In a state machine, an event triggers a state change; for example, if a character is in an 'Idle' state, a 'Jump' event transitions it to a 'Jumping' state.
|Extrusion: It is the process of stretching a 2D shape into 3D space to create a solid object.
|FBX (Filmbox): A versatile file format for exchanging 3D models, animations, and other data between different software applications.
|Flat Shading: Shading technique in which each polygon of an object is uniformly shaded, resulting in a faceted look.
|Fluid Dynamics: Simulation of liquids and gases.
|Fog: Effect that decreases the visibility of distant objects to simulate atmosphere.
|Fogging / Atmospheric Effect: Technique used to simulate atmosphere and fog, making objects appear dimmer or bluer as they are farther from the camera.
|Forward Kinematics (FK): Animation method used to position the joints of a character in a movement by rotating each individual joint.
|Forward Rendering: Traditional rendering technique that calculates the final color of each pixel as each object is traversed.
|Forward Shading: Shading technique that calculates shading during the rendering stage.
|FOV (Field of View): It is the extent of what can be seen at a given moment, essentially the camera’s scope in a 3D space.
|Frame Rate (FPS): Number of frames or images displayed per second.
|Frustum Culling: Specific culling technique that removes objects that are outside the field of view.
|Frustum: It is the portion of a solid that lies between two parallel planes that intersect it. In 3D graphics, it often refers to the camera’s field of view, which has a truncated pyramid shape.
|Gamma Correction: Process of adjusting the brightness and/or contrast of an image to compensate for the nonlinear response of display systems.
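A simplified sketch using a plain power-law gamma of 2.2 (the real sRGB transfer function is piecewise; this approximation is for illustration, and the names are hypothetical):

```python
def linear_to_gamma(c, gamma=2.2):
    """Encode a linear intensity in [0, 1] for display."""
    return c ** (1.0 / gamma)

def gamma_to_linear(c, gamma=2.2):
    """Decode a gamma-encoded value back to linear intensity."""
    return c ** gamma

# Encoding then decoding round-trips the value (up to float precision).
```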
|Geometry: In CGI, geometry refers to the creation, manipulation, and representation of three-dimensional shapes and structures for objects and virtual environments. It involves using points, vertices, edges, and polygons to define the shape, position, and visual properties of objects in 3D space.
|Gimbal Lock: A problem in which an object loses one degree of rotational freedom because two of the three rotational axes become aligned. It occurs when using Euler angles for rotation and can lead to unexpected or unnatural movements in 3D animation.
|Global Illumination (GI): Set of techniques that simulate both direct and indirect lighting in a 3D scene, including light that has bounced off surfaces before reaching the camera.
|GLTF (GL Transmission Format): A format designed for efficient transmission and loading of 3D models and scenes, often used for web and real-time applications.
|Gouraud Shading: Shading technique that smooths surfaces by calculating a color for each vertex and then interpolating those colors across the polygons.
|GPU (Graphics Processing Unit): Specialized processor used to accelerate image creation in a buffer intended for output to a screen.
|GUI (Graphical User Interface): The visual interface of buttons, menus, and other on-screen elements through which users interact with software.
|Hard Body Dynamics: Simulation of rigid objects that do not change shape.
|Hardware Acceleration: Use of specialized hardware in the GPU to accelerate certain graphics calculations.
|HDR (High Dynamic Range): A technique that allows a much wider range of luminosity and color in an image, closer to what the human eye can perceive, capturing detail in both very bright and very dark areas.
|Heightmap: A 2D data map used to create a 3D terrain, where each pixel in the map represents a height.
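A minimal sketch of turning a heightmap into terrain vertices, assuming the map is a list of rows of height values (names hypothetical):

```python
def heightmap_to_vertices(heightmap, scale=1.0):
    """Convert a 2D heightmap (rows of heights) into (x, y, z) vertices,
    with y taken from the map value and x/z from the grid position."""
    verts = []
    for z, row in enumerate(heightmap):
        for x, h in enumerate(row):
            verts.append((x * scale, h, z * scale))
    return verts
```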
|Holodeck: A term derived from Star Trek, representing a virtual room that can produce fully immersive virtual reality.
|IBL (Image-Based Lighting): Lighting technique that uses an image or an environment map to illuminate a scene.
|IFC (Industry Foundation Classes): A format for exchanging data in the construction and building industry, enabling interoperability between different software applications.
|IGES (Initial Graphics Exchange Specification): A neutral format for exchanging CAD data between different software applications.
|Implicit Surface: Surface defined by a function that returns a value at each point in space, rather than by vertices and polygons.
|Instancing: Technique that allows rendering multiple copies of an object with a single drawing command, often used for things like grass, trees, particles, etc.
|Inverse Kinematics (IK): Animation method that calculates the positions and rotations of a chain of joints or bones so that the end of the chain reaches a desired target, rather than rotating each joint individually.
|JSON (JavaScript Object Notation): A lightweight data format used for exchanging and storing structured data in a readable, easy-to-parse syntax.
|Keyframe: A frame in an animation that defines the start or end of a transition.
|Level of Detail (LOD): Technique for adjusting the detail of objects based on their distance from the camera to improve performance.
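Distance-based LOD selection can be as simple as comparing against a list of switch distances; a hypothetical sketch:

```python
def select_lod(distance, thresholds):
    """Pick a level-of-detail index (0 = most detailed) from camera distance.
    thresholds is an ascending list of switch distances."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # lowest-detail level beyond the last threshold

# select_lod(5, [10, 50, 200]) -> 0 (full detail up close)
```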
|Level Set Method: Technique for tracking the evolution of interfaces in computational problems, such as the merging of liquids in CGI.
|Light Field: Function that describes the amount of light flowing in each direction through each point in space.
|Light Probe: A tool used to capture environment lighting from various points and later apply that lighting to rendered objects.
|Lightmap: It is a technique that stores precalculated lighting information of a scene into a texture, allowing detailed and realistic lighting effects with lower computational cost.
|Lights and Shadows: These are techniques for simulating light sources and the interaction of that light with objects to create shadows.
|LOD (Level of Detail): It is a technique that allows reducing the complexity of 3D models based on their distance from the viewer or other parameters.
|LOD Bias: Parameter that shifts level-of-detail selection (for models or texture mipmaps) toward higher or lower detail, for example making textures appear sharper or blurrier.
|Low Dynamic Range (LDR): In contrast to HDR, LDR has a more limited range of luminosity values.
|L-System: Type of formal grammar used to model the growth of plants in CGI.
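A minimal L-system expander: every symbol is rewritten in parallel at each iteration according to the rule set (names hypothetical; the example uses Lindenmayer's original algae system):

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system string by applying rewrite rules in parallel.
    Symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Algae system A -> AB, B -> A:  A, AB, ABA, ABAAB, ...
# lsystem("A", {"A": "AB", "B": "A"}, 3) -> "ABAAB"
```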
|Match Moving: Technique that allows CGI to be inserted into live-action footage with the correct position, scale, orientation, and motion relative to photographed objects.
|Material: It is a set of surface properties of a 3D object that defines its appearance, including color, reflectivity, transparency, etc.
|Metaballs: 3D geometric objects whose surface is defined by a threshold value from a scalar function.
|Microfacet Theory: Model that describes surfaces at a microscopic level to accurately represent how they reflect light.
|Mipmap / Mipmapping: Technique that generates multiple lower-resolution versions of a texture and selects the most appropriate one based on the object's distance from the camera, improving both rendering efficiency and texture quality.
|Mocap (Motion Capture): Process of recording the motion of objects or people for use in CGI animation.
|Morph Target Animation / Blend Shapes: Animation method that interpolates between different stored shapes of a model, often used for effects such as facial expressions.
|Morphing: It is an animation technique that smoothly transforms one object into another.
|Motion Blur: Technique that simulates the blur that occurs when an object moves rapidly during a long exposure.
|Motion Capture: Process of recording the motion of real objects or human actors and applying it to 3D models in computer graphics animation.
|Navmesh: Data structure that defines areas where an AI entity can move.
|Non-Uniform Rational B-Spline (NURBS): Mathematical representation of a 3D curve or surface used in 3D modeling.
|Normal Map: Type of texture map that stores normal vectors to simulate surface detail.
|Normal Mapping: Texture mapping technique that simulates fine surface details by affecting the surface’s normal direction.
|Normal: A vector that is perpendicular to a surface or plane, often used for calculating reflections and shadows.
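The normal of a triangle can be computed as the normalized cross product of two of its edges; a minimal sketch (names hypothetical):

```python
def triangle_normal(a, b, c):
    """Unit normal of triangle (a, b, c) via the cross product of two edges."""
    u = tuple(b[i] - a[i] for i in range(3))   # edge a -> b
    v = tuple(c[i] - a[i] for i in range(3))   # edge a -> c
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(x / length for x in n)

# A triangle lying in the XY plane has normal (0, 0, 1).
```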
|Non-Photorealistic Rendering (NPR): A style of computer graphics rendering that represents scenes in artistic and stylized ways rather than photorealistically.
|OBJ (Wavefront OBJ): A file format used for exchanging 3D models and associated data.
|Occlusion Culling: Specific culling technique that removes objects that are occluded by other objects.
|Occlusion Query: Technique used to determine whether an object is visible or occluded by other objects, often used to optimize rendering.
|Octahedral Impostor: A technique used in video games to represent complex 3D objects, such as trees or buildings, with flat images mapped onto an octahedral layout. The appropriate image is displayed for the current viewing angle to simulate a three-dimensional object, achieving a convincing visual effect at a much lower computational cost.
|Opacity Map: Type of texture map that determines the transparency of different parts of an object.
|Parallax Mapping: Advanced mapping technique that gives the illusion of depth to textures by altering the position of pixels based on a height map and viewing angle.
|Particle System: CGI technique for simulating fluid and diffuse natural phenomena such as fire, water, smoke, dust.
|Path Tracing: Advanced ray tracing technique that simulates many types of lighting and camera effects.
|Perlin Noise: Type of gradient noise developed by Ken Perlin, widely used in CGI for the procedural generation of textures, shapes, and motion.
|Phong Shading: Shading method in which vertex normals are interpolated and used to calculate the pixel’s color, resulting in smooth highlights and shadows.
|Photon Mapping: Global rendering technique that simulates how light interacts with objects and the environment.
|Physically Based Animation: Animation based on the laws of physics to achieve more realistic movement.
|Physically Based Rendering (PBR): It is an approach to rendering that seeks to emulate how light interacts with surfaces in the real world.
|Pixels: They are the smallest units of a digital image. Each pixel contains color information.
|PLY (Polygon File Format): A file format commonly used for storing geometric information of 3D models, typically representing polygons and vertices.
|Point Filtering: Texture filtering technique that uses the nearest texel.
|Point Light: Light that emanates from a single point in all directions.
|Point: In CGI, a point refers to a coordinate in three-dimensional space. It represents the most basic position in space and is defined by its x, y, and z coordinates. Points are used to construct objects and represent locations in three-dimensional space.
|Polygon: A flat surface defined by a closed sequence of three or more vertices connected by straight edges. In CGI, polygons are used to represent the faces of three-dimensional objects; 3D models are often composed of many polygons. The most common are triangles (three vertices) and quadrilaterals (four vertices), although polygons with more sides can also be used.
|Post-Processing: Set of operations performed on an image after the main rendering is completed.
|PRC (Product Representation Compact): A file format used for storing 3D models and associated data, often used in the aerospace and manufacturing industries.
|Procedural Animation: Animation generated in real-time by the game system, often in response to player or AI actions.
|Procedural Generation: It is a technique that uses algorithms and mathematical functions to automatically generate content, such as textures, 3D models, and entire universes.
|Procedural Modeling: Creation of 3D models from a set of rules or algorithms, rather than manually modeling them.
|Procedural Terrain Generation: Creation of terrains using algorithms at runtime.
|Quaternions: Way of representing rotation in 3D space that avoids gimbal lock, often used in 3D graphics and physics.
|Radiosity: Global illumination technique that simulates indirect lighting in a scene, i.e., light that has bounced off multiple surfaces before reaching the camera.
|Rasterization: It is a process that converts an image described in vectors (such as a 3D model) into a bitmap image that can be displayed on a screen.
|Ray Casting: Simplified ray tracing where rays are cast from the camera and their intersections with scene objects are traced.
|Ray Marching: Rendering technique that steps along rays in increments through a scene or volume, used to produce images with a high level of depth and detail, as in rendering clouds or fluids.
|Ray Tracing: Rendering technique that simulates the path of light through a scene, producing highly realistic images and visual effects.
|Raymarching Distance Fields: Rendering technique that uses a signed distance field to determine the intersection of a ray with a surface.
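Sphere tracing against a signed distance field steps each ray forward by the distance value, which is safe because the SDF guarantees no surface is closer than that distance. A hypothetical sketch against a unit sphere at the origin:

```python
import math

def sphere_sdf(p, radius=1.0):
    """Signed distance from point p to a sphere centered at the origin."""
    return math.hypot(*p) - radius

def raymarch(origin, direction, max_steps=64, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance along the ray by the SDF value until the
    surface is hit or the ray escapes. Returns hit distance or None."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + d * t for o, d in zip(origin, direction))
        dist = sphere_sdf(p)
        if dist < eps:
            return t          # close enough to the surface: report a hit
        t += dist             # safe step: nothing is closer than dist
        if t > max_dist:
            return None       # ray left the scene without hitting anything
    return None
```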
|Raymarching: Rendering technique that advances along each ray in steps, evaluating the scene (for example, a volume or distance function) at each step; used for certain types of scenes, particularly those involving volumetrics or heavy refraction and reflection.
|Reflection Pass: Layered rendering process that generates an image showing only reflections.
|Reflection Probe: A tool that captures an environment map from its location in the world, which can later be used to provide reflections to other objects.
|Render Farm: Networked set of computers working together to process large rendering jobs.
|Render Target: Object onto which a GPU can render, which can be a back buffer, a texture buffer, or a depth buffer.
|Rendering: It is the process of generating a still image or animation from a 3D scene. This process can be highly complex and involve stages such as ray tracing, texture mapping, calculation of reflections and refractions, etc.
|Rigging: It is the process of creating a “skeleton” for a 3D model that can be animated.
|Rigid Body Dynamics: Simulation of solid objects that do not deform.
|Rotoscoping: Technique in which artists trace over film footage frame by frame for use in live-action animation films.
|Sampling: Process of converting continuous signals (such as an image or sound) into a series of discrete values.
|Scene Graph: Data structure that organizes scene objects in a hierarchical or graph structure to facilitate transformations and culling.
|Screen Space: The 2D coordinate system representing the display device’s screen.
|SDF (Signed Distance Function): A mathematical representation of a three-dimensional shape or volume in which each point in space stores the signed distance to the nearest surface of that shape. It provides a compact and efficient way to represent and perform calculations on volumetric data, facilitating tasks such as rendering, collision detection, and shape manipulation in computer graphics.
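A minimal example: the SDF of a sphere is simply the distance to its center minus the radius (negative inside, zero on the surface, positive outside). Names are hypothetical:

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface, positive outside."""
    return math.dist(p, center) - radius

# A point 2 units from the center of a unit sphere is 1 unit outside it.
```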
|Shader Language: Specialized programming language used for programming shaders.
|Shader Model: Specification that defines the capabilities and limitations of shaders on a GPU, including the number of operations they can perform and the types of data they can handle.
|Shader: A program executed on the GPU that performs rendering calculations such as lighting, color, and special effects on an object or scene. Common types include vertex shaders, pixel (fragment) shaders, and geometry shaders.
|Shadow Mapping: Rendering technique used to calculate and render shadows in a 3D scene.
|Signed Distance Field: Representation of a field that stores the closest distance to a surface, often used for efficiently rendering complex shapes and text.
|Skybox: Technique used to create sky or environment backgrounds in 3D, where textures are mapped to the inside of a large cube surrounding the scene to give the illusion of a distant landscape.
|Slerp (Spherical Linear Interpolation): Technique used to interpolate between two points on a sphere, often used for smooth rotations.
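A standard quaternion slerp sketch, assuming quaternions are unit-length (w, x, y, z) tuples (names hypothetical):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                      # negate one input to take the short arc
        q1, dot = tuple(-x for x in q1), -dot
    if dot > 0.9995:                   # nearly parallel: fall back to lerp
        out = tuple(a + (b - a) * t for a, b in zip(q0, q1))
        norm = math.sqrt(sum(x * x for x in out))
        return tuple(x / norm for x in out)
    theta = math.acos(dot)             # angle between the quaternions
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(a * s0 + b * s1 for a, b in zip(q0, q1))

# Halfway between the identity and a 90° rotation is a 45° rotation.
```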
|Soft Body Dynamics: Simulation of objects that can deform and change shape.
|Specular Highlight: It is the bright spot seen on objects when light is directly reflected into the viewer’s line of sight.
|Specular Map: Type of texture map that determines the reflectivity and color of specular highlights.
|Specularity: Property that determines how shiny and reflective a surface is.
|Spherical Harmonics: Series of mathematical functions used to approximate functions over the surface of a sphere, used for lighting and shading.
|Spot Light: Light that emanates from a point in a specific direction, with a cone of influence.
|Sprite: 2D image or animation integrated into a larger 3D scene.
|State Machine: A design model that allows an object to change its behavior based on its current internal state. States can represent character actions, game modes, AI behaviors, and more.
|Stencil Buffer: A per-pixel buffer that allows control over which pixels may be written to the color buffer, letting artists control rendering pixel by pixel.
|STEP (Standard for the Exchange of Product model data): A widely used format for exchanging CAD data and product information in various industries.
|Stereoscopy: Technique for creating or enhancing the illusion of depth in an image by presenting slightly offset separate images for the left and right eyes.
|STL (Stereolithography, sometimes expanded as Standard Tessellation Language): A file format widely used in 3D printing and rapid prototyping that stores the surface geometry of a 3D object as a mesh of triangles, without color, texture, or other attributes.
|Stylized: Stylized refers to the representation of visual elements in a way that is deliberately not realistic, often to achieve a distinctive, creative, or artistic effect.
|Subdivision Surface: It is a 3D modeling technique used to smooth surfaces by subdividing the polygons of an object into smaller polygons.
|Subpixel Rendering: Technique that increases the perceived resolution of a display by taking into account the individual red, green, and blue subpixel components of each pixel.
|Subsurface Scattering (SSS): Light effect that occurs when light penetrates the surface of a translucent material, scatters beneath the surface, and then exits at a different location.
|Supersampling: Anti-aliasing technique that renders a scene at a higher resolution and then downsamples it to the desired output resolution.
|Surface Modeling: The process of creating the geometry of a 3D object by defining its outer surfaces, for example with polygons or NURBS patches, rather than as a solid volume.
|Tangent Space: Coordinate system local to the surface of an object, often used for normal mapping and other texture effects.
|Temporal Anti-aliasing (TAA): Anti-aliasing technique that reduces flickering or blurring in motion by accumulating samples from multiple frames.
|Terrain: 3D surface generated to simulate real-world terrains such as mountains, hills, plains, etc.
|Tessellation: It is the process of subdividing a polygonal surface into smaller shapes, allowing for a higher level of detail.
|Texture Atlas: A large image that contains many smaller textures, used to improve rendering efficiency by reducing the number of texture switches.
|Texture Baking: Process of precalculating texture information such as lighting and shadows into a texture that can then be used for faster rendering.
|Texture Compression: Reduction of texture size to save memory and bandwidth.
|Texture Filtering: Set of techniques used to determine the color of a pixel when mapping a texture, which may involve sampling one or multiple texels; common methods include point, bilinear, trilinear, and anisotropic filtering.
|Texture Wrap Mode: Determines how a texture is sampled when texture coordinates are outside the range of 0 to 1.
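The two most common modes can be sketched as (names hypothetical):

```python
def wrap_coord(u, mode):
    """Map a texture coordinate outside [0, 1] back into range.
    'repeat' tiles the texture; 'clamp' clamps to the edge texel."""
    if mode == "repeat":
        return u % 1.0
    if mode == "clamp":
        return min(max(u, 0.0), 1.0)
    raise ValueError(f"unknown wrap mode: {mode}")

# wrap_coord(1.25, "repeat") -> 0.25; wrap_coord(1.25, "clamp") -> 1.0
```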
|Texture: In CGI, a texture is a digital image that is “wrapped” around a 3D object to give it color and details.
|Transform and Lighting (T&L): Graphics processing stage that transforms vertices and then calculates their lighting.
|Trailmeshes / Swoosh / Soulercoaster: A 3D graphics technique used to create a trailing effect behind moving objects, done by animating UV coordinates along a mesh that follows the object, producing a continuous, fluid visual trail. It conveys a sense of speed and direction and is commonly used in video game development and visual effects.
|Transformation Matrix: Matrix used to perform transformations on 3D objects, such as translations, rotations, and scaling.
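A minimal sketch of applying a 4x4 homogeneous translation matrix to a point (names hypothetical):

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

# Translating the point (1, 2, 3) by (10, 0, 0); the trailing 1 marks
# a position (w = 1) rather than a direction (w = 0):
# mat_vec(translation(10, 0, 0), (1, 2, 3, 1)) -> (11, 2, 3, 1)
```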
|Trilinear Filtering: Improved version of bilinear filtering that also interpolates between mipmaps.
|Triple Buffering: Improvement over double buffering that can offer higher performance by allowing the CPU to start working on another frame as soon as it finishes with the current one.
|USD (Universal Scene Description): A format for representing and exchanging complex 3D scenes and assets, developed by Pixar Animation Studios.
|Utah Teapot (Curiosity): A famous 3D model created in 1975 by Martin Newell at the University of Utah as a simple yet versatile object for testing rendering algorithms. Its curved body, spout, handle, and lid make it well suited to showcasing techniques such as shading, reflections, and refractions. Through widespread use as a test object, it has become an iconic symbol of computer graphics, often appearing as a reference point in research papers, tutorials, and software demonstrations.
|UV Mapping: The process of projecting a 2D texture onto the surface of a 3D object. The “U” and “V” are the two coordinates of the texture space.
|UV Unwrapping: Process of “unfolding” a 3D object onto a 2D surface to allow for UV mapping.
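A small illustrative sketch of what UV coordinates do: looking up a texel in a texture from (u, v) coordinates in [0, 1]. The tiny 2x2 texture and the nearest-neighbour convention here are assumptions for the example; real APIs differ in orientation and filtering.

```python
import numpy as np

# A 2x2 RGB texture: red, green / blue, yellow.
texture = np.array([
    [[255, 0, 0], [0, 255, 0]],
    [[0, 0, 255], [255, 255, 0]],
], dtype=np.uint8)

def sample_nearest(tex, u, v):
    """Nearest-neighbour lookup for UV coordinates in [0, 1]."""
    h, w = tex.shape[:2]
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return tex[y, x]

print(sample_nearest(texture, 0.0, 0.0))  # the red texel
print(sample_nearest(texture, 0.9, 0.9))  # the yellow texel
```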
|VDB (Voxelized Distance Field or Voxel Database): A data structure used in computer graphics and visual effects to store volumetric data efficiently in a sparse, grid-like hierarchy of voxels, where each voxel can hold attributes such as density, color, or distance. VDBs allow fast, memory-efficient access to volumetric data, making them well suited to volume rendering, physics-based simulation, and the editing of volumetric objects. They are widely supported in industry-standard software and libraries, notably OpenVDB.
|Vector Graphics: Method of generating images using geometric descriptions.
|Vector: An entity with magnitude and direction, often used to represent positions, directions, and velocities in 3D graphics.
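A minimal sketch of the vector operations used throughout 3D graphics (dot product, cross product, normalization), written out explicitly for 3-component tuples:

```python
import math

def dot(a, b):
    """Dot product: 0 for perpendicular vectors."""
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    """Cross product: a vector perpendicular to both inputs."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    """Scale a vector to unit length, preserving its direction."""
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

x_axis, y_axis = (1, 0, 0), (0, 1, 0)
print(dot(x_axis, y_axis))    # 0: the axes are perpendicular
print(cross(x_axis, y_axis))  # (0, 0, 1): the Z axis
print(normalize((3, 0, 4)))   # unit-length vector (0.6, 0.0, 0.8)
```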
|Vertex Buffer Object (VBO): It is a feature in OpenGL that allows the creation of vertex buffers to store vertices, usually in GPU memory for faster access.
|Vertex: A vertex is a special point in the geometry of a three-dimensional object. In a polygonal model, a vertex is a point of intersection where the edges of the faces (polygons) meet. Vertices have three-dimensional coordinates and contain additional information such as normals and texture coordinates, which help determine the appearance of the model in the graphical rendering.
|Vertex Animation Texture (VAT): A technique used in the video game industry to animate complex objects using textures instead of the traditional skeleton-and-mesh approach. In simple terms, the position of each vertex of a 3D object at every frame of an animation is stored in a texture; at runtime, the object’s shader reads this texture and moves the vertices to the stored positions, creating the illusion of animation. It is useful for very complex objects with many vertices that need to be animated efficiently, or for special effects where traditional animation techniques are unsuitable.
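The baking step behind VAT can be sketched as follows; all names, sizes, and the toy animation here are illustrative assumptions, and a real pipeline would store the data in an actual float texture read by a vertex shader:

```python
import numpy as np

# Bake per-frame vertex positions into a texture-like array:
# rows = frames, columns = vertices, channels = XYZ.
num_frames, num_vertices = 3, 2
vat = np.zeros((num_frames, num_vertices, 3), dtype=np.float32)

for frame in range(num_frames):
    for vertex in range(num_vertices):
        # Toy animation: each vertex rises by 0.5 units per frame.
        vat[frame, vertex] = (vertex, frame * 0.5, 0.0)

def animated_position(frame, vertex):
    """What a VAT shader does per vertex: a texture fetch, not a bone transform."""
    return vat[frame, vertex]

print(animated_position(2, 1))  # vertex 1 at frame 2
```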
|View Frustum Culling: Optimization technique that discards objects lying outside the camera’s viewing volume (frustum), since they do not need to be rendered.
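A hedged sketch of the culling test: the frustum is represented as a set of planes with inward-pointing normals, and an object’s bounding sphere is culled when it lies entirely outside any one plane. The cube-shaped “frustum” here is a deliberate simplification; a real frustum’s six planes come from the camera’s projection matrix.

```python
def sphere_outside_frustum(center, radius, planes):
    """Each plane is (nx, ny, nz, d) with an inward-pointing normal."""
    for nx, ny, nz, d in planes:
        distance = nx*center[0] + ny*center[1] + nz*center[2] + d
        if distance < -radius:
            return True   # completely outside this plane: cull the object
    return False

# Toy "frustum": a cube from -10 to 10 on each axis (six inward-facing planes).
planes = [
    ( 1, 0, 0, 10), (-1, 0, 0, 10),
    ( 0, 1, 0, 10), ( 0,-1, 0, 10),
    ( 0, 0, 1, 10), ( 0, 0,-1, 10),
]
print(sphere_outside_frustum((0, 0, 0), 1, planes))   # False: inside, render it
print(sphere_outside_frustum((50, 0, 0), 1, planes))  # True: culled
```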
|Viewport: The rectangular region of the application window where the 3D scene is rendered. A single screen can contain multiple viewports, each with its own camera and perspective.
|Volume Rendering: Set of techniques used to display a 2D representation of a 3D model that allows visualization of interiors.
|Volumetric Lighting: Rendering technique that simulates the scattering of light in the air, smoke, fog, dust, etc.
|Voxel: A volumetric pixel: the three-dimensional analogue of a pixel, representing a value on a regular grid in 3D space. Voxels are used to represent shapes and volumes, for example in volume rendering and voxel-based games.
|VRML (Virtual Reality Modeling Language): A text-based format for representing 3D interactive scenes and virtual worlds.
|Vsync (Vertical Synchronization): Method to prevent screen tearing by synchronizing the frame rate with the monitor’s refresh rate.
|WFC (Wave Function Collapse): A technique used in computer graphics and procedural generation to create complex patterns and structures from a small sample input. It borrows its name from quantum mechanics: each cell starts in a “superposition” of all possible tiles and is progressively collapsed into a single definite state consistent with its neighbors.
|Wireframe Rendering: Visualization of a 3D scene that shows only the edges of polygons, useful for viewing the underlying structure of a scene.
|Wireframe: It is a visual representation of a three-dimensional object that shows only the edges of the object. It is like seeing the skeleton of a 3D object.
|World Space: The global coordinate system in which all objects in the scene exist.
|X3D (Extensible 3D): An XML-based format for representing and exchanging 3D graphics and multimedia content.
|XVL (eXtensible Virtual world description Language): A format for compact and highly compressed representation of 3D data, often used for technical documentation and visualizations.
|Z-Buffering: It is a technique for managing depth in 3D graphics, used to determine which parts of an object should be displayed if they overlap with other objects.
|Z-fighting: Graphic artifact that occurs when two or more primitives have similar or identical values in the depth buffer, causing visual flickering.
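The depth test behind Z-buffering can be sketched as follows; the buffer sizes and fragment values are illustrative assumptions:

```python
import numpy as np

# Minimal z-buffer: a fragment is written only if it is closer
# than the depth already stored at that pixel.
WIDTH, HEIGHT = 4, 4
color_buffer = np.zeros((HEIGHT, WIDTH), dtype=int)   # 0 = background
depth_buffer = np.full((HEIGHT, WIDTH), np.inf)       # start infinitely far away

def write_fragment(x, y, z, color):
    if z < depth_buffer[y, x]:        # closer than the stored depth?
        depth_buffer[y, x] = z
        color_buffer[y, x] = color

write_fragment(1, 1, z=5.0, color=1)  # far fragment drawn first
write_fragment(1, 1, z=2.0, color=2)  # nearer fragment overwrites it
write_fragment(1, 1, z=9.0, color=3)  # farther fragment is discarded
print(color_buffer[1, 1])             # 2: only the nearest fragment survives
```

Z-fighting arises when two fragments reach this test with nearly identical `z` values, so limited depth precision makes the winner vary from frame to frame.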