Realistic 3D images. Three-dimensional graphics: lighting is not only light, but also shadows

Unlike 2D animation, where much can be drawn by hand, objects in 3D tend to be too smooth, too regular in shape, and to move along overly "geometric" paths. These problems are surmountable, however. Animation packages improve their rendering tools, update special-effects tools, and expand material libraries. To create "uneven" objects such as hair or smoke, objects are built up from large numbers of particles. Inverse kinematics and other animation techniques are being introduced, and new methods of combining video footage with animation effects are emerging, making scenes and movements more realistic. In addition, open-systems technology allows several packages to be used at once: you can create a model in one package, paint it in another, animate it in a third, and composite it with video in a fourth. Finally, the functionality of many professional packages can now be extended with add-on applications written specifically for the base package.

3D Studio and 3D Studio Max

One of the best-known 3D animation packages for the IBM PC is Autodesk's 3D Studio. The program runs under DOS and covers the entire process of creating a three-dimensional film: object modeling and scene composition, animation and rendering, and work with video. In addition, there is a wide range of application programs (IPAS processes) written specifically for 3D Studio. A newer program by the same company, 3D Studio MAX for Windows NT, has been in development over the past few years and aims to compete with the powerful packages for SGI workstations. The new program's interface is the same across all modules and is highly interactive. 3D Studio MAX implements advanced animation-control capabilities, stores the life history of each object, allows a variety of lighting effects to be created, supports 3D accelerators, and has an open architecture, meaning that third parties can extend the system with additional applications.



TrueSpace, Prisms, Three-D, RenderMan, Crystal Topas

Electric Image, Softimage

For creating three-dimensional animation on IBM and Macintosh computers, the Electric Image Animation System package is also convenient: it includes a large set of animation tools, special effects, sound tools, and a font generator with customizable parameters. Although this program has no modeling tools, it can import more than thirty model formats. The package also supports hierarchical objects and inverse-kinematics tools. In turn, Microsoft's Softimage 3D runs on the SGI and Windows NT platforms. It supports polygonal and spline modeling, special effects, particles, and the transfer of motion from live actors to computer characters.

Imagine how a new object will fit in among existing buildings. Examining different versions of a project is very convenient in a three-dimensional model: in particular, you can change the materials and finishes (textures) of project elements, check the illumination of individual areas (depending on the time of day), place various interior elements, and so on.

Unlike a number of CAD systems that rely on additional modules or third-party programs for visualization and animation, MicroStation has built-in tools for creating photorealistic images (BMP, JPG, TIFF, PCX, etc.), as well as for recording animation clips in standard formats (FLI, AVI) and as sequences of frame-by-frame pictures (BMP, JPG, TIFF, etc.).

Creating realistic images

Creating photorealistic images begins with assigning materials (textures) to the various elements of the project. Each texture is applied to all elements of the same color lying in the same layer. Given that the maximum number of layers is 65 thousand and the number of colors is 256, it is safe to assume that an individual material can indeed be assigned to any element of the project.

The program lets you edit any texture and create a new one from a bitmap image (BMP, JPG, TIFF, etc.). Two images can be used for a texture: one responsible for the relief (bump) and the other for the pattern of the material. Both relief and pattern have per-element placement parameters, such as scale, rotation angle, offset, and the way uneven surfaces are filled. In addition, the bump has a "height" parameter (adjustable from 0 to 20), and the pattern in turn has a weight (adjustable from 0 to 1).

In addition to the pattern, a material has the following adjustable parameters: scattering, diffusion, gloss, polish, transparency, reflection, refraction, base color, highlight color, and the ability of the material to cast shadows.

Texture mapping can be previewed on standard 3D solids or on any project element, and several types of element shading can be used. Simple tools for creating and editing textures allow you to get almost any material.

An equally important aspect of creating realistic images is the rendering method. MicroStation supports the following well-known shading methods: hidden-line removal, hidden-line shading, constant shading, smooth shading, Phong shading, ray tracing, radiosity, and particle tracing. During rendering, the image can be anti-aliased, and a stereo image can be produced for viewing through glasses with special light filters.

For the ray tracing, radiosity, and particle tracing methods there are a number of display-quality settings (trading image quality against processing speed). To speed up the processing of graphic information, MicroStation supports graphics acceleration via QuickVision technology. For viewing and editing the images created, there are also built-in modification tools supporting the standard functions (which, of course, cannot compete with those of specialized programs): gamma correction, tone adjustment, negative, blur, color mode, crop, resize, rotate, mirror, and conversion to other data formats.

When creating realistic images, a significant part of the time goes to placing and managing light sources. Light sources are divided into global and local lighting. Global illumination, in turn, consists of ambient light, flashbulb, sunlight, and skylight. For the sun, along with brightness and color, the azimuth and the angle above the horizon are set. These angles can be calculated automatically from the specified geographic location of the object (any point on the globe, indicated on a world map) and from the date and time of viewing. Skylight depends on cloudiness, on the quality (opacity) of the air, and even on reflection from the ground.

Local light sources can be of five types: distant, point, conical (spot), area, and sky opening. Each source can have properties such as color, luminous intensity, brightness, resolution, shadow casting, attenuation over distance, cone angle, and so on.

Light sources can help identify unlit areas of an object where additional lighting is needed.

Cameras are used to view project elements from a specific angle and to move the view freely around the file. Using the keyboard and mouse, nine types of camera movement can be set: fly, turn, descend, glide, avoid, rotate, swim, dolly, and tilt. Four different movement types can be bound to the keyboard and mouse at once (the modes are switched by holding Shift, Ctrl, or Shift+Ctrl).

Cameras allow you to view the object from different angles and look inside. By varying the camera parameters (focal length, lens angle), you can change the perspective of the view.
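The relation between focal length and perspective mentioned above can be sketched numerically. The following Python snippet (a hedged illustration, not code from any CAD package; the 36 mm frame width is an assumed full-frame sensor) computes the horizontal angle of view from the focal length:

```python
import math

def field_of_view(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view in degrees, assuming a 36 mm frame width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A shorter focal length widens the angle and exaggerates perspective:
normal = field_of_view(50.0)   # a "normal" lens, roughly 40 degrees
wide = field_of_view(18.0)     # a wide-angle lens, 90 degrees
```

Halving the focal length roughly doubles the angle, which is why wide-angle camera settings make near objects loom and distant ones shrink.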

To create more realistic images, it is possible to connect a background image, such as a photograph of an existing landscape.

3D imaging

With the growth of computing power and memory capacity, and with the advent of high-quality graphic terminals and output devices, a large group of algorithms and software solutions was developed for forming on-screen images that represent a three-dimensional scene. The first such solutions were intended for architectural and mechanical design tasks.

When a three-dimensional image (static or dynamic) is formed, its construction is considered within a certain coordinate space called the scene. The scene implies work in a three-dimensional world, which is why the field is called three-dimensional (3-Dimensional, 3D) graphics.

Individual objects are placed in the scene, assembled from geometric solids and patches of complex surfaces (most often the so-called B-splines). To form an image and perform further operations, the surfaces are subdivided into triangles, the minimal flat figures, and from then on are processed precisely as sets of triangles.

At the next stage, the "world" coordinates of the mesh nodes are recalculated, using matrix transformations, into view coordinates, i.e., coordinates that depend on the point from which the scene is observed. The position of the viewpoint is usually called the camera position.

The workspace of the Blender 3D graphics system (example from the site http://www.blender.org)

After the frame ("wire mesh") is formed, shading is performed: the surfaces of objects are given certain properties. The properties of a surface are determined primarily by its light characteristics: luminosity, reflectivity, absorptivity, and scattering power. This set of characteristics makes it possible to define the material whose surface is being modeled (metal, plastic, glass, etc.). Transparent and translucent materials have a number of additional characteristics.

As a rule, the clipping of invisible surfaces is performed during this procedure. There are many methods for this culling, but the most popular has been the Z-buffer, in which an array of numbers denoting "depth" is created: the distance from a point on the screen to the first opaque point. Subsequent surface points are processed only if their depth is smaller, in which case the stored Z value decreases. The precision of this method depends directly on the maximum possible distance of a scene point from the screen, i.e., on the number of bits per point in the buffer.
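The Z-buffer rule just described fits in a few lines. A minimal Python sketch (the tiny frame size and single-character "colors" are purely illustrative):

```python
# Keep, per pixel, the smallest depth seen so far; overwrite the color
# only when a new fragment is closer than everything drawn there before.
WIDTH, HEIGHT = 4, 3
FAR = float("inf")

zbuf = [[FAR] * WIDTH for _ in range(HEIGHT)]     # depth per pixel
frame = [["."] * WIDTH for _ in range(HEIGHT)]    # color per pixel

def plot(x, y, depth, color):
    if depth < zbuf[y][x]:        # closer than anything drawn here so far
        zbuf[y][x] = depth
        frame[y][x] = color

plot(1, 1, 5.0, "A")   # a far surface is drawn first
plot(1, 1, 2.0, "B")   # a nearer surface wins the pixel
plot(1, 1, 9.0, "C")   # an even farther surface is rejected
```

After the three calls the pixel holds "B" with depth 2.0, regardless of the order in which the surfaces arrived; that order-independence is exactly why the method became so popular in hardware.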

Calculation of a realistic image. Performing these operations makes it possible to create so-called solid models of objects, but such an image will not yet be realistic. To form a realistic image, light sources are placed in the scene and the illumination of every point on the visible surfaces is calculated.

To make objects more realistic, a texture is "fitted" onto their surfaces: an image (or a procedure that generates one) that determines the nuances of their appearance. This procedure is called texturing. During texture mapping, stretching and anti-aliasing methods, i.e. filtering, are applied. For example, the anisotropic filtering mentioned in descriptions of video cards takes the direction of the texture distortion into account.
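As an illustration of the simplest kind of texture filtering, here is a hedged sketch of bilinear sampling in Python (the 2x2 grayscale "texture" and the function name are invented for the example; real hardware filters are more elaborate):

```python
def bilinear(tex, u, v):
    """Sample a grayscale texture (list of rows) at fractional (u, v) in [0, 1]."""
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)                      # the four surrounding texels
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0                      # fractional position
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0],
       [1.0, 0.0]]
center = bilinear(tex, 0.5, 0.5)   # midpoint blends all four texels
```

Sampling between texels returns a weighted blend instead of a blocky nearest value, which is what smooths a stretched texture.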

Once all the parameters have been determined, the image-formation procedure must be performed, i.e., the calculation of the color of the points on the screen. This computation is called rendering. When performing it, one must determine the light falling on each point of the model, taking into account that it can be reflected, that the surface may shield other areas from the source, and so on.

Two main methods are used to calculate the illumination. The first is backward ray tracing: the trajectories of those rays that eventually land in the pixels of the screen are computed in reverse. The calculation is carried out separately for each color channel, since light of different spectra behaves differently on different surfaces.

The second method, radiosity, involves computing the integral luminosity of all the patches that fall into the frame and the exchange of light between them.

The resulting image takes into account the specified characteristics of the camera, i.e., of the viewer.

Thus, as a result of a large number of calculations, it becomes possible to create images that are difficult to distinguish from photographs. To reduce the amount of computation, designers try to reduce the number of objects and, where possible, to replace calculation with photography, for example when forming the background of an image.

Solid model and the final result of model calculation
(example from website http://www.blender.org)

Animation and virtual reality

The next step in the development of 3D realistic graphics technologies was the possibility of its animation - movement and frame-by-frame change of the scene. Initially, only supercomputers could cope with such a volume of calculations, and they were used to create the first three-dimensional animated videos.

Later, hardware designed specifically for computing and forming images was developed: 3D accelerators. This made it possible to perform such image formation, in a simplified form, in real time, which is what modern computer games use. In fact, even ordinary video cards now include such facilities and are a kind of special-purpose mini-computer.

In creating games, shooting films, developing simulators, and in modeling and design tasks, the formation of a realistic image has another significant aspect: modeling not just the movement and change of objects, but their behavior, in accordance with the physical principles of the surrounding world.

This direction, taking into account the use of all kinds of hardware for transmitting the influences of the outside world and increasing the effect of presence, was called virtual reality.

To embody such realism, special methods are created for calculating parameters and transforming objects - changing the transparency of water from its movement, calculating the behavior and appearance of fire, explosions, collisions of objects, etc. Such calculations are quite complex, and a number of methods have been proposed for their implementation in modern programs.

One of them is the writing and use of shaders: procedures that compute the lighting (or the exact position) at key points according to some algorithm. Such processing makes it possible to create effects like a "luminous cloud" or an "explosion", to increase the realism of complex objects, and so on.

Interfaces for working with the “physical” component of image formation have appeared and are being standardized, which makes it possible to increase the speed and accuracy of such calculations, and hence the realism of the created world model.

Three-dimensional graphics is one of the most spectacular and commercially successful developments in information technology, often referred to as one of the main drivers of hardware development. 3D graphics tools are actively used in architecture, mechanical engineering, in scientific work, when shooting movies, in computer games, and in education.

Examples of software products

Maya, 3DStudio, Blender

The topic is very attractive for students of any age and comes up at all stages of a computer science course. Its appeal to students is explained by the large creative component of the practical work, the visible results, and the broad applied focus of the topic. Knowledge and skills in this area are required in almost all branches of human activity.

In elementary school, two types of graphics are considered: raster and vector. The differences between the two are discussed and, from them, the advantages and disadvantages of each. The areas of application of these types of graphics lead naturally to the names of specific software products for processing each type. Materials on the topics of raster graphics, color models, and vector graphics will therefore be in demand mostly in the primary school. In high school the topic is supplemented by the features of scientific graphics and the possibilities of three-dimensional graphics, so the relevant topics become photorealistic images, modeling of the physical world, and the compression and storage of graphic and streaming data.

Most of the time is taken up by practical work on preparing and processing graphic images with raster and vector graphics editors. In elementary school this is usually Adobe Photoshop, CorelDRAW, and/or Macromedia Flash. The difference between studying particular software packages in basic and in high school shows up not so much in the content as in the forms of work. In the basic school these are practical (laboratory) exercises through which students master the software product. In high school the main form of work becomes the individual workshop or project, where the main component is the content of the task, and the software products used to solve it remain only a tool.

Tickets for elementary and high school contain questions related to both the theoretical foundations of computer graphics and the practical skills of processing graphic images. Such parts of the topic as the calculation of the information volume of graphic images and the features of graphics coding are present in the control measuring materials of the unified state exam.

3D modeling and visualization are essential in the production of products or their packaging, as well as in the creation of prototypes of products and the creation of volumetric animation.

Thus, 3D modeling and visualization services are provided when:

  • an assessment of the physical and technical features of the product is needed even before it is created in the original size, material and configuration;
  • it is necessary to create a 3D model of the future interior.

In such cases, you will definitely have to resort to the services of specialists in the field of 3D modeling and visualization.

3D models are an integral part of high-quality presentations and technical documentation, as well as the basis for creating a product prototype. The peculiarity of our company is the ability to carry out the full cycle of work for creating a realistic 3D object, from modeling to prototyping. Since all the work can be carried out as a single package, this significantly reduces the time and cost of finding contractors and drawing up new technical specifications.

When it comes to a product, we will help you to release its trial series and establish further production, small-scale or industrial scale.

Definition of the concepts "3D modeling" and "visualization"

3D graphics, or 3D modeling, is the area of computer graphics that brings together the techniques and tools needed to create three-dimensional objects in virtual space.

By techniques we mean the methods of forming a three-dimensional graphic object: calculating its parameters, drawing its "skeleton" or an undetailed three-dimensional form; extrusion, building up and cutting away parts, and so on.

And by tools, professional 3D modeling programs: first of all SolidWorks, Pro/ENGINEER, and 3ds Max, as well as some other programs for volumetric visualization of objects and space.

Volume rendering is the creation of a two-dimensional raster image from a constructed 3D model. In essence, it is the most realistic possible picture of a three-dimensional graphic object.

Applications of 3D modeling:

  • Advertising and marketing

Three-dimensional graphics are indispensable for presenting a future product. To start production, the object is first drawn and then modeled in 3D. On the basis of the 3D model, rapid prototyping technologies (3D printing, milling, silicone mold casting, etc.) are then used to create a realistic prototype (sample) of the future product.

After rendering (3D visualization), the resulting image can be used in the development of packaging design or in the creation of outdoor advertising, POS materials and exhibition stand design.

  • urban planning

With the help of three-dimensional graphics, the most realistic modeling of urban architecture and landscapes is achieved at minimal cost. Visualization of building architecture and landscape design allows investors and architects to feel the effect of being present in the designed space, which makes it possible to assess the merits of a project objectively and to eliminate its shortcomings.

  • Industry

Modern production cannot be imagined without pre-production modeling of products. With the advent of 3D technologies, manufacturers have been able to significantly save materials and reduce financial costs for engineering design. With 3D modeling, graphic designers create 3D images of parts and objects that can then be used to create molds and object prototypes.

  • Computer games

3D technology has been used in the creation of computer games for more than a decade. In professional programs, experienced specialists manually draw 3D landscapes, character models, animate created 3D objects and characters, and also create concept art (concept designs).

  • Cinema

The entire modern film industry focuses on 3D cinema. For such filming, special cameras are used that can shoot in 3D. In addition, with the help of three-dimensional graphics for the film industry, individual objects and full-fledged landscapes are created.

  • Architecture and interior design

The technology of 3D modeling in architecture has long proved its worth. Today the creation of a three-dimensional model of a building is an indispensable attribute of design. On the basis of the 3D model, a prototype of the building can be created: either one repeating only the general outlines of the building, or a detailed prefabricated model of the future structure.

As for interior design, 3D modeling technology lets the customer see what his home or office space will look like after renovation.

  • Animation

With the help of 3D graphics, you can create an animated character, "make" him move, and also, by designing complex animation scenes, create a full-fledged animated video.

Stages of 3D model development

The development of a 3D model is carried out in several stages:

1. Modeling or creating model geometry

We are talking about creating a three-dimensional geometric model, without taking into account the physical properties of the object. The methods used are:

  • extrusion;
  • modifiers;
  • polygonal modeling;
  • rotation.

2. Texturing an object

The level of realism of the future model directly depends on the choice of materials when creating textures. Professional programs for working with three-dimensional graphics are practically unlimited in their possibilities for creating a realistic picture.

3. Setting up lights and viewpoints

One of the most difficult steps in creating a 3D model. Indeed, the realistic perception of the image depends directly on the choice of the tone of the light and on the brightness, sharpness, and depth of the shadows. In addition, an observation point for the object must be selected. This can be a bird's-eye view, or the space can be scaled to achieve the effect of presence in it by choosing a view of the object from human height.

4. 3D visualization or rendering

The final stage of 3D modeling. It consists in fine-tuning the display settings of the 3D model, that is, adding graphic special effects such as glare, fog, glow, etc. In the case of video rendering, the exact parameters of the 3D animation of characters, details, landscapes, and so on (the timing of color changes, glows, etc.) are determined.

At the same stage, the visualization settings are finalized: the required number of frames per second and the format of the final video are selected (for example, DivX, AVI, Cinepak, Indeo, MPEG-1, MPEG-2, MPEG-4, WMV, etc.). If a two-dimensional raster image is required, its format and resolution are determined, mainly JPEG, TIFF, or RAW.

5. Post-production

The captured images and videos are processed with media editors: Adobe Photoshop, Adobe Premiere Pro (or Final Cut Pro / Sony Vegas), GarageBand, iMovie, Adobe After Effects, Adobe Illustrator, Samplitude, Sound Forge, WaveLab, etc.

The purpose of post-production is to give the media files original visual effects that work on the mind of the potential consumer: to impress, to arouse interest, and to be remembered for a long time.

3D modeling in the foundry

In the foundry industry, 3D modeling is gradually becoming an indispensable technological component of the product creation process. If we are talking about casting into metal molds, then 3D models of such molds are created using 3D modeling technologies, as well as 3D prototyping.

But no less popular today is casting in silicone molds. In this case, 3D modeling and visualization help to create a prototype of the object, on the basis of which a mold will be made of silicone or another material (wood, polyurethane, aluminum, etc.).

3D visualization methods (rendering)

1. Rasterization.

One of the simplest rendering methods. It does not take additional visual effects into account (for example, the color and shadow of an object relative to the viewpoint).

2. Raycasting.

The 3D model is viewed from a certain predetermined point: from human height, from a bird's-eye view, and so on. Rays are sent out from the viewpoint, and they determine the light and shade of the object as it is seen in the usual 2D format.
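The geometric core of raycasting is the ray-surface intersection test. A self-contained Python sketch for the simplest case, a sphere (the scene values are invented for the example; the quadratic solved here follows from substituting the ray equation into the sphere equation):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a unit-length ray to the first sphere hit, or None."""
    oc = tuple(origin[i] - center[i] for i in range(3))
    # |origin + t*direction - center|^2 = radius^2  =>  t^2 + b*t + c = 0
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                     # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0    # the nearer of the two roots
    return t if t > 0 else None

# A ray cast from the viewpoint straight down the z axis hits a unit
# sphere centered 5 units away at distance 4:
hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A raycaster runs such a test once per pixel against every object and shades the nearest hit.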

3. Ray tracing.

In this rendering method, when a ray hits a surface it is split into three components: reflected, shadow, and refracted, and it is these that form the color of the pixel. The realism of the image depends directly on the number of such splits.
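How the three components combine into a pixel color can be sketched as follows (a hedged illustration: the weights `kr` and `kt` and all the sample RGB values are made up, and a real tracer would obtain the reflected and refracted colors by recursive tracing):

```python
def shade(local_rgb, reflected_rgb, refracted_rgb, kr=0.3, kt=0.2):
    """Combine the shadow-tested local term with the colors carried back
    by the reflected and refracted rays, per color channel."""
    return tuple(local + kr * refl + kt * refr
                 for local, refl, refr in zip(local_rgb,
                                              reflected_rgb,
                                              refracted_rgb))

pixel = shade((0.4, 0.2, 0.1),   # local (direct) lighting at the hit point
              (0.5, 0.5, 0.5),   # color returned by the reflected ray
              (0.0, 0.0, 1.0))   # color returned by the refracted ray
```

Each channel is computed independently, matching the per-channel calculation mentioned for backward ray tracing earlier in the text.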

4. Path tracing.

One of the most difficult 3D visualization methods. When using this 3D rendering method, the propagation of light rays is as close as possible to the physical laws of light propagation. This is what ensures the high realism of the final image. It should be noted that this method is resource intensive.

Our company will provide you with a full range of services in the field of 3D modeling and visualization. We have all the technical capabilities to create 3D models of varying complexity. We also have extensive experience in 3d visualization and modeling, which you can see for yourself by examining our portfolio, or our other works not yet presented on the site (on request).

Brand agency KOLORO will provide you with services for the production of a trial series of products or its small-scale production. To do this, our specialists will create the most realistic 3D model of the object you need (packaging, logo, character, 3D sample of any product, mold, etc.), on the basis of which a product prototype will be created. The cost of our work directly depends on the complexity of the 3D modeling object and is discussed on an individual basis.

The construction of realistic images involves both physical and psychological processes. Light, that is, electromagnetic energy, after interacting with the environment enters the eye, where physical and chemical reactions generate the electrical impulses perceived by the brain. Perception is an acquired ability. The human eye is a very complex system: it is almost spherical, about 20 mm in diameter. Experiments have shown that the eye's sensitivity to brightness varies according to a logarithmic law. The limits of sensitivity to brightness are extremely wide, on the order of 10^10, but the eye cannot perceive that entire range at once: it responds to a much smaller range of relative brightness values distributed around the current level of light adaptation.
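The logarithmic response mentioned above can be illustrated numerically (a hedged sketch of a Weber-Fechner-style law; the constants `k` and `i0` are arbitrary illustrative values):

```python
import math

def perceived_brightness(intensity, k=1.0, i0=1.0):
    """Perception grows with the logarithm of physical intensity."""
    return k * math.log10(intensity / i0)

# Physical intensities growing by equal *ratios*...
levels = [1, 10, 100, 1000]
# ...are perceived as equal *steps* of brightness:
steps = [perceived_brightness(i) for i in levels]
```

This is why a thousand-fold range of physical intensity reads to the eye as only a few evenly spaced brightness levels.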

The speed of adaptation to brightness differs across parts of the retina but is nevertheless very high. The eye adjusts to the "average" brightness of the scene being viewed; therefore an area of constant brightness (intensity) appears brighter against a dark background than against a light one. This phenomenon is called simultaneous contrast.

Another property of the eye relevant to computer graphics is that the edges of an area of constant intensity appear brighter, so that areas of constant intensity are perceived as having varying intensity. This phenomenon is called the Mach band effect, after the Austrian physicist Ernst Mach, who discovered it. Mach bands are observed where the slope of the intensity curve changes abruptly: where the intensity curve is concave, the surface seems lighter; where it is convex, darker (Figure 1.1).

Fig. 1.1. Mach band effect: (a) piecewise-linear intensity function, (b) intensity function with a continuous first derivative.

1.1 A simple lighting model.

Light energy incident on a surface can be absorbed, reflected, or transmitted. Part of it is absorbed and converted into heat, and part is reflected or transmitted. An object can be seen only if it reflects or transmits light; if it absorbs all incident light, it is invisible and is called a black body. The amount of absorbed, reflected, or transmitted energy depends on the wavelength of the light. An object illuminated with white light, in which the intensities of all wavelengths are reduced approximately equally, appears gray. If almost all of the light is absorbed, the object appears black; if only a small part is absorbed, white. If only certain wavelengths are absorbed, the spectral distribution of the light leaving the object changes and the object appears colored; its color is determined by the absorbed wavelengths.

The properties of reflected light depend on the structure, direction, and shape of the light source and on the orientation and properties of the surface. Light reflected from an object can be diffuse or specular. Diffuse reflection occurs when light appears to penetrate beneath the surface of an object, be absorbed, and then be re-emitted. The position of the observer does not matter in this case, since diffusely reflected light is scattered uniformly in all directions. Specular reflection, by contrast, comes from the outer surface of the object.

Fig. 1.2. Lambertian diffuse reflection.

Surfaces rendered with a simple Lambertian diffuse-reflection lighting model (Fig. 1.2) look faded and matte. The source is assumed to be a point source, so objects not directly hit by light appear black. The objects of real scenes, however, are also lit by ambient light reflected from the surroundings, for example from the walls of a room. Ambient light corresponds to a distributed source, and since computing such sources is expensive, in computer graphics they are replaced by an ambient-scattering coefficient.
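The combined ambient-plus-Lambertian model can be written down in a few lines. A Python sketch (all coefficient values are illustrative, not from any particular renderer; vectors are assumed to be unit length):

```python
def lambert(normal, light_dir, i_ambient=0.1, k_ambient=1.0,
            i_light=1.0, k_diffuse=0.8):
    """I = Ia*ka + Il*kd*max(0, N.L): ambient term plus Lambertian diffuse."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return i_ambient * k_ambient + i_light * k_diffuse * n_dot_l

lit = lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))    # facing the light
dark = lambert((0.0, 0.0, 1.0), (0.0, 0.0, -1.0))  # facing away: ambient only
```

Note that the surface facing away from the source is not black: the ambient term stands in for all the light bounced off the room.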

Let two objects be given that are identically oriented relative to the source but located at different distances from it. If their intensities are found by this formula, they will be equal; this means that when the objects overlap they cannot be distinguished, even though the intensity of light is inversely proportional to the square of the distance from the source, so the object farther from it should be darker. If the light source is assumed to be at infinity, the diffuse term of the lighting model vanishes. In the case of a perspective transformation of the scene, the distance from the center of projection to the object can serve as the proportionality factor for the diffuse term.

But if the center of projection lies close to the object, then for objects lying at roughly the same distance from the source the difference in intensity is excessively large. Experience shows that greater realism is achieved with linear attenuation; in this case the lighting model looks as in Fig. 1.3.
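The linear-attenuation variant divides the diffuse term by the distance plus a constant. A hedged Python sketch (coefficient values are again illustrative):

```python
def diffuse_attenuated(n_dot_l, distance, i_ambient=0.1,
                       i_light=1.0, k_diffuse=0.8, k_const=1.0):
    """Diffuse term divided by (d + K): the linear falloff that in practice
    looks more natural than strict inverse-square attenuation."""
    return i_ambient + i_light * k_diffuse * max(0.0, n_dot_l) / (distance + k_const)

near = diffuse_attenuated(1.0, distance=1.0)
far = diffuse_attenuated(1.0, distance=9.0)   # same orientation, farther away
```

Unlike the un-attenuated formula, two identically oriented objects at different distances now receive different intensities, so overlapping objects remain distinguishable.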

Fig. 1.3. Specular reflection.

If the observation point is assumed to be at infinity, the scale is set by the object closest to the observation point: the nearest object is illuminated at the full intensity of the source, and more distant objects at reduced intensity. For colored surfaces, the lighting model is applied to each of the three primary colors.

Specular reflection produces highlights on shiny objects. Because specularly reflected light is concentrated along the reflection vector, the highlight moves as the observer moves. Moreover, since the light is reflected from the outer surface (except in metals and some solid pigments), the reflected ray retains the color of the incident light: a shiny blue surface illuminated with white light shows white, not blue, highlights.
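The behavior of such highlights is captured by the Phong specular term, sketched below under the usual convention that all vectors point away from the surface; the names are illustrative:

```python
import math

def phong_specular(normal, light_dir, view_dir, i_light, k_specular, shininess):
    """Phong specular term: I = Il * ks * cos(alpha)**n, where alpha is the
    angle between the reflection vector R = 2(N.L)N - L and the view vector.
    A large exponent n gives a small, sharp highlight; a small n a broad,
    dull one. Moving the viewer changes alpha, so the highlight moves too."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def norm(v):
        m = math.sqrt(dot(v, v))
        return tuple(x / m for x in v)
    n, l, v = norm(normal), norm(light_dir), norm(view_dir)
    nl = dot(n, l)
    r = tuple(2 * nl * ni - li for ni, li in zip(n, l))   # reflection vector
    cos_alpha = max(0.0, dot(r, v))
    return i_light * k_specular * cos_alpha ** shininess
```

Looking exactly along the reflection vector yields the full highlight intensity; looking at right angles to it yields none.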

Transparency

The basic lighting models and hidden-line and hidden-surface removal algorithms consider only opaque surfaces and objects. There are, however, transparent objects that transmit light, such as a glass, a vase, a car window, or water. When light passes from one medium to another, for example from air to water, the ray is refracted, which is why a stick poking out of the water appears bent. Refraction is computed from Snell's law, which states that the incident and refracted rays lie in the same plane and that the angles of incidence and refraction are related by the formula n1 sin θ1 = n2 sin θ2, where n1 and n2 are the refractive indices of the two media.
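A small sketch of that relation, with illustrative names:

```python
import math

def refraction_angle(theta_incident_deg, n1, n2):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the refraction angle in degrees, or None when sin(theta2)
    would exceed 1, i.e. total internal reflection occurs and no
    refracted ray exists."""
    s = n1 / n2 * math.sin(math.radians(theta_incident_deg))
    if s > 1.0:
        return None
    return math.degrees(math.asin(s))
```

A ray entering water (n ≈ 1.33) from air at 30° bends toward the normal, to about 22°; a ray inside glass hitting the surface at a grazing angle is totally internally reflected.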

No substance transmits all of the incident light; some of it is always reflected, as shown in Fig. 1.4.

Fig.1.4. Geometry of refraction.

Like reflection, transmission can be specular (directional) or diffuse. Directional transmission is characteristic of transparent substances such as glass: an object viewed through such a substance is undistorted except along the contour lines of curved surfaces. If light is scattered as it passes through a substance, we have diffuse transmission; such substances appear translucent or frosted, and an object viewed through them looks blurred or distorted.

Shadows

If the observer and the light source are at the same position, no shadows are visible, but they appear as soon as the observer moves to any other point. An image with shadows looks much more realistic, and shadows are also important in modeling: an area of particular interest may be invisible simply because it falls into shadow. In applied areas - construction, spacecraft design and so on - shadows affect the calculation of incident solar energy, heating and air conditioning.

Observation shows that a shadow consists of two parts: the full shadow and the penumbra. The full shadow is the central, dark, sharply outlined part; the penumbra is the lighter region surrounding it. Computer graphics usually considers point sources, which create only a full shadow. Distributed sources of finite size create both a full shadow, where no light arrives at all, and a penumbra, which is lit by part of the distributed source. Because of the high computational cost, as a rule only the full shadow cast by a point source is considered. The complexity, and hence the cost, of the computation also depends on the position of the source. It is easiest when the source is at infinity and shadows are found by orthogonal projection. It is harder when the source is at a finite distance but outside the field of view, where a perspective projection is needed. The hardest case is a source inside the field of view: space must then be divided into sectors and shadows sought separately in each.

To build shadows one must, in essence, remove hidden surfaces twice: once for the position of each light source and once for the position of the observer, i.e. it is a two-step process. Consider the scene in Fig. 1.5. One source is at infinity, above, in front and to the left of the parallelepiped; the observation point lies in front, above and to the right of the object. Two kinds of shadows arise here: self-shadows and projected shadows. A self-shadow occurs when the object itself prevents light from reaching some of its faces, for example the right side of the parallelepiped. The algorithm for finding self-shadows is the same as the back-face removal algorithm: the faces in self-shadow are exactly the back faces when the observation point is placed at the light source.

Fig. 1.5. Shadows.

If one object prevents light from reaching another, a projected shadow results, for example the shadow on the horizontal plane in Fig. 1.5, b. To find such shadows, projections of all the back faces are cast onto the scene, with the center of projection at the light source. The points where a projected face intersects the other planes form polygons, which are marked as shadow polygons and entered into the data structure. To avoid introducing too many polygons, the outline of each object can be projected instead of its individual faces.

After the shadows are added to the data structure, a view of the scene is built from the given viewpoint as usual. Note that creating different views does not require recalculating the shadows: they depend only on the position of the source, not on the position of the observer.
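For the common special case of a shadow cast on a horizontal ground plane, projecting a vertex from a point source reduces to intersecting one ray with the plane. A minimal sketch, with the plane y = 0 and the names chosen for illustration:

```python
def project_shadow(point, light):
    """Project a vertex onto the ground plane y = 0 from a point light:
    find where the ray from the light through the vertex meets the plane.
    Applying this to every vertex of a face yields its shadow polygon."""
    px, py, pz = point
    lx, ly, lz = light
    t = ly / (ly - py)          # ray parameter where it crosses y = 0
    return (lx + t * (px - lx), 0.0, lz + t * (pz - lz))
```

A vertex halfway between the light and the ground lands twice as far from the point under the light as the vertex itself, which matches the familiar stretching of shadows.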

Development of algorithms

The founders of computer graphics developed a basic concept: form a three-dimensional image from a set of geometric primitives, usually triangles, less often spheres or paraboloids. The primitives are solid, with foreground geometry obscuring background geometry. Then came virtual lighting, which produced flat shaded areas on virtual objects and gave computer images sharp contours and a somewhat artificial look.

Henri Gouraud suggested interpolating the shading between a polygon's vertices to obtain a smoother image. This form of smooth shading requires minimal computation and is used by most graphics cards today, but at the time of its invention in 1971, computers could render only the simplest scenes this way.
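In one dimension the idea looks like the sketch below: intensity is computed only at the ends of a scanline span and linearly interpolated at the pixels between them (the names are illustrative):

```python
def gouraud_scanline(i_left, i_right, width):
    """Gouraud shading along one scanline: intensities are known only at
    the two ends (computed at the polygon's vertices) and are linearly
    interpolated across the pixels in between, hiding the facet edges."""
    if width == 1:
        return [i_left]
    step = (i_right - i_left) / (width - 1)
    return [i_left + step * i for i in range(width)]
```

The per-pixel cost is one addition, which is why the technique remained practical even on very modest hardware.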

In 1974, Ed Catmull introduced the concept of the Z-buffer, in which an image consists of horizontal (X) and vertical (Y) elements, each of which also has a depth. This accelerated the removal of hidden surfaces, and the method is now standard in 3D accelerators. Another of Catmull's inventions was wrapping a 2D image around 3D geometry. Projecting a texture onto a surface is the primary way of giving a 3D object a realistic look. Originally objects were uniformly painted a single color, so creating a brick wall, for instance, required modeling every brick and the mortar between them individually. Today such a wall can be created by assigning a brick-wall bitmap to a simple rectangular object, which requires minimal computation and computer resources, not to mention far less working time.
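The essence of the Z-buffer fits in a few lines: each pixel remembers the smallest depth drawn so far, so geometry may be submitted in any order. A simplified illustration, not Catmull's original code:

```python
def render_zbuffer(width, height, fragments, far=float("inf")):
    """Z-buffer hidden-surface removal: keep, per pixel, the colour of the
    fragment with the smallest depth seen so far. `fragments` is a list of
    (x, y, z, colour) tuples; the draw order does not matter."""
    depth = [[far] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in fragments:
        if z < depth[y][x]:
            depth[y][x] = z
            color[y][x] = c
    return color
```

Because the depth test is purely local to a pixel, the method parallelizes trivially, which is what makes it so well suited to hardware acceleration.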

Bui Tuong Phong improved on Gouraud's shading by interpolating across the entire surface of a polygon, not only the areas directly adjacent to its edges. Although rendering this way is about a hundred times slower, objects acquire the characteristic "plastic" look of early computer animation. Maya uses two variants of Phong shading.

In 1976, James Blinn combined elements of Phong shading and texture projection to create bump mapping. If a surface can receive Phong shading and have a texture map projected onto it, why not use a grayscale map to perturb the directions of the surface normals and create a relief effect? Lighter shades of gray are perceived as elevations and darker ones as depressions. The object's geometry remains unchanged, as its silhouette reveals.
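A one-dimensional sketch of the idea, with illustrative names: the height map's local slope tilts the normal, and the tilted normal feeds the diffuse term, so the shading suggests relief while the geometry stays flat. The light direction is assumed here to be a 2D vector of unit length:

```python
import math

def bump_shade(heights, light_dir_2d):
    """1D sketch of Blinn's bump mapping: a grayscale height map perturbs
    the surface normal by its local slope (central differences) without
    changing the geometry, and the perturbed normal drives the diffuse
    term. Light-to-dark transitions in the map thus read as relief."""
    def shade(normal):
        nx, ny = normal
        lx, ly = light_dir_2d
        m = math.hypot(nx, ny)
        return max(0.0, (nx * lx + ny * ly) / m)
    out = []
    for i in range(1, len(heights) - 1):
        slope = (heights[i + 1] - heights[i - 1]) / 2.0
        out.append(shade((-slope, 1.0)))   # perturbed normal (-dh/dx, 1)
    return out
```

A flat height map leaves the shading unchanged; any gradient in the map darkens the surface as if it were tilted away from the light.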

Blinn also developed a method of forming reflections with environment maps. He proposed creating a cubic environment by rendering six projections of the scene from the center of an object. The resulting images are then projected back onto the object with fixed coordinates, so the reflected image does not move with the object and the object's surface appears to reflect its surroundings. For the effect to work, the environment must not move quickly during the animation. In 1980, Turner Whitted proposed a new visualization technique called ray tracing: tracking the paths of individual light rays between the light source and the camera lens, taking into account their reflection from objects in the scene and their refraction in transparent media. Although the method demands significant computer resources, the image it produces is very realistic and accurate.

In the early 1980s, when computers became more widely used in various fields of activity, attempts began to apply computer graphics to the entertainment field, including cinema. For this, special hardware and heavy-duty computers were used, but a start was made. By the mid-1980s, SGI began manufacturing high-performance workstations for scientific research and computer graphics.

Alias was founded in Toronto in 1984. The name has two meanings: first, it translates as "pseudonym", because in those days the company's founders had to work part-time; second, the term describes the jagged edges of an image in computer graphics. Initially the company focused on software for modeling and developing complex surfaces. It then created Power Animator, a powerful and expensive product that many manufacturers considered the best available at the time.

Wavefront was founded in Santa Barbara, also in 1984; the name literally means the front of a wave. The company immediately took up 3D visual-effects software, producing graphic intros for the Showtime, Bravo and National Geographic Explorer television programs. The first application created by Wavefront was called Preview. Then, in 1988, Softimage was released and quickly gained popularity in the computer graphics market. All the software and hardware used to create animation in the 1980s was specialized and very expensive, and by the end of the decade only a few thousand people in the world were involved in visual-effects modeling, almost all of them working on Silicon Graphics computers with software from Wavefront, Softimage and the like.

With the advent of personal computers, the number of people creating computer animation began to grow, and 3D imaging software appeared for the IBM PC, Amiga, Macintosh and even Atari. In 1986, AT&T released the first animation package for personal computers, TOPAS. It cost $10,000 and ran on computers with an Intel 286 processor under DOS. Despite the primitive graphics and relatively low computation speed, these machines made it possible to experiment with animation freely. The following year another personal computer based 3D graphics system, Electric Image, appeared for the Apple Macintosh. In 1990, Autodesk began selling 3D Studio, a product created by the Yost Group, an independent team that had developed graphics products for Atari. At only $3,000, 3D Studio was, in the eyes of personal computer users, a worthy competitor to TOPAS. A year later NewTek's Video Toaster arrived, together with the easy-to-use LightWave software; both required Amiga computers. These programs were in great demand and sold thousands of copies, and by the beginning of the 1990s the creation of computer animation had become available to a wide range of users, all of whom could experiment with animation and ray-tracing effects. Today, Steven Coy's Vivid, a program that reproduces ray-tracing effects, can be downloaded for free, as can the Persistence of Vision Raytracer, better known as POV-Ray, which gives children and novice users a fine introduction to the basics of computer graphics.

Films with stunning special effects demonstrate a new stage in the development of computer graphics and visualization. Unfortunately, most users believe that creating impressive animations depends entirely on the power of the computer. This misconception still exists today.

As the market for 3D applications grew and competition increased, many companies consolidated their technologies. In 1993, Wavefront merged with Thomson Digital Image, whose products used NURBS curve modeling and interactive rendering; these features later formed the basis of Maya's interactive photorealistic rendering. In 1994, Microsoft bought Softimage and released a version of the product for Windows NT on Pentium computers, an event that can be considered the beginning of the era of inexpensive 3D graphics programs accessible to the average personal computer user. In response, SGI bought and merged Alias and Wavefront in 1995 to prevent a decline of interest in applications that ran exclusively on its dedicated computers. Almost immediately the new company, Alias|Wavefront, began combining the technologies at its disposal to create an entirely new program. Maya was finally released in 1998, costing between $15,000 and $30,000, for the IRIX operating system on SGI workstations. Written from scratch, it offered a new way of developing animation, with an open application programming interface (API) and tremendous extensibility. Despite SGI's original intention to remain the exclusive platform for Maya, a version for Windows NT appeared in February 1999; the old pricing scheme was dropped, and Maya's base package came to cost just $7,500. Maya 2 appeared in April of the same year, and Maya 2.5, containing the Paint Effects module, in November. The summer of 2000 brought Maya 3, which added nonlinear animation with the Trax editor. In early 2001, Maya versions for Linux and Macintosh were announced, and Maya 4 for IRIX and Windows NT/2000 began shipping in June.

Maya is a program for creating 3D graphics and animation based on models created by the user in virtual space, illuminated by virtual light sources and viewed through virtual camera lenses. There are two main versions: Maya Complete ($7,500 at the time of writing) and Maya Unlimited ($16,000), which adds some specialized features. Maya runs under Windows NT/2000 as well as Linux, IRIX and even the Macintosh operating system. The program produces photorealistic bitmap images similar to those from a digital camera, although work on any scene begins with empty space. Almost any parameter can be made to change over time, so that rendering a sequence of frames yields an animated scene.

Maya outperforms many of the 3D animation packages currently on the market. It is used to create effects in a large number of films, is applied across the areas listed above, and is considered one of the best animation tools despite being difficult to learn. At the moment Maya's main competitors are LightWave, Softimage XSI and 3ds max, which cost between $2,000 and $7,000. Packages under $1,000 include trueSpace, Inspire 3D, Cinema 4D, Bryce and Animation Master.

Most of these programs run well on personal computers and have versions for various platforms, including the Macintosh. They are rather difficult to compare, but broadly speaking, the more complex the program, the more sophisticated the animation it allows you to create and the easier it is to model complex objects and processes.