Texture Mapping.


Description
Texture Mapping. Texture mapping is the process of mapping an image onto a triangle in order to increase the detail of the rendering. This allows us to get fine scale detail without resorting to rendering huge numbers of tiny triangles. The image that gets mapped onto the triangle is called a texture map or texture and is usually a regular color image.
Transcripts
Slide 1

Texture Mapping CSE167: Computer Graphics Instructor: Steve Rotenberg UCSD, Fall 2005

Slide 2

Texture Mapping Texture mapping is the process of mapping an image onto a triangle in order to increase the detail of the rendering. This allows us to get fine scale detail without resorting to rendering huge numbers of tiny triangles. The image that gets mapped onto the triangle is called a texture map or texture and is usually a regular color image.

Slide 3

Texture Map We define our texture map as existing in texture space, which is a normalized 2D space. The lower left corner of the image is coordinate (0,0) and the upper right of the image is coordinate (1,1). The actual texture map might be 512 x 256 pixels, for example, with a 24 bit color stored per pixel.

Slide 4

Texture Coordinates To render a textured triangle, we start by assigning a texture coordinate to each vertex. A texture coordinate is a 2D point [tx ty] in texture space that is the coordinate of the image that will get mapped to a particular vertex.

Slide 5

Texture Mapping [Figure: a triangle with vertices v0, v1, v2 (in any space) and their corresponding texture coordinates t0, t1, t2 in texture space, which runs from (0,0) to (1,1).]

Slide 6

Vertex Class We can extend our concept of a Model to include texture coordinates. We can do this by simply extending the Vertex class:

    class Vertex {
        Vector3 Position;
        Vector3 Color;
        Vector3 Normal;
        Vector2 TexCoord;
    public:
        void Draw() {
            glColor3f(Color.x, Color.y, Color.z);
            glNormal3f(Normal.x, Normal.y, Normal.z);
            glTexCoord2f(TexCoord.x, TexCoord.y);
            glVertex3f(Position.x, Position.y, Position.z); // This must be last
        }
    };

Slide 7

Texture Interpolation The actual texture mapping computations take place at the scan conversion and pixel rendering stages of the graphics pipeline. During scan conversion, as we are looping through the pixels of a triangle, we must interpolate the tx ty texture coordinates in a similar way to how we interpolate the rgb color and z depth values. As with all other interpolated values, we must precompute the slopes of each coordinate as they vary across the image pixels in x and y. Once we have the interpolated texture coordinate, we look up that pixel in the texture map and use it to color the pixel.
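The interpolation described above can be sketched with barycentric weights (the names and struct here are illustrative, not from the lecture; real scan converters compute the weights incrementally, one addition per pixel):

```cpp
#include <cassert>

struct Vec2 { float x, y; };

// Interpolate a per-vertex attribute (here a texture coordinate) at a
// point inside a triangle, given barycentric weights a, b, c summing to 1.
Vec2 interpTexCoord(const Vec2& t0, const Vec2& t1, const Vec2& t2,
                    float a, float b, float c) {
    return { a * t0.x + b * t1.x + c * t2.x,
             a * t0.y + b * t1.y + c * t2.y };
}
```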

Slide 8

Perspective Correction The scan conversion process generally uses linear interpolation to interpolate values across the triangle (colors, z, etc.). If we use linear interpolation to interpolate the texture coordinates, we may run into inaccuracies. This is because a straight line of regularly spaced points in 3D space maps to a straight line of irregularly spaced points in device space (if we are using a perspective transformation). The result is that the texture maps exactly onto the triangle at the vertices, but may warp and stretch within the triangle as the viewing angle changes. This is known as texture swimming and can be quite distracting on large triangles. To fix the problem, we must perform a perspective correct interpolation, or hyperbolic interpolation. This requires interpolating the w coordinate across the triangle and performing a perspective division for each pixel. See page 121 & 128 in the book for more information.

Slide 9

Pixel Rendering Usually, we want to combine texture mapping with lighting. Let's assume that we are doing vertex lighting and using Gouraud shading to interpolate the lit colors across the triangle. As we render each pixel, we compute the interpolated light color and multiply it by the texture color to get our final pixel color. Often, when we are using texture maps, we don't need to use vertex colors, and so they are implicitly set to (1,1,1). The texture map itself usually defines the material color of the object, and it is allowed to vary per pixel rather than only per vertex.

Slide 10

Pixel Rendering Let's consider the scan conversion process once again and look at how the pixel rendering process fits in. Remember that in scan conversion of a triangle, we loop from the top row down to the bottom row in y, and then loop from left to right in x for each row. As we are looping over these pixels, we are incrementing various interpolated values (such as z, r, g, b, tx, and ty). Each of these increments requires only 1 addition per pixel, but perspective correction requires 1 additional division per pixel plus 1 additional multiply per pixel for each perspective corrected value. Before actually writing the pixel, we compare the interpolated z value with the value stored per pixel in the zbuffer. If it is farther than the existing z value, we don't render the pixel and proceed to the next one. If it is closer, we finish rendering the pixel by writing the final color into the framebuffer and the new z value into the zbuffer. If we are doing expensive per-pixel operations (such as Phong interpolation & per-pixel lighting), we can postpone them until after we are sure that the pixel passes the zbuffer comparison. If we are doing a lot of expensive per-pixel rendering, it is therefore faster if we can render closer objects first.

Slide 11

Tiling The image exists from (0,0) to (1,1) in texture space, but that doesn't mean that texture coordinates have to be limited to that range. We can define various tiling or wrapping rules to determine what happens when we go outside of the 0…1 range.

Slide 12

Tiling [Figure: texture space from (0,0) to (1,1), with the image repeating outside that range.]

Slide 13

Clamping [Figure: texture space from (0,0) to (1,1), with the edge texels extended outside that range.]

Slide 14

Mirroring [Figure: texture space from (0,0) to (1,1), with the image reflected outside that range.]

Slide 15

Combinations One can usually set the tiling modes independently in x and y. Some systems support independent tiling modes in x+, x-, y+, and y-.

Slide 16

Texture Space Let's take a closer look at texture space. It's not quite like the normalized image space or device spaces we've seen so far. The image itself ranges from 0.0 to 1.0, independent of the actual pixel resolution. It allows tiling of values <0 and >1. The individual pixels of the texture are called texels. Each texel maps to a uniform sized box in texture space. For example, a 4x4 texture would have pixel centers at 1/8, 3/8, 5/8, and 7/8.

Slide 17

Magnification What happens when we get too close to the textured surface, so that we can see the individual texels up close? This is called magnification. We can define various magnification behaviors, such as: Point sampling, Bilinear sampling, Bicubic sampling.

Slide 18

Magnification With point sampling, each rendered pixel just samples the texture at a single texel, nearest to the texture coordinate. This causes the individual texels to appear as solid colored rectangles. With bilinear sampling, each rendered pixel performs a bilinear blend between the nearest 4 texel centers. This causes the texture to appear smoother when viewed up close; however, when viewed too close, the bilinear nature of the blending can become noticeable. Bicubic sampling is an enhancement to bilinear sampling that actually samples a small 4x4 grid of texels and performs a smoother bicubic blend. This may improve image quality for up close situations, but adds some memory access costs.

Slide 19

Minification Minification is the opposite of magnification and refers to how we handle the texture when we view it from far away. Ideally, we would view textures from such a distance as to cause each texel to map to roughly one pixel. If this were the case, we would never need to worry about magnification and minification. However, this is not a realistic situation, as we typically have cameras moving throughout an environment, constantly getting nearer and farther from different objects. When we view a texture from too close, a single texel maps to many pixels. There are several popular magnification rules to address this. When we are far away, many texels may map to a single pixel. Minification addresses this problem. Also, when we view flat surfaces from a more edge on view, we get situations where texels map to pixels in a very stretched way. This case is often handled similarly to minification, but there are other options.

Slide 20

Minification Why is it necessary to have special handling for minification? What happens if we just take our per-pixel texture coordinate and use it to look up the texture at a single texel? Just like magnification, we can use a simple point sampling rule for minification. However, this can lead to a common texture problem known as shimmering or buzzing, which is a form of aliasing. Consider a detailed texture map with lots of different colors that is only being sampled at a single point per pixel. If some region of this maps to a single pixel, the sample point could end up anywhere in that region. If the region has large color variations, this may cause the pixel to change color significantly even if the triangle only moves a tiny amount. The result of this is a flashing/shimmering/buzzing effect. To fix this problem, we must use some sort of minification technique.

Slide 21

Minification Ideally, we would look at all of the texels that fall within a single pixel and blend them somehow to get our final color. This would be expensive, mainly due to memory access cost, and would get worse the farther we are from the texture. A variety of minification techniques have been proposed over the years (and new ones still show up). One of the most popular techniques is known as mipmapping.

Slide 22

Mipmapping Mipmapping was first published in 1983, although the technique had been in use for a couple of years at the time. It is a reasonable compromise in terms of performance and quality, and is the method of choice for most graphics hardware. In addition to storing the texture image itself, several mipmaps are precomputed and stored. Each mipmap is a downscaled version of the original image, computed with a medium to high quality scaling algorithm. Usually, each mipmap is half the resolution of the previous image in both x and y. For e
