[[Image:Z buffer.svg|thumb|Z-buffer data]]
 
In [[computer graphics]], '''z-buffering''', also known as '''depth buffering''', is the management of image depth coordinates in three-dimensional (3-D) graphics, usually done in hardware, sometimes in [[software]]. It is one solution to the [[visibility problem]], which is the problem of deciding which elements of a rendered scene are visible, and which are hidden. The [[painter's algorithm]] is another common solution which, though less efficient, can also handle non-opaque scene elements.
 
When an object is rendered, the depth of a generated [[pixel]] (z coordinate) is stored in a [[Buffer (computer science)|buffer]] (the '''z-buffer''' or '''depth buffer'''). This buffer is usually arranged as a two-dimensional array (x-y) with one element for each screen pixel. If another object of the scene must be rendered at the same pixel, the method compares the two depths and overwrites the current pixel if the new object is closer to the observer. The chosen depth is then saved to the z-buffer, replacing the old one. In the end, the z-buffer allows the method to correctly reproduce the usual depth perception: a close object hides a farther one. This is called '''z-culling'''.
 
The granularity of a z-buffer has a great influence on the scene quality: a [[16-bit]] z-buffer can result in [[Artifact (observational)|artifact]]s (called "[[z-fighting]]") when two objects are very close to each other. A [[24-bit]] or [[32-bit]] z-buffer behaves much better, although the problem cannot be entirely eliminated without additional algorithms. An [[8-bit]] z-buffer is almost never used since it has too little precision.
 
==Uses==
 
The Z-buffer is used in almost all contemporary computers, laptops and mobile phones for rendering 3-D (three-dimensional) graphics, for example in computer games. It is usually implemented in hardware within the graphics integrated circuits (ICs) of these devices, but it is also implemented in software, for example when producing computer-generated special effects for films.
 
Furthermore, Z-buffer data obtained from rendering a surface from a light's point-of-view permits the creation of shadows by the "shadow mapping" technique.
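The comparison at the heart of shadow mapping can be sketched in C as follows; the <code>shadow_depth</code> array, its dimensions and the light-space coordinates are illustrative names, not part of any particular API:

 /* Illustrative sketch: a point is in shadow if it lies farther from the light
    than the depth recorded when the scene was rendered from the light's
    point of view.  A small bias avoids self-shadowing ("shadow acne"). */
 int in_shadow(const float *shadow_depth, int shadow_w, int shadow_h,
               int light_x, int light_y, float light_z, float bias)
 {
     if (light_x < 0 || light_x >= shadow_w || light_y < 0 || light_y >= shadow_h)
         return 0;                          /* outside the shadow map: treat as lit */
     float stored = shadow_depth[light_y * shadow_w + light_x];
     return light_z > stored + bias;        /* farther than the closest occluder    */
 }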
 
==Developments==
 
Even with small enough granularity, quality problems may arise when [[accuracy and precision|precision]] in the z-buffer's distance values is not spread evenly over distance. Nearer values are much more precise (and hence can display closer objects better) than values which are farther away. Generally, this is desirable, but sometimes it will cause artifacts to appear as objects become more distant. A variation on z-buffering which results in more evenly distributed precision is called '''w-buffering''' (see [[#W-buffer|below]]).
 
At the start of a new scene, the z-buffer must be cleared to a defined value, usually 1.0, because this value is the upper limit (on a scale of 0 to 1) of depth, meaning that no object is present at this point through the [[viewing frustum]].
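In a software renderer this clear is a simple loop; hardware APIs expose equivalent calls (for example OpenGL's <code>glClearDepth</code> and <code>glClear</code>). A minimal C sketch, assuming a depth buffer of <code>count</code> floating-point entries:

 /* Reset every depth entry to the farthest value (here 1.0, matching the
    0-to-1 convention above) before rendering a new frame. */
 #include <stddef.h>
 
 void clear_depth_buffer(float *depth_buffer, size_t count)
 {
     for (size_t i = 0; i < count; ++i)
         depth_buffer[i] = 1.0f;   /* no geometry has been drawn at this pixel yet */
 }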
 
The invention of the z-buffer concept is most often attributed to [[Edwin Catmull]], although Wolfgang Straßer also described this idea in his 1974 Ph.D. thesis<sup id="fn_1_back">[[#fn_1|1]]</sup>.
 
On recent PC graphics cards (1999–2005), z-buffer management uses a significant chunk of the available [[computer storage|memory]] [[Bandwidth (computing)|bandwidth]]. Various methods have been employed to reduce the performance cost of z-buffering, such as [[lossless compression]] (computer resources to compress/decompress are cheaper than bandwidth) and ultra-fast hardware z-clear, which makes the "one frame positive, one frame negative" trick (skipping the inter-frame clear altogether by using signed numbers to check depths) obsolete.
 
==Z-culling==
 
In [[rendering (computer graphics)|rendering]], z-culling is early pixel elimination based on depth, a method that provides an increase in performance when rendering of hidden surfaces is costly. It is a direct consequence of z-buffering, where the depth of each pixel candidate is compared to the depth of existing geometry behind which it might be hidden.  
 
When using a z-buffer, a pixel can be culled (discarded) as soon as its depth is known, which makes it possible to skip the entire process of lighting and [[Texture mapping|texturing]] a pixel that would not be [[Visibility (geometry)|visible]] anyway. Also, time-consuming [[pixel shader]]s will generally not be executed for the culled pixels. This makes z-culling a good optimization candidate in situations where [[fillrate]], lighting, texturing or pixel shaders are the main [[Bottleneck (engineering)|bottlenecks]].
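This ordering can be illustrated with a short C sketch in which the depth test runs before the expensive shading step; <code>shade_pixel</code> and the buffer layout are illustrative placeholders rather than any particular API:

 /* Early z-culling in a software rasterizer: hidden fragments are rejected
    by the depth test before any lighting or texturing work is done. */
 #include <stdint.h>
 
 void process_fragment(float *depth_buffer, uint32_t *color_buffer, int width,
                       int x, int y, float z,
                       uint32_t (*shade_pixel)(int x, int y))
 {
     int i = y * width + x;
     if (z >= depth_buffer[i])
         return;                            /* culled: hidden behind existing geometry */
     depth_buffer[i] = z;
     color_buffer[i] = shade_pixel(x, y);   /* shading happens only for visible pixels */
 }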
 
While z-buffering allows the geometry to be unsorted, sorting [[polygon]]s by increasing depth (thus using a reverse [[painter's algorithm]]) allows each screen pixel to be rendered fewer times. This can increase performance in fillrate-limited scenes with large amounts of overdraw, but if not combined with z-buffering it suffers from severe problems such as:
* polygons might occlude one another in a cycle (e.g. : triangle A occludes B, B occludes C, C occludes A), and
* there is no canonical "closest" point on a triangle (e.g.: no matter whether one sorts triangles by their [[centroid]] or closest point or furthest point, one can always find two triangles A and B such that A is "closer" but in reality B should be drawn first).
As such, a reverse painter's algorithm cannot be used as an alternative to Z-culling (without strenuous re-engineering), except as an optimization to Z-culling. For example, an optimization might be to keep polygons sorted according to x/y-location and z-depth to provide bounds, in an effort to quickly determine if two polygons might possibly have an occlusion interaction.
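For example, an engine may sort polygons by their nearest vertex and rely on the z-buffer for correctness, as in the following C sketch; the <code>SortablePolygon</code> type is an illustrative placeholder, and sorting by the nearest vertex is only a heuristic, for the reasons given above:

 /* Approximate front-to-back ordering, used purely as an optimization:
    the z-buffer test during rasterization still guarantees correctness. */
 #include <stdlib.h>
 
 typedef struct {
     float min_z;              /* depth of the polygon's nearest vertex */
     /* ... other polygon data ... */
 } SortablePolygon;
 
 static int by_min_z(const void *a, const void *b)
 {
     float za = ((const SortablePolygon *)a)->min_z;
     float zb = ((const SortablePolygon *)b)->min_z;
     return (za > zb) - (za < zb);
 }
 
 void sort_front_to_back(SortablePolygon *polys, size_t n)
 {
     qsort(polys, n, sizeof *polys, by_min_z);
     /* render polys[0 .. n-1] in this order with z-buffering enabled */
 }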
 
==Algorithm==
 
'''Given:''' A list of polygons {P1, P2, ..., Pn}
<br />
'''Output:''' A COLOR array, which stores the intensity of the visible polygon surface at each pixel.<br />
'''Initialize:'''
          note: z-depth and z-buffer(x,y) are positive
            z-buffer(x,y) = max depth
            COLOR(x,y) = background color
'''Begin:'''
 
      for (each polygon P in the polygon list)
      do {
          for (each pixel (x,y) that intersects P)
          do {
                calculate z-depth of P at (x,y)
                if (z-depth < z-buffer(x,y))
                then {
                      z-buffer(x,y) = z-depth
                      COLOR(x,y) = intensity of P at (x,y)
                     }
            }
        }
  display COLOR array
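The same loop can be written as a self-contained C sketch. The <code>Polygon</code> structure and its callbacks below are illustrative placeholders for whatever representation a real rasterizer would use, not a standard interface:

 /* Minimal C sketch of the z-buffer algorithm above. */
 #include <stdint.h>
 #include <float.h>
 
 typedef struct {
     int min_x, min_y, max_x, max_y;                     /* screen bounding box   */
     const void *data;                                   /* polygon-specific data */
     int      (*covers)(const void *data, int x, int y); /* does P cover (x,y)?   */
     float    (*depth_at)(const void *data, int x, int y);
     uint32_t (*intensity_at)(const void *data, int x, int y);
 } Polygon;
 
 void zbuffer_render(const Polygon *polys, int n,
                     float *zbuf, uint32_t *color,
                     int width, int height, uint32_t background)
 {
     for (int i = 0; i < width * height; ++i) {           /* initialize           */
         zbuf[i]  = FLT_MAX;                              /* "max depth"          */
         color[i] = background;
     }
     for (int p = 0; p < n; ++p) {                        /* for each polygon P   */
         const Polygon *P = &polys[p];
         for (int y = P->min_y; y <= P->max_y; ++y)
             for (int x = P->min_x; x <= P->max_x; ++x) {
                 if (!P->covers(P->data, x, y))
                     continue;                            /* pixel not inside P   */
                 float z = P->depth_at(P->data, x, y);
                 if (z < zbuf[y * width + x]) {           /* closer than stored   */
                     zbuf[y * width + x]  = z;
                     color[y * width + x] = P->intensity_at(P->data, x, y);
                 }
             }
     }
 }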
 
==Mathematics==
 
The range of depth values in camera [[space]] (see [[3D projection]]) to be rendered is often defined between a <math>\mathit{near}</math> and <math>\mathit{far}</math> value of <math>z</math>. After a [[perspective transform]]ation, the new value of <math>z</math>, or <math>z'</math>, is defined by:
 
<math>z'=
\frac{\mathit{far}+\mathit{near}}{\mathit{far}-\mathit{near}} +
\frac{1}{z} \left(\frac{-2 \cdot \mathit{far} \cdot \mathit{near}}{\mathit{far}-\mathit{near}}\right)
</math>
 
After an [[orthographic projection]], the new value of <math>z</math>, or <math>z'</math>, is defined by:
 
<math>z'=
2 \cdot \frac{{z} - \mathit{near}}{\mathit{far}-\mathit{near}} - 1
</math>
 
where <math>z</math> is the old value of <math>z</math> in camera space, and is sometimes called <math>w</math> or <math>w'</math>.  
 
The resulting values of <math>z'</math> are normalized between the values of -1 and 1, where the <math>\mathit{near}</math> [[plane (mathematics)|plane]] is at -1 and the <math>\mathit{far}</math> plane is at 1. Values outside of this range correspond to points which are not in the viewing [[frustum]], and shouldn't be rendered.
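As a worked example with illustrative planes <math>\mathit{near}=1</math> and <math>\mathit{far}=100</math>, the perspective formula gives

<math>z=1 \mapsto z' = \frac{101}{99} - \frac{200}{99} = -1, \qquad
z=2 \mapsto z' = \frac{101}{99} - \frac{100}{99} = \frac{1}{99} \approx 0.01, \qquad
z=100 \mapsto z' = \frac{101}{99} - \frac{2}{99} = 1,</math>

so roughly half of the normalized range is already spent on depths between <math>z=1</math> and <math>z=2</math>; this uneven spacing is quantified below.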
 
===Fixed-point representation===
 
Typically, these values are stored in the z-buffer of the hardware graphics accelerator in [[Fixed-point arithmetic|fixed point]] format. First, they are normalized to the more common range [0,1] by substituting the conversion <math> z'_2 = \frac{\left(z'_1+1\right)}{2}</math> into the previous formula:
 
<math>z'=
\frac{\mathit{far}+\mathit{near}}{2 \cdot \left( \mathit{far}-\mathit{near} \right) } +
\frac{1}{z} \left(\frac{-\mathit{far} \cdot \mathit{near}}{\mathit{far}-\mathit{near}}\right) +
\frac{1}{2}
</math>
 
Second, the above formula is multiplied by <math>S=2^d-1</math>, where <math>d</math> is the depth of the z-buffer (usually 16, 24 or 32 bits), and the result is rounded to an integer:<ref>{{cite web | url = http://www.opengl.org/resources/faq/technical/depthbuffer.htm | title =  Open GL / FAQ 12 - The Depth buffer | author = The OpenGL Organization | accessdate = 2010-11-01}}</ref>
 
<math>z'=f\left(z\right)=\left\lfloor  \left(2^d-1\right) \cdot \left(
\frac{\mathit{far}+\mathit{near}}{2 \cdot \left( \mathit{far}-\mathit{near} \right) } +
\frac{1}{z} \left(\frac{-\mathit{far} \cdot \mathit{near}}{\mathit{far}-\mathit{near}}\right) +
\frac{1}{2} \right) \right\rfloor
</math>
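A direct C transcription of this conversion; the function name is illustrative, and in practice the equivalent step is performed by the graphics hardware:

 /* Convert a camera-space depth z (assumed to lie between near and far) into
    the d-bit fixed-point value stored in the z-buffer, per the formula above. */
 #include <math.h>
 #include <stdint.h>
 
 uint32_t depth_to_fixed(double z, double near_plane, double far_plane, int d)
 {
     double S  = pow(2.0, d) - 1.0;                              /* S = 2^d - 1         */
     double zn = (far_plane + near_plane) / (2.0 * (far_plane - near_plane))
               + (1.0 / z) * (-far_plane * near_plane / (far_plane - near_plane))
               + 0.5;                                            /* normalized to [0,1] */
     return (uint32_t)floor(S * zn);
 }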
 
This conversion formula can be inverted and differentiated in order to calculate the z-buffer resolution (the 'granularity' mentioned earlier). The inverse of the above <math>f\left(z\right)</math> is:
 
<math>z=
\frac{- \mathit{far} \cdot \mathit{near}}{\frac{z'}{S}\left(\mathit{far} - \mathit{near}\right) - \mathit{far}}
=
\frac{- S \cdot \mathit{far} \cdot \mathit{near}}{z'\left(\mathit{far} - \mathit{near}\right) - \mathit{far} \cdot S} </math>
 
where <math>S=2^d-1</math>
 
The z-buffer resolution in terms of camera space is the increment in camera-space depth that results from the smallest possible change (±1) in the integer stored in the z-buffer. Therefore the resolution can be calculated from the derivative of <math>z</math> as a function of <math>z'</math>:
 
<math>\frac{dz}{dz'}=
\frac{S \cdot \mathit{far} \cdot \mathit{near} \cdot \left(\mathit{far} - \mathit{near}\right)}
     {\left( z'\left(\mathit{far} - \mathit{near}\right) - \mathit{far} \cdot S \right)^2}
</math>
 
Expressing this back in camera-space terms, by substituting the above <math>f\left(z\right)</math> for <math>z'</math>:
 
<math>\frac{dz}{dz'}=
\frac{S \cdot \mathit{far} \cdot \mathit{near} \cdot \left(\mathit{far} - \mathit{near}\right)}
     {\left( S \cdot \left(\frac{-\mathit{far} \cdot \mathit{near}}{z} + \mathit{far}\right) - \mathit{far} \cdot S \right)^2}
= \frac{ \left(\mathit{far} - \mathit{near}\right) \cdot z^2 }{ S \cdot \mathit{far} \cdot \mathit{near} }
= \frac{z^2}{S \cdot \mathit{near}} - \frac{z^2}{S \cdot \mathit{far}}
\approx \frac{z^2}{S \cdot \mathit{near}}
</math>
 
This shows that the values of <math>z'</math> are grouped much more densely near the <math>\mathit{near}</math> plane, and much more sparsely farther away, resulting in better precision closer to the camera. The smaller the <math>\mathit{near}/\mathit{far}</math> ratio is, the less precision there is far away&mdash;having the <math>near</math> plane set too closely is a common cause of undesirable rendering artifacts in more distant objects.<ref>{{cite web | url = http://www.codermind.com/articles/Depth-buffer-tutorial.html | title =  Depth buffer - the gritty details | author = Grégory Massal | accessdate = 2008-08-03}}</ref>
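For instance, with a 16-bit buffer (<math>S = 65535</math>) and illustrative planes <math>\mathit{near} = 0.1</math> and <math>\mathit{far} = 1000</math>, the resolution near the far plane is roughly <math>\frac{1000^2}{65535 \cdot 0.1} \approx 153</math> camera-space units, while at <math>z = 1</math> it is about <math>\frac{1}{65535 \cdot 0.1} \approx 0.00015</math> units; moving the near plane out to <math>1.0</math> improves the far-plane resolution tenfold, to roughly 15 units.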
 
To implement a z-buffer, the values of <math>z'</math> are [[Linear interpolation|linearly interpolated]] across screen space between the [[vertex (geometry)|vertices]] of the current [[polygon]], and these intermediate values are generally stored in the z-buffer in [[Fixed-point arithmetic|fixed point]] format.
 
===W-buffer===
 
To implement a w-buffer,{{what|date=December 2011}} the old values of <math>z</math> in camera space, or <math>w</math>, are stored in the buffer, generally in [[floating point]] format. However, these values cannot be linearly interpolated across screen space from the vertices&mdash;they usually have to be [[Inversion|inverted]]{{dn|date=November 2012}}, interpolated, and then inverted again. The resulting values of <math>w</math>, as opposed to <math>z'</math>, are spaced evenly between <math>\mathit{near}</math> and <math>\mathit{far}</math>. There are implementations of the w-buffer that avoid the inversions altogether.  
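The "invert, interpolate, invert" step can be sketched in C for a single depth value interpolated between two vertices; the parameter <code>t</code> and the endpoint depths are illustrative:

 /* Camera-space depth w is not linear in screen space, but 1/w is, so the
    reciprocal is interpolated linearly and then inverted back.
    t in [0,1] is the screen-space interpolation parameter. */
 float interpolate_w(float w0, float w1, float t)
 {
     float inv = (1.0f - t) * (1.0f / w0) + t * (1.0f / w1);  /* linear in 1/w */
     return 1.0f / inv;                                       /* back to w     */
 }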
 
Whether a z-buffer or w-buffer results in a better image depends on the application.
 
==See also==
*[[Edwin Catmull]]
*[[3D computer graphics]]
*[[3D scanner]]
*[[Z-fighting]]
*[[Irregular Z-buffer]]
*[[Z-order]]
*[[Hierarchical Z-buffer]]
*[[A-buffer]]
*[[Depth map]]
*[[Atmospheric perspective]]
 
==References==
{{Reflist}}
 
==External links==
*[http://www.sjbaker.org/steve/omniv/love_your_z_buffer.html Learning to Love your Z-buffer]
*[http://www.sjbaker.org/steve/omniv/alpha_sorting.html Alpha-blending and the Z-buffer]
 
==Notes==
 
<cite id="1">[[#fn_1_back|Note 1:]]</cite> see W.K. Giloi, J.L. Encarnação, W. Straßer. "The Giloi’s School of Computer Graphics". Computer Graphics 35 4:12–16.
 
{{DEFAULTSORT:Z-Buffering}}
[[Category:3D rendering]]
