
Talk:Ray tracing (graphics)


Too specific for subject?


This page is perhaps too specific.

  • Ray tracing itself is not concerned with rendering, only ray-object intersections. Shading models (Phong, etc.) are not a part of ray tracing.
  • Most renderers now use a hybrid solution, e.g. a fast scan-line or REYES algorithm to "draw" the visible parts, and ray tracing to determine shadows, reflections, and refractions. They often even allow selection between ray tracing and faster but more primitive methods (e.g. shadow/environment maps) on a per-light/material basis.
  • Ray tracing is even used in various radiosity implementations to determine the contribution of one patch to another. In this capacity it's simply used to determine visibility, much like light rays in a traditional ray tracer.

Thus I suggest you make a distinction between the core of "ray tracing" as an algorithm and the various ways in which it is used to render images. --—Preceding unsigned comment added by Imroy (talkcontribs) 10:37, 26 September 2004

"Ray tracing itself is not concerned with rendering, only ray-object intersections. Shading models (phong, etc) are not a part of ray tracing."
Not sure I agree. Ray tracing is the process of rendering an image by tracing rays through a scene and seeing how the energy propagates. I think you are thinking about ray casting, not ray tracing.
Maybe the article itself would benefit from having sections going into depth on the various issues with ray tracing, e.g. the light transport equation, trajectory splitting (e.g. shadow rays), the Monte Carlo solution to light transport, the various basic sampling methods and their variance and uniformity, and possibly even a section describing quasi-Monte Carlo sampling. --Kolibri 13:17, 19 October 2005 (UTC)[reply]
I agree with Kolibri that the various issues of ray tracing should have their own sections within the article, but I think that particularly large portions (Monte Carlo) should have their own article. Right now, this article needs some formatting; it is not very well organized. --Osmaker 19:21, 20 October 2005 (UTC)[reply]
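For concreteness, here is a minimal sketch of the Monte Carlo idea Kolibri mentions; this is my own illustration (in Python), not anything from the article. The shading integral over the hemisphere is estimated by averaging random samples; with constant incoming radiance L over a diffuse surface the exact answer is albedo * L, which the estimator converges to.

    import math, random

    def mc_diffuse_radiance(albedo, incoming_L, samples=100_000):
        total = 0.0
        for _ in range(samples):
            cos_theta = random.random()  # z component of a uniform hemisphere sample
            # integrand: BRDF (albedo/pi) * L * cos(theta); uniform pdf = 1/(2*pi)
            total += (albedo / math.pi) * incoming_L * cos_theta * 2.0 * math.pi
        return total / samples

    print(mc_diffuse_radiance(0.8, 1.0))  # converges to about 0.8 (= albedo * L)

The same estimate-by-random-sampling pattern, applied recursively along whole light paths, is what the Monte Carlo light transport algorithms mentioned above do.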

Possible linkspam


Regarding potential link spam in the article:

the ffconsultancy.com link in the "external links" section actually refers to a study that primarily deals *not* with raytracing, but rather with a strongly biased programming language comparison. It should presumably be removed, as it is of no real relevance to raytracing. - Thomas Fischbacher --—Preceding unsigned comment added by 86.135.98.42 (talkcontribs) 23:27, 27 October 2005

The link is to a page entirely about ray tracing, comparing equivalent ray tracers written in different languages. The page includes and links to further pages that contain detailed information about the construction of these programs and explanations of how they act as ray tracers. The page also acts as a language comparison specifically in the context of ray tracing. - Jon Harrop -- —Preceding unsigned comment added by 80.229.56.224 (talkcontribs) 22:36, 31 October 2005

More and more links seem to point to commercial applications rather than web pages or documents that people can look at to learn about ray-tracing, for example lately Arauna & Photon Studio. Would everybody agree to remove them? Or separate the links into two categories, one for papers/tutorials and another for any link that points to a product or a company? --83.112.48.158 (talk) 06:34, 23 June 2008 (UTC)[reply]

Agreed; unless it is particularly significant, I think that we should remove these commercial links. I've never heard of Arauna or Photon Studio before, despite being professionally involved in this field; I can't imagine that there could be any sort of consensus for viewing these as significant. I will remove. Cdecoro (talk) 08:15, 23 June 2008 (UTC)[reply]

I don't want to remove any links, but it seems to me that a couple of them are not greatly contributing to the article, namely "A series of tutorials on implementing a ray tracer using C++" and "Tutorial on implementing a ray tracer in PHP". The "chapters" links for the first tutorial are not working, and an example of very basic ray tracing in PHP is really not adding any value to the article. I would like to suggest they get removed. There are tons of web pages of people putting the code of their ray tracer online; only professional links should be kept in this article. jeancolasp (talk) 19:21, 16 November 2014 (UTC)[reply]

Rewrite and formatting


I was researching Ray Tracing and immediately came to Wikipedia for a start to finesse my definition, and realized this definition needed finessing. I reformatted and re-arranged the article today (11/18). It seems a little more linear now. I also agree that the links section needs bolstering, and is a more appropriate place for deeper explorations of the Ray Tracing topic (Monte Carlo, SIGGRAPH papers, etc.). Whoever put in all the stuff I edited seems to have included the most appropriate historical and technical balance for the definition. Thanks! kcdot -- —Preceding unsigned comment added by 12.7.137.83 (talkcontribs) 19:38, 18 November 2005

Ray coherence


Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately.

Coherence is exploited in high performance parallel ray tracing, even if mostly to get better cache behaviour. Such a system typically shoots a set of (hopefully) highly coherent rays together. As they have coherent memory access patterns, this greatly reduces memory bandwidth compared to shooting rays at random. This is done for example in the Ray Processing Unit. --Taw 12:53, 18 December 2005 (UTC)[reply]
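To illustrate what "shooting coherent rays together" looks like in practice, here is a minimal sketch (my own, in Python; the packet traversal is a stand-in, not the Ray Processing Unit's actual algorithm): primary rays are generated in small screen tiles, so rays traced together are nearly parallel and tend to touch the same scene data.

    import numpy as np

    WIDTH, HEIGHT, TILE = 64, 64, 8

    def primary_ray(x, y):
        # Hypothetical pinhole camera with the image plane at z = 1.
        d = np.array([(x + 0.5) / WIDTH - 0.5, (y + 0.5) / HEIGHT - 0.5, 1.0])
        return d / np.linalg.norm(d)

    def trace_packet(dirs):
        # Stand-in for packet traversal: a real tracer would intersect the whole
        # packet against each BVH node, amortizing memory fetches across rays.
        return [0.5 * (d + 1.0) for d in dirs]  # dummy shading from direction

    image = np.zeros((HEIGHT, WIDTH, 3))
    for ty in range(0, HEIGHT, TILE):            # walk the screen in 8x8 tiles
        for tx in range(0, WIDTH, TILE):
            coords = [(x, y) for y in range(ty, ty + TILE)
                      for x in range(tx, tx + TILE)]
            dirs = [primary_ray(x, y) for x, y in coords]
            for (x, y), c in zip(coords, trace_packet(dirs)):
                image[y, x] = c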

Ray casting vs. ray tracing


I think that the differences between ray casting and ray tracing are not made clear in this article. Ray casting can be thought of as a subclass of ray tracing. Both techniques cast rays from the eye into the scene through the pixel to intersect with objects/surfaces in the scene.

The general idea is that each surface is illuminated by lights and other surfaces. So the algorithm traces rays from the surface to other surfaces recursively. This is ray tracing.

Ray casting only considers the initial ray-object intersection. The color / shading of the pixel is dependent on the characteristics of the initial surface that the ray intersected with.

Please note that I am using the terms objects and surfaces interchangeably above. --—Preceding unsigned comment added by 81.97.15.130 (talkcontribs) 00:30, 14 January 2006
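A minimal runnable sketch (my own Python, not from the article) of the distinction drawn above: ray_cast shades from the first intersection only, while ray_trace recurses from the hit point for reflective surfaces.

    import numpy as np

    MAX_DEPTH = 3
    BACKGROUND = np.zeros(3)

    class Sphere:
        def __init__(self, center, radius, color, reflectivity=0.0):
            self.center = np.array(center, float)
            self.color = np.array(color, float)
            self.radius, self.reflectivity = radius, reflectivity

        def intersect(self, origin, d):
            # Nearest positive root of the ray-sphere quadratic (d is unit length).
            v = origin - self.center
            disc = (v @ d) ** 2 - (v @ v - self.radius ** 2)
            if disc < 0:
                return None
            t = -(v @ d) - np.sqrt(disc)
            return t if t > 1e-6 else None

    def nearest_hit(origin, d, scene):
        hits = [(s.intersect(origin, d), s) for s in scene]
        hits = [(t, s) for t, s in hits if t is not None]
        return min(hits, key=lambda h: h[0]) if hits else None

    def ray_cast(origin, d, scene):
        # Ray casting: the colour comes from the first surface hit, and that is all.
        hit = nearest_hit(origin, d, scene)
        return hit[1].color if hit else BACKGROUND

    def ray_trace(origin, d, scene, depth=0):
        # Ray tracing: recurse from the hit point to gather reflected light.
        hit = nearest_hit(origin, d, scene)
        if hit is None or depth >= MAX_DEPTH:
            return BACKGROUND
        t, s = hit
        p = origin + t * d
        n = (p - s.center) / s.radius
        color = (1 - s.reflectivity) * s.color * max(0.0, -(n @ d))
        if s.reflectivity > 0:
            r = d - 2 * (d @ n) * n           # mirror reflection direction
            color += s.reflectivity * ray_trace(p, r, scene, depth + 1)
        return color

    scene = [Sphere((0, 0, 5), 1, (1, 0, 0), reflectivity=0.5),
             Sphere((2, 0, 6), 1, (0, 0, 1))]
    eye, d = np.zeros(3), np.array([0.0, 0.0, 1.0])
    print(ray_cast(eye, d, scene), ray_trace(eye, d, scene))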

Example


Is the example that was just added too simple to be of any use? There is a lot more going on in raytracing than just finding the point where a line intersects a sphere. BTW, please check the math. I think it was dead wrong as originally posted, since the author confused vector and scalar math. I think I've fixed it but a second opinion would be useful.--Srleffler 23:10, 4 February 2006 (UTC)[reply]

Sorry if I confused things - that's my first mathematical contribution here, and I'm not used to all the standards! The maths, however, was correct as it stood if one correctly interpreted dot product vs. multiplication - I've implemented it in a raytracer I wrote myself. Also, I know it was specific, and hence I was doubtful myself as to whether to add it - as I attempted to make clear, it is only an example/taster of one particular algorithm used. "Be bold" is the motto here - I just thought I'd give it a go. If you still dislike it, remove it. I was only trying to provide something of some interest, so if that was not the case, it would be better were it removed. -- Wrayal

No problem. I only asked because I wasn't sure. Nobody else seems to feel strongly about it, so it might as well stay. Yes, by all means be bold in editing! In retrospect, I agree the math was correct except for the omission of the dot sign for the dot product. It confused the heck out of me, though, because there was nothing at the beginning to indicate that the quantities indicated were vectors.--Srleffler 04:14, 16 February 2006 (UTC)[reply]

I think the example is useful in the case of an optics application using a spherical lens. Anyway, I cleaned up the equations a bit. The boldface on the 'd' had been dropped halfway through, so I fixed that. I added a note saying it was vector notation. Also, I put the equation in quadratic form so it's easier to read and understand. Mikiemike 16:32, 8 November 2006 (UTC)[reply]

Does this equation really have more than one solution?

Now this quadratic equation has solution(s):

    t = -(v·d) ± sqrt((v·d)² - (v·v - r²))

(in the notation of the article's example: v = s - c is the vector from the sphere's centre c to the ray origin s, d is the unit ray direction, and r is the sphere's radius).
Unless t (time) can be negative, there will be either one solution or none at all. --Niks 11:34, 2 April 2007 (UTC)[reply]



It might be worth noting that 1) t isn't time, it's just a scalar distance (effectively); 2) you can have two positive solutions (V.d may be negative). Out of interest, should one root be positive, one negative, you would discard the latter, though in this instance you'd be inside the sphere. 131.111.245.246 20:56, 21 April 2007 (UTC)[reply]
This example is just what I needed for a college assignment. Thanks guys/gals! Yes, the equation can have up to two solutions. If you intersect a line with a sphere you might get an "entry" and "exit" point. Yang (talk) 20:33, 4 March 2008 (UTC)[reply]
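A quick numeric check of the two-root case (my own example values in Python, assuming the convention above of a ray s + t*d and a sphere of centre c and radius r):

    import numpy as np

    s = np.array([0.0, 0.0, 0.0])   # ray origin
    d = np.array([0.0, 0.0, 1.0])   # unit ray direction
    c = np.array([0.0, 0.0, 5.0])   # sphere centre
    r = 1.0

    v = s - c
    disc = (v @ d) ** 2 - (v @ v - r * r)
    t_entry = -(v @ d) - np.sqrt(disc)
    t_exit = -(v @ d) + np.sqrt(disc)
    print(t_entry, t_exit)  # 4.0 and 6.0: the "entry" and "exit" points above

Negative roots would correspond to intersections behind the ray origin and are discarded, as noted above.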

Added Simple Example of Algorithm


I have added an image and a short description describing the algorithm in process. It should make the pseudocode easier to understand. --Kolibri 10:57, 27 October 2006 (UTC)[reply]

Shouldn't the article be explicit that the example algorithm is pseudocode? Some people may actually think computer languages look like this. - 15:57, 18 January 2008 (UTC) —Preceding unsigned comment added by 137.99.115.235 (talk)

External trimmage


The section below has been mercilessly removed. The article already has the "See also / Software" subsection of Wikipedia articles about ray-tracing software. The idea of the deletion is simple: if a piece of software is notable, make an article (survive AfD :-) and wikilink it into the above-mentioned section. Otherwise the corresponding external link is spam. Good luck, `'mikka 02:24, 19 May 2007 (UTC)[reply]

Raytracing software



Should be renamed Optical Ray Tracing


This article should be renamed 'optical ray tracing', because ray tracing has, for decades, also been done at RF, and much of the argument here would need to be modified for that. ---Aussienick

I agree. Ray tracing is used in many areas apart from the CG applications mentioned in the article; for example, I came here looking for ray tracing as related to acoustics. --Devnevyn 11:32, 1 November 2007 (UTC)[reply]

I also agree. Ray tracing is a technique used to determine the path a wave will take, given its frequency and wavenumber; as the above authors mention (RF & acoustics) it is applicable to science beyond just computer graphics; for example, I came in search of earth/planetary wave ray tracing. 07:22, 10 February 2008 (UTC) —Preceding unsigned comment added by 131.128.73.5 (talk)

Well, I'm pretty sure that this is the "most common" use. So the best thing to do would probably be to create Ray tracing (disambiguation), link it from the header, and point to the other uses there. Adam McCormick (talk) 13:17, 18 April 2008 (UTC)[reply]

Mention Raytracing is an Embarrassingly Parallel Algo?


Not entirely sure where to put this, but I believe a mention should be made about the fact that the ray tracing algorithm is embarrassingly parallel - an example from that page, "In ray tracing, each pixel may be rendered independently. In computer animation, each frame may be rendered independently."

Anyone concur?

Smerity 09:30, 22 October 2007 (UTC)[reply]
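A small sketch of that property (my own Python illustration; render_pixel is a stand-in for a real per-pixel trace): because no pixel depends on any other, the whole image can be farmed out to a process pool with no synchronization beyond collecting results.

    from concurrent.futures import ProcessPoolExecutor
    import math

    WIDTH, HEIGHT = 320, 240

    def render_pixel(xy):
        x, y = xy
        # Stand-in for tracing a full ray; any pure function of (x, y) works.
        return (x, y, 0.5 + 0.5 * math.sin(0.1 * x) * math.cos(0.1 * y))

    if __name__ == "__main__":
        pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
        with ProcessPoolExecutor() as pool:
            shaded = list(pool.map(render_pixel, pixels, chunksize=WIDTH))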


Possibly misleading figure PathOfRays.svg


The figure PathOfRays.svg and the associated text (below the code sample, beginning with "First, a ray is created at an eyepoint") seem to imply that when a ray intersects a diffuse surface, that ray is reflected onto another diffuse surface, and so on, until it reaches a light source.

While this is correct for reflective, mirror-like surfaces or transparent glassy surfaces, it is not true for *diffuse* surfaces. When a ray intersects a diffuse, non-reflective surface, the recursion stops, and shadow rays are projected only to light sources in order to compute the direct illumination term. The recursion does not continue. This is why ray tracing cannot do indirect illumination, which is precisely what inter-reflections among diffuse surfaces would entail.

Note that while the descriptive text and the figure are incorrect, the pseudo-code excerpt is in fact correct: it specifies "generate ... ray: recurse" only if the surface is either "reflective" or "transparent". If neither of these is true, the surface is diffuse and the recursion stops.

The statement that "For example if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue" implies color blending from one surface to another, which is not true for ray tracing (but is true for radiosity or other general illumination methods).

-- marciot

I've removed the offending text and replied on the user's talk. Adam McCormick (talk) 13:24, 18 April 2008 (UTC)[reply]
That is completely wrong. Global illumination with a ray tracer is possible, and works by recursing upon contact with a diffuse surface. This models the effect of light scattering off of diffuse surfaces, and it works great. Timrb (talk) 08:40, 22 May 2008 (UTC)[reply]
Yes; as the writer of that part, I can confirm that reflection from diffuse surfaces does indeed happen. See Diffuse reflection and Diffuse Shading. --Kolibri (talk) 12:05, 9 January 2010 (UTC)[reply]
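For readers following this exchange, here is Python-flavoured pseudocode (mine; the helper functions are assumed, not defined) of the branch being debated. Classic Whitted-style tracing casts shadow rays at every hit but only spawns secondary rays for reflective or transparent surfaces; a path tracer differs exactly at the diffuse branch, continuing with a randomly sampled bounce, which is what makes the diffuse interreflection described above possible.

    def shade(hit, scene, depth):
        color = direct_light(hit, scene)     # shadow rays toward the lights
        if depth < MAX_DEPTH:
            if hit.surface.reflective:
                color += trace(reflect_ray(hit), scene, depth + 1)
            if hit.surface.transparent:
                color += trace(refract_ray(hit), scene, depth + 1)
            # Whitted: a purely diffuse surface ends the recursion here.
            # Path tracing would instead continue:
            #   color += trace(sample_diffuse_bounce(hit), scene, depth + 1)
        return color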

Suggest separating into two articles


Ray tracing in computer graphics and ray tracing in other applications are similar in concept, but quite different topics. I suggest creating "Ray Tracing" and "Ray Tracing (physics)". Ray tracing in graphics is quite a deep topic, and I don't think it makes sense to group it with discussion about tracing radio waves through the ionosphere, etc.

The former of the two pages would focus on graphics applications, with a "if you are looking for..." thingie at the top pointing to the physics page. A quick google search for "ray trace" shows almost entirely graphics results, so it should probably be the main article. This would give the article more space to talk about some of the different variations and non-physical tricks that ray tracing algorithms use.

The "Ray Tracing (physics)" page would have a similar redirect, and would discuss purely scientific applications of ray tracing, like the part about radio waves & lens design. Ray tracing is also used to trace acoustic signals through the varying density of the ocean, and the physics page would be a good place to put this as well.

It seems like other people on this talk page agree that the two topics are somehow fighting with each other. Thoughts?

Timrb (talk) 09:11, 18 April 2008 (UTC)[reply]

I could agree with that, if physics is the major other use and you have some expertise in the matter feel free to be WP:BOLD and pull some content for physics. I'd be happy to help reorganize and clean up. Adam McCormick (talk) 13:20, 18 April 2008 (UTC)[reply]
The split was a great idea -- thanks for that! —Preceding unsigned comment added by 99.251.254.165 (talk) 18:50, 23 April 2008 (UTC)[reply]

Wording of main image caption


There seems to be some dispute over whether the main image uses diffuse interreflection and area light sources. I can tell you as the creator of the image and programmer of the renderer that it does, and this is documented in the image's description page. If you still don't believe me, I can render it again with global illumination turned off and you can see the difference for yourself. Timrb (talk) 08:40, 22 May 2008 (UTC)[reply]

I for one would like to see examples. One with neither diffuse interreflection nor area light sources, one with just the interreflection and one with just the area light sources. Would you be willing to create those so we can see the difference? It would probably improve the article to show what those things are (both here and in those articles) rather than just stating that they are used. Adam McCormick (talk) 22:52, 29 May 2008 (UTC)[reply]
Very well:
  • Diffuse interreflection: yes. Depth of field: yes. Area light sources: yes. The full image, with all effects turned on.
  • Diffuse interreflection: no. Depth of field: yes. Area light sources: yes. Notice that objects do not reflect light onto each other. Even though the ground is relatively bright, the bottom of the sphere just above it is totally black. In general, the image is darker because all light is absorbed on the first "bounce".
  • Diffuse interreflection: no. Depth of field: no. Area light sources: yes. Depth of field is off. Notice that all objects are in focus, regardless of how close or far away they are. Rendering images out-of-focus requires special handling in a ray tracer (though it is not too difficult).
  • Diffuse interreflection: no. Depth of field: no. Area light sources: no. Area light sources are off. Notice that all shadows are quite sharp. This is because the sun is modeled as a point light source. The effect is subtle compared to the previous image due to foreshortening.
  • Below is a clearer example of area lights versus point lights:
    Alan has suggested adding this stuff to the article; if anyone wants to write it up, feel free. I'm planning on doing a major re-write for this whole article anyway, so I'll pass. Timrb (talk) 09:42, 16 June 2008 (UTC)[reply]
    I second that a major rewrite is definitely needed; that would be quite admirable of you if you have the time! I would strongly suggest that you look at the German Wikipedia page on Raytracing (linked at the top). The article is very well put together, and I think it correctly focuses more on the ray-tracing aspect as opposed to the more general physically-based rendering (which of course relies nearly exclusively on ray tracing, but is certainly a broader term). Cdecoro (talk) 04:00, 17 June 2008 (UTC)[reply]
    Oh BTW, it's quite clear to me that the main image is displaying diffuse color bleeding and area light sources; perhaps I'm just more sensitive since I look at this stuff all day, but it certainly doesn't require the pictures to justify it. If you are planning a major rewrite, perhaps you can start it on a user page and let us know about it here? I'd be interested in giving feedback as you put it together. Cdecoro (talk) 04:03, 17 June 2008 (UTC)[reply]
    What is really clear is that both this image and the cups and bottles one from Gilles Tran do not quite pertain to an article about pure raytracing as they involve global illumination algorithms as well. They are stunningly good looking no doubt, but pure raytracing should be a lot more flat. 189.27.20.223 (talk) 03:09, 11 October 2008 (UTC)[reply]
    See User:Timrb/Ray tracing for a (heavily annotated) work in progress. Timrb (talk) 03:11, 21 June 2008 (UTC)[reply]

    Page move broke everything


    When this page was moved to "Ray tracing (graphics)" from just plain "Ray tracing", all the links to this page were suddenly broken, and now every article that was supposed to link to an article about ray tracing in graphics now points to a disambig page.

    This Should Not Be, and I think it drives the point home that the (graphics) part of the title is not necessary and just messes things up, as anyone who wrote [[ray tracing]] would reasonably expect it to point to the graphics article. A quick google search for "ray trace" verifies that the vast majority of pages are about ray tracing in graphics. The Wikipedia:Disambiguation page allows for a non-disambiguated title if there is a clear primary topic. If there are no objections, I would like to move this page back to its original title. Timrb (talk) 09:09, 22 May 2008 (UTC)[reply]

    Ray tracing is a common technique in optics, and this is the primary and original use of this technique. Any physicist or optical engineer who writes "ray tracing" will expect it to link to the other article. The problem here is that the person who moved the page and created the disambiguation page should have gone through all of the links to the dab page and corrected them to point to the appropriate article. The correct solution here is not to move the article back, but to fix the links. On the other hand, dab pages are not supposed to have only two entries. Disambiguation with only two articles is handled via redirects and hatnotes. I'll fix it.--Srleffler (talk) 17:24, 29 May 2008 (UTC)[reply]
    OK, done. Variations of Ray tracing link here. Ray tracer and Ray trace link to the physics article. Someone still should go through all of the links and ensure that they point to the correct article.--Srleffler (talk) 17:35, 29 May 2008 (UTC)[reply]
    I don't agree with that; ray tracer is the name given to the vast majority of ray-tracing graphics renderers, and I think that this page is still the most likely thing being referenced when someone wikilinks ray tracer. Adam McCormick (talk) 19:36, 29 May 2008 (UTC)[reply]
    I've gone through all the links to Ray tracing and only about 15 of the 110 or so were used in a physics context, so I've changed them to point to the graphics article directly. I think ray tracer is probably similar in that it is used in more graphics articles than physics articles. In fact, of the six links, none are physics-based, so I'm going to point it here, not there. Adam McCormick (talk) 20:24, 29 May 2008 (UTC)[reply]
    OK. Thanks for checking and updating links.--Srleffler (talk) 22:25, 29 May 2008 (UTC)[reply]
    The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

    The result of the proposal was Do not move. Looks like someone is messing with the target redirect now. Everyone play nice please. —Wknight94 (talk) 17:19, 11 June 2008 (UTC)[reply]

    Requested move


    Ray tracing redirects here. As this is the primary topic, there is no need to add (graphics) to the article name. 199.125.109.107 (talk) 06:01, 30 May 2008 (UTC)[reply]

    • Oppose – The classical and most broadly used meaning is the general meaning that is covered in ray tracing (physics). The use of ray tracing in graphics is just one currently hot application of that general principle, and should not be the "main" interpretation. The fact that ray tracing was recently changed to be a redirect to here has now been undone by me; it is again a redirect to the disambig page. Dicklyon (talk) 06:16, 30 May 2008 (UTC)[reply]
    On the other hand, I hadn't read all the above discussion and the history before commenting. If it's true that this article had the name Ray Tracing before, and it was hijacked without discussion, then I'd be OK with fixing it back to the way it was. I don't, however, buy srleffler's logic that a dab page needs more than two entries; I don't find support for that at WP:MOSDAB. Dicklyon (talk) 06:21, 30 May 2008 (UTC)[reply]
    You're right. The MOS recommends using hatnotes instead of a two-item dab page, but does not forbid the latter. My mistake. The article wasn't hijacked without discussion. See #Suggest separating into two articles. What happened was the physics and graphics content in the original raytracing article were split out into two separate articles. The problem is that the editor who did that did not go through and fix all the links to the page. Most of the physics links were subsequently fixed to point to the physics article, but many graphics-related links still point to pages that now redirect to the dab page. Someone should go through and fix all those links.--Srleffler (talk) 08:13, 11 June 2008 (UTC)[reply]
    That's not quite right. I was the one who separated the two articles, into "ray tracing" and "ray tracing (physics)". I did go through all the links and fix everything up properly. Then penubag came along and made a disambig page, and redirected "ray tracing" toward it, and did so without any discussion. That was the move that broke everything. If you ask me, the hatnotes were working just fine, the disambig page is redundant step, and the (graphics) suffix isn't needed. Timrb (talk) 12:15, 11 June 2008 (UTC)[reply]

    Oppose Ray tracing has a long history in fields other than CG; Gauss was tracing rays in his studies of optics long before computers. The title should stay as it is. Cdecoro (talk) 17:26, 10 June 2008 (UTC)[reply]

    Does anyone know how this move happened, even though it had only opposition here? Dicklyon (talk) 03:25, 11 June 2008 (UTC)[reply]

    Never mind, I see what happened now. It's good. Dicklyon (talk) 03:32, 11 June 2008 (UTC)[reply]

    Oppose. Independent of frequency of use, the physics sense of the term is primary. The graphics technique is an application of physics-based ray tracing.--Srleffler (talk) 08:13, 11 June 2008 (UTC)[reply]

    The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

    Unfocused Article


    This article really doesn't know what its focus is; I agree with one of the first posters: is it about visibility determination along a ray, specifically expressed as ray-object intersection? Or is it about rendering algorithms that use ray tracing as a component, which is basically equivalent to saying ALL physically-based rendering algorithms (even radiosity algorithms generally use ray tracing to compute form factors, and frequently use ray tracing to perform the final image synthesis to pixels)? I suggest that this article be primarily focused on the former (visibility determination by intersecting rays) and that specific articles be made to cover the rendering algorithms that use it. Cdecoro (talk) 17:26, 10 June 2008 (UTC)[reply]

    Cdecoro's section rewrite


    User:Cdecoro says "Replaced incorrect and misleading information on radiosity, photon mapping, and MLT with a (hopefully!) clearer and more informed version." But how can we tell if it's less misleading, clearer, or more informed, if you don't tell us what it's informed by? To encourage the citation of a source or two, I think I'll just revert it for now. Dicklyon (talk) 17:32, 10 June 2008 (UTC)[reply]

    I've now added additional references to the original papers on bidir path tracing and photon mapping; in conjunction with the references by Timrb, this is sufficient. Cdecoro (talk) 21:41, 11 June 2008 (UTC)[reply]

    I converted the first two to proper refs using Template:cite journal and Template:cite web. Take a look at those for what other parameters are available, and do a few more; let me know here if you need help. Dicklyon (talk) 04:50, 12 June 2008 (UTC)[reply]

    Complete re-write coming


    I've got a work in progress here: User:Timrb/Ray tracing. Some comments on structure and content could be helpful. (In particular, I'm now wondering about all that lighting model stuff; perhaps it belongs in an article of its own? On the other hand, it will make formally talking about things like path tracing a lot easier.) Timrb (talk) 12:18, 23 June 2008 (UTC)[reply]

    Hey, I think it's looking really good. I think the light modeling section is very good to have; if we're going to talk about ray tracing as the rendering algorithm, as opposed to merely a visibility determination method, then I think there's no better place than here. I would strongly suggest (for the Rendering equation article also) using ω_o and ω_i instead of w and w' for the exitant and incident directions, respectively. It's now mostly the standard usage, and I think it's a lot clearer, both in indicating that we are talking about a direction (with omega) and in disambiguating the two directions. The illustrations look really good, btw; I just figured I'd mention that sooner or later someone is going to attach one of those "should be SVG" tags, though. Cdecoro (talk) 20:31, 23 June 2008 (UTC)[reply]

    This new version is interesting, and I agree the illustrations are good. However, I feel it is not justified to switch from the existing version to this one; I believe it would have been much better to improve the existing document, which is already rather good. The main difference between the two articles is your section on illumination models, but illumination models are not specific to ray tracing. It would therefore, as suggested, be a much better idea to move that content to a specific article on CG illumination models. As for the article itself: both versions are still not focused enough on the concept of rays and how they can be used in a rendering program. We should include examples such as shooting rays to compute indirect diffuse illumination, shooting rays to simulate area lights, shooting rays to compute BSSRDFs, etc., and have clearer references to topics in which rays can be used (importance sampling, advanced light transport models, etc.). There is no note on 'acne', a typical rendering artifact of ray tracers. A last note: the citation list could be cleaned up, with at least proper citations of Appel's and Whited's papers. I would suggest cleaning the Software list, keeping the renderers known to have helped make this algorithm popular (MRay, POV, lately PRMan) and removing the others (this is not an article on 3D rendering programs). I am unsure why the External Links section differs between the suggested article and the existing one (links can't be left to purely personal choices). Also, the pseudocode is good, but the references to Java could be removed, which would make it truly pseudocode; for example, the code is readable without the public declarations... plus floats could be used instead of doubles (which sticks to what most rendering programs use). I don't want to discourage any attempt at improving this article; on the contrary, it's great that it has some attention. We are just a bit off track here, while a lot can still be done to add more useful information on the subject. ---Mast4as (talk) 07:01, 27 July 2008 (UTC)[reply]


    Calculations are wrong


    It's true, I just checked them myself, 3 times and I know what I'm talking about. I have a PhD in Advanced Mathematics from MIT. 89.123.87.161 (talk)M.Johnson —Preceding undated comment was added at 17:59, 19 January 2009 (UTC).[reply]




    Calculation for beginners


    I might only be in algebra, but that doesn't mean I can't try to make a ray-tracer! The only problem is, I can't find the equations to re-create the reflection of a line off a surface! Could someone maybe edit the page to describe an equation that lets you MAKE reflective objects? Because until then, my ray-tracer can only do diffuse surfaces. —Preceding unsigned comment added by GMEncrypt (talkcontribs) 01:32, 10 July 2009 (UTC)[reply]

    If p is the direction of the incident ray and q is the surface normal (both being unit vectors), then the reflected ray's direction is p - 2*(p·q)*q, if I remember right, where p·q is a dot product. —Tamfang (talk) 03:47, 13 July 2009 (UTC)[reply]
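    A quick numeric check of that formula (my own example vectors, in Python):

        import numpy as np

        def reflect(p, q):
            # p: unit incident direction, q: unit surface normal
            return p - 2 * (p @ q) * q

        p = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)  # incoming at 45 degrees
        q = np.array([0.0, 1.0, 0.0])                # floor normal
        print(reflect(p, q))                         # [0.70710678 0.70710678 0.]

    The formula is indeed right: the component of p along the normal is flipped while the tangential component is kept, which is exactly mirror reflection.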

    More of a question than a comment - is there any research into combining foveated imaging with ray tracing to cut down on the calculation overheads? I am imagining some sort of view-direction detection for animated ray-traced graphics, so that the image rendered is high-resolution only where the viewer is actually looking at that moment. Simon Oliver (nli) 212.125.69.106 (talk) 13:59, 26 January 2010 (UTC)[reply]

    Ray tracing will never be correct on a computer until the virtual camera (or eye) matrix is 1000 times smaller

    I mean glass and water light refraction, and even general computer graphics: if an object is big, like a human at 1 meter distance, then you will not see it like a real 3D person but with some unrealistic 3D effects. Only small objects, like bugs or a mouse viewed from up close, can be rendered correctly, and even a mouse is too big. So for ray tracing (glass, water light refraction) this "BIG EYE"/"BIG VIRTUAL CAMERA" effect will be deadly, and distortion and wrong ray tracing will appear. But since not many of us think about what correct and incorrect ray tracing should look like, many may not notice the difference between correct and wrong ray tracing for big objects (bigger than the size of the eye; bigger than about 1 cm).

    What is the solution? The solution is that all 3D drawing algorithms need to change, because they are all wrong, not only for ray tracing but for all big, nearby objects; but this would require about 2-10 times more computing power.

    In current games the iris size is about 1 square meter.

    The exact solution would be to shrink the virtual camera (in a game or renderer), or to scale up all the objects around it, and then to put a glass lens with correctly chosen refraction, about 1 cm in size, in front of the virtual camera matrix (the virtual eye). This is possible for 3D rendering, as in 3ds Max: just put a correct glass lens on the camera and magnify the objects to be rendered. But in current 3D games, where you can't have light refraction in glass, it is impossible to see correct 3D shapes/objects.

    — Preceding unsigned comment added by Versatranitsonlywaytofly (talkcontribs)

    This could make an interesting addition to the article if it has been noted by a reliable source. GDallimore (Talk) 13:23, 30 January 2011 (UTC)[reply]
    What the heck are you guys talking about? Dicklyon (talk) 22:08, 30 January 2011 (UTC)[reply]
    Is the point here that the real pupil is not an ideal pinhole? Or perhaps that the real retina is not a flat raster? How should correct ray-tracings look different from the present incorrect ray-tracings? —Tamfang (talk) 00:07, 31 January 2011 (UTC)[reply]
    I can provide screenshots to show the problem. If an object is in the center of the screen, its shape is good; but if you move the mouse to the right, the object moves to the left side and becomes horizontally longer (wider). If you move the mouse down, the object goes up and becomes vertically longer (taller) than when it was in the center. And if you move the mouse toward a corner, the object takes on a wrong shape and becomes bigger than in the center. These shape distortions do not depend on how far or close the object is. Either some portion of the vertical and horizontal extent needs to be removed from the rendered image, proportionally to how far the visible lines are from the center, or all 3D algorithms need to change. And if objects are not correctly rendered when far from the screen center, then in ray tracing, where rays must be refracted in glass, there can also be serious distortions, or the rays can all go wrong, because the algorithm is rather crude and has no real lens and iris of a definite size, only a perspective along one axis (say, the x axis into the depth of the scene) with no distance handling along the other axes; a line sloping off to the corner is treated as being at the same distance as a line running from the center into the far distance, and that is why objects end up with distorted shapes.
    Car in center of screen: http://img404.imageshack.us/i/screenshot0001n.png/
    Car at top of screen looks vertically longer: http://img831.imageshack.us/i/screenshot0002.png/
    Car at the bottom of the screen looks taller: http://img225.imageshack.us/i/screenshot0003o.png/
    Car on the left of the screen is wider than in the screen center: http://img21.imageshack.us/i/screenshot0004jpg.jpg/
    File:Screenshot0005jpg.jpg - Car on the right of the screen.
    File:Screenshot0002jpg.jpg - Car at the top of the screen looks taller.
    File:Screenshot0003jpg.jpg - Car at the bottom of the screen looks longer/taller.
    File:Screenshot0006jpg.jpg - Car in the bottom-left corner is bigger than in the center and has a wrong shape.
    File:Screenshot0001jpg.jpg - Car in the center of the screen looks correct.
    File:Screenshot0007jpg.jpg - Car in the top-right corner is bigger than in the center and has a wrong shape.
    File:Screenshot0000jpg.jpg - Man's shape is distorted: in a corner he looks bigger than in the center and has a wrong shape.
    Looks to me like you've got the wrong end of the stick. Firstly, I don't think these are raytraced images, just 3D graphics in general. Secondly, these distortion effects could quite easily be deliberate, caused by a fish-eye-lens-style view to increase the field of view and aid gameplay. I don't see anything fundamentally wrong with either ray tracing or these graphics.
    Ultimately, there is no point continuing this conversation unless you can find a source which says the same thing as you. Saying "look at these images" is not providing a source. This talk page is for improving the article and we can't improve the article without sources. GDallimore (Talk) 22:07, 2 February 2011 (UTC)[reply]
    Yes, it's indeed about sources. But sources can be any site with images (how do you feel about the sources at Progression of the bench press world record?). And the distortions cannot be corrected by changing the angle of view in the console ("~", as in the Counter-Strike game; I don't know about new games). In reality, objects on the left or right side of a filmed or photographed image look smaller than in the center. But in 3D games or 3D rendering it is the other way around: objects on the left and right sides look bigger and wider. If they only looked bigger it would simply be the reverse, but they look wider too, which is not exactly the reverse and is probably even worse. And objects in, say, the bottom-left corner are, I think, not only bigger but also wrongly shaped. — Preceding unsigned comment added by Versatranitsonlywaytofly (talkcontribs) 10:08, 5 February 2011 (UTC)[reply]
    I'm convinced that game graphics are not as true as one might wish, but that's no surprise. Why bring it up here? What realtime game uses ray tracing? —Tamfang (talk) 10:25, 6 February 2011 (UTC)[reply]
    I'm quite sure none of the images presented is ray traced. Apart from that, I never came across this distortion problem in any reliable literature and from the principle of ray tracing, I don't think the virtual camera size matters. Niky cz 17:08, 8 February 2011 (UTC)[reply]
    They're certainly not raytraced, but the underlying mathematical principles are probably the same - since it's all going to be based on well-known techniques for representing 3D scenes on a 2D plane (the screen). I still think it's a deliberate effect to increase field of view for improved gameplay, which naturally results in distortion around the edges and "unreal" effects because the field of view is "unreal".
    I've gone back to my basic books on perspective to check. This is paraphrased from Gills "Perspective: From Basic to Creative" (to keep it brief and to avoid copyright infringement):
    The field of vision (of a person) is over 180 degrees, but it is not possible to see clearly over this whole range, and a cone of 90 degrees is the maximum range that can be seen clearly. Perspective drawing is usually limited to 60 degrees or less. Any object which would not normally be seen clearly because it is lying outside of the cone of vision will be distorted if it is drawn.
    I think it could be an interesting point in game design about finding the balance between an adequate cone of vision for gameplay purposes versus realism. This is why I've asked the editor to find some reliable sources to back up what he's saying, since he appears to have some knowledge of the topic. But, as people do keep on pointing out, it's a discussion that belongs at the much more general levels of 3D graphics or perspective drawing in general and it can be taken there if any sources are found. GDallimore (Talk) 18:33, 8 February 2011 (UTC)[reply]
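    A small numeric sketch (mine, under the standard pinhole/rectilinear projection model that both rasterizers and simple ray tracers share) of the stretching being debated: an object of fixed angular size projects onto the image plane with a footprint that grows roughly as 1/cos²(theta) as it moves off-axis, so it is drawn wider toward the frame edge regardless of rendering algorithm.

        import math

        def projected_width(theta_deg, angular_size_deg=1.0):
            t = math.radians(theta_deg)
            a = math.radians(angular_size_deg)
            # The object's edges subtend theta +/- a/2; project onto a plane at
            # unit distance (x = tan(angle)) and take the difference.
            return math.tan(t + a / 2) - math.tan(t - a / 2)

        for theta in (0, 15, 30, 45):
            print(theta, projected_width(theta) / projected_width(0))
        # ~1.00, 1.07, 1.33, 2.00: the same object is drawn about twice as wide
        # 45 degrees off-axis as at the centre of the frame.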
    Here are some sources:
    http://www.rjdown.co.uk/projects/bfbc2/fovcalculator.php
    http://gaming.stackexchange.com/questions/17917/how-to-change-fov-in-crysis-2-demo

    And by the way, glass refraction is possible (I don't know how correctly) even on a Radeon 9700 card (circa 2004), in the ATI Debevec 9700 demo. But multiple reflections in current games are still impossible.

    http://www.youtube.com/watch?v=n5Lf5FMhvS4&feature=related
    http://www.youtube.com/watch?v=fW_GPCR9_GU
    http://www.youtube.com/watch?v=dTAc2CTQcLw

    (the glass balls, purple and green for instance, are big in the center) — Preceding unsigned comment added by Versatranitsonlywaytofly (talkcontribs) 11:52, 14 March 2011 (UTC)[reply]

    This is a possibly negative effect (there should not be holes in the object when it is refracted through a glass sphere):
    http://img703.imageshack.us/i/atisushi00.jpg/
    http://img820.imageshack.us/i/atisushi01.jpg/
    http://img809.imageshack.us/i/atisushi02.jpg/
    http://img130.imageshack.us/i/atisushi03.jpg/
    http://img717.imageshack.us/i/atisushi04.jpg/
    This is a positive thing:
    http://img859.imageshack.us/i/atisushi05.jpg/
    http://img692.imageshack.us/i/atisushi06.jpg/
    http://img28.imageshack.us/g/atisushi07.jpg/
    The positive thing is that at a bigger distance the refracted object is a different size than when viewed up close. As in real life through eyeglasses: if you hold the glasses close to your eyes or far away, an object at 1-3 meters distance will look bigger or smaller.
    I tried everything to change the angle with "fov" or "fov_default", but it only prints "fov 0" or "default_fov 90" on screen (for Counter-Strike and Half-Life 1), and entering another number does not change anything. It's probably a Valve marketing trick and a lie. It is not possible to change the angle of view or the aspect ratio with the console ("~") in any game. — Preceding unsigned comment added by Versatranitsonlywaytofly (talkcontribs) 17:43, 14 March 2011 (UTC)[reply]
    You're wasting your time linking to all these images. They are not reliable sources which can be used to add anything to the article. Find a computer graphics magazine article or something similar if you want to provide useful contributions to wikipedia. GDallimore (Talk) 19:21, 14 March 2011 (UTC)[reply]
    I suppose that if you sit near a big TV when playing a game, or at a 23" display, then the distance from the eye to the corners is bigger and it compensates; in this case even the positive effect makes objects at the display's sides/corners longer and wider. — Preceding unsigned comment added by Versatranitsonlywaytofly (talkcontribs) 22:04, 14 March 2011 (UTC)[reply]
    http://img51.imageshack.us/i/screenshotallcars4.jpg/
    http://imageupload.org/?d=4D8DE6D31
    All screenshots were taken without quitting the game and without touching the keyboard movement keys, only rotating the mouse. The car in the middle is from the screenshot where it was in the center; the car in the top-right corner is from the screenshot where it was in the top-right corner, and so on. So forget about any possible shape-fixing tricks; a new algorithm is needed. So ray tracing will never be even close to real, undistorted shapes, because ray tracing uses the same algorithms that were used to render this car.
    There is one possible solution to fix everything completely in rendering. It is based on the fact that an image filmed or photographed with a webcam (or any camera with a small photo-matrix and very small glass lenses; a webcam actually has two lenses, and the bigger one is inside, close to the photo-matrix) is not blurred and has a decent resolution (like 640*480); from whatever distance you take a picture with a webcam, the picture is always sharp and correct. So if you scale up all the objects in the scene to be rendered, and put the same kind of 3D glass lenses as in a webcam on top of the camera, the rendering could be as real as a photograph and very realistic, without any distorted shapes.
    Are you listening? Screenshots are useless! Stop wasting your and everyone else's time. GDallimore (Talk) 15:19, 26 March 2011 (UTC)[reply]
    You have got to see this from real life:
    http://img710.imageshack.us/i/questionlensokoradjuste.jpg
    http://www.imageupload.org/?d=4DC2765B1
    It is the same as here http://img51.imageshack.us/i/screenshotallcars4.jpg/ and here http://img807.imageshack.us/i/screenshot0008jpg.jpg/ in the game. But I don't think that in real life objects become bigger at the edges like in a 3D game. What if they became bigger even in real life? Well, I don't believe it. It can also be that the lens was adjusted in this real-life picture to fit more objects into the frame, or maybe a cheap lens was used in the camera the picture was taken with, or all lenses are wrongly shaped, or it is wrong to stack many lenses together (unlike the eye, which has only one lens). But it's unlikely that all lenses are wrongly shaped, because in other photographs everything looks correct to me: at the edges objects are a little bit smaller than in the center of the picture, and there are no shape distortions if the horizon line is in the center. If in 3D games objects at the edges are simply bigger than in the center, then a very big monitor in front of you, viewed from very close, can fix the problem almost exactly (it is just hard to adjust the distance to the monitor and the monitor size). A 30-inch display with the eye 30 cm from it would be a pretty correct solution: if you sit close to a big monitor and look with only one eye, perpendicular to the center of the screen, then theoretically you can adjust the eye-to-monitor distance so that the 3D rendered image, or the 3D game action, is exactly like the real world seen with one eye; you must hold the one eye perpendicular to the monitor center and look toward the screen edges only with the eye muscles, without moving your body. The problem is only choosing the exact distance between your eye and the monitor, and with two eyes it will be only approximately correct. I can explain why 3D computer graphics is not exactly like real life. Say there is a stone 10 meters from you and you are looking straight at it, so the stone is in the center of your vision. Imagine a line through your ears, from one ear (A) to the other (B). Call the line from your eye (point C) to the stone (point D) line CD; CD is perpendicular to AB. Now a third line, EF, passes through the stone parallel to AB (and perpendicular to CD). If you now look, by moving only your eye muscles, at the right (or left) end of line EF, the distance from your eye to that end is greater than the distance from your eye to the stone. So in real life, a stone placed at the end of EF would look smaller than a stone at the intersection of CD and EF. But not in 3D graphics: there, a stone at the end of EF is drawn the same size as a stone at the intersection of EF and CD, and this is wrong. Because of this, 3D game graphics is not exactly real-life 3D.
    In the previous images, the top car and the bottom car were not captured from the same distance (which is why they look the same size as the center car). So here all the cars are captured from the same distance (the game can write all screenshots into the folder "documents/my games/far cry 2/screenshots" automatically, without exiting the game). All cars here were captured without quitting the game and without changing the player position, only moving the mouse: http://imageshack.us/g/29/screenshot0014oh.jpg/ . Here all the cars are cut out of their screenshots to compare their sizes while the player was looking around but standing at the same point: http://imageshack.us/photo/my-images/96/allcarscomparition3.jpg/ . As we can see, cars at the edges are bigger than in the center. They should be the same size when viewed from one point, if the distance from the player to the car is always the same.

    To what rendering algorithm class does ray tracing belong?


    I wonder what the class of rendering algorithms is called that traces the paths of particles, whether from the eye or from a light source, and lets the particles bounce around in the scene (ray tracing and path tracing (what's the difference?), Metropolis light transport, photon mapping, instant radiosity, etc.). "Path tracing" seems to be a suitable name for such a family, since all of those algorithms basically trace paths, and the article says it is a generalization of conventional ray tracing; but path tracing refers to a specific algorithm in which paths are traced from the eye, sampling the light input to the eye in Monte Carlo fashion (or so I've heard). Besides, the article talks about the path tracing algorithm. Ray tracing, on the other hand, can refer to Whitted-style ray tracing, which simulates direct illumination from delta (point and directional) light sources and specular reflection and refraction. So what should you say if you want to talk about all these algorithms at once?

    If you leave an answer, please notify me on my talk page. —Kri (talk) 00:58, 14 August 2011 (UTC)[reply]

    What is relationship between this method and Monte Carlo method for photon transport? 142.196.169.88 (talk) 05:19, 29 July 2013 (UTC)[reply]

    example image does not meet WP standards


    Nobody can verify if the example image was in fact produced using the techniques described in the article. We have to trust the author. That's not the way an open encyclopedia works. --84.177.32.89 (talk) 08:24, 19 August 2011 (UTC)[reply]

    Disagree with move.


    I disagree with the move from "Ray tracing (graphics)" to "Ray tracing". As discussed previously, the graphics usage is not unambiguously primary.--Srleffler (talk) 05:18, 9 December 2011 (UTC)[reply]

    I disagree too, and have discussed this unilateral move with him at User_talk:Thumperward#Ray_tracing. Thumperward, I request that you put it back to where it was and make a two-way disambig page. Dicklyon (talk) 05:23, 9 December 2011 (UTC)[reply]
    Actually, I was able to move it back myself. But "Ray tracing (disambiguation)" is not as easy to move back to "Ray tracing" now. I notice there are several articles linking to Ray tracing from when it was a disambig, which is wrong, but they include both meanings; some disambiguating of links is in order. Dicklyon (talk) 05:25, 9 December 2011 (UTC)[reply]
    Having a disambiguation page for two links is actively harmful. I can just about see the rationale behind leaving the root title as a redirect to one of the two topics, though as I've said to Dicklyon I consider this to be a distinctly minority reading of the weight of the "primary importance" section of WP:PRIMARYTOPIC, but a disambiguation page simply means that rather than inconveniencing 50% (say) of readers with an additional mouse click to get to their desired page we inconvenience 100% of readers. Article titles are a means to an end and are not canonical representations of importance. Similarly, dab pages are technical matters of convenience and not arbitrators of equality. Chris Cunningham (user:thumperward) (talk) 07:30, 9 December 2011 (UTC)[reply]
    You appear to be in conflict with the policy you cite, so I suggest you take it up there if you want a change in policy. I have no preference either way, but you've shot yourself in the foot here. GDallimore (Talk) 12:43, 9 December 2011 (UTC)[reply]
    I'm not even sure that comment was supposed to be addressed to me, such is the level of vagueness. Who and what is in conflict with what part of what policy? Chris Cunningham (user:thumperward) (talk) 12:45, 9 December 2011 (UTC)[reply]
    You're the only person citing policies and you're the only person to be misrepresenting them in your comments: "if an ambiguous term is considered to have no primary topic, then that term should lead to a disambiguation page". GDallimore (Talk) 12:54, 9 December 2011 (UTC)[reply]
    I'm not arguing that there's no primary topic. I am arguing that of the two methods given at WP:PRIMARYTOPIC, it is far more useful to optimise for usage and thus readers (who are over six times more likely to visit the graphics page than the physics page) than for alleged importance, and thus nobody. Of the two methods, the former strongly indicates that the graphics article is the primary topic, while the latter has no consensus. We should therefore consider the graphics page to be primary. My comments should be taken in that light. As to the current situation, where the root title redirects to the graphics page, I feel that it may have some diplomatic value in hopefully placating physicists upset with the idea of usage being favoured over importance, though I'd be just as happy with the graphics page being at the root title personally. Chris Cunningham (user:thumperward) (talk) 13:12, 9 December 2011 (UTC)[reply]

    I've added Category:Computer graphics algorithms


    If this is wrong to do, please explain to me why. AFAIK this is one of the most classic graphics algorithms there is... ToolmakerSteve (talk) 06:56, 27 October 2012 (UTC)[reply]

    Well, it's wrongish in that it's already in Category:Global illumination algorithms, which is itself a member of Category:Computer graphics algorithms. —Tamfang (talk) 07:03, 27 October 2012 (UTC)[reply]
    That doesn't make it "wrongish", it makes it plain wrong. Removed. GDallimore (Talk) 16:57, 27 October 2012 (UTC)[reply]

    Disadvantages clarity


    I am not an expert in ray tracing, but even I can tell that there is a problem with the disadvantages section as it is written. I think the problem stems from ambiguity between "ray tracing in the narrow, historical sense of Whitted's ray tracing algorithm" and "ray tracing as a technique, that is used in a wide variety of algorithms". Correct me if I'm wrong, but without the concept of tracing rays, most (performance practical) global illumination algorithms would not exist. Photon mapping & path tracing - couldn't these (oversimplified) be described as casting additional rays (in a particular manner)? Or am I confusing the basic act of "ray casting" with a specific use of ray casting, that is considered "ray tracing"? Is there consensus on the meaning of "ray tracing"? ToolmakerSteve (talk) 07:36, 27 October 2012 (UTC)[reply]

    According to the wiki article on ray casting, I am not confusing the two. Paraphrasing, "ray casting" refers to a technique WITH NO RECURSION (no secondary rays). As soon as there is recursion / secondary rays, a technique is categorized as a "ray tracing" technique. Therefore, these advanced Global Illumination (GI) algorithms that are mentioned, fall into the category of "ray tracing techniques". That is, they are algorithms, one of whose central characteristics is, they involve secondary rays. Therefore, I consider the sentence "Other methods, including photon mapping, are based upon ray tracing for certain parts of the algorithm, yet give far better results." to be inappropriately placed in the disadvantages section. What I mean is, this is *NOT* a "disadvantage of ray-tracing". Indeed, there is no such thing as photon mapping WITHOUT ray-tracing. So I am going to be bold, and instead reword this as a need to EXTEND ray-tracing with additional algorithms. ToolmakerSteve (talk) 07:44, 27 October 2012 (UTC)[reply]
    I am now going to go even further and reference a recent GI technique that is not generally (AFAIK) classified as a ray tracing technique, though it involves ray casting. To me it falls in a grey area, classification-wise. But again, I'm not an expert; perhaps I am over-relying on the radically different implementation approach (parallel projection, rasterization, depth peeling), and underneath, the mathematics SHOULD categorize this as a ray-tracing technique. The bottom line is that this single paper fundamentally altered my personal understanding of how to approach high-performance GI. It challenged my underlying assumptions (as a technical evaluator examining competing vendors and future directions) about the debate of "rasterization techniques versus ray-tracing techniques" as we attempt to have more realistic lighting in computer graphics. The existence of such a technique is, to me, fundamentally important in discussing the pros and cons of rasterization and ray tracing, hence my attempt to insert it into this discussion. http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter38.html ToolmakerSteve (talk) 08:07, 27 October 2012 (UTC)[reply]
    And for anyone who is skeptical that the above paper describes a true GI approach (when I first read it, it seemed "too good to be true"): the two images that together I found compelling are http://www.bee-www.com/parthenon/stairs.png and http://www.bee-www.com/parthenon/ibl_sample2.png and the page containing downloadable code and other images is http://www.bee-www.com/parthenon/ ToolmakerSteve (talk) 08:34, 27 October 2012 (UTC)[reply]
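
    To make the ray casting / ray tracing distinction discussed above concrete, here is a minimal, self-contained Python sketch (the sphere-only scene representation and helper names are illustrative, not taken from the article or any particular renderer). The two functions are identical except that ray_trace recursively spawns a secondary reflection ray, which under the definition quoted above is exactly what turns casting into tracing:

        import math

        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))

        def norm(v):
            length = math.sqrt(dot(v, v))
            return tuple(x / length for x in v)

        def hit_sphere(center, radius, origin, direction):
            # Smallest positive t with |origin + t*direction - center| = radius,
            # or None; 'direction' is assumed to be unit length.
            oc = tuple(o - c for o, c in zip(origin, center))
            b = 2.0 * dot(oc, direction)
            c = dot(oc, oc) - radius * radius
            disc = b * b - 4.0 * c
            if disc < 0.0:
                return None
            t = (-b - math.sqrt(disc)) / 2.0
            return t if t > 1e-6 else None

        def closest_hit(scene, origin, direction):
            # scene: list of (center, radius, colour, reflectivity) tuples.
            best = None
            for center, radius, colour, refl in scene:
                t = hit_sphere(center, radius, origin, direction)
                if t is not None and (best is None or t < best[0]):
                    best = (t, center, colour, refl)
            return best

        def ray_cast(scene, origin, direction):
            # Ray casting: the primary ray only; no secondary rays at all.
            hit = closest_hit(scene, origin, direction)
            return hit[2] if hit else (0.0, 0.0, 0.0)

        def ray_trace(scene, origin, direction, depth=0, max_depth=3):
            # Ray tracing: identical to ray casting except for the recursive
            # call below, which spawns a secondary (reflection) ray.
            hit = closest_hit(scene, origin, direction)
            if hit is None:
                return (0.0, 0.0, 0.0)
            t, center, colour, refl = hit
            if depth >= max_depth or refl == 0.0:
                return colour
            point = tuple(o + t * d for o, d in zip(origin, direction))
            normal = norm(tuple(p - c for p, c in zip(point, center)))
            reflected = norm(tuple(d - 2.0 * dot(direction, normal) * n
                                   for d, n in zip(direction, normal)))
            bounce = ray_trace(scene, point, reflected, depth + 1, max_depth)
            return tuple((1.0 - refl) * c + refl * b
                         for c, b in zip(colour, bounce))

    For a scene such as scene = [((0.0, 0.0, -3.0), 1.0, (1.0, 0.0, 0.0), 0.5)] and a ray fired from the origin along (0, 0, -1), ray_cast returns the sphere's flat colour, while ray_trace blends that colour with whatever the secondary ray sees (here the black background).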

    Illustration needed - Bounding volumes


    The section https://en.wikipedia.org/wiki/Ray_tracing_(graphics)#Bounding_volumes would be far more understandable to non-mathematicians if there were an illustration of the pieces that make up the bounding volume.
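
    Until an illustration is added, a minimal code sketch may convey the idea more concretely than prose alone (the function below is an illustrative Python rendering of the standard "slab" ray/axis-aligned-box test, not code from the article): the renderer runs this cheap test first and only performs the expensive exact intersections against the objects inside a box when the box itself is hit.

        import math

        def ray_hits_box(origin, direction, box_min, box_max):
            # Classic slab test: clip the ray origin + t*direction against the
            # three pairs of axis-aligned planes and keep the overlapping t-range.
            t_near, t_far = -math.inf, math.inf
            for o, d, lo, hi in zip(origin, direction, box_min, box_max):
                if abs(d) < 1e-12:
                    # Ray parallel to this slab: it must already lie between the planes.
                    if o < lo or o > hi:
                        return False
                    continue
                t1, t2 = (lo - o) / d, (hi - o) / d
                if t1 > t2:
                    t1, t2 = t2, t1
                t_near, t_far = max(t_near, t1), min(t_far, t2)
                if t_near > t_far:    # the per-axis intervals no longer overlap
                    return False
            return t_far >= 0.0       # reject boxes entirely behind the ray

    For example, ray_hits_box((0, 0, 0), (0, 0, -1), (-1, -1, -5), (1, 1, -4)) returns True, so only then would the objects inside that box be tested individually.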

    Proposed merge with Ray-tracing hardware

    The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
    No consensus. Feel free to perform the merge yourself per WP:BOLD. MorningThoughts (talk) 16:21, 27 January 2020 (UTC)[reply]

    Hardware raytracing is just a subtopic of raytracing. Ethanpet113 (talk) 04:20, 20 April 2019 (UTC)[reply]

    The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
    My view is that a WP:BOLD merge would not be appropriate, as the references suggest that there is evidence for independent notability. Therefore, proposing first, giving a reason for the proposal and starting a discussion, is the way forward, as per WP:MERGEPROP. My view would then be against a merge unless the argument was strong. Klbrain (talk) 09:25, 8 April 2020 (UTC)[reply]

    Ray tracing can be applied to any phenomenon that behaves linearly


    The last paragraph of the introduction has this sentence: "In fact, any physical wave or particle phenomenon with approximately linear motion can be simulated with ray tracing." I am wondering what would be an example of a physical phenomenon that behaves nonlinearly. Is there a phenomenon that behaves in a spiral shape, or something else? ScientistBuilder (talk) 02:14, 27 January 2022 (UTC)[reply]
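
    For what it's worth, a standard example from geometrical optics (general physics background, not something taken from the article): in a medium whose refractive index n varies continuously in space, as in a mirage or a gradient-index lens, light no longer travels in straight lines. Straight-ray tracing assumes paths of the form

        \mathbf{r}(t) = \mathbf{o} + t\,\mathbf{d},

    whereas the ray equation of geometrical optics bends the path according to

        \frac{\mathrm{d}}{\mathrm{d}s}\!\left( n(\mathbf{r})\,\frac{\mathrm{d}\mathbf{r}}{\mathrm{d}s} \right) = \nabla n(\mathbf{r}).

    Gravitational lensing is another case with curved ray paths; simulating such phenomena requires integrating the path step by step rather than tracing straight segments.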

    Limitations of ray tracing

    I would like to add this source: https://reference.wolfram.com/language/tutorial/PhysicallyBasedRendering.html. It would be good to add a few details about the limitations of ray tracing. ScientistBuilder (talk) 21:08, 27 January 2022 (UTC)[reply]

    The page you are referring to discusses limitations of physically based rendering (PBR), which is a shading technique that can be used with or without ray tracing. Furthermore, I am not sure how many of the limitations mentioned on that page are really relevant to ray tracing for computer graphics (e.g. the lack of gravitational lensing or of the relativistic Doppler effect). @ScientistBuilder ThothOfTheSouth (talk) 21:30, 27 October 2024 (UTC)[reply]

    Disadvantages


    Is there a way to measure how much more computing power is needed for every increase in resolution? I am curious about this part of the Disadvantages section: "A serious disadvantage of ray tracing is performance (though it can in theory be faster than traditional scanline rendering depending on scene complexity vs. number of pixels on-screen). Until the late 2010s, ray tracing in real time was usually considered impossible on consumer hardware for nontrivial tasks." Given a finite amount of computing power, which is more realistic: a lower-resolution image with more ray tracing calculations done, or a higher-resolution image (4K, for example) with reduced ray tracing calculations? Is there a way to approximate, using Big O or asymptotic notation, the number of additional calculations needed? ScientistBuilder (talk) 21:12, 27 January 2022 (UTC)[reply]
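
    On the asymptotics question (general background, with illustrative numbers rather than anything from the article): with an acceleration structure such as a BVH, tracing one ray through a scene of n objects costs roughly O(log n), so a frame costs on the order of O(w * h * s * log n) for a w x h image with s samples per pixel. In other words, the ray count grows linearly with the pixel count. A back-of-envelope Python sketch:

        def rays_per_frame(width, height, samples_per_pixel, secondary_per_sample):
            # Primary rays scale linearly with the pixel count; each sample
            # spawns a bounded number of secondary (shadow/reflection) rays.
            return width * height * samples_per_pixel * (1 + secondary_per_sample)

        # Same (assumed) sampling settings at 1080p and at 4K:
        # four times the pixels means four times the rays.
        print(rays_per_frame(1920, 1080, 1, 3))   # 8294400
        print(rays_per_frame(3840, 2160, 1, 3))   # 33177600

    So for a fixed ray budget, rendering at half the resolution in each dimension frees roughly four times as many rays for extra samples or bounces; which trade-off looks better is a perceptual question rather than an asymptotic one.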

    "Ray tracing (graphics" listed at Redirects for discussion


    An editor has identified a potential problem with the redirect Ray tracing (graphics and has thus listed it for discussion. This discussion will occur at Wikipedia:Redirects for discussion/Log/2022 October 27#Ray tracing (graphics until a consensus is reached, and readers of this page are welcome to contribute to the discussion. Steel1943 (talk) 19:24, 27 October 2022 (UTC)[reply]

    Emissions theory of sight


    It turns out that ray tracing in its most common form is reminiscent of the emission theory of vision as set forth by Greek philosophers, notably Empedocles. This ancient, now scientifically obsolete theory postulated that our eyes emitted rays that lit up objects for us to see. Remember, Empedocles was the same man who theorized that everything was composed of air, earth, fire, and water, with love pulling the elements together and hate tearing them apart. In his theory of light, the fire in our heads emitted beams that bounced off objects before re-entering the eye; essentially, our eyes were like flashlights. Never mind that a startling number of people continue to adhere to the obsolete theory. It has been pointed out that the idea has practical use in computer graphics as recursive ray tracing, since tracing rays from the eye is computationally orders of magnitude more efficient than having the light sources emit rays in all directions, most of which never reach the camera. Does this mention have a place in the article, perhaps under history? FreeMediaKid$ 15:54, 16 November 2022 (UTC)[reply]

    Interactive Ray Tracing: What's BVH?


    The section uses the acronym "BVH" without defining it. Marcusmueller ettus (talk) 14:18, 16 January 2023 (UTC)[reply]

    @Marcusmueller ettus, good point. The concept appeared in an earlier section before the acronym was used, but the abbreviation wasn't defined there and the order of the words was a bit different. I fixed it. ThothOfTheSouth (talk) 21:03, 27 October 2024 (UTC)[reply]

    Noise discussion not relevant for this page


    There is a short sentence mentioning noise and denoising for ray tracing. This page doesn't discuss distribution ray tracing, sampling, Monte Carlo methods, or any other ray-tracing technique that would create noise and therefore need noise to be removed. Furthermore, the sentence has no citation. I recommend removing it from this page, as it is not related to the rest of the article; a discussion of noise in ray tracing could instead be added to distributed ray tracing, where a discussion of denoising techniques would also be appropriate.

    This is the sentence: "Ray tracing-based rendering techniques that involve sampling light over a domain generate image noise artifacts that can be addressed by tracing a very large number of rays or using denoising techniques." ThothOfTheSouth (talk) 21:10, 27 October 2024 (UTC)[reply]
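
    For context on where that sentence likely comes from: rendering techniques that sample light over a domain are Monte Carlo estimators, and the standard error of a Monte Carlo estimate shrinks only as 1/sqrt(N) in the number of samples (rays). A small, self-contained Python illustration of that scaling, using an arbitrary stand-in integrand rather than an actual light-transport integral:

        import random
        import statistics

        def estimate(samples):
            # Stand-in for estimating pixel brightness by sampling a domain:
            # the Monte Carlo average of x^2 on [0, 1), whose true mean is 1/3.
            return sum(random.random() ** 2 for _ in range(samples)) / samples

        for n in (16, 256, 4096):
            spread = statistics.stdev(estimate(n) for _ in range(200))
            print(n, round(spread, 5))  # the spread falls roughly as 1/sqrt(n)

    That "16x more rays for 4x less noise" scaling is why denoising is attractive wherever such sampling is used; whether the sentence belongs on this page or at distributed ray tracing is a separate question.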