Chapter 4: Entering the Third Dimension Released

I’m happy to inform you that “Chapter 4: Entering the Third Dimension” is now available online; click this link to read it.

Let me know if you have any issues, bugs, errors, or simply wish to yell at me. Happy reading.

  • http://thereactivearts.com Quazi Irfan

    >> This line of code
    glShaderSource(shader_id, 1, &temp, NULL); 

    causes the following error:
    error C2664: ‘void (GLuint,GLsizei,const GLchar **,const GLint *)’ : cannot convert parameter 3 from ‘char **’ to ‘const GLchar **’ 1>          Conversion loses qualifiers

    >> I changed it the following way to make it run:
    const GLchar *temp = glsl_source;
    glShaderSource(shader_id, 1, &temp, NULL);

    What’s gonna be the next tutorial about? Materials? Lighting?

    • http://openglbook.com/ E. Luten

      Can you send me over the project file or solution that you’re using? I’m unable to reproduce the error, and I compile at warning level 4, so I should get a level 2 error.

      Have you tried compiling the project from the source code repository?
      http://code.google.com/p/openglbook-samples/

      • http://thereactivearts.com Quazi Irfan

        I directly copied and pasted it to see if it works, and then got this error. What file should I send? Just the code.cpp file?

        • przemo_li

          Tarball the whole code (or rar it if you are a Windows user).

          This shouldn’t happen, so please attach info about your OS, compiler, IDE, compiler options, etc.

        • http://openglbook.com/ E. Luten

          Hi @iamcreasy,

          Just compress the entire solution or project and put it somewhere online for me to download or email it to me at eddyluten@gmail.com if it’s not too large.

          Eddy

          • http://thereactivearts.com Quazi Irfan

            Sent. :)

          • http://openglbook.com/ E. Luten

            For others with this issue, the problem turned out to be a compiler setting within Visual C++. You need to compile the source code as C, not C++ by setting the “Compile As…” option to /TC (see attached screenshot).

    • przemo_li

      There is no such thing as a material in OpenGL 4.0, and lighting is no longer part of core either. This means you can simulate any lighting model and any materials you want; you just have to do it yourself.
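
      If it helps, here is a minimal sketch of that “do it yourself” idea (my own, not from the chapter; the names ex_Normal, LightDirection, and MaterialColor are made up, and GLchar assumes your GL headers are already included): a fragment shader that computes simple diffuse shading for a hand-rolled “material”.

        const GLchar* DiffuseFragmentShader =
          "#version 400\n"
          "in vec3 ex_Normal;\n"                /* normal interpolated from the vertex shader */
          "uniform vec3 LightDirection;\n"      /* direction the light travels, already normalized */
          "uniform vec3 MaterialColor;\n"       /* the hand-rolled material */
          "out vec4 out_Color;\n"
          "void main(void)\n"
          "{\n"
          "  float diffuse = max(dot(normalize(ex_Normal), -LightDirection), 0.0);\n"
          "  out_Color = vec4(MaterialColor * diffuse, 1.0);\n"
          "}\n";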

  • Christian Rau

    First of all, good work. But I have some remarks to make.

    You say that you store matrices column-major, like GL does. But your statement that GL requires the translation vector in the bottom row (instead of the last column) is simply wrong. You also don’t do that in your code (you store it in the last column), so I don’t really understand that remark.
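
    To illustrate (my own sketch, not quoted from the chapter): with GL’s column-major layout, a translation by (x, y, z) lands in the last column of the matrix, which is array indices 12, 13, and 14.

      #include <string.h>

      /* Builds a column-major 4x4 translation matrix: the offsets sit in the
         last column (indices 12, 13, 14), not in the bottom row. */
      void BuildTranslationMatrix(float out[16], float x, float y, float z)
      {
        const float m[16] = {
          1, 0, 0, 0,   /* column 0 */
          0, 1, 0, 0,   /* column 1 */
          0, 0, 1, 0,   /* column 2 */
          x, y, z, 1    /* column 3: the translation */
        };
        memcpy(out, m, sizeof(m));
      }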

    I also think that in a right-handed coordinate system (like the one GL uses) it is more appropriate to rotate right-handed (counter-clockwise when viewed from the positive end of the axis), whereas you rotate left-handed. This could just be a matter of taste, but the fixed-function GL calls also rotated right-handed, I think.

    Next, you could elaborate a bit on homogeneous coordinates (but not too theoretically), so that the reader understands why 3D points are actually 4-vectors and why affine transformations can then be represented by a single matrix. And you could make it a bit clearer that transformations are applied by multiplying a matrix by a vector, and likewise concatenated by multiplying the matrices together, with the leftmost one applied last (this comes through implicitly, but some clear words would help a bit).

    Sorry for all those lamentations, but this is just a crucial chapter.

  • http://thereactivearts.com Quazi Irfan

    You said that when you pass values to the rotation matrices, you use radians for the angle. But inside the function, where you use “float angle”, you call sin(angle). My question is: how can you use sin(radian value) to evaluate sin θ? Shouldn’t θ be in degrees?

    • http://openglbook.com/ E. Luten

      No, take a look at the docs: http://www.cplusplus.com/reference/clibrary/cmath/sin/
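
      (For reference, a tiny helper, illustrative rather than from the chapter, if your angles start out in degrees:)

        #include <math.h>

        /* sin() and cos() expect radians, so convert degrees first */
        float DegreesToRadians(float degrees)
        {
          return degrees * (3.14159265f / 180.0f);
        }

        /* e.g. sinf(DegreesToRadians(90.0f)) is approximately 1.0f */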

      • http://thereactivearts.com Quazi Irfan

         Thanks, I should have noticed it.

        Q 1. What I don’t understand is how the 3 matrices are related to the VAO. Does every VAO need its own Model and View matrix to handle all the transformations? What is the purpose of using the Model and View matrix in the draw-cube function?

        Q 2. In the translation matrix section, you said, “This same principle applies to matrices when we wish to move an entire coordinate system instead of a single point.” But what does moving “an entire coordinate system” mean?

        Q 3. Another thing: how can one matrix handle a full set of vertices? Is the model matrix like a pivot point for the set of vertices?

        • przemo_li

          Q1:
          The Model matrix stores all transformations that apply to __just__ that one model.

          The View matrix stores all transformations that apply to the point of view (or to the world in general, since you can either move the “eye” or move the whole world).

          The Projection matrix holds the transformations that give you the perspective you want.

          Sometimes you can compute a ModelView matrix on the CPU and send it as one matrix. Sometimes you can compute ModelViewProjection on the CPU and send only that.
          Sometimes you need the Model, View, ModelView, and Projection matrices separately.
          It depends on the computations you want to do!

          Generally, the less matrix computation on the GPU, the better (there is no point in computing these products for every vertex if you can do it once on the CPU and send the result as one uniform).
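
          A rough sketch of that idea (my own helper functions, not the chapter’s utility code; only glUniformMatrix4fv is the real GL call, and GL headers are assumed): compute Projection * View * Model once per object on the CPU and upload a single uniform.

            /* out = a * b, all column-major 4x4 matrices */
            void MultiplyMatrices4x4(float out[16], const float a[16], const float b[16])
            {
              for (int col = 0; col < 4; ++col)
                for (int row = 0; row < 4; ++row) {
                  float sum = 0.0f;
                  for (int k = 0; k < 4; ++k)
                    sum += a[k * 4 + row] * b[col * 4 + k];
                  out[col * 4 + row] = sum;
                }
            }

            /* once per object, before the draw call */
            void UploadModelViewProjection(GLint mvp_location, const float projection[16],
                                           const float view[16], const float model[16])
            {
              float view_model[16], mvp[16];
              MultiplyMatrices4x4(view_model, view, model);
              MultiplyMatrices4x4(mvp, projection, view_model);
              glUniformMatrix4fv(mvp_location, 1, GL_FALSE, mvp);
            }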

          Also, each VAO that refers to the same object (one object, multiple VAOs) can share one Model matrix if the transformations are the same (and you don’t have to change that uniform if you draw those VAOs one after another!).

          The Projection matrix changes when you want to change the perspective.

          And the View matrix changes when you change the point of view.

          Q2:
          Assume that you want to move the Earth out of the solar system.
          You can do one of two things:

          Move the Earth.
          Or move every other planet, star, etc.

          So it is just a matter of interpretation whether you move the object or you move the coordinate system. (That’s why, when you want to do the second thing, you do it in the same manner.)

          Q3:
          Generally, __every__ vertex from a VAO gets the same uniforms. So whatever computation you run per vertex, each vertex will receive the same uniform values.

          So if you want a transformation to vary per vertex, you have to attach the values that vary per vertex to the VAO itself.

          In other words, once you know what equation you want to compute, check which values are per vertex and put them in the VAO, and which are shared by a whole set of vertices; put those in uniforms. The sets of vertices that share the same values then determine which vertices land in one VAO.
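
          To make that concrete, here is an illustrative vertex shader (names follow the chapter’s naming style but are not quoted from it): the “in” variables vary per vertex and are fed from the VAO’s buffers, while the matrices are uniforms shared by every vertex in the draw call.

            const GLchar* VertexShader =
              "#version 400\n"
              "layout(location=0) in vec4 in_Position;\n"  /* per vertex: comes from the VAO */
              "layout(location=1) in vec4 in_Color;\n"     /* per vertex: comes from the VAO */
              "uniform mat4 ModelMatrix;\n"                /* shared by all vertices in the call */
              "uniform mat4 ViewMatrix;\n"
              "uniform mat4 ProjectionMatrix;\n"
              "out vec4 ex_Color;\n"
              "void main(void)\n"
              "{\n"
              "  gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * in_Position;\n"
              "  ex_Color = in_Color;\n"
              "}\n";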

          Q4:
          You can rotate a cube in two different ways.

          Set the cube back to its vanilla state and then rotate by the full angle.
          Or
          Rotate the cube (in whatever orientation it already is) by a small angle.

          The first is the clean approach.
          The results of the second are based on the results of the previous transformations.

          Q6:
          No! In this simple example we do not want to change the perspective. However, there would be a performance penalty for constantly changing the perspective uniform!
          It would be better to set a new one only when we know the perspective has changed.
          But in this simple example the perspective is constant, so we can set it once.

          Q7:
          You just described how to use a rotation that is sensitive to previous transformations. If that is what you want, then it’s fine!

          Q8:
          That is because your app does not store the shader itself! The GPU driver manages it, so you can operate only on an ID. With that ID you tell the GPU driver which of the shaders it holds for us we want to use (target) in the API call.

          Q9:
          Yes. Mixing different View matrices for objects that should be rendered in one “scene” can give crazy results. But you may sometimes need it (to render crazy effects).

          More often, though, you will encounter situations where you have multiple View matrices per object, for different eye positions. For example, shadows can be implemented in a way where you render the scene x times, where x is the number of lights + 1, and there will be x View matrices for every object.

          I’m all for “further reading | more theory | good additional resources”.

          • przemo_li

            Maybe some of this Q&A should land in the chapter?

          • http://thereactivearts.com Quazi Irfan

            There could be a FAQ link (section) with every chapter, where new questions that come up often could be included. That wouldn’t clutter up the chapter itself.

          • http://thereactivearts.com Quazi Irfan

            Q 9: That means there is only one View matrix per scene for (just) viewing the objects of that scene.

            And every object’s Model matrix is multiplied by THAT View matrix to bring the object into eye coordinates.

            If these are correct, then “most of the time” I will associate one Model matrix with each object, one View matrix (of course, one camera per scene), and one Projection matrix. Is that right?

          • http://openglbook.com/ E. Luten

            Yes. Only if you have multiple “cameras” or “eyes” will you have multiple View matrices.

  • http://thereactivearts.com Quazi Irfan

    What’s gonna be the next chapter about? Would you please make the next chapter about ‘how to texture your object’?

    • przemo_li

      Texturing is pretty easy.
      Just feed in the texture and the texture coordinates. Then, in the vertex shader, play with the texture coordinates in any way you want (usually you just pass them along, with appropriate interpolation). And in the fragment shader, use the texture coordinates to fetch texels.

      Multiply that work by the number of textures you need, and mix the resulting colors any way you want.

      And remember that in the ADS (ambient-diffuse-specular) lighting model you mix in texturing before adding the specular term.
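
      A minimal sketch of the texel fetch described above (names are made up for the example): the fragment shader samples the bound texture at the interpolated texture coordinate.

        const GLchar* TexturedFragmentShader =
          "#version 400\n"
          "in vec2 ex_TexCoord;\n"              /* passed along by the vertex shader */
          "uniform sampler2D DiffuseTexture;\n" /* texture unit selected via glUniform1i */
          "out vec4 out_Color;\n"
          "void main(void)\n"
          "{\n"
          "  out_Color = texture(DiffuseTexture, ex_TexCoord);\n"
          "}\n";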

      But I think that lighting will come first. Both directional and parallel, per pixel, etc.
      Because all the pieces that are needed are already in place: just add uniforms with the light properties and use some lighting algorithm.

      • przemo_li

        I mean that for lighting we would only need a few more lines in the shaders (and setting up those uniforms).

  • Przemysław Lib

    When will new stuff arrive?

    PS: Can you review “OpenGL 4.0 Shading Language Cookbook” and post it as a blog post?

    Oops, I changed the “post as” setting and it posted once; I didn’t see it, so I posted again. Can this post be deleted?

  • Przemysław Lib

    When will we see new stuff?

    PS: Can you review “OpenGL 4.0 Shading Language Cookbook”?

    • http://openglbook.com/ E. Luten

      A new chapter will be released when I have some time to dedicate to the site away from other more pressing duties. As for the review, probably not unless there is enough interest.

    • http://openglbook.com/ E. Luten

      Let me clarify about the review: the book is from Packt, so I will not review it even if you paid me.

      • przemo_li

        I’m lost on that. Why is Packt not worth reviewing?

        PS: I’ll wait for the next chapter; thanks for the response.
        PPS: You do like to torture others, right? ;)

  • Przemysław Lib

    Post Next Chapter pls!!!

  • Pablo Aizpiri

    Just wanted to say this online e-book has been helping me TREMENDOUSLY!! Anxiously awaiting further releases! Thank you for your hard work! I love how detailed you are in your explanations and yet easy enough for anyone to follow along!

  • Khushiitrans

    Please post the next chapter soon. It’s been more than 4 months, I think.