I love to work on projects at the lab not only because the space is home to many technological resources for research and software development, but also because it is the meeting ground for intellectual and creative discussions among professors, students, and our collaborators. I find myself most able to “get something done” when I know I can consult and chat with experts. In short, the Future Reality Lab is a great space for the cultivation and realization of ideas, particularly because of the people who are there.
On the subject of my projects: as a challenge and learning experience, I have been developing a 2D side-scrolling game built nearly from scratch in C++ and OpenGL, something I have wanted to do for a while. Normally one would use an engine like Unity to get a working prototype, but in this case I find it incredibly satisfying to work from the ground up. Games have “been done,” but I figure that since new research usually requires some form of “ground-uppiness” (even if it isn’t in the form of building all of the low-level software), I can justify this pursuit. I enjoy it, after all, and I can ask plenty of questions when in the vicinity of my colleagues.
At this point, the program is an interactive in-game editor in which I can draw 2D collision paths for my as-yet-nameless box character to traverse.
The placeholder background I have drawn (a fortress or castle) comprises multiple layers that scroll according to the camera’s position, creating the illusion of depth (parallax scrolling):
(Note that the videos have a poorer frame rate than the live program due to the recording software. They will have to do for today.)
Multiple grid sizes are useful for creating more detailed colliders:
(The player has two (red) ground collision sensors. The green lines indicate points at which the sensors collide with the floor.)
I have also made it possible to detach the camera in a “free” mode so I can view and edit the environment away from the player:
(Note: I am aware of the unrealistic acceleration, especially on slopes. Physics will be improved.)
So far, I have found it especially important to develop the infrastructure for visual debugging and feedback before anything else. I want to see lines and colors corresponding to events (such as collisions) so I can tell what “is going on” in the world I am creating. Ease of visualization and clarity are key to my process, no matter the project. Not to be repetitive, but this reminds me of why I like that other project so much.
As for my long-term project plans, eventually I would like to make world rotation a central mechanic. The story and setting could revolve around VR, allowing the game to take place virtually anywhere.
The rotation gameplay will complicate collision detection and require more robust colliders that change as the world moves. The “updating model and view matrices instead of moving the world itself” trick is unlikely to work here, as I want each connected area of the world to be able to rotate independently. If I simulated world rotation by modifying the player’s gravity instead, keeping track of all the orientations and offsets for collision detection would necessitate excessive and error-prone trickery, especially whenever the player or other entities transition between areas. I definitely want to avoid the error-prone trickery.
Following a lengthy chat with another student, I concluded that if I were to track where the player is in relation to fixed grid blocks in the world, then during world rotation I could simply update the colliders closest to the player, rather than updating every collider. Of course, I will need to implement this system first to see whether it will work here. The idea is a little unusual from what I can tell, so I apologize in advance for the technical babble.
Next time (I think): more on this project and “why I am thankful that we have a piano.”
~Karl