Representation in VR. Max.

Greetings!

My ideal representation suited for the VR medium is a social media network. I know that sounds a bit strange and introverted, but that is how I see VR. We use social media every day: Facebook, Instagram, Snapchat, and so on. Why wouldn't we use them in VR? Imagine walking through a hall of your friends, looking at their pictures, and hitting the like button, but actually doing it in a VR world. I think that would be an amazing representation of VR.

Representation in VR

My friend and I were talking about the fact that despite the radical differences between groups of languages, the structure of word groups is usually the same. Thus, if we map all the words in English, for example, in a virtual space, we can expect to see that all the English words related to family cluster together. Meanwhile, when we map all the Chinese words in the same space, the Chinese words related to family will cluster in the same region. Such a corresponding relationship between English words and Chinese words makes translation and language learning much easier, considering how different the two languages are in terms of grammar, structure, characters, etc.

The reason why I think words should be mapped in a 3D space rather than a 2D one is that the connections between words and word groups are too complex to be represented in a 2D space. In a 3D world, the “distance” between different words will be less skewed: people pick one word or word group, look around, and then they can see all the connected words and word groups around them. This becomes an interface because people can be immersed in the world of words, and seeing the connections between words and word groups shapes their understanding of language.
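As a rough illustration of what such a word space might look like in Unity, here is a minimal sketch. The coordinates would come from some precomputed word embedding reduced to three dimensions; the example data and the wordLabelPrefab are placeholders, not part of any existing project.

```csharp
using UnityEngine;

// Minimal sketch (hypothetical names): place word labels at precomputed 3D
// "semantic" coordinates so related words end up near each other in the scene.
public class WordMap : MonoBehaviour
{
    [System.Serializable]
    public struct WordEntry
    {
        public string word;       // the word to display
        public Vector3 position;  // its 3D embedding coordinate
    }

    public WordEntry[] words;        // example data filled in the Inspector
    public TextMesh wordLabelPrefab; // a simple 3D text prefab

    void Start()
    {
        // Spawn one floating label per word at its coordinate.
        foreach (var entry in words)
        {
            TextMesh label = Instantiate(wordLabelPrefab, entry.position, Quaternion.identity, transform);
            label.text = entry.word;
        }
    }
}
```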

Form of Representation Suited for VR: Coding World

I’ve always been attracted to VR simulations that go beyond entertainment. I firmly believe that the possibilities for VR are limitless and that it can be used to solve problems that hinder people all over the world. Therefore, I believe that the best VR simulation that can be implemented is an educational one. Ever since I was a kid, I have considered myself a visual learner. Although I can understand things on a superficial level just by listening, all the information that is cemented in my brain has some form of visual representation attached to it. The same applies today as I study Data Structures and Algorithms: I always need to spend time making drawings and concept maps in order to fully understand the concepts from class. Some concepts are really difficult to follow just by listening to a lecture, and a virtual 3D animation of ideas like recursion, binary trees, and sorting algorithms would be much easier to grasp for learners all over the world who struggle with them. This world would resemble platforms like Scratch that teach programming to kids, but would match the complexity of higher-level programming concepts and algorithms with a medium like virtual reality, which can represent them much better.

Teaching Coding to kids
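To make the idea a little more concrete, here is a minimal Unity sketch of one such visualization: a small binary tree laid out as floating nodes that a learner could walk around. The node prefab, spacing, and depth are placeholders, and the recursive layout itself doubles as an example of the kind of concept the world would teach.

```csharp
using UnityEngine;

// Minimal sketch (placeholder names): lay out a perfect binary tree of a given
// depth as 3D objects, halving the horizontal spread at each level so children
// sit below their parent. The recursive Build call mirrors the concept itself.
public class BinaryTreeVisualizer : MonoBehaviour
{
    public GameObject nodePrefab;    // e.g. a labeled sphere
    public float levelHeight = 1.5f; // vertical gap between levels
    public float rootSpread = 4f;    // horizontal gap at the top level

    void Start()
    {
        Build(depth: 3, position: Vector3.up * 4f, spread: rootSpread);
    }

    void Build(int depth, Vector3 position, float spread)
    {
        if (depth == 0) return;

        // Place this node, then recurse into the left and right subtrees.
        Instantiate(nodePrefab, position, Quaternion.identity, transform);

        Vector3 drop = Vector3.down * levelHeight;
        Build(depth - 1, position + drop + Vector3.left * spread, spread * 0.5f);
        Build(depth - 1, position + drop + Vector3.right * spread, spread * 0.5f);
    }
}
```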

Form of Representation Suited to VR: Immersive Architectural Model Building

When designing a building, often no one knows what it will really feel like to be inside until it is built. Now, with VR, we can explore the inside of a building that hasn’t been built yet, or even a building that is impossible to build.

The form of representation I would like to write about in this post is based on my own experience creating three-dimensional models. I went to architecture school for a year, where we were encouraged to start thinking about our projects by making sketches. Though I would start out this way, scribbling in a sketchbook, I would quickly become frustrated: my sketches managed to capture the feel and aesthetic of what I wanted, but they never managed to convey a sense of the space. I quickly moved on to making rough three-dimensional sketches, or sculptures, with bits of paper, pizza boxes, and a box cutter. This really helped me think spatially: what previously took a plan, several sections, and an isometric view to convey was immediately visible in the three-dimensional sketches.

A few years later, I was teaching myself Maya. The modeling capability of the software was more powerful than my limited real-life model building ability. Yet the 2D surface of the screen was a barrier to really seeing what I was doing as I built. I would move a vertex a certain amount, and once I rotated the camera, I would realize that I had moved it too far in the x, y, or z direction without even noticing.

VR would be useful for taking a client on a tour through the model of a building. But it could also be extremely valuable at the sketching, conceptualizing, and designing stages. The software would consist of ‘dynamic material’ that the user can manipulate by grabbing, dragging, and scaling surfaces, vertices, edges, and volumes, as in Maya.

The software would also have two modes: a miniature one, where the architect can tinker with the model and change things, and an immersive one. Thinking about the scale of the body is also extremely important in designing architecture, so the architect would be able to move from the miniature scale to placing themselves inside the model as they work on it.

There are some elements of a building that VR cannot yet capture, like heat and air currents, but VR would be excellent for representing and creating a sense of space.

Visualizing Data in VR

During Bret Victor’s talk, I loved learning about William Playfair and how he invented the bar chart and other graphical methods to represent data. Related to these methods are “explorable explanations,” abstract representations that show how a system works or a way for authors to see what they are authoring without the black box of code.

Data visualizations are a powerful form of representation suited for VR. Though some visualizations have been developed in VR, they usually rely on the game engine to navigate between charts, or they add some irrelevant motion like the bars rising in a bar graph when it first loads. I think with VR we can do more to incorporate the different modes of understanding that Bret Victor mentioned. For instance, we can build on our spatial understanding to grasp quantities, time, associations between nodes of information, or even how the charts are organized (like a library of books). We can build on our aural understanding by having audio that explains the data and walks the user through it at a level specific to the user’s experience.

VR can make a data visualization an interface to information that makes the data accessible and easy to understand through abstraction. However, there is also potential for it to unpack the layers of abstraction and show how the visualization was made, or even give the context behind the data. For instance, if there is a chart showing the amount of snowfall, could the user be immersed in the environment showing the snowfall alongside the data visualization of its levels? Data visualizations are a person’s stories about the data; they are already created with a specific objective in mind for their audience. The trouble with these visualizations is that they tend to dehumanize the context behind the data, so VR really has the ability to use its potential for immersion to help the audience better understand the story. However, it is important that VR not exploit this potential to falsify the data by creating an immersive experience that skews the audience’s perception of it. I also think using VR to visualize data relates to the dynamic models that Bret Victor discusses at the end of his talk. Imagine data being updated in real time and seeing how the representation changes: the bar increasing, a point on a line graph being added, etc.
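As a small illustration of that last point, here is a minimal Unity sketch of a bar that follows a live value. The data source is just a sine wave standing in for a real feed, and the bar object and parameters are placeholders.

```csharp
using UnityEngine;

// Minimal sketch (placeholder names): a single chart bar whose height chases
// an incoming value each frame, so viewers watch the data change in real time.
public class LiveBar : MonoBehaviour
{
    public Transform bar;        // a stretched cube whose base sits on the floor
    public float maxHeight = 3f;
    public float smoothing = 2f; // how quickly the bar moves toward new values

    float currentHeight;

    void Update()
    {
        // Fake data stream: replace with whatever the chart actually visualizes.
        float value = (Mathf.Sin(Time.time) + 1f) * 0.5f; // normalized 0..1
        float targetHeight = value * maxHeight;

        // Ease toward the new value so the change reads as motion, not a jump.
        currentHeight = Mathf.Lerp(currentHeight, targetHeight, smoothing * Time.deltaTime);

        // Scale the bar and lift its center so its base stays on the floor.
        bar.localScale = new Vector3(bar.localScale.x, currentHeight, bar.localScale.z);
        bar.localPosition = new Vector3(bar.localPosition.x, currentHeight * 0.5f, bar.localPosition.z);
    }
}
```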

Project 2 Development Blog: Zenboo

Group members: Adham, Cassie, Nico, Vivian

March 3

planning/storyboarding – see Nico’s post

March 10

This past weekend I started to build up the environment. I began by messing around with the terrain builder, as part of our plan included having the bamboo garden surrounded by mountains. This was relatively straightforward and fun to play with, though there were a few differences when I went from using my laptop (I have Unity 2017.3) to the PC in the classroom.

When I made a perimeter of mountains, however, placing the camera in the center made the mountains seem a little overbearing. To compensate, I placed a platform mountain of sorts in the center, with the camera on top of it. This way it feels like the user is up in the mountains, which gives a much more peaceful effect:

I chose to create somewhat jagged-y mountains because they reminded me of some of the mountains in China – rough yet mystical.

I also played around with the painting feature and painted flower details onto the platform the user stands on. I got the texture from Grass Flowers Pack Free, and didn’t realize until I placed it in the environment that the flowers actually move around, as if swaying in the wind. I’m not sure if we will keep this effect or not, but for now I think it adds a nice peaceful touch, and could be accompanied by calming wind sound effects.

To build up the rest of the environment I relied on other prefabs. To create the circle of rocks surrounding the user, I used rocks from LowPoly Rocks. I got the bamboo from Shuriken Set (which Nico found), the watering can and sickle from Garden Realistic Tools, the tree stump from LowPoly Trees and Rocks, and the skybox from Skybox Series Free. At the moment, this is what one view of the environment looks like:

We’ll have to talk more about how we want the bamboo to be represented and how close it should be to the user. Since there is a circle of rocks, it might make sense for there to also be a circle of bamboo rather than just a section of bamboo in front of the user.

March 11

During class we touched base on more stuff to do for the environment:

  • Waterfall – found something in the assets store for this, Water Fx Particles
  • Make log wider – this way it works with the gravity of the objects on top of it (before the watering can and sickle were falling off of the log for some reason…this was why)
  • Make the bamboo closer to the user – less walking, limited space
  • Prettier bamboo material
  • Not have grass/flowers too high or else when objects fall you can’t see them
  • Have a different color flower – right now it looks like hay

I also started looking around for some music and sound effects we could use. I found some nice sounds of birds chirping and leaves rustling in the wind, as well as a sound that could be good for when the bamboo is being cut.

Later in the evening Nico and I also worked on combining the script he had with the environment. This was really nice because it gave me a better idea of how the space looks, and how it can be better designed, when wearing the actual headset. We ended up scaling everything in the environment down so it would be easier to work with in the scripts, and so that the user didn’t have to walk as much. This actually had a nice visual effect as well, since it somehow felt more like a canyon. We also talked about more things to work on with the environment:

  • With the rock circle – make the circle smaller, with fewer but bigger rocks, and experiment with them floating
  • Take flowers off of the mountains (I had accidentally painted these on, which wasn’t apparent until we scaled everything down)
  • Add material variation in the mountains
  • Add more bumps in the platform mountain the user stands on for terrain variation

March 12

Today I worked on making some of the improvements to the environment:

  • Removed the flowers that were on the background mountains
  • Made the rock circle into larger floating rocks, tilting at different angles and floating at different heights. I actually really like this effect; I think it gives an odd sense of power yet is still zen.
  • Added in more bumps/raised terrain sections around the platform mountain that the user stands on
  • Started experimenting more with the ground…put pinker flowers in the back and short green grass in the circle where the user stands. However, there is a kind of color warp that happens when the user moves their head around, which doesn’t necessarily cause any issues but might look slightly out of place. I’ll have to see what the others think.

Here’s what the environment looks like at this point (sans bamboo – will add this in as a group tomorrow):

March 17

These past couple of days were spent putting finishing touches on the environment. The general consensus on the grass and flowers was that the color change was due to rendering and was too distracting for the user. I found a blog post online about how to make grass using the tree building tool, but I had trouble getting it to work. I also tried to mess around in the terrain settings, but this was in vain as well. Eventually, the issue was resolved by painting the grass as trees rather than as a detail – I ended up finding a grass model in a package from the asset store and using that. I also added bushes to the mountains to look kind of like trees, both to get some color variation and to evoke those mystical yet peaceful Chinese mountains I was inspired by.

The last thing I worked on was the placement of the bamboo. The space the user is in is a bit limited, so the bamboo had to be placed so that the user did not have to walk much. I ended up placing the bamboo in a semi-circle around the user, so the user can simply turn around to view the other bamboo stalks that are available to interact with. I think this placement also gets the user to turn and take in the 360 view around them, whereas a clump of bamboo in front of them would simply fix their viewpoint in one spot.
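For reference, placing the stalks evenly could also be done from a script rather than by hand; here is a rough sketch of what that might look like, with the prefab, radius, and count as placeholders rather than the values we actually used.

```csharp
using UnityEngine;

// Rough sketch (placeholder values): spawn bamboo prefabs along a half circle
// around the user's standing spot, so every stalk is reachable just by turning.
public class BambooRing : MonoBehaviour
{
    public GameObject bambooPrefab;
    public int stalkCount = 8;
    public float radius = 2.5f;

    void Start()
    {
        // Spread the stalks over 180 degrees, centered on the forward direction.
        for (int i = 0; i < stalkCount; i++)
        {
            float angle = Mathf.Lerp(-90f, 90f, i / (float)(stalkCount - 1)) * Mathf.Deg2Rad;
            Vector3 offset = new Vector3(Mathf.Sin(angle), 0f, Mathf.Cos(angle)) * radius;
            Instantiate(bambooPrefab, transform.position + offset, Quaternion.identity, transform);
        }
    }
}
```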


Forms of representation in VR

I think one of the biggest strengths of VR is that the player feels immersed in an alternative environment and feels an intimate connection to that world and the objects within it.

Given this strength, I think this medium is well suited to representing the building of relationships between people – a simulation of how people interact, connect, and bond.

multinational friends in VR talking to the player

I first thought about this idea because of the environment we’re in at NYU Abu Dhabi, where we have students and faculty coming from literally everywhere around the world. I recently served as a Peer Ambassador during the last Candidate Weekend, where I talked to prospective students and realized how overwhelming it can sometimes be for people who have never travelled abroad before. I think simulating human interactions through VR could help people like these students learn how to situate their thoughts and conversations in a global context.

In this sense, VR could be used as an educational tool for people wanting to learn about people hailing from all over the world. For this specific representation, it could incorporate intricate details specific to different cultures and societies by simulating the different reactions and responses from characters within the VR world coming from different countries. The player can meet and converse with these characters and be informed about the nuances of different cultures.

Such a representation in VR provides an interface for interaction where people learn how to approach people of different backgrounds, ask informed questions, and know what factors to keep in mind when conversing.

But this leads to questions such as:

  • Why not just interact with real people?

I think such a representation of multicultural and multiethnic interaction in VR is in no way meant to suggest that this is the only or most accurate manifestation of what these interactions would look like; rather, it is a way for people to start thinking about how to act, approach, and behave when they are in such situations. There are benefits to meeting people like these characters in person, but for those who fear making mistakes and want to avoid being overwhelmed, this is a good option to try out.

  • What does VR bring to this specific form of representation?

I think having realistic characters to guide you through human interactions could prove very valuable, not just for people wanting to “practice” and get to know other cultures before meeting people in person, but also for those who actually struggle to socialize and are introverted – VR can strive to provide an experience as close to reality as possible.

Form of Representation

As Bret Victor mentioned in his talk, The Humane Representation of Thought, “the powerful medium is what [allows] powerful representations to be spread”; the medium is what determines how and to what extent a representation can be executed to its full potential.

I think that the medium of virtual reality allows the user to become immersed in an environment that would otherwise be difficult or impossible to experience. For example, virtual reality can allow an ordinary person to feel as if he or she is walking on the surface of the moon, which only skilled and selected astronauts can do. In fact, virtual reality allows the user to be an active participant rather than a passive observer. What I mean by this is that with the hand controllers, the user has the ability to manipulate what is inside the virtual world. From this essence of virtual reality, I think a form of representation well suited to the medium would be “thinking,” more specifically creating “dynamic environments-to-think-in.”

Visual Image of Bret Victor’s Dynamic Environments-To-Think-In

In a sense, the medium of virtual reality “treat[s] the human beings as sacred” (Victor). Virtual reality can give superpowers to the user by giving them the agency to see and alter the virtual world they are in. Let’s say that the user is placed on a lone island and can see the setting through the headset. This user, if given the option, could perhaps light a fire or cook food on the deserted island using the controllers. More realistically, the user could craft his own image of his house in his virtual space, which he could use to work through his thoughts and come up with new ideas. Such an interface would engage different parts of the user’s brain and allow the user to think in a different manner. Because thoughts and ideas tend to arise when one changes one’s environment, the medium of virtual reality would be effective in providing that alternate space.

Blog: Representation

In “The Humane Representation of Thought,” Bret Victor discusses different modes of understanding mediums, specifically the use of different sensory channels and enactive, iconic, and symbolic methods. At the moment, virtual reality is most typically represented using different gaming systems, such as the Vive, which allow virtual reality to be relatively dynamic. The user navigates and understands the game through action, image, and language-based representations. These games also appeal to various sensory channels; they are visual because of the graphics, they are aural because of sound effects, they are tactile because of the use of controllers, and they are spatial because of the 360 degree aspect, complete with moveable depth.

What is perhaps lacking in this current representation, however, is how the user can understand the virtual reality medium kinesthetically. There are not very many games out there, to my knowledge, that involve movement of the user’s body in a way that effectively matches actual reality. The bow-and-arrow activity in the Unity example world, for instance, does not accurately match how you would shoot an arrow in real life. It takes into account aim, a small pull-back motion, and the push of a button in order to shoot the arrow. When shooting an actual bow, there are several other factors that affect how the arrow will fly. The position of the archer’s elbow, for example, is very important. There is also tension in the bow’s string that you are not able to feel in your fingers and arms with Vive controllers.

A well-suited representation for virtual reality, therefore, would be one that better takes into account the kinesthetic mode of understanding without compromising other modes of understanding the medium. Sensors all around the body to better map body movements, for example, could be a possibility.

Development blog #2. Mai, Max and Shenuka

Pre-development blog.

Hello friends
Our virtual reality is based on an action that is not very remarkable, but is done very, very often – watering the plants. We came up with this idea after we met at the marketplace, and one of the things that inspired us was the man-eating plant from “Little Shop of Horrors”.

https://www.youtube.com/watch?v=GLjook1I0V44
Here is the video of that plant 🙂

Our idea was to create a terrarium/greenhouse where the player is placed and sees a desk in front of them. Here is the storyboard.

There will be various seeds on the table, but if the person chooses a specific one, a man-eating plant will grow. Here are some pictures that come close to what we imagined for our environment.

There will also be a man-eating plant at the back, but the person won’t know about it unless they look back 🙂
That is all for our idea, hope you like it 🙂

Development blog #2

Greetings!

Here is our development blog #2. We have created a work station and implemented interaction with objects. We used the prefabs from the “SteamVR” interaction example, such as the “Throwable” script and, of course, the camera, which made things much easier since the camera already contains the controllers. We built the work station out of parts of the “Green house 3D” asset: we took the wooden pieces and assembled a workbench and some shelves. After finishing the work station and adding the “Throwable” script to some cubes and spheres, we added a water jug and a pot, and made the pot able to hold objects (with a mesh collider), so now we can place objects into it. We added the same scripts to the water jug, so we can pick it up and throw it (just for fun). After that, we tried to attach water particles to the tip of the jug to make it pour. We added the “Water FX” asset, but it only contained rain, so we decided to shrink the rain to a tiny area and make its source the tip of the water jug. There were some problems with gravity, and at that point we stopped 🙂
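For the record, the pouring behavior we were going for looks roughly like the sketch below: a particle system sits at the spout and only emits while the jug is tipped past a threshold. This is a hypothetical reconstruction, not our actual script; the 45-degree value matches what we ended up using later.

```csharp
using UnityEngine;

// Rough sketch (not the actual project script): enable the water particles
// at the jug's spout only while the jug is tilted past a pour threshold.
public class WaterJugPour : MonoBehaviour
{
    public ParticleSystem waterParticles; // placed at the spout, pointing downward
    public float pourAngle = 45f;         // degrees of tilt before water flows

    void Update()
    {
        // Angle between the jug's up axis and world up:
        // 0 when upright, larger the more the jug is tipped over.
        float tilt = Vector3.Angle(transform.up, Vector3.up);

        var emission = waterParticles.emission;
        emission.enabled = tilt > pourAngle;
    }
}
```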

Here are some videos:

Development blog #3

Greetings! This week was the biggest push for us. We finished the project, implemented a lot of new things, and updated some scripts:
-Fixed the water from the watering can, so it now pours only when the can is tilted more than 45 degrees
-Added a seed interaction with the pot: when the seed is in the pot and gets watered, it grows (see the sketch after this list)
-Added some plants for the inside of the greenhouse
-Added more trees outside the greenhouse
-Added a “small” butterfly 🙂
-Added a cage with a plant at the back
-Implemented sounds
-Added some throwable objects
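The seed interaction is the piece that ties the watering and growing together; a minimal sketch of how that kind of behavior can be wired up in Unity is below. It is a simplified stand-in for our actual script and assumes the water particle system has collisions enabled with “Send Collision Messages” checked.

```csharp
using UnityEngine;

// Simplified sketch (not the actual project script): when water particles hit
// the seed, start scaling the plant model up from nothing to full size.
public class GrowableSeed : MonoBehaviour
{
    public Transform plant;       // plant model parented to the seed, starting at zero scale
    public float growDuration = 3f;

    bool watered;
    float growTimer;

    // Unity calls this when a particle from a system with
    // "Send Collision Messages" enabled hits this object's collider.
    void OnParticleCollision(GameObject other)
    {
        watered = true;
    }

    void Update()
    {
        if (!watered || growTimer >= growDuration) return;

        growTimer += Time.deltaTime;
        float t = Mathf.Clamp01(growTimer / growDuration);

        // Grow the plant smoothly over growDuration seconds.
        plant.localScale = Vector3.one * t;
    }
}
```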

Here is a short video of the project.