Enjoying The Colors

1. Project Description: describe the space you created and the modes of interaction.

Junior, Claire, and I decided to create a realistic bathroom space that the user could walk around in. We had limited space, as we used the front part of our classroom, but in a sense the space limitation worked to our advantage. A regular bathroom is not that big, so recreating the bathroom within the limited space replicated the real-life situation. While there is a bathtub in the corner, it is roughly normal size, neither too small nor too big. We also decided to add a towel rack and some towels to show that this is a bathroom that is used frequently, not a sample bathroom in a showcase. The big shelf was added to hold one of the pairs of glasses used for the interaction. And of course, the toilet is placed in the corner to emphasize the fact that the user is in a bathroom. There is also a wide sink with a mirror on top of it, which we intentionally chose to match the overall atmosphere of the bathroom.

Overview of the Scene

We used grab-and-select via the Vive controller’s trigger button as the mode of interaction. By hovering a controller over a pair of glasses, the user can click on it with the trigger, which lets them “take a look” through that specific pair. This means that when the user chooses a pair of glasses that makes everything look red, the user will see everything in red after selecting it. The other interaction is walking around the virtual bathroom. By calibrating the Vive headset, the user can walk freely inside the bathroom and look closely at the various objects.
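Our exact selection script isn’t reproduced here, but the pattern can be sketched roughly like this. The `Controller` tag, the scene name, and the `TriggerPressed` helper are hypothetical placeholders, not our actual code; the real trigger check would go through whichever controller API the project uses (e.g. the SteamVR plugin’s input system).

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the "select glasses with the controller trigger" interaction.
// Attach to each pair of glasses, which needs a trigger collider.
public class GlassesSelector : MonoBehaviour
{
    public string sceneToLoad = "RedScene"; // hypothetical tinted-scene name

    private bool hovering = false;

    private void OnTriggerEnter(Collider other)
    {
        // The controller object is assumed to carry a "Controller" tag.
        if (other.CompareTag("Controller")) hovering = true;
    }

    private void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Controller")) hovering = false;
    }

    private void Update()
    {
        if (hovering && TriggerPressed())
        {
            // "Put on" the glasses by switching to the tinted scene.
            SceneManager.LoadScene(sceneToLoad);
        }
    }

    private bool TriggerPressed()
    {
        // Placeholder: query the Vive controller's trigger here.
        return false;
    }
}
```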

2. Process and Implementation: discuss how you built the scene within the development environment and the design choices your group made. What did the brainstorming process involve? How did you go about defining what would be everyday in this world? What were the steps to implementing the interaction? Share images or sketches of visual inspiration or reference, including the storyboard/layout.

As for the brainstorming process, Junior, Claire, and I met and discussed what kinds of daily life situations could be replicated in an interesting way. As much as we knew that we had to replicate some kind of daily life situation, we wanted to use the full potential of virtual reality. We talked about how “sight” is an essential part of life and that having bad eyesight can sometimes be a barrier when wanting to examine everything carefully. We decided that experimenting with “sight” would be our main theme. We then discussed how the user could be given a task to find and try on different glasses in a space. That way, the user could interact with the objects in the space (the glasses) and go through the different experiences. We also discussed where we wanted the setting to be. The candidate ideas were the living room (common room) of a share house, the user’s own room, and the bathroom. We thought the common bathroom would be the most realistic, since people can leave their glasses behind in a common bathroom, forget them, and have to come back to find them.

We created the bathroom setting using various assets from the Unity Asset Store. While there is a wide variety of assets in the store, the ones that looked the most sophisticated and appealing were not free. Therefore, we had to scavenge through the free assets to build the bathroom. We gathered different asset packages and played around with placing the bathroom-related objects. For example, we tried different iterations of the sink to see what actually fit the space, mood, and interior design. The first sink we placed seemed too bland, and after some experimentation we decided to settle on the current one.

3. Reflection/Evaluation: This should discuss your expectations and goals in the context of what you felt was achieved with the finished piece.

Originally, we wanted to add more components to the piece. However, none of us had experience with Unity before this class, so there were many things we had to learn. Our expectation was that each pair of glasses would have its own filter, and we would place a different script on each pair to produce a different effect. Our original idea was to have one pair zoom in, another zoom out, another give double-layered vision, and the real pair give corrected vision. However, figuring out how to create all of these different kinds of vision took so much of our time that we needed a plan B in case we could not debug the scripts we had written.

In the end, we settled on the idea of having each pair of glasses switch scenes. We changed our main theme to “experiencing the colors,” so we decided to blur the vision a little and have the user focus more on the color change. When one pair of glasses is selected, the user “puts on” those glasses and sees the objects in red. Then, if the user “puts on” another pair, the user sees the objects in green.

Red Scene

Blue Scene

Green Scene

We were able to achieve the basics of what we set out to do, in the sense that we wanted to place the user in a bathroom setting, have them try on the different glasses, and let them experience a different vision with each pair. Although the resulting visions differed a little from the original idea, the way we recreated the effect in a different manner was a result of our good teamwork.

I would say that the most difficult task in this project was selecting the glasses and placing the script on the object that would jump to the different scenes. Because we had the idea of each pair of glasses having its own filter, we needed to create different iterations of the same setting in order to achieve that effect. Moreover, our group only had three (two) members, so we struggled with knowledge and implementation compared to the groups that had four members.

Midterm project: final documentation

Recycling @NYUAD is a project that strives to bring awareness to the environmental and waste issues at NYU Abu Dhabi. Many of our students seem to lack the knowledge to recycle properly and frequently, so we want to use this project to address that problem.

The look & feel:

the environment that I made!
look from another angle

One important decision we made in terms of the campus was to focus on one portion of it – not too much space, but enough to move around and see trash littered about, while still exuding the sense of a closed space so players don’t wander off and stay within the designated area. Our set area was the space right outside D2, where there are grass patches, with A6 and The Arts Center on the side.

My job was to create the actual space. I took references by taking photos in the real space and also looking at Google Maps to see what the player would actually see if they were standing there.

Initially I tried looking for prefabs that could be used for this project, but because our campus is very unique in design, it was difficult to find anything similar. So I started building the structures from scratch in Unity using 3D shapes. The key was to layer them together to mimic the buildings and add elements for detail.

On my part, I’m pretty satisfied with how the environment turned out. It was my first time building assets from scratch and it took a lot of trial and error, but I enjoyed the process and liked the result. I also spent a while experimenting with different skyboxes and eventually settled on a bright, cloudy sky, which fit the environment quite well. The main things I learned in building the 3D space were 1) using the right colors, 2) getting the relative sizes of the buildings correct, and 3) adding small but important details that make the space look more realistic and accurate.

After I completed all the buildings and the environment was finished, I passed it on to Ju Hee, who incorporated prefabs of objects that populate the space, such as chairs, tables, and trash.

For the interaction, Simran and Yufei worked on how the player picks up the trash. The pieces of trash glow yellow when the player is nearby, indicating that they can be picked up and then dumped in the recycling bin. One sound plays if the trash is recycled properly and another if it’s not.
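Simran and Yufei’s actual script isn’t shown in this post, but the proximity glow could be sketched along these lines. The distance threshold, emission color, and the assumption that each trash piece uses a material with an emission channel are all illustrative choices, not details from our project.

```csharp
using UnityEngine;

// Rough sketch of the proximity glow on trash pieces: tint the piece
// yellow via its material's emission when the player is close enough.
public class TrashGlow : MonoBehaviour
{
    public Transform player;        // assigned in the Inspector
    public float glowDistance = 2f; // illustrative pickup range

    private Renderer rend;

    private void Start()
    {
        rend = GetComponent<Renderer>();
        // The Standard shader needs the emission keyword enabled
        // before _EmissionColor has any visible effect.
        rend.material.EnableKeyword("_EMISSION");
    }

    private void Update()
    {
        bool nearby =
            Vector3.Distance(player.position, transform.position) < glowDistance;
        // Glow yellow when the player can pick the trash up, dark otherwise.
        rend.material.SetColor("_EmissionColor",
            nearby ? Color.yellow : Color.black);
    }
}
```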

In reflection, if we had more time I think we could have made the interaction more sophisticated – for instance, making the trash come to life and react angrily if the player ignores it and doesn’t pick it up to recycle it. It could shake and make a roaring sound until the player actually picks it up. I think this would have made the experience more engaging and interesting. Making the trash come more alive would also take advantage of VR as a medium, since it’s not bound by how things work in the real world.

We also had issues re-styling the environment for the interaction as the space itself was pretty big. Looking back, I think we could have spent more time trying to adjust the size and scale more.

I would also work more on the space, decorate the buildings a little more, and maybe even add animations of people sitting around and chatting to each other near the Dining Hall. All of these contributions would add to the experience when the player is in the space, making it engaging and immersive.

After user-testing & presentation in class:

I was very delighted to find that a lot of my classmates found our project very fun to play. To our surprise, people started throwing the trash around to see if they could throw it into the trash can from afar. It was interesting to see how our supposed weakness of having a huge space contributed to the fun element. Moving on, we could make use of this feature – if the player throws the trash from afar and fails to get it into the trash can, it comes flying back and doubles in number! To add an educational element, we could also have words pop up onscreen, giving numbers and facts about waste control at NYU Abu Dhabi and how different kinds of trash should be properly recycled.

I was also pleased that people found the environment very familiar. I spent a lot of time building The Arts Center, The Dining Hall, and A6, as well as the grass patches, from scratch, so it was very rewarding to hear my friends tell me that they could immediately recognize the space.

Documentation – Don’t feed the plants !

Original Idea

Our idea was to create an environment that places the player inside a greenhouse, surrounded by plants and the sound of the rainforest, with a planting station in front of them with several tools to choose from. The main items to interact with were the pot, seed, and watering can. The twist would be what would grow from this seed. And what was awaiting their discovery behind them.

Our “everyday action” that we played around with was gardening, but in this world what was planted was the seed of a carnivorous plant. And if the player looks behind them, they realize that this plant is one that must be contained in a cage for the safety of the people in this world.

Storyboard:

The player would find themselves in front of a planting platform with a pot, seed, and watering can within reach. The player would then pick up the seed, place it in the pot, and water it. As they water it, the seed shrinks until it disappears, and a carnivorous plant grows in its place and starts trying to attack the player.

When the player starts looking around the environment, they notice a large butterfly flying above them outside; the insect’s shadow on the ground indicates to the player that there is something overhead.

Different perspectives

Assets

While creating the environment, we found ourselves with the luxury of a variety of prefabs to choose from:

The main asset we used was a $1 greenhouse (Green House 3D Model) furnished with a table, several pots, and a hanging lamp. At first the greenhouse was, well, green, and then we changed the color to white to fit our image of a Victorian-inspired greenhouse. We separated the items (pots and table) to make it easier to choose which of them we wanted placed inside. When designing the inside of the greenhouse, we placed pots around the player; some were empty and some housed plants taken from the enemy plants package, which came with animations.

The other asset we used in abundance was UniTrees 3, which included detailed fantasy trees, plants, and bushes.

What was learned?

When creating the outside environment, we found an easier method for placing many identical items: adding the trees and plants to the terrain as terrain objects and letting the brush place them randomly as part of the terrain. We also learned something when resizing the butterfly prefab: whenever it played its animation, the butterfly would shrink back to its original size. When going through the prefab’s tabs, you must resize not only the object but its animation as well.

Interaction

Main interaction

Limitations and Reflection

With the time limitation, we found it difficult to implement several pots and seeds for the user to interact with, so we ended up settling on one pair.

Another difficulty we faced was that when items were grabbed, the mesh became nonexistent, and items such as the watering can would pass through the pot. We tried multiple settings and options, but as we kept trying it got worse, and eventually the hands in the game deleted completely. Max had to create a new project and re-import SteamVR for it to go back to normal.

Reflecting on the final scene, it was quite satisfying to come pretty close to our original idea. Including the butterfly overhead gave the player a sense of being miniature compared to what lies outside the greenhouse, and also a sense of the dangerous outdoors.

The final thing we added was the audio. Extracting audio files from freesound.org, we found several sounds of the forest, including birds, wildlife, and wind. Including that file in our scene immerses the player in the game and gives them feedback through another of their senses. We included three other sound files as well: one for the caged plant located behind the player, set as 3D sound, which means that if the sound is coming from the right, the player hears it from the right earphone and vice versa. The plant that the player grows also plays an audio file of low growling to add a scary factor, and the watering can has a water audio file attached to it as well.

What we hoped to include

Having such an environment allows us the freedom of expression, and the freedom to add whatever our minds imagine. We initially hoped we could give the player a choice of seeds to plant, each growing a different plant. For the main plant’s interaction, we could have made its animation actually feel like it attacked the player, thus ending the game. But if the game ended there, the player might not have time to fully enjoy the 360° view of the environment. Allowing the player to fight back against the plant would immerse them further in the environment, giving them a way to react to what is happening. Although the scene we created is pretty detailed, having animated animals outside and a running river would further give life to the location, but it was quite difficult to find a well-suited animated animal to include.

Project 1 Documentation

What I have learned from this project

This was my first time building an environment in Unity. My biggest issue was reasoning about the three-dimensional space: it was really hard to see where exactly many of the objects were placed. Because I was so used to the two-dimensional design work I had done before, making a three-dimensional space was difficult, and I could not make good use of the total space. Instead, all the objects ended up on one side of the environment. However, I learned how to think about the space in a three-dimensional way, which I think will help me in my future projects: I will be able to plan out the space and place things better.

Limitations and Reflection

The biggest issue I had with the project was not considering the viewer. When I designed the place, I was thinking of the space more as a picture of a place – as a painting I was drawing rather than a physical space I was building. Because of this, when I was planning everything I did not consider the viewer or the viewer’s experience, and the viewer rarely had any interaction with the space as a result. This was also the biggest reason I had issues with the camera.

I did have technical issues with setting up the camera, but I was also lost when thinking about where to put it. There was only one place to put the camera, which was in front of the scene, but once I did, there wasn’t any interactive factor to the space.

What I want to work on my next project

What I want to work on in my next project is building a good base for my space and using all of the space available. I want to build a solid plane and walls around the space so the viewer can have an easier time understanding the place. I also want to try building different buildings on the plane.

I would also like to add more elements to the environment. I want to make sure the user can turn around and see different objects around them, instead of there being nothing once they turn around.

Here is the link to my presentation about this project

Google Cardboard VR : Invasion

I decided to try Invasion for my Google Cardboard experience. It was a very immersive experience, and there were several factors that I thought were really significant.

As soon as I started watching the video, I was kind of confused. Nothing was really happening on the screen and I was looking around. What really helped me figure out what was happening was sound: I heard a sound and looked around to see where it was coming from. Without the sound, it would have been hard to figure out what exactly was happening. When the alien spaceship showed up in the sky, the surrounding sound made me look around and look up to see what exactly was happening. It made me realize that sound is as important as the environment itself when it comes to the viewer’s experience.

Moreover, what I have realized is that when I build an environment, I do not use the whole space; I usually use half of it and leave the space behind the viewer empty. What I found interesting is that this VR experience lets the viewer explore and move around a lot. It was the use of space that made the viewer – myself, for example – look around and fully experience it.

There were also some parts where the characters approach the user, which I personally thought was very adorable. The interaction with the characters made me feel like I was actually there with them instead of watching them from far away. The eye contact and the noises these characters make were very significant.

Project 2 Documentation

1. Project Description:​ 

Project name: “Zenboo”

Zenboo is a relaxing space that demonstrates the state of Zen by giving the user calming watering, growing, and cutting activities with bamboo, offering them a chance to interact with Mother Nature in a fun way. The user is surrounded by mountains, floating rocks, and bamboo clumps, and will find tools nearby that hint at activities to initiate. In Zenboo, the user can water the bamboo with a watering can and watch it grow in a unique way; moreover, they can also play with the newly grown bamboo. Picking up the sickle next to the watering can, the user can wave it at the bamboo clump and make pieces of bamboo disappear. And if the user waters the bamboo too much and makes it grow too fast, gravity takes over: some pieces of bamboo fall to the ground and disappear by themselves after a few seconds!

Space view
User view

2. Process and Implementation:​ 

This is how we started:


How we built the scene and the ideation behind the design choices:

We wanted to create a scene/space that people feel relaxed in, without a lot of objects or movement to distract them from the sense of “Zen.” Therefore, we came up with the idea of a giant mountainous background in sunset mode with the user placed in the middle. Also, to amplify the interaction between the user and the growing plants, we put rocks around where the user stands to make sure they won’t have to move too much to interact with the environment.

The reason for this inspiration is that the mountains are very mystical and calming, which helps contribute to the relaxation aspect of Zenboo.

The design of the circle of rocks came from user feedback: the rocks could be larger and float around the user rather than resting on the ground as initially planned. This turned out to have a pretty cool effect and helped create the Zen atmosphere.

Another design decision that has been changed during the project development was the location of the bamboo. Since space is a limitation, having the clump of bamboo in front of the user all spaced out would have been problematic, or perhaps not as intuitive that the user had to go over to it and water it. Instead, we decided to place the bamboo in a semi-circle close to and around the user. This way, the user does not have to walk very much in order to water all of the bamboo.

What would be the “everyday” thing in this world?

The simple actions/interactions – pick up, drop, throw, water, and cut – are ones everybody already knows how to do the first time they see the scene. From our existing knowledge, we have the perception that when you water a plant it grows, and when you wave a sickle it cuts the plant. Therefore, not much education on how the project works is needed, and it’s easy for any user to pick up.

Steps to implement the interactions (highlighting some key interactions I worked on):

1: Particle system – water:

I worked on making the particle system turn on and off when the object is rotated past a certain angle – when the watering can faces downwards, the water particle system turns on, and when it’s in its normal position, the particle system is off and the water effect is not shown.

I achieved this by using transform.eulerAngles and reading the Z angle of the watering can object. We have a boolean-style function called “IsPouring”: I grabbed the particle system under it and added code so that if the angle is beyond the range, the system stops; otherwise, the system plays. We call “IsPouring” inside “void Update” to make sure it runs all the time.

There was a small problem when I tested the code – the particle system was always on while the scene was playing. I assumed it had become disconnected from its parent object, so I added a “print” statement to the IsPouring function to check whether it was connected to the watering can while the code was running. It turned out that nothing was printed to the console log, so I dragged the particle system onto the watering can to make sure it appeared in the component section (even though the particle system was already a child of the watering can), and then it worked.
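The pouring check described above can be sketched roughly as follows. The exact angle range is illustrative (ours was tuned by hand), and the structure is a reconstruction of the approach, not our script verbatim.

```csharp
using UnityEngine;

// Sketch of the watering-can pouring check: the water particle system
// (a child of the can) plays only while the can is tipped downwards.
public class WateringCan : MonoBehaviour
{
    private ParticleSystem water;

    private void Start()
    {
        // Find the water particle system attached under the watering can.
        water = GetComponentInChildren<ParticleSystem>();
    }

    private void Update()
    {
        // Run the check every frame, as described in the write-up.
        IsPouring();
    }

    private void IsPouring()
    {
        float z = transform.eulerAngles.z;
        // Illustrative range: treat roughly 60°–300° of Z rotation as "tipped".
        bool tipped = z > 60f && z < 300f;

        if (tipped && !water.isPlaying)
            water.Play();   // can faces downwards: pour
        else if (!tipped && water.isPlaying)
            water.Stop();   // can is upright: stop pouring
    }
}
```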



2: Particle system – mist:

The other particle system I worked on is the mist effect, which is only triggered when the sickle is cutting/colliding with the bamboo.

At first I thought about attaching the particle system (the mist) to the bamboo script, so that whenever a collision between the bamboo and the sickle was detected (the sickle hitting the bamboo), with the result of destroying a GameObject (a piece of bamboo), the mist particle system would be triggered to play. However, this design had two significant difficulties. One is that OnParticleCollision is really hard to reposition in the “Instantiate” call so that the mist effect shows only on the specific piece of bamboo hit by the sickle (since a lot of bamboo grows out of the original bamboo). The other is that because the GameObject is destroyed at the same moment its child effect is triggered, the effect is not shown at all: the moment it’s triggered, its parent dies, so the mist has nothing to play on.

Taking these conditions into consideration, I created a new mist script just for the sickle, separate from the bamboo function, so we wouldn’t have to reposition the particle system for each specific piece of bamboo. At first I tried to detect the collision of the particle system itself with “OnParticleCollision”; however, it turned out to be very hard to detect accurately, since there are countless small particles and they collide with almost everything. So I switched to detecting the collision of the sickle – once the sickle’s collider hits a game object, the particle system (mist) attached to the sickle is triggered.
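The sickle-side version described above could look something like this sketch. The contact-point repositioning is an illustrative detail; the key idea from the text is simply that the mist lives on the sickle, not on each bamboo piece.

```csharp
using UnityEngine;

// Sketch of the sickle-side mist trigger: because the mist particle
// system is a child of the sickle, it never needs to be repositioned
// per bamboo piece, and it survives the bamboo being destroyed.
public class SickleMist : MonoBehaviour
{
    public ParticleSystem mist; // assigned in the Inspector; child of the sickle

    private void OnCollisionEnter(Collision collision)
    {
        // Puff the mist at the point of contact whenever the sickle
        // hits a game object.
        mist.transform.position = collision.contacts[0].point;
        mist.Play();
    }
}
```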

3: Bamboo grows when hit by the particle system

We finished this part by using OnParticleCollision to detect the collision between the bamboo and the particle system attached to the watering can. In the beginning we decided to add the particle-collision detection to the BambooOG script, because the growing function is in the same script, so it would be easier to call. However, even when put on different layers and set to “only collide with bamboo,” the particle system would literally collide with everything. So we instead wrote the particle-collision detection code only in the watering can’s script and called the bamboo grow function from the other script, to make sure the two parts didn’t get mixed up with each other. Basically, in the particle system’s script we say that once it collides with bamboo, it triggers the grow function from the BambooOG script – and then it worked. The code we used is shown below:
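Our original code was shared as screenshots; a rough reconstruction of the split described above might look like the following. The `Grow` method name and the `BambooOG` stub are stand-ins for whatever the real team script exposes.

```csharp
using UnityEngine;

// Minimal stand-in for the team's BambooOG script; the real one
// contains the actual growth logic (instantiating new bamboo, etc.).
public class BambooOG : MonoBehaviour
{
    public void Grow()
    {
        // Hypothetical: grow a new piece of bamboo here.
    }
}

// Attached to the watering can's water particle system. Only objects
// that carry a BambooOG component react to the water; everything else
// the particles hit is ignored, which sidesteps the layer problem.
public class WaterCollision : MonoBehaviour
{
    private void OnParticleCollision(GameObject other)
    {
        BambooOG bamboo = other.GetComponent<BambooOG>();
        if (bamboo != null)
        {
            bamboo.Grow(); // call into the separate bamboo script
        }
    }
}
```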

4: Floating effect of rocks:

In order to improve the user experience and create the sense of ZEN, I added the floating effect:
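The floating effect can be sketched as a gentle sine-wave bob around each rock’s starting height. The amplitude and speed values are illustrative; this is a reconstruction of the idea, not the exact script.

```csharp
using UnityEngine;

// Sketch of the floating-rock effect: each rock drifts up and down
// around its starting position to create the Zen atmosphere.
public class FloatingRock : MonoBehaviour
{
    public float amplitude = 0.3f; // how far the rock drifts (illustrative)
    public float speed = 1f;       // how fast it bobs (illustrative)

    private Vector3 startPosition;

    private void Start()
    {
        startPosition = transform.position;
    }

    private void Update()
    {
        // Offset the rock vertically with a smooth sine wave.
        float offset = Mathf.Sin(Time.time * speed) * amplitude;
        transform.position = startPosition + Vector3.up * offset;
    }
}
```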


3. Reflection/Evaluation: This should discuss your expectations and goals in the context of what you felt was achieved with the finished piece.

I think we as a team successfully achieved our initial goals and expectations for this project: we have a beautiful Zen environment, and we built the growing and cutting bamboo interactions. Surprisingly, while coding the bamboo functions we found a cool way to keep each bamboo piece at the same X and Z position, so even if the user plays with the bamboo, lifts it up, or touches it, it comes back to the same position. We also achieved our initial idea of limiting the user’s movement by putting the semi-circle of bamboo in front of the user. For the coding part I was particularly involved in, I met my initial expectations: making the particle systems, triggering the bamboo grow function when the bamboo collides with the water particle system, creating the floating rocks effect, and creating the mist effect when the sickle collides with other game objects.
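The snap-back behavior mentioned above could be implemented along these lines: pin each bamboo piece’s X and Z while leaving Y free. This is a sketch of the idea, not our exact script, and the instant snap is illustrative.

```csharp
using UnityEngine;

// Sketch of keeping a bamboo piece at its original X and Z position:
// after physics runs each frame, the piece is pulled back onto its
// anchor column, so lifting or knocking it only moves it temporarily.
public class BambooAnchor : MonoBehaviour
{
    private float anchorX, anchorZ;

    private void Start()
    {
        anchorX = transform.position.x;
        anchorZ = transform.position.z;
    }

    private void LateUpdate()
    {
        // Y is left free so gravity and growth still work normally.
        Vector3 p = transform.position;
        transform.position = new Vector3(anchorX, p.y, anchorZ);
    }
}
```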

However, there are some things that could be reconsidered/improved in this project. In our initial design, the circle of rocks was meant to limit the user’s movement, but in the actual VR scene those rocks just become part of the distant background and don’t really limit the user’s movement. What’s more, I think our project should have an ending – what we have now is just endless watering and cutting of bamboo, and it should stop or show something else at some point. The parts I could potentially work more on include making the collision detection more precise – for example, having the mist particle system trigger only when the sickle specifically collides with bamboo, not with any game object.

Project 2 Documentation – Zenboo

Project Description

Zenboo creates a space for the user to relax, placing the user in the middle of the mountains and free to play with bamboo. The user is able to water the bamboo using a watering can and watch it grow in a unique way, use the watering can to bounce around different parts of the bamboo, and make parts of the bamboo disappear with a sickle. Surrounding the user is a mountain range, a circle of floating rocks, and a tree stump to place the watering can and sickle on. Rather than being a game or a narrative, Zenboo’s purpose is to make the user feel relaxed and playful.


Process and Implementation

I was mainly responsible for creating the physical environment the user is placed in. This involved a lot of playing around with different aspects as well as going through cycles of feedback from the rest of the team. When we were storyboarding, we had a general idea of what the environment would look like:

The user would stand within a circle of rocks (to indicate a sort of barrier that the user would have to stay within), which was surrounded by a circle of mountains. A group of bamboo would be directly in front of the user, with a tree stump containing the watering can and sickle beside them.

When I set out to create the environment, I initially stuck to this design. I created mountains using the terrain tool, using the Yellow Mountains as inspiration:

The reason for this inspiration is that the mountains are very mystical and calming, which helps contribute to the relaxation aspect of Zenboo. However, placing the user down in the middle of these mountains felt a bit overbearing, so I created a platform mountain for the user to stand on. This way, it feels like the user is more within the mountains rather than standing below and looking up at them, adding a more mystical effect.

The circle of rocks was another thing implemented into the environment. When receiving feedback, however, it was suggested that the rocks be larger and floating around the user rather than the initial plan to have them resting on the ground. This turned out to have a pretty cool effect, adding another layer of mystique.

One thing that changed from our initial plan was the location of the bamboo. Since space is a limitation, having the clump of bamboo in front of the user all spaced out would have been problematic, or perhaps not as intuitive that the user had to go over to it and water it. Instead, we decided to place the bamboo in a semi-circle close to and around the user. This way, the user does not have to walk very much in order to water all of the bamboo. The final environment looks like this from afar:

Reflection and Evaluation

I think we successfully created an environment that is peaceful for the user to be in. The surroundings are green and full of nature, the background music is calming, and the main movements the user can make to interact with the environment, pouring and cutting, require gentle motions. Something unexpected was the extra interaction the user can do outside of our initial planning, which slightly transforms the space. For instance, the way the bamboo grows is not how bamboo normally grows, and you can play with the bamboo pieces by bouncing them up and down with the watering can, kind of like a volleyball. I’ve found that this is my favorite activity when I’m testing out the space, which is perhaps more playful than it is relaxing. However, I don’t think this is a negative thing; I think the added playfulness fits nicely. But if we did want to keep Zenboo a strictly relaxing space, then it would perhaps have been constructed differently. The bamboo could float away gently, for example. The sickle could be more low poly. More allowed movement, like a big open space the user could walk around in, would also perhaps be more relaxing.

Project #2: Development Blog

For this project, Junior, Claire, and I decided to play around with the everyday theme of having to find your glasses. Coincidentally, all three of us have bad eyesight, so we either wear glasses or contact lenses. We shared our experiences of struggling to find our glasses, especially in the mornings, as we tend to forget where we placed them the night before.

Rough Draft of the Setting

We decided to choose the bathroom as our location and include a bathtub, a toilet, a sink, a bathroom shelf, a towel hanger, and towels. We had originally planned to place different objects, but after placing some objects, figuring out each of their positions, and removing some according to the overall balance, we decided to settle on the current bathroom setting. We also added the hanging light as our light source.

Getting the correct Position, Rotation, and Scale

Claire and I were in charge of creating the bathroom setting, matching the virtual space to the actual available space in the classroom, placing the four glasses in different places, and making sure that the user felt that he/she was in a bathroom trying to find his/her glasses.

Top View of Bathroom Setting
Side View of Bathroom Setting
Different Side View of Bathroom Setting

For our next steps, we need to add the different functions of the four glasses in order to make the scene interactive. Each of the four glasses will have a different function, whether that is zooming in, zooming out, a colored tint, or corrected vision. Although we understand that blurry vision will cause nausea for the user and may not be suitable for long use, we will play around with the degree of blurriness to see how we can make it work.
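The plan above can be sketched as a simple mapping from each pair of glasses to a color transform. This is an illustrative Python sketch of the logic only, not our actual Unity code; the function names and the filter set are hypothetical.

```python
# Illustrative sketch: each pair of glasses maps to a per-pixel color
# transform applied to the user's view. (Hypothetical names/values.)

def red_tint(rgb):
    """Red-tinted lens: keep only the red channel."""
    r, g, b = rgb
    return (r, 0, 0)

def corrected(rgb):
    """Corrected vision: pass colors through unchanged."""
    return rgb

GLASSES_FILTERS = {
    "red": red_tint,
    "corrected": corrected,
}

def apply_glasses(pixels, glasses_name):
    """Apply the selected glasses' filter to every pixel of a frame."""
    f = GLASSES_FILTERS[glasses_name]
    return [f(p) for p in pixels]
```

In the actual scene, the equivalent effect would be attached to the headset camera when a pair of glasses is selected with the trigger.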

Demo Video

Documentation [Zenboo]

Zenboo was based on the concept of a Zen environment with a simple yet endlessly repeatable action in place. Originally, the plan was to have flowers that could be endlessly grown, but then the idea of bamboo came up. Since, in reality, bamboo grows incredibly quickly and is aesthetically attractive, we decided on this vegetation instead (fig.1). A positive addition was that bamboo inherently already had some connection to the idea of Zen. We wanted to place the user in a comforting environment that presented them clearly with a task they could continuously do in order to relax from the daily stresses of everyday life. All the artistic choices behind the environment were directed towards this comforting attitude. The interactions were also kept simple and obvious: picking up the two objects and using them. A watering can could be picked up and used to pour water on the bamboo, which would make it grow, and the sickle could be used to chop the grown bamboo and make it disappear (fig.2).

Knowing that there was a lot to be done, we split the tasks evenly into two groups: scripting and designing. One individual was responsible for designing the environment, another was in charge of the music and sound effects, and two were responsible for making all the desired actions feasible. I was responsible for scripting actions and did most of my testing in a separate scene from where the environment was being designed. Since the actions had to be explainable without any description, we made sure to use everyday objects and code recognizable physics for them. This meant that the watering can could be lifted up and that water would only appear when the can was tilted past a certain angle, or that bamboo would grow upwards when water interacted with it. The testing area was modeled around what the final scene would encompass for the user. The tools were placed near the spawn point of the user and could be used on the bamboo close by (fig.3-4).
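The two behaviors just described can be condensed into a short sketch. This is a minimal Python illustration of the logic, not the Unity scripts themselves; the threshold angle and growth rate are assumed values.

```python
# Illustrative sketch of the scripted behaviors (hypothetical values):
# water only flows past a tilt threshold, and bamboo grows only while
# water is hitting it.

POUR_ANGLE_DEG = 60.0  # assumed tilt at which water starts to pour

def is_pouring(tilt_deg):
    """Water particles appear only once the can is tilted far enough."""
    return tilt_deg >= POUR_ANGLE_DEG

def grow(height, watered, growth_per_tick=0.1):
    """Bamboo gains height for each tick it spends under the water."""
    return height + growth_per_tick if watered else height
```

In the engine, the tilt check would read the can's rotation each frame, and the growth step would run whenever a water particle collides with a bamboo shoot.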

The behaviors of the objects were expected because they were similar to reality, and this made them seem like everyday actions. This meant that tools could be lifted, thrown, and dropped, and would act the correct way when coming into contact with other objects or when being poured. The only area where an unexpected result appears is when bamboo grows (fig.5). It was discovered, during testing, that bamboo balancing on itself was a lot more attractive and brought more comfort to the user, similar to stacking stones (fig.6), so it replaced the regular straight growth of bamboo shoots. Having the segments of bamboo fall to the ground after they reached a certain height was also a feature of this balancing. This brought new possibilities to a familiar behavior and also prevented clutter by having the segments disappear after a moment. After all the objects were designed and equipped with their respective behaviors, they were made into prefabs and placed in the final scene at similar coordinates (see Cassie's blog). The environment was designed to have warm sunset lighting, comforting wind, grass, and hills in order to bring ease to the user. A small oddity observed was the floating rocks; though these objects do not follow our reality's physics, they look incredibly mesmerizing and thus were kept in the environment. Generally, having a few quirks that brought personality to the area was expected to give the user more reason to want to relax in this world.
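The segment lifecycle described above, detaching past a height limit and then vanishing to prevent clutter, can be sketched as a small state step. This is an illustrative Python sketch with assumed values, not the actual scene scripts.

```python
# Hypothetical sketch of a bamboo segment's lifecycle: a growing segment
# detaches once the stack passes a height limit, and a fallen segment
# despawns after a short delay so the scene never clutters.

MAX_STACK_HEIGHT = 3.0   # assumed detach height (world units)
DESPAWN_DELAY = 4.0      # assumed seconds before a fallen piece vanishes

def step_segment(state, dt):
    """Advance one segment's (kind, value) state by dt seconds.

    Returns the next state, or None once the segment is removed.
    """
    kind, value = state
    if kind == "growing":
        if value >= MAX_STACK_HEIGHT:
            return ("fallen", 0.0)   # detach; value becomes a timer
        return ("growing", value)
    if kind == "fallen":
        value += dt
        if value >= DESPAWN_DELAY:
            return None              # remove from the scene
        return ("fallen", value)
    return state
```

Driving this step once per frame with the frame's delta time would reproduce the fall-then-disappear behavior the playtesters saw.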

Our expected world was a place a user could freely spend their time in with the goal of alleviating stress. This was achieved because the user had a simple task that could be endlessly continued and surroundings that promoted comfort. With more experience and time, the world could eventually be expanded. There could be more tasks for the user to indulge in and more scenery that was intriguing to look at and enjoy. Expanding is always a possibility to entertain the user, but keeping them in a roughly enclosed area was a solution too. Keeping them enclosed, with only a few tasks to focus on, lets them possibly enter a form of meditation, which is an excellent stress-relieving method. Better design of the current scene could have involved matching asset styles and reconsidering certain behaviors. Making the fallen bamboo interactable by hand, and making it so that tools were always held the correct way, would have been logical. Better matching the tools' materials with the design style of the environment would have been more attractive. After showing the project in class, I also noticed that some of the music could have been reworked to be less harsh, and the water system needed some tweaking. Simply put, there were a few factors that made the objects in the project seem otherworldly and made it harder for the player to immerse themselves.

Though there were several factors that could be worked on, there was also a sure sign that the project was a success. This is evident in three behaviors: players would place the controller on the stump after they were done, players tried to move out of the way of falling bamboo, and players continued to water the bamboo endlessly without tiring. This shows that players were able to connect their reality with the world we created to such an extent that the lines between the two existences became blurred.