Yoga Sense

http://creative.colorado.edu/~keli5466/YogaSense/yogasense.html

About:

“Yoga Sense” is a responsive electronic yoga mat that detects where your weight is placed and visually shows how much pressure/force is being applied through your hands and feet, giving the user feedback on how to correct certain poses.

Materials

  • Conductive Foam
  • Copper Wire

Hardware

  • Breadboard
  • Arduino

Software

  • Processing
  • Arduino

Start:

[Images: DONNA, breadset, conductive, yogam]

Stage 2:

[Images: demo, IMG_6452, IMG_6447, IMG_6449]

Roadblock:

[Image: copperin]

Next Steps:
[Image: expo]

After some feedback:

[Images: IMG_7017, IMG_7022, IMG_6995]

Debugging:
[Image: breadback]

Tutorials

  • Physical Pixel
  • Serial Event
  • Multi Serial Mega
  • Virtual Color Mixer
  • Serial Call Response


Goal Detection

What I wanted to do with my final project was to solve the problem of goal-line detection. In sports such as hockey or soccer, a “goal” only counts when the ball or puck makes it all the way into the goal, i.e. across the goal line. There is a big issue right now with goals that are made but not counted, because after the ball or puck crosses the line, the goalkeeper knocks it back out.

I tried several iterations of this, but eventually settled on using OpenCV with a webcam as my solution. The nice part about this is that the webcam is cheap, and it is easy to adjust the code to look for the specific ball or puck we are trying to detect. Essentially, you set up the webcam underneath the goal line and watch for when the ball crosses that line. Here are some photos of the setup I used during my demo.

[Images: IMG_2353, IMG_2354, IMG_2352]

You can see that there is a piece of plexiglass on top of which the goal is sitting; it is through this that the webcam looks. This setup will work great for hockey: all you need to do is put very high-quality ice under the goal, and because the code looks for a part of the ball/puck, the camera will still be able to pick it up even if the ice gets scratched. The issue comes in with sports such as soccer, because it is difficult to make a grass field transparent.

Code:

Here’s a screen shot of the code that I used for my project.

[Image: final code screenshot]

I wrote it all in Python, using OpenCV for the image detection. This was the easiest way to do the image recognition, and while it took me about 5-6 hours to get acquainted with the library, once I had learned it, it was a simple matter to start looking for the ball. The resource that I used to learn OpenCV is at this link: http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_tutorials.html
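Since the post only shows a screenshot of the actual code, here is a minimal sketch of the same idea in Python/OpenCV: threshold the frame on the ball's color, find its largest contour, and report a goal once the ball's center passes the vertical center line of the frame. The HSV bounds and the "goal is to the right of the line" convention are illustrative assumptions, not the project's actual values.

import cv2
import numpy as np

LOWER = np.array([29, 86, 6])        # assumed HSV bounds for a green ball
UPPER = np.array([64, 255, 255])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    goal_line_x = frame.shape[1] // 2            # camera centered on the goal line
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)        # keep only ball-colored pixels
    # [-2] works with both the OpenCV 3 and OpenCV 4 return conventions
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        c = max(contours, key=cv2.contourArea)
        (x, y), radius = cv2.minEnclosingCircle(c)
        if radius > 10 and x > goal_line_x:      # ball center is past the line
            cv2.putText(frame, "GOAL!", (50, 80),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 3)
    cv2.line(frame, (goal_line_x, 0), (goal_line_x, frame.shape[0]), (255, 0, 0), 2)
    cv2.imshow("goal detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()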

Hardware:

The only hardware needed to make this work is a simple webcam. The one that I got was $20 at Best Buy. This is a pretty inexpensive implementation, and it would be very easy to set up for pee-wee sports leagues and school teams. The rest of the setup, with the plexiglass and 2x4s, was simply for the demo and is not necessary for goal detection.

Final product:

In the end, I managed to get everything working with the camera. Here are a few screenshots from my computer that show it working:

[Images: Final 1, Final 2]

You can see in these screenshots that when the ball is to the left of the line in the center of the screen, nothing displays; however, once the ball crosses that line, the screen pops up and says “GOAL!” So all you have to do is line up the center of the camera’s view with the goal line, and you’re good to go.

Future productions:

In the future, I would like to implement some code that will track the ball’s path and determine whether the ball crossed the line even if it is obstructed by the goalkeeper once it crosses. I would also like to implement a more hardware-based method of detection as a second source of confirmation in case the camera fails; this method will likely involve using an RF field to detect the goal.

Thanks for reading through!


GestFi

1. Introduction

The way we interact with our computers has evolved along with the computers themselves. A decade ago we mostly used desktop computers that we had to sit in front of and control with a mouse and keyboard. Then came more portable computers, laptops, which can be used mouse-free with a touchpad. Smartphones then introduced a new way of interaction: touch. This new way of interaction lets us get rid of the mouse and keyboard. But now we live in the era of smartwatches, virtual reality, and the Internet of Things (IoT), and touch-based interaction is not necessarily the best way to interact with these new technologies. Consider smartwatches: even though having a powerful computer on your wrist is amazing, their small screens make them hard to interact with. Or consider a Google Cardboard, where you have to put your phone inside it, which makes the phone inaccessible. One possible solution is wearing a glove with a set of sensors that lets you control your device with gestures. Even though this approach can solve the problem to some extent, wearing a glove for daily activities is not something all users are comfortable with. So what can be the next technology for interacting with these kinds of devices? The answer is a device-free technology.

GestFi is a device-free gesture recognition system that lets you interact with your devices by performing gestures. It exploits the way WiFi signals are affected by a user’s body movements to distinguish among different gestures. For this project we used a pair of off-the-shelf WiFi adapter cards to build a system that can distinguish among three different gestures in experimental environments.

The rest of this report is structured as follows: in section 2 we review related work on device-free gesture recognition systems; in section 3 we describe our implementation; in section 4 we describe our system evaluation; in section 5 we discuss and conclude; in appendix A we mention our other attempts that did not work properly; and in appendix B you can find the links to our code.

2. Related Work

WiSee is one of the initial works that used wireless signals to recognize different gestures, but the researchers in that project did not use a commodity wireless card. You can read more about it here: http://wisee.cs.washington.edu/.

There was another work after WiSee, which you can find here: http://arxiv.org/pdf/1411.5394.pdf

In this work the authors used a commodity wireless card, but they hardcoded the features of different gestures into the classifier. This approach prevents users from defining new gestures.

In our work we also used commodity wireless cards, but we did not hardcode any features into the classifier; the classifier learns different gestures from the CSI values themselves.

3. Implementation

In this section we describe what you need to build this system and how to build it.

3.1 Prerequisites

Hardware:

A pair of “Intel WiFi Link 5300” wireless adapters. You can buy them here: http://www.amazon.com/gp/product/B0099FLLFK/ref=pd_lpo_sbs_dp_ss_1?pf_rd_p=1944687642&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B001CXT6NQ&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=18A10F1G84G5D1Y7JD20

This is just a normal wireless adapter, but it has a specific driver that lets you get per-packet information from the physical layer, which reveals information about the signals in the air.

Figure 1 shows how to install this wireless card in your laptop.

[Image: DSC_0309]

Figure 1

To install this driver you can follow this link:

http://dhalperi.github.io/linux-80211n-csitool/installation.html

 

Software:

Operating System: We have tested this system on Ubuntu 14.04 with kernel version 3.16.0. If you have another version of the kernel, you can download 3.16 from here:

http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.16-utopic/

You have to download these 3 files and put them in the same directory:

linux-headers-3.16.0-031600-generic_3.16.0-031600.201408031935_amd64.deb

linux-headers-3.16.0-031600_3.16.0-031600.201408031935_all.deb

linux-image-3.16.0-031600-generic_3.16.0-031600.201408031935_amd64.deb

After downloading, cd into that directory and run the following command to install the new kernel:

sudo dpkg -i *.deb

Anaconda: Anaconda is a free Python distribution. It includes more than 400 of the most popular Python packages for science, math, engineering, and data analysis. We need it for classifying the different signals. You can download Anaconda from here:

https://www.continuum.io/downloads

Matlab: We need Matlab because the researchers who developed the driver provide Matlab scripts to read and parse the signal information in user space.

Torch 7: Torch is a scientific computing framework with wide support for machine learning algorithms. We used this framework to develop our neural network classifiers. You can use this link to download Torch 7:

http://torch.ch/docs/getting-started.html

 

3.2 System Architecture:

Before discussing the system architecture, I have to explain how WiFi communication works at the physical layer. Every WiFi link has a transmitter and a receiver, each of which can have one or more antennas, and the data is sent over multiple subcarriers. For every received packet, the receiver calculates a matrix called the Channel State Information (CSI). Every element of this matrix is a complex number that describes the current state of the channel between a pair of transmitter and receiver antennas on one subcarrier, and its magnitude can be affected by the movements of nearby humans. In our experiments we had 3 antennas on the receiver and 1 antenna on the transmitter, and the data is sent over 30 different subcarriers, so for every packet we have 3 × 1 × 30 = 90 different CSI values. We want to use these CSI values for gesture recognition. For creating feature vectors we only used 10 of the subcarriers, because the subcarriers are largely redundant and this makes computation faster. So for every sample we have 30 CSI values.

 

Figure 2 describes the building blocks of our system:

[Image: system block diagram]

Figure 2

The system consists of a transmitter, a receiver, and a user standing between them. Whenever the user wants to perform a gesture, they click the mouse, which tells the receiver to record the CSI values from the received packets. In our system the length of every gesture is fixed at 2 seconds, and in every second 2,500 packets are received by the receiver. So for every gesture we have 2,500 × 2 = 5,000 samples, and each sample has 30 different CSI values, giving a feature vector of size 150,000. After reading these values, the feature vector is sent to an SVM classifier (we also examined a Convolutional Neural Network (CNN) and a K-Nearest-Neighbour (KNN) classifier, but the SVM worked better). The classifier then tells us which gesture was performed, and we can map that gesture to any task we want.
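As a rough illustration of the feature-vector construction described above (a sketch, not the project’s exact code; the array shapes come from the numbers in this section and the placeholder data is random):

import numpy as np

PACKETS_PER_SECOND = 2500
GESTURE_SECONDS = 2
RX_ANTENNAS = 3
SUBCARRIERS_USED = 10   # 10 subcarriers x 3 receive antennas = 30 values per packet

# csi stands in for the CSI magnitudes of one 2-second gesture window, with shape
# (packets, rx_antennas, subcarriers_used); random placeholder data here.
csi = np.abs(np.random.randn(PACKETS_PER_SECOND * GESTURE_SECONDS,
                             RX_ANTENNAS, SUBCARRIERS_USED))

# 5,000 packets x 30 CSI magnitudes each -> one 150,000-dimensional feature vector
feature_vector = csi.reshape(-1)
assert feature_vector.shape == (150000,)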

In the following we describe how you can set up the transmitter and receiver to get CSI values from the physical layer.

There is an injection mode in the modified driver that lets us generate packets at any rate we want. We send this huge number of packets every second to get precise channel state information during a gesture. We ended up using injection mode after a ton of failures with different approaches (see appendix A).

In order to set up the transmitter to use injection mode, first cd into the linux-80211n-csitool-supplementary/injection/ directory. Then run “setup_inject_csi.sh” with root privileges.

Then you can generate 2500 packets every second with the following command on the transmitter:

while true; do sudo ./random_packets 2500 100 1 350; done

After that we have to configure the receiver to use monitor mode. For this purpose, cd into the linux-80211n-csitool-supplementary/injection/ directory on the receiver machine and run the “setup_monitor_csi.sh” script with root privileges.

To run the actual program, run the testWiFi.py script. In this script we use a Matlab API to run the Matlab script that reads the CSI values and conditions the signals from the received packets. The Python script tells the driver to record CSI values and then runs the Matlab script to create the CSI waveforms. These waveforms are too noisy and cause problems for the classifier, so we apply a low-pass filter to reduce the noise level before sending them to the classifier. In figure 3 and figure 4 you can see a CSI waveform before and after applying the low-pass filter. The Matlab script then returns the smoothed signal to the Python script, which creates a feature vector and sends it to the SVM classifier.
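The project applies the low-pass filter in the Matlab script, but an equivalent smoothing step in Python/SciPy would look roughly like the sketch below (the cutoff frequency and filter order are illustrative assumptions, not the project’s actual values):

import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(csi_waveform, fs=2500, cutoff=50, order=4):
    # fs: sampling rate in packets per second (2,500 in this setup)
    # cutoff: assumed cutoff in Hz; hand gestures change the channel slowly,
    #         so most of the higher-frequency content is noise
    b, a = butter(order, cutoff / (fs / 2))   # normalized cutoff, low-pass by default
    return filtfilt(b, a, csi_waveform)       # zero-phase: filter forward and backward

# Example: smooth one noisy 2-second CSI magnitude waveform (placeholder data).
noisy = np.sin(np.linspace(0, 20, 5000)) + 0.3 * np.random.randn(5000)
smoothed = lowpass(noisy)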

[Image: CSI waveform before filtering]

Figure 3: Before applying low pass filter

[Image: CSI waveform after filtering]

Figure 4: After applying low pass filter

4. Evaluation

To use this system, the first thing we have to do is train it. We trained the system in an environment without any interference. The first time, we decided to classify four different gestures and created 30 samples for every gesture. Then we used a cross-validation approach to see how well our classifier works: we trained the system with 80% of the data and tested it on the other 20%. On average it gave us 87% accuracy.
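A hedged sketch of this train/test step with scikit-learn (which ships with Anaconda); LinearSVC and the variable names are assumptions about how one might reproduce it, not the project’s exact code:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# X: one 150,000-dimensional feature vector per recorded gesture, y: gesture labels.
# Random placeholder data stands in for the real CSI feature vectors here.
X = np.random.rand(120, 150000)            # 4 gestures x 30 samples each
y = np.repeat(np.arange(4), 30)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)   # 80/20 split

clf = LinearSVC()                          # linear SVM classifier
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))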

The second time, we tested our system with 3 different gestures and 100 samples per gesture. Again we used a cross-validation approach, and for this scenario the classifier gave us 100% accuracy. Below you can see the waveforms for different gestures; notice that different gestures produce different patterns in the CSI waveforms. Each of these slideshows shows different samples of a specific gesture:

[Slideshows: CSI waveform samples for each of the three gestures]

5. Discussion and Conclusion

Device-free gesture recognition is a new approach that lets users interact with their computers without wearing anything. One such approach exploits the WiFi signals around a user to recognize their body movements and gestures. In this project we built GestFi, which uses a pair of commodity wireless cards to recognize different gestures. Our system can detect 4 different gestures with a linear SVM classifier at 87% accuracy, and 3 different gestures at 100% accuracy when the training set is larger. The system was tested in a controlled environment without any interference. Using WiFi signals has its own limitations. One of them is the presence of other people near the user. Another, based on our experiments, is that you have to retrain the system in every new environment where you want to use it. Another is that the positions of the transmitter and receiver must be fixed: even a very small change in the position of these devices makes the system useless and forces the user to train it again from scratch.

Appendix A:

In this appendix we describe some of our failed attempts.

Before using injection mode, we tried to connect the Intel chipset to a normal access point and send a ping flood to the access point, collecting CSI values from the reply packets. This approach does not work because normal access points are not designed to receive 2,500 packets per second; after a while this huge number of packets fills the access point’s buffer and makes it crash. You can reduce the number of packets per second, but that results in a low-resolution waveform, the classifier cannot distinguish between different signals very well, and accuracy drops.

We also tried to send the ping flood directly from the access point to the Intel 5300 chipset, but this approach had the same problem as the previous one: the receiver’s buffer fills up and the system crashes.

We also tried different normalization methods on the feature vectors (normalizing per subcarrier, per sample, and per feature) to make the solution more general, so that it could be trained once and used in different environments, but it did not work. Even a small change in the orientation of the transmitter or receiver completely changes the channel, and the CSI waveform shapes come out different.

Appendix B:

You can find all of our code on our GitHub for future use:

The link is: https://github.com/S-Mohammad-Hashemi/GestFi/

 


Virtual Camera and RC Car

[Image: IMG_0432]

GitHub

The inspiration behind Virtual Camera was that we wanted to add “a first-person view and experience to RC projects everywhere”. While we didn’t quite reach the “virtual reality” stage for the camera, we did make great progress, including integrating the camera with an upgraded version of our RC car (see the end of this post for RC car details).

The Process:

  • Choose/buy a camera (we chose the KaiCong SIP1303)
  • Get the camera working (as designed) out of the box – not trivial with Chinese instructions
  • Inspect the KaiCong web app through Chrome (look at HTML elements while clicking around)
  • Create Python scripts (aka replace the web app):
    • Intercept HTTP packets (for video)
    • Send motor position instructions
  • Convert the Python scripts to a deployable phone application
  • Convert the video to a virtual reality format

Searching the web I found a few Instructables detailing similar projects. One (made by the people at fabericate.io) even gave a great tutorial on how to hack a different KaiCong camera (useful libraries included!).

Plug In Camera –

  1. Find and plug in the power supply to the camera and an outlet (don’t be alarmed: when initially plugged in, the camera may complete a full pan/tilt calibration routine)
  2. Find and plug in the ethernet cord to the camera
  3. Connect the ethernet to a router (note: some setup instructions claim simply plugging the ethernet into a computer will suffice; I found that a router MUST be used)
  4. All KaiCongs have a domain address printed on the camera. Find and copy the domain into your browser; it should begin with a unique 6-digit number followed by “.kaicong.info” (ours is 487285.kaicong.info).
  5. If everything is working, you will be prompted for your username and password when the domain address is entered into your browser (a key point of failure if network issues are occurring)
  6. You should now see a few different modes for displaying the video – choose Push Mode and your camera feed should be viewable. Take note – your camera’s IP address is in the URL bar at the top of your browser

[Screenshot: 2016-05-04 1.05.36 AM]

Network Settings –

  1. Go into settings (while in Push Mode).
  2. This is where you can use the web app to change all of your camera’s settings
  3. For the Expo we needed a workaround due to the lack of router access on the CU network
  4. We created a hotspot on our mobile device, reconnected the camera, and located our personal hotspot in the Wireless LAN Settings within the web app

[Screenshot: 2016-05-04 1.10.40 AM]

Inspecting the Code / Analyzing How the Web App Works –

  1. With the HTML viewable in Chrome, we used the buttons in the web app to move the camera.
  2. With the network activity tab open, you can see a list of GET requests; the more you move the camera, the more requests of type XHR you will see. Notice they are associated with a script called “decoder_control.cgi”. This is what moves the camera.
  3. We see this .cgi script is called via a long URL, and indeed this URL, when put into the browser, moves the camera
  4. Now our Python scripts must simply generate those URLs appropriately (see the sketch after this list)
  5. Video proved very similar, except with no buttons to press; we followed a hint that “livestream.cgi” was handling the video
  6. We did rely on Google and Stack Overflow for the Python structure for parsing the HTTP stream
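As a rough sketch of what “generate those URLs appropriately” can look like in Python (the camera address, the query-parameter names, and the command codes below are assumptions for illustration; the real ones come from the decoder_control.cgi requests observed in Chrome):

import requests

CAMERA = "http://192.168.1.50:81"   # assumed local IP and port of the camera
AUTH = {"loginuse": "admin", "loginpas": "password"}   # assumed parameter names

# Assumed mapping of pan/tilt directions to decoder_control.cgi command codes;
# the real codes come from watching the XHR requests in Chrome as described above.
COMMANDS = {"up": 0, "down": 2, "left": 4, "right": 6, "stop": 1}

def move(direction):
    # Send one pan/tilt command by hitting decoder_control.cgi with a GET request.
    params = dict(AUTH, command=COMMANDS[direction])
    response = requests.get(CAMERA + "/decoder_control.cgi", params=params, timeout=2)
    response.raise_for_status()

move("left")    # nudge the camera left, then stop it
move("stop")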

[Screenshot: 2016-05-04 2.33.37 AM]

[Screenshot: 2016-05-04 2.27.07 AM]

Writing the Python Scripts

  1. Look to GitHub for the Python script source code
  2. Parsing the HTTP stream (a sketch follows this list)
  3. pygame to get keyboard instructions
  4. OpenCV for video
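A hedged sketch of the HTTP-stream parsing in Python with OpenCV (the URL, credentials, and the assumption that livestream.cgi returns an MJPEG stream are placeholders for illustration, not necessarily what the project’s scripts do):

import cv2
import numpy as np
import requests

# Assumed camera URL and credentials; livestream.cgi is assumed to serve MJPEG,
# i.e. JPEG frames separated by standard start/end-of-image markers.
STREAM_URL = "http://192.168.1.50:81/livestream.cgi"
stream = requests.get(STREAM_URL, stream=True,
                      params={"loginuse": "admin", "loginpas": "password"})

buf = b""
for chunk in stream.iter_content(chunk_size=4096):
    buf += chunk
    start = buf.find(b"\xff\xd8")      # JPEG start-of-image marker
    end = buf.find(b"\xff\xd9")        # JPEG end-of-image marker
    if start != -1 and end != -1 and end > start:
        jpg, buf = buf[start:end + 2], buf[end + 2:]
        frame = cv2.imdecode(np.frombuffer(jpg, dtype=np.uint8), cv2.IMREAD_COLOR)
        if frame is not None:
            cv2.imshow("Virtual Camera", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
cv2.destroyAllWindows()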

RC Car Integration

We spliced the camera cord, connected the camera to a computer via a personal hotspot, and attached it to the car. But we wanted to integrate the camera further, so we pushed to have our RC controller (an Xbox controller) move the camera, so that someone can move the car with their left thumb and the camera with their right. Look to VirtualMotorXbee.py. Unfortunately we only had two XBees, and thus had to attach our RC car XBee to our computer. We used the same structure as our original VirtualMotor code but now introduce a serial read to check for incoming Xbox controller commands; a sketch of that read loop follows below. The camera only moves left and right, but it is functional (see project 2 about serial communication). While we may not have a full car, camera, and display assembly (we would need 3 XBees), it successfully proves the overall concept. We also had to spend some time fixing some RC car bugs; namely, we had to correct how the Arduino handled simultaneous commands (i.e. left and forward, right and reverse) – the new RC car code is in the VirtualCamera repo.
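A minimal sketch of that serial read loop with pyserial, assuming the XBee attached to the laptop forwards single-character commands from the Xbox controller (the port, baud rate, and command characters are illustrative assumptions; see VirtualMotorXbee.py in the repo for the real code):

import serial

# Assumed serial settings for the XBee attached to the laptop; the port name,
# baud rate, and one-character command protocol are placeholders, not
# necessarily what VirtualMotorXbee.py actually uses.
xbee = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.01)

def pan_camera(direction):
    # Placeholder for the pan/tilt call (see the decoder_control.cgi sketch above).
    print("camera:", direction)

def poll_xbox_command():
    # Non-blocking read of one controller command byte forwarded over the XBee.
    data = xbee.read(1)
    if data == b"L":
        pan_camera("left")
    elif data == b"R":
        pan_camera("right")
    elif data == b"S":
        pan_camera("stop")

while True:
    poll_xbox_command()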

References:

Completed Video – Project 1 – GitHub

 


I Like Trains!

This is also available (and I recommend viewing this from the README on github) here: https://github.com/driabwb/ILikeTrains-

 

I Like Trains!

NOTE: I Like Trains is still a work in progress, and the description reflects the intent for the finished product. For a better understanding of the current status of the project, read the How Is It Made? and The Design sections.

Table of Contents

  1. What is it?
  2. A User’s Perspective
  3. Beyond Just a Game
  4. How is It Made?
  5. The Design
  6. Future Work

What is it?

I Like Trains! is a puzzle-based game for mobile devices. The game places the user as a new employee whose job is to make trains traveling on tracks safe. In each level the user is provided a track and some number of trains; they must then use the locks, signals, and other tools provided to create a scenario that allows all trains to move around the track without collisions.

Beyond Just a Game

While my hope is that I Like Trains (ILT) becomes an addicting game that others will enjoy and play as much as they play similar games, ILT was conceived as a more useful tool. It has become an increasingly popular view that we should make games which teach something. There are games for children which introduce and teach topics in art, mathematics, science, and much more. ILT attempts to follow in that vein and be a tool that teaches users about parallel computing. In particular, the aim is educating users about the hazards often presented when something goes wrong, such as deadlock and race conditions. These insidious bugs are often difficult to understand because of the difficulty people have following two things happening at once. To combat this, ILT aims to show users a visual representation of these effects by analogizing computer processes to trains running on interconnected railways. The game presents users with different track layouts and numbers of trains, which are intended as representations of program code and execution respectively. The user is also provided tools such as mutex locks and condition variables to solve the puzzle. It is the goal of ILT to provide a tool which helps educate a user in the perils of, and solutions to, common parallel computing problems.

A User’s Perspective

A run through of a sample set of screens for I Like Trains! is provided below.

  1. The session begins with the user opening the application and viewing a welcome screen. [Image: welcomeScreen]
  2. Once the user presses the play button, they are presented with a level screen. [Image: levelScreen_1.png]
  3. Within the level screen the user drags and drops the mutexes onto the tracks to try to prevent trains, which run clockwise on one track and counterclockwise on the other, from crashing. After creating their desired arrangement, the user can test it by pressing the play button. If the solution is correct they move to the next level; otherwise an animation of a failing scenario plays.

How is it made?

The project is currently built in Android Studio using the LibGDX game development library. The program starts with the Iter1ILikeTrains class, which extends the LibGDX Game class. This setup allows for multiple screens in the overall application; the class delegates to those screens and otherwise simply acts to switch between them. The screens are then implemented individually for what they need. The project currently has two designs implemented: one used by the OpeningScreen, and one used by the Level1Screen and Level2Screen. These two designs are discussed further in The Design section. Screens, however, have three primary parts: the world state, rendering, and input processing. The world state keeps track of the positions and details of every object, such as the position and state of the locks on the track. Everything in a level, with the exception of text, is handled by its own class; this encapsulation helps keep the screen organized for the programmer. Rendering refers to the actual drawing of the elements of a screen onto the screen, i.e. what all the stuff in the scene looks like. The final part, input processing, deals with how the application handles what a user does, i.e. what happens when a user touches a train. The game and screens provide a base for the rest of the program.

LibGDX also provides other functionality which this project attempted to use. In particular, dragging and dropping locks from the left part of the screen onto the track was intended to be done through LibGDX. This process involves creating Actors, which LibGDX provides mechanisms to drag and drop onto each other. Using this functionality proved more difficult than advertised, largely due to a lack of examples and documentation. In the process of trying to use Actors, the majority of the code base was re-factored numerous times, going between various strategies and designs. Ultimately, the attempt to use LibGDX for this functionality was abandoned, and in its place is a manual drag-and-drop system. In this system the touchDown event (when the user touches the screen) picks the object to be dragged, the touchDragged event updates the program with where the dragged object is on the screen, and finally the touchUp event signals dropping. For this to work, at every stage the position of the event is used to update the world, with a single dragged item at a time.

The Design

The design is broken into three parts: High Level Design, which describes how screens as a whole are dealt with; Game Objects, which covers how the world handles each scene object; and Tracks, which discusses the rationale behind how the tracks were put together.

High Level Design

There are two high level designs used in the game for handling screens. The first appears in the OpeningScreen and creates a class for each task it does: world state, rendering, and input handling. These are respectively named OpeningScreenWorld, OpeningScreenRenderer, and OpeningScreenInputHandler. In this design the world class holds everything the screen knows about as a model of what is happening. For example, the world might be comprised of the planets and the sun in a solar system simulation. The world is responsible for holding those classes and updating them when the world’s update function is called. The renderer class is responsible for the miscellaneous drawing tasks, such as clearing the screen before drawing, and for calling the appropriate draw method for each object in the scene. Finally, the input handler class is responsible for reading user input and finding the correct action to take in response. For example, the input handler should figure out whether the user clicked a button, and if so it should tell the button that it is pressed. In the case of the Opening Screen none of these classes does much. The alternative design has only a single class for the world; in this design the world handles the world state, rendering, and input handling.

The second design was chosen for the level classes because it was less awkward to implement than the first. This is because each part, the world, the renderer, and the input handler, has to know about all the objects in the scene in order to function. Thus the levels conglomerate those functions into the world, which already stores the screen’s elements. The world still delegates most of its work to the scene’s objects, so the world is not made much more complex. However, that is only true for updating and rendering; input handling is a larger problem. A large portion of input handling follows the same line as rendering and updating, where the world just loops through all of the scene objects informing them of the input event. However, the drag-and-drop functionality breaks this. As a result of handling drag and drop manually, the input handling must know which object is being dragged, along with other details. These details, together with other events which require unrelated scene objects to interact extensively, increase the world’s complexity. Separating these into an input handler class, so that the complexity lives in one place that is only responsible for input, is the primary goal of the first design alternative.

The first design was used in the implementation of the Opening Screen because it served as a learning and comparative example of designs for future reference.

Game Objects

The world has to deal with a lot of different elements. In I Like Trains! there are buttons, trains, tracks, and more. If the world had to deal with each of those in a different way, it would become very complex very quickly. To alleviate this issue the concept of a game object is used. Game objects are any objects which will be seen and/or manipulated by the user. The general game object provides a consistent interface through which the world can reliably interact with any scene object without needing to know what that object particularly is. This simplifies the workings of the world to, for example, just telling each object to draw itself. Then it is only necessary that each type of game object, such as trains and tracks, follow the game object interface and do whatever is appropriate for each call.

Tracks

Tracks are the most interesting game object in I Like Trains. The tracks need to be adaptive and handle change simply. To this end, tracks are composed of track pieces which, most importantly, know which track piece is next. This structure, known as a linked list, allows track pieces to be added or removed with minimal changes. This is used when a mutex is added along a track: instead of the track monitoring where a train is and where all the mutexes along the track are, mutexes are just a different kind of track piece. Therefore, when a new mutex is added to the track, the linked list needs to insert the new mutex track piece in the correct position and split the track piece the mutex lands on into the part before the mutex and the part after it. By designing the track in this fashion, modifying the track by any addition or removal is simplified; a small sketch of the idea follows.
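The project is built with LibGDX (typically Java), but the linked-list idea is language-agnostic; here is a small illustrative sketch in Python, with class and method names made up for illustration rather than taken from the project.

class TrackPiece:
    # One segment of track; `nxt` points to the next piece (a linked list).
    def __init__(self, length, nxt=None, is_mutex=False):
        self.length = length
        self.nxt = nxt
        self.is_mutex = is_mutex

def insert_mutex(piece, offset):
    # Split `piece` at `offset` and place a mutex piece between the two halves.
    after = TrackPiece(piece.length - offset, nxt=piece.nxt)
    mutex = TrackPiece(0, nxt=after, is_mutex=True)
    piece.length = offset          # `piece` becomes the part before the mutex
    piece.nxt = mutex
    return mutex

# Build a tiny two-piece track, then drop a mutex 40 units along the first piece.
head = TrackPiece(100, nxt=TrackPiece(50))
insert_mutex(head, 40)
segment, layout = head, []
while segment:
    layout.append((segment.length, segment.is_mutex))
    segment = segment.nxt
print(layout)   # [(40, False), (0, True), (60, False), (50, False)]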

Future Work

Note: these appear better on GitHub, where they are rendered as checkboxes.

The following list of future work appears in no particular order.

  • Decide on a consistent design
  • More track types
  • Improve the design; too much is rather ad hoc and needs more thought
  • Be able to move and remove placed mutexes
  • Collision detection for trains
  • Deadlock detection
  • Starvation detection
  • Algorithm for finding bad scenarios
  • Investigate other frameworks
  • Use as a simulation tool for debugging misbehaving programs


Death Valley Final Report (Formerly called Zombie Survival)

 

What is Death Valley?

Death Valley is a first-person shooter computer game. The premise of the game is that you are a lone human survivor trapped in the desert during the zombie apocalypse. The object of the game is to shoot and kill as many zombies as you can and stay alive for as long as possible. Initially there are very few zombies, but they continue to spawn every second and rapidly increase in number, causing the game to become more difficult as it goes. For each zombie killed the player is awarded ten points. When the player is ultimately eaten by the zombies, the score is wiped clean and the game restarts.

 

Why was it made?

The purpose of this game was to create a fun, yet simple video game. It was intended to be entertaining, yet simple and intuitive enough that almost anyone could play it successfully with only a brief explanation of the game and the controls. The creation of this game was also used as a learning experience for me to learn how to create a video game and how to use the Unity game engine. This project looked to improve upon the brief experience with Unity that I had gained in making my first game (Virtual Pong), by making a slightly more complex game that utilized more of the features of Unity.

 

How was this game made?

This game was created using the Unity 5.3.2 game engine. The mechanics of the game were created using scripts written in JavaScript, and all of the models and sounds used were found for free on the Unity Asset Store.

The Player:

The player was created using the built-in Unity FPS character controller, with a gun model attached to the character. The gun used a model found on the Asset Store. It was animated by rotating the gun between frames and adding a particle system, a light source, and a line renderer. Scripts were created to match the animation with the clicking of the mouse, as well as to shoot a raycast from the end of the gun. If the raycast hit a zombie, it would subtract damage from the zombie’s life. An extra audio source was added to the player with the music clip attached, so the music plays everywhere the player goes.

The Zombies:

The zombies were created from a 3D model taken from the Asset Store. They were given a NavMesh agent and a script that has them traverse a NavMesh in order to find the player character. They were also given scripts that allow them to attack, and to take damage and die. A trigger collider was added to the zombie prefab; if the player makes contact with this trigger, it signifies that the zombie is able to attack, which triggers the zombie attack animation and lowers the player’s health. A sound was added to the zombies that plays every time a new zombie is created. Four spawn points were created for the zombies, and a script controls how often they spawn while randomizing which spawn point is used.

The Environment:

The environment was created by duplicating many planes, adding a sand texture, and placing them together. Once the ground was created, a border was made around the playable area. The border was created from a random assortment of 3D objects, including ruins, pillars, and rocks. This border serves to box the player into the environment, and it was reinforced by placing a transparent box object on each wall of the border to ensure that the player cannot get through or over it. The rest of the environment was made by placing these same 3D models throughout the area to create an interesting desert level design. All of these models use a mesh collider to ensure that the player cannot walk through them. The entire environment then needed to be baked as a NavMesh so that the enemies could traverse the environment and navigate around obstacles in order to find the player.

 

The HUD:

A HUD was created using Unity’s UI system. A health bar was created from a slider object and placed in the lower right-hand corner of the screen. A clear image was created over the entire screen that can be flashed red to signify that the player was injured. The crosshair was made from 4 equally spaced rectangular images anchored to the middle of the screen. Text was added to the upper right-hand corner to display the score. The game over screen is composed of a green image that covers the screen and red text that displays the words “Game Over”; it was animated to appear when the player loses all of his health. All of these UI objects were given scripts to make them interact properly, such as taking health away from the life bar, updating the score count, and applying the game over screen and then restarting the game.

 

Images:

Image showing final game with player being chased by zombies.

[Image: DeathValley]

 

Image showing the Game Over screen, which appears when the player dies, before the game restarts.

[Image: GameOver]

 

Image showing an earlier stage of the game with a large number of zombies and an unfinished environment.

[Image: zombies]

Assets used

All assets were found on the Unity Asset Store and were downloaded for free.

Zombie model and animations : Zombie from Pxltiger

Sand Texture: Yughues Free Sand Materials from Nobiax/Yughues

Zombie Sound: Voices SFX from Little Robot Sound Factory

Gun Particles: Simple Particle Pack from Unity Technologies

Gun Sound : Post Apocalyptic Gun Demo from Sound Earth Game Audio

Rock Models : Mountains Canyons Cliffs from Infinita Studios

Biometric Joe text font : Grunge Font Pack from Ray Larabie

Gun Model : Assault Rifle A3 from Stronghold Creative

Ruins models : Ancient Ruins in the Desert – Part 1 from NECKOM Entertainment

Pillar models: Ancient Ruins in the Desert – Part 2 from NECKOM Entertainment

Soundtrack: Free Horror Music Track from T.I.D.N. Music

 

Scripts:

The scripts used to create this game can be found at:

https://github.com/MitchLewis/Death-Valley

 


2D Side-scrolling Game Final Report

During my initial attempt at making the game earlier in the semester, quite a few things ended up not working for me. For example, my death animation was not working, I did not have a way to restart the game after death, all parts of the game were static with no moving obstacles, and much more. On this second attempt I made it my goal to get all of these things working and have a much more polished game. After presenting at our Expo, I have to say that the game turned out to be much more successful than I thought it would be.

What I ended up doing was just starting a new project from the beginning, so that I had a “clean canvas” to work with. Everything I did was done step by step and tested to make sure it was working properly. The knowledge I had acquired from my previous work on the game helped a lot during development of the final version.

When making a 2D side-scrolling game you need a sprite sheet, which is a sheet of art used to make all the actors and environment in the game; I will include a picture of the sprite sheet that I used. After you have acquired your desired sprite sheet, you import it into Unreal Engine and then extract each individual sprite so that you can use them in your game. You can then select multiple sprites and combine them into a flipbook, which serves as your animated object. In my example I combined three sprites, each a different instant of a run, and when combined into a flipbook they made my actor look like it was running when I moved left or right. The same thing was done to make the jump animation, death animation, sitting animation, and knock-back animation.

[Screenshots: 2016-05-03 (8)–(11)]

After the animations were complete, it was on to the event graph for my actor. The event graph is where all the coding and functionality for the actor is done. In the event graph I had sections that controlled damage, movement, jumping, animation updates depending on what the actor was doing, and landing after a jump or knock-back. Each section was color-coded so I could quickly find the area I needed to work on. Also included in the event graph was a widget, which can be used to make a HUD; in my case I used it to make a health bar for the character, so the person playing can keep track of their health. I will include pictures of the event graph.

[Screenshots: event graph – 2016-05-03 (1)–(7)]

After getting my actor (main character) to where I wanted him to be, it was on to the map. The goal for this game was to make something different from your typical 2D side-scroller. In most games you have many individual levels: you start at the beginning, run across to the other side to complete the level, and then move on to the next one. What I wanted to do was make one huge level, which I called the Master Level, in which the actual level change happens at a branch in the map. At the branch, the player has the option to proceed either up or down, which brings the player to a new part of the map with new challenges and obstacles. The other thing I wanted for the game was for it to be really hard and to quickly become much more difficult as the player progressed. I would say I was successful in this respect, since one of my classmates said I was really cynical in my map designs. Included in this post are pictures of each section of my Master Level.

[Screenshots: 2016-05-03 (12)–(17)]

Now, as I said earlier, as the player progresses through the game it becomes increasingly difficult and poses new obstacles, challenges, and traps. For example, in the picture with the red tiles, this area is one of the branch options that the player can take, but it ultimately leads to the death of the player. As the player keeps going along this path, the tiles become a darker shade of red, used as a warning that what is up ahead might not be good for the player, which it is not: the path leads the player into the kill-Z volume that is used to kill off the player in case he falls off the map. Another example of one of my traps is in the first image, in which the blocks to the right and left of the long strip of spikes, which also look different in design, are actually false floors; if the player walks on them, they will fall through to their death. As you can see I was pretty cynical with my level design, and I love it, haha.

All in all, it was a blast getting to work on this game and learning about the process of making a video game. I learned that it is a lot more difficult than it seems to make a game and actually takes a long time. I can’t even begin to imagine the amount of work that gets put into making the top end games like Halo, Call of Duty, Battlefield and many more.
