SkiApp Final Report

Abstract:  The original purpose of this project was to offer new skiers a second opinion when deciding whether or not to wax their skis.  To accomplish this, I planned to use image recognition software, coupled with iOS, to distinguish a ski in need of wax from one that was not.  The user would open the iPhone app and take a picture of the bottom of their skis; the app would compare that image against a database of reference images and, based on which reference was deemed most similar, send the user a message recommending waxing or not.  Later in the project I decided to further assist the user by adding a couple more functions, intended to help the user find the proper wax should the app suggest they needed it.  This was done with a Google Map of the user’s location and a UIWebSearch view that offered online options for purchasing ski wax, organized by categories such as skier level and ski park conditions.
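The matching step described above is essentially a nearest-neighbor search: reduce each image to a feature vector, then pick whichever reference image's vector is closest. A minimal sketch in Python, assuming images have already been reduced to feature vectors (the vectors and labels here are hypothetical placeholders, not the app's real data):

```python
import math

def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend_wax(photo_features, reference_db):
    """reference_db maps a label ('waxed' or 'needs_wax') to a list of
    feature vectors extracted from reference photos.  Returns the label
    of the closest reference vector."""
    best_label, best_dist = None, float("inf")
    for label, vectors in reference_db.items():
        for vec in vectors:
            d = euclidean(photo_features, vec)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label

# Mocked reference database with two-dimensional feature vectors.
db = {
    "waxed":     [[0.9, 0.1], [0.8, 0.2]],
    "needs_wax": [[0.2, 0.9], [0.1, 0.8]],
}
print(recommend_wax([0.85, 0.15], db))  # closest references are 'waxed'
```

The hard part, of course, is turning a photo into a feature vector that separates waxed from unwaxed bases; the lookup itself is the easy step.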


Process:  This project was created using Apple’s Xcode 7 SDK for iOS 9.  The framework was built from root controllers and various views, including map, web, and picker views.  The UIWebSearch, Navigation, and Home view controllers were created first using Xcode’s objects and views; the .h file associated with each view controller declared the properties of those objects, and the .m file used those properties to add the functionality.  The Google Map view controller was created with the Google Maps SDK, which was installed into Xcode using CocoaPods, and the required libraries were added to the Build Phases of the Target.  Acquiring the user’s position and adding the buttons and alerts was done through the .h and .m files.  The only change needed in the AppDelegate file was the addition of the Google Maps API key to access Google Maps.  The last part of the project was the graphics and new buttons, which were created in Photoshop and added to Xcode.
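For reference, the CocoaPods step described above typically amounts to a short Podfile like the following (the target name is hypothetical; `GoogleMaps` is the pod that provides the Google Maps SDK for iOS):

```ruby
# Podfile at the project root; run `pod install`, then open the .xcworkspace
platform :ios, '9.0'

target 'SkiApp' do
  pod 'GoogleMaps'   # pulls in the Google Maps SDK for iOS
end
```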

Link To Project:

Issues:  I originally attempted to use an image recognition API developed by an organization called Moodstocks.  Once installed in Xcode, the API would allow the app to take a picture and then reference a database of images provided by Moodstocks to identify the object in the image.  However, the API was developed for Xcode 5, two versions behind the SDK I was using.  This created major issues when integrating it into my project, since it required constant changes to the AppDelegate files, which are largely untouched in current Xcode projects.  In the end this issue was consuming too much time for too little progress, and I decided to look for a different solution.

I switched to OpenCV, which has numerous capabilities for image recognition and augmented reality.  After some research I settled on OpenCV’s HoughLines and line-segment detection algorithms, which are designed to recognize the edges and lines in an image.  I believed this could be used to count the number of scratches on the bottom of a ski, since accumulated scratches are relatively linear; based on the count, the user would be presented with a message to wax their skis or not.  Implementing the OpenCV algorithms proved too difficult and too time consuming for the amount of progress I was making, which was near none, and so I switched focus to other areas of the app.
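The count-and-threshold step I had in mind is straightforward once a detector has run. Assuming the line-segment detector reports each segment as (x1, y1, x2, y2) endpoint coordinates, as OpenCV's probabilistic Hough transform does, the decision could be sketched in Python like this (the length and count thresholds are hypothetical, and the detector output is mocked):

```python
import math

def scratch_count(segments, min_length=20.0):
    """Count detected segments long enough to plausibly be scratches.

    segments: iterable of (x1, y1, x2, y2) endpoint tuples, in the shape
    a line-segment detector such as OpenCV's HoughLinesP would return.
    """
    return sum(
        1 for x1, y1, x2, y2 in segments
        if math.hypot(x2 - x1, y2 - y1) >= min_length
    )

def wax_message(segments, max_scratches=10):
    """Map the scratch count to the user-facing recommendation."""
    if scratch_count(segments) > max_scratches:
        return "Time to wax your skis!"
    return "Your skis look fine."

# Mocked detector output: two long scratches and one short speck.
detected = [(0, 0, 100, 2), (10, 5, 90, 8), (3, 3, 6, 4)]
print(wax_message(detected))  # only 2 real scratches -> skis look fine
```

The difficulty in my project was never this decision logic; it was getting the OpenCV detection pipeline itself running inside the iOS build.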

Link To OpenCV Attempt:

Conclusion:  In the end the app did not accomplish my original goal.  There was no functionality that could recognize whether a ski needed to be waxed, and the Google Maps implementation was incomplete since the address bar did not work.  Despite this, I learned a ton from this project: more about Xcode, Google Maps, and OpenCV, and also about time management, goal setting, and proper planning.  The project was extremely frustrating, but every small success more than made up for it, and that is what made the project fun and addictive.  In the future I would definitely like to come back to it, successfully implement some kind of image recognition, and complete the Google Maps view.

This entry was posted in Project 2 reports.
