Stop Motion 3D in Unity with Kinect Sensor
I am currently researching how to put together stop-motion OBJ sequences in Unity. I have been doing 3D scans of my real-life model with an ASUS Xtion sensor to create my main character. The first sequence is a simple walking movement made up of only 8 frames. Once I have a greater understanding of the process in Unity I will create more scans. To do this I will be using the Unity asset Mega Cache, which natively supports importing OBJ sequences.
360 ‘VR’ Video with Ricoh Theta & YouTube
The Ricoh Theta is a 360-degree camera. It can be used as the most amazing fisheye lens for still images ever, and it can also easily be used to make 360-degree/’VR’ films. Since YouTube now natively supports these interactive films, getting them to the public is easier than ever.
When viewing these videos through the YouTube app on a mobile device with a gyroscope, you can physically pan around the video with the device itself. On Android, the app even supports Google Cardboard for an instant VR experience; this functionality has not been added to the iOS app. On a computer, use the WASD or arrow keys to look around.
If you want to experience 360 YouTube videos on iOS with a Cardboard viewer, you can use the third-party app in360tube on your iPhone. The app is a bit buggy and seems lower quality than the YouTube app, but it works.
I did not need any external 3D software to create the above video; the straightforward process is laid out below:
1. Transfer video to computer
Transfer the video directly from the Ricoh Theta device. Do not use iPhoto/Photos on OS X – it will change the video format.
2. Stitch the video
The initial output from the camera looks cut in half – two fisheye views side by side – so it must be stitched together. To fix this, run the video through Ricoh’s Theta application, which will stitch it into a single spherical video. After this you can use it in a variety of ways, including mapping it onto a 360 mesh in 3D programs.
3. Embed Metadata
Now we must make sure YouTube recognizes and processes our file as an interactive 360 video. To do this we use an application or Python script to embed metadata that can later be read by YouTube. Doing this is easy – just follow the instructions from Google.
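For reference, Google’s instructions point to their spatial-media metadata injector, which is a Python script. A rough sketch of a typical invocation (the file names here are placeholders, and you should defer to Google’s current instructions, as the tool’s flags may change):

git clone https://github.com/google/spatial-media.git
cd spatial-media
# Inject spherical (360) metadata into the stitched video
python spatialmedia -i stitched_theta_video.mp4 theta_video_360.mp4

The -i flag tells the script to inject the spherical metadata and write a new file, leaving the original untouched.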
4. Upload your new video to YouTube – no more special treatment needed.
** After upload the video will initially appear as a flat stitched video, but do not worry. YouTube automatically detects the metadata you embedded earlier and converts the upload into an interactive 360 video for you. They claim the process can take up to an hour, but my three-minute video took about five minutes. Enjoy!
Assignment 3
ATTEMPT 3
ATTEMPT 2
ATTEMPT 1
Assignment 2
Assignment 1
For assignment 1 in computer graphics I started to play with 3D shaders, beginning with the test file we created in class.
First Attempt – Live Link
During my first attempt I zoomed into the ball so it took up the entire canvas, then added user color interaction along the mouse X axis. I used the uTime variable together with sine and cosine to give it a pulsing quality.
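The pulsing boils down to driving the ball’s size with sine and cosine of elapsed time, and the color with the normalized mouse position. A minimal Python stand-in for the shader math (the constants and function names are my own illustration, not the actual shader code):

import math

def pulse_radius(u_time, base=0.5, amplitude=0.25):
    # sin * cos oscillates smoothly, giving the ball its pulsing size
    return base + amplitude * math.sin(u_time) * math.cos(u_time)

def mouse_hue(mouse_x, canvas_width):
    # normalize the mouse X position to 0..1 and use it to drive color
    return mouse_x / canvas_width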
Second Attempt – Live Link
For my second attempt I smoothed the cursor.x interaction by dividing it by 1000, and I stretched the visible canvas to take up the entire screen. I set the canvas holding the ball to 1200×1200 to maintain a high resolution so as not to pixelate the screen. I also created a variable called zTime that resizes the ball.
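In the same spirit, a tiny Python stand-in for the second attempt’s tweaks (the names mirror the post, but the exact shader values are assumptions):

import math

def smoothed_cursor(cursor_x):
    # dividing by 1000 damps the mouse interaction so changes happen gently
    return cursor_x / 1000.0

def z_time(u_time):
    # zTime oscillates over time and is used to resize the ball
    return 1.0 + 0.3 * math.sin(u_time)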
Assignment 0
Since arriving at ITP I have found myself extremely interested in pixel manipulation. Having never coded outside of CSS and HTML, I began working last year on learning the mathematics behind basic image-processing techniques (gamma, saturation, contrast, etc.). I loved learning how to manipulate certain ranges of pixel brightness to create my own effects. Having a background in photography, I found the mechanics behind processes I had used daily for most of my adult life fascinating.
During my second term at ITP I became interested in 3D graphical work. This began with creating imagery using the ASUS Xtion IR scanner – virtually identical to the Microsoft Kinect.
I began using Skanect software to operate the device as a scanner/camera, then importing the scans and creating collages from the images in Blender (a selection of work can be seen at http://f1f2.works). This was my first time working in 3D, setting up cameras and lights within Blender to give life to the collages. I have become obsessed with using Skanect and my ASUS Xtion scanner, and would eventually like to modify an Xtion sensor to be more powerful and find a way to untether it from my laptop for increased portability. This would involve creating a lightweight application to capture data, perhaps with a Raspberry Pi, and a separate program that later creates a 3D model from that data.
As my interest in 3D has grown, I have also become curious about the mechanics behind everything I am doing. I would love to learn more about shaders and ray tracing specifically. The idea of understanding the code behind these processes is really exciting.
RWET Final: Twitter Portraits
For my RWET final I created a program that uses a user’s Twitter feed in conjunction with a photo to create a unique poem based on that user. The above two poems were extracted from @barackobama and @christymack.
The code can be found here
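The full program is at the link above; as a rough sketch, the tweet-gathering step might look like this (assuming the tweepy library and placeholder credentials – not necessarily the project’s actual code):

import tweepy  # Twitter API client

# Placeholder credentials – replace with your own API keys
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Pull recent tweets from the user's feed to serve as the poem's source text
tweets = api.user_timeline(screen_name="barackobama", count=200)
source_text = " ".join(t.text for t in tweets)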
Creating Poetry from Images – X-Files
Using the Python programming language I created a script that generates poetry from an image and a source text. When run, the program reads each pixel’s brightness value and assigns a popular word from the source text to that brightness; the result is a fun new poetic form that bases word frequency and spacing on the chosen image.
To demonstrate this script I used The X-Files: I took the scripts from ALL episodes as my source text, pulled out the most popular words, and mapped them to different photos from the television show. The code for this project can be found here: https://github.com/aj701/Rwet-Midterm
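The repo above has the real implementation; below is a minimal sketch of the core brightness-to-word mapping (assuming the Pillow imaging library – the file names and sizes are my own, not the project’s):

from collections import Counter
from PIL import Image

# Rank the most frequent words in the source text
with open("source_text.txt") as f:
    words = f.read().lower().split()
popular = [w for w, _ in Counter(words).most_common(256)]

# Downsample the image and map each pixel's brightness (0-255) to a word
img = Image.open("photo.jpg").convert("L").resize((40, 20))
lines = []
for y in range(img.height):
    row = [popular[img.getpixel((x, y)) % len(popular)] for x in range(img.width)]
    lines.append(" ".join(row))
print("\n".join(lines))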
Portraiture in 3D
My exploration of 3D portraiture originated during my Computational Portraiture class at ITP. Coming from a traditional photographic background, I had not been overly impressed with what available 3D technologies were bringing to the table; I constantly saw artists battling the technology, trying to fix its imperfections and creating odd-looking ’00s-style video game caricatures. My first experiments were with an original Xbox Kinect: I scanned myself using Skanect software for Mac, plugged the file into Meshmixer, and began distorting it. My final product from this experiment ended up being a quick GIF (photo 1).

After this experiment I got excited about making statuesque, surreal figures based on 3D scans. I quickly took a Microsoft Kinect home and started creating scans of spaces and friends, and with the holiday of Passover approaching I thought it would be a good idea to create scans of my family. Instead of a Microsoft Kinect I decided to try the structure.io sensor, which runs on iOS devices. For my needs the structure.io was a failure – it did not allow me to get close enough to my subjects to obtain detailed portrait scans. Through research I found the best scanner for my needs was an ASUS Xtion with RGB camera, which is basically a rebranded PrimeSense scanner (PrimeSense being a pioneering 3D-scanning company recently bought by Apple). By mounting my ASUS Xtion to my laptop I was able to create a semi-portable “rig” for 3D scanning photography.
After finishing my Passover scans I didn’t feel satisfied with the results. After discussing the project with my friend and classmate Pat Shiu, we decided to collaborate on a series of images. With her wearing a winter coat, we created a unique character whom we followed through different situations around New York. We walked the city doing outdoor environmental scans, carefully composing scenes as we searched for detail in physical depth (not light, as in traditional photography) – sometimes combining elements from different scans to create surreal atmospheres for our character to travel through.
The resulting project is a series of three images (with more in the works) following our character through a surreal journey in New York City.