Mars VR Demo

Just wanted to post this. The source is pretty self-explanatory, and I'll put up a GitHub repo later once I clean up a few things.

I'll explain later. Be patient; it takes a second to load.

DEMO →

Augmented Reality with getUserMedia and Mobile: Part 1

Hey y'all,

This is going to be a multipart series about using getUserMedia on a mobile device to get a camera stream and, ultimately, using it for augmented reality. Eventually I'll introduce some WebGL components, but for now let's get up and running.

First, let's take a look at compatibility: http://caniuse.com/#feat=stream. Make sure you're not relying on it for anything that needs to ship on iPhone within the next year (Apple only just introduced WebGL in iOS 8, so it will likely be a long time, if ever, before they allow native webcam streams from the browser). For now, this technology should only be targeted at cutting-edge Android devices.

OK, let's get started by looking at the available media sources:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

navigator.getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);

function init() {
    if (typeof MediaStreamTrack === 'undefined') {
        alert('This browser does not support MediaStreamTrack.\n\nTry Chrome Canary.');
    } else {
        MediaStreamTrack.getSources(gotSources);
    }
}

function gotSources(sourceInfos) {
    for (var i = 0; i < sourceInfos.length; i++) {
        var si = sourceInfos[i];
        if (si.kind === 'video') {
            // could also face the 'user'
            if (si.facing === 'environment' || si.facing.length === 0) {
                sourceId = si.id;
                // init webcam
                initWebcam();
            }
            console.log(si);
        }
    }
}

[/pastacode]

Now let's make that "initWebcam" function:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

function initWebcam() {
    if (navigator.getUserMedia) {
        sp = $('.streamPlayback')[0];
    } else {
        alert('no user media!');
        return;
    }

    // cap the stream size and prefer the camera we picked in gotSources
    var videoConstraints = {
        mandatory: {
            maxWidth: videoSize.width,
            maxHeight: videoSize.height
        },
        optional: [
            { sourceId: sourceId }
        ]
    };
    var options = {
        video: videoConstraints,
        audio: false
    };
    navigator.getUserMedia(options, handleStream, handleStreamError);
}

[/pastacode]

This sets up a user media stream and passes some constraints to the video stream, like video size. I've disabled audio and selected a sourceId to identify the rear-facing camera. NOTE: selecting a camera this way only works in Chrome 30+. On a desktop, the sourceInfo.facing property will be an empty string, even with multiple cameras attached. On a mobile device it will be either 'user' or 'environment', meaning the front-facing or rear-facing camera respectively.

And of course our stream handlers:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

function handleStream(stream) {
    var url = window.URL || window.webkitURL;
    // IMPORTANT: video element needs autoplay attribute or it will be frozen at first frame.
    sp.src = url ? url.createObjectURL(stream) : stream;
    sp.style.width = videoSize.width / 2 + 'px';
    sp.style.height = videoSize.height / 2 + 'px';
    sp.play();
}

function handleStreamError(error) {
    alert('error: ' + error);
    console.log(error);
}

[/pastacode]

Once we get our stream, we set the video element's src as well as its width and height. Make sure to include the "autoplay" attribute on the video element itself, or you'll only end up with the first frame of the video and it will look frozen, even if you call the video's "play" method.
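
For reference, here's roughly what the video element looks like in my markup. The class just needs to match the jQuery selector used in "initWebcam", and the autoplay attribute is what keeps the stream from looking frozen:

[pastacode lang="markup" message="" highlight="" provider="manual"]

<!-- autoplay keeps the stream from freezing on the first frame -->
<video class="streamPlayback" autoplay></video>

[/pastacode]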

Now we just need to add our globals:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

var sp, sourceId;

var videoSize = {
    width: 1920,
    height: 1080
};

[/pastacode]

I’ve made a small demo here:

DEMO

If you're on Android, all you need to do is allow the webcam. In the next lesson I'll explore writing the video element to a canvas so we can start to play with pixels, add some cool effects like Webcam Toy, and start reacting to things like edge detection and blob detection. Finally, we'll finish up with a simple facial recognition system and, ultimately, augmented reality.
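
As a little preview of that canvas step, the whole idea is just to blit the video into a 2D context on a requestAnimationFrame loop and then read the pixels back out. A minimal sketch, assuming a canvas element with a hypothetical class "streamCanvas" exists in the page:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// NOTE: assumes a <canvas class="streamCanvas"></canvas> in the page (hypothetical markup)
var canvas = $('.streamCanvas')[0];
var ctx = canvas.getContext('2d');

function drawFrame() {
    // copy the current video frame into the canvas
    ctx.drawImage(sp, 0, 0, canvas.width, canvas.height);
    // from here, ctx.getImageData() gives us raw pixels to play with
    requestAnimationFrame(drawFrame);
}
drawFrame();

[/pastacode]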

Web Speech API

Made a quick library (https://github.com/newshorts/WebSpeechJS) last week that's working all right for me. I'm thinking of making it an actual project on GitHub. Just wanted to post a little blurb about it here before I add documentation and make it a real thing.

It's called WebSpeechJS. Basically, it makes it easy to loop over the Web Speech API results (interim and final text) and pass them to a callback.

Use it like this:

[pastacode lang="markup" message="" highlight="" provider="manual"]

<!-- include jQuery and the js file -->
<script src="path/to/js/jquery.js"></script>
<script src="path/to/js/webspeech.js"></script>
<script>
    // wait for the DOM so the elements below exist before we query for them
    $(function() {
        var options = {
            startButton: $('.start'),
            stopButton: $('.stop'),
            tmpOutput: $('.tmp'),
            finalOutput: $('.final'),
            onResult: handleWebSpeechResult
        };
        var ws = new WebSpeech($, options);

        // start the processes
        ws.init();
    });
</script>

<!-- then in your page template, include the necessary elements -->
<p class="tmp"></p>
<p class="final"></p>
<button class="start">Start</button>
<button class="stop">Stop</button>

[/pastacode]
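
The "handleWebSpeechResult" callback above is just whatever you want to do with the recognized text; it isn't part of the library itself. Until the documentation is up, here's a rough sketch of what mine looks like. Treat the (text, isFinal) signature as an illustrative assumption and check the source on GitHub for the real arguments:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// rough sketch: the (text, isFinal) signature is an assumption, check the library source
function handleWebSpeechResult(text, isFinal) {
    if (isFinal) {
        // final results are what the recognizer has committed to
        console.log('final: ' + text);
    } else {
        // interim results update live while you're still speaking
        console.log('interim: ' + text);
    }
}

[/pastacode]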

That’s it for now, easy peasy.

DEMO

GSP First Floor

Just thought I’d capture the first floor in a 360 photo sphere and then put it up to show people our space!

DEMO

CSS Boxes

I have no idea. It was inspired by something I saw on a Google site, so I stole the images and rewrote the animation sequence.

DEMO

Prometheus Clock

A simple experiment using a ton of CSS3 animations. Hand-written so I could understand what was going on.

DEMO

My Room

A simple WebGL experiment where I cut a hole in my room so I can watch TV. I'm going to finish this by uploading a video of Battlefield instead of the demo video. Code to come later, but for now, check it out. I'm using a boilerplate I put together called BoilerGL.

DEMO

Dark World

My first real WebGL experiment using three.js. I've made some custom textures and will keep messing around with this until I get something super weird and crazy.

DEMO