Augmented Reality with getUserMedia and Mobile: Part 1

Hey y’all,

This is going to be a multipart series on using getUserMedia on a mobile device to get a camera stream and ultimately use it for augmented reality. Eventually I’ll introduce some WebGL components, but for now let’s get up and running.

First, let’s take a look at compatibility: http://caniuse.com/#feat=stream. Make sure you’re not using this for anything that needs to ship on iPhone within the next year or so (Apple only just introduced WebGL in iOS 8, so it could be a long time, if ever, before they allow native webcam streams from the browser). For now, treat this as something for cutting-edge Android devices.

Ok let’s get started by looking at the available media sources:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

navigator.getUserMedia = (navigator.getUserMedia ||
                          navigator.webkitGetUserMedia ||
                          navigator.mozGetUserMedia ||
                          navigator.msGetUserMedia);

function init() {
    if (typeof MediaStreamTrack === 'undefined') {
        alert('This browser does not support MediaStreamTrack.\n\nTry Chrome Canary.');
    } else {
        MediaStreamTrack.getSources(gotSources);
    }
}

function gotSources(sourceInfos) {
    for (var i = 0; i < sourceInfos.length; i++) {
        var si = sourceInfos[i];
        if (si.kind === 'video') {
            console.log(si);
            // prefer the rear-facing camera; on desktop, facing is an empty
            // string (we could also match 'user' for the front camera)
            if (si.facing === 'environment' || si.facing.length === 0) {
                sourceId = si.id;
                initWebcam();
                break; // stop at the first match so we only init once
            }
        }
    }
}

[/pastacode]
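As an aside, browsers have since been standardizing on navigator.mediaDevices.enumerateDevices() as the replacement for MediaStreamTrack.getSources. A rough sketch of the equivalent enumeration (assuming a browser that ships the newer API; the demo itself uses the getSources path above):

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// Sketch only: the standardized replacement for MediaStreamTrack.getSources.
// Requires a browser that supports navigator.mediaDevices.
navigator.mediaDevices.enumerateDevices().then(function(devices) {
    devices.forEach(function(device) {
        // device.kind is 'videoinput' for cameras (vs. 'video' above)
        if (device.kind === 'videoinput') {
            console.log(device.label, device.deviceId);
        }
    });
});

[/pastacode]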

Now let’s make that “initWebcam” function:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

function initWebcam() {
    if (navigator.getUserMedia) {
        // grab the <video class="streamPlayback"> element (jQuery)
        sp = $('.streamPlayback')[0];
    } else {
        alert('no user media!');
        return;
    }

    // cap the stream size and request the camera we picked in gotSources
    var videoConstraints = {
        mandatory: {
            maxWidth: videoSize.width,
            maxHeight: videoSize.height
        },
        optional: [
            {sourceId: sourceId}
        ]
    };

    var options = {
        video: videoConstraints,
        audio: false
    };
    navigator.getUserMedia(options, handleStream, handleStreamError);
}

[/pastacode]

This sets up a user media request and passes some constraints to the video camera stream, like the video size. I’ve disabled audio and selected a sourceId to pick out the rear-facing camera. NOTE: selecting a camera this way only works on Chrome 30+. On a desktop, the sourceInfo.facing property will be an empty string, even with multiple cameras attached. On a mobile device it will be either ‘user’ or ‘environment’, meaning the front-facing or rear-facing camera respectively.
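For what it’s worth, the constraint syntax has kept evolving: in browsers that ship the standardized mediaDevices API you can ask for the rear camera directly with a facingMode constraint instead of hunting for a sourceId. A sketch (not what the demo uses):

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// Sketch only: standardized constraint syntax for newer browsers.
navigator.mediaDevices.getUserMedia({
    audio: false,
    video: { facingMode: 'environment' } // 'user' for the front camera
}).then(handleStream).catch(handleStreamError);

[/pastacode]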

And of course our stream handlers:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

function handleStream(stream) {
    var url = window.URL || window.webkitURL;
    // IMPORTANT: the video element needs the autoplay attribute or it will
    // be frozen at the first frame.
    sp.src = url ? url.createObjectURL(stream) : stream;
    sp.style.width = videoSize.width / 2 + 'px';
    sp.style.height = videoSize.height / 2 + 'px';
    sp.play();
}

function handleStreamError(error) {
    alert('error: ' + error);
    console.log(error);
    return;
}

[/pastacode]

Once we get our stream, we set the video element’s width and height as well as the src. Make sure to include “autoplay” in the video element itself or you will only end up with the first frame of the video and it will look frozen, even if you call the video’s “play” method.
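For reference, the markup I’m assuming for the player is just a bare video tag with that class and the autoplay attribute:

[pastacode lang="markup" message="" highlight="" provider="manual"]

<!-- autoplay is the important part; without it you get a frozen first frame -->
<video class="streamPlayback" autoplay></video>

[/pastacode]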

Now we just need to add our globals:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

var sp, sourceId;

var videoSize = {
    width: 1920,
    height: 1080
};

[/pastacode]

I’ve made a small demo here:


DEMO

All you need to do is allow the webcam, if you are on Android. In the next lesson I’ll explore drawing the video element to a canvas so we can start to play with pixels, add some cool effects like Webcam Toy, and start reacting to things like edge detection and blob detection. Finally, we’ll finish up with a simple facial recognition system and, ultimately, augmented reality.
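As a small preview of where that’s headed, pulling pixels out of the video is just a matter of drawing it to a canvas and reading them back. A sketch, assuming a hypothetical canvas element with class “streamCanvas” sits alongside the video:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// Sketch only: copy the current video frame into a canvas and read its pixels.
var canvas = $('.streamCanvas')[0]; // hypothetical <canvas class="streamCanvas">
canvas.width = videoSize.width / 2;
canvas.height = videoSize.height / 2;
var ctx = canvas.getContext('2d');

function grabFrame() {
    // draw the current video frame, then read the RGBA bytes back out
    ctx.drawImage(sp, 0, 0, canvas.width, canvas.height);
    var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    // frame.data is a byte array we can run effects or detection over
    requestAnimationFrame(grabFrame);
}
grabFrame();

[/pastacode]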

