MPMoviePlayerController Black Screen When Fading

Let’s say you want to animate a new view controller in and immediately start playing a video. You might think it’s as easy as taking a static image of the video’s first frame as the background and then crossfading the video over top when it’s ready to play…right?

Kinda.

This was a 24-hour bug for me. I kept seeing a black screen before the video loaded. I tried adjusting alphas, adding delays before playing the video, even changing the background color of the video’s own view! Nothing worked.

Normally many of these solutions would seem right! It turns out you need to look back in time to find the answer. In iOS 6, MPMoviePlayerController was given a new event to catch. For those of us using “MPMoviePlayerLoadStateDidChangeNotification”: you’re doing it wrong. That load event tells you when the video content is loaded and can play all the way through; however, it does NOT tell you when your movie is ready for the display to handle it.

There’s a newer notification to watch for, called “MPMoviePlayerReadyForDisplayDidChangeNotification”, and it’s your silver bullet. It only fires after the video has loaded…and is ready for the display!

Set everything up like you normally would, then add your observer:

[pastacode lang="c" message="" highlight="" provider="manual"]

[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(movieReadyForDisplayChanged:) name:MPMoviePlayerReadyForDisplayDidChangeNotification object:nil];

[/pastacode]

Then add your handler:

[pastacode lang="c" message="" highlight="" provider="manual"]

- (void) movieReadyForDisplayChanged: (NSNotification *) sentNotification
{
    // the notification fires on any change, so confirm the player is actually
    // ready for display and can play through before fading it in
    if (player.readyForDisplay && (player.loadState & MPMovieLoadStatePlaythroughOK)) {
        [UIView animateWithDuration:0.1 delay:0.1 options:UIViewAnimationOptionCurveLinear animations:^{
            // fade the player view in over the placeholder image
            [[player view] setAlpha:1.0];
        } completion:^(BOOL finished) {
            // start playback once the fade has finished
            [player play];
        }];
    }
}

[/pastacode]

Now your video should only fade in when it’s ready for display. After fading, you actually call “play” on the controller.

I’ll make a video and post it later…

 

Local Website in an iOS App

So you’re a master at HTML5 and you want to build a native app on iOS, but you don’t know Swift or Objective-C.

No worries, there are a ton of platforms out there to help with this. But with some basic knowledge of iOS you can build an app using mostly web technologies!

What we will do is build a locally hosted website, put it inside a native iOS app, and run the two together to give the appearance of a native app that’s built using HTML5.

You can download the project on my github: https://github.com/newshorts/LocalWebsiteInIOSApp

 

Timelapse Movie of Your Desktop While You’re Working

I want to start doing timelapses of coding while I’m working, so here goes:

First off, set up a terminal command to take a screenshot every 25 seconds:

[pastacode lang="bash" message="" highlight="" provider="manual"]

# screenshots land in ~/Desktop/screens, matching the ffmpeg glob below
mkdir -p ~/Desktop/screens; i=1; while true; do screencapture -t jpg -x ~/Desktop/screens/$i.jpg; let i++; sleep 25; done

[/pastacode]

Next stitch together the screens with ffmpeg:

[pastacode lang="bash" message="" highlight="" provider="manual"]

# use $HOME instead of ~ so the path expands while the glob is left for ffmpeg
ffmpeg -framerate 1 -pattern_type glob -i "$HOME/Desktop/screens/*.jpg" -c:v libx264 out.mp4

[/pastacode]

That’s all!

Inline Video on the iPhone

This is an example of inline video playing on the iPhone.

Use cases include webGL video textures, interactive video experiences, and video timing projects.

Please contribute to the project:

https://github.com/newshorts/InlineVideo/blob/master/js/inline-video.js

An example:

/*!
 * Inline Video Player v0.0.1
 * https://iwearshorts.com/
 *
 * Includes jQuery js
 * https://jquery.com/
 *
 * Copyright 2015 Mike Newell
 * Released under the MIT license
 * https://tldrlegal.com/license/mit-license
 *
 * Date: 2015-18-07
 * 
 * TODO: look for the webkit-playsinline playsinline attributes and replace videos on iphones with canvas
 * 
 */

var video = $('video')[0];
var canvas = $('canvas')[0];
var ctx = canvas.getContext('2d');
var lastTime = Date.now();
var animationFrame;
var framesPerSecond = 25;
function loop() {
    var time = Date.now();
    var elapsed = (time - lastTime) / 1000;

    // render at most framesPerSecond times per second
    if(elapsed >= (1 / framesPerSecond)) {
        video.currentTime = video.currentTime + elapsed;
        // size the canvas's drawing buffer (not its CSS box) to the video
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        ctx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);
        lastTime = time;
    }

    // if we are at the end of the video stop
    var currentTime = (Math.round(parseFloat(video.currentTime)*10000)/10000);
    var duration = (Math.round(parseFloat(video.duration)*10000)/10000);
    if(currentTime >= duration) {
        console.log('currentTime: ' + currentTime + ' duration: ' + video.duration);
        return;
    }

    animationFrame = requestAnimationFrame(loop);
}

$('button').on('click', function() {
  video.load();
  loop();
});

See the Pen InlineVideo by Mike Newell (@newshorts) on CodePen.

 

DEMO

How to Circumvent Annoying Adblock Messages

As you load a new site, a message appears telling you to disable your adblock plugin so you can see their ads. I hate this. It ruins my experience and makes me never want to come back. However, if you’re determined to get in, there’s hope: many sites just put a popup overlay on the page to keep you from clicking/browsing.

That’s easy enough to fix with a little developer-tools action. Let’s see how:
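
Most of these popups are just a fixed-position overlay plus a scroll lock on the body, so the console version looks something like this (a minimal sketch: the .adblock-overlay selector is hypothetical, so use the element inspector to find the site’s real class name):

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// remove the overlay (.adblock-overlay is a hypothetical class name)
var overlays = document.querySelectorAll('.adblock-overlay');
for (var i = 0; i < overlays.length; i++) {
    overlays[i].parentNode.removeChild(overlays[i]);
}

// most sites also lock scrolling on the body while the popup is up
document.body.style.overflow = 'auto';

[/pastacode]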

 

Calling UIWebView from JavaScript

You think you’re soooo smart making a responsive site that gets loaded into a webview so you can say you made a native app…clever. Until you realize you have to set some settings and need a pipeline between JavaScript and Objective-C!

This will help you out. There are a couple of libraries out there:

From JavaScript to Objective-C

If you’re looking for something small and easy, do this:

  1. make your own URL scheme (instead of http:// you would use mike://)
  2. make a url request, which ultimately is an event you can look for with UIWebView
  3. process the request in UIWebView and handle any items from your scheme accordingly

So on the JavaScript side:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

function execute(url) {
    var iframe = document.createElement("iframe");
    iframe.setAttribute("src", url);
    document.documentElement.appendChild(iframe);
    iframe.parentNode.removeChild(iframe);
    iframe = null;
}

execute('mike://this/is/a/custom/function');

[/pastacode]

And on the Objective-C side:

[pastacode lang="c" message="" highlight="" provider="manual"]

- (BOOL) webView:(UIWebView*)webView shouldStartLoadWithRequest:(NSURLRequest*)request navigationType:(UIWebViewNavigationType)navigationType
{
    NSURL *URL = [request URL];
    if ([[URL scheme] isEqualToString:@"mike"]) {
        // parse the rest of the URL object and execute functions
        NSLog(@"received url with scheme: %@", [URL scheme]);
        // returning NO keeps the webview from actually trying to load mike:// URLs
        return NO;
    }
    return YES;
}

// also make sure you've implemented the delegate method ViewController.h
@interface ViewController : UIViewController <UIWebViewDelegate>

// and in ViewController.m
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    
    [webView setDelegate:self];
}

[/pastacode]

The reason you use iframes instead of “window.location” is that there are limits on how quickly you can change window.location: fire two requests in quick succession and the first gets cancelled. Loading an iframe per request and dumping it out of the document right after creation negates that limit. Similar rapid-fire request hacks show up elsewhere (including in DDoS tooling), and in this case it lets you call Objective-C much more often.
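
To see the difference, here’s a quick hypothetical using the execute() helper from above: fired in a tight loop, every call reaches the native side, whereas consecutive window.location assignments would cancel each other:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// each of these produces its own request, so each one hits
// shouldStartLoadWithRequest: on the Objective-C side
for (var i = 0; i < 5; i++) {
    execute('mike://log/message/' + i);
}

[/pastacode]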

Also, props to this guy.

From Objective-C to JavaScript

For this direction you can actually execute functions straight from Objective-C. You just need to make sure the function calls are in the right scope; if you have something defined on window it would just be:

[pastacode lang="c" message="" highlight="" provider="manual"]

[webView stringByEvaluatingJavaScriptFromString:@"testObjectiveCMethod();"]; 

[/pastacode]

Usually you will want to execute these functions after the page has had a chance to load, so:

[pastacode lang="c" message="" highlight="" provider="manual"]

// webview finished loading
- (void) webViewDidFinishLoad:(UIWebView *)__webView
{
    //Execute javascript method or pure javascript if needed
    [webView stringByEvaluatingJavaScriptFromString:@"testObjectiveCMethod();"];
}

[/pastacode]

Then on the front end, have an alert or something to show you it worked:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

<script>
    function testObjectiveCMethod() {
        console.log('things and stuff');
        alert('things and stuff');
    }
</script>

[/pastacode]

That’s all

JavaScript Object Cloning Without jQuery

If I really need to clone an object (almost never) I just use jQuery.extend – because someone far better than me wrote the cloning function and I trust them 🙂
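
For reference, that jQuery one-liner looks like this (passing true as the first argument makes the copy deep; original here stands in for whatever object you’re cloning):

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// deep clone: `true` tells jQuery.extend to copy nested objects recursively
var clone = $.extend(true, {}, original);

[/pastacode]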

However, I recently discovered this nifty little process that can be used in a pinch. It might be expensive for large operations, but it’ll work on your smaller objects/arrays:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

var obj2 = JSON.parse(JSON.stringify(obj));

[/pastacode]
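
One caveat before you lean on this trick: a JSON round-trip only keeps what JSON can represent, so functions, undefined values, and Dates don’t survive intact. A quick sketch:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

var source = {
    when: new Date(),
    greet: function() { return 'hi'; },
    missing: undefined
};
var copy = JSON.parse(JSON.stringify(source));

console.log(typeof copy.when);   // "string" -- the Date was serialized, not cloned
console.log(copy.greet);         // undefined -- functions are dropped
console.log('missing' in copy);  // false -- undefined values are dropped

[/pastacode]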

So here’s an exploration of that process compared to something like “.slice(0)” as well:

var arr = [1,2,3];
var arr2 = JSON.parse(JSON.stringify(arr));
arr2.reverse();
var arr3 = arr.slice(0);
arr3.push(5);

console.log(arr);  // [1, 2, 3] -- the original is untouched
console.log(arr2); // [3, 2, 1]
console.log(arr3); // [1, 2, 3, 5]

// primitives are copied by value, so no cloning needed
var num = 1.004;
var num2 = num;
num2 = 2;

console.log(num2);       // 2
console.log(num + "1");  // "1.0041" -- string concatenation, not addition

var obj = {a: 100};
var obj2 = JSON.parse(JSON.stringify(obj));
obj2.a = 200;

console.log(obj2); // {a: 200}
console.log(obj);  // {a: 100} -- the clone is independent


See the Pen references by Mike Newell (@newshorts) on CodePen.

 

Augmented Reality with getUserMedia and Mobile: Part 1

Hey y’all,

This is going to be a multipart series about using getUserMedia on a mobile device to grab a camera stream and ultimately use it for augmented reality. Eventually I will introduce some webGL components, but for now let’s get up and running.

First let’s take a look at compatibility: http://caniuse.com/#feat=stream. Make sure you’re not using this for something that has to ship on iPhone within the next year (Apple only just introduced webGL in iOS 8, so it could be a long time, if ever, before they allow native webcam streams from the browser). For now this technique should only be used on cutting-edge Android devices.

Ok let’s get started by looking at the available media sources:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

navigator.getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);

function init() {
    if (typeof MediaStreamTrack === 'undefined'){
        alert('This browser does not support MediaStreamTrack.\n\nTry Chrome Canary.');
    } else {
        MediaStreamTrack.getSources(gotSources);
    }
}

function gotSources(sourceInfos) {
    for(var i = 0; i < sourceInfos.length; i++) {
        var si = sourceInfos[i];
        if(si.kind === 'video') {
            // could also face the 'user'
            if(si.facing === 'environment' || si.facing.length === 0) {
                sourceId = si.id;
                // init webcam
                initWebcam();
            }
            console.log(si);
        }
    }
}

[/pastacode]

Now let’s make that “initWebcam” function:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

function initWebcam() {
    if(navigator.getUserMedia) {
        sp = $('.streamPlayback')[0];
    } else {
        alert('no user media!');
        return;
    }

    var mandatory = {
        mandatory: {
            maxWidth: videoSize.width,
            maxHeight: videoSize.height
        },
        optional: [
            {sourceId: sourceId}
        ]
    };
    var options = {
        video: mandatory,
        audio: false
    };
    navigator.getUserMedia(options, handleStream, handleStreamError);
}

[/pastacode]

This sets up a user stream and passes some constraints to the video camera stream, like video size. I’ve disabled audio and selected a sourceId to pick the rear-facing camera. NOTE: selecting a camera in this way only works on Chrome 30+. On a desktop, the sourceInfo.facing variable will be an empty string, even with multiple cameras. On a mobile device it will be either ‘user’ or ‘environment’: the front-facing or rear-facing camera respectively.

And of course our stream handlers:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

function handleStream(stream) {
    var url = window.URL || window.webkitURL;
    // IMPORTANT: video element needs autoplay attribute or it will be frozen at first frame.
    sp.src = url ? url.createObjectURL(stream) : stream;
    sp.style.width = videoSize.width / 2 + 'px';
    sp.style.height = videoSize.height / 2 + 'px';
    sp.play();
}

function handleStreamError(error) {
    alert('error: ' + error);
    console.log(error);
    return;
}

[/pastacode]

Once we get our stream, we set the video element’s width and height as well as its src. Make sure to include “autoplay” in the video element itself or you will only end up with the first frame and the video will look frozen, even if you call its “play” method.
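
If you’re generating the element rather than writing it in markup, here’s a small sketch (the streamPlayback class matches the $('.streamPlayback') selector used in initWebcam above):

[pastacode lang="javascript" message="" highlight="" provider="manual"]

// create the video element with autoplay already set so the
// stream doesn't appear frozen on its first frame
var vid = document.createElement('video');
vid.className = 'streamPlayback';
vid.setAttribute('autoplay', '');
document.body.appendChild(vid);

[/pastacode]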

Now we just need to add our globals:

[pastacode lang="javascript" message="" highlight="" provider="manual"]

var sp, sourceId;

var videoSize = {
    width: 1920,
    height: 1080
};

[/pastacode]

I’ve made a small demo here:


 

DEMO

All you need to do is allow the webcam, if you are on Android. In the next lesson I’ll explore writing the video element to a canvas so we can start to play with pixels, add some cool effects like Webcam Toy, and start reacting to things like edge detection and blob detection. Finally, we’ll finish up with a simple facial recognition system and, ultimately, augmented reality.

Get Your Turtlebot on mDNS with a .local Address

I have a turtlebot here at work.

I am sick and tired of SSHing into randomly assigned IP addresses. Luckily there’s something called mDNS, which allows you to reserve a hostname.local address for your turtlebot. Instead of SSHing to something like:

[pastacode lang="bash" message="" highlight="" provider="manual"]

ssh turtlebot@10.20.1.113

[/pastacode]

You can ssh to:

[pastacode lang="bash" message="" highlight="" provider="manual"]

ssh turtlebot@turtlebot.local

[/pastacode]

So easy you could make an alias in your ~/.bash_profile (something like alias tb='ssh turtlebot@turtlebot.local')!

Here’s how you do it:

Always a good idea to get an update and then get the package:

[pastacode lang="bash" message="" highlight="" provider="manual"]

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install avahi-daemon

[/pastacode]

Then you need to make some changes on your turtlebot machine:

[pastacode lang="bash" message="" highlight="" provider="manual"]

sudo pico /etc/hosts
# edit the line 127.0.1.1 to whatever you want the name to be
127.0.1.1    ___NAME___

sudo pico /etc/hostname
# edit this file to the same name as above
___NAME___

# now you need to tell avahi to set a .local address
sudo pico /etc/avahi/avahi-daemon.conf
# uncomment the following line
domain-name=local

[/pastacode]

Now just restart the service and reboot your computer!

[pastacode lang="bash" message="" highlight="" provider="manual"]

sudo service avahi-daemon restart
sudo reboot

[/pastacode]