xcode

Local Website in an iOS App

So you’re a master at HTML5 and you want to build a native app on iOS, but you don’t know Swift or Objective-C.

No worries, there are a ton of platforms out there to help with this. But with some basic knowledge of iOS you can build an app using mostly web technologies!

What we will do is build a locally hosted website, put it inside a native iOS app, and run them together to give the appearance of a native app, but built using HTML5.

You can download the project on my github: https://github.com/newshorts/LocalWebsiteInIOSApp

 

webgl

Timelapse Movie of Your Desktop While You’re Working

I want to start doing timelapses of coding while I’m working, so here goes:

First off, set up a terminal command to take a screenshot every 25 seconds:

i=1; while true; do screencapture -t jpg -x ~/Desktop/screens/$i.jpg; let i++; sleep 25; done

Next stitch together the screens with ffmpeg:

ffmpeg -framerate 1 -pattern_type glob -i "$HOME/Desktop/screens/*.jpg" -c:v libx264 out.mp4

That’s all!

iphone-site

Inline Video on the iPhone

This is an example of inline video playing on the iPhone.

Use cases include WebGL video textures, interactive video experiences, and video timing projects.

Please contribute to the project:

https://github.com/newshorts/InlineVideo/blob/master/js/inline-video.js

An example:

/*!
 * Inline Video Player v0.0.1
 * http://iwearshorts.com/
 *
 * Includes jQuery js
 * https://jquery.com/
 *
 * Copyright 2015 Mike Newell
 * Released under the MIT license
 * https://tldrlegal.com/license/mit-license
 *
 * Date: 2015-07-18
 * 
 * TODO: look for the webkit-playsinline playsinline attributes and replace videos on iphones with canvas
 * 
 */

var video = $('video')[0];
var canvas = $('canvas')[0];
var ctx = canvas.getContext('2d');
var lastTime = Date.now();
var animationFrame;
var framesPerSecond = 25;
function loop() {
    var time = Date.now();
    var elapsed = (time - lastTime) / 1000;

    // render at most framesPerSecond times per second
    if(elapsed >= (1 / framesPerSecond)) {
        video.currentTime = video.currentTime + elapsed;
        // size the canvas drawing buffer to the video, not just its CSS box
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        ctx.drawImage(video, 0, 0, video.videoWidth, video.videoHeight);
        lastTime = time;
    }

    // if we are at the end of the video stop
    var currentTime = (Math.round(parseFloat(video.currentTime)*10000)/10000);
    var duration = (Math.round(parseFloat(video.duration)*10000)/10000);
    if(currentTime >= duration) {
        console.log('currentTime: ' + currentTime + ' duration: ' + video.duration);
        return;
    }

    animationFrame = requestAnimationFrame(loop);
}

$('button').on('click', function() {
  // load() is asynchronous: wait for metadata so videoWidth/duration are set
  $(video).one('loadedmetadata', loop);
  video.load();
});

See the Pen InlineVideo by Mike Newell (@newshorts) on CodePen.

 

DEMO

code

How to Circumvent Annoying Adblock Messages

As you load a new site, a message appears telling you to disable your adblock plugin so you can see ads on their site. I hate this. It ruins my experience and makes me never want to come back. However, if you’re determined to get in, there’s hope. Many sites just put a popup overlay on the page to prevent you from clicking or browsing.

This is easy enough to fix with a little developer-tools action. Let’s see how:
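Here’s a sketch of the kind of thing you can paste into the console. The selector list and helper name are my own, hypothetical picks, since every site names its overlay differently: inspect the page first to find the real element.

```javascript
// Hypothetical console helper: strip full-page overlays and restore scrolling.
// The document object is a parameter only so the logic is easy to exercise
// outside a browser; in devtools you'd just pass the real `document`.
function removeOverlays(doc, selector) {
    var nodes = doc.querySelectorAll(selector || '.overlay, .modal, .adblock-message');
    var removed = 0;
    for (var i = 0; i < nodes.length; i++) {
        nodes[i].parentNode.removeChild(nodes[i]);
        removed++;
    }
    if (doc.body) {
        doc.body.style.overflow = 'auto'; // many sites also lock scrolling on <body>
    }
    return removed;
}

// in the devtools console you would just run:
// removeOverlays(document);
```
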

 

xcode

Calling UIWebView from JavaScript

You think you’re soooo smart making a responsive site that gets loaded into a webview so you can say you made a native app…clever. Until you realize you have to set some settings and need a pipeline between JavaScript and Objective-C!

This will help you out. There are a couple of libraries out there, but you can also roll your own:

From JavaScript to Objective-C

If you’re looking for something small and easy, you can do this:

  1. make your own protocol (instead of http:// you would use mike://)
  2. make a url request, which ultimately is an event you can look for with UIWebView
  3. process the request in UIWebView and handle any items from your scheme accordingly

So on the JavaScript side:

function execute(url) {
    var iframe = document.createElement("iframe");
    iframe.setAttribute("src", url);
    document.documentElement.appendChild(iframe);
    iframe.parentNode.removeChild(iframe);
    iframe = null;
}

execute('mike://this/is/a/custom/function');
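Building on execute(), you can sketch a tiny wrapper that assembles the custom-scheme URLs for you. This wrapper and its names are my own, not from a library; the executor is injected so the URL-building logic can be tried outside a browser:

```javascript
// Hypothetical helper: builds "mike://action?key=value" URLs and hands them
// to an executor (e.g. the execute() function above).
function nativeBridge(scheme, execute) {
    return function (action, params) {
        var query = Object.keys(params || {}).map(function (k) {
            return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
        }).join('&');
        execute(scheme + '://' + action + (query ? '?' + query : ''));
    };
}

// usage: var callNative = nativeBridge('mike', execute);
//        callNative('log', { msg: 'hi there' });  // loads mike://log?msg=hi%20there
```
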

And on the Objective-C side:

- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
    NSURL *URL = [request URL];
    if ([[URL scheme] isEqualToString:@"mike"]) {
        // parse the rest of the URL object and execute functions
        NSLog(@"received url with scheme: %@", [URL scheme]);
        // cancel the load: we've handled this request ourselves
        return NO;
    }
    return YES;
}

// also make sure you've declared the delegate protocol in ViewController.h
@interface ViewController : UIViewController <UIWebViewDelegate>

// and in ViewController.m
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    
    [webView setDelegate:self];
}

The reason you use iframes instead of “window.location” is that there are limits on how fast you can change window.location. Loading iframes and then removing them from the document right after creation gets around that request limit. There are similar hacks dealing with DDoS attacks that use the same approach, and in this case it lets you call Objective-C much more often.

Also, props to this guy.

From Objective-C to JavaScript

So for this you can actually execute functions and shit from Objective-C. You just need to make sure the function calls are in the right scope, so if you have something defined on window it would just be:

[webView stringByEvaluatingJavaScriptFromString:@"testObjectiveCMethod();"]; 

Usually, you will want to execute these functions after the window has had a chance to load so:

// webview finished loading
- (void) webViewDidFinishLoad:(UIWebView *)__webView
{
    //Execute javascript method or pure javascript if needed
    [webView stringByEvaluatingJavaScriptFromString:@"testObjectiveCMethod();"];
}

Then on the front end, have an alert or something to show you it worked:

<script>
    function testObjectiveCMethod() {
        console.log('things and stuff');
        alert('things and stuff');
    }
</script>

That’s all

bash

JavaScript Object Cloning Without jQuery

If I really need to clone an object (almost never) I just use jQuery.extend – because someone far better than me wrote the cloning function and I trust them :)

However, I recently discovered this nifty little process that can be used in a pinch. It might be expensive for large operations, but it’ll work on your smaller objects/arrays:

var obj2 = JSON.parse(JSON.stringify(obj));

So here’s an exploration of that process compared to something like “.slice(0)” as well:

var arr = [1,2,3];
var arr2 = JSON.parse(JSON.stringify(arr));
arr2.reverse();
var arr3 = arr.slice(0);
arr3.push(5);

console.log(arr);
console.log(arr2);
console.log(arr3);

var str = 1.004;
var str2 = str;
str2 = 2;

console.log(str2);
console.log(str +"1");

var obj = {a: 100};
var obj2 = JSON.parse(JSON.stringify(obj));
obj2.a = 200;

console.log(obj2);
console.log(obj);
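One caveat worth knowing before reaching for this in a pinch: the JSON round-trip silently drops anything JSON can’t represent. A quick sketch:

```javascript
// Anything JSON can't serialize is lost or mangled in the round-trip.
var src = {
    num: 1,
    when: new Date(0),                  // Dates survive only as ISO strings
    fn: function () { return 42; },     // functions are dropped entirely
    missing: undefined                  // undefined properties are dropped too
};
var copy = JSON.parse(JSON.stringify(src));

console.log(typeof copy.when);  // 'string'
console.log('fn' in copy);      // false
console.log('missing' in copy); // false
```

So it’s fine for plain data objects and arrays, but not for anything holding functions, Dates you care about, or circular references (those make JSON.stringify throw outright).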


See the Pen references by Mike Newell (@newshorts) on CodePen.

 

htc

Augmented Reality with getUserMedia and Mobile: Part 1

Hey y’all,

This is going to be a multipart series about using getUserMedia on a mobile device to get a camera stream and ultimately use it for augmented reality. Eventually I will introduce some WebGL components, but for now let’s get up and running.

First, let’s take a look at compatibility: http://caniuse.com/#feat=stream. Make sure you’re not using it for anything that needs to ship within the next year on iPhone (Apple only just introduced WebGL in iOS 8, so it could be a while, if ever, before they allow native webcam streams from the browser). For now, this technique should only be used on recent Android devices.

OK, let’s get started by looking at the available media sources:

navigator.getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);

function init() {
    if (typeof MediaStreamTrack === 'undefined') {
        alert('This browser does not support MediaStreamTrack.\n\nTry Chrome Canary.');
    } else {
        MediaStreamTrack.getSources(gotSources);
    }
}

function gotSources(sourceInfos) {
    for (var i = 0; i < sourceInfos.length; i++) {
        var si = sourceInfos[i];
        if (si.kind === 'video') {
            // could also face the 'user'
            if (si.facing === 'environment' || si.facing.length === 0) {
                sourceId = si.id;
                // init webcam
                initWebcam();
            }
            console.log(si);
        }
    }
}
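The selection logic in gotSources can also be pulled out into a small pure function (my own refactor, not from the original code), which makes the facing rules easier to reason about and test:

```javascript
// Prefer the rear ('environment') camera; fall back to a video source with no
// facing info, since desktops report an empty string even with several cameras.
function pickVideoSourceId(sourceInfos) {
    for (var i = 0; i < sourceInfos.length; i++) {
        var si = sourceInfos[i];
        if (si.kind !== 'video') continue;
        if (si.facing === 'environment' || si.facing.length === 0) {
            return si.id;
        }
    }
    return null; // e.g. only a front ('user') camera was found
}
```

With that, gotSources boils down to: get the id, and if it isn’t null, call initWebcam.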

Now let’s make that “initWebcam” function:

function initWebcam() {
    if (navigator.getUserMedia) {
        sp = $('.streamPlayback')[0];
    } else {
        alert('no user media!');
        return;
    }

    var mandatory = {
        mandatory: {
            maxWidth: videoSize.width,
            maxHeight: videoSize.height
        },
        optional: [
            { sourceId: sourceId }
        ]
    };
    var options = {
        video: mandatory,
        audio: false
    };
    navigator.getUserMedia(options, handleStream, handleStreamError);
}

This sets up a user media stream and passes some constraints to the camera, like video size. I’ve disabled audio and selected a sourceId to pick a specific camera. NOTE: selecting a camera this way only works in Chrome 30+. On a desktop the sourceInfo.facing value will be an empty string, even with multiple cameras. On a mobile device it will be either ‘user’ or ‘environment’, meaning the front (user-facing) or rear (world-facing) camera, respectively.

And of course our stream handlers:

function handleStream(stream) {
    var url = window.URL || window.webkitURL;
    // IMPORTANT: video element needs autoplay attribute or it will be frozen at first frame.
    sp.src = url ? url.createObjectURL(stream) : stream;
    sp.style.width = videoSize.width / 2 + 'px';
    sp.style.height = videoSize.height / 2 + 'px';
    sp.play();
}

function handleStreamError(error) {
    alert('error: ' + error);
    console.log(error);
}

Once we get our stream, we set the video element’s width and height as well as the src. Make sure to include “autoplay” in the video element itself or you will only end up with the first frame of the video and it will look frozen, even if you call the video’s “play” method.

Now we just need to add our globals:

var sp, sourceId;

var videoSize = {
    width: 1920,
    height: 1080
};

I’ve made a small demo here:


 

DEMO

All you need to do is allow the webcam (if you are on Android). In the next lesson I’ll explore writing the video element to a canvas so we can start to play with pixels, add some cool effects like Webcam Toy, and start reacting to things like edge detection and blob detection. Finally, we’ll finish up with a simple facial recognition system and, ultimately, augmented reality.

 

 

turtlebot

Get Your Turtlebot on mDNS with a .local Address

I have a turtlebot here at work.

I am sick and tired of sshing into randomly assigned IP addresses. Luckily, there’s something called mDNS which allows you to reserve a hostname.local for your turtlebot. Instead of sshing to something like:

ssh turtlebot@10.20.1.113

You can ssh to:

ssh turtlebot@turtlebot.local

So easy you could make an alias in your bash_profile!
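For example (the alias name “tb” is just my pick, use whatever you like):

```shell
# in ~/.bash_profile -- "tb" is an arbitrary alias name
alias tb='ssh turtlebot@turtlebot.local'
```
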

Here’s how you do it:

Always a good idea to get an update and then get the package:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install avahi-daemon

Then you need to make some changes on your turtlebot machine:

sudo pico /etc/hosts
# edit the line 127.0.1.1 to whatever you want the name to be
127.0.1.1    ___NAME___

sudo pico /etc/hostname
# edit this file to the same name as above
___NAME___

# now you need to tell avahi to set a .local address
sudo pico /etc/avahi/avahi-daemon.conf
# uncomment the following line
domain-name=local

Now just restart the service and reboot your computer!

sudo service avahi-daemon restart
sudo reboot

 

 

htc

HTC One M9 – I’d wait for something else…

I bought an HTC One M9. Having previously owned an M7, I thought I was in for a wonderful time. Nope.

The following has been my experience:

  • The camera is alright, but I felt the M7 camera was better, even though this one shoots in 4K. It’s also extremely slow if you save everything to a memory card instead of internal memory.
  • It gets way too hot. As I write this my phone is asleep, I haven’t used it at all today, I’m not uploading or syncing…and my phone is still hot. It’s obviously working on something in the background that I can’t control, which means the battery will be dead in about 4 hours. Thanks, HTC.
  • It’s slippery. The old brushed aluminum has been upgraded to polished aluminum, which looks nice, but the fucking thing is like a bar of soap when you take it out of your pocket. Especially if you have dry hands.
  • The overall experience is slow. Animations are delayed, I have to wait 2-5 seconds after waking my phone before the screen appears, and there’s about a second of lag between my tap and something happening on screen.
  • Constant airplane mode. The only way I’ve found to keep my phone alive all day is to put it in airplane mode (which means I can’t really use it). Basically, I can’t use my phone at all if I want it to last the 9 hours HTC said it would.

I’m so frustrated with the overall experience that I may actually spend the money to go back to iPhone. I already used my upgrade on my HTC, so I’ll have to pay full price. Even then, it may ultimately be worth it.

HTC, you won me as a customer with the M7 and lost me with the M9, I’ll never take a risk on your phones again.