How to Circumvent Annoying Adblock Messages

As you load a new site, a message appears telling you to disable your adblock plugin so you can see ads on their site. I hate this. It ruins my experience and makes me never want to come back. However, if you’re determined to get in, there’s hope. Many sites just put a popup overlay on the page to keep you from clicking or browsing.

Easy enough to fix with a little developer tools action. Let’s see how:
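A minimal console sketch of the idea (assumptions on my part: the overlay is a fixed-position element with a high z-index, and the site locks body scrolling; real sites vary, so inspect the page for the actual selectors):

```javascript
// Heuristic: blocking overlays are usually position:fixed with a huge z-index.
// The check takes a plain style object so the logic is easy to test anywhere.
function looksLikeOverlay(style) {
    return style.position === 'fixed' && parseInt(style.zIndex, 10) >= 1000;
}

// Paste into the devtools console (guarded so it no-ops outside a browser):
if (typeof document !== 'undefined') {
    Array.prototype.slice.call(document.querySelectorAll('body *')).forEach(function (el) {
        if (looksLikeOverlay(window.getComputedStyle(el))) {
            el.parentNode.removeChild(el);
        }
    });
    // many sites also disable scrolling while the popup is up
    document.body.style.overflow = 'auto';
}
```

The z-index threshold is a guess; some sites use modest values, so tweak it if the overlay survives.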

 

Calling UIWebView from JavaScript

You think you’re soooo smart making a responsive site that gets loaded into a webview so you can say you made a native app… clever. Until you realize you have to change some settings and need a pipeline between JavaScript and Objective-C!

This will help you out. There are a couple of libraries out there, but the roll-your-own version is simple:

From JavaScript to Objective-C

If you’re looking for something small and easy, you can do this:

  1. make your own protocol (instead of http:// you would use mike://)
  2. make a url request, which ultimately is an event you can look for with UIWebView
  3. process the request in UIWebView and handle any items from your scheme accordingly

So on the JavaScript side:

function execute(url) {
    var iframe = document.createElement("iframe");
    iframe.setAttribute("src", url);
    document.documentElement.appendChild(iframe);
    iframe.parentNode.removeChild(iframe);
    iframe = null;
}

execute('mike://this/is/a/custom/function');

and on the Objective-C side:

- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
    NSURL *URL = [request URL];
    if ([[URL scheme] isEqualToString:@"mike"]) {
        // parse the rest of the URL object and execute functions
        NSLog(@"received url with scheme: %@", [URL scheme]);
        // return NO so the webview doesn't actually try to load mike://
        return NO;
    }
    return YES;
}

// also make sure you've declared the delegate protocol in ViewController.h
@interface ViewController : UIViewController <UIWebViewDelegate>

// and in ViewController.m
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    
    [webView setDelegate:self];
}
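Inside that scheme check you’ll usually want to split the URL into a command and arguments. Here’s the parsing logic sketched in JavaScript for clarity (the function name `parseCustomUrl` is mine; on the Objective-C side, NSURL’s `host` and `pathComponents` give you the same pieces):

```javascript
// Sketch: turn 'mike://this/is/a/custom/function' into a command + args.
function parseCustomUrl(url) {
    var m = url.match(/^mike:\/\/([^\/]+)(\/.*)?$/);
    if (!m) return null; // not our scheme
    return {
        command: m[1],                                // maps to NSURL's host
        args: (m[2] || '').split('/').filter(Boolean) // maps to pathComponents
    };
}

parseCustomUrl('mike://this/is/a/custom/function');
// → { command: 'this', args: ['is', 'a', 'custom', 'function'] }
```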

The reason you use iframes instead of window.location is that there are limits on how fast you can set window.location. If you instead load iframes and remove them from the document right after creation, you sidestep the URL request limit. Similar hacks show up all over in discussions of DDoS techniques, and in this case it lets you call Objective-C much more often.

Also, props to this guy.

From Objective-C to JavaScript

So for this one you can actually execute functions and shit from Objective-C. You just need to make sure the function calls are in the right scope; if you have something defined on window it would just be:

[webView stringByEvaluatingJavaScriptFromString:@"testObjectiveCMethod();"]; 

Usually, you will want to execute these functions after the window has had a chance to load so:

// webview finished loading
- (void)webViewDidFinishLoad:(UIWebView *)webView
{
    // Execute a JavaScript method, or pure JavaScript if needed
    [webView stringByEvaluatingJavaScriptFromString:@"testObjectiveCMethod();"];
}

Then on the front end, have an alert or something to show you it worked:

<script>
    function testObjectiveCMethod() {
        console.log('things and stuff');
        alert('things and stuff');
    }
</script>

That’s all

Javascript Object Cloning Without jQuery

If I really need to clone an object (almost never) I just use jQuery.extend – because someone far better than me wrote the cloning function and I trust them 🙂

However, I recently discovered this nifty little process that can be used in a pinch. It might be expensive for large operations, but it’ll work on your smaller objects/arrays:

var obj2 = JSON.parse(JSON.stringify(obj));

So here’s an exploration of that process compared to something like “.slice(0)” as well:

var arr = [1,2,3];
var arr2 = JSON.parse(JSON.stringify(arr));
arr2.reverse();
var arr3 = arr.slice(0);
arr3.push(5);

console.log(arr);
console.log(arr2);
console.log(arr3);

var str = 1.004;
var str2 = str;
str2 = 2;

console.log(str2);
console.log(str +"1");

var obj = {a: 100};
var obj2 = JSON.parse(JSON.stringify(obj));
obj2.a = 200;

console.log(obj2);
console.log(obj);
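One caveat with the JSON round-trip worth calling out: it only keeps JSON-safe values. Functions and undefined are silently dropped, and Dates come back as plain strings:

```javascript
// The JSON round-trip only preserves JSON-safe values:
var source = { n: 1, d: new Date(0), f: function () {}, u: undefined };
var clone = JSON.parse(JSON.stringify(source));

typeof clone.d; // 'string' (the Date became its ISO string)
'f' in clone;   // false    (functions are dropped)
'u' in clone;   // false    (undefined values are dropped)
clone.n;        // 1
```

So keep it to plain data; if your objects carry methods or Dates, stick with jQuery.extend.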


See the Pen references by Mike Newell (@newshorts) on CodePen.

 

Augmented Reality with getUserMedia and Mobile: Part 1

Hey y’all,

This is going to be a multipart series about using getUserMedia on a mobile device to get a camera stream and ultimately use it for augmented reality. Eventually I’ll introduce some WebGL components, but for now let’s get up and running.

First let’s take a look at compatibility: http://caniuse.com/#feat=stream. Make sure you’re not using this for anything that needs to ship on iPhone within the next year (Apple only just introduced WebGL in iOS 8, so it could be a long time, if ever, before they allow native webcam streams from the browser). For now this technology should only be used on recent Android devices.

Ok let’s get started by looking at the available media sources:

navigator.getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);

function init() {
    if (typeof MediaStreamTrack === 'undefined') {
        alert('This browser does not support MediaStreamTrack.\n\nTry Chrome Canary.');
    } else {
        MediaStreamTrack.getSources(gotSources);
    }
}

function gotSources(sourceInfos) {
    for (var i = 0; i < sourceInfos.length; i++) {
        var si = sourceInfos[i];
        if (si.kind === 'video') {
            // could also face the 'user'
            if (si.facing === 'environment' || si.facing.length === 0) {
                sourceId = si.id;
                // init webcam
                initWebcam();
            }
            console.log(si);
        }
    }
}

Now let’s make that “initWebcam” function:

function initWebcam() {
    if (navigator.getUserMedia) {
        sp = $('.streamPlayback')[0];
    } else {
        alert('no user media!');
        return;
    }

    var constraints = {
        mandatory: {
            maxWidth: videoSize.width,
            maxHeight: videoSize.height
        },
        optional: [
            {sourceId: sourceId}
        ]
    };

    var options = {
        video: constraints,
        audio: false
    };
    navigator.getUserMedia(options, handleStream, handleStreamError);
}

This sets up a user stream and passes some constraints into the video camera stream, like video size. I’ve disabled audio and selected a sourceId to identify the rear-facing camera. NOTE: selecting a camera in this way only works on Chrome 30+. On a desktop the sourceInfo.facing value will be an empty string, even with multiple cameras. On a mobile device it will be either ‘user’ or ‘environment’, meaning the front-facing or rear-facing camera respectively.

And of course our stream handlers:

function handleStream(stream) {
    var url = window.URL || window.webkitURL;
    // IMPORTANT: video element needs autoplay attribute or it will be frozen at first frame.
    sp.src = url ? url.createObjectURL(stream) : stream;
    sp.style.width = videoSize.width / 2 + 'px';
    sp.style.height = videoSize.height / 2 + 'px';
    sp.play();
}

function handleStreamError(error) {
    alert('error: ' + error);
    console.log(error);
    return;
}

Once we get our stream, we set the video element’s width and height as well as the src. Make sure to include “autoplay” in the video element itself or you will only end up with the first frame of the video and it will look frozen, even if you call the video’s “play” method.

Now we just need to add our globals:

var sp, sourceId;

var videoSize = {
    width: 1920,
    height: 1080
};

I’ve made a small demo here:

(image: ar-demo screenshot)

 

DEMO

All you need to do is allow the webcam if you’re on Android. In the next lesson I’ll explore writing the video element to a canvas so we can start to play with pixels, add some cool effects like webcamtoy, and start reacting to things like edge detection and blob detection. Finally, we’ll finish up with a simple facial recognition system and, ultimately, augmented reality.

 

 

Get Your Turtlebot on mDNS with a .local Address

I have a turtlebot here at work.

I am sick and tired of sshing into randomly assigned IP addresses. Luckily, there’s something called mDNS which allows you to reserve a hostname.local for your turtlebot. Instead of sshing to something like:

ssh turtlebot@10.20.1.113

You can ssh to:

ssh turtlebot@turtlebot.local

So easy you could make an alias in your bash_profile!
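For instance (the alias name “tb” is just my pick):

```shell
# add to ~/.bash_profile, then reload it with: source ~/.bash_profile
alias tb='ssh turtlebot@turtlebot.local'
```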

Here’s how you do it:

Always a good idea to get an update and then get the package:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install avahi-daemon

Then you need to make some changes on your turtlebot machine:

sudo pico /etc/hosts
# edit the line 127.0.1.1 to whatever you want the name to be
127.0.1.1    ___NAME___

sudo pico /etc/hostname
# edit this file to the same name as above
___NAME___

# now you need to tell avahi to set a .local address
sudo pico /etc/avahi/avahi-daemon.conf
# uncomment the following line
domain-name=local

Now just restart the service and reboot your computer!

sudo service avahi-daemon restart
sudo reboot

 

 

HTC One M9 – I’d wait for something else…

I bought an HTC One m9. Having previously owned an M7 I thought I was in for a wonderful time. Nope.

The following has been my experience:

  • The camera is alright, but I felt the M7’s camera was better, even though this one shoots in 4K. Also, it’s extremely slow if you save everything to a memory card instead of internal memory.
  • It gets way too hot. As I write this my phone is asleep, I haven’t used it at all today, I’m not uploading or syncing… and my phone is still hot. It’s obviously working on something in the background that I can’t control, which means the battery will be dead in about 4 hours. Thanks HTC.
  • It’s slippery. The old brushed aluminum has been upgraded to polished aluminum; looks nice, but the fucking thing is like a bar of soap when you take it out of your pocket, especially if you have dry hands.
  • The overall experience is slow. Animations are delayed, I have to wait about 2-5 seconds after waking my phone before the screen appears, and there’s about a second of delay between a tap and something happening on screen.
  • Constant airplane mode. The only way I’ve found to keep my phone on all day is to put it on airplane mode (which means I can’t really use it). Basically, I can’t use my phone at all if I want it to last the 9 hours HTC said it will.

I’m so frustrated with the overall experience of my phone that I may actually spend the money to just go back to iPhone. I already used my upgrade on my HTC, so I’ll have to pay full price. Even with all that, it may ultimately be worth it.

HTC, you won me as a customer with the M7 and lost me with the M9, I’ll never take a risk on your phones again.

Browser Won’t Display SVG Locally?

Ok here’s the sitch.

You can pull in svg from a CDN and everything works. But when you try to use your local svg graphics, everything goes to shit.

It’s probably that you’re not serving svg with the right headers, especially if you’ve saved your svg graphic from Illustrator. There’s a great explanation here: http://kaioa.com/node/45

If you’re running XAMPP do the following:

pico /Applications/XAMPP/etc/httpd.conf

Find the mime_module section and add the svg lines inside it:

<IfModule mime_module>

# add support for svg
    AddType image/svg+xml svg svgz
    AddEncoding gzip svgz

</IfModule>

Restart XAMPP and you should be serving svg!

 

 

IOS Development Tips and Tricks

Ok, this will be on-going. Below you’ll find a collection of tips and tricks to get things up and running:

XCode

  • keep it up to date with the latest from the App Store whenever possible
  • let XCode do all the work when setting up provisioning profiles (Preferences –> Accounts)
  • when your development environment is set up and working, export your developer profile (Preferences –> Accounts –> [account name] –> settings icon –> export)

Member Center (Developer Portal)

  • Always visit this site in Safari, never in Google Chrome (parts of the Member Center misbehave in Chrome, so save yourself the headache)

Publishing Your App

IOS Native App Dimensions

It’s been a while since I made a native app for iOS. This article is to refresh my memory, and hopefully it might be useful to someone else. I forgot the screen dimensions, but found a great resource here:

http://iosdesign.ivomynttinen.com/

The screen sizes are as follows:

(image: iPhone screen size comparison chart)

So for Retina devices, you multiply the point dimensions by 2 or 3 depending on the display resolution. For the iPhone 4s (320 x 480 points, @2x) you would make a retina image 640 x 960, and for the iPhone 6+ (414 x 736 points, @3x) you would make it 1242 x 2208. Apple recommends you use a launch file/storyboard file instead of separate launch images on the iPhone 6 and 6+.
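As a sanity check, the multiplication is just points times scale; the point sizes below are the ones Apple publishes for these devices, not numbers pulled from the chart:

```javascript
// Multiply a device's point dimensions by its Retina scale factor
// to get the pixel size of a full-screen asset.
function retinaSize(widthPts, heightPts, scale) {
    return { width: widthPts * scale, height: heightPts * scale };
}

retinaSize(320, 480, 2); // iPhone 4s -> 640 x 960
retinaSize(414, 736, 3); // iPhone 6+ -> 1242 x 2208
```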