Setting up PHP5 and Apache on EC2 and Google Cloud Compute Engine

Over the years I’ve moved steadily toward a toolset that requires less and less server management for my projects. I now focus on making “apps” instead of custom applications on the server, since it’s more secure, easier to manage and automatically backed up. However, every once in a while I find myself doing a little server management, and for that I love EC2’s services.

In this tutorial, I’ll be covering two cloud services: Amazon EC2 and Google Cloud Compute Engine. Both are weapons of choice for developers on the move with little time to worry about the details.

Amazon EC2

The following are some resources for setting up Amazon EC2 servers:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html

http://articles.slicehost.com/2010/5/19/installing-apache-on-ubuntu

The first thing you want to do is get your instance up and running. If you’re new to Amazon EC2, click on the top link above and follow the tutorial. I’m gonna tell you what to do before you do it, so listen up:

You’re going to select an EBS-backed AMI (basically a pre-installed Linux distribution). Then you’re going to set up some security features that allow you to log into your EC2 server via SSH. To do this you will need a .pem key. These are private keys you can generate here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair NOTE: save this .pem file somewhere you won’t forget, because you’re going to use it again, over and over. The key pair works alongside a security group: the key is what gets you in via SSH, and the security group is the firewall in front of your instance. If you have the .pem file and the right security group associated with your EC2 server, you’ll be able to log in with SSH; if not, you’ll have to generate it all over again. Finally, you will set up security group rules that allow you to connect to your server via SSH, as well as via HTTP.

After that, you will set up apache and php on your server. That’s fun, ok let’s go…

Step 1:

You need to log into your Amazon dashboard: https://console.aws.amazon.com. Click the button at the top that says “Launch”.

Step 2:

Follow the wizard instructions. In this case, I used the quick wizard since I already have my security groups set up. However, if you don’t have your security groups, or don’t know what the hell I’m talking about, follow this tutorial: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#having-ec2-create-your-key-pair

Set your security group, then use the quick wizard to set up your instance:

And make sure the instance is associated with your security group:

Once you have it, click the launch button.

Step 3:

Now you need to get this sucker an IP address. Click on the Elastic IP section of your dashboard and create a new static IP if you don’t already have one. If you do have one, make sure it’s freely available to associate with your instance.

Select your IP and click the “Associate” button to get the two paired up. If you don’t see your instance in the drop-down options, wait until it has finished setting up.

Now that you have an instance up and running with a static IP and a security group, you need to SSH in and get your server set up Fo’ Real.

Step 4:

To SSH in, you’re going to need a few things. First, you need to know where your .pem file is. It’s the file you generated and downloaded when you created your key pair. If you’re not on a Mac, follow this tutorial to connect: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-connect-to-instance-linux.html. In fact you should follow that tutorial instead of listening to my shit anyway…but here goes.

Type this into your terminal:

ssh -i ~/.ssh/mySecurityGroup.pem ubuntu@000.000.000.000

You need to replace the “000” with your static IP address and replace the file path to point to your .pem file. If SSH complains about an unprotected key file, run chmod 400 on the .pem first; SSH refuses to use private keys that other users can read.

You should see yourself log into your server. If you get an “access denied” error or the connection just hangs forever, make sure port 22 is open. You can do that here:

Just select SSH from the drop-down and hit “Add Rule” if you don’t already see it on the right-hand side of your console.

You’ll notice I opened ports 22, 25, 80, 443 and 465. You can go ahead and do the same if you want. These are the standard ports for SSH, HTTP, HTTPS and SMTP/SMTPS mail services.

Make sure to “Apply Rule Changes”.

Step 5:

Now that we’ve set up our server, opened the proper ports, associated the EC2 instance with a static IP address and SSHed in, we are ready to set up Apache.

Since we’re on Ubuntu the setup is fairly straightforward.

SSH to your server:

ssh -i ~/.ssh/mySecurityGroup.pem ubuntu@000.000.000.000

Then make sure you have all your ports open (these iptables rules need root, and they duplicate what your security group already allows; note they’re also lost on reboot unless you save them):

sudo iptables -I INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -I OUTPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
sudo iptables -I INPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
sudo iptables -I OUTPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT

Then make sure everything is up to date:

sudo apt-get update

Once that is finished we want to install apache2:

sudo apt-get install apache2

Once installed, Apache will start itself automatically; we just restart it gracefully so we can test it:

sudo apache2ctl graceful

We should test this out in a browser. If you visit your static IP in the browser you should see something like this:

Step 6:

Once we’ve got Apache up and running it’s time to get PHP installed. Follow the same process, using apt-get to install:

sudo apt-get install php5

Now we want to install some common libraries for PHP, like MySQL support, cURL, GD and friends:

sudo apt-get install php5-mysql php5-dev curl libcurl3 libcurl3-dev php5-curl php5-gd php5-imagick php5-mcrypt php5-memcache php5-mhash php5-pspell php5-snmp php5-sqlite php5-xmlrpc php5-xsl

Now we are ready to restart apache:

sudo apache2ctl graceful

If all worked correctly, you should now have a working php installation. You can test it with:

php -v

The last thing you probably want to do is install some FTP or Git software so you can actually get your projects onto the server. In this case I will install Git.

Step 7:

Install git on your server:

sudo apt-get install git

This will allow you to develop in a localhost environment and push to a remote repository; then, through Git hooks, you can automatically keep your server’s version synced with the master branch, and your project will be automagically updated.
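That deploy-on-push idea is worth sketching. Below is a minimal, throwaway version you can run anywhere: it uses temp directories instead of a real server, and a bare repo’s post-receive hook checks the pushed master branch out into a stand-in web root (on a real server that would be /var/www). Every path and name here is made up for the demo.

```shell
set -e
DEMO=$(mktemp -d)

# "server side": a bare repo to push to, and a web root to deploy into
git init -q --bare "$DEMO/site.git"
mkdir -p "$DEMO/www"

# the hook: after every push, check the latest master out into the web root
cat > "$DEMO/site.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE="$DEMO/www" git checkout -f master
EOF
chmod +x "$DEMO/site.git/hooks/post-receive"

# "local side": commit a file and push it
git init -q "$DEMO/dev"
cd "$DEMO/dev"
echo "<?php echo 'deployed'; ?>" > index.php
git add index.php
git -c user.email=you@example.com -c user.name=you commit -qm "first deploy"
git push -q "$DEMO/site.git" HEAD:master

ls "$DEMO/www"    # index.php is now "live" in the web root
```

On the real server you’d keep the bare repo somewhere like your home directory, point the hook’s GIT_WORK_TREE at /var/www, and add the server repo as a remote of your local project.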

NOTE:

Since Ubuntu is notorious for shipping with older versions of PHP, there’s a great method for getting a newer version in a few lines in your terminal. Note that add-apt-repository comes from python-software-properties (called software-properties-common on newer Ubuntu), so install that first:

sudo apt-get install python-software-properties
sudo add-apt-repository ppa:ondrej/php5
sudo apt-get update
sudo apt-get install php5
php5 -v

You may also want to add your ubuntu user to the apache user group www-data:

sudo usermod -a -G www-data ubuntu
groups ubuntu

You may also want to change the permissions of your public root folder. The folder that holds all my web content is located at /var/www. To make it writable by my user and Git, I changed the ownership:

sudo chown -R ubuntu:www-data /var/www

This allows me to write to the folder with git. I also made a symlink called “htdocs” by doing:

ln -s /var/www /home/ubuntu/htdocs

This put an htdocs folder in my user’s home directory. Now I’m set.

Google Cloud Compute Engine

In the next section we are going to talk about doing roughly the same thing on Google Cloud Compute Engine. They provide a similar service with the added benefit of Google’s infrastructure and speed.

Organic Robotic Locomotion

I just got super into looking at different forms of locomotion that feel more organic. Below is a compilation of some of the different types out there.

Create an Exact Copy of You Current Raspberry Pi

Quick one today guys. When you have your Raspberry Pi set up the way you want, with all the packages installed correctly and working, you can create an exact copy of your image by simply popping the SD card into your computer (I’m on a Mac) and using dd.

It works like this. When you installed your SD card for the first time, you followed a process similar to this: http://www.ubuntu.com/download/desktop/create-a-usb-stick-on-mac-osx

Now we can use the same program we used to write the image to the SD card to copy the image to our hard drive. All dd does is make a bit-for-bit copy of one device or file to another.

So pop your SD card into your Mac, unmount it, and do:

diskutil unmountDisk /dev/disk3
sudo dd if=/dev/disk3 of=~/Desktop/raspi.img bs=1m

Reading the raw device needs sudo, and macOS’s BSD dd expects a lowercase size suffix (bs=1m, not bs=1M).

If you need to know which disk your sd card is, then run this before and after you insert your sd card and look at the difference:

diskutil list
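Restoring the image to a card later is the same command with if= and of= swapped (unmount first, and triple-check the disk number, since dd will cheerfully overwrite the wrong disk). And because dd treats devices and plain files identically, you can watch the bit-for-bit copy behavior on an ordinary file; the demo below needs no SD card, and every path in it is throwaway:

```shell
# restore direction (example disk number -- verify with diskutil list first):
#   diskutil unmountDisk /dev/disk3
#   sudo dd if=~/Desktop/raspi.img of=/dev/disk3 bs=1m

# dd doesn't care whether its input is a device node or a regular file,
# so here a file of random bytes stands in for the card:
SRC=$(mktemp)
head -c 1048576 /dev/urandom > "$SRC"    # 1 MiB of random data
dd if="$SRC" of="$SRC.img" bs=64k 2>/dev/null
cmp -s "$SRC" "$SRC.img" && echo "bit-for-bit identical"
```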

Streaming Audio on Raspberry Pi

First off, you’re not the only one trying to do this; there are many of us, and most of us have left a comment or question on the interwebs, so look it up if you run into trouble. You’ll have to look harder than normal…

Before anything else, follow my tutorial on getting audio up and running on one Pi before you start with two. In this example I will stream audio from one Pi to another.

Step 1:

Install a few packages…

sudo apt-get install pulseaudio
sudo apt-get install paprefs
sudo apt-get install mplayer

Step 2:

Dial in your settings in paprefs; there’s a great example here: http://dp.nonoo.hu/forwarding-sound-on-lan/

Step 3:

Get both Pis set up for success: make sure they know each other’s IP. You can use this command to find your IP address:

ifconfig

Once you’ve got the IPs, point Pi A at Pi B’s IP and vice versa:

# on Pi A
export PULSE_SERVER=<ip of Pi B>

# on Pi B
export PULSE_SERVER=<ip of Pi A>

Step 4:

Make sure your shit works. Download an mp3 and test.

wget http://www.freespecialeffects.co.uk/soundfx/household/bubbling_water_1.mp3
mplayer -ao pulse bubbling_water_1.mp3

Make sure the settings are correct and PulseAudio should be outputting audio from one Pi to the other!

Troubleshooting:

If ALSA recognizes your webcam but won’t play audio, check this link out: http://forums.debian.net/viewtopic.php?f=6&t=57022

Raspberry Pi – Setting Up Your Audio

The first thing to note is that we are aiming to get our analog audio jack working. If that’s not your goal and you’re trying to get audio over HDMI, then you simply need to follow the same directions and substitute “2” for “1” where noted in the instructions below. Be on the lookout:

Step 1:

Install your libraries: alsa-utils, mpg321 and lame:

sudo apt-get install alsa-utils
sudo apt-get install mpg321
sudo apt-get install lame

Step 2:

Load the driver:

sudo modprobe snd_bcm2835

Step 3:

Make sure the driver actually loaded:

sudo lsmod | grep 2835

Step 4:

For this next part: if you want analog audio, use a “1”; if you’re using HDMI, use a “2”. And “0” means auto.

sudo amixer cset numid=3 1

Step 5:

Test it out. You can test a couple of different ways, but one of them is to use wget to grab a file from online and then play it:

wget http://secretemessages.s3.amazonaws.com/output.wav
aplay output.wav

You can also test an MP3 using the same process; find one online and play it:

wget http://www.freespecialeffects.co.uk/soundfx/household/bubbling_water_1.mp3
mpg321 bubbling_water_1.mp3

If all has gone well, your audio works.

Troubleshooting:

If you run into errors about unrecognized PCM cards and shit, just modify /usr/share/alsa/alsa.conf and replace the lines like “pcm.front cards.pcm.front” with “pcm.front cards.pcm.default”.

If you hear your speakers pop but no sound, try adjusting the volume. Run the following command:

alsamixer

When the GUI pops up, press “9” to bring the volume up to 90%. This should allow you to hear the sound. When you want to save your alsamixer settings, enter…

sudo alsactl store

Also, I added the modprobe line from Step 2 to my ~/.bashrc file so it runs automatically at login. I know that’s probably not the way it was intended to work (the blessed spot is /etc/modules), but what the hell.

One last note: there is a great resource for controlling audio with alsa here: http://blog.scphillips.com/2013/01/sound-configuration-on-raspberry-pi-with-alsa/

Turtlebot 101 – Coming Soon

Ok, so along with Android tutorials, I’ll be moving more into the physical realm with beginner tutorials on ROS. For those who don’t know, that stands for Robot Operating System, and it’s basically 50 years of robotics research and programming combined into one solid platform that extends across many common robotics platforms, including TurtleBot.

Just wanted to give an update in case anyone out there is wondering why I haven’t posted anything recently. I am working on learning the basics and I plan on reporting what I learn soon. I also plan on releasing an Android demo that includes a simple way to get up and running with Bluetooth in the next couple of weeks as well.

Augmented Reality – Detecting Planar Surfaces

Recently I was asked to do some research on whether or not it’s possible for an AR (Augmented Reality) app to do surface detection without a marker.

Basically, an AR app needs markers: it compares the distortion of a known pattern to surmise a planar surface. However, there is some exciting research coming around; see my response below:

Note:

First off, I think it’s important to note that I haven’t seen an app that is able to recognize planar surfaces without depth mapping (which phones currently don’t do, though that may change: http://finance.yahoo.com/news/etron-technology-unveils-latest-3d-102700778.html) or a marker of some sort.

There are a number of articles written on augmented reality which explain why a marker is needed:

http://www.cwjobs.co.uk/careers-advice/it-glossary/the-10-things-you-need-to-know-about-augmented-reality

The reason depth mapping or a marker is so important for recognizing surfaces is that without some sort of reference, any image just looks like a bunch of pixels with color data. Pattern recognition helps because we can project what a known pattern will look like at different angles and infer what plane it sits on. Without a marker, the app simply doesn’t have a reference.

The future:

Research is being done to simulate depth mapping by changing perspective on the subject:

http://www.youtube.com/watch?v=GyFGmaOhL_4

Current state of things:

There are some apps that allow a user to take a photo of their environment first and then map the game to that rather than printing a marker:

http://www.augmented-reality-games.com/

Further there are some apps that allow a user to position 3D objects in an environment to look like they exist within the environment:

https://play.google.com/store/apps/details?id=com.ar.augment&hl=en

There are some games that include AR which do well at simulating an environment without the use of codes or depth mapping:

https://itunes.apple.com/us/app/zombies-everywhere!-augmented/id530292213?mt=8

https://play.google.com/store/apps/details?id=com.picitup.iOnRoad&feature=search_result#?t=W251bGwsMSwxLDEsImNvbS5waWNpdHVwLmlPblJvYWQiXQ..

Considerations:

Detecting a user’s environment may actually make gameplay worse. Suppose a user is playing in an office and most of the enemies “get stuck” on a piece of furniture, preventing the game from playing properly.

The complexity of having shapes and characters move around custom objects is dramatic: instead of simple movements of 3D objects, the characters may need to climb/jump/mount etc. More actions/movements for characters = more complex development.

Workarounds:

Geolocation game instead?

Is it a game like Zombies, Run!, where you physically run from zombies? Instead of AR, could you defend your tower by gathering supplies and running to different places where you take certain actions?

Do we need it?

Do we need to detect the user’s environment? Will the game suffer if a user can simply play anywhere, without the game detecting an environment?

Desk play?

Can we just ask the user to take a photo of their desk, and play some sort of tower defense game on top of their desk?

Recommendation:

The current state of AR environment games is limited (for the time being) to using markers in order to play a game on a surface or using free form techniques to place objects and characters in an environment that isn’t dependent on a user’s environment. More research is needed to get an accurate picture of AR surface detection techniques that do not depend on predesignated markers. However, whatever these techniques are, they do not exist in a “production” environment at this time.

Inspirations:

https://play.google.com/store/apps/details?id=jp.co.questcom.droidshooting&hl=en

https://play.google.com/store/apps/details?id=com.xcodium.satellitefinderar&hl=en

https://play.google.com/store/apps/details?id=com.layar&hl=en

https://play.google.com/store/apps/details?id=com.mambo.paintball&hl=en


Android Development 103: Barcode Scanner!

Ok let’s finally do something useful! Of course this has been done a million times, but what the heck, let’s make a barcode scanner.

First off, we are going to build it with the ZXing (“zebra crossing”) barcode library. This allows us to open the camera, use the scanner app the ZXing folks already built, and capture the data without having to write our own barcode library. Here it goes:

Let’s make our layout file (under App -> res -> layout -> activity_main.xml):

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="fill_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <Button android:id="@+id/button_scan"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/button_scan" />

    <TextView
        android:id="@+id/scan_format"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:textIsSelectable="true" />

    <TextView
        android:id="@+id/scan_content"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:textIsSelectable="true" />

</LinearLayout>

Then we need to populate some strings for those files (App -> res -> strings.xml):

<?xml version="1.0" encoding="utf-8"?>
<resources>

    <string name="app_name">Barcode</string>
    <string name="action_settings">Settings</string>
    <string name="button_scan">Scan</string>
    <string name="scan_format">Scan Format</string>
    <string name="scan_content">Scan Content</string>

</resources>

Now we need to write our app. We are going to include the ZXing integration classes. To do that, right-click the “src” directory, select “New” -> “Package”, and enter “com.google.zxing.integration.android” for the “Name”.

Now we need to add some classes. Download the latest zxing library (https://code.google.com/p/zxing/downloads/list) and unzip it. Navigate to the IntentIntegrator.java file (zxing-2.2 -> android-integration -> src -> com -> google -> zxing -> integration -> android -> IntentIntegrator.java) and open the file in your favorite text editor. Copy the file contents.

Create a new class in your newly created package (right click your com.google.zxing.integration.android package and select “New” -> “Class”). Name your file “IntentIntegrator.java” and paste in the contents of the library file.

Create another new class named “IntentResult.java” and paste the contents of IntentResult.java in the downloaded library file.

Now in MainActivity (App -> src -> com.example.barcode -> MainActivity.java):

package com.example.barcode;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
import android.view.Menu;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.TextView;
import android.widget.Toast;

import com.google.zxing.integration.android.IntentIntegrator;
import com.google.zxing.integration.android.IntentResult;

public class MainActivity extends Activity {

	private Button buttonScan;
	private TextView textFormat, textContent;

	private static String logtag = "barcode";

	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.activity_main);

		buttonScan = (Button)findViewById(R.id.button_scan);
		textFormat = (TextView)findViewById(R.id.scan_format);
		textContent = (TextView)findViewById(R.id.scan_content);

		buttonScan.setOnClickListener(scanListener);
	}

	private OnClickListener scanListener = new OnClickListener() {
		@Override
		public void onClick(View v) {
			if(v.getId() == R.id.button_scan) {
				// we know we are talking about the button
				IntentIntegrator scanIntegrator = new IntentIntegrator(MainActivity.this);
				scanIntegrator.initiateScan();
			}
		}
	};

	@Override
	public void onActivityResult(int requestCode, int resultCode, Intent intent) {
		super.onActivityResult(requestCode, resultCode, intent);
		IntentResult scanResult = IntentIntegrator.parseActivityResult(requestCode, resultCode, intent);

		if(scanResult != null) {
			Log.d(logtag, "got a scan result back");

			String scanContent = "Scan Content: ";
			String scanFormat = "Scan Format: ";

			scanContent += scanResult.getContents();
			scanFormat += scanResult.getFormatName();

			// put the results to the text
			textContent.setText(scanContent);
			textFormat.setText(scanFormat);

		} else {
			Toast.makeText(MainActivity.this, "we didn't get anything back from the scan", Toast.LENGTH_SHORT).show();
		}
	}

	@Override
	public boolean onCreateOptionsMenu(Menu menu) {
		// Inflate the menu; this adds items to the action bar if it is present.
		getMenuInflater().inflate(R.menu.main, menu);
		return true;
	}

}

This will open the camera, scan for a barcode, and return the information about the barcode to you. I’ll post a video of how to implement this in a second.


Android Development 102: Camera

Last time we showed you how to make a simple Android app with two buttons and detect when one or the other is pressed. Now we want to use a button to open the camera, take a photo and then retrieve the photo to use in a display. I’ll show you the code and explain what it’s doing after.

Step 1

Create your view. This is easy: just a text field with an intro, a button to activate the camera, and an empty image view where our photo will be placed after we take it.

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >

    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/hello_world" />

    <Button
        android:id="@+id/button_camera"
        android:text="@string/button_camera"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"></Button>

    <ImageView
        android:id="@+id/image_view_camera"
        android:contentDescription="@string/image_view_camera"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content" />

</LinearLayout>

Step 2

Now make sure you fill in the strings. This is done under CameraApp/res/values/strings.xml.

<?xml version="1.0" encoding="utf-8"?>
<resources>

    <string name="app_name">CameraApp3</string>
    <string name="action_settings">Settings</string>
    <string name="hello_world">Camera App 3</string>
    <string name="button_camera">Camera</string>
    <string name="image_view_camera">This is a camera image</string>

</resources>

Step 3

Now that we have our strings filled out, we need to update our manifest file to give the app the proper permissions to use the camera and also read/write the phone’s onboard storage (SD card). This is how my manifest file looks. Watch out for the “uses-permission” tags and the “android:screenOrientation” attribute; I’ve called out these parts with comments in the code:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.cameraapp3"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="8"
        android:targetSdkVersion="17" />

    <!-- New permissions for camera app; note the camera one is
         android.permission.CAMERA (android.hardware.camera is a feature name, not a permission) -->
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <!-- End new permissions for camera app -->

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >

        <!-- added "android:screenOrientation" attribute below to ensure the user can't rotate the screen -->
        <activity
            android:name="com.example.cameraapp3.MainActivity"
            android:label="@string/app_name"
            android:screenOrientation="portrait" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>


Step 4

Now we want to modify the MainActivity.java file in CameraApp/src/com.example.cameraapp/MainActivity.java. I explain what everything is doing in the comments:

package com.example.cameraapp3;

import java.io.File;

import android.app.Activity;
import android.content.ContentResolver;
import android.content.Intent;
import android.graphics.Bitmap;
import android.net.Uri;
import android.os.Bundle;
import android.os.Environment;
import android.provider.MediaStore;
import android.util.Log;
import android.view.Menu;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.Toast;

public class MainActivity extends Activity {

	// label our logs "CameraApp3"
	private static String logtag = "CameraApp3";
	// request code so we can recognize the camera activity's result later
	private static final int TAKE_PICTURE = 1;
	// empty variable to hold our image Uri once we store it
	private Uri imageUri;

	@Override
	protected void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.activity_main);

		// look for the button we set in the view
		Button cameraButton = (Button)findViewById(R.id.button_camera);
		// set a listener on the button
		cameraButton.setOnClickListener(cameraListener);
	}

	// set a new listener
	private OnClickListener cameraListener = new OnClickListener() {
		public void onClick(View v) {
			// open the camera and pass in the current view
			takePhoto(v);
		}
	};

	public void takePhoto(View v) {
		// tell the phone we want to use the camera
		Intent intent = new Intent("android.media.action.IMAGE_CAPTURE");
		// create a new temp file called pic.jpg in the "pictures" storage area of the phone
		File photo = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES), "pic.jpg");
		// take the return data and store it in the temp file "pic.jpg"
		intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(photo));
		// store the temp photo uri so we can find it later
		imageUri = Uri.fromFile(photo);
		// start the camera
		startActivityForResult(intent, TAKE_PICTURE);
	}

	@Override
	public boolean onCreateOptionsMenu(Menu menu) {
		// Inflate the menu; this adds items to the action bar if it is present.
		getMenuInflater().inflate(R.menu.main, menu);
		return true;
	}

	// override the original activity result function
	@Override
	public void onActivityResult(int requestCode, int resultCode, Intent data) {
		// call the parent
		super.onActivityResult(requestCode, resultCode, data);
		switch(requestCode) {
		// if the requestCode was equal to our camera code (1) then...
		case 1:
			// if the user took a photo and selected the photo to use
			if(resultCode == Activity.RESULT_OK) {
				// get the image uri from earlier
				Uri selectedImage = imageUri;
				// notify any apps of any changes we make
				getContentResolver().notifyChange(selectedImage, null);
				// get the imageView we set in our view earlier
				ImageView imageView = (ImageView)findViewById(R.id.image_view_camera);
				// create a content resolver object which will allow us to access the image file at the uri above
				ContentResolver cr = getContentResolver();
				// create an empty bitmap object
				Bitmap bitmap;
				try {
					// get the bitmap from the image uri using the content resolver api to get the image
					bitmap = android.provider.MediaStore.Images.Media.getBitmap(cr, selectedImage);
					// set the bitmap to the image view
					imageView.setImageBitmap(bitmap);
					// notify the user
					Toast.makeText(MainActivity.this, selectedImage.toString(), Toast.LENGTH_LONG).show();
				} catch(Exception e) {
					// notify the user
					Toast.makeText(MainActivity.this, "failed to load", Toast.LENGTH_LONG).show();
					Log.e(logtag, e.toString());
				}
			}
		}
	}

}

Now that we’ve updated our Java file, we’ve wired the photo into the image view. Try running the app: you’ll see that when you tap the button the camera opens. Then, if you accept the photo you took, the camera returns you to the app and displays that image in the app’s image view!

Hurrah, lesson 2 down. I’ll be making a video explaining all this in a couple of days; right now it’s midnight on Sunday and I have to go to work tomorrow, so there, cuddling time with my missus instead.

Create Local Branch and Track Remote Branch in One Command

Quick one today guys…

I need to set up a local branch to track the latest on a remote branch on GitHub. An easy way to do that is:

git checkout -b snowden --track origin/snowden

Basically, it says we are going to create branch “snowden” with the “-b” flag and switch to it, and then have that branch track the remote version we have on “origin”; so in this case, track the remote “snowden” branch.
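If you want to see it work without touching a real remote, here’s a throwaway demo: a local stand-in for GitHub containing a snowden branch, a clone, and the one command that creates the tracking branch. All paths and names below are made up for the demo.

```shell
set -e
DEMO=$(mktemp -d)

# stand-in for github: a bare repo with both a master and a snowden branch
git init -q --bare "$DEMO/origin.git"
git init -q "$DEMO/seed"
cd "$DEMO/seed"
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "init"
git push -q "$DEMO/origin.git" HEAD:master HEAD:snowden
git --git-dir="$DEMO/origin.git" symbolic-ref HEAD refs/heads/master

# clone it, then create and track the local branch in one command
git clone -q "$DEMO/origin.git" "$DEMO/work"
cd "$DEMO/work"
git checkout -b snowden --track origin/snowden

# confirm the upstream wiring
git rev-parse --abbrev-ref 'snowden@{upstream}'   # prints origin/snowden
```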

That’s it, super simple.

On a side note, if you find that you are unable to merge these branches to master without unexpected results, you need a merge strategy that makes master an exact copy of the branch, instead of a normal downstream merge. To do that:

git checkout snowden
git merge -s ours master
git checkout master
git merge snowden
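Here’s the same recipe in a throwaway repo so you can verify what “-s ours” actually does: the two branches diverge, and after the dance master’s tree is exactly snowden’s. Branch names and file contents are made up for the demo.

```shell
set -e
DEMO=$(mktemp -d)
cd "$DEMO"
git init -q .
git symbolic-ref HEAD refs/heads/master    # pin the branch name for the demo
git config user.email you@example.com
git config user.name you

echo "v1" > app.txt
git add app.txt
git commit -qm "base"

git checkout -q -b snowden
echo "snowden version" > app.txt
git commit -qam "snowden work"

git checkout -q master
echo "master version" > app.txt
git commit -qam "master diverges"

# the trick: record a merge on snowden but keep snowden's tree untouched...
git checkout -q snowden
git merge -q -s ours master -m "take snowden wholesale"

# ...then fast-forward master onto it
git checkout -q master
git merge -q snowden

cat app.txt   # prints: snowden version
```

After this, master and snowden point at the same commit, and master’s copy of app.txt is the snowden version even though master had its own diverging change.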