
Git hook to prevent WIP commits to certain branches

Git hooks are a great way to automate your workflow and to catch faulty commits. I am using them to prevent work-in-progress commits – i.e. commits with a line starting with the string WIP – from reaching the master branch. But wait – this script differs from the sample found in .git/hooks/pre-push.sample in two ways:

  1. Only pushes to special heads are inspected. This still allows you to back up your WIP commit on the remote server or to share it with your colleagues but prevents integration into the master branch.
  2. Only commits which would actually be merged into the master are checked – not every commit ever pushed. This way, even if a WIP commit was pushed to the master in the past, the script does not prevent you from pushing new commits. The pre-push.sample would explicitly disallow that.

We have two staging areas in our development environment: All pushes to ready/<any name> or directPatch/<any name> trigger our continuous integration process and eventually merge the changes into our master (which you cannot push to directly). Pushes to <developer acronym>/<any name> are always allowed and do not trigger any merging. So we want to check only the pushes to ready and directPatch. Of course, you might want to adapt the script to your needs:

  1. Changing the heads to be checked – see the $remote_ref checks at the top of the loop
  2. Changing the word to look for – see the git rev-list --grep call

The following hook can be activated by storing it in the file <your repository>/.git/hooks/pre-push (no file extension):

#!/bin/bash
# bash (not plain sh) is required for the [[ ]] pattern matches below

# This hook is called with the following parameters:
#
# $1 -- Name of the remote to which the push is being done
# $2 -- URL to which the push is being done
#
# Information about the commits which are being pushed is
# supplied as lines to the standard input in the form:
#
#   <local ref> <local sha1> <remote ref> <remote sha1>
#
# This hook prevents pushing commits where a line of the log
# message starts with "WIP" (work in progress) to the heads
# refs/heads/ready/* or refs/heads/directPatch/*

remote="$1"
url="$2"

z40=0000000000000000000000000000000000000000

IFS=' '
while read local_ref local_sha remote_ref remote_sha
do
	if [[ $remote_ref != refs/heads/ready/* ]] && [[ $remote_ref != refs/heads/directPatch/* ]]
	then
		# Not one of the checked heads – skip the WIP check
		continue
	fi

	if [ "$local_sha" = $z40 ]
	then
		# Handle delete
		:
	else
		# Only inspect commits not yet merged into origin/master
		range="origin/master..$local_sha"

		# Check for WIP commit
		commit=$(git rev-list -n 1 --grep '^WIP' "$range")
		if [ -n "$commit" ]
		then
			echo "Found WIP commit in $local_ref: $commit, not pushing."
			exit 1
		fi
	fi
done

exit 0
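To activate and test the hook – a minimal sketch, assuming the ready/<any name> convention from above and a remote named origin (branch names and the commit message are hypothetical):

chmod +x .git/hooks/pre-push

# Create a WIP commit (hypothetical message):
git commit --allow-empty -m "WIP: refactor parser"

# This push is rejected by the hook ...
git push origin HEAD:ready/my-feature

# ... while pushing to a developer head is still allowed:
git push origin HEAD:ph/my-feature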

Migrating Windows 7 to 8.1 without losing your apps

When trying to migrate from Windows 7 to 8.1, I learned that I would lose my installed applications. This was especially frustrating because I knew from the upgrade assistant that there is an option to keep them – it is just not available for my configuration. And even the official guide states that I would have to reinstall all apps.

The solution: first migrate to Windows 8 (the option to keep your apps is available here) and then to 8.1 (which again allows you to keep your apps). My system is up and running and I did not have to reinstall anything. Voilà! The only question which remains is: why?

Master thesis

The final piece of my Master's in Computer Science is the thesis “Monocular SLAM for Context-Aware Workflow Assistance”. The abstract is included below:

In this thesis, we propose the integration of contextual workflow knowledge into a SLAM tracking system for the purpose of procedural assistance using Augmented Reality. Augmented Reality is an intuitive way for presenting workflow knowledge (e.g. maintenance or repairing) step by step but requires sophisticated models of the scene appearance, the actions to perform and the spatial structure. For the latter one we propose the integration with SLAM (Simultaneous Localization And Mapping) operating on images of a monocular camera.

We first develop a stand-alone SLAM system with a point cloud as map representation which is continuously extended and refined by triangulations obtained from new view points using a three-step keyframe insertion procedure. In a second step, we integrate contextual knowledge which is automatically obtained from reference recordings in a so-called offline mapping step. This allows the tracking not only to cope with but actively adapt to changes in the environment by explicitly including them in the tracking model. To aid the data acquisition and to merge multiple tracking models into a single one, we propose a novel method for combining offline acquired maps which is independent of the spatial structure as opposed to ICP (Iterative Closest Point).

We show typical tracking results and evaluate the single components of our system by measuring their performance when exposed to noise.

I wrote my thesis in Prof. Stricker’s Augmented Vision group and was supervised by Nils Petersen.


@mastersthesis{Hasper2014,
author = {Hasper, Philipp},
school = {TU Kaiserslautern},
title = {{Monocular SLAM for Context-Aware Workflow Assistance}},
type = {Master's thesis},
year = {2014}
}

Migrating Eigen2 to Eigen3: useful regular expressions

I’ve recently migrated a project from the Eigen2 library to Eigen3, based on the very useful staged migration path.

Here are two regular expressions which came in really handy. Of course you have to check the results manually, but they save a lot of time.

/*
 * Using the new linear solver interface
 */

// Regular expression for finding
([^\S\n]*)([^\s]*)\.solve\(([\s]*)(.*?)([\s]*),([\s]*)\&(.*?)([\s]*)\)\;
// Regular expression for replacing
$1$7 = $2.solve\($4\);
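For illustration, the first expression turns the Eigen2-style out-parameter call into the Eigen3 interface (solver, b and x are hypothetical names):

// Before (Eigen2): the result is written through an out-parameter.
solver.solve(b, &x);
// After (Eigen3): the result is returned by value.
x = solver.solve(b);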


/*
 * Use the static way for map creation.
 * Does not work with all types
 */

// Regular expression for finding
Eigen::Map<(.*?)>\(
// Regular expression for replacing
$1::Map\(
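The second expression replaces the free Map template with the static Map member of the mapped type, e.g. (hypothetical type and pointer):

// Before:
Eigen::Map<Eigen::Vector3f>(ptr);
// After:
Eigen::Vector3f::Map(ptr);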


OpenCV: Draw epipolar lines

Drawing epipolar lines in OpenCV is not hard but it requires a fair amount of code. Here is an out-of-the-box function for your convenience which only needs the fundamental matrix and the matching points. And it even has an option to exclude outliers during drawing!

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>

// Forward declaration: distancePointLine (defined below) is used inside drawEpipolarLines.
template <typename T>
static float distancePointLine(const cv::Point_<T> point, const cv::Vec<T,3>& line);
/**
 * \brief	Compute and draw the epipolar lines in two images
 *			associated to each other by a fundamental matrix
 *
 * \param title			Title of the window to display
 * \param F					Fundamental matrix
 * \param img1			First image
 * \param img2			Second image
 * \param points1		Set of points in the first image
 * \param points2		Set of points in the second image matching to the first set
 * \param inlierDistance			Points with a high distance to the epipolar lines are
 *								not displayed. If it is negative, all points are displayed
 **/
template <typename T1, typename T2>
static void drawEpipolarLines(const std::string& title, const cv::Matx<T1,3,3> F,
							  const cv::Mat& img1, const cv::Mat& img2,
							  const std::vector<cv::Point_<T2>> points1,
							  const std::vector<cv::Point_<T2>> points2,
							  const float inlierDistance = -1)
{
	CV_Assert(img1.size() == img2.size() && img1.type() == img2.type());
	cv::Mat outImg(img1.rows, img1.cols*2, CV_8UC3);
	cv::Rect rect1(0,0, img1.cols, img1.rows);
	cv::Rect rect2(img1.cols, 0, img1.cols, img1.rows);
	/*
	 * Allow color drawing
	 */
	if (img1.type() == CV_8U)
	{
		cv::cvtColor(img1, outImg(rect1), CV_GRAY2BGR);
		cv::cvtColor(img2, outImg(rect2), CV_GRAY2BGR);
	}
	else
	{
		img1.copyTo(outImg(rect1));
		img2.copyTo(outImg(rect2));
	}
	std::vector<cv::Vec<T2,3>> epilines1, epilines2;
	cv::computeCorrespondEpilines(points1, 1, F, epilines1); //Index starts with 1
	cv::computeCorrespondEpilines(points2, 2, F, epilines2);

	CV_Assert(points1.size() == points2.size() &&
			  points2.size() == epilines1.size() &&
			  epilines1.size() == epilines2.size());

	cv::RNG rng(0);
	for(size_t i=0; i<points1.size(); i++)
	{
		if(inlierDistance > 0)
		{
			if(distancePointLine(points1[i], epilines2[i]) > inlierDistance ||
				distancePointLine(points2[i], epilines1[i]) > inlierDistance)
			{
				//The point match is no inlier
				continue;
			}
		}
		/*
		 * Epipolar lines of the 1st point set are drawn in the 2nd image and vice-versa
		 */
		cv::Scalar color(rng(256),rng(256),rng(256));

		cv::line(outImg(rect2),
			cv::Point(0,-epilines1[i][2]/epilines1[i][1]),
			cv::Point(img1.cols,-(epilines1[i][2]+epilines1[i][0]*img1.cols)/epilines1[i][1]),
			color);
		cv::circle(outImg(rect1), points1[i], 3, color, -1, CV_AA);

		cv::line(outImg(rect1),
			cv::Point(0,-epilines2[i][2]/epilines2[i][1]),
			cv::Point(img2.cols,-(epilines2[i][2]+epilines2[i][0]*img2.cols)/epilines2[i][1]),
			color);
		cv::circle(outImg(rect2), points2[i], 3, color, -1, CV_AA);
	}
	cv::imshow(title, outImg);
	cv::waitKey(1);
}

template <typename T>
static float distancePointLine(const cv::Point_<T> point, const cv::Vec<T,3>& line)
{
	//Line is given as a*x + b*y + c = 0
	return std::fabsf(line(0)*point.x + line(1)*point.y + line(2))
			/ std::sqrt(line(0)*line(0)+line(1)*line(1));
}
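A hypothetical usage sketch – img1, img2 and the matched point vectors are assumed to come from your own feature-matching pipeline; the fundamental matrix can be estimated with cv::findFundamentalMat:

std::vector<cv::Point2f> points1, points2; // filled by your feature matching
cv::Mat img1, img2;                        // the two input images

// Estimate F from the matches (RANSAC) and draw the epipolar lines,
// hiding point pairs further than one pixel from their epipolar line.
cv::Matx33d F = cv::findFundamentalMat(points1, points2, cv::FM_RANSAC, 3.0, 0.99);
drawEpipolarLines("Epipolar lines", F, img1, img2, points1, points2, 1.0f);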

The most important concepts of epipolar geometry. Author: Arne Nordmann, CC BY-SA 3.0

Remote Android emulator

Update: According to a comment, this no longer works due to adb's cryptographic authentication!

I often use the Android emulator to check my apps with different display configurations and to stress-test them. But the problem is that it is really slow on my development laptop. So I installed the Android emulator on my desktop PC running Windows and connect to it over my LAN. The major advantage is that you can continue using your development machine while a “server” deals with emulating – one could even emulate several devices at once and still continue programming.

The approach in a nutshell: Forward the emulator’s port so that it is accessible in the local network. Then connect the ADB to it.

On your desktop – the “server”:

  1. Store the executable of Trivial Portforward on the desktop system (e.g. directly in C:\trivial_portforward.exe).
  2. Create a virtual device to emulate (HowTo) and name it “EmulatedAndroid”.
  3. Create a batch file:
    <your-android-sdk-path>\tools\emulator -avd EmulatedAndroid &
    echo 'On the development machine: adb kill-server and then: adb connect <desktop-pc-name>:5585'
    C:\trivial_portforward 5585 127.0.0.1 5555

  4. If you execute this batch file on your desktop PC, it will open the emulator with the specified virtual device.

Now on your laptop – the “client”:

  1. Given that both systems are in the same network, you can connect to the emulator from your laptop by typing in a terminal:
    adb kill-server
    adb connect <desktop-pc-name>:5585

  2. Now you can upload apps, access the logcat and execute adb commands on your remote emulator like on any other Android device – and all without performance impairments on your workstation. See the example after this list.
  3. If you are experiencing communication losses, increase the emulator timeout in the Eclipse settings to maybe 5000 ms (Window → Preferences → Android → DDMS → ADB connection time out (ms)).
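For example – a hypothetical session once the connection is established (only standard adb commands; MyApp.apk is a stand-in for your own build):

adb devices                                # the emulator is listed as <desktop-pc-name>:5585
adb -s <desktop-pc-name>:5585 install MyApp.apk
adb -s <desktop-pc-name>:5585 logcat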

Hello World for Android computer vision

Every once in a while I start a new computer vision project on Android. And I always face the same question: “What do I need again to retrieve a camera image ready for processing?” While there are great tutorials around, I just want a downloadable project with a minimum amount of code – no taking pictures, no setting resolutions, just the continuous retrieval of incoming camera frames.

So here they are – two different “Hello World” for computer vision. I will show you some excerpts from the code and then provide a download link for each project.

Pure Android API

The main problem to solve is how to store the camera image in a processable image format – in this case android.graphics.Bitmap.

@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width,
		int height) {
	if(camera != null) {
		camera.release();
		camera = null;
	}
	camera = Camera.open();
	try {
		camera.setPreviewDisplay(holder);
	} catch (IOException e) {
		e.printStackTrace();
	}
	camera.setPreviewCallback(new PreviewCallback() {

		public void onPreviewFrame(byte[] data, Camera camera) {
			System.out.println("Frame received!"+data.length);
			Size size = camera.getParameters().getPreviewSize();
			/*
			 * Directly constructing a bitmap from the data would be possible if the preview format
			 * had been set to RGB (params.setPreviewFormat() ) but some devices only support YUV.
			 * So we have to stick with it and convert the format
			 */
			int[] rgbData = convertYUV420_NV21toRGB8888(data, size.width, size.height);
			Bitmap bitmap = Bitmap.createBitmap(rgbData, size.width, size.height, Bitmap.Config.ARGB_8888);
			/*
			 * TODO: now process the bitmap
			 */
		}
	});
	camera.startPreview();
}

Notice the function convertYUV420_NV21toRGB8888(), which is needed since the internal representation of camera frames does not match any supported Bitmap format.
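The full implementation ships with the downloadable project; a minimal sketch of such a conversion (assuming the standard NV21 layout – a full-resolution Y plane followed by an interleaved, half-resolution VU plane – and the usual BT.601-style coefficients) could look like this:

public static int[] convertYUV420_NV21toRGB8888(byte[] data, int width, int height) {
	int size = width * height;
	int[] pixels = new int[size];
	for (int i = 0; i < height; i++) {
		for (int j = 0; j < width; j++) {
			int y = data[i * width + j] & 0xff;
			// The VU plane starts after the Y plane; one V/U pair covers a 2x2 pixel block.
			int vuIndex = size + (i >> 1) * width + (j & ~1);
			int v = (data[vuIndex] & 0xff) - 128;
			int u = (data[vuIndex + 1] & 0xff) - 128;
			int r = clamp((int) (y + 1.370705f * v));
			int g = clamp((int) (y - 0.698001f * v - 0.337633f * u));
			int b = clamp((int) (y + 1.732446f * u));
			pixels[i * width + j] = 0xff000000 | (r << 16) | (g << 8) | b;
		}
	}
	return pixels;
}

private static int clamp(int value) {
	return value < 0 ? 0 : (value > 255 ? 255 : value);
}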

Using OpenCV

This is even more straightforward. We just use OpenCV’s JavaCameraView. If you are new to Android + OpenCV, here is a good tutorial for you.

cameraView = (CameraBridgeViewBase) findViewById(R.id.cameraView);
cameraView.setCvCameraViewListener(this);
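The frames then arrive in the listener callback – a minimal sketch, assuming the activity implements CvCameraViewListener2:

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
	// The frame is already wrapped in an OpenCV matrix – no manual conversion needed.
	Mat rgba = inputFrame.rgba();
	/*
	 * TODO: now process the matrix
	 */
	return rgba; // the returned matrix is rendered to the view
}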

OpenCV Image Watch for cv::Matx

When developing for/with OpenCV in Visual Studio, the Image Watch plug-in is very useful. However, it does not support the better-typed cv::Matx types (e.g. cv::Matx33f, which is the same as cv::Matx<float,3,3>). Here is how I made use of Visual Studio’s debugger type visualizers to customize the plug-in:

  1. Go to the folder <VS Installation Directory>\Common7\Packages\Debugger\Visualizers\ and create a new file called Matx.natvis
  2. Open the file and insert the following:
    <?xml version="1.0" encoding="utf-8"?>
    <!-- Philipp Hasper, http://www.hasper.info-->
    <AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
    <UIVisualizer ServiceId="{A452AFEA-3DF6-46BB-9177-C0B08F318025}" Id="1" MenuName="Add to Image Watch"/>

    <Type Name="cv::Matx&lt;*,*,*&gt;">
    <UIVisualizer ServiceId="{A452AFEA-3DF6-46BB-9177-C0B08F318025}" Id="1" />
    </Type>

    <Type Name="cv::Matx&lt;*,*,*&gt;">
    <DisplayString Condition='strcmp("float", "$T1") == 0'>{{FLOAT32, size = {$T3}x{$T2}}}</DisplayString>
    <DisplayString Condition='strcmp("double", "$T1") == 0'>{{FLOAT64, size = {$T3}x{$T2}}}</DisplayString>

    <Expand>
    <Synthetic Name="[type]" Condition='strcmp("float", "$T1") == 0'>
    <DisplayString>FLOAT32</DisplayString>
    </Synthetic>
    <Synthetic Name="[type]" Condition='strcmp("double", "$T1") == 0'>
    <DisplayString>FLOAT64</DisplayString>
    </Synthetic>

    <Item Name="[channels]">1</Item>
    <Item Name="[width]">$T3</Item>
    <Item Name="[height]">$T2</Item>
    <Item Name="[data]">(void*)val</Item>
    <Item Name="[stride]">$T3*sizeof($T1)</Item>
    </Expand>

    </Type>

    </AutoVisualizer>
  3. You do not even have to restart Visual Studio. Just start a new debugging session and you can look at your cv::Matx types in a nice little graphical window – for instance with the snippet after this list.
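A hypothetical test snippet – set a breakpoint on the return statement and add K to Image Watch:

#include <opencv2/core/core.hpp>

int main()
{
	// Hypothetical camera matrix – break here and inspect K.
	cv::Matx33f K(800.f,   0.f, 320.f,
	                0.f, 800.f, 240.f,
	                0.f,   0.f,   1.f);
	return 0;
}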

Image Watch for cv::Matx

More about customizing the Image Watch plug-in can be found on the official Image Watch documentation page.