Honestly, the watch works well enough on its own. You need to get used to its 90s-Casio-watch-style control via five hardware buttons, but once you've learned it, you can access all the relevant data without any additional connection.
But still, for a bit more convenience and more in-depth details of fitness and activity data, Gadgetbridge is quite nice. Inside the app, you can easily connect to the Garmin watch via Bluetooth.
By default, fetching and updating activity data is done manually via a button press, since it may take 10-15 seconds. Once the data is downloaded from the watch, you can dive into all the details about your activities, sleep, heart rate, etc.
Screenshots from https://gadgetbridge.org/basics/features/activities/
There are a couple of maintenance steps you'll need to perform manually from time to time, as Gadgetbridge cannot interfere too much with the intellectual property of the original manufacturers.
Firmware updates
Gadgetbridge allows you to upload updated firmware to the device, but it doesn’t tell you where to get these files from, most likely out of fear of retaliation.
So how do you get the firmware files for your Garmin watch? From a non-shady source, preferably? Easy: you can find them on Garmin's official forum. More specifically: their beta builds published for sideloading also contain the last official build, for an easy rollback.
Inside, you find the latest official build under SystemBackdate_XX.XX/GUPDATE.GCD
Open Gadgetbridge, connect to your watch
Click the three dot menu next to your watch, then “File Installer”
Select the .gcd file and upload it to the watch
AGPS updates
AGPS is responsible for speeding up your GPS-based localization and making it more precise. For that, it relies on (pre-)computed satellite orbit and correction data, which must be refreshed from time to time, e.g. every 30 days.
Open Gadgetbridge, connect to your watch
Click on the gear to open the device-specific settings
Click on “Location” and scroll down to where it says “Folder”. Set a folder where you will download the AGPS file in a minute.
After folder selection, back on the “Location” screen, see the AGPS 1 URL. Something like https://api.gcs.garmin.com/...
Click on it to copy it to the clipboard. Open the link in a web browser to download the file to the folder you set before.
Back on the “Location” screen, directly under the URL, select the “Local file” you’ve just downloaded
The “Status” should switch to “Pending”. Whenever the watch requests an AGPS update, Gadgetbridge will now intercept that call and deliver the file. Some days later, you'll see the status switch to “Current”.
(Cloud) Backup
The automatic export periodically stores the Gadgetbridge database at a location of your choice, which can also be an online folder, e.g. from Nextcloud if you have the app installed. The important caveat is: This only stores the already processed data from Gadgetbridge, not the raw files from the device (e.g. .fit files in case of Garmin):
In the app’s settings (not the device settings!) go to “Automations”
Toggle the switch for “Auto export enabled” to ON
Under “Export location” select the folder where to export the Gadgetbridge.db to.
If you want to also get the raw files from the device backed up, this needs to be triggered manually:
In the app, open the tab “Data management”
Click “Export zip” and store the file at a location of your choice
The resulting file contains the Gadgetbridge database under database/Gadgetbridge and the raw device files under files/<device ID>/
More about backups, including examples of how to process the data, can be found in the Gadgetbridge manual.
Or: How to install any app on the Quest 3 without giving Meta your phone number.
With the long-anticipated Apple Vision Pro becoming available on February 2nd, 2024 (unfortunately only in the US), we'll finally see Apple's take on a consumer-ready headset for mixed reality – er … I meant to say spatial computing. Seamless video see-through and hand tracking – what a technological marvel.
As of now, the closest alternative to the Vision Pro – for those unwilling to spend $3,500 or located outside the US – seems to be the Meta Quest 3, and at a fraction of the price, at $500. But unlike Apple, Meta is less known for privacy-aware products. After all, it is their core business model not to be.
This post explains how to increase your privacy on the Quest 3, in four easy steps.
This week Google released ARCore, their answer to Apple’s recently published Augmented Reality framework ARKit. This is an exciting opportunity for mobile developers to enter the world of Augmented Reality, Mixed Reality, Holographic Games, … whichever buzzword you prefer.
To get to know the AR framework, I wanted to test how easy it would be to combine it with another awesome Android framework: Google VR, used for their Daydream and Cardboard platforms – specifically, its Spatial Audio API. And despite my never having used either of those libraries, combining them is astonishingly simple.
The results:
The goal is to add correctly rendered three dimensional sound to an augmented reality application. For a demonstrator, we pin an audio source to each of the little Androids placed in the scene.
Well, screenshots don't make sense for demonstrating audio, but without them this post looks so lifeless 🙂 Unfortunately, I could not manage to do a screen recording that includes the audio feed.
The how-to:
Set up ARCore as explained in the documentation. Currently, only the Google Pixel and the Samsung Galaxy S8 are supported, so you need one of those to test it out. Device coverage will increase in the future.
The following step-by-step tutorial starts at the sample project located in /samples/java_arcore_hello_ar and is based on the current GitHub repository's HEAD.
Open the application’s Gradle build file at /samples/java_arcore_hello_ar/app/build.gradle and add the VR library to the dependencies
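The dependency line itself would have looked roughly like this at the time – the artifact coordinates and version number here are an assumption, so check the Google VR SDK documentation for the current ones:

```groovy
dependencies {
    // Google VR spatial audio library – version is an assumption, adjust as needed
    compile 'com.google.vr:sdk-audio:1.80.0'
}
```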
Place a sound file in the asset folder. I had some trouble getting it to work until I found out that it has to be a 32-bit float mono wav file. I used Audacity for the conversion:
Open your Audio file in Audacity
Click Tracks -> Stereo Track to Mono
Click File -> Export. Select “Other uncompressed files” as type, Click Options and select “WAV” as Header and “Signed 32 bit PCM” as encoding
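If you prefer the command line over Audacity, the same conversion can be done with ffmpeg (assuming you have ffmpeg installed; input.mp3 is a placeholder for your source file):

```
# Downmix to mono and encode as 32-bit float PCM in a WAV container
ffmpeg -i input.mp3 -ac 1 -c:a pcm_f32le sams_song.wav
```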
We have to apply three modifications to the sample's HelloArActivity.java: (1) bind the GvrAudioEngine to the Activity's lifecycle, (2) add a sound object for every object placed into the scene, and (3) continuously update the audio object and listener positions. You'll find the relevant sections below.
public class HelloArActivity extends AppCompatActivity implements GLSurfaceView.Renderer {
/*
...
*/
private GvrAudioEngine mGvrAudioEngine;
private ArrayList<Integer> mSounds = new ArrayList<>();
final String SOUND_FILE = "sams_song.wav";
@Override
protected void onCreate(Bundle savedInstanceState) {
/*
...
*/
mGvrAudioEngine = new GvrAudioEngine(this, GvrAudioEngine.RenderingMode.BINAURAL_HIGH_QUALITY);
new Thread(
new Runnable() {
@Override
public void run() {
// Prepare the audio file and set the room configuration to an office-like setting
// Cf. https://developers.google.com/vr/android/reference/com/google/vr/sdk/audio/GvrAudioEngine
mGvrAudioEngine.preloadSoundFile(SOUND_FILE);
mGvrAudioEngine.setRoomProperties(15, 15, 15, PLASTER_SMOOTH, PLASTER_SMOOTH, CURTAIN_HEAVY);
}
})
.start();
}
@Override
protected void onResume() {
/*
...
*/
mGvrAudioEngine.resume();
}
@Override
public void onPause() {
/*
...
*/
mGvrAudioEngine.pause();
}
@Override
public void onDrawFrame(GL10 gl) {
// Clear screen to notify driver it should not load any pixels from previous frame.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
try {
// Obtain the current frame from ARSession. When the configuration is set to
// UpdateMode.BLOCKING (it is by default), this will throttle the rendering to the
// camera framerate.
Frame frame = mSession.update();
// Handle taps. Handling only one tap per frame, as taps are usually low frequency
// compared to frame rate.
MotionEvent tap = mQueuedSingleTaps.poll();
if (tap != null && frame.getTrackingState() == TrackingState.TRACKING) {
for (HitResult hit : frame.hitTest(tap)) {
// Check if any plane was hit, and if it was hit inside the plane polygon.
if (hit instanceof PlaneHitResult && ((PlaneHitResult) hit).isHitInPolygon()) {
/*
...
*/
int soundId = mGvrAudioEngine.createSoundObject(SOUND_FILE);
float[] translation = new float[3];
hit.getHitPose().getTranslation(translation, 0);
mGvrAudioEngine.setSoundObjectPosition(soundId, translation[0], translation[1], translation[2]);
mGvrAudioEngine.playSound(soundId, true /* looped playback */);
// Set a logarithmic rolloff model and mute after four meters to limit audio chaos
mGvrAudioEngine.setSoundObjectDistanceRolloffModel(soundId, GvrAudioEngine.DistanceRolloffModel.LOGARITHMIC, 0, 4);
mSounds.add(soundId);
// Hits are sorted by depth. Consider only closest hit on a plane.
break;
}
}
}
/*
...
*/
// Visualize planes.
mPlaneRenderer.drawPlanes(mSession.getAllPlanes(), frame.getPose(), projmtx);
// Visualize anchors created by touch.
float scaleFactor = 1.0f;
for (int i = 0; i < mTouches.size(); i++) {
PlaneAttachment planeAttachment = mTouches.get(i);
if (!planeAttachment.isTracking()) {
continue;
}
// Get the current combined pose of an Anchor and Plane in world space. The Anchor
// and Plane poses are updated during calls to session.update() as ARCore refines
// its estimate of the world.
planeAttachment.getPose().toMatrix(mAnchorMatrix, 0);
// Update and draw the model and its shadow.
mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
mVirtualObject.draw(viewmtx, projmtx, lightIntensity);
mVirtualObjectShadow.draw(viewmtx, projmtx, lightIntensity);
// Update the audio source position since the anchor might have been refined
float[] translation = new float[3];
planeAttachment.getPose().getTranslation(translation, 0);
mGvrAudioEngine.setSoundObjectPosition(mSounds.get(i), translation[0], translation[1], translation[2]);
}
/*
* Update the listener's position in the audio world
*/
// Extract positional data
float[] translation = new float[3];
frame.getPose().getTranslation(translation, 0);
float[] rotation = new float[4];
frame.getPose().getRotationQuaternion(rotation, 0);
// Update audio engine
mGvrAudioEngine.setHeadPosition(translation[0], translation[1], translation[2]);
mGvrAudioEngine.setHeadRotation(rotation[0], rotation[1], rotation[2], rotation[3]);
mGvrAudioEngine.update();
} catch (Throwable t) {
// Avoid crashing the application due to unhandled exceptions.
Log.e(TAG, "Exception on the OpenGL thread", t);
}
}
/*
...
*/
}
That’s it! Now, every Android placed into the scene also plays back audio.
Some findings:
Setting up ADB via WiFi is really helpful as you will walk around a lot and don’t want to reconnect USB every time.
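For reference, the usual steps for ADB over WiFi are the following (port 5555 is the conventional default; replace <phone-ip> with your phone's address):

```
# While the phone is still connected via USB, switch adbd to TCP mode
adb tcpip 5555
# Unplug the cable, then connect over the network
adb connect <phone-ip>:5555
```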
Placing the Androids too close to each other will produce a really annoying sound chaos. You can modify the rolloff model to reduce this (cf. the setSoundObjectDistanceRolloffModel call in the code excerpt above).
It matters how you hold your phone (portrait, with the current code), because ARCore measures the physical orientation of the device but the audio coordinate system is (not yet) rotated accordingly. If you want to use landscape mode, it is sufficient to set the Activity in the manifest to android:screenOrientation="landscape".
Ask questions tagged with the official arcore tag on Stack Overflow, the Google developers are reading them!
When developing an app with a big file size, the time between pressing Ctrl+F5 and having the new app instance running on a connected device can be rather long.
A timespan of around 10 seconds for a 25 MB app is long enough for me to turn to other things, e.g. staging my changes in the VCS, and then look back at the device wondering whether the new version is already running or whether I'm still looking at the old state. Usually I then start clicking around to test the new implementation just as the app closes and reopens, because the upload took longer than expected.
The solution: add a script to Android Studio's build process which closes the app immediately after pressing Ctrl+F5. This way, when you see your app screen the next time, you can be sure that you are looking at the new version.
Open Android Studio → Run → Edit Configurations
Select your application, scroll to the bottom to the “Before launch” section. Click Plus → Run External Tool → Click Plus. Set the values:
Name: Force-stop app
Program: adb.exe
Parameters: shell am force-stop <your app's package name>
Make sure that the external tool runs before the Gradle-aware Make in the “Before Launch” section
P.S.: An alternative would be to auto-increment the version code with every change but that is not feasible for me since I increment my version code automatically based on my git history (more on that later).
Update: According to a comment, this no longer works due to cryptographic authentication!
I often use the Android emulator to check my apps with different display configurations and to stress-test them. But the problem is that it is really slow on my development laptop. So I installed the Android emulator on my desktop PC running Windows and connect to it over my LAN. The major advantage is that you can continue using your development machine while a “server” deals with emulating – one could even emulate several devices at once and still continue programming.
The approach in a nutshell: Forward the emulator’s port so that it is accessible in the local network. Then connect the ADB to it.
On your desktop – the “server”:
Store the executable of Trivial Portforward on the desktop system (e.g. directly in C:\trivial_portforward.exe).
Create a virtual device to emulate (HowTo) and name it “EmulatedAndroid”.
Create a batch file:
<your-android-sdk-path>\tools\emulator -avd EmulatedAndroid &
echo 'On the development machine: adb kill-server and then: adb connect <desktop-pc-name>:5585'
C:\trivial_portforward 5585 127.0.0.1 5555
If you execute this batch file on your desktop PC, it will open the emulator with the specified virtual device.
Now on your laptop – the “client”:
Now – given that both systems are in the same network – you can connect to the emulator from your laptop by typing adb kill-server and then adb connect <desktop-pc-name>:5585 in a terminal.
Now you can upload apps, access the logcat and execute adb commands on your remote emulator like on any other Android device. And all without performance impairments on your workstation.
If you are experiencing communication losses, increase the emulator timeout in the Eclipse settings to maybe 5000 ms (Window → Preferences → Android → DDMS → ADB connection time out (ms)).
Every once in a while I start a new computer vision project on Android, and I am always facing the same question: “What do I need again to retrieve a camera image ready for processing?” While there are great tutorials around, I just want a downloadable project with a minimal amount of code – no taking pictures, no setting resolutions, just the continuous retrieval of incoming camera frames.
So here they are – two different “Hello Worlds” for computer vision. I will show you some excerpts from the code and then provide a download link for each project.
Pure Android API
The main problem to solve is how to store the camera image in a processable image format – in this case android.graphics.Bitmap.
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
if(camera != null) {
camera.release();
camera = null;
}
camera = Camera.open();
try {
camera.setPreviewDisplay(holder);
} catch (IOException e) {
e.printStackTrace();
}
camera.setPreviewCallback(new PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera camera) {
System.out.println("Frame received!"+data.length);
Size size = camera.getParameters().getPreviewSize();
/*
* Directly constructing a bitmap from the data would be possible if the preview format
* had been set to RGB (params.setPreviewFormat() ) but some devices only support YUV.
* So we have to stick with it and convert the format
*/
int[] rgbData = convertYUV420_NV21toRGB8888(data, size.width, size.height);
Bitmap bitmap = Bitmap.createBitmap(rgbData, size.width, size.height, Bitmap.Config.ARGB_8888);
/*
* TODO: now process the bitmap
*/
}
});
camera.startPreview();
}
Notice the function convertYUV420_NV21toRGB8888(), which is needed since the internal representation of camera frames does not match any supported Bitmap format.
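The helper itself is not part of the Android API; the downloadable project contains the actual implementation. A minimal, dependency-free sketch of such a conversion – using the fixed-point BT.601 arithmetic commonly seen for NV21 frames, with the method name simply mirroring the excerpt – could look like this:

```java
public class Nv21ToRgb {

    /**
     * Converts an NV21 buffer (full Y plane followed by interleaved V/U pairs)
     * into packed ARGB_8888 pixels, one int per pixel.
     */
    public static int[] convertYUV420_NV21toRGB8888(byte[] data, int width, int height) {
        int size = width * height;
        int[] pixels = new int[size];
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = (data[row * width + col] & 0xff) - 16;
                if (y < 0) y = 0;
                // Each 2x2 block of pixels shares one V/U pair (4:2:0 subsampling, V first).
                int uvIndex = size + (row >> 1) * width + (col & ~1);
                int v = (data[uvIndex] & 0xff) - 128;
                int u = (data[uvIndex + 1] & 0xff) - 128;
                // Fixed-point BT.601 conversion, scaled by 1024 (hence the final >> 10).
                int y1192 = 1192 * y;
                int r = clamp(y1192 + 1634 * v) >> 10;
                int g = clamp(y1192 - 833 * v - 400 * u) >> 10;
                int b = clamp(y1192 + 2066 * u) >> 10;
                pixels[row * width + col] = 0xff000000 | (r << 16) | (g << 8) | b;
            }
        }
        return pixels;
    }

    private static int clamp(int c) {
        return c < 0 ? 0 : Math.min(c, 262143);
    }

    public static void main(String[] args) {
        // A 2x2 mid-gray test frame: four Y samples of 128, one V/U pair of 128/128.
        byte[] gray = {(byte) 0x80, (byte) 0x80, (byte) 0x80, (byte) 0x80, (byte) 0x80, (byte) 0x80};
        int[] px = convertYUV420_NV21toRGB8888(gray, 2, 2);
        System.out.printf("first pixel: 0x%08x%n", px[0]); // prints "first pixel: 0xff828282"
    }
}
```

The resulting int[] can be handed straight to Bitmap.createBitmap(rgbData, width, height, Bitmap.Config.ARGB_8888), as in the excerpt above.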
When I saw the xkcd comic “Now”, I immediately wanted it as a widget on my smartphone. A cool little gadget showing you the approximate time of day all around the world.
I searched through the Google Play Store and only found versions with a huge file size – since they just stored all possible images in the app files, the widgets reached a size of around 25 MB.
So I spent a few hours learning how to build Android widgets and compiled my own version – less than 1 MB in size and with a cool preview animation.
Since Google Glass is currently only available for “Explorers”, usability is quite limited and the device requires some hacking. As the Glass-specific OS is updated regularly, this will change soon, but until then, the following trick will come in handy:
Install and run an APK on Google Glass:
Well, currently the standard Glass launcher is quite limited and only allows starting some pre-installed applications. You could either install a custom launcher like Launchy or do it without any modifications at all: in a nutshell, we will install the APK, find out the name of the app's launching activity and then launch it. So no modifications and no prior knowledge of the application's code are needed.
Connect the Google Glass to your computer (you should have the Android SDK installed) and enable the debug mode on your device.
Open a terminal and install the APK via adb install <apk path>.
Find the android tool aapt (located in <sdk>\build-tools\<version>).
Retrieve the package name and the activity to launch: aapt dump badging <apk path> | grep launchable-activity.
Now you have all necessary information to launch the activity whenever you want: adb shell am start -a android.intent.action.MAIN -n <package name>/<activity name>.
As an example with the password app Passdraw (currently not ported to Glass 🙂 ): adb shell am start -a android.intent.action.MAIN -n com.passdraw.free/com.passdraw.free.activities.LandingActivity
I recently got my hands on a Google Glass, the Android-based head-mounted display developed by Google. While connecting to it and installing apps works like a charm on my Linux system, it was quite a hassle to do the same with Windows.
I found a quite nice tutorial, which I had to adapt to Windows 8: in a nutshell, we have to convince the Google USB driver that it fits the Glass device, and because of that editing we have to convince Windows 8 that it is okay for the driver's signature to mismatch. Please proceed at your own risk.
Connect the Google Glass to your PC and watch how the driver installation fails. If it does work: congratulations, you are done!
Note the VID and the PID of your connected Glass. You can find them via Device Manager → Portable Devices → Glass 1 → Properties → Details → Hardware Ids.
Open <sdk>\extras\google\usb_driver\android_winusb.inf
Add the following lines using the VID and PID from step 2 to sections [Google.NTx86] and [Google.NTamd64]:
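In a stock android_winusb.inf, such entries follow this pattern – substitute the VID and PID you noted in step 2 for the XXXX placeholders (the interface names are the ones used throughout the stock file):

```
;Google Glass
%SingleAdbInterface%        = USB_Install, USB\VID_XXXX&PID_XXXX
%CompositeAdbInterface%     = USB_Install, USB\VID_XXXX&PID_XXXX&MI_01
```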
If you are not running Windows 8, you are done. If you are, the following error will occur: “The hash for the file is not present in the specified catalog file. The file is likely corrupt or the victim of tampering”. This is because we have altered the .INF-file and now the signature does not match anymore.
Go back to the file android_winusb.inf and search for the lines
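The lines in question are most likely the catalog references in the [Version] section, which in a stock android_winusb.inf look like the following – deleting or commenting them out is what causes the next error:

```
CatalogFile.NTx86           = androidwinusb86.cat
CatalogFile.NTamd64         = androidwinusba64.cat
```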
Now, you will get a different error: “The third-party INF does not contain digital signature information”. Well, this security check is great, but since we know what we are doing…: Do an “Advanced Startup” (just press the Windows key and type it in), then go to Troubleshoot → Advanced options → Startup Settings → Restart.
Disable driver signature enforcement in the boot menu.
Update your drivers again in the device manager and this time skip the driver signature enforcement.
Google Glass should now be recognized correctly. Restart your computer if you want to re-enable the driver signature enforcement.