
Every Android API for Delphi

Delphi XE5’s support for Android includes many of the most common Android APIs, either wrapped in nice cross-platform libraries and components or accessible directly via the JNI wrappers. The rest can be accessed by creating headers to expose them. The new JNI Bridge makes calling the managed Java APIs from the native Delphi app much easier than it used to be, but it still takes a little effort to make the translation (though it is easier than translating a Windows API). Brian Long (an Embarcadero MVP) has an excellent video from CodeRage 8 that goes into great detail on the process.

But all of that has now changed. CHUA Chee Wee aka “Chewy” (also an Embarcadero MVP) has released an Android2DelphiImport tool that makes wrapping and accessing any and every Android API much easier. It gives you 3 distinct benefits:

  1. You can point it at Android.jar in the Android SDK and have it create wrappers for EVERY Android API. You’ll need to copy and paste out the pieces you want (it puts them all in one source file), but it saves a lot of typing and research. It implements the wrapper using the JNI Bridge just like the RTL does.
  2. You can point it at any other built-in Java library, like the Google Glass GDK that provides all the Glass-specific features on Google Glass, or maybe the Google Cloud Messaging (GCM) API. You’ll get a source file that wraps all the API calls exposed in that JAR file.
  3. You can use it to wrap a 3rd party Java JAR file for Android: it will create a .PAS interface for it, bundle it up for inclusion in your Delphi app, and load it at runtime.

To test this tool I pointed it at the Android.jar file for KitKat. It took a little while, but when it was done I had over 100,000 lines of interface wrappers covering EVERY Android API on KitKat. I copied out the lines for Toast support, added the necessary uses statements, and I had full Toast support in just a few minutes.
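
Here is roughly what using such a generated Toast wrapper looks like. Treat this as a sketch: the wrapper unit name and the helper unit locations are my assumptions, but the generated code follows the same JNI Bridge style as the RTL wrappers.

uses
  Androidapi.JNI.JavaTypes,  // StrToJCharSequence
  FMX.Helpers.Android,       // SharedActivityContext, CallInUIThread (XE5 locations)
  Androidapi.JNI.Toast;      // the generated wrapper unit (assumed name)

procedure ShowToast(const Msg: string);
begin
  // Toast has to be shown from the Android UI thread
  CallInUIThread(
    procedure
    begin
      TJToast.JavaClass.makeText(SharedActivityContext,
        StrToJCharSequence(Msg),
        TJToast.JavaClass.LENGTH_SHORT).show;
    end);
end;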

My next test was to point it at the Google Glass GDK (Glass Development Kit) for building native Glass apps. Previously I had only used the Android SDK & NDK, which support the common Android functionality on Glass, but the GDK adds support for the Glass-specific features. Once the GDK is installed (via the Android SDK Manager under API 15) you will find gdk.jar in the sdk\add-ons\addon-google_gdk-google-15\libs folder. The tool created a nice wrapper for it, but that wrapper wouldn’t compile because the uses clause was incomplete (it includes a notice that you need to adjust the uses clause). I had to track down 4 additional units for the uses clause, and then I extracted 3 more APIs from the earlier Android wrapper to cover APIs that weren’t previously exposed. In all it took me about 15 minutes, and then I had full support for the Google Glass GDK.

Since compiling isn’t enough, I built a simple app to insert and remove cards on the Google Glass timeline. It worked like a charm. I didn’t need to tweak or adjust the generated code at all (beyond the uses clause). Here is my code:

tm := TJTimelineManager.JavaClass.from(SharedActivityContext);
card := TJCard.JavaClass.Init(SharedActivityContext);
card.setText(StringToJString('Hello Glass'));
card.setFootnote(StringToJString('From Delphi'));
id := tm.insert(card); // Use id to edit or remove card later

I haven’t tested the 3rd scenario yet, but I did observe how it works. The tool creates a .apk out of the selected JAR file. It then includes a routine to load that APK at runtime so you can call into the methods it includes. You would need to go this route when the JAR isn’t built into the platform already. I have a library that I’m planning to test this with (so stay tuned), but I wanted to blog about the other benefits right away.

I am really excited about the potential of this tool. Not only does this mean you have even easier access to the entire Android API, but you also have easy access to all the extended APIs and 3rd party APIs. It has a simple command-line interface, and very few options, but when it works that is all you need.

Right now purchasing it is a little more complicated than using it. He only accepts Bitcoin, 1/4 of a Bitcoin to be exact. Based on the current exchange rate that is about $200 US, which is an excellent value for what you get (and considering how much effort has gone into its development). So you will either need to mine or purchase Bitcoin to pick this tool up, but if you are doing Android development I highly recommend it.


Learning from Digifort

You’ve probably seen Éric Fleming Bonilha’s videos showing off his Digifort mobile applications developed with Delphi XE5. The videos don’t mention it, but the back-end server and client applications are all written in Delphi too. Just in case you haven’t seen the videos, here they are again:

Earlier version, but on a lot of different devices:

Embarcadero just completed a case study with him too, which is really informative. I spoke with him down in Brazil and he said they previously developed mobile clients with both Java and Objective-C, and found that Delphi let them develop their projects much faster, and that they get both Android and iOS from one project. Also, and perhaps more importantly, he said the performance of the Delphi client was just as good, plus they found it more flexible for building a user interface that looks great and is easy to use.

Digifort Mobile Client

Digifort may be based in Brazil, but their clients are all over the world and are a mix of government agencies and businesses of various sizes. Eric arranged a trip to meet me in Scotts Valley this last week. He showed me pictures of some of the walls of monitors his clients have, all powered by Digifort. Some really impressive installations.

A big part of his trip was to pick up his very own Google Glass to start developing a Digifort mobile app for Google Glass. In just a couple of short sessions he was capturing images from the built-in camera, connecting to his remote server, and streaming live video from Brazil to the Glass display. The use case of security personnel viewing cameras while on patrol, while sharing what they see with everyone else, is a great one.

David, Eric and Jim

Eric also had a chance to visit with some people from R&D and product management and share his experiences working with Delphi XE5 and FireMonkey. Here are some best practices he found for making a really smooth user interface.

  • FireMonkey handles PNG images really well. He makes a lot of use of transparent and semitransparent PNG images in TImage components. Layering, animating and zooming those images is what he uses to create some of those really great effects, like the joystick control for camera control.
  • The TFloatAnimation and other animation effects are really powerful. He uses those extensively for smooth animations.
  • He created the drawer interface using TFrames (he uses a lot of frames). The main frame (center screen) has a Pan Interactive Gesture on it. He looks at the gesture to see if it is horizontal (comparing the gesture start point to a later gesture event) and has traveled at least 10 pixels in that direction. Once that happens he moves the edge of the drawer with the current finger position from the gesture. He also tracks the speed of the movement, so if you let go he uses another TFloatAnimation to smoothly finish the movement at the same speed (see the sketch after this list).
  • When the drawer starts to open he pauses all the video and other animations. This really increases the performance of the drawer animation.
  • Anything that is not currently shown on the screen has its visibility set to false. So if the drawer is closed, then everything in the drawer is invisible (since it is in a frame he just sets the frame to invisible). This keeps it from rendering and gives what is visible all the processing power. This is a common suggested optimization with many mobile development tools.
  • It is important to think about a mobile app’s interface as a mobile app. Don’t try to squeeze a desktop app onto a mobile device. That will only frustrate you and your users.
  • In his lists of cameras he uses a TVertScrollBox and fills it with a custom component that contains TImages and TLabels. That gives him maximum flexibility for the drag to reorder (again a Pan Interactive Gesture). He did find that the TLabel has better performance than drawing the text manually inside his custom control.
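
Here is a rough sketch of that drawer pan-gesture logic as he described it. The names (DrawerFrame, OpenAnimation, FPanStart, FDragging) and the position math are illustrative assumptions on my part, not Digifort’s actual code, and the speed tracking is left out for brevity.

// FPanStart: TPointF and FDragging: Boolean are private fields on the form;
// OpenAnimation is a TFloatAnimation (FMX.Ani) targeting DrawerFrame.Position.X.
procedure TMainForm.FormGesture(Sender: TObject;
  const EventInfo: TGestureEventInfo; var Handled: Boolean);
begin
  if EventInfo.GestureID = igiPan then
  begin
    if TInteractiveGestureFlag.gfBegin in EventInfo.Flags then
    begin
      FPanStart := EventInfo.Location;
      FDragging := False;
    end
    else if TInteractiveGestureFlag.gfEnd in EventInfo.Flags then
    begin
      // Let the animation finish the slide smoothly from wherever the finger let go
      OpenAnimation.StartFromCurrent := True;
      OpenAnimation.StopValue := 0; // X position of the fully open drawer (illustrative)
      OpenAnimation.Start;
    end
    else
    begin
      // Mostly horizontal and moved at least 10 pixels? Then start tracking the finger.
      if (not FDragging) and
         (Abs(EventInfo.Location.X - FPanStart.X) > 10) and
         (Abs(EventInfo.Location.X - FPanStart.X) >
          Abs(EventInfo.Location.Y - FPanStart.Y)) then
        FDragging := True;
      if FDragging then
        DrawerFrame.Position.X := EventInfo.Location.X - DrawerFrame.Width;
    end;
    Handled := True;
  end;
end;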

There were a lot of things he shared where he spent a little extra time to get things just right, and that is what makes the difference for a really smooth user interface. When asked about the learning curve to move from desktop VCL to FireMonkey Mobile, he said there was just a little learning curve, but now he really likes FireMonkey better than VCL. There was talk about having him collaborate on a user interface webinar, which I’m sure will be very informative.

You can catch Eric’s appearance in our Devices and Gadgets webinar on the webinar replay (posting any day now), and download his sample code (along with the rest of the code from the webinar).

What are some tips and best practices you’ve found in your FireMonkey mobile development?


Maximized Side-by-Side Code Editing

Sometimes we forget some of the basics. I had a customer ask if you can maximize the code editor window in RAD Studio and edit two files side by side. I’d heard someone talk about this a while ago, but I couldn’t remember the details.

There is an option in Tools / Options / Editor Options / Display that allows a code window to Zoom to full screen.

Zoom To Full Screen

Then right-click in the code window you want full screen, bringing up the View menu, and choose New Edit Window.


With Windows 7 or Windows 8, just drag this new code window to the left or right edge of the screen and it automatically gets tiled nicely.

Maximized RAD Studio Code Editor Side-by-Side


Buffering Sensor Data

Working with sensors on devices can often lead to large amounts of data coming at you really fast. For example, the TMotionSensor’s OnDataChange event fires 100 times a second on my Nexus 5. When I was building my level app for Google Glass, the level bar was bouncing all over the place because of the sensitivity and sample rate.

My first thought was to only take every 10th sample, but I wasn’t happy with that either, because the specific sample it pulled could be the one where there was a jitter.

Example: 1,2,1,1,2,1,2,3,1,3,12,2,3,1

If I just looked at samples 1 and 11 then I would see a lot of movement, but in reality it was relatively stable most of the time.

What I ended up doing was buffering the data and taking an average. I just created a generic TList of the appropriate type, and during the OnDataChange event I would simply store the sample data. When it came time to update the display I took the average of the samples, which I found gave a much smoother and more representative display.

It was still possible, though, that the line could jump erratically if I really moved a lot. So I decided to use an animation for the movement. This keeps the line movement smooth, even if there is a lot of movement (it interpolates the positions between the current line position and the new position). I used a TFloatAnimation and set the StartFromCurrent property to True.

When the animation is finished I set the StopValue to the average of the buffered values, then enable it again. It is important to always clear the sample values after taking an average; otherwise the movement will continue to get slower and slower as it becomes more and more stable (averaging a large enough sample of numbers results in a smaller range of results).

I was really pleased with how smooth things looked with a 0.1 second duration on the animation. With 100 samples a second, this translates into each animation covering the average of 10 samples. The built in animations made it really easy, and the final display looked great.
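
Here is a minimal sketch of that buffer-and-average approach. The field and component names (FSamples, MotionSensor1, LevelAnimation) are illustrative assumptions, not the actual code from the webinar download.

// FSamples: TList<Double> is a form field (System.Generics.Collections),
// created in FormCreate and freed in FormDestroy.
procedure TLevelForm.MotionSensor1DataChange(Sender: TObject);
begin
  // Fires ~100 times a second: just buffer the reading
  // (AccelerationX is an assumption; use whichever axis drives your level bar)
  FSamples.Add(MotionSensor1.Sensor.AccelerationX);
end;

procedure TLevelForm.LevelAnimationFinish(Sender: TObject);
begin
  // LevelAnimation is a TFloatAnimation (FMX.Ani) with a 0.1 second Duration
  if FSamples.Count = 0 then
    Exit;
  LevelAnimation.StartFromCurrent := True;
  LevelAnimation.StopValue := Mean(FSamples.ToArray); // System.Math.Mean
  FSamples.Clear; // clear, or the display gets slower to respond over time
  LevelAnimation.Start;
end;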

I’ll include the source code with the downloads from the Making the Connection: Programming Devices and Gadgets with RAD Studio webinar coming up next week!

RAD-in-Action Webinar, “Making the Connection: Programming Devices and Gadgets with RAD Studio,” Wednesday, January 22, 2014


Skeuomorphic No More?

Skeuomorph is compounded from the Greek skéuos (container or tool) and morph (shape). It describes something that possesses additional ornamentation indicative of its inspiration. It is used to describe both physical objects and digital designs.

In the physical world we frequently see it as something made of plastic that is styled to look like leather, wood, etc. In the digital world it shows up when a button or other digital element contains textures, shading, etc., to make it look like the physical element that inspired it. From a design point of view, it is useful in digital assets because the user recognizes what an object represents by its physical familiarity (called an affordance).

Apple’s platforms used to be full of great examples of skeuomorphism. On iOS all the default icons had a glare that made them look 3D. Many apps had extra details to make them look lifelike. Take the Notes app: it was full of torn bits of paper, leather borders, stitching, paper lines, etc. The new version doesn’t have any of that (although it does have a slight paper texture).

iOS6 vs iOS7 Notes

iOS 7 didn’t lead the move to non-skeuomorphic design. Windows Phone 7 (the predecessor to WP8 and Windows 8) and the Metro design eschewed skeuomorphism completely. (If you are keeping score, iOS 7’s features were inspired by Android and its design was inspired by Microsoft.) Android has always been straddling the proverbial skeuomorphic fence, although with the other two mobile players moving away from skeuomorphism I expect Android to follow.

iOS 7’s move away from skeuomorphism really highlights how most app designs no longer try to mimic the platform’s design completely. Users are creative with their apps, and oftentimes bring their own design with their app across all platforms. That is the great thing about building your cross-platform apps with Delphi: you can use the standard platform styles so your app looks like a standard app on each platform, or just as easily switch to a premium or custom style and have your app stand out and look consistent across platforms.

Now you need to ask yourself if I only wrote this post so I could use words like eschew, skeuomorphic, proverbial and affordance.


Launching a Delphi XE5 App via Voice on Google Glass

Building and running an app on Google Glass is easy with Delphi XE5, but what about integrating it into the Glass menu system and launching it with a voice command? Turns out that is pretty easy too.

First of all, I find it easiest to add AndroidManifest.template.xml to your project. It shows up in your project folder after you build an Android app the first time. This template is for the AndroidManifest XML file that is used to describe your application to the Android OS. We open this file and find the <intent-filter> element inside the main <activity> element. We then need to add the following child to it, alongside the <action> and <category> child elements already there.

<action android:name="com.google.android.glass.action.VOICE_TRIGGER" />

This tells it that our main activity is eligible to receive the VOICE_TRIGGER intent. Next we need to specify which voice command we want to receive. This is done by adding a <meta-data> element outside of the <intent-filter> element (but still inside the <activity> element).

<meta-data android:name="com.google.android.glass.VoiceTrigger" android:resource="@xml/voice_trigger_start" />

This specifies that there is VoiceTrigger metadata in the XML file voice_trigger_start.xml.
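
Putting the pieces together, the <activity> element in AndroidManifest.template.xml ends up looking roughly like this (the activity name, label placeholder, and formatting here are illustrative; your template will have additional attributes):

<activity android:name="com.embarcadero.firemonkey.FMXNativeActivity"
          android:label="%activityLabel%">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
        <action android:name="com.google.android.glass.action.VOICE_TRIGGER" />
    </intent-filter>
    <meta-data android:name="com.google.android.glass.VoiceTrigger"
               android:resource="@xml/voice_trigger_start" />
</activity>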

Update: Thanks to the Glass XE16 KitKat update, a special permission is required to use a non-standard voice trigger command. In the manifest, right below the <%uses-permission%> line you need to add the following special permission:

<uses-permission android:name="com.google.android.glass.permission.DEVELOPMENT" />

So now we need to create that XML file. Simply right click on your project in the Project Manager and select Add New, then under Web Documents select XML File. Rename it to voice_trigger_start.xml, save it, and edit it to look like the following:

<?xml version="1.0" encoding="UTF-8"?>
<trigger keyword="My voice is my passport, verify me." />

The value of keyword can be any word or phrase that you want to associate with your app. This will be used in the sentence “OK Glass, my voice is my passport, verify me.” Generally you want to keep it short, like “take a picture” or “get directions”, while not using something that conflicts with a built-in command. This is also what will show up in the launcher on Glass next to your app icon.

Now we use the Deployment Manager to specify the location for this new XML file. Simply go to Project \ Deployment and select Add File and choose the voice_trigger_start.xml file. Then change the Remote Path to res\xml.

Simply run your app from the IDE (after you’ve installed a Google Glass USB ADB driver) and after that you can launch your app from the Glass launcher or your selected voice command.

Want to learn more about other devices and gadgets? Join me for my free webinar on Programming Devices and Gadgets with RAD Studio on January 22nd.


Connecting to Any Android with ADB via USB

I’ve seen other attempts at universal ADB (Android Debug Bridge) drivers, but I’ve tested those and not had any luck. The following steps have worked for a wide variety of devices that I’ve tested. With such a huge variety of different Android devices available it isn’t always easy to find the right USB driver, but you need an ADB USB driver to connect development and debugging tools. This is different than being able to add and remove files from the Android device.

Disclaimer: I’ve done this a few times, and talked to others who have done it too, all without any incident. This is however a bit of a hack, and may result in some unintended consequences, which may include voiding your warranty, damaging your computer, damaging your Android device, or even causing your hair to fall out. Proceed at your own risk.

First of all, you need the Android SDK installed. You don’t need the ADT Bundle or Android Studio if you don’t want them; just scroll down to Use an Existing IDE. If you have RAD Studio XE5 (an edition with Mobile) installed then you had the option to install this during the RAD Studio install. I’ll include directions for either installation method. Google provides a USB driver for their Nexus line of devices. This is the driver we are going to use, but first we need to modify it to work with our device. I’m assuming you are running Windows; OS X doesn’t need device-specific USB drivers.

These directions are for Windows 8.1. Run the Android SDK Manager. You can do this from the Android Tools start menu item that is installed with RAD Studio, or run the android.bat file located in the SDK\Tools folder of the Android SDK installation. This brings up the Android SDK Manager. Scroll to the bottom and look for Google USB Driver in the Extras category. If it is not installed, put a check mark next to it and install it.

Android SDK Manager - Extras - USB Driver

Next go to the folder where your Android SDK is installed. With RAD Studio XE5 the default install location is under Users\Public\Documents:

C:\Users\Public\Documents\RAD Studio\12.0\PlatformSDKs\adt-bundle-windows-x86-20130522

From there go to the \sdk\extras\google\usb_driver folder, as that is where the Google USB Driver is installed. I typically make a copy of this folder somewhere else, as we will be modifying some of these files; if you get an updated driver, your changes will be overwritten if you leave them here. Next we need to go to Device Manager with your new Android device attached (and in developer mode). Look for the entry for Android under Other devices.

Device Manager - Other Devices - Android

This is your Android device without a driver loaded. If you don’t see it then either it isn’t connected, or your Android device isn’t in developer mode. It is possible it may show up with the name of the Android device, but it should still be under “Other devices” and have the yellow triangle on it. Right-click on this device and select Properties.

Android Properties - No Driver

Then go to the Details tab and from the dropdown select the Hardware Ids property.

Android Properties - Details

These are the identifiers for your specific Android device. They should look similar to the picture above.

USB Driver file in folder

Now, using your favorite text editor, open the android_winusb.inf file we found in the usb_driver folder above. Locate the line that says [Google.NTamd64]. You’ll see some entries above this line; these are for 32-bit installs (that section is [Google.NTx86]), and the entries after the line are for 64-bit installs. Chances are you only need to edit one section, since you are doing this for your own Windows install, but you can edit both sections if you are not sure. The lines are the same.

So add lines similar to the following in the section(s) you choose.

;Samsung Galaxy S3 
%SingleAdbInterface% = USB_Install, USB\VID_04E8&PID_6860 
%CompositeAdbInterface% = USB_Install, USB\VID_04E8&PID_6860&MI_03

You probably noticed that the crazy-looking string on the right looks really similar to the values we saw for the Hardware Ids in Device Manager. The line prefixed by the semicolon is a comment, so I usually put the name of the device there.

Notice that the portion of the identifier with REV_#### is missing (I usually leave it off, but it should work either way). Also, the %SingleAdbInterface% line doesn’t have the MI_## portion, while the %CompositeAdbInterface% line does include it. You might need to experiment with this to find what works (again, remember the disclaimer).

If you want to load the driver for Google Glass, then it should look something like the following.

;Google Glass
%SingleAdbInterface% = USB_Install, USB\VID_18D1&PID_9001&REV_0216&MI_01
%CompositeAdbInterface% = USB_Install, USB\VID_18D1&PID_9001&MI_01

%SingleAdbInterface% = USB_Install, USB\VID_18D1&PID_9001&REV_0216&MI_00
%CompositeAdbInterface% = USB_Install, USB\VID_18D1&PID_9001&MI_00

(the first two lines are for the 2nd edition, the second two lines are for the 1st edition I believe).

Advanced: If you want to load the driver for the bootloader, then put your device into bootloader mode and add a %SingleBootLoaderInterface% line for the value that shows up in Device Manager then (it will be different).

Now save the android_winusb.inf file. Unfortunately, since it is modified, the signature is invalid and Windows won’t let you load it. With Windows XP this wasn’t such a big deal, but in recent versions the driver signature is enforced. There is a way around it though.

Check out the following guides for loading unsigned drivers in your specific OS:

I’ve also made a video of the process on Windows 8.1.

Want to learn more about connecting to devices and gadgets? Join me for my free webinar on Programming Devices and Gadgets with RAD Studio on January 22nd.

Programming Devices and Gadgets with RAD Studio