Xamarin Tizen Networking: Under the covers of HTTP/2 in .NET

My current side/passion project requires the use of HTTP/2: It’s a .NET implementation of the Alexa Voice Service and I use it to drive Voice in a Can: Alexa for iOS, Apple Watch, Mac, Android, Android Wear, and … Tizen.

This isn’t an advert, but I do want to set the context. The Alexa Voice Service requires the use of HTTP/2, and this is a real-world product, not an academic exercise.

Why HTTP/2?

The reason the Alexa Voice Service requires HTTP/2 is that as well as the normal requests a client makes (send a request and get a response), the Alexa Voice Service specifies that a client keep a long-running downchannel HTTP/2 connection open so that it can use the HTTP/2 server push mechanism to send directives to the client.

For example, when the client sends a request to recognize what is being said, it sends a Recognize event to the Alexa Voice Service. This consists of a multipart MIME message: the first part is JSON indicating that a recognize request is being sent, and the second part is binary data containing the audio samples streamed from the microphone.

Whilst the microphone data is being streamed, the Alexa Voice Service can detect that the person has stopped speaking (silence detection) and it uses the downchannel to asynchronously send a StopCapture directive, at which point the client stops recording and finishes the request.
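To make that concrete, here is a minimal sketch of how such a multipart request might be assembled with HttpClient. The JSON metadata is heavily abbreviated and the names (eventsUrl, audioStream) are placeholders, not the exact AVS wire format:

// Sketch only: abbreviated metadata, placeholder names (eventsUrl, audioStream).
var multipart = new MultipartFormDataContent();

var metadata = new StringContent(
    "{ \"event\": { \"header\": { \"namespace\": \"SpeechRecognizer\", \"name\": \"Recognize\" } } }",
    Encoding.UTF8, "application/json");
multipart.Add(metadata, "metadata");

// audioStream is a live microphone stream; HttpClient reads from it as it
// sends, so the audio is streamed to the service rather than buffered first.
var audio = new StreamContent(audioStream);
audio.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
multipart.Add(audio, "audio");

var request = new HttpRequestMessage(HttpMethod.Post, eventsUrl) { Content = multipart };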

So HTTP/2 is a must: you can’t create an AVS client without supporting it.

On platforms such as iOS, WatchOS, MacOS and Android I’ve abstracted out the HTTP functionality behind an interface, and used platform-specific code to implement the interface (NSUrlSession, OkHttp, etc.).

On Tizen I wanted to see if I could just use the .NET platform.

Forcing HTTP/2 to be used by the .NET HttpClient

The first challenge was to make the .NET HttpClient use HTTP/2.

This turned out to be surprisingly easy. I needed to specify the HttpRequestMessage.Version.

This was my original code for sending a message:

var content = stream == null ? null : new StreamContent(stream);
var request = new HttpRequestMessage(httpMethod, url) {
  Content = content,
  Version = new Version(2,0)
};
var response = await _httpClient.SendAsync(request, cancellationToken);

Notice how I’m setting the Version property.
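Setting Version is a request, not a guarantee: whether HTTP/2 is actually negotiated depends on the underlying handler and the server (on Tizen it comes down to libcurl and ALPN). A quick sanity check is to log the version that comes back on the response:

// If HTTP/2 was negotiated this prints 2.0; otherwise the connection
// quietly fell back to HTTP/1.1.
Debug.WriteLine($"Negotiated HTTP version: {response.Version}");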

Handling streamed responses as the data arrives

The second challenge is that by default the HttpClient waits for the complete response. This doesn’t work with the Alexa Voice Service because it streams responses. If you ask “Alexa, what is Pi to 100 decimal places” you don’t want to wait for the complete response to arrive before you start hearing it … you want the response to stream and be played as it is received.

The solution to this was an additional parameter when calling SendAsync. You can specify whether you want the HttpClient to wait until the complete response is received, or just the HTTP headers, using the HttpCompletionOption.

var response = await _httpClient.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, cancellationToken);
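With ResponseHeadersRead, SendAsync completes as soon as the headers arrive, and the body can then be consumed incrementally. Roughly, the reading loop looks like this (ProcessChunk is a placeholder for whatever handles the data, in my case feeding the audio player):

using (var stream = await response.Content.ReadAsStreamAsync()) {
  var buffer = new byte[8192];
  int read;
  while ((read = await stream.ReadAsync(buffer, 0, buffer.Length, cancellationToken)) > 0) {
    // Handle each chunk as it arrives instead of waiting for the whole response
    ProcessChunk(buffer, read);
  }
}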

Tizen uses the .NET Core UNIX HttpClient implementation

There were times when I wanted to look at how the Tizen HttpClient was implemented. One of the many delights of Xamarin and the “new” Microsoft is that pretty much everything is open source.

I went digging, expecting to find a Tizen HttpClientHandler but to my surprise, I found it was using the .NET Core UNIX HttpClient. The source is here (it uses Curl).

Enabling logging

One final tip. Sometimes you want to see what is happening under the hood. When looking through the source I found logging statements, and I wanted to see the logs, such as this code from the CurlHandler:

static CurlHandler()
{
    // curl_global_init call handled by Interop.LibCurl's cctor

    Interop.Http.CurlFeatures features = Interop.Http.GetSupportedFeatures();
    s_supportsSSL = (features & Interop.Http.CurlFeatures.CURL_VERSION_SSL) != 0;
    s_supportsAutomaticDecompression = (features & Interop.Http.CurlFeatures.CURL_VERSION_LIBZ) != 0;
    s_supportsHttp2Multiplexing = (features & Interop.Http.CurlFeatures.CURL_VERSION_HTTP2) != 0 && Interop.Http.GetSupportsHttp2Multiplexing() && !UseSingletonMultiAgent;

    if (NetEventSource.IsEnabled)
    {
        EventSourceTrace($"libcurl: {CurlVersionDescription} {CurlSslVersionDescription} {features}");
    }

To see these log messages I first declared a _myEventListener member, which is an EventListener:

private MyEventListener _myEventListener;

Then later in my code I initialized the _myEventListener:

  var netEventSource = EventSource.GetSources().FirstOrDefault(es => es.Name == "Microsoft-System-Net-Http");
  if (netEventSource != null && _myEventListener == null) {
    _myEventListener = new MyEventListener();
    _myEventListener.EnableEvents(netEventSource, EventLevel.LogAlways);
  }

The event listener is declared like this. Note the filtering out of a couple of hard-coded strings that were polluting my output:

using System.Diagnostics.Tracing;
using System.Linq;
using System.Text;

class MyEventListener : EventListener {
  protected override void OnEventWritten(EventWrittenEventArgs eventData) {
    if (eventData.Payload == null || eventData.Payload.Count == 0) return;

    var memberNameIndex = eventData.PayloadNames.IndexOf("memberName");

    var memberName = memberNameIndex == -1 ? null : eventData.Payload[memberNameIndex].ToString();

    var message = new StringBuilder();
    for (var i = 0; i < eventData.Payload.Count; i++) {
      if (i == memberNameIndex) continue;
      if (i > 0) {
        message.Append(", ");
      }
      message.Append(eventData.PayloadNames[i] + "=" + eventData.Payload[i]);
    }

    // Filter out a couple of hard-coded messages that flood the log
    var last = eventData.Payload.Last().ToString();
    if (last == "Ask libcurl to perform any available work...") return;
    if (last == "...done performing work: CURLM_OK") return;
    if (string.IsNullOrWhiteSpace(last)) return;

    if (memberName == null) {
      Log.D(message);
    } else {
      // ReSharper disable once ExplicitCallerInfoArgument
      Log.D(message, memberName, "CurlHandler");
    }
  }
}
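An alternative to polling EventSource.GetSources() is to let the listener hook the source as soon as it is created, by overriding OnEventSourceCreated. Something like this should also work (untested sketch):

class MyEventListener : EventListener {
  protected override void OnEventSourceCreated(EventSource eventSource) {
    // Called for every event source, including ones created before this listener existed
    if (eventSource.Name == "Microsoft-System-Net-Http") {
      EnableEvents(eventSource, EventLevel.LogAlways);
    }
    base.OnEventSourceCreated(eventSource);
  }

  // OnEventWritten as shown above
}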

My logger uses Tizen.Log.Debug("viac", message, "", "", 0); to output to the log, using the Tizen system Log class.
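Log.D itself is just a thin helper I wrote; a minimal sketch of it, leaving out the timestamp formatting you can see in the output below, might look like this:

static class Log {
  // Minimal sketch of my Log.D helper: the real one also prepends a timestamp.
  public static void D(object message,
                       [System.Runtime.CompilerServices.CallerMemberName] string memberName = "",
                       string className = "") {
    Tizen.Log.Debug("viac", $"{className} {memberName} {message}", "", "", 0);
  }
}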

I used this command line to view the log:

sdb dlog viac:D

An extract of the output in all its glory:

D/viac    ( 7582):  18:30:26 []  TizenNetworkImpl MakeHttpRequest Sending...
D/viac    ( 7582):  18:30:26 []  CurlHandler SendAsync thisOrContextObject=HttpClient#52727599, parameters=(Method: GET, RequestUri: 'https://avs-alexa-na.amazon.com/v20160207/directives', Version: 2.0, Content: <null>, Headers:
D/viac    ( 7582): {
D/viac    ( 7582):   Authorization: Bearer ...
D/viac    ( 7582): })
D/viac    ( 7582):  18:30:26 []  CurlHandler .ctor thisOrContextObject=CurlResponseMessage#51192825, parameters=(OK)
D/viac    ( 7582):  18:30:26 []  CurlHandler RequestMessage thisOrContextObject=CurlResponseMessage#51192825, first=CurlResponseMessage#51192825, second=HttpRequestMessage#38539564
D/viac    ( 7582):  18:30:26 []  CurlHandler Content thisOrContextObject=CurlResponseMessage#51192825, first=CurlResponseMessage#51192825, second=NoWriteNoSeekStreamContent#64971671
D/viac    ( 7582):  18:30:26 []  CurlHandler SendAsync handlerId=26756241, workerId=4, requestId=5, message=Method: GET, RequestUri: 'https://avs-alexa-na.amazon.com/v20160207/directives', Version: 2.0, Content: <null>, Headers:
D/viac    ( 7582): {
D/viac    ( 7582):   Authorization: Bearer ...
D/viac    ( 7582): }
D/viac    ( 7582):  18:30:26 []  CurlHandler SendAsync thisOrContextObject=HttpClient#52727599, result=System.Threading.Tasks.Task`1[System.Net.Http.HttpResponseMessage]

Final thoughts

When I first learned to program I spent evening after evening of focused hours trying to break the copy-protection on 8-bit games, not to steal them (I’d already bought them), but to try to disassemble them in order to work out how to get infinite lives.

I often think that despite the formal training I later received getting a degree in computer science, those childhood hours of fierce, focused concentration, trying to accomplish something I wasn’t even sure was possible, were the best training I ever had.

I had no idea whether I could get the Alexa Voice Service running on Tizen, whether I could get HTTP/2 working, or a myriad of other things. Sometimes you just have to keep trying, having faith in your abilities, continually trying different approaches, until eventually, one day, it works.

Auto launching Xamarin Mac apps at login

I have an app, called Voice in a Can, which lets you use Alexa on your Apple Watch and iPhone. I’m working on bringing it to the Mac, and one of the things I want is that it be started at login, if the user wants this.

To do this in a sandboxed app, you need to create a helper app, and bundle it inside your main app, in a specific location (Contents/Library/LoginItems). This helper app is automatically launched at login, and has no UI – all it does is launch the main app, which in my case sits as an icon in the menu bar.

There is a great blog post on how to do this by Artur Shamsutdinov, which this post is based on. This post adds some detail, shows how to do the embedding with MSBuild, and includes troubleshooting information. You really should check out Artur’s post too.

I created a main application, in my case it is called VoiceInACan.AppleMac:

I made sure this was signed, and configured to use the App Sandbox.

In my AppDelegate I called SMLoginItemSetEnabled to tell MacOS to launch my helper app at startup (com.atadore.VoiceInACanForMacLoginHelper is the bundle ID of my helper app, defined below):

    [DllImport("/System/Library/Frameworks/ServiceManagement.framework/ServiceManagement")]
    static extern bool SMLoginItemSetEnabled(IntPtr aId, bool aEnabled);

    public static bool StartAtLogin(bool value) {
      CoreFoundation.CFString id = new CoreFoundation.CFString("com.atadore.VoiceInACanForMacLoginHelper");
      return SMLoginItemSetEnabled(id.Handle, value);
    }

    public override void DidFinishLaunching(NSNotification notification) {
      ...
      var worked = StartAtLogin(true);
      ...

In a real app you won’t want to auto-launch a sandboxed app without permission from the user, since your app will be rejected by App Review when you submit it.
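If you do ask the user, the same call works for turning the behaviour off again; a minimal sketch, assuming a hypothetical LaunchAtLogin preference key:

// "LaunchAtLogin" is a made-up preference key for illustration
var wantsAutoLaunch = NSUserDefaults.StandardUserDefaults.BoolForKey("LaunchAtLogin");
StartAtLogin(wantsAutoLaunch); // passing false unregisters the login item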

I created a helper Mac app, as another project, in my case called VoiceInACan.AppleMacLoginHelper.

I made sure this was signed, and configured to use the App Sandbox.

I edited the storyboard to uncheck Is Initial Controller (in the properties on the right) to ensure the helper app has no UI:

I updated Info.plist to mark the app as background only, by setting the LSBackgroundOnly key (because it has no UI and serves purely to launch my main app at startup):

I added a dependency from my main app to the helper app by right-clicking on References in my main app, selecting Edit References, going to the Projects tab and checking the checkbox next to my helper app:

This ensures that the helper app is built before my main app.

In my AppDelegate.cs in my helper app, I launch my main app:

using System.Linq;
using AppKit;
using Foundation;

namespace AppleMacLoginHelper {
  [Register("AppDelegate")]
  public class AppDelegate : NSApplicationDelegate {
    public AppDelegate() {
    }

    public override void DidFinishLaunching(NSNotification notification) {
      System.Console.WriteLine("ViacHelper: starting");
      if (!NSWorkspace.SharedWorkspace.RunningApplications.Any(a => a.BundleIdentifier == "com.atadore.VoiceInACanForMac")) {
        System.Console.WriteLine("ViacHelper: Got bundle");
        // The helper lives at MainApp.app/Contents/Library/LoginItems/Helper.app,
        // so climbing four path components gets us back to the main app bundle
        var path = new NSString(NSBundle.MainBundle.BundlePath)
            .DeleteLastPathComponent()
            .DeleteLastPathComponent()
            .DeleteLastPathComponent()
            .DeleteLastPathComponent();
        var pathToExecutable = path + @"Contents/MacOS/VoiceInACan";
        System.Console.WriteLine("ViacHelper: Got path: " + pathToExecutable);

        if (NSWorkspace.SharedWorkspace.LaunchApplication(pathToExecutable)) {
          System.Console.WriteLine("ViacHelper: Launched: " + pathToExecutable);
        } else {
          // Fall back to launching the main .app bundle itself
          NSWorkspace.SharedWorkspace.LaunchApplication(path);
          System.Console.WriteLine("ViacHelper: Launched: " + path);
        }
      }
      }

      System.Console.WriteLine("ViacHelper: dying");
      NSApplication.SharedApplication.Terminate(this);
    }

    public override void WillTerminate(NSNotification notification) {
      // Insert code here to tear down your application
    }
  }
}

I updated my main app to embed the helper app within it

So far I’ve created two apps: the main app, which provides my main functionality (in my case Alexa), and a helper app which has no functionality other than to launch the main app. In order for SMLoginItemSetEnabled to work, the helper app needs to be embedded within the main app.

To do this, I edited the csproj of my main app and added markup to embed the helper app. Here are the pieces; the complete thing is below:

First, define an ItemGroup that references all the files in the helper app’s bundle (the Configuration refers to Debug or Release):

  <ItemGroup>
    <HelperApp Include="$(ProjectDir)/../VoiceInACan.AppleMacLoginHelper/bin/$(Configuration)/AppleMacLoginHelper.app/**" />
  </ItemGroup>

Next, copy those files into the right place in the main app (note that this is done after _CopyContentToBundle so that it is copied before the build signs the final bundle):

  <Target Name="CopyHelper" AfterTargets="_CopyContentToBundle">
    <Message Text="Copying helper app" />
    <MakeDir Directories="$(AppBundleDir)/Contents/Library" />
    <MakeDir Directories="$(AppBundleDir)/Contents/Library/LoginItems" />
    <Copy SourceFiles="@(HelperApp)" DestinationFiles="@(HelperApp->'$(AppBundleDir)/Contents/Library/LoginItems/AppleMacLoginHelper.app/%(RecursiveDir)%(Filename)%(Extension)')" />
  </Target>

Finally, the embedded bundle’s files can be signed (this may not be necessary … first try without this):

  <Target Name="CodeSignHelper" AfterTargets="CopyHelper">
    <Message Text="Signing helper app" />
    <Codesign SessionId="$(BuildSessionId)" ToolExe="$(CodesignExe)" ToolPath="$(CodesignPath)" CodesignAllocate="$(_CodesignAllocate)" Keychain="$(CodesignKeychain)" Resources="$(AppBundleDir)/Contents/Library/LoginItems/AppleMacLoginHelper.app" SigningKey="$(_CodeSigningKey)" ExtraArgs="$(CodesignExtraArgs)">
    </Codesign>
  </Target>

This is my complete modification to my csproj (after the import of the Xamarin.Forms.targets):

  <Import Project="..\packages\Xamarin.Forms.3.3.0.912540\build\Xamarin.Forms.targets" Condition="Exists('..\packages\Xamarin.Forms.3.3.0.912540\build\Xamarin.Forms.targets')" />
  <ItemGroup>
    <HelperApp Include="$(ProjectDir)/../VoiceInACan.AppleMacLoginHelper/bin/$(Configuration)/AppleMacLoginHelper.app/**" />
  </ItemGroup>
  <Target Name="CopyHelper" AfterTargets="_CopyContentToBundle">
    <Message Text="Copying helper app" />
    <MakeDir Directories="$(AppBundleDir)/Contents/Library" />
    <MakeDir Directories="$(AppBundleDir)/Contents/Library/LoginItems" />
    <Copy SourceFiles="@(HelperApp)" DestinationFiles="@(HelperApp->'$(AppBundleDir)/Contents/Library/LoginItems/AppleMacLoginHelper.app/%(RecursiveDir)%(Filename)%(Extension)')" />
  </Target>
   <Target Name="CodeSignHelper" AfterTargets="CopyHelper">
    <Message Text="Signing helper app" />
    <Codesign SessionId="$(BuildSessionId)" ToolExe="$(CodesignExe)" ToolPath="$(CodesignPath)" CodesignAllocate="$(_CodesignAllocate)" Keychain="$(CodesignKeychain)" Resources="$(AppBundleDir)/Contents/Library/LoginItems/AppleMacLoginHelper.app" SigningKey="$(_CodeSigningKey)" ExtraArgs="$(CodesignExtraArgs)">
    </Codesign>
  </Target>

</Project>

Finally copy your main app’s bundle to the Applications folder, and run it so that it registers the embedded helper to start on login.

Troubleshooting SMLoginItemSetEnabled

The first challenge is getting log information. If you run the Console app, it only shows you information from after it was launched, which is after you log in. You can get historical information from the terminal:

sudo log collect --last 1d
open system_logs.logarchive

This will show you the last day’s worth of logs. You’ll want to look for messages from otherbsd.

The second challenge I faced was that although I had registered the startup item, it wasn’t being launched. I was getting this cryptic error: “Could not submit LoginItem job com.atadore.VoiceInACanForMacLoginHelper: 119: Service is disabled”.

After Googling, I discovered the lsregister command, and was able to see many, many “registrations” of my helper app, from development builds, backups, etc.:

/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -dump | grep AppleMacLoginHelper.app | more

What fixed it for me (your mileage may vary, and you should really check what these commands do before executing them) was:

/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -gc
/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -kill

I then re-ran my main app, which re-registered my helper app as a single entry in lsregister, and joy: my app launches at startup. I started working on this yesterday at 7:30 am and got it working around 1:30 pm. I’m hoping that if you need to do something similar this post will shave a little time off your experience!

Acknowledgements

There is no way I’d have got this working without Artur Shamsutdinov’s blog post from 2016.

Running Xamarin Forms apps on the new Tizen 4.0 Samsung Galaxy Watch

I picked up a new Samsung Galaxy Watch (SM-R800) today, and after spending an evening on it, I managed to deploy and run a Xamarin Forms (Tizen 4.0) app on it … I just tried the default template:

In case it helps someone else, these are some of the things I did. FWIW I’m using Windows running in Parallels on a Mac.

  1. Install the Tizen tools for Visual Studio, and create a new Tizen XAML App (Xamarin Forms)
  2. Enable development mode on the watch by tapping the software version
  3. Enable Wifi on the watch, and note the IP address
  4. Run the Device Manager (Tools|Tizen|Device Manager) and use the Scan button … this should detect your watch (It didn’t initially for me because I’d forgotten to set my Windows network to Private)
  5. Run the Tizen Package Manager (Tools|Tizen) and ensure you have Samsung Certificate Extension installed under Extension SDK
  6. Run the Tizen Certificate Manager (Tools|Tizen). Click the “+”. If you don’t see Samsung listed then check the previous step. Choose Samsung and run through all the steps (including signing in with a Samsung account).
  7. This is the part that tripped me up. Under Tools|Options|Tizen ensure you have “Sign the .TPK file…” checkbox checked:
  8. Build and Run (I got a hang running with the debugger, but when I started without debugging it worked). You should see the watch as the device in Visual Studio:

I’m sure I’ve forgotten something … it was a long night getting this running so feel free to reply and I’ll see if I can help.

Screencast: Your computer screen as an Alexa Smart Home Security Camera

This is a screencast I just put together showing how you can expose your computer’s screen as an Alexa Smart Home Security Camera.

I wanted this because I already have security camera software running on a Windows desktop … all I wanted was to say “Alexa, show security cameras” and see the software running on that computer.

Source referenced in the screencast is here.

Using Siri to control your Alexa Smart Home devices

I have many Smart Home devices that can be controlled from my Amazon Echo, however none of those devices can be controlled from Siri on my Apple Watch or iPhone. None are HomeKit compatible.

What I’ve done lets me control my Alexa Smart Home devices via Siri on my Apple Watch or iPhone. This solution is not elegant (it involves a Raspberry Pi, HomeBridge and a speaker) but it does work…

Code here. Demo here:

Using Google Sign-in for iOS in Xamarin Forms to access Google APIs

This is another of those posts where I am essentially writing a message to my future self to remind myself how to do something, and in the process perhaps help out someone else.

I wanted to use the Google Sign-in for iOS Xamarin Component from Xamarin Forms to let a user sign-in to Google, and then use the resulting access token to invoke one of the Google APIs, in my case the Google Tasks API.

There are several hurdles to overcome:

  • How to use the Google Sign-in for iOS Xamarin Component from Xamarin Forms, since the examples are for iOS apps;
  • How to use that component to request access to the Google Tasks API;
  • How to use the resulting access token to actually invoke the API.

Google Sign-in for iOS Xamarin Component from Xamarin Forms

The Getting Started Guide for the Google Sign-in for iOS Xamarin Component explains how to set up the component for a native Xamarin iOS app.

I followed its instructions with regards to registering on the Google API Console, downloading the GoogleService-Info.plist file, and setting up my AppDelegate:

    public override bool OpenUrl(UIApplication application, NSUrl url, string sourceApplication, NSObject annotation)
    {
      return Google.SignIn.SignIn.SharedInstance.HandleUrl(url, sourceApplication, annotation);
    }

    public override bool FinishedLaunching(UIApplication app, NSDictionary options)
    {

      NSError configureError;
      Google.Core.Context.SharedInstance.Configure(out configureError);
      if (configureError != null)
      {
        // If something went wrong, assign the clientID manually
        Debug.WriteLine("Error configuring the Google context: {0}", configureError);
        Google.SignIn.SignIn.SharedInstance.ClientID = "....apps.googleusercontent.com";
      }

          ...

The instructions with regard to signing in were trickier though, since they assume access to an iOS view controller.

Xamarin Forms hides such platform-specifics, however this post on Using Custom UIViewControllers in Xamarin.Forms on iOS by Xamarin’s Mike Bluestein explains how to get hold of the ViewController by creating a custom renderer for a page.

Assuming your Xamarin Forms main page is called “MainPage” (inspired, I know), I followed Mike’s instructions and ended up with a renderer like this:

using System.Diagnostics;
using System.Threading.Tasks;
using Foundation;
using Google.SignIn;
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

[assembly: ExportRenderer(typeof(enfiler.Views.MainPage), typeof(enfiler.iOS.IOSMainPage))]
namespace enfiler.iOS
{
  public class IOSMainPage : PageRenderer, ISignInUIDelegate, ISignInDelegate
  {
    TaskCompletionSource<string> _taskCompletionSource;

    public override void ViewDidLoad()
    {
      Services.GoogleTasks.Instance.GetAccessToken = GetAccessToken;
      base.ViewDidLoad();
    }


    public Task<string> GetAccessToken()
    {
      _taskCompletionSource = new TaskCompletionSource<string>();
      SignIn.SharedInstance.UIDelegate = this;
      SignIn.SharedInstance.Delegate = this;
      SignIn.SharedInstance.Scopes = new string[] { Google.Apis.Tasks.v1.TasksService.Scope.Tasks };
      SignIn.SharedInstance.SignInUser();
      return _taskCompletionSource.Task;
    }

    public void DidSignIn(SignIn signIn, GoogleUser user, NSError error)
    {
      if (error != null)
      {
        _taskCompletionSource.SetException(new NSErrorException(error));
      }
      else
      {
        _taskCompletionSource.SetResult(user.Authentication.AccessToken);
      }
    }
  }
}

When the Xamarin Forms page called MainPage loads, this renderer gets invoked to actually render it on iOS. Since it derives from the built-in PageRenderer class, it doesn’t have to do any of the heavy lifting of rendering, but instead simply registers itself with the Services.GoogleTasks.Instance class in my Xamarin Forms PCL, which we will see later.

Notice how GetAccessToken does the sign-in work described in the Getting Started guide. It provides for asynchronous invocation, using the TaskCompletionSource class, since the sign-in completes via the DidSignIn callback.

One difference from the Getting Started guide is that I’m specifying the Google Tasks OAuth Scope in GetAccessToken. In order to do this I needed to add the Google APIs Client Library nuget package. I also needed to activate the Google Tasks API for my app in the Google API Console.

Notice also that in DidSignIn I’m completing the task returned from GetAccessToken either with an exception, or with the OAuth access token resulting from logging in.

Invoking the Google Tasks API with the token returned from the Google Sign-In component

This is the GoogleTasks class with which the IOSMainPage class registered itself by setting the GetAccessToken callback:

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Google.Apis.Tasks.v1;

namespace enfiler.Services
{
  public class GoogleTasks
  {

    public static GoogleTasks Instance { get; } = new GoogleTasks();
    public async Task<Google.Apis.Tasks.v1.Data.Task> CreateTask(string title, string notes)
    {
      var taskService = new TasksService();
      var task = new Google.Apis.Tasks.v1.Data.Task
      {
        Title = title,
        Notes = notes
      };
      var request = taskService.Tasks.Insert(task, "@default");
      request.OauthToken = await GetAccessToken.Invoke();
      return await request.ExecuteAsync();
    }

    public Func<Task<string>> GetAccessToken { get; set; }
  }
}

I defined this in the Xamarin Forms PCL for my project, and added the Google APIs Client Library nuget package to my PCL too.

The key thing here is the assigning of the OauthToken on the request.
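The same pattern should work for the other Tasks API calls; for example, a sketch (not in my app, just to illustrate) of listing the tasks on the default list:

    // Sketch: same pattern as CreateTask, but listing tasks on the default list
    public async Task<Google.Apis.Tasks.v1.Data.Tasks> ListTasks()
    {
      var taskService = new TasksService();
      var request = taskService.Tasks.List("@default");
      request.OauthToken = await GetAccessToken.Invoke();
      return await request.ExecuteAsync();
    }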

Inside my Xamarin Forms app whenever I want to create a new Google Task I await the invocation of CreateTask which calls back into the custom renderer:

      var googleTask = await Services.GoogleTasks.Instance.CreateTask("Hello To", "Jason Isaacs");

Summary

Google have deprecated the use of Web Views to authenticate with their services and are instead requiring the use of their own libraries, such as the Google Sign-In for iOS library.

By combining the use of the Google Sign-In for iOS Xamarin Component with a custom page renderer, and requesting a custom OAuth scope, I was able to get access to a user’s Google Tasks, and then create a task.

I’ve not yet explored the same thing on Android, but I’d hope to be able to register a callback from my Android code just as on iOS to do the OAuth dance.