Service Fabric, Containers and Open Networking Mode

In case you haven’t noticed, deploying applications in containers is the way of the future for a lot of workloads.  Containers can potentially solve a lot of problems that have plagued developers and operations teams for decades, but the extra layer of abstraction can also bring new challenges.

I often deploy Windows containers to Service Fabric, not only because it’s a nifty orchestrator, but also because it provides a greater array of options for modernizing Windows workloads: you can run Service Fabric on-prem as well as in Azure to support hybrid networking and other business requirements.

You can quickly create a Service Fabric cluster in Azure with the portal, and Visual Studio’s project wizard can get you started deploying existing containers to that cluster pretty quickly. But as with anything in the technology space, what comes out of the box might not do exactly what you need.

In the case of a recent project, I wanted to be able to deploy more instances of a container than I had nodes in my cluster.  By default, Service Fabric will deploy one instance of the application to each node until the application has been placed on all nodes.  However, depending on what your container does, you might want to double or triple up.  This is accomplished with two things: open networking and partitions.

You can get the majority of the way there with the documentation about container networking modes on Service Fabric – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-networking-modes.  You’ll need to make some changes to your Service Fabric deployment template, including the parts that issue each node in your VM Scale Set additional IP addresses on your subnet.  Each deployed container will get one of these IP addresses. Then you will need to make some changes to your application and service manifest files, which includes setting the networking mode to “open” and adjusting how you handle port bindings.
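For reference, the application manifest side of that change ends up looking roughly like the sketch below. The package, code package and endpoint names here are placeholders, so check the linked doc for the full set of changes, including the matching endpoint declaration in the service manifest.

<ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="MyContainerTypePkg" ServiceManifestVersion="1.0.0" />
    <Policies>
        <ContainerHostPolicies CodePackageRef="Code">
            <!-- In open mode, map the port inside the container to the endpoint declared in the service manifest -->
            <PortBinding ContainerPort="80" EndpointRef="MyContainerTypeEndpoint" />
        </ContainerHostPolicies>
        <!-- This is the switch that puts the service on the open network -->
        <NetworkConfig NetworkType="Open" />
    </Policies>
</ServiceManifestImport>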

Because your application is really a container, it’s deployed as a stateless service.  Most of the Service Fabric documentation talks about partitions in relation to stateful services, and it’s a bit unclear how to apply that to stateless ones.

Within your application manifest, you’ll need to edit your service instance to use either the named or ranged partition type instead of the default, “SingletonPartition”.  I prefer the ranged version because it’s much easier to adjust the partition count, though I admittedly don’t have a good understanding of how the low and high keys apply to containers when they aren’t actually using those ranges to distribute data.

Named Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
        <NamedPartition>
            <Partition Name="one" />
            <Partition Name="two" />
            <Partition Name="three" />
            <Partition Name="four" />
        </NamedPartition>
    </StatelessService>
</Service>

Ranged Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
        <UniformInt64Partition PartitionCount="4" LowKey="1" HighKey="10" />
    </StatelessService>
</Service>

Once you’ve made all these changes, Service Fabric will deploy containers equal to the number of instances multiplied by the partition count, up to the available number of IP addresses.  So two instances of four partitions will be eight containers and eight IP addresses.  Keep in mind that if a deployment exceeds the number of IP addresses you have available, you will get errors.  Based on my testing so far, I don’t recommend trying to max out your available IP addresses; there seems to be a need for a little wiggle room for scaling operations.

Microsoft OpenHack on Containers comes to San Francisco – May 15-17

Who?

OpenHack brings together groups of diverse developers to learn how to implement a given scenario on Azure through three days of immersive, structured, hands-on, challenge-based hacking. This scenario is focused on implementing container solutions and moving them to the cloud.

What!

Join us for three days of fun-filled, hands-on hacking where you will team up with community peers and learn how to containerize Linux and Windows based workloads and move them to the cloud. During OpenHack you will:

  • Choose your desired tooling and technology based on Kubernetes or Azure Service Fabric.
  • Hack on challenges structured to leave you with the skills and expertise needed to deploy containers and clusters in the workplace.
  • Network with fellow community members and other professional developers from startups to large enterprises, as well as Microsoft developers.
  • Get answers to your technology and workplace project questions from Microsoft and community experts.

Bonus

In addition to the challenge-based learning paths, a limited number of 1-hour envisioning slots will be made available on a first come, first served basis to work side-by-side with Microsoft experts on your own workplace projects.

OpenHack is FREE for registered attendees!

Food, refreshments, prizes and fun will be provided. If travelling, attendees are responsible for their own travel expenses and evening meals.

What you need:

To be successful and maximize value from the event, participants should have a basic understanding of the following concepts and technologies. You are not required to be an expert or authority, but a familiarity with each will be advantageous:

  • Docker containers
  • Cloud hosted services
  • REST Services
  • DevOps
  • IP Networking & Routing

Click here to register!

OpenHacks are invite only and space is limited. You may be put on a waitlist. When your registration is confirmed, we will follow up with additional details.

Azure Containers, SSH Keys and Windows

When working with containers on Azure there are a couple things to keep in mind around key management. I’ll use Azure Container Service (AKS) for the context here, but in the end, keys are keys.

You have two options when creating a cluster on AKS:

az aks create --resource-group YourRG --name YourCluster --generate-ssh-keys

az aks create --resource-group YourRG --name YourCluster --ssh-key-value \PATH\TO\PUBLIC\KEY

With --generate-ssh-keys, Azure will automatically create the necessary keys for you, named id_rsa and id_rsa.pub, in the $HOME\.ssh folder of the machine that created the cluster. If there are already keys with those names there, it will re-use them.

Once your cluster is created, you’ll use

az aks get-credentials --resource-group YourRG --name YourCluster

to download an access token to set the current context for your session, manage the cluster and deploy containers.
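Once that context is set, a quick sanity check with kubectl (assuming you have it installed) confirms you are pointed at the right cluster:

kubectl config current-context
kubectl get nodes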

If you happen to work from more than one machine, or expect other people to also access this cluster or make other clusters using the same keys, you need to share these auto-created keys appropriately. I work from two different machines, wasn’t paying attention and ended up with two different “default” sets of keys. I awkwardly discovered this when creating a cluster with my home machine, traveling with my laptop and then finding myself unable to access the cluster while out of town. Joys.

Using “--generate-ssh-keys” shall henceforth be known as “the lazy way” of key management.

To do this better, create your keys manually, put them in a secure location accessible by those who matter and then make your clusters using “--ssh-key-value” instead.  (Let’s call this the “thoughtful way.”) You will also need to provide the path to the key when requesting the access token. For example:

az aks get-credentials --resource-group YourRG --name YourCluster --ssh-key-value \PATH\TO\PRIVATE\KEY
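Putting the “thoughtful way” together end to end, it looks roughly like this from a shell with OpenSSH available (WSL, for example). The key location, resource group and cluster names are all placeholders, and the next paragraph covers the PuttyGen route if you prefer a Windows-native tool.

# generate a key pair once and store it somewhere the whole team can reach securely
mkdir -p ~/keys
ssh-keygen -t rsa -b 2048 -f ~/keys/aks_cluster_key

# create the cluster with the existing public key instead of letting Azure generate one
az aks create --resource-group YourRG --name YourCluster --ssh-key-value ~/keys/aks_cluster_key.pub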

As I’m a Windows user, I use PuttyGen for my key creation. I will refrain from reinventing the wheel on how to do this, as there are already some pretty comprehensive posts, either in Microsoft Docs or this one by Pascal Naber.

A Note about AKS vs ACS: As of this writing, you have two different ways of creating container clusters in Azure. ACS allows you to create clusters orchestrated with Kubernetes, Docker Swarm or DC/OS. Due to the nature of the way these are created, you have full access to the master node VM. If you’ll be using Putty to connect to the master node of your ACS cluster directly, you’ll need to use a Putty-specific PPK file for your private key and specify it in your Putty session settings. If you create a Kubernetes cluster using AKS (as I did in my examples above) you won’t have SSH access to the master node.

A Note About Service Principals: In addition to automatically generating keys, AKS/ACS will automatically generate the service principals it needs. However, it won’t generate a new SP for each cluster; if there is a suitable SP already in your subscription, it will re-use that one. Keep that in mind for your production clusters, as you may want to provide different service principals for different clusters. You can read more about setting up Azure AD SPs for AKS if you so desire.

Working with Containers while working on Windows

With all the rage about containers these days, you may be wondering how to get started and make sure you can be successful if you use Windows on your preferred client device. One of the cool things about working with containers from a Windows machine is that you can work with both Linux and Windows containers. This post will focus on working with Linux containers, but you’ll need all these tools for working with Windows containers too.

For building containers and working with images locally, you’ll need Docker for Windows. Just go with the default installer options and you should be ready to go in short order. You will need a machine that supports virtualization and has those features enabled. When you work with Windows containers, they run on your OS. When you work with Linux containers, they run on a Hyper-V VM you can find if you run Hyper-V Manager on your machine.
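Once the install finishes, a quick check that the engine is happy (a minimal sketch, run from PowerShell or CMD) looks like this:

docker version
docker run --rm hello-world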

It’s worth noting that if you are going to be working with persistent or shared volumes in your containers, they work a little bit differently on your Windows machine. Docker recommends that you use the --mount flag with volumes, and when using them with Linux containers, it’s better to share from the Linux Moby VM and avoid using the Windows host directly.  However, if you need to use the host directly, you can, by sharing the required drive via the Shared Drives feature under the Docker for Windows Settings.
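For example, a named volume that lives inside the Moby VM can be created and mounted roughly like this (the volume name and image are placeholders):

docker volume create mydata
docker run --rm --mount source=mydata,target=/data alpine ls /data

Because the volume stays inside the VM, you avoid the Windows drive-sharing path entirely.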

For deploying containers to Azure, you will want the latest version of the Azure CLI 2.0.  You DO NOT want anything less than version 2.0.21, trust me. You will use the Azure CLI to do things like create and manage container services (either ACS or AKS), push images to Azure Container Registry, deploy containers to Azure Container Instances and get the credentials to connect to those resources.
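As a rough sketch of the kinds of commands you’ll end up running (the group, location and registry names here are just placeholders):

az --version
az group create --name YourRG --location westus2
az acr create --resource-group YourRG --name yourregistry --sku Basic
az acr login --name yourregistry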

Once you are connected to those resources (particularly if they are going to be used for Linux containers), you’ll be using the same tools as anyone working from a Linux client, such as Kubectl for deploying containers to a Kubernetes cluster.

For sanity checking purposes, I also make sure I have Windows Subsystem for Linux (aka “Bash on Windows”) installed and the latest version of Azure CLI 2.0 installed in that environment too. I usually can do everything I need from CMD, but sometimes a strange error has me double checking my work in “Linux-land”. 🙂 Speaking of WSL, if you really want to trick out your WSL setup, read this.

Once you have Linux container hosts deployed in Azure, you may want to connect to one directly using SSH – perhaps your Kubernetes master node. I use Putty for this, because I like being able to save my connection settings in the application to use again when I’m working on a project over several days. You will need to convert your SSH keys to a PPK file with PuttyGen before using them to connect to a Linux container host.  (More to come on key management later, I promise.)

So to sum up… To get started with containers on a Windows machine, you need:

  • Docker for Windows
  • Azure CLI 2.0
  • Putty and PuttyGen

Happy Containerizing… and if you run into some “beyond the basics” challenges, let me know in the comments.

Containers with Windows and Node.js

Recently, I’ve been working on a project with one of my TE colleagues, who hails from the “developer” side of the house. One of the challenges of being interested in infrastructure and less interested in writing applications is that I’m often lacking something to build infrastructure for. So this has been a great opportunity to have something to spend my Azure credits on.

So for this project, we agreed to combine some of the things she wanted to do (Internet of Things, PowerBI, Bots, etc.) with some of the things I wanted to learn more about, like Containers and Service Fabric. The result was an idea for a sensor that would detect soil humidity and air temperature (IoT) for plants, report that data to the cloud for collection (via IoT Hub and CosmosDB) and make that data available via PowerBI for review. Ideally, having a Bot that lets me know when my plants need watering would really help with my lack of a green thumb. 🙂

As part of this, we needed to be able to deploy an API that took the data from the IoT Hub and moved it to the database. We also needed a front-end web application to show the collected data. Both of these applications were going to be written in Node.js.

Now before you start tearing apart what is clearly going to be overkill for a project of this size, keep in mind we know we can do all of this with PaaS offerings. But that would be less “fun”! You can check out the project at https://github.com/jcocchi/IoTPlantWatering and see that we’ve listed out many of the possible architecture scenarios. However, this post is about putting one of those Node.js applications in a container.

Step 1: Get Node.js onto a Windows Server Core container

Now, you’ll find plenty of information on the Web about creating a Docker container with Node.js, particularly if you’d like to run it on Linux. Combine that with the fact that Node.js is most easily installed on Windows with the MSI, and you’ll find a lot less documentation about getting it onto a Windows container. However, I came across this somewhat dated documentation and sample which got me started. It’s circa November 2016, which is a lifetime ago at this point, and references Server 2016 TP3, back when Windows containers could be managed with either PowerShell or Docker. I edited the HybridInstaller.ps1 script to download the latest version of Node.js and then followed the rest of the instructions in the “docker-managed” section.

The key bits are to download the HybridInstaller.ps1 and dockerfile to a new folder, then run:

docker build -t windowswithnodejs:v1 C:\YOUR\FOLDER

You’ll end up with an image tagged “windowswithnodejs:v1” that you can then use as a base for the next steps.

Step 2: Make Sure Node Is Actually Installed

At this point I had a local image of my container available to run, and I wanted to make sure that I really had installed Node.js correctly. For that, you can find some handy instructions here for connecting interactively to a Windows container. The whole Wiki is actually very informative if you are new to Windows containers.
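In short, the check amounts to something like this; the first command drops you into a prompt inside a container based on the image from Step 1, and the next two run inside that prompt:

docker run -it windowswithnodejs:v1 cmd
node --version
npm --version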

Step 3: Install A Node.js Application

We have two Node.js applications in our project, but I started with the simpler of the two – the RecieveHubMessages app. My project partner had nicely detailed the installation process and dependencies, so I was able to clone the application code to my desktop, create the necessary .ENV file (because you don’t want your secrets in GitHub!) and put together a dockerfile to build a fresh image based off my image with Node.js already installed. The process is exactly the same as Step 1 (above), just using docker build with a different dockerfile and a folder with the right application code in it.
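In other words, roughly this, with the tag and folder as placeholders:

docker build -t receivehubmessages:v1 C:\YOUR\APP\FOLDER
docker run -d receivehubmessages:v1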

After this was complete, I ran a container with this new image, connected to it and confirmed that the application was running. Since our goal was to be able to deploy this application in Azure, I also created an Azure Container Registry to host the image. From there, I was able to deploy it to Azure Container Services (using Kubernetes) and Azure Service Fabric. (More later on the differences between ACS and Service Fabric.)
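Getting the image into the registry is the usual tag-and-push routine; a sketch, with the registry name as a placeholder:

az acr login --name yourregistry
docker tag receivehubmessages:v1 yourregistry.azurecr.io/receivehubmessages:v1
docker push yourregistry.azurecr.io/receivehubmessages:v1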

The Network “Hack” that Wasn’t To Be

Sometimes an idea looks great on paper but doesn’t really work out when you try to configure it. And often, the only way to be sure is to break out the good old scientific method and try. So I tried. And it didn’t work, so I’m putting it here in case you get a similar wild idea in the near future.

The goal was to start with a primary VNET in Azure for some VMs. This network was going to act as a collection point for data coming in from a number of remote physical sites all over the world. In addition, some machines on the primary network would need to send configuration data to the remote sites. Ultimately, we were looking at a classic hub and spoke network design, with an Azure VNET in the center.

(Diagram: BasicNework)

There are several ways you can do this using Azure networking, VNET peering between Azure VNETs, Site-to-Site (S2S) VPNs, and even ExpressRoute. ExpressRoute was off the table for this proof of concept, and since the remote sites were not Azure VNETs, that left Site-to-Site VPN.

The features you have available to you for Site-to-Site VPN depend on the type of gateway devices you use on each end for routing purposes. For multi-site connections, route-based (aka dynamic) routing is required. However, the remote sites were connected to the internet using Cisco ASA devices. The Cisco ASA is a very popular Firewall/VPN that’s been around since about 2005, but it only uses policy-based (aka static) routing.

So while we could easily use a static route to connect our primary site to any SINGLE remote network using the S2S VPN, we couldn’t connect to them all simultaneously. And since we couldn’t call this a “hack” without trying to get around that very specific limitation, we tried to figure out a way to mask the static route requirement from the primary network. So how about VNET Peering?

VNET Peering became generally available in Azure in late 2016. Prior to its debut, the ability to connect any network (VNET or physical) required the use of the VPN gateways. With peering, Azure VNETs in the same region can be connected using the Azure backbone network. While there are limits to the number of peers a single network can have (default is 10, max limit is 50) you can create a pretty complex mesh of networks in different resource groups as long as they are in the same region.

So our theory to test was…. What if we created a series of “proxy” VNETs to connect to the ASA devices using static routing, but then used the VNET Peering feature to connect all those networks back to the primary network?

(Diagram: ProxyNets)

We started out by creating several “proxy” VNETs with a Gateway Subnet and an attached Virtual Network Gateway. For each corresponding physical network, we created a Local Network Gateway. (The word “local” is used here to mean “physical” or on-prem if you were sitting in your DC!) The Local Network Gateway is the Azure representation of your physical VPN device, and in this case was configured with the external IP address of the Cisco ASA.

Then we switched over to the VNET Peering configuration. It was simple enough to create multiple peering agreements from the main VNET to the proxy ones. However, the basic setup does not account for wanting to have traffic actually pass through the proxy network to the remote networks beyond. There are a couple of notable configuration options that are worth understanding and are not enabled by default.

  • Allow forwarded traffic
  • Allow gateway transit
  • Use remote gateways

The first one, allow forwarded traffic, was critical. We wanted to accept traffic from a peered VNET in order to allow traffic to pass through the proxy networks to and from the remote networks. We enabled this on both sides of the peering agreement.

The second one, allow gateway transit, allows the peer VNET to use the attached VNET gateway. We enabled this on the first proxy network agreement to allow the main VNET to direct traffic to that remote subnet beyond the proxy network.

The third one, use remote gateways, was enabled only on the main VNET agreement. This indicates to that VNET that it should use the remote gateway configured for transit.
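If you were scripting this with the Azure CLI instead of clicking through the portal, the two halves of one peering agreement would look roughly like the sketch below. The VNET names are placeholders, both VNETs are assumed to be in the same resource group, and the exact flag names vary a little between CLI versions.

az network vnet peering create --resource-group YourRG --vnet-name MainVNET --name MainToProxy1 --remote-vnet ProxyVNET1 --allow-forwarded-traffic --use-remote-gateways

az network vnet peering create --resource-group YourRG --vnet-name ProxyVNET1 --name Proxy1ToMain --remote-vnet MainVNET --allow-forwarded-traffic --allow-gateway-transit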

(Diagram: PeeringNet1)

Once this was all set up on our first proxy network, it worked! We were able to pass traffic all the way through as expected. However, connecting to just one network with a static route was doable without all the extra pieces. We needed to get a second proxy and remote network online!

We flipped over to the configuration for the peering agreement to the second remote network. There we found we COULDN’T enable “Use Remote Gateways” because we already had a remote gateway configured with the first peering agreement. Foiled! 😦

(Diagram: PeeringNet2)

Using a remote gateway basically overrides all the cool dynamic-ness (not an official technical term) that comes with VNET peering. It’s not possible with the current feature set of VNET peering to mask the static S2S VPNs we were trying to work around. It may be possible if we wanted to explore using a 3rd party VPN device in Azure or consider ExpressRoute, but that was outside of the scope of the project.

Still, it was fun to try to get it to work, and we learned a bunch about some new Azure networking features.  Sometimes the learning is worth losing the battle.

Cognitive Services, IoT Hubs and Azure Functions… on Mars?!?

Are you interested in getting your feet wet with Azure IoT Hubs, Cognitive Services or Azure Functions?  If so, don’t miss this chance to get hands-on with some Mars-themed challenges in a city near you!

MISSION BRIEF
At 05:14 GMT, the Joint Space Operations Network lost all contact with the Mission Mars: Fourth Horizon team as they were conducting routine sample collections on the Martian surface. The cause of the interruption is still unknown.

Your mission is to join us in reestablishing communications between the Earth and Mars.

In this free hands-on event you’ll learn the full capabilities of the Microsoft development platform while sharpening your skills in a fun, fast-paced environment. Meet our experts, develop your skills, and get a chance to put your development abilities to the test.

Microsoft experts will be on hand to take you through the following topics during the event:

  • Azure IoT Hubs – Learn how to establish bi-directional communications with billions of IoT devices.
  • Azure Functions – Dive into the event driven, compute-on-demand experience that extends the existing Azure application platform.
  • Cognitive Services – Build multi-platform apps with powerful algorithms using just a few lines of code.

Sound good? Find a city near you and accept the mission at http://missionmars.microsoft.com

Azure VM Deployments with DSC and Chocolatey

I kinda love deploying servers. Really, I do. It’s been one of the consistent parts of my job as a sysadmin over the years, and generally it has resulted in great amounts of satisfaction. As a technical evangelist, I still get to deploy them all the time in Azure for various tests and projects.  Of course, one of the duller parts of the process is software installation.  No one really enjoys watching progress bars advance when really you want to get to the more useful “configuration” part of whatever you are planning.

Not that long ago, sysadmins utilized a not-quite-magical process of imaging machines to speed this up.  The process still required a lot of waiting.  If one was doing desktop deployments, the process was only made slightly more bearable by looking at the family photos and other trinkets left on people’s desks. Depending on the year one might have been working, this imaging process was also known by the brand name of a popular software package – “ghosting.” If you looked up the definition of imaging or ghosting in the dictionary, you’d find that it basically meant spending hours installing and capturing the perfect combination of software, only to find one or more packages out of date the next time the image was used.

At any rate, fast forward to now and for the most part, we still have to install software on our servers to make them useful. And without a doubt, there will always be another software update. But at least we have a few ways of attempting to make the software installation part of the process a little less tedious.

I’ve been working on a project where I’ve been tackling that problem with a combination of tools for deployment of a “mostly ready to go” server in Azure. The goal was to provide a single server to improve the deployment process for small gaming shops – in particular, allowing the build of a game to be triggered by a commit on GitHub. Once built, Jenkins can be configured to copy the build artifacts to a storage location for access.  For our project, we worked with the following software requirements to support a Windows game, but there is nothing stopping you from taking this project and customizing it to your own needs.

  • Windows Server with Visual Studio
  • Jenkins
  • Unity
  • Git

I’m a big fan of ARM template deployments into Azure, since they can be easily kicked off using the Azure CLI or PowerShell. So I created a basic template that would deploy a simple network with the default Jenkins port open, a storage account and a VM. The VM would use an Azure-supplied image that already includes the current version of Visual Studio Community. (Gotcha: Before deploying the ARM template, confirm that the Azure image specified in the template is still available. As new versions of Visual Studio are released, the image names can change.)

The template also takes advantage of the DSC extension to call a DSC configuration file that installs the additional software and makes some basic OS configuration changes. The DSC extension pulls the configuration package from our GitHub repo, so if you plan to customize this deployment for yourself, you may want to clone our repo as a starting point.

You can find our working repo here and the documentation is a work in progress at the moment.   The key files for this deployment are:

  • BuildServerDSCconfig.ps1.zip
  • StartHere.ps1
  • buildserverdeploy.json

Use the StartHere.ps1 PowerShell file to connect to your Azure account, set your subscription details, create a destination resource group and deploy the template.  If you are more of an Azure CLI type of person, there are equivalent commands for that as well.
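For example, the CLI version is roughly the following; the subscription, resource group and location are placeholders, and newer CLI versions spell the last command az deployment group create.

az login
az account set --subscription "Your Subscription"
az group create --name BuildServerRG --location westus2
az group deployment create --resource-group BuildServerRG --template-file buildserverdeploy.json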

Once you deploy the buildserverdeploy.json template, the BuildServerDSCconfig.ps1.zip is automatically called to do the additional software installations.  Because the additional software packages come from a variety of vendors, the DSC configuration first installs Chocolatey and then installs the community-maintained versions of Jenkins, Unity and Git. (Creating the DSC configuration package with BuildServerDSCconfig.ps1 is another topic; stay tuned.)
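Under the hood, the Chocolatey portion boils down to something like the following; the package names are the community ones and may not match the DSC configuration exactly.

choco install jenkins -y
choco install git -y
choco install unity -y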

Once the deployment is complete, all that remains is for the final configuration to be set up to meet the needs of the developers.  This includes connecting to the proper GitHub repo, providing the necessary Unity credentials and licensing and creating the deployment steps in Jenkins.

Congrats!  You’ve now created an automated CI/CD server to improve your development process.

Hack Day at the SF Reactor – 1/10/17

Start the New Year off right by learning how you can improve your company’s workflow. Sit down with Microsoft technical experts who will help you automate your most repetitive tasks through open source technology. Spend the afternoon with us coding side by side where we will show you how you could automate your build process, create a chat bot to answer frequently asked questions, or scale your existing app through Azure.

As part of the hackfest you will hear from the Microsoft Technical Evangelism team, who will show you how you could:

    • Build great bots that converse where your users are
    • Build an automated, continuous integration Jenkins pipeline to help get your applications to market faster
    • Build Docker containers and scale through Azure app services

1:00 PM-5:00 PM, Microsoft Reactor, San Francisco, CA 94107 — A detailed agenda will be shared prior to the event.

You *must* RSVP by Friday, January 6th to dxhacksfest@microsoft.com.

We’ll help implement your existing project or create a new one. Our goal is to help you with your project so you feel empowered to keep working to refine your scenarios at work.

REMINDER: Bring your laptop and create a production-ready prototype or proof of concept by the end of the day. An Azure subscription is required; we can help you get started with a free trial if needed.