Private cloud is easy

August 17th, 2017 Company News Fresh Perspectives

Blockchain and public ledgers — what is it all about? In one word: "decentralization". That is what the Ethereum community is all about. While some people don't mind, others are concerned about a situation where a handful of major players run their clouds and the rest of the world blindly trusts them with its data.

Nextcloud solved part of this problem in its own way. Want to keep your documents and files out of reach of big corporations? Nextcloud gives you a straightforward solution: use the cloud approach while keeping your data on your home computer.

The next step is secure remote access. There is no point in keeping your files anywhere if you can't access them, right? Fortunately, secure remote access is an easily solvable problem.

Here are the options:

Option 1 (plain old way, without Beame):

Option 2 (keep it up, with Beame):

Welcome to the community of those who fight centralization and make corporate and government spying harder.

We think you’ll find it much easier with Beame.

PKI based identity on a blockchain

August 15th, 2017 Company News Development Notes


The explosion of devices and applications in the IoT world turns every related security issue critical due to its sheer scale. There are numerous IoT manufacturers and application development companies, and thousands of already deployed IoT devices that can potentially be used by attackers, as has recently happened several times.

In this post I will focus on device identity. I will show how to take today's most popular conceptual approach – the blockchain – combine it with the proven technology of the Internet – the PKI – and with that limit access to devices to mutually authenticated sessions, where both identities – of the requestor and of the device – are cryptographically verifiable.

As usual, some technical background first, in the form of general definitions, and then the magical part.

The technology used in this blogpost was developed at Beame.

Register for our webinar organized by GlobalSign and Beame.


There is a bunch of technologies used in this post. Certain things will be much easier to understand if the reader has a basic grasp of how hashing and digital signatures work, but I made an effort to keep it readable for anyone 🙂


A blockchain is a linked (chained) list of records intended to serve a particular purpose. As of today, the main application of this model has been cryptocurrency (Bitcoin), with a slight movement towards more generic use (Ethereum). Records in a blockchain contain data relevant to the host application.

In plainer words: a blockchain is a way to store application-specific data when the data by design can be split into discrete, meaningful chunks. The relationship between consecutive records must be such that removing an element cannot pass undetected. The storage must be decentralized in such a way that all elements of the system carry some version of the blockchain.

To ensure data integrity, the system policy must take care that all system elements converge to the very same blockchain version at some moment in time after each change.

The Ledger

Another word that describes the blockchain structure is a ledger. To make the logical connection: records in the blockchain will be called events, and each event carries some event-related data that allows validation of the event (and of the whole ledger).

Keep in mind that every event must be signed by a valid signer.


Public Key Infrastructure – the ecosystem that allows creation and management of digital certificates (X.509). It is maintained globally by CAs (certificate authorities), which provide the tools to keep the PKI up and running.

PKI + Blockchain = Identity

Let's start by building a real-life example: a house with a couple of computers and mobiles, an electronic door lock, and a couple of security webcams (each device will eventually be set up with its own certificate):

The arrangement is not random – the devices are placed in layers. The topmost layer, L0, is the main home PC; next below is a laptop (L1); the main mobile phone is L2.1 (also used to authorize the identities of the cameras and the lock, which then appear on layer 3); and L3.4 is a guest mobile authorized by L2.2 (e.g. her mom came to visit for the holidays).

The arrows show how upper-layer devices are used to authorize lower-layer devices (e.g. L2.2->L3.4). This structure will later allow us to replace individual identities without needing to replace all identities in the system (i.e. compromise response with limited damage).

To start analyzing trust in such a network, let's define two logical components (remember that we are talking about PKI on a blockchain?):

Certificate Chain

As it sounds – a certificate chain is a logical chain built out of certificates. X.509 can support more than one unique name, and we will use that fact to chain the certificates together.

The CN (common name) field, the domain name, will be the actual device identity (like a username). Using this analogy, the private key, used in the appropriate way, will be the password – with the major difference that it never leaves the device and is only used to prove possession.

The SAN (subject alternative name) will hold a slightly changed CN of the entity that approved, or authorized, the creation of the new identity (the issuance of the new X.509).

Built in such a fashion, the certificates form a tree, where the nodes are the certs and the relationships are created by their content (the CN and SAN fields). The tree root is the uppermost node (L0, or the home PC in our example).
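The CN/SAN chaining can be sketched as follows. Certificates are modeled as plain objects with illustrative names; following the SAN links walks any leaf up to the tree root:

```javascript
// Walk from a leaf certificate to the root by following SAN -> CN links.
// The root (L0) carries no parent SAN in this toy model; a missing
// ancestor means the chain is broken.
function chainToRoot(cert, certsByCn) {
  const chain = [cert.cn];
  let current = cert;
  while (current.san) {
    current = certsByCn[current.san];
    if (!current) return null; // broken chain: unknown ancestor
    chain.push(current.cn);
  }
  return chain;
}
```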

The picture below shows our house in a schematic view, with the devices displayed as certificates carrying some data, to show the bonds between them (L3.3->L2.1->L1->L0):

Consider a connection request from L3.4 to L3.3 (her mom is trying to open the electronic lock): in order to find a common root, L3.3 (the lock) will have to walk up the tree using the SAN fields of L3.4 and L2.2, as well as its own, and the search path will be: L3.4->L2.2->L1<-L2.1<-L3.3

This means that L3.3 is a relative of L3.4 through L1, and with the trust criteria correctly set, the connection can be securely allowed.
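The common-root search above can be sketched the same way (device names are illustrative):

```javascript
// Collect all ancestors of a certificate by following SAN links.
function ancestors(cert, certsByCn) {
  const path = [cert.cn];
  let cur = cert;
  while (cur && cur.san) {
    cur = certsByCn[cur.san];
    if (cur) path.push(cur.cn);
  }
  return path;
}

// First shared ancestor of two leaves, or null if the trees are unrelated.
function commonRoot(a, b, certsByCn) {
  const bAncestors = new Set(ancestors(b, certsByCn));
  return ancestors(a, certsByCn).find(cn => bAncestors.has(cn)) || null;
}
```

With the house example, the lock and the guest mobile meet at L1, so a trust criterion of "common ancestor no deeper than L1" would allow the connection.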

However, there is a catch: how can we be sure that the certificate of L3.4 is indeed signed by the correct key? Or, in other words: if some bad CA signed a certificate with the same CN and SAN fields, how could we tell them apart?

This is where it is time to put it all onto the Ledger and tie it together with authorization proofs.

Authorization Proof Chain

First of all, several basic claims and definitions:

Here we can form a definition of an authorization proof:

A real-life example would be teacher and parent signatures in a homework diary.

Looking at our example: L0 (the PC) authorizes the issuance of an X.509 to L1 (the laptop) and, alongside that, creates a record (event) with the signed name (CN) and public key (generated by L1).

Given that:

  1. L0 is already equipped with an X.509-based identity
  2. All devices are equipped with SW tools allowing communication and the required cryptographic operations (beame-gatekeeper, beame-insta-ssl, beame-authenticator)

The flow will be:

  1. L1 generates an RSA key pair and sends a request for identity creation to L0
  2. L0 generates a valid CN (FQDN) for the new identity
  3. L0 creates an authorization token containing the authorization proof (the signed L1 public key and the new L1 CN)
  4. L0 creates a record for issuing a registration token
  5. L0 sends the token, containing the record from step 4, back to L1
  6. L1 stores the record from the token
  7. L1 generates a certificate request (using its own private key and the token) and sends it to the CA
  8. The CA verifies the signature of L0 and the request
  9. The CA signs the new cert and sends it back to L1
  10. L1 gets the certificate and creates a record following the record of its authorization proof

Now this procedure is repeated for every lower-level device (mobiles and webcams). Keep in mind that L3 devices are authorized by an L2 identity. So by the end of the process, every device owns a valid certificate, connected by its content to its ancestors up to L0, and also holds the proof of its creation, signed by the authorizing entity. Let's call these records "creation events" and draw a simplified diagram of how they are related:

Looking at the diagram, it is easy to spot that by walking up the "create" events we can get from any device to the corresponding "create" event of L0. So it looks just like walking up the certificates – with some significant differences:

That's that. The diagram we just saw is the Event Ledger and, considering all its functionality, it is our Blockchain.

One small problem: each device seems to have a different version of the Ledger. How do we deal with this? Correct – mining. We need some external, highly available entity to synchronize our blockchain. The required availability is a function of how fast the system changes, so for our system it can be rather low (synchronization may even be performed on demand).

For the purpose of this limited use case, we'll assign that responsibility to L0, our main home PC. Any device creating an event will send its full Ledger to the Miner, and the latter will merge it and send it back. So there will be N+1 verifiable versions of the Ledger in the system (N = number of devices).

Now any device that receives a request and, for example, does not find a suitable authorization proof in its local ledger, can ask the Miner to perform the required merge by sending it its own Ledger along with the requestor's. All that is left to do is to verify (e.g. on L3.3) that the merged version is correct, by walking through it and finding the corresponding event.
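A minimal sketch of such an on-demand merge and the follow-up check; the event shape (`id`/`parent` fields) is an assumption for illustration:

```javascript
// The miner unions two ledgers, deduplicating events by id.
function mergeLedgers(a, b) {
  const byId = new Map();
  for (const ev of [...a, ...b]) byId.set(ev.id, ev);
  return [...byId.values()];
}

// A merged ledger is acceptable to a device if the event it cares about
// chains all the way up to the root creation event (L0's).
function reachesRoot(eventId, ledger) {
  const byId = new Map(ledger.map(ev => [ev.id, ev]));
  let cur = byId.get(eventId);
  while (cur) {
    if (cur.parent === null) return true; // root creation event
    cur = byId.get(cur.parent);           // undefined => missing ancestor
  }
  return false;
}
```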

So let's look back at the scenario where L3.4 (her mom) was trying to access L3.3 (open the door lock). We were able to verify on L3.3 the identity of L3.4 by finding the certificate of L0 in their common certificate tree. Now L3.3 can also verify that the X.509 certificate L3.4 owns contains the right public key, by walking the events in the Ledger. Fin.


PKI-based identity on a blockchain is a new security technology developed by Beame.

In this blogpost, we've seen a smart home where the identities of home appliances and computers are members of the global PKI, and their common history is kept on a limited-functionality event ledger – the blockchain.


Is it scalable? – Of course. Define the system functionality, add event types (there are a lot), and use a suitable number of miners.


What happens if my certificate is lost or my device breaks? – That case comes down to the same operations: verifying the cert, finding the corresponding events in the Ledger, and making decisions (a subject for a separate blogpost).


What if my certificate was compromised? – Compromise response is yet another blogpost. Still, with flexible logic and a correct event structure, it is possible to keep the damage very limited in certain compromise scenarios.


Can I use the same identity to connect to my cloud IoT services? – Yes. Want to know more? – Register for the webinar organized by GlobalSign and Beame.

Insta-SSL as a simple way to use RDP / VNC / SSH into LANs

July 11th, 2017 Company News

Tl;dr: Beame offers open-source tools allowing extremely scalable and simple deployment of X.509 certs, as well as a connectivity solution for devices residing on LANs. The certs can be used for many purposes: authentication of an IoT device, 2FA, digital signatures, etc.

Below you will learn about new features we’ve just added to one of our most popular products.

beame-insta-ssl is a tool intended to expose applications running on a LAN to the global network through HTTPS. The tool, as its base service, allows receiving public SSL certs for hashed hostnames under our domain. These certs are fully trusted by mobile devices and browsers alike. The hashed structure represents a trust tree (defined by yourself), to which only you, or those you delegate, can add or modify members.

Access can now be granted to specific resources for specific parts of the tree, and it can be managed.

In this post we present a feature allowing you to establish a TCP tunnel from point A to point B that is pretty much as secure as it can get. What can this be used for? Remote service and support, or connecting devices that use an insecure protocol over WAN infrastructure. Do you have a unique use case for it? Let us know…

Many of you, beame-insta-ssl users, have asked for it, and now it is here. This means you can now use it for secure remote access/support of desktops, IoT devices, and servers alike – yes, you can use it to fix your mom's computer, if you can get her through the installation of nodejs (-. We will make this easy and deployable with one click. I promise.

We are launching new versions of beame-sdk and beame-insta-ssl (the sdk comes in a single installation with insta-ssl). New key features are now being exposed.

TCP-2-TCP Tunnelling

Background: we want to transmit data over the public internet between point A and point B. The data can be any TCP payload, so RDP, VNC, SSH, DICOM, HL7, or anything else fits right in.

Until today there was simply no technical option that didn't either (a) add the remote segment as a network through a VPN connection or (b) put an intermediary that breaks the crypto in between, like all remote support tools do.

We are now proposing a secure, simple, and cheap mechanism to establish a TCP connection between points A and B, with end-to-end security using standard TLS as a transport, over an untrusted network, without the need for a VPN. This is easy, manageable, and cheap.

To understand the core problem in today's access model, consider the following:

Suppose you are deploying a device – a router, a robot – you install it at a customer site, and now you want to access the device for service and maintenance. You have lots of techs, and they come and go; how do you manage this secure vendor access?


Tree Based Trust

Suppose the following model:


This is a layer-structured tree, with L0 (level zero) at its top. The current solution allows any node on the tree to define its trust criteria by choosing the desired top (highest level, up to L0) and depth (number of layers beneath the node).


How does it work?

Connection diagram:

Description of the flow

beame-insta-ssl is started with its destination being localhost:22. Upon launch, beame-insta-ssl registers its endpoint for global access through the beame-router. This endpoint is capable of receiving signed TLS traffic, which it relays over the websocket connection to the beame-insta-ssl on the client side. Then, at some point, the user starts the beame-insta-ssl client, and a connection is created between the two insta-ssl instances, using client auth and the trust-tree logic for authentication. Next, the user starts an SSH client pointing at localhost, on the port to which the client beame-insta-ssl is bound.

Wait, there is a client?

Yes. This use case is specifically intended for insecure protocols, such as DICOM, which simply cannot go over the wire unencrypted. If you want client-less, you use HTTPS from a browser; this feature is for getting data from point A to point B safely. The good thing is that beame-insta-ssl comes with both client and server components in the main repository.


But there is SSH already?

Yes, and this comes as a layer below SSH. SSH requires port 22, which is closed in most enterprise environments. The way this would work with plain SSH is: you would need a server accessible to both sides, and you would have to map each machine to a specific local port. The end users' keys would have to be individually inserted into each of the target machines, and the keys of the target machines onto the proxy. All of this is easily done for one or two boxes; beyond that it becomes a gigantic pain. Beame addresses the issue and secures it at the tunnel level.


What is the big advantage of Beame?

The major differences:

  1. No man in the middle, by design
  2. Easy management of access
  3. Easy tools for managing access and trust, based on crypto
  4. This is as secure as any tunneling technology is going to get


How do you get this to work?

Suppose you are accessing a machine that resides on a LAN over SSH:

** Prerequisites: node 6.9.x installed


  1. Install beame-insta-ssl:
    npm install -g beame-insta-ssl
  2. Register at
  3. Run the command from the email, and voilà – you have an L0 credential
  4. Now generate a registration token for your client device:
    beame-insta-ssl creds getRegToken --fqdn fqdn-you-just-got
  5. Copy the reg token and deliver it securely to the client machine
  6. Start the tunnel host by running:
    beame-insta-ssl tunnel make --dst 22 --proto tcp --fqdn YOURL0 --highestFqdn YOURLX --trustDepth 3 --noAuth false

Let's dig in and understand this:

--dst: the destination port to which traffic will be forwarded (22 is for SSH)

--proto: the protocol (tcp/http/https)

--highestFqdn: VERY IMPORTANT: the FQDN of the most senior ancestor which, if found in the certificate chain of the client cert, will allow it access

--trustDepth: how many generations of child nodes (created below your cred) you will trust

--noAuth: set it to true if you do not require client authentication, or just skip it


  1. If there's no insta-ssl yet, run (skip this step if you have it):
    npm install -g beame-insta-ssl
  2. Then run, using the TOKEN you got in step 5 above (this will create the client cert):
    beame-insta-ssl creds getCreds --regToken TOKEN
  3. Now let's create the actual client to connect to our host:
    beame-insta-ssl client make --dst 1234 --fqdn --src YOURL0
  4. Start the ssh client in a shell:
    ssh -p 1234

At this point your tunnel should be up and running and ready to receive connections.

RDP? Just replace the port number in the example above with 3389 and, instead of the last step, run an RDP client application with a pre-configured username/password.

Such a tunnel runs all the way over standard TLS and is available over the global network to any requestor that holds a valid credential.

And to finish: if you didn't spot it – yes, it supports unauthenticated connections if needed. Just use --noAuth true on the server side, and you can skip the client cert (--fqdn) for the tunnel client and everything related to authentication on the server.

Distributed as open source. Install now from npm. Get the sources on the beameio GitHub.

Using green certificates for your web applications

June 29th, 2017 Guide


This post is intended for anyone who is somewhat concerned about communication security, has a basic understanding of why web applications are called so and of what the cloud is, and can tell an X.509 certificate from a birth certificate well enough to say what each of them is good for.

First of all: HTTP should disappear. I hope this does not sound like an overstatement. Sending unprotected data over the Internet today, at least for people who care, is simply irresponsible. Connecting parties should recognize each other and be sure that only they can peer into whatever they send. And no doubt, the technology is out there, just waiting to be used. Internet browsers do a splendid job of filtering untrustworthy resources. Communities work hard to investigate holes in web application security techniques and publish their findings for anyone who's interested.

Now, we live in a time when everything moves into the cloud. So is it all safe if one accesses a web application over HTTPS (the secure version of HTTP)? What can happen behind the scenes that makes the connection as unprotected as if it were outdated HTTP? How easy is it to get a certificate to run your own web app securely? Let's go through the web app architecture and see.

TLS termination

At the very base of secure communication, cryptography is used to transform data in such a way that only the use of a particular secret key makes it meaningful again.

To secure communication between web browsers and web servers, all involved parties use a low-level protocol called TLS (Transport Layer Security, the successor of the broken SSL) that performs all the cryptographic tasks. Browsers indicate a TLS-protected resource by displaying a green lock to the left of the site address:

When implementing TLS traffic encryption between a client (for example a web browser) and a server (e.g. a web server), one might assume that the encryption would be "end-to-end" between the client and the server. In real life it's rarely so. Common infrastructure layouts include terminating the TLS traffic (opening the encryption) coming from the client (your web browser, for example) before it reaches the servers it was destined for.

In addition, web and/or application servers can have encrypted or unencrypted connections to databases and other backhaul services. We will omit these connections for the sake of simplicity.

"Application" in our case means software that implements some logic: the application gets a request, processes it in some meaningful way, and sends back a response.

TLS termination at the load balancer

Here is one very common TLS termination layout: TLS is terminated at the load balancer. The encrypted traffic starts in the browser and ends inside the load balancer provided by Amazon Web Services (Elastic Load Balancing), Google Cloud Platform (Google Cloud Load Balancing), Microsoft Azure (Application Gateway), or another cloud provider. The traffic is not encrypted between the load balancer and the server that handles the request:

The cloud provider might choose to encrypt that traffic anyway, transparently.

NSA-like folks would probably target the traffic from the load balancer to the servers, as it is the most convenient point to sniff. Even if it's encrypted, breaking this encryption would be a worthy target.

TLS termination on web+application server

A less common cloud-based layout: TLS is terminated on the servers that respond to the requests. Compared to TLS termination at the load balancer, this layout requires a bit more architectural effort, since deploying and updating the X.509 certificates happens on all of the servers (see image below) instead of in one place – the load balancer.

TLS termination on web server, HTTP traffic to application server

Another possible configuration: traffic is encrypted up to the web server, while the latter uses plain HTTP to talk to the applications.

Simple / home server

The simplest configuration: a single server. This might be your server at home. This configuration is not common in the cloud, as it has no redundancy.

HTTP(S) server and application server

Web servers usually have two well-defined components.

In some configurations these components run in the same process:

In other configurations they are separate processes:

When terminating TLS on the server, it will usually be handled in the network processing component of your server: NodeJS, Apache, Nginx, etc.

Why use certificates?

Getting a public X.509 certificate can be challenging. Getting it along with connectivity is twice as appealing. Beame built a line of products that does exactly that: it makes getting X.509 certs easy (while the keys are generated on the device) and makes their CNs (the common name from the cert) routable (resolvable in DNS).

If you use one of the products to obtain certificates and you have decided to terminate TLS on the server (your own application, Nginx, Apache), here are the instructions for using Beame certificates on some common platforms.

Exporting Beame certificates

To proceed with the following instructions, keep in mind that all occurrences of $FQDN below are to be replaced with the FQDN that corresponds to your certificate, and $DIR should be replaced with an actual accessible directory.

To get hold of your first public X.509 certificate (you can create more certificates using the one you already own), you should install one of the Beame products, and beame-insta-ssl is the easiest way to do so:

npm -g install beame-insta-ssl

Register to get a token and proceed with the simple instructions in the registration email.

Eventually it doesn't matter which product you used to get a certificate; beame-insta-ssl will be able to export the credential to a specified directory (just as, for example, Beame Gatekeeper would):

Now let's use it!

Below are two examples of how to implement an actual HTTPS server with common tools, using the freshly exported credentials (certificates and keys).


Nginx tools

There are Beame tools that, along with certificate management, allow building many things, from simple TLS tunnels to a real network with authenticated access.

I will not put a detailed description here, simply because it is well presented on the product pages.

And since the tools are open source, you can also peer into and mess with the code.

VMs, Docker and Beame vs the NSA

June 5th, 2017 Company News


Moving into the cloud is unavoidable. Are we actually able to control where and how our data is transferred and stored? Security agencies all over the world want to collect as much electronic communication as they can – just in case. Is it possible to deny them access? Let's see what software developers and systems engineers can do about that. Apart from the AT&T secret room, what would be another good point to tap into network traffic?

First we will go over the important system components in order to understand the network layout of computer systems.

How a Computer System is built

General principle

A computer system is a compound environment containing:


A server is software that handles requests (e.g. a web server that handles requests from web browsers). A typical conversation between a web browser and a web server looks as follows:

Since a server is software, it needs hardware to run on. The hardware varies widely depending on the use case:

Web server

A specific type of server. This server processes incoming requests and returns web pages. In some systems, the web server talks to a database server and builds the resulting web page based on the data it got from the database. In other systems, the web server talks to an "app server" or to many other "services", and maybe also to one or more databases, in order to gather the data needed to build the resulting page.

App server

In some situations, the core logic resides in an "app server". While handling a web page request, a web server may issue zero to multiple requests to the app server, depending on the request type. After talking to the app server, the web server can return the web page that was built using the information it received.

Containers / Docker

Independent of which hardware the software runs on, it can run inside a software container. A container is something like a sandbox for a fully functional operating system. A running container looks like a process (or a set of processes) to the host system.

Containers provide a basic level of software isolation. The most popular container implementation is Linux Containers, and the most popular software for managing Linux Containers is Docker. Docker provides a convenient way to package and distribute software.

Load balancer

Usually lives in the cloud. This is where requests from web browsers go. A load balancer is software that handles incoming requests and distributes them to the servers that will handle them. It's needed for two main reasons.

Network equipment

Well, this is all hardware.

Common computer system network layouts

Tiered architecture

Also known as Multitier architecture.

  1. Load balancer + web + database. Network traffic:
    • clients (web browsers) to load balancer
    • load balancer to web server
    • web server to database server
  2. Load balancer + web + app + database
    • clients (web browsers) to load balancer
    • load balancer to web server
    • web server to app server
    • app server to database server
  3. Web load balancer + web server + app load balancer + app server + database
    • clients (web browsers) to web load balancer
    • web load balancer to web server
    • web server to app load balancer
    • app load balancer to app server
    • app server to database server

Services architecture

This is what is commonly referred to as a service-oriented architecture.

  1. Load balancer + services + database
    • clients (web browsers) to load balancer
    • load balancer to web server
    • web server to app server

Where would the NSA plug in to sniff?

There is a lot of network traffic going on. The requests arrive via the Internet at a load balancer. The Internet is insecure by its nature, and therefore the traffic from your web browser to the load balancer is usually encrypted end to end (TLS – security at the transport layer). What usually happens next is that the requests coming from your browser are decrypted at the load balancer and passed to the web server unencrypted. In addition, when the web server talks to the app server and/or to the database, important information flows between these points, usually again unencrypted.

It so happens that plugging into a network switch just after the load balancer, or somewhere deeper in the system (less preferable), would expose the actual data.

Let’s Protect!

We already know that traffic from browsers to load balancers is encrypted. The web servers use SSL certificates to identify themselves and allow clients to start encrypted sessions. The basis for that is the widely used X.509 certificate. A bit more about them in another post.

Here we will explore possible use cases for an X.509 certificate installed inside a Docker container or VM. The latter will come equipped with software that allows easy X.509 deployment.

There is a software framework developed by Beame, called Beame Gatekeeper, that does exactly that: it generates keys and certificate requests, receives and stores certificates, and provides tools to manage and use them later.

When configured to run as a system service, the Gatekeeper bootstraps automatically.

What is left to do is to apply the certificate to encrypt the previously open traffic. For this part, the Gatekeeper provides tools to expose servers running on the local network.

Considering that Docker containers and VMs are usually created and destroyed on demand, and often automatically, using Docker or VM images with a preinstalled Gatekeeper solves the problem.

For systems whose architecture is almost static, the solution would be to install the certificates once, without messing with the system images.

x.509 based identity, OS level or dedicated application?

June 4th, 2017 Company News


Using an x.509 certificate as an identity tag makes identity verification independent of its owner's personal features. Everything rests on cryptography, which has proven to be the only unbroken barrier between secured data access and complete uncertainty about when my dear virtual "self" will stop being really mine.

Let's say we agree that x.509-based crypto identity is the future (see my other posts to understand why I insist on that). Now, how and where is the secret part of such a crypto ID generated and stored? Why should one trust that? Well, there are options. In this post we'll discuss two of them. As usual, some technical terms with simple explanations first, and then maybe we'll be able to decide which way of storing a crypto ID is best.


Client-side certificate

Such a credential is usually produced on some independent HSM and provided to the target machine as a PKCS#12 or PFX file (a kind of bundle of crypto data packed into an archive, optionally encrypted and/or protected by a password). Importing such a credential requires root/admin access, and the process of adding it differs for each OS and/or Internet browser.

On successful import, the OS/browser saves the provided credential in its secure storage, which is specific to each OS (the certificate store on Windows, the keychain on macOS or iOS, the credential store on Android, etc.).

It so happens that browsers treat client-side certs differently. For example, Firefox stores them inside its own secure storage, whereas Chrome requires root access to push the cert into the OS keychain. Exactly how a browser/OS stores the certificate is not covered in this post. What should be clear is that adding or replacing such a credential is not a trivial task.

Eventually this credential becomes part of the device's operating system or browser PKI vault, regardless of whether we are on a mobile device, a home PC, or a laptop, and independent of the OS type. It will be accessible to any application that requires it, to be presented whenever that application tries to access a server demanding a client-side certificate as identification.

The regular use case is a browser that, during a connection attempt, prompts the user to choose a client-side certificate from the list of installed ones. This is not user friendly, because the prompt usually shows the certificate's CN field as its identifier, so there is no obvious logical connection between the names in the list and the service one is trying to log into.

Such a certificate authenticates the device it is installed on. It is the best fit for an enterprise laptop, where the service and the corresponding certificate are configured by the IT group. It can also be used on mobile devices, but only for applications that run in a mobile browser.

Dedicated Application certificate

Each installed application has the right to import a valid x.509 certificate for its own needs. The certificate is validated by the OS before being stored in the device's application-specific secure storage.

In this case the keys and the certificate signing request are generated, and the certificate received, on the target device itself in some custom secure procedure, and they are unavailable outside the parent application. Use of such a certificate is limited to the parent application's functionality.

This is more secure than the first option, due to double protection (by the OS and by the application) and the fact that the certificate is used by the same device that created it.

It is the best fit for authenticating independent sessions (such as using a mobile-based credential to open a session on a PC).

Simple to use.

Ideal for crypto ID functionality.

Where to store an x.509 ID: summary

As can be seen, these two types, though formally alike, end up very different in their final form.

I’ll build the summary based on two products: Beame Gatekeeper and Beame Authenticator.

To relate the Gatekeeper to the theme of this post, it can be considered a framework to create and export the PFX, and to verify the credential's validity afterwards. It would be installed on some secure machine and controlled by the administrator. Client-side certificate credentials produced by it are transferred to target devices by one of the available methods: email, a shared drive, or any other custom secure process.

Beame Authenticator is the application that creates and imports an application-specific certificate onto the target device. It contacts some Gatekeeper to register a new credential, and any use of that credential is handled by both the Authenticator and the Gatekeeper.

Both ways of storage have advantages for specific use-cases:

SSO and custom Identity Providers

June 3rd, 2017 Company News

SSO. Introduction

Using the same identity to log into multiple services across the global network is really handy. From the point of view of user experience, this is arguably the best invention in the whole area of access control in the last ten years or more. However, I emphasized “user experience”. With the ability to open multiple services with one login, security became more fragile. Why? The answer is simple: breaking into one account to get access to all services at once is far more appealing. And yes, security is still mostly based on passwords and shared secrets. There is MFA, of course, against unauthorized access. However, MFA is not everywhere yet, and it cannot protect anyone if the bad guys get access to shared keys on the Service Provider side (which is possibly what happened in the recent OneLogin security breach).

In this post I am going to describe how asymmetric cryptography and PKI can help solve the problem of stolen identity. First I'll go through some technical background, and towards the end I'll bring in a new player on the SSO field: the x.509 certificate as identity proof.

Some definitions first


Single Sign On is a technology that defines how to manage access control across multiple independent systems/services, using a single interaction to identify the connecting persona, without bothering the user again on subsequent requests to other services. From here on we'll be talking about authentication, not authorization; in other words, we care about who the user is, not what he is allowed to do.

Common technologies that allow SSO implementation:

Kerberos based

On initial access the user provides an identifying credential and gets a Kerberos Ticket-Granting-Ticket (TGT) in return. The TGT allows acquisition of a Service Ticket, which grants access to a particular service that requires authentication.

Generally limited to enterprise LANs or other Kerberos networks.


Smart card based

The user plugs in a secure dongle to provide a credential when requested by the Service Provider. It stays plugged in once the secure session has started and is accessed by supporting services during the authentication procedure.

Common approaches:

Old-fashioned: username/password

Modern: x.509 certificate provisioned to the dongle through an HSM

A combination of the two

Secure. Cumbersome. Not fancy.

Integrated Windows Authentication

Uses the native security features of Windows clients and servers. It is a set of protocols, listed in priority order in the corresponding IIS setting. It can act as Kerberos-based SSO by acquiring a TGT if a Kerberos provider is properly configured and available; if the Kerberos procedure fails or is unavailable, the next option is picked, e.g. NTLMSSP (a Microsoft binary messaging protocol utilizing the NTLM challenge-response scheme).



Security Assertion Markup Language (SAML) is a form of federated-identity-based authentication intended for use over the internet.

It is an XML-based open standard that defines the format of information exchange between Identity Providers (IDP) and Service Providers (SP). Messages are transferred through the user agent (the browser); in some cases messages between the IDP and the SP can be sent directly.

As it sounds, the IDP does not host the content the user is trying to access, but it has the means to verify the user's identity. Conversely, the SP delegates identity verification to the IDP, and the content is presented at the end of the SAML identity validation process.

SSO usually comes only as an additional way to log in. Username and password are still there and can be used.

SAML SSO implementation requires proper configuration of the IDP on the SP side and of the SP on the IDP side. Initial trust between IDP and SP is established in an SP-defined proprietary process (such as logging in with a valid username/password and configuring the corresponding settings of the SP application). The user signs up with the same ID on the IDP and on the SP, and that ID is provided in the SAML Response sent from the IDP to the SP once the IDP has validated the user's identity in some custom procedure.

Considering that we are talking about authentication over the internet, SAML will be the SSO technology of choice for this post.

SAML based SSO how-to

To get a general understanding of how it works, consider the following login flows:

Functionally there is a session ID somewhere to allow SAML Single Logout, but the latter is out of scope for this discussion: its message exchange is very similar to that of login, and it has nothing to do with the initial authentication process that takes place on the IDP.

It is easy to spot that the IDP authentication process is left entirely unspecified. What the standard defines is the format of SAML request/response messages and the bindings (the way messages are sent: POST, redirect).
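To make the "format, not process" point concrete, here is a minimal sketch of the SAML 2.0 AuthnRequest an SP sends to an IDP, built with the Python standard library. The entity IDs and URLs are hypothetical placeholders, and a real request would also carry a signature or be signed at the binding level.

```python
# Minimal sketch of a SAML 2.0 AuthnRequest (illustrative values only).
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_authn_request(request_id, sp_entity_id, idp_sso_url):
    """Build the XML an SP sends (via the user agent) to the IDP."""
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": request_id,
        "Version": "2.0",
        "IssueInstant": "2017-06-03T12:00:00Z",
        "Destination": idp_sso_url,
        "ProtocolBinding": "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST",
    })
    issuer = ET.SubElement(req, f"{{{SAML}}}Issuer")
    issuer.text = sp_entity_id
    return ET.tostring(req, encoding="unicode")

xml_text = build_authn_request(
    "_abc123", "https://sp.example.com/metadata", "https://idp.example.com/sso")
print(xml_text)
```

Note that nothing in this message says how the IDP must authenticate the user; that gap is exactly where a stronger identity proof can be plugged in.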

Historically, most existing IDPs use the same LDAP schema for user identification. This leads to the same security issues as regular old-fashioned login, where authentication is handled by the SP.


x.509 as a proof of identity

What would be really nice is to make user identity a crypto challenge, like SSH: not just any challenge, but one where the secret is known only to the identity owner.

There is a well-known way of doing that: the RSA asymmetric cryptosystem, which underlies the popular x.509 certificate. In a few words: there are two keys; one is called private and is usually stored in a secure vault, the other is public and available to anyone. The public key is used to verify that a certain data manipulation was performed with the correct private key, or to encrypt data that only the private key holder can open.
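The private/public relationship can be seen in a toy, textbook-sized RSA example. This is for illustration only: real x.509 keys are 2048+ bits and use padded signature schemes, never raw RSA on bare numbers.

```python
# Toy RSA with textbook-sized numbers, for illustration only.
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

message = 42               # a number below n

# "Sign" with the private key; anyone can verify with the public key.
signature = pow(message, d, n)
recovered = pow(signature, e, n)
print(recovered == message)   # True: the correct private key was used
```

Whoever holds d can produce signatures that anyone holding (n, e) can verify; that asymmetry is exactly what lets identity be proven without ever revealing the secret.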

Let's recall what an x.509 certificate is: a digital document that ties together some identity and a public key. The owner of the corresponding private key is considered the x.509's owner. In other words, an x.509 certificate proves the identity of a certain private key holder, and that identity is written into the x.509 structure.

One can create x.509 certificates at leisure; these are called self-signed and will not be accepted by standard networking software (browsers, CA APIs, etc.). They are usually used in private networks where trust is built on internal policy, and verification and management are handled by private methods.

Publicly trusted x.509 certificates are produced by Certificate Authorities (CA). They form the basis of what is called the PKI, the foundation of secure communication over public networks. Recognized by common networking software, they allow a user to rely on standard certificate validation means like CRL and OCSP (both provide the revocation state of a certificate). Using a public certificate allows its validation on any public network.
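You can see this built-in trust on your own machine: Python's default SSL context loads the operating system's trusted root CA store, the same anchors a browser consults. The exact counts vary from system to system.

```python
# Inspect the trust anchors the local system ships with.
import ssl

ctx = ssl.create_default_context()
stats = ctx.cert_store_stats()   # e.g. {'x509': 140, 'crl': 0, 'x509_ca': 140}
print(stats)
```

Every certificate chain that ends at one of those preinstalled roots is accepted automatically; everything else, including self-signed certificates, is rejected unless explicitly trusted.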

What needs to be done to make x.509 serve a need other than domain identity? Or, to put it another way: how can one use PKI to create a verifiable identity for a SAML Identity Provider?

It will require the ability to create such an identity at scale, connect it to some real ID, deliver it to the target device, and manage the PKI for the newborn crypto-ID.

The solution

There is a new breed of identity: the cryptoID. Beame has developed a set of open source products that allow exactly what is needed to make a user crypto-verifiable.

Getting a cryptoID onto a mobile device is a matter of seconds. Keys are generated on the device; the ID is a meaningless FQDN with a corresponding x.509 certificate, ready to be verified by anyone who requests it.

Using the cryptoID, the user can make the initial password really tough and just keep it somewhere; it won't be needed often.

Such an identity cannot be broken easily: it would require stealing the mobile device, then defeating both the system lock and the app lock, all within a few attempts.

Five retries are given to open the app; after that it deletes the secure container. The user will then have to go and create a new cryptoID from the Identity Provider. No harm done.

One can say: what if… Yes. But once the user finds out the phone is stolen, the current cryptoID is easily replaced by revoking the old x.509 and creating a new one.

And what if the user loses the email account? Well, as long as the mobile is around, all is OK. If both are gone (email and mobile), just go back to the Service Provider and recover from there.

And for the happy ending: here's a demo of how SSO works with Beame, the way any SSO Identity Provider should work.

Authentication in WLAN. How to make Client recognizable

June 1st, 2017 Company News


WLANs. These networks have historically been problematic for both sides: the provider and the client.

Due to the nature of WiFi, there is not much one can do in a public environment to ensure client security when connecting to a public AP. Even if the AP can use strong security (e.g. PEAP with TLV crypto binding), it will not help if the client can be convinced by a MitM to use weaker security. Since public WLANs are intended to serve virtually anyone, they do not limit clients to those who support a particular type of security (down to some reasonable limit, like EAP-MSCHAP or similar). The protection of the client's link to the AP is usually allowed to fall back to a weaker default, so the client is always at risk of connecting to some rogue AP. The safest practice here is not to provide sensitive data during connection establishment, so that an attacker would gain nothing from the attack. All this is more dangerous for enterprise WLANs, where the user account can be synchronized with AD or corporate application logins, and less so for public WLANs.

On the other side, the real AP owner of, say, a regular public place like a hotel is interested in limiting access to its customers only, and for a limited time, out of a desire to provide better service.

The usual practice is setting a password, which is provided to the customer along with the order, the room key, or some other standard interaction.

Passwords are relatively weak. Setting a strict password policy makes them harder to maintain and less friendly to use. And still, passwords are phished, broken by sniffing and offline dictionary attacks, or simply stolen.

So, how can a client be allowed access to a particular AP for a limited timeframe while still having a good user experience?

In other words, how can decision making infrastructure be made reliable, fast and scalable?

To get to a possible solution, I will provide some technical background first, and then it will be easier to prove the point.



Authentication, Authorization and Accounting (AAA) is a framework defining the criteria for access to network resources.

The term is used for networking equipment. A supporting device may even be called an AAA server, where each “A” represents a set of configurations.



RADIUS is a standard AAA client-server network protocol, running on the AP backhaul, that manages network access in a client-initiated flow:

  1. access-request
  2. access-accept/reject/challenge

A typical network that uses RADIUS for AAA comprises a Network Access Server (NAS), a RADIUS server, and a custom authentication process intended to return a Success/Fail response to the RADIUS server for the provided user authentication data.
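The responses in the flow above are themselves cryptographically bound to the request. A sketch of the Response Authenticator from RFC 2865, which the RADIUS server computes as MD5 over the response header, the Request Authenticator, the attributes, and the shared secret (all the concrete values below are made up for illustration):

```python
# Sketch of the RADIUS Response Authenticator from RFC 2865:
# MD5(Code + Identifier + Length + Request Authenticator + Attributes + Secret)
import hashlib
import os
import struct

def response_authenticator(code, identifier, attributes, request_auth, secret):
    length = 20 + len(attributes)        # 20-byte RADIUS header + attributes
    header = struct.pack("!BBH", code, identifier, length)
    return hashlib.md5(header + request_auth + attributes + secret).digest()

request_auth = os.urandom(16)            # sent by the NAS in the Access-Request
auth = response_authenticator(
    code=2,                              # 2 = Access-Accept
    identifier=7,
    attributes=b"",
    request_auth=request_auth,
    secret=b"shared-secret",
)
print(len(auth))   # 16-byte MD5 digest
```

A NAS that knows the shared secret can thus verify that an Access-Accept really came from its RADIUS server and matches the request it sent.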



A Network Access Server is usually a device that has a PPP interface towards clients and a RADIUS client to communicate with the RADIUS server (all of which can physically be installed on one machine). So functionally, authentication-wise, the NAS can be considered a bridge for AAA network access management.



PPP is a widely used point-to-point protocol that defines a direct connection between two nodes. It has its own security extensions (EAP, PEAP, etc.) that support a variety of authentication options. EAP is responsible for requesting an authentication response from the client once the link between peers has been established. In public open networks this is frequently bypassed (the standard allows it), and the client identity is considered valid by default.


Typical open WLAN access flow



So, in order to satisfy the requirement (a reliable way of identifying a client for a limited time), the system should create a logical connection between the client device and the authentication service:


There are basically two options. The proposed, completely new way of managing identity: use a unique, meaningless FQDN with a corresponding secret key and x.509 certificate as identity proof.


Here is how it works for both options:


Why not use a self-signed certificate? Because the certificate must be valid, verifiable, and publicly trusted; otherwise the mobile OS simply will not allow it into the secure storage.

The proposed solution allows creating a strong, manageable client identity in a minute, with easy integration into existing networks and a great user experience.

VPN is great … workaround

May 28th, 2017 Company News

There is no simple solution that allows access to applications residing inside the corporate network from outside the company premises, while the need to access such applications (like corporate email, or data hosted on a corporate PC) is undeniable. In this post we will give an overview of existing solutions, and at the end we will present the Beame Gatekeeper as a tool for making corporate applications accessible while keeping them protected.


Protecting corporate applications

There are quite a few aspects to be addressed when protecting a software application.

Among other complementary measures, common practice for protecting business (web-)applications on corporate network is to limit the network access: only computers in the office’s network are allowed to access corporate applications. In theory, this blocks potential attackers as only authenticated employees have the access.

How can these applications be used by an employee who is at home? The employee can't be inside the corporate network then, right? There is a workaround for that: it's called VPN.

What is VPN?

VPN is software that secures traffic between two communicating sides. Each side can be a single computer or a computer network. A typical VPN bridges a corporate office network and an employee's laptop. When connected via VPN, the employee can access applications that reside on the corporate network, because the employee appears as if he/she were on the corporate network.


The distinction of “on corporate network” vs “outside of corporate network” should not be a factor for any security decision.

Some years back the network security perimeter was clearly defined and aligned with physical limits of corporate campuses.

Today, due to distributed workforces, mobile workforces and cloud usage the network perimeter is not only outside the premises, it’s also less controllable.

VPN for users outside the corporate network

Employees often work remotely and also access corporate resources via mobile phones. This does not fit the network-segmentation security model. VPN is a workaround that makes remote users appear as if they are on the corporate network.

The right thing to do is to review and replace the security model as it does not meet the requirements anymore. Enterprises should follow Google’s lead regarding security: get rid of network segmentation based model.

VPN can be OK as a temporary measure while the underlying system is replaced.

VPN requires complicated software to be installed on the connecting device. This software must interfere with networking; that is exactly the purpose of a VPN: mangling network traffic so that the device appears to be on the corporate network (in our case).

“Better performance, more reliable connections, and improved ease of use topped the list of most-wanted improvements.” —

Corporate network is not really secure

Spear phishing, viruses and malware which are designed to penetrate the network perimeter are getting better. So are the defences. That’s a race. As in any race the attackers lead from time to time, even if it’s for a short period of time. So corporate network is very likely to be penetrated at some point.

If access decisions are based on the fact that the requesting entity is “inside the corporate network”, knowledge of how these decisions are made helps attackers proceed with their next steps.

External resources about outdated security model

For those eager to read more about why the VPN approach is bad, some links are below. Skip to the last chapter if you are more interested in the solution than in the problems.

The solution

There is an open source framework called Beame Gatekeeper that allows easy management of credentials. Based entirely on cryptography, the Gatekeeper uses mobile-phone-based identity and turns access control from centralized to distributed.

Let's summarize all of the above and define the criteria for creating a secure, VPN-like network:

Managing StrongSwan VPN with Beame Gatekeeper

May 24th, 2017 Guide



Cryptography is used to securely communicate between participants. Encrypted data can be sent over insecure connections because it is very hard (practically impossible) to decrypt this data without having a digital secret key.

There are two main types of cryptography:

Shared key cryptography

Also known as symmetric-key cryptography. Both sides have the same, shared key.

Pros: simple.

Cons: does not allow scenarios of any complexity, as trust cannot be delegated. There is no clear way to revoke trust after the key is compromised; one has to explicitly reconfigure the peers.
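The shared-key idea can be sketched in a few lines: both sides hold the same secret and can verify each other's message tags, but anyone who wants to join must be given that same secret, so trust cannot be delegated. The key and message below are illustrative placeholders.

```python
# Minimal shared-key (symmetric) example using an HMAC message tag.
import hmac
import hashlib

shared_key = b"the-one-key-both-sides-hold"

def tag(message: bytes) -> bytes:
    return hmac.new(shared_key, message, hashlib.sha256).digest()

# Side A tags a message; side B, holding the same key, verifies it.
message = b"open the tunnel"
t = tag(message)
print(hmac.compare_digest(t, tag(message)))   # True
```

Note that revoking one participant means rotating `shared_key` on every remaining peer, which is precisely the operational weakness listed above.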

Public key cryptography

Also known as asymmetric cryptography. Each side has its own key, which has two parts: private and public. The private key should never leave the device it was generated on; it allows decryption of data that was encrypted for it using the public part. The public part can be sent over the internet.

Pros: trust can be delegated by signing additional certificates and validating the signature by other participants. Certificates and therefore trust are limited in time and can be revoked. This allows flexibility.

Cons: requires infrastructure.

What is VPN?

VPN is software that secures traffic between two communicating sides. Each side can be a single computer or a computer network. A typical VPN bridges a corporate office network and an employee's laptop. When connected via VPN, the employee can access office servers that reside inside the office network. Typical servers on a corporate network are an Exchange mail server, file sharing servers, etc. Corporate servers are usually not exposed directly to the internet for security reasons. A VPN requires both sides to be authenticated and authorized. In some cases this is done with usernames and passwords, but cryptographic certificates are considered the better alternative.

What is cryptographic certificate?

As mentioned above, information can be encrypted using the public part of a cryptographic key. But how is it possible to determine that the key belongs to whoever it claims to belong to? That's what certificates are for. A cryptographic certificate binds two things together: an identity (such as a domain name) and a public key. And how can one trust a certificate? That's what PKI is for.

What is PKI?

Public key infrastructure is a way to work with certificates and ensure trust chains. Trust chain starts at a Certificate Authority (CA), one of many world-wide trusted companies. CA certificates are typically installed on computers (PCs, Macs, servers) during operating system installation. This allows your browser to decide which sites are trusted when surfing to HTTPS sites.

Each CA can be imagined as the top of a pyramid. Certificates signed by the CA are automatically trusted by PKI-implementing software; these can be imagined as the pyramid level just below the top. Certificates signed by certificates that are signed by the CA are also trusted, and so on, until we reach the bottom of the pyramid. Certificates at the bottom are the ones used to authenticate sites (Google, Facebook, etc.) and, less frequently, people. The common height of such pyramids is 2 to 4 levels, including top and bottom.
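The pyramid walk can be modeled as following issuer links until a known root is reached. This is a toy model with made-up names, not real certificate validation (which also checks signatures, validity periods, and revocation):

```python
# Toy model of the trust pyramid: a chain is trusted when it ends at a root.
trusted_roots = {"ExampleRoot CA"}

# subject -> issuer, a 3-level pyramid: leaf -> intermediate -> root
issued_by = {
    "www.example.com": "Example Intermediate CA",
    "Example Intermediate CA": "ExampleRoot CA",
    "ExampleRoot CA": "ExampleRoot CA",          # roots are self-signed
}

def chain_to_root(subject, max_depth=4):
    """Walk issuer links until a trusted root (or give up)."""
    chain = [subject]
    for _ in range(max_depth):
        if subject in trusted_roots:
            return chain
        issuer = issued_by.get(subject)
        if issuer is None or issuer == subject:
            return None                          # dead end: untrusted
        subject = issuer
        chain.append(subject)
    return None

print(chain_to_root("www.example.com"))
```

A private CA is simply a root that sits in nobody's `trusted_roots` set until it is manually installed, which is where the problems of the next section begin.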

Current problems

Private CA insecurity

A private, also called “self-signed”, CA is yet another pyramid top, except that in this case it does not belong to a publicly trusted CA company.

A typical installation of StrongSwan VPN (and of some other VPNs) relies on a private CA. This CA is used to issue (sign) both the VPN server certificate, used to authenticate the VPN server, and the VPN clients' certificates, used to authenticate the VPN clients.

iOS devices such as the iPhone require the VPN server's certificate to be trusted. Trusted in this context means that there is a trust path from some publicly trusted CA (a pyramid top) to the given certificate. In the case of a private CA, there is no publicly trusted CA up the chain. The workaround is to install the private CA certificate on the device alongside the trusted public CA certificates.

Installing a private CA on a device is dangerous. If the private CA certificate is compromised, all encrypted communications of the device can be listened to (decrypted) and tampered with. The exception to this horrible rule is the few more security-aware companies that use special techniques to protect themselves and their clients. Compromise of a CA certificate means that fake certificates can be issued for any site the device tries to access, allowing the attacker to decrypt and tamper with traffic. Compromise of a private CA is more likely to happen, as it is not handled by companies whose entire orientation is PKI and security. Such a compromise is also more likely to go undetected for a longer period of time.


For proper credential revocation checks, an additional setup is required, which is sometimes neglected and then done in a hurry when the first revocation becomes necessary.

How Beame solves VPN certificates management problems

See our video: Gatekeeper managing StrongSwan.

Certificates manager

Beame Gatekeeper with the StrongSwan plugin can be used instead of private CA management. In contrast to a private CA, certificates issued via Beame Gatekeeper are signed by a publicly trusted CA. This means that no additional (private) CA certificate needs to be installed on the iOS device.

Beame Gatekeeper allows creating a certificate and a VPN settings file for an iOS device. Setting up StrongSwan VPN on an iOS device is now as easy as scanning the QR code shown in the web administration interface, plus a few more touches on the mobile device.

Beame Gatekeeper updates the StrongSwan configuration files once per minute, adding the cryptographic identities of new users and removing revoked identities. It also disconnects any currently connected users whose identities have been revoked.
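For reference, a certificate-authenticated StrongSwan connection of this kind is typically described by a few lines of ipsec.conf. The identity, file names, and address pool below are hypothetical, and the exact settings Gatekeeper writes may differ:

```
# /etc/ipsec.conf (sketch; identities and file names are illustrative)
conn gatekeeper-users
    auto=add
    keyexchange=ikev2
    leftcert=serverCert.pem      # publicly trusted server certificate
    leftid=@vpn.example.com
    rightauth=pubkey             # clients authenticate with their certificates
    rightsourceip=10.10.10.0/24  # address pool handed to connecting clients
```

Regenerating a block like this per user, and removing it on revocation, is the kind of bookkeeping the Gatekeeper's once-per-minute update automates.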


Correct credential revocation requires no additional effort: hitting “Revoke” in Beame Gatekeeper's web administration interface is enough to reconfigure the VPN service.