Moving to the cloud is unavoidable. But are we actually able to control where and how our data is transferred and stored? Security agencies all over the world want to collect as much electronic communication as they can. Just in case. Is it possible to deny them access? Let’s see what software developers and systems engineers can do about that. Besides the AT&T secret room, what would be another good point to tap into network traffic?
First we will go over the important system components in order to understand the network layout of a typical computer system.
A computer system is a compound environment containing the components described below.
A server is software that handles requests (e.g. a web server that handles requests from web browsers). A typical conversation between a web browser and a web server is a request from the browser followed by a response from the server.
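A minimal, concrete version of that conversation, using only Python’s standard library. The handler below is an illustrative stand-in for a real web server, and the `http.client` call plays the browser’s role; the page content and port choice are assumptions for the sketch.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    """Toy web server: answers every GET with a tiny HTML page."""
    def do_GET(self):
        body = b"<html>Hello</html>"
        self.send_response(200)                       # writes "200 OK" status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                     # keep the example quiet
        pass

# Start the "web server" on any free local port, in a background thread.
server = HTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: send a GET request and read the response.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
status, reason = resp.status, resp.reason
page = resp.read().decode()
print(status, reason)   # 200 OK
print(page)             # <html>Hello</html>
server.shutdown()
```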
Since a server is software, it needs hardware to run on. The hardware varies widely depending on the use case.
A specific type of server. This server processes incoming requests and returns web pages. In some systems, the web server talks to a database server and builds the resulting web page from the data it got from the database. In other systems, the web server talks to an “app server” or to many other “services”, and possibly also to one or more databases, in order to gather the data needed to build the resulting page.
In some situations, the core logic resides in an “app server”. While handling a web page request, a web server may issue zero or more requests to the app server, depending on the request type. After talking to the app server, the web server can return the web page that was built using the information it received.
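The flow above can be sketched as follows. `app_server_query`, the request types, and the page format are all hypothetical names invented for this sketch; in a real system that call would be an HTTP or RPC request to another machine.

```python
def app_server_query(request_type: str) -> dict:
    """Stand-in for the app server: returns the core data for a page."""
    if request_type == "profile":
        return {"user": "alice", "items": 3}
    return {}

def build_page(request_type: str) -> str:
    """Web server handler: zero or more app-server calls per request."""
    if request_type == "static":
        # Static page: zero app-server requests needed.
        return "<html>static page, no app server needed</html>"
    # Dynamic page: one app-server request in this case.
    data = app_server_query(request_type)
    return f"<html>user={data['user']} items={data['items']}</html>"

print(build_page("static"))
print(build_page("profile"))
```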
Independently of which hardware the software runs on, it can run inside a software container. A container is something like a sandbox for a fully functional operating system. A running container looks like a process (or a set of processes) to the host system.
Containers provide a basic level of software isolation. The most popular container implementation is Linux Containers, and the most popular software for managing Linux Containers is Docker. Docker provides a convenient way to package and distribute software.
Usually lives in the cloud. This is where requests from web browsers go. A load balancer is software that receives incoming requests and distributes them across the servers that will handle them. It is needed for two main reasons: scalability (spreading the load across multiple servers) and availability (routing around servers that fail).
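The distribution step can be sketched with the simplest strategy, round-robin. The backend names here are made up; real load balancers (e.g. HAProxy, NGINX, or cloud load balancers) add health checks, session affinity, and TLS termination on top of this core idea.

```python
import itertools

# Hypothetical pool of web servers behind the load balancer.
backends = ["web-1:8080", "web-2:8080", "web-3:8080"]

# Round-robin: each call picks the next backend, wrapping around.
pick = itertools.cycle(backends).__next__

# Six incoming requests get spread evenly across the three servers.
assignments = [pick() for _ in range(6)]
print(assignments)
# → ['web-1:8080', 'web-2:8080', 'web-3:8080',
#    'web-1:8080', 'web-2:8080', 'web-3:8080']
```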
Well, this is all hardware.
Also known as Multitier architecture.
This is what is commonly referred to as Service-oriented architecture (SOA).
There is a lot of network traffic going on. The requests arrive via the Internet to a load balancer. The Internet is insecure by nature, and therefore the traffic from your web browser to the load balancer is usually encrypted (TLS, i.e. security at the transport layer). What usually happens next is that the requests coming from your browser are decrypted at the load balancer and passed to the web server unencrypted. In addition, when the web server talks to the app server and/or to the database, important information flows between these points, again usually unencrypted.
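On the client side, the browser-to-load-balancer leg looks like this with Python’s stdlib `ssl` module. This is a sketch of what TLS in transit guarantees; it does nothing for the plaintext hops behind the load balancer.

```python
import ssl

# A default client context verifies the server's X.509 certificate
# against the system's trusted CAs and checks the hostname.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# To actually connect, you would wrap a TCP socket, e.g.:
#   tls_sock = ctx.wrap_socket(sock, server_hostname="example.com")
# Everything sent over tls_sock is encrypted only up to the TLS endpoint,
# which, in the setup described above, is the load balancer.
```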
It so happens that plugging into a network switch just after the load balancer, or somewhere deeper in the system (less preferable), would expose the actual data.
We already know that the traffic from browsers to load balancers is encrypted. The web servers use SSL certificates to identify themselves and allow clients to start encrypted sessions. The basis for that is the widely used X.509 certificate format. A bit more about them in another post.
Here we will explore possible use cases for an X.509 certificate installed inside a Docker container or a VM. The latter will come equipped with software that allows easy X.509 deployment.
There is a software framework developed by Beame.io, called Beame Gatekeeper, that does exactly that: it generates keys and certificate requests, receives and stores certificates, and provides tools to manage and use them later.
When configured to run as a system service, the Gatekeeper bootstraps automatically.
What is left to do is to apply the certificate to encrypt previously unencrypted traffic. For this part, the Gatekeeper provides tools to expose servers running on the local network.
Considering that Docker containers and VMs are usually created and destroyed on demand, and often automatically, using Docker or VM images with a preinstalled Gatekeeper solves the problem.
For systems where the architecture is almost static, the solution would be to install certificates once, without touching the system images.