December 4, 2020
In our previous post we spoke about the potential solutions for deploying serverless offerings with hardware acceleration support. With the increasing adoption of the serverless and FaaS paradigms, providers will need to offer some form of hardware acceleration semantics.
For some time now, Amazon has identified this as a “compelling use case” for their AWS Firecracker hypervisor, which powers the AWS Lambda service. What is more, they note that traditional techniques for GPU support in VMs, such as GPU passthrough, come with limitations and significantly increase the attack surface of the hypervisor.
June 1, 2020
The debate over how to deploy applications, as monoliths or microservices, is in full swing. Part of this discussion relates to how the new paradigm incorporates support for accessing accelerators, e.g. GPUs and FPGAs. Such support has been available to traditional programming models for the last couple of decades, and its tooling has evolved to be stable and standardized.
On the other hand, what does it mean for a serverless setup to access an accelerator?
February 26, 2020
Earlier this month we visited FOSDEM, an absolutely open and free event for developers, open-source vendors and enthusiasts to meet, share their ideas and news, and discuss the latest in open source. Talks at FOSDEM are usually organized within several sections: Keynotes, Main tracks, Developer rooms and Lightning talks.
Some people from our team had visited before, but for most of us first timers it was really exciting. Packed keynotes, busy dev rooms, people chatting outside, over coffee, beer, or snacks!
October 21, 2019
Spawning applications in the cloud has been made super easy by container frameworks such as Docker. For instance, running a simple command like the following
docker run --rm -v /path/to/nginx-files:/etc/nginx nginx
spawns an NGINX web server, provided you customize the config files and the actual HTML files to be served.
This process inherits NGINX’s stock Docker Hub rootfs and spawns it as a Docker container on a generic Linux container host.
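As a minimal sketch of the bind-mount approach above, the snippet below prepares a throwaway config directory that could be mounted into the stock NGINX image. All paths and the `/tmp/nginx-demo` location are hypothetical; adapt them to your own layout.

```shell
# Prepare a minimal NGINX config and an HTML file to serve
# (hypothetical paths for illustration).
mkdir -p /tmp/nginx-demo/html
cat > /tmp/nginx-demo/nginx.conf <<'EOF'
events {}
http {
    server {
        listen 80;
        root /usr/share/nginx/html;
    }
}
EOF
echo '<h1>It works</h1>' > /tmp/nginx-demo/html/index.html

# With Docker installed, the directory can then be mounted over the
# image defaults, so the container serves the custom index page:
#   docker run --rm -p 8080:80 \
#     -v /tmp/nginx-demo/nginx.conf:/etc/nginx/nginx.conf:ro \
#     -v /tmp/nginx-demo/html:/usr/share/nginx/html:ro \
#     nginx
```

Mounting individual files read-only (`:ro`) keeps the container's rootfs untouched while still letting you swap in your own configuration.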
October 21, 2019
Since we got our hands on the new Raspberry Pi 4, we have been exploring how various virtualization technologies behave on the board. The first thing we tried was running Nabla on it and seeing how it compares to native KVM.
The next thing we wanted to try was Firecracker, the well-known micro-VMM that AWS Lambda & Fargate run on. To our disappointment, Firecracker was not yet running on the RPi4, so we started looking into coding up the necessary changes :)