In our previous posts, we saw how to build the toolchain for a Nabla container, and how to use that toolchain to run applications as unikernels with Nabla.
In this post, we will focus on the steps needed to run something actually useful with Nabla. More specifically, we will go through all the steps for building Python3 into a Rumprun unikernel, suitable for running in a Nabla container, and for cooking a filesystem that includes the Python script we wish to run inside it.
We will be using the rumprun-packages git repository, which contains a collection of frameworks and applications that we can build on top of the Rumprun infrastructure.
We have started updating rumprun-packages so that we can build and bake applications using the recent updates done by the Nabla people for Solo5 support in Rumprun, and more specifically the spt and hvt Solo5 tenders. This is work in progress, and we will be porting more packages from rumprun-packages to work on top of the upstream toolchain, both for x86 and aarch64.
Building Python3.5 as a unikernel
Once we have built the rumprun toolchain we can build and bake Python3.5 in a Rumprun unikernel following these steps:
git clone https://github.com/cloudkernels/rumprun-packages.git
cd rumprun-packages

# Setting up the rumprun-packages build environment
cp config.mk.dist config.mk

# If we are building for aarch64 we should also run:
echo "RUMPRUN_TOOLCHAIN_TUPLE=aarch64-rumprun-netbsd" >> config.mk

cd python3

# Build for the spt target
make python.spt

# Build for the hvt target
make python.hvt
Packing our Python script
How do we pack our Python script so that we can run it within the unikernel, i.e. do the equivalent of python my_script.py?
Remember, in the world of unikernels we do not have access to a terminal; our application is our Linux box / VM / container.
We have two problems to solve:
- Make our script available within the unikernel
- Prepare our environment with all the package dependencies our script needs in order to execute.
We will solve these issues by packing our script along with all its dependencies inside a disk image which we will later provide to the unikernel at run time.
Here’s how we do this:
# We're still under rumprun-packages/python3.

# This is where we will install the Python environment and our script
mkdir -p python/lib

# Our previous step has fetched all the basic Python environment
# under: ./build/pythondist/lib/python3.5
cp -r build/pythondist/lib/python3.5 python/lib/

# We add the script to Python's site-packages
cp myscript.py python/lib/python3.5/site-packages/

# And we prepare our package dependencies
pyvenv-3.5 newpackage-env
source newpackage-env/bin/activate
pip install a_python_package
deactivate
cp -r newpackage-env/lib/python3.5/site-packages/* python/lib/python3.5/site-packages/

# Now we have everything we need under 'python', so we create the disk image
genisoimage -l -r -o disk.iso python
That’s it! disk.iso now contains everything our script needs to run.
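If you want to sanity-check the image before booting it, you can mount it loopback and verify that the script and its dependencies ended up where we expect. This is optional; the /tmp/disk-check mountpoint below is an arbitrary choice:

mkdir -p /tmp/disk-check
sudo mount -o loop disk.iso /tmp/disk-check
# The script should show up in the baked site-packages
ls /tmp/disk-check/lib/python3.5/site-packages/
sudo umount /tmp/disk-check

We can then boot the image with the spt tender, passing the Rumprun configuration as a JSON string: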
solo5-spt --disk=disk.iso --net=tap0 python.spt \
    '{"cmdline":"python.spt -m myscript","env":"PYTHONHOME=/python","net":{"if":"ukvmif0","cloner":"True","type":"inet","method":"static","addr":"10.0.0.2","mask":"16"},"blk":{"source":"etfs","path":"/dev/ld0a","fstype":"blk","mountpoint":"/python"}}'
We have created a Docker image that automates the above procedure of building Python as a unikernel and preparing the disk iso with our script and its dependencies, so that instead of running the above steps you can simply do something like:
docker run --rm -v $(pwd):/build cloudkernels/python3-build disk.iso myscript.py requirements.txt
where requirements.txt lists the dependencies of myscript.py, one package per line (essentially, whatever running pip freeze in your Python project directory would produce).
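For example, if you keep your project’s dependencies in a virtual environment, the file can be generated directly with pip; the package names and versions in the comment below are purely illustrative:

# Run this inside a virtualenv that has your project's dependencies installed
pip freeze > requirements.txt

# The resulting file pins one package per line, e.g. (illustrative only):
#   requests==2.21.0
#   urllib3==1.24.1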
You can find the Docker image on Docker Hub and on GitHub.
A working example follows below. Please note that this version includes a hack to hardcode the DNS server in the dummy rootfs, as we haven’t yet patched the configuration logic of Rumprun to include a command-line option for DNS.
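For illustration only, one way such a hack could look is baking a resolv.conf into the dummy rootfs before building; the actual image may wire this up differently, and both the rootfs path and the nameserver address below are placeholders:

# Hypothetical sketch of hardcoding DNS in a dummy rootfs (path and address are examples)
mkdir -p rootfs/etc
echo "nameserver 8.8.8.8" > rootfs/etc/resolv.conf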
We will use a simple requests example. The files needed are the Python snippet and requirements.txt.
requests_main.py:
import requests

r = requests.get('https://www.example.com')
print(r.status_code)
print(r.text)
requirements.txt:
requests
Now run the command to bake the necessary python dependencies:
# docker run --rm -v $(pwd):/build cloudkernels/python3-build:x86_64_dns disk.iso requests_main.py requirements.txt
[...]
  7.12% done, estimate finish Sat Feb 23 18:56:49 2019
 14.25% done, estimate finish Sat Feb 23 18:56:49 2019
 21.35% done, estimate finish Sat Feb 23 18:56:49 2019
 28.48% done, estimate finish Sat Feb 23 18:56:49 2019
 35.59% done, estimate finish Sat Feb 23 18:56:49 2019
 42.70% done, estimate finish Sat Feb 23 18:56:49 2019
 49.81% done, estimate finish Sat Feb 23 18:56:51 2019
 56.94% done, estimate finish Sat Feb 23 18:56:50 2019
 64.04% done, estimate finish Sat Feb 23 18:56:50 2019
 71.16% done, estimate finish Sat Feb 23 18:56:50 2019
 78.27% done, estimate finish Sat Feb 23 18:56:50 2019
 85.38% done, estimate finish Sat Feb 23 18:56:50 2019
 92.51% done, estimate finish Sat Feb 23 18:56:50 2019
 99.61% done, estimate finish Sat Feb 23 18:56:50 2019
Total translation table size: 0
Total rockridge attributes bytes: 714249
Total directory bytes: 1473102
Path table size(bytes): 4710
Max brk space used 678000
70274 extents written (137 MB)
And invoke the unikernel using the following command:
# ./solo5-hvt --mem=64 --disk=disk.iso --net=tap0 python.hvt '{"cmdline":"python.hvt -m requests_main","env":"PYTHONHOME=/python","net":{"if":"ukvmif0","cloner":"True","type":"inet","method":"static","addr":"10.0.0.2","mask":"16", "gw":"10.0.0.1"},"blk":{"source":"etfs","path":"/dev/ld0a","fstype":"blk","mountpoint":"/"}}'
solo5-hvt: python.hvt: Warning: phdr[0] requests WRITE and EXEC permissions
solo5-hvt: WARNING: Tender is configured with HVT_DROP_PRIVILEGES=0. Not dropping any privileges.
solo5-hvt: WARNING: This is not recommended for production use.
            |      ___|
  __|  _ \  |  _ \ __ \
\__ \ (   | | (   |  ) |
____/\___/ _|\___/____/
Solo5: Memory map: 64 MB addressable:
Solo5: reserved @ (0x0 - 0xfffff)
Solo5: text @ (0x100000 - 0x73ae37)
Solo5: rodata @ (0x73ae38 - 0x8cdcd7)
Solo5: data @ (0x8cdcd8 - 0xb5b93f)
Solo5: heap >= 0xb5c000 < stack < 0x4000000
rump kernel bare metal bootstrap

[ 1.0000000] Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
[ 1.0000000] 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017,
[ 1.0000000] 2018 The NetBSD Foundation, Inc. All rights reserved.
[ 1.0000000] Copyright (c) 1982, 1986, 1989, 1991, 1993
[ 1.0000000] The Regents of the University of California. All rights reserved.

[ 1.0000000] NetBSD 8.99.25 (RUMP-ROAST)
[ 1.0000000] total memory = 26824 KB
[ 1.0000000] timecounter: Timecounters tick every 10.000 msec
[ 1.0000080] timecounter: Timecounter "clockinterrupt" frequency 100 Hz quality 0
[ 1.0000090] cpu0 at thinair0: rump virtual cpu
[ 1.0000090] root file system type: rumpfs
[ 1.0000090] kern.module.path=/stand/amd64/8.99.25/modules
[ 1.0200090] mainbus0 (root)
[ 1.0200090] timecounter: Timecounter "bmktc" frequency 1000000000 Hz quality 100
[ 1.0200090] ukvmif0: Ethernet address 5e:ac:bf:a1:15:09
[ 1.0732133] /dev//dev/ld0a: hostpath XENBLK_/dev/ld0a (137 MB)
mounted tmpfs on /tmp

=== calling "python.hvt" main() ===

200
<!doctype html>
<html>
<head>
    <title>Example Domain</title>

    <meta charset="utf-8" />
    <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <style type="text/css">
    body {
        background-color: #f0f0f2;
        margin: 0;
        padding: 0;
        font-family: "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;

    }
    div {
        width: 600px;
        margin: 5em auto;
        padding: 50px;
        background-color: #fff;
        border-radius: 1em;
    }
    a:link, a:visited {
        color: #38488f;
        text-decoration: none;
    }
    @media (max-width: 700px) {
        body {
            background-color: #fff;
        }
        div {
            width: auto;
            margin: 0 auto;
            border-radius: 0;
            padding: 1em;
        }
    }
    </style>
</head>

<body>
<div>
    <h1>Example Domain</h1>
    <p>This domain is established to be used for illustrative examples in documents. You may use this
    domain in examples without prior coordination or asking for permission.</p>
    <p><a href="http://www.iana.org/domains/example">More information...</a></p>
</div>
</body>
</html>

rumprun: call to ``sigaction'' ignored

=== main() of "python.hvt" returned 0 ===

=== _exit(0) called ===
[ 1.8632722] rump kernel halting...
[ 1.8632722] syncing disks... done
[ 1.8632722] unmounting file systems...
[ 1.9953910] unmounted tmpfs on /tmp type tmpfs
[ 1.9967528] unmounted /dev//dev/ld0a on / type cd9660
[ 1.9967528] unmounted rumpfs on / type rumpfs
[ 1.9967528] unmounting done
halted
Solo5: solo5_exit(0) called
Please note that for this to work, we have set up tap0 with an IP address of 10.0.0.1 and have set up NAT on the host so that the guest can access the network.
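A minimal sketch of that host-side setup, assuming iproute2 and iptables are available and that the guest uses 10.0.0.2/16 with 10.0.0.1 as its gateway as in the JSON above; adapt interface names and the NAT rule to your own host:

# Create the tap device and give the host side the guest's gateway address
sudo ip tuntap add tap0 mode tap
sudo ip addr add 10.0.0.1/16 dev tap0
sudo ip link set dev tap0 up

# Enable IP forwarding and NAT the guest's traffic out of the host
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/16 ! -o tap0 -j MASQUERADE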
Building the Nabla container
Now, baking the Nabla container is a walk in the park after the above steps. You can have a look at our previous post or the relevant repo, or, if you’re feeling a bit lazy, here’s a quick summary:
Just clone this repo:
git clone https://github.com/cloudkernels/nabla-base
Add the needed files to the rootfs directory:
mount -o loop disk.iso /mnt
cp -avf /mnt/* nabla-base/rootfs/
umount /mnt
Add the seccomp tender binary:
cp python.spt nabla-base/python.nabla
Replace myprog.nabla with python.nabla in the Dockerfile (careful: the runtime expects to find a file ending in .nabla, so make sure to keep the extension).
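For example, assuming myprog.nabla appears literally in the Dockerfile, the replacement can be done in one line from the shell:

# Swap the unikernel name referenced by the Dockerfile
sed -i 's/myprog\.nabla/python.nabla/' nabla-base/Dockerfile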
And build your nabla container image using the following command:
cd nabla-base
docker build -f Dockerfile -t python3-requests-nabla .
Assuming you have set up runnc correctly, spawning the container is as easy as:
docker run --rm --runtime=runnc python3-requests-nabla -m requests_main
Note the boot command line: it has to match the "cmdline" parameter in the JSON string used above.
That’s it folks!
Give it a try and let us know what you think!