An attempt to Dockerize the Cryptech build environment.
The ten zillion Debian packages are tedious but straightforward.
The tricky bit is the Xilinx toolchain:
- You have to download the installation tarball by hand
- You have to get a license key from XiLinx before you can use it
- You have to run GUI installation tools to install and configure it
There's not much we can do about the first two, so we assume that
you've obtained a tarball and a license key file, and that you've
dropped them into this directory with the filenames we expect.
The third...is fun, in a demented sort of way. We don't know yet
whether it'll work, but we're going to try automating it using
Xvfb, xautomation, and ratpoison.
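
The rough shape of the trick, as a sketch: the display number, screen
geometry, and click coordinates below are assumptions for
illustration, not anything the Xilinx installer requires (the -fbdir
matches the framebuf/Xvfb_screen0 path used later in these notes):

    Xvfb :1 -screen 0 1024x768x16 -fbdir framebuf &    # headless X server; frame buffer dumped to framebuf/Xvfb_screen0
    DISPLAY=:1 ratpoison &                             # window manager that maximizes everything, keeping coordinates predictable
    DISPLAY=:1 ./xsetup &                              # the Xilinx installer, none the wiser
    DISPLAY=:1 xte 'mousemove 512 384' 'mouseclick 1'  # xautomation: synthesize a click at a known position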
- You need to download the Xilinx ISE Design Suite.
- Xilinx only supports specific versions of Red Hat and SUSE Linux,
  but it does run on Ubuntu and Debian, with the following caveat:
  Ubuntu and Debian symlink /bin/sh to /bin/dash, which can't handle
  the if [ ... ] syntax in the Xilinx shell scripts. Symlinking
  /bin/sh to /bin/bash works (see the snippet after this list).
- The Xilinx tools are serious disk hogs: VMs for this need at least
  30-40 GB of disk space.
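
For the /bin/sh caveat, one line does it (on Debian and Ubuntu,
running dpkg-reconfigure dash and answering "no" has the same effect):

    sudo ln -sf /bin/bash /bin/sh   # repoint /bin/sh from dash to bash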
Step-by-step installation (Dockerfile will attempt to automate this):
- Unpack Xilinx_ISE_DS_Lin_14.7_1015_1.tar (or whatever version you
  have), cd to Xilinx_ISE_DS_Lin_14.7_1015_1, and run sudo ./xsetup
- Click through two screens of license agreements.
- Select ISE WebPACK.
- Unselect (or leave unselected) Install Cable Drivers.
- Go!
Well, not quite. You will need to convince the ISE that you have a
license. On the page
http://www.xilinx.com/products/design-tools/ise-design-suite/ise-webpack.htm
click on the Licensing Solutions link. On the resulting page, expand
the section Obtain a license for Free or Evaluation product. You will
already have created an account in order to download the ISE WebPACK,
so now you can go to the Licensing Site and use that account to
create a Certificate Based License.
You do not need to go through the HostID dance, just say Do It. You
will then receive a certificate by email (not an X.509 certificate)
which you will be able to use. Then start the ISE WebPACK by issuing
the command ise. Go to the Help menu and select Manage Licenses. Use
the resulting License Manager window to install the .lic file. This
process is complex and flaky.
Useful links:

http://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/design-tools.html
http://www.xilinx.com/products/design-tools/ise-design-suite/ise-webpack.htm
http://www.armadeus.com/wiki/index.php?title=ISE_WebPack_installation_on_Linux
With the license file already present this is simple: follow the
installation instructions, tell it to use the existing license file,
and it'll find it if you click the right buttons. And yes, it's
another GUI program.
The ise binary referred to above is
/opt/Xilinx/14.7/ISE_DS/ISE/bin/lin64/ise (or .../lin/ise, but the
pbuilder setup requires a 64-bit build machine).
It turns out you don't really need to run the whole ise tool to get
to the license manager: you can run the license manager directly, but
you do have to source the appropriate settings file first, since none
of the Xilinx tools work properly without that. So:
    . /opt/Xilinx/14.7/ISE_DS/settings64.sh
    /opt/Xilinx/14.7/ISE_DS/common/bin/lin64/xlcm -manage
The file finish.png is for the visgrep tool from the xautomation
package. It sorta mostly kinda works as a mechanism for detecting
that we've gotten to the end of the Xilinx installation process. I
haven't gotten it to work quite as it says on the tin, but something
like:
    while true
    do
        # Convert the Xvfb frame buffer dump to PNG so visgrep can search it.
        xwdtopnm framebuf/Xvfb_screen0 2>/dev/null | pnmtopng >framebuf/screen.png
        # visgrep prints match coordinates on stdout; test its output rather
        # than its exit status, which is unreliable (see below).
        if test -n "$(visgrep framebuf/screen.png finish.png finish.png)"
        then
            break
        fi
        sleep 5   # don't spin flat out while waiting for the installer
    done
For reasons that I don't understand, visgrep
returns failure (exit
status 1) even when it finds the pattern, even though the manual says
it's not supposed to do that. Dunno why. Ancient code. Whatever.
In practice, this is so nasty that I'm seriously tempted just to wait
half an hour and then blindly click where the Finish button should be.
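
If it comes to that, the blind click is two lines; the coordinates
and display number below are pure guesses and would have to be
measured from a real run:

    sleep 1800                                         # wait out the installer
    DISPLAY=:1 xte 'mousemove 700 550' 'mouseclick 1'  # click where the Finish button ought to be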
This whole thing is of course a kludge tower. Furthermore, due to the
size of the installation files, normal Docker image layering would
mean images at least 6 GB larger than they need to be, so we're
probably going to want to clean all that up and then do a
docker save | docker load cycle to squeeze out the garbage.
The whole Xvfb thing is pretty much impossible to debug under docker
build, which is starting to look like a moot point in any case, since
Xvfb is refusing to start in that environment even when everything
else claims to be right. So, an alternative plan: don't use a
Dockerfile at all, just use docker import, docker run, and docker
commit. Overall plan if we were to go this way:
- Construct initial filesystem content using debootstrap --foreign
  (as we do now) and also untarring the Xilinx source tarball
  directly. Might as well drop in the handful of small scripts we'd
  otherwise have to COPY from a Dockerfile while we're at this.

- docker import (as we do now, but with all the extra initial content
  too) to create the stage1 image. Steps 1 and 2 are sketched after
  this list.

- docker run the stage1 image with a bind mount for the Xvfb frame
  buffer directory, running an install.sh script we stuffed into the
  tree in step 1. This does apt-get to fetch packages we need, then
  does python install.py to run our Xvfb driver voodoo. install.sh
  probably needs to set -e so we can tell when something failed.
  Ideally, install.sh does everything, including throwing away all
  the stuff we no longer need, like the 6 GB unpacked Xilinx
  installation tree.

- Assuming that install.sh et al ran to happy completion, run docker
  commit to generate a new image from the result.

- [Optional] docker save | docker load cycle to garbage collect,
  assuming that hack still works, or whatever other hack (if any)
  achieves the same goal (gc).
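
A minimal sketch of steps 1 and 2, with placeholder names throughout
(the Debian suite, directory names, and image tag are guesses, not
anything settled here):

    sudo debootstrap --foreign jessie rootfs http://deb.debian.org/debian  # stage-1 bootstrap only
    sudo tar -C rootfs/opt -xf Xilinx_ISE_DS_Lin_14.7_1015_1.tar           # unpack the Xilinx tarball into the tree
    sudo cp install.sh install.py rootfs/                                  # the small scripts from step 1
    sudo tar -C rootfs -c . | docker import - stage1                       # create the stage1 image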
One nice thing about this approach is that it lets us save screen
images from the installation process, making this a lot easier to
debug. Unless the internal scripts need it for some reason, we don't
even need to convert from xwd format to something more common: we can
just shutil.copyfile() the frame buffer file and leave the results
for somebody else to deal with. If anything is going to keep those,
they should probably be converted to PNG, but that's not critical.
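
For the record, the copy and the optional conversion are one command
each (the filenames here are made up):

    f=debug/screen-$(date +%s)
    cp framebuf/Xvfb_screen0 $f.xwd                 # the shell equivalent of shutil.copyfile()
    xwdtopnm $f.xwd 2>/dev/null | pnmtopng >$f.png  # optional PNG conversion, same tools as the loop above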
OK, confirmed that the problem with running Xvfb is (somehow)
specific to running under docker build. The same configuration
(package list, etc.) ran (apparently) fine under docker run.
Attempting to docker commit the result blew out what was left of my
disk space. Since we're going to want to clean up and collapse
anyway, it's possible that docker container export is a better path
than commit here: we would have to feed the result to docker import
to get an image, but that might be exactly what we want.
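
A sketch of that export/import path, with placeholder container and
image names (nothing here is settled):

    docker run --name stage1-run -v $(pwd)/framebuf:/framebuf stage1 /install.sh  # step 3 of the plan above
    docker container export stage1-run | docker import - xilinx-build             # flatten to a single-layer image
    docker rm stage1-run                                                          # discard the build container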