author    Rob Austein <sra@hactrn.net>  2019-02-17 04:51:03 +0000
committer Rob Austein <sra@hactrn.net>  2019-02-17 04:51:03 +0000
commit    0820895f73dfc41f37bb63365290815c861013cf (patch)
tree      1b3a2f87f328aadddd20bd6564891476a1ccb73b /README.md
parent    462f33573500393d29111b78d1aa621f9beb8493 (diff)
Parameterize and clean up, now that basic hack seems to work.
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  192
1 file changed, 47 insertions(+), 145 deletions(-)
diff --git a/README.md b/README.md
index 1d9e25c..4f757f2 100644
--- a/README.md
+++ b/README.md
@@ -1,165 +1,67 @@
-Docker
-======
+Cryptech build environment under Docker
+=======================================
-An attempt to Dockerize the Cryptech build environment.
+This is an attempt to Dockerize the Cryptech build environment.
The ten zillion Debian packages are tedious but straightforward.
-
The tricky bit is the XiLinx toolchain:
-* You have to download the installation tarball by hand
-* You have to get a license key from XiLinx before you can use it
-* You have to run GUI installation tools to install and configure it
+* You need to download the Xilinx ISE Design Suite distribution tarball.
+* You need to get a license key from Xilinx before you can use ISE.
+* You have to run GUI installation tools to install and configure it.
There's not much we can do about the first two, so we assume that
you've obtained a tarball and a license key file, and that you've
dropped them into this directory with the filenames we expect.
-The third...is fun, in a demented sort of way. Don't know whether
-it'll work yet, but going to try automating this using
-Xvfb, xautomation, and ratpoison.
-
-XiLinx voodoo from Berlin workshop notes
-----------------------------------------
-
-* You need to download the Xilinx ISE Design Suite.
-
-* Xilinx only supports specific versions of Red Hat and Suse Linux,
- but it does run on Ubuntu and Debian, with the following caveat:
- Ubuntu and Debian symlink `/bin/sh` to `/bin/dash`, which can't
- handle `if [ ... ]` syntax in shell scripts. Symlinking `/bin/sh`
- to `/bin/bash` works.
-
-* The Xilinx tools are serious disk hogs: VMs for this need at least
- 30-40 GB of disk space.
-
-Step-by-step installation (Dockerfile will attempt to automate this):
-
-1. Unpack `Xilinx_ISE_DS_Lin_14.7_1015_1.tar` (or whatever version you have).
-2. `cd` to `Xilinx_ISE_DS_Lin_14.7_1015_1`, and run `sudo ./xsetup`
-3. Click through two screens of license agreements.
-4. Select ISE WebPACK.
-5. Unselect (or leave unselected) Install Cable Drivers.
-6. Go!
+The third...is fun, in a demented sort of way.
-Well, not quite. You will need to convince the ISE that you have a license.
+The Xilinx tools are serious disk hogs: VMs for this need at least
+30-40 GB of disk space. The build process for this Dockerized
+environment is even worse: 60 GB appears to be just barely enough.
-On the page
+To get a license, go to
http://www.xilinx.com/products/design-tools/ise-design-suite/ise-webpack.htm
-click on the Licensing Solutions link. On the resulting page, expand
-the section Obtain a license for Free or Evaluation product. To
-download the ISE Webpack, you should have created an account, so now
-you can go to the Licensing Site and use that account to create a
-Certificate Based License.
+and click on the Licensing Solutions link. On the resulting page,
+expand the section "Obtain a license for Free or Evaluation
+product". To download the ISE Webpack, you should have created an
+account, so now you can go to the Licensing Site and use that account
+to create a Certificate Based License.
-You do not need to go through the HostID dance, just say Do It. You
+You do not need to go through the HostID dance, just say "Do It". You
will then receive a certificate in email (not an X.509 certificate)
-which you will be able to use. Then start the ISE Webpack by issuing
-the command ise. Go to the Help menu and Manage Licenses. Use the
-resulting new License Manager window to install the `.lic` file. This
-process is complex and flakey.
+which you will be able to use.
http://www.xilinx.com/support/download/index.html/content/xilinx/en/downloadNav/design-tools.html
http://www.xilinx.com/products/design-tools/ise-design-suite/ise-webpack.htm
http://www.armadeus.com/wiki/index.php?title=ISE_WebPack_installation_on_Linux
-With the license file already present this is simple: follow the installation
-instructions, tell it to use existing license file, it'll find it if
-you click the right buttons. And yes, it's another GUI program.
-
-The ise binary referred to above is in `/opt/Xilinx/14.7/ISE_DS/ISE/bin/lin64/ise`
-(or in `.../lin/ise`, but the `pbuilder` setup requires a 64-bit build machine).
-
-It turns out you don't really need to run the whole ise tool to get to
-the license manager, you can just run the license manager directly,
-but you do have to source the appropriate settings file first, none of
-the XiLinx tools work properly without that. So:
-
-```
-. /opt/Xilinx/14.7/ISE_DS/settings64.sh
-/opt/Xilinx/14.7/ISE_DS/common/bin/lin64/xlcm -manage
-```
-
-Kludges too awful to mention
-----------------------------
-
-The file `finish.png` is for the `visgrep` tool from the `xautomation`
-package. It sorta mostly kinda works as a mechanism for detecting
-that we've gotten to the end of the XiLinx installation process. I
-haven't gotten it to work quite as it says on the tin, but something like:
-
-```
-while true
-do
- xwdtopnm 2>/dev/null framebuf/Xvfb_screen0 | pnmtopng >framebuf/screen.png
- if test -n "$(visgrep framebuf/screen.png finish.png finish.png)"
- then
- break
- fi
-done
-```
-
-For reasons that I don't understand, `visgrep` returns failure (exit
-status 1) even when it finds the pattern, even though the manual says
-it's not supposed to do that. Dunno why. Ancient code. Whatever.
-
-In practice, this is so nasty that I'm seriously tempted just to wait
-half an hour then blindly click on where the finish button should be.
-
-Possible future direction
--------------------------
-
-This whole thing is of course a kludge tower. Furthermore, due to the
-size of the installation files, normal Docker image layering would
-mean images at least 6GB larger than they need to be, so we're
-probably going to want to clean all that up then do a save | load
-cycle to squeeze out the garbage.
-
-The whole Xvfb thing is pretty much impossible to debug under `docker
-build`, which is starting to look like a moot point in any case since
-Xvfb is refusing to start in that environment even when everything
-else claims to be right. So, an alternative plan: don't use a
-Dockerfile at all, just use `docker import`, `docker run`, and `docker
-commit`. Overall plan if we were to go this way:
-
-1. Construct initial filesystem content using `debootstrap --foreign`
- (as we do now) and also untarring the XiLinx source tarball
- directly. Might as well drop in the handful of small scripts we'd
- otherwise have to `COPY` from a Dockerfile while we're at this.
-
-2. `docker import` (as we do now, but with all the extra initial
-   content too) to create the stage1 image.
-
-3. `docker run` the stage1 image with a bind mount for the Xvfb frame
- buffer directory, running an `install.sh` script we stuffed into
- the tree in step 1. This does `apt-get` to fetch packages we need,
- then does `python install.py` to run our Xvfb driver voodoo.
- `install.sh` probably needs to set `-e` so we can tell when
- something failed. Ideally, `install.sh` does everything including
- throwing away all the stuff we no longer need like the 6GB unpacked
- XiLinx installation tree.
-
-4. Assuming that `install.sh` et al ran to happy completion, run
-   `docker commit` to generate a new image from the result.
-
-5. [Optional] `docker save | docker load` cycle to garbage collect,
- assuming that hack still works, whatever other hack (if any) to
- achieve same goal (gc).
-
-One nice thing about this approach is that it lets us save screen
-images from the installation process, making this a lot easier to
-debug. Unless the internal scripts need it for some reason, we don't
-even need to convert from `xwd` format to something more common, we
-can just `shutil.copyfile()` the frame buffer file and leave the
-results for somebody else to deal with. If anything is going to keep
-those they should probably be converted to png but not critical.
-
-OK, confirmed that the problem with running `Xvfb` is (somehow)
-specific to running under `docker build`. Same configuration (package
-list, etc) ran (apparently) fine under `docker run`.
-
-Attempting to `docker commit` the result blew out what was left of my
-disk space. Since we're going to want to clean up and collapse
-anyway, it's possible that `docker container export` is a better path
-than commit here: would have to feed the result to `docker import` to
-get an image, but that might be exactly what we want.
+Once you've downloaded the ISE installation tarball and the license
+file, you should place copies of them in this directory (the one with
+all the dockerization stuff). Since these were probably painful to
+obtain, you might want to store the files somewhere else (e.g., the
+parent directory), chmod them 444, and hard link them into this
+directory.
+
+After you've added those files to this directory, typing `make`
+should, in theory, build the whole thing. It takes a ridiculously
+long time to build, but we don't expect this to happen often.
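The chmod/hard-link dance above can be sketched as follows. This is a runnable illustration, not the repo's actual setup: it uses a scratch directory and dummy files standing in for the real ISE tarball and license key, and the filenames (taken from the old install notes, plus an invented `Xilinx.lic`) are examples; use whatever names the Makefile here actually expects.

```shell
#!/bin/sh
set -e

# Scratch sandbox so the sketch runs anywhere; in real life "parent" is
# the directory above this repo and "docker" is this directory.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p parent/docker

# Dummy stand-ins for the painfully-obtained tarball and license key.
# (Filenames are examples, not necessarily what the Makefile expects.)
touch parent/Xilinx_ISE_DS_Lin_14.7_1015_1.tar parent/Xilinx.lic

# Keep the originals read-only in the parent directory...
chmod 444 parent/Xilinx_ISE_DS_Lin_14.7_1015_1.tar parent/Xilinx.lic

# ...and hard link them into the dockerization directory.
ln parent/Xilinx_ISE_DS_Lin_14.7_1015_1.tar parent/docker/
ln parent/Xilinx.lic parent/docker/

# Each file now has two directory entries sharing one inode, so the
# link count (%h) is 2 and the mode (%a) is 444.
stat -c '%h %a %n' parent/docker/*
```

Because these are hard links to the same inodes, deleting or cleaning the working copies leaves the parent-directory copies untouched; with the real files in place, `make` then takes over.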
+
+Legal caveat
+------------
+
+Note that the resulting Docker image contains a licensed copy of the
+build environment, so passing it around to your friends or installing
+it on more machines than the license allows is a no-no. We're *not*
+attempting to circumvent Xilinx's licensing system, just make it
+possible to run builds which require ISE in a reproducible Dockerized
+environment.
+
+Grotty details
+--------------
+
+Readers familiar with Docker will notice that this build environment
+is...kind of weird. Partly that's because of the size of some of the
+files involved, but mostly it's because the Xvfb/ratpoison hack we're
+using to drive ISE installation doesn't work under `docker build`.
+Don't know why, don't really care (so many windmills, so little time).
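The workaround implied here is the `docker import` / `docker run` / `docker commit` flow sketched in the old notes. A rough, untested outline follows; every name in it (the `stage1.tar` filesystem tarball, the `cryptech:*` image tags, the in-image `install.sh`, the `framebuf` directory) is a placeholder invented for illustration, not something this repo necessarily ships.

```shell
# 1. Turn the debootstrapped tree (plus the unpacked ISE installer and
#    helper scripts) into a base image.
docker import stage1.tar cryptech:stage1

# 2. Run the Xvfb-driven installer under "docker run", where Xvfb does
#    start.  Bind-mount the frame buffer directory so screenshots of the
#    GUI installer can be inspected from outside while it runs.
docker run --name cryptech-install \
    -v "$PWD/framebuf:/framebuf" cryptech:stage1 /install.sh

# 3. Freeze the result of the installation as a new image.
docker commit cryptech-install cryptech:ise

# 4. Optional: export/import the container to collapse layers and shed
#    the multi-GB garbage left over from the installation.
docker export cryptech-install | docker import - cryptech:ise-flat
```

The export/import round trip in step 4 produces a single-layer image, which is what makes it attractive as a garbage-collection hack here.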