Transferring research data using dat - Implementing a campus-wide infrastructure

While dat is a peer-to-peer protocol that, in principle, does not require third-party support or intermediaries, its current early-stage implementation benefits from the support of centralized IT infrastructure in order to transfer both small- and large-scale research datasets. This article proposes an architecture to support users campus-wide.

Experiences with using dat for research data transfer

How well does dat perform for campus-wide transfer of research data? Dat is a distributed protocol created to share data, and we are experimenting with it to see if it answers our needs. This is a living article that documents our ongoing experience of making dat a standard for transferring research data.

Reproducibility and user empowerment via Docker

Looking at Docker from a user-empowerment perspective, with some thoughts on scientific reproducibility using Docker. The reproducibility perspective was original when this text was written in 2014, but is now fairly standard.

A Docker container for Biopython testing using Buildbot

This is the last in a series of posts related to creating a testing container for Biopython using Docker.

In the previous post we developed a container that allows you to have a complete, fully functional environment for Biopython.

Our objective now is different: We want to have a container to help do integration testing for Biopython. That is a completely different kind of animal:

  • It should be able to connect to a buildbot server (remember that buildbot is a continuous integration framework); Biopython has a buildbot server.
  • It is a server/executable container. You do not log in to it (unless something strange is happening). When you start it, it becomes an independent agent (we are helping to build SkyNet here): it connects to the buildbot server and does whatever testing tasks are required of it.

Buildbot installation

Buildbot installation can be done by adding something like this:

RUN apt-get install -y buildbot-slave
RUN buildslave create-slave biopython CHANGEUSER CHANGEPASS


Notice that you need to change the username and password in the code (CHANGEUSER, CHANGEPASS). These have to be agreed beforehand with the buildbot system administrator. This also means that you cannot use the Dockerfile from the Internet directly: you have to download it and edit it manually. Alternative approaches would be appreciated, BTW.
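One possible way to avoid the hand-editing step (a sketch, not something from the original workflow): keep the CHANGEUSER/CHANGEPASS placeholders in the published Dockerfile and fill them in locally with sed before building. The file name `Dockerfile.in` and the variable names are my assumptions:

```shell
# Stand-in for the downloaded Dockerfile (only the one relevant line shown);
# in practice Dockerfile.in would be the file you fetched from the Internet
printf 'RUN buildslave create-slave biopython CHANGEUSER CHANGEPASS\n' > Dockerfile.in

BUILDUSER=myuser    # credentials agreed beforehand with the buildbot admin
BUILDPASS=mypass

# Substitute the placeholders to produce the Dockerfile actually used for the build
sed -e "s/CHANGEUSER/$BUILDUSER/" -e "s/CHANGEPASS/$BUILDPASS/" \
    Dockerfile.in > Dockerfile
```

You would then build from the generated Dockerfile as usual; the published Dockerfile.in stays generic and credential-free.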

Server (executable) mode

This kind of container is different from the previous one, where everything was prepared for a user to log in and do interactive work inside the container. Here, when you fire up the container, it goes about its business (testing).

In Docker that is achieved by creating an ENTRYPOINT:

The ENTRYPOINT script should include everything needed to start up the server (for example, the database servers that in the previous post were started from .bashrc should here go into the entrypoint script). Importantly, if you have a daemon server (i.e., one that goes into the background) you have to keep the entrypoint running or the container will terminate. So, in our case we will have the following file:

service postgresql start
service mysql start
export DIALIGN2_DIR=/tmp
buildslave start biopython
tail -f biopython/twistd.log #To hold things
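In the Dockerfile, a script like this can be wired in roughly as follows (the script name start.sh is my choice, not from the original post):

```dockerfile
# Copy the startup script into the image and make it the entrypoint,
# so the container runs it (and only it) on docker run
ADD start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]
```

The final tail -f in the script is what keeps the entrypoint (and therefore the container) alive after the daemons have forked into the background.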

Final dispositions

This series of tutorials was made by a Docker newbie. While I hope it will help others with their Docker installations, there might be sub-optimal solutions here (I would appreciate it if you pointed me in the right direction).

The Biopython Docker work will continue there; updates can be found in that repository.

Remember: while you can directly run the container from the previous blog post, the testing container requires you to a) download the Dockerfile, b) edit the username and password, and c) only then run it.

A Docker container for Biopython

In this post we will create a Docker container for Biopython. Our final objective is to have a container to test Biopython (a different kind of beast compared with what we are doing here), but this one might actually be interesting to a lot more people.

A caveat: Docker is undergoing intense development, so some of the suggestions below might break with time. If you find such a case, please inform me and I will amend this post. I will assume that you have installed Docker and that your user has group permissions to interact with it (if not, just sudo most of the commands below).

For the impatient
Install docker. Remember: depending on your installation you might need to add sudo to the commands below.
docker build -t biopython
#Grab a coffee, wait a bit
docker run -t -i biopython /bin/bash

Creating a Docker file

Basic stuff

We will use Ubuntu, more specifically Ubuntu Saucy. Why Saucy? For no particular reason, but we want to make sure that the environment is stable, so we pick a recent-but-not-bleeding-edge distro. So, our file starts with:

FROM ubuntu:saucy

which simply uses Saucy (downloading the image if necessary).

We now add all the standard Ubuntu packages needed for Biopython:

#We need this for phylip
RUN echo 'deb precise multiverse' >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y git python-numpy wget gcc python-dev
RUN apt-get install -y python-matplotlib python-reportlab python-rdflib
RUN apt-get install -y clustalw fasttree t-coffee
RUN apt-get install -y bwa ncbi-blast+ emboss clustalo phylip mafft muscle
RUN apt-get install -y embassy-phylip samtools phyml wise raxml
# For BioSQL
RUN apt-get install -y mysql-server python-mysqldb postgresql python-psycopg2

Notice the change of repositories and all the support packages (git, gcc, ...).

Non-standard packages

There are several pieces of software that require manual installation. This is an ongoing task, but it is mostly simple grunt work, for example:

#reportlab fonts
RUN wget
WORKDIR /usr/lib/python2.7/dist-packages/reportlab
RUN  mkdir fonts
WORKDIR /usr/lib/python2.7/dist-packages/reportlab/fonts
RUN unzip /
RUN rm
RUN mkdir genepop
WORKDIR /genepop
RUN wget
RUN tar zxf sources.tar.gz
RUN g++ -DNO_MODULES -o Genepop GenepopS.cpp -O3
RUN cp Genepop /usr/bin
RUN rm -rf genepop

Not much more than a sequence of bash commands in all the cases I have done (download stuff, compile, copy, clean up, ...).

Configuring and starting services (DBs)

Here we need to configure the databases needed for BioSQL (PostgreSQL and MySQL; SQLite needs no setup). The configuration looks like this:

RUN echo "host    all             all             ::1/128                 trust" > /etc/postgresql/9.1/main/pg_hba.conf
RUN echo "service postgresql start" > .bashrc
RUN echo "service mysql start" >> .bashrc

We then need to configure access permissions for the PostgreSQL server. Notice that the address is an IPv6 one. Something in the system (I did not research what) tries IPv6 first (localhost has both a v4 and a v6 address). Modern: yes; welcome: yes; expected: no. So, if something based on localhost seems to be failing, check whether it is using IPv6.
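A quick way to see which addresses localhost resolves to (a generic diagnostic on glibc-based Linux systems, not from the original post):

```shell
# List the addresses localhost resolves to; if ::1 appears,
# IPv6 may be tried before 127.0.0.1 by client libraries
getent hosts localhost
```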

The database servers are started in .bashrc. This solution is, in my view, sub-optimal (for instance, you can run a container without starting bash, and there goes database server initialization). If you know of a better way, please say so...

Preparing Biopython

It is actually quite easy:

RUN git clone
WORKDIR /biopython
RUN python setup.py install

Running and getting the Docker file

If you want to run this, do the following on your machine (with Docker, preferably with the sudo issue resolved):

docker build -t biopython
docker run -i -t biopython /bin/bash

You will see a few errors related to database startup, but these are not important in this context.

You can now do, for example:

root@dc9d8c3c48f8:/biopython# cd Tests/
root@dc9d8c3c48f8:/biopython/Tests# python run_tests.py --offline

Grab the docker file here, if you want to look at it.

Next steps

The next step will be the creation of a buildbot Docker container for Biopython. We also need to finalize the list of dependencies (almost done).