We run Ubuntu 20.04
Output of `df -h`:
root@centuri-engineering:/home/guillaume# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 9.6M 3.1G 1% /run
/dev/sda2 127G 6.1G 115G 6% /
tmpfs 16G 8.0K 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/sda1 511M 13M 499M 3% /boot/efi
/dev/sda5 309G 2.1G 291G 1% /home
/dev/md0 7.3T 659G 6.3T 10% /mnt/md0
tmpfs 3.2G 0 3.2G 0% /run/user/1000
Here is `/etc/fstab`:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda2 during installation
UUID=b8306de5-7b14-4aa5-b327-e8f4a08de658 / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda1 during installation
UUID=74D2-21A6 /boot/efi vfat umask=0077 0 1
# /home was on /dev/sda5 during installation
UUID=970e738e-0331-4d42-bf09-1d698ad1de1c /home ext4 defaults 0 2
# swap was on /dev/sda4 during installation
UUID=f602a833-f03d-4450-ab14-a33c3823094b none swap sw 0 0
# Added manually
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
TIL: when running `su`, the root shell keeps the user's environment, so for example `/usr/sbin` is not in `$PATH`. Better to use `su -`, which starts a login shell with root's own environment.
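A non-root sketch of the same mechanism: a child shell simply inherits `$PATH` from its caller unless it is started as a login shell, which is why plain `su` keeps the user's restricted path.

```shell
# A plain child shell inherits PATH from its parent, exactly like `su`
# without "-"; a login shell (`su -`) re-reads the profile files instead.
inherited=$(PATH=/tmp/only /bin/sh -c 'echo "$PATH"')
echo "child shell inherited PATH: $inherited"
```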
Update fstab to permanently mount the RAID 1 disk:
# as root
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | tee -a /etc/fstab
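Note that the fstab listed above already contains this line, so a blind `tee -a` would duplicate it. A guarded variant only appends when the line is absent; the sketch below demonstrates it on a temporary file, so it is safe to run anywhere (point `$target` at `/etc/fstab` for real use).

```shell
# Guarded append: only add the line if it is not already present.
# Demonstrated on a temp file; use target=/etc/fstab (as root) for real.
line='/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0'
target=$(mktemp)
grep -qxF "$line" "$target" || echo "$line" >> "$target"
grep -qxF "$line" "$target" || echo "$line" >> "$target"  # no-op the second time
grep -cxF "$line" "$target"  # the line appears exactly once
```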
# apt-transport-https to software-properties-common: https://docs.docker.com/engine/install/debian/
# nvidia-detect: see https://wiki.debian.org/fr/NvidiaGraphicsDrivers
apt install \
    emacs-nox \
    git rsync \
    postgresql-11 \
    postgresql-server-dev-11 \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common \
    nvidia-detect \
    python3-venv
# as root
# add caddy repository
echo "deb [trusted=yes] https://apt.fury.io/caddy/ /" \
| tee -a /etc/apt/sources.list.d/caddy-fury.list
# add docker repositories
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
apt-key fingerprint 0EBFCD88 # see docker website for a check
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
apt update
apt install caddy libnss3-tools \
docker-ce docker-ce-cli containerd.io
usermod -aG docker guillaume
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh
chmod +x Miniforge3-Linux-x86_64.sh
./Miniforge3-Linux-x86_64.sh
After you log out & in again, install docker-compose (via pip, in user space):
pip install docker-compose
# as root
wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.1/mkcert-v1.4.1-linux-amd64
mv mkcert-v1.4.1-linux-amd64 /usr/local/bin/mkcert
chmod +x /usr/local/bin/mkcert
mkcert -install
mkcert localhost centuri-engineering.luminy.univ-amu.fr 139.124.81.38 127.0.0.1 ::1
mv localhost+4-key.pem key.pem
mv localhost+4.pem cert.pem
mv *.pem /etc/caddy
cd /etc/caddy && caddy reload
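To have Caddy serve the mkcert certificate instead of asking Let's Encrypt for one (useful for the bare IP and `localhost` names above), a site block can point at the copied files with a `tls` directive. A hedged Caddyfile sketch, assuming Caddy v2 and the paths used above:

```
centuri-engineering.luminy.univ-amu.fr {
    tls /etc/caddy/cert.pem /etc/caddy/key.pem
    reverse_proxy 127.0.0.1:4080
}
```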
The basic idea is that a container serves its app on a local port (say 4080), and we reverse-proxy all requests for a given subdomain (ports 80 and 443) to that port. Caddy takes care of automatic HTTPS with Let's Encrypt.
Here is a sample of the `/etc/caddy/Caddyfile` doing this:
wiki.centuri-engineering.univ-amu.fr {
reverse_proxy 127.0.0.1:4080
}
This is certainly the thinnest server configuration possible.
Warning: this is an example of an installation at some point in time; it certainly will not work as-is!
Get docker compose files
pip install docker-compose
git clone git@github.com:centuri-engineering/omero-at-centuri.git
Restore DB
cd omero-at-centuri/docker
# Bring up only the database service
docker-compose up -d database
docker-compose exec database psql -U omero -d omero -f /backups/omero.sql
Start all
cd omero-at-centuri/docker
docker-compose up -d
Dump DB
# -T disables the pseudo-tty, which would otherwise corrupt the redirected dump
docker-compose exec -T database pg_dump omero -U omero > backups/omero.sql
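A hedged sketch for keeping several backups around instead of overwriting `backups/omero.sql` each time: build a date-stamped file name, then dump into it (the compose service, database, and user names are assumed to be `database`/`omero`/`omero` as above).

```shell
# Date-stamped dump file name, so successive backups do not clobber
# each other.
stamp=$(date +%Y-%m-%d)
outfile="backups/omero-${stamp}.sql"
echo "$outfile"
# then, on the host:
# docker-compose exec -T database pg_dump -U omero omero > "$outfile"
```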
This reference was useful. Relevant command line:
UPDATE pixels SET path = regexp_replace(path,'\\','/','g');
https://davidamick.wordpress.com/2014/07/19/docker-postgresql-workflow/
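What the `UPDATE` above does to each stored path, shown on one sample value (a hypothetical Windows-style path) with `sed`: every backslash becomes a forward slash.

```shell
# Same transformation as regexp_replace(path,'\\','/','g'), on one value.
sample='user-1\2020\image.tif'
converted=$(printf '%s' "$sample" | sed 's/\\/\//g')
echo "$converted"   # user-1/2020/image.tif
```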
Looks easier to go through `docker-compose exec` commands, so:
COMMAND="psql -U omero -d omero -f /backups/omero.dump.sql"
docker-compose exec database $COMMAND
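Note the deliberately unquoted `$COMMAND`: the shell splits it on whitespace into the separate arguments `docker-compose exec` needs. This breaks if any argument contains spaces. A stand-alone demo of that splitting:

```shell
# Reproduce the word splitting of the unquoted expansion above.
COMMAND="psql -U omero -d omero -f /backups/omero.dump.sql"
set -- $COMMAND       # same splitting as: docker-compose exec database $COMMAND
nwords=$#
echo "split into $nwords words"
```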
Note that old passwords are conserved in the db, so we might need to
reset the root omero user password. This is done on the omeroserver
machine through:
# Log on the server
docker-compose exec omeroserver bash
export PATH=$PATH:/opt/omero/server/OMERO.server/bin
omero db password
This prints an SQL command with the hashed password. Then, in the database server:
docker-compose exec database psql -U omero
# we have to remove the extra quotes for some reason
omero=# UPDATE password SET hash = 'mFkbWqddcdsqsdHffffdd4==' WHERE experimenter_id = 0;
conda create -n pyomero python=3.8
conda activate pyomero
conda install -c conda-forge \
    jupyter \
    notebook \
    jupyterlab \
    ipywidgets \
    scikit-image \
    matplotlib \
    pandas
# dependencies to build zeroc-ice
sudo apt install libbz2-dev libssl-dev
pip install omero-py
https://github.com/imcf/auto-tx