Complex OpenUpgrade Migration
Day 2: Tools
Yesterday we completed enough to experiment and see what is going to happen. So today we want to try out an upgrade from v7 to v8. We know that eventually we want to be on v11, and we also know that it isn't going to be all smooth sailing. For development, testing and production environments I've become a massive fan of the Doodba way and with some adjustments their scaffolding approach is what we are going to take here.
It already comes with OpenUpgrade as an option, but our real issues are that:
Backing up and restoring a 50GB database is going to take some time.
We are going to wreck the database at least once (and no doubt many more times), plus we want to try an upgrade before we have analyzed any customizations.
A database configured for either a test or production environment is probably not what we want here, where speed matters, and checkpoints, logs and streaming replication not so much.
In order to make this happen for the test we are going to install and tune the latest version of Postgres on localhost and set up our Odoo upgrade images to talk to it. For the purposes of this post, I'll also do an upgrade on a more vanilla config to enable some comparisons. For now we'll restore a single copy of the database and use that as a template.
After installing postgres 10 locally (note: you could use Docker, and eventually we will) we need to allow it to accept TCP/IP connections from our docker containers.
In pg_hba.conf we add the following line
host    all    all    172.16.0.0/12    md5
Note the above may vary depending on your local/campus/docker network setup. For me it is convenient because it covers the whole 172.16.0.0/12 private range. It is, however, not especially secure, and any externally accessible machine will want to lock this down much more tightly.
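As a quick sanity check (purely illustrative, not part of the setup), Python's ipaddress module can confirm that a typical docker bridge address falls inside the range we whitelisted:

```python
import ipaddress

# 172.16.0.0/12 spans 172.16.0.0 - 172.31.255.255: the entire RFC 1918 "172" block
allowed = ipaddress.ip_network("172.16.0.0/12")

container_ip = ipaddress.ip_address("172.17.0.2")  # a typical default docker bridge address
print(container_ip in allowed)  # True
```

If your containers sit on a custom network outside this range, adjust the pg_hba.conf line accordingly.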
In postgresql.conf we do the bare minimum of tuning to get started.
synchronous_commit = off
checkpoint_timeout = 1d
checkpoint_completion_target = 0.95
Because I'm doing the test on my laptop, the other defaults were OK for now, and we can worry about them later. While running through the conf, I noticed my cluster had been set to en_NZ.utf8. This is going to be a huge performance problem if we restore into that collation.
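To see why the collation matters, here is a hypothetical micro-benchmark (not from the original setup): locale-aware string comparison, roughly what en_NZ.utf8 forces Postgres to do on every index comparison and sort, does considerably more work per string than the plain byte-wise comparison the C collation uses. Results depend on your OS locale data.

```python
import locale
import random
import string
import timeit

words = ["".join(random.choices(string.ascii_letters, k=12)) for _ in range(10000)]

# Byte-wise ordering, analogous to the C collation
byte_time = timeit.timeit(lambda: sorted(words), number=20)

# Locale-aware ordering, analogous to en_NZ.utf8 (uses whatever locale the OS provides)
locale.setlocale(locale.LC_COLLATE, "")
locale_time = timeit.timeit(lambda: sorted(words, key=locale.strxfrm), number=20)

print(f"byte-wise: {byte_time:.3f}s  locale-aware: {locale_time:.3f}s")
```

On a machine with a UTF-8 locale, the locale-aware sort is typically noticeably slower, which is exactly the overhead we avoid by restoring into a C-collated database.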
So we create our user and database:
su - postgres
psql
CREATE USER upgrade PASSWORD 'secretpassword';
CREATE DATABASE upgrade_template LC_COLLATE 'C' OWNER upgrade IS_TEMPLATE true TEMPLATE template0;
CREATE USER just creates a Postgres ROLE with the LOGIN privilege. You may want to give it CREATEDB or even SUPERUSER privileges, but that isn't necessary here as we are restoring into an existing database as the postgres user. We create our template database, which we are going to restore into. In order to change the collation we need to base it off `template0`, rather than the default `template1`. We set the owner to our new upgrade user, and flag it as a template. Flagging it as a template isn't strictly necessary: since we assigned ownership to our upgrade user, they could use it as a template anyway.
Next we grab a backup and restore it into our newly created template. It is in a postgres custom format so we need to use pg_restore.
pg_restore -F c -j $(grep -c ^processor /proc/cpuinfo) -d upgrade_template -O upgrade.custom
CREATE DATABASE upgrade1 OWNER upgrade TEMPLATE upgrade_template;
The above command restores the backup upgrade.custom: -F c sets the file format to custom, -j matches the number of jobs to the number of CPUs, -d restores into upgrade_template, and -O skips ownership commands. Then we create our database from our newly restored template. Both steps were done as the postgres user.
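The grep over /proc/cpuinfo only works on Linux; a more portable way to pick the -j value (an illustrative alternative, not from the original post) is Python's os.cpu_count():

```python
import os

# cpu_count() can return None on exotic platforms, so fall back to 1
jobs = os.cpu_count() or 1

# The pg_restore invocation we would run with that parallelism
print(f"pg_restore -F c -j {jobs} -d upgrade_template -O upgrade.custom")
```

On Linux this gives the same number as the grep; matching -j to the CPU count keeps all cores busy rebuilding indexes during the restore.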
Now our database is ready, we need to create our upgrade scaffolding.
Using Doodba we download the scaffolding branch and make a few amendments. At this point, you will need to have git, docker and docker-compose installed.
# XXX Base version must match $ODOO_VERSION in .env file
MAINTAINER Graeme Gellatly <firstname.lastname@example.org>
odoo/custom/src/repos.yaml - making sure we use the openupgrade remote. The OCA repositories are optional, and just include the few modules I manually checked yesterday which have nothing to do.
# Defaults for all builds
# Odoo is always required
# Shallow repositories ($DEPTH_DEFAULT=1) are faster & thinner
# You may need a bigger depth when merging PRs (use $DEPTH_MERGE
# for a sane value of 100 commits)
- openupgrade $ODOO_VERSION
# Example of a merge of the PR with the number <PR>
# - oca refs/pull/<PR>/head
# Example of an OCA repository
- oca $ODOO_VERSION
# End Defaults
# Start Public
- oca $ODOO_VERSION
- oca $ODOO_VERSION
- oca $ODOO_VERSION
We create a new docker-compose file to build our upgrade image, based on the development compose file but without demo data, with all other services removed, and pointing at our docker host's IP for the DB.
# To aggregate, use `setup-devel.yaml`
# No need for this in development
# XXX Odoo v8 has no `--dev` mode; Odoo v9 has no parameters
Finally we setup our .env file. Not much to do here except find and change the following lines. We aren't actually using the database here, but may as well set the version.
If it isn't already in your .bashrc, either add the following and source the file, or run it in the terminal you plan to use for the upgrade, as per the Doodba documentation.
export UID GID="$(id -g $USER)" UMASK="$(umask)"
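The export just captures your numeric user id, group id and umask so files created inside the containers end up owned by you. Illustratively, these are the same values Python reports:

```python
import os

uid = os.getuid()
gid = os.getgid()
old = os.umask(0)  # reading the umask requires setting it...
os.umask(old)      # ...so we immediately restore the original value
print(f"UID={uid} GID={gid} UMASK={old:03o}")
```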
rm docker-compose.yaml && ln -s upgrade.yaml docker-compose.yaml
mkdir -p odoo/auto/addons # Only make the directory if it doesn't already exist.
chown -R $USER:1000 odoo/auto
chmod -R ug+rwX odoo/auto
docker-compose build --pull
docker-compose -f setup-devel.yaml run --rm odoo
If the build errors due to a dependency mismatch on kaptan - add the following to the top of odoo/custom/dependencies/pip.txt.
And start our first attempt with the following - after committing and pushing any scaffolding you want, and closing down resource eaters.
docker-compose run --rm odoo -u all --workers=0 --stop-after-init
Knowing the data and watching the early logs, I can already see lots of things that will need investigating. A view of top shows about a 90/10 split between postgres and python; ordinarily, in a production environment and in my past experience, I usually see 75/25. I'm hopeful this means I've got the Postgres optimization right and things will run quickly.
End of day 2.