
July 31, 2013 · pti

Separating business and stuff on the JVM

Java Apps and the External Environment

Table of Contents

  • 1. Overview
  • 2. Patterns for External Files
    • 2.1. Standalone Directory
    • 2.2. Resource Loading Files
    • 2.3. Separate Internal Configuration from External Configuration
    • 2.4. Logging and Monitoring
      • 2.4.1. What if the business asks for Special Monitoring
      • 2.4.2. Real-time monitoring and administration
    • 2.5. Complying with OS rules through packaging
      • 2.5.1. Main Deploy folder
      • 2.5.2. Config files
      • 2.5.3. Data files
      • 2.5.4. Log Files
      • 2.5.5. Dotfiles
    • 2.6. Apps deployed on a runtime platform
  • 3. Conclusion

1 Overview

Java applications, although isolated from most OS details by the JVM
and the standard library, need to interact with their environment to
be useful. We need to configure the app, which is often file based,
and we need to store data and log files somewhere.

The danger is that this introduces subtle dependencies on the
underlying operating system, causing friction when the app is used in
other environments like development and testing.

Conversely, certain OSes make their own assumptions about how these
things should be handled; Debian, for example, has strong opinions on
what should live where.

2 Patterns for External Files

2.1 Standalone Directory

Most Java apps seem to have chosen the standalone directory pattern
as the basis of their deployment.

$ tree -d Tools/apache-maven-3.0.4
Tools/apache-maven-3.0.4
├── bin
├── boot
├── conf
└── lib
    └── ext

$  tree -d -L 2 Tools/apache-tomcat-6.0.35
Tools/apache-tomcat-6.0.35
├── bin
├── conf
├── lib
├── logs
├── temp
├── webapps
│   ├── docs
│   ├── examples
│   ├── host-manager
│   ├── manager
│   └── ROOT
└── work

This has many advantages. All file references can now be expressed as
paths relative to the root folder, which minimizes the assumptions
that need to be made about the underlying OS.

In the case of the Apache projects the platform differences are
handled in platform-specific startup scripts, and the app itself
hardly sees any of it. These startup scripts are quite complicated,
but they have been reused and refined over many projects.

For our own projects we do not need startup scripts that complicated,
but they are the right place to put the glue between the OS and the
app. If that glue is buried in the app, it is just as complicated,
and impossible for sysadmins to modify when the deployment conditions
change.

Many frameworks support this way of working by exposing the root
directory in a configuration variable, allowing easy configuration
relative to the root.
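
As an illustration, here is a minimal sketch of such a root-relative
lookup. The app.home property name is a hypothetical choice; the
startup script would set it.

import java.nio.file.Path;
import java.nio.file.Paths;

public final class AppHome {
    // "app.home" is a hypothetical property name; the startup script
    // sets it, e.g. java -Dapp.home=/usr/share/myapp -jar myapp.jar
    private static final Path ROOT =
            Paths.get(System.getProperty("app.home", "."));

    // Resolve a path relative to the application root folder.
    public static Path resolve(String relative) {
        return ROOT.resolve(relative);
    }
}

The rest of the code then asks for AppHome.resolve("conf/app.properties")
and never hardcodes an absolute path.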

2.2 Resource Loading Files

Many Java products load resources from the classpath instead of
opening files directly.

This makes it easy to ship sensible defaults inside the jar
files. Maven also has special support for this: the resources folder
is added to the classpath before the classes and the jars, and during
testing src/test/resources is added before that. On deployment the
${appname}/conf folder is added before the jars.

By putting the right config file in the right location for each stage
(default in src/main/resources, test in src/test/resources, deploy in
${appname}/conf), the app is properly configured without needing any
smarts in the app itself.
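
A minimal sketch of the loading side, assuming a hypothetical
app.properties file name; whichever copy sits earliest on the
classpath wins.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class Config {
    // Loads app.properties from the classpath: ${appname}/conf wins on
    // deployment, src/test/resources wins under test, and the copy
    // packaged from src/main/resources provides the default.
    public static Properties load() throws IOException {
        Properties props = new Properties();
        try (InputStream in = Config.class.getClassLoader()
                .getResourceAsStream("app.properties")) {
            if (in != null) {
                props.load(in);
            }
        }
        return props;
    }
}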

2.3 Separate Internal Configuration from External Configuration

This applies especially to Spring, where it would be suicide to do
otherwise, but it is in fact generally applicable. The point is that
some configuration is intended to be changed by the sysadmins and
some is not.

Writing modular, loosely coupled software is good practice, and
gluing the pieces together using some form of configuration file is
just as good. Part of this configuration, however, is real product
design: changing it would make it a different product. This includes
how the core pieces are wired together. This part should be internal
and kept separate from the external config.

Other configuration covers details which do not alter the purpose of
the app but fill in environment-specific values: IP addresses, names,
email addresses, database connections, … These belong in the
external config files.

Note that significant parts of the app can be provided by plugging in
components. Of course these components need to be externally
configured too, so their settings also go in the external config
files.

Import the external files and the internal files in a way that allows
the external files to override the internal ones.
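
With plain java.util.Properties the layering can look like the
following sketch; internal.properties and external.properties are
hypothetical names.

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class LayeredConfig {
    public static Properties load() throws IOException {
        // Internal config: product wiring owned by the developers.
        Properties internal = new Properties();
        loadInto(internal, "/internal.properties");
        // External config: detail settings owned by the sysadmins; any
        // key defined here overrides the internal default.
        Properties merged = new Properties(internal);
        loadInto(merged, "/external.properties");
        return merged;
    }

    private static void loadInto(Properties target, String resource)
            throws IOException {
        try (InputStream in = LayeredConfig.class
                .getResourceAsStream(resource)) {
            if (in != null) {
                target.load(in);
            }
        }
    }
}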

Copy the default external config files to the ${appname}/conf folder
so the admin who has to manage them can immediately see the
defaults. Also take care to comment them, so that the person editing
them does not need to go digging for the manual.

Please keep the configuration files small. The ideal application is a
zero-configuration app which auto-detects its settings from existing
resources, not an app where every feature can be tweaked and
customized. Every configuration parameter needs to be coded,
documented, deployed, managed, reviewed, adjusted and corrected
(usually several times), so it ends up being very expensive.

External configuration is poison; use it in medicinal quantities (not
necessarily homeopathic ones: if it is needed, it is needed).

2.4 Logging and Monitoring

Since both these things are essentially non-functional requirements,
they should be pushed down to the platform and out of the app.

All logging frameworks are essentially pluggable. They collect the
log messages in a back-end independent way and send them to an actual
logging implementation, an appender, which usually writes to a file
but could just as well send an email, a JMS message, an SNMP trap, …

Of course where those messages end up depends largely on the
organization supporting the app and should be decided by them. So the
final loggers should be treated as externally configurable
components.
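
As an illustration, with a facade like SLF4J the application code
never sees the appender at all; OrderService is a hypothetical
example class.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // The code only talks to the facade; whether this message ends up
    // in a rotating file, an email or an SNMP trap is decided by the
    // externally managed logging configuration, not here.
    private static final Logger log =
            LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        log.info("order {} placed", orderId);
    }
}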

So the app should not get involved with the details of logging; just
ship a default config with some sensible settings (size-based
rotating log files, so the dev and test machines do not run out of
disk space) in an external log config file. Please add a
comprehensive set of commented log targets so the admins can easily
change the log levels in a granular way to support the app
effectively.

Similarly, the app may rely on an external monitoring system being
available that watches the error logs for critical errors. Document
these in the Operations Manual under the monitoring section.

Also make sure that the app behaves consistently with the protocols
it is using. A website which hits an error should return a 5xx
status, a REST API asked for an entity which does not exist should
return 404, and so on: whatever the norm is. This makes monitoring
with tools like Nagios a breeze, as no parsing of the page needs to
be done.
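
A minimal servlet sketch of the idea; CustomerServlet and its lookup
are hypothetical.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CustomerServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String id = req.getParameter("id");
        if (id == null || !exists(id)) {
            // Unknown entity: answer with a real 404 so a monitoring
            // tool can check the status code instead of parsing HTML.
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("text/plain");
        resp.getWriter().println("customer " + id);
    }

    private boolean exists(String id) {
        return false; // hypothetical lookup, always empty in this sketch
    }
}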

2.4.1 What if the business asks for Special Monitoring

Tough question. In principle it is now a functional business
requirement and there should be a story for it. The risk is that this
requirement silently breaks after a config change during routine
maintenance.

The best way to deal with it is to make it part of the application,
but still push it as far down in the framework/libraries as possible.

For example, if the logging framework can be leveraged, then the
internal configuration could include a predefined appender for the
business notifications, separate from the external appenders.

In practice, deal with them on a case by case basis. Maybe you can
talk the business out of it, or rely on Nagios configuration managed
by Ops? Talk to the stakeholders.

2.4.2 Real-time monitoring and administration

Exposing internal values, parameters and admin functions through a
standard management framework like JMX is another interesting pattern
that is often seen.

Implementing this is straightforward, and a plethora of tools provide
a UI for managing the exposed information, so the code can focus on
the business value instead of growing its own management
screens. Just do not forget to document it (self-documentation is
best, of course), and give instructions in the Ops manual for
controlling access to this functionality.
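
A minimal standard MBean sketch; AppStats is a hypothetical example,
and the interface and class each go in their own file.

// AppStatsMBean.java: the management interface. For a standard MBean
// it must be public and named <ClassName>MBean.
public interface AppStatsMBean {
    long getRequestCount();
}

// AppStats.java: the implementation, registered on the platform MBeanServer.
import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

public class AppStats implements AppStatsMBean {
    private volatile long requestCount;

    @Override
    public long getRequestCount() { return requestCount; }

    public void requestHandled() { requestCount++; }

    public static void main(String[] args) throws Exception {
        ManagementFactory.getPlatformMBeanServer().registerMBean(
                new AppStats(), new ObjectName("com.example:type=AppStats"));
        // The attribute is now browsable from jconsole or VisualVM
        // without a single line of UI code in the app.
        Thread.sleep(Long.MAX_VALUE); // keep the demo process alive
    }
}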

Some projects notify developers and stakeholders immediately when
exceptions or other notable events happen. Another great pattern, but
try to push it out of the app using standard features of frameworks
like the logging framework, Camel, …

Copying classes from other projects is definitely not recommended;
that is a library shouting to get out. Refactor it into a separate
module and ask to make it part of the company foundation, so it is
just there when needed. But look and ask around first whether this is
not a wheel that has already been invented.

2.5 Complying with OS rules through packaging

The assumption above, that everything is stored under one folder,
goes against the grain of the Linux standards (although it is exactly
the Mac and Windows way of working).

I’ll treat the case of Debian-based distros here, but the same is
possible for Red Hat and other distros.

In short, use symbolic links to move the folders to the locations
where Linux is happy, while keeping them visible in the local folder
for the JVM. Everyone is happy.

2.5.1 Main Deploy folder

All read-only stuff, which is the real application stuff, is expected
somewhere beneath /usr (but not /usr/local, which is reserved for
locally compiled packages, something we never do).

I recommend creating the app home folder as /usr/share/${appname} and
copying all libraries, binaries, scripts, static resources, etc. into
it.

2.5.2 Config files

Config files in Debian are expected under the /etc folder, and the
package manager will automatically flag files deployed there as
config files, so this does not need to be done separately (unless you
want to change the defaults, of course).

Just move the default config files to /etc/${appname} and create a
symbolic link

${appname}: ln -s /etc/${appname} conf

Well, I guess debhelper has better tools for this, so use whatever is
usual for the build tool you use.

2.5.3 Data files

Data should end up somewhere under /var. I recommend using a folder
under /var/lib/${appname} and creating folders there which you link
back to the main deploy folder. If you only need one data folder you
do not need to create subfolders, of course.

2.5.4 Log Files

Log files are expected beneath /var/log.

Create a folder /var/log/${appname} and link it to
${apphome}/logs. Make sure the folder is owned by the user the app
will be running as.

2.5.5 Dotfiles

Now we get into the hard cases. Normally this is only needed for
desktop apps; server apps should never use personal dotfiles.
However, this is one of those cases where you should never say never.

For desktop apps, use Java’s built-in preferences support. This will
use personal dotfiles on Unixy OSes and the registry on
Windows. Easy-peasy for greenfield apps. Problem solved.
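
A minimal sketch using java.util.prefs; the key and value are
hypothetical.

import java.util.prefs.Preferences;

public class DesktopSettings {
    public static void main(String[] args) {
        // Stored in a per-user dotfile tree on Unixy OSes and in the
        // registry on Windows; the app never touches a file directly.
        Preferences prefs =
                Preferences.userNodeForPackage(DesktopSettings.class);
        prefs.put("lastDirectory", "/home/demo"); // hypothetical setting
        System.out.println(prefs.get("lastDirectory", "<unset>"));
    }
}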

For 3rd party apps or libs, we have to play the hand we’re dealt. A
typical example is .netrc, which is used to store passwords outside
the app. Good practice, but a major headache.

For server apps, try to avoid it. Before you know it, you can no
longer do a ‘git clone …; mvn install’ to build the project. Keeping
build dependencies down is critical for long-term support and easy
onboarding.

In any case dotfiles are not a deployment issue, other than making
sure they are documented and that some samples are available for the
complicated ones.

2.6 Apps deployed on a runtime platform

Many Java apps, components, webapps, … are deployed on some kind of
runtime, be it a servlet container, an appserver, an OSGi container, …

Great. Leverage it. Push all this stuff down into the container, so
you can surf on the work done by the container packager.

For instance the JBoss server has a folder …/conf in the instance
being started, which is on the classpath. Just drop your external
config files there with cfengine or whatever you use for deploying.

Log files are also taken care of, as that is a service the container
should be offering. Just document the important categories and log
levels as usual; the rest is the concern of the container admin.

In general if you deploy on a controlled environment, expect that
your external dependencies are provided by the container. Work with
the container owner to find the sweet spot.

For testing this is no issue, as Maven will do the right thing in
unit and integration testing.

3 Conclusion

In order to focus on the value of our apps, we must separate the
business code from stuff like data files, config files and log files
as much as possible. It is often already difficult enough (read:
expensive) to fix bugs without having all that cruft sprinkled
through the codebase and the essential configs. Most of the
requirements posed by the details of connecting the app code to the
external stuff are non-functional, and should be moved as much as
possible out of the programmed code, into frameworks and runtime
containers, and into the hands of the admins.

The best way to deal with those external dependencies is to push them
away from the app code and otherwise ignore them. With the guidelines
above this can be realized to a large extent in a straightforward
way.

Both configuration code and configuration parameters are poison over
time. Use them in medicinal doses.

Posted in build, Java Stuff, linux
July 14, 2011 · pti

Building debian packages in a cleanroom

Overview and Goals

We build our solutions mostly on Ubuntu Natty and we deploy to Debian (currently lenny). One problem we face is that Debian has a slow release cycle, so its packages are dated. Before a new release is approved and deployed to our target servers many more months can pass, leaving us with technology that is up to three years old.

So we often have to ‘backport’ packages, or debianize existing ones, if we want to use the current releases.

In the past we had different build servers for the different target architectures. However, this is a heavyweight solution and scales poorly. It also makes upgrading to the next release that much harder.

So we need a system for building Debian packages that is:

  1. Fully automated
  2. Able to target multiple distributions: Debian (lenny, squeeze) and Ubuntu (natty, maverick)
  3. Usable on development machines (a) and Jenkins/Hudson CI servers (b)
  4. Easily configurable
  5. Driven by a memorizable process

The goal is to make packages for internal consumption, and the process outlined here falls short of the community standards.

Enter pbuilder

Of course we are not the first or the only ones with this issue. In fact we are laggards, and there are excellent articles on the ‘net to help us reach these goals.

e.g.

  • PBuilder User Manual
  • Pbuilder Tricks on the Debian Wiki
  • PBuilder How To over at the Ubuntu Wiki

The pbuilder program creates a cleanroom environment from a freshly installed, empty Debian or Ubuntu distro, chroots into it and starts building based on the project metadata, mostly from the debian/control file.

It does this by unpacking a preconfigured base image of the selected target, installing the build dependencies, building the package in the cleanroom, moving the artifacts to the hosting machine and cleaning everything up again. And it is actually surprisingly fast. This clearly satisfies goals 1 and 2 (and half of 3, if we assume a developer has full control over his laptop).

pbuilder is configured through command-line options, which are clear and friendly enough, but you end up with command lines several lines long which are impossible to type in a shell and a maintenance nightmare in build scripts (clearly conflicting with point 5). Also, in an ideal world we would be able to retarget a build without touching the checked-out files, e.g. with an environment variable (see goals 3 and 4).

Configuring pbuilder

On the Pbuilder Tricks page I found a big, smart shell script to use as the pbuilder configuration file ~/.pbuilderrc.

# Codenames for Debian suites according to their alias. Update these when
# needed.
UNSTABLE_CODENAME="sid"
TESTING_CODENAME="wheezy"
STABLE_CODENAME="squeeze"
OLDSTABLE_CODENAME="lenny"
STABLE_BACKPORTS_SUITE="$STABLE_CODENAME-backports"

# List of Debian suites.
DEBIAN_SUITES=($UNSTABLE_CODENAME $TESTING_CODENAME $STABLE_CODENAME $OLDSTABLE_CODENAME
    "unstable" "testing" "stable" "oldstable")

# List of Ubuntu suites. Update these when needed.
UBUNTU_SUITES=("natty" "maverick" "jaunty" "intrepid" "hardy" "gutsy")

# Mirrors to use. Update these to your preferred mirror.
DEBIAN_MIRROR="ftp.be.debian.org"
UBUNTU_MIRROR="mirrors.kernel.org"

# Optionally use the changelog of a package to determine the suite to use if
# none set.
if [ -z "${DIST}" ] && [ -r "debian/changelog" ]; then
    DIST=$(dpkg-parsechangelog | awk '/^Distribution: / {print $2}')
    # Use the unstable suite for Debian experimental packages.
    if [ "${DIST}" == "experimental" ]; then
        DIST="unstable"
    fi
fi

# Optionally set a default distribution if none is used. Note that you can set
# your own default (i.e. ${DIST:="unstable"}).
: ${DIST:="$(lsb_release --short --codename)"}

# Optionally set the architecture to the host architecture if none set. Note
# that you can set your own default (i.e. ${ARCH:="i386"}).
: ${ARCH:="$(dpkg --print-architecture)"}

NAME="$DIST"
if [ -n "${ARCH}" ]; then
    NAME="$NAME-$ARCH"
    DEBOOTSTRAPOPTS=("--arch" "$ARCH" "${DEBOOTSTRAPOPTS[@]}")
fi

BASETGZ="/var/cache/pbuilder/$NAME-base.tgz"
DISTRIBUTION="$DIST"
BUILDRESULT="/var/cache/pbuilder/$NAME/result/"
APTCACHE="/var/cache/pbuilder/$NAME/aptcache/"
BUILDPLACE="/var/cache/pbuilder/build/"

# make sure folders exist
mkdir -p $BUILDRESULT
mkdir -p $APTCACHE

echo "Target : $BUILDRESULT" >>/tmp/dist

if $(echo ${DEBIAN_SUITES[@]} | grep -q $DIST); then

    OTHERMIRROR="deb file:///var/cache/pbuilder/$NAME/result ./"
    BINDMOUNTS="/var/cache/pbuilder/$NAME/result"
    HOOKDIR="/var/cache/pbuilder/$NAME/hooks"
    EXTRAPACKAGES="apt-utils"
    # Debian configuration
    MIRRORSITE="http://$DEBIAN_MIRROR/debian/"
    COMPONENTS="main contrib non-free"
    DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--keyring=/usr/share/keyrings/debian-archive-keyring.gpg")
    if $(echo "$STABLE_CODENAME stable" | grep -q $DIST); then
        EXTRAPACKAGES="$EXTRAPACKAGES debian-backports-keyring"
        OTHERMIRROR="$OTHERMIRROR | deb http://www.backports.org/debian $STABLE_BACKPORTS_SUITE $COMPONENTS"
    fi
elif $(echo ${UBUNTU_SUITES[@]} | grep -q $DIST); then
    # Ubuntu configuration
    MIRRORSITE="http://$UBUNTU_MIRROR/ubuntu/"
    COMPONENTS="main restricted universe multiverse"
    DEBOOTSTRAPOPTS=("${DEBOOTSTRAPOPTS[@]}" "--keyring=/usr/share/keyrings/ubuntu-archive-keyring.gpg")
else
    echo "Unknown distribution: $DIST"
    exit 1
fi

I updated the distribution names to the current situation and added the directory where the built packages are collected as a repository, so subsequent builds can use those packages as dependencies. I also specified the keyrings to use for Debian and Ubuntu and made sure the expected folders are created so they can be mounted in the cleanroom.

I created this file in my own account on my development laptop and added a symbolic link to it from ~root/.pbuilderrc, so I can update it from my desktop environment and do not have to get my brain all twisted up trying to remember which configuration I am busy with in my shell, sudo, su -, …

The way the script works is that the configuration adapts itself to the content of the DIST and ARCH environment variables. So to configure lenny-amd64 as the target it is sufficient to do

~ > export DIST=lenny
~ > export ARCH=amd64

This approach also makes it perfect for Jenkins or Hudson to determine the build target, since DIST and ARCH can be specified in the build recipe (this satisfies goals 3b, 4 and 5).

Since we have to run these programs using sudo, we must make sure the environment variables are passed through by sudo. We can do this on the Defaults line of the /etc/sudoers file with the env_keep instruction.

...
Defaults  env_reset,env_keep="DIST ARCH http_proxy ftp_proxy https_proxy no_proxy"
... snip ...
# Cmnd alias specification
Cmnd_Alias PBUILDER=/usr/sbin/pbuilder, /usr/bin/pdebuild
... snip to end of file ...
# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL

pti     ALL=(ALL) NOPASSWD: PBUILDER
jenkins ALL=(ALL) NOPASSWD: PBUILDER
#includedir /etc/sudoers.d

You add the DIST and ARCH variables there. I also included the environment variables for proxying, so I can easily switch between environments on my laptop and have these changes propagate to sudo (which is also useful for plain apt-get, by the way).

I also added lines showing how to make the tools available to a user without asking for their password. This is not needed for interactive work, but very much so for the user the CI server runs as (in our case jenkins). Note that these definitions must come after the group definitions, otherwise the group rules take precedence and jenkins has to provide a password (read: the build hangs).

Creating the target base images

The heavy lifting is now done. Let’s create a base.tgz for lenny-amd64.

~ > export DIST=lenny
~ > export ARCH=amd64
~ > sudo pbuilder create

Now go and have a cup of coffee (or read some emails).

Rinse and repeat for the other target platforms.

Backporting existing packages

In theory backporting would be as simple as

~  ᐅ cd tmp
~/tmp  ᐅ apt-get source mongodb
Reading package lists... Done
Building dependency tree
Reading state information... Done
Need to get 1,316 kB of source archives.
Get:1 http://be.archive.ubuntu.com/ubuntu/ natty/universe mongodb 1:1.6.3-1ubuntu2 (dsc) [2,276 B]
Get:2 http://be.archive.ubuntu.com/ubuntu/ natty/universe mongodb 1:1.6.3-1ubuntu2 (tar) [1,285 kB]
Get:3 http://be.archive.ubuntu.com/ubuntu/ natty/universe mongodb 1:1.6.3-1ubuntu2 (diff) [29.0 kB]
Fetched 1,316 kB in 1s (679 kB/s)
gpgv: Signature made Thu 17 Mar 2011 11:49:37 PM CET using RSA key ID D5946E0F
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./mongodb_1.6.3-1ubuntu2.dsc
dpkg-source: info: extracting mongodb in mongodb-1.6.3
dpkg-source: info: unpacking mongodb_1.6.3.orig.tar.gz
dpkg-source: info: unpacking mongodb_1.6.3-1ubuntu2.debian.tar.gz
dpkg-source: info: applying debian-changes-1:1.6.3-1
dpkg-source: info: applying build-process-remove-rpath
dpkg-source: info: applying mozjs185
~/tmp  ᐅ DIST=lenny ARCH=amd64 sudo pbuilder build mongodb_1.6.3-1ubuntu2.dsc
I: using fakeroot in build.
I: Current time: Thu Jul 14 14:28:17 CEST 2011
I: pbuilder-time-stamp: 1310646497
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/lenny-amd64-base.tgz]
...

and you should get a nice set of Debian packages in /var/cache/pbuilder/lenny-amd64/result.

In practice you will often end up with errors like:

... snip ...
The following packages have unmet dependencies:
  pbuilder-satisfydepends-dummy: Depends: xulrunner-dev (>= 2.0~) but it is not installable
The following actions will resolve these dependencies:

Remove the following packages:
pbuilder-satisfydepends-dummy

Score is -9850

Writing extended state information... Done
... snip ...
I: cleaning the build env
I: removing directory /var/cache/pbuilder/build//6279 and its subdirectories

In these cases you have to walk down the dependency tree until you find the leaves, then work your way back up the branches to the trunk. Note also that, unless you target machines which serve only a very specific purpose, you might end up with packages that are uninstallable because they pull the rug from under other installed packages. However, we follow the principle of one virtual host delivering one service, so very few packages are deployed on them and nothing as complicated as a desktop environment.

Simple leaf packages often build without a hitch:

~/tmp  ᐅ DIST=lenny sudo pbuilder build libevent_1.4.13-stable-1.dsc
I: using fakeroot in build.
I: Current time: Thu Jul 14 14:44:00 CEST 2011
I: pbuilder-time-stamp: 1310647440
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/lenny-amd64-base.tgz]
I: creating local configuration
I: copying local configuration
I: mounting /proc filesystem
I: mounting /dev/pts filesystem
I: Mounting /var/cache/pbuilder/ccache
... snip ...
dpkg-genchanges: including full source code in upload
dpkg-buildpackage: full upload (original source is included)
W: no hooks of type B found -- ignoring
I: Copying back the cached apt archive contents
I: unmounting /var/cache/pbuilder/lenny-amd64/result filesystem
I: unmounting /var/cache/pbuilder/ccache filesystem
I: unmounting dev/pts filesystem
I: unmounting proc filesystem
I: cleaning the build env
I: removing directory /var/cache/pbuilder/build//15214 and its subdirectories
I: Current time: Thu Jul 14 14:49:57 CEST 2011
I: pbuilder-time-stamp: 1310647797
~/tmp  ᐅ ls -al /var/cache/pbuilder/lenny-amd64/result
total 5260
drwxr-xr-x 2 root root    4096 2011-07-13 19:47 .
drwxr-xr-x 5 root root    4096 2011-07-13 19:13 ..
-rw-r--r-- 1 pti  pti     2853 2011-07-14 14:49 libevent_1.4.13-stable-1_amd64.changes
-rw-r--r-- 1 pti  pti     9129 2011-07-14 14:49 libevent_1.4.13-stable-1.diff.gz
-rw-r--r-- 1 pti  pti      907 2011-07-14 14:49 libevent_1.4.13-stable-1.dsc
-rw-r--r-- 1 pti  pti   499603 2009-12-05 23:04 libevent_1.4.13-stable.orig.tar.gz
-rw-r--r-- 1 pti  pti    61956 2011-07-14 14:49 libevent-1.4-2_1.4.13-stable-1_amd64.deb
-rw-r--r-- 1 pti  pti    31262 2011-07-14 14:49 libevent-core-1.4-2_1.4.13-stable-1_amd64.deb
-rw-r--r-- 1 pti  pti   172950 2011-07-14 14:49 libevent-dev_1.4.13-stable-1_amd64.deb
-rw-r--r-- 1 pti  pti    51588 2011-07-14 14:49 libevent-extra-1.4-2_1.4.13-stable-1_amd64.deb
-rw-r--r-- 1 root root    9051 2011-07-14 14:48 Packages
~/tmp  ᐅ

Using pdebuild for building packages

Many of our packages are debianized and can be built using debuild.

As an example I use the Ubuntu sources of tokyocabinet here (which uses the libevent package we just built, btw):

~/tmp/tokyocabinet-1.4.37  ᐅ DIST=lenny ARCH=amd64 pdebuild
...snip...
 dpkg-genchanges  >../tokyocabinet_1.4.37-6ubuntu1_amd64.changes
dpkg-genchanges: not including original source code in upload
dpkg-buildpackage: binary and diff upload (original source NOT included)
W: no hooks of type B found -- ignoring
I: Copying back the cached apt archive contents
I: unmounting /var/cache/pbuilder/lenny-amd64/result filesystem
I: unmounting /var/cache/pbuilder/ccache filesystem
I: unmounting dev/pts filesystem
I: unmounting proc filesystem
I: cleaning the build env
I: removing directory /var/cache/pbuilder/build//4199 and its subdirectories
I: Current time: Thu Jul 14 15:05:27 CEST 2011
I: pbuilder-time-stamp: 1310648727
~/tmp/tokyocabinet-1.4.37  ᐅ ls /var/cache/pbuilder/lenny-amd64/result
...snip...
tokyocabinet_1.4.37-6ubuntu1_amd64.changes
tokyocabinet_1.4.37-6ubuntu1.debian.tar.gz
tokyocabinet_1.4.37-6ubuntu1.dsc
tokyocabinet_1.4.37.orig.tar.gz
tokyocabinet-bin_1.4.37-6ubuntu1_amd64.deb
tokyocabinet-doc_1.4.37-6ubuntu1_all.deb

Sometimes the dependencies break on the required version of debhelper. This version requirement is added by the dh_* scripts and is often overly conservative; many packages build just fine with older versions of debhelper.

Setting up automated build

To set this up on the build server we have to replicate the steps above:

  1. Create the ~/.pbuilderrc file
  2. Symbolic link to this file in ~root/.pbuilderrc
  3. Allow jenkins to use sudo for building packages
  4. Create a Jenkins job to (re)build the base images
  5. Create Jenkins jobs to build the packages


Posted in build, linux, Ubuntu
