
Name Change

I recently got married, and the aim for my wife and me has always been to double-barrel our surnames.

Not just because we're super into equality and equity, but also because of the great joke that is being Mr Buck-Rogers.

Live scenes from our wedding.

Thankfully, I already own buck-rogers.co.uk, so perhaps I'll move the personal posts there.

Setting up DNS Challenge for Traefik

Last night I set up Traefik Forward Auth in my homelab to restrict access to my services using an Identity Provider (such as Google, or a locally run Keycloak instance). I'd been having issues getting wildcard certificates working when I realised that I was still using the httpChallenge method of obtaining certificates - this does not allow for wildcard certificates, so we need to swap it out for a dnsChallenge. Doing this is as simple as pie - first, modify your Traefik config (mine's in YAML, yours could be in TOML):

certificatesResolvers:
  myhttpchallenge:
    acme:
<<<   httpChallenge:
<<<     # used during the challenge
<<<     entryPoint: http
>>>   dnsChallenge:
>>>     provider: gcloud
>>>     resolvers: ['8.8.8.8', '8.8.4.4']
      email: <example@example.com>
      storage: acme.json
      # Use Let's Encrypt Staging CA when testing!
      #caServer: https://acme-staging-v02.api.letsencrypt.org/directory

We then need to provide a method for Traefik to configure the DNS records for the challenge - as we're using Google, we need to provide a service account, but your provider may be different - check the Traefik docs for more details. Getting the service account set up with the correct permissions is left as an exercise for the reader. Once we have the credentials available, we need to provide them to the Traefik container, modifying the docker-compose file we use to provision it:

volumes:
  - ./gcloud.json:/gcloud.json
environment:
  - "GCE_SERVICE_ACCOUNT_FILE=/gcloud.json"

Once saved, re-provision using docker-compose up -d, and you shouldn't see any errors in your Traefik logs.
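The last piece is telling a router to actually request a wildcard certificate. As a rough sketch - assuming a container called whoami and the domain example.co.uk, both of which are just placeholders here - the docker-compose labels for a service might look like this:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.whoami.rule=Host(`whoami.example.co.uk`)"
  - "traefik.http.routers.whoami.tls=true"
  - "traefik.http.routers.whoami.tls.certresolver=myhttpchallenge"
  # ask for a single certificate covering the apex domain and all subdomains
  - "traefik.http.routers.whoami.tls.domains[0].main=example.co.uk"
  - "traefik.http.routers.whoami.tls.domains[0].sans=*.example.co.uk"

On the next docker-compose up -d, Traefik should complete the DNS challenge through the gcloud provider and store the resulting wildcard certificate in acme.json.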

How to build LiME for Memory analysis on Remote Systems

LiME, the Linux Memory Extractor (github), is a Loadable Kernel Module which allows an investigator to collect volatile memory from Linux-based systems. This is important for investigations, as volatile memory contains a large amount of evidence which would otherwise be lost or modified.

Though LiME is very useful as a tool, it requires compilation before it can be used on a system. If this is performed on the target system, further artifacts may be introduced - the system may not have the required kernel headers or software, and the act of compilation may overwrite or modify areas of memory which currently contain evidence to be collected. To reduce the number of artifacts generated during the process of memory collection, we can compile the kernel module remotely against the same kernel as is running on the remote system.

This is known as compilation of an external or out-of-tree module. It is a more forensically sound method of compilation - the kernel object is not compiled on the system being analysed, and administrators do not need to include development tools on production systems. To build against the remote system, we will of course have to acquire the specific kernel version in use on the target; we can get this either by downloading the packages from the distribution repositories, or by cloning the git repository for the distribution.

In order to test your build, you may find it useful to have a test system which mirrors your target build. This would allow you to test the RAM capture without affecting the target system; however, this could take some time to set up and isn't viable in time-sensitive situations.

Warning: This guide will focus on building for Ubuntu. These instructions should work on other distributions, but your mileage may vary.

Building External Modules

If we do not know the kernel release in use, we can check using uname -r. This gives us the version, patch level, sublevel and localversion in use. Alternatively, we can use the command cat /etc/os-release, which will give us further information on the system. We can then either get a copy of the kernel directly from the package maintainer, or by cloning the source. This guide will cover both methods.
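For example, on the Cosmic Cuttlefish target used later in this guide, the checks would look something like this (the values shown in the comments are illustrative):

# run on the target system
uname -r                            # prints e.g. 4.18.0-9-generic
grep '^VERSION=' /etc/os-release    # prints e.g. VERSION="18.10 (Cosmic Cuttlefish)"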

Kernel

Kernel source packages can be downloaded from the following locations:

Ubuntu: packages.ubuntu.com or launchpad.net

Extract them to a location using dpkg -x /path/to/downloaded/deb /path/to/tmp/folder. This temporary folder will be the folder that the module is built against.
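As a sketch for the 4.18.0-9 kernel used in the example below - the filenames and paths here are illustrative, and on Ubuntu you will usually want both the architecture-specific and the common headers packages extracted into the same tree:

# extract the generic headers package for the target kernel
dpkg -x linux-headers-4.18.0-9-generic_4.18.0-9.10_amd64.deb /tmp/target-kernel

# extract the matching common headers package into the same tree
dpkg -x linux-headers-4.18.0-9_4.18.0-9.10_all.deb /tmp/target-kernel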

Kernel from git

Clone the kernel source for your distribution - for Ubuntu, the url follows the form kernel.ubuntu.com/ubuntu/ubuntu-$RELEASE$.git.

Using this, to build for Ubuntu Cosmic Cuttlefish, we run:

git clone git://kernel.ubuntu.com/ubuntu/ubuntu-cosmic.git

We then want to check out the correct kernel release for our target. We can do this by changing directory to the downloaded kernel source and running git tag -l to list the available releases, before checking out the specific version with git checkout $TAG$.

An example of this would be:

git checkout Ubuntu-4.18.0-9.10

Download LiME

Change directory to an empty folder and run git clone https://github.com/504ensicsLabs/LiME.git to clone the repository, before changing directory to the src folder.
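In full, that is:

git clone https://github.com/504ensicsLabs/LiME.git
cd LiME/src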

We can now run make to build against the currently running kernel, or modify the Makefile to build against another kernel. If we are using the linux-headers packages, we can modify the Makefile as below. If we are using the git release, we must do a little extra legwork first.

Building from git release

If we're using the git release, we must first prepare the release for building external modules.

Copy the config from the target's /boot/config-$KVER$-generic to $git release folder$/.config and run make olddefconfig.

You may need to install some extra software, such as bison and flex.

Finally, run make modules_prepare to prepare the kernel source tree for building external modules.
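Putting those steps together - the config filename is illustrative, and is assumed to have been copied across from the target system:

# run from the root of the checked-out kernel source
cp /path/to/config-4.18.0-9-generic .config

# fill in any config options added since that config was generated, using defaults
make olddefconfig

# prepare the source tree for building external modules
make modules_prepare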

Makefile

We can either run make -C "$Path of Kernel Headers$/build" KVER="$Kernel Version to be built against$" M="$(pwd)" modules directly, or use a Makefile to build the kernel object. An example of this modified Makefile is as follows:

obj-m := lime.o
lime-objs := tcp.o disk.o main.o

PWD := $(shell pwd)

.PHONY: modules modules_install clean distclean debug

# build the module against the target kernel tree, then strip and rename it
default:
    $(MAKE) -C "$Path of Kernel Headers$/build" KVER="$Kernel Version to be built against$" M="$(PWD)" modules
    strip --strip-unneeded lime.ko
    mv lime.ko lime-custom.ko

clean:
    rm -f *.o *.mod.c Module.symvers Module.markers modules.order .*.o.cmd .*.ko.cmd .*.o.d
    rm -rf .tmp_versions

distclean: mrproper

mrproper: clean
    rm -f *.ko

Running the command make will then compile LiME against the target kernel sources, and a loadable kernel module lime-custom.ko will be output.

Using the Kernel Object

The final step is to copy this file to the target system and load it into the kernel using sudo insmod /path/to/lime-custom.ko "path=/path/to/usb/drive/ram.lime format=lime". This will load the kernel object, begin capturing the memory of the system and output it to the file ram.lime, which can then be analysed with tools such as Volatility.
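Roughly, on the target - this assumes the capture is being written to an attached USB drive mounted at /mnt/usb, so that the target's own disks are left untouched:

# load the module; insmod returns once the dump has been written
sudo insmod /path/to/lime-custom.ko "path=/mnt/usb/ram.lime format=lime"

# unload the module once the capture is complete
sudo rmmod lime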

Memory analysis using these tools will be covered in another article in the future.

References: