Articles tagged with Security

  1. Rails
    15 Feb 2016
    1. After upgrading an app to Rails 5.0.0.beta2 I started playing with Action Cable.
      In this post I want to show how to do authorization in Cable Channels.
      Whether you use CanCanCan, Pundit or something else, you first have to authenticate the user; after that you can do your permission checks.

      How to do authentication is shown in the Action Cable Examples. Basically you are supposed to fetch the user_id from the cookie. The example shows how to check whether the user is signed in and, if not, reject the WebSocket connection.
      If you need more granular checks, keep reading.

      To understand the following code you should first familiarize yourself with the basics of Action Cable; the README is a good start.

      The goal here is to identify logged-in users and do permission checks per message. While one could also check permissions during initiation of the connection or during the subscription of a channel, the most granular option is to verify permissions for each message. This can be beneficial if multiple types of messages, or messages regarding different resources which require distinct permissions, are delivered through the same queue.
      Also imagine permissions changing while a channel is subscribed: you would probably want to stop sending messages immediately if a user's permission to receive them is revoked.

      In the ApplicationCable module we define methods to get the user from the session and CanCanCan's Ability, through which we can check permissions.

      module ApplicationCable
        class Connection < ActionCable::Connection::Base
          identified_by :current_user

          def connect
            self.current_user = find_verified_user
          end

          # read the Rails session data from the encrypted session cookie
          def session
            cookies.encrypted[Rails.application.config.session_options[:key]]
          end

          # CanCanCan ability for the current user, memoized per connection
          def ability
            @ability ||= Ability.new(current_user)
          end

          protected

          def find_verified_user
            User.find_by(id: session["user_id"])
          end
        end
      end

      We give the Channel access to the session and the ability object. The current user is already accessible through current_user.

      module ApplicationCable
        class Channel < ActionCable::Channel::Base
          delegate :session, :ability, to: :connection
          # don't allow clients to call these methods as channel actions
          protected :session, :ability
        end
      end

      So far we have set up everything we need to verify permissions in our own channels.
      Now we can use the ability object to deny a subscription in general or, as in this case, to filter which messages are sent.
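      Denying a subscription altogether could look like this minimal sketch; PrivateStreamChannel is a hypothetical example that uses Action Cable's reject helper together with the ability defined above:

      class PrivateStreamChannel < ApplicationCable::Channel
        def subscribed
          # refuse the subscription entirely if the user lacks permission
          reject unless ability.can? :show, Stream.find(params[:stream_id])
          stream_from "stream_updates:#{params[:stream_id]}"
        end
      end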

      Notice: Currently, using ActiveRecord from inside a stream callback depletes the connection pool. I reported this issue under #23778: ActionCable can deplete AR's connection pool. Therefore we have to ensure the connection is checked back into the pool ourselves.

      class StreamUpdatesChannel < ApplicationCable::Channel
        def subscribed
          queue = "stream_updates:#{params[:stream_id]}"
          stream_from queue, -> (message) do
            # check the AR connection back into the pool when done (see #23778)
            ActiveRecord::Base.connection_pool.with_connection do
              # deliver the message only if the user may see this stream
              if ability.can? :show, Stream.find(params[:stream_id])
                transmit ActiveSupport::JSON.decode(message), via: queue
              end
            end
          end
        end
      end
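      Messages for such a queue can then be broadcast from anywhere in the application, for example from a model callback or a background job; the payload here is made up:

      # e.g. in a callback after a Stream instance (here: stream) was updated
      ActionCable.server.broadcast "stream_updates:#{stream.id}", body: "stream was updated"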
  2. God object
    07 Feb 2016
      A year ago we had an issue using Git from TeamCity ("JSchException: Algorithm negotiation fail") because diffie-hellman-group-exchange-sha256 wasn't supported (see Git connection fails due to unsupported key exchange algorithm on the JetBrains issue tracker).

      Today we had a similar issue using the TeamCity plugin for RubyMine.
      Our TeamCity installation is served through a reverse proxy by an Apache web server. The only cipher suite common to Java and our TLS configuration is TLS_DHE_RSA_WITH_AES_128_CBC_SHA.

      Java's JCE provider has an upper limit of 1024 bits for the Diffie-Hellman key size (2048 since Java 8), so the connection fails because we require at least 4096 bits. In RubyMine you get the message "Login error: Prime size must be multiple of 64, and can only range from 512 to 2048 (inclusive)".

      To fix this on a Debian 8 "Jessie" system with OpenJDK 8 installed, follow these steps.

      Install the Bouncy Castle Provider:

      sudo aptitude install libbcprov-java

      Link the JAR in your JRE:

      sudo ln -s /usr/share/java/bcprov.jar /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/ext/bcprov.jar 

      Finally, register Bouncy Castle as a security provider in the configuration file /etc/java-8-openjdk/security/java.security:

      security.provider.2=org.bouncycastle.jce.provider.BouncyCastleProvider
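      Note that the security.provider.N entries in java.security are numbered consecutively, so inserting Bouncy Castle at position 2 means the existing providers below it have to be renumbered. The resulting list could look roughly like this; the exact entries vary between JDK builds:

      security.provider.1=sun.security.provider.Sun
      security.provider.2=org.bouncycastle.jce.provider.BouncyCastleProvider
      security.provider.3=sun.security.rsa.SunRsaSign
      security.provider.4=com.sun.net.ssl.internal.ssl.Provider
      ...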
  3. Gnupg
    17 May 2015
    1. Lately I spent a lot of time exploring the details of GnuPG and the underlying OpenPGP standard. I found that there are many outdated guides and tutorials which still find their way into the hands of newcomers. There seems to be a cloud of confusion around the topic, which leads to many misinformed users, but also to the idea that OpenPGP is hard to understand.

      This article is my attempt to fight some of this confusion and misinformation.

    2. Naming confusion

      A lot of people seem to have problems separating the different terms.

      • OpenPGP is a standard for managing cryptographic identities and related keys, mostly described by RFC 4880. It also provides a framework for issuing and verifying digital signatures and for encrypting and decrypting data using the aforementioned identities.
      • PGP, meaning Pretty Good Privacy, was the first implementation of the system now standardized as OpenPGP. It is proprietary software currently owned and developed by Symantec.
      • GnuPG, the GNU Privacy Guard, is probably the most widespread free software implementation of the OpenPGP standard. Some lazy people also call it GPG because its executable is called gpg. This confuses people even more.

      OpenPGP and its major implementations

    3. “OpenPGP is just for e-mail”

      It is true that OpenPGP was created to allow secure e-mail communication. But OpenPGP can do far more than that.

      One major field of usage for OpenPGP is the secure distribution of software releases. Almost all of the big Linux distributions and lots of other software projects rely on GnuPG to verify that the downloaded packages are indeed identical to those made by the original authors.

      OpenPGP can encrypt and digitally sign arbitrary files. Also, by using so-called ASCII-armored messages, OpenPGP can be used to send encrypted and signed messages through every system that is able to relay multi-line text messages.
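      Creating such an ASCII-armored, signed and encrypted message is a one-liner with GnuPG; the recipient address here is just an example:

      gpg --armor --sign --encrypt --recipient alice@example.org message.txt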

      In addition, OpenPGP identity certificates can be used to authenticate to SSH servers. They can also be used to verify the identities of remote servers through Monkeysphere.

      All in all, OpenPGP is a fully-fledged competitor to the X.509 certificate system used in SSL/TLS and S/MIME. Personally, I think OpenPGP actually outperforms X.509 in every regard.

    4. Certificates and keys

      Far too many things in OpenPGP are called keys by many people. In OpenPGP, an identity is formed by one or more asymmetric crypto keys. Those keys are linked together by digital signatures. Also, there is a whole lot of other useful data contained within this structure.

      Many times I have seen that calling this whole bundle of different pieces of data "a key" just makes it harder for people to understand the system. Calling it an identity certificate describes it far better and allows people to distinguish between it and the actual crypto keys within.

      OpenPGP identity certificate and related keys

    5. Fingerprints and other key identifiers

      Each key in OpenPGP (as of the current version 4 of the standard) can be securely identified by a sequence of 160 bits, called a fingerprint. This sequence is usually represented by 40 hexadecimal characters to make it easier to read and compare. OpenPGP identity certificates are identified by the fingerprints of their primary keys.

      The fingerprint is designed in a way that makes it currently infeasible to deliberately generate another certificate with the same fingerprint. Behind the scenes this is achieved by using the cryptographic hash function SHA-1.

      Versions of GnuPG prior to 2.1 did not display the full fingerprint by default. Instead they displayed a so-called key ID. Key IDs are excerpts from the end of the fingerprint sequence. The short variant is 8 hexadecimal characters long, the long variant is 16 hexadecimal characters long.

      Fingerprint:                          0123456789ABCDEF0123456789ABCDEF01234567
      Long key ID:                                                  89ABCDEF01234567
      Short key ID:                                                         01234567

      Even today, these key IDs are displayed prominently in GnuPG's output, and lots of OpenPGP-related GUI programs and websites show them. They all fail to warn the user that neither the short nor the long key ID can be used to securely identify a certificate, because both have been shown to be easily spoofable. Please don't rely on them, or even better, avoid them completely and use full fingerprints instead.
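      To see full fingerprints with GnuPG, you can for example run the following, or set the options permanently in gpg.conf:

      # show full fingerprints for all known certificates
      gpg --fingerprint

      # or permanently, in ~/.gnupg/gpg.conf:
      # keyid-format 0xlong
      # with-fingerprint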

    6. Secure exchange of identity certificates

      Probably the biggest obstacle to establishing secure communication through cryptography is making sure that both parties own a copy of each other's public asymmetric key. If a malicious third party is able to provide both communication partners with fake keys, the whole cryptography can be circumvented by a man-in-the-middle (MITM) attack.

      In OpenPGP, communication partners need to exchange copies of each other's identity certificates prior to usage. To thwart possible attackers, this needs to be done through a secure channel. Sadly, secure channels are very rare. One way could be to burn the certificates to CDs and exchange these at a personal meeting.

      The certificates could also be uploaded to a file server and downloaded by both communication partners, provided that they verify the fingerprints of the certificates afterwards. The fingerprints still need to be exchanged through a secure channel.
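      In practice, such a verification could look like this; the file name and address are examples:

      # import the certificate received through the insecure channel
      gpg --import alice.asc

      # display its fingerprint and compare it to the securely exchanged copy
      gpg --fingerprint alice@example.org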

    7. Key servers

      Key servers are specialized file servers that allow anyone to publish OpenPGP certificates. Some key server networks continuously synchronize their contents, so you only need to upload your certificates to one of the network's participants. Most key servers don't allow deleting any content that has ever been uploaded to them, so make sure not to publish things you'd later regret.
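      Publishing a certificate could, for example, look like this; the key server is just one possible choice and the fingerprint is the placeholder from above:

      gpg --keyserver hkp://pool.sks-keyservers.net --send-keys 0123456789ABCDEF0123456789ABCDEF01234567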

      Be aware that key servers usually are not certificate authorities. Everyone can upload any certificate they like, and usually no verification is performed. There is no reason to ever assume that certificates received from a generic key server are in any way authentic. Just like with any other insecure channel, you have to compare the certificates' fingerprints with a copy received through a secure channel.

      Instead, key servers are a great way to receive updated information about known certificates. For example, if an OpenPGP certificate expires, it can be renewed by its owner and the update can then be published to the key servers again. Another important scenario would be an identity certificate that has been compromised. The owner can then publish a revocation certificate to the key servers to inform other people that the certificate is no longer safe to use.

      So key servers are less an address book and more a mechanism for certificate updates. OpenPGP users are well advised to update certificates before each use or at a regular interval.
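      With GnuPG, such a regular update is a single command:

      # fetch updates (renewals, revocations, new signatures) for all known certificates
      gpg --refresh-keys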

  4. Debian
    11 Jan 2015
      Some time ago I acquired a BeagleBone Black, a hard-float ARM-based embedded mini PC quite similar to the widely popular Raspberry Pi. Mainly I did this because I was disappointed in the Raspberry Pi for its need of non-free firmware to boot up and because you had to rely on third-party-maintained Linux distributions of questionable security maintenance for it to function.

      Because some people gave me the impression that you could easily install an unmodified, official Debian operating system on it, I chose to take a look at the BeagleBone Black.
      After tinkering a bit with the device, I realized that this is not true at all. There are some third-party-maintained Debian-based distributions available, but at their peak of security awareness they offer MD5 fingerprints on a non-HTTPS website for image validation. I'd rather not put my trust in that.

      When installing an official Debian Wheezy, the screen stays black. When using Jessie (testing) or Sid (unstable), the system seems to boot up correctly, but the USB host port malfunctions and I'm unable to attach a keyboard. While I was looking for a way to get the USB port to work, some people hinted to me that it might be possible to fix this problem by changing some Linux kernel configuration parameters. Sadly, I cannot say whether this actually works or not, because it seems to work only for boards of revision C and higher. My board, from the third-party producer element14, seems to be a revision B.

      Still, I would like to share with the world how I managed to cross-compile the armmp kernel of Debian Sid with a slightly altered configuration on an x86_64 Debian Jessie system.

    2. Creating a clean Sid environment for building

      First of all, I created a fresh building environment using debootstrap:

      sudo debootstrap sid sid
      sudo chroot sid /bin/bash

      All further instructions describe what I did while inside the chroot environment.

      Then I added some decent package sources, making especially sure there is a line for source packages. You might want to exchange the URL for a repository close to your location; in my experience the Debian CDN sometimes leads to strange situations.

      cat <<FILE > /etc/apt/sources.list
      deb     http://cdn.debian.net/debian sid main
      deb-src http://cdn.debian.net/debian sid main
      FILE

      Then I added the foreign armhf architecture to this environment, so I could acquire packages for it:

      dpkg --add-architecture armhf
      apt-get update

      Next, I installed basic building tools and the building dependencies for the Linux kernel itself:

      apt-get install build-essential fakeroot gcc-arm-linux-gnueabihf libncurses5-dev 
      apt-get build-dep linux
    3. Configuring the Linux kernel package source

      Still within the chroot environment created earlier, I prepared to build the actual package.

      I reset the locale settings to avoid a dependency on actually installed locale definitions:

      export LANGUAGE=C
      export LANG=C
      export LC_ALL=C
      unset LC_PAPER LC_ADDRESS LC_MONETARY LC_NUMERIC LC_TELEPHONE LC_MESSAGES LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_CTYPE LC_TIME LC_NAME

      I then acquired the kernel source code:

      cd /tmp
      
      apt-get source linux
      
      cd linux-3.16.7-ckt2

      I configured the name prefix for the cross-compiler executable to be used:

      export CROSS_COMPILE=arm-linux-gnueabihf-

      Now, in the file debian/config/armhf/config.armmp, I changed the Linux kernel configuration. In my case I only changed the following line:

      CONFIG_TI_CPPI41=y

      You might need completely different changes here.
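      If you want to script this step, something along these lines should work; this is just a sketch, verify the resulting file afterwards:

      # append the option unless it is already set in the armmp config
      grep -q '^CONFIG_TI_CPPI41=' debian/config/armhf/config.armmp || \
          echo 'CONFIG_TI_CPPI41=y' >> debian/config/armhf/config.armmp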

    4. Building the kernel package

      Because this package for some reason expects the compiler executable to be named gcc-4.8 and I couldn't find out how to teach it otherwise, I just created a symlink to the cross-compiler:

      ln -s /usr/bin/arm-linux-gnueabihf-gcc /usr/local/bin/arm-linux-gnueabihf-gcc-4.8

      Afterwards, the build process was started by the following command:

      dpkg-buildpackage -j8 -aarmhf -B -d

      The -j flag defines the maximum number of tasks that will be run in parallel. The optimal setting for fastest compilation is usually the number of CPU cores and/or hyper-threads your system has.
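      If you don't want to hard-code that number, nproc can supply it:

      # let nproc determine the number of available processing units
      dpkg-buildpackage -j$(nproc) -aarmhf -B -d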

      The -d flag makes the build ignore missing dependencies. In my case, the process complained beforehand about python and gcc-4.8 not being installed, even though they actually were. I guess it meant python:armhf and gcc-4.8:armhf, but installing these is not even possible on my x86_64 system, even with multiarch enabled. So in the end I decided to ignore these dependencies, and the compilation went fine by the looks of it.

      The compilation process takes quite a while and outputs a lot of .deb and .udeb packages into the /tmp directory. The actual kernel package I needed was named linux-image-3.16.0-4-armmp_3.16.7-ckt2-1_armhf.deb in my case.

    5. Creating a bootable image using the new kernel

      For this, I left the Sid chroot environment again, though I guess you don't even have to.

      Now I used the vmdebootstrap tool to create an image that can then be put onto an SD card.

      First of all I had to install the tool from the experimental repositories, because the versions in Sid and Jessie were somehow broken. That might no longer be necessary in the future.

      So I added the experimental repository to the package management system:

      cat <<FILE > /etc/apt/sources.list.d/experimental.list
      deb     http://ftp.debian.org/debian experimental main contrib non-free
      deb-src http://ftp.debian.org/debian experimental main contrib non-free
      FILE

      And afterwards I did the installation:

      apt-get update
      apt-get -t experimental install vmdebootstrap

      I created a script named customise.sh that sets the bootloader up inside the image, with the following content (many thanks to Neil Williams):

      #!/bin/sh
      
      set -e
      
      rootdir=$1
      
      # copy u-boot to the boot partition
      cp $rootdir/usr/lib/u-boot/am335x_boneblack/MLO $rootdir/boot/MLO
      cp $rootdir/usr/lib/u-boot/am335x_boneblack/u-boot.img $rootdir/boot/u-boot.img
      
      # Setup uEnv.txt
      kernelVersion=$(basename `dirname $rootdir/usr/lib/*/am335x-boneblack.dtb`)
      version=$(echo $kernelVersion | sed 's/linux-image-\(.*\)/\1/')
      initRd=initrd.img-$version
      vmlinuz=vmlinuz-$version
      
      # uEnv.txt for Beaglebone
      # based on https://github.com/beagleboard/image-builder/blob/master/target/boot/beagleboard.org.txt
      cat >> $rootdir/boot/uEnv.txt <<EOF
      mmcroot=/dev/mmcblk0p2 ro
      mmcrootfstype=ext4 rootwait fixrtc
      
      console=ttyO0,115200n8
      
      kernel_file=$vmlinuz
      initrd_file=$initRd
      
      loadaddr=0x80200000
      initrd_addr=0x81000000
      fdtaddr=0x80F80000
      
      initrd_high=0xffffffff
      fdt_high=0xffffffff
      
      loadkernel=load mmc \${mmcdev}:\${mmcpart} \${loadaddr} \${kernel_file}
      loadinitrd=load mmc \${mmcdev}:\${mmcpart} \${initrd_addr} \${initrd_file}; setenv initrd_size \${filesize}
      loadfdt=load mmc \${mmcdev}:\${mmcpart} \${fdtaddr} /dtbs/\${fdtfile}
      
      loadfiles=run loadkernel; run loadinitrd; run loadfdt
      mmcargs=setenv bootargs console=tty0 console=\${console} root=\${mmcroot} rootfstype=\${mmcrootfstype}
      
      uenvcmd=run loadfiles; run mmcargs; bootz \${loadaddr} \${initrd_addr}:\${initrd_size} \${fdtaddr}
      EOF
      
      mkdir -p $rootdir/boot/dtbs
      cp $rootdir/usr/lib/linux-image-*-armmp/* $rootdir/boot/dtbs

      Afterwards the image was built using the following command:

      Note that you might want to change the Debian mirror here as well.

      sudo -H \
        vmdebootstrap \
        --owner `whoami` \
        --log build.log \
        --log-level debug \
        --size 2G \
        --image beaglebone-black.img \
        --verbose \
        --mirror http://cdn.debian.net/debian \
        --arch armhf \
        --distribution sid \
        --bootsize 128m \
        --boottype vfat \
        --no-kernel \
        --no-extlinux \
        --foreign /usr/bin/qemu-arm-static \
        --package u-boot \
        --package linux-base \
        --package initramfs-tools \
        --custom-package [INSERT PATH TO YOUR SID CHROOT]/tmp/linux-image-3.16.0-4-armmp_3.16.7-ckt2-1_armhf.deb \
        --enable-dhcp \
        --configure-apt \
        --serial-console-command '/sbin/getty -L ttyO0 115200 vt100' \
        --customize ./customise.sh

      The result is a file called beaglebone-black.img that can easily be put onto an SD card by using the dd command.
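      For example; /dev/sdX is a placeholder for your SD card device, double-check it (e.g. with lsblk) because dd overwrites the target without asking:

      # write the image to the SD card
      sudo dd if=beaglebone-black.img of=/dev/sdX bs=4M
      sync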

      After I put that image on my SD card and booted it, it didn't solve the USB problem; maybe the fix doesn't work at all, or maybe it just doesn't work on my hardware revision. At least the image booted like the regular Sid image I tried before, and now I have the knowledge to conduct further experiments.

      It was a hell of a job to find out how to do this, involving tons of guides and howtos giving contradictory instructions and being outdated to different degrees. In the end, what helped most was talking to a lot of people on IRC.

      So I hope this is helpful for someone else, and should you know of a way to actually fix the USB host problem, please send me a comment.