Streamlining radio buttons and checkboxes with CSS & Font Awesome

The appearance of radio buttons and checkboxes differs greatly between browsers. Luckily, there are a number of ways to streamline the look of these input elements across browsers. One of them involves a combination of HTML, pure CSS and Font Awesome icons.

HTML structure & icons

The required HTML structure consists of a <label> element wrapping a hidden <input type="checkbox"> tag, two icons, and an optional label text. Each icon represents one of the checkbox’s two possible states.

<label class="checkbox">
    <input type="checkbox" name="salami">
    <i class="far fa-lg fa-square"></i>
    <i class="far fa-lg fa-check-square"></i>
    Add salami by activating this checkbox
</label>

Wondering why you would use the <label> tag as a wrapper, and whether that really is legit HTML5? First of all, it is definitely legit, and secondly, this technique spares you having to provide a for="…" attribute on the <label>.

CSS rules

The CSS code hides the native input and uses the :checked pseudo-class selector to display exactly one of the two icons, depending on the checkbox’s state.

label.checkbox input {
    /* Hide the native checkbox; the icons represent its state */
    display: none;
}
label.checkbox input:checked ~ .fa-square {
    display: none;
}
label.checkbox input:not(:checked) ~ .fa-check-square {
    display: none;
}

That’s all there is to it! Your checkboxes will now look the same in every browser. As a bonus, you can also colorize them to make them fit the rest of your design:
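The icons are regular inline elements, so a plain color rule does the trick. A minimal sketch (the color value is just an assumption; pick one from your palette):

label.checkbox .fa-check-square {
    color: #27ae60;
}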

Creating a persistent Arch Linux installation on a USB stick

I’ve been using Arch Linux for the better part of a decade now. As a result, I am so used to it that I’ll choose it for nearly any task at hand. Although Arch might not be a traditional distribution for persistent live systems, there’s really no reason not to use it for this purpose.

What follows is a list of steps to install and set up a minimal Arch Linux live USB system. In the spirit of KISS, we will go with a single-partition layout:

  1. Create a single Linux-type partition on your USB device (e.g. /dev/sdc) with fdisk or the tool of your choice
  2. Create an ext4 file system on the created partition: # mkfs.ext4 /dev/sdc1
  3. Mount the resulting file system: # mount /dev/sdc1 /mnt/usbarch
  4. Use pacstrap from the arch-install-scripts package to install the base package group: # pacstrap /mnt/usbarch base
  5. Auto-generate an fstab file: # genfstab -U /mnt/usbarch >> /mnt/usbarch/etc/fstab
  6. Take a look at the generated /etc/fstab file and adapt it if necessary (see the example entry after this list)
  7. Change root into the new system: # arch-chroot /mnt/usbarch
  8. Configure the time zone: # ln -sf /usr/share/zoneinfo/Region/City /etc/localtime and # hwclock --systohc
  9. Uncomment en_US.UTF-8 UTF-8 and other required locales in /etc/locale.gen, and generate them with: # locale-gen
  10. Set the LANG variable in /etc/locale.conf, for example: LANG=en_US.UTF-8
  11. Set a default keymap in /etc/vconsole.conf, for instance: KEYMAP=de-latin1
  12. Define a hostname in /etc/hostname, for example: usbarch
  13. Set a super-secure root password: # passwd
  14. Install GRUB on your USB device: # pacman -Sy grub && grub-install --target=i386-pc /dev/sdc
  15. Finally, use the grub-mkconfig tool to auto-generate a grub.cfg file: # grub-mkconfig -o /boot/grub/grub.cfg
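
For reference, the root entry generated in step 5 might look something like this (the UUID is a placeholder; use whatever genfstab wrote for your partition):

# /etc/fstab
# /dev/sdc1
UUID=<uuid-of-your-partition>  /  ext4  rw,relatime  0 1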

The system should now be bootable and can be further adapted to your liking.

Writing an ISO disk image directly from the Internet to a device

Disk images tend to be large, yet available disk space remains a scarce resource even in times of multi-terabyte devices. For this reason, it can still be handy to retrieve a disk image from the Internet and write it to a device without having to temporarily store it on your disk. The command below retrieves some.iso with wget and pipes the downloaded data to dd's stdin. The venerable dd command then writes everything to the /dev/sdX device:

wget -q -O - http://example.com/some.iso | sudo dd of=/dev/sdX bs=4M

As always, be careful to supply the right output device file. As we all (should) know, a single mistake in using dd can cost you an erased device and sleepless nights.
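
One way to reduce that risk is to double-check the target device before writing, for instance with lsblk:

# List block devices with size and model to identify the right stick
lsblk -o NAME,SIZE,MODEL,MOUNTPOINT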

Building a Java JSON API in 4 minutes (yes, 4 minutes)

So you have written this amazing piece of software solving one of the world’s biggest problems. If only you could quickly & painlessly share the fruits of your labor with the world…

Well, actually you can, and it’s not that hard. Quite the contrary: It’s really simple, thanks to Spark, which (according to its website) is

A micro framework for creating web applications in Kotlin and Java 8 with minimal effort.

Great, because that’s exactly what we are looking for. (Remember, we are lazy.)

In order to get things started, we have to introduce Spark to our classpath. Since we haven’t yet gotten around to checking out them super-fancy alternative build tools, let’s go with good old Maven. Add the following dependencies to your pom.xml:

<dependency>
    <groupId>com.sparkjava</groupId>
    <artifactId>spark-core</artifactId>
    <version>2.7.2</version>
</dependency>
<dependency>
    <!-- We'll need this a bit further down the road -->
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.2</version>
</dependency>

In order to determine whether things are going our way, let’s create a main class for running a simple webserver, e.g. MyAmazingAPI.java:

import static spark.Spark.*;

public class MyAmazingAPI {
    public static void main(String[] args) {
        get("/hello", (req, res) -> "Hello World");
    }
}

After compiling this class, run it and open your browser at http://localhost:4567/hello. Pretty cool, huh? We’re two minutes in and things are already starting to come together.
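
In case you are wondering how exactly to compile and run it: one possible way (an assumption on my part; any method of launching a main class works) is the exec-maven-plugin:

mvn compile exec:java -Dexec.mainClass="MyAmazingAPI"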

Now let’s respond with some JSON. This is achieved by implementing Spark’s ResponseTransformer interface. Our JsonTransformer takes a POJO and converts it to a JSON string with the help of Gson. Create a new .java class file with the following contents:

import com.google.gson.Gson;
import spark.ResponseTransformer;

public class JsonTransformer implements ResponseTransformer {

    private Gson gson = new Gson();

    @Override
    public String render(Object model) {
        return gson.toJson(model);
    }
}

In order to actually make the transformation happen, you have to adapt your route to reference one of your POJOs and the JsonTransformer:

    get("/hello", "application/json", (request, response) -> {
        return new MyModel();
    }, new JsonTransformer());
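
MyModel stands in for any POJO of yours; the class name is merely an assumption for this example. A minimal sketch that Gson can serialize:

public class MyModel {

    // Gson serializes instance fields, even private ones
    private String greeting = "Hello World";
}

With that in place, the route above responds with {"greeting": "Hello World"}.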

That’s it! You could now mvn package your code, deploy it to your server and start the API by running the JAR’s main class.

Adding type information to exported Scrapy items

By default, Scrapy won’t include any type information when using feed exports to serialize scraped items. As a consequence, when exporting multiple types of items at once, we can’t later easily discern between the different concepts represented by the items. Consider the following items.py module:

import scrapy


class AnimalItem(scrapy.Item):
    name = scrapy.Field()


class CatItem(AnimalItem):
    pass


class DogItem(AnimalItem):
    pass

In the above example, the application apparently needs to discern between cats and dogs. Otherwise, sub-classing AnimalItem wouldn’t make a lot of sense, since neither CatItem nor DogItem explicitly adds anything to its base class. When exporting these items to, say, a .jsonl feed, you’d get something along these lines:

# cats'n'dogs.jsonl

{"name": "Garfield"}
{"name": "Lassie"}
{"name": "Flipper"}

Besides the apparent problem that somehow we managed to scrape not only cats and dogs but at least one dolphin as well, we have lost the ability to easily make a distinction between different kinds or types of animals.

There are multiple places in Scrapy’s architecture where you could tackle this problem. For example, you could write a custom item pipeline that checks the type of each processed item and adds a corresponding _type field to it. Another solution (which I happen to find more elegant) is to populate such a field inside the AnimalItem class, automatically adding the _type field to each of its subclasses:

class AnimalItem(scrapy.Item):
    name = scrapy.Field()
    _type = scrapy.Field()

    def __init__(self, *args, **kwargs):
        # 'CatItem' → 'cat'
        kwargs['_type'] = self.__class__.__name__.replace('Item', '').lower()
        super().__init__(*args, **kwargs)
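
A quick sanity check in an interactive session (a sketch, assuming the module above is importable as items):

>>> from items import CatItem
>>> CatItem(name='Garfield')
{'_type': 'cat', 'name': 'Garfield'}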

There you go. From now on, whenever we need to export our animals, it will be easy to figure out what kind of animal we are dealing with:

# cats'n'dogs'n'dolphins.jsonl

{"name": "Garfield", "_type": "cat"}
{"name": "Lassie", "_type": "dog"}
{"name": "Flipper", "_type": "dolphin"}

Running background tasks in Django

For modern web applications, running asynchronous tasks in the background is often a must. Whether you need to parallelize something not-so-time-critical (say, thumbnail generation) or access that miraculous-but-really-slow machine learning API, there is a plethora of use cases that require the developer to isolate time-consuming operations from Django’s default synchronous request-response cycle. This spares the user from having to wait for a response while staring at an unresponsive browser window.

Of course, you could go ahead and develop your own asynchronous task queue by means of Python’s good ol’ threading module. But why reinvent the wheel? Then again, if asynchronous programming is an area you would like to learn more about, the DIY approach might be the way to go. If you’d rather use a ready-made solution, keep on reading.

Because executing code in the background is so important these days, various people have come up with various solutions to the problem. For Django alone, at least three well-supported packages are available via PyPI: Celery, Channels and Django Background Tasks.

Each of the mentioned packages is a great choice for implementing background tasks. In this post, I will focus on Django Background Tasks (DBT) as in my experience it’s the easiest to set up. This is mostly due to its simple design which is, by default, database-backed. Thanks to this property you are not required to install an external message broker such as Redis.

In order to set up DBT, you have to install the package by means of pip, add the background_task app to your INSTALLED_APPS and then migrate your database to install the required tables:

# In your shell
pip install django-background-tasks

# In settings.py
INSTALLED_APPS = (
    (…),
    'background_task',
)

# In your shell again
python manage.py makemigrations background_task
python manage.py migrate

Next, it is your turn to decorate a function with that sweet @background decorator included with DBT. The convention is to define these background functions in myapp/tasks.py but of course you could define them virtually anywhere in your codebase. Let’s stick with the convention, though, in order not to irritate your co-coders and risk exclusion from the next company event.

# In myapp/tasks.py

from background_task import background

@background
def do_xyz_in_the_background(**kwargs):
    …

Whenever you call this function, DBT will transparently create an instance of its Task model and append it to the database-backed task queue. If you need to, you can still call the decorated function synchronously:

do_xyz_in_the_background.now(some_kwarg=123)
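
Calling the function without .now() enqueues it instead. At call time, DBT also accepts a schedule argument to delay execution by a given number of seconds:

# Enqueue for background execution as soon as possible
do_xyz_in_the_background(some_kwarg=123)

# Enqueue, but wait roughly 60 seconds before running
do_xyz_in_the_background(some_kwarg=123, schedule=60)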

One more important thing: For the scheduled background tasks to actually be executed, you need to run the python manage.py process_tasks command in parallel with your Django server. process_tasks periodically checks the database for new Tasks and, if necessary, launches them asynchronously.
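
During development, that simply means keeping a second shell open:

# Polls the database for due tasks until interrupted
python manage.py process_tasks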

Hosting a Django application with Apache’s mod_wsgi

  1. Install mod_wsgi: Obviously, this step depends on your package manager (which is usually determined by your distribution). On Arch Linux, it goes something like this: sudo pacman -Sy mod_wsgi
  2. Adapt Apache’s configuration: In order to tell the Apache web server to interface with your Django WSGI app, a few directives need to be added.
    Depending on how your distribution structures Apache’s configuration, there are places where it would make more or less sense to include the necessary directives. In my case, it made sense to include the WSGI directives inside the VirtualHost that should host my Django app:
# /etc/httpd/conf/httpd.conf
LoadModule wsgi_module modules/mod_wsgi.so
# /etc/httpd/conf/extra/httpd-vhosts.conf
<VirtualHost _default_:443>
    ServerName my_app.example.com

    # Make sure that wsgi.py can be accessed by the httpd process
    DocumentRoot /usr/share/webapps/my_app/my_app/
    <Directory /usr/share/webapps/my_app/my_app/>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>

    # Reference the app's WSGI script and define a WSGI daemon process group
    WSGIScriptAlias / /usr/share/webapps/my_app/my_app/wsgi.py
    WSGIDaemonProcess my_app.example.com python-home=/usr/share/webapps/my_app/venv python-path=/usr/share/webapps/my_app
    WSGIProcessGroup my_app.example.com

    # Serve the Django app's static files
    Alias /static /usr/share/webapps/my_app/static/
    <Directory /usr/share/webapps/my_app/static/>
        Require all granted
    </Directory>
</VirtualHost>
  3. Enable the right settings module: In case your Django app’s settings.py module deviates from the default location (such as when you manage your deployment environments via multiple settings modules), you need to adapt my_app/wsgi.py:
import os

from django.core.wsgi import get_wsgi_application

# Reference a non-standard settings module (e.g. your production settings)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_app.settings.production")

application = get_wsgi_application()
  4. Allow requests to the app’s host system: Don’t forget to add the domain your Django app is served from to your ALLOWED_HOSTS list.
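In settings terms, that boils down to a one-liner (the domain is taken from the VirtualHost above, the module path from step 3):

# my_app/settings/production.py
ALLOWED_HOSTS = ['my_app.example.com']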
  5. Restart Apache: On a systemd host, you could use systemctl for that: sudo systemctl restart httpd
  6. Grab a cookie: Sometimes you have to treat yourself for your achievements.