rclone – Sync to B2

Ever since B2 was released, I have been looking for a decent command-line tool to synchronise data to Backblaze’s servers. I have tried various tools, but none was easy or intuitive. That is, until I found rclone.

This article will look at how to install rclone, set up a new profile, and synchronise data to B2.

First, register with Backblaze’s B2 service, then generate an application ID and access key. These credentials must be kept private because they grant access to all of your buckets, allowing data stored with B2 to be created, edited, updated and deleted.

At the time of writing, B2 gives you 10 GB of free storage, then you’ll pay $0.005/GB/month. Sending files to B2 is free, and listing bucket contents costs $0.004 per 1,000 requests. Downloading comes under Class B transactions, which cost $0.004 per 10,000 requests, then $0.02/GB downloaded (more).

With cost out of the way, I wanted to create buckets. You can do this either with the web client or with the tool (rclone) covered in this article. Think of buckets as the root directory where you store all your files and directories. You can create buckets for different tasks, such as documents, pictures, mail, videos, etc. The advantage of this is that you can customise lifecycle rules depending on what is in the bucket.

I wanted to save encrypted documents and emails to B2. Note: my data is already encrypted at rest, so I did not use rclone’s built-in encryption, which is out of the scope of this article. But the option is there if that’s your requirement.

Rclone

Visit rclone’s website and download the Linux version that best suits your computer. I used the AMD64 (64-bit) build. You will have to install the binary manually, but do not worry: this is easy.

Extract the downloaded archive.

Open a terminal and type:

sudo cp /dir/to/download/rclone /usr/bin/rclone

(this copies the binary to /usr/bin)

Change ownership and permissions

sudo chown root:root /usr/bin/rclone
sudo chmod 755 /usr/bin/rclone
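
To confirm the binary is installed and on your PATH, you can run rclone’s version command:

rclone version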

Configure

With permissions set, we can now use the tool. Rclone needs to be configured with your access keys before it can reach your buckets.

rclone config

Name the configuration. Make it short and easy to remember because you will use this name whenever you send data to the bucket. I will use: b2.

I used B2 for the purpose of this article, so I selected 3.

Enter your credentials.

In this screenshot, the name is test32, but for the purpose of this article I am assuming it is called b2. The type denotes which service you’re using, e.g. b2, s3, etc.
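
For reference, rclone stores the remote in its configuration file (~/.rclone.conf on older versions, ~/.config/rclone/rclone.conf on newer ones). The saved entry looks roughly like the sketch below – the account and key values are placeholders for your own credentials:

[b2]
type = b2
account = <your application ID>
key = <your application key>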

That’s done. So let’s create a bucket.

rclone mkdir b2:test-documents

(bucket names must be unique across the whole of B2)

Check it has worked by listing all your buckets:

rclone lsd b2:

Synchronisation

You can synchronise data to the buckets easily. The sync function is one-way, i.e. from local to bucket; note that sync makes the destination match the source, so anything deleted locally will also be removed from the bucket. Find the full path to the local directory, which in my test case is /path/to/data/local, then decide where you want to store that data within B2. Use the remote name we set earlier when creating the access (I used b2), followed by the bucket name and then the directory structure, e.g. b2:bucket/dir/dir

rclone sync -v /path/to/data/local b2:test-documents/

The above command will upload to B2.
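
If you want to preview what would be transferred (or deleted) before committing, rclone supports a dry-run flag:

rclone sync --dry-run -v /path/to/data/local b2:test-documents/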

Test your files have arrived by listing the bucket contents:

rclone ls b2:test-documents
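
rclone can also verify that the local directory and the bucket match, comparing file sizes and checksums:

rclone check /path/to/data/local b2:test-documents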

Currently rclone supports the following storage services:

1 / Amazon Drive
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
3 / Backblaze B2
4 / Dropbox
5 / Encrypt/Decrypt a remote
6 / Google Cloud Storage (this is not Google Drive)
7 / Google Drive
8 / Hubic
9 / Local Disk
10 / Microsoft OneDrive
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
12 / SSH/SFTP Connection
13 / Yandex Disk

Rclone is an easy-to-use command-line tool allowing safe storage of data on Backblaze’s servers; the synchronisation function makes back-up easy. With rclone, you can set up scheduled jobs to ensure regular data back-ups occur. To keep data safe at rest, rclone has an encryption function. And rclone is not limited to B2: it is a versatile tool that can access many cloud providers.
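
As a sketch of such a scheduled job – re-using the example path and bucket name from this article – a simple daily back-up can be dropped into cron. Note that cron.daily scripts run as root, so the b2 remote must be configured for the root user (or pass rclone’s --config flag pointing at your own configuration file):

sudo nano /etc/cron.daily/rclone-b2

#!/bin/bash
# Daily one-way back-up of local documents to the B2 bucket
rclone sync -v /path/to/data/local b2:test-documents/

sudo chmod +x /etc/cron.daily/rclone-b2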

Scheduled BTRFS snapshots

I have looked before at snapshots in BTRFS. However, I have not yet discussed scheduled snapshots. My NAS with BTRFS is usually off, so I prefer to create snapshots manually whenever I use it; however, I have recently set up a new computer with a BTRFS volume inside. Because I regularly use this computer, I would like scheduled snapshots. If you lay out your BTRFS subvolumes well, some subvolumes may require hourly snapshots, and others daily or weekly.

I created a bash script to create the snapshot:

sudo nano /etc/cron.daily/btrfs-daily

Then paste the following command:

#!/bin/bash
# Daily script to create a read-only BTRFS snapshot of the home directory
btrfs subvolume snapshot -r /pool/home /pool/home/.snap/$(date +'%Y-%m-%d_%H.%M')

Save the above (Ctrl + O then Ctrl + X).
To enable the script to run, change its permissions:

sudo chmod +x /etc/cron.daily/btrfs-daily

Placing the bash script in the cron.daily directory ensures it runs once a day; if a run is missed because the machine was off, it is run at the next system start-up (anacron handles this catch-up on Ubuntu). So, to run the script daily, your computer does not have to be on 24/7. This is the quickest and easiest way to create a daily task under Ubuntu that will still run after the next start-up if a day is missed.
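
If a subvolume warrants more frequent snapshots, the same pattern works in the other cron directories. As a purely illustrative example, an hourly version of the script could look like this:

sudo nano /etc/cron.hourly/btrfs-hourly

#!/bin/bash
# Hourly read-only snapshot of the home subvolume
btrfs subvolume snapshot -r /pool/home /pool/home/.snap/hourly_$(date +'%Y-%m-%d_%H.%M')

sudo chmod +x /etc/cron.hourly/btrfs-hourly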

These snapshots are read-only, so they are not vulnerable to ransomware or the like, and I can easily retrieve files. Using scheduled snapshots ensures they are created regularly.
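
Retrieving a file is then just a copy out of the relevant snapshot – the snapshot name and file path below are only illustrative:

cp /pool/home/.snap/2017-08-01_02.00/documents/report.odt /pool/home/documents/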

A note on back-ups: these snapshots are stored on the same drive, so this is not strictly a back-up. Because the data is near-line, it can be easily restored. My data is also stored elsewhere.

SHRED: Securely erase files in command line Linux

During the course of your daily computing it may be necessary to erase files to prevent fraud or snooping; important files could be password vaults, bank statements, credit reports, etc. Computer drives contain a lot of sensitive information. If you are selling your computer or drives, you will need to boot with a live Linux CD and erase the entire disk, which is not covered here. This post will look at securely erasing individual files and directories.

Let’s say, for example, I have a file called June2017-credit_report.pdf. I have read this file, perhaps stored it elsewhere, and want to securely erase it. Using the rm command will merely mark the file as deleted, making it invisible to the operating system, yet the data remains on the drive. This is no good for security.

For shredding files, I am using shred, a command line secure delete program.

shred -fuv --iterations=60 /home/me/documents/statements/June2017-credit_report.pdf

Iterations refers to the number of overwrite passes the shredding tool performs. More passes equal more security. The default is 3 iterations, which is acceptable in a home environment.

Note: the shredding tool cannot help if the file system is keeping snapshots (like BTRFS, ZFS).

Arguments available

-f, --force             change permissions to allow writing if necessary
-n, --iterations=N      overwrite N times instead of the default (3)
    --random-source=FILE  get random bytes from FILE
-s, --size=N            shred this many bytes (suffixes like K, M, G accepted)
-u, --remove            truncate and remove file after overwriting
-v, --verbose           show progress
-x, --exact             do not round file sizes up to the next full block;
                        this is the default for non-regular files
-z, --zero              add a final overwrite with zeros to hide shredding
    --help              display this help and exit
    --version           output version information and exit

Shredding the contents of a directory

Shredding an entire directory in Linux is not as straightforward. The previous command easily deletes a single file, but shred has no recursive option, so pointing it at a directory fails.

There is, however, a way to shred an entire directory by combining shred with find. The following command shreds every file under a directory.

find /dir/dir -type f -exec shred -vzu --iterations=25 {} \;
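
Because shred -u only removes the files themselves, the (now empty) directory tree is left behind. Assuming the same example path, it can be cleaned up afterwards with:

find /dir/dir -type d -empty -delete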

Be careful to check all commands and ensure you are erasing the correct file or directory. Always keep a good back-up, because mistakes in the command line happen.

What shredding does not do

Be careful of operating systems that create snapshots or versioning, or that store data elsewhere, such as synchronisation to cloud storage or local storage. Copies of the data could quite easily survive somewhere else.


This post is part of the Linux really useful commands series. Check out other posts.

New system: the operating system (2/3)

My previous post looked at the hardware of my new desktop PC. Here I will be installing the operating system on a new solid-state drive (SSD).

SSD Choice

I found a 60 GB SSD on Amazon. It’s only £30 and has decent performance benchmarks. The use of an SSD will improve speed and performance to get the most out of my older hardware. But why only 60 GB? The operating system installation uses only around 5 GB of storage, and I plan to use a hard-drive storage pool, built from old hard drives, to store my home directory (documents, pictures, music, mail, web cache, etc.). Instead of paying hundreds of pounds for a single SSD to store all my files, I have re-purposed old hard drives to store the bulk of my data, which aids the lifespan of the SSD by reducing large reads/writes. Leaving the majority of space available on the SSD allows wear levelling to prevent constant writes to the same blocks (files are spread evenly across the SSD, utilising all blocks).

The Drevo X1 Series 60 GB SSD available at Amazon is my choice for my operating system boot drive. Reviewers uploaded CrystalDiskMark data showing reads of 550 MB/s and writes of 390 MB/s. That is decent read/write performance for only £30, but time will tell on its longevity. However, should the SSD fail, I will only lose my operating system, which can easily be reinstalled onto another SSD.

When choosing SSDs, it is important to understand the type of flash the drive uses. My SSD uses multi-level cell (MLC) flash, as opposed to single-level cell (SLC) flash memory; MLC flash is common in consumer devices, such as cameras, phones and USB flash drives, whereas SLC is commonly found in higher-cost flash and in the enterprise. The main weaknesses of MLC are its lower lifespan and slower writes compared to SLC. For my purpose, this type of SSD will work well. If, say, the SSD fails within the year, which I do not expect, I will replace it with a higher-priced drive, such as a Samsung Evo.

To improve the lifespan of my SSD I need to complete a few tasks.

I created a swap partition despite recommendations not to, because the extra reads/writes when RAM becomes scarce can reduce the lifespan of the SSD (having no swap would also prevent Linux from hibernating); the main reason I did so was that some applications may use the swap space. However, I have never needed a swap partition on my current system, so I expect I will never use the swap space, but it is there just in case: when the Linux kernel runs out of RAM it will start closing processes. Currently, my desktop PC has 4 GB of RAM, which is soon eaten up by YouTube videos, mail clients and two web browsers. I can upgrade to 16 GB, so slow, incremental upgrades will be cost-effective. My next purchase will be a further 4 GB of RAM to total 8 GB. Increasing the amount of RAM will prevent the kernel from using the swap space, reducing SSD writes (more).

To further reduce writes to the SSD, I disabled the recording of the access date and time of files that are read. Every time a file is read by Linux, the file’s access time is updated, causing a write. Writes should be minimised to extend the life of an SSD, so this simple trick goes a long way. (more)

sudo nano /etc/fstab

Then change “errors=remount-ro” to “noatime,errors=remount-ro”. Full details here.
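
As an illustration – the UUID below is a placeholder for whatever your root entry actually contains – the root line in /etc/fstab goes from something like this:

UUID=1234-abcd / ext4 errors=remount-ro 0 1

to this:

UUID=1234-abcd / ext4 noatime,errors=remount-ro 0 1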

TRIM is important to prevent writes to SSDs slowing over time. When a file is stored on a new SSD, it is saved onto a fresh block. Should you delete that file, the block is marked as free to use but the data therein is not erased (it is merely invisible to the operating system). When a new file is stored, the SSD controller must first erase the old data and then write the new, which slows writes over time once the SSD becomes full of previously used blocks (ref). In Ubuntu (on which Lubuntu is based), TRIM has run automatically every week since 14.04, but should you wish, there is a manual command:

sudo fstrim /
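
To check whether a drive advertises TRIM support at all – /dev/sda below is an assumption about the device name – hdparm can report it:

sudo hdparm -I /dev/sda | grep -i trim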

Although most SSDs support TRIM, double-check before purchasing that your model supports it, because it is critically important to preserve the write performance of the SSD. I checked, and TRIM is enabled on a weekly cron job for Lubuntu 16.04:

# cat /etc/cron.weekly/fstrim
#!/bin/sh
# trim all mounted file systems which support it
/sbin/fstrim --all || true

My computer boots past the BIOS screen to the login screen in 6 seconds. The SMART data reports are good, and so far the system is running smoothly. I will keep you posted on the performance.

Operating System

The operating system I have installed is Lubuntu 16.04 LTS. I prefer Linux over Windows because I enjoy the system-level access to modify, the power of the command line, control over the packages installed, and the security I get. The computer came with Windows 7 Professional, which was removed because I will not be using the hard drive as the boot disk – that hard drive will form part of a storage pool later. Lubuntu is based on Ubuntu, which in turn is based on Debian – a Linux distribution. The reason I chose Lubuntu was its lightweight desktop GUI and low system resource usage, leaving more power for my applications and tasks. I have been using Lubuntu on my Intel i3 laptop, so I am experienced with the operating system.

Installation

Before installing the operating system, I used the live CD to check the memory using MemTest. After leaving it running for a while, no errors were found, so I quit (the test does not quit automatically – it continues until stopped).

A quick word on partitioning. Linux allows you to freely set up your own partitioning scheme. Partitions are separate parts of one physical drive presented to the system as separate drives.

Had I not set up a separate storage pool (see next post), I would have put the home folders onto a separate partition so that the operating system could be easily reinstalled. In this case, there will only be two partitions.

I decided to create an ext4 volume on my SSD followed by a small 1 GB swap, then I left 9 GB as free space so as not to completely fill the SSD. The BTRFS storage volume had already been set up earlier using the live CD, before I got the SSD, to speed up the installation.

And that’s it. I continued through the easy steps and all was installed. I spent the afternoon installing my favourite applications and customising the desktop to my liking. Early use is promising. The system is fast and responsive. After installing my tools the SSD is only 14% full (of 38 GB), which will slowly expand with patches and new applications, because my home directory (documents, pictures, music, cache, etc.) is stored on the hard disk drives.

My final post will look at my BTRFS storage pool I set-up for my home directory and other data.

Click here for my Flickr gallery.

Check out all posts in the series: new system.