File Server Upgrade: Samba File Sharing

Now that I have set up my file server I need to be able to save and restore files to it. Linux uses Samba, an implementation of the SMB protocol, to share files between computers on a network, including Windows and Mac machines. The easiest and quickest way to manage Samba is through the Webmin GUI. I will be using the command line for simple operations, so SSH into the server too.

ssh user@ip-address

The idea is that you create a share (a link to a directory on your server) and then set which users may access data within that directory. Users are created on the server, then converted to Samba users. Data can be written to the file server as that user and group, or as a custom user and group. Make sure users have rights to access the directory on the server.

Create User

Create a new user:

sudo adduser pingu

Now, convert to a Samba user:

sudo smbpasswd -a pingu

Note: if you use the same username and password as on the computer connecting to the share, the user should be logged in automatically.
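To confirm the Samba user exists, pdbedit (part of the Samba suite) can list the users in the Samba password database; this is just a quick check, not a required step:

sudo pdbedit -L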

Create Directory

Within your file server, create a directory to share. In this example, we will use the user pingu and the directory pingu-files.

mkdir /pool/data/pingu-files

Change the directory ownership to the desired user, e.g.

sudo chown pingu:users -R /pool/data/pingu-files

However, you may wish to make the directory accessible to everyone. For example, chmod -R 777 will allow all users to read and write while keeping the existing owner, should that owner have a Samba account. This makes backing up much easier later.
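For example, to open up the directory created above:

sudo chmod -R 777 /pool/data/pingu-files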

 

Set Up Share

Log in to Webmin. In the left-hand menu, under Servers, open Samba Windows File Sharing.

To create your first share, the directory needs to exist on your file server. In this example, I will use /pool/data/pingu-files. The directory must be accessible by the user. In my example, I will be using the nobody user; I am the only one with access to this machine, so this is not a security problem for me. In a multi-user environment, you may wish to take more time creating groups.

To create a new share, click the 'Create a new file share' button. You will then need to point the share to a directory. I have forced the user nobody and the group nogroup; this means that any file created through the share, no matter which user creates it, will be created as nobody:nogroup with permissions 777. This is wide open – it means that anyone with access to my file server can access all data. Not something I worry about.

Before we can use the share, let’s edit the permissions: click on Security and Access Control.

Here we choose who is allowed to access the share. In my case, I have a user called test.

You only need to put the username in the Valid users text box. Once added, the user will be able to list, read and write to the share.

Next, I want to edit File Permissions to ensure the mode of all data written and edited through the share is forced; under New Unix file mode, enter 777.
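For reference, the share definition Webmin ends up writing to /etc/samba/smb.conf should look roughly like this, with pingu (or whichever user you added, e.g. test) as the valid user; the exact stanza Webmin generates may differ slightly:

[pingu-files]
    path = /pool/data/pingu-files
    valid users = pingu
    writeable = yes
    browseable = yes
    force user = nobody
    force group = nogroup
    create mask = 0777
    directory mask = 0777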

And that's it. We should now have an accessible share. You can access the share using either the server's IP address or its host name.

From Windows:
\\host-name\

From Linux:
smb://host-name
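On Linux you can also mount the share from the command line using cifs-utils. This sketch assumes the share is called pingu-files and uses /mnt/pingu-files as the mount point; adjust to suit:

sudo apt install cifs-utils
sudo mkdir -p /mnt/pingu-files
sudo mount -t cifs //host-name/pingu-files /mnt/pingu-files -o username=pingu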

Conclusion

You are now able to access files on your file server with a username and password. You can use the share as hot storage, or sync files to it as a back-up. For an easy life in a single-user environment, shares can also be made accessible to guests.

Note: I tried global options (i.e. settings that applied across all shares by default) but had problems with file permissions. For example, I could write to existing directories, but could not write files to new directories I created. This was fixed by deleting the smb configuration file and then using no global options (as set up in this article).
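If you ever hand-edit or rebuild the configuration, testparm (shipped with Samba) will check /etc/samba/smb.conf for syntax errors and print the effective share definitions:

testparm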

Subdomain

At some point it may be necessary to create a subdomain. For example, login.karlhunter.co.uk, blog.karlhunter.co.uk, etc.

This reference will create a subdomain using Ubuntu Server and Apache2.

Create directory:

sudo mkdir /var/www/html/testsub

Edit sites-available with new domain:

sudo nano /etc/apache2/sites-available/000-default.conf

Add the following lines:

<VirtualHost *:80>
    ServerName testsub.karlhunter.co.uk
    DocumentRoot /var/www/html/testsub
</VirtualHost>

Restart apache2 server:

sudo service apache2 restart
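It is worth checking that the configuration parses and that the new virtual host answers. The curl test below assumes you run it on the server itself and uses the example subdomain from above:

sudo apache2ctl configtest
curl -H "Host: testsub.karlhunter.co.uk" http://localhost/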

Domain Name System

Once the server side has been set up, we need to visit our domain name host to edit the DNS. Here, we add the subdomain name to the register. Each domain name service may vary, but in general you add the subdomain name and the server's IP address as an A record:

=========================================
Name        Type    Value
www         A       52.133.143.31
testsub     A       52.133.143.31
mail        A       52.133.143.31
blog        A       52.133.143.31
=========================================

Above are a few examples you could adopt.

If this does not work, wait for the DNS change to propagate across the domain name servers on the Internet. This may take a few hours.
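You can check whether the record has propagated from your machine with dig (part of the dnsutils package); the domain below is the example one used above:

dig +short testsub.karlhunter.co.uk A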

RSYNC exclude


Rsync is a powerful command-line synchronisation tool. There are many options, but today I will be looking at my favourite: --exclude-from.

Say I have the following directories in a pool:

  • home
  • data
  • pictures
  • videos
  • junk
  • cache

I would like to back-up to my NAS but exclude a few folders, such as videos, cache and junk. I could use this command:

rsync -avH --exclude=cache --exclude=junk --exclude=videos /pool /location/to/back/up

This is time consuming and messy, and it increases my risk of errors. Also, if I want to exclude .mozilla, .cache and Dropbox from my home folder at a later date, this will add even more complexity to the one command.

The easy method is to create a file listing the exclusions. For example, I will use /pool/exclude.list.

Within the list, create the following entries (one per line):

junk/
home/.cache
home/.mozilla
home/Dropbox
videos/
cache/
.Trash*

Now, once saved, add --exclude-from to the rsync command:

rsync -avH --exclude-from=/pool/exclude.list /pool /location/to/back/up
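Before running it for real, a dry run with the -n (--dry-run) switch will list what would be copied, so you can confirm the exclusions are working without touching anything:

rsync -avHn --exclude-from=/pool/exclude.list /pool /location/to/back/up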

What are those switches?

a = archive (preserves permissions, times, ownership and symlinks, and copies recursively)
v = verbose (displays what rsync is currently doing)
H = preserves hard links (human-readable output is the lower-case h switch)

 

ZFS – Scrub

ZFS has a built-in process to check data against its checksums and determine whether it has been corrupted; if it has, ZFS will repair it using the known good copy.

zpool scrub pool

(pool being the name of the storage pool)

ZFS will then scrub your data in the background. To check on the progress of the scrub use:

zpool status -v pool

This will output:

pool: pool
state: ONLINE
scan: scrub in progress since Wed Nov 22 17:09:18 2017
496M scanned out of 74.7G at 35.4M/s, 0h35m to go
0 repaired, 0.65% done

Should you wish to stop the scrub use:

zpool scrub -s pool

Data scrubbing should be performed monthly, so I created the following monthly cron job.

sudo nano /etc/cron.monthly/zfs-task

Copy and paste:

#!/bin/sh
# Perform ZFS scrub
zpool scrub pool

(Replace pool with your ZFS pool name)

Finally, make the script executable:

sudo chmod +x /etc/cron.monthly/zfs-task
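To confirm cron will pick the script up, run-parts can list what would be executed from the monthly directory without actually running anything:

run-parts --test /etc/cron.monthly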

 

ZFS – Set-up guide

I have moved over from BTRFS to ZFS. I had not used ZFS before, so I was looking to learn. I tried to replace a hard drive on my BTRFS volume but failed: when testing a single-drive failure, BTRFS would not mount in degraded mode – a common bug which requires a kernel rebuild to fix.

Due to this problem, it was an excellent time to try a new file system. Like BTRFS, ZFS is a copy-on-write file system that pools hard drives together to increase redundancy: if one drive fails, the data remains; if data becomes corrupt, a good copy is restored.

It took me three attempts to set up successfully; this does not mean ZFS is more difficult than BTRFS, I made some easy-to-avoid errors that will be mentioned here.

Install

First, I am using Ubuntu, so to install:

sudo apt install zfsutils-linux

Create Pool & Add disks

Once installed it is time to create a pool. You can create pools on a single disk, or pools that span multiple disks. This post will create one pool across two disks in 'mirrored' mode. All commands must be run as the superuser.

zpool status

This will show that ZFS has been correctly installed. You should get a message along the lines of 'no pools available'.

List disks attached:

fdisk -l

Make a note of each disk's identifier. You're looking for something like:

fd43y6d8-5d61-4ad3-99fb-04uf28d6c324
6dyf94d8-04hi-984j-99fb-f4ef204iu324

I am creating a pool simply named 'pool' using the two disks above.

zpool create pool mirror /dev/disk/by-uuid/fd43y6d8-5d61-4ad3-99fb-04uf28d6c324 /dev/disk/by-uuid/6dyf94d8-04hi-984j-99fb-f4ef204iu324

Stop: why am I not using /dev/sda and /dev/sdb? In short, if these device names change it will mess with ZFS mounting; a headache waiting to happen. If you do not wish to use UUIDs, use by-id (/dev/disk/by-id/xxx).
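Either way, you can list the stable device names with ls; the by-id names are often the easiest to match to a physical drive because they include the model and serial number:

ls -l /dev/disk/by-uuid/
ls -l /dev/disk/by-id/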

Check this has worked:

zpool list

Warning: I started with one hard drive, added data, then added a second disk. Do not use the 'add' command; instead use 'attach', and ZFS will then resilver (copy and balance) the data across the disks. When I used add instead of attach, the disk became stuck within the pool: I could not remove it or upgrade the storage pool to 'mirror' mode, and had to delete the pool and start fresh. The attach syntax is sketched below.
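As a sketch (the device paths here are placeholders, not my real disks), attach takes the existing device in the pool and the new device to mirror against it:

zpool attach pool /dev/disk/by-uuid/existing-disk /dev/disk/by-uuid/new-disk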

Create Dataset

Once the pool is created, you create filesystems within it; these are called datasets and appear as directories under the pool's mount point. For example, within my pool I want the directories home, downloads, pictures and documents.

zfs create pool/home
zfs create pool/downloads
zfs create pool/pictures
zfs create pool/documents

As you created these as root, you will need to set permissions. Think about who should be able to access them; in my case I will use global permissions.

chmod 777 /pool/home
chmod 777 /pool/downloads
chmod 777 /pool/pictures
chmod 777 /pool/documents

You can now access these directories and drop your files there. To check the datasets have been created:

zfs list

Create Snapshot

To protect your data you should use the power of snapshots to recover lost files. You can snapshot either an individual dataset or the entire pool.

zfs snapshot pool/home@snap-name
zfs snapshot pool@snap-name

To check the snapshots have worked:

zfs list -t snapshot
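To revert a dataset to its most recent snapshot, roll it back; note this discards any changes made since the snapshot was taken (snapshot name as per the example above):

zfs rollback pool/home@snap-name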

 

Conclusions

That's it. We have created a storage pool mirrored across two disks. ZFS will checksum the data to ensure what you read from a dataset is correct and, if not, will take the data from the other disk. It is easy to create snapshots, roll back, and monitor storage usage. The next post will look into scrubbing and the statistical data ZFS can show.