Github!

Recently I’ve switched to Github for version control of all my programs and scripts (present and future). Git is one of the most amazing tools, and I’d recommend it to everyone who needs version control.

Anyway, I’ve decided to make a post about Git and Github, since I have moved many (probably all) of the scripts on my blog to a new home on Github: https://github.com/oatley

Some of the updated scripts include:
Pidora-smr
Smart-bk
more to come…

Posted in SBR600 | Tagged , , | Leave a comment

Sigul Sign, Mash Repositories, and Rsync to Mirrors!

Why did I make this script?
So I’m tired of silly little errors popping up at very important stages during the Pidora release. It’s very difficult to manage and to let people know exactly how to connect to the Sigul signing server, when to run the mash (after signing), and when to rsync to the mirrors (please don’t rsync broken or failed mash repos!).

This script works from a single computer, and gains access to the sigul server, mash script, and rsync hosts.

How does this script work?
Like most of my scripts, this one runs external commands over password-less ssh to perform tasks on remote machines. I use this to connect to the sigul client and the mash host, and to rsync to our release mirrors. These are all very basic commands to run, but I have brought them all together in this, possibly, overcomplicated script.

In practice, bringing sign, mash, and rsync together means the user can run a single command, and the script will automatically perform a sigul sign, a mash run, and an rsync, checking for errors at each step.
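That flow can be sketched in Python; the hostnames, tag names, and commands below are placeholders, not the real Pidora infrastructure:

```python
import subprocess

# Hypothetical hosts and commands -- placeholders, not the actual machines.
STEPS = [
    ("sign",  "sigul-client-host", "sigulsign_unsigned.py pidora-18-updates"),
    ("mash",  "mash-host",         "mash -o /mash/output pidora-18-updates"),
    ("rsync", "mash-host",         "rsync -av /mash/output/ mirror:/srv/pidora/"),
]

def run_step(name, argv):
    """Run one stage and refuse to continue the release if it fails."""
    result = subprocess.run(argv)
    if result.returncode != 0:
        raise RuntimeError("step %r failed; aborting the release" % name)

def release():
    # Each stage runs only if the previous one succeeded, so a broken
    # or failed mash repo can never be rsynced out to the mirrors.
    for name, host, command in STEPS:
        run_step(name, ["ssh", host, command])
```

The per-step return-code check is the point: a failed sign or mash stops the pipeline before anything reaches the mirrors.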

Source code: pidora-smr
You can find more information here: Pidora-smr

The script currently has default values set directly inside it, but command line options allow the user to specify new values instead of modifying the script.


Python Based Backup Script for Linux

Introduction
Here at CDOT, our previous backup solution was a little archaic and hard to expand on, so I decided to build a new backup method that can be run from a single computer and back up our entire infrastructure. As I write this, the script is not in a finished state, but it works and is usable as a replacement for our previous system. A warning: this method of backing up across systems is not very secure, since it requires giving some users nopasswd sudo access to some or all programs. I am looking for a way around this and would appreciate any input on the matter.

Here is a copy of the script: smart-bk.py

Goals
There were a few goals that were kept in mind with this script:
– Script resides on a single computer (complete)
– Do not run multiple backups using the same hard drive (complete)
– Check space requirements on source and destination before performing a backup (in progress)
– Email out daily reports on success or failure (not complete)
– Log all information to /var/log/smart-bk/ (complete)
– Easy(ish) to add a new backup schedule (complete)
– Can view all backups that are currently running (complete)
– Can view all the backups queued to run (complete)
– Can view all the schedules that are added (complete)
– Keep a record of all previously run backups (not complete)
– Website to view status of currently running backups (not complete)

At this time, not all of these goals have been completed, but I would like them to be sooner or later. Right now I’m writing up a little documentation on how the script currently works, what it’s missing, and what my next steps will be.

Scheduler System
The main chunk of the script is the scheduler system. A person or script adds the backups they would like performed to a schedule, using specific parameters. A schedule looks like this:

----------------------------------------------------------------------------------------------------
id|day|time|type|source host|dest host|source dir|dest dir|source user|dest user
----------------------------------------------------------------------------------------------------
1|06|11:00|archive|japan|bahamas|/etc/|/data/backup/japan/etc/|backup|backup

What do these fields mean?

id - This is just a unique identifier for the schedule.
day - This is the day the backup was last run. It is used to check whether the schedule is expired (in the past) or has already completed.
time - This is the time at which the backup will start. This allows you to order different schedules to happen earlier or later in the day.
type - This is the type of backup. Currently there are 3.
     - archive backup wraps the directory specified in a tar archive and compresses it with bzip. Uses options: tar -cpjvf
     - rsync is a very simple rsync that preserves most things. Uses options: rsync -aHAXEvz
     - dbdump backup is currently specifically a koji db backup. Uses options: pg_dump koji
source_host - This host is the target of the backup. You want the files backed up from here.
dest_host - This host is your backup storage location. All files backed up will go here.
source_dir - This directory correlates to source_host. This is the directory that is backed up.
dest_dir - This directory correlates to dest_host. This is where the backup is stored.
source_user - User to use on the source host.
dest_user - User to use on the dest host.
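To make the fields concrete, here is a small sketch that parses one schedule row and builds the matching backup command. The field names are the ones above and the commands mirror the listed options, but this is illustrative, not smart-bk itself:

```python
FIELDS = ["id", "day", "time", "type", "source_host", "dest_host",
          "source_dir", "dest_dir", "source_user", "dest_user"]

def parse_schedule(line):
    """Split one pipe-delimited schedule row into a dict keyed by field name."""
    return dict(zip(FIELDS, line.strip().split("|")))

def backup_command(s):
    """Build the backup command for a schedule's type (illustrative only)."""
    if s["type"] == "archive":
        return "tar -cpjvf backup.tar.bz2 %s" % s["source_dir"]
    if s["type"] == "rsync":
        return "rsync -aHAXEvz %s %s@%s:%s" % (
            s["source_dir"], s["dest_user"], s["dest_host"], s["dest_dir"])
    if s["type"] == "dbdump":
        return "pg_dump koji"
    raise ValueError("unknown backup type: %r" % s["type"])

# The example row from the schedule listing above:
row = "1|06|11:00|archive|japan|bahamas|/etc/|/data/backup/japan/etc/|backup|backup"
schedule = parse_schedule(row)
```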

Database
All data for this script is stored inside a sqlite3 db.

sqlite> .schema 
CREATE TABLE Queue(scheduleid INTEGER, queuetime TEXT, FOREIGN KEY(scheduleid) REFERENCES Schedule(id));
CREATE TABLE Running(scheduleid INTEGER, starttime TEXT, FOREIGN KEY(scheduleid) REFERENCES Schedule(id));
CREATE TABLE Schedule(id INTEGER PRIMARY KEY, day TEXT, time TEXT, type TEXT, source_host TEXT, dest_host TEXT, source_dir TEXT, dest_dir TEXT, source_user TEXT, dest_user TEXT);
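A quick sqlite3 session against this schema might look like the following (the tables come from the .schema output above; the inserted row is just the example schedule, and an in-memory database stands in for the file on disk):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # smart-bk would use a database file
conn.executescript("""
CREATE TABLE Schedule(id INTEGER PRIMARY KEY, day TEXT, time TEXT, type TEXT,
    source_host TEXT, dest_host TEXT, source_dir TEXT, dest_dir TEXT,
    source_user TEXT, dest_user TEXT);
CREATE TABLE Queue(scheduleid INTEGER, queuetime TEXT,
    FOREIGN KEY(scheduleid) REFERENCES Schedule(id));
CREATE TABLE Running(scheduleid INTEGER, starttime TEXT,
    FOREIGN KEY(scheduleid) REFERENCES Schedule(id));
""")
conn.execute(
    "INSERT INTO Schedule VALUES (?,?,?,?,?,?,?,?,?,?)",
    (1, "06", "11:00", "archive", "japan", "bahamas",
     "/etc/", "/data/backup/japan/etc/", "backup", "backup"))
# Move the schedule into the queue, roughly as 'sbk -q' would:
conn.execute("INSERT INTO Queue VALUES (?, datetime('now'))", (1,))
row = conn.execute("SELECT type, dest_host FROM Schedule WHERE id=1").fetchone()
```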

How To Use sbk
Checking all the available options:

[backup@bahamas ~]$ sbk -h

Output:

Usage: sbk [options]

The smart backup scheduler program sbk is used to run backups from computer to
computer. sbk does this by adding and removing schedules from a schedule
database. Once added to the schedule database, sbk should be run with '--
queue' in order to intelligently add hosts to a queue and start running
backups. It is recommended to run this as a cron job fairly often, more
frequently depending on the number of schedules.

Options:
  -h, --help          show this help message and exit
  -q, --queue         queue schedules and start backups
  -a, --add           add new schedule at specific time
  -s, --show          show the schedule and host info
  -r, --remove        remove existing schedule
  --remove-queue      remove existing schedule from queue
  --remove-run        remove existing schedule from running
  --expire            expire the day in schedule
  --add-queue         add a single schedule to queue
  --sid=scheduleid    specify schedule id for removing schedules
  --time=18:00        specify the time to run the backup
  --backup-type=type  archive, pg_dump, rsync
  --source-host=host  specify the source backup host
  --source-dir=dir    specify the source backup dir
  --source-user=user  specify the source user
  --dest-host=host    specify the destination backup host
  --dest-dir=dir      specify the destination backup dir
  --dest-user=user    specify the destination user
  --log-dir=dir       specify the directory to save logs

Showing Schedule Information
Show all schedules, schedules in queue, and running schedules:

[backup@bahamas ~]$ sbk -s

Output:

        -[Schedule]-
----------------------------------------------------------------------------------------------------
id|day|time|type|source host|dest host|source dir|dest dir|source user|dest user
----------------------------------------------------------------------------------------------------
1|06|11:00|archive|japan|bahamas|/etc/|/data/backup/japan/etc/|backup|backup
2|06|11:00|archive|romania|bahamas|/etc/|/data/backup/romania/etc/|backup|backup
----------------------------------------------------------------------------------------------------

-[Queue]-
----------------------------------------------------------------------------------------------------
|schedule id|queue time|
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------

-[Running]-
----------------------------------------------------------------------------------------------------
|schedule id|start time|
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------

Adding new schedules
All of these options are unfortunately required.
Add a new schedule:

[backup@bahamas ~]$ sbk --add  --time=11:00 --backup-type=archive --source-host=japan --dest-host=bahamas --source-dir=/etc/ --dest-dir=/data/backup/japan/etc/ --source-user=backup --dest-user=backup

Removing schedules
In order to remove a schedule, a “sid” must be specified. This is simply the “id” of the schedule, which is unique to schedules.
Remove a schedule:

[backup@bahamas ~]$ sbk --remove --sid=1

Start the Backups
Start intelligently queuing schedules and running backups (best to run this from crontab):

sbk -q
or
sbk --queue
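Since the help text recommends running this as a cron job, a crontab entry might look like the following (the 15-minute interval and install path are just examples):

```
*/15 * * * * /usr/local/bin/sbk -q
```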

If you found this post interesting, there is more information about this backup system and its uses on the zenit wiki:
http://zenit.senecac.on.ca/wiki/index.php/OSTEP_Infrastructure#Backup_System


Pidora Release Process (repositories)

In order to release updates on pidora, we have to follow a specific procedure. This procedure can be found on the wiki: Pidora standard operating procedure

First, sign all the built packages in koji, in the updates tags; this creates signed copies of the packages in koji. Second, mash all the tags with strict key checking using the mash program; this creates repositories from the specified tags and outputs them, and it will fail if packages in koji are not signed. Finally, rsync the mash repos to an httpd server. In this case we have a directory structure with symbolic links that point to each of the mash repos. This allows us to overwrite pidora-18-latest with a symlink, which then spreads the changes across the published repos.

Below is an example of the entire symbolic link setup.

Mash Repos:

~/pidora-rsync/mash/pidora-18-latest/
├── mash.log
├── pidora-18
│   ├── armhfp
│   └── source
├── pidora-18-rpfr-updates
│   ├── armhfp
│   └── SRPMS
├── pidora-18-rpfr-updates-testing
│   ├── armhfp
│   └── SRPMS
├── pidora-18-updates
│   ├── armhfp
│   └── SRPMS
└── pidora-18-updates-testing
    ├── armhfp
    └── SRPMS

Pidora main repo sym links:

~/public_html/pidora/releases/18/packages/
├── armhfp
│   ├── debug -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18/armhfp/debug
│   └── os -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18/armhfp/os
└── source
    └── SRPMS -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18/source/SRPMS

Pidora updates repo sym links:

~/public_html/pidora/
├── rpfr-updates
│   ├── 18
│   │   ├── armhfp -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18-rpfr-updates/armhfp/
│   │   └── SRPMS -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18-rpfr-updates/SRPMS/
│   └── testing
│       └── 18
│           ├── armhfp -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18-rpfr-updates-testing/armhfp/
│           └── SRPMS -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18-rpfr-updates-testing/SRPMS/
└── updates
    ├── 18
    │   ├── armhfp -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18-updates/armhfp/
    │   └── SRPMS -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18-updates/SRPMS/
    └── testing
        └── 18
            ├── armhfp -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18-updates-testing/armhfp/
            └── SRPMS -> ~/pidora-rsync/mash/pidora-18-latest/pidora-18-updates-testing/SRPMS/
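The pidora-18-latest swap described above can be sketched in Python; repoint_latest is a hypothetical helper and the paths are examples, not the actual release tooling:

```python
import os

def repoint_latest(target_dir, link):
    """Atomically repoint the 'latest' symlink at a new mash output directory."""
    tmp = link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target_dir, tmp)
    # rename is atomic on POSIX, so readers resolving through the
    # 'latest' link never see it missing while it is being replaced
    os.replace(tmp, link)
```

Because every published repo path resolves through pidora-18-latest, one rename switches all of them at once.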

Pidora 18 – Status Update

Hi All,

So over in CDOT, we have had a nice meeting on our future projects and the future of Pidora. (meeting notes can be found here: http://zenit.senecac.on.ca/wiki/index.php/OSTEP_Meeting_2013-06-03).

A little status update: agreene and I (oatley) will be putting a bit more energy into “getting the ball rolling” with Pidora over the next month or so. A blog is going to be set up to help give out information on updates, fixes, enhancements, additions, etc. (all of which should be tracked in our bug tracker). We are also thinking of ways to get people in the community to help out, but haven’t gone much further than that just yet.

Plans for firmware updates seem to be leaning towards automatically building all updates and putting them in the updates-testing repo. That way we can bring the most stable and major firmware updates to our updates repo, while people who would like “the most recent” can enable updates-testing. (This is a plan and is not in effect yet.)


Pidora 18 – Release Announcement

Pidora 18 (Raspberry Pi Fedora Remix) Release

We’re excited to announce the release of Pidora 18 —
an optimized Fedora Remix for the Raspberry Pi.
It is based on a brand new build of Fedora for the ARMv6
architecture with greater speed and includes packages
from the Fedora 18 package set.

* * *

There are some interesting new features we’d like to highlight:
* Almost all of the Fedora 18 package set available via yum
(thousands of packages were built from the official Fedora
repository)
* Compiled specifically to take advantage of the hardware already
built into the Raspberry Pi
* Graphical firstboot configuration (with additional modules
specifically made for the Raspberry Pi)
* Compact initial image size (for fast downloads) and auto-resize
(for maximum storage afterwards)
* Auto swap creation available to allow for larger memory usage
* C, Python, & Perl programming languages available & included
in the SD card image
* Initial release of headless mode can be used with setups
lacking a monitor or display
* IP address information can be read over the speakers and
flashed with the LED light
* For graphical operation, Gedit text editor can be used with
plugins (python console, file manager, syntax highlighting)
to serve as a mini-graphical IDE
* For console operation, easy-to-use text editors are included
(nled, nano, vi) plus Midnight Commander for file management
* Includes libraries capable of supporting external hardware
such as motors and robotics (via GPIO, I2C, SPI)

* * *

For further documentation, downloads, faq’s, read-me’s, how-to’s, tutorials, or videos:
http://pidora.ca/

* * *

Pidora 18 is a Fedora Remix — a combination of software packages from the Fedora Project with other software.

The Fedora Project is a global community of contributors working to advance open source software. For more information or to join the Fedora Project, see http://fedoraproject.org

Pidora is a project of the Seneca Centre for Development of Open Technology (CDOT). To connect with CDOT, please visit http://cdot.senecacollege.ca

The Raspberry Pi is a small, inexpensive computer board designed to provoke curiosity and experimentation in programming and computer electronics. For more information, see the Raspberry Pi Foundation website at http://raspberrypi.org

* * *

– – –
The CDOT team at Seneca College


Sigul – Setting up a Sigul Client

How to Setup a Sigul Client
This post will explain the process of setting up sigul clients for a working sigul setup. I have automated a portion of the tasks with a script, so I will go over both the manual client setup and the slightly more automated one.

Quick Overview of Sigul
There are at least 3 separate computers involved in the sigul setup (the server, bridge, and client should be separate machines):
1. The Sigul Server is completely cut off from all network except contact with the Sigul Bridge.
2. The Sigul Bridge allows sigul clients to connect to it, and talks to the Sigul Server for the client’s request.
3. The Sigul Client communicates with the bridge, and makes requests such as: sign this package, or list users.

If you are looking for another part of the sigul process:
Connection to Sigul Server/Bridge
Sigul Problems and Troubleshooting
Sigul Client – How to Sign/Testing the Client

Using a script to setup more clients
The script is in the root directory of the sigul client and must be run as root. It can be used by executing the script and specifying a username as an argument:

/root/sigul-client-setup/setup.sh username

What Does the Script Do?
The script folder contains extra files that it copies into the user’s directory.
These files include:
1. the sigul client database, which already contains the bridge cert, so only the new user cert needs to be added
2. a copy of sigulsign_unsigned.py
3. the sigul client config

These files are copied into the /home/username/.sigul folder. The script then generates a client cert for the user. Finally, it grants the user key access; for this you need to know the passphrase for the key and use any sigul admin account. It will also give you a warning to make sure you have created an admin user on the sigul server.
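The steps the script performs amount to running commands like these; this is only an outline of the setup script, and the source-file path and CA nickname here are assumptions, not the script’s actual values:

```python
def client_setup_commands(username, key="pidora-18"):
    """Outline of the client setup steps: copy the prepared files, create a
    client cert, then grant key access (that last step prompts for the key
    passphrase and needs a sigul admin account).
    '/root/sigul-client-setup/files' and the CA nickname 'sigul-ca' are
    hypothetical placeholders."""
    sigul_dir = "/home/%s/.sigul" % username
    return [
        "cp -r /root/sigul-client-setup/files/. %s/" % sigul_dir,
        "chown -R %s:%s %s" % (username, username, sigul_dir),
        "certutil -d %s -S -n sigul-client-cert -s 'CN=%s' "
        "-c sigul-ca -t u,, -v 120" % (sigul_dir, username),
        "sigul grant-key-access %s %s" % (key, username),
    ]
```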

Create an Admin
Log in to the Sigul Server (click here if you don’t know how). Once you have logged into the sigul server, run the add admin command:

sigul_server_add_admin

The admin name should probably match the username on the Sigul Client. Make sure your admin users change both their admin password and passphrase after you give them out; click here for information on how to do that.

Manual Process of Setting Up Sigul Clients
The first step is to create an admin, as described above. Look for the heading above on this page (Create an Admin) and follow the instructions.

Next give access to the key to the users:

sigul grant-key-access pidora-18 username

This process gets a little complicated because you need to use the Sigul Bridge as well. First, export the CA from the Sigul Bridge; this is done by specifying the directory of the Sigul Bridge database and the name of the CA. The Sigul Bridge database is probably kept in /var/lib/sigul.
On the Bridge:

pk12util -d [directory of database] -o sigul-ca.p12 -n [name of your CA]

Copy the output file (sigul-ca.p12) from the bridge to the clients. When the clients are set up, delete this file, as it should not exist outside the database.

Log into a client user and create a new Sigul Client database:

mkdir ~/.sigul/
certutil -d ~/.sigul/ -N

Import the CA that you copied into the client database:

pk12util -d ~/.sigul/ -i [name of CA file]

You must then modify the trust attributes of the CA and mark it as valid:

certutil -d ~/.sigul/ -M -n [name of your CA] -t CT,,

Create the client cert:

certutil -d ~/.sigul/ -S -n sigul-client-cert -s 'CN=username' -c [name of your CA] -t u,, -v 120

The CN should match the username of both the sigul admin and the linux user. The -S option creates a cert and adds it to the database. The -n option is the name of the cert. The -c option specifies the name of the CA. The -t u,, specifies that this is a user cert. Finally, -v 120 means the cert is valid for 120 months.

The manual setup for the client is now complete; you should go here and try some of the client tests.
