Tutorials (51)


SSH Tunnel for Samba Shares

German (my coworker and friend) posted something a while ago which I am now finding pretty useful: http://eichberger.de/2006/01/ssh-tunnel-smb-for-mac-os-x.html

Note that this post is oriented toward Mac users, but the same should work on other *nix systems with a few minor modifications.

Why do I need this? Well, I’m still on the campus network, but currently in another building, which prevents me from mounting the SMB share from the file server directly (something about the network that I neither care to understand at this point, nor need to). There’s an easy work-around:

You need to forward ports to another server/workstation that has access to the samba share. Once you have the tunneling up and running you can act as if you were sitting at that other computer.

From the terminal: sudo ssh -L 139:host-ip:139 uname@server.ext (sudo is needed because local port 139 is privileged)

From the “Connect to server” dialog in the finder: smb://localhost/name-of-your-share

Put it to use: an example
Say you have ssh access to a computer, mydomain.com, which also happens to host the Samba share.

Open the terminal and type: sudo ssh -L 139:mydomain.com:139 username@mydomain.com
enter your local password (for sudo)
enter the password for the server you’re tunneling to
Now you should be properly tunneled to mydomain.com

Open the “Connect to Server” dialog (Apple-K, or from the Finder: Go->Connect to Server…). Since the samba share is hosted on mydomain.com, you don’t need to do too much from here. For the sake of this example, we’ll call the mount “files.”
Enter smb://localhost/files
You’ll be asked for your username and password on the server mydomain.com

There you go… the samba share will mount on your desktop and you’re done!
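If you find yourself setting this tunnel up often, you can persist the port forwarding in your ~/.ssh/config instead of retyping the -L flag each time. The host alias, user, and domain below are placeholders from the example above:

```
# Hypothetical ~/.ssh/config entry for the example above
Host smbtunnel
    HostName mydomain.com
    User username
    LocalForward 139 mydomain.com:139
```

After that, sudo ssh smbtunnel brings up the same tunnel (sudo is still needed because of the privileged local port).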




Smarty: sections and iterators

Just wanted to point this out to anyone who may be interested, or who can’t seem to figure out how to hop around array indices in a Smarty template.

In straight PHP it’s simple enough to bounce around to different array indices ($array[$ii-1], etc.). In Smarty, not so easy.

If you want to compare the current array member’s value to the previous one, you have to use a special section property in your template: index_prev. Likewise you’d use index_next for the next index.

{section name=ii loop=$people}
{$people[ii].name} is {$people[ii].age}
{if $people[ii.index_prev].age < $people[ii].age}
{$people[ii].name} is older than {$people[ii.index_prev].name}
{/if}
{/section}
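For context, here is what the PHP side of such a template assumes, along with a plain-PHP rendering of the same index_prev comparison (the sample data here is made up for illustration):

```php
<?php
// Made-up sample data; with Smarty you would hand it to the template via
// $smarty->assign('people', $people);
$people = [
    ['name' => 'Ann', 'age' => 30],
    ['name' => 'Bob', 'age' => 45],
    ['name' => 'Cid', 'age' => 28],
];

// Plain-PHP equivalent of {if $people[ii.index_prev].age < $people[ii].age}
$older = [];
for ($ii = 1; $ii < count($people); $ii++) {
    if ($people[$ii - 1]['age'] < $people[$ii]['age']) {
        $older[] = $people[$ii]['name'] . ' is older than ' . $people[$ii - 1]['name'];
    }
}
print(implode("\n", $older) . "\n"); // Bob is older than Ann
```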

You can find more info in the Smarty documentation on sections.




Glossary links, anyone?

We’ve all been to those sites before – the ones that dubiously insert advert links on keywords throughout the page. It’s pretty annoying because they often have nothing to do with the reason(s) why you were on that page to begin with. So now here I am, telling you how to do it easily. But I assure you – my intentions are completely pure!

The task: Link glossary terms to some educational material (coming from a database).

1) Pull your list of terms from the database into an array. Call it $glossary_terms

2) Run a preg_replace command on your page content:

$replace='<a href="/glossary/$0">$0</a>'; //point the href wherever your glossary lives
$updated_content=preg_replace('/\b('. implode('|',$glossary_terms). ')\b/i',$replace,$content);

3) Display the $updated_content instead of the original $content.

Explanation
preg_replace() accepts either strings or arrays as input, which is really helpful in situations like this where content gets checked against an array of terms.
The \b escape matches a word boundary (spaces, punctuation, etc.).
The i modifier makes the search case-insensitive.
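Putting the pieces together, here is a self-contained sketch of the approach, with made-up terms, content, and glossary URL scheme. One extra wrinkle worth adding: run each term through preg_quote() so a term containing a regex metacharacter can’t break the pattern.

```php
<?php
// Made-up glossary terms standing in for $glossary_terms from the database
$glossary_terms = ['ichthyology', 'taxonomy'];

// Escape each term so regex metacharacters in a term can't break the pattern
$escaped = array_map(function ($t) { return preg_quote($t, '/'); }, $glossary_terms);

$content = 'An intro to Ichthyology and taxonomy.';
$replace = '<a href="/glossary/$0">$0</a>'; // hypothetical glossary URL scheme
$updated_content = preg_replace('/\b(' . implode('|', $escaped) . ')\b/i', $replace, $content);

echo $updated_content . "\n";
```

Note that $0 carries the matched text through with its original capitalization, so “Ichthyology” stays capitalized inside the link.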

Thanks go to xblue and MarkR on PHPBuilder.com for the pointers (especially xblue for pretty much just handing me the answer!).




Adding a new hard drive to a Linux system

Time came to add a second hard disk to my workstation. I didn’t need a whole lot – just another 250GB for backup and extra storage space until the new workstation arrives later this summer. Here’s a quick tutorial on how to get the new disk in and running on your Linux box.

Once the hardware is properly installed, open up a terminal and log in as root.

/sbin/fdisk /dev/hdb (assuming this is your second drive and your primary is /dev/hda):
/sbin/fdisk /dev/hdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

The number of cylinders for this disk is set to 30401.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Type m for help…

Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

type “n” for a new partition,
“p” for primary,
“1” for partition,
use the default size suggested (usually just hit enter for default):
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-30401, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-30401, default 30401):
Using default value 30401

Type “p” to get a list of the partition table:
Command (m for help): p

Disk /dev/hdb: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/hdb1 1 30401 244196001 83 Linux
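As a sanity check, the geometry fdisk reports does account for the advertised 250 GB: multiply the cylinder count by the bytes-per-cylinder figure from the “Units” line above.

```shell
# 30401 cylinders x 8225280 bytes per cylinder, from the fdisk output above
bytes=$((30401 * 8225280))
echo "$bytes bytes"                        # 250056737280 bytes
echo "roughly $((bytes / 1000000000)) GB"  # roughly 250 GB
```

The small shortfall versus the reported 250059350016 bytes is the partial cylinder at the end of the disk that the cylinder-based geometry can’t address.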

Then type “w” to write the changes to disk (create the partition on your new drive)
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

…you’re almost done. Just a couple more steps.

The next command will make the filesystem on the disk:
/sbin/mkfs -t ext3 /dev/hdb1

The app will begin printing an incrementing number, and before you know it it’ll be done:
mke2fs 1.38 (30-Jun-2005)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
30539776 inodes, 61049000 blocks
3052450 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=62914560
1864 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Final steps

Make a new directory in your filesystem to which the new drive will be mapped:
mkdir /drive2

Mount the drive:
mount -t ext3 /dev/hdb1 /drive2

Edit your /etc/fstab to auto-mount the disk by adding the following line:
/dev/hdb1 /drive2 ext3 defaults 1 1

That’s it!




Storing hierarchical data in a database part 2b: Modified preorder tree traversal – insertions

Last time I introduced the Modified preorder tree traversal algorithm as a method to store relationships between nodes. This time I’ll show you how nodes are inserted into the tree. Note that I’m using MySQL, so you may need to change your queries slightly depending on the DB you’re using.

Before we get started, I’d like to share some links I’ve found since the last post.
Wikipedia: Tree traversal
MySQL AB: Managing Hierarchical Data in MySQL

Consider our tree, introduced last time:

Say you wanted to insert “Boxer” as a child node of Dog. The left/right values for this new node will be 14/15, respectively. Before we can insert, however, we need to make some room: all left/right values greater than 13 need to be incremented by two so we can fit [14]Boxer[15] in (Dog becomes [13]Dog[16] and Cat becomes [17]Cat[18] ).

$sql_lft="UPDATE animals SET lft=lft+2 WHERE lft>13";
$sql_right="UPDATE animals SET rght=rght+2 WHERE rght>13";
$sql_insert="INSERT INTO animals (node_name,lft,rght) VALUES ('Boxer',14,15)";
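To make the renumbering concrete, here is an in-memory sketch of the same three steps, with a plain PHP array standing in for the animals table (the function and variable names are mine; the lft/rght values match the example above):

```php
<?php
// Simplified stand-in for the animals table: name => [lft, rght]
$animals = ['Dog' => [13, 14], 'Cat' => [15, 16]];

// Insert $name as the last child of $parent (same logic as the three queries above)
function insert_child(array &$tree, string $parent, string $name): void {
    $prght = $tree[$parent][1];
    $newLft = $prght;            // the child slots in just before the parent's rght
    foreach ($tree as &$node) {  // UPDATE ... lft=lft+2 / rght=rght+2 WHERE value >= the new slot
        if ($node[0] >= $newLft) $node[0] += 2;
        if ($node[1] >= $newLft) $node[1] += 2;
    }
    unset($node);
    $tree[$name] = [$newLft, $newLft + 1]; // INSERT ... VALUES ('Boxer', 14, 15)
}

insert_child($animals, 'Dog', 'Boxer');
print_r($animals); // Dog => [13, 16], Cat => [17, 18], Boxer => [14, 15]
```

Dog’s lft (13) is untouched, its rght is pushed from 14 to 16, Cat shifts to [17, 18], and Boxer lands at [14, 15], exactly as described above.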

In order to insert a sibling node (at the same level), you simply base the new lft on a neighbor’s rght value (or the parent node’s lft value in some cases… you should be able to figure out why).

The trickiest part really isn’t the insert as much as it is writing an algorithm that determines the proper lft/rght values at every point in the hierarchy. There are lots of ways to do it, so I’ll leave it up to your imagination. The best way to understand what’s going on is by trying it yourself. If you get stuck, feel free to ask!

Next time around I’m going to discuss the idea of moving (multiple) nodes within the tree, and a few other little pieces of functionality that should serve you well…




Installing AWStats on Fedora (FC4) with Apache virtual hosting

While the exact distribution probably doesn’t matter too much, some steps are pertinent only to Fedora (i.e. the Yum install). Honestly, follow the instructions in the awstats documentation closely and you should be fine. My aim here is to point out some of the finer details.

First step, install AWStats as root:
% sudo yum install awstats

Yum will install awstats into /usr/share/ (/usr/share/awstats)

Now run the configuration script
% cd /usr/share/awstats/tools/
% sudo perl awstats_configure.pl
.....

When it asks for the location of the server configuration file:
/etc/httpd/conf/httpd.conf (your apache conf file)
follow the remaining directions until you exit the configuration script. For the sake of this tutorial I’ll call the site “mysite.”

NOTE: Apache logs are typically in the common log file format. AWStats works to its fullest potential if you change the log format to combined in your httpd.conf. If you choose to keep the common format, you’ll have to make the appropriate changes in awstats.

Now configure the awstats config file you just created with the above script:
% sudo emacs /etc/awstats/awstats.mysite.conf

You’ll need to edit a few lines to get everything working:
(line 51) LogFile="/var/path/to/my/file_access.log"

(line 153) SiteDomain="subdomain.domain.ext"

#and any others you might be using
(line 168) HostAliases="subdomain.domain.ext 127.0.0.1 localhost"

#Optional, but improves security. Leaving this blank allows ALL; otherwise fill in the IPs you want to allow
(line 349) AllowAccessFromWebToFollowingIPAddresses="127.0.0.1"

One thing I was unable to get working correctly with HTTP authentication was:
(line 328) AllowAccessFromWebToAuthenticatedUsersOnly=1
and
(line 339) AllowAccessFromWebToFollowingAuthenticatedUsers = "__REMOTE_USER__"

Ok!

now run the awstats.pl script:
% cd /usr/share/awstats/wwwroot/cgi-bin/
% sudo perl awstats.pl -update -config=mysite

If all goes well, you should see something along the lines of:
Found 0 dropped records
Found 0 corrupted records,
Found 150 old records,
Found 50 new qualified records.

…at the end of the update script output.

So, to do this with multiple domains, just repeat the steps above, making sure to make the appropriate changes to each domain…

You should now be able to visit http://www.yoursite.ext/awstats/awstats.pl?config=mysite and see the fruits of your labor. I chose not to allow web users to trigger automatic updates; instead, I have a cron job run the awstats.pl -update script once per day
(I don’t administer any high-traffic sites, so it’s not critical to have the most up-to-date records). See near the end of my Incremental backups with rsync post for more information on that, if you’re interested.
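For reference, that nightly job can look like the following in root’s crontab. The 4:00 a.m. run time is my own arbitrary choice; the paths follow the Yum layout above, and “mysite” is the example config name:

```
# Update the AWStats data for "mysite" every night at 4:00 a.m.
0 4 * * * perl /usr/share/awstats/wwwroot/cgi-bin/awstats.pl -update -config=mysite
```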

A word of caution: AWStats is often the target of worm attacks through XSS (cross-site scripting). One reason to use Yum to install/manage awstats is that you don’t have to do any of the work to keep it updated (make sure automatic yum updates are enabled in your system services config: Desktop -> System Settings -> Server Settings -> Services; check the “yum” option and click the “Start” button while it’s highlighted). Also make sure to limit who can see the awstats reports.




Storing hierarchical data in a database Part 2a: Modified preorder tree traversal

Second part in this series of discussions/tutorials on storing hierarchical data in a database. You can read the first one here. The original article I read that got me started on these ideas can be found here (you’ll notice the tree diagrams are similar). Most credit should be given to Gijs Van Tulder; I simply modified some of his code and added to his explanation where appropriate.

This time we’re going to take a look at the basics of the modified preorder tree traversal algorithm.

Consider the tree used in part 1, which uses the adjacency list model:

What the modified preorder method does is store the relationships between neighboring nodes in order to recreate the tree structure. We do this by storing “lft” and “rght” (“left” and “right” are reserved SQL keywords) values at each node; as you descend a tree, the left-hand values increase, and likewise, as you ascend, the right-hand values increase. It’s hard to visualize until you see it in action:

[Tree diagram: the same tree annotated with lft/rght values, with arrows showing the traversal order]

Pay special attention to the sequence in which these nodes are ordered, as shown by the arrows.

Here’s the presorted table for the above tree:

…and one of the first things you’ll probably want to do with your new presorted data? Create a tree. First things first, though. The code behind it is more difficult to understand than the adjacency model, but the overhead is greatly reduced because you only need two queries to retrieve tree information from the database (the adjacency model issues a new query every time you recursively enter the retrieval function).

In its simplest form the queries are:
(retrieves the left and right values for the starting node)
SELECT lft,rght FROM animal_tree WHERE name="$name";
SELECT * FROM animal_tree WHERE lft BETWEEN $row['lft'] AND $row['rght'] ORDER BY lft ASC;
…and that’s it.

Take a look at the code (modified from the example in the Sitepoint article):

//I use my own database classes, but it should be easy enough to follow what's happening.
function get_tree($name="Animal",&$DB) {

    $result=$DB->query("SELECT lft,rght FROM animal_tree WHERE name='$name'");
    $row=$DB->fetchRow($result);
    $result2=$DB->query("SELECT * FROM animal_tree WHERE lft BETWEEN {$row['lft']} AND {$row['rght']} ORDER BY lft ASC");

    $right = array();

    while($row=$DB->fetchAssoc($result2)) {

        if(count($right)>0) {

            /* $right[max#] is the last array element, which holds the
             * rght value from the previous row, which is a parent in
             * many cases, but if not a parent, it keeps popping-off
             * the last element of the $right array - going back up
             * the tree until $row['rght'] (current node) finds its
             * parent (the parent's rght value is greater than the
             * child's rght)
             */
            while($right[count($right)-1] < $row['rght']) {
                array_pop($right);
            }
        }

        //indent one space per level of depth, then print (lft)name(rght)
        print(str_repeat(' ',count($right))."({$row['lft']})".$row['name']."({$row['rght']})\n");

        $right[]=$row['rght'];
    }
}

…and in case you want the path to a particular node (all of its ancestors):
SELECT * FROM animal_tree WHERE lft<$lft AND rght>$rght ORDER BY lft ASC

…to count the number of descendants of a node: $d=($rght-$lft-1)/2
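As a quick check of that formula, take the lft/rght values from the Boxer insertion example in part 2b (Dog at [13, 16] with Boxer at [14, 15] beneath it):

```php
<?php
// Number of descendants under a node, computed from its lft/rght values
function descendants(int $lft, int $rght): int {
    return ($rght - $lft - 1) / 2;
}

echo descendants(13, 16) . "\n"; // Dog [13..16] contains one node: Boxer
echo descendants(14, 15) . "\n"; // Boxer [14..15] is a leaf: 0 descendants
```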

Thanks, again to Gijs for the excellent tutorial. Next time we will explore some ways to modify and manipulate the tree.




Storing hierarchical data in a database: Part 1

(Click to see part 2a and 2b)
In this first of two (or three) posts, I’m going to present a fairly conventional way of representing hierarchical data in a database. (The original article that got me interested in posting these ideas can be found here, hence the similarity in the tree structure.)

My DB of choice is MySQL, so you may have to modify your queries slightly for other DB engines.

Before we even begin, we should consider why we’d even want to do something like that – what can you even use it for? Animals. It’s something we can all relate to. If, for some reason, you needed to catalog the relationships of different animals to one another, you’d have to come-up with some sort of tree diagram to make sense of it all. Take a look at this one, for example:

What else can you say about it? Pretty simple. I particularly like fish, so I expanded a little bit there (later we’ll see how we can expand our tree to include different breeds of dogs and cats). Also notice that you can really create many different types of trees. One tree that I’m working on, for example, will demonstrate the taxonomic relationships between different species of fish for the DFL. Often times the employee/supervisor scheme is used for demonstration purposes.

The first method of storing our data is to simply indicate to which parent each item belongs. Take a look at this table:

*Note: I chose to have parent=0 to mean that’s the top-level of our tree (there is no row id 0).

That wasn’t too painful, now was it? The parent_id column basically points to the node (location in the tree) to which that particular item belongs. For example, cat (id 9) has parent_id 3, which is the id of “Mammal.” Using this information we can reconstruct the tree above (it probably won’t look quite like that, but the structure of the data is what we’re concerned with).

There’s essentially only one basic query we need to achieve our goals:
$query="SELECT * FROM animal_tree WHERE parent_id=$id";
Easy, huh? But it doesn’t really do anything for us on its own. Say we wanted to select Animals->Fish->Marine->all – how would we do that?

In order to make this a bit more automated, since that’s really what we’re trying to do, we need to make use of a recursive function. What is that? you may ask… Well, it’s essentially a function that, at some point, calls itself from within. Here’s a quick example:

function count_up($number) {
    if($number==100) {
        die("We're all done!");
    }
    print($number."\n");
    count_up($number+1);
}
count_up(1);

This function keeps calling itself until $number equals 100, at which point it kills the script, telling us we’re all done. If we’re going to create a recursive function for our tree, we’re going to need a few modifications to our code first…
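A sketch of what that tree-walking recursive function can look like, using an in-memory array in place of the parent_id query (the function name, variable names, and sample ids here are mine, not from the original post):

```php
<?php
// In-memory stand-in for: SELECT * FROM animal_tree WHERE parent_id=$id
// id => [parent_id, name]; parent_id 0 marks the top level, as in the table above
$rows = [
    1 => [0, 'Animal'],
    2 => [1, 'Fish'],
    3 => [1, 'Mammal'],
    4 => [2, 'Marine'],
    5 => [4, 'Halibut'],
    6 => [3, 'Dog'],
];

// Recursively print each node's children, indenting one level per call
function display_children(array $rows, int $parent, int $level): void {
    foreach ($rows as $id => [$pid, $name]) {
        if ($pid === $parent) {
            echo str_repeat('  ', $level) . $name . "\n";
            display_children($rows, $id, $level + 1);
        }
    }
}

display_children($rows, 0, 0);
```

Each call handles one level of the tree and recurses into every child it finds, so the whole hierarchy prints in depth-first order with indentation showing the nesting.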

That wasn’t too painful, now was it? If this still seems a little too difficult or abstract, don’t worry. Recursive functions seem exactly that when you first learn them.

If you ever wanted to find the path to “Halibut,” you can create a similar function that finds the parent_id of “Halibut,” then runs again to find the parent’s parent, and so on and so forth.

…enjoy.




Incremental backups with rsync

There are some good resources guiding people to make shell scripts for creating incremental backups using rsync, but it’s still a lot to wade through if you’re not very familiar with some of the ins and outs of *nix terminal commands.

** NOTE: I’m still testing these scripts to make sure they all work. In theory, and by appearance they do. Use at your own risk (you’ve been warned =) **

These two scripts will give you daily backups for a week, and three weekly backups (from the incremental).

1) Make backups on a separate partition, on a separate disk, on a separate server. Don’t have the money for a new server? Your money can buy you more than you realize. You’ll do just fine to set up a PIII ~500MHz with a reasonable amount of RAM (512MB) and a new hard disk drive. 200GB drives are going for less than $100 these days, so you’ve got yourself a backup server!

Get on with it! Two scripts – one for the daily tasks, one for the weekly. Ok – four. One will set-up your cron jobs, but it’s so easy it won’t really count. I’m using BASH for the examples, but T/CSH or any other scripting language should be fine with some modifications

daily.sh

#!/bin/sh

##################################################
#needs to be run as root (to preserve permissions)
# NOTE: unlike the weekly script that saves backups
# on the same day every week (e.g., every Sunday),
# this script makes backups that are X days old
# instead of indicating the day of the week they
# were made (daily.0 is made every day, whereas
# weekly.1 is made every Sunday)
##################################################

#the mount point of the drive using for your backups
BACKUP_DIR='/backup'

#mount drive read-write so we can back-up to it.
#you'll want to change the entry in your fstab so it's mounted as read-only by default
/bin/mount -o remount,rw $BACKUP_DIR/

## rotate-out the daily backups
/bin/rm -rf $BACKUP_DIR/daily.6
/bin/mv $BACKUP_DIR/daily.5 $BACKUP_DIR/daily.6
/bin/mv $BACKUP_DIR/daily.4 $BACKUP_DIR/daily.5
/bin/mv $BACKUP_DIR/daily.3 $BACKUP_DIR/daily.4
/bin/mv $BACKUP_DIR/daily.2 $BACKUP_DIR/daily.3
/bin/mv $BACKUP_DIR/daily.1 $BACKUP_DIR/daily.2
/bin/mv $BACKUP_DIR/daily.0 $BACKUP_DIR/daily.1

#start with a fresh copy...
/bin/mkdir $BACKUP_DIR/daily.0

#the good stuff. Note that you need to set-up a public key and allow list between
#the two computers (will cover in a later post)
/usr/bin/rsync -e ssh -vcpogrt --numeric-ids --delete --exclude-from=/path/to/backup_exclude_list.txt --link-dest=$BACKUP_DIR/daily.1 root@serverIPaddress:/ $BACKUP_DIR/daily.0 > /tmp/rsyncAll

/bin/echo "\n" >> /tmp/rsyncAll

#send yourself an email with any messages (errors) returned by the script.
mail -s "rsyncAll backupserver from mainserver" -c youremail@domain.ext root < /tmp/rsyncAll

exclude list
You’ll need this for sure on your system. I haven’t quite figured out all the things that need to be excluded, but I’ve put most of the important ones here. Note that I’m using Fedora (FC4), so your mileage may vary. Make sure to put each entry on its own line.

/proc/
/tmp/
/mnt/
/backup/
/media/
/media/backup/
/sys/devices/pci*

weekly.sh
The weekly script is really easy – just some copying and moving.


#!/bin/sh
####################################################
#needs to be run as root (to preserve permissions)
# Run this script BEFORE the daily script ...
# Weekly backups are taken on the same day every week
# and are NOT weekly from the current day. Note that
# distinction.
####################################################
BACKUP_DIR='/backup'

#mount drive so we can back-up to it.
/bin/mount -o remount,rw $BACKUP_DIR/

#previous third week's stuff is outta here!
/bin/rm -rf $BACKUP_DIR/weekly.2

/bin/mv $BACKUP_DIR/weekly.1 $BACKUP_DIR/weekly.2
/bin/mv $BACKUP_DIR/weekly.0 $BACKUP_DIR/weekly.1

# the 6-day-old data now becomes the one-week-old data (weekly.0)
/bin/cp -al $BACKUP_DIR/daily.6 $BACKUP_DIR/weekly.0

#daily.6 is now vacant and will be filled by daily script

#mount drive read-only so nobody can touch it
/bin/mount -o remount,ro $BACKUP_DIR/
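The cp -al above (and rsync’s --link-dest in the daily script) works because hard links let successive snapshots share the data of unchanged files. A quick local demonstration, safe to run anywhere since it only touches a temp directory:

```shell
# Make a fake "daily.6" snapshot and hard-link-copy it to "weekly.0"
tmp=$(mktemp -d)
mkdir "$tmp/daily.6"
echo "hello" > "$tmp/daily.6/file.txt"

cp -al "$tmp/daily.6" "$tmp/weekly.0"   # -l makes hard links instead of copying data

# Both paths point at the same inode, so the extra snapshot costs almost no disk space
[ "$tmp/daily.6/file.txt" -ef "$tmp/weekly.0/file.txt" ] && echo "same inode"

rm -rf "$tmp"
```

Because rsync is invoked with --delete against a fresh directory each day, changed files get new inodes in daily.0 while unchanged ones stay linked back through the chain.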

Finally, you’ll probably want to add some cron jobs to your list…
By default, there were no jobs in my root’s crontab, so it’s no problem to enter the following into a file:

EDIT: simple mistake on the weekly.sh entry: should be 0 0 * * 0 to run every Sunday at midnight.
0 0 * * 0 /bin/sh /path/to/scripts/weekly.sh
30 0 * * * /bin/sh /path/to/scripts/daily.sh

All this means is the following: every Sunday at midnight, run the weekly script (you’ll want this to run BEFORE your daily script so that daily.6 gets moved over to the weekly set before it gets overwritten by the job that runs thirty minutes later). The numbers and stars tell cron when to run the scripts; a quick Google search will turn up much more detailed information.

Enter it by typing into the terminal (as root): crontab /path/to/crontabfile.txt
you can verify your scheduled jobs by running: crontab -l




nvidia driver installs on Fedora Core 4

Earlier this week I had the unpleasant task of trying to fix/update the NVidia drivers on my Fedora workstation. Video was working fine. What was giving me problems was the hardware 3D support, which is needed by the uber expensive visualization app I use for data analysis.

I’ve never had a pleasant, much less easy, time upgrading the video drivers on this machine. It started with the RedHat Enterprise Linux distribution – better known as “Always at least two full versions behind current,” or, “This software base is at least two years behind other distributions.” Dell suggests you use their “custom” NVidia drivers. NVidia says to use theirs. Neither worked very well, though Dell’s seemed to be more problematic than it was worth.

Fast forward to Fedora.
This Link, though it’s probably the best option, didn’t help – the instructions didn’t work because Yum couldn’t resolve some pretty idiotic dependency issues with my kernel. Next option: the actual NVidia installer. To be honest, it didn’t work the first time around, which is why I tried the Yum installer, but I thought I’d give it another shot.

  • Download the Linux NVidia driver install script.
  • Change your /etc/inittab to run level 3 (change the 5 to a 3 in the initdefault line), then reboot (or just log out, hit control-alt-F1, and as root run telinit 3).
  • As root run the script. At this point, if I recall correctly, you’d actually be able to telinit 5 and use full 3D capabilities (maybe modprobe nvidia would be required first). I ran into a problem that the NVidia ftp site was down/not responding, so it couldn’t download a pre-compiled profile for the video drivers. Just answer “Yes” when it asks if you want it to compile it for you.

The problem: if you reboot (run level 5), no more nvidia driver. You’ll see an error that the .ko file can’t be found at boot. So something is happening between the install and the boot process. glxgears won’t work – nothing – which is as expected.

  • To make a long story short – look in your kernel’s drivers directory. The script doesn’t put the .ko files in the correct places – at least not initially. What you need to do is make sure that the nvidia.ko file can be found, in one way or another in the …/kernel/drivers/ and the …/kernel/drivers/video/ directories… yes, that’s both of them.
  • open, as root, the /etc/inittab and change the run level back to 5. save. close.
  • reboot.

Here’s another link that ultimately wasn’t too useful to me, but gave me some ideas: here.

Hopefully somebody will find these instructions even slightly useful.