
Don’t let your daemons die

Okay, let’s pretend that this isn’t the first time that I’ve posted in about a month. School really has been crazy. I haven’t really even had time to do anything new or cool lately. Tonight though, I’m putting school aside.

I’ve been having some problems with a daemon on one of my servers randomly dying on me, and it got to the point that I couldn’t bear it anymore. I had to find a solution.

Thankfully, people have been developing things for unix/linux for longer than I’ve been alive, and there’s a program that acts as a perfect band-aid. It’s called Monit.

Get the file:
apt-get install monit

Create the monit control file:
(The monit control file uses a language of its own; if you want to learn it, feel free to go to the references at the bottom of this page. Otherwise, just use some form of the file below.)


check process sshd with pidfile /var/run/sshd.pid
  start program "/etc/init.d/sshd start"
  stop program "/etc/init.d/sshd stop"
  if failed port 22 protocol ssh then restart
  if 5 restarts within 5 cycles then timeout

Then, on the console, type monit -c [path to the above file] -d [number of seconds between checks]
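
A concrete invocation might look like this (the control file path and the 60-second polling interval are just assumptions; substitute whatever matches your setup):

monit -c /etc/monit/monitrc -d 60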

This thing is saving me a lot of trouble.

Sources:

http://www.tildeslash.com/monit/doc/examples.php
http://www.tildeslash.com/monit/
http://www.tildeslash.com/monit/doc/manual.php

Hopefully it won’t be another month,
Jon Howe

Great Iptables Tool

I’ve been messing around with iptables a bit more than usual for the past week or so. Through this process, I decided that it wasn’t worth it for me to keep creating and tweaking all of my rules by hand. So I found an Excellent (with a capital ‘E’) frontend to iptables.

For me the frontend had to meet some strict requirements.

  1. It must have a curses graphical user interface
  2. It must be easy to use

Something called Jay’s Iptables fits these requirements perfectly.

You can find all of the details at its website.

Check it out!

Later,
Jon Howe

How To Cache Apt Packages On A Network Using Apt-Cacher

If you’ve got more than one computer running Debian and downloading packages through apt, then apt-cacher should help you a lot.

Apt-cacher is actually a cgi script that is run by apache. Using apt-cacher is very easy, and installing it is even easier.

Step 1: Install apt-cacher.
(Run this on the proxy computer)

apt-get install apt-cacher

Visit http://localhost/apt-cacher in a browser to see that the proxy is running.
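
If the proxy box doesn’t have a browser handy, a quick check from the shell works too (assuming curl is installed):

curl -s http://localhost/apt-cacher | head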

Step 2: Backup and Convert Clients sources.list.
(Do this on the computers that you want to access the cache.)

cp /etc/apt/sources.list /etc/apt/sources.list.backup

vi /etc/apt/sources.list

Press ‘:’ while in vi, then enter:

%s|http://|http://[Your Proxy IP]/apt-cacher?/|g

(I’m using ‘|’ as the delimiter because the URLs are full of slashes; the all-slash version would need every ‘/’ in the pattern escaped.)

This searches through your sources.list and adds http://[Your Proxy IP]/apt-cacher?/ before every repository.
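
If you’d rather not do this by hand in vi, a sed one-liner does the same rewrite (a sketch; 192.168.3.2 is the proxy IP from my example below, and -i.backup writes the backup copy for you):

sed -i.backup 's|http://|http://192.168.3.2/apt-cacher?/|g' /etc/apt/sources.list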

An example from my sources.list looks like this:

deb http://192.168.3.2/apt-cacher?/www.backports.org/debian/ sarge-backports main

Step 3: Update Clients

apt-get update

Optional Step 4: Import Existing packages into the apt-cacher cache

Copy the desired packages to the proxy directory /var/cache/apt-cacher/import

Run the import script to make it so that apt-cacher can use them.
perl /usr/share/apt-cacher/apt-cacher-import.pl
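
As an example, you could seed the import directory with packages that a machine has already downloaded (a sketch; /var/cache/apt/archives is apt’s standard download cache):

cp /var/cache/apt/archives/*.deb /var/cache/apt-cacher/import/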

Step 5: Use apt-get

If the required previous steps completed successfully, you should now be able to use apt with its connection proxied through your apt-cacher proxy.

Questions or comments? Leave me a comment and I’ll reply.

Later,
Jon Howe

Okay…

I did get something good out of this whole fiasco. When I think about it, this blog being down for a day or so is a small price to pay for the amount of knowledge I gained setting up apache, mysql, and php… Over and Over Again.

Let me know if there are problems that you see.

I’m pretty sure that mail is all screwed up because I have no mail server, or at least I hope that I don’t (I uninstalled it).

However, that’s something that I’m going to work on next. I’m going to try to find a new mail solution, at least until a stable release of Hula comes out.

I’m a pretty picky customer when it comes to mail servers. I want a mail server that’s got a lot of easy-to-use documentation, so it might be a while until the jonhoweonline.com MX record is active again…

Some other things that I’ll be working on:

  • Running my own DNS
  • Setting up mod-rewrite for apache / drupal. This will make it so that instead of this site saying www.jonhoweonline.com/me/?q=node/1 it will say www.jonhoweonline.com/me/node/1 (see the sketch after this list)
  • I am going to be starting an open source Content Management System (CMS) project with (primarily) a couple of my friends, so I’m going to get a CVS server up and running.
  • I hope to have some tutorials out for all of those at some point or another, so stay posted!
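
For the mod-rewrite item, the usual recipe for Drupal clean URLs looks roughly like this (a sketch I haven’t deployed yet; it assumes mod_rewrite is enabled, lives in the site’s .htaccess, and the /me RewriteBase matches where this site is installed):

RewriteEngine on
RewriteBase /me
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]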

Later, and thanks for sticking with me,
Jon Howe

How to Back up a MySQL Database for Simple Offsite Storage

For those of you who have been reading this blog for a while, you know that in the past I’ve had some problems keeping this server up and running for any amount of time.

This got old, because every time my server died I’d lose all of my blog posts, which stinks because there’s usually a ton of them.

To combat this ever-threatening possibility, I decided to write a bash script for Linux that does a few things:

1. Backs up the entire database
2. Archives it
3. Encrypts it using ccrypt, although you can substitute whatever tool you want without much bash knowledge
4. Places the encrypted archive in a directory. (I just stuck the archive in a public web directory.)

I should clarify a little bit… Putting even a highly encrypted archive in a public directory is not 100% secure. It’s especially not recommended if you are storing sensitive data, and especially if you’re storing passwords in plaintext in your database. (Please don’t do that…) Consider yourself warned.

Unfortunately, there’s another security flaw here, although like the last one, it’s not a show-stopper (for me at least). The only way I could make the entire process automatic was to store the password in plaintext, either in the script itself or in some external file (which is how it’s being distributed now). I think it’s possible to create a wrapper that contains the password in compiled (much less human-readable) form and have that call ccrypt, but I don’t have the time or need for that right now, although it is a good idea :).

I should probably mention that this script is meant to be run as a cron job.
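
Something like this in root’s crontab would do the trick (the schedule and script path are just assumptions):

# run the database backup nightly at 2:30 AM
30 2 * * * /root/bin/db-backup.sh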

So, without further ado, here it is:


#!/bin/sh

# backs up all databases
# archives them
# encrypts them
# places them in a directory that you choose

user=[database username]
pass=[database password]
finaldir=[directory to store final encrypted archive]
passpath=[path to a text file containing plaintext password]
#=======================================================
# You shouldn't have to edit anything below here
#=======================================================

workingdir=/root/dbback
dumpname=db.sql
pathtosql=/usr/bin/mysqldump
arch=$dumpname.tar.gz
crypto=$arch.cpt

# create $workingdir if it doesn't already exist
[ -d $workingdir ] || mkdir $workingdir

# backs up all databases temporarily to $workingdir/$dumpname
$pathtosql -u $user --password=$pass -A > $workingdir/$dumpname

# archives the database (-C keeps the working directory path out of the archive)
tar -czf $workingdir/$arch -C $workingdir $dumpname

# encrypts the archive with the key read from $passpath; ccrypt appends .cpt
ccrypt -e -fbrk $passpath $workingdir/$arch

# copies the encrypted archive to $finaldir, stamped with today's date
cp -f $workingdir/$crypto $finaldir/db@$(date +%F).tar.gz.cpt

# make the encrypted database internet readable
chmod 755 $finaldir/*.cpt

# removes the files that were used in the creation of the encrypted archive
rm -f $workingdir/*
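
Getting a backup back is just the reverse (a sketch; the date stamp below is an example, ccrypt will prompt for the passphrase, and mysql will prompt for the database password):

ccrypt -d db@2006-01-01.tar.gz.cpt
tar -xzf db@2006-01-01.tar.gz
mysql -u [database username] -p < db.sql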

If you have any questions or comments, feel free to leave a comment and I’ll get back to you ASAP.

Later,
Jon Howe

Hula is Shaping up to be Awesome

I update Hula once per week from the Subversion tree. You can find instructions on how to do this in Linux at The Hula Blog.
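
Once you have a checkout, keeping it current boils down to this (a sketch; the checkout path is an assumption, and the real instructions are at the link above):

cd ~/hula
svn update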

Each time that I update my source code using Subversion, I get updated versions of the new web prototypes. I put them here so that you can see them.

They’re not going to work completely yet, because they’re just prototypes.

More Later,
Jon Howe
