Joe's Linux Blog – Linux Admin tips and tricks

November 17, 2010

Qt 4.7.1, ALIX and Qt for el6, and other housekeeping

Filed under: ALIX,Centos,Configuration,qt,Tools,Web Publishing — jfreivald @ 9:59 pm

Sorry for the delay in getting 4.7.1 out. I’m jammed up at work. I hope to get it out in the next week or so.

When Centos 6 is released I’ll be building Qt packages and ALIX images for it, but I will not be abandoning el5 until it is EOL. Lord, thank you for Virtual Machines!

I will also be re-configuring the repositories to make them non-Centos-specific. I’ll be using ‘qt-el’ instead of ‘centos’ to eliminate confusion for RHEL users who have never heard of Centos. The update will involve moving the RPM packages to a new directory structure and updating the repository package. The old /centos directories will contain only the updated repository package, so when a ‘yum upgrade’ is performed on an existing machine, the new package will redirect it to the new directory structure. A second ‘yum update’ will then upgrade the packages normally.
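
For anyone curious what that looks like from the client side, the sequence is roughly the sketch below. The repository path and baseurl are placeholders for illustration, not the project’s real values.

# First pass: pulls the updated repository package from the old /centos path.
yum upgrade
# The new package drops a .repo file that points at the new qt-el layout,
# e.g. something along these lines (illustrative only):
#   baseurl=http://repo.example.org/qt-el/el5/$basearch/
# Second pass: installs the remaining updates from the new directory structure.
yum update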

With any luck it will be entirely seamless to the community.

On a side note, we’re at over 200 registered users and over 700,000 non-bot hits per month (over 670k from Yum and wget alone!), and we’re consistently moving over 50GB of transfer per month, with a peak in October of over 110GB.  We’re #1 in Google search results for “Qt Centos”, “ALIX Centos” and several other terms.  We have users in Russia, Germany, Italy, France, India, South Africa, the U.S., and dozens of other countries, with hits coming from .com, .edu, .org and several other top-level domains.

Thank you to everyone for making this project worthwhile.

–JATF

May 18, 2010

Apache HTTP to HTTPS redirection with mod_rewrite

Filed under: Configuration,Web Publishing — jfreivald @ 5:00 pm

I was trying to enforce SSL for my mail server, which runs on a Hostmonster shared host. I already had SSL configured, and the https:// version of the mail server worked perfectly if I typed in the correct https:// URL. Finding a mod_rewrite configuration that would redirect http:// connections properly without throwing a Server Error 500 was not so easy.

There are thousands of ‘how to’ pages on getting this to work – but most of them don’t. It’s probably down to differences in Apache versions or in Hostmonster’s setup, but I was finally able to devise a solution that works. Place these lines in the .htaccess file of any directory you want to rewrite:

#Hostmonster doesn’t allow +FollowSymLinks, so we use +SymLinksIfOwnerMatch instead.
Options +SymLinksIfOwnerMatch
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://new.location.com/$1 [R=301,L]

This checks whether SSL is being used. If it isn’t, the request is redirected to the new location and the client receives the “permanently redirected” code (301). That helps other scripts, bookmarks, etc., update themselves auto-magically so they don’t make the same mistake twice.
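
To verify the redirect from the command line, a quick HEAD request against the plain-HTTP URL should show the 301 and the new Location header. The hostnames below are placeholders, and the exact response text will vary with your server.

curl -I http://your.old.host/webmail
# HTTP/1.1 301 Moved Permanently
# Location: https://new.location.com/webmail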

Cheers.

–JATF

June 5, 2009

Using rsync to update a website on hostmonster.com

Filed under: Configuration,Web Publishing — jfreivald @ 8:32 pm

I was working on a website with a software repository that contained hard links. Linking reduces disk usage on the server and, when mirroring with rsync, reduces the time needed to sync the entire mirror.  If you use scp or ftp to push to the server, it causes problems: those programs copy each link as a new file, meaning more bandwidth consumed, more time in transfer, and more disk space used on the server side – just what we wanted to avoid by using rsync in the first place.
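
If you have never looked at hard links up close, a quick demonstration in a scratch directory shows why they matter here. The filenames are made up for the example.

dd if=/dev/zero of=package-1.0.rpm bs=1M count=10   # a 10MB dummy 'package'
ln package-1.0.rpm package-latest.rpm               # a hard link, not a copy
ls -li package-1.0.rpm package-latest.rpm           # same inode number, link count 2
du -sh .                                            # about 10M on disk, not 20M
# scp or ftp would upload both names as separate 10MB files; rsync -H preserves the link.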

So how do we use rsync to push our web site to the server when we don’t have access to any of the rsyncd configuration files and can’t touch anything higher in the file tree than our home directory?  Sure, we could pay more for a dedicated server, but why?  Let’s use the tools we have as a simple user to accomplish what we need cheaply and easily.

First, get ssh access for your host server. Hostmonster requires a faxed copy of a picture ID and some other confirmation. Whatever your host requires, follow their procedures.

Test your ssh connection by opening a terminal and typing:

ssh username@hostname

It will ask whether you want to remember the host key; answer yes.

If you are able to enter your password and log in, you should land in your home directory on the host server. You should be able to see the files for your website with:

ls ~/public_html

Type the following commands:

mkdir ~/.ssh
chmod 700 ~/.ssh
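
If you want to double-check the result, the directory should now be accessible by you alone:

ls -ld ~/.ssh    # the permissions column should read drwx------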

Log out, return to your local computer’s prompt, and enter the following command:

ssh-keygen -t dsa -C youremailaddress

ssh-keygen will ask you some questions. Using the default file name (/home/username/.ssh/id_dsa) is fine. It will also prompt you for a passphrase. This guards your ssh key, and you only have to type it once per session, so make it a good one.

Once complete, you should have two new files in ~/.ssh: id_dsa and id_dsa.pub.   Create a configuration shortcut:

echo -e "host shortname\n\tHostName hostname\n\tUser username" >> ~/.ssh/config

Where shortname is any name that you want to use to represent your website, hostname is the host that you are uploading to, and username is your login name on that server.
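
Expanded, the entry that the echo appends to ~/.ssh/config looks like this once your substitutions are in place:

host shortname
    HostName hostname
    User username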

Now, send the public key to the server with:

scp ~/.ssh/id_dsa.pub username@hostname:~/.ssh/authorized_keys2
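
Note that this overwrites any existing ~/.ssh/authorized_keys2 on the server. If your local machine ships ssh-copy-id, it appends the key to ~/.ssh/authorized_keys instead, which is friendlier when other keys are already present; whether the server honors authorized_keys, authorized_keys2, or both depends on its sshd configuration.

ssh-copy-id -i ~/.ssh/id_dsa.pub username@hostname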

Now, to prevent yourself from having to type your password every time you want to copy files or log in, type:

ssh-add

and type your passphrase. This puts your ssh key into an ‘agent’, which will authorize you without a password for the rest of the time you are logged in.  After you log out you’ll have to run ssh-add again, but as long as you stay logged in you should be able to log into the hosting server with a simple:

ssh shortname

No password, no nothing, and all encrypted, too.  Log out of the server and get back to a local prompt.
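
If you are ever unsure whether the agent is still holding your key, ssh-add -l lists the fingerprints of the loaded identities:

ssh-add -l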

Change to the directory that holds the local copy of your web site, such as:

cd ~/public_html

To push the update to your web site, the command is:

rsync -e ssh -vramlHP --exclude '*.log' --numeric-ids --delete --delete-excluded --delete-after --delay-updates . shortname:~/public_html/

To pull the web site from the server down to your local directory, the command is:

rsync -e ssh -vralmHP --exclude '*.log' --numeric-ids --delete --delete-excluded --delete-after --delay-updates shortname:~/public_html/ .

It will transmit only the changed data, saving you time, and will properly handle hard and soft links, which will save you space on the server.
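
If the --delete options make you nervous, rsync’s -n (--dry-run) flag will report what would be transferred or removed without actually touching anything:

rsync -e ssh -vramlHPn --exclude '*.log' --numeric-ids --delete --delete-excluded --delete-after --delay-updates . shortname:~/public_html/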

Just to finish the job, I wrapped them in a pair of shell scripts:

mkdir ~/bin
echo -e '#!/bin/bash\n\nrsync -e ssh -vralHP --numeric-ids --delete --delete-excluded --delete-after --delay-updates localdirectory/ shortname:~/public_html/\n' >> ~/bin/pushsite
echo -e '#!/bin/bash\n\nrsync -e ssh -vralHP --numeric-ids --delete --delete-excluded --delete-after --delay-updates shortname:~/public_html/ localdirectory\n' >> ~/bin/pullsite
chmod +x ~/bin/pushsite ~/bin/pullsite

Where localdirectory is the path where the site is stored locally. Note the trailing slash on the push source: it tells rsync to copy the directory’s contents into public_html rather than nesting the directory itself inside it.

Now typing ‘pushsite’ at a terminal prompt will push the update, and ‘pullsite’ will pull it down from the server (assuming your local bin directory is in your PATH, which it is on most systems).  As long as you have already run ‘ssh-add’, you won’t even need a password.
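
If the shell comes back with “command not found” instead, ~/bin isn’t on your PATH yet; adding a line like this to ~/.bashrc (or your shell’s equivalent) fixes it:

export PATH="$HOME/bin:$PATH"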

Of course, this doesn’t back up databases, just static files.  But if you are dealing with static files, rsync can’t be beat.  It will push and pull only the changes, and will properly handle hard and soft links without duplicating the files.

Happy publishing.
