Joe's Linux Blog Linux Admin tips and tricks

June 9, 2010

Qt 4.6.3 and qt-creator 1.3.1-1 updates for Centos 5.5

Filed under: Centos,Configuration,qt — jfreivald @ 9:49 am

I’ve built the Qt 4.6.3 packages for Centos 5.5.

To install, as root, type:

rpm -ivh http://software.freivald.com/centos/software.freivald.com-1.0.0-1.noarch.rpm
yum update fontconfig fontconfig-devel qt4 qt4-devel qt4-doc qt4-postgresql qt4-odbc qt4-sqlite qt-creator

Also, I’ve updated the qt-creator package to 1.3.1-1.  The issue with the previous package was that in a 64-bit environment, qt-creator kept looking in /usr/lib/qtcreator for its plugins instead of /usr/lib64/qtcreator.  I added a link from /usr/lib/qtcreator to /usr/lib64/qtcreator in the x86_64 build.  This means that you should not install the 32-bit and 64-bit versions on the same machine – but I’m not sure that was ever a good idea in the first place. 🙂
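
For reference, the fix in the x86_64 spec amounts to a symlink roughly like this (the RPM creates it for you, so there is nothing to type on a normal install):

ln -s /usr/lib64/qtcreator /usr/lib/qtcreator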

Please post here if you have any issues with the Qt 4.6.3 or qt-creator 1.3.1-1 builds.

I’ve also posted the public key that I use to sign the packages here.  To use it, as root, type:

rpm --import http://software.freivald.com/centos/RPM-GPG-KEY-software.freivald.com

NOTE: If you use yum-priorities you will need to set this repository to the same priority level as ‘core’ for these packages to install properly.  You’ll know you have a priorities issue because ‘yum install qt-creator’ will scream at you about missing libraries.  Those libraries are in the versions I compile but not in the Centos core distribution, and if the priorities are wrong yum will pull the packages from core instead.
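
For example, something along these lines in the .repo file that the repository RPM drops into /etc/yum.repos.d/ should do it (the file name and repo id below are placeholders; edit whatever the RPM actually installed, and match the priority number you gave ‘core’):

# /etc/yum.repos.d/software.freivald.com.repo  (name and repo id may differ)
[software-freivald]
priority=1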

Cheers.

May 18, 2010

Apache HTTP to HTTPS redirection with mod_rewrite

Filed under: Configuration,Web Publishing — jfreivald @ 5:00 pm

I was trying to enforce SSL for my mail server, which runs on a Hostmonster shared host. I already had SSL configured, and the https:// version of the mail server worked perfectly if I typed the correct https:// URL. Finding a mod_rewrite configuration that would redirect http:// connections properly without throwing Server Error 500 was not so easy.

There are thousands of ‘how to’ pages on getting this to work – but most of them don’t. It’s probably a difference in Apache versions or in Hostmonster’s setup, but I was finally able to devise a solution that works. Place these lines in the .htaccess file of any directory you want to redirect:

#Hostmonster doesn’t allow +FollowSymLinks, so we use +SymLinksIfOwnerMatch instead.
Options +SymLinksIfOwnerMatch
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://new.location.com/$1 [R=301,L]

This checks whether SSL is being used. If it isn’t, the request is redirected to the new location with the “permanently redirected” code (301). That helps other scripts, bookmarks, etc., update themselves auto-magically so they don’t make the same mistake twice.
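
If you would rather not hard-code the host name, a variant that I believe should work (I have not tested it on Hostmonster) uses %{HTTP_HOST} so the same .htaccess can be dropped into any site:

Options +SymLinksIfOwnerMatch
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]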

Cheers.

–JATF

April 22, 2010

ALIX Centos Image

Filed under: ALIX,Centos,Installation — jfreivald @ 10:48 pm

UPDATE 12/31/2011: I have updated the Alix Centos 5 image to 5.7.  During the process, I removed the /etc/ssh/ssh_host* keys so that each host will generate its own keys on boot.  Note that during the ‘yum upgrade’ process I had to boost the memory on the virtual image: yum was unable to allocate enough RAM with only 256 MB available.  This means it is unlikely that an update from 5.5 to 5.7 can be performed in a single step on a live board with only 256 MB of RAM.

As for the Centos 6 image, it is proving troublesome because upstream removed all of the non-PAE kernel images for the 32-bit architecture.  I’ve attempted to custom-package a few kernels to complete the image, but none of them work to my satisfaction.

UPDATE 10/22/2010: Added a step in the ‘Using the Image’ section below. All active installations should replace their SSH host keys to prevent man-in-the-middle attacks. I will post an updated image with the keys removed when I get around to it. Until then, just perform the commands in item 8 of the Using the Image section.

UPDATE: A new version of the image is available.  ‘yum upgrade’ was run on it on June 12th, 2010, bringing it up to Centos 5.5.  The new image is located at http://software.freivald.com/centos/alix-centos-5.7-2gcf.gz.  There is also an MD5 sum file at http://software.freivald.com/centos/alix-centos-5.7-2gcf.md5.

I could not find my 2 GB card, so I used the original image, copied it to a 4 GB card, performed the update, and then copied only the first 2 GB back into the new image. Please provide feedback if the image does not work on a 2 GB card.

UPDATE: Hat-tip to @Cris. To get the VGA output to work on the 3d3 board you must add irqpoll as a kernel boot parameter.  See his comment for more information.

INFO: For those who are unfamiliar with Centos, it is a distribution that is binary compatible with Red Hat Enterprise Linux.

EDIT: We’ve been added to the ALIX web page. Thank you to the PC Engines crew for the testing and support.

I’ve been working with one of PC Engines’ ALIX 6e1 boards a bit lately.  It’s a 500 MHz i586 AMD Geode-based embedded board with 256 MB of RAM that sells for under $150. I was testing various distributions and found that Centos was pretty easy to adapt.  It wasn’t listed as supported on the PC Engines web site, so I wanted to contribute an image back to the community.

The image I’ve created has the following changes from a base install:

1.  It has no swap.

2.  It has the noatime and nodiratime options for all mounted partitions, although it uses ext3 because of the wall-wart-no-backup-power-for-shutdown configuration.

3.  Grub is configured for a 2-second timeout and uses the serial port as the console – both for grub and the kernel.  Hook up a terminal emulator set to 38400, 8N1 to view the boot sequence or access the console directly.

4.  /etc/inittab was modified to use the serial console.  xdm was also disabled.

5.  All console settings are set for 38400 because that is what the initial boot-up bios uses on the ALIX 6e1 that I have.

6.  /etc/securetty has been modified to allow login via /dev/ttyS0 (tty0 and vc/1 are also left open because I use VMWare to modify the image).

7.  Fortunately, due to the stock Centos LVM configuration, no changes were necessary to fstab or the initrd image.

8.  Only a base install was performed.  Several of the ‘default’ packages have been omitted (things like bluetooth, extra shells, the smart card reader daemon, procmail, cups, NetworkManager, etc.).  Of course they are still available via yum.

9.  Lots of the startup stuff is turned off (kudzu, gpm, netfs, iptables and others).  Use chkconfig to turn them back on if you want them.

10.  The root password is – yep, you guessed it: password

11. The eth0 port (next to the USB ports) is configured for DHCP. eth1 (next to the serial port) is configured for 192.168.1.50. The hardware MAC lines have been commented out so the configuration will work on any board, but there is a slight chance that the order of the ports will get reversed. This has never happened to me, but YMMV. You can use either port to get the box up and running with ssh or putty if you don’t want to use, or don’t have, a serial interface.

12.  The CF card I used was a 2 GB SanDisk Ultra 15 MB/s.  Because the image is LVM based, you can use the LVM tools to shrink or grow the volumes (see the sketch after this list).  Check out the LVM Howto for all the recipes you need.

13. I updated the packages using ‘yum update’ on the day it was created, so hopefully you won’t have as much downloading to do. I did not enable centosplus, extras, or any other repositories, which makes the image binary compatible with RHEL 5.4.
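
As a quick sketch for item 12 above, growing the root filesystem looks roughly like this, assuming the stock Centos volume names (VolGroup00/LogVol00) and that the volume group actually has free extents. On a card larger than 2 GB you would first have to grow the partition and run pvresize; see the comments and the LVM Howto for that part.

lvextend -L +512M /dev/VolGroup00/LogVol00   # grow the logical volume
resize2fs /dev/VolGroup00/LogVol00           # grow the ext3 filesystem to match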

Using the Image

1.  Download the latest image from http://software.freivald.com/centos/.

2.  Decompress the image (gunzip for the .gz images, bunzip2 for the older .bz2 ones).  Please verify the uncompressed image with md5sum: several users who had issues simply had bad downloads or decompressed the file improperly, and an md5sum check will catch these problems.  The md5sum file is in the same directory as the image.

3.  Copy it to your Compact Flash card using ‘dd if=<inputfile> of=<outputdevice> bs=4096’.  <inputfile> is the uncompressed image that you verified in step 2.  <outputdevice> is your Compact Flash card; you can find the correct one for your system with ‘sudo parted -l’.  You must use the whole-disk device, not a partition, i.e. /dev/sdc as opposed to /dev/sdc1.  This installs the boot loader and all the partitions needed for a running system.  If your Compact Flash is larger than 2 GB, see the comments section of this post for ways to use the rest of the space.

4.  Install the Compact Flash into the ALIX.

5.  Attach your favorite terminal program to the ALIX platform.  I use putty.exe under Windows or minicom under linux.

6.  Apply power to the unit.  It should boot without any fuss. If you don’t have a serial port, use eth0 (next to the USB ports) to have your DHCP router assign an address, or use eth1 (next to the serial port) for a static configuration. eth1 is configured for 192.168.1.50 and the connector auto-rolls the cable if it needs to, so configure your computer for something like 192.168.1.51 and ping until the system is online. Then use ssh, or putty.exe if you are using Windows, to access the unit.

7.  I recommend some changes: obviously, the root password.  Also, add an MD5 password to the grub configuration, since without one anyone with a serial cable can pass parameters to the kernel (see the example after this list).  You will probably want to add more software using yum.  You might also want to create some scratch space under /tmp, or some of the /var/cache directories, using tmpfs.  I didn’t do any of these because they are simple and different users will have different requirements, especially with the advancement of CF cards (wear leveling, 1,000,000+ writes/block, etc.).  You will probably also want to customize /etc/securetty for your installation.

8. On images earlier than 5.7, change the SSH server keys with:
$ sudo rm /etc/ssh/ssh_host_*
$ sudo /etc/init.d/sshd restart
(Hat tip to @pmoor for catching this one!)
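
For the grub password recommended in item 7, the idea is roughly this (the hash below is only a placeholder, generate your own):

grub-md5-crypt
# paste the resulting hash near the top of /boot/grub/grub.conf:
password --md5 $1$somesalt$xxxxxxxxxxxxxxxxxxxxxx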

With this setup, the initial boot-up takes 1:32 and leaves 193 MB of memory free. Enjoy.

–JATF

February 25, 2010

Qt 4.6.2 packages for Centos 5.4

Filed under: Centos,Configuration,Installation,qt,Tools — jfreivald @ 2:25 pm

UPDATE: New post for the new packages: http://joseph.freivald.com/linux/2010/06/09/qt-4-6-3-and-qt-creator-1-3-1-1-updates-for-centos-5-5/

The Qt4 packages for Centos are updated to 4.6.2 and Qt Creator is updated to 1.3.1.

To install:

rpm -ivh http://software.freivald.com/el/5/i386/os/software.freivald.com-2.0.0-0.el.noarch.rpm
yum update fontconfig fontconfig-devel qt4 qt4-devel qt4-doc qt4-postgresql qt4-odbc qt4-sqlite qt-creator

Verify that the versions are coming from software.freivald.com and enjoy. 🙂

November 2, 2009

Managing an openssl certificate authority with perl

Filed under: Uncategorized — jfreivald @ 12:04 pm

There are several good tutorials on how to set up a certificate authority with openssl, but once you have one in place, what is a good way to manage it?  Sure, there are some tools out there that can help, but I’ve found them all to be a bit of a pain, especially when it comes time to renew a bunch of user certificates.  For this purpose, a home-grown script is almost always better than a generic tool: scripting lets you customize every step of the process to your organization’s needs.  In this article I’ll show how I use simple scripts to make key generation and regeneration easy.

It’s worth noting that many people don’t need their own CA.  Generally, using a self-signed key or getting a key signed by a recognized authority is simpler and easier, but in some cases that isn’t true.  For example, at my office we have a server that is accessible via the Internet and contains proprietary information.  It’s behind a solid firewall and is pretty well protected.  The server is restricted to SSL only, but username/password logins constantly get hammered by idiots looking to break in.  By restricting the server to sessions authorized with an SSL certificate signed by our local CA, we can limit the users that connect.  Note that if we used a recognized authority (VeriSign et al.) instead of our own, we would still have the same problem; with our own CA, no other keys make it past the SSL authentication stage.  We noticed an 86% drop in hack attempts in the two weeks after we went to this setup on this particular server. YMMV.  We also gain the advantage that users don’t have to enter their passwords every time they access the server, and the system admin (me) doesn’t have to worry about whether users are circumventing the strong password requirements (see my previous post: Subversion, SSL and Apache for Secure, Passwordless, User-based repository access controls).

Our CA directory structure looks like this:

CA
  - certs
    - ca
    - user
    - server
  - private
    - ca
    - user
    - server
  - csr
    - ca
    - user
    - server
  - userp12

It’s a bit convoluted, but it works for our needs. As I said, that’s the beauty of scripting.
I use an openssl.cnf file to maintain all of the defaults and file locations. Here it is:

#
# OpenSSL example configuration file.
# This is mostly being used for generation of certificate requests.
#

# This definition stops the following lines choking if HOME isn't
# defined.
HOME                    = .
RANDFILE                = $ENV::HOME/.rnd

# Extra OBJECT IDENTIFIER info:
#oid_file               = $ENV::HOME/.oid
oid_section             = new_oids

# To use this configuration file with the "-extfile" option of the
# "openssl x509" utility, name here the section containing the
# X.509v3 extensions to use:
# extensions            =
# (Alternatively, use a configuration file that has only
# X.509v3 extensions in its main [= default] section.)

[ new_oids ]

# We can add new OIDs in here for use by 'ca' and 'req'.
# Add a simple OID like this:
# testoid1=1.2.3.4
# Or use config file substitution like this:
# testoid2=${testoid1}.5.6

####################################################################
[ ca ]
default_ca      = CA_default            # The default ca section

####################################################################
[ CA_default ]

dir             = .                     # Where everything is kept
certs           = $dir/certs            # Where the issued certs are kept
crl_dir         = $dir/crl              # Where the issued crl are kept
database        = $dir/index.txt        # database index file.
#unique_subject = no                    # Set to 'no' to allow creation of
 # several certificates with the same subject.
new_certs_dir   = $dir/newcerts         # default place for new certs.

certificate     = $dir/certs/ca/myca.crt         # The CA certificate
serial          = $dir/serial           # The current serial number
crlnumber       = $dir/crlnumber        # the current crl number
 # must be commented out to leave a V1 CRL
crl             = $dir/crl.pem          # The current CRL
private_key     = $dir/private/ca/myca.key       # The private key
RANDFILE        = $dir/private/.rand    # private random number file

x509_extensions = usr_cert              # The extensions to add to the cert

# Comment out the following two lines for the "traditional"
# (and highly broken) format.
name_opt        = ca_default            # Subject Name options
cert_opt        = ca_default            # Certificate field options

# Extension copying option: use with caution.
# copy_extensions = copy

# Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs
# so this is commented out by default to leave a V1 CRL.
# crlnumber must also be commented out to leave a V1 CRL.
# crl_extensions        = crl_ext

default_days    = 365                   # how long to certify for
default_crl_days= 30                    # how long before next CRL
default_md      = sha1                  # which md to use.
preserve        = no                    # keep passed DN ordering

# A few different ways of specifying how similar the request should look
# For type CA, the listed attributes must be the same, and the optional
# and supplied fields are just that :-)
policy          = policy_match

# For the CA policy
[ policy_match ]
countryName             = match
stateOrProvinceName     = match
organizationName        = match
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = optional

# For the 'anything' policy
# At this point in time, you must list all acceptable 'object'
# types.
[ policy_anything ]
countryName             = optional
stateOrProvinceName     = optional
localityName            = optional
organizationName        = optional
organizationalUnitName  = optional
commonName              = supplied
emailAddress            = supplied

####################################################################
[ req ]
default_bits            = 1024
default_md              = sha1
default_keyfile         = privkey.pem
distinguished_name      = req_distinguished_name
attributes              = req_attributes
x509_extensions = v3_ca # The extensions to add to the self-signed cert

# Passwords for private keys if not present they will be prompted for
# input_password = secret
# output_password = secret

# This sets a mask for permitted string types. There are several options.
# default: PrintableString, T61String, BMPString.
# pkix   : PrintableString, BMPString.
# utf8only: only UTF8Strings.
# nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings).
# MASK:XXXX a literal mask value.
# WARNING: current versions of Netscape crash on BMPStrings or UTF8Strings
# so use this option with caution!
# we use PrintableString+UTF8String mask so if pure ASCII texts are used
# the resulting certificates are compatible with Netscape
string_mask = MASK:0x2002

# req_extensions = v3_req # The extensions to add to a certificate request

[ req_distinguished_name ]
countryName                     = Country Name (2 letter code)
countryName_default             = US
countryName_min                 = 2
countryName_max                 = 2

stateOrProvinceName             = State or Province Name (full name)
stateOrProvinceName_default     = YourState

localityName                    = Locality Name (eg, city)
localityName_default            = YourCity

0.organizationName              = Organization Name (eg, company/unit)
0.organizationName_default      = YourOrganization

# we can do this but it is not needed normally :-)
1.organizationName              = Division
1.organizationName_default      = YouCanSkipThisOneIfYouWantTo

organizationalUnitName          = Organizational Unit Name (eg, section)
organizationalUnitName_default  = ThisOneCanBeSkippedToo

commonName                      = Common Name (eg, your name or your server\'s hostname)
commonName_max                  = 64

emailAddress                    = Email Address
emailAddress_max                = 64

# SET-ex3                       = SET extension number 3

[ req_attributes ]
#challengePassword              = A challenge password
#challengePassword_min          = 4
#challengePassword_max          = 20

#unstructuredName               = An optional company name

[ usr_cert ]

# These extensions are added when 'ca' signs a request.

# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.

basicConstraints=CA:FALSE

# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.

# This is OK for an SSL server.
# nsCertType                    = server

# For an object signing certificate this would be used.
# nsCertType = objsign

# For normal client use this is typical
# nsCertType = client, email

# and for everything including object signing:
# nsCertType = client, email, objsign

# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment

# This will be displayed in Netscape's comment listbox.
nsComment                       = "Signed by my private Certificate Authority"

# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer

# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move

# Copy subject details
# issuerAltName=issuer:copy

#nsCaRevocationUrl              = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName

[ v3_req ]

# Extensions to add to a certificate request

basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment

[ v3_ca ]

# Extensions for a typical CA

# PKIX recommendation.

subjectKeyIdentifier=hash

authorityKeyIdentifier=keyid:always,issuer:always

# This is what PKIX recommends but some broken software chokes on critical
# extensions.
#basicConstraints = critical,CA:true
# So we do this instead.
basicConstraints = CA:true

# Key usage: this is typical for a CA certificate. However since it will
# prevent it being used as an test self-signed certificate it is best
# left out by default.
# keyUsage = cRLSign, keyCertSign

# Some might want this also
# nsCertType = sslCA, emailCA

# Include email address in subject alt name: another PKIX recommendation
# subjectAltName=email:copy
# Copy issuer details
# issuerAltName=issuer:copy

# DER hex encoding of an extension: beware experts only!
# obj=DER:02:03
# Where 'obj' is a standard or added object
# You can even override a supported extension:
# basicConstraints= critical, DER:30:03:01:01:FF

[ crl_ext ]

# CRL extensions.
# Only issuerAltName and authorityKeyIdentifier make any sense in a CRL.

# issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always,issuer:always

[ proxy_cert_ext ]
# These extensions should be added when creating a proxy certificate

# This goes against PKIX guidelines but some CAs do it and some software
# requires this to avoid interpreting an end user certificate as a CA.

basicConstraints=CA:FALSE

# Here are some examples of the usage of nsCertType. If it is omitted
# the certificate can be used for anything *except* object signing.

# This is OK for an SSL server.
# nsCertType                    = server

# For an object signing certificate this would be used.
# nsCertType = objsign

# For normal client use this is typical
# nsCertType = client, email

# and for everything including object signing:
# nsCertType = client, email, objsign

# This is typical in keyUsage for a client certificate.
# keyUsage = nonRepudiation, digitalSignature, keyEncipherment

# This will be displayed in Netscape's comment listbox.
nsComment                       = "My CA Signed Certificate"

# PKIX recommendations harmless if included in all certificates.
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid,issuer:always

# This stuff is for subjectAltName and issuerAltname.
# Import the email address.
# subjectAltName=email:copy
# An alternative to produce certificates that aren't
# deprecated according to PKIX.
# subjectAltName=email:move

# Copy subject details
# issuerAltName=issuer:copy

#nsCaRevocationUrl              = http://www.domain.dom/ca-crl.pem
#nsBaseUrl
#nsRevocationUrl
#nsRenewalUrl
#nsCaPolicyUrl
#nsSslServerName

# This really needs to be in place for it to be a proxy certificate.
proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo

For generating a single user certificate, which we only do when someone new gets hired, we have a simple shell script. All information is entered by hand.
Here is generate-user-key:

#!/bin/bash

[ "$1" == "" ] && echo "Usage: generate-user-key <username>" && exit -1;

openssl req -config openssl.cnf -new -sha1 -newkey rsa:1024 -nodes -keyout private/user/$1.key -out csr/user/$1.pem
openssl ca -config openssl.cnf -policy policy_anything -extensions usr_cert -out certs/user/$1.pem -infiles csr/user/$1.pem
openssl pkcs12 -export -clcerts -in certs/user/$1.pem -inkey private/user/$1.key -out userp12/$1.p12

It checks that a username was given, then runs through the three openssl commands needed to produce the certificate: generate a new key and signing request, sign the request with the CA, and export a pkcs12 bundle for the user.
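
Typical usage is just the new hire’s login name (jsmith here is only an example); openssl prompts for the subject fields and the CA key password as it goes, and the finished bundle lands in userp12/:

./generate-user-key jsmith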

Now, as you can see in the openssl.cnf file, the user certificates only last 365 days, so every year we have to regenerate all the keys – most of them on the same day. To do that, we use a perl script, regenerate-all-user-keys, shown in pieces below.

#!/usr/bin/perl

We check to make sure the CA password is provided:

$ARGV[0] =~ /.+/ or die "usage: regenerate-user-keys <ca-password>";

$password = $ARGV[0];
chomp($password);

And grab all the user keys by checking the private/user directory and stripping off the extra characters.

@keys=`ls private/user/*.key`;
chomp for(@keys);
s/\.key//g for(@keys);
s/.*\/(.*$)/$1/g for(@keys);

Now for each key we’ll go through the regeneration process.

for $key(@keys) {

Grab the subject line from the existing certificate and re-format it for the command line.

 $subjects=`openssl x509 -in certs/user/$key.pem -noout -text | grep Subject:`;
 chomp ($subjects);
 $subjects =~ s/, /\//g;
 $subjects =~ s/\s+Subject: (.*)/\/$1/;

Make a copy of all of the keys and certificates in case we have a failure and need to roll back.

 system "cp private/user/$key.key private/user/$key.key.last";
 system "cp csr/user/$key.pem csr/user/$key.pem.last";
 system "cp certs/user/$key.pem certs/user/$key.pem.last";
 system "cp userp12/$key.p12 userp12/$key.p12.last";

Regenerate the key and signing request

 print "\n\nopenssl req -config openssl.cnf -new -sha1 -newkey rsa:1024 -nodes -keyout private/user/$key.key -out csr/user/$key.pem -multivalue-rdn -subj '$subjects'\n";
 system "openssl req -config openssl.cnf -new -sha1 -newkey rsa:1024 -nodes -keyout private/user/$key.key -out csr/user/$key.pem -multivalue-rdn -subj '$subjects'";

Check to be certain that the process ended correctly.  If it didn’t then roll back the keys.

 if ($? == -1) {
 print "failed to execute: $!\n";
 } elsif ($? & 127) {
 printf "child died with signal %d, %s coredump\n", ($? & 127), ($? & 128) ? 'with' : 'without';
 } else {
 $exitval = $? >> 8;
 if ($exitval != 0) {
 printf "child exited with value %d\n", $exitval;
 print "$key failed to regenerate.  Restoring old keys.\n";
 system "cp private/user/$key.key.last private/user/$key.key";
 system "cp csr/user/$key.pem.last csr/user/$key.pem";
 system "cp certs/user/$key.pem.last certs/user/$key.pem";
 system "cp userp12/$key.p12.last userp12/$key.p12";
 push(@errored_out, $key);
 next;
 }
 }

Sign the key using the password supplied on the command line.

 print "\n\nopenssl ca -config openssl.cnf -policy policy_anything -extensions usr_cert -out certs/user/$key.pem -in csr/user/$key.pem -multivalue-rdn -subj '$subjects' -batch -key '$password'\n";
 system "openssl ca -config openssl.cnf -policy policy_anything -extensions usr_cert -out certs/user/$key.pem -in csr/user/$key.pem -multivalue-rdn -subj '$subjects' -batch -key '$password'";
 if ($? == -1) {
 print "failed to execute: $!\n";
 } elsif ($? & 127) {
 printf "child died with signal %d, %s coredump\n", ($? & 127), ($? & 128) ? 'with' : 'without';
 } else {
 $exitval = $? >> 8;
 if ($exitval != 0) {
 printf "child exited with value %d\n", $exitval;
 print "$key failed to regenerate.  Restoring old keys.\n";
 system "cp private/user/$key.key.last private/user/$key.key";
 system "cp csr/user/$key.pem.last csr/user/$key.pem";
 system "cp certs/user/$key.pem.last certs/user/$key.pem";
 system "cp userp12/$key.p12.last userp12/$key.p12";
 push(@errored_out, $key);
 next;
 }
 }

And finally, output the pkcs12-formatted certificate to send to the users. Note that the output is encrypted with a passcode that has the username appended. This is sufficient for our needs, but probably not for everyone’s. To give each user a unique password, remove the -passout parameter and the system will prompt you each time it exports a pkcs12 certificate.

 print "\n\nopenssl pkcs12 -export -clcerts -in certs/user/$key.pem -inkey private/user/$key.key -out userp12/$key.p12 -des3 -passout 'pass:ourcode$key'\n";
 system "openssl pkcs12 -export -clcerts -in certs/user/$key.pem -inkey private/user/$key.key -out userp12/$key.p12 -des3 -passout 'pass:ourcode$key'";
 if ($? == -1) {
 print "failed to execute: $!\n";
 } elsif ($? & 127) {
 printf "child died with signal %d, %s coredump\n", ($? & 127), ($? & 128) ? 'with' : 'without';
 } else {
 $exitval = $? >> 8;
 if ($exitval != 0) {
 printf "child exited with value %d\n", $exitval;
 print "$key failed to regenerate.  Restoring old keys.\n";
 system "cp private/user/$key.key.last private/user/$key.key";
 system "cp csr/user/$key.pem.last csr/user/$key.pem";
 system "cp certs/user/$key.pem.last certs/user/$key.pem";
 system "cp userp12/$key.p12.last userp12/$key.p12";
 push(@errored_out, $key);
 next;
 }
 }

}

Output each of the certificates that failed for one reason or another so that they can be addressed manually.

for $fail(@errored_out) {
 print "WARNING: $fail did not regenerate.\n";
}

And remove any remaining backup files.

system "rm -f private/user/*.last";
system "rm -f csr/user/*.last";
system "rm -f certs/user/*.last";
system "rm -f userp12/*.last";

Pretty straightforward, and it makes regenerating hundreds of keys on a single day much less of a problem. A task left to the reader is to have the script e-mail each user their new key, using the e-mail address captured in the subject line.  Ours doesn’t do that because we have to get VP-level approval to send automated e-mails.
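
If you do want to go that route, a rough, untested sketch of the idea in bash, assuming mutt is installed for the attachment and that your mail setup allows it, might look something like this:

#!/bin/bash
# Sketch only: mail each user their regenerated pkcs12 bundle.
for cert in certs/user/*.pem; do
    user=$(basename "$cert" .pem)
    # pull the e-mail address out of the certificate
    email=$(openssl x509 -in "$cert" -noout -email | head -n 1)
    [ -z "$email" ] && continue
    echo "Your certificate has been renewed.  The attached .p12 replaces your old one." \
        | mutt -s "Updated certificate" -a "userp12/$user.p12" -- "$email"
done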

Cheers.

–JATF

September 8, 2009

Register before post

Filed under: Uncategorized — jfreivald @ 8:26 am

I had to change the settings to allow only registered users to post because I’ve been getting hammered by spam for the past week.  Sorry about that.  Life would be a lot nicer if people weren’t assholes.

August 24, 2009

Copying Nikon RAW pictures to JPEG

Filed under: Tools — jfreivald @ 2:19 pm

My wife loves her Nikon camera.  She also loves Photoshop.  The two go together really well.  She takes pictures in the raw “NEF” format, and Photoshop works miracles on them.  Unfortunately, for sharing snapshots it’s always a pain in the butt to convert each and every picture we want to share to JPEG so that everyone who doesn’t have Photoshop can use them.  Not to mention that even as a JPEG, a 10-megapixel photo is too big to e-mail to Aunt Laura on her dial-up.

The dichotomy is clear: Quality vs. Portability.

So, like everything else that takes forever and is tedious, I wrote a script.  This one walks the directory tree and checks whether each NEF file has a corresponding JPEG.  If it doesn’t, it creates one using ImageMagick; if it does, it skips the file and moves on to the next one.  Now she can have the super-high-quality RAW pictures, and I can e-mail them to grandma.  Once again, everyone is happy in Joeland.

In this case we also resize the image to about two megapixels, which is plenty for sharing photos but not great for printing blow-ups.  That’s okay, because we still have the original NEF to manipulate if we want to!

On Centos I had to do a ‘cpan install autodie’ and ‘cpan install IPC::System::Simple’ to get this to run right.  autodie is nice because if you hit Ctrl-C to stop the script, it will actually stop instead of continuing on to the next picture.

Here is the script:

#!/usr/bin/perl

use autodie qw(:all);

@FILES = split(/\n/, `find . | grep "\.NEF\$"`);  # every file under the current tree ending in .NEF
foreach $file(@FILES) {
    $rawfile = $file;
    $file =~ s/NEF$/small.JPG/;                   # the JPEG name that should exist alongside it
    if (-d $file) {
        print "Entering directory $file.\n";
    } elsif (! -f $file) {
        print "\t$rawfile -> $file\n";
        # ImageMagick: normalize and resize to roughly 2 megapixels
        system("convert \"$rawfile\" -normalize -resize \"\@2000000\" \"$file\"");
    } else {
        print "\t$rawfile skipped.  $file already exists.\n";
    }
}
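
I run it from the top of the photo tree; the script name here is just whatever you saved it as:

cd /path/to/photos
perl nef-to-small-jpeg.pl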

Enjoy!

--JATF

June 5, 2009

Using rsync to update a website on hostmonster.com

Filed under: Configuration,Web Publishing — jfreivald @ 8:32 pm

I was working on a website with a software repository that had hard links in it.  Linking reduces disk space usage on the server and, when mirroring with rsync, reduces the time needed to sync the entire mirror.  If you use scp or ftp to push to the server, those programs copy each link as a new file, which means more bandwidth consumed, more time in transfer, and more disk space used on the server side – exactly what we wanted to avoid by using rsync in the first place.

So how do we use rsync to push our web site to the server when we don’t have access to any of the rsyncd configuration files and can’t touch anything higher in the file tree than our home directory?  Sure, we could pay more for a dedicated server, but why?  Let’s use the tools we have as a simple user to accomplish what we need cheaply and easily.

First, get ssh access for your host server. Hostmonster requires a faxed copy of a picture ID and some other confirmation. Whatever your host requires, follow their procedures.

Test your ssh connection by opening a terminal and typing:

ssh username@hostname

It will ask you if you want to remember the host key and you should respond with a yes.

If you are able to enter your password and log in, you should be at your home directory on the host server. You should be able to see the files for your website with:

ls ~/public_html

Type the following commands:

mkdir ~/.ssh
chmod 700 ~/.ssh

Log out and return to your local computer’s prompt and enter the following commands:

ssh-keygen -t dsa -C youremailaddress

ssh-keygen will ask you some questions. Using the default file name (/home/username/.ssh/id_dsa) is fine. It will also prompt you for a passphrase. This guards your ssh key, and you only have to type it once per session, so make it a good one.

Once complete, you should have two new files in ~/.ssh: id_dsa and id_dsa.pub.   Create a configuration shortcut:

echo -e "host shortname\n\tHostName hostname\n\tUser username" >> ~/.ssh/config

Where shortname is any name that you want to use to represent your website, hostname is the host that you are uploading to, and username is your login name on that server.
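
With real values filled in, the resulting ~/.ssh/config entry looks something like this (the names here are made up):

host mysite
    HostName box123.hostmonster.com
    User joeuser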

Now, send the public key to the server with:

scp ~/.ssh/id_dsa.pub username@hostname:~/.ssh/authorized_keys2

Now, to prevent yourself from having to type your password every time you want to copy files or log in, type:

ssh-add

and type your passphrase. This puts your ssh key into an ‘agent’, which will authenticate you without a password for the rest of the time you are logged in.  After you log out you’ll have to run ssh-add again, but as long as you stay logged in you should be able to log into the hosting server with a simple:

ssh shortname

No password, no nothing, and all encrypted, too.  Log out of the server and get back to a local prompt.

Change to the directory that has the local copy of your web site, such as:

cd ~/public_html

To push the update to your web site, the command is:

rsync -e ssh -vramlHP --exclude '*.log' --numeric-ids --delete --delete-excluded --delete-after --delay-updates . shortname:~/public_html/

To pull the webserver down to your local directory, the command is:

rsync -e ssh -vralmHP --exclude '*.log' --numeric-ids --delete --delete-excluded --delete-after --delay-updates shortname:~/public_html/ .

It will transmit only the changed data, saving you time, and will properly handle hard and soft links, which will save you space on the server.

Just to finish the job, I put them into shell scripts:

mkdir ~/bin
echo -e '#!/bin/bash\n\nrsync -e ssh -vralHP --numeric-ids --delete --delete-excluded --delete-after --delay-updates localdirectory shortname:~/public_html/\n' >> ~/bin/pushsite
echo -e '#!/bin/bash\n\nrsync -e ssh -vralHP --numeric-ids --delete --delete-excluded --delete-after --delay-updates shortname:~/public_html/ localdirectory\n' >> ~/bin/pullsite
chmod +x ~/bin/pushsite ~/bin/pullsite

Where localdirectory is where you want the site stored locally.

Now typing ‘pushsite’ at a terminal prompt will push the update, and ‘pullsite’ will pull it down from the server (assuming your local bin dir is in your path, which it is on most systems).  Assuming you have previously done an ‘ssh-add’, you won’t even need to use a password.

Of course, this doesn’t back up databases, just static files.  But if you are dealing with static files, rsync can’t be beat.  It pushes and pulls only the changes, and it properly handles hard and soft links without duplicating the files.

Happy publishing.

May 24, 2009

Qt4 RPMs for Centos 5

Filed under: Centos,Installation,qt — jfreivald @ 11:03 am

UPDATE: New post for the new packages: http://joseph.freivald.com/linux/2010/06/09/qt-4-6-3-and-qt-creator-1-3-1-1-updates-for-centos-5-5/

UPDATE: Nokia released Qt 4.6.0 and qt-creator 1.3.0 today.  The new RPMs are compiled and stored in the repository; ‘yum update’ should be sufficient to grab them.  I also changed the directory to reflect Centos 5.4 instead of 5.3.  Let me know of any issues.

–JATF

Want to get the Qt SDK working on Centos 5.3?

Quick instructions:

rpm -ivh http://software.freivald.com/centos/software.freivald.com-1.0.0-1.noarch.rpm
yum update fontconfig fontconfig-devel qt4 qt4-devel qt4-doc qt4-postgresql qt4-odbc qt4-sqlite qt-creator

Verify that the versions are coming from software.freivald.com and install. 🙂

Longer story:

All of the RPMs described in this post are in a yum repository that you can access by installing this RPM.  It includes both x86_64 and i386 repositories that are selected automatically based on your architecture.

The first problem: the FcFreeTypeQueryFace issue, which is very well described here along with a manual compile-and-upgrade workaround.  I thought I would go one step further and create an RPM.  Here is what I did:

I started with this source file from fontconfig.org and this SRPM from redhat.com, modified the spec file from the SRPM because of a changed config file location, and created these RPM files for you to install.

The second problem: the Qt SDK is built against several libraries that are newer than those shipped with CentOS 5.3.  Rather than update those libraries, I’ve opted to compile RPMs of qt4 and qt-creator for CentOS 5.3; there are all-new packages for them in the repository. They upgrade the shipped version (4.2.1) to the new version. They should be binary compatible, since in theory Qt only breaks binary backwards compatibility on a major revision number change, but I don’t have any real way to test this. Feel free to post any problems you encounter.

The third problem: qt-creator isn’t included with the qt4 source.  I created it as its own package.  ‘yum install qt-creator’ to install it by itself.

Hopefully after installing the repository package, a

yum update

is all it takes, and everything should ‘just work’.

Oh, and feel free to use the ‘joewidgets’ and ‘joewidgets-devel’ packages.  They include some widgets that I use for other projects, primarily a back-port of the KLed widget to QLed that removes the KDE dependencies, and a multi-state button with configurable colors for each state.  The ‘devel’ package includes designer plugins that also work in qt-creator.  Source for those is published in the srpms directory.

–JATF

May 14, 2009

Subversion, SSL and Apache for Secure, Passwordless, User-based repository access controls

Filed under: Configuration — jfreivald @ 9:54 am

I get tired of passwords.  Password here, password there, everywhere a password.

I am a systems designer who does a lot of admin out of necessity.  When I get tired enough of a task, I eliminate it.

I use subversion on several projects to track documentation, source, configurations and more.  All of my servers are SSL only, and use user certificates for identity verification.  Here’s what I did to make passwordless, user-based restrictions on Subversion:

First, make sure that SSL is working on your Apache server (if you get a server error when you make an https:// request, but http://yourserver.com:443 works, then SSL is not set up right).
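
For reference, the SSL side boils down to something like this in /etc/httpd/conf.d/ssl.conf. This is only a minimal sketch: the stock mod_ssl config already contains most of it, the certificate paths are placeholders for your own files, and SSLCACertificateFile must point at the CA that signs your user certificates:

<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/yourserver.crt
    SSLCertificateKeyFile /etc/pki/tls/private/yourserver.key
    SSLCACertificateFile  /etc/pki/tls/certs/your-ca.crt
</VirtualHost>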

Put the following in /etc/httpd/conf.d/subversion.conf:

<Location /yourSubversionWebLocation>
   DAV svn
   SVNParentPath pathToYourSubversionFolder

   AuthzSVNAccessFile /etc/httpd/yourSubversionAccessFile

   SSLRequireSSL
   SSLVerifyClient require
   SSLUserName SSL_CLIENT_S_DN_Email
   SetOutputFilter DEFLATE
</Location>

Some people might want to use SSL_CLIENT_S_DN_CN as the user name instead of the email address, but in my case I use the CN for the person’s real full name in the certificate, so the email worked out better.  It also means I can have jsmith@company1.com and jsmith@company2.com without a collision.  Use whichever works for your situation.

Put your repository access information into your SVN access file like this:

[shared:/]
user1@yourplace.com = rw
user2@yourplace.com = rw
user3@yourplace.com = rw
readonlyuser@yourplace.com = r

[user1:/]
user1@yourplace.com = rw

[user2:/]
user2@yourplace.com = rw

[user3:/]
user3@yourplace.com = rw

Generate your User SSL keys. I do it with a script (lots of stuff on the web on how to set up your own CA, so I’m not re-hashing it here):

#!/bin/bash

[ "$1" == "" ] && exit -1;

openssl req -config openssl.myconf.cnf -new -sha1 -newkey rsa:1024 -nodes -keyout private/$1.key -out csr/$1.pem
openssl ca -config openssl.myconf.cnf -policy policy_anything -extensions usr_cert -out certs/$1.pem -infiles csr/$1.pem
openssl pkcs12 -export -clcerts -in certs/$1.pem -inkey private/$1.key -out userp12/$1.p12

Be sure to use the same email addresses that you use in the SVN authorization file.

To access subversion from the command line, put the following into your ~/.subversion/servers file.  Be certain that the files have strict permissions (e.g. chmod -R go-rwx ~user1/.subversion ~user1/certs):

[groups]
myrepositories = <your server address>
[myrepositories]
ssl-authority-files = /home/user1/certs/<your CA file>.crt
ssl-client-cert-file = /home/user1/certs/user1.p12
ssl-client-cert-password = <user's certificate password>

To access it with a browser, import the CA and user certificates into the browser of your choice.  Users should then be able to select your web page and auto-magically get the right repositories with the right permissions.  No passwords needed.

If you want a pretty web interface for your repository, try out websvn.  Use the same SSL configuration information in your websvn.conf as you did in your subversion.conf, follow the install instructions for websvn, put your repositories into your config.php, and you’re done.
