As I mentioned in this post, I was having trouble getting Openfiler to use the physical volumes or volume groups I had created from the Unix command line.
I finally had time to give this another go and was stuck again until I decided to read the error.log produced by lighttpd and also the /var/log/messages file.
/var/log/messages was saying this:
Jul 9 12:00:02 domU-12-31-35-00-0A-41 crond(pam_unix)[1936]:
session closed for user openfiler
Jul 9 12:00:34 domU-12-31-35-00-0A-41 modprobe:
FATAL: Could not load /lib/modules/2.6.16-xenU/modules.dep:
No such file or directory
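(For reference, tailing both logs while retrying the volume creation from the web UI is the quickest way to catch this sort of thing. The lighttpd error.log location below is a guess; check your lighttpd.conf for the real path.)

tail -f /var/log/messages
tail -f /var/log/lighttpd/error.log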
I had tried to use the Xen-ready image from rPath, but it failed to boot under EC2, so I built the image from the extracted tarball instead.
I noticed the message log was complaining about a missing module in a directory which didn't exist either. That module was device-mapper (dm-mod). No device-mapper and lvcreate fails!!
So the Openfiler web admin tool was going through the motions of creating new volumes but failing, without any indication in the tool as to why.
I rechecked the Openfiler tarball and it doesn't include device-mapper for a kernel version EC2 can use, given EC2 boots a 2.6.16 Xen kernel. So the clunky solution was to copy the module files from another AMI instance running CentOS 4.
After running modprobe dm-mod, lvcreate worked and so did the Openfiler admin tool.
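For anyone hitting the same wall, the fix was roughly along these lines. The kernel version comes from the log message above; "donor-instance" is just a placeholder for whichever CentOS 4 AMI you copy the modules from.

# on the Openfiler instance, pull the modules across from the donor
scp -r root@donor-instance:/lib/modules/2.6.16-xenU /lib/modules/

# rebuild the module dependency file and load device-mapper
depmod -a
modprobe dm-mod

# confirm it loaded
lsmod | grep dm_mod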
Woot! Next stop: getting iSCSI running so I can get OCFS2 formatting the storage provided by Openfiler as a cluster block device for ASM to use.
Stay tuned…
Paul
Here is a dump of my screen showing how pvcreate, vgcreate and lvcreate should work.
[root@domU-12-31-36-00-31-73 ~]# umount /mnt
[root@domU-12-31-36-00-31-73 ~]# pvcreate /dev/sda2
Physical volume "/dev/sda2" successfully created
[root@domU-12-31-36-00-31-73 ~]# vgcreate /dev/sda2 vg
/dev/sda2: already exists in filesystem
New volume group name "sda2" is invalid
[root@domU-12-31-36-00-31-73 ~]# vgcreate vg /dev/sda2
Volume group "vg" successfully created
[root@domU-12-31-36-00-31-73 ~]# lvcreate -L4096M -n myvmdisk1 vg
Logical volume "myvmdisk1" created
[root@domU-12-31-36-00-31-73 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 937M 8.5G 10% /
[root@domU-12-31-36-00-31-73 ~]# mount /dev/vg/myvmdisk1 /oradata
mount: you must specify the filesystem type
[root@domU-12-31-36-00-31-73 ~]# mkfs -t ext3 /dev/vg/myvmdisk1
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
524288 inodes, 1048576 blocks
52428 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@domU-12-31-36-00-31-73 ~]# mount /dev/vg/myvmdisk1 /oradata
mount: mount point /oradata does not exist
[root@domU-12-31-36-00-31-73 ~]# mkdir /oradata
[root@domU-12-31-36-00-31-73 ~]# mount /dev/vg/myvmdisk1 /oradata
[root@domU-12-31-36-00-31-73 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 937M 8.5G 10% /
/dev/mapper/vg-myvmdisk1
4.0G 41M 3.7G 2% /oradata
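If you want to double-check the result, or have the volume mount itself again after a reboot, something like the following should do it. This isn't from the session above, so treat it as a sketch; the paths match the names used in the dump.

vgdisplay vg
lvdisplay /dev/vg/myvmdisk1

# make the mount persistent across reboots
echo "/dev/vg/myvmdisk1  /oradata  ext3  defaults  0 0" >> /etc/fstab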