- 'd' to delete a line
- 'b' to boot
To restore GRUB (and be able to boot Solaris again) when someone or something has overwritten it, do:
- boot from the install CD of Solaris
- For Solaris 10, select "6" to get a prompt in single-user mode
- restore the boot loader
installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0
NB. installboot is now obsolete.
See more information on BigAdmin.
/mnt/boot/solaris/bin/update_grub -R /mnt
where /mnt is the root one wants to boot.
To do so, first, locate the GRUB menu file:
$ bootadm list-menu
the location for the active GRUB menu is: /a/rpool/boot/grub/menu.lst
default 0
timeout 10
0 OpenSolaris 2008.11 snv_101b_rc2 X86 With Splash Screen
Notice in my case, the menu file is located in /a/rpool/boot/grub/menu.lst. Edit that file with your favorite text editor so that an entry such as:
title OpenSolaris 2009.06 snv_111b With Splash Screen
findroot (pool_rpool,1,a)
splashimage /boot/solaris.xpm
foreground d25f00
background 115d93
bootfs rpool/ROOT/opensolaris-1
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics
module$ /platform/i86pc/$ISADIR/boot_archive
becomes like this:
title OpenSolaris 2009.06 snv_111b text boot
findroot (pool_rpool,1,a)
bootfs rpool/ROOT/opensolaris-1
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
i.e. remove the splashimage, foreground and background lines and the console=graphics option from the file.
It's done! You can reboot. However, I recommend you always back up your old GRUB menu. NB. This can also be modified 'live' at boot time, by typing e to edit the GRUB menu when it is displayed.
pfexec bootadm set-menu default=3
The GRUB menu can be located using bootadm list-menu.
The upgrade process actually creates a new boot environment for the new upgrade. beadm seamlessly handles copying all relevant file systems and updating GRUB. To list boot environments:
$ beadm list
BE            Active Mountpoint Space Policy Created
--            ------ ---------- ----- ------ -------
opensolaris   -      -          7.57G static 2009-01-03 13:18
opensolaris-1 NR     /          3.59G static 2009-07-20 22:38
To mount an unmounted boot environment, do:
$ beadm mount name /some/where
To destroy a boot environment (this deletes the corresponding dataset with all its files, but only unshared files):
beadm destroy name
For instance, if you destroy opensolaris:
$ beadm destroy opensolaris
$ beadm list
BE            Active Mountpoint Space  Policy Created
--            ------ ---------- -----  ------ -------
opensolaris-1 NR     /          11.80G static 2009-07-20 22:38
You can also get information on disk space using:
$ df /
Filesystem               1K-blocks    Used Available Use% Mounted on
rpool/ROOT/opensolaris-1  14266663 7646247   6620416  54% /
reboot -f
This command should be faster... To transition between BEs, use init 6 or the luactivate command.
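On OpenSolaris, beadm itself can switch boot environments; a minimal sketch, assuming the BE name opensolaris-1 from the listing above:

```shell
# Mark the target boot environment as active on next boot
pfexec beadm activate opensolaris-1
# Verify: the target BE should now carry the 'R' (active on reboot) flag
beadm list
# Reboot into it
pfexec init 6
```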
Solaris cuts a partition into slices. Slices are numbered 0 to 7 and correspond to the 's' part of the device name (c0t0d0s3 refers to slice number 3). By convention, slice number 2 is reserved and refers to the entire disk.
Slices of a given partition may be listed with format.
partition> print
Current partition table (unnamed):
Total disk cylinders available: 39691 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 27046       13.00GB    (27047/0/0) 27263376
  1 unassigned    wm       0                0         (0/0/0)            0
  2     backup    wu       0 - 39690       19.08GB    (39691/0/0) 40008528
  3 unassigned    wm       0                0         (0/0/0)            0
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6 unassigned    wm       0                0         (0/0/0)            0
  7 unassigned    wm       0                0         (0/0/0)            0

partition> 3
Part      Tag    Flag     Cylinders        Size            Blocks
  3 unassigned    wm       0                0         (0/0/0)            0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl: 27047
Enter partition size[0b, 0c, 27047e, 0.00mb, 0.00gb]: 39690e
partition> print
Current partition table (unnamed):
Total disk cylinders available: 39691 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0 - 27046       13.00GB    (27047/0/0) 27263376
  1 unassigned    wm       0                0         (0/0/0)            0
  2     backup    wu       0 - 39690       19.08GB    (39691/0/0) 40008528
  3 unassigned    wm   27047 - 39690        6.08GB    (12644/0/0) 12745152
  4 unassigned    wm       0                0         (0/0/0)            0
  5 unassigned    wm       0                0         (0/0/0)            0
  6 unassigned    wm       0                0         (0/0/0)            0
  7 unassigned    wm       0                0         (0/0/0)            0

partition> label
Ready to label disk, continue? yes
partition> quit
format> volname
Enter 8-character volume name (remember quotes)[""]: secondar
Ready to label disk, continue? yes
axelle@boureautic:~# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c3d0
$ parted /dev/dsk/c5t1d0p0
GNU Parted 1.8.8
Using /dev/dsk/c5t1d0p0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Generic Ide (ide)
Disk /dev/dsk/c5t1d0p0: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type      File system  Flags
 1      1049kB  106MB  105MB  primary   ntfs         boot
 2      106MB   115GB  115GB  primary   ntfs
 3      115GB   273GB  157GB  primary
 4      273GB   500GB  227GB  extended               lba
 5      273GB   377GB  105GB  logical
 6      377GB   500GB  123GB  logical                solaris
The good news about ZFS is that it's as great as expected. Storage units may span several partitions: disks, devices, partitions and even files can be gathered into a single ZFS pool, and virtual disk spaces can then be carved out of that pool the way you want.
zpool create -f pool c0d0s6 c0d0s7
Devices can easily be set up for mirroring or RAIDZ. It's as simple as adding a keyword to the command. However, make sure mirroring or raidz is what you need. For instance, my c0d0s6 slice has 15G and c0d0s7 has 17G. If I mirror them, I basically "lose" 2G of the second slice. That's not something I want at home (at work, the answer might be different).
zpool create -f pool raidz c0d0s6 c0d0s7
zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
pool    87K  15,3G  24,5K  /pool
zpool destroy pool
zpool create -f pool c0d0s6 c0d0s7
zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
pool    87K  32,5G  24,5K  /pool
Trying ZFS on simulated disks
Don't have any available disk slices but want to try ZFS? It's possible to simulate pools with a file:
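The original example seems to be missing here; a minimal sketch of what it presumably looked like, using file-backed vdevs (file names and sizes are my own):

```shell
# Create two 100 MB files to act as fake disks (paths are arbitrary,
# but ZFS requires absolute paths for file vdevs)
mkfile 100m /tmp/zdisk1 /tmp/zdisk2
# Build a pool out of the files; ZFS accepts plain files as vdevs
pfexec zpool create testpool /tmp/zdisk1 /tmp/zdisk2
zpool status testpool
# Clean up when done
pfexec zpool destroy testpool
rm /tmp/zdisk1 /tmp/zdisk2
```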
It's very easy to set compression
zfs set compression=on pool
# zpool create -f pool c0d0s6 c0d0s7
# zfs create pool/axelle
# zfs create pool/opt
# zfs set mountpoint=/opt pool/opt
# zfs set compression=on pool
# zfs set mountpoint=/export/home/axelle pool/axelle
Note: before creating the pool, the two slices c0d0s6 & c0d0s7 should be backed up, unmounted and removed from /etc/vfstab. Then, once the pool is created, the original content can be restored. Also, mountpoints are accepted only if they exist: make sure /export/home/axelle exists first. To see if a given pool has compression on:
$ zfs list -o name,compression
NAME            COMPRESS
backup          off
backup/backup1  on
rpool           off
Quota, Reservations
To see if there is a quota on a ZFS pool:
$ zfs get quota rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  quota     none   default
It's a good idea to set quotas to make sure your partitions never get 100% full, which is a problem (I tested...). For this, ZFS speaks of "reservations": you reserve a given amount of space on the partition that should always remain free.
$ pfexec zfs set reservation=500m rpool
$ zfs get reservation rpool
NAME   PROPERTY     VALUE  SOURCE
rpool  reservation  500M   local
Share a pool on NFS
To share a pool on NFS:
zfs set sharenfs=on mypool
zfs get sharenfs mypool
sharemgr show -vp
$ zpool status
  pool: rpool
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the pool will no longer be accessible on older software versions.
 scrub: none requested
config:

        NAME      STATE  READ WRITE CKSUM
        rpool     ONLINE    0     0     0
          c3d0s0  ONLINE    0     0     0
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     41.3G  6.71G  1.42G  /a/rpool
rpool/ROOT                5.78G  6.71G    18K  legacy
rpool/ROOT/opensolaris    5.78G  6.71G  5.32G  /
rpool/dump                 895M  6.71G   895M  -
rpool/export              32.3G  6.71G    19K  /export
rpool/export/home         32.3G  6.71G   551M  /export/home
rpool/export/home/axelle  31.7G  6.71G  31.3G  /export/home/axelle
rpool/swap                 895M  7.50G  88.3M  -
The REFER column is the size the file system would occupy if it stood alone.
The MOUNTPOINT column indicates where the file system is to be mounted. It does not indicate the file system is actually mounted. To list more options, such as whether the file system is mounted, mountable, etc., do:
$ zfs list -o name,mountpoint,canmount,mounted
NAME                      MOUNTPOINT           CANMOUNT  MOUNTED
rpool                     /a/rpool             on        yes
rpool/ROOT                legacy               off       no
rpool/ROOT/opensolaris    /                    noauto    no
rpool/ROOT/opensolaris-1  /                    noauto    yes
rpool/dump                -                    -         -
rpool/export              /export              on        yes
rpool/export/home         /export/home         on        yes
rpool/export/home/axelle  /export/home/axelle  on        yes
rpool/swap
- Setting up snapshots: time-slider-setup
- Importing (mounting) a ZFS pool:
zpool import <poolname>
- Listing snapshots in a given pool:
zfs list -t snapshot
or even:
zfs list -t snapshot -o name,used
- Destroying a given snapshot (if your system is 100% full and you absolutely need some space):
zfs destroy <pool/fs@thesnapshot>
- Restoring a given snapshot:
zfs rollback -rRf <name>
- export the pool: sudo zpool export pool
- get the id of the disk (if you want to import by disk-id):
$ sudo zpool import
  pool: pool
    id: 12116846563890507123
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        pool                                               ONLINE
          ata-WDC_WD5000AADS-00S9B0_WD-WCAV90510116-part6  ONLINE
- import the disk by id:
$ sudo zpool import -d /dev/disk/by-id 12116846563890507123
$ pfexec dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/boureautic
  Savecore enabled: no
To set the size of the dump device on ZFS:
zfs set volsize=2G rpool/dump
$ swap -l
swapfile                  dev    swaplo  blocks     free
/dev/zvol/dsk/rpool/swap  182,2       8  1832952  1832952
To mount existing partitions:
- UFS: default file system for Solaris, no need to specify file system type: mount /dev/dsk/… /mountpoint
- FAT: use the 'pcfs' file system type. If the partition is a primary partition (number N, between 1 and 4 on PCs), you can simply mount it:
mount -F pcfs /dev/dsk/c0d0pN /mountpoint
where p0 means the whole disk and p1 to p4 refer to the primary fdisk partitions.
However, there's another way to address that partition: c0d0p0:<letter or number>. The letter ranges from c to z, and the number starts at 1. To select the first FAT partition, c0d0p0:1 or c0d0p0:c will do the trick; for the second, c0d0p0:2 or c0d0p0:d. Note it's always p0. Beware: the letter won't always match the Windows drive letter. If your first drive (C:\) is NTFS and D:\ and E:\ are FAT, the first FAT partition is D:\ ... but to mount it in Solaris, use c0d0p0:1 or c0d0p0:c!
This method is particularly useful for mounting partitions located within an extended partition, because there's no way to address them directly with a c0d0pN.
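For instance, mounting the first FAT partition with the letter scheme might look like this (the device name and mount point are assumptions):

```shell
# Mount the first FAT partition on the disk, wherever it sits in the
# partition table (primary or logical): always p0, plus a letter/number
pfexec mount -F pcfs /dev/dsk/c0d0p0:c /mnt/fat
# Later, unmount it
pfexec umount /mnt/fat
```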
- NFS: useful command: sharemgr show -vp
- NTFS: not supported by Solaris
For automatic mounting, add an entry to /etc/vfstab:
/dev/dsk/c0d0p3 /dev/rdsk/c0d0p3 /mnt/win_e pcfs 3 yes -
To mount a file as a filesystem, use lofiadm (loopback file driver). I haven't tried that yet, but see instructions here.
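A sketch of how lofiadm is typically used, assuming a hypothetical ISO image path and mount point (untested here, as noted above):

```shell
# Associate the file with a loopback device; lofiadm prints the device name
pfexec lofiadm -a /export/home/axelle/image.iso
# Suppose it printed /dev/lofi/1: mount it as an ISO 9660 (hsfs) file system
pfexec mount -F hsfs -o ro /dev/lofi/1 /mnt/iso
# When done, unmount and release the loopback device
pfexec umount /mnt/iso
pfexec lofiadm -d /dev/lofi/1
```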
- Samba: this is a nice solution to mount remote Windows shares.
pfexec mount -F smbfs -o uid=username //host/share /mntpoint
The partition will be mounted for the specified username.
To do so, the samba client service must be started:
pfexec svcadm enable svc:/network/smb/client:default
NB. Samba uses the following packages:
system SUNWsmbau    samba - A Windows SMB/CIFS fileserver for UNIX (Usr)
system SUNWsmbfskr  SMB/CIFS File System client support (Kernel)
system SUNWsmbfsr   SMB/CIFS File System client support (Root)
system SUNWsmbfsu   SMB/CIFS File System client support (Usr)
and for the server:
system SUNWsmbskr   SMB Server (Kernel)
system SUNWsmbsr    SMB Server (Root)
system SUNWsmbsu    SMB Server (Usr)
system SUNWsmbau    samba - A Windows SMB/CIFS fileserver for UNIX (Usr)
Plug it in, and then check where it has been mounted using df -h. My mobile phone is mounted in /rmdisk/noname .
Plug it in. It automatically mounts in /media/IOMEGA_HDD on my system.
Linux Zone on OpenSolaris
Below, I shall rather detail a few common zone commands. Listing zones:
$ zoneadm list -vc
  ID NAME        STATUS     PATH                       BRAND   IP
   0 global      running    /                          native  shared
   - lzonelinux  installed  /a/rpool/zones/lzonelinux  lx      shared
...
$ zoneadm list -vc
  ID NAME        STATUS     PATH                       BRAND   IP
   0 global      running    /                          native  shared
   1 lzonelinux  running    /a/rpool/zones/lzonelinux  lx      shared
- running: means the OS in the zone is up and running
- installed: means the zone is operational, but it is not running currently
$ zonecfg -z lzonelinux
zonecfg:lzonelinux> export
create -b
set zonepath=/rpool/zones/lzonelinux
set brand=lx
set autoboot=false
set ip-type=shared
add net
set address=x.y.z.w/24
set physical=yukonx0
end
add attr
set name=kernel-version
set type=string
set value=2.6
end
Another solution: zonecfg, then info.
zonecfg -z lzonelinux
WARNING: you do not have write access to this zone's configuration file; going into read-only mode.
zonecfg:lzonelinux> info
zonename: lzonelinux
zonepath: /a/rpool/zones/lzonelinux
brand: lx
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
hostid:
net:
        address: x.y.z.w/24
        physical: yukonx0
        defrouter not specified
attr:
        name: kernel-version
        type: string
        value: 2.6
Note that it is not possible to modify the zone path of an installed zone. The zone must first be uninstalled:
pfexec zoneadm -z lzonelinux uninstall
Are you sure you want to uninstall zone lzonelinux (y/[n])? y
To boot an (installed) zone:
pfexec zoneadm -z lzonelinux bootThen login:
pfexec zlogin lzonelinux
[Connected to zone 'lzonelinux' pts/7]
Welcome to your shiny new Linux zone.
...
To stop (halt) a zone:
pfexec zoneadm -z lzonelinux halt