

Wednesday, May 22, 2013, 14:20

 

Did you know? Solaris 11 can perform a fast reboot, skipping the power-on self tests (POST) that have traditionally accompanied a reboot. So much for the coffee break!?

 


On x86 machines, this will automatically happen if you use the reboot command (or init 6). To force a full test cycle, and/or to get access to the boot order menu from the BIOS, you can use halt, followed by pressing a key.



On SPARC, the default configuration requires that you use reboot -f for a fast reboot. If you want fast reboot to be the default, you must change the SMF property config/fastreboot_default, as follows:



# svccfg -s system/boot-config:default 'setprop config/fastreboot_default=true'
# svcadm refresh svc:/system/boot-config:default
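
To double-check the new default before the next reboot, you can read the property back with svcprop; it should now report true:

# svcprop -p config/fastreboot_default svc:/system/boot-config:default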



To temporarily override the setting and reboot the slow way, you can use reboot -p, aka "reboot to PROM".

 

 

Time to find a new excuse for your coffee break!


Monday, May 20, 2013, 21:10

 

This article covers only the P2V migration of a physical server into an LDom; the installation and configuration of Oracle VM Server for SPARC are not covered here.

 

Some details:

  • The physical server is named ldom-guest (Solaris 10u3 – kernel 118833-33)
  • The control domain is named ldom-crtl (Solaris 11.1 SRU 5.5)

 

There are three phases to migrate from a physical system to a virtual system:

  • Collection phase: a file system image of the source system is created, along with the configuration information collected about it.
  • Preparation phase: a logical domain is created on the control domain.
  • Conversion phase: the file system image is converted into a logical domain (e.g., conversion from sun4u to sun4v).

 

To run this procedure, you need the ldmp2v tool (download patch p15880570 to obtain it; on Solaris 11, the tool is available out of the box).

 

Before starting, let's look at the configuration available on the control domain:

 

ldom-crtl # ldm -V

Logical Domains Manager (v 3.0.0.2)
        Hypervisor control protocol v 1.7
        Using Hypervisor MD v 1.3

System PROM:
        Hypervisor v. 1.10.0. @(#)Hypervisor 1.10.0.a 2011/07/15 11:51\015
        OpenBoot   v. 4.33.0. @(#)OpenBoot 4.33.0.b 2011/05/16 16:28

 

ldom-crtl # ldm ls -o console,network,disk primary
[…]

VCC
    NAME           PORT-RANGE
    primary-vcc0   5000-5100

VSW
    NAME           MAC          […]
    primary-vsw0   x:x:x:x:x:x  […]

VDS
    NAME           VOLUME       […]
    primary-vds0

[…]

 

A fairly standard configuration, no?

 

First step: Collection phase (runs on the physical source system)

 

To create a consistent file system image, I suggest booting the server in single-user mode. To store the file system image, I often use an NFS share.

 

ldom-guest # mount -F nfs myshare:/tempo /mnt

 

By default, the ldmp2v command creates a FLAR image.

 

ldom-guest # /usr/sbin/ldmp2v collect -d /mnt/ldom-guest
Collecting system configuration ...
Archiving file systems ...
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Creating the archive...
136740734 blocks
Archive creation complete.

ldom-guest # init 0

 

Second step: Preparation phase (runs on the control domain)

 

I start by creating a ZFS pool which will contain the data of the logical domain.

 

ldom-crtl # zpool create -m none ldom-guest cXtYdZ
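
A quick check that the pool is there before going further (nothing more than a sanity check):

ldom-crtl # zpool list ldom-guest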

 

I prefer to use manual mode to create the logical domain, so I edit the /etc/ldmp2v.conf file shown below.

 

ldom-crtl # cat /etc/ldmp2v.conf
# Virtual switch to use
VSW="primary-vsw0"
# Virtual disk service to use
VDS="primary-vds0"
# Virtual console concentrator to use
VCC="primary-vcc0"
# Location where vdisk backend devices are stored
BACKEND_PREFIX=""
# Default backend type: "zvol" or "file".
BACKEND_TYPE="zvol"
# Create sparse backend devices: "yes" or "no"
BACKEND_SPARSE="yes"
# Timeout for Solaris boot in seconds
BOOT_TIMEOUT=60

 

Right after mounting the NFS share, I create the logical domain, specifying the CPU count, memory size, and prefix (here, the name of the ZFS pool):

 

ldom-crtl # mount -F nfs myshare:/tempo /mnt
ldom-crtl # ldmp2v prepare -c 16 -M 16g -p ldom-guest -d /mnt/ldom-guest ldom-guest
Creating vdisks ...
Creating file systems ...
Populating file systems ...
136740734 blocks
Modifying guest OS image ...
Modifying SVM configuration ...
Unmounting file systems ...
Creating domain ...
Attaching vdisks to domain ldom-guest ...

 

For this example, the guest domain is configured with 16 vCPUs and 16 GB of memory (options -c and -M).
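
At this point you can check what ldmp2v actually built; ldm ls gives a quick view of the domain's resources:

ldom-crtl # ldm ls ldom-guest
ldom-crtl # ldm ls -o disk,network ldom-guest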

 

Final step: Conversion phase (runs on the control domain)

 

In the conversion phase, the logical domain uses the Oracle Solaris upgrade process to upgrade to the Oracle Solaris 10 OS. The upgrade operation removes all existing packages and installs the Oracle Solaris 10 sun4v packages, which automatically performs a sun4u-to-sun4v conversion. The convert phase can use an Oracle Solaris DVD ISO image or a network installation image. On Oracle Solaris 10 systems, you can also use the Oracle Solaris JumpStart feature to perform a fully automated upgrade operation.
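
For reference, if no JumpStart infrastructure is available, the same conversion can be driven from a Solaris 10 DVD ISO instead; a sketch, assuming the -i option of ldmp2v convert and an ISO path of your own (check ldmp2v(1M) on your version):

ldom-crtl # ldmp2v convert -i /export/iso/sol-10-u11-ga-sparc-dvd.iso \
  -d /mnt/ldom-guest ldom-guest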

 

On the JumpStart server (do you know JET?), I edit the JumpStart profile to add the following lines:

 

install_type    upgrade
root_device c0d0s0

 

Ready for conversion! One last command converts the SPARC architecture and starts the guest domain.

 

ldom-crtl # ldmp2v convert -j -n vnet0 -d /mnt/ldom-guest ldom-guest
Testing original system status ...
LDom ldom-guest started
Waiting for Solaris to come up ...
Using Custom JumpStart
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.

 

Connecting to console "server" in group "server" ....
Press ~? for control options ..
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Configured interface vnet0
Setting up Java. Please wait...
Extracting windowing system. Please wait...
Beginning system identification...
Searching for configuration file(s)...
Using sysid configuration file 10.x.x.x:/opt/SUNWjet/Clients/ldom-guest/sysidcfg
Search complete.
Discovering additional network configuration...
Completing system identification...
Starting remote procedure call (RPC) services: done.
System identification complete.
Starting Solaris installation program...
Searching for JumpStart directory...
Using rules.ok from 10.x.x.x:/opt/SUNWjet.
Checking rules.ok file...
Using begin script: Utils/begin
Using derived profile: Utils/begin
Using finish script: Utils/finish
Executing JumpStart preinstall phase...
Executing begin script "Utils/begin"...
Installation of ldom-guest at 00:41 on 10-May-2013
Loading JumpStart Server variables
Loading JumpStart Server variables
Loading Client configuration file
Loading Client configuration file
Running base_config begin script....
Running base_config begin script....
Begin script Utils/begin execution completed.
Searching for SolStart directory...
Checking rules.ok file...
Using begin script: install_begin
Using finish script: patch_finish
Executing SolStart preinstall phase...
Executing begin script "install_begin"...
Begin script install_begin execution completed.

WARNING: Backup media not specified.  A backup media (backup_media) keyword must be specified if an upgrade with disk space reallocation is required

Processing default locales
       - Specifying default locale (en_US.ISO8859-1)

Processing profile

Loading local environment and services

Generating upgrade actions
       - Selecting locale (en_US.ISO8859-1)

Checking file system space: 100% completed
Space check complete.

Building upgrade script

Preparing system for Solaris upgrade

Upgrading Solaris: 101% completed
       - Environment variables (/etc/default/init)

Installation log location
       - /a/var/sadm/system/logs/upgrade_log (before reboot)
       - /var/sadm/system/logs/upgrade_log (after reboot)

Please examine the file:
       - /a/var/sadm/system/data/upgrade_cleanup

It contains a list of actions that may need to be performed to complete the upgrade. After this system is rebooted, this file can be found at:
       - /var/sadm/system/data/upgrade_cleanup

Upgrade complete
Executing SolStart postinstall phase...
Executing finish script "patch_finish"...

Finish script patch_finish execution completed.
Executing JumpStart postinstall phase...
Executing finish script "Utils/finish"...
[…]
Terminated

Finish script Utils/finish execution completed.
The begin script log 'begin.log'
is located in /var/sadm/system/logs after reboot.

The finish script log 'finish.log'
is located in /var/sadm/system/logs after reboot.

syncing file systems... done
rebooting...
Resetting...

 

T5240, No Keyboard
Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.0.b, 16384 MB memory available, Serial #83470255.
Ethernet address 0:x:x:x:x:x, Host ID: 84f9a7af.

Boot device: disk0:a  File and args:
SunOS Release 5.10 Version Generic_118833-33 64-bit
Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: ldom-guest
Loading smf(5) service descriptions: 1/1
checking ufs filesystems
/dev/rdsk/c0d1s0: is logging.

ldom-guest console login:

 

And that's it. Simple, isn't it? You no longer have any excuse not to use LDoms.

 


 


Sunday, February 10, 2013, 20:14

 

In a previous article, I covered setting up a customized AI server for the SPARC architecture (deployment via WANboot). As promised, I will now cover setting up an AI server for the x86 architecture. From an installation standpoint, the difference between the two architectures lies mainly in the initialization phase, just before the installation itself starts.

 

On x86, the initialization phase is usually handled by the PXE/DHCP pair. You therefore need to configure a DHCP server able to answer the PXE request sent by the client. It can be a dedicated server or one shared with the AI server. In the example below, a single server handles both the DHCP and AI configuration.

 

There is a choice to make regarding the type of DHCP server: either the ISC DHCP server or the Solaris DHCP server. The configuration of an ISC DHCP server is automatic when it runs on the AI server itself. Nevertheless, I prefer to use the Solaris DHCP server.

 

Install the dhcp and AI packages on the installation server from our repository server (to create the repositories, read this article). Then simply initialize the DHCP server with the right information.

 

# pkg install install/installadm SUNWdhcs

 

# /usr/sbin/dhcpconfig -D -r SUNWfiles -p /var/dhcp
Created DHCP configuration file.
Created dhcptab.
Added "Locale" macro to dhcptab.
Added server macro to dhcptab - aiserver.
DHCP server started.

 

# dhcpconfig -N 192.168.10.0 -m 255.255.255.0 -t 192.168.10.1
Added network macro to dhcptab - 192.168.10.0.
Created network table. 

 

# pntadm -L
192.168.10.0

 

 

Once these steps are done, initialize the installation service for x86 clients.

 

# installadm create-service -a i386
Warning: Service svc:/network/dns/multicast:default is not online.
   Installation services will not be advertised via multicast DNS.

 

Creating service from: pkg:/install-image/solaris-auto-install
OK to use subdir of /export/auto_install to store image? [y/N]: y
DOWNLOAD              PKGS         FILES    XFER (MB)   SPEED
Completed              1/1       514/514  292.3/292.3 11.1M/s

 

PHASE                                      ITEMS
Installing new actions                   661/661
Updating package state database             Done
Updating image state                        Done
Creating fast lookup database               Done
Reading search index                        Done
Updating search index                        1/1

 

Creating i386 service: solaris11_1-i386
Image path: /export/auto_install/solaris11_1-i386

 

Refreshing install services
Warning: mDNS registry of service solaris11_1-i386 could not be verified.

 

Creating default-i386 alias

 

Setting the default PXE bootfile(s) in the local DHCP configuration
to:
bios clients (arch 00:00):  default-i386/boot/grub/pxegrub2
uefi clients (arch 00:07):  default-i386/boot/grub/grub2netx64.efi

 

Unable to update the DHCP SMF service after reconfiguration: DHCP
server is in an unexpected state: action [enable] state [offline]

 

The install service has been created and the DHCP configuration has
been updated, however the DHCP SMF service requires attention. Please
see dhcpd(8) for further information.

 

Refreshing install services
Warning: mDNS registry of service default-i386 could not be verified.

 

 

The service for x86 clients is now available.

 

# installadm list -m

Service/Manifest Name  Status   Criteria
---------------------  ------   --------

default-i386
   orig_default        Default  None

solaris11_1-i386
   orig_default        Default  None

 

 

As for customization, I refer you to the previous article for details. We create a specific manifest using the following commands.

 

# installadm export --service solaris11_1-i386 \
  --manifest orig_default \
  --output /export/auto_install/manifests/sol11.1-i386-001
# vi /export/auto_install/manifests/sol11.1-i386-001
# installadm create-manifest \
  -f /export/auto_install/manifests/sol11.1-i386-001 \
  -n solaris11_1-i386 -m sol11.1-i386-001 -d

 

 

For any further change to this manifest, use the following commands.

 

# vi /export/auto_install/manifests/sol11.1-i386-001
# installadm update-manifest \
  -f /export/auto_install/manifests/sol11.1-i386-001 \
  -n solaris11_1-i386 -m sol11.1-i386-001

 

 

To avoid keeping the default service and manifest, clean up the configuration a little.

 

# installadm delete-service default-i386
# installadm delete-manifest -n solaris11_1-i386 -m orig_default

 

 

Now we move on to creating the profile for a given client.

 

# sysconfig create-profile -o /export/auto_install/ref/profile.xml
# cd /export/auto_install/ref
# cp profile.xml ../clients/i386-01.xml
# vi /export/auto_install/clients/i386-01.xml

 

# installadm create-profile \
  -f /export/auto_install/clients/i386-01.xml \
  -n solaris11_1-i386 \
  -p i386-01 -c mac="00:xx:xx:xx:xx:04"

 

 

When creating the client, I enable serial console redirection as well as debug mode (remote ssh access during the installation). For more details on serial redirection, see this other article.

 

# installadm create-client -e 00xxxxxxxx04 -n solaris11_1-i386 \
-b console=ttya,livessh=enable,install_debug=enable

Warning: Service svc:/network/dns/multicast:default is not online.
   Installation services will not be advertised via multicast DNS.
Adding host entry for 00:xx:xx:xx:xx:04 to local DHCP configuration.

 

Local DHCP configuration complete, but the DHCP server SMF service is
offline. To enable the changes made, enable:
svc:/network/dhcp/server:ipv4.
Please see svcadm(1M) for further information.

 

 

The AI server configuration is complete and a client has been generated (with a specific profile).

 

# installadm list -c -p -m

 
Service Name      Client Address    Arch   Image Path
------------      --------------    ----   ----------
solaris11_1-i386 00:xx:xx:xx:xx:04  i386  /export/auto_install/solaris11_1-i386

 

Service/Manifest Name  Status   Criteria
---------------------  ------   --------

solaris11_1-i386
   sol11.1-i386-001   Default  None 

 

Service/Profile Name  Criteria
--------------------  --------

solaris11_1-i386
   i386-01      mac = 00:xx:xx:xx:xx:04

 

 

What remains is the DHCP configuration for this client.

 

# pntadm -A 192.168.10.123 -i 0100xxxxxxxx04 \
-m 0100xxxxxxxx04 -f "PERMANENT+MANUAL" 192.168.10.0

 

# pntadm -P 192.168.10.0 | grep 0100xxxxxxxx04
0100xxxxxxxx04  03  192.168.10.5  192.168.10.123   Zero   0100xxxxxxxx04

 

# dhtadm -g -A -m 0100xxxxxxxx04 -d \
":Include=`uname -n`:BootSrvA=192.168.10.5:BootFile=0100xxxxxxxx04:"

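Two final checks on the DHCP side: make sure the macro really landed in the dhcptab, and enable the DHCP SMF service that the installadm output reported as offline earlier.

# dhtadm -P | grep 0100xxxxxxxx04
# svcadm enable svc:/network/dhcp/server:ipv4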
 

 

The client installation can now begin. From the ILO of this x86 client, select the network card as the boot device, then in the GRUB menu pick entry 2 to start the installation.

 

SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
Remounting root read/write
Probing for device nodes ...
Preparing network image for use

 

Downloading solaris.zlib
--2013-01-30 20:51:33--  http://192.168.10.5:5555//export/auto_install/solaris11_1-i386/solaris.zlib
Connecting to 192.168.10.5:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 135808512 (130M) [text/plain]
Saving to: `/tmp/solaris.zlib'

100%[======================================>] 135,808,512 57.3M/s   in 2.3s   

2013-01-30 20:51:35 (57.3 MB/s) - `/tmp/solaris.zlib' saved [135808512/135808512]

 

Downloading solarismisc.zlib
--2013-01-30 20:51:35--  http://192.168.10.5:5555//export/auto_install/solaris11_1-i386/solarismisc.zlib
Connecting to 192.168.10.5:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11935744 (11M) [text/plain]
Saving to: `/tmp/solarismisc.zlib' 

100%[======================================>] 11,935,744  58.3M/s   in 0.2s   

2013-01-30 20:51:36 (58.3 MB/s) - `/tmp/solarismisc.zlib' saved [11935744/11935744]

 

Downloading .image_info
--2013-01-30 20:51:36--  http://192.168.10.5:5555//export/auto_install/solaris11_1-i386/.image_info
Connecting to 192.168.10.5.:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 228 [text/plain]
Saving to: `/tmp/.image_info'

100%[======================================>] 228         --.-K/s   in 0s     

2013-01-30 20:51:36 (19.5 MB/s) - `/tmp/.image_info' saved [228/228]

 

Done mounting image
Configuring devices.
Hostname: i386-01
Setting debug mode to enable
Service discovery phase initiated
Service name to look up: solaris11_1-i386
Service discovery over multicast DNS failed
Service solaris11_1-i386 located at 192.168.10.5:5555 will be used
Service discovery finished successfully
Process of obtaining install manifest initiated
Using the install manifest obtained via service discovery

 

i386-01 console login:
Automated Installation started
The progress of the Automated Installation will be output to the console
Detailed logging is in the logfile at /system/volatile/install_log

 

Press RETURN to get a login prompt at any time.

 

Installer will be run in debug mode
20:52:02    Using XML Manifest: /system/volatile/ai.xml
20:52:02    Using profile specification: /system/volatile/profile
20:52:02    Using service list file: /var/run/service_list
20:52:02    Starting installation.
20:52:02    0% Preparing for Installation
20:52:03    100% manifest-parser completed.
20:52:03    0% Preparing for Installation
20:52:03    1% Preparing for Installation
20:52:03    2% Preparing for Installation
20:52:03    4% Preparing for Installation
20:52:07    6% target-discovery completed.
20:52:07    Selected Disk(s) : c8t0d0
20:52:07    10% target-selection completed.
20:52:07    12% ai-configuration completed.
20:52:07    14% var-share-dataset completed.
20:52:30    16% Beginning IPS transfer
20:52:30    Creating IPS image
20:52:34     Startup: Retrieving catalog 'solaris' ... Done
20:52:36     Startup: Caching catalogs ... Done
20:52:37     Startup: Refreshing catalog 'site' ... Done
20:52:37     Startup: Refreshing catalog 'solaris' ... Done
20:52:40     Startup: Caching catalogs ... Done
20:52:40    Installing packages from:
20:52:40        solaris
20:52:40            origin:  http://192.168.10.5:8000/
20:52:40        site
20:52:40            origin:  http://192.168.10.5:8001/
20:52:41     Startup: Refreshing catalog 'site' ... Done
20:52:41     Startup: Refreshing catalog 'solaris' ... Done
20:52:44    Planning: Solver setup ... Done
20:52:45    Planning: Running solver ... Done
20:52:45    Planning: Finding local manifests ... Done
20:52:45    Planning: Fetching manifests:   0/408  0% complete
20:52:53    Planning: Fetching manifests: 100/408  24% complete
[…]
20:53:11    Planning: Fetching manifests: 408/408  100% complete
20:53:22    Planning: Package planning ... Done
20:53:23    Planning: Merging actions ... Done
20:53:26    Planning: Checking for conflicting actions ... Done
20:53:28    Planning: Consolidating action changes ... Done
20:53:30    Planning: Evaluating mediators ... Done
20:53:33    Planning: Planning completed in 52.04 seconds
20:53:33    Please review the licenses for the following packages post-install:
20:53:33      runtime/java/jre-7                       (automatically accepted)
20:53:33      consolidation/osnet/osnet-incorporation  (automatically accepted,
20:53:33                                                not displayed)
20:53:33    Package licenses may be viewed using the command:
20:53:33      pkg info --license <pkg_fmri>
20:53:34    Download:     0/60319 items    0.0/822.8MB  0% complete
[…]
21:00:44    Download: 60010/60319 items  822.0/822.8MB  99% complete (650k/s)
21:00:45    Download: Completed 822.79 MB in 431.69 seconds (1.9M/s)
21:01:00     Actions:     1/85295 actions (Installing new actions)
21:01:01    16% Transferring contents
21:01:01    19% Transferring contents
21:01:05     Actions: 13914/85295 actions (Installing new actions)
21:01:06    45% Transferring contents
21:01:10     Actions: 18060/85295 actions (Installing new actions)
21:01:15     Actions: 18534/85295 actions (Installing new actions)
[…]
21:09:55     Actions: 83977/85295 actions (Installing new actions)
21:10:00     Actions: 84781/85295 actions (Installing new actions)
21:10:01     Actions: Completed 85295 actions in 540.82 seconds.
21:10:01    Finalize: Updating package state database ...  Done
21:10:03    Finalize: Updating image state ...  Done
21:10:15    Finalize: Creating fast lookup database ...  Done
21:10:25    Version mismatch:
21:10:25    Installer build version: pkg://solaris/entire@0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z
21:10:25    Target build version: pkg://solaris/entire@0.5.11,5.11-0.175.1.1.0.4.0:20121106T001344Z
21:10:25    46% initialize-smf completed.
21:10:27    Setting console boot device property to ttya
21:10:27    Disabling boot loader graphical splash
21:10:27    Installing boot loader to devices: ['/dev/rdsk/c8t0d0s1']
21:10:32    Setting boot devices in firmware
21:10:32    54% boot-configuration completed.
21:10:32    55% update-dump-adm completed.
21:10:32    57% setup-swap completed.
21:10:32    58% device-config completed.
21:10:33    60% apply-sysconfig completed.
21:10:33    61% transfer-zpool-cache completed.
21:10:51    90% boot-archive completed.
21:10:51    92% transfer-ai-files completed.
21:10:52    99% create-snapshot completed.
21:10:52    Automated Installation succeeded.
21:10:52    System will be rebooted now
Automated Installation finished successfully
Auto reboot enabled. The system will be rebooted now
Log files will be available in /var/log/install/ after reboot
Jan 30 21:10:56 i386-01 reboot: initiated by root
WARNING: Fast reboot is not supported on this platform since some BIOS routines are in RAM
syncing file systems... done
rebooting...

 

No more excuses now: you can set up an AI server to deploy SPARC as well as i386 servers.

 


Monday, February 4, 2013, 21:24

 

After creating your repositories (a step-by-step method is available in a previous article), it is time to create your customized AI server. I will split this topic into two parts: one article on the SPARC architecture and another on the x86 architecture. Why? Because I use two different initialization methods, WANboot for SPARC and the PXE/DHCP pair for x86, so I prefer to treat the two architectures separately.

 

To receive and interpret the installation procedure of a SPARC client (via WANboot), the AI server must be configured correctly (web server, tftp server, and CGI script).

 

# pkg set-publisher -M '*' -G '*' -P -g http://10.xx.xx.xxx:8000 solaris
# pkg install network/tftp install/installadm

# svccfg -s system/install/server:default setprop all_services/port = 5555
# svcadm refresh svc:/system/install/server:default
 

# mkdir /var/ai/image-server/images/cgi-bin
# chmod 777 /var/ai/image-server/images/cgi-bin
# cp -pr /usr/lib/inet/wanboot/wanboot-cgi /var/ai/image-server/images/cgi-bin
 

# svccfg -s network/tftp/udp6 setprop \
  inetd_start/exec="/usr/sbin/in.tftpd -s /etc/netboot"

# svcadm refresh network/tftp/udp6
# inetadm -e network/tftp/udp6
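
Before going further, it is worth checking that the tftp service is online and picked up the new exec property:

# svcs network/tftp/udp6
# inetadm -l network/tftp/udp6 | grep exec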

 

Once these steps are done, initialize the installation service for SPARC clients.

 

# installadm create-service -a sparc
Warning: Service svc:/network/dns/multicast:default is not online.
   Installation services will not be advertised via multicast DNS.


Creating service from: pkg:/install-image/solaris-auto-install
OK to use subdir of /export/auto_install to store image? [y/N]: y

DOWNLOAD            PKGS      FILES    XFER (MB)   SPEED
Completed            1/1      45/45  237.8/237.8 11.5M/s 

PHASE                                      ITEMS
Installing new actions                   187/187
Updating package state database             Done
Updating image state                        Done
Creating fast lookup database               Done
Reading search index                        Done
Updating search index                        1/1


Creating sparc service: solaris11_1-sparc

 

Image path: /export/auto_install/solaris11_1-sparc

 

Service discovery fallback mechanism set up
Creating SPARC configuration file
Refreshing install services
Warning: mDNS registry of service solaris11_1-sparc could not be verified.

 

Creating default-sparc alias

 

Service discovery fallback mechanism set up
Creating SPARC configuration file
No local DHCP configuration found. This service is the default
alias for all SPARC clients. If not already in place, the following should
be added to the DHCP configuration:
Boot file: http://10.xx.xx.xxx:5555/cgi-bin/wanboot-cgi

 

Refreshing install services
Warning: mDNS registry of service default-sparc could not be verified.

 

The service for SPARC clients is now available.

 

# installadm list -m

Service/Manifest Name  Status   Criteria
---------------------  ------   --------

default-sparc
   orig_default        Default  None

solaris11_1-sparc
   orig_default        Default  None

 

Some customization is in order (up to you to decide what suits your needs); here is how I set things up. Note that the rest of the procedure relies on this directory tree.

  • The ref directory contains the reference profile for all clients
  • The manifests directory contains the manifests (one per Solaris 11 update)
  • The clients directory contains the customized profiles of each client

 

# cd /export/auto_install
# mkdir clients manifests ref

 

We create a specific manifest using the following commands.

 

# installadm export --service solaris11_1-sparc \
  --manifest orig_default \
  --output /export/auto_install/manifests/sol11.1-sparc-001
# vi /export/auto_install/manifests/sol11.1-sparc-001
# installadm create-manifest \
  -f /export/auto_install/manifests/sol11.1-sparc-001 \
  -n solaris11_1-sparc -m sol11.1-sparc-001 -d

 

For any further change to this manifest, use the following commands.

 

# vi /export/auto_install/manifests/sol11.1-sparc-001
# installadm update-manifest \
  -f /export/auto_install/manifests/sol11.1-sparc-001 \
  -n solaris11_1-sparc -m sol11.1-sparc-001

 

To avoid keeping the default service and manifest, clean up the configuration a little.

 

# installadm delete-service default-sparc
# installadm delete-manifest -n solaris11_1-sparc -m orig_default

 

Now we move on to creating the profile for a given client.

 

# sysconfig create-profile -o /export/auto_install/ref/profile.xml
# cd /export/auto_install/ref
# cp profile.xml ../clients/sparc-01.xml
# vi /export/auto_install/clients/sparc-01.xml

 

# installadm create-profile \
  -f /export/auto_install/clients/sparc-01.xml \
  -n solaris11_1-sparc \
  -p sparc-01 -c mac="00:1x:xx:xx:xx:f2"

 

What remains is creating the client.

 

# installadm create-client -e 001xxxxxxxf2 -n solaris11_1-sparc
Warning: Service svc:/network/dns/multicast:default is not online.
   Installation services will not be advertised via multicast DNS.

 

The AI server configuration is complete and a client has been generated against a specific manifest and profile.

 

# installadm list -c -p -m

 
Service Name      Client Address     Arch   Image Path
------------      --------------     ----   ----------
solaris11_1-sparc 00:1x:xx:xx:xx:F2  sparc  /export/auto_install/solaris11_1-sparc

 

Service/Manifest Name  Status   Criteria
---------------------  ------   --------

solaris11_1-sparc
   sol11.1-sparc-001   Default  None 

 

Service/Profile Name  Criteria
--------------------  --------

solaris11_1-sparc
   sparc-01      mac = 00:1x:xx:xx:xx:F2

 

From the SPARC client's OBP, set the WANboot parameters and start the installation.

 

{0} ok setenv network-boot-arguments host-ip=10.xx.xx.xxx,router-ip=10.xx.xx.1,subnet-mask=255.xxx.xxx.xxx,file=http://10.xx.xx.xxx:5555/cgi-bin/wanboot-cgi
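
Before booting, you can re-read the variable at the ok prompt to make sure it was stored as a single line:

{0} ok printenv network-boot-arguments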

 

{0} ok boot net - install

 

Boot device: /pci@0,600000/pci@0/pci@8/pci@0/network@2  File and args: - install
1000 Mbps full duplex  Link up
<time unavailable> wanboot info: WAN boot messages->console
<time unavailable> wanboot info: configuring /pci@0,600000/pci@0/pci@8/pci@0/network@2

 

1000 Mbps full duplex  Link up
<time unavailable> wanboot progress: wanbootfs: Read 368 of 368 kB (100%)
<time unavailable> wanboot info: wanbootfs: Download complete
Mon Dec 17 15:49:54 wanboot progress: miniroot: Read 243471 of 243471 kB (100%)
Mon Dec 17 15:49:54 wanboot info: miniroot: Download complete

 

SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.
Remounting root read/write
Probing for device nodes ...
Preparing network image for use

 

Downloading solaris.zlib
--2012-12-17 16:21:37--  http://10.xx.xx.xxx:5555/export/auto_install/solaris11_1-sparc//solaris.zlib
Connecting to 10.xx.xx.xxx:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 133076480 (127M) [text/plain]
Saving to: `/tmp/solaris.zlib'

100%[======================================>] 133,076,480 49.6M/s   in 2.6s   

2012-12-17 16:21:39 (49.6 MB/s) - `/tmp/solaris.zlib' saved [133076480/133076480]

 

Downloading solarismisc.zlib
--2012-12-17 16:21:39--  http://10.xx.xx.xxx:5555/export/auto_install/solaris11_1-sparc//solarismisc.zlib
Connecting to 10.xx.xx.xxx:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 11808768 (11M) [text/plain]
Saving to: `/tmp/solarismisc.zlib'

100%[======================================>] 11,808,768  63.0M/s   in 0.2s   

2012-12-17 16:21:40 (63.0 MB/s) - `/tmp/solarismisc.zlib' saved [11808768/11808768]

 

Downloading .image_info
--2012-12-17 16:21:40--  http://10.xx.xx.xxx:5555/export/auto_install/solaris11_1-sparc//.image_info
Connecting to 10.xx.xx.xxx:5555... connected.
HTTP request sent, awaiting response... 200 OK
Length: 81 [text/plain]
Saving to: `/tmp/.image_info'

100%[======================================>] 81          --.-K/s   in 0s     

2012-12-17 16:21:40 (7.02 MB/s) - `/tmp/.image_info' saved [81/81]

 

Done mounting image
Configuring devices.
Hostname: solaris
Service discovery phase initiated
Service name to look up: solaris11_1-sparc
Service discovery over multicast DNS failed
Service solaris11_1-sparc located at 10.xx.xx.xxx:5555 will be used
Service discovery finished successfully
Process of obtaining install manifest initiated
Using the install manifest obtained via service discovery

 

solaris console login:

Automated Installation started
The progress of the Automated Installation will be output to the console
Detailed logging is in the logfile at /system/volatile/install_log

Press RETURN to get a login prompt at any time.

16:22:08    Using XML Manifest: /system/volatile/ai.xml
16:22:08    Using profile specification: /system/volatile/profile
16:22:08    Using service list file: /var/run/service_list
16:22:08    Starting installation.
16:22:08    0% Preparing for Installation
16:22:08    100% manifest-parser completed.
16:22:09    0% Preparing for Installation
16:22:09    1% Preparing for Installation
16:22:09    2% Preparing for Installation
16:22:09    3% Preparing for Installation
16:22:09    4% Preparing for Installation
16:22:24    7% target-discovery completed.
16:22:24    Selected Disk(s) : c2t0d0
16:22:24    13% target-selection completed.
16:22:24    17% ai-configuration completed.
16:22:24    19% var-share-dataset completed.
16:22:41    21% target-instantiation completed.
16:22:41    21% Beginning IPS transfer
16:22:41    Creating IPS image
16:22:45     Startup: Retrieving catalog 'solaris' ... Done
16:22:48     Startup: Caching catalogs ... Done
16:22:48     Startup: Refreshing catalog 'site' ... Done
16:22:48     Startup: Refreshing catalog 'solaris' ... Done
16:22:51     Startup: Caching catalogs ... Done
16:22:51    Installing packages from:
16:22:51        solaris
16:22:51            origin:  http://10.xx.xx.xxx:8000/
16:22:51        site
16:22:51            origin:  http://10.xx.xx.xxx:8001/
16:22:51     Startup: Refreshing catalog 'site' ... Done
16:22:52     Startup: Refreshing catalog 'solaris' ... Done
16:22:56    Planning: Solver setup ... Done
16:22:56    Planning: Running solver ... Done
16:22:56    Planning: Finding local manifests ... Done
16:22:56    Planning: Fetching manifests:   0/365  0% complete
16:23:03    Planning: Fetching manifests: 100/365  27% complete
16:23:08    Planning: Fetching manifests: 253/365  69% complete
16:23:16    Planning: Fetching manifests: 365/365  100% complete
16:23:25    Planning: Package planning ... Done
16:23:26    Planning: Merging actions ... Done
16:23:29    Planning: Checking for conflicting actions ... Done
16:23:31    Planning: Consolidating action changes ... Done
16:23:34    Planning: Evaluating mediators ... Done
16:23:37    Planning: Planning completed in 45.22 seconds
16:23:37    Please review the licenses for the following packages post-install:
16:23:37      runtime/java/jre-7                       (automatically accepted)
16:23:37      consolidation/osnet/osnet-incorporation  (automatically accepted,
16:23:37                                                not displayed)
16:23:37    Package licenses may be viewed using the command:
16:23:37      pkg info --license <pkg_fmri>
16:23:38    Download:     0/51156 items    0.0/831.8MB  0% complete
16:23:43    Download:   837/51156 items    5.4/831.8MB  0% complete (1.1M/s)

[…]

16:29:37    Download: 50159/51156 items  828.7/831.8MB  99% complete (714k/s)
16:29:42    Download: 50971/51156 items  831.1/831.8MB  99% complete (619k/s)
16:29:43    Download: Completed 831.78 MB in 365.45 seconds (2.3M/s)
16:29:55     Actions:     1/73904 actions (Installing new actions)
16:30:00     Actions: 15949/73904 actions (Installing new actions

[…]

16:34:51     Actions: 72496/73904 actions (Installing new actions)
16:34:56     Actions: 72687/73904 actions (Installing new actions)
16:35:01     Actions: Completed 73904 actions in 305.77 seconds.
16:35:02    Finalize: Updating package state database ...  Done
16:35:04    Finalize: Updating image state ...  Done
16:35:16    Finalize: Creating fast lookup database ...  Done
16:35:24    Version mismatch:
16:35:24    Installer build version: pkg://solaris/entire@0.5.11,5.11-0.175.1.0.0.24.2:20120919T190135Z
16:35:24    Target build version: pkg://solaris/entire@0.5.11,5.11-0.175.1.1.0.4.0:20121106T001344Z
16:35:24    23% generated-transfer-1181-1 completed.
16:35:25    25% initialize-smf completed.
16:35:25    Boot loader type SPARC ZFS Boot Block does not support the ...
16:35:25    Installing boot loader to devices: ['/dev/rdsk/c2t0d0s0']
16:35:26    Setting boot devices in firmware
16:35:26    Setting openprom boot-device
16:35:27    35% boot-configuration completed.
16:35:27    37% update-dump-adm completed.
16:35:27    40% setup-swap completed.
16:35:27    42% device-config completed.
16:35:28    44% apply-sysconfig completed.
16:35:29    46% transfer-zpool-cache completed.
16:35:38    87% boot-archive completed.
16:35:38    89% transfer-ai-files completed.
16:35:39    99% create-snapshot completed.
16:35:39    Automated Installation succeeded.
16:35:39    System will be rebooted now

Automated Installation finished successfully
Auto reboot enabled. The system will be rebooted now
Log files will be available in /var/log/install/ after reboot
Dec 17 16:35:43 solaris reboot: initiated by root
Dec 17 16:35:50 solaris syslogd: going down on signal 15
syncing file systems... done
rebooting...

 

Configuring an AI server and customizing a client (manifest and profile) are fairly simple steps to put in place. It is high time you set up your own installation infrastructure for Solaris 11.

 


Thursday, January 31, 2013, 21:28

 

Here is the step-by-step procedure for creating the main repositories for Solaris 11. Nothing could be simpler... you will see!

 

We start by creating the various repositories.

 

# zfs create -o atime=off -o mountpoint=/repo rpool/repo
# pkgrepo create /repo/solaris
# pkgrepo create /repo/site
# pkgrepo create /repo/solarisstudio
# pkgrepo create /repo/cluster

 

The site repository will hold local IPS packages (in-house packages). To provision the other repositories, I use the packages available under support (valid support contract). The certificates are available at: https://pkg-register.oracle.com.

 

I assume the certificates are available in the /tmp directory of my server after downloading them.

 

# mkdir -m 0755 -p /var/pkg/ssl
# cp -i /tmp/Ora* /var/pkg/ssl
# ls /var/pkg/ssl
Oracle_Solaris_11_Support.certificate.pem
Oracle_Solaris_11_Support.key.pem
Oracle_Solaris_Cluster_4_Support.certificate.pem
Oracle_Solaris_Cluster_4_Support.key.pem
Oracle_Solaris_Studio_Support.certificate.pem
Oracle_Solaris_Studio_Support.key.pem

 

I also assume that my server can reach the Internet (through a web proxy). Then simply provision each repository as follows (the solaris repository is shown as an example).

 

# export http_proxy=http://my-proxy:8080/
# export https_proxy=https://my-proxy:8080/
# export PKG_SRC=https://pkg.oracle.com/solaris/support/
# export PKG_DEST=/repo/solaris

# pkgrecv --key /var/pkg/ssl/Oracle_Solaris_11_Support.key.pem \
--cert /var/pkg/ssl/Oracle_Solaris_11_Support.certificate.pem \
-m all-versions '*'

# pkgrepo refresh -s /repo/solaris 
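
Once the pkgrecv run is finished, a quick look at the repository confirms the publisher and the number of packages received:

# pkgrepo info -s /repo/solaris
# pkgrepo list -s /repo/solaris | head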

 

The solaris repository is now provisioned. The same operation must be repeated for every other repository (except the site repository). Once these operations are done, simply configure the repositories so they can be reached remotely (I use the http protocol).

 

We configure the default repository (server), which corresponds to solaris (the port choice is arbitrary).

 

# svccfg -s application/pkg/server setprop pkg/inst_root=/repo/solaris
# svccfg -s application/pkg/server setprop pkg/readonly=true
# svccfg -s application/pkg/server setprop pkg/port=8000
# svcadm refresh svc:/application/pkg/server:default
# svcadm enable svc:/application/pkg/server:default

 

For the other repositories, I simply duplicate the pkg-server.xml manifest and change the manifest name before importing them into the SMF database.

 

# cd /lib/svc/manifest/application/pkg
# cp pkg-server.xml pkg-cluster.xml
# cp pkg-server.xml pkg-studio.xml
# cp pkg-server.xml pkg-site.xml

 

# ls -l
total 59
-r--r--r--   1 root sys   3843 Jan 14 14:42 pkg-site.xml
-r--r--r--   1 root sys   3855 Jan 14 17:38 pkg-cluster.xml
-r--r--r--   1 root sys   2546 Oct 24 11:55 pkg-mdns.xml
-r--r--r--   1 root sys   3850 Oct 24 11:55 pkg-server.xml
-r--r--r--   1 root sys   3859 Jan 14 14:58 pkg-studio.xml
-r--r--r--   1 root sys   4651 Oct 24 11:58 pkg-system-repository.xml
-r--r--r--   1 root sys   2098 Oct 24 11:49 zoneproxyd.xml

 

# vi pkg-cluster.xml pkg-studio.xml pkg-site.xml

 

# svccfg import ./pkg-cluster.xml
# svccfg import ./pkg-studio.xml
# svccfg import ./pkg-site.xml
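
After the import, the new services should show up (still disabled at this point) alongside the default pkg server instance:

# svcs -a | grep application/pkg/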

 

A few changes are needed before enabling these repositories.

 

# svccfg -s application/pkg/site setprop pkg/inst_root=/repo/site
# svccfg -s application/pkg/site setprop pkg/port=8001
# svcadm refresh svc:/application/pkg/site:default
# svcadm enable svc:/application/pkg/site:default

 

# svccfg -s application/pkg/studio setprop pkg/inst_root=/repo/solarisstudio
# svccfg -s application/pkg/studio setprop pkg/port=8002
# svcadm refresh svc:/application/pkg/studio:default
# svcadm enable svc:/application/pkg/studio:default

 

# svccfg -s application/pkg/cluster setprop pkg/inst_root=/repo/cluster
# svccfg -s application/pkg/cluster setprop pkg/port=8003
# svcadm refresh svc:/application/pkg/cluster:default
# svcadm enable svc:/application/pkg/cluster:default

 

Finally, a quick update of my local publishers.

 

# pkg set-publisher -M '*' -G '*' -P -g http://10.xx.xx.100:8000/ solaris
# pkg set-publisher -M '*' -G '*' -P -g http://10.xx.xx.100:8001/ site
# pkg set-publisher -M '*' -G '*' -P -g http://10.xx.xx.100:8002/ solarisstudio
# pkg set-publisher -M '*' -G '*' -P -g http://10.xx.xx.100:8003/ ha-cluster

 

# pkg publisher
PUBLISHER         TYPE     STATUS P LOCATION
solaris           origin   online F http://10.xx.xx.100:8000/
site              origin   online F http://10.xx.xx.100:8001/
solarisstudio     origin   online F http://10.xx.xx.100:8002/
ha-cluster        origin   online F http://10.xx.xx.100:8003/

 

And there you go, done. Simple, isn't it? To verify access to the various repositories, simply test the URLs in a web browser (or with the wget command).
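
For example, against the depot serving the solaris repository on port 8000 (the versions/0 endpoint is a lightweight way to see that pkg.depotd answers; adjust the address to your own setup):

# wget -qO- http://10.xx.xx.100:8000/versions/0/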

 

 



Wednesday, January 30, 2013, 22:46

 

Since Solaris 11 update 1, the boot loader used on x86 platforms is GRUB2. The configuration file found in legacy GRUB (menu.lst) is replaced by a new file named grub.cfg. Editing this file directly is normally discouraged, so updates are made with the bootadm command.

 

If, like me, you use serial redirection (for remote console access) on x86 servers, you need to set the GRUB2 options correctly.

 

List the available configuration:

 

# bootadm list-menu
the location of the boot loader configuration files is: /rpool/boot/grub
default 0
console text
timeout 30
0 Oracle Solaris 11.1

 

Redirect the console to com1 (ttya):

 

# bootadm change-entry -i 0 kargs=console=ttya

 

Display the current configuration of entry 0:

 

# bootadm list-menu -i 0
the location of the boot loader configuration files is: /rpool/boot/grub
title: Oracle Solaris 11.1
kernel: /platform/i86pc/kernel/amd64/unix
kernel arguments: console=ttya
boot archive: /platform/i86pc/amd64/boot_archive
bootfs: rpool/ROOT/solaris

 

When installing a server with Solaris 11, you can connect to the server during the installation process. By default this feature is available on SPARC platforms only; on x86 platforms, a GRUB2 change is needed.

 

When setting up your client on the AI server, simply use the following syntax:

 

# installadm create-client -e 00xxxxxxxxxx -n solaris11_1-i386 \
-b console=ttya,livessh=enable,install_debug=enable

 

Nothing could be easier, right?

 

