Wednesday, July 31, 2013, 21:11

 

I already wrote a similar article about Solaris 11 and Zones (link). Today I will describe how to configure several guest LDoms, with an emphasis on network configuration (several VLANs).

 

 

In this example, there are 3 LDoms running on a dedicated system exposed to the external networks.

  • The control domain runs in 4 VLANs (front, admin, backup, interconnect) - OS Solaris 11.1
  • Guest domain 1 runs in 4 VLANs (front, admin, backup, interconnect) - OS Solaris 10u11
  • Guest domain 2 runs in 3 VLANs (front, admin, backup) - OS Solaris 10u10

 

VLAN information:

  • VLAN ID 1 : network 192.168.1.0/24 - front
  • VLAN ID 2 : network 192.168.2.0/24 - admin
  • VLAN ID 3 : network 192.168.3.0/24 - backup
  • VLAN ID 4 : network 192.168.4.0/24 - interconnect


Addresses for the control domain

  • VLAN ID 1 : 192.168.1.10 - defaultrouter 192.168.1.1
  • VLAN ID 2 : 192.168.2.10
  • VLAN ID 3 : 192.168.3.10
  • VLAN ID 4 : 192.168.4.10

 

Let's go... One moment, though: the switch side of the network must already be configured (ask your network team!).

 

 

Step 1: Create the link aggregation and the VLAN configuration on the control domain

 

My system (a SPARC T4-2) includes 2 NICs (10 GbE). There is no network configuration yet (I connect through the ILOM).

 

# dladm show-phys
LINK       MEDIA         STATE      SPEED  DUPLEX    DEVICE
[...]
net8       Ethernet      unknown    0      unknown   ixgbe1
net9       Ethernet      unknown    0      unknown   ixgbe0
[...] 

 

I create a basic link aggregation (with LACP) using the 2 NICs.

 

# dladm create-aggr -P L2,L3 -L active -l net8 -l net9 aggr0

 

I quickly check the status of the aggregation.

 

# dladm show-link
LINK       CLASS     MTU    STATE    OVER
[...]
net8       phys      1500   up       --
net9       phys      1500   up       --
[...]
aggr0      aggr      1500   up       net8 net9

 

# dladm show-aggr -x
LINK   PORT  SPEED    DUPLEX  STATE  ADDRESS            PORTSTATE
aggr0    --  10000Mb  full    up     90:xx:xx:xx:xx:x8  --
       net8  10000Mb  full    up     90:xx:xx:xx:xx:x8  attached
       net9  10000Mb  full    up     90:xx:xx:xx:xx:x9  attached

 

Next, I create one VLAN link for each VLAN ID.

 

# dladm create-vlan -l aggr0 -v 1 front0
# dladm create-vlan -l aggr0 -v 2 admin0
# dladm create-vlan -l aggr0 -v 3 backup0
# dladm create-vlan -l aggr0 -v 4 interco0 

 

# dladm show-vlan
LINK          VID   OVER      FLAGS
front0        1     aggr0     -----
admin0        2     aggr0     -----
backup0       3     aggr0     -----
interco0      4     aggr0     -----

 

# ipadm create-ip front0
# ipadm create-addr -T static -a local=192.168.1.10/24 front0/v4
# ipadm create-ip admin0
# ipadm create-addr -T static -a local=192.168.2.10/24 admin0/v4
# ipadm create-ip backup0
# ipadm create-addr -T static -a local=192.168.3.10/24 backup0/v4
# ipadm create-ip interco0
# ipadm create-addr -T static -a local=192.168.4.10/24 interco0/v4 

 

# ipadm
NAME           CLASS/TYPE STATE  UNDER  ADDR
admin0         ip         ok     --     --
   admin0/v4   static     ok     --     192.168.2.10/24
backup0        ip         ok     --     --
   backup0/v4  static     ok     --     192.168.3.10/24
front0         ip         ok     --     --
   front0/v4   static     ok     --     192.168.1.10/24
interco0       ip         ok     --     --
   interco0/v4 static     ok     --     192.168.4.10/24
lo0            loopback   ok     --     --
   lo0/v4      static     ok     --     127.0.0.1/8
   lo0/v6      static     ok     --     ::1/128
[...]
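The four create-ip/create-addr pairs above repeat one pattern per VLAN. When scripting this kind of setup, the commands can be generated with a small loop (a sketch; the link names and subnets are the ones listed earlier):

```shell
# Generate the per-VLAN ipadm commands (link:vlan-id pairs as defined above)
for entry in front0:1 admin0:2 backup0:3 interco0:4; do
  link=${entry%%:*}
  id=${entry##*:}
  echo "ipadm create-ip $link"
  echo "ipadm create-addr -T static -a local=192.168.$id.10/24 $link/v4"
done
```

Piping the output to sh (or just reading it) makes it easy to keep the per-VLAN naming consistent.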

 

Don't forget the default route configuration.

 

# route add -p default 192.168.1.1 -ifp front0
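To double-check the result, both the active routing table and the persistent route can be inspected (a quick sanity check; the exact output depends on your system):

```shell
# Show the active routing table (look for the default entry on front0)
netstat -rn
# Show the routes that were persisted with -p
route -p show
```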

 

 

Step 2: Create the virtual switch and configure a vnet for each guest domain

 

I create one virtual switch that carries all the VLANs.

 

# ldm add-vswitch net-dev=aggr0 vid=1,2,3,4 primary-vsw0 primary

 

For guest domain ldom1, I create 4 vnets.

 

# ldm add-vnet pvid=1 id=0 vnet0 primary-vsw0 ldom1
# ldm add-vnet pvid=2 id=1 vnet1 primary-vsw0 ldom1
# ldm add-vnet pvid=3 id=2 vnet2 primary-vsw0 ldom1
# ldm add-vnet pvid=4 id=3 vnet3 primary-vsw0 ldom1

 

For guest domain ldom2, I create 3 vnets.

# ldm add-vnet pvid=1 id=0 vnet0 primary-vsw0 ldom2
# ldm add-vnet pvid=2 id=1 vnet1 primary-vsw0 ldom2
# ldm add-vnet pvid=3 id=2 vnet2 primary-vsw0 ldom2
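Both guests follow the same pattern: one vnet per VLAN, with pvid set to the VLAN ID and a device id that must be unique within each domain. If you script such setups, the commands can be generated with a loop (a sketch; domain and vswitch names as in this post):

```shell
# Emit the ldm add-vnet commands: one vnet per VLAN, unique id per domain
for guest in ldom1:4 ldom2:3; do
  name=${guest%%:*}
  nvlans=${guest##*:}
  i=0
  while [ "$i" -lt "$nvlans" ]; do
    echo "ldm add-vnet pvid=$((i + 1)) id=$i vnet$i primary-vsw0 $name"
    i=$((i + 1))
  done
done
```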

 

 

Conclusion: We hope this step-by-step guide will give you some ideas for future consolidation with Oracle VM Server for SPARC. With Oracle Solaris 11 network capabilities (aka Crossbow), you can easily set up fairly complex environments with simple network configuration.

 

 


 
By gloumps - Published in: network

Wednesday, May 22, 2013, 20:37

 

So what happened last Wednesday? Aren't you following the news from the Guses association!? That's really a shame for you...

 

In collaboration with Oracle, the Guses took an active part in the Solaris TechDay 2013. I won't summarize the evening here, since Eric Bezille and Axel Paratre have already written about it. I will simply take advantage of this article to publish the two Guses presentations (along with those from previous editions, already available on my blog).

 

Special thanks to René Garcia (Unix engineer at PSA Peugeot Citroen) for his ZFS presentation. Thanks also to all of you for attending; I hope to see you at one of the next evenings we organize.

 

The TechDay 2013 presentations:

 

In case it escaped your notice, the Guses also took part in the previous Solaris TechDays... A quick reminder of those presentations...

 

The TechDay 2011 presentations:

 

The TechDay 2012 presentation:

 

 

 

Now, no more excuses: follow the Guses...

By gloumps - Published in: misc

Wednesday, May 22, 2013, 14:20

 

Did you know? Solaris 11 is capable of doing a fast reboot, skipping the power-on self tests (POST) that have traditionally accompanied a reboot. So much for the coffee break!?

 


On x86 machines, this will automatically happen if you use the reboot command (or init 6). To force a full test cycle, and/or to get access to the boot order menu from the BIOS, you can use halt, followed by pressing a key.



On SPARC, the default configuration requires that you use reboot -f for a fast reboot. If you want fast reboot to be the default, you must change the SMF property config/fastreboot_default, as follows:



# svccfg -s system/boot-config:default 'setprop config/fastreboot_default=true'
# svcadm refresh svc:/system/boot-config:default



To temporarily override the setting and reboot the slow way, you can use reboot -p, aka "reboot to PROM".
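To confirm which mode is currently active, the property can simply be read back (a quick check; the service and property names are the ones used above):

```shell
# Read the fast-reboot default back from SMF; prints "true" or "false"
svcprop -p config/fastreboot_default svc:/system/boot-config:default
```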

 

 

Now you need to find a new excuse for your coffee break!!

By gloumps - Published in: administration

Monday, May 20, 2013, 21:10

 

I describe only the P2V migration of a physical server into an LDom; the installation and configuration of Oracle VM Server for SPARC are not covered in this article.

 

Some details:

  • The name of the physical server is ldom-guest (Solaris 10u3 - kernel 118833-33)
  • The name of the control domain is ldom-crtl (Solaris 11.1 SRU 5.5)

 

There are 3 phases to migrate from a physical system to a virtual system:

  • Collection phase: a file system image is created, based on the configuration information collected about the source system.
  • Preparation phase: a logical domain is created.
  • Conversion phase: the file system image is converted into a logical domain (e.g. conversion from sun4u to sun4v).

 

To run this procedure, you must use the ldmp2v tool (download patch p15880570 to obtain it - in Solaris 11, the tool is directly available).

 

Before starting, let's look at the configuration available on the control domain:

 

ldom-crtl # ldm -V

Logical Domains Manager (v 3.0.0.2)
        Hypervisor control protocol v 1.7
        Using Hypervisor MD v 1.3

System PROM:
        Hypervisor v. 1.10.0. @(#)Hypervisor 1.10.0.a 2011/07/15 11:51\015
        OpenBoot   v. 4.33.0. @(#)OpenBoot 4.33.0.b 2011/05/16 16:28

 

ldom-crtl # ldm ls -o console,network,disk primary
[…]

VCC
    NAME           PORT-RANGE
    primary-vcc0   5000-5100

VSW
    NAME           MAC          […]
    primary-vsw0   x:x:x:x:x:x  […]

VDS
    NAME           VOLUME       […]
    primary-vds0

[…]

 

A fairly traditional configuration, no?

 

First step: Collection phase (runs on the physical source system)

 

To create a consistent file system image, I suggest booting the server in single-user mode. To save the file system image, I often use an NFS share.

 

ldom-guest # mount -F nfs myshare:/tempo /mnt

 

By default, the ldmp2v command creates a flar image.

 

ldom-guest # /usr/sbin/ldmp2v collect -d /mnt/ldom-guest
Collecting system configuration ...
Archiving file systems ...
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Creating the archive...
136740734 blocks
Archive creation complete.

ldom-guest # init 0

 

Second step: Preparation phase (runs on the control domain)

 

I start by creating a ZFS pool that will contain the data of the logical domain.

 

ldom-crtl # zpool create -m none ldom-guest cXtYdZ

 

I prefer to use the manual mode to create the logical domain (so I edit the file /etc/ldmp2v.conf).

 

ldom-crtl # cat /etc/ldmp2v.conf
# Virtual switch to use
VSW="primary-vsw0"
# Virtual disk service to use
VDS="primary-vds0"
# Virtual console concentrator to use
VCC="primary-vcc0"
# Location where vdisk backend devices are stored
BACKEND_PREFIX=""
# Default backend type: "zvol" or "file".
BACKEND_TYPE="zvol"
# Create sparse backend devices: "yes" or "no"
BACKEND_SPARSE="yes"
# Timeout for Solaris boot in seconds
BOOT_TIMEOUT=60

 

Right after mounting the NFS share, I create the logical domain, specifying the following information: cpu, memory, and the prefix (here, the name of the ZFS pool).

 

ldom-crtl # mount -F nfs myshare:/tempo /mnt
ldom-crtl # ldmp2v prepare -c 16 -M 16g -p ldom-guest -d /mnt/ldom-guest ldom-guest
Creating vdisks ...
Creating file systems ...
Populating file systems ...
136740734 blocks
Modifying guest OS image ...
Modifying SVM configuration ...
Unmounting file systems ...
Creating domain ...
Attaching vdisks to domain ldom-guest ...

 

For this example, the guest domain is configured with 16 vcpus and 16 GB of memory (options -c and -M).
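Before moving on to the conversion phase, it may be worth confirming that the domain really received those resources (a quick check; cpu and memory are standard ldm list output subsets):

```shell
# Verify the vcpus and memory assigned to the freshly prepared domain
ldm list -o cpu,memory ldom-guest
```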

 

Final step: Conversion phase (runs on the control domain)

 

In the conversion phase, the logical domain uses the Oracle Solaris upgrade process to upgrade to the Oracle Solaris 10 OS. The upgrade operation removes all existing packages and installs the Oracle Solaris 10 sun4v packages, which automatically performs a sun4u-to-sun4v conversion. The convert phase can use an Oracle Solaris DVD ISO image or a network installation image. On Oracle Solaris 10 systems, you can also use the Oracle Solaris JumpStart feature to perform a fully automated upgrade operation.

 

On the JumpStart server (do you know JET?), I edit the JumpStart profile to add the following lines:

 

install_type    upgrade
root_device c0d0s0

 

Ready for conversion!! One last command converts the SPARC architecture and starts the guest domain.

 

ldom-crtl # ldmp2v convert -j -n vnet0 -d /mnt/ldom-guest ldom-guest
Testing original system status ...
LDom ldom-guest started
Waiting for Solaris to come up ...
Using Custom JumpStart
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.

 

Connecting to console "server" in group "server" ....
Press ~? for control options ..
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Configured interface vnet0
Setting up Java. Please wait...
Extracting windowing system. Please wait...
Beginning system identification...
Searching for configuration file(s)...
Using sysid configuration file 10.x.x.x:/opt/SUNWjet/Clients/ldom-guest/sysidcfg
Search complete.
Discovering additional network configuration...
Completing system identification...
Starting remote procedure call (RPC) services: done.
System identification complete.
Starting Solaris installation program...
Searching for JumpStart directory...
Using rules.ok from 10.x.x.x:/opt/SUNWjet.
Checking rules.ok file...
Using begin script: Utils/begin
Using derived profile: Utils/begin
Using finish script: Utils/finish
Executing JumpStart preinstall phase...
Executing begin script "Utils/begin"...
Installation of ldom-guest at 00:41 on 10-May-2013
Loading JumpStart Server variables
Loading JumpStart Server variables
Loading Client configuration file
Loading Client configuration file
Running base_config begin script....
Running base_config begin script....
Begin script Utils/begin execution completed.
Searching for SolStart directory...
Checking rules.ok file...
Using begin script: install_begin
Using finish script: patch_finish
Executing SolStart preinstall phase...
Executing begin script "install_begin"...
Begin script install_begin execution completed.

WARNING: Backup media not specified.  A backup media (backup_media) keyword must be specified if an upgrade with disk space reallocation is required

Processing default locales
       - Specifying default locale (en_US.ISO8859-1)

Processing profile

Loading local environment and services

Generating upgrade actions
       - Selecting locale (en_US.ISO8859-1)

Checking file system space: 100% completed
Space check complete.

Building upgrade script

Preparing system for Solaris upgrade

Upgrading Solaris: 101% completed
       - Environment variables (/etc/default/init)

Installation log location
       - /a/var/sadm/system/logs/upgrade_log (before reboot)
       - /var/sadm/system/logs/upgrade_log (after reboot)

Please examine the file:
       - /a/var/sadm/system/data/upgrade_cleanup

It contains a list of actions that may need to be performed to complete the upgrade. After this system is rebooted, this file can be found at:
       - /var/sadm/system/data/upgrade_cleanup

Upgrade complete
Executing SolStart postinstall phase...
Executing finish script "patch_finish"...

Finish script patch_finish execution completed.
Executing JumpStart postinstall phase...
Executing finish script "Utils/finish"...
[…]
Terminated

Finish script Utils/finish execution completed.
The begin script log 'begin.log'
is located in /var/sadm/system/logs after reboot.

The finish script log 'finish.log'
is located in /var/sadm/system/logs after reboot.

syncing file systems... done
rebooting...
Resetting...

 

T5240, No Keyboard
Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.0.b, 16384 MB memory available, Serial #83470255.
Ethernet address 0:x:x:x:x:x, Host ID: 84f9a7af.

Boot device: disk0:a  File and args:
SunOS Release 5.10 Version Generic_118833-33 64-bit
Copyright 1983-2006 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
Hostname: ldom-guest
Loading smf(5) service descriptions: 1/1
checking ufs filesystems
/dev/rdsk/c0d1s0: is logging.

ldom-guest console login:

 

And it's already finished. Simple, isn't it!? You no longer have any excuse not to use LDoms.

 


 

By gloumps - Published in: administration

Saturday, April 6, 2013, 10:44

 

Oracle has just announced new servers based on the SPARC T5 & M5 processors. Solaris 11, Oracle's operating system, is at the heart of the strategy for these systems. It lets you get the very best out of them: virtualization capabilities, scaling beyond 1000 threads, Oracle database optimization, a foundation for a cloud infrastructure...

 

In association with the GUSES (Groupe d'Utilisateurs du Système d'Exploitation Solaris), we invite you to take part in our "Oracle SPARC/Solaris TechDay" seminar on April 25 at 17:00 at Caves Legrand in the 2nd arrondissement of Paris.

 

During this seminar, we will share the latest SPARC/Solaris news through customer feedback and practical case studies.

 

We hope to have the pleasure of welcoming you on April 25. More details here

 


By gloumps - Published in: misc

Saturday, March 30, 2013, 22:20

 

Everyone knows that one of the major problems when consolidating Solaris 10 is the network. If each Solaris Zone uses a different network (VLAN), the configuration of the Global Zone becomes a real headache.

 

In Solaris 11, Crossbow effectively addresses this problem. This article explains how to create several Solaris Zones, with an emphasis on network configuration (several VLANs).

 

In this example, there are 3 Solaris Zones running on a dedicated system exposed to the external networks. Each Solaris Zone runs in a different VLAN.

  • The Global Zone runs in VLAN ID 1 (address: 192.168.1.10/24 - router: 192.168.1.1)
  • The Solaris Zone zone1 runs in VLAN ID 1 (address: 192.168.1.11/24 - router: 192.168.1.1)
  • The Solaris Zone zone2 runs in VLAN ID 2 (address: 192.168.2.10/24 - router: 192.168.2.1)
  • The Solaris Zone zone3 runs in VLAN ID 3 (address: 192.168.3.10/24 - router: 192.168.3.1)
  • Each port of the NICs used by the aggregation is configured in the different VLANs (VLAN IDs 1, 2 and 3)

Let's go... One moment, though: the switch side of the network must already be configured (ask your network team!).

 

 

Step 1: Create link aggregation

 

My system (a SPARC M5000) includes 4 NICs. There is no network configuration yet (I connect through the XSCF).

 

# dladm show-phys
LINK       MEDIA         STATE      SPEED  DUPLEX    DEVICE
net1       Ethernet      unknown    0      unknown   bge1
net0       Ethernet      unknown    0      unknown   bge0
net3       Ethernet      unknown    0      unknown   bge3
net2       Ethernet      unknown    0      unknown   bge2

 

I create a basic link aggregation (without LACP) using the 4 NICs.

 

# dladm create-aggr -P L2,L3 -l net0 -l net1 -l net2 -l net3 default0

 

I quickly check the status of the aggregation.

 

# dladm show-link
LINK          CLASS     MTU    STATE    OVER
net1          phys      1500   up       --
net0          phys      1500   up       --
net3          phys      1500   up       --
net2          phys      1500   up       --
default0      aggr      1500   up       net0 net1 net2 net3

 

Next, I configure an address on this aggregation.

 

# ipadm create-ip default0
# ipadm create-addr -T static -a local=192.168.1.10/24 default0/v4

 

Don't forget the default route configuration.

 

# route add -p default 192.168.1.1 -ifp default0

 

 

Step 2: Create Solaris Zone for Cloning

 

It is much faster to clone Solaris Zone than to create one from scratch, because building an image from packages takes longer than, in essence, copying an existing zone. I use the cloning technique in this example to first create one Solaris Zone and then clone it three times.

 

# zfs create -o mountpoint=/zones -o dedup=on rpool/zones
# zfs create -o mountpoint=/zones/zclone rpool/zones/zclone
# chmod 700 /zones/zclone

 

# zonecfg -z zclone
Use 'create' to begin configuring a new zone.
zonecfg:zclone> create
create: Using system default template 'SYSdefault'
zonecfg:zclone> set zonepath=/zones/zclone
zonecfg:zclone> set ip-type=exclusive
zonecfg:zclone> exit

 

# zoneadm -z zclone install
Progress being logged to /var/log/zones/zoneadm.20130329T161207Z.zclone.install
       Image: Preparing at /zones/zclone/root. 
[...] 
  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.
Log saved in non-global zone as /zones/zclone/root/var/log/zones/zoneadm.20130329T161207Z.zclone.install

 

# zoneadm -z zclone boot ; zlogin -C zclone
[Connected to zone 'zclone' console]
Loading smf(5) service descriptions: 115/115

 

When the interactive configuration screen for this Solaris Zone appears, I halt the zone.

 

# zoneadm -z zclone halt

 

 

Step 3: Create Solaris Zone zone1

 

Remember, Solaris Zone zone1 uses the same VLAN as the Global Zone. First, I create a VNIC over the datalink (default0).

 

# dladm create-vnic -v 1 -l default0 vnic1

 

Next, I create zone1 from the zclone zone (don't forget to create a configuration profile - a new sysidcfg).

 

# zonecfg -z zone1 "create -t zclone"
# zonecfg -z zone1
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> select anet linkname=net0
zonecfg:zone1:anet> set linkname=vnic1
zonecfg:zone1:anet> set lower-link=default0
zonecfg:zone1:anet> end
zonecfg:zone1> commit
zonecfg:zone1> exit

 

# zoneadm -z zone1 clone -c /tmp/sc_profile1.xml zclone
The following ZFS file system(s) have been created:
    rpool/zones/zone1
Progress being logged to /var/log/zones/zoneadm.20130329T172124Z.zone1.clone
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20130329T172124Z.zone1.clone

 

 

Step 4: Create Solaris Zone zone2

 

Solaris Zone zone2 uses VLAN ID 2. First, I create a VNIC over the datalink (default0).

 

# dladm create-vnic -v 2 -l default0 vnic2

 

Next, I create zone2 from the zclone zone (don't forget to create a configuration profile - a new sysidcfg). Beware: I use the vlan-id parameter to configure the VLAN ID.

 

# zonecfg -z zone2 "create -t zclone"
# zonecfg -z zone2
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> select anet linkname=net0
zonecfg:zone2:anet> set linkname=vnic2
zonecfg:zone2:anet> set lower-link=default0
zonecfg:zone2:anet> set vlan-id=2
zonecfg:zone2:anet> end
zonecfg:zone2> commit
zonecfg:zone2> exit

 

# zoneadm -z zone2 clone -c /tmp/sc_profile2.xml zclone
The following ZFS file system(s) have been created:
    rpool/zones/zone2
Progress being logged to /var/log/zones/zoneadm.20130329T174913Z.zone2.clone
Log saved in non-global zone as /zones/zone2/root/var/log/zones/zoneadm.20130329T174913Z.zone2.clone

 

 

Step 5: Create Solaris Zone zone3

 

It's the same configuration as zone2; the only change is the VLAN ID. This zone uses VLAN ID 3.

 

# dladm create-vnic -v 3 -l default0 vnic3

 

# zonecfg -z zone3 "create -t zclone"
# zonecfg -z zone3
zonecfg:zone3> set zonepath=/zones/zone3
zonecfg:zone3> select anet linkname=net0
zonecfg:zone3:anet> set linkname=vnic3
zonecfg:zone3:anet> set lower-link=default0
zonecfg:zone3:anet> set vlan-id=3
zonecfg:zone3:anet> end
zonecfg:zone3> commit
zonecfg:zone3> exit

 

# zoneadm -z zone3 clone -c /tmp/sc_profile3.xml zclone
The following ZFS file system(s) have been created:
    rpool/zones/zone3
Progress being logged to /var/log/zones/zoneadm.20130329T175707Z.zone3.clone
Log saved in non-global zone as /zones/zone3/root/var/log/zones/zoneadm.20130329T175707Z.zone3.clone

 

 

Step 6: Start all Solaris Zones

 

My configuration is finished. I just start all the zones.

 

# zoneadm list -cv
  ID NAME      STATUS     PATH               BRAND    IP   
   0 global    running    /                  solaris  shared
   - zclone    installed  /zones/zclone      solaris  excl 
   - zone1     installed  /zones/zone1       solaris  excl 
   - zone2     installed  /zones/zone2       solaris  excl 
   - zone3     installed  /zones/zone3       solaris  excl 

 

# zoneadm -z zone1 boot ; zoneadm -z zone2 boot ; zoneadm -z zone3 boot
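Once the zones are up, a quick loop from the Global Zone can confirm that each one sees its expected datalink (a sketch; zone names as above):

```shell
# From the Global Zone, list the datalinks visible inside each running zone
for z in zone1 zone2 zone3; do
  echo "--- $z"
  zlogin "$z" dladm show-link
done
```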

 

 

Conclusion: We hope this step-by-step guide will give you some ideas for future consolidation. With Oracle Solaris 11 capabilities, you can easily set up fairly complex environments.

 

 


 

By gloumps - Published in: network
