NX-OS: Next-Generation Operating System Features
Chapter 1: New Features in Nexus
· Modular
o Enables a particular module (feature) in NX-OS only when it is needed.
o E.g. "feature eigrp" or "no feature eigrp" to enable or disable EIGRP (see the sketch at the end of this chapter).
· High availability → PSS (Persistent Storage Service): if the BGP process crashes, it does not bring down the underlying operating system; the process restarts gracefully with all the state information it had before the crash.
· Unified OS for LAN and SAN
o Prior to this we had IOS with IP protocols and SAN-OS with the Fibre Channel protocol.
o Reasons for LAN and SAN segregation:
§ Security
§ Bandwidth
§ Flow control
§ Performance
o Flow control:
§ LAN - the sender keeps sending until the receiver responds.
§ SAN - the receiver defines how much data the transmitter may send.
o NX-OS 4.1 or higher runs on the MDS (Multilayer Director Switch) as well as the Nexus 7K and 5K.
o Unification of LAN and SAN over 10 Gbps Ethernet using FCoE.
· Role-based access
o Privilege-level access - the method used in classic IOS.
o Views - access based on views - also pre-dates NX-OS.
o In Nexus there is role-based access:
§ E.g. username admin password Cisco123 role { network-admin | network-operator | priv-0 ... priv-15 | vdc-admin | vdc-operator }
§ vdc-admin / vdc-operator options: on the N7K we can have Virtual Device Contexts (VDCs) and a separate admin for each context.
· Cisco layered approach
o Scalability
o Resilience - failover
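A minimal CLI sketch of the feature and role-based access commands above (the feature chosen, username, and password are illustrative; exact role names can be checked with context-sensitive help):

feature eigrp
show feature | include eigrp
no feature eigrp
username admin2 password Cisco123 role network-operator
show user-account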
Chapter 2: The Nexus Family
The Nexus 7000 series
· Models:
o 7009 - 2 sup slots, 7 I/O module slots
o 7010 - 2 sup slots, 8 I/O module slots
o 7018 - 2 sup slots, 16 I/O module slots
· L2/L3, DCB (Data Center Bridging), FCoE
· ISSU (In-Service Software Upgrade)
· VDCs (Virtual Device Contexts)
· Modularity
· Separation of CP (control plane) and DP (data plane)
· RBAC (Role-Based Access Control)
· EEM (Embedded Event Manager)
· Call Home
· Dual supervisors
· Dual CMP (Connectivity Management Processor), each with its own memory, power, and software; it provides lights-out (OOB) connectivity.
· Dual redundant central arbiters for traffic arbitration - multiple paths through the device architecture.
· Redundant fan modules - hot-swappable fan trays.
Sup1 is end-of-sale and end-of-life; the replacement is Sup2E (dual quad-core CPU with 32 GB RAM).
Software licensing
1. Default base license
2. Enterprise LAN license - dynamic routing and multicast
3. Advanced enterprise LAN license - VDC, Cisco TrustSec
4. MPLS license - MPLS routing
5. Transport services license - OTV (Overlay Transport Virtualization)
6. Enhanced L2 services license - FabricPath
Commands
Download the license, store it on bootflash, and use the following commands:
# install license bootflash:<license-file>
# show license usage
Trial license: a 120-day grace period for testing before buying the license.
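A minimal sketch of the full install workflow referenced above (the TFTP server address and license file name are illustrative):

copy tftp://10.1.1.1/n7k_lan_enterprise.lic bootflash:
install license bootflash:n7k_lan_enterprise.lic
show license
show license usage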
Modules: supervisor cards, line cards (I/O modules), and fabric modules.
Fabric modules support virtual output queuing.
Power redundancy: with respect to the 7010, we have similar options in UCS (a configuration sketch follows this list).
· Combined power mode - no redundancy; all power supplies work together.
· Power-supply redundancy (N+1)
· Input-source (grid) redundancy - protects against an external power failure.
· Complete redundancy (power supply + input source)
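A hedged configuration sketch for selecting the redundancy mode on a Nexus 7000 (keyword names follow typical NX-OS releases; verify the exact options with "?"):

configure terminal
power redundancy-mode ps-redundant
! other modes: combined | insrc-redundant | redundant
show environment power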
Nexus 7009 running NX-OS 6.0(2) with Sup1, 10GE F1, and 10GE M1 modules.
M1 modules support L3 features.
F1 modules are L2 only.
Cisco FabricPath is similar to TRILL (Transparent Interconnection of Lots of Links).
---------------------------------------------------------------------------
The Nexus 5000 series
5010 - throughput 520 Gbps
5020
5548
5596 - throughput 1.92 Tbps
Common features in the 5500 series:
DCB - Data Center Bridging
FCoE - Fibre Channel over Ethernet
GEMS - generic expansion module slot - e.g. to add FCoE
55XX L3 routing capability - routing can be enabled on the
5548 with the N55-D160L3 card
5596 with the N55-M160L3 card
Port density differs in each model, e.g. the 5596 has a 96-port density.
Nexus 5548 running NX-OS 5.1(3)
Feature: unified ports
------------------------------------------------------------------------------------------
The Nexus 2000 series
Functions at the top of rack (ToR); all C-Series servers are connected to an N2K device at the ToR, and the N2Ks are managed by an N5K device at the EoR (end of row).
The 2000 series are called FEXs - fabric extenders.
Redundancy is achieved using 2000s at the ToR and 5000s at the EoR.
Nexus 2000 series models (scalability, oversubscription, host ports and fabric ports):
· 2148
o 4 x 10G fabric ports (one fabric port channel of up to 4 ports), 48 x 1G host ports with no host port channels, no FCoE.
· 2224
o 2 x 10G fabric ports (one fabric port channel), 24 x 1G host ports, up to 24 host port channels with a maximum of 8 ports in one port channel, no FCoE.
· 2248
o 4 x 10G fabric ports (one fabric port channel), 48 x 1G host ports, up to 24 host port channels, no FCoE.
· 2232
o 8 x 10G fabric ports (one fabric port channel), 32 fiber-optic host ports, up to 16 host port channels; the only model here that can supply FCoE at the ToR.
---------------------------------------------------------------------------------------
Nexus 1000V v4.2 - Virtual Ethernet Module (VEM) and Virtual Supervisor Module (VSM)
Chapter 3: The MDS Family (Multilayer Director Switch)
9500 series
9124
9148
9222i
9500
· Model -- FC port density
o 9506 -- 192
o 9509 -- 336
o 9513 -- 528
· Non-blocking - virtual output queuing
Requirements for a SAN: no packet loss and low latency.
· High bandwidth - 2.2 Tbps internal bandwidth / 160 Gbps (16-link ISL bundle)
· Low latency - less than 20 microseconds per hop
· Multiprotocol (FC, FCoE, FICON, FCIP, iSCSI)
· Scalable - VSAN (Virtual Storage Area Network), a Cisco invention (a short VSAN sketch follows this list)
· Secure - port security
· High availability - dual sups, dual clocks, dual power
· Sup2 (no FCoE) and Sup2A (FCoE); the crossbar fabric (the "traffic cop") is integrated in the 9506 and 9509, while the 9513 has separate crossbar fabric modules.
· MDS licensing types - feature-based and module-based.
o Feature-based:
§ Enterprise - security
§ SAN over IP - FCIP
§ Mainframe - FICON
o Module-based
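A minimal MDS CLI sketch of creating a VSAN and assigning an interface to it (the VSAN number, name, and interface are illustrative):

vsan database
  vsan 20 name ENGINEERING
  vsan 20 interface fc1/1
interface fc1/1
  no shutdown
show vsan membership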
-----------------------------------------------------------------------------------------------------------------
Cisco MDS 9124
24 ports - 8 enabled by default and 16 on-demand ports
NPV (N-Port Virtualization)
Cisco MDS 9148
16-, 32-, or 48-port base licenses
Cisco MDS 9222i
Flexible, with an expansion slot that supports a wide variety of modules
18 x 4-Gbps FC ports plus Gigabit Ethernet ports used for FCIP / iSCSI
-------------------------------------------------------------------------------------------------------------
NEXUS switches and NX-OS
NEXUS architecture:
# show version
Software
· BIOS version 2.12.0
· Kickstart image - contains the Linux kernel (7K version 6.2(2), 5K version 5.1(3))
· System image - contains the NX-OS software components of the multilayer switch
Hardware
· Supervisor: Intel Xeon, 12 GB memory
Plug-ins
· Core plugin - contains the NX-OS software components
· Ethernet plugin - L2 and L3 software components
· In the future we will see a storage plugin - for FCoE
Chapter 4: Monitoring the Nexus Switch
Monitoring the Nexus
· The RJ-45 console is located on the sup card.
· A Nexus 7000 with the Sup1 engine has a CMP (Connectivity Management Processor) with a dedicated OS for OOB management access; it has notification LEDs and local authentication. Sup2 does not have this capability.
· To connect to the CMP we use the attach command:
o attach { console | module | cmp }
· Remote access:
o SSHv2 is enabled by default; SSH/Telnet client and server capability; IPv4 and IPv6 are supported.
o On the CMP use the command # ssh server enable or # telnet server enable.
· Management
o Supports the concept of VRFs; by default two VRFs (default and management) exist on a Nexus switch.
o The management interface is in the management VRF. To test connectivity for management purposes: # ping 10.10.10.10 vrf management
· ISSU (In-Service Software Upgrade) - see the upgrade sketch after this list.
o Upgrade with no disruption; the data plane continues forwarding packets during the upgrade process.
o Introduced in NX-OS 4.2(1).
o Upgrades the kickstart, BIOS, system, FEX (2000 series), and I/O module BIOS and images.
o Started with the 7000 series, which has dual sup cards: the standby sup engine is upgraded first, then the second sup engine. This feature is now also supported on the 5000 series, which has a single sup engine; there the control plane is offline during the upgrade. A 5500 series switch with L3 functionality does not support ISSU.
o Steps for the 5000 series:
§ Download the appropriate software (use cisco.com/go/fn - Feature Navigator - to check feature support).
§ Copy from TFTP → bootflash.
§ show incompatibility (shows what is incompatible with the new image) ← pre-upgrade command
§ show install all impact (shows the impact of the upgrade) ← pre-upgrade command
§ show install all status ← post-upgrade command, to verify the installation status.
· Control plane policing
o Data plane
o Management plane - SNMP
o Control plane - L2: STP, LACP; L3: OSPF, BGP
§ CoPP (Control Plane Policing) restricts the number of packets entering the control plane. There is a default CoPP policy; during initial setup the switch asks for a strict, moderate, lenient, or no default policy.
· Key CLI commands
o where - shows the current mode and VDC
o show running-config ipqos all → shows everything including defaults
o show running-config interface all
o show module
o show logging
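A hedged sketch of the ISSU workflow described above on a Nexus 5000 (the TFTP server and image file names are illustrative and depend on the target release):

copy tftp://10.1.1.1/n5000-uk9-kickstart.5.1.3.N1.1.bin bootflash:
copy tftp://10.1.1.1/n5000-uk9.5.1.3.N1.1.bin bootflash:
show incompatibility system bootflash:n5000-uk9.5.1.3.N1.1.bin
show install all impact kickstart bootflash:n5000-uk9-kickstart.5.1.3.N1.1.bin system bootflash:n5000-uk9.5.1.3.N1.1.bin
install all kickstart bootflash:n5000-uk9-kickstart.5.1.3.N1.1.bin system bootflash:n5000-uk9.5.1.3.N1.1.bin
show install all status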
Chapter 5: vPC
vPC (virtual PortChannel)
· Used to bundle uplinks to two different upstream switches.
o Virtual port channel peers
o One of the 7Ks becomes primary and the other secondary.
o Orphan port - a port not participating in the vPC infrastructure.
o Member port - a port participating in the vPC.
o CFS - Cisco Fabric Services - used to synchronize stateful information.
o Peer keepalive link - a logical link over an OOB path (no data or sync messages are sent over this link).
o Limitations:
§ The peer link should use 10 Gigabit Ethernet ports, at least two of them.
§ vPC is per VDC.
§ It is an L2 port-channel technology.
o Dual-sided vPC: 7K = 5K = 2K = C-Series server.
o show vpc brief
§ vPC domain ID
§ Peer status
§ Keepalive status
§ vPC role
o show vpc peer-keepalive
§ Keepalive ToS 192 - binary representation of the ToS byte.
§ Role priority - lower is better.
o show vpc consistency-parameters interface port-channel 20
vPC (Virtual PortChannel)
Overview
A virtual PortChannel (vPC) allows links that are physically connected to two different Cisco Nexus 7000 or 5000 Series devices to appear as a single PortChannel to a third device. The third device can be a Cisco Nexus 2000 Series Fabric Extender or a switch, server, or any other networking device. A vPC can provide Layer 2 multipathing, which allows you to create redundancy by increasing bandwidth, enabling multiple parallel paths between nodes and load-balancing traffic where alternative paths exist.
After you enable the vPC function, you create a peer keepalive link, which sends heartbeat messages between the two vPC peer devices.
The vPC domain includes both vPC peer devices, the vPC peer keepalive link, the vPC peer link, and all the PortChannels in the vPC domain connected to the downstream device. You can have only one vPC domain ID on each device.
A vPC provides the following benefits:
• Allows a single device to use a PortChannel across two upstream devices
• Eliminates Spanning Tree Protocol blocked ports
• Provides a loop-free topology
• Uses all available uplink bandwidth
• Provides fast convergence if either the link or a device fails
• Provides link-level resiliency
• Helps ensure high availability
The vPC not only allows you to create a PortChannel from a switch or server that is dual-homed to a pair of Cisco Nexus 7000 or 5000 Series Switches, but it can also be deployed along with Cisco Nexus 2000 Series Fabric Extenders.
The following list defines critical vPC concepts:
• vPC: vPC refers to the combined PortChannel between the vPC peer devices and the downstream device.
• vPC peer switch: The vPC peer switch is one of a pair of switches that are connected to the special PortChannel known as the vPC peer link. One device will be selected as the primary device, and the other will be the secondary device.
• vPC peer link: The vPC peer link is the link used to synchronize states between the vPC peer devices. The vPC peer link carries control traffic between two vPC switches and also multicast, broadcast data traffic. In some link failure scenarios, it also carries unicast traffic. You should have at least two 10 Gigabit Ethernet interfaces for peer links.
• vPC domain: This domain includes both vPC peer devices, the vPC peer keepalive link, and all the PortChannels in the vPC connected to the downstream devices. It is also associated with the configuration mode that you must use to assign vPC global parameters.
• vPC peer keepalive link: The peer keepalive link monitors the vitality of a vPC peer switch. The peer keepalive link sends periodic keepalive messages between vPC peer devices. The vPC peer keepalive link can be a management interface or switched virtual interface (SVI). No data or synchronization traffic moves over the vPC peer keepalive link; the only traffic on this link is a message that indicates that the originating switch is operating and running vPC.
• vPC member port: vPC member ports are interfaces that belong to the vPCs.
vPC configuration on the Cisco Nexus 5000 Series includes these steps:
• Enable the vPC feature.
• Create a vPC domain and enter vpc-domain mode.
• Configure the vPC peer keepalive link.
• (Optional) Configure system priority.
• (Optional) Configure vPC role priority.
• Create the vPC peer link.
• Move the PortChannel to vPC.
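A minimal configuration sketch of those steps on one Nexus 5000 peer (the domain ID, IP addresses, and interface numbers are illustrative; the other peer is configured symmetrically):

feature vpc
vpc domain 10
  role priority 100
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management
interface port-channel 1
  switchport mode trunk
  vpc peer-link
interface port-channel 20
  switchport mode trunk
  vpc 20
show vpc brief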
Chapter 6: FabricPath
· FabricPath is the Cisco version of TRILL.
· TRILL (Transparent Interconnection of Lots of Links) is a replacement technology for STP. In TRILL, L3 routing intelligence is brought to L2, so an L2 routing table is kept.
· Uses IS-IS as its control protocol.
· FabricPath performs ECMP (equal-cost multipath) with up to 16 equal-cost paths; combining 16-way ECMP with 16-port 10-Gbps port channels gives 2.56 Tbps of usable bandwidth.
· A FabricPath network appears as one single switch to a legacy switch network running STP.
· A port facing switches using legacy STP is called a Classical Ethernet port, and a port in the FabricPath infrastructure is called an FP port (FabricPath port).
· Enhancements in FabricPath (modifications from TRILL):
o Conversational learning - learns only active MAC addresses.
· Verification (a configuration sketch follows this list):
o show mac address-table shows the destination-hop FabricPath device.
o show fabricpath route
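A minimal FabricPath configuration sketch for a Nexus 7000 F-series port (the switch ID and interface are illustrative; the feature set must first be installed in the VDC):

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11
interface ethernet 2/1
  switchport mode fabricpath
  no shutdown
show fabricpath route
show mac address-table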
Chapter 7: OTV
OTV (Overlay Transport Virtualization)
Basic understanding of OTV
Today I am going to help you understand why we need OTV.
Let's say we have three switches (A, B, C). Switch A is connected to B, switch B is connected to switch C, and switch A has two VLANs created on it, VLAN 10 and 20. What if we want VLAN 10 and 20 to be extended to switch C over switch B? We simply create VLAN 10 and 20 on both switch B and switch C and allow both VLANs on the trunks connecting the switches, right? It's that simple!
If you look at this picture, we have two data centers, DC1 and DC2, which are geographically far from each other - say one in New York and another in Los Angeles - and there are some servers present in both data centers; however, they sync their heartbeat over Layer 2 only and do not work over Layer 3. So we have a requirement to extend VLAN 10 and 20 from DC1 to the other data center, DC2. You may call it a Data Center Interconnect (DCI).
Can we do the same thing we did to extend the VLANs from switch A to switch C in the example above? Of course not! So what are the solutions to achieve this?
Until OTV came into the picture, we had a few of the options below to achieve this:
-VPLS
-Dark Fiber (CWDM or DWDM)
-AToM
-L2TPv3
These are services provided by service providers, and they work on different mechanisms, but basically what they do is provide a Layer 2 path between DC1 and DC2, similar to the trunk link between switch A and switch B. So what does that mean? If a broadcast or an ARP request is sent, will it travel across the service provider to the other data center in that VLAN? Of course it will! Your STP domain also gets extended over the DCI. So even if a device in VLAN 10 in DC1 is trying to communicate with another device that is also in DC1, the ARP request will still go all the way to the DC2 switches on which that particular VLAN is configured.
So, to avoid such problems, Cisco introduced OTV (Overlay Transport Virtualization), which is basically a DCI (data center interconnect) technology configured on Nexus switches. Using OTV, we can extend Layer 2 between two or more data centers over the traditional L3 infrastructure provided by a service provider; we don't need a separate L2 link for the Layer 2 extension, and we are still able to limit the STP domain and unnecessary broadcasts over the WAN links. It can overlay multiple VLANs with a simple design. Basically, the data centers advertise their MAC addresses to each other (this is called "MAC-in-IP" routing), and a decision can be made on the basis of a MAC address whether that MAC address is local or in another data center; based on that, a frame can be forwarded or kept within a particular data center only. OTV uses a control protocol to map MAC address destinations to IP next hops that are reachable through the normal L3 network core.
So, in Cisco's language "OTV can be thought of as MAC routing in which the destination is a MAC address, the next hop is an IP address, and traffic is encapsulated in IP so it can simply be carried to its MAC routing next hop over the core IP network. Thus a flow between source and destination host MAC addresses is translated in the overlay into an IP flow between the source and destination IP addresses of the relevant edge devices. This process is called encapsulation rather than tunneling as the encapsulation is imposed dynamically and tunnels are not maintained"
How this is implemented, I will show in another simplified post (a brief configuration sketch follows below). Thank you!
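A hedged OTV configuration sketch for a Nexus 7000 edge device using multicast transport (all IP addresses, VLAN numbers, and interface names are illustrative):

feature otv
otv site-vlan 99
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 10, 20
  no shutdown
show otv overlay 1
show otv route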
Chapter 8: Network Virtualization
7K with 5.x code:
· 4 VDCs
· VDC 1 (the default VDC) controls shared resources.
· VLANs / VRFs are per VDC; e.g. VLAN 100 in each VDC is separate.
· Failure of a protocol in one VDC will not affect the same protocol in another VDC.
· To create a virtual interface we have to be in that particular VDC.
· On some I/O modules each port works independently; on other modules, port grouping needs to be done.
· Role-based access control:
o A VDC admin of one user-created context cannot control another VDC.
o Creating a VDC requires the Advanced Services license.
o Configuration (a sketch follows this list):
§ vdc test
§ switchto vdc test
o Verification:
§ show vdc
§ show vdc membership (interface membership)
§ show running-config vdc-all (running configuration for all VDCs)
§ copy running-config startup-config vdc-all
· NIV (Network Interface Virtualization)
o Refers to the FEX.
o VN-Tag → IEEE 802.1Qbh
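A minimal VDC sketch run from the default VDC of a Nexus 7000 (the VDC name and interface range are illustrative and assume the Advanced Services license is installed):

vdc test
  allocate interface ethernet 2/1-8
switchto vdc test
show vdc
show vdc membership
show running-config vdc-all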
2K - FEX
A physical port on the 5K connecting to the 2K = fabric port.
A bundle of 5K ports connecting to the 2K = fabric port channel.
An uplink on the 2K connecting to the 5K is called a FEX uplink.
A 2K port connecting to a server = FEX (host) port.
(A FEX attachment sketch follows.)
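A hedged sketch of associating a Nexus 2000 FEX with its Nexus 5000 parent over a fabric port channel (the FEX number and interfaces are illustrative):

feature fex
fex 101
  pinning max-links 1
interface ethernet 1/1-2
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
interface port-channel 101
  switchport mode fex-fabric
  fex associate 101
show fex
show interface fex-fabric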
Virtualization in storage:
Devices that can control virtualization:
1. Host / server
2. Physical disk system (array-based)
3. Network devices
To identify a logical disk in a SAN we use a LUN (logical unit number) - sometimes we refer to the logical disk itself as a LUN, but the LUN is just a numeric identifier.
In a storage network we have to control the number of devices accessing a logical disk. This is done by LUN masking using the pWWN (port world wide name). There is also LUN mapping, where we map a logical disk to a particular HBA (host bus adapter) in a server.
Cisco developed LUN zoning; this technology is not dependent on the storage vendor's technology. It is found in MDS switches and allows us to map a logical disk to a particular HBA (a zoning sketch follows).
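A minimal MDS zoning sketch (the zone and zoneset names, VSAN number, and pWWNs are made up; LUN zoning adds a LUN qualifier to a zone member where the platform supports it):

zone name HOST1_ARRAY1 vsan 10
  member pwwn 21:00:00:e0:8b:01:02:03
  member pwwn 50:06:01:60:3b:a0:11:22
zoneset name FABRIC_A vsan 10
  member HOST1_ARRAY1
zoneset activate name FABRIC_A vsan 10
show zoneset active vsan 10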
Storage virtualization
Types: block, disk, tape, file system
Block: provides a logical volume of disk to the user that is physically stored somewhere else.
Disk virtualization: providing disks out of a large disk array.
Tape virtualization.
File system: allows a user to access a file.
File and record virtualization: presenting a logical volume to a particular user.
It can be done at the host level, network level, or array level.
How we do it: in-band (data and control go through the same channel) or OOB (data and control over separate channels).
Advantages of network-based virtualization:
· It is independent of the server, OS, and storage solution.
· It offloads virtualization from hosts and arrays.
Chapter 9: Server Virtualization
Before virtualization we had server sprawl (each entity or service had its own box or system); this is scalable but consumes lots of power and administrative work.
Apps   Apps   Apps
OS     OS     OS
--------------------------------------------------
Virtualization software - allocates hardware resources.
---------------------------------------------------
Rack-mount server - CPU + RAM + NIC + disk space
Benefits of virtualization:
- Partitioning - VM instances
- Encapsulation → everything is in a set of files; we can back up and manipulate these files to tweak the server.
- Isolation
- Hardware abstraction → we can move a VM between hosts.
- CapEx / OpEx
- Capital expense - buying new hardware
- Operational expense - spending on operations, like cooling and maintenance
Virtualization techniques (based on how much modification is done to the guest OS):
Full - hosted vs. bare metal
Partial
Paravirtualization
VMware Workstation 8 - type 2 - runs on a host OS (e.g. Windows 7 → VMware Workstation 8)
ESXi hypervisor - type 1 - bare metal
ESX is the older version of ESXi; ESX had a Linux-based service console.
vSphere 5.x software suite: ESXi, vCenter Server, vSphere Client, and View products
Microsoft Hyper-V Server 2012
Citrix XenServer
Chapter 10: Nexus 1000V
Chapter 11: Storage Area Networks
· File-based vs. block-based
o E.g. CIFS (Common Internet File System)
o Network File System (NFS)
o These protocols have high latency, work over TCP/IP, and are chatty.
o Suitable for MS Office files or print services.
· Block-based
o SCSI
o Many input/output operations per second (IOPS)
o Parallel SCSI cable (low latency, but distance limitations)
o iSCSI - transport of SCSI over TCP (higher latency)
o FC - low latency and high bandwidth
o FCoE - works over 10-Gbps links
· NFS (Network File System)
o Client-server environment
o Unix client - mountd service
o Unix server - exports a volume - portmap / rpcbind
o mountd → portmap/rpcbind
o mount / automounter (mounts the file system on demand)
o Once the file system is mounted we can use commands like ls → served by NFSD
o NFSv2 - RFC 1094 - 32-bit, stateless
o NFSv3 - RFC 1813 - 64-bit, stateless
o NFSv4 - RFC 3530 - stateful, adds security
· SCSI
o Host → initiator
o Storage → target
o Daisy chain, but a maximum of 16 devices
o Bus length 25 m
o Max bandwidth 320 MBps
· FC
o Overcomes the limitations of parallel SCSI
o Up to 16 million nodes
o Loop (token-ring-like) / modern-day switched transport (also called a fabric)
o 800 MBps throughput / 8-Gbps fabric speeds
o Distances up to 6.2 miles (10 km)
o Multiple protocols
· SAN terminology
o Initiator
o HBA
o Target
o The smallest unit of data is the word → encoded into 40-bit form by the 8b/10b encoding process of FC. Words get packaged into frames (equivalent to an IP packet in an IP network); a sequence is a series of frames sent between nodes and is unidirectional; the sequences that occur during a read/write operation are called an exchange (similar to a TCP session).
· iSCSI
o Encapsulates SCSI commands and data over IP
o TCP port 3260 - congestion control and in-order delivery of error-free data in the iSCSI environment
o The distance concern is addressed
o The MDS 9222i and 9000 series provide transparent SCSI routing
· DAS (direct-attached storage)
o Limited mobility (it is often referred to as captive storage)
· SAN