ATTENTION The information contained in this guide is intended for training use only. This guide contains information and activities that, while beneficial for the purposes of training in a closed, non-production environment, can result in downtime or other severe consequences and therefore are not intended as a reference guide. This guide is not a technical reference and should not, under any circumstances, be used in production environments. To obtain reference materials, please refer to the NetApp product documentation located at http://now.netapp.com/ for product information.
RESTRICTED RIGHTS LEGEND NetApp Documentation is protected by Copyright and is provided to U.S. Government Agencies with LIMITED RIGHTS as defined at FAR 52.227-14(a). Use, duplication, or disclosure by the U.S. Government is subject to the restrictions as set forth therein. In the event of use by a DOD agency, the Government's rights in Documentation are governed by the restrictions in the Technical Data Commercial Items clause at DFARS 252.227-7015 and the Commercial Computer Software and Commercial Computer Software Documentation clause at DFARS 252.227-7202.
TRADEMARK INFORMATION NetApp, the NetApp logo, Go Further, Faster, Data ONTAP, Appliance Watch, ASUP, AutoSupport, Bolt Design, Center-to-Edge, ComplianceClock, ComplianceJournal, ContentDirector, Cryptainer, Data Motion, DataFabric, DataFort, Decru, Decru DataFort, Evolution of Storage, Exec-Vault, FAServer, FilerView, FlexCache, FlexClone, FlexShare, FlexVol, FPolicy, Get Successful, gFiler, LockVault, Manage ONTAP, MultiStore, NearStore, NetApp Availability Assurance, NetApp IT As A Service, NetApp ProTech Expert, NetCache, NOW, NOW (NetApp on the Web), ONTAPI, RAID-DP, Replicator-X, SANscreen, SecureAdmin, SecureShare, Shadow Tape, Simulate ONTAP, SmartClone, SnapCache, SnapCopy, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapRestore, Snapshot, SnapStore, SnapSuite, SnapValidator, SnapVault, Spinnaker Networks, Spinnaker Networks logo, SpinCluster, SpinFlex, SpinFS, SpinHA, SpinMove, SpinServer, SpinStor, StoreVault, SyncMirror, Tech OnTap, Topio, vFiler, VFM, VFM (Virtual File Manager), WAFL, and Web Filer are either trademarks, registered trademarks, or service marks of NetApp, Inc. in the United States and/or other countries. Not all common law marks used by NetApp are listed on this page. Failure of a common law mark to appear on this page does not mean that NetApp does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Apple and QuickTime are either trademarks or registered trademarks of Apple Computer, Inc. in the United States and/or other countries. Microsoft and Windows Media are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, RealVideo, RealMedia, RealProxy, and SureStream are either trademarks or registered trademarks of RealNetworks, Inc. in the United States and/or other countries. All other brands or products are either trademarks or registered trademarks of their respective holders and should be treated as such. NetApp is a licensee of the CompactFlash and CF Logo trademarks.
Convention: Italic font
Type of information: Book titles. Words or characters that require special attention. Variable names or placeholders for information that must be supplied. For example, an ifstat command looks like this: ifstat -z -a interface
The name of the interface for which you want to view statistics is interface.
Convention: Monospaced font
Type of information: Command names, daemon names, and option names. Information displayed on the system console or other computer monitors. The contents of files.
Convention: Bold monospaced font
Type of information: Words or characters that are typed. For example, enter the following command: options httpd.enable on
license add
TERMS AND ACRONYMS USED IN THIS COURSE This table lists terms and concepts that are used frequently in this course. Many of the terms relate to specific areas of NetApp® technology, such as SAN, network-attached storage (NAS), the Data ONTAP® operating system, and protocols.
Storage Industry Data storage—an industry worth US $27 billion Centralized storage – Reduced IT costs – Increased flexibility – Maximum efficiency of processes and services
Trends in the marketplace – Data lifecycle – Virtualization – Storage efficiency
STORAGE INDUSTRY As IT departments throughout the world attempt to reduce costs and increase flexibility, the storage industry is expanding rapidly. Established trends continue, and new trends arise.
Data lifecycle: controlling data through its various stages of life, meeting the needs of each stage in the cycle
Virtualization: consolidating servers and hosting multiple machines on one physical platform
Storage efficiency: using techniques, such as thin provisioning and deduplication, that maximize storage resources
Security: securing data (an ever-increasing problem for many IT departments)
Data in motion: moving data to the optimal storage location
Cloud storage: providing or using storage as a service
NetApp: Leader in the Storage Industry NetApp firsts: First in the industry to support unified storage (NAS and SAN) on one platform First in the industry with Fibre Channel over Ethernet (FCoE) and Unified Connect First storage vendor to decouple physical storage from logical storage (flexible volumes)
NETAPP HARDWARE SOLUTION: FAS NetApp storage systems offer unmatched business agility, superior application uptime, simplicity of management, and breakthrough value. FAS6200 series: Rely on the versatility, scalability, and reliability of the FAS6200 series for your largest enterprise applications and your most demanding technical workloads. Achieve lower acquisition and operation costs—compared to traditional, large-scale storage. FAS3200 series: Do more for your business than you thought possible with a storage system. Choose the FAS3200 for its flexibility, performance, availability, and the responsiveness to growth that a high-bandwidth 64-bit architecture provides. FAS2000 series: With the FAS2000 series, you can manage your growing, complex data in dispersed departments or remote locations and add functionality easily and cost effectively.
NETAPP FAS6200 SERIES The FAS6200 family is a platform of high-end storage systems that are supported on the Data ONTAP 8.0.1 7-Mode operating system and the Data ONTAP 8.0.1 Cluster-Mode operating system. The family includes three models, each with a unique configuration. Each FAS6200 model has the following characteristics:
6U chassis
Embedded SAS, FC, Gigabit Ethernet (GbE), 10 GbE
Minimum of four Peripheral Component Interconnect Express (PCIe) slots per controller
Embedded Service Processor (integrated platform management device)
USB flash for OS boot media
Depending on the model, the chassis can accommodate one controller, one controller and an I/O expansion module (IOXM), or two controllers. The IOXM provides additional PCIe slots to the system. When two controllers are installed in a chassis, they form an HA pair through a nonvolatile RAM 8 (NVRAM8) backplane connection. In addition, two FAS6200 systems, each with a controller and an IOXM, can form an HA pair through external cabling. The following FAS6200 system configurations are supported:

System | Single chassis, 1 controller, 1 empty bay | Single chassis, 2 controllers | Two chassis, 1 controller, 1 IOXM
6210   | Yes                                       | Yes                           | No
6240   | No                                        | No                            | Yes
6280   | No                                        | No                            | Yes
NETAPP HARDWARE SOLUTION: V-SERIES V-Series open-storage controllers enable you to manage disk arrays from EMC®, IBM®, Hewlett-Packard Company, Hitachi Data Systems®, and other storage vendors as easily as you can manage NetApp storage. V6200 series: The open-storage controllers of the NetApp V6200 series can handle your largest enterprise and technical applications in multiprotocol, multivendor storage environments. V3200 series: The open-storage controllers of the V3200 series give you advanced data-management and storage-efficiency capabilities in multiprotocol, multivendor environments.
NETAPP COMPATIBLE DISK SHELVES NetApp storage supports a variety of disk shelves, from high-performance FC shelves to inexpensive ATA shelves. With the DS4243 shelf and the new 2.5-inch disk form-factor DS2246 shelf, NetApp storage supports SAS drives with both speed and cost efficiency. The DS4243 is named for its 4U rack size, 24 disks, and 3-Gb line rate, while the DS2246 is named for its 2U rack size, 24 disks, and 6-Gb line rate.
SSDs in a NetApp Disk Shelf Solid-state disks (SSDs): Can provide consistently fast response times for your mission-critical applications Are supported in the highly reliable DS4243 disk shelf Use 24 x 100 GB SSDs per shelf Are available with higher-performance NetApp FAS and V-Series storage controllers, which run the Data ONTAP 8.0.1 or later operating system
SSDS IN A NETAPP DISK SHELF Solid-state disks (SSDs) are best suited for random read-intensive workloads that require consistently fast response times. Currently, SSDs are available in the DS4243 shelf, which houses 24 drives in 3.5-inch form-factor carriers. Each shelf gives you approximately 2 TB of raw capacity. For best results, use SSDs with a high-performance storage controller.
NetApp Hardware Solution: Flash Cache Flash Cache (PAM II): Was formerly named Performance Acceleration Module (PAM) Eliminates up to 75% of the high-performance disk drives in a storage system while providing better response time across the I/O throughput
NETAPP HARDWARE SOLUTION: FLASH CACHE Use Flash Cache, formerly the Performance Acceleration Module II (PAM II), to optimize the performance of random read-intensive workloads—such as file services, messaging, virtual infrastructure, and OLTP databases—without using additional high-performance disk drives. This intelligent read cache speeds access to your data, reducing latency by a factor of 10 or more—compared to disk drives. Faster response times can translate into higher throughput for random I/O workloads.
USE CASES FOR FLASH CACHE AND SSDS Both intelligent caching and persistent storage are effective ways to improve performance for random read-intensive workloads. When both Flash Cache and SSDs are part of a storage system configuration, the data from SSD volumes is not placed in Flash Cache. Instead, SSD data is placed in the first-level read cache, which resides in the controller’s dynamic RAM (DRAM) main memory. Only data from rotating media (disk drives) is placed in Flash Cache. In this case, Flash Cache functions as a second-level read cache. It is often said that SSDs improve write performance, not just read performance. Because the WAFL® (Write Anywhere File Layout) file system in combination with the use of NVRAM as a write journal enables NetApp storage to handle random writes very efficiently, this statement applies more to traditional FC-SAN storage than to NetApp storage. Nevertheless, SSDs can improve the write performance of a NetApp controller for workloads that are very write intensive. Currently, promotion of autotiering software is at its peak, so the gap between expectations and reality is large. Autotiering software works better for moving data downhill, to a lower tier, than for moving data uphill, to a higher tier. When data on a lower tier (such as on SATA disk drives) suddenly becomes hot, it is typically moved to an upper tier (such as SSDs) only after several hours, perhaps only after several days. As a factor in the delay, chunk sizes range from 512 KB for Compellent® to 1 GB for EMC® CLARiiON® Fully Automated Storage Tiering (FAST). As chunk size increases, so does the likelihood that cold data will be moved with hot data. As a consequence, larger chunk sizes increase the burden on the storage controller. In contrast to autotiering software, NetApp intelligent caching moves newly hot data into cache in small 4-KB chunks and in real time. The data chunk is initially placed in the first level of read cache, which is in controller memory (DRAM). Eventually, the data chunk flows into the much larger, second-level Flash Cache.
NETAPP OS PLATFORMS: DATA ONTAP Achieve new levels of scalability and storage flexibility, resulting in decreased TCO, maximum business agility, and 24x7 business continuity. Accelerate your move to a service-oriented architecture with the Data ONTAP 8.0 operating system, which enables service levels across a diverse set of applications and extends data center virtualization. The Data ONTAP 8.0 operating system provides a unified, scalable platform that addresses your NAS, SAN, multitier, multiprotocol, and multitenant virtualized environments.
7-Mode
– Simple transition from Data ONTAP 7G
– Scale-up technology that enables aggregates to be 100 TB (higher in the future)
– Simple configuration for NAS or SAN
Cluster-Mode
– Simple transition from Data ONTAP GX
– Scale-out technology that enables a pool of storage controllers to manage the storage cluster
– One NAS namespace shared across the cluster
DATA ONTAP 8.0.X OPERATING SYSTEM The Data ONTAP 8.0 7-Mode operating system is both scalable and flexible. It provides: More efficient storage High availability Business continuance Quality of service Reduced complexity, greater simplicity To achieve high performance and high capacity, deploy the Data ONTAP 8.0 Cluster-Mode operating system. The Data ONTAP 8.0 Cluster-Mode operating system helps you achieve results and get to market faster by providing the massive throughput and scalability that you need to meet the demanding requirements of your high-performance computing and digital media content applications. Achieve high levels of performance, manageability, and reliability for your large Linux®, UNIX®, or Microsoft® Windows® clusters with the Data ONTAP 8.0 Cluster-Mode operating system. The Data ONTAP 8.0 Cluster-Mode operating system includes:
Multinode scaling, using a global namespace
NetApp FlexVol® technology storage virtualization
Clustered file system
Snapshot® technology replication and mirroring
UPGRADING TO DATA ONTAP 8.0.X Upgrading Data ONTAP or Data ONTAP GX is easy when you upgrade within the same "mode," such as Data ONTAP 7.3.x to Data ONTAP 8.0.x 7-Mode. In a high-availability configuration, you can upgrade from Data ONTAP 7.3.x to Data ONTAP 8.0.x 7-Mode with a nondisruptive upgrade (NDU), maintaining data access during the upgrade. In a single-node configuration, you can upgrade from Data ONTAP 7.3.x to Data ONTAP 8.0.x 7-Mode without disturbing the data on the shelves (called "Data in Place"). Although upgrades from Data ONTAP GX to Data ONTAP 8.0 Cluster-Mode require a reboot, all data can be maintained. All other upgrades are conversions, requiring disks and systems to be wiped clean.
Data ONTAP-v
– Configures Data ONTAP as a virtual machine (VM)
– Runs in VMware® vSphere™ 4.1 with a Fujitsu® PRIMERGY® BX400 blade server
[Slide diagram: an infrastructure blade hosts the Data ONTAP VSA (CF card, NVRAM, WAFL, RAID 0, SAS/SCSI, and NFS/CIFS/SCSI target services) alongside vendor VMs and VM services; server blades run ESX with VMFS over an iSCSI initiator and connect through vSwitches and the network backplane to storage blades (RAID 5) and a WAN. Callouts: virtual machine storage (vmdk files plus parity, with an asynchronous mirror target) is provisioned and managed by Data ONTAP; volumes are mounted directly from Data ONTAP (NFS, CIFS, iSCSI) by NFS and CIFS clients; storage is managed by the Data ONTAP storage stack, with the V-NVRAM backing store provisioned by ESX on physical disk.]
DATA ONTAP-V The Data ONTAP operating system in a virtual machine (Data ONTAP-v) delivers the Data ONTAP storage stack, data management, and caching features within a virtual machine. Currently, Data ONTAP-v is based upon Data ONTAP 8.0.1 7-Mode. The virtual machine is hosted on physical servers that use direct-attached storage or are part of an external storage system. The Data ONTAP-v product is included within the virtual storage appliance category. Data ONTAP-v (in a virtual environment) and Data ONTAP (in a physical environment) provide the same capabilities. The capabilities of Data ONTAP-v can be configured for multiple usage scenarios. When used with the Fujitsu® BX400, Data ONTAP-v enables storage stack management of local Fujitsu disks and provides IP-based (CIFS, iSCSI, and NFS) data access for home directories, e-mail, and business applications for small-sized and medium-sized firms. NetApp offers Data ONTAP-v to Fujitsu as an OEM product. Data ONTAP-v will be incorporated into the Fujitsu SX960 storage blade for the PRIMERGY® BX400 blade server. As of January 2011, the Data ONTAP-v system embedded with the SX960 storage blade is sold exclusively by Fujitsu and its worldwide authorized resellers.
NETAPP SOFTWARE MANAGEMENT PRODUCTS The NetApp manageability software family consists of four suites that provide software tools for effective data management. With the NetApp storage suite of products—including Operations Manager, File Storage Resource Manager, SAN Manager, and Command Central Storage—you can do more with less. Instead of managing physical storage systems individually, you can view and manage multiple devices from a central console. The NetApp server suite includes the SnapDrive® data management software and ApplianceWatch™ Performance and Resource Optimization (PRO) Management Pack product families. SnapDrive products provide a server-aware alternative to maintaining manual host connections to underlying NetApp storage systems. ApplianceWatch products integrate with third-party system-management tools from HP, IBM, and Microsoft. With ApplianceWatch products, you can view, monitor, and manage NetApp storage systems from within their respective system-management environments. The NetApp application suite delivers increased productivity and flexibility across the entire enterprise. The various NetApp SnapManager® management software products enable you to improve data availability, reduce unexpected data loss, and increase storage management flexibility by leveraging the power of integrated NetApp storage systems. NetApp SANscreen® storage management software provides effective tools for managing SAN environments.
Is a proven tool for managing petabyte-scale, globally distributed repositories of images, video, and records for enterprises and service providers. Provides tremendous scalability by eliminating the typical constraints of mapping data into predefined data containers as blocks and files. It supports billions of files or objects and multiple petabytes of capacity in one global namespace. Enables intelligent data management and secure content retention. It optimizes data placement, metadata management, and efficiency through a global policy engine with built-in security that manages how data is stored, placed, protected, and retrieved. Technologies such as digital fingerprints and encryption protect the content from corruption and tampering. Helps provide data availability any time, anywhere, to facilitate nonstop operations. Because the solution is designed to allow flexible deployment configurations, it can meet the varying needs of global, multisite organizations.
NAS AND SAN TOPOLOGIES SAN is a block-based storage system that makes data available over the network, using FC, FCoE, and iSCSI protocols. NAS is a file-based storage system that makes data available over the network, using NFS and CIFS protocols. Within the Data ONTAP 8.0.1 7-Mode operating system, NetApp provides Unified Connect, which allows a single 10-Gb adapter on the storage system, called a Unified Target Adapter (UTA), and a single 10-Gb adapter on a client host, called a Converged Network Adapter (CNA), to be used as an Ethernet path for both NAS and SAN. NetApp SAN and Unified Storage Architecture provide an outstanding level of investment protection and flexibility. The FAS system on the slide implies one "box." However, the actual storage environment includes small and large FAS systems.
PROTOCOLS SUPPORTED BY DATA ONTAP NFS: The NFS protocol allows UNIX and PC NFS clients to mount file systems to local mount points. The storage appliance supports NFS v2, NFS v3, NFS v4, and NFS over User Datagram Protocol (UDP) and Transmission Control Protocol (TCP). CIFS: The CIFS protocol supports Windows Server® 2000, Windows Server 2003, and Windows Server 2008. FTP: FTP enables UNIX clients to remotely transfer files to and from the storage appliance. HTTP: HTTP enables Web browsers to display files that are stored on the storage appliance. WebDAV: Web-based Distributed Authoring and Versioning (WebDAV) enables certain applications to create, modify, and access files by using extensions to HTTP. FC, iSCSI, or FCoE: The FC, iSCSI, and FCoE protocols enable a storage device to communicate with one or more hosts that are running operating systems such as Solaris™ or Windows in a SAN environment.
DATA ONTAP 7.3.X ARCHITECTURE The Data ONTAP 7.3 operating system architecture consists of these elements:
Network interface: The point of interconnection between a terminal and the network. The network layer delivers data to RAM through the simple kernel and through some libraries. Protocol stack: Enables the processing of the data that is placed into RAM by the network layer. Processing is based on protocols (CIFS, NFS, FC, iSCSI, FTP, or HTTP). WAFL® (Write Anywhere File Layout): An intelligent file system that actively optimizes write performance by identifying the most effective way to lay out data. RAID layer: Provides RAID 4 and RAID-DP® protection by taking the data that is processed by the WAFL file system. The RAID layer creates stripes that are used to calculate parity. The RAID layer also protects data by performing RAID scrubs and assists in the reconstruction of failed disks. Storage: Manages data transfer to and from disks. The storage layer is responsible for writing to the disks. Based on the data that is delivered by the WAFL file system and the RAID layer, it optimizes the write process to the disks. Nonvolatile RAM (NVRAM): Logs all transactions that change the state of the file system. Because writes are processed in system RAM, NVRAM provides battery-backed protection against data loss only in emergency situations. After an improper shutdown, NVRAM is read only.
DATA ONTAP 8.0.X 7-MODE ARCHITECTURE M-Host Within FreeBSD®, the M-host management component provides an API to the Data ONTAP 8.0 operating system. M-host has a swap space on the root volume of the D-blade. D-Blade The D-blade is a kernel module within FreeBSD that provides Data ONTAP 7G compatibilities. The D-blade consists of these elements:
NetApp Technical Support
Assisted-service products
– SupportEdge
– SupportEdge Standard
– SupportEdge Secure for Government
– Storage Availability Audits
– Rapid Deployment Services
Self-service products
– NetApp Support site—formerly NOW® (NetApp on the Web)
– AutoSupport and My AutoSupport
NETAPP TECHNICAL SUPPORT The NetApp architecture eliminates single points of failure and helps you achieve high availability, but there are factors that no design can eliminate. NetApp technical support can help. The NetApp global team of experts is ready to respond to your problems. NetApp provides cost-effective technical support that is scaled and priced for your needs, whether you are a large enterprise, classified government installation, or small business.
NETAPP SUPPORT SITE The NetApp knowledge database provides support, information, and documentation. The NetApp Support site is a NetApp customer-driven and employee-driven knowledgebase that is accessible at either of these locations: http://support.netapp.com http://now.netapp.com When you log in to the NetApp Support site, the Home page is displayed. From this page, you can access the following kinds of administrative support:
Request technical assistance
Submit or review the status of a technical assistance case
Submit or review the status of a Return Materials Authorization (RMA)
Find bug reports
Locate documentation
Find downloads
Find information about your product
Locate troubleshooting solutions
Storage Efficiency in My AutoSupport Statistics on system efficiency and effective utilization of NetApp storage Overview of physical and effective capacity Calculation of storage efficiency savings
STORAGE EFFICIENCY IN MY AUTOSUPPORT My AutoSupport is based upon AutoSupport data and is designed to tell you:
How much storage you are using and what storage efficiency features are enabled How much storage savings you are realizing—compared to traditional storage What additional storage savings you can realize by enabling more storage efficiency components
Module Summary In this module, you should have learned to: Identify the key features and functions of NetApp storage systems Describe the advantages of a NetApp storage system Distinguish between NAS and SAN topologies Describe NetApp Unified Storage Architecture Access the NetApp Support site to obtain software and hardware documentation
Check Your Understanding What are the NetApp hardware solutions? What is the primary function of the WAFL file system? What storage topologies are supported by NetApp and the Data ONTAP operating system? How is SAN different from NAS?
Where can you find support for the Data ONTAP operating system?
CLI and GUI A storage system can be managed from: The command-line interface (CLI) – Accessed directly through a serial connection to the console – Accessed remotely through Secure Shell (SSH) or Telnet
A graphical user interface (GUI): accessed remotely through a variety of protocols
COMMAND-LINE INTERFACE To enable two sessions, use the following command: system> options telnet.distinct.enable on NOTE: If two sessions are not created, users must share the one session.
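As a hedged illustration (the hostname prompt and the displayed value are examples only), you can view the current setting before changing it, because entering options with a partial name lists every matching option:
system> options telnet.distinct
telnet.distinct.enable       off
system> options telnet.distinct.enable on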
Console Connections: Serial Port The console allows a physical connection through the: Serial port – Storage systems have an RLM or SP RJ45 port marked IOIOI (on the rear). – You connect the DB9 end to a serial port on a host computer. – Properties:
Speed: 9600 bits per second Data bits: 8 Stop bits: 1 Parity: none Flow control: hardware or none
CONSOLE CONNECTIONS: SERIAL PORT For console access, you can connect a terminal (or terminal server) to the storage system console port through a standard RS232 connection, such as a DB9-to-DB9 serial cable (null modem). You use the following settings for the serial communication port: Bits per second: 9600 Data bits: 8 Parity: none Stop bits: 1 Flow control: hardware or none Console access can be protected. On newer systems with a Service Processor, administrators may access the SP through the serial port by typing the Control-G keystroke combination.
Console Connections: RLM or SP The console allows a physical connection through the: Serial port Remote LAN Module (RLM) or Service Processor (SP) – Remote access to your storage system regardless of the system state
– Continuous power and secure access – An rlm command or sp command used for configuration – The naroot account used to log in as root
CONSOLE CONNECTIONS: RLM OR SP The Remote LAN Module (RLM) or the newer Service Processor (SP) is a remote management card that provides remote platform management capabilities, including remote access, monitoring, troubleshooting, logging, and alerting features. These modules stay operational regardless of the operating state of the storage system. They are powered by standby voltage, which is available as long as the storage system has input power to at least one of the storage system’s power supplies. The RLM and the SP each have a single temperature sensor to detect the ambient temperature around the module board. Data that is generated by this sensor is not used for any system or RLM environmental policies. It is only used as a reference point that might help you troubleshoot storage system issues. For example, it might help a remote system administrator determine whether a system was shut down due to an extreme temperature change in the system. The FAS30xx series and FAS6000 series storage systems provide an Ethernet interface for connecting to the RLM. The FAS32xx series and FAS62xx series storage systems provide two separate Ethernet interfaces (e0M and e0P) for connecting to the SP.
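For example (assuming a system with an RLM; the sp command behaves similarly on SP-based systems), the rlm command mentioned above is used from the storage system console to check and configure the module:
system> rlm status
system> rlm setup
The rlm setup command runs an interactive wizard that prompts for the module’s network configuration, and rlm status typically reports its current state and network settings.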
Console Connections: BMC The console allows a physical connection through the: Serial port RLM or SP On the FAS2000 series, Baseboard Management Controller (BMC)
– Remote access to your storage system regardless of the system state – Continuous power and secure access – A bmc command used for configuration
CONSOLE CONNECTIONS: BMC The Baseboard Management Controller (BMC) is a remote management device that is built into the motherboard of the FAS2000 series storage systems. It provides remote platform management capabilities, including remote access, monitoring, troubleshooting, logging, and alerting features. The BMC stays operational regardless of the operating state of the storage system. Both the BMC and its dedicated Ethernet network interface card (NIC) use standby voltage for high availability. The BMC is available as long as the storage system has input power to at least one of the storage system’s power supplies.
SHELL ACCESS: E0M AND E0P Some storage system models include interfaces named e0M and e0P. These interfaces are dedicated to Data ONTAP® management activities, with e0M for standard management functionality and e0P for private management traffic. They enable you to separate management traffic from data traffic on your storage system for security and throughput benefits. On a storage system that includes the e0M interface, the Ethernet port that is indicated by a wrench icon on the rear of the chassis connects to an internal Ethernet switch. On a storage system that includes the e0P interface, the Ethernet port that is indicated by a wrench icon with a padlock on the rear of the chassis connects to an internal Ethernet switch. When you set up a system that includes the e0M or e0P interface, the Data ONTAP setup script informs you that, for environments that use dedicated LANs to isolate management traffic from data traffic, e0M and e0P are the preferred interfaces for the management LAN. The setup script prompts you to configure e0M and e0P. The e0M and e0P configurations are separate from the RLM or SP configuration. Both configurations require unique IP and MAC addresses to allow the Ethernet switch to direct traffic to either the management interfaces or the RLM or SP. Although the e0M interface and the RLM both connect to the internal Ethernet switch by means of the Ethernet port that is indicated by a wrench icon on the rear of the chassis, the e0M interface and the RLM serve different functions. The e0M interface serves as the dedicated interface for environments that have dedicated LANs for management traffic. You use the e0M interface for Data ONTAP administrative tasks. The RLM, conversely, can be used for managing the Data ONTAP operating system and for providing remote management capabilities for the storage system, including remote access to the console, monitoring, troubleshooting, logging, and alerting features. Also, the RLM stays operational regardless of the operating state of the storage system and regardless of whether the Data ONTAP operating system is running or not. After e0M is configured, you can open a Telnet, RSH, or SSH session on a client.
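As a brief sketch (the IP address and netmask are placeholders), e0M can also be configured manually from the console like any other interface, and the command can be appended to /etc/rc so that the setting persists across reboots:
system> ifconfig e0M 192.168.10.50 netmask 255.255.255.0
system> wrfile -a /etc/rc ifconfig e0M 192.168.10.50 netmask 255.255.255.0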
Shell Access: Ethernet In addition to using direct console access, administrators can access a storage system through: e0M and e0P (if available) Ethernet – Communication protocols:
Data ONTAP 8.0.x defaults to secure protocols; Data ONTAP 7.3.x defaults to insecure protocols
– Secure protocols like SSH and SSL are recommended – Insecure protocols are not recommended
SHELL ACCESS: ETHERNET If your system is not configured with an e0M or e0P interface, use a standard Ethernet port for administrative communication. NetApp recommends using a dedicated interface (such as e0a) for administrative access. Using a secure shell protocol is also recommended.
Secure Shell Secure shell (SSH): – Allows for secure administrative access to the storage system – Requires no license; set on by default in Data ONTAP 8.0.x – Is supported by the Data ONTAP 7.3.x and Data ONTAP 8.0.x operating systems
To configure SSH 2.0: system> secureadmin setup ssh – Follow the wizard and enter a host key of 768 bits. – Wait for a syslog message that indicates that SSH is set up. system> secureadmin enable ssh2
SECURE SHELL Although the Data ONTAP 7.3.x and Data ONTAP 8.0.x operating systems support SSH 1.x, the use of SSH 1.x is not recommended because SSH 1.x contains known vulnerabilities.
Secure Sockets Layer Secure Sockets Layer (SSL): – Uses a certificate to provide a secure connection between the storage system and a Web browser – Can use either of two types of certificates Self-signed certificate Certificate-authority-signed certificate
To configure a self-signed certificate for SSL: system> secureadmin setup ssl – Enter country, state, locality, organization, unit, common name, e-mail address, days until expiration, and key length. – The certificate is created in the /etc/keymgr directory. – A self-signed certificate is called secure.der.
SECURE SOCKETS LAYER Secure Socket Layer (SSL) is an industry-accepted method to encrypt communication between an host and a storage system. SSL uses a certificate to provide a secure connection between the storage system and a Web browser. Two types of certificates are used: a self-signed certificate and a certificate-authority-signed certificate.
Self-signed certificate: A certificate that is generated by the Data ONTAP operating system. Self-signed certificates can be used as is, but they are less secure than certificate-authority-signed certificates because the browser has no way of verifying the signer of the certificate. This means that the system could be spoofed by an unauthorized server. Certificate-authority-signed certificate: A certificate-authority-signed certificate is a self-signed certificate that is sent to a certificate authority to be signed. The advantage of a certificate-authority-signed certificate is that it verifies to the browser that the system is the system to which the client intended to connect. The Data ONTAP 8.0 operating system comes with SSL enabled by default. However, if you upgrade, NetApp strongly recommends that you configure the protocol.
Secure Sockets Layer Configuration To configure a certificate-authority-signed certificate for SSL: system> secureadmin addcert ssl directory_path For directory_path, enter the full path: /etc/tempdir/secure.pem. The certificate is created in the /etc/keymgr directory. A certificate-authority-signed certificate is called secure.pem.
COMMAND-LINE PRIVILEGES The Data ONTAP operating system provides two sets of commands that are based on privilege level: administrative and advanced. Use the priv command to set the privilege level. The administrative level provides access to commands that are sufficient for managing your storage system. The advanced level provides access to these same administrative commands, plus additional troubleshooting commands. Advanced-level commands should only be used with the guidance of NetApp technical support. When you use advanced-level commands, the following warning is displayed: “Warning: These advanced commands are potentially dangerous; use them only when directed to do so by NetApp personnel.”
BASIC ADMINISTRATION COMMANDS At the normal administration privilege level, entering a question mark at the command line displays the commands that are available to the system administrator for disk management, networking and system management, physical and virtual interface configuration, and related tasks. Some commands are simple, some use arguments, and some perform an obvious function, such as backup, ping, or help. Type help command_name on the command line to display a brief description of the command. Type only command_name on the command line to display the full syntax of the command and any arguments that it takes. A Data ONTAP 8.0.1 7-Mode example is shown.
ADVANCED PRIVILEGE COMMANDS In privileged mode, you can use advanced commands that provide more control and access to the storage system. In some cases, more arguments or options are available for a given command when you are in privileged mode. These commands are potentially dangerous and should be used only by knowledgeable personnel. To access the advanced commands, enter priv set advanced. Typing this command enables advanced privileges and changes the command-line prompt by appending an asterisk (*). To return to basic administration mode, enter priv set admin. Some administration commands that are considered advanced are also available in the basic administration mode, but they are hidden and do not appear when you enter the help command from the basic administration mode. A Data ONTAP 8.0.1 7-Mode example is shown.
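A minimal console sketch (the prompt string is illustrative) of moving between the two privilege levels described above:
system> priv set advanced
system*> priv set admin
system>
The asterisk that is appended to the prompt is the visual cue that advanced commands are currently available.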
GUIs Used to Manage Storage Systems A storage system can be managed through various GUIs: NetApp System Manager NetApp Operations Manager (formerly DataFabric® Manager) Microsoft® Windows® interfaces, such as Computer Management for certain CIFS functionality
NETAPP SYSTEM MANAGER 1.1 NetApp System Manager provides comprehensive management and the ability to manage one or more arrays through a simple, easy-to-use, intuitive UI. NetApp System Manager is a Microsoft Management Console (MMC) 3.0 Windows application that supports discovery, setup, Fibre Channel (FC), iSCSI, CIFS, NFS, deduplication, provisioning, thin provisioning, Snapshot® technology, and configuration management of multiple NetApp storage systems from a single UI. To learn more, go to the NetApp Support site.
NETAPP SYSTEM MANAGER FEATURES NetApp System Manager 1.1 is the first release of this product to support the Data ONTAP 8.0 7-Mode operating system. NetApp System Manager includes these features:
Seamless Windows integration: Integrates seamlessly into your management environment through the MMC. Discovery and setup of storage systems: Enables you to quickly discover a storage system or a high-availability (HA) configuration on a network subnet. You can easily set up a new system and configure it for storage. iSCSI and FC: Manages iSCSI and FC protocol services for exporting data to host systems. SAN provisioning: Provides a workflow for LUN provisioning, as well as simple aggregate and FlexVol® creation. Network-attached storage (NAS) provisioning: Provides a unified workflow for CIFS and NFS provisioning, as well as management of shares and exports. Management of storage systems: Provides ongoing management of your storage system or HA configuration. Streamlined HA configuration management: Provides combined setup for HA configuration of NetApp systems, logical grouping and management of such a configuration in the console or navigation tree, and common configuration changes for both systems in an HA configuration. Systray (Windows notification area): Provides real-time monitoring and notification of key health-related events for a NetApp system.
NETAPP SYSTEM MANAGER: STORAGE SYSTEMS Use the Setup wizard to configure storage systems. If you are not authenticated, the NetApp System Manager prompts you for your credentials.
Operations Manager Discovers, monitors, and manages NetApp storage Provides maximum availability, reduces TCO, and ensures business policy compliance
OPERATIONS MANAGER NetApp Operations Manager delivers comprehensive monitoring and management for NetApp enterprise storage and content caching environments. From a central point of control, Operations Manager provides alerts, reports, and configuration tools to keep your storage infrastructure in line with your business requirements, for maximum availability and reduced TCO. Operations Manager is a simple, centralized administration tool that enables comprehensive management of enterprise storage and content delivery infrastructures. No other single management application provides the same level of NetApp monitoring and management for NetApp FAS storage systems. The detailed performance and health monitoring of Operations Manager gives administrators proactive information to help resolve potential problems before they occur and troubleshoot problems faster if they do occur.
ALTERNATIVE GUIS Microsoft Windows Server 2000 and later, and client operating systems such as Microsoft Windows XP and later, have a management console called Computer Management that can connect to a storage system. Alternatively, MMCs can be used to administer a storage system remotely.
Configuring Your System To change the configuration of a storage system, use one of the following methods: – CLI – Configuration files – NetApp System Manager
Steps in setting up a new storage system:
– Verify the date, time, and time zone configuration
– Set up SNMP variables to be monitored, if any
– Review the system log (syslog)
– Configure the AutoSupport tool
Verify the configuration: use the AutoSupport tool to report configurations
CLI COMMANDS The options command implementation is unique in the Data ONTAP operating system. If you enter just the command, the system displays all of the visible options and their values. If you enter the options command along with a feature name (such as cifs or raid), the system displays all of the visible options settings for that feature. Taking this a step further, if you enter the options command along with any characters, the system parses the string and displays all of the visible options that match what you entered.
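For instance, each form of the options command narrows the output in the way just described (the feature names shown are examples that appear elsewhere in this course):
system> options
system> options raid
system> options autosupport
The first form lists every visible option and its value, the second lists only the options for the RAID feature, and the third lists every option whose name matches the string autosupport, such as autosupport.to and autosupport.content.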
REGISTRY FILES Persistent configuration information and other data is stored in a registry database. There are several backups of the registry database that are used automatically if the original registry becomes unusable. The /etc/registry.lastgood database is a copy of the registry as it existed after the last successful boot. The /etc/registry is edited by the Data ONTAP operating system and should not be edited manually. Configuration commands, such as the network interface configuration (ifconfig), must remain in the /etc/rc file.
Editing Files from the CLI 1. Make a backup copy of the file. 2. Read the file: rdfile. 3. Use one of two commands to write to the file: – To write to the file and delete the original file: wrfile – To append one line to the file without deleting the original file: wrfile -a NOTE: Better yet, use NetApp System Manager.
EDITING FILES FROM THE CLI The console command rdfile displays the present contents of an ASCII text file. If the file doesn’t exist or is empty, this command returns nothing. For example, to display the /etc/hosts file from the CLI, enter rdfile /etc/hosts. To create or re-create a file, enter the console command wrfile. For example, to re-create the /etc/hosts file from the CLI, enter wrfile /etc/hosts. Then enter the contents of the file and press Ctrl-C to commit the file. Enter the rdfile command to verify the contents of the file.
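A short example session (the host names, addresses, and existing file contents are hypothetical) that follows the workflow above, using wrfile -a so that the existing contents are preserved:
system> rdfile /etc/hosts
127.0.0.1 localhost
system> wrfile -a /etc/hosts 192.168.10.20 adminhost
system> rdfile /etc/hosts
127.0.0.1 localhost
192.168.10.20 adminhost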
CLI: TIME OPTIONS Use these settings with the date command:
-u: sets the date and time to Greenwich Mean Time instead of the local time
CC: the first two digits of the current year
yy: the second two digits of the current year
mm: the current month (If the month is omitted, the default is the current month.)
dd: the current day (If the day is omitted, the default is the current day.)
hh: the current hour, using a 24-hour clock
mm: the current minute
ss: the current second (If the seconds are omitted, the default is 0.)
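For example (the timestamp itself is arbitrary), the fields above combine as CCyymmddhhmm, so setting the local clock to 3:30 p.m. on April 15, 2011, and then confirming it looks like this:
system> date 201104151530
system> date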
CLI: Syslog A syslogd daemon performs message logging. The /etc/syslog.conf configuration file on the storage system's root volume determines how system messages are logged. Messages can be sent to: – The console – A file – A remote system
CLI: SYSLOG The syslog contains information and error messages that the storage system displays on the console and logs in the /etc/messages file. To specify the types of messages that the storage system logs, use the Syslog Configuration page in NetApp System Manager to edit the /etc/syslog.conf file. This file specifies which types of messages are logged by the syslogd daemon. (A daemon is a process that runs in the background, rather than under the direct control of a user.)
The /etc/syslog.conf File The /etc/syslog.conf file consists of lines with space-separated fields in the following format: facility.level action
The facility parameter specifies the subsystem from which the message originated. The level parameter describes the severity level of the message. The action parameter specifies where messages are sent.
THE /ETC/SYSLOG.CONF FILE By default, the /etc/syslog.conf file does not exist; however, there is a sample /etc/syslog.conf file. To view a manual page, enter the man syslog.conf command. The facility parameter uses one of the following keywords: kern, daemon, auth, cron, or local7. The level parameter is a keyword from the following ordered list (higher to lower): emerg, alert, crit, err, warning, notice, info, debug. The action parameter can be in one of three forms:
A pathname (beginning with a leading slash): Selected messages are appended to the specified log file. A hostname (preceded by “@”): Selected messages are forwarded to the syslogd daemon on the named host. /dev/console: Selected messages are written to the console. For more information about /etc/syslog.conf settings, see the System Administration Guide.
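A hypothetical /etc/syslog.conf fragment (the log-file path and the host name adminhost are placeholders, and lines beginning with # are assumed to be ignored as comments, as in standard syslog.conf files) that uses the facility.level action format and the three action forms described above:
# Append kernel messages of severity err and higher to a dedicated log file
kern.err /etc/messages_kern
# Forward auth messages of severity notice and higher to a remote syslogd host
auth.notice @adminhost
# Write emergency messages from the daemon facility to the console
daemon.emerg /dev/console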
AUTOSUPPORT TOOL The AutoSupport tool is a call-home feature that is included in the Data ONTAP operating system software for all NetApp systems. This integrated and efficient monitoring and reporting tool constantly monitors the health of your system. The AutoSupport tool allows storage systems to send messages to the NetApp technical support team and to other designated addressees when specific events occur. The AutoSupport message contains useful information for technical support to identify and solve problems quickly and proactively. You can also subscribe to the abbreviated version of urgent AutoSupport messages through alphanumeric pagers, or you can customize the type of message alerts that you want to receive. The AutoSupport Message Matrices list all of the current AutoSupport messages in order of software version. The AutoSupport tool allows the system to send messages directly to system administrators and NetApp technical support, which has a dedicated team that continually enhances AutoSupport analysis tools. To continuously monitor your system’s status and health, the AutoSupport tool:
Is automatically triggered by the kernel once a week to send information to the e-mail addresses that are specified in the autosupport.to option before the message file is backed up. In addition, the options command can be used to invoke the AutoSupport mechanism to send this information. Sends a message in response to events that require corrective action from the system administrator or NetApp technical support. Sends a message when the system reboots.
EXAMPLES OF AUTOSUPPORT EVENTS AutoSupport e-mail messages can be triggered by the following events: Weekly logs (/etc/messages) System reboots Low NVRAM batteries Disk, fan, and power supply failures Shelf faults File system growing too large User-defined SNMP traps To read descriptions of some of the AutoSupport messages that you might receive, access the NetApp Support site and search for AutoSupport Message Matrices. You can view either the online version or the version in the Data ONTAP operating system guide.
CLI: CONFIGURING AUTOSUPPORT, STEPS 3-7 AutoSupport e-mail messages contain the following information: Output of system commands Message date and timestamp NetApp software version Storage system ID and host name Software licenses enabled SNMP information and location (if configured) Contents of the /etc/messages file Contents of the /etc/serialnum file (if created) AutoSupport messages also contain additional information that is specific to your storage system. This information helps identify crucial parameters that are required for follow-up handling of the triggering event. To control the detail level of event messages and weekly reports, use the options command to specify the value of autosupport.content as complete or minimal. Complete AutoSupport messages are required for normal technical support. Minimal AutoSupport messages omit sections and values that might be considered sensitive information and reduce the amount of information sent. However, keep in mind that choosing minimal greatly affects the level of support that NetApp is able to provide.
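A hedged configuration sketch (the recipient address is a placeholder, and autosupport.enable is an assumption of this example rather than an option discussed above):
system> options autosupport.enable on
system> options autosupport.to admin@example.com
system> options autosupport.content complete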
Testing AutoSupport Messages To send a manual AutoSupport message, run the following command on the storage system console: system> options autosupport.doit '[message]'
The message can be a word or a string that is enclosed in single quotation marks (‘ ’). For testing your AutoSupport configuration, NetApp recommends that you use the message TEST or TESTING.
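For example, following that recommendation:
system> options autosupport.doit TESTING
Because the message here is a single word, no quotation marks are needed.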
Check Your Understanding What methods can you use to access a storage system’s CLI? How can you configure a FAS system from a remote host? When are AutoSupport messages generated?
Storage The Data ONTAP operating system provides data storage for clients: A volume (or a smaller increment within a volume) makes storage available to clients through protocols. Volumes are contained in an aggregate. Aggregates are not visible to clients.
Storage Architecture: Aggregates Aggregates: – Are created by administrators – Contain one or two plexes Aggregate types: – Traditional: deprecated – 32-bit: 16-TB limitation – 64-bit: Data ONTAP 8.0.x operating system only
system> aggr status
Aggr        State
aggr_trad   online
STORAGE ARCHITECTURE: AGGREGATES To support the differing security, backup, performance, and data-sharing needs of your users, group the physical data storage resources on the storage system into one or more aggregates. An aggregate provides storage to the volume or volumes that it contains. Each aggregate has its own RAID configuration, plex structure, and set of assigned disks. When you create an aggregate without an associated traditional volume, you can use the aggregate to hold one or more FlexVol® volumes—the logical file systems that share the physical storage resources, RAID configuration, and plex structure of that common containing aggregate. When you create an aggregate that contains a single traditional volume, the aggregate and its volume are tightly bound together as an administrative unit, and the aggregate can contain only that volume.
Storage Architecture: Plexes A plex: When used with SyncMirror® software, provides mirror capabilities Contains one or more RAID groups If mirroring is not used, is limited to one per aggregate
system> sysconfig -r
...
Plex /aggr1/plex0 (online, normal, active, pool0)
RAID group /aggr1/plex0/rg0 (normal)
...
RAID group /aggr1/plex0/rg1 (normal)
STORAGE ARCHITECTURE: PLEXES Mirrored aggregates have two copies of their data, called plexes, which use the SyncMirror® synchronous mirroring software functionality to provide redundancy by duplicating the data. When SyncMirror synchronous mirroring software is enabled, all of the disks are divided into two disk pools, and a copy of the plex is created. The plexes are physically separated (each plex has its own RAID groups and its own disk pool), and the plexes are updated simultaneously. This architecture provides added protection against data loss if there is a double-disk failure or a loss of disk connectivity because the unaffected plex continues to serve data while you fix the cause of the failure. After you fix the affected plex, you can resynchronize the two plexes and re-establish the mirror relationship.
STORAGE ARCHITECTURE: RAID PROTECTION The Data ONTAP operating system supports two levels of RAID protection: RAID-DP® and RAID 4. RAID-DP technology can protect against double-disk failures or failures during reconstruction. RAID 4 can protect against single-disk failures. You assign the RAID level on a per-aggregate basis.
STORAGE ARCHITECTURE: DISKS Disks provide the basic unit of storage for storage systems that run the Data ONTAP operating system. Understanding how the Data ONTAP operating system uses and classifies disks helps you manage your storage more effectively.
Disk Qualification NetApp allows only qualified disks to be used with the Data ONTAP operating system. Qualification – Ensures quality and reliability – Is enforced by /etc/qual_devices Caution! Modifying the disk qualification requirement file can cause your storage system to halt.
DISK QUALIFICATION NetApp® storage systems support only disks that are qualified by NetApp. Disks must be purchased from NetApp or an approved reseller. Unqualified Disks The Data ONTAP operating system automatically detects unqualified disks. If you attempt to use an unqualified disk, the Data ONTAP operating system responds by issuing a “delay forced shutdown” warning, giving you 72 hours to remove and replace the unqualified disk before a forced system shutdown occurs in Data ONTAP 7.2.1 through 7.2.3. In Data ONTAP 7.2.4 and later, you will receive a console warning. In addition, when the Data ONTAP operating system detects an unqualified disk, it takes the following actions:
Provides notification through syslog entries, console messages, and the AutoSupport™ tool Generates an automatic error message and delayed forced shutdown if the /etc/qual_devices file is modified Marks unsupported drives as “unqualified” Disk Qualification
If you install a new disk drive into your disk shelf and the storage system responds with an unqualified disk error message, you must remove the disk and replace it with a qualified disk. To correct an unqualified disk error and avoid a forced shutdown, complete the following steps: 1. Remove any disk drives that were not provided by NetApp or an authorized NetApp vendor or reseller. 2. To update your list of qualified disks, download and install the most recent /etc/qual_devices file from http://now.netapp.com/NOW//tools/diskqual/. 3. If the unqualified disk error message persists after you install an up-to-date /etc/qual_devices file, try reinstalling the /etc/qual_devices file. 4. If the reinstallation fails, remove the unqualified disk and contact NetApp technical support.
SUPPORTED DISK CONNECTION ARCHITECTURES The Data ONTAP operating system supports six disk types: Fibre Channel-Arbitrated Loop (FC-AL), ATA, BSAS, SATA, SAS, and SSD, using two different disk connection architectures: FC-AL and SAS. For a specific configuration, the disk types that are supported depend on the storage system model, the disk shelf type, and the I/O modules that are installed in the system. FC and ATA disks are attached using the FC-AL disk connection architecture. SAS, BSAS, SATA, and SSD disks are attached using the SAS disk connection architecture. Generally, different disk types are not allowed within a single aggregate. However, the following exceptions to this rule apply when you are creating an aggregate or increasing the size of an aggregate: SATA and ATA disks are treated as the same disk type; SAS and FC disks are treated as the same disk type. NOTE: The Data ONTAP operating system also supports a LUN disk type over an FC connection in the NetApp V-Series systems.
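As a sketch (the aggregate name and disk IDs are hypothetical), one way to keep an aggregate on a single disk type is to name the member disks explicitly when you create it:
system> aggr create aggr_fcal -d 0a.16 0a.17 0a.18 0a.19
You can then use aggr status -r to confirm the type of every disk that was added.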
FC-AL Architecture FC and ATA disks connect through an FC-AL (Fibre Channel Arbitrated Loop) architecture with ESH (electronically switched hub) technology Uses FC and ATA disk types
FC-AL Device Names The system assigns the device ID automatically through the host_adapter and disk_id.
system> sysconfig -r
Aggregate aggr0 (online, raid_dp, redirect) (block checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)
RAID Disk  Device  HA  SHELF  BAY  CHAN  Pool  Type  RPM    Used (MB/blks)...
---------  ------  --  -----  ---  ----  ----  ----  -----  -----------------
dparity    0a.16   0a  1      0    FC:A        FCAL  10000  34000/69632000...
parity     0a.17   0a  1      1    FC:A        FCAL  10000  34000/69632000...
data       0a.18   0a  1      2    FC:A        FCAL  10000  34000/69632000...
FC-AL DEVICE NAMES Disks are numbered in all storage systems. Disk numbering allows you to: Interpret messages that are displayed on your screen, such as command output or error messages Quickly locate a disk that is associated with a displayed message To determine a disk ID, use the sysconfig -r, vol status -r, or aggr status -r command.
FC-AL DEVICE NAMES: HOST_ADAPTER Disks are numbered based on a combination of their host_adapter and device_id, represented as host_adapter.device_ID, as shown in the graphic. Host_adapter refers to the adapter number that is associated with the disk, and device ID refers to the logical loop ID of the disk.
FC-AL DEVICE NAMES: DISK_ID The table shows the numbering system for FC loop IDs. The following IDs of DS14 series shelves are reserved and are not used: 0-15, 30-31, 46-47, 62-63, 78-79, 94-95, 110-111. The table can be summarized by the following formula: DS14 Device ID = DS14 Shelf ID * 16 + Bay Number
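As a worked example of the formula, a disk in bay 5 of shelf 2 on adapter 0a (a hypothetical position, not one of the disks shown earlier) gets device ID 2 * 16 + 5 = 37 and therefore appears as 0a.37 in output such as sysconfig -r. The disk 0a.16 in the earlier output fits the same formula: shelf 1, bay 0 gives 1 * 16 + 0 = 16.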
The fcstat device_map Command Use the fcstat command to troubleshoot disks and shelves. Use the fcstat device_map command to display the relative physical position of the drives on an FC loop and the mapping of devices to shelves.
system> fcstat device_map
Loop Map for channel 0a:
Translated Map: Port Count 7
  29
Shelf mapping:
  Shelf 1: 29
Loop Map for channel 0b:
Translated Map: Port Count 7
  45
Shelf mapping:
  Shelf 2: 45
THE FCSTAT DEVICE_MAP COMMAND An FC loop is a logically closed loop from a frame transmission perspective. Consequently, signal integrity problems that are caused by a component upstream are seen as problem symptoms by components downstream. The relative physical position of drives on a loop is not necessarily related directly to the drives’ loop IDs (which are, in turn, determined by the drive shelf IDs). To determine the relative physical positions of drives on a loop, use the device_map subcommand. The device_map subcommand displays: The relative physical position of drives on the loop, as if the loop were one flat space The mapping of devices to shelves, thus enabling you to quickly correlate disk IDs with shelf tenancy Error codes can be found in the device map output such as BYP and XXX:
BYP (Drive Bypass) events are situations that cause the ESH to bypass a drive port, thus making the drive inaccessible to the host and isolating it from the loop. See this FAQ for more information: https://kb.netapp.com//index?page=content&id=3012395 XXX entries are slots that do not have any disks.
SAS Architecture Serial Attached SCSI (SAS) provides the affordability of SATA with the reliability of FC. SAS uses expanders – Expanders are switches – Expanders maintain point-to-point connections with disks
SAS ARCHITECTURE NetApp uses the term "stack" to refer to a collection of correctly wired and interconnected SAS shelves and adapters. The following guidelines apply:
Up to 10 shelves are supported per stack, except in the FAS2040 and FAS2050, which support up to 4 shelves per stack. The DS4243 and DS2246 cannot be mixed in the same stack. They can be mixed in the same system by connecting them to different SAS ports. If you connect them to different ports on the same adapter, the adapter will automatically use the correct link speed for each stack. When using the DS4243, SAS and SATA drives can be mixed in the same stack but not in the same shelf. You should not mix 15k and 10k RPM SAS drives in the same aggregate. While it is possible, this will very likely limit the performance you’ll see from the faster drives. NetApp extensively qualifies SAS cables for use with our SAS shelf family. SAS cables are a high-performance and critical component of the SAS architecture. Only official NetApp SAS cables are supported for use in SAS data path connections.
SAS Device Names The system assigns the device ID automatically through the host_adapter, shelf_id, and bay_id.
system> sysconfig -r
Aggregate aggr0 (online, raid_dp, redirect) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal)

      RAID Disk Device    HA  SHELF BAY CHAN Pool Type
      --------- --------  --  ----- --- ---- ---- ----
      dparity   0a.00.18  0a  00    18  SA:A      SAS
      parity    0a.00.19  0a  00    19  SA:A      SAS
      data      0a.00.20  0a  00    20  SA:A      SAS
SAS DEVICE NAMES Shelf IDs can range from 00 to 99. We recommend assigning intervals of 10 to each stack attached to a storage system. For example, the first stack would have IDs 10 through 19 reserved, while stack two would have IDs 20 through 29 reserved, and so on. Those IDs are reserved even if the stacks are not fully populated. For example, if the first stack has four shelves, the shelf IDs would be 10, 11, 12, and 13. The second stack would start at shelf ID 20, even though IDs 14 through 19 are not currently utilized. This helps you to easily identify which shelves are assigned to each stack. Unused IDs can be used for future stack expansion. Both the DS4243 and DS2246 support the hot addition of capacity, and additional shelves can be added to a running storage system without disruption.
Alternative Control Path SAS shelves like the DS4243 and DS2246 have multiple connections – Data paths are over standard SAS cables – Alternative control paths (ACPs) are over Ethernet connections
ACP (Ethernet) …
Data Paths (SAS)
ACP requires configuration (the system's setup command)
ALTERNATIVE CONTROL PATH Alternative control path (ACP) was introduced at the same time as the DS4243 for out-of-band management on SAS disk shelves. ACP gives you an alternate communications path to each of your disk shelves. It is completely separate from the SAS data path and provides options for nondisruptive recovery of shelf modules, including the ability to reset or power cycle an individual I/O module (IOM) or an entire domain (that is, all IOMs connected to the “A” shelf modules within a SAS stack). ACP gives the storage administrator the ability to power cycle the entire shelf as well. For redundancy, each shelf contains two IOMs, and each IOM has ACP connectivity. ACP technology enhances the ability of Data ONTAP® to automatically reset a misbehaving component in order to return it to a fully operational mode without disruption. ACP actively monitors the SAS data path for issues that can be corrected by a nondisruptive shelf module reset or power cycle. The proactive shelf recovery mechanism of Data ONTAP performs these actions automatically; this is enabled by default when using ACP. Although NetApp highly recommends it, ACP is not required. Because ACP is completely separate from the data path, the data path continues to function when ACP is not connected or not operational. Many of the ACP functions described can be performed manually when ACP is not present. If e0P is available, use this private management port instead of other ports.
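From the CLI, a quick way to confirm ACP is working is shown below; both the option and the command are standard 7-Mode commands, but the exact output varies by release:
system> options acp.enabled
– Shows whether ACP is enabled (it can also be enabled through the setup command).
system> storage show acp
– Displays the ACP connectivity status of the shelf modules in each stack.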
Disk Ownership Disks are assigned to one system controller. Disk ownership is either: – Hardware-based: determined by the slot position of the host bus adapter (HBA) and shelf module port – Software-based: determined by the storage system administrator
Storage Systems – Software Disk Ownership:
FAS6200 series
FAS6000 series
FAS3200 series
FAS3100 series
FAS3000 series
FAS2000 series
DISK OWNERSHIP Disk ownership can be hardware-based or software-based. Hardware-Based Ownership In hardware-based disk ownership, disk ownership and pool membership are determined by the slot position of the host bus adapter (HBA) or onboard port and the shelf module port where the HBA is connected. NOTE: Only the FAS3020 and FAS3050 support hardware-based ownership. The FAS3040 and FAS3070 support only software-based ownership. Software-Based Ownership In software-based disk ownership, disk ownership and pool membership are determined by the storage system administrator. The Data ONTAP operating system might also set disk ownership and pool membership automatically, depending on the initial configuration. Slot position and shelf module port do not affect disk ownership. The Data ONTAP 8.0 operating system supports only software-based ownership.
Hardware-Based Ownership Determined by two conditions:
1. How a storage system is configured
2. How the disk shelves are attached to the storage system
A standalone system owns all disks that are directly attached to it. If part of a high-availability configuration: Local node owns the disks connected to the ESH A channel Partner node owns the disks connected to the ESH B channel
HARDWARE-BASED OWNERSHIP The Data ONTAP 7.3.x operating system supports hardware-based ownership on some hardware platforms. The Data ONTAP 8.0.x operating system does not support hardware-based ownership on any hardware platform.
Software-Based Ownership Ownership is determined by the system administrator: To view current ownership:
system> disk show -v
  DISK       OWNER                  POOL   SERIAL NUMBER
  ---------  ---------------        -----  -------------
  0b.43      Not Owned              NONE   41229013
  ...
  0b.29      system (84165672)      Pool0  41229011
  ...
To view all disks without an owner:
system> disk show -n
  DISK       OWNER
  ---------  ---------------
  0b.43      Not Owned
  ...
Software-Based Ownership: Disk Assign To assign disk ownership:
system> disk assign {device_list|all|[-T storage_type] -n count|auto}... – device_list is the disk IDs of the unassigned disks – -T is ATA, FCAL, LUN, SAS, or SATA
To assign a specific set of disks:
system> disk assign 0b.43 0b.41 0b.39
To assign all unassigned disks: system> disk assign all
To unassign disks:
Specify the device IDs that you want to work with.
system> disk assign 0b.39 -s unowned -f – Use -s to specify the sysid to take ownership. – Use -f to force assignment of previously assigned disks. NOTE: Unassign only hot-spare disks.
Software-Based Ownership: Auto Assign Automatic assignment option: system> options disk.auto_assign
This option specifies whether disks are automatically assigned on systems with software disk ownership. The default is on. The Data ONTAP operating system assigns unassigned disks to the system and pool based upon the disk loop. Automatic assignment is invoked: – 10 minutes after boot – Every five minutes thereafter To trigger assignment manually: system> disk assign auto
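For example, an administrator who prefers to assign ownership manually can check and then disable the option (an illustrative sequence, not part of the course exercises):
system> options disk.auto_assign
– Displays the current setting.
system> options disk.auto_assign off
– Disables automatic assignment; new disks must then be assigned with disk assign.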
Disk Selection When creating an aggregate, the Data ONTAP operating system selects disks: – With the same speed – That match the speed of existing disks
DISK SELECTION If disks with different speeds are present on a NetApp system (for example, 10,000 RPM and 15,000 RPM disks), the Data ONTAP operating system attempts to avoid mixing them in one aggregate or traditional volume. By default, the Data ONTAP operating system selects disks:
With the same speed when creating an aggregate or traditional volume in response to the following commands:
– aggr create
– vol create
That match the speed of existing disks in an aggregate or traditional volume that requires expansion or mirroring in response to the following commands:
– aggr add
– aggr mirror
– vol add
– vol mirror
If you use the -d option to specify a list of disks for commands that add disks, the operation fails if disk speeds differ from each other or differ from the speed of disks already included in the aggregate or traditional volume. The commands for which the -d option fails in this case are aggr create, aggr add, aggr mirror, vol create, vol add, and vol mirror. For example, if you enter aggr create vol4 -d 9b.25 9b.26 9b.27, and two of the disks are different speeds, the operation fails.
Using Multiple Disk Types in an Aggregate Drives in an aggregate can be: – Different speeds (not recommended) – On the same shelf or on different shelves
Not all drive types can be mixed within an aggregate: – FC and SAS can be mixed (not recommended) – FC and SATA or SAS and SATA cannot be mixed
USING MULTIPLE DISK TYPES IN AN AGGREGATE The storage system allows the use of various disk sizes. This sometimes occurs when disks are purchased after the original equipment is set up. However, different-sized disks require different versions of the Data ONTAP operating system and different disk shelves. For specific information about your system, see the System Configuration Guide on the NetApp Support site. You must ensure that parity and hot-spare disks are as large as the largest disk in a RAID group so that they can support all of the stripes on the data disks. When creating RAID groups with disks of different sizes, the Data ONTAP operating system assigns parity to the largest disk. If you later add larger disks to the RAID group, the Data ONTAP operating system reassigns parity to the largest of those disks. NOTE: Although mixing disk sizes in a volume is a supported configuration, this practice can lead to suboptimal volume performance. NetApp recommends that all disks in a volume be the same size.
Spare Disks Spare disks are used to: – Increase aggregate capacity – Replace failed disks
Disks must be zeroed before use: – Disks are automatically zeroed when they are brought into use. – NetApp recommends zeroing disks before use: system> disk zero spares
SPARE DISKS You can add spare disks to an aggregate to increase its capacity. If the spare is larger than the other data disks, it becomes the parity disk. However, it does not use the excess capacity unless another disk of similar size is added. The second largest additional disk has full use of additional capacity. Replacing Failed Disks with Spares If a disk fails, a spare disk is automatically used to replace the failed disk. If the spare that is used is larger than the failed disk that is being replaced, the excess capacity of the larger disk is not used. Zeroing Used Disks After you assign ownership to a disk, you can add that disk to an aggregate on the storage system that owns it, or leave it as a spare disk on that storage system. If the disk has been used previously in another aggregate, you should use the disk zero spares command to zero the disk to reduce delays when the disk is used.
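Before adding disks to an aggregate, a typical sequence is to confirm which spares exist and zero them ahead of time (an illustrative example):
system> aggr status -s
– Lists the hot-spare disks available on the system.
system> disk zero spares
– Zeroes all non-zeroed spare disks in the background, so later aggregate operations are not delayed.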
RAID Groups RAID groups are a collection of data disks and parity disks. RAID groups provide protection through parity. The Data ONTAP operating system organizes disks into RAID groups. The Data ONTAP operating system supports: – RAID 4 – RAID-DP technology
RAID GROUPS To understand how to manage disks and volumes, it is important to first understand the concept of RAID. A RAID group includes several disks that are linked together in a storage system. Although there are various implementations of RAID, the Data ONTAP operating system supports only RAID 4 and RAID-DP technology. In the Data ONTAP operating system, each RAID 4 group consists of one parity disk and one or more data disks. The storage system assigns the role of parity disk to the largest disk in the RAID group. When a data disk fails, the storage system identifies the data on the failed disk and rebuilds a hot-spare disk with that data. NOTE: If a parity disk fails, it can be rebuilt from data on the data disks.
RAID 4 Technology RAID 4 protects against data loss that results from a single-disk failure in a RAID group. A RAID 4 group requires a minimum of two disks: – One parity disk – One data disk
RAID 4 TECHNOLOGY RAID 4 protects against data loss due to a single-disk failure within a RAID group. Each RAID 4 group contains the following: One parity disk (assigned to the largest disk in the RAID group) One or more data disks Using RAID 4, if one disk block goes bad, the parity disk in that disk's RAID group is used to recalculate the data in the failed block, and then the block is mapped to a new location on the disk. If an entire disk fails, the parity disk prevents any data from being lost. When the failed disk is replaced, the parity disk is used to automatically recalculate its contents. This is sometimes referred to as row parity.
RAID-DP Technology RAID-DP technology protects against data loss that results from double-disk failures in a RAID group. A RAID-DP group requires a minimum of three disks: – One parity disk – One double-parity disk – One data disk
RAID-DP TECHNOLOGY RAID-DP technology protects against data loss due to a double-disk failure within a RAID group. Each RAID-DP group contains the following: One data disk One parity disk One double-parity disk RAID-DP technology employs the traditional RAID 4 horizontal row parity. However, in RAID-DP technology, a diagonal parity stripe is calculated and committed to the disks when the row parity is written. For more information about RAID-DP processes, see Technical Report 3298, found at http://www.netapp.com/library/tr/3298.pdf.
RAID GROUP SIZE RAID groups can include anywhere from 2 to 28 disks, depending on the platform and RAID type. For best performance and reliability, please see Technical Report 3437 and Technical Report 3838.
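To illustrate how RAID group size is controlled (the aggregate name and values are examples only, chosen within the 2-to-28 range above):
system> aggr create aggr2 -t raid_dp -r 16 32
– Creates an aggregate of 32 disks organized into RAID-DP groups of 16 disks each.
system> aggr options aggr2 raidsize 18
– Changes the RAID group size that is used when disks are added later.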
Data Validation NetApp uses various methods to validate data: – RAID-level checksums – Media scrub process – RAID scrub process
RAID-level checksums enhance data protection and reliability.
– The WAFL® (Write Anywhere File Layout) file system ensures that real-time validation occurs. – Two disk checksums: Block checksum (BCS) Zone checksum (ZCS), used only with V-Series
BCS METHOD WITH FC The block checksum (BCS) method provides 64 bytes of checksum data for every 4 KB of data. Within BCS, there are two possibilities: 520 bytes per sector of FC disks or 512 bytes per sector of ATA disks. FC disks: With 520 bytes per sector, one block is composed of eight 520-byte sectors. The eighth sector within the block is composed of both data and checksum data. The entire block's space is 4160 bytes (8 sectors x 520 bytes = 4160 bytes).
BCS METHOD WITH ATA ATA disks: With 512 bytes per sector, one block is composed of nine 512-byte sectors. Eight of these sectors are used for data, and the ninth sector is used to store the 64-byte checksum. The entire block’s space is 4096 bytes. About eight-ninths of the disk is used for data.
DATA VALIDATION PROCESSES Media scrubbing checks disk blocks for physical errors. Disk scrubbing checks disk blocks on all disks in the storage system for media errors and logical parity errors. If the Data ONTAP operating system identifies media errors or inconsistencies, it repairs them by reconstructing the data from parity data, and then rewriting the data back to the data disk. Disk scrubbing reduces the chance of data loss from media errors that occur during reconstruction.
Media and RAID Scrubs A media scrub: Runs in the background when the storage system is not busy Looks for unreadable sectors at the lowest level (0’s and 1’s) Is unaware of the data stored in a sector Takes corrective action when it finds too many unreadable blocks on a disk (sends warnings or fails the disk, depending on findings)
A RAID scrub: Is enabled by default Can be scheduled or disabled NOTE: Disabling is not recommended. Uses RAID checksums Reads a block and then verifies the data On finding a discrepancy between the RAID checksum and the data read, re-creates the data from parity and writes it back to the block Reads every block in an aggregate to ensure that data has not become stale, even if s haven’t accessed the data
MEDIA AND RAID SCRUBS Storage systems use disk scrubbing to protect data from media errors or bad sectors on a disk. Each disk in a RAID group is scanned for errors. If errors are identified, they are repaired by reconstructing data from parity and rewriting the data. Without this process, a disk media error could cause a multiple-disk failure, causing the storage system to run in degraded mode. Automatic RAID scrub is enabled by default. If you prefer to control the timing of RAID scrubs, you can turn off the automatic scrubs. You can also manually start and stop disk scrubbing regardless of the current value (on or off) of the raid.scrub.enable option.
ERROR MESSAGE: Inconsistent parity on volume volume_name, RAID group n, stripe #n. Rewriting bad parity block on volume volume_name, RAID group n.
CAUSE: Inconsistent parity block
ERROR MESSAGE: Rewriting bad parity block on volume volume_name, RAID group n, stripe #n.
CAUSE: Media error on the parity disk or a data disk
ERROR MESSAGE: Multiple bad blocks found on volume volume_name, RAID group n, stripe #n.
CAUSE: More than one bad block
Summary messages at the end of a scrub include: Scrub found n parity inconsistencies. Scrub found n media errors. Disk scrubbing finished.
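The commands and options below illustrate how RAID scrubs are typically controlled in 7-Mode (the aggregate name is an example):
system> options raid.scrub.enable
– Displays or sets whether automatic scrubs run.
system> options raid.scrub.schedule
– Displays or sets when automatic scrubs run.
system> aggr scrub start aggr1
– Starts a manual scrub of aggr1.
system> aggr scrub status
– Shows the progress of running scrubs.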
Disk Failure and Physical Removal To fail a disk: system> disk fail device_id To unfail a disk: system> priv set advanced system*> disk unfail device_id To unload a disk so that it can be physically removed: system> disk remove device_id The disk is now ready to be pulled from the shelf.
Disk Sanitization A way to protect sensitive data by making recovery of the data impossible The process of physically obliterating data by overwriting disks with three successive byte patterns or with random data Users can specify the byte patterns or use the Data ONTAP default pattern.
DISK SANITIZATION Disk sanitization is the process of physically obliterating data by overwriting disks with specified byte patterns or with random data so that recovery of the original data is impossible. Use the disk sanitize command to ensure that no one can recover the data on the disks. The disk sanitize command uses three successive default or user-specified byte overwrite patterns for up to seven cycles per operation. Depending on the disk capacity, the patterns, and the number of cycles, the process can require several hours. Sanitization runs in the background. You can start, stop, and display the status of the sanitization process.
Disk Sanitization Commands To license the storage system for sanitization: system> license add XXXXXX To verify the disks to be sanitized: system> sysconfig -r To start the sanitization operation: system> disk sanitize start -r -c 3 device_list – Use -r to indicate that a random pattern should overwrite the disks. – Use -c to specify the number of times to run the operation (maximum is seven). – Use -p to provide a custom overwrite pattern. – Use device_list to specify a space-separated list of disk IDs.
DISK SANITIZATION COMMANDS You can monitor the status of the sanitization process by using the /etc/sanitized_disks and /etc/sanitization.log files: Status for the sanitization process is written to the /etc/sanitization.log file every 15 minutes. The /etc/sanitized_disks file contains the serial numbers of all drives that have been successfully sanitized. For every invocation of the disk sanitize start command, the serial numbers of the newly sanitized disks are appended to the file. You can verify that all of the disks were successfully sanitized by checking the /etc/sanitized_disks file.
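The related subcommands below illustrate monitoring and completing the operation (disk IDs are placeholders):
system> disk sanitize status
– Reports the progress of any running sanitization operations.
system> disk sanitize release device_list
– Returns sanitized disks to the spare pool so that they can be reused.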
Degraded Mode Degraded mode occurs when a disk fails in a RAID group. During degraded mode: – Data is still available – Performance is less than optimal Data must be recalculated from parity until the failed disk is replaced. CPU usage increases to calculate data from parity.
– The failed disk (or disks for RAID-DP) will be rebuilt on a spare drive (if available)
DEGRADED MODE If one disk in a RAID group fails, the system operates in “degraded” mode. In degraded mode, the system does not operate optimally, but no data is lost. Within a RAID 4 group, if a second disk fails, data is lost; within a RAID-DP group, if a third disk fails, data is lost. The following AutoSupport message will be broadcast: [monitor.brokenDisk.notice:notice]. If the maximum number of disks have failed in a RAID group (two for RAID-DP, one for RAID 4) and there are no suitable spare disks available for reconstruction, the storage system automatically shuts down in the period of time specified by the raid.timeout option. The default timeout value is 24 hours. See this FAQ for more information: https://kb.netapp.com//index?page=content&id=2013508. Therefore, you should replace failed disks and used hot-spare disks as soon as possible. You can use the options raid.timeout command to modify the timeout interval. However, keep in mind that, as the timeout interval increases, the risk of subsequent disk failures also increases.
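For example, to view or change the shutdown timeout (the 48-hour value is illustrative only; weigh the risk described above before raising it):
system> options raid.timeout
– Displays the current timeout, in hours (default is 24).
system> options raid.timeout 48
– Extends the timeout to 48 hours.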
Hot-Swapping: Replacing Failed Disks Hot-swapping is the process of removing or installing a disk drive while the system is running. Hot-swapping allows for: – Minimal interruption – The addition of new disks
Removing two disks from a RAID 4 group results in: – Double-disk failure – Data loss
Removing two disks from a RAID-DP group results in: – Double-degraded mode – No data loss
REPLACING FAILED DISKS When a failed disk is replaced, the size of the new disk must be equal to or larger than the usable space of the replaced disk to accommodate all of the data blocks on the failed disk. If the usable space on the replacement disk is larger than the failed disk, the replacement disk is right-sized to the capacity of the failed disk. The extra space on the disk is not usable.
Disk Replacement To replace a data disk with a spare disk: system> disk replace start device_id spare_device_id system> disk replace start 0a.21 0a.23
[Figure: a RAID group with a parity disk and disks 0a.20 through 0a.23; 0a.21 is the target data disk being replaced and 0a.23 is the spare disk.]
To check the status of a replace operation: system> disk replace status
Aggregates Aggregates logically contain flexible volumes (FlexVol® volumes). Aggregates can be 32-bit or 64-bit. An aggregate name must: – Begin with a letter or the underscore character (_) – Contain only letters, digits, and underscore characters – Contain no more than 255 characters
AGGREGATES To support the differing security, backup, performance, and data-sharing requirements of users, physical data storage resources on your storage system can be grouped into one or more aggregates. Aggregates provide storage for the volume or volumes they contain. Each aggregate has its own RAID configuration, plex structure, and set of assigned disks. When you create an aggregate without an associated traditional volume, you can use it to hold one or more FlexVol volumes (logical file systems that share the physical storage resources, RAID configuration, and plex structure of the aggregate container). When you create an aggregate with a traditional volume tightly bound, the aggregate can contain only that volume. A single storage system supports up to 100 aggregates (including traditional volumes). Aggregate Names Aggregate names must follow the naming conventions shown here. The same rules apply to volume names.
To create an aggregate, use one of the following:
– The CLI: system> aggr create ...
– NetApp System Manager: the Aggregate Wizard
Know the following information:
– Aggregate name (required)
– Aggregate type (32-bit is default)
– RAID type (RAID-DP is default)
– RAID group size
– Disk selection method
– Disk size
– Number of disks, including parity (required)
To create an aggregate: system> aggr create aggr1 3
Using the CLI to Create an Aggregate To create a 64-bit aggregate: system> aggr create aggr -B 64 24
– The 64-bit aggregate, which is called aggr, has 24 disks. – By default, the aggregate uses RAID-DP technology. – The command succeeds only if 24 disks (spares) are available. To create a 32-bit aggregate: system> aggr create aggr -B 32 24
USING THE CLI TO CREATE AN AGGREGATE For more information about 64-bit aggregates, see Technical Report 3786, found at http://www.netapp.com/us/library/technical-reports/tr-3786.html.
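If specific spare disks should be used, the -d flag can be combined with the format flag shown above; the aggregate name and disk IDs here are placeholders:
system> aggr create aggr2 -B 64 -d 0c.00.10 0c.00.11 0c.00.12 0c.00.13 0c.00.14
– Creates a 64-bit RAID-DP aggregate from the five listed spare disks.
As noted in the disk selection discussion, the command fails if the listed disks have mismatched speeds.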
Aggregate Space Allocation: Concerns
– How the Data ONTAP operating system allocates space
– How you balance conflicting goals: use space efficiently and protect data
To calculate space allocation, use 5 easy steps. Example: system> aggr create aggr1 5@847 – 1-TB ATA disks are used.
Aggregate Space Allocation: Binary Format 2. Disks are calculated differently: Disks are originally calculated in decimal format, where 1 GB = 1000 MB. A 1-TB disk is in decimal format: 1000000 MB. When the Data ONTAP operating system analyzes a disk, it computes it in binary format, where 1 GB = 1024 MB.
1,000,000 MB ÷ 1024 MB per GB = 976.56 GB
[Figure: of the 1-TB disk, data occupies the range from 0 to roughly 977 GB.]
system> aggr status -r aggr1
...
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM  Used (MB/blks) Phys (MB/blks)
--------- ------ -- ----- --- ---- ---- ---- ---- -------------- --------------
data      2b.52  2b 3     4   FC:A -    ATA  7200 847555/...     847827/...
AGGREGATE SPACE ALLOCATION: SIZE VARIANCE NOTE: ATA drives have only 512 bytes per sector and lose an additional one-ninth (about 11%) of capacity to block checksum allocation. The following table is an abbreviated list of right-sized capacities for Data ONTAP 8.0.1 7-Mode. Please see the Storage Management Guide of the appropriate Data ONTAP operating system version for a complete list.
Aggregate Space Allocation: Counted Disks 4. Disks to count are different per OS version: – In Data ONTAP operating systems earlier than version 7.3, aggregate size is calculated using all of the aggregate’s disks.
[Figure: a RAID-DP group with three data disks, one parity disk, and one double-parity disk.]
– Beginning with the Data ONTAP 7.3 operating system, aggregate size is calculated using only the aggregate’s data disks.
Space Usage of an Aggregate To show how much space is available in an aggregate: system> aggr show_space aggr In increments of GB: system> aggr show_space -g aggr1
Space available after right-sizing and allocation of kernel space:
Aggregate 'aggr1'
    Total space  WAFL reserve  Snap reserve  Usable space  BSR NVLOG  A-SIS  Smtape
         2483GB         248GB           0GB        2234GB        0GB    0GB     0GB
This aggregate contains no volume
Aggregate  Total space  Snap reserve  WAFL reserve ...
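Tying the steps together for the example above (aggr create aggr1 5@847 with RAID-DP): two of the five disks hold parity, so only three data disks count. 3 x 847,555 MB ÷ 1024 is approximately 2,483 GB of total space, the 10% WAFL reserve takes about 248 GB of that, and roughly 2,234 GB remains usable, which matches the aggr show_space output shown here.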
Module Summary In this module, you should have learned to: Describe Data ONTAP RAID technology Identify a disk in a disk shelf based on its ID Execute commands to determine a disk ID Identify a hot-spare disk in a FAS system Describe the effects of using multiple disk types Create a 32-bit and a 64-bit aggregate Execute aggregate commands in the Data ONTAP operating system Calculate usable disk space
Module Objectives By the end of this module, you should be able to: Explain the concepts related to volumes in the Data ONTAP® operating system Define and create a flexible volume Execute vol commands
VOLUMES Volumes are file systems that hold data that is accessible by means of one or more of the access protocols that the Data ONTAP operating system supports, including NFS, CIFS, HTTP, Web-based Distributed Authoring and Versioning (WebDAV), FTP, Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI. You can create one or more Snapshot® copies of the data in a volume so that multiple, space-efficient, point-in-time images of the data can be maintained for backup and error recovery. The Data ONTAP operating system limits a storage system to only 100 aggregates, but within those aggregates you can create up to 500 traditional and flexible volumes. For FAS2040 and FAS3050 systems, the limit is 200 volumes per storage system.
Rules for Volumes A volume name must: – Begin with either a letter or underscore character (_) – Contain only letters, digits, and underscore characters – Contain no more than 255 characters
The maximum size of a flexible volume is limited by the maximum containing aggregate size with a space-guaranteed volume.
One root volume per storage system Contained in an aggregate: – Data ONTAP 7.3 and 8.0 7-Mode operating systems support only 32-bit aggregates – Data ONTAP 8.0.1 7-Mode operating system and later supports both 32-bit and 64-bit aggregates
ROOT VOLUMES The storage system contains a root volume that was created when the system was initially set up. The default root volume name is /vol/vol0. Storage systems on which the Data ONTAP 7.0 operating system or later was preinstalled have a FlexVol volume for a root volume. Systems that run earlier versions of the Data ONTAP operating system have a traditional root volume. Each storage system has only one root volume, but the designated root volume can be changed. The root volume is used to start up the storage system. It is the only volume with root attributes, which means that its /etc directory is used for configuration information.
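If the designated root volume needs to be changed, the root option can be set on another volume; the change takes effect at the next reboot (the volume name is an example):
system> vol options vol1 root
– Marks vol1 to become the root volume after the next reboot of the storage system.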
VOLUMES ACCESS Volume path names begin with /vol. For example: If the name of a volume is vol0, the volume path is /vol/vol0. If a volume named users contains a directory named cheryl, the volume path is /vol/users/cheryl. NOTE: There is no directory called /vol. Rather, /vol is a special virtual root path under which the storage appliance mounts directories. You cannot mount /vol to view all of the volumes on the storage system; you must mount each volume separately.
Flexible Volumes Flexible volumes allow you to manage the logical layer of the file system independently of the physical layer of storage. Multiple flexible volumes can exist within a single aggregate.
FLEXIBLE VOLUMES A flexible volume (also called a FlexVol volume) is a volume that is loosely coupled to its container aggregate. Because the volume is managed separately from the aggregate, you can create small FlexVol volumes (20 MB or larger), and then increase or decrease the size of FlexVol volumes in increments as small as 4 KB. Advantages of flexible volumes:
You can create flexible volumes almost instantaneously. These volumes:
– Can be as small as 20 MB
– Are limited to aggregate capacity (if guaranteed)
– Can be as large as the volume capacity that is supported for your storage system (not guaranteed)
You can increase and decrease a flexible volume while online, allowing you to:
– Resize without disruption
– Resize in any increment (as small as 4 KB)
– Resize quickly
Creating a Flexible Volume To create a flexible volume using the command-line interface (CLI): system> vol create volname aggrname size[k|m|g|t]
To create a flexible volume using NetApp System Manager, use the Volume Wizard. When creating a flexible volume, you should have the following information available:
– Volume name (required)
– Aggregate name (required)
– Size (required)
– Language
– Space guarantee settings (discussed in Module 13)
CREATING A FLEXIBLE VOLUME When you create a FlexVol volume, you must provide the following information:
A name for the volume
The name of the container aggregate
The size of a FlexVol volume must be at least 20 MB and no more than 16 TB (or whatever is the largest size your system configuration supports).
In addition, you can provide the following optional FlexVol volume values:
Language (by default, the language of the root volume)
Space-guarantee setting for the new volume
USING THE CLI TO CREATE A FLEXIBLE VOLUME To manage traditional and flexible volumes, use the vol commands. The majority of the vol commands work on traditional as well as flexible volumes. For a complete list of all the vol commands, see the system documentation. Language specifies the language that you want to use on this volume. The default language is the same as that set for the root volume. NOTE: It is strongly recommended that all volumes have the same language as the root volume, and that you set the volume language at volume creation time. Changing the language of an existing volume can cause some files to become inaccessible.
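For example, a volume can be created with an explicit language, and the setting can be checked afterward (the names and size are placeholders):
system> vol create vol1 -l en_US aggr1 100g
– Creates a 100-GB FlexVol volume in aggr1 with the en_US language.
system> vol lang vol1
– Displays the language currently set on vol1.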
Qtrees A qtree is: – A logically defined file system within a volume – A special subdirectory at the root of a volume – Viewed as a directory by clients
Qtrees allow the administrator to: – Further partition data within a volume – Establish unique quotas for users, restricting space usage – Perform backup and recovery with SnapVault® software – Perform logical mirroring with SnapMirror® software
QTREES You might consider creating a qtree for the following reasons:
You can easily create qtrees for managing and partitioning data within a volume. You can create a qtree to assign user-based or workgroup-based usage quotas (soft or hard) and limit the amount of storage space that a specific user or group of users can consume on the qtree to which they have access. Creating Qtrees When you want to group files without creating a volume, you can create qtrees instead. When you create qtrees, you can group files using any combination of the following criteria: Security style Oplocks setting Quota limit Qtree Limitations The primary limitation of qtrees is that a maximum of 4,995 qtrees are allowed per volume on a storage system. NOTE: When you enter a df command with a qtree path name on a UNIX® client, the command displays the smaller of the client file system limit or the storage system disk space, making the qtree look fuller than it actually is.
CLI: QTREE MANAGEMENT You can back up individual qtrees to: Add flexibility to your backup schedules Modularize backups by backing up only one set of qtrees at a time Limit the size of each backup to one tape Many products with NetApp software (such as SnapMirror® and SnapVault®) are “qtree-aware.” When you work at the qtree level, because you are working in a smaller increment than the entire volume, you can back up and recover files quickly.
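Typical qtree commands look like the following (the volume and qtree names are placeholders):
system> qtree create /vol/vol1/eng
– Creates a qtree named eng in vol1.
system> qtree security /vol/vol1/eng unix
– Sets the security style of the qtree.
system> qtree status vol1
– Lists the qtrees in vol1 with their security style and oplocks setting.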
Module Summary In this module, you should have learned to: Explain the concepts related to volumes in the Data ONTAP operating system Define and create a flexible volume Execute vol commands
WRITE REQUEST DATA FLOW: WRITE BUFFER Write requests are received from clients. Each write request is stored in a buffer in memory. A copy of each request is made in the NVLOG. The WAFL file system acknowledges receipt as requests are received.
CONSISTENCY POINT A consistency point (CP) is a completely self-consistent image of the entire file system and is not actually accomplished until the data has been written to disk and a new root inode is determined. Although CPs occur for many reasons, a few of the major reasons are:
Half of the nonvolatile RAM (NVRAM) card is full 10 seconds elapse A Snapshot copy is created The system is halted
CPs in the Data ONTAP Operating System For a CP, the Data ONTAP operating system flushes writes to disk – It always writes to new data blocks. – The volume is always consistent on the disk.
CPs IN THE DATA ONTAP OPERATING SYSTEM At least once every 10 seconds, the WAFL file system generates a CP (an internal Snapshot copy) so that disks contain a completely self-consistent version of the file system. When the storage system boots, the WAFL file system always uses the most recent CP on the disks, so you don’t have to spend time checking the file system, even after power loss or hardware failure. The storage system boots in a minute or two, with most of the boot time devoted to spinning up disk drives and checking system memory. The storage system uses battery-backed NVRAM to avoid losing data write requests that might have occurred after the most recent CP. During a normal system shutdown, the storage system turns off protocol services, flushes all cached operations to disk, and turns off the NVRAM. When the storage system restarts after a power loss or hardware failure, it replays into system RAM any protocol requests stored in NVRAM that are not on the disk. To view the CP types that the storage system is currently using, use the sysstat -x 1 command. CPs triggered by the timer, a Snapshot copy, or internal synchronization are normal. Other types of CPs can occur from time to time. Atomic Operations An atomic operation is actually a set of operations that can be combined so that they appear to the rest of the system to be a single operation, with only two possible outcomes: success or failure. For an operation to be atomic, the following conditions must be met: 1. Until the entire set of operations is complete, no other process can be “aware” of the changes being made. 2. If any single operation fails, then the entire set of operations fails and the system state is restored to its state prior to the start of any operations. Source: http://en.wikipedia.org/wiki/Atomic_operation
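When you run the command mentioned above, the CP ty column indicates what triggered each CP. Commonly seen codes (as documented for 7-Mode sysstat) include T for a timer-triggered CP, S for a Snapshot copy, F for a full NVLog, and B for back-to-back CPs:
system> sysstat -x 1
– Prints one line of statistics per second; a steady stream of B codes usually indicates a write-bound system.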
WRITE REQUEST DATA FLOW: WAFL TO RAID The WAFL file system provides short response times to write requests by saving a copy of each write request in system memory and battery-backed NVRAM, and immediately sending acknowledgments. This process is different from traditional servers that must write requests to the disk before acknowledging them. The WAFL file system delays writing data to the disk, which provides more time to collect multiple write requests and determine how to optimize storage of data across multiple disks in a RAID group. Because NVRAM is battery-backed, you don’t have to worry about losing data. In the WAFL file system: Data has no fixed location except in the superblock. Metadata is stored in files. All data is stored in files. Layouts can always be optimized. By combining batch writes, the WAFL file system:
Allows the Data ONTAP operating system to convert multiple small file writes into one sequential disk write. Distributes data across all disks in a large array, meaning no overloaded disks or hotspots (disks that might be utilized more than other disks in an array).
CPs from the WAFL File System to RAID The RAID layer calculates the parity of the data: – To protect it from one or more disk failures – To protect stripes of data
The RAID layer calculates checksums, stored in the block or zone method. If a data disk fails, the missing information can be calculated from parity. The storage system can be configured in one of two ways: – RAID 4: The system can recover from one disk failure in the RAID group. – RAID-DP®: The system can recover from up to two disk failures in the RAID group.
CPs FROM THE WAFL FILE SYSTEM TO RAID The WAFL file system then hands off data to the RAID subsystem, which calculates parity and then passes the data and parity to the storage layer, where the data is committed to the disks. RAID uses parity to reconstruct broken disks. Parity scrubs, which proactively identify and solve problems, are performed at the RAID level using checksum data.
WRITE REQUEST DATA FLOW: RAID TO STORAGE Storage drivers move data between system memory and storage adapters, and ultimately to disks. The disk driver component reassembles writes into larger I/O operations and also monitors which disks have failed. The SCSI driver creates the appropriate SCSI commands to synchronize with the reads and writes that it receives.
CPs from RAID to Storage The storage layer commits data and parity to the physical disks. The root inode is updated to point to the new file inodes on the disk. NVRAM is flushed and made available. The CP is now complete.
WRITE REQUEST DATA FLOW: STORAGE WRITES The storage layer transfers data to physical disks. After data is written to the disks, the root inode is updated, a CP is created, and the NVRAM bank is cleared.
NVRAM The Data ONTAP operating system writes data to disk from system memory: – NVRAM is never read during normal write operations. – NVRAM is backed up with a battery.
NVRAM NVRAM is best viewed as a log. This log stores a subset of incoming file actions. When a request comes in, two things happen:
The request gets logged to NVRAM. NVRAM is not read during normal processing. It is simply a log of requests for action (including any data necessary, such as the contents of a write request). The request is acted upon. The storage system's main memory is used for processing requests. Buffers are read from the network and from the disk and processed according to the directions that came in as CIFS or NFS requests. NVRAM holds the instructions that are necessary if the same actions need to be repeated. If the storage system does not crash, the NVRAM eventually is cleared without ever being read back. If the storage system crashes, the data from NVRAM is processed as if the storage system were receiving those same requests over the wire again. The same response is made by the storage system for the request in NVRAM, just as if it had come in through the network again.
Read Requests Every time a read request is received, the WAFL file system does one of two things: – Reads the data from the system memory, also known as the “cache” – Reads the data from the disks
The cache is populated by: – Data recently read from disk – Data recently written to disk
READ REQUESTS The Data ONTAP operating system includes several built-in, read-ahead algorithms. These algorithms are based on patterns of usage, which helps ensure that the read-ahead cache is used efficiently. Five Steps in a Read:
1. The network layer receives an incoming read request (read requests are not logged to NVRAM).
2. If the requested data is located in cache, it is returned immediately to the requesting client.
3. If the requested data is not located in cache, the WAFL file system initiates a read request from the disk.
4. Requested blocks and intelligently chosen read-ahead data are sent to cache.
5. The requested data is sent to the requesting client.
NOTE: In the read process, cache is used to refer to system memory.
READ REQUEST DATA FLOW: READ FROM DISK Read requests that cannot be satisfied from the read cache are retrieved from the disk. The read cache is then updated with the new disk information for subsequent read requests.
READ REQUEST DATA FLOW: CACHE When a read request is received from a client, the WAFL file system determines whether to read data from the disk or respond to the request using the cache buffers. The cache can include data that was recently written to or read from the disk.
Module Summary In this module, you should have learned to: Describe how data is written to and read from a WAFL file system on a volume Explain the WAFL file system concepts, including CPs, RAID management, and storage levels Describe how RAID is used to protect disk data Explain how the WAFL file system processes write and read requests
Module Objectives By the end of this module, you should be able to: Restrict administrative access Restrict console and NetApp® System Manager access Configure a client machine as an administration host to manage a storage system
Storage System Access Exercise careful attention when you set up administrative access. – Limit who has administrative access – Limit where administrators can gain access
To secure your system:
– Ensure a secure configuration
– Manage administrative users
– Communicate securely with the storage system
– Guard physical access
NOTE: The Data ONTAP 8.0 operating system and later default to more secure settings than previous versions
Securing a NetApp Storage System Use secureadmin to enable Secure Shell (SSH) and Secure Sockets Layer (SSL) and to verify that the following settings are set:
system> options ssh.enable on
system> options ssh2.enable on
system> options ssh.passwd_auth.enable on
system> options ssh.pubkey_auth.enable on
NOTE: These steps were performed during the discussion of configuring a storage system with NetApp System Manager and the command-line interface (CLI).
Securing a NetApp Storage System Disable nonsecure protocols:
system> options rsh.enable off
system> options telnet.enable off
system> options ftpd.enable off
system> options httpd.enable off
system> options httpd.admin.enable off
system> options ssh1.enable off
Check to ensure that the password rules are hardened:
system> options security.passwd.rules.everyone on
system> options security.passwd.rules.history 6
system> options security.passwd.rules.minimum 8
system> options security.passwd.rules.minimum.digit 1
...
Set options in compliance with corporate security policies.
Administrative Users Initially, there is only one administrative user: root. Multiple administrative users are allowed, managed by role-based access control (RBAC). Login information is tracked in the syslog (/etc/messages) file, including: – User name – Time of access – Node name or address
ADMINISTRATIVE USERS To manage a storage system, you can use the default system administration account, or root. You can also use the useradmin command to create additional users. Additional users are beneficial because:
You can give users and groups of users differing levels of administrative access to your storage systems. You can limit an individual user's access to specific storage systems by giving the individual an administrative account on only those systems. Having different users allows you to display information about who is performing commands on a storage system, and what commands they are using. The auditlog file keeps a record of all operations that are performed on a storage system and the user who performed each operation, as well as any operations that failed due to insufficient capabilities. You can assign each user to one or more groups whose assigned roles (sets of capabilities) determine what operations they are authorized to carry out on the storage system. If a storage system that is running CIFS is a member of a domain or a Windows® workgroup, domain users that are authenticated on the Windows domain can use any available method to access the storage system. The Audit Log An audit log is a record of commands that are executed at the console, through a Telnet shell or SSH, or by using the rsh command. All commands that are executed in a source-file script are also recorded in the audit log. Audit log data is stored in the /etc/log directory, in the auditlog file. HTTP administration operations, such as those resulting from the use of NetApp System Manager, are also logged. You can use the auditlog.max_file_size option to specify the maximum size of the auditlog file. By default, the Data ONTAP® operating system is configured to save an audit log.
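Audit log behavior is itself controlled through options; for example (the size value is illustrative):
system> options auditlog.enable on
– Turns audit logging on (the default).
system> options auditlog.max_file_size 10000000
– Limits the size, in bytes, of the /etc/log/auditlog file.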
Role-Based Access Control Role-based access control (RBAC) is a mechanism for managing the set of capabilities that an administrator can exercise on a storage system.
Follow these steps to implement RBAC:
– Create a role with specific capabilities. – Create a group with one or more assigned roles. – Create one or more users that are assigned to one or more groups.
ROLE-BASED ACCESS CONTROL Role-based access control (RBAC) specifies how users and administrators can use a particular computing environment. Most organizations have multiple system administrators, some of whom require more privileges than others. By selectively granting or revoking privileges for each administrator, you can customize the degree of access that each administrator has to the system. RBAC allows you to define sets of capabilities that apply to one or more users. Users are assigned to groups based on their job functions, and each group is granted a set of roles to perform those functions.
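A minimal end-to-end sketch of the three steps above, using illustrative role, group, and user names and a small set of capabilities:
system> useradmin role add vol_role -a cli-vol*,cli-aggr*
– Creates a role limited to the vol and aggr CLI commands.
system> useradmin group add vol_admins -r vol_role
– Creates a group and grants it the role.
system> useradmin user add bob -g vol_admins
– Creates user bob in that group (a password prompt follows).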
Capabilities Capabilities are predefined privileges that allow users to execute commands or take other specified actions. A role is a set of capabilities. The following capabilities are predefined: – Login rights – CLI rights – Security rights – API rights
CAPABILITIES A capability is a privilege that is granted to a role to execute commands or take other specified actions. The Data ONTAP operating system uses four types of capabilities:
Login rights: These capabilities begin with login- and are used to control which access methods an administrator is permitted to use for managing the system. CLI rights: These capabilities begin with cli- and are used to control which commands an administrator can use in the Data ONTAP command-line interface (CLI). Security rights: These capabilities begin with security- and are used to control an administrator's ability to use advanced commands or change passwords for other users. API rights: These capabilities begin with api- and are used to control which API commands can be used. API commands are usually executed by programs, not administrators. However, you might want to restrict a specific program to certain APIs by creating a special user for it, or you might want to have a program authenticate as the user who is using the program and then limit the program by that user's roles. See the System Administration Guide for the appropriate Data ONTAP operating system version.
Roles A role is a defined set of capabilities. The Data ONTAP® operating system includes several predefined roles. Administrators can create additional roles or modify existing roles.
ROLES A role is a collection of capabilities or rights to execute certain functions. Usually, a role is created to assign a task or tasks to a particular group of users.
Predefined Administrative Roles The root role grants all possible capabilities. The admin role grants all CLI, API, login, and security capabilities. The power role grants the ability to: – Invoke all cifs, exportfs, nfs, and useradmin CLI commands – Make all cifs and nfs API calls – Log in using Telnet, HTTP, RSH, and SSH sessions The compliance role grants the abilities of the power role, along with the ability to use SnapLock® compliance software and file API calls. The audit role grants the ability to make snmp-get and snmp-get-next API calls. The backup role grants the ability to make NDMP calls. The none role grants no administration capabilities.
PREDEFINED ADMINISTRATIVE ROLES Roles have assigned capabilities that can be modified. The useradmin role list command is used to view the capabilities that are assigned to each role. To assign a user to a system, you must first assign the user to a group that has specified capabilities.
GROUPS A group is a collection of users or domain users. It is important to remember that the groups that are defined in the Data ONTAP operating system are separate from other groups, such as groups that are defined in the Microsoft® Active Directory® server or a Network Information System (NIS) environment. This is true even if the groups that are defined in the Microsoft Active Directory and the groups that are defined in the Data ONTAP operating system have the same name. When you create new users or domain users, the Data ONTAP operating system requires that you specify a group. Therefore, you should create appropriate groups before you define users or domain users.
PREDEFINED GROUPS To create or modify a group, start by giving the group capabilities that are associated with one or more predefined or customized roles.
User Creation Requirements: Role
Use this CLI command to view current role definitions:
system> useradmin role list [role]
– Leave the role name empty to view general information for all roles.
– Enter a specific role to view detailed information about that particular role.
Use this CLI command to create a new role:
system> useradmin role add <role_name> -a <capability_1>,<capability_2>,...
Users A user is: – An individual who may or may not have capabilities defined for the storage system – Part of a group NOTE: For security purposes, each user should have a unique account.
User Creation Requirements: Group
Use this CLI command to view current group definitions:
system> useradmin group list [group_name]
– Leave the group name empty to view general information for all groups.
– Enter a specific group name to view detailed information about that particular group.
Use this CLI command to create a new group:
system> useradmin group add <group_name> -r <role_1>,<role_2>,...
A group must be associated with one or more roles.
Purpose of Local Users Local users: – Are used for administrative access – In CIFS: Provide a list of authenticated users with Microsoft Windows workgroup authentication Provide access to users when there is no domain controller access with Windows domain authentication
PURPOSE OF LOCAL USERS Local users are often used to delegate configuration duties to other administrators. However, local users are also created if the storage system is configured to perform local authentication with CIFS or NFS protocols (for example, when the storage system’s CIFS server is configured for Windows workgroup authentication).
Security Administration: Users Use the following command from the CLI to manage users:
system> useradmin
– This command allows you to list, add, and delete users.
– User information is maintained in the /etc/registry file.
User authentication is performed locally on the storage system.
Security Administration Security options in this command define password and login controls:
system> options security
This CLI command defines password management:
system> passwd
NOTE:
– The root ID cannot be deleted.
– There is no initial password for root on upgrades (new installs require a root password by default).
– A password cannot be the same as the associated user name.
– Root has full administration rights to the machine without a password if there are no other user definitions or password settings.
Specifies whether new s and s who for the first time after another has changed his or her must change the when they . The default value for this option is off. NOTE: If you enable this option, you must ensure that all groups have -telnet and cli-wd capabilities. s in groups that do not have these capabilities cannot to the storage system.
security.passwd.lockout.numtries num
Specifies the number of allowable login attempts before a user account is disabled. The default value for this option is 4,294,967,295.
security.passwd.rules.enable {on|off}
Specifies whether a check for password composition is performed when new passwords are specified. If this option is set to on, passwords are checked against the rules that are specified in this table, and the password is rejected if it does not pass the check. If this option is set to off, the check is not performed. The default value for this option is on. By default, this option does not apply to the root or Administrator accounts.
security.passwd.rules.everyone {on|off}
Specifies whether a check for password composition is performed for the root and Administrator accounts. If the security.passwd.rules.enable option is set to off, this option does not apply. The default value for this option is off.
security.passwd.rules.history num
Specifies the number of previous passwords that are checked against a new password to disallow repeats. The default value for this option is 0, meaning that previous passwords are not checked (repeat passwords are allowed).
security.passwd.rules.maximum max_num
Specifies the maximum number of characters in a password. NOTE: This option can be set to a value greater than 16, but a maximum of 16 characters are used to match the password. Users with passwords longer than 14 characters cannot log in through Windows interfaces, so if you are using Windows, do not set this option to a value greater than 14. The default value for this option is 256.
security.passwd.rules.minimum min_num
Specifies the minimum number of characters in a password. The default value for this option is 8.
security.passwd.rules.minimum.alphabetic min_num
Specifies the minimum number of alphabetic characters in a password. The default value for this option is 2.
security.passwd.rules.minimum.digit min_num
Specifies the minimum number of digit characters (numbers from 0 to 9) in a password. The default value for this option is 1.
security.passwd.rules.minimum.symbol min_num
Specifies the minimum number of symbol characters (white space and punctuation characters) in a password. The default value for this option is 0.
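For example, the password policy might be tightened from the CLI like this; the values shown are illustrative, not recommendations:
system> options security.passwd.rules.minimum 10
system> options security.passwd.rules.minimum.digit 2
system> options security.passwd.lockout.numtries 6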
Creation Requirements: User
Use this CLI command to view the current user list: system> useradmin user list [user_name]
– Leave the user name list empty to view general information for all users. – Enter a specific user name to view detailed information about that particular user.
Use this CLI command to create a new user: system> useradmin user add user_name -g group1[,group2,…]
– A password can be required (see security options). – The user must be associated with one or more groups.
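A brief sketch of creating and verifying a local user; the account name bob is hypothetical, and the command prompts for a password:
system> useradmin user add bob -g Administrators
system> useradmin user list bob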
Data ONTAP 8.0 Security The Data ONTAP 8.0 operating system ships with the most secure options enabled: – SSH and SSL are enabled by default, but configuration is required. – Telnet, RSH, HTTP, and FTP are disabled by default.
When you upgrade a storage system, it inherits the settings of the previous version.
Administration Host The setup command requests the name and IP address of the administration host. – This is typically a UNIX® or Linux® host that has access to mount the root volume from the storage system. – When mounted, root on the administration host has root access to the root volume.
ADMINISTRATION HOST The term administration host is used to describe an NFS client machine that has the ability to view and modify configuration files that are stored in the /etc directory of the storage system’s root volume. When you designate a workstation as an administration host, the storage system's root file system (/vol/vol0 by default) is accessible by NFS mounting if NFS is licensed. You can designate additional administration hosts after setup by modifying the storage system's NFS exports and CIFS shares. The administration host can be set by using the setup command or the hidden admin.host option. Administration Host Privileges The storage system grants root permissions to the administration host after the setup procedure is complete. This table describes administration host privileges. IF THE ADMINISTRATION HOST IS ...
YOU CAN ...
An NFS client: Mount the storage system root directory and edit configuration files from the administration host. Use an RSH connection to enter Data ONTAP commands.
A CIFS client: Connect to the storage system as root or Administrator, and then edit configuration files from any CIFS client.
Restricting Access To improve security, you can configure the storage system to allow logins only from trusted hosts. Configure this option by using: – The CLI command: system> options trusted.hosts [hostname|*|-]
– NetApp System Manager
You can specify up to five clients to be given SSH and NetApp System Manager privileges.
RESTRICTING ACCESS When you restrict access by using the options trusted.hosts command:
Host names should be entered as a comma-separated list with no spaces. Enter an asterisk (*) to allow access to all clients (this is the default). Enter a hyphen (-) to disable access to the server. This value is ignored for Telnet if options telnet.access is set, and is ignored for HTTP administration if options httpd.admin.access is set.
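For example (the host names are illustrative; the second command restores the default of allowing all clients):
system> options trusted.hosts adminhost1,adminhost2
system> options trusted.hosts *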
Module Summary In this module, you should have learned to: Restrict administrative access Restrict console and NetApp System Manager access Configure a client machine as an administration host to manage a storage system
Module Objectives By the end of this module, you should be able to: Identify the configuration of network settings and components in the Data ONTAP® operating system Explain and configure name resolution services Configure routing tables in the Data ONTAP operating system Define and create interface groups Discuss the operation of virtual LANs (VLANs) and how to route them
Interface Configuration The setup command performs the initial network interface configuration. After initial setup, you can create and modify the interface configuration using: – Command-line interface (CLI) with the ifconfig command – NetApp® System Manager
Interface configuration is stored in the /etc/rc file, which is executed when the storage system boots normally.
INTERFACE CONFIGURATION From the CLI, the ifconfig command displays and configures network interfaces for a storage system. These are ifconfig command examples:
Display network interface configurations: ifconfig -a
Change an interface IP address: ifconfig interface 10.10.10.XX
Bring down an interface: ifconfig interface down
Bring up an interface: ifconfig interface up
The /etc/rc file configures the interface settings during boot. To edit this configuration on the storage system, you can use the wrfile command from NetApp System Manager or from an administration host that uses CIFS or NFS. Example: Using the ifconfig command in the /etc/rc file: ifconfig interface 10.10.10.XX netmask 255.255.252.0 up
ETHERNET INTERFACE NAMING Your storage system supports these interface types:
Ethernet, including quad-port Ethernet adapters
Gigabit Ethernet (GbE)
Asynchronous transfer mode (ATM), Emulated LAN (ELAN), and FORE/IP
Onboard network interfaces (supported on FAS250, FAS270, FAS3000/V3000, and FAS6000/V6000 systems)
10 Gigabit Ethernet (10GbE)
TCP offload engine (TOE) network interface card (NIC)
Your storage system also supports these virtual network interface types:
Virtual interface Virtual LAN (VLAN) Virtual hosting (VH)
INTERFACE NAMING QUIZ For physical interfaces, interface names are assigned automatically, based on the slot where the network adapter is installed. VLAN interfaces are displayed in the interfaceID_and_slot_number-vlan_id format, where slot_number is the slot where the network adapter is installed, and vlan_id is the identifier of the VLAN that is configured on the interface. For example, e8-2, e8-3, and e8-4 are three VLAN interfaces for VLANs 2, 3, and 4, configured on interface e8. You can assign names to vifs (for Data ONTAP 7.3.x), interface groups (for Data ONTAP 8.0 7-Mode and later), and emulated LAN interfaces.
Managing Interfaces: ifconfig These network interface parameters can be configured: – IP address – Netmask address – Broadcast address – Media type and speed – Maximum transmission unit (MTU) – Flow control (Gigabit Ethernet II controller only) – Up or down state system> ifconfig e0c 10.10.10.10 netmask 255.255.255.0 up
MANAGING INTERFACES: IFCONFIG Changes that you make by using the CLI are not permanent until you use either the CLI or NetApp System Manager to add the changes to the /etc/rc file. Network Parameter Descriptions
IP address: Standard format is used for IP addresses (for example, 192.168.23.10). IP addresses are mapped to host names in the /etc/hosts file.
Netmask and broadcast address: Standard format is used for netmask and broadcast addresses (for example, 255.255.255.0 for netmask, and 192.168.1.255 for broadcast address).
Media type and speed: These media types can be configured: [ mediatype { tp | tp-fd | 100tx | 100tx-fd | 1000fx | auto } ]
MTU: Use a smaller interface maximum transmission unit (MTU) value if a bridge or router on the attached network cannot break large packets into fragments.
Flow control for the GbE II controller: The original GbE controller supports only full duplex, not flow control. The GbE Controller II negotiates flow control with an attached device that supports autonegotiation. However, if autonegotiation fails on either device, the flow control setting that was entered using the ifconfig command is used. These flow control settings can be configured: [ flowcontrol { none | receive | send | full } ]
Up or down state: The state of any interface can be configured up or down.
CLI: Managing Interfaces To configure the current status: system> ifconfig
To display permanent settings: system> rdfile /etc/rc
Better yet, use NetApp System Manager.
To change permanent settings: system> wrfile /etc/rc
– The wrfile command overwrites the existing file.
– You can cut and paste existing information.
– Press Ctrl-C to save changes and exit.
– To activate changes to the /etc/rc file, reboot or issue source /etc/rc.
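A minimal sketch of that sequence; the file contents shown are illustrative, not a recommended configuration:
system> rdfile /etc/rc
system> wrfile /etc/rc
hostname system
ifconfig e0c 10.10.10.10 netmask 255.255.255.0 up
route add default 10.10.10.1 1
(Press Ctrl-C to save and exit, then activate the changes.)
system> source /etc/rc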
NETAPP SYSTEM MANAGER: INTERFACES SETUP NetApp System Manager provides a user-friendly way to manage network interfaces with confidence. NetApp System Manager offers advanced functionality, such as interface groups and VLAN management. Any modifications that you make by using NetApp System Manager persist in the /etc/rc file through reboot.
NETAPP SYSTEM MANAGER: INTERFACE EDITS When you specify an interface as untrusted (untrustworthy), any packets that are received on the interface are likely to be dropped. For example, if you run a ping command on an untrusted interface, the Internet Control Message Protocol (ICMP) response packet that is received on the interface might be dropped.
INTERFACE GROUPS The Data ONTAP operating system connects with networks through physical interfaces (links). The Data ONTAP operating system has ed IEEE 802.3ad link aggregation for many years. This standard enables multiple network interfaces to be combined into one virtual interface group. After it is created, this group is indistinguishable from a physical network interface. In the Data ONTAP 7.3 operating system, virtual interfaces were referred to as “vifs.” In the Data ONTAP 8.0 operating system and later, interface aggregation groups are referred to as “interface groups.”
SINGLE-MODE INTERFACE GROUP Interface groups can be single-mode or multimode. In a single-mode interface group, one interface is active, and the other interface is inactive (on standby). NOTE: Failure of the active interface signals the inactive interface to take over and maintain the connection with the switch.
MULTIMODE INTERFACE GROUP Called simply “multi” in the ifgrp command, the multimode static interface group implementation complies with the IEEE 802.3ad static standard. The multimode dynamic link is compliant with the IEEE 802.3ad dynamic standard, also called Link Aggregation Control Protocol (LACP). Dynamic multimode interface groups can detect the loss of link status, as well as a loss of data flow. However, a compatible switch must be used to implement the dynamic multimode configuration.
LOAD BALANCING Load balancing ensures that all the interfaces in a multimode vif or interface group are equally utilized for outbound traffic. Load balancing, which is supported for multimode trunks only, relies on an even distribution of hosts. Three methods of load balancing are available; IP-based is the default:
IP-based: The outgoing interface is selected on the basis of the storage system's and client's IP addresses. Port-based: The outgoing interface is selected using a fast hashing algorithm on the source and destination IP addresses, along with the transport layer port number. Round-robin: All of the interfaces are selected on a rotating basis. NOTE: The round-robin method provides true load balancing, but it can cause out-of-order packet delivery and retransmissions due to overruns. Another method of load balancing, MAC-based, selects the outgoing interfaces on the basis of the storage system's and client's Media Access Control (MAC) addresses. Both the IP-based and MAC-based methods use a formula to determine which interface to use for outgoing frames. The formula uses the exclusive OR (XOR) value of the last four bits of the source and destination addresses to determine which interface to return data on.
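As a sketch of the Data ONTAP 8.0 7-Mode syntax (the interface names and address are illustrative), a dynamic multimode interface group with IP-based load balancing might be created like this:
system> ifgrp create lacp mmig1 -b ip e0a e0b
system> ifconfig mmig1 172.17.200.205 netmask 255.255.255.0 up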
Example: Single-Mode Interface Group An interface must be brought down to be added to an interface group.
Entries created on the command line are not permanent. system> ifgrp create single Singig1 e0a e0c system> ifconfig Singig1 172.17.200.201 netmask 255.255.255.0 mediatype auto up system> ifgrp favor e0a system> ifconfig Singig1 Singig1:flags=1148043
mtu 1500 inet 172.17.200.201 netmask 0xffffff00 broadcast 172.17.200.255 ether 02:a0:98:03:28:8e
EXAMPLE: SINGLE-MODE INTERFACE GROUP These are the Data ONTAP 7.3.x operating system commands for a single-mode interface group: system> vif create single Singig1 e0a e0c system> ifconfig Singig1 172.17.200.201 netmask 255.255.255.0 mediatype auto up system> vif favor e0a system> ifconfig Singig1 Singig1:flags=1148043
mtu 1500 inet 172.17.200.201 netmask 0xffffff00 broadcast 172.17.200.255 ether 02:a0:98:03:28:8e
EXAMPLE: MULTIMODE INTERFACE GROUP These are the Data ONTAP 7.3.x operating system commands for a multimode interface group: system> vif create multi multiig2 e0a e0b e0c e0d system> ifconfig multiig2 172.17.200.202 netmask 255.255.255.0 mediatype auto system> ifconfig multiig2 multiig2:flags=1148043
mtu 1500 inet 172.17.200.202 netmask 0xffffff00 broadcast 172.17.200.255 ether 02:a0:98:03:28:8e
EXAMPLE: SECOND-LEVEL INTERFACE GROUP These are the Data ONTAP 7.3.x operating system commands for a second-level interface group: system> vif create multi multiig1 e0c e0e system> vif create multi multiig2 e0d e0f system> vif create single l2ig multiig1 multiig2 system> ifconfig l2ig 172.17.200.206 netmask 255.255.255.0 mediatype auto system> ifconfig l2ig l2ig:flags=1148043
mtu 1500 inet 172.17.200.206 netmask 0xffffff00 broadcast 172.17.200.255 ether 02:a0:98:03:28:8c
Host-Name Resolution Mechanisms The Data ONTAP operating system stores and maintains host information in these locations: – /etc/hosts file – Domain Name System (DNS) server – Network Information Service (NIS) server
In host-name resolution:
– The /etc/nsswitch.conf file controls the order in which these three locations are checked. – The Data ONTAP operating system stops checking locations when a valid IP address is returned.
NOTE: For convenience, you can use NetApp System Manager.
HOST-NAME RESOLUTION MECHANISMS The Data ONTAP operating system uses these methods to resolve host information on a storage system: /etc/hosts file Domain Name System (DNS) server Network Information Service (NIS) server DNS and NIS can be configured using the setup command during installation of a storage system. Therefore, many of the commands and files that are included in this lesson are executed automatically. Usually, NIS or DNS commands are only entered manually when: NIS or DNS was not configured during setup You need to make a change to a configuration The /etc/nsswitch.conf file displays the order in which a storage system searches for resolution. For example, to resolve host names, a storage system uses the search order list for hosts and (in this example) searches first using the /etc/hosts file, then NIS, and then DNS. Each line in the /etc/nsswitch.conf file uses this format. You can change the default search order for host-name resolution at any time by modifying this file. After a storage system resolves the host name, the search ends.
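For example, the hosts line in /etc/nsswitch.conf might read as follows; the order shown is only one possibility:
hosts: files nis dns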
Configuration by /etc/hosts The /etc/hosts file provides local IP and name resolution. To modify /etc/hosts, use: – The rdfile and wrfile commands in CLI – Any client machine where the /etc directory is visible, such as an istration host – NetApp System Manager
CONFIGURATION BY /ETC/HOSTS Because the /etc/hosts file is checked first and changes in it take effect immediately, it is important to keep this file current. You can edit the file using a standard editing program. When using a standard editing program, be sure to include a blank line at the end. The /etc/hosts format is:
IP_address hostname alias(es)
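For example (the addresses and names are illustrative):
127.0.0.1    localhost
10.10.10.10  system  system-e0a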
The /etc/hosts file is generated automatically during the storage system setup procedure as part of the data installation process. It is populated at that time with IP addresses and host names. NOTE:
The default IP address for the storage system is listed in the /etc/hosts file. Installed cards without IP addresses are included in the /etc/hosts file, but they are commented out.
DNS Configuration The DNS provides a centralized mechanism for host-name resolution in Windows® and UNIX® environments. To configure the DNS, use: – NetApp System Manager – In the CLI: the setup command, the options dns commands, or the dns command
DNS CONFIGURATION DNS matches domain names to IP addresses and enables you to centrally maintain host information so that you do not have to update the /etc/hosts file every time you add a new host to the network. This is particularly helpful if you have several storage systems on your network. You can configure DNS by using options and commands. To make the configuration commands permanent, enter them in the /etc/rc file. The /etc/rc file is generated automatically during the setup procedure, as part of the Data ONTAP installation process. If you choose to set up DNS at that time, the file is populated with DNS configuration information. Use the dns info command to display the status of the DNS resolver, a list of DNS servers, the state of each DNS server, the default domain that is configured on the storage system, and a list of other domains that are used with unqualified names for name lookup. EXAMPLE
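For example, DNS might be enabled from the CLI like this; the domain name and server address are placeholders, and the name servers themselves are listed in /etc/resolv.conf:
system> options dns.domainname example.com
system> options dns.enable on
system> rdfile /etc/resolv.conf
nameserver 10.10.10.1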
NIS provides: – A centralized mechanism for host-name resolution – User authentication
The storage system can participate only as a NIS client. To configure NIS: – Use NetApp System Manager – In the CLI, use: setup command options nis nis command
NIS The NIS client service provides information about security-related parameters on a network, such as hosts, users, groups, and netgroups. NIS enables you to centrally maintain host information, so you don't have to update the /etc/hosts file on every storage system on your network. Although the storage system can be an NIS client and can query NIS servers for host information, it cannot be an NIS server. You can use the options nis.slave.enable command to configure the storage system, an NIS client, as an NIS slave. The storage system then downloads NIS maps from the NIS master servers that are defined in nis.servers. The storage system NIS slave checks the master servers every 45 minutes. Downloaded maps are stored under /etc/yp/nis_domain_name/. If you want to use NIS as the primary method for host resolution, specify it above the other methods that are listed in the /etc/nsswitch.conf file.
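For example, NIS might be configured from the CLI like this; the domain name and server address are placeholders:
system> options nis.domainname nisdomain.example.com
system> options nis.servers 10.10.10.5
system> options nis.enable on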
Route Information A route defines the path to a network or host. To display the current routing table in the CLI, use netstat -r.
system> netstat -r
Routing tables
Internet:
Destination        Gateway
default            66.166.149.161
66.166.149.160/2   link#1
66.166.149.161     0:20:6f:10:25:7a
ROUTE INFORMATION A storage system does not function as a router for other network hosts, even if it has multiple network interfaces. However, the storage system does route its own packets. To display the defaults and explicit routes that your storage system uses to route its own packets, use the netstat -r command to view the current routing table. The netstat command displays network-related data structures. The route command enables you to manually manipulate the network routing table for a specific host or network that is specified by destination. To add or delete a specific host or network route in the routing table, use route. COMMAND
RESULT
route add default 10.10.10.1 1
Adds a default route through 10.10.10.1 with a metric (hop) of 1
route delete 193.20.8.173 193.20.4.254
Deletes the route destination 193.20.8.173 connecting through 193.20.4.254
The netstat Command Use the netstat -r command to view the network routing tables. Use the netstat -nr command to view the network routing tables with numeric IP addresses (instead of name resolution). Use the netstat -rs command to display routing statistics.
THE NETSTAT COMMAND The netstat command symbolically displays the contents of various network-related data structures. There are a number of output formats, depending on the options that you choose. Use the manual pages (using the man command) to see all available options.
The route Command Use the route -s command to show the routing tables. Use the route -f command to flush all gateway entries from the routing table. Use the route -ns command to view the network routing tables with numeric IP addresses (instead of name resolution).
THE ROUTE COMMAND The route command enables you to manually manipulate the network routing table for a specific host or network that is specified by destination.
VLANS A virtual LAN (VLAN) is a switched network that is logically segmented by function, project team, or applications. End stations can be grouped by department, by project, or by security level. End stations can be geographically dispersed and still be part of the broadcast domain in a switched network. Advantages of VLANs
Ease of administration: VLANs enable a logical grouping of users who are physically dispersed. Moving a user to a new location does not interrupt membership in a VLAN. Similarly, changing job functions does not require moving the end station because it can be reconfigured into a different VLAN. Confinement of broadcast domains: VLANs reduce the need for routers on the network to contain broadcast traffic. Packet flooding is limited to the switch ports on the VLAN. Reduction of network traffic: Because the broadcast domains are confined to the VLAN, traffic on the network is significantly reduced. Enforcement of security: End stations on one VLAN cannot communicate with end stations on another VLAN unless a router is connected between them.
VLAN COMMANDS You can create a VLAN by using the vlan create command in the CLI or in the FilerView® browser-based administration tool. After you create the trunk, you can configure the VLAN like any other regular network interface by using the ifconfig command. EXAMPLE
RESULT
vlan create –g on e4 2 3 4
Creates three VLANs on interface e4 named e4-2, e4-3, and e4-4. The -g on option enables GVRP on the VLANs. Enter this command in the /etc/rc file to make it persistent over reboots.
vlan delete –q e8 2
Removes VLAN e8-2. If the interface was configured up, a message appears asking you to confirm the deletion.
vlan add e8 3
Adds e8-3 to the VLAN. Enter this command in the /etc/rc file to make it persistent over reboots.
vlan stat e4 10
Displays the number of packets that were received and transmitted on each interface. You can specify the time interval (in seconds) at which the statistics are displayed. If no number is entered, statistics are displayed by default at two-second intervals.
vlan modify –g off e8
Interface e8 is excluded from participating with GVRP. Enter this command in the /etc/rc file to make it persistent over reboots.
Using the CLI to Create a VLAN system> ifconfig e0b down system> vlan create e0b 10 vlan: e0b-10 has been created system> ifconfig e0b-10 172.17.200.201 netmask 255.255.255.0 mediatype auto system> ifconfig –a e0b:flags=80908043
mtu 1500 ether 00:a0:98:03:28:8f (auto-1000tfd-up) flowcontrol full
USING THE CLI TO CREATE A VLAN Use the vlan create and the ifconfig commands to create and configure a VLAN. The vlan create command: Creates a VLAN interface Includes the VLAN interface in one or more VLAN groups as specified by the VLAN identifier Enables VLAN tagging Enables (optionally) GVRP on the VLAN interface After you create the VLAN interface with the vlan command, you can configure it by using the ifconfig command.
Module Summary In this module, you should have learned to: Identify the configuration of network settings and components in the Data ONTAP operating system Explain and configure name resolution services Configure routing tables in the Data ONTAP operating system Define and create interface groups Discuss the operation of VLANs and how to route them
Check Your Understanding Where can you set or change IP to host-name resolution locally on the storage system? How do you configure host-name resolution for a storage system? What is the difference between single-mode and multimode trunks? What are the benefits of a VLAN?
Module Objectives By the end of this module, you should be able to: Explain NFS implementation in the Data ONTAP® operating system License NFS on a storage system Explain the purpose and format of /etc/exports List and define the export specification options Describe the use of the exportfs command Mount an export on a UNIX® host
NFS Overview NFS enables network file systems (clients) to share files and directories that are stored and administered centrally from a storage system. These platforms usually support NFS: – – – –
NFS OVERVIEW NFS, a protocol that was originally developed by Sun Microsystems in 1984, enables users on a client computer to access files over a network as easily as if the networked storage were attached to the client's local disks. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The NFS protocol is specified in RFC 1094, RFC 1813, and RFC 3530.
EXPORTED RESOURCES OVERVIEW In this diagram, the storage system contains resources that many users need, such as data_files, eng_files, and misc_files. To use a resource, the storage system must have the resource exported, and Client1 must have the resource mounted. A user on Client1 can then change to the directory (cd) that contains the mounted resource and access it as if it were stored locally (assuming that permissions are set appropriately).
Setting up NFS Configure NFS using either: – CLI – NetApp® System Manager
When you set up NFS, you should: – Have an NFS license code – Determine if you are enabling NFS over TCP, User Datagram Protocol (UDP), or both – Determine which version of NFS to enable
CLI: NFS SETUP When you license NFS on a storage system, it starts the daemons (rpc.mountd and nfsd) that handle NFS RPC protocol. These are NFS configurable options:
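A hedged sketch of that initial setup; the license code is a placeholder, and the options shown are only two of the configurable NFS options:
system> license add XXXXXXX
system> options nfs.tcp.enable on
system> options nfs.v3.enable on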
Exporting Resources To make resources available to remote clients, the resource must be exported. To export a resource persistently: – Edit the /etc/exports file with a new entry. – Execute the exportfs -p command.
EXPORTING RESOURCES To export resources, use one of these methods:
For persistence across reboots, specify the resources to export in the /etc/exports file, and then execute the exportfs -a command to make the changes effective immediately. For temporary access, use the exportfs command to export resources that are not specified in the /etc/exports file, or to export resources that are specified in the file but with different access permissions.
Five Rules for Creating Exports 1. You must export each volume separately. If you create, rename, or destroy a volume, the /etc/exports file is updated automatically. You can disable this functionality by using the options nfs.export.auto-update switch. 2. The storage system must be able to resolve host names if host names are used in exports: /etc/hosts, NIS, DNS 3. Access must be granted in a positive way:
– A host is excluded when it is not listed or when it is preceded by a dash (-).
– If no host is specified, all hosts have access.
4. Subdirectories of parent exports can be exported with different option specifications. 5. Permissions are determined by matching the longest prefix to the access permissions in the /etc/exports file.
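The callouts that follow describe an /etc/exports entry of roughly this form; the path and host names here are illustrative:
/vol/vol0/pubs -rw=host1:host2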
Specifies the full path to the directory that is exported
The first option is listed following a dash. Additional options are separated by commas. In this example, the -rw option enables host1 and host2 to mount the pubs directory with read-write permissions. Host names are listed, separated by colons.
ADDING AN EXPORT: /ETC/EXPORTS System administrators must control how NFS clients access files and directories on a storage system. Exported resources are resources that are made available to hosts. NFS clients can only mount resources that have been exported from a storage system that is licensed for NFS. To export directories, add an entry for each directory to the /etc/exports file. Use the full path to the directory and options. The full path name must include /vol. Export specifications use these options to restrict access:
root = list of hosts, netgroup names, and subnets rw = list of hosts, netgroup names, and subnets ro = list of hosts, netgroup names, and subnets
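As a hedged illustration combining these options (the path and host names are placeholders), a single /etc/exports entry might read:
/vol/vol1 -root=adminhost,rw=host1:host2,ro=host3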
Check Your Understanding 1. Allow root access to /vol/vol0 by the administration host. 2. Allow read-write access to /vol/vol0/home by host1 and host2. 3. Allow read-write access to /vol/vol1 by host1 and read-only access by host3.
THE EXPORTFS COMMAND To specify which file system paths the Data ONTAP operating system automatically exports when NFS starts, add export entries to (or remove them from) the /etc/exports file. To manually export or unexport file system paths, use the exportfs command in the storage system CLI. Editing the /etc/exports File To add export entries to (or remove entries from) the /etc/exports file, use a text editor on an NFS client that has root access to the storage system. This is an example of /etc/exports file entries: #Auto-generated by setup Mon Mar 24 14:39:40 PDT 2008 /vol/vol0
COMMON EXPORTFS OPTIONS To export a file system path and add a corresponding export entry to the /etc/exports file, enter this command: exportfs -p [options] path
NOTE: If you do not specify an export option, the Data ONTAP operating system automatically exports the file system path with the rw and sec=sys export options.
To export all file system paths that are specified in the /etc/exports file and unexport all file system paths that are not specified in the /etc/exports file, enter this command: exportfs -r
To unexport all file system paths without removing the corresponding export entries from the /etc/exports file, enter this command: exportfs -uav
To unexport a file system path without removing the corresponding export entry from the /etc/exports file, enter this command: exportfs -u path
To unexport a file system path and remove the corresponding export entry from the /etc/exports file, enter this command: exportfs -z path
Mounting From a Client To mount an export from a client: 1. Establish a session with a client. 2. Create a directory as a mountpoint for the storage system. 3. Mount the exported directory in the host directory that you just created. 4. Change directories to the mounted export. 5. Enter ls -l to verify that the storage appliance is mounted and accessible. telnet 10.32.30.20
(1)
# mkdir /system-vol1-qt1 (2) # mount system:/vol/vol1/qtree1 /system-vol1-qt1 # cd /system-vol1-qt1 (4) # ls –l (5) -rwxr-xr-x root 719634 FEB 11 2004 ,general -rwxr-xr-x root 719634 FEB 13 2004 ,policy
MOUNTING FROM A CLIENT Use the mount command to mount an exported NFS directory from another machine. An alternate way to mount an NFS export is to add a line to the /etc/fstab (called /etc/vfstab on some UNIX systems). This line must specify the NFS server host name, the exported directory on the server, and the local machine directory where the NFS share is to be mounted. For more information, see the NFS documentation for your client.
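A minimal sketch of such an /etc/fstab line on a Linux client, matching the mount example above; the mount options shown are illustrative:
system:/vol/vol1/qtree1  /system-vol1-qt1  nfs  defaults  0  0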
Other NFS istration Resources For more information about NFS istration, see the Data ONTAP NFS istration course. This advanced course covers: – Exporting resources across domains, subnets, and netgroups – Advanced configuration – NFS statistics gathering – NFS performance tuning – NFS troubleshooting
Module Summary In this module, you should have learned to: Explain NFS implementation in the Data ONTAP operating system License NFS on a storage system Explain the purpose and format of /etc/exports List and define the export specification options Describe the use of the exportfs command Mount an export on a UNIX host
Check Your Understanding What does NFS stand for? What is the format for the /etc/exports file? What is the purpose of export options? What command would you use to view what is exported from the storage appliance?
Module Objectives By the end of this module, you should be able to: Describe the CIFS environment Configure the storage system to participate in the CIFS environment Share a resource on the storage system Map a drive from a client to the shared resource on the storage system
CIFS Definition CIFS is a Microsoft® network file-sharing protocol that evolved from the Server Message Block (SMB) protocol. In a CIFS environment, any application that processes network I/O can access and manipulate files and folders (directories) on remote servers. CIFS can use either SMB 1.0 or SMB 2.0, depending on the Data ONTAP version.
CIFS DEFINITION CIFS is a Microsoft® network file-sharing protocol that evolved from the Server Message Block (SMB) protocol. When using CIFS, any application that processes network I/O can access and manipulate files and folders (directories) on remote servers in a manner that is similar to the way in which the application accesses and manipulates files and folders on the local system.
Authentication In a CIFS environment, the storage system authenticates s in one of four ways: – Active Directory® authentication – Microsoft Windows NT® 4.0 domain authentication – Windows workgroup authentication – Authentication for non-Windows workgroups
This module focuses on only Active Directory authentication.
Storage System Joins a Domain When a storage system joins a domain: The domain controller adds the storage system to a domain database. The storage system becomes a member server.
STORAGE SYSTEM JOINS A DOMAIN When a storage system joins a domain, it becomes a member server that provides services to clients. The storage system (member server) sends a request to a domain controller, and the domain controller adds the machine to the directory database.
Domain Name to IP Resolution When a client accesses a storage system’s resource: Client requests the browse list from the domain controller. Domain controller queries the DNS/WINS server for the IP address. Client communicates with the storage system. What is the storage system’s IP?
User Authentication User authentication on a storage system in a domain: Domain users are created on the domain controller. User session authentication occurs at the domain controller. Authenticated users must be authorized to access a share and its resources.
[Diagram: Clients A and B, the member server (storage system), and the domain controller; Client-B requests session authentication.]
USER AUTHENTICATION Domain users that have already been added to the domain controller can browse the storage system for available shares and then request access to the storage system and its shares and to the resources in a share. User session authentication with a user name and password is performed centrally on the domain controller; this establishes a session with the storage system. Users must be authorized to access a share and the resources in a share. Data access on a storage system requires a network logon to the storage system. An administrator can administer a storage system through the network (for example, through a Telnet session) using a local account on the storage system; however, a user cannot log in locally to a storage system to access data. In this example, Client-B’s user requests session authentication with the member server (storage system). The member server requests the domain controller to authenticate Client-B’s user. The domain controller authenticates Client-B’s user, and a session is established between Client-B’s user and the member server (storage system).
CLI: CIFS Setup To prepare a storage system to support Windows client users, complete these steps at the command-line interface (CLI): 1. License CIFS. 2. Perform the initial CIFS configuration by running the cifs setup program or using NetApp® System Manager.
If the setup is successful, the CIFS server starts automatically.
CLI: CIFS SETUP Steps to Set Up CIFS During CIFS setup, you can perform these tasks:
Assign or remove WINS servers
Configure the storage system Active Directory site information (if not already configured)
Join the storage system to a domain or change domains
Automatically generate /etc/passwd and /etc/group files when NIS or Lightweight Directory Access Protocol (LDAP) is enabled
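A hedged sketch of the CLI sequence; the license code is a placeholder, and cifs setup then prompts interactively for the values listed above:
system> license add XXXXXXX
system> cifs setup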
CREATING AND MANAGING SHARES When you create CIFS shares, there is a limitation with Windows Computer Management. For more information, see the Data ONTAP CIFS istration course and the File Access and Protocols Management Guide.
THE CIFS SHARES COMMAND You can use the CLI or NetApp® System Manager to create and modify shares. Using the CLI to Create and Access Shares To display one or more shares, add a share, change a share, or delete a share, use the cifs shares command. Parameters Used with CIFS Shares The parameters that are used with the cifs shares command enable you to modify or display CIFS shares information. Share settings can be changed at any time, even if the share is in use.
The cifs shares Command: Example
system> cifs shares -add pub /vol/vol0/pub -comment “new pub”
system> cifs shares
Name   Mount Point      Description
----   -----------      -----------
ETC$   /etc             Remote Administration
                        BUILTIN\Administrators / Full Control
HOME   /vol/vol0/home   Default Share
                        everyone / Full Control
C$     /                Remote Administration
                        BUILTIN\Administrators / Full Control
pub    /vol/vol0/pub    new pub
                        everyone / Full Control
system> cifs shares -delete pub
system> cifs shares
Name   Mount Point      Description
----   -----------      -----------
ETC$   /etc             Remote Administration
                        BUILTIN\Administrators / Full Control
HOME   /vol/vol0/home   Default Share
                        everyone / Full Control
C$     /                Remote Administration
                        BUILTIN\Administrators / Full Control
MANAGING SHARE PERMISSIONS Providing Access to Shares After you have created shares, you can use the cifs access command to set or modify the access control list (ACL) for that share. This command grants or removes access by specifying the share, the rights, and the user or group. EXAMPLE
RESULT
cifs access webfinal tuxedo Full Control
Gives full Windows NT access to the group tuxedo on the webfinal share
cifs access webfinal engineering\jbrown rw
Gives read/write access to the user engineering\jbrown on the webfinal share
-g
Specifies that the user is the name of a UNIX® group. Use this option when you have a UNIX group and a UNIX user, or a Windows NT user or group, with the same name.
user
Specifies that the user or group for the ACL entry can be a Windows NT user or group (if the storage system uses NT domain authentication), or can be the special group “everyone.”
group
Specifies the user or group for the ACL entry. Can be a Windows NT user or group (if the storage system uses NT domain authentication), or can be the special group “everyone.”
rights
Assigns either Windows NT or UNIX-style rights. Windows NT rights are: No Access, Read, Change, and Full Control.
MAPPING A DRIVE TO A SHARE On a Windows workstation, map a network drive letter to a share by performing these steps: 1. Open Windows Explorer and click Tools > Map Network Drive. The Map Network Drive window appears. 2. In the Drive list box, select any unused letter. In the example, the letter K is selected. 3. In the Folder list box, type \\storage_system\C$. NOTE: The storage system name can be the name or IP address. 4. Click Finish. The Map Network Drive utility attempts to connect to the storage system and share. 5. When the “Connect to” window appears, in the User name text box, type the user name, and in the Password text box, type the user’s password. 6. Click OK. NOTE: Steps continue on next page.
MAPPING A DRIVE TO A SHARE COMPLETE The mapped network drive letter (Z is shown in this example) displays the mapping to the C$ share. The etc folder and the home folder are both in the C$ share.
TERMINATING SESSIONS The cifs terminate command stops the CIFS service. If a single host is named, all CIFS sessions that were opened by that host are terminated. If a host is not specified, all CIFS sessions are terminated, and the CIFS service is shut down. If you run the cifs terminate command without specifying a time until shutdown, and there are users with open files, you are prompted to enter the number of minutes to delay before terminating. If the CIFS service is terminated immediately on a host that has one or more files open, users are unable to save changes. You can use the -t option to warn users of an impending service shutdown. If you execute cifs terminate from rsh, you must supply the -t option. EXAMPLE
RESULT
cifs terminate -t 10 gloriaswan
Terminates a session in 10 minutes for the host gloriaswan and periodically sends alerts to the affected host or hosts
cifs terminate -t 0
Terminates all CIFS sessions immediately for all clients
cifs restart
Reconnects the storage appliance to the domain controller and restarts the CIFS service
CLI: Reconfiguring CIFS Use the cifs terminate command to disconnect s and stop the CIFS service. Use the cifs setup command to reconfigure the CIFS service. The storage system automatically attempts to restart the CIFS service with the new CIFS configuration.
CLI: RECONFIGURING CIFS To reconfigure CIFS, you must run the cifs setup program again, and then enter new configuration settings. You can use cifs setup to change these CIFS settings: WINS server addresses Security style (multiprotocol or NTFS-only) Authentication (Windows domain, Windows workgroup, or UNIX password authentication) File system that is used by the storage system Domain or workgroup to which the storage system belongs Storage system name Prerequisites for Reconfiguring CIFS Before you reconfigure CIFS, you must meet these prerequisites: The CIFS service must be terminated. If you want to change the storage system's domain, the storage system must be able to communicate with the primary domain controller for the domain in which you want to install the storage system. You cannot use the backup domain controller for installing the storage system.
Displaying CIFS Sessions You can display these types of CIFS session information: – A summary of session information – Share and file information for one or all connected users – Security information for one or all connected users
To obtain session information, use: – CLI – NetApp System Manager
DISPLAYING CIFS SESSIONS You can display these types of session information:
A summary of session information, including storage system information and the number of open shares and files that were opened by each connected user NOTE: The number of open shares that are shown in the session information includes the hidden IPC$ share.
Share and file information about one connected user or all connected users, including names of shares opened by a specific connected user or all connected users Access levels of opened files Security information about a specific connected user or all connected users, including the UNIX UID and a list of UNIX groups and Windows groups to which the user belongs.
CLI: MANAGING CIFS SESSIONS To display a summary of information about the storage system and connected users, use the cifs sessions command without arguments. For information about a single connected user, you can specify the user name, the machine name, or the IP address. You can use the -s option to obtain security information about one or all connected users. EXAMPLE
RESULT
cifs sessions
Displays a summary of all connected users
cifs sessions growe
Displays information about the user, files opened by the user, and the access level of the open files
cifs sessions growe_NT cifs sessions 192.168.33.3
Displays information about the host, files opened by the host, and the access level of the open files
cifs sessions –s
Displays security information about all connected users
cifs sessions –s growe_NT
Displays security information about the connected machine
Other CIFS Administration Resources For more information about CIFS administration, see the Data ONTAP CIFS Administration course. This advanced course covers: – Different CIFS authentication methods:
– Workgroup
– Active Directory
– Windows NT 4.0 domain
– Non-Windows workgroup
Module Summary In this module, you should have learned to: Describe the CIFS environment Configure the storage system to participate in the CIFS environment Share a resource on the storage system Map a drive from a client to the shared resource on the storage system
Module Objectives By the end of this module, you should be able to: List some security methods for protecting data Explain and configure a security style setting for a volume and a qtree Describe methods of tracking and restricting storage usage Explain, create, and manage quotas Explain and configure the Data ONTAP® FPolicy® file-screening policy
NAS Management After you configure network-attached storage (NAS) protocols, additional steps are needed to ensure that you get the full use of NAS technologies. This module examines: – Securing data – Tracking and restricting storage usage
Data Security Techniques NetApp® systems provide several security methods to protect data on your storage system. Data ONTAP® operating system – Limit protocol access by interface – Specify and manage the security style
Integration with third-party products – Virus scanning – File screening and hierarchical storage management (HSM)
Securing NAS Data with Data ONTAP By default, protocols are accessible by all configured interfaces To restrict access through a particular interface: system> options interface.blocked.cifs e0a system> options interface.blocked.nfs e0a,e0b
To allow a protocol access through all interfaces: system> options interface.blocked.cifs “”
Multiprotocol Volumes and qtrees can have either: – New Technology File System (NTFS) security style access control list (ACL) permissions – UNIX®-style permissions
MULTIPROTOCOL Qtrees can have one of these three security styles:
NTFS:
– For CIFS clients, security is handled using Windows NTFS ACLs.
– For NFS clients, the NFS user ID (UID) is mapped to a Windows security identifier (SID) and its associated groups. These mapped credentials are used to determine file access, based on the NTFS ACL.
UNIX: UNIX files and directories, like UNIX systems, have UNIX permissions.
Mixed:
– Both NTFS and UNIX security are allowed. A file or directory can have either Windows NT® permissions or UNIX permissions.
– The default file security style is the style that was most recently used to set permissions on that file.
Security Style Interaction For a Windows user to access: An NTFS security style volume or qtree, the Windows user is tested against NTFS security style ACLs A UNIX security style volume or qtree, the Windows user must be mapped to a UNIX UID (and an associated UNIX GID)
SECURITY STYLE INTERACTION User mapping between UNIX users and NTFS users always occurs, whether the chosen security style is NTFS or multiprotocol. Even when a Windows client is accessing data through an NTFS qtree on a storage system with NTFS security style, a user mapping occurs for the Windows client user.
WINDOWS-TO-UNIX RESOLUTION When a CIFS user attempts to access a volume or qtree that has UNIX permissions, the user is authenticated with the method by which the CIFS server has previously been configured. If the storage system has been configured for domain authentication, the storage system passes the credentials to the domain controller for proper authentication. The credentials are either authenticated or not. If the storage system has been configured for workgroup authentication, then the user is authenticated by the /etc/registry.
WINDOWS-TO-UNIX RESOLUTION A Windows authenticated user is looked up in the /etc/usermap.cfg file. Three possibilities are available. The user might be mapped to a UNIX user, not mapped at all, or mapped to an empty string. If the user is mapped, then the mapped UNIX user is passed to verification. If the user is not mapped, then the authenticated CIFS user’s name, with all lowercase letters, is tried for UNIX verification. If the user is mapped to an empty string (“ ”), then the user is invalid. Verification The storage system attempts to verify a UNIX user by employing the mechanism that is specified in the /etc/nsswitch.conf file. The mechanism could be using /etc/passwd, using Network Information Service (NIS), using Lightweight Directory Access Protocol (LDAP), or using NIS and LDAP. If verification is unsuccessful, then the option wafl.default_unix_user is tried as a generic user. A typical default UNIX user is “pcuser” (UID=65534 and GID=65534), which is stored in the /etc/passwd file by default. If verification is successful, the CIFS user is properly associated with a UNIX user. If verification is unsuccessful, the CIFS user is invalid. Windows Administrator The Windows Administrator user is a special case. The Administrator user is mapped to the UNIX name “root” (UID=0 and GID=0) if the wafl.nt_admin_priv_map_to_root option is set to “on.”
WINDOWS-TO-UNIX RESOLUTION Unauthenticated or invalid users still might be allowed access to the resource if the option cifs.guest_account is configured. The guest account is then passed to the storage system for UNIX user verification as specified by the /etc/nsswitch.conf file.
UNIX Access to Files For a UNIX user to access: A UNIX security style volume or qtree, the UNIX user is tested against the UNIX file permissions An NTFS security style volume or qtree, the UNIX user and group must be mapped to a Windows user (and associated Windows groups)
UNIX ACCESS TO FILES This section explains the default mechanism (/etc/usermap.cfg) for mapping UNIX user names to Windows users. This mapping can also be accomplished by using LDAP, Active Directory, or NIS servers, as described in http://www.netapp.com/library/tr/3458.pdf.
UNIX-TO-WINDOWS RESOLUTION For the sake of this example, assume that the version of NFS is NFS v2 or v3. When an NFS user attempts to access a volume or qtree that has NTFS ACLs, the user’s UID is passed from the client to the storage system. The storage system attempts to resolve the user’s name by the normal UNIX methods, as defined in /etc/nsswitch.conf.
UNIX-TO-WINDOWS RESOLUTION A valid user name is then looked up in the /etc/usermap.cfg file. Three possibilities are available. The user might be mapped to a Windows user, not mapped at all, or mapped to an empty string. If the user is mapped, then the mapped Windows user is passed to verification. If the user is not mapped, then the UNIX user’s name is tried for CIFS verification. If the user is mapped to an empty string (“ ”), then the user is automatically invalid. Verification The storage system attempts to verify a Windows user by using the mechanism as configured by the CIFS server. The mechanism is either using the local users that are defined in the /etc/registry or passing verification to a domain controller. If verification is unsuccessful, then the option wafl.default_nt_user is tried as a generic user. There is no default setting for this value, so it must be configured. If verification is successful, the NFS user is properly associated with a Windows user. If verification is unsuccessful, the NFS user is invalid.
SECURITY STYLES A CIFS user can access the file without disrupting UNIX permissions by using one of these techniques:
For versions of the Data ONTAP operating system earlier than version 7.2, the CIFS user must have the SecureShare® multiprotocol file-locking system, an add-on available from the NetApp support site. For the Data ONTAP 7.2 operating system and later, the CIFS user can manage security directly with the cifs.preserve_unix_security option. For more information, see the CIFS Administration on Data ONTAP course.
Setting Security Styles To set a security style for a volume: system> qtree security /vol/vol0 ntfs
To set a security style for a qtree: system> qtree security /vol/vol0/q1 ntfs
Changing a security style resets all security permissions within a volume or qtree to the default. – NTFS: Everyone has read-write access – UNIX: user/group/world have rwx (drwxrwxrwx)
Securing NAS Data with Third-Party Tools Data ONTAP operating system can integrate with third-party data to secure NAS data. Virus protection: – Provides on-access virus scanning of files on a storage system – Requires a virus-scanning Windows server running compliant antivirus applications – May require a file to be scanned before a CIFS client can open it
SECURING NAS DATA WITH THIRD-PARTY TOOLS CIFS virus protection is a feature of the Data ONTAP operating system that enables a virus-scanning Windows server running compliant antivirus applications to provide on-access virus scanning of files on a storage system. On-access virus scanning means that a file is scanned before a CIFS client is allowed to open it. For more information about virus scanning, please see Technical Report 3107 entitled Antivirus Scanning Best Practices Guide at the NetApp website.
PURPOSE OF QUOTAS Quotas are important tools for managing the use of disk space on your storage system. A quota is a limit that is set to control or monitor the number of files or the amount of disk space that an individual or group can consume. Quotas enable you to manage and track the use of disk space by clients on your system. A quota is used to:
Limit the amount of disk space or the number of files that can be used Track the amount of disk space or the number of files that are used, without imposing a limit Warn users when disk space or file usage is high
QUOTA TYPE The quota limit type can be a: User: Indicated by a UNIX or Windows user ID Group: Indicated by UNIX GIDs Qtree: Represented by the qtree path name User quotas, group quotas, and qtree quotas are stored in the /etc/quotas file. You can edit this file at any time. In both NFS and CIFS environments, quotas are based on a Windows user name, UNIX UID, or GID. The CIFS system administrator must maintain: The /etc/passwd file for CIFS users to obtain UIDs (if those users are going to create UNIX files) The /etc/group file for CIFS users to obtain GIDs or use an NIS server to implement CIFS user quotas Qtree quotas do not require UIDs or GIDs. If you only implement qtree quotas, you do not have to maintain the /etc/passwd and /etc/group files (or NIS services).
QUOTA LIMITS Disk Column The Disk Space Hard Limit field specifies the maximum disk space that is allocated to the quota target. This hard limit cannot be exceeded. If the limit is reached, messages are sent to the user and the console, and SNMP traps are generated. Files Column The Files Hard Limit field specifies the maximum number of files that the quota target can use. To track usage of the number of files without imposing a quota, enter a blank or a dash (-) in this field. You can omit abbreviations (uppercase or lowercase) and you can enter an absolute value, such as 15000. NOTE: The value for the Files Hard Limit field must be on the same line in your quotas file as the value for the Disk field; otherwise, the Files field is ignored. Threshold Column The Threshold field specifies the limit at which write requests trigger messages to the console. If the threshold is exceeded, the write still succeeds, but a warning is logged to the console. The Threshold field uses the same format as the Disk field. Do not leave this field blank. The value that follows Files is always assigned to the Threshold field. If you do not want to specify a threshold limit, enter a dash (-) here. Soft Disk Column The Disk Space Soft Limit field specifies the disk space that can be used before a warning is issued. If this limit is exceeded, a message is logged to the console, and an SNMP trap is generated. When the soft disk limit returns to normal, another syslog message and SNMP trap are generated. The Disk Space Soft Limit field has the same format as the Disk Space Hard Limit field. If you do not want to specify a soft limit, enter a dash (-) or leave this field blank. NOTE: The Disk Space Soft Limit value must be on the same line as the value for the Disk Space Hard Limit field; otherwise, the soft disk limit is ignored. The sdisk limit is the NFS equivalent of a CIFS threshold.
Soft Files Column The Files Soft Limit field specifies the number of files that can be used before a warning is issued. If the soft limit is exceeded, a warning message is logged to the storage system console, and an SNMP trap is generated. When the soft files limit returns to normal, another syslog message and SNMP trap is generated. The Files Soft Limit field has the same format as the Files Hard Limit field. If you do not want to specify a soft files limit, enter a dash (-) or leave the field blank.
NETAPP SYSTEM MANAGER: RESIZE QUOTAS The quota resize command adjusts currently active quotas to reflect changes in the /etc/quotas file. For example, if you edit an entry in /etc/quotas to increase a quota, executing the quota resize command causes the change to take effect. To view active quotas, create a quota report before and after the quota resize. Use quota resize only when quotas are already set for the volume. The quota resize command implements additions and changes to the /etc/quotas file.
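For example, after editing an entry in /etc/quotas for vol1 (the volume name is illustrative), the change might be applied and verified like this:
system> quota report
system> quota resize vol1
system> quota report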
Quota Messages “Disk quota exceeded” results from requests that cause a user or group to exceed an applicable quota. “Out of disk space” results from requests that cause the number of blocks or files in a qtree to exceed the qtree limit. Root user or Windows Administrator: – Group quotas do not apply – Tree quotas do apply
QUOTA MESSAGES Quotas are set to warn you that limits are being approached, enabling you to act before users are affected. For all quota types, the Data ONTAP operating system sends console messages when the quota is exceeded and when it returns to normal. SNMP traps for quota events are also initiated. Additional messages are sent to the client when hard quota limits are exceeded. NOTE: Threshold quotas in CIFS are the same as soft quotas in NFS. Quota Error Messages When receiving a write request, the Data ONTAP operating system checks to see if the file to be written is in a qtree. If the write would exceed the tree quota, this error message is sent to the console: tid tree_ID: tree quota exceeded on volume vol_name If the qtree is not full but the write would cause either the user or group quota to be exceeded, the Data ONTAP operating system logs one of these errors: uid user_ID: disk quota exceeded on volume vol_name gid group_ID: disk quota exceeded on volume vol_name Error Messages Received by Clients When hard quota limits are violated, the Data ONTAP operating system returns an out-of-disk-space error to the NFS write request or a disk-full error to the CIFS write request.
Quota Rules New users or groups that are created after the default quota is in effect have the default value. Users or groups that do not have a specific quota defined have the default value. Configurable fields in the /etc/quotas file are: Target, Type, Disk, Files, Threshold, Soft Disk, and Soft Files
QUOTA RULES Target Column The Target column identifies what the quota is applied against. In this example, there are multiple equivalent ways in which you can specify the target. These entries provide target UIDs (for users) or GIDs (for groups) of the local storage system. The ID numbers must not be 0. The system checks quotas every time it receives a write request, so it is important to use a target that won’t change over time, unless you account for the change in the quotas file. NOTE: Do not use the backslash (\) or an “at” sign (@) in UNIX quota targets. The Data ONTAP operating system interprets these characters as part of Windows names. Type Column You can create a quota based upon the following types: user, user@volume_path, user@tree_path, group, group@volume_path, group@tree_path, or tree (short for qtree). Default Quotas You can create a default quota (*) for users, groups, or qtrees. A default quota applies to quota targets that are not explicitly referenced in the /etc/quotas file. Overriding Default Quotas If you do not want the Data ONTAP operating system to apply a default quota to a particular target, you can create an entry in the /etc/quotas file for that target so that the explicit quota overrides the default quota.
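As a hedged illustration of these fields, the following sketch shows what /etc/quotas entries might look like; the user, group, path, and limit values are examples only:
#Target             type                 disk   files  thold  sdisk  sfiles
*                   user@/vol/vol1       50M    15K    45M    -      -
jdoe                user@/vol/vol1       100M   -      90M    -      -
writers             group@/vol/vol1      750M   85K    -      -      -
/vol/vol1/proj1     tree                 500M   75K    -      -      -
Here the first entry is a default user quota for vol1, and the jdoe entry is an explicit quota that overrides the default for that user.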
QUOTA REPORT The quota report command prints the current file and space consumption for each user, group, and qtree. Using a path argument, it displays information about all quotas that apply to the files in the path. Space consumption and disk limits are rounded up and reported in multiples of 4 KB. In the example above, the quota report command is used with the -u option. For targets with multiple IDs, this report shows the first ID on the first line of each report entry. Other IDs are shown on separate lines with one ID per line. Each ID is followed by its original quota specifier, if any. The default is to display one ID per target. These options are available with the quota report command:
The -s option shows soft and hard limit values for each user, group, and qtree. The -u option shows the first ID on the first line of each report entry for targets with multiple IDs. Other IDs are shown on separate lines with one ID per line. Each ID is followed by its original quota specifier, if any. The default is to display one ID per target. The -x option shows all IDs (separated by commas) on the first line of each report entry for targets with multiple IDs. The report also shows the threshold column, and the columns are tab-delimited. The -t option prints the threshold of the quota entry. If this option is omitted, the warning threshold is not included.
Quota Information Beginning with the Data ONTAP 7.3 operating system, the AutoSupport™ tool message contains this quota information: – A collection of quota statistics, including a set of new counters that collect quota statistics – The quota configuration file (/etc/quotas) – The user-mapping file (/etc/usermap.cfg)
Quota information is included in the AutoSupport tool message as an attachment.
QUOTA INFORMATION Starting with the Data ONTAP 7.3 operating system, the message that is generated by the Data ONTAP operating system’s AutoSupport™ tool includes quota information, which enables NetApp to improve its response to quota-related questions. Before the release of the Data ONTAP 7.3 operating system, if you had a quota-related question, it was necessary for you to gather the appropriate information and send it to NetApp technical support, which sometimes created a delay of several days. With the inclusion of quota information in the latest version of the AutoSupport message, this information is automatically sent to NetApp technical support. Quota information in AutoSupport also enables NetApp to store quota statistics that are useful for analysis. Quota information is included in the AutoSupport message as an attachment. The attachment name appears in the format YYYYMMDDHHMM.N.quotas.gz. For privacy protection, the contents of the quota files are encrypted. The AutoSupport attachment contains three types of quota information:
A collection of quota statistics The quota configuration file (/etc/quotas) The user-mapping file (/etc/usermap.cfg)
Qtree Statistics To display the number of NFS and CIFS operations resulting from access to files in a qtree: system> qtree stats (The example output shows per-qtree NFS and CIFS operation counts for the volume NASvol.)
QTREE STATISTICS To help you determine which qtrees are incurring the most traffic, the qtree stats command enables you to display statistics about user accesses to files in the qtrees on your system. This information can identify traffic patterns to help with qtree-based load balancing. The storage system maintains counters for each qtree in each of the storage system’s volumes. These counters are not persistent. To reset the qtree counters, use the -z option. The values that are displayed by the qtree stats command correspond to the operations on the qtrees that have occurred since the most recent occurrence of one of these actions:
Volume containing the qtrees was created Volume containing the qtrees was brought online on the storage system (either through a vol online command or a reboot) Counters were reset If you do not specify a volume name in the qtree stats command, the statistics for all qtrees on the storage system are displayed. Otherwise, statistics for qtrees in the named volume are displayed. Similarly, if you do not specify a volume name with the -z option, the counters are reset on all qtrees in all volumes. The qtree stats command displays the number of NFS and CIFS accesses on the designated qtrees since the counters were last reset. The qtree stats counters are reset when one of these actions occurs:
System is booted Volume containing the qtree is brought online Counters are explicitly reset using the qtree stats -z command
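A minimal sketch of the commands described above; the volume name NASvol is taken from the example output, and the qtree names and counter values are illustrative only:
system> qtree stats NASvol
Volume   Tree     NFS ops  CIFS ops
-------- -------- -------- --------
NASvol   proj1      12844      2210
NASvol   proj2        731        98
system> qtree stats -z NASvol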
FPolicy File-Screening Policy FPolicy file-screening policy: – Enables administrators to create file policies and associate them with file operations that are executed with CIFS and NFS v4 – Example: Restrict .jpg and .mpg files from being stored on a storage system
An FPolicy file-screening policy can be enabled in two ways: – Using third-party file-screening software (partners can be located at www.netapp.com/partners) – Using native file blocking within the Data ONTAP operating system
FPOLICY FILE-SCREENING POLICY A file-screening policy determines how the storage system handles requests from individual client systems for operations such as open, rename, create, and delete. You use a file-screening policy to specify files or directories and the restrictions to be placed on them. Upon receiving a file operation request (such as open, write, create, or rename), the Data ONTAP operating system checks its file-screening policies before permitting the operation.
Third-Party File-Screening Process 1. A client requests a file. 2. The storage system consults the screening server. 3. The screening server responds as follows: – If a file is OK, the storage system allows access. – If a file is denied, the storage system denies access. (The figure shows the client, the storage system, and the file-screening server.)
Operations that can be controlled by policy are: – Creation of a new file – Opening an existing file – Renaming a file
HIERARCHICAL STORAGE MANAGEMENT (HSM) Hierarchical storage management (HSM) automates the migration of data among storage devices, usually based on inactivity or time. HSM is based on the concept of a cost-performance storage model. By accepting lower access performance (higher latency), you can store data less expensively. By automatically moving less frequently accessed objects to less expensive hardware, you can achieve a better overall cost-performance ratio. In this example, a policy is created to migrate files that are more than six months old from primary storage systems to secondary storage. When the policy runs, the file named my.docx moves from the primary storage system to the secondary storage system. A stub or sparse file remains in the directory structure as a pointer that is still visible to clients.
Hierarchical Storage Management When a client requests a file, Data ONTAP 7.3 and later operating systems redirect the request. HSM Server (FPolicy Server)
HIERARCHICAL STORAGE MANAGEMENT When a read request comes in, the Data ONTAP operating system forwards the request to the HSM server, which retrieves the file from the secondary storage system.
Configuring Native-Blocking FPolicy Turn the feature on: system> options fpolicy.enable on
Create a file policy: system> fpolicy create <policy_name> screen
NOTE: Screen is the only supported policy type.
Add extensions and options to the file policy or remove extensions and options from the file policy. Set up a file policy monitor. Enable the file policy: system> fpolicy enable <policy_name>
BLOCKING MP3S EXAMPLE NOTE: This is intended as a high-level discussion. The corresponding labs have detailed instructions on how to implement this example.
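As a hedged, high-level sketch only (not a definitive procedure), a native-blocking policy for .mp3 files might be configured along these lines; the policy name mp3blocker and the monitored operations are examples:
system> options fpolicy.enable on
system> fpolicy create mp3blocker screen
system> fpolicy extensions include set mp3blocker mp3
system> fpolicy options mp3blocker required on
system> fpolicy monitor set mp3blocker -p cifs,nfs create,rename
system> fpolicy enable mp3blocker
See the corresponding lab for the exact steps for your release.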
Module Summary In this module, you should have learned to: List some security methods for protecting data Explain and configure a security style setting for a volume and a qtree Describe methods of tracking and restricting storage usage Explain, create, and manage quotas Explain and configure the Data ONTAP FPolicy file-screening policy
Module Objectives By the end of this module, you should be able to: Explain the purpose of a SAN Identify supported SAN configurations Distinguish between Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI protocols Define a LUN and explain LUN attributes Use the lun setup command and NetApp® System Manager to create iSCSI-attached LUNs Access and manage a LUN from a Windows® host Define SnapDrive® data management software and its features
UNIFIED STORAGE SAN is a block-based storage system that makes data available over the network, using Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI protocols. Network-attached storage (NAS) is a file-based storage system that makes data available over the network, using NFS and CIFS protocols. The NetApp SAN and Unified Storage Architecture provides an outstanding level of investment protection and flexibility. The FAS system at the bottom of this image implies one “box.” However, the actual storage environment includes small and large FAS systems.
SAN PROTOCOLS Network access to LUNs on a NetApp storage system can be through either an FC network or a TCP/IP-based network. Both of these protocols carry encapsulated SCSI commands as the data transport mechanism.
INITIATOR AND TARGET Initiators, including Windows and UNIX®-type hosts, are consumers or clients within a SCSI relationship. Targets, including NetApp controllers and storage arrays, present data as logical units and are the servers within a SCSI relationship.
SAN TYPES LUNs on a NetApp storage system can be accessed through either an FC SAN fabric, using the FC protocol, or an Ethernet network, using the FCoE or iSCSI protocol. In all cases, the transport portals (FC, FCoE, or iSCSI) carry encapsulated SCSI commands as the data transport mechanism.
PORTS Data is communicated over ports. In an Ethernet SAN, the data is communicated by means of Ethernet ports. In an FC SAN, the data is communicated over FC ports.
NODE AND PORT NAMES IN FC In FC SAN, a worldwide node name (WWNN) describes a machine, and a worldwide port name (WWPN) describes a physical port that is attached to that machine. The FC specification for the naming of nodes and the ports on those nodes can be quite complicated. Each device is given a globally unique WWNN and an associated WWPN for each port on the node. WWNNs and WWPNs are 64-bit addresses made up of 16 hexadecimal digits grouped together in pairs, with a colon separating each pair (for example, 21:00:00:2b:34:26:a6:54). The first number in the address defines what the other numbers in the address represent, according to the FC specification. The first number is usually a 1, 2, or 5. In the example of QLogic® initiator HBAs, the first number is usually a 2. For Emulex® initiator HBAs, the first number is usually a 1. A NetApp storage system is assigned a 5.
NODES AND PORTALS IN ISCSI In IP SAN, the node name describes a machine, and the portal describes a physical port. Each iSCSI node must have a node name. There are two possible node name formats. IQN-Type Designator The format of this node name is conventionally: iqn.yyyy-mm.backward_naming_authority:unique_device_name This is the most popular node name format and is the default that is used by a NetApp storage system. These are the components of the logical name: Type designator, iqn, followed by a period (.) The date when the naming authority acquired the domain name, followed by a period The name of the naming authority, optionally followed by a colon (:) A unique device name EUI-Type Designator The format of this node name is: eui.nnnnnnnnnnnnnnnn These are the components of the logical name:
The type designator itself, “eui,” followed by a period (.) Sixteen hexadecimal digits Example: eui.123456789ABCDEF0
Set Up a SAN To set up a SAN: 1. License the appropriate SAN protocol on the storage system. 2. Create a volume or qtree where the LUN will reside. 3. Verify that the SAN protocol service is on. 4. Configure the host initiator. 5. Create the LUN and igroup, and then associate the igroup to the LUN.
SET UP A SAN To configure a SAN, you must ensure that these requirements are implemented:
1. The initiator on a host must discover the target on a storage system.
2. The initiator binds to the target. The bindings can optionally be persisted across reboots.
3. On the storage system, initiators that are allowed access are placed within an igroup.
4. A LUN must be created on the storage system.
5. A LUN must be mapped to an igroup containing the initiator.
6. The initiator operating system then finds the virtual disk or LUN on the host.
7. The host operating system must generally prepare the LUN by labeling, formatting, and mounting the LUN.
Managing FCP or iSCSI After licensing, the FCP or iSCSI service can be activated. To manage the FCP or iSCSI protocols, use the command-line interface (CLI): – FCP: system> fcp [subcommand] – iSCSI: system> iscsi [subcommand]
CONFIGURING THE INITIATOR This module focuses on Windows platforms using the iSCSI Software Initiator from Microsoft®. For more information about configuring iSCSI and FC LUNs on other platforms, see the SAN Fundamentals on Data ONTAP Web-based course and the SAN Implementation Workshop instructor-led course.
ISCSI SOFTWARE INITIATOR: FAVORITE TARGETS In the Connect to Target window, if you select “Add this connection to the list of Favorite Targets,” then the connection appears on the Favorite Targets tab.
CREATING LUNS You can create a LUN by using one of the following methods:
Use the lun create command on the storage system.
– This command only creates a LUN. When using this command, you must complete the following additional configuration steps (a sketch of these commands follows this list):
– Create initiator groups by using the igroup create command.
– Map the LUN to an initiator group by using the lun map command.
– Add a portset (FC only).
Use the lun setup command on the storage system. This command is a wizard that walks you through creating and mapping the LUN and igroup. Use NetApp System Manager on a client host. Use SnapDrive data management software on a client host.
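The following is a minimal sketch of the lun create path described in the first method above; the LUN path, size, igroup name, and initiator node name are reused from the examples in this module and are illustrative only:
system> lun create -s 12g -t windows /vol/winvol/lun0
system> igroup create -i -t windows salesigroup iqn.1991-05.com.microsoft:slu2-win.edsvcs.netapp.com
system> lun map /vol/winvol/lun0 salesigroup 0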
The lun setup Command LUN creation, optionally igroup creation, and igroup and LUN mapping can be accomplished with a single command: lun setup
This is a wizard-like command that prompts the user for relevant information. The result of the command is a newly created LUN that is mapped to a new or existing igroup.
Creating a LUN with lun setup system> lun setup This setup will take you through the steps needed to create LUNs and to make them accessible by initiators. You can type ^C (Control-C) at any time to abort the setup and no unconfirmed changes will be made to the system. Do you want to create a LUN? [y]: y
lun setup: Initiator Group Creation The LUN will be accessible to an initiator group. You can use an existing group name, or supply a new name to create a new initiator group. Enter '?' to see existing initiator group names. Name of initiator group []: ? No existing initiator groups. Name of initiator group []: salesigroup
Type of initiator group salesigroup (FCP/iSCSI) [FCP]: iSCSI
lun setup: LUN Masking Enter comma separated portnames: iqn.1991-05.com.microsoft:slu2-win.edsvcs.netapp.com Enter comma separated portnames:
The initiator group has an associated OS type. The following are currently supported: solaris, windows, hpux, aix, linux, netware or vmware. OS type of initiator group "salesigroup" [windows]: windows
lun setup: Summary and Confirmation LUN Path : /vol/winvol/tree1/lun0 OS Type : windows Size : 12.0g (12889013760) Comment : Windows LUN Initiator Group : salesigroup Initiator Group Type : iSCSI Initiator Group Members : iqn.1991-05.com.microsoft:slu2-win.edsvcs.netapp.com Mapped to LUN-ID : 0 Do you want to accept this configuration? [y]: y
SCANNING FOR A NEW LUN After creating a LUN with the lun setup command or with NetApp System Manager, use Windows Disk Management on the host to prepare the LUN for use. The new LUN should be visible as a local disk. If it is not, click the Action button in the toolbar, and then click Rescan Disks.
PROVISIONING A NEW LUN To open the New Simple Volume Wizard, right-click the bar that represents the unallocated disk space, and then select New Simple Volume. Or, from the Action drop-down menu in the Computer Management window, you can click All Tasks > New Simple Volume.
VOLUME SIZE AND MOUNT OPTIONS Choose the partition size and drive letter. Accept the default drive assignment or use the drop-down list to select a different drive.
FORMAT AND SUMMARY PAGES Partition the drive using the settings shown, but change the Volume Label to an appropriate Windows volume name that represents the LUN you are creating.
SAN Management NetApp provides a number of SAN management techniques to simplify block storage administration. This module focuses on: – SnapDrive data management software – NetApp DataMotion® for Volumes data migration software – NAS and SAN traffic over the same connection with NetApp Unified Connect
SNAPDRIVE SnapDrive is server management software for Windows 2000 Server, Windows Server 2003, and Windows Server 2008 systems. SnapDrive provides virtual-disk and Snapshot management on the client side. Use SnapDrive to create FC or iSCSI LUNs on a Windows host. SnapDrive includes three main components: 1. Windows service 2. Microsoft Management Console (MMC) plug-in 3. CLI SnapDrive includes the same features as the lun setup command on the storage system, but it can also add LUNs to the Windows host and integrate the use of LUNs into other NetApp applications, such as SnapManager® management software.
DATAMOTION FOR VOLUMES SOFTWARE NetApp DataMotion® for Volumes is a feature for FAS and V-Series systems that was introduced in the Data ONTAP 8.0.1 operating system to enable nondisruptive movement of volumes from one aggregate to another. The DataMotion for Volumes feature is currently limited to 7-Mode volumes with LUNs that use block storage networking protocols, including FC, FCoE, and iSCSI. NetApp DataMotion for Volumes was referenced internally as nondisruptive volume migration (NDVM). It is an extension of the NetApp DataMotion suite of offerings and the data mobility component of the NetApp cloud initiative, highlighting the flexibility of NetApp solutions in the dynamic data center. The value of NetApp DataMotion for Volumes is to support nondisruptive operations by enabling a volume to be moved from one aggregate to another, without disruption to the host application, to facilitate:
Load balancing: address hot spots
Capacity management: capacity spillover
Hardware servicing or upgrades
Performance optimization between classes of storage media (FC, SAS, SSD, SATA)

                               NetApp DataMotion (for vFiler)       NetApp DataMotion for Volumes
Protocols supported            NFS, iSCSI                           iSCSI, FCP, FCoE
Moves data                     Between two controllers in two       Between aggregates within the
                               different HA pairs                   same controller
Supported Data ONTAP version   7.3.3 and later only; no 8.0.x       8.0.1 and later
Management                     Provisioning Manager only            CLI only
vFiler                         Data must be in a vFiler unit        Data cannot be in a vFiler unit
DATAMOTION FOR VOLUMES DETAILS By default, a nondisruptive volume move (NDVM) automatically enters cutover when it determines that the destination volume can be synchronized with the source volume in fewer than 60 seconds. If the volume move is unable to complete automatic cutover in the specified number of cutover attempts, you can initiate manual cutover. Administrators have an option to disable this automatic cutover from NDVM and use a manual cutover with the -m switch on the vol move start command. With the manual cutover option, NDVM simply keeps creating SnapMirror updates from the source to the destination volume, reporting the amount of data that is transferred, along with the time to transfer the data, but without computing the delta blocks between the two volumes. Administrators must carefully examine the EMS logs and note the delta between the source and destination volumes before initiating a manually triggered cutover for NDVM. NOTE: The source volume cannot be exported in the /etc/exports file for the move operation to succeed.
UNIFIED CONNECT In the Data ONTAP 8.0.1 7-Mode operating system, NetApp introduces Unified Connect, which supports both FCoE and IP-based traffic over the same 10-Gb connection. Unified Connect enables standard Fibre Channel (FC) host bus adapter (HBA) traffic from the host initiator to be mapped to the unified target adapter (UTA) on the storage system. Unified Connect supports a complete end-to-end connection between a converged network adapter (CNA) and the UTA. For more information, please see the NetApp Unified Connect Technical Overview and Implementation Web-based course.
Other SAN Administration Resources For more information about SAN administration, see the SAN Implementation Workshop instructor-led course. This advanced course covers: – Configuring Linux hosts for FCP and iSCSI – Configuring Windows hosts for FCP and iSCSI – Creating FCP and iSCSI LUNs from the CLI – Creating FCP and iSCSI LUNs from SnapDrive with Windows – SAN in a high-availability controller configuration – SAN performance tuning – SAN troubleshooting
OTHER SAN ADMINISTRATION RESOURCES SnapDrive for Windows provides an interface for Windows administrators to interact with LUNs directly. SnapDrive also:
Enables online storage configuration, LUN expansion, and streamlined management Integrates Snapshot technology to create point-in-time images of data that is stored in LUNs
Module Summary In this module, you should have learned to: Explain the purpose of a SAN Identify ed SAN configurations Distinguish between FC, FCoE, and iSCSI protocols Define a LUN and explain LUN attributes Use the lun setup command and NetApp System Manager to create iSCSI-attached LUNs Access and manage a LUN from a Windows host Define SnapDrive data management software and its features
Check Your Understanding List the SAN protocols that are supported by NetApp. What are the functions of a LUN? What are the methods of creating a LUN? Why would you use SnapDrive in a SAN environment?
Module Objectives By the end of this module, you should be able to: Describe the function of Snapshot® copies Explain the benefits of Snapshot copies Identify and execute Snapshot commands Create and delete Snapshot copies Configure and modify Snapshot options Explain the importance of the .snapshot directory Describe how Snapshot technology allocates disk space for volumes and aggregates Schedule Snapshot copies Configure and manage the Snapshot copy reserve
Snapshot Technology A Snapshot copy is a read-only image of the active file system at a point in time. The benefits of Snapshot technology are: – Nearly instantaneous application data backups – Fast recovery of data lost due to: Accidental data deletion Accidental data corruption
SNAPSHOT TECHNOLOGY Snapshot technology is a key element in the implementation of the WAFL® (Write Anywhere File Layout) file system: A Snapshot copy is a read-only, space-efficient, point-in-time image of data in a volume or aggregate. A Snapshot copy is only a “picture” of the file system, and it does not contain any data file content. Snapshot copies are used for backup and error recovery. The Data ONTAP® operating system automatically creates and deletes Snapshot copies of data in volumes to support commands that are related to Snapshot technology.
TAKING A SNAPSHOT COPY Before a Snapshot copy is created, there must be a file system tree that points to data blocks, which contain content. When the Snapshot copy is created, the file structure metadata is saved. The Snapshot copy points to the same data blocks as the file structure metadata that existed when the Snapshot copy was created. There is no significant impact on disk space when a Snapshot copy is created. Because the file structure takes up little space, and no data blocks must be copied to disk, a new Snapshot copy consumes almost no additional disk space. In this case, the phrase “consumes no space” really means no appreciable space. The so-called “top-level root inode,” which contains metadata that is necessary to define the Snapshot copy, is 4 KB.
CHANGING DATA Snapshot copies begin to use space when data is deleted or modified. The WAFL file system writes the new data to a new block (C’) on the disk and changes the root structure for the active file system to point to the new block. Meanwhile, the Snapshot copy still references the original block C. Any time that a Snapshot copy references a data block, the block remains unavailable for other uses, which means that Snapshot copies start to consume disk space only when the file system changes after a Snapshot copy is created.
Snapshot Copies and Inodes A Snapshot copy saves the current copy of the root inode of a volume. Each volume can contain up to 255 Snapshot copies. The inodes of Snapshot copies are read-only. When the Snapshot inode is created: – The Snapshot copy points to exactly the same disk blocks as the root inode. – New Snapshot copies consume only the space that is required for the inode itself.
SNAPSHOT COPIES AND INODES A Snapshot copy is a frozen, read-only image of a traditional volume, a FlexVol® volume, or an aggregate that reflects the state of the file system at the time that the Snapshot copy was created. Snapshot copies are your first line of defense for backing up and restoring data. You can configure the Snapshot copy schedule.
Inodes An inode is a 192-byte data structure that is used to represent file system objects, such as files and directories. An inode describes a file’s attributes, including this information:
– Type of file (regular file, directory, link, and so on)
– Size
– Owner, group, permissions
– Pointer to xinode access control lists (ACLs)
– Complete file data if the file is 64 bytes or less
– Pointers to data blocks
INODES WAFL inodes are similar to Berkeley FFS (Fast File System) inodes. Veritas™ and Microsoft® file systems are based on the Berkeley FFS, which forces writes to preallocated locations. The primary difference is in the way that the WAFL file system writes contiguous data and metadata blocks to the next available block instead of to predefined locations. The most important metadata file is the root inode, which contains the inodes that describe all of the other files in the file system. The root inode has a fixed disk location.
MANAGING INODES For file sizes between 64 GB and 8 TB, the single-indirect blocks in Level 3 inodes become double-indirect blocks. These double-indirect blocks reference 1024 single-indirect blocks, which then reference up to 1024 4-KB data blocks. df -i The df -i command displays the number of inodes in a volume. For more information about this command, see the manual pages (using the man command). maxfiles The maxfiles command increases the number of inodes that are designated in a volume. For more information about this command, see the manual pages.
Snapshot Copy Reserve Administrators can make Snapshot copies of: – Aggregates The aggregate default for Snapshot copy reserve is 5% of the aggregate. Restoring an aggregate Snapshot copy restores all volumes within that aggregate.
– Volumes The volume default for Snapshot copy reserve is 20% of the volume. Administrators can restore the entire volume or one or more files.
To change the amount of Snapshot copy reserve:
system> snap reserve [ -A | -V ] [aggr or vol] [percent]
SNAPSHOT COPY RESERVE Volumes Snapshot copies for traditional and flexible volumes are stored in special subdirectories that can be made accessible to Windows® and UNIX® clients so that users can access and recover their own files without administrator assistance. The maximum number of Snapshot copies per volume is 255. Aggregates In an aggregate, 5% of space is reserved for Snapshot copies. In normal, day-to-day operations, aggregate Snapshot copies are not actively managed by a system administrator. For example, the Data ONTAP operating system automatically creates Snapshot copies of aggregates to support commands that are related to the SnapMirror software, which provides volume-level mirroring. NOTE: Even if the Snapshot copy reserve is 0%, you can still create Snapshot copies. If there is no Snapshot copy reserve, Snapshot copies take their blocks from the active file system.
Each aggregate has 5% of its space allocated for Snapshot copy reserve. Each flexible volume has 20% allocated for Snapshot copy reserve; the remaining 80% of the volume holds client data in the active file system.
The amount of space that is allocated for Snapshot copy reserve is adjustable. To use this space for data (not recommended), you must manually override the allocation that is reserved for Snapshot copies.
CLI: SNAPSHOT CREATION In the snap command, option -A is used for aggregates and option -V is used for volumes. If neither -A nor -V is specified, volume is the default. This table lists the commands that are used to create and manage Snapshot copies. If you omit the volume name from any of these commands, the command applies to the root volume.

EXAMPLE                                          RESULT
snap create engineering test                    Creates a Snapshot copy called "test" in the engineering volume.
snap list engineering                           Lists all available Snapshot copies in the engineering volume.
snap delete engineering test                    Deletes the Snapshot copy "test" in the engineering volume.
snap delete -a vol2                             Deletes all Snapshot copies in vol2.
snap rename engineering nightly.0 firstnight.0  Renames the Snapshot copy nightly.0 to firstnight.0 in the engineering volume.
snap reserve vol2 25                            Changes the Snapshot copy reserve to 25% on vol2.
snap sched vol2 0 2 6@8,12,16,20                Sets the automatic schedule on vol2: 0 weekly, 2 nightly, and 6 hourly Snapshot copies at 8 a.m., 12 p.m., 4 p.m., and 8 p.m.
SCHEDULING SNAPSHOT COPIES Set the nosnap option to on to disable automatic Snapshot creation. You can still create Snapshot copies manually at any time.
SNAPSHOT SCHEDULE The snap sched command sets a schedule to automatically create Snapshot copies and specifies how many of each type are stored. When the limit is reached, the oldest Snapshot copy for each interval is deleted and replaced by a new Snapshot copy. This example shows a default schedule, which specifies that Snapshot copies will be made at 8:00, 12:00, 16:00, and 20:00 (24-hour time), and that the two most recent daily Snapshot copies and the six most recent hourly Snapshot copies will be kept. Snapshot copies are like a picture of a volume. The only difference between a weekly Snapshot copy and a nightly or hourly copy is the time at which the Snapshot copy is created and any data that has changed between the Snapshot copies.
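For reference, this is a minimal sketch of how such a schedule might be viewed and set from the CLI; the volume name vol0 and the printed output line are illustrative:
system> snap sched vol0
Volume vol0: 0 2 6@8,12,16,20
system> snap sched vol0 0 2 6@8,12,16,20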
Recovering Data When you recover data, you have two options: – Copy the data from a Snapshot copy. – Use SnapRestore data recovery software.
To copy data from a Snapshot copy: – Locate the Snapshot copy. – Recover the copy from the Snapshot directory. To overwrite the data, copy to the original location. For a new version, copy to a new location.
RECOVERING DATA Using Snapshot Copies to Recover Data To recover data, you can: Restore a file from a Snapshot copy Use SnapRestore data recovery software (license required) To restore a file from a Snapshot copy: 1. Locate the Snapshot copy that contains the correct version of the file. 2. Restore the file from the .snapshot directory. – To overwrite existing data, copy to the original location. – To restore a writeable version, copy to a new location.
Snapshot Visibility to Clients Make the .snapshot directory invisible to clients, and turn off access to the .snapshot directory: system> vol options vol nosnapdir [on|off]
Make the ~snapshot directory visible to CIFS clients: system> options cifs.show_snapshot [on|off]
Hide the .snapshot directory from NFS clients: system> options nfs.hide_snapshot [on|off]
SNAPSHOT VISIBILITY TO CLIENTS This table lists the options that are available for controlling the creation of Snapshot copies and access to those copies and Snapshot directories on a volume:
Make the .snapshot directory invisible to clients and turn off access to the .snapshot directory. Setting the nosnapdir option to on disables access to the Snapshot directory that is present at client mountpoints and at the root of CIFS directories, and makes the Snapshot directories invisible. (NFS uses .snapshot for directories, and CIFS uses ~snapshot.) By default, the nosnapdir option is off (directories are visible). To make the ~snapshot directory visible to CIFS clients: 1. Turn the cifs.show_snapshot option on. 2. Turn the nosnapdir option off for each volume for which you want directories to be visible.
NOTE: You must also ensure that “Show Hidden Files and Folders” is enabled on your Windows system.
To make the .snapshot directory invisible in directory listings for NFS clients: 1. Turn the nfs.hide_snapshot option on. 2. Turn the nosnapdir option off for each volume for which you want the directories to remain accessible.
THE .SNAPSHOT DIRECTORY OF VOL0 The .snapshot directory is at the root of a volume. In this example, the directory structure is shown for an NFS client mounting vol0 to the mountpoint /mnt/system.
SNAPSHOT VIEW FROM A UNIX CLIENT Snapshot Directories Every volume in your file system contains a special Snapshot subdirectory that enables you to access earlier versions of the file system to recover lost or damaged files. Viewing Snapshot Copies from a UNIX Client The Snapshot subdirectory appears to NFS clients as .snapshot. The .snapshot directories are usually hidden and are not displayed in directory listings. To view a .snapshot directory: 1. On the storage appliance, log in as root and ensure that the nosnapdir option is set to off. 2. To view hidden directories, from the NFS mountpoint, enter the ls command with the -a (all) option. When client Snapshot directories are listed, the timestamp is usually the same for all directories. To find the actual date and time of each Snapshot copy, use the snap list command on the storage system.
SNAPSHOT VIEW FROM A WINDOWS CLIENT Snapshot directories are hidden on Windows clients. To view them, you must first configure the file manager to display hidden files, then navigate to the root of the CIFS share and find the directory folder. The subdirectory for Snapshot copies appears to CIFS clients as ~snapshot. Files that are displayed here are those files that are created automatically for specified intervals. Manually created Snapshot copies are also listed here. Restoring a File To restore a file from the ~snapshot directory, rename or move the original file, then copy the file from the ~snapshot directory to the original directory.
Integration with Snapshot Copies NetApp supports the integration of Snapshot copy files with CIFS by using Microsoft’s Previous Versions tool. To configure: system> options cifs.ms_snapshot_mode xp
SNAPRESTORE: CREATING A SNAPSHOT COPY The Data ONTAP® operating system preserves pointers to all of the disk blocks currently in use at the time that a Snapshot copy is created. In this illustration, the active file system contains a file named MYFILE, which is made of blocks A, B, and C. Based on a schedule or in response to the snap create command, the active file system is captured in a Snapshot copy named SNAP1. This Snapshot copy uses almost no additional disk space, because the version of MYFILE within this Snapshot copy includes the same blocks as the blocks for MYFILE in the active file system.
CHANGES TO A FILE AFTER A SNAPSHOT COPY When a file is changed, the Snapshot copy still points to the disk blocks where the file existed before it was modified. Changes are written to new disk blocks. Snapshot copies begin to use extra space only when corresponding files in the active file system are changed or deleted. In the illustration, a client modifies some data in MYFILE, causing the contents of block C to change. The WAFL® (Write Anywhere File Layout) file system uses a copy-on-write policy that writes the modified block to a new location on disk, creating block C’. The active file system version of MYFILE is now composed of disk blocks A, B, and C’, whereas the Snapshot copy SNAP1 still points to blocks A, B, and C. Deleting MYFILE in the active file system won’t free-up disk blocks A, B, and C, because SNAP1 still has those blocks reserved.
REVERTING AND RESTORING A FILE Follow these steps to restore a single file. 1. Verify that the volume is online and writable. 2. List the Snapshot copies in the volume. system> snap list volume_name 3. Notify network users that you are going to revert a file. 4. If you know the name of the Snapshot copy, initiate the reversion using this command: system> snap restore -t file -s snapshot_name path_and_file_name
– -t file indicates that a file SnapRestore is to be performed.
– path_and_file_name is the complete path to the name of the file to be reverted.
The Data ONTAP operating system displays a warning message and prompts you to confirm your decision to revert the file. Press Y to confirm that you want to revert the file. If you do not want to proceed, enter Ctrl-C or N for no. If the file already exists in the active file system, it will be overwritten by the version in the Snapshot copy.
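The following is a minimal sketch of a single-file revert; the volume name, Snapshot copy name, and file path are examples only:
system> snap list vol1
system> snap restore -t file -s hourly.2 /vol/vol1/home/users/my.docx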
SnapRestore Technology Versus Copying If a file is large (such as a database), you should revert it with SnapRestore technology rather than copying the file: – Copying requires double the storage and time – Reverting saves time and reinstates the data – NetApp® recommends SnapRestore technology over alternative technologies to ensure reliability
SNAPRESTORE TECHNOLOGY VERSUS COPYING Restoring large quantities of data takes a long time if you either copy files from a Snapshot copy or restore them from tape. Instead, use SnapRestore technology to save time.
FlexClone Volume Clones FlexClone technology: Enables the creation of multiple, instant dataset clones with no storage overhead Provides dramatic improvement for application test and development environments
FLEXCLONE VOLUME CLONES FlexClone volume clones provide an efficient way to copy data for: Manipulation Projection operations Upgrade testing The Data ONTAP operating system enables you to create a volume duplicate in which the original volume and clone volume share the same disk space for storing unchanged data.
How Volume Cloning Works (The figure shows an aggregate that contains a parent FlexVol volume, a Snapshot copy of the parent, and the resulting clone.) Volume cloning:
– Starts with a volume – Makes a Snapshot copy of the volume – Creates a clone (a new volume based on the Snapshot copy)
Modifications of the original volume are separate from modification of the cloned volume. Result: Independent volume copies are efficiently stored.
HOW VOLUME CLONING WORKS FlexClone volumes are managed similarly to regular FlexVol volumes, with a few key differences. Consider these important facts about FlexClone volumes:
FlexClone volumes are a point-in-time, writable copy of the parent volume. Changes made to the parent volume after the FlexClone volume is created are not reflected in the FlexClone volume. You can only clone FlexVol volumes. To create a copy of a traditional volume, you must use the vol copy command, which creates a distinct copy with its own storage. FlexClone volumes are fully functional volumes that are managed, as with the parent volume, by using the vol command. Likewise, FlexClone volumes can be cloned. FlexClone volumes always exist in the same aggregate as parent volumes. FlexClone volumes and parent volumes share the same disk space for common data. This means that creating a FlexClone volume is instantaneous and requires no additional disk space (until changes are made to the clone or parent). A FlexClone volume is created with the same space guarantee as the parent. You can sever the connection between the parent and the clone. This is called splitting the FlexClone volume. Splitting removes all restrictions on the parent volume and causes the FlexClone to use its own storage. IMPORTANT: Splitting a FlexClone volume from its parent volume deletes all existing Snapshot copies of the FlexClone volume and disables the creation of new Snapshot copies while the splitting operation is in progress.
Quotas that are applied to a parent volume are not automatically applied to the clone. When a FlexClone volume is created, existing LUNs in the parent volume are also present in the FlexClone volume, but these LUNs are unmapped and offline.
Flexible Volume Clone Syntax Use the vol clone create command to create a flexible volume clone: system> vol clone create <clone_name> [-s none|file|volume] -b <parent_vol> [<parent_snapshot>]
This is an example of a CLI entry that is used to create a flexible volume clone:
system> vol clone create clone1 -b flexvol1 snap1
system> vol status clone1
Volume   State    Status           Options
clone1   online   raid_dp, flex    guarantee=volume(disabled)
Clone, backed by volume 'flexvol1', snapshot 'snap1'
Containing aggregate: 'aggr1'
With a volume and a Snapshot copy of that volume, create a clone of that volume. Split volumes when most of the data on a volume is not shared. Replicate shared blocks in the background. Result: A new, permanent volume is created for forking (branching) project data.
SPLITTING VOLUMES Splitting a FlexClone volume from its parent removes any space optimizations that are currently employed by the FlexClone volume. After the split, both the FlexClone volume and the parent volume require the full space allocation that is specified by their space guarantees. After the split, the FlexClone volume becomes a normal FlexVol volume. When splitting clones, consider these important facts:
When you split a FlexClone volume from its parent, all existing Snapshot copies of the FlexClone volume are deleted. During the split operation, no new Snapshot copies of the FlexClone volume can be created. Because the clone-splitting operation is a copy operation that could take some time to complete, the Data ONTAP operating system provides the vol clone split stop and vol clone split status commands to stop clone-splitting or to check the status of a clone-splitting operation. The clone-splitting operation is executed in the background and does not interfere with data access to either the parent or the clone volume. If you take the FlexClone volume offline while clone-splitting is in progress, the operation is suspended. When you bring the FlexClone volume back online, the splitting operation resumes. After a FlexClone volume and its parent volume have been split, they cannot be rejoined.
THE VOL CLONE SPLIT COMMAND This example shows how to start a clone split and how to view its progress with vol clone split status:
vol clone split start clone1
Tue Oct 12 23:49:43 GMT [wafl.scan.start:info]: Starting volume clone split on volume clone1.
Clone volume 'clone1' will be split from its parent.
Monitor system log or use 'vol clone split status' for progress.
vol clone split status
Volume 'clone1', 117193 of 364077 inodes processed (32%)
18578 blocks scanned. 18472 blocks updated.
The snap list Command
system> snap list
Volume vol0
working...

  %/used       %/total    date          name
----------  ----------  ------------  --------
 0% ( 0%)    0% ( 0%)   Apr 20 12:00  hourly.0
17% (20%)    1% ( 1%)   Apr 20 10:00  hourly.1
33% (20%)    2% ( 1%)   Apr 20 08:00  hourly.2

%used: This column shows the relationship between accumulated Snapshot copies and the total disk space that is consumed by the active file system. Values in parentheses show the contribution of this individual Snapshot copy.
%total: This column shows the relationship between accumulated Snapshot copies and the total disk space that is consumed by the volume. Values in parentheses show the contribution of this individual Snapshot copy.
date: This column shows the date and time that the Snapshot copy was made. Time is indicated on the 24-hour clock, and in this example reflects the hours that are set in the automatic Snapshot schedule.
name: Scheduled Snapshot copies are automatically renumbered as new copies are made, so that the most recent copy is always "0." This numbering scheme ensures that the file with the highest number is always the oldest.
THE SNAP LIST COMMAND The snap list command displays a single line of information for each Snapshot copy in a volume. In the Snapshot List Example shown here, a list of Snapshot copies is displayed for the engineering volume. The list consists of these columns:
%used: Shows the relationship between accumulated Snapshot copies and the total disk space that is consumed by the active file system. Values in parentheses show the contribution of this individual Snapshot copy. %total: Shows the relationship between accumulated Snapshot copies and the total disk space that is consumed by the volume. Values in parentheses show the contribution of this individual Snapshot copy. date: Shows the date and time that the Snapshot copy was made. Time is indicated on the 24-hour clock and, in this example, reflects the hours that are set in the automatic Snapshot copy schedule. name: Lists the names of each of the saved Snapshot copies. Scheduled Snapshot copies are automatically renumbered as new ones are created, so that the most recent copy is always 0. This numbering scheme ensures that the file with the highest number (in this case, hourly.2) is always the oldest Snapshot copy. Examples: snap list The examples that follow demonstrate how the %used values in the snap list command output relate to the size of Snapshot copies, and how to determine which Snapshot copies to delete to reclaim the most space.
SNAP RECLAIMABLE AND SNAP DELTA The snap delta command displays the rate of change of data between Snapshot copies. snap delta [ vol_name [ snap ] [ snap ]] When you use the snap delta command without any arguments, it displays the rate of change of data between Snapshot copies for all volumes in the system (or, in the case of snap delta -A, for all aggregates). If you specify a volume, the rate of change of data is displayed for that particular volume. You can make the query more specific by specifying the beginning and ending Snapshot copies to display the rate of change between them for a specific volume. If you do not specify an ending Snapshot copy, the rate of change of data between the beginning Snapshot copy and the active file system is displayed. The rate-of-change information is displayed in two tables. In the first table, each row displays the differences between two successive Snapshot copies. The first row displays the differences between the youngest Snapshot copy in the volume and the active file system. Each subsequent row displays the difference between the next older Snapshot copy and the previous Snapshot copy, stepping through all of the Snapshot copies in the volume until the information for the oldest Snapshot copy is displayed. Each row displays the names of the two Snapshot copies that are being compared, the amount of data that changed between them, how long the first Snapshot copy that is listed has been in existence, and how fast the data changed between the two Snapshot copies. The second table shows the summarized rate of change for the volume between the oldest Snapshot copy and the active file system.
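A minimal sketch of the command forms described above; the volume, aggregate, and Snapshot copy names are examples only:
system> snap delta vol1
system> snap delta vol1 hourly.1 hourly.0
system> snap delta -A aggr1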
Snapshot Automatic Delete The autodelete option determines when (or if) Snapshot copies are automatically deleted. This option is set at the volume level. system> snap autodelete vol [on|off|show|reset]
If autodelete is enabled, then options are available: system> snap autodelete vol option value
Using snap autodelete: commitment A snap autodelete deletes Snapshot copies based upon the “commitment” criterion: the administrator can protect certain kinds of Snapshot copies from deletion. The commitment option may be either: – try: Deletes Snapshot copies that are not locked by data movers, recovery operations, or clones.
– disrupt: Deletes Snapshot copies that are locked by applications that move data (such as the SnapMirror application), dump data, or restore data (mirrors and dumps are aborted).
Using target_free_space A snap autodelete stops when the free space in the trigger criteria reaches a specified percentage. This percentage is controlled by the value of target_free_space. The default percentage is 20%.
Using snap autodelete: order The order in which Snapshot copies are deleted is specified by the delete_order option, which defines the age order. If the value is set to: – oldest_first Delete oldest Snapshot copies first.
Using snap autodelete: defer_delete The defer_delete option defines the order for deletion. If the value is set to: – scheduled Delete the scheduled Snapshot copies last (identified by the scheduled Snapshot naming convention).
– user_created Delete the user-created Snapshot copies last.
– prefix Delete the Snapshot copies with names that match the prefix string last.
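A minimal sketch tying these options together; the volume name vol1 and the option values are examples only:
system> snap autodelete vol1 on
system> snap autodelete vol1 commitment try
system> snap autodelete vol1 target_free_space 20
system> snap autodelete vol1 delete_order oldest_first
system> snap autodelete vol1 defer_delete user_created
system> snap autodelete vol1 show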
Module Summary In this module, you should have learned to: Describe the function of Snapshot copies Explain the benefits of Snapshot copies Identify and execute Snapshot commands Create and delete Snapshot copies Configure and modify Snapshot options Explain the importance of the .snapshot directory Describe how Snapshot technology allocates disk space for volumes and aggregates Schedule Snapshot copies Configure and manage the Snapshot copy reserve
Check Your Understanding What is a Snapshot copy? What are some of the NetApp products that are based on Snapshot technology? What are some of the Snapshot commands? What is the Snapshot schedule syntax?
COMPOUNDING EFFECT OF STORAGE EFFICIENCY NetApp® provides a number of storage efficiency technologies that can dramatically reduce storage costs and still provide the capacity that you need to meet ever-growing demand.
Full and Thin Provisioning of Volumes Administrators have the flexibility to manage their storage systems by allocating volumes as: Full provisioning of volumes (space guarantee) – Requires space to be reserved for the volume within the aggregate at the volume’s creation – Uses default allocation – Cannot overcommit an aggregate – Simplifies storage management
Space Reservations Within a volume, whether it uses full provisioning or thin provisioning, all files, whether a LUN or a NAS file, can: – Have space reservations (default for LUNs) Requires reserving space within the volume so that the entire file can be overwritten, even if blocks are retained by a Snapshot™ copy Writes to the file succeed Requires simpler management
– Not have space reservations (default for NAS file) Does not require reserving space within the volume Writes to the file might fail Requires active space monitoring
Space Guarantees Space guarantee is an attribute of a volume that is reserved in (or “set aside” from) the containing aggregate. Space guarantee parameters: – volume (default): Reserves the FlexVol® volume’s total size within its containing aggregate and allows for space-reserved or non-space-reserved files. – file (not recommended): Enables the creation of a space-guaranteed file, reserving the file’s total size, within a containing FlexVol volume that is not itself space-reserved. – none: Does not reserve any space for the FlexVol volume within its containing aggregate, and allows for only non-space-guaranteed files
Space guarantee can be set: – At the volume’s creation: system> vol create vol1 -s volume aggr1 5GB – On existing volumes: system> vol options vol1 guarantee none
Solutions for Full Volumes If a volume fills up (because of Snapshot copies, active file system data, or both), an administrator can: Delete Snapshot copies manually or automatically Expand the volume Manage active file system data if the blocks are not part of a Snapshot copy Implement deduplication, provided that there is enough space to turn on deduplication
Snapshot Automatic Delete Snapshot automatic delete determines when (or if) Snapshot copies are automatically deleted. This option is set at the volume level: snap autodelete vol [on|off|show|reset]
If autodelete is enabled, then options are available: system> snap autodelete vol option value
Volume Autosize You might want to grow the volume. The vol autosize command determines whether a volume should grow when it is nearly full. – Set at the volume level – Possible values: on, off – Increment size (default: 5% of the original size) – Maximum size (default: 120% of the original size)
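A minimal sketch of enabling and checking autosize on a volume; the volume name and sizes are examples only:
system> vol autosize vol1 -m 12g -i 1g on
system> vol autosize vol1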
Administrator’s Choice Administrators can choose which procedure to employ first: Snapshot autodelete or vol autosize. Use the volume option: – try_first – Possible values: snap_delete, volume_grow (default)
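A minimal sketch of selecting which mechanism runs first; vol1 is an example volume name:
system> vol options vol1 try_first volume_grow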
DEDUPLICATION Deduplication can be thought of as the process of “unduplicating” data. The term “deduplication” was first coined by database administrators many years ago as a way of describing the process of removing duplicate records after two databases had been merged. In the context of disk storage, deduplication refers to any algorithm that searches for duplicate data objects (for example, blocks, chunks, files) and discards those duplicates. When duplicate data is detected, it is not retained; instead, a “data pointer” is modified so that the storage system references an exact copy of the data object that is already stored on disk. This deduplication feature works well with datasets that have a lot of duplicated data (for example, full backups). When NetApp deduplication is configured, it runs as a background process that is transparent to any client that accesses data from a storage system. This feature allows a reduction of storage costs by reducing the actual amount of data that is stored over time. For example, if a 100-GB full backup is made on the first night, and then a 5-GB change in the data occurs during the next day, the second nightly backup only needs to store the 5 GB of changed data. This amounts to a 95% spatial reduction on the second backup. A full backup can yield more than a 90% spatial reduction, with incremental backups averaging about 30%. With nonbackup scenarios, such as with virtual machine images, gains of up to 40% space savings can be realized. To estimate your own savings, please visit the NetApp deduplication calculator at http://www.dedupecalc.com.
DEDUPLICATION IN ACTION In this example, one user creates a PowerPoint® presentation (presentation.ppt) that is 20 blocks in size. This presentation is then copied to another location by another user. Finally, a third user copies the presentation to a third location and edits the file, adding 10 blocks. When these files are stored on a storage system with deduplication configured, the original presentation file is saved, but the second copy (because it is identical to the original) merely references the original file’s location on the storage system. The third location of the presentation file is not completely duplicated. Because the third user edited the file, the edits are saved to the storage system, but all unedited blocks are referenced back to the original file. With NetApp deduplication, 30 blocks are used to store a total of 70 blocks worth of data. This is a 58% space savings.
NETAPP DEDUPLICATION: INTERNALS Typically, when deduplication is enabled on a volume, data already exists on the volume. NetApp deduplication scans the existing blocks in the flexible volume and creates a fingerprint file. A fingerprint is a combination of a calculated value and the block location (that is, [fingerprint value, block location]). To create the fingerprint file, an administrator runs the sis start -s command. During this phase, a gatherer process identifies all existing files, generates fingerprints, and places them in a gatherer file. The result is a 32-byte record per fingerprint, which represents about 0.8% overhead. The gatherer process passes the fingerprint information to a Fingerprint Manager, which sorts the fingerprints by using quick sort and merge sort techniques (“qsort” and Merge Sort in the figure). New fingerprints are then written to the fingerprint file. Over time, the fingerprint file might accrue a number of stale entries due to files being deleted or moved to another volume. After 20% of the entries become stale, a stale-remover phase occurs, to purge the fingerprint file of outdated records.
NETAPP DEDUPLICATION: INTERNALS Duplicates are identified during the merge sort process. Identified duplicate records are sorted by inode, and then duplicate blocks are eliminated by the block sharing engine, one after the other, in the order of the inode number. Fingerprints are used to find potential duplicate blocks, but data comparison is always done before duplicates are eliminated. After the block has been identified as a true duplicate, indirect blocks are updated by pointing to the already existing data block. The reference count metadata is incremented. The duplicate block, having no inode or indirect block pointing to it (that is, a refcount value of 0), is considered free by the WAFL® (Write Anywhere File Layout) file system.
NETAPP DEDUPLICATION: INTERNALS When new write requests come in to the storage system, a new fingerprint is calculated and is written to a change log in the flexible volume metadata.
NETAPP DEDUPLICATION: INTERNALS The change log is then sorted by the Fingerprint Manager, and the new fingerprints are merged into the fingerprint file. While the first change log is being processed, all new data that is written to the storage system is fingerprinted, and the new fingerprints are written to a second change log file.
NETAPP DEDUPLICATION: INTERNALS Duplicates are then identified and sorted by inode. After a byte-by-byte comparison to verify that the blocks are truly duplicates, indirect blocks are updated by pointing to the already existing data block. The reference count metadata is updated. The duplicate block, having no inode or indirect block pointing to it (that is, a refcount value of 0), is considered free by the WAFL file system.
NETAPP DEDUPLICATION: INTERNALS For maintenance, a storage administrator can run the sis check command, which verifies the integrity of the fingerprint file. This verification is automatically triggered by the deduplication operation when 20% of fingerprint entries become stale.
NETAPP DEDUPLICATION: STAGES NetApp deduplication eliminates duplicated data through sharing across files. This can be summarized in three basic stages: gathering (or initialization), sorting, and deduplicating files. In addition, a checking stage verifies the integrity of the fingerprint file.
CONFIGURATION OVERVIEW To configure NetApp deduplication, you must first license it on the storage system. Use the license add command to perform this task. Next, you must use the sis on command to turn it on for the volume that you want to deduplicate. If data already exists on the storage system volume that you want to deduplicate, run the sis start -s command on the volume. This command scans the file system to collect fingerprints of each data block and sorts the fingerprints to identify duplicate blocks. Each fingerprint entry maps a fingerprint value to the location of a disk block: [fingerprint value, block location]. This data structure enables you to query blocks based on block contents. You can use the sis config command to configure the system to run the deduplication process at a particular time. The storage administrator can then run the sis start command to process fingerprints that are present in the change log (because they were recorded when data was written to disk). During this step, new duplicate blocks are eliminated, and a list of new fingerprints is added to the database. You can perform this step manually by running the sis start command, or the step can be triggered automatically by a scheduled deduplication process. The storage administrator can view sis status to verify the status of the deduplication operation and use df -s to view the amount of space savings. NOTE: When files are removed, the fingerprints are not automatically purged. Stale fingerprints are purged after a certain threshold is reached, or when a sis check command is run explicitly on the volume.
Configuring Deduplication system> sis on /vol/vol1 SIS for "/vol/vol1" is enabled. Already existing data could be processed by running "sis start -s /vol/vol1".
system> sis start -s /vol/vol1 The file system will be scanned to process existing data in /vol/vol1. This operation may initialize related existing metafiles. Are you sure you want to proceed with scan (y/n)? y Fri Nov 10 11:42:58 EST [wafl.scan.start:info]: Starting SIS volume scan on volume vol1. The SIS operation for "/vol/vol1" is started.
CONFIGURING DEDUPLICATION Here is an example of turning on deduplication on a volume named “vol1.” Next, the storage administrator scans the volume to identify current space savings and adds the existing data’s fingerprint records to the fingerprint database by using the sis start -s command.
VERIFYING DEDUPLICATION Here, the storage administrator uses the sis status command to confirm the progress of the initialization scan that was started with the sis start -s command. When the system is idle, the amount of time that the process has been idle appears in response to the sis status command. Finally, a storage administrator can verify the amount of savings by using the df -s command.
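For illustration only (the column layout and the values shown here are a sketch; actual output varies by Data ONTAP release and by the data on the volume), the verification commands might look like the following for the hypothetical volume vol1:
system> sis status /vol/vol1
Path                State      Status     Progress
/vol/vol1           Enabled    Idle       Idle for 02:45:13
system> df -s /vol/vol1
Filesystem          used       saved      %saved
/vol/vol1           8610984    8361556    49%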
THE SIS STATUS COMMAND: STAGES The sis status command displays different messages, depending on the stage of the deduplication process that is occurring on the storage system. This example shows the four basic stages and the associated progress messages.
SCHEDULING DEDUPLICATION By default, deduplication occurs at midnight every day. You can configure the schedule by using the sis config command. You can specify the schedule (-s) parameter in one of four ways: 1. Specify "-" to specify that there should be no scheduled deduplication operation on the flexible volume. 2. List the hours, and enter the @ sign separator, followed by the day list. 3. Specify “auto” to indicate that deduplication should run on the flexible volume whenever there are 20% new fingerprints in the change log. 4. List the days, and enter the @ sign separator, followed by the hours list.
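As a hedged illustration of those four formats (vol1 is a placeholder volume; confirm the exact schedule syntax in the sis man page for your release), the commands might look like this:
system> sis config -s - /vol/vol1              (no scheduled deduplication)
system> sis config -s 23@sun-fri /vol/vol1     (11 p.m., Sunday through Friday)
system> sis config -s auto /vol/vol1           (run when 20% new fingerprints accrue in the change log)
system> sis config -s sat@6 /vol/vol1          (Saturday at 6 a.m.)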
DEDUPLICATION VOLUME LIMITS The maximum flexible volume size, the maximum deduplicated volume size, and the maximum total deduplicated data depend on the platform, the system memory, and the Data ONTAP release. (Table: deduplication volume limits by platform and system memory for the Data ONTAP 7.3.x, 8.0.0, and 8.0.1 operating systems.)
TYPICAL STORAGE SAVINGS These are sample savings that were achieved with internal and customer testing. Actual customer savings are highly dependent on the data type and data layout.
Compression Requirements Compression requires: – Data ONTAP 8.0.1 7-Mode operating system or later – Formal NetApp approval with written agreement (Policy Variance Request or PVR) Implementation: – Add a free compression and deduplication license – Enable compression for each FlexVol volume FlexVol volumes in 64-bit aggregates only 16 TB maximum size
– Enable deduplication (even if using compression only)
TOTAL STORAGE EFFICIENCY EFFECT NetApp provides numerous storage efficiencies to reduce storage costs while still providing the capacity that is needed. THIN REPLICATION Thin replication uses NetApp SnapMirror. You can now improve network bandwidth efficiency by up to 70%, in addition to the existing storage and network efficiencies achieved with SAN thin replication. Thin replication can efficiently replicate production data and, with FlexClone® copies, provide multiple, instant copies for disaster recovery, decision support, business intelligence analysis, or a development and test environment – all with minimal overhead. VIRTUAL COPIES Storage administrators can save up to 80% by using writable virtual copies called FlexClone. See Module 12 for more information.
Module Objectives By the end of this module, you should be able to: Describe high-availability (HA) solutions Discuss how high availability increases the reliability of storage Define HA controller configuration Describe the three modes of HA operation with an HA pair Analyze how high availability affects client protocols during failover and giveback operations
SYNCMIRROR SOFTWARE SyncMirror® protects against data loss by maintaining two copies of the data contained in the aggregate, one in each plex. Any data loss due to disk failure in one plex is repaired by the undamaged data in the other plex.
Hardware Disk Ownership
When disk ownership is hardware-based, disks are assigned to a pool based on the slot that the shelf is connected to.
(Figure: two HA controllers, each with onboard ports e0a through e0d and an RLM port, cabled through FC ports to ESH4 disk shelves; the shelves on one loop belong to Pool0 and the shelves on the other loop belong to Pool1.)
NOTE: Data ONTAP® 8.0 does not support hardware-based disk ownership.
NOTE: Disks in different pools must be on different loops or shelves.
system> disk assign {disk_list...}[-p pool] ... – disk_list is the list of device IDs of unassigned disks – pool is either 0 or 1 system> disk assign 0a.46 -p 1
Implementing Mirrored Aggregates To implement SyncMirror software: 1. Add the (no-cost) syncmirror_local license. 2. Verify that the disks are in the correct pools. 3. Reboot the system.
To create a new mirrored aggregate:
system> aggr create aggr -m number_of_disks
To add a mirror to an existing aggregate: system> aggr mirror aggr
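As a hedged example combining these steps (the license code, aggregate names, and disk count are placeholders, and command output is omitted):
system> license add XXXXXXX          (syncmirror_local license code)
system> disk show -v                 (verify disk ownership and pool assignment)
system> aggr create aggr1 -m 10      (new mirrored aggregate; 5 disks per plex)
system> aggr mirror aggr0            (add a mirror to an existing aggregate)
system> aggr status -r aggr1         (verify that both plexes are online)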
Nondisruptive shelf replacement (NDSR) is now available in Data ONTAP 7.3.2 and later. If multiple disks fail in an aggregate, data is still available from the alternate pool.
Maintenance of Mirrors Split a mirrored aggregate Rejoin a split aggregate Remove a plex from a mirrored aggregate Compare plexes of a mirrored aggregate NOTE: For more information about SyncMirror software, please see the High Availability Web-based courses.
HA CONTROLLER CONFIGURATION In an HA configuration, the controllers of two storage systems (nodes) are connected to each other either directly or through switches. The nodes are connected to each other through a cluster adapter or NVRAM adapter, which allows one node to serve data to the disks on its partner if the partner node fails. Each node continually monitors its partner, mirroring data for the partner’s NVRAM.
BENEFITS OF HA CONTROLLER CONFIGURATION HA configurations provide fault tolerance and the ability to perform nondisruptive upgrades and maintenance:
Fault tolerance: When one node fails or becomes impaired, a takeover occurs and the partner node continues to serve the data of the failed node. Nondisruptive software upgrades: When you halt one node and allow takeover, the partner node continues to serve data for the halted node, allowing you to upgrade the halted node. For more information about nondisruptive software upgrades, see the Data ONTAP Upgrade Guide. Nondisruptive hardware maintenance: When you halt one node and allow takeover, the partner node continues to serve data for the halted node, allowing you to replace or repair hardware on the halted node.
Requirements for High Availability Architecture compatibility Disk and disk shelf compatibility Installed cluster interconnect adapters and cables (some systems have the cluster interconnect built-in to the backplane) Nodes attached to the same networks The same software licensed and enabled
REQUIREMENTS FOR HIGH AVAILABILITY The number of disks in a standard HA configuration must not exceed the maximum configuration capacity. In addition, the total amount of storage attached to each node must not exceed the capacity of a single node. To determine your maximum configuration capacity, see the System Configuration Guide at http://now.netapp.com/NOW/knowledge/docs/hardware/hardware_index.shtml. NOTE: When a failover occurs, the takeover node temporarily serves data from all the storage in the HA configuration. When the single-node capacity limit is less than the total HA configuration capacity limit, the total disk space in a cluster can be greater than the single-node capacity limit. It is acceptable for the takeover node to temporarily serve more than the single-node capacity would normally allow, as long as it does not own more than the single-node capacity. Disks and disk-shelf compatibility
Both Fibre Channel (FC) and SATA storage types are supported in standard HA configurations, if the two storage types are not mixed on the same loop. A node can have exclusively FC storage and the partner node can have exclusively SATA storage. Cluster interconnect adapters and cables must be installed. Nodes must be attached to the same network and the network interface cards must be configured correctly. System features such as CIFS, NFS, or SyncMirror software must be licensed and enabled on both nodes.
Partner Communication In an HA controller configuration, partners communicate through the interconnect with a heartbeat. The system state is written to disk in a “mailbox.” Data not committed to disk is written to the local and partner nonvolatile RAM (NVRAM).
PARTNER COMMUNICATION To ensure that each node in an HA controller configuration maintains the correct and current status of its partner node, heartbeat information and node status are stored on each node in the mailbox disks. Mailbox disks are a redundant set of disks used in coordinating takeover and giveback operations. If one node stops functioning, the surviving partner node uses the information on the mailbox disks to perform takeover processing, which creates a virtual storage system. If an interconnect failure occurs, the mailbox heartbeat information prevents an unnecessary failover from occurring. Moreover, if the HA configuration information that is stored on the mailbox disks is not synchronized during boot, the HA controller nodes automatically resolve the situation. The FAS system failover process is extremely robust, preventing split-brain issues from occurring.
HA Controllers and NVRAM Each node reserves half of the total NVRAM for the partner’s data. During takeover, the surviving partner performs the down system’s reads and writes using the mirror nonvolatile log (NVLOG).
HA CONTROLLERS AND NVRAM The Data ONTAP operating system uses the WAFL® (Write Anywhere File Layout) file system to manage data processing and NVRAM to guarantee data consistency before committing writes to disks. If the storage controller experiences a power failure, the most current data is protected by the NVRAM, and file system integrity is maintained. In an HA controller environment, each node reserves half of the total NVRAM size for the partner node’s data, to ensure that exactly the same data exists in NVRAM on both storage controllers. Therefore, only half of the NVRAM in the high-availability controller is dedicated to the local node. If failover occurs, when the surviving node takes over the failed node, all WAFL checkpoints stored in NVRAM are flushed to disk. The surviving node then combines the split NVRAM. How the Interconnect Works The interconnect adapters are critical components in an HA controller configuration. The Data ONTAP operating system uses these adapters to transfer system data between the partner nodes, which maintains data synchronization in the NVRAM on both controllers. Other critical information is also exchanged through the interconnect adapters, including the heartbeat signal, system time, and details about temporary disk unavailability due to pending disk-firmware updates.
CONFIGURING HIGH AVAILABILITY To add the license, enter the following command on both node consoles for each required license: license add xxxxxx where xxxxxx is the license code you received for the feature To reboot both nodes, enter the following command: reboot To enable controller failover, enter the following command on the local node console: cf enable To verify that controller failover is enabled, enter the following command on each node console: cf status
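For illustration (license codes are placeholders, and the exact status message varies by release), the sequence across the two nodes might look like this:
system1> license add XXXXXXX
system2> license add XXXXXXX
system1> reboot
system2> reboot
system1> cf enable
system1> cf status
Cluster enabled, system2 is up.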
Setting Matching Node Options 1. Analyze the option values for both nodes. 2. Verify that the option values are the same. 3. Correct any mismatched option values. (Table: parameters that must match on both nodes.)
SETTING MATCHING NODE OPTIONS Because some Data ONTAP options must match on both the local and partner node, use the options command on both nodes and ensure that the options match. STEPS 1. View and note the values of the options on the local and partner nodes, using the following command on each console: options The current option settings for the node are displayed on the console. Output similar to the following is displayed: autosupport.doit TEST autosupport.enable on 2. Verify that the options with comments in parentheses are set to the same value for both nodes. The comments are as follows:
– Value might be overwritten in takeover
– Same value required in local+partner
– Same value in local+partner recommended
3. Correct any mismatched options using the following command: options option_name option_value.
TAKEOVER OPERATION When a takeover occurs, the functioning partner node takes over the functions and disk drives of the failed node by creating an emulated storage system that: Assumes the identity of the failed node Accesses the failed node’s disks and serves its data to clients The partner node maintains its own identity and its own primary functions, but also handles the added functionality of the failed node through the emulated node.
Events That Trigger Takeover A node undergoes a software or system failure that leads to a panic A node undergoes a system failure (for example, a loss of power) and cannot reboot A mismatch occurs between the disks that each node believes it owns A network interface that is configured to failover becomes unavailable A node cannot send heartbeat messages to its partner and no other mechanism is available A node is halted (such as with the halt command) A takeover is manually initiated
GIVEBACK OPERATION When a failed node is repaired and functioning again, execute the cf giveback command, which terminates the emulated node on the partner. The failed node resumes serving its own data and the HA configuration resumes normal operation. Each node is ready to take over for its partner if the partner fails.
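A hedged example of a planned takeover and giveback (node names are placeholders, and the status messages shown are illustrative):
system1> cf takeover            (system1 takes over system2's storage and identity)
system1> cf status
system1 has taken over system2.
...system2 is repaired and boots to the waiting-for-giveback state...
system1> cf giveback            (system2 resumes serving its own data)
system1> cf status
Cluster enabled, system2 is up.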
If the storage system was previously managed as a standalone system by NetApp System Manager, the storage system must be deleted and then added again after high availability is configured.
Enable HA You cannot currently enable high availability from NetApp System Manager so you must enable it from the command-line interface: 1. License controller failover (cf): system> license add xxxxxx system2> license add xxxxxx
ADD SECOND PARTNER IP ADDRESS The following list describes the three types of interface configurations that you can enable in an HA pair.
Shared: This type of interface supports both the local and partner nodes. It contains both the local node and partner node IP addresses. During takeover, it supports the identity of both nodes.
Dedicated: This type of interface supports only the node in which it is installed. It contains the local node IP address only and does not participate in network communication beyond the local node during takeover. It is paired with a standby interface.
Standby: This type of interface is on the local node, but it contains only the IP address of the partner node. It is paired with a dedicated interface.
Negotiated Failover Data ONTAP allows failover to occur upon the failure of one or more network interfaces, to ensure continual client interaction. To enable negotiated failover (NFO), which is off by default: system> options cf.takeover.on_network_interface_failure on
To configure policy for marked network interface cards (NICs): system> options cf.takeover.on_network_interface_failure.policy {all_nics | any_nics}
To mark a NIC to participate in NFO: system> ifconfig interface_name nfo
NEGOTIATED FAILOVER To enable negotiated failover for failed network interfaces, you must explicitly enable the cf.takeover.on_network_interface_failure option, set the failover policy, and mark each interface that can trigger a negotiated failover (NFO). NOTE: The cf.takeover.on_network_interface_failure.policy option must be set manually on both of the controllers in an HA pair: all_nics: All interfaces marked for failover must fail before takeover will occur. any_nics: Any interface marked for failover will trigger a high-availability takeover. The cf.takeover.on_network_interface_failure option is not the primary defense against a network switch becoming a single point of failure. This option should only be considered when a single-mode vif or second-level vif cannot be used. Controller failover is disruptive to CIFS clients and can be disruptive to NFS clients that use soft mounts. In contrast, interface group (or virtual interface in Data ONTAP 7.3) failover is completely nondisruptive and is therefore the preferred method. Also note that negotiated failover is being used increasingly in MultiStore® environments.
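Putting these pieces together, a hedged configuration example (e0a is a placeholder interface name, and the commands must be applied on both controllers in the HA pair):
system> options cf.takeover.on_network_interface_failure on
system> options cf.takeover.on_network_interface_failure.policy all_nics
system> ifconfig e0a nfo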
HA Best Practices Test failover and giveback operations before placing HA controllers into production. Monitor: – The performance of the network – The performance of disks and storage shelves – CPU utilization of both controllers, to ensure that neither exceeds 50%
HA BEST PRACTICES General best practices require comprehensive testing of all mission-critical systems before introducing them into a production environment. HA controller testing should include takeover and giveback, or functional testing as well as performance evaluation. Extensive testing validates planning. Monitor network connectivity and stability. Unstable networks not only affect total takeover and giveback times, they adversely affect all devices on the network in various ways. NetApp® storage controllers are typically connected to the network to serve data, so if the network is unstable, the first symptom is degradation of storage-controller performance and availability. Client service requests are retransmitted many times before reaching the storage controller, appearing to the client as slow responses from the storage controller. In a worst-case scenario, an unstable network can cause communication to time-out, and the storage controller appears to be unavailable. During takeover and giveback operations in an HA controller environment, storage controllers attempt to connect to numerous types of servers on the network, including DNS, Network Information Service (NIS), Lightweight Directory Access Protocol (LDAP), and application servers, as well as Windows domain controllers. If these systems are unavailable or the network is unstable, the storage controller continues to try to establish communications, which delays takeover and giveback times.
MULTIPLE HA TECHNIQUES IN COMBINATION The HA techniques described do not have to be used in isolation; often they are combined. The example in this slide shows multipathing high-availability controller configuration, referred to as MPHA.
Module Summary In this module, you should have learned to: Describe HA solutions Discuss how high availability increases the reliability of storage Define HA controller configuration Describe the three modes of HA operation with an HA pair Analyze the effect on client protocols during failover and giveback operations
Check Your Understanding What are the three modes of operation for an HA controller configuration? What is the purpose of using an HA controller configuration? What happens during a takeover? True or False: – Options must be set the same on both nodes. – The license must be set the same on both nodes. – Both nodes must have the same number of disks. – Both nodes must be part of the same domain.
Module Objectives By the end of this module, you should be able to: List the virtualization vendors that the Data ONTAP® operating system s Illustrate virtualization solutions Describe how to virtualize a storage controller using MultiStore® software Configure MultiStore software Assign client protocols on MultiStore software
Virtualization and NetApp Solutions NetApp provides storage solutions for virtual infrastructures. Virtualization storage client – VMware® cloud, server, and desktop – NetApp, VMware, and Cisco® FlexPod™ for VMware – Microsoft Hyper-V™ – Citrix® server and desktop
STORAGE ARCHITECTURE Storage architecture has evolved. First, there were application-based silos with separate physical servers and storage. Then VMware® began a movement to virtualize these silos. Virtualization might begin in one department at a company, but over time much of the company was virtualized and consolidated. But consolidation is limited, generally by backup or compliance restrictions. Shared infrastructure is now increasing, such as IT as a service (ITaaS), which is an internal cloud of available resources. External service providers (external clouds) also meet a market need by standardizing, automating, and providing self-service computing and storage resources.
NETAPP AND VMWARE CLOUD SOLUTIONS NetApp and VMware cloud solutions provide these advantages:
– Fault tolerance
– Support for FCoE
– SAN multipath I/O for 10 Gigabit Ethernet (GbE) storage networking
– vNetwork distributed switch (network configuration can be done across multiple VMs and then applied to individual VMs)
– Service Console and VMkernel support for IPv6
SNAPMANAGER FOR VIRTUAL INFRASTRUCTURE SMVI co-exists with Virtual Center and communicates with Virtual Center using the API when providing requests to ESX Servers for ESX Server VMs. Virtual Center communicates with ESX Servers for all management functions, and uses resident agents in the ESX Servers. SMVI uses Data ONTAP APIs to schedule Snapshot copy creation and SnapRestore® functions on NetApp platforms, as well as invoking the SnapMirror product family for replication of the Snapshot copies. Replication of Snapshot copies can be directed at a disaster recovery site. Snapshot copies are always created at data-store levels, but restores can be done at the level of individual VMs.
DISASTER RECOVERY AUTOMATION The SnapMirror product family uses Snapshot copies and SnapRestore technology to quickly and efficiently replicate data between sites. Other solutions are often deployed only on the most critical applications, but the SnapMirror product family is flexible enough and simple enough to use in various ways across all applications. Because the solution uses application-aware Snapshot copies, application consistency is maintained. In a VMware environment, the SnapMirror product family is integrated with VMware Site Recovery Manager (SRM), which manages VMs and ESX Servers across sites. Note that VMware does not provide the capability to directly copy virtual servers to a remote location, so VMware recommends using storage-based replication for both virtual servers and data.
With NetApp FlexClone software, users can test the disaster recovery automation of SRM to ensure reliability. (Figure: VM1 through VM4, each with a VMDK, on a storage pool at the primary site.)
TESTING DISASTER RECOVERY AUTOMATION NOTE: NetApp recommends using FlexClone software to nondisruptively test the disaster recovery site. For more information, contact your NetApp Professional Services specialist.
INTEGRATED STORAGE VIRTUALIZATION NetApp storage systems use NetApp deduplication and Snapshot technology to provide savings. In this example, the competitor solution requires 61 disks and 3 arrays, where the NetApp solution uses 22 disks and 2 arrays.
(Figure: a complete bundle of Cisco Nexus® family switches and NetApp FAS storage connected over 10 GE and FCoE.)
A shared infrastructure for a wide range of environments and applications
Features:
– Complete data center in a single rack
– Performance-matched stack
– Step-by-step deployment guides
– Solutions guide for multiple environments
– Multiple classes of computing and storage supported in a single FlexPod
– Centralized management: NetApp OnCommand and Cisco UCS Manager
FLEXPOD FlexPod is the best infrastructure foundation supporting both virtualized and nonvirtualized workloads using Cisco Unified Computing System (UCS), Cisco Nexus (servers and network), and NetApp FAS (storage). It provides the best unified computing, networking, and storage. The FlexPod solution is based on three key capabilities:
Low Risk: As a validated, simplified data center solution and a cooperative model, FlexPod provides a safe and proven journey to virtualization and toward the cloud. Business agility from flexible IT: FlexPod scales to fit a variety of use cases and environments, such as SAP®, Exchange 2010, SQL, VDI, and Secure Multi-Tenancy (SMT). Reduced TCO from higher data center efficiency: FlexPod decreases the number of operational processes, reduces energy consumption, and maximizes resources.
1-Rack Data Center Solution
– 30 Westmere CPUs (180 cores)
– 2 TB server memory (up to 4 TB)
– 40-Gbps interconnect (4x 10 GE)
– 512-GB SSD storage cache
– 42 TB storage
1 Enterprise IT Infrastructure
For an organization of 1,500 users with a mixed workload of the following (and with headroom for more applications):
– VMware View 4.5 (MS Windows® 7)
– MS Exchange 2010
– MS SharePoint 2010
– MS SQL Server 2008 R2
Two classes of computing: dense memory and general virtualized workloads
FLEXPOD FOR VMWARE The base FlexPod configuration has a FAS3210A and supports up to 1,500 users for these four popular workloads simultaneously: Virtual desktop infrastructure (VDI) Microsoft Exchange Microsoft SharePoint SQL Server The configuration also provides sufficient headroom for additional applications. The storage array can be replaced by a larger (or smaller) system, and blade configuration can also be tuned to specific workload requirements, yet FlexPod retains its integrity and single architecture. You can scale up either using a standard FlexPod each time or by scaling individual components. Regardless of the number of FlexPods or the capacity in each layer of the infrastructure, resources are always managed as one pool, rather than for individual FlexPods.
MAXIMIZE SERVER AND STORAGE UTILIZATION In a virtualized environment (or a physical environment), NetApp storage systems provide unique capabilities that enable different IT teams to perform automated data management tasks from their tools. SANscreen® VM Insight enables discovery of the cross-domain view. It allows you to track and report all changes and be proactive about showing the implication of the changes on the services you expect to deliver. In other words, SANscreen VM Insight allows you to track where redundant paths have been removed. Service Insight allows you to proactively manage your environment and quickly identify and resolve latent quality problems. VM Insight provides visibility for connections between VMs (virtual machines) and allocated volumes. This visibility allows both the server and storage teams to enable more effective forecasting, planning, and chargeback. VM Insight also helps you improve storage utilization and reduce storage consumption caused by VM sprawl. It does so by giving you tools to use to identify VMs and associated storage that are no longer in use or are being underutilized.
MICROSOFT VIRTUALIZED INFRASTRUCTURE Features such as deduplication, thin provisioning, cloning, multiprotocol, and RAID-DP® have a compounding effect on the level of efficiency and flexibility achieved in a Microsoft virtualized environment. Many customers that currently use a direct-attached storage (DAS) topology are concerned about the costs associated with transitioning to virtualized network storage. Yet they want to realize the benefits of quick migration and high-availability that are provided by Hyper-V and require a network storage topology. By using cost-effective and flexible NetApp storage solutions, customers can maximize their investment in their Microsoft virtualized environments.
Continuing NetApp Microsoft Education Review these courses at NetApp University: Design and Implement Hyper-V Solutions on NetApp Storage SAN Implementation Workshop
NETAPP AND CITRIX XENDESKTOP NetApp and Citrix provide the following advantages:
– Unified VM and storage
– Deduplication and thin provisioning
– Zero-cost desktop clones
– Desktop boot accelerator
– VM-aware array-based backup
– Single desktop cloning
– Disaster recovery replication for immutable (write once, read many, or WORM) storage
NETAPP ADAPTER IN XENSERVER 5.5 NetApp and Citrix co-developed the XenServer integrated adapter to enable server administrators to manage NetApp storage directly from the XenCenter console. The NetApp integrated storage adapter for Citrix XenServer enables server administrators to increase productivity by managing common storage functions within the XenCenter console. NetApp solutions provide instant storage provisioning and cloning of XenServer VMs, accelerating testing and development or production from weeks to minutes.
Consolidation and ease of management: Application service providers can consolidate your storage needs. You can maintain domain infrastructure while providing multidomain storage consolidation. You can reduce management costs while offering independent, domain-specific storage management. Security: Security is one of the key concerns when storage is consolidated either within an organization or by an application service provider. Different vFiler® units can have different security systems within the same storage system. Delegation of management: vFiler unit administrators can have different access rights than storage system administrators. Disaster recovery and data migration: MultiStore software enables you to migrate or back up data from one storage system to another without extensively reconfiguring the destination storage system.
ENABLING MULTISTORE SOFTWARE The vFiler unit limit (the number of vFiler units that you can create on this storage system, including vFiler0) is set to a default value between 3 and 11, depending on the memory capacity of the host storage system.
An interface must be down, with no address assigned, before it can be assigned to an IPspace.
IPSPACE An IPspace defines a distinct IP address space in which vFiler units can participate. IP addresses defined for an IPspace are applicable only within that IPspace. A distinct routing table is maintained for each IPspace. No cross-IPspace traffic is routed.
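As a hedged example (the IPspace name and interface are placeholders), an IPspace is typically created and an interface assigned to it before the vFiler unit that will use it is created:
system> ipspace create ips_vfiler1
system> ifconfig e0b down                 (interface must be down, with no address)
system> ipspace assign ips_vfiler1 e0b
system> ipspace list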
CONFIGURING STORAGE FOR A VFILER UNIT As the physical storage system administrator, if you need to manage storage resources that belong to a vFiler unit, but you do not have administrative access to the vFiler unit, you can temporarily move the vFiler unit's resources or temporarily destroy the vFiler unit.
CREATING A VFILER UNIT You must meet these conditions before you can create a vFiler unit:
You must create at least one unit of storage (qtrees or volumes, traditional or flexible) before creating the vFiler unit. The storage unit that contains the vFiler unit configuration information must be writable; it must not be a read-only file system, such as the destination volume or qtree in a SnapMirror relationship. The IP address used by the vFiler unit must not be configured when you create the vFiler unit.
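A hedged example of creating a vFiler unit once these conditions are met (the unit name, IPspace, IP address, and path are placeholders):
system> vfiler create vFiler1 -s ips_vfiler1 -i 10.1.1.100 /vol/vol1/qtree1
system> vfiler status -a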
VFILER START You can start a vFiler unit that is in the stopped state; after a vFiler unit starts, it is in running state and can receive packets from clients.
CONFIGURE NETWORKING FOR THE VFILER Because vFiler units in the same IPspace share one routing table, you can manipulate the routing table by entering the route command from the host storage system.
vFiler Management To learn more about running commands on vFiler1, display the vFiler1 help:
system> vfiler run vFiler1 ?
To assign a protocol to a vFiler unit: 1. License the protocol on the storage system 2. Assign the protocol to a vFiler unit system> vfiler allow vFiler1 proto=cifs proto=iscsi
VFILER MANAGEMENT By default, a vFiler unit can use the protocols for the host storage system. You can select the protocols that you want to allow on the vFiler units.
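For example (hedged; vFiler1 is the unit used earlier in this module), a protocol can be removed again with the vfiler disallow command, and commands can be executed in the vFiler unit's context with vfiler run:
system> vfiler disallow vFiler1 proto=nfs
system> vfiler run vFiler1 cifs shares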
MULTISTORE INTEGRATIONS MultiStore software integrates into the SnapMirror product family and into SnapVault software: Snapmirror Product Family The SnapMirror product for mirroring volumes and qtrees has been integrated to work with vFiler technology after the SnapMirror product family is licensed on the source and destination storage systems. You can enter SnapMirror commands from the default storage system (vFiler0) or from a specific non-default vFiler unit. SnapMirror commands entered from the default storage system can be used to affect or display information about all of the non-default vFiler units on the host storage system. SnapMirror commands entered from a non-default vFiler unit only affect or display information about that specific unit. For backward compatibility, the default storage system (vFiler0) can operate on all volumes and qtrees, even if they are owned by other vFiler units. If vFiler unit storage volumes and qtrees are mirrored by vFiler0, the SnapMirror relationship will be reflected only on vFiler0. Snapvault Software After SnapVault software is licensed on the source and destination storage units, the SnapVault product for backing up volumes and qtrees is integrated to work with vFiler technology. See the MultiStore Management Guide for more information.
VFILER DISASTER RECOVERY You can prepare for recovery from a potential disaster by creating a backup vFiler unit that can be used for disaster recovery. Before a disaster occurs, you can safeguard information by creating vFiler units on the destination storage system that remain inactive unless a disaster occurs. You should perform checks to ensure that the storage system and network are ready. The vFiler dr configure command uses the Data ONTAP SnapMirror feature as its underlying technology. Multiple paths can be set from the source to the destination storage systems in the SnapMirror product family. The -a option of the vfiler dr configure command enables you to set multiple paths for the configuration operation. See the MultiStore Management Guide for more information.
VFILER MIGRATION If heavy traffic to one vFiler unit is affecting the performance of its host node, and its high-availability configuration partner is lightly loaded, you can transfer ownership of the vFiler unit to the partner. Transferring ownership allows you to balance load processing on the two nodes without copying data. When you migrate a vFiler unit, you move it from the remote storage system to the local storage system. You initiate migration on the destination storage system, which will host the vFiler unit after the migration. Add a static route entry, if required, because the static routing information will not be carried to the destination storage system. Migration across storage systems enables workload management. Migration automatically destroys the source vFiler unit and activates the destination, which starts serving data to its clients automatically. Only the configuration is destroyed on the source, not the data. See the MultiStore Management Guide for more information.
Data Motion – Mirrors a vFiler unit to another storage system – Allows cutover to move a vFiler unit and all of its data to a new storage system completely nondisruptively
DATA MOTION FOR VFILER SERVERS NetApp Data Motion provides true data mobility to storage infrastructure without affecting the availability of client applications. Without Data Motion, lifecycle management tasks such as system maintenance, hardware refreshes, and software upgrades require planned outages and affect the ability of a company to provide continuous access to data. With the increasing need to optimize service levels, it is also important to dynamically balance system performance without compromising transaction performance or integrity. NetApp Data Motion allows clients to always stay connected to their data, even as it is moved to a new physical location.
vFiler Stop and Destroy To stop a vFiler unit: system> vfiler stop vFiler1
To destroy a vFiler unit: system> vfiler stop vFiler1 system> vfiler destroy vFiler1 – Resources for the destroyed vFiler unit return to vFiler0. – Destroying a vFiler unit does not destroy any data.
VFILER STOP AND DESTROY You can stop a vFiler unit if you need to troubleshoot vFiler unit problems or destroy the vFiler unit. After you stop a vFiler unit, the vFiler unit can no longer receive packets from clients. The stopped state is not persistent across reboots; after you reboot the storage system the vFiler unit resumes automatically. If iSCSI is licensed on the storage system, stopping a vFiler unit stops iSCSI packet processing for that vFiler unit. You can start a vFiler unit that is in the stopped state; after a vFiler unit starts, it is in running state and can receive packets from clients.
Module Summary In this module, you should have learned to: List the virtualization vendors that the Data ONTAP operating system s Illustrate virtualization solutions Describe how to virtualize a storage controller using MultiStore software Configure MultiStore software Assign client protocols on MultiStore software
Check Your Understanding How does NetApp make management of virtualization solutions easier? What is MultiStore software? What is the main command that is used to configure MultiStore software?
Module Objectives By the end of this module, you should be able to: List the methods available to back up and recover data Use ndmpcopy to process full and incremental data transfers Discuss dump and restore Describe, enable, and configure Network Data Management Protocol (NDMP) on a storage system
Snapshot Copies Back up to Local Storage Recover files
Users can back up and recover files quickly (almost instantaneously) with Snapshot copies. Snapshot copies do not replace standard backups to other media locations.
Snapshot Copy and SnapRestore Software Back up to Local Storage Recover aggregate, volume, or single file
Use Snapshot® copies to back up locally Use SnapRestore® data recovery software to revert a file system to any specified Snapshot copy or to restore a single file from a Snapshot copy. – Restores files, volumes, and aggregates quickly online – Works from multiple recovery points – Provides an easy recovery process based on a single command input – Requires the snaprestore license code
SNAPSHOT COPY AND SNAPRESTORE SOFTWARE SnapRestore® data recovery software enables you to quickly revert a local volume or a file on a storage system to the state it was in when a particular Snapshot® copy was created. In most cases, reverting a file or volume is much faster than restoring files from tape or copying files from a Snapshot copy to the active file system. You use SnapRestore to recover from data corruption. If a primary storage system application corrupts data files in a volume, you can revert the volume or specified files in the volume to a Snapshot copy created before the data corruption. You can also use SnapRestore data recovery software if you are testing a volume or file and want to restore that volume or file to pretest conditions. You must purchase and install the snaprestore license code to enable and use the SnapRestore service.
ndmpcopy Command Back up to Local or Remote Storage Recover volumes, qtrees, or directories Used to transfer data between storage systems that support NDMPv3 or NDMPv4 system> ndmpd on
NDMPCOPY COMMAND The ndmpcopy command enables you to transfer file system data between storage systems that support Network Data Management Protocol version 3 (NDMPv3) or NDMPv4, and the UNIX® file system (UFS) dump format. Using the ndmpcopy command, you can perform both full and incremental data transfers. However, incremental transfers are limited to a maximum of two levels (one full and no more than two incremental backups). You can transfer full or partial volumes, qtrees, or directories, but not individual files. To copy data within a storage system or between storage systems using ndmpcopy, use the following command from the source or the destination system, or from a storage system that is not the source or the destination: system> ndmpcopy [options] source_hostname:source_path destination_hostname:destination_path In this command, source_hostname and destination_hostname can be host names or IP addresses. If destination_path does not specify a volume (or specifies a nonexistent volume), the root volume is used.
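A hedged example of a full transfer followed by an incremental transfer between two systems (host names, credentials, and paths are placeholders; the -sa and -da options pass source and destination credentials, and -l selects the incremental level):
systemA> ndmpd on
systemB> ndmpd on
systemA> ndmpcopy -sa root:srcpass -da root:dstpass -l 0 systemA:/vol/vol1/home systemB:/vol/vol2/home
systemA> ndmpcopy -sa root:srcpass -da root:dstpass -l 1 systemA:/vol/vol1/home systemB:/vol/vol2/home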
SnapVault Software Back up to Remote Storage Recover qtrees, directories, or files
SnapVault® software is embedded NetApp software for disk-to-disk backup and archiving. Users can back up and recover: – Qtrees – Directories on storage not produced by NetApp (Figure: primary FAS systems backing up to a secondary system in the data center.)
SNAPVAULT SOFTWARE SnapVault® software is a disk-based storage backup feature of the Data ONTAP operating system. SnapVault software enables you to back up data stored on multiple storage systems to a central, secondary storage system quickly and efficiently as read-only Snapshot copies. If data is lost or corrupted on a storage system, you can restore backed-up data from the SnapVault secondary system with less downtime and uncertainty than is associated with conventional tape backup and restore operations. Additionally, users who want to restore their own data may do so without the intervention of a system administrator. The SnapVault secondary system may be configured with NFS exports and CIFS shares to let users copy files from Snapshot copies to the correct locations.
BACKUP COMMAND EXAMPLES See the Data Protection Tape Backup and Recovery Guide for your version of the Data ONTAP operating system for more information.
Dump and Restore Format The Data ONTAP dump adheres to Solaris™ ufsdump.
Dump format: – Phases 1 and 2: Build a map of files and directories, and collect file history and attribute information – Phase 3: Dump data to tape, specifically directory entries – Phase 4: Dump files – Phase 5: Dump access control lists (ACLs)
Dump and Restore Event Logs Event logging is off by default; to enable event logging, execute this command: system> options backup.log.enable {off|on}
Event log files – Stored in the /etc/log/backup log file – Rotated once a week – Saved for up to six weeks
Event log message format: type timestamp identifier event (event_info)
THE NDMP STANDARD NDMP is an open standard for centralized control of data management across an enterprise. NDMP enables backup software vendors to provide support for NetApp storage systems without having to port client code. An NDMP-compliant solution separates the flow of backup and restore control information from the flow of data to and from the backup media. These solutions invoke the native dump and restore features of the Data ONTAP operating system. The solutions back up data from, and restore data to, NetApp storage systems. NDMP also provides low-level control of tape devices and media changers. Data protection services provided by backup applications that support NDMP offer a number of advantages:
– Sophisticated scheduling of data protection operations across multiple storage systems
– Media management and tape inventory management services that eliminate or minimize manual tape handling during data protection operations
– Support for data catalog services that simplify the process of locating specific recovery data; Direct Access Recovery optimizes access to specific data from large backup tape sets
– Support for multiple topology configurations, allowing efficient sharing of secondary storage resources (tape library) through the use of three-way network data connections
Last Version Certified
Symantec NetBackup: 6.0, 6.5 & 7.0 with Data ONTAP 7.2.2, 7.2.3, and 7.3.3; 6.5 & 7.0 with Data ONTAP 7.3, 7.3.1, 7.3.2, and 7.3.3; 6.5 & 7.0 with Data ONTAP 10.0.3 and 10.0.4; 6.5 & 7.0 with Data ONTAP 8.0 7-Mode and Data ONTAP 8.0 Cluster-Mode
IBM Tivoli Storage Manager: 5.51 and 6.1 with Data ONTAP 7.3, 7.3.1, and 7.3.2; 5.5 with Data ONTAP 7.2.2 and 7.2.3; 5.4 with Data ONTAP 7.2.2 and 7.2.3; 5.5.2 with Data ONTAP 10.0.3 and 10.0.4; 6.1 with Data ONTAP 8.0 7-Mode and Data ONTAP 8.0 Cluster-Mode; 6.1 and 6.2 with Data ONTAP 7.3.3; 6.1 with Data ONTAP 7.2
CommVault Simpana: 7.0 with Data ONTAP 7.3, 7.3.1, and 7.3.2; Simpana 8.0 with Data ONTAP 7.3, 7.3.1, 7.3.2, and 7.3.3; Simpana 7.0 with Data ONTAP 10.0.3 and 10.0.4; Simpana 8.0 with Data ONTAP 10.0.3 and 10.0.4; Simpana 8.0 with Data ONTAP 8.0 7-Mode and Data ONTAP 8.0 Cluster-Mode
Oracle Secure Backup: 10.3.0.2.0 with Data ONTAP 7.3; 10.1.0.3 with Data ONTAP 7.1.1.1 and 7.2.1
(Vendor not named in the table): 7.1.1 with Data ONTAP 7.0; 7.4.5 with Data ONTAP 7.2; 7.4.5 with Data ONTAP 7.2.2 and 7.2.3; 8.0U2 and NDMP 7.0.6 with Data ONTAP 7.2.3
BakBone NetVault: 8.2 with Data ONTAP 10.0.3 and 10.0.4; 8.2 with Data ONTAP 7.3, 7.3.1, and 7.3.2; NVBU 8.5 with Data ONTAP 8.0 7-Mode; NVBU 8.5.2 with Data ONTAP 7.3.3
Syncsort BackupExpress: 2.3 with Data ONTAP 7.0; 3.0.1 with Data ONTAP 10.0.3 and 10.0.4; 3.1 with Data ONTAP 7.3, 7.3.1, and 7.3.2; 3.2.1 with Data ONTAP 7.3.3 and Data ONTAP 8.0 7-Mode
Atempo Time Navigator: 3.7 with Data ONTAP 6.5
NDMP MATRIX Solutions based on NDMP can centrally manage and control backup and recovery of highly distributed data while minimizing network traffic. These solutions can direct a NetApp storage system to back itself up to a locally attached tape drive without sending the backup data over the network. NDMP-based solutions are designed to assure data protection and efficient restoration in the event of data loss. The solutions include many control and management features—such as discovery, configuration, scheduling, media management, tape library control, and a user interface—that are not available using the native dump and restore commands for NetApp storage systems. In 1996, NetApp partnered with Intelliguard to create NDMP. Since then, the two companies have promoted the industry standardization of NDMP. In the compatibility matrix in the figure above, key backup vendors and their NDMP solutions are listed. To obtain a complete list of third-party NDMP backup applications and software versions, see the online documentation on the NOW® (NetApp on the Web) online service and support site. NDMP third-party solutions provide:
– Central management and control of highly distributed data
– Local backup of NetApp storage systems (without sending data over the network)
– Control of robotics in tape libraries
– Data protection in a mixed server environment with UNIX, Windows® Server, and NetApp storage systems
– Investment protection with established backup strategies
For more information about these and other NDMP Certification products, please see www.netapp.com.
NDMP Terminology and Components NDMP client – The NDMP client is a backup application. – NDMP clients submit requests to an NDMP server, and then receive replies and status back from the NDMP server.
NDMP TERMINOLOGY AND COMPONENTS In the following definitions, the primary system performs the NDMP data service and the secondary system performs the NDMP tape service. Data management application: An application that controls the NDMP session. The data management application is also called the backup application. Examples are Veritas™ NetBackup™ and EMC® NetWorker®. NDMP service: A service that provides data service, tape service, and SCSI service. Control connection: A bidirectional TCP/IP connection that carries NDMP messages encoded in external data representation standard (XDR) between the data management application and the NDMP server. The control connection is analogous to an NDMP session on the storage system. Data connection: A connection between two NDMP systems that carries a data stream either internal to the NetApp storage system (local) or by means of TCP/IP (remote). Data service: An NDMP service that transfers data between the primary storage system (where the data resides on disks) and the data connection. Tape service: An NDMP service that transfers data between the secondary storage and the data connection, allowing the data management application to manipulate and access secondary storage.
TYPICAL NDMP BACKUP SESSION The figure above represents a data protection topology in which data is backed up from storage system (primary) to storage system (secondary) to tape. In this topology, the backup operation is driven by a data management application host (NDMP client), designated by number 1 in the figure. The data management application opens connections to, and activates NDMP services in, both storage systems, designated by numbers 2 and 3. Control messages to the services configure the services and create a data connection between them. More control messages initiate and start backups; the data service creates the payload (backup image) and writes it to the data connection, where the tape service receives it. Log messages and notifications are sent from the services to the data management application.
NDMP Connection Information NDMP uses a TCP/IP connection to a dedicated port. NDMP does not require a CIFS, NFS, HTTP, Fibre Channel (FC), or iSCSI protocol license. The storage system listens for NDMP requests on port 10000 when ndmpd is enabled. All messages are encoded using external data representation (XDR) standard (see RFC 1014 for more information).
USING TAPE DEVICES WITH NDMP When using NDMP, a storage system can read from or write to the following devices: Stand-alone tape drives or tapes in a tape library that is attached to the storage system Tape drives or tape libraries attached to the workstation that runs the backup application Tape drives or tape libraries attached to a workstation or storage system on your network NDMP-enabled tape libraries attached to your network NOTE: To use NDMP to manage your tape library, you must set the tape stacker autoload setting to off. Otherwise, the system won’t allow media-changer operations to be controlled by the NDMP backup application. Naming Conventions for Tape Libraries The following names are used to refer to tape libraries: mcn or /dev/mcn; sptn or /dev/sptn. Tape libraries can also be aliased to worldwide names (WWNs). Use these commands with tape libraries: To view the tape libraries that are recognized by the system, use sysconfig –m. To display the names currently assigned to libraries on the storage system, use storage show mc. To display the aliases of tape drives, use storage show tape. For more information about tape aliasing and tape commands, see the Data ONTAP Data Protection Tape Backup and Recovery Guide on the NOW site.
ENABLING AND CONFIGURING NDMP To enable a storage system for basic management by an NDMP backup application, you must enable the storage system’s NDMP service, and specify the configured NDMP version of the backup application, host IP address, and authentication method. To prepare a storage system for NDMP management, complete these steps: 1. Enable the NDMP service: system> options ndmpd.enable on When you disable ndmpd, the storage system continues to process all requests for sessions already established, but rejects new sessions. 2. Specify the NDMP version to support on the storage system. This version must match the version configured on the NDMP backup application server: system> ndmpd version {2|3|4} Data ONTAP supports NDMP versions 2, 3, and 4 (4 is the default value). The storage system and the backup application must agree on a version of NDMP to be used for each NDMP session. 3. If you want to specify a restricted set of NDMP backup-application hosts that can connect to the storage system, set the following option: system> options ndmpd.access {all|legacy | host[!]=hosts | if[!]=interfaces} 4. Specify the authentication method by which users are allowed to start NDMP sessions with the storage system. This setting must include an authentication type supported by the NDMP backup application: system> options ndmpd.authtype {challenge|plaintext|challenge,plaintext} The challenge authentication method is generally the preferred, and more secure, authentication method. Challenge is the default type. With the plaintext authentication method, the password is transmitted as clear text.
ENABLING AND CONFIGURING NDMP To prepare a storage system for NDMP management, complete the following steps: 5. If you have operators without root privileges on the storage system who will be carrying out tape-backup operations through the NDMP backup application, then add a new backup user to the Backup Operators group: system> useradmin user add backup_user -g "Backup Operators" 6. Specify an 8-character or 16-character NDMP password length (the default value is 16): system> options ndmpd.password_length 7. Generate an NDMP password for the new user: system> ndmpd password backup_user NOTE: If you change the password of your regular storage system user, repeat this procedure to obtain your new system-generated, NDMP-specific password. 8. Enable logging of NDMP connection attempts with the storage system: system> options ndmpd.connectlog.enabled on This enables the Data ONTAP operating system to log NDMP connection attempts in the /etc/messages file. These entries can help you determine if and when authorized or unauthorized users are attempting to start NDMP sessions. The default for this option is off. 9. Include or exclude files whose changed timestamp (ctime) value has changed from incremental dumps, according to your backup requirements: system> options ndmpd.ignore_ctime.enabled {on|off} When this option is on, users can exclude files whose ctime value has changed from storage system incremental dumps, because other processes (such as virus scanning) often alter the ctime of files. When this option is off, backups on the storage system include all files with a change or modified time later than the last dump in the previous-level dump.
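A console sketch of steps 5 through 9 follows; the user name backup_user is hypothetical, and the exact useradmin syntax can vary by Data ONTAP release, so treat this as an illustration rather than a reference:
system> useradmin user add backup_user -g "Backup Operators"
system> options ndmpd.password_length 16
system> ndmpd password backup_user
system> options ndmpd.connectlog.enabled on
system> options ndmpd.ignore_ctime.enabled on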
NDMP STATUS AND SESSION INFORMATION To display NDMP session information, use the ndmpd status command: system> ndmpd status [session] The session variable is the specific session number whose status you want, from 0 to 99. To display the status of all current sessions, leave session blank.
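For example, assuming a hypothetical session number of 0:
system> ndmpd status
system> ndmpd status 0
The first form reports all current sessions; the second reports only session 0.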
Module Summary In this module, you should have learned to: List the methods available to back up and recover data Use ndmpcopy to process full and incremental data transfers Discuss dump and restore Describe, enable, and configure NDMP on a storage system
Check Your Understanding What can you recover using SnapRestore technology? What is NDMP? What are the NetApp disk-to-disk backup and recovery methods? What are some limitations of ndmpcopy?
Module Objectives By the end of this module, you should be able to: Use the sysstat, stats, and statit commands Describe the factors that affect RAID performance Execute commands to collect data about write and read throughputs Execute commands to verify the operation of hardware, software, and network components Identify commands and options used to obtain configuration and status
System Health Performance problems can originate from multiple sources. To avoid some of these problems, check or monitor the following: Disk configuration – Disk status – Read and write performance
SYSTEM HEALTH Good performance results when hardware, software, and communication protocols work together at optimal limits. Failure or underperformance of one element affects the other elements. Therefore, monitor your system and use NetApp® command tools to adjust the system. Correct adjustments reduce latency, improve data throughput, and allow you to achieve optimal performance.
DISK STATUS To verify that host adapters on a storage appliance are communicating with Fibre Channel (FC) disk shelves, use the shelfchk command. The command prompts you to verify whether specified LEDs are on or off. Because you must be able to see the LEDs, enter the command from a console near the shelves. Checking Disk LED Function To verify that LEDs are working on all disks, run the led_on and led_off tests. To use these commands, you must be operating in advanced mode. NOTE: The led_on and led_off tests can also be used to identify the address where disks are located. To verify that LEDs are working on all disks, complete the following steps (a console sketch follows the list):
1. To set command privileges to advanced, run priv set advanced.
2. To turn on the LEDs on a specific disk, run led_on [device_name].
3. Locate the disk on the shelf and verify that the LEDs are lit.
4. To turn off the device LEDs, enter led_off [device_name].
5. To return command privileges to basic administration mode, enter priv set admin.
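A minimal sketch, assuming a hypothetical disk named 0a.16 (substitute a device name from your own shelves):
system> priv set advanced
system*> led_on 0a.16
system*> led_off 0a.16
system*> priv set admin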
SYSLOG MESSAGES shm: disk has reported a predicted failure (PFA) event: disk XX, serial_number XXXX Description: The disk's internal error processing and logging algorithm computation results exceed an internally set threshold. The disk will likely fail in a matter of hours. shm: link failure detected, upstream from disk: id XX, serial_number XXXXX Description: An FC disk (or cable, if disks are in different disk shelves) might be malfunctioning, causing an open loop condition. This results in a synchronization loss of more than 100 milliseconds for the downstream disk that reported the problem as a link failure. shm: disk I/O completion times too long: disk XX, serial number XXXXX Description: Either the disk is old and slow, or it is internally recovering errors and taking too long to complete an I/O. This message also indicates that there are too many I/O timeouts and retries on the disk. The disk might also be frequently returning the Command Aborted status. All these issues can produce a low data-throughput rate for this specific disk and a reduction in overall system performance. shm: possible link errors on disk: id XX, serial number XXXXX Description: One of a group of four FC disks in a disk shelf (or any connecting cable) might be malfunctioning. This results in a large number of invalid cyclic redundancy check (CRC) frames and data under-runs on the loop. The invalid CRC and under-run count has crossed the specified threshold several times.
shm: disk returns excessive recovered errors: disk XX, serial number XXXXX Description: Either the disk has found media or hardware errors (unrecovered errors), or it has internally recovered a large number of errors. The disk might also be returning a Command Aborted status. The errors returned have exceeded the bit error rate specified by the disk vendor. shm: intermittent instability on the loop that is attached to Fibre Channel adapter: id XXX, name XXXXX Description: An FC adapter, attached disk shelf, disk, cable, or connector might have caused instability on the FC-AL loop, which resulted in I/O completion rates below a set threshold.
Read Performance The Data ONTAP® operating system is optimized for write performance. Read performance can decrease over time, although efficient use of cache can offset some disk performance issues. To administer read performance: – To measure optimization level: system> reallocate measure [vol | file]
– To optimize a system for read performance: system> reallocate start pathname
READ PERFORMANCE The WAFL® (Write Anywhere File Layout) file system does the following to optimize write performance:
Writes adjacent blocks in files that are adjacent on the disk, whenever possible. As the file system grows, blocks may not be written on an immediately adjacent disk, but the blocks will still be close. Reserves 10% of the disk space to increase the probability of blocks being available at or near optimal locations. Manages interleaved writes much better than other file systems, because it does not immediately allocate data to the write. By holding the write data in system memory until a consistency point (CP) is generated, the WAFL file system can allocate a lot of write data from a particular file into contiguous blocks. Minimizes the impact of write performance with the "write anywhere" allocation scheme, which minimizes disk seeks for writes. Write optimization can lead to decreased file and LUN read performance as the file system ages, because files are written to the best place on the disks for write performance. As the WAFL file system expands, fewer options are available for writing blocks, so the system may have to write to blocks that are not immediately adjacent on the disk. You can help prevent problems by using flexible volumes and the autosize volume option. In addition, the WAFL file system uses built-in, multiple-read cache algorithms to offset any potential performance degradation. NOTE: Reallocation is covered in detail in the Data ONTAP Performance course.
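As a sketch, assuming a flexible volume named vol1, you might measure the optimization level, start a reallocation scan, and check its progress like this:
system> reallocate measure /vol/vol1
system> reallocate start /vol/vol1
system> reallocate status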
WRITE PERFORMANCE COMMANDS When planning a drive configuration that optimizes write performance, base your choices on thorough knowledge of current system performance, needs, and resource constraints.
WRITE PERFORMANCE: SYSSTAT COMMAND The best command for viewing system utilization is sysstat [interval], where interval is the reporting interval in seconds (the default is every 15 seconds). The sysstat command resembles a speedometer for your storage system—it allows you to view real-time activity per second. The statistics displayed by the sysstat command should help you answer questions such as: Is the system usage steady or does it fluctuate? Is the CPU percentage high without corresponding I/O activity? Interpreting sysstat Results The sysstat command output includes:
CPU: An average of the usage percentage of the busiest CPUs NOTE: The sysstat –M command displays statistics for each CPU in a multiprocessor system. NFS: The number of NFS operations per second. CIFS: The number of CIFS operations per second. HTTP: The number of HTTP operations per second. Net KB/s in and Net KB/s out: The kilobytes per second of data requested from the network as a read or write. This is the network traffic displayed in KBps, which tells you how much network traffic the storage appliance is processing, how constant that traffic is, and if the system is exceeding its network traffic limitations. Disk KB/s read and Disk KB/s write: The disk read and write activity. Disk reads occur if data is not cached. Ideally, disk writes should occur every 10 seconds. Cache age: The age, in minutes, of the oldest read-only blocks in the buffer cache (which is not relevant when diagnosing performance issues).
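For example, to watch one-second samples from the console (the -u and -x variants add utilization and extended columns, and the exact columns vary by release):
system> sysstat 1
system> sysstat -u 1
system> sysstat -x 1
Press Ctrl-C to stop the display.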
Performance Counters Counters are organized in an object-instance-counter hierarchy. Counters are collected from Counter Manager. The stats command allows users to look at any object instance and the corresponding counters (and supports preset files).
COUNTER MANAGER Counter Manager is a thin layer built into the Data ONTAP architecture that provides a single view of Data ONTAP performance counters and a standard performance API set for all clients. Clients include Manage ONTAP®, the AutoSupport™ tool, Windows® perfmon, SNMP, and the command-line interface. The Purpose of Counter Manager Counter Manager was introduced in Data ONTAP 6.5. It provides a complete set of performance metrics that supply statistics for you to use when analyzing configuration mistakes. Counter Manager provides an infrastructure to: Improve customer and internal performance monitoring Provide simple performance problem diagnoses Enhance existing sizing processes Provide capacity planning capabilities For a complete list of the performance counters available in Data ONTAP, look in the Operations Manager documentation for Performance Objects and Counters.
stats Command Syntax The stats command lets you collect or view statistical data on a storage system. The stats command can be run in one of three ways:
– Single: current counter values are displayed once: stats show
– Repeating: counter values are displayed multiple times at a fixed interval: stats show -i 1
– Period: counters are gathered over a single period of time and then displayed: stats start followed by stats stop
In the first sample above, stats are listed for all of the disks. In the second sample, a specific counter is listed for a single disk instance:
system> stats show disk:20::00::00::0c::50::a3::6b::58:disk_busy
disk:20:00:00:0c:50:a3:6b:58:disk_busy:0%
system>
NOTE: The disk instance name contains colons, so each colon must be entered twice to de-reference it.
PRESET SYSSTAT.XML FILE The stats command supports preset configurations that contain commonly used combinations of statistics and formats. Specify the preset to use with the -p command-line argument. For example: stats show -p sysstat Each preset is stored in a file in the /etc/stats/preset directory of the root volume. This directory contains a few template files that may be customized.
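For example, assuming a root volume named vol0, a single counter, a preset, and a collection period could be used like this; counter names vary by Data ONTAP release, so treat volume:vol0:read_ops as illustrative:
system> stats show volume:vol0:read_ops
system> stats show -p sysstat
system> stats start
(wait for a representative interval)
system> stats stop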
CLIENT-SIDE TOOLS: PERFMON The perfmon performance monitoring tool is integrated into the Microsoft Windows operating system. If you use storage systems in a Windows environment, you can use perfmon to access many of the counters and objects available through the Data ONTAP stats command. This feature currently does not work with Windows Server 2008 and higher. Using perfmon to access system performance statistics To use perfmon to access storage system performance statistics, you must specify the name or IP address of the storage system as the counter source. The lists of performance objects and counters reflect the objects and counters available from the Data ONTAP operating system. NOTE: The default sample rate for perfmon is once per second. Depending on which counters you choose to monitor, that sample rate could cause some performance degradation on the storage system. If you want to use perfmon to monitor storage system performance, change the sample rate to once per 10 seconds. You can change the sample rate using the System Monitor Properties.
RAID GROUPS The relationship between aggregates and RAID groups has these characteristics:
Each aggregate has at least one RAID group, and each RAID group belongs to only one aggregate. When a new aggregate is created, a new RAID group is also created with two parity disks and at least one data disk. When disks are added that exceed the specified or maximum RAID group size, new RAID groups are automatically created for an aggregate. You can increase or decrease RAID group size using the aggr options aggr_name raidsize option.
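For example, assuming an aggregate named aggr1, you could display its current options and then change the RAID group size:
system> aggr options aggr1
system> aggr options aggr1 raidsize 16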
RAID Group Size and Composition Poor RAID configuration choices: Unnecessary use of multiple RAID groups Mixed disk sizes RAID groups with wide variations in capacity RAID groups with only one or two data disks each RAID groups with a number of disks larger than the default
RAID GROUP SIZE AND COMPOSITION When you configure the storage system, you must choose the number of drives and RAID groups carefully. Although write performance can benefit from more drives, any change might be masked by the effect of NVRAM and the efficient manner in which the WAFL file system manages write operations. Configuring multiple RAID groups in a volume should not impact performance. However, improper configuration can significantly impact performance. For best results when configuring RAID, use the default RAID group size and then follow the guidelines in Technical Report 3437, Storage Best Practices and Resiliency Guide, at http://www.netapp.com/us/library/technical-reports/tr-3437.html.
Adding Disks to Existing RAID Groups Add RAID groups when the applied load is stressing the drives in the current array. Add RAID groups and disks before the file system or aggregate is 80% to 90% full. Add disks in groups. Plan data expansion so that at least several data disks are used for each RAID group.
ADDING DISKS TO EXISTING RAID GROUPS The maximum RAID group size is 28 (26 data disks and two parity disks when using RAID-DP technology) for SAS and FC disks and 16 for SATA disks. When creating an aggregate, if you do not specify a RAID group size, the system uses the default. Considerations for sizing RAID groups To configure an optimum RAID group size for an aggregate, you must make trade-offs. You must decide which feature of the aggregate is most important to you—speed of recovery, assurance against data loss, or maximization of data storage space. In most cases, the default RAID group size is the best size for your RAID groups. However, you can change the maximum size of those groups.
MONITOR CONNECTIVITY Connectivity problems can arise with functions at the Media Access Control (MAC), TCP/IP, and protocol layers. At the MAC level, you can use the following commands to view connectivity statistics and settings:

SAMPLE COMMAND: ifstat -a; ifstat ns1
RESULT: The ifstat -a command displays status information for all interfaces. To view status information about a specific interface, enter ifstat interface_name for each interface (for example, ifstat ns1). If the number of collisions, CRCs, or runt frames is high, there may be a problem with the media type or card.

SAMPLE COMMAND: arp
RESULT: The arp command displays the contents of the Address Resolution Protocol (ARP) table (hostname and IP address) so that you can modify the table. The command can also help you identify duplicate MAC addresses.

SAMPLE COMMAND: arp -a; arp -d; arp -s
RESULT: The arp -a command displays all current contents of the table. The arp -d command deletes or flushes a bad MAC address from the ARP table. The arp -s command adds a new entry to the ARP table.
Measuring NFS Performance Use this option: nfs.per_client_stats.enable [on|off]. Disable the option when you are not using nfsstat –l. This display shows the breakdown on this mountpoint of lookups, reads, writes, and all operations. The average deviation and the settings for retransmissions of each type also are displayed.
Data ONTAP NFS Output Command: nfsstat -l
Round-trip response times for specific NFS operations are displayed.
MEASURING NFS PERFORMANCE You can track the performance of each NFS server by routinely collecting statistics in the background across all subnets. One of the most important ways to measure performance is to capture response times for NFS operations, such as writes, reads, lookups, and get attributes, so the data can be analyzed by the server and the file system. You can obtain statistics for NFS operations by server (where the storage system is the NFS server) by enabling the per-client stats option and running nfsstat -l. After you establish site-specific baseline measurements, you can compare your system’s performance against optimum benchmark configurations, or against the system’s own performance at different times. Any changes from the baseline can indicate problems that require further analysis. To measure NFS performance, use the sysstat and nfsstat commands:
To display real-time NFS operations every second on your console, enter sysstat 1, or view the output using NetApp System Manager. To focus the output on counters related to response times on Solaris NFS clients, run nfsstat -m. To reset statistics and counters to zero, use nfsstat -z.
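A sketch of that workflow on the storage system, with the per-client option enabled only while you collect the sample:
system> options nfs.per_client_stats.enable on
system> nfsstat -z
(run the workload long enough to get a representative sample)
system> nfsstat -l
system> options nfs.per_client_stats.enable off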
Measuring CIFS Performance
Analyzing smb_hist output
CIFS request time processing: (46457) - milliseconds units

0ms     1ms     2ms    3ms   4ms   5ms   6ms   7ms
13175   17752   5111   664   451   478   570   568

<16ms   <24ms   <32ms  <40ms  <48ms  <56ms  <64ms  unused
4039    2309    569    165    61     21     10     0

The number in parentheses (46457) is the total number of operations since smb_hist statistics were last reset. The column headings represent millisecond (ms) timestamps for operations. Every other row displays the number of operations that took place in the interval in the row above it. In this example, 13,715 operations happened in less than 0.5 ms. The time interval window lies halfway between the values for adjacent columns. In this example, 165 operations occurred in the window from 36 ms to 44 ms.
MEASURING CIFS PERFORMANCE You can use the sysstat and smb_hist commands to measure CIFS performance. Enter the sysstat 1 command to display CIFS operations per second on the console, or use NetApp System Manager. For CIFS throughput statistics, set advanced command privileges and then follow the steps below (click the command-line interface window to step through the process):
1. Enter smb_hist -z to zero the counters.
2. Wait long enough to get a good sample.
3. Enter smb_hist to view CIFS statistics generated since the reset.
4. Review the first section of output.
In the first part of this example smb_hist output, 13,715 operations occurred in less than 0.5 milliseconds (ms), 17,752 operations occurred in the window from 0.5 ms to 1.5 ms, and 5,111 operations occurred in the window from 1.5 ms to 2.5 ms. In normal situations, as the interval window gets larger, the number of operations that take that long decreases to zero.
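Assuming advanced privileges have been set, the whole sequence might look like this sketch:
system> priv set advanced
system*> smb_hist -z
(wait long enough to get a good sample)
system*> smb_hist
system*> priv set admin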
The statit Command Is an advanced-mode command used for detailed analysis of system performance Gathers per-second statistics averaged over the length of time it runs in the background Shows statistics representing all physical and some logical objects on the storage system Collects data that usually represents rates at which things happen
Using the statit Command To obtain statistics using the statit command, complete these steps: 1. Enter advanced privilege mode: priv set advanced 2. Start collecting statistics: statit –b
3. After the necessary amount of time to capture the desired functionality’s statistics, run: statit –e –n 4. To return to normal privilege mode, run: priv set
Sections of the statit Command Report
CPU Multiprocessor CSMP domain switches Miscellaneous WAFL® (Write Anywhere File Layout) RAID Network interface Disk Aggregate Spares and other disks FCP iSCSI Tape
CPU Statistics
506.934263 time (seconds)
275.044317 system time
23.412966 rupt time
251.466451 non-rupt system time
271.837944 idle time
439.543653 time in CP
21.837230 rupt time in CP
CPU STATISTICS The first section of the statit statistics report provides CPU statistics. In this example:
275.044317 system time 54%: Shows the percentage of time the CPUs were busy. 23.412966 rupt time 5% (7022 rupts x 0 usec/rupt): Shows the number of interrupts received when the CPU ran at interrupt level. 271.837944 idle time 44%: Shows the percentage of time the CPUs executed the idle loop. 439.543653 time in CP 92% to 100%: Shows the percentage of time the system was in a CP (consistency point), flushing data to disk.
Sample statit Miscellaneous Statistics (abridged):
16477.97 hard context switches
0.00 CIFS operations
102220.83 network KB received
76757.23 disk KB read
0.00 NVRAM KB written
0.00 WAFL bufs given to clients
0.00 no checksum - partial buffer
(Also reported in this section, values not shown: NFS operations, HTTP operations, network KB transmitted, disk KB written, nolog KB written, checksum cache hits, FCP operations, and iSCSI operations.)
MISCELLANEOUS STATISTICS The miscellaneous section of the statistics report includes rates (or counts) for many operations. The statistics from this section that are most likely to be of interest to you are:
NFS, CIFS, and HTTP operations Network KB received and transmitted Disk KB read and written FCP and iSCSI operations
WAFL RATES The WAFL section of the statistics report displays WAFL rates (or counts). The statistics from this section that are most likely to be of interest to you are: All cache hits and misses Inode cache hits and misses Per-second rates for all the CP types All cache hits and misses and inode cache hits and misses provide information about read performance. It is generally better to have more hits than misses. However, you should consider many factors when analyzing these numbers. For example, a file that is only read once, such as a backup application file, does not reside in cache.
Disk Statistics
Disk Statistics (per second)
ut% is the percent of time the disk was busy.
xfers is the number of data transfer commands issued per second.
xfers = ureads + writes + cpreads + greads + gwrites
chain is the average number of 4K blocks per command.
usecs is the average disk round trip time per 4K block.
disk          ut%  xfers  ureads--chain-usecs  writes--chain-usecs  cpreads-chain-usecs
/vol0/plex0/rg0:
8a.16         5    3.69   0.57 1.00 94500 ...
8a.21         4    3.12   0.57 1.00 39500 ...
DISK STATISTICS The Disk section of the statistics report provides statistics for each drive. Some of the column headings are defined at the top of the screen. Beginning with the fourth column of data, the report uses hyphens in the column headings to group related information. For example, user reads and the associated chain and round-trip times are linked in the heading ureads--chain-usecs. The following list defines some of the column headings on the Disk Statistics report:
disk: indicates which drives are included in the statistics ut%: shows the drive utilization averaged per second, as in the percent of elapsed time that the drive had a request outstanding; utilization rates of more than 80% suggest an I/O bottleneck xfers: shows the total number of transfers, or reads and writes, averaged per second; most drives are capable of 50 to 100 I/O operations per second
Other Resources For more information about data collection and performance, see the Data ONTAP Performance Analysis course, in which you learn to: Use the recommended methodology to compare performance data and performance analysis information Monitor performance using performance tools and establish a baseline of expected throughput and response times for storage systems under planned and increasing workloads Perform capacity planning by monitoring performance and comparing baseline information over time to determine when a storage system will reach maximum capacity Tune protocols such as CIFS, NFS, and SAN for optimal performance (including locating resources with tuning guidelines for database scenarios) Perform bottleneck analysis
Module Summary In this module, you should have learned to: Use the sysstat, stats, and statit commands Describe the factors that affect RAID performance Execute commands to collect data about write and read throughputs Execute commands to verify the operation of hardware, software, and network components Identify commands and options used to obtain configuration and status
Check Your Understanding Which command or commands can you use to display disk utilization? Which command or commands can you use to monitor connectivity? Which command or commands can you use to help detect impending disk problems before they occur?
Module Objectives By the end of this module, you should be able to: Access the NetApp® Support site for the following documents: – Data ONTAP® Upgrade Guide – Data ONTAP Release Notes
Collect data for installation using a configuration worksheet Describe how to perform Data ONTAP software upgrades and reboots Configure a storage system using the setup command
Boot Sequence When the boot sequence starts, a user may press any key to abort and go to the firmware prompt: CFE version 1.2.0 based on Broadcom CFE: 1.0.35 Copyright (C) 2000,2001,2002,2003 Broadcom Corporation. Portions Copyright (C) 2002,2003 Network Appliance Corporation.
CPU type 0x1040102: 600MHz Total memory: 0x20000000 bytes (512MB) Starting AUTOBOOT press any key to abort... Loading: 0xffffffff80001000/8659992 Entry at 0xffffffff80001000
DATA ONTAP BOOTING You can boot the storage system with the following boot options from the boot environment prompt (which can be CFE> or LOADER>, depending on your storage system model):
boot_ontap: boots the current Data ONTAP® software release stored on the boot device (such as the CompactFlash card). By default, the storage system automatically boots this release if you do not select another option from the basic menu. boot_primary: boots the Data ONTAP release stored on the boot device as the primary kernel. This option overrides the firmware AUTOBOOT_FROM environment variable if it is set to a value other than PRIMARY. By default, the boot_ontap and boot_primary commands load the same kernel. boot_backup: boots the backup Data ONTAP release from the boot device. The backup release is created during the first software update to preserve the kernel that shipped with the storage system. It provides a "known good" release from which you can boot the storage system if it fails to automatically boot the primary image. netboot: boots from a Data ONTAP version stored on a remote HTTP or TFTP server. Netboot enables you to:
– Boot an alternative kernel if the boot device becomes damaged
– Upgrade the boot kernel for several devices from a single server
boot_diags: boots Data ONTAP into a special diagnostic kernel that can be used to troubleshoot hardware problems.
Displaying the Boot Menu As the storage system boots, press Ctrl-C to display the special boot menu on the console: CFE version 1.2.0 based on Broadcom CFE: 1.0.35 Copyright (C) 2000,2001,2002,2003 Broadcom Corporation. Portions Copyright (C) 2002,2003 Network Appliance Corporation. CPU type 0x1040102: 600MHz Total memory: 0x20000000 bytes (512MB)
Starting AUTOBOOT press any key to abort... Loading: 0xffffffff80001000/8659992 Entry at 0xffffffff80001000 NetApp Data ONTAP Release 8.0 7-Mode Copyright (C) 1992-2009 NetApp. All rights reserved. ******************************* * * * Press Ctrl-C for Boot Menu. * * * *******************************
Boot Menu in Data ONTAP 7.3 As the storage system boots, the special boot menu allows you to control the booting sequence: Special boot options menu will be available. NetApp Release 7.3.2 …
(1) Normal boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize all disks.
(4a) Same as option 4, but create a flexible root volume.
(5) Maintenance mode boot.
BOOT MENU IN DATA ONTAP 8.0 The boot menu for DATA ONTAP 8.0 gives you more choices than the DATA ONTAP 7.3 menu:
Choice 4 initializes all of the disks and creates a FlexVol® root volume. Choice 5 allows users to enter maintenance mode, where they can perform aggregate and disk operations. Choice 6 allows users to update the boot device card from a backup configuration. Choice 7 allows users to install new software on a V-Series system. Choice 8 reboots the storage system.
Boot Sequence: Normal Booting If you do not press Ctrl-C while the storage system is booting (or if you select Normal boot), the system: 1. Loads the Data ONTAP kernel into physical memory from a CompactFlash card 2. Checks the root volume on the physical disk ... Wed Apr 7 20:53:00 GMT [mgr.boot.reason_ok:notice]: System rebooted. CIFS local server is running. system> Wed Apr 7 20:53:01 GMT [console_login_mgr:info]: root logged in from console Wed Apr 7 20:53:23 GMT [NBNS03:info]: All CIFS name registrations complete for local server system>
Boot Sequence: Configuration Files As the storage system boots and the Data ONTAP kernel is loaded, it reads these configuration and system files from the /etc directory: /etc/rc file (boot initialization) /etc/registry file (option configurations) /etc/hosts file (local name resolution) Wed Apr 7 20:52:50 GMT [fmmbx_instanceWorke:info]: Disk 0b.18 is a primary mailbox disk ...Loading Volume vol0 Wed Apr 7 20:52:53 GMT [rc:notice]: The system was down for 64 seconds Wed Apr 7 20:52:54 GMT [dfu.firmwareUpToDate:info]: Firmware is up-todate on all disk drives Wed Apr 7 20:52:58 GMT [ltm_services:info]: Ethernet e0a: Link up add net default: gateway 10.32.91.1 Wed Apr 7 20:53:00 GMT [mgr.boot.floppy_done:info]: NetApp Release 8.0 boot complete.
Data ONTAP Upgrades NetApp pre-installs the Data ONTAP operating system on all shipped storage systems. When a new version of the operating system becomes available, you can upgrade in one of these ways: – Single-system upgrade – High-availability nondisruptive upgrade (NDU) – Fresh install
Always check the Data ONTAP Upgrade Advisor on the NetApp Support site for current information.
DATA ONTAP UPGRADES The Data ONTAP operating system is pre-installed on all systems. If you want to upgrade the existing version, you should review the considerations for each type of upgrade. If systems will be decommissioned and then put to a new use, you should perform a fresh install. For example, if you upgrade your hardware and want to send the old hardware to another division within the company, perform a fresh install on the old hardware to remove all data, allowing the new owner to start with clean hardware. Usually administrators want to upgrade the existing system. Upgrading an existing single system is easy, but requires enough downtime for a reboot. High-availability configurations allow administrators to use a rolling upgrade approach to ensure that the upgrade is nondisruptive.
DATA ONTAP UPGRADE ADVISOR NetApp provides a tool called Upgrade Advisor to customers that have a SupportEdge Standard contract. This tool ensures that all storage systems have met the requirements for upgrading to the current release and generates an upgrade plan to help you perform the upgrade steps. Upgrade Advisor may be found on the NetApp Support site. For you to be able to use the Upgrade Advisor tool, your system must have a valid support contract and must be configured to send AutoSupport messages to NetApp. Your first step in any upgrade process should be to use Data ONTAP Upgrade Advisor.
SINGLE-SYSTEM UPGRADE STEPS Perform these steps to upgrade a system to Data ONTAP 8.0. Although the steps are simple, you should create a plan and test it before upgrading any production systems. You should also review your plan with NetApp Professional Services. The steps are as follows: 1. Review your current system hardware and licenses to ensure that you know what you currently have. 2. Review all necessary documentation, including the Data ONTAP Upgrade Guide and Data ONTAP Release Notes for the destination version of Data ONTAP. 3. Review the NetApp Bugs Online site for any known installation and upgrade problems. 4. Obtain the Data ONTAP upgrade image from the NetApp Support site. 5. Generate an AutoSupport e-mail (not necessary in Data ONTAP 7.2.4 or later). 6. Use the software command to install the software and download the new version of the Data ONTAP operating system to the boot device card. NOTE: The firmware is automatically upgraded with the operating system. 7. When you are ready, reboot the system. 8. When the system is running, verify the installation.
Single-System Upgrade: Step 1 Review your current system hardware and licenses: Use the sysconfig –a command to display a hardware inventory of the system. Use the version command to check your version of the Data ONTAP operating system. Use the version –b command to check the current version of the firmware. Use the license command to see your current licenses.
SINGLE-SYSTEM UPGRADE: STEP 1 Review your current hardware and software to ensure that your records are current. Because the system will be down at least long enough for a reboot, consider if there is any new hardware that you want to upgrade at the same time.
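For example, a quick pre-upgrade inventory from the console might look like this:
system> sysconfig -a
system> version
system> version -b
system> license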
Single-System Upgrade: Step 2 Review all necessary documentation, including: Data ONTAP Upgrade Guide for the destination version of the Data ONTAP operating system Data ONTAP Release Notes for the destination version of the Data ONTAP operating system
SINGLE-SYSTEM UPGRADE: STEP 3 By reviewing Bugs Online on the NetApp Support site, you can research any known bugs and either learn how to fix them or choose to download a different release that will not cause a problem.
ZIP and TGZ Formats NetApp has released two different formats of Data ONTAP 8.0 and later versions: The ZIP format is for use when upgrading from Data ONTAP 7.3. The TGZ format is for use when doing a fresh install or when performing incremental updates after Data ONTAP 8.0.
ZIP AND TGZ FORMATS When upgrading from the Data ONTAP 7G operating system, you will need to use the .zip package. After you are running Data ONTAP 8.0, for future upgrades you will need to use the .tgz package.
SINGLE-SYSTEM UPGRADE: STEP 5 This AutoSupport notification includes a record of the system status just prior to upgrade. It saves troubleshooting information that you can use if a problem occurs during the upgrade process. This notification has been sent automatically since Data ONTAP 7.2.4.
Single-System Upgrade: Step 6 Use the software command to install the software and download the latest version to the boot device card: From the storage system prompt, enter the command to extract and install Data ONTAP system files from the upgrade package. Specify the correct system files for an upgrade:
system> software update http://10.254.134.39/image.zip -d -r
The command can take as long as 60 minutes to complete. system>
SINGLE-SYSTEM UPGRADE: STEP 6 In this software command, http://10.254.134.39/image.zip is an example. Use the URL and file name that applies in your environment. You can use the following options with the software command:
–d: This option prevents the system from automatically performing the download command, which updates the boot device card. The download command can take a lot of time to run, during which you do not have access to the prompt. For this reason, some administrators use the –d option and then issue the download command separately, at a time convenient for them. –r: This option prevents the system from rebooting automatically.
NOTE: The software install option has been deprecated in favor of the software update option for upgrades. Other methods of installation, such as using setup.exe from a CIFS share or a .tar file, are no longer supported. These files will not be available for Data ONTAP 8.0.
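If you use the -d and -r options as shown earlier, you must later copy the new system files to the boot device and reboot yourself. A sketch of that follow-up, reusing the same hypothetical URL, might be:
system> software update http://10.254.134.39/image.zip -d -r
system> download
system> reboot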
SINGLE-SYSTEM UPGRADE: STEP 7 When you reboot the storage system, it reboots in normal mode by default. You can also invoke a boot menu that allows you to reboot in alternative modes for the following reasons: to correct configuration problems, to recover a lost password, to correct certain disk configuration problems, or to restore configuration information back to the boot device card. Use the version command to see the version of the Data ONTAP operating system that is running. The –b option displays the contents of the boot device file system. Keep in mind that after the reboot, background processes may still be working to fully upgrade your system. The Data ONTAP operating system continues to function and can serve data to clients while these background processes are completing the upgrade.
Single-System Upgrade: Step 8 Verify the installation: Administrators can use the version or sysconfig command to verify the new version of the Data ONTAP operating system that is running.
SINGLE-SYSTEM UPGRADE: STEP 8 Administrators can use the version command to see the version of the Data ONTAP operating system that is running. The –b option displays the contents of the boot device file system. Keep in mind that after the reboot, background processes may still be working to fully upgrade your system. The Data ONTAP operating system continues to function and can serve data to clients while these background processes are completing the upgrade.
NONDISRUPTIVE UPGRADES A nondisruptive upgrade, or NDU, is a mechanism that uses high-availability storage controller technology to minimize client disruption during an upgrade. This procedure allows each of the two nodes in a highavailability pair to be upgraded individually to a newer version of the Data ONTAP operating system and firmware. When you perform an NDU, you upgrade three key areas:
Shelf firmware Disk firmware The Data ONTAP operating system and storage controller firmware (BIOS)
Storage Controller NDU Before you attempt to perform a storage controller NDU: Review all necessary documentation for the destination version of the Data ONTAP operating system:
– Data ONTAP Upgrade Guide
– Data ONTAP Release Notes
NDU PROCESS: STEP-BY-STEP 1. After you have prepared for the storage controller NDU, initiate the process by installing the new version of the Data ONTAP operating system on both systems. 2. Then reboot the first system, allowing the high-availability configuration to fail over to the second system. NOTE: In Data ONTAP 8.0, takeovers and givebacks have been dramatically improved. For example, takeovers and givebacks should take less than 60 seconds for all SAN Fibre Channel and iSCSI configurations for systems with up to 10,000 Snapshot® copies. 3. After the first system has rebooted, verify the new installation and then have the partner perform the giveback. After both systems are running again, reboot the second system and fail over to the first system. 4. Give back to the second system. Notice that the difference between an NDU and a single-system upgrade is in the reboot process. In this case, you only reboot one system at a time. For this reason, an NDU is sometimes called a rolling upgrade.
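A highly simplified sketch of the rolling sequence for a pair of nodes named system1 and system2 (both names hypothetical) follows; consult the Data ONTAP Upgrade Guide for the authoritative procedure:
system1> software update http://10.254.134.39/image.zip -r
system2> software update http://10.254.134.39/image.zip -r
system2> cf takeover
(system1 reboots and comes up on the new release)
system2> cf giveback
system1> cf takeover
(system2 reboots and comes up on the new release)
system1> cf giveback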
Fresh Installs of Data ONTAP
1. Install and download the Data ONTAP operating system on the boot device drive (boot device).
2. Interrupt the reboot and choose one of these options from the boot menu:
For Data ONTAP 8.0.x 7-Mode, choose this option:
– (4) Clean configuration and initialize all disks
For Data ONTAP 7.3.x, choose one of these two options:
– (4) Initialize all disks.
– (4a) Same as option 4, but create a flexible root volume.
FRESH INSTALLS OF DATA ONTAP You have already reviewed the single-system upgrade process and the NDU process for a high-availability configuration. You may also want to perform a fresh install. Because a fresh install requires removing all of your data, this type of installation is the least-often used. It allows you to decommission a system and prepare the system for other uses. To perform a fresh installation of the Data ONTAP operating system, follow these steps: Step 1: Install the Data ONTAP operating system on the boot device, which is the boot device drive. Step 2: Choose to create the root as a FlexVol volume (traditional volumes are not supported). The system should boot. In 7-Mode, this should be almost identical to 7.3.x. Step 3: Continue with the normal setup. For more information, see the Data ONTAP 8.0 System Administration Guide.
Using the setup Script 1. The script runs automatically during initial system configuration. 2. Use the configuration worksheet to prepare to use the script. 3. Run the script at any time to change the existing configuration. 4. Reboot for the changes to take effect, or run this command: system> source /etc/rc
The setup Script: Part 1 system> setup The setup command will rewrite the /etc/rc, /etc/exports, /etc/hosts, /etc/hosts.equiv, /etc/dgateways, /etc/nsswitch.conf, and /etc/resolv.conf files, saving the original contents of these files in .bak files (e.g. /etc/exports.bak). Are you sure you want to continue? [yes] y NetApp Release 8.0.1 7-Mode System ID: 0101173126 (system); partner ID: 0101169724 (system2) System Serial Number: 1056850 (system) System Rev: C0 System Storage Configuration: Single-Path HA slot 0: System Board Processors: 4 Processor type: Intel Xeon Memory Size: 4096 MB ...
THE SETUP SCRIPT: PART 1 The setup script installs new versions of /etc/rc, /etc/hosts, /etc/exports, /etc/resolv.conf, /etc/hosts.equiv, and /etc/dgateways to reflect the new configuration. When setup is complete, the new configuration does not take effect until the storage system is rebooted. You can reconfigure a storage system any time by typing setup at the console prompt (this is not recommended unless you are performing a new installation). If a reconfiguration is performed, the old contents of these six configuration files are saved in rc.bak, hosts.bak, exports.bak, resolv.conf.bak, hosts.equiv.bak, and dgateways.bak. In the script content later in this module, empty brackets ([ ]) indicate that there is no default setting for the question. When the brackets have a value, the displayed value is the default.
The setup Script: Part 2 Please enter the new hostname [system]: Do you want to configure interface groups? [n]: Please enter the IP address for Network Interface e0a [10.254.134.36]: Please enter the netmask for Network Interface e0a [255.255.252.0]: Should interface e0a take over a partner IP address during failover? [n]: Please enter media type for e0a {100tx-fd, tp-fd, 100tx, tp, auto (10/100/1000)} [auto]: Please enter flow control for e0a {none, receive, send, full} [full]: Do you want e0a to support jumbo frames? [n]:
... e0b, e0c, and e0d interfaces as well ... Would you like to continue setup through the web interface? [n]:
The setup Script: Part 3 Please enter the name or IP address of the default gateway [10.254.132.1]: The administration host is given root access to the filer's /etc files for system administration. To allow /etc root access to all NFS clients enter RETURN below. Please enter the name or IP address of the administration host: Please enter timezone [GMT]: Where is the filer located? []: What language will be used for multi-protocol files (Type ? for list)?: language not set
The setup Script: Part 4 Do you want to run DNS resolver? [n]: y Please enter DNS domain name [development.netappu.com]: You may enter up to 3 nameservers Please enter the IP address for first nameserver [10.254.132.10]: Do you want another nameserver? [y]: Please enter the IP address for alternate nameserver [10.254.134.25]: Do you want another nameserver? [n]: Do you want to run NIS client? [n]: The initial aggregate currently contains 3 disks; you may add more disks to it later using the "aggr add" command. Now type 'reboot' for changes to take effect.
CHECKING THE STORAGE SYSTEM'S STATUS You can verify your software version by using the sysconfig or sysconfig -v command, or by using the version command. One way to verify the software version on the disks is to change to the /etc/boot directory and then view the link to which that directory points. Firmware Version There are two primary ways to verify your firmware version: use sysconfig –v, or halt the storage system and type version at the OK prompt. Ensure that the firmware version on your system is what it should be and that you have the current version of the firmware for your platform. Licenses Use the license command to verify that all licenses are listed for your storage appliance. The licenses that are displayed in the AutoSupport log are encrypted versions of the actual licenses.
Configuration of a Storage System The config command allows administrators to back up the system's configuration information. The backup file may be used for: – Restoring the system configuration if disasters or emergencies occur – Cloning an existing system to a new system
STORAGE SYSTEM CONFIGURATION COMMANDS NOTE: Adding -v to the config command causes the Data ONTAP operating system to also back up or restore volume-specific configurations. See the System istration Guide for the appropriate Data ONTAP version for more information.
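As a sketch, assuming a hypothetical configuration file name of pre_upgrade (stored under /etc/configs), a backup and restore might look like this:
system> config dump pre_upgrade
system> config restore pre_upgrade
Add -v to either command to include volume-specific configuration.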
Module Summary In this module, you should have learned to:
Access the NetApp Support site for the following documents: – Data ONTAP Upgrade Guide – Data ONTAP Release Notes Collect data for installation using a configuration worksheet Describe how to perform Data ONTAP software upgrades and reboots Configure a storage system using the setup command
Check Your Understanding What is the name of the worksheet that you can use to help you set up your storage system? Which command simplifies Data ONTAP upgrades?
Storage System Access Exercise careful attention when you set up administrative access. – Limit who has administrative access – Limit where users can gain access
To secure your system:
– Ensure a secure configuration
– Manage users
– Communicate securely with the storage system
– Guard physical access
NOTE: The Data ONTAP 8.0 operating system and later default to more secure settings than previous versions.
Role-Based Access Control Role-based access control (RBAC) is a mechanism for managing a set of capabilities that an administrator can perform on a storage system.
Follow these steps to implement RBAC (a command sketch follows the list):
– Create a role with specific capabilities.
– Create a group with one or more assigned roles.
– Create one or more users that are assigned to one or more groups.
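A minimal sketch using the 7-Mode useradmin command, with a hypothetical role, group, and user (capability names vary by release):
system> useradmin role add snap_role -a cli-snap*
system> useradmin group add snap_group -r snap_role
system> useradmin user add bob -g snap_group
The user inherits the role's capabilities through the group membership.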
MULTIPLE HA TECHNIQUES IN COMBINATION The HA techniques described do not have to be used in isolation; often they are combined. The example in this slide shows multipathing high-availability controller configuration, referred to as MPHA.
Virtualization and NetApp Solutions NetApp provides storage solutions for virtual infrastructures. Virtualization storage client – VMware® cloud, server, and desktop – NetApp, VMware, and Cisco® FlexPod™ for VMware – Microsoft Hyper-V™ – Citrix® server and desktop
Accelerated NCDA Boot Camp NetApp Protection Software Administration Data ONTAP Performance Analysis SAN Fundamentals on Data ONTAP SAN Implementation Workshop
Web sites – NetApp Support (support.netapp.com) – NetApp (www.netapp.com)
Module Objectives By the end of this module, you should be able to: Describe how data is structured within a WAFL® (Write Anywhere File Layout) file system on a traditional volume Explain how data is structured within a WAFL file system in a flexible volume on a 32-bit aggregate Describe how data is structured within a WAFL file system in a flexible volume on a 64-bit aggregate
WAFL File System Is the file system in the Data ONTAP® operating system Stores metadata in files and uses a buffer tree structure Allows the Data ONTAP operating system to write metadata files and blocks anywhere on disk (Write Anywhere File Layout) Is more flexible than traditional file systems, because metadata is not in fixed locations on disk, with the exception of the root inode
WAFL Block Structure The WAFL file system organizes data into blocks. Use the vol status -b command to block size. system> vol status -b Volume Block Size (bytes) Vol Size (blocks) FS Size (blocks) ------ ------------------ ----------------- ---------------vol0 4096 7058256 7058256
WAFL File System and Inodes WAFL organizes some metadata into inodes. An inode: – Is a collection of information about a file or directory – Holds information including:
Time and date stamp Size UNIX® permissions Windows® access control list (ACL)
WAFL STRUCTURE: VOLINFO AND FSINFO BLOCKS The root inode consists of volinfo blocks 1 and 2. These blocks, in turn, can point to up to 256 fsinfo blocks: 1 for the Active File System (AFS) and 255 for each possible Snapshot® copy.
Level 1 In Traditional Volumes For files that are greater than 64 bytes, but less than or equal to 64 KB, a level-1 inode structure is used. [Diagram: 32-bit pointers from the root inode to the file inode.]
LEVEL 1 IN TRADITIONAL VOLUMES With traditional volumes, there are a maximum of sixteen level-1 pointers. Pointers are connected by Physical Block Numbers (PBNs) and Volume Block Numbers (VBNs).
Level 1 In 32-Bit Aggregates For files that are greater than 64 bytes but less than or equal to 32 KB, a level-1 inode structure is used. [Diagram: 32-bit pointers from the root inode through the inode file to the file inode in a 32-bit aggregate; each pointer is 2 x 4 bytes, because the physical and virtual VBNs are separate.]
LEVEL 1 IN 32-BIT AGGREGATES With level-1 inodes in 32-bit flexible volumes, there are only 8 pointers. Eight-byte pointers connect a four-byte physical volume block number (PvBN) to a four-byte virtual volume block number (vVBN).
LEVEL 1 IN 64-BIT AGGREGATES Within level-1 inodes in 64-bit flexible volumes, there are only 4 pointers at the first level. Each sixteen-byte pointer connects a PvBN to a vVBN: an eight-byte PvBN and an eight-byte vVBN.
LEVEL 2 IN TRADITIONAL VOLUMES Within level-2 inodes of traditional volumes, there are a maximum of 1024 possible pointers associated with each level 1 pointer.
LEVEL 2 IN 32-BIT AGGREGATES Within level-2 inodes of 32-bit flexible volumes, there are a maximum of 1024 pointers associated with each level-1 pointer.
LEVEL 2 IN 64-BIT AGGREGATES Within level-2 inodes of 64-bit flexible volumes, there are a maximum of 512 pointers associated with each level-1 pointer.
DIRECTORIES Each directory block is divided into two primary portions: an array of entries and an array of name chunks. The entry array for each directory block contains 128 rows, each currently 12 bytes long. The rest of the WAFL® (Write Anywhere File Layout) block contains the array of 160 name chunks, each 16 bytes long. Entries contain a file ID, a generation number, and a pointer to related name chunks. If the NFS name of a storage object in the directory is longer than 16 characters, it uses multiple name chunks from that directory block. Directory entries often have three names: one for NFS, another for DOS 8.3 compatibility, and a third for Unicode naming. Unicode and NFS names can't be merged, because NFS is case-sensitive and CIFS is not. If a file has multiple names, each name uses a separate name chunk. Some Unicode character sets use two bytes per character, in which case the Unicode name will need a name chunk per 8 characters. After all the name chunks in a directory block are used, the WAFL file system uses another directory block, even though there are still entry slots available. The same holds true when you fill all of the entry slots, as in NFS environments with short names. Finally, two files with the same name are not allowed in the same directory.
Module Summary Now that you have completed this module, you should be able to: Describe how data is structured within a WAFL file system on a traditional volume Explain how data is structured within a WAFL file system in a flexible volume on a 32-bit aggregate Describe how data is structured within a WAFL file system in a flexible volume on a 64-bit aggregate
Module Objectives By the end of this module, you should be able to: Distinguish between the administration and system shells Enable the diagnostic user to log in to the system shell
SHELLS The system shell provides a powerful run-time environment, which has proven useful in the field for diagnosing problems with GX deployments. The Data ONTAP 7G operating system does not have this option, but in a clustered Data ONTAP configuration, many common-mode features reside in the FreeBSD ecosystem and kernel. Those common-mode features include EMS, the AutoSupport™ tool, Network Data Management Protocol (NDMP), environmental policy, and ntpd. The FreeBSD shell is accessible from both the Data ONTAP 8.0 7-Mode and Data ONTAP 8.0 Cluster-Mode modes of operation. System shell access is not available through network protocols, such as Remote Shell (RSH), Secure Shell (SSH), and Telnet. Shell access is restricted to console sessions invoked on either the serial port or Remote LAN Module (RLM), and is intended for internal development as well as in-field serviceability.
System Shell Access Only the diagnostic user may access the system shell, and the diagnostic account is predefined but disabled by default. Therefore, to perform lower-level diagnostics, you must: 1. Enable the diagnostic user. 2. Access the system shell.
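In Data ONTAP 8.0 7-Mode, a sketch of that sequence from the console might look like the following; you are prompted to set a password for the diag user before you log in to the system shell:
system> priv set advanced
system*> useradmin diaguser unlock
system*> useradmin diaguser password
system*> systemshell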
OPERATIONS AT THE SYSTEM SHELL Access to the FreeBSD level is documented and available to customers. Follow this FAQ: Q: Can I run scripts on BSD to gather data? A: Don't do this for Data ONTAP 8.0 7-Mode. Currently, many WAFL® (Write Anywhere File Layout) processes run separately from the BSD platform, and data retrieved from BSD does not provide a reliable view of system performance. Q: Can I restart daemons without rebooting? A: No. Please contact technical support. Q: Do I need to access the system shell on a day-to-day basis? A: No, Data ONTAP 8.0 administrators do not need this feature for normal operations.
Module Summary In this module, you should have learned to: Distinguish between the administration and system shells Enable the diagnostic user and log in to the system shell