Storage Area Network (SAN) is a general term for an architecture that uses external storage devices to provide network storage for applications running on an enterprise-level network.
Typical applications that use a SAN include enterprise data warehousing and data mining, mail servers, and other high-availability applications.
Using a SAN allows you to locate the mission-critical data externally and administer it separately from the applications that process that data. This type of architecture originated in mainframe computing environments.
How SAN Works
SANs are typically hardware/software storage arrays running on dedicated subnets that combine a variety of disk technologies, including magnetic and optical disk storage, RAID technologies such as disk mirroring and disk striping, and tape backup resources. SANs generally use high-speed Fibre Channel technologies for interconnections between the SAN and a group of computers running an application. Fibre Channel is a high-speed direct connection technology that supports data transfer rates of 1 Gbps and higher. Data I/O is performed using block transfer methods, with the application server attached directly to the storage system.
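The disk striping mentioned above can be sketched as a simple mapping from logical blocks to member disks. This is a minimal illustration of RAID 0-style striping only; the disk count and stripe unit below are arbitrary values chosen for the example, not parameters of any particular array.

```python
# Illustrative striping parameters (assumptions, not real array settings).
STRIPE_UNIT = 4      # logical blocks per stripe unit
DISKS = 3            # member disks in the array

def locate(logical_block: int) -> tuple[int, int]:
    """Return (disk index, block offset on that disk) for a logical block."""
    stripe_unit_no, within = divmod(logical_block, STRIPE_UNIT)
    disk = stripe_unit_no % DISKS                     # rotate across disks
    offset = (stripe_unit_no // DISKS) * STRIPE_UNIT + within
    return disk, offset

# Consecutive stripe units rotate across the disks, so a large
# sequential read is serviced by all three spindles in parallel.
for lb in range(0, 24, 4):
    print(lb, locate(lb))
```

Mirroring is the complementary technique: each write goes to the same offset on two disks, trading capacity for redundancy rather than for throughput.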
SANs are typically used to centralize storage of data in an enterprise, which simplifies administration and backup of the data. SANs are often located near legacy mainframe computing environments but are gaining importance in distributed client/server environments as well. SANs are also used as remote storage and archival facilities connected to networks by high-speed Synchronous Optical Network (SONET) connections such as OC-3.
Enterprise-level Storage Devices
It is easy to get confused by the various buzzwords relating to external enterprise-level storage devices because standards in this area have not been developed and ratified by standards bodies. Here are two other related storage system concepts:
- Network-attached storage (NAS): Involves data storage devices connected to computers using a standard network connection such as Ethernet. This is in contrast to SAN, in which a group of computers uses multipoint Fibre Channel technology. The access model also differs: NAS uses file servers similar to the Network File System (NFS) in UNIX environments (from which the concept of NAS evolved), while SAN uses block-mode I/O for applications such as clustering and database access.
- Direct-attached storage (DAS): Involves a storage system connected to only a single computer using either Small Computer System Interface (SCSI) or Fibre Channel technology. DAS is usually the only solution if your servers are at different geographical locations around your enterprise or if the application that uses them can support only this form of storage (for example, Windows Clustering, which requires a shared SCSI bus).
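The file-level versus block-level distinction drawn above can be sketched in a few lines. This is a local simulation only, assuming an ordinary directory as a stand-in for an NFS/SMB share and an ordinary file as a stand-in for a raw LUN; no actual network storage is involved.

```python
import os
import shutil
import tempfile

BLOCK = 512  # illustrative block size

# File-level access (the NAS model): the client names a file, and the
# file server's own file system decides where the bytes live on disk.
share = tempfile.mkdtemp()                      # stand-in for an NFS/SMB share
path = os.path.join(share, "report.txt")
with open(path, "w") as f:                      # create and write by name
    f.write("quarterly numbers")
with open(path) as f:                           # read back by name
    text = f.read()

# Block-level access (the SAN/DAS model): the host addresses raw blocks
# by offset and layers its own file system or database on top of them.
lun = os.path.join(share, "lun.img")            # stand-in for a raw LUN
with open(lun, "wb") as f:
    f.truncate(BLOCK * 4)                       # a tiny 4-block "device"
fd = os.open(lun, os.O_RDWR)
os.pwrite(fd, b"db page".ljust(BLOCK, b"\x00"), 2 * BLOCK)  # write block 2
page = os.pread(fd, BLOCK, 2 * BLOCK)                       # read block 2
os.close(fd)
shutil.rmtree(share)

print(text)                      # quarterly numbers
print(page.rstrip(b"\x00"))      # b'db page'
```

Note how the block-level client must do its own offset arithmetic, which is exactly why block storage suits databases and cluster software that manage their own on-disk layout.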
Growth of SAN technology in the enterprise has been driven by demand but is limited by the lack of agreed-upon standards. The main body pushing for standards in this area is the Storage Networking Industry Association (SNIA), which has submitted its Simple Network Management Protocol (SNMP) Management Information Base (MIB) for SAN to the Internet Engineering Task Force (IETF) for consideration. Other groups pushing their own management interface solutions for SAN technology include Microsoft, which backs the Common Information Model (CIM) standard developed by the Distributed Management Task Force (DMTF), and Sun Microsystems, with its StoreX initiative.
Choosing Between SAN and NAS
Use a SAN if your data can be centrally located within your enterprise and if your application needs to access data directly using block transfers instead of using shared files. Use NAS if your data needs to be shared between different operating system platforms or for file-based applications such as Web servers.