FAQs & Features
1. What is a storage OS?
A storage OS is an operating system for servers that is designed and built around data availability, high performance and stability.
Depending on its features, it will include storage protocols such as NFS, SMB, iSCSI and FC. OviOS Linux provides iSCSI, NFS and SMB by default. FC and FCoE can be configured manually by adding the necessary drivers and modules.
A storage OS is usually not concerned with graphics and audio drivers, or other components not needed for its purpose, and OviOS does not include any of them.
2. What is OviOS, and what isn't it?
OviOS is a highly performant storage OS which can be used to provide storage to different clients via iSCSI, NFS, SMB and FTP.
OviOS is not suitable for desktops, laptops or media servers. It doesn't provide online repositories; however, packages can be upgraded manually and non-disruptively.
3. What is a LUN?
A Logical Unit Number (LUN) is a number that identifies a block device created on the storage system; in practice the term also refers to the block device itself.
Using SCSI commands over TCP/IP networks (the iSCSI protocol), clients can connect to OviOS LUNs and use them as regular disks. OviOS LUNs can be thin or thick provisioned.
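For illustration, this is roughly how a Linux client with the standard open-iscsi tools would attach to an OviOS LUN; the server address and target IQN below are placeholders and will differ on your system:

    # discover the targets exported by the OviOS server (address is an example)
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50
    # log in to one of the discovered targets (IQN is an example)
    iscsiadm -m node -T iqn.2017-01.local.ovios:target1 -p 192.168.1.50 --login
    # the LUN now appears as a regular disk, e.g. /dev/sdb
    lsblk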
4. What is thin and thick provisioning?
When the admin creates a thin LUN, the size assigned to the new LUN is not reserved in the storage pool, as opposed to thick LUNs, which reserve their full size in the pool when created.
With thin LUNs, when usage at the client level drops because data has been deleted, the storage system can reflect this change, but only if the client issues UNMAP commands to inform the storage OS. This allows the admin to over-provision the storage pool, but measures must be taken to ensure there is always enough free space in the pool.
Although the OviOS commands implement thin and thick provisioning, the functionality is not yet available at the filesystem level: ZFS on Linux does not yet provide UNMAP support. LUNs can still be created and used as thin, but the OS will not know when data is deleted inside the LUN.
OviOS recommends creating only thick LUNs until UNMAP is supported.
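As a rough sketch of the underlying mechanism, at the ZFS level a thick LUN corresponds to a regular zvol and a thin LUN to a sparse one. The pool and LUN names below are examples, and the OviOS shell wraps these operations with its own commands:

    # thick: the full 100G is reserved in the pool at creation time
    zfs create -V 100G tank/lun_thick
    # thin (sparse): space is only allocated as data is written
    zfs create -s -V 100G tank/lun_thin
    # compare the reservations of the two LUNs
    zfs get volsize,refreservation tank/lun_thick tank/lun_thin

As noted above, space freed inside a thin LUN is not reclaimed until UNMAP is supported.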
5. What is an iSCSI target?
In OviOS Linux an iSCSI target is a controller located on the iSCSI server. iSCSI initiators connect to a target in order to access its LUNs. In OviOS you can create multiple targets, control which initiators can access them, and map multiple LUNs to each target.
6. What is a volume?
In OviOS Linux a volume is a filesystem that acts as a directory in the storage pool. The volume can be shared with Windows clients as a network drive via the SMB protocol, or with Linux and UNIX clients as an NFS share.
When creating a volume, the admin can assign a volume size, which reserves the space on the pool.
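At the ZFS level, a volume with an assigned size is simply a filesystem with a reservation. The names below are examples, and the OviOS shell wraps this with its own commands:

    # create a volume and reserve 50G for it in the pool
    zfs create -o reservation=50G tank/vol1
    # check the space accounting
    zfs list -o name,used,available,reservation tank/vol1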
7. What is NFS and SMB?
NFS (Network File System) and SMB (Server Message Block) are protocols used to share storage volumes with different clients. NFS is typically used with UNIX-like systems, while SMB is used with Windows clients. OviOS supports both protocols by default.
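For illustration, a Linux client could mount an OviOS volume over either protocol as follows; the server address, share name and username are placeholders:

    # NFS (UNIX-like clients)
    mount -t nfs 192.168.1.50:/tank/vol1 /mnt/nfs
    # SMB (requires the cifs-utils package on Linux)
    mount -t cifs //192.168.1.50/vol1 /mnt/smb -o username=user1

A Windows client would map the same SMB share as a network drive.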
8. Does OviOS support remote authentication?
Yes. OviOS supports Active Directory authentication for SMB clients by joining the system to a Windows domain. The OviOS shell provides a tool for this purpose, called smbovios.
OviOS also supports NIS authentication, by using the NIS client feature.
9. What are snapshots?
OviOS supports point-in-time snapshots for volumes, LUNs and pools. A snapshot can be taken manually at any time or scheduled, without any impact on data access or performance.
Volumes, LUNs or entire pools can be restored from snapshots.
The admin can also use the replication feature and replicate pools, volumes or LUNs to a secondary OviOS storage system, for disaster recovery scenarios.
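As a sketch of the underlying mechanism, ZFS snapshots and rollbacks look like this; the dataset and snapshot names are examples, and the OviOS shell and scheduler wrap these operations:

    # take a point-in-time snapshot of a volume
    zfs snapshot tank/vol1@before_upgrade
    # list the snapshots of that volume
    zfs list -t snapshot -r tank/vol1
    # revert the volume to the snapshot after accidental deletion
    zfs rollback tank/vol1@before_upgrade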
10. What operating systems does OviOS Linux support?
OviOS has been tested and supports the following client operating systems:
10.1. With iSCSI: ESXi 4.x, 5.x and 6.x; Windows 7, 8, Server 2008 and Server 2012; Linux; the BSD family; Solaris 11 and the Illumos family.
10.2. With NFS: ESXi 4.x, 5.x and 6.x; Linux; the BSD family; Solaris 11 and the Illumos family; the Windows NFS client.
10.3. With SMB v1, v2 and v3: Windows 7, 8, 10, Server 2008 and Server 2012; Linux; Chrome OS.
11. Main features
11.1. HA Clustering
OviOS can be configured as an HA cluster of 2 or 3 OviOS nodes, using corosync and pacemaker.
For automatic failover and SCSI fencing, a 3-node setup is required.
11.2. Replication
Install a second OviOS Linux server and set up replication from the production server to the backup server.
Your data will be available at all times on the backup server for disaster recovery scenarios.
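At the ZFS level, replication is based on sending snapshots to the backup server. The hostnames, pool and snapshot names below are examples, and OviOS provides its own replication tooling on top of this mechanism:

    # initial full replication of a snapshot to the backup server
    zfs snapshot tank/vol1@rep1
    zfs send tank/vol1@rep1 | ssh backup-ovios zfs receive backup/vol1
    # later runs only send the changes since the previous snapshot
    zfs snapshot tank/vol1@rep2
    zfs send -i tank/vol1@rep1 tank/vol1@rep2 | ssh backup-ovios zfs receive backup/vol1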
11.3. Snapshots
The ability to snapshot LUNs, volumes and/or pools without disrupting data traffic. Snapshots can be scheduled or taken manually. In case of accidental data deletion, a volume or LUN can be reverted to the last known good snapshot.
11.4. ZFS Storage File System
OviOS uses the ZFS file system for its storage pools.
ZFS is one of the most advanced and performant filesystems available today for storage systems. With snapshots, LUNs, volumes, its flexible RAID levels and many more features, there is very little that cannot be achieved with ZFS.
OviOS Linux provides all these features. Contact us for professional services to find out how we can help.
11.5. Virtualization
OviOS Linux can be virtualized or installed on physical hardware.
11.6. Run from a USB drive, or run as a live system
OviOS Linux can be installed on a USB drive, or run directly from the live medium. OviOS creates a temporary writable filesystem in the system's RAM and can thus run and serve data without being installed. Read the OviOS Live Guide to see how you can preserve your settings across reboots of the live system.
11.7. iSCSI, NFS and SMB server features by default
Once installed the system can be immediately used as an iSCSI, NFS and SMB server.
The default configuration has been tested and optimized to achieve the best performance with VMware, Linux and Windows clients for iSCSI; VMware and Linux as NFS clients; and Windows as SMB clients.
OviOS supports SMB protocol versions 1 (also known as NT1 or CIFS), SMBv2 and SMBv3.
11.8. Software RAID
RAID0, or striped pools. This configuration provides no data protection in case of a drive failure, but it does provide checksumming to detect silent data corruption.
RAID1, or mirrored pools. Create storage pools using mirrored disks. This can protect against a large number of failed drives, but reduces the usable storage capacity.
RAID10, or striped mirrored storage pools. Stripes data across pairs of mirrored disks, which provides the best random read/write performance but reduces the available storage capacity by 50%.
RAID5 uses dynamic stripe width, which gives better performance than traditional RAID5 configurations. Allows for one disk failure.
RAID6 is the same as RAID5 but allows for two disk failures without data loss; RAID6+ allows for three disk failures without data loss. Example pool layouts are shown below.
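As illustrative ZFS-level equivalents of these layouts (pool and disk names are examples; the OviOS shell wraps pool creation with its own commands):

    zpool create tank sdb sdc                          # RAID0: striped pool
    zpool create tank mirror sdb sdc                   # RAID1: mirrored pool
    zpool create tank mirror sdb sdc mirror sdd sde    # RAID10: striped mirrors
    zpool create tank raidz1 sdb sdc sdd               # RAID5-like: one disk may fail
    zpool create tank raidz2 sdb sdc sdd sde           # RAID6-like: two disks may fail
    zpool create tank raidz3 sdb sdc sdd sde sdf       # RAID6+: three disks may fail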
11.9. Data compression
OviOS supports data compression, which saves space on compressible data and can improve performance.
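For illustration, at the ZFS level compression is a per-pool or per-dataset property; the pool name below is an example, and lz4 is a common low-overhead choice:

    # enable lz4 compression on the whole pool (inherited by volumes and LUNs)
    zfs set compression=lz4 tank
    # check the setting and the effective compression ratio
    zfs get compression,compressratio tank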