We work remotely. First, we discuss your needs and challenges during a task analysis session. Then we draw up the design and help the Customer work with hardware suppliers. Next, the team performs remote installation and configuration during the implementation stage. Finally, we provide operating documentation. Remote coaching is available as an option.

What is an example system specification?
The complex can be built on almost any commodity hardware. At least three nodes are needed, with smooth horizontal scaling afterwards. 10 Gbps connectivity is required for production use.

Do you have your own data center?
No, we don't have our own data center. We can build a private cloud on almost any commodity servers in any data center around the world!

How can I upgrade/replace hardware or do horizontal scaling?
You can add new disks, put in new nodes, or remove existing hardware without downtime for the VMs. At the end of our work we provide thorough operating documentation with step-by-step instructions for basic operational tasks.

Can we separate the storage and compute roles onto different nodes?
Yes, it's possible.
No. The software-defined paradigm brings a completely new approach to operations: no RAID-related calculations, no SAN setup, no zone creation, no special cabling or special switch hardware, and architecturally unlimited scaling of capacity and performance. A standard 10 Gbps Ethernet network is enough. Replicas can be placed across any failure domain (disks, nodes, rows, rooms), at any replication level and with fine granularity of the replicated objects. With this storage model you can add new disks to a node, or add a rack of nodes to the system, without any downtime during the maintenance. Rebalancing, migration, new replication rules, and so on can simply be programmed, because this storage is itself a program.

Will an RBD cache layer be used?
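To illustrate the "storage is a program" point, here is a minimal sketch (not the product's actual placement algorithm) using rendezvous hashing, which shows why adding a node moves only a fraction of the data and only onto the new node, so rebalancing needs no downtime:

```python
import hashlib

def node_for(block_id: str, nodes: list) -> str:
    # Rendezvous (highest-random-weight) hashing: each block goes to the
    # node with the highest hash score. Existing nodes' scores never
    # change, so a block moves only if the new node outscores them all.
    def score(node):
        return hashlib.sha256(f"{node}:{block_id}".encode()).hexdigest()
    return max(nodes, key=score)

nodes = ["node1", "node2", "node3"]
blocks = [f"blk-{i}" for i in range(1000)]
before = {b: node_for(b, nodes) for b in blocks}

# Add a fourth node: only blocks that now score highest on it move.
after = {b: node_for(b, nodes + ["node4"]) for b in blocks}
moved = sum(1 for b in blocks if before[b] != after[b])
print(moved, "of", len(blocks), "blocks rebalance to the new node")
```

Roughly a quarter of the blocks migrate, all of them to `node4`; nothing shuffles between the old nodes.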
Yes, but since the RBD cache is local to each node, it should be disabled for any "cluster across boxes" design.
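The mention of RBD suggests a Ceph-compatible storage layer; assuming standard Ceph client configuration (a sketch, not the product's shipped config), disabling the cache looks like this:

```ini
[client]
# Disable the per-node RBD cache: it is local to each hypervisor, so a
# write acknowledged from one node's cache is invisible to the others.
rbd cache = false
```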
The disks of a virtual machine are split into blocks, which are then replicated in a pseudo-random placement across different physical disks, nodes, racks, rooms, and so on. As a result, each block of data is written in several instances, taking the physical placement of the disks relative to one another into account. The storage does not acknowledge a write of a data block until every replica of that block has confirmed the write.

How many storage requests per second can it handle?
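The scheme above can be sketched in a few lines. This is an illustrative model, not the product's actual code: the disk names, block size, and placement seed are all invented for the example.

```python
import random

BLOCK_SIZE = 4  # bytes, tiny for the example; real systems use megabytes

def split_into_blocks(data: bytes, size: int = BLOCK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def place_replicas(block_index: int, disks, replication: int):
    # Pseudo-random but deterministic placement seeded by the block index,
    # choosing disks on distinct nodes so replicas never share a node.
    rng = random.Random(block_index)
    chosen, used_nodes = [], set()
    for d in rng.sample(disks, len(disks)):
        node = d.split("/")[0]
        if node not in used_nodes:
            chosen.append(d)
            used_nodes.add(node)
        if len(chosen) == replication:
            break
    return chosen

def disk_write(disk, block):
    return True  # stand-in for a real device write

def write_block(block, targets):
    # The write is acknowledged only after *every* replica confirms.
    return all(disk_write(d, block) for d in targets)

disks = [f"node{n}/disk{d}" for n in range(3) for d in range(4)]
blocks = split_into_blocks(b"hello, distributed world")
for i, blk in enumerate(blocks):
    targets = place_replicas(i, disks, replication=2)
    assert write_block(blk, targets)
print(len(blocks), "blocks written with 2 replicas each")
```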
During normal operation the storage subsystem delivers the sum of the IOPS of all disks divided by the replication level. However, since the complex uses at least two caching mechanisms, typical conventional workloads can achieve much higher values.

Can layered master/gold/parent image cloning be used to oversubscribe storage space?
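The raw-IOPS rule of thumb above works out as follows (the per-disk figure is a generic assumption, not a measured value for this product):

```python
def raw_write_iops(disk_iops: list, replication: int) -> float:
    # Every logical write becomes `replication` physical writes, so the
    # aggregate disk IOPS budget is divided by the replication level.
    return sum(disk_iops) / replication

# 12 spinning disks at ~150 IOPS each, 3-way replication:
print(raw_write_iops([150] * 12, 3))  # 600.0
```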
Since the complex uses KVM as the hypervisor, you can find the guest OS support list here.

Is High Availability supported for VMs?
Yes, however the front-end VM is responsible for this task, so it must be online. To achieve HA for the front-end VM itself, we have developed a special availability mechanism across designated nodes.

What about automatic HA slot reservation scheduling?
Currently the virtualization subsystem does not have any kind of HA slot reservation scheduling.

Is it possible to create a blank VM and use OS installation ISOs?
Yes. You can manually export disks or disk snapshots and import disk images.

Can I perform virtualization-level backup?
Yes. You can manually export a snapshot of the virtual disks, but since the QEMU guest agent is not yet supported, you should take care of disk consistency yourself (unmount filesystems, etc.) to avoid ending up with only a crash-consistent backup.

Is live migration supported?
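Assuming the RBD-backed storage mentioned earlier, a manual snapshot export could look like the following. The pool, image, and snapshot names are hypothetical, and these commands require a live cluster:

```shell
# Hypothetical pool/image names. If possible, quiesce the filesystem
# inside the guest first (e.g. `fsfreeze -f /mnt/data`, then `-u`).

# 1. Take a point-in-time snapshot of the VM's virtual disk.
rbd snap create vmpool/vm01-disk0@backup-snap

# 2. Export the snapshot to a file on the backup host.
rbd export vmpool/vm01-disk0@backup-snap /backup/vm01-disk0.img

# 3. Drop the snapshot once the export is verified.
rbd snap rm vmpool/vm01-disk0@backup-snap
```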
Certainly.

Will live migration traffic be separated from other kinds of traffic?