At StudioGrizzly we have done extensive research to combine existing software-defined storage developments and private cloud/virtualization solutions into one proven private cloud stack. It is designed for companies that are not satisfied with the limitations of public clouds. Our solution has a strong interoperability matrix and, as a result, coherent holistic documentation and predictable evolution. This unique stack is built entirely from open source software.

The chosen virtualization subsystem is an enterprise-ready cloud solution developed mainly in Europe, and it has already proven itself in research and academic centers across the old world. It is logically simple and architecturally clear, and in our opinion the most attractive option for private use.

The chosen storage subsystem provides flexible multi-level replication with pseudo-random placement of virtual machine data across different physical disks, nodes, racks, rooms, and so on. As a result, each block of data is written in several instances, taking the physical location of disks relative to one another into account. The storage subsystem has no central components, continues serving virtual machines through disk, node, or whole-rack failures, offers highly customizable replication, and makes adding further disk space easy.

We offer analysis of customer workloads, solution design, construction of the stack during the implementation stage, and thorough operating documentation. As a result, the customer gets a complete software-defined data center of their own.

Submit an order or request access to our Demo Lab

Watch it


Common questions
How does the studio team work?

We work remotely. First we discuss your needs and challenges in a task-analysis session. After that we produce a design and help the customer work with hardware suppliers. Next, the team performs remote installation and configuration during the implementation stage. Finally, we deliver the operating documentation. Remote coaching is available as an option.

What does an example system specification look like?

The stack can be built on almost any commodity hardware. At least three nodes are needed, with smooth horizontal scaling afterwards. 10 Gbps connectivity is required for production use.

Does your company have its own data center?

No, we don't have our own data center. We can build a private cloud on almost any commodity servers in any data center around the world!

How can I upgrade/replace hardware or do horizontal scaling?

You can add new disks, bring in new nodes, or remove existing hardware without downtime for VMs. At the end of our work we deliver operating documentation with step-by-step instructions for the basic operational tasks.

Can we separate the storage and compute roles onto different nodes?

Yes, it's possible.

Storage subsystem
Is any central storage system needed?

No. The software-defined paradigm brings a completely new approach to operations: no RAID-related calculations, no SAN setup, no zone creation, no special cabling or switch hardware, and architecturally unlimited scaling of capacity and performance. A standard 10 Gbps Ethernet network is enough. Replicas can be placed at any level (disk, node, row, room) with any replication factor and fine granularity of the replicated object. In this storage model you can add new disks to nodes, or add a whole rack of nodes to the system, without any downtime during the maintenance. Rebalancing, migration, new replication rules, and so on can simply be programmed, because this storage already is a program.
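The "storage is a program" point can be illustrated with a toy model. The sketch below is not the actual placement code: it uses rendezvous (highest-random-weight) hashing as a stand-in for CRUSH-style pseudo-random placement, and shows why adding a disk moves only a proportional share of the data instead of reshuffling everything.

```python
# Illustrative sketch only: rendezvous (HRW) hashing as a stand-in for
# CRUSH-style placement. Adding a disk moves ~1/N of the objects.
import hashlib

def place(obj: str, disks: list[str]) -> str:
    """Pick the disk with the highest hash score for this object."""
    return max(disks, key=lambda d: hashlib.sha256(f"{obj}:{d}".encode()).hexdigest())

disks = [f"osd.{i}" for i in range(4)]
objects = [f"block-{i}" for i in range(10000)]

before = {o: place(o, disks) for o in objects}
after = {o: place(o, disks + ["osd.4"]) for o in objects}  # add one disk

moved = sum(before[o] != after[o] for o in objects)
print(f"objects moved: {moved / len(objects):.1%}")  # roughly 20%
```

With five disks after the expansion, each object has about a one-in-five chance of landing on the new disk, so only about 20% of the data rebalances.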

Will the RBD cache layer be used?

Yes, but since the RBD cache is local to each node, it should be disabled for any "cluster across boxes" design.
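As a rough illustration, the relevant toggle lives in Ceph's client-side configuration; a minimal sketch, assuming a stock ceph.conf (the option names are standard Ceph RBD settings):

```ini
[client]
# Safe default for single-writer VM disks on one host:
rbd cache = true
# Treat writes as writethrough until the guest issues a flush,
# which protects guests that never flush:
rbd cache writethrough until flush = true

# For a "cluster across boxes" design (one image shared by guests
# on several hosts), disable the cache instead:
# rbd cache = false
```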

How does replication work?

Virtual machine disks are split into blocks, which are then replicated with pseudo-random placement across different physical disks, nodes, racks, rooms, and so on. As a result, each block of data is written in several instances, taking the physical location of disks relative to one another into account. The storage does not acknowledge a write of a data block until every replica of that block has confirmed the write.
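A minimal sketch of the idea (toy topology and names are assumptions, not the production algorithm): derive a deterministic pseudo-random choice from the block id, and keep replicas in distinct failure domains.

```python
# Hedged illustration, not the real placement algorithm: choose N
# replica disks pseudo-randomly so that no two replicas share a rack.
import hashlib
import random

# Hypothetical topology: rack -> disks
topology = {
    "rack-a": ["osd.0", "osd.1"],
    "rack-b": ["osd.2", "osd.3"],
    "rack-c": ["osd.4", "osd.5"],
}

def place_block(block_id: str, replicas: int = 3) -> list[str]:
    # Seed a PRNG from the block id so placement is deterministic:
    # every client computes the same replica set without a central map.
    seed = int(hashlib.sha256(block_id.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    racks = rng.sample(sorted(topology), replicas)  # distinct failure domains
    return [rng.choice(topology[r]) for r in racks]

print(place_block("vm1-disk0-block42"))  # three disks, each in a different rack
```

Because placement is a pure function of the block id and the topology, there is no central lookup table, which is why the storage tolerates losing any single disk, node, or rack.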

How many storage requests per second can it handle?

During normal operation the storage subsystem delivers roughly the sum of IOPS from all disks divided by the replication level. But since the stack uses at least two caching mechanisms, typical conventional workloads can achieve much higher values.
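As a back-of-the-envelope check of that formula (all numbers below are illustrative assumptions, not measurements):

```python
# Rough sketch with assumed numbers: 150 IOPS is a common ballpark for
# a 7.2k SATA spindle; your disks and workload will differ.
disks = 36            # e.g. 3 nodes x 12 spindles
iops_per_disk = 150   # assumed per-spindle write IOPS
replication = 3       # each write lands on 3 disks

aggregate_write_iops = disks * iops_per_disk // replication
print(aggregate_write_iops)  # 1800
```

Caching layers sit in front of this floor, so observed numbers for real workloads are usually higher.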

Can layered master/gold/parent image cloning be used to oversubscribe storage space?

Yes, sure.

Virtualization subsystem
Which guest OSes are supported?

Since the stack uses KVM as its hypervisor, you can find the guest OS support list here

Is High Availability for VMs supported?

Yes, although this task is handled by a front-end VM, which must therefore be online. To achieve HA for the front-end VM itself, we have developed a special availability mechanism that runs across designated nodes.

What about automatic HA slot reservation scheduling?

Currently the virtualization subsystem does not have any kind of HA slot reservation scheduling.

Is it possible to create a blank VM and use OS installation ISOs?


Can I export/import a VM?

Yes. You can manually export disks or disk snapshots, and import disk images.

Can I perform backups at the virtualization level?

Yes. You can manually export snapshots of virtual disks, but since the qemu guest agent is not yet supported, you must take care of disk consistency yourself (unmount, etc.); otherwise the backup will only be crash-consistent.

Is live migration supported?


Will live migration traffic be separated from other kinds of traffic?