Wednesday, October 21, 2015

Stretched Clusters: Disable failover of specific VMs during full site failure


link: http://www.yellow-bricks.com/2015/10/21/stretched-clusters-disable-failover-of-specific-vms-during-full-site-failure/

Last week at VMworld, while I was presenting on Virtual SAN Stretched Clusters, someone asked me if it was possible to “disable the fail-over of VMs during a full site failure while allowing a restart during a host failure”. I thought about it and said “no, that is not possible today”. Yes, you can “disable HA restarts” on a per VM basis, but you can’t do that for a particular type of failure.
That last statement is correct: HA does not allow you to disable restarts for a site failure specifically, though you can fully disable HA for a particular VM. Back at my hotel, however, I kept thinking about the question and realized there is a workaround to achieve this. I didn’t note down the name of the customer who asked the question, so hopefully you will read this.
In a stretched cluster configuration you will typically use VM/Host rules. These rules dictate where VMs run, and normally you use “should” rules because you want VMs to be able to run anywhere after a failure. However, you can also create “must” rules. A “must” rule is never violated, which means those VMs can only ever run within that site. If a host fails within the site, the impacted VMs are restarted within the site. If the entire site fails, the “must” rule prevents the VMs from being restarted on hosts in the other location. This works because must rules are pushed down into the “compatibility list” that HA maintains, and HA will never violate that list.
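The compatibility-list behavior is easy to see in a small sketch. The following is not a VMware API — the host names and helper function are purely hypothetical — just a Python model of how a “must” rule restricts where HA may restart a VM:

```python
# Hypothetical model (not a VMware API) of HA's compatibility list
# under "must" vs "should" VM/Host rules in a stretched cluster.

SITE_A = {"esx-a1", "esx-a2"}
SITE_B = {"esx-b1", "esx-b2"}
ALL_HOSTS = SITE_A | SITE_B

def restart_candidates(must_hosts, failed_hosts):
    """Hosts HA may restart a VM on. A 'must' rule is pushed into the
    compatibility list, so only hosts named by the rule are eligible;
    with no 'must' rule (must_hosts=None) every surviving host is."""
    compatible = must_hosts if must_hosts is not None else ALL_HOSTS
    return compatible - failed_hosts

# Single host failure in site A: the VM can still restart within its site.
assert restart_candidates(SITE_A, {"esx-a1"}) == {"esx-a2"}

# Full site A failure: no compatible host survives, so HA never restarts
# the VM in site B -- the behavior the customer asked for.
assert restart_candidates(SITE_A, SITE_A) == set()

# With only a "should" rule, the surviving site B hosts stay eligible.
assert restart_candidates(None, SITE_A) == SITE_B
```

The trade-off is exactly what the model shows: you gain “no restart on site failure” at the cost of HA never being able to recover those VMs in the surviving site.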

Friday, August 30, 2013

Don't Thrash, when you can Cache Your Hash on Flash


Don't Thrash: How to Cache Your Hash on Flash

Conference: 
Michael A. Bender, Stony Brook University and Tokutek;
Martin Farach-Colton, Rutgers University and Tokutek;
Rob Johnson, Stony Brook University;
Bradley C. Kuszmaul, MIT and Tokutek;
Dzejla Medjedovic, Stony Brook University;
Pablo Montes, Stony Brook University;
Pradeep Shetty, Stony Brook University;
Richard P. Spillane, Stony Brook University;
Erez Zadok, Stony Brook University

SQream Uses GPUs to Blast Through Big Data 10-Times Faster Than CPUs

Catch SQream and other early-stage GPU innovators this November in Tel Aviv, where we’ll host top execs from dozens of firms using GPUs to push the frontiers of computing.
The trouble with Big Data is it’s so big.
Piling up terabytes of data is easier than ever as the world grows more digital, and those digital services grow ever more connected.
The problem: all too often finding useful information takes racks full of servers. And it takes time.
An Israeli startup, SQream Technologies, wants to use GPUs to make sifting through all those terabytes for something useful faster and more efficient.
And while it may not be intuitive to use a technology developed for video games to crank through phone records or bank transactions, SQream CEO Ami Gal has been noodling around with the idea for more than a decade.
That’s because the parallel computing technology used by GPUs to render lush worlds and fast-paced action is also ideal for cranking through a huge number of problems simultaneously.
Gal, a serial entrepreneur with a knack for out-of-the-box thinking, first gave the idea a shot back in 1997, when he tried to use a predecessor of today’s GPUs to accelerate call center apps. It worked, but not quickly enough to make a difference.
Since then, GPUs have grown into the parallel processing powerhouses Gal first envisioned. He found that out firsthand when he ran into Kostya Varkin, who was using the latest NVIDIA GPUs to tear through SQL analytics quickly and efficiently. Gal was so impressed with Varkin’s progress that he joined the company in 2010 as CEO.
The early-stage startup – based in Ramat Gan, near Tel Aviv – has fewer than 20 employees, and just a handful of major pilot customers. But Gal is confident the time is finally right.
The company’s technology can already crank through data 10 times faster than CPU-based solutions, using a skinny server rather than an entire rack full of power-hungry machines, Gal says.
He’s expanding aggressively, looking for developers who know their way around CUDA and GPUs to add to the team. With data piling up all around us at an ever faster rate, Gal is confident he’s found an idea whose time has come.