
Tuesday 27 December 2016

The 2016 Virtualization Review Editor's Choice Awards

Our picks for the best of the best.
As we close the book on 2016 and start writing a new one for 2017, it's a good time to reflect on the products we've liked best over the past year. In these pages, you'll find old friends, stalwart standbys and newcomers you may not have even thought about.
Our contributors are experts in the fields of virtualization and cloud computing. They work with and study this stuff on a daily basis, so a product has to be top-notch to make their lists. But note that this isn't a "best of" type of list; it's merely an account of the technologies they rely on to get their jobs done, or maybe products they think are especially cool or noteworthy.
Jon Toigo on Adaptive Parallel I/O Technology
In January, DataCore Software provided proof that the central marketing rationale for transitioning shared storage (SAN, NAS and so on) to direct-attached/software-defined kits was inherently bogus. DataCore Adaptive Parallel I/O technology was put to the test on multiple occasions in 2016 by the Storage Performance Council, always with the same result: parallelization of RAW I/O significantly improved the performance of VMs and databases without changing storage topology or storage interconnects. This flew in the face of much of the woo around converged and hyper-converged storage, whose pitchmen attributed slow VM performance to storage I/O latency -- especially in shared platforms connected to servers via Fibre Channel links.
While it is true that I like DataCore simply for being an upstart that proved all of the big players in the storage and virtualization industries wrong about slow VM performance being the fault of storage I/O latency, the company has done something even more important. Its work has opened the door to a broader consideration of what functionality should be included in a properly defined SDS stack.
In DataCore's view, SDS should be more than an instantiation on a server of a stack of software services that used to be hosted on an array controller. The SDS stack should also include the virtualization of all storage infrastructure, so that capacity can be allocated independently of hypervisor silos to any workload in the form of logical volumes. And, of course, any decent stack should include RAW I/O acceleration at the north end of the storage I/O bus to support system-wide performance.
DataCore hasn't engendered a lot of love, however, from the storage or hypervisor vendor communities with its demonstration of 5 million IOPS from a commodity Intel server using SAS/SATA and non-NVMe FLASH devices, all connected via Fibre Channel link. But it is well ahead of anyone in this space. IBM may have the capabilities in its SPECTRUM portfolio to catch up, but the company would first need to get a number of product managers of different component technologies to work and play well together.
Dan Kusnetzky on SANsymphony and Hyper-converged Virtual SAN
Why I love it: DataCore is a company I've tracked for a very long time. Its products enhance storage optimization and efficiency, and make the most flexible use of today's hyper-converged systems.
The technology supports physical storage, virtual storage or cloud storage in whatever combination fits the customer's business requirements. The technology supports workloads running directly on physical systems, in VMs or in containers.
The company's Parallel I/O technology, by breaking down OS-based storage silos, makes it possible for customers to get higher levels of performance from a server than many would believe possible (just look at the benchmark data if you don't believe me). This, by the way, also means that smaller, less-costly server configurations can support large workloads.
What would make it even better: I can't think of anything.
Next best product in this category: VMware vSAN

Monday 12 December 2016

DataCore Parallel Processing - Applications & I/O on Steroids

Video 360 Overview by Enterprise Strategy Group and ESG Labs
Mark Peters, Senior Market Research Analyst, and Brian Garrett, VP of ESG Labs, discuss DataCore’s parallel technologies and the market shifts toward software-defined storage, hyper-converged infrastructure and cloud; describe parallel I/O and its impact and value, based on a leap in performance that goes beyond technologies like all-flash arrays; and cover the fit for data analytics, databases and more…
Must-see six-minute video below:

Thursday 8 December 2016

DataCore Software Chairman Wins Innovator of the Year at Best in Biz Awards 2016

DataCore Software has been named a gold winner in the Best in Biz Awards Innovator of the Year category, honoring the achievements of DataCore Chairman and Co-Founder, Ziya Aral.
The Best in Biz Awards is the only independent business awards program judged by members of the press and industry analysts. The sixth annual program in North America garnered more than 600 entries, from public and private companies of all sizes and from a variety of industries and geographic regions in the U.S. and Canada.
Much of DataCore’s technological success can be attributed to its Chairman and Co-Founder, Ziya Aral. Responsible for the direction of DataCore’s technologies, advancements and products, Aral is truly a pioneer in the storage industry.
While Aral has long been considered a “guru” in the space – widely published in the field of computer science – there’s no doubt that this past year was among his most innovative. His fundamental role in creating DataCore’s new Parallel I/O technology, the heart of the company’s software products including SANsymphony and DataCore Hyper-converged Virtual SAN, is one of his greatest achievements to date. This says a lot for someone who designed the first high-availability UNIX-based intelligent storage controller and whose development team invented disk-based data-sharing technology.
The problem facing the IT industry today is that in order to keep up with the rate of data acquisition and the unpredictable demands of enterprise workloads, business applications, data analytics, the Internet of Things and highly virtualized environments, systems require ultra-fast I/O response times. However, the required I/O performance has failed to materialize, in large part because software development hasn’t exploited the symmetrical multiprocessing capabilities of today’s cost-effective multicore systems. This is where Ziya Aral comes in…
In the early days of parallel computing, Aral was vice president of engineering and CTO of Encore Computer Corporation, one of the pioneers of parallel computing development. Then, as co-founder of DataCore Software, he helped create the storage virtualization movement and what is now widely known as software-defined storage technology. When Aral and his team of technologists at DataCore Software set out to tackle the I/O bottleneck, this unique combination of expertise enabled DataCore to develop the technology required to leverage the power of multicore processors for I/O-intensive applications like no other company can.
Parallel I/O executes independent ‘no stall’ I/O streams simultaneously across multiple CPU cores, dramatically reducing the time it takes to process I/O and enabling a single server to do the work of many.
The result? Parallel I/O not only revolutionizes storage performance, it takes parallel processing out of the realm of the specialist and makes it practical and affordable for the masses. Unleashing multicore processors from the shackles of being I/O-bound opens up endless new possibilities for cognitive computing, AI, machine learning, IoT and data analytics. The technology is proven at customer sites and has shattered the world record for I/O performance and response times using industry-audited and peer-reviewed benchmarks.
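To make the serial-versus-parallel distinction concrete, here is a minimal, purely conceptual sketch in Python (this is ordinary user-space code for illustration, not DataCore's implementation or anything close to a kernel-level I/O scheduler): the same set of independent read streams is first serviced one after another, then dispatched concurrently across worker threads, which is the basic idea behind spreading I/O work over many cores.

```python
# Conceptual sketch only: serial vs. concurrent handling of independent I/O streams.
import concurrent.futures
import os
import tempfile
import time

BLOCK = 4096
PATHS = [os.path.join(tempfile.gettempdir(), f"pio_demo_{i}.bin") for i in range(8)]

def prepare():
    for path in PATHS:
        with open(path, "wb") as f:
            f.write(os.urandom(BLOCK * 256))   # 1 MiB scratch file per stream

def read_stream(path):
    # One independent I/O stream: read the file back block by block.
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass

def serial():
    start = time.perf_counter()
    for path in PATHS:                          # streams serviced one at a time
        read_stream(path)
    return time.perf_counter() - start

def parallel():
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(PATHS)) as pool:
        list(pool.map(read_stream, PATHS))      # streams dispatched concurrently
    return time.perf_counter() - start

if __name__ == "__main__":
    prepare()
    print(f"serial:   {serial():.3f} s")
    print(f"parallel: {parallel():.3f} s")
```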
“If companies are going to stand out from the crowd and remain competitive in future years, innovation is key. The market is tough and there is no guarantee that today’s dominant players will remain so — unless time and effort are concentrated on research and development,” said Charlie Osborne, ZDNet, one of Best in Biz Awards’ judges this year. “This year’s entries in Best in Biz Awards highlighted not only innovative business practices but the emergence of next-generation technologies which will keep companies current and relevant.”
For a full list of gold, silver and bronze winners in Best in Biz Awards 2016, visit: http://www.bestinbizawards.com/2016-winners.

Wednesday 7 December 2016

DataCore’s SANsymphony-V Software Receives Editor’s Choice ‘SVC 2016 Industry Award’ for its Outstanding Contribution to Technology





READING, UK: DataCore have announced that their tenth-generation SANsymphony-V platform has received the coveted Storage, Virtualisation and Cloud (SVC) 2016 Industry Award, chosen by an editorial judging panel and presented at a glittering ceremony in London last Thursday evening.

“SANsymphony-V firmly deserves this important industry award based on two counts. Firstly, on the maturity and longevity of the platform – DataCore were the first to market software-defined storage back in the early 2000s, bringing software-powered storage to thousands of customers. And secondly, on the immense impact that Parallel I/O processing is having today within data centres, handling compute with unprecedented ease and on a scale never witnessed before,” notes Brett Denly, Regional Director, DataCore Software UK.

Brett collected the Award alongside DataCore’s Neil Crispin and Pierre Aguerreberry.

With an unparalleled number of vendors to select from within the Storage, Cloud and Virtualisation space, the eminent editorial judging panel from the Digitalisation World stable of titles deliberated long and hard before bestowing the 2016 SVC Industry Award on DataCore Software, noting:

“DataCore Software is a leader in software-defined storage backed by 10,000 customer sites around the world, so they must be doing something right!” said Jason Holloway, Director of IT Publishing at Angel Business Communications, organiser of the SVC Awards.

The SVC Industry Awards continue to set a benchmark for outstanding performance, recognising the contribution of individuals, projects, organisations and technologies that have excelled in the use, development and deployment of IT.


Image: DataCore’s Brett Denly (Regional Director), Neil Crispin (Account Director), Pierre Aguerreberry (Director, Enterprise Sales) collect the Award from SVC organiser & Director of IT Publishing, Jason Holloway.

Tuesday 15 November 2016

The magic of DataCore Parallel I/O Technology


DataCore Parallel I/O technology seems like a kind of magic, and too good to be true… but you only need to try it once to understand that it is real and has the potential to save you loads of money!
Benchmarks vs. the real world
Frankly, I was skeptical at first and totally underestimated this technology. The benchmark posted a while ago was incredibly good (too good to be true?!). And even though it wasn’t false, sometimes you can work around the limits of a benchmarking suite and build specific, unrealistic configurations to get numbers that look very good but are hard to reproduce in real-world scenarios.

When I was briefed by DataCore, they convinced me not with sterile benchmarks but with real workload testing! In fact, I was particularly impressed by a set of amazing demos I had the chance to watch, where a Windows database server equipped with Parallel I/O Technology was able to process data dozens of times faster than the same server without DataCore’s software… and the same happened with a cloud VM instance (which is theoretically the same, since this is a software technology, but matters more than you might think… especially if you look at how much money you could save by adopting it).
Yes, dozens of times faster!
I know it seems ridiculous, but it isn’t. DataCore Parallel Server is a very simple piece of software that changes the way I/O operations are performed. It takes advantage of the large number of CPU cores and the RAM available on a server and organizes all the I/Os in a parallel fashion instead of a serial one, achieving microsecond-level latency and, consequently, a very large number of IOPS.
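A rough way to see why microsecond latencies translate into very large IOPS numbers is Little's Law: throughput is roughly the number of I/Os in flight divided by the average latency per I/O. The figures below are invented for illustration and are not DataCore measurements.

```python
# Illustrative arithmetic only: Little's Law, throughput = concurrency / latency.
def iops(outstanding_ios: int, avg_latency_s: float) -> float:
    return outstanding_ios / avg_latency_s

# One I/O in flight at a time, 200 microseconds each (a serialized path):
print(f"{iops(1, 200e-6):>12,.0f} IOPS")     # ~5,000 IOPS

# 256 I/Os in flight, each completing in ~100 microseconds (a parallel path):
print(f"{iops(256, 100e-6):>12,.0f} IOPS")   # ~2,560,000 IOPS
```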


This kind of performance allows you to build smaller clusters or get results much faster with the same number of nodes… and without changing the software stack or adding expensive in-memory options to your DB. It is ideal for Big Data analytics use cases, but there are also other scenarios where this technology can be of great benefit!
Just software
I don’t want to downplay DataCore’s work by saying “just software”, quite the contrary indeed! The fact that we are talking about a relatively simple piece of software makes it applicable not only to your physical server but also to a VM or, better, a cloud VM.
If you look at cloud VM prices, you’ll realise that it is much better to run a job on a small set of large-CPU, large-memory VMs than on a large number of SSD-based VMs, for example… which simply means you can spend less to do more, faster. And, again, when it comes to Big Data analytics this is a great result, isn’t it?
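As a toy illustration of that trade-off (the node counts, hourly rates and run times below are invented, not quotes from any cloud provider), the total bill depends on how many nodes you rent, what they cost per hour, and how long the job actually takes:

```python
# Hypothetical figures only: total cost of the same job on two cluster shapes.
def job_cost(nodes: int, hourly_rate_usd: float, hours_to_finish: float) -> float:
    return nodes * hourly_rate_usd * hours_to_finish

many_small_ssd_vms = job_cost(nodes=16, hourly_rate_usd=0.80, hours_to_finish=10)
few_large_vms      = job_cost(nodes=4,  hourly_rate_usd=2.00, hours_to_finish=4)

print(f"many small SSD-based VMs: ${many_small_ssd_vms:,.2f}")  # $128.00
print(f"few large CPU/RAM VMs:    ${few_large_vms:,.2f}")       # $32.00
```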
Closing the circle
DataCore is one of those companies that has been successful and profitable for years. Last year, with the introduction of Parallel I/O, they demonstrated that they are still able to innovate and bring value to their customers. Now, thanks to an evolution of Parallel I/O, they are entering a totally new market with a solution that can easily enable end users to save loads of money and get faster results. It’s not magic, of course, just a much better way to use the resources available in modern servers.

Parallel Server is perfect for Big Data Analytics, makes it available to a larger audience, and I’m sure we will see other interesting use cases for this solution over time…


Monday 14 November 2016

DataCore Hyperconverged Virtual SAN Speeds Up 9-1-1 Dispatch Response

Critical Microsoft SQL Server-based Application Runs 20X Faster

"Response times are faster. The 200 millisecond latency has gone away now with DataCore running," stated ESCO IT Manager Corey Nelson. "In fact, we are down to under five milliseconds as far as application response times at peak load. Under normal load, the response times are currently under one millisecond."

DataCore Software announced that Emergency Communications of Southern Oregon (ECSO) has significantly increased performance and reduced storage-related downtime with the DataCore Hyper-converged Virtual SAN.

Located in Medford, Oregon, ECSO is a combined emergency dispatch facility and Public Safety Answering Point (PSAP) for the 9-1-1 lines in Jackson County, Oregon. ECSO wanted to replace its existing storage solution because its dispatch application, based on Microsoft SQL Server, was experiencing latencies of 200 milliseconds at multiple times throughout the day, impacting how fast fire and police could respond to an emergency. In addition to improving response time, ECSO wanted a new solution that could meet other key requirements, including higher availability, remote replication, and an overall more robust storage infrastructure.

After considering various hyper-converged solutions, ECSO IT Manager Corey Nelson decided that the DataCore Hyper-converged Virtual SAN was the only one that could meet all of his technology and business objectives. DataCore Hyper-converged Virtual SAN enables users to put the internal storage capacity of their servers to work as a shared resource while also serving as an integrated storage architecture. Now ECSO runs DataCore Hyper-converged Virtual SAN on a single tier of infrastructure, combining storage and compute on the same clustered servers.

Performance Surges with DataCore
Prior to DataCore, performance -- specifically, latency -- was a problem at ECSO: the organization's previous disk array took 200 milliseconds on average to respond. DataCore has solved the performance issues and fixed the real-time replication issues that ECSO was previously encountering, because its Hyper-converged Virtual SAN speeds up response and throughput with its Parallel I/O technology combined with high-speed caching that keeps data close to the applications.
ECSO's critical 9-1-1 dispatch application must interact nearly instantly with the SQL Server-based database. Therefore, during the evaluation and testing period, understanding response times was a vital criterion. To test this, Nelson ran a SQL Server benchmark against his existing environment as well as the DataCore solution. The benchmark used a variety of block sizes and a mix of random/sequential and read/write operations to measure performance. The results were definitive -- the DataCore Hyper-converged Virtual SAN solution was 20X faster than the existing environment.
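For readers curious what such a test looks like in practice, here is a heavily simplified sketch of a block-level micro-benchmark in the same spirit. It is not the tool Nelson used, and real storage benchmarks also bypass the OS page cache (for example with O_DIRECT), which this toy version does not; it only illustrates varying block size, access pattern and read/write mix.

```python
# Hypothetical micro-benchmark sketch: average latency per combination of
# block size, sequential/random access, and read/write.
import os
import random
import tempfile
import time

PATH = os.path.join(tempfile.gettempdir(), "storage_bench.dat")
FILE_SIZE = 64 * 1024 * 1024          # 64 MiB scratch file
BLOCK_SIZES = [4096, 8192, 65536]     # bytes

def setup():
    with open(PATH, "wb") as f:
        f.write(os.urandom(FILE_SIZE))

def run_case(block: int, sequential: bool, write: bool, ops: int = 512) -> float:
    fd = os.open(PATH, os.O_RDWR)
    max_off = (FILE_SIZE // block) - 1
    payload = os.urandom(block)
    start = time.perf_counter()
    for i in range(ops):
        off = (i % max_off) if sequential else random.randint(0, max_off)
        os.lseek(fd, off * block, os.SEEK_SET)
        if write:
            os.write(fd, payload)
        else:
            os.read(fd, block)
    os.close(fd)
    return (time.perf_counter() - start) / ops * 1000.0   # ms per operation

if __name__ == "__main__":
    setup()
    for block in BLOCK_SIZES:
        for sequential in (True, False):
            for write in (False, True):
                ms = run_case(block, sequential, write)
                kind = "seq" if sequential else "rand"
                rw = "write" if write else "read"
                print(f"{block:>6} B {kind:>4} {rw:>5}: {ms:.3f} ms/op")
```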

"Response times are faster. The 200 millisecond latency has gone away now with DataCore running," stated Nelson. "In fact, we are down to under five milliseconds as far as application response times at peak load. Under normal load, the response times are currently under one millisecond."

Unsurpassed Management, Performance and Efficiency
Before DataCore, storage-related tasks were labor intensive at ECSO. Nelson was accessing and reviewing documentation continuously to ensure that any essential step concerning storage administration was not overlooked. He knew that if he purchased a traditional storage SAN, it would be yet another point to manage.
"I wanted as few ‘panes of glass' to manage as possible," noted Nelson. "Adding yet another storage management solution to manage would just add unnecessary complexity."
The DataCore hyper-converged solution was exactly what Nelson was looking for. DataCore has streamlined the storage management process by automating it and giving IT visibility into the overall health and behavior of the storage infrastructure from a central console.

"DataCore has radically improved the efficiency, performance and availability of our storage infrastructure," he said. "I was in the process of purchasing new hosts, and DataCore Hyper-converged Virtual SAN fit perfectly into the budget and plan. This is a very unique product that can be tested in anyone's environment without purchasing additional hardware." 

To see the full case study on Emergency Communications of Southern Oregon, click here.


Wednesday 2 November 2016

Performance, Availability and Agility for SQL Server

Introduction - Part 1

IT organizations must maintain service level agreements (SLAs) by meeting the demands of applications that run on SQL Server. To meet these requirements, they must deliver superior performance and continuous uptime for each SQL Server instance. Furthermore, applications dependent on SQL Server, such as agile development, CRM, BI, or IoT, are increasingly dynamic and require faster adaptability to performance and high-availability challenges than device-level provisioning, analytics and management can provide.
In this blog, the first of a three-part series, we will discuss the challenges IT organizations face with SQL Server and a solution that helps them overcome those challenges.
Challenges
These concerns can all be traced to a common root cause: the storage infrastructure. Did you know that 62% of DBAs experience latency of more than 10 milliseconds when writing to disks1? Not only does this slowdown impact the user experience, it also has DBAs spending hours tuning the database. That is the impact of storage on SQL Server performance; what about its impact on availability? According to surveys, 50% of organizations don’t have an adequate business continuity plan because of expensive storage solutions2. When it comes to agility, DBAs have agility at the SQL Server level, but IT administrators don’t have the same agility on the storage side – especially when they have to depend on heterogeneous disk arrays. Surveys show that a majority of enterprises have 2 or more types of storage and 73% have more than 4 types3.
A common IT trend to solve the performance issue is to adopt flash storage4. However, moving the entire database to flash storage significantly increases cost. To save on cost, DBAs end up with the burden of picking and choosing the instances that require high performance. The other option to overcome the performance issue is to tune the database and change the queries. This requires significant database expertise, demands time, and means changes to the production database. Most organizations either don’t have dedicated database performance tuning experts, don’t have the luxury of time, or are sensitive to making changes to the production database. This common dilemma makes tuning the database a far-fetched option.
For higher uptime, DBAs utilize Failover Cluster Instances (built on what was formerly Microsoft Cluster Service) for server availability, but clustering alone cannot overcome storage-related downtime. One option is to upgrade to SQL Server Enterprise, but it puts a heavy cost burden on the organization (Figure 1). This leaves them with the choice of either not upgrading to SQL Server Enterprise or upgrading only a few SQL Server instances. The other option is to use storage or third-party mirroring, but neither solution guarantees a Recovery Point Objective (RPO) and Recovery Time Objective (RTO) of zero.
Figure 1
Solution
DataCore’s advanced software-defined storage solution addresses both the latency and uptime challenges of SQL Server environments. It is easy to use, delivers high performance and offers continuous storage availability. DataCore™ Parallel I/O and high-speed ‘in-memory’ caching technologies increase productivity by dramatically reducing SQL Server query times.
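As a conceptual illustration of the caching half of that claim (a deliberately naive sketch with made-up latencies, not DataCore's cache design), a read served from an in-memory cache avoids the trip to the backing device entirely:

```python
# Conceptual sketch only: an in-memory read cache in front of a slow storage path.
import time

class CachedReader:
    def __init__(self, backend_read, capacity=1024):
        self.backend_read = backend_read   # slow path (actual storage read)
        self.capacity = capacity
        self.cache = {}                    # block number -> data

    def read(self, block_no):
        if block_no in self.cache:         # cache hit: served from RAM
            return self.cache[block_no]
        data = self.backend_read(block_no) # cache miss: go to storage
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive FIFO eviction
        self.cache[block_no] = data
        return data

def slow_storage_read(block_no):
    time.sleep(0.005)                      # simulate ~5 ms device latency
    return b"\x00" * 4096

reader = CachedReader(slow_storage_read)
t0 = time.perf_counter(); reader.read(42); cold = time.perf_counter() - t0
t0 = time.perf_counter(); reader.read(42); warm = time.perf_counter() - t0
print(f"cold read: {cold*1000:.2f} ms, warm read: {warm*1000:.3f} ms")
```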

Next blog
In the next blog, we will touch more on the performance aspect of DataCore.

Thursday 29 September 2016

Hyper-converged: A Noun or a Verb?

Hyper-converged A Noun or A Verb
This is a blog post originally from Jeff Slapp, director, systems engineering and solution architecture at DataCore Software. See his full post here: https://www.linkedin.com/pulse/hyper-converged-noun-verb-jeffrey-slapp
INTRODUCTION
Interesting question. I have spoken to many people over the years who, perhaps unknowingly, default to the "noun approach" when it comes to hyper-convergence. That is to say, making hyper-convergence something to be held rather than something to be done, as if you could walk into a store and buy some hyper-convergence. Perhaps this is because of the endless marketing we are bombarded with, which pushes a very hardware-centric approach to hyper-convergence from the vendor rather than allowing the client to decide what it means to them or how it is to be accomplished given their unique set of applications and requirements.
Please do not misunderstand: I am not implying you don't need hardware. You do, and today's hardware capabilities are simply amazing (as we will see). What I am saying, and what I will show, is that the true benefits are realized when the hardware is loosely coupled with the software which drives it. When you decide to hyper-converge (verb), it will be important to focus on the software used to accomplish it.
TO HYPER-CONVERGE OR NOT HYPER-CONVERGE... THAT IS THE QUESTION.
Before we get into the various ways of accomplishing hyper-convergence, let's first discuss the why. Why would an organization be compelled to deploy a hyper-converged architecture? Does this architecture really simplify things in the long run or is it simply shifting the problem to somewhere else? In order to answer these questions, we must have a basic definition of what hyper-convergence is:
"Hyper convergence (HC) is a type of software-defined architecture which tightly integrates compute, storage, and networking technologies into a commodity hardware chassis."
This is a general definition, but notice the key action word in there: integrates. What is being implied here is the direct coupling of compute and storage. One could logically conclude there are advantages in the form of cost savings, space savings, and reduction in overall physical complexity. However, I will add one more advantage which is not commonly attributed to HC and it is higher application performance.
"The reason high performance is not normally attributed to HC is due to the bottleneck which exists at the storage layer of the equation. Storage is the limiting factor in how far you can take HC or even if you deploy HC at all, because it is the most handicapped component in the stack."
[For more information on the storage bottleneck and why this is critically important to application density and performance, see: Parallel Application Meets Parallel Storage.]
Many times with technology, unless there is an extreme increase in efficiency somewhere in the system, a gain in one area will not usually overcome the loss in another. The net-net of the equation remains the same. However, if there is an extreme increase in efficiency then this is where things get really interesting. We will explore this shortly.
OK, I WANT TO HYPER-CONVERGE, BUT HOW?
There are plenty of companies out there who will tell you how to achieve a software-defined hyper-converged datacenter using their hardware box, but it will be on their terms within their boundaries. This will generally sound something like:
  • you must run this specific hypervisor
  • you must have these specific storage devices
  • you must run on this specific model of hardware
  • you must have this specific type of network
  • you must have a certain number of our nodes
  • oh, and by the way, when the hardware you purchased in the so-called "software-defined solution" reaches end of life, you must purchase the hardware and software all over again from us.
But wait, I thought hyper-convergence was in the category of "software-defined"? Doesn't that essentially mean the software and hardware layers are independent? If not, then nothing has really changed: all the industry has done is take the same old hardware-based model (now with a specialized software layer) and repackage it as a software-defined solution, only to lock us into the same hardware restrictions and limitations we have been dealing with for decades. Confused yet? Yeah, me too.
"In order to be truly software-defined, or what I like to call 'software-driven', the software must be loosely coupled with the hardware. This means the two are free to advanced independently of one another and when necessary, allow the relocation of the already-purchased-software from an older hardware platform to a newer one without needing to repurchase it all over again."
An alternative to the hardware-centric software-defined storage model we have today would be to adopt a piece of software which co-exists with all your other software while providing unmatched application density across any hardware platform (both server and storage alike).
IS HYPER-CONVERGENCE AN ALL-OR-NOTHING PROPOSITION?
If you have the opportunity to consolidate your entire enterprise into an HC architecture, great. Many times this is not the case. Each HC solution has a specific set of operational and functional boundaries. These vendor-imposed boundaries end up forming exactly what software-defined principles were established to avoid: islands. For example, what happens when you purchase an HC solution but you cannot deploy all your applications to it? You have created an island. And now you need to maintain a different solution for the rest of the architecture which couldn't be hyper-converged. So the net-net result is zero in terms of all the costs involved (capital, operational, and management); in fact, in some cases the net-net may be negative.
However, if there was a solution which could unify local HC applications as well as non-HC applications, that would be something interesting. As it turns out there is such a solution and it is a variant of the hyper-converged model which I call hybrid-converged.
Storage is the principal focus here because in a mixed HC and non-HC environment, the compute layers are already separate with the network bridging the two, but storage is the component which can still be maintained as consolidated or converged without the need for two different storage solutions to serve both models.
"The logical implication of hybrid-converged is simply the ability to serve storage to applications in a HC model while at the same time providing storage for those external applications which cannot be hyper-converged. With this model, you no longer need two different storage solutions. Hence, the extreme increase in efficiency which I spoke of earlier has just entered the room."
OK, WHAT DOES THIS HYBRID MODEL LOOK LIKE?
It really doesn't look much different than what you are already familiar with, but what is important to note here is this: in order to pull this off you must have an intelligence which does a very good job at the one thing which has handicapped HC architectures up until this point, and that is to provide ultra-high performance storage services.
You cannot buy hyper-convergence from DataCore or anyone else for that matter. Remember, it's not something you acquire, it's something you do. However, DataCore does allow you to hyper-converge however and with whatever you would like. Additionally, because the storage bottleneck has been removed, DataCore also allows you to deploy in a hybrid-converged model without slowing down and without requiring two separate storage platforms for each model.
Please refer to Jeff Slapp’s original blog post to see a few examples of what this can look like: https://www.linkedin.com/pulse/hyper-converged-noun-verb-jeffrey-slapp.
WHAT MADE THIS POSSIBLE?
This new model became apparent during the DataCore world-record SPC-1 run. During this run, while DataCore maintained an average of 5.12 million SPC-1 IOPS, the CPUs on the DataCore engines were only at 50% utilization. This meant there was plenty of CPU power left to run other applications locally while serving storage out to the rest of the network/fabric.
While it is unlikely that a single application, or even many applications simultaneously, will drive millions of IOPS, this demonstrates the amount of raw untapped power available within today's multicore architectures. No change to the underlying hardware framework is needed; all that is needed to harness the power is the right software. [See: Disk Is Dead? Says Who?]
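A back-of-the-envelope reading of those two figures follows; only the 5.12 million IOPS and the 50% utilization come from the run described above, and the linear-scaling assumption is an illustrative simplification, not audited data.

```python
# Simple headroom arithmetic from the figures quoted above; assumes the I/O load
# scales roughly linearly with CPU, which is an illustrative simplification.
spc1_iops = 5_120_000      # reported average during the world-record SPC-1 run
cpu_utilization = 0.50     # reported CPU utilization on the DataCore engines

spare_fraction = 1.0 - cpu_utilization
iops_per_cpu_percent = spc1_iops / (cpu_utilization * 100)

print(f"CPU left for co-located applications: {spare_fraction:.0%}")
print(f"~{iops_per_cpu_percent:,.0f} IOPS delivered per percent of CPU consumed")
```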
CONCLUSION: THE PERPETUAL PENDULUM SWING
The decision to hyper-converge is a decision you must make. How you hyper-converge is also a decision you must make, not your vendor. If you decide to hyper-converge look long and hard at the options available to you. Also consider how the landscape is constantly changing and rapidly, right before our eyes. We now live in an intensely software-driven world.
Today there are three primary deployment models: traditional, converged, and hyper-converged. DataCore has opened the door to a fourth: hybrid-converged. Ensure the solution you deploy not only allows you to achieve your goals today, but also allows for the adoption of future models easily and cost-effectively.

Thursday 14 July 2016

Gartner Research Report on Software-defined Storage: The CxO View


Download this complimentary Gartner report Software-defined Storage: The CxO View – featuring the Top Five Use Cases and Benefits of Software-defined Storage – and learn how it can help grow your business while reducing TCO.              
Agile, cost-effective data infrastructure for today’s business climate
Welcome Fellow CxO,                                                                                                            
Today’s business climate carries a great deal of uncertainty for companies of all sizes and industries. To seize new business models and opportunities, systems must be flexible and easily adjusted to respond to growth spurts, seasonality and peak periods. Likewise, agility helps us mitigate risk. With sluggish economies across the world, there is a need to be prepared to react quickly to changing fortunes. From cutting back when needed to growing rapidly when opportunities present themselves, companies are less focused on long-term planning in favor of quick decisions and meeting quarterly expectations.

Technology is changing business dynamics as well.  Social, mobile and cloud are impacting companies’ operations, meaning they need to be able to meet changing demand 24x7.  This has put a premium on companies’ ability to react quickly while being able to absorb and analyze all the data they are gathering.
In survey after survey, CxOs highlight the following challenges when it comes to IT:
+ Dealing with the rapid growth of data
+ High cost of storing this data
+ Delivering high-performance applications
+ Meeting Business Continuity / Disaster Recovery requirements

When looking at IT infrastructure, it’s pretty clear that compute and networking have taken the lead in meeting these demanding requirements.  But, storage is a laggard.

Enter Software-defined Storage (SDS).  Aside from being the latest buzzword, what is SDS and will it help companies like yours succeed?

Put simply, SDS delivers agility, faster time to respond to change, and more purchasing power and control over cost decisions. Gartner defines SDS as “storage software on industry-standard server hardware [to] significantly lower opex of storage upgrades and maintenance costs… Eliminates need for high-priced proprietary storage hardware”.

Our own research, based on real-world feedback from thousands of customers, shows a growing interest in SDS. By separating the storage software from storage hardware, SDS is able to:
+ Pool data stores, allowing all storage assets and their existing and new capacity investments to work together, and enabling different storage devices from different vendors to be managed in common
+ Provide a comprehensive set of data services across different platforms and hardware
+ Separate advances in software from advances in hardware
+ Automate and simplify management of all storage


The benefits to your company are potentially enormous.  In a recent survey of over 2000 DataCore customers that have deployed SDS, key findings include:
79% improved performance by 3X or more  
83% reduced storage-related downtime by 50% or more 
81% reduced storage-related spending by 25% or more   
100% saw a positive ROI in the first year

It’s these kinds of results, and the advances in performance and efficiency from DataCore’s revolutionary Parallel I/O technology within our SDS solution, that have led to over 30,000 customer deployments globally and to 96% of CxOs surveyed stating that they recommend DataCore SDS.

Sincerely,
George Teixeira,

President and CEO, Co-founder

Monday 4 July 2016

DataCore drops performance bombshell; Benchmark blows record to hell and gone


"The Fort Lauderdale boys have struck again, with a record-breaking run of 5 million IOPS, and maybe killed off every other SPC-1 benchmark contender's hopes for a year or more.
DataCore Software, headquartered in Fort Lauderdale, Florida, has scored 5,120,098.98 SPC-1 IOPS [PDF] with a simple 2-node Lenovo server set-up, taking the record from Huawei's 3 million-plus scoring OceanStor 18800 V3 array, which needs multiple racks and costs $2.37m. The DataCore Parallel Server configuration costs $505,525.24, almost a fifth of Huawei's cost.
It is astonishing how SPC-1 results have rocketed in the past few years...
Read the full article and comments from Chris Mellor at:  http://www.theregister.co.uk/2016/06/15/datacore_drops_spc1_bombshell/
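To put the quoted numbers in perspective, here is a quick cost-per-IOPS calculation. The DataCore figures come from the excerpt above; Huawei's result is stated only as "3 million-plus", so that side is an approximation.

```python
# Price-performance arithmetic from the figures quoted in the excerpt above.
datacore_price = 505_525.24
datacore_iops = 5_120_098.98

huawei_price = 2_370_000.00
huawei_iops = 3_000_000.00          # approximate ("3 million-plus")

print(f"DataCore: ${datacore_price / datacore_iops:.4f} per SPC-1 IOPS")   # ~$0.0987
print(f"Huawei:   ~${huawei_price / huawei_iops:.4f} per SPC-1 IOPS")      # ~$0.79
```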

Sunday 26 June 2016

DataCore Sets Record-Breaking Hyper-Converged Performance With Multi-node Highly Available Server SAN

http://www.storagenewsletter.com/rubriques/software/datacore-up-scales-record-breaking-hyper-converged-performance-with-multi-node-highly-available-server-san/

The new record-breaking performance results for a hyper-converged solution demonstrate the effectiveness of Parallel I/O technology in harnessing the untapped power of multi-core processors and disrupting the status quo of the storage industry.


The Results Speak for Themselves
Using the industry's most recognized storage benchmark for driving enterprise database workloads - the Storage Performance Council's SPC-1 - the company took on the classic high-end external storage arrays with a fully redundant, dual-node FC Server SAN solution, running its SANsymphony software-defined storage services platform on a pair of off-the-shelf Intel-based servers.


"With our first SPC-1 Price-Performance record [1], we set out to prove what could be accomplished with Parallel I/O in a single server, combining the benchmark's database workload and our storage stack in a single, atomic, hyper-converged system," said Ziya Aral, chairman, DataCore. "Now just a few months later, we are showcasing our progress and the effectiveness of multi-node scaling."

The new results put DataCore SANsymphony at number five on the SPC-1 Top Ten List for Performance[4], ranking only behind million-dollar mega-arrays including Huawei, Hitachi, HP XP7, and Kaminario, as well as DataCore's own Parallel Server hyper-converged configuration. The total price for the DataCore hyper-converged high-availability solution was $115,142.76, including three years of support. 

"We see the Server SAN architecture at the intersection of the hyperscale, convergence and flash trends. Storage intelligence has been moving back adjacent to compute, and Server SANs should be deployed as a best practice to enable low latency, high bandwidth and high availability in enterprise applications," said David Floyer, Chief Technology Officer of Wikibon. "The move to Server SAN architectures (aka hyper-converged infrastructure) has simplified operations by creating repeatable rack level deployments. DataCore with Parallel I/O software is demonstrating why these powerful multicore rack servers are becoming the basis for driving new levels of system performance and price-performance, and is a foundation for next generation system architecture." 

Aral adds, "With these new benchmark results, we up-scaled the configuration to two nodes, connected by Fibre Channel fabric, and reconfigured it for full high-availability with mirrored everything (mass storage, cache and software). Our objective was not only to set the performance record for hyper-converged systems, but to establish the corners of our performance envelope for the purposes of sizing and configuring what is otherwise an extremely flexible Software-Defined Storage scheme." 

Unlike competitive solutions, DataCore's mirroring comes standard with the capability to support local and stretched/metro clusters with automatic failover and failback protection across active-active synchronized copies of data located in geographically separate locations.  

Size Matters: Compact Server SAN for Lowest Total Cost of Ownership
In terms of lowering the total cost of ownership, it is also important to look at environmental and space considerations. Competitive storage solutions take up multiple 42U racks and many square feet of floor space, whereas the DataCore configuration occupies a mere fraction (12U) of one rack. 


Unlike traditional storage arrays, the DataCore nodes had the combined responsibility for running the database workloads and handling their I/O demands in the same servers - a much more challenging scenario.  DataCore's compact Server SAN solution collapses the infrastructure needed and significantly reduces the networking and administrative complexity and cost. It can also be non-disruptively upgraded at any time with more powerful servers and storage technology available in the open market. 

SANsymphony Software Availability and Benchmark Report Details
DataCore's latest software release has improved performance of the SANsymphony and Hyper-converged Virtual SAN software solutions by up to 50%. The new software that was used as the basis for the SPC-1 benchmarking is available in June at no charge to current customers under support contracts.

The SPC-1 performance testing is designed to demonstrate a system's performance capabilities for business-critical enterprise workloads typically found in database and transaction processing environments. The audited configuration that was tested and priced includes SANsymphony Parallel I/O software on two standard Intel-based servers.

For complete configuration, pricing and performance details, see the SPC-1 Full Disclosure report.