Mission Critical Is More Than Just a Buzzword

Pure Nonsense – Separating Fact From Fiction in Flash

At Dell EMC we have been designing, developing and supporting Tier-1 mission critical storage environments for decades with industry-leading solutions. Although each customer defines "Tier-1" based on their unique needs, there are some common requirements everyone agrees must be offered for an all-flash storage portfolio to be considered for use in a Tier-1 mission critical environment.

Mission Critical Storage Checklist:
- Ability to scale capacity while maintaining consistent & predictable performance
- Proven remote replication providing any recovery SLA – from asynchronous and synchronous support up to continuous availability (active-active for zero RPO/RTO)
- Intelligent and efficient copy (snapshot) technology, and integrated copy data management, with application integration for simple and fast copy creation
- Simple and non-disruptive migration tools to move workloads between arrays and migrate to future arrays as needed
- Ability to consolidate mixed workloads while maintaining performance and protection, including block and file on the same array if required
- Data-at-rest encryption and other security methods, like secure snapshots

Many vendors claim to have Tier-1 mission critical storage, but knowledgeable customers can quickly see through the hype. Just saying your storage is for Tier-1 or mission critical environments doesn't make it true; you need the proven architecture and data services to support it.

For example, Pure Storage has recently made the following claim: "FlashArray//X now represents a higher-performance tier for mission-critical databases, top of rack flash deployments, and Tier 1 application consolidation."

The announcement of their new FlashArray//X (//X) has forced Pure Storage to make this bold (and questionable) claim and relegate their previous generation, the FlashArray//M (//M) series, to "most economic all-flash consolidation." But what does that mean?

The implication is that the new //X array has something new, or at least improved, that makes it more suitable for mission critical/Tier-1 application environments than the previous //M series. However, all we can see is that it should be faster than the FlashArray//M thanks to Pure's proprietary NVMe drives (aka flash modules). That's it… there's no evidence of new or improved Tier-1 mission critical data services!

After much research on Pure's website, it turns out that the only real difference between the //X and the //M series is the use of proprietary NVMe 'Direct Flash' modules and 'Direct Flash' software, which Pure Storage claims will make the //X faster than the current //M series. However, as the //X isn't generally available yet, and since Pure Storage isn't publishing any performance numbers, no one really knows what that means.

Fun fact: Pure Storage recently removed all FlashArray performance metrics from its website and data sheets – perhaps they don't want you comparing FlashArray to other industry-leading all-flash arrays?

So the question is, how does an unspecified performance improvement, with no new or improved data services, make the FlashArray//X more appropriate for mission critical/Tier-1 environments? The obvious answer – it doesn't!

However, adding customized NVMe flash modules does make the array more expensive (to cover the cost of all the custom-built proprietary hardware), which means they need to make it sound more capable to justify the cost premium. In fact, Pure Storage has been clear that the FlashArray//X needs to be positioned into mission critical Tier-1 application environments because these environments have larger budgets.

Since Pure Storage now positions its previous generation, the //M series, for "general-purpose consolidation," it's clear even they understand it's not built for mission critical environments. And since the //M is all they've been selling, they don't seem to understand the true requirements of mission critical storage. Luckily for you, Dell EMC does – so make sure you consider the following before putting FlashArray//X into your mission critical data centers.

Consider the following before putting FlashArray//X into your Tier-1 mission critical environments:

Controller Architecture

Pure Storage's FlashArray//X still uses the same active-passive (backend is passive) dual-controller architecture:
- You cannot scale performance without replacing the controller with a faster controller.
- Capacity will always be limited – one fully active controller can only do so much. Currently the //X can only scale up to 182TB of raw capacity using 9.1TB proprietary flash modules (which are supposed to come later in the year).
- Since only one controller is fully active at any given time, you're paying (a premium) for a second controller that is idle most of the time.
- What is the //X array overhead for RAID-3D, metadata and other drive management activities? If it's the same as past generation FlashArrays (~43%), wouldn't that reduce the total usable capacity (before data reduction) to about 104TB per FlashArray//X, using their currently shipping 9.1TB flash modules?
- As stated by Pure Storage, 1PB effective capacity requires 5:1 data reduction and the currently unavailable 18.3TB proprietary flash modules. As others in the industry have pointed out, this is very optimistic, as it is generally not possible to achieve 5:1 data reduction in a typical OLTP environment (a common Tier 1 use case).

Dell EMC arrays leverage dual fully active and multi-controller architectures:
- Dell EMC mid-range all-flash arrays offer a dual fully active controller architecture, which allows greater scalability while maintaining performance with a dual-controller design.
- Dell EMC high-end all-flash arrays leverage a multi-controller architecture which allows all controllers to be active and shared at all times. This provides the ability to add controllers to scale performance and capacity as the environment grows.
- All of the Dell EMC all-flash arrays can scale to multiple petabytes while maintaining consistent performance.

Mission Critical Availability

Pure Storage is marketing their new FlashArray//X as being always-on and having six-nines availability before the first units have even been installed! The calculation here is questionable at best. Dell EMC offers at least six-nines of availability with an architecture that has been proven over decades of use in the world's most demanding environments.

Mission Critical Tier-1 Data Services

FlashArray//X is lacking key data services:

Insufficient Mission Critical Remote Replication
- No current synchronous remote replication, which limits the recovery SLAs the array can offer – introducing risk of data loss in the event of a disaster at the main data center.
- We hear that synchronous replication is coming, but most customers will be very leery about using version 1 replication to protect their mission critical data.
- No active-active replication for 'continuous availability' with zero RPO, zero RTO.
- Limited multisite remote configurations – no star or cascaded configurations.

Limited Snapshot Technology and No Integrated Copy Data Management
- FlashArray//X snapshot capabilities remain the same as the //M series, and there is no way to provide 'secure' snapshots.
- No integrated copy data management (iCDM) tool to help automate and simplify leveraging snap copies.

No Non-Disruptive Data Migration Tools
- When you max out a FlashArray's capacity, or performance, how can you move workloads to other boxes without impact to the users?

Cannot Consolidate Block and File on the Same Array
- Most tier-1 application environments are leveraging some 'file' workloads in addition to their 'block' workloads. Having to use different arrays for your block and file workloads dramatically adds costs and complexities.

Dell EMC arrays have a host of proven mission critical data services and tools that have been in mission critical production environments for years:
- Proven remote replication tools providing async, sync, multi-site, active-active, and more…
- Fast, efficient snapshot technology to create and use snapshots without impacting performance or capacity.
- Non-Disruptive Migration (NDM) tools for simple workload migration between boxes, with no impact to production applications.
- Unified block and file on the same array.
- Data at Rest Encryption (DARE) and secure snapshots for increased security of data.
- And many more features you can read about here.

I hope you've enjoyed this blog, and watch for more as we focus on other relevant enterprise storage themes. You may be wondering… what about NVMe – isn't it a requirement for enterprise storage today? Not exactly; for now see here, and stay tuned for more on that topic in an upcoming blog!
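For readers who want to sanity-check the capacity figures discussed above, the back-of-the-envelope arithmetic is straightforward. Note that the ~43% overhead and the 5:1 data reduction ratio are assumptions carried over from the discussion above, not published specifications:

```latex
% Usable capacity if the ~43% overhead of prior FlashArray generations carries over (assumption):
182\ \mathrm{TB\ raw} \times (1 - 0.43) \approx 103.7\ \mathrm{TB\ usable\ (before\ data\ reduction)}

% Usable capacity needed to present 1\,PB "effective" at the claimed 5{:}1 reduction:
\frac{1000\ \mathrm{TB\ effective}}{5} = 200\ \mathrm{TB\ usable}
```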


Back In Black – All New Latitude 7212 Rugged Extreme Tablet

Today we're proud to announce the immediate availability of our all-new fully rugged tablet, the Latitude 7212 Rugged Extreme. This evolution of our acclaimed Latitude 7202 rugged tablet brings a balanced mix of forward-looking performance and features, while preserving the backwards compatibility that we have promised our customers.

For field service professionals, law enforcement, military, and utility workers worldwide, we are bringing to market a durable and reliable product that integrates seamlessly into existing enterprise-class IT infrastructures, with no-hassle manageability and end-to-end security, backed by the best supply chain and most comprehensive support program in the market.

Lightweight, Powerful, and Secure

https://www.youtube.com/watch?v=VutoOx5oZ3c

The Latitude 7212 Rugged Extreme Tablet delivers up to several times more CPU performance than the previous generation [1], thanks to 7th Generation Core i-series processors. Even with this substantial boost in performance, the tablet manages to improve battery life by 29 percent over its predecessor [2]. Best of all, we managed to shave over half a pound off the design in the process. Additional notable improvements include an FHD (1920x1080) etched Gorilla Glass screen with 10-point glove-touch capability and superior direct-sunlight readability, USB 3 Type-C connectivity for one-wire docking and charging, and new and superior accessories.

Customer-Focused Design and Support

You might notice that we don't have a wide variety of rugged products, and that we don't update them as frequently as some of our other products. That's on purpose. The rugged market is unique in several ways, and one of those ways is what our customers demand of us. Our customers want product longevity and forward and backward compatibility. We have customers that have asked to purchase models that are 5 years old or more! These customers make significant investments in not just the devices themselves, but the accessory ecosystem as well. That means that moving ports, changing the physical design, or a host of other design choices may break conformity or continuity with docks and other products that can cost several hundred dollars, drastically increasing your expenses when trying to modernize your technology. To that end, we make sure that we stretch compatibility as long as possible as we iterate on our products, and that we only introduce new products when there is clear market demand. Our Rugged portfolio is available with ProSupport Plus with SupportAssist, a monitored, predictive service that helps identify problems and increase uptime, all while minimizing the steps and time to get you back up and running should problems occur.

Test To Fail

One thing that you'll find about Dell Rugged is that we are always happy to talk about failure. In fact, we push our designs to the breaking point, and then we figure out how to make them better. That's "testing to fail" – we go beyond just baseline industry specifications and try to find the true limits of our products, because we know that our customers' experiences don't fit squarely into a check box. Any additional margin of durability we can offer may mean more than we can know to someone who depends on these products every day.
Check out this video tour of our Rugged Labs to learn about what we mean when we say "test to fail."

https://www.youtube.com/watch?v=nQlWrf4Zb-E

Leveraging Dell's Engineering Strengths

In the worldwide market for computers, hardened compute devices are nowhere near the largest segment. Because Dell designs and manufactures all types of devices for a wide range of customers, there are certain advantages we can bring to the table when designing Dell Rugged products. Dell has a unified engineering team; all engineering teams within client computing report up to Senior Vice President of Engineering Ed Ward. He and his team collaborate at all levels across the business to bubble up best practices, lessons learned, and other knowledge that can be shared between team members to enhance the overall strength of our product portfolio. This may take the shape of one of our larger-volume product lines sharing reliability data with our Rugged team for commodity parts like hard drives, memory, and other shared electrical components, while Rugged may help others with material science advancements and durability practices that lead to better hinges, more durable keyboards, and more scratch-resistant displays across the entire line of laptops we offer. This collaboration brings cumulative and critical improvements that wouldn't otherwise be possible.

Battle-Tested in Real-World Conditions

Recently, Dell Rugged products were put through a gauntlet of testing with the British Army in some of the most demanding conditions. See the results and hear about their experiences in this video.

https://www.youtube.com/watch?v=6DtTQj7HOy8

[1] Based on Sysmark 2014, PCMark, and Cinebench testing
[2] Based on MobileMark 2014 results, configured with 52Wh total capacity


Understanding the Container Storage Interface Project

Containers have become intensely important to software developers and system administrators – for good reason. Containers can isolate application code and even entire application stacks. That helps Ops teams keep test and production environments separate, which in turn enhances security efforts and gives IT more control of their environments. So, yay.

But containers are still an evolving technology – which sounds better than, "We're still figuring it out as we go." And, as with nearly all the hairy problems computer professionals ever contend with, the messy bits are in integration. There are a lot of moving pieces, and where they meet (or fail to), we encounter friction. As a result, even if your shop is committed to container technology, getting underway isn't as easy as it seems.

First, as with any technical strategy, a development team has to choose the container orchestration architecture that's the best choice for its shop. Fortunately, there are several good choices, such as Kubernetes, Mesosphere, Cloud Foundry, and Docker. One development team might choose Cloud Foundry, another Mesosphere, and so on; each platform serves a set of use cases.

But after choosing a container architecture, the process gets more complex. Soon, a developer finds their team lost in yak shaving. That's immensely frustrating. We want to solve this problem – not deal with downstream issues that are distractions from the job at hand. We don't want to invest time in cleaning up an old mess or building an integration tool before we can even get started.

And that's where the Container Storage Interface (CSI) project comes in. But let's take a step back, so you can understand the problem that CSI solves. I've devoted a lot of time and energy to this, so I'm rather passionate about it.

Container orchestrators have difficulty running applications that require data to persist between invocations of the containers. When a virtual machine (VM) is stopped and restarted (on the same node or elsewhere), the data (which may be a file system, database, whatever) is preserved, because that data is encapsulated inside the virtual machine. In contrast, a container holds only the application and its associated software dependencies; it does not include the underlying file system. This limits the application types you can run inside a container – or at least it limits the value of running a stateful application in a container, because that single container can only run on a specific node. So, in order to take advantage of containers, developers have to investigate the storage options and, too often, create unique custom solutions.

Vendor and open source storage providers all have some kind of API – and that's the nub of the problem. Because each storage product's API has different semantics, nuances, and interfaces, developers have to learn each one to take advantage of the software's features. Multiply that by the number of container orchestrators, and you see the yaks lining up for their haircuts – particularly if you need to change container orchestrators or storage providers.

It's tough for users, but the lack of standardization presents problems for vendors, too. Storage companies have to choose which container orchestrators to support (and notably, which ones not to support), or they have to duplicate effort to support all of them. It's very much like the problems an independent software vendor (ISV) faces when a new operating system comes along: Will it take off?
Should we invest the time in it?

Remember what it was like when mobile application developers needed to write every line of code for each possible mobile device? Yeah, like that. Nobody knows what works with what, or which version has bugs when you try to integrate this particular Tab A into that particular Slot B. The only way to figure things out is by trial and error. Few development teams (or project managers) want to be told, "Embrace unpredictability," so they glom onto one "solution" and are demotivated to change the architecture because they're afraid of the downstream side effects.

This slows down the adoption of containers, software-defined storage, and more modern infrastructures. Instead, the uncertainty causes people to continue to deploy older, legacy infrastructure. Fragmentation in this market has severely limited its ability to be embraced.

It isn't as though this is a new problem; this cycle has repeated time and again. Earlier technology evolutions certainly had to deal with the process of creating reliable standards. For example, we struggled with choosing a database and then jiggling application data to integrate with another department's software. By now, we should know the importance of building towards integration. A rising tide, after all, raises all boats.

We are still doing storage like it's 1999. It's time to create a container storage interface that works across the industry. Now is the point when your voice matters most.

The Container Storage Interface (CSI) is a universal storage interface – effectively an API specification – that aims to enable easy interoperability between container orchestrators and storage providers. The result is application portability across infrastructures. CSI will enable container orchestrators to leverage any storage provider (cloud or otherwise) to consume storage services, and storage providers to provide storage services to any container orchestrator.

That sounds marketing-buzzwordy, doesn't it? The point isn't simply to create a single way for developers to incorporate storage into container-based software. That'd be only a matter of jargon and vocabulary ("You say tomato, I say to-MAH-to"). A real interface takes into account what each platform can and cannot do. For example, one platform might let you mount more than one volume, and the API has to support that capability while also preventing its use on the other platforms. If we were talking about cars, the analogy might be an API responding, "This car model doesn't have a back seat, so you can't do this action."

Three communities have a stake in creating a Container Storage Interface: container orchestrators, storage providers, and the end-user community. "Users" encompasses several groups, each with its own sensitivities, including operations teams, technology architects, and storage specialists. Right now, the CSI project wants input from all of them.

We have a pretty good spec, I think. We've collaborated with a number of people, and have contributed over two years of our experience from REX-Ray. But does it address the concerns that people really have? Is there a feature or capability that needs to be included? We need as many voices in the community as possible to help us streamline this interface and make it work. The beauty of working with a community is hearing thoughts and ideas from all facets of a problem.
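To make "universal storage interface" a little more concrete, here is a deliberately simplified Go sketch of the kind of contract CSI aims to standardize between orchestrators and storage providers. The interface, method names, and capability flags below are hypothetical illustrations only – the actual spec is expressed as a set of gRPC services (see the GitHub link below), not as this Go interface.

```go
package main

import (
	"context"
	"fmt"
)

// Capability flags a plugin can advertise so an orchestrator knows which
// optional features (snapshots, multi-node attach, and so on) it may safely use.
type Capability string

const (
	CapSnapshots       Capability = "SNAPSHOTS"
	CapMultiNodeAttach Capability = "MULTI_NODE_ATTACH"
)

// VolumePlugin is a hypothetical, heavily simplified stand-in for the kind of
// contract the CSI spec describes: every storage provider implements the same
// volume lifecycle, and every orchestrator consumes it the same way.
type VolumePlugin interface {
	Capabilities(ctx context.Context) ([]Capability, error)
	CreateVolume(ctx context.Context, name string, sizeBytes int64) (volumeID string, err error)
	DeleteVolume(ctx context.Context, volumeID string) error
	AttachVolume(ctx context.Context, volumeID, nodeID string) error
	DetachVolume(ctx context.Context, volumeID, nodeID string) error
	MountVolume(ctx context.Context, volumeID, targetPath string) error
	UnmountVolume(ctx context.Context, volumeID, targetPath string) error
}

// attachEverywhere shows why capability discovery matters: the orchestrator
// asks what the plugin supports before relying on it ("this car model doesn't
// have a back seat, so you can't do this action").
func attachEverywhere(ctx context.Context, p VolumePlugin, volumeID string, nodes []string) error {
	caps, err := p.Capabilities(ctx)
	if err != nil {
		return err
	}
	multiNode := false
	for _, c := range caps {
		if c == CapMultiNodeAttach {
			multiNode = true
		}
	}
	if !multiNode && len(nodes) > 1 {
		return fmt.Errorf("plugin cannot attach volume %s to %d nodes", volumeID, len(nodes))
	}
	for _, node := range nodes {
		if err := p.AttachVolume(ctx, volumeID, node); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// A real orchestrator would plug a concrete storage driver in here; this
	// sketch only defines the shape of the contract.
	fmt.Println("VolumePlugin above is the illustrative contract; nothing to run yet")
}
```

The real spec slices these responsibilities differently and carries far more detail, but the shape is the same: one volume lifecycle, explicitly advertised capabilities, and any number of back ends behind it.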
So please, join us, lend us your voice and your thoughts.

How You Can Get Involved:
- GitHub Spec: Read the spec, comment, ask questions, provide feedback.
- Google Group: Container Storage Interface Community. This is a public Google+ group for all CSI contributors. All the public meetings and discussions are shared here. Visit this group page for news and updates on CSI activities.
- Attend Community Sync Meetings: Meetings are held on Zoom every two weeks from 9am to 10am PT. Next meeting is October 4, 2017. Location: https://zoom.us/j/790748945. Check the Google doc the day of the meeting to confirm the meeting time.
- Google Group: Container Storage Interface Working Group. A smaller Google+ group of maintainers/approvers of CSI who maintain impartiality and have the benefit of end users in mind. Visit this page to stay up to date on the project.


NVMe and I/O Topologies for Dell EMC Intel and AMD PowerEdge Servers

If you are a server user considering Non-Volatile Memory Express (NVMe) storage for your infrastructure, then you are seeking to invest in top-of-the-line performance. Leveraging a PCIe interface improves the data delivery path and simplifies the software stack, resulting in a significant latency reduction and bandwidth increase for your storage data transfers.

PowerEdge rack servers have unique configurations that are designed for specific value propositions, such as bandwidth, capacity or I/O availability. At times it can be a challenge to determine which configuration is best suited for your intended purpose!

We at Dell EMC would like to simplify this process by providing the value propositions for each of our PowerEdge rack configurations, to help our customers choose the right configuration for their objectives. We have also provided detailed illustrations of NVMe and system I/O topologies, so that customers can easily route and connect their best hardware configurations, and optimally design and configure their software solutions and workloads.

We can first look at one of our Intel-based rack servers, the R740xd. There are two suggested NVMe and I/O configurations, each with a unique value proposition:

PowerEdge R740xd with x12 NVMe drives (Maximized Bandwidth)

Figure 1: PowerEdge R740xd CPU mapping with twelve NVMe drives and twelve SAS drives

This 2U R740xd configuration supports twelve NVMe drives and twelve SAS/SATA drives. Performance can easily be scaled for various dense workloads, such as big data analytics. This configuration appeals to customers wanting to consolidate storage media to NVMe (from SAS/SATA) while retaining SAS/SATA bays, and it favors workloads that need maximum bandwidth per NVMe device.

PowerEdge R740xd with x24 NVMe drives (Maximized Capacity)

Figure 2: PowerEdge R740xd CPU mapping with twenty-four NVMe drives

This 2U R740xd configuration supports twenty-four NVMe drives. The NVMe drives are connected through PCIe switches, which allows the system to overprovision PCIe lanes to more NVMe drives while preserving I/O slots, enabling low-latency CPU access to twelve devices per CPU. Performance can easily be scaled for various dense workloads, such as big data analytics. This configuration appeals to customers wanting to consolidate storage media to NVMe (from SAS/SATA). Customers requiring large capacity with the low latency of NVMe will benefit from this configuration, with up to 24 NVMe drives available for population.

Next, we can look at one of our AMD-based rack servers, the R7425. There are two suggested NVMe and I/O configurations, each with a unique value proposition:

PowerEdge R7425 with x12 NVMe drives (Maximized Bandwidth)

Figure 3: PowerEdge R7425 CPU mapping with twelve NVMe drives and twelve SAS drives

This 2U PowerEdge R7425 configuration supports twelve NVMe drives and twelve SATA/SAS drives. Eight of the NVMe drives are connected directly to the CPU and four of the NVMe drives are connected to CPU1 through a PCIe extender card in I/O slot 3.
Customers supporting workloads that demand maximum NVMe and storage performance will benefit from this configuration, which dedicates the bandwidth needed to drive the best throughput (GB/s) to the connected devices.

PowerEdge R7425 with x24 NVMe drives (Maximized Capacity)

Figure 4: PowerEdge R7425 CPU mapping with twenty-four NVMe drives

This 2U PowerEdge R7425 configuration supports twenty-four NVMe drives. Two PCIe switches are included, which allows the system to overprovision PCIe lanes to more NVMe drives while preserving I/O slots; the switches are connected directly to the CPUs. This configuration maximizes NVMe capacity and reserves slot 3 for additional I/O functionality, but has a lower overall bandwidth. It appeals to customers wanting to consolidate storage media to NVMe from SAS/SATA. Customers requiring large capacity with the low latency of NVMe will benefit from this configuration, with up to 24 NVMe drives available for population.

Each PowerEdge server sub-group has a unique interconnect topology with various NVMe configurations to consider for implementation. To achieve your data center goals with your NVMe investments, it is critical to understand your NVMe topology, as well as why it is the best option from a value proposition point of view.

For the full list of both Intel-based and AMD-based PowerEdge rack server NVMe and I/O topology illustrations, as well as explanations of each configuration's value proposition, please view the full NVMe and I/O Topologies Whitepaper now.
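To make the bandwidth-versus-capacity trade-off concrete, here is a rough, illustrative lane-budget calculation. The per-lane throughput is the theoretical PCIe 3.0 figure; the CPU-facing lane counts are placeholder assumptions for illustration, not the actual R740xd or R7425 backplane wiring (see the whitepaper for the real topologies).

```go
package main

import "fmt"

// PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, which works out to
// roughly 0.985 GB/s of theoretical bandwidth per lane.
const gbPerLane = 0.985

// config models one drive-attach scheme: how many NVMe drives sit behind how
// many CPU-facing PCIe lanes. The lane counts used below are illustrative
// placeholders, not the actual PowerEdge backplane wiring.
type config struct {
	name          string
	drives        int
	lanesPerDrive int // x4 is typical for a U.2 NVMe drive
	cpuLanes      int // lanes from the CPUs to the drives (or to the PCIe switches)
}

func main() {
	configs := []config{
		// Bandwidth-oriented: fewer NVMe drives, each with an uncontended link.
		{name: "x12 NVMe (maximized bandwidth)", drives: 12, lanesPerDrive: 4, cpuLanes: 48},
		// Capacity-oriented: 24 drives oversubscribed behind PCIe switches.
		{name: "x24 NVMe (maximized capacity)", drives: 24, lanesPerDrive: 4, cpuLanes: 32},
	}
	for _, c := range configs {
		drivesCould := float64(c.drives*c.lanesPerDrive) * gbPerLane // what the drives could sink in aggregate
		cpuLinks := float64(c.cpuLanes) * gbPerLane                  // what the CPU-facing links can carry
		fmt.Printf("%-34s drives: ~%.0f GB/s, CPU links: ~%.0f GB/s, oversubscription: %.1fx\n",
			c.name, drivesCould, cpuLinks, drivesCould/cpuLinks)
	}
}
```

The point is simply that a switch-attached 24-drive configuration shares fewer CPU-facing lanes across more devices, which is exactly the capacity-for-bandwidth trade described above.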


Lawyer group: Trump adds ex-prosecutor to impeachment team

COLUMBIA, S.C. (AP) — Donald Trump is adding another attorney from South Carolina to his impeachment legal team: a former federal prosecutor-turned-defense attorney who specializes in white-collar crime.

That's according to the head of a South Carolina trial lawyer group.

In an email Monday to South Carolina members of the American College of Trial Lawyers, group chairman Wallace Lightsey says Deborah Barbier has been hired to join Butch Bowers in crafting a defense for Trump's unprecedented second impeachment trial, set for the week of Feb. 8.

Neither Lightsey nor Barbier returned messages seeking comment Monday.
