February, 2021 Archive
No matter what business you are in or the size of your organization today, if you are an IT leader, you must be contemplating the future of IT in the context of the massive change in our industry. What will it look like? And how can I ensure I fully leverage it to the benefit of my customers and my own company?

A new study of over 10,000 IT decision-makers has confirmed that a common vision of that future is emerging around the globe. The results are a good representation of the IT practitioner in the data center today – how they’re feeling and how they’re thinking. This study was sponsored by EMC and was conducted by independent market research specialist Vanson Bourne. You can explore the results in depth.

There are three tenets of this shared vision. First is a belief that technology will create a competitive advantage as more and more business moves online. While this has always been true in some industries, it is becoming common across industries. Just look at the home today – a smartphone app is becoming a critical buying criterion for everything from thermostats and home security systems to washing machines and garage doors.

Second is a conviction that IT must become an in-house provider of on-demand services. The accessibility of applications and technology for consumers, combined with the ease of use of the public cloud for organizations, has raised the bar. Now IT organizations around the globe know that they too must provide services on demand.

Lastly, there is a belief that the best way to improve agility and security in IT service offerings is to combine public and private clouds in a hybrid cloud solution. In fact, 64% of respondents identified the need for joint public and private cloud services – hybrid cloud – to ensure their organization is agile and that critical information will be protected.

The role of in-house IT is evolving in light of the impact technology megatrends are having on business and society. 
IT departments continue to be asked to deliver more value to the company with fewer resources. Offering on-demand services, including hybrid cloud and automated infrastructure, is a key way to win in this environment.

All around the world, IT leaders are innovating ways to enable business growth as quickly and efficiently as possible – that’s the future of IT.
Summer may be vacation season, but the Dell team is in full swing. Throughout the month of June, we hosted and participated in several events and announced innovative new products and updates.

Innovation Days

Last month the combined Austin and Round Rock campuses played host to Dell Enterprise Innovation Days, which gave media, bloggers and analysts a chance to meet with Dell executives, engineers and customers. Attendees visited the Thermal and Futuresville labs, the Design Lab, the Dell customer Texas Advanced Computing Center (TACC) and the Dell Global Support & Deployment (GSD) Command Center in and around Austin. Read more on the TACC visit in John Obeto’s blog here. Attendees also participated in a CTO panel on new ways to innovate, took part in a discussion on the pending Dell-EMC combination and got a preview of Dell’s 2016 Annual Report to Customers.

Innovation Day guests learned about ‘Triton,’ a liquid cooling solution for hyperscale. The system is the result of a unique collaboration with eBay, which needed a custom-built, revolutionary data center cooling system. Triton can run faster and more efficiently than traditional air-cooled systems thanks to water’s ability to transport heat 25 times more efficiently than air. Dell is the first major vendor to safely bring facility water directly into each server sled to cool the CPU, resulting in unmatched cooling efficiencies and the lowest water consumption of any liquid-cooled solution on the market today. Additionally, it uses 97% less data center cooling power than the average air-cooled data center, resulting in major cost savings and performance increases.

PowerEdge and HPC Boost Customer Speed

In June we made some key updates to our PowerEdge server portfolio, further enabling customers to take advantage of our two-socket lineup of PowerEdge 13th-generation servers. The new additions include the PowerEdge R530 and R430 rack servers and the T430 tower server. 
The new R530 features up to 80% more cores compared to the previous generation and is designed for a wide range of common business applications and small-scale virtualization. The R430 has up to 15% more processing cores than its predecessor, and the T430 is ideal for a wide range of collaboration and productivity applications.

Additionally, Dell worked with the Centre for High Performance Computing (CHPC), part of South Africa’s Council for Scientific and Industrial Research, to release the fastest computer on the continent. The computer, CHPC’s Dell-powered “Lengau” system, will drive new research and help provide the computational power to build the private-sector and non-academic user base of the CHPC. Lengau will provide access to resources for users who had limited or no access to them in the past.

Dell Attends Partner Events

Members of the Dell Storage team attended Nutanix’s .NEXT conference in Las Vegas, where they met with customers and partners. Alan Atkinson spoke on how new storage, server and software architectures are shrinking the data center. He also announced the extension of Dell’s two-year partnership with Nutanix in the hyper-converged space.

At the International Supercomputing Conference (ISC), we had the opportunity to meet with customers to discuss Dell HPC innovations, including the availability of new Dell HPC systems, early access to innovative HPC technologies, technology partner collaborations, and successful global customer implementations. We were also able to highlight Dell customer collaborations with CHPC, TACC, Tapad, Sensus and others.

We rounded out the month with Red Hat Summit, where Dell was proud to serve as a platinum sponsor. At the event, we announced new OpenStack extensions to help focus our technology efforts on building a hybrid cloud solution for customers. 
Most significantly, our Red Hat Cloud Solution now integrates our latest generation of PowerEdge server platforms with powerful new Intel Xeon E5-2600 v4 processors. The event also showcased our OpenStack Cloud and software-defined storage solutions with Red Hat Ceph. These new SDS architectures help optimize Ceph storage for IOPS performance and scalability.

While we may still be in the heat of summer, Dell World (October 18-20 in Austin) is fast approaching. Register by July 15 to take advantage of our discounted early-bird registration. We hope to see you there!
The difference between enterprise IT success and failure comes down to nothing less than the level of commitment an IT vendor is willing to make on behalf of the IT professionals who bet their careers on the quality of the products and services being provided.

Every experienced IT professional knows they need to expect the unexpected. What they should be able to count on without fail is a vendor willing to go the extra mile to minimize any potential disruption to the business. After all, it’s not just revenue streams being impacted; it’s the reputation of everyone involved in the administration of the IT environment that gets affected.

That’s why the level of investment the Converged Platforms and Solutions Division of Dell EMC is making in protecting those reputations is unrivaled. Tools and services that we routinely provide to protect those reputations include:

- A Release Certification Matrix that provides prescriptive guidance for patching and updating more than 30 elements of software and firmware.
- Dell EMC Vision Intelligent Operations software that automatically identifies any component that is out of sync with the Release Certification Matrix.
- Automatic issuance of security and technology alerts any time a product defect might cause a serious disruption.
- A single point of contact for all support issues regardless of original manufacturing source.
- Tools to automate backup of system configurations as well as periodically change credentials to better secure the overall environment.
- Access to professional services to take on any management task or assist in the creation of runbooks to automate any IT management process or set of functions.

As is often the case in enterprise IT, an ounce of prevention is always going to be worth a pound of proverbial cure. Converged and hyper-converged systems from Dell EMC are designed from the ground up to provide the highest levels of availability possible. 
Dell EMC also uniquely provides access to tools and services that give IT teams actionable intelligence, helping make certain that in the event of an emergency any disruption in service is kept to an absolute minimum.

In effect, Dell EMC is invested in the success of each customer to the point where our goal is to become an extension of the IT operations team. Rather than simply viewing ourselves as a supplier, we extend our customer commitment to the point where we make our resources available in the form of tools and services that are ready any time, anywhere, 365 days a year. Regardless of the source of the problem, Dell EMC is singularly focused on the success of the IT operations teams that standardize on our platforms.

We invite you to download “Best Operating Practices Tools for VxBlock and Vblock,” which provides recommended actions to maximize and optimize your investment in Dell EMC Converged Systems and helps track your progress. We’re confident that you’ll appreciate the effort Dell EMC is willing to make to become not just your preferred vendor but, more importantly, a member of your extended IT operations staff. After all, we realize that when it comes to system availability it’s not just about dollars and cents; it’s a matter of personal pride.
For years, a typical EDA infrastructure has relied on the same architecture for its storage system: a scale-up storage system characterized by a single-server operating system and a controller head with shelves of disks. The architecture creates islands of storage with many disk shelves and many separate controller heads.

The workflows, workloads, and infrastructure for chip design – combined with exponential data growth and the time-to-market sensitivity of the industry – constitute a clear requirement to optimize the system that stores EDA data.

Using a traditional scale-up storage architecture for EDA leads to an array of problems:

SCALABILITY AND PERFORMANCE BOTTLENECKS

Traditional EDA storage architectures create performance bottlenecks – bottlenecks that get worse at scale. A traditional scale-up architecture requires manual data migrations among storage components to balance performance for EDA applications. The controller is the main bottleneck: although it is typical in EDA to limit the amount of capacity per controller head to make sure that performance requirements are met, attaching too much capacity to the controller can saturate it – a situation made worse by the fact that adding capacity does not scale performance.

Performance bottlenecks levied by the storage system can reduce wall-clock performance (turnaround time) for suites of concurrent jobs, which can affect the time it takes to bring a chip to market and, ultimately, revenue.

UNWANTED ISLANDS OF STORAGE

To avoid saturating a controller head, engineers using typical scale-up storage are forced to create “islands of storage” – isolated storage clusters, each with its own volume and namespace. Scale-up storage architectures can also limit storage growth – for example, by requiring that newer technologies, such as all-flash performance storage, be installed on a separate volume. 
These limitations can create challenges for engineering, since projects can be forced to span multiple volumes and scripts may have to be modified to accommodate path changes. This also creates additional overhead for storage management.

INEFFICIENT UTILIZATION OF DISK SPACE

Capacity is unevenly utilized across islands of storage: some volumes are underutilized while others are oversubscribed. The result is many volumes, all with pockets of free space. The uneven utilization forces you to manually rebalance volumes across aggregates and to manually migrate data to an even level. The burden of managing data across volumes and migrating data to distribute it evenly not only undermines performance but also increases operational expenses. From the engineering perspective, there is increased risk that a last-minute request for space, while technically available in aggregate, may not be available within a single volume.

MULTIPLE POINTS OF MANAGEMENT

With no central point of management, each filer must be individually managed. The management overhead increases the total cost of ownership (TCO) along with OpEx. The multiple points of manual management also put EDA companies at a strategic disadvantage because the lack of centralized management undermines a business unit’s ability to expand storage to adapt to demand, which can in turn hamper efforts to reduce time-to-market.

With file server sprawl, the cost of managing fast-growing data can also exceed the IT budget by resulting in costly data migrations to eliminate hot spots. Similarly, backup and replication become increasingly complex and costly.

STORAGE UNCERTAINTY

Expanding data sets coupled with dynamic business models can make storage requirements difficult to forecast. Up-front purchasing decisions, based on business and data forecasts, can misestimate storage capacity. Forecasting capacity in advance of actual known needs undermines adaptability to changing business needs. 
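The last-minute-request risk noted under “INEFFICIENT UTILIZATION OF DISK SPACE” above can be made concrete with a quick sketch. The volume sizes below are hypothetical, purely for illustration:

```python
# Toy illustration: free space that exists in aggregate across
# "islands of storage" may not be usable for a single large request.

# Hypothetical per-volume capacity and usage, in TB.
volumes = {
    "vol1": {"capacity": 100, "used": 92},   # oversubscribed
    "vol2": {"capacity": 100, "used": 55},   # underutilized
    "vol3": {"capacity": 100, "used": 88},
    "vol4": {"capacity": 100, "used": 60},
}

def aggregate_free(vols):
    """Total free space summed across all volumes."""
    return sum(v["capacity"] - v["used"] for v in vols.values())

def largest_single_volume_free(vols):
    """Largest contiguous free space available on any one volume."""
    return max(v["capacity"] - v["used"] for v in vols.values())

request_tb = 50  # a last-minute request for project space
print(f"Aggregate free space: {aggregate_free(volumes)} TB")          # 105 TB
print(f"Largest free space on one volume: {largest_single_volume_free(volumes)} TB")  # 45 TB
print(f"Fits on a single volume? {largest_single_volume_free(volumes) >= request_tb}")

# In a single-namespace scale-out cluster, the same request would draw
# from the pooled 105 TB of free space instead of one 45 TB pocket.
```

The request fails even though more than twice the needed space exists in aggregate, which is exactly the rebalancing burden described above.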
With scale-up architectures, simply adding capacity causes a disruption – as does manually load-balancing controller heads. Downtime increases project costs, directly delays time-to-market and ultimately reduces profit margins.

Such a model of early storage provisioning can also lead to vendor lock-in and a loss of negotiation leverage that further increases costs.

PERFORMANCE (I/O) HOTSPOTS

Historically, CPU performance and the number of cores were the primary bottleneck for EDA tools. When more performance was needed, semiconductor companies typically responded by adding cores. Today, however, we have faster networks and commodity servers available – with some compute grids growing to 50,000 cores and beyond. Even with as few as 1,000 cores, however, the bottleneck shifts from compute to storage.

EDA workflows tend to store large numbers of files in a deep and wide directory structure, which compounds the challenge for traditional storage infrastructures. For projects sharing the same controller or export, metadata-intensive workloads can saturate the controller CPU, causing latency to spike until users are no longer able to interact with the storage system. For example, if an engineer kicks off 500 simulation jobs against the /project/chip1_verif directory, anyone else working interactively in that space will notice delays in response.

Today’s compute grid is dense and extensive, leaving storage as the bottleneck. 
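The controller-saturation effect described above can be sketched with a toy M/M/1 queueing model. The numbers are hypothetical and this is not a model of any particular controller, but it shows why latency spikes sharply as a shared controller approaches its limit:

```python
# Toy single-controller queueing sketch (M/M/1): why latency spikes as a
# shared controller approaches saturation. Numbers are hypothetical.

service_rate = 100_000  # metadata ops/s one controller can handle

def mean_latency_ms(offered_ops_per_s, mu=service_rate):
    """Mean response time of an M/M/1 queue, in milliseconds."""
    if offered_ops_per_s >= mu:
        return float("inf")  # saturated: the queue grows without bound
    return 1000.0 / (mu - offered_ops_per_s)

# Latency stays flat at moderate load, then blows up near saturation.
for load in (0.50, 0.90, 0.99):
    ops = load * service_rate
    print(f"utilization {load:.0%}: mean latency {mean_latency_ms(ops):.2f} ms")
```

Going from 50% to 99% utilization multiplies mean latency 50-fold in this model, which matches the experience of interactive users stalling when a batch of simulation jobs saturates their shared controller.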
For many semiconductor companies, the storage system still has a legacy dual-controller-head architecture, such as a traditional scale-up NAS system, leaving EDA tools vulnerable to I/O hotspots.

Isilon Solutions for EDA Workloads

Isilon overcomes the problems that undermine traditional NAS systems by combining the three traditional layers of storage architecture – file system, volume manager, and data protection – into a scale-out NAS cluster with a distributed file system.

Such a scale-out architecture increases performance for concurrent jobs, improves disk utilization and storage efficiency to lower capital expenditures (CapEx), centralizes management to reduce operating expenses, and delivers strategic advantages to adapt to changing storage requirements and improve time-to-market.

For more information about Dell EMC Isilon NAS and how we are delivering performance, scalability and efficiency to optimize data storage for EDA, please read our latest white paper here.
And you thought the cloud was complicated. When cloud computing first debuted, everyone was trying to figure out what it was; it meant different things to different people. Fast-forward to 2016, and it was déjà vu all over again – this time with the edge. In fact, one can argue that edge computing is even more ambiguous, misunderstood, and unknown at this point than the cloud was when it started out. Case in point: one customer recently told me that edge “means everything and nothing.”

A big part of that uncertainty stems from the fact that the edge is a simple word for a completely emerging, complex, and misunderstood topic. It leaves people asking what’s real in this space, what the basic requirements are, how others define their edge strategies, and so on. But with Forbes Magazine naming edge computing one of the top 10 trends for digital transformation in 2018, it’s a topic that companies who want to stay competitive cannot afford to ignore.

While the industry has not yet landed on a completely cohesive definition of the edge, there has been some basic alignment. In general, it’s about moving compute closer to where data is being generated – where data and content are valuable, mobile and distributed. Often this data requires processing for real-time decision-making. Rather than incur the cost and latency of sending this information to the cloud or a centralized data center, many businesses are looking to incorporate edge computing within their infrastructures.

Take, for example, the airline industry. AviationWeek reported that some airplanes have 5,000 sensors that generate up to 10GB of data per second. A single twin-engine aircraft on a 12-hour flight can produce up to 844TB of data. Even with a 100Gb pipe, it would take over 18 hours to transfer this data to a centralized data center. 
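As a sanity check, the transfer-time figure above works out as follows, assuming the pipe is a 100 gigabit-per-second link (and ignoring protocol overhead, which would only make it slower):

```python
# Rough check of the transfer-time claim: 844 TB over a 100 Gb/s link.

data_bytes = 844e12            # 844 TB of flight data
link_bits_per_s = 100e9        # 100 Gb/s link

seconds = data_bytes * 8 / link_bits_per_s  # bytes -> bits, then divide by rate
hours = seconds / 3600
print(f"Transfer time: {hours:.1f} hours")   # ≈ 18.8 hours
```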
With edge computing, you can speed analysis of the critical data and ensure the plane is ready to fly again, while back-hauling the rest of the data.

ESI Takes on Edge Computing

If you didn’t attend Mobile World Congress Barcelona or Dell Technologies World, you probably missed our Extreme Scale Infrastructure (ESI) team talking about the latest developments in edge computing. Here are some of the key themes we’ve been sharing at events:

Applying Hyperscale Principles to the Edge: It may sound counter-intuitive, but as Dell EMC vice president and fellow Ty Schmitt explains in this interview at Dell Technologies World 2018, ESI is taking what we’ve learned building Modular Data Centers (MDCs) for macro data centers and applying those principles to edge deployments. It’s still about packaging IT, power, cooling, security and management into a right-sized solution for the customer, but the operational model and approach have changed. For example, with edge, you may be trying to figure out how to distribute 10 megawatts of power across a thousand different locations, as opposed to concentrating it in a single, central location. Watch to learn more.

ESI’s Latest Micro MDC Concept Takes Center Stage: Companies are realizing that trying to migrate all of their data to the cloud or a centralized data center generates bandwidth congestion, latency and cost. In many cases it’s just not fast enough (see the airline example above, or think of self-driving cars and the billions of other smart devices woven into the IoT landscape). That’s why ESI has introduced the idea of a micro MDC, designed specifically for the edge.

Previewing its latest prototype at Mobile World Congress 2018, Dell EMC distinguished engineer Mark Bailey explains how the tenets of IT, power, cooling and a structure inclusive of security, networking and power distribution were incorporated into a single, 18-kilowatt, one-rack solution. 
Dive deeper into the specifics with Mark here.

Managing at the Edge: Now that we’ve talked about moving data centers out to the edge, management can’t be an afterthought. With potentially thousands of micro MDCs spread across the world – often in remote locations and without operators onsite – how do you manage them? Dell EMC senior principal engineer Tyler Duncan describes how we’re monitoring and managing at the edge at the recent OSIsoft PI World conference in San Francisco. Watch the video to learn how we’re integrating OSIsoft’s PI System into our micro MDCs to monitor electricity consumption, avoid peak power charges, detect early warning signs of failures and perform other tasks.

Conclusion

With the space still emerging, companies that want to occupy the edge are going to face logistical, security, regulatory and other challenges. Organizations are going to need a partner or provider who will take the time to learn their strategy and take a cross-platform view, with specialists from the facility, IT, logistics, network and security arenas.

While ESI has been busy talking to customers about edge computing from an IT and data center perspective, that’s just one piece of the puzzle, with software and services equally important. Dell Technologies and Dell EMC know that edge solutions are evolving, and we are investing tremendous resources throughout the entire company to create products and solutions, proofs of concept and field trials to ensure we meet your requirements at the edge.

Visit us here if you’d like to learn more about ESI capabilities, or reach out to [email protected] if you want to discuss your edge initiatives.
According to the World Bank Group, 15 percent of the world’s population – one billion people – have a disability, and between 110 million and 190 million people experience significant disabilities. In the last U.S. census in 2010, over 56 million people were counted as having a disability, yet the employment ratio for people without a disability (65 percent) is more than three times that of people with a disability (18 percent).*

The World Bank Group goes on to note that individuals with disabilities can struggle with “…inaccessible physical environments and transportation, the unavailability of assistive devices and technologies, non-adapted means of communication, gaps in service delivery…and stigma in society.” These are all areas where corporations, especially Dell, can help.

At Dell, we recognize the importance of diversity and inclusion. Patrick Poljan and I are the global executive sponsors for the True Ability Employee Resource Group (ERG). True Ability joins team members who experience or support those with a range of physical or intellectual disabilities, while celebrating achievements and spreading awareness so everyone is positioned for success.

My involvement in True Ability lets me see the impact our ERGs have on our customers, team members, and communities. However, I also have a personal stake in the success of our ERG, as I am the parent of a child with special needs. I want my son to ultimately achieve his full potential and be a positive influence in the workforce.

Therefore, it gives me great pride to announce that Dell recently received a top score of 100 percent on the Disability Equality Index® (DEI®), a joint initiative of Disability:IN (formerly the U.S. Business Leadership Network) and the American Association of People with Disabilities (AAPD).

DEI® is designed to promote inclusivity and understanding of people with disabilities in the workplace. 
DEI® measures six areas: culture and leadership, enterprise-wide access, employment practices, community engagement, supplier diversity and non-U.S. operations.

As a top-scoring company, Dell was also recently recognized as one of the “2018 DEI Best Places to Work for Disability Inclusion.” These awards are a testament to the thousands of team members who have positively impacted our workplace regarding disability issues.

I am very pleased that Dell has received this recognition – but there is more work we can do:

- We can drive even greater accessibility into our products and services for our customers.
- We can continue to foster a safe and inclusive environment for team members to declare they have a disability.
- We can intensify our efforts to drive inclusion, benefits programs, and accessibility for our disabled team members.
- We can foster larger efforts to hire, retain, and develop team members with a disability.
- We can partner together on issues that span multiple ERGs.
- We can actively champion the diversity of disabled individuals within our local communities and the workforce.

Let’s take a moment to celebrate this great recognition and strive to drive even more disability inclusion in the future!

*United States Department of Labor, Bureau of Labor Statistics, Persons with a Disability: Labor Force Characteristics News Release, June 21, 2018. https://www.bls.gov/news.release/disabl.htm
Modern service providers face an imperative to utilize the petabytes of data collected from their operations to accelerate business growth and improve operational efficiency. Service providers have learned that the value of data is maximized when real-time, streaming and batch analytics are combined to enable advanced use cases and inform important operational and investment decisions about their networks. Likewise, most service providers today have experience with data lakes and have learned the crucial role that advanced data management plays in preventing the data lake from turning into an unwieldy data “swamp.”

Dell EMC Service Provider Analytics (SP Analytics) Ready Architectures help service providers apply actionable insights from network and customer data to improve the customer experience, increase operational efficiency, and introduce new revenue-generating services, all in the pursuit of transforming themselves into data-driven businesses. These Ready Architectures help service providers accelerate their digital transformation by making it easier for them to implement advanced use cases on a unified data platform. Dell EMC SP Analytics is designed to scale with a customer’s business needs, allowing the customer to derive value from data analytics today while preparing for new use cases in the future. Our goal is to help customers build on the known benefits of business intelligence and rules-based automation to derive new benefits from advanced machine learning-enabled automation and eventually achieve fully autonomous operations.

Dell EMC SP Analytics is built on the foundation of the industry’s #1 compute, storage and networking infrastructure portfolio. 
Our guiding design principles for SP Analytics include:

- Right-time intelligence, combining streaming, near real-time and historical data analytics to allow decisions to be made while data still has value
- Efficient data management, eliminating data silos, reducing management cost and complexity, and improving efficiency
- Data democratization, providing access to data via open APIs – allowing developers to write new applications and reap increasing TCO benefits from the underlying platform

The net benefit of SP Analytics is that it allows customers to see their business in a completely new light. With greater customer and operational visibility, service providers can begin to take the guesswork out of decisions and become more data-driven. With proven Dell EMC infrastructure that can be deployed in hours rather than weeks, service providers can focus their resources on deriving business insight instead of spending precious time on implementation and maintenance. And by improving customer intimacy, SP Analytics helps transform the service provider business by allowing customers to guide the way. SP Analytics can make it much easier to delight customers and to unleash business-differentiating innovation. In addition, service providers can find new ways to reduce operational expenses along the way.

Dell EMC SP Analytics creates value on top of our Ready Architectures for Hadoop by adding software capabilities from our ISV partners, Cardinality and Zaloni. To learn more, please stay tuned for additional blogs outlining the actual makings of a Ready Architecture and its benefits, as well as the details of the capabilities of the two new data analytics Ready Architectures.

Dell EMC strives to help service providers become faster to market, focus their precious resources on insights rather than implementation, and be secure without compromise by meeting stringent data security and compliance needs without sacrificing business agility. 
Get in touch with the Dell EMC Service Provider Analytics team to learn more and experience the magic first hand.
KISSIMMEE, Fla. (AP) — The Florida Department of Law Enforcement will investigate the body slam by a school resource officer of a female high school student who appears to lose consciousness after her head hits the concrete in videos taken by other students. Osceola County Sheriff Marcos Lopez said Wednesday that his office was turning over the investigation of what happened between his deputy and a student at Liberty High School in Kissimmee to state investigators “to be sure no one can say that we are looking out for our own.” The deputy has worked for the sheriff’s office for a decade. He has been put on paid administrative leave.
LOS ANGELES (AP) — Former San Diego Mayor Kevin Faulconer says he’ll run for California governor and plans to formally announce the campaign Tuesday in Los Angeles. Faulconer is the first major Republican to formally step into the contest as signatures are being gathered for a recall effort against Democratic Gov. Gavin Newsom. Faulconer says in an online video that California has become a failed state under Newsom, accusing him of botching such issues as homelessness and dealing with the COVID-19 crisis. However, Faulconer would face an uphill fight in a state where Democrats outnumber Republicans by nearly 2-to-1.