Bare Metal Blog: Quality Is Systemic, or It Is Not

In all critical systems the failure of even one piece can have catastrophic results for the user

February 5, 2013

The Bare Metal Blog series talks about quality testing of hardware, in all its forms. F5 does a great job in this space.

For those of you new to the Bare Metal Blog series, find them all right here.

In all critical systems – from home heating units to military firearms – the failure of even one piece can have catastrophic results for the user. While it is unlikely that the failure of an ADC will be quite so catastrophic, it can certainly make IT staff’s day(s) terrible and cost the organization a fortune in lost revenue. That’s not to mention the longer-term damage that downtime can do to an organization’s brand. It is actually pretty scary to ponder the loss of any core system, but one that acts as a gateway and scaling factor for remote employee workloads and/or customer access is even higher on the list of Things To Be Avoided™.

In general, if you think about it, the number of hardware failures out there is relatively small. There is a ton of network gear doing its thing every day, and yes, there is the occasional outage, but if you consider the number of devices NOT going down on a given day, the failure rate is tiny.
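To put "tiny" in perspective, here is a back-of-the-envelope sketch. The fleet size and annualized failure rate below are made-up numbers for illustration, not F5 figures:

```python
# Back-of-the-envelope failure-rate math (illustrative numbers only).
fleet_size = 100_000        # hypothetical devices deployed worldwide
afr = 0.02                  # hypothetical 2% annualized failure rate

expected_failures_per_year = fleet_size * afr
expected_failures_per_day = expected_failures_per_year / 365

# Probability that any single device gets through a given day unscathed:
p_device_ok_today = 1 - afr / 365

print(f"~{expected_failures_per_day:.1f} failures/day across the fleet")
print(f"but each device has a {p_device_ok_today:.4%} chance of a quiet day")
```

Even with thousands of failures a year industry-wide, the odds for any one box on any one day are overwhelmingly in your favor – which is exactly why the rare failure feels so jarring when it lands on you.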

Still, no one wants to be in that tiny percentage any more than they absolutely must. Hardware breaks, and always will; it is the nature of electronic and mechanical things. But we should ask more questions of our vendors to make certain they’re doing all they can to keep the chances of a device breaking during its otherwise useful lifetime to a minimum.

For an example of doing it right, we’ll talk a bit about the lengths F5 goes to in an attempt to make devices as reliable as possible from an electro-mechanical perspective. While I am an F5 employee, there is no doubt that F5 gear is highly reliable: it was known for quality before I came to F5, and I have not heard anything since joining that would change that impression. So I use F5 because (a) I am aware of the steps we take as an organization and (b) our hardware testing is an example of doing it right.

And of course, there are things I can’t tell you, and things we simply won’t have room to delve into very deeply in this overview blog. I am considering extending the Bare Metal Blog series to include (among other things) more detail about the parts I would want to know more about if I were a reader, but for this blog we’re going to skim, so there is space to cover everything without making the post so long you don’t read to the end.

I admit it: I’ve talked to a lot of companies about testing over the years, and I can’t recall a vendor that did a more thorough job – though I can think of a few whose record in the field says they probably have a similar program. So let’s look at some of the quality testing done on hardware.

Parts are not just parts.
An ADC, like any computerized system, is a complex beast. There is a lot going on, and the quality of the weakest link is what sets the life expectancy and out-of-the-box quality standards for the overall product. As such, there are some detailed parts and subassembly tests that gear must go through.

For F5, these tests include:

  • Signal Integrity Tests to test for signal degradation between parts/subsystems.
  • BIOS Test Suites to validate that the BIOS performs as expected and handles exception cases reliably.
  • Software Design Verification Testing to detect and eliminate software quality issues early in the development process.
  • Sub-Assembly Tests to verify correct subsystem performance and quality.
  • FPGA System Validation Tests to determine that the FPGA design and hardware perform as expected.
  • Automated Optical Inspection used on the PCB production line to prevent and detect defects.
  • Automated X-Ray Inspection takes 3D slices of an assembled circuit board to prevent and detect defects.
  • In-Circuit Test uses a series of probes to test the populated circuit board with power applied to detect defects.
  • Flying Probe uses a “golden board” (a known-perfect sample) to compare against a newly produced board to verify there are no defects.
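Conceptually, the flying-probe comparison is a per-net measurement diff against the golden board. Here is a minimal sketch of that idea – the net names, measured values, and the 5% tolerance are invented for illustration; a real tester physically probes the board and measures resistance, capacitance, and continuity:

```python
def compare_to_golden(golden: dict, measured: dict, tolerance: float = 0.05):
    """Return the nets whose measured value deviates from the golden
    board's reference value by more than the allowed tolerance."""
    defects = []
    for net, expected in golden.items():
        actual = measured.get(net)
        if actual is None:
            defects.append((net, "missing measurement"))
        elif abs(actual - expected) > tolerance * expected:
            defects.append((net, f"off by {abs(actual - expected):.2f}"))
    return defects

# Golden-board reference values (ohms) vs. a freshly produced board.
golden = {"NET_PWR": 10.0, "NET_CLK": 4.7, "NET_RST": 100.0}
measured = {"NET_PWR": 10.2, "NET_CLK": 6.1, "NET_RST": 99.5}

print(compare_to_golden(golden, measured))  # only NET_CLK is out of tolerance
```

The value of the golden-board approach is that you don’t need a full electrical model of the design – any board that measures like the known-good sample, within tolerance, passes.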

Now that’s a lot of testing, though I have to admit I’m still learning about the testing process, so there may well be more. But you’ll note that some things aren’t explicitly called out here – like parts picked from suppliers, which could be caught by some of these tests but might not be. That is because supplier quality standards are separate from actual testing, and require that suppliers whose parts make it into F5 gear are up to standard.

Supply demands
So what do we, as an organization, require from a quality perspective of those who wish to be our suppliers? Here’s a list. This list I KNOW isn’t complete, because I pared it down for the purposes of this blog. I think you’ll get the idea from what’s here though.

  • All assembly suppliers are ISO 9000 and ISO 14001 certified.
  • Suppliers assemble and test their products to F5 specifications.
  • Suppliers are monitored with closed loop performance metrics including delivery and quality.
  • Formal Supplier Corrective Action Response program – when a fault in supplier quality is found, a formal system quickly addresses the issue.
  • Quarterly reviews with senior management utilizing a formal supplier scorecard to evaluate supplier quality, stability, and more.
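The “formal supplier scorecard” idea boils down to a weighted rollup of those closed-loop metrics. The metric names, weights, and threshold below are hypothetical stand-ins, not F5’s actual scorecard:

```python
# Hypothetical weighted supplier scorecard (weights sum to 1.0).
WEIGHTS = {
    "on_time_delivery": 0.3,
    "defect_rate_score": 0.4,          # higher score = fewer defects
    "corrective_action_closure": 0.3,  # how quickly SCAR items get closed
}

def score_supplier(metrics: dict) -> float:
    """Roll per-metric scores (0-100) into a single weighted score."""
    return sum(metrics[name] * weight for name, weight in WEIGHTS.items())

suppliers = {
    "Supplier A": {"on_time_delivery": 98, "defect_rate_score": 95,
                   "corrective_action_closure": 90},
    "Supplier B": {"on_time_delivery": 85, "defect_rate_score": 70,
                   "corrective_action_closure": 60},
}

for name, metrics in suppliers.items():
    total = score_supplier(metrics)
    status = "OK" if total >= 85 else "flag for quarterly review"
    print(f"{name}: {total:.1f} ({status})")
```

The point of a single rolled-up number isn’t precision; it’s that every quarterly review compares every supplier on the same yardstick, so a slipping supplier stands out immediately.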

The biggest one in the list, IMO, is that suppliers assemble and test product to F5 specifications. Their part is going in our box, but our name is going on it. F5 has a vested interest in protecting that name, so setting the standards by which the suppliers put together and test the product they are supplying is huge. After all, many suppliers are building tiny little subsystems for inside an F5 device, so holding them to F5 standards makes the whole stronger.

By way of example, we require the more reliable but more expensive version of capacitors from our suppliers. For a bit of background on the problem, there is an excellent article on hardwaresecrets.com (and a pretty good overview on wikipedia.com) about capacitors. By demanding that our suppliers use better quality components, the overall life expectancy of our hardware is higher, meaning you get fewer calls in the middle of the night.
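The payoff from better capacitors is easy to see with the common rule of thumb that an electrolytic capacitor’s expected life roughly doubles for every 10°C it runs below its rated temperature. A quick sketch – the rated hours and temperatures are generic datasheet-style figures, not tied to any specific part:

```python
def capacitor_life_hours(rated_hours: float, rated_temp_c: float,
                         ambient_temp_c: float) -> float:
    """Rule-of-thumb estimate: life doubles for each 10 C below the rated temp."""
    return rated_hours * 2 ** ((rated_temp_c - ambient_temp_c) / 10)

# A commodity cap rated 2,000 h @ 105 C vs. a long-life part rated
# 10,000 h @ 105 C, both running at 55 C inside a chassis:
cheap = capacitor_life_hours(2_000, 105, 55)
better = capacitor_life_hours(10_000, 105, 55)
print(f"commodity: ~{cheap / 8760:.0f} years, long-life: ~{better / 8760:.0f} years")
```

Same 50°C of headroom, but the longer-rated part starts from a 5x better baseline – and that multiplier carries straight through to the expected service life of the whole box.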

The whole is different from the sum of the parts
While an organization can test parts until the sun rises in the west, that will not guarantee the quality of the overall product. And in the end, the overall product is what a vendor sells. As such, manufacturers generally (and F5 specifically) keep an entire suite of whole-product tests on hand for product quality assessment. Here are some of the ones used at F5.

  • Mechanical Testing – test the construction of the system by applying shock, drop, vibration, repetitive insertions/extractions, and more.
  • Highly Accelerated Life Testing (HALT) – heat and vibration are used to determine the quality and operational limits of the device. The goal is to simulate years of use in a manageable timeframe.
  • Environmental Stress Screening – expose the device to environmental extremes, from temperature to voltage.
  • MFG Test Suite System Stress Testing – turn everything on, reboot, power cycle, et cetera. By way of example, we cycle power up to 10,000 times during this testing.
  • On-Going Reliability Testing – products currently in the manufacturing line are randomly picked and put in a burn-in chamber, which then tests the device at elevated temperature.
  • Post Pack-Out Audit – pull random samples from our finished goods inventory to verify quality.
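The power-cycle portion of the stress suite reduces to a loop like the sketch below. The device class here is a stand-in stub (with an artificial, deterministic failure for demonstration); a real rig would drive a switched PDU and check the unit over its management port:

```python
class StubDevice:
    """Stand-in for a real rig's switched PDU + management-port interface."""
    def __init__(self):
        self.cycle = 0

    def power_cycle(self) -> None:
        # A real rig would toggle the outlet, then wait for the unit to POST.
        self.cycle += 1

    def health_check(self) -> bool:
        # Simulate a marginal unit that fails to boot every 4,000th cycle.
        return self.cycle % 4000 != 0

def power_cycle_stress(device, cycles: int = 10_000) -> list:
    """Cycle power repeatedly, recording each cycle where the unit fails to come up."""
    failures = []
    for i in range(cycles):
        device.power_cycle()
        if not device.health_check():
            failures.append(i)
    return failures

failures = power_cycle_stress(StubDevice(), cycles=10_000)
print(f"{len(failures)} failed cycles out of 10,000")
```

The logged cycle numbers matter as much as the count: a failure that only shows up thousands of cycles in points at a marginal component that would never be caught by a single smoke test.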

That’s a lot of testing, and it is not anywhere near all that F5 does to validate a box. For example, while software testing got a hat-tip at the component level, our Traffic Management Operating System (TMOS) has a completely separate set of testing, validation, and QA processes that are not listed here, because this is the Bare Metal Blog. Maybe at some point in the future I’ll do a Bare Metal Blog-style series on our software. That would be interesting for me, and hopefully for you also.

It’s not over when it’s over
The entire time that Lori and I were application developers, there was a party to celebrate every time we finished a major piece of software. From an evening out with the team when our tax prep software shipped to a bottle of champagne on the roof of an AutoDesk office building when AutoCAD Map shipped, we always got to relax and enjoy it a bit.

While our hardware dev teams get something similar, our hardware test teams don’t pack up the gear and call it a product. For the entire lifecycle of an F5 box – from first prototype to End of Life – our test team does continuous testing to monitor and improve the quality of the product. Unlike most of what you will find in this blog, that is pretty uncommon. Other companies do it, but unlike ISO certification or HALT testing, continuous testing is not accepted as a mandatory part of product engineering in the computing space. F5 does this because it makes the most sense: from variations in chip quality to suppliers changing their suppliers, things change over the production run of a product, and F5 feels it is important to overall quality to stay on top of that fact. This system also allows for continuous improvement of the product over its lifecycle.
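That kind of lifecycle monitoring is essentially statistical process control: track a quality metric per production batch and flag when it drifts outside its historical band. A minimal 3-sigma sketch – the batch data is fabricated to show what a drift looks like:

```python
import statistics

def flag_drift(history, new_value, sigmas: float = 3.0) -> bool:
    """Flag a batch metric outside mean +/- sigmas * stdev of its history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_value - mean) > sigmas * stdev

# Burn-in failure rate (%) per production batch over past quarters...
history = [0.8, 1.1, 0.9, 1.0, 1.2, 0.9, 1.0, 1.1]

# ...a normal batch stays inside the band, but if a supplier quietly
# changes a component and the rate jumps, the chart catches it:
print(flag_drift(history, 1.05))  # inside the normal band
print(flag_drift(history, 2.4))   # well outside it
```

The chart doesn’t tell you *what* changed – a chip lot, a sub-supplier, a solder process – only that something did, early enough to chase it down before a bad batch ships in volume.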

One of the many reasons I think F5 is a great company. I have twice run into scenarios involving a vendor who did not do this type of testing, and it cost me. Once as a reviewer, which means it was worse for the vendor than for me, and once as an IT manager, which means it was worse for me than for the vendor. I would suggest you start asking your vendors about lifetime testing, because a manufacturing or supplier change can impact the reliability of the gear. And if it does, either they catch it, or you could be walking into a nightmare. The perfect example (because so many of us had to deal with it) was a huge multinational selling systems with “DeskStar” disks that we all now lovingly call “Death Star” disks.

You can rely on it
This process is a proactive investment by F5 in your satisfaction. You might think, “Doesn’t all that testing – particularly continuous testing across the breadth of devices you sell – cost a lot of money?” The answer: nowhere near as much as having to visit every device of model X and repair it, and nowhere near as much as the loss of business that persistent quality issues generate. And it is true. We truly care about your satisfaction and the reliability of your network, but when it comes down to it, that caring is based on enlightened self-interest. The net result, though, is devices you can trust to just keep going.

I know, because we have one in our basement from before we came to F5. It’s old and looks funny next to our shiny newer one, but it still works. It’s EOL’d, so it isn’t getting any better, and when it breaks it’s done, but the device is nearly a decade old and still operates as originally advertised.

If only our laptops could do that.

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
