Author Archives: Jim Fink

NYSE Data Center Availability Correction

Every investor is familiar with corrections in equity valuations, but P/E ratios aren’t the only metric that undergoes mean-reversion on the stock exchanges. Last Wednesday, the statistical availability of the exchange’s data center underwent a correction of its own. The event, caused by an improperly executed software upgrade, grabbed headlines and drew criticism on CNBC because it was the longest-duration outage due to technical issues on record. To be sure, it was a major event, “a bad day” as the NYSE president put it from the exchange floor. However, taken in the context of the cumulative availability over the last 30 or so years, this event should surprise us no more than any of the routine 10% corrections we see in the stock indices.

[Figure: cumulative NYSE data center availability over time]

Availability, in data center parlance, is simply the fraction of time that the center is working correctly. In the data center infrastructure business, the nearly unachievable holy grail of availability scores is called “five nines,” which corresponds to about 5 minutes of unplanned downtime per year. It is easy to do this in some years, but averaging five nines over the long term is a tall order for even world-class data centers with unlimited budgets. Note that we see occasional outages even from the likes of Google, Apple, and Yahoo. The NYSE hit this elusive target briefly in 1989, then plummeted back to a still-respectable four nines after a Con Edison transformer exploded and brought down the exchange in 1990. Up until last week’s four-hour “glitch,” availability had been creeping back up, reaching .99997 before the correction. Viewed in this cumulative manner, one should walk away confident that the exchange is still very reliable and becoming more so over the long term, despite this latest lengthy outage. Traders should be comforted by the similarity of the above graph to the stock charts they are always performing technical analysis on. I’ve yet to see any creative derivatives that allow us to make wagers on the next outage at the NYSE, but I’ll be watching, maybe even wagering.
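If you want to sanity-check the “nines” arithmetic, a few lines of Python will do it. This is only a sketch: the 30-year window and the .99997 starting point come from the discussion above, and the numbers are illustrative rather than official NYSE statistics.

```python
# Sketch of the "nines" arithmetic discussed above (illustrative numbers only).
HOURS_PER_YEAR = 8766          # average year, including leap years

def downtime_budget_minutes(nines):
    """Allowed unplanned downtime per year for an availability of n nines."""
    availability = 1 - 10 ** (-nines)
    return (1 - availability) * HOURS_PER_YEAR * 60

for n in (3, 4, 5):
    print(f"{n} nines -> {downtime_budget_minutes(n):.1f} minutes of downtime per year")

# Effect of a single 4-hour outage on roughly 30 years of cumulative availability.
years, start_avail, outage_hours = 30, 0.99997, 4
total_hours = years * HOURS_PER_YEAR
new_avail = 1 - ((1 - start_avail) * total_hours + outage_hours) / total_hours
print(f"Cumulative availability after the outage: {new_avail:.6f}")
```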

Electrician Injury – Cable Pulling

Most of us in the electrical industry have seen electricians install cables in conduits. The general idea is to install a pulling line, then fix the line to the cables to be installed, apply lubricant, and pull… somehow.

However, the force required can be large, and can make things dangerous. It is not uncommon for electricians to be injured and equipment or installation materials to be damaged during this operation. The cable insulation can also be damaged, leading to catastrophic faults with potential for injury, electrocution or extensive downtime years later after construction is complete. I recall working for an electrical contractor as a summer job when I was 15 years old. The foreman had me driving a pickup truck that was attached to a pulling line at the mouth of a conduit high up on an exterior wall. Over and over I backed the truck up, then pulled forward using momentum to advance the cables a few feet further into the conduit. As I drove forward, the upward component of the tension became high enough to lighten the back end of the truck and cause the wheels to spin. The foreman told me to “keep doing that” and went back inside, presumably to check whether the guy feeding the cable bundle had lost an arm into the conduit yet. Despite my supervisor’s apparent lack of proper equipment and knowledge about cable installation, I was having great fun and was immensely proud to have been suddenly promoted from “tool gopher” to “company truck driver”. Life was good.

To my surprise, I later discovered that cable pulling is meant to be a more refined operation than that performed by the guys in the truck pull at the county fair. With the benefit of some education and experience I learned there are several important considerations rooted in math, physics and various published industry standards. How much tension will be required? What sidewall force can the conduit withstand? What pulling equipment should be used, or can we safely perform a manual pull? Will lubricant be required? What do the static & kinetic coefficients of friction tell us about stopping in the middle of a pull?

In short, pulling tension is related to the simple friction equations that we all learned in high school physics. However, the geometry makes them a bit more complicated when dealing with conduit runs that go up, down and around corners. A simplified form of the pulling equations is shown below.

[Figure: simplified cable pulling tension equations]
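For readers without the image, the commonly used simplified relations look like this. This is a standard-form reconstruction rather than the exact figure, with the symbol names assumed as noted in the comments:

```latex
% Simplified pulling-tension relations (standard form; symbol names assumed):
%   T_in, T_out : tension entering / leaving a section
%   mu          : coefficient of friction between cable and conduit
%   w           : cable weight per unit length,   L : section length
%   theta       : bend angle in radians
\begin{aligned}
\text{Straight (horizontal) section:} \quad & T_{\mathrm{out}} = T_{\mathrm{in}} + \mu\, w\, L \\
\text{Bend (capstan relation):} \quad & T_{\mathrm{out}} = T_{\mathrm{in}}\, e^{\mu \theta}
\end{aligned}
```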

We analyze each straight and curved section of the conduit in series, using the tension out of one section as the tension into the next. The tension in the line at the outlet of the last section is our pulling tension, a very useful number that allows us to select a strong enough rope and determine what pulling equipment is needed. Even the burliest electricians won’t win a 1,000 lb tug of war, so keep them off the workers’ comp dole and don’t let them try!

One observation from the equations is that elbows hurt. For those who aren’t math people, that exponential means that every bend in the conduit causes a dramatic increase in tension. You might have guessed that. But a less obvious product of this analysis is the determination of preferred pulling direction. As an example of this, consider the figure below. Should we pull from A to C or C to A?

[Figure: example conduit run from A to C with a bend at B]

The answer is not intuitive, and if you ask a room full of electrical workers this question you are likely to get a mixed response (there will be no shortage of colorful supporting theories, however). Looking at our equations, one can see that the math we use to calculate pulling tension is a nonlinear operation. That is, we calculate the A-B tension and the B-C tension separately, and it matters a great deal which one comes first. In this case it turns out that pulling from C to A requires less tension. Barring other circumstances such as accessibility, the pull in this direction will be less likely to injure electricians or damage equipment and materials. One can construct a spreadsheet or use a program to make this calculation fast and simple for use during design, or on the jobsite. A little extra time tapping numbers on the iPad can save a lot of frustration, blood, and sweat during the pull!
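Here is a minimal Python sketch of that sequential calculation, using the simplified straight-section and capstan relations shown earlier. The geometry, friction coefficient, and cable weight are invented for illustration and are not taken from the figure:

```python
# Hypothetical sketch: sequential pulling-tension calculation through a conduit run.
# Straight run: T_out = T_in + mu*w*L ;  bend: T_out = T_in * exp(mu*theta).
import math

MU = 0.4   # coefficient of friction (assumed, dry conduit)
W = 1.5    # cable bundle weight, lb/ft (assumed)

def straight(t_in, length_ft):
    """Tension added by friction along a horizontal straight section."""
    return t_in + MU * W * length_ft

def bend(t_in, angle_deg):
    """Multiplicative tension increase around a bend (capstan relation)."""
    return t_in * math.exp(MU * math.radians(angle_deg))

def pull(sections, t_in=0.0):
    """Chain the sections in pulling order; tension out of one feeds the next."""
    t = t_in
    for kind, value in sections:
        t = straight(t, value) if kind == "straight" else bend(t, value)
    return t

# Invented example run: 300 ft straight, a 90-degree bend, then 20 ft straight.
a_to_c = [("straight", 300), ("bend", 90), ("straight", 20)]
c_to_a = list(reversed(a_to_c))

print(f"Pull A->C: {pull(a_to_c):.0f} lb")   # long straight run feeds into the bend
print(f"Pull C->A: {pull(c_to_a):.0f} lb")   # bend comes early, while tension is low
```

In this made-up run, the direction that puts the bend early in the pull (while tension is still low) comes out substantially lighter, which is exactly the kind of non-obvious result the calculation exists to reveal.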

Other cable pulling tips:

  • Consider feeder reels and spools at the inlet to reduce incoming tension.
  • Use pulling equipment that allows smooth, continuously adjustable pulling speed. Restart tension after an unplanned stop can be much higher than the running pulling tension, possibly breaking the pulling rope.
  • Use a tension meter to predict an accident before it happens.
  • Always have two-way communication between the inlet and outlet locations.
  • Use a lubricant, and ensure that the manufacturer indicates compatibility with your cable jacket type.

Need help with pulling calculations? Had an electrician injured on the job? Need some training on cable pulling best practices? Contact Kleinholz Inc. today for a free consultation.

Jim Fink, P.E.

Freeze Protection for HVAC Equipment

Much of the country is currently in single-digit temperatures, and “polar vortex” has officially been added to the list of meteorological buzzwords (right on the heels of the “Asperatus Cloud”… how can the weatherman keep up?). So it is no surprise that we’ve been fielding calls from adjusters relating to HVAC equipment freeze damage and water damage from burst pipes.

The figure below explains why pipes burst when they freeze. It’s all about how the density curve turns back down between 4°C and 0°C, and then drops sharply when the water turns to ice.

[Figure: density of water vs. temperature]

Put simply, water is densest at about 4°C and expands by roughly 9% when it freezes into ice. Water is not timid about its need to expand; it does so with tremendous force. This simple fact of physics creates billions of dollars in water damage losses for insurance companies. Unfortunately, the alternative is worse: if water didn’t have this destructive little feature, our lakes, oceans and rivers would freeze from the bottom up, causing a major food chain and climate disruption at best, or the end of life on Earth at worst. So we accept the pipe bursts, whilst nature provides the fish with a most accommodating under-ice environment during the winter months.

There are numerous scenarios that can cause a loss related to pipes freezing. As with most losses, the fault can lie with the owner, installer, manufacturer, or a combination of these parties. Some common origins of water freeze damage include:

Defective Installation

The National Building Code (encompassing the Electrical and Mechanical Codes) goes a long way to promote pipe freeze protection in equipment and buildings. These codes are normally adopted by state law, and contractors must adhere to them. The codes increase in complexity every year, and through lack of training, or sometimes through willful efforts to cut costs, installation requirements are missed and the building is consequently not code compliant. Relating to HVAC equipment and plumbing, the codes specify things like minimum insulation R-values, acceptable places in the building to install HVAC and electrical equipment, pipe insulation types, freeze protection features, and workmanship standards. Some hydronic HVAC systems are required to have chemical freeze protection such as ethylene or propylene glycol. These agents have the undesired effect of reducing system performance during normal operation, so their use is a tradeoff. But if they are omitted, or present in too low a concentration for the regional weather conditions, the results can be disastrous.

Defective Equipment

Since the dawn of the HVAC industry in the late 1800s, systems have grown in complexity every year. The complexity often results from our desire to reduce energy costs, since HVAC equipment is responsible for a substantial part of most commercial and residential energy use. With complexity comes the possibility of more failure modes. As an example, designers often employ a feature known as “outdoor reset” on heating systems, which saves energy by reducing pump speeds and/or boiler water temperature during periods of unseasonably warm outdoor temperatures. A failure in the outdoor reset circuitry, or even mounting the outdoor reset sensor in an incorrect location such as direct sunlight, can cause the “smart” heating system to think it is warmer outside than it really is. The boiler then reduces or stops heat transfer into the building, and a pipe ruptures. Freeze stats, rupture discs, heat trace tape and other equipment can also be used to guard against water damage. Each of these devices can fail, or be applied improperly, contributing to a loss.
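To make the outdoor-reset failure mode concrete, here is a minimal sketch of a typical linear reset curve. The endpoint temperatures and setpoints are assumptions chosen for illustration; real curves come from the boiler control’s configuration, not from this example.

```python
# Hypothetical outdoor-reset curve: boiler supply water setpoint falls linearly
# as the outdoor temperature rises. All endpoint values below are assumed.
def supply_setpoint(outdoor_f, design_out=0.0, design_supply=180.0,
                    mild_out=60.0, mild_supply=120.0):
    """Linear interpolation between design-day and mild-weather setpoints (deg F)."""
    if outdoor_f <= design_out:
        return design_supply
    if outdoor_f >= mild_out:
        return mild_supply
    frac = (outdoor_f - design_out) / (mild_out - design_out)
    return design_supply - frac * (design_supply - mild_supply)

# A sun-baked or failed outdoor sensor reading 70 F on a 10 F day drives the
# setpoint to its minimum, so the boiler under-heats the building and pipes can freeze.
print(supply_setpoint(10))   # ~170 F with a correct sensor reading
print(supply_setpoint(70))   # 120 F with a faulty reading on the same cold day
```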

Operator Error

Improper operation of equipment by owners or facilities managers contributes to many freeze accidents. Proper thermostat setting is among the most basic actions that can prevent a loss, and insurance policies often carry a requirement to maintain minimum thermostat settings for this reason. Eliminating cold air leaks in the building shell, and properly protecting HVAC elements and piping located in or near outside walls, offers another level of protection. Homeowners and enterprise facility managers alike should consider using a cold weather checklist to protect property. It is particularly useful to plan for power loss scenarios. Since loss of utility electric power is not an infrequent event, it is advisable not to rely on the presence of utility power as the sole means of freeze protection. Simple procedural steps in a checklist, such as “shut off the water main if the outage duration exceeds 1 hour and outdoor temperatures are below 20°F,” can save millions of dollars in damages.

Freeze Protection

If you are an owner, counsel or adjuster in need of expert advice on a freeze claim, we can help. Contact Kleinholz Inc. today for more details.

Hard Drive Heads

How Magnetic Fields Affect Hard Drives

Abstract

While it is obvious that hard drives can tolerate the Earth’s relatively weak magnetic field, their ability to reliably store and transfer data in the presence of stronger magnetic fields is less well known. Two different models of hard drives were tested in non-time varying magnetic fields of various strengths. The impact of the magnetic flux exposure was quantified by performing data transfer rate benchmark testing, data integrity tests, predictive testing, and surface scanning on the drives before, during and after exposure.  The drives were found to be largely immune to magnetic flux densities up to approximately 250 Gauss (0.025 Tesla).

Introduction

The first magnetic hard disk drive was created by IBM in 1956. Despite extreme improvements in data density and data transfer rates, the fundamental concept of operation in modern drives remains largely intact (Hayes, 2002).  The drives store and retrieve data using movable read and write heads held proximal to rotating platters. The platters are coated with a thin layer of cobalt alloy (previously an iron-based magnetic material) which is divided into magnetic domains called “bit cells”. A bit cell in a modern drive contains 50 to 100 grains of magnetic material, and the collective magnetic orientation of these grains in a single bit cell represents a logical binary “0” or “1”. Through the read/write heads, the disk drive has the means to both detect previously written ones and zeroes (read), and to create or reverse magnetic polarization in bit cells to create new stored data (write).

[Figure: hard drive heads and their susceptibility to magnetic fields]

The actual mechanisms for reading and writing are entirely different. The writing operation is performed by an electromagnet which has a core designed to concentrate intense magnetic flux on individual bit cells as the head “flies” over the cells in the spinning platter.  The current applied to the electromagnet in the write head can be applied in either polarity, thereby establishing a magnetic south or north pole at the writing point, and consequently storing either a one or a zero. On older drives the read operation was performed by an inductive pickup coil, essentially performing the write operation in reverse. When a coil (read head) passes through a magnetic field, a current is induced according to Faraday’s law of induction. In the 1990s IBM invented the magnetoresistive head, which removed a barrier to further data density increases in the years that would follow. This method uses materials in the read head that change their resistance according to magnetic flux exposure. Therefore, the contents of successive bit cells can be read by monitoring the pattern of resistance changes provided by the read head.  Both the read and write operations benefit from technology that allows the heads to fly on a layer of air only 10-20 nm above the surface of the platter (Hayes, 2002).

 Understanding Vulnerabilities

The inner workings of hard disk drives must be understood to consider vulnerabilities to data loss due to externally applied magnetic fields. Knowing that an electromagnet stores data in the bit cells by passing magnetic flux through them suggests that some externally applied magnetic field of opposing direction might be able to cancel the intended field from the write head, thereby interfering with the write operation. After the data is stored, it is reasonable to expect that a strong enough interfering magnetic field could reverse the orientation of some or all of the grains in a bit cell, possibly converting a zero to a one or vice versa, and corrupting the data. The ability of the magnetic material to resist this phenomenon is known as coercivity, which has been well understood since the late 19th century. Pure iron has a very low coercivity of ~2 Oe (160 A/m) (Thompson, 1896), whereas the cobalt-based coatings of modern magnetic disk media can be on the order of 1000 times higher (Yang, 1991).

 Magnetic Fields in Data Centers

In the commercial data center industry, it is common knowledge that the value of stored data itself far exceeds that of the hardware upon which it is stored.  Today’s hard drives are designed to last about 5 years, measured by the Component Design Life (CDL) method.  Clearly a business process may rely on a database to generate revenue, but where the database resides is of little importance.  Consequently, enterprise data center owners go to great lengths to protect their data, sometimes constructing entire mirrored, geographically diverse facilities to reduce the probability of data loss.  It follows then that users of magnetic disk drives in such environments are understandably nervous about risks of data loss and corruption.

One such perceived risk is that associated with magnets being stored or used near magnetic storage media. Historically, this risk may have been well founded. The earliest hard disks, such as IBM’s 1956 model, used iron oxide as the magnetic material. The low coercivity of iron compounds means there would have been little resistance to data corruption from external magnetic fields. Iron compounds were still commonly used in the ubiquitous floppy disks of the 1980s and 1990s, which are known for being vulnerable to damage from even relatively weak magnets (Keizer, 2004). This may have contributed to a persisting fear of magnets in the vicinity of modern hard drives. Small rare earth magnets are commonly used in consumer products, and their usage is increasing. Apple Inc.’s iPad 2 contains 31 magnets, 21 in the device’s esteemed folding cover and another 10 in the iPad itself. As of the date of this paper, Apple has sold over 60 million iPads. In 2011, 92% of Fortune 500 companies were testing or deploying iPads (Wingfield, 2011), yet the servers, laptops and hard drives in proximity to all these iPad magnets are not failing.

Magnets are also showing up in data centers. Companies have developed products such as thermal containment systems that are designed to be mounted directly to IT equipment enclosures with magnets. This is thought to be advantageous, because other methods such as mechanical fastening and adhesives involve more labor and are problematic due to metal shavings and adhesive aging/failure. One objection to this recent industry trend has been concern over risk to magnetic storage media. This study determines whether magnets really pose a risk to modern hard drives, and if so, at what field strength.

The complete article was recently published in IEEE Potentials magazine. It may be downloaded here.

Still not convinced? Maybe you saw the Breaking Bad episode where the protagonists destroyed evidence on hard drives by parking a giant mobile electromagnet outside the locked police evidence room? Well, let’s just say that to get above 250 gauss from 10 feet away, they would have needed more than a few car batteries strung together in the back of a truck, but it was an amusing episode nonetheless. Hollywood, if you are reading, Kleinholz Inc. is available for technical consults, should any producers wish to form a stronger allegiance with scientific reality. Interestingly, EMI attacks on data centers are a real, and often unguarded, threat today, but that’s a topic for another blog post.
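For the curious, a back-of-the-envelope check makes the point. Treating the truck-mounted electromagnet as a point magnetic dipole (a rough assumption that actually flatters Hollywood, since real coils fall off even faster up close), the on-axis field is B = μ0·m/(2πr³), so we can ask what dipole moment would be needed to reach 250 gauss at 10 feet. The comparison figure for a neodymium magnet is likewise a rough assumption.

```python
# Back-of-the-envelope check on the Breaking Bad scene: required dipole moment
# to produce 250 gauss (0.025 T) at 10 feet on axis, B_axis = mu0*m / (2*pi*r^3).
import math

MU0 = 4 * math.pi * 1e-7       # T*m/A
B_TARGET = 0.025               # tesla (250 gauss)
R = 10 * 0.3048                # 10 feet, in meters

m_required = 2 * math.pi * R**3 * B_TARGET / MU0
print(f"Required dipole moment: {m_required:.2e} A*m^2")

# For scale: a ~1 kg neodymium magnet is very roughly 100 A*m^2 (assumed figure).
print(f"Roughly equivalent to {m_required / 100:,.0f} kg of NdFeB magnet material")
```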

Marina Ground Fault Protection

In 2011, Article 555.3 of the National Electrical Code added a requirement for ground fault protection in marinas. The purpose of this requirement is to protect the public from electric shock drowning (ESD), of which there have been more than 100 reported cases in the US. It is likely that many more have actually occurred but were misidentified as conventional drownings. ESD occurs when ground faults in marina wiring, or more commonly on connected boats, cause return current to flow through the water. Even relatively small currents through the water can set up steep voltage gradients, which paralyze swimmers in the vicinity, who consequently drown. Tragically, some cases involve multiple drownings: an onlooker sees a swimmer in distress and, unaware of the danger in the water, jumps in to save the victim, but instead immediately becomes paralyzed and perishes along with the first victim.

The requirement continues to be misunderstood by the boating community, marina owners, and contractors. Myths abound, such as that the higher current setting of 100mA does not protect people, or that nuisance tripping is impossible to prevent. The fact is that a very large number of boats have ground faults that have gone undetected for years. When the ground fault detection circuit activates because a boat shows up and plugs in, this generally means the system is working, but users often conclude just the opposite because “my boat has been fine for 20 years and now this new system says there’s a problem?” Well, let’s just say it’s a good thing no one was swimming around that boat.

Please contact us for concerns about electric shock drowning, ground fault protection, or electrical code compliance in marinas.

Electrical Forensics In Action: The Boston Marathon Bombing

The application of technical science as an enabler for the delivery of justice is a beautiful little niche of engineering. The satisfaction that accompanies such work is one reason I was initially attracted to forensic engineering as a profession. And oh, what an honor it would be to work on the forensics team poring over every little shard of metal and wire at the horrific scene in Boston. Let’s hope the puzzle is completed before the cowardly perpetrator(s) get out of the country.

I was intrigued by a high resolution video clip I saw on CNBC of the battery and wiring that were apparently used to detonate the device. The battery is a sub-C 3000 nickel metal hydride cell by Tenergy. Huh? One has to ask why this battery was chosen by the individual who constructed this device instead of the far more common and easily obtainable options. First, we don’t know how many of the batteries were strung together, so I’ll avoid conclusions about voltage alone. But the NiMH battery is a very high energy density, high current, low internal resistance battery with a long cycle lifetime. It’s also rechargeable. Why any of these features were deemed necessary for one-time use in a destructive device with very low current requirements is puzzling. Further, NiMH cells tend to self-discharge very rapidly, 10-25% in the first day in some cases. This suggests the need to have charged them very recently prior to use. A lighter, smaller, cheaper, longer shelf life, easier to obtain, less traceable solution might have been… the ubiquitous square 9 volt that goes in your smoke alarm.

Tenergy NiMH battery used in one of the Boston marathon bombs.

Also somewhat curious is the wire choice. Did you see those wires? Was he planning to jump-start a car with that thing? They are enormous. I didn’t get to my screen capture button quickly enough to save a frame with the markings on it, but those looked like 12 or 14 AWG aluminum to me. That is some serious oversizing. Those sizes are good for roughly 20 amps. A simple low-explosive detonation device might require about 1/100th of that, and even if more, only for a matter of milliseconds. Aside from detonation circuitry, let’s say another 100mA for a timer or whatever control electronics were used. So presuming that one of the designer’s goals was to make the device small, lightweight, and concealable, why the gross oversizing of both wires and batteries?

All this may tell us nothing more about the guy than that he doesn’t do much electrical design. Or, maybe that’s just what the script in the terrorist cookbook said to use. Even so, if your goal in life is to carry out this dreadful act, wouldn’t you have done some research and optimized your equipment? But that’s ok; cluelessness builds a criminal profile just as competence does, and the unique attributes of the components used may lead us directly to the source. Wires have their manufacturing date and origin, among other information, printed on them. We electrical forensics engineers rely heavily on these markings for cases ranging from building code compliance to electrical fires. Those wires may have been used out of laziness alone because they were part of a preassembled battery pack. Either way, the manufacturer will figure out when those finished goods were made and where they were shipped, and a location and time of purchase will fall out of the investigation.

Still more information will come from the internals of the battery; its chemical state of charge will tell a story. Engineers will measure the open circuit voltage, then subject it to a test discharge and plot the resulting voltage and current curves. These test curves start from the end point of the discharge curve that occurred on the actual day of use, i.e., while the device was energized, waiting to explode. Further, we know the exact point in time when the actual use curves terminated, if we accept that the battery was open-circuited at the time of detonation. The test discharge curves will be fitted to manufacturer’s data after accounting for age and estimated cycling history. Finally, the chemical forensics guys will be turned loose to cut open the cell and confirm or modify lifecycle assumptions based upon electrolyte condition. The state of charge will help to piece together what happened in the hours and days before the incident. For instance, if the battery was fully charged an hour before bomb placement… hmmm, let’s look at hotels and coffee shops in a close radius. If the battery internals show signs of advanced cycling age, this may indicate that the bomber didn’t buy it at all, but instead chopped the battery out of some existing equipment, in which case finding an RC car (that’s where these batteries are often used) with a missing battery in a one-room apartment somewhere will be a nice piece of evidence to tag. It’s only a matter of time.

I had a family member running in the marathon, and fortunately she was uninjured. My sincere condolences go to all those and their families who were less fortunate. Now in the wake of this latest tragedy  the race to deliver justice is on. There’s no question that the forensic teams working tirelessly around the clock right now will win. The more disturbing question, that I’ll leave for the behavioral psychologists to opine on, is how to prevent future occurrences. When someone wants to inflict terror to support their extremist ideology, all they need is an event, a crowd, a cookbook, and a willingness to die. The first 3 aren’t hard to come by and history has made clear that martyrdom runs deep in the terrorist circles. I don’t know the answer, but I suspect it will involve all of us sacrificing some (more) civil liberties. Just assume your bag will be searched, the NSA is listening to your cell phone call, and your ISP is in bed with law enforcement. For most of us, it doesn’t really make much of a difference, and alternatives that don’t fully exploit IT intelligence are certain to be much less palatable.

Jim Fink, P.E. – Electrical Forensic Engineer

An Engineer’s Thoughts on the Superbowl Power Outage

Well, it’s been a few days and all the headlines still seem to be appended with “…cause not yet pinpointed.” That never stops the media from hastily putting together some scripts with all the right buzzwords in them, however. I have to admit, I was enjoying the announcers’ commentary on the outage as much as the game and the commercials (Skechers was my personal favorite) themselves. Said one, “As you can see a power surge has hit the stadium and now the lights are slowly getting brighter as they restore power.” Umm… not really, I thought. They restored power 20 minutes ago, and the HID metal halide lighting is just going through its normal restrike and run-up cycle. Also, a surge, normally defined as a multi-cycle moderate voltage increase, is a fairly rare power anomaly and was almost certainly not the cause. In a way the reporting reminded me of the Fukushima nuclear accident in Japan; listening to the news as a former nuclear engineer, the fabrications of respected news agencies were embarrassing. I could picture some producer behind the scenes telling the reporters, “Look, no one understands this stuff anyway, just be creative and fill in the gaps.” It’s not only the reporters troweling out these inaccuracies. Greg Boyce, CEO of Peabody, said the outage was a “convincing visual demonstration to counter those who’ve envisioned a world without coal.” Really, Greg? Regardless of your views on global warming, I’m pretty sure coal was not a root cause here.

Anyway, it is also unlikely that an overloaded feeder problem existed, as has been reported. In large part, the Superdome is either on or off. The lighting, the ventilation, the guy cooking hot dogs: none of them use any more power for the Super Bowl than for any other event. There is not much of a per-occupant electrical burden, and the place only has a fixed number of seats anyway. Further, design engineers are notoriously conservative in sizing wires and breakers, because they don’t have to pay for the oversizing and the consequences of undersizing are severe. In general, owners under-appreciate the cost savings of right-sizing, but the engineer will get slapped with back charges or even sued if things are undersized; we recently investigated just such a case and estimated corrective action at over 175% of the initial project cost due to demolition expense.

Ok then, so what was the cause and how might it have been prevented?

My guess, and without having personally investigated, that’s all it is, is that the incident was related to improper protective action coordination. In my experience, improper trip coordination is the biggest cause of unintended partial facility outages; I’ve never done a coordination study and not found instances of mis-coordination. In simple terms, the purpose of coordinated protective action in electrical distribution systems is to ensure that faults are isolated with minimal impact to the rest of the distribution. With thousands of pieces of equipment running in the Superdome, the probability of a fault developing in one of them is fairly large; we should expect it. What is important is that the branch circuit breaker feeding the faulted equipment has a trip curve that sits below and to the left, on a time-current plot, of the upstream feeder breaker(s). In plain English, this means that when something goes wrong electrically, the branch breaker trips first and isolates the fault without any power interruption to other loads. With metal halide (MH) stadium lighting, it is particularly important to rigorously apply coordination to protect against voltage dips on the panels feeding the lighting. Why? MH lighting (and other HID lighting) is vulnerable to even brief power interruptions, and once it’s off, as we all saw, it takes a long time to restart. Even a 15% voltage drop for a fraction of a second (and a fault in one decent-size motor can easily cause this) is enough to knock out MH lighting. These lights work by vaporizing mercury in an arc tube, and the tube temperature and pressure have to be allowed to decrease by natural cooling for 10 or more minutes before the arc can be re-struck. Manufacturers do offer “instant restrike” lights, but they are expensive. Once the arc is re-ignited, it can take another 15 minutes or so to reach full intensity as the cooled metal halides are fully re-vaporized. This causes the “sunrise” effect that we saw in the stadium.

Assuming my mis-coordination theory is correct, the design engineer’s coordination study should have involved plotting the lighting manufacturer’s time-voltage tolerance curve on the same plot as the adjacent breaker trip curves. This method makes it immediately apparent whether it is possible for any other branch breaker to “let through” a fault long enough to allow the MH lighting arcs to extinguish. If so, we adjust the trip settings, select different breakers, or even add fusing, and plot the curves again until we achieve adequate coordination. Notwithstanding code requirements, the rigor with which this process is applied must increase as the economic impact of losing the involved loads increases. In other words, if the HID lights in a Walmart parking lot go out for 30 minutes, it’s not a huge deal, but when a third of the country is watching an event that suddenly goes dark…
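The comparison lends itself to a quick script. The sketch below checks a branch breaker against a feeder breaker using a deliberately oversimplified inverse-time trip model and an assumed ride-through time for the lighting; the curve constants and the 0.05-second dip tolerance are invented for illustration, whereas a real study uses manufacturer time-current and lamp tolerance data.

```python
# Oversimplified coordination check: does the branch breaker clear a fault both
# before the feeder breaker and before the metal halide lighting drops out?
def inverse_time_trip(current_a, pickup_a, k=5.0):
    """Toy inverse-time trip curve: t = k / (I/Ip - 1)^2; never trips below pickup."""
    ratio = current_a / pickup_a
    if ratio <= 1.0:
        return float("inf")
    return k / (ratio - 1.0) ** 2

LIGHTING_RIDE_THROUGH_S = 0.05   # assumed voltage-dip tolerance before the arc drops

for fault in (200, 500, 2000, 5000):                    # fault current, amps
    t_branch = inverse_time_trip(fault, pickup_a=100)   # branch breaker (assumed)
    t_feeder = inverse_time_trip(fault, pickup_a=800)   # upstream feeder (assumed)
    ok = t_branch < t_feeder and t_branch < LIGHTING_RIDE_THROUGH_S
    print(f"{fault:>5} A  branch {t_branch:8.3f} s  feeder {t_feeder:8.3f} s  "
          f"{'coordinated' if ok else 'lights may drop'}")
```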

Hopefully the outage didn’t ruin your Super Bowl experience. If the 49ers had pulled it off, I think the NFL might someday have been using words like “legendary” and “famous” to describe this power outage and its alleged effect on the game. Personally, I didn’t mind it. I just feel bad for the facilities guys down in the pits of the stadium. Think about it: they were probably getting yelled at on their radios before the 7-second network delay allowed them to see it on the little TV in the electrical shack. Then they ran down the hall and reset the tripped breaker, let’s say, within 3 minutes. Then for the next 31 minutes, they were getting blasted with irate inquiries and demands, trying to explain things like “restrike time” to a bunch of execs. But in fact they had already done all they could, and it was just a waiting game. Now for weeks they will be interrogated by experts and investigators sniffing around for clues, writing their reports. Poor guys. Anyway, I sent my CV over to Doug Thornton, the Superdome manager, so maybe with a little luck I’ll get to meet them.

Electromagnetic Interference

With the proliferation of electronic equipment in our homes and businesses today has come a cacophony of electronic noise. This noise can exist in nearly any part of the electromagnetic spectrum, from DC through microwave and beyond, and we’ve all heard the hum of electromagnetic noise at one time or another. Perhaps the most problematic is noise in the 2.4 GHz range, which has the annoying side effect of interfering with our wifi networks. While this is certainly a real problem, and I have identified it as the source of legitimate interference in the past when diagnosing wifi network problems, I happen to also believe it is a convenient scapegoat used by IT folks at their wits’ end about why your wifi connection keeps getting dropped.

But either way, to illustrate a simpler case, I’ll explain a scenario I encountered with a client in the healthcare industry. A hospital contacted me saying they were experiencing intermittent noise on an ECG (electrocardiogram) machine so severe that for some patients the ECG output was completely unreadable by the doctor. As a result, the patient had to be moved to a different part of the hospital or asked to return at a different time when, hopefully, the machine would work. The hospital had already beaten up the vendor of the machine and replaced an expensive set of skin electrodes and associated wiring. They provided me with scrolls of logs indicating at what times of day and for whom the machine would not work properly. I examined these, looking for periodicity that could point us to a cycling load, a certain operator, etc. Nothing. It was random. I had an assistant lie down and get a “free” ECG from the nurse so we could see whether the offending electrical noise was currently present. Of course not. But we returned another day armed with a wide-range spectrum analyzer and a couple of different antennas, some directional. The spectrum was relatively quiet throughout, except for the 60Hz background electromagnetic radiation that you expect to find in any building with 60Hz electrical service. The intensity of the 60Hz energy varied a lot as we moved around, and there was definitely some present at the point of use. The ECGs I had been provided had a time scale that didn’t permit inspection of the waveform of the trace on the printout, but it turned out that we could zoom in on the trace while it was on the screen. The noise was periodic, and counting the divisions on the screen revealed it was in large part the same 60Hz spectral content I was seeing on the spectrum analyzer. From there it was a simple matter of systematically shutting off equipment one load at a time to see where the noise was originating. In this case it turned out to be a simple floor-standing lamp near the examining table. As supplementary lighting, it was on sometimes and not others, the staff reporting that they used it as needed, and it was also moved around the room to some extent. With my assistant again reluctantly on the ECG machine, we flipped the lamp on and off and observed the noise on the trace appear and disappear accordingly. There is nothing better than finding a very simple solution to a tricky problem.
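Counting screen divisions worked fine, but the same identification can be done numerically if you can export a sampled trace. Here is a small, hypothetical illustration using a synthetic signal; the sample rate and amplitudes are made up and are not data from the hospital case.

```python
# Identify the dominant interference frequency in a sampled trace with an FFT.
# The synthetic "ECG plus mains hum" signal below is invented for illustration.
import numpy as np

fs = 1000.0                                    # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)                # two seconds of samples
ecg_like = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # slow stand-in for the cardiac signal
mains_noise = 0.3 * np.sin(2 * np.pi * 60.0 * t)
trace = ecg_like + mains_noise                 # what the ECG front end "sees"

spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(len(trace), 1.0 / fs)

mask = freqs > 5.0                             # ignore the low-frequency cardiac content
peak = freqs[mask][np.argmax(spectrum[mask])]
print(f"Dominant interference component: {peak:.0f} Hz")   # prints ~60 Hz
```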

I would have liked to do a leakage current test on the lamp, but the hospital opted to discard it. There are specific guidelines for maximum leakage currents in patient care areas, which vary by proximity, equipment, and procedure. Electromagnetic noise is among the less important symptoms of leakage current; patient electrocution is the foremost. Healthcare providers should include periodic leakage current measurements in their electrical safety programs. There are specific ANSI, UL, NFPA and IEC standards governing allowable leakage values depending on many factors. If your business is experiencing EMI problems, or your healthcare facility needs an updated electrical safety program before the next JCAHO inspection, we can help.

New Ground Fault Detection Requirements for Marinas

The 2011 National Electrical Code (NEC) contains new requirements for ground fault protection in marinas. The reason for this addition is that in recent decades we have seen an estimated 100 electric shock drowning deaths. These incidents typically occur when individuals are swimming in fresh water marinas that, for any number of reasons, may have faulted distribution wiring or faulted boats connected to that wiring. This sets up conditions for fault current to flow through the water. Since fresh water is only a marginal conductor, even small currents can result in large regional voltage gradients, paralyzing and consequently drowning a swimmer. People are often surprised to learn how low a voltage gradient is necessary to cause paralysis when submerged in water. While I’ve not yet had the opportunity to investigate such an unfortunate incident from a forensic / electrical expert witness standpoint, I have heard protests arising from misunderstanding by contractors, owners, and code officials.

In one case, a contractor appealed for relief from the relevant electrical code section (2011 NEC 555.3).  The contractor and related parties claimed that proper equipment was not available, the threshold 100mA current was too high to protect swimmers from electrocution, and that continuity of power would be unmanageable for the marina owner.  An investigation revealed all of these claims to be false, and upon presentation of these facts to the state board of appeals, the variance was not granted.

Of particular importance is that people must remember there is not not a black and white threshold current above which electrocution is certain.  First, the body of water and surrounding Earth is effectively a semi-conductor of enormous cross section, and therefore has expansive spacial current densities and expansive iso-potential lines to match.  Any amount of ground fault current limiting will theoretically shrink, although maybe not eliminate, the “lethal zone” in the water.  Second, GFCI equipment in the 5-6 mA range used at points of utilization has this setpoint because the fault current is likely to be highly localized, and it is a balance against continuity of power (nuisance trips) and user safety.  The 100mA level prescribed by 555.3 is permitted at the feeder level.  In the multi-user environment of a Marina, this allows for a greater amount of diffuse ground leakage current without nuisance trips.  Yes, if a boat pulls in, rents a slip and trips the ground fault breaker immediately upon connection to shore power, it creates a “nuisance”, but let’s not forget it also prevents a potentially deadly condition.  Having spent time on a US Navy nuclear submarine in ports around the world, I can say that even at high quality facilities, shore power interruptions are routine and expected.  It’s just a part of life in the boat world. The reality is, the owner of the faulted boat needs to get the problem fixed.  One could envision the enterprising marina owner partnering with a local contractor to offer pier-side “marine electrician services” to remedy such situations for the benefit of all parties.
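To see why limiting fault current shrinks, rather than eliminates, the hazardous zone, here is a rough sketch that models the fault as a hemispherical current source in uniform water, a gross simplification of a real marina. The resistivity, fault currents, and distances are assumptions for illustration only, not design values.

```python
# Rough sketch: voltage gradient in the water around a concentrated fault point,
# modeled as a hemispherical current source, E(r) = rho * I / (2 * pi * r^2).
import math

RHO = 100.0   # fresh water resistivity, ohm-m (assumed; real values vary widely)

def gradient_v_per_m(fault_current_a, distance_m):
    """Electric field magnitude at a given distance from the fault point."""
    return RHO * fault_current_a / (2 * math.pi * distance_m ** 2)

# Compare an unlimited 5 A fault with one limited to the 100 mA feeder GFP level.
for amps in (5.0, 0.1):
    readings = ", ".join(f"{r} m -> {gradient_v_per_m(amps, r):.2f} V/m" for r in (1, 3, 10))
    print(f"I = {amps} A: {readings}")
```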

 

Manufacturing: China is No Longer the Obvious Choice

I recently took a trip to China with a client and our Hong Kong-based services broker to vet some manufacturers for several newly developed products. We ranked the candidates on the usual categories such as technical aptitude, quality control, worker conditions, and of course cost. That’s where the surprise would come.

On the ride from the Shekou Ferry terminal into Shenzhen, we cruised along a paved road that could easily have been in a major US city. Our host explained that not long ago it was a dirt road. In the 1990s, Shenzhen’s migrant workforce, and consequently its GDP, grew at a sharply increased pace. China eased foreign travel restrictions to the region in 2003, further contributing to economic growth. Workers streamed in from the countryside to take manufacturing jobs, and the bountiful labor supply kept manufacturing costs attractively low, until they weren’t low anymore. With the economic growth, the extreme polarization of wealth and class is giving way to the rise of a middle class. Workers are demanding better pay and better jobs. While still well below advanced manufacturing nations such as South Korea, wages in the Shenzhen region are currently on pace to double over a six-year period.

So what did we do? In this case, logistical simplification, reduced shipping costs, and the opportunity for more robust collaboration with the manufacturer outweighed the modestly lower overseas labor cost, and we selected a domestic candidate.

The next time we set up a manufacturing line abroad, assuming basic requirements are met, I’d like to have a look at other up-and-coming locations, including Indonesia and Vietnam, each of which has labor costs of 60% or less of those in China. Some African nations, India, and Mexico are other possibilities. Of course, these lower-cost options are not without concerns of their own, such as primitive supply chains and non-existent patent law resulting in shameless product rip-offs.