Root Cause Analysis

Sometimes you eat the bear; sometimes the bear eats you.

On those dark days when the bear eats you, it’s always worthwhile to understand how it happened, so as to reduce the probability of future occurrences. Sometimes this inquiry is called “lessons learned” or “root cause analysis.” The inquiry into what went wrong is always beneficial, even if it delves into politically sensitive areas. But the inquiry is nothing but wasted time unless the results are used to drive change. If there are no behavioral or process changes, then why bother?

That said, some level of lessons learned analysis is actually mandated by regulation. Readers know (or should know) that the Contractor Business Systems discussed in the Defense Federal Acquisition Regulation Supplement (DFARS) have “adequacy criteria” that must be complied with in order to have a business system determined to be adequate (or to be approved). Readers know (or should know) that a business system that is inadequate (or disapproved) may lead to undesirable consequences, including (in some cases), payment withholds.

With that thought in mind, readers should note that one of the adequacy criteria for a contractor’s Estimating System requires a lessons-learned inquiry. Contractors that have defense contracts that include the DFARS clause 252.215-7002 (“Cost Estimating System Requirements”) are required, by that clause, to implement an estimating system that (among other things) “require[s] management review, including verification of compliance with the company's estimating and budgeting policies, procedures, and practices; provide[s] for internal review of, and accountability for, the acceptability of the estimating system, including the budgetary data supporting indirect cost estimates and comparisons of projected results to actual results, and an analysis of any differences; [and] provide[s] estimating and budgeting practices that consistently generate sound proposals that are compliant with the provisions of the solicitation and are adequate to serve as a basis to reach a fair and reasonable price.”

Thus, in order to actually have an adequate and compliant estimating system, several layers of management review must take place—including “comparisons of projected results to actual results, and an analysis of any differences.” Granted, that’s not exactly a full-on “lessons learned” or “root cause” analysis, and there is no express requirement to perform post-mortems on failed proposal efforts to try to figure out what went wrong and to identify the required improvements to militate against the risk of future proposal failures—but we strongly suggest that such additional analyses are at least implied in the fundamental requirement. And if they are not implied, they should be. Why would a contractor choose not to perform analyses on failed proposal efforts in the hopes of winning work on the next opportunity?
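As a purely illustrative aside, the “comparisons of projected results to actual results, and an analysis of any differences” that the clause contemplates can be as simple as a variance check by cost element. The sketch below is our own minimal Python example; the cost elements, dollar amounts, and the 10 percent flag threshold are hypothetical assumptions, not anything the clause prescribes.

```python
# Minimal, hypothetical sketch of a projected-vs-actual comparison of the kind
# the estimating-system criteria contemplate. Cost elements, dollar amounts,
# and the 10 percent flag threshold are illustrative assumptions only.

projected = {"direct_labor": 1_200_000, "materials": 850_000, "overhead": 640_000}
actual    = {"direct_labor": 1_375_000, "materials": 790_000, "overhead": 705_000}

FLAG_THRESHOLD = 0.10  # flag any variance larger than 10% for written analysis

for element, estimate in projected.items():
    incurred = actual[element]
    variance = (incurred - estimate) / estimate
    note = "ANALYZE" if abs(variance) > FLAG_THRESHOLD else "ok"
    print(f"{element:13s} projected ${estimate:>11,}  actual ${incurred:>11,}  "
          f"variance {variance:+7.1%}  {note}")
```

The point is not the arithmetic, which is trivial; it is that the comparison is documented, recurring, and feeds back into the next estimate.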

The correct answer: a savvy contractor would never choose to close its eyes to what went wrong, and would scrutinize its process so that it could do better the next time around. Unfortunately, the correct answer is not given 100 percent of the time, and too many contractors persist in generating proposals in a less-than-optimal manner because that’s the way they’ve always done it; and when they lose, it’s never the process that’s blamed. Instead, too often it’s Tom or Dick or Joe or Jane who gets blamed, because it’s easier to blame a person than to admit that the process itself (which is the responsibility of leadership) was the real culprit.

So what does go wrong with the process? Too many times, the answer is that there are multiple reviews, each one bureaucratic and time-consuming; each one full of brilliant management insights that are offered far too late in the proposal process. There are Red Team Reviews and Green Team Reviews and Blue Team Reviews and Puce/Fuchsia Reviews. There are enough proposal reviews to fill an entire large Crayola crayon box of colors—and the last thing anybody on the proposal team wants is yet another review, even if it would be in their best interests to participate in it.

Too many times, those multiple management reviews are superficial or focused on the wrong issue. The customer solicitation tells the proposal team what’s important, whether it be price or technical or past performance or socioeconomic participation, but the reviewers tend to focus on what they know rather than what the customer is telling them is important. And it’s always price, isn’t it?

In thirty years of proposal preparation and review, price has always been a key topic. At what price will the other bidders submit, and where will we be against that number? What’s the customer’s budget? What’s the price pain point? How low can we go? Can we price this project at a loss and make it up on the overall program? (That last approach is what Boeing did on its successful KC-46 Tanker bid.)

And the price is never low enough, is it? How many times have you heard a marketing person say, “The price is too low—add more!” The answer is, sadly, almost never. Instead, the price is always too high and it needs to come down. “Price to Win” or P2W is a common refrain in most proposal reviews.

The thing is, if your technical approach is no good, then your price is irrelevant. When two technical approaches are judged essentially equivalent, the low-price bidder tends to win—especially in these days of “Low Price Technically Acceptable” (LPTA) competitions. But in a competition where the technical approach is weighted more heavily than price, an inferior technical approach will tend to lose, even if its price is significantly lower.
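To illustrate the distinction, here is a minimal Python sketch of the two selection rules described above. The rating scale, offerors, and prices are hypothetical and deliberately simplified; real source selections follow the evaluation criteria stated in the solicitation, not this toy logic.

```python
# Hypothetical sketch of two simplified selection rules: LPTA versus a
# best-value tradeoff in which technical approach outweighs price.
# Offerors, ratings, and prices are illustrative assumptions only.

RATING_ORDER = {"Unacceptable": 0, "Acceptable": 1, "Good": 2, "Outstanding": 3}

offers = [
    {"offeror": "A", "technical": "Outstanding",  "price": 23_000_000},
    {"offeror": "B", "technical": "Acceptable",   "price": 16_300_000},
    {"offeror": "C", "technical": "Unacceptable", "price": 19_600_000},
]

def acceptable(offers):
    """Offers rated at least 'Acceptable'; an unacceptable offer never competes."""
    return [o for o in offers if RATING_ORDER[o["technical"]] >= RATING_ORDER["Acceptable"]]

def lpta_winner(offers):
    """LPTA: among technically acceptable offers, the lowest price wins."""
    return min(acceptable(offers), key=lambda o: o["price"])

def tradeoff_winner(offers):
    """Tradeoff where technical outweighs price: the highest technical rating
    wins; price decides only between equally rated offers."""
    return max(acceptable(offers), key=lambda o: (RATING_ORDER[o["technical"]], -o["price"]))

print(lpta_winner(offers)["offeror"])      # "B": low price prevails among acceptable offers
print(tradeoff_winner(offers)["offeror"])  # "A": technical merit outweighs B's lower price
```

Note that Offeror C, the technically unacceptable bidder, never wins under either rule, regardless of its price.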

A good example in support of that assertion is found in a recent bid protest decision at the Government Accountability Office (GAO) in the matter of Raytheon Company (Space and Airborne Systems). (We used to work there and we still have friends there, so there is nothing happy about using SAS as the poster child in this article, but the fact is that it was this bid protest decision that sparked the article—so there you go.)

TL;DR Summary:

Raytheon Company, Space and Airborne Systems, of El Segundo, California, protests the issuance of a task order to Northrop Grumman Systems Corporation, of Linthicum, Maryland, by the Department of the Army, under request for task order execution plan (RTEP) No. R2-3G-0823, issued to holders of the Army’s Rapid Response-Third Generation multiple-award contract, for the purpose of prototyping, designing, integrating and testing a Synthetic Aperture Radar/Ground Moving Target Indicator (SAR/GMTI) long range radar. Raytheon alleges that the Army’s technical evaluation was unreasonable, principally because the Army conducted inadequate and misleading discussions.

We deny the protest.

Without rehashing the 10-page decision, let’s focus on one table that we think tells the story.

Offeror         Technical Rating   Past Performance Confidence   Revised Cost Proposal   Evaluated/Probable Cost
Northrop        Outstanding        Substantial                   $23,066,341             $23,066,341
Offeror B       Acceptable         Substantial                   $16,329,553             $16,329,553
Raytheon SAS    Unacceptable       Substantial                   $19,608,932             N/A (Not Evaluated)

The table above shows that Raytheon’s offered price was significantly lower than Northrop’s offered price, but its technical rating was dramatically worse: Raytheon received an “unacceptable” technical rating versus Northrop’s “outstanding” rating. Accordingly, Raytheon’s offered price was irrelevant. Indeed, Offeror B’s price was lower than both Raytheon’s and Northrop’s, but it didn’t win because Northrop’s “outstanding” trumped Offeror B’s “acceptable” technical rating.

As the GAO decision summarized—

In the SSDD, the source selection authority (SSA) concluded that Northrop’s technical proposal was significantly superior to that of Offeror B, due to a ‘combination of a significant strength and multiple strengths, higher level of detail, lower-risk approach, and exceptional understanding’ of the PWS requirements. In the final tradeoff analysis, the SSA concluded that Northrop’s and Offeror B’s proposals were superior to Raytheon’s, as Raytheon’s technical proposal was unacceptable. Between the remaining two proposals, the SSA concluded that Northrop’s proposal, when taken as a whole, was superior to Offeror B’s proposal and represented the best value to the government.

We don’t know why Raytheon’s proposal was found deficient; it’s hard to argue that Raytheon isn’t a global leader in the defense radar marketplace. So what happened? We can’t say. The point of this article is: Raytheon SAS needs to understand why it missed the mark. Raytheon SAS needs to conduct a candid and detailed analysis of its proposal process and the reviews of its technical approach. Raytheon SAS needs to do the root cause analysis if it wants to retain its top position in the defense radar market.

We would argue the analysis should take place even if Raytheon won the competition. We assert that a lessons-learned analysis should be a routine process step for every single proposal submitted. We believe it should be systemically ingrained into the proposal preparation process—and indeed the culture of every Federal contractor.

But on those days when the bear eats you, it is vital to understand how it happened, so as to avoid being somebody’s lunch again.
