THIS OPINION WAS INITIALLY ISSUED UNDER PROTECTIVE ORDER AND IS BEING RELEASED TO THE PUBLIC IN REDACTED FORM ON NOVEMBER 9, 1994

_______________________________

DENIED: October 27, 1994

_______________________________

GSBCA 12912-P

GRUMMAN DATA SYSTEMS CORPORATION,
Protester,

and

FEDERAL COMPUTER CORPORATION,
Intervener,

v.

DEPARTMENT OF THE NAVY,
Respondent,

and

INTERGRAPH CORPORATION,
Intervener.

Kenneth S. Kramer, Lynda Troutman O'Sullivan, James M. Weitzel, Jr., Michael L. Waldman, Deneen J. Melander, Anne B. Perry and Catherine E. Pollack of Fried, Frank, Harris, Shriver & Jacobson, Washington, DC, counsel for Protester.

Gerard F. Doyle, James D. Bachman, Scott A. Ford and Alexander T. Bakos of Doyle & Bachman, Washington, DC, counsel for Intervener Federal Computer Corporation.

Ellen D. Washington, M. Elizabeth Hancock, David P. Andross, Penny Rabinkoff, Thomas L. Frankfurt, Stirling Adams, Lis B. Young and David A. Lee, Department of the Navy, Washington, DC, counsel for Respondent.

Rand L. Allen and Philip J. Davis of Wiley, Rein & Fielding, Washington, DC, counsel for Intervener Intergraph Corporation.

Before Board Judges DEVINE, VERGILIO, and DEGRAFF.

DEVINE, Board Judge.

The Navy sought sophisticated computer aided design equipment for use in developing and maintaining its ships and aircraft. Its award of a contract to the high technical/low price bidder was challenged on the principal ground that the source selection authority had not followed the recommendation of its impact advisory working group, whose analysis showed protester to be equivalent to or slightly better than awardee technically in some areas and a better value to the Government financially. We deny the protest.
Findings of Fact

On July 15, 1991, the United States Navy issued a request for proposals (RFP) soliciting computer aided design (CAD) equipment and support services to assist the Navy in performing its design, engineering, manufacturing and repair work on its ships and airplanes more effectively and less expensively. Thus, the procurement also included computer aided manufacturing (CAM) and computer aided engineering (CAE) equipment and services. Protest File, Exhibit 4A; Transcript at 56-57. The RFP required hardware, software, maintenance, training, documentation, and support services. Protest File, Exhibit 4A. The solicitation looked to a firm, fixed-price requirements contract for an initial three-year term, with options for up to nine additional years. Id. The estimated value of the contract was $600 million. Transcript at 288. The contract was designed to allow the Naval Air Systems Command, the Space and Naval Warfare Systems Command and other Department of Defense activities to obtain CAD/CAM/CAE hardware, software and support services. Protest File, Exhibit 4A.

The solicitation required offerors to propose various workstations, peripherals, and software to perform such work as geometric modeling, graphics displays, drafting applications and drawing generation, aero-mechanical engineering analysis, avionics system design and analysis, and manufacturing and aerospace engineering applications. Protest File, Exhibit 4A. On a practical level these tasks ranged from producing printed circuits to guiding the cutting tool on an engine lathe while it machines an airplane part. Id.

The RFP contained a large number of Minimum Mandatory Requirements (MMRs). Any failure to meet an MMR would eliminate an offeror from consideration for award. Protest File, Exhibit 4A. There were more than 4,000 MMRs. Transcript at 841. The RFP also contained a number of what it denominated Other Technical Features (OTFs).
These were optional features that could raise an offeror's technical score if successfully proposed. Id. All offerings were required to be commercial-off-the-shelf (COTS) or commercial products modified to adapt them to the contract (MCOTS). Protest File, Exhibit 4. Offerors were required to certify that offered products were in fact COTS or MCOTS. Protest File, Exhibit 4A.

Section M of the RFP set out the basic rating scheme. Technical Merit was of first importance, followed by price. Id., M3. Technical Merit included Technical Excellence (75%) and Management (25%). Id. Technical Excellence, in turn, was based on an offeror's Operational Capability Benchmark Demonstration (OCBD) score (80%) and its Technical Proposal score (20%). Id. The RFP set out price evaluation procedures designed to determine the lowest overall estimated contract life price. The goal was price realism. Protest File, Exhibit 4A.

The technology proposed by the offerors was required by the RFP to be the technology supplied under the contract. However, understanding the pace of change in the computer industry, the Navy included a contract clause dealing with technology improvements (the TI clause). The purpose of the clause was to allow new technology to be added during contract performance. The clause indicated that, following award, the Navy might solicit, and the awardee was encouraged to propose, improvements to the hardware, software, or other contract requirements which would represent a technological advantage to the Navy, thus saving it money or energy, improving performance, or conferring some other advantage. Id.

This clause also contained a concept called a computed price differential (CPD) which, in essence, represented the ratio of a proposed price to a commercial list price. The corresponding price discount was thus a sort of obverse of the CPD: a CPD of .4 meant that the offeror was proposing a discount of .6, or 60%, off the commercial list price. Transcript at 850.
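The ratio-and-discount arithmetic of the CPD can be sketched as follows; the prices and the helper names are hypothetical illustrations, not figures or terms from the record.

```python
# Illustrative sketch of the computed price differential (CPD) arithmetic
# described above. All prices here are hypothetical.

def computed_price_differential(proposed_price: float, list_price: float) -> float:
    """The CPD is the ratio of a proposed price to the commercial list price."""
    return proposed_price / list_price

def discount(proposed_price: float, list_price: float) -> float:
    """The discount is the obverse of the CPD: one minus the ratio."""
    return 1.0 - computed_price_differential(proposed_price, list_price)

# A proposed price of $40,000 against a $100,000 commercial list price
# yields a CPD of .4, i.e., a discount of .6, or 60%, off the list price.
print(computed_price_differential(40_000, 100_000))  # 0.4
print(round(discount(40_000, 100_000), 2))           # 0.6
```

On this arithmetic, a higher CPD on future technology means a smaller discount off the commercial list price, which is the nub of protester's Count 26.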
The purpose of the clause was to make sure that the Government received the same relative price discounts for new technology as for the original. Id. at 319-20. It also made it harder for an offeror to bid low on its original offer and thereafter raise its prices unreasonably on new technology in later years. Id. at 119. The CPD concept applied to all new technology except the twenty-nine functional subcategories listed in Attachment 10 to the RFP. Id. at 126-27.

Proposal Evaluation

Proposals were received from protester Grumman, awardee Intergraph, Electronic Data Systems (EDS), and intervener Federal Computer Corporation (FCC). A Source Selection Evaluation Board (SSEB) compared the various offerors' proposals to the many requirements of the RFP over a period of many months. Following the initial evaluation of proposals each offeror was required to perform an OCBD test lasting five days, which evaluated each offeror's ability to comply with the RFP requirements. Protest File, Exhibit 4A. The OCBD tested selected OTFs and approximately 1,100 of the RFP's approximately 4,000 MMRs, using 100 tests covering each of the specification areas. Protest File, Exhibit 6J. The OCBD was a live demonstration of an offeror's hardware and software solutions designed to test the "real world" usage of equipment and thus to test the offeror's systems in the same way that the Government intended to utilize the systems after contract award. Protest File, Exhibit 4A at AT2-12e. Offeror compliance with untested MMRs and OTFs was determined from a review of technical literature and other commercially available literature submitted by the offerors.

Testing for compliance with the MMRs (and offered OTFs)

All offerors successfully completed the OCBD tests. Grumman received a score of 359.68 points, while Intergraph received 429.207 points. The SSEB concluded that all offerors met the minimum mandatory requirements. Protest File, Exhibit 6F.
On the technical proposals, Grumman's score was 163.334 points while Intergraph's was 150.687 points. Protest File, Exhibit 6A. The SSEB also evaluated the offerors' management proposals. Protest File, Exhibit 6F. Protester scored 243.750 and Intergraph 241.792 (after correcting for a 3-point initial error). Id., Exhibit 6A. The overall technical score for Grumman was 636.40, and for Intergraph 680. Id., Exhibit 6C.

The SSEB reported the results of its 18-month analysis to the Source Selection Advisory Committee (SSAC). The SSAC analyzed the SSEB's voluminous report and compared the results of the various offerors' ratings. Although not required to do so under conventional procurement rules, since Intergraph had already been rated the high technical/low cost offeror, the SSAC nevertheless appointed an Impact Analysis Working Group (IAWG), which was directed to assess the effect that the various proposed CAD systems would have on their expected users. The IAWG was to identify significant differences among the offerors' proposals (discriminators), assess their relative rankings, quantify those discriminators that could be quantified as to their value to the Navy, and report their findings to the SSAC. Half of the membership of the IAWG was made up of people who were already involved in the procurement as members of the SSEB or as participants in the OCBD. The other half represented various Navy user groups from scattered locations. The SSAC examined the IAWG's report, considered the differences among the various proposals, assigned ratings, and reported to the Source Selection Authority (SSA), who selected Intergraph as the awardee on a "best value" basis, including price risk, and not merely on technical point scores and calculated prices. Transcript at 659.

The Protest Grounds

The original protest complaint contained fifteen counts.
Three supplemental protest complaints added four additional counts, making nineteen, and a final amended protest complaint withdrew thirteen counts, including all but three of the original counts, and added eight additional counts, making a total of fourteen. A single count was withdrawn during trial. A summary of the remaining counts, in rough inverse order of importance, follows:

Count 27. The RFP required that offerors certify that their hardware and software met the minimum mandatory requirements of the RFP. Protester alleged that Intergraph's certifications of its Simulation Accelerator (Count 1), its Digital Functional Simulator (Count 20), and its Digital Fault Simulator (Count 21) were false in that none of the three met the minimum requirements of the RFP.

Count 26. Protester alleged that because Intergraph is an equipment manufacturer (Grumman is not), it unfairly used its control of its prices to reduce them on the older, less desirable equipment it offered initially, and in the process raised its computed price differentials, which, in effect, lowered its discounts on new technology to be furnished in future years under the contract. Protester finds this to be improper.

Count 17. Protester alleged that Intergraph violated Section 27 of the Office of Federal Procurement Policy Act by improperly obtaining information on either protester's proposal or on its OCBD test results.

Counts 13 and 24. Count 13 alleges that since Intergraph labelled its price lists with a proprietary legend which prohibited disclosure outside the Government, the listed products were therefore not available as commercial off-the-shelf items, since none could be sold commercially without disclosing the price. Protester also suggests that some of Intergraph's offerings were never actually sold commercially, and some were not COTS because they were being modified at the time of proposal submissions.
Count 24 simply alleges that Intergraph's certification that these items were COTS was false.

Count 14. In this count protester alleged, and the Navy apparently concedes, that protester's price was improperly raised by $1,168,593 due to an incorrect analysis based on the Trade Agreements/Buy American Acts.

Counts 16 and 25. Count 16 alleges that Intergraph failed to have its database language, SQL, validated by the National Institute of Standards and Technology (NIST) for compliance with FIPS PUB 127, an RFP requirement. Count 25 alleges that Intergraph deceived the Navy in connection with the NIST tests.

Count 23. This count alleges that Intergraph did not meet three of the minimum mandatory requirements which dealt with geometric modeling and graphics applications and which appear in Section C8 of the RFP.

Counts 1, 20, and 21. All of these counts deal with Intergraph's alleged failure to provide three different simulation tool sets, each of which will simulate at least one million logic primitives and perform at least one million evaluations per second. Count 1 deals with a Simulation Accelerator; Count 20 with a Digital Functional Simulator; and Count 21 with a Digital Fault Simulator.

Count 22. Protester attacks the Navy's best value analysis because, among other things, it did not give enough weight to the report of the IAWG, a part of the Source Selection Advisory Committee.

The FCC counts. FCC adopted most of the foregoing counts and added an allegation that the Navy's proposal evaluations were flawed, and an allegation that the Navy had failed to hold discussions with FCC on its proposal weaknesses.

Discussion

Count 27. A number of protester's complaints with respect to the procurement at issue deal with Intergraph's certifications that certain items met the specifications when, in protester's judgment, they did not. If protester cannot prove that the underlying facts are as it alleges, then the certification issue cannot stand.
Thus in Count 27 protester says that Intergraph's certifications with respect to its Simulation Accelerator (Count 1), its Digital Functional Simulator (Count 20), and its Digital Fault Simulator (Count 21) were all false in that it knew that none met the requirements of the RFP. This count is, of course, derivative of the counts that directly attack the status of the offered simulators as MMRs. Protester has produced no evidence that Intergraph set out to mislead the Navy, nor any evidence to show that Intergraph's proposals were made other than in good faith. As we shall see hereafter, Grumman has not established that Intergraph's equipment fails to meet the MMRs. Count 27 fails for want of proof.

Count 26. Protester subtitles this count "bait and switch." It alleges that because Intergraph is an equipment manufacturer it used its control of its prices to reduce them on the equipment it initially offered, to offer older equipment, and to raise its computed price differentials, which, in effect, lowered its discounts on new technology to be furnished in future years under the contract. The "bait," apparently, was the lower priced initial equipment; the "switch" was the lower discounts (higher CPDs) on new technology, the acquisition of which was made more likely by the fact that Intergraph was offering older (and cheaper) technology to begin with. Protester finds this to be unfair, but it has produced no evidence to support either the scheme's existence or its unfairness. There is also the fact that the equipment offered had to pass the OCBD testing and the scrutiny of the SSEB in many other areas. Since Intergraph's equipment generally scored higher than protester's, the theory is immediately suspect. So far as price control is concerned, there is nothing illegal or invidious in a manufacturer setting its own prices. In any event the prices set must still pass price risk analyses. We find no merit in this count.

Count 17.
Protester alleges that Intergraph violated Section 27 of the Office of Federal Procurement Policy Act by either obtaining Grumman's proposal or some part thereof, or by observing Grumman's OCBD test, or as a result of some communication by the Navy. It makes the following statement to explain its charge:

[I]n Interrogatory No. 6, Intergraph requested that GDS [Grumman] identify all documents that GDS contends meet the requirement in Section C6.6.a to document the data required to mathematically define all entities. This question relates to a section in GDS's proposal in which GDS inadvertently omitted a citation to the above-described data. GDS believes that, as of the time this interrogatory was propounded, counsel for Intergraph had not yet been provided with a copy of GDS' proposal. GDS thus believes and hence alleges that Intergraph received and is in possession of GDS' proposal or portions thereof, or is in possession unlawfully of the contents of GDS' proposal, in violation of Section 27 of the Office of Federal Procurement Policy ("OFPP") Act and its implementing regulations.

Grumman's Amended Protest at 16.

In other words, protester believes that because Intergraph asked about certain data that should have appeared in protester's proposal but was omitted by accident, and did so before the time protester thinks Intergraph could legally have possessed that proposal, Intergraph therefore had illegal advance knowledge of GDS's proposal. Protester has presented no evidence to support this convoluted conjecture. Protester arrives at a similar unsupported conclusion with respect to Interrogatory No. 13, which protester claims shows access by Intergraph to protester's OCBD results before such access could legally have been had. Protester has adduced no evidence to support its version of the facts. We find no merit in protester's speculations in these areas.

Count 13.
This count alleges that since Intergraph labelled its price lists with a proprietary legend prohibiting their disclosure outside the Government when it submitted its proposal, the listed products were therefore not available as commercial off-the-shelf items, since none could be sold commercially without disclosing the price. This is simply irrelevant. Intergraph's confidentiality restriction bound the Government, but it did not limit Intergraph's freedom to disseminate its price lists to prospective customers. The restriction was merely for the purposes of its RFP submissions, which took the form of selected pages from its commercial price books. Protest File, Exhibit 23. Protester also suggests that some of Intergraph's listed commercial items were never actually sold on the commercial market, but it has presented no evidence on the issue. In any event no offeror was required to show actual sales data, merely product announcements offering the items for sale and similar documents, which requirement Intergraph fully complied with. Protest File, Exhibits 21, 23, Vol. I. Finally protester suggests that some of Intergraph's offered products were not COTS items because they were being modified at the time of proposal submission on July 7, 1992. Protester's evidence on this point indicated only that porting was being done. Transcript at 805. The Navy did not consider porting to be a modification. Transcript at 834-35, 966. In any event, the solicitation permitted modifications to COTS products. Protester has not shown that the modifications were not compliant. Transcript at 834. We find no merit at all in this count. Count 14. In this count protester alleged, and the Navy apparently concedes, that protester's evaluated price was improperly raised by $1,168,593, due to an improper analysis based on the Trade Agreements/Buy American Acts. 
We hold that a proper analysis would have reduced protester's price by the amount of the error, but the lower price would not have changed the relative rankings of the offerors.

Counts 16 and 25. The RFP required that the database language SQL be validated by the National Institute of Standards and Technology (NIST) for compliance with FIPS PUB 127. Protest File, Exhibit 4A. The NIST test was required to be run prior to contract award. Id. However, a NIST test failure would not prevent award. Id. Offerors had six months following award to correct any deficiencies. Id. Thereafter any failure to correct deficiencies would not breach the contract but might subject the contractor to liquidated damages. Id. Offerors handled this requirement by submitting letters showing that a NIST test was scheduled. Intergraph had tentatively scheduled its test for November 1992. Transcript at 1160. In September 1992 Intergraph's Database Systems Manager postponed the NIST test because Intergraph was scheduled for its OCBD tests during five days in November 1992, and the department that handled both operations was shorthanded. Transcript at 1161. The database manager, Mr. Robert Larsen, testified that he simply forgot to reschedule the NIST test until after the filing of the instant protest. Transcript at 1163, 1165. Intergraph successfully completed the NIST test a little more than a month after award. Id. at 1165.

Count 16 deals with the foregoing. In Count 25 protester alleges that Intergraph's acts with respect to the NIST test and its scheduling were deceitful and should disqualify Intergraph from award. We find Intergraph's acts to have been inadvertent. We also regard Intergraph's failure to undergo the NIST tests before award as a minor matter, because by the terms of the RFP a failure on this point would not prevent contract award, a failed offeror would have six months to correct the deficiency, and even a failure to make corrections would not result in a contract breach.
Intergraph's evidence shows the failure to have been inadvertent, which negates any inference of deceit or fraud. Protester has produced no evidence to support its contrary view.

Count 23. This count alleges that Intergraph's proposal did not meet three MMRs dealing with geometric modeling and graphics applications which appear in Section C8 of the RFP. The first is the requirement for a CAD system interface which allows "two way associativity between the ADS [Advanced Design System] solid model and its drawing (i.e., if a dimension is changed then the associated solid model shall reflect the same change and vice versa)." Protest File, Exhibit 4A. The second is a requirement that a system user be able to "select and create a part from a pre-defined standard table or define a part by entering the part attributes required by the part generator." Id. The third is the requirement that the proposed system "provide a user definable grid plane representing intervals of measurement which shall be overlaid on the model, at the user's grid option" and that the "graphics system shall support Cartesian and polar coordinate grid systems." Id.

Protester presented no evidence with respect to the second MMR described above, which deals with "parts" and "part generators," nor are these items mentioned in its brief. We presume that its complaints of deficiencies in this area have been abandoned, but in any event they fail for lack of proof.

With respect to the requirement of two-way associativity between a three-dimensional model and its associated drawings, two witnesses testified, one on behalf of the protester: Dr. David Albert, and one on behalf of the Navy: Mr. Ben Kassell. Dr. Albert is an electrical engineer with a doctorate in nuclear physics who presently heads his own consulting firm which is active in the CAD/CAM field. He has extensive experience in designing and developing CAD/CAM systems. Mr.
Kassell is a mechanical engineer and a Navy employee who was one of the technical evaluators for the procurement in question.

Dr. Albert's familiarity with the procurement at issue stems from his study of those sections of the RFP dealing with associativity and Cartesian and polar coordinates, plus Intergraph's technical response to those sections, and the technical documents, including computer commands, describing the I/EMS 2.0 system proposed by Intergraph. Transcript at 558. Dr. Albert testified to considerable experience in designing and developing CAD/CAM systems, but he has never used the system proposed by Intergraph. He was questioned closely on the various parts of Intergraph's response to the RFP, item by item. It was his opinion that Intergraph's proposal allowed only one-way associativity, based on the document analysis which he had made during the three days preceding his testimony. Transcript at 594. However, he conceded quite candidly under cross-examination that those who developed the system and those who used the system would know more about it than he did. Id. at 595.

Mr. Kassell, on the other hand, testified that he had operated CAD/CAM systems of one kind or another since 1981. Transcript at 897-901. Since January of 1993 he had spent 50-80% of his time operating I/EMS 2.0, the system proposed by Intergraph. Transcript at 901. His opinion considered the relevant documents, as did Dr. Albert's, but it was also based on his experience as a user, which Dr. Albert's was not. Transcript at 895. Mr. Kassell testified flatly that the I/EMS system allowed the model to update when a dimension on a drawing was changed and the drawing to update when a dimension on the model was changed. Id. He explained on both direct and cross-examination how this was accomplished using the I/EMS system. Neither expert lacked credibility, but Mr. Kassell had the advantage of experience with the system.
We hold that protester has not met its burden to prove by a preponderance of the evidence that Intergraph did not meet this MMR.

The same two witnesses testified with respect to the third MMR, which deals with grid systems. Again Mr. Kassell, the experienced user, was matched against Dr. Albert, the theoretician. Again we are dealing with the I/EMS system proposed by Intergraph. The RFP defines a grid as "a network of uniformly spaced points or crosshatch displayed for exactly locating and digitizing a position." Protest File, Exhibit 4A, Attachment 1. Dr. Albert testified that he had not seen this definition. The RFP, under the heading "C8.3.5. Grids," also required, in pertinent part:

a. The system shall provide a user definable grid plane representing intervals of measurement which shall be overlaid on the model, at the user's option.

b. The graphics system shall support Cartesian and polar coordinate grid systems.

c. . . .

d. The grid which displays radial (polar) coordinates shall allow for user control over grid point distance, number of degrees per Specification, and number of Specifications in the grid.

e. The working grid shall be used to force input to be placed at grid intersection points (e.g., lock-on to grid point). This lock on will be turned on or off by the user.

f. The display of the grids shall not be required for the lock-on feature to operate, and the lock-on feature can be selected "off" while the grids are displayed.

g. The dimension of the grid shall be determined from user direction as a multiple of the basic unit of measure in the data base. The distance between grid points shall always be in accordance with the basic unit, regardless of the magnification to which the screen image has been subjected. In the event that grid density is too high, a message shall be output to the display device and the grid shall be echoed.

Protest File, Exhibit 4A. All of the lettered paragraphs are MMRs except paragraph d., which is an OTF.
There was no dispute that Intergraph's proposal satisfied the requirement for a Cartesian coordinate grid. It was the polar grid and its associated requirements that were in dispute. Protester's method of showing Intergraph's non-compliance with the polar grid requirements was to direct Dr. Albert's attention to the documents cited in Intergraph's proposal to support its position that its I/EMS system met the requirements of the RFP. With respect to lettered paragraph e., which deals with lock-on to grid point, Dr. Albert was of the opinion that Intergraph's system did not meet the requirements of the RFP because the method involved "was cumbersome." Transcript at 586-87.

On brief, protester takes issue with Intergraph's solution to the requirement of lettered paragraph f. dealing with the independent operation of the grid lock-on feature and the grids themselves. Protester concedes that the Grid Lock and Grid On/Off commands are independent of each other for the Cartesian grid. However, protester says that with respect to a polar grid Intergraph's system requires a change to the color of the grid to match the color of the background in order to make the grid disappear. Transcript at 925-26. This, says protester, violates Section C8.3.5.h. of the RFP, which reads: "Imposition of the grid shall not affect either a temporary or permanent version of the drawing." The violation would occur, according to protester, when, and if, the background color is again changed, thus again exposing the grid. According to Mr. Kassell this combination of circumstances is unlikely to occur. Transcript at 926-27. Its occurrence is under the control of the operator, in any event, and can be avoided. It is also a matter not included in any of the numerous protest grounds, nor even mentioned at hearing. It first appears in protester's post-hearing brief and is reluctantly considered here only because it deals with the polar grid area. We find no substance to protester's complaint in this area.
We find that the described method of turning the polar grid on and off does not "affect either a temporary or permanent version of the drawing."

Counts 1, 20, and 21. Count 1 deals with the RFP requirement that each offeror supply what is called a Hardware/Software Simulation Accelerator toolset which will "simulate at least one (1) million logic primitives, and perform at least one (1) million evaluations per second." Protest File, Exhibit 4A. Count 20 deals with what is called a Digital Functional Simulator and Count 21 with a Digital Fault Simulator, both of which must meet the same performance requirements. Id. Simulation is a method which allows a design engineer to evaluate how an electronic circuit will behave in response to certain stimuli before the circuit is actually implanted on a silicon chip. Transcript at 1015.

Protester says that the equipment Intergraph offers to satisfy these requirements cannot "simulate at least one (1) million logic primitives, and perform at least one (1) million evaluations per second" as protester understands those phrases. Intergraph and the Navy say that protester misinterprets those phrases, and that when properly understood Intergraph's equipment meets the requirements in all respects. During the proposal preparation period several questions were directed to the Navy concerning the simulation requirement. Two pertinent questions and their answers follow:

QUESTION NUMBER: 0915

SECTION REFERENCE: Specifications C11.3.2.f, C11.3.3.k and C11.3.4.c

QUESTION: How does the government define "evaluations per second"? We assume that when any device input changes, an evaluation is recorded. For example, a 2-input gate with both inputs connected to the same net to form an inventor (sic) would record two evaluations for each transition of the net driving the nand inputs. Is the above assumption correct?

GOVERNMENT RESPONSE: The offeror's assumption is correct.
An evaluation occurs when the simulator examines an element in the design and determines the value of its outputs based on the values of its inputs. Protest File, Exhibit 4A at Supp 7-60.

QUESTION NUMBER: 0916

SECTION REFERENCE: Specifications C11.3.2.f, C11.3.3.k and C11.3.4.c

QUESTION: Does the (1) million evaluations per second apply to all model types (behavioral, functional, gate, switch) or may this specification be satisfied if only one of these model types meet[s] the requirement? We assume that one type will suffice. Is our assumption correct?

GOVERNMENT RESPONSE: The one (1) million evaluations per second applies to all model types.

Protest File, Exhibit 4A at Supp 7-60, 61.

The differences between the principal parties on this issue arise entirely from differing views as to what the operative words "(1) million evaluations per second" mean. Experts on either side testified to diametrically opposed views.

Mr. Robertson, Intergraph's expert, testified that there are four core operations which make up the algorithm of the simulation process. Step A is to gather all of the necessary inputs making up the logic primitive and store them "someplace where you can get at them again in the computer." Transcript at 1039. Mr. Robertson calls this step input evaluation. Step B is to apply the mathematical operation that the model (which can be classified as behavioral, functional, gate or switch) requires in order to see what the output might be. Mr. Robertson called this operation functional appraisal. Transcript at 1042. Its result is some sort of change in output, which is called an event. Step C is called event scheduling, which is the process of determining when the change in output is going to be released downstream in the model. Id. at 1044. Step D is the process called update propagation, by which the changed value is sent to downstream inputs. Id. at 1045.
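The four steps Mr. Robertson described correspond to the inner loop of a conventional event-driven logic simulator. The following is a minimal sketch of that loop for a netlist of 2-input NAND gates; the gate and net names are invented for illustration, and the sketch depicts the general algorithm only, not the I/EMS product.

```python
from collections import defaultdict

# Minimal event-driven logic simulator for 2-input NAND gates, sketching
# the four core operations described in the testimony: (A) input
# evaluation, (B) functional appraisal, (C) event scheduling, and
# (D) update propagation. Gate and net names are invented for illustration.

def simulate(gates, stimuli, delay=1):
    """gates: {name: (input_a, input_b, output)}; stimuli: [(time, net, value)].
    Returns the settled value of every net (unknown nets default to 0)."""
    values = {}
    pending = defaultdict(dict)                  # time -> {net: new value}
    for t, net, v in stimuli:
        pending[t][net] = v
    while pending:
        t = min(pending)                         # advance to the next event time
        for net, v in pending.pop(t).items():
            values[net] = v                      # Step D: update propagation
        for a, b, out in gates.values():
            ins = (values.get(a, 0), values.get(b, 0))  # Step A: input evaluation
            result = 0 if ins == (1, 1) else 1          # Step B: functional appraisal
            if values.get(out) != result:               # an output change is an "event"
                pending[t + delay][out] = result        # Step C: event scheduling
    return values

# Driving both inputs of one NAND gate high drives its output low.
nets = simulate({"g1": ("a", "b", "y")}, [(0, "a", 1), (0, "b", 1)])
print(nets["y"])  # 0
```

In this sketch, counting only the Step A operations yields a much higher rate than counting complete passes through Steps A through D, which is the nub of the interpretive dispute discussed below.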
The difference between the parties is that Intergraph and the Navy interpret "one million evaluations per second" to mean one million Step A's, while Grumman interprets the phrase to mean one million Steps A, B, C, and D combined: the complete algorithm. Intergraph's view is based on the fact that the "model" referred to in Step B can be as simple as a NAND gate or as complex as a Boeing 747 (a behavioral model). Mr. Robertson testified on this point: "[G]ate level models usually take only a couple of computer instructions. Functional [models] probably take dozens to tens. Behavioral [models] and VHDL, there is really no upper bound, as I said before, it could be few, it could be millions, it depends on the circuit that you are trying to describe with that particular model. So functional appraisal is really -- you can't compute ahead of time unless somebody tells you how long it is going to take to do." Transcript at 1042. It is for this reason that Mr. Robertson interprets "one million evaluations per second" as one million Step A's because: [I]f you are going to achieve the million evaluations per second, you can't include the functional appraisal (Step B) in that value because nobody doing behavioral simulation has a computer that goes fast enough to get any size of behavioral model into a microsecond, it is just not possible. Transcript at 1043. Mr. Rene Haas, Intergraph's second expert witness, who was not an Intergraph employee, interpreted the phrase in question in the same way as Mr. Robertson and for the same reasons. Transcript at 1097-99. Protester takes an entirely different view. To restrict the meaning of the phrase to Step A only is to ignore the second sentence of the Navy's answer to Question 0915, according to protester. The fact is that the answer to the question as a whole supports the views of both sides.
The first sentence supports Intergraph's interpretation that only Step A is involved, and the second sentence supports protester's interpretation that all 4 steps are involved. The difficulty with Intergraph's view is that under its interpretation the requirement does not demand very much, merely the capacity to label and store a million inputs per second. The difficulty with protester's view is that under its interpretation the requirement demands so much that it is virtually impossible to meet under many circumstances. The Navy found that all offerors met this requirement. The difference between the parties is simply a difference in the interpretation of the phrase "one million evaluations per second." The evidence shows only that the experts are split on the proper interpretation of this phrase and therefore split on their opinions as to whether Intergraph met the requirement or not. The parties' interpretations are plausible, although the Navy's is less restrictive of competition than protester's. We do not have to resolve the conflict. In this circumstance protester has not shown by a preponderance of the evidence that Intergraph and the Navy are wrong in their interpretation, and thus protester cannot prevail. Count 22. Protester entitles this count: Irrational And Improper Best Value Analysis. It lists three reasons for doing so. The first is based on the fact that the IAWG, working at the direction of the SSAC, issued a report on the impact on the Navy of the various offerors' proposals which was favorable to Grumman but the final award went to Intergraph. The second was the fact that the price risk analysis performed for the Navy by Performance Engineering Corporation (PEC) did not take into consideration what protester describes as "the significant price risk to the Navy posed by the Computed Price Differentials ("CPD's") proposed by Intergraph." Intergraph's CPDs were significantly higher than Grumman's. 
Because of the way CPDs are defined this meant that Intergraph offered much lower discounts on future new technology to be acquired under the RFP's TI clause than did Grumman. The third was the mathematical mistake made by the Source Selection Evaluation Board in scoring Intergraph's Management Proposal, which gave Intergraph a slight scoring edge, and which, if corrected, would have given Grumman a slight scoring edge. All of the foregoing combined to make the Navy's best value analysis improper and irrational, according to protester. To begin with, we have corrected the SSEB's mathematical error with respect to the Management Proposal scoring. With respect to whether or not PEC considered CPDs in its analysis, the following is quoted from PEC's Supplemental Material to its original report: In conjunction with the supplemental analysis we examined the characteristics of the proposed CPD's for risks related to contract performance under the terms of the technology improvement provisions of the contract. In the analysis of price risk, as presented in our report of May 10, we found that the contemplated TI actions for the Red and Blue [Grumman and Intergraph] offers were sustainable within the proposed unit prices, thus did not constitute a risk of contract price growth. Similarly, we do not find reason to expect the proposed CPD's will constitute an impediment to the introduction of technologies under contract performance. We therefore find no significant performance risk related to the CPD's as proposed. Protest File, Exhibit 8D. Thus PEC considered both performance risk and price risk in connection with the offered CPDs and found none for either Grumman or Intergraph. Protester's complaint that the Source Selection Authority should have followed the IAWG's recommendations has a little more substance than any of the foregoing.
The SSAC's analysis, based on the requirements set out in the solicitation, found Intergraph to be the high technical scorer (680 to 636.40 for protester) and the lower priced of the two by [REDACTED] (after adjustment for a mathematical error). The IAWG's analysis found Grumman to be very slightly ahead technically in the areas the IAWG examined and its proposal a better value to the Government over a range of values from $98 million to $242 million, the range being due to variations in certain underlying assumptions. These two sets of results had very different bases. The SSAC's analysis of technical worth looked at every operation that the RFP required an offeror to perform on the assumption that all operations would be going on during equal amounts of time. In other words the proposed systems would be employed in designing operations for the same time periods as in manufacturing operations, and so on with all of the major requirements. This analytic method has two strengths. The first is that because an actual task that an awardee (who is the high scorer technically) is asked to accomplish is as likely as not to be any of the tasks within the limits of the RFP, the probability is that the Navy will be getting the best technical performer for the job under the stated parameters. The second is that all offerors are judged technically on the same criteria, and over the same range of tasks, all of which are weighted equally. The IAWG's approach, on the other hand, in arriving at best value, requires that an initial estimate be made as to which tasks are the more likely to be required, and for what length of time. Since each offeror does some things better than others, the type of task selected becomes extremely important.
Thus if four tasks are selected as historically likely from experience with an earlier, similar procurement and three are of the type that require proficiency in modelling, then the offeror whose strength is modelling has an immediate and continuing advantage. This was the approach taken by the IAWG. It made estimates of which operations were most common from its experience, and how much time was spent on each. Thereafter the IAWG determined the cost, using Government pay rates, thereby obtaining a value to the Government for each offeror's proposal. Thus any error in the IAWG's judgment as to what were the most likely operations and/or any error as to the percentages of time likely to be required by the most likely operations would have a profound effect on its results. The IAWG's technical process analysis consisted of running a simple problem in each of a number of areas and recording the number of keystrokes, menu picks, view picks, mouse strokes and function key strokes required to complete the problem. The problem here is the phrase "required to complete the problem." The method used was up to the operator. To the extent that the operator always took the shortest route on the keyboard to obtain the desired end, the result was accurate; to the extent he did not, the result was not accurate. No offeror had any direct input into any of these routes except through standard operating manuals. Nor did any of the problems used by the IAWG track any of the areas of the OCBD tests, where results could be compared between the SSAC analysis and the IAWG's. After receiving the IAWG report the SSA sent it back for a re-check. The IAWG added to and subtracted from some of its assumptions to examine best and worst cases but did not vary its task distribution patterns or its time distribution patterns. In the final analysis the SSA was faced with the necessity of comparing the analytic methods of the SSAC and the IAWG, and the recommendations this produced. 
Both analyses involved projections over a 12-year period, and both considered cost and price changes. The IAWG analysis was based on the assumption that certain operations with respect to contract tasks would be performed for a larger percentage of working time than other operations, based on estimates arising out of experience with earlier, similar procurements. This analysis was thus more vulnerable to future changes in the overall Navy mission, and to changes in the Navy's way of doing things, as well as to changes in technology. The SSAC analysis, since it did not consider differing use frequencies among the many possible contract operations, was much less vulnerable to future changes in these areas. If the SSA accepted the SSAC's recommendation he would have to disfavor the IAWG's determination that Grumman was marginally better technically and offered a somewhat better value to the Government, an opinion developed after three months' work. On the other hand, if he accepted the IAWG's recommendation, his decision would have to go against the SSAC's conclusion that Intergraph offered the high technical/low price solution based on the 49,000 hours of work over 18 months put in by the SSEB in examining and evaluating the proposals, plus more than four weeks spent on the OCBDs, and the thousands of hours spent by the SSAC analyzing both the SSEB's voluminous report and the IAWG's two reports. He would also have to make the award to an offeror that was not the high technical/low price offeror under the evaluation scheme set out in the solicitation. But in the end it is not a question of who has the best analysis, or the best methodology, or the "right" answer. Both the IAWG and the SSAC presented analyses that show two offerors very close in technical ability, with both proposing about equal value to the Government. The SSA, although mindful of the IAWG's recommendation, has selected Intergraph as the high technical/low price offeror.
This was a decision well within his authority to make, and it is consonant with the terms of the solicitation. Protester challenges that selection but has the burden of proving it wrong. The most generous view of its effort to do so is that it has achieved a tie. But the Navy wins all ties, because a tie is less than a preponderance of the evidence, which is what the protester needs to succeed.[foot #] 1 Lanier Business Products, Inc., GSBCA 7702-P, 85-2 BCA ¶ 18,033. Decision For the reasons stated we deny the protest. ----------- FOOTNOTE BEGINS --------- [foot #] 1 Intervener Federal Computer Corporation (FCC) adopted most of Grumman's protest grounds and, in addition, alleged that the Navy's evaluation of its proposal was flawed. It also alleged that the Navy failed to hold discussions on the weaknesses in its proposal. FCC presented no evidence of any lack of discussions on the Navy's part. The record shows that FCC received notice of those weaknesses requiring discussions. Protest File, Exhibit J. FCC presented nothing to the contrary. Indeed FCC was allowed a complete re-run of the OCBD tests which were designed to reveal weaknesses. Protest File, Exhibit L. These two allegations are unsupported by the record and fail for lack of proof. ----------- FOOTNOTE ENDS ----------- _________________ DONALD W. DEVINE Board Judge I concur: ___________________ MARTHA H. DEGRAFF Board Judge VERGILIO, Board Judge, dissenting. I part company from the majority with respect to the resolution of counts 1, 20, and 21, which involves the requirement that equipment simulate at least one million evaluations per second. The findings of the majority, and the record, support the allegations of the protester and intervenor FCC. Specifically, when the solicitation--including all questions and answers--is read as a whole, each "step" must comply with the simulation requirement.
The position of the agency and the testimony of the Intergraph witnesses fail to reconcile the language in the responses to questions 915 and 916. (The majority recognizes that, regarding the answer to question 915, "the second sentence supports protester's interpretation that all 4 steps are involved." The response to question 916 belies the interpretation put forward by the agency and awardee.) Accordingly, the agency interpretation is not reasonable. Although the more relaxed interpretation may permit "more" (or a different sort of) competition, the solicitation did not seek proposals in response to the relaxed requirement. Because the best and final offer of Intergraph fails to meet a minimum mandatory requirement, here of great significance, the agency's selection of Intergraph as the awardee was contrary to the terms of the solicitation, as well as statute and regulation. Based on this error by the agency, I would grant the protest (and intervention of FCC). Given this conclusion, I do not resolve the other bases of protest and intervention; the agency should be required to proceed in accordance with statute and regulation. _________________________ JOSEPH A. VERGILIO Board Judge