Last Update: April 8, 1993

Welcome to the OO7 Directory. Currently you will find:

1) sigmod.ps - the postscript file corresponding to the OO7 paper that
will appear at the 1993 SIGMOD conference. If you pulled "sigmod.ps"
before April 6, you may wish to get a fresh copy, because some clerical
errors were corrected and installed on April 6.

2) implementations - a directory containing implementations of the OO7
benchmark for E/Exodus, Objectivity, Ontos, and ODI. An O2
implementation is in the works. The implementations directory contains
tar files for the four systems.

3) techreport.ps - a postscript file that is the "full version" of the
SIGMOD paper. This is an expanded version of the SIGMOD paper with some
more tests and a more thorough description of the benchmark and its
motivation. Approximately one month from now, an expanded version of the
paper with tests from the small, medium, and large databases will be
available.

Mike Carey
David DeWitt
Jeff Naughton

p.s. Many have written to ask why ODI (the company that makes
ObjectStore) is not included and why the ObjectStore implementation is
not available. Here are the reasons:

Why can't you get the ObjectStore implementation?

1) As is explained in item 2) below, ODI had their lawyers send us a
legal notice demanding that we not release any numbers on their system.
We are not happy about this demand, but we are complying and will not
release any ODI numbers. Since ODI has withdrawn from the benchmarking
process, we have requested that ODI return or delete all copies of our
ObjectStore OO7 implementation (code that we wrote here at the
University of Wisconsin), since we view that implementation as our
intellectual property. Our goal in writing and running the OO7 benchmark
ourselves, instead of handing a specification to vendors to write and
run, was to ensure that the comparison is what we view as a fair
"apples to apples" comparison.
Since ODI will not let us release the numbers from our tests on our
implementation, we do not want others to release numbers from our code,
or from derivatives of our code, if we are not able to inspect the code
or monitor the tests. The upshot of this is that there are no ODI
numbers for OO7 (and at this point, as per ODI's demand, there can be
no such numbers).

2) Why did ODI drop out at the very last moment? Here are some more
details of the process.

Starting last August we began interacting with ODI on this benchmark.
They were very helpful, and their initial results were excellent on the
small (5MB) database but not as good on the medium (50MB) database.
This resulted in "legal" action #1 from their lawyers, ordering us not
to release their numbers at that stage. We didn't understand what was
going on with the results, so we assigned one of our full-time
programmers to figure out what was happening, as the CPU and I/O times
didn't add up to the elapsed time. It turned out to be a simple
communications-related problem; once we suggested a fix, their medium
times became dramatically better. This resulted in legal action #2, a
letter from their lawyers giving us permission to publish the results
of the OO7 benchmark.

At that point we told all the vendors (ODI, Objectivity, & Ontos) that
they had until March 1 to deliver us a "final" system, which we would
use to produce final numbers. While waiting for the systems to arrive,
we decided that we needed to explore how some of the different
parameter settings affected the results obtained. During this period we
added the concept of a manual, and we decided to study how varying the
number of connections/atomic part affected the results. In our original
tests, the small database had 3 connections/atomic part and the medium
database had 7. We decided to test both the small and medium databases
with 3, 6, and 9 connections/atomic part. We did not inform any of the
vendors of this change (which ODI viewed as unfair).
In early March, Naughton gave a talk at Stanford with our 3, 6, and 9
results in it. It turns out that ObjectStore's performance was very
sensitive to the fanout setting. One day before the SIGMOD deadline,
ODI sent us a new version of their system that reportedly would solve
the sensitivity problem. Since there was not enough time left to allow
the other vendors to respond to the new set of tests, we felt that we
could not fairly publish ODI's new numbers in the SIGMOD version of the
paper. We did, however, offer a compromise in which the SIGMOD version
would contain the old numbers but the technical report (referenced in
the SIGMOD paper) would contain the new numbers. ODI rejected this
compromise via legal action #3, another letter from their lawyers
ordering us not to release any results using their system.

In summary, the stalemate is that ODI wants to have a final say on any
numbers that we release on their product. None of the other vendors
have made such a demand, and we simply cannot agree to it. This is not
the way that science should be conducted.

Carey, DeWitt, Naughton