
Thursday, 18 November 2010

Comments

Jeff Sweeney

Does WatchTower do anything with the error/crash reports that are sent in automatically by DW?

Philip Stears

@Jeff

Automated error reports are handled by another system developed by a couple of the guys here.

As they come in, they are saved to disk on one of our servers. This means that if anything goes wrong further down the process (e.g. if any of the databases or systems we use downstream are offline), we always have the files, so when the system comes back online the reports are processed.

We then have another system which monitors this drop folder and transfers the files from the data center to our head office. Once transferred, each report is analyzed to determine its similarity to existing reports.

If it's a report we've not seen before, it automatically gets added to the tracking system our development team uses, alongside any other work that we have to do.

If it's the same as an existing report then it gets added to the same entry in the tracking system as the existing report. In the event that a customer has provided their details, we can then use this information to contact the customer when a fix becomes available, or if we need more information to track down the issue.
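To put the flow in concrete terms, here's a rough sketch of the drop-folder idea in Python. It is purely illustrative, not our actual system: the folder names, the fingerprint function, and the tracking-system calls are hypothetical stand-ins for "keep the file safe first, then create or attach to a tracking entry".

import hashlib
import shutil
import time
from pathlib import Path

DROP_FOLDER = Path("incoming_reports")       # where reports land as they arrive
PROCESSED_FOLDER = Path("processed_reports")
seen_fingerprints = {}                        # fingerprint -> tracking entry id

def fingerprint(report_text: str) -> str:
    """Reduce a report to a stable signature so duplicates group together.
    Here we just hash the text; a real system would normalise stack traces."""
    return hashlib.sha1(report_text.encode("utf-8")).hexdigest()

def file_in_tracking_system(report_path: Path) -> str:
    """Hypothetical stand-in: create a new entry in the dev team's tracker."""
    entry_id = f"ISSUE-{len(seen_fingerprints) + 1}"
    print(f"New issue {entry_id} created for {report_path.name}")
    return entry_id

def attach_to_existing(entry_id: str, report_path: Path) -> None:
    """Hypothetical stand-in: attach a duplicate report (and any customer
    contact details it carries) to the existing tracking entry."""
    print(f"{report_path.name} attached to existing issue {entry_id}")

def process_drop_folder() -> None:
    PROCESSED_FOLDER.mkdir(exist_ok=True)
    for report_path in sorted(DROP_FOLDER.glob("*.txt")):
        text = report_path.read_text(errors="replace")
        fp = fingerprint(text)
        if fp not in seen_fingerprints:
            seen_fingerprints[fp] = file_in_tracking_system(report_path)
        else:
            attach_to_existing(seen_fingerprints[fp], report_path)
        # Only move the file once processing succeeded, so a downstream
        # outage never loses a report.
        shutil.move(str(report_path), PROCESSED_FOLDER / report_path.name)

if __name__ == "__main__":
    DROP_FOLDER.mkdir(exist_ok=True)
    while True:                    # simple polling loop; a real system might
        process_drop_folder()      # use file system notifications instead
        time.sleep(30)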

It's very rare to get a report concerning model generation, but if we do get one, or we receive a support call that identifies an issue with model generation, we then create a WatchTower test to try to reproduce the issue so that we can validate the fix that development creates, and to ensure that behavior doesn't regress in the future.

Hope this helps!
Philip

Paul Gimbel

The bigger question is how many different models are being created. Making thousands of trailers is nice, but the trailer only provides one isolated set of conditions. Those conditions are designed to be very sterile and bulletproof since it is a public-facing demo set. Certainly there is something to be said for using a familiar test case, as you have a good baseline for comparison, but the tricky bits come in when the users place the app in their implementation's environment with their real world details.

Philip Stears

WatchTower is our primary means of testing different model generation conditions; there are in excess of 2,600 unique permutations of models, and more are being added all of the time.

We are using a trailer implementation to specifically test DriveWorks Autopilot in a number of ways. Firstly, the model itself undergoes fairly significant change - it has a good number of replacement models that are generated. Secondly, it involves using "common" models - i.e. models used in more than one top level assembly - which specifically tests the algorithm used by Autopilot to distribute work when more than one Autopilot machine is involved. Thirdly, we are using it to hone our handling of SolidWorks in DriveWorks Autopilot. We've specifically configured Autopilot to use a single copy of SolidWorks - i.e. not to restart after each top level model - which means that each SolidWorks instance is used to generate thousands of individual models.

The environment itself is pretty rudimentary in terms of the hardware/software we are using, and the database is backed by SQL Express as opposed to one of its bigger cousins.

This gives us a pretty realistic view of the scaling ability of DriveWorks Autopilot and how to tune it, which is, as I said, the primary goal for the Autopilot part of Jobbr.
