Tao On Twitter





    Tuesday, October 16, 2007

    Need For Speed: Optimized Servers and Platforms For EDA Tools

    You can never have too many Opterons. No amount of memory is ever enough. 64-bit is no longer an option but a necessity. Sound familiar? EDA tools push compute servers to their limits: memory requirements are measured in gigabytes and runtimes in days. Interesting, then, that the platforms running these high-performance tools are the same machines, configurations and operating systems that John Q. Server down the street uses for everything from email to file storage. If gamers get a Blackbird, should ASIC design engineers settle for anything less?

    Imagine, if you will, a platform whose components and operating system are tweaked to speed up EDA execution. A wishlist for an EDA-optimized platform:

    1. Blazing floating-point performance with vector support (for those iterative crosstalk-aware timing runs, or a router's calculations to simultaneously satisfy setup, hold, noise, crosstalk, DRC, antenna, signal-EM... you get the idea)
    2. Hundreds of GB of memory (and a motherboard that supports it)
    3. Excellent graphics support (so that your place-and-route display doesn't hang so often)
    4. Optimized shared libraries for graph partitioning algorithms and sparse matrix computations (which just about every tool uses)
    Ideally, we want a platform that accelerates EDA tool performance without restricting those tools to that one server. In other words, no proprietary libraries that only the accelerated server can use; try locking people in and you just might be locking them out. Every function available on the accelerated server should also be available on normal servers (slower, of course). One way to do this: on John Q. Server the EDA tools link against a standard build of the shared libraries, while the EDA-optimized platform ships an accelerated build of the same libraries that exploits its hardware.
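
    To make that concrete, here is a minimal sketch of the same-API, two-code-paths idea, written against CUDA purely as an illustration. The entry point eda_axpy and everything around it are invented for this example, not part of any real EDA library: the tool always calls the same function, and the library decides at run time whether accelerator hardware is actually present.

        #include <cuda_runtime.h>
        #include <stdio.h>

        /* Hypothetical library entry point: y = a*x + y over n elements.
           Same signature everywhere; only the implementation behind it differs. */

        __global__ void axpy_kernel(int n, float a, const float *x, float *y) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        /* Plain CPU fallback used on John Q. Server. */
        static void axpy_cpu(int n, float a, const float *x, float *y) {
            for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
        }

        /* GPU path used when an accelerator is present. */
        static void axpy_gpu(int n, float a, const float *x, float *y) {
            float *dx, *dy;
            cudaMalloc(&dx, n * sizeof(float));
            cudaMalloc(&dy, n * sizeof(float));
            cudaMemcpy(dx, x, n * sizeof(float), cudaMemcpyHostToDevice);
            cudaMemcpy(dy, y, n * sizeof(float), cudaMemcpyHostToDevice);
            axpy_kernel<<<(n + 255) / 256, 256>>>(n, a, dx, dy);
            cudaMemcpy(y, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(dx);
            cudaFree(dy);
        }

        /* The one function the EDA tool actually links against. */
        void eda_axpy(int n, float a, const float *x, float *y) {
            int devices = 0;
            if (cudaGetDeviceCount(&devices) == cudaSuccess && devices > 0)
                axpy_gpu(n, a, x, y);   /* accelerated platform */
            else
                axpy_cpu(n, a, x, y);   /* any other server, just slower */
        }

        int main(void) {
            float x[4] = {1, 2, 3, 4}, y[4] = {1, 1, 1, 1};
            eda_axpy(4, 2.0f, x, y);
            for (int i = 0; i < 4; ++i) printf("%g ", y[i]);   /* prints 3 5 7 9 */
            printf("\n");
            return 0;
        }

    In practice you would probably ship two builds of the same .so and let the dynamic linker pick one, so a plain server never even needs the CUDA runtime installed; the calling tool never knows the difference.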

    Some options I see for making this platform, and the business around it, work:
    1. Create your own custom EDA compute server. A Tharas Hammer for general-purpose EDA, so to speak.
    2. Create a card for accelerating EDA tools, release an SDK for it, and have EDA vendors build on it.
    3. Building your own card is still expensive and possibly risky. Can we use off-the-shelf cards to build this accelerator instead, using Nvidia's CUDA technology, for example? (A sketch of this follows the list.)
    4. Work only on the OS and hardware. Tweak, tweak and then tweak some more. Use only off-the-shelf components and an open-source OS base to build a killer compute server. Sell the OS+machine combination to ASIC companies at a slight premium over the standard package. Can you see all those deep red blade servers wall-to-wall in server rooms?
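
    To flesh out option 3, here is the kind of embarrassingly parallel, per-net arithmetic that maps naturally onto an off-the-shelf CUDA card. The 0.69*R*C lumped-delay formula and the array names below are placeholders chosen for this sketch, not taken from any particular tool; the point is simply that millions of independent evaluations are exactly what these cards are built for.

        #include <cuda_runtime.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* One thread per net: delay = 0.69 * R * C, a lumped Elmore-style
           estimate standing in for whatever per-net math a real tool does. */
        __global__ void net_delay_kernel(int n, const float *r_ohm,
                                         const float *c_farad, float *delay_s) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) delay_s[i] = 0.69f * r_ohm[i] * c_farad[i];
        }

        int main(void) {
            const int n = 1 << 20;                 /* roughly a million nets */
            size_t bytes = n * sizeof(float);

            float *r = (float *)malloc(bytes);
            float *c = (float *)malloc(bytes);
            float *d = (float *)malloc(bytes);
            for (int i = 0; i < n; ++i) { r[i] = 100.0f; c[i] = 1e-13f; }

            float *dr, *dc, *dd;
            cudaMalloc(&dr, bytes); cudaMalloc(&dc, bytes); cudaMalloc(&dd, bytes);
            cudaMemcpy(dr, r, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(dc, c, bytes, cudaMemcpyHostToDevice);

            net_delay_kernel<<<(n + 255) / 256, 256>>>(n, dr, dc, dd);
            cudaMemcpy(d, dd, bytes, cudaMemcpyDeviceToHost);

            printf("net 0 delay = %g s\n", d[0]);  /* about 6.9e-12 s */

            cudaFree(dr); cudaFree(dc); cudaFree(dd);
            free(r); free(c); free(d);
            return 0;
        }

    Whether the PCIe transfer overhead eats the speedup depends on how much work each net actually needs, which is the real engineering question behind option 3.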


    1 comment:

    1. Anonymous, 1:06 AM

      Hi Adi

      Nice blog. You are right; there is a lot of research going on in developing parallel algorithms for circuit simulation and similar problems using GPUs, and there were a lot of papers at DAC this year too. I also remember coming across a tool from Synopsys with parallelization. Lots of jobs in this area.
