Tao On Twitter

    Friday, October 26, 2007

    Divide and Conquer : The Case for a Distributed EDA Future

    In two of my previous posts, I dealt with the machines that run EDA tools: Need for Speed was about tuning machines to accelerate EDA tool performance, while It's Not What You've Got had some ideas on optimizing machine utilization in the ASIC enterprise. Now it's EDA's turn. My argument is that, if we look at the trends in designs and machines, distributed EDA algorithms and tools will start to dominate non-distributed ones (multi-threaded or not). In a nutshell: "Speed is no longer the bottleneck for design execution; memory is."

    Consider the following:

    • A : Designs are constantly growing (more gates and more placeable instances).
    • B : The data structure for a cell, net or design object is also growing (as tools start tracking and optimizing designs across multiple degrees of freedom).
    • A + B : The memory requirement for designs is growing.
    • C : Commercial servers and motherboards don't have infinite slots for RAM. I think the current figure is around 16GB per CPU.
    • A + B + C : There might be a design that will not fit on a server in one shot!
    • D : A 64GB (4-CPU) Opteron is not 8x more expensive than an 8GB (1-CPU) Opteron; it's 19x more expensive!
    • A + B + C + D : Even if a design could fit on one server, it's going to be a very expensive design to execute!
    What will you do when the sign-off timing, simulation or extraction run for your mammoth design cannot fit on one server?
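
    To make the squeeze concrete, here is a minimal back-of-envelope sketch in Python. The instance count and bytes-per-object figures are hypothetical assumptions for illustration; the only number taken from the points above is the ~16GB-per-CPU RAM ceiling.

        # Rough sketch of A + B + C: does a flat, in-memory run fit on one box?
        # Instance count and bytes-per-object are assumed, not measured.
        instances = 100 * 10**6        # placeable instances in a large design (assumed)
        bytes_per_instance = 1000      # cell/net/timing/placement data per object (assumed)
        ram_per_cpu_gb = 16            # the per-CPU RAM ceiling quoted in point C
        cpus_per_server = 4

        design_gb = instances * bytes_per_instance / 2**30
        server_gb = ram_per_cpu_gb * cpus_per_server

        print(f"Design footprint ~{design_gb:.0f} GB vs. {server_gb} GB on a {cpus_per_server}-CPU server")
        if design_gb > server_gb:
            print("The run will not fit on one server; the design data has to be partitioned.")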

    Distributed EDA tools will surely be slower than their non-distributed counterparts, but can one not build an EDA tool that runs on 19 single-CPU Opterons (with a total of 19 CPUs and 152GB of RAM among them) with the same efficiency as a 4-CPU Opteron with 64GB of RAM? For the same price, that farm gives us nearly 5x the CPUs and more than 2x the RAM, which is plenty of margin to absorb the overhead of distribution!
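
    For what it's worth, here is that price-for-resources arithmetic as a small Python sketch. The costs are relative, normalized to the 1-CPU box, using only the 19x figure from point D; the machine specs are the ones quoted above.

        # A farm of cheap 1-CPU boxes vs. one big 4-CPU box, at equal total price.
        small = {"cpus": 1, "ram_gb": 8,  "cost": 1}    # 8GB, 1-CPU Opteron (relative cost 1)
        big   = {"cpus": 4, "ram_gb": 64, "cost": 19}   # 64GB, 4-CPU Opteron (19x the cost)

        n_small = big["cost"] // small["cost"]          # 19 small boxes for the price of one big box
        farm_cpus = n_small * small["cpus"]             # 19 CPUs in total
        farm_ram = n_small * small["ram_gb"]            # 152 GB in total

        print(f"CPU margin over the big box: {farm_cpus / big['cpus']:.1f}x")   # ~4.8x
        print(f"RAM margin over the big box: {farm_ram / big['ram_gb']:.1f}x")  # ~2.4x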

