    Sunday, December 14, 2008

    Come Together Right Now (Over Me) : EDA Standards That I'd Love to See

    It's great to see the EDA industry pursuing interoperability via standardization. I think I'll save the rants about there being two competing standards for almost everything (Verilog vs. VHDL, ECSM vs. CCS, UPF vs. CPF) for another post. The industry has done a great job on two fronts: libraries (Liberty, GDSII, LEF) and design intent (UPF, CPF, SDC). Two areas that are conspicuously devoid of standards are implementation and sign-off. The items on my wishlist below fall into one of these two categories.

    #1 Physical Design Verification

    This seems to be one of the low-hanging fruits ripe for standardization. The manufacturing rules for a given process are the same regardless of the implementation tool. Yet, EDA vendors and foundries spend time and effort translating the same rules into a multitude of tool-specific languages. Wouldn't it be great if every physical design verification tool supported the same rule format?

    #2 Parasitic Extraction

    There are only so many things one expects to be present in an RC Deck. These values, much like in DRC decks, are constant for a given process. It can't be hard to develop a common extraction rules format. Vendors can still add value via the speed and accuracy of their RC estimation algorithms.

    #3 Crosstalk (Delay and Noise)

    Standard delay calculation is, in essence, interpolation of a lookup table (LUT). Thus, tools from different vendors usually correlate very well for non-SI timing. Timing with SI is the real problem now. A standard for crosstalk impact estimation would go a long way in addressing one of the greatest pain points in ASIC design: a scenario where the implementation tool and the sign-off tool do not agree on the timing or noise impact of crosstalk. If buying the implementation tool and sign-off tool from the same vendor is not an option, ASIC design companies are left with the unenviable prospect of multiple iterations to close SI-aware timing. There are two issues here. One, each vendor is convinced that their approach to crosstalk is the right one. Two, vendors don't publish their delay calculation methodologies, so customers cannot examine the merits of the competing approaches. Perhaps vendors view their delay calculation methodology as the core of their tool and are unwilling to publish it for fear of competing implementations. If that is the case, good luck to ASIC design companies. They're going to need it.
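
    As a rough illustration of why non-SI delays correlate so well across tools, here is a minimal sketch (in Python) of table-lookup delay calculation: bilinear interpolation of a cell delay indexed by input slew and output load. The table values and index points are invented for illustration; real Liberty tables are larger and each tool layers its own refinements on top.

```python
# Minimal sketch of NLDM-style delay lookup: bilinear interpolation of a
# cell-delay table indexed by input slew and output load.
# Table values and index points below are invented for illustration.

from bisect import bisect_right

slew_idx = [0.01, 0.05, 0.20]          # ns
load_idx = [0.001, 0.010, 0.050]       # pF
delay_tbl = [                          # delay_tbl[i][j] -> delay (ns) at (slew i, load j)
    [0.020, 0.045, 0.110],
    [0.030, 0.060, 0.140],
    [0.055, 0.095, 0.200],
]

def _bracket(axis, x):
    """Return indices (lo, hi) of the bracketing table points (clamped to the table edges)."""
    hi = min(max(bisect_right(axis, x), 1), len(axis) - 1)
    return hi - 1, hi

def cell_delay(slew, load):
    i0, i1 = _bracket(slew_idx, slew)
    j0, j1 = _bracket(load_idx, load)
    ts = (slew - slew_idx[i0]) / (slew_idx[i1] - slew_idx[i0])
    tl = (load - load_idx[j0]) / (load_idx[j1] - load_idx[j0])
    d00, d01 = delay_tbl[i0][j0], delay_tbl[i0][j1]
    d10, d11 = delay_tbl[i1][j0], delay_tbl[i1][j1]
    top = d00 + (d01 - d00) * tl       # interpolate along the load axis
    bot = d10 + (d11 - d10) * tl
    return top + (bot - top) * ts      # then along the slew axis

print(round(cell_delay(0.03, 0.005), 4))   # interpolated delay in ns
```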


    #4 Implementation

    This is a tough one to get consensus on. It's also interoperability cranked up to the max. Imagine if every design tool spoke the same language. Imagine no more tool-specific scripts. Scripts that work in one tool environment would work seamlessly in another, regardless of the flow. You would only need one set of scripts regardless of the vendor. The problem with getting this to work is two-fold. One, tool languages are a certain way because of the tool architecture. It's not going to be easy supporting the same language on different architectures. It shouldn't be impossible either. A common implementation language seems to be the same as a common design intent language (such as SDC) except for the increased scope. Two, different vendors have features that competitors might not possess and so require different commands to exercise those features. For example, if one vendor has technology for sequential scan-compression logic while another has a combinational scan-compression technology, it's going to be hard to get them to agree on a common set of commands.





    Sunday, December 07, 2008

    A Bigger Pie : Value Addition In the ASIC Ecosystem

    There were a couple of press releases this week in which companies attempted to add value to their primary products in an interesting way.

    #1 DFT + IP

    Virage Logic's bid for LogicVision might seem puzzling at first. Why would an IP company want to take over a DFT EDA vendor? My take is that Virage Logic intends to provide customers with both the IP and the means to test it. The only problem bigger than IP integration is IP test! Poor documentation, absence of proper test-friendly structures and long verification cycles are commonplace when IP vendors provide just their IP and a test document of questionable quality. With LogicVision's test and diagnosis technology, it's not hard to envision Virage's IP coming with fully integrated self-test structures that require just a little glue logic from the IP integrator for complete testability.

    #2 Open Source + ASIC Products

    How can an ASIC/System vendor increase the demand for its products? Realizing that the software built upon the hardware is the real driver for demand, XMOS is using open source to nurture an ecosystem around its hardware platform (programmable event-driven processor arrays). It doesn't hurt that it also allows XMOS to demonstrate the flexibility of its platform when it comes to applications. Xlinkers is a user-driven open-source community for software utilizing the XMOS platform. An open source community hosting a variety of (constantly maturing) applications is a huge vote of confidence for users considering XMOS products. Not only do customers get to appreciate the flexibility of the platform but they also get a head start by reusing the developed open-source components as building blocks for their own products.




    Sunday, November 30, 2008

    Brave New World : SaaS and Its Implications For EDA

    There's been a lot of attention paid to the SaaS (software-as-a-service) model in EDA recently. Harry (The ASIC Guy) started it all with his views on (re)deploying the SaaS model in the EDA industry. Gabe Moretti and Daya Nadamuni are publishing an ongoing series about SaaS in EDA. What's all the fuss about?

    What does the SaaS model offer ASIC design companies?

    • Unlimited licenses (The concept of buying a license does not exist in the SaaS model. It's pay as you go.)
    • No upfront license cost (Pay as you go, remember?)
    • Lower IT investment (Since the EDA applications are hosted, the cost of compute and storage resources is built into the rate the vendor charges you).
    • Scalability (Since both tools and infrastructure are no longer your concern, scalability is only limited by your engineering team).
    What does the SaaS model offer EDA companies?
    • Constant revenue stream (Rather than one lump-sum payment, the EDA company receives a stream of revenue. Value-adds such as storage and compute services increase your revenue.)
    • Increased Margins (The pay-as-you-go model allows EDA companies to charge higher rates than the current model)
    • Unlocked Revenue (Since small customers can now rent tools, compute and storage resources rather than invest in them upfront, the market size that can be addressed by the SaaS model is larger than under the current ratable model).
    Can SaaS succeed this time around?

    The reality is that ASIC design companies rarely use tools from one vendor across all flows. Given this, can one company moving to the SaaS model make an impact? You may think you're achieving lock-in but end up being locked out. Sure, an EDA company can put in hooks that allow tools from other vendors to run on its SaaS platform, but I'm not sure that option will even be permitted. I really don't see one EDA company handing over its tools and licenses to run on a competitor's network. If you now have to transfer data to/from/across SaaS platforms to use tools from different vendors, it's going to be a real mess. ASIC design companies want to use the tools that they want to use, and it's up to the industry to figure out a way to allow that. Self-hosted SaaS platforms do not seem to be a viable option for EDA companies. One option I see is the creation of independent third-party (non-EDA) vendors that host applications from all vendors.

    What are the implications of the switch to SaaS for EDA Companies?
    • Interoperability: Once every tool is available on one unified SaaS platform, interoperability becomes more crucial than ever. Users will expect that best-in-class tools from different vendors work seamlessly.
    • Best-In-Class Tools as Profit Centers: Right now, every major vendor is trying to offer solutions for all aspects of design and selling them as such. Under the SaaS model, each tool will stand alone. Switching tools becomes so easy that only best-in-class tools will survive in the end. EDA companies may find that trying to support a large number of tools to address the entire flow is actually detrimental to their business.



    Sunday, November 23, 2008

    Desktop Supercomputers : The CUDA-enabled EDA (Near) Future

    In my first ever post, Need for Speed (way back in Oct 2007), I proposed that specialized EDA platforms could be built using Nvidia's CUDA technology. Since then, CUDA has attracted a steadily increasing list of EDA adopters.

    • February 2008: Gauda uses Nvidia's CUDA platform (and distributed processing) to accelerate OPC by 200x.
    • April 2008: Nascentric announced the availability of a GPU-accelerated spice simulator (OmegaSim GX) using none other than Nvidia's CUDA technology.
    • August 2008: Agilent announced the use of CUDA technology for the acceleration of signal integrity simulations in their ADS (Advanced Design System) Transient Convolution Simulator.
    Is this a flash in the pan or is CUDA-enabled EDA here to stay? Why use CUDA?
    • C-Based SDK allows for easy porting of code to the CUDA platform
    • Ecosystem of tools and applications built on the CUDA platform. Nvidia's doing its part by hosting CUDA-based code design contests. Right now, you can read and download academic papers on the utilization of the CUDA platform for statistical timing analysis and graph algorithms.
    • Cost-effective computation is perhaps the biggest thing going for the CUDA platform. Where else can you get a 100x improvement in runtime for a mere $600?
    Despite these advantages, there are some missing pieces that would hold CUDA back from really taking the EDA world by storm. My wishlist for CUDA is the following:
    • Backward compatibility is key to a low-risk path to adoption. Without it, who's going to risk porting their code base to CUDA hoping that their customers will move to CUDA-enabled platforms? With backward compatibility comes a great hook: "Use our tools on your current platform but you can get a 100x improvement in runtime just by buying a PCI card". Right now, that's not the case. Nascentric, for example, offers OmegaSim and OmegaSim-GX as two separate tools. Wouldn't it be great if the same code could run on both platforms but one runs a whole lot faster because of CUDA?
    • Native support for distributed processing could bump performance up even higher. Graphics processors are built to solve "embarrassingly" parallel problems. It's not much of a leap to distribute the workload amongst multiple graphics processors; the CUDA pitch (do more with less money) becomes that much sweeter. A sketch of the underlying partitioning idea follows this list.
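
    Here is a minimal sketch, in plain Python rather than CUDA, of the partitioning idea referred to in the last point above: an embarrassingly parallel workload (independent per-net evaluations, say) split across worker processes. On a GPU the same chunking maps onto thread blocks; the workload function and sizes here are invented for illustration.

```python
# Toy illustration of distributing an embarrassingly parallel EDA workload.
# Each "net" is evaluated independently, so the work can be split across
# processes (or, analogously, CUDA thread blocks) with no coordination.

from multiprocessing import Pool

def evaluate_net(net_id):
    # Stand-in for an independent per-net computation (delay, OPC tile, ...).
    x = float(net_id)
    for _ in range(1000):
        x = (x * 1.000001 + 0.5) % 1000.0
    return net_id, x

if __name__ == "__main__":
    nets = range(100_000)
    with Pool() as pool:                              # one worker per CPU core
        results = pool.map(evaluate_net, nets, chunksize=1024)
    print(len(results), "nets evaluated")
```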


    Sunday, November 16, 2008

    Pushing The Envelope : Mobile/Wireless Design And The Advancement Of ASIC Design

    At a high level, the metric space for an ASIC design consists of:

    • Performance (Timing)
    • Area
    • Power
    • Effort
    • Schedule
    • Unit cost
    Depending on the application of the end device, each ASIC project has important metrics and some not-so-important metrics, driven by the business concerned. Every ASIC design sits somewhere in this n-dimensional metric space according to the importance of each metric. For example:
    • Supercomputer applications : Performance is everything. The nature of the business (low volume, high margins, controlled device environment) is such that the other metrics don't really matter, within relaxed limits.
    • Desktop Processors : High performance and unit cost matter. Schedule and effort are very high (think of the teams that spend a year or two doing nothing else but designing the next-generation desktop processor) but that's OK because, with a low unit cost and high volumes, the business will still generate a profit.
    • Memories : Area and unit cost matter. The objective is to create the densest memories possible such that you can hit a performance standard at the least unit cost possible.
    In contrast to the above, consider the driving forces of the mobile and wireless business:
    • Short product cycles
    • High volumes
    • Cost-driven business
    • Power-sensitive
    • High performance
    • Smaller/slimmer mobile devices
    • Increasing functionality (multiple interfaces (Wi-Fi, Bluetooth, SD), on-device memory, ...)
    Here's what the metrics look like for the space:
    • Performance : high performance (increasing functionality)
    • Area : ultra-low area (smaller die, smaller footprint)
    • Power : ultra-low power (very power-sensitive)
    • Effort : Low effort (small teams, fast turnaround time)
    • Schedule: Very short (shrinking product cycles, market windows)
    • Unit Cost : Very low (high volume, low margins)
    The mobile/wireless space works to satisfy almost impossible (and conflicting) requirements. There is no slack on any front. This is a good thing. The effect of the all-round pressure is that the techniques and methodologies used in the mobile/wireless space are required to push the state-of-the-art all the time and in all directions. Sophisticated power management, advanced mixed signal design, high-yield techniques, the list goes on and on.

    Performance-driven businesses (supercomputers, processors) usually receive the largest mindshare but it is mobile/wireless design that has been (and will be) quietly advancing ASIC design on all fronts.


    Sunday, November 09, 2008

    Changing The Rules : Transforming EDA

    It seems strange to me, as an ASIC design engineer, that the EDA industry could ever get into trouble. Paul McLellan put it best when he said that the $3 trillion electronics industry is fully dependent on the $400 billion semiconductor industry which, in turn, is fully dependent on the $5 billion EDA industry. How can an industry that is in such a position of strength be in so much trouble? It appears that, while EDA is very important, EDA companies do not have much leverage over their customers. When an industry provides an essential service or product and yet has no leverage, it's a good bet that the service or product has become a commodity. EDA is not a true commodity like, say, milk. There's still quite a bit of technological differentiation out there. The problem is that the differentiation is not enough to defend market share. There's a magic dollar number at which design companies will switch tool suites, because there's not much you can do with one tool that you can't do with another. The only exception appears to be sign-off tools, and that's because their differentiation is not technical: it is their long history of working silicon. What is to be done?

    • A change in licensing scheme or business model would not work in the long term. If the model has any merit, other EDA companies will follow suit and you're pretty much back where you started.
    • You can expand the scope of EDA to genetics or aircraft design but, eventually, those markets are going to become the same low-margin markets that EDA faces now.
    • The EDA community is gravitating to common standards (CCS, UPF, OpenAccess) and suddenly the playing field is more level than ever. The cost of switching vendors will become lower than ever.
    • Building differentiation is the key.
    • Differentiation (a.k.a. no one else can do what you do) is the only defense against commoditization.
    • Differentiation = Increased Profit Margins. When people need your product and they can't get it from someone else, that's leverage. Actually, even better, that's a monopoly.

    Wednesday, October 29, 2008

    Beyond The Text : Adding Presentations To The Tao

    A good presentation can convey a lot more than the dry text of a blog could. I'm hoping that, in some future posts, I can supplement the text of the blog with some good presentation material. I'm posting (via SlideShare) one of my SNUG presentations to start things off...

    Sunday, October 19, 2008

    Safety in Numbers : In Search of A Complete Statistical RTL2GDSII Flow

    Much has been said about the shift from corner-based sign-off to statistical sign-off. Most of the material has focused on the use of statistical timing analysis and statistical extraction to alleviate the pessimism and effort associated with corner-based sign-off. What no one has been touting is a complete statistical solution. Design in the statistical domain is a paradigm shift and cannot help but affect the entire RTL2GDSII flow. In the absence of a complete solution, there's a "lost in translation" scenario when a design transitions from corner-based implementation into statistical sign-off. Put bluntly, does it make sense to adopt statistical sign-off while the rest of the flow supports only corner-based analysis?

    The smart money says that statistical design is here to stay. In that case, the complete solution is already on its way. But, right now, we are in a no-man's land between corner-based design and statistical design. What are our choices?

    1. Short-term Gain/ Long-term Pain. Use corner-based flows until a complete statistical solution is in place. Nothing needs to change as of now. Of course, the transition will be widespread and painful when it is attempted.
    2. Short-term Pain/ Long-term Gain. Adapt non-statistical implementation tools to work with existing statistics-aware tools. The transition to a complete solution will be gradual. The pain here is that of adapting non-statistical tools to fit into a statistical design flow: lots of workarounds and custom scripts that will eventually be rendered obsolete by the arrival of EDA tools with native statistical support.


    Saturday, September 27, 2008

    Silicon Biometrics : The OCV Authentication Solution

    I never thought I'd say this: there's finally a reason to appreciate OCV, and we can all thank Verayo for it. In one of my earlier posts, I wrote about the growing problem of counterfeit ASICs (Will The Real ASIC Please Stand Up?). The variation of on-die device and metal parameters that is the bane of designers all over the globe has turned out to be useful in a most unexpected way. OCV, it seems, is the silicon equivalent of a fingerprint. Let us review the salient points about OCV:

    • On-die variations are always present in all devices
    • Although on-die variations obey statistics, individual variations are essentially random
    • The OCV solution space is huge. Assume that a single transistor can be any one of a types and a single net can be any one of b types. If you have x transistors and n nets, a single ASIC can be any one of a^x * b^n possibilities.
    • The probability of two ASICs having the exact same characteristics is so small, it is essentially zero.
    The existence of OCV is half the solution. It's sort of like saying I can uniquely identify a grain of sand by the fact that each of its trillion-odd molecules is positioned differently when compared to any other grain of sand. For the solution to be complete, there has to be a feasible way of measuring OCV (or its effects) for a given device. Hmm, why do I worry about OCV? What's the worst that can happen? I'm guessing the phrase "timing violations" is flashing in six-foot neon letters in your head right about now.

    By creating a circuit with lots of reconvergent logic and very low (or zero) margin setup and hold paths, you can observe the effects of OCV. The values captured at the output will not only change with each device but also change with the input stimulus. Depending on the stimulus, the path taken through the reconvergent cone may or may not suffer from a timing violation. By observing the outputs for a set of random stimuli, each device can be uniquely identified.
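
    To make the authentication flow concrete, here is a toy sketch of challenge-response enrollment and verification. The "device" below fakes its per-instance variation with a random seed; real PUF hardware derives its response bits from actual silicon variation (and needs tolerance for noisy responses), and Verayo's exact scheme is not public, so every name and detail here is an invented illustration.

```python
# Toy challenge-response authentication in the style of a PUF flow.
# The per-device variation is faked with a random seed; a real PUF derives
# its response bits from on-die variation (e.g. race outcomes through
# reconvergent paths) and needs fuzzy matching for noisy responses.

import hashlib
import secrets

class FakePufDevice:
    def __init__(self):
        # Stand-in for the device's unique, unclonable physical variation.
        self._variation = secrets.token_bytes(32)

    def respond(self, challenge: bytes) -> bytes:
        # Real hardware: apply the challenge as stimulus, capture flop outputs.
        return hashlib.sha256(self._variation + challenge).digest()

def enroll(device, n_challenges=16):
    """Record challenge-response pairs in a trusted setting (e.g. at test)."""
    pairs = []
    for _ in range(n_challenges):
        challenge = secrets.token_bytes(8)
        pairs.append((challenge, device.respond(challenge)))
    return pairs

def authenticate(device, crp_database):
    """Later, in the field: replay a stored challenge and compare responses."""
    challenge, expected = secrets.choice(crp_database)
    return device.respond(challenge) == expected

genuine = FakePufDevice()
counterfeit = FakePufDevice()          # different silicon -> different responses
db = enroll(genuine)
print(authenticate(genuine, db))       # True
print(authenticate(counterfeit, db))   # False (with overwhelming probability)
```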

    If you want to know more about this technology, its official name is "Physically Unclonable Functions (PUF)". For the academically oriented, the home page of Professor Srini Devadas (MIT / Verayo co-founder) has download links for all his published PUF papers.


    Sunday, August 17, 2008

    The Hidden Factory : First Time Yield In The ASIC Design Flow

    The Six Sigma methodology is widely used to redesign processes to be more efficient. The way Six Sigma works is to redesign a process to all but eliminate defective outputs. To achieve Six Sigma, a process must create no more than 3.4 defects per million opportunities. Why the focus on defects?

    • Defects = Waste: When a process creates a defective output, all the effort and material invested in that defective part is essentially wasted.
    • Defects = Rework: When an intermediate stage produces a defective output, the process for that stage is repeated to produce a correct output.
    Focusing on eliminating defects in a systematic manner allows a process to be both more efficient as well as produce outputs of consistently high quality.

    One of the key measures of process quality used in Six Sigma is First Time Yield (FTY). The first time yield of a single stage is the probability that a correct output is created if the stage is run exactly once. The first time yield of a process is the product of the first time yields of its stages.

    FTY is great for identifying priority areas for redesign. Think of a simple process consisting of two stages: A and B. The FTY of A is 50%. The FTY of B is 100%. The FTY of the entire process is 50% (100% * 50%). Stage B is perfect but Stage A is bringing down the FTY of the process. Another benefit of FTY is that it identifies "hidden factories". What if stage A is followed by a quality check stage that mandates a rerun of A in case of defective outputs? If we were to insert such a QC stage between A and B, the process as a whole would appear to have 100% yield. But stage A would have to run twice, on average, to produce a correct output for a given input. When you view the process as a black box, you would not see these stage A iterations. For this reason, these iterative loops are called hidden factories. So, a process of multiple stages can produce a million correct outputs for a million correct inputs and still not be a good process.
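
    Here is a quick numeric sketch of the same bookkeeping, with made-up per-stage yields: process FTY is the product of the stage FTYs, and the hidden-factory cost of a stage is the expected number of runs, 1/FTY, needed to push one good output past its QC loop.

```python
# First-time-yield arithmetic for a toy flow. Stage yields are invented;
# the point is the bookkeeping: process FTY is the product of stage FTYs,
# and a stage wrapped in a rework loop runs 1/FTY times on average.

from math import prod

stage_fty = {           # probability each stage passes on the first run
    "synthesis": 0.80,
    "scan_insertion": 0.95,
    "place_and_route": 0.50,
    "signoff_timing": 0.70,
}

process_fty = prod(stage_fty.values())
print(f"one-shot probability for the whole flow: {process_fty:.1%}")

for stage, fty in stage_fty.items():
    expected_runs = 1.0 / fty           # hidden-factory iterations, on average
    print(f"{stage:16s} FTY {fty:.0%} -> {expected_runs:.2f} runs per good output")
```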

    Consider your ASIC Design Flow in this context:
    • What are the chances that a design will go through your ASIC Design Flow in one shot?
    • What are the chances that a particular flow (synthesis, scan insertion, ...) will go through in one shot?
    • Which flow is bringing you down?
    • Where are the hidden factories?
    • What are you going to do about it?

    Friday, July 04, 2008

    Design For Flexibility : Deep Data and Function Access Is A Must For EDA Tools

    I remember reading somewhere that state-of-the-art EDA tools always lag state-of-the-art design practices in the ASIC industry. Before an EDA tool or feature is created to solve a problem, an engineer in an ASIC design company has already figured out how to use existing EDA tools to accomplish the same thing. The solution in the engineer's case could be a simple workaround or, sometimes, a complete tool/script built using the EDA tool's command language. The key element of success in such endeavors is the EDA tool's support for user access to low-level functions and data structures. The argument here is that, when EDA tools support user access to low-level data structures and functions, it is a win-win situation.

    • Unintended Uses: EDA tools are like goldmines. For example, a static timing analysis tool contains within it robust high-speed parsers (Verilog, SDF, SPEF...) and network analysis functions. When the EDA tool architect allows access to these functions (and the resultant data structures), he or she opens the door to a wide range of uses for the tool. The design engineer does not have to write his or her own functions and can leverage the functions from EDA tools (win). Increased usage of the EDA tool implies an increased license requirement, which leads to increased revenue for the EDA company (win).
    • Advancing the State Of The Art : The only thing constant in the industry is workarounds. No matter how sophisticated the tool, someone somewhere wants something that the tool doesn't support yet. This is a good thing. Before there was a synthesis tool that supported clock-gating, designers with clock-gated designs would adapt an existing synthesis tool to support clock-gating through workarounds and scripts. Eventually, a future generation of the synthesis tool would have native support for clock-gating, utilizing, in no small measure, the learnings from these tools and scripts. The design engineer is free to utilize existing tools for advancing design methodologies (win) while EDA companies learn from these trailblazers to improve their tools (win).
    • Maintain the Status Quo : Switching EDA vendors is particularly painful. Along with the switch, a lot of tool-related know-how (scripts, methodologies, workarounds, known issues...) is invalidated. The design engineer is forced to relearn and redo everything. Obviously, this switch is just as painful to the EDA vendor as it constitutes a loss of current and future revenue from the concerned ASIC design company. Apart from performance, one of the usual suspects that forces a switch is a gap in the EDA tool's capabilities. When the tool cannot be tweaked, cajoled or coerced to meet the designer's requirements, the engineer has no choice but to go shopping. By providing deep access, the EDA tool can be made to meet the designer's requirement. The design engineer does not lose time ramping up on a different tool (win) and the EDA company does not take a revenue hit (win).

    Thursday, June 19, 2008

    (Even More) Useful Skew : Teklatech Adapts Useful Skew Concepts To Close Dynamic IR

    Zero clock skew is not a necessity. As long as timing can be closed, clock skew is immaterial. This kind of thinking is what gave birth to the concept of useful skew. The use of useful skew in contemporary design is mostly restricted to timing closure. In this method, the arrival times of clock edges at the launch and sink registers of critical paths are changed to increase the effective clock period available to the critical path. Since different registers then have different clock latencies, one side-effect of this optimization is that not all your registers transition at once.

    But is there a benefit to having your registers transition at different times? Enter dynamic IR (stage left). When a large number of transitions occur in a very short period of time (right after the active clock edge, for example), the power network is not in a position to supply such a large amount of current in such a short space of time. The result: large, localized instantaneous voltage drops.

    Teklatech's FloorDirector uses useful skew concepts to spread out the register transitions by playing around with the clock edges and thus reduces the dynamic IR drop problem without affecting timing. Elegant, huh?
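
    Here is a toy numeric sketch of why spreading clock arrivals helps. Each register is modeled as drawing a short, fixed current pulse after its clock edge; staggering the edges leaves the total charge unchanged but lowers the peak instantaneous current the power grid has to deliver. All numbers are invented, and this illustrates only the principle, not how FloorDirector works.

```python
# Toy model: N registers each draw a short current pulse after their clock
# edge. Skewing the edges spreads the pulses out, lowering the peak current
# demand (and hence dynamic IR drop) without changing the total charge.
# All values are illustrative.

def peak_current(arrival_times_ps, pulse_width_ps=50, pulse_amp_ma=0.2,
                 step_ps=5):
    """Peak of the summed current waveform, sampled every step_ps."""
    horizon = int(max(arrival_times_ps) + pulse_width_ps)
    peak = 0.0
    for t in range(0, horizon + 1, step_ps):
        i_total = sum(pulse_amp_ma
                      for a in arrival_times_ps
                      if a <= t < a + pulse_width_ps)
        peak = max(peak, i_total)
    return peak

n_regs = 1000
zero_skew = [0] * n_regs                               # everyone switches together
spread = [(k * 400) // n_regs for k in range(n_regs)]  # edges spread over 400 ps

print("zero-skew peak :", peak_current(zero_skew), "mA")
print("spread peak    :", peak_current(spread), "mA")
```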


    Out Now! : TSMC Reference Flow 9.0 Is Now Available

    The TSMC Reference Flow 9.0 is available for download from TSMC-Online. Eye-catching items include:

    • DFT
      • Using E-fuse for MBIST
      • Failure Analysis
      • Low-power ATPG
    • Really Advanced CTS
      • CTS for Dynamic IR
      • CTS for Low-power
      • Multi-Mode Multi-corner CTS
    • Low-power
      • Low power automation with UPF
    • Statistical Design
      • {LPC, CAA, VCMP} --> {Timing, Power, Leakage} Flows


    EDA BlogRoll : All The Blogs In One Place

    The EDA Blogroll is an excellent resource to keep yourself up-to-date on the EDA/VLSI blogosphere. Not just links! It comes with a built-in RSS reader, too. Check it out.



    Thursday, June 12, 2008

    SNUG 2008 : Registrations Open

    In case you're a Synopsys customer in Bangalore, registration for SNUG2008 is now open. Why, Aditya, thank you for that perfectly selfless propagation of useful information with no ulterior motives....

    NOT!

    If you can, do try and attend my presentation (in the Synthesis & Test track) on the 10th of July.

    Register Cloning For Accelerated Design Closure

    Multiple technologies exist to achieve timing closure on critical paths. One such technology, clock skew optimization, changes the arrival of clock edges at the launch and sink registers to increase the effective clock period of the critical path. Standard clock skew optimization does not necessarily utilize the full slack available at the input of a register, only the amount required to resolve the setup violations on paths from the register. If clock skew optimization were to utilize the input slack to the fullest extent towards creating a large setup slack on the erstwhile critical path, it could accelerate setup timing closure by letting the tool concentrate on other paths in the design. However, the process could also introduce a large number of hold violations on other paths from the register with low hold slack, due to the early launch of data. By having separate clone registers for setup and hold paths, one can fully utilize the input slack to launch registers for accelerating timing closure while limiting the resultant hold violations. Since cloned registers are exact copies of the original register, the impact of register cloning on verification and ECO methodology effort is minimized. In this paper, a methodology will be presented to identify cloning candidates, insert clone registers and verify the final design against the un-cloned input.
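
    As a rough illustration of the slack bookkeeping involved (a toy model, not the methodology of the paper), the sketch below advances a launch register's clock by the setup slack available on its input paths, then splits the fanout between a clone on the advanced clock (for setup-critical paths) and a clone left on the original clock (for paths that would otherwise go hold-negative). All slack values and instance names are invented.

```python
# Toy slack bookkeeping for register cloning with clock skew optimization.
# Advancing a launch register's clock by d picoseconds:
#   * adds d to setup slack on paths launched FROM it,
#   * subtracts d from hold slack on those same paths,
#   * subtracts d from setup slack on paths captured INTO it.
# The advance is therefore capped by the worst input setup slack. Fanout
# endpoints that would go hold-negative stay on a clone driven by the
# original (un-advanced) clock. All slack values below are invented.

input_setup_slack_ps = [120, 95, 300]        # paths captured by the register
output_paths = {                             # endpoint: (setup_slack, hold_slack)
    "u_alu/sum_reg":   (-80, 150),           # the critical setup path
    "u_dec/valid_reg": (200,  60),
    "u_fifo/wptr_reg": (500,  30),
}

advance_ps = min(input_setup_slack_ps)       # largest safe clock advance
print(f"advance launch clock by {advance_ps} ps")

setup_clone_fanout, hold_clone_fanout = [], []
for endpoint, (setup, hold) in output_paths.items():
    if hold - advance_ps < 0:
        # Early launch would break hold here: keep this path on the
        # original-clock clone.
        hold_clone_fanout.append(endpoint)
    else:
        setup_clone_fanout.append(endpoint)
        print(f"{endpoint}: setup slack {setup} -> {setup + advance_ps} ps")

print("on advanced-clock clone:", setup_clone_fanout)
print("on original-clock clone:", hold_clone_fanout)
```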



    Wednesday, June 11, 2008

    Everything's Connected : An Opportunistic View On Butterfly Effects In Physical Design

    The emergence of physical synthesis technologies (such as DC-Topographical from Synopsys and First Encounter SVP from Cadence) is usually attributed to the following:

    • Increasing interconnect delay:gate delay ratio
    • Imprecise prediction of wire delays using wire load models
    While the above is certainly true, it does not seem to be the whole story. It seems to me that the real problem is that there are additional cost functions that cannot possibly be accounted for without placement and global routing. When it comes to logical synthesis (without physical information), tools have no choice but to optimize paths independently. If there is no combinational connection between two paths, synthesis is in no position to perform any trade-offs between those two paths. A cost function combining timing, area and power is applied to each path independently.

    When it comes to physical design, two additional cost functions come into play:
    • Placement : How can the cells in the design be placed to optimize area, timing and power?
    • Routing: Can this placed design be routed?
    Placement and routing put a new spin on things because hitherto unrelated paths (as seen by logic synthesis) start to affect each other through these functions. The logical synthesis cost function has zeros where placement and routing ought to be. In other words, synthesis assumes that each cell in the design will be placed perfectly. The reality, of course, is far removed from this utopian vision. The best metaphor I can think of for this phenomenon is crosstalk. Can your synthesis tool synthesize for crosstalk avoidance? No. There is no correlation between logical connectivity and aggressor-victim relationships. You have to find the aggressors and victims after detailed routing.

    But the problem can also be viewed as an opportunity. Since physical design causes un-related paths to interact, one can play around with non-critical paths to close the design without directly optimizing your critical paths. Like everything else, it pays to see the big picture.



    Sunday, May 04, 2008

    Force Multipliers : A Paradigm for Technology and Tool Development

    The concept of force multipliers originated in military science. Force multipliers are fulcrums that allow you to have an effect far out of proportion to the amount of resources employed. Radar is a good example of a force multiplier. An air force that uses radar will be able to successfully attack or fend off a much larger force that does not have the benefit of the technology, simply by being able to track its opponents in the battlefield. Sometimes, the force multiplier does not physically exist. For example, the coordinated use of air and ground forces as a blitzkrieg is more effective than an uncoordinated force of the same size. The blitzkrieg tactic is the force multiplier in this case.

    It seems to me that viewing tools and technologies as force multipliers is a good paradigm for guiding technology and tool development.

    1. Force multipliers do not replace; they complement. Did they disband the air force after inventing radar? The objective of tool development in a force multiplier context should not be to create a software version of your engineer. It's about creating a tool that will allow them to control a lot with very little. In a design environment such as Pyramid or IC-Catalyst, the engineer is in control but the environment takes care of all the small stuff (generating scripts, checking reports, firing jobs...). An engineer using a design environment is in a position to accomplish a lot with very little effort.
    2. Force multipliers need not be complex; just effective. Creating and supporting a complete design environment is a lot of work. Plus, there's always the chance that you're over-solving the problem. Small utilities addressing the right issues in the flow can add value with much less effort. Think scalpels, not broadswords.
    3. Force multipliers need not physically exist. A stable design methodology does not physically exist, and yet it guides the engineer in the direction that will produce the optimal result in the shortest time.
    4. Force multipliers stack. The effect of multiple force multipliers is not the sum of their individual effects but their product. For example, a good hierarchical methodology and a good timing fix utility employed together will accelerate timing closure beyond the sum of their individual effects.


    Wednesday, February 27, 2008

    Distributed EDA Meets Accelerated Hardware: Gauda's OPC Points The Way

    In Divide And Conquer, I made some arguments for the inevitable dominance of distributed algorithms in the EDA industry. In Need For Speed, I argued for, among other things, using graphics card-based acceleration for EDA (Nvidia's CUDA technology).

    What happens when these two get together? Gauda's new optical proximity correction (OPC) tool. The tool not only uses existing graphics cards (from the likes of Nvidia and the erstwhile ATI) but also uses sophisticated distributed algorithms to accelerate OPC by up to 200x (really? 200x??). Rather than a flash in the pan, I'd say Gauda is the pioneer in a direction that is soon to be well-traveled by the EDA biggies.


    Saturday, February 23, 2008

    Will The Real ASIC Please Stand Up? : The Brave New World Of Counterfeit ASICs

    Just last week, Reuters reported that counterfeit components worth $1.3 Billion were seized in a joint operation by the US and the EU. Chew on these stats:

    • These counterfeits must be pretty sophisticated. Biggies like Intel, Philips and Cisco are not exactly known for making great op-amps.
    • Counterfeiters are going after the big-ticket items. If 360,000 parts were seized, that puts the average value of the components seized at $3600!
    Counterfeit ASICs bring us face to face with an altered reality. You can fake a watch, a perfume or even clothing, but an ASIC? Counterfeits used to be something Nike and Armani worried about, not ASIC design companies. Clearly, we're not in Kansas anymore. The big questions are:
    • Do these counterfeits actually work??
    • How are ASICs reverse-engineered?
      • Do they use the datasheet/spec?
      • Do they obtain the GDSII?
      • Do they strip the die layer by layer?
    • Can we prevent an ASIC from being faked?
      • If the datasheet and the chip are out in the real world, can we prevent a copy?
    • Can we authenticate an ASIC beyond doubt?
      • What prevents them from copying that, too?

    Friday, February 15, 2008

    The EDA Universe : Now in PDF!

    EDA DesignLine has published the PDF format of Gary Smith's EDA charts! You can read IC Design Tools Vendors Reference Chart (@ EDA Designline) to get the back story on this. The PDFs can be downloaded from the following links:



    Thursday, February 07, 2008

    I Coulda Been A Contender : Some VLSI Ideas I Wish I Had Had (First)

    Getting our plan out for the new year got me thinking about issues in the ASIC flow and what could (or should) be developed to make the flow better. It could be a new technology, a new tool or even a new flow. The hardest part of this is being able to break out of the box and approach issues from a new angle. Naturally, at such times, I look back at some ideas I heard about and think "I wish I'd thought of that!" It's not that these ideas made their inventors rich, but they did make me sit up and take notice because of their refreshingly different viewpoint. Here's hoping they make you think, too.

    • Wire Tapering : Minimize the delay of a wire by controlling its shape. Rather than the usual rectangular shape, the optimal shape (in terms of delay) for an interconnect is exponentially tapered: the wire is thickest near the driver and thinnest near the sink. In a regular net, the capacitance of the end section of the wire is the same as that of the initial section. Since the end capacitance has to be driven through a large resistance (basically the resistance of the whole net), the driver "sees" this end capacitance as a large load. By tapering the interconnect, the capacitive load decreases with distance from the driver. Thus, the overall load seen by the driver for a tapered net is less than for a rectangular one (see the sketch after this list).
    • Configurable Processors : Adapt a processor's instructions to match the application. Most processors are jacks of all trades but masters of none. Though processor cores are not particularly good at anything, using them for implementing features has its advantages. There's low risk of functional bugs within the core itself. Fixing bugs and adding features is as easy as updating the firmware. This is the low-performance/low-effort solution. Custom RTL, on the other hand, is the high-performance/high-effort solution. Implemented features have high performance but verification requires a lot of effort, and bug fixes and additional features will at least require a respin. Configurable processors allow you to get the best of both worlds. By implementing an instruction set tailored to the end application, the performance of the core is increased while verification, feature updates and bug fixes are still relatively easy.
    • Channel-less Floorplan: Why not just route top-level signals through the block? Look at the channels in your hierarchical floorplan (the gaps between the blocks). You need these channels to be able to route top-level signals from block to block. Usually, these signals don't undergo logical transformations at the top level, so it's pretty much getting the signal from point A to point B using repeaters. In a channel-less floorplan, the repeaters don't go around or over the block; they go through it. The idea is to create feedthroughs in blocks such that a signal can use these feedthroughs to get to the other side of the block faster. The benefit is that you save the die area that was previously dedicated to channels.
    • Asynchronous ASIC Design: What if there were no clocks? Clocks are nothing but synchronization signals. At each clock edge, you are guaranteed that the input data is stable and valid. The problem with having a clock is that your design can run only as fast as the worst path in the design. Even if 99.99999% of paths run at 1 ns, the last path running at 2 ns requires you to run the entire design at 500 MHz. Wouldn't it be great if the performance of the entire device did not depend on the worst path in the design? That's where asynchronous design comes in. Asynchronous ASIC designs do not require a clock. Instead of clock-based synchronization, these designs use handshaking, semaphores and other methods to exchange data. Each part of the device runs as fast as it can, and the effective frequency of operation is no longer determined by the worst path in the design. Other benefits of asynchronous design include lower power dissipation (no clock trees) and fewer dynamic IR drop issues (without clocks, transitions in the design are randomized in time).
    • Analog Computing: Forget binary and use (semiconductor) physics. There's something unnatural about digital design. In a world where everything sentient is analog, should computing be any different? The idea behind analog computing is to use physical laws for computation. Suppose you want to build an adder: currents proportional to the numbers to be added are passed through a resistance, and the voltage drop across the resistance is proportional to the sum. The square-law characteristic of the MOS transistor in saturation is used to create computation devices such as multipliers. By piggy-backing onto the mathematics of physical effects, it is possible to use analog for computation at a fraction of the delay, area and power of digital circuits.
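
    For the wire-tapering idea above, here is a minimal Elmore-delay sketch comparing a uniform wire against an exponentially tapered one of the same length and source width, each modeled as a chain of RC segments (area capacitance only). The per-segment R and C constants and the taper factor are invented; the point is simply that the high-resistance far end of the tapered wire carries much less capacitance.

```python
# Elmore delay of a wire modeled as N RC segments: each segment's resistance
# scales as 1/width and its capacitance scales with width (area cap only).
# delay = Rdrv*Ctotal + sum_i R_i * (capacitance at and beyond segment i).
# All constants are illustrative, not from any real process.

R_DRV = 100.0      # driver resistance, ohms
R_SEG = 0.5        # ohms per segment at unit width
C_SEG = 2.0e-15    # farads per segment at unit width
N_SEG = 50

def elmore_delay(widths):
    r = [R_SEG / w for w in widths]         # per-segment resistance
    c = [C_SEG * w for w in widths]         # per-segment capacitance
    delay = R_DRV * sum(c)                  # driver charging the whole wire
    downstream = sum(c)
    for ri, ci in zip(r, c):
        delay += ri * downstream            # this R sees all cap at/after it
        downstream -= ci
    return delay

uniform = [1.0] * N_SEG                             # constant width
tapered = [1.0 * 0.95 ** i for i in range(N_SEG)]   # exponential taper

print(f"uniform wire : {elmore_delay(uniform) * 1e12:.2f} ps")
print(f"tapered wire : {elmore_delay(tapered) * 1e12:.2f} ps")
```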


    Monday, January 14, 2008

    What, Me Worry? : Is Your Design Team DFT-aware?

    This post came about after reading John Ford's post about the value of a DFT education for designers. Forget appreciation, DFT is usually the least-understood specialization in the ASIC design flow.

    In my opinion, DFT is not that easy to "get" primarily because it requires a viewpoint different from synthesis, timing and place-and-route. In those fields, you usually worry about timing, area and power. It's easier to apply your knowledge from synthesis to PNR. In DFT, you worry about fault coverage, controllability and observability; concepts alien to the rest of the ASIC design flow. It gets worse. More often than not, project and design managers are from a place-and-route, synthesis or timing background. Not many DFT engineers managing projects out there. Ergo: A clear understanding of DFT is lacking at most higher levels of the engineering hierarchy. Is it any wonder that DFT is the least of the design team's priorities?

    What are your options? This list is AND'ed not OR'ed (do (1) AND (2) AND (3)...).

    • Have a set of non-negotiable DFT rules that everyone has to follow. This would include ensuring controllability of resets and clocks, etc. I think it's important to have a list that is not open to negotiation (or else the DFT engineer ends up having to explain each and every rule to each and everyone). This is not an enlightened approach but it saves both DFT and non-DFT groups time and energy.
    • Have a DFT checklist like John's that can be used by all designers to ensure DFT compliance. This list would be a checklist form of the aforementioned non-negotiable rules.
    • Use a tool like Atrenta's Spyglass-DFT to check your RTL for DFT issues. Automated approaches to verifying DFT compliance are far less error-prone than looking through millions of lines of uncommented code.
    • Have a very well-defined handoff mechanism between DFT and other functions to ensure minimum confusion for the non-DFT folks. The idea is to give them what they can use without knowing the first thing about DFT. Don't explain the differences between scan shift, scan capture and scan transition modes to the STA engineers; give them a set of appropriate constraints (these come straight out of Tetramax, if you have it) that they can use as-is. Don't explain how to identify scan chains for re-ordering to the PNR engineers; give them a scandef file that can be read by both Synopsys and Magma tools.