    Monday, December 31, 2007

    Will It Blend? : Combining ASIC, FPGA and Structured ASIC On A Single SoC

    The availability of structured ASIC IP for use within a standard SoC creates yet another option for ASIC design houses seeking to balance NRE, per-unit cost and time-to-market. You can see the announcement by ChipX here. If this trend keeps up, design implementation will become a blend of technologies, with designers choosing the optimal mix of FPGA, structured ASIC and standard-cell fabric to hit their target NRE-UC-TTM sweet spot. There was at least one company, Leopard Logic, that created FPGA IP that could be embedded into a standard SoC. It's just a short hop from there to a company offering all three technologies on the same die.

    Think of the advantages that each approach brings to the table:

    • FPGA : High configurability, fast time-to-market, low performance
    • Standard cells : Zero configurability, slow time-to-market, high performance
    • Structured ASIC : Medium configurability, medium time-to-market, medium performance
    Can we have the advantages of all three if an SoC is built using all three technologies? Just flip the equation.
    • Blocks that require configurability but can live with low performance : use FPGA IP
    • Blocks that require high performance : use Standard cells
    • Blocks that require medium configurability but only medium performance : use structured ASIC IP
    Are we heading towards the universal ASIC design flow that can handle all three technologies on a single die?
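
    To make that flipped equation concrete, here's a minimal sketch of the selection rule (the block names, the low/medium/high scale and the decision order are invented for illustration):

        # Hypothetical illustration: pick an implementation fabric per block
        # based on its configurability and performance needs.

        def pick_fabric(configurability, performance):
            """Return 'FPGA', 'structured ASIC' or 'standard cells'.
            Both arguments are one of 'low', 'medium' or 'high'."""
            if performance == "high":
                return "standard cells"       # performance trumps configurability
            if configurability == "high":
                return "FPGA"                 # must stay field-configurable
            return "structured ASIC"          # the middle ground

        blocks = {
            "protocol_engine": ("high", "low"),      # likely to change post-tapeout
            "cpu_core":        ("low", "high"),      # fixed ISA, speed-critical
            "dma_controller":  ("medium", "medium"),
        }

        for name, (cfg, perf) in blocks.items():
            print(f"{name:16s} -> {pick_fabric(cfg, perf)}")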



    Thursday, December 20, 2007

    Death, Taxes and ECOs : The Inevitability Of Functional Errors Is An Opportunity

    There was a great article in EETimes a couple of weeks back titled Virtually Every ASIC ends up as an FPGA that I'd summarize thus:

    • Functional verification is becoming more of a problem as more and more transistors are crammed onto a die
    • Simulation is slowly becoming a non-viable option for functional verification (both in runtime and in test vector generation)
    • FPGA prototypes of your design allow you to apply millions of vectors per second (since you can run the FPGA at high speed) and debug issues much faster.
    • FPGA is the new testbench.
    Here's a thought:
    • How can an RTL-to-GDSII flow add and extract value in a world where FPGAs are used for functional verification?
    • Is that really necessary? Yes. According to the article, 60% of ASIC respins are to fix logical/functional errors and 90% of all ASICs are prototyped as FPGAs.
    If you could tune your flow to exploit this reality, you have an advantage executing (or bidding for) 60-90% of all ASIC designs!


    Friday, December 14, 2007

    How to Build a $4.5 Million Core2Duo : From The Book Of Random

    This really puts our work in perspective...


    Tuesday, December 11, 2007

    The ASIC Factory (Complete) : The Toyota Way For Fabless ASICs

    Reading The Toyota Way by Jeffrey Liker got me thinking about the benefits of bringing manufacturing discipline (low cost, high quality, predictability, etc.) into the realm of ASIC design. For those who don't know, The Toyota Way is Toyota's management philosophy. The production system reflecting that philosophy allows Toyota to consistently rank among the best companies in the world. It's hard to comprehensively describe a philosophy in words, but it is generally accepted that there are 14 principles that capture the essence of the Toyota Way. How can these 14 be applied to ASIC design? My thoughts:

    #1. Base your management decisions on a long-term philosophy, even at the expense of short-term financial goals.

    Focus on core competencies and don't waste too much energy pursuing multiple courses of action. This would apply to designs too. Trying to be good at everything from low-power wireless designs to high-performance multi-core processors is a recipe for disaster. You could extend this to methodologies or even EDA tools. Streamline. Focus. On the flip side, don't let the lack of a large current market prevent you from pursuing technologies or products that would have a great future market. Lastly, distinguish between the two (easier said than done but someone has got to say it ;) ).

    #2. Create a continuous process flow to bring problems to the surface.

    Have a methodology and design process that is transparent and efficient. It will allow you to easily spot problems in the flow. Minimize idle time and non-value added work. In the course of work, a design engineer:

    • writes a script
    • checks the syntax
    • executes the script
    • waits for the results
    • opens some reports
    • checks specific parameters (slack, perhaps)
    Writing the script and checking the slack are pretty much the only steps that really add value. Everything else is a waste of the engineer's time. What are you doing about it?
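
    As a sketch of what "doing something about it" could look like, here's a hypothetical wrapper that folds the middle four steps into one command, leaving the engineer only the two value-adding ends. The report format, the slack-line regex and the use of a shell script are all assumptions, not any particular tool's interface:

        import re
        import subprocess
        import sys

        def run_and_summarize(script, report="timing.rpt"):
            """Hypothetical wrapper: syntax-check a script, run it, then pull
            the worst slack out of the report so nobody reads it by hand."""
            # 1. Syntax check (here: ask the shell to parse without executing).
            check = subprocess.run(["bash", "-n", script],
                                   capture_output=True, text=True)
            if check.returncode != 0:
                sys.exit(f"syntax error in {script}:\n{check.stderr}")

            # 2. Execute and wait (the waiting the engineer no longer does).
            subprocess.run(["bash", script], check=True)

            # 3. Open the report and extract the one parameter that matters.
            slacks = []
            with open(report) as fh:
                for line in fh:
                    m = re.search(r"slack\s*\(\w+\)\s*(-?\d+\.\d+)", line)
                    if m:
                        slacks.append(float(m.group(1)))
            print(f"worst slack: {min(slacks):.3f}" if slacks
                  else "no slack lines found")

        if __name__ == "__main__":
            run_and_summarize(sys.argv[1])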


    #3. Use "pull" systems to avoid overproduction.

    In "pull" systems, the control flow is backward not forward. Each stage "pulls" its required inputs from previous stages rather than having inputs "pushed" onto it. You replenish items that are being depleted (i.e sold) rather than creating items in set proportions. "Pull" processes avoid overproduction by generating items in response to consumer demand. Further, note the complete lack of upfront scheduling. One of the advantages of "pull" systems is that they are self-scheduling. Each stage signals the previous stage in such a manner that the end goal is reached just in time.

    In an ASIC design context, the translation of the principle is that each stage of the ASIC design flow "pulls" data from previous stages. For example, place-and-route "pulls" data from scan insertion. There is no point in rushing through scan insertion if the scan-inserted netlist is not going to go through place-and-route for the next few days. The DFT engineer does not need to create a scan-inserted netlist until the PNR engineer is ready to use it. Prioritizing is simplified, too. When juggling multiple projects, the DFT engineer simply works on the design that flags him or her first. In such a system, each engineer will work on high-priority items first and the project will progress at the right pace and complete on time.
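
    A toy model of that pull behaviour, using the stage names from the example above (the actual "work" at each stage is stubbed out; nothing here resembles a real flow manager):

        # Toy "pull" flow: each stage produces its output only when a
        # downstream stage asks for it, so nothing is generated ahead
        # of demand and the flow self-schedules.

        class Stage:
            def __init__(self, name, work, upstream=None):
                self.name, self.work, self.upstream = name, work, upstream
                self._result = None

            def pull(self):
                if self._result is None:          # produce on demand only
                    inputs = self.upstream.pull() if self.upstream else None
                    print(f"{self.name}: pulled from "
                          f"{self.upstream.name if self.upstream else 'spec'}")
                    self._result = self.work(inputs)
                return self._result

        synthesis = Stage("synthesis",       lambda _: "netlist.v")
        scan      = Stage("scan insertion",  lambda n: f"scan_{n}", synthesis)
        pnr       = Stage("place-and-route", lambda n: f"routed_{n}", scan)

        # Nothing runs until the last stage needs its inputs:
        print(pnr.pull())   # triggers scan insertion, which triggers synthesis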


    #4. Level out the workload (heijunka). (Work like the tortoise, not the hare).

    Leveling out the workload has multiple benefits. Since the load on resources is close to constant, it is easy to predict demand and plan accordingly. As resources are closely matched to load, there is optimal utilization of resources as well. In organizations where the load varies wildly, the management will have to plan for the worst case. The problem is that these acquired resources will never be fully utilized during normal periods. When you level out the workload, transitional peak requirements are correspondingly low. You will require less "margin" when it comes to tools, machines and even people. One of my previous posts, It's Not What You've Got, has some details on application of heijunka for machines and licenses.
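
    A back-of-the-envelope illustration of the "margin" argument, with invented license numbers: three projects whose place-and-route phases either stack up in the same weeks or are leveled across the schedule:

        # Invented numbers: weekly P&R license demand for three projects.
        stacked = [            # all three hit P&R in the same weeks
            [0, 0, 8, 8, 0, 0],
            [0, 0, 8, 8, 0, 0],
            [0, 0, 8, 8, 0, 0],
        ]
        leveled = [            # same total work, staggered by the schedule
            [8, 8, 0, 0, 0, 0],
            [0, 0, 8, 8, 0, 0],
            [0, 0, 0, 0, 8, 8],
        ]

        for name, plan in (("stacked", stacked), ("leveled", leveled)):
            weekly = [sum(week) for week in zip(*plan)]
            print(f"{name}: total={sum(weekly)} license-weeks, "
                  f"peak={max(weekly)} licenses")
        # stacked peaks at 24 licenses; leveled peaks at 8 for the same work.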

    #5. Build a culture of stopping to fix problems, to get quality right the first time.

    Most ASIC design projects go through concurrent evolution of RTL, floorplan and package. Sometimes, for new IP, we can expect that the IP vendor will provide multiple drops of increasing maturity as the project progresses. Given this state of affairs, does this principle imply that we must freeze RTL before running synthesis? Or that the IP must be solid before the RTL is designed? No! Quality is relative to expectations and point of view. For synthesis, high-quality IP might mean that the timing information contained in the libraries is final or close to final. This would allow synthesis to proceed without nasty timing surprises down the line. The area of the IP is immaterial; the IP need not even have a dummy placeholder LEF. For place-and-route, high quality can be taken to mean fixed IP macro size and fixed pin locations. The GDSII that matches the size and pin locations can come later, during the final DRC runs. Clearly, this principle does not preclude a concurrent evolution methodology.

    What we seek to prevent is the propagation of low quality (as defined by expectations) through the ASIC flow. Quality is a box centered around a target number (area in sq. mm, for example) and bounded on either side by tolerance limits (within 5% of final design area, for example). Anything within the box is of acceptable quality; anything outside it is of low quality. Low-quality RTL, for example, can be defined as RTL that is still missing blocks with a large area impact.
    When the designer does not fully capture the area of the design, the place-and-route strategy will suffer. When the design grows by a million gates over the space of a few iterations, your floorplan and powerplan go out of the window. In fact, you might find yourself in the unenviable position of having to switch from a flat place-and-route strategy to a hierarchical one.

    Low quality inputs lead to low quality outputs. Rather than waste time and effort downstream, do the right thing: Get quality right the first time around.
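
    The quality box reduces to a one-line gate. A minimal sketch, using the 5% tolerance from above and an illustrative area budget:

        # The "quality box": a deliverable is acceptable only if its metric
        # sits within tolerance of the target. Numbers are illustrative.

        def in_quality_box(value, target, tolerance=0.05):
            """True if value is within +/- tolerance (fraction) of target."""
            return abs(value - target) <= tolerance * target

        # e.g. block area in mm^2 against the floorplan budget:
        budget_mm2 = 4.0
        for drop, area in [("RTL drop 1", 3.2), ("RTL drop 2", 3.9),
                           ("final", 4.05)]:
            verdict = "OK" if in_quality_box(area, budget_mm2) else "REJECT: replan"
            print(f"{drop}: {area} mm^2 -> {verdict}")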

    #6. Standardized tasks and processes are the foundation for continuous improvement and employee empowerment.

    The benefits of standardization are many.
    • Standardization of processes is essential to meet quality constraints. When each engineer has his or her own way of doing things, the quality of the output varies widely. Further, there is a high probability of the introduction of errors. When you diligently follow a proven process that is known to provide quality output, the chance of errors is minimized. A standard synthesis script that is used across all projects is an example of standardization that makes it easy to meet quality constraints.
    • Standardization makes it easier to map out and subsequently meet schedules. When you have fully mapped the path from RTL to GDSII, you already have an idea of the critical execution path and how long the process will take. In fact, through standardization, scheduling methodology itself will be a well-documented process.
    • Standardization is the first step towards automation. When your processes are standardized with respect to inputs, outputs and measurable quality metrics, you already have a specification that can be used to automate the menial and repeatable sections of the ASIC design process.
    • Standard processes are required to be fully specified before one attempts any process improvement strategies. Before you can improve, you must know what it is you're improving upon. Further, when the process improvements suggested are "proven", they can be incorporated as a part of the standard design process. If you find (and prove) a way to improve simulation runtime by 15%, it is easy for everyone in the organization to reap the benefits if this improvement is incorporated into the standard ASIC design process.
    • Standardized processes that are accessible at all levels serve to empower employees. When everybody knows what needs to be done and what is important, they have a checklist against which to evaluate a situation and their proposed response. Employees are empowered because they are able to take decisions with greater independence. Further, they have the confidence that the decisions made are both right and justifiable in light of set processes.

    #7. Use visual control so no problems are hidden.

    The principle of visual control is not about a fancy GUI with lots of bells and whistles. It's about providing the user the easiest possible way to check whether their work is on track or not. The idea is: summarize, summarize, summarize. Create indicators that make it easy to take decisions. It's the difference between an IR drop text report and a heat map. The former is accurate but the latter is a more effective indicator. But don't think you have to draw pictures all the time. A simple summary table that captures the essence of a 10000-line timing report is also a visual indicator in this context. Effectively summarizing a report is one level of visual control. What about the entire design project? Imagine, for example, an active version of your ASIC methodology flow diagram. As tasks get completed, boxes turn green in real time. Perhaps the size of the boxes could be indicative of the time taken for a task. In one look, the project manager knows where the project is and what needs to be done.

    Be sure to tune the visual indicators to match the target end-user. Unlike the ASIC flow visual used by the design manager, a timing engineer would see a summary table with violations categorized by slack, number of violations, violation clock groups, etc. For someone at the CEO level, all they'd see is a progress bar saying 'project 65% complete'.
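
    As a sketch of that timing summary table, here's a hypothetical report condenser. The line format it parses is invented; the regex would need to match whatever your STA tool actually emits:

        import re
        from collections import defaultdict

        def summarize_timing(report_path):
            """Hypothetical visual control: boil a huge timing report down
            to a per-clock-group violation table."""
            # Expected (invented) line shape: "clk_ddr endpoint/foo slack -0.31"
            pat = re.compile(r"^(\S+)\s+\S+\s+slack\s+(-\d+\.\d+)")
            table = defaultdict(lambda: {"count": 0, "worst": 0.0})
            with open(report_path) as fh:
                for line in fh:
                    m = pat.match(line)
                    if m:
                        clk, slack = m.group(1), float(m.group(2))
                        table[clk]["count"] += 1
                        table[clk]["worst"] = min(table[clk]["worst"], slack)

            print(f"{'clock group':<12} {'violations':>10} {'worst slack':>12}")
            for clk, row in sorted(table.items(), key=lambda kv: kv[1]["worst"]):
                print(f"{clk:<12} {row['count']:>10} {row['worst']:>12.3f}")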

    #8. Use only reliable, thoroughly tested technology that serves your people and processes.

    Stable processes built on stable technologies and tools are what enable a fabless ASIC company to deliver quality products on a predictable schedule. Given this, there have to be some very compelling reasons for a company to migrate to newer technologies, processes and tools.
    • Will it fit into and improve our processes?
      If no, you can stop right here. The tools serve the process and not the other way around. Before anything else, ensure that your new tools will fit into or improve your processes.
    • Will it benefit our engineers?
      Even the most technologically advanced tool is worth nothing if it cannot be used by your engineers. Ensure the people who are actually going to use the tool think it's going to improve their lot.
    • Has it been tested by us?
      Ensure that you put the tool through its paces with meaningful tests. It's good to have a formal evaluation process and a set of testcases to measure the benefits of any new tool or technology.
    • Does it consistently offer a marked value improvement over current tools?
      It is essential that a new tool or technology offer a significant value addition over existing ones for successful adoption. Usually, there's substantial institutional knowledge (tacit or explicit) about incumbent tools and technologies. When you shift, there's a learning curve that will cause some impact in terms of productivity. Further, the value addition is what will drive your people to adopt the new technology or tool. If new tools do not deliver significant value over current ones, they're not worth using.
    • Does it require a significant rework of processes?
      Much like the previous question, some tools might require too many changes to the way you do business. Rapid change is an energizing concept for business books but suicidal when it comes to mucking around with processes.
    You don't want to become a dinosaur clinging onto processes till you become extinct. You don't want to surf on the edge of chaos, either. How does one reconcile constant improvement of processes with the if-it-ain't-broke diktat? Separate the improvement of processes from the mainstream. Live projects continue using proven technologies and processes with success. At the same time, there is a group or an effort to evaluate new technologies, tools and methodologies that have the potential to offer a value addition over current ones. If the improvements show promise after exhaustive testing, they can be incorporated into the mainstream methodology.

    #9. Grow leaders who thoroughly understand the work, live the philosophy, and teach it to others.

    If corporations are organisms, corporate culture is akin to the genes. People, like cells, come and go but the culture of the corporation remains unchanged. What determines corporate culture in the first place? As someone once wrote, things that get rewarded get done. The same goes for corporate culture: the culture that gets rewarded gets followed. This is great if your culture is the one you want. But what if you're at A and you want to get to B? Effecting cultural change is not easy but, at least, it's slow :). What is the best way to get from A to B and stay there? When you advance and reward people who live by and understand philosophy "B", you not only set in motion the change of culture from A to B but also, through positive feedback, ensure that the change sticks. Using visible and respected leaders who live, understand and spread the philosophy is one of the best ways to make this happen.

    #10. Develop exceptional people and teams who follow your company’s philosophy.

    If #9 was about getting the right "PR", this principle is about live demos. The evangelizing leaders are OK, but you have to give people something to hold up and say "See?! This stuff works!". This principle works as a two-step process. Step 1: get people and teams to work within the defined process. Step 2: get exceptional. When exceptional work is accomplished using the defined process and tools, the corporate philosophy gets reinforced in the minds of the engineers.

    #11. Respect your extended network of partners and suppliers by challenging them and helping them improve.

    When it comes to ASIC design, there are not many companies that can go it alone. A typical fabless ASIC company has EDA partners, IP partners and assembly-and-test partners. You could think of customers as partners in this context, too. Why would you want to challenge and help partners improve? Whether you like it or not, each of these partners is a co-pilot. If there's a problem with an IP macro, a project using the IP gets delayed until the IP is fixed. What if you could help your IP vendor deliver quality collateral the first time around? If you're an IP company, you probably spend a lot of time supporting customers whose IP is not integrated correctly. What if you could help your customers integrate your IP correctly the first time? Everybody wins!


    #12. Go and see for yourself to thoroughly understand the situation (Genchi Genbutsu).

    The prerequisite for any successful process improvement initiative is a good problem statement.
    This principle mandates that, in order to frame a good problem statement, you have to experience the pain firsthand. This way, your work is not directed by assumptions and guesses. You think and design from personal experience and have a solution that addresses the real problem. For example, what are the pain points in your ASIC flow? Too many ECOs? Is timing closure where your people burn most of their cycles? Or, perhaps, the last few DRCs? You'll never know until you experience it firsthand. Once you know what the problem is, the solution cannot be far behind.

    #13. Make decisions slowly by consensus, thoroughly considering all options; implement decisions rapidly (nemawashi).

    This principle is a masterful piece of social engineering. It's relatively simple to put a product project on the fast track. You create a skunkworks team and let them drive the project to completion. The problem with such an approach is that you will crash and burn when you apply it to processes. When it comes to processes affecting people, you first need to ensure that you're solving their most pressing problems. Second, you need buy-in from everyone to ensure successful adoption. That's why this principle works for processes. Counter to a skunkworks method, process design (or re-design) should be something every concerned stakeholder should have a hand in deciding. This approach provides superior solutions that also stand a better chance of large-scale adoption.
    • Since the decisions involve the key stakeholders, the outcome is probably a superior solution that addresses the real issues.
    • By having a larger number of people exploring the solution space, the quality of the solution is improved.
    • Since all stakeholders feel like they had a hand in the design, the not-invented-here issue does not arise.
    At the end of the decision-making process, there is support for the solution across the rank and file of the organization. Moreover, the solution will not undergo further changes. These two factors make possible the rapid implementation of the solution.

    #14. Become a learning organization through relentless reflection (hansei) and continuous improvement (kaizen).

    Think of the previous principles as directions. This 14th principle is the motive force that propels the organization. The 14th principle turns an organization into a living being constantly evolving to meet the challenges within and without. The principle is simple. First, everything and anything can be improved if you apply your intellect to its design. Second, incremental and continuous change at every level of the organization directed by the previous thirteen principles has the highest chance of success.


    Monday, November 05, 2007

    Bridging the Divide : EDA User Groups For Multi-Vendor Flows

    There's SNUG for Synopsys and MUSIC for Magma. There's User2User for Mentor and CDNLive for Cadence. But it still feels like something's missing. Don't get me wrong, these conferences are a great place to learn about the latest and greatest ways to use your favorite tool. It's just that no one is looking after the interfaces between the EDA vendors. If you are one of many ASIC design houses that use multiple EDA vendors, then you know what I'm talking about. If the place-and-route tool does not re-order the scan chains stitched by your DFT tool, what do you do? It may be the PNR tool's problem or the DFT tool's problem, but it is certainly your problem!

    There's potential for mismatches whenever a flow crosses EDA vendor boundaries and, usually, it's up to the users to make the tools from different vendors work together. While one does get a whole lot of support from the vendors, the ultimate responsibility rests on the ASIC engineer. Further, the inter-vendor issues are discovered anew by each ASIC design company (or worse, each engineer) and precious cycles are wasted in figuring out the issue and possible workarounds. If only there were some way to document the issues and associated workarounds for the benefit of all ASIC design companies! Enter Multi-Vendor EDA User Groups, stage left.

    EDA vendors might not be the biggest sponsors of such groups, since the PR ROI might not be as great as for their individual events. Maybe all the small EDA companies that cannot afford their own user groups can get together on this one. But I'm guessing this idea needs a more democratic driving force to really work. The people who'd make it a success would be the ones who feel the pain.

    Multi-Vendor User Groups. Of the Fabless ASICs, By the Fabless ASICs, For the Fabless ASICs?



    Tuesday, October 30, 2007

    It's a Bird, It's a Plane...It's a Proprietary Processor Architecture ! : If ARM's Microsoft, Who is Red Hat?

    How are third-party ARM-compatible processor cores like dodos? In both cases, there are none in existence, and those that were alive were hunted down and killed (here lies PicoTurbo...). Take a look around. There are no alternative vendors for your favorite processor core. That's because most of the proprietary cores are so well-protected that it's almost impossible to meet the ISA spec and not violate a patent. This presents a problem. Processor cores are to ASICs what operating systems are to computers: a platform. Since those that control the platform make the rules and the money (just ask Bill), there's a pitched battle between the IP companies to dominate the market. While market shares of individual companies such as ARM and MIPS go up and down, proprietary processor cores as a whole dominate the ASIC landscape. At this point, you may ask yourself: what's the problem with that?

    • Freedom : Freedom to modify, share and improve processor cores and generally enrich mankind with your knowledge.
    • Safety : Safety in the knowledge that your chosen architecture is non-proprietary and, thus, is not controlled by any one entity. Further, its non-proprietary nature allows you to benefit from competition among vendors.
    • Money : The business model for processor cores companies is pretty standard. The developers of the core get a percentage of sales revenue of devices built on top of their core. If you adopt a free (as in beer and freedom) architecture, you save those dollars for yourself.
    Significant obstacles have to be overcome before any open architecture can match up to the proprietary ones.
    • Ecosystem : A solid ecosystem is essential to the success of any platform. Proprietary processors enjoy unrivaled ecosystems that are not even close to being matched by open architectures such as SPARC.
    • Support : Processor core companies know their cores inside out. They spend their lives developing and advancing processors and the associated tool chains. It's hard to find that kind of laser-beam focus and support in the case of open architectures.
    • Marketing : This is an important issue. There are no evangelists with deep pockets broadcasting the stability and maturity of open architectures. Linux has Red Hat, IBM and Sun to toot its horn. What of hardware?
    • Image : The image of open source software took some time to go from hacker's science experiments to rock-solid enterprise applications. The image of open source hardware is yet to undergo a similar transition.
    The shortcomings of the open architecture can be overcome with the backing of solid profit-driven companies that align their business with open source processor architectures. Like I said, what we really need is a Red Hat of processor cores...




    Monday, October 29, 2007

    The DFT Arms Race: Technological Convergence of Test Solutions From Magma and Synopsys

    Magma's Talus ATPG has put Magma very close (in technological terms) to matching Synopsys's test solutions. The first thing that struck me about the Talus press coverage was the high level of similarity between tool features touted by Magma and Synopsys.

    #1 : Combinational On-Chip Compression. Synopsys projected Adaptive Scan's low-overhead, fully-combinational architecture as revolutionary in comparison to sequential compression technologies like Mentor's TestKompress. Now, Magma claims their ATPG-X solution consists of a "stateless broadcaster" and "combinational compactor" duo.

    #2 : Slack-Aware Delay Testing. Propagating faults along the longest path in transition patterns, rather than the easiest path, is something Synopsys has been working on for some time. The patent for this technology was filed in 1999 and granted in 2002! Yet both Magma and Synopsys issued press releases for this feature at the same time.

    #3 : Concurrent Fault Detection. This one is just plain spooky. First, at SNUG 2007 Bangalore, we have a paper (Concurrent Fault Detection : The New Paradigm in Compacted Patterns) on using TetraMax to detect multiple fault models with one pattern set. Now, it is claimed that "in Talus ATPG, when one fault model is targeted, other fault models are automatically simulated".

    #4 : Just about Everything Else. Imitation, it is said, is the sincerest form of flattery. That said, imitation is also the first step in having the original eat your dust. From supporting the industry-standard STIL to diagnosis of compressed patterns, it appears that Magma has designed Talus ATPG to capitalize on Synopsys's success with TetraMax and DFT-Max.

    The big difference, the one Magma is counting on, is the level of integration between test and place-and-route. It remains to be seen if this factor will be enough to overcome the dominance of DFT-Max and TetraMax.


    Friday, October 26, 2007

    Divide and Conquer : The Case for a Distributed EDA Future

    In two of my previous posts, I dealt with machines for EDA tools. Need for Speed was about tuning machines to accelerate EDA tool performance, while It's Not What You've Got had some ideas on optimizing machine utilization in the ASIC enterprise. Now, it's EDA's turn. My argument is that, if we look at the trends of designs and machines, distributed EDA algorithms and tools will start to dominate over non-distributed ones (multi-threaded or not). In a nutshell: "Speed is no longer the bottleneck for design execution; memory is."

    Consider the following:

    • A : Designs are constantly growing (increased gate count and placeable instances).
    • B : The data structure for a cell, net or design object is also growing (as tools start tracking and optimizing designs across multiple degrees of freedom).
    • A + B : The memory requirements for designs are growing.
    • C : Commercial servers and motherboards don't have infinite slots for RAM. I think the current figure is around 16GB per CPU.
    • A + B + C : There might be a design that will not fit on a server in one shot!
    • D : A 64GB (4-CPU) Opteron is not 8x more expensive than an 8GB (1-CPU) Opteron; it's 19x more expensive!
    • A + B + C + D : Even if a design could fit on one server, it's going to be a very expensive design to execute!
    What will you do when your sign-off timing, simulation or extraction run on your mammoth cannot fit on one server?

    Distributed EDA tools will surely be slower than their non-distributed counterparts, but can one not build an EDA tool that runs on 19 single-CPU Opterons (with a total of 19 CPUs and 152GB of RAM among them) with the same efficiency as a 4-CPU Opteron with 64GB of RAM? For the same price, we have nearly 5x margin in CPU resources and over 2x margin in RAM!
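
    Those margins fall straight out of the earlier price points; a few lines of arithmetic make them explicit:

        # Price points from the earlier post; margins are simple ratios.
        small = {"price": 900,   "cpus": 1, "ram_gb": 8}    # 1-CPU Opteron
        big   = {"price": 17400, "cpus": 4, "ram_gb": 64}   # 4-CPU Opteron

        n = big["price"] // small["price"]   # ~19 small boxes per big box
        print(f"price ratio:           {big['price'] / small['price']:.1f}x")
        print(f"CPU margin ({n} small): "
              f"{n * small['cpus'] / big['cpus']:.2f}x")     # ~4.75x
        print(f"RAM margin ({n} small): "
              f"{n * small['ram_gb'] / big['ram_gb']:.2f}x")  # ~2.4x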


    Wednesday, October 24, 2007

    The Last Bastion II : Some Q&A with DeFacto President and CTO Dr. Chouki Aktouf

    In an earlier post, The Last Bastion Of DFT Tradition Falls : DeFacTo Technologies Introduces RTL Scan Insertion, I had listed some questions about HiDFT-Scan's capabilities. Dr. Chouki Aktouf (CTO, DeFacTo) was kind enough to answer these questions. Here's what Dr. Aktouf had to say about HiDFT-Scan.

    Q : The change of methodology proposed is quite drastic. What's the easiest way to adopt this flow in a phased manner?

    The change in the overall test process is not that drastic; mainly you add DRC at RTL in conjunction with new checks (that now become possible) and displace gate-level scan insertion to RTL. HiDFT-Scan can be adopted progressively, IP-based for instance. The tool usage is also easy and accessible to RTL designers.

    Q: Is HiDFT-Scan physically-aware when it comes to scan insertion?

    Not physically aware, since we implement functional scan; but our scan is data-path sensitive. This ensures a good optimization during synthesis and P&R.


    Q: Other EDA vendors are very much ahead in the DFT game and have very compelling DFT technologies. Synopsys has adaptive scan. Mentor has TestKompress. Is there a way to use HiDFT-Scan with tools from these vendors?

    Absolutely. You can use HiDFT-Scan in conjunction with these tools. A typical example is TestKompress. Through HiDFT-Scan you can automatically access TK and generate your final RTL including scan and test compression logic.

    Q: Will we get a chance to see HiDFT in action?

    If you are present at ITC, we will be pleased to talk to you at our booth #837.


    Tuesday, October 23, 2007

    Triple Digits in Four Years : Open-Silicon Books 100th Design Win

    I'm happy to say that Open-Silicon has now won a total of 100 designs in a short span of just 4 years. These design wins are across the spectrum in terms of both applications (low-power wireless to high-performance cluster nodes) and processes (0.25u to 45nm). Here's what Pierre Lamond (of Sequoia Capital) had to say about the company and its achievement:

    "Open-Silicon has been an agent for change in the ASIC market from the moment they launched their innovative business model. Their ability to book 100 design wins in four years is a testament to the market's need for highly predictable and reliable custom silicon. Open-Silicon's skill at delivering on their customer's critical time-to-market requirements is what makes them one of the fastest growing companies in the market."


    The Last Bastion Of DFT Tradition Falls : DeFacTo Technologies Introduces RTL Scan Insertion

    It had to happen. Someone would eventually come up with a way to insert scan directly into RTL. Turns out that someone is DeFacTo Technologies of France. The tool is called HiDFT-Scan.

    In some ways, this is a great idea:

    • Scan insertion would be fast (as it works with RTL)
    • The synthesis step between RTL ECOs and scan insertion is avoided
    • Scan insertion is process/technology independent and can be ported easily to new nodes
    But there are some questions that need to be answered:
    • The change of methodology proposed is quite drastic. What's the easiest way to adopt this flow in a phased manner?
    • Is HiDFT-Scan physically-aware when it comes to scan insertion?
    • Other EDA vendors are very much ahead in the DFT game and have very compelling DFT technologies. Synopsys has adaptive scan. Mentor has TestKompress. Will we have to abandon these vendors or will DeFacTo play well with others?

    Monday, October 22, 2007

    It's Not What You've Got, It's How You Use It : Ideas For Cost-effective Server Farms


    The optimal utilization of resources is instrumental in improving the profitability of any enterprise. As a fabless ASIC company scales in size, automated/intelligent resource management becomes a bottleneck. You're too big to make snap decisions in response to immediate demand; there has to be a formal process of measuring resource requirements and subsequently acquiring those resources. One such resource that requires intelligent estimation and acquisition is compute: the sum total of compute power in the enterprise capable of executing EDA tools. This includes both the dedicated high-end compute servers and the often-overlooked workstations that sit in every engineer's cubicle. It is true that machines probably come third in expenses (after people and EDA licenses), but precisely because they are third in line there is a lot of scope for more efficient machine usage! Some ideas to make the most of your servers:

    1. Get a queue. When resources are accessed through a load-sharing mechanism such as LSF or FlowTracer, it is a whole lot easier to analyze trends and increase utilization across all machines. Low-power desktops that don't usually do anything more intensive than a screen refresh will, once on the queue, be put to work on jobs that fit their memory limits, saving your high-end servers for jobs that need them. An enterprise with a queue has the advantage of a scalable system: your engineers see a single interface whether you have ten machines or a thousand.
    2. Choose Your Machines Carefully. With data on jobs and memory, one can create a machine pool that reflects these requirements. Why is this important? Have a look at machine prices on Epinions. An 8GB single-CPU Opteron will cost you $900 apiece. On the other hand, a 64GB machine with 4 CPUs will cost you $17400. If most jobs take up less than 8GB of memory, you'd be better off choosing low-end servers instead of high-end ones. Why get a 64GB machine when you can get nineteen 8GB Opterons instead? (A toy dispatcher along these lines is sketched after this list.)
    3. Spread It Out. Ensure that there's a mechanism to even out demand throughout the working day or week. One sign that you might need such a policy is that there don't seem to be enough machines during the day but most of your servers are idle at night! In the absence of such measures, you might end up acquiring extra machines just to meet the peak load. By instituting a policy that ensures even utilization across the day, you need fewer resources while utilizing them to the maximum.
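
    Here is the toy dispatcher promised above, combining ideas 1 and 2: send each job to the smallest machine whose free memory can hold it, so the big-memory box stays available. The machine names, sizes and job footprints are all invented; a real farm would sit behind LSF or FlowTracer:

        # Toy memory-aware dispatch: smallest machine that fits the job.
        machines = [  # (name, RAM in GB) -- mostly cheap boxes, one bigmem
            ("desk-01", 4), ("desk-02", 4),
            ("srv-01", 8), ("srv-02", 8), ("srv-03", 8),
            ("bigmem-01", 64),
        ]
        free = dict(machines)

        jobs = [("lint", 2), ("synth_blockA", 6),
                ("sta_fullchip", 40), ("sim_regress", 3)]

        for job, need_gb in jobs:
            # smallest machine that still fits, so bigmem stays available
            fits = sorted((ram, name) for name, ram in free.items()
                          if ram >= need_gb)
            if not fits:
                print(f"{job}: queued (no machine with {need_gb}GB free)")
                continue
            ram, name = fits[0]
            free[name] -= need_gb
            print(f"{job:14s} ({need_gb:>2}GB) -> {name}")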

    Sunday, October 21, 2007

    IP For Nothing and Chips for Free : Think Silicon offers free soft IP using a web interface

    One of the issues with Open Cores is its large amount of GPL'ed IP. Using GPL'ed IP in your design, under some interpretations, may require you to disclose your whole design. Can a company benefit from freely available high-quality IP and yet maintain its internally developed IP? Just when I thought such IP was impossible to find, Think Silicon comes in and creates a web-accessible free IP generator. To be fair, the modules offered are small, but the impact is huge. See it here.


    Tuesday, October 16, 2007

    Need For Speed : Optimized Servers and Platforms For EDA Tools

    You can never have too many Opterons. No amount of memory is ever enough. 64-bit is no longer an option but a necessity. Sound familiar? EDA tools push compute servers to their limits. Memory requirements are measured in GB and runtimes in days. Interesting, then, that the platforms that run these high-performance tools are the same machines, configurations and operating systems used by John Q. Server down the street for everything from email to file storage. If there's a Blackbird for gaming, should ASIC design engineers settle for anything less?

    Imagine, if you will, a platform whose components and operating system are tweaked to speed up EDA execution. A wishlist for an EDA-optimized platform:

    1. Blazing floating-point instruction sets with vector support (for those iterative crosstalk-aware timing runs or a router's calculations to simultaneously satisfy setup, hold, noise, crosstalk, DRC, Antenna, signal-EM....you get the idea)
    2. 100s of GBs of memory (and a motherboard that supports that)
    3. Excellent graphics support (so that your place-and-route display doesn't hang so often)
    4. Optimized shared libraries that support graph partitioning algorithms and sparse matrix computations (that just about every tool will use)
    Ideally, we require a platform that accelerates EDA tool performance but does not restrict EDA tools from running on any other server. In other words, no proprietary libraries that only the accelerated server can use. Try locking people in and you just might be locking them out. All functions used on the accelerated server should be available on normal servers (they'd be slower, of course). One way I see it: EDA tools would use the normal version of the shared library objects on John Q. Server's machines, while on the EDA-optimized platform the accelerated version (with access to the corresponding hardware) would be much faster.
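
    That normal-versus-accelerated split is the familiar try-the-fast-path pattern used by numeric libraries. A minimal sketch of the idea in Python, where eda_accel is an invented name for a platform-tuned native module:

        # Same-interface, two-implementations pattern: the tool always calls
        # partition(), and only the binding underneath changes per platform.

        def _portable_partition(netlist):
            """Slow-but-everywhere fallback: trivial round-robin bipartition."""
            return ([n for i, n in enumerate(netlist) if i % 2 == 0],
                    [n for i, n in enumerate(netlist) if i % 2 == 1])

        try:
            from eda_accel import partition  # tuned native library, if present
        except ImportError:
            partition = _portable_partition  # John Q. Server still works

        left, right = partition(["u1", "u2", "u3", "u4", "u5"])
        print(left, right)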

    Some options I see for making this platform, and the business of such platforms, work:
    1. Create your own custom EDA compute server. A Tharas Hammer for general-purpose EDA, so to speak.
    2. Create a card for accelerating EDA tools, release an SDK for it, and have EDA vendors use it
    3. Creating your own card is still on the expensive side and possibly risky. Can we use off-the-shelf cards to build this accelerator? Use Nvidia's CUDA technology, for example?
    4. Work only on the OS and hardware. Tweak, tweak and then tweak some more. Use only off-the-shelf components and an open-source OS base to build a killer compute server. Sell the OS+Machine to all the ASIC companies at a slight premium over the standard package. Can you see all those deep red blade servers wall-to-wall in server rooms?
