908 results for Design verification of VLSI circuits
Abstract:
The recent advances in CMOS technology have allowed the fabrication of transistors with submicron dimensions, making it possible to integrate tens of millions of devices in a single chip that can be used to build very complex electronic systems. This increase in design complexity has created a need for more efficient verification tools that incorporate more appropriate physical and computational models. Timing verification aims to determine whether the timing constraints imposed on a design can be satisfied. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimulus dependent. Hence, to ensure that the critical situation is taken into account, one must exercise all possible input patterns, which is clearly infeasible given the complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay and are therefore referred to as topological timing analyzers. However, this method may produce overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three respects: the set of sensitization conditions required to declare a path sensitizable (the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been studied exhaustively over the last fifteen years, some specific topics have not yet received the attention they require. One such topic is the applicability of functional timing analysis to circuits containing complex gates, which is the central concern of this thesis. In addition, and as a necessary step to set the scene, a detailed and systematic study of functional timing analysis is also presented.
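As a rough illustration of the topological approach described above (this is not code from the thesis), the following sketch computes the longest-path delay of a combinational block modelled as a directed acyclic graph; the gate names and delay values are invented.

```python
from collections import defaultdict

def topological_critical_delay(gates, edges):
    """Longest-path (topological) delay estimate on a DAG.

    gates: dict mapping gate name -> gate delay
    edges: list of (driver, sink) pairs describing the netlist
    """
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    # Kahn-style topological traversal, accumulating worst-case arrival times.
    ready = [g for g in gates if indeg[g] == 0]
    arrival = {g: gates[g] for g in gates}   # inputs assumed to arrive at t = 0
    while ready:
        u = ready.pop()
        for v in succ[u]:
            arrival[v] = max(arrival[v], arrival[u] + gates[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(arrival.values())

# Hypothetical netlist, delays in ns.
gates = {"a": 1.0, "b": 2.0, "c": 1.5, "d": 0.5}
edges = [("a", "c"), ("b", "c"), ("c", "d")]
print(topological_critical_delay(gates, edges))  # 4.0 ns along b -> c -> d
```

A functional analyzer would additionally check whether the reported longest path can actually propagate a transition, discarding false paths such as those described in the abstract.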
Abstract:
Design verification in the digital domain, using model-based principles, is a key research objective in addressing the industrial requirement for reduced physical testing and prototyping. For complex assemblies, the verification of the design and the associated production methods is currently fragmented, prolonged and sub-optimal, as it uses digital and physical verification stages deployed sequentially across multiple systems. This paper describes a novel, hybrid design verification methodology that integrates model-based variability analysis with measurement data from assemblies, in order to reduce simulation uncertainty and allow early design verification from the perspective of satisfying key assembly criteria.
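Purely to illustrate the idea of combining model-based variability analysis with assembly measurement data (the paper's actual method is not shown in the abstract; all variable names and values below are invented), a minimal Monte Carlo tolerance stack-up might be updated with measured statistics as follows:

```python
import random
import statistics

def stack_gap(housing, part_a, part_b):
    """Toy assembly criterion: remaining gap in a housing after two parts are inserted."""
    return housing - (part_a + part_b)

def monte_carlo(n, housing, a, b):
    """Each argument is a (mean, sigma) pair; returns n sampled gap values."""
    return [stack_gap(random.gauss(*housing), random.gauss(*a), random.gauss(*b))
            for _ in range(n)]

# Design-stage (assumed) distribution for part "a" ...
gaps_nominal = monte_carlo(20000, (100.0, 0.05), (40.0, 0.03), (59.0, 0.03))

# ... replaced by statistics estimated from hypothetical measured samples of "a".
measured_a = [40.02, 40.05, 39.99, 40.04, 40.03]
a_measured = (statistics.mean(measured_a), statistics.stdev(measured_a))
gaps_updated = monte_carlo(20000, (100.0, 0.05), a_measured, (59.0, 0.03))

for label, gaps in (("nominal", gaps_nominal), ("measurement-informed", gaps_updated)):
    print(label, round(statistics.mean(gaps), 3), round(statistics.stdev(gaps), 3))
```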
Abstract:
An efficient geometrical design rule checker is proposed, based on operations on quadtrees that represent VLSI mask layouts. The time complexity of the design rule checker is O(N), where N is the number of polygons in the mask. A pseudo-Pascal description of all the important algorithms for geometrical design rule verification is provided.
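The abstract does not reproduce the algorithms, but the underlying idea, using a quadtree of layout rectangles so that a spacing check only compares geometrically nearby shapes, can be sketched as below; the class, capacity value and spacing rule are illustrative and not taken from the paper.

```python
class Quadtree:
    """Minimal quadtree storing axis-aligned rectangles (x0, y0, x1, y1)."""

    def __init__(self, bounds, capacity=4):
        self.bounds = bounds          # (x0, y0, x1, y1) covered by this node
        self.capacity = capacity
        self.rects = []
        self.children = None

    @staticmethod
    def _overlaps(a, b):
        return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

    def insert(self, rect):
        if self.children:
            for child in self.children:
                if self._overlaps(child.bounds, rect):
                    child.insert(rect)
            return
        self.rects.append(rect)
        if len(self.rects) > self.capacity:
            self._split()

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [Quadtree(b, self.capacity) for b in
                         ((x0, y0, xm, ym), (xm, y0, x1, ym),
                          (x0, ym, xm, y1), (xm, ym, x1, y1))]
        rects, self.rects = self.rects, []
        for r in rects:
            self.insert(r)

    def query(self, region, out=None):
        out = set() if out is None else out
        if not self._overlaps(self.bounds, region):
            return out
        out.update(r for r in self.rects if self._overlaps(r, region))
        if self.children:
            for child in self.children:
                child.query(region, out)
        return out

def spacing_violations(tree, rects, min_space):
    """Report pairs of rectangles on the same layer closer than min_space (each pair once)."""
    violations = []
    for r in rects:
        grown = (r[0] - min_space, r[1] - min_space, r[2] + min_space, r[3] + min_space)
        for other in tree.query(grown):
            if other > r and not Quadtree._overlaps(r, other):
                violations.append((r, other))
    return violations

layout = [(0, 0, 2, 2), (2.5, 0, 4, 2), (10, 10, 12, 12)]
tree = Quadtree((0, 0, 16, 16), capacity=2)
for r in layout:
    tree.insert(r)
print(spacing_violations(tree, layout, min_space=1.0))
```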
Abstract:
This paper introduces a complete CAD toolset for the implementation of digital logic on a Field-Programmable Gate Array (FPGA) platform. Compared with existing academic toolsets, this toolset introduces formal verification at each step of the tool flow, in particular formal verification of the configuration bitstream. The FPGA CAD tool verification flow using Formality is presented in detail. Using plug-in technology, we have developed an integrated FPGA design kit that brings all of the tools together.
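Formality is a commercial equivalence checker and its flow is not reproduced in the abstract. As a toy stand-in for the idea of formally comparing two representations of the same logic (for example, RTL versus a netlist recovered from a configuration bitstream), the sketch below exhaustively checks a miter of two small combinational functions; both functions are hypothetical.

```python
from itertools import product

def rtl_ref(a, b, c):
    """Reference (golden) function, standing in for the RTL."""
    return (a and b) or (not c)

def impl_from_bitstream(a, b, c):
    """Function recovered from the implemented design (hypothetical)."""
    return not ((not (a and b)) and c)

def equivalent(f, g, n_inputs):
    """Miter-style check: the designs are equivalent iff f XOR g is never true."""
    for vec in product([False, True], repeat=n_inputs):
        if f(*vec) != g(*vec):
            return False, vec   # counterexample input vector
    return True, None

print(equivalent(rtl_ref, impl_from_bitstream, 3))  # (True, None): logically identical
```

Real equivalence checkers avoid the exponential enumeration by using BDD- and SAT-based reasoning, but the pass/fail question being answered is the same.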
Abstract:
In this paper we present a methodology, and its implementation, for the design and verification of the programming circuit used in a family of application-specific FPGAs that share a common architecture. Each member of the family differs either in the types of functional blocks it contains or in the number of blocks of each type, and a parametrized design methodology is presented to accommodate this variability. Although our focus is the programming circuitry that provides the interface between the FPGA core circuit and the external programming hardware, the parametrized design method can be generalized to the design of the entire chip for all members of the FPGA family. The method covers the generation of the design RTL files and the support files for synthesis, place-and-route, layout and simulation, and has proven to work smoothly within the complete chip design methodology. We describe the application of this method to the design of the programming circuit in detail, including the design flow from behavioral-level design to final layout, as well as the verification. Different package options and different programming modes are included in the description of the design. The circuit implementation is based on SMIC 0.13-micron CMOS technology.
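As a loose illustration of parametrized generation of design files (none of the module names, block types or template text below come from the paper), a generator might expand per-member parameters into RTL stubs like this:

```python
MEMBER_PARAMS = {
    # Hypothetical family members: block type -> instance count.
    "fpga_small": {"logic_tile": 64,  "bram": 4,  "mult": 2},
    "fpga_large": {"logic_tile": 256, "bram": 16, "mult": 8},
}

def generate_programming_rtl(member, params):
    """Emit a Verilog stub with one configuration shift stage per block instance."""
    total = sum(params.values())
    lines = [f"module prog_chain_{member}(input clk, input cfg_in, output cfg_out);",
             f"  // {total} configuration stages for {member}",
             f"  reg [{total - 1}:0] shift;",
             "  always @(posedge clk) shift <= {shift[%d:0], cfg_in};" % (total - 2),
             f"  assign cfg_out = shift[{total - 1}];",
             "endmodule"]
    return "\n".join(lines)

for member, params in MEMBER_PARAMS.items():
    rtl = generate_programming_rtl(member, params)
    with open(f"{member}_prog_chain.v", "w") as fh:
        fh.write(rtl + "\n")
    print(f"wrote {member}_prog_chain.v ({sum(params.values())} stages)")
```

The same parameter dictionary could drive generation of synthesis scripts, place-and-route constraints and simulation testbenches, which is the essence of the single-source approach the abstract describes.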
Abstract:
The increasing complexity of VLSI circuits and the reduced accessibility of modern packaging and mounting technologies restrict the usefulness of conventional in-circuit debugging tools, such as in-circuit emulators for microprocessors and microcontrollers. However, this same trend enables the development of more complex products, which in turn require more powerful debugging tools. These conflicting demands could be met if the standard scan test infrastructures now common in most complex components were able to match the debugging requirements of design verification and prototype validation. This paper analyses the main debug requirements in the design of microprocessor-based applications and the feasibility of their implementation using the mandatory, optional and additional operating modes of the standard IEEE 1149.1 test infrastructure.
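To make the IEEE 1149.1 starting point concrete, the sketch below encodes the 16-state TAP controller defined by the standard and traces the TMS sequence that moves it from Test-Logic-Reset to Shift-IR, where a debug-oriented instruction would then be shifted in on TDI; the helper function names are this sketch's own.

```python
# IEEE 1149.1 TAP controller: state -> (next state if TMS=0, next state if TMS=1)
TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR", "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR", "Exit1-DR"),
    "Shift-DR":         ("Shift-DR", "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR", "Update-DR"),
    "Pause-DR":         ("Pause-DR", "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR", "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR", "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR", "Exit1-IR"),
    "Shift-IR":         ("Shift-IR", "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR", "Update-IR"),
    "Pause-IR":         ("Pause-IR", "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR", "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def walk(state, tms_bits):
    """Apply a TMS bit per TCK rising edge and record the visited states."""
    trace = [state]
    for tms in tms_bits:
        state = TAP[state][tms]
        trace.append(state)
    return trace

# From reset into Shift-IR, ready to load an instruction through the scan chain.
print(walk("Test-Logic-Reset", [0, 1, 1, 0, 0]))
```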
Abstract:
I have designed and implemented a system for the multilevel verification of synchronous MOS VLSI circuits. The system, called Silica Pithecus, accepts the schematic of an MOS circuit and a specification of the circuit's intended digital behavior, and determines whether the circuit meets its specification. If the circuit fails to meet its specification, Silica Pithecus returns to the designer the reason for the failure. Unlike earlier verifiers, which modelled primitives (e.g., transistors) as unidirectional digital devices, Silica Pithecus models primitives more realistically: transistors are modelled as bidirectional devices of varying resistances, and nodes are modelled as capacitors. Silica Pithecus operates hierarchically, interactively, and incrementally. Major contributions of this research include a formal understanding of the relationship between different behavioral descriptions (e.g., signal, boolean, and arithmetic descriptions) of the same device, and a formalization of the relationship between the structure, behavior, and context of a device. Given these formal structures, my methods find sufficient conditions on the inputs of a circuit which guarantee its correct operation in the desired descriptive domain. These methods are algorithmic and complete, and they handle complex phenomena such as races and charge sharing. Informal notions such as races and hazards are shown to be derivable from the correctness conditions used by my methods.
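A tiny sketch of the kind of modelling the abstract alludes to: when conducting transistors join otherwise isolated nodes, the resulting value can be approximated by capacitance-weighted charge sharing rather than by a purely digital rule. The node names and values below are invented for illustration.

```python
def charge_share(nodes, connected_groups):
    """nodes: name -> (capacitance, voltage); groups: sets of nodes joined by ON transistors."""
    result = dict(nodes)
    for group in connected_groups:
        total_q = sum(nodes[n][0] * nodes[n][1] for n in group)   # conserved charge
        total_c = sum(nodes[n][0] for n in group)
        v = total_q / total_c                                     # shared final voltage
        for n in group:
            result[n] = (nodes[n][0], v)
    return result

# A precharged dynamic node (small C, 5 V) shares charge with a large discharged bus (0 V).
nodes = {"dyn": (10e-15, 5.0), "bus": (90e-15, 0.0)}
print(charge_share(nodes, [{"dyn", "bus"}]))
# "dyn" collapses to 0.5 V -- an effect a purely digital, unidirectional model would miss.
```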
Abstract:
An extension to a formal verification approach for hybrid systems is proposed to verify analog and mixed-signal (AMS) designs. AMS designs can be formally modeled as hybrid systems and therefore lend themselves to the formal analysis and verification techniques applied to hybrid systems. The proposed approach employs simulation traces obtained from an actual design implementation of AMS circuit blocks (for example, in the form of SPICE netlists) to carry out formal analysis and verification. This enables the same platform used for formally validating an abstract model of an AMS design to also be used for validating its refinements and design implementation, thereby providing a simple route to formal verification at different levels of implementation. The feasibility of the proposed approach is demonstrated with a case study based on a tunnel diode oscillator. Since the device characteristic of a tunnel diode is highly non-linear, with a negative-resistance region, the dynamic behavior of circuits in which it is employed is difficult to model, analyze and verify within a general hybrid-system formal verification tool. In the case study presented, the formal model and the proposed computational techniques have been incorporated into CheckMate, a formal verification tool based on MATLAB and the Simulink/Stateflow framework from MathWorks.
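For readers unfamiliar with the benchmark, the sketch below simulates a tunnel diode oscillator with a simple forward-Euler loop; the cubic diode characteristic and the component values are illustrative choices, not the ones used in the paper, and a verification tool such as CheckMate would compute reachable sets rather than single traces.

```python
def i_d(v):
    """Illustrative tunnel diode characteristic (A) with a negative-resistance region."""
    return 0.01 * (v**3 - 1.5 * v**2 + 0.6 * v)

def simulate(vc=0.0, il=0.0, C=1e-12, L=1e-6, R=200.0, vin=0.6,
             dt=1e-11, steps=10000):
    """Series RLC driving the diode: state is capacitor voltage vc and inductor current il."""
    trace = []
    for _ in range(steps):
        dvc = (il - i_d(vc)) / C
        dil = (vin - vc - R * il) / L
        vc, il = vc + dt * dvc, il + dt * dil
        trace.append((vc, il))
    return trace

trace = simulate()
late = [v for v, _ in trace[len(trace) // 2:]]       # discard the initial transient
print(f"capacitor voltage swings between {min(late):.3f} V and {max(late):.3f} V")
# A formal tool would instead prove, for all admissible parameters, that the
# trajectory stays inside (or eventually enters) a specified envelope.
```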
Abstract:
The paper describes a semianalytic slope delay model for CMOS switch-level timing verification. The model classifies the effects of the input slope, internal size and load capacitance of a logic gate on delay time, and then uses a series of carefully chosen analytic functions to estimate delay times under different circumstances. In the field of VLSI analysis, this model achieves improvements in speed and accuracy compared with conventional approaches to transistor-level and switch-level simulation.
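A schematic (and entirely invented) form of such a semianalytic model is sketched below: delay is evaluated from analytic expressions in the input transition time, the gate's drive strength and the output load, with different expressions for different operating regions; all coefficients are made up.

```python
def gate_delay(t_slope, c_load, width,
               fast=(0.08, 0.35, 0.9), slow=(0.15, 0.30, 1.4)):
    """Illustrative semianalytic delay (ns).

    t_slope: input transition time (ns), c_load: output load (pF),
    width:   relative drive strength of the gate.
    Two analytic expressions cover the fast-input and slow-input regimes.
    """
    a, b, c = fast if t_slope < 0.5 else slow
    return a + b * t_slope + c * c_load / width

for t_slope in (0.1, 0.4, 0.8):
    print(t_slope, round(gate_delay(t_slope, c_load=0.05, width=2.0), 3))
```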
Abstract:
Residential RC framed structures suffered heavily during the 2001 Bhuj earthquake in Gujarat, India. These types of structures also saw severe damage in other earthquakes such as the 1999 Kocaeli earthquake in Turkey and the 921 Ji-Ji earthquake in Taiwan. In this paper the seismic response of residential structures was investigated using physical modelling. Idealised soft-storey and top-heavy, two-degrees-of-freedom (2DOF) portal frame structures were developed and tested on saturated and dry sand models at 25 g using the Schofield Centre 10-m beam centrifuge. It was possible to recreate observed field behaviour using these models. As observed in many recent earthquakes, soft-storey structures were found to be particularly vulnerable to seismic loads. Elastic response spectra methods are often used in the design of simple portal frame structures, but the seismic risk of these structures can be significantly increased by modifications such as the removal of a column or the addition of heavy water tanks on the roof. The experimental data from the dynamic centrifuge tests on such soft-storey or top-heavy models were used to evaluate the predictions obtained from the response spectra. Response spectra were able to predict the seismic response during small to moderate intensity earthquakes, but became inaccurate during strong earthquakes and when soil-structure interaction effects became important. Re-evaluation of the seismic risk of such modified structures is required, and the time-domain analyses suggested by building codes such as IBC, UBC or NEHRP may be more appropriate.
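As background on the response-spectrum methods mentioned above (the ground motion and all values below are illustrative, not from the paper), the peak response of a damped single-degree-of-freedom oscillator can be computed with Newmark's average-acceleration scheme and turned into spectral ordinates:

```python
import math

def sdof_peak_disp(period, damping, accel, dt):
    """Peak displacement of a damped SDOF oscillator under base acceleration
    (Newmark average-acceleration scheme, unit mass)."""
    w = 2 * math.pi / period
    m, c, k = 1.0, 2 * damping * w, w * w
    beta, gamma = 0.25, 0.5
    u = v = 0.0
    a = -accel[0]                                  # m*a + c*v + k*u = -m*ag
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt * dt)
    peak = 0.0
    for ag in accel[1:]:
        p_eff = (-ag
                 + m * (u / (beta * dt * dt) + v / (beta * dt) + a * (0.5 / beta - 1))
                 + c * (gamma * u / (beta * dt) + v * (gamma / beta - 1)
                        + a * dt * (gamma / (2 * beta) - 1)))
        u_new = p_eff / k_eff
        v_new = ((gamma / (beta * dt)) * (u_new - u)
                 + v * (1 - gamma / beta)
                 + a * dt * (1 - gamma / (2 * beta)))
        a_new = (u_new - u) / (beta * dt * dt) - v / (beta * dt) - a * (0.5 / beta - 1)
        u, v, a = u_new, v_new, a_new
        peak = max(peak, abs(u))
    return peak

# Synthetic ground motion (not a recorded event): a decaying sine burst.
dt = 0.01
accel = [3.0 * math.exp(-0.5 * t) * math.sin(2 * math.pi * 2.0 * t)
         for t in (i * dt for i in range(1000))]

for T in (0.2, 0.5, 1.0, 2.0):
    sd = sdof_peak_disp(T, 0.05, accel, dt)
    sa = (2 * math.pi / T) ** 2 * sd               # pseudo-spectral acceleration
    print(f"T = {T:.1f} s  Sd = {sd:.4f} m  Sa = {sa:.3f} m/s^2")
```

Design-code spectra tabulate exactly this kind of peak response versus period, which is why they lose accuracy once the structure no longer behaves like a simple elastic oscillator on a rigid base.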
Abstract:
A wind catcher/tower natural ventilation system was installed in a seminar room in the building of the School of Construction Management and Engineering at the University of Reading in the UK. Performance was analysed by means of tracer-gas ventilation measurements, indoor climate measurements (temperature, humidity, CO2) and occupant surveys. In addition, the potential of simple design tools was evaluated by comparing the observed ventilation results with those predicted by an explicit ventilation model and by the AIDA implicit ventilation model. To support this analysis, external climate parameters (wind speed and direction, solar radiation, external temperature and humidity) were also monitored. The results showed that the chosen ventilation design provided a substantially greater ventilation rate than an equivalent area of openable window. Air quality parameters also stayed within accepted norms, and occupants expressed general satisfaction with the system and with comfort conditions. Night cooling was maximised by using the system in combination with openable windows. Comparison of the calculations with the ventilation rate measurements showed that while AIDA gave results that correlated reasonably well with the monitored performance, the widely used industry explicit model was found to overestimate the monitored ventilation rate.
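For orientation only (this is a generic textbook-style estimate, not necessarily the specific explicit model evaluated in the paper, and all values are invented), wind- and buoyancy-driven flows through an opening are often combined as follows:

```python
import math

def stack_flow(cd, area, dh, t_in, t_out, g=9.81):
    """Buoyancy-driven volume flow (m^3/s) between openings separated by height dh (m)."""
    t_avg = (t_in + t_out) / 2 + 273.15
    return cd * area * math.sqrt(2 * g * dh * abs(t_in - t_out) / t_avg)

def wind_flow(cd, area, u_wind, dcp):
    """Wind-driven volume flow (m^3/s) for a pressure-coefficient difference dcp."""
    return cd * area * u_wind * math.sqrt(dcp)

# Illustrative values for a small room served by a roof-mounted wind catcher.
q_s = stack_flow(cd=0.61, area=0.25, dh=3.0, t_in=24.0, t_out=18.0)
q_w = wind_flow(cd=0.61, area=0.25, u_wind=3.0, dcp=0.4)
q_total = math.hypot(q_s, q_w)          # common quadrature combination of the two effects
ach = q_total * 3600 / 60.0             # air changes per hour for a 60 m^3 room
print(f"stack {q_s:.3f}  wind {q_w:.3f}  total {q_total:.3f} m^3/s  ~ {ach:.1f} ACH")
```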
Abstract:
This paper formally derives a blocked version, with blocks of size two, of a new path-based neural branch prediction algorithm (FPP), yielding a lower-cost hardware solution while maintaining input-output characteristics similar to those of the original algorithm. The blocked solution, referred to here as the B2P algorithm, is obtained using graph theory and retiming methods. Verification approaches were exercised to show that the prediction performance of the FPP and B2P algorithms differs by less than one misprediction per thousand instructions, using a known framework for branch prediction evaluation. For a chosen FPGA device, circuits generated from the B2P algorithm showed average area savings of over 25% compared with circuits for the FPP algorithm, with similar timing performance, thus making the proposed blocked predictor superior from a practical viewpoint.
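The FPP and B2P algorithms themselves are not reproduced in the abstract. As general background, a path-based neural predictor in the style of the perceptron-predictor literature selects each weight by the address of a previous branch on the path and sums the selected weights, as in this illustrative sketch (table sizes, threshold and the demo workload are invented):

```python
import random

H = 8          # path length used for prediction
N = 256        # rows in the weight table
THETA = int(2.14 * H + 20.58)   # training threshold commonly used in this literature

# weights[row][j]: column j is selected by the address of the j-th most recent
# branch on the path (column 0 by the branch being predicted).
weights = [[0] * (H + 1) for _ in range(N)]

def predict(pc, path):
    rows = [pc % N] + [addr % N for addr in path[:H]]
    y = sum(weights[row][j] for j, row in enumerate(rows))
    return y, rows

def train(y, rows, taken):
    if ((y >= 0) != taken) or abs(y) < THETA:
        step = 1 if taken else -1
        for j, row in enumerate(rows):
            weights[row][j] += step

# Tiny demo: a synthetic branch whose outcome depends on the previous branch address.
random.seed(0)
path, correct, total = [], 0, 5000
for _ in range(total):
    pc = random.choice([0x40, 0x44, 0x48])
    taken = (path[0] == 0x44) if path else False   # hypothetical correlated behaviour
    y, rows = predict(pc, path)
    correct += (y >= 0) == taken
    train(y, rows, taken)
    path.insert(0, pc)
    path = path[:H]
print(f"accuracy: {correct / total:.2%}")
```

Blocking such a predictor, as the paper does for FPP, restructures this summation so that the adder tree and weight accesses map more compactly onto hardware while leaving the predict/train behaviour essentially unchanged.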