928 results for TEST CASE GENERATION
Abstract:
The recent advances in CMOS technology have allowed for the fabrication of transistors with submicron dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. Such an increase in design complexity has created a need for more efficient verification tools that can incorporate more appropriate physical and computational models. Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied or not. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimuli dependent. Hence, in order to ensure that the critical situation is taken into account, one must exercise all possible input patterns. Obviously, this is not feasible given the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay, and are thus referred to as topological timing analyzers. However, such a method may result in overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology, but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three aspects: the set of sensitization conditions necessary to declare a path sensitizable (i.e., the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been studied exhaustively over the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates. This is the central concern of this thesis. In addition, as a necessary step to set the scene, a detailed and systematic study of functional timing analysis is also presented.
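As background for the topological approach mentioned above, here is a minimal sketch (not taken from the thesis; gate names and delays are hypothetical) of how the critical delay of a combinational block modelled as a directed acyclic graph can be computed.

```python
# Minimal topological timing analysis: the combinational block is a DAG whose
# edges carry (hypothetical) gate/interconnect delays; the critical delay is the
# longest input-to-output path, found in topological order (Kahn's algorithm).
from collections import defaultdict

def critical_delay(edges, inputs):
    """edges: dict node -> list of (successor, delay); inputs: primary inputs."""
    graph, indeg = defaultdict(list), defaultdict(int)
    nodes = set(inputs)
    for u, succs in edges.items():
        nodes.add(u)
        for v, d in succs:
            graph[u].append((v, d))
            indeg[v] += 1
            nodes.add(v)
    arrival = {n: 0.0 for n in nodes}            # latest arrival time at each node
    ready = [n for n in nodes if indeg[n] == 0]
    while ready:
        u = ready.pop()
        for v, d in graph[u]:
            arrival[v] = max(arrival[v], arrival[u] + d)
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(arrival.values())

# Illustrative circuit (delays in ns): the longest path is a -> g1 -> g3.
example = {"a": [("g1", 1.0)], "b": [("g2", 2.0)],
           "g1": [("g3", 2.0)], "g2": [("g3", 0.5)]}
print(critical_delay(example, inputs={"a", "b"}))   # 3.0
```

If the reported longest path were false, this estimate would be pessimistic; that is exactly the gap functional timing analysis closes by also checking sensitization conditions.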
Abstract:
Research on the inverted pendulum has gained momentum over the last decade in a number of robotics laboratories around the world; because of its unstable properties, it is a good example for control engineers to verify a control theory. To verify that the pendulum can balance, we can run simulations using a closed-loop controller such as the linear quadratic regulator or the proportional-integral-derivative method. The idea of robotic teleoperation is also gaining ground: controlling a robot at a distance, and doing so precisely. Designing a tool that takes best advantage of human skills while keeping the error minimal is an interesting problem, and because the inverted pendulum is an unstable system, it makes a compelling test case for exploring dynamic teleoperation. This thesis therefore focuses on the construction of a two-wheel inverted pendulum robot: which sensors can be used, how they must be integrated into the system, and how a human can be used to control an inverted pendulum. The inverted pendulum robot developed employs technologies such as sensors, actuators and controllers. This Master's thesis starts by presenting an introduction to inverted pendulums and some information about related areas such as control theory. It continues by describing related work in this area. We then describe the mathematical model of a two-wheel inverted pendulum and a simulation made in Matlab. We also focus on the construction of this type of robot and its working theory. Because this is a mobile robot, we address the theme of teleoperation, and finally the thesis closes with a general conclusion and ideas for future work.
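As a hedged illustration of the balancing simulation mentioned above (the thesis itself uses Matlab), the sketch below computes a linear quadratic regulator gain for a generic linearized inverted-pendulum model; the state-space matrices and weights are illustrative placeholders, not the robot's identified parameters.

```python
# LQR sketch for a linearized inverted pendulum (illustrative numbers only).
# State x = [position, velocity, pitch angle, pitch rate]; input u = wheel torque.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -1.0, 0.0],     # hypothetical linearized dynamics
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 15.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [-2.0]])
Q = np.diag([1.0, 0.1, 10.0, 0.1])       # penalize the pitch angle most heavily
R = np.array([[0.5]])

P = solve_continuous_are(A, B, Q, R)     # continuous-time algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P           # optimal state feedback: u = -K @ x

# Closed-loop check: all eigenvalues of A - B@K should have negative real parts.
print(np.linalg.eigvals(A - B @ K).real)
```

A PID loop on the pitch angle alone would be the other controller family named in the abstract; the LQR form is shown here simply because it handles all four states at once.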
Abstract:
This article introduces an efficient method to generate structural models for medium-sized silicon clusters. Geometrical information obtained from previous investigations of small clusters is first sorted and then fed into our predictor algorithm in order to generate structural models for larger clusters. The method predicts geometries whose binding energies are close (95%) to the corresponding ground-state value, at very low computational cost. These predictions can be used as a very good initial guess for any global optimization algorithm. As a test case, information from clusters of up to 14 atoms was used to predict good models for silicon clusters of up to 20 atoms. We believe that the new algorithm may enhance the performance of most optimization methods whenever some previous information is available. (C) 2003 Wiley Periodicals, Inc.
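Since the predicted geometries are intended as starting points for global optimization, a hedged sketch of that use is given below; the placeholder energy function, grid-based "predicted" coordinates and optimizer settings are illustrative assumptions, not the article's actual potential or predictor.

```python
# Sketch: refine a predicted 20-atom cluster geometry with a global optimizer.
# `binding_energy` is a Lennard-Jones-like placeholder, NOT the article's model,
# and `predicted_geometry` stands in for the output of the predictor algorithm.
import numpy as np
from scipy.optimize import basinhopping

def binding_energy(flat_coords):
    xyz = flat_coords.reshape(-1, 3)
    d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
    d = d[np.triu_indices(len(xyz), k=1)]          # unique pair distances
    return np.sum(d**-12 - 2.0 * d**-6)            # pairwise LJ-like energy

grid = np.stack(np.meshgrid(*[np.arange(3.0)] * 3), axis=-1).reshape(-1, 3)[:20]
predicted_geometry = grid + 0.1 * np.random.default_rng(0).normal(size=(20, 3))

result = basinhopping(binding_energy, predicted_geometry.ravel(),
                      minimizer_kwargs={"method": "L-BFGS-B"}, niter=10)
print(result.fun)    # refined energy reached from the predicted starting structure
```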
Abstract:
The Weyl-Wigner prescription for quantization on Euclidean phase spaces makes essential use of Fourier duality. The extension of this property to more general phase spaces requires the use of Kac algebras, which provide the necessary background for the implementation of Fourier duality on general locally compact groups. Kac algebras - and the duality they incorporate - are consequently examined as candidates for a general quantization framework extending the usual formalism. Using as a test case the simplest nontrivial phase space, the half-plane, it is shown how the structures present in the complete-plane case must be modified. Traces, for example, must be replaced by their noncommutative generalizations - weights - and the correspondence embodied in the Weyl-Wigner formalism is no longer complete. Provided the underlying algebraic structure is suitably adapted to each case, Fourier duality is shown to be indeed a very powerful guide to the quantization of general physical systems.
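For orientation, the Fourier duality exploited in the usual complete-plane case can be summarized (up to normalization conventions, with $\hbar = 1$) by the textbook Weyl map, restated here only as background for the half-plane generalization discussed above:

```latex
\hat{f} \;=\; \frac{1}{(2\pi)^{2}} \int d\sigma\, d\tau\;
        \tilde{f}(\sigma,\tau)\, e^{\,i(\sigma \hat{q} + \tau \hat{p})},
\qquad
\tilde{f}(\sigma,\tau) \;=\; \int dq\, dp\; f(q,p)\, e^{-i(\sigma q + \tau p)} .
```

It is this pairing between phase-space functions and their Fourier transforms that Kac algebras are meant to reproduce on general locally compact groups, where traces must give way to weights.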
Abstract:
This paper develops a novel, fully analytic model for vibration analysis of solid-state electronic components. The model is just as accurate as finite element models and numerically light enough to permit quick design trade-offs and statistical analysis. The paper shows the development of the model, a comparison to finite elements, and an application to a common engineering problem. A gull-wing flat pack component was selected as the benchmark test case, although the presented methodology is applicable to a wide range of component packages. Results showed very good agreement between the presented method and finite elements and demonstrated the usefulness of the method in applying standard test data to a general application. © 2013 Elsevier Ltd.
Abstract:
People's daily lives are surrounded by computing devices with increasing resources (sensors) and increasing processing power. The way these devices communicate is still not natural, and this slows the growth of Ubiquitous Computing. This paper presents a way in which these devices can communicate using Jini technology and the concepts of Service Oriented Architecture, applying these concepts to a test case of Ubiquitous Computing. To conduct the test case, a fictitious system for the management of a soccer championship was constructed, in which users can interact with each other and with the system in a simplified way and have access to real-time data about the championship during the event. This communication is performed by services built with Jini technology, based on key SOA concepts such as modularity and reusability.
Abstract:
The ability of nanoassisted laser desorption-ionization mass spectrometry (NALDI-MS) imaging to provide selective chemical monitoring with proper spatial distribution of lipid profiles from tumor tissues after plate imprinting has been tested. NALDI-MS imaging identified and mapped several potential lipid biomarkers in a murine model of melanoma tumor (inoculation of B16/F10 cells). It also confirmed that the in vivo treatment of tumor bearing mice with synthetic supplement containing phosphoethanolamine (PHO-S) promoted an accentuated decrease in relative abundance of the tumor biomarkers. NALDI-MS imaging is a matrix-free LDI protocol based on the selective imprinting of lipids in the NALDI plate followed by the removal of the tissue. It therefore provides good quality and selective chemical images with preservation of spatial distribution and less interference from tissue material. The test case described herein illustrates the potential of chemically selective NALDI-MS imaging for biomarker discovery.
Abstract:
Interactive theorem provers (ITP for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to "understand" and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the Computer Science Department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving web access to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are: • to study the architecture of these tools, with the aim of understanding the source of their complexity; • to exploit this knowledge to experiment with new solutions that, for backward compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq. Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences: • our internship with the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the Four Colour Theorem; • our collaboration on the D.A.M.A. Project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem. The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using "black box" automation in large formalisations; the impossibility for a user (especially a newcomer) to master the context of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC.
In the second part of the manuscript, many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user driven.
Abstract:
In recent years, due to the rapid convergence of multimedia services, the Internet and wireless communications, there has been a growing trend towards heterogeneity (in terms of channel bandwidths, mobility levels of terminals, and end-user quality-of-service (QoS) requirements) in emerging integrated wired/wireless networks. Moreover, in today's systems, a multitude of users coexists within the same network, each with their own QoS requirement and bandwidth availability. In this framework, embedded source coding, which allows partial decoding at various resolutions, is an appealing technique for multimedia transmission. This dissertation covers my PhD research, mainly devoted to the study of embedded multimedia bitstreams in heterogeneous networks, developed at the University of Bologna, advised by Prof. O. Andrisano and Prof. A. Conti, and at the University of California, San Diego (UCSD), where I spent eighteen months as a visiting scholar, advised by Prof. L. B. Milstein and Prof. P. C. Cosman. In order to improve multimedia transmission quality over wireless channels, joint source and channel coding optimization is investigated in a 2D time-frequency resource block for an OFDM system. We show that knowing the order of diversity in the time and/or frequency domain can assist image (video) coding in selecting optimal channel code rates (source and channel code rates). Then, adaptive modulation techniques, aimed at maximizing the spectral efficiency, are investigated as another possible solution for improving multimedia transmission. For both slow and fast adaptive modulation, the effects of imperfect channel estimation are evaluated, showing that the fast technique, optimal in ideal systems, might be outperformed by slow adaptive modulation when a realistic test case is considered. Finally, the effects of co-channel interference and approximated bit error probability (BEP) are evaluated in adaptive modulation techniques, providing new decision-region concepts and showing how the widely used BEP approximations lead to a substantial loss in overall performance.
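As a rough illustration of the slow adaptive-modulation idea (not the dissertation's actual scheme), the sketch below selects, for an estimated SNR, the highest-order square M-QAM constellation whose approximate bit error probability stays below a target; the BEP approximation is the usual Gray-coded M-QAM textbook formula, and the SNR values and target are placeholders.

```python
# Sketch of slow adaptive modulation: pick the largest square M-QAM constellation
# whose approximate bit error probability (BEP) meets a target at the estimated SNR.
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def mqam_bep(M, ebn0_db):
    """Approximate BEP of Gray-coded square M-QAM over AWGN."""
    k = np.log2(M)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return (4.0 / k) * (1.0 - 1.0 / np.sqrt(M)) * qfunc(np.sqrt(3.0 * k * ebn0 / (M - 1.0)))

def select_modulation(ebn0_db, target_bep=1e-3, candidates=(4, 16, 64, 256)):
    best = None
    for M in candidates:                 # candidates in increasing spectral efficiency
        if mqam_bep(M, ebn0_db) <= target_bep:
            best = M
    return best                          # None: even QPSK misses the target

for snr in (6, 12, 18, 24):              # illustrative estimated Eb/N0 values (dB)
    print(snr, "dB ->", select_modulation(snr))
```

Imperfect channel estimation, as studied in the dissertation, would enter this picture as an error on the SNR fed to the decision rule, which is one way the nominally optimal fast adaptation can end up losing to a slower, more conservative scheme.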
Abstract:
This thesis highlights several issues concerning exascale machines (systems delivering an exaflops of computing power) and the evolution of the software that will run on them, examining above all the need for their development, since they are indispensable for studying larger scientific and technological problems, with particular attention to Materials Science, which is one of the fields that has benefited most from the use of supercomputers, and to one of the HPC codes most widely used in this context: Quantum ESPRESSO. On the software side, the first energy-efficiency measurements of Quantum ESPRESSO on a hybrid architecture are presented, obtained on the EURORA prototype cluster. These are the first such measurements published for Materials Science software and will serve as a baseline for future optimizations driven by energy efficiency. Indeed, on exascale machines, being energy efficient will be an access requirement, just as code scalability is today. Another very important aspect of exascale machines is the reduction of the number of communications, which lowers the energy cost of a parallel algorithm, since on these new systems moving data will cost more, in energy terms, than computing on it. For this reason, this work presents a strategy, and its implementation, to increase data locality in one of the most computationally expensive algorithms in Quantum ESPRESSO: the Fast Fourier Transform (FFT). To bring current software to an exascale machine, one must begin testing the robustness of such software and its workflows on test cases that stress the machines available today to their limits. In this thesis, to test the workflow of Quantum ESPRESSO and WanT, a transport-calculation code, a scientifically relevant system was characterized, consisting of a PDI-FCN2 crystal that is used in the fabrication of organic OFET transistors. Finally, an ideal device was simulated, consisting of two gold electrodes with a single organic molecule between them.
Abstract:
This work focuses on the fluid dynamic study of the cavitating multiphase flow in an injector for gasoline direct injection (GDI) engine applications. The analysis was carried out with the CFD (Computational Fluid Dynamics) software Star-CCM+® developed by CD-adapco. The objective of this study is to investigate the reasons for the discrepancy between the measurements from the experimental characterization of the injector and what is expected from the nominal values given in the injector specification, with particular reference to the flow-rate distribution among the different nozzle holes. This work is part of a pair of related theses and therefore also aims to provide data useful for the development of the companion analysis, which seeks quality parameters of the non-reacting air-fuel mixture useful for predicting the particulate formed by combustion in a GDI engine. The thesis consists of 5 chapters and is organized as follows. Chapter 1 presents the motivations behind the work and reviews the state of the art of GDI technology. Chapter 2 provides the theoretical background: the fundamentals of the cavitation process in the first part and the numerical models used in the analysis in the second. Chapter 3 describes the modelling and the subsequent validation of the models by comparison with the test case 'Comprehensive hydraulic and flow field documentation in model throttle experiments under cavitation conditions' (E. Winklhofer, 2001). In choosing the models and the related parameters, the analysis was based on previous work found in the literature. A sensitivity study was then carried out to evaluate the stability of the solution under small variations of the parameter values. The choice of the model parameters for the case of interest, the multi-hole injector, was initially based on the 'optimal' values obtained in the test case and is the subject of Chapter 4. That chapter also discusses the subsequent sensitivity analysis, carried out to understand the reasons for the imbalance between corresponding holes and for the greater development of the central jet with respect to the others. Chapter 5, after a brief summary of the main points covered in the thesis, draws the conclusions of the analysis and outlines future developments.
Abstract:
The AEGISS (Ascertainment and Enhancement of Gastrointestinal Infection Surveillance and Statistics) project aims to use spatio-temporal statistical methods to identify anomalies in the space-time distribution of non-specific, gastrointestinal infections in the UK, using the Southampton area in southern England as a test-case. In this paper, we use the AEGISS project to illustrate how spatio-temporal point process methodology can be used in the development of a rapid-response, spatial surveillance system. Current surveillance of gastroenteric disease in the UK relies on general practitioners reporting cases of suspected food-poisoning through a statutory notification scheme, voluntary laboratory reports of the isolation of gastrointestinal pathogens and standard reports of general outbreaks of infectious intestinal disease by public health and environmental health authorities. However, most statutory notifications are made only after a laboratory reports the isolation of a gastrointestinal pathogen. As a result, detection is delayed and the ability to react to an emerging outbreak is reduced. For more detailed discussion, see Diggle et al. (2003). A new and potentially valuable source of data on the incidence of non-specific gastro-enteric infections in the UK is NHS Direct, a 24-hour phone-in clinical advice service. NHS Direct data are less likely than reports by general practitioners to suffer from spatially and temporally localized inconsistencies in reporting rates. Also, reporting delays by patients are likely to be reduced, as no appointments are needed. Against this, NHS Direct data sacrifice specificity. Each call to NHS Direct is classified only according to the general pattern of reported symptoms (Cooper et al, 2003). The current paper focuses on the use of spatio-temporal statistical analysis for early detection of unexplained variation in the spatio-temporal incidence of non-specific gastroenteric symptoms, as reported to NHS Direct. Section 2 describes our statistical formulation of this problem, the nature of the available data and our approach to predictive inference. Section 3 describes the stochastic model. Section 4 gives the results of fitting the model to NHS Direct data. Section 5 shows how the model is used for spatio-temporal prediction. The paper concludes with a short discussion.
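The anomaly-detection step in such a system ultimately reduces to a predictive exceedance calculation: for each small area, estimate the probability that the current relative risk exceeds a threshold and flag the area if that probability is high. The Monte Carlo sketch below illustrates only that decision rule, with fabricated posterior samples; it is not the AEGISS implementation.

```python
# Sketch of the surveillance decision rule: given predictive samples of the current
# relative risk in each area (fabricated here), flag areas where P(risk > c) is high.
import numpy as np

def flag_anomalies(risk_samples, risk_threshold=2.0, prob_threshold=0.95):
    """risk_samples: array (n_draws, n_areas) of predictive draws of relative risk."""
    exceed_prob = (risk_samples > risk_threshold).mean(axis=0)
    return exceed_prob, exceed_prob > prob_threshold

rng = np.random.default_rng(1)
draws = rng.lognormal(mean=[0.0, 0.1, 1.5], sigma=0.4, size=(5000, 3))
probs, flags = flag_anomalies(draws)
print(np.round(probs, 3), flags)   # only the third (elevated-risk) area is flagged
```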
Abstract:
In the context of expensive numerical experiments, a promising solution for alleviating the computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time is at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge consists of the adequate approximation of the error due to partial convergence, which is correlated in both design variables and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that reflects accurately the actual structure of the error. Practical solutions are proposed for solving parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
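A hedged sketch of the modelling idea follows (not the paper's exact covariance): a Gaussian process over the joint space of a design variable and the computational time, with an extra covariance term whose variance decays as the simulation converges. The kernel form, hyperparameters and data below are illustrative assumptions only.

```python
# Sketch: GP regression in the joint (design variable x, convergence time t) space.
# A stationary squared-exponential kernel in x is combined with a rank-one term
# whose variance decays with t, a crude stand-in for the partial-convergence error.
import numpy as np

def kernel(X1, X2, length_x=0.3, sigma_f=1.0, sigma_err=0.5, tau=5.0):
    x1, t1 = X1[:, :1], X1[:, 1:]
    x2, t2 = X2[:, :1], X2[:, 1:]
    k_design = sigma_f**2 * np.exp(-0.5 * (x1 - x2.T) ** 2 / length_x**2)
    k_error = sigma_err**2 * np.exp(-(t1 + t2.T) / tau)    # shrinks as t grows
    return k_design * (1.0 + k_error)

def gp_posterior_mean(X_train, y_train, X_test, nugget=1e-8):
    K = kernel(X_train, X_train) + nugget * np.eye(len(X_train))
    return kernel(X_test, X_train) @ np.linalg.solve(K, y_train)

# Partially converged observations: columns are (design x, convergence time t).
X_train = np.array([[0.1, 2.0], [0.4, 5.0], [0.7, 3.0], [0.9, 8.0]])
y_train = np.sin(2 * np.pi * X_train[:, 0]) + 0.3 * np.exp(-X_train[:, 1] / 5.0)
X_test = np.array([[0.5, 20.0]])           # query near full convergence
print(gp_posterior_mean(X_train, y_train, X_test))
```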