910 results for test case generation


Relevance: 80.00%

Publisher:

Abstract:

The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s, which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time, when the interfacial force is linear. However, this linear system is large and dense and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, and we obtain a rigorous estimate for this approximation. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. (C) 2009 Elsevier Inc. All rights reserved.
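For orientation, the lagged-operators construction referred to above can be written schematically as follows (the notation is generic and not taken from the paper). If $\mathcal{S}[X]$ and $\mathcal{J}[X]$ denote the force-spreading and velocity-interpolation operators, $\mathcal{L}^{-1}$ a linear fluid solve, and the interfacial force is linear, $F(X)=AX$, then lagging the operators at the known configuration $X^{n}$ while taking the force at $X^{n+1}$ gives

$$X^{n+1} \;=\; X^{n} \;+\; \Delta t\,\mathcal{J}[X^{n}]\,\mathcal{L}^{-1}\,\mathcal{S}[X^{n}]\,A\,X^{n+1},$$

i.e. a linear system $\bigl(I-\Delta t\,\mathcal{J}[X^{n}]\mathcal{L}^{-1}\mathcal{S}[X^{n}]A\bigr)X^{n+1}=X^{n}$ for the interface configuration at the future time. The matrix is dense because the fluid solve couples all interface points, which is why the approximation and iterative strategies described in the abstract are needed.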

Relevance: 80.00%

Publisher:

Abstract:

The self, roles and the ongoing coordination of human action: trying to see ‘society’ as neither prison nor puppet theatre. The article argues that structural North-American role sociology may be integrated with theories that emphasize ‘society’ as ongoing processes (e.g. Giddens’ theory of structuration). This is possible if the concept of role is defined as a recurrence, oriented to the action of others, that stands out as a regularity in a societal process. But this definition makes it necessary to understand, in a fundamental way, what kind of social being the role-actor is. This is done with the help of Hans Joas’ theory of creativity and Merleau-Ponty’s concept of ‘flesh’, arguing that Mead’s concept of the ‘I’ may be understood as an embodied, self-asserting I which, at least in reflexive modernity, has the creative power to split Mead’s ‘me’ into a self-voiced subject-me and an other-voiced object-me. The embodied I communicating with the subject-me may be viewed as the role-actor that is something other than the role played. But this kind of role-actor creates new trouble, because it is hard to understand how such a self creates self-coherence through Mead’s concept of ‘the generalized other’. This trouble is handled by using Alain Touraine’s concept of the ‘subject’ and arguing that the generalized other is dissolving in de-modernized modernity. In split modernity, self-coherence may instead be created by what the article calls the generalized subject. This concept denotes a kind of communicative, future-based evaluation grounded in the ‘subject’ opposing the split powers of both the instrumentality of markets and of life-worlds that try to create ‘fundamentalistic’ self-identities. This kind of self is communicative because it must also respect the other as ‘subject’. It exists only in the battle against the forces of the market or a community. It never constructs an ideal city or a higher type of individual. It creates and protects a clearing that is constantly being invaded, to use the words of the old Frenchman himself. As a kind of test case, the article also shows how Beck’s concept of individualization may be understood in a deeply social and role-sociological way.

Relevance: 80.00%

Publisher:

Abstract:

A system built in terms of autonomous agents may require even greater correctness assurance than one which is merely reacting to the immediate control of its users. Agents make substantial decisions for themselves, so thorough testing is an important consideration. However, autonomy also makes testing harder; by their nature, autonomous agents may react in different ways to the same inputs over time, because, for instance, they have changeable goals and knowledge. For this reason, we argue that testing of autonomous agents requires a procedure that caters for a wide range of test case contexts, and that can search for the most demanding of these test cases, even when they are not apparent to the agents’ developers. In this paper, we address this problem, introducing and evaluating an approach to testing autonomous agents that uses evolutionary optimization to generate demanding test cases.
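As a rough illustration of the idea (not the paper's actual procedure), the sketch below evolves test cases toward more demanding ones. The test-case encoding, the mutation and crossover operators, and the demand_score stand-in for "how demanding a test case is for the agent under test" are all hypothetical.

```python
import random

# Hypothetical stand-in: run the agent under test on a test case (here, a vector of
# environment parameters) and return a difficulty score. Higher = more demanding.
def demand_score(test_case):
    target = [0.7, 0.2, 0.9, 0.4]          # pretend "hard" region of the input space
    return -sum((x - t) ** 2 for x, t in zip(test_case, target))

def random_case(n_params=4):
    return [random.random() for _ in range(n_params)]

def mutate(case, rate=0.2):
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1))) if random.random() < rate else x
            for x in case]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve_test_cases(pop_size=30, generations=50):
    population = [random_case() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=demand_score, reverse=True)
        parents = population[: pop_size // 2]          # keep the most demanding cases
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=demand_score)

if __name__ == "__main__":
    print("most demanding test case found:", evolve_test_cases())
```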

Relevance: 80.00%

Publisher:

Abstract:

The recent advances in CMOS technology have allowed for the fabrication of transistors with submicronic dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. Such an increase in design complexity has created a need for more efficient verification tools that can incorporate more appropriate physical and computational models. Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied or not. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimulus-dependent. Hence, in order to ensure that the critical situation is taken into account, one must exercise all possible input patterns. Obviously, this is not feasible due to the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay, and are thus referred to as topological timing analyzers. However, such a method may result in overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology, but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three aspects: the set of sensitization conditions necessary to declare a path sensitizable (i.e., the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable or not. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been studied exhaustively over the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates. This is the central concern of this thesis. In addition, and as a necessary step to set the scene, a detailed and systematic study of functional timing analysis is also presented.
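To make the distinction concrete, the following minimal sketch (gate names and delays are invented) computes the purely topological critical delay on such a directed acyclic graph; a functional timing analyzer would additionally check, via ATPG or SAT techniques, whether the reported longest path is actually sensitizable.

```python
from functools import lru_cache

# Hypothetical combinational block: each node maps to (delay, list of fanin nodes).
# Primary inputs have no fanins and zero delay.
circuit = {
    "a": (0.0, []), "b": (0.0, []), "c": (0.0, []),
    "g1": (1.2, ["a", "b"]),
    "g2": (0.8, ["b", "c"]),
    "g3": (1.5, ["g1", "g2"]),
    "out": (0.3, ["g3", "g2"]),
}

@lru_cache(maxsize=None)
def arrival(node):
    """Latest signal arrival time at `node`, ignoring sensitization (topological)."""
    delay, fanins = circuit[node]
    if not fanins:
        return delay
    return delay + max(arrival(f) for f in fanins)

def critical_path(node):
    """Trace back the topologically longest path ending at `node`."""
    _, fanins = circuit[node]
    if not fanins:
        return [node]
    worst = max(fanins, key=arrival)
    return critical_path(worst) + [node]

print("topological critical delay:", arrival("out"))
print("topological critical path:", " -> ".join(critical_path("out")))
```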

Relevance: 80.00%

Publisher:

Abstract:

Research on the inverted pendulum has gained momentum over the last decade in a number of robotics laboratories around the world; due to its unstable properties, it is a good example for control engineers to verify a control theory. To verify that the pendulum can balance, we can run simulations using a closed-loop controller such as the linear quadratic regulator or the proportional-integral-derivative method. The idea of robotic teleoperation is also gaining ground: controlling a robot at a distance, and doing so precisely. Designing a tool that takes best advantage of human skills while keeping the error minimal is interesting, and since the inverted pendulum is an unstable system, it makes a compelling test case for exploring dynamic teleoperation. Therefore, this thesis focuses on the construction of a two-wheel inverted pendulum robot: which sensors can be used, how they must be integrated into the system, and how a human can be used to control an inverted pendulum. The inverted pendulum robot developed employs technology such as sensors, actuators and controllers. This Master's thesis starts by presenting an introduction to inverted pendulums and some background on related areas such as control theory. It continues by describing related work in this area. We then describe the mathematical model of a two-wheel inverted pendulum and a simulation made in Matlab. We also focus on the construction of this type of robot and its working theory. Because this is a mobile robot, we address the theme of teleoperation, and finally the thesis closes with a general conclusion and ideas for future work.
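As a small illustration of the balance check mentioned above (the thesis itself works in Matlab and with the full two-wheel robot model), the sketch below computes an LQR gain for a simplified linearized pendulum with made-up parameters and simulates the closed loop.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Simplified linearized pendulum about the upright position:
# state x = [theta, theta_dot], input u = torque applied at the pivot.
g, l, m = 9.81, 0.30, 0.25                      # illustrative parameters, not from the thesis
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])
B = np.array([[0.0],
              [1.0 / (m * l ** 2)]])

# LQR: minimize the integral of x'Qx + u'Ru; gain K = R^-1 B' P, with P from the Riccati equation.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)                 # shape (1, 2)

# Quick closed-loop check with explicit Euler integration from a small initial tilt.
x = np.array([0.2, 0.0])                        # about 11 degrees off vertical
dt = 0.002
for _ in range(3000):                           # 6 simulated seconds
    u = -(K @ x)[0]
    x = x + dt * (A @ x + B.ravel() * u)

print("final angle (rad):", x[0])               # should be close to zero if balancing works
```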

Relevance: 80.00%

Publisher:

Abstract:

This article introduces an efficient method to generate structural models for medium-sized silicon clusters. Geometrical information obtained from previous investigations of small clusters is initially sorted and then introduced into our predictor algorithm in order to generate structural models for large clusters. The method predicts geometries whose binding energies are close (95%) to the corresponding ground-state value, at very low computational cost. These predictions can be used as a very good initial guess for any global optimization algorithm. As a test case, information from clusters of up to 14 atoms was used to predict good models for silicon clusters of up to 20 atoms. We believe that the new algorithm may enhance the performance of most optimization methods whenever some previous information is available. (C) 2003 Wiley Periodicals, Inc.

Relevance: 80.00%

Publisher:

Abstract:

The Weyl-Wigner prescription for quantization on Euclidean phase spaces makes essential use of Fourier duality. The extension of this property to more general phase spaces requires the use of Kac algebras, which provide the necessary background for the implementation of Fourier duality on general locally compact groups. Kac algebras - and the duality they incorporate - are consequently examined as candidates for a general quantization framework extending the usual formalism. Using as a test case the simplest nontrivial phase space, the half-plane, it is shown how the structures present in the complete-plane case must be modified. Traces, for example, must be replaced by their noncommutative generalizations - weights - and the correspondence embodied in the Weyl-Wigner formalism is no longer complete. Provided the underlying algebraic structure is suitably adapted to each case, Fourier duality is shown to be indeed a very powerful guide to the quantization of general physical systems.
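For orientation, the Euclidean-phase-space correspondence the abstract starts from is the standard Weyl map which, up to normalization conventions, assigns to a classical symbol $f(q,p)$ the operator

$$\hat{A}_f=\frac{1}{2\pi}\iint \tilde f(\sigma,\tau)\,e^{\,i(\sigma\hat Q+\tau\hat P)}\,d\sigma\,d\tau,\qquad \tilde f(\sigma,\tau)=\frac{1}{2\pi}\iint f(q,p)\,e^{-i(\sigma q+\tau p)}\,dq\,dp,$$

with the Wigner function implementing the inverse map. This textbook form is not taken from the paper, but it shows how the correspondence rests on Fourier duality for the abelian group $\mathbb{R}^{2}$; it is exactly this duality that Kac algebras generalize to other locally compact groups, as discussed above.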

Relevance: 80.00%

Publisher:

Abstract:

This paper develops a novel full analytic model for vibration analysis of solid-state electronic components. The model is just as accurate as finite element models and numerically light enough to permit quick design trade-offs and statistical analysis. The paper shows the development of the model, its comparison to finite elements, and an application to a common engineering problem. A gull-wing flat pack component was selected as the benchmark test case, although the presented methodology is applicable to a wide range of component packages. Results showed very good agreement between the presented method and finite elements, and demonstrated how the method can make use of standard test data in a general application. © 2013 Elsevier Ltd.

Relevance: 80.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 80.00%

Publisher:

Abstract:

People's daily lives are surrounded by computing devices with increasing resources (sensors) and increasing processing power. The way these devices communicate is still not natural, and this retards the growth of Ubiquitous Computing. This paper presents a way in which these devices can communicate using Jini technology and concepts of Service-Oriented Architecture, applying these concepts in a test case of Ubiquitous Computing. To conduct the test case, a fictitious system for the management of a soccer championship was constructed, in which users can interact with each other and with the system in a simplified way and have access to real-time championship data during the event. This communication is performed by services built using Jini technology, based on key SOA concepts such as modularity and reusability.

Relevance: 80.00%

Publisher:

Abstract:

Graduate Program in Mechanical Engineering (Pós-graduação em Engenharia Mecânica) - FEG

Relevance: 80.00%

Publisher:

Abstract:

The ability of nanoassisted laser desorption-ionization mass spectrometry (NALDI-MS) imaging to provide selective chemical monitoring, with proper spatial distribution, of lipid profiles from tumor tissues after plate imprinting has been tested. NALDI-MS imaging identified and mapped several potential lipid biomarkers in a murine model of melanoma tumor (inoculation of B16/F10 cells). It also confirmed that the in vivo treatment of tumor-bearing mice with a synthetic supplement containing phosphoethanolamine (PHO-S) produced a marked decrease in the relative abundance of the tumor biomarkers. NALDI-MS imaging is a matrix-free LDI protocol based on the selective imprinting of lipids in the NALDI plate followed by the removal of the tissue. It therefore provides good-quality, selective chemical images with preservation of spatial distribution and less interference from tissue material. The test case described herein illustrates the potential of chemically selective NALDI-MS imaging for biomarker discovery.

Relevance: 80.00%

Publisher:

Abstract:

Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to “understand” and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our PhD to designing and developing the Matita interactive theorem prover. The software was born in the Computer Science Department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving accessibility through the web to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are:
• to study the architecture of these tools, with the aim of understanding the source of their complexity;
• to exploit such knowledge to experiment with new solutions that, for backward-compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq.
Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts, in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences:
• our internship in the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the four colours theorem;
• our collaboration in the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem.
The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using “black box” automation in large formalisations; the impossibility for a user (especially a newcomer) to master the contents of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC.
In the second part of the manuscript, many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user-driven.

Relevance: 80.00%

Publisher:

Abstract:

In recent years, due to the rapid convergence of multimedia services, the Internet and wireless communications, there has been a growing trend of heterogeneity (in terms of channel bandwidths, mobility levels of terminals, and end-user quality-of-service (QoS) requirements) in emerging integrated wired/wireless networks. Moreover, in today's systems a multitude of users coexist within the same network, each with their own QoS requirements and bandwidth availability. In this framework, embedded source coding, which allows partial decoding at various resolutions, is an appealing technique for multimedia transmission. This dissertation presents my PhD research, mainly devoted to the study of embedded multimedia bitstreams in heterogeneous networks, developed at the University of Bologna, advised by Prof. O. Andrisano and Prof. A. Conti, and at the University of California, San Diego (UCSD), where I spent eighteen months as a visiting scholar, advised by Prof. L. B. Milstein and Prof. P. C. Cosman. In order to improve multimedia transmission quality over wireless channels, joint source and channel coding optimization is investigated in a 2D time-frequency resource block for an OFDM system. We show that knowing the order of diversity in the time and/or frequency domain can assist image (video) coding in selecting optimal channel code rates (source and channel code rates). Adaptive modulation techniques, aimed at maximizing spectral efficiency, are then investigated as another possible solution for improving multimedia transmission. For both slow and fast adaptive modulation, the effects of imperfect channel estimation are evaluated, showing that the fast technique, optimal in ideal systems, might be outperformed by slow adaptive modulation when a realistic test case is considered. Finally, the effects of co-channel interference and of approximated bit error probability (BEP) are evaluated for adaptive modulation techniques, providing new decision-region concepts and showing how the widely used BEP approximations lead to a substantial loss in overall performance.
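As a rough illustration of the kind of rate-adaptation rule discussed above (not the dissertation's actual scheme), the sketch below selects the largest M-QAM constellation whose approximate BEP stays below a target, using the widely used approximation BEP ≈ 0.2·exp(−1.5·γ/(M−1)); the SNR grid, target BEP and constellation set are illustrative.

```python
import math

TARGET_BEP = 1e-3
CONSTELLATIONS = [4, 16, 64, 256]     # M-QAM orders considered (illustrative)

def approx_bep(snr_linear, m):
    """Widely used approximate bit error probability for M-QAM."""
    return 0.2 * math.exp(-1.5 * snr_linear / (m - 1))

def pick_modulation(snr_db):
    """Largest constellation whose approximate BEP stays below the target."""
    snr = 10 ** (snr_db / 10.0)
    feasible = [m for m in CONSTELLATIONS if approx_bep(snr, m) <= TARGET_BEP]
    return max(feasible) if feasible else None    # None -> do not transmit

for snr_db in (5, 10, 15, 20, 25, 30):
    m = pick_modulation(snr_db)
    bits = int(math.log2(m)) if m else 0
    label = "no tx" if m is None else f"{m}-QAM"
    print(f"SNR {snr_db:2d} dB -> {label} ({bits} bit/symbol)")
```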

Relevance: 80.00%

Publisher:

Abstract:

This thesis highlights some of the issues concerning exascale machines (systems delivering one exaflops of computing power) and the evolution of the software that will run on these systems. It mainly examines the need for their development, since they are indispensable for studying larger scientific and technological problems, with particular attention to Material Science, one of the fields that has benefited most from the use of supercomputers, and to one of the most widely used HPC codes in this context: Quantum ESPRESSO. On the software side, the first energy-efficiency measurements on a hybrid architecture are presented, obtained on the EURORA prototype cluster with Quantum ESPRESSO. These measurements are the first to be published in the context of Material Science software and will serve as a baseline for future optimisations based on energy efficiency. On exascale machines, in fact, one of the requirements for access will be energy efficiency, just as code scalability is a requirement today. Another very important aspect of exascale machines is the reduction of the number of communications, which lowers the energy cost of a parallel algorithm, because on these new systems moving data will cost more, from an energy point of view, than computing on it. For this reason, this work presents a strategy, and the corresponding implementation, to increase data locality in one of the most computationally expensive algorithms in Quantum ESPRESSO: the Fast Fourier Transform (FFT). To bring current software to an exascale machine, one must start testing the robustness of that software and its workflows on test cases that stress the machines currently available as much as possible. In this thesis, to test the workflow of Quantum ESPRESSO and WanT, a transport-calculation code, a scientifically relevant system was characterised, consisting of a PDI-FCN2 crystal used in the construction of organic OFET transistors. Finally, an ideal device was simulated, consisting of two gold electrodes with a single organic molecule at the centre.