916 results for automatic test case generation
Abstract:
The evolution of integrated circuit technologies demands the development of new CAD tools. The traditional development of digital circuits at the physical level is based on cell libraries. These libraries offer a certain predictability of the electrical behavior of the design, due to the previous characterization of the cells. Moreover, different versions of each cell are required so that delay and power consumption characteristics are taken into account, which increases the number of cells in a library. Automatic full-custom layout generation is an increasingly important alternative to cell-based generation approaches. This strategy implements transistors and connections according to patterns defined by algorithms, so any logic function can be implemented, avoiding the limitations of a cell library. Analysis and estimation tools must provide predictability for automatic full-custom layouts: they must be able to work with layout estimates and to generate information related to delay, power consumption, and area occupation. This work includes research on new physical synthesis methods and the implementation of an automatic layout generator in which the cells are generated at the moment of layout synthesis. The research investigates different strategies for the placement of layout elements (transistors, contacts, and connections) and their effects on area occupation and circuit delay. The presented layout strategy applies delay optimization through integration with a gate sizing technique, in which the folding method allows individual discrete sizing of transistors. The main characteristics of the proposed strategy are: power supply lines between rows; over-the-layout routing (channel routing is not used); circuit routing performed before layout generation; and layout generation targeting delay reduction through the application of the sizing technique.
The possibility of implementing any logic function, without restrictions imposed by a cell library, allows circuit synthesis with an optimized number of transistors. This reduction in transistor count decreases delay and power consumption, mainly the static power consumption in submicrometer circuits. Comparisons between the proposed strategy and other well-known methods are presented, validating the proposed method.
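The folding step mentioned above can be illustrated with a small sketch: a transistor wider than the available row height is split into parallel fingers, so each transistor can still be sized individually within a fixed-height row. This is a minimal, hypothetical illustration of the general technique; the function name, parameters, and micrometre values below are illustrative assumptions, not the thesis's actual implementation.

```python
# Discrete transistor sizing via folding (illustrative sketch):
# a transistor whose target width exceeds the maximum finger width
# allowed by the row is split into equal-width parallel fingers.
import math

def fold_transistor(target_width_um: float, max_finger_width_um: float):
    """Return (number_of_fingers, finger_width_um) for a folded transistor."""
    if target_width_um <= max_finger_width_um:
        return 1, target_width_um                  # fits the row: no folding
    fingers = math.ceil(target_width_um / max_finger_width_um)
    return fingers, target_width_um / fingers      # equal parallel fingers

fingers, width = fold_transistor(10.0, 3.0)
# a 10 um transistor in a row allowing 3 um fingers -> 4 fingers of 2.5 um
```

Because the fingers sit in parallel, the effective electrical width is preserved while the layout height stays bounded, which is what allows the sizing technique to assign each transistor its own discrete width.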
Abstract:
A key to maintaining an enterprise's competitiveness is the ability to describe, standardize, and adapt the way it reacts to certain types of business events, and how it interacts with suppliers, partners, competitors, and customers. In this context, the field of organization modeling has emerged with the aim of creating models that help establish a state of self-awareness in the organization. This project's context is the use of the Semantic Web in the organizational modeling area: the advantages of Semantic Web technology can be used to improve the way organizations are modeled. This was accomplished using a semantic wiki to model organizations. Our research and implementation had two main purposes: formalization of textual content in semantic wiki pages, and automatic generation of diagrams from organization data stored in the semantic wiki pages.
Abstract:
Power system stability analysis is approached taking explicitly into account the dynamic performance of generator internal voltages and control devices. The proposed method is not a direct method in the usual sense, since the conclusion of stability or instability is not based exclusively on energy function considerations; but it is automatic, since the conclusion is reached without analyst intervention. The stability test accounts for the nonconservative nature of a system with control devices such as the automatic voltage regulator (AVR) and automatic generation control (AGC), in contrast with the well-known direct methods. An energy function is derived for the system with a fourth-order machine model, AVR, and AGC, and it is used to start the analysis procedure and to point out criticalities. The conclusive analysis itself is made by means of a method based on the definition of a region surrounding the equilibrium point where the system net torque is equilibrium-restorative. This region is named the positive synchronization region (PSR). Since the definition of the PSR boundaries has no dependence on modelling approximations, the PSR test leads to reliable results. (C) 2008 Elsevier Ltd. All rights reserved.
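The energy-function and restorative-torque ideas above can be conveyed with a hedged single-machine, classical-model sketch (the paper itself uses a multimachine fourth-order model with AVR and AGC, so this only illustrates the concept, not the paper's actual formulation):

```latex
% Classical single-machine sketch (illustrative assumption, not the paper's model):
% swing equation and the associated energy function
M\ddot{\delta} = P_m - P_e(\delta), \qquad
V(\delta,\omega) = \tfrac{1}{2}M\omega^{2}
  - \int_{\delta^{s}}^{\delta} \bigl(P_m - P_e(u)\bigr)\,du
% Positive synchronization region: angles where the net torque pushes the
% state back towards the stable equilibrium \delta^{s}
\mathrm{PSR} = \bigl\{\, \delta : (\delta - \delta^{s})\bigl(P_e(\delta) - P_m\bigr) > 0 \,\bigr\}
```

Inside the PSR the accelerating power $P_m - P_e(\delta)$ opposes the angle deviation, so the net torque is restorative; this sign condition, unlike the energy function itself, does not rest on modelling approximations.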
Abstract:
Representative Life-Cycle Inventories (LCIs) are essential for the quality and readiness of Life-Cycle Assessments (LCAs). Because energy is such an important element of LCAs, appropriate LCIs on energy are crucial; due to the prevalence of hydropower in the Brazilian electricity mix, the frequently used LCIs are not representative of Brazilian conditions. The present study developed an LCI of the Itaipu Hydropower Plant, the largest hydropower plant in the world, responsible for producing 23.8% of Brazil's electricity consumption. Focused on the capital investments to construct and operate the dam, the LCI was designed to serve as a database for LCAs of Brazilian hydroelectricity production. The life-cycle boundaries encompass the construction and operation of the dam and the life-cycles of the most important material and energy inputs (cement, steel, copper, diesel oil, lubricant oil), as well as construction site operation, emissions from reservoir flooding, material and worker transportation, and earthworks. As a result, besides the presented inventory, it was possible to identify the following processes, and respective environmental burdens, as the most important life-cycle hotspots: reservoir filling (CO2 and CH4 emissions; land use); the steel life-cycle (water and energy consumption; CO, particulate, SOx and NOx emissions); the cement life-cycle (water and energy consumption; CO2 and particulate emissions); and the operation of civil construction machines (diesel consumption; NOx emissions). Compared with other hydropower studies, the LCI showed adequate magnitudes, with better results than small hydropower plants, which reveals an economy of scale for material and energy exchanges in the case of the Itaipu Power Plant. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The applications of the Finite Element Method (FEM) to three-dimensional domains are already well documented in the framework of computational electromagnetics. However, despite the power and reliability of this technique for solving partial differential equations, there are only a few examples of open source codes available and dedicated to solid modeling and automatic constrained tetrahedralization, which are the most time-consuming steps in a typical three-dimensional FEM simulation. Moreover, these open source codes are usually developed separately by distinct software teams, sometimes even under conflicting specifications. In this paper, we describe an experiment in open source code integration for solid modeling and automatic mesh generation. The integration strategy and techniques are discussed, and examples and performance results are given, especially for complicated and irregular volumes which are not simply connected. © 2011 IEEE.
Commissioning of hydraulic turbines: operating range, index, and load rejection tests
Abstract:
With growing electricity demand, generation through hydropower, a renewable energy source, is of great importance. This demand derives from the country's growth, as well as from events that will occur in the coming years. Commissioning plays a crucial role before a hydroelectric plant enters operation, since it ensures proper operation of the hydraulic and electrical systems, as well as the safety of the installation. This paper is a case study of the commissioning of a small hydropower plant (PCH), focusing on the most important tests: operating range, index test, and load rejection. These tests provide an idea of the actual behavior of the unit, inform future operating maneuvers, and provide evidence of real efficiency.
Abstract:
The meccano method is a novel and promising mesh generation method for simultaneously creating adaptive tetrahedral meshes and volume parametrizations of a complex solid. We highlight the fact that the method requires minimal user intervention and has a low computational cost. The method builds a 3-D triangulation of the solid as a deformation of an appropriate tetrahedral mesh of the meccano. The new mesh generator combines an automatic parametrization of surface triangulations, a local refinement algorithm for 3-D nested triangulations, and a simultaneous untangling and smoothing procedure. At present, the procedure is fully automatic for a genus-zero solid, in which case the meccano can be a single cube. The efficiency of the proposed technique is shown with several applications...
Abstract:
The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, after which the turbofan or turboprop best suited to the specific application is chosen. In the aeronautical piston engine field, which lay dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, with several non-linearities needed to describe the behaviour of the phenomena. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, can offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration of minimum mass that satisfies the geometrical, structural, and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The design variable bounds, which describe the solution domain from the geometrical point of view, are introduced as well.
In the Matlab® Optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods, and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection, and crossover. In the reproduction step an elitist method is applied, in order to save the fittest individuals from disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to build an automatic 3D CAD model of each component of the propulsive system, providing a direct preview of the final product while still in the engine's preliminary design phase. To demonstrate the performance of the algorithm and validate the optimization method, an actual engine is taken as a case study: the Fiat Avio 1900 JTD, a four-cylinder, four-stroke Diesel engine. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and to decide their survival through the generations. A system of inequalities describes the non-linear relations between the design variables and is used to check the components under static and dynamic load configurations. The geometrical boundaries of the design variables are taken from actual engine data and similar design cases. Among the many simulations run for algorithm testing, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod have been automatically built.
In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison with the original configuration, and an acceptable robustness of the method has been demonstrated. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfill the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the production rate and joining modern computational performance and technological awareness to long-standing traditional design experience.
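The GA workflow described above (penalized single objective, genetic operators, elitism) can be sketched in a few lines. This is a toy illustration under stated assumptions: the "mass" objective, the single inequality constraint, and all parameter values below are made up for demonstration and stand in for the thesis's actual engine model and constraint system.

```python
# Single-objective GA with constraint penalties and elitism (toy sketch).
import random

random.seed(42)
BOUNDS = [(1.0, 10.0), (1.0, 10.0)]           # design-variable bounds

def mass(x):                                   # toy objective: total "mass"
    return x[0] + 2.0 * x[1]

def penalty(x):                                # toy inequality: x0 * x1 >= 12
    return max(0.0, 12.0 - x[0] * x[1]) * 1e3  # heavy penalty if violated

def fitness(x):                                # penalized objective to minimize
    return mass(x) + penalty(x)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):                           # blend crossover stays in bounds
    return [(ai + bi) / 2.0 for ai, bi in zip(a, b)]

def mutate(x, rate=0.2):                       # gaussian mutation, clamped
    return [min(hi, max(lo, xi + random.gauss(0, 0.5)))
            if random.random() < rate else xi
            for xi, (lo, hi) in zip(x, BOUNDS)]

pop = [random_individual() for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness)
    elite = pop[0]                             # elitism: best survives intact
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(len(pop) - 1)]
    pop = [elite] + children

best = min(pop, key=fitness)
```

Elitism makes the best fitness monotonically non-increasing across generations, mirroring the abstract's point about protecting the fittest individuals from mutation and recombination disruption.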
Towards model driven software development for Arduino platforms: a DSL and automatic code generation
Abstract:
The thesis aims to explore the production of software systems for embedded systems using techniques from the world of Model Driven Software Development. The most important development phase is the definition of a meta-model characterizing the fundamental concepts of embedded systems. This model attempts to abstract away from any particular platform and to identify the abstractions that characterize the embedded systems domain in general; the meta-model is therefore platform-independent. For automatic code generation, a reference platform was adopted: Arduino. Arduino is an embedded system that is gaining increasing traction because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined is an instance of the MOF meta-metamodel, formally defined by the OMG. This allows the developer to think of a system in the form of a model, an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, and can therefore be defined by a set of EBNF rules. The technology used to define the meta-model was Xtext: a framework that allows EBNF rules to be written and automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plugins that provide a syntax-driven editor for the defined language. Automatic code generation was implemented using the Xtend2 language, which makes it possible to traverse the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files.
The generated code provides essentially the entire schematic part of the application, leaving the development of the business logic to the application designer. After defining the meta-model of an embedded system, the level of abstraction was raised towards the part of the meta-model concerning the interaction of an embedded system with other systems. The perspective thus shifted to that of a System, understood as a set of interacting concentrated systems; this definition is made from the point of view of the concentrated system whose model is being defined. The thesis also introduces a case study which, although fairly simple, provides an example and a tutorial for developing applications with the meta-model. It also shows how the application designer's task becomes rather simple and immediate, provided it is based on a good analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
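The model-to-text pipeline described above can be conveyed with a toy sketch. The thesis uses Xtext (meta-model/grammar) and Xtend2 (template-based generation); here a hypothetical Python model and string template stand in for both, generating the "schematic part" of an Arduino-style sketch while leaving the business logic to the application designer. The `Device` class and all names are illustrative assumptions, not the thesis's DSL.

```python
# Toy model-to-text generation: a device model is traversed and Arduino-style
# C code is emitted, pin setup included, business logic left as a stub.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    pin: int
    kind: str            # "sensor" (input pin) or "actuator" (output pin)

def generate_sketch(devices):
    """Emit the schematic part of an Arduino-style sketch from the model."""
    defines = "\n".join(f"#define {d.name.upper()}_PIN {d.pin}"
                        for d in devices)
    modes = "\n".join(
        f"  pinMode({d.name.upper()}_PIN, "
        f"{'INPUT' if d.kind == 'sensor' else 'OUTPUT'});"
        for d in devices)
    return (f"{defines}\n\n"
            f"void setup() {{\n{modes}\n}}\n\n"
            f"void loop() {{\n  // business logic left to the designer\n}}\n")

model = [Device("button", 2, "sensor"), Device("led", 13, "actuator")]
code = generate_sketch(model)
# code contains the #define lines, pinMode() calls in setup(), and an empty loop()
```

In the real tool chain the model is an Ecore instance and the template is an Xtend2 rich string, but the shape of the transformation (traverse model, emit text per element) is the same.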
Abstract:
BACKGROUND: Complete investigation of thrombophilic or hemorrhagic clinical presentations is a time-, apparatus-, and cost-intensive process. Sensitive screening tests for characterizing the overall function of the hemostatic system, or defined parts of it, would be very useful. For this purpose, we are developing an electrochemical biosensor system that allows measurement of thrombin generation in whole blood as well as in plasma. METHODS: The measuring system consists of a single-use electrochemical sensor in the shape of a strip and a measuring unit connected to a personal computer, recording the electrical signal. Blood is added to a specific reagent mixture immobilized in dry form on the strip, including a coagulation activator (e.g., tissue factor or silica) and an electrogenic substrate specific to thrombin. RESULTS: Increasing thrombin concentrations gave standard curves with progressively increasing maximal current and decreasing time to reach the peak. Because the measurement was unaffected by color or turbidity, any type of blood sample could be analyzed: platelet-poor plasma, platelet-rich plasma, and whole blood. The test strips with the predried reagents were stable when stored for several months before testing. Analysis of the combined results obtained with different activators allowed discrimination between defects of the extrinsic, intrinsic, and common coagulation pathways. Activated protein C (APC) predried on the strips allowed identification of APC resistance in plasma and whole blood samples. CONCLUSIONS: The biosensor system provides a new method for assessing thrombin generation in plasma or whole blood samples as small as 10 µL. The assay is easy to use, thus allowing it to be performed in a point-of-care setting.
Abstract:
Background Tissue microarray (TMA) technology revolutionized the investigation of potential biomarkers from paraffin-embedded tissues. However, conventional TMA construction is laborious, time-consuming, and imprecise. Next-generation tissue microarrays (ngTMA) combine histological expertise with digital pathology and automated tissue microarraying. The aim of this study was to test the feasibility of ngTMA for the investigation of biomarkers within the tumor microenvironment (tumor center and invasion front) of six tumor types, using CD3, CD8 and CD45RO as an example. Methods Ten cases each of malignant melanoma, lung, breast, gastric, prostate and colorectal cancers were reviewed. The most representative H&E slide was scanned and uploaded onto a digital slide management platform. Slides were viewed and seven TMA annotations of 1 mm in diameter were placed directly onto the digital slide. Different colors were used to identify the exact regions in normal tissue (n = 1), tumor center (n = 2), tumor front (n = 2), and tumor microenvironment at the invasion front (n = 2) for subsequent punching. Donor blocks were loaded into an automated tissue microarrayer. Images of the donor block were superimposed with the annotated digital slides. The exact annotated regions were punched out of each donor block and transferred into a TMA block. The 420 tissue cores created two ngTMA blocks. H&E staining and immunohistochemistry for CD3, CD8 and CD45RO were performed. Results All 60 slides were scanned automatically (total time < 10 hours), uploaded, and viewed. Annotation time was 1 hour. The 60 donor blocks were loaded into the tissue microarrayer simultaneously. Alignment of donor block images and digital slides was possible in less than 2 minutes/case. Automated punching of tissue cores and transfer took 12 seconds/core. Total ngTMA construction time was 1.4 hours.
Stains for H&E and CD3, CD8 and CD45RO highlighted the precision with which ngTMA could capture regions of tumor-stroma interaction in each cancer and the T-lymphocytic immune reaction within the tumor microenvironment. Conclusion Based on manual selection criteria, ngTMA can capture histological zones or cell types of interest in a precise and accurate way, aiding the pathological study of the tumor microenvironment. This approach would be advantageous for visualizing proteins, DNA, mRNA and microRNAs in specific cell types using in situ hybridization techniques.
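The construction-time figure quoted in the abstract can be sanity-checked from the per-core throughput it reports: 420 cores at 12 seconds per automated punch-and-transfer cycle.

```python
# Sanity check of the ngTMA throughput figures reported in the abstract.
cores = 420                      # tissue cores across two ngTMA blocks
seconds_per_core = 12            # automated punch + transfer per core

total_seconds = cores * seconds_per_core   # 5040 s
total_hours = total_seconds / 3600
# total_hours == 1.4, matching the reported total construction time
```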