964 results for script-driven test program generation process
Abstract:
Graduate Program in Food Engineering and Science - IBILCE
Abstract:
As software evolves, engineers use regression testing to evaluate its fitness for release. Such testing typically begins with existing test cases, and many techniques have been proposed for reusing these cost-effectively. After reusing test cases, however, it is also important to consider code or behavior that has not been exercised by existing test cases and to generate new test cases to validate it. This process is known as test suite augmentation. In this paper we present a directed test suite augmentation technique that utilizes results from the reuse of existing test cases together with an incremental concolic testing algorithm to augment test suites so that they are coverage-adequate for a modified program. We present results of an empirical study examining the effectiveness of our approach.
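The paper's implementation is not reproduced in the abstract; as a minimal Python sketch of the augmentation loop it describes, with `run_with_coverage` and `concolic_generate` as hypothetical stand-ins for a coverage tracer and an incremental concolic engine:

```python
# A minimal sketch of directed test suite augmentation (not the authors' code).
# `run_with_coverage` and `concolic_generate` are hypothetical stand-ins for a
# coverage tracer and an incremental concolic test generator.

def augment(program, existing_tests, all_branches, run_with_coverage, concolic_generate):
    """Replay the existing suite, then direct new-test generation at uncovered branches."""
    suite, covered = list(existing_tests), set()
    for test in existing_tests:                 # reuse phase
        covered |= run_with_coverage(program, test)
    for branch in all_branches - covered:       # augmentation phase
        new_test = concolic_generate(program, target=branch, seeds=suite)
        if new_test is not None:
            suite.append(new_test)
            covered |= run_with_coverage(program, new_test)
    return suite, covered
```

The loop ends when every branch of the modified program is either covered or given up on (the generator returns no test), which is the coverage-adequacy criterion the abstract refers to.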
Abstract:
The present work proposes an inverse method to estimate the heat sources in the transient two-dimensional heat conduction problem in a rectangular domain with convective boundaries. The nonhomogeneous partial differential equation (PDE) is solved using the Integral Transform Method. The test function for the heat generation term is obtained from the chip geometry and the thermomechanics of cutting. The heat generation term is then estimated by the conjugate gradient method (CGM) with the adjoint problem for parameter estimation. The experimental trials were organized to cover six different conditions, providing heat sources of different intensities. The method is compared with others in the literature and its advantages are discussed.
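For reference, the direct problem described above has the standard form below; the notation (α thermal diffusivity, g volumetric heat source, h convection coefficient) is assumed here, since the abstract itself gives no symbols:

```latex
% Direct problem in generic notation (symbols assumed, not from the abstract):
% T temperature, \alpha thermal diffusivity, g volumetric heat source,
% h convection coefficient, T_\infty ambient temperature.
\[
\frac{\partial T}{\partial t}
  = \alpha \left( \frac{\partial^{2} T}{\partial x^{2}}
                + \frac{\partial^{2} T}{\partial y^{2}} \right)
  + \frac{g(x, y, t)}{\rho c_{p}},
\qquad
- k \left. \frac{\partial T}{\partial n} \right|_{\Gamma}
  = h \,\bigl(T - T_{\infty}\bigr)
\quad \text{on the boundary } \Gamma .
\]
```

The inverse method estimates g from temperature measurements by minimizing the mismatch between measured and computed temperatures, with the gradient supplied by the adjoint problem.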
Abstract:
A model for computing the generation-recombination noise due to traps within the semiconductor film of fully depleted silicon-on-insulator MOSFET transistors is presented. The dependence of the corner frequency of the Lorentzian spectra on the gate voltage is addressed in this paper, in contrast to the constant behavior expected for bulk transistors. The shift in the corner frequency makes the characterization process easier, helping to identify the energy position, capture cross sections, and densities of the traps. This characterization task is carried out using noise measurements on two different candidate structures for single-transistor dynamic random access memory devices.
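The argument rests on the standard Lorentzian form of a generation-recombination spectrum; in the generic notation below (assumed here, not quoted from the paper), τ is the trap time constant and A a plateau amplitude:

```latex
% Standard Lorentzian G-R spectrum (generic notation, assumed here):
\[
S_{\mathrm{GR}}(f) = \frac{A \, \tau}{1 + (2 \pi f \tau)^{2}},
\qquad
f_{c} = \frac{1}{2 \pi \tau}.
\]
```

Because τ depends on the carrier concentration in the film, sweeping the gate voltage shifts f_c, and tracking that shift is what exposes the trap parameters.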
Abstract:
INTRODUCTION: The aim of this study was to assess the epidemiological and operational characteristics of the Leprosy Program before and after its integration into the primary healthcare services of the municipality of Aracaju-Sergipe, Brazil. METHODS: Data were drawn from the national database. The study periods were divided into preintegration (1996-2000) and postintegration (2001-2007). Annual epidemiological detection rates were calculated. Frequency data on clinico-epidemiological variables of cases detected and treated in the two periods were compared using the Chi-squared (χ2) test at a 5% level of significance. RESULTS: Detection rates overall, and in subjects younger than 15 years, were greater in the postintegration period and were higher than the rates recorded for Brazil as a whole during the same periods. A total of 780 and 1,469 cases were registered during the preintegration and postintegration periods, respectively. Observations for the postintegration period were as follows: I) a higher proportion of cases with disability grade assessed at diagnosis, increasing from 60.9% to 78.8% (p < 0.001), and at end of treatment, from 41.4% to 44.4% (p < 0.023); II) an increase in the proportion of cases detected by contact examination, from 2.1% to 4.1% (p < 0.001); and III) a lower level of treatment default, with a decrease from 5.64 to 3.35 (p < 0.008). Only 34% of cases registered from 2001 to 2007 were examined. CONCLUSIONS: The shift observed in detection rates overall, and in subjects younger than 15 years, during the postintegration period indicates an increased level of access to health care. The fall in the number of patients abandoning treatment indicates greater adherence to treatment. However, previous shortcomings in key actions, pivotal to attaining the outcomes and impact envisaged for the program, persisted in the postintegration period.
Abstract:
To evaluate silage production and the use of additives in the ensiling of wet brewery residue, five treatments were carried out: control (C: ensiling of 100% wet brewery residue); PC15 (15% citrus pulp); PC30 (30% citrus pulp); CS15 (15% soybean hulls); and CS30 (30% soybean hulls), on a fresh-matter basis of the brewery residue. The silages were made in plastic buckets 252 mm high and 245 mm in diameter (0.06174 m³), and samples were collected for chemical composition analyses, pH, ammonia nitrogen, in vitro dry matter digestibility, organic acids, and microbiological profile. The results were analyzed with the Statistical Analysis System software (Statistical..., 1985); normality of the residuals was verified with the Shapiro-Wilk test (PROC UNIVARIATE) and homogeneity of variances with Hartley's test. The effects of the inclusion levels were separated by polynomial contrasts at a 5% significance level. The inclusion of citrus pulp and soybean hulls increased dry matter content, soluble carbohydrates, lactic acid, in vitro dry matter digestibility, and the lactic acid bacteria population, and reduced pH, butyric acid, propionic acid, and ammonia nitrogen, with the best results found for the treatment with 30% citrus pulp inclusion (P<0.05). Ensiling brewers' spent grain by itself is an alternative for farmers as feed support and for making good-quality silage, which can be improved with additives to be evaluated according to their cost-benefit ratio for production efficiency.
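The analysis was run in SAS; purely as an illustration of the same two steps (normality check, then a polynomial contrast across inclusion levels), here is a hypothetical Python equivalent with invented pH values, using a linear trend test as a stand-in for the linear polynomial contrast:

```python
# Hypothetical Python re-creation of the analysis steps (the study used SAS).
# The pH values and inclusion levels below are invented for illustration.
import numpy as np
from scipy import stats

levels = np.array([0, 0, 0, 15, 15, 15, 30, 30, 30], dtype=float)  # % citrus pulp
ph     = np.array([4.8, 4.9, 4.7, 4.2, 4.3, 4.1, 3.9, 3.8, 4.0])

# Normality of residuals (Shapiro-Wilk), as done with PROC UNIVARIATE
group_means = np.array([ph[levels == l].mean() for l in levels])
w_stat, p_normality = stats.shapiro(ph - group_means)

# Linear trend across inclusion levels, standing in for the linear contrast
slope, _, _, p_linear, _ = stats.linregress(levels, ph)
print(f"Shapiro-Wilk p = {p_normality:.3f}; linear trend p = {p_linear:.3f}")
```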
Towards model driven software development for Arduino platforms: a DSL and automatic code generation
Abstract:
This thesis explores the production of software systems for embedded systems using techniques from the world of Model Driven Software Development. The most important phase of the development is the definition of a meta-model that characterizes the fundamental concepts of embedded systems. This model aims to abstract away from the particular platform used and to identify the abstractions that characterize the embedded systems domain in general; the meta-model is therefore platform-independent. For automatic code generation, a reference platform was adopted: Arduino. Arduino is an embedded system that is gaining more and more traction because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined is an instance of the MOF meta-metamodel, formally defined by the OMG. This allows the developer to think of a system in the form of a model, an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, and can therefore be defined by a set of EBNF rules. The technology used to define the meta-model was Xtext: a framework that allows writing EBNF rules and automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plugins that provide an editor guided by the syntax defined in the meta-model. Automatic code generation was implemented with the Xtend2 language, which makes it possible to traverse the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files. The generated code provides practically the entire schematic part of the application, while leaving the development of the business logic to the application designer. After defining the meta-model of an embedded system, the level of abstraction was raised further, toward the definition of the part of the meta-model concerning the interaction of an embedded system with other systems. The perspective thus shifted to that of a System, understood as a set of interacting individual systems; this definition is made from the point of view of the individual system whose model is being defined. The thesis also introduces a case study which, although fairly simple, provides an example and a tutorial for developing applications using the meta-model. It also shows how the task of the application designer becomes rather simple and immediate, provided it is based on a good analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
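The thesis implements generation with Xtext and Xtend2; as a language-neutral illustration of the model-to-code step it describes, here is a toy Python sketch that turns an invented device model into the schematic part of an Arduino sketch (the model format and templates are made up for illustration):

```python
# Toy model-to-code generator (illustration only; the thesis used Xtext/Xtend2).
# The model format below is invented: named actuators bound to Arduino pins.

MODEL = {
    "led": {"pin": 13},
    "fan": {"pin": 7},
}

def generate_sketch(model):
    """Emit the schematic part of an Arduino sketch; business logic stays with the designer."""
    defines = "\n".join(f"#define {name.upper()}_PIN {dev['pin']}"
                        for name, dev in model.items())
    setups = "\n  ".join(f"pinMode({name.upper()}_PIN, OUTPUT);" for name in model)
    return (f"{defines}\n\n"
            f"void setup() {{\n  {setups}\n}}\n\n"
            "void loop() {\n  // business logic written by the application designer\n}\n")

print(generate_sketch(MODEL))
```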
Abstract:
The work investigates the feasibility of a new process aimed at the production of hydrogen with inherent separation of carbon oxides. The process consists of a cycle in which, in the first step, a mixed metal oxide is reduced by ethanol (obtained from biomass). The reduced metal is then contacted with steam in order to split the water, sequestering the oxygen into the looping material's structure. The oxides used to run this thermochemical cycle, also called the "steam-iron process", are mixed ferrites with the spinel structure MeFe2O4 (Me = Fe, Co, Ni or Cu). To understand the reactions involved in the anaerobic reforming of ethanol, diffuse reflectance spectroscopy (DRIFTS) was used, coupled with mass analysis of the effluent, to study the surface composition of the ferrites during the adsorption of ethanol and its transformations during the temperature program. This study was paired with tests on a laboratory-scale plant and with the characterization, through various techniques such as XRD, Mössbauer spectroscopy, elemental analysis..., of the materials as synthesized and at different reduction degrees. In the first step it was found that, besides the expected CO, CO2 and H2O, the products of ethanol's anaerobic oxidation, a large amount of H2 and coke was also produced. The latter is highly undesired, since it affects the second step, during which water is fed over the pre-reduced spinel at high temperature. The behavior of the different spinels was affected by the nature of the divalent metal cation; magnetite was the oxide showing the slowest rate of reduction by ethanol, but on the other hand it was the one that could perform the entire cycle of the process most efficiently. Still, the problem of coke formation remains the greatest challenge to solve.
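For the magnetite case, the two half-cycles of the classic steam-iron process are well established; they are shown below with H2 as the reductant for clarity (in the process studied, the reducing agent is ethanol, whose stoichiometry the abstract does not give):

```latex
% Classic steam-iron half-cycles for magnetite (ethanol replaces H2 as the
% reductant in the process studied; its stoichiometry is not given above).
\[
\mathrm{Fe_3O_4 + 4\,H_2 \;\longrightarrow\; 3\,Fe + 4\,H_2O}
\quad \text{(reduction step)}
\]
\[
\mathrm{3\,Fe + 4\,H_2O \;\longrightarrow\; Fe_3O_4 + 4\,H_2}
\quad \text{(water-splitting step)}
\]
```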
Abstract:
This thesis covers activities carried out in the Laser Center of the Polytechnic University of Madrid and in the laboratories of the University of Bologna in Forlì. It focuses on the surface mechanical treatment for metallic materials called Laser Shock Peening (LSP). This process is a surface enhancement treatment which induces a significant layer of beneficial compressive residual stresses underneath the surface of metal components in order to counteract the detrimental effects of crack growth. The innovative aspect of this work is the application of LSP to specimens of extremely low thickness. In particular, after a bibliographic study and a comparison with the main treatments used for the same purposes, this work analyzes the physics of the operation of a laser, its interaction with the surface of the material, and the generation of the surface residual stresses that are fundamental to obtaining the LSP benefits. The treatment is applied to Al2024-T351 specimens of low thickness. Among the improvements that can be obtained by this operation, the most important in the aeronautic field is the fatigue life improvement of the treated components. As demonstrated in this work, a well-executed LSP treatment can slow down the progress of defects in the material that could otherwise lead to sudden failure of the structure. Part of this thesis is the simulation of this phenomenon using the program AFGROW, with which different geometric configurations of the treatment were analyzed to verify which was better for large panels of typical aeronautical interest. The core of the LSP process is the residual stresses induced in the material by the interaction with the laser light; these can be simulated with finite elements, but it is essential to verify and measure them experimentally. The thesis introduces the main methods for the detection of these stresses, which can be mechanical or diffraction-based. In particular, the principles and the detailed procedure of the hole drilling measurement are described, along with an introduction to X-ray diffraction; the results obtained with both techniques are then presented. In addition to these two measurement techniques, the neutron diffraction method is also introduced. The last part covers the experimental fatigue tests of the specimens, with a detailed description of the apparatus and the procedure used, from initial specimen preparation to the fatigue test with the press. The results obtained are then presented and discussed.
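AFGROW propagates cracks with growth-rate relations; the simplest member of that family is the Paris law, quoted here as general background rather than from the thesis:

```latex
% Paris law (general background, not taken from the thesis): a is crack length,
% N the load-cycle count, \Delta K the stress-intensity-factor range,
% C and m material constants.
\[
\frac{da}{dN} = C \, (\Delta K)^{m}
\]
```

Compressive residual stresses from LSP reduce the effective ΔK at the crack tip, which is the mechanism behind the slower crack growth reported above.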
Abstract:
In this article we propose a bootstrap test for the probability of ruin in the compound Poisson risk process. We adopt the P-value approach, which leads to a more complete assessment of the underlying risk than the probability of ruin alone. We provide second-order accurate P-values for this testing problem and consider both parametric and nonparametric estimators of the individual claim amount distribution. Simulation studies show that the suggested bootstrap P-values are very accurate and outperform their analogues based on the asymptotic normal approximation.
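The article's second-order accurate construction is not reproduced in the abstract; the crude Monte Carlo sketch below only illustrates the two ingredients it combines, a ruin-probability estimate and a bootstrap P-value, with exponential claims, a finite horizon, and all parameter values as illustrative assumptions:

```python
# Crude sketch of a bootstrap P-value for the ruin probability in a compound
# Poisson risk process. Exponential claims, the finite horizon, and all
# parameter values are illustrative assumptions, not the article's method.
import numpy as np

rng = np.random.default_rng(0)

def ruin_prob(u, c, lam, claim_mean, horizon=50.0, n_paths=500):
    """Monte Carlo estimate of P(ruin before `horizon`) for initial surplus u,
    premium rate c, Poisson claim rate lam, exponential claim mean claim_mean."""
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)            # next claim arrival
            if t > horizon:
                break
            claims += rng.exponential(claim_mean)      # claim amount
            if u + c * t - claims < 0:                 # surplus negative: ruin
                ruined += 1
                break
    return ruined / n_paths

observed = rng.exponential(1.0, size=50)               # stand-in claim data
psi_0 = 0.05                                           # hypothesized ruin probability
boot = [ruin_prob(5.0, 1.2, 1.0, rng.choice(observed, 50).mean())
        for _ in range(100)]                           # nonparametric resamples
p_value = np.mean(np.array(boot) >= psi_0)             # crude P-value, H0: psi >= psi_0
print(f"bootstrap P-value = {p_value:.2f}")
```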
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, and EGR distribution effects have been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
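The data-processing methods themselves are developed in the study and not quoted here; as a generic illustration of the delay-correction step, the Python sketch below estimates a transport delay by maximizing cross-correlation (the signal names and synthetic data are invented):

```python
# Generic illustration of transport-delay correction: shift one signal by the
# lag that maximizes its correlation with a reference. The study's own
# processing methods are not reproduced here; the data below are synthetic.
import numpy as np

def align(reference, delayed, max_lag):
    """Return `delayed` shifted by the lag (in samples) that best matches `reference`."""
    def corr(lag):
        return np.corrcoef(reference, np.roll(delayed, -lag))[0, 1]
    best = max(range(-max_lag, max_lag + 1), key=corr)
    return np.roll(delayed, -best), best

t = np.linspace(0.0, 10.0, 500)
speed = np.sin(t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
nox = np.roll(speed, 5)                    # synthetic 5-sample transport delay
aligned, lag = align(speed, nox, max_lag=20)
print(f"estimated delay: {lag} samples")   # expect about 5
```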
Abstract:
A prototype vortex-driven air lift pump was developed and experimentally evaluated. It was designed to be easily manufactured and scalable to arbitrary riser diameters. The model tested fit in a 2 inch diameter riser with six air injection nozzles, through which air was injected helically around the perimeter of the riser at an angle of 70° from pure tangential injection. The pump was intended to transport both water and sediment over a large range of submergence ratios. A test apparatus was designed to simulate deep water or oceanic environments. The resulting test setup had a finite reservoir; over the course of a test, the submergence ratio varied from 0.48 to 0.39. For air injection pressures ranging from 10 to 60 psig and for air flow rates of 6 to 15 scfm, the induced water discharge flow rates varied only slightly, due to the limited range of available submergence ratios. The anticipated simulation of a deep water environment, with a corresponding equivalent increase in the submergence ratio, proved unattainable. The pump prototype successfully transported both water and sediment (sand). The percent volume yield of the sediment was in an acceptable range. The pump design has subsequently been used successfully in a 4 inch configuration in a follow-on project. A computer program was written in Matlab to simulate the pump characteristics. The program output water pressures at the location of air injection which were physically compatible with the experimental data.
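The Matlab program is not reproduced in the abstract; as a stand-in for its simplest output, the hydrostatic water pressure at the injection point can be computed from the submergence ratio, as in the Python sketch below (the riser length is an invented value, and injection is assumed to be at the bottom of the riser):

```python
# Hydrostatic pressure at the air-injection depth, a stand-in for one output of
# the (unpublished) Matlab pump model. The riser length is an invented value;
# the submergence ratios are the range reported above; injection is assumed
# to be at the bottom of the riser.
RHO_WATER = 998.0        # kg/m^3
G = 9.81                 # m/s^2
P_ATM = 101_325.0        # Pa

def injection_pressure(submergence_ratio, riser_length_m):
    """Absolute pressure at the injection point; the submergence ratio is
    submerged length divided by total riser length."""
    depth = submergence_ratio * riser_length_m
    return P_ATM + RHO_WATER * G * depth

for sr in (0.48, 0.39):
    p = injection_pressure(sr, riser_length_m=3.0)
    print(f"submergence ratio {sr:.2f}: {p / 1000:.1f} kPa absolute")
```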
Abstract:
The current case study examined the effects of the STARS-PAC anxiety reduction program on the social and test anxiety levels of a middle school student. The literature supporting the effectiveness of cognitive behavioral therapy programs that incorporate methods such as those used in the STARS-PAC program was reviewed. The findings of this case study indicated decreased levels of overall anxiety during the intervention phase; however, the student's test anxiety level displayed little improvement. Implications of the findings and directions for future research are discussed.
Abstract:
BACKGROUND: In May 2003, a newborn auditory screening program was initiated in the Upper Palatinate. METHODS: Sequential OAE and BERA screening was conducted in all hospitals with obstetric facilities. The Screening Center at the Public Health Authority was responsible for the coordination of the screening process, the completeness of participation, the follow-up of all subjects with a positive screening test, and the quality of instrumental screening. RESULTS: A total of 96% of 17,469 newborns were screened. The referral rate at discharge was 1.6% (0.4% for bilateral positive findings). For 97% of the positive screening results, a definite diagnosis confirming or excluding hearing loss was achieved; for 43%, only after intervention by the Screening Center. Fifteen children with profound bilateral hearing impairment were identified, of whom eight were detected only through the intervention of the Screening Center. CONCLUSION: The effective structures established in the Upper Palatinate provide a standard for the quality of neonatal auditory screening achievable in Germany.