Abstract:
Functional validation of complex digital systems is a hard and critical task in the design flow. In particular, when dealing with communication systems such as Multiband Orthogonal Frequency Division Multiplexing Ultra Wideband (MB-OFDM UWB), the design decisions taken during the process have to be validated at different levels in a straightforward way. In this work, a unified algorithm-architecture-circuit co-design environment for this type of system, to be implemented on FPGA, is presented. The main objective is to find an efficient methodology for designing a configurable, optimized MB-OFDM UWB system with as little verification effort as possible, so as to speed up the development period. Although this design methodology is tested and considered suitable for almost all types of complex FPGA designs, we propose a solution where both the circuit and the communication channel are tested at different levels (algorithmic, RTL, hardware device) using a common testbench.
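The common-testbench idea amounts to applying one shared stimulus set to every refinement of the design and comparing each against a single golden reference. The sketch below illustrates that idea only; the function names (golden_model, dut_model) and the fixed-point stand-in for the RTL level are assumptions for illustration, not part of the cited work.

```python
import numpy as np

def golden_model(x, taps):
    """Floating-point reference (algorithmic level)."""
    return np.convolve(x, taps, mode="same")

def quantize(v, frac_bits=12):
    """Round values to a fixed-point grid, mimicking an RTL datapath."""
    scale = 2 ** frac_bits
    return np.round(v * scale) / scale

def dut_model(x, taps, frac_bits=12):
    """Stand-in for the RTL/hardware implementation under test."""
    return np.convolve(quantize(x, frac_bits), quantize(taps, frac_bits), mode="same")

def run_testbench(n_vectors=100, tol=0.05, seed=0):
    """Apply the same stimuli to both abstraction levels and compare outputs."""
    rng = np.random.default_rng(seed)
    taps = rng.normal(size=8)
    for _ in range(n_vectors):
        x = rng.normal(size=256)      # shared stimulus
        ref = golden_model(x, taps)   # algorithmic-level result
        out = dut_model(x, taps)      # implementation-level result
        assert np.max(np.abs(ref - out)) < tol, "mismatch between levels"
    return True

if __name__ == "__main__":
    print("testbench passed:", run_testbench())
```

In a real flow the same comparison routine would be reused against RTL simulation output and hardware capture data, which is what keeps the verification effort low across levels.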
Abstract:
This work aims at a deeper understanding of the energy loss phenomenon in polysilicon production reactors by the so-called Siemens process. Contributions to the energy consumption of the polysilicon deposition step are studied in this paper, focusing on the radiation heat loss phenomenon. A theoretical model for radiation heat loss calculations is experimentally validated with the help of a laboratory CVD prototype. Following the results of the model, relevant parameters that directly affect the amount of radiation heat losses are put forward. Numerical results of the model applied to a state-of-the-art industrial reactor show the influence of these parameters on energy consumption due to radiation per kilogram of silicon produced; the radiation heat loss can be reduced by 3.8% when the reactor inner wall radius is reduced from 0.78 to 0.70 m, by 25% when the wall emissivity is reduced from 0.5 to 0.3, and by 12% when the final rod diameter is increased from 12 to 15 cm.
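The qualitative effect of wall emissivity and geometry on radiative loss can be illustrated with a textbook two-surface grey-body exchange between a hot rod and the enclosing reactor wall. This is only a simplified sketch of the general mechanism; the temperatures, emissivities and geometry below are assumed round numbers, and the results are not expected to reproduce the paper's validated figures.

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_loss(rod_diam_m, wall_radius_m, length_m,
                   t_rod_k=1400.0, t_wall_k=500.0,
                   eps_rod=0.7, eps_wall=0.5):
    """Two-surface grey-body exchange between a rod and the enclosing cylindrical wall."""
    a_rod = math.pi * rod_diam_m * length_m            # rod lateral area
    a_wall = 2.0 * math.pi * wall_radius_m * length_m  # wall lateral area
    denom = 1.0 / eps_rod + (a_rod / a_wall) * (1.0 / eps_wall - 1.0)
    return SIGMA * a_rod * (t_rod_k ** 4 - t_wall_k ** 4) / denom  # watts

if __name__ == "__main__":
    # Assumed illustrative geometry; shows the trend of lower wall emissivity only.
    for eps_wall in (0.5, 0.3):
        q = radiative_loss(rod_diam_m=0.12, wall_radius_m=0.78, length_m=2.5,
                           eps_wall=eps_wall)
        print(f"wall emissivity {eps_wall}: {q / 1e3:.1f} kW radiated per rod")
```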
Abstract:
This work addresses heat losses in a CVD reactor for polysilicon production. Contributions to the energy consumption of the so-called Siemens process are evaluated, and a comprehensive model for heat loss is presented. A previously developed model for radiative heat loss is combined with conductive heat loss theory and a new model for convective heat loss. Theoretical calculations are developed and the theoretical energy consumption of the polysilicon deposition process is obtained. The model is validated by comparison with experimental results obtained using a laboratory-scale CVD reactor. Finally, the model is used to calculate heat consumption in a 36-rod industrial reactor; the energy consumption due to convective heat loss per kilogram of polysilicon produced is calculated to be 22-30 kWh/kg over the course of a deposition process.
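The per-kilogram figure is simply the time-integrated loss power divided by the silicon mass deposited in the batch. A minimal sketch of that bookkeeping, with assumed round numbers rather than the paper's data:

```python
def energy_per_kg(loss_power_kw, batch_hours, silicon_kg):
    """Specific energy consumption attributable to one loss mechanism, kWh/kg."""
    return loss_power_kw * batch_hours / silicon_kg

if __name__ == "__main__":
    # Illustrative values for a multi-rod batch (assumed, not measured data).
    print(f"{energy_per_kg(loss_power_kw=1500, batch_hours=80, silicon_kg=4800):.1f} kWh/kg")
```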
Abstract:
This work presents a systematic process for building a Fault Diagnoser (FD) based on Petri Nets (PNs), which has been applied to a small helicopter. This novel tool is able to detect both intermittent and permanent faults. The work carried out is discussed from both theoretical and practical points of view. The procedure begins with a division of the whole system into subsystems, which are the devices to be modeled using PNs, considering both normal and faulty operation. Subsequently, the models are integrated into a global Petri Net diagnoser (PND) that is able to monitor a whole helicopter and show critical variables to the operator in order to determine the UAV's health, thereby preventing accidents. A Data Acquisition System (DAQ) has been designed to collect data during the flights and feed the PN diagnoser with them. Several real flights (nominal and under failure) have been carried out to set up the diagnoser and verify its performance. A summary of the validation results obtained during real flight tests is also included. Extensive use of this tool will improve preventive maintenance protocols for UAVs (especially helicopters) and allow recommendations to be established in regulations.
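A Petri net diagnoser can be represented by places, transitions and a token marking, with a "fault" place becoming marked when the modelled failure pattern is observed. The fragment below is a minimal, generic sketch of that mechanism with a hypothetical sensor subsystem; it is not the helicopter model of the paper.

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    """Minimal Petri net: a marking plus transitions with input/output places."""
    marking: dict                                      # place name -> token count
    transitions: dict = field(default_factory=dict)    # name -> (input places, output places)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) >= 1 for p in ins)

    def fire(self, t):
        if not self.enabled(t):
            return False
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Hypothetical sensor subsystem: a missed heartbeat eventually marks a fault place.
net = PetriNet(
    marking={"sensor_ok": 1, "heartbeat_missed": 0, "sensor_fault": 0},
    transitions={
        "miss_heartbeat": (["sensor_ok"], ["heartbeat_missed"]),
        "declare_fault": (["heartbeat_missed"], ["sensor_fault"]),
    },
)

for event in ["miss_heartbeat", "declare_fault"]:  # events coming from the DAQ stream
    net.fire(event)

if net.marking["sensor_fault"] >= 1:
    print("diagnoser: sensor fault detected")
```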
Abstract:
According to the last global burden of disease published by the World Health Organization, tumors were the third leading cause of death worldwide in 2004. Among the different types of tumors, colorectal cancer ranks as the fourth most lethal. To date, tumor diagnosis is based mainly on the identification of morphological changes in tissues. Considering that these changes appear after many biochemical reactions, the development of vibrational techniques may contribute to the early detection of tumors, since they are able to detect such reactions. The present study aimed to develop a methodology based on infrared microspectroscopy to characterize colon samples, providing complementary information to the pathologist and facilitating the early diagnosis of tumors. The study groups were composed of human colon samples obtained from paraffin-embedded biopsies, divided into normal (n=20), inflammation (n=17) and tumor (n=18). Two adjacent slices were acquired from each block. The first one was subjected to chemical dewaxing and H&E staining. The infrared imaging was performed on the second slice, which was not dewaxed or stained. A computational preprocessing methodology was employed to identify the paraffin in the images and to perform spectral baseline correction. This methodology was adapted to include two types of spectral quality control. After the preprocessing step, spectra belonging to the same image were analyzed and grouped according to their biochemical similarities. A pathologist associated each resulting group with a histological structure based on the H&E stained slice. This analysis highlighted the biochemical differences between the three studied groups. Results showed that severe inflammation presents biochemical features similar to those of tumors, indicating that tumors can develop from an inflammatory process. A spectral database was constructed containing the biochemical information identified in the previous step. Spectra obtained from new samples were compared with the database information, leading to their classification into one of the three groups: normal, inflammation or tumor. Internal and external validation were performed based on the classification sensitivity, specificity and accuracy. Comparison between the classification results and the H&E stained sections revealed some discrepancies. Some histologically normal regions were identified as inflammation by the classification algorithm. Similarly, some regions presenting inflammatory lesions in the stained section were classified into the tumor group. Such differences were counted as misclassifications, but they may actually indicate that biochemical changes are under way in the analyzed sample. In the latter case, the method developed in this thesis would have proved able to identify early stages of inflammatory and tumor lesions. Additional experiments are necessary to elucidate this discrepancy between the classification results and the morphological features. One solution would be the use of immunohistochemistry techniques with specific markers for tumor and inflammation. Another option is to retrieve the medical records of the patients who participated in this study in order to check whether, at some time after the biopsy collection, they actually developed the lesions supposedly detected in this research.
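The classification and validation step described above, matching new spectra against a labelled spectral database and reporting sensitivity, specificity and accuracy, can be sketched with a simple nearest-centroid rule. The code below is a generic illustration on synthetic spectra; it is not the thesis' preprocessing pipeline or its validated classifier.

```python
import numpy as np

def nearest_centroid_classify(spectra, centroids):
    """Assign each spectrum to the class whose mean (database) spectrum is closest."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(spectra - centroids[c], axis=1) for c in labels])
    return [labels[i] for i in np.argmin(dists, axis=0)]

def binary_metrics(y_true, y_pred, positive):
    """Sensitivity, specificity and accuracy with one class treated as positive."""
    yt = np.array([y == positive for y in y_true])
    yp = np.array([y == positive for y in y_pred])
    tp, tn = np.sum(yt & yp), np.sum(~yt & ~yp)
    fp, fn = np.sum(~yt & yp), np.sum(yt & ~yp)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(yt)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic "database": one mean spectrum per group (assumed data, 200 wavenumbers).
    centroids = {g: rng.normal(loc=i, size=200)
                 for i, g in enumerate(["normal", "inflammation", "tumor"])}
    # Synthetic new-sample spectra drawn around those means.
    y_true, spectra = [], []
    for g, mu in centroids.items():
        y_true += [g] * 30
        spectra.append(mu + 0.3 * rng.normal(size=(30, 200)))
    spectra = np.vstack(spectra)
    y_pred = nearest_centroid_classify(spectra, centroids)
    sens, spec, acc = binary_metrics(y_true, y_pred, positive="tumor")
    print(f"tumor sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```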
Abstract:
The purposes of this study were (1) to validate the item-attribute matrix using two levels of attributes (Level 1 attributes and Level 2 sub-attributes), and (2) by retrofitting diagnostic models to the mathematics test of the Trends in International Mathematics and Science Study (TIMSS), to evaluate the construct validity of the TIMSS mathematics assessment by comparing the results of two assessment booklets. Item data were extracted from Booklets 2 and 3 for the 8th grade in TIMSS 2007, which included a total of 49 mathematics items and every student's response to every item. The study developed three categories of attributes at two levels: content, cognitive process (TIMSS or new), and comprehensive cognitive process (IT), based on the TIMSS assessment framework, cognitive procedures, and item type. At level one, there were 4 content attributes (number, algebra, geometry, and data and chance), 3 TIMSS process attributes (knowing, applying, and reasoning), and 4 new process attributes (identifying, computing, judging, and reasoning). At level two, the level 1 attributes were further divided into 32 sub-attributes. There was only one level of IT attributes (multiple steps/responses, complexity, and constructed response). Twelve Q-matrices (4 originally specified, 4 random, and 4 revised) were investigated with eleven Q-matrix models (QM1-QM11) using multiple regression and the least squares distance method (LSDM). Comprehensive analyses indicated that the proposed Q-matrices explained most of the variance in item difficulty (64% to 81%). The cognitive process attributes contributed more to item difficulty than the content attributes, and the IT attributes contributed much more than both the content and process attributes. The new retrofitted process attributes explained the items better than the TIMSS process attributes. Results generated from the level 1 attributes and the level 2 attributes were consistent. Most attributes could be used to recover students' performance, but some attributes' probabilities showed unreasonable patterns. The analysis approaches could not demonstrate whether the same construct validity was supported across booklets. The proposed attributes and Q-matrices explained the items of Booklet 2 better than the items of Booklet 3, and the specified Q-matrices explained the items better than the random Q-matrices.
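Explaining item difficulty from a Q-matrix by multiple regression amounts to regressing the difficulty vector on the binary attribute columns and reading off R-squared. A minimal sketch with a made-up Q-matrix (not the TIMSS data or any of the study's twelve Q-matrices):

```python
import numpy as np

def q_matrix_r2(q, difficulty):
    """R-squared from regressing item difficulty on Q-matrix attribute columns."""
    X = np.column_stack([np.ones(len(q)), q])            # intercept + binary attributes
    beta, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
    resid = difficulty - X @ beta
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((difficulty - difficulty.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    q = rng.integers(0, 2, size=(49, 7))                  # 49 items x 7 attributes (illustrative)
    true_weights = rng.normal(size=7)
    difficulty = q @ true_weights + 0.3 * rng.normal(size=49)
    print(f"variance in item difficulty explained: {q_matrix_r2(q, difficulty):.2f}")
```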
Abstract:
Background: The “Mackey Childbirth Satisfaction Rating Scale” (MCSRS) is a complete but non-validated scale which includes the most important factors associated with maternal satisfaction. Our primary purpose was to describe the internal structure of the scale and to validate the reliability and conceptual validity of its Spanish version, the MCSRS-E. Methods: The MCSRS was translated into Spanish, back-translated and adapted to the Spanish population. It was then administered, following a pilot test, to women who met the study participant requirements. The scale structure was obtained by performing an exploratory factor analysis using a sample of 304 women. The structures obtained were tested by conducting a confirmatory factor analysis using a sample of 159 women. To test conceptual validity, the structure factors were correlated with expectations prior to the childbirth experience. McDonald’s omegas were calculated for each model to establish the reliability of each factor. The study was carried out at four university hospitals: Alicante, Elche, Torrevieja and Vinalopo Salud of Elche. The inclusion criteria were women aged 18–45 years old who had just delivered a singleton live baby at 38–42 weeks through vaginal delivery. Women who had difficulty speaking and understanding Spanish were excluded. Results: The process generated 5 different possible internal structures in a nested model, more consistent with the theory than other internal structures of the MCSRS applied hitherto. All of them had good levels of validity and reliability. Conclusions: This nested model of the internal structure of the MCSRS-E can accommodate different clinical practice scenarios better than the other structures applied to date, and it is a flexible tool which can be used to identify the aspects that should be changed to improve maternal satisfaction and hence maternal health.
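McDonald's omega for a factor can be computed directly from standardized loadings as omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances), with error variance 1 - loading^2 under a standardized solution with uncorrelated errors. A small sketch with assumed loadings (not the MCSRS-E estimates):

```python
def mcdonalds_omega(loadings):
    """Omega from standardized factor loadings, assuming uncorrelated errors."""
    s = sum(loadings)
    error_var = sum(1.0 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error_var)

if __name__ == "__main__":
    # Hypothetical standardized loadings for one satisfaction factor.
    print(f"omega = {mcdonalds_omega([0.72, 0.65, 0.80, 0.58]):.2f}")
```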
Validation of the Swiss methane emission inventory by atmospheric observations and inverse modelling
Abstract:
Atmospheric inverse modelling has the potential to provide observation-based estimates of greenhouse gas emissions at the country scale, thereby allowing for an independent validation of national emission inventories. Here, we present a regional-scale inverse modelling study to quantify the emissions of methane (CH₄) from Switzerland, making use of the newly established CarboCount-CH measurement network and a high-resolution Lagrangian transport model. In our reference inversion, prior emissions were taken from the "bottom-up" Swiss Greenhouse Gas Inventory (SGHGI) as published by the Swiss Federal Office for the Environment in 2014 for the year 2012. Overall we estimate national CH₄ emissions to be 196 ± 18 Gg yr⁻¹ for the year 2013 (1σ uncertainty). This result is in close agreement with the recently revised SGHGI estimate of 206 ± 33 Gg yr⁻¹ as reported in 2015 for the year 2012. Results from sensitivity inversions using alternative prior emissions, uncertainty covariance settings, large-scale background mole fractions, two different inverse algorithms (Bayesian and extended Kalman filter), and two different transport models confirm the robustness and independent character of our estimate. According to the latest SGHGI estimate the main CH₄ source categories in Switzerland are agriculture (78 %), waste handling (15 %) and natural gas distribution and combustion (6 %). The spatial distribution and seasonal variability of our posterior emissions suggest an overestimation of agricultural CH₄ emissions by 10 to 20 % in the most recent SGHGI, which is likely due to an overestimation of emissions from manure handling. Urban areas do not appear as emission hotspots in our posterior results, suggesting that leakages from natural gas distribution are only a minor source of CH₄ in Switzerland. This is consistent with rather low emissions of 8.4 Gg yr⁻¹ reported by the SGHGI but inconsistent with the much higher value of 32 Gg yr⁻¹ implied by the EDGARv4.2 inventory for this sector. Increased CH₄ emissions (up to 30 % compared to the prior) were deduced for the north-eastern parts of Switzerland. This feature was common to most sensitivity inversions, which is a strong indicator that it is a real feature and not an artefact of the transport model and the inversion system. However, it was not possible to assign an unambiguous source process to the region. The observations of the CarboCount-CH network provided invaluable and independent information for the validation of the national bottom-up inventory. Similar systems need to be sustained to provide independent monitoring of future climate agreements.
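The reference inversion described above is a Bayesian estimate of regional emissions given prior values, their covariance, and a linear transport (source-receptor) relationship. The sketch below shows the standard analytical Bayesian update for such a linear problem, with toy numbers rather than the CarboCount-CH network or the study's transport model.

```python
import numpy as np

def bayesian_inversion(x_prior, B, H, y, R):
    """Posterior emissions for a linear observation model y = H x + noise."""
    S = H @ B @ H.T + R                        # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)             # gain matrix
    x_post = x_prior + K @ (y - H @ x_prior)   # posterior mean
    A_post = B - K @ H @ B                     # posterior covariance
    return x_post, A_post

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_regions, n_obs = 4, 50
    x_true = np.array([60.0, 90.0, 30.0, 16.0])      # "true" emissions, Gg/yr (toy)
    x_prior = np.array([50.0, 100.0, 40.0, 16.0])    # prior emissions, Gg/yr (toy)
    B = np.diag((0.3 * x_prior) ** 2)                # 30 % prior uncertainty
    H = np.abs(rng.normal(size=(n_obs, n_regions)))  # toy source-receptor matrix
    y = H @ x_true + rng.normal(scale=1.0, size=n_obs)
    R = np.eye(n_obs)                                # observation error covariance
    x_post, A_post = bayesian_inversion(x_prior, B, H, y, R)
    print("posterior emissions:", np.round(x_post, 1))
    print("posterior 1-sigma  :", np.round(np.sqrt(np.diag(A_post)), 1))
```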
Abstract:
Recent research indicates that social identity theory offers an important lens to improve our understanding of founders as enterprising individuals, the venture creation process, and its outcomes. Yet, further advances are hindered by the lack of a valid scale that could be used to measure founders' social identities - a problem that is particularly severe because social identity is a multidimensional construct that needs to be assessed properly so that organizational phenomena can be understood. Drawing on social identity theory and the systematic classification of founders' social identities (Darwinians, Communitarians, Missionaries) provided in Fauchart and Gruber (2011), this study develops and empirically validates a 12-item scale that allows scholars to capture the multidimensional nature of the social identities of entrepreneurs. Our validation tests are unusually comprehensive and solid, as we validate the developed scale not only in the Alpine region (where it was originally conceived), but also in 12 additional countries and the Anglo-American region. Scholars can use the scale to identify founders' social identities and to relate these identities to micro-level processes and outcomes in new firm creation. Scholars may also link founders' social identities to other levels of analysis such as industries (e.g., industry evolution) or whole economies (e.g., economic growth).
Abstract:
Social identity theory offers an important lens to improve understanding of founders as enterprising individuals, the venture creation process, and its outcomes. Yet, further advances are hindered by the lack of valid scales to measure founders’ social identities. Drawing on social identity theory and a systematic classification of founders’ social identities (Darwinians, Communitarians, and Missionaries), we develop and test a corresponding 15-item scale in the Alpine region and validate it in 13 additional countries and regions. The scale makes it possible to identify founders’ social identities and relate them to processes and outcomes in entrepreneurship. The scale is available online in 15 languages.
Abstract:
A steady-state mathematical model for co-current spray drying was developed for sugar-rich foods with the application of the glass transition temperature concept. A maltodextrin-sucrose solution was used as a sugar-rich food model. The model included mass, heat and momentum balances for a single drying droplet as well as the temperature and humidity profile of the drying medium. A log-normal volume distribution of the droplets was generated at the exit of the rotary atomizer. This generation created a certain number of bins to form a system of non-linear first-order differential equations as a function of the axial distance of the drying chamber. The model was used to calculate the changes of droplet diameter, density, temperature, moisture content and velocity in association with the change of air properties along the axial distance. The difference between the outlet air temperature and the glass transition temperature of the final products (ΔT) was considered an indicator of stickiness of the particles in the spray drying process. The calculated and experimental ΔT values were close, indicating successful validation of the model.
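The single-droplet part of such a model is a system of first-order ODEs integrated along the axial distance of the chamber. The sketch below integrates a deliberately simplified moisture/temperature pair with SciPy; the rate constants, velocities and temperatures are assumed illustrative values, not the balances or fitted parameters of the paper.

```python
from scipy.integrate import solve_ivp

# Assumed illustrative constants (not the paper's fitted values).
U = 0.5         # droplet axial velocity, m/s
K_DRY = 2.0     # drying rate constant, 1/s
X_EQ = 0.03     # equilibrium moisture content, kg water / kg solids
T_AIR = 150.0   # local drying air temperature, C
TAU_HEAT = 0.8  # thermal time constant of the droplet, s
LAMBDA = 35.0   # evaporative cooling factor, C per (kg/kg)/s

def droplet_odes(z, state):
    """d(moisture, temperature)/dz for a droplet moving at constant velocity U."""
    x, t = state
    dx_dt = -K_DRY * max(x - X_EQ, 0.0)              # first-order drying kinetics
    dt_dt = (T_AIR - t) / TAU_HEAT + LAMBDA * dx_dt  # convective heating minus evaporative cooling
    return [dx_dt / U, dt_dt / U]                     # convert d/dt to d/dz

if __name__ == "__main__":
    sol = solve_ivp(droplet_odes, (0.0, 2.0), [0.60, 40.0])  # 2 m of axial distance
    x_out, t_out = sol.y[:, -1]
    t_air_outlet, t_glass = 95.0, 62.0  # assumed outlet air and product Tg, C
    print(f"outlet moisture {x_out:.3f} kg/kg, particle temperature {t_out:.1f} C")
    print(f"stickiness indicator Delta-T = {t_air_outlet - t_glass:.0f} C")
```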
Abstract:
The critical process parameter for mineral separation is the degree of mineral liberation achieved by comminution. The degree of liberation provides an upper limit of efficiency for any physical separation process. The standard approach to measuring mineral liberation uses mineralogical analysis based on two-dimensional sections of particles, which may be acquired using a scanning electron microscope and back-scattered electron analysis or from an analysis of an image acquired using an optical microscope. Over the last 100 years, mathematical techniques have been developed to use this two-dimensional information to infer three-dimensional information about the particles. For mineral processing, a particle that contains more than one mineral (a composite particle) may appear to be liberated (contain only one mineral) when analysed using only its revealed particle section. The mathematical techniques used to interpret three-dimensional information belong to a branch of mathematics called stereology. However, methods to obtain the full mineral liberation distribution of particles from particle sections are relatively new. To verify these adjustment methods, we require an experimental method which can accurately measure both sectional and three-dimensional properties. Micro cone beam tomography provides such a method for suitable particles and hence provides a way to validate methods used to convert two-dimensional measurements into three-dimensional estimates. For this study, ore particles from a well-characterised sample were subjected to conventional mineralogical analysis (using particle sections) to estimate three-dimensional properties of the particles. A subset of these particles was analysed using a micro cone beam tomograph. This paper presents a comparison of the three-dimensional properties predicted from measured two-dimensional sections with the measured three-dimensional properties.
Abstract:
Workflow technology has delivered effectively for a large class of business processes, providing the requisite control and monitoring functions. At the same time, this technology has been the target of much criticism due to its limited ability to cope with dynamically changing business conditions, which require business processes to be adapted frequently, and/or its limited ability to model business processes which cannot be entirely predefined. Requirements indicate the need for generic solutions where a balance between process control and flexibility may be achieved. In this paper we present a framework that allows the workflow to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. This framework is based on the notion of process constraints. Whereas process constraints may be specified for any aspect of the workflow, such as structural or temporal aspects, our focus in this paper is on a constraint which allows dynamic selection of activities for inclusion in a given instance. We call these cardinality constraints, and this paper discusses their specification and validation requirements.
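Validating a cardinality constraint at instantiation time reduces to checking which optional activities were selected for the instance against declared bounds. A minimal, generic sketch follows; the constraint representation and the activity names are assumptions for illustration, not the paper's formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CardinalityConstraint:
    """Between `minimum` and `maximum` activities from `pool` may be selected."""
    pool: frozenset
    minimum: int
    maximum: int

    def validate(self, selected):
        chosen = self.pool & set(selected)
        return self.minimum <= len(chosen) <= self.maximum

if __name__ == "__main__":
    review = CardinalityConstraint(
        frozenset({"peer_review", "legal_review", "finance_review"}), minimum=1, maximum=2)
    print(review.validate(["draft", "peer_review"]))                           # True
    print(review.validate(["draft"]))                                          # False: none selected
    print(review.validate(["peer_review", "legal_review", "finance_review"]))  # False: too many
```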
Abstract:
A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification: data flow, and its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focusing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension. A discussion of the essential requirements of the workflow data model in order to support data validation is also given.
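One family of data-flow problems of the kind alluded to above, an activity reading a data item that no preceding activity has produced, can be detected with a simple forward pass over the activity ordering. The sketch below treats the workflow as a plain sequence for simplicity (real models with branching and parallelism need a richer traversal), and all activity and data names are hypothetical.

```python
def missing_data_errors(activities):
    """Report reads of data items that no earlier activity in the sequence wrote."""
    available, errors = set(), []
    for name, reads, writes in activities:
        for item in reads:
            if item not in available:
                errors.append(f"{name} reads '{item}' before any activity writes it")
        available |= set(writes)
    return errors

if __name__ == "__main__":
    workflow = [
        ("receive_claim", [], ["claim_form"]),
        ("assess_claim", ["claim_form", "policy_record"], ["assessment"]),  # policy_record never produced
        ("notify_customer", ["assessment"], []),
    ]
    for err in missing_data_errors(workflow):
        print("data flow error:", err)
```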
Abstract:
A web wrapper extracts data from HTML documents. The accuracy and quality of the information extracted by a web wrapper rely on the structure of the HTML document. If an HTML document is changed, the web wrapper may or may not function correctly. This paper presents an Adjacency-Weight method to be used in the web wrapper extraction process or in a wrapper self-maintenance mechanism to validate web wrappers. The algorithm and data structures are illustrated with some intuitive examples.
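The abstract does not spell out the Adjacency-Weight algorithm itself, so the sketch below only shows the general class of structural validation it belongs to: fingerprint the tag adjacencies of the HTML a wrapper targets and flag the wrapper for maintenance when that fingerprint drifts. The class names, similarity measure and threshold are assumptions, not the paper's method.

```python
from collections import Counter
from html.parser import HTMLParser

class AdjacencyCounter(HTMLParser):
    """Count parent->child tag adjacencies as a crude structural fingerprint."""
    def __init__(self):
        super().__init__()
        self.stack, self.pairs = [], Counter()

    def handle_starttag(self, tag, attrs):
        if self.stack:
            self.pairs[(self.stack[-1], tag)] += 1
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

def fingerprint(html):
    parser = AdjacencyCounter()
    parser.feed(html)
    return parser.pairs

def wrapper_still_valid(reference_html, current_html, threshold=0.8):
    """Heuristic check: adjacency-fingerprint similarity above a threshold."""
    ref, cur = fingerprint(reference_html), fingerprint(current_html)
    shared = sum((ref & cur).values())
    total = max(sum(ref.values()), sum(cur.values()), 1)
    return shared / total >= threshold

if __name__ == "__main__":
    old_page = "<table><tr><td>price</td><td>10</td></tr></table>"
    new_page = "<div><span>price</span><span>10</span></div>"
    print(wrapper_still_valid(old_page, old_page))  # True: structure unchanged
    print(wrapper_still_valid(old_page, new_page))  # False: layout changed, revalidate wrapper
```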