988 results for model verification


Relevance: 30.00%

Abstract:

The instability of river banks can result in considerable human and land losses. The Po River, the most important in Italy, is characterized by main embankments of significant and constantly increasing height. This study presents multilayer perceptron artificial neural network (ANN) models for the stability analysis of river banks along the Po River under various river and groundwater boundary conditions. To this aim, a number of threshold-logic-unit networks are tested using different combinations of the input parameters. The factor of safety (FS), an index of slope stability, is formulated in terms of several influential geometrical and geotechnical parameters. To obtain a comprehensive geotechnical database, several cone penetration tests from the study site were interpreted. The proposed models are developed from stability analyses carried out with a finite element code over different representative sections of the river embankments. To verify their validity, the ANN models are employed to predict the FS values of a part of the database beyond the calibration data domain. The results indicate that the proposed ANN models are effective tools for evaluating slope stability and notably outperform the derived multiple linear regression models.
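
As a rough illustration of the kind of model the abstract describes, the sketch below trains a small multilayer perceptron to map geotechnical inputs to a factor of safety. The feature names, value ranges and synthetic data are hypothetical, invented only to make the example self-contained; the paper's actual inputs come from cone penetration tests and finite element stability analyses.

```python
# Minimal sketch (not the authors' code): an MLP regressor mapping
# geometrical/geotechnical inputs to a factor of safety.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical inputs: bank height, slope angle, friction angle,
# cohesion, river level, groundwater level (units arbitrary here).
X = rng.uniform([5, 20, 25, 5, 0, 0], [25, 45, 40, 30, 10, 10], size=(500, 6))
fs = 1.5 - 0.02 * X[:, 0] + 0.01 * X[:, 2] + rng.normal(0, 0.05, 500)  # toy target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X[:400], fs[:400])
# Validate on held-out cases, mimicking the paper's check beyond the
# calibration data domain (here simply the last 100 samples).
print("R^2 on held-out set:", model.score(X[400:], fs[400:]))
```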

Relevance: 30.00%

Abstract:

The present thesis work was performed in the framework of the ESEO (European Student Earth Orbiter) project. The activities described in this document were carried out in the Microsatellites and Space Microsystems Lab, led by Professor Paolo Tortora, and at the ALMASpace company facilities. The thesis deals with the structural analysis and verification of ESEO at system and unit level. After determining the design limit loads to be applied to the spacecraft as an envelope of the load profiles of different launchers, a finite element structural analysis was performed on the satellite model to ensure its capability to withstand the loads encountered during launch; all analyses were performed according to ESA standards, using the MSC NASTRAN SIMXPERT software. Amplification factors were derived and used to determine the loads to be considered at unit level. In particular, structural analyses were carried out on the GPS unit, the payload developed for ESEO by students of the University of Bologna, and the results were used in the preparation of the GPS payload design definition file. As for the verification phase, a study of the panels and inserts to be used in the spacecraft was performed: different designs were created exploiting methods to optimize weight and mechanical behavior, and the resulting configurations were analyzed and compared to select the final design.
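
A minimal sketch of the envelope step described above, assuming purely illustrative load factors (the actual values come from the launchers' user manuals and the applicable ESA standards):

```python
# Sketch only: design limit loads taken as the worst-case envelope of
# quasi-static load factors from several candidate launchers.
import numpy as np

# Rows: hypothetical launchers; columns: (axial g, lateral g).
load_factors = np.array([
    [7.5, 1.5],   # launcher A (illustrative)
    [8.5, 2.0],   # launcher B (illustrative)
    [6.0, 2.5],   # launcher C (illustrative)
])
envelope = load_factors.max(axis=0)   # worst case per axis
design_limit = 1.25 * envelope        # example qualification factor
print("Envelope (axial, lateral) [g]:", envelope)
print("Design limit loads with factor:", design_limit)
```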

Relevance: 30.00%

Abstract:

The present work studies a km-scale data assimilation scheme based on a LETKF developed for the COSMO model. The aim is to evaluate the impact of assimilating two different types of data: temperature, humidity, pressure and wind data from conventional networks (SYNOP, TEMP and AIREP reports) and 3D reflectivity from radar volumes. A 3-hourly continuous assimilation cycle has been implemented over an Italian domain, based on a 20-member ensemble with boundary conditions provided by the ECMWF ENS. Three experiments were run to evaluate the performance of the assimilation over one week of October 2014, during which the Genoa and Parma floods took place: a control run of the data assimilation cycle assimilating data from conventional networks only; a second run in which the SPPT scheme is activated in the COSMO model; and a third run in which reflectivity volumes from meteorological radars are also assimilated. Objective evaluation of the experiments has been carried out both on case studies and on the entire week: checking the analysis increments; computing the Desroziers statistics for SYNOP, TEMP, AIREP and radar data over the Italian domain; verifying the analyses against data not assimilated (temperature at the lowest model level, objectively verified against SYNOP data); and objectively verifying the deterministic forecasts initialised with the KENDA analyses for each of the three experiments.
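
For reference, the Desroziers statistics mentioned above are the standard observation-space consistency diagnostics (Desroziers et al., 2005). With y the observations, x_b and x_a the background and analysis, H the (linearized) observation operator, and B, R the background and observation error covariances:

```latex
% d_b = y - H(x_b) (innovation), d_a = y - H(x_a) (analysis residual).
\begin{align}
  \mathbb{E}\!\left[ d_b\, d_b^{\mathsf T} \right] &= \mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf T} + \mathbf{R},\\
  \mathbb{E}\!\left[ d_a\, d_b^{\mathsf T} \right] &= \mathbf{R},\\
  \mathbb{E}\!\left[ (H x_a - H x_b)\, d_b^{\mathsf T} \right] &= \mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf T}.
\end{align}
```

Departures of the sample statistics from these identities indicate mis-specified error covariances in the assimilation system.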

Relevance: 30.00%

Abstract:

BACKGROUND. The high rate of reperfusion injury in clinical lung transplantation mandates significant improvements in lung preservation. Innovations should be validated using standardized and low-cost experimental models. METHODS. The model introduced here is analyzed by comparing global lung function after varying ischemic times (2, 4, 8, 16, and 24 hours). A rat double-lung block is flush-perfused, and the main pulmonary artery and left atrium are connected to the left pulmonary artery and vein of a syngeneic recipient using a T-shaped stent. With pressure side ports and incorporated flow crystals, vascular resistance and graft oxygenation can be measured. The transplant is ventilated separately, and compliance and resistance are determined. RESULTS. Increasing the ischemic interval from 2 to 24 hours increased the alveolar-arterial oxygen difference from 220 ± 20 to 600 ± 34 mm Hg, pulmonary vascular resistance from 198 ± 76 to 638 ± 212 mm Hg · mL⁻¹ · min⁻¹, and resistance to airflow from 274 ± 50 to 712 ± 30 cm H₂O/(L/s), and decreased pulmonary compliance from 0.4 ± 0.05 to 0.12 ± 0.06 mL/cm H₂O. CONCLUSIONS. This in situ, syngeneic rat lung transplantation model offers an alternative to large-animal models for the verification of lung preservation solutions and for the modification of donor or recipient treatment regimens.
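
For orientation, the indices reported above are conventionally defined as follows; this is a textbook statement, not the paper's own formulas:

```latex
\begin{align}
  \mathrm{PVR} &= \frac{P_{\mathrm{PA}} - P_{\mathrm{LA}}}{\dot{Q}}
    && \text{(pulmonary vascular resistance: pressure drop over flow)},\\
  \mathrm{AaDO_2} &= P_{\mathrm{A}}\mathrm{O_2} - P_{\mathrm{a}}\mathrm{O_2}
    && \text{(alveolar-arterial oxygen difference)}.
\end{align}
```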

Relevance: 30.00%

Abstract:

The CHaracterizing ExOPlanet Satellite (CHEOPS) is an ESA Small Mission whose launch is planned for the end of 2017. It is a Ritchey-Chrétien telescope with a 320 mm aperture providing a field of view of 0.32 degrees, which will target nearby bright stars already known to host planets and measure, through ultra-high-precision photometry, the radii of exoplanets, making it possible to determine their composition. This paper presents the details of the AIV plan for a demonstration model of the CHEOPS telescope with an equivalent structure but different CTEs. The alignment procedures, the required GSEs and the devised verification tests are described, and a path for the AIV of the flight model, which will take place at industry premises, is sketched. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
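
The radius measurement mentioned above rests on the standard transit-depth relation, a textbook result rather than anything specific to this paper:

```latex
\begin{equation}
  \frac{\Delta F}{F} \simeq \left( \frac{R_p}{R_\star} \right)^{2},
\end{equation}
% e.g. an Earth-sized planet transiting a Sun-like star gives
% (R_Earth / R_Sun)^2 ~ 8.4e-5, i.e. roughly an 84 ppm dip,
% hence the need for ultra-high-precision photometry.
```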

Relevance: 30.00%

Abstract:

The study of operations on representations of objects is well documented in the realm of spatial engineering. However, the mathematical structure and formal proof of these operational phenomena are not thoroughly explored. Other works have often focused on query-based models that seek to order classes and instances of objects in the form of semantic hierarchies or graphs. In some models, nodes of graphs represent objects and are connected by edges that represent different types of coarsening operators. This work, however, studies how the coarsening operator "simplification" can manipulate partitions of finite sets, independently of objects and their attributes. Partitions that are "simplified" first have a collection of elements filtered (removed), and then the remaining partition is amalgamated (some sub-collections are unified). Simplification has many interesting mathematical properties: a finite composition of simplifications can also be accomplished with some single simplification, and if one partition is a simplification of another, the simplified partition is defined to be less than the other partition according to the "simp" relation. This relation is shown to be a partial-order relation based on simplification. Collections of partitions can not only be proven to have a partial-order structure but also to form a complete lattice. In regard to a geographic information system (GIS), partitions related to subsets of attribute domains for objects are called views. Objects belong to different views based on whether or not their attribute values lie in the underlying view domain. Given a particular view, objects whose attribute n-tuple codings are contained in the view are part of the actualization set on views, and objects are labeled according to the particular subset of the view in which their coding lies. Though the scope of the work does not focus mainly on queries related directly to geographic objects, it provides verification for the existence of particular views in a system with this underlying structure. Given a finite attribute domain, one can say with mathematical certainty that different views of objects are partially ordered by simplification, and every collection of views has a greatest lower bound and a least upper bound, which provides the validity for exploring queries in this regard.
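
A minimal executable sketch of the simplification operator as the abstract defines it (filter, then amalgamate); the data structures and function names here are invented for illustration:

```python
# Sketch only: a "simplification" first removes some elements, then
# unifies some of the remaining blocks of the partition.
from itertools import chain

def simplify(partition, keep, groups):
    """partition: list of frozensets (blocks); keep: set of surviving
    elements; groups: list of lists of block indices to amalgamate."""
    filtered = [b & keep for b in partition]
    filtered = [b for b in filtered if b]          # drop emptied blocks
    used = set(chain.from_iterable(groups))
    merged = [frozenset().union(*(filtered[i] for i in g)) for g in groups]
    rest = [b for i, b in enumerate(filtered) if i not in used]
    return merged + rest

P = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5})]
# Filter out element 5, then amalgamate the first two remaining blocks.
Q = simplify(P, keep={1, 2, 3, 4}, groups=[[0, 1]])
print(Q)  # [frozenset({1, 2, 3}), frozenset({4})]

# "simp" order: Q <= P iff Q is obtainable from P by some simplification;
# composing two simplifications yields another, consistent with the
# partial-order claim in the abstract.
```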

Relevance: 30.00%

Abstract:

The purpose of this work was to develop a comprehensive IMSRT QA procedure that examined, using EPID dosimetry and Monte Carlo (MC) calculations, each step in the treatment planning and delivery process. These steps included verification of the field shaping, of the treatment planning system (RTPS) dose calculations, and of the patient dose delivery. Verification of each step in the treatment process is assumed to result in correct dose delivery to the patient. The accelerator MC model was verified against commissioning data for field sizes from 0.8 × 0.8 cm² to 10 × 10 cm². Depth doses were within 2% local percent difference (LPD) in low-gradient regions and 1 mm distance to agreement (DTA) in high-gradient regions, and lateral profiles met the same criteria. Calculated output factors were within 1% of measurement for field sizes ≥ 1 × 1 cm². The measured and calculated pretreatment EPID dose patterns were compared using criteria of 5% LPD, 1 mm DTA, or 2% of the central-axis pixel value, with ≥ 95% of compared points required to pass for successful verification; pretreatment field verification resulted in 97% of the points passing. The RTPS and Monte Carlo phantom dose calculations were compared using 5% LPD, 2 mm DTA, or 2% of the maximum dose, with ≥ 95% of compared points required to pass; RTPS calculation verification resulted in 97% of the points passing. The measured and calculated EPID exit dose patterns were compared using criteria of 5% LPD, 1 mm DTA, or 2% of the central-axis pixel value, with ≥ 95% of compared points required to pass; exit dose verification resulted in 97% of the points passing. Each of the processes above verified an individual step in the treatment planning and delivery process, and the combination of these verification steps ensures accurate treatment delivery to the patient. This work shows that Monte Carlo calculations and EPID dosimetry can be used to quantitatively verify IMSRT treatments, resulting in improved patient care and, potentially, improved clinical outcome.
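
A one-dimensional sketch of the composite pass criterion used throughout (a point passes on LPD, on DTA, or on a fraction of the central-axis value, and the fraction of passing points is reported); this is an illustrative reading of the stated criteria, not the study's code:

```python
# Sketch only: composite LPD / DTA / central-axis criterion on a 1-D profile.
import numpy as np

def pass_rate(measured, calculated, spacing_mm,
              lpd=0.05, dta_mm=1.0, cax_frac=0.02):
    cax = calculated[len(calculated) // 2]       # central-axis reference
    ok = np.zeros(measured.shape, dtype=bool)
    reach = max(1, int(round(dta_mm / spacing_mm)))
    for i, m in enumerate(measured):
        c = calculated[i]
        if abs(m - c) <= lpd * abs(c) or abs(m - c) <= cax_frac * abs(cax):
            ok[i] = True
            continue
        lo, hi = max(0, i - reach), min(len(calculated), i + reach + 1)
        # DTA: some calculated point within dta_mm matches the measurement.
        ok[i] = np.any(np.abs(calculated[lo:hi] - m) <= lpd * abs(c))
    return ok.mean()

meas = np.array([1.00, 1.02, 0.97, 0.50, 0.10])
calc = np.array([1.00, 1.00, 1.00, 0.55, 0.10])
print(f"pass rate: {pass_rate(meas, calc, spacing_mm=1.0):.0%}")
```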

Relevance: 30.00%

Abstract:

Proof-Carrying Code (PCC) is a general approach to mobile code safety in which programs are augmented with a certificate (or proof). The intended benefit is that the program consumer can locally validate the certificate w.r.t. the "untrusted" program by means of a certificate checker, a process which should be much simpler, more efficient, and more automatic than generating the original proof. The practical uptake of PCC greatly depends on the existence of a variety of enabling technologies which allow both proving programs correct and replacing a costly verification process by an efficient checking procedure on the consumer side. In this work we propose Abstraction-Carrying Code (ACC), a novel approach which uses abstract interpretation as the enabling technology. We argue that the large body of applications of abstract interpretation to program verification is amenable to the overall PCC scheme. In particular, we rely on an expressive class of safety policies which can be defined over different abstract domains. We use an abstraction (or abstract model) of the program, computed by standard static analyzers, as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We believe that ACC brings the expressiveness, flexibility and automation inherent in abstract interpretation techniques to the area of mobile code safety.
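
A toy sketch of the consumer-side check, using a sign domain invented for illustration: the shipped abstraction is validated by confirming that one application of the abstract transfer functions stays within the certificate (a post-fixpoint check), so a single pass suffices and no fixpoint iteration is re-run:

```python
# Sketch only: single-pass validation of an abstraction over a sign domain.
def abs_add(a, b):  # abstract addition over {"neg", "zero", "pos", "top"}
    if a == b and a in {"neg", "pos"}:
        return a
    if a == "zero":
        return b
    if b == "zero":
        return a
    return "top"

def leq(a, b):  # domain order: anything <= top, else equality
    return a == b or b == "top"

# Program: x = 1; y = x + x  -- certificate maps variables to signs.
certificate = {"x": "pos", "y": "pos"}

def check(cert):
    ok = leq("pos", cert["x"])                           # x = 1
    ok &= leq(abs_add(cert["x"], cert["x"]), cert["y"])  # y = x + x
    return ok

print("certificate valid:", check(certificate))  # True
```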

Relevance: 30.00%

Abstract:

In order to satisfy safety-critical requirements, the train control system (TCS) often employs a layered safety communication protocol to provide reliable services. However, both the description and the verification of safety protocols may be formidable due to the system's complexity. In this paper, interface automata (IA) are used to describe the safety service interface behaviors of the safety communication protocol. A formal verification method is proposed that describes the safety communication protocols using IA and translates the IA model into a PROMELA model, so that the protocols can be verified by the model checker SPIN. A case study of using this method to describe and verify a safety communication protocol is included. The verification results illustrate that the proposed method is effective for describing safety protocols and for verifying the absence of deadlocks and livelocks as well as several mandatory consistency properties. A prototype of the safety protocols has also been developed based on the presented formal verification method.
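
A toy sketch in the spirit of the approach: two interface automata for a hypothetical request/acknowledge exchange are composed by synchronising shared actions, and the product is searched for deadlock states, which is essentially what SPIN does on the translated PROMELA model:

```python
# Sketch only: compose two interface automata and search for deadlocks.
from collections import deque

# Each automaton: {state: [(action, next_state), ...]}; "!" = output, "?" = input.
sender   = {"s0": [("msg!", "s1")], "s1": [("ack?", "s0")]}
receiver = {"r0": [("msg?", "r1")], "r1": [("ack!", "r0")]}

def steps(s, r):
    for a, s2 in sender.get(s, []):
        base = a.rstrip("!?")
        for b, r2 in receiver.get(r, []):
            # Synchronise an output with the matching input action.
            if b.rstrip("!?") == base and a != b:
                yield (s2, r2)

def deadlocks(start=("s0", "r0")):
    seen, queue, dead = {start}, deque([start]), []
    while queue:
        state = queue.popleft()
        succs = list(steps(*state))
        if not succs:
            dead.append(state)
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return dead

print("deadlock states:", deadlocks())  # [] -- this toy protocol is live
```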

Relevance: 30.00%

Abstract:

Verifying compliance with a design specification in manufacturing requires the use of metrological instruments to check whether the magnitude associated with the design specification lies within the tolerance range. Such instrumentation, and its use during the measurement process, carries a measurement uncertainty whose value must be related to the tolerance being tested. Most papers dealing jointly with tolerances and measurement uncertainties focus mainly on establishing an uncertainty-tolerance relationship, without paying much attention to the impact from the standpoint of process cost. This paper analyzes the relationship between cost and measurement uncertainty, considering uncertainty as a productive factor in the process outcome. This is done starting from a cost-tolerance model associated with the process; by means of this model, the effect of measurement uncertainty is quantified in terms of cost and its impact on the process is analyzed.
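
One common way to make this concrete, shown below under an assumed reciprocal cost-tolerance form and a guard band of twice the expanded uncertainty U (the paper's own model may differ):

```latex
\begin{align}
  C(T) &= a + \frac{b}{T}
    && \text{(manufacturing cost vs.\ tolerance $T$)},\\
  T_{\mathrm{eff}} &= T - 2U
    && \text{(tolerance usable after guard-banding each limit by $U$)},\\
  \Delta C(U) &= C(T - 2U) - C(T)
    && \text{(cost attributable to measurement uncertainty)}.
\end{align}
```

In this reading, tightening U (a better instrument) lowers ΔC but raises inspection cost, so the uncertainty enters the process economics as a productive factor, as argued above.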

Relevance: 30.00%

Abstract:

CiaoPP is the abstract interpretation-based preprocessor of the Ciao multi-paradigm (Constraint) Logic Programming system. It uses modular, incremental abstract interpretation as a fundamental tool to obtain information about programs. In CiaoPP, the semantic approximations thus produced have been applied to perform high- and low-level optimizations during program compilation, including transformations such as multiple abstract specialization, parallelization, partial evaluation, resource usage control, and program verification. More recently, novel and promising applications of such semantic approximations have emerged in the more general context of program development, such as program verification. In this work, we describe our extension of the system to incorporate Abstraction-Carrying Code (ACC), a novel approach to mobile code safety. ACC follows the standard strategy of associating safety certificates with programs, originally proposed in Proof-Carrying Code. A distinguishing feature of ACC is that we use an abstraction (or abstract model) of the program, computed by standard static analyzers, as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We have implemented and benchmarked ACC within CiaoPP. The experimental results show that the checking phase is indeed faster than the proof generation phase, and that the sizes of the certificates are reasonable. Moreover, the preprocessor is based on compile-time (and run-time) tools for the certification of CLP programs with resource consumption assurances.

Relevance: 30.00%

Abstract:

We perform a review of Web Mining techniques and we describe a Bootstrap Statistics methodology applied to pattern model classifier optimization and verification for Supervised Learning for Tour-Guide Robot knowledge repository management. It is virtually impossible to test thoroughly Web Page Classifiers and many other Internet Applications with pure empirical data, due to the need for human intervention to generate training sets and test sets. We propose using the computer-based Bootstrap paradigm to design a test environment where they are checked with better reliability.
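
A minimal sketch of the bootstrap step, with hypothetical per-document outcomes standing in for a human-labelled test set:

```python
# Sketch only: bootstrap resampling of a classifier's test accuracy to
# obtain a confidence interval without collecting more labelled data.
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical per-document outcomes: 1 = classified correctly.
outcomes = rng.binomial(1, 0.85, size=200)

B = 10_000
boot = np.array([rng.choice(outcomes, size=outcomes.size, replace=True).mean()
                 for _ in range(B)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"accuracy = {outcomes.mean():.3f}, 95% bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```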

Relevance: 30.00%

Abstract:

This paper presents a new verification procedure for sound source coverage according to ISO 140-5 requirements. The ISO 140-5 standard applies to the measurement of façade insulation and requires a sound source able to achieve a sufficiently uniform sound field in free-field conditions on the façade under study. The proposed method involves the electroacoustic characterisation of the sound source under laboratory free-field conditions (anechoic room) and the subsequent prediction, by computer simulation, of the free sound field radiated on a rectangular surface equal in size to the façade being measured. The loudspeaker is characterised in an anechoic room under controlled laboratory conditions, carefully measuring its directivity, and a computer model is then designed to calculate the acoustic free-field coverage for different loudspeaker positions and façade sizes. For each sound source position, the method provides the maximum direct acoustic level difference on a façade specimen and therefore determines whether the loudspeaker satisfies the maximum allowed level difference of 5 dB (or 10 dB for façade dimensions greater than 5 m) required by the ISO standard. Additionally, the maximum horizontal dimension of the façade meeting the standard is calculated and provided for each sound source position, both with the 5 dB and the 10 dB criteria. In the last section of the paper, the proposed procedure is compared with another method used by the authors in the past for the same purpose: in situ outdoor measurements attempting to recreate free-field conditions. From this comparison, it is concluded that the proposed method reproduces the actual measurements with high accuracy; moreover, the ground reflection effect, which is difficult to avoid in the outdoor measurement method at least at low frequencies, is fully eliminated with the proposed method, thus achieving the free-field requisite.
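
A minimal sketch of the coverage check, assuming an omnidirectional source and pure free-field spherical spreading (the paper's simulation uses the measured loudspeaker directivity instead):

```python
# Sketch only: direct sound level over a facade grid from a point
# source, checking the <= 5 dB maximum level difference of ISO 140-5.
import numpy as np

def level_difference(src, width=4.0, height=3.0, n=25):
    """Max direct-level difference (dB) over a width x height facade grid
    for a source at (y, z, distance) relative to the facade plane."""
    y = np.linspace(0.0, width, n)
    z = np.linspace(0.0, height, n)
    Y, Z = np.meshgrid(y, z)
    r = np.sqrt((Y - src[0])**2 + (Z - src[1])**2 + src[2]**2)
    spl = -20.0 * np.log10(r)  # free-field distance law, relative dB
    return spl.max() - spl.min()

# Source placed 7 m from the facade plane, opposite the facade centre.
d = level_difference(src=(2.0, 1.5, 7.0))
print(f"max level difference: {d:.1f} dB ->", "PASS" if d <= 5.0 else "FAIL")
```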

Relevance: 30.00%

Abstract:

Satellites and space equipment are exposed to diffuse acoustic fields during launch. The use of adequate techniques to model the response to acoustic loads is a fundamental task during the design and verification phases, and the modal density of each element must be considered in order to identify the correct methodology. In this report, selection criteria are presented for choosing the correct modelling technique depending on the frequency range. A model satellite's response to acoustic loads is presented, determining the modal densities of each component in different frequency ranges. The paper proposes how to select the mathematical method in each modal density range and examines the differences in the response estimates due to the different techniques used. In addition, methodologies to analyse the intermediate range of the system are discussed. The results are compared with data obtained in an experimental modal test.
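
As an example of the kind of criterion involved, the modal density of a flat panel in bending is the standard thin-plate result below (a textbook formula, not taken from the paper); few modes per band favour deterministic finite element models, while high modal density favours statistical methods such as SEA:

```latex
% A = panel area, rho = density, h = thickness, D = bending stiffness.
\begin{equation}
  n(f) = \frac{A}{2}\sqrt{\frac{\rho h}{D}},
  \qquad D = \frac{E h^{3}}{12\,(1-\nu^{2})},
\end{equation}
% n(f) is frequency-independent for a thin flat plate, so the mode
% count per band grows with bandwidth, marking the FEM-to-SEA crossover.
```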

Relevance: 30.00%

Abstract:

The continuous improvement of management and assessment processes for curricular external internships has led a group of university teachers specialised in this area to develop a mixed measurement model that combines the verification of skill acquisition by those students choosing external internships with the satisfaction of the parties involved in that process, the latter category including academics, educational tutors from companies and organisations, and administration and services personnel. The experience, developed at the University of Alicante, has been carried out in the degrees of Business Administration and Management, Business Studies, Economics, Advertising and Public Relations, Sociology and Social Work, all part of the Faculty of Economics and Business. By designing and managing closed standardised interviews and other research tools, validated outside the centre, a system of continuous improvement and quality assurance has been created, clearly contributing to the gradual increase in the number of students with internships in this Faculty, as well as to the improvement in satisfaction, efficiency and efficacy indicators at a global level. As this experience of educational innovation has shown, the acquisition of curricular knowledge, skills, abilities and competences by the students is directly correlated with the satisfaction of the parties involved in a process that takes the student beyond the physical borders of a university campus. Ensuring the latter is a task made easier by the implementation of a mixed assessment method, combining continuous and final assessment, and characterised by its rigour and simple management. This report presents that model, itself subject to persistent and continuous control, in which all parties involved in the external internships take part. Its short-term results imply an increase, estimated at 15% for the last academic year, in the number of students choosing curricular internships and, in the medium and long term, a closer interweaving between the academic world and its social and productive environment, both in the business and institutional areas. The potential of this assessment model lies not only in the quality of its measurement tools, but also in the effects of its use on the various groups and in the actions carried out as a result of its implementation, which, without any doubt and as shown below, are the real guarantee of continuous improvement.