908 results for Verification and validation technology
Abstract:
Following the processing and validation of JEFF-3.1 performed in 2006 and presented at ND2007, and as a consequence of the latest update of this library (JEFF-3.1.2) in February 2012, a new processing and validation of the JEFF-3.1.2 cross-section library is presented in this paper. The processed library in ACE format at ten different temperatures was generated with the NJOY-99.364 nuclear data processing system. In addition, NJOY-99 inputs are provided to generate the PENDF, GENDF, MATXSR and BOXER formats. The library has undergone strict QA procedures, being compared with other available libraries (e.g. ENDF/B-VII.1) and with processing codes such as the PREPRO-2000 codes. A set of 119 criticality benchmark experiments taken from ICSBEP-2010 has been used for validation purposes.
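As an illustration of the kind of benchmark comparison described above, the sketch below (in Python, with placeholder case names and keff values rather than results from the paper) computes C/E ratios and summary statistics for a handful of criticality cases; an actual validation would cover all 119 ICSBEP-2010 configurations.

```python
# Hypothetical sketch of the benchmark-comparison step: computed k-eff values
# for a set of ICSBEP criticality cases are compared with the benchmark
# (experimental) values via C/E ratios. Case names and numbers are placeholders.
from statistics import mean, stdev

benchmarks = {
    # case id: (benchmark k-eff, calculated k-eff with the processed library)
    "HEU-MET-FAST-001": (1.0000, 0.9987),
    "PU-SOL-THERM-011": (1.0000, 1.0012),
    "LEU-COMP-THERM-008": (1.0000, 0.9995),
}

ce_ratios = {case: calc / exp for case, (exp, calc) in benchmarks.items()}

for case, ce in sorted(ce_ratios.items()):
    # deviation expressed in pcm (1 pcm = 1e-5)
    print(f"{case:22s}  C/E = {ce:.5f}  ({(ce - 1.0) * 1e5:+.0f} pcm)")

values = list(ce_ratios.values())
print(f"mean C/E = {mean(values):.5f}, std = {stdev(values):.5f}")
```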
Abstract:
An accepted fact in software engineering is that software must undergo a verification and validation process during development to ascertain and improve its quality level. But there are more techniques than a single developer could master, and yet it is impossible to be certain that software is free of defects. So, it is crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and likely to yield optimum quality results for different products. Some knowledge is available on the strengths and weaknesses of the available software quality assurance techniques, but not much is known yet about the relationship between different techniques and their contextual behaviour. Objective: This research investigates the effectiveness of two testing techniques (equivalence class partitioning and decision coverage) and one review technique (code review by abstraction) in terms of their fault detection capability. This will be used to strengthen the practical knowledge available on these techniques.
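For readers unfamiliar with the two testing techniques under study, the following illustrative sketch (a toy function and test data invented for this purpose, not material from the study) shows how equivalence class partitioning picks one representative input per input class, while decision coverage selects inputs that drive every branch both ways.

```python
# Illustrative sketch (not from the study): a toy function under test and two
# small test suites, one derived by equivalence class partitioning and one
# aimed at decision (branch) coverage.
def classify_grade(score: int) -> str:
    """Return 'invalid', 'fail' or 'pass' for an exam score in 0..100."""
    if score < 0 or score > 100:
        return "invalid"
    if score < 50:
        return "fail"
    return "pass"

# Equivalence class partitioning: one representative per input class.
ep_cases = {
    -5: "invalid",    # class: below the valid range
    30: "fail",       # class: valid but failing
    75: "pass",       # class: valid and passing
    150: "invalid",   # class: above the valid range
}

# Decision coverage: enough inputs to make every decision evaluate to true
# and to false at least once (both sides of each if-statement are exercised).
dc_cases = {-5: "invalid", 150: "invalid", 30: "fail", 75: "pass"}

for suite_name, suite in (("EP", ep_cases), ("DC", dc_cases)):
    for value, expected in suite.items():
        assert classify_grade(value) == expected, (suite_name, value)
print("both suites pass")
```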
Abstract:
The verification and validation activity plays a fundamental role in improving software quality. Determining which are the most effective techniques for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques (the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT)). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible to the technique (InScope or OutScope, respectively). We have used the materials, design and procedures of the original experiment, but in order to adapt the experiment to the context we have: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials; and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP is more effective than BT at detecting InScope faults. The session/program and group variables are found to have significant effects. BT is more effective than EP at detecting OutScope faults. The session/program and group variables have no effect in this case. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor. They can be explained by small sample effects. The results for the session/program factor are inconsistent for InScope faults. We believe that these differences are due to a combination of the fatigue effect and a technique × program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies for sure. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
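The stratified randomization mentioned in step (2) can be pictured with the short sketch below; the subject identifiers, experience levels and group labels are invented placeholders, not the replication's actual data.

```python
# Hedged sketch of stratified randomization: subjects are grouped (stratified)
# by programming experience and then dealt at random over the experimental
# groups so that each group receives a balanced mix of experience levels.
import random
from collections import defaultdict

subjects = [
    ("S01", "high"), ("S02", "low"), ("S03", "high"), ("S04", "low"),
    ("S05", "high"), ("S06", "low"), ("S07", "high"), ("S08", "low"),
]
groups = ["G1", "G2"]

strata = defaultdict(list)
for subject_id, experience in subjects:
    strata[experience].append(subject_id)

assignment = {}
rng = random.Random(42)  # fixed seed so the allocation is reproducible
for experience, members in strata.items():
    rng.shuffle(members)
    # deal the shuffled members of each stratum round-robin over the groups
    for index, subject_id in enumerate(members):
        assignment[subject_id] = groups[index % len(groups)]

for group in groups:
    allocated = sorted(s for s, g in assignment.items() if g == group)
    print(group, allocated)
```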
Abstract:
Validation of a socio-emotional questionnaire in football
Abstract:
In this paper a novel bidirectional multiple-port dc/dc transformer topology is presented. The novel concept for the dc/dc transformer is based on the Series Resonant Converter (SRC) topology operated at its resonant frequency point. This allows a higher switching frequency to be adopted and enables high-efficiency/high-power-density operation. The feasibility of the proposed concept is verified on a 300 W, 700 kHz three-port prototype with 390 V input voltage and 48 V and 12 V output voltages. A peak overall efficiency of 93% is measured at full load. Very good load and cross-regulation characteristics of the converter are observed over the whole load range, from full load to open circuit. A sensitivity analysis of the resonant capacitance is also performed, showing only a slight deterioration in converter performance when the resonant capacitance is varied by ±30% of its nominal value.
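A quick way to see what a ±30% change in resonant capacitance means for the SRC operating point is the back-of-the-envelope calculation below; the inductance and capacitance values are assumptions chosen to give a resonant frequency near 700 kHz, not figures taken from the prototype.

```python
# Back-of-the-envelope sketch: how the series resonant frequency
# f_r = 1 / (2*pi*sqrt(L_r * C_r)) moves when the resonant capacitance
# deviates by +/-30% from its nominal value. L_r and C_r are assumed values.
from math import pi, sqrt

L_r = 2.0e-6       # resonant inductance [H] (assumed)
C_r_nom = 26.0e-9  # nominal resonant capacitance [F] (assumed)

def resonant_frequency(L: float, C: float) -> float:
    return 1.0 / (2.0 * pi * sqrt(L * C))

f_nom = resonant_frequency(L_r, C_r_nom)
for factor in (0.7, 1.0, 1.3):
    f = resonant_frequency(L_r, C_r_nom * factor)
    print(f"C_r = {factor:.1f} x nominal -> f_r = {f / 1e3:7.1f} kHz "
          f"({(f / f_nom - 1.0) * 100:+.1f} % vs nominal)")
```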
Abstract:
The aim of this research is to develop a fast, efficient and accurate calculation model to estimate the final costs of construction during the preliminary stages of the architectural project.
It is a tool to be used during the preliminary study process, drafting and basic project; it is therefore not necessary to have the complete graphic and written definition of the project in order to calculate the cost scaling. The working hypothesis is that practical application of the model will produce no deviation greater than 10% from the final cost of the projected work. To that purpose, five levels of cost estimation are formulated in the scaling model, from a lower to a higher conceptual and graphic definition of the architectural project. The five calculation levels are: two that take as point of reference the "exogenous" values of house sales (initial development and basic development), and three based on calculations of "endogenous" costs of the projected work (preliminary study, drafting and basic project). The first "exogenous" estimation level (level 1) is calculated from the market valuation of the real estate development and the proportion that the cost of land represents in the sale value of the houses. The fifth level of valuation, also an "exogenous" one (level 5), is calculated from the contrast between the basic external market value, the construction costs, and the estimated development costs of the projected work. This contrast between the "repercussion of construction costs" and the market value is an innovation with respect to existing cost-scaling models, as a methodological process of extrinsic verification and validation of the accuracy and validity of the estimations obtained from the practical application of the model, which is called Pcr.5n (reference cost scaling with 5 calculation levels according to the stage of project definition / architectural conceptualization). The other three levels of "endogenous" construction cost scaling are estimated from internal analytical calculations by project units and synthetic calculations by construction systems and functional spaces. This is performed during the initial stages of the project corresponding to the preliminary study (level 2), drafting (level 3) and basic project (level 4). These theoretical internal calculations are finally evaluated and validated via practical application of the model to residential buildings whose real costs on final settlement of the works are known. As the level of definition and development of the project evolves, from preliminary study to basic project, the calculation improves in efficiency and estimation accuracy, following the applied methodology of successive approximations at finite intervals; the basic hypothesis, as stated above, is to keep the estimated cost within one tenth of the real cost of the work. The cost of material execution of the works is calculated from functional "three-dimensional" cubic parameters of the planned space and constructive "two-dimensional" metric parameters of the outer envelope (roof/facade) and of the building's footprint on the plot.
The functional and construction costs are weighted at every stage of the calculation process with the "thematic/specific" parameters of management (Pg), project (Pp) and execution (Pe) of the particular work being budgeted; finally, the contract construction cost is estimated by increasing the material execution cost by the percentage corresponding to the thematic/specific parameter of the projected work. The construction cost-scaling model Pcr.5n will be a useful tool of great interest in the professional field for estimating the cost of the Basic Project as prescribed in the applicable technical and legal framework. According to Annex I of the Technical Building Code (CTE), it is compulsory that the basic project contain an "approximate valuation of the material execution of the projected work, by chapters", that is, the Basic Project must contain at least an "approximate estimate" by chapters, trades or technologies. This approximate estimate in the Basic Project necessarily has to be produced through the cost-scaling technique, given that structural calculations, services and installations drawings and the constructive resolution of the envelope are not yet available at this stage of the architectural project, insofar as the specifications belonging to the later execution project have not yet been developed. This approximate estimate of the cost of the works is easy to calculate through the practical application of the model, both for students and for professionals of the building sector. As explained and justified in this work, the application of the model for cost estimation during the preliminary stages of the project is fast and accurate, as well as easy to apply both to single-family houses (detached and semi-detached) and to collective housing (blocks). The model can also be applied in the field of real estate valuation, official appraisals, analysis of the economic viability of real estate developments, estimation of the cost of finished works and, in general, whenever an execution project is not available and it is necessary to calculate the construction costs of the projected works. The model can also be used to check estimates calculated by the traditional analytical method (detailed bills of quantities with unit prices and cost breakdowns), both in private works and in works promoted by Public Authorities. Finally, as open lines for future research, the "reference cost-scaling model with 5 calculation levels" could be adapted and applied to uses and typologies other than residential, such as service buildings and public facilities, valuation of historical buildings, interior and exterior site development works, park and garden projects, etc. These lines of research run parallel to the work developed here and, by way of a partial preview, are presented in the papers given at the international congresses Scieconf/June 2013 and Rics-Cobra/September 2013 and at the IV National Congress on Building Pathology-Ucam/April 2014.
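Purely as an illustration of the calculation scheme outlined above (material execution cost built from cubic and metric quantities, weighted by the thematic parameters Pg, Pp and Pe, then marked up to a contract cost), a minimal sketch follows; every unit cost and coefficient in it is an invented placeholder, not a value from the Pcr.5n model.

```python
# Purely illustrative sketch of the kind of calculation the Pcr.5n model
# describes: a material-execution cost from volumetric ("cubic") and envelope
# ("metric") quantities, weighted by the thematic parameters Pg, Pp and Pe,
# and then marked up to a contract cost. All figures are invented placeholders.
def material_execution_cost(volume_m3: float, envelope_m2: float,
                            footprint_m2: float,
                            cost_per_m3: float, cost_per_m2_envelope: float,
                            cost_per_m2_footprint: float,
                            pg: float, pp: float, pe: float) -> float:
    base = (volume_m3 * cost_per_m3
            + envelope_m2 * cost_per_m2_envelope
            + footprint_m2 * cost_per_m2_footprint)
    # weight by the management / project / execution parameters
    return base * pg * pp * pe

def contract_cost(pem: float, markup: float = 0.19) -> float:
    # contract cost = material execution cost plus a percentage markup
    return pem * (1.0 + markup)

pem = material_execution_cost(volume_m3=900.0, envelope_m2=420.0,
                              footprint_m2=120.0, cost_per_m3=95.0,
                              cost_per_m2_envelope=180.0,
                              cost_per_m2_footprint=60.0,
                              pg=1.02, pp=1.01, pe=1.05)
print(f"material execution cost ~ {pem:,.0f} EUR")
print(f"contract cost           ~ {contract_cost(pem):,.0f} EUR")
```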
Abstract:
In the framework of the ITER Control Breakdown Structure (CBS), Plant System Instrumentation & Control (I&C) defines the hardware and software required to control one or more plant systems [1]. For diagnostics, most of the complex Plant System I&C is to be delivered by the ITER Domestic Agencies (DAs). As an example for the DAs, the ITER Organization (IO) has developed several use cases for diagnostics Plant System I&C that fully comply with the guidelines presented in the Plant Control Design Handbook (PCDH) [2]. One such use case is for neutron diagnostics, specifically the Fission Chamber (FC), which is responsible for delivering time-resolved measurements of neutron source strength and fusion power to aid in assessing the functional performance of ITER [3]. ITER will deploy four Fission Chamber units, each consisting of three individual FC detectors. Two of these detectors contain Uranium-235 for neutron detection, while a third "dummy" detector provides gamma and noise detection. The neutron flux from each MFC is measured by three methods: Counting Mode, which measures the number of individual pulses and their location in the record, with user-configurable pulse parameters (threshold and width); Campbelling Mode (mean square voltage), which measures the RMS deviation of the signal amplitude from its average value; and Current Mode, which integrates the signal amplitude over the measurement period.
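The three acquisition modes can be summarized with the following sketch, which applies pulse counting, Campbelling (mean-square deviation from the mean) and current-mode integration to a synthetic sampled waveform; the sample rate, threshold and injected pulses are invented for illustration and do not reflect the actual FC electronics.

```python
# Hedged sketch of the three acquisition modes applied to a synthetic signal:
# pulse counting above a threshold, Campbelling (mean-square deviation from
# the mean), and current mode (integral of the signal over the window).
import random

random.seed(0)
dt = 1.0e-6                       # sample period [s] (assumed)
signal = [random.gauss(0.05, 0.02) for _ in range(10_000)]
for pulse_start in (1_000, 4_000, 7_500):   # inject a few synthetic pulses
    for k in range(20):
        signal[pulse_start + k] += 1.0

# Counting mode: number of rising-edge threshold crossings
threshold = 0.5
count = sum(1 for prev, cur in zip(signal, signal[1:])
            if prev < threshold <= cur)

# Campbelling mode: mean-square deviation of the signal from its average
mean_value = sum(signal) / len(signal)
campbell = sum((s - mean_value) ** 2 for s in signal) / len(signal)

# Current mode: integral of the signal amplitude over the measurement period
current = sum(signal) * dt

print(f"counting mode : {count} pulses")
print(f"campbelling   : {campbell:.4e} (mean-square deviation)")
print(f"current mode  : {current:.4e} (integrated amplitude)")
```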
Abstract:
In this article an experimental campaign aimed at validating a previously published simplified serviceability design method for the columns of long jointless structures is presented. The proposed method is also extended to include tension stiffening effects, which proved to be significant in structures with a small amount of reinforcement subjected to small axial loading. This extension allows a significant improvement of predictions for this type of element. The campaign involved columns with different reinforcement and squashing load ratios, given that these parameters had been identified as crucial when designing columns subjected to imposed displacements. Experimental results are presented and discussed, with particular regard to cracking behaviour and structural stiffness. Considerations on tension stiffening effects are also made. Finally, the application of the method to typical bridge and building cases is presented, showing the feasibility of jointless construction and the limits which should be respected.
Abstract:
This paper describes the design and application of the Atmospheric Evaluation and Research Integrated model for Spain (AERIS). Currently, AERIS can provide concentration profiles of NO2, O3, SO2, NH3 and PM as a response to emission variations in relevant sectors in Spain. Results are calculated using transfer matrices based on an air quality modelling system (AQMS) composed of the WRF (meteorology), SMOKE (emissions) and CMAQ (atmospheric-chemical processes) models. The AERIS outputs were statistically tested against the conventional AQMS and against observations, revealing good agreement in both cases. At the moment, integrated assessment in AERIS focuses only on the link between emissions and concentrations. The quantification of deposition, impacts (health, ecosystems) and costs will be introduced in the future. In conclusion, the main asset of AERIS is its accuracy in predicting air quality outcomes for different scenarios through a simple yet robust modelling framework, avoiding complex programming and long computing times.
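A minimal sketch of the transfer-matrix idea is shown below: scenario concentrations are obtained from a baseline plus a precomputed response matrix applied to sectoral emission changes, instead of re-running the full WRF/SMOKE/CMAQ chain for every scenario. All numbers are invented for illustration and are not AERIS coefficients.

```python
# Minimal sketch (with made-up numbers) of applying a transfer matrix to
# sectoral emission changes to obtain scenario concentrations.
import numpy as np

sectors = ["road_transport", "industry", "agriculture"]
pollutants = ["NO2", "O3", "PM"]

baseline_conc = np.array([28.0, 55.0, 14.0])       # ug/m3, illustrative
# transfer matrix: concentration change per unit fractional emission change,
# one row per pollutant, one column per sector (illustrative values; a
# positive entry means cutting that sector's emissions lowers concentration,
# the negative O3 entries reflect that NOx cuts can raise urban O3)
transfer = np.array([
    [ 9.0,  3.0,  0.5],    # NO2
    [-2.5, -0.8, -0.1],    # O3
    [ 1.5,  2.0,  1.0],    # PM
])

# scenario: 30% cut in road transport, 10% cut in industry, no change in agriculture
emission_change = np.array([-0.30, -0.10, 0.0])

scenario_conc = baseline_conc + transfer @ emission_change

for name, base, new in zip(pollutants, baseline_conc, scenario_conc):
    print(f"{name:3s}: {base:5.1f} -> {new:5.2f} ug/m3")
```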
Abstract:
A protocol for the selection, training and validation of the members of a panel for bread sensory analysis is proposed to assess the influence of wheat cultivar on the sensory quality of bread. Three cultivars of bread wheat and two cultivars of spelt wheat, organically grown under the same edaphoclimatic conditions, were milled and baked using the same milling and baking procedure. Through the use of triangle tests, differences were identified between the five breads. Significant differences were found between the spelt breads and those made with bread wheat for the attributes "crumb cell homogeneity" and "crumb elasticity". Significant differences were also found for the odor and flavor attributes, with the bread made with "Espelta Navarra" being the most complex from a sensory point of view. Based on the results of this study, we propose that sensory properties should be considered as breeding criteria for future work on genetic improvement.
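The triangle tests mentioned above are evaluated against chance performance; the short sketch below (with invented counts, not the panel's results) shows the usual one-sided binomial check, where the chance probability of picking the odd sample is 1/3.

```python
# Hypothetical sketch of how a triangle test is evaluated: each assessor must
# pick the odd sample out of three, so the chance probability of a correct
# answer is 1/3; the one-sided binomial p-value indicates whether the two
# products are distinguishable. The counts below are invented.
from math import comb

def triangle_test_p_value(correct: int, assessors: int,
                          p_chance: float = 1 / 3) -> float:
    """One-sided binomial probability of observing >= `correct` right answers."""
    return sum(comb(assessors, k) * p_chance**k * (1 - p_chance)**(assessors - k)
               for k in range(correct, assessors + 1))

p = triangle_test_p_value(correct=11, assessors=18)
print(f"p-value = {p:.3f} -> "
      f"{'significant' if p < 0.05 else 'not significant'} at the 5% level")
```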
Abstract:
An engineering modification of blade element/momentum theory is applied to describe the vertical autorotation of helicopter rotors. A full non‐linear aerodynamic model is considered for the airfoils, taking into account the dependence of lift and drag coefficients on both the angle of attack and the Reynolds number. The proposed model, which has been validated in previous work, has allowed the identification of different autorotation modes, which depend on the descent velocity and the twist of the rotor blades. These modes present different radial distributions of driven and driving blade regions, as well as different radial upwash/downwash patterns. The number of blade sections with zero tangential force, the existence of a downwash region in the rotor disk, the stability of the autorotation state, and the overall rotor autorotation efficiency, are all analyzed in terms of the flight velocity and the characteristics of the rotor. It is shown that, in vertical autorotation, larger blade twist leads to smaller values of descent velocity for a given thrust generated by the rotor in the autorotational state.
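For reference, the driven/driving distinction mentioned above can be expressed with the standard blade-element relations below (a generic formulation, not necessarily the exact one used in the paper):

```latex
% Standard blade-element relations (generic, not necessarily the paper's exact
% formulation): tangential force per unit span at radius r, and the global
% condition of zero net torque that defines steady autorotation.
\begin{align}
  \frac{dF_t}{dr} &= \tfrac{1}{2}\,\rho\, W^2(r)\, c(r)\,
      \bigl[\,C_l(\alpha,\mathrm{Re})\,\sin\phi
            - C_d(\alpha,\mathrm{Re})\,\cos\phi\,\bigr], \\
  Q &= N_b \int_{r_{\mathrm{root}}}^{R} r\,\frac{dF_t}{dr}\, dr = 0
      \quad\text{(steady autorotation)},
\end{align}
where $W$ is the local resultant velocity, $\phi$ the inflow angle, $c$ the
chord and $N_b$ the number of blades; sections with $dF_t/dr>0$ are driving
and those with $dF_t/dr<0$ are driven.
```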
Abstract:
Subsidence is a natural hazard that affects wide areas in the world, causing important economic costs annually. This phenomenon has occurred in the metropolitan area of Murcia City (SE Spain) as a result of groundwater overexploitation. In this work, aquifer-system subsidence is investigated using an advanced differential SAR interferometry remote sensing technique (A-DInSAR) called Stable Point Network (SPN). The SPN-derived displacement results, mainly the velocity displacement maps and the displacement time series, reveal that in the period 2004–2008 the rate of subsidence in the Murcia metropolitan area doubled with respect to the previous period from 1995 to 2005. The acceleration of the deformation phenomenon is explained by the drought period that started in 2006. The comparison of the temporal evolution of the displacements measured with the extensometers and with the SPN technique shows an average absolute error of 3.9±3.8 mm. Finally, results from a finite element model developed to simulate the recorded subsidence time history from known changes in water table height compare well with the SPN displacement time series estimations. This result demonstrates the potential of A-DInSAR techniques to validate subsidence prediction models as an alternative to using instrumental ground-based techniques for validation.
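The extensometer-versus-SPN comparison reduces to a mean absolute error between two displacement time series, as in the short sketch below (with synthetic placeholder values, not the Murcia measurements):

```python
# Small sketch of the kind of comparison reported above: mean absolute error
# (and its spread) between displacements measured in situ (extensometer) and
# estimated remotely (SPN/A-DInSAR). The numbers are synthetic placeholders.
from statistics import mean, stdev

extensometer_mm = [0.0, -2.1, -4.3, -6.8, -9.5, -12.2, -15.0]
spn_mm          = [0.5, -1.5, -5.0, -6.0, -10.1, -11.0, -16.2]

abs_errors = [abs(a - b) for a, b in zip(extensometer_mm, spn_mm)]
print(f"mean absolute error = {mean(abs_errors):.1f} "
      f"+/- {stdev(abs_errors):.1f} mm")
```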
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem of partially overlapping point sets which are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes using range sensor information, industrial systems for quality control of manufactured objects, or even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbour search. In spite of decreasing its complexity, some of the variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is the improvement of the algorithm's computational cost so that a wider range of computationally demanding problems, from among those described before, can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account those distances with a lower computational cost than the Euclidean one, which is used as the de facto standard in the algorithm's implementations in the literature. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method in order to determine which one offers the best results. Given that the distance calculation represents a significant part of the whole set of computations performed by the algorithm, any reduction of that operation is expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and validated experimentally as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
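To make the role of the distance metric concrete, the sketch below implements a brute-force 2-D ICP in which the closest-neighbour search can use the Euclidean, Manhattan or Chebyshev metric while the rigid transform is still estimated by the usual least-squares (SVD) procedure; the data are synthetic and the code is a simplified illustration, not one of the optimized variants studied in the work.

```python
# Minimal 2-D ICP sketch: the closest-neighbour search accepts cheaper
# point-to-point metrics, while the rigid transform is estimated with the
# standard SVD (Kabsch) solution. Brute-force search, synthetic data.
import numpy as np

def closest_indices(src: np.ndarray, dst: np.ndarray, metric: str) -> np.ndarray:
    diff = src[:, None, :] - dst[None, :, :]          # (Ns, Nd, 2)
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=2))
    elif metric == "manhattan":
        d = np.abs(diff).sum(axis=2)
    elif metric == "chebyshev":
        d = np.abs(diff).max(axis=2)
    else:
        raise ValueError(metric)
    return d.argmin(axis=1)

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src: np.ndarray, dst: np.ndarray, metric: str = "euclidean",
        iterations: int = 30):
    current = src.copy()
    for _ in range(iterations):
        matches = dst[closest_indices(current, dst, metric)]
        R, t = best_rigid_transform(current, matches)
        current = current @ R.T + t
    rms = np.sqrt(((current - dst[closest_indices(current, dst, "euclidean")]) ** 2)
                  .sum(axis=1).mean())
    return current, rms

rng = np.random.default_rng(1)
model = rng.uniform(-1.0, 1.0, size=(200, 2))
angle = np.deg2rad(15.0)
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
scene = model @ R_true.T + np.array([0.3, -0.2])

for metric in ("euclidean", "manhattan", "chebyshev"):
    _, rms = icp(model, scene, metric)
    print(f"{metric:9s}: final RMS alignment error = {rms:.4f}")
```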