999 results for Code validation


Relevance: 30.00%

Abstract:

Objective: To develop bioelectrical impedance analysis (BIA) equations to predict total body water (TBW) and fat-free mass (FFM) in Sri Lankan children. Subjects/Methods: Data were collected from healthy children aged 5-15 years, randomly assigned to validation (M/F: 105/83) and cross-validation (M/F: 53/41) groups. Height, weight and BIA were measured. TBW was assessed using the isotope (D₂O) dilution method. Multiple regression analysis was used to develop preliminary equations, which were cross-validated on an independent group. The final prediction equation was constructed by combining the two groups and validated using the PRESS (prediction sum of squares) statistic. Impedance index (height²/impedance; cm²/Ω), weight and sex code (male = 1; female = 0) were used as predictor variables. Results: The independent variables of the final prediction equation for TBW explained 86.3% of the variance, with a root mean-squared error (RMSE) of 2.1 L; the PRESS statistic was 2.1 L, with PRESS residuals of 1.2 L. For FFM, the independent variables explained 86.9% of the variance, with an RMSE of 2.7 kg; the PRESS statistic was 2.8 kg, with PRESS residuals of 1.4 kg. Bland-Altman analysis showed that the majority of the residuals lay within the mean bias ± 1.96 s.d. Conclusions: This study provides BIA equations for the prediction of TBW and FFM in Sri Lankan children. To the best of our knowledge, no published BIA prediction equations have been validated on South Asian populations. These results need to be confirmed by further studies on closely related populations using multi-component body composition assessment.
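
As a rough illustration of the PRESS statistic used above, here is a minimal Python sketch computing a leave-one-out PRESS value for a regression of this form; all variable names and data are synthetic stand-ins, not the study's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# synthetic stand-ins for the study's predictors
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(20.0, 60.0, n),   # impedance index: height^2/impedance (cm^2/ohm)
    rng.uniform(15.0, 60.0, n),   # weight (kg)
    rng.integers(0, 2, n),        # sex code (male = 1, female = 0)
])
tbw = 0.4 * X[:, 0] + 0.1 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0.0, 2.0, n)

# leave-one-out predictions: each sample is predicted by a model
# fitted on the remaining n - 1 samples
pred = cross_val_predict(LinearRegression(), X, tbw, cv=LeaveOneOut())
press = np.sum((tbw - pred) ** 2)        # prediction sum of squares
print(f"PRESS = {press:.1f} L^2, sqrt(PRESS/n) = {np.sqrt(press / n):.2f} L")
```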

Relevance: 30.00%

Abstract:

A transient flame simulation tool based on the unsteady Reynolds-averaged Navier-Stokes (RANS) equations is characterized for stationary and nonstationary flame applications, with the aim of performing computationally affordable flame stability studies. Specifically, the KIVA-3V code is used, incorporating a recently proposed modified eddy dissipation concept for the turbulence-chemistry interaction along with a model for radiation loss. Detailed comparisons of velocities, turbulent kinetic energies, temperature, and species are made with the experimental data for the turbulent, non-premixed DLR_A CH₄/H₂/N₂ jet flame. The comparison shows that the model predicts the flame structure very well. The effect of some of the modeling assumptions is assessed, and strategies for modeling a stationary diffusion flame are recommended. The unsteady flame simulation capabilities of the numerical model are assessed by simulating an acoustically excited, experimental, oscillatory H₂-air diffusion flame. Comparisons are made with the oscillatory velocity field and OH plots, and the numerical code is observed to predict the transient flame structure well.
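
The abstract does not specify its modified eddy dissipation concept; as a point of reference, here is a minimal sketch of the classic Magnussen-Hjertager eddy dissipation rate on which such models build. The constants A, B and all input values are illustrative, not taken from the paper.

```python
def eddy_dissipation_rate(rho, k, eps, y_fuel, y_ox, y_prod,
                          s=4.0, A=4.0, B=0.5):
    """Mean fuel consumption rate [kg m^-3 s^-1] of the classic
    eddy dissipation model: reaction proceeds at the turbulent
    mixing rate eps/k, limited by the deficient species.
    s is the stoichiometric oxidizer-to-fuel mass ratio."""
    limiting = min(y_fuel, y_ox / s, B * y_prod / (1.0 + s))
    return A * rho * (eps / k) * limiting

# toy state: 1 kg/m^3 gas, k = 1 m^2/s^2, eps = 10 m^2/s^3
print(eddy_dissipation_rate(rho=1.0, k=1.0, eps=10.0,
                            y_fuel=0.05, y_ox=0.2, y_prod=0.1))
```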

Relevance: 30.00%

Abstract:

Computational fluid dynamics (CFD) simulations are becoming increasingly widespread with the advent of more powerful computers and more sophisticated software. The aim of these developments is to enable more accurate reactor design and optimization than traditional lumped-parameter models allow. However, for CFD to be a trusted method, it must be validated using experimental data acquired at sufficiently high spatial resolution. This article validates an in-house CFD code by comparison with flow-field data obtained using magnetic resonance imaging (MRI) for a packed bed with a column-to-particle diameter ratio of 2. Flows characterized by inlet Reynolds numbers, based on particle diameter, of 27, 55, 111, and 216 are considered. The code employs preconditioning to solve directly for pressure in low-velocity flow regimes. Excellent agreement was found between the MRI and CFD data, with the relative error between the experimentally determined and numerically predicted flow fields in the range of 3-9%. © 2012 American Institute of Chemical Engineers (AIChE).
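
The article's exact error definition is not given here; assuming an L2-norm relative error over the measured voxels, a minimal sketch (with synthetic velocity fields) might look as follows.

```python
import numpy as np

def relative_error(u_cfd, u_mri):
    """L2 relative error between predicted and measured velocity fields,
    evaluated only where MRI data exist (non-NaN voxels)."""
    mask = ~np.isnan(u_mri)
    diff = u_cfd[mask] - u_mri[mask]
    return np.linalg.norm(diff) / np.linalg.norm(u_mri[mask])

# toy 3-D voxel fields (m/s), illustrative only
rng = np.random.default_rng(0)
u_mri = rng.uniform(0.0, 1e-3, size=(32, 32, 32))
u_cfd = u_mri * (1 + 0.05 * rng.standard_normal(u_mri.shape))
print(f"relative error: {relative_error(u_cfd, u_mri):.1%}")
```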

Relevance: 30.00%

Abstract:

Results of numerical investigations of the wet steam flow in a three-stage low-pressure steam turbine test rig are presented. The test rig is a scale model of a modern steam turbine design and provides flow measurements over a range of operating conditions, which are used for detailed comparisons with the numerical results. For the numerical analysis, a modern CFD code with user-defined models for wet steam modelling is used. The effects of different theoretical models for nucleation and droplet growth are examined. It is shown that heterogeneous condensation depends strongly on steam quality and that, in this model turbine with high-quality steam, a homogeneous theory appears to be the best choice. The homogeneous theory gives good agreement between the test rig traverse measurements and the numerical results. The differences in the droplet size distribution across the three-stage turbine are shown for different loads and modelling assumptions; the choice of droplet growth model can change the droplet size by a factor of two. An estimate of the influence of unsteady effects is made by means of an unsteady two-dimensional simulation, in which the unsteady modelling shifts nucleation into the next blade row. For the investigated three-stage turbine, the influence of wake chopping on the condensation process is weak, but further investigations in full three dimensions and on turbines with more stages are needed to confirm this conclusion. Copyright © 2011 by ASME.
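
For orientation, here is a minimal sketch of the classical homogeneous nucleation rate that underlies wet steam models of this kind; the paper's specific model variants are not reproduced, and the steam property values in the example are rough illustrative figures.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]

def cnt_nucleation_rate(T, S, sigma, rho_l, rho_v, m):
    """Classical homogeneous nucleation rate [droplets m^-3 s^-1].
    T [K]; S = p / p_sat supersaturation; sigma [N/m] surface tension;
    rho_l, rho_v [kg/m^3] liquid/vapour densities; m [kg] molecule mass."""
    # Kelvin critical radius: smaller droplets re-evaporate
    r_crit = 2.0 * sigma * m / (rho_l * KB * T * np.log(S))
    prefac = (rho_v**2 / rho_l) * np.sqrt(2.0 * sigma / (np.pi * m**3))
    return prefac * np.exp(-4.0 * np.pi * r_crit**2 * sigma / (3.0 * KB * T))

# rough steam properties near 350 K (illustrative values only)
J = cnt_nucleation_rate(T=350.0, S=2.0, sigma=0.063,
                        rho_l=974.0, rho_v=0.26, m=2.99e-26)
print(f"J = {J:.3e} m^-3 s^-1")
```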

Relevance: 30.00%

Abstract:

BACKGROUND: Administrative or quality improvement registries may or may not contain the elements needed for investigations by trauma researchers. The International Classification of Diseases Program for Injury Categorisation (ICDPIC), a statistical program available through Stata, is a powerful tool that can derive injury severity scores from ICD-9-CM codes. We conducted a validation study of the ICDPIC for use in trauma research. METHODS: We conducted a retrospective cohort validation study of 40,418 injured patients using a large regional trauma registry. ICDPIC-generated AIS scores for each body region were compared with trauma registry AIS scores (the gold standard) in adult and paediatric populations. A separate analysis was conducted among patients with traumatic brain injury (TBI), comparing the ICDPIC tool with ICD-9-CM embedded severity codes. Performance in characterising overall injury severity, by the ISS, was also assessed. RESULTS: The ICDPIC tool showed substantial agreement with the registry in thoracic and abdominal trauma (weighted κ 0.87-0.92) and in head and neck trauma (weighted κ 0.76-0.83). It captured TBI severity better than ICD-9-CM embedded severity and offered the advantage of generating a severity value for every patient (rather than leaving missing data). Its ability to produce an accurate severity score was consistent within each body region as well as overall. CONCLUSIONS: The ICDPIC tool performs well in classifying injury severity and is superior to ICD-9-CM embedded severity for TBI. ICDPIC offers substantial efficiency and may be a preferred tool for determining injury severity in large trauma datasets, provided researchers understand its limitations and take caution when examining smaller trauma datasets.
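
A minimal sketch of how a weighted κ agreement statistic of the kind reported above can be computed, assuming linear weights; the AIS scores below are synthetic, not registry data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# hypothetical per-patient AIS scores (1-6) from the two sources
rng = np.random.default_rng(0)
ais_registry = rng.integers(1, 7, size=1000)               # gold standard
ais_icdpic = np.clip(ais_registry                          # mostly agrees,
                     + rng.choice([-1, 0, 0, 0, 1], 1000), # occasionally off by 1
                     1, 6)

kappa = cohen_kappa_score(ais_registry, ais_icdpic, weights="linear")
print(f"linearly weighted kappa: {kappa:.2f}")
```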

Relevance: 30.00%

Abstract:

The paper presents the description of one new order (Asplenietalia septentrionalo-cuneifolii) and two new alliances (Arenarion bertolonii and Physoplexido comosae-Saxifragion petraeae). In addition, the syntaxon Asplenietalia lanceolato-obovati is here formally raised to the order level, and the name Hypno-Polypodietalia vulgaris is validated.

Relevance: 30.00%

Abstract:

During extreme sea states, so-called impact events can be observed on the wave energy converter Oyster. In small-scale experimental tests these impact events cause high-frequency signals in the measured load, which decrease confidence in the data obtained. These loads depend on the structural dynamics of the model: they can be amplified as they are transferred through the structure from the point of impact to the load cell located in the foundation. Since design data and load cases for wave energy converters are determined from scale experiments, this lack of confidence has a direct effect on development.

Numerical vibration analysis is a valuable tool for studying the structural load response of Oyster to impact events, but it must take into account the effect of the surrounding water. This can be done efficiently by adding an added-mass distribution computed with a linearised potential-flow boundary element method. This paper presents the development and validation of a numerical procedure that couples the open-source boundary element code NEMOH with the finite element analysis tool CodeAster. Numerical results for the natural frequencies and mode shapes of the structure, under the influence of the added mass associated with specific structural modes, are compared with experimental results.
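
A minimal sketch of the wet-modal computation described: solving the generalized eigenproblem with and without an added-mass matrix. The 2-DOF matrices below are made-up stand-ins for the outputs of the FE model and a BEM solver such as NEMOH.

```python
import numpy as np
from scipy.linalg import eigh

# toy 2-DOF flap model: structural stiffness K and mass M (SI units)
K = np.array([[4.0e6, -1.0e6],
              [-1.0e6, 2.0e6]])
M = np.diag([2.0e3, 1.5e3])

# added-mass matrix, e.g. from a linearised potential BEM code
# (values here are invented for illustration)
M_added = np.diag([3.0e3, 2.0e3])

for label, mass in [("dry", M), ("wet", M + M_added)]:
    lam, _ = eigh(K, mass)            # solves K v = lambda * mass * v
    freqs = np.sqrt(lam) / (2 * np.pi)  # natural frequencies [Hz]
    print(label, np.round(freqs, 2))
```

As expected, the added mass lowers the natural frequencies, which is the effect the coupled NEMOH/CodeAster procedure quantifies for the real structure.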

Relevance: 30.00%

Abstract:

A biological disparity energy model can estimate local depth information by using a population of V1 complex cells. Instead of applying an analytical model which explicitly involves cell parameters such as spatial frequency, orientation, and binocular phase and position difference, we developed a model which involves only the cells' responses, such that disparity can be extracted from a population code using only a set of cells previously trained with random-dot stereograms of uniform disparity. Despite good results in smooth regions, the model needs complementary processing, notably at depth transitions. We therefore introduce a new model to extract disparity at keypoints such as edge junctions, line endings and points of high curvature. Responses of end-stopped cells serve to detect keypoints, and those of simple cells are used to detect the orientations of the underlying line and edge structures. Annotated keypoints are then used in the left-right matching process, with a hierarchical, multi-scale tree structure and a saliency map to segregate disparity. By combining both models we can (re)define depth transitions and regions where the disparity energy model is less accurate.
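
A toy sketch of a textbook position-shift disparity energy population (quadrature Gabor pairs); note that the paper's model is trained on cell responses rather than parameterized like this, so this is only the analytical baseline it moves away from.

```python
import numpy as np

def gabor(x, f=0.1, sigma=8.0, phase=0.0):
    """1-D Gabor receptive field profile."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f * x + phase)

def energy_response(left, right, center, shift):
    """Binocular energy of one complex cell whose right-eye receptive
    field is displaced by the cell's preferred disparity `shift`."""
    x = np.arange(len(left)) - center
    even_l = np.dot(left,  gabor(x, phase=0.0))
    odd_l  = np.dot(left,  gabor(x, phase=np.pi / 2))
    even_r = np.dot(right, gabor(x - shift, phase=0.0))
    odd_r  = np.dot(right, gabor(x - shift, phase=np.pi / 2))
    return (even_l + even_r)**2 + (odd_l + odd_r)**2

# toy stereo pair: right image is the left shifted by a disparity of 3 px
rng = np.random.default_rng(0)
left = rng.standard_normal(128)
right = np.roll(left, 3)

# population of cells tuned to different disparities; read out the peak
shifts = np.arange(-8, 9)
population = [energy_response(left, right, 64, s) for s in shifts]
print("estimated disparity:", shifts[int(np.argmax(population))])  # -> 3
```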

Relevance: 30.00%

Abstract:

Understanding a child's functioning is a persistent challenge in health and education settings. In an attempt to overcome this challenge, in 2007 the World Health Organization developed the International Classification of Functioning, Disability and Health for Children and Youth (ICF-CY) as the first universal classification system for documenting children's health and functioning. Although the ICF-CY is not an assessment and intervention instrument, it can nevertheless serve as a framework for developing tools adapted to the needs of its users. Considering that, in the school context, handwriting is among the activities most required for a child's full participation, it seems pertinent to define a set of codes intended to characterize a child's functioning profile with respect to handwriting. The aim of this study was therefore to develop a preliminary set of ICF-CY-based codes that may come to constitute a code set for handwriting. Given the complexity of the subject, and since the goal was to reach consensus among specialists on which ICF-CY categories should be considered, the Delphi technique was used. The choice of methodology followed the procedures adopted by the ICF Core Set project. Of eighteen professionals contacted, responses were obtained from seven occupational therapists with experience in paediatrics, who took part in all rounds. In total, three rounds of questionnaires were carried out to reach consensus, with a previously defined agreement level of 70%. The study yielded a preliminary set of codes comprising 54 ICF-CY categories (16 second-level categories, 14 third-level categories and one fourth-level category), of which 31 are body-functions categories, one is a body-structures category, 12 are activities-and-participation categories and 10 are environmental-factors categories. This study is a first step towards the development of an ICF-CY-based code set for handwriting; further research on the development and validation of this code set is clearly needed.
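
As a small illustration of the consensus rule described (a pre-defined 70% agreement level across seven experts), here is a sketch with invented votes; the ICF-CY category titles are real, but the vote data are not from the study.

```python
# hypothetical round results: for each ICF-CY category, which of the
# seven experts endorsed including it (1 = yes, 0 = no)
votes = {
    "b147 Psychomotor functions":              [1, 1, 1, 1, 1, 0, 1],
    "d145 Learning to write":                  [1, 1, 1, 1, 1, 1, 1],
    "e130 Products/technology for education":  [1, 0, 1, 1, 0, 1, 0],
}

THRESHOLD = 0.70  # consensus level pre-defined in the study

for category, v in votes.items():
    agreement = sum(v) / len(v)
    status = "retained" if agreement >= THRESHOLD else "not retained"
    print(f"{category}: {agreement:.0%} -> {status}")
```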

Relevance: 30.00%

Abstract:

In Québec, the Act respecting health services and social services (Chapter S-4.2), in section 233, requires every health institution to have a code of ethics which, essentially, must specify users' rights and set out the conduct expected of staff. The legislator sought to improve staff conduct as early as the beginning of the 1990s and considered designating an oversight body to ensure this. That constraint was not retained, and 20 years later the intention of ensuring expected conduct is still not subject to constraints or controls, even though it is still desired. In 2003, however, the Minister set up a process of ministerial visits to residential care settings, and to date some 150 institutions have been visited. Among other things, these teams examined the role of the code of ethics in supporting the management of these institutions. They found that the code of ethics cannot serve as the foundation for the clinical, organizational and management decisions of each of the organizations in Québec's health and social services network. At this point it must be acknowledged that the mandatory code of ethics is one of the many constraints faced by these organizations. Institutions must undergo an accreditation process every three years, and the code of ethics is not a dynamic element retained in this quality-standards validation process either. Moreover, a Québec journal specializing in health management devoted an entire issue of 15 articles to "ethics and behaviour", and the code of ethics is absent from it except for two articles that address it specifically. Is it really ethics that this code addresses, or is it not rather professional conduct (déontologie), especially since the legislator wants above all to ensure appropriate behaviour on the part of employees and the other persons practising their professions? Would a code of conduct not be more appropriate to achieve the intended ends? This question is answered in this thesis, which examines the concepts of ethics, professional conduct, codes, and the regulation of behaviour. In addition, detailed analyses of 35 current codes of ethics from various institutions and various regions of Québec are presented. The literature gives the conditions for a code's success: besides the importance to be given to the values stated in the organization, sanctions for failure to respect those values must also be provided for, and they must be clear and applied. Finally, many organizations now speak of a code of conduct, and this term would be entirely appropriate to meet the wish of the legislator, who wants above all to ensure irreproachable conduct by employees and the other persons working there. This is the conclusion of this work, stated in the form of a recommendation.

Relevance: 30.00%

Abstract:

The modelling work was carried out using EGSnrc, a software toolkit developed by the National Research Council Canada.

Relevance: 30.00%

Abstract:

We describe a new methodology for comparing satellite radiation budget data with a numerical weather prediction (NWP) model, applied here to data from the Geostationary Earth Radiation Budget (GERB) instrument on Meteosat-8. The methodology brings together, in near-real time, GERB broadband shortwave and longwave fluxes with simulations based on analyses produced by the Met Office global NWP model. Results for the period May 2003 to February 2005 illustrate the progressive improvements in the data products as various initial problems were resolved. In most areas the comparisons reveal systematic errors in the model's representation of surface properties and clouds, which are discussed elsewhere. However, for clear-sky regions over the oceans the model simulations are believed to be sufficiently accurate to allow the quality of the GERB fluxes themselves to be assessed and any changes over time in the performance of the instrument to be identified. Using model and radiosonde profiles of temperature and humidity as input to a single-column version of the model's radiation code, we conduct sensitivity experiments which provide estimates of the expected model errors over the ocean of about ±5-10 W m⁻² in clear-sky outgoing longwave radiation (OLR) and ±0.01 in clear-sky albedo. For the more recent data, the differences between the observed and modeled OLR and albedo are well within these error estimates. The close agreement between the observed and modeled values, particularly for the most recent period, illustrates the value of the methodology. It also contributes to the validation of the GERB products and increases confidence in the quality of the data prior to their release.
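
A minimal sketch of the kind of clear-sky comparison described, checking whether the observed-minus-modeled OLR bias falls within the estimated ±5-10 W m⁻² model error band; all numbers below are synthetic.

```python
import numpy as np

# hypothetical collocated clear-sky ocean samples (W m^-2)
rng = np.random.default_rng(1)
olr_model = 280 + 10 * rng.standard_normal(500)            # NWP clear-sky OLR
olr_gerb = olr_model + 3 + 4 * rng.standard_normal(500)    # obs, +3 W m^-2 bias

bias = np.mean(olr_gerb - olr_model)
rmse = np.sqrt(np.mean((olr_gerb - olr_model) ** 2))
print(f"mean bias {bias:+.1f} W m^-2, RMSE {rmse:.1f} W m^-2")
# a bias inside the ~5 W m^-2 model uncertainty cannot be attributed
# to the instrument
print("within model error band:", abs(bias) <= 5.0)
```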

Relevance: 30.00%

Abstract:

Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk I present several of our approaches in the spreadsheet application paradigm, focusing on methodologies that utilize source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered from the use of our methodologies that highlight several cost-benefit trade-offs and many opportunities for future work.
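
As a rough illustration of the dataflow analysis and slicing mentioned, here is a sketch of a backward slice over a spreadsheet cell dependency graph; the cell names and formulas are invented, and real formula parsing is omitted.

```python
# hypothetical cell -> referenced-cells map, as extracted from formulas
deps = {
    "C1": {"A1", "B1"},   # C1 = A1 + B1
    "D1": {"C1"},         # D1 = C1 * 2
    "E1": {"C1", "D1"},   # E1 = C1 + D1
}

def backward_slice(cell, deps):
    """All cells whose values can influence `cell` (transitive inputs),
    i.e. where a fault could propagate into it."""
    seen, stack = set(), [cell]
    while stack:
        for d in deps.get(stack.pop(), ()):
            if d not in seen:
                seen.add(d)
                stack.append(d)
    return seen

print(backward_slice("E1", deps))  # {'C1', 'D1', 'A1', 'B1'}
```

A fault-localization tool can use such slices to narrow attention to the input cells that actually feed a cell whose value a user has marked as wrong.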

Relevance: 30.00%

Abstract:

In high-energy teletherapy, VMC++ is known to be a very accurate and efficient Monte Carlo (MC) code. In principle, the MC method is also a powerful dose calculation tool in other areas of radiation oncology, e.g., brachytherapy or orthovoltage radiotherapy. However, VMC++ has not been validated for the low-energy range of such applications. This work aims to validate the VMC++ MC code for photon beams in the energy range between 20 and 1000 keV.

Relevance: 30.00%

Abstract:

Monte Carlo (MC) based dose calculations can compute dose distributions with an accuracy surpassing that of the conventional algorithms used in radiotherapy, especially in regions of tissue inhomogeneities and surface discontinuities. The Swiss Monte Carlo Plan (SMCP) is a GUI-based framework for photon MC treatment planning (MCTP) interfaced to the Eclipse treatment planning system (TPS). As with any dose calculation algorithm, the MCTP needs to be commissioned and validated before being used for clinical cases. The aim of this study is the investigation of a 6 MV beam for clinical situations within the framework of the SMCP. To this end, all parts, i.e., open fields and all the clinically available beam modifiers, have to be configured so that the calculated dose distributions match the corresponding measurements. Dose distributions for the 6 MV beam were simulated in a water phantom using a phase space source above the beam modifiers. The VMC++ code was used for the radiation transport through the beam modifiers (jaws, wedges, block and multileaf collimator (MLC)) as well as for the calculation of the dose distributions within the phantom. The voxel size of the dose distributions was 2 mm in all directions, and the statistical uncertainty of the calculated dose distributions was below 0.4%. Simulated depth dose curves and dose profiles in terms of [Gy/MU] for static and dynamic fields were compared with the corresponding measurements using dose difference and γ analysis. For a dose difference criterion of ±1% of Dmax and a distance-to-agreement criterion of ±1 mm, the γ analysis showed excellent agreement between measurements and simulations for all static open and MLC fields. Tuning the density and thickness of all hard wedges led to agreement with the corresponding measurements within 1% or 1 mm; similar results were achieved for the block. For the validation of the tuned hard wedges, very good agreement between calculated and measured dose distributions was achieved using a 1%/1 mm criterion for the γ analysis. The calculated dose distributions of the enhanced dynamic wedges (10°, 15°, 20°, 25°, 30°, 45° and 60°) met the 1%/1 mm criteria when compared with the measurements for all situations considered. For the IMRT fields, all compared measured dose values agreed with the calculated values within a 2% dose difference or a 1 mm distance. The SMCP has been successfully validated for static and dynamic 6 MV photon beams, thus providing accurate dose calculations suitable for clinical applications.
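
For reference, a minimal 1-D sketch of the γ analysis used throughout (global dose-difference/distance-to-agreement criteria); the profiles below are synthetic Gaussians, not beam data.

```python
import numpy as np

def gamma_index_1d(x, d_ref, d_eval, dd=0.01, dta=1.0):
    """1-D global gamma index: dd is the dose criterion as a fraction of
    the maximum reference dose; dta is the distance-to-agreement
    criterion in the units of x (e.g. mm). Gamma <= 1 is a pass."""
    dd_abs = dd * d_ref.max()
    g = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        # gamma at reference point i: minimum over all evaluated points
        g[i] = np.min(np.sqrt(((x - xi) / dta) ** 2
                              + ((d_eval - di) / dd_abs) ** 2))
    return g

x = np.linspace(0, 100, 501)               # position in mm
d_meas = np.exp(-((x - 50) / 20) ** 2)     # toy measured profile
d_calc = np.exp(-((x - 50.4) / 20) ** 2)   # toy calculation, shifted 0.4 mm
g = gamma_index_1d(x, d_meas, d_calc, dd=0.01, dta=1.0)
print(f"pass rate (gamma <= 1): {np.mean(g <= 1):.1%}")
```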