958 results for MODEL TESTS
Abstract:
Patients suffering from cystic fibrosis (CF) show thick secretions, mucus plugging and bronchiectasis in bronchial and alveolar ducts. This results in substantial structural changes of the airway morphology and heterogeneous ventilation. Disease progression and treatment effects are monitored by so-called gas washout tests, where the change in concentration of an inert gas is measured over a single breath or multiple breaths. The test result, derived from the profile of the measured concentration, is a marker for the severity of the ventilation inhomogeneity and is strongly affected by the airway morphology. However, it is hard to localize underlying obstructions to specific parts of the airways, especially if they occur in the lung periphery. In order to support the analysis of lung function tests (e.g. multi-breath washout), we developed a numerical model of the entire airway tree, coupling a lumped parameter model for the lung ventilation with a 4th-order accurate finite difference model of a 1D advection-diffusion equation for the transport of an inert gas. The boundary conditions for the flow problem comprise the pressure and flow profile at the mouth, which are typically known from clinical washout tests. The natural asymmetry of the lung morphology is approximated by a generic, fractal, asymmetric branching scheme, which we applied to the conducting airways. A conducting airway ends when its dimension falls below a predefined limit. A model acinus is then connected to each terminal airway. The morphology of an acinus unit comprises a network of expandable cells. A regional, linear constitutive law describes the pressure-volume relation between the pleural gap and the acinus. The cyclic expansion (breathing) of each acinus unit depends on the resistance of the feeding airway and on the flow resistance and stiffness of the cells themselves.
Special care was taken in the development of a conservative numerical scheme for the gas transport across bifurcations, handling spatially and temporally varying advective and diffusive fluxes over a wide range of scales. Implicit time integration was applied to account for the numerical stiffness resulting from the discretized transport equation. Local or regional modifications of the airway dimension, resistance or tissue stiffness are introduced to mimic pathological airway restrictions typical for CF. This leads to a more heterogeneous ventilation of the model lung. As a result, the concentration in some distal parts of the lung model remains elevated for a longer duration. The inert gas concentration at the mouth towards the end of the expirations is composed of gas from regions with very different washout efficiency. This results in a steeper slope of the corresponding part of the washout profile.
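A minimal sketch of the implicit transport step described above, using (for brevity) second-order central differences in place of the model's 4th-order scheme; grid size, velocity and diffusivity are illustrative placeholders, not values from the paper:

```python
import numpy as np

def advect_diffuse_step(c, u, D, dx, dt):
    """One implicit (backward Euler) step of dc/dt + u*dc/dx = D*d2c/dx2
    on a uniform grid, with a fixed concentration at the inlet and a
    zero-gradient outlet. Second-order central differences are used here
    for brevity; the model described above uses a 4th-order scheme."""
    n = len(c)
    a = u * dt / (2.0 * dx)      # advective coefficient
    b = D * dt / dx**2           # diffusive coefficient
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = -a - b
        A[i, i] = 1.0 + 2.0 * b
        A[i, i + 1] = a - b
    A[0, 0] = 1.0                       # Dirichlet inlet: c[0] keeps its value
    A[-1, -2], A[-1, -1] = -1.0, 1.0    # zero-gradient outlet: c[-1] = c[-2]
    rhs = c.copy()
    rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

# Washout of a uniformly filled airway segment by fresh gas at the inlet
c = np.ones(50)
c[0] = 0.0                              # inert-gas-free inspired air
for _ in range(200):
    c = advect_diffuse_step(c, u=0.05, D=1e-3, dx=0.02, dt=0.01)
```

In practice the tridiagonal system would be solved with a banded solver, and the conservative bifurcation coupling described above would add flux-matching junction conditions between airway segments.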
Abstract:
The Empirical CODE Orbit Model (ECOM) of the Center for Orbit Determination in Europe (CODE), which was developed in the early 1990s, is widely used in the International GNSS Service (IGS) community. For a rather long time, spurious spectral lines have been known to exist in geophysical parameters, in particular in the Earth Rotation Parameters (ERPs) and in the estimated geocenter coordinates, and these could recently be attributed to the ECOM. The effects grew gradually with the increasing influence of the GLONASS system in recent years in the CODE analysis, which has been based on a rigorous combination of GPS and GLONASS since May 2003. In a first step we show that the problems associated with the ECOM are to the largest extent caused by GLONASS, which reached full deployment by the end of 2011. GPS-only, GLONASS-only, and combined GPS/GLONASS solutions using the observations in the years 2009–2011 of a global network of 92 combined GPS/GLONASS receivers were analyzed for this purpose. In a second step we review direct solar radiation pressure (SRP) models for GNSS satellites. We demonstrate that for GPS and GLONASS satellites only even-order short-period harmonic perturbations occur along the Sun-satellite direction, and only odd-order perturbations along the direction perpendicular to both the Sun-satellite vector and the spacecraft's solar panel axis. Based on this insight, we assess in the third step the performance of four candidate orbit models for the future ECOM. The geocenter coordinates, the ERP differences w.r.t. the IERS 08 C04 series of ERPs, the misclosures for the midnight epochs of the daily orbital arcs, and scale parameters of Helmert transformations for station coordinates serve as quality criteria. The old and updated ECOM are additionally validated with satellite laser ranging (SLR) observations and by comparing the orbits to those of the IGS and other analysis centers.
Based on all tests, we present a new extended ECOM which substantially reduces the spurious signals in the geocenter coordinate z (by about a factor of 2–6), reduces the orbit misclosures at the day boundaries by about 10 %, slightly improves the consistency of the estimated ERPs with those of the IERS 08 C04 Earth rotation series, and substantially reduces the systematics in the SLR validation of the GNSS orbits.
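The parity result above (even-order harmonics along the Sun-satellite direction D, odd-order along B) can be illustrated with a toy evaluation of an extended-ECOM-style acceleration; the function and parameter names are illustrative and do not follow CODE's official notation:

```python
import numpy as np

def ecom2_accel(u, D0, D2c, D2s, D4c, D4s, Y0, B0, B1c, B1s):
    """Acceleration components in the Sun-oriented (D, Y, B) frame for a
    simplified extended-ECOM-style model, evaluated at the argument u
    (radians) describing the satellite's position relative to the Sun.
    Only even-order harmonics appear in D and only odd-order harmonics
    in B, matching the parity result stated above."""
    aD = (D0 + D2c * np.cos(2 * u) + D2s * np.sin(2 * u)
             + D4c * np.cos(4 * u) + D4s * np.sin(4 * u))
    aY = Y0
    aB = B0 + B1c * np.cos(u) + B1s * np.sin(u)
    return np.array([aD, aY, aB])

# Even harmonics make the D component half-orbit periodic, while the
# odd-order B harmonics change sign half an orbit later.
a1 = ecom2_accel(0.7, 1.0, 0.1, 0.05, 0.02, 0.01, 0.3, 0.2, 0.08, 0.04)
a2 = ecom2_accel(0.7 + np.pi, 1.0, 0.1, 0.05, 0.02, 0.01, 0.3, 0.2, 0.08, 0.04)
```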
Abstract:
Simulating surface wind over complex terrain is a challenge in regional climate modelling. Therefore, this study aims at identifying a set-up of the Weather Research and Forecasting (WRF) model that minimises systematic errors of surface winds in hindcast simulations. Major factors of the model configuration are tested to find a suitable set-up: the horizontal resolution, the planetary boundary layer (PBL) parameterisation scheme and the way WRF is nested to the driving data set. Hence, a number of sensitivity simulations at a spatial resolution of 2 km are carried out and compared to observations. Given the importance of wind storms, the analysis is based on case studies of 24 historical wind storms that caused great economic damage in Switzerland. Each of these events is downscaled using eight different model set-ups, all sharing the same driving data set. The results show that the lack of representation of the unresolved topography leads to a general overestimation of wind speed in WRF. However, this bias can be substantially reduced by using a PBL scheme that explicitly considers the effects of non-resolved topography, which also improves the spatial structure of wind speed over Switzerland. The wind direction, although generally well reproduced, is not very sensitive to the PBL scheme. Further sensitivity tests include four types of nesting methods: nesting only at the boundaries of the outermost domain, analysis nudging, spectral nudging, and the so-called re-forecast method, where the simulation is frequently restarted. These simulations show that restricting the freedom of the model to develop large-scale disturbances slightly increases the temporal agreement with the observations, while further reducing the overestimation of wind speed, especially for maximum wind peaks. The model performance is also evaluated in the outermost domains, where the resolution is coarser.
The results demonstrate the important role of horizontal resolution, where the step from 6 to 2 km significantly improves model performance. In summary, the combination of a grid size of 2 km, the non-local PBL scheme modified to explicitly account for non-resolved orography, and analysis or spectral nudging performs best when dynamical downscaling is aimed at reproducing real wind fields.
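Analysis and spectral nudging both relax the model state toward the driving fields; a minimal sketch of the underlying Newtonian relaxation (variable names and the relaxation coefficient are illustrative, not WRF's actual namelist settings):

```python
def nudge_step(x, x_ref, tendency, g, dt):
    """One forward step with Newtonian relaxation (the mechanism behind
    analysis/spectral nudging): the model tendency is augmented by a term
    relaxing the state x toward the driving analysis x_ref with relaxation
    coefficient g (1/s)."""
    return x + dt * (tendency + g * (x_ref - x))

# With zero physical tendency, the state decays exponentially toward the
# analysis value, which is how nudging suppresses large-scale drift.
x, x_ref = 10.0, 4.0
for _ in range(1000):
    x = nudge_step(x, x_ref, tendency=0.0, g=3e-4, dt=60.0)
```

Spectral nudging applies the same relaxation only to the large-scale (low-wavenumber) part of the state, leaving the fine scales free to develop.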
Abstract:
Trabecular bone is a porous mineralized tissue playing a major load-bearing role in the human body. Predicting age-related and disease-related fractures and the behavior of bone-implant systems requires a thorough understanding of its structure-mechanical property relationships, which can be obtained using microcomputed tomography-based finite element modeling. In this study, a nonlinear model for trabecular bone as a cohesive-frictional material was implemented in a large-scale computational framework and validated by comparison of μFE simulations with experimental tests in uniaxial tension and compression. A good correspondence of stiffness and yield points between simulations and experiments was found for a wide range of bone volume fraction and degree of anisotropy in both tension and compression using a non-calibrated, average set of material parameters. These results demonstrate the ability of the model to capture the effects leading to failure of bone for three anatomical sites and several donors, and the model may be used to determine the apparent behavior of trabecular bone and its evolution with age, disease, and treatment in the future.
Abstract:
PURPOSE The implementation of genomic-based medicine is hindered by unresolved questions regarding data privacy and delivery of interpreted results to health-care practitioners. We used DNA-based prediction of HIV-related outcomes as a model to explore critical issues in clinical genomics. METHODS We genotyped 4,149 markers in HIV-positive individuals. Variants allowed for prediction of 17 traits relevant to HIV medical care, inference of patient ancestry, and imputation of human leukocyte antigen (HLA) types. Genetic data were processed under a privacy-preserving framework using homomorphic encryption, and clinical reports describing potentially actionable results were delivered to health-care providers. RESULTS A total of 230 patients were included in the study. We demonstrated the feasibility of encrypting a large number of genetic markers, inferring patient ancestry, computing monogenic and polygenic trait risks, and reporting results under privacy-preserving conditions. The average execution time of a multimarker test on encrypted data was 865 ms on a standard computer. The proportion of tests returning potentially actionable genetic results ranged from 0 to 54%. CONCLUSIONS The model of implementation presented herein informs strategies to deliver genomic test results for clinical care. Data encryption to ensure privacy helps to build patient trust, a key requirement on the road to genomic-based medicine. Genetics in Medicine advance online publication, 14 January 2016; doi:10.1038/gim.2015.167.
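As an illustrative sketch of the additive homomorphism that makes such privacy-preserving multimarker tests possible, here is a toy Paillier cryptosystem; the key sizes are insecure, and this is not the scheme, parameters, or code used in the study:

```python
import math
import random

def paillier_keygen(p=61, q=53):
    """Toy Paillier key pair. These key sizes are hopelessly insecure and
    for illustration only; real systems use moduli of 2048 bits or more."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)        # inverse exists because gcd(lam, n) == 1 here
    return (n,), (n, lam, mu)

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts, so a
# polygenic score can be accumulated over encrypted per-marker contributions
# without ever decrypting them. The weights below are hypothetical.
pk, sk = paillier_keygen()
weights = [3, 7, 12]
cipher = encrypt(pk, 0)
for w in weights:
    cipher = (cipher * encrypt(pk, w)) % (pk[0] ** 2)
total = decrypt(sk, cipher)     # equals sum(weights)
```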
Macroeconomic Rationality and Lucas' Misperceptions Model: Further Evidence from Forty-One Countries
Abstract:
Several researchers have examined Lucas's misperceptions model, as well as various propositions derived from it, within a cross-section empirical framework. The cross-section approach imposes a single monetary policy regime for the entire period. Our paper innovates on existing tests of those rational expectations propositions by allowing for the simultaneous effect of monetary and short-run aggregate supply (oil price) shocks on output behavior and by employing advanced panel econometric techniques. Our empirical findings, for a sample of 41 countries over 1949–1999, provide evidence in favor of the majority of rational expectations propositions.
Abstract:
It is important to check unidimensionality, the fundamental assumption of most popular Item Response Theory (IRT) models. However, it is hard for educational and psychological tests to be strictly unidimensional. The tests studied in this paper are from a standardized high-stakes testing program. They feature potential multidimensionality by presenting various item types and item sets. Confirmatory factor analyses with one-factor and bifactor models were conducted, based on both a linear structural equation modeling approach and a nonlinear IRT approach. The competing models were compared and the implications of the bifactor model for checking essential unidimensionality were discussed.
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variance of sensitivity, specificity and correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model where the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from Bayesian bivariate models are not as good as those obtained from frequentist estimation regardless of which prior distribution was used for the covariance matrix. 
The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of the following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously, as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
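The data-generating structure behind the bivariate binomial model, with sensitivity/specificity pairs drawn on the logit scale from a correlated normal and then binomial counts per study, can be sketched as follows (all parameter values are illustrative, not from the paper):

```python
import numpy as np

def simulate_studies(rng, k, mu_se, mu_sp, tau, rho, n_dis, n_non):
    """Simulate k diagnostic studies: logit-sensitivity and logit-specificity
    are drawn from a correlated bivariate normal (between-study variation),
    then true-positive and true-negative counts are binomial. Illustrates
    the hierarchical structure of the bivariate binomial model."""
    cov = tau**2 * np.array([[1.0, rho], [rho, 1.0]])
    logits = rng.multivariate_normal([mu_se, mu_sp], cov, size=k)
    se = 1.0 / (1.0 + np.exp(-logits[:, 0]))   # per-study sensitivity
    sp = 1.0 / (1.0 + np.exp(-logits[:, 1]))   # per-study specificity
    tp = rng.binomial(n_dis, se)               # true positives
    tn = rng.binomial(n_non, sp)               # true negatives
    return tp, tn

rng = np.random.default_rng(0)
tp, tn = simulate_studies(rng, k=200, mu_se=1.5, mu_sp=2.0,
                          tau=0.4, rho=-0.5, n_dis=100, n_non=100)
```

The negative rho encodes the typical threshold effect: studies with higher sensitivity tend to have lower specificity. Fitting this model in a Bayesian framework would put priors on mu_se, mu_sp, tau and rho and sample the posterior by MCMC.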
Abstract:
As the requirements for health care hospitalization have become more demanding, the discharge planning process has become a more important part of the health services system. A thorough understanding of hospital discharge planning can, then, contribute to our understanding of the health services system. This study involved the development of a process model of discharge planning from hospitals. Model building involved the identification of factors used by discharge planners to develop aftercare plans, and the specification of the roles of these factors in the development of the discharge plan. The factors in the model were concatenated in 16 discrete decision sequences, each of which produced an aftercare plan. The sample for this study comprised 407 inpatients admitted to the M. D. Anderson Hospital and Tumor Institute at Houston, Texas, who were discharged to any site within Texas during a 15-day period. Allogeneic bone marrow donors were excluded from the sample. The factors considered in the development of discharge plans were recorded by discharge planners and were used to develop the model. Data analysis consisted of sorting the discharge plans using the plan development factors until, for some combination and sequence of factors, all patients were discharged to a single site. The arrangement of factors that led to that aftercare plan became a decision sequence in the model. The model constructs the same discharge plans as those developed by hospital staff for every patient in the study. Tests of the validity of the model should be extended to other patients at the MDAH, to other cancer hospitals, and to other inpatient services. Revisions of the model based on these tests should be of value in the management of discharge planning services and in the design and development of comprehensive community health services.
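The sorting logic described above, walking ordered decision sequences until one assigns an aftercare plan, can be sketched as follows; the factor names and plans are hypothetical illustrations, not the study's actual 16 sequences:

```python
def assign_plan(patient, sequences):
    """Walk an ordered list of decision sequences; the first sequence whose
    factor conditions all match the patient determines the aftercare plan."""
    for conditions, plan in sequences:
        if all(patient.get(factor) == value for factor, value in conditions.items()):
            return plan
    return None  # no sequence matched

# Hypothetical decision sequences: (factor conditions, resulting plan)
sequences = [
    ({"ambulatory": False, "caregiver_at_home": True}, "home health agency"),
    ({"ambulatory": False, "caregiver_at_home": False}, "skilled nursing facility"),
    ({"ambulatory": True}, "home self-care"),
]
plan = assign_plan({"ambulatory": False, "caregiver_at_home": True}, sequences)
```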
Abstract:
Tuberous sclerosis complex (TSC) is a dominant tumor suppressor disorder caused by mutations in either TSC1 or TSC2. The proteins of these genes form a complex to inhibit the mammalian target of rapamycin complex 1 (mTORC1), which controls protein translation and cell growth. TSC causes substantial neuropathology, often leading to autism spectrum disorders (ASDs) in up to 60% of patients. The anatomic and neurophysiologic links between these two disorders are not well understood. However, both disorders share cerebellar abnormalities. Therefore, we have characterized a novel mouse model in which the Tsc2 gene was selectively deleted from cerebellar Purkinje cells (Tsc2f/-;Cre). These mice exhibit progressive Purkinje cell degeneration. Since loss of Purkinje cells is a well-reported postmortem finding in patients with ASD, we conducted a series of behavior tests to assess if Tsc2f/-;Cre mice displayed autistic-like deficits. Using the three-chambered social choice assay, we found that Tsc2f/-;Cre mice showed behavioral deficits, exhibiting no preference between a stranger mouse and an inanimate object, or between a novel and a familiar mouse. Tsc2f/-;Cre mice also demonstrated increased repetitive behavior as assessed with marble burying activity. Altogether, these results demonstrate that loss of Tsc2 in Purkinje cells in a haploinsufficient background leads to behavioral deficits that are characteristic of human autism. Therefore, Purkinje cell loss and/or dysfunction may be an important link between TSC and ASD. Additionally, we have examined some of the cellular mechanisms resulting from mutations in Tsc2 leading to Purkinje cell death. Loss of Tsc2 led to upregulation of mTORC1 and increased cell size. As a consequence of increased protein synthesis, several cellular stress pathways were upregulated. Principally, these included altered calcium signaling, oxidative stress, and ER stress.
Likely as a consequence of ER stress, there was also upregulation of ubiquitin and autophagy. Excitingly, treatment with an mTORC1 inhibitor, rapamycin, attenuated mTORC1 activity and prevented Purkinje cell death by reducing calcium signaling, the ER stress response, and ubiquitin levels. Remarkably, rapamycin treatment also reversed the social behavior deficits, thus providing a promising potential therapy for TSC-associated ASD.
Abstract:
The copepod Calanus finmarchicus is the dominant species of the meso-zooplankton in the Norwegian Sea, and constitutes an important link between the phytoplankton and the higher trophic levels in the Norwegian Sea food chain. An individual-based model for C. finmarchicus, based on super-individuals and evolving traits for behaviour, stages, etc., is two-way coupled to the NORWegian ECOlogical Model system (NORWECOM). One year of modelled C. finmarchicus spatial distribution, production and biomass is found to represent observations reasonably well. High C. finmarchicus abundance is found along the Norwegian shelf-break in the early summer, while the overwintering population is found along the slope and in the deeper Norwegian Sea basins. The timing of the spring bloom is generally later than in the observations. Annual Norwegian Sea production is found to be 29 million tonnes of carbon, and a production to biomass (P/B) ratio of 4.3 emerges. Sensitivity tests show that the modelling system is robust to the initial values of behavioural traits and with regard to the number of super-individuals simulated, provided that this is above about 50,000 individuals. Experiments with the model system indicate that it provides a valuable tool for studies of ecosystem responses to causative forces such as prey density or overwintering population size. For example, introducing C. finmarchicus food limitation reduces the stock dramatically, but on the other hand, a reduced stock may rebuild in one year under normal conditions. The NetCDF file contains model grid coordinates and bottom topography.
Abstract:
Canonical test cases for sloshing wave impact problems are presented and discussed. In these cases the experimental setup has been simplified seeking the highest feasible repeatability; a rectangular tank subjected to harmonic roll motion has been the tested configuration. Both lateral and roof impacts have been studied, since both cases are relevant in sloshing assessment and show specific dynamics. An analysis of the impact pressure of the first four impact events is provided in all cases. It has been found that a Gaussian fitting of each individual peak is not feasible in all cases. The tests have been conducted with both water and oil in order to obtain high and moderate Reynolds number data; the latter may be useful as simpler test cases to assess the capabilities of CFD codes in simulating sloshing impacts. The repeatability of impact pressure values increases dramatically when using oil. In addition, a study of the two-dimensionality of the problem, using a tank configuration that can be adjusted to four different thicknesses, has been carried out. Though the kinematics of the free surface does not change significantly in some of the cases, the impact pressure values of the first impact events change substantially from the small to the large aspect ratios, meaning that attention has to be paid to this issue when reference data are used for validation of 2D and 3D CFD codes.
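One quick way to test whether a Gaussian describes an individual impact-pressure peak is to fit a parabola to the logarithm of the samples around the peak; a sketch with synthetic data (all values illustrative, not from the experiments above):

```python
import numpy as np

def fit_gaussian_peak(t, p):
    """Fit p(t) ~ A * exp(-(t - mu)**2 / (2 * sigma**2)) to one isolated,
    strictly positive pressure peak by fitting a parabola to log(p).
    A sketch only, with no noise handling or peak isolation."""
    a, b, c = np.polyfit(t, np.log(p), 2)   # log p = a*t^2 + b*t + c
    sigma = np.sqrt(-1.0 / (2.0 * a))
    mu = -b / (2.0 * a)
    A = np.exp(c - b**2 / (4.0 * a))
    return A, mu, sigma

# Synthetic peak with known parameters, exactly recovered by the fit
t = np.linspace(-0.01, 0.01, 41)
p = 250.0 * np.exp(-(t - 0.002)**2 / (2.0 * 0.003**2))
A, mu, sigma = fit_gaussian_peak(t, p)
```

On measured peaks, a large residual of this fit flags exactly the cases reported above where a Gaussian description of the individual peak is not feasible.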
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help teachers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with a two-fold objective: (a) to develop a model for teaching and evaluating core competences that is useful and easily applicable to its different degrees, and (b) to provide support to teachers by creating an area within the Website for Educational Innovation where they can search for information on the model corresponding to each core competence approved by UPM. Information available on each competence includes its definition, the formulation of indicators providing evidence on the level of acquisition, the recommended teaching and evaluation methodology, examples of evaluation rules for the different levels of competence acquisition, and descriptions of best practices. These best practices correspond to pilot tests applied to several of the academic subjects taught at UPM in order to validate the model. This work describes the general procedure that was used and presents the model developed specifically for the problem-solving competence. Some of the pilot experiences are also summarised and their results analysed.
Abstract:
This paper provides partial results of on-going research aimed at investigating the seismic response of reinforced concrete (RC) frames equipped with hysteretic-type energy dissipating devices (EDDs). From a prototype RC frame structure designed only for gravity loads, a test model scaled in geometry to 2/5 was defined and built in the Laboratory of Structures of the University of Granada. Four EDDs were installed in the test model to provide the same seismic resistance as a conventional RC bare frame designed to sustain gravity and seismic loads following current codes. The test model with EDDs was subjected to several seismic simulations on the shaking table of the Laboratory of Structures of the University of Granada. The test results provide empirical evidence of the efficiency of the EDDs in preventing damage to the main frame and concentrating the inelastic deformations in the EDDs.