931 results for Model comparison
Abstract:
The main aim of radiotherapy is to deliver a dose of radiation that is high enough to destroy the tumour cells while minimising the damage to normal healthy tissues. Clinically, this has been achieved by assigning a prescription dose to the tumour volume and a set of dose constraints on critical structures. Once an optimal treatment plan has been achieved, the dosimetry is assessed using the physical parameters of dose and volume. There has been interest in using radiobiological parameters to evaluate and predict the outcome of a treatment plan in terms of both a tumour control probability (TCP) and a normal tissue complication probability (NTCP). In this study, simple radiobiological models available in a commercial treatment planning system were used to compare three-dimensional conformal radiotherapy (3D-CRT) and intensity-modulated radiotherapy (IMRT) treatments of the prostate. Initially, both 3D-CRT and IMRT were planned at 2 Gy/fraction to a total dose of 60 Gy to the prostate. The sensitivity of the TCP and the NTCP to both conventional dose escalation and hypofractionation was investigated. The biological responses were calculated using the Källman S-model. The complication-free tumour control probability (P+) is generated from the combined NTCP and TCP response values. It has been suggested that the alpha/beta ratio for prostate carcinoma cells may be lower than for most other tumour cell types; the effect of this on the modelled biological response for the different fractionation schedules was also investigated.
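The fractionation sensitivity discussed above rests on the linear-quadratic (LQ) cell-survival model. The Källman S-model itself is built into the planning system, but a minimal Poisson-TCP sketch (with an illustrative clonogen number and alpha/beta values, not the study's actual parameters) shows why a low alpha/beta ratio favours hypofractionation at fixed total dose:

```python
import math

def surviving_fraction(alpha, beta, d, n):
    """Linear-quadratic surviving fraction after n fractions of d Gy each."""
    return math.exp(-n * (alpha * d + beta * d * d))

def tcp_poisson(n_clonogens, alpha, beta, d, n):
    """Poisson TCP: probability that no clonogenic cell survives."""
    return math.exp(-n_clonogens * surviving_fraction(alpha, beta, d, n))

# Same total dose (60 Gy), conventional vs hypofractionated schedule,
# with a low alpha/beta of 1.5 Gy (illustrative values only)
alpha, beta = 0.15, 0.10
tcp_conv = tcp_poisson(1e7, alpha, beta, d=2.0, n=30)   # 30 x 2 Gy
tcp_hypo = tcp_poisson(1e7, alpha, beta, d=3.0, n=20)   # 20 x 3 Gy
```

With alpha/beta = 1.5 Gy, the hypofractionated schedule yields the higher modelled TCP at equal total dose; with a higher alpha/beta, the advantage shrinks, which is the dependence the study probes.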
Abstract:
We use Bayesian model selection techniques to test extensions of the standard flat LambdaCDM paradigm. Dark-energy and curvature scenarios, and primordial perturbation models are considered. To that end, we calculate the Bayesian evidence in favour of each model using Population Monte Carlo (PMC), a new adaptive sampling technique which was recently applied in a cosmological context. The Bayesian evidence is immediately available from the PMC sample used for parameter estimation without further computational effort, and it comes with an associated error evaluation. Moreover, it provides an unbiased estimator of the evidence after any fixed number of iterations and it is naturally parallelizable, in contrast with MCMC and nested sampling methods. By comparison with analytical predictions for simulated data, we show that our results obtained with PMC are reliable and robust. The variability in the evidence evaluation and the stability for various cases are estimated both from simulations and from data. For the cases we consider, the log-evidence is calculated to a precision better than 0.08. Using a combined set of recent CMB, SNIa and BAO data, we find inconclusive evidence between flat LambdaCDM and simple dark-energy models. A curved Universe is moderately to strongly disfavoured with respect to a flat cosmology. Using physically well-motivated priors within the slow-roll approximation of inflation, we find a weak preference for a running spectral index. A Harrison-Zel'dovich spectrum is weakly disfavoured. With the current data, tensor modes are not detected; the large prior volume on the tensor-to-scalar ratio r results in moderate evidence in favour of r=0.
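The evidence computation at the heart of this approach is an importance-sampling average; PMC additionally adapts the proposal over iterations. A toy sketch of a single such iteration on a conjugate Gaussian model, where the evidence is known analytically so the estimate can be checked (the proposal parameters here are arbitrary):

```python
import math, random

def log_prior(t):                 # prior: theta ~ N(0, 1)
    return -0.5 * t * t - 0.5 * math.log(2.0 * math.pi)

def log_like(t, y):               # likelihood: y ~ N(theta, 1)
    return -0.5 * (y - t) ** 2 - 0.5 * math.log(2.0 * math.pi)

def evidence_is(y, mu, sd, n=200_000, seed=0):
    """Evidence = mean importance weight prior*likelihood/proposal,
    drawn from a Gaussian proposal N(mu, sd) (one PMC-style iteration)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = rng.gauss(mu, sd)
        log_q = -0.5 * ((t - mu) / sd) ** 2 - math.log(sd * math.sqrt(2.0 * math.pi))
        total += math.exp(log_prior(t) + log_like(t, y) - log_q)
    return total / n

y = 1.0
z_true = math.exp(-y * y / 4.0) / math.sqrt(4.0 * math.pi)  # analytic: N(y; 0, 2)
z_hat = evidence_is(y, mu=0.5, sd=1.5)
```

The same weighted sample serves for parameter estimation, which is why the evidence comes "for free"; PMC's adaptation step would refit (mu, sd) to the weighted sample before the next iteration.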
Abstract:
This paper presents a comparison between a physical model and an artificial neural network (NN) model for temperature estimation inside a building room. Despite the obvious advantages of the physical model for structure optimisation purposes, this paper tests the performance of neural models for inside temperature estimation. The main advantage of the NN model is a substantial reduction in human effort and time, since there is no need to specify the structural geometry and structural thermal capacities or to run the resulting simulations, which demand considerable human effort and computation time. The NN model treats the problem as a "black box". We describe the use of the Radial Basis Function (RBF) network, the training method, and a multi-objective genetic algorithm for optimisation/selection of the RBF neural network inputs and number of neurons.
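A minimal sketch of such an RBF regressor (NumPy only): the centre placement and width are fixed here rather than selected by the paper's multi-objective genetic algorithm, and the inputs and target are hypothetical synthetic data, not building measurements.

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian RBF activations, one column per centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centres, width):
    """Least-squares fit of the linear output weights."""
    Phi = rbf_design(X, centres, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# Hypothetical data: inside temperature vs. (outside temp, hour-of-day), scaled to [0, 1]
rng = np.random.default_rng(0)
X = rng.random((300, 2))
y = 20 + 3 * np.sin(2 * np.pi * X[:, 1]) + 2 * X[:, 0]   # synthetic smooth target

centres, width = X[:40], 0.4     # fixed choices; the paper's GA would select these
w = fit_rbf(X, y, centres, width)
pred = rbf_design(X, centres, width) @ w
```

The genetic algorithm's role in the paper is precisely to choose which inputs, how many neurons, and which centres enter this design matrix, trading accuracy against model size.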
Abstract:
OBJECTIVE: To determine the influence of nebulizer types and nebulization modes on bronchodilator delivery in a mechanically ventilated pediatric lung model. DESIGN: In vitro, laboratory study. SETTING: Research laboratory of a university hospital. INTERVENTIONS: Using albuterol as a marker, three nebulizer types (jet nebulizer, ultrasonic nebulizer, and vibrating-mesh nebulizer) were tested in three nebulization modes in a nonhumidified bench model mimicking the ventilatory pattern of a 10-kg infant. The amounts of albuterol deposited on the inspiratory filters (inhaled drug) at the end of the endotracheal tube, on the expiratory filters, and remaining in the nebulizers or in the ventilator circuit were determined. Particle size distribution of the nebulizers was also measured. MEASUREMENTS AND MAIN RESULTS: The inhaled drug was 2.8% ± 0.5% for the jet nebulizer, 10.5% ± 2.3% for the ultrasonic nebulizer, and 5.4% ± 2.7% for the vibrating-mesh nebulizer in intermittent nebulization during the inspiratory phase (p < 0.01). The most efficient nebulizer was the vibrating-mesh nebulizer in continuous nebulization (13.3% ± 4.6%, p < 0.01). Depending on the nebulizers, a variable but important part of albuterol was observed as remaining in the nebulizers (jet and ultrasonic nebulizers), or being expired or lost in the ventilator circuit (all nebulizers). Only small particles (range 2.39-2.70 µm) reached the end of the endotracheal tube. CONCLUSIONS: Important differences between nebulizer types and nebulization modes were seen for albuterol deposition at the end of the endotracheal tube in an in vitro pediatric ventilator-lung model. New aerosol devices, such as ultrasonic and vibrating-mesh nebulizers, were more efficient than the jet nebulizer.
Abstract:
We complete the development of a testing ground for axioms of discrete stochastic choice. Our contribution here is to develop new posterior simulation methods for Bayesian inference, suitable for a class of prior distributions introduced by McCausland and Marley (2013). These prior distributions are joint distributions over various choice distributions over choice sets of different sizes. Since choice distributions over different choice sets can be mutually dependent, previous methods relying on conjugate prior distributions do not apply. We demonstrate by analyzing data from a previously reported experiment and report evidence for and against various axioms.
Abstract:
Tropical cyclones have been investigated in a T159 version of the MPI ECHAM5 climate model using a novel technique to diagnose the evolution of the 3-dimensional vorticity structure of tropical cyclones, including their full life cycle from weak initial vortex to their possible extra-tropical transition. Results have been compared with reanalyses (ERA40 and JRA25) and observed tropical storms during the period 1978-1999 for the Northern Hemisphere. There is no indication of any trend in the number or intensity of tropical storms during this period in ECHAM5 or in reanalyses, but there are distinct inter-annual variations. The storms simulated by ECHAM5 are realistic both in space and time, but the model, and even more so the reanalyses, underestimate the intensities of the most intense storms (in terms of their maximum wind speeds). There is an indication of a response to ENSO, with a smaller number of Atlantic storms during El Niño, in agreement with previous studies. The global divergence circulation responds to El Niño by setting up a large-scale convergence flow centered over the central Pacific, with enhanced subsidence over the tropical Atlantic. At the same time there is an increase in the vertical wind shear in the region of the tropical Atlantic where tropical storms normally develop. There is a good correspondence between the model and ERA40, except that the divergence circulation is somewhat stronger in the model. The model underestimates storms in the Atlantic but tends to overestimate them in the Western Pacific and in the North Indian Ocean. It is suggested that the overestimation of storms in the Pacific by the model is related to an overly strong response to the tropical Pacific SST anomalies. The overestimation in the North Indian Ocean is likely to be due to an overprediction in the intensity of monsoon depressions, which are then classified as intense tropical storms.
Nevertheless, overall results are encouraging and will further contribute to increased confidence in simulating intense tropical storms with high-resolution climate models.
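The vorticity-based diagnosis begins with candidate detection: local maxima of low-level relative vorticity above a threshold, which are then tracked in time and through the vertical. A minimal 2-D detection sketch (the threshold, grid, and synthetic field are illustrative; the paper's full scheme additionally uses the 3-D vorticity structure and storm life-cycle criteria):

```python
import numpy as np

def detect_vortices(vort, threshold=5e-5):
    """Return (i, j) grid points exceeding `threshold` that are 3x3 local maxima."""
    centres = []
    for i in range(1, vort.shape[0] - 1):
        for j in range(1, vort.shape[1] - 1):
            if vort[i, j] >= threshold and vort[i, j] == vort[i-1:i+2, j-1:j+2].max():
                centres.append((i, j))
    return centres

# Synthetic relative-vorticity field (s^-1) with two Gaussian vortices
ii, jj = np.mgrid[0:20, 0:20]
vort = 1e-4 * np.exp(-((ii - 5) ** 2 + (jj - 5) ** 2) / 4.0) \
     + 8e-5 * np.exp(-((ii - 14) ** 2 + (jj - 12) ** 2) / 4.0)
```

Linking such detections between successive time steps, subject to a maximum displacement, yields the storm tracks that are then compared against reanalyses and observations.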
Abstract:
Urban land surface schemes have been developed to model the distinct features of the urban surface and the associated energy exchange processes. These models have been developed for a range of purposes and make different assumptions related to the inclusion and representation of the relevant processes. Here, the first results of Phase 2 from an international comparison project to evaluate 32 urban land surface schemes are presented. This is the first large-scale systematic evaluation of these models. In four stages, participants were given increasingly detailed information about an urban site for which urban fluxes were directly observed. At each stage, each group returned their models' calculated surface energy balance fluxes. Wide variations are evident in the performance of the models for individual fluxes. No individual model performs best for all fluxes. Providing additional information about the surface generally results in better performance. However, there is clear evidence that a poor choice of parameter values can cause a large drop in performance for models that otherwise perform well. As many models do not perform well across all fluxes, there is a need for caution in their application, and users should be aware of the implications for applications and decision making.
Abstract:
Salmonella are closely related to commensal Escherichia coli but have gained virulence factors enabling them to behave as enteric pathogens. Less well studied are the similarities and differences that exist between the metabolic properties of these organisms that may contribute toward niche adaptation of Salmonella pathogens. To address this, we have constructed a genome scale Salmonella metabolic model (iMA945). The model comprises 945 open reading frames or genes, 1964 reactions, and 1036 metabolites. There was significant overlap with genes present in E. coli MG1655 model iAF1260. In silico growth predictions were simulated using the model on different carbon, nitrogen, phosphorous, and sulfur sources. These were compared with substrate utilization data gathered from high throughput phenotyping microarrays revealing good agreement. Of the compounds tested, the majority were utilizable by both Salmonella and E. coli. Nevertheless a number of differences were identified both between Salmonella and E. coli and also within the Salmonella strains included. These differences provide valuable insight into differences between a commensal and a closely related pathogen and within different pathogenic strains opening new avenues for future explorations.
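Growth predictions from genome-scale models of this kind are typically made with flux balance analysis (FBA): maximise a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy three-reaction sketch using SciPy (a hypothetical mini-network, not iMA945 itself):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass)
S = np.array([[1.0, -1.0,  0.0],    # mass balance for metabolite A
              [0.0,  1.0, -1.0]])   # mass balance for metabolite B
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 (arbitrary units)

# linprog minimises, so negate the biomass objective to maximise R3
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
```

Steady state forces v1 = v2 = v3, so growth is limited by the uptake bound; setting a substrate's uptake bound to zero is the in-silico analogue of the phenotyping-microarray growth tests the abstract compares against.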
Abstract:
A number of urban land-surface models have been developed in recent years to satisfy the growing requirements for urban weather and climate interactions and prediction. These models vary considerably in their complexity and the processes that they represent. Although the models have been evaluated, the observational datasets have typically been of short duration and so are not suitable to assess the performance over the seasonal cycle. The First International Urban Land-Surface Model comparison used an observational dataset that spanned a period greater than a year, which enables an analysis over the seasonal cycle, whilst the variety of models that took part in the comparison allows the analysis to include a full range of model complexity. The results show that, in general, urban models do capture the seasonal cycle for each of the surface fluxes, but have larger errors in the summer months than in the winter. The net all-wave radiation has the smallest errors at all times of the year but with a negative bias. The latent heat flux and the net storage heat flux are also underestimated, whereas the sensible heat flux generally has a positive bias throughout the seasonal cycle. A representation of vegetation is a necessary, but not sufficient, condition for modelling the latent heat flux and associated sensible heat flux at all times of the year. Models that include a temporal variation in anthropogenic heat flux show some increased skill in the sensible heat flux at night during the winter, although their daytime values are consistently overestimated at all times of the year. Models that use the net all-wave radiation to determine the net storage heat flux have the best agreement with observed values of this flux during the daytime in summer, but perform worse during the winter months. The latter could result from a bias of summer periods in the observational datasets used to derive the relations with net all-wave radiation. 
Apart from these models, all of the other model categories considered in the analysis result in a mean net storage heat flux that is close to zero throughout the seasonal cycle, which is not seen in the observations. Models with a simple treatment of the physical processes generally perform at least as well as models with greater complexity.
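Two of the model categories discussed above can be written down in a few lines: the net storage heat flux as the residual of the surface energy balance, and the net-all-wave-radiation approach in the form of an objective-hysteresis-style fit. The coefficients below are purely illustrative, not values from the comparison:

```python
def storage_residual(q_star, q_h, q_e, q_f=0.0):
    """Residual of the urban surface energy balance:
    Q* + QF = QH + QE + dQS  =>  dQS = Q* + QF - QH - QE  (all in W m^-2)."""
    return q_star + q_f - q_h - q_e

def ohm_storage(q_star, dq_star_dt, a1=0.5, a2=0.3, a3=-30.0):
    """Objective-hysteresis-style parameterisation: dQS = a1*Q* + a2*dQ*/dt + a3
    (illustrative coefficients; real values are fitted per surface type)."""
    return a1 * q_star + a2 * dq_star_dt + a3
```

If the a1..a3 coefficients are fitted predominantly on summer observations, the resulting bias carries directly into winter predictions, which is the explanation the abstract suggests for the poorer cold-season performance.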
Abstract:
The first international urban land surface model comparison was designed to identify three aspects of the urban surface-atmosphere interactions: (1) the dominant physical processes, (2) the level of complexity required to model these, and (3) the parameter requirements for such a model. Offline simulations from 32 land surface schemes, with varying complexity, contributed to the comparison. Model results were analysed within a framework of physical classifications and over four stages. The results show that the following are important urban processes: (i) multiple reflections of shortwave radiation within street canyons, (ii) reduction in the amount of visible sky from within the canyon, which impacts on the net long-wave radiation, (iii) the contrast in surface temperatures between building roofs and street canyons, and (iv) evaporation from vegetation. Models that use an appropriate bulk albedo based on multiple solar reflections, represent building roof surfaces separately from street canyons, and include a representation of vegetation demonstrate more skill, but require parameter information on the albedo, the height of the buildings relative to the width of the streets (height-to-width ratio), the fraction of building roofs compared to street canyons in plan view (plan area fraction), and the fraction of the surface that is vegetated. These results, whilst based on a single site and less than 18 months of data, have implications for the future design of urban land surface models, the data that need to be measured in urban observational campaigns, and what needs to be included in initiatives for regional and global parameter databases.
Abstract:
In this paper, we investigate the pricing of crack spread options. Particular emphasis is placed on the question of whether univariate modeling of the crack spread or explicit modeling of the two underlyings is preferable. To this end, we contrast a bivariate GARCH volatility model for cointegrated underlyings with the alternative of modeling the crack spread directly. Conducting an empirical analysis of crude oil/heating oil and crude oil/gasoline crack spread options traded on the New York Mercantile Exchange, the simpler univariate approach is found to be superior with respect to option-pricing performance.
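The two routes being compared can be sketched in miniature: the bivariate route simulates two correlated price processes and prices the option on their difference by Monte Carlo, while the univariate route models the spread itself, here via a Bachelier-style normal-spread formula. All parameter values are illustrative, and the dynamics are plain driftless GBM rather than the paper's cointegrated GARCH specification:

```python
import math, random

def spread_call_mc(s1, s2, v1, v2, rho, T, K, n=100_000, seed=1):
    """European call on S1 - S2 under correlated driftless GBMs (Monte Carlo)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        f1 = s1 * math.exp(-0.5 * v1 * v1 * T + v1 * math.sqrt(T) * z1)
        f2 = s2 * math.exp(-0.5 * v2 * v2 * T + v2 * math.sqrt(T) * z2)
        total += max(f1 - f2 - K, 0.0)
    return total / n

def spread_call_bachelier(F, K, sigma, T):
    """Univariate alternative: model the spread itself as normal with volatility sigma."""
    d = (F - K) / (sigma * math.sqrt(T))
    Phi = 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))
    phi = math.exp(-0.5 * d * d) / math.sqrt(2.0 * math.pi)
    return (F - K) * Phi + sigma * math.sqrt(T) * phi

# Illustrative crack-spread setting: crude at 70, product at 80, correlated vols
p0 = spread_call_mc(80.0, 70.0, 0.30, 0.35, 0.9, 0.5, K=0.0)
p5 = spread_call_mc(80.0, 70.0, 0.30, 0.35, 0.9, 0.5, K=5.0)
```

The univariate approach collapses two volatilities and a correlation into a single spread volatility, which is the parsimony the paper finds empirically advantageous.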
Abstract:
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in using biased weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented; it offers some support for the use of biased estimates, but we advocate caution in their use.
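The basic device is to replace the intractable likelihood inside an importance-sampling evidence estimate with an unbiased, simulation-based estimate of it; the evidence estimator then remains unbiased, and a Bayes factor is the ratio of two such evidences. A toy sketch in which the "intractable" likelihood is a tractable Gaussian multiplied by deliberately injected mean-one noise, so the answer can be checked analytically (this stands in for the simulation-based estimates; it is not the paper's construction):

```python
import math, random

def like(theta, y):
    """Tractable stand-in likelihood: y ~ N(theta, 1)."""
    return math.exp(-0.5 * (y - theta) ** 2) / math.sqrt(2.0 * math.pi)

def noisy_like(theta, y, rng, m=8):
    """Unbiased randomized likelihood estimate: like() times mean-one noise,
    mimicking a simulation-based estimate of an intractable likelihood."""
    v = sum(rng.expovariate(1.0) for _ in range(m)) / m   # E[v] = 1
    return like(theta, y) * v

def evidence_rw_is(y, n=100_000, seed=2):
    """Random-weight importance sampling, using the prior N(0,1) as proposal."""
    rng = random.Random(seed)
    return sum(noisy_like(rng.gauss(0.0, 1.0), y, rng) for _ in range(n)) / n

y = 0.5
z_true = math.exp(-y * y / 4.0) / math.sqrt(4.0 * math.pi)  # analytic: N(y; 0, 2)
z_hat = evidence_rw_is(y)
```

The noise in the weights inflates the variance of the estimator without biasing its mean, which is the trade-off the paper studies; biased weight estimates can sometimes reduce that variance enough to win overall, hence the advocated caution.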