944 results for Asymptotic Variance, Bayesian Models, Burn-in, Ergodic Average, Ising Model
Abstract:
A comparison of a constant (continuous delivery of 4% FiO(2)) and a variable (initial 5% FiO(2) with adjustments to induce low amplitude EEG (LAEEG) and hypotension) hypoxic/ischemic insult was performed to determine which insult was more effective in producing a consistent degree of survivable neuropathological damage in a newborn piglet model of perinatal asphyxia. We also examined which physiological responses contributed to this outcome. Thirty-nine 1-day-old piglets were subjected to either a constant hypoxic/ischemic insult of 30- to 37-min duration or a variable hypoxic/ischemic insult of 30 min of low-amplitude EEG (LAEEG < 5 μV) including 10 min of low mean arterial blood pressure (MABP < 70% of baseline). Control animals (n = 6) received 21% FiO(2) for the duration of the experiment. At 72 h, the piglets were euthanased, their brains removed and fixed in 4% paraformaldehyde, and assessed for hypoxic/ischemic injury by histological analysis. Based on neuropathology scores, piglets were grouped as undamaged or damaged; piglets that did not survive to 72 h were grouped separately as dead. The variable insult resulted in a greater number of piglets with neuropathological damage (undamaged = 12.5%, damaged = 68.75%, dead = 18.75%), while the constant insult resulted in a large proportion of undamaged piglets (undamaged = 50%, damaged = 22.2%, dead = 27.8%). A hypoxic insult varied to maintain peak amplitude EEG < 5 μV results in a greater number of survivors with a consistent degree of neuropathological damage than a constant hypoxic insult. The physiological variables MABP, LAEEG, pH and arterial base excess were found to be significantly associated with neuropathological outcome.
Abstract:
We explore the implications of refinements in the mechanical description of planetary constituents on the convection modes predicted by finite-element simulations. The refinements consist in the inclusion of incremental elasticity, plasticity (yielding) and multiple simultaneous creep mechanisms in addition to the usual visco-plastic models employed in the context of unified plate-mantle models. The main emphasis of this paper rests on the constitutive and computational formulation of the model. We apply a consistent incremental formulation of the non-linear governing equations avoiding the computationally expensive iterations that are otherwise necessary to handle the onset of plastic yield. In connection with episodic convection simulations, we point out the strong dependency of the results on the choice of the initial temperature distribution. Our results also indicate that the inclusion of elasticity in the constitutive relationships lowers the mechanical energy associated with subduction events.
Abstract:
Background: Oral itraconazole (ITRA) is used for the treatment of allergic bronchopulmonary aspergillosis in patients with cystic fibrosis (CF) because of its antifungal activity against Aspergillus species. ITRA has an active hydroxy-metabolite (OH-ITRA) which has similar antifungal activity. ITRA is a highly lipophilic drug which is available in two different oral formulations, a capsule and an oral solution. It is reported that the oral solution has a 60% higher relative bioavailability. The influence of altered gastric physiology associated with CF on the pharmacokinetics (PK) of ITRA and its metabolite has not been previously evaluated. Objectives: 1) To estimate the population (pop) PK parameters for ITRA and its active metabolite OH-ITRA, including the relative bioavailability of the parent after administration by both capsule and solution, and 2) to assess the performance of the optimal design. Methods: The study was a cross-over design in which 30 patients received the capsule on the first occasion and 3 days later the solution formulation. The design was constrained to have a maximum of 4 blood samples per occasion for estimation of the popPK of both ITRA and OH-ITRA. The sampling times for the population model were optimized previously using POPT v.2.0.[1] POPT is a series of applications that run under MATLAB and provide an evaluation of the information matrix for a nonlinear mixed effects model given a particular design. In addition, it can be used to optimize the design based on evaluation of the determinant of the information matrix. The model details for the design were based on prior information obtained from the literature, which suggested that ITRA may have either linear or non-linear elimination. The optimal sampling times were evaluated to provide information for both competing models for the parent and metabolite and for both capsule and solution simultaneously. Blood samples were assayed by validated HPLC.[2] PopPK modelling was performed using FOCE with interaction under NONMEM, version 5 (level 1.1; GloboMax LLC, Hanover, MD, USA). The PK of ITRA and OH-ITRA was modelled simultaneously using ADVAN 5. Subsequently, three methods were assessed for modelling concentrations less than the LOD (limit of detection). These methods (corresponding to methods 5, 6 & 4 from Beal[3], respectively) were (a) where all values less than LOD were assigned to half of LOD, (b) where the closest missing value that is less than LOD was assigned to half the LOD and all previous (if during absorption) or subsequent (if during elimination) missing samples were deleted, and (c) where the contribution of the expectation of each missing concentration to the likelihood is estimated. The LOD was 0.04 mg/L. The final model evaluation was performed via bootstrap with re-sampling and a visual predictive check. The optimal design and the sampling windows of the study were evaluated for execution errors and for agreement between the observed and predicted standard errors. Dosing regimens were simulated for the capsules and the oral solution to assess their ability to achieve the ITRA target trough concentration (Cmin,ss of 0.5-2 mg/L) or a combined Cmin,ss for ITRA and OH-ITRA above 1.5 mg/L. Results and Discussion: A total of 241 blood samples were collected and analysed; 94% of them were taken within the defined optimal sampling windows, of which 31% were taken within 5 min of the exact optimal times. Forty-six per cent of the ITRA values and 28% of the OH-ITRA values were below LOD.
For five patients the entire profile after administration of the capsule was below the LOD, and the data from this occasion were therefore omitted from estimation. A 2-compartment model with first-order absorption and elimination best described ITRA PK, with first-order metabolism of the parent to OH-ITRA. For ITRA, the apparent clearance (CL/F) was 31.5 L/h; apparent volumes of the central and peripheral compartments were 56.7 L and 2090 L, respectively. Absorption rate constants for capsule (ka,cap) and solution (ka,sol) were 0.0315 h⁻¹ and 0.125 h⁻¹, respectively. The relative bioavailability of the capsule was 0.82. There was no evidence of nonlinearity in the popPK of ITRA. No screened covariate significantly improved the fit to the data. Parameter estimates from the final model were comparable across the different methods for handling missing data (M4, M5, M6)[3]. The prospective application of an optimal design was found to be successful. Due to the sampling windows, most of the samples could be collected within the daily hospital routine, but still at times that were near optimal for estimating the popPK parameters. The final model was one of the potential competing models considered in the original design. The asymptotic standard errors provided by NONMEM for the final model and empirical values from the bootstrap were similar in magnitude to those predicted from the Fisher information matrix associated with the D-optimal design. Simulations from the final model showed that the current dosing regimen of 200 mg twice daily (bd) would achieve the target Cmin,ss (0.5-2 mg/L) in only 35% of patients when administered as the solution and 31% when administered as capsules. The optimal dosing schedule was 500 mg bd for both formulations. The target success for this dosing regimen was 87% for the solution, with an NNT of 4 compared to capsules. This means that for every 4 patients treated with the solution, one additional patient will achieve target success compared with the capsule, but at an additional cost of AUD $220 per day. The therapeutic target, however, is still doubtful, and the potential risks of these dosing schedules need to be assessed on an individual basis. Conclusion: A model was developed which described the popPK of ITRA and its main active metabolite OH-ITRA in adult CF patients after administration of both capsule and solution. The relative bioavailability of ITRA from the capsule was 82% that of the solution, but considerably more variable. For incorporating missing data, the simple Beal method 5 (assigning half the LOD to all samples below the LOD) provided results comparable to the more complex but theoretically better Beal method 4 (integration method). The optimal sparse design performed well for estimation of the model parameters and provided a good fit to the data.
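To make the below-LOD handling concrete, here is a minimal sketch of Beal method 5, the simple approach the abstract found comparable to the integration method: every concentration under the limit of detection is replaced by half the LOD. The LOD value is taken from the abstract; the DataFrame layout and column names are illustrative assumptions, not the study's dataset.

```python
# Minimal sketch of Beal method 5 (M5): every observation below the
# limit of detection (LOD) is assigned LOD/2 before model fitting.
# Column names ("time", "conc") are illustrative, not the study data.
import pandas as pd

LOD = 0.04  # mg/L, as reported in the abstract

def apply_beal_m5(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out.loc[out["conc"] < LOD, "conc"] = LOD / 2.0
    return out

profile = pd.DataFrame({
    "time": [0.5, 1, 2, 4, 8, 24],                 # h after dose
    "conc": [0.01, 0.03, 0.35, 0.62, 0.18, 0.02],  # mg/L
})
print(apply_beal_m5(profile))
```

Method 4 (estimating the contribution of each censored observation to the likelihood) requires support from the estimation software itself, which is why the simplicity of M5 is attractive when, as here, the two give comparable estimates.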
Abstract:
Calculating the potentials on the heart’s epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in the inverse solutions of ECG: the L-curve, generalized cross validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach requires knowledge of the noise norm, we used a model function to estimate it. The performance of the various methods was compared using a concentric-sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions when the noise level is low; as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low-to-medium noise situations.
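As a concrete illustration of this setup, the sketch below solves the zero-order Tikhonov problem via the SVD and picks the regularization parameter by minimizing the GCV function. The transfer matrix and data are random stand-ins for a heart-torso geometry; this is a generic sketch of the technique, not the paper's implementation.

```python
# Zero-order Tikhonov regularization with a GCV-chosen parameter,
# computed from the SVD of the (stand-in) forward transfer matrix A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 60))                 # forward transfer matrix
x_true = rng.standard_normal(60)                  # "epicardial" potentials
b = A @ x_true + 0.05 * rng.standard_normal(80)   # noisy "body-surface" data

U, s, Vt = np.linalg.svd(A, full_matrices=False)
beta = U.T @ b
b_perp2 = np.linalg.norm(b - U @ beta) ** 2       # data outside range(A)

def x_lambda(lam):
    f = s**2 / (s**2 + lam**2)                    # Tikhonov filter factors
    return Vt.T @ (f * beta / s)

def gcv(lam):
    f = s**2 / (s**2 + lam**2)
    resid = np.sum(((1 - f) * beta) ** 2) + b_perp2
    return resid / (len(b) - f.sum()) ** 2        # GCV(lambda)

lams = np.logspace(-4, 2, 200)
lam_star = min(lams, key=gcv)
err = np.linalg.norm(x_lambda(lam_star) - x_true) / np.linalg.norm(x_true)
print(f"GCV-selected lambda: {lam_star:.4g}, relative error: {err:.3f}")
```

The L-curve and DP methods fit the same framework: both reuse the filtered residual and solution norms computed above, differing only in the criterion applied to them.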
Abstract:
Two probabilistic interpretations of the n-tuple recognition method are put forward in order to allow this technique to be analysed with the same Bayesian methods used in connection with other neural network models. Elementary demonstrations are then given of the use of maximum likelihood and maximum entropy methods for tuning the model parameters and assisting their interpretation. One of the models can be used to illustrate the significance of overlapping n-tuple samples with respect to correlations in the patterns.
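For readers unfamiliar with the underlying method, the following minimal WiSARD-style n-tuple recognizer shows the counts that such probabilistic interpretations are built on; all sizes and the toy data are assumptions for illustration.

```python
# Minimal n-tuple recognizer: random groups of n binary input positions
# act as addresses into per-class lookup tables; a test pattern scores
# by how many of its tuple addresses were seen during training.
import numpy as np

rng = np.random.default_rng(4)
n, n_tuples, dim = 4, 8, 32
tuples = [rng.choice(dim, n, replace=False) for _ in range(n_tuples)]

def addresses(pattern):
    return [tuple(pattern[idx]) for idx in tuples]

# toy training data: class 0 is mostly zeros, class 1 mostly ones
train = {0: (rng.random((20, dim)) < 0.2).astype(int),
         1: (rng.random((20, dim)) < 0.8).astype(int)}
memory = {c: [set() for _ in range(n_tuples)] for c in train}
for c, patterns in train.items():
    for p in patterns:
        for t, addr in enumerate(addresses(p)):
            memory[c][t].add(addr)

test = (rng.random(dim) < 0.75).astype(int)
scores = {c: sum(addr in memory[c][t]
                 for t, addr in enumerate(addresses(test)))
          for c in memory}
print(scores)  # the class with the higher score wins
```

Overlapping n-tuple samples, whose significance the paper analyses, would correspond here to drawing the position groups with shared elements rather than independently.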
Abstract:
A Bayesian procedure for the retrieval of wind vectors over the ocean using satellite-borne scatterometers requires realistic prior near-surface wind field models over the oceans. We have implemented carefully chosen vector Gaussian Process models; however, in some cases these models are too smooth to reproduce real atmospheric features, such as fronts. At the scale of the scatterometer observations, fronts appear as discontinuities in wind direction. Due to the nature of the retrieval problem, a simple discontinuity model is not feasible, and hence we have developed a constrained discontinuity vector Gaussian Process model which ensures realistic fronts. We describe the generative model and show how to compute the data likelihood given the model. We show the results of inference using the model with Markov Chain Monte Carlo methods on both synthetic and real data.
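The toy sketch below conveys the flavour of such MCMC inference in a deliberately simplified setting: a random-walk Metropolis sampler infers the position of a single wind-direction discontinuity from noisy 1-D observations. It is a stand-in for the idea only, not the constrained discontinuity vector Gaussian Process model of the paper.

```python
# Random-walk Metropolis inference of a single "front" position in a
# noisy 1-D wind-direction profile (toy stand-in, not the paper's model).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
true_front, sigma = 0.6, 10.0                       # noise sd in degrees
obs = np.where(x < true_front, 40.0, 120.0) + sigma * rng.standard_normal(x.size)

def log_post(front):
    if not 0.0 < front < 1.0:                       # uniform prior on (0, 1)
        return -np.inf
    mean = np.where(x < front, 40.0, 120.0)
    return -0.5 * np.sum((obs - mean) ** 2) / sigma**2

samples, current, lp = [], 0.5, log_post(0.5)
for _ in range(5000):
    prop = current + 0.05 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:         # Metropolis accept step
        current, lp = prop, lp_prop
    samples.append(current)
print(f"posterior mean front position: {np.mean(samples[1000:]):.3f}")
```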
Abstract:
We develop an approach for a sparse representation for Gaussian Process (GP) models in order to overcome the limitations of GPs caused by large data sets. The method is based on a combination of a Bayesian online algorithm together with a sequential construction of a relevant subsample of the data which fully specifies the prediction of the model. Experimental results on toy examples and large real-world datasets indicate the efficiency of the approach.
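A minimal sketch of the sequential-subsample idea, under simplifying assumptions: each incoming point is admitted to the basis set only if its kernel "novelty" gamma(x) = k(x, x) - k_x^T K_BV^{-1} k_x exceeds a threshold, and prediction then uses the retained subset alone. This subset-of-data shortcut stands in for the full Bayesian online update of the paper.

```python
# Sequential selection of a sparse basis set for GP regression via the
# kernel novelty criterion, followed by prediction on the subset only.
import numpy as np

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, 500)
y = np.sin(6 * X) + 0.1 * rng.standard_normal(500)

basis = [0]
for i in range(1, len(X)):
    Xb = X[basis]
    K = rbf(Xb, Xb) + 1e-8 * np.eye(len(basis))     # jitter for stability
    k = rbf(Xb, X[i:i + 1])
    gamma = 1.0 - (k.T @ np.linalg.solve(K, k)).item()
    if gamma > 1e-2:                                # novelty threshold
        basis.append(i)

Xb, yb = X[basis], y[basis]
K = rbf(Xb, Xb) + 0.01 * np.eye(len(basis))         # noise variance 0.01
x_star = np.linspace(0, 1, 5)
mu = rbf(x_star, Xb) @ np.linalg.solve(K, yb)
print(f"kept {len(basis)} of {len(X)} points; predictions: {mu.round(2)}")
```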
Abstract:
A practical Bayesian approach for inference in neural network models has been available for ten years, and yet it is not used frequently in medical applications. In this chapter we show how both regularisation and feature selection can bring significant benefits in diagnostic tasks through two case studies: heart arrhythmia classification based on ECG data and the prognosis of lupus. In the first of these, the number of variables was reduced by two thirds without significantly affecting performance, while in the second, only the Bayesian models had an acceptable accuracy. In both tasks, neural networks outperformed other pattern recognition approaches.
Abstract:
Using the core aspects of five main models of human resource management (HRM), this article investigates the dominant HRM practices in the Indian manufacturing sector. The evaluation is conducted in the context of the recently liberalized economic environment. In response to ever-increasing levels of globalization of business, the article initially highlights the need for more cross-national comparative HRM research. Then it briefly analyzes the five models of HRM (namely, the `Matching model'; the `Harvard model'; the `Contextual model'; the `5-P model'; and the `European model') and identifies the main research questions emerging from these that could be used to reveal and highlight the HRM practices in different national/regional settings. The findings of the research are based on a questionnaire survey of 137 large Indian firms and 24 in-depth interviews in as many firms. The examination not only helped to present the scenario of HRM practices in the Indian context but also the logic dictating the presence of such practices. The article contributes to the fields of cross-national HRM and industrial relations research. It also has key messages for policy makers and opens avenues for further research.
Abstract:
We assessed summation of contrast across eyes and area at detection threshold (C_t). Stimuli were sine-wave gratings (2.5 c/deg) spatially modulated by cosine- and anticosine-phase raised plaids (0.5 c/deg components oriented at ±45°). When presented dichoptically the signal regions were interdigitated across eyes but produced a smooth continuous grating following their linear binocular sum. The average summation ratio (C_t1/C_t1+2) for this stimulus pair was 1.64 (4.3 dB). This was only slightly less than the binocular summation found for the same patch type presented to both eyes, and the area summation found for the two different patch types presented to the same eye. We considered 192 model architectures containing each of the following four elements in all possible orders: (i) linear summation or a MAX operator across eyes, (ii) linear summation or a MAX operator across area, (iii) linear or accelerating contrast transduction, and (iv) additive Gaussian, stochastic noise. Formal equivalences reduced this to 62 different models. The most successful four-element model was: linear summation across eyes followed by nonlinear contrast transduction, linear summation across area, and late noise. Model performance was enhanced when additional nonlinearities were placed before binocular summation and after area summation. The implications for models of probability summation and uncertainty are discussed.
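The headline numbers invite a quick sanity check: the summation ratio of 1.64 corresponds to 20·log10(1.64) ≈ 4.3 dB, between the 6 dB predicted by linear summation with late noise and the roughly 1.5 dB of fourth-root (probability-summation-like) pooling. The back-of-envelope below, a standard Minkowski-pooling calculation rather than anything from the paper's 62 models, makes the comparison explicit.

```python
# Summation ratios predicted by Minkowski pooling of two equal inputs:
# threshold improves by 2**(1/m), i.e. 6 dB for linear summation (m=1)
# and about 1.5 dB for fourth-root pooling (m=4).
import numpy as np

def ratio_db(r):
    return 20 * np.log10(r)

observed = 1.64
print(f"observed summation: {ratio_db(observed):.1f} dB")       # ~4.3 dB
for m in (1.0, 1.4, 2.0, 4.0):
    r = 2 ** (1 / m)
    print(f"Minkowski m = {m}: ratio {r:.2f} -> {ratio_db(r):.1f} dB")
# exponent implied by the observed ratio: m = ln 2 / ln 1.64
print(f"implied exponent: {np.log(2) / np.log(observed):.2f}")
```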
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
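To see where the size limitation comes from, the sketch below computes the exact GP (kriging) predictive mean and variance; the Cholesky factorization of the full n x n covariance is the O(n³) step, with O(n²) storage, that the sparse basis-vector approximation avoids. The kernel and data are toy assumptions.

```python
# Exact GP (kriging) prediction: the Cholesky factorization of the
# n x n sample covariance is the step that limits data set size.
import numpy as np

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, 200)                   # sampled locations
y = np.sin(X) + 0.1 * rng.standard_normal(X.size)

K = rbf(X, X) + 0.01 * np.eye(X.size)         # covariance + noise variance
L = np.linalg.cholesky(K)                     # the O(n^3) bottleneck
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

x_star = np.array([2.5, 7.5])                 # unsampled locations
k_star = rbf(X, x_star)
mean = k_star.T @ alpha                       # predictive mean
v = np.linalg.solve(L, k_star)
var = 1.0 + 0.01 - np.sum(v * v, axis=0)      # predictive variance of y
print(mean.round(3), var.round(3))
```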
Abstract:
Drugs acting at 5-HT receptors were evaluated on three animal models of anxiety. On the elevated X-maze test the majority of 5-HT1 agonists were found to be anxiogenic. However, ipsapirone was anxiolytic and buspirone and gepirone were inactive. The 5-HT2 agonist DOI and the 5-HT2 antagonist ritanserin were anxiolytic, while ICI 169,369, another 5-HT2 antagonist, was inactive. All 5-HT3 antagonists tested were inactive in this test, while the indirect serotomimetics zimeldine and fenfluramine were anxiogenic. Neither beta-adrenoceptor agonists nor antagonists had reproducible effects on anxiety in this model. Combined beta-1/beta-2 adrenoceptor antagonists reversed the anxiogenic effects of 8-OH-DPAT, while selective beta-1 or beta-2 antagonists did not. On the social interaction model the 5-HT1 agonists 8-OH-DPAT, RU 24969 and 5-MeODMT were anxiogenic and ipsapirone was anxiolytic. The 5-HT2 agonist DOI and the beta-adrenoceptor and 5-HT antagonist pindolol were anxiolytic, while the 5-HT2 and 5-HT3 antagonists were inactive. In the marble burying test, the 5-HT uptake inhibitors zimeldine, fluvoxamine, indalpine and citalopram, the 5-HT1B/5-HT1C agonists mCPP and TFMPP, and the 5-HT2/5-HT1C agonist DOI reduced marble burying without affecting locomotor activity. 5-HT1A agonists and the 5-HT2 and 5-HT3 antagonists were without effect. Lesions of the dorsal raphe nucleus reversed the anxiogenic effects of 8-OH-DPAT in the X-maze model. The implications of these results for the understanding of the pharmacology of 5-HT in anxiety are discussed.
Abstract:
In an attempt to better understand the impact of the World Bank on human development in poor countries, we use cross-country data on African countries for the 1990–2002 period to examine this relationship. The coefficient estimates of our parsimonious fixed-effects models indicate that while loans and grants of the Bank have had a positive impact on some relatively short-term indicators of health and education in an average African country, there is little evidence to suggest that such loans and grants have helped these countries to consolidate on the short-term gains.
Abstract:
Requirements-aware systems address the need to reason about uncertainty at runtime to support adaptation decisions, by representing quality of service (QoS) requirements for service-based systems (SBS) with precise values in a runtime-queryable model specification. However, current approaches do not support updating of the specification to reflect changes in the service market, such as newly available services or improved QoS of existing ones. Thus, even if the specification models reflect design-time acceptable requirements, they may become obsolete and miss opportunities for system improvement by self-adaptation. This article proposes to distinguish "abstract" and "concrete" specification models: the former consists of linguistic variables (e.g. "fast") agreed upon at design time, and the latter consists of precise numeric values (e.g. "2 ms") that are dynamically calculated at runtime, thus incorporating up-to-date QoS information. If and when freshly calculated concrete specifications are no longer satisfied by the current service configuration, an adaptation is triggered. The approach was validated using four simulated SBS that use services from a previously published, real-world dataset; in all cases, the system was able to detect unsatisfied requirements at runtime and trigger suitable adaptations. Ongoing work focuses on policies to determine when specifications should be recalculated. This approach will allow engineers to build SBS that are protected against market-caused obsolescence of their requirements specifications.
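A small sketch of the abstract/concrete split, under stated assumptions: the linguistic value "fast" is fixed at design time, while its numeric threshold is recomputed from the QoS currently observed in the service market (here, arbitrarily, the 25th percentile of advertised response times), and a violation by the bound service triggers adaptation. The names and the percentile rule are illustrative, not the paper's calculation policy.

```python
# Abstract specification (linguistic, design-time) vs concrete
# specification (numeric, recalculated at runtime from market QoS).
import numpy as np

ABSTRACT_SPEC = {"response_time": "fast"}      # agreed at design time

def concretize(market_response_times_ms, linguistic_value):
    # assumed rule: "fast" = faster than 75% of currently offered services
    if linguistic_value == "fast":
        return np.percentile(market_response_times_ms, 25)
    raise ValueError(f"unknown linguistic value: {linguistic_value}")

def needs_adaptation(current_service_ms, market_ms):
    threshold = concretize(market_ms, ABSTRACT_SPEC["response_time"])
    return current_service_ms > threshold

market = [2.0, 3.5, 1.2, 0.9, 5.0, 1.1]        # ms, fresh market observations
print(needs_adaptation(current_service_ms=2.0, market_ms=market))  # True here
```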