957 results for Minimum Variance Model
Abstract:
The purpose of this study was to evaluate the effect of continuously released BDNF on peripheral nerve regeneration in a rat model. Initial in vitro evaluation of calcium alginate prolonged-release capsules (PRC) demonstrated a consistent release of BDNF for a minimum of 8 weeks. In vivo, a worst-case scenario was created by surgical removal of a 20-mm section of the sciatic nerve of the rat. Twenty-four autologous fascia tubes were filled with calcium alginate spheres and sutured to the epineurium of both nerve ends. The animals were divided into 3 groups. In group 1, the fascial tube contained plain calcium alginate spheres. In groups 2 and 3, the fascial tube contained calcium alginate spheres with BDNF alone or BDNF stabilized with bovine serum albumin, respectively. Autocannibalization of the operated extremity was clinically assessed and documented in 12 additional rats. Regeneration was evaluated histologically at 4 weeks and 10 weeks in a blinded manner. The length of nerve fibers and the number of axons formed in the tube were measured. Over the 10-week period, axons grew significantly faster in groups 2 and 3, which received continuously released BDNF, than in the control group. The rats treated with BDNF (groups 2 and 3) demonstrated significantly less autocannibalization than the control group (group 1). These results suggest that BDNF, supplied through a suitable biodegradable continuous delivery system, not only stimulates faster peripheral nerve regeneration but also significantly reduces neuropathic pain in the rat model.
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
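A minimal sketch of the rotation step described above, in Python with NumPy/SciPy (function and variable names are illustrative, not from the paper): the marginal residual vector is pre-multiplied by the inverse of the lower-triangular Cholesky factor of the estimated marginal covariance; under a correctly specified model the rotated residuals behave approximately like iid standard normals, so their ECDF can be compared with the standard normal CDF.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from scipy.stats import norm

def rotated_residual_ecdf(y, X, beta_hat, V_hat):
    """Cholesky-rotate marginal residuals and return their ECDF.

    y        : (n,) response vector
    X        : (n, p) design matrix
    beta_hat : (p,) estimated fixed effects
    V_hat    : (n, n) estimated marginal covariance of y
    """
    resid = y - X @ beta_hat                        # marginal residuals
    L = cholesky(V_hat, lower=True)                 # V_hat = L L'
    r_rot = solve_triangular(L, resid, lower=True)  # "rotated" residuals

    # ECDF on a grid, alongside the standard normal CDF for comparison
    grid = np.linspace(-3, 3, 61)
    ecdf = np.array([(r_rot <= t).mean() for t in grid])
    return grid, ecdf, norm.cdf(grid)
```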
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. The resulting "rotated" residuals are used to construct an empirical cumulative distribution function and pointwise standard errors. The theoretical framework, including conditions and asymptotic properties, involves technical details that are motivated by Lange and Ryan (1989), Pierce (1982), and Randles (1982). Our method appears to work well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series). Our methods can produce satisfactory results even for models that do not satisfy all of the technical conditions stated in our theory.
Abstract:
The construction of a reliable, practically useful prediction rule for future responses depends heavily on the "adequacy" of the fitted regression model. In this article, we consider the absolute prediction error, the expected value of the absolute difference between the future and predicted responses, as the model evaluation criterion. This prediction error is easier to interpret than the average squared error and is equivalent to the misclassification error for binary outcomes. We show that the distributions of the apparent error and its cross-validation counterparts are approximately normal even under a misspecified fitted model. When the prediction rule is "unsmooth", the variance of the above normal distribution can be estimated well via a perturbation-resampling method. We also show how to approximate the distribution of the difference of the estimated prediction errors from two competing models. With two real examples, we demonstrate that the resulting interval estimates for prediction errors provide much more information about model adequacy than the point estimates alone.
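As a concrete illustration of the apparent versus cross-validated absolute prediction error for an ordinary least squares rule, here is a simple K-fold sketch in Python (this is not the paper's perturbation-resampling method; names are illustrative):

```python
import numpy as np

def abs_prediction_errors(y, X, k=5, seed=0):
    """Apparent and K-fold cross-validated absolute prediction error
    for an OLS prediction rule (illustrative sketch)."""
    n = len(y)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    apparent = np.mean(np.abs(y - X @ beta))        # in-sample error

    rng = np.random.default_rng(seed)
    folds = rng.permutation(n) % k                  # random fold labels
    cv_abs = np.empty(n)
    for j in range(k):
        train, test = folds != j, folds == j
        b = np.linalg.lstsq(X[train], y[train], rcond=None)[0]
        cv_abs[test] = np.abs(y[test] - X[test] @ b)
    return apparent, cv_abs.mean()                  # apparent vs. CV error
```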
Abstract:
This paper introduces a novel approach to making inference about the regression parameters in the accelerated failure time (AFT) model for current status and interval censored data. The estimator is constructed by inverting a Wald-type test for testing a null proportional hazards model. A numerically efficient Markov chain Monte Carlo (MCMC)-based resampling method is proposed to simultaneously obtain the point estimator and a consistent estimator of its variance-covariance matrix. We illustrate our approach with interval censored data sets from two clinical studies. Extensive numerical studies are conducted to evaluate the finite sample performance of the new estimators.
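For reference, a minimal statement of the AFT model under interval censoring, in standard notation (not taken verbatim from the paper): the log failure time is linear in the covariates, and only an interval bracketing each failure time is observed.

```latex
\log T_i = \beta^\top Z_i + \varepsilon_i, \qquad
\varepsilon_i \stackrel{\text{iid}}{\sim} F_0 \ (\text{unspecified}),
\qquad \text{observed: } L_i < T_i \le R_i .
```

Current status data are the special case of a single monitoring time $C_i$, where one only learns whether $T_i \le C_i$, i.e. $(L_i, R_i] = (0, C_i]$ or $(C_i, \infty)$.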
Abstract:
We introduce a diagnostic test for the mixing distribution in a generalised linear mixed model. The test is based on the difference between the marginal maximum likelihood and conditional maximum likelihood estimates of a subset of the fixed effects in the model. We derive the asymptotic variance of this difference, and propose a test statistic that has a limiting chi-square distribution under the null hypothesis that the mixing distribution is correctly specified. For the important special case of the logistic regression model with random intercepts, we evaluate via simulation the power of the test in finite samples under several alternative distributional forms for the mixing distribution. We illustrate the method by applying it to data from a clinical trial investigating the effects of hormonal contraceptives in women.
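A hedged sketch of the statistic's generic form (the paper derives the exact asymptotic variance): with $\hat\beta_{\mathrm{MML}}$ and $\hat\beta_{\mathrm{CML}}$ the marginal and conditional maximum likelihood estimates of the chosen $q$-dimensional subset of fixed effects,

```latex
\hat\delta = \hat\beta_{\mathrm{MML}} - \hat\beta_{\mathrm{CML}}, \qquad
T = \hat\delta^\top \, \widehat{\operatorname{Var}}(\hat\delta)^{-1} \, \hat\delta
\;\xrightarrow{d}\; \chi^2_q
```

under the null hypothesis that the mixing distribution is correctly specified; a large value of $T$ signals misspecification.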
Abstract:
Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique which is commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single voxel t-tests have been commonly used to determine whether activation related to the protocol differs across groups. Due to the generally limited number of subjects within each study, accurate estimation of variance at each voxel is difficult. Thus, combining information across voxels in the statistical analysis of fMRI data is desirable in order to improve efficiency. Here we construct a hierarchical model and apply an empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high-throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error. Moreover, the shrunken test statistic decreases the false positive rate when testing differences in brain contrast maps across a wide range of simulation studies. This methodology was also applied to experimental data from a cognitive activation task.
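The shrinkage step mirrors the empirical Bayes variance moderation used in high-throughput genomics (e.g., the limma-style moderated t-statistic). A minimal Python sketch, assuming the prior degrees of freedom d0 and prior variance s0_sq are given (in practice they are estimated from the voxelwise variances; names are illustrative):

```python
import numpy as np

def moderated_t(diff, s2, n1, n2, d0, s0_sq):
    """Empirical-Bayes moderated t-statistics per voxel (illustrative).

    diff      : (V,) voxelwise group mean differences
    s2        : (V,) voxelwise pooled residual sample variances
    n1, n2    : group sizes; d = n1 + n2 - 2 residual d.f.
    d0, s0_sq : prior d.f. and prior variance (assumed known here)
    """
    d = n1 + n2 - 2
    # shrink each voxel's variance toward the common value s0_sq
    s2_shrunk = (d0 * s0_sq + d * s2) / (d0 + d)
    se = np.sqrt(s2_shrunk * (1.0 / n1 + 1.0 / n2))
    return diff / se          # approximately t with d0 + d d.f.
```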
Abstract:
A phenomenological transition film evaporation model was introduced into a pore network model, accounting for pore radius, contact angle, non-isothermal interface temperature, microscale fluid flow, and heat and mass transfer. This was achieved by modeling the transition film region of the menisci in each pore throughout the porous transport layer of a half-cell polymer electrolyte membrane (PEM) fuel cell. The model presented in this research is compared with the standard diffusive fuel cell modeling approach to evaporation and shown to surpass the conventional approach in predicting evaporation rates in porous media. The diffusive evaporation models used in many fuel cell transport models assume a constant evaporation rate across the entire liquid-air interface. The transition film model was implemented in the pore network model to address this issue and introduce a pore size dependency of the evaporation rates. This is accomplished by evaluating the transition film evaporation rates determined by the kinetic model for every pore containing liquid water in the porous transport layer (PTL). The comparison of the transition film and diffusive evaporation models shows that the transition film model predicts higher evaporation rates for smaller pore sizes. This is an important parameter given the micro-scale pore sizes seen in the PTL, and becomes even more substantial for transport in fuel cells containing an MPL or a large variance in pore size. Experiments were performed to validate the transition film model by monitoring evaporation rates from a water droplet with a non-zero contact angle on a heated substrate. The substrate was a glass plate with a hydrophobic coating to reduce wettability. The tests were performed at a constant substrate temperature and relative humidity. The transition film model was able to accurately predict the drop volume as time elapsed. By implementing the transition film model in a pore network model, the evaporation rates present in the PTL can be modeled more accurately. This improves the ability of a pore network model to predict the distribution of liquid water and, ultimately, the level of flooding exhibited in a PTL for various operating conditions.
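The abstract does not spell out the kinetic model; a common Hertz-Knudsen-Schrage-type expression for the interfacial evaporative mass flux, of the kind transition film models typically build on, reads:

```latex
j = \frac{2\hat\sigma}{2-\hat\sigma}\,
    \sqrt{\frac{M}{2\pi \bar R}}
    \left( \frac{P_{\mathrm{eq}}(T_{lv})}{\sqrt{T_{lv}}}
         - \frac{P_v}{\sqrt{T_v}} \right)
```

where $\hat\sigma$ is the accommodation coefficient, $M$ the molar mass, $\bar R$ the universal gas constant, $T_{lv}$ the (non-isothermal) interface temperature, $P_{\mathrm{eq}}$ the equilibrium vapor pressure (corrected for curvature and disjoining pressure in the transition film), and $P_v$, $T_v$ the vapor-side pressure and temperature. Integrating such a flux over the transition film region of each meniscus yields the pore-size-dependent evaporation rates used in the pore network.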
Abstract:
In this thesis, we consider Bayesian inference for the detection of variance change points in models with scale mixtures of normal (SMN) distributions. This class of distributions is symmetric and thick-tailed and includes as special cases the Gaussian, Student-t, contaminated normal, and slash distributions. The proposed models provide greater flexibility for analyzing practical data, which often exhibit heavy tails and may not satisfy the normality assumption. For the Bayesian analysis, we specify prior distributions for the unknown parameters in the variance change-point models with SMN distributions. Due to the complexity of the joint posterior distribution, we propose an efficient Gibbs-type sampling algorithm with Metropolis-Hastings steps for posterior Bayesian inference. Thereafter, following the idea of [1], we consider the problems of single and multiple change-point detection. The performance of the proposed procedures is illustrated and analyzed through simulation studies. A real application to closing price data from the U.S. stock market is analyzed for illustrative purposes.
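A minimal sketch of a Gibbs sampler for a single variance change point in the plain Gaussian special case, with a uniform prior on the change point and conjugate inverse-gamma priors on the variances (illustrative only; the algorithm described above additionally handles the SMN mixing variables via Metropolis-Hastings steps, and all names here are assumptions):

```python
import numpy as np

def gibbs_var_changepoint(y, iters=5000, a=2.0, b=1.0, seed=0):
    """Gibbs sampler for a single variance change point tau in
    y_t ~ N(0, s1) for t <= tau and y_t ~ N(0, s2) for t > tau,
    with IG(a, b) priors on the variances s1, s2 and a uniform
    prior on tau (plain Gaussian special case, sketch only)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    cs = np.cumsum(y ** 2)        # prefix sums of squares
    total = cs[-1]
    ks = np.arange(1, n)          # candidate change points
    tau = n // 2
    draws = []
    for _ in range(iters):
        # inverse-gamma full conditionals for the two variances
        s1 = 1.0 / rng.gamma(a + tau / 2.0,
                             1.0 / (b + 0.5 * cs[tau - 1]))
        s2 = 1.0 / rng.gamma(a + (n - tau) / 2.0,
                             1.0 / (b + 0.5 * (total - cs[tau - 1])))
        # discrete full conditional for tau (log-likelihood of each split)
        left = cs[:-1]
        logp = -0.5 * (ks * np.log(s1) + left / s1
                       + (n - ks) * np.log(s2) + (total - left) / s2)
        p = np.exp(logp - logp.max())
        tau = int(rng.choice(ks, p=p / p.sum()))
        draws.append((tau, s1, s2))
    return np.array(draws)
```

The posterior mode of the sampled tau values then serves as the change-point estimate, and the spread of the draws quantifies its uncertainty.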
Abstract:
Radio frequency electromagnetic fields (RF-EMF) in our daily life are caused by numerous sources such as fixed site transmitters (e.g. mobile phone base stations) or indoor devices (e.g. cordless phones). The objective of this study was to develop a prediction model which can be used to predict mean RF-EMF exposure from different sources for a large study population in epidemiological research. We collected personal RF-EMF exposure measurements from 166 volunteers in Basel, Switzerland, by means of portable exposure meters carried for one week. For a validation study, we repeated the exposure measurements for 31 study participants, on average 21 weeks after the first measurement week. These second measurements were not used for the model development. We used two data sources as exposure predictors: 1) a questionnaire on potentially exposure-relevant characteristics and behaviors and 2) RF-EMF from fixed site transmitters (mobile phone base stations, broadcast transmitters) at the participants' place of residence, modeled using a geospatial propagation model. Relevant exposure predictors, identified by means of multiple regression analysis, were the modeled RF-EMF at the participants' home from the propagation model, housing characteristics, ownership of communication devices (wireless LAN, mobile and cordless phones) and behavioral aspects such as the amount of time spent in public transport. The proportion of variance explained (R²) by the final model was 0.52. The analysis of the agreement between calculated and measured RF-EMF showed a sensitivity of 0.56 and a specificity of 0.95 (cut-off: 90th percentile). In the validation study, the sensitivity and specificity of the model were 0.67 and 0.96, respectively. We could demonstrate that it is feasible to model personal RF-EMF exposure. Most importantly, our validation study suggests that the model can be used to assess average exposure over several months.
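A short Python sketch of the agreement analysis, assuming "highly exposed" is defined by the 90th percentile of each distribution (the exact cut-off handling in the study may differ; function and variable names are illustrative):

```python
import numpy as np

def sens_spec_at_quantile(measured, predicted, q=0.90):
    """Sensitivity/specificity of a prediction model for identifying
    'highly exposed' subjects via a q-th percentile cut-off."""
    high_m = measured >= np.quantile(measured, q)     # truly highly exposed
    high_p = predicted >= np.quantile(predicted, q)   # classified as such
    sens = (high_m & high_p).sum() / high_m.sum()
    spec = (~high_m & ~high_p).sum() / (~high_m).sum()
    return sens, spec
```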
Abstract:
Contracts paying a guaranteed minimum rate of return and a fraction of a positive excess rate, which is specified relative to a benchmark portfolio, are closely related to unit-linked life-insurance products and can be considered as alternatives to direct investment in the underlying benchmark. They contain an embedded power option, and the key issue is the tractable and realistic hedging of this option, in order to rigorously justify valuation by arbitrage arguments and prevent the guarantees from becoming uncontrollable liabilities to the issuer. We show how to determine the contract parameters conservatively and implement robust risk-management strategies.
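One standard formalization of such a contract, included here as an illustrative reading since the abstract does not state the exact payoff: with guaranteed rate $g$, participation fraction $\alpha$, and benchmark value $B_t$, the terminal value per unit premium is

```latex
V_T = \exp\!\Big( gT + \alpha \max\big( \ln(B_T/B_0) - gT,\; 0 \big) \Big)
    = e^{gT} \, \max\!\Big( 1,\; \big( B_T / (B_0\, e^{gT}) \big)^{\alpha} \Big),
```

so the contract embeds a call on the $\alpha$-th power of the benchmark relative to the guarantee; this is the power option referred to above, and its hedging cost constrains the admissible pairs $(g, \alpha)$.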
Abstract:
Punctual delivery under high cost pressure can be achieved by optimizing the in-house tool supply. Within Transfer Project 13 of the Collaborative Research Centre 489, using the forging industry as an example, a mathematical model was developed that determines the minimum inventory of forging tools required for production, taking the tool appropriation delay into account.
Abstract:
The response of atmospheric chemistry and dynamics to volcanic eruptions and to a decrease in solar activity during the Dalton Minimum is investigated with the fully coupled atmosphere–ocean chemistry general circulation model SOCOL-MPIOM (modeling tools for studies of SOlar Climate Ozone Links-Max Planck Institute Ocean Model) covering the time period 1780 to 1840 AD. We carried out several sensitivity ensemble experiments to separate the effects of (i) reduced solar ultraviolet (UV) irradiance, (ii) reduced solar visible and near infrared irradiance, (iii) enhanced galactic cosmic ray intensity as well as less intensive solar energetic proton events and auroral electron precipitation, and (iv) volcanic aerosols. The introduced changes of UV irradiance and volcanic aerosols significantly influence stratospheric dynamics in the early 19th century, whereas changes in the visible part of the spectrum and energetic particles have smaller effects. A reduction of UV irradiance by 15%, which represents the presently discussed highest estimate of UV irradiance change caused by solar activity changes, causes a global ozone decrease below the stratopause reaching as much as 8% in the midlatitudes at 5 hPa, and a significant stratospheric cooling of up to 2 °C in the mid-stratosphere and up to 6 °C in the lower mesosphere. Changes in energetic particle precipitation lead only to minor changes in the yearly averaged temperature fields in the stratosphere. Volcanic aerosols heat the tropical lower stratosphere, allowing more water vapour to enter the tropical stratosphere, which, via HOx reactions, decreases upper stratospheric and mesospheric ozone by roughly 4%. Conversely, heterogeneous chemistry on aerosols reduces stratospheric NOx, leading to a 12% ozone increase in the tropics, whereas a decrease in ozone of up to 5% is found over Antarctica in boreal winter. The linear superposition of the different contributions is not equivalent to the response obtained in a simulation when all forcing factors are applied during the Dalton Minimum (DM); this effect is especially visible for NOx/NOy. Thus, this study also shows the non-linear behaviour of the coupled chemistry-climate system. Finally, we conclude that UV changes and volcanic eruptions in particular dominate the changes in ozone, temperature, and dynamics, while the NOx field is dominated by the energetic particle precipitation. Changes in visible radiation have only very minor effects on both stratospheric dynamics and chemistry.
Abstract:
We investigate the effects of a recently proposed 21st century Dalton-minimum-like decline of solar activity on the evolution of Earth's climate and ozone layer. Three sets of two-member ensemble simulations, radiatively forced by a midlevel emission scenario (Intergovernmental Panel on Climate Change RCP4.5), are performed with the atmosphere-ocean chemistry-climate model AOCCM SOCOL3-MPIOM, one with constant solar activity and the other two with reduced solar activity and different strengths of the solar irradiance forcing. A future grand solar minimum would reduce the global mean surface warming of 2 K between 1986–2005 and 2081–2100 by 0.2 to 0.3 K. Furthermore, the decrease in solar UV radiation leads to a significant delay of stratospheric ozone recovery by 10 years or more. Therefore, the effects of a solar activity minimum, should it occur, may interfere with international efforts for the protection of global climate and the ozone layer.
Abstract:
The aim of this study was to assess the effect of bracket type on the labiopalatal moments generated by lingual and conventional brackets. Incognito™ lingual brackets (3M Unitek), STb™ lingual brackets (Light Lingual System; ORMCO), In-Ovation L lingual brackets (DENTSPLY GAC), and conventional 0.018 inch slot brackets (Gemini; 3M Unitek) were bonded on identical maxillary acrylic resin models with levelled and aligned teeth. Each model was mounted on the orthodontic measurement and simulation system, and ten 0.0175 × 0.0175 inch TMA wires were used for each bracket type. The wire was ligated with elastomerics into the Incognito, STb, and conventional brackets, and each measurement was repeated once after religation. A 15 degrees buccal root torque (+15 degrees) and then a 15 degrees palatal root torque (-15 degrees) were gradually applied to the right central incisor bracket. After each activation, the bracket returned to its initial position, and the moments in the sagittal plane were recorded during these rotations of the bracket. One-way analysis of variance with post hoc multiple comparisons (Tukey test at 0.05 error rate) was conducted to assess the effect of bracket type on the generated moments. The magnitude of the maximum moment at +15 degrees was 8.8, 8.2, 7.1, and 5.8 Nmm for the Incognito, STb, conventional Gemini, and In-Ovation L brackets, respectively; similar values were recorded at -15 degrees: 8.6, 8.1, 7.0, and 5.7 Nmm, respectively. The recorded differences in maximum moments were statistically significant, except between the Incognito and STb brackets. Additionally, the torque angles at which the crown torque fell below the minimum level of 5.0 Nmm were evaluated, as well as the moment/torque ratio in the last part of the activation/deactivation curve, between 10 and 15 degrees. The lowest torque expression was observed for the self-ligating lingual brackets, followed by the conventional brackets. The Incognito and STb lingual brackets generated the highest moments.
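A Python sketch of the statistical analysis described above, using SciPy and statsmodels (the data layout and all names are illustrative assumptions):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_bracket_moments(moments):
    """One-way ANOVA with Tukey HSD post hoc comparisons (alpha = 0.05).

    moments : dict mapping bracket type (str) to an array of
              measured maximum moments in Nmm, e.g.
              {"Incognito": [...], "STb": [...], ...}
    """
    groups = list(moments.values())
    F, p = f_oneway(*groups)                        # overall group effect
    values = np.concatenate(groups)
    labels = np.repeat(list(moments.keys()),
                       [len(g) for g in groups])    # group label per value
    tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)
    return F, p, tukey
```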