923 results for Mean Value Theorem


Relevance:

30.00%

Publisher:

Abstract:

PURPOSE: To determine the value of applying finger trap distraction during direct MR arthrography of the wrist to assess intrinsic ligament and triangular fibrocartilage complex (TFCC) tears. MATERIALS AND METHODS: Twenty consecutive patients were prospectively investigated by three-compartment wrist MR arthrography. Imaging was performed with 3-T scanners using a three-dimensional isotropic (0.4 mm) T1-weighted gradient-recalled echo sequence, with and without finger trap distraction (4 kg). In a blind and independent fashion, two musculoskeletal radiologists measured the width of the scapholunate (SL), lunotriquetral (LT) and ulna-TFC (UTFC) joint spaces. They evaluated the amount of contrast medium within these spaces using a four-point scale, and assessed SL, LT and TFCC tears, as well as disruption of Gilula's carpal arcs. RESULTS: With finger trap distraction, both readers found a significant increase in the width of the SL space (mean Δ = +0.1 mm, p ≤ 0.040) and noticed more contrast medium therein (p ≤ 0.035). In contrast, neither the differences in width of the LT (mean Δ = +0.1 mm, p ≥ 0.057) and UTFC (mean Δ = 0 mm, p ≥ 0.728) spaces nor the amounts of contrast material within these spaces were statistically significant (p = 0.607 and p ≥ 0.157, respectively). Both readers detected more SL (Δ = +1, p = 0.157) and LT (Δ = +2, p = 0.223) tears, although statistical significance was not reached, and Gilula's carpal arcs were more frequently disrupted during finger trap distraction (Δ = +5, p = 0.025). CONCLUSION: The application of finger trap distraction during direct wrist MR arthrography may enhance both the detection and the characterisation of SL and LT ligament tears by widening the SL space and increasing the amount of contrast within the SL and LT joint spaces.

Relevance:

30.00%

Publisher:

Abstract:

Attrition in longitudinal studies can lead to biased results. The study is motivated by the unexpected observation that alcohol consumption decreased despite increased availability, which may be due to sample attrition of heavy drinkers. Several imputation methods have been proposed, but they have rarely been compared in longitudinal studies of alcohol consumption. Imputing consumption-level measurements is computationally challenging because alcohol consumption is a semi-continuous variable (dichotomous drinking status and continuous volume among drinkers) and the data in the continuous part are non-normal. Data come from a longitudinal study in Denmark with four waves (2003-2006) and 1771 individuals at baseline. Five techniques for missing data are compared: last value carried forward (LVCF) as a single imputation method, and Hotdeck, Heckman modelling, multivariate imputation by chained equations (MICE), and a Bayesian approach as multiple imputation methods. Predictive mean matching was used to account for non-normality: instead of imputing regression estimates, "real" observed values from similar cases are imputed, as sketched below. The methods were also compared on a simulated dataset. The simulation showed that the Bayesian approach yielded the least biased estimates. The finding of no increase in consumption levels despite higher availability remained unaltered. Copyright (C) 2011 John Wiley & Sons, Ltd.
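To make the matching step concrete, here is a minimal sketch of predictive mean matching in Python. The single-predictor setup and the variable names (weekly volume predicted from age) are illustrative assumptions rather than the study's actual model; within MICE this step would be applied iteratively across several incomplete variables.

```python
# Minimal predictive mean matching (PMM): fit a regression on the
# observed cases, then impute each missing value with a *real* observed
# value from a donor whose prediction is close, preserving the skewed,
# semi-continuous shape of the data.
import numpy as np

def pmm_impute(y, x, k=5, seed=None):
    """Impute NaNs in y by predictive mean matching on predictor x."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    obs = ~np.isnan(y)
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A[obs], y[obs], rcond=None)  # OLS fit
    yhat = A @ beta                     # predicted means for everyone
    for i in np.where(~obs)[0]:
        # The k observed cases with the closest *predictions* are the
        # donors; one of their actual values is copied at random.
        donors = np.argsort(np.abs(yhat[obs] - yhat[i]))[:k]
        y[i] = y[obs][rng.choice(donors)]
    return y

# Toy usage: skewed weekly volumes with attrition-style missingness.
rng = np.random.default_rng(0)
age = rng.uniform(20, 70, 200)
vol = np.exp(1 + 0.02 * age + rng.normal(0, 0.5, 200))
vol[rng.choice(200, 40, replace=False)] = np.nan
completed = pmm_impute(vol, age, k=5, seed=1)
```

Because donors contribute observed values rather than regression predictions, imputations can never fall outside the support of the real data.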

Relevance:

30.00%

Publisher:

Abstract:

Many people regard the concept of hypothesis testing as fundamental to inferential statistics. Various schools of thought, in particular the frequentist and the Bayesian, have promoted radically different solutions for deciding between competing hypotheses. Comprehensive philosophical comparisons of their advantages and drawbacks are widely available and continue to fuel large debates in the literature. More recently, controversial discussion was initiated by the editorial decision of a scientific journal [1] to refuse any paper submitted for publication that contains null hypothesis testing procedures. Since the large majority of papers published in forensic journals evaluate statistical evidence with so-called p-values, it is of interest to bring the discussion of this journal's decision to the forensic science community. This paper aims to provide forensic science researchers with a primer on the main concepts and their implications for making informed methodological choices.
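To make the contrast concrete, the toy example below computes a frequentist p-value and a Bayesian posterior probability for the same binomial data. The numbers (62 successes in 100 trials) are invented for illustration and do not come from the paper.

```python
# One dataset, two schools: an exact two-sided p-value for H0: theta = 0.5
# versus the posterior probability that theta exceeds 0.5 under a
# uniform Beta(1, 1) prior.
from scipy.stats import binomtest, beta

k, n = 62, 100

# Frequentist: exact binomial test against theta = 0.5.
p_value = binomtest(k, n, p=0.5).pvalue

# Bayesian: the posterior under a Beta(1, 1) prior is Beta(1 + k, 1 + n - k).
post_prob = 1 - beta.cdf(0.5, 1 + k, 1 + n - k)

print(f"p-value = {p_value:.3f}")                    # ~0.02
print(f"Pr(theta > 0.5 | data) = {post_prob:.3f}")   # ~0.99
```

The two numbers answer different questions, which is precisely why the schools disagree about what a reported "significance" means.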

Relevance:

30.00%

Publisher:

Abstract:

Molecular dynamics calculations of the mean square displacement have been carried out for the alkali metals Na, K and Cs, and for an fcc nearest-neighbour Lennard-Jones model applicable to rare gas solids. The computations for the alkalis were done at several temperatures, for a fixed volume as well as for the zero-pressure volume corresponding to each temperature. In the fcc case, results were obtained for a wide range of both temperature and density. Lattice dynamics calculations of the harmonic and the lowest-order anharmonic (cubic and quartic) contributions to the mean square displacement were performed for the same potential models as in the molecular dynamics calculations. The Brillouin zone sums arising in the harmonic and quartic terms were computed for very large numbers of points in q-space, and were extrapolated to obtain results fully converged with respect to the number of points in the Brillouin zone. Excellent agreement between the lattice dynamics and molecular dynamics results was observed for all the alkali metals, except for the zero-pressure case of Cs, where the difference is about 15% near the melting temperature. It was concluded that for the alkalis, lowest-order perturbation theory works well even at temperatures close to the melting temperature. For the fcc nearest-neighbour model it was found that the number of particles (256) used for the molecular dynamics calculations produces a result which is between 10 and 20% smaller than the value converged with respect to the number of particles. However, the general temperature dependence of the mean square displacement is the same in molecular dynamics and lattice dynamics at the highest densities examined, while at higher volumes and high temperatures the results diverge. This indicates the importance of the higher-order perturbation theory contributions in these cases.
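The central quantity here, the mean square displacement, is simple to compute from a trajectory. The sketch below is a minimal Python version; the random-walk trajectory is a stand-in for real MD positions, and the array shapes and step sizes are illustrative assumptions.

```python
# Mean square displacement (MSD) averaged over particles:
# MSD(t) = <|r_i(t) - r_i(0)|^2>.
import numpy as np

def mean_square_displacement(r):
    """r: positions, shape (n_steps, n_particles, 3)."""
    disp = r - r[0]                            # displacement from t = 0
    return np.mean(np.sum(disp**2, axis=2), axis=1)

# Toy trajectory: 1000 steps of 256 particles, the particle count used
# in the fcc molecular dynamics runs described above.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0, 0.01, (1000, 256, 3)), axis=0)
msd = mean_square_displacement(traj)
# In a harmonic solid the MSD saturates at its thermal value, which is
# what the lattice dynamics perturbation series approximates.
```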

Relevance:

30.00%

Publisher:

Abstract:

There is hardly a case in exploration geology where the data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognising that there are genuine zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth, although we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros" that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment generates spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". We therefore take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values, as sketched below. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
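A minimal sketch of the Cu-Mo example in Python follows. The log-log form of the regression is an assumption made here to respect the lognormal behaviour of the data, and the variable names are illustrative; the abstract itself only specifies fitting on the lower quartile of the real molybdenum values.

```python
# Estimate below-detection ("rounded zero") Mo values from Cu, using a
# regression fitted on the lower quartile of the observed Mo values.
import numpy as np

def impute_rounded_zeros(cu, mo):
    """mo: NaN where below detection; cu: always measured."""
    obs = ~np.isnan(mo)
    mo_obs, cu_obs = mo[obs], cu[obs]
    # Lower quartile of the observed Mo values: the cases closest in
    # level to the censored ones.
    low = mo_obs <= np.quantile(mo_obs, 0.25)
    # Fit log(Mo) = a + b * log(Cu) on those low observed pairs.
    b, a = np.polyfit(np.log(cu_obs[low]), np.log(mo_obs[low]), 1)
    mo_hat = np.exp(a + b * np.log(cu))
    # Keep real observations; fill censored cells from the regression.
    return np.where(obs, mo, mo_hat)
```

Because the fitted value varies with copper, each censored molybdenum cell receives its own estimate rather than a constant replacement, which is the advantage claimed above.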

Relevance:

30.00%

Publisher:

Abstract:

We describe numerical simulations designed to elucidate the role of mean ocean salinity in climate. Using a coupled atmosphere-ocean general circulation model, we study a 100-year sensitivity experiment in which the global-mean salinity is approximately doubled from its present observed value by adding 35 psu everywhere in the ocean. The salinity increase produces a rapid global-mean sea-surface warming within a few years, caused by reduced vertical mixing associated with changes in cabbeling. The warming is followed by a gradual global-mean sea-surface cooling within a few decades, caused by an increase in the vertical (downward) component of the isopycnal diffusive heat flux. We find no evidence of impacts on the variability of the thermohaline circulation (THC) or El Niño/Southern Oscillation (ENSO). The mean strength of the Atlantic meridional overturning is reduced by 20% and the North Atlantic Deep Water penetrates less deeply. Nevertheless, our results dispute claims that higher salinities for the world ocean have profound consequences for the thermohaline circulation. In additional experiments with doubled atmospheric carbon dioxide, we find that the amplitude and spatial pattern of the global warming signal are modified in the hypersaline ocean. In particular, the equilibrated global-mean sea-surface temperature increase caused by doubling carbon dioxide is reduced by 10%. We infer the existence of a non-linear interaction between the climate responses to modified carbon dioxide and modified salinity.

Relevance:

30.00%

Publisher:

Abstract:

Samples of whole crop wheat (WCW, n = 134) and whole crop barley (WCB, n = 16) were collected from commercial farms in the UK over a 2-year period (2003/2004 and 2004/2005). Near infrared reflectance spectroscopy (NIRS) was compared with laboratory and in vitro digestibility measures to predict digestible organic matter in the dry matter (DOMD) and metabolisable energy (ME) contents measured in vivo using sheep. Spectral models using the mean spectra of two scans were compared with those using individual spectra (duplicate spectra). Overall, NIRS accurately predicted the concentration of chemical components in whole crop cereals apart from crude protein, ammonia-nitrogen, water-soluble carbohydrates, fermentation acids and solubility values. In addition, the spectral models had higher prediction power for in vivo DOMD and ME than chemical components or in vitro digestion methods. Overall there was a benefit from the use of duplicate spectra rather than mean spectra, especially for predicting in vivo DOMD and ME where the sample population size was smaller. The spectral models derived dealt equally well with WCW and WCB and would be of considerable practical value, allowing rapid determination of the nutritive value of these forages before their use in diets of productive animals. (C) 2008 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The potential of near infrared spectroscopy in conjunction with partial least squares regression to predict quality indices of Miscanthus × giganteus and short rotation coppice willow was examined. Moisture, calorific value, ash and carbon content were predicted with root mean square errors of cross validation of 0.90% (R2 = 0.99), 0.13 MJ/kg (R2 = 0.99), 0.42% (R2 = 0.58) and 0.57% (R2 = 0.88), respectively. The moisture and calorific value prediction models had excellent accuracy, while the carbon and ash models were fair and poor, respectively. The results indicate that near infrared spectroscopy has the potential to predict quality indices of dedicated energy crops; however, the models must be further validated on a wider range of samples prior to implementation. The use of such models would assist in the optimal use of the feedstock based on its biomass properties.
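A minimal sketch of this kind of calibration in Python, using scikit-learn's PLS implementation with 10-fold cross-validation to produce the two reported statistics, RMSECV and R2; the synthetic spectra and the choice of eight latent variables are illustrative assumptions.

```python
# PLS regression scored by root mean square error of cross validation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))          # 80 samples x 200 "wavelengths"
y = X[:, :10].sum(axis=1) + rng.normal(0, 0.5, 80)   # e.g. % moisture

pls = PLSRegression(n_components=8)     # latent variables to extract
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()   # held-out predictions

rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
r2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.2f}, R2 = {r2:.2f}")
```

In practice the number of latent variables would itself be chosen by cross-validation, and the model re-validated on independent samples, as the abstract recommends.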

Relevance:

30.00%

Publisher:

Abstract:

There is concern that insect pollinators, such as honey bees, are currently declining in abundance and are under serious threat from environmental changes such as habitat loss and climate change, the use of pesticides in intensive agriculture, and emerging diseases. This paper aims to evaluate how much public support there would be for preventing further decline and maintaining the current number of bee colonies in the UK. The contingent valuation method (CVM) was used to obtain the willingness to pay (WTP) for a theoretical pollinator protection policy. Respondents were asked whether they would be willing to pay to support such a policy and, if so, how much. Results show that the mean WTP to support the bee protection policy was £1.37/week/household. Based on there being 24.9 million households in the UK, this is equivalent to £1.77 billion per year, as the calculation below verifies. This total value can convey the importance of maintaining the overall pollination service to policy makers. We compare this total with estimates obtained using a simple market valuation of pollination for the UK.
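The aggregation behind the headline figure is a single multiplication; the snippet below simply makes the arithmetic explicit.

```python
# £1.37 per household per week, scaled to 24.9 million UK households.
wtp_per_week = 1.37              # GBP per household per week
households = 24.9e6              # UK households
total_per_year = wtp_per_week * 52 * households
print(f"£{total_per_year / 1e9:.2f} billion per year")   # ≈ £1.77 billion
```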

Relevance:

30.00%

Publisher:

Abstract:

A survey was conducted to elicit dairy farmers' willingness to pay (WTP) to reduce the prevalence of lameness in their herds. A choice experiment questionnaire was administered in face-to-face interviews with 163 farmers in England and Wales. Whole-herd lameness assessments by trained researchers recorded a mean lameness prevalence of nearly 24%, substantially higher than that estimated by the farmers. Farmers' responses to a series of attitudinal questions showed that they strongly agreed that cows can suffer a lot of pain from lameness and believed that they could reduce lameness in their herds. Farmers' mean WTP to avoid lameness amounted to UK£411 per lame cow, but with considerable variation across the sample; the median WTP of UK£249 per lame cow was considered a better measure of central tendency. In addition, the survey found that farmers had a substantial WTP to avoid the inconvenience associated with lameness control (a median value of UK£97 per lame cow) but that they were generally prepared to incur greater inconvenience if it reduced lameness. The study findings suggest that farmers need a better understanding of the scale and costs of lameness in their herds and of the benefits of control. To encourage action, farmers need to be convinced that lameness control measures perceived as inconvenient will be cost effective.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – Progress in retrofitting the UK's commercial properties continues to be slow and fragmented. New research from the UK and USA suggests that radical changes are needed to drive large-scale retrofitting, and that new and innovative models of financing can create new opportunities. The purpose of this paper is to offer insights into the terminology of retrofit and the changes in UK policy and practice that are needed to scale up activity in the sector. Design/methodology/approach – The paper reviews and synthesises key published research into commercial property retrofitting in the UK and USA and also draws on policy and practice from the EU and Australia. Findings – The paper provides a definition of "retrofit" and compares and contrasts it with "refurbishment" and "renovation" in an international context. The paper summarises key findings from recent research and suggests a number of policy and practice measures that need to be implemented in the UK for commercial retrofitting to succeed at scale. These include improved funding vehicles for retrofit; better transparency in actual energy performance; and consistency in measurement, verification and assessment standards. Practical implications – Policy and practice in the UK need to change if large-scale commercial property retrofit is to be rolled out successfully. This requires mandatory legislation underpinned by incentives and penalties for non-compliance. Originality/value – This paper synthesises recent research into a set of policy and practice recommendations that draw on international experience and can assist implementation in the UK.

Relevance:

30.00%

Publisher:

Abstract:

Purpose – Investors are now able to analyse more noise-free news with which to inform their trading decisions than ever before. Their expectation that more information means better performance is not supported by previous psychological experiments, which argue that too much information actually impairs performance. The purpose of this paper is to examine whether the degree of information explicitness improves stock market performance. Design/methodology/approach – An experiment was conducted in a computer laboratory using a trading simulation adapted from a real market shock. Participants' performance efficiency and effectiveness were measured separately. Findings – The results indicate that the explicitness of information neither improves nor impairs participants' performance effectiveness in terms of returns, share and cash positions, and trading volumes. However, participants' performance efficiency is significantly affected by information explicitness. Originality/value – The novel approach and findings of this research add to the knowledge of the impact of information explicitness on the quality of decision making in a financial market environment.

Relevance:

30.00%

Publisher:

Abstract:

Considerable progress has been made in understanding the present and future regional and global sea level in the 2 years since the publication of the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change. Here, we evaluate how the new results affect the AR5’s assessment of (i) historical sea level rise, including attribution of that rise and implications for the sea level budget, (ii) projections of the components and of total global mean sea level (GMSL), and (iii) projections of regional variability and emergence of the anthropogenic signal. In each of these cases, new work largely provides additional evidence in support of the AR5 assessment, providing greater confidence in those findings. Recent analyses confirm the twentieth century sea level rise, with some analyses showing a slightly smaller rate before 1990 and some a slightly larger value than reported in the AR5. There is now more evidence of an acceleration in the rate of rise. Ongoing ocean heat uptake and associated thermal expansion have continued since 2000, and are consistent with ocean thermal expansion reported in the AR5. A significant amount of heat is being stored deeper in the water column, with a larger rate of heat uptake since 2000 compared to the previous decades and with the largest storage in the Southern Ocean. The first formal detection studies for ocean thermal expansion and glacier mass loss since the AR5 have confirmed the AR5 finding of a significant anthropogenic contribution to sea level rise over the last 50 years. New projections of glacier loss from two regions suggest smaller contributions to GMSL rise from these regions than in studies assessed by the AR5; additional regional studies are required to further assess whether there are broader implications of these results. Mass loss from the Greenland Ice Sheet, primarily as a result of increased surface melting, and from the Antarctic Ice Sheet, primarily as a result of increased ice discharge, has accelerated. The largest estimates of acceleration in mass loss from the two ice sheets for 2003–2013 equal or exceed the acceleration of GMSL rise calculated from the satellite altimeter sea level record over the longer period of 1993–2014. However, when increased mass gain in land water storage and parts of East Antarctica, and decreased mass loss from glaciers in Alaska and some other regions are taken into account, the net acceleration in the ocean mass gain is consistent with the satellite altimeter record. New studies suggest that a marine ice sheet instability (MISI) may have been initiated in parts of the West Antarctic Ice Sheet (WAIS), but that it will affect only a limited number of ice streams in the twenty-first century. New projections of mass loss from the Greenland and Antarctic Ice Sheets by 2100, including a contribution from parts of WAIS undergoing unstable retreat, suggest a contribution that falls largely within the likely range (i.e., two thirds probability) of the AR5. These new results increase confidence in the AR5 likely range, indicating that there is a greater probability that sea level rise by 2100 will lie in this range with a corresponding decrease in the likelihood of an additional contribution of several tens of centimeters above the likely range. In view of the comparatively limited state of knowledge and understanding of rapid ice sheet dynamics, we continue to think that it is not yet possible to make reliable quantitative estimates of future GMSL rise outside the likely range. 
Projections of twenty-first century GMSL rise published since the AR5 depend on results from expert elicitation, but we have low confidence in conclusions based on these approaches. New work on regional projections and emergence of the anthropogenic signal suggests that the two commonly predicted features of future regional sea level change (the increasing tilt across the Antarctic Circumpolar Current and the dipole in the North Atlantic) are related to regional changes in wind stress and surface heat flux. Moreover, it is expected that sea level change in response to anthropogenic forcing, particularly in regions of relatively low unforced variability such as the low-latitude Atlantic, will be detectable over most of the ocean by 2040. The east-west contrast of sea level trends in the Pacific observed since the early 1990s cannot be satisfactorily accounted for by climate models, nor yet definitively attributed either to unforced variability or forced climate change.

Relevance:

30.00%

Publisher:

Abstract:

Amaranth bars enriched with fructans: acceptability and nutritional value. There is an increasing demand for convenience foods with potential health benefits to the consumer. Raw materials with high nutritional value and functional properties must be used in the development of such food products. Amaranth is a gluten-free grain of high nutritional value. Inulin and oligofructose are prebiotic ingredients whose effects include the enhancement of calcium absorption. Amaranth bars enriched with inulin and oligofructose were developed in the flavors banana, Brazil nut and raisin, coconut, peach, strawberry, and walnut. Their proximate composition was determined and compared with commercial cereal bars from the traditional (n=59), light (n=60), diet (n=8), soy (n=10) and quinoa (n=1) categories. The amaranth bars presented mean global acceptance values from 6.3 to 7.6 on a 9-point hedonic scale and nutritional advantages over commercial cereal bars (caloric reduction and higher levels of dietary fiber). Although amaranth is a little-known raw material in Brazil, it shows good potential for use in the manufacture of ready-to-eat products. Being gluten free, these amaranth bars are also an alternative product for celiacs, contributing to the enhancement of calcium absorption, a problem frequently observed in these patients.

Relevance:

30.00%

Publisher:

Abstract:

We discuss the estimation of the expected value of quality-adjusted survival based on multistate models. We generalize earlier work by allowing the sojourn times in the health states to be non-identically distributed, for a given vector of covariates. Approaches based on semiparametric and parametric (exponential and Weibull distributions) methodologies are considered. A simulation study is conducted to evaluate the performance of the proposed estimator, and the jackknife resampling method is used to estimate the variance of this estimator, as sketched below. An application to a real data set is also included.
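For readers unfamiliar with the resampling step, here is a minimal sketch of the delete-one jackknife variance estimate. The sample mean stands in for the paper's quality-adjusted survival estimator, and the Weibull-style data are invented for illustration.

```python
# Delete-one jackknife: recompute the estimator n times, each time
# leaving out one observation, and combine the replicates.
import numpy as np

def jackknife_variance(data, estimator=np.mean):
    n = len(data)
    theta = np.array([estimator(np.delete(data, i)) for i in range(n)])
    # Jackknife variance: (n - 1)/n * sum of squared deviations.
    return (n - 1) / n * np.sum((theta - theta.mean()) ** 2)

# Toy usage on skewed, survival-like data.
rng = np.random.default_rng(0)
qas = 3.0 * rng.weibull(1.5, size=100)   # hypothetical QAS values
print(jackknife_variance(qas))           # ≈ var(qas) / n for the mean
```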