963 results for Probabilistic mean value theorem


Relevance:

30.00%

Publisher:

Abstract:

The objective of this master’s thesis was twofold: first, to examine the concept of customer value and its drivers and, second, to identify information use practices. The first part of the study represents explorative research carried out by examining a case company’s customer satisfaction data, which was used to identify value drivers related to sales and technical customer service at a detailed attribute level. This was followed by an examination of whether these attributes had been commented on in a positive or a negative light, and of the reasons why the case company had received higher or lower ratings than its competitor. As a result, a classification of different sales and technical customer service related attributes was created. The results indicated that the case company has performed well, but that the results varied across the company’s business segments. The case company’s staff, service and the benefits of a long-lasting relationship came up in a positive light, whereas attitude, flexibility and reaction time came up in a negative light. The reasons for a higher or lower score in comparison to the competitor varied. The results indicated that a customer’s satisfaction with the company’s performance did not always mean that the company was outperforming the competition. The second part of the study focused on the use of customer satisfaction information from the viewpoints of information access, dissemination and reaction. The study was conducted by running an internal survey among the case company’s staff. The results showed that information use practices varied across the company, and some units or teams had taken a more proactive approach to information use than others.

Relevance:

30.00%

Publisher:

Abstract:

18F-fluoro-2-deoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) is widely used to diagnose and stage non-small cell lung cancer (NSCLC). The aim of this retrospective study was to evaluate the predictive ability of different FDG standardized uptake values (SUVs) in 74 patients with newly diagnosed NSCLC. 18F-FDG PET/CT scans were performed, different SUV parameters (SUVmax, SUVavg, SUVT/L, and SUVT/A) were obtained, and their relationships with clinical characteristics were investigated. Correlation and multiple stepwise regression analyses were also performed to determine which SUV was the primary predictor for NSCLC. Age, gender, and tumor size significantly affected the SUV parameters. The mean SUVs of squamous cell carcinoma were higher than those of adenocarcinoma. Poorly differentiated tumors exhibited higher SUVs than well-differentiated ones. Further analyses based on pathologic type revealed that the SUVmax, SUVavg, and SUVT/L of poorly differentiated adenocarcinoma tumors were higher than those of moderately or well-differentiated tumors. Among these four SUV parameters, SUVT/L was the primary predictor of tumor differentiation. In adenocarcinoma, however, SUVmax was the determining factor. Our results showed that all four SUV parameters had predictive significance for NSCLC tumor differentiation; SUVT/L appeared to be the most useful overall, but SUVmax was the best index for adenocarcinoma tumor differentiation.

Relevance:

30.00%

Publisher:

Abstract:

We investigated the diagnostic value of the apparent diffusion coefficient (ADC) and fractional anisotropy (FA) of magnetic resonance diffusion tensor imaging (DTI) in patients with spinal cord compression (SCC) using a meta-analysis framework. Multiple scientific literature databases were exhaustively searched to identify articles relevant to this study. Mean values and standardized mean differences (SMDs) were calculated for the ADC and FA in normal and diseased tissues. The STATA version 12.0 software was used for statistical analysis. Of the 41 articles initially retrieved through database searches, 11 case-control studies were eligible for the meta-analysis and contained a combined total of 645 human subjects (394 patients with SCC and 251 healthy controls). All 11 studies reported data on FA, and 9 contained data related to the ADC. The combined SMDs of the ADC and FA showed that the ADC was significantly higher and the FA was lower in patients with SCC than in healthy controls. Subgroup analysis based on the b value showed higher ADCs in patients with SCC than in healthy controls at b values of both ≤500 and >500 s/mm². In summary, the main findings of this meta-analysis revealed an increased ADC and decreased FA in patients with SCC, indicating that DTI is an important diagnostic imaging tool to assess patients suspected to have SCC.
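
For orientation, the sketch below shows how a single study's standardized mean difference can be computed from the two groups' raw values; the function and the ADC numbers are illustrative placeholders, not data from the included studies.

```python
import numpy as np

def standardized_mean_difference(group1, group2):
    """Cohen's d: difference in group means divided by the pooled SD."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = np.var(group1, ddof=1), np.var(group2, ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd

# Illustrative ADC values (x 10^-3 mm^2/s), not taken from the meta-analysis:
adc_scc = [1.45, 1.52, 1.38, 1.60, 1.49]       # patients with SCC
adc_healthy = [1.05, 0.98, 1.12, 1.01, 1.08]   # healthy controls
print(standardized_mean_difference(adc_scc, adc_healthy))  # > 0: ADC higher in SCC
```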

Relevance:

30.00%

Publisher:

Abstract:

An investor can either conduct independent analysis or rely on the analyses of others. Stock analysts provide markets with expectations regarding particular securities. However, analysts have different capabilities and resources, of which investors are seldom cognizant. The local advantage refers to the advantage stemming from cultural or geographical proximity to the securities analyzed. Prior research has confirmed that local agents are generally more accurate or produce excess returns. This thesis tests the investment value of the local advantage regarding Finnish stocks via target price data. The empirical section investigates the local advantage from several aspects. It is discovered that local analysts were more focused on certain sectors, generally located close to consumer markets. Market reactions to target price revisions were generally insignificant, with the exception of local positive target prices. Both local and foreign target prices were overly optimistic and exhibited signs of herding. Neither group could be identified as a leader or follower of new information. Additionally, foreign price change expectations were more in line with quantitative models and ideas such as beta or mean reversion in returns. The locals were more accurate than foreign analysts in five out of nine sectors, and vice versa in one. These sectors were somewhat in line with coverage decisions and buttressed the idea of a local advantage stemming from proximity to markets, not to headquarters. The accuracy advantage depended on the sample years and on the measure used. Local analysts ranked the magnitudes of price changes more accurately for optimistic target prices, and foreign analysts for pessimistic ones. The directional accuracy of both groups was under 50%, and target prices held no linear predictive power. The investment value of target prices was tested by forming mean-variance efficient portfolios. In parallel with the differing accuracies in the levels of expectations, the foreign portfolio performed better when short sales were allowed and the local portfolio better when they were disallowed. Both local and non-local portfolios performed worse than a passive index fund, albeit not statistically significantly. This was in line with the previously reported low overall accuracy and differing accuracy profiles. Refraining from estimating individual stock returns altogether produced statistically significantly higher Sharpe ratios than either the local or the foreign portfolio. The proposed method of testing the investment value of target prices of different groups suffered from some inconsistencies. Nevertheless, these results are of interest to investors seeking the advice of security analysts.
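
One way such a test can be set up (a sketch under simplifying assumptions, not the thesis's exact procedure) is to form a maximum-Sharpe tangency portfolio from target-price-implied expected returns and compare Sharpe ratios across analyst groups; all names and inputs below are placeholders.

```python
import numpy as np

def max_sharpe_weights(exp_returns, cov, allow_short=True):
    """Tangency portfolio: w proportional to inv(Sigma) @ mu (risk-free rate = 0)."""
    w = np.linalg.solve(cov, exp_returns)
    if not allow_short:
        w = np.clip(w, 0.0, None)   # crude long-only approximation of the constraint
    return w / w.sum()

def sharpe_ratio(w, exp_returns, cov):
    """Expected excess return per unit of portfolio standard deviation."""
    return (w @ exp_returns) / np.sqrt(w @ cov @ w)

# exp_returns could be the returns implied by local vs. foreign target prices;
# cov would be a sample covariance matrix of historical stock returns.
```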

Relevance:

30.00%

Publisher:

Abstract:

Molecular dynamics calculations of the mean square displacement have been carried out for the alkali metals Na, K and Cs and for an fcc nearest-neighbour Lennard-Jones model applicable to rare gas solids. The computations for the alkalis were done for several temperatures, for the zero-temperature volume as well as for the zero-pressure volume corresponding to each temperature. In the fcc case, results were obtained for a wide range of both temperature and density. Lattice dynamics calculations of the harmonic and the lowest-order anharmonic (cubic and quartic) contributions to the mean square displacement were performed for the same potential models as in the molecular dynamics calculations. The Brillouin zone sums arising in the harmonic and quartic terms were computed for very large numbers of points in q-space and were extrapolated to obtain results fully converged with respect to the number of points in the Brillouin zone. Excellent agreement between the lattice dynamics and molecular dynamics results was observed for all the alkali metals, except for the zero-pressure case of Cs, where the difference is about 15% near the melting temperature. It was concluded that for the alkalis, lowest-order perturbation theory works well even at temperatures close to the melting temperature. For the fcc nearest-neighbour model it was found that the number of particles (256) used for the molecular dynamics calculations produces a result between 10% and 20% smaller than the value converged with respect to the number of particles. However, the general temperature dependence of the mean square displacement is the same in molecular dynamics and lattice dynamics for all temperatures at the highest densities examined, while at higher volumes and high temperatures the results diverge. This indicates the importance of higher-order perturbation theory contributions in these cases.
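
For orientation, a minimal sketch of the molecular dynamics side of such a comparison: computing the mean square displacement about each atom's time-averaged position from a trajectory. The array shapes are assumptions, not the thesis's code.

```python
import numpy as np

def msd_about_mean(traj):
    """Mean square displacement <|r_i(t) - <r_i>|^2>, the quantity compared
    with the lattice dynamics <u^2>.

    traj: unwrapped particle positions, shape (n_steps, n_atoms, 3).
    """
    mean_pos = traj.mean(axis=0)                    # time-averaged position per atom
    disp2 = ((traj - mean_pos) ** 2).sum(axis=2)    # squared displacement per step and atom
    return disp2.mean()                             # average over steps and atoms
```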

Relevance:

30.00%

Publisher:

Abstract:

There is hardly a case in exploration geology where the data studied do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these “zero data” present a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot co-exist with nepheline. Another common essential zero is a North azimuth; however, we can always change that zero to the value 360°. These are known as “essential zeros”, but what can we do with “rounded zeros” that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same occurs if we replace the zero values with a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be “rounded zeros”. So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the “rounded” zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the “rounded zeros”, but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
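
A minimal sketch of the copper-molybdenum example described above, assuming roughly lognormal grades stored as float arrays; the helper name, the hypothetical detection limit and the log-log regression choice are illustrative, not the authors' exact implementation.

```python
import numpy as np

def impute_rounded_zeros(cu, mo, detection_limit):
    """Replace below-detection Mo values using a Cu-Mo regression fitted on
    the lower quartile of the *detected* Mo values (log-log, since grades
    are roughly lognormal). cu and mo are positive float arrays."""
    detected = mo > detection_limit
    q1 = np.quantile(mo[detected], 0.25)
    low = detected & (mo <= q1)                  # lower quartile of real Mo values
    slope, intercept = np.polyfit(np.log(cu[low]), np.log(mo[low]), 1)
    mo_imputed = mo.copy()
    # Each "rounded zero" gets its own estimate, driven by its Cu value:
    mo_imputed[~detected] = np.exp(intercept + slope * np.log(cu[~detected]))
    return mo_imputed
```

The key property of the approach is visible in the last assignment: the imputed values are not a single fixed constant but vary with the correlated variable.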

Relevance:

30.00%

Publisher:

Abstract:

We describe numerical simulations designed to elucidate the role of mean ocean salinity in climate. Using a coupled atmosphere-ocean general circulation model, we study a 100-year sensitivity experiment in which the global-mean salinity is approximately doubled from its present observed value, by adding 35 psu everywhere in the ocean. The salinity increase produces a rapid global-mean sea-surface warming of C within a few years, caused by reduced vertical mixing associated with changes in cabbeling. The warming is followed by a gradual global-mean sea-surface cooling of C within a few decades, caused by an increase in the vertical (downward) component of the isopycnal diffusive heat flux. We find no evidence of impacts on the variability of the thermohaline circulation (THC) or El Niño/Southern Oscillation (ENSO). The mean strength of the Atlantic meridional overturning is reduced by 20% and the North Atlantic Deep Water penetrates less deeply. Nevertheless, our results dispute claims that higher salinities for the world ocean have profound consequences for the thermohaline circulation. In additional experiments with doubled atmospheric carbon dioxide, we find that the amplitude and spatial pattern of the global warming signal are modified in the hypersaline ocean. In particular, the equilibrated global-mean sea-surface temperature increase caused by doubling carbon dioxide is reduced by 10%. We infer the existence of a non-linear interaction between the climate responses to modified carbon dioxide and modified salinity.

Relevance:

30.00%

Publisher:

Abstract:

Samples of whole crop wheat (WCW, n = 134) and whole crop barley (WCB, n = 16) were collected from commercial farms in the UK over a 2-year period (2003/2004 and 2004/2005). Near infrared reflectance spectroscopy (NIRS) was compared with laboratory and in vitro digestibility measures to predict digestible organic matter in the dry matter (DOMD) and metabolisable energy (ME) contents measured in vivo using sheep. Spectral models using the mean spectra of two scans were compared with those using individual (duplicate) spectra. Overall, NIRS accurately predicted the concentration of chemical components in whole crop cereals, apart from crude protein, ammonia-nitrogen, water-soluble carbohydrates, fermentation acids and solubility values. In addition, the spectral models had higher prediction power for in vivo DOMD and ME than chemical components or in vitro digestion methods. Overall there was a benefit from using duplicate spectra rather than mean spectra, especially for predicting in vivo DOMD and ME, where the sample population was smaller. The spectral models derived dealt equally well with WCW and WCB and would be of considerable practical value, allowing rapid determination of the nutritive value of these forages before their use in the diets of productive animals. © 2008 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

The potential of near infrared spectroscopy in conjunction with partial least squares regression to predict Miscanthus × giganteus and short rotation coppice willow quality indices was examined. Moisture, calorific value, ash and carbon content were predicted with root mean square errors of cross validation (RMSECV) of 0.90% (R² = 0.99), 0.13 MJ/kg (R² = 0.99), 0.42% (R² = 0.58), and 0.57% (R² = 0.88), respectively. The moisture and calorific value prediction models had excellent accuracy, while the carbon and ash models were fair and poor, respectively. The results indicate that near infrared spectroscopy has the potential to predict quality indices of dedicated energy crops; however, the models must be further validated on a wider range of samples prior to implementation. The utilization of such models would assist in the optimal use of the feedstock based on its biomass properties.
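
A minimal sketch of the modelling step, assuming spectra in a matrix X and a lab-measured property in y (hypothetical names); it uses scikit-learn's PLS regression and computes the RMSECV figure quoted above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rmsecv(X, y, n_components=10, cv=10):
    """Root mean square error of cross-validation for a PLS model.

    X: NIR spectra, shape (n_samples, n_wavelengths)
    y: measured property, e.g. calorific value in MJ/kg
    """
    pls = PLSRegression(n_components=n_components)
    y_pred = cross_val_predict(pls, X, y, cv=cv).ravel()
    return float(np.sqrt(np.mean((np.asarray(y).ravel() - y_pred) ** 2)))
```

In practice the number of latent components would itself be chosen by cross-validation rather than fixed.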

Relevance:

30.00%

Publisher:

Abstract:

There is concern that insect pollinators, such as honey bees, are currently declining in abundance and are under serious threat from environmental changes such as habitat loss and climate change, the use of pesticides in intensive agriculture, and emerging diseases. This paper aims to evaluate how much public support there would be for preventing further decline and maintaining the current number of bee colonies in the UK. The contingent valuation method (CVM) was used to obtain the willingness to pay (WTP) for a theoretical pollinator protection policy. Respondents were asked whether they would be willing to pay to support such a policy and, if so, how much. Results show that the mean WTP to support the bee protection policy was £1.37/week/household. Based on there being 24.9 million households in the UK, this is equivalent to £1.77 billion per year. This total shows policy makers the importance of maintaining the overall pollination service. We compare this total with estimates obtained using a simple market valuation of pollination for the UK.
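
The aggregation is simple arithmetic; the check below reproduces the £1.77 billion figure from the quoted mean WTP and household count.

```python
wtp_per_week = 1.37      # mean WTP, GBP per household per week
households = 24.9e6      # number of UK households
weeks_per_year = 52

annual_total = wtp_per_week * weeks_per_year * households
print(f"GBP {annual_total / 1e9:.2f} billion per year")  # -> GBP 1.77 billion
```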

Relevance:

30.00%

Publisher:

Abstract:

Given a nonlinear model, a probabilistic forecast may be obtained by Monte Carlo simulation. At a given forecast horizon, Monte Carlo simulations yield sets of discrete forecasts, which can be converted into density forecasts. The resulting density forecasts will inevitably be degraded by model mis-specification. To enhance their quality, one can mix them with the unconditional density. This paper examines the value of combining conditional density forecasts with the unconditional density. The findings have positive implications for issuing early warnings in different disciplines, including economics and meteorology; UK inflation forecasts are considered as an example.
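
A minimal sketch of the combination, assuming both densities have already been evaluated on a common grid of outcomes; the mixture weight w would be chosen on past data, for instance by minimizing the average log score (all names hypothetical).

```python
import numpy as np

def combine_densities(f_cond, f_uncond, w):
    """Mix a conditional density forecast with the unconditional density,
    both evaluated on the same grid of outcomes."""
    return w * f_cond + (1.0 - w) * f_uncond

def log_score(density, grid, outcome):
    """Ignorance score: minus log density at the realised outcome (lower is better)."""
    idx = int(np.argmin(np.abs(grid - outcome)))
    return -np.log(density[idx])
```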

Relevance:

30.00%

Publisher:

Abstract:

Several methods are examined that produce forecasts for time series in the form of probability assignments. The necessary concepts are presented, addressing questions such as how to assess the performance of a probabilistic forecast. One class of models, cluster weighted models (CWMs), receives particular attention; originally proposed for deterministic forecasts, CWMs can be employed for probabilistic forecasting with little modification. Two examples are presented. The first involves estimating the state of (numerically simulated) dynamical systems from noise-corrupted measurements, a problem also known as filtering. There is an optimal solution to this problem, called the optimal filter, to which the considered time series models are compared (the optimal filter requires the dynamical equations to be known). In the second example, we aim at forecasting the chaotic oscillations of an experimental bronze spring system. Both examples demonstrate that the considered time series models, and especially the CWMs, provide useful probabilistic information about the underlying dynamical relations. In particular, they provide more than just an approximation to the conditional mean.
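
As a loose illustration of the idea (a Gaussian mixture stand-in, not the CWM implementation of the papers), one can fit a mixture to joint (input, output) pairs and read off a conditional forecast density rather than a point forecast:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def conditional_density(gmm, x, y_grid):
    """p(y | x) on y_grid, from a 2-D GaussianMixture fitted on [x, y] pairs."""
    pts = np.column_stack([np.full_like(y_grid, x, dtype=float), y_grid])
    joint = np.exp(gmm.score_samples(pts))       # joint density p(x, y) along the grid
    dy = y_grid[1] - y_grid[0]                   # assumes a uniform grid
    return joint / (joint.sum() * dy)            # normalise to a density in y

# Usage: gmm = GaussianMixture(n_components=3).fit(np.column_stack([x_train, y_train]))
```

The conditional mean is recoverable from this density, but the full p(y | x) also carries the spread and multimodality that a point forecast discards.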

Relevance:

30.00%

Publisher:

Abstract:

Three wind gust estimation (WGE) methods implemented in the numerical weather prediction (NWP) model COSMO-CLM are evaluated with respect to their forecast quality using skill scores. Two methods estimate gusts locally from the mean wind speed and the turbulence state of the atmosphere, while the third considers the mixing-down of high momentum within the planetary boundary layer (WGE Brasseur). One hundred and fifty-eight windstorms from the last four decades are simulated, and the results are compared with gust observations at 37 stations in Germany. Skill scores reveal that the local WGE methods show an overall better behaviour, whilst WGE Brasseur performs less well except in mountain regions. The WGE turbulent kinetic energy (TKE) method introduced here permits a probabilistic interpretation, using statistical characteristics of gusts at observational sites to assess uncertainty. The WGE TKE formulation has the advantage of a ‘native’ interpretation of wind gusts as the result of the local appearance of TKE. The inclusion of a probabilistic WGE TKE approach in NWP models thus has several advantages over other methods, as it has the potential to estimate the uncertainty of gusts at observational sites.
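
A heavily simplified sketch of a TKE-based gust estimate, assuming the gust equals the mean wind plus a multiple of the along-wind standard deviation with sigma_u ~ sqrt(2·TKE/3) under isotropic turbulence; the factor alpha is an assumed tuning constant, not the COSMO-CLM value.

```python
import numpy as np

def gust_from_tke(mean_wind, tke, alpha=3.0):
    """Gust estimate from mean wind speed (m/s) and turbulent kinetic
    energy (m^2/s^2): mean wind plus alpha standard deviations of the
    along-wind fluctuations."""
    sigma_u = np.sqrt(2.0 * tke / 3.0)
    return mean_wind + alpha * sigma_u
```

Treating alpha as a random variable fitted to observed gust statistics at each site is one way to turn such a formulation into a probabilistic estimate.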

Relevance:

30.00%

Publisher:

Abstract:

We propose and demonstrate a fully probabilistic (Bayesian) approach to the detection of cloudy pixels in thermal infrared (TIR) imagery observed from satellite over oceans. Using this approach, we show how to exploit the prior information and the fast forward modelling capability that are typically available in the operational context to obtain improved cloud detection. The probability of clear sky for each pixel is estimated by applying Bayes' theorem, and we describe in general terms how to apply it to this problem. Joint probability density functions (PDFs) of the observations in the TIR channels are needed; the PDFs for clear conditions are calculable from forward modelling, and those for cloudy conditions have been obtained empirically. Using analysis fields from numerical weather prediction as prior information, we apply the approach to imagery representative of imagers on polar-orbiting platforms. In comparison with the established cloud-screening scheme, the new technique decreases both the rate of failure to detect cloud contamination and the false-alarm rate by one quarter. The rate of occurrence of cloud-screening-related errors of >1 K in area-averaged SSTs is reduced by 83%. Copyright © 2005 Royal Meteorological Society.
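
A minimal sketch of the per-pixel Bayes step, with the two likelihoods passed in as callables (the clear-sky one from forward modelling, the cloudy one empirical); all names are placeholders, not the scheme's actual interfaces.

```python
def prob_clear(obs, pdf_clear, pdf_cloudy, prior_clear):
    """Posterior probability of clear sky for one pixel via Bayes' theorem.

    obs         : observation vector (TIR brightness temperatures)
    pdf_clear   : callable, p(obs | clear), from fast forward modelling
    pdf_cloudy  : callable, p(obs | cloudy), estimated empirically
    prior_clear : prior probability of clear sky, e.g. from NWP analysis fields
    """
    numerator = pdf_clear(obs) * prior_clear
    evidence = numerator + pdf_cloudy(obs) * (1.0 - prior_clear)
    return numerator / evidence
```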