966 results for Mean Value Theorem
Abstract:
AIM To evaluate the diagnostic value (sensitivity, specificity) of positron emission mammography (PEM) in a single-site, non-interventional study using the maximum PEM uptake value (PUVmax). PATIENTS, METHODS In a single-site, non-interventional study, 108 patients (107 women, 1 man) with a total of 151 suspected lesions were scanned with a PEM Flex Solo II (Naviscan) at 90 min p.i. with 3.5 MBq 18F-FDG per kg of body weight. In this ROI (region of interest)-based analysis, the maximum PEM uptake value (PUVmax) was determined in lesions, i.e. in tumours (PUVmax tumour) and benign lesions (PUVmax normal breast), and also in healthy tissue on the contralateral side (PUVmax contralateral breast). These values were compared and contrasted. In addition, the ratios PUVmax tumour / PUVmax contralateral breast and PUVmax normal breast / PUVmax contralateral breast were compared. The image data were interpreted independently by two experienced nuclear medicine physicians and compared with histology in cases of suspected carcinoma. RESULTS Based on the criterion PUVmax > 1.9, 31 of the 151 lesions in the patient cohort were found to be malignant (21%). A mean PUVmax tumour of 3.78 ± 2.47 was identified in malignant tumours, while a mean PUVmax normal breast of 1.17 ± 0.37 was reported in the glandular tissue of the healthy breast, the difference being statistically significant (p < 0.001). Similarly, the mean ratio between tumour and healthy glandular tissue in breast cancer patients (3.15 ± 1.58) was significantly higher than the ratio for benign lesions (1.17 ± 0.41, p < 0.001). CONCLUSION PEM is capable of differentiating breast tumours from benign lesions with 100% sensitivity and a high specificity of 96% when a threshold of PUVmax > 1.9 is applied.
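As a rough illustration of how such a threshold-based read-out translates into sensitivity and specificity, the following Python sketch counts true and false positives and negatives for the PUVmax > 1.9 criterion; the PUVmax values and histology labels are entirely hypothetical, not the study data.

```python
import numpy as np

# Hypothetical PUVmax values and histology labels (1 = malignant, 0 = benign);
# illustrative numbers only, not data from the study.
puv_max   = np.array([3.8, 1.2, 2.5, 1.0, 4.1, 1.6, 2.2, 1.3])
malignant = np.array([1,   0,   1,   0,   1,   0,   1,   0])

predicted = puv_max > 1.9                      # the decision criterion

tp = np.sum(predicted & (malignant == 1))      # malignant, flagged malignant
fn = np.sum(~predicted & (malignant == 1))     # malignant, missed
tn = np.sum(~predicted & (malignant == 0))     # benign, correctly cleared
fp = np.sum(predicted & (malignant == 0))      # benign, flagged malignant

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```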
Abstract:
Mean corpuscular volume, an inexpensive and widely available measure, increases in HIV-infected individuals receiving zidovudine and stavudine, raising the hypothesis that it could be used as a surrogate for adherence. The aim of this study was to examine the association between mean corpuscular volume and adherence to antiretroviral therapy among HIV-infected children and adolescents aged 0–19 years in Uganda, as well as the extent to which changes in mean corpuscular volume predict adherence as determined by virologic suppression. The investigator retrospectively reviewed and analyzed secondary data on 158 HIV-infected children and adolescents aged 0–19 years who initiated antiretroviral therapy under an observational cohort at the Baylor College of Medicine Children's Foundation - Uganda. Viral suppression was used as the gold standard for monitoring adherence and was defined as a viral load of < 400 copies/ml at 24 and 48 weeks. Patients had been on therapy for at least 48 weeks; they were aged 0.2–18.4 years, 54.4% female, 82.3% on a zidovudine-based regimen, and 92% WHO stage III at initiation of therapy, with a median pre-therapy MCV of 80.6 fl (70.3–98.3 fl), median CD4% of 10.2% (0.3%–28.0%), and mean pre-therapy viral load of 407,712.9 ± 270,413.9 copies/ml. At both 24 and 48 weeks of antiretroviral therapy, patients with viral suppression had a greater mean percentage change in mean corpuscular volume than those without (15.1% ± 8.4 vs. 11.1% ± 7.8 and 2.3% ± 13.2 vs. -2.7% ± 10.5, respectively). The mean percentage change in mean corpuscular volume was greater in the first 24 weeks of therapy than at 48 weeks for patients both with and without viral suppression (15.1% ± 8.4 vs. 2.3% ± 13.2 and 11.1% ± 7.8 vs. -2.7% ± 10.5, respectively). In the multivariate logistic regression model, a percentage change in mean corpuscular volume ≥ 20% was significantly associated with viral suppression (adjusted OR 4.0; CI 1.2–13.3; p = 0.02). The ability of percentage change in MCV to correctly identify children and adolescents with viral suppression was higher at a cut-off of ≥ 20% (90.7%; sensitivity 31.7%) than at ≥ 9% (82.9%; sensitivity 78.9%). The negative predictive value was lower at a ≥ 20% change (25%; specificity 84.8%) than at a ≥ 9% change (33.3%; specificity 39.4%). Mean corpuscular volume is a useful marker of adherence among children and adolescents with viral suppression.
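As a minimal sketch of the quantity on which the cut-offs above are defined, the percentage change in MCV and the ≥ 20% flag could be computed as follows; the baseline and follow-up values are made up for illustration and are not cohort data.

```python
# Hypothetical baseline and follow-up MCV values in femtolitres (fl);
# illustrative numbers only, not data from the cohort.
mcv_baseline = [80.6, 75.2, 92.1, 78.4]
mcv_followup = [98.0, 82.5, 94.0, 77.1]

for base, follow in zip(mcv_baseline, mcv_followup):
    pct_change = (follow - base) / base * 100    # percentage change in MCV
    flagged = pct_change >= 20                   # the >= 20% cut-off from the study
    print(f"baseline={base} fl, follow-up={follow} fl, "
          f"change={pct_change:.1f}%, flagged adherent={flagged}")
```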
Abstract:
A method is presented to filter errors out of multidimensional databases. The method does not require any a priori information about the nature of the errors. In particular, the errors need not be small, random, or of zero mean; they are only required to be relatively uncorrelated with the clean information contained in the database. The method is based on an improved extension of a seminal iterative gappy reconstruction method (able to reconstruct lost information at known positions in the database) due to Everson and Sirovich (1995). The improved gappy reconstruction method is developed into a two-step error filtering method: it first (a) identifies the error locations in the database and then (b) reconstructs the information at these locations by treating the associated data as gappy data. The resulting method filters out O(1) errors efficiently, both when these are random and when they are systematic, and both when they are concentrated and when they are spread throughout the database. The performance of the method is first illustrated using a two-dimensional toy-model database obtained by discretizing a transcendental function, and then tested on two CFD-calculated, three-dimensional aerodynamic databases containing the pressure coefficient on the surface of a wing for varying values of the angle of attack. A more general performance analysis is also presented, quantifying, first, the level of randomness the method can tolerate while still performing correctly and, second, the size of error it can detect. Lastly, some improvements of the method are proposed, together with their verification.
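The core building block, reconstructing data at known gap locations with POD modes, can be sketched roughly as follows. This is a simplified, illustrative Everson-Sirovich-style iteration, not the thesis's improved two-step method; the function name and parameters are placeholders.

```python
import numpy as np

def gappy_pod_reconstruct(X, known, n_modes=5, n_iter=50):
    """Very simplified Everson-Sirovich-style gappy reconstruction.

    X     : (n_snapshots, n_points) data matrix; entries where `known` is
            False are treated as missing and will be reconstructed.
    known : boolean mask of the same shape, True where data are trusted.
    Returns a copy of X with the missing entries filled in.
    """
    # Initial guess for the gaps: column means over the known entries.
    col_mean = np.where(known, X, 0.0).sum(0) / np.maximum(known.sum(0), 1)
    Xr = np.where(known, X, col_mean)

    for _ in range(n_iter):
        mean = Xr.mean(axis=0)
        # POD modes of the current estimate (right singular vectors).
        _, _, Vt = np.linalg.svd(Xr - mean, full_matrices=False)
        modes = Vt[:n_modes]                                 # (n_modes, n_points)
        for i in range(X.shape[0]):
            m = known[i]
            # Fit mode coefficients using only the trusted entries of this snapshot.
            coeffs, *_ = np.linalg.lstsq(modes[:, m].T, X[i, m] - mean[m], rcond=None)
            Xr[i, ~m] = mean[~m] + coeffs @ modes[:, ~m]     # update the gaps only
    return Xr
```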
Abstract:
In this paper the authors construct a theory about how the expansion of higher education could be associated with several factors that indicate a decline in the quality of degrees. They assume that the expansion of tertiary education takes place through three channels, and show how these channels are likely to reduce average study time, lower academic requirements and average wages, and inflate grades. First, universities have an incentive to increase their student body through public and private funding schemes beyond a level at which they can keep their academic requirements high. Second, due to skill-biased technological change, employers have an incentive to recruit staff with a higher education degree. Third, students have an incentive to acquire a college degree due to employers’ preferences for such qualifications, the university application procedures, and the growing social value placed on education. The authors develop a parsimonious dynamic model in which a student, a college and an employer repeatedly make decisions about requirement levels, performance and wage levels. Their model shows that if i) universities have an incentive to decrease entrance requirements, ii) employers are more likely to employ staff with a higher education degree and iii) all types of students enrol in colleges, the final grade will not necessarily induce weaker students to study more to catch up with more able students. In order to re-establish a quality-guarantee mechanism, entrance requirements should be set at a higher level.
Abstract:
This work addresses the asset allocation problem (portfolio analysis) from a Bayesian perspective. To this end, the theoretical analysis of the classical mean-variance model was reviewed, and the deficiencies that compromise its effectiveness in real-world cases were then identified. Interestingly, its greatest deficiency is related not to the model itself but to its input data, in particular the expected return estimated from historical data. To overcome this deficiency, the Bayesian approach (the Black-Litterman model) treats the expected return as a random variable and then constructs a prior distribution (based on the CAPM) and a likelihood distribution (based on the market views held by the investor), finally applying Bayes' theorem to obtain the posterior distribution. The new expected return that emerges from the posterior distribution replaces the earlier estimate of the expected return computed from historical data. The results obtained show that the Bayesian model produces conservative and intuitive results relative to the classical mean-variance model.
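A minimal numerical sketch of this prior/views/posterior combination is given below. It implements the standard Black-Litterman update with illustrative values for the risk-aversion parameter delta, the scaling factor tau and the view-uncertainty heuristic; it is not the thesis's specific calibration or data.

```python
import numpy as np

def black_litterman_posterior(Sigma, w_mkt, P, Q, delta=2.5, tau=0.05):
    """Posterior expected returns from the standard Black-Litterman update.

    Sigma : (n, n) covariance matrix of asset returns
    w_mkt : (n,) market-capitalisation weights (the CAPM equilibrium portfolio)
    P, Q  : (k, n) view-picking matrix and (k,) vector of view returns
    delta, tau : illustrative risk-aversion and prior-scaling parameters
    """
    # Prior: CAPM-implied equilibrium excess returns.
    pi = delta * Sigma @ w_mkt
    # View uncertainty: a common heuristic uses the diagonal of P (tau*Sigma) P'.
    Omega = np.diag(np.diag(P @ (tau * Sigma) @ P.T))
    tau_Sigma_inv = np.linalg.inv(tau * Sigma)
    # Bayes' theorem: combine prior precision with view precision.
    posterior_precision = tau_Sigma_inv + P.T @ np.linalg.inv(Omega) @ P
    rhs = tau_Sigma_inv @ pi + P.T @ np.linalg.solve(Omega, Q)
    return np.linalg.solve(posterior_precision, rhs)

# Tiny illustrative use: two assets, one relative view
# ("asset 1 outperforms asset 2 by 2%").
Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
w_mkt = np.array([0.6, 0.4])
P = np.array([[1.0, -1.0]])
Q = np.array([0.02])
print(black_litterman_posterior(Sigma, w_mkt, P, Q))
```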
Abstract:
BACKGROUND: Recent studies have demonstrated that exercise capacity is an independent predictor of mortality in women. Normative values of exercise capacity for age in women have not been well established. Our objectives were to construct a nomogram to permit determination of predicted exercise capacity for age in women and to assess the predictive value of the nomogram with respect to survival. METHODS: A total of 5721 asymptomatic women underwent a symptom-limited, maximal stress test. Exercise capacity was measured in metabolic equivalents (MET). Linear regression was used to estimate the mean MET achieved for age. A nomogram was established to allow the percentage of predicted exercise capacity to be estimated on the basis of age and the exercise capacity achieved. The nomogram was then used to determine the percentage of predicted exercise capacity for both the original cohort and a referral population of 4471 women with cardiovascular symptoms who underwent a symptom-limited stress test. Survival data were obtained for both cohorts, and Cox survival analysis was used to estimate the rates of death from any cause and from cardiac causes in each group. RESULTS: The linear regression equation for predicted exercise capacity (in MET) on the basis of age in the cohort of asymptomatic women was as follows: predicted MET = 14.7 - (0.13 x age). The risk of death among asymptomatic women whose exercise capacity was less than 85 percent of the predicted value for age was twice that among women whose exercise capacity was at least 85 percent of the age-predicted value (P<0.001). Results were similar in the cohort of symptomatic women. CONCLUSIONS: We have established a nomogram for predicted exercise capacity on the basis of age that is predictive of survival among both asymptomatic and symptomatic women. These findings could be incorporated into the interpretation of exercise stress tests, providing additional prognostic information for risk stratification.
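The nomogram's regression line lends itself to a direct calculation. The sketch below uses the published equation and the 85% cut-point; the patient's age and achieved workload are made-up example values.

```python
def predicted_met(age_years):
    """Predicted exercise capacity (MET) for age, from the study's regression line."""
    return 14.7 - 0.13 * age_years

def percent_of_predicted(achieved_met, age_years):
    return 100 * achieved_met / predicted_met(age_years)

# Made-up example: a 55-year-old woman who achieved 6 MET on a stress test.
age, achieved = 55, 6.0
pct = percent_of_predicted(achieved, age)
print(f"predicted: {predicted_met(age):.1f} MET, achieved: {achieved} MET "
      f"({pct:.0f}% of predicted)")
# In the study, women below 85% of the age-predicted value had roughly twice
# the risk of death of those at or above 85%.
print("below 85% of predicted:", pct < 85)
```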
Abstract:
Let f : [0, 1] × ℝ² → ℝ be a function satisfying the Carathéodory conditions and let t(1−t)e(t) ∈ L¹(0, 1). Let aᵢ ∈ ℝ and ξᵢ ∈ (0, 1) for i = 1, ..., m−2, where 0 < ξ₁ < ξ₂ < ... < ξₘ₋₂ < 1. In this paper we study the existence of C[0, 1] solutions for the m-point boundary value problem [GRAPHICS]. The proof of our main result is based on the Leray-Schauder continuation theorem.
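The display equation itself is not reproduced in this listing ([GRAPHICS] above). A commonly studied second-order m-point problem of this general type, shown here only as an illustrative assumption and not necessarily the paper's exact formulation, reads:

```latex
% Illustrative m-point boundary value problem (an assumed typical form,
% not necessarily the exact problem studied in the paper):
\[
\begin{aligned}
  x''(t) &= f\bigl(t, x(t), x'(t)\bigr) + e(t), \qquad t \in (0,1), \\
  x(0)   &= 0, \qquad x(1) = \sum_{i=1}^{m-2} a_i \, x(\xi_i).
\end{aligned}
\]
```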
Abstract:
Background Regression to the mean (RTM) is a statistical phenomenon that can make natural variation in repeated data look like real change. It happens when unusually large or small measurements tend to be followed by measurements that are closer to the mean. Methods We give some examples of the phenomenon, and discuss methods to overcome it at the design and analysis stages of a study. Results The effect of RTM in a sample becomes more noticeable with increasing measurement error and when follow-up measurements are only examined on a sub-sample selected using a baseline value. Conclusions RTM is a ubiquitous phenomenon in repeated data and should always be considered as a possible cause of an observed change. Its effect can be alleviated through better study design and use of suitable statistical methods.
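A quick way to see the phenomenon is to simulate repeated measurements with error and then select a sub-sample on the baseline value, as the Results paragraph describes. The parameters below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_value = rng.normal(100, 10, n)              # stable underlying values
baseline   = true_value + rng.normal(0, 10, n)   # measurement error at baseline
follow_up  = true_value + rng.normal(0, 10, n)   # independent error at follow-up

# Select the subjects with the highest baseline measurements.
selected = baseline > np.quantile(baseline, 0.9)

print("mean baseline  (selected):", round(baseline[selected].mean(), 1))
print("mean follow-up (selected):", round(follow_up[selected].mean(), 1))
# The follow-up mean of the selected group is closer to 100 even though nothing
# changed: this apparent change is regression to the mean.
```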
Abstract:
The recent deregulation in electricity markets worldwide has heightened the importance of risk management in energy markets. Assessing Value-at-Risk (VaR) in electricity markets is arguably more difficult than in traditional financial markets because the distinctive features of the former result in a highly unusual distribution of returns: electricity returns are highly volatile, display seasonalities in both their mean and volatility, exhibit leverage effects and clustering in volatility, and feature extreme levels of skewness and kurtosis. With electricity applications in mind, this paper proposes a model that accommodates autoregression and weekly seasonals in both the conditional mean and conditional volatility of returns, as well as leverage effects via an EGARCH specification. In addition, extreme value theory (EVT) is adopted to explicitly model the tails of the return distribution. Compared to a number of other parametric models and simple historical simulation based approaches, the proposed EVT-based model performs well in forecasting out-of-sample VaR. In addition, statistical tests show that the proposed model provides appropriate interval coverage in both unconditional and, more importantly, conditional contexts. Overall, the results are encouraging in suggesting that the proposed EVT-based model is a useful technique in forecasting VaR in electricity markets. (c) 2005 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.
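A rough sketch of the EVT step is given below, in the spirit of the familiar two-stage approach: standardise returns with an already-fitted conditional volatility model (for example EGARCH) and then fit a generalised Pareto distribution to the tail of the standardised residuals. The tail fraction and confidence level are illustrative, and this is not the paper's exact specification.

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(returns, cond_vol, alpha=0.01, tail_frac=0.10):
    """Illustrative conditional EVT VaR, assuming a fitted conditional
    volatility series (e.g. from an EGARCH model) is already available.

    returns   : array of (mean-adjusted) returns
    cond_vol  : array of conditional volatilities of the same length
    alpha     : tail probability (0.01 for a 99% VaR)
    tail_frac : fraction of standardised losses treated as the tail
    """
    z = returns / cond_vol                       # standardised residuals
    losses = -z                                  # work with the loss tail
    u = np.quantile(losses, 1 - tail_frac)       # tail threshold
    exceedances = losses[losses > u] - u
    # Fit a generalised Pareto distribution to the tail exceedances.
    xi, _, beta = genpareto.fit(exceedances, floc=0)
    # GPD tail quantile of the standardised loss.
    q = u + (beta / xi) * ((alpha / tail_frac) ** (-xi) - 1)
    # Rescale with the latest conditional volatility (a stand-in for the
    # one-step-ahead volatility forecast).
    return q * cond_vol[-1]
```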
Abstract:
Public values are moving from a research concern to policy discourse and management practice. There are, though, different readings of what public values actually mean. Reflection suggests two distinct strands of thinking: a generative strand that sees public value emerging from processes of public debate; and an institutional interpretation that views public values as the attributes of government producers. Neither perspective seems to offer a persuasive account of how the public gains from strengthened public values. Key propositions on values are generated from comparison of influential texts. A provisional framework is presented of the values base of public institutions and the loosely coupled public propositions flowing from these values. Value propositions issue from different governing contexts, which are grouped into policy frames that then compete with other problem frames for citizens’ cognitive resources. Vital democratic commitments to pluralism require public values to be distributed in competition with other, respected, frames.
Abstract:
The value of knowing about data availability and system accessibility is analyzed through theoretical models of Information Economics. When a user places an inquiry for information, it is important for the user to learn whether the system is not accessible or the data is not available, rather than receiving no response at all. In reality, various outcomes can be provided by the system: nothing will be displayed to the user (e.g., a traffic light that does not operate, a browser that keeps browsing, a telephone that does not answer); a random noise will be displayed (e.g., a traffic light that displays random signals, a browser that provides disorderly results, an automatic voice message that does not clarify the situation); a special signal indicating that the system is not operating (e.g., a blinking amber indicating that the traffic light is down, a browser responding that the site is unavailable, a voice message stating with regret that the service is not available). This article develops a model to assess the value of the information for the user in such situations by employing the information structure model prevailing in Information Economics. Examples related to data accessibility in centralized and in distributed systems are provided for illustration.
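In the information-structure framework the article draws on, the value of such a signal can be computed as the gain in expected payoff from acting on the signal rather than on the prior alone. The sketch below uses entirely hypothetical states, payoffs and signal likelihoods, not the article's model.

```python
import numpy as np

# Toy illustration (hypothetical numbers) of valuing an availability signal.
prior = np.array([0.8, 0.2])                 # P(system up), P(system down)

# Payoff of each action (rows) in each state (columns).
payoff = np.array([[10.0, -5.0],             # action: wait for the system
                   [ 2.0,  2.0]])            # action: switch to a fallback source

# Information structure: rows = states, columns = signals ("responsive", "silent").
likelihood = np.array([[0.95, 0.05],         # signal probabilities when the system is up
                       [0.10, 0.90]])        # signal probabilities when the system is down

# Without the signal: pick the single best action under the prior.
value_without = (payoff @ prior).max()

# With the signal: for each signal, update the prior and pick the best action.
value_with = 0.0
for s in range(likelihood.shape[1]):
    p_signal = likelihood[:, s] @ prior
    posterior = likelihood[:, s] * prior / p_signal
    value_with += p_signal * (payoff @ posterior).max()

print("value of the availability signal:", value_with - value_without)
```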
Abstract:
2000 Mathematics Subject Classification: Primary 26A33; Secondary 47G20, 31B05
Abstract:
MSC 2010: 34A37, 34B15, 26A33, 34C25, 34K37
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
In recent years, the luxury market has entered a period of very modest growth, which has been dubbed the ‘new normal’, where varying tourist flows, currency fluctuations, and shifted consumer tastes dictate the terms. The modern luxury consumer is a fickle mistress. Especially millennials – people born in the 1980s and 1990s – are the embodiment of this new form of demanding luxury consumer with particular tastes and values. Modern consumers, and specifically millennials, want experiences and free time, and are interested in a brand’s societal position and environmental impact. The purpose of this thesis is to investigate the luxury value perceptions of millennials in higher education in Europe, seeing as many of the most prominent luxury goods companies in the world originate from Europe. Perceived luxury value is herein examined from the individual’s perspective. As values and value perceptions are complex constructs, using qualitative research methods is justifiable. The data for this thesis were gathered by means of a group interview. The interview participants all study hospitality management in a private college, and each represents a different nationality. Cultural theories and research on luxury and luxury values provide the scientific foundation for this thesis, and a multidimensional luxury value model is used as a theoretical tool in sorting and analyzing the data. The results show that millennials in Europe value much more than simply modern and hard luxury. Functional, financial, individual, and social aspects are all present in perceived luxury value, but some more in a negative sense than others. Conspicuous, status-seeking consumption is mostly frowned upon, as is the consumption of luxury goods for the sake of satisfying social requisites and peer pressure. Most of the positive value perceptions are attributed to the functional dimension, as luxury products are seen to come with a promise of high quality and reliability, which justifies any price premiums. Ecological and ethical aspects of luxury are already a contemporary trend, but are perceived even more as an important characteristic of luxury in the future. Most importantly, having time is fundamental. Depending on who is asked, luxury can mean anything, just as much as it can mean nothing.