955 results for Mean-value solution


Relevance: 30.00%

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's degree in Finance from the NOVA – School of Business and Economics

Relevance: 30.00%

Abstract:

This work models the competitive behaviour of individuals who maximize their own utility by managing their network of connections with other individuals. Utility is taken as a synonym of reputation in this model. Each agent decides on two variables: the quality of connections and the number of connections. Hence, the reputation of an individual is a function of the number and quality of connections within the network. On the other hand, individuals incur a cost when they improve their network of contacts. The initial quality and number of connections of each individual are distributed according to a given initial distribution. The competition occurs over continuous time and among a continuum of agents. A mean field game approach is adopted to solve the model, leading to an optimal trajectory for the number and quality of connections for each individual.

Relevance: 30.00%

Abstract:

INTRODUCTION: The Montenegro skin test (MST) has good clinical applicability and low cost for the diagnosis of American tegumentary leishmaniasis (ATL). However, no studies have validated the reference value (5 mm) typically used to discriminate positive and negative results. We investigated MST results and evaluated its performance using different cut-off points. METHODS: The results of laboratory tests for 4,256 patients with suspected ATL were analyzed, and 1,182 individuals were found to fulfill the established criteria. Two groups were formed. The positive cutaneous leishmaniasis (PCL) group included patients with skin lesions and positive direct search for parasites (DS) results. The negative cutaneous leishmaniasis (NCL) group included patients with skin lesions with evolution up to 2 months, negative DS results, and negative indirect immunofluorescence assay results who were residents of urban areas that were reported to be probable sites of infection at domiciles and peridomiciles. RESULTS: The PCL and NCL groups included 769 and 413 individuals, respectively. The mean ± standard deviation MST in the PCL group was 12.62 ± 5.91 mm [95% confidence interval (CI): 12.20-13.04], and that in the NCL group was 1.43 ± 2.17 mm (95% CI: 1.23-1.63). Receiver-operating characteristic curve analysis indicated 97.4% sensitivity and 93.9% specificity for a cut-off of 5 mm, and 95.8% sensitivity and 97.1% specificity for a cut-off of 6 mm. CONCLUSIONS: Either 5 mm or 6 mm could be used as the cut-off value for diagnosing ATL, as both values had high sensitivity and specificity.
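The sensitivity/specificity trade-off behind the cut-off comparison can be sketched as follows; the function name and all induration values below are invented for illustration and are not the study data:

```python
# Hypothetical illustration of choosing an MST cut-off: sensitivity is the
# fraction of positive (PCL) cases at or above the cut-off, and specificity
# the fraction of negative (NCL) cases below it. Data are invented.

def sens_spec(positives, negatives, cutoff_mm):
    """Return (sensitivity, specificity) for a given induration cut-off."""
    sens = sum(1 for v in positives if v >= cutoff_mm) / len(positives)
    spec = sum(1 for v in negatives if v < cutoff_mm) / len(negatives)
    return sens, spec

pcl = [12.0, 8.5, 15.0, 4.0, 11.0, 9.0]   # invented PCL indurations (mm)
ncl = [0.0, 1.5, 3.0, 2.0, 5.5, 4.0]      # invented NCL indurations (mm)

for cutoff in (5, 6):
    s, e = sens_spec(pcl, ncl, cutoff)
    print(f"cut-off {cutoff} mm: sensitivity={s:.2f}, specificity={e:.2f}")
```

Scanning all candidate cut-offs this way and plotting sensitivity against 1 − specificity is exactly what the ROC analysis in the study does.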

Relevance: 30.00%

Abstract:

OBJECTIVE: To determine, in arrhythmogenic right ventricular cardiomyopathy, the value of QT interval dispersion for identifying the induction of sustained ventricular tachycardia in the electrophysiological study or the risk of sudden cardiac death. METHODS: We assessed QT interval dispersion in the 12-lead electrocardiograms of 26 patients with arrhythmogenic right ventricular cardiomyopathy and of 16 controls of similar age and sex, and analyzed its association with sustained ventricular tachycardia and sudden cardiac death. RESULTS (mean ± SD): QT interval dispersion: patients = 53.8±14.1 ms; control group = 35.0±10.6 ms, p=0.001. Patients with induction of ventricular tachycardia: 52.5±13.8 ms; without induction of ventricular tachycardia: 57.5±12.8 ms, p=0.420. In a mean follow-up period of 41±11 months, five sudden cardiac deaths occurred. QT interval dispersion in this group was 62.0±17.8 ms, and in the others it was 51.9±12.8 ms, p=0.852. Using a cutoff ≥60 ms to define an increased degree of QT interval dispersion, we were able to identify patients at risk of sudden cardiac death with a sensitivity of 60%, a specificity of 57%, and positive and negative predictive values of 25% and 85%, respectively. CONCLUSION: Patients with arrhythmogenic right ventricular cardiomyopathy have a significantly increased degree of QT interval dispersion when compared with the healthy population. However, it did not identify patients with induction of ventricular tachycardia in the electrophysiological study, showing a very low predictive value for defining the risk of sudden cardiac death in the population studied.
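The reported predictive values follow from sensitivity, specificity and the observed event rate (five sudden deaths among 26 patients) via Bayes' rule; a minimal sketch (the function is illustrative, not from the paper):

```python
# Positive/negative predictive values from sensitivity, specificity and
# prevalence, using Bayes' rule. Prevalence here is 5 sudden deaths / 26.

def predictive_values(sens, spec, prevalence):
    """Return (PPV, NPV) for a test with the given operating characteristics."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

ppv, npv = predictive_values(0.60, 0.57, 5 / 26)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # close to the reported 25% and 85%
```

The low PPV despite a moderate sensitivity is driven by the low event rate, which is why the abstract's cutoff has little value for predicting sudden death.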

Relevance: 30.00%

Abstract:

OBJECTIVE - To assess the diagnostic value, the characteristics, and feasibility of tilt-table testing in children and adolescents. METHODS - From August 1991 to June 1997, we retrospectively assessed 94 patients under the age of 18 years who had a history of recurring syncope and presyncope of unknown origin and who were referred for tilt-table testing. These patients were divided into 2 groups: group I (children) - 36 patients with ages ranging from 3 to 12 (mean of 9.19±2.31) years; group II (adolescents) - 58 patients with ages ranging from 13 to 18 (mean of 16.05±1.40) years. We compared the positivity rate, the type of hemodynamic response, and the time period required for the test to become positive in the 2 groups. RESULTS - The positivity rates were 41.6% and 50% for groups I and II, respectively. The mixed response was the predominant pattern of positive hemodynamic response in both groups. The mean time period required for the test to become positive was shorter in group I (11.0±7.23 min) than in group II (18.44±7.83 min). No patient experienced technical difficulty or complications. CONCLUSION - No difference was observed with regard to feasibility, positivity rate, and pattern of positive response for the tilt-table test in children and adolescents. Pediatric patients had earlier positive responses.

Relevance: 30.00%

Abstract:

Master's dissertation in Bioengineering

Relevance: 30.00%

Abstract:

The idea for this thesis arose from a chain of reactions first set in motion by a particular experience. In keeping with the contemporary need to deconstruct every phenomenon, it seemed important to analyse this experience in the hope of a satisfactory explanation. The experience referred to is the aesthetic experience provoked by works of art. The plan for the thesis involved trying to establish whether the aesthetic experience is unique and individual, or whether it is experienced universally. Each question that arises in the course of this exploration promotes a dialectical reaction. I rely on the history of aesthetics as a philosophical discipline to supply the answers. This study concentrates on the efforts of philosophers and critical theorists to understand the tensions between the empirical and the emotional, the individual and the universal responses to the sociological, political and material conditions that prevail and are expressed through the medium of art. What I found is that the history of aesthetics is full of contradictory evidence and cannot provide a dogmatic solution to the questions posed. In fact, what is indicated is that the mystery that attaches to the aesthetic experience is one that can also apply to the spiritual or transcendent experience. The aim of this thesis is to support the contribution of visual art to the spiritual well-being of human development, and to support the uniqueness of the individual's evaluation and aesthetic judgement of a work of art. I suggest that mystery will continue to be of value in the holistic development of human beings, and that this mystery can be expressed through visual art. Furthermore, this thesis might suggest examining whether a work of art may be redemptive in its effect and offset the current decline in affective religious practice.

Relevance: 30.00%

Abstract:

The main object of the present paper is to give formulas and methods which enable us to determine the minimum number of repetitions or of individuals necessary to guarantee, to some extent, the success of an experiment. The theoretical basis of all the processes is essentially the following. Knowing the frequency p of the desired events and the frequency q of the undesired events, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial (p + q)^n. Determining which of these combinations we want to avoid, we calculate their total frequency and select the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

sum_{k=0}^{m} C(n,k) p^k q^(n-k) <= P_lim ------ (1b)

i.e. the total probability of obtaining fewer than m + 1 individuals of the desired type. There is no absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1, 56) two relative values, one equal to 1/(5n) as the lowest value of probability and the other equal to 1/(10n) as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since the number n is just the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits P_lim = 0.05 (precision 5%), P_lim = 0.01 (precision 1%) and P_lim = 0.001 (precision 0.1%). The binomial formula as explained above (cf. formula 1b), however, is of rather limited applicability owing to the excessive calculation necessary, and we thus have to use approximations as substitutes.

We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution when the expected frequency p has any value between 0.1 and 0.9 and when n is at least greater than ten; b) the Poisson distribution when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the problems stated in the introduction can now be given.

A) What is the minimum number of repetitions necessary in order to avoid that any one of a treatments, varieties, etc. may accidentally be always the best, the best or second best, the first, second and third best, or finally one of the m best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for n:

n = log P_lim / log(m/a) = log P_lim / (log m - log a) ------ (5)

B) What is the minimum number of individuals necessary in order that a certain type, expected with the frequency p, may appear in at least one, two, three, or a = m + 1 individuals?

1) For p between 0.1 and 0.9, using the Gaussian approximation, we have:

b = delta * sqrt((1 - p)/p), c = m/p ------ (7)
n = [(b + sqrt(b^2 + 4c))/2]^2, n' = 1/p, n_cor = n + n' ------ (8)

We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter delta represents in the present case the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09, respectively. If we are interested only in having at least one individual, m becomes equal to zero and the formula reduces to:

n = b^2 = delta^2 (1 - p)/p, n' = 1/p, n_cor = n + n' ------ (9)

2) If p is smaller than 0.1 we may use Table 1 in order to find the mean m of a Poisson distribution, and determine n = m/p.

C) What is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2?

1) When p1 and p2 are values between 0.1 and 0.9 we have:

sqrt(n) = delta * [sqrt(p1(1 - p1)) + sqrt(p2(1 - p2))] / (p1 - p2), n' = 1/(p1 - p2), n_cor = n + n' ------ (13)

We have again to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision:

b = delta * [sqrt(p1(1 - p1)) + sqrt(p2(1 - p2))] / (p1 - p2), c = m/(p1 - p2), n = [(b + sqrt(b^2 + 4c))/2]^2, n' = 1/(p1 - p2) ------ (14)

2) When both p1 and p2 are smaller than 0.1 we determine the quotient p1/p2 and find the corresponding number m2 of a Poisson distribution in Table 2. The value of n is found by the equation n = m2/p2. ------ (15)

D) What is the minimum number necessary for distinguishing three or more frequencies p1 > p2 > p3? If the frequencies lie between 0.1 and 0.9 we have to solve the individual equations and use the highest value of n thus determined:

n_1.2 = {delta * [sqrt(p1(1 - p1)) + sqrt(p2(1 - p2))] / (p1 - p2)}^2 ------ (16)

Delta now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58 and 3.29. No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1, and in such cases extremely high numbers would be required.

E) A process is given which serves to solve two problems of an informatory nature: a) if a special type appears in n individuals with a frequency p(obs), what may be the corresponding ideal value of p(exp); or b) if we study samples of n individuals and expect a certain type with a frequency p(exp), what may be the extreme limits of p(obs) in individual families?

1) If we are dealing with values between 0.1 and 0.9 we may use Table 3. To solve the first question we select the respective horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(exp) by interpolating between columns. To solve the second problem we start with the respective column for p(exp) and find the horizontal line for the given value of n, either directly or by interpolation.

2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions p(exp) and p(obs) into numbers of a Poisson series by multiplication with n. To solve the first problem, we verify in which line the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(exp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0 and the value given by dividing the value of m in the table by n. In the second case we first transform the expectation p(exp) into a value of m and find, in the horizontal line corresponding to m(exp), the extreme values of m, which must then be transformed back into values of p(obs) by dividing by n.

F) Partial and progressive tests may be recommended in all cases where there is a lack of material, or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one takes into consideration essentially the unfavourable variations; smaller numbers may frequently already give satisfactory results. For instance, by definition we know that a frequency p means that we expect one individual in every total of 1/p. If there were no chance variations, this number 1/p would be sufficient, and if there were favourable variations a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above and increase the total until the desired result is obtained, which may well be before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetical experiments with maize.
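The Gaussian and Poisson formulas above approximate the exact binomial tail, which was too laborious to evaluate by hand; today that tail can be computed directly. A minimal sketch (function name and example frequencies are my own, not the paper's):

```python
# Smallest n such that the probability of seeing fewer than `a` individuals
# of a type expected with frequency p stays below the precision limit P_lim,
# i.e. the exact version of formula (1b) solved for n by direct search.
from math import comb

def minimum_n(p, a=1, p_lim=0.05):
    """Smallest n with P(X < a) <= p_lim, where X ~ Binomial(n, p)."""
    n = a
    while sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a)) > p_lim:
        n += 1
    return n

# e.g. at least one recessive from a 3:1 segregation (p = 0.25):
print(minimum_n(0.25))                  # 5% precision
print(minimum_n(0.25, p_lim=0.001))     # 0.1% precision
```

For the at-least-one case this reproduces the familiar rule n >= log(P_lim)/log(1 - p), and for larger `a` it replaces the Gaussian correction terms of formulas (7)-(9).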

Relevance: 30.00%

Abstract:

We propose a new solution concept to address the problem of sharing a surplus among the agents generating it. The problem is formulated in the preferences-endowments space. The solution is defined recursively, incorporating notions of consistency and fairness and relying on properties satisfied by the Shapley value for Transferable Utility (TU) games. We show a solution exists, and call it the Ordinal Shapley value (OSV). We characterize the OSV using the notion of coalitional dividends, and furthermore show it is monotone and anonymous. Finally, similarly to the weighted Shapley value for TU games, we construct a weighted OSV as well.

Relevance: 30.00%

Abstract:

We propose a new solution concept to address the problem of sharing a surplus among the agents generating it. The sharing problem is formulated in the preferences-endowments space. The solution is defined in a recursive manner incorporating notions of consistency and fairness and relying on properties satisfied by the Shapley value for Transferable Utility (TU) games. We show a solution exists, and refer to it as an Ordinal Shapley value (OSV). The OSV associates with each problem an allocation as well as a matrix of concessions "measuring" the gains each agent foregoes in favor of the other agents. We analyze the structure of the concessions, and show they are unique and symmetric. Next we characterize the OSV using the notion of coalitional dividends, and furthermore show it is monotone in an agent's initial endowments and satisfies anonymity. Finally, similarly to the weighted Shapley value for TU games, we construct a weighted OSV as well.
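The construction leans on properties of the classical Shapley value for TU games. As background, a minimal sketch of that benchmark, in which each player's value is her average marginal contribution over all orderings; the toy game's worths are invented for illustration:

```python
# Classical Shapley value of a TU game: average marginal contribution of each
# player over all orderings of the player set.
from itertools import permutations

def shapley(players, v):
    """Shapley value; v maps frozensets of players to coalition worth."""
    phi = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition = coalition | {i}
    return {i: phi[i] / len(orders) for i in players}

# Toy symmetric 3-player surplus-sharing game (worths are invented).
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
     frozenset({1, 2, 3}): 9}
print(shapley([1, 2, 3], v))  # symmetric game: each player gets 3.0
```

The OSV extends this idea to ordinal, non-transferable-utility settings, which is why it cannot simply average money-valued marginal contributions as above.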

Relevance: 30.00%

Abstract:

PURPOSE: To determine the value of applying finger trap distraction during direct MR arthrography of the wrist to assess intrinsic ligament and triangular fibrocartilage complex (TFCC) tears. MATERIALS AND METHODS: Twenty consecutive patients were prospectively investigated by three-compartment wrist MR arthrography. Imaging was performed with 3-T scanners using a three-dimensional isotropic (0.4 mm) T1-weighted gradient-recalled echo sequence, with and without finger trap distraction (4 kg). In a blind and independent fashion, two musculoskeletal radiologists measured the width of the scapholunate (SL), lunotriquetral (LT) and ulna-TFC (UTFC) joint spaces. They evaluated the amount of contrast medium within these spaces using a four-point scale, and assessed SL, LT and TFCC tears, as well as the disruption of Gilula's carpal arcs. RESULTS: With finger trap distraction, both readers found a significant increase in width of the SL space (mean Δ = +0.1mm, p ≤ 0.040), and noticed more contrast medium therein (p ≤ 0.035). In contrast, the differences in width of the LT (mean Δ = +0.1 mm, p ≥ 0.057) and UTFC (mean Δ = 0mm, p ≥ 0.728) spaces, as well as the amount of contrast material within these spaces were not statistically significant (p = 0.607 and ≥ 0.157, respectively). Both readers detected more SL (Δ = +1, p = 0.157) and LT (Δ = +2, p = 0.223) tears, although statistical significance was not reached, and Gilula's carpal arcs were more frequently disrupted during finger trap distraction (Δ = +5, p = 0.025). CONCLUSION: The application of finger trap distraction during direct wrist MR arthrography may enhance both detection and characterisation of SL and LT ligament tears by widening the SL space and increasing the amount of contrast within the SL and LT joint spaces.

Relevance: 30.00%

Abstract:

In this paper we assume that for some commodities individuals may wish to adjust their levels of consumption from their normal Marshallian levels so as to match the consumption levels of a group of other individuals, in order to signal that they conform to the consumption norms of that group. Unlike Veblen's concept of conspicuous consumption, this can mean that some individuals may reduce their consumption of the relevant commodities. We model this as a three-stage game in which individuals first decide whether or not they wish to adhere to a norm, then decide which norm they wish to adhere to, and finally decide their actual consumption. We present a number of examples of the resulting equilibria, and then discuss the potential policy implications of this model.

Relevance: 30.00%

Abstract:

As part of a project to use the long-lived (T1/2 = 1200 a) 166mHo as a reference source in its reference ionisation chamber, IRA standardised a commercially acquired solution of this nuclide using the 4πβ-γ coincidence and 4πγ (NaI) methods. The 166mHo solution supplied by Isotope Product Laboratories was measured to have about 5% europium impurities (3% 154Eu, 0.94% 152Eu and 0.9% 155Eu). Holmium therefore had to be separated from europium, and this was carried out by means of ion-exchange chromatography. The holmium fractions were collected without europium contamination: 162-h-long HPGe gamma measurements indicated no europium impurity (detection limits of 0.01% for 152Eu and 154Eu, and 0.03% for 155Eu). The primary measurement of the purified 166mHo solution with the 4π (PC) β-γ coincidence technique was carried out at three gamma energy settings: a window around the 184.4 keV peak and gamma thresholds at 121.8 and 637.3 keV. The results show very good self-consistency, and the activity concentration of the solution was evaluated to be 45.640 ± 0.098 kBq/g (0.21% with k = 1). The activity concentration of this solution was also measured by integral counting with a well-type 5″ × 5″ NaI(Tl) detector and efficiencies computed by Monte Carlo simulations using the GEANT code. These measurements were mutually consistent, and the resulting weighted average of the 4π NaI(Tl) method agreed within 0.15% with the result of the 4πβ-γ coincidence technique. An ampoule of this solution and the measured value of the concentration were submitted to the BIPM as a contribution to the Système International de Référence.
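Results from independent methods are conventionally combined as an inverse-variance weighted mean; a minimal sketch, using the quoted coincidence result and an invented NaI(Tl) value chosen only to be consistent with the stated 0.15% agreement:

```python
# Inverse-variance weighted mean of independent measurements of the same
# quantity; each value is weighted by 1/u^2. The second value is invented.
from math import sqrt

def weighted_mean(values, uncertainties):
    """Return (weighted mean, standard uncertainty of the mean)."""
    weights = [1 / u**2 for u in uncertainties]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1 / sqrt(sum(weights))

# kBq/g: 4πβ-γ coincidence result from the text; NaI value is hypothetical.
mean, u = weighted_mean([45.640, 45.70], [0.098, 0.15])
print(f"{mean:.3f} +/- {u:.3f} kBq/g")
```

The combined uncertainty is smaller than either input, which is the point of averaging the two techniques before submission to a reference system.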

Relevance: 30.00%

Abstract:

Purpose: To evaluate the feasibility, determine the optimal b-value, and assess the utility of 3-T diffusion-weighted MR imaging (DWI) of the spine in differentiating benign from pathologic vertebral compression fractures. Methods and Materials: Twenty patients with 38 vertebral compression fractures (24 benign, 14 pathologic) and 20 controls (total: 23 men, 17 women, mean age 56.2 years) were included from December 2010 to May 2011 in this IRB-approved prospective study. MR imaging of the spine was performed on a 3-T unit with T1-w, fat-suppressed T2-w, gadolinium-enhanced fat-suppressed T1-w and zoomed-EPI (2D RF excitation pulse combined with reduced field-of-view single-shot echo-planar readout) diffusion-w (b-values: 0, 300, 500 and 700 s/mm^2) sequences. Two radiologists independently assessed zoomed-EPI image quality in random order using a 4-point scale: 1 = excellent to 4 = poor. They subsequently measured apparent diffusion coefficients (ADCs) in normal vertebral bodies and compression fractures, in consensus. Results: Lower b-values correlated with better image quality scores, with significant differences between b = 300 (mean ± SD = 2.6 ± 0.8), b = 500 (3.0 ± 0.7) and b = 700 (3.6 ± 0.6) (all p < 0.001). Mean ADCs of normal vertebral bodies (n = 162) were 0.23, 0.17 and 0.11 × 10^-3 mm^2/s with b = 300, 500 and 700 s/mm^2, respectively. In contrast, mean ADCs were 0.89, 0.70 and 0.59 × 10^-3 mm^2/s for benign vertebral compression fractures and 0.79, 0.66 and 0.51 × 10^-3 mm^2/s for pathologic fractures with b = 300, 500 and 700 s/mm^2, respectively. No significant difference was found between ADCs of benign and pathologic fractures. Conclusion: 3-T DWI of the spine is feasible and lower b-values (300 s/mm^2) are recommended. However, our preliminary results show no advantage of DWI in differentiating benign from pathologic vertebral compression fractures.
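ADC values like those quoted are estimated from the mono-exponential DWI model S(b) = S0 · exp(−b · ADC); a minimal two-point sketch with invented signal intensities (not the study's measurements):

```python
# Two-point ADC estimate from the mono-exponential diffusion model
# S(b) = S0 * exp(-b * ADC), with b in s/mm^2 and ADC in mm^2/s.
from math import log

def adc(s0, sb, b):
    """Apparent diffusion coefficient from signals at b = 0 and b > 0."""
    return log(s0 / sb) / b

# Invented signals giving an ADC near the benign-fracture range at b = 300.
print(f"{adc(1000.0, 763.0, 300):.2e} mm^2/s")
```

With several b-values, as in the study, ADC is instead obtained from a log-linear fit over all measured signals, which is less sensitive to noise in any single acquisition.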

Relevance: 30.00%

Abstract:

Attrition in longitudinal studies can lead to biased results. The study is motivated by the unexpected observation that alcohol consumption decreased despite increased availability, which may be due to sample attrition of heavy drinkers. Several imputation methods have been proposed, but rarely compared in longitudinal studies of alcohol consumption. The imputation of consumption level measurements is computationally particularly challenging due to alcohol consumption being a semi-continuous variable (dichotomous drinking status and continuous volume among drinkers), and the non-normality of data in the continuous part. Data come from a longitudinal study in Denmark with four waves (2003-2006) and 1771 individuals at baseline. Five techniques for missing data are compared: Last value carried forward (LVCF) was used as a single, and Hotdeck, Heckman modelling, multivariate imputation by chained equations (MICE), and a Bayesian approach as multiple imputation methods. Predictive mean matching was used to account for non-normality, where instead of imputing regression estimates, "real" observed values from similar cases are imputed. Methods were also compared by means of a simulated dataset. The simulation showed that the Bayesian approach yielded the most unbiased estimates for imputation. The finding of no increase in consumption levels despite a higher availability remained unaltered. Copyright (C) 2011 John Wiley & Sons, Ltd.
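Two of the compared techniques are simple enough to sketch directly; the data and the single-donor matching rule below are illustrative simplifications (MICE-style predictive mean matching normally draws randomly from several closest donors):

```python
# Sketch of two imputation ideas on invented data: last value carried forward
# (LVCF), and predictive mean matching (PMM), where a missing case borrows the
# observed value of the donor whose model prediction is closest.

def lvcf(series):
    """Carry the last observed value forward over None gaps."""
    out, last = [], None
    for v in series:
        last = v if v is not None else last
        out.append(last)
    return out

def pmm(predicted, observed):
    """Impute with the observed value whose prediction is closest (1 donor).

    `observed` is a list of (prediction, observed_value) pairs for complete
    cases; a real observed value is returned, preserving non-normality.
    """
    donors = [(abs(p - predicted), y) for p, y in observed]
    return min(donors)[1]

print(lvcf([3.0, None, None, 5.0]))        # -> [3.0, 3.0, 3.0, 5.0]
print(pmm(4.2, [(3.9, 4.0), (6.1, 7.0)]))  # donor predicted 3.9 -> imputes 4.0
```

Because PMM only ever imputes values that were actually observed, it handles the semi-continuous, non-normal consumption data better than imputing raw regression estimates, which is why the study combines it with the multiple-imputation methods.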