Abstract:
The classical central limit theorem states the uniform convergence of the distribution functions of the standardized sums of independent and identically distributed square-integrable real-valued random variables to the standard normal distribution function. While the first versions of the central limit theorem go back to de Moivre (1730) and Laplace (1812), a systematic study of this topic started at the beginning of the last century with the fundamental work of Lyapunov (1900, 1901). By now, extensions of the central limit theorem are available for a multitude of settings. These include, e.g., Banach-space-valued random variables as well as substantial relaxations of the assumptions of independence and identical distributions. Furthermore, explicit error bounds have been established, and asymptotic expansions are employed to obtain better approximations. Classical error estimates like the famous bound of Berry and Esseen are stated in terms of absolute moments of the random summands and therefore do not reflect a potential closeness of the distributions of the single random summands to a normal distribution. Non-classical approaches take this issue into account by providing error estimates based on, e.g., pseudomoments. The latter field of investigation was initiated by the work of Zolotarev in the 1960s and is still in its infancy compared to the development of the classical theory. For example, non-classical error bounds for asymptotic expansions seem not to be available up to now ...
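For reference, the two classical results this abstract alludes to can be stated as follows; the notation is the standard one and is our addition, not quoted from the thesis:

```latex
% Classical CLT: X_1, X_2, ... i.i.d. real-valued, E[X_1] = mu,
% 0 < Var(X_1) = sigma^2 < infinity, S_n = X_1 + ... + X_n,
% Phi = standard normal distribution function.
\[
  \sup_{x \in \mathbb{R}}
  \left| \mathbb{P}\!\left( \frac{S_n - n\mu}{\sigma\sqrt{n}} \le x \right) - \Phi(x) \right|
  \longrightarrow 0 \qquad (n \to \infty).
\]
% Berry-Esseen bound: if additionally beta := E|X_1 - mu|^3 < infinity,
% then for an absolute constant C > 0,
\[
  \sup_{x \in \mathbb{R}}
  \left| \mathbb{P}\!\left( \frac{S_n - n\mu}{\sigma\sqrt{n}} \le x \right) - \Phi(x) \right|
  \le \frac{C\,\beta}{\sigma^{3}\sqrt{n}} .
\]
```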
Abstract:
Atrial fibrillation (AF) is the most common cardiac arrhythmia and is associated with an unfavorable prognosis, increasing the risk of stroke and death. Although traditionally associated with cardiovascular diseases, there is increasing evidence of a high incidence of AF in patients with highly prevalent noncardiovascular diseases, such as cancer, sepsis, chronic obstructive pulmonary disease, obstructive sleep apnea and chronic kidney disease. A considerable number of patients is therefore affected by these comorbidities, with an increased risk of adverse outcomes. The authors performed a systematic review of the literature aiming to better elucidate the interaction between these conditions. Several mechanisms seem to contribute to the concomitant presence of AF and noncardiovascular diseases. Comorbidities, advanced age, autonomic dysfunction, electrolyte disturbances and inflammation are common to these conditions and may predispose to AF. The treatment of AF in these patients represents a clinical challenge, especially in terms of antithrombotic therapy, since the scores for stratification of thromboembolic risk, such as the CHADS2 and CHA2DS2-VASc scores, and the scores for hemorrhagic risk, such as the HAS-BLED score, have limitations when applied in these conditions. The evidence in this area is still scarce, and further investigations to elucidate aspects such as the epidemiology, pathogenesis, prevention and treatment of AF in noncardiovascular diseases are needed.
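The thromboembolic-risk stratification mentioned above rests on simple additive scores; below is a minimal sketch of the CHA2DS2-VASc tally (function and parameter names are ours, for illustration only, not from the review):

```python
def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 stroke_or_tia: bool, vascular_disease: bool, female: bool) -> int:
    """Additive CHA2DS2-VASc thromboembolic-risk score (range 0-9)."""
    score = 0
    score += 1 if chf else 0                              # C: congestive heart failure
    score += 1 if hypertension else 0                     # H: hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # A2/A: age bands
    score += 1 if diabetes else 0                         # D: diabetes mellitus
    score += 2 if stroke_or_tia else 0                    # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0                 # V: vascular disease
    score += 1 if female else 0                           # Sc: sex category (female)
    return score

# Example: a 78-year-old hypertensive woman with no other risk factors scores 4.
print(cha2ds2_vasc(False, True, 78, False, False, False, True))
```

The review's point stands regardless of the arithmetic: in noncardiovascular disease, the inputs themselves may fail to capture the true risk.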
Abstract:
Background: The GRACE risk score (GS) is a scoring system with prognostic significance in patients with non-ST-segment elevation myocardial infarction (non-STEMI). Objective: The present study aimed to determine whether end-systolic or end-diastolic epicardial fat thickness (EFT) is more closely associated with high-risk non-STEMI patients according to the GS. Methods: We evaluated 207 patients with non-STEMI from October 2012 to February 2013, 162 of whom were included in the study (115 males, mean age: 66.6 ± 12.8 years). End-systolic and end-diastolic EFTs were measured echocardiographically. Patients with a high in-hospital GS were categorized as the H-GS group (in-hospital GS > 140), while the remaining patients were categorized as the low-to-moderate risk group (LM-GS). Results: Systolic and diastolic blood pressures of H-GS patients were lower than those of LM-GS patients, and the average heart rate was higher in the H-GS group. End-systolic and end-diastolic EFTs were significantly higher in the H-GS group. Echocardiographic assessment showed significantly decreased ejection fractions of both the right and left ventricles in the H-GS group. The highest correlation was found between GS and end-diastolic EFT (r = 0.438). Conclusion: End-systolic and end-diastolic EFTs were increased in the H-GS group; however, end-diastolic EFT correlated better with GS than end-systolic EFT did.
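The conclusion rests on comparing two Pearson correlations against the same score; a toy sketch of that computation on synthetic data (the arrays are illustrative, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(42)
gs = rng.uniform(60, 220, 162)                        # illustrative GRACE scores
eft_dia = 4.0 + 0.010 * gs + rng.normal(0, 0.5, 162)  # end-diastolic EFT (mm), synthetic
eft_sys = 5.0 + 0.006 * gs + rng.normal(0, 0.6, 162)  # end-systolic EFT (mm), synthetic

r_dia = np.corrcoef(gs, eft_dia)[0, 1]  # Pearson r, GS vs end-diastolic EFT
r_sys = np.corrcoef(gs, eft_sys)[0, 1]  # Pearson r, GS vs end-systolic EFT
print(f"r(GS, diastolic EFT) = {r_dia:.3f}, r(GS, systolic EFT) = {r_sys:.3f}")
```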
Abstract:
Background: B-type natriuretic peptide (BNP) has been extensively evaluated for short- and intermediate-term prognosis in patients with acute coronary syndrome, but its role in long-term mortality is not known. Objective: To determine the very long-term prognostic role of BNP for all-cause mortality in patients with non-ST-segment elevation acute coronary syndrome (NSTEACS). Methods: A cohort of 224 consecutive patients with NSTEACS, prospectively seen in the Emergency Department, had BNP measured on arrival to establish prognosis, and underwent a median 9.34-year follow-up for all-cause mortality. Results: Unstable angina was diagnosed in 52.2% and non-ST-segment elevation myocardial infarction in 47.8%. Median admission BNP was 81.9 pg/mL (interquartile range: 22.2-225), and mortality rates rose across increasing BNP quartiles: 14.3%, 16.1%, 48.2%, and 73.2% (p < 0.0001). The ROC curve disclosed 100 pg/mL as the best BNP cut-off value for mortality prediction (area under the curve = 0.789, 95% CI = 0.723-0.854), a strong predictor of late mortality: BNP < 100 = 17.3% vs. BNP ≥ 100 = 65.0%, RR = 3.76 (95% CI = 2.49-5.63, p < 0.001). On logistic regression analysis, age > 72 years (OR = 3.79, 95% CI = 1.62-8.86, p = 0.002), BNP ≥ 100 pg/mL (OR = 6.24, 95% CI = 2.95-13.23, p < 0.001) and estimated glomerular filtration rate (OR = 0.98, 95% CI = 0.97-0.99, p = 0.049) were independent predictors of late mortality. Conclusions: BNP measured at hospital admission in patients with NSTEACS is a strong, independent predictor of very long-term all-cause mortality. This study supports the hypothesis that BNP should be measured in all patients with NSTEACS at the index event for long-term risk stratification.
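Selecting the "best cut-off value" on an ROC curve, as described above, is commonly done by maximizing Youden's J; a minimal sketch with synthetic data (values and names are illustrative, not the study's):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic, illustrative BNP values: log-normal, higher on average in non-survivors.
bnp = np.concatenate([rng.lognormal(3.5, 1.0, 150), rng.lognormal(5.5, 1.0, 74)])
died = np.concatenate([np.zeros(150, dtype=int), np.ones(74, dtype=int)])

fpr, tpr, thresholds = roc_curve(died, bnp)
auc = roc_auc_score(died, bnp)
cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.3f}, best cut-off ≈ {cutoff:.1f} pg/mL")
```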
When is the Best Time for the Second Antiplatelet Agent in Non-ST Elevation Acute Coronary Syndrome?
Abstract:
Dual antiplatelet therapy is a well-established treatment in patients with non-ST elevation acute coronary syndrome (NSTE-ACS), carrying a class I recommendation (level of evidence A) in current national and international guidelines. Nonetheless, these guidelines are neither precise nor consensual regarding the best time to start the second antiplatelet agent. The evidence is conflicting, and after more than a decade of clopidogrel use in this scenario, the benefit of routine pretreatment with dual antiplatelet therapy, i.e., before the coronary anatomy is known, remains uncertain. The recommendation for upfront treatment with clopidogrel in NSTE-ACS is based on the reduction of non-fatal events in studies that used a conservative strategy with eventual invasive stratification many days after the acute event. This approach differs from the current management of these patients, given the established benefits of an early invasive strategy, especially in moderate- to high-risk patients. The only randomized study to date that specifically tested pretreatment in NSTE-ACS in the context of an early invasive strategy used prasugrel, and it showed no benefit of pretreatment in reducing ischemic events; on the contrary, pretreatment increased the risk of bleeding events. This study has brought pretreatment back into discussion and has led to changes in recent guidelines of the American and European cardiology societies. In this paper, the authors review the main evidence on pretreatment with dual antiplatelet therapy in NSTE-ACS.
Abstract:
Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2012
Abstract:
Magdeburg, Univ., Faculty of Process and Systems Engineering, Diss., 2009
Abstract:
Background: Recent studies have shown changes in the cardiac autonomic control of obese preadolescents. Objective: To assess the heart rate responses and cardiac autonomic modulation of obese preadolescents during a constant expiratory effort. Methods: This study assessed 10 obese and 10 non-obese preadolescents aged 9 to 12 years. The body mass index of the obese group was between the 95th and 97th percentiles of the CDC National Center for Health Statistics growth charts, while that of the non-obese group was between the 5th and 85th percentiles. Participants first underwent anthropometric and clinical assessment, and their maximum expiratory pressures were obtained. The preadolescents then performed a constant expiratory effort at 70% of their maximum expiratory pressure for 20 seconds, with heart rate recorded 5 minutes before, during, and 5 minutes after the effort. Heart rate variability (HRV) and heart rate values were analyzed with software. Results: HRV did not differ between before and after the constant expiratory effort, either within or between groups. Heart rate values differed (p < 0.05) during the effort, with a total variation of 18.5 ± 1.5 bpm in non-obese preadolescents and 12.2 ± 1.3 bpm in obese ones. Conclusion: Cardiac autonomic modulation did not differ between the groups before and after the constant expiratory effort. However, the obese group showed a lower cardiovascular response to baroreceptor stimuli during the effort, suggesting lower autonomic baroreflex sensitivity.
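The HRV analysis mentioned in the Methods is typically summarized by time-domain indices; here is a minimal sketch of two standard ones, SDNN and RMSSD, computed from RR intervals (the study used dedicated software; this code is only an illustration):

```python
import numpy as np

def hrv_time_domain(rr_ms: np.ndarray) -> dict:
    """SDNN and RMSSD (both in ms) from a series of RR intervals in milliseconds."""
    diffs = np.diff(rr_ms)
    return {
        "SDNN": float(np.std(rr_ms, ddof=1)),          # overall variability
        "RMSSD": float(np.sqrt(np.mean(diffs ** 2))),  # beat-to-beat variability
    }

# Example: RR intervals around 750 ms (about 80 bpm) with small fluctuations.
rr = 750 + 20 * np.random.default_rng(1).standard_normal(300)
print(hrv_time_domain(rr))
```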
Abstract:
Magdeburg, Univ., Faculty of Electrical Engineering and Information Technology, Diss., 2010
Abstract:
Magdeburg, Univ., Faculty of Process and Systems Engineering, Diss., 2010
Abstract:
Magdeburg, Univ., Faculty of Natural Sciences, Diss., 2010
Abstract:
Magdeburg, Univ., Faculty of Natural Sciences, Diss., 2011
Abstract:
The main object of the present paper is to give formulas and methods that enable us to determine the minimum number of repetitions or of individuals necessary to guarantee, to some extent, the success of an experiment. The theoretical basis of all the procedures is essentially the following. Knowing the frequency p of the desired events and q of the undesired events, we may calculate the frequency of all possible combinations to be expected in n repetitions by expanding the binomial $(p+q)^n$. Determining which of these combinations we want to avoid, we calculate their total frequency, choosing the exponent n of the binomial in such a way that this total frequency is equal to or smaller than the accepted limit of precision:

$$n!\,p^{n}\left[\frac{1}{n!}\left(\frac{q}{p}\right)^{n}+\frac{1}{1!\,(n-1)!}\left(\frac{q}{p}\right)^{n-1}+\frac{1}{2!\,(n-2)!}\left(\frac{q}{p}\right)^{n-2}+\frac{1}{3!\,(n-3)!}\left(\frac{q}{p}\right)^{n-3}+\cdots\right]\le P_{\lim}\qquad(1b)$$

There is no absolute limit of precision, since its value depends not only upon psychological factors in our judgement but is at the same time a function of the number of repetitions. For this reason I have proposed (1, 56) two relative values, one equal to 1 : 5n as the lowest value of probability and the other equal to 1 : 10n as the highest value of improbability, leaving between them what may be called the "region of doubt". However, these formulas cannot be applied in our case, since the number n is precisely the unknown quantity. Thus we have to use, instead of the more exact values of these two formulas, the conventional limits $P_{\lim} = 0.05$ (precision 5%), $P_{\lim} = 0.01$ (precision 1%) and $P_{\lim} = 0.001$ (precision 0.1%). The binomial formula explained above (cf. formula 1, p. 85), however, is of rather limited applicability owing to the excessive calculation required, and we thus have to resort to approximations as substitutes. We may use, without loss of precision, the following approximations: a) the normal or Gaussian distribution, when the expected frequency p has any value between 0.1 and 0.9 and n is greater than ten; b) the Poisson distribution, when the expected frequency p is smaller than 0.1. Tables V to VII show for some special cases that these approximations are very satisfactory. The practical solution of the problems stated in the introduction can now be given.

A) What is the minimum number of repetitions necessary to avoid that any one of a treatments, varieties, etc. may accidentally be always the best, or always among the best and second best, or among the first, second and third best, or finally among the m best treatments, varieties, etc.? Using the first term of the binomial, we have the following equation for n:

$$n=\frac{\log P_{\lim}}{\log (m:a)}=\frac{\log P_{\lim}}{\log m-\log a}\qquad(5)$$

B) What is the minimum number of individuals necessary in order that a certain type, expected with frequency p, may appear in at least one, two, three, or a = m + 1 individuals?

1) For p between 0.1 and 0.9, using the Gaussian approximation, we start from the condition $np-\delta\sqrt{np(1-p)}=a-1=m$ and obtain

$$b=\delta\sqrt{\frac{1-p}{p}},\qquad c=\frac{m}{p}\qquad(7)$$

$$\sqrt{n}=\frac{b+\sqrt{b^{2}+4c}}{2},\qquad n'=\frac{1}{p},\qquad n_{cor}=n+n'\qquad(8)$$

We have to use the correction n' when p has a value between 0.25 and 0.75. The Greek letter δ represents here the unilateral limits of the Gaussian distribution for the three conventional limits of precision: 1.64, 2.33 and 3.09, respectively.

When we are interested only in having at least one individual, so that a = 1 and m becomes equal to zero, the formula reduces to

$$c=0,\qquad\sqrt{n}=b,\qquad n=b^{2}=\delta^{2}\,\frac{1-p}{p},\qquad n'=\frac{1}{p},\qquad n_{cor}=n+n'\qquad(9)$$

2) If p is smaller than 0.1, we may use Table 1 to find the mean m of a Poisson distribution and determine $n = m : p$.

C) What is the minimum number of individuals necessary for distinguishing two frequencies p1 and p2?

1) When p1 and p2 are values between 0.1 and 0.9 we have:

$$n=\left\{\frac{\delta\left[\sqrt{p_{1}(1-p_{1})}+\sqrt{p_{2}(1-p_{2})}\right]}{p_{1}-p_{2}}\right\}^{2},\qquad n'=\frac{1}{p_{1}-p_{2}},\qquad n_{cor}=n+n'\qquad(13)$$

We again have to use the unilateral limits of the Gaussian distribution. The correction n' should be used if at least one of the values p1 or p2 lies between 0.25 and 0.75. A more complicated formula may be used in cases where we want to increase the precision, starting from the condition $n(p_{1}-p_{2})-\delta\sqrt{np_{1}(1-p_{1})}-\delta\sqrt{np_{2}(1-p_{2})}=m$:

$$b=\frac{\delta\left[\sqrt{p_{1}(1-p_{1})}+\sqrt{p_{2}(1-p_{2})}\right]}{p_{1}-p_{2}},\qquad c=\frac{m}{p_{1}-p_{2}},\qquad\sqrt{n}=\frac{b+\sqrt{b^{2}+4c}}{2},\qquad n'=\frac{1}{p_{1}-p_{2}}\qquad(14)$$

2) When both p1 and p2 are smaller than 0.1, we determine the quotient p1 : p2 and look up the corresponding number m2 of a Poisson distribution in Table 2. The value of n is found from the equation

$$n=m_{2}:p_{2}\qquad(15)$$

D) What is the minimum number necessary for distinguishing three or more frequencies p1, p2, p3?

1) If the frequencies p1, p2, p3 are values between 0.1 and 0.9, we have to solve the individual equations for each pair and use the highest value of n thus determined:

$$n_{1.2}=\left\{\frac{\delta\left[\sqrt{p_{1}(1-p_{1})}+\sqrt{p_{2}(1-p_{2})}\right]}{p_{1}-p_{2}}\right\}^{2}\qquad(16)$$

and correspondingly for the other pairs. δ now represents the bilateral limits of the Gaussian distribution: 1.96, 2.58 and 3.29.

2) No table was prepared for the relatively rare cases of a comparison of three or more frequencies below 0.1; in such cases extremely high numbers would be required.

E) A procedure is given which serves to solve two problems of an informative nature: a) if a special type appears among n individuals with a frequency p(obs), what may be the corresponding ideal value of p(exp); or b) if we study samples of n individuals and expect a certain type with a frequency p(exp), what may be the extreme limits of p(obs) in individual families?

1) If we are dealing with values between 0.1 and 0.9 we may use Table 3. To solve the first question, we select the horizontal line for p(obs), determine which column corresponds to our value of n, and find the respective value of p(exp) by interpolating between columns. To solve the second problem, we start from the column for p(exp) and find the horizontal line for the given value of n, either directly or by approximation and interpolation.

2) For frequencies smaller than 0.1 we have to use Table 4 and transform the fractions p(exp) and p(obs) into numbers of the Poisson series by multiplying by n. To solve the first problem, we find the line in which the lower Poisson limit is equal to m(obs) and transform the corresponding value of m into the frequency p(exp) by dividing by n. The observed frequency may thus be a chance deviate of any value between 0.0... and the value obtained by dividing the tabulated m by n. In the second case, we first transform the expectation p(exp) into a value of m and obtain, in the horizontal line corresponding to m(exp), the extreme values of m, which must then be transformed back into values of p(obs) by dividing by n.

F) Partial and progressive tests may be recommended in all cases where there is a lack of material, or where the loss of time is less important than the cost of large-scale experiments, since in many cases the minimum number necessary to guarantee the results within the limits of precision is rather large. One should not forget that the minimum number really represents at the same time a maximum number, necessary only if one essentially takes the unfavorable variations into consideration; smaller numbers may frequently already give satisfactory results. For instance, by definition, we know that a frequency p means that we expect one individual of the desired type in every 1 : p individuals. If there were no chance variations, this number 1 : p would be sufficient, and if there were favorable variations, a still smaller number might yield one individual of the desired type. Thus, trusting to luck, one may start the experiment with numbers smaller than the minimum calculated according to the formulas given above and increase the total until the desired result is obtained, which may well happen before the "minimum number" is reached. Some concrete examples of this partial or progressive procedure are given from our genetic experiments with maize.
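As an illustration of how these formulas are used in practice, here is a minimal sketch of problems A and B.1 (formulas 5, 7 and 8 as reconstructed above); the function names and the dictionary of δ values are ours, and the code assumes the Gaussian-approximation reading given above:

```python
import math

# One-sided Gaussian limits for the three conventional limits of precision.
DELTA_ONE_SIDED = {0.05: 1.64, 0.01: 2.33, 0.001: 3.09}

def min_repetitions(a: int, m: int, p_lim: float = 0.05) -> int:
    """Problem A (formula 5): minimum repetitions so that one of `a` treatments
    is not accidentally always among the `m` best, at precision limit `p_lim`."""
    return math.ceil(math.log(p_lim) / (math.log(m) - math.log(a)))

def min_individuals(p: float, m: int, p_lim: float = 0.05) -> int:
    """Problem B.1 (formulas 7-8): minimum n so that a type of frequency `p`
    appears in at least m + 1 individuals (Gaussian approximation, 0.1 < p < 0.9)."""
    delta = DELTA_ONE_SIDED[p_lim]
    b = delta * math.sqrt((1 - p) / p)
    c = m / p
    n = ((b + math.sqrt(b * b + 4 * c)) / 2) ** 2
    if 0.25 <= p <= 0.75:
        n += 1 / p  # correction n' = 1/p in the middle range
    return math.ceil(n)

# Examples: 4 treatments, avoid one being always among the 2 best by chance;
# and at least one recessive individual (p = 0.25) at the 5% limit of precision.
print(min_repetitions(a=4, m=2, p_lim=0.05))
print(min_individuals(p=0.25, m=0, p_lim=0.05))
```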
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2012
Abstract:
Magdeburg, Univ., Faculty of , Diss., 2012