923 results for Mean Value Theorem


Relevância: 30.00%

Resumo:

Energy efficiency has become an important research topic in intralogistics. In this field, the focus is placed especially on automated storage and retrieval systems (AS/RS) with stacker cranes, as these systems are widespread and account for a significant portion of the total energy demand of intralogistics systems. Numerical simulation models have been developed to calculate the energy demand rather precisely for discrete single and dual command cycles. Unfortunately, these simulation models are not suitable for fast calculations of the mean energy demand of a complete storage aisle. For this purpose, analytical approaches would be more convenient, but so far analytical approaches deliver results only for certain configurations. In particular, for the commonly used stacker cranes equipped with an intermediate circuit connection within their drive configuration, no analytical approach is available to calculate the mean energy demand. This article addresses this research gap and presents a calculation approach that enables planners to quickly estimate the energy demand of these systems.
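To illustrate what "mean energy demand of a complete storage aisle" means, here is a minimal sketch that averages a per-cycle energy figure over uniformly distributed rack positions. The kinematic/energy model, masses, and coefficients are assumptions for illustration only; the abstract does not give the article's analytical formulas.

```python
import numpy as np

# Hypothetical, simplified energy model for a single command cycle of a
# stacker crane. This is NOT the article's analytical approach, only an
# illustration of averaging energy demand over all locations of an aisle.

M = 5000.0   # moved mass in kg (assumed)
G = 9.81     # gravitational acceleration in m/s^2
MU = 0.01    # rolling-resistance coefficient of the travel axis (assumed)
ETA = 0.85   # overall drive efficiency (assumed)

def cycle_energy(x, y):
    """Energy in joules for one single command cycle to location (x, y)."""
    e_travel = MU * M * G * x / ETA   # friction work of the travel drive
    e_hoist = M * G * y / ETA         # lifting work; lowering assumed loss-free
    return 2.0 * e_travel + e_hoist   # travel out and back, lift once

# Mean over a grid of storage locations, assumed uniformly used.
xs = np.linspace(0.0, 100.0, 50)   # rack length in m (assumed)
ys = np.linspace(0.0, 20.0, 20)    # rack height in m (assumed)
energies = [cycle_energy(x, y) for x in xs for y in ys]
print(f"mean energy per single command cycle: {np.mean(energies) / 1e3:.1f} kJ")
```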

Relevância: 30.00%

Resumo:

One non-bt corn hybrid (Pioneer 3489) and three bt-corn hybrids (Pioneer 34RO7, Novartis NX6236, and Novartis N64-Z4) were planted in replicated 7.1-acre fields. After grain harvest, fields were stocked with 3 mature cows in midgestation to be strip-grazed as four paddocks over 126 days. Six similar cows were allotted to replicated drylots. All cows were fed hay as necessary to maintain a condition score of 5 on a 9-point scale. Cows were condition-scored biweekly and weighed monthly. Forage yield and weathering losses were determined by sampling one 4-m² location per grazed or ungrazed paddock in each field, with a minimum total of 2 locations of grazed or ungrazed forage per field. To measure forage selection during grazing, samples of grazed forage were collected from the rumen of one fistulated steer that grazed for 2 hours after ruminal evacuation. The non-bt corn hybrid had greater (P<.05) infestation of corn borers in the upper stalk, lower stalk, and ear shank than the bt-corn hybrids. However, there were no differences in grain yields or dropped grain between hybrids. Crop residue dry matter, organic matter, and in vitro digestible dry matter yields at the initiation of grazing did not differ between corn hybrids. Dry matter, organic matter, and in vitro digestible dry matter losses tended (P<.10) to be greater from the NX6236 and N64-Z4 hybrids than from the 3489 and 34RO7 hybrids, and were greater (P<.05) from grazed than non-grazed areas of the fields. At the initiation of grazing, dry matter concentrations of the crop residues from the NX6236 and N64-Z4 hybrids tended to be lower than those from the 3489 and 34RO7 hybrids. Crop residues from the NX6236 and N64-Z4 hybrids had lower concentrations of acid detergent fiber (P<.05) and acid detergent lignin (P=.07) and higher concentrations of in vitro digestible organic matter than the 3489 and 34RO7 hybrids. Over the grazing season, corn hybrid did not affect mean rates of change in forage composition. The concentration of in vitro digestible organic matter in forage selected by steers after two weeks of grazing did not differ. However, steers grazing corn crop residues consumed forage with higher (P<.05) concentrations of neutral detergent fiber, acid detergent fiber, and acid detergent insoluble nitrogen than steers fed hay. The acid detergent fiber concentration of forage selected by steers grazing the 3489 and N64-Z4 hybrids was lower (P<.05) than that from the 34RO7 and NX6236 hybrids. To maintain similar body condition score changes, cows grazing crop residues from the 3489, 34RO7, NX6236, and N64-Z4 hybrids required 650, 628, 625, and 541 kg hay DM/cow, compared with a requirement of 1447 kg hay DM/cow for cows maintained in a drylot.

Relevância: 30.00%

Resumo:

Objective Arterial lactate, base excess (BE), lactate clearance, and the Sequential Organ Failure Assessment (SOFA) score have been shown to correlate with outcome in severely injured patients. The goal of the present study was to separately assess their predictive value in patients suffering from traumatic brain injury (TBI) as opposed to patients suffering from injuries not related to the brain. Materials and methods A total of 724 adult trauma patients with an Injury Severity Score (ISS) ≥ 16 were grouped into patients without TBI (non-TBI), patients with isolated TBI (isolated TBI), and patients with a combination of TBI and non-TBI injuries (combined injuries). The predictive value of the above parameters was then analyzed using both uni- and multivariate analyses. Results The mean age of the patients was 39 years (77 % males), with a mean ISS of 32 (range 16–75). Mortality ranged from 14 % (non-TBI) to 24 % (combined injuries). Admission and serial lactate/BE values were higher in non-survivors of all groups (all p < 0.01), but not in patients with isolated TBI. Admission SOFA scores were highest in non-survivors of all groups (p = 0.023); patients who subsequently became septic also showed elevated SOFA scores (p < 0.01), except those with isolated TBI. In this group, the SOFA score was the only parameter that showed significant differences between survivors and non-survivors. Receiver operating characteristic (ROC) analysis revealed lactate to be the best overall predictor of increased mortality and further septic complications, irrespective of the leading injury. Conclusion Lactate showed the best performance in predicting sepsis or death in all trauma patients except those with isolated TBI, and the differences were greatest in patients with substantial bleeding. Following isolated TBI, the SOFA score was the only parameter that could differentiate survivors from non-survivors on admission, although even the SOFA score was not an independent predictor of death in multivariate analysis.
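A minimal sketch of the kind of ROC comparison the abstract describes, on synthetic data: each admission parameter is scored by its area under the ROC curve against mortality. All distributions and effect sizes below are assumptions, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 724
died = rng.binomial(1, 0.2, n)  # synthetic outcome, ~20% mortality

# Synthetic admission values; non-survivors get shifted distributions.
lactate = rng.normal(2.0 + 2.5 * died, 1.5, n).clip(0.3)
base_excess = rng.normal(-1.0 - 4.0 * died, 3.0, n)
sofa = rng.normal(5.0 + 3.0 * died, 2.5, n).clip(0, 24)

for name, score in [("lactate", lactate),
                    ("base excess", -base_excess),  # lower BE = higher risk
                    ("SOFA", sofa)]:
    print(f"AUC {name}: {roc_auc_score(died, score):.2f}")
```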

Relevância: 30.00%

Resumo:

Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation because of the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of scaled concentrations, $[0 \le x \le 1]$, parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance against currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal order statistics and regression on lognormal order statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, the performance of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
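A minimal sketch of the underlying idea, fitting a beta distribution to left-censored data by maximum likelihood: observed values contribute their density, censored values contribute the probability mass below the detection limit. This illustrates the technique generically; it is not the author's exact estimation procedure, and all numbers are assumed.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
true_a, true_b = 2.0, 8.0
x = rng.beta(true_a, true_b, 200)  # concentrations scaled to [0, 1]
DL = 0.08                          # detection limit (assumed)
censored = x < DL                  # these values are reported only as "< DL"
obs = x[~censored]

def neg_log_lik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    ll_obs = stats.beta.logpdf(obs, a, b).sum()              # observed values
    ll_cens = censored.sum() * stats.beta.logcdf(DL, a, b)   # P(X < DL) each
    return -(ll_obs + ll_cens)

res = optimize.minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print(f"fitted a={a_hat:.2f}, b={b_hat:.2f}, "
      f"mean={a_hat / (a_hat + b_hat):.3f} (true mean {true_a/(true_a+true_b):.3f})")
```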

Relevância: 30.00%

Resumo:

RATIONALE In biomedical journals, authors sometimes use the standard error of the mean (SEM) for data description, a practice that has been called inappropriate or incorrect. OBJECTIVE To assess the frequency of incorrect use of the SEM in articles in three selected cardiovascular journals. METHODS AND RESULTS All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure, and Circulation Research were assessed by two assessors for inappropriate use of the SEM when providing descriptive information on empirical data. We also assessed whether the authors stated in the methods section that the SEM would be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and Circulation: Heart Failure having the lowest value (27%). In 81% of articles with incorrect use of the SEM, the authors had explicitly stated that they used the SEM for data description, and in 89% SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). LIMITATIONS The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use. The observed results are not representative of all cardiovascular journals. CONCLUSION In three selected cardiovascular journals we found a high level of inappropriate SEM use, along with explicit methods statements declaring its use for data description, especially in basic science studies. To improve this situation, these and other journals should provide clear instructions to authors on how to report descriptive information on empirical data.
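A short sketch of why the SEM is not a descriptive statistic: the SD describes the spread of the data, while SEM = SD/sqrt(n) describes the precision of the estimated mean and shrinks with sample size. The numbers below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)
for n in (10, 100, 1000):
    sample = rng.normal(loc=120.0, scale=15.0, size=n)  # e.g. blood pressure
    sd = sample.std(ddof=1)          # stable: describes the data's spread
    sem = sd / np.sqrt(n)            # shrinks with n: describes the mean's precision
    ci = (sample.mean() - 1.96 * sem, sample.mean() + 1.96 * sem)
    print(f"n={n:4d}  SD={sd:5.1f}  SEM={sem:5.2f}  "
          f"95% CI=({ci[0]:.1f}, {ci[1]:.1f})")
```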

Relevância: 30.00%

Resumo:

Alveolar echinococcosis (AE), caused by the larval stage of Echinococcus multilocularis, is one of the lethal parasitic diseases of humans and a major public health problem in many countries of the northern hemisphere. When the living conditions and habits in Turkey are considered in relation to the life cycle of the parasite, AE is likely much more common than reported, mainly in the Eastern Anatolia region. Since in vitro serologic tests with high specificity for AE have not been in use in our country, most cases with liver lesions have been misdiagnosed by radiological investigations as malignancies. The aim of this study was to evaluate the diagnostic value of in-house ELISA methods developed using three different antigens (EgHF, Em2, EmII/3-10) in the serological diagnosis of AE. The study samples included a total of 100 sera provided by the Bern University Parasitology Institute, obtained from patients with helminthiasis and all confirmed by clinical, parasitological, and/or histopathological means. Ten samples from each of the cases infected by E. multilocularis, E. granulosus, Taenia solium, Wuchereria bancrofti, Strongyloides stercoralis, Ascaris lumbricoides, Toxocara canis, Trichinella spiralis, Fasciola hepatica, and Schistosoma haematobium were studied. The EgHF (E. granulosus hydatid fluid) antigens were prepared in our laboratory from the liver cyst fluids of sheep with cystic echinococcosis, while the Em2 (E. multilocularis metacestode-purified laminated layer) and EmII/3-10 (E. multilocularis recombinant protoscolex tegument) antigens were provided by the Bern University Parasitology Institute. Flat-bottom ELISA plates were coated with the EgHF, Em2, and EmII/3-10 antigens at concentrations of 2.5 µg, 1 µg, and 0.18 µg per well, respectively, and all sera were tested by the EgHF-ELISA, Em2-ELISA, and EmII/3-10-ELISA methods. For each test, samples reactive above the cut-off value (mean OD of negative controls + 2 SD) were accepted as positive. The sensitivities of the ELISA tests performed with the EgHF, Em2, and EmII/3-10 antigens were estimated as 100%, 90%, and 90%, respectively, whereas the specificities were 63%, 91%, and 91%, respectively. When the Em2-ELISA and EmII/3-10-ELISA tests were evaluated together, the specificity increased to 96%. Our data indicated that the highest sensitivity (100%, with EgHF-ELISA) and specificity (96%, with Em2-ELISA + EmII/3-10-ELISA) for the serodiagnosis of AE can be achieved by the combined use of the ELISA tests with the three different antigens. It was concluded that the early and accurate diagnosis of AE in our country, which is endemic for this disease, could be supported by the use of highly specific serological tests such as Em2-ELISA and EmII/3-10-ELISA, complementing radiological data.
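A sketch of the cut-off rule described above (positive if OD exceeds mean of negative controls + 2 SD) and the resulting sensitivity/specificity computation. All OD values are synthetic assumptions, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
neg_controls = rng.normal(0.10, 0.03, 20)  # OD of negative controls (assumed)
cutoff = neg_controls.mean() + 2 * neg_controls.std(ddof=1)

od_ae = rng.normal(0.60, 0.15, 10)     # E. multilocularis cases (assumed ODs)
od_other = rng.normal(0.12, 0.06, 90)  # other helminthiases (assumed ODs)

sensitivity = (od_ae > cutoff).mean()     # fraction of true cases above cut-off
specificity = (od_other <= cutoff).mean() # fraction of non-cases below cut-off
print(f"cut-off OD={cutoff:.3f}  "
      f"sensitivity={sensitivity:.0%}  specificity={specificity:.0%}")
```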

Relevância: 30.00%

Resumo:

BACKGROUND Acute mesenteric ischemia (AMI) is an emergency with a mortality rate of up to 50 %. Detecting AMI continues to be a major challenge. This study assessed the correlation of repeated preoperative serum lactate measurements with bowel necrosis and aimed to identify risk factors for a lethal outcome in patients with AMI. METHODS A retrospective study of 91 patients with clinically and pathologically confirmed AMI from January 2006 to December 2012 was performed. RESULTS The in-hospital mortality rate was 42.9 %. Two hundred nine preoperative lactate measurements were analyzed (2.3 ± 1.1 values per patient). Within six hours prior to surgery, the mean serum lactate level was significantly higher (4.97 ± 4.21 vs. 3.24 ± 3.05 mmol/L, p = 0.006) and the mean pH significantly lower (7.28 ± 0.12 vs. 7.37 ± 0.08, p = 0.001) compared to more than 6 h before surgery. Thirty-four patients had at least two lactate measurements within 24 h prior to surgery. In this subgroup, 17 patients (50 %) exhibited an increase and 17 patients (50 %) a decrease in lactate levels. Forward logistic regression analysis showed that the length of necrotic bowel and the highest lactate value within 24 h prior to surgery were independent risk factors for mortality (r² = 0.329). CONCLUSION The value of serial lactate and pH measurements for predicting the length of necrotic bowel is very limited. Length of necrotic bowel and lactate values are independent risk factors for mortality.
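An illustrative logistic-regression model of in-hospital death on the two risk factors named in the conclusion. The data are synthetic and the coefficients are assumptions; this only shows the shape of the analysis, not the study's fit.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 91
necrotic_cm = rng.gamma(2.0, 40.0, n)  # length of necrotic bowel in cm (assumed)
peak_lactate = rng.gamma(2.0, 2.0, n)  # highest lactate within 24 h, mmol/L (assumed)

# Simulate deaths from an assumed true model, then refit it.
logit = -3.0 + 0.01 * necrotic_cm + 0.25 * peak_lactate
died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([necrotic_cm, peak_lactate]))
fit = sm.Logit(died, X).fit(disp=0)
print(fit.summary(xname=["const", "necrotic_cm", "peak_lactate"]))
```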

Relevância: 30.00%

Resumo:

AIM To evaluate the diagnostic value (sensitivity, specificity) of positron emission mammography (PEM) in a single-site, non-interventional study using the maximum PEM uptake value (PUVmax). PATIENTS, METHODS In a single-site, non-interventional study, 108 patients (107 women, 1 man) with a total of 151 suspected lesions were scanned with a PEM Flex Solo II (Naviscan) at 90 min p.i. with 3.5 MBq 18F-FDG per kg of body weight. In this ROI (region of interest)-based analysis, the maximum PEM uptake value (PUV) was determined in tumours (PUVmaxtumour), in benign lesions (PUVmaxnormal breast), and in healthy tissue on the contralateral side (PUVmaxcontralateral breast). These values were compared and contrasted. In addition, the ratios PUVmaxtumour / PUVmaxcontralateral breast and PUVmaxnormal breast / PUVmaxcontralateral breast were compared. The image data were interpreted independently by two experienced nuclear medicine physicians and compared with histology in cases of suspected carcinoma. RESULTS Based on a criterion of PUVmax > 1.9, 31 out of 151 lesions in the patient cohort were found to be malignant (21%). A mean PUVmaxtumour of 3.78 ± 2.47 was identified in malignant tumours, while a mean PUVmaxnormal breast of 1.17 ± 0.37 was found in the glandular tissue of the healthy breast, the difference being statistically significant (p < 0.001). Similarly, the mean ratio between tumour and healthy glandular tissue in breast cancer patients (3.15 ± 1.58) was significantly higher than the ratio for benign lesions (1.17 ± 0.41, p < 0.001). CONCLUSION PEM is capable of differentiating breast tumours from benign lesions with 100% sensitivity and a high specificity of 96% when a threshold of PUVmax > 1.9 is applied.
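A quick sketch of the reported group comparison: synthetic PUVmax values drawn to match the reported means/SDs, compared with Welch's t-test, plus the PUVmax > 1.9 rule. Sample sizes for the healthy-tissue group are assumed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
puv_tumour = rng.normal(3.78, 2.47, 31).clip(0.1)    # malignant lesions
puv_healthy = rng.normal(1.17, 0.37, 120).clip(0.1)  # healthy glandular tissue

t, p = stats.ttest_ind(puv_tumour, puv_healthy, equal_var=False)  # Welch's t-test
print(f"Welch t={t:.2f}, p={p:.2g}")
print(f"tumours above PUVmax>1.9 threshold: {(puv_tumour > 1.9).mean():.0%}")
```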

Relevância: 30.00%

Resumo:

Mean corpuscular volume (MCV), an inexpensive and widely available measure, increases in HIV-infected individuals receiving zidovudine and stavudine, raising the hypothesis that it could be used as a surrogate marker of adherence. The aim of this study was to examine the association between mean corpuscular volume and adherence to antiretroviral therapy among HIV-infected children and adolescents aged 0–19 years in Uganda, as well as the extent to which changes in mean corpuscular volume predict adherence as determined by virologic suppression. The investigator retrospectively reviewed and analyzed secondary data on 158 HIV-infected children and adolescents aged 0–19 years who initiated antiretroviral therapy under an observational cohort at the Baylor College of Medicine Children's Foundation - Uganda. Viral suppression was used as the gold standard for monitoring adherence and was defined as a viral load of < 400 copies/ml at 24 and 48 weeks. Patients were at least 48 weeks on therapy, age 0.2–18.4 years, 54.4% female, 82.3% on a zidovudine-based regimen, 92% WHO stage III at initiation of therapy, median pre-therapy MCV 80.6 fl (70.3–98.3 fl), median CD4% 10.2% (0.3%–28.0%), and mean pre-therapy viral load 407,712.9 ± 270,413.9 copies/ml. For both 24 and 48 weeks of antiretroviral therapy, patients with viral suppression had a greater mean percentage change in mean corpuscular volume (15.1% ± 8.4 vs. 11.1% ± 7.8 and 2.3% ± 13.2 vs. -2.7% ± 10.5, respectively). The mean percentage change in mean corpuscular volume was greater in the first 24 weeks of therapy for patients both with and without viral suppression (15.1% ± 8.4 vs. 2.3% ± 13.2 and 11.1% ± 7.8 vs. -2.7% ± 10.5, respectively). In the multivariate logistic regression model, a percentage change in mean corpuscular volume ≥ 20% was significantly associated with viral suppression (adjusted OR 4.0; CI 1.2–13.3; p value 0.02). The ability of percentage changes in MCV to correctly identify children and adolescents with viral suppression was higher at a cut-off of ≥ 20% (positive predictive value 90.7%; sensitivity 31.7%) than at ≥ 9% (82.9%; sensitivity 78.9%). The negative predictive value was lower at ≥ 20% change (25%; specificity 84.8%) than at ≥ 9% change (33.3%; specificity 39.4%). Mean corpuscular volume is a useful marker of adherence among children and adolescents with viral suppression.
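A minimal sketch of the marker described above: the percentage change in MCV from the pre-therapy baseline, classified against the ≥ 20% cut-off. The patient records and MCV values below are hypothetical.

```python
def mcv_percent_change(baseline_fl: float, followup_fl: float) -> float:
    """Percentage change in MCV (femtolitres) from the pre-therapy baseline."""
    return 100.0 * (followup_fl - baseline_fl) / baseline_fl

patients = [  # hypothetical records: baseline and week-24 MCV in fl
    {"id": 1, "mcv0": 80.6, "mcv24": 98.0},  # large MCV rise
    {"id": 2, "mcv0": 82.0, "mcv24": 84.0},  # small rise
    {"id": 3, "mcv0": 78.0, "mcv24": 75.5},  # MCV fell
]
for p in patients:
    change = mcv_percent_change(p["mcv0"], p["mcv24"])
    flag = "suggests adherence" if change >= 20.0 else "inconclusive"
    print(f"patient {p['id']}: MCV change {change:+.1f}% -> {flag}")
```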

Relevância: 30.00%

Resumo:

This thesis presents a new method to filter errors out of multidimensional databases. The method does not require any a priori information about the nature of the errors. In particular, the errors need not be small, random, or exhibit zero mean; they are only required to be relatively uncorrelated with the clean information contained in the database. The method is based on an improved extension of the seminal iterative gappy reconstruction method (able to reconstruct lost information at known positions in the database) due to Everson and Sirovich (1995). The improved gappy reconstruction method is developed into a two-step error-filtering method: it first (a) identifies the error locations in the database and then (b) reconstructs the information in these locations by treating the associated data as gappy data. The resulting method filters out O(1) errors efficiently, both when these are random and when they are systematic, and both when they are concentrated and when they are spread throughout the database. The performance of the method is first illustrated using a two-dimensional toy-model database resulting from discretizing a transcendental function, and then tested on two CFD-calculated, three-dimensional aerodynamic databases containing the pressure coefficient on the surface of a wing for varying values of the angle of attack. A more general performance analysis is presented with the intention of quantifying the amount of randomness the method tolerates while maintaining correct performance, and the size of error it can detect. Lastly, some improvements of the method are proposed, together with their verification.
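A minimal sketch of gappy reconstruction in the spirit of Everson & Sirovich (1995): missing entries of a snapshot are recovered by least-squares projection onto POD modes computed from the database. This illustrates only the base technique, not the thesis' improved two-step filtering method; the toy database below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 200)
# Toy database: 40 snapshots of a smooth parametric function (assumed).
snapshots = np.array([np.sin(2 * np.pi * (t + s)) + 0.3 * np.cos(6 * np.pi * t * s)
                      for s in np.linspace(0, 1, 40)]).T      # shape (200, 40)

U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :5]                  # retained POD modes of the database

x = snapshots[:, 7].copy()        # one snapshot with gaps to reconstruct
mask = rng.random(x.size) > 0.3   # True = known entry, ~30% missing

# Least-squares fit of mode coefficients using only the known entries,
# then evaluate the modes everywhere to fill the gaps.
coeffs, *_ = np.linalg.lstsq(modes[mask], x[mask], rcond=None)
x_rec = modes @ coeffs
err = np.linalg.norm((x_rec - x)[~mask]) / np.linalg.norm(x[~mask])
print(f"relative error on missing entries: {err:.2e}")
```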

Relevância: 30.00%

Resumo:

In this paper the authors construct a theory about how the expansion of higher education could be associated with several factors that indicate a decline in the quality of degrees. They assume that the expansion of tertiary education takes place through three channels, and show how these channels are likely to reduce average study time, lower academic requirements and average wages, and inflate grades. First, universities have an incentive to increase their student body through public and private funding schemes beyond a level at which they can keep their academic requirements high. Second, due to skill-biased technological change, employers have an incentive to recruit staff with a higher education degree. Third, students have an incentive to acquire a college degree due to employers’ preferences for such qualifications; the university application procedures; and through the growing social value placed on education. The authors develop a parsimonious dynamic model in which a student, a college and an employer repeatedly make decisions about requirement levels, performance and wage levels. Their model shows that if i) universities have the incentive to decrease entrance requirements, ii) employers are more likely to employ staff with a higher education degree and iii) all types of students enrol in colleges, the final grade will not necessarily induce weaker students to study more to catch up with more able students. In order to re-establish a quality-guarantee mechanism, entrance requirements should be set at a higher level.

Relevância: 30.00%

Resumo:

This work addresses the asset allocation (portfolio analysis) problem from a Bayesian perspective. To this end, the theoretical analysis of the classical mean-variance model is reviewed and its shortcomings, which compromise its effectiveness in real-world cases, are identified. Curiously, its greatest deficiency is not related to the model itself but to its inputs, in particular the expected return estimated from historical data. To overcome this deficiency, the Bayesian approach (the Black-Litterman model) treats the expected return as a random variable and then builds a prior distribution (based on the CAPM) and a likelihood (based on the investor's market views), finally applying Bayes' theorem to obtain the posterior distribution. The new expected return that emerges from the posterior distribution replaces the earlier estimate computed from historical data. The results obtained show that the Bayesian model produces conservative and intuitive results compared with the classical mean-variance model.
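A minimal Black-Litterman sketch following the standard textbook formulas (equilibrium prior from CAPM, one investor view, posterior by the usual precision-weighted combination). The covariance matrix, weights, and view below are illustrative assumptions, not the thesis' calibration.

```python
import numpy as np

Sigma = np.array([[0.040, 0.012, 0.010],
                  [0.012, 0.030, 0.008],
                  [0.010, 0.008, 0.020]])  # return covariance (assumed)
w_mkt = np.array([0.5, 0.3, 0.2])          # market-cap weights (assumed)
delta, tau = 2.5, 0.05                     # risk aversion, uncertainty scale

pi = delta * Sigma @ w_mkt                 # CAPM-implied equilibrium returns (prior)

P = np.array([[1.0, -1.0, 0.0]])           # view: asset 1 outperforms asset 2...
q = np.array([0.02])                       # ...by 2% per year (assumed view)
Omega = P @ (tau * Sigma) @ P.T            # view uncertainty (He-Litterman choice)

# Posterior mean: [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 q]
A = np.linalg.inv(tau * Sigma)
B = P.T @ np.linalg.inv(Omega) @ P
mu_bl = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ q)
print("prior (equilibrium) returns:", np.round(pi, 4))
print("posterior (Black-Litterman):", np.round(mu_bl, 4))
```

Because the posterior shrinks the views toward the equilibrium prior, the resulting expected returns (and hence the optimal portfolios) move less aggressively than historical-mean estimates, which is the conservative behaviour the abstract reports.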

Relevância: 30.00%

Resumo:

BACKGROUND: Recent studies have demonstrated that exercise capacity is an independent predictor of mortality in women. Normative values of exercise capacity for age in women have not been well established. Our objectives were to construct a nomogram to permit determination of predicted exercise capacity for age in women and to assess the predictive value of the nomogram with respect to survival. METHODS: A total of 5721 asymptomatic women underwent a symptom-limited, maximal stress test. Exercise capacity was measured in metabolic equivalents (MET). Linear regression was used to estimate the mean MET achieved for age. A nomogram was established to allow the percentage of predicted exercise capacity to be estimated on the basis of age and the exercise capacity achieved. The nomogram was then used to determine the percentage of predicted exercise capacity for both the original cohort and a referral population of 4471 women with cardiovascular symptoms who underwent a symptom-limited stress test. Survival data were obtained for both cohorts, and Cox survival analysis was used to estimate the rates of death from any cause and from cardiac causes in each group. RESULTS: The linear regression equation for predicted exercise capacity (in MET) on the basis of age in the cohort of asymptomatic women was as follows: predicted MET = 14.7 - (0.13 x age). The risk of death among asymptomatic women whose exercise capacity was less than 85 percent of the predicted value for age was twice that among women whose exercise capacity was at least 85 percent of the age-predicted value (P<0.001). Results were similar in the cohort of symptomatic women. CONCLUSIONS: We have established a nomogram for predicted exercise capacity on the basis of age that is predictive of survival among both asymptomatic and symptomatic women. These findings could be incorporated into the interpretation of exercise stress tests, providing additional prognostic information for risk stratification.
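A worked example of the nomogram relation reported above: predicted exercise capacity for age from the study's regression, and the 85%-of-predicted threshold associated with roughly doubled mortality risk. The patient values are hypothetical.

```python
def predicted_met(age_years: float) -> float:
    """Predicted exercise capacity (MET) for age, from the reported regression."""
    return 14.7 - 0.13 * age_years

def percent_of_predicted(age_years: float, achieved_met: float) -> float:
    return 100.0 * achieved_met / predicted_met(age_years)

age, achieved = 55.0, 5.5  # hypothetical 55-year-old achieving 5.5 MET
pct = percent_of_predicted(age, achieved)
print(f"predicted: {predicted_met(age):.1f} MET, achieved: {achieved} MET "
      f"({pct:.0f}% of predicted)")
if pct < 85.0:
    print("below 85% of predicted: associated with roughly doubled mortality risk")
```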

Relevância: 30.00%

Resumo:

Let $f : [0,1] \times \mathbb{R}^2 \to \mathbb{R}$ be a function satisfying the Carathéodory conditions, and let $t(1-t)e(t) \in L^1(0,1)$. Let $a_i \in \mathbb{R}$ and $\xi_i \in (0,1)$ for $i = 1, \ldots, m-2$, where $0 < \xi_1 < \xi_2 < \cdots < \xi_{m-2} < 1$. In this paper we study the existence of $C[0,1]$ solutions for the m-point boundary value problem [GRAPHICS] The proof of our main result is based on the Leray-Schauder continuation theorem.
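The boundary value problem itself is lost behind the [GRAPHICS] placeholder. Purely as a hedged illustration, m-point problems in this strand of the literature (with these hypotheses on $f$, $e$, $a_i$, $\xi_i$, proved via Leray-Schauder continuation) typically take a form such as the following; this is an assumption about the general problem class, not the paper's exact statement:

```latex
\begin{aligned}
  x''(t) &= f\bigl(t, x(t), x'(t)\bigr) + e(t), \qquad 0 < t < 1, \\
  x(0) &= 0, \qquad x(1) = \sum_{i=1}^{m-2} a_i \, x(\xi_i).
\end{aligned}
```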

Relevância: 30.00%

Resumo:

Background Regression to the mean (RTM) is a statistical phenomenon that can make natural variation in repeated data look like real change. It happens when unusually large or small measurements tend to be followed by measurements that are closer to the mean. Methods We give some examples of the phenomenon, and discuss methods to overcome it at the design and analysis stages of a study. Results The effect of RTM in a sample becomes more noticeable with increasing measurement error and when follow-up measurements are only examined on a sub-sample selected using a baseline value. Conclusions RTM is a ubiquitous phenomenon in repeated data and should always be considered as a possible cause of an observed change. Its effect can be alleviated through better study design and use of suitable statistical methods.
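A short simulation of the phenomenon: subjects selected for an unusually high baseline measurement appear to "improve" at follow-up even though their underlying values never change. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
true_value = rng.normal(100.0, 10.0, n)           # stable underlying quantity
baseline = true_value + rng.normal(0.0, 8.0, n)   # measurement error at baseline
followup = true_value + rng.normal(0.0, 8.0, n)   # independent error at follow-up

selected = baseline > 115.0  # select subjects with "abnormally high" baselines
print(f"selected baseline mean:  {baseline[selected].mean():.1f}")
print(f"selected follow-up mean: {followup[selected].mean():.1f}  (apparent 'improvement')")
print(f"whole-sample mean change: {(followup - baseline).mean():+.2f}  (no real change)")
```

As the abstract notes, the effect grows with the measurement error (here the 8.0 SD) and with selection on the baseline value; analysing the whole sample, or using a control group, removes the illusion.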