918 results for Random Regret Minimization


Relevance: 20.00%

Abstract:

Although many studies find that voting in Africa approximates an ethnic census, with voting primarily along ethnic lines, hardly any have sought to explain ethnic voting within a rational choice framework. Using voter-opinion data from a survey conducted two weeks before the December 2007 Kenyan elections, we find that the expected benefits associated with a win by each presidential candidate varied significantly across voters from different ethnic groups. We hypothesize that the decision to participate in the elections was influenced by these expected benefits, as per the minimax-regret voting model. We test the predictions of this model using voter-turnout data from the December 2007 elections and find that turnout across ethnic groups varied systematically with expected benefits. The results suggest that individuals participated in the elections primarily to avoid the maximum regret should a candidate from another ethnic group win. The results therefore lend credence to the minimax-regret model as proposed by Ferejohn and Fiorina (1974) and refute the Downsian expected-utility model.
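
To make the mechanism concrete, here is the textbook minimax-regret turnout calculation of Ferejohn and Fiorina (1974), with B the benefit of one's preferred candidate winning and C the cost of voting (a standard sketch, not this paper's empirical specification):

```latex
% Worst-case regrets of the two actions. Voting regrets at most the cost C
% (paid when the vote turns out not to be pivotal); abstaining regrets at
% most B/2 - C (foregone when the vote would have made or broken a tie,
% a tie being worth B/2 under a coin-flip rule).
\[
\max_{\text{states}} R(\text{vote}) = C,
\qquad
\max_{\text{states}} R(\text{abstain}) = \frac{B}{2} - C .
\]
% Minimax regret prescribes voting whenever abstaining carries the larger
% worst-case regret:
\[
\frac{B}{2} - C > C \;\Longleftrightarrow\; B > 4C .
\]
```

Since larger expected benefits B raise the maximum regret of abstaining, the criterion predicts exactly the pattern reported above: turnout rising with the expected benefits attached to a candidate's win.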

Relevance: 20.00%

Abstract:

An introduction to Fourier series based on minimizing the squared error between a truncated series representation and the exact function.
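
Concretely, for a 2π-periodic function f the truncated series and the squared-error functional are the standard ones (stated here for reference):

```latex
\[
S_N(x) = \frac{a_0}{2} + \sum_{n=1}^{N}\left(a_n\cos nx + b_n\sin nx\right),
\qquad
E_N = \int_{-\pi}^{\pi}\left[f(x) - S_N(x)\right]^2 dx .
\]
% Setting dE_N/da_n = 0 and dE_N/db_n = 0 and using the orthogonality of
% sines and cosines yields the familiar Fourier coefficients:
\[
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx,
\qquad
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx .
\]
```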

Relevance: 20.00%

Abstract:

Objective. To determine the accuracy of the urine protein:creatinine ratio (pr:cr) in predicting 300 mg of protein in a 24-hour urine collection in pregnant patients with suspected preeclampsia.

Methods. A systematic review was performed. Articles were identified through electronic databases, and relevant citations were found by hand searching textbooks and review articles. Included studies evaluated patients with suspected preeclampsia using both a 24-hour urine sample and a pr:cr. Only English-language articles were included. Studies of patients with chronic illness such as chronic hypertension, diabetes mellitus, or renal impairment were excluded from the review. Two researchers extracted accuracy data for the pr:cr relative to a gold standard of 300 mg of protein in a 24-hour sample, as well as population and study characteristics. The data were analyzed and summarized in tabular and graphical form.

Results. Sixteen studies were identified; only three, with 510 patients in total, met our inclusion criteria. The studies evaluated different cut-points for a positive pr:cr, from 130 mg/g to 700 mg/g. Sensitivities and specificities were 90-93% and 33-65%, respectively, for a pr:cr of 130-150 mg/g; 81-95% and 52-80% for a pr:cr of 300 mg/g; and 85-87% and 96-97% for a pr:cr of 600-700 mg/g.

Conclusion. The value of a random pr:cr to exclude preeclampsia is limited because even low pr:cr levels (130-150 mg/g) may miss up to 10% of patients with significant proteinuria. A pr:cr of more than 600 mg/g may obviate a 24-hour collection.
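
For reference, the accuracy measures quoted above are the standard ones, defined against the 24-hour gold standard:

```latex
\[
\text{sensitivity} = \frac{TP}{TP + FN},
\qquad
\text{specificity} = \frac{TN}{TN + FP},
\]
```

so a sensitivity of 90-93% at the 130-150 mg/g cut-point implies a false-negative rate (1 - sensitivity) of 7-10%, which is exactly why a low pr:cr may still miss up to 10% of patients with significant proteinuria.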

Relevance: 20.00%

Abstract:

Random Forests™ is reported to be one of the most accurate classification algorithms for complex data analysis. It shows excellent performance even when most predictors are noisy and the number of variables is much larger than the number of observations. In this thesis, Random Forests was applied to a large-scale lung cancer case-control study. A novel way of automatically selecting prognostic factors was proposed, and a synthetic positive control was used to validate the method. Throughout this study we show that Random Forests can handle a large number of weak input variables without overfitting and can account for non-additive interactions between these input variables. Random Forests can also be used for variable selection without being adversely affected by collinearities.

Random Forests can deal with large-scale data sets without rigorous data preprocessing, and it has a robust variable-importance ranking measure. We propose a novel variable selection method, in the context of Random Forests, that uses the data noise level as the cut-off value for determining the subset of important predictors. This new approach enhances the ability of the Random Forests algorithm to automatically identify important predictors in complex data. The cut-off value can also be adjusted based on the results of the synthetic positive control experiments.

When the data set had a high ratio of variables to observations, Random Forests complemented the established logistic regression. This study suggests that Random Forests is recommended for such high-dimensionality data: one can use Random Forests to select the important variables and then use logistic regression, or Random Forests itself, to estimate the effect sizes of the predictors and to classify new observations.

We also found that mean decrease in accuracy is a more reliable variable-ranking measure than mean decrease in Gini.
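
The noise-level cut-off idea can be sketched in a few lines. The following is a minimal illustration, assuming scikit-learn's RandomForestClassifier rather than the original Random Forests™ implementation, and using shuffled copies of the real features as synthetic noise probes whose best importance serves as the cut-off; the thesis's exact procedure and tuning may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def select_by_noise_cutoff(X, y, n_probes=None, random_state=0):
    """Keep the features whose importance exceeds the data noise level,
    estimated here from shuffled 'probe' copies of the real features."""
    rng = np.random.default_rng(random_state)
    n_real = X.shape[1]
    n_probes = n_probes or n_real
    # Noise probes: real columns with their values permuted, so each probe
    # keeps a feature's marginal distribution but carries no signal.
    probe_cols = rng.choice(n_real, size=n_probes, replace=True)
    probes = np.column_stack([rng.permutation(X[:, j]) for j in probe_cols])

    rf = RandomForestClassifier(n_estimators=500, random_state=random_state)
    rf.fit(np.hstack([X, probes]), y)

    imp = rf.feature_importances_
    cutoff = imp[n_real:].max()          # noise level = best probe importance
    selected = np.flatnonzero(imp[:n_real] > cutoff)
    return selected, cutoff
```

Here `feature_importances_` is the Gini-based ranking; `sklearn.inspection.permutation_importance` approximates the mean decrease in accuracy, which, per the last paragraph above, the thesis found more reliable.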

Relevance: 20.00%

Abstract:

Gastroesophageal reflux disease is a common condition affecting 25 to 40% of the population and causes significant morbidity in the U.S., accounting for at least 9 million office visits to physicians with estimated annual costs of $10 billion. Previous research has not clearly established whether infection with Helicobacter pylori, a known cause of peptic ulcer, atrophic gastritis, and non-cardia adenocarcinoma of the stomach, is associated with gastroesophageal reflux disease. This study is a secondary analysis of data collected in a 2004 cross-sectional study of a random sample of adult residents of Ciudad Juarez, Mexico (Prevalence and Determinants of Chronic Atrophic Gastritis, or CAG, study; Dr. Victor M. Cardenas, Principal Investigator). In this study, the presence of gastroesophageal reflux disease was based on responses to the previously validated Spanish Language Dyspepsia Questionnaire. Responses indicating gastroesophageal reflux symptoms and disease were compared with the presence of H. pylori infection, as measured by culture, histology, and rapid urease test, and with the findings of upper endoscopy (i.e., hiatus hernia and erosive and atrophic esophagitis). Prevalence ratios were calculated using bivariate, stratified, and multivariate negative binomial logistic regression analyses to assess the relation between active H. pylori infection and the prevalence of typical gastroesophageal reflux syndrome and disease, while controlling for known risk factors of gastroesophageal reflux disease such as obesity. In a random sample of 174 adults, 48 (27.6%) of the study participants had typical reflux syndrome and only 5% (9/174) had gastroesophageal reflux disease per se according to the Montreal consensus, which defines reflux syndromes and disease based on whether the symptoms are perceived as troublesome by the subject. There was no association between H. pylori infection and typical reflux syndrome or gastroesophageal reflux disease. However, we found that in this Northern Mexican population there was a moderate association (prevalence ratio = 2.5; 95% CI = 1.3, 4.7) between obesity (BMI ≥ 30 kg/m²) and typical reflux syndrome. Management and prevention of obesity will significantly curb the growing number of persons affected by gastroesophageal reflux symptoms and disease in Northern Mexico.
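
For reference, the prevalence ratio quoted above has the standard definition:

```latex
\[
PR \;=\; \frac{\Pr(\text{typical reflux} \mid \mathrm{BMI} \ge 30)}
              {\Pr(\text{typical reflux} \mid \mathrm{BMI} < 30)}
   \;=\; 2.5 \quad (95\%\ \mathrm{CI}\ 1.3\text{--}4.7),
\]
```

i.e., typical reflux syndrome was 2.5 times as prevalent among obese participants as among non-obese ones.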

Relevance: 20.00%

Abstract:

This paper presents an algorithm for generating scale-free networks with adjustable clustering coefficient. The algorithm is based on a random walk procedure combined with a triangle generation scheme which takes into account genetic factors; this way, preferential attachment and clustering control are implemented using only local information. Simulations are presented which support the validity of the scheme, characterizing its tuning capabilities.
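
The two ingredients, random-walk attachment (a local proxy for preferential attachment, since a walk lands on nodes roughly in proportion to their degree) and triangle closure (for clustering control), can be illustrated with a toy generator. This is a hypothetical re-creation of the scheme's flavor, not the authors' exact algorithm; `p_triangle` and `walk_len` are illustrative knobs:

```python
import random
import networkx as nx

def grow_network(n, m=2, p_triangle=0.5, walk_len=5, seed=None):
    """Grow a graph by random-walk attachment with optional triangle closure."""
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)              # small seed clique
    for new in range(m + 1, n):
        # Random walk from a uniformly chosen start; the walk visits nodes
        # roughly in proportion to degree, so attachment is preferential
        # while using only local information.
        u = rng.choice(list(G.nodes))
        for _ in range(walk_len):
            u = rng.choice(list(G.neighbors(u)))
        targets = {u}
        while len(targets) < m:
            if rng.random() < p_triangle:
                # Triangle step: also attach to a neighbor of u, closing a
                # triangle with the (new, u) edge and raising clustering.
                targets.add(rng.choice(list(G.neighbors(u))))
            else:
                # Otherwise continue the walk to find a fresh target.
                u = rng.choice(list(G.neighbors(u)))
                targets.add(u)
        G.add_node(new)
        G.add_edges_from((new, t) for t in targets)
    return G
```

Raising `p_triangle` closes more triangles per new node, tuning the clustering coefficient upward without changing the degree-driven attachment.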

Relevance: 20.00%

Abstract:

The implementation of abstract machines involves complex decisions regarding, e.g., data representation, opcodes, or instruction specialization levels, all of which affect the final performance of the emulator and the size of the bytecode programs in ways that are often difficult to foresee. Besides, studying alternatives by implementing abstract machine variants is a time-consuming and error-prone task because of the level of complexity and optimization of competitive implementations, which makes them generally difficult to understand, maintain, and modify. This also makes it hard to generate specific implementations for particular purposes. To ameliorate those problems, we propose a systematic approach to the automatic generation of implementations of abstract machines. Different parts of their definition (e.g., the instruction set or the internal data and bytecode representation) are kept separate and automatically assembled in the generation process. Alternative versions of the abstract machine are therefore easier to produce, and variants of their implementation can be created mechanically, with specific characteristics for a particular application if necessary. We illustrate the practicality of the approach by reporting on an implementation of a generator of production-quality WAMs which are specialized for executing a particular fixed (set of) program(s). The experimental results show that the approach is effective in reducing emulator size.
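
As a toy illustration of the approach, in Python rather than the C-level generator described above: the instruction-set definition is kept separate as data, and a working emulator is assembled from it mechanically. The stack machine and instruction names here are invented for the example:

```python
# The instruction set is declared as data, separate from the dispatch
# machinery, so emulator variants can be assembled mechanically.
INSTRUCTIONS = {
    # name: (operand count, semantic action on the machine state)
    "push":  (1, lambda st, a: st["stack"].append(a)),
    "add":   (0, lambda st: st["stack"].append(st["stack"].pop() + st["stack"].pop())),
    "print": (0, lambda st: print(st["stack"][-1])),
}

def make_emulator(instructions):
    """Assemble a bytecode interpreter from an instruction-set definition."""
    opcodes = {name: i for i, name in enumerate(instructions)}
    table = list(instructions.values())

    def run(bytecode):
        state = {"stack": []}
        pc = 0
        while pc < len(bytecode):
            arity, action = table[bytecode[pc]]
            action(state, *bytecode[pc + 1 : pc + 1 + arity])
            pc += 1 + arity
        return state

    return opcodes, run

opcodes, run = make_emulator(INSTRUCTIONS)
# Compute 2 + 3 and print the result.
run([opcodes["push"], 2, opcodes["push"], 3, opcodes["add"], opcodes["print"]])
```

Swapping in a different `INSTRUCTIONS` table, or a specialized subset for one fixed program, yields a new emulator variant with no change to the dispatch machinery, which is the point of the separation.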

Relevance: 20.00%

Abstract:

We establish a refined version of the Second Law of Thermodynamics for Langevin stochastic processes describing mesoscopic systems driven by conservative or non-conservative forces and interacting with thermal noise. The refinement is based on Monge-Kantorovich optimal mass transport and becomes relevant for processes far from the quasi-stationary regime. The general discussion is illustrated by a numerical analysis of the optimal memory-erasure protocol for a model of a micron-sized particle manipulated by optical tweezers.
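
Schematically, and reconstructed from the abstract rather than quoted from the paper: for an overdamped Langevin system with mobility μ in contact with a bath at temperature T, the optimal-transport refinement bounds the total entropy production of a protocol of duration τ from below:

```latex
\[
T\,\Delta S_{\mathrm{tot}} \;\ge\; \frac{W_2\!\left(\rho_0,\rho_\tau\right)^{2}}{\mu\,\tau},
\]
```

where W_2 is the Monge-Kantorovich (Wasserstein-2) distance between the initial and final probability densities. The ordinary Second Law asserts only that the entropy production is non-negative; since the right-hand side scales as 1/τ, the refinement becomes relevant precisely for fast protocols far from the quasi-stationary regime, and it is saturated by the optimal mass transport, as exploited in the memory-erasure example.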

Relevance: 20.00%

Abstract:

In recent years, space agencies have shown a growing interest in optical wireless as an alternative to wired and radio-frequency communications. The use of these techniques for intra-spacecraft communications reduces the effect of take-off acceleration and vibrations on the systems by avoiding the need for rugged connectors, and provides a significant mass reduction. Diffuse transmission also eases the design process, as terminals can be placed almost anywhere without tight planning to ensure proper system behaviour. Previous studies have compared the performance of radio-frequency and infrared optical communications. In an intra-satellite environment, optical techniques help reduce EMI-related problems, and their main disadvantages, multipath dispersion and the need for line-of-sight, can be neglected due to the reduced cavity size. Channel studies demonstrate that the effect of the channel can be neglected in small environments if the data bandwidth is below a few hundred MHz.

Relevance: 20.00%

Abstract:

The algorithms and graphical user interface of the software package OPT-PROx were developed to meet food engineering needs related to the simulation and optimization of canned food thermal processing. The OPT-PROx package (http://tomakechoice.com/optprox/index.html) uses an adaptive random search algorithm, and a modification of it coupled with a penalty-function approach, together with finite difference methods with cubic spline approximation. The developed software can solve a diverse range of thermal food processing optimization problems with different objectives and required constraints. The geometries supported by OPT-PROx are: (1) cylinder, (2) rectangle, (3) sphere. The mean-square-error minimization principle is used to estimate the heat transfer coefficient of the food to be heated under optimal conditions. The user-friendly dialogue and the numerical procedures employed make the OPT-PROx software useful to food scientists in research and education, as well as to engineers involved in the optimization of thermal food processing.
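
As a generic sketch of the two named ingredients, adaptive random search plus a penalty-function treatment of constraints, and not OPT-PROx's actual internals:

```python
import numpy as np

def adaptive_random_search(f, g, x0, bounds, iters=2000, penalty=1e3, seed=0):
    """Minimize f(x) subject to g(x) <= 0 via a penalized objective and an
    adaptive random search: the step size grows after an improvement and
    shrinks after a failure. A generic sketch only."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T

    def cost(x):
        return f(x) + penalty * max(0.0, g(x)) ** 2   # quadratic penalty

    x = np.asarray(x0, dtype=float)
    best = cost(x)
    step = 0.1 * (hi - lo)
    for _ in range(iters):
        cand = np.clip(x + step * rng.standard_normal(x.size), lo, hi)
        c = cost(cand)
        if c < best:
            x, best = cand, c
            step *= 1.2        # expand the search radius on success
        else:
            step *= 0.98       # contract it on failure
    return x, best

# Example: minimize (x - 3)^2 subject to x <= 2; the answer is x = 2.
x_opt, f_opt = adaptive_random_search(lambda x: (x[0] - 3) ** 2,
                                      lambda x: x[0] - 2.0,
                                      x0=[0.0], bounds=[(-5.0, 5.0)])
```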

Relevance: 20.00%

Abstract:

Mersenne Twister (MT) uniform random number generators are key cores for the hardware acceleration of Monte Carlo simulations. In this work, two different architectures are studied: besides the classical table-based architecture, a new architecture based on a circular buffer, specifically targeting FPGAs, is proposed. A 30% performance improvement over the fastest previous work has been obtained. The applicability of the proposed MT architectures has been proven in a high-performance Gaussian RNG.
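
A software reference model of the circular-buffer idea: instead of regenerating the whole 624-word state table in a batch, the state is twisted one word per output with indices wrapped modulo N. FPGA specifics such as BRAM mapping and pipelining are outside the scope of a software sketch; this model reproduces the standard MT19937 stream:

```python
N, M = 624, 397
MATRIX_A, UPPER, LOWER = 0x9908B0DF, 0x80000000, 0x7FFFFFFF

class MT19937Circular:
    """MT19937 reference model that treats its state as a circular buffer:
    one word is twisted per output, with indices wrapped modulo N, instead
    of regenerating the whole 624-word table at once."""
    def __init__(self, seed=5489):
        self.mt = [0] * N
        self.mt[0] = seed & 0xFFFFFFFF
        for i in range(1, N):   # standard MT19937 initialization
            self.mt[i] = (1812433253 * (self.mt[i-1] ^ (self.mt[i-1] >> 30)) + i) & 0xFFFFFFFF
        self.i = 0

    def next_u32(self):
        mt, i = self.mt, self.i
        # Twist exactly one word, reading neighbours modulo N.
        y = (mt[i] & UPPER) | (mt[(i + 1) % N] & LOWER)
        mt[i] = mt[(i + M) % N] ^ (y >> 1) ^ (MATRIX_A if y & 1 else 0)
        y = mt[i]               # temper the freshly twisted word
        y ^= y >> 11
        y ^= (y << 7) & 0x9D2C5680
        y ^= (y << 15) & 0xEFC60000
        y ^= y >> 18
        self.i = (i + 1) % N
        return y & 0xFFFFFFFF

# First output for the reference seed 5489 is the well-known 3499211612.
assert MT19937Circular().next_u32() == 3499211612
```

The appeal for hardware is that the per-output work and addressing pattern are constant: two reads and one write at offsets fixed modulo N, which maps naturally onto a dual-port memory, whereas the table-based design alternates bulk regeneration with table reads.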