977 results for statistical techniques


Relevance: 20.00%

Abstract:

Observations in daily practice are sometimes registered as positive values larger than a given threshold α. The sample space is in this case the interval (α,+∞), α > 0, which can be structured as a real Euclidean space in different ways. This fact opens the door to alternative statistical models depending not only on the assumed distribution function, but also on the metric considered appropriate, i.e. the way differences are measured and, thus, how variability is assessed.
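A minimal sketch of one such alternative structure, assuming the common choice of mapping (α, +∞) onto the real line via x ↦ log(x − α); the function names `to_coord`, `from_coord` and `distance` are ours, purely for illustration:

```python
import math

def to_coord(x, alpha):
    """Map an observation in (alpha, +inf) to the real line."""
    return math.log(x - alpha)

def from_coord(y, alpha):
    """Inverse map: from the real line back to (alpha, +inf)."""
    return alpha + math.exp(y)

def distance(x1, x2, alpha):
    """Distance induced on (alpha, +inf) by the ordinary metric on R."""
    return abs(to_coord(x1, alpha) - to_coord(x2, alpha))
```

Under this metric, the distance between 2 and 3 for α = 1 is |log 1 − log 2| ≈ 0.69, whereas the ordinary Euclidean distance is 1: the choice of metric changes how differences, and hence variability, are measured.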

Relevance: 20.00%

Abstract:

This paper is a first draft of the principle of statistical modelling on coordinates. Several causes, which would take long to detail, have led to this situation close to the deadline for submitting papers to CODAWORK'03. The main one is the fast development of the approach over the last months, which makes previous drafts appear obsolete. The present paper contains the essential parts of the state of the art of this approach from my point of view. I would like to acknowledge many clarifying discussions with the group of people working in this field in Girona, Barcelona, Carrick Castle, Firenze, Berlin, Göttingen, and Freiberg. They have given a lot of suggestions and ideas. Nevertheless, there might still be errors or unclear aspects which are exclusively my fault. I hope this contribution serves as a basis for further discussions and new developments.

Relevance: 20.00%

Abstract:

BACKGROUND Available screening tests for dementia are of limited usefulness because they are influenced by the patient's culture and educational level. The Eurotest, an instrument based on the knowledge and handling of money, was designed to overcome these limitations. The objective of this study was to evaluate the diagnostic accuracy of the Eurotest in identifying dementia in customary clinical practice. METHODS A cross-sectional, multi-center, naturalistic phase II study was conducted. The Eurotest was administered to consecutive patients, older than 60 years, in general neurology clinics. The patients' condition was classified as dementia or no dementia according to DSM-IV diagnostic criteria. We calculated sensitivity (Sn), specificity (Sp) and the area under the ROC curve (aROC) with 95% confidence intervals. The influence of social and educational factors on scores was evaluated with multiple linear regression analysis, and the influence of these factors on diagnostic accuracy was evaluated with logistic regression. RESULTS Sixteen neurologists recruited a total of 516 participants: 101 with dementia, 380 without dementia, and 35 who were excluded. Of the 481 participants who took the Eurotest, 38.7% were totally or functionally illiterate and 45.5% had received no formal education. The mean time needed to administer the test was 8.2 ± 2.0 minutes. The best cut-off point was 20/21, with Sn = 0.91 (0.84-0.96), Sp = 0.82 (0.77-0.85), and aROC = 0.93 (0.91-0.95). Neither the scores on the Eurotest nor its diagnostic accuracy were influenced by social or educational factors. CONCLUSION This naturalistic and pragmatic study shows that the Eurotest is a rapid, simple and useful screening instrument, which is free from educational influences and has appropriate internal and external validity.
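As a sketch of how the reported accuracy measures are computed, assuming (as the 20/21 cut-off implies) that scores of 20 or below are treated as positive for dementia; the scores below are invented, not the study's data:

```python
def screen(scores_dementia, scores_no_dementia, cutoff=20):
    """Sensitivity and specificity of a score-based screening test."""
    tp = sum(1 for s in scores_dementia if s <= cutoff)    # true positives
    fn = len(scores_dementia) - tp                         # false negatives
    tn = sum(1 for s in scores_no_dementia if s > cutoff)  # true negatives
    fp = len(scores_no_dementia) - tn                      # false positives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Toy data: 10 patients with dementia, 10 without.
sn, sp = screen([12, 15, 18, 19, 20, 20, 17, 22, 14, 16],
                [25, 28, 21, 30, 19, 27, 26, 24, 23, 29])
```

Sweeping `cutoff` over the score range and plotting sensitivity against 1 − specificity traces the ROC curve whose area (aROC) the study reports.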

Relevance: 20.00%

Abstract:

Compositional data naturally arises from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes available very powerful statistical facilities at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
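The paper's library is written in R, but the core log-ratio idea can be sketched in a few lines of Python; the centred log-ratio (clr) transform below is one standard compositional technique of the kind referred to (the function name and composition are ours, not the library's API):

```python
import math

def clr(composition):
    """Centred log-ratio: log of each part over the geometric mean."""
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]

# A ceramic-like composition (parts of a whole, e.g. oxide percentages).
coords = clr([55.0, 25.0, 15.0, 5.0])
```

The clr coordinates sum to zero, so standard multivariate methods applied to them respect the compositional constraint, which is exactly how traditional exploratory analysis and specialist log-ratio techniques can be combined.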

Relevance: 20.00%

Abstract:

PURPOSE. To evaluate potential risk factors for the development of multiple sclerosis in Brazilian patients. METHOD. A case-control study was carried out in 81 patients enrolled at the Department of Neurology of the Hospital da Lagoa in Rio de Janeiro, and 81 paired controls. A standardized questionnaire on demographic, social and cultural variables, and medical and family history was used. Statistical analysis was performed using descriptive statistics and conditional logistic regression models with the SPSS for Windows software program. RESULTS. Having standard vaccinations (vaccinations specified by the Brazilian government) (OR=16.2; 95% CI=2.3-115.2), smoking (OR=7.6; 95% CI=2.1-28.2), being single (OR=4.7; 95% CI=1.4-15.6) and eating animal brain (OR=3.4; 95% CI=1.2-9.8) increased the risk of developing MS. CONCLUSIONS. The results of this study may contribute towards better awareness of the epidemiological characteristics of Brazilian patients with multiple sclerosis.
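The study used conditional logistic regression; as a simpler illustration of where figures like OR = 7.6 (95% CI 2.1-28.2) come from, here is the crude odds ratio with a Wald confidence interval from a 2×2 table (the counts below are invented for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d):
    """a=exposed cases, b=exposed controls, c=unexposed cases, d=unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)  # lower 95% limit
    hi = math.exp(math.log(or_) + 1.96 * se)  # upper 95% limit
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(40, 20, 41, 61)
```

An interval whose lower limit stays above 1, as for all four factors reported above, is what marks the exposure as a statistically significant risk factor.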

Relevance: 20.00%

Abstract:

INTRODUCTION Functional imaging studies of addiction following protracted abstinence have not been systematically conducted to look at the associations between severity of use of different drugs and brain dysfunction. Findings from such studies may be relevant for implementing specific interventions for treatment. The aim of this study was to examine the association between resting-state regional brain metabolism (measured with 18F-fluorodeoxyglucose positron emission tomography, FDG-PET) and the severity of use of cocaine, heroin, alcohol, MDMA and cannabis in a sample of polysubstance users with prolonged abstinence from all drugs used. METHODS Our sample consisted of 49 polysubstance users enrolled in residential treatment. We conducted correlation analyses between estimates of use of cocaine, heroin, alcohol, MDMA and cannabis and brain metabolism (BM), using Statistical Parametric Mapping voxel-based (VB) whole-brain analyses. In all correlation analyses conducted for each of the drugs we controlled for the co-abuse of the other drugs used. RESULTS The analysis showed significant negative correlations between severity of heroin, alcohol, MDMA and cannabis use and BM in the dorsolateral prefrontal cortex (DLPFC) and temporal cortex. Alcohol use was further associated with lower metabolism in the frontal premotor cortex and putamen, and stimulant use with the parietal cortex. CONCLUSIONS Duration of use of different drugs negatively correlated with overlapping regions in the DLPFC, whereas severity of cocaine, heroin and alcohol use selectively impacts parietal, temporal, and frontal-premotor/basal ganglia regions, respectively. Knowledge of these associations could be useful in clinical practice, since different brain alterations have been associated with different patterns of execution that may affect the rehabilitation of these patients.

Relevance: 20.00%

Abstract:

Most of the economic literature has presented its analysis under the assumption of a homogeneous capital stock. However, capital composition differs across countries. What has been the pattern of capital composition associated with world economies? We make an exploratory statistical analysis based on compositional data transformed by Aitchison log-ratio transformations, and we use tools for visualizing and measuring statistical estimators of association among the components. The goal is to detect distinctive patterns in the composition. As initial findings it could be cited that:
1. Sectorial components behaved in a correlated way, building industries on one side and, in a less clear view, equipment industries on the other.
2. Full sample estimation shows a negative correlation between the durable goods component and the other buildings component, and between the transportation and building industries components.
3. Countries with zeros in some components are mainly low income countries at the bottom of the income category and behaved in an extreme way, distorting the main results observed in the full sample.
4. After removing these extreme cases, conclusions seem not very sensitive to the presence of other isolated cases.
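A sketch of the Aitchison log-ratio step mentioned above, using the additive log-ratio (alr) transform with the last component as reference (the component interpretation and numbers are invented):

```python
import math

def alr(composition):
    """Additive log-ratio: log of each part against the last part as reference."""
    ref = composition[-1]
    return [math.log(x / ref) for x in composition[:-1]]

# A capital composition (shares of three asset types summing to 1);
# association measures are then computed on the transformed vector.
y = alr([0.5, 0.3, 0.2])
```

Zeros in a component, as in finding 3, make the log-ratio undefined, which is why those countries distort results and need removal or a zero-replacement strategy before the transformation.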

Relevance: 20.00%

Abstract:

The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author's style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998).

Tirant lo Blanc, a chivalry book, is the main work in Catalan literature, and it was hailed as "the best book of its kind in the world" by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Dámaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of this book was written between 1460 and 1465, but it was not printed until 1490. There is an intense and long-lasting debate around its authorship, sprouting from its first edition, where the introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last quarter of the book is by Galba (?-1490), after the death of Martorell. Some of the authors that support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author.

By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383. Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables of ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of estimating a sudden change-point in those sequences. In the following sections we propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models onto the sequence of chi-square distances between each row profile and the average profile; the one in Section 6 fits models onto the sequence of values taken by the first component of the correspondence analysis, as well as onto sequences of other summary measures like the average word length. In Section 7 we fit models onto the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most methods rely heavily on the use of generalized linear models.
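As a toy version of the change-point problem described above, consider a single marginal binomial sequence (e.g. counts of one word per chapter) and pick the split that maximises the two-segment binomial log-likelihood; this is a simplification of the model-based methods of Sections 4 to 7, with all data invented:

```python
import math

def loglik(successes, totals):
    """Binomial log-likelihood of a segment under its pooled rate."""
    s, n = sum(successes), sum(totals)
    if s == 0 or s == n:
        return 0.0
    p = s / n
    return s * math.log(p) + (n - s) * math.log(1 - p)

def change_point(successes, totals):
    """Split index maximising the two-segment binomial log-likelihood."""
    best_k, best_ll = None, -float("inf")
    for k in range(1, len(successes)):
        ll = loglik(successes[:k], totals[:k]) + loglik(successes[k:], totals[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# Invented word counts per chapter: the rate shifts after the 4th chapter.
k = change_point([1, 2, 1, 2, 8, 9, 8, 9], [20] * 8)
```

With the rate jumping from roughly 7% to roughly 42% after the fourth position, the split is recovered at k = 4; applied chapter by chapter to the real sequences, the same idea locates the stylistic boundary discussed in the paper.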

Relevance: 20.00%

Abstract:

This work provides a general description of the multi-sensor data fusion concept, along with a new classification of currently used sensor fusion techniques for unmanned underwater vehicles (UUVs). Unlike previous proposals that base the classification on the sensors involved in the fusion, we propose a synthetic approach focused on the techniques involved in the fusion and their applications in UUV navigation. We believe that our approach is better oriented towards the development of sensor fusion systems, since a sensor fusion architecture should first of all be focused on its goals and then on the fused sensors.
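One classical technique that any such classification covers is static inverse-variance fusion of redundant measurements; a minimal sketch (the depth example and numbers are ours, not from the paper):

```python
def fuse(x1, var1, x2, var2):
    """Inverse-variance fusion of two independent noisy measurements.

    Returns the fused estimate and its (reduced) variance.
    """
    w1, w2 = 1 / var1, 1 / var2            # weights: more precise sensor counts more
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    return fused, 1 / (w1 + w2)

# E.g. vehicle depth from a pressure sensor (precise) and a sonar (noisier).
depth, var = fuse(10.0, 0.04, 10.6, 0.16)
```

The fused variance is always smaller than either input variance, which is the basic payoff of fusing redundant sensors for navigation.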

Relevance: 20.00%

Abstract:

Obtaining an automatic 3D profile of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques to retrieve 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. This kind of system belongs to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between camera(s) and projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
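A toy 2D version of that triangulation step, assuming the decoded pattern has already told us which projector ray illuminated the imaged point (the geometry and numbers are invented for illustration):

```python
import math

def triangulate(cam_angle, proj_angle, baseline):
    """Intersect a camera ray and a projector ray in 2D.

    Camera at the origin, projector at (baseline, 0); angles measured
    from the x-axis. Returns the reconstructed point (x, y).
    """
    # Camera ray:    y = tan(cam_angle) * x
    # Projector ray: y = tan(proj_angle) * (x - baseline)
    t1, t2 = math.tan(cam_angle), math.tan(proj_angle)
    x = t2 * baseline / (t2 - t1)
    return x, t1 * x

# Symmetric 60-degree rays meet halfway along the baseline.
x, y = triangulate(math.radians(60), math.radians(120), 1.0)
```

This is why the coding matters: without it, identifying which projector ray corresponds to each image point (the correspondence problem) would have to be solved by search.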

Relevance: 20.00%

Abstract:

Few publications have compared ultrasound (US) to histology in diagnosing schistosomiasis-induced liver fibrosis (LF); none has used magnetic resonance (MR). The aim of this study was to evaluate schistosomal LF using these three methods. Fourteen patients with hepatosplenic schistosomiasis admitted to hospital for surgical treatment of variceal bleeding were investigated. They underwent upper digestive endoscopy, US, MR and wedge liver biopsy. The World Health Organization protocol for US in schistosomiasis was used. Hepatic fibrosis was classified as absent, slight, moderate or intense. Histology and MR confirmed Symmers' fibrosis in all cases; US failed to detect it in one patient. Moderate agreement was found when comparing US to MR; poor agreement was found when US or MR were compared to histology. Re-classifying LF as only slight or intense produced moderate agreement between the imaging techniques and histology. Histomorphometry did not separate slight from intense LF. Two patients with advanced hepatosplenic schistosomiasis presented slight LF. Our data suggest that the presence of the characteristic periportal fibrosis, diagnosed by US, MR or histology, associated with a sign of portal hypertension, defines the severity of the disease. We conclude that imaging techniques are reliable for defining the presence of LF but fail in grading its intensity.
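The degree of agreement referred to above ("moderate", "poor") is conventionally quantified with Cohen's kappa, which corrects raw agreement for chance; a sketch on invented gradings, not the study's 14 patients:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels over the same items."""
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = sum(1 for x, y in zip(a, b) if x == y) / n               # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

# Invented slight/intense gradings by US and by histology.
us = ["slight", "slight", "intense", "intense", "slight", "intense"]
hist = ["slight", "intense", "intense", "intense", "slight", "slight"]
kappa = cohens_kappa(us, hist)
```

By the usual reading, kappa below about 0.2 is poor agreement and 0.4 to 0.6 moderate, which is the scale behind the study's wording.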

Relevance: 20.00%

Abstract:

The absolute necessity of obtaining 3D information of structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. The position of the mobile robot with respect to the scene must be known at all times, and this information must be obtained in the least computing time. Stereo vision is an attractive and widely used method, but it is rather limited for making fast 3D surface maps, due to the correspondence problem. The spatial and temporal correspondence among images can be alleviated using a method based on structured light: this relationship can be found directly by codifying the projected light, so that each imaged region of the projected pattern carries the information needed to solve the correspondence problem. We present the most significant techniques, used in recent years, concerning the coded structured light method.

Relevance: 20.00%

Abstract:

BACKGROUND AND OBJECTIVES The prevalence of hyponutrition in hospitalized patients is very high, and it has been shown to be an important prognostic factor. Most admitted patients depend on hospital food to cover their nutritional demands, so it is important to assess the factors influencing their intake, which may be modified in order to improve it and prevent the consequences of inadequate feeding. In previous works, it has been shown that one of the worst scored characteristics of dishes was the temperature. The aim of this study was to assess the influence of temperature on patients' satisfaction and the amount eaten, depending on whether or not the food was served in isothermal trolleys keeping proper food temperature. MATERIAL AND METHODS We carried out satisfaction surveys of hospitalized patients having regular diets, served with or without isothermal trolleys. The following data were gathered: age, gender, weight, number of visits, mobility, autonomy, amount of orally taken medication, intake of out-of-hospital foods, rating of food temperature, presentation and smokiness, amount of food eaten, and reasons for not eating all the content of the tray. RESULTS Of the 363 surveys, 134 (37.96%) were done on patients with isothermal trays and 229 (62.04%) on patients without them. Sixty percent of the patients reported having eaten less than the normal amount within the last week, the most frequent reason being decreased appetite. During lunch and dinner, 69.3% and 67.7%, respectively, ate half or less of the tray content, the main reasons being lack of appetite (42% at lunch and 40% at dinner), not liking the food (24.3% and 26.2%) or its taste (15.3% and 16.8%). Other less common reasons were the odor, the amount of food, having nausea or vomiting, fatigue, and lack of autonomy. There were no significant differences in the amount eaten by gender, weight, number of visits, amount of medication, or level of physical activity. The food temperature was classified as adequate by 62% of the patients, the presentation by 95%, and the smokiness by 85%. When comparing the patients served with or without isothermal trays, there were no differences with regard to the baseline characteristics analyzed that might have had an influence on the amount eaten. Ninety percent of the patients with an isothermal trolley rated the food temperature as good, as compared with 57.2% of the patients with a conventional trolley, the difference being statistically significant (P < 0.001). Besides, there were differences in the amount of food eaten between patients with and without an isothermal trolley: 41% and 27.7%, respectively, ate all the tray content, the difference being statistically significant (P = 0.007). There were no differences in smokiness or presentation ratings. CONCLUSIONS Most of the patients (60%) had decreased appetite during hospital admission. The percentage of hospitalized patients rating the food temperature as good is higher among patients served with isothermal trolleys. The amount of food eaten by the patients served with isothermal trolleys is significantly higher than in those without them.
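For illustration, the comparison of the proportions eating all the tray content (41% vs 27.7%, P = 0.007) corresponds to a standard two-proportion z-test; the counts below are reconstructed approximately from the reported percentages and group sizes, so they are an assumption, not the study's exact data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for comparing two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# ~55 of 134 patients with isothermal trolleys ate everything,
# vs ~63 of 229 without them.
z = two_proportion_z(55, 134, 63, 229)
p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided normal p-value
```

With these approximate counts the p-value lands close to the reported 0.007, well under the usual 0.05 threshold.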

Relevance: 20.00%

Abstract:

BACKGROUND Evidence associating exposure to water disinfection by-products with reduced birth weight and altered duration of gestation remains inconclusive. OBJECTIVE We assessed exposure to trihalomethanes (THMs) during pregnancy through different water uses and evaluated the association with birth weight, small for gestational age (SGA), low birth weight (LBW), and preterm delivery. METHODS Mother-child cohorts set up in five Spanish areas during the years 2000-2008 contributed data on water ingestion, showering, bathing, and swimming in pools. We ascertained residential THM levels during pregnancy periods through ad hoc sampling campaigns (828 measurements) and regulatory data (264 measurements), which were modeled and combined with personal water use and uptake factors to estimate personal uptake. We defined outcomes following standard definitions and included 2,158 newborns in the analysis. RESULTS Median residential THM ranged from 5.9 μg/L (Valencia) to 114.7 μg/L (Sabadell), and speciation differed across areas. We estimated that 89% of residential chloroform and 96% of brominated THM uptakes were from showering/bathing. The estimated change of birth weight for a 10% increase in residential uptake was -0.45 g (95% confidence interval: -1.36, 0.45 g) for chloroform and 0.16 g (-1.38, 1.70 g) for brominated THMs. Overall, THMs were not associated with SGA, LBW, or preterm delivery. CONCLUSIONS Despite the high THM levels in some areas and the extensive exposure assessment, results suggest that residential THM exposure during pregnancy driven by inhalation and dermal contact routes is not associated with birth weight, SGA, LBW, or preterm delivery in Spain.

Relevance: 20.00%

Abstract:

Over the last decade, the development of statistical models in support of forensic fingerprint identification has been the subject of increasing research attention, spurred on recently by commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. Such models are increasingly seen as useful tools in support of the fingerprint identification process, within or in addition to the ACE-V framework. This paper provides a critical review of recent statistical models from both a practical and theoretical perspective. This includes analysis of models of two different methodologies: Probability of Random Correspondence (PRC) models, which focus on calculating probabilities of the occurrence of fingerprint configurations for a given population, and Likelihood Ratio (LR) models, which use analysis of corresponding features of fingerprints to derive a likelihood value representing the evidential weight for a potential source.
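The LR methodology can be stated in one line: the evidential weight is the probability of the observed fingerprint correspondence under the same-source hypothesis divided by its probability under the different-source (random) hypothesis. A trivial sketch with invented probabilities:

```python
def likelihood_ratio(p_given_same_source, p_given_different_source):
    """Evidential weight of a correspondence as a likelihood ratio."""
    return p_given_same_source / p_given_different_source

# E.g. a configuration of features judged fairly probable under the
# same-source hypothesis but rare in the relevant population.
lr = likelihood_ratio(0.8, 0.0001)
```

An LR above 1 supports the same-source proposition and an LR below 1 supports the different-source proposition; PRC models, by contrast, report only the denominator-style rarity of a configuration in a population.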