886 results for Support Vector Machine (SVM)


Relevance:

30.00%

Abstract:

Geographic information systems (GIS) make it possible to analyze, produce, and edit geographic information. However, these systems fall short in the analysis and support of complex spatial problems. Therefore, when a spatial problem such as land use management requires a multi-criteria perspective, multi-criteria decision analysis is embedded into spatial decision support systems. The analytic hierarchy process (AHP) is one of many multi-criteria decision analysis methods that can be used to support these complex problems. Using its capabilities, we developed a spatial decision support system to aid land use management, which can encompass a broad spectrum of spatial decision problems. The developed decision support system had to accept various formats and types of data as input, in raster or vector format, where vector data could be of polygon, line, or point type. The system was designed to perform its analysis for the study area, the Zambezi River Valley in Mozambique, and the possible solutions for the emerging problems had to cover the entire region. This required the system to process large sets of data and to adjust constantly to the needs of new problems. The developed decision support system is able to process thousands of alternatives using the analytic hierarchy process and to produce an output suitability map for the problems faced.
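The core of the analytic hierarchy process mentioned above is the derivation of a priority vector from a pairwise comparison matrix. The following is a minimal sketch of that step; the criteria names and judgment values are hypothetical illustrations, not taken from the thesis:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three land-use criteria
# (slope, soil quality, distance to water); entry A[i, j] states how much
# more important criterion i is than criterion j on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The AHP priority vector is the principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = eigvecs[:, principal].real
weights = weights / weights.sum()

# Consistency check: CR = CI / RI, with CI = (lambda_max - n) / (n - 1);
# judgments are conventionally acceptable when CR < 0.1.
n = A.shape[0]
lambda_max = eigvals.real[principal]
ci = (lambda_max - n) / (n - 1)
ri = 0.58  # random consistency index for n = 3
cr = ci / ri

print("weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))
```

In the spatial setting, each alternative (e.g., each raster cell) is scored against the criteria and the weighted scores are combined into the suitability map.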

Relevance:

30.00%

Abstract:

Propolis is a chemically complex biomass produced by honeybees (Apis mellifera) from plant resins with the addition of salivary enzymes, beeswax, and pollen. The biological activities described for propolis have also been identified in the resins of the donor plants, but a major challenge for the standardization of the chemical composition and biological effects of propolis remains a better understanding of the influence of seasonality on the chemical constituents of this raw material. Since propolis quality depends, among other variables, on the local flora, which is strongly influenced by biotic and abiotic factors over the seasons, unraveling the effect of the harvest season on the propolis chemical profile is an issue of recognized importance. For this purpose, fast, cheap, and robust analytical techniques seem to be the best choice for large-scale quality control processes in the most demanding markets, e.g., human health applications. Accordingly, UV-Visible (UV-Vis) scanning spectrophotometry of hydroalcoholic extracts (HE) of seventy-three propolis samples, collected over the seasons of 2014 (summer, spring, autumn, and winter) and 2015 (summer and autumn) in southern Brazil, was adopted. Machine learning and chemometric techniques were then applied to the UV-Vis dataset, aiming to gain insight into the effect of seasonality on the claimed chemical heterogeneity of propolis samples determined by changes in the flora of the geographic region under study. Descriptive and classification models were built following a chemometric approach, i.e., principal component analysis (PCA) and hierarchical clustering analysis (HCA), supported by scripts written in the R language. The UV-Vis profiles associated with chemometric analysis allowed the identification of a typical pattern in propolis samples collected in the summer. Importantly, the discrimination based on PCA could be improved by using the dataset of the fingerprint region of phenolic compounds (λ = 280-400 nm), suggesting that besides the biological activities of those secondary metabolites, they also play a relevant role in the discrimination and classification of that complex matrix through bioinformatics tools. Finally, a series of machine learning approaches, e.g., partial least squares-discriminant analysis (PLS-DA), k-Nearest Neighbors (kNN), and Decision Trees, proved complementary to PCA and HCA, allowing relevant information on sample discrimination to be obtained.
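The PCA step of such a chemometric workflow can be sketched as follows. The study used R scripts and real UV-Vis spectra; this is an equivalent illustration in Python on synthetic spectra (the seasonal absorbance band, sample counts, and noise levels are invented assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for the UV-Vis dataset: 73 spectra sampled at
# wavelengths 280-400 nm (the phenolic fingerprint region).
wavelengths = np.arange(280, 401)
n_samples = 73
# Hypothetical seasonal effect: "summer" samples carry an extra absorbance band.
season = rng.integers(0, 2, n_samples)  # 1 = summer, 0 = other seasons
band = np.exp(-((wavelengths - 320) ** 2) / (2 * 15 ** 2))
spectra = (rng.normal(1.0, 0.05, (n_samples, wavelengths.size))
           + 0.5 * season[:, None] * band)

# PCA on mean-centered spectra, as in a standard chemometric workflow.
pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
```

With a real seasonal effect of this kind, the summer samples separate from the rest along the first principal component, which is the pattern the abstract reports.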

Relevance:

30.00%

Abstract:

The chemical composition of propolis is affected by environmental factors and the harvest season, making it difficult to standardize its extracts for medicinal use. By detecting a typical chemical profile associated with propolis from a specific production region or season, certain types of propolis may be selected to obtain a specific pharmacological activity. In this study, propolis samples from three agroecological regions (plain, plateau, and highlands) of southern Brazil, collected over the four seasons of 2010, were investigated through a novel NMR-based metabolomics data analysis workflow. Chemometrics and machine learning algorithms (PLS-DA and RF), including methods to estimate variable importance in classification, were used. The machine learning and feature selection methods permitted the construction of models for propolis sample classification with high accuracy (>75%, reaching 90% in the best case), discriminating samples better by collection season than by harvest region. PLS-DA and RF allowed the identification of biomarkers for sample discrimination, expanding the set of discriminating features and adding relevant information for the identification of the class-determining metabolites. The NMR-based metabolomics analytical platform, coupled with bioinformatics tools, allowed the characterization and classification of Brazilian propolis samples with respect to the metabolite signature of important compounds, i.e., chemical fingerprint, harvest season, and production region.
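The random forest (RF) side of this workflow, classification plus variable importance, can be sketched on synthetic data. The feature layout and seasonal signal below are invented assumptions, not the study's NMR buckets:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for NMR data: 80 samples x 50 spectral features,
# where features 0 and 1 carry a hypothetical seasonal signal.
X = rng.normal(size=(80, 50))
season = rng.integers(0, 4, 80)          # four collection seasons
X[:, 0] += season                        # strongly informative feature
X[:, 1] += (season == 3)                 # weaker informative feature

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X, season)

# Gini-based variable importance highlights the class-determining features,
# playing the role of the biomarker identification step in the abstract.
importances = rf.feature_importances_
top = np.argsort(importances)[::-1][:5]
print("top features by importance:", top)
```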

Relevance:

30.00%

Abstract:

This research aims to advance blink detection in the context of work activity. Rather than patients having to attend a clinic, blinking videos can be acquired in a work environment and then analyzed automatically. This paper therefore presents a methodology for the automatic detection of eye blinks in consumer videos acquired with low-cost web cameras. The methodology includes the detection of the face and eyes of the recorded person, and then analyzes low-level features of the eye region to create a quantitative feature vector. Finally, this vector is classified into one of the two categories considered, open or closed eyes, by machine learning algorithms. The effectiveness of the proposed methodology was demonstrated, as it provides unbiased results with classification errors under 5%.
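The final classification step can be sketched as follows. The two features used here (an eye-aspect-ratio-like measure and a mean intensity) and all numeric values are hypothetical stand-ins, not the paper's actual descriptors:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical low-level eye-region features for 400 frames.
n = 400
labels = rng.integers(0, 2, n)                         # 1 = open, 0 = closed
ear = 0.12 + 0.18 * labels + rng.normal(0, 0.03, n)    # eye-aspect-ratio-like
intensity = 0.5 - 0.1 * labels + rng.normal(0, 0.05, n)
X = np.column_stack([ear, intensity])

# Train a binary classifier on the feature vectors and report the error rate.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
error = 1 - clf.score(X_te, y_te)
print(f"classification error: {error:.3f}")
```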

Relevance:

30.00%

Abstract:

Magdeburg, University, Faculty of Electrical Engineering and Information Technology, Dissertation, 2010

Relevance:

30.00%

Abstract:

In recent years, kernel methods have proven to be very powerful tools in many application domains in general and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimensionality, few labeled samples, and different noise sources) are dealt with efficiently by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement the input-output mapping but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is encoded into SVM classifiers by modifying the model loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.
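The mechanism of "modifying the kernel function" can be illustrated with a custom-kernel SVM. The composite kernel below (an RBF term plus a linear term on toy data) only shows where such a modification plugs in; the paper's actual output-similarity terms are not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(3)

# Toy stand-in for pixel feature vectors from a VHR image (not real data).
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def custom_kernel(A, B):
    # A modified kernel: an RBF term plus a scaled linear term. Structured
    # output methods alter this function further to reflect relations between
    # output labels; only the composite form is sketched here.
    return rbf_kernel(A, B, gamma=0.5) + 0.1 * (A @ B.T)

# scikit-learn accepts a callable kernel returning the Gram matrix.
clf = SVC(kernel=custom_kernel).fit(X, y)
print("training accuracy:", round(clf.score(X, y), 3))
```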

Relevance:

30.00%

Abstract:

Triatoma rubrovaria has become the most frequently captured triatomine species since the control of T. infestans in the state of Rio Grande do Sul (RS), Brazil. The aim of this study was to evaluate aspects of the vectorial competence of T. rubrovaria using nymphs reared in the laboratory under ambient conditions of temperature and humidity and fed on mice. The average developmental period of T. rubrovaria was 180.1 days. The percentage of defecation shortly after feeding was even higher than in previous studies, in which samples of T. rubrovaria subjected to a slight starvation period before the blood meal were used. The results obtained support the earlier indication that T. rubrovaria presents bionomic characteristics propitious for being a good vector of Trypanosoma cruzi to humans. Its process of domiciliary invasion must therefore be continuously monitored.

Relevance:

30.00%

Abstract:

Freshwater lymnaeid snails are crucial in defining the transmission and epidemiology of fascioliasis. In South America, human endemic areas are related to high altitudes in Andean regions. The species Lymnaea diaphana has, however, been involved in low-altitude areas of Chile, Argentina, and Peru where human infection also occurs. Complete nuclear ribosomal DNA 18S, internal transcribed spacer (ITS)-2 and ITS-1, and fragments of the mitochondrial DNA 16S and cytochrome c oxidase (cox)1 genes of L. diaphana specimens from its type locality yielded sequences of 1,848, 495, 520, 424, and 672 bp, respectively. Comparisons with New and Old World Galba/Fossaria, Palaearctic stagnicolines, Nearctic stagnicolines, Old World Radix, and Pseudosuccinea led to the conclusions that (i) L. diaphana shows sequences very different from those of all other lymnaeids, (ii) each marker allows its differentiation, except the cox1 amino acid sequence, and (iii) L. diaphana is not a fossarine lymnaeid but rather an archaic relict form derived from the oldest North American stagnicoline ancestors. Phylogeny and large genetic distances support the genus Pectinidens as the first stagnicoline representative in the southern hemisphere, having long ago colonized extreme regions of the world such as southernmost Patagonia. The phylogenetic link of L. diaphana with the stagnicoline group may shed light on the peculiar low-altitude epidemiological scenario of fascioliasis noted above.

Relevance:

30.00%

Abstract:

In Guatemala, the Ministry of Health (MoH) began a vector control project with Japanese cooperation in 2000 to reduce the risk of Chagas disease infection. Rhodnius prolixus is one of the principal vectors and is targeted for elimination. The control method consisted of extensive residual insecticide spraying campaigns, followed by community-based surveillance with selective respraying. Interventions in nine endemic departments identified 317 villages with R. prolixus out of 4,417 villages surveyed. Two cycles of residual insecticide spraying covered over 98% of the houses in the identified villages. All 14 villages found to be reinfested were resprayed. Between 2000-2003 and 2008, the number of infested villages decreased from 317 to two, and the house infestation rate was reduced from 0.86% to 0.0036%. Seroprevalence rates in 2004-2005, when compared with an earlier study in 1998, showed a significant decline from 5.3% to 1.3% among schoolchildren in endemic areas. The total operational cost was US$ 921,815, and the cost ratio between the preparatory, attack, and surveillance phases was approximately 2:12:1. In 2008, Guatemala was certified for interruption of Chagas disease transmission by R. prolixus. The process was facilitated by existing knowledge of vector control and notable commitment by the MoH, as well as political, managerial, and technical support from external stakeholders.

Relevance:

30.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

This thesis is devoted to the analysis, modeling, and visualization of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? Most machine learning algorithms are universal, adaptive, nonlinear, robust, and efficient modeling tools. They can find solutions to classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organizing (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF), and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression, and density estimation.

Exploratory data analysis (EDA) is the initial and a very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning principles. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect the presence of spatial patterns describable by two-point statistics. The machine learning approach to ESDA is presented through the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties.

An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. The general regression neural network is proposed as an efficient model for this task. The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, particularly under emergency conditions.

The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland, and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil, and water pollution by radionuclides and heavy metals; classification of soil types and hydrogeological units; decision-oriented mapping with uncertainties; and assessment and susceptibility mapping of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to create a user-friendly and easy-to-use interface.
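The general regression neural network highlighted for automatic mapping is, at its core, a Nadaraya-Watson kernel regression estimator. A minimal sketch on synthetic one-dimensional data (not the SIC 2004 dataset, and with an arbitrarily chosen kernel width) is:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """General regression neural network: a Gaussian-kernel-weighted average
    of the training targets (the Nadaraya-Watson estimator)."""
    # Squared Euclidean distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.1, 200)

X_test = np.linspace(0, 1, 50)[:, None]
y_hat = grnn_predict(X, y, X_test, sigma=0.05)
rmse = np.sqrt(np.mean((y_hat - np.sin(2 * np.pi * X_test[:, 0])) ** 2))
print(f"RMSE against the noise-free signal: {rmse:.3f}")
```

In practice the single smoothing parameter sigma is tuned by cross-validation, which is what makes the GRNN attractive for automatic mapping: there is almost nothing else to tune.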

Relevance:

30.00%

Abstract:

Parasite population structure is often thought to be largely shaped by that of its host. In the case of a parasite with a complex life cycle, two host species, each with its own patterns of demography and migration, spread the parasite. However, the population structure of the parasite is predicted to resemble only that of the more vagile host species. In this study, we tested this prediction in the context of a vector-transmitted parasite. We sampled the haemosporidian parasite Polychromophilus melanipherus across its European range, together with its bat fly vector Nycteribia schmidlii and its host, the bent-winged bat Miniopterus schreibersii. Based on microsatellite analyses, the wingless vector, and not the bat host, showed the least population structure and should therefore be considered the more vagile host. Genetic distance matrices for all three species were compared based on a mitochondrial DNA fragment. Both host and vector populations followed an isolation-by-distance pattern across the Mediterranean, but the parasite did not. Mantel tests found no correlation between the parasite and either the host or vector populations. We therefore found no support for our hypothesis; the parasite population structure matched neither that of the vector nor that of the host. Instead, we propose a model in which the parasite's gene flow is represented by the added effects of host and vector dispersal patterns.
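The Mantel test used above correlates two distance matrices while respecting their non-independence, by permuting individuals (rows and columns together) to build the null distribution. A minimal sketch on toy matrices (not the study's genetic data) is:

```python
import numpy as np

def mantel_test(D1, D2, permutations=999, rng=None):
    """Simple Mantel test: Pearson correlation between the off-diagonal
    entries of two distance matrices, with a row/column permutation null."""
    rng = rng or np.random.default_rng(0)
    idx = np.triu_indices_from(D1, k=1)
    r_obs = np.corrcoef(D1[idx], D2[idx])[0, 1]
    n = D1.shape[0]
    count = 0
    for _ in range(permutations):
        p = rng.permutation(n)
        r_perm = np.corrcoef(D1[idx], D2[p][:, p][idx])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    p_value = (count + 1) / (permutations + 1)
    return r_obs, p_value

# Toy example: a "parasite" distance matrix that is a noisy copy of the
# "host" matrix, so the test should detect a correlation.
rng = np.random.default_rng(5)
coords = rng.uniform(0, 1, (20, 2))
D_host = np.sqrt(((coords[:, None] - coords[None, :]) ** 2).sum(-1))
D_parasite = D_host + rng.normal(0, 0.05, D_host.shape)
D_parasite = (D_parasite + D_parasite.T) / 2
np.fill_diagonal(D_parasite, 0)

r, p = mantel_test(D_host, D_parasite)
print(f"Mantel r = {r:.2f}, p = {p:.3f}")
```

In the study the analogous tests came out non-significant, which is what rejected the host-shaped and vector-shaped hypotheses.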

Relevance:

30.00%

Abstract:

The aim of this study is to present an activity-based costing spreadsheet tool for analyzing logistics costs. The tool can be used both by customer companies and by logistics service providers. The study discusses the influence of different activity models on costs. Additionally, the paper discusses logistical performance across the total supply chain. The study was carried out using an analytical research approach, supplemented by literature material. The cost structure analysis was based on the theory of activity-based management. The study was limited to spare-part logistics in the machine-shop industry. The outlines of the logistics services and logistical performance discussed in this report are based on the new logistics business concept (LMS concept), which was presented earlier in the Valssi project. One of the aims of this study is to increase awareness of the effect of different activity models on logistics costs. The report paints an overall picture of the business environment and the requirements for the new logistics concept.
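The activity-based costing logic behind such a spreadsheet tool can be shown in miniature: each cost pool is divided by its activity driver volume to obtain a rate, and a cost object is charged for the driver quantities it consumes. The activities and all figures below are invented for illustration, not taken from the Valssi project:

```python
# Activity-based costing in miniature. All figures are illustrative.

activity_costs = {          # annual cost pool per activity (EUR)
    "order handling": 120_000,
    "warehousing": 300_000,
    "transportation": 450_000,
}
driver_volumes = {          # total annual driver volume per activity
    "order handling": 8_000,     # orders
    "warehousing": 50_000,       # pallet-days
    "transportation": 900_000,   # ton-kilometres
}

# Driver rate = cost pool / driver volume
rates = {a: activity_costs[a] / driver_volumes[a] for a in activity_costs}

# Driver consumption of one hypothetical spare-part delivery
consumption = {"order handling": 1, "warehousing": 12, "transportation": 350}
delivery_cost = sum(rates[a] * q for a, q in consumption.items())
print(f"cost of the delivery: {delivery_cost:.2f} EUR")
```

Changing the activity model (splitting or merging pools, or choosing different drivers) changes the rates and hence the reported cost of the same delivery, which is exactly the influence the study examines.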

Relevance:

30.00%

Abstract:

Psychophysical studies suggest that humans preferentially use a narrow band of low spatial frequencies for face recognition. Here we asked whether artificial face recognition systems show improved recognition performance at the same spatial frequencies as humans. To this end, we estimated recognition performance over a large database of face images by computing three discriminability measures: Fisher linear discriminant analysis, non-parametric discriminant analysis, and mutual information. In order to address frequency dependence, discriminabilities were measured as a function of (filtered) image size. All three measures revealed a maximum at the same image sizes, where the spatial frequency content corresponds to the psychophysically determined frequencies. Our results therefore support the notion that the critical band of spatial frequencies for face recognition in humans and machines follows from inherent properties of face images, and that the use of these frequencies is associated with optimal face recognition performance.
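The first of the three measures, the Fisher criterion, rates a feature by the ratio of between-class to within-class scatter. A one-dimensional sketch on synthetic data (not the face database used in the study) shows the idea:

```python
import numpy as np

def fisher_criterion(x, labels):
    """Fisher discriminability of one feature: variance of the class means
    divided by the mean within-class variance."""
    classes = np.unique(labels)
    means = np.array([x[labels == c].mean() for c in classes])
    within = np.mean([x[labels == c].var() for c in classes])
    between = means.var()
    return between / within

rng = np.random.default_rng(6)
labels = np.repeat([0, 1], 100)

# Two synthetic "features": one whose class means are well separated,
# and one that is pure noise.
good = rng.normal(labels * 2.0, 1.0)
noise = rng.normal(0.0, 1.0, labels.size)

print("discriminability (informative):", round(fisher_criterion(good, labels), 2))
print("discriminability (noise):", round(fisher_criterion(noise, labels), 2))
```

In the study, the analogous quantity was computed on face images band-pass filtered at different spatial frequencies, so that the criterion traces out discriminability as a function of frequency band.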

Relevance:

30.00%

Abstract:

The purpose of this Master's thesis was to develop the drive and support of a fiberizing drum apparatus. Owing to its scope, the work was limited to the five smallest sizes of the product series. The first part of the thesis covers the theory of fiberizing and the equipment suited to it. The start-up power requirement, which is essential for drive design, was examined from first principles of physics. Background information for the theory was drawn from earlier studies and from the literature. As a result of this examination, the theory was refined and now corresponds to reality better than before. Methods of systematic machine design were used in the search for support and drive implementation alternatives. The resulting ideas were evaluated on techno-economic grounds, and the best alternatives were selected for further development. In the further development phase, the solution alternatives were examined at the component level, and detailed cost calculations were prepared for them. As a result of the work, a support and drive implementation alternative is presented that enables significant cost savings; the high cost-saving target of 30 percent was achieved.
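The first-principles start-up power examination mentioned above typically reduces to an accelerating torque T = J·α plus a friction torque, with power P = T·ω at the target speed. The sketch below uses invented drum dimensions, speeds, and friction values, not the thesis's figures:

```python
import math

# Illustrative drum parameters (thin-shell approximation J = m * r^2)
m = 2000.0        # drum mass (kg)
r = 0.6           # drum radius (m)
J = m * r ** 2    # moment of inertia (kg*m^2)

n_target = 300 / 60        # target speed (rev/s)
omega = 2 * math.pi * n_target
t_accel = 10.0             # assumed run-up time (s)
alpha = omega / t_accel    # angular acceleration (rad/s^2)

T_friction = 150.0                 # assumed friction torque (N*m)
T = J * alpha + T_friction         # required torque during run-up
P_start = T * omega                # power at the end of run-up (W)
print(f"start-up torque {T:.0f} N*m, power {P_start/1000:.1f} kW")
```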

Relevance:

30.00%

Abstract:

A lime kiln is used as part of the modern kraft pulp process to produce burnt lime from lime mud. This rotating kiln rests on support rollers, which are traditionally carried by journal bearings. Since the continuous growth in the production of pulp mills requires larger lime kilns, the traditional bearing construction has become unreliable; the main problem involves, in particular, the running-in phase of the bearings. In the present thesis, a new type of support roller was developed using a systematic approach to machine design. Structural analysis of the critical parts of the selected structure was conducted by the finite element method, and the operation of the hydrodynamic bearings was examined by analytical methods. As a result of this work, a new type of support for rotating kilns was designed, which is more reliable and easier to service. A new support roller geometry is described, which provides significant cost savings.
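One standard analytical check on a hydrodynamic journal bearing, not necessarily the one used in the thesis, is the Sommerfeld number S = (r/c)²·μN/P, which relates geometry, lubricant viscosity, speed, and unit load. All values below are illustrative assumptions:

```python
# Sommerfeld number for a journal bearing: S = (r/c)^2 * mu * N / P, where
# r is the journal radius, c the radial clearance, mu the lubricant dynamic
# viscosity, N the rotational speed (rev/s), and P the load per projected
# bearing area. All figures are illustrative, not taken from the thesis.

r = 0.25          # journal radius (m)
c = 0.25e-3       # radial clearance (m)
mu = 0.05         # dynamic viscosity (Pa*s)
N = 0.1           # shaft speed (rev/s); rotating kilns turn slowly
W = 5.0e5         # bearing load (N)
L = 0.4           # bearing length (m)

P = W / (2 * r * L)               # load per projected area (Pa)
S = (r / c) ** 2 * mu * N / P
print(f"unit load P = {P/1e6:.2f} MPa, Sommerfeld number S = {S:.4f}")
```

The low speed of a kiln keeps S small, which is precisely why the running-in phase is critical: the oil film is thin until full hydrodynamic operation is reached.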