867 results for "least square-support vector machine"


Relevance: 100.00%

Abstract:

The study aims to identify the factors that influence the behavioral intention to adopt an academic information system (SIE), in an environment of mandatory use, applied to the procurement process at the Federal University of Pará (UFPA). To this end, a model combining innovation adoption and the Technology Acceptance Model (TAM) was used, focused on attitudes and intentions regarding behavioral intention. The research took a quantitative approach, through a survey of a sample of 96 administrative staff of the institution studied. For data analysis, structural equation modeling (SEM) was used, estimated by the partial least squares method (Partial Least Squares Path Modeling, PLS-PM). As for results, the constructs attitude and subjective norms were confirmed as strong predictors of behavioral intention at a pre-adoption stage. Although use of the SIE is mandatory, perceived voluntariness also predicts behavioral intention. Regarding attitude, classic TAM variables, such as ease of use and perceived usefulness, appear as the main influences on attitude towards the system. It is hoped that the results of this study may support more efficient management of the process of implementing information systems and technologies, particularly in public universities.
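The PLS-PM estimation used in several of these studies belongs to the partial least squares family: latent scores are built along directions of maximum covariance between blocks of variables. Below is a minimal pure-Python sketch of one-component PLS1 regression, which illustrates that core projection step only (not the full path-modeling algorithm; all names are illustrative):

```python
import math

def center_columns(rows):
    """Column-wise mean-centring, the usual PLS preprocessing step."""
    means = [sum(col) / len(col) for col in zip(*rows)]
    return [[v - m for v, m in zip(row, means)] for row in rows]

def pls1_one_component(X, y):
    """One-component PLS1 regression (NIPALS-style) on mean-centred data:
    project X onto the direction of maximum covariance with y, then
    regress y on that latent score."""
    n, p = len(X), len(X[0])
    # Weight vector w proportional to X'y
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = math.sqrt(sum(v * v for v in w)) or 1.0
    w = [v / norm for v in w]
    # Scores t = Xw, then regression of y on t
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    q = sum(ti * yi for ti, yi in zip(t, y)) / (sum(ti * ti for ti in t) or 1.0)
    # Back to coefficients on the original (centred) variables
    return [wj * q for wj in w]
```

For real PLS-PM analyses, dedicated tools (e.g. SmartPLS or the R `plspm` package) handle multiple latent variables, bootstrapping and model assessment.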

Relevance: 100.00%

Abstract:

This study aimed to examine how students perceive the factors that may influence them to attend a training course offered in the distance virtual learning environment (VLE) of the National School of Public Administration (ENAP). As its theoretical basis, it used the Unified Theory of Acceptance and Use of Technology (UTAUT), the result of an integration of eight previous models that aimed to explain the same phenomenon (acceptance/use of information technology). The research approach was both quantitative and qualitative. To achieve the study objectives, five semi-structured interviews and an online questionnaire (web survey) were conducted with a valid sample of 101 public employees scattered throughout the country. The technique used for the analysis of the quantitative data was structural equation modeling (SEM), by the method of Partial Least Squares Path Modeling (PLS-PM); for the qualitative data, thematic content analysis was used. Among the results, it was found that, in the context of the public service, the belief of the public employee that using a VLE will improve his or her performance at work (performance expectancy) was determinant for the intention to use it, which in turn influenced actual use. It was confirmed that, under voluntary use of the technology, the general opinion of the student's social circle (social influence) has no effect on the intention to use the VLE. Effort expectancy and facilitating conditions were not directly related to intention to use and to use, respectively.
However, it emerged from the students' statements that the opinions of their coworkers, the ease of operating the VLE, the flexibility of time and place of the distance learning program, and the presence of a tutor are important to their intention to take a distance learning program. With these results, it is expected that the managers of ENAP's distance learning program will direct their efforts towards reducing the causes of non-use among those unwilling to adopt e-learning voluntarily, and towards enhancing the potential of distance learning for those who are already users.

Relevance: 100.00%

Abstract:

Master's dissertation, Universidade de Brasília, Departamento de Administração, Programa de Pós-graduação em Administração, 2016.

Relevance: 100.00%

Abstract:

Circulating low density lipoproteins (LDL) are thought to play a crucial role in the onset and development of atherosclerosis, though the detailed molecular mechanisms responsible for their biological effects remain controversial. The complexity of the biomolecules (lipids, glycans and protein) and structural features (isoforms and chemical modifications) found in LDL particles hampers a complete understanding of the mechanism underlying its atherogenicity. For this reason, screening LDL for features discriminative of a particular pathology, in search of biomarkers, is of high importance. Three major biomolecule classes (lipids, protein and glycans) in LDL particles were screened using mass spectrometry coupled to liquid chromatography. Dual-polarity screening resulted in good lipidome coverage, identifying over 300 lipid species from 12 lipid sub-classes. Multivariate analysis was used to investigate potential discriminators in the individual lipid sub-classes for different study groups (age, gender, pathology). Additionally, the high protein sequence coverage of ApoB-100 routinely achieved (≥70%) assisted in the search for protein modifications correlating with aging and pathology. The large size and complexity of the datasets required the use of chemometric methods (Partial Least Squares Discriminant Analysis, PLS-DA) for their analysis and for the identification of ions that discriminate between study groups. The peptide profile from enzymatically digested ApoB-100 can be correlated with the high structural complexity of the lipids associated with ApoB-100 using exploratory data analysis. In addition, using targeted scanning modes, glycosylation sites within neutral and acidic sugar residues in ApoB-100 are also being explored.
Together or individually, knowledge of the profiles and modifications of the major biomolecules in LDL particles will contribute towards an in-depth understanding of LDL, help to map the structural features that contribute to its atherogenicity, and may allow the identification of reliable, pathology-specific biomarkers. This research was supported by a Marie Curie Intra-European Fellowship within the 7th European Community Framework Program (IEF 255076). The work of A. Rudnitskaya was supported by the Portuguese Science and Technology Foundation, through the European Social Fund (ESF) and "Programa Operacional Potencial Humano - POPH".

Relevance: 100.00%

Abstract:

Due to the rapid changes governing the Swedish financial sector, such as financial deregulation and technological innovation, it is imperative to examine the extent to which Swedish financial institutions have performed amid these changes. To accomplish this, the work investigates the research question: what are the determinants of performance for Swedish monetary financial institutions? Assumptions were derived from the theoretical and empirical literature to investigate this question using seven explanatory variables. Two models were specified using Return on Assets (ROA) and Return on Equity (ROE) as the main performance indicators and, for the sake of reliability and validity, three different estimators, Ordinary Least Squares (OLS), Generalized Least Squares (GLS) and Feasible Generalized Least Squares (FGLS), were employed. The Akaike Information Criterion (AIC) was used to verify which specification explains performance better, while a robustness check of the parameter estimates was performed by correcting the standard errors. Based on the findings, the ROA specification proves to have the lowest AIC and standard errors compared to the ROE specification. Under ROA, two variables, the profit margin and the interest coverage ratio (ICR), prove to be statistically significant, while under ROE only the ICR proves significant for all the estimators. The results also show that FGLS is the most efficient estimator, followed by GLS and, last, OLS. When corrected for robust standard errors, the gearing ratio, which measures the capital structure, becomes significant under ROA and its estimate becomes positive under robust ROE. It is concluded that, within the period of study, three variables (ICR, profit margin and gearing) are significant and four variables are insignificant.
The overall findings show that the institutions strive to maximize returns, but these returns were just enough to cover their costs of operation. Much should be done, as the ASC theory suggests, to avoid liquidity and credit risk problems. Moreover, the estimated values of the ICR and profit margin show that considerable effort, with sound financial policies, is required to increase performance by one percentage point. Further research could examine how individual stochastic factors, such as the DuPont model, repo rates, inflation and GDP, influence performance.
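The estimators compared above differ mainly in how they weight observations: OLS treats all errors alike, while (F)GLS reweights by the (estimated) error covariance. A small pure-Python sketch of the one-regressor case, with weighted least squares standing in for the diagonal-covariance GLS idea (illustrative only, not the thesis's specification):

```python
def ols_simple(x, y):
    """Ordinary Least Squares for y = a + b*x (one regressor),
    the baseline estimator before any GLS/FGLS correction."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return my - b * mx, b

def wls_simple(x, y, w):
    """Weighted least squares: the core of (F)GLS when the error
    covariance is diagonal, with weights = inverse error variances.
    FGLS would first estimate those variances from OLS residuals."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    b = sxy / sxx
    return my - b * mx, b
```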

Relevance: 100.00%

Abstract:

The optical constants were calculated using Wolfe's method. These constants, the absorption coefficient (α), the refractive index (n) and the thickness of a thin film (d), are important in the optical characterization of the material. Wolfe's method was compared with the method employed by R. Swanepoel. A constrained nonlinear programming model was developed, so that it was possible to estimate the optical constants of semiconductor thin films from known transmission data alone. A solution of the nonlinear programming model as a quadratic program was presented. The reliability of the proposed method was demonstrated, obtaining values of α = 10378.34 cm−1, n = 2.4595, d = 989.71 nm and Eg = 1.39 eV, through numerical experiments with spectral transmittance measurements on Cu3BiS3 thin films.
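For context, Swanepoel's envelope method mentioned above gives a closed-form first estimate of the refractive index from the transmission maxima and minima envelopes. A sketch of the standard transparent/weak-absorption-region formula (the expression should be checked against Swanepoel's paper; the substrate index s = 1.51 for glass is an assumption):

```python
import math

def refractive_index_swanepoel(TM, Tm, s=1.51):
    """Swanepoel envelope estimate of the film refractive index n in
    the region of weak absorption, from the transmission maxima (TM)
    and minima (Tm) envelopes; s is the substrate refractive index."""
    # N combines the envelope contrast with the substrate term
    N = 2.0 * s * (TM - Tm) / (TM * Tm) + (s * s + 1.0) / 2.0
    return math.sqrt(N + math.sqrt(N * N - s * s))
```

The constrained-optimization approach of the abstract refines such first estimates by fitting the full transmission model to the measured spectrum.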

Relevance: 100.00%

Abstract:

Solar radiation data are crucial for the design of energy systems based on the solar resource. Since diffuse radiation measurements are not always available in archived data series, whether due to the absence of measuring equipment, shading-device misplacement or missing data, models to generate these data are needed. In this work, one year of hourly and daily horizontal solar global and diffuse irradiation measurements in Évora is used to establish a new relation between the diffuse radiation and the clearness index. The proposed model includes a fitting parameter, which was adjusted through a simple optimization procedure to minimize the least-square error with respect to the measurements. A comparison against several other fitting models presented in the literature was also carried out using the root mean square error as the statistical indicator, and it was found that the present model is more accurate than the previous fitting models for the diffuse radiation data in Évora.
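A single-parameter fit of this kind reduces to a one-dimensional least-squares minimization. A pure-Python sketch with a hypothetical exponential diffuse-fraction model (the paper's actual functional form is not reproduced here):

```python
import math

def fit_one_parameter(kt, kd, model, a_grid):
    """Grid-search the single fitting parameter `a` minimising the
    least-square error between model(kt, a) and the measured diffuse
    fractions kd, mirroring the simple optimization in the text."""
    best_a, best_sse = None, float("inf")
    for a in a_grid:
        sse = sum((model(k, a) - d) ** 2 for k, d in zip(kt, kd))
        if sse < best_sse:
            best_a, best_sse = a, sse
    return best_a, best_sse

def exp_model(kt, a):
    """Hypothetical stand-in: diffuse fraction decaying with clearness."""
    return math.exp(-a * kt)
```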

Relevance: 100.00%

Abstract:

Near-infrared (NIR) spectroscopy presents itself as an interesting non-destructive testing tool, as it enables a fast, simple and reliable way of characterizing large samplings of biological materials in a short period of time. This work aimed to establish multivariate models to estimate the crystallinity index and the tensile and burst strength of cellulosic and nanocellulosic films through NIR spectroscopy. NIR spectra were recorded from the films before the tensile strength, bursting strength and crystallinity tests. The spectral information was correlated with reference values obtained by laboratory procedures through partial least squares regression (PLS-R). The PLS-R model for estimating the crystallinity index presented a coefficient of determination in cross-validation (R2cv) of 0.94, and the ratio of performance to deviation (RPD) was 3.77. The mechanical properties of the films presented a high correlation with the NIR spectra: R2p = 0.85 (RPD = 2.23) for tensile strength and R2p = 0.93 (RPD = 3.40) for burst strength. The statistics associated with the models show that NIR spectroscopy has the potential to estimate the crystallinity index and resistance properties of cellulose and nanocellulose films in in-line monitoring systems.
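The quality metrics quoted (R2 and RPD) can be computed directly from reference and predicted values. A small sketch, noting that RPD conventions vary slightly between authors (here RPD = standard deviation of the reference values divided by the RMSE of prediction):

```python
import math

def r2_and_rpd(reference, predicted):
    """Calibration statistics common in NIR chemometrics: coefficient
    of determination (R2) and ratio of performance to deviation
    (RPD = SD of reference values / standard error of prediction)."""
    n = len(reference)
    mean_ref = sum(reference) / n
    sd = math.sqrt(sum((r - mean_ref) ** 2 for r in reference) / (n - 1))
    sep = math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)
    ss_res = sum((r - p) ** 2 for r, p in zip(reference, predicted))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot, sd / sep
```

As a rule of thumb in the NIR literature, RPD values above about 3 (as for the crystallinity model here) are considered suitable for quantitative prediction.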

Relevance: 100.00%

Abstract:

Nowadays, technological advancement has pushed industry and research towards the automation of various processes. Automation brings a reduction in costs and an improvement in product quality; for this reason, companies are pushing research to investigate new technologies. The agriculture industry has always looked towards automating various processes, from product processing to storage. In recent years, the automation of the harvest and cultivation phases has also become attractive, pushed by the advancement of autonomous driving. Nevertheless, ADAS systems are not enough: merging different technologies will be the way to obtain total automation of agricultural processes. For example, sensors that estimate products' physical and chemical properties can be used to evaluate the maturation level of fruit. The fusion of these technologies therefore plays a key role in industrial process automation. In this dissertation, both ADAS systems and sensors for precision agriculture are treated. Several measurement procedures for characterizing commercial 3D LiDARs are proposed and tested to cope with the growing need for comparison tools; axial errors and transversal errors have been investigated. Moreover, a measurement method and setup for evaluating the effect of fog on 3D LiDARs are proposed. Each presented measurement procedure has been tested, and the obtained results highlight the versatility and soundness of the proposed approaches. Regarding precision agriculture sensors, a measurement approach for moisture content and density estimation of crops directly in the field is presented. The approach employs a near-infrared (NIR) spectrometer jointly with partial least squares statistical analysis. The approach and the model are described together with a first laboratory prototype used to evaluate the NIRS approach. Finally, a prototype for in-field analysis is realized and tested.
The test results are promising, showing that the proposed approach is suitable for moisture content and density estimation.

Relevance: 100.00%

Abstract:

The work of this thesis was to implement in Matlab a numerical algorithm to determine the position of a user from GPS measurements. In particular, after an introduction to the GPS system and to GPS navigation, the content of the ESEO logs was analyzed and the algorithm was implemented on the computer. The ultimate goal of the thesis was to verify the accuracy of this algorithm, by computing the errors with respect to the true positions and by a check based on the computation of the geometric range.
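The standard GPS position solution is an iterative least-squares (Gauss-Newton) fit of the receiver coordinates to the measured ranges. A simplified 2-D, clock-bias-free sketch of that iteration in Python (real receivers solve four unknowns, three coordinates plus the clock bias, from pseudoranges in exactly the same way):

```python
import math

def trilaterate_2d(anchors, ranges, guess=(0.0, 0.0), iters=20):
    """Iterative least-squares position fix from measured ranges:
    linearise the range equations around the current estimate and
    solve the 2x2 normal equations (J'J) dx = J'r at each step."""
    x, y = guess
    for _ in range(iters):
        r, J = [], []
        for (ax, ay), rho in zip(anchors, ranges):
            d = math.hypot(x - ax, y - ay) or 1e-12
            r.append(rho - d)                       # range residual
            J.append(((x - ax) / d, (y - ay) / d))  # line-of-sight unit vector
        a11 = sum(j[0] * j[0] for j in J)
        a12 = sum(j[0] * j[1] for j in J)
        a22 = sum(j[1] * j[1] for j in J)
        b1 = sum(j[0] * ri for j, ri in zip(J, r))
        b2 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        x += (a22 * b1 - a12 * b2) / det
        y += (a11 * b2 - a12 * b1) / det
    return x, y
```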

Relevance: 40.00%

Abstract:

This research presents a method for frequency estimation in power systems using an adaptive filter based on the Least Mean Squares (LMS) algorithm. In order to analyze a power system, the three-phase voltages are converted into a complex signal by applying the alpha-beta transform, and the result is used in an adaptive filtering algorithm. Although the use of the complex LMS algorithm is described in the literature, this paper deals with some practical aspects of its implementation. In order to reduce computing time, a coefficient generator was implemented. For the validation of the algorithm, a computer simulation of a power system was carried out using the ATP software. Many different situations were simulated for the performance analysis of the proposed methodology. The results were compared to a commercial relay for validation, showing the advantages of the new method. (C) 2009 Elsevier Ltd. All rights reserved.
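The described pipeline, alpha-beta transform followed by a complex LMS filter, can be sketched compactly: for a balanced system the transform yields a rotating phasor, and a one-tap complex LMS predictor converges to e^(j*2*pi*f/fs), so the angle of the adapted weight gives the frequency. A minimal illustration (not the paper's optimized implementation with its coefficient generator):

```python
import cmath
import math

def clarke_complex(va, vb, vc):
    """alpha-beta transform: collapse three-phase samples into one
    complex signal v = (2/3)*(va + a*vb + a^2*vc), a = e^(j*2*pi/3)."""
    a = cmath.exp(2j * math.pi / 3)
    return (2.0 / 3.0) * (va + a * vb + a * a * vc)

def lms_frequency(v, fs, mu=0.01):
    """One-tap complex LMS predictor: adapt w so that v[n] ~ w*v[n-1].
    For a clean phasor, w converges to e^(j*2*pi*f/fs)."""
    w = 1.0 + 0j
    for prev, cur in zip(v, v[1:]):
        err = cur - w * prev
        w += mu * err * prev.conjugate()  # complex LMS update
    return cmath.phase(w) * fs / (2.0 * math.pi)
```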

Relevance: 40.00%

Abstract:

Geographic information systems give us the possibility to analyze, produce and edit geographic information. However, these systems fall short in the analysis and support of complex spatial problems. Therefore, when a spatial problem, like land use management, requires a multi-criteria perspective, multi-criteria decision analysis is embedded into spatial decision support systems. The analytic hierarchy process (AHP) is one of many multi-criteria decision analysis methods that can be used to support these complex problems. Using its capabilities, we try to develop a spatial decision support system to help land use management, which can involve a broad spectrum of spatial decision problems. The developed decision support system had to accept as input various formats and types of data, raster or vector, where the vector data could be of polygon, line or point type. The system was designed to perform its analysis for the Zambezi River Valley in Mozambique, the study area, and the possible solutions for the emerging problems had to cover the entire region. This required the system to process large sets of data and constantly adjust to new problems' needs. The developed decision support system is able to process thousands of alternatives using the analytic hierarchy process and to produce an output suitability map for the problems faced.
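The analytic hierarchy process at the heart of such a system derives criterion weights from a pairwise comparison matrix. A minimal sketch using the common geometric-mean approximation of the principal eigenvector, with a consistency-index check:

```python
import math

def ahp_priorities(M):
    """Approximate AHP priority weights by the row-wise geometric-mean
    method on a pairwise comparison matrix M (M[i][j] ~ w_i / w_j)."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    return [g / s for g in gm]

def consistency_index(M, w):
    """CI = (lambda_max - n) / (n - 1); values near 0 indicate
    consistent pairwise judgments."""
    n = len(M)
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    return (lam - n) / (n - 1)
```

In the spatial setting, the resulting weights are applied cell by cell to the criterion layers to produce the suitability map.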

Relevance: 40.00%

Abstract:

In the second half of 1980, 112 (ca. 16%) of the inhabitants of the new settlement of São José, city of Manaus, contracted cutaneous leishmaniasis whilst clearing their properties of terra firme rainforest. With the aid of SUCAM, the authors carried out a pilot study to investigate the feasibility of reducing populations of Lutzomyia umbratilis, the local silvatic vector of Leishmania braziliensis guyanensis, by spraying insecticide on its favoured diurnal resting sites, the bases of the larger forest trees. Most man-vector contact is at these resting sites and, therefore, it was encouraging to record a marked reduction of the tree-base populations of L. umbratilis for 21 days following just one application of DDT emulsion in an area 200 m square. Most of the treated trunks were not occupied by L. umbratilis for at least eleven months. Suggestions for extending the pilot study are made, and the need for collaboration with a clinical team is emphasized. Leishmania b. guyanensis is the aetiological agent of "pian bois", which is hyperendemic from French Guiana to central Amazônia. In the absence of proven vaccines or methods of vector control, some simple methods for limiting transmission of Le. b. guyanensis to man are listed.

Relevance: 40.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. (The thesis opens with a French résumé covering the same material as this English abstract.) In a broad sense, machine learning can be considered a subfield of artificial intelligence, mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning?
In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction, as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are considered using both a traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to understand the presence of spatial patterns, at least as described by two-point statistics.
A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a hot topic of today, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially in the case of emergency conditions. The thesis consists of four chapters with the following structure: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user-friendly and easy to use.
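The GRNN highlighted above is equivalent to Nadaraya-Watson kernel regression: a Gaussian-weighted average of the training targets with a single smoothing parameter. A minimal pure-Python sketch:

```python
import math

def grnn_predict(train_x, train_y, x, sigma=1.0):
    """General Regression Neural Network prediction at point x:
    a Gaussian-kernel-weighted average of the training targets,
    controlled by one smoothing parameter sigma."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x))
                        / (2.0 * sigma ** 2)) for xi in train_x]
    s = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / s
```

In practice sigma is tuned by cross-validation; small values interpolate the training data closely, large values smooth towards the global mean.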