950 results for finite mixture models
Abstract:
Recent reports have indicated that 23.5% of the nation's highway bridges are structurally deficient and 17.7% are functionally obsolete. A significant number of these bridges are on the Iowa secondary road system, where over 86% of the rural bridge management responsibilities are assigned to the counties. Some of the bridges can be strengthened or otherwise rehabilitated, but many more are in need of immediate replacement. In a recent investigation (HR-365 "Evaluation of Bridge Replacement Alternatives for the County Bridge System"), several types of replacement bridges that are currently being used on low volume roads were identified. It was also determined that a large number of counties (69%) have the ability and are interested in utilizing their own forces to design and construct short span bridges. In reviewing the results from HR-365, the research team developed one "new" bridge replacement concept and a modification of a replacement system currently being used. Both of these bridge replacement alternatives were investigated in this study, the results of which are presented in two volumes. This volume (Volume 1) presents the results of Concept 1 - Steel Beam Precast Units. Concept 2 - Modification of the Beam-in-Slab Bridge is presented in Volume 2. Concept 1 involves the fabrication of precast units (two steel beams connected by a concrete slab) by county work forces. Deck thickness is limited so that the units can be fabricated at one site and then transported to the bridge site, where they are connected and the remaining portion of the deck is placed. Since the Concept 1 bridge is primarily intended for use on low-volume roads, the precast units can be constructed with new or used beams. In the experimental part of the investigation, there were three types of static load tests: small scale connector tests, "handling strength" tests, and service and overload tests of a model bridge. Three finite element models for analyzing the bridge in various states of construction were also developed. Small scale connector tests were completed to determine the best method of connecting the precast double-T (PCDT) units. "Handling strength" tests on an individual PCDT unit were performed to determine the strength and behavior of the precast unit in this configuration. The majority of the testing was completed on the model bridge [L=9,750 mm (32 ft), W=6,400 mm (21 ft)], which was fabricated using the precast units developed. Some of the variables investigated in the model bridge tests were the number of connectors required to connect adjacent precast units, the contribution of diaphragms to load distribution, the influence of the position of diaphragms on bridge strength and load distribution, and the effect of the cast-in-place portion of the deck on load distribution. In addition to the service load tests, the bridge was also subjected to overload conditions. Using the finite element models developed, one can predict the behavior and strength of bridges similar to the laboratory model as well as design them. Concept 1 has successfully passed all laboratory testing; the next step is to field test it.
Abstract:
The atomic force microscope is a convenient tool to probe living samples at the nanometric scale. Among its numerous capabilities, the instrument can be operated as a nano-indenter to gather information about the mechanical properties of the sample. In this operating mode, the deformation of the cantilever is displayed as a function of the indentation depth of the tip into the sample. Fitting this curve with different theoretical models allows us to estimate the Young's modulus of the sample at the indentation spot. We describe what is, to our knowledge, a new technique to process these curves to distinguish structures of different stiffness buried in the bulk of the sample. The working principle of this new imaging technique has been verified by finite element models and successfully applied to living cells.
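A minimal sketch of the curve-fitting step described above, assuming a Hertz contact model for a spherical tip and using NumPy/SciPy; the tip radius, Poisson's ratio and the synthetic force-indentation data are illustrative placeholders, not values from the paper:

    # Fit F = (4/3) * E/(1 - nu^2) * sqrt(R) * delta^(3/2) to a force-indentation
    # curve to estimate the Young's modulus E. All values are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    R = 5e-6     # assumed tip radius [m]
    nu = 0.5     # Poisson's ratio, a common assumption for soft biological samples

    def hertz(delta, E):
        """Force [N] for indentation depth delta [m] and Young's modulus E [Pa]."""
        return (4.0 / 3.0) * (E / (1.0 - nu ** 2)) * np.sqrt(R) * delta ** 1.5

    # synthetic data standing in for the measured cantilever deflection curve
    delta = np.linspace(0.0, 500e-9, 100)
    force = hertz(delta, 5e3) + np.random.normal(0.0, 2e-11, delta.size)

    E_fit, _ = curve_fit(hertz, delta, force, p0=[1e3])
    print(f"Estimated Young's modulus: {E_fit[0]:.0f} Pa")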
Abstract:
Multi-span pre-tensioned pre-stressed concrete beam (PPCB) bridges made continuous usually experience a negative live load moment region over the intermediate supports. Conventional thinking dictates that sufficient reinforcement must be provided in this region to satisfy the strength and serviceability requirements associated with the tensile stresses in the deck. The American Association of State Highway and Transportation Officials (AASHTO) Load and Resistance Factor Design (LRFD) Bridge Design Specifications recommend that the negative moment reinforcement (b2 reinforcement) be extended beyond the inflection point. Based upon satisfactory previous performance and judgment, the Iowa Department of Transportation (DOT) Office of Bridges and Structures (OBS) currently terminates b2 reinforcement at 1/8 of the span length. Although the Iowa DOT policy results in approximately 50% shorter b2 reinforcement than the AASHTO LRFD specifications, the Iowa DOT has not experienced any significant deck cracking over the intermediate supports. The primary objective of this project was to investigate the Iowa DOT OBS policy regarding the amount of b2 reinforcement required to provide continuity over bridge decks. Other parameters, such as termination length, termination pattern, and effects of the secondary moments, were also studied. Live load tests were carried out on five bridges. The data were used to calibrate three-dimensional finite element models of two bridges. Parametric studies were conducted on the bridges with an uncracked deck, a cracked deck, and a cracked deck with a cracked pier diaphragm for live load and shrinkage load. The general conclusions were as follows: -- The parametric study results show that an increased area of the b2 reinforcement slightly reduces the strain over the pier, whereas an increased length and a staggered reinforcement pattern slightly reduce the strains in the deck at 1/8 of the span length. -- Finite element modeling results suggest that the transverse field cracks over the pier and at 1/8 of the span length are mainly due to deck shrinkage. -- Bridges with larger skew angles have lower strains over the intermediate supports. -- Secondary moments affect the behavior in the negative moment region; the impact may be significant enough that no tensile stresses are experienced in the deck.
Abstract:
This thesis is devoted to the analysis, modeling and visualization of spatially referenced environmental data using machine learning algorithms. Machine learning can broadly be considered a subfield of artificial intelligence that is particularly concerned with the development of techniques and algorithms allowing a machine to learn from data. In this thesis, machine learning algorithms are adapted to be applied to environmental data and to spatial prediction. Why machine learning? Because most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional spaces composed of spatially referenced informative variables ("geo-features") in addition to the geographical coordinates. Moreover, they are well suited to implementation as decision support tools for environmental questions ranging from pattern recognition to modeling and prediction, including automatic mapping. Their efficiency is comparable to that of geostatistical models in the space of geographical coordinates, but they are indispensable for high-dimensional data including geo-features. The most important and most popular machine learning algorithms are presented theoretically and implemented as software tools for the environmental sciences. The main algorithms described are the multilayer perceptron (MLP) - the best-known algorithm in artificial intelligence, general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organizing maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This range of algorithms covers varied tasks such as classification, regression and probability density estimation. Exploratory data analysis (EDA) is the first step of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated both according to the traditional geostatistical approach, with experimental variography, and according to the principles of machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and makes it possible to detect the presence of spatial patterns describable by a statistic. The machine learning approach to ESDA is presented through the application of the k-nearest-neighbors method, which is very simple and has excellent interpretation and visualization qualities. An important part of the thesis deals with topical subjects such as the automatic mapping of spatial data. The general regression neural network is proposed to solve this task efficiently.
The performance of the GRNN is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, for which the GRNN significantly outperformed all other methods, particularly in emergency situations. The thesis is composed of four chapters: theory, applications, software tools and guided examples. An important part of the work consists of a collection of software tools: Machine Learning Office. This software collection has been developed over the last 15 years and has been used for teaching numerous courses, including international workshops in China, France, Italy, Ireland and Switzerland, as well as in fundamental and applied research projects. The case studies considered cover a broad spectrum of real low- and high-dimensional geo-environmental problems, such as air, soil and water pollution by radioactive products and heavy metals, the classification of soil types and hydrogeological units, the mapping of uncertainties for decision support, and the assessment of natural hazards (landslides, avalanches). Complementary tools for exploratory data analysis and visualization were also developed, with care taken to create a user-friendly and easy-to-use interface. Machine Learning for geospatial data: algorithms, software tools and case studies. Abstract: The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence. It is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression, and probability density modeling problems in high-dimensional geo-feature spaces composed of geographical space and additional relevant spatially referenced features. They are well suited to being implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to the software implementation. The main algorithms and models considered are the following: the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression, and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools - Machine Learning Office. The Machine Learning Office tools have been developed over the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
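As an illustration of the automatic-mapping idea above, the following sketch implements a GRNN (Nadaraya-Watson kernel regression) for spatial interpolation in plain NumPy; it is not the thesis's Machine Learning Office software, and the toy monitoring data and kernel width are assumptions:

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=1.0):
        """GRNN prediction: Gaussian-kernel weighted average of the training values."""
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)  # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))                             # kernel weights
        return (w @ y_train) / w.sum(axis=1)

    # toy monitoring-network data: (x, y) coordinates and a measured value
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 10.0, size=(200, 2))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

    grid = np.array([[2.0, 3.0], [7.5, 1.2]])
    print(grnn_predict(X, y, grid, sigma=0.5))   # interpolated values at the query points

The only free parameter is the kernel width sigma, which in practice would be tuned, for example by cross-validation.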
Abstract:
Inference of Markov random field image segmentation models is usually performed using iterative methods that adapt the well-known expectation-maximization (EM) algorithm for independent mixture models. However, some of these adaptations are ad hoc and may turn out to be numerically unstable. In this paper, we review three EM-like variants for Markov random field segmentation and compare their convergence properties at both the theoretical and practical levels. We specifically advocate a numerical scheme involving asynchronous voxel updating, for which general convergence results can be established. Our experiments on brain tissue classification in magnetic resonance images provide evidence that this algorithm may achieve significantly faster convergence than its competitors while yielding at least as good segmentation results.
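For reference, a compact sketch of the standard EM algorithm for an independent one-dimensional Gaussian mixture, the baseline that the Markov-random-field variants discussed above adapt; this is a generic illustration, not the asynchronous scheme advocated in the paper:

    import numpy as np

    def em_gmm(x, K=3, n_iter=50):
        """EM for an independent 1-D Gaussian mixture; returns parameters and hard labels."""
        rng = np.random.default_rng(0)
        pi = np.full(K, 1.0 / K)                  # mixing proportions
        mu = rng.choice(x, K, replace=False)      # initial means picked from the data
        var = np.full(K, x.var())                 # initial variances
        for _ in range(n_iter):
            # E-step: posterior responsibility of class k for each intensity
            logp = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(pi)
            r = np.exp(logp - logp.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from the responsibilities
            nk = r.sum(axis=0)
            pi = nk / x.size
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return pi, mu, var, r.argmax(axis=1)      # hard labels = MAP classification

    x = np.concatenate([np.random.normal(m, 0.5, 500) for m in (0.0, 3.0, 6.0)])
    print(em_gmm(x)[1])   # estimated class means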
Abstract:
Rectangular hollow section (RHS) members are widely used in engineering applications because of their appearance, good engineering properties and low cost compared with members of other cross-sections. The increasing use of RHS in load-bearing structures makes it necessary to analyze the fatigue behavior of RHS members. This thesis concentrates on the fatigue behavior of RHS members under variable-amplitude pure torsional loading. For RHS members, failure normally occurs in the corner region if the welded regions have full penetration. This is because of the complicated distribution of stress components at the RHS corners, where all three fracture mechanics modes occur. Mode I is mainly caused by the residual stresses induced by the manufacturing process. Modes II and III are caused by the applied torsional loading. The stress-based Findley model is also used to analyze the stress components. Constant-amplitude fatigue tests were carried out as well as variable-amplitude fatigue tests. The specimens under variable-amplitude loading gave longer fatigue lives than those under constant-amplitude loading. Results from the tests show an S-N curve with a slope of around 5.
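A sketch of how a stress-based Findley evaluation and a slope-5 S-N curve could be combined for a point such as an RHS corner; the plane-stress transformation and the Basquin form are standard, but the stress history, the Findley constant k and the S-N constant below are placeholders, not the thesis's values:

    import numpy as np

    def findley_parameter(sx, sy, txy, k=0.3, n_planes=180):
        """Max over candidate planes of (shear amplitude + k * max normal stress), in MPa."""
        best = -np.inf
        for theta in np.linspace(0.0, np.pi, n_planes, endpoint=False):
            c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
            s_n = 0.5 * (sx + sy) + 0.5 * (sx - sy) * c2 + txy * s2   # normal stress on the plane
            tau = -0.5 * (sx - sy) * s2 + txy * c2                    # shear stress on the plane
            fp = 0.5 * (tau.max() - tau.min()) + k * s_n.max()        # Findley parameter
            best = max(best, fp)
        return best

    t = np.linspace(0, 2 * np.pi, 200)
    sx = 20 * np.ones_like(t)          # e.g. a tensile residual stress (mode I contribution)
    sy = np.zeros_like(t)
    txy = 80 * np.sin(t)               # applied torsional shear cycle
    fp = findley_parameter(sx, sy, txy)
    N = 1e15 * fp ** -5.0              # Basquin-type S-N curve with slope ~5 (placeholder constant)
    print(f"Findley parameter {fp:.1f} MPa, estimated life {N:.2e} cycles")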
Abstract:
This master's thesis carries out further development of the bridge crane calculation program of the company KCI Konecranes. The most important development targets of the program were identified with a user survey, and from these the most requested topics, and those best suited to the structural-mechanics scope of the thesis, were selected. The two topics selected for the work are clarifying the strength calculation of the two-part web of a box profile and designing a finite element model for an eight-wheel end carriage of a bridge crane. The thesis presents the theory related to these development targets, but the actual programming is left outside the scope of the work. In a box profile with a two-part web, the upper part of the web under the trolley rail is made thicker so that the web withstands the local stress caused by the trolley wheel load, the so-called crushing stress. Determining the crushing stress in the web plates is the most important task of the strength calculation of the two-part web. The most suitable methods for determining the membrane stress and the stress concentrations caused by crushing in different constructions were sought from the literature and from standards. The membrane stress can be determined reliably using either the 45-degree rule or the method given in the standard, and the magnitude of the stress concentrations is obtained by multiplying the membrane stress by stress concentration factors. The validity of the methods was verified by building dozens of finite element models of the web with different dimensions and boundary conditions and by comparing the results of the finite element models with hand calculations. The hand-calculated stresses were made to correspond closely to the results of the finite element models. The buckling and fatigue calculation of the two-part web was studied on a preliminary level. Eight-wheel end carriages are used in large bridge cranes to reduce the wheel loads and the crushing stresses of the runway. Finite element models were designed for the eight-wheel end carriage of a bridge crane for both constructions in use: the articulated and the rigid-frame model. Existing models were utilized in building the finite element models, which speeds up adding them to the program code and ensures compatibility with the other calculation modules. The boundary conditions of the vibration analysis of the finite element models were examined. Based on the study, the boundary conditions of the vibration analysis need no changes, but the boundary conditions of the static analysis still require further study.
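A small sketch of the hand-calculation procedure summarized above: spread the trolley wheel load at 45 degrees down to the top of the web to obtain an effective loaded length, compute the nominal crushing (membrane) stress, and scale it with a stress concentration factor; the exact dispersion path, the dimensions and the factor Kt below are assumptions for illustration only:

    def crushing_stress(F, t_web, h_rail, t_flange, contact_len, Kt=1.5):
        """Return (membrane stress, local peak stress) in MPa for a wheel load F [N], lengths in mm."""
        l_eff = contact_len + 2.0 * (h_rail + t_flange)   # 45-degree spread down to the web top
        sigma_membrane = F / (t_web * l_eff)              # nominal crushing stress in the web
        return sigma_membrane, Kt * sigma_membrane        # peak obtained with a concentration factor

    sigma_m, sigma_peak = crushing_stress(
        F=250e3, t_web=10.0, h_rail=50.0, t_flange=20.0, contact_len=50.0)
    print(f"membrane {sigma_m:.1f} MPa, with concentration {sigma_peak:.1f} MPa")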
Abstract:
This thesis is about the detection of local image features. The research topic belongs to the wider area of object detection, which is a machine vision and pattern recognition problem where an object must be detected (located) in an image. State-of-the-art object detection methods often divide the problem into separate interest point detection and local image description steps, but in this thesis a different technique is used, leading to higher quality image features which enable more precise localization. Instead of using interest point detection, the landmark positions are marked manually. Therefore, the quality of the image features is not limited by the interest point detection phase, and the learning of image features is simplified. The approach combines interest point detection and local description into a single detection phase. The computational efficiency of the descriptor is therefore important, ruling out many of the commonly used descriptors as too heavy. Multiresolution Gabor features are the main descriptor in this thesis, and improving their efficiency is a significant part of the work. Actual image features are formed from descriptors by using a classifier which can then recognize similar-looking patches in new images. The main classifier is based on Gaussian mixture models. Classifiers are used in a one-class configuration where there are only positive training samples and no explicit background class. The local image feature detection method has been tested with two freely available face detection databases and a proprietary license plate database. The localization performance was very good in these experiments. Other applications applying the same underlying techniques are also presented, including object categorization and fault detection.
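A minimal sketch of the one-class Gaussian-mixture classifier described above, using scikit-learn; the random 16-dimensional descriptors stand in for the thesis's multiresolution Gabor features, and the likelihood threshold is an arbitrary choice:

    # Fit a GMM to descriptors of positive (landmark) patches only and detect by
    # thresholding the log-likelihood of new patches; there is no background class.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    pos_desc = rng.normal(0.0, 1.0, size=(500, 16))        # descriptors of positive patches

    gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=0)
    gmm.fit(pos_desc)

    threshold = np.quantile(gmm.score_samples(pos_desc), 0.05)   # keep 95% of positives

    new_desc = rng.normal(0.0, 1.0, size=(10, 16))               # candidate patches in a new image
    is_feature = gmm.score_samples(new_desc) > threshold         # True where the landmark is detected
    print(is_feature)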
Abstract:
Belt-drive systems have been and still are the most commonly used form of power transmission in various applications of different scale and use. The peculiar features of the dynamics of belt-drives include highly nonlinear deformation, large rigid body motion, dynamic contact through a dry friction interface between the belt and pulleys with sticking and slipping zones, cyclic tension of the belt during operation, and creeping of the belt against the pulleys. The life of the belt-drive is critically related to these features, and therefore a model that can be used to study the correlations between the initial values and the responses of the belt-drive is a valuable source of information for the belt-drive development process. Traditionally, finite element models of belt-drives consist of a large number of elements, which may lead to computational inefficiency. In this research, the beneficial features of the absolute nodal coordinate formulation are utilized in the modeling of belt-drives in order to fulfill the following requirements for the successful and efficient analysis of belt-drive systems: the exact modeling of the rigid body inertia during an arbitrary rigid body motion, the consideration of the effect of shear deformation, the exact description of the highly nonlinear deformations, and a simple and realistic description of the contact. Distributed contact forces and high-order beam and plate elements based on the absolute nodal coordinate formulation are applied to the modeling of belt-drives in two- and three-dimensional cases. According to the numerical results, realistic behavior of the belt-drives can be obtained with a significantly smaller number of elements and degrees of freedom in comparison to the previously published finite element models of belt-drives. The results of the examples demonstrate the functionality and suitability of the absolute nodal coordinate formulation for the computationally efficient and realistic modeling of belt-drives. This study also introduces an approach to avoid the problems related to the use of the continuum mechanics approach in the definition of elastic forces in the absolute nodal coordinate formulation. This approach is applied to a new computationally efficient two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation. The proposed beam element uses a linear displacement field neglecting higher-order terms and a reduced number of nodal coordinates, which leads to fewer degrees of freedom in a finite element.
Abstract:
We investigate what processes may underlie heterogeneity in social preferences. We address this question by examining participants' decisions and associated response times across 12 mini-ultimatum games. Using a finite mixture model and cross-validating its classification with a response time analysis, we identified four groups of responders: one group takes little to no account of the proposed split or the foregone allocation and swiftly accepts any positive offer; two groups process primarily the objective properties of the allocations (fairness and kindness) and need more time the more properties need to be examined; and a fourth group, which takes more time than the others, appears to take into account what they would have proposed had they been put in the role of the proposer. We discuss implications of this joint decision-response time analysis.
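As an illustration of the mixture-based classification of responders, the sketch below fits a finite mixture of independent Bernoulli distributions (a latent-class model) to accept/reject decisions across the 12 games and assigns each participant to a group; the synthetic data and this specific mixture family are assumptions, not the paper's exact model:

    import numpy as np

    def bernoulli_mixture_em(X, K=4, n_iter=100, seed=0):
        """EM for a mixture of independent Bernoullis over N x D binary decision data."""
        rng = np.random.default_rng(seed)
        N, D = X.shape
        pi = np.full(K, 1.0 / K)
        theta = rng.uniform(0.25, 0.75, size=(K, D))     # acceptance probability per group and game
        for _ in range(n_iter):
            # E-step: responsibilities of each group for each participant
            logp = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
            r = np.exp(logp - logp.max(axis=1, keepdims=True))
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update group sizes and acceptance profiles
            nk = r.sum(axis=0)
            pi = nk / N
            theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
        return pi, theta, r.argmax(axis=1)               # group sizes, profiles, assignments

    X = (np.random.default_rng(1).uniform(size=(120, 12)) < 0.7).astype(float)   # synthetic decisions
    pi, theta, groups = bernoulli_mixture_em(X)
    print(np.bincount(groups, minlength=4))              # number of responders per latent group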
Abstract:
This work examined the factors affecting the structural strength of an azimuth thruster. The aim of the work was to study different structural solutions and details and their effect on the operation of the whole thruster unit. The strength analysis of the thrusters included checking the integrity of the structure under mechanical loads as well as studying its vibration behavior. The main object of the analysis was the lower housing of the thruster, i.e. the part that remains outside the ship. The master's thesis was part of a development project for two new thrusters. The structural changes arising in the project were compared with old, already implemented structural details. In addition, the utilization of existing structural solutions in the new units was studied. Attention was also paid to the shaping of the welds and their dimensioning practices. The calculations concerning the integrity of the housing were largely carried out using the finite element method. Several finite element models were created, covering complete thruster housings as well as individual parts. The results obtained with the finite element method were compared with the available measurement results. It was found that in the static case the dimensioning factor for the thruster housing is the power transmission line and the displacements of the gears located in it. As a consequence, the deflections of the housing are more decisive for the geometry than the allowable stresses of the materials.
Abstract:
This work studied the effects of the axial compression of the stator core of a permanent magnet generator, and of temperature, on the stresses and deformations of the stator. In addition, the effect of the axial compressive stress on the frequency of the stator's axial eigenmode was studied. A finite element model of the structure was created and used to calculate the displacements, stresses and natural frequencies of the structure. The stresses and displacements of the structure were also studied with a simple spring representation, whose results were compared with the results obtained with the finite element model. The aim of the work was to develop a new stator structure. Based on the finite element results, the stator structure can be simplified by removing the compression ring currently used, which makes the structure simpler and more cost-effective.
Abstract:
Mixture models can be used in experimental situations in areas related to food science and chemistry. Some problems of a statistical nature can arise, such as effects of multicollinearity that result in uncertainty in the optimization of a dependent variable. This study proposes the application of the ridge model adapted for mixture designs, considering the Kronecker (K-model) and Scheffé (S-model) forms applied to response surfaces. The method determined the proportions of hexane, acetone and alcohol that resulted in the maximum response, the percentage of extracted pequi (Caryocar brasiliense) pulp oil.
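A minimal sketch of the idea, assuming a Scheffé-type quadratic mixture model fitted with a ridge penalty and a grid search over the simplex for the yield-maximizing solvent proportions; the design points, responses and penalty value are illustrative, not the study's data:

    import numpy as np
    from sklearn.linear_model import Ridge

    def scheffe_terms(X):
        """Scheffe quadratic mixture terms: x1, x2, x3, x1*x2, x1*x3, x2*x3 (no intercept)."""
        x1, x2, x3 = X.T
        return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

    # illustrative design points (hexane, acetone, alcohol proportions) and oil yields (%)
    X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [.5, .5, 0], [.5, 0, .5], [0, .5, .5], [1/3, 1/3, 1/3]], float)
    y = np.array([30.1, 28.4, 25.2, 33.0, 31.5, 29.8, 32.2])

    # ridge penalty tames the multicollinearity inherent in mixture designs
    model = Ridge(alpha=0.1, fit_intercept=False).fit(scheffe_terms(X), y)

    # grid search over the simplex x1 + x2 + x3 = 1
    grid = np.array([[a, b, 1 - a - b] for a in np.linspace(0, 1, 101)
                     for b in np.linspace(0, 1 - a, int(101 * (1 - a)) + 1)])
    best = grid[model.predict(scheffe_terms(grid)).argmax()]
    print("optimal proportions (hexane, acetone, alcohol):", best.round(3))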
Abstract:
Speaker diarization is the process of partitioning speech according to the speaker. Diarization helps to search and retrieve what a certain speaker uttered in a meeting. Applications of diarization systems extend to domains other than meetings, for example lectures, telephone, television, and radio. In addition, diarization enhances the performance of several speech technologies such as speaker recognition, automatic transcription, and speaker tracking. Methodologies previously used in developing diarization systems are discussed. Prior results and techniques are studied and compared. Methods such as Hidden Markov Models and Gaussian Mixture Models that are used in speaker recognition and other speech technologies are also used in speaker diarization. The objective of this thesis is to develop a speaker diarization system in the meeting domain. The experimental part of this work indicates that the zero-crossing rate can be used effectively to break the audio stream into segments, and that adaptive Gaussian models fit short audio segments adequately. Results show that 35 Gaussian models and an average segment length of one second are the optimum values for building a diarization system for the tested data. Segments uttered by the same speaker are united by bottom-up clustering, using a new approach of categorizing the mixture weights.
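A simplified sketch of the pipeline described above: cut the audio where the frame-wise zero-crossing rate jumps, represent each segment by mixture weights adapted from a background Gaussian mixture, and merge segments bottom-up by comparing those weight vectors; the toy sinusoidal "speakers", the crude two-dimensional features and the cluster count are placeholders for the thesis's actual settings:

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.cluster import AgglomerativeClustering

    def zcr(frame):
        """Zero-crossing rate of one frame."""
        return np.mean(np.abs(np.diff(np.sign(frame))) > 0)

    def frame_features(samples, frame_len=400):
        """Per-frame [ZCR, log-energy] features, a crude stand-in for MFCCs."""
        n = len(samples) // frame_len
        frames = samples[:n * frame_len].reshape(n, frame_len)
        return np.column_stack([[zcr(f) for f in frames],
                                np.log(np.mean(frames ** 2, axis=1) + 1e-12)])

    def segment_boundaries(feats, jump=0.05):
        """Cut the stream where the frame-wise zero-crossing rate changes abruptly."""
        cuts = np.where(np.abs(np.diff(feats[:, 0])) > jump)[0] + 1
        return np.concatenate([[0], cuts, [len(feats)]])

    # toy stream: speaker A, speaker B, speaker A again (different dominant pitch)
    fs, rng = 8000, np.random.default_rng(0)
    t = np.arange(5 * fs) / fs
    samples = np.concatenate([np.sin(2 * np.pi * f * t) for f in (120, 440, 120)])
    samples += 0.01 * rng.normal(size=samples.size)

    feats = frame_features(samples)
    bounds = segment_boundaries(feats)

    # background mixture fitted on all frames; each segment is represented by its
    # average component responsibilities, i.e. adapted mixture weights
    gmm = GaussianMixture(n_components=8, random_state=0).fit(feats)
    W = np.array([gmm.predict_proba(feats[a:b]).mean(axis=0)
                  for a, b in zip(bounds[:-1], bounds[1:])])

    # bottom-up clustering of the weight vectors unites segments of the same speaker
    labels = AgglomerativeClustering(n_clusters=2).fit_predict(W)
    print(bounds, labels)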
Abstract:
Numerical simulation of machining processes can be traced back to the early seventies when finite element models for continuous chip formation were proposed. The advent of fast computers and development of new techniques to model large plastic deformations have favoured machining simulation. Relevant aspects of finite element simulation of machining processes are discussed in this paper, such as solution methods, material models, thermo-mechanical coupling, friction models, chip separation and breakage strategies and meshing/re-meshing strategies.
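As a concrete example of one of these ingredients, the sketch below evaluates the Johnson-Cook flow stress law, a material model frequently used in machining simulation; the constants are generic placeholder values (roughly those often quoted for a medium-carbon steel), not taken from the paper:

    import numpy as np

    def johnson_cook(strain, strain_rate, T, A=553.0, B=601.0, n=0.234,
                     C=0.013, eps0=1.0, m=1.0, T_room=293.0, T_melt=1733.0):
        """Flow stress [MPa] as a function of plastic strain, strain rate [1/s] and temperature [K]."""
        T_star = (T - T_room) / (T_melt - T_room)              # homologous temperature
        rate = np.maximum(strain_rate, eps0)                   # clamp below the reference rate
        return ((A + B * strain ** n)                          # strain hardening
                * (1.0 + C * np.log(rate / eps0))              # strain-rate hardening
                * (1.0 - T_star ** m))                         # thermal softening

    print(johnson_cook(strain=0.5, strain_rate=1e4, T=800.0))  # flow stress at a typical cutting state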