33 results for Modeling Rapport Using Hidden Markov Models
Abstract:
Granular flow phenomena are frequently encountered in the design of process and industrial plants in the traditional fields of the chemical, nuclear and oil industries, as well as in other activities such as food and materials handling. Multiphase flow is one important branch of granular flow. Granular materials exhibit unusual behavior compared to ordinary materials, whether solids or fluids. Although some of their characteristics are not yet well understood, one thing is confirmed: particle-particle interaction plays a key role in the dynamics of granular materials, especially dense granular materials. The first part of this thesis presents in detail the development of two models for describing this interaction, based on the results of finite-element simulation, dimensional analysis and numerical simulation. The first model describes the normal collision of viscoelastic particles. Building on existing models, additional parameters are introduced that allow the model to predict experimental results more accurately. The second model covers oblique collision and includes the effects of tangential velocity, angular velocity and surface friction based on Coulomb's law. The theoretical predictions of this model agree with those of finite-element simulation. In the later chapters of the thesis, the models are used to predict industrial granular flows, and the agreement between simulations and experiments further demonstrates the validity of the new models. The first case presents the simulation of granular flow passing over a circular obstacle. The simulations successfully predict the existence of a parabolic steady layer and show how particle characteristics, such as the coefficients of restitution and surface friction, affect the separation results. The second case is a spinning container filled with granular material. Employing the same models, the simulation reproduces experimentally observed phenomena, such as a depression in the center at high rotation frequencies. The third application concerns gas-solid mixed flow in a vertically vibrated device, where gas-phase motion is coupled with the particle motion. The governing equations of the gas phase are solved using large eddy simulation (LES), and particle motion is predicted using the Lagrangian method. The simulation reproduces pattern formations reported in experiments.
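The abstract does not reproduce the model equations, but the class of contact models it describes is standard in discrete-element simulation. Below is a minimal sketch assuming a linear spring-dashpot normal force with a Coulomb cap on the tangential force; the function name and the parameter values are illustrative, not taken from the thesis.

```python
import numpy as np

def contact_force(delta, delta_dot, v_t, k=1e5, c=50.0, mu=0.5):
    """Sketch of a viscoelastic (spring-dashpot) normal contact force with a
    Coulomb friction cap on the tangential force.

    delta     : normal overlap of the two particles (m), > 0 when in contact
    delta_dot : rate of change of the overlap (m/s)
    v_t       : relative tangential surface velocity (m/s)
    k, c, mu  : illustrative stiffness, damping and friction coefficients
    """
    if delta <= 0.0:
        return 0.0, 0.0                    # no contact, no force
    f_n = max(k * delta + c * delta_dot, 0.0)   # contact force cannot be attractive
    # Coulomb's law: tangential force opposes sliding, capped at mu * f_n
    f_t = -np.sign(v_t) * mu * f_n
    return f_n, f_t
```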
Abstract:
Transitional flow past a three-dimensional circular cylinder is a widely studied phenomenon, since this problem is of interest for many technical applications. In the present work, the numerical simulation of flow past a circular cylinder is performed using a commercial CFD code (ANSYS Fluent 12.1) with large eddy simulation (LES) and RANS (κ-ε and Shear-Stress Transport (SST) κ-ω model) approaches. The turbulent flow for Re_D = 1000 and 3900 is simulated to investigate the force coefficients, Strouhal number, flow separation angle, pressure distribution on the cylinder, and the complex three-dimensional vortex shedding in the cylinder wake region. The numerical results extracted from these simulations are in good agreement with the experimental data (Zdravkovich, 1997). Moreover, the influence of grid refinement and time-step size has been examined. Numerical calculation of turbulent cross-flow in a staggered tube bundle continues to attract interest due to its importance in engineering applications as well as the fact that this complex flow represents a challenging problem for CFD. In the present work, time-dependent simulations using the κ-ε, κ-ω and SST models are performed in two dimensions for a subcritical flow through a staggered tube bundle. The predicted turbulence statistics (mean and r.m.s. velocities) are in good agreement with the experimental data (S. Balabani, 1996). Turbulent quantities such as turbulent kinetic energy and dissipation rate are predicted using the RANS models and compared with each other. The sensitivity to grid and time-step size has been analyzed. A sensitivity study of the model constants has been carried out using the κ-ε model. It has been observed that the turbulence statistics and turbulent quantities are very sensitive to the model constants.
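As a small worked example of one quantity investigated here: the Strouhal number relates the vortex-shedding frequency f to the cylinder diameter D and free-stream velocity U as St = fD/U. The numbers below are illustrative, not results from the thesis.

```python
def strouhal_number(f_shed, diameter, velocity):
    """Strouhal number St = f * D / U for vortex shedding behind a cylinder."""
    return f_shed * diameter / velocity

# Illustrative values only: a 10 mm cylinder in a 1 m/s free stream shedding
# at 21 Hz gives St = 0.21, the textbook value for the subcritical regime
# around Re_D = 1000-3900.
print(strouhal_number(21.0, 0.010, 1.0))  # -> 0.21
```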
Abstract:
Construction of multiple sequence alignments is a fundamental task in bioinformatics. Multiple sequence alignments are used as a prerequisite in many bioinformatics methods, and consequently the quality of such methods can depend critically on the quality of the alignment. However, automatic construction of a multiple sequence alignment for a set of remotely related sequences does not always provide biologically relevant alignments. Therefore, there is a need for an objective approach to evaluating the quality of automatically aligned sequences. The profile hidden Markov model is a powerful approach in comparative genomics. In the profile hidden Markov model, the symbol probabilities are estimated at each conserved alignment position. This can increase the dimension of the parameter space and cause an overfitting problem. These two research problems are both related to conservation. We have developed statistical measures for quantifying the conservation of multiple sequence alignments. Two types of methods are considered: those identifying conserved residues in an alignment position, and those calculating positional conservation scores. The positional conservation score was exploited in a statistical prediction model for assessing the quality of multiple sequence alignments. The residue conservation score was used as part of the emission probability estimation method proposed for profile hidden Markov models. The predicted alignment quality scores correlated highly with the true alignment quality scores, indicating that our method is reliable for assessing the quality of any multiple sequence alignment. The comparison of the emission probability estimation method with the maximum likelihood method showed that the number of estimated parameters in the model was dramatically decreased while the same level of accuracy was maintained. To conclude, we have shown that conservation can be successfully used in the statistical model for alignment quality assessment and in the estimation of emission probabilities in profile hidden Markov models.
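The abstract does not define the conservation measures themselves. As an illustration of the general idea of a positional conservation score (not the specific measures developed in the thesis), one common choice is one minus the normalized Shannon entropy of an alignment column:

```python
import math
from collections import Counter

def positional_conservation(column, alphabet_size=20):
    """Illustrative positional conservation score for one alignment column:
    1 minus the normalized Shannon entropy of the residue distribution.
    Gaps are ignored; alphabet_size = 20 assumes amino acid sequences.
    """
    residues = [r for r in column if r != '-']
    if not residues:
        return 0.0
    n = len(residues)
    counts = Counter(residues)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return 1.0 - entropy / math.log2(alphabet_size)

# A fully conserved column scores 1.0; a maximally diverse one approaches 0.
print(positional_conservation("AAAAAA"))  # -> 1.0
```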
Abstract:
Speaker diarization is the process of partitioning speech according to the speaker. Diarization helps to search for and retrieve what a certain speaker uttered in a meeting. Applications of diarization systems extend beyond meetings to other domains, for example lectures, telephone, television, and radio. Moreover, diarization enhances the performance of several speech technologies such as speaker recognition, automatic transcription, and speaker tracking. Methodologies previously used in developing diarization systems are discussed, and prior results and techniques are studied and compared. Methods such as hidden Markov models and Gaussian mixture models that are used in speaker recognition and other speech technologies are also used in speaker diarization. The objective of this thesis is to develop a speaker diarization system in the meeting domain. The experimental part of this work indicates that the zero-crossing rate can be used effectively to break the audio stream into segments, and that adaptive Gaussian models fit short audio segments adequately. Results show that 35 Gaussian models and an average segment length of one second are the optimal values for building a diarization system for the tested data. Segments uttered by the same speaker are united in a bottom-up clustering using a new approach of categorizing the mixture weights.
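The zero-crossing rate mentioned in the abstract is a simple time-domain feature. Here is a minimal sketch; the framing parameters (16 kHz sampling, 25 ms frames) are illustrative, not taken from the thesis.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive samples in a frame whose signs differ."""
    signs = np.sign(frame)
    return np.mean(signs[:-1] != signs[1:])

# Example: split a signal into 25 ms frames at 16 kHz and compute the ZCR
# per frame; segment boundaries could then be placed where the ZCR changes
# abruptly.
fs = 16000
signal = np.random.randn(fs)              # stand-in for one second of audio
frame_len = int(0.025 * fs)
frames = signal[:len(signal) // frame_len * frame_len].reshape(-1, frame_len)
zcrs = [zero_crossing_rate(f) for f in frames]
```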
Abstract:
The theme of this thesis is context-specific independence in graphical models. Considering a system of stochastic variables, it is often the case that the variables are dependent on each other. This can, for instance, be seen by measuring the covariance between a pair of variables. Using graphical models, it is possible to visualize the dependence structure found in a set of stochastic variables. Using ordinary graphical models, such as Markov networks, Bayesian networks, and Gaussian graphical models, the type of dependencies that can be modeled is limited to marginal and conditional (in)dependencies. The models introduced in this thesis enable the graphical representation of context-specific independencies, i.e. conditional independencies that hold only in a subset of the outcome space of the conditioning variables. In the articles included in this thesis, we introduce several types of graphical models that can represent context-specific independencies. Models for both discrete variables and continuous variables are considered. A wide range of properties are examined for the introduced models, including identifiability, robustness, scoring, and optimization. In one article, a predictive classifier which utilizes context-specific independence models is introduced. This classifier clearly demonstrates the potential benefits of the introduced models. The purpose of the material included in the thesis prior to the articles is to provide the basic theory needed to understand the articles.
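To make the notion concrete, here is a minimal toy example, invented for illustration and not taken from the thesis: Y depends on X given Z in general, but is independent of X in the specific context Z = 0.

```python
# P(Y = 1 | X = x, Z = z) as a table indexed by (x, z):
p_y1 = {
    (0, 0): 0.3, (1, 0): 0.3,   # context Z = 0: P(Y=1) does not depend on X
    (0, 1): 0.2, (1, 1): 0.9,   # context Z = 1: P(Y=1) does depend on X
}

def csi_holds(table, z):
    """Check whether Y is independent of X in the context Z = z."""
    return table[(0, z)] == table[(1, z)]

print(csi_holds(p_y1, 0))  # True  -> independence holds in context Z = 0
print(csi_holds(p_y1, 1))  # False -> so this is a context-specific independence
```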
Abstract:
Managing the thermal conditions of rooms is an important part of building services engineering design. Room thermal conditions are usually modeled with methods in which the thermal dynamics of the room air are computed at a single calculation point and those of the structures wall by wall. Usually only the room air temperature is examined. The goal of this master's thesis was to develop a simulation model for room thermal conditions in which the thermal dynamics of the structures are computed transiently with an energy analysis calculation, and the flow field of the room air is modeled as steady-state at a selected moment in time with computational fluid dynamics. This yields distributions over the flow field of the quantities essential for design, typically air temperature and velocity. The computational results of the simulation model were compared with measurements made in test rooms; the results proved accurate enough for building services design. Two rooms requiring more detailed modeling than usual were simulated with the model. Comparative calculations were made with different turbulence models, discretization accuracies and grid densities. To illustrate the simulation results, a customer report presenting the essentials for design was devised. The simulation model provided additional information especially on temperature stratification, which has typically been estimated based on experience. As background to the development of the simulation model, the indoor climate of buildings, thermal conditions, calculation methods, and commercial programs suitable for the modeling are reviewed. The simulation model provides more accurate and detailed information than before for the design of thermal condition management. Remaining problems in using the model are the long computation time of the flow calculation, turbulence modeling, the exact determination of the boundary conditions of the supply air devices, and the convergence of the computation. The developed simulation model offers a good basis for developing and combining CFD and energy analysis programs into a user-friendly design tool for building services engineering.
Abstract:
Formal software development processes and well-defined development methodologies are nowadays seen as the definitive way to produce high-quality software within time limits and budgets. The variety of such high-level methodologies is huge, ranging from rigorous process frameworks like CMMI and RUP to more lightweight agile methodologies. The need to manage this variety, and the fact that practically every software development organization has its own unique set of development processes and methods, have created the profession of software process engineer. Different kinds of informal and formal software process modeling languages are essential tools for process engineers. These are used to define processes in a way that allows easy management of processes, for example process dissemination, process tailoring and process enactment. Process modeling languages are usually used as a tool for process engineering, where the main focus is on the processes themselves. This dissertation has a different emphasis: it analyses modern software development process modeling from the software developers' point of view. The goal of the dissertation is to investigate whether software process modeling and software process models aid software developers in their day-to-day work, and what the main mechanisms for this are. The focus of the work is on the Software Process Engineering Metamodel (SPEM) framework, which is currently one of the most influential process modeling notations in software engineering. The research theme is elaborated through six scientific articles which represent the dissertation research done on process modeling during an approximately five-year period. The research follows the classical engineering research discipline in which the current situation is analyzed, a potentially better solution is developed, and finally its implications are analyzed. The research applies a variety of research techniques, ranging from literature surveys to qualitative studies done among software practitioners. The key finding of the dissertation is that software process modeling notations and techniques are usually developed in process engineering terms. As a consequence, the connection between the process models and actual development work is loose. In addition, modeling standards like SPEM are partially incomplete when it comes to pragmatic process modeling needs, like lightweight modeling and combining pre-defined process components. This leads to a situation where the full potential of process modeling techniques for aiding daily development activities cannot be achieved. Despite these difficulties, the dissertation shows that it is possible to use modeling standards like SPEM to aid software developers in their work. The dissertation presents a lightweight modeling technique which software development teams can use to quickly analyze their work practices in a more objective manner. The dissertation also shows how process modeling can be used to compare different software development situations more easily and to analyze their differences in a systematic way. Models also help to share this knowledge with others. A qualitative study done among Finnish software practitioners verifies the conclusions of the other studies in the dissertation. Although processes and development methodologies are seen as an essential part of software development, process modeling techniques are rarely used during daily development work. However, the potential of these techniques intrigues the practitioners. In conclusion, the dissertation shows that process modeling techniques, most commonly used as tools for process engineers, can also be used as tools for organizing daily software development work. This work presents theoretical solutions for bringing process modeling closer to ground-level software development activities. These theories are proven feasible through several case studies in which the modeling techniques are used, e.g., to find differences in the work methods of the members of a software team and to share process knowledge with a wider audience.
Abstract:
This master's thesis deals with the construction of a simulation model intended for assessing the lifetime loads of a rock drilling rig. A modular model was created with which driving over an obstacle can be simulated dynamically. Driving over an obstacle is a traditional factory test for prototypes of new rigs; its purpose is to reveal the maximum load cases of the rig. The simulation model of the surface drilling rig was built with the ADAMS software, utilizing the CAD models produced during the design of the rig. The simplified mechanics of the rig were modeled in ADAMS by dividing the rig into five modules: the frame and the undercarriage formed one section; the boom and the feed were their own assembly; and the cabin and the covers were two separate modules. The fifth module was the hydraulics, which was co-simulated with the EASY5 software. The simulated obstacle run-over results were compared with the loads measured in the factory tests of the prototype rig. The comparison was made by examining the pressures of the four modeled cylinders. Examination of the results showed that the simulation model gives results of the right magnitude, especially in static analyses. In dynamic situations at the rig's maximum loads, the modeling accuracy achieved within the scope of this work is not sufficient for actual lifetime analysis. Nevertheless, the modeling technique was found to work in principle and to be worth further development.
Abstract:
This work investigated how realistic the results for the flow field of a cyclone separator obtained by numerical computation are when different turbulence models are used. A further aim was to review the operating principles of the cyclone, the challenges in its use, and the basics of numerical flow computation for cyclones. The theory of computational fluid dynamics is explained in outline, as is turbulence modeling. In the computational part of the work, the flow field of a cyclone with hot air was simulated with the Fluent program using two different turbulence models, and the results were compared with measurement results found in the literature. The simulations were performed both as time-dependent and as time-independent, and on two different computational grids. The simulation results showed that the RNG k-ε turbulence model cannot produce a realistic flow field. The results of the other turbulence model used, the Reynolds stress model, corresponded better to the measurements. Based on this work and the literature, the Reynolds stress model can be considered usable for cyclone simulation. The model contained simplifications; for example, the solid phase was not taken into account at all.
Abstract:
In recent decades, industrial activity growth and increasing water usage worldwide have led to the release of various pollutants, such as toxic heavy metals and nutrients, into the aquatic environment. Modified nanocellulose- and microcellulose-based adsorption materials have the potential to remove these contaminants from aqueous solutions. The present research consisted of the preparation of five different nano/microcellulose-based adsorbents, their characterization, the study of adsorption kinetics and isotherms, the determination of adsorption mechanisms, and an evaluation of the adsorbents' regeneration properties. The same well-known reactions and modification methods that are used for modifying conventional cellulose also worked for microfibrillated cellulose (MFC). The use of succinic anhydride modified mercerized nanocellulose, and of aminosilane and hydroxyapatite modified nanostructured MFC, for the removal of heavy metals from aqueous solutions exhibited promising results. Aminosilane, epoxy and hydroxyapatite modified MFC could be used as a promising alternative for H2S removal from aqueous solutions. In addition, new knowledge was obtained about the adsorption properties of carbonated hydroxyapatite modified MFC as a multifunctional adsorbent for the removal of both cations and anions from water. The maghemite nanoparticle modified MFC was found to be a highly promising adsorbent for the removal of As(V) from aqueous solutions due to its magnetic properties, high surface area, and high adsorption capacity. The maximum removal efficiencies of each adsorbent were studied in batch mode. The results of the adsorption kinetics indicated very fast removal rates for all the studied pollutants. Modeling of adsorption isotherms and adsorption kinetics using various theoretical models provided information about the adsorbents' surface properties and the adsorption mechanisms. This knowledge is important, for instance, in designing water treatment units and plants. Furthermore, the correspondence between the theory behind each model and the properties of the adsorbent, as well as the adsorption mechanisms, was also discussed. On the whole, both the experimental results and theoretical considerations supported the potential applicability of the studied nano/microcellulose-based adsorbents in water treatment applications.
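The abstract mentions modeling adsorption isotherms with theoretical models. As a minimal sketch of fitting one classical model, the Langmuir isotherm, to batch equilibrium data; the data points below are invented for illustration, not results from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k_l):
    """Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e)."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Illustrative batch-adsorption data: equilibrium concentration C_e (mg/L)
# versus adsorbed amount q_e (mg/g).
c_e = np.array([5.0, 10.0, 25.0, 50.0, 100.0])
q_e = np.array([12.0, 20.0, 33.0, 41.0, 46.0])

(q_max, k_l), _ = curve_fit(langmuir, c_e, q_e, p0=(50.0, 0.05))
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```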
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the use of collect-and-place type gantry machines is discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one PCB, one machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods, or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests confirm that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed and applied solution methods are described in full detail. For all the problems stated in this thesis, the proposed methods are efficient enough to be used in PCB assembly production in practice and are readily applicable in the PCB manufacturing industry.
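The thesis describes its algorithms in the included publications; purely to illustrate the flavor of a greedy heuristic for one such subproblem, here is a hypothetical sketch that orders placement locations by always moving the gantry head to the nearest unplaced location.

```python
import math

def greedy_placement_sequence(points, start=(0.0, 0.0)):
    """Hypothetical nearest-neighbor ordering of component placement
    locations: from the current head position, always place the nearest
    remaining component next. Illustrative only, not the thesis algorithm.
    """
    remaining = list(points)
    sequence, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        remaining.remove(nxt)
        sequence.append(nxt)
        pos = nxt
    return sequence

print(greedy_placement_sequence([(3, 4), (1, 1), (2, 0)]))
# -> [(1, 1), (2, 0), (3, 4)]
```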
Abstract:
The aim of this study was to contribute to current knowledge-based theory by addressing a research gap: the empirically proven determination of the simultaneous but differentiable effects of intellectual capital (IC) assets and knowledge management (KM) practices on organisational performance (OP). The analysis was built on past research and on theorized interactions between latent constructs, specified using survey-based items measured from a sample of Finnish companies for IC and KM, and a dependent construct for OP determined using information available from financial databases. Two measures widely used and commonly recommended in the management science literature, the return on total assets (ROA) and the return on equity (ROE), were calculated for OP. The relationship between IC and KM affecting OP could thus be investigated, in relation to the stated hypotheses, using objectively derived performance indicators. Using financial OP measures also strengthened the dynamic features of the data needed to analyse simultaneous and causal dependences between the modelled constructs, which were specified using structural path models. The parameters of the structural path models were estimated using a partial least squares based regression estimator. The results showed that the path dependencies between IC and OP, or KM and OP, were always insignificant when analysed separately from any other interactions or indirect effects caused by simultaneous modelling, regardless of whether ROA or ROE was used as the OP measure. The dependency between the constructs for KM and IC appeared to be very strong and was always significant when modelled simultaneously with the other possible interactions between the constructs, using either ROA or ROE to define OP. This study, however, did not find statistically unambiguous evidence for the hypothesised causal mediation effects, which suggest, for instance, that the effects of KM practices on OP are mediated by IC assets. Because some indication of fluctuating causal effects was observed, it was concluded that further studies are needed to verify the fundamental and likely hidden causal effects between the constructs of interest. It was therefore also recommended that complementary modelling and data processing be conducted to elucidate whether mediation effects occur between IC, KM and OP; their verification requires further investigation of the measured items and can build on the findings of this study.
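For reference, the two performance measures used here are standard financial ratios. The figures in the sketch below are invented for illustration only.

```python
def return_on_assets(net_income, total_assets):
    """ROA = net income / total assets."""
    return net_income / total_assets

def return_on_equity(net_income, shareholders_equity):
    """ROE = net income / shareholders' equity."""
    return net_income / shareholders_equity

# Illustrative numbers: EUR 2M net income on EUR 25M total assets and
# EUR 10M equity gives ROA = 8% and ROE = 20%.
print(return_on_assets(2e6, 25e6))   # -> 0.08
print(return_on_equity(2e6, 10e6))  # -> 0.2
```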
Abstract:
Summary: Analyzing longitudinal data using latent-variable models
Abstract:
Active magnetic bearings have recently been intensively developed because their non-contact support offers several advantages over conventional bearings. Due to improved materials, control strategies, and electrical components, the performance and reliability of active magnetic bearings are improving. However, additional bearings, called retainer bearings, still have a vital role in applications of active magnetic bearings. The most crucial moment when the retainer bearings are needed is when the rotor drops from the active magnetic bearings onto the retainer bearings due to component or power failure. Without appropriate knowledge of the retainer bearings, a drop-down situation can be fatal for an active magnetic bearing supported rotor system. This study introduces a detailed simulation model of a rotor system in order to describe a rotor drop-down situation on the retainer bearings. The introduced simulation model couples a finite element model employing component mode synthesis with detailed bearing models. Electrical components and electromechanical forces are not in the focus of this study. The research reviews the theoretical background of the finite element method with component mode synthesis, which can be used in the dynamic analysis of flexible rotors. The retainer bearings are described using two ball bearing models, which include damping and stiffness properties, the oil film, the inertia of the rolling elements, and the friction between the races and the rolling elements. The first bearing model assumes that the cage of the bearing is ideal and holds the balls precisely in their predefined positions. The second bearing model is an extension of the first and describes the behavior of a cageless bearing. In this bearing model, each ball is described using two degrees of freedom. The models introduced in this study are verified against a corresponding actual structure. Using the verified bearing models, the effects of the parameters of the rotor system on its dynamics during emergency stops are examined. As shown in this study, the misalignment of the retainer bearings has a significant influence on the behavior of the rotor system in a drop-down situation. A stability map of the rotor system as a function of the rotational speed of the rotor and the misalignment of the retainer bearings is presented. In addition, the effects of the parameters of the simulation procedure and the rotor system on the dynamics of the system are studied.
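The abstract describes ball bearing models with stiffness and damping properties but does not give their equations. As a minimal sketch of the kind of per-ball contact force such models typically use (not the thesis model), assuming a Hertzian nonlinear spring with linear viscous damping and illustrative parameter values:

```python
def ball_contact_force(delta, delta_dot, k_c=1.0e9, c=500.0, n=1.5):
    """Sketch of the contact force on one ball: a Hertzian nonlinear spring,
    F = k_c * delta**n, plus linear viscous damping, active only while the
    contact deflection delta is positive. The exponent n = 3/2 is the
    classical value for point (ball) contact; k_c and c are illustrative.
    """
    if delta <= 0.0:
        return 0.0   # ball not in contact with the race
    return k_c * delta ** n + c * delta_dot
```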
Abstract:
This thesis examines the history and evolution of information system process innovation (ISPI) processes (adoption, adaptation, and unlearning) within information system development (ISD) work in an internal information system (IS) department and in two IS software house organisations in Finland over a 43-year period. The study offers insights into influential actors and their dependencies in deciding over ISPIs. The research uses a qualitative research approach, and the research methodology involves the description of the ISPI processes, how the actors searched for ISPIs, and how the relationships between the actors changed over time. The existing theories were evaluated using conceptual models of the ISPI processes based on the innovation literature in the IS area. The main focus of the study was to observe changes in the main ISPI processes over time. The main contribution of the thesis is a new theory, where the term theory should be understood as 1) a new conceptual framework of the ISPI processes and 2) new ISPI concepts and categories together with the relationships between the ISPI concepts inside the ISPI processes. The thesis gives a comprehensive and systematic account of the history and evolution of the ISPI processes; reveals the factors that affected ISPI adoption; studies ISPI knowledge acquisition, information transfer, and adaptation mechanisms; and reveals the mechanisms affecting ISPI unlearning, the changes in the ISPI processes, and the diverse actors involved in the processes. The results show that both the internal IS department and the two IS software houses sought opportunities to improve their technical skills and career paths, and this created an innovative culture. When new technology generations come to the market, the platform systems need to be renewed, and therefore the organisations invest in ISPIs in cycles. The extent of internal learning and experimentation was greater than that of external knowledge acquisition. Until the outsourcing event (1984), decision-making was centralised and the internal IS department was very influential over ISPIs. After outsourcing, decision-making became distributed between the two IS software houses, the IS client, and its internal IT department. The IS client wanted to ensure that the information systems would serve the business of the company and thus wanted to co-operate closely with the software organisations.