49 results for parameter-space graph
Abstract:
The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered an important approach to software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Therefore, creating and maintaining a DSL requires additional resources that could even exceed the savings associated with using it. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize or configure the framework for a particular DSL. There are different approaches for this. One approach is to use an application programming interface (API) and to extend the basic framework using an imperative programming language. An example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it shifts the focus to specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customize a domain-specific modeling (DSM) tool framework. The contributions of this thesis include a comparison of different approaches for defining, representing and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework. These include an approach to graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
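The thesis defines its region and star operators formally; purely as an illustration of the flavor of such a query facility, the sketch below implements a star (reflexive-transitive closure) operator over a small edge-labelled model graph in Python. The `Graph`, `step` and `star` names are hypothetical and not taken from the thesis.

```python
from collections import defaultdict, deque

class Graph:
    """A minimal directed, edge-labelled model graph."""
    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(label, target)]

    def add_edge(self, src, label, dst):
        self.edges[src].append((label, dst))

    def step(self, nodes, label):
        """One relational step: all targets reachable via `label` edges."""
        return {dst for n in nodes for (lbl, dst) in self.edges[n] if lbl == label}

    def star(self, nodes, label):
        """Star operator: reflexive-transitive closure over `label` edges."""
        seen, frontier = set(nodes), deque(nodes)
        while frontier:
            for dst in self.step({frontier.popleft()}, label):
                if dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
        return seen

# Usage: all model elements transitively contained in "root".
g = Graph()
g.add_edge("root", "contains", "a")
g.add_edge("a", "contains", "b")
print(g.star({"root"}, "contains"))  # {'root', 'a', 'b'}
```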
Abstract:
This paper presents the design for a graphical parameter editor for Testing and Test Control Notation version 3 (TTCN-3) test suites. This work was done in the context of OpenTTCN IDE, a TTCN-3 development environment built on top of the Eclipse platform. The design presented relies on an additional parameter editing tab added to the launch configurations for test campaigns. This parameter editing tab shows the list of editable parameters and allows opening editing components for the different parameters. Each TTCN-3 primitive type will have a specific editing component providing tools to ease modification of values of that type.
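The per-type editing components suggest a registry that maps each TTCN-3 primitive type to an editor implementation. The sketch below is a framework-neutral rendering of that pattern in Python; the class and registry names are hypothetical and do not reflect the OpenTTCN IDE or Eclipse APIs.

```python
from typing import Callable, Dict

class ParameterEditor:
    """Base class for a type-specific parameter editing component."""
    def __init__(self, name: str, value: str):
        self.name, self.value = name, value

    def open(self) -> str:
        raise NotImplementedError

class IntegerEditor(ParameterEditor):
    def open(self) -> str:
        return f"[spinbox] {self.name} = {int(self.value)}"  # validates as integer

class CharstringEditor(ParameterEditor):
    def open(self) -> str:
        return f"[text field] {self.name} = {self.value!r}"

# Registry: TTCN-3 primitive type -> editing component factory.
EDITORS: Dict[str, Callable[[str, str], ParameterEditor]] = {
    "integer": IntegerEditor,
    "charstring": CharstringEditor,
}

def open_editor(ttcn3_type: str, name: str, value: str) -> str:
    """Look up and open the editing component for one module parameter."""
    return EDITORS[ttcn3_type](name, value).open()

print(open_editor("integer", "MAX_RETRIES", "3"))
```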
Abstract:
The pulsed electroacoustic (PEA) method is a commonly used non-destructive technique for investigating space charges, and it has been developed since the early 1980s. There is continuing interest in a better understanding of the influence of space charge on the reliability of solid electrical insulation under high electric fields. The PEA method is widely used for space charge profiling because it is robust and relatively inexpensive. The technique relies on a voltage impulse that temporarily disturbs the space charge equilibrium in a dielectric. An acoustic wave is generated by the charge movement in the sample and detected by means of a piezoelectric film; the spatial distribution of the space charge is contained within the detected signal. The principle of such a system is already well established, and several kinds of setups have been constructed for different measurement needs. This thesis presents the design of a PEA measurement system as a systems engineering project. The operating principle and some recent developments are summarised. The steps of the electrical and mechanical design of the instrument are discussed. A common procedure for measuring space charges is explained and applied to verify the functionality of the system. The measurement system is provided as an additional basic research tool for the Corporate Research Centre of ABB (China) Ltd. It can be used to characterise flat samples with a thickness of 0.2–0.5 mm under DC stress. The spatial resolution of the measurement is 20 μm.
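The abstract states the principle only; a simplified one-dimensional signal model commonly quoted for PEA (given here for illustration, not taken from the thesis) makes the link between the detected pressure wave and the charge profile explicit:

```latex
% Illustrative 1-D PEA signal model: rho(x) is the space charge density,
% e_p(t) the pulsed field, v_sa the speed of sound in the sample.
\[
  p(t) \;\propto\; \int_0^d \rho(x)\, e_p\!\left(t - \frac{x}{v_{sa}}\right) \mathrm{d}x
  \;\longrightarrow\; p(t) \;\propto\; \rho\!\left(v_{sa}\, t\right)
  \quad \text{for a sufficiently short pulse},
\]
% so the spatial resolution is bounded by the pulse width tau_p:
\[
  \Delta x \;\approx\; v_{sa}\,\tau_p .
\]
```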
Abstract:
The objective of this master's thesis is to investigate the loss behavior of a three-level active neutral-point-clamped (ANPC) inverter and to compare it with a conventional neutral-point-clamped (NPC) inverter. Both inverters are controlled with a mature space vector modulation (SVM) strategy. In order to make the comparison, sufficiently accurate and detailed NPC and ANPC simulation models must be obtained. The same SVM control model is utilized for both the NPC and ANPC inverter models. The principles of the control algorithms and the structure and description of the models are clarified. The power loss calculation model is based on practical calculation approaches with certain assumptions. The comparison between the NPC and ANPC topologies is presented on the basis of the results obtained for each semiconductor device, their switching and conduction losses, and the efficiency of the inverters. The alternative switching states of the ANPC topology allow the losses to be distributed among the switches more evenly than in the NPC inverter. Naturally, the losses of a switching device depend on its position in the topology. Distributing the losses among the components in the ANPC topology reduces the stress on particular switches, so the losses are shared more equally among the semiconductors; the efficiency of the two inverters, however, is the same. As a new contribution to earlier studies, models of the SVM control and of the NPC and ANPC inverters have been built. This thesis can thus be used in further, more complicated modelling of full-power converters for modern multi-megawatt wind energy conversion systems.
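The abstract refers to practical loss-calculation approaches without reproducing them; a typical datasheet-based formulation of that kind (illustrative only, with reference values $I_{ref}$, $V_{ref}$ and switching energies $E_{on}$, $E_{off}$ taken from device datasheets) is:

```latex
% Conduction and switching losses per semiconductor device
% (common practical model, not the thesis's exact formulation).
\[
  P_{cond} = V_0\, I_{avg} + r\, I_{rms}^{2}, \qquad
  P_{sw} = f_{sw}\,\bigl(E_{on} + E_{off}\bigr)\,
           \frac{I}{I_{ref}}\,\frac{V_{dc}}{V_{ref}},
\]
\[
  \eta \;=\; \frac{P_{out}}{P_{out} + \sum_{\text{devices}}\bigl(P_{cond} + P_{sw}\bigr)} .
\]
```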
Abstract:
This thesis addresses the use of covariant phase space observables in quantum tomography. Necessary and sufficient conditions for the informational completeness of covariant phase space observables are proved, and some state reconstruction formulae are derived. Different measurement schemes for measuring phase space observables are considered. Special emphasis is given to the quantum optical eight-port homodyne detection scheme and, in particular, to the effect of non-unit detector efficiencies on the measured observable. It is shown that the informational completeness of the observable does not depend on the efficiencies. As a related problem, the possibility of reconstructing the position and momentum distributions from the marginal statistics of a phase space observable is considered. It is shown that informational completeness of the phase space observable is neither necessary nor sufficient for this procedure. Two methods for determining the distributions from the marginal statistics are presented. Finally, two alternative methods for determining the state are considered, and some of their shortcomings compared to the phase space method are discussed.
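For orientation, the standard form of a covariant phase space observable, and an informational-completeness condition of the type the thesis proves, can be written as follows (a sketch of the usual formulation, not a quotation of the thesis's results):

```latex
% Covariant phase space observable generated by a positive operator T of
% trace one, with Weyl operators W(q,p) (standard form, for illustration).
\[
  \mathsf{G}^{T}(Z) \;=\; \frac{1}{2\pi}\int_{Z}
     W(q,p)\, T\, W(q,p)^{*}\; \mathrm{d}q\,\mathrm{d}p ,
  \qquad Z \subseteq \mathbb{R}^{2},
\]
% informational completeness then amounts to a non-vanishing condition on
% the Weyl transform of T:
\[
  \operatorname{tr}\!\bigl[\,T\,W(q,p)\,\bigr] \neq 0
  \quad \text{for almost all } (q,p) \in \mathbb{R}^{2}.
\]
```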
Abstract:
Theme issue 1/2011: Horrors and fears.
Abstract:
Parameter estimation still remains a challenge in many important applications, and there is a need to develop methods that exploit the growing capabilities of modern computational systems. Owing to this, different kinds of Evolutionary Algorithms are becoming an especially promising field of research. The main aim of this thesis is to explore theoretical aspects of a specific class of Evolutionary Algorithms, the Differential Evolution (DE) method, and to implement this algorithm as code capable of solving a large range of problems. Matlab, a numerical computing environment provided by MathWorks Inc., has been utilized for this purpose. Our implementation empirically demonstrates the benefits of stochastic optimizers over deterministic optimizers in the case of stochastic and chaotic problems. Furthermore, the advanced features of Differential Evolution are discussed and taken into account in the Matlab realization. Test "toy case" examples are presented in order to show the advantages and disadvantages introduced by extensions of the basic algorithm. Another aim of this work is to apply the DE approach to the parameter estimation problem for systems exhibiting chaotic behavior, where the well-known Lorenz system with a specific set of parameter values is taken as an example. Finally, the DE approach for estimation of chaotic dynamics is compared to the Ensemble Prediction and Parameter Estimation System (EPPES) approach, which was recently proposed as a possible solution for similar problems.
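Since the thesis implements DE in Matlab, a compact Python transcription of the basic DE/rand/1/bin scheme may help fix ideas; this is a minimal sketch (the objective, bounds and control parameters F and CR below are placeholders), not the thesis code.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, rng=None):
    """Minimal DE/rand/1/bin minimizer (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)  # random initial population
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals, none equal to i.
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with one guaranteed mutant gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial vector only if it improves.
            trial_cost = f(trial)
            if trial_cost < cost[i]:
                pop[i], cost[i] = trial, trial_cost
    best = cost.argmin()
    return pop[best], cost[best]

# Usage: a Lorenz-parameter fit would replace this toy sphere objective
# with a trajectory-misfit function.
best_x, best_f = differential_evolution(lambda x: float(np.sum(x**2)),
                                        np.array([[-5.0, 5.0]] * 3))
```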
Abstract:
This thesis presents an approach for formulating and validating a space-averaged drag model for coarse mesh simulations of gas-solid flows in fluidized beds using the two-fluid model. Proper modeling of the fluid dynamics is central to understanding any industrial multiphase flow. The gas-solid flows in fluidized beds are heterogeneous and usually simulated with the Eulerian description of phases. Such a description requires the use of fine meshes and small time steps for the proper prediction of the hydrodynamics. This constraint on the mesh and time step size results in a large number of control volumes and long computational times, which are unaffordable for simulations of large-scale fluidized beds. If proper closure models are not included, coarse mesh simulations of fluidized beds do not give reasonable results: the coarse mesh fails to resolve the mesoscale structures and produces uniform solids concentration profiles. For a circulating fluidized bed riser, such predicted profiles result in a higher drag force between the gas and solid phases and an overestimated solids mass flux at the outlet. Thus, there is a need to formulate closure correlations which can accurately predict the hydrodynamics using coarse meshes. This thesis uses the space-averaging modeling approach in the formulation of closure models for coarse mesh simulations of the gas-solid flow in fluidized beds with Geldart group B particles. In formulating the closure correlation for the space-averaged drag model, the main modeling parameters were found to be the averaging size, the solid volume fraction, and the distance from the wall. The closure model for the gas-solid drag force was formulated and validated for coarse mesh simulations of the riser, which verified this modeling approach. Coarse mesh simulations using the corrected drag model resulted in lowered values of the solids mass flux. Such an approach is a promising tool in the formulation of appropriate closure models for coarse mesh simulations of large-scale fluidized beds.
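Filtered drag closures of this kind are commonly written as a correction factor applied to a microscopic drag law, and the abstract's parameters (averaging size, solid volume fraction, wall distance) slot naturally into such a form. The following is an illustrative shape, not the thesis's fitted correlation:

```latex
% General form of a space-averaged drag closure (illustrative).
\[
  \bar{\beta} \;=\; H\!\left(\Delta_f,\ \bar{\alpha}_s,\ y_{wall}\right)\,
  \beta_{micro}\!\left(\bar{\alpha}_s,\ \lvert\bar{\mathbf{u}}_g-\bar{\mathbf{u}}_s\rvert\right),
  \qquad 0 < H \le 1 ,
\]
% where beta_micro is a microscopic drag law evaluated with the filtered
% (coarse mesh) quantities and H is the correction factor obtained by
% averaging fine-mesh results over the averaging size Delta_f.
```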
Abstract:
To obtain the desired accuracy of a robot, there are two techniques available. The first option would be to make the robot match the nominal mathematical model; in other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement rises. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances and, by modifying the mathematical model in the controller, compensate for the actual errors of the robot. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10-degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, the differential-evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error models. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate real experimental validations. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
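A generic linearized form of such a kinematic error model helps make the identification step concrete; the sketch below is the textbook formulation (the thesis derives DH- and POE-based variants), with the DE and MCMC methods operating on the equivalent nonlinear residual:

```latex
% Linearized kinematic error model and least-squares identification
% (generic textbook form, for illustration).
\[
  \Delta\mathbf{x}_i
  \;=\; \mathbf{x}^{meas}_i - \mathbf{x}^{nom}_i(\mathbf{q}_i)
  \;\approx\; \mathbf{J}_i(\mathbf{q}_i)\,\Delta\boldsymbol{\phi},
  \qquad
  \widehat{\Delta\boldsymbol{\phi}}
  \;=\; \arg\min_{\Delta\boldsymbol{\phi}}
        \sum_{i}\bigl\lVert \Delta\mathbf{x}_i
        - \mathbf{J}_i\,\Delta\boldsymbol{\phi} \bigr\rVert^{2} .
\]
```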
Abstract:
This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs, and they assume a directionality whereby the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus; the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb: a neighbor verb exists for a reflexive verb if the two share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary; the theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items; instead, items are assumed to be interconnected in a network. This interconnectedness is defined as the Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood, and this interconnectedness is modeled through rhyme verbs, which constitute the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies, and this variability is quantified using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items; frequency of use has been one of the primary variables used in functional linguistics to probe this. In addition, a new variable called Constructional Entropy is introduced in this study, building on information theory: it is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity analysis indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, is proposed for the reflexive verbs in this study.
In addition to the variables associated with lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, up to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. Finally, most construction grammars assume that argument constructions form a network structure; a new method is proposed that establishes a generalization over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker and sets forth a new model that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
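Constructional Entropy is described as the amount of information a reflexive verb carries across its argument constructions; on the natural information-theoretic reading, this is the Shannon entropy of the verb's construction distribution. The sketch below illustrates that reading with hypothetical counts; the construction labels are invented, not drawn from the study's data.

```python
import math
from collections import Counter

def constructional_entropy(construction_counts):
    """Shannon entropy (in bits) of a verb's argument-construction
    distribution; higher values mean usage spread over more patterns."""
    total = sum(construction_counts.values())
    return -sum((n / total) * math.log2(n / total)
                for n in construction_counts.values() if n > 0)

# Hypothetical construction counts for one reflexive verb.
usage = Counter({"intransitive": 120, "oblique": 45, "passive": 15})
print(f"{constructional_entropy(usage):.3f} bits")
```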
Abstract:
InsomniaGame was a game concept experiment carried out in 2010 and 2011 as a collaboration between the digital culture discipline of the University of Turku and the Insomnia online gaming association. InsomniaGame was part of the broader two-year (1 October 2009 – 31 December 2011) project "CoEx: Yhteisöllistä tekemistä tukevat tilat kokemusten jakamisessa" (spaces supporting communal activity in the sharing of experiences), carried out jointly by the Pori unit of the University of Turku, the Pori unit of Tampere University of Technology, and the University of Tampere. The goal of the project was to implement virtual and public spaces that draw on social media, communality and augmented reality, in which users can share experiences. The study is an applied master's thesis that includes a two-year practical component comprising two game applications. InsomniaGame consisted of various tasks performed by the players, a game platform, and a background story. The main research questions are: which factors influenced the game design process, and how? The work presents the development of InsomniaGame, with particular attention to changes in the design process and in the content of the game, and to the factors that influenced them. The development of the game was based mainly on various documents, which were used as design aids and for communication between the different actors in the project. Based on the resulting documents and the recollections of the game designers, the study aims to reconstruct the development arc of the InsomniaGame application. Many factors in the development of InsomniaGame changed over its development arc: the content of the game itself, as well as the design approach, changed considerably over the two years. The game also had many special characteristics that make its development unique; for example, testing the game as a single whole was impossible. In addition, the game was a research and collaboration project involving many different actors, and the study particularly highlights the involvement of the partner, the Insomnia online gaming association. Neither year's implementation of InsomniaGame went as expected, which in turn affected the design of the game especially in the latter year. The actual design work nevertheless proceeded according to the model used in the first year, but in such a way that the original assumptions about the game design and the end result changed. For this reason, the game project can at times be characterized as even chaotic, and especially in the implementation phase new ways of working had to be created on a fast schedule. The work serves as a model for future game projects, but it would be particularly important to create a unified development platform for similar projects.
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three sections: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part of this dissertation, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part of this dissertation, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also considered in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the formulation of likelihood calculations via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to the model error covariance matrix.
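The likelihood-via-filtering idea has a standard expression: the prediction-error (innovation) decomposition, in which the filter's innovations and their covariances yield the likelihood that the MCMC sampler then targets. For illustration (this is the textbook form, not a quotation of the dissertation):

```latex
% Prediction-error decomposition of the likelihood from Kalman filter
% output: r_k is the innovation at step k, S_k its predicted covariance.
\[
  -2\log p\bigl(\mathbf{y}_{1:K} \mid \boldsymbol{\theta}\bigr)
  \;=\; \sum_{k=1}^{K}\Bigl[
      \mathbf{r}_k^{\mathsf{T}}\,\mathbf{S}_k^{-1}\,\mathbf{r}_k
      + \log\det\mathbf{S}_k \Bigr] \;+\; \mathrm{const} .
\]
```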