21 results for Branch Dependency Tracking

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

20.00%

Publisher:

Abstract:

EPC 2006 International Productivity Conference

Relevance:

20.00%

Publisher:

Abstract:

The use of belt drives in high-precision applications has become feasible owing to rapid development in motor and drive technology as well as the implementation of timing belts in servo systems. Belt drive systems provide high speed and acceleration, accurate and repeatable motion with high efficiency, long stroke lengths and low cost. This work examines the modeling of a linear belt-drive system and the design of its position control. Friction phenomena and the position-dependent elasticity of the belt are analyzed. Computer-simulated results show that the developed model is adequate. A PID controller for accurate tracking and position control is designed and applied to the real test setup. Both the simulation and the experimental results demonstrate that the designed controller meets the specified performance requirements.
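
As a rough illustration of the control approach described in the abstract, the sketch below implements a minimal discrete-time PID position loop in Python. The gains, the 1 kHz sampling rate and the plant interface are invented placeholders, not values or interfaces from the thesis.

# Minimal discrete PID position controller sketch (illustrative only).
# Gains and the plant interface are assumed, not taken from the thesis.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical usage: track a position reference on a belt-drive axis.
pid = PID(kp=120.0, ki=40.0, kd=2.5, dt=0.001)  # 1 kHz servo loop, assumed
# u = pid.update(reference_position, measured_position)
# set_motor_torque(u)  # placeholder actuator call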

Relevance:

20.00%

Publisher:

Abstract:

Knowledge sharing and communication are important activities between networked companies and are regarded as a success factor and cornerstone of a collaborative relationship. Challenges related to knowledge sharing include, among others, the leaking of knowledge critical to a company's business, and the real-time availability and sufficient quantity of the information the business requires. In product development collaboration, the unstructured nature of knowledge is challenging and increases the need for knowledge sharing; in addition, the shared knowledge is often complex and detailed. Furthermore, product life cycles are shortening, and outsourcing and collaboration are growing trends in business. Together these factors make knowledge sharing challenging, particularly between networked companies. This study addressed the challenges of knowledge sharing by taking as its starting point an understanding of the context dependence of knowledge sharing. The work answered two main questions: What is the context dependence of knowledge sharing, and how can it be managed? Context dependence here refers to the factors that affect how a company shares knowledge with its product development partners. Knowledge sharing, in turn, refers to the knowledge transferred from one company to another that is needed during a product development project. The empirical material of the work was collected with a qualitative research approach as a case study in one telecommunications company and its various business units. The research population comprised 19 directors or managers working in product development and supplier management roles. The work draws mainly on the research field of purchasing and supply management, and to examine context dependence it focused in particular on network research. The work described knowledge sharing as one function of a network, and the benefits, challenges and risks of knowledge sharing related to collaboration were identified. In addition, the work developed models for studying networks and combined network research conducted at different levels. A model for studying network functions was presented, and it was concluded that network research should be carried out at the network, chain, business relationship and company levels. Product- and task-specific characteristics should also be incorporated into the model. Based on the literature review, it was observed that knowledge sharing had previously been examined mainly at the product and business relationship levels. The dissertation presented further significant factors that affect knowledge sharing, including the nature of the product development task, the maturity of the technology area and the capability of the supplier. In examining the nature of shared knowledge, a distinction was made between operational knowledge, related to project management and product development, and general strategic knowledge, related to supplier management. According to the results, the specification phase of product development and face-to-face meetings were particularly emphasized in collaboration. The empirical material was also used to study the means by which knowledge sharing can be managed on the basis of context dependence, since the means of managing knowledge sharing, or its success factors, had not previously been linked directly to different circumstances. These means of management were divided into collaboration-level and product-development-project-level factors. One of the key results of the work is that despite the challenges of knowledge sharing, many of them can be eliminated by identifying the prevailing circumstances and investing in the means of managing knowledge sharing. The managerial contribution of the work concerns especially companies that plan and carry out product development collaboration with their business partners.
The work presents means for managing this challenging field and concludes that companies should pay more attention to the management of knowledge sharing and communication already when planning product development collaboration.

Relevance:

20.00%

Publisher:

Abstract:

Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language. This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken uses full parsing (syntactic analysis of the entire structure of sentences) and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains. The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance, and we introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time. To establish the relative merits of parsers that differ in the applied formalisms and the representation given to their syntactic analyses, we also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unifying diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The introduced unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization. To support this and other research, we also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships.
The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6000 entities, 2500 relationships and 28,000 syntactic dependencies in 1100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships. Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.
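
To make the idea of unifying parse representations more concrete, here is a minimal sketch that maps dependency triples from one labelling scheme into a shared label set. The schemes and the label map are invented for illustration; the conversion methodology of the thesis involves structural transformations well beyond simple relabelling.

# Illustrative sketch: converting dependency parses between label schemes.
# The label map below is hypothetical; the actual conversion described in
# the thesis is considerably more involved (structural transformations,
# not just label renaming).

LABEL_MAP = {
    "nsubj": "SUBJ",  # nominal subject      -> shared subject label
    "dobj": "OBJ",    # direct object        -> shared object label
    "amod": "MOD",    # adjectival modifier  -> shared modifier label
}

def convert(dependencies):
    """Map (head, dependent, label) triples to a shared representation.
    Unknown labels are kept as-is and returned for manual review."""
    converted, unmapped = [], []
    for head, dep, label in dependencies:
        if label in LABEL_MAP:
            converted.append((head, dep, LABEL_MAP[label]))
        else:
            unmapped.append(label)
            converted.append((head, dep, label))
    return converted, sorted(set(unmapped))

# Hypothetical usage on a toy parse of "protein binds receptor".
parse = [("binds", "protein", "nsubj"), ("binds", "receptor", "dobj")]
shared, todo = convert(parse)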

Relevance:

20.00%

Publisher:

Abstract:

Theoretical population synthesis can be used to model the photometric properties of star clusters and galaxies by combining the radiation produced by individual stars, obtained from theoretical stellar evolution models. By choosing a suitable mass distribution for newly formed stars, a simple stellar population can be constructed, consisting of stars of the same age and of uniform chemical composition. More complex stellar populations can be formed by convolving the luminosity of simple stellar populations with a chosen star formation history and by combining populations formed in this way. This work examines the effect of new, refined evolution models of asymptotic giant branch (AGB) stars on the results of population synthesis, both for simple stellar populations and for the more complex stellar populations suitable for modelling galaxies. The main purpose of the work is to produce, based on the updated models, relations between a population's mass-to-luminosity ratio and its colour (MLC relations). MLC relations can be used to determine the mass of a population from its photometric properties (colour, luminosity). In addition, the effect of interstellar dust on the MLC relations of a simple spiral galaxy model is studied. The stellar evolution models used in the work are based on Marigo et al. (Astronomy & Astrophysics 482, 2008). It is found that the contribution of AGB stars to the integrated luminosity of a population is small at visual wavelengths but significant in the near-infrared. The effect on the MLC relations is correspondingly significant when the luminosity is observed in the near-infrared and when colours combining optical and near-infrared bands are used. It is concluded that using MLC relations in the near-infrared requires the refined AGB phase to be included in population synthesis models. The effect of interstellar dust on the MLC relations is found to depend on the band and colour used, but the effect is observed to be largest for combinations of optical and near-infrared colours.
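
In practice, an MLC relation is commonly expressed as a linear fit of the logarithmic mass-to-light ratio against a colour. The sketch below applies such a relation; the coefficients are invented placeholders, not values derived from the models used in the thesis.

# Sketch of applying a mass-to-luminosity vs. colour (MLC) relation of
# the form log10(M/L_K) = a + b * (V - K). The coefficients a, b are
# hypothetical placeholders; real values would come from the population
# synthesis fit.

def stellar_mass(l_k_solar, v_minus_k, a=-1.1, b=0.20):
    """Estimate stellar mass (solar masses) from K-band luminosity
    (solar units) and a V-K colour via a linear MLC relation."""
    m_over_l = 10.0 ** (a + b * v_minus_k)
    return m_over_l * l_k_solar

# Hypothetical example: a population with L_K = 1e10 L_sun and V-K = 3.0.
mass = stellar_mass(1e10, 3.0)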

Relevance:

20.00%

Publisher:

Abstract:

Modern society is getting increasingly dependent on software applications. These run on processors, use memory and account for controlling functionalities that are often taken for granted. Typically, applications adjust their functionality in response to a certain context that is provided or derived from the informal environment with varying quality. To rigorously model the dependence of an application on a context, the details of the context are abstracted and the environment is assumed stable and fixed. However, in a context-aware ubiquitous computing environment populated by autonomous agents, a context and its quality parameters may change at any time. This raises the need to derive the current context and its qualities at runtime. It also implies that a context is never certain and may be subjective, issues captured by the context's quality parameter of experience-based trustworthiness. Given this, the research question of this thesis is: in what logical topology and by what means may context provided by autonomous agents be derived and formally modelled to serve the context-awareness requirements of an application? This research question also stipulates that the context derivation needs to incorporate the quality of the context. In this thesis, we focus on the quality-of-context parameter of trustworthiness, based on experiences that have a level of certainty and on referral experiences, thus making trustworthiness reputation based. Hence, in this thesis we seek a basis on which to reason about and analyse the inherently inaccurate context derived by autonomous agents populating a ubiquitous computing environment, in order to formally model context-awareness. More specifically, the contribution of this thesis is threefold: (i) we propose a logical topology of context derivation and a method of calculating its trustworthiness, (ii) we provide a general model for storing experiences and (iii) we formalise the dependence between the logical topology of context derivation and its experience-based trustworthiness. These contributions enable abstraction of a context and its quality parameters to a Boolean decision at runtime that may be formally reasoned with. We employ the Action Systems framework for modelling this. The thesis is a compendium of the author's scientific papers, which are republished in Part II. Part I introduces the field of research and provides the connecting elements that make the thesis a coherent introduction addressing the research question. In Part I we also review a significant body of related literature in order to better illustrate our contributions to the research field.
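
As a rough illustration of experience-based trustworthiness, the following sketch aggregates own and referral experiences into a trust score in the style of a beta reputation system and abstracts it to a Boolean decision. The weighting and threshold are assumptions made for illustration, not the Action Systems formalisation developed in the thesis.

# Illustrative beta-reputation-style trust score from experiences.
# Discounting referral experiences by a fixed weight is an assumed
# simplification, not the formalisation used in the thesis.

def trustworthiness(own_pos, own_neg, ref_pos=0, ref_neg=0, ref_weight=0.5):
    """Expected trust in [0, 1] from counts of positive/negative
    experiences; referral experiences count with reduced weight."""
    pos = own_pos + ref_weight * ref_pos
    neg = own_neg + ref_weight * ref_neg
    return (pos + 1.0) / (pos + neg + 2.0)  # mean of Beta(pos+1, neg+1)

def context_decision(score, threshold=0.7):
    """Abstract a derived context's quality to a Boolean decision."""
    return score >= threshold

# Hypothetical usage: decide whether to act on a derived context.
trusted = context_decision(
    trustworthiness(own_pos=8, own_neg=1, ref_pos=4, ref_neg=2))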

Relevance:

20.00%

Publisher:

Abstract:

In this work, image-based estimation methods, also known as direct methods, are studied; these avoid feature extraction and matching completely. The cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful in conditions where the Lambertian assumption holds and the 3D points have constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods that can be incorporated into real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction and in a live augmented reality application.
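
The core of such a direct method is a photometric cost computed over raw pixels. The sketch below evaluates a minimal sum-of-squared-intensity-differences cost for a candidate pose; the warp function is left as a placeholder, since the projection and lens distortion models are specific to the work.

# Minimal photometric cost sketch for direct pose estimation: sum of
# squared intensity differences between a reference image and the
# current image, sampled at pixels warped by a candidate pose.
# warp() is a placeholder for perspective projection + lens distortion;
# nearest-neighbour sampling is used here for brevity.

import numpy as np

def photometric_cost(ref_img, cur_img, points3d, pose, warp):
    """Sum of squared intensity residuals for a candidate pose.
    ref_img, cur_img: 2-D grayscale arrays; points3d: Nx3 array;
    warp(points3d, pose) -> Nx2 pixel coordinates."""
    ref_uv = warp(points3d, np.zeros(6))  # reference pose assumed at origin
    cur_uv = warp(points3d, pose)
    r = ref_uv.round().astype(int)
    c = cur_uv.round().astype(int)
    residuals = (ref_img[r[:, 1], r[:, 0]].astype(float)
                 - cur_img[c[:, 1], c[:, 0]].astype(float))
    return float(residuals @ residuals)

# The 6-vector pose (3 rotation + 3 translation parameters) would be
# refined by a non-linear least-squares solver over this cost.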

Relevance:

20.00%

Publisher:

Abstract:

Visual object tracking has recently been one of the most popular research topics in the field of computer vision. Specifically, hand tracking has attracted significant attention, since it would enable many useful practical applications. However, hand tracking is still a very challenging problem that cannot be considered solved. The fact that almost every aspect of hand appearance can change is the fundamental reason for this difficulty. This thesis focused on 2D-based hand tracking in high-speed camera videos. During the project, a toolbox containing nine different tracking methods was assembled for this purpose. In the experiments, these methods were tested and compared against each other using both high-speed videos recorded during the project and publicly available normal-speed videos. The results revealed that tracking accuracy varied considerably depending on the video and the method. Therefore, no single method was clearly the best in all videos, but three methods, CT, HT, and TLD, performed better than the others overall. Moreover, the results provide insights into the suitability of each method for different types and situations of hand tracking.
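
A common way to compare trackers of this kind is the mean bounding-box overlap (intersection over union) against ground-truth annotations, sketched below. The success threshold of 0.5 is a widespread convention and is assumed here; the thesis may have used a different metric.

# Sketch: comparing trackers by bounding-box intersection-over-union
# (IoU) against ground truth. Boxes are (x, y, width, height) tuples.
# The 0.5 success threshold is a common convention, assumed here.

def iou(a, b):
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(predicted, ground_truth, threshold=0.5):
    """Fraction of frames where the tracker's overlap exceeds threshold."""
    hits = sum(iou(p, g) >= threshold
               for p, g in zip(predicted, ground_truth))
    return hits / len(ground_truth)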

Relevance:

20.00%

Publisher:

Abstract:

An augmented reality (AR) device must know the observer's location and orientation, i.e. the observer's pose, to be able to correctly register the virtual content to the observer's view. One possible way to determine and continuously follow the pose is model-based visual tracking. It assumes that a 3D model of the surroundings is known and that a video camera is fixed to the device. The pose is tracked by comparing the video camera image to the model. Each new pose estimate is usually based on the previous estimate. However, the first estimate must be found without a prior estimate, i.e. the tracking must be initialized, which in practice means that some features must be identified in the image and matched to model features. This is known in the literature as the model-to-image registration problem or the simultaneous pose and correspondence problem. This report reviews visual tracking initialization methods that are suitable for visual tracking in a shipbuilding environment when the ship CAD model is available. The environment is complex, which makes the initialization non-trivial. The report has been done as part of the MARIN project.
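
Once a handful of model-to-image correspondences have been hypothesised, the pose itself can be recovered with a standard perspective-n-point (PnP) solver. The sketch below uses OpenCV's solvePnP with invented correspondences and camera intrinsics, purely to illustrate the registration step; it is not the initialization pipeline of the report.

# Sketch: recovering camera pose from model-to-image correspondences
# with a perspective-n-point (PnP) solver. All points and intrinsics
# are invented for illustration; real values would come from the ship
# CAD model and a calibrated camera.

import numpy as np
import cv2

object_points = np.array([[0.0, 0.0, 0.0],   # 3-D model features (metres)
                          [1.0, 0.0, 0.0],
                          [1.0, 1.0, 0.0],
                          [0.0, 1.0, 0.0]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],     # matched 2-D pixels
                         [420.0, 245.0],
                         [425.0, 345.0],
                         [325.0, 340.0]], dtype=np.float64)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
# rvec, tvec give the model-to-camera transform used to initialize tracking.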

Relevance:

20.00%

Publisher:

Abstract:

The map belongs to the A. E. Nordenskiöld Collection.

Relevance:

20.00%

Publisher:

Abstract:

Since the years preceding the Second World War, aircraft tracking has been a core interest of both military and non-military aviation. During subsequent years, advances in both technology and radar configuration have allowed users to deploy radar in numerous fields, such as over-the-horizon radar, ballistic missile early warning systems and forward scatter fences. The last of these is arranged in a bistatic configuration. The bistatic radar has continuously re-emerged over the last eighty years for its intriguing capabilities and challenging configuration and formulation. The bistatic radar arrangement is used as the basis of all the analyses presented in this work. The aircraft tracking method using VHF Doppler-only information, developed in the first part of this study, is based solely on Doppler frequency readings in relation to the time instants of their appearance. The corresponding inverse problem is solved by utilising a multistatic radar scenario with two receivers and one transmitter and using their frequency readings as the basis for aircraft trajectory estimation. The quality of the resulting trajectory is then compared with ground-truth information based on ADS-B data. The second part of the study deals with the development of a method for instantaneous Doppler curve extraction from within a VHF time-frequency representation of the transmitted signal, using a configuration of three receivers and one transmitter, based on a priori knowledge of the probability density function of the first-order derivative of the Doppler shift, and on a system of blocks for identifying, classifying and predicting the Doppler signal. The extraction capabilities of this set-up are tested with a recorded TV signal and simulated synthetic spectrograms. Further analyses are devoted to more comprehensive testing of the capabilities of the extraction method. Besides testing the method, aircraft classification is performed on the extracted bistatic radar cross section profiles and the correlation between them for different types of aircraft. In order to properly estimate the profiles, the ADS-B aircraft location information is adjusted based on the extracted Doppler frequency and then used for bistatic radar cross section estimation. The classification is based on seven types of aircraft grouped by their size into three classes.
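
The quantity at the heart of both parts of the study is the bistatic Doppler shift, which is proportional to the rate of change of the total transmitter-target-receiver path length. The small numerical sketch below approximates it from two closely spaced target positions; the geometry and wavelength are arbitrary illustrative values.

# Sketch: bistatic Doppler shift f_D = -(1/lambda) * d/dt (R_tx + R_rx),
# approximated numerically from target positions at two nearby time
# instants. Geometry and wavelength are arbitrary illustrative values.

import numpy as np

def bistatic_doppler(tx, rx, target_t0, target_t1, dt, wavelength):
    """Approximate bistatic Doppler shift (Hz) from target positions at
    two time instants dt seconds apart (all positions in metres)."""
    def path(p):
        return np.linalg.norm(p - tx) + np.linalg.norm(p - rx)
    return -(path(target_t1) - path(target_t0)) / (dt * wavelength)

tx = np.array([0.0, 0.0, 0.0])                # transmitter
rx = np.array([50e3, 0.0, 0.0])               # receiver 50 km away
p0 = np.array([20e3, 30e3, 10e3])             # aircraft at time t
p1 = p0 + np.array([200.0, 0.0, 0.0]) * 0.1   # 200 m/s east, dt = 0.1 s
f_d = bistatic_doppler(tx, rx, p0, p1, 0.1, wavelength=3.0)  # ~100 MHz VHF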

Relevance:

20.00%

Publisher:

Abstract:

Many industrial applications need object recognition and tracking capabilities. The algorithms developed for these purposes are computationally expensive. Yet real-time performance, high accuracy and small power consumption are essential measures of such a system. When all these requirements are combined, hardware acceleration of these algorithms becomes a feasible solution. The purpose of this study is to analyze the current state of these hardware acceleration solutions: which algorithms have been implemented in hardware, and what modifications have been made in order to adapt these algorithms to hardware.