37 results for pacs: data handling techniques
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
In many applications it is very important to reduce the effect of the light source in order to perceive the true colour of an object. This is needed, for example, in virtual museums, telemedicine, e-commerce and electronic money. In this thesis, techniques were developed for removing specular highlights from spectral images. The work includes a review of general colour image understanding, on the basis of which different highlight removal techniques were analysed. A new highlight removal method was developed, based on the dichromatic reflection model, which describes spectral data in terms of the object's own colour and the colour of the illuminating light. The proposed highlight removal method makes use of several existing techniques, such as principal component analysis and data classification. The attempt to develop a fast algorithm that also performs the task well was successful. Experiments were carried out with the proposed method, and the working algorithm produced the desired results. Finally, the work includes suggestions for improving the presented algorithm.
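As a rough illustration of the dichromatic idea, not the thesis algorithm itself: under the model, the specular component of every pixel's spectrum points along the illuminant's spectral power distribution, so highlights can be suppressed by projecting pixel spectra onto that direction and removing the excess. A minimal sketch in Python/NumPy, assuming a known illuminant and using a simple median baseline (both illustrative choices):

```python
import numpy as np

def remove_highlights(spectral_img, illuminant):
    """Suppress specular highlights under the dichromatic reflection model.

    spectral_img : (H, W, B) array of per-pixel spectra.
    illuminant   : (B,) spectral power distribution of the light source.
    """
    H, W, B = spectral_img.shape
    pixels = spectral_img.reshape(-1, B).astype(float)
    # Unit vector along the illuminant spectrum: the specular (interface)
    # component of each pixel lies along this direction.
    l = illuminant / np.linalg.norm(illuminant)
    # Specular magnitude per pixel = projection onto the illuminant direction.
    spec = pixels @ l
    # Subtract only the part exceeding a typical diffuse projection (median),
    # so matte regions are left approximately untouched.
    excess = np.clip(spec - np.median(spec), 0.0, None)
    cleaned = pixels - excess[:, None] * l[None, :]
    return np.clip(cleaned, 0.0, None).reshape(H, W, B)

# Illustrative use: a 3-pixel "image" with 5 spectral bands.
light = np.array([1.0, 0.9, 0.8, 0.7, 0.6])
body = np.array([0.2, 0.4, 0.5, 0.3, 0.1]) * light    # diffuse reflection
img = np.stack([body, body, body + 2.0 * light])       # third pixel has a highlight
print(remove_highlights(img.reshape(1, 3, 5), light))
```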
Abstract:
This Master's thesis provides a basic survey of existing data transfer technologies that can be used in an industrial environment. Data transfer systems are needed for controlling industrial processes and equipment on many different levels. Higher-level buses include, for example, the business administration bus, over which operational data transfer takes place. Operational data transfer consists of production data transfer and the transfer of data related to machine operation. The goal is to improve production efficiency and the availability of logistics and machines. Lower-level buses include the control bus, which may consist of several sub-buses: the factory bus, the process bus and the fieldbus. Control data transfer and device-to-device data transfer take place over the control bus. Data transfer also occurs inside devices. The first part of the thesis introduces the devices between which data transfer connections may be needed, and the requirements that an industrial environment places on such connections. The thesis presents the characteristics and applications of data transfer technologies suitable for industrial environments. The selection of data transfer systems suitable for implementing different connections is discussed with the help of examples. An example installation used to study the suitability of a low-voltage electricity network for data transfer is also presented. The thesis summarizes which data transfer technologies can be used to implement data transfer between different devices, and discusses the possibilities offered by new data transfer technologies and the requirements related to their use.
Abstract:
The development of wireless data transfer technologies has created an opportunity to renew the data communication system of Stora Enso's vehicle system and to develop its operation. The most important task of the vehicle system's data communication is to transfer transport control data quickly and reliably between Stora Enso Metsä's forest system, the production plants and the vehicles of the vehicle system. For this task the vehicle system uses wireless data transfer. In this thesis, the data communication of the third version of Stora Enso's vehicle system over a wireless mobile network is designed and implemented. The technical choices made for the wireless data transfer of the communication system are justified. The design and execution of system testing in Stora Enso's production environment is presented. The test results are analysed and compared with the current system and with the set targets in order to verify that the quality requirements set for the communication system are met. Finally, future development targets of the vehicle system and the development possibilities enabled by future data transfer technologies are examined.
Abstract:
Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language. This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken is one using full parsing (syntactic analysis of the entire structure of sentences) and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains. The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance and introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time. To establish the relative merits of parsers that differ in the applied formalisms and the representation given to their syntactic analyses, we have also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unification of diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The introduced unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization. To support this and other research, we have also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships.
The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6000 entities, 2500 relationships and 28,000 syntactic dependencies in 1100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships. Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.
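Once parses are converted to a shared dependency representation, the kind of formalism-neutral comparison described above reduces to attachment agreement. A small sketch of standard unlabeled/labeled attachment scoring (the example sentence and relation labels are illustrative, not from BioInfer):

```python
from typing import List, Tuple

# One sentence parse = a list of (head_index, dependent_index, relation)
# triples, with index 0 reserved for the artificial root.
Dep = Tuple[int, int, str]

def attachment_scores(gold: List[Dep], pred: List[Dep]) -> Tuple[float, float]:
    """Unlabeled and labeled attachment agreement between two analyses."""
    gold_unlabeled = {(h, d) for h, d, _ in gold}
    gold_labeled = set(gold)
    uas = sum((h, d) in gold_unlabeled for h, d, _ in pred) / len(pred)
    las = sum(t in gold_labeled for t in pred) / len(pred)
    return uas, las

# Illustrative parses of "MyoD binds DNA" (tokens 1..3, 0 = root).
gold = [(2, 1, "nsubj"), (0, 2, "root"), (2, 3, "obj")]
pred = [(2, 1, "nsubj"), (0, 2, "root"), (2, 3, "nmod")]  # one wrong label
print(attachment_scores(gold, pred))  # -> (1.0, 0.666...)
```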
Abstract:
Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, etc. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or airborne laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All data handling and parameter tuning should be objective and automated as much as possible. The methods also need to be robust when applied to different forest types. Since there generally are no comprehensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The mathematical model parameter definition steps are automated, and the cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of new area characteristics.
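The abstract does not name a specific selection algorithm; as one standard, automatable way to prune hundreds of possibly collinear remote sensing features, a cross-validated LASSO sketch (synthetic data with illustrative dimensions, not the thesis method):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_plots, n_features = 120, 80          # field plots vs. extracted image features
X = rng.normal(size=(n_plots, n_features))
# Assume the response (e.g. stem volume) depends on only a handful of features.
beta = np.zeros(n_features)
beta[[3, 17, 42]] = [5.0, -3.0, 2.0]
y = X @ beta + rng.normal(scale=0.5, size=n_plots)

X_std = StandardScaler().fit_transform(X)
model = LassoCV(cv=5).fit(X_std, y)     # penalty strength chosen automatically by CV
selected = np.flatnonzero(model.coef_)  # variables surviving the L1 penalty
print("selected features:", selected)
```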
Abstract:
The recent digitization, the fragmentation of the media landscape and consumers' changing media behavior are all changes that have had drastic effects on creating marketing communications. In order to create effective marketing communications, large advertisers now co-operate with a variety of marketing communications companies. The purpose of the study is to understand how advertisers perceive these different companies and, more importantly, how advertisers expect their roles to change in the future as the media landscape continues to evolve. Especially the changing roles of advertising agencies and media agencies are examined, as they are at the moment the most relevant partners of the advertisers. However, the research is conducted from a network perspective rather than focusing on single actors of the marketing communications industry network. The research was conducted using a qualitative theme interview method. The empirical data was gathered by interviewing representatives from nine of the 50 largest Finnish advertisers measured by media spending. Thus, the research was conducted solely from the large B2C advertisers' perspective, while the views of the other relevant actors of the network were left unexplored. The interviewees were chosen with a focus on a variety of points of view. The analytical framework used to analyze the gathered data was built on the IMP Group's industrial network model, which consists of actors, their resources and activities. As technology-driven media landscape fragmentation and consumers' changing media behavior continue to increase the complexity of creating marketing communications, advertisers are going to need to rely on a growing number of partnerships, as they see that the current actors of the network will not be able to widen their expertise to answer these new needs. The advertisers expect to form new partnerships with actors that are more specialized and able to react and produce activities more quickly than at the moment. Thus, new smaller and more agile actors with looser structures are going to appear to fill these new needs. Therefore, the need for co-operation between the actors is going to become more important. These changes pose the biggest threat to traditional advertising agencies, as they were seen as the least able to cope with the ongoing change. Media agencies are in a more favorable position for remaining relevant to the advertisers, as they will be able to justify their activities and the value they provide by leveraging their data handling abilities. In general, the advertisers expect to work with a limited number of close actors, complemented by a network of smaller actors used on a more ad hoc basis.
Abstract:
The objective of this study was to understand how organizational knowledge governance mechanisms affect individual motivation, opportunity, and the ability to share knowledge (MOA framework), and further, how individual knowledge-sharing conditions affect actual knowledge sharing behaviour. The study followed the knowledge governance approach and a micro-foundations perspective to develop a theoretical model and hypotheses that could explain the causal relationships between knowledge governance mechanisms, individual knowledge sharing conditions, and individual knowledge sharing behaviour. A quantitative research strategy and multivariate data analysis techniques (SEM) were used in hypothesis testing with a survey dataset of 256 employees from eleven military schools of the Finnish Defence Forces (FDF). The results showed that “performance-based feedback and rewards” affects an employee's “intrinsic motivation towards knowledge sharing”, that “lateral coordination” affects an employee's “knowledge self-efficacy”, and that “training and development” is positively related to “time availability” for knowledge sharing but negatively affects an employee's knowledge self-efficacy. Individual motivation and knowledge self-efficacy towards knowledge sharing affected knowledge sharing behaviour when work-related knowledge was shared 1) between employees in a department and 2) between employees in different departments; however, these factors did not play a crucial role in subordinate–superior knowledge sharing. The findings suggest that individual motivation, opportunity, and the ability towards knowledge sharing affect individual knowledge sharing behaviour differently in different knowledge sharing situations. Furthermore, knowledge governance mechanisms can be used to manage individual-level knowledge sharing conditions and individual knowledge sharing behaviour, but their effect also varies across knowledge sharing situations.
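A sketch of how such a hypothesised causal chain can be specified and fitted as a structural equation model, here using the third-party semopy package; the model string, file name and column names are placeholders standing in for the study's actual measurement model:

```python
# pip install semopy pandas
import pandas as pd
import semopy

# Hypothesised paths (lavaan-style syntax): governance mechanisms ->
# individual sharing conditions -> sharing behaviour.
desc = """
intrinsic_motivation ~ feedback_rewards
self_efficacy        ~ lateral_coordination + training_development
time_availability    ~ training_development
sharing_behaviour    ~ intrinsic_motivation + self_efficacy + time_availability
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical 256-row survey file
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```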
Abstract:
Abstract. Author: Antti Korkki. Title of the thesis: Transferring tacit knowledge in the Helsinki sales department of Palveluyritys Oy from the sales manager's perspective. Faculty: School of Business. Master's programme: Knowledge Management. Year: 2014. Master's thesis: Lappeenranta University of Technology, 88 pages, 20 figures and one table. Examiners: Professor Markku Ikävalko and Postdoctoral Researcher Anna-Maija Nisula. Keywords: tacit knowledge, knowledge sharing, competence. The tightened competitive situation in the transport sector forces companies in the field to seek new means of achieving strategic competitive advantage. For a service company, the most significant means of competition is competing on service quality, and to achieve high quality it is important that the right knowledge is in the right place at the right time. In practice this means using real-time knowledge transfer methods. The purpose of this study is to determine the current state of the knowledge transfer forums of the Helsinki sales department of Palveluyritys Oy from the sales manager's perspective. In addition, the study seeks ways to make the transfer of tacit knowledge in the Helsinki sales department of Palveluyritys Oy more effective. The study uses a qualitative research method and theory-guided content analysis.
Abstract:
By leveraging cloud services, companies and organizations can significantly improve their efficiency and build novel business opportunities. Cloud computing offers companies various advantages, but it also carries risks. The advantages offered by service providers mostly concern efficiency and reliability, while the risks of cloud computing mostly concern security. Cloud security still demands significant attention. Security problems in the cloud, as in any area of computing, cannot be fully eliminated; however, service providers can develop novel solutions to mitigate the potential threats to a large extent. Viewed from a high level, there are two focus directions: security problems that threaten the service user's security and privacy on one side, and security problems that threaten the service provider's security and privacy on the other. Both kinds of threats should mostly be detected and mitigated by service providers. Looking more closely, mitigating security problems that target providers can protect both the service provider and the user; the focus of the research community, however, has mostly been on solutions that protect cloud users. A significant research effort has been put into protecting cloud tenants against external attacks. However, attacks that originate from elastic, on-demand and legitimate cloud resources should also be taken seriously. The cloud-based botnet, or botcloud, is one of the prevalent cases of cloud resource misuse. Unfortunately, some of the cloud's essential characteristics enable criminals to form reliable and low-cost botclouds in a short time. In this paper, we present a system that helps to detect distributed infected Virtual Machines (VMs) acting as elements of botclouds. Based on a set of botnet-related system-level symptoms, our system groups VMs; grouping VMs helps to separate infected VMs from the others and narrows down the target group under inspection. Our system takes advantage of Virtual Machine Introspection (VMI) and data mining techniques.
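A toy sketch of the grouping step: each VM is described by a vector of system-level symptoms gathered via introspection and clustered so that machines sharing a bot-like profile land in the same group. The metric names and numbers below are illustrative, and k-means stands in for whatever data mining method the actual system uses:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: one VM's symptom vector, e.g. [outbound connections/min,
# distinct destination IPs, DNS queries/min, CPU %].
vm_metrics = np.array([
    [12, 5, 8, 35], [15, 6, 9, 40], [14, 4, 7, 38],   # ordinary tenants
    [480, 210, 300, 22], [510, 230, 290, 25],         # bot-like traffic pattern
])
X = StandardScaler().fit_transform(vm_metrics)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # VMs sharing the anomalous symptom profile fall into one group
```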
Abstract:
Developing nations differ from developed nations in their data usage techniques because they lack a standard information technology architecture. With globalization, information sharing between developing nations is necessary for advances in the socio-economic field and in science and technology. A robust IT architecture that eases information sharing and other data usage methods needs to be built between developing nations. A framework like TOGAF may work in this case, as an ordinary IT framework may not meet the requirements of an enterprise architecture. The intention of this thesis is to build an enterprise architecture between different developing nations using the TOGAF framework.
Abstract:
Visual data mining (VDM) tools employ information visualization techniques in order to represent large amounts of high-dimensional data graphically and to involve the user in exploring data at different levels of detail. The users are looking for outliers, patterns and models – in the form of clusters, classes, trends, and relationships – in different categories of data, i.e., financial, business information, etc. The focus of this thesis is the evaluation of multidimensional visualization techniques, especially from the business user’s perspective. We address three research problems. The first problem is the evaluation of projection-based visualizations with respect to their effectiveness in preserving the original distances between data points and the clustering structure of the data. In this respect, we propose the use of existing clustering validity measures. We illustrate their usefulness in evaluating five visualization techniques: Principal Components Analysis (PCA), Sammon’s Mapping, Self-Organizing Map (SOM), Radial Coordinate Visualization and Star Coordinates. The second problem is concerned with evaluating different visualization techniques as to their effectiveness in visual data mining of business data. For this purpose, we propose an inquiry evaluation technique and conduct the evaluation of nine visualization techniques. The visualizations under evaluation are Multiple Line Graphs, Permutation Matrix, Survey Plot, Scatter Plot Matrix, Parallel Coordinates, Treemap, PCA, Sammon’s Mapping and the SOM. The third problem is the evaluation of quality of use of VDM tools. We provide a conceptual framework for evaluating the quality of use of VDM tools and apply it to the evaluation of the SOM. In the evaluation, we use an inquiry technique for which we developed a questionnaire based on the proposed framework. The contributions of the thesis consist of three new evaluation techniques and the results obtained by applying these evaluation techniques. The thesis provides a systematic approach to evaluation of various visualization techniques. In this respect, first, we performed and described the evaluations in a systematic way, highlighting the evaluation activities, and their inputs and outputs. Secondly, we integrated the evaluation studies in the broad framework of usability evaluation. The results of the evaluations are intended to help developers and researchers of visualization systems to select appropriate visualization techniques in specific situations. The results of the evaluations also contribute to the understanding of the strengths and limitations of the visualization techniques evaluated and further to the improvement of these techniques.
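Two of the evaluation ideas above, distance preservation and preservation of clustering structure, can be quantified directly. A short sketch using scikit-learn's trustworthiness measure and the silhouette coefficient on a PCA projection; the Iris data is merely a stand-in for business data:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness
from sklearn.metrics import silhouette_score

X, y = load_iris(return_X_y=True)
X2 = PCA(n_components=2).fit_transform(X)

# Distance preservation: are each point's neighbours still its neighbours in 2-D?
print("trustworthiness:", trustworthiness(X, X2, n_neighbors=10))
# Clustering-structure preservation: class separation before vs. after projection.
print("silhouette (original):", silhouette_score(X, y))
print("silhouette (projected):", silhouette_score(X2, y))
```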
Abstract:
This book is dedicated to celebrating the 60th birthday of Professor Rainer Huopalahti. Professor Rainer "Repe" Huopalahti has had, and in fact is still enjoying, a distinguished career in the analysis of food and food-related flavor compounds. One will find it hard to make any progress in this particular field without a valid and innovative sample handling technique, and this is a field in which Professor Huopalahti has made great contributions. The title and the front cover of this book honor Professor Huopalahti's early steps in science. His PhD thesis, published in 1985, is entitled "Composition and content of aroma compounds in the dill herb, Anethum graveolens L., affected by different factors". At that time, the thesis introduced new technology applied to the sample handling and analysis of the flavoring compounds of dill. Sample handling is an essential task in just about every analysis. If one is working with minor compounds in a sample or trying to detect trace levels of the analytes, one of the aims of sample handling may be to increase the sensitivity of the analytical method. On the other hand, if one is working with a challenging matrix such as the kind found in biological samples, one of the aims is to increase the selectivity. However, quite often the aim is to increase both the selectivity and the sensitivity. This book provides good and representative examples of the necessity of valid sample handling and of the role of sample handling in the analytical method. The contributors of the book are leading Finnish scientists in the field of organic instrumental analytical chemistry. Some of them are also Repe's personal friends and former students from the University of Turku, Department of Biochemistry and Food Chemistry. Importantly, the authors all know Repe in one way or another and are well aware of his achievements in the field of analytical chemistry. The editorial team had a great time during the planning phase and during the "hard work" editorial phase of the book. For example, we came up with many ideas on how to publish the book. After many long discussions, we decided to have a limited edition as an "old school" hardcover book, and to acknowledge more modern ways of disseminating knowledge by publishing an internet version of the book on the webpages of the University of Turku. Downloading the book from the webpage for personal use is free of charge. We believe and hope that the book will be read with great interest by scientists working in the fascinating field of organic instrumental analytical chemistry. We decided to publish our book in English for two main reasons. First, we believe that in the near future more and more teaching in Finnish universities will be delivered in English; to facilitate this process and encourage students to develop good language skills, we decided to publish the book in English. Secondly, we believe that the book will also interest scientists outside Finland, particularly in the other member states of the European Union. The editorial team thanks all the authors for their willingness to contribute to this book and to adhere to the very strict schedule. We also want to thank the various individuals and enterprises who financially supported the book project. Without that support, it would not have been possible to publish the hardcover book.
Abstract:
This work is devoted to the problem of reconstructing the basis weight structure of a paper web with black-box techniques. The data that is analyzed comes from a real paper machine and was collected by an off-line scanner. The principal mathematical tool used in this work is Autoregressive Moving Average (ARMA) modelling. When coupled with the Discrete Fourier Transform (DFT), it gives a very flexible and interesting tool for analyzing properties of the paper web. Both ARMA and DFT are independently used to represent the given signal in a simplified version of our algorithm, but the final goal is to combine the two. The Ljung-Box Q-statistic lack-of-fit test, combined with the Root Mean Squared Error coefficient, gives a tool to separate significant signals from noise.
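A minimal sketch of the ARMA-plus-diagnostics pipeline described above, using statsmodels; the simulated AR(2) series stands in for a real scanner trace:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
# Stand-in for one basis weight trace: an AR(2) process with noise.
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.2)

fit = ARIMA(y, order=(2, 0, 1)).fit()   # ARMA(2,1) = ARIMA with d = 0
resid = fit.resid
# Ljung-Box on the residuals: a large p-value means the model has captured
# the significant structure and what remains can be treated as noise.
print(acorr_ljungbox(resid, lags=[10, 20]))
print("RMSE:", np.sqrt(np.mean(resid ** 2)))
```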
Abstract:
Nowadays the variety of fuels used in power boilers is widening, and new boiler constructions and operating models have to be developed. This research and development is done in small pilot plants, where faster analysis of the boiler mass and heat balance is needed in order to find and make the right decisions already during the test run. The barrier to determining the boiler balance during test runs is the long process of chemical analyses of the collected input and output matter samples. The present work concentrates on finding a way to determine the boiler balance without chemical analyses and on optimising the test rig to get the best possible accuracy for the heat and mass balance of the boiler. The purpose of this work was to create an automatic boiler balance calculation method for the 4 MW CFB/BFB pilot boiler of Kvaerner Pulping Oy, located in Messukylä in Tampere. The calculation was created in the data management computer of the pilot plant's automation system. The calculation is made in the Microsoft Excel environment, which gives a good basis and functions for handling large databases and calculations without any delicate programming. The automation system in the pilot plant was reconstructed and updated by Metso Automation Oy during 2001, and the new system, MetsoDNA, has good data management properties, which are necessary for large calculations such as the boiler balance calculation. Two possible methods for calculating the boiler balance during a test run were found. Either the fuel flow is determined, which is used to calculate the boiler's mass balance, or the unburned carbon loss is estimated and the mass balance of the boiler is calculated on the basis of the boiler's heat balance. Both of the methods have their own weaknesses, so they were constructed in parallel in the calculation and the decision on which method to use was left to the user. The user also needs to define the fuels used and some solid mass flows that aren't measured automatically by the automation system. Sensitivity analysis showed that the most essential values for accurate boiler balance determination are the flue gas oxygen content, the boiler's measured heat output and the lower heating value of the fuel. The theoretical part of this work concentrates on the error management of these measurements and analyses, and on measurement accuracy and boiler balance calculation in theory. The empirical part of this work concentrates on the creation of the balance calculation for the boiler in question and on describing the work environment.
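Of the two methods, the heat-balance route can be reduced, at its simplest, to inverting Q = m_fuel * LHV * eta for the fuel flow. The sketch below is a deliberate simplification of the actual spreadsheet calculation, with the unburned carbon loss folded into an assumed efficiency figure and illustrative numbers:

```python
def fuel_flow_from_heat_balance(heat_output_kw, lhv_kj_per_kg, efficiency=0.88):
    """Estimate fuel mass flow (kg/s) from the boiler's measured heat output.

    Inverts Q_useful = m_fuel * LHV * efficiency; the efficiency figure here
    is an assumption standing in for the estimated losses (incl. unburned carbon).
    """
    return heat_output_kw / (lhv_kj_per_kg * efficiency)

# Illustrative numbers for a 4 MW pilot boiler burning a wood-based fuel.
m_fuel = fuel_flow_from_heat_balance(heat_output_kw=4000.0,
                                     lhv_kj_per_kg=19000.0)
print(f"fuel flow ≈ {m_fuel:.3f} kg/s ({m_fuel * 3600:.0f} kg/h)")
```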
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e., the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions to the problem are obtained by adopting closed-form classical or modern algebraic solution methods, or numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations and at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By adopting the developed solution method to solve the dyadic equations in direct polynomial form at two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on optimisation (with mathematical programming techniques, or by adopting optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
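The dyadic standard-form equations mentioned above become, for three precision positions, a pair of complex linear equations that can be solved directly for the link vectors. A sketch (the numeric specification is illustrative; real use would sweep the free crank rotations to map the ground pivots):

```python
import numpy as np

def dyad_three_points(beta, alpha, delta):
    """Solve the standard-form dyad equations for three precision positions.

    W*(exp(i*beta_j) - 1) + Z*(exp(i*alpha_j) - 1) = delta_j,  j = 2, 3
    beta:  crank rotations (rad), free design choices
    alpha: coupler rotations (rad), from the motion specification
    delta: coupler-point displacements as complex numbers
    Returns the complex link vectors W (crank) and Z (coupler side).
    """
    A = np.array([[np.exp(1j * beta[0]) - 1, np.exp(1j * alpha[0]) - 1],
                  [np.exp(1j * beta[1]) - 1, np.exp(1j * alpha[1]) - 1]])
    W, Z = np.linalg.solve(A, np.asarray(delta, dtype=complex))
    return W, Z

# Illustrative specification: two coupler displacements and rotations,
# with the crank rotations chosen freely by the designer.
W, Z = dyad_three_points(beta=[np.deg2rad(40), np.deg2rad(85)],
                         alpha=[np.deg2rad(20), np.deg2rad(50)],
                         delta=[1.0 + 0.5j, 2.0 + 1.2j])
print("W =", W, " Z =", Z)  # one dyad; a second dyad completes the four-bar
```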