836 results for Data fusion applications
Abstract:
In the software industry, long and difficult development cycles can be eased by making use of software frameworks. A framework is a collection of classes that provides general solutions to the needs of a particular problem domain, freeing developers to concentrate on application-specific requirements. The use of well-designed frameworks increases the reuse of design solutions and source code more than any other design approach. Knowledge of a given domain can be captured in a framework, from which finished software products can in turn be specialized. This master's thesis describes the design and implementation of a software framework based on software agents. The main emphasis is on describing a design that meets the requirements specification, and its implementation, for a framework from which software capable of various kinds of data collection in the Internet environment can be specialized. The experimental part of the thesis also presents an example application built on the framework developed in the work.
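To make the specialization idea concrete, frameworks typically fix the overall control flow and expose hook methods that each application overrides. The sketch below is a minimal, hypothetical illustration of how a data-collection agent could be specialized from a framework base class; the class and method names are invented here, not taken from the thesis.

```python
from abc import ABC, abstractmethod

class CollectorAgent(ABC):
    """Framework base class: the invariant collection cycle lives here."""

    def run(self) -> None:
        # Template method: the framework fixes the control flow,
        # applications fill in the hook methods below.
        for source in self.sources():
            raw = self.fetch(source)
            self.store(self.parse(raw))

    @abstractmethod
    def sources(self) -> list[str]: ...         # where to collect from
    @abstractmethod
    def fetch(self, source: str) -> bytes: ...  # how to retrieve raw data
    @abstractmethod
    def parse(self, raw: bytes) -> dict: ...    # how to interpret it
    @abstractmethod
    def store(self, record: dict) -> None: ...  # where to put the result
```

A concrete agent, say one harvesting product pages over HTTP, would subclass CollectorAgent and implement only these four hooks, which is the kind of specialization the abstract describes.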
Abstract:
This master's thesis was written for the UPM Net Services sa/nv unit of the UPM-Kymmene Group in Brussels and Helsinki. The topic, Data communication in paper sales environment, was defined to cover subjects related to the paper sales system. The current paper sales system is first treated in theory, and the related software products and tool programs are introduced. Improvements to the current system are considered from the viewpoints of software design, performance, data management, information security and business. The practical part of the thesis presents two programs, built for UPM Net Services in order to gain experience with message-based data transfer. The conclusions state that data transfer in the paper sales system works reliably today, but future needs and improvements are difficult to implement with the tools currently in use. In particular, exploiting the Internet is seen as important, yet it is hard to adopt within the current system. Message-based systems have proven workable in practice, and the most important development proposal is therefore the adoption of a messaging system.
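As a rough, hypothetical sketch of what message-based data transfer means in practice (the two UPM programs themselves are not reproduced here), a sender hands a serialized business document to a queue and a receiver processes it independently; the message fields below are invented for illustration.

```python
import json
import queue

order_queue: "queue.Queue[str]" = queue.Queue()  # stand-in for a real message broker

def send_order(order_id: str, tonnes: float) -> None:
    # The sender only serializes the document and hands it to the messaging
    # layer; it does not need to know when or where the message is processed.
    order_queue.put(json.dumps({"order_id": order_id, "tonnes": tonnes}))

def receive_orders() -> None:
    while not order_queue.empty():
        msg = json.loads(order_queue.get())
        print(f"received order {msg['order_id']}: {msg['tonnes']} t")

send_order("A-1001", 24.5)
receive_orders()
```

This decoupling of sender and receiver is the property the conclusions above point to when recommending a messaging system.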
Abstract:
The main objective of the thesis was to study mobile services and wireless applications in the Finnish healthcare sector. The study identifies the key areas where mobile services and wireless applications can add value to traditional medical practice, and examines the major problems and threats related to this development as well as, based on the findings, the services and applications that may be available in 5-10 years. The study was qualitative in nature; futures research, and in particular one of its methods, the Delphi method, was chosen for carrying it out. The research material was collected in two rounds of semi-structured interviews. The empirical part of the thesis concentrates on describing the Finnish healthcare sector, the projects currently under way in it, and the technical obstacles, and on answering the main research question. The results showed that the areas on which wireless communication will have a significant impact are emergency care, remote monitoring of chronic patients, the development of wireless communication tools to improve home care and to create new operating models, and medical collaboration through shared healthcare information sources. Based on the results, a few recommendations for further research were also given.
Abstract:
The purpose of this master's thesis was to survey the effects, needs and benefits related to EDI, and to prepare the EDI Gateway module of the Oracle Applications ERP system for production use. Information for the needs assessment was gathered through discussions. New commercially driven initiatives, developed for business-to-business trade and for exploiting Internet technology, were examined from an EDI perspective with a view to the future. The most up-to-date information for the thesis was also found on the Internet. After this groundwork, a suitably broad but bounded EDI pilot project could be carried out to create an EDI concept. The thesis concentrates mainly on the effects of EDI on purchasing, and it was decided to apply EDI first to purchase orders. The benefits of EDI are difficult to measure numerically: a sufficiently large amount of money or product units must be handled with an EDI partner sufficiently often. In the deployment phase, the main problems are application-related IT problems. The knowledge gained from the surveys and the EDI pilot project can be exploited in further development, and additional measures are needed to create a fully working system.
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, but their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval, and to more general ranking problems, than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in the algorithms.
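For readers unfamiliar with the regularized least-squares (RLS) family referred to above, the dual solution has the closed form a = (K + λI)⁻¹y for a kernel matrix K and regularization parameter λ. The sketch below is a generic NumPy illustration of that formula with a Gaussian kernel; it is not the thesis's own implementation, its task-specific kernels, or its fast cross-validation algorithm.

```python
import numpy as np

def gaussian_kernel(X: np.ndarray, Z: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    # K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def rls_fit(X: np.ndarray, y: np.ndarray, lam: float = 1e-2) -> np.ndarray:
    # Dual coefficients: a = (K + lam * I)^{-1} y
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def rls_predict(X_train: np.ndarray, a: np.ndarray, X_new: np.ndarray) -> np.ndarray:
    return gaussian_kernel(X_new, X_train) @ a

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)
a = rls_fit(X, y)
print(rls_predict(X, a, np.array([[0.5]])))   # should be close to sin(0.5)
```

Prior knowledge enters exactly where the abstract says it does: by replacing the generic kernel with a task-specific one (for example, one encoding word positions and similarities) or by changing the squared-error cost to a ranking-oriented cost function.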
Abstract:
In recent years, technological advances have allowed manufacturers to implement dual-energy computed tomography (DECT) on clinical scanners. With its unique ability to differentiate basis materials by their atomic number, DECT has opened new perspectives in imaging. DECT has been used successfully in musculoskeletal imaging with applications ranging from detection, characterization, and quantification of crystal and iron deposits; to simulation of noncalcium (improving the visualization of bone marrow lesions) or noniodine images. Furthermore, the data acquired with DECT can be postprocessed to generate monoenergetic images of varying kiloelectron volts, providing new methods for image contrast optimization as well as metal artifact reduction. The first part of this article reviews the basic principles and technical aspects of DECT including radiation dose considerations. The second part focuses on applications of DECT to musculoskeletal imaging including gout and other crystal-induced arthropathies, virtual noncalcium images for the study of bone marrow lesions, the study of collagenous structures, applications in computed tomography arthrography, as well as the detection of hemosiderin and metal particles.
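The virtual monoenergetic images mentioned above rest on two-material basis decomposition: once the low- and high-kilovoltage acquisitions have been decomposed into two basis-material density maps, the attenuation at any chosen energy E can be synthesized. In general form (the exact basis pair and implementation vary by vendor),

$$\mu(E) \;\approx\; \rho_1 \left(\tfrac{\mu}{\rho}\right)_1(E) \;+\; \rho_2 \left(\tfrac{\mu}{\rho}\right)_2(E),$$

where \(\rho_1\) and \(\rho_2\) are the reconstructed basis-material density maps and \((\mu/\rho)_i(E)\) are their tabulated mass attenuation coefficients at the selected kiloelectron-volt level; sweeping E over a range of values yields the monoenergetic series used for contrast optimization and metal artifact reduction.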
Abstract:
Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca- and Ca/Mg-resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of a critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media was obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and the solvating rosin acids. The deduced kinetic model agreed with the analytical data of the syntheses over a wide resinate concentration region, a wide range of viscosity values and different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Because the reaction temperature is lower than in the traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
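The multivariate calibration step described above, in which PLS models map MIR/NIR spectra to acid value (or viscosity), can be sketched generically as follows. The spectra and acid values below are synthetic stand-ins, not the thesis data, and scikit-learn's PLSRegression is used purely as an illustrative tool.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for FTIR spectra: 60 samples mixed from 3 underlying
# component spectra, with the acid value driven by the first component.
concentrations = rng.uniform(0.0, 1.0, size=(60, 3))
component_spectra = rng.normal(size=(3, 400))
spectra = concentrations @ component_spectra + rng.normal(scale=0.05, size=(60, 400))
acid_value = 150.0 + 80.0 * concentrations[:, 0] + rng.normal(scale=1.0, size=60)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, acid_value, random_state=0)
pls = PLSRegression(n_components=3)      # number of latent variables to tune
pls.fit(X_tr, y_tr)
print("held-out R^2:", round(pls.score(X_te, y_te), 3))
```

A model of this form, trained on spectra with known reference titrations, is what allows the acid value, and hence the progress of the resination and decarboxylation steps, to be followed on-line.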
Abstract:
Data traffic caused by mobile advertising client software when it is communicating with the network server can be a pain point for many application developers who are considering advertising-funded application distribution, since the cost of the data transfer might scare their users away from using the applications. For the thesis project, a simulation environment was built to mimic the real client-server solution for measuring the data transfer over varying types of connections with different usage scenarios. For optimising data transfer, a few general-purpose compressors and XML-specific compressors were tried for compressing the XML data, and a few protocol optimisations were implemented. For optimising the cost, cache usage was improved and pre-loading was enhanced to use free connections to load the data. The data traffic structure and the various optimisations were analysed, and it was found that the cache usage and pre-loading should be enhanced and that the protocol should be changed, with report aggregation and compression using WBXML or gzip.
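As a rough illustration of the payload-compression comparison described above (gzip shown here; WBXML would need a dedicated tokenizing encoder), the following sketch measures the size reduction for a small, made-up ad-report XML document.

```python
import gzip

xml_report = (
    "<report>"
    + "".join(f"<ad id='{i}' impressions='{i * 7}' clicks='{i % 3}'/>" for i in range(200))
    + "</report>"
).encode("utf-8")

compressed = gzip.compress(xml_report)
print(f"plain XML: {len(xml_report)} bytes, gzip: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(xml_report):.0f}% of original)")
```

Because such reports are highly repetitive, generic compression already removes much of the overhead, which is consistent with the conclusion that report aggregation plus WBXML or gzip compression should be added to the protocol.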
Abstract:
The aim of the thesis is to study the principles of the permanent magnet linear synchronous motor (PMLSM) and to develop a simulator model of a direct thrust force controlled PMLSM. The basic motor model is described by the traditional two-axis equations. End effects, cogging force and a friction model are also included in the final motor model. Direct thrust force control of the PMLSM is described and modelled. The full system model is verified by comparison with the data provided by the motor manufacturer.
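For reference, one common formulation of the two-axis (dq) model of a PMLSM, written with stator resistance R_s, pole pitch \(\tau\), linear speed v and permanent-magnet flux linkage \(\psi_{PM}\) (the thesis's exact notation and sign conventions may differ), is

$$u_d = R_s i_d + L_d \frac{di_d}{dt} - \frac{\pi}{\tau}\, v\, L_q i_q, \qquad u_q = R_s i_q + L_q \frac{di_q}{dt} + \frac{\pi}{\tau}\, v\,\bigl(L_d i_d + \psi_{PM}\bigr),$$

with the thrust force

$$F_x = \frac{3}{2}\,\frac{\pi}{\tau}\,\bigl[\psi_{PM}\, i_q + (L_d - L_q)\, i_d i_q\bigr],$$

which is the quantity regulated by the direct thrust force controller; the end-effect, cogging force and friction terms mentioned above are added on top of this basic model.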
Abstract:
In liberalized electricity markets, which have been established in many countries around the world, electricity distribution companies operate under competitive conditions. Accurate information about customers' energy consumption therefore plays an essential role in the budgeting of the distribution company and in the correct planning and operation of the distribution network. This master's thesis focuses on describing the possible benefits that electric utilities and residential customers can gain from the use of an automatic meter reading (AMR) system. The major benefits of AMR illustrated in the thesis are distribution network management, power quality monitoring, load modelling, and detection of illegal use of electricity. Using power system state estimation as an example, it is shown that even partial installation of AMR on the customer side yields more accurate data about voltage and power levels in the whole network. The thesis also describes the present situation of AMR integration in Russia.
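The state-estimation argument can be illustrated with the standard weighted least-squares estimator x̂ = (HᵀWH)⁻¹HᵀWz: AMR readings simply add rows to the measurement matrix H, which tightens the estimate of the network state. The sketch below uses a made-up two-state linear measurement model, not the network studied in the thesis.

```python
import numpy as np

def wls_estimate(H: np.ndarray, z: np.ndarray, sigma: float) -> np.ndarray:
    # Weighted least-squares state estimate: x = (H^T W H)^{-1} H^T W z
    W = np.eye(H.shape[0]) / sigma**2
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

rng = np.random.default_rng(2)
x_true = np.array([1.02, 0.98])                         # e.g. two node voltages (p.u.)
sigma = 0.02                                            # measurement noise std

H_scada = np.array([[1.0, 0.0], [1.0, -1.0]])           # existing measurements only
H_amr = np.vstack([H_scada, [[0.0, 1.0], [1.0, 0.0]]])  # plus two AMR readings

for name, H in [("SCADA only", H_scada), ("SCADA + AMR", H_amr)]:
    z = H @ x_true + rng.normal(scale=sigma, size=H.shape[0])
    print(name, "estimate:", np.round(wls_estimate(H, z, sigma), 4))
```

With equal noise levels, the extra AMR rows shrink the estimate covariance (HᵀWH)⁻¹, which is the "more accurate data about voltage and power levels" referred to above.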
Abstract:
The purpose of this thesis was to define how product carbon footprint analysis and its results can be used in a company's internal development as well as in customer and interest group guidance, and how these factors are related to corporate social responsibility. A from-cradle-to-gate carbon footprint was calculated for three products: Torino Whole grain barley, Torino Pearl barley, and Elovena Barley grit & oat bran, all of them made of Finnish barley. The carbon footprint of the Elovena product was used to determine carbon footprints for porridge portions cooked in an industrial kitchen. The basic calculation data was collected from several sources. Most of the data originated from Raisio Group's contract farmers and Raisio Group's cultivation, processing and packaging specialists. Data from national and European literature and database sources was also used. The electricity consumption for the porridge portions' carbon footprint calculations was determined with practical measurements. The carbon footprint calculations were conducted according to the ISO 14044 standard, and the PAS 2050 guide was also applied. A consequential functional unit was applied in the porridge portions' carbon footprint calculations. Most of the emissions over the barley products' life cycle originate from primary production: the nitrous oxide emissions from cultivated soil and the use and production of nitrogenous fertilisers contribute over 50% of the products' carbon footprint. Torino Pearl barley has the highest carbon footprint due to the lowest processing yield. Reductions in the products' carbon footprint can be achieved with developments in cultivation and grain processing. The carbon footprint of a porridge portion can be reduced by using domestically produced plant-based ingredients and by making the best possible use of the kettle. Carbon footprint calculation can be used to identify possible improvement points related to corporate environmental responsibility. Several improvement actions are also related to economic and social responsibility through better raw material utilization and expense reductions.
Abstract:
In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and evaluated immediately. The proposed system provides each student with a different set of experimental data generated randomly while satisfying a set of constraints, rather than using data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems to complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises for engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first exercise involves a linear regression analysis of data and the calculation of the quality parameters of an instrumental analysis method. The second and third exercises address two different comparison tests: a comparison test of the mean and a paired t-test.
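To make the idea of automatically generated, per-student data concrete, the sketch below (written in Python rather than the Matlab used for Goodle GMS exercise design, and with invented constraint bounds) draws a random calibration line within set limits and computes the regression quality parameters a student would be asked to report for the first exercise.

```python
import numpy as np

rng = np.random.default_rng()            # a different draw for every student

# Constraints on the generated exercise (hypothetical bounds).
slope_true = rng.uniform(0.8, 1.2)
intercept_true = rng.uniform(0.0, 0.05)
noise_sd = rng.uniform(0.005, 0.02)

conc = np.linspace(0.5, 10.0, 8)                          # standard concentrations
signal = slope_true * conc + intercept_true + rng.normal(scale=noise_sd, size=conc.size)

# Least-squares fit and basic quality parameters of the calibration.
slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
s_y = np.sqrt(np.sum(residuals**2) / (conc.size - 2))     # std. error of the fit
r2 = 1 - np.sum(residuals**2) / np.sum((signal - signal.mean())**2)
lod = 3.3 * s_y / slope                                    # detection limit estimate

print(f"slope={slope:.4f}, intercept={intercept:.4f}, R^2={r2:.4f}, LOD~{lod:.3f}")
```

Because the generating parameters are known, the grading system can check each student's reported slope, intercept and quality parameters against the values implied by that student's own data set.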
Abstract:
Contactless integrated circuit cards are one form of application of radio frequency identification. They are used in applications such as access control, identification, and payment in public transport. Contactless IC cards are passive, which means that both the data and the energy are transferred to the card without contact, using inductive coupling. The antenna design for contactless IC cards defined by ISO/IEC 14443, and the optimization of that design, are studied. The basic operating principles of the contactless system are presented and the structure of a contactless IC card is illustrated. The structure is divided between the contactless chip and the antenna. The operation of the antenna is covered in depth and the parameters affecting its performance are presented. The different antenna technologies and connection technologies are also described. The antenna design process, with its parameters and design tools, is illustrated, and optimization of the design is studied. To make the design process more systematic, a development target was identified: the implementation of a test application. The optimization of the antenna design is presented based on the optimization criteria defined in this study. A solution for implementing these criteria was found, and the effect of each criterion was determined. Finally, a focus for future study was proposed for enhancing the performance of the antenna.
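One of the central parameters in such an antenna design is the resonance condition: the card coil inductance together with the chip and tuning capacitance should resonate near the 13.56 MHz carrier used by ISO/IEC 14443. The sketch below computes the capacitance required for an assumed example coil; the inductance value is illustrative, not taken from the thesis.

```python
import math

CARRIER_HZ = 13.56e6        # ISO/IEC 14443 carrier frequency

def tuning_capacitance(coil_inductance_h: float, f_res_hz: float = CARRIER_HZ) -> float:
    # Resonance condition f = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / ((2*pi*f)^2 * L)
    return 1.0 / ((2 * math.pi * f_res_hz) ** 2 * coil_inductance_h)

L_coil = 2.5e-6             # assumed 2.5 uH card coil, a typical order of magnitude
print(f"required total capacitance: {tuning_capacitance(L_coil) * 1e12:.1f} pF")
```

In practice the resonance is often tuned slightly away from the carrier, and the coil geometry (turns, area, track width) is then optimized against criteria such as those defined in the study.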
Abstract:
Asian rust of soybean [Glycine max (L.) Merrill] is one of the most important fungal diseases of this crop worldwide. The recent introduction of Phakopsora pachyrhizi Syd. & P. Syd. in the Americas represents a major threat to soybean production in the main growing regions, and significant losses have already been reported. P. pachyrhizi is extremely aggressive under favorable weather conditions, causing rapid plant defoliation. Epidemiological studies, under both controlled and natural environmental conditions, have been conducted for several decades with the aim of elucidating factors that affect the disease cycle as a basis for disease modeling. The recent spread of Asian soybean rust to major production regions of the world has prompted new development, testing and application of mathematical models to assess the risk of and predict the disease. These efforts have included the integration of new data, epidemiological knowledge, statistical methods, and advances in computer simulation to develop models and systems with different spatial and temporal scales, objectives and audiences. In this review, we present a comprehensive discussion of the models and systems that have been tested to predict and assess the risk of Asian soybean rust. Limitations, uncertainties and challenges for modelers are also discussed.