20 results for Spatiotemporal visualization
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The management and conservation of coastal waters in the Baltic is challenged by a number of complex environmental problems, including eutrophication and habitat degradation. Demands for a more holistic, integrated and adaptive framework of ecosystem-based management emphasize the importance of appropriate information on the status and changes of the aquatic ecosystems. The thesis focuses on the spatiotemporal aspects of environmental monitoring in the extensive and geomorphologically complex coastal region of SW Finland, where the acquisition of spatially and temporally representative monitoring data is inherently challenging. Furthermore, the region is subject to multiple human interests and uses. A holistic geographical approach is emphasized, as it is ultimately the physical conditions that set the frame for any human activity. Characteristics of the coastal environment were examined using water quality data from the database of the Finnish environmental administration and Landsat TM/ETM+ images. A basic feature of the complex aquatic environment in the Archipelago Sea is its high spatial and temporal variability; this foregrounds the importance of geographical information as a basis of environmental assessments. While evidence of a consistent water turbidity pattern was observed, the coastal hydrodynamic realm is also characterized by high spatial and temporal variability. It is therefore also crucial to consider the spatial and temporal representativeness of field monitoring data. Remote sensing may facilitate evaluation of hydrodynamic conditions in the coastal region and the spatial extrapolation of in situ data despite their restrictions. Additionally, remotely sensed images can be used in the mapping of many of those coastal habitats that need to be considered in environmental management. 
With regard to surface water monitoring, only a small fraction of the data currently stored in the Hertta-PIVET register can be used effectively in scientific studies and environmental assessments. Long-term consistent data collection from established sampling stations should be emphasized, but research-type seasonal assessments producing abundant data should also be encouraged. A more comprehensive coordination of field work efforts is therefore called for. The integration of remote sensing and various field measurement techniques would be especially useful in the complex coastal waters. The integration and development of monitoring systems in Finnish coastal areas also require further scientific assessment of monitoring practices. A holistic approach to the gathering and management of environmental monitoring data could be a cost-effective way of serving a multitude of information needs, and would fit the holistic, ecosystem-based management regimes that are currently being strongly promoted in Europe.
Abstract:
During the past decades, testing has matured from an ad hoc activity into an integral part of the development process. The benefits of testing are obvious for modern communication systems, which operate in heterogeneous environments amongst devices from various manufacturers. The increased demand for testing also creates demand for tools and technologies that support and automate testing activities. This thesis discusses the applicability of visualization techniques in the result-analysis part of the testing process. In particular, the primary focus of this work is the visualization of test execution logs produced by a TTCN-3 test system. TTCN-3 is an internationally standardized test specification and implementation language. The TTCN-3 standard suite includes a specification of a test logging interface and a graphical presentation format, but no immediate relationship between them. This thesis presents a technique for mapping the log events to the graphical presentation format, along with a concrete implementation that is integrated with the Eclipse Platform and the OpenTTCN Tester toolchain. The results of this work indicate that for the majority of the log events, a visual representation may be derived from the TTCN-3 standard suite. The remaining events were analysed, and three categories relevant to either log analysis or the implementation of the visualization tool were identified: events indicating insertion of something into the incoming queue of a port, events indicating a mismatch, and events describing the control flow during the execution. The applicability of the results is limited to the domain of TTCN-3, but the developed mapping and the implementation may be utilized with any TTCN-3 tool able to produce the execution log in the standardized XML format.
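The core of such a log-to-presentation mapping can be sketched in miniature. The XML shape, event type names, and category labels below are invented for illustration; the actual event set and log format are defined by the TTCN-3 standard suite and are not reproduced here:

```python
from xml.etree import ElementTree as ET

# Illustrative mapping from (hypothetical) log event types to graphical symbols.
# The three "remaining" categories from the thesis are reflected here, but the
# concrete names are assumptions, not the standardized identifiers.
EVENT_CATEGORIES = {
    "enqueue": "port-queue",     # insertion into a port's incoming queue
    "mismatch": "mismatch",      # received value failed template matching
    "altstep": "control-flow",   # control flow during execution
    "send": "message-arrow",
    "verdict": "verdict-box",
}

def classify_events(xml_log: str):
    """Parse a (simplified) XML execution log and map each event to a
    presentation category, flagging anything without a mapping."""
    root = ET.fromstring(xml_log)
    return [(e.get("type"), EVENT_CATEGORIES.get(e.get("type"), "unmapped"))
            for e in root.iter("event")]

log = '<log><event type="send"/><event type="mismatch"/><event type="custom"/></log>'
print(classify_events(log))
```

Unmapped events would be the ones requiring the manual analysis described above.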
Abstract:
Visual data mining (VDM) tools employ information visualization techniques in order to represent large amounts of high-dimensional data graphically and to involve the user in exploring data at different levels of detail. The users look for outliers, patterns and models – in the form of clusters, classes, trends, and relationships – in different categories of data, e.g., financial and business information. The focus of this thesis is the evaluation of multidimensional visualization techniques, especially from the business user’s perspective. We address three research problems. The first problem is the evaluation of projection-based visualizations with respect to their effectiveness in preserving the original distances between data points and the clustering structure of the data. In this respect, we propose the use of existing clustering validity measures. We illustrate their usefulness in evaluating five visualization techniques: Principal Components Analysis (PCA), Sammon’s Mapping, the Self-Organizing Map (SOM), Radial Coordinate Visualization and Star Coordinates. The second problem concerns evaluating different visualization techniques as to their effectiveness in visual data mining of business data. For this purpose, we propose an inquiry evaluation technique and conduct the evaluation of nine visualization techniques: Multiple Line Graphs, Permutation Matrix, Survey Plot, Scatter Plot Matrix, Parallel Coordinates, Treemap, PCA, Sammon’s Mapping and the SOM. The third problem is the evaluation of the quality of use of VDM tools. We provide a conceptual framework for evaluating the quality of use of VDM tools and apply it to the evaluation of the SOM. In the evaluation, we use an inquiry technique for which we developed a questionnaire based on the proposed framework. The contributions of the thesis consist of three new evaluation techniques and the results obtained by applying them.
The thesis provides a systematic approach to the evaluation of various visualization techniques. In this respect, we first performed and described the evaluations in a systematic way, highlighting the evaluation activities and their inputs and outputs. Secondly, we integrated the evaluation studies into the broad framework of usability evaluation. The results of the evaluations are intended to help developers and researchers of visualization systems select appropriate visualization techniques in specific situations. The results also contribute to the understanding of the strengths and limitations of the evaluated visualization techniques, and further to their improvement.
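To make the first evaluation problem concrete, here is a minimal sketch of one distance-preservation measure, normalized stress, applied to a plain SVD-based PCA projection. The measure and the synthetic data are illustrative; the thesis itself uses existing clustering validity measures, which are not reproduced here:

```python
import numpy as np

def pca_project(X, k=2):
    """Project data onto its top-k principal directions (SVD-based PCA)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def pairwise(X):
    """Upper-triangular vector of all pairwise Euclidean distances."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return d[np.triu_indices(len(X), 1)]

def normalized_stress(X, Y):
    """How badly the projection Y distorts the original distances of X
    (0 = perfect preservation)."""
    dx, dy = pairwise(X), pairwise(Y)
    return np.sum((dx - dy) ** 2) / np.sum(dx ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                 # synthetic 5-D data
s = normalized_stress(X, pca_project(X, 2))  # lower = better preservation
print(round(float(s), 3))
```

Projecting onto all five components is just a rotation, so its stress is zero; dropping dimensions makes the stress strictly positive, which is exactly what such a measure quantifies.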
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is a commonly used method for making the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as those using gene microarray techniques.
Such networks are typically very large and highly connected, so there is a need for fast algorithms for producing visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within the regular force-directed graph layout algorithm.
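As an illustration of the k-NN imputation mentioned above, the sketch below fills each missing entry with the mean of that column over the k complete rows nearest in the observed coordinates. This is a generic textbook variant under simplifying assumptions (enough complete rows exist), not the exact algorithms compared in the thesis:

```python
import numpy as np

def knn_impute(X, k=3):
    """Impute NaNs in each row from the k nearest complete rows, measuring
    distance only over that row's observed entries."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]          # fully observed rows
    for row in X:
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        # distance to each complete row over the observed coordinates
        d = np.linalg.norm(complete[:, obs] - row[obs], axis=1)
        nn = complete[np.argsort(d)[:k]]            # k nearest neighbours
        row[miss] = nn[:, miss].mean(axis=0)        # fill with their mean
    return X

X = np.array([[1.0, 2.0, 3.0],
              [1.1, 2.1, 2.9],
              [0.9, np.nan, 3.1]])
print(knn_impute(X, k=2))
```

With both complete rows as neighbours, the missing entry is filled with the mean of 2.0 and 2.1, i.e. 2.05, which is what "k-NN imputation" amounts to on expression matrices.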
Abstract:
In coastal waters, physico-chemical and biological properties and constituents vary at different time scales. In the study area of this thesis, within the Archipelago Sea in the northern Baltic Sea, seasonal cycles of light and temperature set preconditions for intra-annual variations, but developments at other temporal scales occur as well. Weather-induced runoffs and currents may alter water properties over the short term, and the consequences over time of eutrophication and global changes are to a degree unpredictable. The dynamic characteristics of northern Baltic Sea waters are further diversified at the archipelago coasts. Water properties may differ in adjacent basins, which are separated by islands and underwater thresholds limiting water exchange, making the area not only a mosaic of islands but also one of water masses. Long-term monitoring and in situ observations provide an essential data reserve for coastal management and research. Since the seasonal amplitudes of water properties are so high, inter-annual comparisons of water-quality variables have to be based on observations sampled at the same time each year. In this thesis I compare areas by their temporal characteristics, using both inter-annual and seasonal data. After comparing spatial differences in seasonal cycles, I conclude that spatial comparisons and temporal generalizations have to be made with caution. In classifying areas by the state of their waters, the results may be biased even if the sampling is annually simultaneous, since the dynamics of water properties may vary from area to area. The most comprehensive view of the spatiotemporal dynamics of water properties would be achieved through comparisons with data consisting of multiple annual samples. For practical reasons, this cannot be achieved with conventional in situ sampling.
A holistic understanding of the spatiotemporal features of the water properties of the Archipelago Sea will have to be based on the application of multiple methods, complementing each other’s spatial and temporal coverage. The integration of multi-source observational data and time-series analysis may be methodologically challenging, but it will yield new information as to the spatiotemporal regime of the Archipelago Sea.
Abstract:
The purpose of this thesis was to develop a program that can illustrate the thermal-hydraulic node dimensions used in SMABRE simulations. The node illustrations are used to verify the correctness of the designed simulation model, and they can also be included in scientific reports. The thesis presents theory about SMABRE and the other programs used to achieve the final results. It explains the different modules that were created and used in the finished program, and it presents the problems encountered and their solutions. The most important objective of the thesis is to display generic VVER-1000 node dimensions and to verify the correctness of the displayed result. The finished program was written in Python.
Abstract:
This work, Identification of a Research Portfolio for the Development of Filtration Equipment, presents a novel approach to identifying promising research topics in the field of design and development of filtration equipment and processes. The proposed approach consists of identifying technological problems often encountered in filtration processes. The sources of information for the problem retrieval were patent documents and scientific papers that discuss filtration equipment and processes. The problem identification method adopted in this work focused on the semantic nature of a sentence in order to generate a series of subject-action-object structures. This was achieved with a software tool called Knowledgist. A list of problems often encountered in filtration processes, as mentioned in patent documents and scientific papers, was generated. These problems were carefully studied and categorized. Suggestions were made on the various classes of these problems that need further investigation in order to propose a research portfolio. The uses and importance of other methods of information retrieval were also highlighted in this work.
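The subject-action-object idea can be illustrated with a toy extractor. Knowledgist's actual semantic analysis is proprietary and far more capable; this sketch only recognizes a hand-listed set of verbs in simple declarative sentences:

```python
import re

# Hand-picked action verbs one might look for in filtration patents;
# purely illustrative, not Knowledgist's verb inventory.
VERBS = {"clogs", "reduces", "causes", "increases"}

def extract_sao(sentence):
    """Return a (subject, action, object) triple for a simple
    'subject verb object' sentence, or None if no known verb is found."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    for i, tok in enumerate(tokens):
        if tok in VERBS and 0 < i < len(tokens) - 1:
            return (" ".join(tokens[:i]), tok, " ".join(tokens[i + 1:]))
    return None

print(extract_sao("The filter cake clogs the membrane"))
```

A pile of such triples, grouped by action and object, is essentially the categorized problem list the work describes.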
Abstract:
The goal of this work was to develop and implement a real-time simulator for the user interface of an off-road vehicle, which the driver uses while driving. The simulator was primarily intended for user-interface testing, but it must be easily extensible, for example to training use. The intention was to use mainly commercial off-the-shelf software as the modelling tools. Manual programming was also used in the implementation, because real-time visualization could not be achieved directly with the selected software; the hand-written code handles the data transfer between the parts built with the off-the-shelf software. The actual modelling was easy and fast with the selected software. As a result of the work, a simulator was implemented that gave a real-time impression. Usability tests were carried out successfully with the simulator. Thanks to the modularity of the simulation model, the model is easy to update. The essential points in the further development of the simulator are improving the visualization and adding dynamics corresponding to the real vehicle.
Abstract:
Software reuse is a very important concept in software engineering. Software reuse techniques improve the quality of the software development process. Common solutions for reusing both software design and architecture are object-oriented programming and application frameworks. Until now, there have been no general ways of specializing application frameworks. Many of the frameworks known today are very large and complex, and using such frameworks is difficult even for experienced programmers. Well-documented reusable framework interfaces improve the usability of a framework and also make the specialization process more efficient for its users. The framework editor JavaFrames is a prototype tool that can be used to simplify the use of application frameworks. The basic idea in the JavaFrames approach is specialization patterns, which are used to describe the reusable interfaces of a framework. Based on these patterns, JavaFrames provides automatic source code generation, documentation, and checking of architectural rules. This thesis concerns the development of a graphical pattern editor for the JavaFrames environment. A tool was built with which a specialization pattern can be represented graphically. The editor allows the creation of new patterns and the removal of old unused ones, as well as the addition of connections between patterns. Such graphical support for the JavaFrames environment can considerably simplify its use and make the application development process more flexible.
Abstract:
Contrast enhancement is an image processing technique where the objective is to preprocess the image so that relevant information can be either seen or further processed more reliably. These techniques are typically applied when the image itself, or the device used for image reproduction, provides poor visibility and distinguishability of the different regions of interest in the image. In most studies the emphasis is on the visualization of image data, but this human-observer-biased goal often results in images which are not optimal for automated processing. The main contribution of this study is to express contrast enhancement as a mapping from N-channel image data to a 1-channel gray-level image, and to devise a projection method which results in an image with minimal error with respect to the correct contrast image. The projection, the minimum-error contrast image, possesses the optimal contrast between the regions of interest in the image. The method is based on estimating the probability density distributions of the region values, and it employs Bayesian inference to establish the minimum-error projection.
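The idea of mapping N-channel data to a single gray level via Bayesian inference can be sketched as follows. Here the gray value is simply the posterior probability of one region class under assumed Gaussian class-conditional densities with known parameters; the thesis's actual density estimation and minimum-error projection are not reproduced:

```python
import numpy as np

def posterior_gray(pixels, mu0, cov0, mu1, cov1, prior1=0.5):
    """Map N-channel pixels to a gray level in [0, 1]: the posterior
    probability that the pixel belongs to region 1, assuming Gaussian
    class-conditional densities (an illustrative simplification)."""
    def gauss(x, mu, cov):
        d = x - mu
        inv = np.linalg.inv(cov)
        norm = np.sqrt((2 * np.pi) ** len(mu) * np.linalg.det(cov))
        return np.exp(-0.5 * np.einsum('...i,ij,...j', d, inv, d)) / norm
    p1 = prior1 * gauss(pixels, mu1, cov1)
    p0 = (1 - prior1) * gauss(pixels, mu0, cov0)
    return p1 / (p0 + p1)          # Bayes' rule per pixel

# Two invented region classes in a 3-channel image.
mu0, mu1, cov = np.zeros(3), np.full(3, 2.0), np.eye(3)
g = posterior_gray(np.array([[0.1, 0.0, 0.0], [2.0, 2.0, 1.9]]), mu0, cov, mu1, cov)
print(np.round(g, 3))
```

Thresholding this gray image at 0.5 gives the Bayes minimum-error classification of the two regions, which is the sense in which such a projection optimizes contrast for automated processing.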
Abstract:
This thesis gives an overview of the use of level set methods in the field of image science. The related fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming and recovering structures in arbitrary dimensions. It approximates and tracks moving interfaces, dynamic curves and surfaces. The level set method does not define how and why a boundary is advancing the way it is, but simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions. This gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power. The basic level set method in particular carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the level set algorithm, such as the narrow band level set method, or with a programmable hardware implementation. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in a quite broad range of image applications, such as computer vision and graphics, scientific visualization, and solving problems in computational physics. Level set methods, and methods derived from and inspired by them, will remain at the front line of image processing in the future.
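A minimal level set example: a circle is represented implicitly as the zero level set of a signed distance function φ and moved outward with constant normal speed F. Because |∇φ| = 1 for a signed distance function, the evolution equation φ_t + F|∇φ| = 0 reduces to a uniform shift of φ, so no PDE solver is needed for this special case (grid size and constants are arbitrary):

```python
import numpy as np

n, F, dt, steps = 64, 1.0, 0.1, 5
x = np.linspace(-2, 2, n)
X, Y = np.meshgrid(x, x)

# Signed distance function whose zero level set is the unit circle:
# negative inside, positive outside (the "N-D boundary in N+1 dimensions").
phi = np.sqrt(X**2 + Y**2) - 1.0

for _ in range(steps):
    phi = phi - F * dt      # phi_t = -F|grad phi| with |grad phi| = 1

inside = phi < 0            # region now enclosed by the moved front
grew = inside.sum() > (np.sqrt(X**2 + Y**2) < 1.0).sum()
print(grew)                 # the front expanded: radius 1.0 -> 1.5
```

General speeds and curvature-dependent motion require upwind finite-difference schemes on φ, which is exactly where the computational burden the abstract mentions comes from.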
Abstract:
The goal of this work was to acquire and build a real-time simulator implemented with commercial software and hardware. The work focused in particular on visualizing an existing simulation model of the Patu 655 timber loader with a 3D animation program acquired for the real-time simulator. In addition, the possibilities of real-time simulation in the product development of machine systems were investigated. Commercial dSPACE hardware and software made for real-time simulation were used as the real-time simulator. The simulation model of the timber loader was compiled for execution on the simulator, after which the motions of the model were visualized using the RealMotion 3D animation program. The animated graphics were imported from both AutoCAD and ADAMS. As a result of the work, a real-time simulator was acquired and found to be a working whole. The automated compilation of the timber loader and other simulation models for the simulator succeeded well. The visualization of the simulation models worked smoothly with the 3D animation program. It was observed that the product development process of machine systems can be accelerated with the real-time simulator.
Abstract:
This thesis surveys techniques suitable for building and implementing a visualization environment. Based on the survey, a list of the components needed for implementing a visualization environment was drawn up for the laboratory of mechatronics and virtual engineering at the Department of Mechanical Engineering of Lappeenranta University of Technology. The work examines the dimensions of virtual reality and presents its possible uses in different fields, now and in the future. The techniques related to virtual reality and their different implementations are presented, using products currently on the market as examples. Finally, the development of virtual reality technology and its future significance were assessed. The study shows that there are plenty of applications for virtual reality in different fields, and as inexpensive PC technology develops, costs fall continuously, so virtual reality will gradually become more common.
Abstract:
The reconstruction of three-dimensional objects is one of the most challenging problems in machine vision, because the three-dimensional distances of objects cannot be determined from a single two-dimensional image. The problem can be solved with stereo vision, in which the three-dimensional structure of a scene is inferred from several images. However, this approach only enables the reconstruction of those parts of the objects that are visible in at least two images. The reconstruction of occluded parts is not possible with stereo vision alone. In this work, a new method was developed for the reconstruction of partially occluded three-dimensional planar objects. With the method, the shape and position of an object consisting of planar surfaces can be determined with good accuracy using two images of the object. The method is based on epipolar geometry, which is used to determine the parts of the objects visible in both images. The reconstruction of partially occluded features is performed using stereo vision together with knowledge of the structure of the object. The presented solution could be used, for example, in the visualization of three-dimensional objects, robot navigation, or object recognition.
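The stereo building block underlying such a method is triangulation of a point seen in two images. The sketch below uses standard linear (DLT) triangulation with known 3×4 camera matrices; the camera setup and point are invented, and the thesis's handling of occluded planar faces is not reproduced:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from two pixel observations x1, x2 and known
    camera matrices P1, P2 via the linear (DLT) method: stack the
    cross-product constraints x * (P X) = 0 and take the SVD null vector."""
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]                       # homogeneous solution
    return Xh[:3] / Xh[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])]) # baseline of 1 unit
Xw = np.array([0.5, 0.2, 4.0])                                # invented 3-D point

proj = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
Xhat = triangulate(P1, P2, proj(P1, Xw), proj(P2, Xw))
print(np.round(Xhat, 3))
```

With noiseless projections the DLT recovers the point exactly; with real detections it gives a least-squares estimate, which is why the thesis must add structural knowledge for the parts no camera sees.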
Abstract:
In recent years, the broadband networks of broadband operators have grown into large wholes as a result of fast, flat-rate broadband connections. These wholes are managed with various network management tools. The network management tools contain a large amount of information, at different levels, about the devices and the relationships between them. Perceiving the whole without a picture built from this information is difficult and slow. In the visualization of broadband network topology, a picture is formed of the devices and the relationships between them. The visualized picture can be used as part of a network management tool, giving the user a quick view of the devices and the structure of the network, i.e. its topology. In visualization, the picture drawing problem must be transformed into a graph drawing problem. In the graph drawing problem, the structure of the network is treated as a graph, which makes it possible to form the picture using automatic drawing methods. The desired appearance of the picture is produced with automatic drawing methods, with which the presentation styles of the devices and the relationships between them can be changed; the presentation styles control, for example, the shape, color and size of the devices. In addition to the presentation styles, the most important task of the drawing methods is to compute the coordinate values of the device locations, which ultimately determine the structure of the whole picture. The coordinate values are computed with drawing algorithms, of which force-based algorithms are best suited for computing the locations of broadband network devices. In the practical part of this thesis, a broadband network topology visualization tool was implemented.
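The force-based layout mentioned above can be sketched with Fruchterman–Reingold-style forces: every node pair repels, connected pairs attract, and a cooling schedule damps the motion. The topology and constants below are invented for illustration:

```python
import numpy as np

def force_layout(edges, n, iters=200, k=1.0, seed=0):
    """Bare-bones force-directed layout: repulsion k^2/d between all pairs,
    attraction d^2/k along edges, with a linearly cooling step size."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n, 2))
    for step in range(iters):
        disp = np.zeros_like(pos)
        for i in range(n):                       # repulsion from every other node
            d = pos[i] - pos
            dist = np.linalg.norm(d, axis=1) + 1e-9
            disp[i] += (d / dist[:, None] * (k**2 / dist)[:, None]).sum(axis=0)
        for i, j in edges:                       # attraction along edges
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            f = d / dist * (dist**2 / k)
            disp[i] -= f
            disp[j] += f
        t = 0.1 * (1 - step / iters)             # cooling: shrinking step size
        pos += disp * t / (np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9)
    return pos

# A tiny invented topology: three access devices (0, 2, 3) behind a router (1).
pos = force_layout([(0, 1), (1, 2), (1, 3)], 4)
d = lambda a, b: np.linalg.norm(pos[a] - pos[b])
print(d(0, 1) < d(0, 2))   # connected nodes settle closer than unconnected ones
```

Production tools add quadtree approximations and multilevel coarsening to make this scale to operator-sized networks, but the equilibrium the forces seek is the same.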