38 results for Equivalence-preserving
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. The objective of this thesis is therefore to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. For this purpose, an efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows ridge-based methods to be applied with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, a typical problem in visual object tracking.
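The thesis develops a trust region Newton method for the ridge projection, which is not reproduced here. As a rough sketch of the ridge-projection idea it builds on, the following uses the closely related subspace-constrained mean shift on a Gaussian kernel density estimate; all names, the bandwidth, and the toy data are illustrative, not the author's implementation.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    """Unnormalised Gaussian KDE value, gradient and Hessian at point x."""
    diffs = (data - x) / h                          # (n, D) scaled offsets
    w = np.exp(-0.5 * np.sum(diffs**2, axis=1))     # kernel weights
    f = w.sum()
    g = (w[:, None] * diffs).sum(axis=0) / h        # density gradient
    H = ((diffs.T * w) @ diffs - f * np.eye(x.size)) / h**2
    return f, g, H, w

def project_to_ridge(x, data, h, d=1, tol=1e-8, max_iter=1000):
    """Move x onto a d-dimensional ridge of the KDE by subspace-constrained
    mean shift: step toward the mean-shift target, restricted to the span of
    the Hessian eigenvectors with the smallest (most negative) eigenvalues."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        _, _, H, w = kde_grad_hess(x, data, h)
        vecs = np.linalg.eigh(H)[1]                 # eigenvalues ascending
        V = vecs[:, : x.size - d]                   # across-ridge directions
        m = (w[:, None] * data).sum(axis=0) / w.sum()  # mean-shift target
        step = V @ (V.T @ (m - x))                  # constrain step to V
        x += step
        if np.linalg.norm(step) < tol:              # no motion left: on ridge
            break
    return x

# Toy data: noisy samples along a curve; project a nearby point onto the ridge.
rng = np.random.default_rng(0)
t = rng.uniform(-2.0, 2.0, 500)
data = np.c_[t, np.sin(t)] + 0.1 * rng.normal(size=(500, 2))
print(project_to_ridge([0.5, 1.0], data, h=0.3))
```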
Abstract:
The growing population on earth, along with diminishing fossil deposits and the climate change debate, calls for better utilization of renewable, bio-based materials. In a biorefinery, the renewable biomass is converted into many different products such as fuels, chemicals, and materials, quite similar to the petroleum refinery industry. Since forests cover about one third of the land surface on earth, lignocellulosic biomass is the most abundant renewable resource available. The natural first step in a biorefinery is the separation and isolation of the different compounds the biomass is composed of. The major components in wood are cellulose, hemicellulose, and lignin, all of which can be made into various end-products. Today, the focus normally lies on utilizing only one component, e.g., the cellulose in the Kraft pulping process. It would be highly desirable to utilize all the different compounds, from both an economic and an environmental point of view, so the separation process should be optimized. Hemicelluloses can be partly extracted with hot water prior to pulping. Depending on the severity of the extraction, the hemicelluloses are degraded to various degrees. In order to be able to choose from a variety of different end-products, the hemicelluloses should be as intact as possible after the extraction. The main focus of this work has been on preserving the hemicellulose molar mass throughout the extraction at a high yield by actively controlling the extraction pH at the high temperatures used. Since it has not been possible to measure pH during an extraction due to the high temperatures, the extraction pH has remained a "black box". Therefore, a high-temperature in-line pH measuring system was developed, validated, and tested for hot-water wood extractions. One crucial step in the measurements is calibration; extensive effort was therefore put into developing a reliable calibration procedure. Initial extractions with wood showed that the actual extraction pH was ~0.35 pH units higher than previously believed. The measuring system was also equipped with a controller connected to a pump, making it possible to control the extraction to any desired pH set point: when the pH dropped below the set point, the controller started pumping in alkali, and thereby the set point was maintained very accurately (a minimal sketch of such a control loop follows this abstract). Analyses of the extracted hemicelluloses showed that less hemicellulose was extracted at higher pH, but with a higher molar mass. Monomer formation could, at a certain pH level, be completely inhibited. Increasing the temperature while maintaining a specific pH set point would speed up the extraction without degrading the molar mass of the hemicelluloses, thereby intensifying the extraction. The diffusion of the dissolved hemicelluloses out of the wood particle is a major part of the extraction process. Therefore, a particle size study ranging from 0.5 mm wood particles to industrial-size wood chips was conducted to investigate the internal mass transfer of the hemicelluloses. Unsurprisingly, it showed that hemicelluloses were extracted faster from smaller wood particles than from larger ones, although particle size did not seem to have a substantial effect on the average molar mass of the extracted hemicelluloses. However, smaller particle sizes require more energy to manufacture and thus increase the cost. Since bark comprises 10–15 % of a tree, it is important to also consider it in a biorefinery concept.
Spruce inner and outer bark were hot-water extracted separately to investigate the possibility of isolating the bark hemicelluloses. It was shown that the bark hemicelluloses consisted mostly of pectic material and differed considerably from the wood hemicelluloses. The bark hemicelluloses, or pectins, could be extracted at lower temperatures than the wood hemicelluloses. A chemical characterization, done separately on inner and outer bark, showed that the inner bark contained over 10 % stilbene glucosides, which could be extracted already at 100 °C with aqueous acetone.
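The abstract describes an on/off style dosing loop: whenever the measured pH falls below the set point, the controller runs the alkali pump. A minimal sketch of such a loop follows; the set point, dose time, and the two device callbacks are hypothetical stand-ins for the high-temperature probe and pump described above.

```python
import time

SET_POINT = 4.2       # desired extraction pH (illustrative value)
DOSE_SECONDS = 1.0    # alkali pump run time per correction (assumed)

def control_extraction_ph(read_ph, run_alkali_pump,
                          duration_s=3600.0, period_s=5.0):
    """On/off control loop: dose alkali whenever pH falls below the set point.

    read_ph and run_alkali_pump are hypothetical callbacks for the
    high-temperature pH probe and the dosing pump.
    """
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        if read_ph() < SET_POINT:
            run_alkali_pump(DOSE_SECONDS)   # counteract acid formation
        time.sleep(period_s)                # sampling interval
```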
Abstract:
Quantum computation and quantum communication are two of the most promising future applications of quantum mechanics. Since the information carriers used in both are essentially open quantum systems, it is necessary to understand both quantum information theory and the theory of open quantum systems in order to investigate realistic implementations of such quantum technologies. In this thesis we consider the theory of open quantum systems from a quantum information theory perspective. The thesis is divided into two parts: a review of the literature and original research. In the literature review we present some important definitions and known results of open quantum systems and quantum information theory. We present the definitions of the trace distance, two channel capacities and the superdense coding capacity, and explain why they can be used to represent the transmission efficiency of a communication channel. We also derive some properties that link completely positive, trace-preserving maps to the trace distance and the channel capacities. With the help of these properties we construct three measures of non-Markovianity and explain why they detect non-Markovianity. In the original research part of the thesis we study non-Markovian dynamics in an experimentally realized quantum optical set-up. For general one-qubit dephasing channels we calculate the explicit forms of the two channel capacities and the superdense coding capacity. For the general two-qubit dephasing channel with uncorrelated local noises we calculate the explicit forms of the quantum capacity and the mutual information of a four-letter encoding. Using the dynamics of the experimental implementation as a set of specific dephasing channels, we also calculate and compare the measures in one- and two-qubit dephasing channels and study options for manipulating the environment to achieve revivals and higher transmission rates in the superdense coding protocol under dephasing noise.
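As a minimal illustration of the trace-distance-based (BLP) non-Markovianity witness mentioned above, the sketch below evolves two antipodal qubit states through a one-qubit dephasing channel with an illustrative, non-monotonic decoherence function; the function and all numbers are assumptions, not the experimental dynamics. Intervals where the trace distance grows signal information backflow, i.e. non-Markovian dynamics.

```python
import numpy as np

def trace_distance(r1, r2):
    """D(r1, r2) = (1/2) Tr|r1 - r2| for Hermitian density matrices."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(r1 - r2)))

def dephase(rho, kappa):
    """One-qubit dephasing: off-diagonal coherences scaled by kappa(t)."""
    out = rho.astype(complex).copy()
    out[0, 1] *= kappa
    out[1, 0] *= np.conj(kappa)
    return out

# Antipodal equatorial states, a pair that maximises the BLP witness
# for pure dephasing.
plus  = 0.5 * np.array([[1,  1], [ 1, 1]], dtype=complex)
minus = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)

# Illustrative non-monotonic decoherence function: revivals of |kappa|
# make the trace distance grow again, i.e. information flows back.
ts = np.linspace(0.0, 5.0, 200)
kappas = np.exp(-ts) * np.cos(2.0 * ts)
D = np.array([trace_distance(dephase(plus, k), dephase(minus, k))
              for k in kappas])

# The BLP measure accumulates the increases of D over time.
N_blp = np.sum(np.clip(np.diff(D), 0.0, None))
print(N_blp)
```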
Abstract:
Our surrounding landscape is in a constantly dynamic state, but recently the rate of change and its effects on the environment have increased considerably. In terms of the impact on nature, this development has not been entirely positive; rather, it has caused a decline in valuable species, habitats, and general biodiversity. Despite recognition of the problem and its high importance, plans and actions for stopping the detrimental development are largely lacking. This partly originates from a lack of genuine will, but is also due to the difficulty of detecting many valuable landscape components, and their consequent neglect. To support knowledge extraction, various digital environmental data sources can be of substantial help, but only if all the relevant background factors are known and the data is processed in a suitable way. This dissertation concentrates on detecting ecologically valuable landscape components using geospatial data sources, and applies this knowledge to support spatial planning and management activities. In other words, the focus is on observing regionally valuable species, habitats, and biotopes with GIS and remote sensing data, using suitable methods for their analysis. Primary emphasis is given to the hemiboreal vegetation zone and the drastic decline of its semi-natural grasslands, which were created by a long trajectory of traditional grazing and management activities. However, the applied perspective is largely methodological and allows the obtained results to be applied in various contexts. The dissertation emphasises models based on statistical dependencies and correlations of multiple variables, which are able to extract the desired properties from a large mass of initial data. In addition, the included papers combine several data sets from different sources and dates, with the aim of detecting a wider range of environmental characteristics and pointing out their temporal dynamics. The results of the dissertation emphasise the multidimensionality and dynamics of landscapes, which need to be understood in order to recognise their ecologically valuable components. This requires not only knowledge about the emergence of these components and an understanding of the data used, but also focusing the observations on minute details that can indicate the existence of fragmented and partly overlapping landscape targets. This also pinpoints the fact that most existing classifications are, as such, too generalised to provide all the required details, although they can be utilized at various steps along a longer processing chain. The dissertation also emphasises the importance of landscape history as a factor that both creates and preserves ecological values, and which sets an essential standpoint for understanding present landscape characteristics. The obtained results are significant both for preserving semi-natural grasslands and for general methodological development, supporting a science-based framework for evaluating ecological values and guiding spatial planning.
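As a purely hypothetical illustration of the kind of multi-variable statistical model the dissertation describes, the sketch below fits a logistic regression that predicts semi-natural grassland presence from a handful of remote-sensing-derived predictors. The predictors, response, and data are invented; the dissertation's own models and data are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Invented stand-ins for raster-derived predictors per landscape cell,
# e.g. NDVI, elevation, distance to historical pasture, soil moisture.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
# Invented response: grassland presence loosely tied to two predictors.
p = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 2])))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```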
Abstract:
The study focuses on five lower secondary school pupils' daily use of their one-to-one computers, the overall aim being to investigate literacy in this form of computing. Theoretically, the study is rooted in the New Literacy tradition with an ecological perspective, in combination with socio-semiotic theory in a multimodal perspective. New Literacy in the ecological perspective focuses on literacy practices and place/space and on the links between them. Literacy is viewed as socially based, in specific situations and in recurring social practices. Socio-semiotic theory embodying the multimodal perspective is used for the text analysis. The methodology is known as socio-semiotic ethnography. The ethnographic methods encompass just over two years of fieldwork with participant observation of the five participants' computing activities at home, at school and elsewhere. The participants, one boy and two girls from the Blue (Anemone) School and two girls from the White (Anemone) School, were chosen to reflect a broad spectrum of sociocultural and socioeconomic backgrounds. The study shows both broad and deep variation in the way digital literacy features in the participants' one-to-one computing. These variations are associated with experience in relation to the home, the living environment, place, personal qualities and school. The more varied computer usage of the Blue School participants is connected with the interests they developed in their homes and living environments and with the computing practices undertaken in school. Their more varied usage of the computer is reflected in their broader digital literacy repertoires and their greater number and variety of digital literacy abilities. The Blue School participants' text production is more multifaceted, covers a wider range of subjects and displays a broader palette of semiotic resources. It also combines more text types, and the texts are generally longer than those of the White School participants. The Blue School girls have developed a text culture that is close to that of the school. In their case, there is clear linkage between school-initiated and self-initiated computing activities, while the other participants do not have the same opportunities to link and integrate self-initiated computing activities into the school context. It also becomes clear that the Blue School girls can relate and adapt their texts to different communicative practices and recipients. In addition, the study shows that the Blue School girls have some degree of scope in their school practice as a result of incorporating into it certain communicative practices that they have developed in non-school contexts. Quite contrary to the hope that one-to-one computing would reduce digital inequality, inequality between these participants has increased. Whether the same or similar results apply in a larger perspective, on a more structural level, is a question that this study cannot answer; it can only draw attention to the need to investigate the matter. The study shows in a variety of ways that the White School participants do not have the same opportunity to develop their digital literacy as the Blue School participants. In an equivalence perspective, schools have a compensatory task to perform. It is abundantly clear from the study that investing in one-to-one projects is not enough to combat digital inequality and achieve the digitisation goals established for school education.
Alongside their investments in technology, schools need to develop a didactic approach that legitimises and compensates for the different circumstances of different pupils. The compensatory role of schools in this connection is important not only for the present participants but also for the community at large, in that it can help to secure a cohesive, open and democratic society.