872 results for Thematic Text Analysis
Abstract:
There exist various proposals for building a functional, fault-tolerant large-scale quantum computer. Topological quantum computation is one of the more exotic proposals: it exploits the properties of quasiparticles that manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault tolerance. This feature is the main incentive to study topological quantum computation, and the objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers that are both of a topological nature. In particular, it is found that the addition of the quantum numbers is not unique: the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner that is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest of them is chosen for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed; it turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, the other, more complicated fusion subalgebras were not considered. Studying their applicability for quantum computation could be a topic of further research.
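To illustrate what a non-trivial fusion algebra means in practice, the sketch below encodes the fusion rules of the well-known Ising anyon model (vacuum 1, sigma, psi) rather than the D(S_3) quantum double treated in the thesis; like the subalgebra chosen there, braiding Ising anyons alone is not universal for quantum computation. The choice of model and the function names are illustrative assumptions, not taken from the thesis.

```python
# Fusion rules of the Ising anyon model: labels are the vacuum "1",
# the non-abelian anyon "sigma" and the fermion "psi".
# (Illustrative stand-in for the D(S_3) fusion algebra discussed above.)
FUSION = {
    ("1", "1"): ["1"], ("1", "sigma"): ["sigma"], ("1", "psi"): ["psi"],
    ("sigma", "1"): ["sigma"], ("psi", "1"): ["psi"],
    ("sigma", "sigma"): ["1", "psi"],   # non-unique outcome: the qubit lives here
    ("sigma", "psi"): ["sigma"], ("psi", "sigma"): ["sigma"],
    ("psi", "psi"): ["1"],
}

def fuse(charges):
    """Return all possible total charges of a list of anyons."""
    outcomes = {charges[0]}
    for c in charges[1:]:
        outcomes = {r for o in outcomes for r in FUSION[(o, c)]}
    return sorted(outcomes)

if __name__ == "__main__":
    # Two sigma anyons can fuse to either 1 or psi: this two-fold
    # degeneracy is the topologically protected degree of freedom.
    print(fuse(["sigma", "sigma"]))            # ['1', 'psi']
    print(fuse(["sigma", "sigma", "sigma"]))   # ['sigma']
```

The two possible outcomes of fusing two sigma anyons give exactly the kind of degenerate, non-local state space in which quantum information can be stored and manipulated by braiding.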
Abstract:
The determination of testosterone and related compounds in body fluids is of utmost importance in doping control and in the diagnosis of many diseases. Capillary electromigration techniques are a relatively new approach to steroid research. Owing to the electrical neutrality of steroids, however, their separation by capillary electromigration techniques requires charged electrolyte additives that interact with the steroids either specifically or non-specifically. This study investigated the analysis of testosterone and related steroids by non-specific micellar electrokinetic chromatography (MEKC). The partial-filling (PF) technique was employed, as it is suitable for detection by both ultraviolet spectrophotometry (UV) and electrospray ionization mass spectrometry (ESI-MS). Efficient, quantitative PF-MEKC UV methods for steroid standards were developed using optimized pseudostationary phases comprising surfactants and cyclodextrins. PF-MEKC UV proved to be a more sensitive, efficient and repeatable method for the steroids than PF-MEKC ESI-MS. It was discovered that in PF-MEKC analyses of electrically neutral steroids, ESI-MS interfacing sets significant limitations not only on the chemistry affecting the ionization and detection processes, but also on the separation. The new PF-MEKC UV method was successfully employed in the determination of testosterone in male urine samples after microscale immunoaffinity solid-phase extraction (IA-SPE). The IA-SPE method, relying on specific interactions between testosterone and a recombinant anti-testosterone Fab fragment, is the first such method described for testosterone. Finally, new data on the interactions between steroids and human and bovine serum albumins were obtained using affinity capillary electrophoresis, and a new algorithm for calculating association constants between proteins and neutral ligands is introduced.
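In the standard affinity capillary electrophoresis treatment, an association constant such as the one mentioned above is obtained by fitting the ligand's mobility shift against protein concentration to a 1:1 binding isotherm. The sketch below shows that conventional fit with scipy; it is not the new algorithm introduced in the thesis, and the concentrations and mobilities are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Protein concentrations in the background electrolyte (mol/L) and the
# apparent mobility of a neutral steroid ligand (arbitrary mobility units).
# The numbers are invented for illustration only.
P = np.array([0.0, 5e-6, 1e-5, 2e-5, 4e-5, 8e-5])
mu_obs = np.array([0.00, 1.9, 3.3, 5.2, 7.1, 8.6])

mu_free = mu_obs[0]  # mobility without protein (neutral ligand: ~0)

def isotherm(p, Ka, mu_complex):
    """1:1 binding isotherm: mobility shifts toward that of the complex."""
    frac_bound = Ka * p / (1.0 + Ka * p)
    return mu_free + (mu_complex - mu_free) * frac_bound

(Ka, mu_complex), _ = curve_fit(isotherm, P, mu_obs, p0=(1e4, 10.0))
print(f"Ka ~ {Ka:.3g} L/mol, complex mobility ~ {mu_complex:.2f}")
```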
Abstract:
Miniaturized mass spectrometric ionization techniques for environmental analysis and bioanalysis.
Novel miniaturized mass spectrometric ionization techniques based on atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) were studied and evaluated in the analysis of environmental samples and biosamples. The three analytical systems investigated were gas chromatography-microchip atmospheric pressure chemical ionization-mass spectrometry (GC-µAPCI-MS) and gas chromatography-microchip atmospheric pressure photoionization-mass spectrometry (GC-µAPPI-MS), in which sample pretreatment and chromatographic separation precede ionization, and desorption atmospheric pressure photoionization-mass spectrometry (DAPPI-MS), in which samples are analyzed either as such or after minimal pretreatment. The gas chromatography-microchip atmospheric pressure ionization-mass spectrometry (GC-µAPI-MS) instrumentations were used to analyze polychlorinated biphenyls (PCBs) in negative ion mode and 2-quinolinone-derived selective androgen receptor modulators (SARMs) in positive ion mode. The analytical characteristics of the methods (limits of detection, linear ranges, and repeatabilities) were evaluated with PCB standards and with SARMs in urine. All methods showed good analytical characteristics and potential for quantitative environmental analysis or bioanalysis. Desorption and ionization mechanisms in DAPPI were also studied. Desorption was found to be a thermal process whose efficiency depends strongly on the thermal conductivity of the sampling surface; the size and polarity of the analyte probably also play a role. In positive ion mode, ionization depends on the ionization energy and proton affinity of the analyte and the spray solvent, while in negative ion mode the ionization mechanism is determined by the electron affinity and gas-phase acidity of the analyte and the spray solvent. DAPPI-MS was tested in the fast screening of environmental, food, and forensic samples, and the results demonstrated its feasibility for rapid screening of authentic samples.
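The positive-ion mechanism summarized above (ionization governed by the ionization energies and proton affinities of the analyte and the spray solvent) can be condensed into a simple decision rule, sketched below for a dopant-style spray solvent such as toluene. This is a simplification for illustration, not the model developed in the thesis, and the thermochemical values are approximate literature figures.

```python
# Simplified decision rule for positive-ion DAPPI with a toluene spray:
# charge exchange is favoured if the analyte's ionization energy (IE) lies
# below that of toluene, proton transfer if the analyte's proton affinity
# (PA) exceeds that of the solvent system. Values are approximate and for
# illustration only.
TOLUENE_IE = 8.83      # eV
TOLUENE_PA = 784.0     # kJ/mol

def likely_ion(analyte, ie_eV, pa_kJmol):
    if ie_eV < TOLUENE_IE:
        return f"{analyte}: charge exchange -> M+."
    if pa_kJmol > TOLUENE_PA:
        return f"{analyte}: proton transfer -> [M+H]+"
    return f"{analyte}: poor ionization expected in positive ion mode"

print(likely_ion("anthracene", 7.4, 877.0))        # low IE: charge exchange
print(likely_ion("testosterone", 8.9, 920.0))      # high PA: protonation
print(likely_ion("low-PA, high-IE analyte", 9.2, 750.0))
```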
Abstract:
In this paper, we present the results of an exploratory study that examined the problem of automating content analysis of student online discussion transcripts. We looked at the problem of coding discussion transcripts for the levels of cognitive presence, one of the three main constructs in the Community of Inquiry (CoI) model of distance education. Using Coh-Metrix and LIWC features, together with a set of custom features developed to capture discussion context, we developed a random forest classification system that achieved 70.3% classification accuracy and a Cohen's kappa of 0.63, which is significantly higher than the values reported in previous studies. Besides the improvement in classification accuracy, the developed system is also less sensitive to overfitting, as it uses only 205 classification features, around 100 times fewer than similar systems based on bag-of-words features. We also provide an overview of the classification features most indicative of the different phases of cognitive presence, which gives additional insight into the nature of the cognitive presence learning cycle. Overall, our results show the great potential of the proposed approach, with the added benefit of further characterizing the cognitive presence coding scheme.
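A minimal sketch of the kind of classifier described above: a random forest over a compact feature matrix (here a synthetic stand-in for the 205 Coh-Metrix, LIWC and context features), evaluated with accuracy and Cohen's kappa via cross-validation. The data, class count and hyperparameters are assumptions for illustration; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)

# Stand-in data: 1,000 discussion messages x 205 features, labelled with
# one of the phases of cognitive presence (5 synthetic classes here).
X = rng.normal(size=(1000, 205))
y = rng.integers(0, 5, size=1000)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
pred = cross_val_predict(clf, X, y, cv=10)

print("accuracy:", accuracy_score(y, pred))
print("Cohen's kappa:", cohen_kappa_score(y, pred))

# Feature importances indicate which features are most indicative of the
# different phases (after fitting on the full data).
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("top feature indices:", top)
```

With real coded transcripts in place of the random matrix, the same accuracy/kappa pair reported above can be computed directly from the cross-validated predictions.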
Abstract:
Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades, owing to their geomorphological importance as the reference surface for gravitation-driven material flow as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. The investigation of this phenomenon is known as error propagation analysis, which has a direct influence on decision-making based on interpretations and applications of terrain analysis, and may also have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analysis, and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and to morphological gross errors, which are detectable with the presented visualisation methods. In addition, a global characterisation of DEM error is a gross generalisation of reality, because the areas within which the assumption of stationarity is not violated are small in extent. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the error model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this view is now challenged, because none of the DEM derivatives investigated in the study showed maximum variation with spatially uncorrelated random error. A significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution to generate realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
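The simulation-based error propagation step can be sketched as follows: spatially autocorrelated realisations of the DEM error are generated by process convolution (white noise smoothed with a Gaussian kernel), added to the DEM, and pushed through a surface derivative such as slope, whose spread across realisations quantifies the propagated error. The synthetic surface, grid size, error standard deviation and kernel range below are invented for illustration and do not reproduce the thesis's data or error model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
CELL = 10.0      # grid resolution (m), fine toposcale
SIGMA_Z = 1.5    # DEM vertical error std (m), illustrative
RANGE_CELLS = 5  # kernel width controlling spatial autocorrelation

# A smooth synthetic surface standing in for the DEM.
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 200), np.linspace(0, 4 * np.pi, 200))
dem = 50 * np.sin(x) * np.cos(y)

def slope_deg(z, cell):
    """Slope (degrees) from finite-difference gradients."""
    gy, gx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(gx, gy)))

def error_realisation(shape, sd, range_cells):
    """Process convolution: smooth white noise, rescale to the target sd."""
    field = gaussian_filter(rng.normal(size=shape), sigma=range_cells)
    return field * (sd / field.std())

slopes = np.stack([
    slope_deg(dem + error_realisation(dem.shape, SIGMA_Z, RANGE_CELLS), CELL)
    for _ in range(100)
])

# Cell-wise standard deviation of slope across realisations = propagated error.
print("mean propagated slope error (deg):", slopes.std(axis=0).mean())
```

Setting RANGE_CELLS close to zero reproduces the spatially uncorrelated ('worst-case') error model, so the effect of autocorrelation on a given derivative can be compared directly.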
Abstract:
In this Ph.D. thesis I have studied how the objectives of sustainable development have been integrated into Northwest Russian urban and regional planning, and how the Russian planning discourse has changed since the collapse of the Soviet Union. By analysing the planning discussion, processes, and strategic documents, I have also investigated the use of power and governmentality in urban and regional planning. As a methodological foundation I have used an approach that I call geographical constructivism. A discourse analysis of the planning discussion made it possible to answer, in a relevant manner, the question of how sustainable development has become a part of planning in Northwest Russia. Over the last decades, sustainable development has become one of the most central societal challenges globally. Urban and regional planning has a central role to play in promoting this process, since many meta-level objectives actually take shape within its sphere. An increasingly pressing challenge posed by sustainable development is to plan regions and places while balancing the conflicting pressures of safeguarding a good environment and taking social and economic needs into consideration. I have given these unavoidable conflicts of sustainable development a central place in my work. In my view, complementing instrumental and communicative rationality with conflict rationality gives environmental planning a well-equipped toolbox. Sustainable development can be enhanced in urban and regional planning by seeking out open, and especially hidden, potential conflicts. The thinking (mentality) expressed and the actions taken by power regimes in and around conflicts thus open an interesting viewpoint into Northwest Russian governmentality. I examine the significance of sustainable development in planning through Northwest Russian geography, recent planning legislation and four case studies, and I also project my analysis of the empirical material onto the latest discussion in planning theory. My four case studies, which are based on independent and separate empirical material (42 thematic interviews and planning documents), consider the republics of Karelia and Komi, Leningrad oblast and the city of Saint Petersburg. In the dissertation I argue that sustainable development is understood, in the local governmentalities of Northwest Russia, as a concept centred on solving environmental problems, and that these problems are assumed to be solvable through planning carried out by planning professionals. Despite this idealism, environmental improvements have been overlooked by appealing to difficult economic circumstances. This is what I call environmental racism, which I consider the most central barrier to sustainable development in Northwest Russia. The situation concerning the social dimension of sustainable development is even more difficult, since, for example, the development of local democracy is not highly valued. In the planning discourse this democracy racism is explained by the short history of democracy in Russia. However, it is precisely through planning conflicts, for example in St. Petersburg, that planning has become socially more sustainable: protests by local inhabitants have bypassed the poorly functioning representative democracy, and the governmentality has changed from a mute use of power to one that takes a stand on contested issues.
Keywords: Russia, urban and regional planning, sustainable development, environmental planning, power and conflicts in planning, governmentality, rationalities.
Abstract:
Elucidating the mechanisms responsible for the patterns of species abundance, diversity, and distribution within and across ecological systems is a fundamental research focus in ecology. Species abundance patterns are shaped in a convoluted way by the interplay between inter- and intra-specific interactions, environmental forcing, demographic stochasticity, and dispersal. Comprehensive models and suitable inferential and computational tools for teasing these different factors apart are quite limited, even though such tools are critically needed to guide the implementation of management and conservation strategies, whose efficacy rests on a realistic evaluation of the underlying mechanisms. This is all the more true in the prevailing context of concern over the progress of climate change and its potential impacts on ecosystems. This thesis used the flexible hierarchical Bayesian modelling framework, in combination with the computer-intensive methods known as Markov chain Monte Carlo, to develop methodologies for identifying and evaluating the factors that control the structure and dynamics of ecological communities. These methodologies were used to analyze data from a range of taxa: macro-moths (Lepidoptera), fish, crustaceans, birds, and rodents. Environmental stochasticity emerged as the most important driver of community dynamics, followed by density-dependent regulation; the influence of inter-specific interactions on community-level variances was broadly minor. The thesis contributes to the understanding of the mechanisms underlying the structure and dynamics of ecological communities by showing directly that environmental fluctuations, rather than inter-specific competition, dominate the dynamics of several systems. This finding emphasizes the need to better understand how species are affected by the environment and to acknowledge differences in species' responses to environmental heterogeneity if we are to model and predict their dynamics effectively (e.g. for management and conservation purposes). The thesis also proposes a model-based approach to integrating the niche and neutral perspectives on community structure and dynamics, making it possible to evaluate the relative importance of each category of factors in light of field data.
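A minimal sketch of the hierarchical Bayesian plus MCMC machinery this kind of work builds on: species-level mean log-abundances share a common hyperprior and are sampled with a Gibbs sampler (variances held fixed for brevity). The data are synthetic and the model is far simpler than the community dynamics models developed in the thesis; it only illustrates the inferential approach.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: log-abundance observations for J species over n years.
J, n = 12, 20
true_means = rng.normal(2.0, 1.0, size=J)
y = true_means[:, None] + rng.normal(0.0, 0.5, size=(J, n))

sigma2, tau2 = 0.5**2, 1.0**2   # observation and between-species variances (fixed here)
m0, s0_2 = 0.0, 10.0**2         # vague hyperprior on the community-level mean

mu = y.mean(axis=1).copy()      # species-level means (initial values)
mu0 = 0.0                       # community-level (hyper) mean
draws = []

for it in range(5000):
    # Gibbs update of each species mean (conjugate normal-normal).
    prec = n / sigma2 + 1.0 / tau2
    mean = (n * y.mean(axis=1) / sigma2 + mu0 / tau2) / prec
    mu = rng.normal(mean, np.sqrt(1.0 / prec))

    # Gibbs update of the community mean given the species means.
    prec0 = J / tau2 + 1.0 / s0_2
    mean0 = (mu.sum() / tau2 + m0 / s0_2) / prec0
    mu0 = rng.normal(mean0, np.sqrt(1.0 / prec0))

    if it >= 1000:              # discard burn-in
        draws.append(mu0)

print("posterior mean of community-level mean:", np.mean(draws))
print("95% credible interval:", np.percentile(draws, [2.5, 97.5]))
```

Extending such a model with environmental covariates, density dependence and inter-specific interaction terms, and comparing their posterior contributions to the community-level variance, is the general strategy the abstract describes.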