44 results for Computational power

in Helda - Digital Repository of the University of Helsinki


Relevance:

60.00%

Publisher:

Abstract:

There exist various suggestions for building a functional and fault-tolerant large-scale quantum computer. Topological quantum computation is a more exotic suggestion, which makes use of the properties of quasiparticles manifest only in certain two-dimensional systems. These so-called anyons exhibit topological degrees of freedom which, in principle, can be used to execute quantum computation with intrinsic fault-tolerance. This feature is the main incentive to study topological quantum computation. The objective of this thesis is to provide an accessible introduction to the theory. The thesis considers the theory of anyons arising in two-dimensional quantum mechanical systems described by gauge theories based on so-called quantum double symmetries. The quasiparticles are shown to exhibit interactions and carry quantum numbers which are both of topological nature. In particular, it is found that the addition of the quantum numbers is not unique, but that the fusion of the quasiparticles is described by a non-trivial fusion algebra. It is discussed how this property can be used to encode quantum information in a manner which is intrinsically protected from decoherence, and how one could, in principle, perform quantum computation by braiding the quasiparticles. As an example of the general discussion, the particle spectrum and the fusion algebra of an anyon model based on the gauge group S_3 are explicitly derived. The fusion algebra is found to branch into multiple proper subalgebras, and the simplest one of them is chosen as a model for an illustrative demonstration. The different steps of a topological quantum computation are outlined and the computational power of the model is assessed. It turns out that the chosen model is not universal for quantum computation. However, because the objective was a demonstration of the theory with explicit calculations, none of the other, more complicated fusion subalgebras were considered. Studying their applicability for quantum computation could be a topic of further research.
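The non-unique fusion mentioned above can be made concrete with a small fusion table. The Python sketch below uses the well-known Fibonacci fusion rules purely as a stand-in, since the S_3 quantum-double spectrum derived in the thesis is larger; it only illustrates how a fusion algebra assigns several possible total charges to a collection of anyons, and how that degeneracy provides a protected space in which to encode quantum information.

    # Illustrative only: Fibonacci fusion rules (vacuum '1' and anyon 'tau') stand in
    # for the S_3 quantum-double model treated in the thesis.
    FUSION = {
        ('1', '1'): ['1'],
        ('1', 'tau'): ['tau'],
        ('tau', '1'): ['tau'],
        ('tau', 'tau'): ['1', 'tau'],   # non-unique outcome: fusion is an algebra, not ordinary addition
    }

    def fuse_many(labels):
        """All possible total charges of a list of anyons, fused left to right."""
        outcomes = {labels[0]}
        for nxt in labels[1:]:
            outcomes = {c for o in outcomes for c in FUSION[(o, nxt)]}
        return outcomes

    # Three tau anyons can carry total charge '1' or 'tau'; this degenerate fusion space
    # is where topological qubits are encoded and manipulated by braiding.
    print(fuse_many(['tau', 'tau', 'tau']))   # {'1', 'tau'}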

Relevance:

60.00%

Publisher:

Abstract:

The ever-increasing demand for faster computers in various areas, ranging from entertainment electronics to computational science, is pushing the semiconductor industry towards its limits in decreasing the sizes of electronic devices based on conventional materials. According to the famous law by Gordon E. Moore, a co-founder of the world's largest semiconductor company Intel, transistor sizes should decrease to the atomic level during the next few decades to maintain the present rate of increase in computational power. As leakage currents become a problem for traditional silicon-based devices already at nanometer-scale sizes, an approach other than further miniaturization is needed to meet the needs of future electronics. A relatively recently proposed possibility for further progress in electronics is to replace silicon with carbon, another element from the same group in the periodic table. Carbon is an especially interesting material for nanometer-sized devices because it naturally forms different nanostructures, some of which have unique properties. The most widely suggested allotrope of carbon for electronics is a tubular molecule with an atomic structure resembling that of graphite. These carbon nanotubes are popular both among scientists and in industry because of a long list of exciting properties. For example, carbon nanotubes are electronically unique and have an uncommonly high strength-to-mass ratio, which has resulted in a multitude of proposed applications in several fields. In fact, due to some remaining difficulties regarding large-scale production of nanotube-based electronic devices, fields other than electronics have been faster to develop profitable nanotube applications. In this thesis, the possibility of using low-energy ion irradiation to ease the route towards nanotube applications is studied through atomistic simulations on different levels of theory. Specifically, molecular dynamics simulations with analytical interaction models are used to follow the irradiation process of nanotubes in order to introduce different impurity atoms into these structures and thereby gain control over their electronic character. Ion irradiation is shown to be a very efficient method for replacing carbon atoms with boron or nitrogen impurities in single-walled nanotubes. Furthermore, potassium irradiation of multi-walled and fullerene-filled nanotubes is demonstrated to result in small potassium clusters in the hollow parts of these structures. Molecular dynamics simulations are further used to give an example of using irradiation to improve contacts between a nanotube and a silicon substrate. Methods based on density-functional theory are used to gain insight into the defect structures inevitably created during the irradiation. Finally, a new simulation code utilizing the kinetic Monte Carlo method is introduced to follow the time evolution of irradiation-induced defects in carbon nanotubes on macroscopic time scales. Overall, the molecular dynamics simulations presented in this thesis show that ion irradiation is a promising method for tailoring nanotube properties in a controlled manner. The calculations made with density-functional-theory-based methods indicate that it is energetically favorable for even relatively large defects to transform so as to keep the atomic configuration as close to that of the pristine nanotube as possible. The kinetic Monte Carlo studies reveal that elevated temperatures during processing significantly enhance the self-healing of nanotubes, ensuring low defect concentrations after treatment with energetic ions. Thereby, nanotubes can retain their desired properties also after the irradiation. Throughout the thesis, atomistic simulations combining different levels of theory are demonstrated to be an important tool for determining the optimal conditions for irradiation experiments, because the atomic-scale processes at short time scales are extremely difficult to study by any other means.
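The kinetic Monte Carlo approach mentioned above can be summarized in a few lines: each possible defect process is assigned an Arrhenius rate, one event is chosen in proportion to its rate, and the simulation clock advances by an exponentially distributed waiting time. The Python sketch below is illustrative only; the event list, barriers and attempt frequency are assumptions, not parameters from the thesis.

    import math, random

    K_B = 8.617e-5                 # Boltzmann constant, eV/K
    PREFACTOR = 1e13               # attempt frequency, 1/s (assumed)
    BARRIERS = {                   # activation energies in eV (assumed, illustrative)
        'vacancy_migration': 1.0,
        'adatom_migration': 0.5,
        'vacancy_adatom_recombination': 0.8,
    }

    def rates(temperature):
        return {ev: PREFACTOR * math.exp(-ea / (K_B * temperature))
                for ev, ea in BARRIERS.items()}

    def kmc_step(temperature):
        """Pick one event proportionally to its rate and return (event, time increment)."""
        r = rates(temperature)
        total = sum(r.values())
        pick, acc = random.uniform(0.0, total), 0.0
        for event, rate in r.items():
            acc += rate
            if pick <= acc:
                break
        dt = -math.log(1.0 - random.random()) / total   # residence-time algorithm
        return event, dt

    # Higher temperature -> larger total rate -> defects anneal out on shorter time scales,
    # which is the self-healing effect discussed above.
    print(kmc_step(300.0))
    print(kmc_step(1200.0))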

Relevance:

60.00%

Publisher:

Abstract:

According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. In the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim of the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation in any case, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science. Specifically, I will argue that the notion should be conceived as an instrumental rather than a fundamental or foundational one.
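The structure of the implementation relation under discussion can be shown schematically: a physical system implements a combinatorial-state automaton when there is a mapping from physical states to vectors of substates under which every physical state transition is mirrored by the automaton's transition function. The toy Python sketch below uses an invented two-component automaton and an invented four-state "physical system"; it illustrates only the shape of that condition, not the formal definition given in the thesis.

    # Toy combinatorial-state automaton: a global state is a vector of substates,
    # and the transition function acts on whole vectors. Everything here is invented.
    def step(state):
        a, b = state
        return (b, '1' if a == b else '0')

    def implements(phys_states, phys_step, mapping):
        """Implementation condition (schematic): every physical transition is mirrored
        by the automaton's transition under the state mapping."""
        return all(mapping[phys_step(p)] == step(mapping[p]) for p in phys_states)

    # Hypothetical four-state physical system and a mapping onto substate vectors.
    mapping = {'p00': ('0', '0'), 'p01': ('0', '1'), 'p10': ('1', '0'), 'p11': ('1', '1')}
    inverse = {v: k for k, v in mapping.items()}
    def phys_step(p):
        return inverse[step(mapping[p])]      # dynamics built here to mirror the automaton

    print(implements(mapping.keys(), phys_step, mapping))   # True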

Relevance:

60.00%

Publisher:

Abstract:

Modern smart phones often come with a significant amount of computational power and an integrated digital camera, making them an ideal platform for intelligent assistants. This work is restricted to retail environments, where users could be provided with, for example, navigational instructions to desired products or information about special offers in their close proximity. These kinds of applications usually require information about the user's current location in the domain environment, which in our case corresponds to a retail store. We propose a vision-based positioning approach that recognizes the products the user's mobile phone camera is currently pointing at. The products are related to locations within the store, which enables us to locate the user by pointing the mobile phone camera at a group of products. The first step of our method is to extract meaningful features from digital images. We use the Scale-Invariant Feature Transform (SIFT) algorithm, which extracts features that are highly distinctive in the sense that they can be correctly matched against a large database of features from many images. We collect a comprehensive set of images from all meaningful locations within our domain and extract the SIFT features from each of these images. As the SIFT features are of high dimensionality, and thus comparing individual features is infeasible, we apply the Bags of Keypoints method, which creates a generic representation, a visual category, from all features extracted from images taken at a specific location. A category for an unseen image can be deduced by extracting the corresponding SIFT features and choosing the category that best fits the extracted features. We have applied the proposed method within a Finnish supermarket. We consider grocery shelves as categories, which is a sufficient level of accuracy to help users navigate or to provide useful information about nearby products. We achieve 40% accuracy, which is quite low for commercial applications but significantly outperforms the random-guess baseline. Our results suggest that the classification accuracy could be increased with a deeper analysis of the domain and by combining existing positioning methods with ours.
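A minimal sketch of this pipeline, assuming OpenCV (with SIFT available) and scikit-learn, is shown below. The vocabulary size and the linear SVM classifier are assumptions for illustration; the thesis does not prescribe them.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    N_VISUAL_WORDS = 200                      # size of the visual vocabulary (assumed)
    sift = cv2.SIFT_create()

    def sift_descriptors(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        return desc if desc is not None else np.empty((0, 128), dtype=np.float32)

    def histogram(desc, vocabulary):
        words = vocabulary.predict(desc)      # map each descriptor to its visual word
        return np.bincount(words, minlength=N_VISUAL_WORDS).astype(float)

    def build_locator(train_paths, train_labels):
        """Fit a visual vocabulary and a classifier from photos labelled with their shelf."""
        all_desc = np.vstack([sift_descriptors(p) for p in train_paths])
        vocabulary = KMeans(n_clusters=N_VISUAL_WORDS, n_init=10).fit(all_desc)
        features = np.array([histogram(sift_descriptors(p), vocabulary) for p in train_paths])
        classifier = LinearSVC().fit(features, train_labels)
        return vocabulary, classifier

    def locate(path, vocabulary, classifier):
        """Predict which shelf (location category) a new photo was taken at."""
        return classifier.predict([histogram(sift_descriptors(path), vocabulary)])[0]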

Relevance:

30.00%

Publisher:

Abstract:

Large-scale chromosome rearrangements such as copy number variants (CNVs) and inversions encompass a considerable proportion of the genetic variation between human individuals. In a number of cases, they have been closely linked with various heritable diseases. Single-nucleotide polymorphisms (SNPs) are another large part of the genetic variation between individuals. They are also typically abundant, and measuring them is straightforward and cheap. This thesis presents computational means of using SNPs to detect the presence of inversions and deletions, a particular variety of CNVs. Technically, the inversion-detection algorithm detects the suppressed recombination rate between inverted and non-inverted haplotype populations, whereas the deletion-detection algorithm uses the EM algorithm to estimate the haplotype frequencies of a window with and without a deletion haplotype. As a contribution to population biology, a coalescent simulator for simulating inversion polymorphisms has been developed. Coalescent simulation is a backward-in-time method of modelling population ancestry. Technically, the simulator also models multiple crossovers by using the Counting model as the chiasma interference model. Finally, this thesis includes an experimental section. The aforementioned methods were tested on synthetic data to evaluate their power and specificity. They were also applied to the HapMap Phase II and Phase III data sets, yielding a number of candidates for previously unknown inversions and deletions, and also correctly detecting known rearrangements.
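The EM idea can be illustrated with a deliberately simplified single-SNP version: a hemizygous A/deletion genotype is called as homozygous 'AA', B/deletion as 'BB', and deletion/deletion produces a missing call. The windowed, multi-SNP haplotype model of the thesis is more involved; the Python sketch below (with invented genotype counts) only shows the shape of the E- and M-steps.

    def em_deletion_freq(n_aa, n_ab, n_bb, n_missing, iters=100):
        """Estimate allele frequencies (A, B, deletion) by EM under Hardy-Weinberg proportions."""
        p, q, d = 0.45, 0.45, 0.10                     # initial guesses
        for _ in range(iters):
            # E-step: expected number of hemizygotes hidden inside the homozygote calls
            aa_hemi = n_aa * (2 * p * d) / (p * p + 2 * p * d)
            bb_hemi = n_bb * (2 * q * d) / (q * q + 2 * q * d)
            # M-step: recount alleles (two per individual)
            a_alleles = 2 * (n_aa - aa_hemi) + aa_hemi + n_ab
            b_alleles = 2 * (n_bb - bb_hemi) + bb_hemi + n_ab
            d_alleles = aa_hemi + bb_hemi + 2 * n_missing
            total = a_alleles + b_alleles + d_alleles
            p, q, d = a_alleles / total, b_alleles / total, d_alleles / total
        return p, q, d

    # Hypothetical counts: an excess of homozygote and missing calls hints at a segregating deletion.
    print(em_deletion_freq(n_aa=420, n_ab=380, n_bb=160, n_missing=40))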

Relevance:

20.00%

Publisher:

Abstract:

Empire is central to U.S. history. When we see the U.S. projecting its influence on a global scale in today's world, it is important to understand that U.S. empire has a long history. This dissertation offers a case study of colonialism and U.S. empire by discussing the social worlds, labor regimes, and culture of the U.S. Army during the conquest of southern Arizona and New Mexico (1866-1886). It highlights some of the defining principles, mentalities, and characteristics of U.S. imperialism and shows how U.S. forces have in years past constructed their power and represented themselves, their missions, and the places and peoples that faced U.S. imperialism/colonialism. Using insights from postcolonial studies and whiteness studies, this work balances its attention between discursive representations (army stories) and social experience (army actions), pays attention to silences in the process of historical production, and focuses on collective group mentalities and identities. In the end the army experience reveals an empire in denial, constructed on the rule of difference and marked by frustration. White officers, their wives, and the white enlisted men wanted not only the monopoly of violence for the U.S. regime but also colonial (mental/cultural) authority and power, and they constructed their identity, authority, and power through difference, both in discourse and in the social contexts of the everyday. Engaged in warfare against the Apaches, they did not recognize their actions as harmful or acknowledge the U.S. invasion as the bloody colonial conquest it was. White army personnel painted themselves and the army as liberators, represented colonial peoples as racial inferiors, approached the colonial terrain in terms of struggle, and claimed that the region had been a terrible periphery of little value before the arrival of white civilization. Officers and wives also wanted to place themselves at the top of colonial hierarchies as the refined and respectable class who led the regeneration of the colony by example: they tried to turn army villages into islands of civilization and used journeys, leisure, and domestic life to showcase their class sensibilities and level of sophistication. Often, however, their efforts failed, resulting in frustration and bitterness. Many blamed the colony and its peoples for their failures. The army itself was divided by race and class. All soldiers were treated as laborers unfit for self-government. White enlisted men, frustrated by their failures in colonial warfare and by constant manual labor, constructed worlds of resistance, whereas indigenous soldiers sought to negotiate the effects of colonialism by working in the army. As colonized labor, their position was defined by tension between integration and exclusion, and between freedom and colonial control.

Relevance:

20.00%

Publisher:

Abstract:

Failures in industrial organizations dealing with hazardous technologies can have widespread consequences for the safety of the workers and the general population. Psychology can have a major role in contributing to the safe and reliable operation of these technologies. Most current models of safety management in complex sociotechnical systems such as nuclear power plant maintenance are either non-contextual or based on an overly rational image of an organization. Thus, they fail to grasp either the actual requirements of the work or the socially constructed nature of the work in question. The general aim of the present study is to develop and test a methodology for the contextual assessment of organizational culture in complex sociotechnical systems. This is done by demonstrating the findings that the application of the emerging methodology produces in the domain of maintenance of a nuclear power plant (NPP). The concepts of organizational culture and organizational core task (OCT) are operationalized and tested in the case studies. We argue that when the complexity of the work, the technology and the social environment increases, the significance of the most implicit features of organizational culture as a means of coordinating the work and achieving safety and effectiveness also increases. For this reason a cultural perspective could provide additional insight into the problem of safety management. The present study aims to determine: (1) the elements of organizational culture in complex sociotechnical systems; (2) the demands the maintenance task sets for the organizational culture; (3) how the current organizational culture at the case organizations supports the perception and fulfilment of the demands of the maintenance work; (4) the similarities and differences between the maintenance cultures at the case organizations; and (5) how organizational culture in complex sociotechnical systems should be assessed. Three in-depth case studies were carried out at the maintenance units of three Nordic NPPs. The case studies employed an iterative and multimethod research strategy. The following methods were used: interviews, the CULTURE survey, seminars, document analysis and group work. Both cultural analysis and task modelling were carried out. The results indicate that organizational culture in complex sociotechnical systems can be characterised according to three qualitatively different elements: structure, internal integration and conceptions. All three of these elements, as well as their interrelations, have to be considered in organizational assessments, or important aspects of the organizational dynamics will be overlooked. On the basis of OCT modelling, the maintenance core task was defined as balancing between three critical demands: anticipating the condition of the plant and conducting preventive maintenance accordingly, reacting to unexpected technical faults, and monitoring and reflecting on the effects of maintenance actions and the condition of the plant. The results indicate that safety was highly valued at all three plants, and in that sense they all had strong safety cultures. In other respects the cultural features were quite different, and thus the culturally accepted means of maintaining high safety also differed. The handicraft nature of maintenance work was emphasised as a source of identity at the NPPs. Overall, the importance of safety was taken for granted, but the cultural norms concerning the appropriate means to guarantee it were reflected upon very little. A sense of control, personal responsibility and organizational changes emerged as challenging issues at all the plants. The study shows that in complex sociotechnical systems it is both necessary and possible to analyse the safety and effectiveness of the organizational culture. Safety in complex sociotechnical systems cannot be understood or managed without understanding the demands of the organizational core task and managing the dynamics between the three elements of organizational culture.

Relevance:

20.00%

Publisher:

Abstract:

Atherosclerosis is a disease of the arteries; its characteristic features include chronic inflammation, extra- and intracellular lipid accumulation, extracellular matrix remodeling, and an increase in extracellular matrix volume. The underlying mechanisms in the pathogenesis of advanced atherosclerotic plaques, which involve local acidity of the extracellular fluid, are still incompletely understood. In this thesis project, my co-workers and I studied the different mechanisms by which local extracellular acidity could promote accumulation of the atherogenic apolipoprotein B-100 (apoB-100)-containing plasma lipoprotein particles in the inner layer of the arterial wall, the intima. We found that lipolysis of atherogenic apoB-100-containing plasma lipoprotein particles (LDL, IDL, and sVLDL) by the secretory phospholipase A2 group V (sPLA2-V) enzyme was increased at acidic pH. Also, the binding of apoB-100-containing plasma lipoprotein particles to human aortic proteoglycans was dramatically enhanced at acidic pH, and lipolysis by the sPLA2-V enzyme further increased this binding. Using proteoglycan-affinity chromatography, we found that sVLDL lipoprotein particles consist of populations differing in their affinities toward proteoglycans. These populations also contained different amounts of apolipoprotein E (apoE) and apolipoprotein C-III (apoC-III); the amounts of apoC-III and apoE per particle were highest in the population with the lowest affinity toward proteoglycans. Since PLA2-modification of LDL particles has been shown to change their aggregation behavior, we also studied the effect of acidic pH on the structure of the monolayer covering lipoprotein particles after PLA2-induced hydrolysis. Using molecular dynamics simulations, we found that at acidic pH the monolayer is more tightly packed laterally; moreover, its spontaneous curvature is negative, suggesting that acidity may promote lipoprotein particle fusion. In addition to extracellular lipid accumulation, the apoB-100-containing plasma lipoprotein particles can be taken up by inflammatory cells, namely macrophages. Using radiolabeled lipoprotein particles and cell cultures, we showed that sPLA2-V modification of LDL, IDL, and sVLDL lipoprotein particles, at neutral or acidic pH, increased their uptake by human monocyte-derived macrophages.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this thesis is to find out how dominant firms in a liberalised electricity market will react when they face an increase in costs due to emissions trading, and how this will affect the price of electricity. The Nordic electricity market is chosen as the setting in which to examine the question, since recent studies on the subject suggest that the interaction between electricity markets and emissions trading depends very much on conditions specific to each market area. There is reason to believe that imperfect competition prevails in the Nordic market; thus the issue is approached through the theory of oligopolistic competition. The generation capacity available on the market, the marginal cost of electricity production and the seasonal levels of demand form the data based on which the dominant firms are modelled using the Cournot model of competition. The calculations are made for two levels of demand, high and low, and with several values of demand elasticity. The producers are first modelled under no carbon costs and then by adding the cost of carbon dioxide at 20 €/t to those technologies subject to carbon regulation. In all cases the situation under perfect competition is determined as a comparison point for the results of the Cournot game. The results imply that the potential for market power does exist on the Nordic market, but the possibility of exercising market power depends on the demand level. In seasons of high demand the dominant firms may raise the price significantly above competitive levels, and the situation is aggravated when the cost of carbon dioxide is accounted for. Under low demand levels there is no difference between perfect and imperfect competition. The results are highly dependent on the price elasticity of demand.
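For illustration, a minimal Cournot sketch with linear inverse demand P(Q) = a - b*Q and constant marginal costs is given below. The thesis works with actual Nordic capacity, cost and demand data, so every number here is a placeholder, and passing the carbon price through an assumed emission factor is likewise only an assumption.

    def cournot_price(a, marginal_costs):
        """Equilibrium price for n Cournot firms: P* = (a + sum(c_i)) / (n + 1).
        Follows from the n first-order conditions a - b*Q - b*q_i - c_i = 0;
        with constant marginal costs the price does not depend on the slope b."""
        n = len(marginal_costs)
        return (a + sum(marginal_costs)) / (n + 1)

    a_high, a_low = 120.0, 60.0        # demand intercepts for high and low demand (assumed)
    costs = [20.0, 25.0, 30.0]         # marginal costs of three dominant firms, EUR/MWh (assumed)
    carbon = 20.0 * 0.4                # 20 EUR/t CO2 times an assumed emission factor of 0.4 tCO2/MWh
    costs_ets = [c + carbon for c in costs]

    for label, a in (('high demand', a_high), ('low demand', a_low)):
        print(label, cournot_price(a, costs), '->', cournot_price(a, costs_ets))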

Relevance:

20.00%

Publisher:

Abstract:

The molecular-level structure of mixtures of water and alcohols is very complicated and has been under intense research in the recent past. Both experimental and computational methods have been used in the studies. One method for studying the intra- and intermolecular bindings in the mixtures is the use of so-called difference Compton profiles, which are a way to obtain information about changes in the electron wave functions. In the process of Compton scattering a photon scatters inelastically from an electron. The Compton profile obtained from the electron wave functions is directly proportional to the probability of a photon scattering with a given energy into a given solid angle. In this work we develop a method to compute Compton profiles numerically for mixtures of liquids. In order to obtain the electronic wave functions necessary to calculate the Compton profiles, we need statistical information about the atomic coordinates. Acquiring this using ab initio molecular dynamics is beyond our computational capabilities, and therefore we use classical molecular dynamics to model the movement of atoms in the mixture. We discuss the validity of the chosen method in view of the results obtained from the simulations. There are some difficulties in using classical molecular dynamics for the quantum mechanical calculations, but these can possibly be overcome by parameter tuning. According to the calculations, clear differences can be seen in the Compton profiles of different mixtures. This prediction needs to be tested in experiments in order to find out whether the approximations made are valid.
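As a minimal numerical illustration of the quantity involved: for an isotropic electron momentum density n(p), the Compton profile reduces to J(q) = 2*pi * integral from |q| to infinity of p*n(p) dp. The Python sketch below checks this against the hydrogen 1s momentum density (atomic units), whose profile is known in closed form; the liquid-mixture profiles in the thesis are instead built from simulated atomic coordinates and computed wave functions, and a difference profile is simply J_A(q) - J_B(q).

    import numpy as np
    from scipy.integrate import quad

    def n_1s(p):
        """Hydrogen 1s electron momentum density in atomic units, |phi_1s(p)|^2."""
        return 8.0 / (np.pi**2 * (1.0 + p**2)**4)

    def compton_profile(q, density):
        """Isotropic Compton profile J(q) = 2*pi * integral_{|q|}^inf p * n(p) dp."""
        value, _ = quad(lambda p: 2.0 * np.pi * p * density(p), abs(q), np.inf)
        return value

    for q in (0.0, 0.5, 1.0, 2.0):
        analytic = 8.0 / (3.0 * np.pi * (1.0 + q**2)**3)   # known closed form for the 1s profile
        print(q, compton_profile(q, n_1s), analytic)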

Relevance:

20.00%

Publisher:

Abstract:

There is intense activity in the area of the theoretical chemistry of gold. It is now possible to predict new molecular species, and more recently solids, by combining relativistic methodology with isoelectronic thinking. In this thesis we predict a series of solid sheet-type crystals for Group-11 cyanides, MCN (M = Cu, Ag, Au), and Group-2 and Group-12 carbides MC2 (M = Be-Ba, Zn-Hg). The idea of sheets is then extended to nanostrips, which can be bent into nanorings. The bending energies and deformation frequencies can be systematized by treating these molecules as elastic bodies. In these species Au atoms act as an 'intermolecular glue'. Further suggested molecular species are the new uncongested aurocarbons and the neutral Au_nHg_m clusters. Many of the suggested species are expected to be stabilized by aurophilic interactions. We also estimate the MP2 basis-set limit of the aurophilicity for the model compounds [ClAuPH_3]_2 and [P(AuPH_3)_4]^+. Besides investigating the size of the basis set applied, our research confirms that the 19-VE TZVP+2f level, used a decade ago, already produced 74% of the present aurophilic attraction energy for the [ClAuPH_3]_2 dimer. Likewise we verify the preferred C4v structure for the [P(AuPH_3)_4]^+ cation at the MP2 level. We also perform the first calculation on model aurophilic systems using the SCS-MP2 method and compare the results to high-accuracy CCSD(T) ones. The recently obtained high-resolution microwave spectra of MCN molecules (M = Cu, Ag, Au) provide an excellent testing ground for quantum chemistry. MP2 or CCSD(T) calculations, correlating all 19 valence electrons of Au and including BSSE and SO corrections, are able to give bond lengths to 0.6 pm or better. Our calculated vibrational frequencies are expected to be better than the currently available experimental estimates. Qualitative evidence for multiple Au-C bonding in triatomic AuCN is also found.
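Basis-set-limit estimates of this kind are often obtained by extrapolating correlation energies computed with two basis-set cardinal numbers, E_corr(X) = E_CBS + A*X^(-3). The thesis does not necessarily use this exact scheme, and the energies in the Python sketch below are placeholders rather than values from the work.

    def extrapolate_cbs(e_x, x, e_y, y):
        """Two-point inverse-cube extrapolation of correlation energies obtained with
        basis-set cardinal numbers x < y (e.g. 3 for triple-zeta, 4 for quadruple-zeta)."""
        return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

    # Hypothetical MP2 aurophilic interaction energies of a model dimer, in kJ/mol:
    e_tz, e_qz = -28.0, -31.0
    print(extrapolate_cbs(e_tz, 3, e_qz, 4))   # estimated basis-set-limit value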