74 results for One-shot information theory


Relevance:

100.00%

Publisher:

Abstract:

We show how to communicate Heisenberg-limited continuous (quantum) variables between Alice and Bob in the case where they occupy two inertial reference frames that differ by an unknown Lorentz boost. There are two effects that need to be overcome: the Doppler shift and the absence of synchronized clocks. Furthermore, we show how Alice and Bob can share Doppler-invariant entanglement, and we demonstrate that the protocol is robust under photon loss.
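
As a hedged aside, the Doppler effect this protocol must overcome has a standard special-relativity form (a textbook fact, not the paper's derivation):

```latex
% For inertial frames receding at speed v = \beta c along the line of sight,
% a mode of frequency \omega in Alice's frame is received by Bob at
\omega' = \omega\,\sqrt{\frac{1-\beta}{1+\beta}} ,
% while the absence of synchronized clocks adds an unknown frequency-dependent
% phase e^{-i\omega\tau} for an unknown clock offset \tau.
```

Since \beta and \tau are unknown to Bob, any encoding in absolute frequency or absolute phase is scrambled, which motivates the Doppler-invariant entanglement the abstract mentions.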

Relevance:

100.00%

Publisher:

Abstract:

High-quality data about protein structures and their gene sequences are essential to understanding the relationship between protein folding and protein coding sequences. First, we constructed the EcoPDB database, a high-quality database of Escherichia coli genes and their corresponding PDB structures. Based on EcoPDB, we presented a novel approach based on information theory to investigate the correlation between cysteine synonymous codon usage and the local amino acids flanking cysteines, the correlation between cysteine synonymous codon usage and the synonymous codon usage of the local amino acids flanking cysteines, as well as the correlation between cysteine synonymous codon usage and the disulfide bonding states of cysteines in the E. coli genome. The results indicate that the nearest neighboring residues at the C-terminal side, and their synonymous codons, have the greatest influence on the synonymous codon usage of cysteines, and that synonymous codon usage has a specific correlation with the disulfide bond formation of cysteines in proteins. These correlations may result from the regulation of protein structure at the gene sequence level and reflect the biological constraint that cysteines pair to form disulfide bonds. The results may also help identify residues that are important for the synonymous codon selection of cysteines when introducing disulfide bridges in protein engineering and molecular biology. The approach presented in this paper can also serve as a complementary computational method and be applied to analyse synonymous codon usage in other model organisms. (c) 2005 Elsevier Ltd. All rights reserved.
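
The abstract does not spell out which information-theoretic measure is used; a natural choice is the mutual information between the codon chosen for a cysteine and the identity of a flanking residue. A minimal sketch of that computation, with made-up observations:

```python
# Illustrative sketch (not the paper's code): mutual information between the
# synonymous codon chosen for cysteine (TGT vs. TGC) and the identity of the
# residue immediately C-terminal to it. The observation list is invented.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits for a list of (x, y) observations."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

# Hypothetical observations: (cysteine codon, C-terminal neighbour residue)
observations = [("TGT", "G"), ("TGT", "G"), ("TGC", "A"),
                ("TGC", "A"), ("TGT", "S"), ("TGC", "G")]
print(f"I(codon; neighbour) = {mutual_information(observations):.3f} bits")
```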

Relevance:

100.00%

Publisher:

Relevance:

40.00%

Publisher:

Abstract:

Management are keen to maximize the life span of an information system because of the high cost, organizational disruption, and risk of failure associated with re-developing or replacing it. This research investigates the effects that various factors have on an information system's life span by examining how those factors affect the system's stability. The research builds on a previously developed two-stage model of information system change, whereby an information system is either in a stable state of evolution, in which its functionality is evolving, or in a state of revolution, in which it is being replaced because it no longer provides the functionality its users expect. A case study surveyed a number of systems within one organization. The aim was to test whether a relationship existed between the base value of the volatility index (a measure of the stability of an information system) and certain system characteristics. Data relating to some 3000 user change requests covering 40 systems over a 10-year period were obtained. The following factors were hypothesized to have significant associations with the base value of the volatility index: language level (the generation of the language of construction), system size, system age, and the timing of changes applied to a system. Significant associations were found in the hypothesized directions, except that the timing of user changes was not associated with any change in the value of the volatility index. Copyright (C) 2002 John Wiley & Sons, Ltd.
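
The volatility index itself is defined in the authors' earlier work and not reproduced in the abstract; purely as a hypothetical illustration, one could proxy it by change requests per system per year and test its association with system characteristics:

```python
# Hypothetical sketch only: a naive volatility proxy (change requests per
# system per year) correlated with system characteristics. All data invented.
from statistics import correlation  # Pearson's r, Python 3.10+

# Made-up records: (system_size_kloc, system_age_years, change_requests_per_year)
systems = [(120, 8, 45), (40, 3, 12), (300, 10, 90), (75, 5, 30), (10, 2, 4)]

sizes = [s for s, _, _ in systems]
ages = [a for _, a, _ in systems]
volatility = [c for _, _, c in systems]

print(f"r(size, volatility) = {correlation(sizes, volatility):.2f}")
print(f"r(age, volatility)  = {correlation(ages, volatility):.2f}")
```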

Relevance:

40.00%

Publisher:

Abstract:

In this paper, technology is described as involving processes whereby resources are utilised to satisfy human needs, to take advantage of opportunities, and to develop practical solutions to problems. This study, set within one technology context, information technology, investigated how elements of technological processes were made explicit to students through a one-semester undergraduate university course. While the development and implementation of this course acknowledged that students needed to learn technical skills, technological skills and knowledge, including design, were also seen as vital, to enable students to think about information technology from a perspective not confined to 'technology as hardware and software'. This paper describes how the course, set within a three-year program of study, aimed to help students develop their thinking and their knowledge about design processes in an explicit way. An interpretive research approach was used, and data sources included a repertory grid 'survey'; student interviews; video recordings of classroom interactions; audio recordings of lectures; observations of classroom interactions made by researchers; and artefacts, which included students' journals and portfolios. The development of students' knowledge about design practices is discussed, and reflections are made on student knowledge development in conjunction with their learning experiences. Implications for ensuring the explicitness of design practice within information technology contexts are presented, and the need to identify what constitutes design knowledge is argued.

Relevance:

40.00%

Publisher:

Abstract:

Mineral processing plants use two main processes: comminution and separation. The objective of comminution is to break complex particles consisting of numerous minerals into smaller, simpler particles, each consisting primarily of only one mineral. The process in which the mineral composition distribution of particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing non-valuable mineral. The energy required to break particles to fine sizes is expensive, and therefore the mineral processing engineer must design the circuit so that the breakage of liberated particles is reduced in favour of breaking composite particles. To effectively optimize a circuit through simulation, it is necessary to predict how the mineral composition distributions change due to comminution. Such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between the feed and product particles can be estimated using a probability method, with the probability defined as the probability that a feed particle of a particular composition and size will form a particular product particle of a particular size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to real plant ore are presented. A laboratory ball mill was used to break particles, and the results were used to estimate the kernel that represents the relationship between parent and progeny particles. A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill. The results from the first experiment were used to predict the product of the second experiment, and the agreement between the predicted and actual results is very good. It is therefore recommended that more extensive validation be carried out to fully evaluate the method. (C) 2003 Elsevier Ltd. All rights reserved.
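
A schematic of the entropy-maximization step described above, with illustrative notation that is not necessarily the paper's:

```latex
% p_{ij}: probability that a feed particle in composition class i produces a
% progeny particle in composition class j; m_i is the feed mass in class i
% and g_i its mineral grade.
\max_{p}\; H(p) = -\sum_{i,j} p_{ij} \ln p_{ij}
\quad\text{subject to}\quad
\sum_{j} p_{ij} = 1 \;\;\forall i \;\;(\text{mass}),
\qquad
\sum_{i,j} m_i\, g_j\, p_{ij} = \sum_{i} m_i\, g_i \;\;(\text{composition}).
```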

Relevance:

40.00%

Publisher:

Abstract:

Regional tourism organizations (RTOs) play a central role in planning, coordinating and marketing tourism in many areas, including Queensland, Australia. RTOs rely on interaction with a network of other organizations for their efficient functioning. This paper describes an exploratory case study that develops a method for using social network analysis techniques to analyse the inter-organizational network in one RTO region in Queensland. Results indicate that differences exist in the structure of inter-organizational links between commercial tourism organizations and planning organizations, between tourism organizations and other sectoral clusters, and between organizations at local, regional and state levels. The results highlight areas for improvement in the role and responsibilities of RTOs in Queensland.
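
As an illustration of this kind of analysis (assumed tooling, not necessarily the authors'), the networkx library can compute within-cluster density and broker centrality on a toy inter-organizational graph:

```python
# Illustrative sketch with invented organizations tagged by sector and level.
import networkx as nx

G = nx.Graph()
G.add_node("RTO", sector="planning", level="regional")
G.add_node("StateBody", sector="planning", level="state")
G.add_node("HotelA", sector="commercial", level="local")
G.add_node("TourOpB", sector="commercial", level="local")
G.add_edges_from([("RTO", "StateBody"), ("RTO", "HotelA"),
                  ("RTO", "TourOpB"), ("HotelA", "TourOpB")])

# Density within the commercial cluster, and who brokers between clusters
commercial = [n for n, d in G.nodes(data=True) if d["sector"] == "commercial"]
print("commercial-cluster density:", nx.density(G.subgraph(commercial)))
print("betweenness:", nx.betweenness_centrality(G))
```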

Relevance:

30.00%

Publisher:

Abstract:

Using spontaneous parametric down-conversion, we produce polarization-entangled states of two photons and characterize them using two-photon tomography to measure the density matrix. A controllable decoherence is imposed on the states by passing the photons through thick, adjustable birefringent elements. When the system is subject to collective decoherence, one particular entangled state is seen to be decoherence-free, as predicted by theory. Such decoherence-free systems may play an important role in the future of quantum computation and information processing.
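
The decoherence-free state referred to is, in the standard account, the polarization singlet, which acquires only a global phase under collective decoherence:

```latex
% Polarization singlet of two photons; under collective decoherence both
% photons see the same unknown unitary U, so the state picks up only the
% global phase det(U):
|\psi^{-}\rangle = \tfrac{1}{\sqrt{2}}\left(|HV\rangle - |VH\rangle\right),
\qquad
(U \otimes U)\,|\psi^{-}\rangle = \det(U)\,|\psi^{-}\rangle .
```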

Relevance:

30.00%

Publisher:

Abstract:

The theory of Owicki and Gries has been used as a platform for safety-based verification and derivation of concurrent programs. It has also been integrated with the progress logic of UNITY, which has allowed newer techniques of progress-based verification and derivation to be developed. However, a theoretical basis for the integrated theory has thus far been missing. In this paper, we provide a theoretical background for the logic of Owicki and Gries integrated with the logic of progress from UNITY. An operational semantics for the new framework is provided and used to prove soundness of the progress logic.
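
For orientation, the UNITY progress operators such an integration builds on are standard (notation follows Chandy and Misra, not necessarily this paper): "p ensures q" guarantees that p keeps holding until some statement establishes q, and leads-to is its transitive, disjunctive closure:

```latex
% Standard UNITY progress rules (assumed background, not quoted from the paper)
\frac{p \;\mathrm{ensures}\; q}{p \leadsto q}
\qquad
\frac{p \leadsto q \qquad q \leadsto r}{p \leadsto r}
\qquad
\frac{\forall i \in I:\; p_i \leadsto q}{(\exists i \in I:\; p_i) \leadsto q}
```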

Relevance:

30.00%

Publisher:

Abstract:

Power system real-time security assessment is one of the fundamental modules of electricity markets. Typically, when a contingency occurs, the security assessment and enhancement module is required to be ready for action within about 20 minutes to meet the real-time requirement. The recent California blackout again highlighted the importance of system security. This paper proposes an approach to power system security assessment and enhancement based on information provided by a pre-defined system parameter space. The proposed scheme opens up an efficient way to perform real-time security assessment and enhancement in a competitive electricity market for the single-contingency case.
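
Purely as a hypothetical sketch of the idea (the abstract does not specify the parameter space), the online check can reduce to testing whether the current operating point lies inside an offline-computed secure region:

```python
# Hypothetical sketch: secure_box maps a parameter name to the (low, high)
# bounds of a pre-computed secure region for one contingency. Bounds invented.
secure_box = {"load_mw": (800.0, 1200.0), "gen_a_mw": (300.0, 500.0)}

def is_secure(operating_point, box):
    """Fast online check: does the point lie inside the offline-computed box?"""
    return all(lo <= operating_point[k] <= hi for k, (lo, hi) in box.items())

point = {"load_mw": 1100.0, "gen_a_mw": 480.0}
print("secure" if is_secure(point, secure_box) else "enhancement required")
```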

Relevance:

30.00%

Publisher:

Abstract:

Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information in large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can take the form of text or of categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rules mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practical use. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probability methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbors (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include associative classification (Liu et al., 1998) and ensemble classification (Tumer, 1996).
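
As a hedged illustration of several of the classifier families listed above (not code from the chapter), scikit-learn offers off-the-shelf versions of them:

```python
# Compare classifier families on a toy dataset with 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier      # decision-tree family (C4.5-like)
from sklearn.naive_bayes import GaussianNB           # probabilistic / Bayesian
from sklearn.neighbors import KNeighborsClassifier   # example-based (k-NN)
from sklearn.svm import SVC                          # support vector machine

X, y = load_iris(return_X_y=True)
models = {"decision tree": DecisionTreeClassifier(),
          "naive Bayes": GaussianNB(),
          "k-NN": KNeighborsClassifier(n_neighbors=5),
          "SVM": SVC()}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} mean accuracy = {scores.mean():.3f}")
```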