51 results for Tutorial on Computing
Abstract:
Colonius suggests that, in using standard set theory as the language in which to express our computational-level theory of human memory, we would need to violate the axiom of foundation in order to express meaningful memory bindings in which a context is identical to an item in the list. We circumvent Colonius's objection by allowing that a list item may serve as a label for a context without being identical to that context. This debate serves to highlight the value of specifying memory operations in set theoretic notation, as it would have been difficult if not impossible to formulate such an objection at the algorithmic level.
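To make the objection concrete, here is a minimal sketch in our own notation (not Colonius's or the authors'): the axiom of foundation forbids membership cycles, so a binding that literally identifies a context with one of its own items is inexpressible in standard set theory, whereas a label escapes the cycle.

```latex
% Foundation (regularity): every nonempty set has a member disjoint from it,
% which in particular rules out any x with x \in x.
\forall x\,\bigl(x \neq \varnothing \rightarrow \exists y \in x\;(y \cap x = \varnothing)\bigr)

% Problem case: if a context c is the set of items it binds, then taking a
% list item i to be *identical* to c yields a membership cycle:
c = i \quad\text{and}\quad i \in c \;\Longrightarrow\; i \in i \quad\text{(forbidden)}

% Workaround from the abstract: let the item serve as a *label* for the
% context, distinct from the context itself:
\ell(c) = i, \qquad \ell(c) \neq c, \qquad \text{so } i \in c \text{ raises no cycle.}
```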
Abstract:
One of the most important advantages of database systems is that the underlying mathematics is rich enough to specify very complex operations with a small number of statements in the database language. This research covers an area of biological informatics, the marriage of information technology and biology, in which real-world phenomena are studied using virtual plants derived from L-system simulation. L-systems were introduced by Aristid Lindenmayer as a mathematical model of multicellular organisms. Little consideration has been given to persistent storage for these simulations, and current procedures for querying the data that L-systems generate for scientific experiments, simulations and measurements are also inadequate. To address these problems, this paper presents a generic data-modeling process (L-DBM) that bridges L-systems and database systems. It shows how L-system productions can be generically and automatically represented in database schemas and how a database can be populated from the L-system strings. The paper further describes how recursive structures in the data can be pre-computed into derived attributes using compiler generation, and supplies a method for maintaining a correspondence between biologists' terms and compiler-generated terms in a biologist's computing environment. Given any specific set of L-system productions and declarations, the L-DBM can generate a schema covering both simple correspondence terminology and complex recursive-structure attributes and relationships.
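As a concrete illustration of the kind of pipeline the abstract describes, here is a minimal, hypothetical sketch in Python: the two-table schema and all names are ours, not the L-DBM schema from the paper.

```python
import sqlite3

# Illustrative only: table layout and names are our own, not the paper's L-DBM.
productions = {"A": "AB", "B": "A"}   # Lindenmayer's classic algae example

def derive(axiom: str, steps: int) -> list[str]:
    """Apply the production rules repeatedly, keeping every generation."""
    strings = [axiom]
    for _ in range(steps):
        current = strings[-1]
        strings.append("".join(productions.get(ch, ch) for ch in current))
    return strings

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE production (head TEXT PRIMARY KEY, body TEXT)")
conn.execute("CREATE TABLE derivation (step INTEGER PRIMARY KEY, string TEXT)")
conn.executemany("INSERT INTO production VALUES (?, ?)", productions.items())
conn.executemany("INSERT INTO derivation VALUES (?, ?)",
                 enumerate(derive("A", 5)))

# Querying the stored simulation, e.g. the length of each generation
# (here the lengths follow the Fibonacci sequence: 1, 2, 3, 5, 8, 13):
for step, s in conn.execute("SELECT step, string FROM derivation"):
    print(step, len(s), s)
```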
Abstract:
A variety of current and future wired and wireless networking technologies can be transformed into a seamless communication environment through the application of context-based vertical handovers. Such seamless environments are needed for future pervasive/ubiquitous systems. Pervasive systems are context aware and must adapt to context changes, including network disconnections and changes in network Quality of Service (QoS). Vertical handover is one of many possible adaptation methods: it allows users to roam freely between heterogeneous networks while maintaining the continuity of their applications. This paper proposes a vertical handover mechanism suitable for multimedia applications in pervasive systems, focusing on the handover decision-making process, which uses context information regarding user devices, user location, the network environment and the requested QoS.
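A hypothetical sketch of what a context-based handover decision can look like; the attributes, weights, and candidate networks below are illustrative stand-ins, not the decision mechanism the paper proposes.

```python
from dataclasses import dataclass

# All attributes and weights below are illustrative assumptions.
@dataclass
class Network:
    name: str
    bandwidth_mbps: float   # offered QoS
    latency_ms: float
    cost_per_mb: float
    signal_strength: float  # 0..1, varies with user location

def handover_score(net: Network, required_bandwidth: float) -> float:
    if net.bandwidth_mbps < required_bandwidth:
        return float("-inf")  # cannot satisfy the requested QoS at all
    # Weighted utility over normalized context attributes (weights arbitrary).
    return (0.4 * net.signal_strength
            + 0.3 * min(net.bandwidth_mbps / 100.0, 1.0)
            - 0.2 * min(net.latency_ms / 500.0, 1.0)
            - 0.1 * net.cost_per_mb)

candidates = [
    Network("WLAN", bandwidth_mbps=54.0, latency_ms=20.0,
            cost_per_mb=0.0, signal_strength=0.6),
    Network("UMTS", bandwidth_mbps=2.0, latency_ms=120.0,
            cost_per_mb=0.05, signal_strength=0.9),
]
best = max(candidates, key=lambda n: handover_score(n, required_bandwidth=1.0))
print("hand over to:", best.name)
```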
Abstract:
A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). A large engineering system is decomposed into a number of smaller subsystems that are optimized in parallel on worker nodes, with the subsystem optimization results coordinated on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post-processor. An object-oriented design of these modules is presented. The environment supports every step from the generation of optimization models to their solution and visualization on networks of computers, and user-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified on an example of large space-truss optimization.
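The master-worker pattern itself is easy to sketch. Below, Python's multiprocessing stands in for PVM, and the subsystem objective is a toy placeholder rather than the paper's truss model; everything here is an illustrative assumption.

```python
from multiprocessing import Pool

def optimize_subsystem(subsystem: dict) -> dict:
    """Worker: locally 'optimize' one subsystem (trivial placeholder)."""
    x = subsystem["coupling"] / 2.0          # stand-in for a real local solve
    return {"id": subsystem["id"], "x": x,
            "cost": (x - subsystem["coupling"]) ** 2}

def master(subsystems: list, rounds: int = 3) -> list:
    """Master: farm subsystems out in parallel, then coordinate the results."""
    with Pool() as pool:
        for _ in range(rounds):
            results = pool.map(optimize_subsystem, subsystems)  # parallel solves
            # Coordination step: feed coupled results back into each subsystem.
            mean_x = sum(r["x"] for r in results) / len(results)
            for s in subsystems:
                s["coupling"] = mean_x
    return results

if __name__ == "__main__":
    subs = [{"id": i, "coupling": float(i)} for i in range(4)]
    print(master(subs))
```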
Abstract:
It was hypothesized that employees' perceptions of an organizational culture strong in human relations values and open systems values would be associated with heightened levels of readiness for change which, in turn, would be predictive of change implementation success. Similarly, it was predicted that reshaping capabilities would lead to change implementation success, via its effects on employees' perceptions of readiness for change. Using a temporal research design, these propositions were tested for 67 employees working in a state government department who were about to undergo the implementation of a new end-user computing system in their workplace. Change implementation success was operationalized as user satisfaction and system usage. There was evidence to suggest that employees who perceived strong human relations values in their division at Time 1 reported higher levels of readiness for change at pre-implementation which, in turn, predicted system usage at Time 2. In addition, readiness for change mediated the relationship between reshaping capabilities and system usage. Analyses also revealed that pre-implementation levels of readiness for change exerted a positive main effect on employees' satisfaction with the system's accuracy, user friendliness, and formatting functions at post-implementation. These findings are discussed in terms of their theoretical contribution to the readiness for change literature, and in relation to the practical importance of developing positive change attitudes among employees if change initiatives are to be successful.
Abstract:
In this paper we explore the possibility of fundamental tests for coherent-state optical quantum computing gates [T. C. Ralph et al., Phys. Rev. A 68, 042319 (2003)] using sophisticated but not unrealistic quantum states. The major resource required in these gates is a state diagonal to the basis states. We use the recent observation that a squeezed single-photon state [S(r)∣1⟩] approximates well an odd superposition of coherent states (∣α⟩−∣−α⟩) to address the diagonal resource problem. The approximation only holds for relatively small α, and hence these gates cannot be used in a scalable scheme. We explore the effects on fidelities and probabilities in teleportation and a rotated Hadamard gate.
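For orientation, the states involved can be written as follows; the notation is ours, and the fidelity analysis is the paper's subject, not reproduced here.

```latex
% Normalized odd cat state and the squeezed single photon approximating it:
|\mathrm{cat}_-\rangle
  = \frac{|\alpha\rangle - |{-\alpha}\rangle}{\sqrt{2\bigl(1 - e^{-2|\alpha|^{2}}\bigr)}},
\qquad
S(r)|1\rangle, \quad
S(r) = \exp\!\Bigl[\tfrac{r}{2}\bigl(\hat a^{2} - \hat a^{\dagger 2}\bigr)\Bigr].

% The quality of the approximation is the fidelity (with r optimized for
% each \alpha), which is high only for small \alpha; this is why the scheme
% is not scalable:
F(\alpha) = \max_{r}\,\bigl|\langle \mathrm{cat}_-|\,S(r)|1\rangle\bigr|^{2}
\;\to\; 1 \quad \text{as } |\alpha| \text{ becomes small.}
```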
Abstract:
Typically linear optical quantum computing (LOQC) models assume that all input photons are completely indistinguishable. In practice there will inevitably be nonidealities associated with the photons and the experimental setup which will introduce a degree of distinguishability between photons. We consider a nondeterministic optical controlled-NOT gate, a fundamental LOQC gate, and examine the effect of temporal and spectral distinguishability on its operation. We also consider the effect of utilizing nonideal photon counters, which have finite bandwidth and time response.
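The usual way to quantify partial distinguishability is through a mode-overlap integral; the notation below is our addition, for orientation only.

```latex
% Each photon occupies a spectral/temporal wavepacket \psi_i(\omega);
% partial distinguishability is captured by the mode overlap
\mathcal{O} = \left|\int d\omega\, \psi_1^{*}(\omega)\,\psi_2(\omega)\right|^{2},
\qquad 0 \le \mathcal{O} \le 1.

% \mathcal{O} = 1: fully indistinguishable photons and ideal two-photon
% interference; \mathcal{O} < 1: the heralded output of a nondeterministic
% gate acquires errors that grow as the overlap drops, and finite detector
% bandwidth and time response degrade it further by mixing over modes.
```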
Abstract:
Modelling and optimization of the power draw of large SAG/AG mills is important due to the large power draw which modern mills require (5-10 MW). The cost of grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, mill power draw has been modelled using empirical models. Although these models are reliable, they cannot handle mills and operating conditions that fall outside the model database boundaries, and, being static, they cannot capture the impact of changing conditions within the mill on the power draw. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is set by the size of the smallest particle present in the model and by the material properties (stiffness): small particles force a short time step, whereas large particles permit a larger one. Hence, from the point of view of the time required for modelling (which usually corresponds to the time required for 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not made unnecessarily small. The objective of this work is to compare the net power draw of a mill whose charge is characterised by different size distributions, while preserving constant charge mass and mill speed.
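The time-step claim can be made concrete with the standard stability estimate for explicit DEM integration; this is our addition, and the paper's exact criterion may differ.

```latex
% For a linear spring contact model with stiffness k, the critical time step
% scales with the smallest particle mass m_min:
\Delta t_{\mathrm{crit}} \propto \sqrt{\frac{m_{\min}}{k}},
\qquad
m_{\min} = \tfrac{4}{3}\pi \rho\, r_{\min}^{3}.

% Since m_min ~ r_min^3, halving the smallest radius cuts the stable time
% step by a factor of about 2^{3/2} (~2.8), which is why the finest
% particles should not be made unnecessarily small.
```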
Abstract:
We propose a scheme for quantum information processing based on donor electron spins in semiconductors, with an architecture complementary to the original Kane proposal. We show that a naive implementation of electron spin qubits provides only modest improvement over the Kane scheme; however, through the introduction of global gate control we are able to take full advantage of the fast electron evolution timescales. We estimate that the latent clock speed is 100-1000 times that of the nuclear spin quantum computer, with the ratio T_2/T_ops approaching the 10^6 level.
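To unpack the figure of merit: T_2/T_ops counts roughly how many coherent operations fit inside the coherence time. The orders of magnitude below are our own illustrative numbers, not figures quoted by the paper.

```latex
% Millisecond-scale electron coherence against nanosecond-scale gates
% (illustrative values only) would give the quoted ratio:
\frac{T_2}{T_{\mathrm{ops}}} \sim \frac{10^{-3}\ \mathrm{s}}{10^{-9}\ \mathrm{s}} = 10^{6}.
```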
Abstract:
Wigner functions play a central role in the phase space formulation of quantum mechanics. Although closely related to classical Liouville densities, Wigner functions are not positive definite and may take negative values on subregions of phase space. We investigate the accumulation of these negative values by studying bounds on the integral of an arbitrary Wigner function over noncompact subregions of the phase plane with hyperbolic boundaries. We show using symmetry techniques that this problem reduces to computing the bounds on the spectrum associated with an exactly solvable eigenvalue problem and that the bounds differ from those on classical Liouville distributions. In particular, we show that the total "quasiprobability" on such a region can be greater than 1 or less than zero.
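For orientation, the standard definition (with ħ = 1; our formatting). The paper's contribution is the spectral analysis over hyperbolic subregions, not reproduced here.

```latex
% Wigner function of a pure state \psi, normalized over the full phase plane:
W(q,p) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dy\;
         e^{-ipy}\, \psi\!\left(q + \tfrac{y}{2}\right) \psi^{*}\!\left(q - \tfrac{y}{2}\right),
\qquad
\int W(q,p)\, dq\, dp = 1.

% Yet over a subregion R with hyperbolic boundary, the "quasiprobability"
\int_{R} W(q,p)\, dq\, dp
% can exceed 1 or fall below 0, unlike any classical Liouville density.
```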
Abstract:
The Moreton Bay Waterways and Catchments Partnership, now branded the Healthy Waterways Partnership, has built on the experience of the past 15 years here in South East Queensland (SEQ). It focuses on water quality and the ecosystem health of our freshwater, estuarine and marine systems through the implementation of actions by individual partners and the collective oversight of a regional work program that assists partners to prioritise their investments and address emerging issues. This regional program includes monitoring, reporting, marketing and communication, development of decision support tools, research that is directed to problem solving, and maintaining extensive consultative and engagement arrangements. The Partnership has produced information-based outcomes which have led to significant cost savings in the protection of water quality and ecosystem resources by its stakeholders. This has been achieved by:
– providing a clear focus for management actions that has ownership of governments, industry and community;
– targeted scientific research to address issues requiring appropriate management actions;
– management actions based on a sound understanding of the waterways and rigorous public consultation; and,
– development and implementation of a strategy that incorporates commitments from all levels of stakeholders.
While focusing on our waterways, the Partnership's approach includes addressing catchment management issues, particularly relating to the management of diffuse pollution sources in both urban and rural landscapes as well as point source loads. We are now working with other stakeholders to develop a framework for integrated water management that will link water quality and water quantity goals and priorities.
Abstract:
The problem of distributed compression for correlated quantum sources is considered. The classical version of this problem was solved by Slepian and Wolf, who showed that distributed compression could take full advantage of redundancy in the local sources created by the presence of correlations. Here it is shown that, in general, this is not the case for quantum sources, by proving a lower bound on the rate sum for irreducible sources of product states which is stronger than the one given by a naive application of Slepian-Wolf. Nonetheless, strategies taking advantage of correlation do exist for some special classes of quantum sources. For example, Devetak and Winter demonstrated the existence of such a strategy when one of the sources is classical. Optimal nontrivial strategies for a different extreme, sources of Bell states, are presented here. In addition, it is explained how distributed compression is connected to other problems in quantum information theory, including information-disturbance questions, entanglement distillation and quantum error correction.
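For reference, the classical Slepian-Wolf rate region the abstract alludes to (a standard result; the quantum lower bound is the paper's own contribution and is not reproduced here).

```latex
% Two correlated classical sources (X_1, X_2), compressed separately at
% rates (R_1, R_2), can be jointly recovered iff
R_1 \ge H(X_1 \mid X_2), \qquad
R_2 \ge H(X_2 \mid X_1), \qquad
R_1 + R_2 \ge H(X_1, X_2).

% A naive quantum analogue would replace the Shannon entropy H by the von
% Neumann entropy of the joint source; the abstract's lower bound shows that
% for some irreducible product-state sources this rate sum is unattainable.
```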
Abstract:
We prove upper and lower bounds relating the quantum gate complexity of a unitary operation, U, to the optimal control cost associated with the synthesis of U. These bounds apply to any optimal control problem, and can be used to show that the quantum gate complexity is essentially equivalent to the optimal control cost for a wide range of problems, including time-optimal control and finding minimal distances on certain Riemannian, sub-Riemannian, and Finslerian manifolds. These results generalize the results of [Nielsen, Dowling, Gu, and Doherty, Science 311, 1133 (2006)], which showed that the gate complexity can be related to distances on a Riemannian manifold.
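Schematically, and with constants and exponents suppressed (this is our rendering of the flavor of such results; see the paper for the precise statements), the bounds take a two-sided form.

```latex
% d is the distance induced by the control cost; G(U) is the minimum number
% of elementary gates synthesizing U on n qubits to accuracy \epsilon:
c_{1}\, d(I, U) \;\le\; G(U) \;\le\; \mathrm{poly}\bigl(n,\ d(I, U),\ 1/\epsilon\bigr).
```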