98 results for Computation time delay
Abstract:
Since their discovery 150 years ago, Neanderthals have been considered incapable of behavioural change and innovation. Traditional synchronic approaches to the study of Neanderthal behaviour have perpetuated this view and shaped our understanding of their lifeways and eventual extinction. In this thesis I implement an innovative diachronic approach to the analysis of Neanderthal faunal extraction, technology and symbolic behaviour as contained in the archaeological record of the critical period between 80,000 and 30,000 years BP. The thesis demonstrates patterns of change in Neanderthal behaviour which are at odds with traditional perspectives and which are consistent with an interpretation of increasing behavioural complexity over time, an idea that has been suggested but never thoroughly explored in Neanderthal archaeology. Demonstrating an increase in behavioural complexity in Neanderthals provides much needed new data with which to fuel the debate over the behavioural capacities of Neanderthals and the first appearance of Modern Human Behaviour in Europe. It supports the notion that Neanderthal populations were active agents of behavioural innovation prior to the arrival of Anatomically Modern Humans in Europe and, ultimately, that they produced an early Upper Palaeolithic cultural assemblage (the Châtelperronian) independent of modern humans. Overall, this thesis provides an initial step towards the development of a quantitative approach to measuring behavioural complexity which provides fresh insights into the cognitive and behavioural capabilities of Neanderthals.
Abstract:
In high-velocity open channel flows, the measurement of air-water flow properties is complicated by the strong interactions between the flow turbulence and the entrained air. In the present study, an advanced signal processing of traditional single- and dual-tip conductivity probe signals is developed to provide further details on the air-water turbulence levels, time and length scales. The technique is applied to turbulent open channel flows on a stepped chute conducted in a large-size facility with flow Reynolds numbers ranging from 3.8×10⁵ to 7.1×10⁵. The air-water flow properties presented some basic characteristics that were qualitatively and quantitatively similar to previous skimming flow studies. Some self-similar relationships were observed systematically at both macroscopic and microscopic levels. These included the distributions of void fraction, bubble count rate, interfacial velocity and turbulence level at the macroscopic scale, and the auto- and cross-correlation functions at the microscopic level. New correlation analyses yielded a characterisation of the large eddies advecting the bubbles. Basic results included the integral turbulent length and time scales. The turbulent length scales characterised some measure of the size of the large vortical structures advecting air bubbles in skimming flows, and the data were closely related to the characteristic air-water depth Y90. In the spray region, the present results highlighted the existence of an upper spray region for C > 0.95 to 0.97 in which the distributions of droplet chord sizes and integral advection scales presented some marked differences from the rest of the flow.
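As a rough illustration of the correlation analysis described above, an integral turbulent time scale can be estimated by integrating the normalised autocorrelation function of a probe signal up to its first zero crossing. The sketch below uses a synthetic exponentially correlated signal in place of real conductivity-probe data; `integral_time_scale` and all parameter values are illustrative assumptions, not the study's actual processing code:

```python
import numpy as np

def integral_time_scale(signal, dt, max_lag=500):
    """Integral time scale from the normalised autocorrelation
    function, integrated up to its first zero crossing."""
    x = signal - signal.mean()
    n = len(x)
    # Empirical autocorrelation up to max_lag samples
    acf = np.array([np.dot(x[: n - k], x[k:]) for k in range(max_lag)])
    acf = acf / acf[0]                       # normalise so R(0) = 1
    crossings = np.nonzero(acf <= 0)[0]
    cut = crossings[0] if crossings.size else max_lag
    return acf[:cut].sum() * dt              # T = integral of R(tau) d tau

# Synthetic exponentially correlated "probe" signal with true scale tau = 0.01 s
rng = np.random.default_rng(1)
dt, tau, n = 2e-4, 0.01, 20_000
a = np.exp(-dt / tau)
x = np.empty(n)
x[0] = rng.standard_normal()
for i in range(1, n):
    x[i] = a * x[i - 1] + np.sqrt(1.0 - a * a) * rng.standard_normal()
print(integral_time_scale(x, dt))  # roughly tau = 0.01
```

Multiplying such a time scale by the local interfacial velocity gives a corresponding advection length scale, in the spirit of the integral scales reported in the abstract.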
Abstract:
Effective surface passivation of lead sulfide (PbS) nanocrystals (NCs) in an aqueous colloidal solution has been achieved following treatment with CdS precursors. The resultant photoluminescent emission displays two distinct components, one originating from the absorption band edge and the other from above the absorption band edge. We show that both of these components are strongly polarized but display distinctly different behaviours. The polarization arising from the band edge shows little dependence on the excitation energy while the polarization of the above-band-edge component is strongly dependent on the excitation energy. In addition, time-resolved polarization spectroscopy reveals that the above-band-edge polarization is restricted to the first couple of nanoseconds, while the band edge polarization is nearly constant over hundreds of nanoseconds. We recognize an incompatibility between the two different polarization behaviours, which enables us to identify two distinct types of surface-passivated PbS NCs.
Abstract:
The calculation of quantum dynamics is currently a central issue in theoretical physics, with diverse applications ranging from ultracold atomic Bose-Einstein condensates to condensed matter, biology, and even astrophysics. Here we demonstrate a conceptually simple method of determining the regime of validity of stochastic simulations of unitary quantum dynamics by employing a time-reversal test. We apply this test to a simulation of the evolution of a quantum anharmonic oscillator with up to 6.022×10²³ (Avogadro's number) particles. This system is realizable as a Bose-Einstein condensate in an optical lattice, for which the time-reversal procedure could be implemented experimentally.
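The test itself is simple to state: run the simulation forward, negate the time step, run it back, and measure how far the result lands from the initial state. A minimal classical stand-in, a deterministic Euler integration of an anharmonic-oscillator-style phase rotation rather than the paper's stochastic phase-space method, with `euler_step` and `reversal_error` as illustrative names, might look like:

```python
import numpy as np

def euler_step(z, dt):
    # Forward-Euler step for the classical anharmonic oscillator
    # dz/dt = -i |z|^2 z (the phase rotates at a rate set by |z|^2).
    return z - 1j * np.abs(z) ** 2 * z * dt

def reversal_error(z0, dt, n_steps):
    """Integrate forward for n_steps, then backward with -dt, and
    return the distance from the initial state. A small residual
    suggests the integration is still within its regime of validity."""
    z = z0
    for _ in range(n_steps):
        z = euler_step(z, dt)
    for _ in range(n_steps):
        z = euler_step(z, -dt)
    return abs(z - z0)

# Halving the step size should shrink the reversal error,
# signalling that the integration can still be trusted.
print(reversal_error(1.0, 1e-2, 1000), reversal_error(1.0, 5e-3, 2000))
```

In the stochastic case the same idea applies, with the extra subtlety that sampling error, not just discretisation error, shows up in the failure to return to the initial state.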
Abstract:
P-representation techniques, which have been very successful in quantum optics and in other fields, are also useful for general bosonic quantum-dynamical many-body calculations such as Bose-Einstein condensation. We introduce a representation called the gauge P representation, which greatly widens the range of tractable problems. Our treatment results in an infinite set of possible time evolution equations, depending on arbitrary gauge functions that can be optimized for a given quantum system. In some cases, previous methods can give erroneous results, due to the usual assumption of vanishing boundary conditions being invalid for those particular systems. Solutions are given to this boundary-term problem for all the cases where it is known to occur: two-photon absorption and the single-mode laser. We also provide some brief guidelines on how to apply the stochastic gauge method to other systems in general, quantify the freedom of choice in the resulting equations, and make a comparison to related recent developments.
Abstract:
The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, and adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems which show that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is two powerful theorems which relate the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
Abstract:
Power system real-time security assessment is one of the fundamental modules of an electricity market. Typically, when a contingency occurs, the security assessment and enhancement module must be ready for action within about 20 minutes to meet the real-time requirement. The recent California blackout again highlighted the importance of system security. This paper proposes an approach to power system security assessment and enhancement based on the information provided by a pre-defined system parameter space. The proposed scheme opens up an efficient way to perform real-time security assessment and enhancement in a competitive electricity market for the single-contingency case.
Abstract:
Data mining is the process of identifying valid, implicit, previously unknown, potentially useful and understandable information from large databases. It is an important step in the process of knowledge discovery in databases (Olaru & Wehenkel, 1999). In a data mining process, input data can be structured, semi-structured, or unstructured, and can take text, categorical or numerical values. One of the important characteristics of data mining is its ability to deal with data that are large in volume, distributed, time-variant, noisy, and high-dimensional. A large number of data mining algorithms have been developed for different applications. For example, association rules mining can be useful for market basket problems, clustering algorithms can be used to discover trends in unsupervised learning problems, classification algorithms can be applied in decision-making problems, and sequential and time series mining algorithms can be used in predicting events, fault detection, and other supervised learning problems (Vapnik, 1999). Classification is among the most important tasks in data mining, particularly for data mining applications in engineering fields. Together with regression, classification is mainly used for predictive modelling. A number of classification algorithms are now in practice. According to Sebastiani (2002), the main classification algorithms can be categorized as: decision tree and rule-based approaches such as C4.5 (Quinlan, 1996); probabilistic methods such as the Bayesian classifier (Lewis, 1998); on-line methods such as Winnow (Littlestone, 1988) and CVFDT (Hulten, 2001); neural network methods (Rumelhart, Hinton & Williams, 1986); example-based methods such as k-nearest neighbours (Duda & Hart, 1973); and SVM (Cortes & Vapnik, 1995). Other important techniques for classification tasks include Associative Classification (Liu et al., 1998) and Ensemble Classification (Tumer, 1996).
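Of the example-based methods mentioned above, k-nearest neighbours is the simplest to state: label a query point by majority vote among the k closest training examples. A minimal sketch (toy data and the name `knn_predict` are illustrative, not from any of the cited works):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    """Minimal k-nearest-neighbours classifier: majority vote among
    the k closest training examples under Euclidean distance."""
    dists = np.linalg.norm(X_train - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Toy two-class training set
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.2, 0.1])))  # → 0
print(knn_predict(X, y, np.array([0.8, 0.9])))  # → 1
```

Its lack of a training phase makes it a common baseline before trying the heavier methods in the list, such as SVMs or ensembles.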
Abstract:
The transient statistics of a gain-switched coherently pumped class-C laser displays a linear correlation between the first passage time and subsequent peak intensity. Measurements are reported showing a positive or negative sign of this linear correlation, controlled through the switching time and the laser detuning. Further measurements of the small-signal laser gain combined with calculations involving a three-level laser model indicate that this sign fundamentally depends upon the way the laser inversion varies during the gain switching, despite the added dynamics of the laser polarization in the class-C laser. [S1050-2947(97)07112-6].
Abstract:
There is concern that Pacific Island economies dependent on remittances of migrants will endure foreign exchange shortages and falling living standards as remittance levels fall because of lower migration rates and the belief that migrants' willingness to remit declines over time. The empirical validity of the remittance-decay hypothesis has never been tested. From survey data on Tongan and Western Samoan migrants in Sydney, this paper estimates remittance functions using multivariate regression analysis. It is found that the remittance-decay hypothesis has no empirical validity, and migrants are motivated by factors other than altruistic family support, including asset accumulation and investment back home.
Abstract:
A dissociation between two putative measures of resource allocation, skin conductance responding and secondary task reaction time (RT), has been observed during auditory discrimination tasks. Four experiments investigated the time course of this dissociation effect with a visual discrimination task. Participants were presented with circles and ellipses and instructed to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of the other shape (task-irrelevant). Concurrent with this task, participants made a speeded motor response to an auditory probe. Experiment 1 showed that skin conductance responses were larger during task-relevant stimuli than during task-irrelevant stimuli, whereas RT to probes presented at 150 ms following shape onset was slower during task-irrelevant stimuli. Experiments 2 to 4 found slower RT during task-irrelevant stimuli for probes presented from 300 ms before shape onset until 150 ms following shape onset. For probes presented 3,000 and 4,000 ms following shape onset, probe RT was slower during task-relevant stimuli. The similarities between the observed time course and the so-called psychological refractory period (PRP) effect are discussed.
Abstract:
Coset enumeration is one of the most important procedures for investigating finitely presented groups. We present a practical parallel procedure for coset enumeration on shared-memory processors. The shared-memory architecture is particularly interesting because such parallel computation is both faster and cheaper: the lower cost comes when the program requires large amounts of memory, and additional CPUs allow us to reduce the time that the expensive memory is in use. Rather than report on a suite of test cases, we take a single, typical case and analyze the performance factors in depth. The parallelization is achieved through a master-slave architecture. This results in an interesting phenomenon, whereby the CPU time is divided into a sequential and a parallel portion, and the parallel part demonstrates a speedup that is linear in the number of processors. We describe an early version in which only 40% of the program was parallelized, and how this was modified to achieve 90% parallelization using 15 slave processors and a master. In the latter case, a sequential time of 158 seconds was reduced to 29 seconds using 15 slaves.
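The reported timings are consistent with Amdahl's law: if a fraction f of a t-second sequential job is parallelised over p processors, the runtime falls to t((1 - f) + f/p). A quick check of the abstract's figures (`amdahl_time` is an illustrative helper, not code from the paper):

```python
def amdahl_time(t_seq, parallel_frac, n_procs):
    """Amdahl's-law prediction of runtime when parallel_frac of a
    t_seq-second sequential job is spread over n_procs processors."""
    return t_seq * ((1.0 - parallel_frac) + parallel_frac / n_procs)

# The enumeration reported above: 158 s sequential, 90% parallelised,
# 15 slave processors.
print(round(amdahl_time(158, 0.90, 15), 1))  # ≈ 25.3 s, near the 29 s observed
print(round(amdahl_time(158, 0.40, 15), 1))  # the earlier 40%-parallel version
```

The gap between the 25.3 s prediction and the 29 s measurement is plausibly master-slave coordination overhead, which Amdahl's simple model ignores.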