43 results for MULTIPLE TIME FORMALISM


Relevance: 30.00%

Publisher:

Abstract:

Time correlation functions yield profound information about the dynamics of a physical system and hence are frequently calculated in computer simulations. For systems whose dynamics span a wide range of time, currently used methods require significant computer time and memory. In this paper, we discuss the multiple-tau correlator method for the efficient calculation of accurate time correlation functions on the fly during computer simulations. The multiple-tau correlator is efficacious in terms of computational requirements and can be tuned to the desired level of accuracy. Further, we derive estimates for the error arising from the use of the multiple-tau correlator and extend it for use in the calculation of mean-square particle displacements and dynamic structure factors. The method described here, in hardware implementation, is routinely used in light scattering experiments but has not yet found widespread use in computer simulations.
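
To make the scheme concrete, the sketch below is a minimal Python illustration of a multiple-tau autocorrelator, not the implementation described in the paper: each level stores a fixed number of lag channels and passes block-averaged (coarse-grained) samples to the next level, so the lag spacing grows geometrically while memory and cost stay modest. The level count, channel count p and averaging factor m are illustrative, and duplicate lags across levels are not pruned as a production correlator would.

```python
import numpy as np

class MultiTauCorrelator:
    """Minimal multiple-tau autocorrelator sketch (illustrative, not the paper's code)."""

    def __init__(self, levels=8, p=16, m=2):
        self.levels, self.p, self.m = levels, p, m
        self.buffers = [np.zeros(p) for _ in range(levels)]  # recent samples, newest first
        self.counts = [0] * levels                           # samples seen at each level
        self.accum = [np.zeros(p) for _ in range(levels)]    # sums of products per lag channel
        self.norm = [np.zeros(p) for _ in range(levels)]     # number of products per lag channel
        self.partial = [0.0] * levels                        # running sum for coarse-graining

    def add(self, x, level=0):
        if level >= self.levels:
            return
        buf = self.buffers[level]
        buf[1:] = buf[:-1]                                   # shift history; index 0 = newest
        buf[0] = x
        self.counts[level] += 1
        n = self.counts[level]
        for k in range(min(n, self.p)):                      # correlate newest sample with history
            self.accum[level][k] += x * buf[k]
            self.norm[level][k] += 1
        self.partial[level] += x                             # coarse-grain m samples into one
        if n % self.m == 0:
            self.add(self.partial[level] / self.m, level + 1)
            self.partial[level] = 0.0

    def result(self):
        lags, acf = [], []
        for lvl in range(self.levels):
            dt = self.m ** lvl                               # lag spacing grows by m per level
            for k in range(self.p):
                if self.norm[lvl][k] > 0:
                    lags.append(k * dt)
                    acf.append(self.accum[lvl][k] / self.norm[lvl][k])
        return np.array(lags), np.array(acf)

# usage: accumulate the correlation on the fly while the "simulation" generates samples
corr = MultiTauCorrelator()
for x in np.random.default_rng(0).standard_normal(100_000):
    corr.add(x)
lags, acf = corr.result()
```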

Relevance: 30.00%

Publisher:

Abstract:

Bayesian Model Averaging (BMA) is used to test for multiple break points in univariate series using conjugate normal-gamma priors. This approach can test for the number of structural breaks and produce posterior probabilities for a break at each point in time. Results are averaged over specifications including stationary, stationary-around-trend, and unit-root models, each containing different types and numbers of breaks and different lag lengths. The procedures are used to test for structural breaks in 14 annual macroeconomic series and 11 natural resource price series. The results indicate that there are structural breaks in all of the natural resource series and in most of the macroeconomic series. Many of the series had multiple breaks. Our findings regarding the existence of unit roots, having allowed for structural breaks in the data, are largely consistent with previous work.
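
As a heavily simplified illustration of the idea, the sketch below model-averages a no-break model against single mean/variance-break models for a univariate series, using the standard analytic marginal likelihood under a conjugate normal-gamma prior. It is only a sketch: the paper additionally averages over trend and unit-root specifications, multiple breaks and different lag lengths, and the hyperparameters here are arbitrary.

```python
import numpy as np
from scipy.special import gammaln

def log_marglik_segment(y, mu0=0.0, kappa0=0.01, alpha0=1.0, beta0=1.0):
    """Log marginal likelihood of a Gaussian segment under a conjugate
    normal-gamma prior on (mean, precision); standard conjugate result."""
    y = np.asarray(y, dtype=float)
    n, ybar = len(y), y.mean()
    ss = ((y - ybar) ** 2).sum()
    kappan = kappa0 + n
    alphan = alpha0 + n / 2.0
    betan = beta0 + 0.5 * ss + kappa0 * n * (ybar - mu0) ** 2 / (2.0 * kappan)
    return (gammaln(alphan) - gammaln(alpha0)
            + alpha0 * np.log(beta0) - alphan * np.log(betan)
            + 0.5 * (np.log(kappa0) - np.log(kappan))
            - 0.5 * n * np.log(2.0 * np.pi))

def break_probabilities(y, min_seg=5):
    """Posterior probabilities over {no break, break at date t} with equal
    prior model weights; each segment gets its own mean and variance."""
    T = len(y)
    labels = ["none"] + list(range(min_seg, T - min_seg))
    logml = [log_marglik_segment(y)]
    for t in range(min_seg, T - min_seg):
        logml.append(log_marglik_segment(y[:t]) + log_marglik_segment(y[t:]))
    logml = np.array(logml)
    post = np.exp(logml - logml.max())
    return labels, post / post.sum()

# usage on a toy series with a shift in the mean after t = 60
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(2.0, 1.0, 40)])
labels, post = break_probabilities(y)
print("P(no break) =", round(post[0], 4), "| most probable break date:", labels[int(np.argmax(post))])
```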

Relevance: 30.00%

Publisher:

Abstract:

We present a study of the geographic location of lightning affecting the ionospheric sporadic-E (Es) layer over the ionospheric monitoring station at Chilton, UK. Data from the UK Met Office's Arrival Time Difference (ATD) lightning detection system were used to locate lightning strokes in the vicinity of the station. A superposed epoch study of these data has previously revealed an enhancement in the Es layer caused by lightning within 200 km of Chilton. In the current paper, we use the same data to investigate the location of the lightning strokes that have the largest effect on the Es layer above Chilton. We find that there are several locations where the effect of lightning on the ionosphere is statistically most significant, each producing a different ionospheric response. We interpret this as evidence that more than one mechanism combines to produce the previously observed enhancement in the ionosphere.
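
The superposed epoch technique referred to can be sketched in a few lines: segments of the ionospheric time series centred on each lightning stroke are stacked and averaged, so that a response locked to the strokes survives while unrelated variability averages out. The sketch below uses synthetic data and illustrative variable names (an Es-layer-like parameter and random event times); it is not the analysis code of the study.

```python
import numpy as np

def superposed_epoch(times, values, event_times, window, step):
    """Average a time series around a set of epoch-zero times.

    times, values : sampled observations (e.g. an ionosonde Es-layer parameter)
    event_times   : epoch zeros (e.g. lightning strokes near the station)
    window        : half-width of the epoch window (same units as times)
    step          : bin width of the superposed average
    """
    lags = np.arange(-window, window + step, step)
    stack = np.full((len(event_times), len(lags)), np.nan)
    for i, t0 in enumerate(event_times):
        rel = times - t0
        for j, lag in enumerate(lags):
            sel = (rel >= lag - step / 2) & (rel < lag + step / 2)
            if sel.any():
                stack[i, j] = values[sel].mean()
    mean_response = np.nanmean(stack, axis=0)
    n_per_bin = (~np.isnan(stack)).sum(axis=0)
    return lags, mean_response, n_per_bin

# usage with synthetic hourly data over one year and 50 random "stroke" epochs
rng = np.random.default_rng(2)
t = np.arange(0.0, 24 * 365, 1.0)                    # hours
es_param = 3.0 + 0.5 * rng.standard_normal(t.size)   # toy Es-layer parameter
events = np.sort(rng.choice(t, size=50, replace=False))
lags, mean_response, n_per_bin = superposed_epoch(t, es_param, events, window=48, step=1)
```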

Relevance: 30.00%

Publisher:

Abstract:

In this paper, Prony's method is applied to time-domain waveform data modelling in the presence of noise. The following three problems encountered in this work are studied: (1) determination of the order of the waveform; (2) determination of the number of multiple roots; (3) determination of the residues. Methods for solving these problems are given and simulated on a computer. Finally, an output pulse of a model PG-10N signal generator, and the distorted waveform obtained by transmitting this pulse through a length of coaxial cable, are modelled with satisfactory results. The effectiveness of Prony's method for waveform data modelling in the presence of noise is thus confirmed.
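
For reference, the core of Prony's method can be sketched as a three-step procedure: fit the linear-prediction (characteristic) coefficients by least squares, take the roots of the characteristic polynomial as the signal poles, and solve a Vandermonde least-squares system for the residues. The sketch below assumes distinct roots and a known order p; choosing the order and handling multiple roots, the problems treated in the paper, are not addressed here, and the test waveform is synthetic rather than the PG-10N pulse.

```python
import numpy as np

def prony(y, p):
    """Minimal Prony's method sketch: model y[n] ~ sum_k c_k * z_k**n with p modes.
    Least-squares versions of each step give some tolerance to noise."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    # 1) linear prediction: y[n] + a1*y[n-1] + ... + ap*y[n-p] = 0 for n = p..N-1
    A = np.column_stack([y[p - 1 - k: N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -y[p:N], rcond=None)
    # 2) poles are the roots of z**p + a1*z**(p-1) + ... + ap
    z = np.roots(np.concatenate(([1.0 + 0j], a)))
    # 3) residues from the Vandermonde least-squares fit y[n] = sum_k c_k * z_k**n
    n = np.arange(N)
    V = z[np.newaxis, :] ** n[:, np.newaxis]
    c, *_ = np.linalg.lstsq(V, y, rcond=None)
    return z, c

# usage: a noisy waveform with one real decay and one damped oscillation (3 modes)
N = 200
n = np.arange(N)
clean = 0.95 ** n + 0.5 * (0.9 ** n) * np.cos(0.3 * n)
noisy = clean + 0.01 * np.random.default_rng(3).standard_normal(N)
poles, residues = prony(noisy, p=3)
reconstruction = (np.vander(poles, N, increasing=True).T @ residues).real
```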

Relevance: 30.00%

Publisher:

Abstract:

Increasing attention has been focused on the use of CDMA for future cellular mobile communications. A near-far resistant detector for asynchronous code-division multiple-access (CDMA) systems operating in additive white Gaussian noise (AWGN) channels is presented. The multiuser interference caused by K users transmitting simultaneously, each with a specific signature sequence, is completely removed at the receiver. The complexity of this detector grows only linearly with the number of users, in contrast to the optimum multiuser detector, whose complexity is exponential in the number of users. A modified algorithm based on time diversity is described. It performs detection on a bit-by-bit basis and avoids the complexity of a sequence detector. The performance of this detector is shown to be superior to that of the conventional receiver.
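
The abstract does not spell out the detector, so the sketch below illustrates the general near-far resistance idea with the standard decorrelating detector for a toy synchronous CDMA channel: the matched-filter outputs are multiplied by the inverse of the signature cross-correlation matrix, which removes the multiuser interference regardless of the users' relative powers. The asynchronous, time-diversity, bit-by-bit algorithm of the paper is more involved.

```python
import numpy as np

rng = np.random.default_rng(4)

K, L = 4, 31                                             # users and chips per bit (toy values)
S = rng.choice([-1.0, 1.0], size=(L, K)) / np.sqrt(L)    # unit-energy signature sequences
A = np.diag([1.0, 10.0, 0.5, 5.0])                       # received amplitudes: strong near-far imbalance
b = rng.choice([-1.0, 1.0], size=K)                      # transmitted bits (one bit interval)
sigma = 0.1                                              # AWGN standard deviation

r = S @ A @ b + sigma * rng.standard_normal(L)           # received chip-rate signal

y = S.T @ r                                              # matched-filter (conventional receiver) outputs
R = S.T @ S                                              # signature cross-correlation matrix
bits_conventional = np.sign(y)                           # ignores multiple-access interference
bits_decorrelating = np.sign(np.linalg.solve(R, y))      # interference removed by applying R^-1

print("true           ", b)
print("conventional   ", bits_conventional)
print("decorrelating  ", bits_decorrelating)
```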

Relevance: 30.00%

Publisher:

Abstract:

Although a number of studies have reported that force feedback gravity wells can improve performance in "point-and-click" tasks, there have been few studies addressing issues surrounding the use of gravity wells for multiple on-screen targets. This paper investigates the performance of users, both with and without motion-impairments, in a "point-and-click" task when an undesired haptic distractor is present. The importance of distractor location is studied explicitly. Results showed that gravity wells can still improve times and error rates, even on occasions when the cursor is pulled into a distractor. The greatest improvement is seen for the most impaired users. In addition to traditional measures such as time and errors, performance is studied in terms of measures of cursor movement along a path. Two cursor measures, angular distribution and temporal components, are proposed and their ability to explain performance differences is explored.
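
To illustrate the kind of path-based measure proposed, the sketch below computes one plausible angular-distribution measure: a histogram of the angles between each sampled cursor movement and the straight line from the cursor to the target. The exact definitions used in the paper may differ; the path and target coordinates here are made up.

```python
import numpy as np

def angular_distribution(path, target, n_bins=18):
    """Histogram of per-sample movement directions relative to the ideal
    (cursor-to-target) direction; a wide distribution indicates indirect
    movement, e.g. a detour into a haptic distractor.

    path   : (N, 2) sampled cursor positions
    target : (2,) target centre
    """
    path = np.asarray(path, dtype=float)
    steps = np.diff(path, axis=0)                               # movement between samples
    to_target = np.asarray(target, dtype=float) - path[:-1]     # ideal direction at each sample
    ang_step = np.arctan2(steps[:, 1], steps[:, 0])
    ang_ideal = np.arctan2(to_target[:, 1], to_target[:, 0])
    deviation = np.angle(np.exp(1j * (ang_step - ang_ideal)))   # wrapped to (-pi, pi]
    moving = np.linalg.norm(steps, axis=1) > 0                  # drop stationary samples
    hist, edges = np.histogram(deviation[moving], bins=n_bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1), edges

# usage with a made-up path that detours below the straight line to the target
path = [[0, 0], [20, -5], [40, -15], [60, 0], [80, 30], [100, 50]]
target = [100.0, 50.0]
pdf, edges = angular_distribution(path, target)
```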

Relevance: 30.00%

Publisher:

Abstract:

Liquid clouds play a profound role in the global radiation budget, but it is difficult to remotely retrieve their vertical profile. Ordinary narrow field-of-view (FOV) lidars receive a strong return from such clouds, but the information is limited to the first few optical depths. Wide-angle multiple-FOV lidars can isolate radiation scattered multiple times before returning to the instrument, often penetrating much deeper into the cloud than the singly scattered signal. These returns potentially contain information on the vertical profile of the extinction coefficient, but are challenging to interpret due to the lack of a fast radiative transfer model for simulating them. This paper describes a variational algorithm that incorporates a fast forward model based on the time-dependent two-stream approximation, and its adjoint. Application of the algorithm to simulated data from a hypothetical airborne three-FOV lidar with a maximum footprint width of 600 m suggests that this approach should be able to retrieve the extinction structure down to an optical depth of around 6, and the total optical depth up to at least 35, depending on the maximum lidar FOV. The convergence behaviour of Gauss-Newton and quasi-Newton optimization schemes is compared. We then present results from an application of the algorithm to observations of stratocumulus by the 8-FOV airborne “THOR” lidar. It is demonstrated how the averaging kernel can be used to diagnose the effective vertical resolution of the retrieved profile, and therefore the depth to which information on the vertical structure can be recovered. This work enables returns from spaceborne lidar and radar subject to multiple scattering to be exploited more rigorously than previously possible.
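
The variational retrieval itself follows the standard optimal-estimation pattern, which the sketch below illustrates on a toy problem: minimise a cost combining departures from a prior profile and from the observations using Gauss-Newton steps, and use the resulting averaging kernel to gauge the information content. The forward model, prior and error covariances here are invented stand-ins; the paper's forward model is the time-dependent two-stream approximation with its adjoint, not the finite-difference Jacobian used below.

```python
import numpy as np

def gauss_newton_retrieval(y, forward, x_a, B, R, n_iter=10):
    """Gauss-Newton minimisation of
    J(x) = (x-x_a)' B^-1 (x-x_a) + (y-H(x))' R^-1 (y-H(x)),
    returning the retrieved state and its averaging kernel."""
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    x = x_a.copy()
    for _ in range(n_iter):
        Hx = forward(x)
        J = np.empty((len(y), len(x)))           # finite-difference Jacobian (toy stand-in
        for j in range(len(x)):                  # for a tangent-linear/adjoint model)
            dx = np.zeros_like(x)
            dx[j] = 1e-6
            J[:, j] = (forward(x + dx) - Hx) / 1e-6
        hessian = Binv + J.T @ Rinv @ J
        gradient = J.T @ Rinv @ (y - Hx) - Binv @ (x - x_a)
        x = x + np.linalg.solve(hessian, gradient)
    averaging_kernel = np.linalg.solve(hessian, J.T @ Rinv @ J)
    return x, averaging_kernel

# toy example: retrieve a 3-layer "extinction" profile from 4 nonlinear "signals"
def forward(x):
    tau = np.cumsum(x)                           # crude cumulative optical depth
    return np.exp(-np.array([0.5, 1.0, 1.5, 2.0])[:, None] * tau).sum(axis=1)

truth = np.array([0.5, 1.0, 2.0])
y = forward(truth) + 1e-3 * np.random.default_rng(5).standard_normal(4)
x_hat, A_kernel = gauss_newton_retrieval(y, forward, np.ones(3), B=np.eye(3), R=1e-4 * np.eye(4))
print("retrieved:", x_hat, "| degrees of freedom for signal:", np.trace(A_kernel))
```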

Relevance: 30.00%

Publisher:

Abstract:

The present study addresses three methodological questions that have been ignored in previous research on EEG indices of the human mirror neuron system (hMNS), particularly with regard to autistic individuals. The first question concerns how to elicit the EEG-indexed hMNS during movement observation: is hMNS activation best elicited using long stimulus presentations or multiple short repetitions? The second question concerns which EEG sensorimotor frequency bands reflect sensorimotor reactivity during hand movement observation. The third question concerns how widespread the EEG reactivity over the sensorimotor cortex is during movement observation. The present study explored sensorimotor alpha and low beta reactivity during observation of hand movement versus a static hand or bouncing balls, and compared two experimental protocols (long exposure vs. multiple repetitions) in the same participants. Results using the multiple repetitions protocol indicated greater low beta desynchronisation over the sensorimotor cortex during hand movement observation than during static hand and bouncing balls observation. This result was not obtained with the long exposure protocol. Therefore, the present study suggests that the multiple repetitions protocol is the more robust protocol for exploring the sensorimotor reactivity induced by hand action observation. In addition, sensorimotor low beta desynchronisation was modulated differently during hand movement, static hand and bouncing balls (non-biological motion) observation, whereas this was not the case for sensorimotor alpha, suggesting that low beta may be a more sensitive index of hMNS activation during biological motion observation. In conclusion, the present study indicates that sensorimotor reactivity of low beta during hand movement observation is more widespread over the sensorimotor cortex than previously thought.
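
The reactivity measure at issue, event-related desynchronisation, is a percentage drop in band power relative to a baseline period. The sketch below computes it for single-channel data with Welch spectra; the band edges (alpha 8-13 Hz, low beta 13-20 Hz), sampling rate and synthetic signals are illustrative assumptions, not the study's exact parameters or pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin, fmax):
    """Mean Welch power spectral density in the band [fmin, fmax] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs))
    sel = (f >= fmin) & (f <= fmax)
    return pxx[sel].mean()

def erd_percent(epoch, baseline, fs, band):
    """Event-related (de)synchronisation: negative values mean the band power
    during the epoch is suppressed relative to the baseline."""
    p_epoch = band_power(epoch, fs, *band)
    p_base = band_power(baseline, fs, *band)
    return 100.0 * (p_epoch - p_base) / p_base

# usage on synthetic single-channel data (illustrative sampling rate and bands)
fs = 256
rng = np.random.default_rng(6)
baseline = rng.standard_normal(2 * fs)                # 2 s pre-stimulus baseline
movement_epoch = 0.7 * rng.standard_normal(2 * fs)    # toy "hand movement observation" epoch
alpha, low_beta = (8, 13), (13, 20)
print("alpha ERD %:   ", erd_percent(movement_epoch, baseline, fs, alpha))
print("low-beta ERD %:", erd_percent(movement_epoch, baseline, fs, low_beta))
```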

Relevance: 30.00%

Publisher:

Abstract:

Variational data assimilation in continuous time is revisited. The central techniques applied in this paper are in part adopted from the theory of optimal nonlinear control. Alternatively, the investigated approach can be considered a continuous-time generalization of what is known in the geosciences as weakly constrained four-dimensional variational assimilation (4D-Var). The technique allows trajectories to be assimilated in the case of partial observations and in the presence of model error. Several mathematical aspects of the approach are studied. Computationally, it amounts to solving a two-point boundary value problem. For imperfect models, the trade-off between small dynamical error (i.e. the trajectory obeys the model dynamics) and small observational error (i.e. the trajectory closely follows the observations) is investigated. This trade-off turns out to be trivial if the model is perfect. However, even in this situation, allowing for minute deviations from the perfect model is shown to have positive effects, namely to regularize the problem. The presented formalism is dynamical in character. No statistical assumptions on dynamical or observational noise are imposed.
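
A discrete-time toy version of the weakly constrained formulation makes the dynamical/observational trade-off explicit: the whole trajectory is the control variable, and the cost penalises both departures from the (imperfect) model dynamics and departures from the partial observations, with their relative weights setting the trade-off. The sketch below uses a made-up two-dimensional damped rotation and a generic quasi-Newton optimiser rather than solving the two-point boundary value problem of the continuous-time formalism.

```python
import numpy as np
from scipy.optimize import minimize

def weak_constraint_4dvar(y, H, model, T, n, r_var, q_var):
    """Minimise  J(x_0..x_{T-1}) = sum_k |y_k - H x_k|^2 / r_var
                                  + sum_k |x_{k+1} - M(x_k)|^2 / q_var
    over the whole trajectory (small q_var enforces the dynamics strongly)."""
    def cost(flat):
        x = flat.reshape(T, n)
        obs = sum(np.sum((y[k] - H @ x[k]) ** 2) for k in range(T)) / r_var
        dyn = sum(np.sum((x[k + 1] - model(x[k])) ** 2) for k in range(T - 1)) / q_var
        return obs + dyn
    res = minimize(cost, np.zeros(T * n), method="L-BFGS-B")
    return res.x.reshape(T, n)

# toy system: damped rotation with model error, observed only in its first component
def model(x):
    th = 0.3
    return 0.98 * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]]) @ x

rng = np.random.default_rng(7)
T, n = 40, 2
truth = np.zeros((T, n))
truth[0] = [1.0, 0.0]
for k in range(T - 1):
    truth[k + 1] = model(truth[k]) + 0.01 * rng.standard_normal(n)   # model error
H = np.array([[1.0, 0.0]])                                           # partial observation operator
y = truth @ H.T + 0.05 * rng.standard_normal((T, 1))
x_est = weak_constraint_4dvar(y, H, model, T, n, r_var=0.05 ** 2, q_var=0.01 ** 2)
```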

Relevance: 30.00%

Publisher:

Abstract:

The radiation of the mammals provides a 165-million-year test case for evolutionary theories of how species occupy and then fill ecological niches. It is widely assumed that species often diverge rapidly early in their evolution, and that this is followed by a longer, drawn-out period of slower evolutionary fine-tuning as natural selection fits organisms into an increasingly occupied niche space [1, 2]. But recent studies have hinted that the process may not be so simple [3–5]. Here we apply statistical methods that automatically detect temporal shifts in the rate of evolution through time to a comprehensive mammalian phylogeny [6] and data set [7] of body sizes of 3,185 extant species. Unexpectedly, the majority of mammal species, including two of the most speciose orders (Rodentia and Chiroptera), have no history of substantial and sustained increases in the rates of evolution. Instead, a subset of the mammals has experienced an explosive increase (between 10- and 52-fold) in the rate of evolution along the single branch leading to the common ancestor of their monophyletic group (for example Chiroptera), followed by a quick return to lower or background levels. The remaining species are a taxonomically diverse assemblage showing a significant, sustained increase or decrease in their rates of evolution. These results necessarily decouple morphological diversification from speciation and suggest that the processes that give rise to the morphological diversity of a class of animals are far more free to vary than previously considered. Niches do not seem to fill up, and diversity seems to arise whenever, wherever and at whatever rate it is advantageous.
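
As a crude stand-in for the shift-detection idea (not the phylogenetic method actually used, which accounts for shared ancestry on the full tree), the sketch below compares a single-rate Brownian-motion model of trait change against a model in which a candidate clade has its own rate, treating per-branch changes as independent.

```python
import numpy as np
from scipy.stats import chi2

def bm_loglik(changes, lengths, sigma2):
    """Log-likelihood of independent Brownian-motion changes: x_i ~ N(0, sigma2 * t_i)."""
    var = sigma2 * lengths
    return -0.5 * np.sum(np.log(2 * np.pi * var) + changes ** 2 / var)

def rate_shift_test(changes, lengths, in_clade):
    """Likelihood-ratio comparison of one evolutionary rate versus a separate
    rate inside a candidate clade; returns the fitted rate ratio and p-value."""
    s_all = np.mean(changes ** 2 / lengths)                       # single-rate MLE
    s_in = np.mean(changes[in_clade] ** 2 / lengths[in_clade])    # clade-rate MLE
    s_out = np.mean(changes[~in_clade] ** 2 / lengths[~in_clade])
    ll0 = bm_loglik(changes, lengths, s_all)
    ll1 = (bm_loglik(changes[in_clade], lengths[in_clade], s_in)
           + bm_loglik(changes[~in_clade], lengths[~in_clade], s_out))
    lr = 2.0 * (ll1 - ll0)
    return s_in / s_out, chi2.sf(lr, df=1)

# toy data: 200 branches; a 30-branch clade evolves 20 times faster along its branches
rng = np.random.default_rng(8)
lengths = rng.uniform(0.5, 5.0, 200)
in_clade = np.zeros(200, dtype=bool)
in_clade[:30] = True
rates = np.where(in_clade, 20.0, 1.0)
changes = rng.normal(0.0, np.sqrt(rates * lengths))
rate_ratio, p_value = rate_shift_test(changes, lengths, in_clade)
```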

Relevance: 30.00%

Publisher:

Abstract:

We present the first climate prediction of the coming decade made with multiple models, initialized with prior observations. This prediction accrues from an international activity to exchange decadal predictions in near real-time, in order to assess differences and similarities, provide a consensus view to prevent over-confidence in forecasts from any single model, and establish current collective capability. We stress that the forecast is experimental, since the skill of the multi-model system is as yet unknown. Nevertheless, the forecast systems used here are based on models that have undergone rigorous evaluation and individually have been evaluated for forecast skill. Moreover, it is important to publish forecasts to enable open evaluation, and to provide a focus on climate change in the coming decade. Initialized forecasts of the year 2011 agree well with observations, with a pattern correlation of 0.62 compared to 0.31 for uninitialized projections. In particular, the forecast correctly predicted La Niña in the Pacific, and warm conditions in the north Atlantic and USA. A similar pattern is predicted for 2012 but with a weaker La Niña. Indices of Atlantic multi-decadal variability and Pacific decadal variability show no signal beyond climatology after 2015, while temperature in the Niño3 region is predicted to warm slightly by about 0.5 °C over the coming decade. However, uncertainties are large for individual years and initialization has little impact beyond the first 4 years in most regions. Relative to uninitialized forecasts, initialized forecasts are significantly warmer in the north Atlantic sub-polar gyre and cooler in the north Pacific throughout the decade. They are also significantly cooler in the global average and over most land and ocean regions out to several years ahead. However, in the absence of volcanic eruptions, global temperature is predicted to continue to rise, with each year from 2013 onwards having a 50 % chance of exceeding the current observed record. Verification of these forecasts will provide an important opportunity to test the performance of models and our understanding and knowledge of the drivers of climate change.
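
The verification statistic quoted (a pattern correlation of 0.62 against 0.31) is, in the usual definition, a centred and area-weighted spatial correlation between forecast and observed anomaly fields. The sketch below shows one common way to compute it on a regular latitude-longitude grid; the grids, fields and any masking or climatology choices are illustrative, not those of the multi-model exercise.

```python
import numpy as np

def pattern_correlation(forecast, observed, lat):
    """Centred, area-weighted pattern correlation between two anomaly fields
    on a regular lat-lon grid (one common definition)."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(forecast)   # cosine-latitude weights
    w /= w.sum()
    fa = forecast - np.sum(w * forecast)                            # remove weighted means
    oa = observed - np.sum(w * observed)
    return np.sum(w * fa * oa) / np.sqrt(np.sum(w * fa ** 2) * np.sum(w * oa ** 2))

# usage with toy 2.5-degree anomaly fields
lat = np.arange(-88.75, 90.0, 2.5)
lon = np.arange(0.0, 360.0, 2.5)
rng = np.random.default_rng(9)
obs = rng.standard_normal((lat.size, lon.size))
fc = obs + 0.8 * rng.standard_normal(obs.shape)      # a partially skilful "forecast"
print(pattern_correlation(fc, obs, lat))
```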

Relevance: 30.00%

Publisher:

Abstract:

We use new neutron scattering instrumentation to follow, in a single quantitative time-resolving experiment, the three key scales of structural development which accompany the crystallisation of synthetic polymers. These length scales span three orders of magnitude of the scattering vector. The study of polymer crystallisation dates back to the pioneering experiments of Keller and others, who discovered the chain-folded nature of the thin lamellar crystals normally found in synthetic polymers. The inherent connectivity of polymers makes their crystallisation a multiscale transformation. Much understanding has developed over the intervening fifty years, but the process has remained something of a mystery. There are three key length scales: the chain-folded lamellar thickness is ~10 nm, the crystal unit cell is ~1 nm, and the detail of the chain conformation is ~0.1 nm. In previous work these length scales have been addressed using different instrumentation, or were coupled using compromised geometries. More recently, researchers have attempted to exploit coupled time-resolved small-angle and wide-angle X-ray experiments. These turned out to be challenging experiments, largely because of the difficulty of placing the scattering intensity on an absolute scale. However, they did raise the possibility of new phenomena in the very early stages of crystallisation. Although there is now considerable doubt about such experiments, they drew attention to the basic question of how crystallisation proceeds in long-chain molecules. We have used NIMROD on the second target station at ISIS to follow all three length scales in a time-resolving manner for poly(ε-caprolactone). The technique can provide a single set of data from 0.01 to 100 Å⁻¹ on the same vertical scale. We analyse the results using a multiple-scale model of the crystallisation process in polymers.
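
The connection between the quoted scattering-vector range and the three structural length scales follows from d = 2π/q. The short sketch below simply evaluates this mapping at a few illustrative q values to show that 0.01 to 100 Å⁻¹ spans the lamellar (~10 nm), unit-cell (~1 nm) and chain-conformation (~0.1 nm) levels.

```python
import numpy as np

# d = 2*pi/q maps momentum transfer (Å^-1) to a real-space length scale
q_values = np.array([0.01, 0.063, 0.63, 6.3, 100.0])   # illustrative points across the range
for q in q_values:
    d_nm = 2.0 * np.pi / q / 10.0                       # convert Å to nm
    print(f"q = {q:7.3f} 1/Å  ->  d ≈ {d_nm:8.3f} nm")
```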

Relevance: 30.00%

Publisher:

Abstract:

We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, in which a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using a Dinkelbach-type procedure. We also prove that, if the sum power of the relay terminals is constrained, then the OPA will activate at most two relays.
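
The Dinkelbach-type idea can be sketched generically: to maximise the worst of several ratios (standing in here for the worse link's effective quality, which is what minimising its PEP amounts to), repeatedly solve the parametric linear program max_x min_i [N_i(x) - λ D_i(x)] over the power constraints and update λ to the worst ratio at the new point. The affine ratios, the two-relay dimensions and the sum-power value below are invented for illustration; they are not the expressions derived in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dinkelbach_maxmin(num, den, P, tol=1e-9, max_iter=50):
    """Generalized Dinkelbach sketch: maximise min_i (n_i.x + n0_i)/(d_i.x + d0_i)
    subject to x >= 0 and sum(x) <= P (a sum-power-style constraint).
    num, den are lists of (vector, constant) pairs; denominators must stay > 0."""
    K = num[0][0].size
    x = np.full(K, P / K)                                            # start from equal allocation
    lam = min((n @ x + n0) / (d @ x + d0) for (n, n0), (d, d0) in zip(num, den))
    for _ in range(max_iter):
        # parametric LP in v = [x, t]: maximise t subject to t <= N_i(x) - lam*D_i(x)
        c = np.zeros(K + 1)
        c[-1] = -1.0
        A_ub = [np.concatenate((-(n - lam * d), [1.0])) for (n, _), (d, _) in zip(num, den)]
        b_ub = [n0 - lam * d0 for (_, n0), (_, d0) in zip(num, den)]
        A_ub.append(np.concatenate((np.ones(K), [0.0])))             # sum-power constraint
        b_ub.append(P)
        bounds = [(0, None)] * K + [(None, None)]
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
        x = res.x[:K]
        new_lam = min((n @ x + n0) / (d @ x + d0) for (n, n0), (d, d0) in zip(num, den))
        converged = abs(new_lam - lam) < tol
        lam = new_lam
        if converged:
            break
    return x, lam

# toy two-relay example: two affine "link quality" ratios, unit sum power
num = [(np.array([2.0, 1.0]), 0.0), (np.array([0.5, 3.0]), 0.0)]
den = [(np.array([0.2, 0.1]), 1.0), (np.array([0.1, 0.4]), 1.0)]
x_opt, worst_ratio = dinkelbach_maxmin(num, den, P=1.0)
```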

Relevance: 30.00%

Publisher:

Abstract:

The coupled climate dynamics underlying large, rapid, and potentially irreversible changes in ice cover are studied. A global atmosphere–ocean–sea ice general circulation model with idealized aquaplanet geometry is forced by gradual multi-millennial variations in solar luminosity. The model traverses a hysteresis loop between warm ice-free conditions and cold glacial conditions in response to ±5 W m⁻² variations in global, annual-mean insolation. Comparison of several model configurations confirms the importance of polar ocean processes in setting the sensitivity and time scales of the transitions. A “sawtooth” character is found with faster warming and slower cooling, reflecting the opposing effects of surface heating and cooling on upper-ocean buoyancy and, thus, effective heat capacity. The transition from a glacial to warm, equable climate occurs in about 200 years. In contrast to the “freshwater hosing” scenario, transitions are driven by radiative forcing and sea ice feedbacks. The ocean circulation, and notably the meridional overturning circulation (MOC), does not drive the climate change. The MOC (and associated heat transport) collapses poleward of the advancing ice edge, but this is a purely passive response to cooling and ice expansion. The MOC does, however, play a key role in setting the time scales of the transition and contributes to the asymmetry between warming and cooling.
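
The hysteresis behaviour described, abrupt transitions between ice-covered and ice-free states occurring at different forcing values on the warming and cooling branches, can be illustrated with a zero-dimensional energy-balance model with ice-albedo feedback. The sketch below is only that illustration: its parameter values are invented, and it contains none of the ocean, sea-ice or MOC processes the study shows control the sensitivity and time scales.

```python
import numpy as np

def albedo(T):
    """Smooth ice-albedo feedback: high albedo when cold, low when warm (toy values)."""
    return 0.45 - 0.15 * np.tanh((T - 265.0) / 10.0)

def ramp(forcings, T0, S=1360.0, A=210.0, B=2.0, C=4.0e8, dt=3.15e7, nstep=300):
    """Step  C dT/dt = S/4 * (1 - albedo(T)) - (A + B*(T - 273)) + F  towards equilibrium
    for each forcing value in turn, carrying the state forward (a slow forcing ramp)."""
    T, out = T0, []
    for F in forcings:
        for _ in range(nstep):
            T += dt / C * (S / 4.0 * (1.0 - albedo(T)) - (A + B * (T - 273.0)) + F)
        out.append(T)
    return np.array(out)

F = np.linspace(-10.0, 10.0, 80)              # W m^-2, gradually varied forcing
T_warming = ramp(F, T0=230.0)                 # start glaciated and ramp the forcing up
T_cooling = ramp(F[::-1], T0=T_warming[-1])   # then ramp it back down from the warm state
# over a range of F the two branches differ: a hysteresis loop with abrupt jumps
```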