109 results for Time-varying
Abstract:
In various signal-channel-estimation problems, the channel being estimated may be well approximated by a discrete finite impulse response (FIR) model with sparsely separated active or nonzero taps. A common approach to estimating such channels involves a discrete normalized least-mean-square (NLMS) adaptive FIR filter, every tap of which is adapted at each sample interval. Such an approach suffers from slow convergence rates and poor tracking when the required FIR filter is "long." Recently, NLMS-based algorithms have been proposed that employ least-squares-based structural detection techniques to exploit possible sparse channel structure and subsequently provide improved estimation performance. However, these algorithms perform poorly when there is a large dynamic range amongst the active taps. In this paper, we propose two modifications to the previous algorithms, which essentially remove this limitation. The modifications also significantly improve the applicability of the detection technique to structurally time-varying channels. Importantly, for sparse channels, the computational cost of the newly proposed detection-guided NLMS estimator is only marginally greater than that of the standard NLMS estimator. Simulations demonstrate the favourable performance of the newly proposed algorithm. © 2006 IEEE.
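The standard NLMS baseline described above, in which every tap is adapted at each sample interval, can be sketched as follows. This is a minimal illustration on an arbitrary sparse test channel, not the paper's detection-guided algorithm; the channel, step size, and signal lengths are assumptions made for the example:

```python
import numpy as np

def nlms(x, d, num_taps, mu=0.5, eps=1e-8):
    """Standard NLMS adaptive FIR filter: every tap is adapted at each
    sample interval (the baseline that detection-guided variants improve
    upon for sparse channels)."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ u                      # a priori estimation error
        w += mu * e * u / (eps + u @ u)       # normalized tap update
    return w

# Identify a sparse 64-tap channel with two widely separated active taps.
rng = np.random.default_rng(0)
h = np.zeros(64)
h[2], h[40] = 1.0, -0.5
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w = nlms(x, d, num_taps=64)
```

After convergence the weight vector recovers the two active taps while the remaining taps stay near zero, which is exactly the structure a sparse-detection scheme can exploit.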
Abstract:
In recent years, many real-time applications have needed to handle data streams. We consider distributed environments in which remote data sources continuously collect data from the real world or from other data sources and push the data to a central stream processor. In such environments, significant communication is induced by transmitting rapid, high-volume and time-varying data streams, and computing overhead is also incurred at the central processor. In this paper, we develop a novel filter approach, called the DTFilter approach, for evaluating windowed distinct queries in such a distributed system. The DTFilter approach is based on a search algorithm that uses a data structure of two height-balanced trees, and it avoids transmitting duplicate items in the data streams, thereby saving considerable network resources. In addition, theoretical analyses of the time spent performing the search and of the amount of memory needed are provided. Extensive experiments also show that the DTFilter approach achieves high performance.
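The duplicate-suppression idea behind such a filter can be sketched as follows. This is only a minimal illustration of filtering duplicates within a sliding window at a remote source; the class name is hypothetical, and a plain hash map plus deque stand in for the paper's two height-balanced trees:

```python
from collections import deque

class DistinctWindowFilter:
    """Sketch of source-side filtering for windowed distinct queries:
    only items not already present in the sliding window are transmitted
    to the central stream processor."""
    def __init__(self, window_size):
        self.window = deque()
        self.counts = {}          # stands in for the height-balanced trees
        self.window_size = window_size

    def push(self, item):
        """Returns True if `item` must be sent to the central processor."""
        if len(self.window) == self.window_size:
            old = self.window.popleft()       # expire the oldest item
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]
        send = item not in self.counts        # duplicate in window: filtered
        self.window.append(item)
        self.counts[item] = self.counts.get(item, 0) + 1
        return send

f = DistinctWindowFilter(window_size=4)
sent = [x for x in [1, 2, 2, 3, 1, 2, 5] if f.push(x)]
```

Only the items that first enter the current window are transmitted; duplicates inside the window are suppressed at the source, which is what saves network resources.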
Abstract:
The purpose of this study was to examine the effects of different methods of measuring training volume, controlled in different ways, on selected variables that reflect acute neuromuscular responses. Eighteen resistance-trained males performed three fatiguing protocols of dynamic constant external resistance exercise, involving elbow flexors, that manipulated either time-under-tension (TUT) or volume load (VL), defined as the product of training load and repetitions. Protocol A provided a standard for TUT and VL. Protocol B involved the same VL as Protocol A but only 40% concentric TUT; Protocol C was equated to Protocol A for TUT but only involved 50% VL. Fatigue was assessed by changes in maximum voluntary isometric contraction (MVIC), interpolated doublet (ID), and muscle twitch characteristics (peak twitch, time to peak twitch, half-relaxation time, and mean rates of force development and twitch relaxation). All protocols produced significant changes (P
Abstract:
We have performed MRI examinations to determine the water diffusion tensor in the brain of six patients who were admitted to the hospital within 12 h after the onset of cerebral ischemic symptoms. The examinations were carried out immediately after admission, and thereafter at varying intervals up to 90 days post admission. Maps of the trace of the diffusion tensor, the fractional anisotropy and the lattice index, as well as maps of cerebral blood perfusion parameters, were generated to quantitatively assess the character of the water diffusion tensor in the infarcted area. In patients with significant perfusion deficits and substantial lesion volume changes (four of six cases), our measurements show a monotonic and significant decrease in the diffusion anisotropy within the ischemic lesion as a function of time. We propose that retrospective analysis of this quantity, in combination with brain tissue segmentation and cerebral perfusion maps, may be used in future studies to assess the severity of the ischemic event. (C) 1999 Elsevier Science Inc.
Abstract:
Previous research using punctate reaction time and counting tasks has found that the startle eyeblink reflex is sensitive to attentional demands. The present experiment explored whether startle eyeblink is also modulated during a complex continuous task and is sensitive to different levels of mental workload. Participants (N=14) performed a visual horizontal tracking task either alone (single-task condition) or in combination with a visual gauge monitoring task (multiple-task condition) for three minutes. On some task trials, the startle eyeblink reflex was elicited by a noise burst. Results showed that startle eyeblink was attenuated during both tasks and that the attenuation was greater during the multiple-task condition than during the single-task condition. Subjective ratings, endogenous eyeblink rate, heart period, and heart period variability provided convergent validity of the workload manipulations. The findings suggest that the startle eyeblink is sensitive to the workload demands associated with a continuous visual task. The application of startle eyeblink modulation as a workload metric and the possibility that it may be diagnostic of workload demands in different stimulus modalities are discussed.
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
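The idea of injecting the time derivative of the source waveform, rather than the waveform itself, can be illustrated with a minimal 1-D FDTD loop. The grid size, Gaussian waveform, and soft-source placement below are arbitrary assumptions for illustration only, not the article's body-model simulation:

```python
import numpy as np

# Minimal 1-D FDTD sketch in normalized units (Courant number 0.5).
# A soft source injects the time DERIVATIVE of a Gaussian waveform;
# the leapfrog update effectively integrates it back into a pulse.
nz, nt, src = 200, 150, 50
ez = np.zeros(nz)                          # electric field on the grid
hy = np.zeros(nz)                          # magnetic field on the grid
t0, spread = 40.0, 12.0                    # Gaussian centre and width

for n in range(nt):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])    # update H from the curl of E
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])     # update E from the curl of H
    g = np.exp(-0.5 * ((n - t0) / spread) ** 2)   # Gaussian waveform
    dg = -((n - t0) / spread ** 2) * g            # its time derivative
    ez[src] += dg                          # derivative-based soft injection
```

The loop stays stable and produces a finite propagating pulse; the article's point is that this derivative-based injection remains accurate down to the low frequencies relevant to gradient-induced eddy currents.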
Abstract:
Latitudinal clines provide natural systems that may allow the effect of natural selection on the genetic variance to be determined. Ten clinal populations of Drosophila serrata collected from the eastern coast of Australia were used to examine clinal patterns in the trait mean and genetic variance of the life-history trait egg-to-adult development time. Development time significantly lengthened from tropical areas to temperate areas. The additive genetic variance for development time in each population was not associated with latitude but was associated with the population mean development time. Additive genetic variance tended to be larger in populations with more extreme development times and appeared to be consistent with allele frequency change. In contrast, the nonadditive genetic variance was not associated with the population mean but was associated with latitude. Levels of nonadditive genetic variance were greatest in the region of the cline where the gradient in the change in mean was greatest, consistent with Barton's (1999) conjecture that the generation of linkage disequilibrium may become an important component of the genetic variance in systems with a spatially varying optimum.
Abstract:
Background: The solubility of dental pulp tissue in sodium hypochlorite has been extensively investigated but results have been inconsistent, most likely due to variations in experimental design, the volume and/or rate of replenishment of the solutions used and the nature of the tissues assessed. Traditionally, the sodium hypochlorite solutions used for endodontic irrigation in Australia have been either Milton or commercial bleach, with Milton being the most common. Recently, a range of Therapeutic Goods Administration (TGA) approved proprietary sodium hypochlorite solutions, which contain surfactant, has become available. Some domestic chlorine bleaches now also contain surfactants. The purpose of this study was to perform new solubility assessments, comparing Milton with new TGA approved products, Hypochlor 1% and Hypochlor 4% forte, and with a domestic bleach containing surfactant (White King). Methods: Ten randomly assigned samples of porcine dental pulp of approximately equal dimensions were immersed in the above solutions, as well as in representative concentrations of sodium hydroxide. Time to complete dissolution was measured and assessed statistically. Results: White King 4% showed the shortest dissolution time, closely followed by Hypochlor 4% forte. White King 1% and Hypochlor 1% each took around three times as long to completely dissolve the samples of pulp as their respective 4% concentrations, while Milton took nearly 10 times as long. The sodium hydroxide solutions showed no noticeable dissolution of the pulp samples. Conclusions: The composition and content of sodium hypochlorite solutions had a profound effect on the ability of these solutions to dissolve pulp tissue in vitro. Greater concentrations provided more rapid dissolution of tissue. One per cent solutions with added surfactant and which contained higher concentrations of sodium hydroxide were significantly more effective in dissolution of pulp tissue than Milton.
Abstract:
Since their discovery 150 years ago, Neanderthals have been considered incapable of behavioural change and innovation. Traditional synchronic approaches to the study of Neanderthal behaviour have perpetuated this view and shaped our understanding of their lifeways and eventual extinction. In this thesis I implement an innovative diachronic approach to the analysis of Neanderthal faunal extraction, technology and symbolic behaviour as contained in the archaeological record of the critical period between 80,000 and 30,000 years BP. The thesis demonstrates patterns of change in Neanderthal behaviour which are at odds with traditional perspectives and which are consistent with an interpretation of increasing behavioural complexity over time, an idea that has been suggested but never thoroughly explored in Neanderthal archaeology. Demonstrating an increase in behavioural complexity in Neanderthals provides much needed new data with which to fuel the debate over the behavioural capacities of Neanderthals and the first appearance of Modern Human Behaviour in Europe. It supports the notion that Neanderthal populations were active agents of behavioural innovation prior to the arrival of Anatomically Modern Humans in Europe and, ultimately, that they produced an early Upper Palaeolithic cultural assemblage (the Châtelperronian) independent of modern humans. Overall, this thesis provides an initial step towards the development of a quantitative approach to measuring behavioural complexity which provides fresh insights into the cognitive and behavioural capabilities of Neanderthals.
Abstract:
In high-velocity open channel flows, the measurements of air-water flow properties are complicated by the strong interactions between the flow turbulence and the entrained air. In the present study, an advanced signal processing of traditional single- and dual-tip conductivity probe signals is developed to provide further details on the air-water turbulence level, time and length scales. The technique is applied to turbulent open channel flows on a stepped chute conducted in a large-size facility with flow Reynolds numbers ranging from 3.8 E+5 to 7.1 E+5. The air-water flow properties presented some basic characteristics that were qualitatively and quantitatively similar to previous skimming flow studies. Some self-similar relationships were observed systematically at both macroscopic and microscopic levels. These included the distributions of void fraction, bubble count rate, interfacial velocity and turbulence level at a macroscopic scale, and the auto- and cross-correlation functions at the microscopic level. New correlation analyses yielded a characterisation of the large eddies advecting the bubbles. Basic results included the integral turbulent length and time scales. The turbulent length scales characterised some measure of the size of large vortical structures advecting air bubbles in the skimming flows, and the data were closely related to the characteristic air-water depth Y90. In the spray region, present results highlighted the existence of an upper spray region for C > 0.95 to 0.97 in which the distributions of droplet chord sizes and integral advection scales presented some marked differences with the rest of the flow.
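An integral time scale of the kind reported above can be computed from the auto-correlation function of a probe signal. The sketch below integrates the normalized ACF from zero lag up to its first zero crossing, a common convention; the study's actual processing of conductivity-probe signals is considerably more involved:

```python
import numpy as np

def integral_time_scale(signal, dt):
    """Integral time scale from the auto-correlation function: integrate
    the normalized ACF from zero lag to its first zero crossing (one
    common convention, assumed here for illustration)."""
    s = signal - signal.mean()
    acf = np.correlate(s, s, mode='full')[len(s) - 1:]  # one-sided ACF
    acf /= acf[0]                                       # normalize R(0)=1
    crossings = np.nonzero(acf <= 0)[0]
    cut = crossings[0] if crossings.size else len(acf)
    return acf[:cut].sum() * dt                         # rectangular rule

# Sanity check: for cos(t), R(tau) ~ cos(tau), so the integral up to the
# first zero crossing at tau = pi/2 is approximately sin(pi/2) = 1.
t = np.arange(0.0, 50.0, 0.01)
scale = integral_time_scale(np.cos(t), dt=0.01)
```

Applied to a void-fraction probe signal, this kind of estimate characterises the time scale of the large vortical structures advecting the bubbles.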
Abstract:
The calculation of quantum dynamics is currently a central issue in theoretical physics, with diverse applications ranging from ultracold atomic Bose-Einstein condensates to condensed matter, biology, and even astrophysics. Here we demonstrate a conceptually simple method of determining the regime of validity of stochastic simulations of unitary quantum dynamics by employing a time-reversal test. We apply this test to a simulation of the evolution of a quantum anharmonic oscillator with up to 6.022 × 10^23 (Avogadro's number) particles. This system is realizable as a Bose-Einstein condensate in an optical lattice, for which the time-reversal procedure could be implemented experimentally.
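The time-reversal test itself is simple to illustrate: evolve forward, reverse the arrow of time, evolve back, and measure the deviation from the initial state. The sketch below applies it to a classical anharmonic oscillator integrated with RK4, a deterministic stand-in for the stochastic quantum simulations discussed above; the equation of motion and parameters are assumptions made for the example:

```python
import numpy as np

def rk4_step(f, y, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def anharmonic(y, lam=0.1):
    """Classical anharmonic oscillator x'' = -x - lam*x^3 (illustrative
    stand-in for the quantum model)."""
    x, p = y
    return np.array([p, -x - lam * x ** 3])

def time_reversal_error(y0, dt=1e-3, steps=2000):
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        y = rk4_step(anharmonic, y, dt)   # evolve forward
    y[1] = -y[1]                          # flip momentum: time reversal
    for _ in range(steps):
        y = rk4_step(anharmonic, y, dt)   # evolve "backward"
    y[1] = -y[1]
    return float(np.max(np.abs(y - np.array(y0))))

err = time_reversal_error([1.0, 0.0])
```

A small residual indicates the integration is faithful over that interval; in the stochastic quantum setting, growth of this deviation marks where the simulation leaves its regime of validity.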
Abstract:
The transient statistics of a gain-switched coherently pumped class-C laser displays a linear correlation between the first passage time and subsequent peak intensity. Measurements are reported showing a positive or negative sign of this linear correlation, controlled through the switching time and the laser detuning. Further measurements of the small-signal laser gain combined with calculations involving a three-level laser model indicate that this sign fundamentally depends upon the way the laser inversion varies during the gain switching, despite the added dynamics of the laser polarization in the class-C laser. [S1050-2947(97)07112-6].
Abstract:
There is concern that Pacific Island economies dependent on remittances of migrants will endure foreign exchange shortages and falling living standards as remittance levels fall because of lower migration rates and the belief that migrants' willingness to remit declines over time. The empirical validity of the remittance-decay hypothesis has never been tested. From survey data on Tongan and Western Samoan migrants in Sydney, this paper estimates remittance functions using multivariate regression analysis. It is found that the remittance-decay hypothesis has no empirical validity, and migrants are motivated by factors other than altruistic family support, including asset accumulation and investment back home.
Abstract:
To simulate cropping systems, crop models must not only give reliable predictions of yield across a wide range of environmental conditions, they must also quantify water and nutrient use well, so that the status of the soil at maturity is a good representation of the starting conditions for the next cropping sequence. To assess their suitability for this task, a range of crop models currently used in Australia was tested. The models differed in their design objectives, complexity and structure and were (i) tested on diverse, independent data sets from a wide range of environments and (ii) further evaluated at the component level with one detailed data set from a semi-arid environment. All models were coded into the cropping systems shell APSIM, which provides a common soil water and nitrogen balance. Crop development was input, so differences between simulations were caused entirely by differences in simulating crop growth. Under nitrogen non-limiting conditions, between 73 and 85% of the observed kernel yield variation across environments was explained by the models; this ranged from 51 to 77% under varying nitrogen supply. Water and nitrogen effects on leaf area index were predicted poorly by all models, resulting in erroneous predictions of dry matter accumulation and water use. When measured light interception was used as input, most models improved in their prediction of dry matter and yield. This test highlighted a range of compensating errors in all modelling approaches. The time course and final amount of water extraction were simulated well by two models, while others left up to 25% of potentially available soil water in the profile. Kernel nitrogen percentage was predicted poorly by all models due to its sensitivity to small dry matter changes. Yield and dry matter could be estimated adequately for a range of environmental conditions using the general concepts of radiation use efficiency and transpiration efficiency.
However, leaf area and kernel nitrogen dynamics need to be improved to achieve better estimates of water and nitrogen use if such models are to be used to evaluate cropping systems. (C) 1998 Elsevier Science B.V.
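The radiation-use-efficiency concept mentioned above can be written down in a few lines: daily dry-matter gain is intercepted radiation multiplied by RUE, with interception commonly modelled by Beer's law. Function and parameter names below are illustrative assumptions, not taken from any of the tested models:

```python
import math

def daily_biomass_gain(rue, par, lai, k=0.5):
    """Daily dry-matter gain (g/m^2/day) via radiation use efficiency:
    intercepted PAR (Beer's law with an assumed extinction coefficient k)
    times RUE (g dry matter per MJ intercepted)."""
    intercepted = par * (1.0 - math.exp(-k * lai))  # MJ/m^2/day intercepted
    return rue * intercepted

# e.g. RUE 1.5 g/MJ, 10 MJ/m^2/day incident PAR, LAI 3
gain = daily_biomass_gain(rue=1.5, par=10.0, lai=3.0)
```

As the abstract notes, such a formulation estimates yield and dry matter adequately only when leaf area index, and hence light interception, is itself simulated well.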