139 results for High Frequency
Abstract:
Map-matching algorithms that utilise road segment connectivity along with other data (i.e. position, speed and heading) in the process of map-matching are normally suitable for high frequency (1 Hz or higher) positioning data from GPS. When such map-matching algorithms are applied to low frequency data (such as data from a fleet of private cars, buses, light duty vehicles or smartphones), their performance drops to around 70% in terms of correct link identification, especially in urban and sub-urban road networks. This level of performance may be insufficient for some real-time Intelligent Transport System (ITS) applications and services, such as estimating link travel time and speed from low frequency GPS data. Therefore, this paper develops a new weight-based shortest path and vehicle trajectory aided map-matching (stMM) algorithm that enhances the map-matching of low frequency positioning data on a road map. The well-known A* search algorithm is employed to derive the shortest path between two points while taking into account both link connectivity and turn restrictions at junctions. In the developed stMM algorithm, two additional weights related to the shortest path and vehicle trajectory are considered: one shortest path-based weight relates the distance along the shortest path to the distance along the vehicle trajectory, while the other is associated with the heading difference of the vehicle trajectory. The developed stMM algorithm is tested using a series of real-world datasets of varying frequencies (i.e. 1 s, 5 s, 30 s and 60 s sampling intervals). A high-accuracy integrated navigation system (a high-grade inertial navigation system and a carrier-phase GPS receiver) is used to measure the accuracy of the developed algorithm. The results suggest that the algorithm identifies 98.9% of the links correctly for the 30 s GPS data. When the shortest path and vehicle trajectory information is omitted, the accuracy of the algorithm reduces to about 73% in terms of correct link identification. The algorithm can process on average 50 positioning fixes per second, making it suitable for real-time ITS applications and services.
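As an illustration of the weighting idea described in this abstract, the sketch below scores a candidate link from the two quantities named there: the agreement between the A*-derived shortest-path distance and the distance along the vehicle trajectory, and the heading difference. The function name, score scaling and weighting coefficients are assumptions for illustration, not the paper's exact formulation.

```python
import math

def stmm_style_weight(shortest_path_len, trajectory_len,
                      path_heading_deg, trajectory_heading_deg,
                      w_dist=1.0, w_heading=1.0):
    """Illustrative score for a candidate link: higher when the shortest
    path length matches the distance travelled along the trajectory and
    when the headings agree. Not the paper's exact weight definitions."""
    # Distance-similarity weight: 1 when the two distances coincide,
    # decreasing as they diverge.
    dist_score = min(shortest_path_len, trajectory_len) / max(shortest_path_len, trajectory_len)

    # Heading weight: cosine of the heading difference, clipped at zero.
    dh = math.radians(abs(path_heading_deg - trajectory_heading_deg))
    heading_score = max(0.0, math.cos(dh))

    return w_dist * dist_score + w_heading * heading_score

# Example: a 30 s gap where the trajectory covers 410 m against a 400 m
# shortest path, with a 12 degree heading difference.
print(stmm_style_weight(400.0, 410.0, 75.0, 87.0))
```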
Abstract:
In estuaries and natural water channels, the estimation of velocity and dispersion coefficients is critical to the understanding of scalar transport and mixing. Such estimates are rarely available experimentally at sub-tidal time scales in shallow water channels, where high frequency sampling is required to capture their spatio-temporal variation. This study estimates Lagrangian integral scales and autocorrelation curves, which are key parameters for obtaining velocity fluctuations and dispersion coefficients, and their spatio-temporal variability from deployments of Lagrangian drifters sampled at 10 Hz over a 4-hour period. The power spectral densities of the velocities between 0.0001 and 0.8 Hz were well fitted by the −5/3 slope predicted by Kolmogorov’s similarity hypothesis within the inertial subrange, and were similar to the Eulerian power spectra previously observed within the estuary. The results showed that large velocity fluctuations determine the magnitude of the integral time scale, TL. Overlapping of short segments improved the stability of the estimate of TL by taking advantage of the redundant data included in the autocorrelation function. The integral time scales were about 20 s and varied by up to a factor of 8. These results are essential inputs for spatial binning of velocities, Lagrangian stochastic modelling and single particle analysis of the tidal estuary.
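A minimal sketch of how the Lagrangian integral time scale can be obtained from a drifter velocity record, by integrating the normalised autocorrelation of the velocity fluctuations up to its first zero crossing. This is a standard estimator applied to one short segment, in the spirit of the segment-based approach mentioned above; the segment length and synthetic data are placeholders.

```python
import numpy as np

def integral_time_scale(u, dt):
    """Estimate T_L by integrating the normalised autocorrelation of the
    velocity fluctuations up to its first zero crossing (a common
    estimator, not necessarily the study's exact procedure)."""
    u = np.asarray(u, dtype=float)
    u = u - u.mean()                              # velocity fluctuations
    acf = np.correlate(u, u, mode="full")[u.size - 1:]
    acf = acf / acf[0]                            # normalise so acf[0] == 1
    cutoff = np.argmax(acf < 0) if np.any(acf < 0) else acf.size
    return acf[:cutoff].sum() * dt                # rectangle-rule integral

# Example on a synthetic 10 Hz record for one 10-minute segment.
rng = np.random.default_rng(1)
u = rng.standard_normal(10 * 600)
print(integral_time_scale(u, dt=0.1))
```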
Abstract:
We used event-related fMRI to investigate the neural correlates of encoding strength and word frequency effects in recognition memory. At test, participants made Old/New decisions to intermixed low (LF) and high frequency (HF) words that had been presented once or twice at study and to new, unstudied words. The Old/New effect for all hits vs. correctly rejected unstudied words was associated with differential activity in multiple cortical regions, including the anterior medial temporal lobe (MTL), hippocampus, left lateral parietal cortex and anterior left inferior prefrontal cortex (LIPC). Items repeated at study had superior hit rates (HR) compared to items presented once and were associated with reduced activity in the right anterior MTL. By contrast, other regions that had shown conventional Old/New effects did not demonstrate modulation according to memory strength. A mirror effect for word frequency was demonstrated, with the LF word HR advantage associated with increased activity in the left lateral temporal cortex. However, none of the regions that had demonstrated Old/New item retrieval effects showed modulation according to word frequency. These findings are interpreted as supporting single-process memory models proposing a unitary strength-like memory signal and models attributing the LF word HR advantage to the greater lexico-semantic context-noise associated with HF words due to their being experienced in many pre-experimental contexts.
Abstract:
Word frequency (WF) and strength effects are two important phenomena associated with episodic memory. The former refers to the superior hit-rate (HR) for low (LF) compared to high frequency (HF) words in recognition memory, while the latter describes the incremental effect(s) upon HRs associated with repeating an item at study. Using the "subsequent memory" method with event-related fMRI, we tested the attention-at-encoding (AE) [M. Glanzer, J.K. Adams, The mirror effect in recognition memory: data and theory, J. Exp. Psychol.: Learn Mem. Cogn. 16 (1990) 5-16] explanation of the WF effect. In addition to investigating encoding strength, we addressed if study involves accessing prior representations of repeated items via the same mechanism as that at test [J.L. McClelland, M. Chappell, Familiarity breeds differentiation: a subjective-likelihood approach to the effects of experience in recognition memory, Psychol. Rev. 105 (1998) 724-760], entailing recollection [K.J. Malmberg, J.E. Holden, R.M. Shiffrin, Modeling the effects of repetitions, similarity, and normative word frequency on judgments of frequency and recognition memory, J. Exp. Psychol.: Learn Mem. Cogn. 30 (2004) 319-331] and whether less processing effort is entailed for encoding each repetition [M. Cary, L.M. Reder, A dual-process account of the list-length and strength-based mirror effects in recognition, J. Mem. Lang. 49 (2003) 231-248]. The increased BOLD responses observed in the left inferior prefrontal cortex (LIPC) for the WF effect provide support for an AE account. Less effort does appear to be required for encoding each repetition of an item, as reduced BOLD responses were observed in the LIPC and left lateral temporal cortex; both regions demonstrated increased responses in the conventional subsequent memory analysis. At test, a left lateral parietal BOLD response was observed for studied versus unstudied items, while only medial parietal activity was observed for repeated items at study, indicating that accessing prior representations at encoding does not necessarily occur via the same mechanism as that at test, and is unlikely to involve a conscious recall-like process such as recollection. This information may prove useful for constraining cognitive theories of episodic memory.
Abstract:
In two fMRI experiments, participants named pictures with superimposed distractors that were high or low in frequency or varied in terms of age of acquisition. Pictures superimposed with low-frequency words were named more slowly than those superimposed with high-frequency words, and late-acquired words interfered with picture naming to a greater extent than early-acquired words. The distractor frequency effect (Experiment 1) was associated with increased activity in left premotor and posterior superior temporal cortices, consistent with the operation of an articulatory response buffer and verbal self-monitoring system. Conversely, the distractor age-of-acquisition effect (Experiment 2) was associated with increased activity in the left middle and posterior middle temporal cortex, consistent with the operation of lexical level processes such as lemma and phonological word form retrieval. The spatially dissociated patterns of activity across the two experiments indicate that distractor effects in picture-word interference may occur at lexical or postlexical levels of processing in speech production.
Abstract:
Frequency Domain Spectroscopy (FDS) is one of the major techniques used for determining the condition of the cellulose based paper and pressboard components in large oil/paper insulated power transformers. This technique typically makes use of a sinusoidal voltage source swept from 0.1 mHz to 1 kHz. The excitation test voltage source used must meet certain characteristics, such as high output voltage, high fidelity, low noise and low harmonic content. The amplifier used in the test voltage source must be able to drive highly capacitive loads. This paper proposes that a switch-mode assisted linear amplifier (SMALA) can be used in the test voltage source to meet these criteria. A three level SMALA prototype amplifier was built to experimentally demonstrate the effectiveness of this proposal. The developed SMALA prototype shows no discernible harmonic distortion in the output voltage waveform and requires no output filters, and is therefore seen as a preferable option to pulse width modulated digital amplifiers. The lack of harmonic distortion and high frequency switching noise in the output voltage of this SMALA prototype demonstrates its feasibility for applications in FDS, particularly on highly capacitive test objects such as transformer insulation systems.
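For context, the sketch below generates the kind of logarithmic frequency sweep mentioned above (0.1 mHz to 1 kHz) and evaluates the dissipation factor of a simple parallel R-C stand-in for the oil/paper insulation; the R and C values are illustrative assumptions, not transformer data, and the SMALA amplifier itself is not modelled.

```python
import numpy as np

freqs = np.logspace(-4, 3, 50)                  # 0.1 mHz ... 1 kHz sweep points
R = 1e12                                        # assumed insulation resistance (ohm)
C = 5e-9                                        # assumed insulation capacitance (F)
tan_delta = 1.0 / (2 * np.pi * freqs * R * C)   # tan(delta) of a parallel R-C

for f, td in zip(freqs[::10], tan_delta[::10]):
    print(f"{f:12.4g} Hz   tan(delta) = {td:.3e}")
```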
Abstract:
High-voltage circuit breakers are among the most important pieces of equipment for ensuring the efficient and safe operation of an electric power system. On occasion, circuit breaker operators may wish to check whether equipment is performing satisfactorily and whether controlled switching systems are producing reliable and repeatable stress control. Monitoring of voltage and current waveforms during switching using established methods will provide information about the magnitude and frequency of voltage transients resulting from re-ignitions and restrikes. However, high frequency waveform measurement requires shutdown of the circuit breaker and the use of specialized equipment. Two utilities, Hydro-Québec in Canada and Powerlink Queensland in Australia, have been working on the development and application of a non-intrusive, cost-effective and flexible diagnostic system for monitoring high-voltage circuit breakers used for reactive switching. The proposed diagnostic approach relies on the non-intrusive assessment of key parameters such as operating times, prestrike characteristics, and re-ignition and restrike detection. Transient electromagnetic emissions have been identified as a promising means to evaluate the abovementioned parameters non-intrusively. This paper describes two complementary methods developed concurrently by Powerlink and Hydro-Québec. Field experience with the application to capacitor bank and shunt reactor switching is also presented.
Abstract:
The “distractor-frequency effect” refers to the finding that high-frequency (HF) distractor words slow picture naming less than low-frequency distractors in the picture–word interference paradigm. Rival input and output accounts of this effect have been proposed. The former attributes the effect to attentional selection mechanisms operating during distractor recognition, whereas the latter attributes it to monitoring/decision mechanisms operating on distractor and target responses in an articulatory buffer. Using high-density (128-channel) EEG, we tested hypotheses from these rival accounts. In addition to conducting stimulus- and response-locked whole-brain corrected analyses, we investigated the correct-related negativity, an ERP observed on correct trials at fronto-central electrodes proposed to reflect the involvement of domain-general monitoring. The whole-brain ERP analysis revealed a significant effect of distractor frequency at inferior right frontal and temporal sites between 100 and 300 msec post-stimulus onset, during which lexical access is thought to occur. Response-locked, region of interest (ROI) analyses of fronto-central electrodes revealed a correct-related negativity starting 121 msec before and peaking 125 msec after vocal onset in the grand averages. Slope analysis of this component revealed a significant difference between HF and low-frequency distractor words, with the former associated with a steeper slope in the time window spanning from 100 msec before to 100 msec after vocal onset. The finding of ERP effects in time windows and components corresponding to both lexical processing and monitoring suggests the distractor frequency effect is most likely associated with more than one physiological mechanism.
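To make the slope analysis concrete, here is a small sketch of the kind of computation described above: fit a straight line to a response-locked fronto-central waveform in the window from 100 ms before to 100 ms after vocal onset and compare slopes between distractor conditions. The sampling rate and the stand-in waveforms are assumptions; real grand-average ERP data would be used in practice.

```python
import numpy as np

fs = 500                                     # assumed EEG sampling rate (Hz)
t = np.arange(-0.2, 0.3, 1 / fs)             # response-locked time axis (s)
rng = np.random.default_rng(0)
erp_hf = np.cumsum(rng.standard_normal(t.size)) * 0.01   # stand-in HF-distractor ERP
erp_lf = np.cumsum(rng.standard_normal(t.size)) * 0.01   # stand-in LF-distractor ERP

# Linear fit over the -100 ms to +100 ms window around vocal onset;
# the fitted slope is the quantity compared between conditions.
window = (t >= -0.1) & (t <= 0.1)
slope_hf = np.polyfit(t[window], erp_hf[window], 1)[0]
slope_lf = np.polyfit(t[window], erp_lf[window], 1)[0]
print(slope_hf, slope_lf)
```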
Abstract:
This paper presents the analysis of shaft voltage in different configurations of a doubly fed induction generator (DFIG) and an induction generator (IG) with a back-to-back inverter in wind turbine applications. Detailed high frequency models of the proposed systems have been developed based on the existing capacitive couplings in the IG and DFIG structures and the common mode voltage sources. In this research work, several arrangements of DFIG based wind energy conversion systems (WES) are investigated with respect to shaft voltage calculation and its mitigation techniques. Placement of an LC line filter in different locations and its effect on shaft voltage elimination is studied via mathematical analysis and simulations. A pulse width modulation (PWM) technique and a back-to-back inverter with a bidirectional buck converter are presented to eliminate the shaft voltage in a DFIG wind turbine.
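As background to the capacitive couplings mentioned above, the sketch below applies the commonly used capacitive voltage divider view of shaft voltage: the common mode voltage at the stator winding couples to the rotor through parasitic capacitances. The capacitance names and values are illustrative placeholders, not the paper's DFIG parameters.

```python
# Simplified capacitive-divider view of shaft voltage build-up in an
# inverter-fed machine (placeholder values, not the paper's DFIG data).
C_wr = 10e-12   # winding-to-rotor capacitance (assumed)
C_rf = 1.0e-9   # rotor-to-frame capacitance (assumed)
C_b = 0.2e-9    # bearing capacitance (assumed)

def shaft_voltage(v_common_mode):
    """Shaft voltage predicted by the capacitive voltage divider."""
    bearing_voltage_ratio = C_wr / (C_wr + C_rf + C_b)
    return bearing_voltage_ratio * v_common_mode

# Example: a common mode step of roughly half an assumed 600 V DC link.
print(shaft_voltage(300.0))
```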
Abstract:
In this paper, several high frequency related issues of modern AC motor drive systems, such as common mode voltage, shaft voltage and the resultant bearing and leakage currents, are discussed. Conducted emission is a major problem in modern motor drives that produces undesirable effects on electronic devices. In modern power electronic systems, increasing power density and decreasing system cost and size are market requirements. Switching losses, harmonics and EMI are the key factors which should be considered at the beginning of the design stage to optimise a drive system.
Abstract:
Noise and vibration in complex ship structures are becoming a prominent issue for the ship building industry and ship companies due to the constant demand for faster ships of lighter weight and the industry's stringent noise and vibration regulations. In order to retain the full benefit of building faster ships without compromising too much on ride comfort and safety, noise and vibration control needs to be implemented. Due to the complexity of ship structures, the coupling of different wave types and multiple wave propagation paths, active control of global hull modes is difficult to implement and very expensive. Traditional passive control, such as adding damping materials, is only effective in the high frequency range. However, the most severe damage to ship structures is caused by large structural deformation of hull structures and high dynamic stress concentration at low frequencies. Most of the discomfort and fatigue of passengers and crew onboard ships is also due to low frequency noise and vibration. Innovative approaches are therefore required to attenuate noise and vibration at low frequencies. This book was developed from several specialized research topics on vibration and vibration control of ship structures, mostly from the author's own PhD work at the University of Western Australia. The book aims to provide a better understanding of the vibration characteristics of ribbed plate structures and plate/plate coupled structures, and of the mechanisms governing wave propagation and attenuation in periodic and irregular ribbed structures as well as in complex ship structures. The book is designed to be a reference for ship builders, vibro-acoustic engineers and researchers. The author also hopes that the book can stimulate more exciting future work in this area of research. It is the author's humble desire that the book be of some use to those who purchase it. This book is divided into eight chapters. Each chapter focuses on providing a solution to a particular vibration problem of ship structures. A brief summary of each chapter is given in the general introduction. The chapters are inter-dependent and together form an integrated volume on the subject of vibration and vibration control of ship structures and the like. I am indebted to many people for the completion of this work. In particular, I would like to thank Professor J. Pan, Dr N.H. Farag, Dr K. Sum and many others from the University of Western Australia for useful advice and help during my time at the University and beyond. I would also like to thank my wife, Miaoling Wang, and my children, Anita, Sophia and Angela Lin, for their sacrifice and continuing support in making this work possible. Financial support for this work from the Australian Research Council, the Australian Defense Science and Technology Organization and Strategic Marine Pty Ltd of Western Australia is gratefully acknowledged.
Abstract:
Near-infrared (NIR) and Fourier transform infrared (FTIR) spectroscopy have been used to determine the mineralogical character of isomorphic substitutions of Mg2+ by the divalent transition metals Fe, Mn, Co and Ni in the natural halotrichite series. The minerals are characterised by d-d transitions in the NIR region 12000-7500 cm-1. The NIR spectrum of halotrichite reveals a broad feature from 12000 to 7500 cm-1 with a splitting into two bands resulting from the ferrous ion transition 5T2g → 5Eg. The presence of overtones of OH- fundamentals near 7000 cm-1 confirms molecular water in the mineral structure of the halotrichite series. The appearance of the most intense peak at around 5132 cm-1 is a common feature of the three minerals and is derived from a combination of the OH- vibrations of water molecules and the ν2 water bending modes. The influence of cations such as Mg2+, Fe2+, Mn2+, Co2+ and Ni2+ is evident in the spectra of the halotrichites; in particular, the OH stretching bands of wupatkiite are shifted conspicuously to low wavenumbers at 3270, 2904 and 2454 cm-1. The observation of the high frequency ν2 mode in the infrared spectrum at 1640 cm-1 indicates that the coordinated water molecules are strongly hydrogen bonded in natural halotrichites. The splitting of bands in the ν3 and ν4 (SO4)2- stretching regions may be attributed to the reduction of symmetry from Td to C2v for the sulphate ion. This work has shown the usefulness of NIR spectroscopy for the rapid identification and classification of the halotrichite minerals.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
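Two of the ingredients named in this abstract, a wavelet decomposition of the fingerprint image and a noise-shaping (log-variance) bit allocation across subbands, can be sketched as below using PyWavelets. The wavelet, decomposition level and average bit rate are assumptions; the thesis's fixed wavelet packet structure and lattice vector quantization stage are not reproduced here.

```python
import numpy as np
import pywt

def subband_bit_allocation(image, avg_bits=2.0, wavelet="db4", level=3):
    """Textbook-style high-rate bit allocation across wavelet subbands:
    b_k = b_avg + 0.5 * log2(var_k / geometric mean of the variances)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Flatten into a list of subbands: approximation + detail bands.
    subbands = [coeffs[0]] + [band for detail in coeffs[1:] for band in detail]
    variances = np.array([np.var(band) for band in subbands]) + 1e-12
    geo_mean = np.exp(np.mean(np.log(variances)))
    bits = avg_bits + 0.5 * np.log2(variances / geo_mean)
    return np.clip(bits, 0.0, None)

# Example on a stand-in image (a real fingerprint image would be used).
image = np.random.default_rng(2).random((256, 256))
print(subband_bit_allocation(image).round(2))
```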
Abstract:
Assessment of the condition of connectors in the overhead electricity network has traditionally relied on the heat dissipation or voltage drop from the existing load current (50 Hz) as a measurable parameter to differentiate between satisfactory and failing connectors. This research has developed a technique which does not rely on the 50 Hz current, and a prototype connector tester has been developed. In this system a high frequency signal is injected into the section of line under test, and the resistive voltage drop and the current at the test frequency are measured to yield the resistance in micro-ohms. From the value of resistance, a decision as to whether a connector is satisfactory or approaching failure can be made. Determining the resistive voltage drop in the presence of a large induced voltage was achieved by the innovative approach of using a representative sample of the magnetic flux producing the induced voltage as the phase angle reference for the signal processing, rather than the phase angle of the current, which can be affected by the presence of nearby metal objects. Laboratory evaluation of the connector tester has validated the measurement technique. The magnitude of the load current (50 Hz) has minimal effect on the measurement accuracy. Addition of a suitable battery-based power supply system and isolated communications (probably radio), together with refinement of the printed circuit board design and software, are the remaining development steps towards a production instrument.
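A simplified, lock-in style sketch of the measurement idea described above: if the sampled flux reference is taken to be in phase with the resistive voltage drop, then the induced (inductive) voltage appears in quadrature and is rejected by in-phase demodulation. The function, the sampling parameters and this phase assumption are illustrative simplifications, not the prototype's actual signal processing.

```python
import numpy as np

def connector_resistance_uohm(v, i, flux_ref, fs, f_test):
    """Resistance estimate (micro-ohms) from sampled voltage v, current i
    and a flux reference signal, all at the injected test frequency f_test."""
    t = np.arange(len(v)) / fs
    osc = np.exp(-1j * 2 * np.pi * f_test * t)
    flux_phasor = np.mean(flux_ref * osc)        # complex amplitude of the flux sample
    v_phasor = np.mean(v * osc)                  # complex amplitude of the voltage
    i_phasor = np.mean(i * osc)                  # complex amplitude of the current
    # Use the flux phasor as the zero-phase reference and keep only the
    # in-phase (resistive) component of the measured voltage.
    v_resistive = np.real(v_phasor * np.conj(flux_phasor)) / np.abs(flux_phasor)
    return v_resistive / np.abs(i_phasor) * 1e6
```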
Abstract:
Basic competencies in assessing and treating substance use disorders should be core to the training of any clinical psychologist, because of the high frequency of risky or problematic substance use in the community, and its high co-occurrence with other problems. Skills in establishing trust and a therapeutic alliance are particularly important in addiction, given the stigma and potential for legal sanctions that surround it. The knowledge and skills of all clinical practitioners should be sufficient to allow valid screening and diagnosis of substance use disorders, accurate estimation of consumption and a basic functional analysis. Practitioners should also be able to undertake brief interventions including motivational interviews, and appropriately apply generic interventions such as problem solving or goal setting to addiction. Furthermore, clinical psychologists should have an understanding of the nature, evidence base and indications for biochemical assays, pharmacotherapies and other medical treatments, and ways these can be integrated with psychological practice. Specialists in addiction should have more sophisticated competencies in each of these areas. They need to have a detailed understanding of current addiction theories and basic and applied research, be able to undertake and report on a detailed psychological assessment, and display expert competence in addiction treatment. These skills should include an ability to assess and manage complex or co-occurring problems, to adapt interventions to the needs of different groups, and to assist people who have not responded to basic treatments. They should also be able to provide consultation to others, undertake evaluations of their practice, and monitor and evaluate emerging research data in the field.