889 results for Signal priority


Relevance: 100.00%

Abstract:

Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies the parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections while considering the performance of both transit and general vehicles. Unlike the simple genetic algorithm (GA), the PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control.

An important component of the proposed method is a microscopic delay estimation model designed specifically for optimizing adaptive traffic signals with TSP. Macroscopic delay models, such as the Highway Capacity Manual (HCM) delay model, cannot accurately account for the effects of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is also a decision variable. A "flex-phase" concept was introduced in the proposed microscopic delay estimation model to overcome these limitations.

The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool for designing and evaluating different TSP strategies under both actuated and adaptive signal control.
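As a rough illustration of the parallel-evaluation idea behind a PGA, the sketch below runs a small genetic algorithm whose fitness evaluations are distributed across worker processes while searching over cycle length and green split for a hypothetical two-phase intersection. The delay() function is a toy Webster-style surrogate, not the dissertation's microscopic delay model, and the bounds, demand values, and GA settings are all assumptions made for this example.

import random
from multiprocessing import Pool

CYCLE_MIN, CYCLE_MAX = 60.0, 120.0     # cycle-length bounds in seconds (assumed)
DEMAND = (0.45, 0.35)                  # assumed flow ratios for the two phases

def delay(plan):
    """Toy Webster-style delay surrogate; a stand-in for the microscopic model."""
    cycle, split = plan
    greens = (split * cycle, (1.0 - split) * cycle)
    total = 0.0
    for g, y in zip(greens, DEMAND):
        x = min(y * cycle / max(g, 1e-6), 0.99)              # degree of saturation
        total += 0.5 * cycle * (1.0 - g / cycle) ** 2 / (1.0 - x * g / cycle)
    return total

def random_plan():
    return (random.uniform(CYCLE_MIN, CYCLE_MAX), random.uniform(0.3, 0.7))

def mutate(plan):
    cycle, split = plan
    return (min(max(cycle + random.gauss(0, 5.0), CYCLE_MIN), CYCLE_MAX),
            min(max(split + random.gauss(0, 0.05), 0.3), 0.7))

if __name__ == "__main__":
    population = [random_plan() for _ in range(40)]
    with Pool() as pool:                                     # parallel fitness evaluation
        for _ in range(30):                                  # generations
            fitness = pool.map(delay, population)
            ranked = [p for _, p in sorted(zip(fitness, population))]
            elite = ranked[:10]
            population = elite + [mutate(random.choice(elite)) for _ in range(30)]
    best = min(population, key=delay)
    print("cycle %.1f s, green split %.2f, delay %.1f s/veh" % (best[0], best[1], delay(best)))

The parallel pool only speeds up fitness evaluation here; island-model variants that evolve subpopulations in parallel are another common PGA layout.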

Relevance: 30.00%

Abstract:

Numerous studies have shown that attention is biased toward threatening events. More recent evidence has also found attentional biases for stimuli that are relevant to the current and temporary goals of an individual. We examined whether goal-relevant information still evokes an attentional bias when this information competes with threatening events. In three experiments, participants performed a dot probe task combined with a separate task that induced a temporary goal. The results of Experiment 1 showed that attention was oriented to goal-relevant pictures in the dot probe task when these pictures were simultaneously presented with neutral or threatening pictures. Whether the goal-relevant pictures themselves were threatening or neutral did not influence the results. Experiment 2 replicated these findings in a sample of highly trait-anxious participants. Experiment 3 showed that attention was automatically deployed to stimuli relevant to a temporary goal even in the presence of stimuli that signal imminent and genuine threat (i.e., a colored patch signaling the presentation of an aversive noise). These findings further corroborate the conclusion that an individual's current and temporary goals guide early attentional processes.

Relevance: 30.00%

Abstract:

Research on blindsight, neglect/extinction, and phantom limb syndromes, as well as electrical measurements of mammalian brain activity, have suggested the dependence of vivid perception on both incoming sensory information at the primary sensory cortex and reentrant information from the associative cortex. Coherence between incoming and reentrant signals seems to be a necessary condition for (conscious) perception. The general reticular activating system and local electrical synchronization are some of the tools used by the brain to establish coarse coherence at the sensory cortex, upon which biochemical processes are coordinated. Besides electrical synchrony and chemical modulation at the synapse, a central mechanism supporting such coherence is the N-methyl-D-aspartate channel, working as a 'coincidence detector' for an incoming signal causing the depolarization necessary to remove Mg2+, and reentrant information releasing the glutamate that finally prompts Ca2+ entry. We propose that a signal transduction pathway activated by Ca2+ entry into cortical neurons is in charge of triggering a quantum computational process that accelerates inter-neuronal communication, thus solving systemic conflict and supporting the unity of consciousness. © 2001 Elsevier Science Ltd.

Relevance: 30.00%

Abstract:

Deregulation and market practices in the power industry have brought great challenges to the system planning area. In particular, they introduce a variety of uncertainties into system planning, and new techniques are required to cope with such uncertainties. As a promising approach, probabilistic methods are attracting more and more attention from system planners. In small signal stability analysis, generation control parameters play an important role in determining the stability margin. The objective of this paper is to investigate the sensitivity characteristics of the power system state matrix with respect to system parameter uncertainties, using both analytical and numerical approaches, and to identify the parameters that have the greatest impact on the system eigenvalues and, therefore, on the system stability properties. Variations in these identified parameters should be investigated with priority. The results can help Regional Transmission Organizations (RTOs) and Independent System Operators (ISOs) perform planning studies under the open access environment.
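As a small illustration of the analytical-versus-numerical comparison described above, the sketch below computes the sensitivity of each eigenvalue of a toy 2x2 state matrix to a control gain k using the standard left/right-eigenvector formula, and checks it against a central finite difference. The matrix A(k) and the gain are assumptions for illustration only, not the paper's system model.

import numpy as np

def A(k):
    # assumed toy state matrix parameterized by a control gain k
    return np.array([[0.0, 1.0],
                     [-5.0, -k]])

dA = np.array([[0.0, 0.0],
               [0.0, -1.0]])          # dA/dk for this toy matrix

k0, h = 1.2, 1e-5
lam, V = np.linalg.eig(A(k0))         # right eigenvectors (columns of V)
lamT, U = np.linalg.eig(A(k0).T)      # left eigenvectors obtained via the transpose

for i in range(len(lam)):
    j = np.argmin(np.abs(lamT - lam[i]))              # match left/right eigenpairs
    phi, psi = V[:, i], U[:, j]
    sens = (psi @ dA @ phi) / (psi @ phi)             # analytical dlambda/dk
    lp = np.linalg.eigvals(A(k0 + h))
    lm = np.linalg.eigvals(A(k0 - h))
    fd = (lp[np.argmin(np.abs(lp - lam[i]))]
          - lm[np.argmin(np.abs(lm - lam[i]))]) / (2 * h)   # numerical check
    print("eigenvalue", np.round(lam[i], 4),
          "analytical sensitivity", np.round(sens, 4),
          "finite difference", np.round(fd, 4))

Parameters whose sensitivities have large magnitude (or push eigenvalues toward the right half-plane) would be the ones flagged for priority study.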

Relevance: 30.00%

Abstract:

This research pursued the conceptualization and real-time verification of a system that allows a computer user to control the cursor of a computer interface without using his/her hands. The target user groups for this system are individuals who are unable to use their hands due to spinal dysfunction or other afflictions, and individuals who must use their hands for higher priority tasks while still requiring interaction with a computer.

The system receives two forms of input from the user: Electromyogram (EMG) signals from muscles in the face and point-of-gaze coordinates produced by an Eye Gaze Tracking (EGT) system. In order to produce reliable cursor control from the two forms of user input, the development of this EMG/EGT system addressed three key requirements: an algorithm was created to accurately translate EMG signals due to facial movements into cursor actions, a separate algorithm was created that recognized an eye gaze fixation and provided an estimate of the associated eye gaze position, and an information fusion protocol was devised to efficiently integrate the outputs of these algorithms.

Experiments were conducted to compare the performance of EMG/EGT cursor control to EGT-only control and mouse control. These experiments took the form of two different types of point-and-click trials. The data produced by these experiments were evaluated using statistical analysis, Fitts' Law analysis, and target re-entry (TRE) analysis.

The experimental results revealed that though EMG/EGT control was slower than EGT-only and mouse control, it provided effective hands-free control of the cursor without a spatial accuracy limitation, and it also facilitated a reliable click operation. This combination of qualities is not possessed by either EGT-only or mouse control, making EMG/EGT cursor control a unique and practical alternative for a user's cursor control needs.
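For context on the Fitts' Law analysis mentioned above, the short sketch below computes the index of difficulty and throughput for a few point-and-click trials using the Shannon formulation; the distances, widths, and movement times are made-up numbers, not the study's measurements.

import math

# (distance_px, width_px, movement_time_s) for hypothetical point-and-click trials
trials = [(400, 40, 1.8),
          (600, 40, 2.1),
          (400, 80, 1.4)]

for d, w, mt in trials:
    index_of_difficulty = math.log2(d / w + 1)     # Shannon formulation, in bits
    throughput = index_of_difficulty / mt          # bits per second
    print(f"D={d}px W={w}px  ID={index_of_difficulty:.2f} bits  "
          f"MT={mt:.1f}s  TP={throughput:.2f} bits/s")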

Relevance: 30.00%

Abstract:

The balance between the costs and benefits of conspicuous signals ensures that the expression of those signals is related to the quality of the bearer. Plastic signals could enable males to maximize conspicuous traits to impress mates and competitors, but to reduce the expression of those traits to minimize signaling costs, potentially compromising the information conveyed by the signals.

I investigated the effect of signal enhancement on the information coded by the biphasic electric signal pulse of the gymnotiform fish Brachyhypopomus gauderio. Increases in population density drive males to enhance the amplitude of their signals. I found that signal amplitude enhancement improves the information about the signaler's size. Furthermore, I found that the elongation of the signal's second phase conveys information about androgen levels in both sexes, gonad size in males, and estrogen levels in females. Androgens link the duration of the signal's second phase to other androgen-mediated traits, making the signal an honest indicator of reproductive state and aggressive motivation.

Signal amplitude enhancement facilitates the assessment of the signaler's resource holding potential, important for male-male interactions, while signal duration provides information about aggressive motivation to same-sex competitors and about reproductive state to the opposite sex. Moreover, I found that female signals also change in accordance with the social environment: females increase the amplitude of their signal when population density increases and elongate the duration of their signal's second phase when the sex ratio becomes female-biased, indicating that some degree of sexual selection operates in females.

I also studied whether male B. gauderio use signal plasticity to reduce the cost of reproductive signaling when energy is limited. Surprisingly, I found that food limitation promotes investment in reproduction, manifested as signal enhancement and elevated androgen levels. The short lifespan and single breeding season of B. gauderio diminish the advantage of energy savings and give priority to sustaining reproduction. I conclude that the electric signal of B. gauderio provides reliable information about the signaler, and that the quality of this information is reinforced rather than degraded by signal enhancement.

Relevance: 20.00%

Abstract:

Friction and triboelectrification of materials show a strong correlation during sliding contacts. Friction force fluctuations are always accompanied by two tribocharging events at metal-insulator [e.g., polytetrafluoroethylene (PTFE)] interfaces: injection of charged species from the metal into PTFE, followed by the flow of charges from PTFE to the metal surface. Adhesion maps obtained by atomic force microscopy (AFM) show that the pull-off force increases from 10 to 150 nN within the region of contact, reflecting a resilient electrostatic adhesion between PTFE and the metallic surface. The reported results suggest that friction and triboelectrification have a common origin that must be associated with the occurrence of strong electrostatic interactions at the interface.

Relevance: 20.00%

Abstract:

Universidade Estadual de Campinas, Faculdade de Educação Física

Relevance: 20.00%

Abstract:

The analysis of Macdonald for electrolytes is generalized to the case in which two groups of ions are present. We assume that the electrolyte can be considered a dispersion of ions in a dielectric liquid and that ionic recombination can be neglected. We present the differential equations governing the ionic redistribution when the liquid is subjected to an external electric field, describing the simultaneous diffusion of the two groups of ions in the presence of their own space charge fields. We investigate the influence of the ions on the impedance spectroscopy of an electrolytic cell. In the analysis, we assume that the ions within each group have equal mobility, that the electrodes are perfectly blocking, and that adsorption phenomena can be neglected. In this framework, it is shown that the real part of the electrical impedance of the cell has a frequency dependence presenting two plateaux, related to the ambipolar and free diffusion coefficients. The importance of the considered problem for the ionic characterization performed by means of the impedance spectroscopy technique is discussed. (c) 2008 American Institute of Physics.

Relevance: 20.00%

Abstract:

Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted; four wavelet basis functions were considered in this study. Then, we provide the average accuracy values (estimated via cross-validation) delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results indicate that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems less relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). (C) 2011 Elsevier B.V. All rights reserved.
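To illustrate the kind of radius sweep and cross-validated evaluation described above, the sketch below trains a standard RBF-kernel SVM over a few kernel radii using scikit-learn. The features are synthetic stand-ins generated for the example, not the clinical EEG data, and the class sizes, feature dimension, and radius values are assumptions.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# synthetic stand-ins for wavelet-derived EEG features: 100 "normal" and 100 "epileptic" segments
X = np.vstack([rng.normal(0.0, 1.0, (100, 8)),
               rng.normal(1.0, 1.5, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

for radius in (0.5, 1.0, 2.0, 4.0):                  # sweep of the Gaussian kernel radius
    clf = SVC(kernel="rbf", C=1.0, gamma=1.0 / (2.0 * radius ** 2))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"radius = {radius}  mean 5-fold CV accuracy = {scores.mean():.3f}")

Plotting accuracy against the radius (and against the feature type) yields the kind of sensitivity profile the paper discusses.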

Relevance: 20.00%

Abstract:

The goal of this paper is to study and propose a new technique for noise reduction during the reconstruction of speech signals, particularly for biomedical applications. The proposed method is based on Kalman filtering in the time domain combined with spectral subtraction. Comparison with the discrete Kalman filter in the frequency domain shows better performance of the proposed technique. The performance is evaluated using the segmental signal-to-noise ratio and the Itakura-Saito distance. Results show that the Kalman filter in the time domain combined with spectral subtraction is more robust and efficient, improving the Itakura-Saito distance by up to four times. (C) 2007 Elsevier Ltd. All rights reserved.
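The sketch below shows only the spectral-subtraction stage of such a pipeline (the Kalman stage is omitted): frame-wise magnitude subtraction with overlap-add resynthesis. The frame length, hop, and the assumption that the leading frames contain only noise are choices made for this example, not the paper's settings.

import numpy as np

def spectral_subtraction(x, frame=256, noise_frames=10):
    hop = frame // 2
    win = np.hanning(frame)
    # estimate the noise magnitude spectrum from the first few (assumed silent) frames
    noise_mag = np.mean([np.abs(np.fft.rfft(win * x[i * hop:i * hop + frame]))
                         for i in range(noise_frames)], axis=0)
    y = np.zeros_like(x)
    for start in range(0, len(x) - frame, hop):
        seg = win * x[start:start + frame]
        spec = np.fft.rfft(seg)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)        # subtract, floor at zero
        clean = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame)
        y[start:start + frame] += clean * win                  # overlap-add
    return y

# usage with a synthetic noisy tone
fs = 8000
t = np.arange(fs) / fs
noisy = np.sin(2 * np.pi * 440 * t) + 0.3 * np.random.randn(fs)
enhanced = spectral_subtraction(noisy)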

Relevance: 20.00%

Abstract:

Void fraction sensors are important instruments not only for monitoring two-phase flow, but also for furnishing an important parameter for obtaining the flow pattern map and the two-phase flow heat transfer coefficient. This work presents the experimental results obtained from the analysis of two axially spaced multiple-electrode impedance sensors tested for void fraction measurements in an upward air-water two-phase flow in a vertical tube. An electronic circuit was developed for signal generation and post-treatment of each sensor signal. By phase shifting the signals supplying the electrodes, it was possible to establish a rotating electric field sweeping across the test section. The fundamental principle of using a multiple-electrode configuration is to reduce signal sensitivity to the non-uniform cross-section void fraction distribution. Static calibration curves were obtained for both sensors, and dynamic signal analyses for bubbly, slug, and turbulent churn flows were carried out. Flow parameters such as Taylor bubble velocity and length were obtained using cross-correlation techniques. As an application of the tested void fraction sensors, vertical flow pattern identification was established by using the probability density function technique for void fractions ranging from 0% to nearly 70%.
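The sketch below illustrates the cross-correlation step used to estimate a transit velocity from two axially spaced sensor signals: the lag of the correlation peak gives the transit time, and dividing the sensor spacing by it gives the velocity. The signals are synthetic Gaussian pulses, and the sampling rate and spacing are assumed values, not the experimental ones.

import numpy as np

fs = 1000.0          # sampling rate, Hz (assumed)
spacing = 0.10       # axial distance between the two sensors, m (assumed)

t = np.arange(0, 5, 1 / fs)
upstream = np.exp(-((t - 2.00) / 0.05) ** 2)          # bubble passing sensor 1
downstream = np.exp(-((t - 2.04) / 0.05) ** 2)        # same bubble 40 ms later at sensor 2

corr = np.correlate(downstream - downstream.mean(),
                    upstream - upstream.mean(), mode="full")
lag = np.argmax(corr) - (len(t) - 1)                  # peak lag in samples
transit_time = lag / fs
print(f"transit time = {transit_time * 1e3:.1f} ms, "
      f"bubble velocity = {spacing / transit_time:.2f} m/s")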

Relevance: 20.00%

Abstract:

Real-time viscosity measurement remains a necessity for highly automated industry. To address this problem, many studies have been carried out using an ultrasonic shear wave reflectance method. This method is based on the determination of the magnitude and phase of the complex reflection coefficient at the solid-liquid interface. Although the magnitude is a stable quantity and its measurement is relatively simple and precise, phase measurement is a difficult task because of its strong temperature dependence. A simplified method that uses only the magnitude of the reflection coefficient and that is valid in the Newtonian regime has been proposed by some authors, but the obtained viscosity values do not match conventional viscometry measurements. In this work, a mode conversion measurement cell was used to measure glycerin viscosity as a function of temperature (15 to 25 degrees C) and corn syrup-water mixtures as a function of concentration (70 to 100 wt% corn syrup). Tests were carried out at 1 MHz. A novel signal processing technique that calculates the reflection coefficient magnitude over a frequency band, instead of at a single frequency, was studied. The effects of the bandwidth on magnitude and viscosity were analyzed, and the results were compared with the values predicted by the Newtonian liquid model. The frequency band technique improved the magnitude results. The obtained viscosity values came close to those measured by the rotational viscometer, with percentage errors of up to 14%, whereas errors of up to 96% were found for the single-frequency method.
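As a rough illustration of the magnitude-only Newtonian inversion mentioned above (not the paper's processing chain), the sketch below assumes the liquid's shear impedance has equal real and imaginary parts, Z_L = sqrt(w*rho*eta/2)*(1+j), and recovers viscosity from |r| alone by solving the resulting quadratic. The solid impedance, liquid density, and test viscosity are assumed, illustrative values; the forward model is used only to check that the inversion is self-consistent.

import numpy as np

Z_S = 8.8e6          # shear impedance of the solid, Pa*s/m (assumed, quartz-like)
rho = 1260.0         # liquid density, kg/m^3 (assumed, glycerin-like)
f = 1e6              # 1 MHz, as in the abstract
w = 2 * np.pi * f

def reflectance(eta):
    """Forward model: |r| at the solid-liquid interface for a Newtonian liquid."""
    Z_L = np.sqrt(1j * w * rho * eta)        # = sqrt(w*rho*eta/2) * (1 + 1j)
    return abs((Z_S - Z_L) / (Z_S + Z_L))

def viscosity_from_magnitude(r_mag):
    """Invert |r| assuming the Newtonian condition R = X for Z_L = R + jX."""
    m = r_mag ** 2
    # |r|^2 = ((Z_S - R)^2 + R^2) / ((Z_S + R)^2 + R^2) leads to
    # 2R^2(1-m) - 2*Z_S*R*(1+m) + Z_S^2*(1-m) = 0; take the physical (smaller) root
    disc = np.sqrt((1 + m) ** 2 - 2 * (1 - m) ** 2)
    R = Z_S * ((1 + m) - disc) / (2 * (1 - m))
    return 2 * R ** 2 / (w * rho)

eta_true = 0.9       # Pa*s, glycerin-like test value
r = reflectance(eta_true)
print(f"|r| = {r:.5f}, recovered viscosity = {viscosity_from_magnitude(r):.3f} Pa*s")

Because |r| stays very close to 1 for typical liquids, small errors in the measured magnitude translate into large viscosity errors, which is one reason averaging the magnitude over a frequency band helps.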

Relevance: 20.00%

Abstract:

This paper presents the results of an in-depth study of the Barkhausen effect signal properties for plastically deformed Fe-2%Si samples. The investigated samples were deformed by cold rolling up to a plastic strain of epsilon(p) = 8%. The first approach consisted of time-domain-resolved pulse and frequency analysis of the Barkhausen noise signals, whereas the complementary study consisted of time-resolved pulse count analysis as well as a total pulse count. The latter included determination of the time distribution of pulses for different threshold voltage levels, as well as the total pulse count as a function of both the amplitude and the duration of the pulses. The obtained results suggest that the observed increase in Barkhausen noise signal intensity as a function of deformation level is mainly due to an increase in the number of larger pulses.
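To make the threshold-based pulse counting concrete, the sketch below counts pulses and their durations on a synthetic, noise-like stand-in signal: a pulse starts when the rectified signal rises above a threshold and ends when it falls back below it. The signal model, sampling rate, and threshold levels are illustrative assumptions, not the measured Barkhausen data.

import numpy as np

rng = np.random.default_rng(1)
fs = 200_000                                   # sampling rate, Hz (assumed)
bn_signal = rng.laplace(0.0, 0.2, fs // 10)    # bursty stand-in for the Barkhausen signal

def count_pulses(x, threshold):
    above = np.abs(x) > threshold
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.insert(starts, 0, 0)       # signal begins above the threshold
    if above[-1]:
        ends = np.append(ends, len(x))         # signal ends above the threshold
    durations = (ends - starts) / fs
    return len(starts), durations

for thr in (0.3, 0.6, 0.9):
    n, dur = count_pulses(bn_signal, thr)
    mean_us = 1e6 * dur.mean() if len(dur) else 0.0
    print(f"threshold {thr:.1f} V: {n} pulses, mean duration {mean_us:.1f} us")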

Relevance: 20.00%

Abstract:

The general objective of this study was to evaluate the ordered weighted averaging (OWA) method, integrated with a geographic information system (GIS), for defining priority areas for forest conservation in a Brazilian river basin, aiming to increase regional biodiversity. We demonstrate how a range of alternatives can be obtained by applying OWA, including the alternative produced by the weighted linear combination method, and how the analytic hierarchy process (AHP) can be used to structure the decision problem and assign importance to each criterion. The criteria considered important for this study were: proximity to forest patches; proximity among forest patches with larger core areas; proximity to surface water; distance from roads; distance from urban areas; and vulnerability to erosion. OWA requires two sets of weights: the relative criterion importance weights and the order weights. Thus, a Participatory Technique was used to define the criteria set and the criterion importance (based on AHP). To obtain the second set of weights, we considered the influence of each criterion, as well as the importance of each one, in this decision-making process. The sensitivity analysis indicated coherence among the criterion importance weights, the order weights, and the solution. According to this analysis, only the proximity to surface water criterion is not important for identifying priority areas for forest conservation. Finally, we can highlight that the OWA method is flexible, easy to implement and, mainly, facilitates a better understanding of the alternative land-use suitability patterns. (C) 2008 Elsevier B.V. All rights reserved.
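To make the OWA combination step concrete, the sketch below evaluates a single map cell under one common GIS formulation in which the product of criterion importance weights and order weights is renormalized before aggregation. The six criteria mirror those listed above, but every score and weight value is illustrative, not taken from the study.

import numpy as np

criteria = ["proximity to forest patches", "proximity to large-core patches",
            "proximity to surface water", "distance from roads",
            "distance from urban areas", "vulnerability to erosion"]
scores = np.array([0.80, 0.60, 0.40, 0.90, 0.70, 0.50])        # standardized cell scores (0-1), illustrative
importance = np.array([0.30, 0.25, 0.05, 0.15, 0.10, 0.15])    # AHP-style criterion weights (sum to 1)
order_w = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.25])       # order weights (sum to 1)

# rank the cell's scores from highest to lowest and carry each criterion's weight along
rank = np.argsort(scores)[::-1]
z = scores[rank]
u = importance[rank]

# renormalize the product of criterion and order weights, then aggregate
combined = (u * order_w) / np.sum(u * order_w)
owa_value = float(np.sum(combined * z))

print("ranked criteria:", [criteria[i] for i in rank])
print(f"OWA suitability for this cell: {owa_value:.3f}")

Shifting the order weights toward the first positions makes the operator behave more like an OR (optimistic) combination, while shifting them toward the last positions makes it behave more like an AND (pessimistic) one; equal order weights reproduce the weighted linear combination result.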