957 results for Traffic Speed Change.
Abstract:
Purpose Road policing is a key method used to improve driver compliance with road laws. However, we have a very limited understanding of young drivers' perceptions of police enforcement of road laws. This paper addresses this gap. Design/Methodology/Approach In this study, 238 young drivers from Queensland, Australia, aged 17-24 years (M = 18, SD = 1.54) and holding a provisional (intermediate) driver's licence, completed an online survey regarding their perceptions of police enforcement and their driver thrill-seeking tendencies. The study considered whether these factors influenced self-reported transient (e.g., travelling speed) and fixed (e.g., blood alcohol concentration) road violations by the young drivers. Findings The results indicate that having been detected by police for a traffic offence, and the frequency with which drivers display P-plates on their vehicle to indicate their licence status, are associated with both self-reported transient and fixed rule violations. Licence type, police avoidance behaviours and driver thrill seeking affected transient rule violations only, while perceptions of police enforcement affected fixed rule violations only. Practical implications This study suggests that police enforcement of young driver violations of traffic laws may not be as effective as expected, and that the way in which police enforce road laws for young novice drivers needs to be improved. Originality/value This paper identifies that young drivers' perceptions of police enforcement do not influence all types of road offences.
Abstract:
This thesis covers three subject areas concerning particulate matter in urban air quality: 1) analysis of measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations in relation to traffic sources, and at different times of year and day; 2) the evolution of the number concentrations and sizes of traffic-exhaust-originated particulate matter at the local street scale, studied by combining a dispersion model with an aerosol process model; 3) analysis of selected high particulate matter concentration situations with regard to their meteorological origins, especially temperature inversions, in the HMA and three other European cities. The prediction of the occurrence of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and season. The diurnal variation of local vehicular traffic flows showed no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), while the PM2.5 concentrations were mostly of regional and long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate total number concentrations, while condensation and coagulation had only a minimal effect on the Aitken mode number concentrations.
The included air pollution episodes were chosen on the basis of occurring in either winter or spring and having an at least partly local origin. In the HMA, air pollution episodes were shown to be linked to predominantly stable atmospheric conditions with high atmospheric pressure and low wind speeds in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors of elevated PM10 concentrations were shown to be the temporal (hourly) evolution of temperature inversions, stable atmospheric stratification and, in some cases, wind speed. Concerning weather prediction during particulate-matter-related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
Abstract:
This paper reports new results concerning the capabilities of a family of service disciplines aimed at providing per-connection end-to-end delay (and throughput) guarantees in high-speed networks. This family consists of the class of rate-controlled service disciplines, in which traffic from a connection is reshaped to conform to specific traffic characteristics at every hop on its path. When used together with a scheduling policy at each node, this reshaping enables the network to provide end-to-end delay guarantees to individual connections. The main advantages of this family of service disciplines are their implementation simplicity and flexibility. On the other hand, because the delay guarantees provided are based on summing worst-case delays at each node, it has also been argued that the resulting bounds are very conservative, which may more than offset the benefits. In particular, other service disciplines, such as those based on Fair Queueing or Generalized Processor Sharing (GPS), have been shown to provide much tighter delay bounds. As a result, these disciplines, although more complex from an implementation point of view, have been considered for the purpose of providing end-to-end guarantees in high-speed networks. In this paper, we show that through "proper" selection of the reshaping to which we subject the traffic of a connection, the penalty incurred by computing end-to-end delay bounds based on worst cases at each node can be alleviated. Specifically, we show how rate-controlled service disciplines can be designed to outperform the Rate Proportional Processor Sharing (RPPS) service discipline. Based on these findings, we believe that rate-controlled service disciplines provide a very powerful and practical solution to the problem of providing end-to-end guarantees in high-speed networks.
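The summing of per-hop worst-case delays described above can be sketched with elementary network-calculus arithmetic. This is a minimal illustration under assumed fluid-model bounds (burst/service-rate per hop), not the paper's actual analysis; because reshaping restores the same traffic envelope at every hop, the per-hop worst case does not grow along the path:

```python
def hop_delay_bound(sigma_bits, service_rate_bps):
    """Worst-case queueing delay (s) for a flow reshaped to burst size
    `sigma_bits` at a node guaranteeing rate `service_rate_bps`
    (simple fluid-model bound, assumed for illustration)."""
    return sigma_bits / service_rate_bps

def end_to_end_bound(sigma_bits, service_rate_bps, hops):
    # Rate-controlled disciplines reshape the flow to the same envelope at
    # every hop, so the end-to-end bound is a plain sum of identical
    # per-hop worst cases.
    return hops * hop_delay_bound(sigma_bits, service_rate_bps)

# 10 kB burst, 1 Mb/s guaranteed rate, 5 hops: 0.08 s per hop, 0.4 s total.
bound_s = end_to_end_bound(sigma_bits=80_000, service_rate_bps=1_000_000, hops=5)
```

The paper's point is that choosing the reshaping parameters carefully keeps this conservative-looking sum competitive with GPS-style bounds.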
Abstract:
In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed, which is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in the speed control mode with the speed reference being dynamically modified in accordance with the magnitude and direction of change of active power. The peak power points in the P-omega curve correspond to dP/domega = 0. This fact is made use of in the optimum point search algorithm. The generator considered is a wound rotor induction machine whose stator is connected directly to the grid and the rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All of the control loops are executed by a single-chip digital signal processor (DSP) controller TMS320F240. Experimental results show that the performance of the control algorithm compares well with the conventional torque control method.
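The peak-power search described above (step the speed reference, watch the sign of the power change, reverse when power falls, settle where dP/domega = 0) can be sketched as a perturb-and-observe loop. The turbine curve below is hypothetical; the paper's implementation runs on a DSP inside flux-oriented current loops, which this sketch omits:

```python
def track_peak_power(power_at, omega=8.0, step=0.2, iters=200):
    """Perturb-and-observe search over the P-omega curve: keep stepping the
    speed reference in the same direction while power rises, reverse when it
    falls, so the reference oscillates tightly around dP/domega = 0."""
    direction = 1.0
    p_prev = power_at(omega)
    for _ in range(iters):
        omega += direction * step
        p = power_at(omega)
        if p < p_prev:            # stepped past the peak: reverse direction
            direction = -direction
        p_prev = p
    return omega

# Hypothetical turbine curve with its maximum at omega = 12 (arbitrary units).
peak = track_peak_power(lambda w: -(w - 12.0) ** 2 + 100.0)
```

Note that, as in the paper, no turbine parameters or air density enter the search; only measured power and the commanded speed are used.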
Abstract:
With the emergence of large-volume and high-speed streaming data, the recent techniques for stream mining of CFIs (closed frequent itemsets) will become inefficient. When concept drift occurs at a slow rate in high-speed data streams, the rate of change of information across different sliding windows will be negligible. The user will therefore not miss changes in information if the window is slid by multiple transactions at a time. We therefore propose a novel approach for mining CFIs cumulatively, using a slide width (≥ 1) over high-speed data streams. However, it is nontrivial to mine CFIs cumulatively over a stream, because such growth may lead to the generation of an exponential number of candidates for closure checking. In this study, we develop an efficient algorithm, stream-close, for mining CFIs over a stream by exploring some interesting properties. Our performance study reveals that stream-close achieves good scalability and has promising results.
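The sliding-window bookkeeping with slide width ≥ 1 can be sketched as follows. This is a deliberately brute-force illustration (itemsets up to size 2, plain frequency counts); the actual stream-close algorithm maintains *closed* frequent itemsets incrementally rather than rescanning the window:

```python
from collections import Counter, deque
from itertools import combinations

def frequent_itemsets(window, min_support):
    """Exhaustively count itemsets (here up to size 2, for brevity) in the
    current window and keep those meeting min_support."""
    counts = Counter()
    for txn in window:
        items = sorted(set(txn))
        for r in (1, 2):
            for combo in combinations(items, r):
                counts[combo] += 1
    return {s for s, c in counts.items() if c >= min_support}

def mine_stream(stream, window_size=4, slide=2, min_support=2):
    """Slide the window by `slide` (>= 1) transactions at a time: with slow
    concept drift, a larger slide loses little information but halves (or
    better) the number of window evaluations."""
    window = deque(maxlen=window_size)
    results = []
    for i, txn in enumerate(stream, 1):
        window.append(txn)
        if i >= window_size and (i - window_size) % slide == 0:
            results.append(frequent_itemsets(window, min_support))
    return results

out = mine_stream([{"a", "b"}, {"a"}, {"a", "b"}, {"b", "c"}, {"a", "b"}, {"c"}])
```

With six transactions, a window of four and a slide of two, the window is evaluated only twice instead of three times for a slide of one.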
Abstract:
In this study, the influence of tool rotation speed and feed rate on the forming limit of friction stir welded Al 6061-T651 sheets has been investigated. The forming limit curve was evaluated by the limit dome height test performed on all the friction stir welded sheets. The welding trials were conducted at tool rotation speeds of 1300 and 1400 r/min and feed rates of 90 and 100 mm/min; a third trial was performed at a rotation speed of 1500 r/min and a feed rate of 120 mm/min. It is found that increasing the tool rotation speed from 1300 to 1400 r/min at a constant feed rate improved the forming limit of the friction stir welded blank, while increasing the feed rate from 90 to 100 mm/min at a constant tool rotation speed decreased it. The forming limit of friction stir welded sheets is better than that of unwelded sheets. The thickness gradient after forming is severe in friction stir welded blanks made at higher feed rate and lower rotation speed. The strain hardening exponent of the weld (n) increases with tool rotation speed and decreases with feed rate. It has been demonstrated that the change in the forming limit of friction stir welded sheets with respect to welding parameters is due to the severity of the thickness distribution and the strain hardening exponent of the weld region during forming. There is not much variation in dome height among the friction stir welded sheets tested. Compared with unwelded sheets, the dome height of friction stir welded sheets is higher in the near-plane-strain condition, but lower in stretching strain paths.
Abstract:
This paper presents an approach to model the expected impacts of climate change on irrigation water demand in a reservoir command area. A statistical downscaling model and an evapotranspiration model are used with general circulation model (GCM) output to predict the anticipated change in the monthly irrigation water requirement of a crop. Specifically, we quantify the likely changes in irrigation water demands at a location in the command area as a response to the projected changes in precipitation and evapotranspiration at that location. Statistical downscaling with canonical correlation analysis is carried out to develop future scenarios of meteorological variables (rainfall, relative humidity (RH), wind speed (U2), radiation, and maximum (Tmax) and minimum (Tmin) temperatures), starting from simulations provided by a GCM for a specified emission scenario. The medium-resolution Model for Interdisciplinary Research on Climate GCM is used with the A1B scenario to assess the likely changes in irrigation demands for paddy, sugarcane, permanent garden and semidry crops over the command area of the Bhadra reservoir, India. Results from the downscaling model suggest that the monthly rainfall is likely to increase in the reservoir command area. RH, Tmax and Tmin are also projected to increase, with small changes in U2. Consequently, the reference evapotranspiration, modeled by the Penman-Monteith equation, is predicted to increase. The irrigation requirements are assessed on a monthly scale at nine selected locations encompassing the Bhadra reservoir command area. The irrigation requirements are projected to increase in most cases, suggesting that the effect of the projected increase in rainfall on irrigation demands is offset by the effect of projected changes in the other meteorological variables (viz. Tmax and Tmin, solar radiation, RH and U2).
The irrigation demand assessment study carried out for a river basin will be useful for future irrigation management systems. Copyright (c) 2012 John Wiley & Sons, Ltd.
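The Penman-Monteith reference evapotranspiration mentioned above can be computed from exactly the meteorological variables the downscaling model produces. The sketch below follows the standard FAO-56 daily formulation (the paper does not state which variant it uses, so treat this as an illustrative stand-in); the example inputs are made up:

```python
import math

def et0_penman_monteith(t_mean, rh, u2, rn, g=0.0, gamma=0.0665):
    """FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
    t_mean: mean air temperature (deg C); rh: relative humidity (%);
    u2: wind speed at 2 m (m/s); rn: net radiation (MJ/m^2/day);
    g: soil heat flux (MJ/m^2/day); gamma: psychrometric constant (kPa/deg C)."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # sat. vapour pressure
    ea = es * rh / 100.0                                       # actual vapour pressure
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                # slope of sat. curve
    num = 0.408 * delta * (rn - g) + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Illustrative values: warmer temperatures push ET0, and hence irrigation
# demand, upward even when rainfall also increases.
et0 = et0_penman_monteith(t_mean=25.0, rh=60.0, u2=2.0, rn=15.0)
```

The irrigation requirement then follows as crop coefficient times ET0 minus effective rainfall, which is why the projected rises in Tmax, Tmin and radiation can offset a projected rise in rainfall.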
Abstract:
Enhancing the handover process in broadband wireless communication deployment has traditionally motivated many research initiatives. In a high-speed railway domain, the challenge is even greater. Owing to the long distances covered, the mobile node gets involved in a compulsory sequence of handover processes. Consequently, poor performance during the execution of these handover processes significantly degrades the global end-to-end performance. This article proposes a new handover strategy for the railway domain: the RMPA handover, a Reliable Mobility Pattern Aware IEEE 802.16 handover strategy "customized" for a high-speed mobility scenario. The stringent high mobility feature is balanced with three other positive features in a high-speed context: mobility pattern awareness, different sources for location discovery techniques, and a previously known traffic data profile. To the best of the authors' knowledge, there is no IEEE 802.16 handover scheme that simultaneously covers the optimization of the handover process itself and the efficient timing of the handover process. Our strategy covers both areas of research while providing a cost-effective and standards-based solution. To schedule the handover process efficiently, the RMPA strategy makes use of a context aware handover policy; that is, a handover policy based on the mobile node mobility pattern, the time required to perform the handover, the neighboring network conditions, the data traffic profile, the received power signal, and current location and speed information of the train. Our proposal merges all these variables in a cross layer interaction in the handover policy engine. It also enhances the handover process itself by establishing the values for the set of handover configuration parameters and mechanisms of the handover process. RMPA is a cost-effective strategy because compatibility with standards-based equipment is guaranteed. 
The major contributions of the RMPA handover are in areas that have been left open to the handover designer's discretion. Our simulation analysis validates the RMPA handover decision rules and design choices. Our results supporting a high-demand video application in the uplink stream show a significant improvement in the end-to-end quality of service parameters, including end-to-end delay (22%) and jitter (80%), when compared with a policy based on signal-to-noise-ratio information.
Abstract:
In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.
Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W_circ^SQL / W_circ). Here W_circ is the light power circulating in the interferometer arms and W_circ^SQL ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(-2R)) is injected into the interferometer's output port, the SQL can be beat with a much reduced laser power: h/h_SQL ~ √(e^(-2R) W_circ^SQL / W_circ). For realistic parameters (e^(2R) ≃ 10 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.
Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.
Abstract:
Daily and seasonal activity rhythms, swimming speed, and modes of swimming were studied in a school of spring-spawned age-0 bluefish (Pomatomus saltatrix) for nine months in a 121-kL research aquarium. Temperature was lowered from 20°C to 15°C, then returned to 20°C to match the seasonal cycle. The fish grew from a mean of 198 mm to 320 mm (n = 67). Bluefish swam faster and in a more organized school during the day (overall mean 47 cm/s) than at night (31 cm/s). Swimming speed declined in fall as temperature declined and accelerated in spring in response to the change in photoperiod. Besides powered swimming, bluefish used a gliding-upswimming mode, which had not been previously described for this species. To glide, a bluefish rolled onto its side, ceased body and tail beating, and coasted diagonally downward. Bluefish glided in all months of the study, usually in the dark, and most intensely in winter. Energy savings while gliding and upswimming may be as much as 20% of the energy used in powered swimming. Additional savings accrue from increased lift due to the hydrofoil created by the horizontal body orientation and slightly concave shape. Energy-saving swimming would be advantageous during migration and overwintering.
Abstract:
This paper presents a novel vision-chip architecture for fast traffic lane detection (FTLD). The architecture consists of a 32×32 SIMD processing element (PE) array processor and a dual-core RISC processor. The PE array processor performs low-level pixel-parallel image processing at high speed and outputs image features for high-level image processing without an I/O bottleneck; the dual-core processor carries out the high-level image processing. A parallel fast lane detection algorithm for this architecture is developed. An FPGA system with a CMOS image sensor is used to implement the architecture. Experimental results show that the system can perform fast traffic lane detection at a 50 fps rate. It is much faster than previous works and is robust enough to operate under various light intensities. The novel vision-chip architecture is able to meet the demands of real-time lane departure warning systems.
Abstract:
The quality of available network connections can often have a large impact on the performance of distributed applications. For example, document transfer applications such as FTP, Gopher and the World Wide Web suffer increased response times as a result of network congestion. For these applications, the document transfer time is directly related to the available bandwidth of the connection. Available bandwidth depends on two things: 1) the underlying capacity of the path from client to server, which is limited by the bottleneck link; and 2) the amount of other traffic competing for links on the path. If measurements of these quantities were available to the application, the current utilization of connections could be calculated. Network utilization could then be used as a basis for selection from a set of alternative connections or servers, thus providing reduced response time. Such a dynamic server selection scheme would be especially important in a mobile computing environment in which the set of available servers is frequently changing. In order to provide these measurements at the application level, we introduce two tools: bprobe, which provides an estimate of the uncongested bandwidth of a path; and cprobe, which gives an estimate of the current congestion along a path. These two measures may be used in combination to provide the application with an estimate of available bandwidth between server and client thereby enabling application-level congestion avoidance. In this paper we discuss the design and implementation of our probe tools, specifically illustrating the techniques used to achieve accuracy and robustness. We present validation studies for both tools which demonstrate their reliability in the face of actual Internet conditions; and we give results of a survey of available bandwidth to a random set of WWW servers as a sample application of our probe technique. 
We conclude with descriptions of other applications of our measurement tools, several of which are currently under development.
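The arithmetic that connects the two probe tools named above to server selection is straightforward: subtract the competing traffic from the bottleneck capacity to get available bandwidth, then pick the server with the shortest predicted transfer time. The sketch below is the obvious combination of the two measurements, not the tools' own code, and the server names and link figures are invented:

```python
def available_bandwidth(capacity_bps, competing_bps):
    """Combine a bprobe-style estimate of uncongested path capacity with a
    cprobe-style estimate of competing traffic on that path."""
    return max(capacity_bps - competing_bps, 0.0)

def pick_server(document_bits, candidates):
    """Dynamic server selection: choose the candidate whose path promises
    the shortest document transfer time."""
    def transfer_time(link):
        capacity, competing = link
        avail = available_bandwidth(capacity, competing)
        return document_bits / avail if avail > 0 else float("inf")
    return min(candidates, key=lambda name: transfer_time(candidates[name]))

servers = {
    "mirror-a": (10e6, 9e6),   # fast link but heavily congested: 1 Mb/s free
    "mirror-b": (2e6, 0.5e6),  # slower link, lightly loaded: 1.5 Mb/s free
}
best = pick_server(8e6, servers)
```

Note how the raw capacity ranking (mirror-a) and the available-bandwidth ranking (mirror-b) disagree, which is precisely why the paper measures congestion and not just capacity.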
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving core optical communication links closer and closer to their maximum capacity. The research community has clearly identified the coming approach of the nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats, contained in Chapter 3 of this thesis, was undertaken. The work investigates the proposed energy-efficient four-dimensional modulation formats. It begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams, and then carries out one of the first implementations of one such modulation format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and assess the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing, and compares the results to numerical models, along with capacity limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2 μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines some of the development and characterisation of these components. The world's first wavelength division multiplexed (WDM) subsystem directly implemented at 2 μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2 μm. References: [1] P. P. Mitra, J. B. Stark, Nature, 411, 1027-1030, 2001. [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
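The geometry of the PS-QPSK format mentioned above is easy to state: a QPSK symbol is sent on exactly one of the two polarisations, giving 2 × 4 = 8 points (3 bits per symbol) in the four-dimensional optical field (Ex_I, Ex_Q, Ey_I, Ey_Q). A minimal sketch of that constellation, with unit-energy points for illustration only (not the thesis's transmitter code):

```python
import math

def ps_qpsk_constellation():
    """Return the 8 four-dimensional PS-QPSK points: a unit-energy QPSK
    symbol on one polarisation while the other polarisation is dark."""
    qpsk = [(math.cos(a), math.sin(a))
            for a in (math.pi / 4 + k * math.pi / 2 for k in range(4))]
    points = []
    for i, q in qpsk:
        points.append((i, q, 0.0, 0.0))  # symbol on the x polarisation
        points.append((0.0, 0.0, i, q))  # symbol on the y polarisation
    return points

pts = ps_qpsk_constellation()  # 8 points, each with total energy 1
```

Switching the polarisation carries the extra bit, which is the source of the format's energy efficiency relative to sending QPSK on both polarisations at once.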
Abstract:
Simultaneous measurements of high-altitude optical emissions and magnetic fields produced by sprite-associated lightning discharges enable a close examination of the link between low-altitude lightning processes and high-altitude sprite processes. We report results of the coordinated analysis of high-speed sprite video and wideband magnetic field measurements recorded simultaneously at Yucca Ridge Field Station and Duke University. From June to August 2005, sprites were detected following 67 lightning strokes, all of which had positive polarity. Our data showed that 46% of the 83 discrete sprite events in these sequences initiated more than 10 ms after the lightning return stroke, and we focus on these delayed sprites in this work. All delayed sprites were preceded by continuing current moments averaging at least 11 kA km between the return stroke and the sprite. The total lightning charge moment change at sprite initiation varied from 600 to 18,600 C km, and the minimum value to initiate long-delayed sprites ranged from 600 C km for a 15 ms delay to 2000 C km for delays of more than 120 ms. We numerically simulated electric fields at altitudes above these lightning discharges and found that the maximum normalized electric fields are essentially the same as the fields that produce short-delayed sprites. Both estimated and simulation-predicted sprite initiation altitudes indicate that long-delayed sprites generally initiate around 5 km lower than short-delayed sprites. The simulation results also reveal that slow (5-20 ms) intensifications in continuing current can play a major role in initiating delayed sprites. Copyright 2008 by the American Geophysical Union.
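The charge moment change figures above follow directly from integrating the current moment over the delay; the conversion below (1 kA km sustained for 1 s contributes 1000 C km) makes the abstract's arithmetic checkable. A minimal sketch assuming a constant continuing current moment, which is a simplification of the measured waveforms:

```python
def charge_moment_change(current_moment_kA_km, delay_s):
    """Charge moment change (C km) contributed by a steady continuing
    current moment between the return stroke and sprite initiation:
    the integral of M(t) dt with M held constant.
    Units: 1 kA km * 1 s = 1000 C km."""
    return current_moment_kA_km * 1000.0 * delay_s

# The reported ~11 kA km average continuing current moment, sustained over a
# 120 ms delay, contributes about 1320 C km toward the ~2000 C km needed for
# the longest-delayed sprites (the return stroke supplies the rest).
cmc = charge_moment_change(11.0, 0.120)
```

This is why slow intensifications of the continuing current, rather than the return stroke alone, can push long-delayed events over the initiation threshold.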