796 results for Intersection delay
Abstract:
Proxy reports from parents and self-reported data from pupils have often been used interchangeably to identify factors influencing school travel behaviour. However, few studies have examined the validity of proxy reports as an alternative to self-reported data. In addition, despite research conducted in different contexts, little is known to date about the impact of different factors on school travel behaviour in a sectarian-divided society. This research examines these issues using 1624 questionnaires collected from four independent samples (i.e. primary pupils, parents of primary pupils, secondary pupils, and parents of secondary pupils) across Northern Ireland. An independent-samples t test was conducted to identify differences in data reporting between pupils and parents for different age groups, using the reported number of trips by different modes as dependent variables. Multivariate multiple regression analyses were then conducted to identify the impacts of different factors (e.g. gender, rural–urban context, multiple deprivation, school management type, net residential density, land use diversity, and intersection density) on mode choice behaviour in this context. Results show that proxy reports are a valid alternative to self-reported data, but only for primary pupils. Land use diversity and rural–urban context were found to be the most important factors influencing mode choice behaviour.
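As a rough illustration of the two analysis steps described in this abstract, the sketch below pairs Welch's independent-samples t tests with a multi-output linear regression in Python; the data file, column names, and mode categories are assumed for illustration and are not taken from the study.

# Hypothetical sketch of the two analysis steps described above:
# (1) independent-samples t tests comparing pupil- and parent-reported trip
#     counts per mode, and (2) a multivariate multiple regression of trip
#     counts on demographic and built-environment factors.
# The data file and column names are illustrative, not from the study.
import pandas as pd
from scipy import stats
from sklearn.linear_model import LinearRegression

df = pd.read_csv("school_travel_survey.csv")  # hypothetical data set

# Step 1: compare reported trips per mode between pupils and parents.
for mode in ["walk_trips", "car_trips", "bus_trips", "cycle_trips"]:
    pupils = df.loc[df["respondent"] == "pupil", mode]
    parents = df.loc[df["respondent"] == "parent", mode]
    t, p = stats.ttest_ind(pupils, parents, equal_var=False)
    print(f"{mode}: t={t:.2f}, p={p:.3f}")

# Step 2: multivariate multiple regression -- several trip-count outcomes
# regressed jointly on the candidate explanatory factors.
X = pd.get_dummies(
    df[["gender", "rural_urban", "deprivation", "school_type",
        "net_residential_density", "land_use_diversity", "intersection_density"]],
    drop_first=True,
)
Y = df[["walk_trips", "car_trips", "bus_trips", "cycle_trips"]]
model = LinearRegression().fit(X, Y)
print(pd.DataFrame(model.coef_, index=Y.columns, columns=X.columns))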
Abstract:
The deployment of new emerging technologies, such as cooperative systems, allows the traffic community to foresee relevant improvements in terms of traffic safety and efficiency. Vehicles are able to exchange information on the local traffic state in real time, which could result in an automatic, and therefore better, reaction to the mechanism of traffic jam formation. An upstream single-hop radio broadcast network can improve the perception of each cooperative driver within radio range and hence the traffic stability. The impact of a cooperative law on the appearance of traffic congestion is investigated, both analytically and through simulation. NGSIM field data are used to calibrate the Optimal Velocity with Relative Velocity (OVRV) car-following model, and the MOBIL lane-changing model is implemented. Assuming that congestion can be triggered either by a perturbation in the instability domain or by critical lane-changing behavior, the calibrated car-following behavior is used to assess the impact of a microscopic cooperative law on abnormal lane-changing behavior. The cooperative law helps reduce and delay traffic congestion as it increases traffic flow stability.
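For readers unfamiliar with the OVRV model named above, the following Python sketch shows one common form of its acceleration rule; the optimal-velocity function and all parameter values are assumptions for illustration, not the values calibrated from the NGSIM data in the study.

# Minimal sketch of the Optimal Velocity with Relative Velocity (OVRV)
# car-following update named in the abstract. The optimal-velocity function
# and all parameter values are illustrative assumptions.
import numpy as np

def optimal_velocity(gap, v_max=30.0, gap_ref=25.0, width=10.0):
    """A smooth, saturating desired-speed curve (one common tanh form)."""
    return 0.5 * v_max * (1.0 + np.tanh((gap - gap_ref) / width))

def ovrv_acceleration(gap, v_follower, v_leader, alpha=0.6, beta=0.5):
    """OVRV: relax toward the optimal speed for the current gap, plus a
    relative-velocity term that damps approaches to a slower leader."""
    return alpha * (optimal_velocity(gap) - v_follower) + beta * (v_leader - v_follower)

# Tiny two-vehicle example: a follower closing in on a slower leader.
dt, gap, v_f, v_l = 0.1, 40.0, 28.0, 20.0
for _ in range(100):
    a = ovrv_acceleration(gap, v_f, v_l)
    v_f = max(0.0, v_f + a * dt)
    gap += (v_l - v_f) * dt
print(f"follower speed after 10 s: {v_f:.1f} m/s, gap: {gap:.1f} m")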
Abstract:
An advanced rule-based Transit Signal Priority (TSP) control method is presented in this paper. An online transit travel time prediction model is the key component of the proposed method, which enables the selection of the most appropriate TSP plan for the prevailing traffic and transit conditions. The new method also adopts a priority plan re-development feature that enables modifying or even switching an already implemented priority plan to accommodate changes in traffic conditions. The proposed method utilizes the conventional green extension and red truncation strategies as well as two new strategies, green truncation and queue clearance. The new method is evaluated in microsimulation against a typical active TSP strategy and a base-case scenario with no TSP control. The evaluation results indicate that the proposed method can produce significant benefits in reducing bus delay time and improving service regularity, with negligible adverse impacts on non-transit street traffic.
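The green extension and red truncation strategies mentioned above can be pictured with a minimal rule-based sketch like the one below; the thresholds, timing inputs, and plan names are illustrative assumptions, not the paper's actual plan-selection logic.

# Hedged sketch of a rule-based TSP decision: given a predicted bus arrival
# time at the stop line, choose between extending the current green and
# truncating the red (starting the next green early). All values are assumed.
def select_tsp_plan(predicted_arrival_s, time_to_green_end_s, time_to_next_green_s,
                    max_extension_s=10.0, max_truncation_s=10.0):
    """Return a simple priority plan as (strategy, adjustment in seconds)."""
    if predicted_arrival_s <= time_to_green_end_s:
        return ("no_action", 0.0)  # bus arrives during the current green
    overshoot = predicted_arrival_s - time_to_green_end_s
    if overshoot <= max_extension_s:
        return ("green_extension", overshoot)  # hold the green a little longer
    # Otherwise the bus arrives on red; start the next green early if allowed.
    wait_on_red = max(0.0, time_to_next_green_s - predicted_arrival_s)
    return ("red_truncation", min(wait_on_red, max_truncation_s))

# Example: the bus is predicted to arrive 6 s after the green would end.
print(select_tsp_plan(predicted_arrival_s=26.0, time_to_green_end_s=20.0,
                      time_to_next_green_s=55.0))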
Abstract:
Deploying networked control systems (NCSs) over wireless networks is becoming more and more popular. However, the widely used transport layer protocols, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP), are not designed for real-time applications. Therefore, they may not be suitable for many NCS application scenarios because of their limitations in reliability and/or delay performance, both of which are key concerns for real-time control systems. Considering a typical type of NCS with periodic and sporadic real-time traffic, this paper proposes a highly reliable transport layer protocol featuring a packet loss-sensitive retransmission mechanism and a prioritized transmission mechanism. The packet loss-sensitive retransmission mechanism is designed to improve the reliability of all traffic flows, while the prioritized transmission mechanism offers differentiated services for periodic and sporadic flows. Simulation results show that the proposed protocol has better reliability than UDP and better delay performance than TCP over wireless networks, particularly when channel errors and congestion occur.
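As a hedged sketch of the two mechanisms named above (not the authors' protocol), the Python fragment below shows a prioritized send queue that serves sporadic packets before periodic ones and retries lost packets up to a per-flow budget; the priority values and retry limits are assumptions.

# Illustrative sketch: a prioritized send queue plus a loss-sensitive
# retransmission rule with a per-flow retry budget. Priorities and budgets
# are assumed, not taken from the paper.
import heapq
import itertools
import random

PRIORITY = {"sporadic": 0, "periodic": 1}   # lower value = transmitted first

class PrioritizedSender:
    def __init__(self, max_retries=None):
        self._queue = []
        self._seq = itertools.count()       # tie-breaker keeps FIFO order
        self.max_retries = max_retries or {"sporadic": 4, "periodic": 1}

    def enqueue(self, payload, flow):
        heapq.heappush(self._queue,
                       (PRIORITY[flow], next(self._seq), flow, payload, 0))

    def on_send_slot(self, send_fn):
        """Transmit the highest-priority packet; re-queue it if it was lost."""
        if not self._queue:
            return
        prio, seq, flow, payload, retries = heapq.heappop(self._queue)
        delivered = send_fn(payload)
        if not delivered and retries < self.max_retries[flow]:
            heapq.heappush(self._queue, (prio, seq, flow, payload, retries + 1))

# Example with a lossy-channel stub: the sporadic packet is sent first.
sender = PrioritizedSender()
sender.enqueue(b"temp=21.5", "periodic")
sender.enqueue(b"VALVE FAULT", "sporadic")
for _ in range(5):
    sender.on_send_slot(lambda payload: random.random() > 0.3)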
Abstract:
Reliable communication is one of the major concerns in wireless sensor networks (WSNs). Multipath routing is an effective way to improve communication reliability in WSNs. However, most existing multipath routing protocols for sensor networks are reactive and require dynamic route discovery. If there are many sensor nodes between a source and a destination, the route discovery process creates a long end-to-end transmission delay, which causes difficulties in some time-critical applications. To overcome this difficulty, efficient route update and maintenance processes are proposed in this paper. They aim to limit the amount of routing overhead with a two-tier routing architecture and to introduce a combination of piggyback and trigger updates to replace the periodic update process, which is the main source of unnecessary routing overhead. Simulations are carried out to demonstrate the effectiveness of the proposed processes in reducing the total routing overhead compared with existing popular routing protocols.
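The idea of replacing periodic updates with piggybacked and triggered updates can be sketched as below; this is an illustrative assumption of how such a mechanism might look, not the paper's protocol.

# Rough sketch: (a) piggyback route info on outgoing data packets and
# (b) send a triggered update only when the local route actually changes,
# instead of advertising routes periodically. Names are hypothetical.
class RouteUpdater:
    def __init__(self, node_id, broadcast_fn):
        self.node_id = node_id
        self.broadcast = broadcast_fn      # sends a control packet to neighbors
        self.current_next_hop = None

    def on_route_change(self, new_next_hop):
        """Triggered update: advertise only when the route changes."""
        if new_next_hop != self.current_next_hop:
            self.current_next_hop = new_next_hop
            self.broadcast({"type": "route_update", "src": self.node_id,
                            "next_hop": new_next_hop})

    def on_send_data(self, payload):
        """Piggyback the current route info on a normal data packet."""
        return {"type": "data", "src": self.node_id, "payload": payload,
                "piggyback_next_hop": self.current_next_hop}

# Example: no control traffic is generated while the route stays stable.
sent = []
u = RouteUpdater("n7", broadcast_fn=sent.append)
u.on_route_change("n3")        # one triggered update
u.on_route_change("n3")        # unchanged route -> no update
pkt = u.on_send_data(b"reading")
print(len(sent), pkt["piggyback_next_hop"])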
Abstract:
Driving on an approach to a signalized intersection while distracted is particularly dangerous, as potential vehicular conflicts and resulting angle collisions tend to be severe. Given the prevalence and importance of this particular scenario, the decisions and actions of distracted drivers during the onset of yellow lights are the focus of this study. Driving simulator data were obtained from a sample of 58 drivers under baseline and handheld mobile phone conditions at the University of Iowa - National Advanced Driving Simulator. Explanatory variables included age, gender, cell phone use, distance to stop line, and speed. Although there is extensive research on drivers' responses to yellow traffic signals, it has been conducted with a traditional regression-based approach, which does not necessarily reveal the underlying relations and patterns among the sampled data. In this paper, we exploit the benefits of both classical statistical inference and data mining techniques to identify relationships among main effects, non-linearities, and interaction effects that are difficult to specify a priori. Results suggest that novice (16-17 years) and young (18-25 years) drivers have a heightened yellow light running risk while distracted by a cell phone conversation. Driver experience, captured by age, has a multiplicative effect with distraction, making the combined effect of being inexperienced and distracted particularly risky. Overall, distracted drivers across most tested groups tend to reduce their propensity for yellow light running as the distance to the stop line increases, exhibiting risk compensation in a critical driving situation.
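The combination of classical inference and data mining described above might look roughly like the following Python sketch, with a logistic regression for main effects and a shallow decision tree for interaction structure; the data file and variable names are hypothetical, not the study's.

# Hedged sketch: logistic regression for stop/go decisions at yellow onset,
# plus a shallow decision tree to surface interactions (e.g. age group x
# phone condition). Data file and column names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("yellow_onset_trials.csv")   # hypothetical simulator data
X = pd.get_dummies(df[["age_group", "gender", "phone_condition"]], drop_first=True)
X["distance_to_stopline_m"] = df["distance_to_stopline_m"]
X["speed_mps"] = df["speed_mps"]
y = df["ran_yellow"]                           # 1 = proceeded through, 0 = stopped

logit = LogisticRegression(max_iter=1000).fit(X, y)
print(dict(zip(X.columns, logit.coef_[0])))    # direction/size of main effects

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))  # interaction structure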
Abstract:
This tutorial is designed to assist users who wish to use the LCD screen on the Spartan-3E board. In this tutorial, the PicoBlaze microcontroller is used to control the LCD. The tutorial is organised into three parts. In Part A, code is written to display the message "Hello World" on the LCD. Part B demonstrates how to define and display custom characters. Finally, Part C shows how the display can be shifted and flashed. Shifting is done by using a delay in the main PicoBlaze program loop, while flashing is done using the PicoBlaze interrupt. The slider switches can be used to select the shifting direction and to turn shifting and flashing on and off.
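As a language-agnostic illustration (in Python rather than PicoBlaze assembly) of the two timing patterns the tutorial describes, the sketch below drives shifting from a delay in the main loop and flashing from an emulated periodic interrupt; the LCD call is a hypothetical stand-in for the board's driver routine.

# Shifting paced by a main-loop delay; flashing driven by a timer that
# emulates the periodic interrupt. Not PicoBlaze code.
import threading
import time

display_on = True

def lcd_write(text):                  # hypothetical stand-in for the LCD driver
    print(text if display_on else " " * len(text))

def flash_interrupt():                # emulates the periodic PicoBlaze interrupt
    global display_on
    display_on = not display_on
    timer = threading.Timer(0.5, flash_interrupt)
    timer.daemon = True               # let the program exit after the main loop
    timer.start()

message = "Hello World   "
flash_interrupt()                     # start the "interrupt" timer chain
for i in range(len(message)):         # main loop: shifting paced by a delay
    lcd_write(message[i:] + message[:i])
    time.sleep(0.25)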
Abstract:
This article explores the role of principal leadership in creating a thinking school. It contributes to the school leadership literature by exploring the intersection of two important areas of study in education - school leadership and education for thinking - which is a particularly apt area of study because effective school leadership is crucial if students are to learn to be critical and creative thinkers, yet this connection has not been widely investigated. We describe how one principal, Hinton, turned around an underperforming school by using critical and creative philosophical thinking as the focus for students, staff and parents. Then, drawing on the school leadership literature, the article describes seven attributes of school leadership, beginning with four articulated by Leithwood and colleagues (2006) (building vision and setting direction; redesigning the organisation; understanding and developing people; managing the teaching and learning program) and adding three others (influence; self-development; and responding to context). This framework is then used in a case study format, in a collaboration between practitioner and researchers, first to explore evidence from empirical studies and personal reflection about Hinton's leadership of Buranda State School, and second to illuminate how these general features of school leadership apply to creating a thinking school. Based on the case study and using the general characteristics of school leadership, a framework for leading a thinking school is described. Because the framework is based on a turnaround school, it has wide applicability: to schools that are doing well, as an indication of how to implement a contemporary approach to curriculum and pedagogy; and to schools that are underperforming and want a rigorous, high-expectation and contemporary way to improve student learning.
Abstract:
Secure communication in distributed Wireless Sensor Networks (WSN) operating under adversarial conditions necessitates efficient key management schemes. In the absence of a priori knowledge of the post-deployment network configuration, and due to the limited resources of sensor nodes, key management schemes cannot be based on post-deployment computations. Instead, a list of keys, called a key-chain, is distributed to each sensor node before deployment. For secure communication, either two nodes should have a key in common in their key-chains, or they should establish a key through a secure path on which every link is secured with a key. We first provide a comparative survey of well-known key management solutions for WSN. Probabilistic, deterministic and hybrid key management solutions are presented and compared based on their security properties and resource usage. We provide a taxonomy of solutions and identify trade-offs in them to conclude that there is no one-size-fits-all solution. Second, we design and analyze deterministic and hybrid techniques to distribute pair-wise keys to sensor nodes before deployment. We present novel deterministic and hybrid approaches based on combinatorial design theory and graph theory for deciding how many and which keys to assign to each key-chain before the sensor network deployment. Performance and security of the proposed schemes are studied both analytically and computationally. Third, we address the key establishment problem in WSN, which requires that key agreement algorithms without authentication be executed over a secure path. The length of the secure path impacts the power consumption and the initialization delay of a WSN before it becomes operational. We formulate the key establishment problem as a constrained bi-objective optimization problem, break it into two sub-problems, and show that they are both NP-Hard and MAX-SNP-Hard. Having established inapproximability results, we focus on addressing the authentication problem that prevents key agreement algorithms from being used directly over a wireless link. We present a fully distributed algorithm in which each pair of nodes can establish a key with authentication by using their neighbors as witnesses.
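To make the key-chain idea concrete, the sketch below implements the classic probabilistic pre-distribution scheme surveyed in this work (not the thesis's combinatorial-design construction); pool size, chain size, and node count are illustrative.

# Each node receives a random key-chain drawn from a global key pool before
# deployment; two nodes can communicate directly if their chains intersect.
import random

POOL_SIZE, CHAIN_SIZE, NUM_NODES = 1000, 50, 20
key_pool = list(range(POOL_SIZE))

def assign_key_chain(rng):
    return set(rng.sample(key_pool, CHAIN_SIZE))

rng = random.Random(42)
chains = {node: assign_key_chain(rng) for node in range(NUM_NODES)}

def shared_key(a, b):
    """Return one common key id if nodes a and b share any, else None."""
    common = chains[a] & chains[b]
    return min(common) if common else None

# Fraction of node pairs that can establish a direct secure link.
pairs = [(a, b) for a in range(NUM_NODES) for b in range(a + 1, NUM_NODES)]
direct = sum(shared_key(a, b) is not None for a, b in pairs)
print(f"direct secure links: {direct}/{len(pairs)}")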
Abstract:
Particulate matter is common in our environment and has been linked to human health problems, particularly in the ultrafine size range. A range of chemical species have been associated with particulate matter, and of special concern are the hazardous chemicals that can accentuate health problems. If the sources of such particles can be identified, then strategies can be developed for the reduction of air pollution and, consequently, the improvement of quality of life. In this investigation, particle number size distribution data and the concentrations of chemical species were obtained at two sites in Brisbane, Australia. Source apportionment was used to determine the sources (or factors) responsible for the particle size distribution data. The apportionment was performed by Positive Matrix Factorisation (PMF) and Principal Component Analysis/Absolute Principal Component Scores (PCA/APCS), and the results were compared with information from the gaseous chemical composition analysis. Although PCA/APCS resolved more sources, the results of the PMF analysis appear to be more reliable. Six common sources were identified by both methods: traffic 1, traffic 2, local traffic, biomass burning, and two unassigned factors. Thus, motor vehicle related activities had the most impact on the data, with the average contribution from nearly all sources to the measured concentrations higher during peak traffic hours and on weekdays. Further analyses incorporated the meteorological measurements into the PMF results to determine the direction of the sources relative to the measurement sites, which indicated that traffic on the nearby road and intersection was responsible for most of the factors. The described methodology, which utilised a combination of three types of data related to particulate matter to determine the sources, could assist future development of particle emission control and reduction strategies.
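A simplified version of the source-apportionment step could look like the following, using scikit-learn's non-negative matrix factorisation as a stand-in for the PMF software used in the study; the data file and the choice of six factors are assumptions.

# NMF decomposes a samples x size-bins matrix into factor contributions (G)
# and factor profiles (F). Input file and factor count are illustrative.
import pandas as pd
from sklearn.decomposition import NMF

X = pd.read_csv("particle_size_distributions.csv", index_col=0)  # hypothetical
X = X.clip(lower=0)                      # PMF/NMF require non-negative data

model = NMF(n_components=6, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)               # factor contributions per sample
F = model.components_                    # factor profiles over size bins

profiles = pd.DataFrame(F, columns=X.columns,
                        index=[f"factor_{k + 1}" for k in range(6)])
print(profiles.round(3))                 # inspect which size ranges dominate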
Abstract:
The effects of ethanol fumigation on the inter-cycle variability of key in-cylinder pressure parameters in a modern common rail diesel engine have been investigated. Specifically, maximum rate of pressure rise, peak pressure, peak pressure timing and ignition delay were investigated. A new methodology for investigating the start of combustion was also proposed and demonstrated; this is particularly useful with noisy in-cylinder pressure data, since noise can have a significant effect on the calculation of an accurate net rate of heat release indicator diagram. Inter-cycle variability has traditionally been investigated using the coefficient of variation. However, deeper insight into engine operation is gained by presenting the results as kernel density estimates, allowing investigation of otherwise unnoticed phenomena, including multi-modal and skewed behaviour. This study found that operation of a common rail diesel engine with high ethanol substitutions (>20% at full load, >30% at three-quarter load) results in a significant reduction in ignition delay. Further, the study also concluded that if the engine is operated with absolute air-to-fuel ratios (mole basis) of less than 80, the inter-cycle variability is substantially increased compared with normal operation.
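The contrast between the coefficient of variation and kernel density estimates can be illustrated with synthetic data, as in the sketch below; the ignition-delay values are simulated placeholders, not measurements from the study.

# A single coefficient of variation can hide skewed or multi-modal behaviour
# that a kernel density estimate makes visible. Data are simulated.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Bimodal placeholder: most cycles ignite early, a minority ignite late.
ignition_delay = np.concatenate([rng.normal(8.0, 0.3, 800),
                                 rng.normal(10.0, 0.4, 200)])   # crank-angle deg

cov = ignition_delay.std() / ignition_delay.mean()
print(f"coefficient of variation: {cov:.3f}")

kde = gaussian_kde(ignition_delay)
grid = np.linspace(6.5, 11.5, 200)
density = kde(grid)
peak = grid[np.argmax(density)]
print(f"KDE peak near {peak:.1f} deg crank angle")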
Abstract:
The issue of carbon sequestration rights has become topical following the United Nations Convention on Climate Change (United Nations 1992 at page 1414) and the subsequent Kyoto Protocol (United Nations Climate Change Secretariat 1998), which identified emissions trading as one of the mechanisms to reduce greenhouse gas emissions. Australian states have responded by creating a legal framework for the recognition of rights to bio-sequestered carbon. There is a lack of uniformity in each state's approach to the recognition of these rights, which vary from the creation of new and novel interests in land to the adoption of more traditional rights such as a profit à prendre. Rights to bio-sequestered carbon are likely to have an impact on the utility, marketability, value and financing of rural land holdings. Despite the creation of the legal framework for recognition of rights to sequestered carbon, there has been a delay in the introduction of a formalised carbon trading scheme in Australia. In the absence of an established carbon market, this paper addresses the applicability of contingent valuation theory to assessing the value of bio-sequestered carbon rights to a rural land holder. Limitations and potential controversies associated with this application of contingent valuation theory are also addressed.
Abstract:
This paper proposes a unique and innovative approach to integrating transit signal priority control into a traffic-adaptive signal control strategy. The proposed strategy is named OSTRAC (Optimized Strategy for integrated TRAffic and TRAnsit signal Control). The cornerstones of OSTRAC are an online microscopic traffic flow prediction model and a Genetic Algorithm (GA) based traffic signal timing module. A sensitivity analysis was conducted to determine the critical GA parameters. The developed traffic flow model demonstrated reliable prediction results in a test. OSTRAC was evaluated by comparing its performance to three other signal control strategies. The evaluation results revealed that OSTRAC efficiently and effectively reduced the delay time of both general traffic and transit vehicles.
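As a toy illustration of a GA-based signal timing search of the kind OSTRAC is described as using, the sketch below optimises the green split of a two-phase intersection against a crude delay surrogate; the delay model, demand values, and GA parameters are assumptions only.

# Tiny genetic algorithm over green splits for a two-phase intersection.
# The delay surrogate and all constants are illustrative.
import random

CYCLE, LOST_TIME = 90.0, 8.0          # seconds (assumed)
DEMAND = (0.45, 0.35)                 # flow ratios for the two phases (assumed)

def delay_proxy(green1):
    """Crude deterministic delay surrogate: penalise under-served phases."""
    greens = (green1, CYCLE - LOST_TIME - green1)
    return sum(max(0.0, d * CYCLE - 0.5 * g) ** 2 for d, g in zip(DEMAND, greens))

def evolve(pop_size=30, generations=60, mutation=2.0, rng=random.Random(0)):
    pop = [rng.uniform(10.0, CYCLE - LOST_TIME - 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=delay_proxy)
        parents = pop[: pop_size // 2]                        # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b) + rng.gauss(0.0, mutation)  # blend + mutate
            children.append(min(max(child, 10.0), CYCLE - LOST_TIME - 10.0))
        pop = parents + children
    return min(pop, key=delay_proxy)

best = evolve()
print(f"green split: phase 1 = {best:.1f} s, "
      f"phase 2 = {CYCLE - LOST_TIME - best:.1f} s")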
Abstract:
The player experience is at the core of videogame play. Understanding the facets of player experience presents many research challenges, as the phenomenon sits at the intersection of psychology, design, human-computer interaction, sociology, and physiology. This workshop brings together an interdisciplinary group of researchers to systematically and rigorously analyse all aspects of the player experience. Methods and tools for conceptualising, operationalising and measuring the player experience form the core of this research. Our aim is to take a holistic approach to identifying, adapting and extending theories and models of the player experience, to understand how these theories and models interact, overlap and differ, and to construct a unified vision for future research.
Abstract:
Purpose - Researchers debate whether tacit knowledge sharing through Information Technology (IT) is actually possible. However, with the advent of social web tools, it has been argued that most shortcomings of tacit knowledge sharing are likely to disappear. This paper has two purposes: firstly, to demonstrate the existing debates in the literature regarding tacit knowledge sharing using IT, and secondly, to identify key research gaps that lay the foundations for future research into tacit knowledge sharing using the social web. Design/methodology/approach - This paper reviews the current literature on IT-mediated tacit knowledge sharing and opens a discussion on tacit knowledge sharing through the use of the social web. Findings - First, the existing schools of thought regarding the ability of IT to support tacit knowledge sharing are introduced. Next, difficulties of sharing tacit knowledge through the use of IT are discussed. Then, potentials and pitfalls of social web tools are presented. Finally, the paper concludes that whilst there are significant theoretical arguments supporting the view that the social web facilitates tacit knowledge sharing, there is a lack of empirical evidence to support these arguments and further work is required. Research limitations/implications - The limitations of the review include: covering only papers that were published in English, issues of access to the full texts of some resources, and the possibility of missing some resources due to the search strings used or the limited coverage of the databases searched. Originality/value - The paper contributes to the fast-growing literature on the intersection of KM and IT, particularly by focusing on tacit knowledge sharing in the social media space. The paper highlights the need for further studies in this area by discussing the current situation in the literature and disclosing the emerging questions and gaps for future studies.