962 results for High occupancy vehicle lanes.
Abstract:
The Georgia Institute of Technology is currently performing research that will result in the development and deployment of three instrumentation packages that allow for automated capture of personal travel-related data for a given time period (up to 10 days). These three packages include: a handheld electronic travel diary (ETD) with Global Positioning System (GPS) capabilities to capture trip information for all modes of travel; a comprehensive electronic travel monitoring system (CETMS), which includes an ETD, a rugged laptop computer, a GPS receiver and antenna, and an onboard engine monitoring system, to capture all trip and vehicle information; and a passive GPS receiver, antenna, and data logger to capture vehicle trips only.
Abstract:
This paper reports on statements from Professional Development participants who were asked to comment on NAPLAN. The participants were involved in a project designed by the YuMi Deadly Centre (YDC) for implementation in 25 Queensland schools to enhance the teaching and learning of mathematics for Aboriginal and Torres Strait Islander students and low-SES students. Using an action research framework and a survey questionnaire, the preliminary data obtained from participating principals are mixed, with statements indicating that NAPLAN is a high priority for some schools, while others indicated that it does not “tell” the whole story of student learning.
Abstract:
This paper presents a critical review of past research in the work-related driving field in light vehicle fleets (e.g., vehicles < 4.5 tonnes) and an intervention framework that provides future direction for practitioners and researchers. Although work-related driving crashes have become the most common cause of death, injury, and absence from work in Australia and overseas, very limited research has progressed in establishing effective strategies to improve safety outcomes. In particular, the majority of past research has been data-driven, and therefore limited attention has been given to theoretical development in establishing the behavioural mechanisms underlying driving behaviour. As such, this paper argues that to move forward in the field of work-related driving safety, practitioners and researchers need to gain a better understanding of the individual and organisational factors influencing safety by adopting relevant theoretical frameworks, which in turn will inform the development of specifically targeted, theory-driven interventions. This paper presents an intervention framework that is based on relevant theoretical frameworks and sound methodological design, incorporating interventions that can be directed at the appropriate level, individual, and driver target group.
Abstract:
Objective: Flood is the most common natural disaster in Australia and causes more loss of life than any other disaster. This article describes the incidence and causes of deaths directly associated with floods in contemporary Australia. Methods: The present study compiled a database of flood fatalities in Australia in the period 1997–2008 inclusive. The data were derived from newspapers and historic accounts, as well as government and scientific reports. Assembled data include the date and location of fatalities, the age and gender of victims, and the circumstances of the death. Results: At least 73 persons died as a direct result of floods in Australia in the period 1997–2008. The largest numbers of fatalities occurred in New South Wales and Queensland. Most fatalities occurred during February, and most victims were men (71.2%). People between the ages of 10 and 29 and those over 70 years are overrepresented among those drowned. There is no evident decline in the number of deaths over time. 48.5% of fatalities were related to motor vehicle use, and 26.5% occurred as a result of inappropriate or high-risk behaviour during floods. Conclusion: In modern developed countries with adequate emergency response systems and extensive resources, deaths that occur in floods are almost all eminently preventable. Over 90% of the deaths are caused by attempts to ford flooded waterways or inappropriate situational conduct. Knowledge of the leading causes of flood fatalities should inform public awareness programmes and public-safety police enforcement activities.
Abstract:
The design and implementation of a high-power (2 MW peak) vector control drive is described. The inverter switching frequency is low, resulting in high-harmonic-content current waveforms. A block diagram of the physical system is given, and each component is described in some detail. The problem of commanded slip noise sensitivity, inherent in high-power vector control drives, is discussed, and a solution is proposed. Results are given which demonstrate the successful functioning of the system.
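For context, a minimal sketch of why commanded slip is noise-sensitive, using the textbook indirect field-oriented control relation rather than this drive's specific implementation (all component values below are assumed): the slip-frequency command is w_slip = i_q / (tau_r * i_d), so any noise on the commanded torque-producing current i_q passes directly into the commanded slip, scaled by the rotor time constant tau_r and the flux-producing current i_d.

```python
import numpy as np

# Textbook indirect field-oriented control slip relation (illustrative values only):
#   w_slip = i_q / (tau_r * i_d),   tau_r = L_r / R_r  (rotor time constant)
L_r, R_r = 0.012, 0.05           # assumed rotor inductance (H) and resistance (ohm)
tau_r = L_r / R_r                # rotor time constant, s
i_d = 80.0                       # flux-producing current command, A

rng = np.random.default_rng(0)
i_q_clean = 300.0                                    # torque-producing command, A
i_q_noisy = i_q_clean + rng.normal(0.0, 5.0, 1000)   # noise on the current command

w_slip_clean = i_q_clean / (tau_r * i_d)
w_slip_noisy = i_q_noisy / (tau_r * i_d)
print(f"nominal slip: {w_slip_clean:.2f} rad/s, "
      f"slip noise (std): {w_slip_noisy.std():.2f} rad/s")
```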
Abstract:
Improving efficiency and flexibility in pulsed power supply technologies is a central concern of pulsed power systems, particularly for plasma generation. Recently, improving pulsed power supplies has become of even greater concern as pulsed power applications extend into environmental and industrial areas. A current-source-based topology that enables power flow control is proposed in this paper. The main contribution of this configuration is the use of low- to medium-voltage semiconductor switches for high-voltage generation. A number of switch-diode-capacitor units are placed at the output of the topology to convert the current source energy into voltage form and generate a pulsed power output with sufficient voltage magnitude and stress. Simulations have been carried out in the MATLAB/Simulink platform to verify the capability of this topology to perform the desired duties. Efficiency and flexibility are the main advantages of this topology.
Abstract:
This paper presents a high voltage pulsed power system based on low voltage switch-capacitor units connected to a current source, for applications such as plasma systems. A modified positive buck-boost converter topology is used to realise the current source concept, and a series of low voltage switch-capacitor units is connected to the current source in order to provide high voltage with high voltage stress (dv/dt) as demanded by loads. This pulsed power converter is flexible in terms of energy control, in that the stored energy in the current source can be adjusted by changing the current magnitude, significantly improving the efficiency of systems with different requirements. Output voltage magnitude and stress (dv/dt) can be controlled by proper selection of components and of the control algorithm used to turn the switching devices on and off.
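A back-of-envelope sketch of the energy-control principle described above, using assumed component values rather than figures from the paper: the energy stored in the current-source inductor scales with the square of the source current (E = 0.5*L*I^2), and the voltage stress imposed on a switch-capacitor unit charged by that current is dv/dt = I/C.

```python
# Illustrative back-of-envelope figures (assumed values, not taken from the paper)
L = 5e-3            # current-source inductance, H
C = 10e-9           # switch-capacitor unit capacitance, F

for I in (50.0, 100.0, 200.0):          # adjustable source current, A
    energy = 0.5 * L * I**2             # stored energy, J       (E = 1/2 L I^2)
    dv_dt = I / C                       # voltage stress, V/s    (i = C dv/dt)
    print(f"I = {I:6.1f} A  ->  E = {energy:6.1f} J,  dv/dt = {dv_dt / 1e9:.1f} kV/us")
```

Doubling the source current quadruples the stored energy while only doubling the voltage stress, which is why adjusting the current magnitude gives a convenient handle on energy delivery.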
Abstract:
The objective of this study was to evaluate the feasibility and potential of a hybrid scaffold system for the repair of large, high-load-bearing osteochondral defects. The implants were made of medical-grade PCL (mPCL) for the bone compartment, whereas fibrin glue was used for the cartilage part. Both matrices were seeded with allogenic bone marrow-derived mesenchymal cells (BMSC) and implanted in the defect (4 mm diameter × 5 mm depth) on the medial femoral condyle of adult New Zealand White rabbits. Empty scaffolds were used on the control side. Cell survival was tracked via fluorescent labeling. The regeneration process was evaluated by several techniques at 3 and 6 months post-implantation. Mature trabecular bone regularly formed in the mPCL scaffold at both 3 and 6 months post-operation. Micro-computed tomography showed progression of mineralization from the host–tissue interface towards the inner region of the grafts. At the 3-month time point, the specimens showed good cartilage repair. In contrast, the majority of 6-month specimens revealed poor remodeling and fissured integration with the host cartilage, while other samples maintained a good cartilage appearance. In vivo viability of the transplanted cells was demonstrated for a duration of 5 weeks. The results demonstrated that the mPCL scaffold is a potential matrix for osteochondral bone regeneration and that fibrin glue does not possess the physical properties needed to allow for cartilage regeneration in a large and high-load-bearing defect site. Keywords: Osteochondral tissue engineering; Scaffold; Bone marrow-derived precursor cells; Fibrin glue
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation – or dispersion – is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in the mean structure, while the remainder included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences on expected crash counts are likely to be different from factors that might help to explain unaccounted-for variation in crashes across sites.
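For illustration, a minimal simulation sketch in Python of the varying-dispersion idea examined above (a hypothetical example, not the study's Bayesian MCMC/Gibbs estimation): the negative binomial dispersion parameter phi is itself a log-linear function of a covariate, so the site-level variance is mu + mu^2/phi rather than a fixed multiple of the mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical covariates, e.g. standardised log major/minor road flows
x_major = rng.normal(size=n)
x_minor = rng.normal(size=n)

# Mean structure: expected crash count per site
mu = np.exp(0.5 + 0.6 * x_major + 0.3 * x_minor)

# Varying dispersion: phi is a log-linear function of a covariate
phi = np.exp(1.0 - 0.4 * x_major)

# Negative binomial counts via the gamma-Poisson mixture:
#   lambda_i ~ Gamma(shape=phi_i, scale=mu_i/phi_i),  y_i ~ Poisson(lambda_i)
lam = rng.gamma(shape=phi, scale=mu / phi)
y = rng.poisson(lam)

# Check against the law of total variance: Var(y) = E[mu + mu^2/phi] + Var(mu)
print("empirical variance:", round(y.var(), 2))
print("implied variance  :", round((mu + mu**2 / phi).mean() + mu.var(), 2))
```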
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
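A minimal sketch of the simulation argument, with purely hypothetical numbers: each site experiences many independent Bernoulli trials with small, unequal crash probabilities (Poisson trials), and a short observation window alone yields a preponderance of zeros, with more zeros than a single-rate Poisson model predicts, without any "safe state" in the data-generating process.

```python
import numpy as np

rng = np.random.default_rng(42)

n_sites = 2000          # hypothetical road entities
trials_per_day = 5000   # vehicle passages (Bernoulli trials) per site per day

# Small, unequal per-trial crash probabilities across sites (Poisson trials)
p = rng.gamma(shape=2.0, scale=2e-7, size=n_sites)

def simulate_counts(days):
    """Crash count per site over an observation window of `days`."""
    n_trials = trials_per_day * days
    # A binomial with tiny p is the sum of that site's Bernoulli trials
    return rng.binomial(n_trials, p)

for days in (30, 365, 3650):          # one month, one year, ten years
    counts = simulate_counts(days)
    lam = counts.mean()
    obs_zeros = (counts == 0).mean()
    poisson_zeros = np.exp(-lam)      # zero share predicted by a single-rate Poisson
    print(f"{days:5d} days: mean={lam:.2f}  observed P(0)={obs_zeros:.3f}  "
          f"Poisson P(0)={poisson_zeros:.3f}")
```

At short time scales almost every site records zero crashes, and the heterogeneous probabilities push the observed zero share above the Poisson prediction, reproducing "excess" zeros without a dual-state process.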
Abstract:
Large trucks are involved in a disproportionately small fraction of total crashes but a disproportionately large fraction of fatal crashes. Large truck crashes often result in significant congestion due to the vehicles' large physical dimensions and the difficulty of clearing crash scenes. Consequently, preventing large truck crashes is critical to improving highway safety and operations. This study identifies high-risk sites (hot spots) for large truck crashes in Arizona and examines potential risk factors related to the design and operation of the high-risk sites. High-risk sites were identified using both state-of-the-practice methods (accident reduction potential using negative binomial regression with long crash histories) and a newly proposed method using Property Damage Only Equivalents (PDOE). The hot spots identified via the count model generally exhibited few fatalities and major injuries but many minor injuries and PDO crashes, while the opposite trend was observed using the PDOE methodology. The hot spots based on the count model exhibited large AADTs, whereas those based on the PDOE showed relatively small AADTs but large fractions of trucks and high posted speed limits. Documented site investigations of hot spots revealed numerous potential risk factors, including weaving activity near freeway junctions and ramps, absence of acceleration lanes near on-ramps, shoulders too small to accommodate large trucks, narrow lane widths, inadequate signage, and poor lighting conditions within a tunnel.
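A hypothetical sketch of the Property Damage Only Equivalents idea: severity-specific crash counts at each site are collapsed into a single PDO-equivalent score, and sites are ranked by that score. The weights and site records below are illustrative placeholders, not values from the study.

```python
# Illustrative severity weights in PDO equivalents (placeholders, not the study's values)
weights = {"fatal": 100.0, "major": 30.0, "minor": 5.0, "pdo": 1.0}

# Hypothetical crash histories per site: counts by severity level
sites = {
    "I-10 @ ramp A": {"fatal": 1, "major": 2, "minor": 4,  "pdo": 10},
    "I-17 tunnel":   {"fatal": 0, "major": 1, "minor": 12, "pdo": 40},
    "SR-51 weave":   {"fatal": 0, "major": 0, "minor": 6,  "pdo": 75},
}

def pdoe_score(counts):
    """Weight each severity level into property-damage-only equivalents."""
    return sum(weights[severity] * n for severity, n in counts.items())

ranked = sorted(sites.items(), key=lambda item: pdoe_score(item[1]), reverse=True)
for name, counts in ranked:
    print(f"{name:15s}  PDOE = {pdoe_score(counts):7.1f}")
```

Because fatal and major-injury crashes carry much larger weights, a low-volume site with a single fatality can outrank a high-AADT site with many PDO crashes, which is the contrast between the two hot-spot lists described above.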
Abstract:
Tracking/remote monitoring systems using GNSS are a proven method of enhancing the safety and security of personnel and vehicles carrying precious or hazardous cargo. While GNSS tracking appears to mitigate some of these threats, if not adequately secured it can be a double-edged sword, allowing adversaries to obtain sensitive shipment and vehicle position data to better coordinate their attacks, and providing a false sense of security to monitoring centers. Tracking systems must be designed with the ability to perform route-compliance monitoring and to thwart attacks ranging from low-level attacks, such as the cutting of antenna cables, to medium- and high-level attacks involving radio jamming and signal/data-level simulation, especially where the goods transported have a potentially high value to terrorists. This paper discusses the use of GNSS in critical tracking applications, addressing the mitigation of GNSS security issues, augmentation systems, and communication systems in order to provide highly robust and survivable tracking systems.
Abstract:
On-board mass (OBM) monitoring devices on heavy vehicles (HVs) have been tested in a national programme run jointly by Transport Certification Australia Limited and the National Transport Commission. The tests assessed, amongst other parameters, accuracy and tamper-evidence, the latter by deliberately tampering with the signals from OBM primary transducers during the tests. The OBM feasibility team is analysing dynamic data recorded at the primary transducers of OBM systems to determine whether it can be used to detect tamper events. The tamper-evidence of current OBM systems needs to be determined if jurisdictions are to have confidence in specifying OBM for HVs as part of regulatory schemes. An algorithm has been developed to detect tamper events, and the results of its application are detailed here.
Abstract:
There are several noninvasive techniques for assessing the kinetics of the tear film, but no comparative studies have been conducted to evaluate their efficacies. Our aim is to test and compare techniques based on high-speed videokeratoscopy (HSV), dynamic wavefront sensing (DWS), and lateral shearing interferometry (LSI). Algorithms are developed to estimate the tear film build-up time, T_BLD, and the average tear film surface quality in the stable phase of the interblink interval, TFSQ_Av. Moderate but significant correlations are found between T_BLD measured with LSI and DWS based on vertical coma (Pearson's r² = 0.34, p < 0.01) and higher-order rms (r² = 0.31, p < 0.01), as well as between TFSQ_Av measured with LSI and HSV (r² = 0.35, p < 0.01), and between LSI and DWS based on the rms fit error (r² = 0.40, p < 0.01). No significant correlation is found between HSV and DWS. All three techniques estimate the tear film build-up time to be below 2.5 sec, and they achieve a remarkably close median value of 0.7 sec. HSV appears to be the most precise method for measuring tear film surface quality. LSI appears to be the most sensitive method for analyzing tear film build-up.
Abstract:
Voice recognition is one of the key enablers for reducing driver distraction as in-vehicle systems become more and more complex. With the integration of voice recognition in vehicles, safety and usability are improved because the driver’s eyes and hands are not required to operate system controls. Whilst speaker-independent voice recognition is well developed, performance in high-noise environments (e.g. vehicles) is still limited. La Trobe University and Queensland University of Technology have developed a low-cost hardware-based speech enhancement system for automotive environments based on spectral subtraction and delay–sum beamforming techniques. The enhancement algorithms have been optimised using authentic Australian English collected under typical driving conditions. Performance tests conducted using speech data collected under a variety of vehicle noise conditions demonstrate a word recognition rate improvement in the order of 10% or more under the noisiest conditions. Currently developed to a proof-of-concept stage, the system has potential for even greater performance improvement.
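A minimal magnitude spectral-subtraction sketch in Python (a generic single-channel illustration, not the La Trobe/QUT hardware implementation): the noise magnitude spectrum is estimated from leading frames assumed to contain no speech and subtracted from every frame before overlap-add resynthesis; the delay–sum beamforming stage mentioned above is a separate spatial-filtering step not shown here.

```python
import numpy as np

def spectral_subtraction(x, frame_len=512, noise_frames=10, floor=0.05):
    """Basic magnitude spectral subtraction with 50% overlap-add."""
    hop = frame_len // 2
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop

    # Split the signal into windowed frames and move to the frequency domain.
    frames = np.stack([x[i*hop:i*hop + frame_len] * window for i in range(n_frames)])
    spectra = np.fft.rfft(frames, axis=1)

    # Estimate the noise magnitude spectrum from the first few frames,
    # assumed here to be noise-only.
    noise_mag = np.abs(spectra[:noise_frames]).mean(axis=0)

    # Subtract the noise magnitude, keep a small spectral floor, reuse the noisy phase.
    mag, phase = np.abs(spectra), np.angle(spectra)
    clean_mag = np.maximum(mag - noise_mag, floor * noise_mag)
    clean = np.fft.irfft(clean_mag * np.exp(1j * phase), n=frame_len, axis=1)

    # Overlap-add resynthesis (the Hann analysis window at 50% overlap sums to ~1).
    y = np.zeros(len(x))
    for i in range(n_frames):
        y[i*hop:i*hop + frame_len] += clean[i]
    return y
```

In a system like the one described, a stage of this kind would sit ahead of the recogniser front end, after the microphone channels have been combined by the beamformer.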