645 results for Transit Vehicle Design.
Abstract:
The Lane Change Test (LCT) is one of a growing number of methods developed to quantify the driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions, together with further refinement of the LCT's metrics, will be essential for increasing its utility as an evaluation tool.
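As a rough illustration of how lateral-control metrics of this kind can be computed, the Python sketch below derives a mean-deviation value and a lane-excursion count from a sampled lateral-position trace. The function name, lane width, and metric definitions are assumptions for illustration, not the LCT's official scoring procedure.

```python
import numpy as np

def lct_lateral_metrics(lateral_pos, normative_path, lane_width=3.5):
    """Illustrative lateral-control metrics (assumed definitions)."""
    deviation = np.abs(lateral_pos - normative_path)
    mean_deviation = deviation.mean()
    # Simple lane-excursion proxy: samples straying more than half a
    # lane width from the normative lane-change path.
    excursions = int(np.sum(deviation > lane_width / 2))
    return mean_deviation, excursions

# Hypothetical 10 s trace sampled at 10 Hz with one instructed lane change.
t = np.linspace(0, 10, 100)
normative = np.where(t > 5, 3.5, 0.0)
driven = normative + np.random.default_rng(1).normal(0, 0.3, t.size)
print(lct_lateral_metrics(driven, normative))
```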
Abstract:
One major gap in transportation system safety management is the ability to assess the safety ramifications of design changes for both new road projects and modifications to existing roads. To fulfill this need, FHWA and its many partners are developing a safety forecasting tool, the Interactive Highway Safety Design Model (IHSDM). The tool will be used by roadway design engineers, safety analysts, and planners throughout the United States. As such, the statistical models embedded in IHSDM will need to be able to forecast safety impacts under a wide range of roadway configurations and environmental conditions for a wide range of driver populations and will need to be able to capture elements of driving risk across states. One of the IHSDM algorithms developed by FHWA and its contractors is for forecasting accidents on rural road segments and rural intersections. The methodological approach is to use predictive models for specific base conditions, with traffic volume information as the sole explanatory variable for crashes, and then to apply regional or state calibration factors and accident modification factors (AMFs) to estimate the impact on accidents of geometric characteristics that differ from the base model conditions. In the majority of past approaches, AMFs are derived from parameter estimates associated with the explanatory variables. A recent study for FHWA used a multistate database to examine in detail the use of the algorithm with the base model-AMF approach and explored alternative base model forms as well as the use of full models that included nontraffic-related variables and other approaches to estimate AMFs. That research effort is reported. The results support the IHSDM methodology.
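The base-model-plus-AMF structure described above amounts to a multiplicative calculation: a volume-only base prediction, scaled by a regional calibration factor and by one AMF per geometric feature that departs from base conditions. The sketch below uses an HSM-style exposure form with placeholder values; the coefficient and factors are illustrative, not the calibrated FHWA models.

```python
import math

def predicted_crashes(aadt, length_mi, calibration, amfs, a=-0.312):
    """Base model x calibration x AMFs, per the structure described above.

    The base model uses traffic volume (AADT) as the sole explanatory
    variable; `a` is an illustrative coefficient, not an FHWA estimate.
    """
    base = aadt * length_mi * 365e-6 * math.exp(a)   # crashes per year
    for factor in [calibration, *amfs]:              # multiplicative adjustments
        base *= factor
    return base

# Hypothetical rural segment: 8,000 veh/day, 1.2 mi, state calibration 1.1,
# AMFs for lane width (1.05) and shoulder type (0.97).
print(predicted_crashes(8000, 1.2, 1.1, [1.05, 0.97]))
```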
Abstract:
The Georgia Institute of Technology is currently performing research that will result in the development and deployment of three instrumentation packages that allow automated capture of personal travel-related data for a given time period (up to 10 days). The three packages are: a handheld electronic travel diary (ETD) with Global Positioning System (GPS) capabilities, to capture trip information for all modes of travel; a comprehensive electronic travel monitoring system (CETMS), comprising an ETD, a rugged laptop computer, a GPS receiver and antenna, and an onboard engine monitoring system, to capture all trip and vehicle information; and a passive GPS receiver, antenna, and data logger, to capture vehicle trips only.
Abstract:
A group key exchange (GKE) protocol allows a set of parties to agree upon a common secret session key over a public network. In this thesis, we focus on designing efficient GKE protocols using public key techniques and appropriately revising security models for GKE protocols. For the purpose of modelling and analysing the security of GKE protocols we apply the widely accepted computational complexity approach. The contributions of the thesis to the area of GKE protocols are manifold. We propose the first GKE protocol that requires only one round of communication and is proven secure in the standard model. Our protocol is generically constructed from a key encapsulation mechanism (KEM). We also suggest an efficient KEM from the literature, which satisfies the underlying security notion, to instantiate the generic protocol. We then concentrate on enhancing the security of one-round GKE protocols. A new model of security for forward secure GKE protocols is introduced and a generic one-round GKE protocol with forward security is then presented. The security of this protocol is also proven in the standard model. We also propose an efficient forward secure encryption scheme that can be used to instantiate the generic GKE protocol. Our next contributions are to the security models of GKE protocols. We observe that the analysis of GKE protocols has not been as extensive as that of two-party key exchange protocols. Particularly, the security attribute of key compromise impersonation (KCI) resilience has so far been ignored for GKE protocols. We model the security of GKE protocols addressing KCI attacks by both outsider and insider adversaries. We then show that a few existing protocols are not secure against KCI attacks. A new proof of security for an existing GKE protocol is given under the revised model assuming random oracles. Subsequently, we treat the security of GKE protocols in the universal composability (UC) framework. We present a new UC ideal functionality for GKE protocols capturing the security attribute of contributiveness. An existing protocol with minor revisions is then shown to realize our functionality in the random oracle model. Finally, we explore the possibility of constructing GKE protocols in the attribute-based setting. We introduce the concept of attribute-based group key exchange (AB-GKE). A security model for AB-GKE and a one-round AB-GKE protocol satisfying our security notion are presented. The protocol is generically constructed from a new cryptographic primitive called encapsulation policy attribute-based KEM (EP-AB-KEM), which we introduce in this thesis. We also present a new EP-AB-KEM with a proof of security assuming generic groups and random oracles. The EP-AB-KEM can be used to instantiate our generic AB-GKE protocol.
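To illustrate the generic KEM-based idea at a very high level, here is a toy Python sketch: each party broadcasts, in a single round, a random contribution encrypted to every peer via a KEM, and the session key is a hash of all contributions. This is an insecure teaching illustration of one-round agreement from a KEM interface, not the thesis's construction and not a real KEM.

```python
import hashlib
import secrets

# Toy Diffie-Hellman-style KEM over a prime field. Illustration only, NOT secure.
P = 2**127 - 1   # a Mersenne prime used as a toy modulus
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def encap(pk):
    """Encapsulate a fresh symmetric key to public key pk."""
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()

def decap(sk, ct):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def one_round_gke(pks, sks):
    """Each party i broadcasts its contribution encrypted to every peer j
    (KEM + one-time pad); the session key hashes all contributions."""
    n = len(pks)
    contribs = [secrets.token_bytes(32) for _ in range(n)]
    msgs = [{j: None for j in range(n) if j != i} for i in range(n)]
    for i in range(n):                       # the single broadcast round
        for j in msgs[i]:
            ct, k = encap(pks[j])
            msgs[i][j] = (ct, xor(contribs[i], k))
    session_keys = []
    for i in range(n):                       # local key derivation
        rec = []
        for j in range(n):
            if j == i:
                rec.append(contribs[i])
            else:
                ct, c = msgs[j][i]
                rec.append(xor(c, decap(sks[i], ct)))
        session_keys.append(hashlib.sha256(b"".join(rec)).digest())
    return session_keys

sks, pks = zip(*[keygen() for _ in range(3)])
assert len(set(one_round_gke(pks, sks))) == 1   # all three parties agree
```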
Abstract:
Travel surveys were conducted to collect data on school students' travel at Kelvin Grove Urban Village (KGUV). KGUV currently has school students in grades 10 to 12, and the surveys were undertaken as part of the wider data collection process. This document contains the questionnaire form used to collect the demographic and travel data for these students. The survey forms were hand-delivered to the school, and the responses were returned via the reply-paid envelope provided with each questionnaire.
Abstract:
Value Management (VM) has been proven to provide a structured framework, together with supporting tools and techniques, that facilitates effective decision-making in many types of projects, thus achieving 'best value' for clients. One of the major success factors of VM in achieving better project outcomes for clients is the provision of beneficial input by the multi-disciplinary team members involved in critical decision-making discussions during the early stages of construction projects. This paper describes a doctoral research proposal on the application of VM in design and build construction projects, focusing especially on the design stage. The research aims to study the effects of implementing VM in design and build construction projects, in particular how well the methodology addresses cost overruns resulting from poor coordination and the overlooking of critical constructability issues amongst team members in construction projects in Malaysia. It is proposed that contractors' early involvement during the design stage, combined with the use of the VM methodology as a decision-making tool, can better optimize construction cost and thus promote more efficient and effective constructability. The main methods used in this research are a thorough literature study; semi-structured interviews and a survey of major stakeholders; a detailed case study; and a VM workshop with focus group discussions involving construction professionals, in order to explore and possibly develop a framework and a specific methodology for facilitating the successful application of VM within design and build construction projects.
Abstract:
This paper investigates the control of an HVDC link, fed from an AC source through a controlled rectifier and feeding an AC line through a controlled inverter. The overall objective is to maintain the maximum possible link voltage at the inverter while regulating the link current. The practical feedback design issues are investigated with a view to obtaining simple, robust designs that are easy to evaluate for safety and operability. The investigations are applicable both to back-to-back links used for frequency decoupling and to long DC lines. The design issues discussed include: (i) a review of the overall system dynamics, to establish the time scale of the different feedback loops and to highlight feedback design issues; (ii) the concept of using inverter firing angle control to regulate link current when the rectifier firing angle controller saturates; and (iii) the design of the individual controllers, including robust design for varying line conditions and the trade-off between controller complexity and the reduction of nonlinearity and disturbance effects.
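A minimal sketch of the saturation handover in item (ii), assuming simple PI loops; the gains, firing-angle limits, and the "hold maximum voltage" convention are hypothetical placeholders, not values from the paper.

```python
class PI:
    """Simple PI controller with output limits and anti-windup."""
    def __init__(self, kp, ki, lo, hi):
        self.kp, self.ki, self.lo, self.hi = kp, ki, lo, hi
        self.integral = 0.0

    def step(self, error, dt):
        u = self.kp * error + self.ki * (self.integral + error * dt)
        u_sat = min(max(u, self.lo), self.hi)
        if u == u_sat:                  # integrate only when unsaturated
            self.integral += error * dt
        return u_sat, u != u_sat        # (command, saturated?)

# Hypothetical firing-angle limits in degrees; gains are placeholders.
rectifier = PI(kp=2.0, ki=20.0, lo=5.0, hi=160.0)
inverter = PI(kp=1.0, ki=10.0, lo=100.0, hi=160.0)

def current_control_step(i_ref, i_meas, dt):
    """Idea (ii): the rectifier normally regulates link current; when its
    firing-angle command saturates, the inverter controller takes over."""
    error = i_ref - i_meas
    alpha_rect, saturated = rectifier.step(error, dt)
    if saturated:
        angle_inv, _ = inverter.step(error, dt)
    else:
        angle_inv = inverter.hi   # assumed nominal angle for maximum link voltage
    return alpha_rect, angle_inv

print(current_control_step(i_ref=1.0, i_meas=0.8, dt=0.001))
```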
Abstract:
Dwell times at stations and inter-station run times are the two major operational parameters for maintaining the train schedule in railway service. Current dwell-time and run-time control practices are optimal only with respect to certain nominal traffic conditions, not necessarily the current service demand, so the advantages of dwell-time and run-time control are not fully exploited. This paper presents the application of a dynamic programming approach, with the aid of an event-based model, to devise an optimal set of dwell times and run times for trains under given operational constraints at a regional level. Since train operation is interactive and has multiple attributes, dwell-time and run-time coordination among trains is a multi-dimensional problem, and the computational demand of devising the trains' instructions, a prime concern in real-time applications, is excessively high. To keep the computational demand of providing appropriate dwell times and run times manageable, a DC railway line is divided into a number of regions, each controlled by its own dwell-time and run-time controller. The performance and feasibility of the controller in formulating dwell-time and run-time solutions for real-time applications are demonstrated through simulations.
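The stage-wise optimisation can be illustrated with a toy dynamic program over stations, where the state is the accumulated journey time and the objective is minimal deviation from a target terminal time. All parameters are hypothetical, and the formulation is far simpler than the paper's event-based, multi-train model.

```python
def plan_dwell_and_run(n_sections, dwell_opts, run_opts, target):
    """Toy dynamic program: at each station pick one dwell time and one
    run time; state = accumulated seconds; objective = minimal deviation
    from the target terminal time."""
    states = {0: []}                       # elapsed seconds -> chosen (dwell, run) pairs
    for _ in range(n_sections):
        nxt = {}
        for elapsed, choices in states.items():
            for d in dwell_opts:
                for r in run_opts:
                    t = elapsed + d + r
                    nxt.setdefault(t, choices + [(d, r)])  # one path per state suffices
        states = nxt
    best = min(states, key=lambda t: abs(t - target))
    return best, states[best]

# Three inter-station sections, dwell 20-40 s, run 90-120 s, 6 min target.
print(plan_dwell_and_run(3, [20, 30, 40], [90, 105, 120], 360))
```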
Abstract:
With daily commercial and social activity in cities, regulation of train service in mass rapid transit railways is necessary to maintain service quality and passenger flow. Dwell-time adjustment at stations is one commonly used approach to train service regulation, but its control space is very limited. Coasting control is a viable means of meeting a specific run time in an inter-station run. The current practice is to start coasting at a fixed distance from the departure station; it is therefore optimal only with respect to a nominal operational condition of the train schedule, not the current service demand. The advantage of coasting can only be fully secured when coasting points are determined in real time. However, identifying the necessary starting point(s) for coasting under the constraints of current service conditions is no simple task, as train movement is governed by a large number of factors. The feasibility and performance of classical and heuristic search methods in locating coasting point(s) according to specified inter-station run times are studied with the aid of a single-train simulator.
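One classical search of the kind studied here is bisection on a single coasting position, exploiting the fact that coasting later shortens the run. The point-mass dynamics and every parameter below are assumptions for illustration, not the paper's simulator.

```python
def run_time(coast_at, distance=2000.0, v_max=20.0,
             accel=1.0, decel=1.0, drag=0.02, dt=0.1):
    """Toy point-mass train: accelerate to v_max, coast (drag only) from
    `coast_at` metres, then brake to stop at `distance` metres."""
    t = x = v = 0.0
    while x < distance:
        if x + v * v / (2 * decel) >= distance and v > 0:
            a = -decel                 # brake to stop at the platform
        elif x >= coast_at:
            a = -drag * v              # coasting: drag deceleration only
        else:
            a = accel if v < v_max else 0.0
        if v < 0.01 and a <= 0:
            return float("inf")        # stalled short of the platform: infeasible
        x += v * dt
        v = max(v + a * dt, 0.0)
        t += dt
    return t

def find_coasting_point(target, lo=200.0, hi=1900.0):
    """Bisection on the coasting position: coasting later gives a shorter
    run, so run time is monotone in `coast_at`."""
    for _ in range(40):
        mid = (lo + hi) / 2
        if run_time(mid) > target:
            lo = mid                   # too slow -> coast later
        else:
            hi = mid
    return (lo + hi) / 2

print(find_coasting_point(target=130.0))   # coasting point for a 130 s run
```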
Abstract:
Background: This research addresses the development of a digital stethoscope for use with a telehealth communications network, allowing doctors to examine patients remotely (a digital telehealth stethoscope). A telehealth stethoscope would allow remote auscultation of patients who do not live near a major hospital; travelling from remote areas to major hospitals is expensive for patients, and a telehealth stethoscope could result in significant cost savings. Using a stethoscope requires great skill, so to design a telehealth stethoscope that meets doctors' expectations, the use of existing stethoscopes in clinical contexts must be examined.
Method: Observations were conducted of 30 anaesthetic preadmission consultations and videotaped. Interactions between the doctor, the patient and non-human elements in the consultation were coded to transform the video into data, and the data were analysed to reveal essential aspects of the interactions.
Results: The analysis showed that the doctor controls the interaction during auscultation. The conduct of auscultation draws heavily on the doctor's tacit knowledge, allowing the doctor to treat the acoustic stethoscope as infrastructure; that is, the stethoscope sinks into the background and becomes completely transparent in use.
Conclusion: Two important, related implications for the design of a telehealth stethoscope arise from this research. First, as a telehealth stethoscope will be a shared device, doctors will not be able to make use of their existing expertise with their own stethoscopes; very simply, a telehealth stethoscope will sound different from a doctor's own stethoscope. Second, the collaborative interaction required to use a telehealth stethoscope will have to be invented and refined. A telehealth stethoscope will need to be carefully designed to address these issues if it is to be used successfully. This research challenges the concept of a telehealth stethoscope by raising questions about the ease and confidence with which doctors could use such a device.
Abstract:
This paper reviews the main studies on transit users' route choice in the context of transit assignment. The studies are categorized into three groups: static transit assignment, within-day dynamic transit assignment, and emerging approaches. The motivations and behavioural assumptions of these approaches are re-examined. The first group includes shortest-path heuristics in all-or-nothing assignment, random utility maximization route-choice models in stochastic assignment, and user-equilibrium-based assignment. The second group covers within-day dynamics in transit users' route choice, transit network formulations, and dynamic transit assignment. The third group introduces emerging studies on behavioural complexities, day-to-day dynamics, and real-time dynamics in transit users' route choice. Future research directions are also discussed.
Abstract:
This chapter recognizes that research is a cultural invention and explains why. It discusses what equity, research and research design mean, and suggests that the concept of equity is enriched considerably when ideas from Indigenous, critical and politically committed research traditions are involved in research design. When research design and the processes of research are guided by principles of equity, several issues warrant investigation. These include power relations, deficit models of research, homogeneity and reflexivity. Research design that is informed by principles of equity is explicit in its political purpose of seeking socially just outcomes for the short and long term.
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade, many crash models have accounted for extra-variation in crash counts: variation over and above that accounted for by the Poisson density. This extra-variation, or dispersion, is theorized to capture unaccounted-for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models, tantamount to assuming that unaccounted-for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural and other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord by exploring additional dispersion functions and using an independent data set, presenting an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four employed traffic flows as explanatory factors in the mean structure, while the remainder also included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criterion (DIC) statistics. The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that the known influences on expected crash counts are likely to be different from the factors that might help to explain unaccounted-for variation in crashes across sites.
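The idea of letting the dispersion parameter vary with covariates can be sketched as a negative binomial log-likelihood with a log-linear dispersion function, fitted here by maximum likelihood rather than the paper's Bayesian MCMC approach. All data, names, and coefficient values below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, X, Z, y):
    """NB2 log-likelihood with mu = exp(X@beta) and a covariate-dependent
    dispersion alpha = exp(Z@gamma), so Var(y) = mu + alpha * mu**2."""
    k = X.shape[1]
    beta, gamma = params[:k], params[k:]
    mu = np.exp(X @ beta)
    r = 1.0 / np.exp(Z @ gamma)           # NB "size" parameter
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -ll.sum()

# Hypothetical data: y = crash counts, X = mean covariates (intercept +
# a log-flow proxy), Z = dispersion covariates (here the same design).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = X
mu = np.exp(0.5 + 0.8 * X[:, 1])
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # true alpha = 0.5, constant
fit = minimize(neg_loglik, np.zeros(4), args=(X, Z, y), method="BFGS")
print(fit.x)   # [beta0, beta1, gamma0, gamma1]; gamma1 ~ 0 in this setup
```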
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions that accompany each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process and indicate how well they statistically approximate it. We also present the theory behind dual-state process count models and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (for observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
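The simulation argument can be reproduced in miniature: sites with small, heterogeneous crash probabilities and low exposure yield more zeros than a single Poisson fit implies, even though no site is ever in a "perfectly safe" state. The parameters below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sites, passages = 5000, 200      # low exposure: 200 vehicle passages per site

# Every passage is a Bernoulli trial; the crash probability is small and
# varies across sites (Poisson trials). No site is "perfectly safe".
p = rng.gamma(shape=0.5, scale=2e-3, size=n_sites)
crashes = rng.binomial(passages, p)

observed = np.mean(crashes == 0)
poisson_implied = np.exp(-crashes.mean())   # zero share a Poisson fit implies
print(f"observed zero share:  {observed:.3f}")
print(f"Poisson-implied:      {poisson_implied:.3f}")   # lower -> "excess" zeros
```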
Abstract:
Considerable past research has explored relationships between vehicle accidents and geometric design and operation of road sections, but relatively little research has examined factors that contribute to accidents at railway-highway crossings. Between 1998 and 2002 in Korea, about 95% of railway accidents occurred at highway-rail grade crossings, resulting in 402 accidents, of which about 20% resulted in fatalities. These statistics suggest that efforts to reduce crashes at these locations may significantly reduce crash costs. The objective of this paper is to examine factors associated with railroad crossing crashes. Various statistical models are used to examine the relationships between crossing accidents and features of crossings. The paper also compares accident models developed in the United States and the safety effects of crossing elements obtained using Korean data. Crashes were observed to increase with total traffic volume and average daily train volumes. The proximity of crossings to commercial areas and the distance of the train detector from crossings are associated with larger numbers of accidents, as is the time duration between the activation of warning signals and gates. The unique contributions of the paper are the application of the gamma probability model to deal with underdispersion and the insights obtained regarding railroad crossing related vehicle crashes.
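A quick way to see why a gamma count model, rather than the usual negative binomial, might be needed is the variance-to-mean ratio of the observed counts; values below one indicate the underdispersion this paper addresses. The counts below are hypothetical.

```python
import numpy as np

def dispersion_ratio(counts):
    """Variance-to-mean ratio: ~1 is Poisson-like, >1 overdispersed,
    <1 underdispersed (the case motivating a gamma count model)."""
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical yearly crash counts at a set of crossings.
crossing_crashes = [0, 1, 1, 0, 1, 2, 1, 0, 1, 1, 0, 1]
print(dispersion_ratio(crossing_crashes))   # < 1 suggests underdispersion
```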