234 results for main components


Relevance: 20.00%

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of including these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions, and in comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool.

A study of the significance of including electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application; the most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To apply the Monte Carlo code effectively to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques.

Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established and used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal; for the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path, designated the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components; bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions, and hence indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and that it has poorer precision (approximately twice the coefficient of variation) than standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:

1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements;
2. demonstrated that the statistical precision of the proposed DPA(+) three tissue component technique is poorer than that of the standard DEXA two tissue component technique;
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three component model of fat, lean soft tissue and bone mineral; and
4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system.

The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
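As a rough illustration of the two-component decomposition underlying DEXA, the measurement reduces to a small linear system in the areal densities of the two tissues. This is a minimal sketch; the mass attenuation coefficients are illustrative placeholders, not values from the thesis.

```python
import numpy as np

# Two-component dual-energy decomposition: two log-attenuation measurements
# at two beam energies give a 2x2 linear system for the areal densities
# (g/cm^2) of bone mineral (b) and soft tissue (s):
#   ln(I0/I)_E = mu_b(E) * t_b + mu_s(E) * t_s,   E in {low, high}.
# Coefficients below are illustrative placeholders, not thesis values.
mu = np.array([[0.60, 0.25],    # [mu_b(low),  mu_s(low) ]  (cm^2/g)
               [0.30, 0.20]])   # [mu_b(high), mu_s(high)]  (cm^2/g)

def decompose(log_att_low, log_att_high):
    """Recover (t_b, t_s) from the two measured log-attenuations."""
    return np.linalg.solve(mu, [log_att_low, log_att_high])

# Forward-project a known composition, then recover it.
t_true = np.array([1.2, 18.0])   # bone mineral, soft tissue (g/cm^2)
measured = mu @ t_true           # noise-free log-attenuations
print(decompose(*measured))      # -> [ 1.2 18. ]
```

The DPA(+) technique described above would extend this to a 3x3 system, the third equation coming from the measured path length along the ray.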

Relevance: 20.00%

Abstract:

The paper discusses the operating principles and control characteristics of a dynamic voltage restorer (DVR). It is assumed that the source voltages contain interharmonic components in addition to fundamental components. The main aim of the DVR is to produce a set of clean, balanced sinusoidal voltages across the load terminals irrespective of unbalance, distortion and voltage sag/swell in the supply voltage. An algorithm is discussed for extracting fundamental phasor sequence components from the samples of three-phase voltage or current waveforms containing integer harmonics and interharmonics, and DVR operation based on the extracted components is demonstrated. The switching signal is generated using a deadbeat controller. It is shown that the DVR is able to compensate for these interharmonic components such that the load voltages are perfectly regulated. DVR operation under deep voltage sag is also discussed. The proposed DVR operation is verified through computer simulation studies using the MATLAB software package.
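The paper's interharmonic-tolerant extraction algorithm is not reproduced here, but a standard single-cycle DFT correlation conveys the basic idea of recovering the fundamental phasor from distorted samples. All frequencies and amplitudes below are illustrative assumptions.

```python
import numpy as np

F0 = 50.0          # fundamental frequency (Hz); assumed, not from the paper
FS = 10_000.0      # sampling rate (Hz)
N = int(FS / F0)   # samples per fundamental cycle

def fundamental_phasor(samples):
    """Estimate the fundamental phasor (peak amplitude, phase in rad) from
    one cycle of samples by correlating with sin/cos at F0 -- a standard
    single-cycle DFT; the paper's algorithm is more elaborate."""
    n = np.arange(N)
    w = 2 * np.pi * F0 * n / FS
    a = 2 / N * np.sum(samples[:N] * np.cos(w))   # = A*sin(phi)
    b = 2 / N * np.sum(samples[:N] * np.sin(w))   # = A*cos(phi)
    return np.hypot(a, b), np.arctan2(a, b)       # for v = A*sin(w*t + phi)

# Distorted test wave: fundamental + 5th harmonic + an interharmonic at 182 Hz.
t = np.arange(2 * N) / FS
v = (230 * np.sqrt(2) * np.sin(2 * np.pi * F0 * t + 0.3)
     + 20 * np.sin(2 * np.pi * 5 * F0 * t)
     + 15 * np.sin(2 * np.pi * 182 * t))
amp, phase = fundamental_phasor(v)
print(amp / np.sqrt(2), phase)   # close to 230 V rms, 0.3 rad
```

Integer harmonics cancel exactly over one cycle; the interharmonic leaks slightly, which is why a more elaborate algorithm, as in the paper, is needed for perfect regulation.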

Relevance: 20.00%

Abstract:

Component software has many benefits, most notably increased software re-use; however, the component software process places heavy burdens on programming language technology, which modern object-oriented programming languages do not address. In particular, software components require specifications that are both sufficiently expressive and sufficiently abstract, and, where possible, these specifications should be checked formally by the programming language. This dissertation presents a programming language called Mentok that provides two novel programming language features enabling improved specification of stateful component roles. Negotiable interfaces are interface types extended with protocols, and allow specification of changing method availability, including some patterns of out-calls and re-entrance. Type layers are extensions to module signatures that allow specification of abstract control flow constraints through the interfaces of a component-based application. Development of Mentok's unique language features included creation of MentokC, the Mentok compiler, and formalization of key properties of Mentok in mini-languages called MentokP and MentokL.
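Mentok checks such protocols statically; as a loose runtime analogue (hypothetical illustration, not Mentok syntax), a component can track which methods its protocol currently makes available:

```python
class ProtocolError(Exception):
    pass

class NegotiableStream:
    """Runtime analogue of a negotiable interface: the protocol
    'open -> (read | write)* -> close' decides which methods are
    currently available. Mentok checks this statically at compile
    time; this sketch only mimics the idea dynamically."""
    def __init__(self):
        self._state = "closed"

    def _require(self, *states):
        if self._state not in states:
            raise ProtocolError(f"not available in state {self._state!r}")

    def open(self):
        self._require("closed")
        self._state = "open"

    def read(self):
        self._require("open")
        return b""

    def close(self):
        self._require("open")
        self._state = "closed"

s = NegotiableStream()
s.open(); s.read(); s.close()
try:
    s.read()                      # protocol violation: read after close
except ProtocolError as e:
    print("rejected:", e)
```

Type layers play the complementary role at module level, constraining abstract control flow through component interfaces rather than through a single object's state.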

Relevance: 20.00%

Abstract:

Nitrous oxide (N2O) is primarily produced by the microbially mediated nitrification and denitrification processes in soils. Its production is influenced by a suite of climate (i.e. temperature and rainfall) and soil (physical and chemical) variables, by interacting soil and plant nitrogen (N) transformations (either competing for or supplying substrates), and by land management practices, so it is not surprising that N2O emissions are highly variable both spatially and temporally. Computer simulation models, which can integrate all of these variables, are required for the complex task of providing quantitative determinations of N2O emissions. Numerous simulation models have been developed to predict N2O production. Each model has its own philosophy in constructing simulation components, as well as its own performance strengths. The models range from those that attempt to comprehensively simulate all soil processes to more empirical approaches requiring minimal input data. These N2O simulation models can be classified into three categories: laboratory, field and regional/global levels. Process-based field-scale N2O simulation models, which simulate whole agroecosystems and can be used to develop N2O mitigation measures, are the most widely used. The current challenge is how to scale up the relatively more robust field-scale models to catchment, regional and national scales. This paper reviews the development history, main construction components, strengths, limitations and applications of the N2O emission models published in the literature. All three scale levels are considered, and the current knowledge gaps and challenges in modelling N2O emissions from soils are discussed.

Relevance: 20.00%

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits, the main one being the minimum size at which an individual bit can be stored stably. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150°C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout.

A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium.

It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage.

A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
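A minimal sketch of beam propagation through a stored index structure follows. It uses a split-step Fourier scheme for brevity rather than the finite-difference scheme of the thesis, but the illustrated physics (paraxial diffraction plus phase accumulated from the induced index change dn(x)) is the same; all parameter values are illustrative assumptions.

```python
import numpy as np

# Split-step beam propagation of a readout beam through a stored index
# 'stripe'. Parameters are illustrative, not values from the thesis.
LAM, N0 = 633e-9, 2.2                    # wavelength (m), background index
K0 = 2 * np.pi / LAM
NX, DX = 1024, 0.5e-6                    # transverse grid
NZ, DZ = 400, 5e-6                       # propagation steps
x = (np.arange(NX) - NX // 2) * DX
kx = 2 * np.pi * np.fft.fftfreq(NX, DX)  # transverse wavenumbers (rad/m)

dn = -1e-4 * np.exp(-(x / 15e-6) ** 2)          # one stored index stripe
E = np.exp(-(x / 40e-6) ** 2).astype(complex)   # Gaussian readout beam

diffract = np.exp(-1j * kx ** 2 * DZ / (2 * K0 * N0))  # paraxial diffraction
refract = np.exp(1j * K0 * dn * DZ)                    # phase from dn(x)
for _ in range(NZ):
    E = np.fft.ifft(np.fft.fft(E) * diffract)
    E *= refract
print((np.abs(E) ** 2).max())   # peak of the intensity profile at the CCD
```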
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model gives significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, a detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.

Relevance: 20.00%

Abstract:

AC motors are widely used in a broad range of modern systems, from household appliances to automated industrial applications such as ventilation systems, fans, pumps, conveyors and machine tool drives. Inverters are widely used in industrial and commercial applications due to the growing need for speed control in ASD systems. Fast switching transients and the common mode voltage, in interaction with parasitic capacitive couplings, may cause many unwanted problems in ASD applications, including shaft voltage and leakage currents. One of the inherent characteristics of Pulse Width Modulation (PWM) techniques is the generation of the common mode voltage, which is defined as the voltage between the electrical neutral of the inverter output and the ground. Shaft voltage can cause bearing currents when it exceeds the breakdown voltage level of the thin lubricant film between the inner and outer rings of the bearing; this phenomenon is the main reason for early bearing failures. Rapid development in power switch technology has led to a drastic reduction in switching rise and fall times. Because there is considerable capacitance between the stator windings and the frame, there can be a significant capacitive current (ground current escaping to earth through stray capacitors inside a motor) if the common mode voltage has high frequency components. This current leads to noise and Electromagnetic Interference (EMI) issues in motor drive systems. These problems have been dealt with using a variety of methods reported in the literature; however, cost and maintenance issues have prevented these methods from being widely accepted, since extra cost or a higher rating of the inverter switches is usually the price paid for such approaches. Thus, the determination of cost-effective techniques for shaft and common mode voltage reduction in ASD systems, with the focus on the first step of the design process, is the targeted scope of this thesis.

An introduction to this research – including a description of the research problem, the literature review and an account of the research progress linking the research papers – is presented in Chapter 1. Electrical power generation from renewable energy sources, such as wind energy systems, has become a crucial issue because of environmental problems and a predicted future shortage of traditional energy sources. Thus, Chapter 2 focuses on the shaft voltage analysis of stator-fed induction generators (IGs) and doubly fed induction generators (DFIGs) in wind turbine applications. This shaft voltage analysis includes topologies, high frequency modelling, calculation and mitigation techniques. A back-to-back AC-DC-AC converter is investigated in terms of shaft voltage generation in a DFIG, and different topologies of LC filter placement are analysed in an effort to eliminate the shaft voltage.

Different capacitive couplings exist in the motor/generator structure, and any change in design parameters affects these couplings. Thus, an appropriate design for AC motors should lead to the smallest possible shaft voltage. Calculation of the shaft voltage based on the different capacitive couplings, and an investigation of the effects of different design parameters, are discussed in Chapter 3. This is achieved through 2-D and 3-D finite element simulation and experimental analysis. End-winding parameters of the motor are also effective factors in the calculation of the shaft voltage and have not been taken into account in previously reported studies.
Calculation of the end-winding capacitances is rather complex because of the diversity of end-winding shapes and the complexity of their geometry. A comprehensive analysis of these capacitances has been carried out with 3-D finite element simulations and experimental studies to determine their effective design parameters; this is documented in Chapter 4. The results of this analysis show that, by choosing appropriate design parameters, it is possible to decrease the shaft voltage and the resultant bearing current at the primary stage of generator/motor design, without using any additional active or passive filter-based techniques.

The common mode voltage is determined by the switching pattern, and by using an appropriate pattern the common mode voltage level can be controlled. Therefore, any PWM pattern which eliminates or minimizes the common mode voltage is an effective shaft voltage reduction technique. Thus, common mode voltage reduction of a three-phase AC motor supplied with a single-phase diode rectifier is the focus of Chapter 5; the proposed strategy is mainly based on proper utilization of the zero vectors. Multilevel inverters, which have more voltage levels and switching states, are also used in ASD systems and can provide more possibilities to reduce the common mode voltage; the common mode voltage of multilevel inverters is investigated in Chapter 6. Chapter 7 investigates techniques for eliminating the shaft voltage in a DFIG, based on the methods presented in the literature, by the use of simulation results. It is shown that every solution for reducing the shaft voltage in DFIG systems has its own characteristics, and these have to be taken into account in determining the most effective strategy. Calculation of the capacitive coupling and electric fields between the outer and inner races and the balls, at different motor speeds and in symmetrical and asymmetrical shaft and ball positions, is discussed in Chapter 8. The analysis is carried out using finite element simulations to determine the conditions which will increase the probability of high rates of bearing failure due to current discharges through the balls and races.
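For reference, the common mode voltage of a two-level inverter follows directly from its eight switching states; a quick enumeration (per-unit values, purely illustrative) shows why the zero vectors dominate:

```python
import itertools

VDC = 1.0  # DC-link voltage (per-unit); illustrative

# Common mode voltage of a two-level, three-phase inverter: with pole
# voltages of +-Vdc/2, v_cm = (v_a + v_b + v_c) / 3. The two zero vectors
# (000 and 111) produce the largest |v_cm| (Vdc/2 versus Vdc/6 for active
# vectors), which is why PWM patterns that avoid or balance the zero
# vectors reduce the common mode voltage and hence the shaft voltage.
for state in itertools.product([0, 1], repeat=3):
    poles = [VDC / 2 if s else -VDC / 2 for s in state]
    v_cm = sum(poles) / 3
    print("".join(map(str, state)), f"v_cm = {v_cm:+.3f} Vdc")
```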

Relevance: 20.00%

Abstract:

Aims: Telemonitoring (TM) and structured telephone support (STS) have the potential to deliver specialised management to more patients with chronic heart failure (CHF), but their efficacy is still to be proven. Objectives: To review randomised controlled trials (RCTs) of TM or STS on all-cause mortality and all-cause and CHF-related hospitalisations in patients with CHF, as a non-invasive remote model of specialised disease-management intervention. Methods: Data sources: We searched 15 electronic databases and hand-searched bibliographies of relevant studies, systematic reviews, and meeting abstracts. Two reviewers independently extracted all data. Study eligibility and participants: We included any RCT comparing TM or STS to usual care of patients with CHF. Studies that included intensified management with additional home or clinic visits were excluded. Synthesis: Primary outcomes (mortality and hospitalisations) were analysed; secondary outcomes (cost, length of stay, quality of life) were tabulated. Results: Thirty RCTs of STS and TM were identified (25 peer-reviewed publications (n=8,323) and five abstracts (n=1,482)). Of the 25 peer-reviewed studies, 11 evaluated TM (2,710 participants), 16 evaluated STS (5,613 participants) and two tested both interventions. TM reduced all-cause mortality (risk ratio (RR) 0.66 [95% CI 0.54-0.81], p<0.0001) and STS showed a similar trend (RR 0.88 [95% CI 0.76-1.01], p=0.08). Both TM (RR 0.79 [95% CI 0.67-0.94], p=0.008) and STS (RR 0.77 [95% CI 0.68-0.87], p<0.0001) reduced CHF-related hospitalisations. Both interventions improved quality of life, reduced costs, and were acceptable to patients. Improvements in prescribing, patient knowledge, self-care, and functional class were observed. Conclusion: TM and STS both appear to be effective interventions to improve outcomes in patients with CHF.

Relevance: 20.00%

Abstract:

The new configuration proposed in this paper for the Marx generator (MG) aims to generate high voltage for pulsed power applications with a reduced number of semiconductor components and a more efficient load-supplying process. The main idea is to charge two groups of capacitors in parallel through an inductor, taking advantage of the resonance phenomenon to charge each capacitor up to double the input voltage level. In each resonant half-cycle one of the capacitor groups is charged; eventually the charged capacitors are connected in series and the sum of the capacitor voltages appears at the output of the topology. This topology can be considered a modified Marx generator based on the resonance concept. Simulated models of this converter have been investigated in the Matlab/SIMULINK platform, and a prototype setup has been implemented in the laboratory. The results acquired from both fully confirm proper operation of the converter.
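The voltage doubling relies on textbook series-resonant charging: with the capacitor initially discharged, v_C(t) = Vin(1 - cos(w0*t)) with w0 = 1/sqrt(LC), peaking at 2*Vin after half a resonant cycle. A minimal sketch, with component values that are illustrative rather than the paper's:

```python
import numpy as np

# Resonant charging of a capacitor through an inductor from a DC source.
# Component values are illustrative placeholders, not from the paper.
VIN, L, C = 1000.0, 1e-3, 1e-6
w0 = 1 / np.sqrt(L * C)                   # resonant angular frequency
t = np.linspace(0, np.pi / w0, 1000)      # half a resonant cycle
v_c = VIN * (1 - np.cos(w0 * t))          # capacitor voltage, lossless case
print(f"peak capacitor voltage: {v_c.max():.0f} V "
      f"after {np.pi / w0 * 1e6:.0f} us")  # -> 2000 V, i.e. 2 * VIN
```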


Relevance: 20.00%

Abstract:

Background: We wished to explore the ways in which palliative care is included in undergraduate health services curricula in Australia, and the barriers to, and opportunities for, such inclusion. Methods: A scoping study of current Australian undergraduate health care curricula was designed, using an email survey of deans (or equivalent) of health faculties, covering all Australian undergraduate courses that prepare medicine, nursing and allied health professionals for entry to practice. Participants were deans or faculty heads from health and related faculties offering courses relevant to the project, identified from the Australian Government Department of Education, Science and Training website. Sixty-two deans (or equivalent) from 41 Australian universities were surveyed, and a total of 42 completed surveys were returned (68% of deans). Main outcome measures were total hours, content, teaching and learning strategies and resources for palliative care education in undergraduate curricula, and perceived gaps, barriers and opportunities to support the inclusion of palliative care education in undergraduate curricula. Results: Forty-five percent of respondents reported that the content of current curricula reflected the palliative approach to a large degree. More than half of the respondents reported that their course had palliative care components integrated to a minor degree, and a further third to a moderate degree. The number of hours dedicated to palliative care and the teaching and learning strategies varied across all respondents, although there was a high degree of commonality in the content areas taught. Conclusion: Current Australian undergraduate courses vary widely in the nature and extent to which they provide education in palliative care.

Relevance: 20.00%

Abstract:

In fault detection and diagnostics, limitations arising from the sensor network architecture are one of the main challenges in evaluating a system's health status. Usually the design of the sensor network architecture is not based solely on diagnostic purposes; other factors such as control, financial constraints and practical limitations are also involved. As a result, it is quite common to have one sensor (or one set of sensors) monitoring the behaviour of two or more components, which can significantly increase the complexity of diagnostic problems. In this paper a systematic approach is presented to deal with such complexities. It is shown how the problem can be formulated as a Bayesian-network-based diagnostic mechanism with latent variables. The developed approach is also applied to the problem of fault diagnosis in HVAC systems, an application area with considerable modelling and measurement constraints.
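A minimal sketch of the idea, with hypothetical components and illustrative probabilities (not from the paper): when one sensor is shared by two components, the component states become latent causes that must be inferred jointly from the single reading.

```python
from itertools import product

# Hypothetical two-component, one-sensor network: the sensor cannot tell
# which component misbehaves, so both fault states are latent causes of
# one shared reading. All probabilities are illustrative placeholders.
P_FAULT = {"chiller": 0.05, "fan": 0.10}      # prior fault probabilities
P_ABN = {(0, 0): 0.02, (0, 1): 0.90,          # P(sensor abnormal | states)
         (1, 0): 0.85, (1, 1): 0.98}

def posterior(component):
    """P(component faulty | sensor reads abnormal), by full enumeration."""
    names = list(P_FAULT)
    joint_abn = faulty_abn = 0.0
    for states in product([0, 1], repeat=2):
        p = 1.0
        for name, s in zip(names, states):
            p *= P_FAULT[name] if s else 1 - P_FAULT[name]
        p *= P_ABN[states]                    # likelihood of the reading
        joint_abn += p
        if states[names.index(component)]:
            faulty_abn += p
    return faulty_abn / joint_abn

print(posterior("chiller"), posterior("fan"))   # ~0.30 vs ~0.62
```

Even this tiny example shows the ambiguity the paper addresses: a single abnormal reading shifts belief towards the component with the higher prior, and only additional evidence can separate the two.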

Relevance: 20.00%

Abstract:

The flood flow in urbanised areas constitutes a major hazard to the population and infrastructure, as seen during the summer 2010-2011 floods in Queensland (Australia). Flood flows in urban environments have been studied only relatively recently, and no previous study has considered the impact of turbulence in the flow. During the 12-13 January 2011 flood of the Brisbane River, turbulence measurements were conducted in an inundated urban environment in Gardens Point Road, next to Brisbane's central business district (CBD), at relatively high frequency (50 Hz). The properties of the sediment flood deposits were characterised, and the acoustic Doppler velocimeter unit was calibrated to obtain both instantaneous velocity components and suspended sediment concentration in the same sampling volume with the same temporal resolution. While the flow motion in Gardens Point Road was subcritical, the water elevations and velocities fluctuated with a distinctive period between 50 and 80 s. The low frequency fluctuations were linked with some local topographic effects: i.e., a local choke induced by an upstream constriction between stairwells caused slow oscillations with a period close to the natural sloshing period of the car park. The instantaneous velocity data were analysed using a triple decomposition, and the same triple decomposition was applied to the water depth, velocity flux, suspended sediment concentration and suspended sediment flux data. The velocity fluctuation data showed a large energy component in the slow fluctuation range. For the first two tests at z = 0.35 m, the turbulence data suggested some isotropy; at z = 0.083 m, on the other hand, the findings indicated some flow anisotropy. The suspended sediment concentration (SSC) data presented a general trend of increasing SSC with decreasing water depth. During one test (T4), some long-period oscillations were observed with a period of about 18 minutes; the cause of these oscillations remains unknown to the authors. The last test (T5) took place in very shallow water with high suspended sediment concentrations, and it is suggested that the flow in the car park was disconnected from the main channel. Overall the flow conditions at the sampling sites corresponded to a specific momentum between 0.2 and 0.4 m², which would be near the upper end of the scale for safe evacuation of individuals in flooded areas. However, the authors do not believe that evacuation of individuals in Gardens Point Road would have been safe, because of the intense water surges and flow turbulence. More generally, any criterion for safe evacuation based solely upon the flow velocity, water depth or specific momentum cannot account for the hazards caused by flow turbulence, water depth fluctuations and water surges.
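A minimal sketch of a triple decomposition as applied here, v(t) = V + v_slow + v_turb, separating the mean, the slow (50-80 s) fluctuation and the turbulent residual. A simple moving average stands in for the low-pass filter, and the cutoff is an assumption rather than the study's choice.

```python
import numpy as np

FS = 50.0          # sampling rate (Hz), as in the measurements
CUTOFF_S = 10.0    # averaging window (s) separating slow and fast; assumed

def triple_decompose(v):
    """Split a record into mean, slow fluctuation and turbulent residual."""
    win = int(CUTOFF_S * FS)
    mean = v.mean()
    kernel = np.ones(win) / win
    slow = np.convolve(v - mean, kernel, mode="same")  # low-pass component
    turb = v - mean - slow                             # turbulent residual
    return mean, slow, turb

# Synthetic record: mean flow + 60 s 'sloshing' + broadband noise.
t = np.arange(0, 600, 1 / FS)
v = 0.8 + 0.2 * np.sin(2 * np.pi * t / 60) + 0.05 * np.random.randn(t.size)
V, v_slow, v_turb = triple_decompose(v)
print(V, v_slow.std(), v_turb.std())   # ~0.8, ~0.14, ~0.05
```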

Relevance: 20.00%

Abstract:

Many drivers in highly motorised countries believe that aggressive driving is increasing. While the prevalence of the behaviour is difficult to identify reliably, the consequences of on-road aggression can be severe, with extreme cases resulting in property damage, injury and even death. This research program was undertaken to explore the nature of aggressive driving within the framework of relevant psychological theory, in order to enhance our understanding of the behaviour and to inform the development of relevant interventions. To guide the research, a provisional 'working' definition of aggressive driving was proposed, encapsulating the recurrent characteristics of the behaviour cited in the literature: "aggressive driving is any on-road behaviour adopted by a driver that is intended to cause physical or psychological harm to another road user and is associated with feelings of frustration, anger or threat". Two main theoretical perspectives informed the program of research. The first was Shinar's (1998) frustration-aggression model, which identifies both the person-related and situational characteristics that contribute to aggressive driving, and proposes that aggressive behaviours can serve either an 'instrumental' or a 'hostile' function. The second was Anderson and Bushman's (2002) General Aggression Model. In contrast to Shinar's model, the General Aggression Model reflects a broader perspective on human aggression that facilitates a more comprehensive examination of the emotional and cognitive aspects of aggressive behaviour. Study One (n = 48) examined aggressive driving behaviour from the perspective of young drivers as an at-risk group, through six focus groups with eight participants in each. Qualitative analyses identified multiple situational and person-related factors that contribute to on-road aggression. Consistent with human aggression theory, examination of self-reported experiences of aggressive driving identified key psychological elements and processes that are experienced during on-road aggression. Participants cited several emotions experienced during an on-road incident: annoyance, frustration, anger, threat and excitement. Findings also suggest that stress generated off-road may transfer to the on-road environment, at times with severe consequences including crash involvement. Young drivers also appeared quick to make negative attributions about the other driver, some having additional thoughts of taking action. Additionally, the results showed little difference between males and females in the severity of behavioural responses they were prepared to adopt, although females appeared more likely to displace their negative emotions. Following the self-reported on-road incident, evidence was also found of a post-event influence, with females being more likely to experience ongoing emotional effects after the event, evidenced by ruminative thoughts or distraction from tasks. However, the impact of such a post-event influence on later behaviours or interpersonal interactions appears to be minimal. Study Two involved the quantitative analysis of n = 926 surveys completed by a wide age range of drivers from across Queensland. The study aimed to explore the relationships between the theoretical components of aggressive driving identified in the literature review and refined based on the findings of Study One.
Regression analyses were used to examine participants' emotional, cognitive and behavioural responses to two differing on-road scenarios while exploring the proposed theoretical framework. A number of socio-demographic, state and trait person-related variables, such as age, pre-study emotions, trait aggression and problem-solving style, were found to predict the likelihood of a negative emotional response such as frustration, anger, perceived threat or negative attributions, and the likelihood of adopting either an instrumental or a hostile behaviour in response to Scenarios One and Two. Complex relationships were found to exist between the variables; however, they were interpretable based on the literature review findings. Factor analysis revealed evidence supporting Shinar's (1998) dichotomous description of on-road aggressive behaviours as instrumental or hostile. The second stage of Study Two used logistic regression to examine the factors that predicted the potentially hostile aggressive drivers (n = 88) within the sample. These drivers were those who indicated a preparedness to engage in direct acts of interpersonal aggression on the road. Young male drivers 17–24 years of age were more likely to be classified as potentially hostile aggressive drivers. Young drivers (17–24 years) also scored significantly higher than other drivers on all subscales of the Aggression Questionnaire (Buss & Perry, 1992) and on the 'negative problem orientation' and 'impulsive careless style' subscales of the Social Problem Solving Inventory – Revised (D'Zurilla, Nezu & Maydeu-Olivares, 2002). The potentially hostile aggressive drivers were also significantly more likely to engage in speeding and drink/drug driving behaviour. With regard to the emotional, cognitive and behavioural variables examined, the potentially hostile aggressive driver group also scored significantly higher than the 'other driver' group on most variables in the proposed theoretical framework. The variables contained in the framework of aggressive driving reliably distinguished potentially hostile aggressive drivers from other drivers (Nagelkerke R² = .39). Study Three used a case study approach to conduct an in-depth examination of the psychosocial characteristics of n = 10 (9 males and 1 female) self-confessed hostile aggressive drivers, aged 24–55 years. A large proportion of these drivers reported a Year 10 education or better and average to above-average incomes. As a group, the drivers reported committing a number of speeding and unlicensed driving offences in the past three years, and extensive histories of violations outside this period. Considerable evidence was also found of exposure to a range of developmental risk factors for aggression that may have contributed to the drivers' on-road expression of aggression. These drivers scored significantly higher on the Aggression Questionnaire subscales and on the Social Problem Solving Inventory – Revised 'negative problem orientation' and 'impulsive/careless style' subscales than the general sample of drivers in Study Two. The hostile aggressive drivers also scored significantly higher on the Barratt Impulsiveness Scale – 11 (Patton, Stanford & Barratt, 1995) measure of impulsivity than a male 'inmate' or a female 'general psychiatric' comparison group.
Using the Carlson Psychological Survey (Carlson, 1982), the self-confessed hostile aggressive drivers scored equal to or higher than the comparison group of incarcerated individuals on the subscale measures of chemical abuse, thought disturbance, anti-social tendencies and self-depreciation. Using the Carlson Psychological Survey personality profiles, seven participants were profiled as 'markedly anti-social', two as 'negative-explosive' and one as 'self-centred'. Qualitative analysis of the ten case study self-reports of on-road hostile aggression revealed a similar range of on-road situational factors to those identified in the literature review and Study One. Six of the case studies reported off-road generated stress that they believed contributed to the episodes of aggressive driving they recalled. Intense 'anger' or 'rage' was most frequently used to describe the emotions experienced in response to the perceived provocation; less frequently, 'excitement' and 'fear' were cited as relevant emotions. Notably, five of the case studies experienced difficulty articulating their emotions, suggesting underlying emotional difficulties. Consistent with Study Two, these drivers reported negative attributions, and most had thoughts of aggressive actions they would like to take. Similarly, these drivers adopted both instrumental and hostile aggressive behaviours during the self-reported incidents. Nine participants showed little or no remorse for their behaviour, and these drivers also appeared to exhibit low levels of personal insight. Interestingly, few incidents were brought to the attention of the authorities. Further, examination of the person-related characteristics of these drivers indicated that they may be more likely to have come from difficult or dysfunctional backgrounds and to have a history of anti-social behaviour on and off the road. The research program has several key theoretical implications. While many of the findings supported Shinar's (1998) frustration-aggression model, two key areas of difference emerged. Firstly, aggressive driving behaviour does not always appear to be frustration driven, but can also be driven by feelings of excitation (consistent with the tenets of the General Aggression Model). Secondly, while the findings supported a distinction between instrumental and hostile aggressive behaviours, the characteristics of these two types of behaviour require more examination. For example, Shinar (1998) proposes that a driver will adopt an instrumental aggressive behaviour when their progress is impeded if it allows them to achieve their immediate goals (e.g. reaching their destination as quickly as possible), whereas they will engage in hostile aggressive behaviour if their path to their goal is blocked. However, the current results question this assertion, since many of the hostile aggressive drivers studied appeared prepared to engage in hostile acts irrespective of whether their goal was blocked or not. In fact, their behaviour appeared to be characterised by a preparedness to abandon their immediate goals (even if for a short period of time) in order to express their aggression. The use of the General Aggression Model enabled an examination of the three components of the 'present internal state' (emotions, cognitions and arousal) and how these influence the likelihood of a person responding aggressively to an on-road situation.
This provided a detailed insight into both the cognitive and emotional aspects of aggressive driving, with important implications for the design of relevant countermeasures. For example, the findings highlighted the potential value of using Cognitive Behavioural Therapy with aggressive drivers, particularly the more hostile offenders. Similarly, educational efforts need to be mindful of the way that person-related factors appear to influence one's perception of another driver's behaviour as aggressive or benign. Drivers with a predisposition for aggression were more likely to perceive aggression or 'wrongdoing' in an ambiguous on-road situation and to respond with instrumental and/or hostile behaviour, highlighting the importance of perceptual processes in aggressive driving behaviour.