994 results for hazard models


Relevance:

20.00%

Publisher:

Abstract:

Fire safety of buildings has been recognised as very important by the building industry and the community at large. Gypsum plasterboards are widely used to protect light gauge steel frame (LSF) walls all over the world. Plasterboard consists largely of gypsum (CaSO4.2H2O), which contains free and chemically bound water in its crystal structure, together with calcium carbonate (CaCO3). The dehydration of gypsum and the decomposition of calcium carbonate absorb heat and are thus able to protect LSF walls from fires. Kolarkar and Mahendran (2008) developed an innovative composite wall panel system in which insulation was sandwiched between two plasterboards to improve the thermal and structural performance of LSF wall panels under fire conditions. To understand the performance of gypsum plasterboards and LSF wall panels under standard fire conditions, many experiments were conducted in the Fire Research Laboratory of Queensland University of Technology (Kolarkar, 2010). Fire tests were conducted on single, double and triple layers of Type X gypsum plasterboards and on load-bearing LSF wall panels under standard fire conditions. However, suitable numerical models had not been developed to investigate the thermal performance of LSF walls using the innovative composite panels under standard fire conditions, and continued reliance on expensive and time-consuming fire tests is not acceptable. Therefore, this research developed suitable numerical models to investigate the thermal performance of both plasterboard assemblies and load-bearing LSF wall panels. SAFIR, a finite element program, was used to investigate the thermal performance of gypsum plasterboard assemblies and LSF wall panels under standard fire conditions.
Appropriate values of the important thermal properties were proposed for plasterboards and insulations based on laboratory tests, a literature review, and comparisons of finite element analysis results for small-scale plasterboard assemblies from this research with the corresponding experimental results from Kolarkar (2010). The important thermal properties (thermal conductivity, specific heat capacity and density) of gypsum plasterboard and insulation materials were proposed as functions of temperature and used in the numerical models of load-bearing LSF wall panels. Using these thermal properties, the developed finite element models were able to accurately predict the time-temperature profiles of plasterboard assemblies, and to predict those of load-bearing LSF wall systems reasonably well despite the many complexities present in these systems under fire. This thesis presents the details of the finite element models of plasterboard assemblies and load-bearing LSF wall panels, including those with the composite panels developed by Kolarkar and Mahendran (2008). It examines and compares the thermal performance of composite panels based on different insulating materials of varying densities and thicknesses using 11 small-scale tests, and makes suitable recommendations for improved fire performance of stud wall panels protected by these composite panels. It also presents thermal performance data for LSF wall systems and demonstrates the superior performance of LSF wall systems using the composite panels. Using the developed finite element models of LSF walls, this thesis proposes new LSF wall systems with increased fire rating. The developed finite element models are particularly useful for comparing the thermal performance of different wall panel systems without time-consuming and expensive fire tests.
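The protective mechanism described in this abstract, heat absorbed by dehydration acting as a spike in the effective specific heat, can be illustrated with a toy one-dimensional conduction model. This is a hedged sketch: all property values (base specific heat, the size and location of the dehydration peak, conductivity, density, board thickness) are invented for illustration and are not SAFIR inputs or measured plasterboard data.

```python
# Toy 1D sketch of how a dehydration peak in specific heat delays heat
# transfer through a board (all values invented; this is not SAFIR or
# measured plasterboard data): explicit finite-difference conduction with
# the hot face fixed and the unexposed face insulated.

def specific_heat(T, peak=True):
    base = 950.0                        # J/(kg K), illustrative
    if peak and 100.0 <= T <= 150.0:
        return base + 20000.0           # latent-heat spike from dehydration
    return base

def unexposed_temp(peak=True, hot=800.0, ambient=20.0, thickness=0.016,
                   n=17, k=0.25, rho=700.0, dt=0.5, t_end=600.0):
    """Unexposed-face temperature after t_end seconds of one-sided heating."""
    dx = thickness / (n - 1)
    T = [ambient] * n
    T[0] = hot                          # exposed face held at fire temperature
    for _ in range(int(t_end / dt)):
        new = T[:]
        for i in range(1, n - 1):
            alpha = k / (rho * specific_heat(T[i], peak))
            new[i] = T[i] + alpha * dt / (dx * dx) * (T[i + 1] - 2 * T[i] + T[i - 1])
        new[-1] = new[-2]               # insulated unexposed face
        T = new
    return T[-1]

print(unexposed_temp(peak=True) < unexposed_temp(peak=False))  # True
```

With the dehydration peak switched on, the heat front lingers in the 100-150 °C band, so the unexposed face stays cooler, which is the qualitative effect the abstract attributes to gypsum.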

Relevance:

20.00%

Publisher:

Abstract:

The question of under what conditions conceptual representation is compositional remains debatable within cognitive science. This paper proposes a well-developed mathematical apparatus for a probabilistic representation of concepts, drawing upon methods developed in quantum theory to propose a formal test that can determine whether a specific conceptual combination is compositional or not. The test examines a joint probability distribution modeling the combination and asks whether or not it is factorizable. Empirical studies indicate that some combinations should be considered non-compositional.
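The factorizability test mentioned above can be illustrated numerically. This is a minimal sketch of the underlying idea only, not the paper's quantum-theoretic apparatus: a joint distribution over two binary concept variables looks compositional when it equals the product of its marginals.

```python
# Minimal numeric sketch of the factorizability idea (not the paper's
# formal test): a joint distribution over two binary concept variables is
# compositional-looking when P(a, b) = P(a) * P(b) for every cell.

def is_factorizable(joint, tol=1e-9):
    """joint: 2 x 2 matrix of probabilities summing to 1."""
    pa = [sum(row) for row in joint]            # marginal over the first variable
    pb = [sum(col) for col in zip(*joint)]      # marginal over the second
    return all(abs(joint[i][j] - pa[i] * pb[j]) < tol
               for i in range(2) for j in range(2))

independent = [[0.24, 0.36], [0.16, 0.24]]  # outer product of (0.6, 0.4) and (0.4, 0.6)
correlated = [[0.45, 0.05], [0.05, 0.45]]   # strongly coupled variables

print(is_factorizable(independent))  # True
print(is_factorizable(correlated))   # False
```

A combination whose joint distribution fails this check, like `correlated` above, would count as non-compositional under the test's criterion.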

Relevance:

20.00%

Publisher:

Abstract:

Digital human models (DHM) have evolved as useful tools for ergonomic workplace design and product development, and are found in various industries and in education. The DHM systems that dominate the market were developed for specific purposes and differ significantly; this is reflected not only in incompatible results between DHM simulations but also in misunderstandings about how DHM simulations relate to real-world problems. While DHM developers are constrained by uncertainty about user needs and a lack of standards for model data, users are confined to one specific product and cannot exchange results or upgrade to another DHM system, as their previous results would be rendered worthless. Furthermore, the origin and validity of anthropometric and biomechanical data are not transparent to the user. The lack of standardisation in DHM systems has become a major roadblock to further system development, affecting all stakeholders in the DHM industry. Evidently, a framework for standardising digital human models is necessary to overcome these obstructions.

Relevance:

20.00%

Publisher:

Abstract:

Individual-based models describing the migration and proliferation of a population of cells frequently restrict the cells to a predefined lattice. An implicit assumption of this type of lattice-based model is that a proliferative population will always eventually fill the lattice. Here we develop a new lattice-free individual-based model that incorporates cell-to-cell crowding effects. We also derive approximate mean-field descriptions for the lattice-free model in two special cases motivated by commonly used experimental setups. Lattice-free simulation results are compared to these mean-field descriptions and to a corresponding lattice-based model. Data from a proliferation experiment are used to estimate the parameters of the new model, including the cell proliferation rate, showing that the model fits the data well. An important aspect of the lattice-free model is that the confluent cell density is not predefined, as it is in lattice-based models, but is an emergent model property. As a consequence of the more realistic, irregular configuration of cells in the lattice-free model, the population growth rate is much slower at high cell densities and the population cannot reach the same confluent density as an equivalent lattice-based model.
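The crowding mechanism in a lattice-free model can be sketched as follows. This is an illustrative toy, not the paper's model: cells occupy continuous positions, a division places a daughter one cell diameter away at a random angle, and the attempt is aborted when the daughter would overlap an existing cell, so growth slows as the population becomes dense. All parameter values are invented, and the overlap check ignores the periodic wrap for simplicity.

```python
import math
import random

# Toy lattice-free growth sketch (not the paper's model): cells occupy
# continuous positions in a square domain; a division places a daughter one
# cell diameter away at a random angle and is aborted on overlap, so the
# confluent density emerges from crowding rather than being predefined.

def step(cells, domain=20.0, diameter=1.0, p_div=0.1, rng=random):
    spacing = 0.99 * diameter           # slightly relaxed to dodge rounding
    new = []
    for (x, y) in cells:
        if rng.random() >= p_div:
            continue                    # this cell does not attempt division
        theta = rng.uniform(0.0, 2.0 * math.pi)
        dx = (x + diameter * math.cos(theta)) % domain
        dy = (y + diameter * math.sin(theta)) % domain
        crowded = any(math.hypot(dx - cx, dy - cy) < spacing
                      for (cx, cy) in cells + new)
        if not crowded:
            new.append((dx, dy))        # successful division
    cells.extend(new)

random.seed(1)
cells = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(10)]
for _ in range(60):
    step(cells)
print(len(cells))  # growth slows as crowding rejections accumulate
```

As the domain fills, most division attempts are rejected, reproducing qualitatively the slow high-density growth the abstract describes.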

Relevance:

20.00%

Publisher:

Abstract:

The importance of actively managing and analyzing business processes is acknowledged more than ever in organizations today. Business processes form an essential part of an organization, and their application areas are manifold. Most organizations keep records of the various activities that have been carried out for auditing purposes, but these records are rarely used for analysis. This paper describes the design and implementation of a process analysis tool that replays, analyzes and visualizes a variety of performance metrics using a process definition and its execution logs. Conducting performance analysis on existing and planned process models offers organizations a great way to detect bottlenecks within their processes and allows them to make more effective process improvement decisions. Our technique is applied to processes modeled in the YAWL language. Execution logs of process instances are compared against the corresponding YAWL process model and replayed in a robust manner, taking into account any noise in the logs. Finally, the performance characteristics obtained from replaying the log in the model are projected onto the model.
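The idea of replaying execution logs to extract performance metrics can be sketched in a few lines. This is a hedged illustration, not the YAWL-based tool itself: the log format, activity names and timestamps below are invented, and the code simply averages start-to-complete durations per activity and flags the slowest as a candidate bottleneck, skipping unmatched events as a crude form of noise tolerance.

```python
from collections import defaultdict
from datetime import datetime

# Hedged sketch of replaying an execution log for performance metrics (not
# the YAWL-based tool; log fields and activity names are invented): average
# start-to-complete durations per activity and flag the slowest activity as
# a candidate bottleneck.

log = [
    ("case1", "Receive order", "start",    "2024-01-01T09:00"),
    ("case1", "Receive order", "complete", "2024-01-01T09:05"),
    ("case1", "Check credit",  "start",    "2024-01-01T09:05"),
    ("case1", "Check credit",  "complete", "2024-01-01T10:35"),
    ("case2", "Receive order", "start",    "2024-01-01T09:10"),
    ("case2", "Receive order", "complete", "2024-01-01T09:18"),
    ("case2", "Check credit",  "start",    "2024-01-01T09:18"),
    ("case2", "Check credit",  "complete", "2024-01-01T10:08"),
]

def mean_durations(events):
    """Mean minutes from 'start' to 'complete' per activity."""
    open_at = {}
    totals = defaultdict(lambda: [0.0, 0])      # activity -> [sum, count]
    for case, activity, kind, ts in events:
        t = datetime.fromisoformat(ts)
        if kind == "start":
            open_at[(case, activity)] = t
        elif (case, activity) in open_at:       # ignore completes with no start
            minutes = (t - open_at.pop((case, activity))).total_seconds() / 60
            totals[activity][0] += minutes
            totals[activity][1] += 1
    return {a: s / n for a, (s, n) in totals.items()}

means = mean_durations(log)
bottleneck = max(means, key=means.get)
print(means)        # mean minutes per activity
print(bottleneck)   # "Check credit"
```

Projecting such per-activity figures back onto the process model is what turns a raw log into the bottleneck visualisation the paper describes.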

Relevance:

20.00%

Publisher:

Abstract:

In recent years, a number of phylogenetic methods have been developed for estimating molecular rates and divergence dates under models that relax the molecular clock constraint by allowing rate change throughout the tree. These methods are being used with increasing frequency, but there have been few studies into their accuracy. We tested the accuracy of several relaxed-clock methods (penalized likelihood and Bayesian inference using various models of rate change) using nucleotide sequences simulated on a nine-taxon tree. When the sequences evolved with a constant rate, the methods were able to infer rates accurately, but estimates were more precise when a molecular clock was assumed. When the sequences evolved under a model of autocorrelated rate change, rates were accurately estimated using penalized likelihood and by Bayesian inference using lognormal and exponential models of rate change, while other models did not perform as well. When the sequences evolved under a model of uncorrelated rate change, only Bayesian inference using an exponential rate model performed well. Collectively, the results provide a strong recommendation for using the exponential model of rate change if a conservative approach to divergence time estimation is required. A case study is presented in which we use a simulation-based approach to examine the hypothesis of elevated rates in the Cambrian period, and it is found that these high rate estimates might be an artifact of the rate estimation method. If this bias is present, then the ages of metazoan divergences would be systematically underestimated. The results of this study have implications for studies of molecular rates and divergence dates.

Relevance:

20.00%

Publisher:

Abstract:

Presently, global rates of skin cancers induced by ultraviolet radiation (UVR) exposure are on the rise. In view of this, current knowledge gaps in the biology of photocarcinogenesis and skin cancer progression urgently need to be addressed. One factor that has limited skin cancer research has been the need for a reproducible and physiologically relevant model able to represent the complexity of human skin. This review outlines the main currently used in vitro models of UVR-induced skin damage. This includes the use of conventional two-dimensional cell culture techniques and the major animal models that have been employed in photobiology and photocarcinogenesis research. Additionally, the progression towards the use of cultured skin explants and tissue-engineered skin constructs, and their utility as models of native skin's responses to UVR, are described. The inherent advantages and disadvantages of these in vitro systems are also discussed.

Relevance:

20.00%

Publisher:

Abstract:

Purpose – The internet is transforming possibilities for creative interaction, experimentation and cultural consumption in China and raising important questions about the role that “publishers” might play in an open and networked digital world. The purpose of this paper is to consider the role that copyright is playing in the growth of a publishing industry that is being “born digital”.

Design/methodology/approach – The paper approaches online literature as an example of a creative industry that is generating value for a wider creative economy through its social network market functions. It builds on the social network market definition of the creative industries proposed by Potts et al. and uses this definition to interrogate the role that copyright plays in a rapidly-evolving creative economy.

Findings – The rapid growth of a market for crowd-sourced content is combining with growing commercial freedom in cultural space to produce a dynamic landscape of business model experimentation. Using the social web to engage audiences, generate content, establish popularity and build reputation, and then converting those assets into profit through less networked channels, appears to be a driving strategy in the expansion of wider creative industries markets in China.

Originality/value – At a moment when publishing industries all over the world are struggling to come to terms with digital technology, the emergence of a rapidly-growing area of publishing that is being born digital offers important clues about the future of publishing and what social network markets might mean for the role of copyright in a digital age.

Relevance:

20.00%

Publisher:

Abstract:

This thesis develops a detailed conceptual design method and a system software architecture, defined with a parametric and generative evolutionary design system, to support an integrated, interdisciplinary building design approach. The research recognises the need to shift design effort toward the earliest phases of the design process to support crucial design decisions that have a substantial cost implication for the overall project budget. The overall motivation of the research is to improve the quality of designs produced at the author's employer, the General Directorate of Major Works (GDMW) of the Saudi Arabian Armed Forces. GDMW produces many buildings that have standard requirements across a wide range of environmental and social circumstances, so a rapid means of customising designs for local circumstances would have significant benefits. The research considers the use of evolutionary genetic algorithms in the design process and their ability to generate and assess a wider range of potential design solutions than a human could manage. This wider-ranging assessment, during the early stages of the design process, means that the generated solutions will be more appropriate for the defined design problem. The research proposes a design method and system that promote a collaborative relationship between human creativity and computer capability. The tectonic design approach is adopted as a process-oriented design that values the process of design as much as the product. The aim is to connect the evolutionary systems to performance assessment applications, which are used as prioritised fitness functions. This produces design solutions that respond to their environmental and functional requirements. This integrated, interdisciplinary approach will produce solutions through a design process that considers and balances the requirements of all aspects of the design.
Since this thesis covers a wide area of research material, a 'methodological pluralism' approach was used, incorporating both prescriptive and descriptive research methods. Multiple models of research were combined, and the overall research was undertaken in three main stages: conceptualisation, development and evaluation. The first two stages lay the foundations for the specification of the proposed system, in which key aspects of the system that had not previously been proven in the literature were implemented to test the system's feasibility. By combining the existing knowledge in the area with the newly verified key aspects of the proposed system, this research can form the basis for a future software development project. The evaluation stage, which includes building a prototype to test and evaluate the system's performance against the criteria defined in the earlier stage, is not within the scope of this thesis. The research results in a conceptual design method and a proposed system software architecture. The proposed system is called the 'Hierarchical Evolutionary Algorithmic Design (HEAD) System'. The HEAD system has been shown to be feasible through an initial illustrative paper-based simulation, and consists of two main components: the 'Design Schema' and the 'Synthesis Algorithms'. The HEAD system reflects the major research contribution in the way it is conceptualised, while secondary contributions are achieved within the system components. The design schema provides constraints on the generation of designs, enabling the designer to create a wide range of potential designs that can then be analysed for desirable characteristics. The design schema supports the digital representation of designers' creativity in a dynamic design framework that can be encoded and then executed through the use of evolutionary genetic algorithms.
The design schema incorporates 2D and 3D geometry and graph theory for space layout planning and building formation, using the Lowest Common Design Denominator (LCDD) of a parameterised 2D module and a 3D structural module. This provides a bridge between the standard adjacency requirements and the evolutionary system. The use of graphs as an input to the evolutionary algorithm supports the introduction of constraints in a way that is not supported by standard evolutionary techniques. The process of design synthesis is guided by a higher-level description of the building that supports geometrical constraints. The Synthesis Algorithms component analyses designs at four levels: 'Room', 'Layout', 'Building' and 'Optimisation'. At each level, multiple fitness functions are embedded in the genetic algorithm to target the specific requirements of the relevant decomposed part of the design problem. Decomposing the design problem, so that the design requirements of each level can be dealt with separately before being reassembled in a bottom-up approach, reduces the generation of non-viable solutions by constraining the options available at the next higher level. The iterative approach of exploring the range of design solutions through modification of the design schema, as the understanding of the design problem improves, assists in identifying conflicts in the design requirements. Additionally, the hierarchical set-up allows multiple fitness functions to be embedded in the genetic algorithm, each relevant to a specific level, supporting an integrated multi-level, multi-disciplinary approach. The HEAD system promotes a collaborative relationship between human creativity and computer capability. The design schema component, as the input to the procedural algorithms, enables the encoding of certain aspects of the designer's subjective creativity.
By focusing on finding solutions for the relevant sub-problems at the appropriate levels of detail, the hierarchical nature of the system assists in the design decision-making process.
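The central idea of embedding multiple prioritised fitness functions in a genetic algorithm can be sketched with a toy example. This is not the HEAD system: the objectives, weights and genome (a single room's width and depth) are invented assumptions, used only to show how weighted fitness terms steer an evolutionary search.

```python
import random

# Toy sketch of embedding multiple prioritised fitness functions in a
# genetic algorithm (invented objectives and weights; not the HEAD system):
# evolve a room's (width, depth) toward a target floor area while staying
# close to a preferred aspect ratio.

TARGET_AREA, TARGET_RATIO = 20.0, 1.5
WEIGHTS = (1.0, 0.5)        # area requirement prioritised over aspect ratio

def fitness(genome):
    w, d = genome
    area_err = abs(w * d - TARGET_AREA)
    ratio_err = abs(max(w, d) / min(w, d) - TARGET_RATIO)
    return -(WEIGHTS[0] * area_err + WEIGHTS[1] * ratio_err)

def evolve(pop_size=40, generations=80, rng=random):
    pop = [(rng.uniform(1, 10), rng.uniform(1, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            (w1, d1), (w2, d2) = rng.sample(parents, 2)
            w = (w1 + w2) / 2 + rng.gauss(0, 0.2)   # crossover + mutation
            d = (d1 + d2) / 2 + rng.gauss(0, 0.2)
            children.append((max(0.5, w), max(0.5, d)))
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
width, depth = evolve()
print(round(width * depth, 1), round(max(width, depth) / min(width, depth), 2))
```

In the hierarchical scheme the abstract describes, each level ('Room', 'Layout', 'Building', 'Optimisation') would carry its own set of such weighted fitness terms rather than the single pair used here.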

Relevance:

20.00%

Publisher:

Abstract:

Ocean processes are complex and have high variability in both time and space. Thus, ocean scientists must collect data over long time periods to obtain a synoptic view of ocean processes and resolve their spatiotemporal variability. One way to perform these persistent observations is to utilise an autonomous vehicle that can remain on deployment for long time periods. However, such vehicles are generally underactuated and slow moving. A challenge for persistent monitoring with these vehicles is dealing with currents while executing a prescribed path or mission. Here we present a path planning method for persistent monitoring that exploits ocean currents to increase navigational accuracy and reduce energy consumption.
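The idea of exploiting currents during path planning can be sketched with a toy grid planner. This is a hedged illustration, not the paper's method: a uniform eastward current and invented energy constants make current-assisted moves cheaper, and a standard least-cost (Dijkstra) search then prefers routes that ride the current.

```python
import heapq

# Toy grid planner sketching how ocean currents can be exploited (invented
# constants; not the paper's method): a uniform eastward current makes
# eastbound moves cheaper and westbound moves dearer, so a least-energy
# Dijkstra search naturally prefers current-assisted routes.

CURRENT = (1, 0)            # assumed uniform eastward current
BASE, GAIN = 2.0, 1.0       # energy per move; current assistance/penalty

def move_cost(step):
    # dot product of the move direction with the current: positive = assisted
    assist = step[0] * CURRENT[0] + step[1] * CURRENT[1]
    return BASE - GAIN * assist     # stays positive because GAIN < BASE

def plan(start, goal, size=10):
    """Return the minimum energy to travel start -> goal on a size x size grid."""
    frontier = [(0.0, start)]
    best = {start: 0.0}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue                 # stale queue entry
        for step in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + step[0], node[1] + step[1])
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            c = cost + move_cost(step)
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(frontier, (c, nxt))
    return float("inf")

print(plan((0, 0), (9, 0)))  # with the current: 9 moves at cost 1.0 -> 9.0
print(plan((9, 0), (0, 0)))  # against the current: 9 moves at cost 3.0 -> 27.0
```

A real planner would use a spatially varying current field and vehicle dynamics, but the asymmetry between the two directions already shows why current-aware planning saves energy for slow, underactuated vehicles.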

Relevance:

20.00%

Publisher:

Abstract:

Unmanned Aircraft Systems (UAS) describe a diverse range of aircraft that are operated without a human pilot on board. Unmanned aircraft range from small rotorcraft that fit in the palm of a hand through to fixed-wing aircraft comparable in size to a commercial passenger jet. The absence of an on-board pilot allows these aircraft to be developed with unique performance capabilities, facilitating a wide range of applications in surveillance, environmental management, agriculture, defence, and search and rescue. However, regulations for the safe design and operation of UAS must first be developed before the many potential benefits of these applications can be realised. According to the International Civil Aviation Organization (ICAO), a Risk Management Process (RMP) should support all civil aviation policy and rulemaking activities (ICAO 2009). The RMP is described in the international standard ISO 31000:2009 (ISO, 2009a). This standard is intentionally generic and high-level, providing limited guidance on how it can be effectively applied to complex socio-technical decision problems such as the development of regulations for UAS. Through the application of principles and tools drawn from systems philosophy and systems engineering, this thesis explores how the RMP can be effectively applied to support the development of safety regulations for UAS. A sound systems-theoretic foundation for the RMP is presented. Using the case-study scenario of a UAS operation over an inhabited area, and through the novel application of principles drawn from general systems modelling philosophy, a consolidated framework of definitions for the concepts of 'safe', 'risk' and 'hazard' is developed.
The framework is novel in that it facilitates the representation of broader subjective factors in an assessment of the safety of a system; describes the issues associated with the specification of a system boundary; makes explicit the hierarchical nature of the relationship between the concepts and the constraints that exist between them; and can be evaluated using a range of analytic or deliberative modelling techniques. Following the general sequence of the RMP, the thesis then explores the issues associated with the quantified specification of safety criteria for UAS. A novel risk analysis tool is presented. In contrast to existing risk tools, the analysis tool presented in this thesis quantifiably characterises both the societal and the individual risk of UAS operations as a function of the flight path of the aircraft. A novel structuring of the risk evaluation and risk treatment decision processes is then proposed. The structuring is achieved through the application of the Decision Support Problem Technique, a modelling approach that has previously been used to model complex engineering design processes and to support decision-making in relation to airspace design. The final contribution of this thesis is the development of an airworthiness regulatory framework for civil UAS. A novel "airworthiness certification matrix" is proposed as a basis for the definition of UAS "Part 21" regulations. The resulting airworthiness certification matrix provides a flexible, systematic and justifiable method for promulgating airworthiness regulations for UAS. In addition, an approach for deriving "Part 1309" regulations for UAS is presented. In contrast to existing approaches, the approach presented in this thesis facilitates a traceable and objective tailoring of system-level reliability requirements across the diverse range of UAS operations.
The significance of the research contained in this thesis is demonstrated by its practical, real-world outcomes. Industry regulatory development groups and the Civil Aviation Safety Authority have endorsed the proposed airworthiness certification matrix. The risk models have also been used to support research undertaken by the Australian Department of Defence. Ultimately, it is hoped that the outcomes of this research will play a significant part in shaping regulations for civil UAS, in Australia and around the world.
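The notion of risk as a function of the flight path can be illustrated with a toy calculation. This is not the thesis' risk analysis tool: the failure rate, lethal area and population densities below are invented, and the model simply accumulates expected casualties cell by cell along a path over a population-density grid.

```python
# Toy illustration of "risk as a function of flight path" (invented numbers
# and a far simpler model than the thesis' tool): each 1 km path segment
# over a population-density grid contributes expected ground casualties of
# P(failure per segment) * lethal debris area * local population density.

P_FAIL = 1e-5               # assumed failure probability per 1 km segment
LETHAL_AREA = 1e-6          # assumed lethal debris area, km^2

density = [                 # people per km^2 over a 4 x 4 area (invented)
    [10, 10, 500, 500],
    [10, 10, 500, 9000],
    [10, 10, 10, 500],
    [10, 10, 10, 10],
]

def path_risk(path):
    """Expected casualties for a path given as (row, col) cells, 1 km each."""
    return sum(P_FAIL * LETHAL_AREA * density[r][c] for r, c in path)

over_town = [(0, 2), (1, 2), (1, 3)]            # crosses the populated cells
around_town = [(2, 0), (3, 1), (3, 2), (3, 3)]  # longer but over open ground

print(path_risk(over_town) > path_risk(around_town))  # True
```

Even this crude model shows why characterising risk along the flight path, rather than as a single per-aircraft number, changes which operations and routes look acceptable.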

Relevance:

20.00%

Publisher:

Abstract:

Newly licensed drivers on a provisional or intermediate licence have the highest crash risk of any group of drivers, while learner drivers have the lowest. Graduated driver licensing is one countermeasure that has been demonstrated to effectively reduce the crashes of novice drivers. This thesis examined the graduated driver licensing systems in two Australian states in order to better understand the behaviour of learner drivers, provisional drivers and the supervisors of learner drivers. In doing so, it investigated the personal, social and environmental influences on novice driver behaviour and provides effective baseline data against which to measure subsequent changes to the licensing systems. In the first study, conducted prior to the changes to the graduated driver licensing system introduced in mid-2007, drivers who had recently obtained their provisional licence in Queensland and New South Wales were interviewed by telephone about their experiences while driving on their learner licence. Of the 687 eligible people approached at driver licensing centres, 392 completed the study, a response rate of 57.1 per cent. At the time the data were collected, New South Wales had a more extensive graduated driver licensing system than Queensland. The results suggest that requiring learners to complete a mandated number of hours of supervised practice affects the number of hours that learners report completing. While most learners from New South Wales reported meeting the requirement to complete 50 hours of practice, many appear to have stopped practising soon after this goal was achieved. In contrast, learners from Queensland, who were not required to complete a specific number of hours at the time of the survey, tended to fall into three groups.
The first group appeared to complete the minimum number of hours required to pass the test (less than 26 hours), the second group completed 26 to 50 hours of supervised practice while the third group completed significantly more practice than the first two groups (over 100 hours of supervised practice). Learner drivers in both states reported generally complying with the road laws and were unlikely to report that they had been caught breaking the road rules. They also indicated that they planned to obey the road laws once they obtained their provisional licence. However, they were less likely to intend to comply with recommended actions to reduce crash risk such as limiting their driving at night. This study also identified that there were relatively low levels of unaccompanied driving (approximately 15 per cent of the sample), very few driving offences committed (five per cent of the sample) and that learner drivers tended to use a mix of private and professional supervisors (although the majority of practice is undertaken with private supervisors). Consistent with the international literature, this study identified that very few learner drivers had experienced a crash (six per cent) while on their learner licence. The second study was also conducted prior to changes to the graduated driver licensing system and involved follow up interviews with the participants of the first study after they had approximately 21 months driving experience on their provisional licence. Of the 392 participants that completed the first study, 233 participants completed the second interview (representing a response rate of 59.4 per cent). As with the first study, at the time the data was collected, New South Wales had a more extensive graduated driver licensing system than Queensland. For instance, novice drivers from New South Wales were required to progress through two provisional licence phases (P1 and P2) while there was only one provisional licence phase in Queensland. 
Among the participants in this second study, almost all provisional drivers (97.9 per cent) owned or had access to a vehicle for regular driving. They reported that they were unlikely to break road rules, such as driving after a couple of drinks, but were also unlikely to comply with recommended actions, such as limiting their driving at night. When their provisional driving behaviour was compared to the stated intentions from the first study, the results suggested that their intentions were not a strong predictor of their subsequent behaviour. Their perception of the risk associated with driving declined from when they first obtained their learner licence to when they had acquired provisional driving experience. Just over 25 per cent of participants in study two reported that they had been caught committing driving offences while on their provisional licence. Nearly one-third of participants had crashed while driving on a provisional licence, although few of these crashes resulted in injuries or hospitalisations. To complement the first two studies, the third study examined the experiences of supervisors of learner drivers, as well as their perceptions of their learners’ experiences. This study was undertaken after the introduction of the new graduated driver licensing systems in Queensland and New South Wales in mid-2007, providing insights into the impacts of these changes from the perspective of supervisors. The third study involved an internet survey of 552 supervisors of learner drivers. Within the sample, approximately 50 per cent of participants supervised their own child; other supervisors included other parents or stepparents, professional driving instructors and siblings. For two-thirds of the sample, this was the first learner driver that they had supervised. Participants had provided an average of 54.82 hours (SD = 67.19) of supervision.
Seventy-three per cent of participants indicated that their learners’ logbooks were accurate or very accurate in most cases, although parents were more likely than non-parents to report that their learner’s logbook was accurate (F(1,546) = 7.74, p = .006). There was no difference between parents and non-parents regarding whether they believed the logbook system was effective (F(1,546) = .01, p = .913). The majority of the sample reported that their learner driver had had some professional driving lessons. Notwithstanding this, a significant proportion (72.5 per cent) believed that parents should be either very involved or involved in teaching their child to drive, with parents more likely than non-parents to hold this belief. Under the post-mid-2007 graduated driver licensing system, Queensland learner drivers are able to record three hours of supervised practice in their logbook for every hour completed with a professional driving instructor, up to a total of ten hours. Despite this, no difference was identified between Queensland and New South Wales participants in the amount of time they reported their learners spent with professional driving instructors (χ²(1) = 2.56, p = .110). Supervisors from New South Wales were more likely to ensure that their learner driver complied with the road laws. Additionally, with the exception of drug driving laws, New South Wales supervisors believed it was more important to teach safety-related behaviours, such as remaining within the speed limit, car control and hazard perception, than those from Queensland. This may be indicative of more intensive road safety educational efforts in New South Wales, or of the longer time that graduated driver licensing has operated in that jurisdiction. However, other factors may have contributed to these findings and further research is required to explore the issue.
In addition, supervisors reported that their learner driver was involved in very few crashes (3.4 per cent) and offences (2.7 per cent). This relatively low reported crash rate is similar to that identified in the first study. Most of the graduated driver licensing research to date has been applied in nature and lacked a strong theoretical foundation. These studies used Akers’ social learning theory to explore the self-reported behaviour of novice drivers and their supervisors. This theory was selected as it has previously been found to provide a relatively comprehensive framework for explaining a range of driver behaviours including novice driver behaviour. Sensation seeking was also used in the first two studies to complement the non-social rewards component of Akers’ social learning theory. This program of research identified that both Akers’ social learning theory and sensation seeking were useful in predicting the behaviour of learner and provisional drivers over and above socio-demographic factors. Within the first study, Akers’ social learning theory accounted for an additional 22 per cent of the variance in learner driver compliance with the law, over and above a range of socio-demographic factors such as age, gender and income. The two constructs within Akers’ theory which were significant predictors of learner driver compliance were the behavioural dimension of differential association relating to friends, and anticipated rewards. Sensation seeking predicted an additional six per cent of the variance in learner driver compliance with the law. When considering a learner driver’s intention to comply with the law while driving on a provisional licence, Akers’ social learning theory accounted for an additional 10 per cent of the variance above socio-demographic factors with anticipated rewards being a significant predictor. Sensation seeking predicted an additional four per cent of the variance. 
The results suggest that the more rewards individuals anticipate for complying with the law, the more likely they are to obey the road rules. Further research is needed to identify which specific rewards are most likely to encourage novice drivers’ compliance with the law. In the second study, Akers’ social learning theory predicted an additional 40 per cent of the variance in self-reported compliance with road rules over and above socio-demographic factors, while sensation seeking accounted for an additional five per cent of the variance. A number of Akers’ social learning theory constructs significantly predicted provisional driver compliance with the law, including the behavioural dimension of differential association for friends, the normative dimension of differential association, personal attitudes and anticipated punishments. The consistent prediction of additional variance by sensation seeking over and above the variables within Akers’ social learning theory in both studies one and two suggests that sensation seeking is not fully captured within the non-social rewards dimension of Akers’ social learning theory, at least for novice drivers. It appears that novice drivers are strongly influenced by the desire to engage in new and intense experiences. While socio-demographic factors and the perception of risk associated with driving had an important role in predicting the behaviour of the supervisors of learner drivers, Akers’ social learning theory provided further levels of prediction over and above these factors. The Akers’ social learning theory variables predicted an additional 14 per cent of the variance in the extent to which supervisors ensured that their learners complied with the law and an additional eight per cent of the variance in the supervisors’ provision of a range of practice experiences.
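The “additional variance over and above socio-demographic factors” figures above come from hierarchical (blockwise) regression: R² is computed for a baseline block of predictors, then again after the theory block is added, and the increment is reported. A minimal numpy sketch on simulated, purely illustrative data (the variable names and effect sizes are assumptions, not the study’s):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Block 1: hypothetical socio-demographic predictors.
age = rng.normal(17, 1, n)
gender = rng.integers(0, 2, n).astype(float)
# Block 2: hypothetical social-learning-theory predictors.
differential_assoc = rng.normal(0, 1, n)   # peer behaviour (illustrative)
anticipated_rewards = rng.normal(0, 1, n)

# Simulated compliance outcome driven mainly by the theory block.
compliance = (0.2 * age + 0.8 * differential_assoc
              + 0.6 * anticipated_rewards + rng.normal(0, 1, n))

def r_squared(X, y):
    """R^2 of an OLS fit with intercept, via least squares."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_block1 = r_squared(np.column_stack([age, gender]), compliance)
r2_block2 = r_squared(np.column_stack([age, gender, differential_assoc,
                                       anticipated_rewards]), compliance)
print(f"Delta R^2 for theory block: {r2_block2 - r2_block1:.3f}")
```

Because the second model nests the first, R² can only increase; the size of the increment is what indicates the theory block’s incremental predictive value.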
The normative dimension of differential association, personal attitudes towards the use of professional driving instructors and anticipated rewards were significant predictors of supervisors ensuring that their learner complied with the road laws, while the normative dimension was important for the range of practice provided. This suggests that supervisors who associate with other supervisors who ensure their learners comply with the road laws, and who provide a range of practice, are more likely to engage in these behaviours themselves. Within this program of research, there were several limitations, including the method of recruitment of participants within the first study, the lower participation rate in the second study, an inability to calculate a response rate for study three and the use of self-report data for all three studies. Within the first study, participants were only recruited from larger driver licensing centres to ensure that there was a sufficient throughput of drivers to approach. This may have biased the results due to possible differences in learners who obtain their licences in locations with smaller licensing centres. Only 59.4 per cent of the sample in the first study completed the second study. This may be a limitation if there was a common reason why those not participating were unable to complete the interview, leading to a systematic impact on the results. The third study used a combination of convenience and snowball sampling, which meant that it was not possible to calculate a response rate. All three studies used self-report data which, in many cases, is considered a limitation. However, self-report data may be the only method that can be used to obtain some information. This program of research has a number of implications for countermeasures in both the learner licence phase and the provisional licence phase.
During the learner phase, licensing authorities need to carefully consider the number of hours that they mandate learner drivers must complete before they obtain their provisional driving licence. Setting too low a limit may have inadvertent negative effects. This research suggests that logbooks may be a useful tool for learners and their supervisors in recording and structuring their supervised practice. However, it would appear that usage rates for logbooks will remain low if they remain voluntary. One strategy for achieving larger amounts of supervised practice is for learner drivers and their supervisors to make supervised practice part of their everyday activities. As well as assisting the learner driver to accumulate the required number of hours of supervised practice, this would ensure that they gain experience in the types of environments that they will probably encounter when driving unaccompanied in the future, such as to and from education or work commitments. There is also a need for policy processes to ensure that parents and professional driving instructors communicate effectively regarding the learner driver’s progress. This is required as most learners spend at least some time with a professional instructor despite receiving significant amounts of practice with a private supervisor. However, many supervisors did not discuss their learner’s progress with the driving instructor. During the provisional phase, there is a need to strengthen countermeasures to address the high crash risk of these drivers. Although many of these crashes are minor, most involve at least one other vehicle. Therefore, there are social and economic benefits to reducing these crashes.
If the new, post-2007 graduated driver licensing systems do not significantly reduce crash risk, there may be a need to introduce further provisional licence restrictions, such as separate night driving and peer passenger restrictions (as opposed to the hybrid version of these two restrictions operating in both Queensland and New South Wales). Provisional drivers appear to be more likely to obey some provisional licence laws, such as lower blood alcohol content limits, than others, such as speed limits. Therefore, there may be a need to introduce countermeasures to encourage provisional drivers to comply with specific restrictions. When combined, these studies provide significant information regarding graduated driver licensing programs. This program of research investigated graduated driver licensing using a cross-sectional and longitudinal design to develop our understanding of the experiences of novice drivers as they progress through the system, with the aim of reducing crash risk once novice drivers commence driving by themselves.

Resumo:

Motorcyclists are the most crash-prone road-user group in many Asian countries including Singapore; however, factors influencing motorcycle crashes are still not well understood. This study examines the effects of various roadway characteristics, traffic control measures and environmental factors on motorcycle crashes at different location types, including expressways and intersections. Using techniques of categorical data analysis, this study has developed a set of log-linear models to investigate multi-vehicle motorcycle crashes in Singapore. Motorcycle crash risks in different circumstances have been calculated after controlling for exposure estimated by the induced exposure technique. Results show that night-time conditions increase the crash risk of motorcycles, particularly during merging and diverging manoeuvres on expressways and turning manoeuvres at intersections. Riders appear to exercise more care while riding on wet road surfaces, particularly at night. Many hazardous interactions at intersections tend to be related to the failure of drivers to notice a motorcycle, as well as to judge correctly the speed and distance of an oncoming motorcycle. Roadside conflicts due to stopping/waiting vehicles and interactions with opposing traffic on undivided roads have been found to be detrimental to motorcycle safety along arterial, main and local roads away from intersections. Based on the findings of this study, several targeted countermeasures in the form of legislation, rider training and safety awareness programmes have been recommended.
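The induced exposure technique mentioned above estimates a road-user group’s exposure from its appearance as the not-at-fault party in two-vehicle crashes; a relative crash risk for a given circumstance is then the ratio of at-fault to not-at-fault counts. A minimal sketch with hypothetical counts (illustrative only, not the study’s data):

```python
# Hypothetical crash counts for motorcyclists by lighting condition.
# Not-at-fault involvements serve as the exposure proxy under the
# induced exposure assumption.
at_fault     = {"day": 300, "night": 180}  # motorcyclist at fault
not_at_fault = {"day": 500, "night": 150}  # motorcyclist not at fault

def relative_risk(condition):
    """At-fault count divided by the induced-exposure estimate."""
    return at_fault[condition] / not_at_fault[condition]

risk_day = relative_risk("day")      # 0.60
risk_night = relative_risk("night")  # 1.20
print(f"Night-to-day risk ratio: {risk_night / risk_day:.2f}")  # 2.00
```

With these made-up counts, night-time riding would carry twice the risk of daytime riding once exposure is controlled for, which is the kind of comparison the log-linear models formalise.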

Resumo:

Traditional crash prediction models, such as generalized linear regression models, are incapable of taking into account the multilevel data structure that extensively exists in crash data. Disregarding possible within-group correlations can lead to models giving unreliable and biased estimates of unknowns. This study proposes a five-level hierarchy, viz. (Geographic region level – Traffic site level – Traffic crash level – Driver-vehicle unit level – Vehicle-occupant level) crossed with a Time level, to establish a general form of multilevel data structure in traffic safety analysis. To properly model the potential cross-group heterogeneity due to the multilevel data structure, a framework of Bayesian hierarchical models that explicitly specifies the multilevel structure and correctly yields parameter estimates is introduced and recommended. The proposed method is illustrated in an individual-severity analysis of intersection crashes using Singapore crash records. This study demonstrates the importance of accounting for within-group correlations, as well as the flexibility and effectiveness of the Bayesian hierarchical method in modeling the multilevel structure of traffic crash data.
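The within-group correlation that single-level models ignore can be quantified as an intraclass correlation (ICC): the share of total variance attributable to the grouping level. This is a hedged numpy sketch on simulated two-level data using the classical ANOVA moment estimator, not the study’s Bayesian model; the site counts and variances are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate crash-severity scores for 30 sites with 50 crashes each.
# A site-level random effect induces within-group correlation that a
# single-level (pooled) model would ignore.
n_sites, n_per_site = 30, 50
site_effect = rng.normal(0, 1.0, n_sites)  # between-site sd = 1
y = site_effect[:, None] + rng.normal(0, 1.0, (n_sites, n_per_site))

# ANOVA moment estimator: between-site variance is the variance of site
# means minus the sampling noise of those means (within_var / n_per_site).
within_var = y.var(axis=1, ddof=1).mean()
between_var = y.mean(axis=1).var(ddof=1) - within_var / n_per_site
icc = between_var / (between_var + within_var)
print(f"Intraclass correlation: {icc:.2f}")  # should land near 0.5 here
```

An ICC well above zero, as here, is exactly the situation in which hierarchical (random-effects) models are needed to avoid biased standard errors.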

Resumo:

Singapore crash statistics show that motorcycles are involved in about 54% of crashes at intersections. Moreover, about 46% of fatal and 67% of injury motorcycle crashes occur at signalized intersections. The objective of this study is to identify causal factors affecting motorcycle crashes at both four-legged and three-legged signalized intersections. Treating the data as time-series cross-section panels, this study explored different hierarchical Poisson models and found that the model allowing an autoregressive lag-1 specification in the error term is the most suitable. Analysis of the results shows that the number of lanes at an intersection significantly increases motorcycle crashes, largely because of the higher exposure resulting from greater motorcycle accumulation at the stop line. Furthermore, the presence of a wide median at four-legged intersections, and of an exclusive right-turn lane and an uncontrolled left-turn lane at three-legged intersections, exacerbates this hazard. Moreover, motorcycle crashes increase on high-speed roadways because of the vulnerability of motorcyclists. The presence of red light cameras significantly reduces motorcycle crashes on the intersection roadways of both four-legged and three-legged intersections. With a red light camera present, motorcycles are less exposed to conflicts because, it is observed, they are more disciplined in queuing at the stop line and less likely to take off prematurely at the start of green.
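The autoregressive lag-1 specification above places serially correlated errors in each intersection’s log crash rate across years. A small numpy simulation, with illustrative parameters rather than the study’s estimates, shows how such panel count data arise and that the latent errors carry the intended lag-1 correlation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate yearly crash counts at 40 intersections over 12 years with an
# AR(1) error in the log-rate (rho and the baseline rate are assumptions).
n_sites, n_years, rho = 40, 12, 0.7
eps = np.zeros((n_sites, n_years))
eps[:, 0] = rng.normal(0, 1, n_sites)
for t in range(1, n_years):
    # AR(1): innovations scaled so the marginal variance stays at 1.
    eps[:, t] = rho * eps[:, t - 1] + rng.normal(0, np.sqrt(1 - rho**2), n_sites)

log_rate = np.log(5.0) + 0.5 * eps        # baseline of ~5 crashes/year
counts = rng.poisson(np.exp(log_rate))    # observed panel of crash counts

# The lag-1 correlation of the latent errors recovers rho (approximately).
lag1 = np.corrcoef(eps[:, :-1].ravel(), eps[:, 1:].ravel())[0, 1]
print(f"Estimated lag-1 autocorrelation: {lag1:.2f}")
```

Ignoring this serial dependence and fitting an ordinary Poisson model to `counts` would understate the uncertainty in the site-level effects, which is why the AR(1) hierarchical variant was preferred.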