886 results for Type of error


Relevance: 90.00%

Abstract:

Principal Topic: There is increasing recognition that the organizational configurations of corporate venture units should depend on the types of ventures the unit seeks to develop (Burgelman, 1984; Hill and Birkinshaw, 2008). Distinctions have been made between internal and external ventures, as well as between exploitative and explorative ventures (Hill and Birkinshaw, 2008; Narayanan et al., 2009; Schildt et al., 2005). Assuming that firms do not want to limit themselves to a single type of venture, but rather employ a portfolio of ventures, the logical consequence is that firms should employ multiple corporate venture units, each tailor-made for the type of venture it seeks to develop. Surprisingly, the literature has paid limited attention to the challenges of managing multiple corporate venture units within a single firm. Maintaining multiple venture units within one firm provides easier access to funding for new ideas (Hamel, 1999). It allows the freedom and flexibility to tie the organizational systems (Rice et al., 2000), autonomy (Hill and Rothaermel, 2003), and involvement of management (Day, 1994; Wadhwa and Kotha, 2006) to the requirements of the individual ventures. Yet the strategic objectives of a venture may change as uncertainty around the venture is resolved (Burgelman, 1984). For example, firms may decide to spin in external ventures (Chesbrough, 2002) or spin out ventures that prove strategically unimportant (Burgelman, 1984). This suggests that ventures might need to be transferred between venture units, e.g. from a more internally driven corporate venture division to a corporate venture capital unit. Several studies have suggested that ventures require different managerial skills across their phases of development (Desouza et al., 2007; O'Connor and Ayers, 2005; Kazanjian and Drazin, 1990; Westerman et al., 2006). To facilitate effective transfer between venture units and manage the overall venturing process, it is important that firms set up and manage integrative linkages. Integrative linkages provide synergies and coordination between differentiated units (Lawrence and Lorsch, 1967). Prior findings have pointed to the important roles of senior management (Westerman et al., 2006; Gilbert, 2006) and a shared organizational vision (Burgers et al., 2009) in coordinating venture units with mainstream businesses. We draw on these literatures to investigate the key question of how to integratively manage multiple venture units. ---------- Methodology/Key Propositions: To answer this research question, we employ a case study approach that provides unique insights into how firms can break up their venturing process. We selected three Fortune 500 companies that employ multiple venturing units, IBM, Royal Dutch/Shell and Nokia, and investigated and compared their approaches. It was important that the case companies differed somewhat in the types of venture units they employed as well as in the way they integrate and coordinate their venture units. The data are based on extensive interviews and a variety of internal and external company documents to triangulate our findings (Eisenhardt, 1989). The key proposition of the article is that firms can best manage their multiple venture units through an ambidextrous design of loosely coupled units.
This provides venture units with sufficient flexibility to employ organizational configurations that best support the type of venture they seek to develop, while providing sufficient integration to facilitate the smooth transfer of ventures between venture units. Based on the case findings, we develop a generic framework for a new way of managing the venturing process through multiple corporate venture units. ---------- Results and Implications: One of our main findings is that these firms tend to organize their venture units according to phases in the venture development process. That is, they tend to have venture units aimed at the incubation of venture ideas as well as units aimed more at the commercialization of ventures into a new business unit for the firm or a start-up. The companies in our case studies tended to coordinate venture units through integrative management skills or a coordinative venture unit spanning multiple phases. We believe this paper makes two significant contributions. First, we extend prior venturing literature by addressing how firms manage a portfolio of venture units, each pursuing different strategic objectives. Second, our framework provides recommendations on how firms should manage such an approach towards venturing. This helps to increase the likelihood of success of their venturing programs.

Relevance: 90.00%

Abstract:

Choosing the “right” type of technology, either synchronous or asynchronous, to facilitate learning outcomes is a new challenge as emerging technologies increase in pace and diversity. Teachers are encouraged to design courses that require collaboration in online learning communities to facilitate the development of higher order thinking skills for lifelong learning. It is therefore important to gather evidence of the kinds of opportunities afforded to students and whether the students themselves endorse collaborative online tools as a legitimate method of assisting in their learning. This paper outlines the way in which one secondary school has used an online discussion forum to support students in the International Baccalaureate (IB) Diploma Program in enhancing research skills and skills for lifelong learning. The paper considers whether online collaborative tools are perceived by students as a positive way to improve learning outcomes.

Relevance: 90.00%

Abstract:

Each year, The Australian Centre for Philanthropy and Nonprofit Studies (CPNS) at Queensland University of Technology (QUT) collects and analyses statistics on the amount and extent of tax-deductible donations made and claimed by Australians in their individual income tax returns to deductible gift recipients (DGRs). The information presented below is based on the amount and type of tax-deductible donations made and claimed by Australian individual taxpayers to DGRs for the period 1 July 2006 to 30 June 2007. This information has been extracted mainly from the Australian Taxation Office's (ATO) publication Taxation Statistics 2006-07. The 2006-07 report is the latest report that has been made publicly available. It represents information in tax returns for the 2006-07 year processed by the ATO as at 31 October 2008. This study uses information based on published ATO material and represents only the extent of tax-deductible donations made and claimed by Australian taxpayers to DGRs at Item D9 Gifts or Donations in their individual income tax returns for the 2006-07 income year. The data do not include corporate taxpayers. Expenses such as raffles, sponsorships, fundraising purchases (e.g., sweets, tea towels, special events) or volunteering are generally not deductible as 'gifts'. The Giving Australia Report used a more liberal definition of gift to arrive at an estimated total of giving of $11 billion for 2005 (excluding Tsunami giving of $300 million). The $11 billion total comprised $5.7 billion from adult Australians, $2 billion from charity gambling or special events and $3.3 billion from business sources.

Relevance: 90.00%

Abstract:

Zeolite N was produced from a variety of kaolinites and montmorillonites at low temperature (<100 °C) in a constantly stirred reactor using potassic and potassic+sodic mother liquors with chloride or hydroxyl anions. Reactions were complete (>95% product) in less than 20 h depending on initial batch composition and type of clay minerals. Zeolite N with 1.0 < Si/Al < 2.2 was produced under these conditions using KOH in the presence of KCl, NaCl, KCl+NaCl and KCl+NaOH. Zeolite N was also formed under high potassium molarities in the absence of KCl. Zeolite synthesis was more sensitive to water content and temperature when sodium was used in initial batch compositions. Syntheses of zeolite N by these methods were undertaken at bench, pilot and industrial scales.

Relevance: 90.00%

Abstract:

US state-based data breach notification laws have unveiled serious corporate and government failures regarding the security of personal information. These laws require organisations to notify persons who may be affected by an unauthorised acquisition of their personal information. Safe harbours from notification exist if personal information is encrypted. Three types of safe harbour have been identified in the literature: exemptions, rebuttable presumptions and factors. The underlying assumption of exemptions is that encrypted personal information is secure and that unauthorised access therefore does not pose a risk. However, the viability of this assumption is questionable when examined against data breaches involving encrypted information and the demanding practical requirements of effective encryption management. Recent recommendations by the Australian Law Reform Commission (ALRC) would amend the Privacy Act 1988 (Cth) to implement a data breach scheme that includes a different type of safe harbour: factor-based analysis. The authors examine the potential capability of the ALRC's proposed encryption safe harbour in relation to the US experience at the state legislative level.

Relevance: 90.00%

Abstract:

This paper describes an autonomous docking system and web interface that allow long-term unaided use of a sophisticated robot by untrained web users. These systems have been applied to the biologically inspired RatSLAM system as a foundation for testing both its long-term stability and its practicality. While docking and web interface systems already exist, this system allows for a significantly larger margin of error in docking accuracy due to its mechanical design, thereby increasing robustness against navigational errors. Also, a standard vision sensor is used for both long-range and short-range docking, in contrast to the many systems that require both omni-directional cameras and high-resolution laser range finders for navigation. The web interface has been designed to accommodate the significant delays experienced on the Internet, and to facilitate the non-Cartesian operation of the RatSLAM system.

Relevance: 90.00%

Abstract:

This paper investigates a mobile, wireless sensor/actuator network application for use in the cattle breeding industry. Our goal is to prevent fighting between bulls in on-farm breeding paddocks by autonomously applying appropriate stimuli when one bull approaches another bull. This is an important application because fighting between high-value animals such as bulls during breeding seasons causes significant financial loss to producers. Furthermore, there are significant challenges in this type of application because it requires dynamic animal state estimation, real-time actuation and efficient mobile wireless transmissions. We designed and implemented an animal state estimation algorithm based on a state-machine mechanism for each animal. Autonomous actuation is performed based on the estimated states of an animal relative to other animals. A simple, yet effective, wireless communication model has been proposed and implemented to achieve high delivery rates in mobile environments. We evaluated the performance of our design by both simulations and field experiments, which demonstrated the effectiveness of our autonomous animal control system.
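The abstract does not specify the state-machine design. As a rough illustration of per-animal state estimation driving autonomous actuation, the Python sketch below implements a minimal proximity-based state machine; the states, distance thresholds and the apply_stimulus() hook are hypothetical assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a per-animal state machine that triggers a stimulus
# when one bull approaches another. States, thresholds and apply_stimulus()
# are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum, auto
import math

class State(Enum):
    SEPARATE = auto()      # far from all other animals
    APPROACHING = auto()   # closing distance on another animal
    TOO_CLOSE = auto()     # within the actuation threshold

@dataclass
class Animal:
    animal_id: str
    x: float = 0.0
    y: float = 0.0
    state: State = State.SEPARATE

APPROACH_RADIUS_M = 30.0   # assumed "start watching" distance
ACTUATION_RADIUS_M = 10.0  # assumed distance at which a stimulus is applied

def apply_stimulus(animal: Animal) -> None:
    # Placeholder for the real-time actuation channel (e.g. an audio cue).
    print(f"stimulus -> {animal.animal_id}")

def update_state(animal: Animal, others: list[Animal]) -> None:
    """Re-estimate one animal's state from its distance to every other animal."""
    nearest = min(math.hypot(animal.x - o.x, animal.y - o.y) for o in others)
    if nearest <= ACTUATION_RADIUS_M:
        if animal.state != State.TOO_CLOSE:
            apply_stimulus(animal)        # actuate only on entering the state
        animal.state = State.TOO_CLOSE
    elif nearest <= APPROACH_RADIUS_M:
        animal.state = State.APPROACHING
    else:
        animal.state = State.SEPARATE
```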

Relevance: 90.00%

Abstract:

As a special type of novel flexible structure, tensegrity holds promise for many potential applications in such fields as materials science, biomechanics, and civil and aerospace engineering. Rhombic systems are an important class of tensegrity structures, in which each bar constitutes the longest diagonal of a rhombus of four strings. In this paper, we address the design methods of rhombic structures based on the idea that many tensegrity structures can be constructed by assembling one-bar elementary cells. By analyzing the properties of rhombic cells, we first develop two novel schemes, namely, a direct enumeration scheme and a cell-substitution scheme. In addition, a facile and efficient method is presented to integrate several rhombic systems into a larger tensegrity structure. To illustrate the applications of these methods, some novel rhombic tensegrity structures are constructed.

Relevance: 90.00%

Abstract:

Background: Up to 1% of adults will suffer from leg ulceration at some time. The majority of leg ulcers are venous in origin and are caused by high pressure in the veins due to blockage or weakness of the valves in the veins of the leg. Prevention and treatment of venous ulcers are aimed at reducing the pressure either by removing/repairing the veins, or by applying compression bandages/stockings to reduce the pressure in the veins. The vast majority of venous ulcers are healed using compression bandages. Once healed they often recur, so it is customary to continue applying compression in the form of bandages, tights, stockings or socks in order to prevent recurrence. Compression bandages or hosiery (tights, stockings, socks) are often applied for ulcer prevention. Objectives: To assess the effects of compression hosiery (socks, stockings, tights) or bandages in preventing the recurrence of venous ulcers. To determine whether there is an optimum pressure/type of compression to prevent recurrence of venous ulcers. Search methods: The searches for the review were first undertaken in 2000. For this update we searched the Cochrane Wounds Group Specialised Register (October 2007), the Cochrane Central Register of Controlled Trials (CENTRAL) - The Cochrane Library 2007 Issue 3, Ovid MEDLINE - 1950 to September Week 4 2007, Ovid EMBASE - 1980 to 2007 Week 40 and Ovid CINAHL - 1982 to October Week 1 2007. Selection criteria: Randomised controlled trials evaluating compression bandages or hosiery for preventing venous leg ulcers. Data collection and analysis: Data extraction and assessment of study quality were undertaken by two authors independently. Results: No trials compared recurrence rates with and without compression. One trial (300 patients) compared high (UK Class 3) compression hosiery with moderate (UK Class 2) compression hosiery. An intention-to-treat analysis found no significant reduction in recurrence at five years' follow-up associated with high compression hosiery compared with moderate compression hosiery (relative risk of recurrence 0.82, 95% confidence interval 0.61 to 1.12). This analysis would tend to underestimate the effectiveness of high compression hosiery because a significant proportion of people changed from high compression to medium compression hosiery. Compliance rates were significantly higher with medium compression than with high compression hosiery. One trial (166 patients) found no statistically significant difference in recurrence between two types of medium (UK Class 2) compression hosiery (relative risk of recurrence with Medi was 0.74, 95% confidence interval 0.45 to 1.2). Both trials reported that not wearing compression hosiery was strongly associated with ulcer recurrence, and this is circumstantial evidence that compression reduces ulcer recurrence. No trials were found which evaluated compression bandages for preventing ulcer recurrence. Authors' conclusions: No trials compared compression with no compression for prevention of ulcer recurrence. Not wearing compression was associated with recurrence in both studies identified in this review. This is circumstantial evidence of the benefit of compression in reducing recurrence. Recurrence rates may be lower in high compression hosiery than in medium compression hosiery, and therefore patients should be offered the strongest compression with which they can comply. Further trials are needed to determine the effectiveness of hosiery prescribed in other settings, i.e.
in the UK community, in countries other than the UK.
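The review reports effect sizes as relative risks with 95% confidence intervals. For reference, the standard (not trial-specific) calculation is sketched below; a, b, c and d are generic cell counts from a 2x2 recurrence table, not data from the trials above.

```latex
% Generic relative-risk calculation for a 2x2 recurrence table
% (a = recurrences, b = non-recurrences in one arm; c, d = the same in the
% comparison arm). Illustrative only, not the trials' data.
\[
  RR = \frac{a/(a+b)}{c/(c+d)}, \qquad
  \operatorname{SE}\bigl(\ln RR\bigr) =
    \sqrt{\frac{1}{a} - \frac{1}{a+b} + \frac{1}{c} - \frac{1}{c+d}},
\]
\[
  \mathrm{CI}_{95\%} = \exp\bigl(\ln RR \pm 1.96\,\operatorname{SE}(\ln RR)\bigr).
\]
```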

Relevance: 90.00%

Abstract:

This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part-I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part-II of the thesis. In Part-I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity in the loop in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time-delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered. This idea of replacing the HT by a time-delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time-delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part-II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques should be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part-II. An adaptive algorithm for IF estimation using the quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the IF adaptive algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
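The thesis works with quadratic TFDs whose kernels are time-only (the T-class). As a minimal, hedged illustration of the general non-parametric idea (estimating the IF at each time instant as the frequency of the TFD's energy peak), the Python sketch below uses an ordinary spectrogram as a stand-in TFD on a synthetic linear-FM signal; it is not the adaptive T-class algorithm described above.

```python
# Minimal illustration of non-parametric IF estimation: at each time slice,
# take the frequency of the energy peak of a time-frequency distribution.
# A spectrogram stands in for the quadratic, time-only-kernel TFDs of the thesis.
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                        # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)    # 2 s of data
# Linear FM (chirp) test signal whose true IF sweeps from 50 Hz to 250 Hz.
x = np.cos(2 * np.pi * (50.0 * t + 50.0 * t ** 2))

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
if_estimate = f[np.argmax(Sxx, axis=0)]    # peak frequency in each time slice
true_if = 50.0 + 100.0 * tt                # analytic IF at the slice centres

# The peak-tracking estimate should follow the 50-250 Hz sweep closely.
print(np.max(np.abs(if_estimate - true_if)))
```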

Relevance: 90.00%

Abstract:

This thesis is the result of an investigation of a Queensland example of curriculum reform based on outcomes, a type of reform common to many parts of the world during the last decade. The purpose of the investigation was to determine the impact of outcomes on teacher perspectives of professional practice. The focus was chosen to permit investigation not only of changes in behaviour resulting from the reform but also of teachers' attitudes and beliefs developed during implementation. The study is based on qualitative methodology, chosen because of its suitability for the investigation of attitudes and perspectives. The study exploits the researcher's opportunities for prolonged, direct contact with groups of teachers through the selection of an over-arching ethnography approach, an approach designed to capture the holistic nature of the reform and to contextualise the data within a broad perspective. The selection of grounded theory as a basis for data analysis reflects the open nature of this inquiry and demonstrates the study's constructivist assumptions about the production of knowledge. The study also constitutes a multi-site case study by virtue of the choice of three individual school sites as objects to be studied and to form the basis of the report. Three primary school sites administered by Brisbane Catholic Education were chosen as the focus of data collection. Data were collected from three school sites as teachers engaged in the first year of implementation of Student Performance Standards, the Queensland version of English outcomes based on the current English syllabus. Teachers' experience of outcomes-driven curriculum reform was studied by means of group interviews conducted at individual school sites over a period of fourteen months, researcher observations and the collection of artefacts such as report cards. Analysis of data followed grounded theory guidelines based on a system of coding. Though classification systems were not generated prior to data analysis, the labelling of categories called on standard, non-idiosyncratic terminology and analytic frames and concepts from existing literature wherever practicable in order to permit possible comparisons with other related research. Data from school sites were examined individually and then combined to determine teacher understandings of the reform, changes that have been made to practice and teacher responses to these changes in terms of their perspectives of professionalism. Teachers in the study understood the reform as primarily an accountability mechanism. Though teachers demonstrated some acceptance of the intentions of the reform, their responses to its conceptualisation, supporting documentation and implications for changing work practices were generally characterised by reduced confidence, anger and frustration. Though the impact of outcomes-based curriculum reform must be interpreted through the inter-relationships of a broad range of elements which comprise teachers' work and their attitudes towards their work, it is proposed that the substantive findings of the study can be understood in terms of four broad themes. First, when the conceptual design of outcomes did not serve teachers' accountability requirements and outcomes were perceived to be expressed in unfamiliar technical language, most teachers in the study lost faith in the value of the reform and lost confidence in their own abilities to understand or implement it. 
Second, this reduction of confidence was intensified when the scope of outcomes was outside the scope of the teachers' existing curriculum and assessment planning, and teachers were confronted with the necessity to include aspects of syllabuses or school programs which they had previously omitted because of a lack of understanding or appreciation. The corollary was that outcomes promoted greater syllabus fidelity when frameworks were closely aligned. Third, other benefits the teachers associated with outcomes included the development of whole-school curriculum resources and greater opportunity for teacher collaboration, particularly among schools. The teachers, however, considered a wide range of factors when determining the overall impact of the reform, and perceived a number of them in terms of the costs of implementation. These included the emergence of ethical dilemmas concerning relationships with students, colleagues and parents; reduced individual autonomy, particularly with regard to the selection of valued curriculum content; and intensification of workload with the capacity to erode the relationships with students which teachers strongly associated with the rewards of their profession. Finally, in banding together at the school level to resist aspects of implementation, some teachers showed growing awareness of a collective authority capable of being exercised in response to top-down reform. These findings imply that Student Performance Standards require review and additional implementation resourcing to support teachers through times of reduced confidence in their own abilities. Outcomes prove an effective means of high-fidelity syllabus implementation and, provided they are expressed in an accessible way and aligned with syllabus frameworks and terminology, should be considered for inclusion in future syllabuses across a range of learning areas. The study also identifies a range of unintended consequences of outcomes-based curriculum and acknowledges the complexity of relationships among all the aspects of teachers' work. It also notes that the impact of reform on teacher perspectives of professional practice may alter teacher-teacher and school-system relationships in ways that have the potential to influence the effectiveness of future curriculum reform.

Relevance: 90.00%

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have: 1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements; 2. demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique; 3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and 4. provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system. The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
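The two-component decomposition that the DEXA models build on can be written as a small linear system per ray: the measured log-attenuation at two energies is a linear combination of bone-mineral and soft-tissue areal densities. The Python sketch below illustrates this standard formulation only; the attenuation coefficients are placeholder values, not the thesis data, and the thesis code itself is not reproduced here.

```python
# Minimal sketch of the standard two-component dual-energy decomposition:
# for each ray, log-attenuation at two energies is modelled as a linear
# combination of bone-mineral and soft-tissue areal densities, giving a 2x2
# linear system. Coefficient values are placeholders for illustration only.
import numpy as np

# Assumed mass attenuation coefficients (cm^2/g):
# rows = [low energy, high energy], columns = [bone mineral, soft tissue].
MU = np.array([[0.60, 0.25],
               [0.30, 0.20]])

def decompose(log_attenuation_low: float, log_attenuation_high: float) -> np.ndarray:
    """Return [bone mineral, soft tissue] areal densities (g/cm^2) for one ray."""
    L = np.array([log_attenuation_low, log_attenuation_high])
    return np.linalg.solve(MU, L)

# Example: forward-project a known composition, then recover it.
true_density = np.array([1.2, 18.0])   # g/cm^2 of bone mineral, soft tissue
measured = MU @ true_density           # ln(I0/I) at the two energies
print(decompose(*measured))            # -> approximately [1.2, 18.0]
```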

Relevance: 90.00%

Abstract:

The Lane Change Test (LCT) is one of the growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use, it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem solving tasks. A number of the lateral control metrics were found to be sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions and further refinement of its metrics will be essential for increasing the utility of the LCT as an evaluation tool.
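For readers unfamiliar with the LCT metrics, the sketch below shows a simplified version of the mean deviation measure: the average absolute gap between the driven lateral position and a normative reference path sampled at common longitudinal positions. The reference path and driven trace are hypothetical, and the real LCT metric uses a more detailed normative model.

```python
# Simplified sketch of the LCT "mean deviation" lateral-control metric:
# the average absolute gap between the driven lateral position and a
# normative reference path, sampled at common longitudinal positions.
# The reference path and driven trace below are hypothetical.
import numpy as np

def mean_deviation(driven_y: np.ndarray, reference_y: np.ndarray) -> float:
    """Mean absolute lateral deviation (metres) between driven and reference paths."""
    return float(np.mean(np.abs(driven_y - reference_y)))

# Hypothetical example: a reference path with one 3.5 m lane change and a
# driven path that starts the manoeuvre 10 m later than the reference.
x = np.linspace(0.0, 150.0, 301)            # longitudinal position (m)
reference = np.where(x < 75.0, 0.0, 3.5)    # idealised step-change reference
driven = np.where(x < 85.0, 0.0, 3.5)       # driver reacts 10 m later
print(f"mean deviation = {mean_deviation(driven, reference):.2f} m")
```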

Relevance: 90.00%

Abstract:

In order to estimate the safety impact of roadway interventions, engineers need to collect, analyze, and interpret the results of carefully implemented data collection efforts. The intent of these studies is to develop Accident Modification Factors (AMFs), which are used to predict the safety impact of various road safety features at other locations or upon future enhancements. Models are typically estimated to obtain AMFs for total crashes, but AMFs can and should be estimated for specific crash outcomes as well. This paper first describes data collected with the intent to estimate AMFs for rural intersections in the state of Georgia within the United States. Modeling results of crash prediction models for the crash outcomes angle, head-on, rear-end, sideswipe (same direction and opposite direction) and pedestrian-involved crashes are then presented and discussed. The analysis reveals that factors such as the Annual Average Daily Traffic (AADT), the presence of turning lanes, and the number of driveways have a positive association with each type of crash, while the median width and the presence of lighting are negatively associated with crashes. The model covariates are related to crash outcomes in different ways, suggesting that crash outcomes are associated with different pre-crash conditions.
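The abstract does not state the model form. Under the conventional log-linear crash prediction framework (assumed here for illustration, not quoted from the paper), an AMF for a binary feature follows directly from its coefficient:

```latex
% Conventional log-linear (Poisson / negative binomial) crash prediction form
% and the AMF implied for a candidate feature x_j; assumed for illustration.
\[
  E[N] = \exp\Bigl(\beta_0 + \beta_{\mathrm{AADT}} \ln(\mathrm{AADT})
                   + \sum_{j} \beta_j x_j\Bigr),
  \qquad
  \mathrm{AMF}_j = \frac{E[N \mid x_j = 1]}{E[N \mid x_j = 0]} = e^{\beta_j}.
\]
```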

Relevance: 90.00%

Abstract:

Many studies focused on the development of crash prediction models have resulted in aggregate crash prediction models to quantify the safety effects of geometric, traffic, and environmental factors on the expected number of total, fatal, injury, and/or property damage crashes at specific locations. Crash prediction models focused on predicting different crash types, however, have rarely been developed. Crash type models are useful for at least three reasons. The first is motivated by the need to identify sites that are high risk with respect to specific crash types but that may not be revealed through crash totals. Second, countermeasures are likely to affect only a subset of all crashes—usually called target crashes—and so examination of crash types will lead to improved ability to identify effective countermeasures. Finally, there is a priori reason to believe that different crash types (e.g., rear-end, angle, etc.) are associated with road geometry, the environment, and traffic variables in different ways and as a result justify the estimation of individual predictive models. The objectives of this paper are to (1) demonstrate that different crash types are associated with predictor variables in different ways (as theorized) and (2) show that estimation of crash type models may lead to greater insights regarding crash occurrence and countermeasure effectiveness. This paper first describes the estimation results of crash prediction models for angle, head-on, rear-end, sideswipe (same direction and opposite direction), and pedestrian-involved crash types. Serving as a basis for comparison, a crash prediction model is estimated for total crashes. Based on 837 motor vehicle crashes collected at two-lane rural intersections in the state of Georgia, six prediction models are estimated, resulting in two Poisson (P) models and four negative binomial (NB) models. The analysis reveals that factors such as the annual average daily traffic, the presence of turning lanes, and the number of driveways have a positive association with each type of crash, whereas median widths and the presence of lighting are negatively associated. For the best-fitting models, covariates are related to crash types in different ways, suggesting that crash types are associated with different pre-crash conditions and that modeling total crash frequency may not be helpful for identifying specific countermeasures.
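As a hedged sketch of how a negative binomial crash-frequency model of the kind described above might be fitted, the Python example below regresses synthetic crash counts on illustrative covariates (log AADT, turning lanes, driveways, median width, lighting) using statsmodels; the data and coefficient values are purely illustrative, not the paper's Georgia data.

```python
# Hedged sketch of fitting a negative binomial crash-frequency model of the
# kind described above. All data are synthetic and purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "aadt": rng.uniform(500, 20000, n),      # annual average daily traffic
    "turn_lane": rng.integers(0, 2, n),      # presence of a turning lane
    "driveways": rng.integers(0, 6, n),      # number of driveways
    "median_width": rng.uniform(0, 10, n),   # metres
    "lighting": rng.integers(0, 2, n),       # presence of lighting
})
# Synthetic crash counts generated from an assumed log-linear mean.
mu = np.exp(-6.0 + 0.8 * np.log(df["aadt"]) + 0.2 * df["turn_lane"]
            + 0.1 * df["driveways"] - 0.05 * df["median_width"]
            - 0.3 * df["lighting"])
df["crashes"] = rng.poisson(mu)

X = sm.add_constant(np.column_stack([
    np.log(df["aadt"]), df["turn_lane"], df["driveways"],
    df["median_width"], df["lighting"],
]))
nb_model = sm.GLM(df["crashes"], X,
                  family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(nb_model.summary())
```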