977 results for federated search tool


Relevance:

20.00%

Publisher:

Abstract:

The contribution of risky behaviour to the increased crash and fatality rates of young novice drivers is recognised in the road safety literature around the world. Exploring such risky driver behaviour has led to the development of tools like the Driver Behaviour Questionnaire (DBQ) to examine driving violations, errors, and lapses [1]. Whilst the DBQ has been utilised in young novice driver research, some items within this tool seem specifically designed for the older, more experienced driver, whilst others appear to assess both behaviour and related motives. The current study was prompted by the need for a risky behaviour measurement tool that can be utilised with young drivers holding a provisional driving licence. Sixty-three items exploring young driver risky behaviour, developed from the road safety literature, were incorporated into an online survey. These items assessed driver, passenger, journey, car and crash-related issues. A sample of 476 drivers aged 17-25 years (M = 19, SD = 1.59 years) with a provisional driving licence, matched for age, gender, and education, was drawn from a state-wide sample of 761 young drivers who completed the survey. Factor analysis based upon a principal components extraction, followed by an oblique rotation, was used to investigate the underlying dimensions of young novice driver risky behaviour. A five-factor solution comprising 44 items was identified, accounting for 55% of the variance in young driver risky behaviour. Factor 1 accounted for 32.5% of the variance and appeared to measure driving violations that were transient in nature - risky behaviours following risky decisions made during the journey (e.g., speeding). Factor 2 accounted for 10.0% of the variance and appeared to measure driving violations that were fixed in nature, the risky decisions being made before the journey (e.g., drink driving). Factor 3 accounted for 5.4% of the variance and appeared to measure misjudgment (e.g., misjudging the speed of an oncoming vehicle). Factor 4 accounted for 4.3% of the variance and appeared to measure risky driving exposure (e.g., driving at night with friends as passengers). Factor 5 accounted for 2.8% of the variance and appeared to measure driver emotions or mood (e.g., anger). Given that the aim of the study was to create a research tool, the factors informed the development of five subscales and one composite scale. The composite scale had a very high internal consistency (Cronbach's alpha) of .947. Self-reported data on participants' police-detected driving offences, crash involvement, and intentions to break road rules within the next year were also collected. While the composite scale was only weakly correlated with self-reported crashes (r = .16, p < .001), it was moderately correlated with offences (r = .26, p < .001) and highly correlated with intentions to break road rules (r = .57, p < .001). Further application of the developed scale is needed to confirm the factor structure within other samples of young drivers, both in Australia and in other countries. In addition, future research could explore the applicability of the scale for investigating the behaviour of other types of drivers.
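A minimal, hypothetical sketch of the kind of analysis pipeline this abstract describes - principal components extraction with an oblique rotation, followed by Cronbach's alpha for the composite scale. It assumes the third-party factor_analyzer package and uses simulated item responses; none of the actual survey items or data are reproduced.

```python
# Hypothetical sketch only: simulated responses stand in for the 63 survey items.
import numpy as np
from factor_analyzer import FactorAnalyzer  # assumed available (pip install factor_analyzer)

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(476, 63)).astype(float)  # 476 drivers x 63 items (toy data)

# Principal components extraction with an oblique (oblimin) rotation
fa = FactorAnalyzer(n_factors=5, method="principal", rotation="oblimin")
fa.fit(items)
loadings = fa.loadings_               # 63 x 5 pattern matrix
variance = fa.get_factor_variance()   # variance explained per factor

def cronbach_alpha(x: np.ndarray) -> float:
    """Internal consistency of a set of items (rows = respondents, columns = items)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Composite scale built from the retained items (44 in the study; illustrative here)
print(cronbach_alpha(items[:, :44]))
```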

Relevance:

20.00%

Publisher:

Abstract:

Many large coal mining operations in Australia rely heavily on the rail network to transport coal from mines to coal terminals at ports for shipment. Over the last few years, due to fast-growing demand, the coal rail network has become one of the worst industrial bottlenecks in Australia. This provides great incentive for pursuing better optimisation and control strategies for operating the whole rail transportation system under network and terminal capacity constraints. This PhD research aims to achieve a significant efficiency improvement in a coal rail network through the development of standard modelling approaches and generic solution techniques. Generally, the train scheduling problem can be modelled as a Blocking Parallel-Machine Job-Shop Scheduling (BPMJSS) problem. In a BPMJSS model for train scheduling, trains and sections correspond to jobs and machines respectively, and an operation is regarded as the movement/traversal of a train across a section. To begin, an improved shifting bottleneck procedure algorithm combined with metaheuristics has been developed to efficiently solve the Parallel-Machine Job-Shop Scheduling (PMJSS) problem without blocking conditions. Due to the lack of buffer space, real-life train scheduling must consider blocking or hold-while-wait constraints, which means that a track section cannot release a train and must hold it until the next section on the route becomes available. As a consequence, the problem is treated as BPMJSS with blocking conditions. To develop efficient solution techniques for BPMJSS, extensive studies of non-classical scheduling problems with various buffer conditions (i.e. blocking, no-wait, limited-buffer, unlimited-buffer and combined-buffer) have been conducted. In this process, an alternative graph, an extension of the classical disjunctive graph, is developed and specially designed for non-classical scheduling problems such as the blocking flow-shop scheduling (BFSS), no-wait flow-shop scheduling (NWFSS), and blocking job-shop scheduling (BJSS) problems. By exploring the blocking characteristics based on the alternative graph, a new algorithm called the topological-sequence algorithm is developed for solving these non-classical scheduling problems. To demonstrate the advantages of the proposed algorithm, we compare it with two known algorithms in the literature (i.e. the Recursive Procedure and the Directed Graph algorithms). Moreover, we define a new type of non-classical scheduling problem, called combined-buffer flow-shop scheduling (CBFSS), which covers four extreme cases: classical flow-shop scheduling (FSS) with infinite buffer, blocking FSS (BFSS) with no buffer, no-wait FSS (NWFSS) and limited-buffer FSS (LBFSS). After exploring the structural properties of CBFSS, we propose an innovative constructive algorithm named the LK algorithm to construct feasible CBFSS schedules. Detailed numerical illustrations for the various cases are presented and analysed. By adjusting only the attributes in the data input, the proposed LK algorithm is generic and enables the construction of feasible schedules for many types of non-classical scheduling problems with different buffer constraints.
Inspired by the shifting bottleneck procedure algorithm for PMJSS and the characteristic analysis based on the alternative graph for non-classical scheduling problems, a new constructive algorithm called the Feasibility Satisfaction Procedure (FSP) is proposed to obtain feasible BPMJSS solutions. A real-world train scheduling case is used to illustrate and compare the PMJSS and BPMJSS models. Some real-life applications, including accounting for train length, upgrading track sections, accelerating a tardy train and changing bottleneck sections, are discussed. Furthermore, the BPMJSS model is generalised to a No-Wait Blocking Parallel-Machine Job-Shop Scheduling (NWBPMJSS) problem for scheduling trains with priorities, in which prioritised trains such as express passenger trains are considered simultaneously with non-prioritised trains such as freight trains. In this case, no-wait conditions, which are more restrictive than blocking constraints, arise because prioritised trains should traverse continuously without interruption or unplanned pauses, owing to the high cost of waiting during travel. In comparison, non-prioritised trains are allowed to enter the next section immediately if possible, or to remain in a section until the next section on the route becomes available. Based on the FSP algorithm, a more generic algorithm called the SE algorithm is developed to solve a class of train scheduling problems under the different conditions arising in train scheduling environments. To construct a feasible train schedule, the proposed SE algorithm comprises several modules, including the feasibility-satisfaction, time-determination, tune-up and conflict-resolution procedures. To find a good train schedule, a two-stage hybrid heuristic algorithm called the SE-BIH algorithm is developed by combining the constructive heuristic (i.e. the SE algorithm) with the local-search heuristic (i.e. the Best-Insertion-Heuristic algorithm). To optimise the train schedule, a three-stage algorithm called the SE-BIH-TS algorithm is developed by combining the tabu search (TS) metaheuristic with the SE-BIH algorithm. Finally, a case study is performed for a complex real-world coal rail network under network and terminal capacity constraints. The computational results show that the proposed methodology is promising, as it can be applied as a fundamental tool for modelling and solving many real-world scheduling problems.
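The hold-while-wait (blocking) condition at the core of the BPMJSS model can be illustrated with a small sketch. The following is a minimal, hypothetical example: the section names, routes and running times are invented, and it simply pushes trains through their routes one at a time rather than implementing the FSP, SE or SE-BIH-TS algorithms described above.

```python
# Toy illustration of blocking (hold-while-wait): a train keeps occupying its
# current section until the next section on its route becomes available.
from dataclasses import dataclass

@dataclass
class Section:
    name: str
    free_at: float = 0.0  # time at which the section next becomes available

def schedule_train(route, run_times, sections, depart=0.0):
    """Push one train through its route under blocking rules.

    Returns a list of (section, entry_time, exit_time) tuples."""
    times = []
    t = depart
    for i, name in enumerate(route):
        sec = sections[name]
        entry = max(t, sec.free_at)              # wait until the section is free
        done_running = entry + run_times[i]
        if i + 1 < len(route):
            nxt = sections[route[i + 1]]
            # blocking: the train holds this section until the next one is free
            exit_time = max(done_running, nxt.free_at)
        else:
            exit_time = done_running
        sec.free_at = exit_time                  # section released only on exit
        times.append((name, entry, exit_time))
        t = exit_time
    return times

sections = {s: Section(s) for s in ["S1", "S2", "S3"]}
print(schedule_train(["S1", "S2", "S3"], [10, 8, 12], sections))            # first train
print(schedule_train(["S1", "S2", "S3"], [9, 9, 9], sections, depart=2.0))  # follower is blocked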

Relevance:

20.00%

Publisher:

Abstract:

This paper describes the development and evaluation of a tactical lane change model using the forward search algorithm, for use in a traffic simulator. The tactical lane change model constructs a set of possible near-term maneuver sequences available to the driver and selects the lane change action at the present time that realizes the best maneuver plan. Including near-term maneuver planning in the driver behavior model can allow a better representation of the complex interactions in situations such as weaving sections and high-occupancy vehicle (HOV) lane systems, where drivers must weave across several lanes in order to access the HOV lanes. To support the investigation, a longitudinal control model and a basic lane change model were also analyzed. The basic lane change model is similar to those used by today's commonly used traffic simulators. Parameters in all models were estimated by best fit for selected vehicles from a real-world freeway vehicle trajectory data set. The best-fit estimation procedure minimizes the discrepancy between the model vehicle's and the real vehicle's trajectories. With the best-fit parameters, the proposed tactical lane change model gave a better overall performance for a greater number of cases than the basic lane change model.
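The best-fit estimation step can be sketched as an optimisation that minimises the discrepancy between a simulated and an observed trajectory. The toy one-parameter lane change model below is invented purely for illustration and is not the tactical model evaluated in the paper; it assumes NumPy and SciPy are available.

```python
# Hypothetical sketch: fit a single model parameter by minimising trajectory RMSE.
import numpy as np
from scipy.optimize import minimize_scalar

t = np.linspace(0.0, 6.0, 61)                       # time (s)
observed = 3.6 / (1.0 + np.exp(-1.2 * (t - 3.0)))   # observed lateral offset (m), toy data

def simulate(gain: float) -> np.ndarray:
    """Toy lane change model: a sigmoid lateral move whose steepness is 'gain'."""
    return 3.6 / (1.0 + np.exp(-gain * (t - 3.0)))

def discrepancy(gain: float) -> float:
    """Root-mean-square gap between the simulated and observed trajectories."""
    return float(np.sqrt(np.mean((simulate(gain) - observed) ** 2)))

result = minimize_scalar(discrepancy, bounds=(0.1, 5.0), method="bounded")
print(f"best-fit gain: {result.x:.3f}, RMSE: {result.fun:.4f} m")
```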

Relevance:

20.00%

Publisher:

Abstract:

Aim: This paper is a report of a study conducted to validate an instrument for measuring advanced practice nursing role delineation in an international contemporary health service context using the Delphi technique. Background: Although most countries now have clear definitions and competency standards for nurse practitioners, no such clarity exists for many advanced practice nurse roles, leaving healthcare providers uncertain whether their service needs can or should be met by an advanced practice nurse or a nurse practitioner. The validation of a tool depicting advanced practice nursing is essential for the appropriate deployment of advanced practice nurses. This paper reports the second phase of a three-phase study to develop an operational framework for assigning advanced practice nursing roles. Method: An expert panel was established to review the activities in the Strong Model of Advanced Practice Role Delineation tool. Using the Delphi technique, data were collected via an online survey through a series of iterative rounds in 2008. Feedback and statistical summaries of responses were distributed to the panel until the 75% consensus cut-off was reached. Results: After three rounds and the modification of five activities, consensus was obtained for validation of the content of this tool. Conclusion: The Strong Model of Advanced Practice Role Delineation tool is valid for depicting the dimensions of practice of the advanced practice role in an international contemporary health service context, thereby having the potential to optimize the utilization of the advanced practice nursing workforce.
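As a rough illustration of the 75% consensus rule applied across Delphi rounds, the following minimal sketch computes per-activity agreement from hypothetical panel ratings; the activity names, panel size and votes are all invented and do not reflect the study's data.

```python
# Hypothetical sketch of a Delphi consensus check; 1 = panellist endorses the activity.
panel_ratings = {
    "activity_1": [1, 1, 1, 1, 0, 1, 1, 1],
    "activity_2": [1, 0, 1, 0, 1, 0, 1, 1],
}
CUTOFF = 0.75  # consensus threshold used in the study

for activity, votes in panel_ratings.items():
    agreement = sum(votes) / len(votes)
    status = "consensus reached" if agreement >= CUTOFF else "carried to next round"
    print(f"{activity}: {agreement:.0%} agreement -> {status}")
```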

Relevance:

20.00%

Publisher:

Abstract:

The Lane Change Test (LCT) is one of a growing number of methods developed to quantify driving performance degradation brought about by the use of in-vehicle devices. Beyond its validity and reliability, for such a test to be of practical use it must also be sensitive to the varied demands of individual tasks. The current study evaluated the ability of several recent LCT lateral control and event detection parameters to discriminate between visual-manual and cognitive surrogate In-Vehicle Information System tasks with different levels of demand. Twenty-seven participants (mean age 24.4 years) completed a PC version of the LCT while performing visual search and math problem-solving tasks. A number of the lateral control metrics were sensitive to task differences, but the event detection metrics were less able to discriminate between tasks. The mean deviation and lane excursion measures were able to distinguish between the visual and cognitive tasks, but were less sensitive to the different levels of task demand. The other LCT metrics examined were less sensitive to task differences. A major factor influencing the sensitivity of at least some of the LCT metrics could be the type of lane change instructions given to participants. The provision of clear and explicit lane change instructions, together with further refinement of its metrics, will be essential for increasing the utility of the LCT as an evaluation tool.
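A minimal, hypothetical sketch of how an LCT-style mean deviation metric can be computed: the mean absolute lateral gap between the driven path and a normative path. The data are simulated, and the step-shaped normative path is a deliberate simplification of a real lane change profile.

```python
# Toy mean-deviation computation; not the LCT software or its exact normative model.
import numpy as np

x = np.linspace(0.0, 3000.0, 3001)                  # distance along the track (m)
normative = np.where(x < 1500.0, 0.0, 3.5)          # idealised lateral position (m)
driven = normative + np.random.default_rng(1).normal(0.0, 0.25, x.size)  # simulated drive

mean_deviation = float(np.mean(np.abs(driven - normative)))
print(f"mean deviation: {mean_deviation:.3f} m")
```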

Relevance:

20.00%

Publisher:

Abstract:

Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result, Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions. Document clustering is one of the many tools in the IR toolbox and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information. If these groups are misleading then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest. An algorithm with a time complexity of O(n²) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool. Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents. Dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results. Evaluating document clustering is a difficult task. Intrinsic measures of quality such as distortion only indicate how well an algorithm minimised a similarity function in a particular vector space. Intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a “ground truth” solution. This allows comparison between different approaches. As the “ground truth” is created by humans it can suffer from the fact that not every human interprets a topic in the same manner. Whether a document belongs to a particular topic or not can be subjective.
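A minimal sketch of the clustering-and-evaluation workflow discussed above, assuming scikit-learn is available: documents become TF-IDF vectors, are clustered with k-means, and the result is scored extrinsically against human-assigned topics with a simple purity measure. The toy corpus and topic labels are invented, and purity is only one of many possible extrinsic measures.

```python
# Hypothetical sketch: TF-IDF + k-means clustering with an extrinsic purity score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "train scheduling on a rail network",
    "freight trains and track sections",
    "clustering documents with tf-idf vectors",
    "support vector machines classify text documents",
]
truth = np.array([0, 0, 1, 1])  # invented ground-truth topics

X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

def purity(pred: np.ndarray, gold: np.ndarray) -> float:
    """Fraction of documents falling in the majority gold topic of their cluster."""
    correct = sum(np.bincount(gold[pred == c]).max() for c in np.unique(pred))
    return correct / gold.size

print(f"purity: {purity(labels, truth):.2f}")
```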

Relevance:

20.00%

Publisher:

Abstract:

Value Management (VM) has been proven to provide a structured framework, together with other supporting tools and techniques, that facilitates effective decision-making in many types of projects, thus achieving ‘best value’ for clients. One of the major success factors of VM in achieving better project objectives for clients is the beneficial input of multi-disciplinary team members involved in critical decision-making discussions during the early stages of construction projects. This paper describes a doctoral research proposal on the application of VM in design and build construction projects, focusing especially on the design stage. The research aims to study the effects of implementing VM in design and build construction projects, in particular how well the methodology addresses issues related to cost overruns resulting from poor coordination and the overlooking of critical constructability issues amongst team members in construction projects in Malaysia. It is proposed that through contractors’ early involvement during the design stage, combined with the use of the VM methodology, particularly as a decision-making tool, better optimization of construction cost can be achieved, thus promoting more efficient and effective constructability. The main methods used in this research are a thorough literature study, semi-structured interviews and a survey of major stakeholders, a detailed case study, and a VM workshop with focus group discussions involving construction professionals, in order to explore and possibly develop a framework and a specific methodology for facilitating the successful application of VM within design and build construction projects.

Relevance:

20.00%

Publisher:

Abstract:

Purpose: This paper aims to show that identification of expectations and software functional requirements via consultation with potential users is an integral component of the development of an emergency department patient admissions prediction tool. ---------- Design/methodology/approach: Thematic analysis of semi-structured interviews with 14 key health staff delivered rich data regarding existing practice and future needs. Participants included emergency department staff, bed managers, nurse unit managers, directors of nursing, and personnel from health administration. ---------- Findings: Participants contributed contextual insights on the current system of admissions, revealing a culture of crisis imbued with misplaced communication. Their expectations and requirements of a potential predictive tool provided strategic data that moderated the development of the Emergency Department Patient Admissions Prediction Tool, based on their insistence that it feature availability, reliability and relevance. In order to deliver these stipulations, participants stressed that it should be incorporated, validated, defined and timely. ---------- Research limitations/implications: Participants were envisaging a concept and use of a tool that was somewhat hypothetical. However, further research will evaluate the tool in practice. ---------- Practical implications: Participants' unsolicited recommendations regarding implementation will not only inform a subsequent phase of the tool evaluation, but are also eminently applicable to any process of implementation in a healthcare setting. ---------- Originality/value: The consultative process engaged clinicians, and the paper delivers an insider view of an overburdened system rather than an outsider's observations.