279 results for Motorcycle Operators.


Relevance: 10.00%

Abstract:

In some parts of Australia, people wanting to learn to ride a motorcycle are required to complete an off-road training course before they are allowed to practice on the road. In the state of Queensland, they are only required to pass a short multiple-choice road rules knowledge test. This paper describes an analysis of police-reported crashes involving learner riders in Queensland that was undertaken as part of research investigating whether pre-learner training is needed and, if so, the issues that should be addressed in training. The crashes of learner riders and other riders were compared to identify whether there are particular situations or locations in which learner motorcyclists are over-involved in crashes, which could then be targeted in the pre-learner package. The analyses were undertaken separately for riders aged under 25 (330 crashes) and those aged 25 and over (237 crashes) to provide some insight into whether age or riding inexperience is the more important factor, and thus to indicate whether there is merit in having different licensing or training approaches for younger and older learner riders. Given that the average age of learner riders was 33 years, a cut-off of under 25 was chosen to provide a sufficiently large sample of younger riders. Learner riders appeared to be involved in more severe crashes and to be more often at fault than fully-licensed riders, but this may reflect problems in reporting rather than real differences. Compared to open licence holders, both younger and older learner riders had relatively more crashes in low speed zones and relatively fewer in high speed zones. Riders aged under 25 had elevated percentages of night-time crashes and fewer single-unit crashes (potentially involving rider error only) regardless of the type of licence held.
The contributing factors that were more prevalent in crashes of learner riders than holders of open licences were: inexperience (37.2% versus 0.5%), inattention (21.5% versus 15.6%), alcohol or drugs (12.0% versus 5.1%) and drink riding (9.9% versus 3.1%). The pattern of contributing factors was generally similar for younger and older learner riders, although younger learners were (not surprisingly) more likely to have inexperience coded as a contributing factor (49.7% versus 19.8%). Some of the differences in crashes between learner riders and fully-licensed riders appear to reflect relatively more riding in urban areas by learners, rather than increased risks relating to inexperience. The analysis of contributing factors in learner rider crashes suggests that hazard perception and risk management (in terms of speed, alcohol and drugs) should be included in a pre-learner program. Currently, most learner riders in Queensland complete pre-licence training and become licensed within one month of obtaining their learner permit. If the introduction of pre-learner training required that the learner permit be held for a minimum duration, then the immediate effect might be more learners riding (and crashing). Thus, it is important to consider how training and licensing initiatives work together in order to improve the safety of new riders (and how this can be evaluated).
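The factor comparisons above can be checked with a standard two-proportion z-test. The counts below are hypothetical, chosen only to reproduce the reported inattention rates (21.5% of learner crashes versus 15.6% of open-licence crashes); the open-licence sample size is an assumption, not a figure from the paper.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test: are the rates x1/n1 and x2/n2 different?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # pooled standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# 122/567 ~ 21.5% learner crashes with inattention coded;
# 780/5000 ~ 15.6% for open-licence riders (n2 = 5000 is assumed).
z, p = two_proportion_z(122, 567, 780, 5000)
```

Under these assumed counts the difference is statistically significant, but as the abstract notes, reporting artefacts could still explain part of it.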

Relevance: 10.00%

Abstract:

Surveys have identified that many older motorcyclists are returning riders but it is difficult to draw conclusions about their crash risk because of discrepancies in definitions and the inability to identify returning riders in official crash databases. Analyses of NSW crash data were undertaken in which returning riders were defined as aged 25 and over, holding a full licence 10 years prior to the crash, and not the registered operator of one or more motorcycles during the 5-10 years prior to the crash. Based on this definition, there were 472 riders in casualty crashes in 2005-09 who were returning riders (5.5% of riders aged 25 and over in casualty crashes) and the characteristics of their crashes were similar to those involving continuing riders. In contrast, crashes of new riders were more likely to have characteristics suggestive of relatively more riding in urban areas, probably for transport rather than recreation. More work is recommended to assess the validity of the definition to allow a better understanding of the effects of long periods away from riding on riding skills and crash risk.
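The definition used above is essentially a set of filters over crash records and can be sketched as a small classifier. The field names and the three-way split are illustrative; the actual NSW database schema is not described in this abstract.

```python
def classify_rider(age, full_licence_10yr_prior, registered_5_to_10yr_prior):
    """Apply the returning-rider definition to one crash record.

    Returns 'returning', 'continuing', or 'other' (a simplified
    three-way split for illustration).
    """
    if age < 25:
        return "other"
    if not full_licence_10yr_prior:
        return "other"          # too recently licensed to be "returning"
    if registered_5_to_10yr_prior:
        return "continuing"     # kept a motorcycle registered: never left
    return "returning"

label = classify_rider(40, True, False)   # long break from riding
```

Applying such a function over all casualty-crash records for 2005-09 is how the 472 returning riders quoted above would be extracted.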

Relevance: 10.00%

Abstract:

Diesel particulate matter (DPM), in particular, has been likened in a somewhat inflammatory manner to the 'next asbestos'. From the business change perspective, three areas are holding the industry back from fully engaging with the issue:
1. There is no real feedback loop in any operational sense to assess the impact of investment in, or application of, controls to manage diesel emissions.
2. DPM particles are becoming ever smaller and more numerous, but there is no practical way of measuring them to regulate them in the field. Mass, the current basis of regulation, is becoming less and less relevant.
3. Diesel emissions management is generally viewed wholly as a cost, yet good management offers significant areas of benefit.
This paper discusses a feedback approach to address these three areas and move the industry forward. Six main areas of benefit from providing a feedback loop by continuously monitoring diesel emissions have been identified:
1. Condition-based maintenance: emissions change instantaneously if engine condition changes.
2. Operator performance: an operator can use a lot more fuel for little incremental work output through poor technique or discipline.
3. Vehicle utilisation: operating hours achieved and the ratio of idling to time under power affect the proportion of emissions produced with no economic value.
4. Fuel efficiency: this allows visibility into other contributing configuration and environmental factors for the vehicle.
5. Emission rates: this allows scope to directly address the required ratio of ventilation to diesel emissions.
6. Total carbon emissions: for NGER-type reporting requirements, calculating emissions individually from each vehicle rather than just reporting on fuel delivered to a site.
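The sixth benefit, per-vehicle carbon reporting, is simple arithmetic once fuel burn is metered for each vehicle. A minimal sketch, assuming an indicative diesel emission factor (NGER legislation prescribes the exact factors to use):

```python
DIESEL_CO2_KG_PER_L = 2.68  # indicative factor only; not an NGER value

def fleet_co2(fuel_burn_l):
    """Per-vehicle CO2 (kg) from metered fuel burn, plus the site total."""
    per_vehicle = {vid: litres * DIESEL_CO2_KG_PER_L
                   for vid, litres in fuel_burn_l.items()}
    return per_vehicle, sum(per_vehicle.values())

# Hypothetical fuel burn (litres) for two vehicles over a period:
per_vehicle, total = fleet_co2({"truck_01": 1200.0, "loader_02": 800.0})
```

The point of the feedback loop is that the per-vehicle breakdown, unlike a single fuel-delivered figure, attributes emissions to individual machines and operators.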

Relevance: 10.00%

Abstract:

An area of property valuation that has attracted less attention than other property markets over the past 20 years is the mining and extractive industries. These operations can range from small operators on leased or private land to multinational companies. Although there are a number of national mining standards that indicate the type of valuation methods that can be adopted for this asset class, these standards do not specify how or when these methods are best suited to particular mine operations. The RICS guidance notes and the draft IVSC guidance notes also advise on the various valuation methods that can be used to value mining properties; but, again, they do not specify which methods should be applied where and when. One of the methods supported by these standards and guidelines is the market approach. This paper will carry out an analysis of all mine, extractive industry and waste disposal site sale transactions in Queensland, Australia, a major world mining centre, to determine whether a market valuation approach such as direct comparison is actually suitable for the valuation of a mine or extractive industry. The analysis will cover the period 1984 to 2011 and will include sale transactions for minerals, petroleum and gas, waste disposal sites, clay, sand and stone. Based on this analysis, the suitability of direct comparison for valuation purposes in this property sector will be tested.
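Direct comparison for extractive sites is often expressed as a rate per unit of resource derived from comparable sales. A minimal sketch of the idea, with made-up sale figures and a rate per tonne of reserve as a hypothetical unit of comparison:

```python
def direct_comparison(comparables, subject_reserve_t):
    """Value the subject site at the mean rate per tonne of the comparables.

    comparables: list of (sale_price, reserve_tonnes) for recent sales
    of similar extractive sites (all figures here are illustrative).
    """
    rates = [price / tonnes for price, tonnes in comparables]
    mean_rate = sum(rates) / len(rates)
    return mean_rate * subject_reserve_t

value = direct_comparison([(2_000_000, 500_000), (3_300_000, 750_000)],
                          subject_reserve_t=600_000)
```

The paper's question is whether enough genuinely comparable transactions exist in this sector for such a calculation to be defensible.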

Relevance: 10.00%

Abstract:

This chapter gives an overview of the smartphone app economy and its various constituent ecosystems. It examines the role of the app store model and the proliferation of mobile apps in the shift from value chains controlled by network operators and handset manufacturers, to value networks – or ecosystems – focused around operating systems and apps. It outlines some of the benefits and disadvantages for developers of the app store model for remuneration and distribution. The chapter concludes with a discussion of recent research on the size and employment effects of the app economy.

Relevance: 10.00%

Abstract:

Airport efficiency is important because it has a direct impact on customer safety and satisfaction, and therefore on the financial performance and sustainability of airports, airlines, and affiliated service providers. This is especially so in a world characterized by an increasing volume of both domestic and international air travel, price and other forms of competition between rival airports, airport hubs and airlines, and rapid and sometimes unexpected changes in airline routes and carriers. It also reflects expansion in the number of airports handling regional, national, and international traffic and the growth of complementary airport facilities including industrial, commercial, and retail premises. This has fostered a steadily increasing volume of research aimed at modeling and providing best-practice measures and estimates of airport efficiency using mathematical and econometric frontiers. The purpose of this chapter is to review these various methods as they apply to airports throughout the world. Apart from discussing the strengths and weaknesses of the different approaches and their key findings, the chapter also examines the steps faced by researchers as they move through the modeling process in defining airport inputs and outputs and the purported efficiency drivers. Accordingly, the chapter provides guidance to those conducting empirical research on airport efficiency and serves as an aid for aviation regulators, airport operators, and others interpreting airport efficiency research outcomes.
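The frontier methods reviewed in the chapter all generalize the basic idea of efficiency as output per input, measured relative to a best-practice benchmark. A deliberately simplified single-input, single-output sketch (real studies use multiple inputs and outputs with DEA or stochastic frontiers, and the figures below are invented):

```python
def ratio_efficiency(airports):
    """Efficiency of each unit relative to the best output/input ratio.

    airports: {name: (output, input)} -- e.g. annual passengers handled
    per staff member; both measures here are illustrative.
    """
    ratios = {name: out_ / in_ for name, (out_, in_) in airports.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Airport A handles 9m passengers with 3000 staff, B handles 4m with 2000:
eff = ratio_efficiency({"A": (9.0e6, 3000), "B": (4.0e6, 2000)})
```

An efficiency of 1.0 places a unit on the frontier; values below 1.0 quantify its distance from best practice, which is the common interpretation across the frontier methods the chapter reviews.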

Relevance: 10.00%

Abstract:

This paper describes a generic, integrated gas sensing system combining a solar-powered remote Unmanned Air Vehicle (UAV) and a Wireless Sensor Network (WSN). The system measures CH4 and CO2 concentrations using metal oxide (MOX) and non-dispersive infrared (NDIR) sensors, employs a new solar cell encapsulation method to power the UAVs, and includes a data management platform to store, analyse and share the information with operators and external users. The system was successfully field tested at ground level and low altitudes, collecting, storing and transmitting data in real time to a central node for analysis and 3D mapping. The system can be used in a wide range of outdoor applications, especially in agriculture, bushfire and mining studies, opening the way to ubiquitous low-cost environmental monitoring. A video of the bench and flight tests performed can be seen at the following link: https://www.youtube.com/watch?v=Bwas7stYIxQ.
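Gas sensors such as the NDIR units mentioned above are commonly calibrated against reference gases before concentrations can be reported. A sketch of a generic two-point linear calibration; the voltages and span concentration below are illustrative, not taken from the paper's hardware:

```python
def two_point_cal(raw, raw_zero, raw_span, span_ppm):
    """Linear two-point calibration of a gas sensor reading.

    raw_zero and raw_span are the sensor outputs recorded in zero gas
    and in a span gas of known concentration (values are illustrative).
    """
    return (raw - raw_zero) / (raw_span - raw_zero) * span_ppm

# A hypothetical 1.60 V reading on a sensor spanning 0.40 V (0 ppm)
# to 2.40 V (2000 ppm CO2):
co2_ppm = two_point_cal(raw=1.60, raw_zero=0.40, raw_span=2.40, span_ppm=2000)
```

Calibrated readings like this, tagged with GPS position and altitude, are what the data management platform would store for the 3D mapping step.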

Relevance: 10.00%

Abstract:

The huge amount of CCTV footage available makes it very burdensome to process these videos manually through human operators. This has made automated processing of video footage through computer vision technologies necessary. During the past several years, there has been a large effort to detect abnormal activities through computer vision techniques. Typically, the problem is formulated as a novelty detection task in which the system is trained on normal data and is required to detect events that do not fit the learned 'normal' model. There is no precise and exact definition of an abnormal activity; it is dependent on the context of the scene. Hence there is a requirement for different feature sets to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene. These include optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modeled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM) to analyse their performance. Further, we apply perspective normalization to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
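The novelty-detection formulation can be illustrated with a single Gaussian fitted to a scalar feature, a deliberately simplified stand-in for the GMM and semi-2D HMM models used in the paper. The training values below are invented and stand for per-frame mean optical-flow magnitudes:

```python
from math import sqrt

def fit_normal(values):
    """Fit a 1-D Gaussian to features extracted from 'normal' footage."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, sqrt(var)

def is_abnormal(value, mu, sigma, k=3.0):
    """Flag a frame whose feature deviates more than k sigma from normal."""
    return abs(value - mu) > k * sigma

# Invented per-frame flow magnitudes from pedestrian-only training footage:
train = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 0.9]
mu, sigma = fit_normal(train)
flag = is_abnormal(5.0, mu, sigma)   # fast motion, e.g. a cyclist on a walkway
```

A GMM replaces the single Gaussian with a mixture (so multimodal "normal" behaviour is representable), and the HMM additionally models temporal structure, but the train-on-normal, threshold-on-likelihood logic is the same.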

Relevance: 10.00%

Abstract:

There is little research on off-road motorcycle and all-terrain vehicle riders, even though injury levels are high. This thesis identified several ways to increase safety: assigning formal responsibility for monitoring injuries, targeting young male and recreational riders, promoting family members as role models, and providing controlled and accessible riding locations. These recommendations were based on analysis of Queensland hospitalisation records, riders' personal reports and survey responses.

Relevance: 10.00%

Abstract:

Transit passenger market segmentation enables transit operators to target different classes of transit users and provide customized information and services. Smart Card (SC) data from Automated Fare Collection systems facilitate the understanding of the multiday travel regularity of transit passengers, and can be used to segment them into identifiable classes of similar behaviors and needs. However, the use of SC data for market segmentation has attracted very limited attention in the literature. This paper proposes a novel methodology for mining spatial and temporal travel regularity from each individual passenger's historical SC transactions and segmenting passengers into four classes of transit users. After reconstructing travel itineraries from historical SC transactions, the paper adopts the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to mine the travel regularity of each SC user. The travel regularity is then used to segment SC users through an a priori market segmentation approach. The proposed methodology assists transit operators in understanding their passengers and providing them with targeted information and services.
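DBSCAN's role here is to find dense clusters in a passenger's transaction history while leaving irregular trips as noise. A minimal one-dimensional sketch over invented boarding times (the paper clusters richer spatial and temporal attributes):

```python
def dbscan(points, eps, min_pts):
    """Minimal 1-D DBSCAN; label -1 marks noise points."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points))
                 if abs(points[j] - points[i]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1                 # provisionally noise
            continue
        cluster += 1                       # i is a core point: new cluster
        labels[i] = cluster
        seeds = list(neigh)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster        # noise becomes a border point
            if labels[j] is not None:
                continue                   # already handled
            labels[j] = cluster
            jn = [k for k in range(len(points))
                  if abs(points[k] - points[j]) <= eps]
            if len(jn) >= min_pts:         # j is also core: expand further
                seeds.extend(jn)
    return labels

# Boarding times (minutes after midnight) for one card over ten days;
# the 950 entry is an irregular midday trip:
times = [485, 490, 487, 492, 488, 486, 491, 950, 489, 484]
labels = dbscan(times, eps=10, min_pts=4)
```

The dense cluster of ~8:05 am boardings is this passenger's travel regularity; the lone midday trip is labelled noise, which is exactly why a density-based method suits irregularly sampled transaction histories.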

Relevance: 10.00%

Abstract:

There are a number of pressing issues facing contemporary online environments that are causing disputes among participants and platform operators and increasing the likelihood of external regulation. A number of solutions have been proposed, including industry self-governance, top-down regulation, and emergent self-governance such as EVE Online's "Council of Stellar Management". However, none of these solutions seems entirely satisfactory: each faces challenges from developers who fear regulators will not understand their platforms, or from players who feel they are not sufficiently empowered to influence the platform, while many authors have raised concerns over the implementation of top-down regulation and over why the industry may be well served to pre-empt such action. This paper considers case studies of EVE Online and the offshore gambling industry, and asks whether a version of self-governance may be suitable for the future of the industry.

Relevance: 10.00%

Abstract:

Current governance challenges facing the global games industry are heavily dominated by online games. Whilst much academic and industry attention has been afforded to Virtual Worlds, the more pressing contemporary challenges may arise in casual games, especially when found on social networks. As authorities are faced with an increasing volume of disputes between participants and platform operators, the likelihood of external regulation increases, and the role that such regulation would have on the industry – both internationally and within specific regions – is unclear. Kelly (2010) argues that “when you strip away the graphics of these [social] games, what you are left with is simply a button [...] You push it and then the game returns a value of either Win or Lose”. He notes that while “every game developer wants their game to be played, preferably addictively, because it’s so awesome”, these mechanics lead not to “addiction of engagement through awesomeness” but “the addiction of compulsiveness”, surmising that “the reality is that they’ve actually sort-of kind-of half-intentionally built a virtual slot machine industry”. If such core elements of social game design are questioned, this gives cause to question the real-money options to circumvent them. With players able to purchase virtual currency and speed the completion of tasks, the money invested by the 20% purchasing in-game benefits (Zainwinger, 2012) may well be the result of compulsion. The decision by the Japanese Consumer Affairs agency to investigate the ‘Kompu Gacha’ mechanic (in which players are rewarded for completing a set of items obtained through purchasing virtual goods such as mystery boxes), and the resultant verdict that such mechanics should be regulated through gambling legislation, demonstrates that politicians are beginning to look at the mechanics deployed in these environments. 
Purewal (2012) states that "there's a reasonable argument that complete gacha would be regulated under gambling law under at least some (if not most) Western jurisdictions". This paper explores the governance challenges within these games and platforms, their role in the global industry, and current practice amongst developers in Australia and the United States to address such challenges.

Relevance: 10.00%

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information; be that information for a specific study, tweets that can inform emergency services or other responders during an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large incoming stream and selecting which of those tweets need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known to be authoritative sources.
Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions that has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
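The first paper's combination of content analysis and user profiling can be sketched as a simple additive score. The keyword weights and trusted-author bonus below are illustrative stand-ins, not the panel's actual scoring scheme:

```python
def score_tweet(text, author, keywords, trusted_authors):
    """Triage score combining content analysis and user profiling.

    Keyword matches contribute weighted scores (content analysis);
    known authoritative accounts get a flat bonus (user profiling).
    """
    text_l = text.lower()
    score = sum(w for kw, w in keywords.items() if kw in text_l)
    if author in trusted_authors:
        score += 5   # authoritative-source bonus (illustrative value)
    return score

# Hypothetical crisis keywords with urgency weights:
keywords = {"flood": 3, "evacuate": 4, "help": 2}
score = score_tweet("Please help, we need to evacuate now",
                    "@qldpolice", keywords, {"@qldpolice"})
```

Sorting an incoming stream by such a score, and showing responders only the top of the queue, is one way to match the filtered volume to their capacity, as the panel discussion proposes.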

Relevance: 10.00%

Abstract:

Many applications can benefit from the accurate surface temperature estimates that can be made using a passive thermal-infrared camera. However, the process of radiometric calibration which enables this can be both expensive and time consuming. An ad hoc approach for performing radiometric calibration is proposed which does not require specialized equipment and can be completed in a fraction of the time of the conventional method. The proposed approach utilizes the mechanical properties of the camera to estimate scene temperatures automatically, and uses these target temperatures to model the effect of sensor temperature on the digital output. A comparison with a conventional approach using a blackbody radiation source shows that the accuracy of the method is sufficient for many tasks requiring temperature estimation. Furthermore, a novel visualization method is proposed for displaying the radiometrically calibrated images to human operators. The representation employs an intuitive coloring scheme and allows the viewer to perceive a large variety of temperatures accurately.
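The calibration idea, mapping digital output to scene temperature, reduces to regression once target temperatures are available. A sketch of the one-variable case with invented counts (the approach described above additionally models the effect of sensor temperature on the output):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of ys ~ a*xs + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented digital counts paired with reference scene temperatures (deg C):
counts = [7000, 7400, 7800, 8200]
temps = [10.0, 20.0, 30.0, 40.0]
a, b = fit_line(counts, temps)
estimate = a * 7600 + b   # estimated temperature for a new reading
```

In the proposed ad hoc approach the reference temperatures come from the camera's own mechanical properties rather than a blackbody source, but the fitting step is the same kind of regression.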