837 results for Eliminate lost time

in Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

The common approach to estimating bus dwell time at a BRT station is to apply the traditional dwell time methodology derived for suburban bus stops. Although sensitive to boarding and alighting passenger numbers and, to some extent, to the fare collection media, these traditional dwell time models do not account for platform crowding. Moreover, they fall short in accounting for the effects of passengers walking along a relatively long BRT platform. Drawing on experience from Brisbane busway (BRT) stations, a new variable, Bus Lost Time (LT), is introduced into the traditional dwell time model. The bus lost time variable captures the impact of passenger walking and platform crowding on bus dwell time, the two characteristics that differentiate a BRT station from a bus stop. This paper reports the development of a methodology to estimate the bus lost time experienced by buses at a BRT platform. Results were compared with the Transit Capacity and Quality of Service Manual (TCQSM) approach to dwell time and station capacity estimation. When bus lost time was included in the dwell time calculations, the estimated BRT station platform capacity was reduced by 10.1%.
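As a concrete illustration of the idea, the sketch below adds an explicit lost-time term to a simple boarding/alighting service-time calculation. The function, coefficients and passenger numbers are hypothetical placeholders chosen here for illustration; they are not the calibrated model reported in the paper.

```python
# Hypothetical sketch: a traditional dwell-time model extended with a bus
# lost time (LT) term, as described in the abstract. All coefficients and
# inputs below are illustrative placeholders, not calibrated values.

def dwell_time(boarding, alighting, lost_time_s,
               t_board=3.0, t_alight=2.0, t_doors=4.0):
    """Dwell time (s) = door open/close + passenger service + bus lost time."""
    service = max(boarding * t_board, alighting * t_alight)  # busiest door channel governs
    return t_doors + service + lost_time_s

# Same passenger demand, with and without the lost-time term
print(dwell_time(boarding=12, alighting=8, lost_time_s=0.0))   # 40.0 s
print(dwell_time(boarding=12, alighting=8, lost_time_s=9.0))   # 49.0 s
```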

Relevance:

80.00%

Publisher:

Abstract:

Bus Rapid Transit (BRT), because of its operational flexibility and simplicity, is rapidly gaining popularity with urban designers and transit planners. Earlier BRT designs used shared bus lanes or bus-only lanes, which share the roadway with general and other forms of traffic. More recently, more sophisticated BRT designs have emerged, such as the busway, which has a separate carriageway for buses and provides a very high degree of physical separation from general traffic. The line capacity of a busway is predominantly dependent on the bus capacity of its stations. Despite new developments in BRT design, the methodology of capacity analysis is still based on traditional principles of kerbside bus stop operation on bus-only lanes. Consequently, the traditional methodology fails to account for various dimensions of busway station operation, such as passenger crowding, passenger walking and bus lost time along the long busway station platform. This research has developed a purpose-made bus capacity analysis methodology for busway station analysis. Extensive observations of kerbside bus stops and busway stations in Brisbane, Australia, were made and the differences in their operation were studied. A large-scale data collection was conducted using the video recording technique at the Mater Hill Busway Station on the South East Busway in Brisbane. This research identified new parameters concerning busway station operation and, through detailed analysis, identified the elements and processes that influence bus dwell time at a busway station platform. A new variable, bus lost time, was defined and its quantitative descriptions were established. Based on these findings and analyses, a busway station platform bus capacity methodology was developed, comprising new models for busway station lost time, busway station dwell time, busway station loading area bus capacity, and busway station platform bus capacity. The new methodology not only accounts for passenger boarding and alighting, but also covers platform crowding and bus lost time in station platform bus capacity estimation. The applicability of this methodology was shown through demonstrative examples. Additionally, these examples illustrated the significance of the bus lost time variable in determining station capacities.
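To show where a station lost-time term would sit in a capacity calculation, the sketch below chains a deliberately simplified, TCQSM-style loading-area capacity formula with a lost-time term and an effective number of loading areas. Every number is an assumed placeholder; this is not the calibrated busway model developed in the thesis.

```python
# Simplified, hypothetical capacity chain: dwell time + clearance + margin +
# station lost time -> loading-area capacity -> platform capacity.
# All values are illustrative assumptions, not the thesis's calibrated models.

def loading_area_capacity(dwell_s, clearance_s, lost_time_s=0.0, margin_s=10.0):
    """Approximate buses per hour served by one loading area."""
    return 3600.0 / (dwell_s + clearance_s + margin_s + lost_time_s)

def platform_capacity(dwell_s, clearance_s, lost_time_s, effective_loading_areas):
    """Platform capacity = effective loading areas x loading-area capacity."""
    return effective_loading_areas * loading_area_capacity(dwell_s, clearance_s, lost_time_s)

without_lt = platform_capacity(30, 10, 0.0, effective_loading_areas=2.5)
with_lt = platform_capacity(30, 10, 8.0, effective_loading_areas=2.5)
print(f"{without_lt:.0f} -> {with_lt:.0f} buses/h")  # adding lost time lowers the estimate
```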

Relevance:

80.00%

Publisher:

Abstract:

Normally, vehicles queued at an intersection reach the maximum flow rate only after the fourth vehicle, resulting in start-up lost time. This research demonstrated that the Enlarged Stopping Distance (ESD) concept could assist in reducing the start-up time and therefore increase traffic flow capacity at signalised intersections. In essence, ESD gives a queuing vehicle sufficient space to begin accelerating simultaneously, without having to wait for the vehicle in front to depart, hence reducing start-up lost time. In practice, the ESD concept would be most effective when the stopping distance between the first and second vehicles is enlarged, allowing faster clearance of the intersection.
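The throughput effect can be illustrated with simple signal arithmetic: the number of vehicles discharged per green phase follows from the green time, the start-up lost time and the saturation headway. The values below, including the assumed lost-time saving from ESD, are invented for illustration and are not taken from the study.

```python
# Back-of-envelope arithmetic: fewer seconds of start-up lost time means more
# vehicles discharged per green. All numbers are assumptions for illustration.

def vehicles_per_green(green_s, startup_lost_s, saturation_headway_s=2.0):
    """Approximate vehicles discharged during one green phase."""
    return (green_s - startup_lost_s) / saturation_headway_s

conventional = vehicles_per_green(green_s=30, startup_lost_s=3.0)
with_esd = vehicles_per_green(green_s=30, startup_lost_s=1.5)  # assumed ESD saving
print(f"{conventional:.1f} -> {with_esd:.1f} vehicles per green phase")
```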

Relevance:

40.00%

Publisher:

Abstract:

Background: Extreme temperatures are associated with cardiovascular disease (CVD) deaths. Previous studies have investigated the relative CVD mortality risk of temperature, but this risk is heavily influenced by deaths in frail elderly persons. To better estimate the burden of extreme temperatures, we estimated their effects on years of life lost due to CVD. Methods and Results: The data were daily observations on weather and CVD mortality for Brisbane, Australia, between 1996 and 2004. We estimated the association between daily mean temperature and years of life lost due to CVD, after adjusting for trend, season, day of the week, and humidity. To examine the non-linear and delayed effects of temperature, a distributed lag non-linear model was used. The model's residuals were examined to investigate whether there were any added effects due to cold spells and heat waves. The exposure-response curve between temperature and years of life lost was U-shaped, with the lowest years of life lost at 24 °C. The curve rose more sharply at extremes of heat than of cold. The effect of cold peaked two days after exposure, whereas the greatest effect of heat occurred on the day of exposure. There were significant added effects of heat waves on years of life lost. Conclusions: Increased years of life lost due to CVD are associated with both cold and hot temperatures. Research on specific interventions is needed to reduce temperature-related years of life lost from CVD deaths.
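The shape of the reported association can be pictured with a toy exposure-response function: a U-shaped curve with its minimum at 24 °C and a steeper rise on the hot side. This is not the authors' distributed lag non-linear model; the coefficients below are invented solely to illustrate the asymmetry described in the results.

```python
# Toy U-shaped exposure-response curve, minimum at 24 C, steeper on the hot
# side. Coefficients are invented for illustration; this is not the DLNM fit.

def excess_yll(temp_c, t_min=24.0, cold_coef=0.6, heat_coef=1.8):
    """Excess years of life lost (arbitrary units) relative to the minimum at t_min."""
    delta = temp_c - t_min
    coef = heat_coef if delta > 0 else cold_coef
    return coef * delta ** 2

for t in (10, 18, 24, 30, 34):
    print(t, round(excess_yll(t), 1))
```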

Relevance:

30.00%

Publisher:

Abstract:

Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on identifying the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices on the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series for the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices on the AMEX, which were established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming the possible long-range dependence established by MF-DFA.

The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
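To make the MF-DFA procedure mentioned in Part I concrete, the sketch below implements a minimal single-ended, order-1 MF-DFA and extracts the generalised Hurst exponent h(q) by log-log regression. The segment sizes, the choice q = 2 and the white-noise test series are illustrative assumptions; the thesis's own implementation and data are not reproduced here.

```python
# Minimal MF-DFA sketch (order-1 detrending) following the generic procedure:
# integrate, segment, detrend, compute q-th order fluctuations, fit scaling.
import numpy as np

def mfdfa_hq(x, scales, q=2.0, poly_order=1):
    """Return the generalised Hurst exponent h(q) from a log-log regression."""
    profile = np.cumsum(x - np.mean(x))                  # step 1: integrated profile
    fq = []
    for s in scales:
        n_seg = len(profile) // s
        var = []
        for i in range(n_seg):                           # step 2: non-overlapping segments
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, poly_order), t)
            var.append(np.mean((seg - trend) ** 2))      # step 3: detrended variance
        var = np.asarray(var)
        fq.append(np.mean(var ** (q / 2.0)) ** (1.0 / q))  # step 4: q-th order fluctuation
    slope, _ = np.polyfit(np.log(scales), np.log(fq), 1)   # step 5: scaling exponent h(q)
    return slope

x = np.random.default_rng(0).standard_normal(4096)      # white noise: expect h(2) near 0.5
print(mfdfa_hq(x, scales=[16, 32, 64, 128, 256], q=2.0))
```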

Relevance:

30.00%

Publisher:

Abstract:

Usability in HCI (Human-Computer Interaction) is normally understood as the simplicity and clarity with which interaction with a computer program or a web site is designed. Identity management systems need to provide adequate usability and should have a simple and intuitive interface. The system should not only be designed to satisfy service provider requirements; it also has to consider user requirements, otherwise it will lead to inconvenience and poor usability for users when managing their identities. With poor usability and a poor user interface with regard to security, it is highly likely that the system will have poor security. The rapid growth in the number of online services leads to an increasing number of different digital identities each user needs to manage. As a result, many people feel overloaded with credentials, which in turn negatively impacts their ability to manage them securely. Passwords are perhaps the most common type of credential used today. To avoid the tedious task of remembering difficult passwords, users often behave less securely by using low-entropy, weak passwords. Weak passwords and bad password habits represent security threats to online services. Some solutions have been developed to eliminate the need for users to create and manage passwords. A typical solution is based on generating one-time passwords, i.e. passwords valid for a single session or transaction. Unfortunately, most of these solutions do not satisfy scalability and/or usability requirements, or they are simply insecure. In this thesis, the security and usability aspects of contemporary methods for authentication based on one-time passwords (OTP) are examined and analyzed. In addition, more scalable solutions that provide a good user experience while preserving strong security are proposed.
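As a concrete example of the kind of OTP mechanism examined in the thesis, the sketch below implements a standard HMAC-based one-time password (HOTP, RFC 4226) generator. It is included only to make the discussion tangible; it is not the scalable scheme proposed in the thesis.

```python
# RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated to
# a short numeric code that the user types instead of a static password.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Generate an RFC 4226 one-time password for a shared secret and counter."""
    msg = struct.pack(">Q", counter)                        # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(hotp(b"12345678901234567890", counter=0))  # RFC 4226 test vector: 755224
```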

Relevance:

30.00%

Publisher:

Abstract:

This study explores the relationship between new venture team composition and new venture persistence and performance over time. Using a 5-year panel study of 202 new venture teams, we examine team characteristics and new venture performance. Our study makes two contributions. First, we extend earlier research on homophily theories of the prevalence of homogeneous teams. Using structural event analysis, we demonstrate that team members' start-up experience is important in this context. Second, we attempt to reconcile conflicting evidence concerning the influence of team homogeneity on performance by considering the element of time. We hypothesize that higher team homogeneity is positively related to short-term outcomes but is less effective in the longer term. Our results confirm a difference over time: we find that more homogeneous teams are less likely to be higher performing in the long term. However, we find no relationship between team homogeneity and short-term performance outcomes.

Relevance:

30.00%

Publisher:

Abstract:

Health complaint statistics are important for identifying problems and bringing about improvements to the health care provided by health service providers and to the wider health care system. This paper provides an overview of complaint handling by the eight Australian state and territory health complaint entities, based on an analysis of data from their annual reports. The analysis shows considerable variation between jurisdictions in the ways complaint data are defined, collected and recorded. Complaints from the public are an important accountability mechanism and open a window on service quality. The lack of a national approach leads to fragmentation of complaint data and a lost opportunity to use national data to assist policy development and identify the main areas causing consumers to complain. We need a national approach to complaints data collection in order to better respond to patients' concerns.

Relevance:

30.00%

Publisher:

Abstract:

Current diagnostic methods for assessing the severity of articular cartilage degenerative conditions, such as osteoarthritis, are inadequate. There is also a lack of techniques that can be used for real-time evaluation of the tissue during surgery to inform treatment decisions and eliminate subjectivity. This book, derived from Dr Afara's doctoral research, presents a scientific framework based on near infrared (NIR) spectroscopy for facilitating the non-destructive evaluation of articular cartilage health relative to its structural, functional, and mechanical properties. This development is a component of ongoing research on advanced endoscopic diagnostic techniques in the Articular Cartilage Biomechanics Research Laboratory of Professor Adekunle Oloyede at Queensland University of Technology (QUT), Brisbane, Australia.

Relevance:

30.00%

Publisher:

Abstract:

The primary objective of this study is to develop a robust queue estimation algorithm for motorway on-ramps. Real-time queue information is a vital input for dynamic queue management on metered on-ramps. Accurate and reliable queue information enables the on-ramp queue to be managed adaptively to the actual queue size, and thus minimises the adverse impacts of queue flush while increasing the benefit of ramp metering. The proposed algorithm is developed within the Kalman filter framework. The fundamental conservation model is used to predict the system state (queue size) from the flow-in and flow-out measurements. The prediction is then updated through the measurement equation using the time occupancies from mid-link and link-entrance loop detectors. This study also proposes a novel single point correction method, which resets the estimated system state to eliminate the counting errors that accumulate over time. In the performance evaluation, the proposed algorithm demonstrated accurate and reliable performance and consistently outperformed the benchmark Single Occupancy Kalman filter (SOKF) method. The improvements over SOKF average 62% and 63% in terms of estimation accuracy (MAE) and reliability (RMSE), respectively. The benefit of the algorithm's innovative concepts is well justified by the improved estimation performance in congested ramp traffic conditions, where long queues may significantly compromise the benchmark algorithm's performance.
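A minimal sketch of the prediction/update cycle described above is given below: the conservation model predicts the queue from flow counts, and the estimate is then corrected with an occupancy-derived queue measurement. The noise variances and the assumed occupancy-to-queue measurement are placeholders, and the single point correction step is omitted, so this is an illustration of the framework rather than the calibrated algorithm.

```python
# Scalar Kalman filter on ramp queue size: predict by vehicle conservation,
# then correct with a queue measurement derived from detector occupancy.
# Variances and measurements below are assumed placeholders for illustration.

def kalman_queue(flow_in, flow_out, occupancy_queue_meas,
                 q0=0.0, p0=1.0, process_var=4.0, meas_var=25.0):
    """Return the filtered queue-size estimates (vehicles) per interval."""
    q, p, estimates = q0, p0, []
    for f_in, f_out, z in zip(flow_in, flow_out, occupancy_queue_meas):
        # Prediction: vehicle conservation on the ramp
        q_pred = q + f_in - f_out
        p_pred = p + process_var
        # Update: blend the prediction with the occupancy-based queue measurement
        k = p_pred / (p_pred + meas_var)
        q = q_pred + k * (z - q_pred)
        p = (1.0 - k) * p_pred
        estimates.append(q)
    return estimates

print(kalman_queue(flow_in=[5, 6, 7, 4], flow_out=[4, 4, 5, 5],
                   occupancy_queue_meas=[2.0, 4.5, 6.0, 5.0]))
```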

Relevance:

30.00%

Publisher:

Abstract:

The primary objective of this study is to develop a robust queue estimation algorithm for motorway on-ramps. Real-time queue information is the most vital input for dynamic queue management that can treat long queues on metered on-ramps in a more sophisticated manner. The proposed algorithm is developed within the Kalman filter framework. The fundamental conservation model is used to predict the system state (queue size) from the flow-in and flow-out measurements. The prediction is then updated through the measurement equation using the time occupancies from mid-link and link-entrance loop detectors. This study also proposes a novel single point correction method, which resets the estimated system state to eliminate the counting errors that accumulate over time. In the performance evaluation, the proposed algorithm demonstrated accurate and reliable performance and consistently outperformed the benchmark Single Occupancy Kalman filter (SOKF) method. The improvements over SOKF average 62% and 63% in terms of estimation accuracy (MAE) and reliability (RMSE), respectively. The benefit of the algorithm's innovative concepts is well justified by the improved estimation performance in congested ramp traffic conditions, where long queues may significantly compromise the benchmark algorithm's performance.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To estimate the time spent by researchers preparing grant proposals, and to examine whether spending more time increases the chances of success. Design: Observational study. Setting: The National Health and Medical Research Council (NHMRC) of Australia. Participants: Researchers who submitted one or more NHMRC Project Grant proposals in March 2012. Main outcome measures: Total researcher time spent preparing proposals; funding success as predicted by the time spent. Results: The NHMRC received 3727 proposals, of which 3570 were reviewed and 731 (21%) were funded. Among our 285 participants, who submitted 632 proposals, 21% were successful. Preparing a new proposal took an average of 38 working days of researcher time and a resubmitted proposal took 28 working days, an overall average of 34 days per proposal. An estimated 550 working years of researchers' time (95% CI 513 to 589) was spent preparing the 3727 proposals, which translates into annual salary costs of AU$66 million. More time spent preparing a proposal did not increase the chances of success for the lead researcher (prevalence ratio (PR) of success for a 10-day increase = 0.91, 95% credible interval 0.78 to 1.04) or other researchers (PR = 0.89, 95% CI 0.67 to 1.17). Conclusions: Considerable time is spent preparing NHMRC Project Grant proposals. As success rates are historically 20-25%, much of this time has no immediate benefit to either the researcher or society, and there are large opportunity costs in lost research output. The application process could be shortened so that only information relevant for peer review, not administration, is collected. This would have little impact on the quality of peer review, and the time saved could be reinvested into research.
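The headline burden estimate can be reproduced approximately with back-of-envelope arithmetic. The figure of 230 working days per year is an assumption made here for the check; it is not stated in the abstract.

```python
# Back-of-envelope check of the abstract's headline figures.
proposals = 3727
avg_days_per_proposal = 34          # reported overall average per proposal
working_days_per_year = 230         # assumption made here for the check

total_days = proposals * avg_days_per_proposal
researcher_years = total_days / working_days_per_year
print(f"{total_days:,} working days is about {researcher_years:.0f} researcher-years")
# roughly 550 researcher-years, consistent with the reported estimate
```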

Relevance:

30.00%

Publisher:

Abstract:

Health complaint commissions in Australia: time for a national approach

• There is considerable variation between jurisdictions in the ways complaint data are defined, collected and recorded by the health complaint commissions.
• Complaints from the public are an important accountability mechanism and an indicator of service quality.
• The lack of a consistent approach leads to fragmentation of complaint data and a lost opportunity to use national data to assist policy development and identify the main areas causing consumers to complain.
• We need a national approach to complaints data collection by the health complaint commissions in order to better respond to patients' concerns.

Relevance:

30.00%

Publisher:

Abstract:

Lean strategies have been developed to eliminate or reduce waste and thus improve operational efficiency in a manufacturing environment. In practice, however, manufacturers encounter difficulties in selecting appropriate lean strategies within their resource constraints and in quantitatively evaluating the perceived value of manufacturing waste reduction. This paper presents a methodology developed to quantitatively evaluate the contribution of lean strategies, selected within the manufacturer's resource (time) constraints, to reducing manufacturing wastes. A mathematical model has been developed for evaluating the perceived value of lean strategies for manufacturing waste reduction, and a step-by-step methodology is provided for selecting appropriate lean strategies to improve manufacturing performance within resource constraints. A computer program was developed in MATLAB to find the optimum solution. The proposed methodology and the developed model have been validated with the help of a case study. A 'lean strategy-wastes' correlation matrix is proposed to establish the relationship between manufacturing wastes and lean strategies. Using the correlation matrix and applying the proposed methodology and the developed mathematical model, the authors obtained an optimised perceived value of waste reduction achievable by implementing appropriate lean strategies within a manufacturer's resource constraints. Results also demonstrate that the perceived value of manufacturing waste reduction can change significantly depending on the policies and product strategy adopted by a manufacturer. The proposed methodology can also be used in dynamic situations by changing the inputs to the program developed in MATLAB. By identifying appropriate lean strategies for specific manufacturing wastes, a manufacturer can better prioritise implementation efforts and resources to maximise the success of implementing lean strategies in their organisation.
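The selection step can be illustrated with a small optimisation sketch: given an assumed 'lean strategy-wastes' correlation matrix and per-strategy time costs, pick the subset of strategies that maximises the perceived value of waste reduction within a time budget. The strategies, scores, costs and budget below are invented placeholders, and the sketch uses brute force in Python rather than the paper's MATLAB program.

```python
# Hypothetical strategy-waste scores and time costs; brute-force selection of
# the best subset of lean strategies within a time budget (illustration only).
from itertools import combinations

strategies = {          # strategy: (time cost, {waste: contribution score})
    "5S":     (3, {"motion": 0.8, "defects": 0.2}),
    "Kanban": (5, {"inventory": 0.9, "overproduction": 0.6}),
    "SMED":   (4, {"waiting": 0.7, "overprocessing": 0.3}),
    "TPM":    (6, {"defects": 0.8, "waiting": 0.4}),
}
time_budget = 9

def perceived_value(subset):
    """Total waste-reduction score contributed by the chosen strategies."""
    return sum(score for name in subset for score in strategies[name][1].values())

best = max(
    (c for r in range(1, len(strategies) + 1) for c in combinations(strategies, r)
     if sum(strategies[n][0] for n in c) <= time_budget),
    key=perceived_value,
)
print(best, round(perceived_value(best), 2))
```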

Relevance:

30.00%

Publisher:

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting those which need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
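As an illustration of the content-analysis and user-profiling filtering described for the first paper, the sketch below scores incoming tweets by topical keywords and simple urgency cues, with a boost for authoritative authors. The keywords, weights, boost and example tweets are invented assumptions, not the panel's actual coding scheme.

```python
# Toy tweet scorer: topical keyword weights plus urgency cues, with a simple
# user-profiling boost for authoritative accounts. All values are invented.
TOPIC_KEYWORDS = {"flood": 2.0, "evacuate": 3.0, "trapped": 4.0, "road closed": 2.5}
URGENCY_CUES = {"now", "urgent", "help", "please"}

def score_tweet(text: str, authoritative_author: bool = False) -> float:
    """Higher scores indicate tweets more likely to need a responder's attention."""
    t = text.lower()
    score = sum(w for kw, w in TOPIC_KEYWORDS.items() if kw in t)
    score += sum(1.0 for cue in URGENCY_CUES if cue in t.split())
    if authoritative_author:          # boost known-reliable sources
        score *= 1.5
    return score

tweets = [
    ("Family trapped on roof, please help now #flood", False),
    ("Stay safe everyone, thinking of you all", False),
    ("Road closed at Oxley due to flood water", True),
]
for text, authoritative in tweets:
    print(round(score_tweet(text, authoritative), 1), text)
```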