988 results for Business travel


Relevance:

20.00%

Publisher:

Abstract:

This paper investigates quality of service (QoS) and resource productivity implications of transit route passenger loading and travel time. It highlights the value of occupancy load factor as a direct passenger comfort QoS measure. Automatic Fare Collection (AFC) data for a premium radial bus route in Brisbane, Australia, is used to investigate the time-series correlation between occupancy load factor and passenger average travel time. Correlation is strong across the entire span of service in both directions. Passengers tend to be making longer, peak-direction commuter trips under significantly less comfortable conditions than off-peak. The Transit Capacity and Quality of Service Manual uses segment-based load factor as a measure of onboard loading comfort QoS. This paper provides additional insight into QoS by relating the two route-based dimensions of occupancy load factor and passenger average travel time in a two-dimensional format, from both the passenger's and the operator's perspectives. Future research will apply Value of Time to QoS measurement, reflecting perceived passenger comfort through crowding and average time spent onboard. This would also assist econometric modeling of transit service quality. The methodology can be readily applied in a practical setting where AFC data for fixed-schedule routes is available. The study outcomes also provide valuable research and development directions.
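As a rough illustration of the correlation step this abstract describes, the sketch below computes the time-series correlation between occupancy load factor and average passenger travel time from trip-level AFC records. The column names, the half-hourly aggregation, and the use of seated capacity to define load factor are assumptions made for the example, not details taken from the paper.

```python
# Hypothetical sketch of the correlation step described in the abstract above.
# Column names and the 30-minute aggregation are assumptions, not from the paper.
import pandas as pd

def load_factor_travel_time_correlation(afc: pd.DataFrame, seats: int) -> float:
    """Correlate occupancy load factor with average passenger travel time.

    `afc` is assumed to hold one row per passenger trip with columns:
    'departure' (pandas datetime), 'on_board_time_min' (minutes on the vehicle),
    and 'passengers_on_board' (occupancy recorded for that trip).
    """
    afc = afc.copy()
    afc["load_factor"] = afc["passengers_on_board"] / seats
    # Aggregate to a time series across the span of service (half-hour bins).
    by_period = afc.groupby(pd.Grouper(key="departure", freq="30min")).agg(
        avg_travel_time=("on_board_time_min", "mean"),
        avg_load_factor=("load_factor", "mean"),
    ).dropna()
    # Strength of the relationship between crowding (load factor) and trip length.
    return by_period["avg_travel_time"].corr(by_period["avg_load_factor"])
```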

Relevance:

20.00%

Publisher:

Abstract:

Traffic incidents are recognised as one of the key sources of non-recurrent congestion that often leads to a reduction in travel time reliability (TTR), a key metric of roadway performance. A method is proposed here to quantify the impacts of traffic incidents on TTR on freeways. The method uses historical data to establish recurrent speed profiles and identifies non-recurrent congestion events based on their negative impact on speeds. The locations and times of incidents are used to identify incidents among non-recurrent congestion events. Buffer time is employed to measure TTR. Extra buffer time is defined as the extra delay caused by traffic incidents. This reliability measure indicates how much extra travel time is required by travellers to arrive at their destination on time with 95% certainty in the case of an incident, over and above the travel time that would have been required under recurrent conditions. An extra buffer time index (EBTI) is defined as the ratio of extra buffer time to recurrent travel time, with zero being the best case (no delay). A Tobit model is used to identify and quantify factors that affect EBTI using a selected freeway segment in the Southeast Queensland network in Australia. Both fixed and random parameter Tobit specifications are tested. The estimation results reveal that models with random parameters offer a superior statistical fit for all types of incidents, suggesting the presence of unobserved heterogeneity across segments. Which factors influence EBTI depends on the type of incident. In addition, changes in TTR as a result of traffic incidents are related to the characteristics of the incidents (multiple vehicles involved, incident duration, major incidents, etc.) and traffic characteristics.
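The reliability measures described here lend themselves to a small worked sketch. The code below computes an extra buffer time index under stated assumptions: recurrent travel time is taken as the median of incident-free observations and the 95th percentile of incident-affected travel times is used for the buffer; the paper's exact definitions may differ.

```python
# Minimal sketch of the EBTI measure described above. The choice of the median
# as "recurrent travel time" and the 95th percentile are illustrative assumptions.
import numpy as np

def extra_buffer_time_index(incident_tt, recurrent_tt, percentile=95):
    """EBTI = extra buffer time / recurrent travel time.

    incident_tt  : travel times (minutes) observed while an incident was active
    recurrent_tt : travel times (minutes) observed under recurrent conditions
    """
    recurrent = np.median(recurrent_tt)                # assumed recurrent travel time
    p95_incident = np.percentile(incident_tt, percentile)
    extra_buffer = max(p95_incident - recurrent, 0.0)  # extra delay attributable to the incident
    return extra_buffer / recurrent                    # 0.0 is the best case (no delay)

# Example: recurrent trips of ~10 min vs. incident-affected trips up to ~18 min.
print(extra_buffer_time_index([12, 14, 16, 18, 17], [9, 10, 10, 11, 10]))
```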

Relevance:

20.00%

Publisher:

Abstract:

Existing process mining techniques provide summary views of overall process performance over a period of time, allowing analysts to identify bottlenecks and associated performance issues. However, these tools are not designed to help analysts understand how bottlenecks form and dissolve over time, nor how the formation and dissolution of bottlenecks, and the associated fluctuations in demand and capacity, affect overall process performance. This paper presents an approach to analyze the evolution of process performance via a notion of Staged Process Flow (SPF). An SPF abstracts a business process as a series of queues corresponding to stages. The paper defines a number of stage characteristics and visualizations that collectively allow process performance evolution to be analyzed from multiple perspectives. The approach has been implemented in the ProM process mining framework. The paper demonstrates the advantages of the SPF approach over state-of-the-art process performance mining tools using two publicly available real-life event logs.
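To make the staged-queue idea concrete, the sketch below counts how many cases enter each stage per day from an event log, which is one of the simplest stage characteristics one could track over time. The event-log columns and the activity-to-stage mapping are assumptions for illustration; this is not the ProM implementation described in the paper.

```python
# Illustrative sketch in the spirit of the staged-queue (SPF) view described above.
# The columns 'case', 'activity', 'timestamp' and the stage mapping are assumed.
import pandas as pd

def stage_arrivals_per_day(log: pd.DataFrame, stage_of: dict) -> pd.DataFrame:
    """Count how many cases enter each stage per day (the demand seen by each queue).

    `log` is assumed to have columns 'case', 'activity', 'timestamp' (pandas datetime);
    `stage_of` maps activity names to stage names.
    """
    log = log.copy()
    log["stage"] = log["activity"].map(stage_of)
    log = log.dropna(subset=["stage"])
    # A case "enters" a stage at its first event belonging to that stage.
    entries = log.groupby(["case", "stage"], as_index=False)["timestamp"].min()
    entries["day"] = entries["timestamp"].dt.floor("D")
    return entries.pivot_table(index="day", columns="stage",
                               values="case", aggfunc="nunique").fillna(0)
```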

Relevance:

20.00%

Publisher:

Abstract:

Overprocessing waste occurs in a business process when effort is spent in a way that adds value neither to the customer nor to the business. Previous studies have identified a recurrent overprocessing pattern in business processes involving so-called "knockout checks": activities that classify a case as "accepted" or "rejected", such that an accepted case proceeds forward, while a rejected case is cancelled and all work performed on it is considered unnecessary. Thus, when a knockout check rejects a case, the effort spent on earlier checks becomes overprocessing waste. Traditional process redesign methods propose to order knockout checks according to their mean effort and rejection rate. This paper presents a more fine-grained approach in which knockout checks are ordered at runtime based on predictive machine learning models. Experiments on two real-life processes show that this predictive approach outperforms traditional methods while incurring minimal runtime overhead.
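The traditional redesign heuristic mentioned above can be illustrated in a few lines: run the checks that are most likely to reject a case per unit of effort first, so that a knocked-out case wastes as little prior work as possible. The check names and figures below are invented for the example.

```python
# Sketch of the traditional (static) knockout-ordering heuristic: sort checks by
# rejection rate per unit of mean effort, most "selective per minute" first.
def order_knockout_checks(checks):
    """checks: list of dicts with 'name', 'rejection_rate' (0..1), 'mean_effort' (e.g. minutes)."""
    return sorted(checks, key=lambda c: c["rejection_rate"] / c["mean_effort"], reverse=True)

checks = [
    {"name": "credit history check",  "rejection_rate": 0.30, "mean_effort": 20.0},
    {"name": "identity verification", "rejection_rate": 0.05, "mean_effort": 2.0},
    {"name": "collateral appraisal",  "rejection_rate": 0.10, "mean_effort": 60.0},
]
for check in order_knockout_checks(checks):
    print(check["name"])

# The predictive approach in the paper replaces these static rates with per-case
# rejection probabilities estimated at runtime by machine learning models.
```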

Relevance:

20.00%

Publisher:

Abstract:

We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historical data, survey data, management planning data, expert knowledge, and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained, along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations, such as planning and control.
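One simple way to convert count-style evidence into the Bayesian probability form mentioned above is additive smoothing, sketched below. The smoothing prior and the example figures are assumptions for illustration and are not taken from the paper.

```python
# Hedged sketch of converting heterogeneous count evidence (historic records,
# survey tallies, expert-elicited counts) into a probability distribution.
# The pseudo-count prior and example numbers are illustrative assumptions only.
def to_probabilities(counts, prior=1.0):
    """Turn observed counts per outcome into a probability distribution.

    counts : dict mapping outcome -> observed count; sparse or incomplete
             evidence simply contributes smaller counts.
    prior  : pseudo-count added to every outcome so incomplete data never
             yields zero probabilities.
    """
    total = sum(counts.values()) + prior * len(counts)
    return {outcome: (n + prior) / total for outcome, n in counts.items()}

# Example: evidence about train delays from historic and survey data.
print(to_probabilities({"on_time": 180, "minor_delay": 15, "major_delay": 5}))
```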

Relevance:

20.00%

Publisher:

Abstract:

This research contributes a formal framework for evaluating whether existing compliance management frameworks (CMFs) can model and reason about various types of normative requirements. The framework can be used to determine the level of coverage of concepts provided by CMFs, to establish mappings between CMF languages and the semantics of the normative concepts, and to evaluate the suitability of a CMF for issuing a certification of compliance. The developed framework is independent of any specific formalism and has been formally defined and validated through example mappings of CMFs.

Relevance:

20.00%

Publisher:

Abstract:

This article presents a method for checking the conformance between an event log capturing the actual execution of a business process, and a model capturing its expected or normative execution. Given a business process model and an event log, the method returns a set of statements in natural language describing the behavior allowed by the process model but not observed in the log and vice versa. The method relies on a unified representation of process models and event logs based on a well-known model of concurrency, namely event structures. Specifically, the problem of conformance checking is approached by folding the input event log into an event structure, unfolding the process model into another event structure, and comparing the two event structures via an error-correcting synchronized product. Each behavioral difference detected in the synchronized product is then verbalized as a natural language statement. An empirical evaluation shows that the proposed method scales up to real-life datasets while producing more concise and higher-level difference descriptions than state-of-the-art conformance checking methods.
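As a heavily simplified stand-in for the comparison described here, the sketch below diffs the directly-follows relations observed in a log against those allowed by a set of model traces and verbalizes each difference as a sentence. It deliberately replaces the paper's event structures and error-correcting synchronized product with a much cruder trace-level comparison, purely for illustration; all names are invented.

```python
# Illustration-only stand-in for the conformance comparison described above:
# diff directly-follows relations between log traces and model traces, then
# verbalize each difference as a natural language statement.
def directly_follows(traces):
    """Collect pairs (a, b) where activity b directly follows a in some trace."""
    return {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}

def verbalize_differences(log_traces, model_traces):
    log_df, model_df = directly_follows(log_traces), directly_follows(model_traces)
    statements = []
    for a, b in sorted(model_df - log_df):
        statements.append(f"The model allows '{b}' to follow '{a}', but this was never observed in the log.")
    for a, b in sorted(log_df - model_df):
        statements.append(f"In the log, '{b}' follows '{a}', which the model does not allow.")
    return statements

print("\n".join(verbalize_differences(
    log_traces=[["register", "check credit", "reject"]],
    model_traces=[["register", "check credit", "approve"]],
)))
```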