939 results for least common subgraph algorithm


Relevance: 20.00%

Abstract:

Most statistical methods use hypothesis testing. Analysis of variance, regression, discrete choice models, contingency tables, and other analysis methods commonly used in transportation research share hypothesis testing as the means of making inferences about the population of interest. Although hypothesis testing has been a cornerstone of empirical research for many years, various aspects of hypothesis tests are commonly misapplied, misinterpreted, and ignored, by novices and expert researchers alike. At first glance, hypothesis testing appears straightforward: develop the null and alternative hypotheses, compute the test statistic to compare to a standard distribution, estimate the probability of rejecting the null hypothesis, and then make claims about the importance of the finding. This is an oversimplification of the process of hypothesis testing. Hypothesis testing as applied in empirical research is examined here. The reader is assumed to have a basic knowledge of the role of hypothesis testing in various statistical methods. Through the use of an example, the mechanics of hypothesis testing are first reviewed. Then, five precautions surrounding the use and interpretation of hypothesis tests are developed; examples of each are provided to demonstrate how errors are made, and solutions are identified so similar errors can be avoided. Remedies are provided for common errors, and conclusions are drawn on how to use the results of this paper to improve the conduct of empirical research in transportation.
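As a concrete illustration of the mechanics listed in this abstract (state the hypotheses, compute a test statistic, compare its p-value to a significance level), a minimal sketch in Python might look like the following; the travel-time samples, the two-sample t-test choice, and the 5% level are invented for the example, not taken from the paper, and SciPy is assumed to be available.

```python
# Minimal sketch of the hypothesis-testing mechanics described above,
# using made-up travel-time samples (not data from the paper).
from scipy import stats

group_a = [31.2, 29.8, 33.1, 30.5, 32.0, 28.9]   # hypothetical travel times, route A (minutes)
group_b = [34.6, 33.9, 35.2, 32.8, 36.1, 34.0]   # hypothetical travel times, route B (minutes)

# H0: the two routes have equal mean travel time; H1: the means differ.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05  # conventional significance level (an assumption for the example)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0 at the 5% level.")
else:
    print("Fail to reject H0 at the 5% level.")
```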

Relevance: 20.00%

Abstract:

Digital collections are growing exponentially in size as the information age takes a firm grip on all aspects of society. As a result Information Retrieval (IR) has become an increasingly important area of research. It promises to provide new and more effective ways for users to find information relevant to their search intentions. Document clustering is one of the many tools in the IR toolbox and is far from being perfected. It groups documents that share common features. This grouping allows a user to quickly identify relevant information. If these groups are misleading then valuable information can accidentally be ignored. Therefore, the study and analysis of the quality of document clustering is important. With more and more digital information available, the performance of these algorithms is also of interest. An algorithm with a time complexity of O(n²) can quickly become impractical when clustering a corpus containing millions of documents. Therefore, the investigation of algorithms and data structures to perform clustering in an efficient manner is vital to its success as an IR tool. Document classification is another tool frequently used in the IR field. It predicts categories of new documents based on an existing database of (document, category) pairs. Support Vector Machines (SVM) have been found to be effective when classifying text documents. As the algorithms for classification are both efficient and of high quality, the largest gains can be made from improvements to representation. Document representations are vital for both clustering and classification. Representations exploit the content and structure of documents. Dimensionality reduction can improve the effectiveness of existing representations in terms of quality and run-time performance. Research into these areas is another way to improve the efficiency and quality of clustering and classification results. Evaluating document clustering is a difficult task. Intrinsic measures of quality such as distortion only indicate how well an algorithm minimised a similarity function in a particular vector space. Intrinsic comparisons are inherently limited by the given representation and are not comparable between different representations. Extrinsic measures of quality compare a clustering solution to a “ground truth” solution. This allows comparison between different approaches. As the “ground truth” is created by humans it can suffer from the fact that not every human interprets a topic in the same manner. Whether a document belongs to a particular topic or not can be subjective.
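To make the clustering and extrinsic-evaluation ideas above concrete, here is a small sketch using scikit-learn (an assumed dependency): a toy corpus is clustered on a TF-IDF representation with k-means, and the result is scored with purity against an invented "ground truth" labelling. The documents, labels, and cluster count are all made up for illustration.

```python
# Sketch: TF-IDF document clustering plus a simple extrinsic quality measure
# (purity against a hand-made "ground truth"); the corpus is invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["rail freight scheduling", "train timetable optimisation",
        "breast cancer survivorship", "lymphoedema after surgery"]
truth = np.array([0, 0, 1, 1])             # hypothetical topic labels

X = TfidfVectorizer().fit_transform(docs)  # vector-space representation
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Purity: each cluster is credited only with its most frequent true label.
purity = sum(np.bincount(truth[labels == c]).max()
             for c in np.unique(labels)) / len(docs)
print("cluster labels:", labels, "purity:", purity)
```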

Relevance: 20.00%

Abstract:

Objective: Flood is the most common natural disaster in Australia and causes more loss of life than any other disaster. This article describes the incidence and causes of deaths directly associated with floods in contemporary Australia.

Methods: The present study compiled a database of flood fatalities in Australia for the period 1997–2008 inclusive. The data were derived from newspapers and historic accounts, as well as government and scientific reports. Assembled data include the date and location of fatalities, the age and gender of victims, and the circumstances of each death.

Results: At least 73 people died as a direct result of floods in Australia in the period 1997–2008. The largest numbers of fatalities occurred in New South Wales and Queensland. Most fatalities occurred during February, and among men (71.2%). People between the ages of 10 and 29 and those over 70 years were overrepresented among those who drowned. There is no evident decline in the number of deaths over time. 48.5% of fatalities were related to motor vehicle use, and 26.5% occurred as a result of inappropriate or high-risk behaviour during floods.

Conclusion: In modern developed countries with adequate emergency response systems and extensive resources, deaths that occur in floods are almost all eminently preventable. Over 90% of the deaths were caused by attempts to ford flooded waterways or by inappropriate situational conduct. Knowledge of the leading causes of flood fatalities should inform public awareness programmes and public safety and police enforcement activities.

Relevance: 20.00%

Abstract:

This paper describes the optimization of conductor size and of voltage regulator location and magnitude for long rural distribution lines. The optimization minimizes the lifetime cost of the lines, including capital costs and losses, while observing voltage drop and operational constraints, using a genetic algorithm (GA). The GA optimization is applied to a real Single Wire Earth Return (SWER) network in regional Queensland and results are presented.
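A minimal sketch of this style of GA follows, with an entirely invented cost model: the chromosome pairs a conductor choice with a regulator position, and the voltage-drop constraint is handled as a penalty on the lifetime-cost fitness. None of the figures below come from the paper, and the search is mutation-only for brevity.

```python
# Toy GA: choose a conductor size and regulator position to minimise
# lifetime cost subject to a voltage-drop limit. All numbers are invented.
import random

CONDUCTORS = [(1.0, 120.0), (1.5, 90.0), (2.2, 65.0)]  # (capital cost/km, loss cost) - hypothetical
LINE_KM = 100.0

def lifetime_cost(chrom):
    size_idx, reg_pos = chrom
    capital_per_km, loss_cost = CONDUCTORS[size_idx]
    cost = capital_per_km * LINE_KM + loss_cost
    # Toy voltage-drop model: a regulator further along the line helps more.
    v_drop = loss_cost * 0.1 * (1.0 - 0.4 * reg_pos / LINE_KM)
    if v_drop > 8.0:                      # constraint violated -> penalise
        cost += 1000.0 * (v_drop - 8.0)
    return cost

def mutate(chrom):
    size_idx, reg_pos = chrom
    if random.random() < 0.5:
        size_idx = random.randrange(len(CONDUCTORS))
    else:
        reg_pos = min(LINE_KM, max(0.0, reg_pos + random.uniform(-10.0, 10.0)))
    return (size_idx, reg_pos)

population = [(random.randrange(len(CONDUCTORS)), random.uniform(0.0, LINE_KM))
              for _ in range(20)]
for _ in range(50):                       # generations
    population.sort(key=lifetime_cost)
    parents = population[:10]             # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

best = min(population, key=lifetime_cost)
print("best chromosome:", best, "cost:", round(lifetime_cost(best), 1))
```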

Relevance: 20.00%

Abstract:

A statistical modeling method to accurately determine combustion chamber resonance is proposed and demonstrated. The method uses Markov chain Monte Carlo (MCMC) via the Metropolis-Hastings (MH) algorithm to yield a probability density function for the combustion chamber frequency and to find the best estimate of the resonant frequency, along with its uncertainty. The accurate determination of combustion chamber resonance is then used to investigate various engine phenomena, with appropriate uncertainty, for a range of engine cycles. It is shown that, when operating on various ethanol/diesel fuel combinations, a 20% substitution yields the least inter-cycle variability in combustion chamber resonance.
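A minimal Metropolis-Hastings sketch in the spirit of this abstract: a random-walk sampler draws a posterior over a resonant frequency from noisy per-cycle frequency estimates. The data, prior band, noise level, and proposal width are all assumptions made up for the example, not the paper's model.

```python
# Random-walk Metropolis-Hastings over a resonant frequency, given
# synthetic per-cycle frequency estimates (Hz). All numbers are invented.
import math
import random

data = [4120.0, 4135.5, 4098.2, 4151.0, 4110.7]   # hypothetical per-cycle estimates (Hz)
sigma = 25.0                                       # assumed measurement noise (Hz)

def log_post(f):
    # Flat prior on a plausible band plus a Gaussian likelihood.
    if not 3000.0 < f < 5000.0:
        return -math.inf
    return -sum((x - f) ** 2 for x in data) / (2 * sigma ** 2)

samples, f = [], 4000.0
for _ in range(20000):
    proposal = f + random.gauss(0.0, 10.0)         # random-walk proposal
    if math.log(random.random()) < log_post(proposal) - log_post(f):
        f = proposal                               # accept
    samples.append(f)

burned = samples[5000:]                            # discard burn-in
print("posterior mean: %.1f Hz" % (sum(burned) / len(burned)))
```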

Relevance: 20.00%

Abstract:

This investigation describes the prevalence of upper-body symptoms in a population-based sample of women with breast cancer (BC) and examines their relationships with upper-body function (UBF) and lymphoedema, as two clinically important sequelae. Australian women (n=287) with unilateral BC were assessed at three-monthly intervals, from six to 18 months post-surgery (PS). Participants reported the presence and intensity of upper-body symptoms on the treated side. Objective and self-reported UBF and lymphoedema (bioimpedance spectroscopy) were also assessed. Approximately 50% of women reported at least one moderate-to-extreme symptom at 6 and at 18 months PS. There was a significant relationship between symptoms and function (p<0.01), whereby perceived and objective function declined with an increasing number of symptoms present. Those with lymphoedema were more likely to report multiple symptoms, and the presence of symptoms at baseline increased the risk of lymphoedema (ORs>1.3, p=0.02), although the presence of symptoms explained only 5.5% of the variation in the odds of lymphoedema. Upper-body symptoms are common and persistent following breast cancer and have clinical ramifications, including reduced UBF and an increased risk of developing lymphoedema. However, the presence of symptoms is of limited value as a diagnostic indicator of lymphoedema.

Relevance: 20.00%

Abstract:

Railway services are now a major means of transportation in most countries around the world. With increasing populations and expanding commercial and industrial activities, a high quality of railway service is highly desirable. Train service usually varies with population activity throughout the day, and train coordination and service regulation are expected to meet daily passenger demand. Dwell-time control at stations and a fixed coasting point in each inter-station run are the current practices for regulating train service in most metro railway systems. However, flexible and efficient train control and operation is not always possible. Coast control is an economical approach to balancing run-time and energy consumption in railway operation when time is not critical, particularly at off-peak hours, since it reduces the energy consumption of train operation at the cost of certain compromises on the train schedule. The capability to identify the starting point for coasting according to the current traffic conditions provides the necessary flexibility for train operation. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting point(s) and investigates possible improvements in the fitness of genes. Single and multiple coasting point control with a simple GA are developed to obtain the solutions, and the corresponding train movement is examined. Further, a hierarchical genetic algorithm (HGA) is introduced to identify the number of coasting points required according to the traffic conditions, and the Minimum-Allele-Reserve-Keeper (MARK) is adopted as a genetic operator to achieve fitter solutions.
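The core trade-off here (an earlier coasting point saves traction energy but lengthens the run) can be illustrated with a toy speed-profile simulation; in a GA of the kind described, candidate coasting points would be the genes and a fitness function would combine the resulting run-time and energy. All dynamics constants below are invented for the sketch.

```python
# Toy speed-profile simulation for evaluating a candidate coasting point:
# the train motors up to a ceiling speed, coasts from the candidate point,
# then brakes into the next station. All constants are invented.
def run_profile(coast_at, distance=2000.0, dt=0.5):
    v, x, t, energy = 0.0, 0.0, 0.0, 0.0
    while x < distance:
        if distance - x <= v * v / 2.0:        # brake at ~1 m/s^2 into the platform
            a, power = -1.0, 0.0
        elif x < coast_at:                     # motoring phase before the coasting point
            a, power = (1.0, 40.0) if v < 22.0 else (0.0, 10.0)
        else:                                  # coasting: running resistance only
            a, power = -0.02, 0.0
        v = max(v + a * dt, 0.1)               # keep a small crawl speed so the loop ends
        x += v * dt
        t += dt
        energy += power * dt                   # power in kW, so energy in kJ
    return t, energy

for coast_at in (600.0, 1000.0, 1400.0):       # candidate coasting points (metres)
    t, e = run_profile(coast_at)
    print(f"coast at {coast_at:4.0f} m -> run-time {t:5.1f} s, energy {e:6.0f} kJ")
```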

Relevance: 20.00%

Abstract:

In general, simple and traditional methods are applied to resolve traffic conflicts at railway junctions. They are, however, either inefficient or computationally demanding. A simple genetic algorithm is presented that enables a search for a near-optimal resolution while meeting constraints on generation evolution and minimising the search time.
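As one way to picture the problem, a junction conflict can be reduced to choosing the order in which trains cross; the tiny mutation-only GA below searches over permutations to minimise total delay. The arrival times, junction occupation time, and GA settings are all invented and not drawn from the paper.

```python
# Tiny permutation GA for a junction conflict: the chromosome is the order
# in which trains cross, fitness is total delay. All numbers are invented.
import random

arrivals = [0, 10, 15, 20, 40]        # hypothetical arrival times at the junction (s)
CROSS = 30                            # time the junction is occupied per train (s)

def total_delay(order):
    t, delay = 0, 0
    for i in order:
        start = max(t, arrivals[i])   # wait until the junction is free
        delay += start - arrivals[i]
        t = start + CROSS
    return delay

def mutate(order):
    a, b = random.sample(range(len(order)), 2)
    order = list(order)
    order[a], order[b] = order[b], order[a]   # swap two positions
    return order

population = [random.sample(range(len(arrivals)), len(arrivals)) for _ in range(20)]
for _ in range(100):                  # generations
    population.sort(key=total_delay)
    population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(10)]

best = min(population, key=total_delay)
print("crossing order:", best, "total delay:", total_delay(best), "s")
```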

Relevance: 20.00%

Abstract:

A new explicit rate allocation algorithm is proposed for achieving generic weight-proportional max-min (GWPMM) fairness in asynchronous transfer mode (ATM) available bit rate services. This algorithm scales well, with a fixed computational complexity of O(1), and can accurately realise GWPMM fair rate allocation in an ATM network.
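For orientation, the fairness objective itself can be sketched with the standard centralised water-filling computation of weight-proportional max-min rates on a single link; this is a textbook illustration of the target allocation, not the paper's O(1) distributed switch algorithm, and the demands and weights are invented.

```python
# Weight-proportional max-min fairness on one link: capacity is shared in
# proportion to weights, capped at each connection's demand, with leftover
# capacity redistributed among the unsatisfied connections.
def wpmm(capacity, demands, weights):
    rates = [0.0] * len(demands)
    active = set(range(len(demands)))
    while active and capacity > 1e-9:
        total_w = sum(weights[i] for i in active)
        share = {i: capacity * weights[i] / total_w for i in active}
        bottlenecked = {i for i in active if demands[i] - rates[i] <= share[i]}
        if not bottlenecked:               # nobody is demand-limited: split and stop
            for i in active:
                rates[i] += share[i]
            break
        for i in bottlenecked:             # satisfy demand-limited connections fully
            capacity -= demands[i] - rates[i]
            rates[i] = demands[i]
        active -= bottlenecked
    return rates

print(wpmm(100.0, demands=[80.0, 30.0, 10.0], weights=[2.0, 1.0, 1.0]))
# -> [60.0, 30.0, 10.0]: the unsatisfied connection absorbs the leftover capacity.
```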

Relevance: 20.00%

Abstract:

This paper addresses the problem of constructing consolidated business process models out of collections of process models that share common fragments. The paper considers the construction of unions of multiple models (called merged models) as well as intersections (called digests). Merged models are intended for analysts who wish to create a model that subsumes a collection of process models - typically representing variants of the same underlying process - with the aim of replacing the variants with the merged model. Digests, on the other hand, are intended for analysts who wish to identify the most recurring fragments across a collection of process models, so that they can focus their efforts on optimizing these fragments. The paper presents an algorithm for computing merged models and an algorithm for extracting digests from a merged model. The merging and digest extraction algorithms have been implemented and tested against collections of process models taken from multiple application domains. The tests show that the merging algorithm produces compact models and scales up to process models containing hundreds of nodes. Furthermore, a case study conducted in a large insurance company has demonstrated the usefulness of the merging and digest extraction operators in a practical setting.
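A highly simplified illustration of the digest idea: if each process variant is reduced to a set of labelled edges, the digest keeps the fragments that recur across most variants. The real operators described in the paper work on full (merged) process graphs; the insurance-claim variants and the recurrence threshold below are invented.

```python
# Toy "digest": count how often each labelled edge appears across process
# variants and keep the edges that recur in most of them.
from collections import Counter

variants = [
    {("Receive claim", "Check policy"), ("Check policy", "Assess damage"),
     ("Assess damage", "Pay out")},
    {("Receive claim", "Check policy"), ("Check policy", "Reject claim")},
    {("Receive claim", "Check policy"), ("Check policy", "Assess damage"),
     ("Assess damage", "Reject claim")},
]

edge_freq = Counter(edge for variant in variants for edge in variant)
threshold = 2                        # keep fragments present in at least 2 of 3 variants
digest = {edge for edge, n in edge_freq.items() if n >= threshold}
print(digest)
```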

Relevance: 20.00%

Abstract:

Osteoporosis is the most common bone disease. Low levels of oestrogens or testosterone are risk factors for primary osteoporosis. The most common cause of secondary osteoporosis is glucocorticoid treatment, but there are many other secondary causes of osteoporosis. Osteoporosis can be secondary to anti-oestrogen treatment for hormone-sensitive breast cancer and to androgen-deprivation therapy for prostate cancer. Zoledronic acid is the most potent bisphosphonate at inhibiting bone resorption. In osteoporosis, zoledronic acid increases bone mineral density for at least a year after a single intravenous administration. The efficacy and safety of extended release (once-yearly) zoledronic acid in the treatment of osteoporosis are reviewed.

Relevance: 20.00%

Abstract:

In this paper, weighted fair rate allocation for the ATM available bit rate (ABR) service is discussed with attention to the minimum cell rate (MCR). Weighted fairness with MCR guarantee has been discussed recently in the literature. In those studies, each ABR virtual connection (VC) is first allocated its MCR, and the remaining available bandwidth is then shared among ABR VCs according to their weights. For the weighted fairness defined in this paper, the bandwidth is first allocated according to each VC's weight; if a VC's weighted share is less than its MCR, it is allocated its MCR instead of the weighted share. This weighted fairness with MCR guarantee is referred to as extended weighted (EXW) fairness. Certain theoretical issues related to EXW, such as its global solution and bottleneck structure, are first discussed in the paper. A distributed explicit rate allocation algorithm is then proposed to achieve EXW fairness in ATM networks. The algorithm is a general-purpose explicit rate algorithm in the sense that it can realise almost all of the fairness principles proposed for ABR so far, with only minor modifications needed.
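The EXW rule stated above (share by weight, but lift any VC whose weighted share falls below its MCR up to that MCR, re-sharing what remains) can be sketched for a single link as follows. The link capacity, weights, and MCRs are invented, and this centralised sketch is only the fairness definition, not the paper's distributed explicit rate algorithm.

```python
# Single-link sketch of EXW fairness: weight-proportional shares, with any VC
# whose share falls below its MCR pinned at its MCR and the rest re-shared.
def exw(capacity, weights, mcrs):
    rates = [0.0] * len(weights)
    pinned = set()                              # VCs fixed at their MCR
    while True:
        free = capacity - sum(mcrs[i] for i in pinned)
        total_w = sum(w for i, w in enumerate(weights) if i not in pinned)
        newly_pinned = False
        for i in range(len(weights)):
            if i in pinned:
                rates[i] = mcrs[i]
                continue
            share = free * weights[i] / total_w
            if share < mcrs[i]:                 # weighted share below MCR -> pin at MCR
                pinned.add(i)
                newly_pinned = True
            else:
                rates[i] = share
        if not newly_pinned:
            return rates

print(exw(100.0, weights=[5.0, 1.0, 1.0], mcrs=[10.0, 20.0, 5.0]))
# -> roughly [66.7, 20.0, 13.3]: VC 1 is lifted to its MCR, the others share the rest.
```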

Relevance: 20.00%

Abstract:

A high-performance, low-computational-complexity rate-based flow control algorithm which can avoid congestion and achieve fairness is important to the ATM available bit rate service. The explicit rate allocation algorithm proposed by Kalampoukas et al. is designed to achieve max–min fairness in ATM networks. It has several attractive features, such as a fixed computational complexity of O(1) and guaranteed convergence to max–min fairness. In this paper, certain drawbacks of the algorithm, such as the severe overload of an outgoing link during the transient period and the non-conforming use of the current cell rate field in a resource management cell, are identified and analysed; a new algorithm which overcomes these drawbacks is proposed. The proposed algorithm also simplifies the rate computation. Compared with Kalampoukas's algorithm, it has better performance in terms of congestion avoidance and smoothness of rate allocation.

Relevance: 20.00%

Abstract:

Balancing the provision of a high quality of service against a tight budget is one of the biggest challenges for most metro railway operators around the world. Conventionally, one possible approach for the operator to adjust the timetable is to alter the stop time at stations, if other system constraints, such as traction equipment characteristics, are not taken into account. Yet this is not an effective, flexible and economical method, because the run-time of a train cannot simply be extended without limit, and a balance between run-time and energy consumption has to be maintained. Modification or installation of a new signalling system not only increases the capital cost, but also affects normal train service. Therefore, in order to procure a more effective, flexible and economical means to improve the quality of service, optimisation of train performance by coasting point identification has become more attractive and popular. However, identifying the necessary starting points for coasting under the constraints of current service conditions is no simple task, because train movement is affected by a large number of factors, most of which are non-linear and inter-dependent. This paper presents an application of genetic algorithms (GA) to search for the appropriate coasting points and investigates possible improvements in computation time and fitness of genes.

Relevance: 20.00%

Abstract:

This paper proposes a train movement model with fixed runtime that can be employed to find feasible control strategies for a single train along an inter-city railway line. The objective of the model is to minimize arrival delays at each station along the line. However, train movement is a typical nonlinear problem under complex running environments and differing requirements. A heuristic algorithm is developed to solve the problem, and the simulation results show that the train can overcome the disturbance caused by a delay and coordinate its operation strategies to ensure punctual arrival at the destination. The developed algorithm can also be used to evaluate the running reliability of trains in scheduled timetables.
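The flavour of such a control strategy can be sketched very simply: after a disturbance, choose a cruising speed for the remainder of the section so the train still meets its scheduled arrival, capped by the line speed limit. The numbers below are invented, and the paper's model of course handles the full nonlinear dynamics rather than this single-speed approximation.

```python
# Toy delay-recovery rule: pick the cruising speed needed to meet the
# scheduled arrival over the remaining distance, capped by the speed limit.
def recovery_speed(remaining_km, scheduled_arrival_s, now_s, v_max_kmh=140.0):
    remaining_s = max(scheduled_arrival_s - now_s, 1.0)   # time left until the slot
    required_kmh = remaining_km / (remaining_s / 3600.0)  # speed needed to be on time
    return min(required_kmh, v_max_kmh)                   # cannot exceed the line limit

# Example: a 90 s late departure on a 30 km section scheduled to take 20 minutes.
print(f"{recovery_speed(30.0, scheduled_arrival_s=1200.0, now_s=90.0):.0f} km/h")
```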