999 results for MINIMUM CUT


Relevance: 80.00%

Abstract:

In the real world there are many problems in networks of networks (NoNs) that can be abstracted as a so-called minimum interconnection cut problem, which is fundamentally different from the classical minimum cut problems of graph theory. It is therefore desirable to develop an efficient and effective algorithm for this problem. In this paper we formulate the problem in graph-theoretic terms, transform it into a multi-objective, multi-constraint combinatorial optimization problem, and propose a hybrid genetic algorithm (HGA) to solve it. The HGA is a penalty-based genetic algorithm (GA) that incorporates an effective heuristic procedure to locally optimize the individuals in the GA's population. The HGA has been implemented and evaluated experimentally, and the results show that it is both effective and efficient.
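The penalty-based GA idea in this abstract can be sketched in a few lines. The toy graph, terminal pair, penalty weight and GA parameters below are all invented for illustration; this is not the paper's HGA (in particular, its local-optimization heuristic is omitted):

```python
import random

# Toy graph: nodes 0..5, undirected edges
EDGES = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]
N = 6
S, T = 0, 5          # terminals that must end up on opposite sides

def fitness(bits):
    # cut size = number of edges crossing the 0/1 partition
    cut = sum(1 for u, v in EDGES if bits[u] != bits[v])
    # penalty if the terminal-separation constraint is violated
    penalty = 100 if bits[S] == bits[T] else 0
    return cut + penalty

def evolve(pop_size=40, gens=60, pm=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                 # elitist truncation selection
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cx = rng.randrange(1, N)          # one-point crossover
            child = a[:cx] + b[cx:]
            for i in range(N):                # bit-flip mutation
                if rng.random() < pm:
                    child[i] ^= 1
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)

best, f = evolve()
```

Infeasible individuals may survive early generations, but the penalty term prices them out; any fitness below the penalty weight corresponds to a feasible partition.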

Relevance: 60.00%

Abstract:

High reliability of railway power systems is one of the essential criteria for ensuring the quality and cost-effectiveness of railway services. Evaluation of reliability at the system level is essential not only for scheduling maintenance activities but also for identifying reliability-critical components. Various methods to compute the reliability of individual components or regularly structured systems have been developed and proven effective. However, they are not adequate for evaluating complicated systems with numerous interconnected components, such as railway power systems, or for locating the reliability-critical components. Fault tree analysis (FTA) integrates the reliability of individual components into the overall system reliability through quantitative evaluation, and identifies the critical components by means of minimum cut sets and sensitivity analysis. The paper presents the reliability evaluation of railway power systems by FTA and investigates the impact of maintenance activities on overall reliability. The applicability of the proposed methods is illustrated by case studies on AC railways.
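The way minimum cut sets feed into a quantitative system-reliability figure can be sketched as follows. The component names, failure probabilities and cut sets below are made up for illustration, not taken from the paper: the system fails when every component of some cut set fails, and inclusion-exclusion over the cut sets gives the exact system failure probability for independent components.

```python
from itertools import combinations

# Hypothetical component failure probabilities
p = {"transformer": 0.01, "feeder_a": 0.05, "feeder_b": 0.05, "breaker": 0.02}

# Minimum cut sets: the system fails iff every component of some set fails
cut_sets = [{"transformer"}, {"feeder_a", "feeder_b"}, {"breaker", "feeder_a"}]

def joint_prob(components):
    # probability that every component in the set fails (independence assumed)
    prob = 1.0
    for comp in components:
        prob *= p[comp]
    return prob

def system_failure_prob(cut_sets):
    # inclusion-exclusion over cut-set events; intersections of events
    # correspond to unions of the underlying component sets
    total = 0.0
    for r in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, r):
            union = set().union(*combo)
            total += (-1) ** (r + 1) * joint_prob(union)
    return total

q = system_failure_prob(cut_sets)
```

For small failure probabilities the first-order term (the plain sum over cut sets) is already a good upper bound, which is why the rare-event approximation is common in FTA practice.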

Relevance: 60.00%

Abstract:

Modern society has come to expect electrical energy on demand, while many of the facilities in power systems are aging beyond repair and maintenance. The risk of failure increases as equipment ages and can pose serious consequences for the continuity of electricity supply. As the equipment used in high-voltage power networks is very expensive, it may not be economically feasible to purchase and store spares in a warehouse for extended periods of time. On the other hand, there is normally a significant lead time between ordering equipment and receiving it. This situation has created considerable interest in the evaluation and application of probability methods for aging plant and the provision of spares in bulk supply networks, and can be of particular importance for substations. Quantitative adequacy assessment of substation and sub-transmission power systems is generally done using a contingency enumeration approach, which includes the evaluation of contingencies and their classification based on selected failure criteria. The problem is very complex because of the need to model the operation of substation and sub-transmission equipment in detail using network flow evaluation and to consider multiple levels of component failure. In this thesis a new model for aging equipment is developed that combines the standard treatment of random failures with a specific model for aging failures. This technique is applied to include and examine the impact of aging equipment on the system reliability of bulk supply loads and of consumers in the distribution network over a defined range of planning years. The power system risk indices depend on many factors, such as the actual physical network configuration and operation, the aging condition of the equipment, and the relevant constraints.
The impact and importance of equipment reliability on power system risk indices in a network with aging facilities contains valuable information for utilities seeking to better understand network performance and the weak links in the system. In this thesis, algorithms are developed to measure the contribution of individual pieces of equipment to the power system risk indices, as part of a novel risk analysis tool. A new cost-worth approach was also developed that supports early planning decisions on replacement activities for non-repairable aging components, in order to maintain a system reliability performance that is economically acceptable. The concepts, techniques and procedures developed in this thesis are illustrated numerically using published test systems. It is believed that the methods and approaches presented substantially improve the accuracy of risk predictions by explicitly considering the effect of equipment entering a period of increased risk of non-repairable failure.
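The "random failures plus aging failures" combination described above can be illustrated with a textbook competing-risks hazard model (a hedged sketch, not the thesis's model: the constant rate, the Weibull parameters and the horizon below are all made up):

```python
import math

# Illustrative parameters (invented): constant random-failure rate plus a
# Weibull aging hazard for a single non-repairable component
LAMBDA = 0.01          # random-failure rate (per year)
BETA, ETA = 3.0, 40.0  # Weibull shape and scale for aging (years)

def hazard(t):
    # total hazard = constant part + increasing Weibull part
    aging = (BETA / ETA) * (t / ETA) ** (BETA - 1)
    return LAMBDA + aging

def survival(t):
    # R(t) = exp(-lambda*t - (t/eta)^beta) for independent failure modes
    return math.exp(-LAMBDA * t - (t / ETA) ** BETA)

r10 = survival(10.0)   # probability of surviving a 10-year planning period
```

A planner could evaluate `survival` at each year of the planning range to see when the aging term starts to dominate the constant random-failure term.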

Relevance: 60.00%

Abstract:

This paper examines the case of a procurement auction for a single project, in which the breakdown of the winning bid into its component items determines the value of payments subsequently made to the bidder as the work progresses. Unbalanced bidding, or bid skewing, involves the uneven distribution of mark-up among the component items in such a way as to derive increased benefit for the unbalancer without changing the total bid. One form of unbalanced bidding, termed Front Loading (FL), is thought to be widespread in practice: it involves overpricing the work items that occur early in the project and underpricing those that occur later, in order to enhance the bidder's cash flow. Naturally, auctioneers attempt to protect themselves from the effects of unbalancing, typically by reserving the right to reject a bid that has been detected as unbalanced. As a result, models have been developed both to unbalance bids and to detect unbalanced bids, but virtually nothing is known of their use or success. This is of particular concern for the detection methods since, without testing, there is no way of knowing the extent to which unbalanced bids remain undetected or balanced bids are falsely flagged as unbalanced. This paper reports on a simulation study aimed at demonstrating the likely effects of unbalanced-bid detection models in a deterministic environment involving FL unbalancing in a Texas DOT detection setting, in which bids are deemed unbalanced if an item exceeds a maximum (or fails to reach a minimum) 'cut-off' value determined by the Texas method. A proportion of bids are automatically and maximally unbalanced over a long series of simulated contract projects, and the profits and detection rates of both the balancers and the unbalancers are compared.
The results show that, as expected, balanced bids are often incorrectly detected as unbalanced, with the rate of (mis)detection increasing with the proportion of FL bidders in the auction. It is also shown that, while the profit for balanced bidders remains the same irrespective of the number of FL bidders involved, the FL bidders' profit increases as the proportion of FL bidders in the auction grows. Sensitivity tests show the results to be generally robust, with (mis)detection rates increasing further when there are fewer bidders in the auction and when more data are averaged to determine the baseline value, but becoming smaller or larger with increased cut-off values and increased cost and estimate variability, depending on the number of FL bidders involved. The FL bidder's expected benefit from unbalancing, on the other hand, increases when there are fewer bidders in the auction. It also increases when the cut-off rate and discount rate are increased, when there is less variability in the costs and their estimates, and when fewer data are used in setting the baseline values.
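A minimal sketch of cut-off-based detection in the spirit of the setting above. The baseline rule (average of the competing bids per item) and the tolerance below are invented for illustration; this is not the Texas DOT formula:

```python
# Hypothetical tolerance: flag an item outside +/-50% of the baseline
CUTOFF = 0.5

def is_unbalanced(bid, competing_bids, cutoff=CUTOFF):
    """Flag a bid whose per-item prices stray too far from a baseline
    computed as the mean of the competing bidders' item prices."""
    for i in range(len(bid)):
        baseline = sum(b[i] for b in competing_bids) / len(competing_bids)
        lo, hi = baseline * (1 - cutoff), baseline * (1 + cutoff)
        if not (lo <= bid[i] <= hi):
            return True
    return False

# A front-loaded bid overprices early items and underprices later ones,
# keeping the same total (300 in both cases here)
balanced  = [100, 100, 100]
frontload = [170, 100, 30]
others = [[95, 105, 100], [105, 95, 100]]
```

Running the rule on these toy bids flags the front-loaded bid and passes the balanced one; the simulation in the paper studies exactly how often such a rule misfires as the bidder mix changes.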

Relevance: 60.00%

Abstract:

Given an undirected unweighted graph G = (V, E) and an integer k ≥ 1, we consider the problem of computing the edge connectivities of all those (s, t) vertex pairs whose edge connectivity is at most k. We present an algorithm with expected running time Õ(m + nk^3) for this problem, where |V| = n and |E| = m. Our output is a weighted tree T whose nodes are the sets V_1, V_2, ..., V_l of a partition of V, with the property that the edge connectivity in G between any two vertices s ∈ V_i and t ∈ V_j, for i ≠ j, is equal to the weight of the lightest edge on the path between V_i and V_j in T. Also, two vertices s and t belong to the same V_i for any i if and only if they have an edge connectivity greater than k. Currently, the best algorithm for this problem needs to compute all-pairs min-cuts in an O(nk) edge graph; this takes Õ(m + n^{5/2}k·min{k^{1/2}, n^{1/6}}) time. Our algorithm is much faster for small values of k; in fact, it is faster whenever k is o(n^{5/6}). Our algorithm yields the useful corollary that in Õ(m + nc^3) time, where c is the size of the global min-cut, we can compute the edge connectivities of all those pairs of vertices whose edge connectivity is at most αc for some constant α. We also present an Õ(m + n) Monte Carlo algorithm for the approximate version of this problem; this algorithm is applicable to weighted graphs as well. Our algorithm, with some modifications, also solves the minimum T-cut problem: given T ⊆ V of even cardinality, we present an Õ(m + nk^3) algorithm to compute a minimum cut that splits T into two odd-cardinality components, where k is the size of this cut.
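The central quantity in this abstract, the edge connectivity of a pair (s, t), equals the maximum number of edge-disjoint s-t paths and can be computed by max-flow with unit capacities. A small stdlib-only sketch using Edmonds-Karp (this is the textbook baseline, not the paper's algorithm):

```python
from collections import deque

def edge_connectivity(n, edges, s, t):
    """Edge connectivity between s and t in an undirected unweighted
    graph, via max-flow with unit capacities on both directions."""
    cap = [[0] * n for _ in range(n)]
    for u, v in edges:
        cap[u][v] += 1
        cap[v][u] += 1
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow          # no augmenting path: flow = connectivity
        v = t                    # augment by one unit along the path
        while v != s:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            v = u
        flow += 1

# 4-cycle: every vertex pair has edge connectivity 2
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
c = edge_connectivity(4, edges, 0, 2)
```

Repeating this for all pairs is exactly the expensive step that the tree T in the abstract lets one avoid for pairs with connectivity at most k.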

Relevance: 60.00%

Abstract:

This paper presents a volumetric formulation for the multi-view stereo problem which is amenable to a computationally tractable global optimisation using Graph-cuts. Our approach is to seek the optimal partitioning of 3D space into two regions labelled as "object" and "empty" under a cost functional consisting of the following two terms: (1) A term that forces the boundary between the two regions to pass through photo-consistent locations and (2) a ballooning term that inflates the "object" region. To take account of the effect of occlusion on the first term we use an occlusion robust photo-consistency metric based on Normalised Cross Correlation, which does not assume any geometric knowledge about the reconstructed object. The globally optimal 3D partitioning can be obtained as the minimum cut solution of a weighted graph.

Relevance: 60.00%

Abstract:

Classical measures of network connectivity are the number of disjoint paths between a pair of nodes and the size of a minimum cut. For standard graphs, these measures can be computed efficiently using network-flow techniques. However, in the Internet at the level of autonomous systems (ASs), referred to as the AS-level Internet, routing policies impose restrictions on the paths that traffic can take in the network. These restrictions can be captured by the valley-free path model, which assumes a special directed graph model in which edge types represent relationships between ASs. We consider the adaptation of the classical connectivity measures to the valley-free path model, where they are NP-hard to compute. Our first main contribution is a set of algorithms for the computation of disjoint paths, and of minimum cuts, in the valley-free path model. These algorithms are useful for ASs that want to evaluate different options for selecting upstream providers to improve the robustness of their connection to the Internet. Our second main contribution is an experimental evaluation of our algorithms on four types of directed graph models of the AS-level Internet produced by different inference algorithms. Most importantly, the evaluation shows that our algorithms are able to compute optimal solutions to realistically sized instances of the connectivity problems in the valley-free path model in reasonable time. Furthermore, our experimental results provide information about the characteristics of the directed graph models of the AS-level Internet produced by the different inference algorithms. It turns out that (i) we can quantify the difference between the undirected AS-level topology and the directed graph models with respect to fundamental connectivity measures, and (ii) the different inference algorithms yield topologies that are similar with respect to connectivity but different with respect to the types of paths that exist between pairs of ASs.
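The valley-free condition itself has a simple operational form: a permitted path consists of zero or more customer-to-provider ("up") edges, then at most one peering edge, then zero or more provider-to-customer ("down") edges. A small sketch with invented edge labels (the path-restriction check only, not the paper's cut algorithms):

```python
import re

def is_valley_free(rel_sequence):
    """rel_sequence: list of 'up', 'peer', 'down' edge labels along a path.
    Valid iff the sequence matches up* peer? down*."""
    s = "".join({"up": "u", "peer": "p", "down": "d"}[r] for r in rel_sequence)
    return re.fullmatch(r"u*p?d*", s) is not None

ok = is_valley_free(["up", "up", "peer", "down"])        # classic valid path
valley = is_valley_free(["down", "up"])                  # a "valley": invalid
two_peers = is_valley_free(["up", "peer", "peer"])       # two peer hops: invalid
```

It is precisely this restriction that breaks the max-flow/min-cut machinery for standard graphs and makes the connectivity measures NP-hard in this model.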

Relevance: 60.00%

Abstract:

OBJECTIVE: To evaluate the current use of the Australian Type 2 Diabetes Risk Assessment Tool (AUSDRISK) as a screening tool to identify individuals at high risk of developing type 2 diabetes for entry into lifestyle modification programs.

RESEARCH DESIGN AND METHODS: AUSDRISK scores were calculated from participants aged 40-74 years in the Greater Green Triangle Risk Factor Study, a cross-sectional population survey in 3 regions of Southwest Victoria, Australia, 2004-2006. Biomedical profiles of AUSDRISK risk categories were determined along with estimates of the Victorian population included at various cut-off scores. Sensitivity, specificity, positive predictive value (PPV), negative predictive value, and receiver operating characteristics were calculated for AUSDRISK in determining fasting plasma glucose (FPG) ≥6.1 mmol/L.

RESULTS: Increasing AUSDRISK scores were associated with increases in weight, body mass index, FPG, and metabolic syndrome. Raising the minimum cut-off score also increased the proportion of individuals who were obese and centrally obese and who had impaired fasting glucose (IFG) and metabolic syndrome. An AUSDRISK score of ≥12 was estimated to include 39.5% of the Victorian population aged 40-74 (916 000), while a score of ≥20 would include only 5.2% of the same population (120 000). At AUSDRISK ≥20, the PPV for detecting FPG ≥6.1 mmol/L was 28.4%.

CONCLUSIONS: AUSDRISK is powered to predict those with IFG and undiagnosed type 2 diabetes, but its effectiveness as the sole determinant for entry into a lifestyle modification program is questionable given the large proportion of the population screened-in using the current minimum cut-off of ≥12. AUSDRISK should be used in conjunction with oral glucose tolerance testing, fasting glucose, or glycated hemoglobin to identify those individuals at highest risk of progression to type 2 diabetes, who should be the primary targets for lifestyle modification.
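The screening metrics reported above (sensitivity, specificity, PPV, NPV) all come from a standard 2x2 confusion table. A sketch with made-up counts, not the study's data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 confusion table:
    tp/fp/fn/tn = true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # P(test positive | disease)
        "specificity": tn / (tn + fp),   # P(test negative | no disease)
        "ppv":         tp / (tp + fp),   # P(disease | test positive)
        "npv":         tn / (tn + fn),   # P(no disease | test negative)
    }

# Illustrative counts only (invented, not the AUSDRISK study's data)
m = screening_metrics(tp=40, fp=101, fn=10, tn=849)
```

Note how PPV falls when a low cut-off screens in a large fraction of the population: the false positives (fp) swamp the true positives, which is the abstract's core argument against using the ≥12 cut-off alone.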

Relevance: 60.00%

Abstract:

We deal with the optimization of the production of branched sheet metal products. New forming techniques for sheet metal give rise to a wide variety of possible profiles and possible ways of production. In particular, we show how the problem of producing a given profile geometry can be modeled as a discrete optimization problem. We provide a theoretical analysis of the model in order to improve its solution time. In this context we give the complete convex hull description of some substructures of the underlying polyhedron. Moreover, we introduce a new class of facet-defining inequalities that represent connectivity constraints for the profile and show how these inequalities can be separated in polynomial time. Finally, we present numerical results for various test instances, both real-world and academic examples.

Relevance: 40.00%

Abstract:

We present a new penalty-based genetic algorithm for the multi-source and multi-sink minimum vertex cut problem, and illustrate the algorithm’s usefulness with two real-world applications. It is proved in this paper that the genetic algorithm always produces a feasible solution by exploiting some domain-specific knowledge. The genetic algorithm has been implemented on the example applications and evaluated to show how well it scales as the problem size increases.
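For context, the exact version of the minimum vertex cut problem has a classical flow formulation: splitting each vertex into an "in" node and an "out" node joined by a unit-capacity arc turns a minimum vertex cut into a minimum edge cut. A sketch of that construction only (it is the textbook reduction, not the paper's genetic algorithm; the node numbering is illustrative):

```python
def split_graph(n, edges, sources, sinks):
    """Build the capacities of the split digraph for vertex-cut-as-edge-cut.
    Vertex v becomes nodes 2v (in) and 2v+1 (out); cutting the internal
    arc (2v, 2v+1) corresponds to removing vertex v."""
    INF = float("inf")
    arcs = {}
    for v in range(n):
        # terminals must never be cut, so their internal arcs get capacity INF
        arcs[(2 * v, 2 * v + 1)] = INF if (v in sources or v in sinks) else 1
    for u, v in edges:
        # original undirected edges become two INF-capacity arcs, so a
        # minimum cut can only use internal (vertex) arcs
        arcs[(2 * u + 1, 2 * v)] = INF
        arcs[(2 * v + 1, 2 * u)] = INF
    return arcs

# Path 0-1-2-3 with source 0 and sink 3: the only finite arcs are the
# internal arcs of vertices 1 and 2
arcs = split_graph(4, [(0, 1), (1, 2), (2, 3)], sources={0}, sinks={3})
```

Any max-flow routine run on `arcs` from node 1 (out-node of the source) to node 6 (in-node of the sink) then yields the minimum vertex cut; the GA in the abstract targets the multi-source, multi-sink variant where such exact methods become expensive.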

Relevance: 30.00%

Abstract:

In Finland, peat harvesting sites are utilized down almost to the mineral soil. In this situation the properties of the mineral subsoil are likely to have a considerable influence on suitability for the various after-use forms. The aims of this study were to identify the chemical and physical properties of mineral subsoils that may limit the after-use of cut-over peatlands, to define a minimum practice for mineral subsoil studies, and to describe the role of different geological areas. The future percentages of the different after-use forms were predicted, which also made it possible to predict carbon accumulation in this future situation. The mineral subsoils of 54 peat production areas were studied. Their general features and grain size distributions were analysed, along with pH, electrical conductivity, organic matter, water-soluble nutrients (P, NO3-N, NH4-N, S and Fe) and exchangeable nutrients (Ca, Mg and K). In some cases other elements were analysed as well. In an additional case study, carbon accumulation effectiveness before intervention was evaluated on three sites in the Oulu area (sites typically considered for peat production). Areas with relatively sulphur-rich mineral subsoil and pool-forming areas with very fine and compact mineral subsoil together covered approximately one fifth of all areas. These areas were unsuitable for commercial use and were recommended, for example, for mire regeneration. Another approximately one fifth of the areas comprised very coarse or very fine sediments; commercial use of these areas would demand special techniques, such as using the remaining peat layer to compensate for properties missing from the mineral subsoil. One after-use form was seldom suitable for a whole released peat production area. Three typical distribution patterns (models) of different mineral subsoils within individual peatlands were found. 57% of the studied cut-over peatlands were well suited for forestry.
In a conservative calculation, 26% of the areas were clearly suitable for agriculture, horticulture or energy crop production. If till without large boulders were included, the percentage of areas suitable for field crop production would rise to 42%. 9-14% of all areas were well suited for mire regeneration or bird sanctuaries, but all areas were considered possible for mire regeneration with correct techniques. A further 11% was recommended for mire regeneration to avoid disturbing the mineral subsoil, so in total 20-25% of the areas would be used for rewetting. High sulphur concentrations and acidity were typical of the areas below the highest shoreline of the ancient Litorina Sea and of the Lake Ladoga-Bothnian Bay zone. Differences related to nutrition were also detected: in coarse sediments the natural nutrient concentration was clearly higher in the Lake Ladoga-Bothnian Bay zone and in the areas of Svecokarelian schists and gneisses than in the Granitoid area of central Finland and in the Archaean gneiss areas. Based on this study, the recommended minimum analysis for after-use planning covers pH, sulphur content and the percentage of fine material (<0.06 mm). Nutrient capacity can be analysed using the natural concentrations of calcium, magnesium and potassium. Carbon accumulation scenarios were developed based on the land-use predictions. These scenarios were calculated for the areas in peat production and the areas released from peat production (59 300 ha + 15 671 ha). The carbon accumulation of the scenarios varied between 0.074 and 0.152 million t C a-1. In the three peatlands considered for peat production, the long-term carbon accumulation rates varied between 13 and 24 g C m-2 a-1. The natural annual carbon accumulation had been decreasing towards the time of possible intervention.

Relevance: 30.00%

Abstract:

A simple method based on the effective index method was used to estimate the minimum bend radii of curved SOI waveguides. An analytical formula was obtained for estimating the minimum radius of curvature at which the mode becomes cut off due to side radiative loss.

Relevance: 30.00%

Abstract:

In this article, the machining conditions needed to achieve nanometric surface roughness in finish-cut micro-electrodischarge milling were investigated. For a constant gap voltage, the effect of feed rate and capacitance on average surface roughness (Ra) and maximum peak-to-valley roughness height (Ry) was studied. Statistical models were developed using a three-level, two-factor experimental design, and Ra and Ry were minimized using the desirability function approach. The maximum desirability was found to be more than 98%. The minimum values of Ra and Ry were 23 and 173 nm, respectively, obtained at a feed rate of 1.00 μm s-1 and a capacitance of 0.01 nF. Verification experiments were conducted to check the accuracy of the models; the responses were found to be very close to the predicted values. Thus, the developed models can be used to generate a nanometric-level surface finish, which is useful for many applications in microelectromechanical systems.
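The desirability function approach mentioned above maps each response onto [0, 1] (1 = fully desirable) and combines the individual desirabilities with a geometric mean. A sketch using the reported Ra and Ry values; the acceptance bounds below are invented for illustration, not the article's:

```python
def desirability_min(y, y_best, y_worst):
    """Linear smaller-is-better desirability: 1 at y_best, 0 at y_worst."""
    if y <= y_best:
        return 1.0
    if y >= y_worst:
        return 0.0
    return (y_worst - y) / (y_worst - y_best)

def overall(ds):
    # geometric mean: one fully undesirable response zeroes the whole score
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Reported roughness values (nm) with hypothetical acceptance ranges
d_ra = desirability_min(23, y_best=20, y_worst=200)    # Ra = 23 nm
d_ry = desirability_min(173, y_best=150, y_worst=600)  # Ry = 173 nm
D = overall([d_ra, d_ry])
```

An optimizer searches the feed-rate/capacitance space for the settings that maximize D, which is how the minimum-roughness conditions in the abstract are obtained.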

Relevance: 30.00%

Abstract:

This paper studies the polytope of the minimum-span graph labelling problem with integer distance constraints (DC-MSGL). We first introduce several classes of new valid inequalities for the DC-MSGL defined on general graphs and briefly discuss the separation problems for some of these inequalities. These are the initial steps of a branch-and-cut algorithm for solving the DC-MSGL. Following that, we present our polyhedral results on the dimension of the DC-MSGL polytope and show that some of the inequalities are facet defining, under reasonable conditions, for the polytope of the DC-MSGL on triangular graphs.

Relevance: 30.00%

Abstract:

This research aimed to analyze the viability of the minimum quantity lubrication (MQL) technique against other lubri-refrigeration methods in the surface grinding of steel, considering process quality, wheel life and the viability of using cutting fluids. The methods compared were the conventional method (abundant fluid flow), minimum quantity lubrication (MQL) and an optimized method using a Webster (rounded) nozzle. The analysis was carried out under identical machining conditions, through the assessment of variables such as grinding force, surface roughness, G ratio (volume of removed material/volume of wheel wear) and microhardness. The results showed the possibility of improving the grinding process and producing high-quality workpieces at lower cost. The MQL technique showed efficiency in machining with lower depths of cut. The optimized method with the Webster nozzle applies the fluid in a rational way, without considerable waste. Hence, the results show that industry can rationalize and optimize the application of cutting fluids, avoiding inappropriate disposal, inadequate use and, consequently, environmental pollution.