875 results for Distribution network reconfiguration problem


Relevance:

30.00%

Publisher:

Abstract:

As a special class of lithologic reservoirs, igneous rock oil and gas pools are receiving growing attention. Their forming conditions and distribution differ from those of conventional reservoirs: the terrane distribution types are varied, reservoir anisotropy is severe, and hydrocarbon occurrence is complicated, so no proven experience yet exists for exploring and developing such complex, subtle oil and gas pools. Taking the igneous oil and gas pool of the Luo151 area in the Zhanhua Sag, eastern China, as an example, this article studies the difficult problems involved, including petrologic and lithofacies analysis; the origin, intrusion age, and episodes of the igneous rocks; reservoir anisotropy; geological modeling; and integrated evaluation of the igneous reservoir. Forming conditions and distribution are studied synthetically, and an integrated method for predicting igneous rock oil and gas pools is formed and evaluated against development data. The igneous rock in the Luo151 area of the Zhanhua Sag is mainly a diabase complex, and its petrologic types include carbonaceous slate, hornfels, and diabase. Based on a synthetic analysis of petrologic composition, texture, and structure, four lithofacies zones are distinguished within the diabase and its wall rock: a carbonaceous slate subfacies, a hornfels subfacies containing cordierite and grammite, a border subfacies, and a central subfacies. Isotopic chronology, terrane configuration, and imaging-logging data show that the diabase intrusion in the Zhanhua Sag formed from tholeiitic magma emplaced into the Shahejie Formation under the Lower Tertiary rift-extension setting of North China, and that the Luo151 intrusion is probably composed of three episodes of magma emplacement. The diabase reservoirs of Luo151 show severe anisotropy. Fractures are the primary reservoir space, dominated by high-angle tensile fractures, and the fracture zones develop mainly around the contact belt between the igneous rock and the wall rock and where the terrane thickness changes rapidly.
The reservoir space of the Luo151 igneous oil pools consists of intergranular micropores in the hornfels, vesicles and dissolution vugs in the chilled margin of the diabase, and cooling fractures in the margin and center of the diabase, the fractures being divided into horizontal, vertical, and reticulated types according to their occurrence. On this basis, a conceptual model of the igneous reservoir is constructed, divided vertically into four belts and horizontally into three areas. A classification and evaluation scheme for the igneous reservoirs of this area is established for the first time, and three key wells are evaluated. The diabase complex is divided into grammite-hornfels micropore-type and diabase porous-fracture-type reservoirs. The thick mudstone layers in the Third Member of the Shahejie Formation (Es3) provide favorable hydrocarbon source rock and cap rock, the diabase and hornfels belts serve as reservoirs, and faults and microcracks in the wall rock act as pathways for oil and gas migration. The diabase intruded roughly during the late depositional period of the Dongying Formation and the middle depositional period of the Guantao Formation, whereas the oil was generated from the Es3 source rock during the Minghuazhen period, later than the formation of the diabase trap and its pore space. Based on geological and seismic data, the igneous horizons are demarcated accurately using VSP and synthetic seismograms, and the shape, distribution, and continuity of the igneous bodies are determined using cross-hole seismic technology. Reservoir quality is predicted using logging-constrained inversion and neural network techniques, and an integrated method for predicting igneous rock oil and gas pools is thereby formed. Appraised against development data, the results show that the reservoir conceptual model can guide the exploration and development of the oil pool, and that the integrated method has yielded marked results in production.

Relevance:

30.00%

Publisher:

Abstract:

Mudstone reservoirs are subtle, extremely inhomogeneous reservoirs whose formation is closely tied to fracturing. In this kind of reservoir, the mudstone serves simultaneously as source rock, cap rock, and reservoir; reservoir types vary widely; the attitude of the oil layers changes greatly; and the distribution of oil and gas differs both from igneous or clastic reservoirs and from self-generating, self-storing carbonate reservoirs. No mature experience yet exists for describing, exploring, and developing such reservoirs. Taking the Zhanhua depression as an example, this thesis studies the tectonic evolution, depositional characteristics, diagenesis, hydrocarbon generation, abnormal formation pressure, and fracture formation of the mudstone reservoir on the basis of core analysis, physical simulation, numerical simulation, and integrated well-logging and geophysical study. It systematically analyzes the development and distribution of the fractured mudstone reservoir, establishes a geological model for its formation, and predicts possible fracture zones in the study area. The mudstone reservoirs are mainly distributed on the downthrown side of syndepositional faults along the sloping margin of the petroleum-generating depression in the Zhanhua depression. Growth faults controlled subsidence and sedimentation; both the rate of subsidence and the thickness of the mudstone are greatest on the downthrown side of the growth faults, which leads to the formation of overpressure in the area. The episodic opening of faults, which discharges pressure from deeper strata and conducts it upward, also contributes to overpressure in the mudstone.
In the Zhanhua depression, the mudstone reservoirs mainly develop in the undercompacted strata of the third member of the Shahejie Formation, the best source rock in the area owing to its wide distribution, great thickness, and richness in organic matter; its rock types are oil-source mudstone and shale deposited in deep to semi-deep lacustrine water. Core analysis reveals that the strata are rich in limestone, consisting of alternating laminae of dark mudstone and light grey limestone, a rock assemblage favorable to high pressure and fracturing during hydrocarbon generation. For the first time, fractures in the mudstone of the third member of the Shahejie Formation are divided by origin into structural fractures, hydrocarbon-generation fractures, and compound fractures, with six secondary types. Structural fractures form by tectonic movements such as folding or faulting and develop mainly near faults, especially in uplifted areas and at fault margins; they have obvious directionality, tend to be wider and longer, and, as a result of the multiple tectonic movements in the study area, developed periodically and discontinuously in time. Hydrocarbon-generation fractures form during hydrocarbon generation; they are numerous and widely distributed, but generally small in scale and belong to microfractures. Compound fractures result from both tectonic movement and hydrocarbon generation. Combined in time and space, these fractures form the three-dimensional reservoir network of the mudstone, which coincides with the abnormal-pressure zone in plan view and is related to sedimentary facies, rock assemblage, organic content, structural evolution, high pressure, and other factors.
In the Zhanhua depression, the mudstone of the third member of the Shahejie Formation corresponds to a set of seismic reflections with good continuity. When the mudstone contains oil and gas under abnormally high pressure, the seismic waveform changes, because oil and gas absorb the high-frequency components of the seismic reflection and the breakage of the mudstone structure lowers the reflection frequency. Using coherence data analysis, the author solved the problem of mudstone reservoir prediction in the Zhanhua depression to some degree. Basin numerical modeling was used to simulate the paleo-fluid pressure field in the Zhanhua depression and to quantitatively analyze the main factors controlling overpressure in the mudstone (such as undercompaction, tectonic movement, and hydrocarbon generation). Combined with factual geological information and the literature, we analyzed the characteristics of basin evolution and the factors influencing the pressure field, employed numerical modeling of fluid pressure evolution on 1-D and 2-D sections, modeled and analyzed the formation and evolution of pressure in plan view for the main locations in different periods, and concluded that the main factors producing overpressure in the study area are tectonic movement, undercompaction, and hydrocarbon generation. The effective fracture zones in the mudstone formed mainly in the last stage of the Dongying movement; the mudstone of the third member of the Shahejie Formation reached peak oil generation and migration in the Guantao stage, and oil and gas have been preserved since the end of that stage. Tectonic movement was weak after the oil and gas were emplaced, which favored their preservation. The formation of the fractured mudstone reservoir can be divided into four stages: deposition of the muddy source rock; compactional dewatering through hydrocarbon generation; formation of effective fractures and accumulation of oil; and formation of the fractured reservoir. Combined with other regional geological information, we predicted four priority mudstone fracture reservoirs, covering 18 km2 in area with 1200 × 10^4 t of geological reserves.

Relevance:

30.00%

Publisher:

Abstract:

This article forms an important part of a "95" technological project of SINOPEC, involving substantial difficulty and workload and carrying significant theoretical meaning and practical value. The study area consists of sandstone and conglomerate reservoirs of alluvial-fan and fan-delta origin belonging to the lower Sha3 and upper Sha4 members of the Lower Tertiary in the Yong'an Town Oilfield, Dongying Depression. The target strata developed on the hanging wall of a synsedimentary fault in the scarp zone of the Dongying Depression. Frequent, intense fault movements caused the sandstone and conglomerate reservoirs to vary and the lower Sha3 and upper Sha4 members to evolve in time and space. As a result, correlating individual reservoirs at the fan root is difficult, which poses a key problem for oilfield exploitation; against this background, the study of flow units is more difficult still. This article synthetically applies new concepts, methods, and techniques from sedimentology, petroleum geology, reservoir geology, crystal-surface physics, dynamic and static reservoir description, and well-logging geology, makes full use of computer technology, and systematically analyzes the identification, division, and appraisal of the flow units of the two formation types of sandstone and conglomerate reservoir in the lower Sha3 and upper Sha4 members of the Yong'an Town Oilfield, Dongying Depression. For the first time, single-well, cross-section, plane, nuclear magnetic resonance log, microscopic network, 4-D geological, and simulation models of the flow units of the two reservoir types are established, and the formation mechanism and the oil-gas distribution and enrichment laws of these flow units are revealed.
The article establishes optimization, identification, classification, and appraisal standards for the flow units of the two reservoir types, laying a solid foundation for the static model of the flow units, revealing the macroscopic and microscopic variation laws of their geometry, and guiding oil exploitation. Using multidisciplinary theories, information, and techniques, it establishes the static model of the flow units and reveals the geometric configuration, spatial distribution, and oil-gas enrichment laws of the reservoirs. For the first time, a nuclear magnetic resonance log model of the two reservoir types is established, revealing not only the character and distribution laws of porosity and permeability but also the formation and distribution of movable fluid. Six types of microscopic network model are established for the working area using rock thin sections, SEM, image analysis, mercury intrusion, casting molds, and rock CT measurement and imaging, revealing the microscopic pore and throat characteristics, flow regimes, and microscopic oil-gas enrichment laws of the reservoirs. For the first time, 4-D and mathematical models of the reservoirs are set up, revealing the distribution and evolution laws of the macroscopic and microscopic parameters of the two reservoir types and of the oil and gas in 4-D space, while also forecasting the oil-gas distribution and guiding oilfield exploitation.
A reservoir simulation model is established, revealing the flow characteristics and distribution laws of oil and gas in the different pore-throat network models. The article thus builds supporting theories and techniques for researching, describing, characterizing, and forecasting sandstone and conglomerate reservoir flow units, and advances the exploitation geology of continental faulted basins. In guiding oilfield exploitation, it has earned notable economic and social benefits.

Relevance:

30.00%

Publisher:

Abstract:

The Saliency Network proposed by Shashua and Ullman is a well-known approach to the problem of extracting salient curves from images while performing gap completion. This paper analyzes the Saliency Network. The Saliency Network is attractive for several reasons. First, the network generally prefers long and smooth curves over short or wiggly ones. While computing saliencies, the network also fills in gaps with smooth completions and tolerates noise. Finally, the network is locally connected, and its size is proportional to the size of the image. Nevertheless, our analysis reveals certain weaknesses with the method. In particular, we show cases in which the most salient element does not lie on the perceptually most salient curve. Furthermore, in some cases the saliency measure changes its preferences when curves are scaled uniformly. Also, we show that for certain fragmented curves the measure prefers large gaps over a few small gaps of the same total size. In addition, we analyze the time complexity required by the method. We show that the number of steps required for convergence in serial implementations is quadratic in the size of the network, and in parallel implementations is linear in the size of the network. We discuss problems due to coarse sampling of the range of possible orientations. We show that with proper sampling the complexity of the network becomes cubic in the size of the network. Finally, we consider the possibility of using the Saliency Network for grouping. We show that the Saliency Network recovers the most salient curve efficiently, but it has problems with identifying any salient curve other than the most salient one.
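The local, iterative character of the computation can be caricatured in a few lines. The sketch below is a simplified, hypothetical version of the saliency recurrence (a uniform attenuation `rho` and arbitrary coupling weights are assumptions, not Shashua and Ullman's exact formulation): each element repeatedly adds its own local saliency to the best attenuated saliency offered by a neighbor, so long smooth chains accumulate high values.

```python
# Toy sketch (not the exact Shashua-Ullman formulation): each element i
# repeatedly updates its saliency from its local measure sigma[i] plus
# the best attenuated saliency contributed by a neighboring element.
def saliency_iterate(sigma, coupling, rho=0.9, steps=50):
    """sigma: local saliency per element; coupling[i]: list of (j, f_ij)
    pairs, f_ij being a smoothness/gap weight in [0, 1]."""
    n = len(sigma)
    phi = list(sigma)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            # best offer from any neighbor, attenuated by rho * f_ij
            best = max((f * phi[j] for j, f in coupling[i]), default=0.0)
            nxt.append(sigma[i] + rho * best)
        phi = nxt
    return phi
```

On a three-element chain the head of the chain accumulates the attenuated saliency of everything downstream, which is the sense in which long curves win.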

Relevance:

30.00%

Publisher:

Abstract:

The work comprises a new theoretical development applied to aid decision making in an increasingly important commercial sector. Agile supply, in which small volumes of high-margin, short-life-cycle innovative products are offered, is increasingly carried out through a complex global supply chain network. We outline an equilibrium solution in such a supply chain network, which works through limited cooperation and coordination along edges (links) in the network. The links, rather than the nodes, constitute the stochastic modelling entities of the network. We utilise newly developed phase plane analysis to identify, model, and predict characteristic behaviour in supply chain networks. The phase plane charts profile the flow of inventory and identify out-of-control conditions. They maintain quality within the network, and intelligently track the way the network evolves under changing variability. The methodology is essentially distribution free, relying as it does on the study of forecasting errors, and can be used to examine contractual details as well as strategic and game-theoretical concepts between the decision-making components (agents) of a network. We illustrate with typical data drawn from the supply chains of agile fashion products.
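As a rough illustration of what a phase plane built from forecasting errors might look like in code (the function names and the radial control limit below are assumptions for illustration, not the paper's method): each point pairs an error with its change, and points far from the origin flag out-of-control conditions on a link.

```python
# Illustrative sketch (names and thresholds are assumptions): track
# one-step forecast errors on a supply-chain link and form the phase
# plane pairs (e_t, e_t - e_{t-1}) used to chart inventory flow.
def phase_plane(errors):
    """Return (error, change-in-error) pairs from a forecast-error series."""
    return [(errors[t], errors[t] - errors[t - 1]) for t in range(1, len(errors))]

def out_of_control(points, limit):
    """Flag points whose distance from the origin exceeds a control limit."""
    return [p for p in points if (p[0] ** 2 + p[1] ** 2) ** 0.5 > limit]
```

Because only the errors of a chosen forecaster are examined, nothing here assumes a particular demand distribution, which matches the distribution-free flavor of the approach.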

Relevance:

30.00%

Publisher:

Abstract:

Reducing the energy consumption of water distribution networks has never had more significance. The greatest energy savings can be obtained by carefully scheduling the operations of pumps. Schedules can be defined either implicitly, in terms of other elements of the network such as tank levels, or explicitly by specifying the time during which each pump is on/off. The traditional representation of explicit schedules is a string of binary values, with each bit representing pump on/off status during a particular time interval. In this paper, we formally define and analyze two new explicit representations based on time-controlled triggers, where the maximum number of pump switches is established beforehand and the schedule may contain fewer switches than the maximum. In these representations, a pump schedule is divided into a series of integers, with each integer representing the number of hours for which a pump is active/inactive. This reduces the number of potential schedules compared to the binary representation, and allows the algorithm to operate on the feasible region of the search space. We propose evolutionary operators for these two new representations. The new representations and their corresponding operations are compared with the two most-used representations in pump scheduling, namely binary representation and level-controlled triggers. A detailed statistical analysis of the results indicates which parameters have the greatest effect on the performance of evolutionary algorithms. The empirical results show that an evolutionary algorithm using the proposed representations improves over the results obtained by a recent state-of-the-art Hybrid Genetic Algorithm for pump scheduling using level-controlled triggers.
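A minimal sketch of the duration-based idea, under the assumption that states simply alternate from a known initial state (the paper's exact encoding may differ): a list of integer durations expands into the classic per-hour binary string, and the number of switches is bounded by the length of the duration list.

```python
# Sketch of a time-controlled trigger encoding (the exact scheme is an
# assumption): a schedule is a list of integer durations spent in
# alternating off/on states, expanded into the per-hour binary schedule.
def expand_schedule(durations, horizon=24, start_on=False):
    """durations: hours per state, alternating from 'off' (or 'on' if
    start_on). Returns a per-hour list of on/off booleans."""
    schedule, state = [], start_on
    for d in durations:
        schedule.extend([state] * d)
        state = not state
    # pad with the final state (or trim) to fill the scheduling horizon
    pad = schedule[-1] if schedule else start_on
    schedule.extend([pad] * max(0, horizon - len(schedule)))
    return schedule[:horizon]

def num_switches(schedule):
    """Count on/off transitions in a per-hour schedule."""
    return sum(schedule[i] != schedule[i - 1] for i in range(1, len(schedule)))
```

With at most k durations the encoding enumerates far fewer candidates than the 2^24 binary strings for a daily schedule, which is the search-space reduction the representation is after.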

Relevance:

30.00%

Publisher:

Abstract:

Plakhov, A.Y., (2004) 'Precise solutions of the one-dimensional Monge-Kantorovich problem', Sbornik: Mathematics 195(9) pp.1291-1307 RAE2008

Relevance:

30.00%

Publisher:

Abstract:

Plakhov, A.Y.; Torres, D., (2005) 'Newton's aerodynamic problem in media of chaotically moving particles', Sbornik: Mathematics 196(6) pp.885-933 RAE2008

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: In the current climate of high-throughput computational biology, inferring a protein's function from related measurements, such as protein-protein interaction relations, has become a canonical task. Most existing technologies treat this task as a classification problem, term by term, for each term in a database such as the Gene Ontology (GO) database, a popular rigorous vocabulary for biological functions. However, ontology structures are essentially hierarchies, with top-to-bottom annotation rules that protein function predictions should in principle follow. Currently, the most common approach to imposing these hierarchical constraints on network-based classifiers is to apply transitive closure to the predictions.

RESULTS: We propose a probabilistic framework that integrates information in relational data, in the form of a protein-protein interaction network, and a hierarchically structured database of terms, in the form of the GO database, for the purpose of protein function prediction. At the heart of our framework is a factorization of local neighborhood information in the protein-protein interaction network across successive ancestral terms in the GO hierarchy. We introduce a classifier within this framework, with a computationally efficient implementation, that produces GO-term predictions that naturally obey hierarchical 'true-path' consistency from root to leaves, without the need for further post-processing.

CONCLUSION: A cross-validation study using data from the yeast Saccharomyces cerevisiae shows our method offers substantial improvements over both standard 'guilt-by-association' (i.e., nearest-neighbor) and more refined Markov random field methods, whether in their original form or when post-processed to artificially impose 'true-path' consistency. Further analysis of the results indicates that these improvements are associated with increased predictive capability (i.e., increased positive predictive value), and that this increase holds uniformly across GO-term depths. Additional in silico validation on a collection of new annotations recently added to GO confirms the advantages suggested by the cross-validation study. Taken as a whole, our results show that a hierarchical approach to network-based protein function prediction, one that exploits the ontological structure of protein annotation databases in a principled manner, can offer substantial advantages over the successive application of 'flat' network-based methods.
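The 'true-path' rule itself is easy to state in code. The following sketch illustrates only the consistency constraint, not the paper's probabilistic classifier: annotation to a GO term implies annotation to all of its ancestors, so a consistent score for a term can never exceed the score of any ancestor.

```python
# Sketch of the 'true-path' consistency rule (illustration only, not
# the paper's classifier): cap each term's score by the minimum score
# over all its ancestors in the GO hierarchy.
def true_path_consistent(scores, parents):
    """scores: raw score per term; parents: term -> list of parent terms.
    Returns scores capped so that child <= min(parents), resolved
    recursively so caps propagate from the root down."""
    consistent = {}
    def resolve(term):
        if term not in consistent:
            cap = min((resolve(p) for p in parents.get(term, [])), default=1.0)
            consistent[term] = min(scores[term], cap)
        return consistent[term]
    for t in scores:
        resolve(t)
    return consistent
```

Applying such a cap after the fact is exactly the kind of post-processing the proposed factorized classifier avoids, since its predictions satisfy the constraint by construction.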

Relevance:

30.00%

Publisher:

Abstract:

Numerous problems exist that can be modeled as traffic through a network in which constraints exist to regulate flow. Vehicular road travel, computer networks, and cloud based resource distribution, among others all have natural representations in this manner. As these networks grow in size and/or complexity, analysis and certification of the safety invariants becomes increasingly costly. The NetSketch formalism introduces a lightweight verification framework that allows for greater scalability than traditional analysis methods. The NetSketch tool was developed to provide the power of this formalism in an easy to use and intuitive user interface.

Relevance:

30.00%

Publisher:

Abstract:

This paper explores reasons for the high degree of variability recently observed in the sizes of ASes, and the processes by which this variable distribution develops. The AS size distribution is important for several reasons. First, when modeling network topologies, an AS size distribution assists in labeling routers with an associated AS. Second, AS size has been found to be positively correlated with the degree of the AS (its number of peering links), so understanding the distribution of AS sizes has implications for AS connectivity properties. Our model accounts for AS births, growth, and mergers. We analyze two models: one incorporates only the growth of hosts and ASes; a second extends that model to include mergers of ASes. We show analytically that, given reasonable assumptions about the nature of mergers, the resulting size distribution exhibits a power law tail with an exponent independent of the details of the merging process. We estimate parameters of the models from measurements obtained from Internet registries and from BGP tables. We then compare the models' solutions to empirical AS size distributions taken from the Mercator and Skitter datasets, and find that the simple growth-based model yields general agreement with the empirical data. Our analysis of the model in which mergers occur in a manner independent of the size of the merging ASes suggests that more detailed analysis of merger processes is needed.
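A toy version of the growth-only model can be simulated directly. The parameters below are assumptions for illustration (the paper's treatment is analytical): hosts arrive one at a time, founding a new AS with some probability or otherwise joining an existing AS in proportion to its current size, which is the standard recipe for a heavy-tailed size distribution.

```python
import random

# Illustrative growth-only simulation (parameters are assumptions):
# with probability p an arriving host founds a new AS; otherwise it
# joins an existing AS chosen proportionally to the AS's current size.
def grow(num_hosts, p=0.1, seed=1):
    rng = random.Random(seed)
    sizes = [1]  # one initial AS with one host
    for _ in range(num_hosts):
        if rng.random() < p:
            sizes.append(1)  # birth of a new AS
        else:
            # proportional-to-size choice of an existing AS
            i = rng.choices(range(len(sizes)), weights=sizes)[0]
            sizes[i] += 1
        # a size-independent merger step could be inserted here to
        # mimic the paper's second model
    return sizes
```

Sorting the resulting sizes and plotting rank against size on log-log axes would exhibit the straight-line tail characteristic of a power law.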

Relevance:

30.00%

Publisher:

Abstract:

The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. We ask a fundamental question: what basic power does TCP have to predict network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power so that TCP (or variants) can perform well in generalized network settings. To that end, we use maximum likelihood ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated, given network responses such as distributions of packet delays or TCP throughput conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide effective means for estimating packet loss type; and that packet delay is a better signal of network state than short-term throughput. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect estimation.
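The detector idea can be sketched as a two-hypothesis likelihood ratio test on delay. The Gaussian delay models and their parameters below are illustrative assumptions, not the paper's fitted distributions: congestion losses tend to follow inflated queueing delays, wireless losses do not, so the observed delay discriminates between the two.

```python
import math

# Sketch of a likelihood ratio test for loss type (distribution
# parameters are assumptions for illustration): model delay around a
# loss as Gaussian under each hypothesis and pick the likelier one.
def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify_loss(delay, congestion=(120.0, 20.0), wireless=(60.0, 15.0)):
    """Return 'congestion' if the likelihood ratio favors queueing-
    inflated delay, else 'wireless'. Tuples are (mean ms, std ms)."""
    l_c = gaussian_pdf(delay, *congestion)
    l_w = gaussian_pdf(delay, *wireless)
    return 'congestion' if l_c > l_w else 'wireless'
```

Weighting the ratio by a threshold other than 1 would encode the penalties on incorrect estimation that the paper studies.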

Relevance:

30.00%

Publisher:

Abstract:

Anomalies are unusual and significant changes in a network's traffic levels, which can often involve multiple links. Diagnosing anomalies is critical for both network operators and end users. It is a difficult problem because one must extract and interpret anomalous patterns from large amounts of high-dimensional, noisy data. In this paper we propose a general method to diagnose anomalies. This method is based on a separation of the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions. We show that this separation can be performed effectively using Principal Component Analysis. Using only simple traffic measurements from links, we study volume anomalies and show that the method can: (1) accurately detect when a volume anomaly is occurring; (2) correctly identify the underlying origin-destination (OD) flow which is the source of the anomaly; and (3) accurately estimate the amount of traffic involved in the anomalous OD flow. We evaluate the method's ability to diagnose (i.e., detect, identify, and quantify) both existing and synthetically injected volume anomalies in real traffic from two backbone networks. Our method consistently diagnoses the largest volume anomalies, and does so with a very low false alarm rate.
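The subspace separation at the heart of the method can be sketched with a PCA projection. The code below is a minimal illustration on a synthetic traffic matrix; the choice of the number of normal components `k` and the detection threshold are left open, as they are in any deployment.

```python
import numpy as np

# Sketch of the PCA subspace method: project link measurements onto the
# "normal" subspace spanned by the top k principal components; a large
# residual norm in the complementary subspace signals a volume anomaly.
def residual_norms(X, k):
    """X: (time, links) traffic matrix. Returns the per-timestep norm of
    the projection onto the residual (anomalous) subspace."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    P = vt[:k].T @ vt[:k]      # projector onto the normal subspace
    residual = Xc - Xc @ P     # component in the anomalous subspace
    return np.linalg.norm(residual, axis=1)
```

On synthetic data where all links follow one common diurnal pattern, a spike injected on a single link at one timestep stands out clearly in the residual norms while the normal traffic is absorbed by the first principal component.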

Relevance:

30.00%

Publisher:

Abstract:

Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate timeseries of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.

Relevance:

30.00%

Publisher:

Abstract:

Recent measurements of local-area and wide-area traffic have shown that network traffic exhibits variability at a wide range of scales (self-similarity). In this paper, we examine a mechanism that gives rise to self-similar network traffic and present some of its performance implications. The mechanism we study is the transfer of files or messages whose sizes are drawn from a heavy-tailed distribution. We examine its effects through detailed transport-level simulations of multiple TCP streams in an internetwork. First, we show that in a "realistic" client/server network environment (i.e., one with bounded resources and coupling among traffic sources competing for resources), the degree to which file sizes are heavy-tailed can directly determine the degree of traffic self-similarity at the link level. We show that this causal relationship is not significantly affected by changes in network resources (bottleneck bandwidth and buffer capacity), network topology, the influence of cross-traffic, or the distribution of interarrival times. Second, we show that properties of the transport layer play an important role in preserving and modulating this relationship. In particular, the reliable transmission and flow control mechanisms of TCP (Reno, Tahoe, or Vegas) serve to maintain the long-range dependency structure induced by heavy-tailed file size distributions. In contrast, if a non-flow-controlled and unreliable (UDP-based) transport protocol is used, the resulting traffic shows little self-similarity: although still bursty at short time scales, it has little long-range dependence. If flow-controlled, unreliable transport is employed, the degree of traffic self-similarity is positively correlated with the degree of throttling at the source. Third, in exploring the relationship between file sizes, transport protocols, and self-similarity, we are also able to show some of the performance implications of self-similarity.
We present data on the relationship between traffic self-similarity and network performance as captured by performance measures including packet loss rate, retransmission rate, and queueing delay. Increased self-similarity, as expected, results in degraded performance. Queueing delay, in particular, exhibits a drastic increase with increasing self-similarity. Throughput-related measures such as packet loss and retransmission rate, however, increase only gradually with increasing traffic self-similarity, as long as a reliable, flow-controlled transport protocol is used.
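Two of the basic ingredients, heavy-tailed size sampling and the variance-time view of self-similarity, can be sketched as follows; the parameters are illustrative assumptions, not the paper's simulation setup. For short-range-dependent traffic the variance of block averages falls off like 1/m; self-similar traffic decays much more slowly.

```python
import random

# Illustrative sketch (parameters are assumptions): draw heavy-tailed
# transfer sizes, and compute the variance of a traffic series after
# aggregation over blocks of m slots (the variance-time statistic).
def pareto_size(alpha, rng):
    """Heavy-tailed sample; the tail is infinite-variance for alpha < 2."""
    return rng.paretovariate(alpha)

def aggregate_variance(series, m):
    """Variance of the series averaged over non-overlapping blocks of m."""
    blocks = [sum(series[i:i + m]) / m for i in range(0, len(series) - m + 1, m)]
    mean = sum(blocks) / len(blocks)
    return sum((b - mean) ** 2 for b in blocks) / len(blocks)
```

Plotting `aggregate_variance` against m on log-log axes for traffic generated from Pareto-sized transfers, versus exponentially sized ones, is the classic way to make the slow variance decay visible.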