951 results for Shortest Path Length
Abstract:
In the present paper, the electrochemical behavior of ergosterol has been investigated by in situ circular dichroism (CD) spectroelectrochemistry with a long-path-length thin-layer cell. The values E0 (1.02 V) and αnα (0.302) of the electrooxidation process of ergosterol were obtained from the CD spectroelectrochemical data. A mechanism for the electrooxidation process of ergosterol is suggested.
Abstract:
The electrochemical redox processes of tryptophan were studied by in situ circular dichroic (CD) spectroelectrochemistry with a long optical path length thin-layer cell. The oxidation of tryptophan at low concentrations in basic aqueous solution is a two-electron irreversible electrochemical process which results from an irreversible subsequent chemical reaction. A method of treatment of CD spectral data for the irreversible electrochemical reaction is suggested, from which the values Ep/2 = 0.46 V, αnα = 0.313 and k0 = 2.4 x 10^-4 cm s^-1 (the standard heterogeneous reaction rate constant for tryptophan oxidation) were obtained.
Abstract:
The rate constants of very fast chemical reactions can generally be measured by electrochemical methods, but not by thin-layer electrochemical methods, because of the influence of diffusion. A long optical path length thin-layer cell (LOPTLC), with its large ratio of electrode area to solution volume, can be used to monitor fast chemical reactions in situ with high sensitivity and accuracy; it enables the absorption spectra to be measured without the influence of diffusion. In the present paper, a fast chemical reaction of Alizarin Red S (ARS) with its oxidized state has been studied. The reaction equilibrium constant (K) at different potentials can be determined from single-step potential-absorption spectra in the LOPTLC. An equilibrium constant of 7.94 x 10^5 l mol^-1 for the chemical reaction has been obtained from the plot of lg K vs. (E - E1^0'). The rate constant (k) at different potentials can be measured by single-step potential chronoabsorptiometry. A rate constant of 426.6 l mol^-1 s^-1 for the chemical reaction has been obtained from the plot of lg k vs. (E - E1^0') at (E - E1^0') = 0.
Abstract:
Kusiak and Finke discussed the process plan selection problem and formulated graph-theoretic and integer programming models for it. This paper presents a modified model whose solution reduces to finding the shortest path in an acyclic directed graph, which greatly simplifies the problem.
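The reduction above turns process-plan selection into a shortest-path computation on a directed acyclic graph, which can be solved in a single pass over a topological ordering. Below is a minimal illustrative sketch of that computation (the node names and costs are invented for the example; this is the generic DAG algorithm, not the paper's specific model):

```python
from collections import defaultdict

def dag_shortest_path(edges, source, target):
    """Shortest path in a directed acyclic graph via one pass in topological order."""
    graph = defaultdict(list)          # node -> list of (successor, cost)
    indeg = defaultdict(int)
    nodes = set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))

    # Kahn's algorithm for a topological order.
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    dist = {n: float("inf") for n in nodes}
    pred = {}
    dist[source] = 0.0
    for u in order:                     # relax every edge exactly once
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u

    path, n = [target], target          # assumes target is reachable from source
    while n != source:
        n = pred[n]
        path.append(n)
    return dist[target], path[::-1]

# Illustrative plan graph: arcs are candidate operations with processing costs.
edges = [("start", "op1", 3), ("start", "op2", 5), ("op1", "end", 4), ("op2", "end", 1)]
print(dag_shortest_path(edges, "start", "end"))   # (6.0, ['start', 'op2', 'end'])
```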
Abstract:
Crosshole seismic tomography has been broadly studied and applied in the fields of resource exploration and engineering exploration because of its special observation geometry and better resolution than conventional seismic exploration. This thesis presents the theory and method of crosshole seismic tomography. Building on previous studies, the thesis investigates the initial velocity model and the ray-tracing method, and develops three-dimensional tomography software. If the paths from transmitters to receivers are straight, all the cells that a ray passes through are assigned the same velocity. The cells that each ray passes through are recorded, the rays that pass through each cell are counted, and the average velocity of the rays passing through a cell is taken as the cell velocity. Analogously, an initial node velocity model can be constructed: the velocities of all cells belonging to a given node are summed, the number of such cells is counted for each node, and the ratio of the velocity sum to the number of cells is taken as the node velocity. The inversion result obtained from the initial node velocity model is better than that from the average cell velocity model. Ray bending and the Shortest Path for Rays (SPR) method each have their own shortcomings and limitations. Using the crooked rays obtained from SPR, rather than straight lines, as the starting point not only avoids ray bending converging to a local-minimum travel-time path, but also resolves the non-smooth ray paths produced by SPR. The hybrid method costs considerable computation time, roughly equal to the time that SPR alone requires. The Delphi development tool, based on the Object Pascal language standard, has the advantage of being object-oriented. TDTOM (Three-Dimensional Tomography) was developed in Delphi from the earlier DOS version. The inversion part was improved, which brings faster convergence. TDTOM can be used to perform velocity tomography from the first-arrival travel times of the seismic wave, and it offers a friendly user interface and convenient operation. TDTOM is used to reconstruct the velocity image for a set of crosshole data from the Karamay Oil Field. A geological interpretation is then given by comparing the inversion results of the different ray-tracing methods. High-velocity zones correspond to the cover of the oil reservoir, and low-velocity zones correspond to the reservoir or the steam-flooding layer.
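For readers unfamiliar with the Shortest Path for Rays (SPR) idea referenced above: it treats the gridded velocity model as a graph whose edge weights are travel times, so a first-arrival ray path can be extracted with Dijkstra's algorithm. The sketch below is a minimal 2D illustration under simplifying assumptions (nodes at grid points with unit spacing, edge travel time taken as the Euclidean step length times the mean slowness of its endpoints); it is not the thesis's TDTOM code:

```python
import heapq
import math

def spr_first_arrival(slowness, src, rec):
    """Dijkstra-based Shortest Path for Rays on a 2D grid of nodes.

    slowness[i][j] is 1/velocity at node (i, j); edge travel time is the
    Euclidean node spacing (unit here) times the mean slowness of the endpoints.
    Returns (travel_time, list of nodes on the ray).
    """
    ni, nj = len(slowness), len(slowness[0])
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    time = {src: 0.0}
    pred = {}
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if (i, j) == rec:
            break                          # first pop of the receiver is its first arrival
        if t > time[(i, j)]:
            continue                       # stale heap entry
        for di, dj in nbrs:
            a, b = i + di, j + dj
            if 0 <= a < ni and 0 <= b < nj:
                step = math.hypot(di, dj) * 0.5 * (slowness[i][j] + slowness[a][b])
                if t + step < time.get((a, b), float("inf")):
                    time[(a, b)] = t + step
                    pred[(a, b)] = (i, j)
                    heapq.heappush(heap, (t + step, (a, b)))
    ray, n = [rec], rec
    while n != src:
        n = pred[n]
        ray.append(n)
    return time[rec], ray[::-1]

# Illustrative 4x4 slowness model: a slow (low-velocity) block in the middle.
s = [[0.5, 0.5, 0.5, 0.5],
     [0.5, 1.0, 1.0, 0.5],
     [0.5, 1.0, 1.0, 0.5],
     [0.5, 0.5, 0.5, 0.5]]
t, ray = spr_first_arrival(s, src=(0, 0), rec=(3, 3))
print(round(t, 3), ray)   # the ray bends around the slow block
```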
Abstract:
3D wave-equation prestack depth migration is an effective tool for obtaining accurate images of complex geological structures, and it is part of 3D seismic data processing. 3D seismic data processing belongs to high-dimensional signal processing, and several difficult problems arise: how to handle high-dimensional operators, how to improve focusing, and how to construct the deconvolution operator. The realization of 3D wave-equation prestack depth migration not only achieves the leap from poststack to prestack, but also provides an important means of solving these difficult problems in high-dimensional signal processing. In this thesis, I carry out a series of studies aimed at solving these difficult problems, centered on 3D wave-equation prestack depth migration and using it as a tool; the thesis therefore serves the realization of 3D wave-equation prestack depth migration on one side and the improvement of the migration result on the other. The thesis is organized in five parts, whose main contents are summarized as follows. In the first part, I complete the projection from the 3D data space to a lower dimension using the big-matrix transfer and trace rearrangement, and realize the linear processing of high-dimensional signals. Firstly, I present the mathematical expression of 3D seismic data and its physical meaning, present the basic idea of the big-matrix transfer, and describe the realization of five transfer models as examples. Secondly, I present the basic ideas and rules for the rearrangement and parallel computation of 3D traces, and give an example. In the part on the conventional DMO focusing method, I first recall the history of DMO processing, give the fundamentals of DMO processing, and derive the DMO equation and its impulse response. I also prove the equivalence between DMO and prestack time migration from the kinematic character of DMO, and derive the relationship between wave-equation-based DMO and prestack time migration. Finally, I give an example of the DMO processing flow and synthetic data for theoretical models. In the part on wave-equation prestack depth migration, I first recall the history of migration from time to depth, from poststack to prestack, and from 2D to 3D, summarize the main migration methods, and point out their merits and shortcomings. Finally, I obtain the common-image-point gathers using the decomposed migration program code. In the part on residual moveout, I first describe the Viterbi algorithm, based on Markov processes and compound decision theory, and how to solve the shortest-path problem using the Viterbi algorithm. Based on this idea, I realize the residual moveout correction after 3D wave-equation prestack depth migration. Finally, I give an example of residual moveout correction on real 3D seismic data. In the part on the migration Green function, I first give the concept of the migration Green function and the 2D Green-function migration equation under the far-field approximation. Secondly, I prove the equivalence of wave-equation depth extrapolation algorithms, and then derive the Green-function migration equation. Finally, I present the response and migration result of the Green function for a point source, and analyze the effect of migration aperture on the prestack migration result. This research helps clarify the effect of migration aperture on the migration result, and supports the study of Green-function deconvolution to improve the focusing of migration.
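The residual-moveout step above casts event picking as a shortest-path problem over a trellis, which the Viterbi algorithm solves by dynamic programming. The following is a minimal, generic sketch (the reward panel and the linear jump penalty are illustrative assumptions, not the thesis's actual cost function):

```python
def viterbi_pick(score, jump_penalty=1.0):
    """Pick the best state sequence through a trellis by dynamic programming.

    score[t][s] is the reward (e.g. semblance) of state s at step t; moving from
    state p to state s between consecutive steps costs jump_penalty * |s - p|.
    Returns (total_score, picked states), i.e. the Viterbi path.
    """
    T, S = len(score), len(score[0])
    best = [row[:] for row in score]          # best[t][s]: best cumulative score ending in s
    back = [[0] * S for _ in range(T)]        # back-pointers for path recovery
    for t in range(1, T):
        for s in range(S):
            val, p = max((best[t - 1][p] - jump_penalty * abs(s - p), p) for p in range(S))
            best[t][s] = val + score[t][s]
            back[t][s] = p
    # Trace back from the best final state.
    s = max(range(S), key=lambda x: best[T - 1][x])
    path = [s]
    for t in range(T - 1, 0, -1):
        s = back[t][s]
        path.append(s)
    return best[T - 1][path[0]], path[::-1]

# Illustrative semblance panel: 4 time steps, 5 trial moveout states.
panel = [[0.1, 0.9, 0.2, 0.1, 0.0],
         [0.0, 0.8, 0.7, 0.1, 0.0],
         [0.0, 0.2, 0.9, 0.3, 0.0],
         [0.0, 0.1, 0.8, 0.9, 0.1]]
print(viterbi_pick(panel, jump_penalty=0.3))
```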
Abstract:
The modeling of petroleum flow paths (petroleum charging) and the details of the corresponding software development are presented in this paper, including the principle of petroleum charging, the quantitative method, and practical modeling in two oil fields. The modeling of petroleum flow paths is based on the results of basin modeling and on the principle that petroleum migrates along the shortest path from the source to the trap, on Petroleum System Dynamics (Prof. Wu Chonglong, 1998), on the concept of Petroleum Migration and Dynamic Accumulation (Zhou Donyan, Li Honhui, 2002), etc. The simulation is carried out by combining all parameters of the basin and considering the flow potential and the non-uniformity of the source and porous layers. It is an extension of basin modeling, but does not belong to it. It is a powerful simulation tool for petroleum systems; it can express quantitatively every kind of geological element of a petroleum basin, and can dynamically reproduce the geological processes with 3D graphics. As a result, we show that petroleum flow exhibits the phenomenon of preferred (main) paths, without invoking special theories such as deflection flow in fractures (Tian Kaiming, 1989, 1994; Zhang Fawang, Hou Xingwei, 1998) or flow potential (England, 1987). The contour map of petroleum flow quantity clearly shows where the divide (dividing slot) is, which convergence regions are the main flow paths of petroleum, and where the favorable plays are. Prospective traps can be identified and evaluated if there is enough information about the structural diagram, including the entrapment extent, spill point, area, oil column thickness, etc. Making full use of the results of basin modeling with this new tool, the critical moment and scheme of petroleum generation and expulsion can be shown clearly. It is a powerful analysis tool for geologists.
Abstract:
We present a constant-factor approximation algorithm for computing an embedding of the shortest path metric of an unweighted graph into a tree that minimizes the multiplicative distortion.
Abstract:
The identification of subject-specific traits extracted from patterns of brain activity still represents an important challenge. The need to detect distinctive brain features, which is relevant for biometric and brain-computer interface systems, has also been emphasized in monitoring the effect of clinical treatments and in evaluating the progression of brain disorders. Graph theory and network science tools have revealed fundamental mechanisms of functional brain organization in resting-state M/EEG analysis. Nevertheless, it is still not clearly understood how several methodological aspects may bias the topology of the reconstructed functional networks. In this context, the literature shows inconsistency in the chosen length of the selected epochs, impeding a meaningful comparison between results from different studies. In this study we propose an approach which aims to investigate the existence of a distinctive functional core (sub-network) using an unbiased reconstruction of network topology. Brain signals from a public and freely available EEG dataset were analyzed using a phase synchronization based measure, minimum spanning tree and k-core decomposition. The analysis was performed for each classical brain rhythm separately. Furthermore, we aim to provide a network approach insensitive to the effects that epoch length has on functional connectivity (FC) and network reconstruction. Two different measures, the phase lag index (PLI) and the Amplitude Envelope Correlation (AEC), were applied to EEG resting-state recordings for a group of eighteen healthy volunteers. Weighted clustering coefficient (CCw), weighted characteristic path length (Lw) and minimum spanning tree (MST) parameters were computed to evaluate the network topology. The analysis was performed on both scalp and source-space data. Results for the distinctive functional core show the highest classification rates from k-core decomposition in the gamma (EER=0.130, AUC=0.943) and high beta (EER=0.172, AUC=0.905) frequency bands. Results from the scalp analysis concerning the influence of epoch length show a decrease in both mean PLI and AEC values with an increase in epoch length, with a tendency to stabilize at a length of 12 seconds for PLI and 6 seconds for AEC. Moreover, CCw and Lw show very similar behaviour, with metrics based on AEC more reliable in terms of stability. In general, MST parameters stabilize at short epoch lengths, particularly for MSTs based on PLI (1-6 seconds versus 4-8 seconds for AEC). At the source level the results were even more reliable, with stability already at a duration of 1 second for PLI-based MSTs. Our results confirm that EEG analysis may represent an effective tool to identify subject-specific characteristics that may be of great impact for several bioengineering applications. Regarding epoch length, the present work suggests that both PLI and AEC depend on epoch length and that this has an impact on the reconstructed network topology, particularly at the scalp level. Source-level MST topology is less sensitive to differences in epoch length, therefore enabling the comparison of brain network topology between different studies.
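Since the minimum spanning tree is the backbone of the network analysis above, a minimal sketch of how an MST can be extracted from a functional connectivity matrix may help: connection strengths (e.g. PLI or AEC values) are converted to distances as 1 - strength, so Prim's algorithm keeps the strongest links. The matrix below is synthetic and the code is illustrative only, not the study's pipeline:

```python
import heapq

def mst_from_connectivity(conn):
    """Return the MST edges of a weighted functional network (Prim's algorithm).

    conn is a symmetric matrix of connectivity strengths in [0, 1] (e.g. PLI or AEC);
    distances are taken as 1 - strength, so the MST keeps the strongest links.
    """
    n = len(conn)
    in_tree = [False] * n
    edges = []
    heap = [(0.0, 0, 0)]                     # (distance, parent, node), seeded at node 0
    while heap and len(edges) < n - 1:
        d, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue
        in_tree[v] = True
        if u != v:
            edges.append((u, v, conn[u][v]))
        for w in range(n):
            if not in_tree[w] and w != v:
                heapq.heappush(heap, (1.0 - conn[v][w], v, w))
    return edges

# Illustrative 4-channel PLI matrix (symmetric, zero diagonal).
pli = [[0.0, 0.8, 0.3, 0.2],
       [0.8, 0.0, 0.6, 0.1],
       [0.3, 0.6, 0.0, 0.7],
       [0.2, 0.1, 0.7, 0.0]]
for u, v, strength in mst_from_connectivity(pli):
    print(u, v, strength)   # the 3 edges forming the MST backbone
```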
Abstract:
Recent work has shown the prevalence of small-world phenomena [28] in many networks. Small-world graphs exhibit a high degree of clustering, yet typically have short path lengths between arbitrary vertices. Internet AS-level graphs have been shown to exhibit small-world behaviors [9]. In this paper, we show that both Internet AS-level and router-level graphs exhibit small-world behavior. We attribute such behavior to two possible causes, namely the high variability of vertex degree distributions (which were found to follow approximately a power law [15]) and the preference of vertices for local connections. We show that both factors contribute, with different relative degrees, to the small-world behavior of AS-level and router-level topologies. Our findings underscore the inefficacy of the Barabasi-Albert model [6] in explaining the growth process of the Internet, and provide a basis for more promising approaches to the development of Internet topology generators. We present such a generator and show the resemblance of the synthetic graphs it generates to real Internet AS-level and router-level graphs. Using these graphs, we have examined how small-world behaviors affect the scalability of end-system multicast. Our findings indicate that lower variability of vertex degree and stronger preference for local connectivity in small-world graphs result in slower network neighborhood expansion and in longer average path lengths between two arbitrary vertices, which in turn results in better scaling of end-system multicast.
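The two quantities that define the small-world behavior discussed above are the clustering coefficient and the characteristic (average shortest) path length. Below is a minimal sketch of how both can be computed for an undirected, unweighted graph given as an adjacency list (the toy graph is illustrative, not one of the paper's topologies):

```python
from collections import deque

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph (adjacency sets)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for i in nbrs for j in nbrs if i < j and j in adj[i])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def average_path_length(adj):
    """Mean shortest-path length over all connected vertex pairs (BFS from each node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Illustrative graph: a 5-node ring plus one chord (a crude "small-world" rewiring).
adj = {0: {1, 2, 4}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {0, 3}}
print(clustering_coefficient(adj), average_path_length(adj))
```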
Abstract:
MPLS (Multi-Protocol Label Switching) has recently emerged to facilitate the engineering of network traffic. This can be achieved by directing packet flows over paths that satisfy multiple requirements. MPLS has been regarded as an enhancement to traditional IP routing, which has the following problems: (1) all packets with the same IP destination address have to follow the same path through the network; and (2) paths have often been computed based on static and single link metrics. These problems may cause traffic concentration, and thus degradation in quality of service. In this paper, we investigate by simulations a range of routing solutions and examine the tradeoff between scalability and performance. At one extreme, IP packet routing using dynamic link metrics provides a stateless solution but may lead to routing oscillations. At the other extreme, we consider the recently proposed Profile-based Routing (PBR), which uses knowledge of potential ingress-egress pairs as well as the traffic profile among them. Minimum Interference Routing (MIRA) is another recently proposed MPLS-based scheme, which only exploits knowledge of potential ingress-egress pairs but not their traffic profile. MIRA and the more conventional widest-shortest path (WSP) routing represent alternative MPLS-based approaches on the spectrum of routing solutions. We compare these solutions in terms of utility, bandwidth acceptance ratio, scalability (routing state and computational overhead), and load-balancing capability. While WSP is the simplest of the per-flow algorithms we consider, its performance is close to that of dynamic per-packet routing, without the potential instabilities of dynamic routing.
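Of the schemes compared above, widest-shortest path (WSP) routing is simple enough to sketch: restrict attention to minimum-hop paths and, among them, choose the one whose bottleneck residual bandwidth is largest. The code below is a minimal illustration of that rule (link bandwidths are invented; this is not the paper's simulator):

```python
from collections import deque

def widest_shortest_path(links, src, dst):
    """Widest-shortest path: among minimum-hop paths, maximize the bottleneck bandwidth.

    links maps a directed link (u, v) to its residual bandwidth.
    Returns (hop_count, bottleneck_bandwidth, path), or None if dst is unreachable.
    """
    adj = {}
    for (u, v), bw in links.items():
        adj.setdefault(u, []).append((v, bw))

    # Step 1: BFS for minimum hop counts from src.
    hops = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v, _ in adj.get(u, []):
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    if dst not in hops:
        return None

    # Step 2: widen the bottleneck level by level over the minimum-hop DAG.
    width = {src: float("inf")}
    pred = {}
    for u in sorted(hops, key=hops.get):
        for v, bw in adj.get(u, []):
            if hops.get(v) == hops[u] + 1:          # edge lies on some minimum-hop path
                w = min(width[u], bw)
                if w > width.get(v, -1.0):
                    width[v] = w
                    pred[v] = u

    path, n = [dst], dst
    while n != src:
        n = pred[n]
        path.append(n)
    return hops[dst], width[dst], path[::-1]

# Illustrative topology: directed link -> residual bandwidth (e.g. in Mb/s).
links = {("s", "a"): 10, ("a", "t"): 10, ("s", "b"): 100, ("b", "t"): 40,
         ("s", "c"): 100, ("c", "b"): 100}
print(widest_shortest_path(links, "s", "t"))   # (2, 40, ['s', 'b', 't'])
```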
Abstract:
This thesis elaborates on the problem of preprocessing a large graph so that single-pair shortest-path queries can be answered quickly at runtime. Computing shortest paths is a well studied problem, but exact algorithms do not scale well to real-world huge graphs in applications that require very short response time. The focus is on approximate methods for distance estimation, in particular on landmark-based distance indexing. This approach involves choosing some nodes as landmarks and computing offline, for each node in the graph, its embedding, i.e., the vector of its distances from all the landmarks. At runtime, when the distance between a pair of nodes is queried, it can be quickly estimated by combining the embeddings of the two nodes. Choosing optimal landmarks is shown to be hard, and thus heuristic solutions are employed. Given a budget of memory for the index, which translates directly into a budget of landmarks, different landmark selection strategies can yield dramatically different results in terms of accuracy. A number of simple methods that scale well to large graphs are therefore developed and experimentally compared. The simplest methods choose central nodes of the graph, while the more elaborate ones select central nodes that are also far away from one another. The efficiency of the techniques presented in this thesis is tested experimentally using five different real-world graphs with millions of edges; for a given accuracy, they require as much as 250 times less space than the current approach, which selects landmarks at random. Finally, they are applied to two important problems arising naturally in large-scale graphs, namely social search and community detection.
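The landmark index described above stores, for each node, its exact distances to a small set of landmarks; at query time the triangle inequality brackets the true distance. A minimal sketch on an unweighted, connected graph follows (the "highest degree" landmark choice is used purely for illustration; the thesis evaluates several selection strategies):

```python
from collections import deque

def bfs_distances(adj, src):
    """Exact hop distances from src (BFS over an unweighted graph)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def build_index(adj, num_landmarks):
    """Offline phase: pick landmarks and store every node's distance vector (embedding)."""
    landmarks = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:num_landmarks]
    return [bfs_distances(adj, l) for l in landmarks]

def estimate(index, u, v):
    """Online phase: lower/upper bounds on d(u, v) from the triangle inequality."""
    upper = min(d[u] + d[v] for d in index)
    lower = max(abs(d[u] - d[v]) for d in index)
    return lower, upper

# Illustrative small graph (adjacency sets); assumed connected.
adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
index = build_index(adj, num_landmarks=2)
print(estimate(index, 0, 5))   # bounds bracketing the true distance (4 hops)
```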
Abstract:
To provide real-time service or to engineer constraint-based paths, networks require the underlying routing algorithm to be able to find low-cost paths that satisfy given Quality-of-Service (QoS) constraints. However, the problem of constrained shortest (least-cost) path routing is known to be NP-hard, so heuristics have been proposed to find near-optimal solutions. These heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit their general applicability, or are too costly in terms of execution time to be applicable to large networks. In this paper, we focus on solving the delay-constrained minimum-cost path problem, and present a fast algorithm to find a near-optimal solution. This algorithm, called DCCR (for Delay-Cost-Constrained Routing), is a variant of the k-shortest-path algorithm. DCCR uses a new adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space. Thus, DCCR can return a near-optimal solution in a very short time. Furthermore, we use the method proposed by Blokh and Gutin to further reduce the search space by using a tighter bound on path cost. This makes our algorithm more accurate and even faster. We call this improved algorithm SSR+DCCR (for Search Space Reduction + DCCR). Through extensive simulations, we confirm that SSR+DCCR performs very well compared to the optimal but very expensive solution.
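The abstract does not spell out DCCR's adaptive weight function, so the sketch below only illustrates the underlying problem, delay-constrained least-cost routing, with a simple exact label-setting search that prunes partial paths violating the delay bound. It is suitable only for small graphs and is not the DCCR or SSR+DCCR heuristic:

```python
import heapq

def delay_constrained_min_cost(edges, src, dst, max_delay):
    """Cheapest src -> dst path whose total delay stays within max_delay.

    edges maps a directed link (u, v) to a (cost, delay) pair. Labels are expanded
    in order of cost, pruned when the delay bound is exceeded, and discarded when
    dominated (another label at the same node is no worse in both cost and delay).
    """
    adj = {}
    for (u, v), cd in edges.items():
        adj.setdefault(u, []).append((v, cd))
    heap = [(0, 0, src, (src,))]              # (cost, delay, node, path so far)
    labels = {}                               # node -> list of expanded (cost, delay) labels
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, delay, list(path)    # first feasible pop is the cheapest
        if any(c <= cost and d <= delay for c, d in labels.get(u, [])):
            continue                          # dominated by an earlier label
        labels.setdefault(u, []).append((cost, delay))
        for v, (c, d) in adj.get(u, []):
            if delay + d <= max_delay:
                heapq.heappush(heap, (cost + c, delay + d, v, path + (v,)))
    return None

# Illustrative network: directed link -> (cost, delay in ms).
edges = {("s", "a"): (1, 10), ("a", "t"): (1, 10), ("s", "b"): (5, 2),
         ("b", "t"): (5, 2), ("s", "t"): (20, 1)}
print(delay_constrained_min_cost(edges, "s", "t", max_delay=15))  # picks s-b-t
```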
Abstract:
Recent empirical studies have shown that Internet topologies exhibit power laws of the form y ∝ x^α for the following relationships: (P1) outdegree of a node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies. These factors are (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws as well as other properties, namely the diameter, the average path length and the clustering coefficient. Different kinds of network topologies are generated: (T1) topologies generated using our parametrized generator, which we call BRITE; (T2) random topologies generated using the well-known Waxman model; (T3) Transit-Stub topologies generated using the GT-ITM tool; and (T4) regular grid topologies. We observe that some generated topologies may not obey power laws P1 and P2. Thus, the existence of these power laws can be used to validate the accuracy of a given tool in generating representative Internet topologies. Power laws P3 and P4 were observed in nearly all considered topologies, but different topologies showed different values of the power exponent α. Thus, while the presence of power laws P3 and P4 does not give strong evidence for the representativeness of a generated topology, the value of α in P3 and P4 can be used as a litmus test for the representativeness of a generated topology. We also find that factors F1 and F2 are the key contributors in our study that make our generated topologies resemble the Internet.
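A power law y ∝ x^α appears as a straight line of slope α on a log-log plot, so exponents of the kind discussed above can be estimated with a least-squares fit in log space. Below is a minimal sketch with synthetic data (purely illustrative; the paper's exponents come from measured and generated topologies):

```python
import math

def fit_power_law_exponent(xs, ys):
    """Least-squares slope of log(y) versus log(x), i.e. the exponent alpha in y ~ x^alpha."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic rank-vs-outdegree data following y = 100 * x^-0.8 exactly, so alpha comes back as -0.8.
ranks = list(range(1, 51))
degrees = [100.0 * r ** -0.8 for r in ranks]
print(round(fit_power_law_exponent(ranks, degrees), 3))   # -0.8
```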
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of data being transported through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With the above formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information and are similar to the positive charges in electrostatics, the destinations are sinks of information and are similar to negative charges, and the network is similar to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one of the applications of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme is based on changing the permittivity coefficient to a higher value in the places of the network where nodes have high residual energy, and setting it to a low value in the places where the nodes do not have much energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest path and weighted shortest path schemes. Our initial focus is on the case where there is only one destination in the network, and later we extend our approach to the case where there are multiple destinations. In the case of multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The complexity of the optimization problem in this case lies in how to define the regions of attraction of the destinations and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). Then we show that in the optimal assignment of the communication load of the network to the destinations, the value of that potential field should be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments. In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks.
We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network as a response to packet drops. We define metrics that describe the responsiveness of TCP aggregates, and suggest two methods for determining the values of these quantities. The first method is based on a test in which we drop a few packets from the aggregate intentionally and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce tests of responsiveness for aggregates, and call the method the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate by dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets being sent at the start of the TCP 3-way handshake, and we use the fact that the rate of ACK packets being exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to the TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, which means packet drops are performed at a rate given by a function of time. We use the analogy of our problem with multiple-access communication to find signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of orthogonality, the performance does not degrade because of cross-interference caused by simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.