232 results for Minimal


Relevance: 10.00%

Abstract:

We propose a physical mechanism to explain the origin of the intense burst of massive-star formation seen in colliding/merging, gas-rich, field spiral galaxies. We explicitly take account of the different parameters for the two main mass components, H_2 and H I, of the interstellar medium within a galaxy and follow their consequent different evolution during a collision between two galaxies. We also note that, in a typical spiral galaxy like our own, the Giant Molecular Clouds (GMCs) are in near-virial equilibrium and form the current sites of massive-star formation, but have a low star formation rate. We show that this star formation rate is increased following a collision between galaxies. During a typical collision between two field spiral galaxies, the H I clouds from the two galaxies undergo collisions at a relative velocity of approximately 300 km s^-1. However, the GMCs, with their smaller volume filling factor, do not collide. The collisions among the H I clouds from the two galaxies lead to the formation of a hot, ionized, high-pressure remnant gas. The overpressure due to this hot gas causes a radiative shock compression of the outer layers of a preexisting GMC in the overlapping wedge region. This makes these layers gravitationally unstable, thus triggering a burst of massive-star formation in the initially barely stable GMCs. The resulting typical IR luminosity from the young, massive stars in a pair of colliding galaxies is estimated to be approximately 2 x 10^11 L_⊙, in agreement with the observed values. In our model, the massive-star formation occurs in situ in the overlapping regions of a pair of colliding galaxies. We can thus explain the origin of enhanced star formation over an extended central area several kiloparsecs in size, as seen in typical colliding galaxies, and also the origin of starbursts in extranuclear regions of disk overlap as seen in Arp 299 (NGC 3690/IC 694) and in Arp 244 (NGC 4038/39). Whether the IR emission from the central region or that from the surrounding extranuclear galactic disk dominates depends on the geometry and the epoch of the collision and on the initial radial gas distribution in the two galaxies. In general, the central starburst would be stronger than that in the disks, owing to the higher preexisting gas densities in the central region. The burst of star formation is expected to last over a galactic gas disk crossing time of approximately 4 x 10^7 yr. We can also explain the simultaneous existence of nearly normal CO galaxy luminosities and shocked H_2 gas, as seen in colliding field galaxies. This is a minimal model, in that the only necessary condition for it to work is that there should be sufficient overlap between the spatial gas distributions of the colliding galaxy pair.
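
As a quick arithmetic check of the quoted timescale, the sketch below divides an assumed galactic gas disk scale (the ~12 kpc figure is illustrative, not from the abstract) by the ~300 km s^-1 collision velocity the abstract quotes; the result is consistent with the ~4 x 10^7 yr crossing time.

```python
# Back-of-the-envelope check of the starburst duration: crossing time =
# disk scale / relative velocity. The 12 kpc scale is an assumed value;
# the ~300 km/s relative velocity comes from the abstract.
KPC_M = 3.086e19            # metres per kiloparsec
YR_S = 3.156e7              # seconds per year

disk_scale_m = 12 * KPC_M   # assumed gas disk crossing scale
v_rel_ms = 300e3            # ~300 km/s HI cloud collision velocity

t_cross_yr = disk_scale_m / v_rel_ms / YR_S
print(f"crossing time ~ {t_cross_yr:.1e} yr")   # ~3.9e7 yr, matching ~4 x 10^7 yr
```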

Relevance: 10.00%

Abstract:

A spanning tree T of a graph G is said to be a tree t-spanner if the distance between any two vertices in T is at most t times their distance in G. A graph that has a tree t-spanner is called a tree t-spanner admissible graph. The problem of deciding whether a graph is tree t-spanner admissible is NP-complete for any fixed t >= 4 and is linearly solvable for t <= 2. The case t = 3 still remains open. A chordal graph is called a 2-sep chordal graph if all of its minimal a-b vertex separators, for every pair of non-adjacent vertices a and b, are of size two. It is known that not all 2-sep chordal graphs admit tree 3-spanners. This paper presents a structural characterization and a linear time recognition algorithm of tree 3-spanner admissible 2-sep chordal graphs. Finally, a linear time algorithm to construct a tree 3-spanner of a tree 3-spanner admissible 2-sep chordal graph is proposed.
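
The defining stretch condition is easy to state in code. The following minimal sketch (a hypothetical example, not from the paper) checks whether a given spanning tree T is a tree t-spanner of G by comparing BFS distances:

```python
from collections import deque

# T is a tree t-spanner of G if d_T(u, v) <= t * d_G(u, v) for all u, v.
def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_tree_t_spanner(g_adj, t_adj, t):
    for u in g_adj:
        dg, dt = bfs_dist(g_adj, u), bfs_dist(t_adj, u)
        if any(dt[v] > t * dg[v] for v in dg if v != u):
            return False
    return True

# The 4-cycle a-b-c-d-a admits the path a-b-c-d as a tree 3-spanner.
g = {'a': ['b', 'd'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c', 'a']}
tree = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(is_tree_t_spanner(g, tree, 3))   # True
```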

Relevance: 10.00%

Abstract:

Mining and blending operations in the high grade iron ore deposit under study are performed to optimize recovery with minimal alumina content while maintaining required levels of other chemical components and a proper mix of ore types. In the present work, the regionalisation of alumina in the ores has been studied independently, and its effects on global and local recoverable tonnage, as well as on alternatives for mining operations, have been evaluated. The global tonnage recovery curves for blocks (20m x 20m x 12m) obtained by simulation closely approximated the curves obtained theoretically using a change of support under the discretised Gaussian model. Variations in block size up to 80m x 20m x 12m did not affect the recovery, as the horizontal dimensions of the blocks are small in relation to the range of the variogram. A comparison of the local tonnage recovery curves obtained through multiple conditional simulations with those obtained by the method of uniform conditioning of block grades on an estimate of the 100m x 100m x 12m panel grade reveals comparable results only in panels which have been well conditioned and possess an ensemble simulation mean close to the ordinary kriged value for the panel. A study of simple alternative mining sequences on the conditionally simulated deposit shows that concentrating mining operations simultaneously on a single bench enhances the fluctuation in the alumina values of the ore mined.
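
As a toy illustration of the recoverable-tonnage idea (synthetic grades only; this is not the deposit's data or the paper's geostatistical workflow), the sketch below computes the fraction of blocks acceptable under each alumina cutoff:

```python
import numpy as np

# For a contaminant like alumina, the recovery curve here is taken as the
# fraction of blocks whose grade falls at or below each cutoff.
rng = np.random.default_rng(0)
block_al2o3 = rng.lognormal(mean=0.5, sigma=0.4, size=5000)   # % alumina per block

for cutoff in np.linspace(1.0, 4.0, 7):
    recovered = (block_al2o3 <= cutoff).mean()
    print(f"cutoff {cutoff:4.1f}% Al2O3 -> {100 * recovered:5.1f}% of blocks recovered")
```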

Relevance: 10.00%

Abstract:

Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a minimum-size set of key nodes that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.

Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spread of a technology in the market through viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes, those which can influence other nodes in the social network in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a minimum-size set of influential nodes that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
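
A minimal sketch of the underlying idea, under an assumed simple coverage game (illustrative; not the authors' exact SPIN algorithm or diffusion model): estimate each node's Shapley value by averaging marginal contributions over random permutations, then rank nodes by that value.

```python
import random

# A coalition's value is the number of nodes it covers (members plus their
# neighbours); Shapley values are estimated by Monte Carlo permutation sampling.
def coverage_value(coalition, adj):
    covered = set(coalition)
    for u in coalition:
        covered.update(adj[u])
    return len(covered)

def shapley_estimates(adj, samples=2000, seed=1):
    rng = random.Random(seed)
    nodes = list(adj)
    phi = {u: 0.0 for u in nodes}
    for _ in range(samples):
        rng.shuffle(nodes)
        coalition, value = [], 0
        for u in nodes:                  # marginal contribution of u in this order
            coalition.append(u)
            new_value = coverage_value(coalition, adj)
            phi[u] += (new_value - value) / samples
            value = new_value
    return phi

adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
phi = shapley_estimates(adj)
print(sorted(phi, key=phi.get, reverse=True)[:2])   # [0, 3]: the two hub nodes
```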

Relevance: 10.00%

Abstract:

The performance of a program will ultimately be limited by its serial (scalar) portion, as pointed out by Amdahl's Law. Studies of instruction-level parallelism reported thus far have mixed data-parallel program portions with scalar program portions, often leading to contradictory and controversial results. We report an instruction-level behavioral characterization of scalar code containing minimal data parallelism, extracted from highly vectorized programs of the PERFECT benchmark suite running on a Cray Y-MP system. We classify scalar basic blocks according to their instruction mix, characterize the data dependencies seen in each class, and, as a first step, measure the maximum intrablock instruction-level parallelism available. We observe skewed rather than balanced instruction distributions in scalar code and in individual basic block classes of scalar code; nonuniform distribution of parallelism across instruction classes; and, as expected, limited available intrablock parallelism. We identify frequently occurring data-dependence patterns and discuss new instructions to reduce latency. Toward effective scalar hardware, we study latency-pipelining trade-offs and restricted multiple-instruction-issue mechanisms.
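
As an illustration of the intrablock measurement (a hypothetical sketch, not the paper's trace-driven methodology), the maximum available ILP of a basic block can be taken as its instruction count divided by the critical path length of its data-dependence DAG:

```python
# Maximum intrablock ILP = instruction count / critical dependence chain length.
def max_ilp(num_instructions, deps):
    """deps maps an instruction index to the indices it depends on."""
    depth = {}
    def chain(i):                     # longest dependence chain ending at i
        if i not in depth:
            depth[i] = 1 + max((chain(j) for j in deps.get(i, ())), default=0)
        return depth[i]
    critical_path = max(chain(i) for i in range(num_instructions))
    return num_instructions / critical_path

# Five instructions: 2 depends on 0 and 1; 4 depends on 2 and 3.
print(max_ilp(5, {2: [0, 1], 4: [2, 3]}))   # 5 / 3 ~ 1.67
```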

Relevance: 10.00%

Abstract:

We address risk-minimizing option pricing in a regime-switching market where the floating interest rate depends on a finite-state Markov process. The growth rate and the volatility of the stock also depend on the Markov process. Using the minimal martingale measure, we show that the locally risk-minimizing prices for certain exotic options satisfy a system of Black-Scholes partial differential equations with appropriate boundary conditions. We find the corresponding hedging strategies and the residual risk. We develop suitable numerical methods to compute option prices.
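
A minimal numerical sketch in the spirit of the last sentence (all parameter values assumed for illustration; the paper's actual scheme and option types may differ): explicit finite differences for a European call under two regimes, with the price functions coupled through the generator Q of the Markov chain.

```python
import numpy as np

# Coupled Black-Scholes PDEs for a two-regime market: for each regime i,
#   dV_i/dt + r_i S dV_i/dS + 0.5 sigma_i^2 S^2 d2V_i/dS2 - r_i V_i
#     + sum_j Q[i, j] V_j = 0,
# solved backwards from the payoff with an explicit scheme.
K, T = 100.0, 1.0
r = np.array([0.02, 0.06])           # regime-dependent interest rates
sigma = np.array([0.15, 0.35])       # regime-dependent volatilities
Q = np.array([[-0.5, 0.5],           # generator of the regime-switching chain
              [1.0, -1.0]])

S = np.linspace(0.0, 300.0, 301)
dS, dt = S[1] - S[0], 5e-5           # dt kept small for explicit stability
V = np.tile(np.maximum(S - K, 0.0), (2, 1))   # terminal call payoff, each regime

for _ in range(int(T / dt)):         # march backwards from maturity
    V_S = (V[:, 2:] - V[:, :-2]) / (2 * dS)
    V_SS = (V[:, 2:] - 2 * V[:, 1:-1] + V[:, :-2]) / dS**2
    Si = S[1:-1]
    V[:, 1:-1] += dt * (r[:, None] * Si * V_S
                        + 0.5 * (sigma[:, None] * Si) ** 2 * V_SS
                        - r[:, None] * V[:, 1:-1] + (Q @ V)[:, 1:-1])
    V[:, -1] = S[-1] - K             # deep in-the-money boundary; V at S=0 stays 0

i = np.searchsorted(S, K)
print(f"call at S=K: regime 0 -> {V[0, i]:.2f}, regime 1 -> {V[1, i]:.2f}")
```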

Relevance: 10.00%

Abstract:

Failure to repair DNA double-strand breaks (DSBs) can lead to cell death or cancer. Although nonhomologous end joining (NHEJ) has been studied extensively in mammals, little is known about it in primary tissues. Using oligomeric DNA mimicking endogenous DSBs, we studied NHEJ in cell-free extracts of rat tissues. Results show that the efficiency of NHEJ is highest in lungs compared to other somatic tissues. DSBs with compatible and blunt ends joined without modifications, while noncompatible ends joined with minimal alterations in lungs and testes. Thymus exhibited elevated joining, followed by brain and spleen, which could be correlated with NHEJ gene expression. However, NHEJ efficiency was poor in terminally differentiated organs like heart, kidney and liver. Strikingly, NHEJ junctions from these tissues also showed extensive deletions and insertions. Hence, for the first time, we show that although the mode of joining is generally comparable, the efficiency of NHEJ varies among primary tissues of mammals.

Relevance: 10.00%

Abstract:

Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a_1, b_1] x [a_2, b_2] x ... x [a_k, b_k]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e. each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2 box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) != 1. The second result of the paper relates the boxicity of a graph G with a natural partial order associated with its extended double cover, denoted G_c. Let P_c be the natural height-2 poset associated with G_c, obtained by making A the set of minimal elements and B the set of maximal elements. We show that box(G)/2 <= dim(P_c) <= 2 box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2 box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best known upper bound of Delta^2 + 2. (2) There exist graphs with boxicity Omega(Delta log Delta). This disproves a conjecture that the boxicity of a graph is O(Delta). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5-epsilon)) for any epsilon > 0, unless NP = ZPP.
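
To make the boxicity definition concrete, here is a small hypothetical check (our example, not from the paper) that a box assignment realizes a graph as the intersection graph of 2-dimensional boxes, using the 4-cycle, whose boxicity is 2:

```python
# A box assignment realizes G iff boxes intersect exactly for adjacent vertices.
def boxes_intersect(a, b):
    """Axis-parallel boxes given as lists of (lo, hi) intervals, one per dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1 for (lo1, hi1), (lo2, hi2) in zip(a, b))

def realizes(edges, vertices, box_of):
    for i, u in enumerate(vertices):
        for v in vertices[i + 1:]:
            if boxes_intersect(box_of[u], box_of[v]) != ({u, v} in edges):
                return False
    return True

# The 4-cycle 1-2-3-4: dimension 1 separates the non-edge {1, 3},
# dimension 2 separates the non-edge {2, 4}.
edges = [{1, 2}, {2, 3}, {3, 4}, {4, 1}]
boxes = {1: [(0, 1), (0, 3)], 2: [(0, 3), (0, 1)],
         3: [(2, 3), (0, 3)], 4: [(0, 3), (2, 3)]}
print(realizes(edges, [1, 2, 3, 4], boxes))   # True
```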

Relevance: 10.00%

Abstract:

The cell envelope of Mycobacterium tuberculosis (M. tuberculosis) is composed of a variety of lipids, including mycolic acids, sulpholipids, lipoarabinomannans, etc., which impart rigidity crucial for its survival and pathogenesis. Acyl-CoA carboxylase (ACC) provides malonyl-CoA and methylmalonyl-CoA, the committed precursors for fatty acid and mycolic acid synthesis, respectively. Biotin Protein Ligase (BPL/BirA) activates apo-biotin carboxyl carrier protein (BCCP) by biotinylating it to an active holo-BCCP. A minimal peptide (Schatz), an efficient substrate for Escherichia coli BirA, failed to serve as a substrate for M. tuberculosis Biotin Protein Ligase (MtBPL). MtBPL specifically biotinylates the homologous BCCP domain, MtBCCP87, but not EcBCCP87. This is a unique feature of MtBPL, as EcBirA lacks such stringent substrate specificity. This feature is also reflected in the lack of self-/promiscuous biotinylation by MtBPL. The N-terminus/HTH domain of EcBirA has the self-biotinylatable lysine residue, whose biotinylation is inhibited in the presence of the Schatz peptide, a peptide designed to act as a universal acceptor for EcBirA. This suggests that when biotin is limiting, EcBirA preferentially catalyzes biotinylation of BCCP over self-biotinylation. The R118G mutant of EcBirA showed enhanced self- and promiscuous biotinylation, but its homologue, R69A MtBPL, did not exhibit these properties. The catalytic domain of MtBPL was characterized further by limited proteolysis. Holo-MtBPL is protected from proteolysis by biotinyl-5'-AMP, an intermediate of the MtBPL-catalyzed reaction. In contrast, apo-MtBPL is completely digested by trypsin within 20 min of co-incubation. Substrate selectivity and the inability to promote self-biotinylation are exquisite features of MtBPL and are a consequence of the unique molecular mechanism of an enzyme adapted for the high turnover of fatty acid biosynthesis.

Relevance: 10.00%

Abstract:

A large external memory bandwidth requirement leads to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing the power and bandwidth requirements. The low-cost, transformless compression technique uses a lossy reference for motion estimation to reduce memory traffic, and a lossless reference for motion compensation (MC) to avoid drift. Thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that, by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encode application. A 24-39% reduction in peak bandwidth and a 23-31% reduction in total average power consumption are observed for IBBP sequences.
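
A minimal sketch of the drift-free split the abstract describes (bit depths and parameter choices assumed for illustration): motion estimation would read only the coarse frame, while motion compensation adds the stored error back to recover the exact reference.

```python
import numpy as np

# Lossy reference for motion estimation; lossy + stored error = exact
# reference for motion compensation, so no drift accumulates.
def split_reference(frame, dropped_bits=3):
    lossy = frame >> dropped_bits               # transformless lossy reference
    error = frame - (lossy << dropped_bits)     # bounded by 2**dropped_bits - 1
    return lossy, error

def exact_reference(lossy, error, dropped_bits=3):
    return (lossy << dropped_bits) + error      # lossless reference for MC

frame = np.random.default_rng(0).integers(0, 256, (4, 4), dtype=np.uint16)
lossy, error = split_reference(frame)
assert (exact_reference(lossy, error) == frame).all()
print("max quantization error:", error.max())   # < 2**3 = 8
```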

Relevance: 10.00%

Abstract:

We consider the problem of goal seeking by robots in unknown environments. We present a frontier-based algorithm for finding a route to a goal in a fully unknown environment, where information about the goal region (GR), the region where the goal is most likely to be located, is available. Our algorithm efficiently chooses the best candidate frontier cell, which is on the boundary between explored space and unexplored space, having the maximum "goal seeking index", to reach the goal in a minimal number of moves. A modification of the algorithm is also proposed to further reduce the number of moves toward the goal. The algorithm has been tested extensively in simulation runs, and the results demonstrate that it effectively directs the robot to the goal and completes the search task in a minimal number of moves in bounded as well as unbounded environments. The algorithm is shown to perform as well as a state-of-the-art agent-centered search algorithm, RTAA*, in cluttered environments if the exact location of the goal is known at the beginning of the mission, and to perform better in uncluttered environments.
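
A small grid-world sketch of frontier selection (the scoring rule below is an assumed stand-in for the paper's "goal seeking index"): a frontier cell is a free cell bordering unknown space, and we pick the one minimizing travel cost plus estimated distance to the goal region's centroid.

```python
from collections import deque

FREE, UNKNOWN, WALL = 0, 1, 2

def best_frontier(grid, robot, gr_centroid):
    """Pick the frontier cell minimizing travel cost + distance to the GR centroid."""
    rows, cols = len(grid), len(grid[0])
    dist, q = {robot: 0}, deque([robot])
    best, best_score = None, float("inf")
    while q:
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid[nr][nc] == UNKNOWN:          # (r, c) borders unknown space
                score = dist[(r, c)] + abs(r - gr_centroid[0]) + abs(c - gr_centroid[1])
                if score < best_score:
                    best, best_score = (r, c), score
            elif grid[nr][nc] == FREE and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    return best

grid = [[FREE, FREE, UNKNOWN],
        [FREE, WALL, UNKNOWN],
        [FREE, FREE, UNKNOWN]]
print(best_frontier(grid, robot=(0, 0), gr_centroid=(0, 2)))   # (0, 1)
```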

Relevance: 10.00%

Abstract:

This paper studies the problem of designing a logical topology over a wavelength-routed all-optical network (AON) physical topology. The physical topology consists of the nodes and fiber links in the network. On an AON physical topology, we can set up lightpaths between pairs of nodes, where a lightpath represents a direct optical connection without any intermediate electronics. The set of lightpaths along with the nodes constitutes the logical topology. For a given network physical topology and traffic pattern (relative traffic distribution among the source-destination pairs), our objective is to design the logical topology and the routing algorithm on that topology so as to minimize the network congestion while constraining the average delay seen by a source-destination pair and the amount of processing required at the nodes (degree of the logical topology). We will see that ignoring the delay constraints can result in fairly convoluted logical topologies with very long delays. On the other hand, in all our examples, imposing them results in a minimal increase in congestion. While the number of wavelengths required to embed the resulting logical topology on the physical all-optical topology is also a constraint in general, we find that in many cases of interest this number can be quite small. We formulate the combined logical topology design and routing problem described above (ignoring the constraint on the number of available wavelengths) as a mixed integer linear programming problem, which we then solve for a number of cases of a six-node network. Since this programming problem is computationally intractable for larger networks, we split it into two subproblems: logical topology design, which is computationally hard and will probably require heuristic algorithms, and routing, which can be solved by a linear program. We then compare the performance of several heuristic topology design algorithms (that do take wavelength assignment constraints into account) against that of randomly generated topologies, as well as lower bounds derived in the paper.
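
As a toy version of the congestion objective (illustrative; the paper solves the routing subproblem as a linear program, not by shortest paths): given a set of directed lightpaths and traffic demands, route each demand on a shortest logical path and report the congestion, i.e. the maximum load carried by any lightpath.

```python
from collections import deque

# Route each demand over a shortest path in the logical topology of lightpaths
# and return the congestion (maximum lightpath load). Assumes every destination
# is reachable in the logical topology.
def route_and_congest(lightpaths, demands):
    adj = {}
    for u, v in lightpaths:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])
    load = {e: 0.0 for e in lightpaths}
    for (s, d), t in demands.items():
        parent, q = {s: None}, deque([s])
        while q and d not in parent:            # BFS over directed lightpaths
            u = q.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        v = d
        while parent[v] is not None:            # add the traffic along the path
            load[(parent[v], v)] += t
            v = parent[v]
    return max(load.values())

# 4-node unidirectional ring of lightpaths with two demands.
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(route_and_congest(ring, {(0, 2): 5.0, (1, 3): 2.0}))   # 7.0 on lightpath (1, 2)
```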

Relevance: 10.00%

Abstract:

A theoretical study of the dynamics of photo-electron transfer reactions in the Marcus inverted regime is presented. This study is motivated partly by the recent proposal of Barbara et al. (J. Phys. Chem. 96, 3728, 1991) that a minimal model of an electron transfer reaction should consist of a polar solvent mode (X), a low-frequency vibrational mode (Q) and one high-frequency mode (q). Interplay between these modes may be responsible for the crossover observed in the dynamics from a solvent-controlled to a vibrationally controlled electron transfer. The following results have been obtained. (i) In the case of slowly relaxing solvents, the proximity of the point of excitation to an effective sink on the excited surface is critical in determining the decay of the reactant population. This is because the Franck-Condon overlap between the reactant ground and the product excited states decreases rapidly with increase in the quantum number of the product vibrational state. (ii) Non-exponential solvation dynamics has an important effect in determining the rates of electron transfer. In particular, both biphasic solvation and a large coupling between the reactant and product states may be needed to explain the experimental results.

Relevance: 10.00%

Abstract:

A novel approach for lossless as well as lossy compression of monochrome images using Boolean minimization is proposed. The image is split into bit planes. Each bit plane is divided into windows or blocks of variable size. Each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions using ESPRESSO, a cube-based two-level function minimizer. The minimized cubes are encoded using a code set which satisfies the prefix property. Our lossless compression technique involves linear prediction as a preprocessing step and has a compression ratio comparable to that of the JPEG lossless compression technique. Our lossy compression technique involves reducing the number of bit planes as a preprocessing step, which incurs minimal loss in the information of the image. The bit planes that remain after preprocessing are compressed using our lossless compression technique based on Boolean minimization. Qualitatively, one cannot visually distinguish between the original image and the lossy image, and the mean square error is kept low. For a mean square error close to that of the JPEG lossy compression technique, our method gives a better compression ratio. The compression scheme is relatively slow, while the decompression time is comparable to that of JPEG.
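
The bit-plane preprocessing step is simple to illustrate (a sketch assuming 8-bit images; the ESPRESSO minimization and prefix-code stages of the paper are not reproduced here):

```python
import numpy as np

# Split an 8-bit image into binary planes; lossy preprocessing simply drops
# the least significant planes before each remaining plane is encoded.
def bit_planes(img):
    return [(img >> b) & 1 for b in range(8)]          # plane 0 is the LSB

def reassemble(planes, keep=8):
    out = np.zeros_like(planes[0], dtype=np.uint8)
    for b in range(8 - keep, 8):                       # drop the 8-keep low planes
        out |= planes[b].astype(np.uint8) << b
    return out

img = np.random.default_rng(0).integers(0, 256, (4, 4), dtype=np.uint8)
lossy = reassemble(bit_planes(img), keep=6)            # two planes discarded
print(np.abs(img.astype(int) - lossy.astype(int)).max())   # error < 2**2 = 4
```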

Relevance: 10.00%

Abstract:

A method based on the minimal spanning tree is extended to a collection of points in three dimensions. Two parameters, the average edge length and its standard deviation, characterize the disorder. The structural phase diagram for a monatomic system of particles and the characteristic values for the uniform random distribution of points have been obtained. The method is applied to hard-sphere and Lennard-Jones systems. These systems occupy distinct regions in the structural phase diagram. The structure of the Lennard-Jones system approaches that of defective close-packed arrangements at low temperatures, whereas in the liquid regime it deviates from the close-packed configuration.
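
A compact sketch of the descriptor (using a uniform random point set rather than the paper's simulated systems; scipy routines assumed available):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial import distance_matrix

# Build the Euclidean minimal spanning tree of 3D points and report the two
# disorder parameters: the mean MST edge length and its standard deviation.
rng = np.random.default_rng(0)
points = rng.random((200, 3))                  # 200 points in the unit cube

tree = minimum_spanning_tree(distance_matrix(points, points))
edge_lengths = tree.data                       # the N - 1 MST edge weights
print(f"mean edge = {edge_lengths.mean():.3f}, std = {edge_lengths.std():.3f}")
```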