959 results for Minimal Realizations
Abstract:
Active particles contain internal degrees of freedom with the ability to take in and dissipate energy and, in the process, execute systematic movement. Examples include all living organisms and their motile constituents such as molecular motors. This article reviews recent progress in applying the principles of nonequilibrium statistical mechanics and hydrodynamics to form a systematic theory of the behavior of collections of active particles (active matter) with only minimal regard to microscopic details. A unified view of the many kinds of active matter is presented, encompassing not only living systems but also their inanimate analogs. Theory and experiment are discussed side by side.
Abstract:
We propose a physical mechanism to explain the origin of the intense burst of massive-star formation seen in colliding/merging, gas-rich, field spiral galaxies. We explicitly take account of the different parameters for the two main mass components, H₂ and H I, of the interstellar medium within a galaxy and follow their consequent different evolution during a collision between two galaxies. We also note that, in a typical spiral galaxy like our own, the Giant Molecular Clouds (GMCs) are in near-virial equilibrium and form the current sites of massive-star formation, but have a low star formation rate. We show that this star formation rate is increased following a collision between galaxies. During a typical collision between two field spiral galaxies, the H I clouds from the two galaxies undergo collisions at a relative velocity of approximately 300 km s⁻¹. However, the GMCs, with their smaller volume filling factor, do not collide. The collisions among the H I clouds from the two galaxies lead to the formation of a hot, ionized, high-pressure remnant gas. The over-pressure due to this hot gas causes a radiative shock compression of the outer layers of a preexisting GMC in the overlapping wedge region. This makes these layers gravitationally unstable, thus triggering a burst of massive-star formation in the initially barely stable GMCs. The resulting value of the typical IR luminosity from the young, massive stars from a pair of colliding galaxies is estimated to be approximately 2 × 10^11 L_⊙, in agreement with the observed values. In our model, the massive-star formation occurs in situ in the overlapping regions of a pair of colliding galaxies. We can thus explain the origin of enhanced star formation over an extended, central area approximately several kiloparsecs in size, as seen in typical colliding galaxies, and also the origin of starbursts in extranuclear regions of disk overlap as seen in Arp 299 (NGC 3690/IC 694) and in Arp 244 (NGC 4038/39). Whether the IR emission from the central region or that from the surrounding extranuclear galactic disk dominates depends on the geometry and the epoch of the collision and on the initial radial gas distribution in the two galaxies. In general, the central starburst would be stronger than that in the disks, due to the higher preexisting gas densities in the central region. The burst of star formation is expected to last over a galactic gas disk crossing time of approximately 4 × 10^7 yr. We can also explain the simultaneous existence of nearly normal CO galaxy luminosities and shocked H₂ gas, as seen in colliding field galaxies. This is a minimal model, in that the only necessary condition for it to work is that there should be a sufficient overlap between the spatial gas distributions of the colliding galaxy pair.
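As a quick consistency check on the quoted timescale (a back-of-the-envelope sketch; the gas-disk scale of roughly 12 kpc is an assumption, not stated in the abstract):

```latex
% Crossing time t ~ L / v, with an assumed disk scale L ≈ 12 kpc and the
% quoted relative velocity v ≈ 300 km s^{-1}.
t \;\approx\; \frac{L}{v}
  \;=\; \frac{12\,\mathrm{kpc}\times 3.09\times10^{16}\,\mathrm{km\,kpc^{-1}}}
             {300\,\mathrm{km\,s^{-1}}}
  \;\approx\; 1.2\times10^{15}\,\mathrm{s}
  \;\approx\; 4\times10^{7}\,\mathrm{yr}.
```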
Abstract:
A spanning tree T of a graph G is said to be a tree t-spanner if the distance between any two vertices in T is at most t times their distance in G. A graph that has a tree t-spanner is called a tree t-spanner admissible graph. The problem of deciding whether a graph is tree t-spanner admissible is NP-complete for any fixed t >= 4 and is linearly solvable for t <= 2. The case t = 3 still remains open. A chordal graph is called a 2-sep chordal graph if all of its minimal a-b vertex separators for every pair of non-adjacent vertices a and b are of size two. It is known that not all 2-sep chordal graphs admit tree 3-spanners. This paper presents a structural characterization and a linear time recognition algorithm of tree 3-spanner admissible 2-sep chordal graphs. Finally, a linear time algorithm to construct a tree 3-spanner of a tree 3-spanner admissible 2-sep chordal graph is proposed.
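To make the spanner condition concrete, here is a minimal Python sketch of the definitional check for a given spanning tree (not the linear-time recognition or construction algorithm described above); the graph, tree and parameter t are illustrative only. It relies on the standard fact that verifying the stretch on the endpoints of each edge of G suffices.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source over an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_tree_t_spanner(graph_adj, tree_adj, t):
    """Check d_T(u, v) <= t * d_G(u, v); checking the edges of G suffices,
    since tree distances are subadditive along any shortest G-path."""
    for u in graph_adj:
        dist_in_tree = bfs_distances(tree_adj, u)
        for v in graph_adj[u]:                        # every edge uv has d_G = 1
            if dist_in_tree.get(v, float("inf")) > t:
                return False
    return True

# Illustrative 4-cycle G with spanning path T = 4-1-2-3: a tree 3-spanner.
G = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
T = {1: {2, 4}, 2: {1, 3}, 3: {2}, 4: {1}}
print(is_tree_t_spanner(G, T, 3))   # True
```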
Abstract:
Mycotoxins are secondary metabolites of filamentous fungi. They pose a health risk to humans and animals due to their harmful biological properties and common occurrence in food and feed. Liquid chromatography/mass spectrometry (LC/MS) has gained popularity in the trace analysis of food contaminants. In this study, the applicability of the technique was evaluated in multi-residue methods for mycotoxins aiming at the simultaneous detection of chemically diverse compounds. Methods were developed for the rapid determination of toxins produced by the fungal genera Aspergillus, Fusarium, Penicillium and Claviceps from cheese, cereal-based agar matrices and grains. Analytes were extracted from these matrices with organic solvents. Minimal sample clean-up was carried out before the analysis of the mycotoxins with reversed-phase LC coupled to tandem MS (MS/MS). The methods were validated and applied to investigating mycotoxins in cheese and ergot alkaloid occurrence in Finnish grains. Additionally, the toxin production of two Fusarium species predominant in northern Europe was studied. Nine mycotoxins could be determined from cheese with the method developed. The limits of quantification (LOQ) allowed quantification at concentrations varying from 0.6 to 5.0 µg/kg. The recoveries ranged between 96 and 143 %, and the within-day repeatability (as relative standard deviation, RSDr) between 2.3 and 12.1 %. Roquefortine C and mycophenolic acid could be detected at levels of 300 up to 12000 µg/kg in the mould cheese samples analysed. A total of 29 or 31 toxins could be analysed with the method developed for agar matrices and grains, with the LOQs ranging overall from 0.1 to 1250 µg/kg. The recoveries ranged generally between 44 and 139 %, and the RSDr between 2.0 and 38 %. Type-A trichothecenes and beauvericin were determined from the cereal-based agar and grain cultures of F. sporotrichioides and F. langsethiae. T-2 toxin was the main metabolite, with average levels reaching 22000 µg/kg in the grain cultures after 28 days of incubation. The method developed for ten ergot alkaloids in grains allowed their quantification at levels varying from 0.01 to 10 µg/kg. The recoveries ranged from 51 to 139 %, and the RSDr from 0.6 to 13.9 %. Ergot alkaloids were measured in barley and rye at average levels of 59 and 720 µg/kg, respectively. The two most prevalent alkaloids were ergocornine and ergocristine. The LC/MS methods developed enabled rapid detection of mycotoxins in applications where several toxins co-occurred. Generally, the performance of the methods was good, allowing reliable analysis of the mycotoxins of interest with sufficiently low quantification limits. However, the variation in validation results highlighted the challenges related to optimising this type of multi-residue method. New data were obtained on the occurrence of mycotoxins in mould cheeses and of ergot alkaloids in Finnish grains. In addition, the study revealed the high mycotoxin-producing potential of two fungi common in Finnish crops. This information can be useful when risks related to fungal and mycotoxin contamination are assessed.
Abstract:
Hantaviruses are one of the five genera of the vector-borne virus family Bunyaviridae. While the other members of the family are transmitted via arthropods, hantaviruses are carried and transmitted by rodents and insectivores. Occasional transmission to humans occurs via inhalation of aerosolized rodent excreta. When transmitted to humans, hantaviruses cause hemorrhagic fever with renal syndrome (HFRS, in Eurasia, mortality ~10%) and hantavirus cardiopulmonary syndrome (HCPS, in the Americas, mortality ~40%). The single-stranded, negative-sense RNA genome of hantaviruses comprises segments S, M and L, which respectively encode the nucleocapsid protein (N), the glycoproteins Gn and Gc, and the RNA-dependent RNA polymerase (RdRp or L protein). The genome segments, encapsidated by the N protein to form ribonucleoprotein (RNP), are enclosed inside a lipid envelope decorated by spikes formed of Gn and Gc. The focus of this study was to understand the mechanisms and interactions through which the virion is formed and maintained. We observed that when extracted from virions both Gn and Gc favor homo- over hetero-oligomerization. Using ultracentrifugation and gel filtration, the minimal glycoprotein complexes extracted from virions by detergent were observed to be tetrameric Gn and homodimeric Gc. These results led us to suggest a model in which tetrameric Gn complexes are interconnected through homodimeric Gc units to form the grid-like surface architecture described for hantaviruses. This model was found to correlate with the three-dimensional (3D) reconstruction of the virion surface created using cryo-electron tomography (cryo-ET). The 3D density map showed the spike complex formed of Gn and Gc to be 10 nm high and to display a four-fold symmetry with dimensions of 15 nm × 15 nm. This unique square-shaped complex on a roughly round virion poses a difficulty for assembly, since a sphere cannot be tiled with rectangles. Thus, additional interactions are likely required for virion assembly. In cryo-ET we observed that the RNP makes occasional contacts with the viral membrane, suggesting an interaction between the spike and the RNP. We were able to demonstrate this interaction using various techniques, and showed that both Gn and Gc contribute to it. This led us to suggest that, in addition to the interactions between Gn and Gc, the interaction between spike and RNP is also required for assembly. We found galectin-3 binding protein (referred to as 90K) to co-purify with the virions and showed an interaction between 90K and the virion. Analysis of plasma samples taken from patients hospitalized for Puumala virus infection showed increased concentrations of 90K in the acute phase, and the increased 90K level was found to correlate with several parameters that reflect the severity of acute HFRS. The results of these studies confirmed, but also challenged, some of the dogmas on the structure and assembly of hantaviruses. We confirmed that Gn and RNP do interact, as long assumed. On the other hand, we demonstrated that the glycoproteins Gn and Gc exist as homo-oligomers or appear in large hetero-oligomeric complexes, rather than forming primarily heterodimers as was previously assumed. This work provided new insight into the structure and assembly of hantaviruses.
Abstract:
A linear state feedback gain vector used in the control of a single input dynamical system may be constrained because of the way feedback is realized. Some examples of feedback realizations which impose constraints on the gain vector are: static output feedback, constant gain feedback for several operating points of a system, and two-controller feedback. We consider a general class of problems of stabilization of single input dynamical systems with such structural constraints and give a numerical method to solve them. Each of these problems is cast into a problem of solving a system of equalities and inequalities. In this formulation, the coefficients of the quadratic and linear factors of the closed-loop characteristic polynomial are the variables. To solve the system of equalities and inequalities, a continuous realization of the gradient projection method and a barrier method are used under the homotopy framework. Our method is illustrated with an example for each class of control structure constraint.
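As a concrete illustration of one such constraint, the sketch below checks closed-loop stability for a two-state single-input system when the gain is restricted to static output feedback. The matrices and gains are invented for this sketch, and it is not the equality/inequality homotopy method proposed above.

```python
import numpy as np

# Illustrative system x' = A x + b u (values assumed for this sketch only).
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
b = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])        # static output feedback: only x1 is measured

def closed_loop_eigs(g):
    """Eigenvalues of A - b k under the structural constraint k = g C."""
    k = g @ C                     # gain vector confined to the row space of C
    return np.linalg.eigvals(A - b @ k)

for g in (np.array([[0.0]]), np.array([[5.0]])):
    eigs = closed_loop_eigs(g)
    verdict = "stable" if np.all(eigs.real < 0) else "unstable"
    print(f"g = {g.item():.1f}: eigenvalues {np.round(eigs, 3)} -> {verdict}")
```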
Abstract:
Mining and blending operations in the high-grade iron ore deposit under study are performed to optimize recovery with minimal alumina content while maintaining required levels of the other chemical components and a proper mix of ore types. In the present work the regionalisation of alumina in the ores has been studied independently, and its effects on global and local recoverable tonnage as well as on alternative mining operations have been evaluated. The global tonnage recovery curves for blocks (20 m x 20 m x 12 m) obtained by simulation closely approximated the curves obtained theoretically using a change of support under the discretised Gaussian model. Variations in block size up to 80 m x 20 m x 12 m did not affect the recovery, as the horizontal dimensions of the blocks are small in relation to the range of the variogram. A comparison of the local tonnage recovery curves obtained through multiple conditional simulations with those obtained by uniform conditioning of block grades on an estimate of the 100 m x 100 m x 12 m panel grade reveals comparable results only in panels which have been well conditioned and possess an ensemble simulation mean close to the ordinary kriged value for the panel. A study of simple alternative mining sequences on the conditionally simulated deposit shows that concentrating mining operations on a single bench enhances the fluctuation in alumina values of the ore mined.
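For readers unfamiliar with tonnage recovery curves, a small Python sketch follows: from simulated block alumina grades (synthetic values, purely illustrative), it computes the fraction of blocks acceptable at each alumina threshold and their mean grade. It is a simplification of, not a substitute for, the conditional-simulation and change-of-support workflow described above.

```python
import numpy as np

# Synthetic block alumina grades (%), standing in for conditionally simulated
# block values; the distribution parameters are assumptions of this sketch.
rng = np.random.default_rng(42)
block_al2o3 = rng.lognormal(mean=0.9, sigma=0.3, size=10_000)

def tonnage_recovery(grades, thresholds):
    """For each alumina threshold, the fraction of blocks at or below it
    (recoverable tonnage fraction) and their mean alumina grade."""
    curve = []
    for z in thresholds:
        ok = grades[grades <= z]
        fraction = ok.size / grades.size
        mean_grade = ok.mean() if ok.size else float("nan")
        curve.append((z, fraction, mean_grade))
    return curve

for z, frac, mean in tonnage_recovery(block_al2o3, [2.0, 2.5, 3.0, 3.5]):
    print(f"Al2O3 <= {z:.1f}%  tonnage fraction {frac:.2f}  mean grade {mean:.2f}%")
```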
Abstract:
Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a minimal-size set of key nodes that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPIN) algorithms for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, the NIPS coauthorship data set, the Netscience data set, the High-Energy Physics data set, and the Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient. Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spread of a technology in the market using viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the most influential nodes in the social network, which can influence other nodes in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximize the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm which is based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
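A rough Python sketch of the underlying idea follows: rank nodes by Monte Carlo estimates of their Shapley value under a simple coverage game v(S) = |S together with its neighbors|, which stands in here for the information-diffusion game actually used by the SPIN algorithms (an assumption of this sketch, as is the toy graph).

```python
import random
from collections import defaultdict

def shapley_ranking(adj, samples=500, seed=0):
    """Rank nodes by Monte Carlo Shapley-value estimates for the coverage
    game; accumulated totals are proportional to the estimates, which is
    enough for ranking."""
    rng = random.Random(seed)
    nodes = list(adj)
    totals = defaultdict(float)
    for _ in range(samples):
        rng.shuffle(nodes)                      # one random permutation
        covered = set()
        for u in nodes:
            marginal = len(({u} | adj[u]) - covered)
            totals[u] += marginal
            covered |= {u} | adj[u]
    return sorted(adj, key=lambda u: totals[u], reverse=True)

# Toy graph: node 0 is a hub, 5 and 6 form an isolated edge.
adj = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {6}, 6: {5}}
print(shapley_ranking(adj)[:3])                 # hub 0 ranks first
```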
Abstract:
The performance of a program will ultimately be limited by its serial (scalar) portion, as pointed out by Amdahl's Law. Reported studies thus far of instruction-level parallelism have mixed data-parallel program portions with scalar program portions, often leading to contradictory and controversial results. We report an instruction-level behavioral characterization of scalar code containing minimal data-parallelism, extracted from highly vectorized programs of the PERFECT benchmark suite running on a Cray Y-MP system. We classify scalar basic blocks according to their instruction mix, characterize the data dependencies seen in each class, and, as a first step, measure the maximum intrablock instruction-level parallelism available. We observe skewed rather than balanced instruction distributions in scalar code and in individual basic block classes of scalar code; nonuniform distribution of parallelism across instruction classes; and, as expected, limited available intrablock parallelism. We identify frequently occurring data-dependence patterns and discuss new instructions to reduce latency. Toward effective scalar hardware, we study latency-pipelining trade-offs and restricted multiple instruction issue mechanisms.
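As a small illustration of what "maximum intrablock instruction-level parallelism" measures (a sketch under the simplifying assumptions of unit instruction latency and unlimited issue width, not the paper's Cray Y-MP methodology):

```python
def max_intrablock_ilp(deps):
    """Maximum ILP of one basic block modeled as a dependence DAG:
    (number of instructions) / (length of the critical dependence chain),
    assuming unit latency and unlimited issue width."""
    depth = {}
    def chain_length(i):
        if i not in depth:
            depth[i] = 1 + max((chain_length(j) for j in deps[i]), default=0)
        return depth[i]
    critical_path = max(chain_length(i) for i in deps)
    return len(deps) / critical_path

# Hypothetical 6-instruction block: two independent 3-instruction chains;
# deps[i] = set of earlier instructions that instruction i depends on.
deps = {0: set(), 1: {0}, 2: {1}, 3: set(), 4: {3}, 5: {4}}
print(max_intrablock_ilp(deps))   # 6 / 3 = 2.0
```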
Abstract:
We address risk minimizing option pricing in a regime switching market where the floating interest rate depends on a finite state Markov process. The growth rate and the volatility of the stock also depend on the Markov process. Using the minimal martingale measure, we show that the locally risk minimizing prices for certain exotic options satisfy a system of Black-Scholes partial differential equations with appropriate boundary conditions. We find the corresponding hedging strategies and the residual risk. We develop suitable numerical methods to compute option prices.
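For concreteness, in a model with regimes i = 1, ..., k, regime-dependent rate r_i and volatility sigma_i, and Markov-chain generator (lambda_ij), such a coupled system typically takes the following form for a European-style payoff h (a hedged illustration; the exotic options treated in the paper carry their own boundary conditions):

```latex
% One Black--Scholes-type PDE per regime i, coupled through the chain's generator.
\frac{\partial \varphi_i}{\partial t}
  + \tfrac{1}{2}\,\sigma_i^2 s^2 \frac{\partial^2 \varphi_i}{\partial s^2}
  + r_i\, s\, \frac{\partial \varphi_i}{\partial s}
  - r_i\, \varphi_i
  + \sum_{j \neq i} \lambda_{ij}\,\bigl(\varphi_j(t,s) - \varphi_i(t,s)\bigr) = 0,
\qquad
\varphi_i(T, s) = h(s), \quad i = 1, \dots, k .
```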
Abstract:
Failure to repair DNA double-strand breaks (DSBs) can lead to cell death or cancer. Although nonhomologous end joining (NHEJ) has been studied extensively in mammals, little is known about it in primary tissues. Using oligomeric DNA mimicking endogenous DSBs, NHEJ was studied in cell-free extracts of rat tissues. Results show that the efficiency of NHEJ is highest in lungs compared to other somatic tissues. DSBs with compatible and blunt ends joined without modifications, while noncompatible ends joined with minimal alterations in lungs and testes. Thymus exhibited elevated joining, followed by brain and spleen, which could be correlated with NHEJ gene expression. However, NHEJ efficiency was poor in terminally differentiated organs such as heart, kidney and liver. Strikingly, NHEJ junctions from these tissues also showed extensive deletions and insertions. Hence, for the first time, we show that although the mode of joining is generally comparable, the efficiency of NHEJ varies among primary tissues of mammals.
Abstract:
Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals [a(1), b(1)] x [a(2), b(2)] x ... x [a(k), b(k)]. The boxicity of G, box(G), is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e. each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let P = (S, P) be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of P, dim(P), is the minimum integer t such that P can be expressed as the intersection of t total orders. Let G(P) be the underlying comparability graph of P. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset P, box(G(P))/(chi(G(P)) - 1) <= dim(P) <= 2 box(G(P)), where chi(G(P)) is the chromatic number of G(P) and chi(G(P)) is not equal to 1. The second result of the paper relates the boxicity of a graph G with a natural partial order associated with its extended double cover, denoted G(c). Let P(c) be the natural height-2 poset associated with G(c), obtained by making A (one copy of the vertex set of G in G(c)) the set of minimal elements and B (the other copy) the set of maximal elements. We show that box(G)/2 <= dim(P(c)) <= 2 box(G) + 4. These results have some immediate and significant consequences. The upper bound dim(P) <= 2 box(G(P)) allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree Delta is O(Delta log^2 Delta), which is an improvement over the best known upper bound of Delta^2 + 2. (2) There exist graphs with boxicity Omega(Delta log Delta). This disproves a conjecture that the boxicity of a graph is O(Delta). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of O(n^(0.5-epsilon)) for any epsilon > 0, unless NP = ZPP.
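A sketch of how consequence (1) follows (hedged: it assumes the known Furedi-Kahn bound dim(Q) = O(k log^2 k) for a poset Q whose comparability graph has maximum degree k, and the observation that the comparability graph of P(c) is G(c), whose maximum degree is at most Delta(G) + 1):

```latex
\mathrm{box}(G) \;\le\; 2\,\dim\!\bigl(P_c\bigr)
              \;=\; O\!\bigl((\Delta+1)\log^{2}(\Delta+1)\bigr)
              \;=\; O\!\bigl(\Delta\log^{2}\Delta\bigr).
```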
Abstract:
The cell envelope of Mycobacterium tuberculosis (M. tuberculosis) is composed of a variety of lipids, including mycolic acids, sulpholipids, lipoarabinomannans, etc., which impart the rigidity crucial for its survival and pathogenesis. Acyl-CoA carboxylase (ACC) provides malonyl-CoA and methylmalonyl-CoA, the committed precursors essential for fatty acid and mycolic acid synthesis, respectively. Biotin Protein Ligase (BPL/BirA) activates the apo-biotin carboxyl carrier protein (BCCP) by biotinylating it to the active holo-BCCP. A minimal peptide (Schatz), an efficient substrate for Escherichia coli BirA, failed to serve as a substrate for M. tuberculosis Biotin Protein Ligase (MtBPL). MtBPL specifically biotinylates the homologous BCCP domain, MtBCCP87, but not EcBCCP87. This is a unique feature of MtBPL, as EcBirA lacks such stringent substrate specificity. This feature is also reflected in the lack of self/promiscuous biotinylation by MtBPL. The N-terminal/HTH domain of EcBirA has the self-biotinylatable lysine residue, whose biotinylation is inhibited in the presence of the Schatz peptide, a peptide designed to act as a universal acceptor for EcBirA. This suggests that when biotin is limiting, EcBirA preferentially catalyzes biotinylation of BCCP over self-biotinylation. The R118G mutant of EcBirA showed enhanced self- and promiscuous biotinylation, but its homologue, R69A MtBPL, did not exhibit these properties. The catalytic domain of MtBPL was characterized further by limited proteolysis. Holo-MtBPL is protected from proteolysis by biotinyl-5'-AMP, an intermediate of the MtBPL-catalyzed reaction. In contrast, apo-MtBPL is completely digested by trypsin within 20 min of co-incubation. Substrate selectivity and the inability to promote self-biotinylation are exquisite features of MtBPL and are a consequence of the unique molecular mechanism of an enzyme adapted for the high turnover of fatty acid biosynthesis.
Abstract:
A large external memory bandwidth requirement leads to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements. The low-cost, transformless compression technique uses a lossy reference for motion estimation to reduce memory traffic, and a lossless reference for motion compensation (MC) to avoid drift. Thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that, by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encode application. A 24-39% reduction in peak bandwidth and a 23-31% reduction in total average power consumption are observed for IBBP sequences.
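The core storage idea (a lossy copy for motion estimation plus a separately stored, bounded error so motion compensation stays lossless) can be sketched as follows; the quantization step and frame values are placeholders, and this is not the paper's actual codec.

```python
import numpy as np

STEP = 8                                    # assumed quantization step

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(4, 4), dtype=np.int16)   # toy reference block

lossy = (ref // STEP) * STEP                # coarse copy read by motion estimation
error = ref - lossy                         # bounded: 0 <= error < STEP, so it
                                            # packs into log2(STEP) = 3 bits/sample
reconstructed = lossy + error               # fetched only for motion compensation

assert np.array_equal(reconstructed, ref)   # exact reference -> no drift in MC
assert 0 <= error.min() and error.max() < STEP
print(lossy)
print(error)
```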
Abstract:
We consider the problem of goal seeking by robots in unknown environments. We present a frontier-based algorithm for finding a route to a goal in a fully unknown environment, where information about the goal region (GR), the region where the goal is most likely to be located, is available. Our algorithm efficiently chooses the best candidate frontier cell, i.e. a cell on the boundary between explored space and unexplored space, having the maximum "goal seeking index", to reach the goal in a minimal number of moves. A modification of the algorithm is also proposed to further reduce the number of moves toward the goal. The algorithm has been tested extensively in simulation runs, and the results demonstrate that it effectively directs the robot to the goal and completes the search task in a minimal number of moves in bounded as well as unbounded environments. The algorithm is shown to perform as well as a state-of-the-art agent-centered search algorithm, RTAA*, in cluttered environments if the exact location of the goal is known at the beginning of the mission, and is shown to perform better in uncluttered environments.
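A minimal Python sketch of frontier selection on an occupancy grid is given below. The scoring function is a placeholder standing in for the paper's goal seeking index (its exact form is not reproduced here), and the grid, robot pose, and goal-region centre are invented for illustration.

```python
UNKNOWN, FREE, OBSTACLE = -1, 0, 1

def frontier_cells(grid):
    """Known-free cells adjacent to at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    cells = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == FREE and any(
                0 <= r + dr < rows and 0 <= c + dc < cols
                and grid[r + dr][c + dc] == UNKNOWN
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                cells.append((r, c))
    return cells

def best_frontier(grid, robot, goal_region_centre):
    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    def index(cell):            # placeholder "goal seeking index"
        return 1.0 / (1 + manhattan(robot, cell) + manhattan(cell, goal_region_centre))
    return max(frontier_cells(grid), key=index, default=None)

grid = [
    [FREE, FREE,     UNKNOWN, UNKNOWN],
    [FREE, OBSTACLE, UNKNOWN, UNKNOWN],
    [FREE, FREE,     FREE,    UNKNOWN],
]
print(best_frontier(grid, robot=(2, 0), goal_region_centre=(2, 3)))   # (2, 2)
```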