977 results for Complete K-ary Tree


Relevance:

20.00%

Publisher:

Abstract:

A thermodynamic study of the Ti-O system at 1573 K has been conducted using a combination of thermogravimetric and emf techniques. The results indicate that the variation of oxygen potential with the nonstoichiometric parameter δ in the stability domain of TiO2−δ with rutile structure can be represented by the relation Δμ(O2) = −6RT ln δ − 711970 (±1600) J/mol. The corresponding relation between the nonstoichiometric parameter δ and the partial pressure of oxygen across the whole stability range of TiO2−δ at 1573 K is δ ∝ p(O2)^(−1/6). It is therefore evident that the oxygen-deficient behavior of nonstoichiometric TiO2−δ is dominated by the presence of doubly charged oxygen vacancies and free electrons. The high-precision measurements enabled the resolution of oxygen potential steps corresponding to the different Magnéli phases (TinO2n−1) up to n = 15. Beyond this value of n, the oxygen potential steps were too small to be resolved. Based on the composition of the Magnéli phase in equilibrium with TiO2−δ, the maximum value of n is estimated to be 28. The chemical potential of titanium was derived as a function of composition using the Gibbs-Duhem relation. Gibbs energies of formation of the Magnéli phases were derived from the chemical potentials of oxygen and titanium. The values of −2441.8 (±5.8) kJ/mol for Ti4O7 and −1775.4 (±4.3) kJ/mol for Ti3O5 obtained in this study refine the values of −2436.2 (±26.1) kJ/mol and −1771.3 (±6.9) kJ/mol, respectively, given in the JANAF thermochemical tables.
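The fitted relation above can be evaluated numerically. A minimal sketch (the 711970 J/mol intercept and the −1/6 exponent are taken directly from the abstract; function names are illustrative):

```python
import math

R = 8.314   # gas constant, J/(mol K)
T = 1573.0  # temperature of the study, K

def oxygen_potential(delta):
    """Oxygen potential (J/mol) of TiO2-delta from the fitted relation
    quoted in the abstract: Delta mu(O2) = -6RT ln(delta) - 711970 J/mol."""
    return -6.0 * R * T * math.log(delta) - 711970.0

def delta_from_pO2(p_o2, p0=1.0):
    """Nonstoichiometry delta implied by delta ∝ p(O2)^(-1/6).
    Since Delta mu(O2) = RT ln(p_O2/p0), inverting the relation above
    gives delta = exp(-(Delta mu(O2) + 711970) / (6RT))."""
    mu = R * T * math.log(p_o2 / p0)
    return math.exp(-(mu + 711970.0) / (6.0 * R * T))
```

By construction the two functions are mutually consistent: feeding the δ implied by a given oxygen pressure back into `oxygen_potential` recovers RT ln p(O2), and δ grows as the oxygen pressure falls, as expected for an oxygen-deficient oxide.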

Relevance:

20.00%

Publisher:

Abstract:

The solution conformation of alamethicin, a 20-residue antibiotic peptide, has been investigated using two-dimensional n.m.r. spectroscopy. Complete proton resonance assignments of this peptide have been carried out using COSY, SUPERCOSY, RELAY COSY and NOESY two-dimensional spectroscopies. Observation of a large number of nuclear Overhauser effects between sequential backbone amide protons, between backbone amide protons and CβH protons of preceding residues, and extensive intramolecular hydrogen bonding patterns of NH protons has established that this polypeptide is in a largely helical conformation. This result is in conformity with earlier reported solid-state X-ray results and a recent n.m.r. study in methanol solution (Esposito et al. (1987) Biochemistry 26, 1043-1050) but is at variance with an earlier study which favored an extended conformation for the C-terminal half of alamethicin (Bannerjee et al.

Relevance:

20.00%

Publisher:

Abstract:

The k-colouring problem is to colour a given k-colourable graph with k colours. This problem is known to be NP-hard even for fixed k ≥ 3. The best known polynomial-time approximation algorithms require n^δ (for a positive constant δ depending on k) colours to colour an arbitrary k-colourable n-vertex graph. The situation is entirely different if we look at the average performance of an algorithm rather than its worst-case performance. It is well known that a k-colourable graph drawn from certain classes of distributions can be k-coloured almost surely in polynomial time. In this paper, we present further results in this direction. We consider k-colourable graphs drawn from the random model in which each allowed edge is chosen independently with probability p(n) after initially partitioning the vertex set into k colour classes. We present polynomial-time algorithms of two different types. The first type of algorithm always runs in polynomial time and succeeds almost surely. Algorithms of this type have been proposed before, but our algorithms have provably exponentially small failure probabilities. The second type of algorithm always succeeds and has polynomial running time on average. Such algorithms are more useful and more difficult to obtain than the first type. Our algorithms work as long as p(n) ≥ n^(−1+ε), where ε is a constant greater than 1/4.
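The random model described above is easy to sketch: fix k colour classes, then include each edge between distinct classes independently with probability p. A minimal illustration (function and parameter names are ours, not the paper's):

```python
import random

def planted_k_colourable(n, k, p, seed=0):
    """Sample a graph from the planted-partition model sketched above:
    vertices are split into k colour classes, and each edge joining
    vertices in *different* classes is included independently with
    probability p.  Returns the planted colouring and the edge set."""
    rng = random.Random(seed)
    colour = [v % k for v in range(n)]  # a fixed balanced partition
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if colour[u] != colour[v] and rng.random() < p:
                edges.add((u, v))
    return colour, edges

def is_proper(colour, edges):
    """A colouring is proper if no edge joins two same-coloured vertices."""
    return all(colour[u] != colour[v] for u, v in edges)
```

By construction the planted colouring is always proper; the algorithmic question the paper studies is recovering such a colouring from the edges alone.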

Relevance:

20.00%

Publisher:

Abstract:

Oxides with different cation ratios 2122, 2212, 2213 and 2223 in the Tl-Ca-Ba-Cu-O system exhibit the onset of superconductivity in the 110–125 K range, with zero resistance in the 95–105 K range. Electron microscopic studies show dislocations, layered morphology and other interesting features. These oxides absorb electromagnetic radiation (9.11 GHz) in the superconducting phase.

Relevance:

20.00%

Publisher:

Abstract:

The thermodynamics of Cr-Mn alloys has been studied by Eremenko et al (1) using a fused-salt e.m.f. technique. Their results indicate positive deviations from ideality at 1023 K. Kaufman (2) has independently estimated negative enthalpy and excess entropy for the b.c.c. Cr-Mn alloys, such that at high temperatures the entropy term predominates over the enthalpy term, giving positive deviations from ideality. Recently the thermodynamic properties of the alloys have been measured by Jacob (3) using a Knudsen cell technique in the temperature range 1200 to 1500 K. The results indicate mild negative deviations from ideality over the entire composition range. Because of the differences in the reported results, and because Mn is a volatile component whose surface depletion under a dynamic set-up distorts measurements, an isopiestic technique is used here to measure the properties of the alloys.

Relevance:

20.00%

Publisher:

Abstract:

Forest certification has been put forward as a means to improve the sustainability of forest management in tropical countries, where traditional environmental regulation has been inefficient in controlling forest degradation and deforestation. In these countries, the role of communities as managers of forest resources is rapidly increasing. However, only a fraction of tropical community forests have been certified, and little is known about the impacts of certification in these systems. Two areas in Honduras where community-managed forest operations had received FSC certification were studied. Río Cangrejal represents an area with a longer history of use, whereas Copén is a more recent forest operation. Ecological sustainability was assessed by comparing timber tree regeneration and floristic composition between certified, conventionally managed and natural forests. Data on woody vegetation and environmental conditions were collected within logging gaps and natural treefall gaps. The regeneration success of shade-tolerant timber tree species was lower in certified than in conventionally managed forests in Río Cangrejal. Furthermore, the floristic composition was more natural-like in the conventionally managed than in the certified forests. However, the environmental conditions indicated reduced logging disturbance in the certified forests. Data from Copén demonstrated that the regeneration success of light-demanding timber species was higher in the certified than in the unlogged forests. In spite of this, the most valuable timber species, Swietenia macrophylla, was not regenerating successfully in the certified forests, due to rapid gap closure. The results indicate that pre-certification logging and forest fragmentation may have a stronger impact on forest regeneration than current, certified management practices.
The focus in community forests under low-intensity logging should be directed toward landscape connectivity and the restoration of degraded timber species, instead of reducing mechanical logging damage. Such actions depend on better recognition of resource rights and on improving the status of small Southern producers in the markets for certified wood products.

Relevance:

20.00%

Publisher:

Abstract:

In this article we introduce and evaluate testing procedures for specifying the number k of nearest neighbours in the weights matrix of spatial econometric models. The spatial J-test is used for specification search. Two testing procedures are suggested: an increasing-neighbours procedure and a decreasing-neighbours procedure. Simulations show that the increasing-neighbours procedure can be used in large samples to determine k. The decreasing-neighbours procedure is found to have low power and is not recommended for use in practice. An empirical example involving house price data shows how to use the testing procedures with real data.
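For readers unfamiliar with the object whose k these procedures select, a minimal sketch of a row-standardised k-nearest-neighbour spatial weights matrix W (the construction shown is the standard one; names and details here are illustrative, not from the article):

```python
import numpy as np

def knn_weights(coords, k):
    """Row-standardised k-nearest-neighbour spatial weights matrix.
    coords: (n, 2) array of locations (e.g. house coordinates).
    W[i, j] = 1/k if j is among the k nearest neighbours of i, else 0."""
    n = len(coords)
    # pairwise Euclidean distances between all locations
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)  # a point is never its own neighbour
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d[i])[:k]:
            W[i, j] = 1.0 / k  # row-standardised weights sum to 1
    return W
```

Each candidate k yields a different W, and hence a different spatial model; the J-test-based procedures in the article compare these non-nested alternatives.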

Relevance:

20.00%

Publisher:

Abstract:

Several lines of evidence suggest that cancer progression is associated with up-regulation or reactivation of telomerase, and the underlying mechanism remains an active area of research. The heterotrimeric MRN complex, consisting of Mre11, Rad50 and Nbs1, which is required for the repair of double-strand breaks, plays a key role in telomere length maintenance. In this study, we show significant differences in the levels of expression of MRN complex subunits among various cancer cells and somatic cells. Notably, siRNA-mediated depletion of any of the subunits of the MRN complex led to complete ablation of the other subunits of the complex. Treatment of leukemia and prostate cancer cells with etoposide led to increased expression of MRN complex subunits, with a concomitant decrease in the levels of telomerase activity, compared to breast cancer cells. These studies raise the possibility of developing anti-cancer drugs targeting MRN complex subunits to sensitize a subset of cancer cells to radio- and/or chemotherapy.

Relevance:

20.00%

Publisher:

Abstract:

A k-dimensional box is the Cartesian product R1 × R2 × ... × Rk, where each Ri is a closed interval on the real line. The boxicity of a graph G, denoted box(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space, or a k-cube, is the Cartesian product R1 × R2 × ... × Rk where each Ri is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted cub(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-cubes. The threshold dimension of a graph G(V, E) is the smallest integer k such that E can be covered by k threshold spanning subgraphs of G. In this paper we show that there exists no polynomial-time algorithm for approximating the threshold dimension of a graph on n vertices within a factor of O(n^(0.5−ε)) for any ε > 0 unless NP = ZPP. From this result we show that there exists no polynomial-time algorithm for approximating the boxicity or the cubicity of a graph on n vertices within a factor of O(n^(0.5−ε)) for any ε > 0 unless NP = ZPP. In fact, all these hardness results hold even for a highly structured class of graphs, namely the split graphs. We also show that it is NP-complete to determine whether a given split graph has boxicity at most 3.
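The definitions above translate directly into code: two axis-parallel boxes intersect exactly when their intervals overlap in every dimension. A small illustrative sketch of the intersection graph of a given box collection (the witness object behind box(G) ≤ k):

```python
def boxes_intersect(a, b):
    """Two axis-parallel k-dimensional boxes intersect iff their closed
    intervals overlap in every dimension.  Each box is a list of
    (lo, hi) intervals, one per dimension."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def intersection_graph(boxes):
    """Edge set of the intersection graph of a list of boxes.  A graph G
    has box(G) <= k exactly when it can be represented this way with
    k-dimensional boxes."""
    n = len(boxes)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if boxes_intersect(boxes[i], boxes[j])}
```

The hardness results in the abstract concern the reverse direction — finding the smallest k admitting such a representation — which is what cannot be well approximated unless NP = ZPP.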

Relevance:

20.00%

Publisher:

Abstract:

The domination and Hamilton circuit problems are of interest both in algorithm design and complexity theory. The domination problem has applications in facility location, and the Hamilton circuit problem has applications in routing problems in communications and operations research. The problem of deciding if G has a dominating set of cardinality at most k, and the problem of determining if G has a Hamilton circuit, are NP-complete. Polynomial-time algorithms are, however, available for a large number of restricted classes. A motivation for the study of these algorithms is that they not only give insight into the characterization of these classes but also require a variety of algorithmic techniques and data structures; so the search for efficient algorithms for these problems in many classes still continues.

A class of perfect graphs which is practically important and mathematically interesting is the class of permutation graphs. The domination problem is polynomial-time solvable on permutation graphs. Algorithms that are already available are of time complexity O(n^2) or more, and space complexity O(n^2), on these graphs. The Hamilton circuit problem is open for this class. We present a simple O(n) time and O(n) space algorithm for the domination problem on permutation graphs. Unlike the existing algorithms, we use the concept of the geometric representation of permutation graphs. Further, exploiting this geometric notion, we develop an O(n^2) time and O(n) space algorithm for the Hamilton circuit problem.
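The geometric representation mentioned above rests on a simple adjacency rule: in the permutation graph of π, vertices i < j are adjacent exactly when π reverses their order (equivalently, when the line segments joining the two occurrences of i and j cross). A minimal brute-force sketch of that rule — not the paper's O(n) algorithm:

```python
def permutation_graph_edges(pi):
    """Edges of the permutation graph of pi (a permutation of 0..n-1).
    Vertices i < j are adjacent exactly when pi reverses their order,
    i.e. i appears after j in pi.  This is the 'crossing line segments'
    view underlying the geometric representation."""
    pos = {v: idx for idx, v in enumerate(pi)}  # position of each value
    n = len(pi)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if pos[i] > pos[j]}
```

For example, the identity permutation yields no edges (no pair is reversed), while a fully reversed permutation yields the complete graph.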

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a novel and efficient algorithm for modelling sub-65 nm clock interconnect networks in the presence of process variation. We develop a method for delay analysis of interconnects considering the impact of Gaussian metal process variations. The resistance and capacitance of a distributed RC line are expressed as correlated Gaussian random variables, which are then used to compute the standard deviation of the delay probability distribution function (PDF) at all nodes in the interconnect network. The main objective is to find the delay PDF at a cheaper cost. Convergence of this approach is in probability distribution, but not in the mean of delay. We validate our approach against SPICE-based Monte Carlo simulations; the current method entails significantly lower computational cost.
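A brute-force Monte Carlo baseline of the kind such analytical methods are validated against can be sketched as follows, here for a single lumped RC stage with Elmore-style delay τ = RC and correlated Gaussian R and C. All parameter names and values are illustrative, not from the paper:

```python
import numpy as np

def delay_pdf_mc(mu_r, mu_c, sigma_r, sigma_c, rho, n=100_000, seed=0):
    """Monte Carlo estimate of the delay distribution of one lumped RC
    stage with delay tau = R*C, where R and C are correlated Gaussian
    random variables with correlation coefficient rho.  Returns the
    sample mean and standard deviation of the delay."""
    rng = np.random.default_rng(seed)
    cov = [[sigma_r**2, rho * sigma_r * sigma_c],
           [rho * sigma_r * sigma_c, sigma_c**2]]
    r, c = rng.multivariate_normal([mu_r, mu_c], cov, size=n).T
    tau = r * c  # delay sample for each process draw
    return tau.mean(), tau.std()
```

Note that E[RC] = μ_R μ_C + ρ σ_R σ_C for jointly Gaussian R and C, so even the mean delay shifts with the correlation; the full delay PDF is what the paper's method computes analytically at much lower cost than such sampling.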

Relevance:

20.00%

Publisher:

Abstract:

The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards related to chemical industries. Fault tree analysis (FTA) is an established technique for hazard identification. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. This paper outlines the estimation of the probability of release of chlorine from the storage and filling facility of a chlor-alkali industry using FTA. An attempt has also been made to arrive at the probability of chlorine release using expert elicitation and a proven fuzzy logic technique for Indian conditions. Sensitivity analysis has been carried out to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
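The quantitative side of FTA combines basic-event probabilities through AND and OR gates up to the top event. A minimal sketch assuming independent basic events (the gate formulas are the standard ones; the numbers in the usage note are illustrative, not the paper's chlorine data):

```python
def or_gate(probs):
    """Probability that at least one of several independent basic
    events occurs: 1 - product of (1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def and_gate(probs):
    """Probability that all of several independent basic events occur:
    the product of the individual probabilities."""
    p = 1.0
    for q in probs:
        p *= q
    return p
```

A top event fed by an OR gate over an AND sub-gate is then just a composition, e.g. `or_gate([and_gate([0.1, 0.2]), 0.05])`; sensitivity analysis as in the paper varies one basic-event probability at a time and records the change in the top-event probability.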

Relevance:

20.00%

Publisher:

Abstract:

Utilization of the aryl-beta-glucosides salicin or arbutin in most wild-type strains of E. coli is achieved by a single-step mutational activation of the bgl operon. Shigella sonnei, a branch of the diverse E. coli strain tree, requires two sequential mutational steps to achieve salicin utilization, as the bglB gene, encoding phospho-beta-glucosidase B, harbors an inactivating insertion. We show that in a natural isolate of S. sonnei, transcriptional activation of the gene SSO1595, encoding a phospho-beta-glucosidase, enables salicin utilization, with the permease function being provided by the activated bgl operon. SSO1595 is absent in most commensal strains of E. coli but is present in extra-intestinal pathogens as bgcA, a component of the bgc operon that enables beta-glucoside utilization at low temperature. Salicin utilization in an E. coli bglB laboratory strain also requires a two-step activation process, leading to expression of BglF, the PTS-associated permease encoded by the bgl operon, and AscB, the phospho-beta-glucosidase B encoded by the silent asc operon. BglF function is needed because AscF is unable to transport beta-glucosides, as it lacks the IIA domain involved in phospho-relay. Activation of the asc operon in the Sal(+) mutant is by a promoter-up mutation, and the activated operon is subject to induction. The pathway to salicin utilization is therefore diverse in these two evolutionarily related organisms; however, both show cooperation between two silent genetic systems to achieve a new metabolic capability under selection.

Relevance:

20.00%

Publisher:

Abstract:

Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind the data. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, we can detect all these patterns in a binary matrix efficiently, that is, in time polynomial in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0s to 1s or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns it is an NP-complete problem to find the minimum distance to a matrix that has the perfect pattern, which means that the existence of a polynomial-time algorithm is unlikely. To find patterns in datasets with noise, we need methods that are noise-tolerant and work in practical time with large datasets. The theory of binary matrices gives rise to robust heuristics that have good performance with synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations by the hierarchies found in mammal occurrences, and co-occurring groups in network data.
In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or a mere product of random chance. To this end, we use significance testing: we deem a dataset significant if it appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in the terms of the application.
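As a concrete illustration of the simplest pattern above, the consecutive-ones property (C1P) asks whether some column order makes the 1s of every row consecutive. A brute-force check for tiny matrices (the polynomial-time detection the text refers to uses more sophisticated machinery, e.g. PQ-trees, so this sketch is illustrative only):

```python
from itertools import permutations

def has_c1p(matrix):
    """Brute-force test of the consecutive-ones property: is there a
    column permutation under which every row's 1s form one contiguous
    block?  Exponential in the number of columns; tiny inputs only."""
    n_cols = len(matrix[0])
    for perm in permutations(range(n_cols)):
        ok = True
        for row in matrix:
            # positions (under this ordering) of the row's 1s
            ones = [k for k, j in enumerate(perm) if row[j] == 1]
            # contiguous iff the span of positions equals the count of 1s
            if ones and ones[-1] - ones[0] + 1 != len(ones):
                ok = False
                break
        if ok:
            return True
    return False
```

The matrix [[1,1,0],[0,1,1],[1,0,1]] is a classic non-C1P example: each pair of columns must be adjacent for some row, but three columns in a line have only two adjacent pairs.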