127 results for 12930-005


Relevance: 10.00%

Abstract:

Existing soil nailing design methodologies are essentially based on limit equilibrium principles that, together with a lumped factor of safety or a set of partial factors on the material parameters and loads, account for uncertainties in the design input parameter values. Recent trends in the development of design procedures for earth-retaining structures are towards load and resistance factor design (LRFD). In the present study, a methodology for the use of LRFD in the context of soil-nail walls is proposed, and a procedure to determine reliability-based load and resistance factors is illustrated for important strength limit states with reference to a 10 m high soil-nail wall. The need for separate partial factors for each limit state is highlighted, and the proposed factors are compared with those existing in the literature.
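
For orientation, the LRFD format referred to above checks each strength limit state by requiring the factored resistance to exceed the factored load effects; this is the generic form only, not the specific factors calibrated in the study:

$$\phi \, R_n \;\ge\; \sum_i \gamma_i \, Q_{n,i}$$

where $\phi$ is the resistance factor, $R_n$ the nominal resistance of the limit state under consideration (e.g., nail pullout or tensile capacity), and $\gamma_i$, $Q_{n,i}$ the load factors and nominal load effects. Reliability-based calibration selects $\phi$ and $\gamma_i$ so that each limit state attains a target reliability index.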

Relevance: 10.00%

Abstract:

The importance of inter- and intracellular signal transduction in all forms of life cannot be overstated. A large number of genes dedicated to cellular signalling are found in almost all sequenced genomes, and Mycobacteria are no exception. What appears to be interesting in Mycobacteria is that well-characterized signalling mechanisms used by bacteria, such as the histidine-aspartate phosphorelay seen in two-component systems, are found alongside signalling components that closely mimic those seen in higher eukaryotes. This review describes the important contribution made by researchers in India towards the identification and characterization of proteins involved in two-component signalling, protein phosphorylation and cyclic nucleotide metabolism. (C) 2011 Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

The boxicity of a graph H, denoted by box(H), is the minimum integer k such that H is an intersection graph of axis-parallel k-dimensional boxes in $\mathbb{R}^k$. In this paper we show that for a line graph G of a multigraph, $\mathrm{box}(G) \le 2\Delta(G)\left(\lceil \log_2 \log_2 \Delta(G) \rceil + 3\right) + 1$, where $\Delta(G)$ denotes the maximum degree of G. Since G is a line graph, $\Delta(G) \le 2(\chi(G) - 1)$, where $\chi(G)$ denotes the chromatic number of G, and therefore $\mathrm{box}(G) = O(\chi(G) \log_2 \log_2 \chi(G))$. For the d-dimensional hypercube $Q_d$, we prove that $\mathrm{box}(Q_d) \ge \tfrac{1}{2}\left(\lceil \log_2 \log_2 d \rceil + 1\right)$. The question of finding a nontrivial lower bound for $\mathrm{box}(Q_d)$ was left open by Chandran and Sivadasan in [L. Sunil Chandran, Naveen Sivadasan, The cubicity of hypercube graphs, Discrete Mathematics 308 (23) (2008) 5795-5800]. The above results are consequences of bounds that we obtain for the boxicity of a fully subdivided graph (a graph that can be obtained by subdividing every edge of a graph exactly once). (C) 2011 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Abstract:

A novel PCR-based assay was devised to specifically detect contamination by any Salmonella serovar in milk, fruit juice and ice-cream without pre-enrichment. The method utilizes primers against the hilA gene, which is conserved in all Salmonella serovars and absent from the close relatives of Salmonella. An optimized protocol, in terms of time and cost, is provided for the reduction of PCR contaminants in milk, ice-cream and juice through the use of routine laboratory chemicals. The simplicity, efficiency (3-4 h total time) and sensitivity (about 5-10 CFU/ml) of this technique confer a unique advantage over previously used, more time-consuming detection techniques. The technique does not involve pre-enrichment of the samples or extensive sample processing, which is a prerequisite in most of the other reported studies. Hence, after further fine-tuning, this assay could be adopted by food quality control laboratories for the timely detection of Salmonella contamination, as well as of other food-borne pathogens (with species-specific primers), in foods, especially milk, ice-cream and fruit juice. (C) 2011 Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

The reversible e.m.f. of two galvanic cells incorporating solid oxide electrolytes,

cell I: stainless steel, Ir, Pb + PbO | CaO + ZrO2 | Ag + Pb + PbO, Ir, stainless steel
cell II: Pt, Ni + NiO | CaO + ZrO2 | O (Pb + Ag), cermet, Pt,

was measured as a function of alloy composition. For lead-rich alloys, the temperature dependence of the e.m.f. of cell I was also investigated. Since the solubility of oxygen in the alloy is small, the relative partial molar properties of lead in the binary Ag + Pb system can be calculated from the e.m.f. of this cell. The Gibbs free energies obtained in this study are combined with selected calorimetric data to provide a complete thermodynamic description of liquid Ag + Pb alloys. The activity coefficient of oxygen over the whole range of Ag + Pb alloys at 1273 K has been obtained from the e.m.f. of cell II; these values are found to deviate positively from Alcock and Richardson's quasichemical equation when the average coordination number of all the atoms is assigned a value of 2.
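
A minimal sketch of the underlying relation for cell I (standard treatment of oxide-electrolyte cells, assuming the net cell reaction is the transfer of one mole of Pb from the pure metal to the alloy, i.e., two electrons per Pb via oxide-ion conduction):

$$\Delta \bar{G}_{\mathrm{Pb}} = RT \ln a_{\mathrm{Pb}} = -2FE_{\mathrm{I}}$$

so the measured e.m.f. gives the activity of lead in the Ag + Pb alloy directly, and its temperature dependence yields the partial molar enthalpy and entropy.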

Relevance: 10.00%

Abstract:

About a third of the human population is estimated to be infected with Mycobacterium tuberculosis. The bacterium displays an excellent adaptability to survive within host macrophages. As the reactive environment of macrophages is capable of inducing DNA damage, the ability of the pathogen to safeguard its DNA against such damage is of paramount significance for its survival within the host. Analysis of the genome sequence has provided important insights into the DNA repair machinery of the pathogen, and studies on DNA repair in mycobacteria have gained momentum in the past few years. These studies have revealed considerable differences between the mycobacterial DNA repair machinery and that of other bacteria. This review article focuses especially on the base excision and nucleotide excision repair pathways in mycobacteria. (C) 2011 Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

Although Al1-xGaxN semiconductors are used in lighting, displays and high-power amplifiers, there is no experimental thermodynamic information on nitride solid solutions. Thermodynamic data are useful for assessing the intrinsic stability of the solid solution with respect to phase separation and its extrinsic stability in relation to other phases such as metallic contacts. The activity of GaN in Al1-xGaxN solid solution is determined at 1100 K using a solid-state electrochemical cell: Ga + Al1-xGaxN / Fe, Ca3N2 // CaF2 // Ca3N2, N2 (0.1 MPa), Fe. The solid-state cell is based on single-crystal CaF2 as the electrolyte and Ca3N2 as the auxiliary electrode to convert the nitrogen chemical potential established by the equilibrium between Ga and the Al1-xGaxN solid solution into an equivalent fluorine potential. The excess Gibbs free energy of mixing of the solid solution is computed from the results. The results suggest an unusual mixing behavior: a mild tendency for ordering at three discrete compositions (x = 0.25, 0.5 and 0.75) superimposed on a predominantly positive deviation from ideality. The lattice parameters exhibit slight deviation from Vegard's law, with the a-parameter showing positive and the c-parameter negative deviation. Although the solid solution is stable over the full range of compositions at growth temperatures, thermodynamic instability is indicated at temperatures below 410 K in the composition range 0.26 ≤ x ≤ 0.5. At 355 K, two biphasic regions appear, with terminal solid solutions stable only for 0 ≤ x ≤ 0.26 and 0.66 ≤ x ≤ 1. The range of terminal solid solubility reduces with decreasing temperature. (C) 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
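
As a brief sketch of the standard route from measured activities to the excess quantity quoted above (the activity-coefficient notation is introduced here for illustration and is not taken from the paper):

$$\Delta G^{xs} = RT\left[\,x \ln \gamma_{\mathrm{GaN}} + (1-x)\ln \gamma_{\mathrm{AlN}}\right], \qquad \gamma_i = a_i / x_i ,$$

where the activity coefficient of AlN follows from the measured activity of GaN by Gibbs-Duhem integration across the composition range.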

Relevance: 10.00%

Abstract:

One characteristic feature of the athermal beta -> omega transformation is its short time scale. So far, no clear understanding of this issue exists. Here we construct a model that includes contributions from a Landau sixth-order free energy density, kinetic energy due to displacement, and a Rayleigh dissipation function to account for the dissipation arising from the rapid movement of the parent-product interface during rapid nucleation. We also include the contribution of omega-like fluctuations to the local stress. The model shows that the transformation is complete on a time scale set by the velocity of sound. The estimated nucleation rate is several orders of magnitude higher than that for diffusion-controlled transformations. The model predicts that the athermal omega phase is limited to a certain range of alloy compositions. The estimated nucleation rate and the size of ``isothermal'' particles beyond 17% Nb are also consistent with experimental results. The model provides an explanation for the reprecipitation of omega particles in the ``cleared'' channels formed during deformation of omega-forming alloys. The model also predicts that acoustic emission should be detectable during the formation of the athermal phase. (C) 2011 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
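
For orientation only, the ingredients named above can be written in a generic form (a sketch; the actual coefficients and the coupling to omega-like stress fluctuations used in the study are not reproduced here): a sixth-order Landau free-energy density in an order parameter $\eta$, inertial dynamics, and Rayleigh damping,

$$f(\eta) = \tfrac{1}{2} a \eta^{2} - \tfrac{1}{4} b \eta^{4} + \tfrac{1}{6} c \eta^{6}, \qquad \rho\,\ddot{\eta} + \gamma\,\dot{\eta} = -\frac{\delta F}{\delta \eta}, \qquad F = \int \left[ f(\eta) + \tfrac{1}{2}\kappa\,|\nabla\eta|^{2} \right] \mathrm{d}V ,$$

where $\gamma$ derives from the Rayleigh dissipation function $R = \tfrac{1}{2}\gamma\,\dot{\eta}^{2}$, and the sign pattern $a, b, c > 0$ permits a first-order transition between the beta ($\eta = 0$) and omega ($\eta \neq 0$) minima.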

Relevance: 10.00%

Abstract:

We show that a homotopy equivalence between compact, connected, oriented surfaces with non-empty boundary is homotopic to a homeomorphism if and only if it commutes with the Goldman bracket. (C) 2011 Academie des sciences. Published by Elsevier Masson SAS. All rights reserved.

Relevance: 10.00%

Abstract:

XPS and LIII X-ray absorption edge studies of the valence state of cerium have been carried out on the intermetallic compound CeCo2, which becomes superconducting at low temperatures. The XPS results show that the surface contains both Ce3+ and Ce4+ valence states, while the X-ray absorption edge studies reveal only Ce4+ in the bulk. Thus valence fluctuation and superconductivity do not coexist in the bulk of this compound.

Relevance: 10.00%

Abstract:

A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations using the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125] for breakup and aggregation of particles. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The quadrilaterals are triangulated by using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid, but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing constant-mass lines at the boundaries. The results show that the predictions of particle size distribution on a fixed X-grid are in better agreement with the analytical solution than those obtained with the earlier techniques. The simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process brings down the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort. (C) 2011 Elsevier Ltd. All rights reserved.
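
A minimal sketch of how such a grid of nodes might be constructed (illustrative only; the function and variable names are hypothetical, and the triangulation and redistribution rules of the actual framework are not reproduced):

```python
import numpy as np

def x_grid_nodes(mass_levels, composition_fractions):
    """Build node coordinates for an X-type discretization of the 2D space
    of particle constituents (x1, x2).

    Nodes are placed at the intersections of
      * constant-mass lines        x1 + x2 = m   (slope -1), and
      * constant-composition lines x1 / (x1 + x2) = f  (radial lines).
    """
    nodes = []
    for m in mass_levels:                 # total mass of a bin boundary
        for f in composition_fractions:   # fraction of constituent 1
            nodes.append((f * m, (1.0 - f) * m))
    return np.array(nodes)

# Example: geometrically spaced mass lines and five radial composition lines.
masses = 2.0 ** np.arange(0, 6)           # 1, 2, 4, ..., 32
fractions = np.linspace(0.0, 1.0, 5)      # 0, 0.25, 0.5, 0.75, 1
grid = x_grid_nodes(masses, fractions)
print(grid.shape)                         # (30, 2)

# Expanding the domain as aggregation forms larger particles amounts to
# appending a new constant-mass line at the upper boundary:
masses = np.append(masses, masses[-1] * 2.0)
```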

Relevance: 10.00%

Abstract:

The questions that one should answer in engineering computations - deterministic, probabilistic/randomized, as well as heuristic - are (i) how good the computed results/outputs are and (ii) how much the cost is, in terms of the amount of computation and the amount of storage used in obtaining the outputs. The absolutely error-free quantities, as well as the completely errorless computations carried out in a natural process, can never be captured by any means that we have at our disposal. While the computations in nature/natural processes, including their real-valued inputs, are exact, the computations that we perform on a digital computer, or in embedded form, are never exact. The input data for such computations are also never exact, because any measuring instrument has an inherent error of a fixed order associated with it, and this error, as a matter of hypothesis and not as a matter of assumption, is not less than 0.005 per cent. Here by error we mean relative error bounds. The fact that the exact error is never known under any circumstances and in any context implies that the term error is nothing but error bounds. Further, in engineering computations, it is the relative error or, equivalently, the relative error bounds (and not the absolute error) that is supremely important in providing information regarding the quality of the results/outputs. Another important fact is that inconsistency and/or near-inconsistency in nature, i.e., in problems created from nature, is completely nonexistent, while in our modelling of natural problems we may introduce inconsistency or near-inconsistency due to human error, due to the inherent non-removable error associated with any measuring device, or due to assumptions introduced to make the problem solvable or more easily solvable in practice. Thus if we discover any inconsistency, or possibly any near-inconsistency, in a mathematical model, it is certainly due to one or more of the three foregoing factors. We do, however, go ahead to solve such inconsistent/near-inconsistent problems and obtain results that could be useful in real-world situations.

The talk considers several deterministic, probabilistic, and heuristic algorithms in numerical optimisation, other numerical and statistical computations, and in PAC (probably approximately correct) learning models. It highlights the quality of the results/outputs by specifying relative error bounds along with the associated confidence level, and the cost, viz., the amount of computation and of storage, through complexity. It points out the limitations of error-free computations (wherever possible, i.e., where the number of arithmetic operations is finite and known a priori) as well as of the usage of interval arithmetic. Further, the interdependence among the error, the confidence, and the cost is discussed.
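
A small illustrative computation in the spirit of the above (a sketch under the stated 0.005 per cent measurement hypothesis; the bound-combination rules shown are the standard first-order ones, not something specific to the talk):

```python
# First-order relative error-bound arithmetic (standard rules): bounds add
# under multiplication/division, and combine as a weighted average under
# addition of same-signed quantities.

def rel_bound_product(r_x, r_y):
    """Relative error bound of x*y (or x/y), given bounds r_x, r_y on x and y."""
    return r_x + r_y

def rel_bound_sum(x, r_x, y, r_y):
    """Relative error bound of x + y for same-signed x and y."""
    return (abs(x) * r_x + abs(y) * r_y) / abs(x + y)

r_meas = 5e-5  # the hypothesised 0.005 per cent instrument bound
print(rel_bound_product(r_meas, r_meas))         # e.g. area = length * width -> 1e-4
print(rel_bound_sum(10.0, r_meas, 2.5, r_meas))  # sum of two measured lengths -> 5e-5
```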

Relevance: 10.00%

Abstract:

The low-strain shear modulus plays a fundamental role in earthquake geotechnical engineering in estimating the ground response parameters for seismic microzonation. A large number of site response studies are carried out using standard penetration test (SPT) data, relying on existing correlations between SPT N values and shear modulus. The purpose of this paper is to review the available empirical correlations between shear modulus and SPT N values and to generate a new correlation by combining new data obtained by the author with the previously available data. The review shows that only a few authors have used measured density and shear wave velocity to estimate the shear modulus that was related to SPT N values; others have assumed a constant density for all shear wave velocities when estimating the shear modulus. Many authors used SPT N values of less than 1 and more than 100 to generate correlations by extrapolation or assumption, but such N values have limited practical application, since N values of less than 1 cannot be measured and values of more than 100 are not normally recorded. Most of the existing correlations were developed from studies carried out in Japan, where N values are measured with a hammer energy of 78%, and may therefore not be directly applicable to other regions because of variations in SPT hammer energy. A new correlation has been generated using the measured values from Japan and India, eliminating the assumed and extrapolated data. This correlation has a higher regression coefficient and a lower standard error. Finally, modification factors are suggested for other regions where the hammer energy differs from 78%. Crown Copyright (C) 2012 Published by Elsevier Ltd. All rights reserved.
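
For concreteness, correlations of this type are commonly written as a power law in the energy-corrected blow count, with the hammer-energy modification reducing to a simple ratio; this is a generic sketch (the coefficients A and B and the exact correction factors proposed in the paper are not reproduced here):

$$G_{max} = A \, N_{78}^{\,B}, \qquad N_{78} = N_{m}\,\frac{ER_{m}}{78},$$

where $N_m$ is the blow count measured with a hammer energy ratio $ER_m$ (per cent) and $N_{78}$ is its equivalent at the 78% Japanese standard.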

Relevance: 10.00%

Abstract:

Clustered architecture processors are preferred for embedded systems because centralized register file architectures scale poorly in terms of clock rate, chip area, and power consumption. Although clustering helps by improving the clock speed, reducing the energy consumption of the logic, and making the design simpler, it introduces extra overheads by way of inter-cluster communication. This communication happens over long global wires with high load capacitance, which leads to delays in execution and significantly higher energy consumption. Inter-cluster communication also introduces many short idle cycles, thereby significantly increasing the overall leakage energy consumption in the functional units. The trend towards miniaturization of devices (and the associated reduction in threshold voltage) makes energy consumption in interconnects and functional units even worse, and limits the usability of clustered architectures at smaller technology nodes. However, technological advancements now permit the design of interconnects and functional units with varying performance and power modes. In this paper, we propose scheduling algorithms that aggregate the scheduling slack of instructions and the communication slack of data values to exploit the low-power modes of functional units and interconnects. Finally, we present a synergistic combination of these algorithms that saves energy simultaneously in functional units and interconnects, improving the usability of clustered architectures by achieving better overall energy-performance trade-offs. Even with conservative estimates of the contribution of the functional units and interconnects to the overall processor energy consumption, the proposed combined scheme obtains on average 8% and 10% improvements in overall energy-delay product, with 3.5% and 2% performance degradation, for a 2-clustered and a 4-clustered machine, respectively. We present a detailed experimental evaluation of the proposed schemes. Our test bed uses the Trimaran compiler infrastructure. (C) 2012 Elsevier Inc. All rights reserved.
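
To make the quoted trade-off concrete, here is a small back-of-the-envelope computation assuming the usual definition of energy-delay product (EDP = energy x delay); the derived energy figure is an inference from the quoted numbers, not a result reported in the paper:

```python
def energy_ratio_from_edp(edp_improvement, perf_degradation):
    """Infer the energy ratio implied by an EDP improvement and a slowdown,
    assuming EDP = energy * delay and that 'performance degradation' means
    an increase in delay."""
    edp_ratio = 1.0 - edp_improvement      # new EDP / old EDP
    delay_ratio = 1.0 + perf_degradation   # new delay / old delay
    return edp_ratio / delay_ratio         # new energy / old energy

# 2-clustered machine: 8% EDP improvement at 3.5% performance degradation.
print(f"{1.0 - energy_ratio_from_edp(0.08, 0.035):.1%} implied energy reduction")  # ~11.1%
```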

Relevance: 10.00%

Abstract:

Urbanisation is a dynamic, complex phenomenon involving large-scale changes in land use at local levels. Analyses of changes in land use in urban environments provide a historical perspective of land use and give an opportunity to assess the spatial patterns, correlations, trends, rates and impacts of the change, which helps in better regional planning and good governance of the region. The main objective of this research is to quantify urban dynamics using temporal remote sensing data with the help of well-established landscape metrics. Bangalore, being one of the most rapidly urbanising landscapes in India, has been chosen for this investigation. The complex process of urban sprawl was modelled using spatio-temporal analysis. Land use analyses show 584% growth in built-up area during the last four decades, with a decline of vegetation by 66% and of water bodies by 74%. Analyses of the temporal data reveal increases in urban built-up area of 342.83% (1973-1992), 129.56% (1992-1999), 106.7% (1999-2002), 114.51% (2002-2006) and 126.19% (2006-2010). The study area was divided into four zones, and each zone was further divided into 17 concentric circles of 1 km incrementing radius, to understand the patterns and extent of urbanisation at local levels. The urban density gradient illustrates a radial pattern of urbanisation for the period 1973-2010: Bangalore grew radially from 1973 to 2010, indicating that urbanisation is intensifying from the central core and has reached the periphery of Greater Bangalore. Shannon's entropy and alpha and beta population densities were computed to understand the level of urbanisation at local levels. Shannon's entropy values for recent years confirm dispersed, haphazard urban growth in the city, particularly on the outskirts, and also illustrate the extent of influence of the drivers of urbanisation in various directions. Landscape metrics provided in-depth knowledge about the sprawl, and principal component analysis helped in prioritising the metrics for detailed analyses. The results clearly indicate that the whole landscape is aggregating into a large patch in 2010, compared with earlier years, which were dominated by several small patches. The large-scale conversion of small patches into a single large patch can be seen from 2006 to 2010. In 2010 the patches are maximally aggregated, indicating that the city has become more compact and more urbanised in recent years. Bangalore was the most sought-after destination because of its climatic conditions and the availability of various facilities (land availability, economy, political factors) compared with other cities. The growth into a single urban patch can be attributed to rapid urbanisation coupled with industrialisation. Monitoring growth through landscape metrics helps to maintain and manage the natural resources. (C) 2012 Elsevier B.V. All rights reserved.
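
A minimal sketch of the normalized Shannon's entropy calculation commonly used in sprawl studies of this kind (illustrative values and names only; the zone-wise proportions of the actual study are not reproduced):

```python
import numpy as np

def shannon_entropy(builtup_by_zone):
    """Normalized Shannon's entropy H_n = -sum(p_i * ln(p_i)) / ln(n), where
    p_i is the proportion of built-up area in zone i and n is the number of
    zones. Values near 1 indicate dispersed (sprawling) growth; values near 0
    indicate compact growth."""
    x = np.asarray(builtup_by_zone, dtype=float)
    p = x / x.sum()
    p = p[p > 0]                    # ignore zones with no built-up area
    return float(-(p * np.log(p)).sum() / np.log(len(x)))

# Hypothetical built-up areas (ha) in concentric circles around the city centre.
print(round(shannon_entropy([120, 95, 80, 60, 45, 30, 20, 10]), 3))
```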