Abstract:
Small-scale mechanical testing of materials has gained prominence in the last decade or so due to the continuous miniaturization of components and devices in everyday applications. This review describes the various micro-fabrication processes associated with the preparation of miniaturized specimens, the geometries of test specimens, and the small-scale testing techniques used to determine the mechanical behaviour of materials at length scales of a few hundred micrometers and below. This is followed by illustrative examples in a selected class of materials. The choice of the case studies is based on the relevance of the materials in today's world: evaluation of the mechanical properties of thermal barrier coatings (TBCs), applied for enhanced high-temperature protection of advanced gas turbine engine components, is essential since their failure by fracture leads to the collapse of the engine system. Si-based substrates, though brittle, are indispensable for MEMS/NEMS applications. The response of biological specimens to mechanical loads is important both for ascertaining their role in diseases and for mimicking their structure to attain high fracture toughness and impact resistance. Insight into the mechanisms behind the observed size effects in metallic systems can be exploited to achieve excellent strength at the nano-scale. A future outlook of where all this is heading is also presented.
Abstract:
In this paper, we propose a low-complexity algorithm based on the Markov chain Monte Carlo (MCMC) technique for signal detection on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. The algorithm employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection. The proposed algorithm alleviates the stalling problem encountered at high SNRs in the conventional MCMC algorithm and achieves near-optimal performance in large systems with M-QAM. A novel ingredient in the algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a randomized MCMC (R-MCMC) strategy coupled with a multiple-restart strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for large numbers of BS antennas and users (e.g., 64, 128, 256 BS antennas/users).
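The randomized sampling idea can be illustrated with a minimal sketch (not the authors' implementation) of an MCMC detector for a toy real-valued system with +/-1 symbols: each coordinate update is a Gibbs step with high probability and a purely uniform flip otherwise, which is what mitigates stalling at high SNR. The system sizes, mixing probability q, and temperature constant below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def r_mcmc_detect(H, y, n_iter=200, q=None):
    """Randomized-MCMC sketch: each coordinate update is a Gibbs step
    with probability 1-q and a uniform random flip with probability q."""
    n = H.shape[1]
    q = 1.0 / n if q is None else q          # mixing probability (assumed)
    x = rng.choice([-1.0, 1.0], size=n)
    best, best_cost = x.copy(), float(np.sum((y - H @ x) ** 2))
    for _ in range(n_iter):
        for i in range(n):
            if rng.random() < q:             # random-sampling branch
                x[i] = rng.choice([-1.0, 1.0])
            else:                            # Gibbs branch over {-1, +1}
                costs = []
                for s in (-1.0, 1.0):
                    x[i] = s
                    costs.append(float(np.sum((y - H @ x) ** 2)))
                p = np.exp(-(np.array(costs) - min(costs)) / 2.0)
                x[i] = 1.0 if rng.random() < p[1] / p.sum() else -1.0
            cost = float(np.sum((y - H @ x) ** 2))
            if cost < best_cost:             # keep the best candidate visited
                best, best_cost = x.copy(), cost
    return best

# toy 8x8 real-valued system at high SNR (illustrative sizes)
n = 8
H = rng.standard_normal((n, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + 0.05 * rng.standard_normal(n)
x_hat = r_mcmc_detect(H, y)
```

Tracking the best candidate visited plays the role of the restart criterion in spirit: the chain may wander, but the reported decision never gets worse.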
Abstract:
Background: The set of indispensable genes that are required by an organism to grow and sustain life are termed essential genes. There is strong interest in identifying the set of essential genes, particularly in pathogens, not only for a better understanding of pathogen biology, but also for identifying drug targets and the minimal gene set for the organism. Essentiality is inherently a systems property, and its identification requires consideration of the system as a whole. The available experimental approaches capture some aspects, but each method comes with its own limitations; moreover, in most cases they do not explain the basis for essentiality. A powerful prediction method that recognizes this gene pool and rationalizes the known essential genes in a given organism would therefore be very useful. Here we describe a multi-level, multi-scale approach to identify the essential gene pool in a deadly pathogen, Mycobacterium tuberculosis. Results: The multi-level workflow analyses the bacterial cell by studying (a) genome-wide gene expression profiles, to identify the set of genes that show consistent and significant levels of expression in multiple samples of the same condition, (b) indispensability for growth, by using gene-expression-integrated flux balance analysis of a genome-scale metabolic model, (c) importance for maintaining the integrity and flow in a protein-protein interaction network, and (d) evolutionary conservation in a set of genomes of the same ecological niche. In the gene pool identified, the functional basis for essentiality has been addressed by studying residue-level conservation and the sub-structure at the ligand-binding pockets, from which essential amino acid residues in those pockets have also been identified. 283 genes were identified as essential with high confidence. An agreement of about 73.5% is observed with results obtained from the experimental transposon mutagenesis technique.
A large proportion of the identified genes belong to the class of intermediary metabolism and respiration. Conclusions: The multi-scale, multi-level approach described here can be applied to other pathogens as well. The essential gene pool identified forms a basis for designing experiments to probe the genes' finer functional roles and also serves as a ready shortlist for identifying drug targets.
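The multi-level idea of intersecting independent evidence streams can be sketched as a simple consensus filter. The gene names, evidence flags, and the all-levels-agree rule below are purely illustrative assumptions, not the paper's actual scoring scheme:

```python
# Hypothetical per-gene evidence from the four analysis levels of the workflow
evidence = {
    "geneA": {"expressed": True,  "fba_essential": True,  "ppi_central": True,  "conserved": True},
    "geneB": {"expressed": True,  "fba_essential": False, "ppi_central": True,  "conserved": True},
    "geneC": {"expressed": False, "fba_essential": False, "ppi_central": False, "conserved": True},
}

def is_high_confidence(levels):
    # naive consensus rule (assumption): every level must support essentiality
    return all(levels.values())

high_confidence = [g for g, lv in evidence.items() if is_high_confidence(lv)]
```

A real pipeline would weight the levels differently; the point is only that essentiality calls emerge from agreement across levels rather than from any single assay.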
Abstract:
Elastic net regularizers have shown much promise in designing sparse classifiers for linear classification. In this work, we propose an alternating optimization approach to solve the dual problems of elastic net regularized linear Support Vector Machines (SVMs) and logistic regression (LR). One of the sub-problems turns out to be a simple projection. The other sub-problem can be solved using dual coordinate descent methods developed for non-sparse L2-regularized linear SVMs and LR, without altering their iteration complexity and convergence properties. Experiments on very large datasets indicate that the proposed dual coordinate descent-projection (DCD-P) methods are fast and achieve comparable generalization performance after the first pass through the data, with extremely sparse models.
Abstract:
This paper attempts to gain an understanding of the effect of lamellar length scale on the mechanical properties of a two-phase metal-intermetallic eutectic structure. We first develop a molecular dynamics model for the in-situ grown eutectic interface, followed by a model of deformation of the Al-Al2Cu lamellar eutectic. Leveraging the insights obtained from the simulations on the behaviour of dislocations at different length scales of the eutectic, we present and explain experimental results on Al-Al2Cu eutectics with various lamellar spacings. The physics behind the mechanism is further quantified with the help of an atomic-level energy model for different length scales as well as different strains. An atomic-level energy partitioning of the lamellae and the interface regions reveals that energy accumulates in the lamella cores mainly due to dislocations, irrespective of the length scale. The energy of the interface, in contrast, accumulates due to dislocations when the length scale is small, but the trend is reversed when the length scale grows beyond a critical size of about 80 nm. (C) 2014 Author(s).
Abstract:
Global change in climate and the consequent large impacts on regional hydrologic systems have, in recent years, motivated significant research efforts in water resources modeling under climate change. In an integrated future hydrologic scenario, it is likely that water availability and demands will change significantly due to modifications in hydro-climatic variables such as rainfall, reservoir inflows, temperature, net radiation, wind speed and humidity. An integrated regional water resources management model should capture the likely impacts of climate change on water demands and water availability, along with the uncertainties associated with climate change impacts and with management goals and objectives under non-stationary conditions. Uncertainties in an integrated regional water resources management model, accumulating from various stages of decision making, include climate model and scenario uncertainty in the hydro-climatic impact assessment, uncertainty due to conflicting interests of the water users, and uncertainty due to the inherent variability of the reservoir inflows. This paper presents an integrated regional water resources management modeling approach that considers uncertainties at various stages of decision making by integrating a hydro-climatic variable projection model, a water demand quantification model, a water quantity management model and a water quality control model. Modeling tools of canonical correlation analysis, stochastic dynamic programming and fuzzy optimization are used in an integrated framework in the approach presented here. The proposed modeling approach is demonstrated with a case study of the Bhadra Reservoir system in Karnataka, India.
Abstract:
Scaling behaviour has been observed at the mesoscopic level irrespective of crystal structure, type of boundary, and operative micro-mechanisms such as slip and twinning. The presence of scaling at the meso-scale, accompanied by that at the nano-scale, clearly demonstrates the intrinsic spanning of different deformation processes and a truly universal nature of scaling. The origin of a 1/2 power law in the deformation of crystalline materials, with misorientation proportional to the square root of strain, is attributed to the importance of interfaces in deformation processes. It is proposed that materials existing in three-dimensional Euclidean space accommodate plastic deformation by one-dimensional dislocations and their interaction with two-dimensional interfaces at different length scales. This gives rise to a 1/2 power law scaling in materials. This intrinsic relationship can be incorporated in crystal plasticity models that aim to span different length and time scales to predict the deformation response of crystalline materials accurately.
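The stated misorientation-strain relation is easy to check numerically: for data obeying theta proportional to strain^(1/2), a log-log fit recovers the 1/2 exponent. The proportionality constant k below is an arbitrary assumption for illustration.

```python
import numpy as np

strain = np.linspace(0.01, 0.5, 50)
k = 2.0                                   # hypothetical proportionality constant
misorientation = k * np.sqrt(strain)      # theta proportional to strain^(1/2)

# the slope of log(theta) vs log(strain) recovers the 1/2 scaling exponent
slope = np.polyfit(np.log(strain), np.log(misorientation), 1)[0]
```

On noisy experimental data the same fit would return a slope near, rather than exactly, 0.5, which is how the scaling exponent is typically estimated in practice.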
Abstract:
Planck-scale lepton number violation is an interesting and natural possibility to explain nonzero neutrino masses. We consider such operators in the context of Randall-Sundrum (RS1) scenarios. Implementation of this scenario with a single Higgs localized on the IR brane (standard RS1) is not phenomenologically viable, as it leads to inconsistencies in the charged lepton mass fits. In this paper we propose a setup with two Higgs doublets. We present a detailed numerical analysis of the fits to fermion masses and mixing angles. This model solves the issues regarding the fermion mass fits, but solutions with consistent electroweak symmetry breaking are highly fine-tuned. A simple resolution is to consider supersymmetry in the bulk, and a detailed discussion of this is provided. Constraints from flavor are found to be strong, and minimal flavor violation (MFV) is imposed to alleviate them.
Abstract:
Organic molecules adsorbed on magnetic surfaces offer the possibility to merge the concepts of molecular electronics with spintronics to build future nanoscale data storage, sensing, and computing multifunctional devices. In order to engineer the functionalities of such hybrid spintronic devices, an understanding of the electronic and magnetic properties of the interface between carbon-based aromatic materials and magnetic surfaces is essential. In this article, we discuss recent progress in the study of spin-dependent chemistry and physics associated with the above molecule-ferromagnet interface by combining state-of-the-art experiments and theoretical calculations. The magnetic properties such as molecular magnetic moment, electronic interface spin-polarization, magnetic anisotropy, and magnetic exchange coupling can be specifically tuned by an appropriate choice of the organic material and the magnetic substrate. These reports suggest a gradual shift in research toward an emerging subfield of interface-assisted molecular spintronics.
Abstract:
Human Leukocyte Antigen (HLA) plays an important role in presenting foreign pathogens to our immune system, thereby eliciting early immune responses. HLA genes are highly polymorphic, giving rise to diverse antigen presentation capability. An important factor contributing to the enormous variation in individual responses to diseases is the difference in their HLA profiles. The heterogeneity in allele-specific disease responses determines the overall disease epidemiological outcome. Here we propose an agent-based computational framework, capable of incorporating allele-specific information, to analyze disease epidemiology. The framework assumes an SIR model to estimate average disease transmission and recovery rates. Using an epitope prediction tool, it performs sequence-based epitope detection for a given pathogenic genome and derives an allele-specific disease susceptibility index from the epitope detection efficiency. The resulting allele-specific disease transmission rate is then fed to the agent-based epidemiology model to analyze the disease outcome. The methodology presented here has potential use in understanding how a disease spreads and in identifying effective measures to control it.
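The role of the allele-specific susceptibility index can be sketched with a deterministic SIR toy model (the paper uses an agent-based model; this is only the compartmental analogue). The allele names, susceptibility indices, and all rate constants below are invented assumptions.

```python
def sir_final_size(beta, gamma=0.1, s0=0.99, i0=0.01, days=300, dt=0.1):
    """Forward-Euler integration of the SIR equations; returns the final
    recovered fraction (the epidemic's attack size)."""
    s, i, r = s0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt    # S -> I transitions
        new_rec = gamma * i * dt       # I -> R transitions
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r

# hypothetical allele-specific susceptibility indices scaling a baseline rate
baseline_beta = 0.3
susceptibility = {"HLA-A*01": 1.0, "HLA-A*02": 0.6}
attack_size = {a: sir_final_size(baseline_beta * s)
               for a, s in susceptibility.items()}
```

An allele whose epitope detection is more efficient gets a lower susceptibility index, a lower effective transmission rate, and hence a smaller final attack size, which is the qualitative link between HLA profile and epidemiological outcome described above.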
Abstract:
In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits the channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the H^T H matrix become increasingly weaker compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched filtered received vector z = H^T y (whose signal term becomes H^T H x, where x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of the H^T H matrix. We also propose a simple estimation scheme which directly obtains an estimate of H^T H (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as or less than that of minimum mean square error (MMSE) detection, because the MPD algorithm does not need a matrix inversion. It also achieves significantly better performance than MMSE and other message passing detection algorithms that use an MMSE estimate of H. Further, we design optimized irregular low density parity check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching. The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes.
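Channel hardening itself is easy to demonstrate numerically: as an i.i.d. channel matrix H grows, the normalized Gram matrix H^T H / n concentrates on its diagonal. The matrix sizes below are illustrative, not the paper's configurations.

```python
import numpy as np

rng = np.random.default_rng(1)

def hardening_ratio(n):
    """Mean |off-diagonal| over mean diagonal of H^T H / n for i.i.d. H."""
    H = rng.standard_normal((n, n))
    G = H.T @ H / n
    off_mean = np.abs(G[~np.eye(n, dtype=bool)]).mean()
    diag_mean = np.diag(G).mean()
    return off_mean / diag_mean

# the ratio shrinks roughly like 1/sqrt(n) as the system grows
small, large = hardening_ratio(8), hardening_ratio(256)
```

This concentration is what lets the receiver treat the weak off-diagonal terms as Gaussian noise and work with an estimate of H^T H directly.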
Abstract:
Aiming to develop high mechanical strength and toughness by tuning the ultrafine lamellar spacing of magnetic eutectic alloys, we report the mechanical and magnetic properties of the binary eutectic alloys Co90.5Zr9.5 and Fe90.2Zr9.8, as well as the pseudo-binary eutectic alloys Co82.4Fe8Zr9.6, Co78Fe12.4Zr9.6 and Co49.2Fe49.2Zr9.6, developed by suction-casting. The lower lamellar spacing of around 100 nm in the eutectic Co49.2Fe49.2Zr9.6 yields a high hardness of 713(+/- 20) VHN. Magnetic measurements reveal a high magnetic moment of 1.92 mu_B (at 5 K) and 1.82 mu_B (at 300 K) per formula unit for this composition. The magnetization vs. applied field data at 5 K show a directional preference to some extent, and therefore weaker non-collinear magnetization behaviour than that reported in the literature for Co11Zr2 due to exchange frustration and transverse spin freezing, owing to the smaller Zr content. The decay of magnetization as a function of temperature along the easy axis of magnetization of all the eutectic compositions can be described fairly well by the spin-wave excitation equation Delta M/M(0) = B*T^(3/2) + C*T^(5/2). (C) 2014 Elsevier B.V. All rights reserved.
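The quoted spin-wave law is linear in the coefficients B and C, so both can be recovered by ordinary least squares in the basis (T^(3/2), T^(5/2)). The coefficient values below are invented for illustration, not fitted values from the paper.

```python
import numpy as np

# synthetic decay following Delta M / M(0) = B*T^(3/2) + C*T^(5/2)
T = np.linspace(5.0, 300.0, 60)
B_true, C_true = 2.0e-5, 1.0e-8          # hypothetical coefficients
dM = B_true * T**1.5 + C_true * T**2.5

# linear least squares in the (T^(3/2), T^(5/2)) basis recovers B and C
A = np.column_stack([T**1.5, T**2.5])
B_fit, C_fit = np.linalg.lstsq(A, dM, rcond=None)[0]
```

With real magnetization data one fits Delta M/M(0) the same way; the relative sizes of the fitted B and C terms indicate how far the Bloch T^(3/2) regime dominates over the range measured.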
Abstract:
In this paper we present a massively parallel open source solver for the Richards equation, named RichardsFOAM. This solver has been developed within the framework of the open source general-purpose computational fluid dynamics toolbox OpenFOAM(R) and is capable of dealing with large-scale problems in both space and time. The source code for RichardsFOAM may be downloaded from the CPC program library website. It exhibits good parallel performance (up to ~90% parallel efficiency with 1024 processors in both strong and weak scaling), and the conditions required for obtaining such performance are analysed and discussed. This performance enables the mechanistic modelling of water fluxes at the scale of experimental watersheds (up to a few square kilometres of surface area) and on time scales of decades to a century. Such a solver can be useful in various applications, such as environmental engineering for the long-term transport of pollutants in soils, water engineering for assessing the impact of land settlement on water resources, or the study of weathering processes on watersheds. (C) 2014 Elsevier B.V. All rights reserved.
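Strong-scaling parallel efficiency as quoted here is simply speedup divided by processor count. A quick sanity check with hypothetical timings (the numbers below are invented, not measurements from the paper):

```python
def parallel_efficiency(t1, tp, p):
    """Strong-scaling efficiency: speedup (t1/tp) divided by processor count p."""
    return (t1 / tp) / p

# hypothetical timings consistent with ~90% efficiency on 1024 processors
eff = parallel_efficiency(t1=10000.0, tp=10.85, p=1024)
```

Weak scaling is assessed analogously, but with the problem size grown in proportion to p so that ideal behaviour is constant runtime rather than linear speedup.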
Abstract:
In this study, we applied the integration methodology developed in the companion paper by Aires (2014) to real satellite observations over the Mississippi Basin. The methodology provides basin-scale estimates of the four water budget components (precipitation P, evapotranspiration E, water storage change Delta S, and runoff R) in a two-step process: a Simple Weighting (SW) integration and a Postprocessing Filtering (PF) that imposes the water budget closure. A comparison with in situ observations of P and E demonstrated that PF improved the estimation of both components. A Closure Correction Model (CCM) has been derived from the integrated product (SW+PF) that allows each observation data set to be corrected independently, unlike the SW+PF method, which requires simultaneous estimates of all four components. The CCM standardizes the various data sets for each component and greatly decreases the budget residual (P - E - Delta S - R). As a direct application, the CCM was combined with the water budget equation to reconstruct missing values in any component. Results of a Monte Carlo experiment with synthetic gaps demonstrated the good performance of the method, except for the runoff data, whose variability is of the same order of magnitude as the budget residual. Similarly, we propose a reconstruction of Delta S between 1990 and 2002, where no Gravity Recovery and Climate Experiment data are available. Unlike most studies dealing with water budget closure at the basin scale, only satellite observations and in situ runoff measurements are used. Consequently, the integrated data sets are model independent and can be used for model calibration or validation.
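The closure step can be sketched as a minimum-variance correction that forces P - E - Delta S - R = 0 exactly. This is a generic constrained least-squares adjustment, not necessarily the exact filter of Aires (2014), and the component values and error variances below are invented.

```python
import numpy as np

# Simple-Weighting estimates of the budget terms [P, E, dS, R] (hypothetical, mm)
x = np.array([100.0, 60.0, 10.0, 25.0])
sigma2 = np.array([9.0, 16.0, 4.0, 1.0])     # assumed error variances

g = np.array([1.0, -1.0, -1.0, -1.0])        # closure constraint: g @ x = 0
residual = g @ x                             # budget residual P - E - dS - R

# minimum-variance correction distributing the residual by uncertainty:
# components with larger error variance absorb more of the imbalance
x_pf = x - sigma2 * g * residual / (g @ (sigma2 * g))
closed_residual = g @ x_pf                   # ~0 after post-filtering
```

The same weighting explains the runoff caveat above: when a component's own variability is comparable to the residual being redistributed, the correction cannot be separated from the signal.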