24 results for bottom-up analysis
at Indian Institute of Science - Bangalore - India
Abstract:
Escherichia coli RNA polymerase is a multi-subunit enzyme (α₂ββ′ωσ) that transcribes the DNA template into an intermediate RNA product in a sequence-specific manner. Although most of the subunits are essential for its function, the smallest subunit, ω (average molecular mass ~10,105 Da), can be deleted without affecting bacterial growth. Creating a mutant of the ω subunit can aid in improving the understanding of its role. Sequencing of the rpoZ gene, which codes for the ω subunit, from a mutant variant suggested a substitution mutation at position 60 of the protein: asparagine (N) → aspartic acid (D). This mutation was verified at the protein level by a typical mass spectrometry (MS)-based bottom-up proteomic approach. Characterization of in-gel trypsin-digested samples by reverse-phase liquid chromatography (LC) coupled to electrospray ionization (ESI) tandem mass spectrometry (MS/MS) ascertained this mutation. Electron transfer dissociation (ETD) of the triply charged [M + 3H]³⁺ tryptic peptides (residues 53-67), EIEEGLINNQILDVR from the wild type and EIEEGLIDNQILDVR from the mutant, unambiguously determined the site of mutation at residue 60.
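As a quick consistency check on the reported substitution, the N → D exchange replaces an asparagine residue (monoisotopic residue mass 114.04293 Da) with an aspartate (115.02694 Da), i.e. about +0.984 Da on the peptide and about +0.33 Th on the 3+ precursor. A minimal sketch using standard monoisotopic masses (illustrative only, not the authors' software):

```python
# Monoisotopic residue masses (Da) for the amino acids occurring in the peptides.
RES = {'E': 129.04259, 'I': 113.08406, 'G': 57.02146, 'L': 113.08406,
       'N': 114.04293, 'Q': 128.05858, 'D': 115.02694, 'V': 99.06841,
       'R': 156.10111}
WATER, PROTON = 18.01056, 1.00728

def peptide_mass(seq):
    # Neutral monoisotopic mass: residue masses plus one water for the termini.
    return sum(RES[aa] for aa in seq) + WATER

def mz(mass, z):
    # m/z of the protonated species [M + zH]^z+.
    return (mass + z * PROTON) / z

wt = peptide_mass('EIEEGLINNQILDVR')    # wild type, residues 53-67
mut = peptide_mass('EIEEGLIDNQILDVR')   # N60D mutant
delta_mass = mut - wt                   # ~ +0.984 Da (Asp minus Asn)
delta_mz = mz(mut, 3) - mz(wt, 3)       # ~ +0.328 Th for the 3+ precursor
```

The ~0.984 Da difference is small at the precursor level, which is why fragment-level ETD data are needed to localize it unambiguously to residue 60.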
Abstract:
Two-dimensional (2D) sheets are currently in the spotlight of nanotechnology owing to the possibilities they open for high-performance device fabrication. Building a free-standing quantum sheet with controlled morphology is challenging when large planar geometry and ultranarrow thickness are required simultaneously. Coalescence of nanowires into large single-crystalline sheets is a promising route to large, molecularly thick 2D sheets with controlled planar morphology. Here we report a bottom-up approach to fabricate high-quality ultrathin 2D single-crystalline sheets with well-defined rectangular morphology via collective coalescence of PbS nanowires. The ultrathin sheets are strictly rectangular, with 1.8 nm thickness, 200-250 nm width, and 3-20 μm length. Upon device fabrication, the sheets show high electrical conductivity at room and cryogenic temperatures. Density functional theory (DFT) calculations reveal that the single row of delocalized orbitals of a nanowire is gradually converted into several parallel conduction channels upon sheet formation, which enable superior in-plane carrier conduction.
Abstract:
Friction stir processing (FSP) is emerging as one of the most capable severe plastic deformation (SPD) methods for producing bulk ultra-fine-grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenging aspects of FSP on the way to commercial use. For a commercial aluminium alloy 2024-T3 plate of 6 mm thickness, a bottom-up approach has been attempted to optimize the major independent process parameters: plunge depth, tool rotation speed and traverse speed. Tensile properties of the optimally friction stir processed sample were correlated with microstructural characterization performed using Scanning Electron Microscopy (SEM) and Electron Back-Scattered Diffraction (EBSD). Optimum parameters from the bottom-up approach led to a defect-free FSP with a maximum strength of 93% of the base material strength. Micro-tensile testing of samples taken from the center of the processed zone showed an increased strength of 1.3 times that of the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which was attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement with little variation in grain size across the thickness and a large amount of grain-boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for optimizing parameters during FSP of aluminium alloys, which is otherwise difficult through analytical methods due to the complex interactions between work-piece, tool and process parameters. Precipitation mechanisms during FSP were responsible for the fine-grained microstructure in the nugget zone that provided better mechanical properties than the base metal.
Abstract:
In this paper we address the problem of forming procurement networks for items with value adding stages that are linearly arranged. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Recent research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent and rational agents who act strategically. In this paper, we view the problem of Procurement Network Formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus maximizing procurement network and then share the surplus in a fair manner. We study the implications of using the Shapley value as a solution concept for forming such procurement networks. We also present a protocol, based on the extensive form game realization of the Shapley value, for forming these networks.
Abstract:
Formation of high-value procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions, where suppliers are intelligent and rational agents who act strategically. In this paper we address the problem of forming procurement networks for items with value-adding stages that are linearly arranged. We model the problem of Procurement Network Formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus-maximizing procurement network and then share the surplus in a stable and fair manner. We first investigate the stability of such networks by examining the conditions under which the core of the game is non-empty. We then present a protocol, based on the extensive form game realization of the core, for forming such networks so that the resulting network is stable. We also mention a key result when the Shapley value is applied as a solution concept.
Abstract:
In this paper we address the problem of forming procurement networks for items with value adding stages that are linearly arranged. Formation of such procurement networks involves a bottom-up assembly of complex production, assembly, and exchange relationships through supplier selection and contracting decisions. Research in supply chain management has emphasized that such decisions need to take into account the fact that suppliers and buyers are intelligent and rational agents who act strategically. In this paper, we view the problem of procurement network formation (PNF) for multiple units of a single item as a cooperative game where agents cooperate to form a surplus maximizing procurement network and then share the surplus in a fair manner. We study the implications of using the Shapley value as a solution concept for forming such procurement networks. We also present a protocol, based on the extensive form game realization of the Shapley value, for forming these networks.
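To make the solution concept concrete, the Shapley value can be computed directly by averaging each agent's marginal contribution over all arrival orders. A toy sketch for a hypothetical three-agent chain (a buyer b plus stage suppliers s1 and s2, where a surplus of 12 arises only when the whole chain forms; this is not the paper's protocol):

```python
from itertools import permutations

def shapley(players, v):
    # Shapley value: each agent's marginal contribution v(S + {p}) - v(S),
    # averaged over every possible arrival order of the players.
    value = dict.fromkeys(players, 0.0)
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    return {p: value[p] / len(orders) for p in players}

# Surplus of 12 is created only when the buyer and both stage suppliers join.
chain = lambda S: 12.0 if {'b', 's1', 's2'} <= S else 0.0
print(shapley(['b', 's1', 's2'], chain))   # symmetric agents -> 4.0 each
```

Since all three agents are symmetric in this characteristic function, fairness dictates an equal split of the surplus, which the computation reproduces.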
Abstract:
Distributed computing systems can be modeled adequately by Petri nets. Computing the invariants of Petri nets becomes necessary for proving the properties of the modeled systems. This paper presents a two-phase, bottom-up approach for invariant computation and analysis of Petri nets. In the first phase, a newly defined subnet, called the RP-subnet, with an invariant is chosen. In the second phase, the selected RP-subnet is analyzed. Our methodology is illustrated with two examples: the dining philosophers' problem and the connection-disconnection phase of a transport protocol. We believe that this new method, which is computationally no worse than existing techniques, will simplify the analysis of many practical distributed systems.
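For readers new to the technique, a place invariant (P-invariant) is a weight vector x with xᵀC = 0 for the incidence matrix C, so the x-weighted token count is conserved by every transition firing. A minimal check on a toy mutual-exclusion net (an illustrative example, not the paper's RP-subnet construction):

```python
def is_p_invariant(x, C):
    # x is a P-invariant iff x^T * C = 0: the weighted token count
    # sum_i x[i] * M(p_i) is then unchanged by firing any transition.
    return all(sum(xi * row[j] for xi, row in zip(x, C)) == 0
               for j in range(len(C[0])))

# Toy net: columns are transitions (enter, leave); rows are places.
C = [
    [-1,  1],   # idle:     enter consumes an idle token, leave restores it
    [ 1, -1],   # critical: enter produces a token, leave consumes it
    [-1,  1],   # lock:     enter takes the lock, leave returns it
]
print(is_p_invariant([1, 1, 0], C))   # True: idle + critical is conserved
print(is_p_invariant([0, 1, 1], C))   # True: critical + lock is conserved
print(is_p_invariant([1, 0, 1], C))   # False: no conservation law here
```

The conserved sums (e.g. at most one token ever in the critical place) are exactly the kind of property one proves about a modeled system via invariants.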
Abstract:
DNA nanotubes are tubular structures composed of DNA crossover molecules. We present a bottom-up approach for the construction and characterization of these structures. Various possible topologies of nanotubes are constructed, such as 6-helix, 8-helix and tri-tubes, with different sequences and lengths. We have used fully atomistic molecular dynamics simulations to study the structure, stability and elasticity of these structures. Several nanosecond-long MD simulations give microscopic details about the DNA nanotubes. Based on the structural analysis of the simulation data, we show that 6-helix nanotubes are stable and maintain their tubular structure, while 8-helix nanotubes flatten to stabilize themselves. We also comment on the sequence dependence and the effect of overhangs. These structures are approximately four times more rigid, with a stretch modulus of ~4000 pN, than a DNA double helix of the same length and sequence, whose stretch modulus is ~1000 pN. The stretch moduli of these nanotubes are also three times larger than those of PX/JX crossover DNA molecules, which lie in the range 1500-2000 pN. The calculated persistence length is in the range of a few microns, close to reported experimental results on certain classes of DNA nanotubes.
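The quoted figures can be cross-checked against the standard elastic-rod relation between stretch modulus and persistence length, Lp = S·r²/(4·kB·T) for a uniform solid rod; the effective radius r ≈ 3 nm used below is an assumed value, not taken from the abstract:

```python
kB_T = 4.11e-21    # thermal energy at ~298 K, in joules
S = 4000e-12       # stretch modulus ~4000 pN, in newtons
r = 3e-9           # assumed effective rod radius, ~3 nm

# For a solid elastic rod, bending rigidity B = S * r**2 / 4 and Lp = B / kB_T.
Lp = S * r ** 2 / (4 * kB_T)
print(f"persistence length ~ {Lp * 1e6:.1f} um")   # ~2.2 um, i.e. a few microns
```

Under these assumptions the rod model lands in the micron range, consistent with the persistence lengths reported above.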
Abstract:
Glycated hemoglobin (HbA1c) is the 'gold standard' biomarker for assessing the glycemic index of an individual. HbA1c is formed by nonenzymatic glycosylation at the N-terminal valine residue of the β-globin chain. Cation-exchange high performance liquid chromatography (CE-HPLC) is most often used to quantify HbA1c in blood samples. A few genetic variants of hemoglobin and post-translationally modified variants of hemoglobin interfere with CE-HPLC-based quantification, resulting in false positive estimates. Using mass spectrometry, we analyzed a blood sample with abnormally high HbA1c (52.1%) in the CE-HPLC method. The observed HbA1c did not corroborate the blood glucose level of the patient. A mass spectrometry based bottom-up proteomics approach, intact globin chain mass analysis, and chemical modification of the proteolytic peptides identified the presence of Hb Beckman, a genetic variant of hemoglobin, in the experimental sample. A similar surface-area-to-charge ratio between HbA1c and Hb Beckman might have caused the variant to coelute with HbA1c in CE-HPLC. Therefore, in screening for diabetes mellitus through the estimation of HbA1c, it is important to look for genetic variants of hemoglobin in samples that show an abnormally high glycemic index, and HbA1c must then be estimated using an alternative method.
Abstract:
Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address the above problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others retain DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP), wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that, by appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, in the few remaining cases mostly "acceptable" (within an order of magnitude of the optimal) plans, and only rarely a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice.
This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
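The bottom-up DP that IDP truncates can be sketched in a few lines: build the best plan for every subset of relations from the best plans of its two halves. The cost model below (sum of intermediate-result sizes with a flat per-join selectivity) and the cardinalities are illustrative assumptions, not taken from the paper:

```python
from itertools import combinations

def dp_join_order(card, sel=0.5):
    # Bottom-up DP over relation subsets; exponential in the number of
    # relations, which is exactly why IDP has to prune. Plan cost = total
    # size of intermediate results; a join result's size is the product of
    # base cardinalities discounted by one selectivity factor per join.
    rels = sorted(card)
    plans = {frozenset([r]): (0.0, r) for r in rels}   # subset -> (cost, plan tree)
    def size(s):
        p = 1.0
        for r in s:
            p *= card[r]
        return p * sel ** (len(s) - 1)
    for k in range(2, len(rels) + 1):
        for combo in combinations(rels, k):
            s, members, best = frozenset(combo), sorted(combo), None
            for i in range(1, 1 << (len(members) - 1)):   # each 2-way split once
                left = frozenset(m for j, m in enumerate(members) if i >> j & 1)
                right = s - left
                cost = plans[left][0] + plans[right][0] + size(s)
                if best is None or cost < best[0]:
                    best = (cost, (plans[left][1], plans[right][1]))
            plans[s] = best
    return plans[frozenset(rels)]

print(dp_join_order({'A': 1000, 'B': 100, 'C': 10}))
# -> (250500.0, ('A', ('B', 'C'))): join the two small relations first
```

Even this toy version enumerates every subset, so its table doubles with each added relation; IDP's restart trick trades exactly this exhaustiveness for scalability.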
Abstract:
We present an algorithm for testing the suitability of an affix grammar for deterministic, one-pass, bottom-up parsing, which is an improvement over the one suggested by Pohlmann [1]. The space requirements of the new algorithm are considerably smaller than those of Pohlmann's. We also describe an implementation of Pohlmann's algorithm and methods for improving its space requirements.
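For context, the parsing discipline being tested — deterministic, one-pass, bottom-up — is the shift-reduce scheme: scan the input once, left to right, and reduce whenever a handle sits on top of the stack. A toy recognizer for the hypothetical grammar E → E '+' E | n (far simpler than an affix grammar, but the same bottom-up mechanics):

```python
def parse(tokens):
    # One-pass shift-reduce recognizer for E -> E '+' E | n.
    stack = []
    for tok in tokens + ['$']:          # '$' marks end of input
        while True:                     # reduce greedily before each shift
            if stack and stack[-1] == 'n':
                stack[-1] = 'E'         # reduce by E -> n
            elif stack[-3:] == ['E', '+', 'E']:
                stack[-3:] = ['E']      # reduce by E -> E '+' E
            else:
                break
        if tok != '$':
            stack.append(tok)           # shift
    return stack == ['E']

print(parse(['n', '+', 'n', '+', 'n']))   # True
print(parse(['n', '+']))                  # False
```

The suitability test the abstract describes asks, in essence, whether such a shift/reduce decision can always be made deterministically for a given affix grammar.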
Abstract:
Molecular self-assembly plays a vital role in the construction of various nanostructures via the 'bottom-up' approach. Peptides are considered important biomolecular building blocks for different nanoscale structures, as they are biocompatible, biodegradable, generally non-toxic, and can be attuned to environmental stimuli such as pH, temperature and salt concentration. Peptide-based nanostructures offer a range of biological applications in tissue engineering, cell culture, regenerative medicine and drug delivery. In this review, the construction of short-peptide-based nanostructures, including nanotubes, nanovesicles and nanofibers, short-peptide-based nanoporous materials, and short-peptide-based nanofibrous hydrogels for various biological applications is discussed. Morphological transformations from one nanoscopic structure to another type of nanostructure (e.g., nanotubes to nanovesicles) are also discussed.
Abstract:
Energy use in developing countries is heterogeneous across households. Present-day global energy models are mostly too aggregate to account for this heterogeneity. Here, a bottom-up model for residential energy use that starts from key dynamic concepts on energy use in developing countries is presented and applied to India. Energy use and fuel choice are determined for five end-use functions (cooking, water heating, space heating, lighting and appliances) and for five income quintiles in rural and urban areas. The paper specifically explores the consequences of different assumptions about income distribution and rural electrification for residential-sector energy use and CO2 emissions, finding that results are clearly sensitive to variations in these parameters. As a result of population and economic growth, total Indian residential energy use is expected to increase by around 65-75% in 2050 compared to 2005, but residential carbon emissions may increase by up to 9-10 times the 2005 level. While a more equal income distribution and rural electrification enhance the transition to commercial fuels and reduce poverty, there is a trade-off in terms of higher CO2 emissions via increased electricity use.
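In miniature, the bottom-up accounting such a model performs is a sum over end-use functions and income quintiles of households times per-household energy intensity. The quintile names, end uses shown, and all numbers below are made-up placeholders, not the paper's data:

```python
def residential_demand(households, intensity):
    # Bottom-up aggregation: total energy = sum over (quintile, end use)
    # of households in the quintile times per-household intensity (GJ/yr).
    return sum(n_hh * intensity[q][use]
               for q, n_hh in households.items()
               for use in intensity[q])

# Toy inputs: two of the rural/urban income quintiles, three of five end uses.
households = {'rural_q1': 30e6, 'urban_q5': 10e6}     # number of households
intensity = {
    'rural_q1': {'cooking': 8.0, 'lighting': 0.4, 'appliances': 0.2},
    'urban_q5': {'cooking': 3.0, 'lighting': 0.8, 'appliances': 4.0},
}
total_pj = residential_demand(households, intensity) / 1e6   # GJ -> PJ
print(f"total residential demand ~ {total_pj:.0f} PJ/yr")    # ~336 PJ/yr
```

Scenario levers like rural electrification or income redistribution act by shifting the household counts and the fuel-specific intensities before this aggregation step.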