Abstract:
Friction stir processing (FSP) is emerging as one of the most capable severe plastic deformation (SPD) methods for producing bulk ultra-fine-grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenges FSP must meet before commercial use. For a commercial aluminium alloy 2024-T3 plate of 6 mm thickness, a bottom-up approach was attempted to optimize the major independent process parameters: plunge depth, tool rotation speed, and traverse speed. Tensile properties of the optimally friction stir processed sample were correlated with microstructural characterization performed using scanning electron microscopy (SEM) and electron back-scattered diffraction (EBSD). The optimum parameters from the bottom-up approach led to a defect-free FSP with a maximum strength of 93% of the base material strength. Micro-tensile testing of samples taken from the center of the processed zone showed a strength 1.3 times that of the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which was attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement with little variation in grain size across the thickness, and a large amount of grain boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for optimizing parameters during FSP of aluminium alloys, which is otherwise difficult through analytical methods due to the complex interactions between workpiece, tool, and process parameters. Precipitation mechanisms during FSP were responsible for the fine-grained microstructure in the nugget zone, which provided better mechanical properties than the base metal. (C) 2014 Elsevier Ltd. All rights reserved.
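The bottom-up optimization described above amounts to fixing a tentative setting and sweeping one parameter at a time, keeping the best value before moving to the next parameter. A minimal sketch of that loop, assuming a hypothetical `trial_score` stand-in for the experimental defect/strength evaluation (the real study scores trials by physical testing, not a function):

```python
# One-factor-at-a-time ("bottom-up") parameter search sketch.
# Parameter ranges and the scoring function are illustrative placeholders,
# not the study's experimental data.
def bottom_up_optimize(params, ranges, trial_score):
    """Sweep each parameter in turn, keeping the best value found so far."""
    best = dict(params)
    for name in ("plunge_depth", "rotation_speed", "traverse_speed"):
        best[name] = max(ranges[name],
                         key=lambda v: trial_score({**best, name: v}))
    return best

# Hypothetical score with a single optimum, purely for illustration.
score = lambda p: (-(p["plunge_depth"] - 0.2) ** 2
                   - (p["rotation_speed"] - 1000) ** 2 / 1e6
                   - (p["traverse_speed"] - 50) ** 2 / 1e4)
ranges = {"plunge_depth": [0.1, 0.2, 0.3],
          "rotation_speed": [800, 1000, 1200],
          "traverse_speed": [25, 50, 100]}
print(bottom_up_optimize({"plunge_depth": 0.1, "rotation_speed": 800,
                          "traverse_speed": 25}, ranges, score))
```

Because each sweep holds the other parameters fixed, this strategy is cheap in trials but assumes the parameters interact weakly, which is exactly what makes it attractive when analytical models of the tool-workpiece interaction are unavailable.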
Abstract:
A modified solution combustion approach was applied to the synthesis of nanosized SrFeO3-δ (SFO), using citric acid, oxalic acid, and glycine, singly and in mixtures, as fuels, with the corresponding metal nitrates as precursors. The synthesized and calcined powders were characterized by Fourier transform infrared spectroscopy (FT-IR), X-ray diffraction (XRD), thermogravimetric and derivative thermogravimetric analysis (TG-DTG), scanning electron microscopy, transmission electron microscopy, and N2 physisorption, and their acid strength was measured by n-butylamine titration. The FT-IR spectra show that the low-frequency band at 599 cm⁻¹ corresponds to metal-oxygen bond vibrations (probably Fe-O stretching) of the perovskite-structure compound. TG-DTG places the formation temperature of SFO between 850 and 900 °C. XRD results reveal that using a mixture of fuels in the preparation affects the crystallite size of the resultant compound. The average particle size of samples prepared from single fuels, as determined from XRD, was ~35-50 nm, whereas samples obtained from fuel mixtures yielded particles of 25-30 nm. Specifically, using fuel mixtures for the synthesis of SFO catalysts prevents agglomeration of the particles, which in turn decreases the crystallite size and increases the surface area of the catalysts. The present approach also affected the catalytic activity of SFO in the catalytic reduction of nitrobenzene to azoxybenzene.
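The abstract does not state which relation was used to extract crystallite size from the XRD patterns; the Scherrer equation, D = Kλ / (β cos θ), is the standard choice and is sketched here as an assumption, with typical Cu Kα inputs:

```python
import math

# Scherrer estimate of crystallite size from XRD peak broadening.
# The wavelength (Cu K-alpha) and shape factor K are conventional defaults;
# the example peak below is illustrative, not data from the study.
def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    beta = math.radians(fwhm_deg)             # peak FWHM in radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# A ~0.25 deg FWHM peak near 2-theta = 32 deg gives a size of a few tens
# of nm, the same regime as the 25-50 nm values reported above.
print(scherrer_size_nm(0.25, 32.0))
```

Broader peaks map to smaller crystallites, which is why the fuel-mixture samples (smaller particles) would show broader reflections than the single-fuel ones.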
Abstract:
Ice volume estimates are crucial for assessing the water reserves stored in glaciers. Due to its large glacier coverage, such estimates are of particular interest for the Himalayan-Karakoram (HK) region. In this study, several existing methodologies are used to estimate the ice reserves: three area-volume relations, one slope-dependent volume estimation method, and two ice-thickness distribution models are applied to a recent, detailed, and complete glacier inventory of the HK region, spanning the period 2000-2010 and revealing an ice coverage of 40,775 km². An uncertainty and sensitivity assessment is performed to investigate the influence of the observed glacier area and of important model parameters on the resulting total ice volume. Results of the two ice-thickness distribution models are validated against local ice-thickness measurements at six glaciers. The resulting ice volumes for the entire HK region range from 2955 to 4737 km³, depending on the approach. This range is lower than most previous estimates. Results from the ice-thickness distribution models and the slope-dependent thickness estimates agree well with measured local ice thicknesses; however, total volume estimates from area-related relations are larger than those from the other approaches. The study provides evidence of the significant effect of the selected method on the results and underlines the importance of careful and critical evaluation.
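The area-volume relations mentioned above are power laws of the form V = c·A^γ. The coefficients below are the commonly quoted Bahr-type values, used here as illustrative assumptions rather than the calibrations applied in the study:

```python
# Volume-area power-law scaling V = c * A**gamma for mountain glaciers.
# c and gamma are the widely cited Bahr-type defaults, not the exact
# parameters used in this study.
def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    """Estimate ice volume (km^3) from glacier area (km^2)."""
    return c * area_km2 ** gamma

# Because gamma > 1, the relation must be applied per glacier and summed;
# plugging the whole 40,775 km^2 inventory area into it at once would
# grossly overestimate the total volume.
per_glacier_areas = [1.2, 5.0, 0.4, 18.0]   # km^2, toy inventory
total = sum(glacier_volume_km3(a) for a in per_glacier_areas)
print(total)
```

The superlinearity of the scaling is one reason area-related relations can diverge from thickness-distribution models, as the abstract's comparison observes.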
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis plus two major extensions for enhanced precision. The base analysis is a dataflow analysis in which we propagate formulas backward from a given dereference and compute a necessary condition, at the entry of the program, for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which require excessive time to analyze fully. The base analysis is therefore configured to skip such a difficult construct when it is encountered, dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the tracked information, without requiring their full analysis. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second is based on manually constructed backward-direction summary functions for library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis on average declares about 84% of the dereferences in each benchmark safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
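The backward necessary-condition idea can be sketched on a toy three-address program: from a dereference of `v`, propagate the condition "v may be null" backward, letting copies rename the condition, allocations kill it, and "difficult" constructs conservatively drop everything tracked. The statement forms below are simplified assumptions, not the paper's actual intermediate representation:

```python
# Minimal backward null-dereference sketch. A statement is a tuple
# (op, dst, src) with op in {"copy", "new", "call"}.
def deref_may_be_unsafe(stmts, var):
    """Walk stmts backward from a dereference of `var`; return True if
    the necessary condition for a null dereference can reach entry."""
    tracked = {var}   # variables whose nullness would make the deref unsafe
    for op, dst, src in reversed(stmts):
        if dst not in tracked:
            continue
        tracked.discard(dst)
        if op == "copy":      # dst = src: nullness flows from src
            tracked.add(src)
        elif op == "new":     # dst = new ...: non-null, condition killed
            pass
        elif op == "call":    # "difficult" construct: drop tracked info,
            return True       # conservatively report potentially unsafe
    return bool(tracked)      # unsafe iff the condition survives to entry

prog_safe   = [("new", "x", None), ("copy", "y", "x")]
prog_unsafe = [("copy", "y", "p")]   # p is unconstrained at entry
print(deref_may_be_unsafe(prog_safe, "y"))    # allocation kills the condition
print(deref_may_be_unsafe(prog_unsafe, "y"))
```

The paper's two extensions refine exactly the `"call"` branch: instead of giving up, they transmit formulas along def-use edges or apply hand-written backward summaries of library methods.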
Abstract:
Layered transition metal dichalcogenides (TMDs), such as MoS2, are candidate materials for next-generation 2-D electronic and optoelectronic devices. The ability to grow uniform, crystalline atomic layers over large areas is the key to developing such technology. We report a chemical vapor deposition (CVD) technique that yields n-layered MoS2 on a variety of substrates. A generic approach suitable for all TMDs is demonstrated, involving thermodynamic modeling to identify the appropriate CVD process window and quantitative control of the vapor-phase supersaturation. All reactant sources in our method are outside the growth chamber, a significant improvement over the vapor-based methods for atomic layers reported to date. The as-deposited layers are p-type, due to Mo deficiency, with field-effect and Hall hole mobilities of up to 2.4 cm² V⁻¹ s⁻¹ and 44 cm² V⁻¹ s⁻¹, respectively. These are among the best yet reported for CVD-grown MoS2.
Abstract:
This work deals with the homogenization of an initial- and boundary-value problem for the doubly-nonlinear system
  D_t w − ∇·z⃗ = g(x, t, x/ε)   (0.1)
  w ∈ α(u, x/ε)   (0.2)
  z⃗ ∈ γ⃗(∇u, x/ε)   (0.3)
Here ε is a positive parameter; α and γ⃗ are maximal monotone with respect to the first variable and periodic with respect to the second. The inclusions (0.2) and (0.3) are formulated as null-minimization principles via the theory of Fitzpatrick [MR 1009594]. As ε → 0, a two-scale formulation is derived via Nguetseng's notion of two-scale convergence, and a (single-scale) homogenized problem is then retrieved. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
We revisit the a posteriori error analysis of discontinuous Galerkin methods for the obstacle problem derived in [25]. Under a mild assumption on the trace of the obstacle, we derive a reliable a posteriori error estimator that does not involve min/max functions. A key ingredient in this approach is an auxiliary problem with a discrete obstacle. Applications to various discontinuous Galerkin finite element methods are presented. Numerical experiments show that the new estimator obtained in this article performs better than the previous one.
Abstract:
Background: In the post-genomic era, where sequences are being determined at a rapid rate, we are highly reliant on computational methods for their tentative biochemical characterization. The Pfam database currently contains 3,786 families corresponding to "Domains of Unknown Function" (DUF) or "Uncharacterized Protein Family" (UPF), of which 3,087 families have no reported three-dimensional structure; these constitute almost one-fourth of the known protein families in need of both structural and functional characterization. Results: We applied a 'computational structural genomics' approach using five state-of-the-art remote similarity detection methods to detect relationships between uncharacterized DUFs and domain families of known structure. An association with a structural domain family can serve as a starting point for elucidating the function of a DUF. Among these five methods, searches in the SCOP-NrichD database have been applied for the first time. Predictions were classified as high, medium, or low confidence based on the consensus of results from the various approaches and were also annotated with enzyme and Gene Ontology terms. A total of 614 uncharacterized DUFs could be associated with a known structural domain; high-confidence predictions, involving at least four methods, were made for 54 families. These structure-function relationships for the 614 DUF families can be accessed online at http://proline.biochem.iisc.ernet.in/RHD_DUFS/. For potential enzymes in this set, we assessed their compatibility with the associated fold and performed detailed structural and functional annotation by examining alignments and the extent of conservation of functional residues. Detailed discussion is provided for interesting assignments for DUF3050, DUF1636, DUF1572, DUF2092 and DUF659. Conclusions: This study provides insights into the structure and potential function of nearly 20% of the DUFs. Use of different computational approaches enables us to reliably recognize distant relationships, especially when they converge on a common assignment, because the methods are often complementary. We observe that while pointers to the structural domain can offer the right clues to the function of a protein, recognition of its precise functional role remains non-trivial, with many DUF domains conserving only some of the critical residues. It is not clear whether these are functional vestiges or instances involving alternate substrates and interacting partners. Reviewers: This article was reviewed by Drs Eugene Koonin, Frank Eisenhaber and Srikrishna Subramanian.
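The consensus labelling described above can be sketched as a simple vote count over the five methods. The only cutoff the abstract states is that "high" confidence needs at least four agreeing methods; the medium/low split and the per-family hit counts below are illustrative assumptions:

```python
# Consensus-based confidence labelling sketch for DUF-to-fold assignments.
def confidence(n_agreeing_methods):
    if n_agreeing_methods >= 4:   # cutoff stated in the abstract
        return "high"
    if n_agreeing_methods >= 2:   # assumed band for "medium"
        return "medium"
    return "low"

# Hit counts here are invented for illustration, not the study's results.
hits = {"DUF3050": 5, "DUF1636": 4, "DUF1572": 2, "DUF659": 1}
print({duf: confidence(n) for duf, n in hits.items()})
```

Requiring agreement across complementary methods is what makes the high-confidence tier reliable: independent detectors rarely converge on the same wrong fold.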
Abstract:
Structural-acoustic waveguides of two different geometries are considered: a 2-D rectangular geometry and a circular cylindrical geometry. The objective is to obtain asymptotic expansions of the fluid-structure coupled wavenumbers. The required asymptotic parameters are derived in a systematic way, in contrast to the intuitive methods usually used in such problems. The systematic way involves analyzing the phase change of a wave incident on a single boundary of the waveguide. The coupled wavenumber expansions are then derived using these asymptotic parameters. The phase change is also used to qualitatively demarcate the dispersion diagram as dominantly structure-originated, fluid-originated, or fully coupled. In contrast to intuitively obtained asymptotic parameters, this approach imposes no restriction on the material and geometry of the structure. The derived closed-form solutions are compared with numerical solutions, and a good match is obtained. (C) 2016 Elsevier Ltd. All rights reserved.
Abstract:
By the semi-inverse method, a variational principle is obtained for the Lane-Emden equation, which offers considerable numerical convenience when applying finite element methods or the Ritz method.
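The Lane-Emden equation in question is θ'' + (2/ξ)θ' + θⁿ = 0 with θ(0) = 1, θ'(0) = 0. As a quick numerical cross-check (a direct integration sketch, not the paper's variational method), the n = 1 case can be integrated and compared with its known closed-form solution sin(ξ)/ξ:

```python
import math

def lane_emden(n, xi_end, h=1e-3):
    """Integrate the Lane-Emden equation with RK4, starting from the
    series expansion theta ~ 1 - xi^2/6 near xi = 0 to avoid the
    singular 2/xi term."""
    xi = h
    theta = 1.0 - xi * xi / 6.0
    dtheta = -xi / 3.0
    def accel(x, t, dt):
        return -2.0 * dt / x - max(t, 0.0) ** n
    for _ in range(int(round((xi_end - xi) / h))):
        k1t, k1d = dtheta, accel(xi, theta, dtheta)
        k2t, k2d = dtheta + 0.5*h*k1d, accel(xi + 0.5*h, theta + 0.5*h*k1t, dtheta + 0.5*h*k1d)
        k3t, k3d = dtheta + 0.5*h*k2d, accel(xi + 0.5*h, theta + 0.5*h*k2t, dtheta + 0.5*h*k2d)
        k4t, k4d = dtheta + h*k3d, accel(xi + h, theta + h*k3t, dtheta + h*k3d)
        theta += h * (k1t + 2*k2t + 2*k3t + k4t) / 6.0
        dtheta += h * (k1d + 2*k2d + 2*k3d + k4d) / 6.0
        xi += h
    return theta

# For n = 1 the exact solution is sin(xi)/xi; the difference should be tiny.
print(abs(lane_emden(1, 2.0) - math.sin(2.0) / 2.0))
```

A Ritz implementation built on the paper's variational principle would replace this marching scheme with minimization over trial functions satisfying the boundary conditions.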
Abstract:
DNA microarrays provide such a huge amount of data that unsupervised methods are required to reduce the dimension of the data set and to extract meaningful biological information. This work shows that Independent Component Analysis (ICA) is a promising approach for the analysis of genome-wide transcriptomic data. The paper first presents an overview of the most popular algorithms for performing ICA. These algorithms are then applied to a microarray breast-cancer data set. Issues concerning the application of ICA and the evaluation of the biological relevance of the results are discussed. This study indicates that ICA significantly outperforms Principal Component Analysis (PCA).
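To illustrate the kind of decomposition involved, here is a textbook FastICA toy (tanh nonlinearity, deflation) on synthetic mixed signals. This is a minimal sketch for intuition, not one of the algorithms benchmarked in the paper, and the mixing setup is invented:

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """X: (n_samples, n_signals) mixed data; returns estimated sources."""
    X = X - X.mean(axis=0)
    # Whiten: rotate/scale so components are uncorrelated with unit variance.
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Xw = X @ E @ np.diag(d ** -0.5) @ E.T
    rng = np.random.default_rng(seed)
    n = Xw.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        for _ in range(n_iter):
            wx = Xw @ w
            # Fixed-point update: w <- E[x g(w.x)] - E[g'(w.x)] w, g = tanh
            w_new = (Xw * np.tanh(wx)[:, None]).mean(axis=0) \
                    - (1 - np.tanh(wx) ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)  # deflation: stay orthogonal
            w = w_new / np.linalg.norm(w_new)
        W[i] = w
    return Xw @ W.T

# Toy demo: unmix two uniform (non-Gaussian) sources.
rng = np.random.default_rng(1)
S = rng.uniform(-1, 1, size=(2000, 2))
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # invented mixing matrix
S_est = fast_ica(S @ A.T)
# Each estimated component should correlate strongly with one true source.
corr = np.abs(np.corrcoef(S.T, S_est.T)[:2, 2:])
print(corr.max(axis=1))
```

PCA would only decorrelate the mixtures (the whitening step above); ICA's extra rotation toward statistical independence is what recovers the sources, which mirrors why it can outperform PCA on expression data.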