988 results for Recovery framework
Abstract:
Permanent plastic deformation induced by mechanical contacts affects the shape recovery of shape memory alloys. To understand the shape recovery of NiTiCu thin films subjected to local contact stresses, systematic investigations are carried out by inducing varying levels of contact stress using nanoindentation. The resulting indents are located precisely for imaging using a predetermined array consisting of different-sized indents. The morphology and topography of these indents before and after shape recovery are characterized quantitatively using scanning electron microscopy and atomic force microscopy. Shape recovery is found to depend on the contact stresses at low loads, while the recovery ratio remains constant at 0.13 at higher loads. Shape recovery is found to occur mainly in the depth direction of the indent, while far-field residual stresses play very little role in the recovery. (C) 2014 Elsevier B.V. All rights reserved.
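As a hypothetical illustration of the quantity reported above, a depth recovery ratio can be computed from AFM indent depths measured before and after the shape-memory transformation. This is a minimal sketch; the exact definition used in the paper may differ.

```python
def recovery_ratio(depth_before, depth_after):
    """Fraction of the initial indent depth recovered after the
    shape-memory transformation (one plausible definition)."""
    if depth_before <= 0:
        raise ValueError("initial indent depth must be positive")
    return (depth_before - depth_after) / depth_before

# e.g. a 100 nm indent relaxing to 87 nm gives a ratio of 0.13
print(recovery_ratio(100.0, 87.0))
```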
Abstract:
Finite volume methods traditionally employ dimension-by-dimension extension of the one-dimensional reconstruction and averaging procedures to achieve spatial discretization of the governing partial differential equations on a structured Cartesian mesh in multiple dimensions. This simple approach based on tensor product stencils introduces an undesirable grid orientation dependence in the computed solution. The resulting anisotropic errors lead to a disparity in the calculations that is most prominent between directions parallel and diagonal to the grid lines. In this work we develop isotropic finite volume discretization schemes which minimize such grid orientation effects in multidimensional calculations by eliminating the directional bias in the lowest order term in the truncation error. Explicit isotropic expressions that relate the cell face averaged line and surface integrals of a function and its derivatives to the given cell area and volume averages are derived in two and three dimensions, respectively. It is found that a family of isotropic approximations with a free parameter can be derived by combining isotropic schemes based on next-nearest and next-next-nearest neighbors in three dimensions. Use of these isotropic expressions alone in a standard finite volume framework, however, is found to be insufficient in enforcing rotational invariance when the flux vector is nonlinear and/or spatially non-uniform. The rotationally invariant terms which lead to a loss of isotropy in such cases are explicitly identified and recast in a differential form. Various forms of flux correction terms which allow for a full recovery of rotational invariance in the lowest order truncation error terms, while preserving the formal order of accuracy and discrete conservation of the original finite volume method, are developed. Numerical tests in two and three dimensions attest to the superior directional attributes of the proposed isotropic finite volume method.
Prominent anisotropic errors, such as spurious asymmetric distortions on a circular reaction-diffusion wave that feature in the conventional finite volume implementation are effectively suppressed through isotropic finite volume discretization. Furthermore, for a given spatial resolution, a striking improvement in the prediction of kinetic energy decay rate corresponding to a general two-dimensional incompressible flow field is observed with the use of an isotropic finite volume method instead of the conventional discretization. (C) 2014 Elsevier Inc. All rights reserved.
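The grid-orientation bias the abstract targets can already be seen in finite-difference form for the 2-D Laplacian. Below is a minimal NumPy sketch (not the authors' finite volume scheme) contrasting the standard 5-point cross stencil, whose leading truncation error is directionally biased, with the well-known 9-point stencil whose leading error is the rotationally invariant biharmonic term:

```python
import numpy as np

def laplacian_5pt(u, h):
    # standard cross stencil on a periodic grid: the leading h^2
    # error term (u_xxxx + u_yyyy)/12 is anisotropic
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h**2

def laplacian_9pt_iso(u, h):
    # isotropic 9-point stencil [1 4 1; 4 -20 4; 1 4 1]/(6 h^2):
    # the leading h^2 error term is proportional to the rotationally
    # invariant biharmonic operator applied to u
    edges = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
             np.roll(u, 1, 1) + np.roll(u, -1, 1))
    corners = (np.roll(np.roll(u, 1, 0), 1, 1) +
               np.roll(np.roll(u, 1, 0), -1, 1) +
               np.roll(np.roll(u, -1, 0), 1, 1) +
               np.roll(np.roll(u, -1, 0), -1, 1))
    return (4 * edges + corners - 20 * u) / (6 * h**2)
```

Both stencils are second-order accurate; the isotropic one differs only in the directional structure of the error, which is the property the paper's finite volume construction generalizes.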
Abstract:
Polyhedral techniques for program transformation are now used in several proprietary and open source compilers. However, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this paper, we attempt to bridge this gap by presenting techniques that can be used to extract polyhedral representation from dataflow programs and to synthesize them from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs which we built using our techniques and other popular research tools such as Clan and Pluto. For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30x while running on an 8-core Intel system.
Abstract:
In this work, we address the recovery of block-sparse vectors with intra-block correlation, i.e., the recovery of vectors in which the correlated nonzero entries are constrained to lie in a few clusters, from noisy underdetermined linear measurements. Among Bayesian sparse recovery techniques, cluster Sparse Bayesian Learning (SBL) is an efficient tool for block-sparse vector recovery with intra-block correlation. However, this technique uses a heuristic method to estimate the intra-block correlation. In this paper, we propose the Nested SBL (NSBL) algorithm, which we derive using a novel Bayesian formulation that facilitates the use of the monotonically convergent nested Expectation Maximization (EM) and a Kalman filtering-based learning framework. Unlike the cluster-SBL algorithm, this formulation leads to closed-form EM updates for estimating the correlation coefficient. We demonstrate the efficacy of the proposed NSBL algorithm using Monte Carlo simulations.
Abstract:
It has been shown that iterative re-weighted strategies will often improve the performance of many sparse reconstruction algorithms. However, these strategies are algorithm dependent and cannot be easily extended for an arbitrary sparse reconstruction algorithm. In this paper, we propose a general iterative framework and a novel algorithm which iteratively enhance the performance of any given arbitrary sparse reconstruction algorithm. We theoretically analyze the proposed method using restricted isometry property and derive sufficient conditions for convergence and performance improvement. We also evaluate the performance of the proposed method using numerical experiments with both synthetic and real-world data. (C) 2014 Elsevier B.V. All rights reserved.
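The kind of algorithm-agnostic wrapper the abstract describes can be sketched as follows. This is a hypothetical simplification, not the authors' method: `enhance` accepts any k-sparse reconstruction routine, reruns it on the current residual, and debiases the retained support by least squares; `matched_filter` is a deliberately crude inner algorithm used only for the demonstration.

```python
import numpy as np

def enhance(recover, A, y, k, n_iter=10):
    """Wrap an arbitrary k-sparse routine `recover(A, r, k)`: each pass
    runs it on the residual, keeps the k largest entries of the combined
    estimate, and re-fits them by least squares on that support."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + recover(A, y - A @ x, k)    # inner algorithm on residual
        supp = np.argsort(np.abs(x))[-k:]   # keep the k largest entries
        x = np.zeros(n)
        x[supp] = np.linalg.lstsq(A[:, supp], y, rcond=None)[0]
    return x

def matched_filter(A, r, k):
    """Crude inner algorithm: thresholded correlation with the columns."""
    z = A.T @ r
    out = np.zeros_like(z)
    top = np.argsort(np.abs(z))[-k:]
    out[top] = z[top]
    return out
```

On its own, the matched filter is a weak estimator; the point of the wrapper is that residual feedback plus support debiasing can sharpen whatever inner algorithm is supplied.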
Abstract:
Task-parallel languages are increasingly popular. Many of them provide expressive mechanisms for intertask synchronization. For example, OpenMP 4.0 will integrate data-driven execution semantics derived from the StarSs research language. Compared to the more restrictive data-parallel and fork-join concurrency models, the advanced features being introduced into task-parallel models in turn enable improved scalability through load balancing, memory latency hiding, mitigation of the pressure on memory bandwidth, and, as a side effect, reduced power consumption. In this article, we develop a systematic approach to compile loop nests into concurrent, dynamically constructed graphs of dependent tasks. We propose a simple and effective heuristic that selects the most profitable parallelization idiom for every dependence type and communication pattern. This heuristic enables the extraction of interband parallelism (cross-barrier parallelism) in a number of numerical computations that range from linear algebra to structured grids and image processing. The proposed static analysis and code generation alleviates the burden of a full-blown dependence resolver to track the readiness of tasks at runtime. We evaluate our approach and algorithms in the PPCG compiler, targeting OpenStream, a representative dataflow task-parallel language with explicit intertask dependences and a lightweight runtime. Experimental results demonstrate the effectiveness of the approach.
Abstract:
Metal-organic frameworks (MOFs) and boron nitride both possess novel properties, the former associated with microporosity and the latter with good mechanical properties. We have synthesized composites of the imidazolate based MOF, ZIF-8, and few-layer BN in order to see whether we can incorporate the properties of both these materials in the composites. The composites so prepared between BN nanosheets and ZIF-8 have compositions ZIF-1BN, ZIF-2BN, ZIF-3BN and ~ZIF-4BN. The composites have been characterized by PXRD, TGA, XPS, electron microscopy, IR, Raman and solid state NMR spectroscopy. The composites possess good surface areas, the actual value decreasing only slightly with the increase in the BN content. The CO2 uptake remains nearly the same in the composites as in the parent ZIF-8. More importantly, the addition of BN markedly improves the mechanical properties of ZIF-8, a feature that is much desired in MOFs. Observation of microporous features along with improved mechanical properties in a MOF is indeed noteworthy. Such manipulation of properties can be profitably exploited in practical applications.
Abstract:
Matroidal networks were introduced by Dougherty et al. and have been well studied in the recent past. It was shown that a network has a scalar linear network coding solution if and only if it is matroidal associated with a representable matroid. A particularly interesting feature of this development is the ability to construct (scalar and vector) linearly solvable networks using certain classes of matroids. Furthermore, it was shown through the connection between network coding and matroid theory that linear network coding is not always sufficient for general network coding scenarios. The current work attempts to establish a connection between matroid theory and network-error correcting and detecting codes. In a similar vein to the theory connecting matroids and network coding, we abstract the essential aspects of linear network-error detecting codes to arrive at the definition of a matroidal error detecting network (and similarly, a matroidal error correcting network abstracting from network-error correcting codes). An acyclic network (with arbitrary sink demands) is then shown to possess a scalar linear error detecting (correcting) network code if and only if it is a matroidal error detecting (correcting) network associated with a representable matroid. Therefore, constructing such network-error correcting and detecting codes implies the construction of certain representable matroids that satisfy some special conditions, and vice versa. We then present algorithms that enable the construction of matroidal error detecting and correcting networks with a specified capability of network-error correction. 
Using these construction algorithms, a large class of hitherto unknown scalar linearly solvable networks with multisource, multicast, and multiple-unicast network-error correcting codes is made available for theoretical use and practical implementation, with parameters, such as number of information symbols, number of sinks, number of coding nodes, error correcting capability, and so on, being arbitrary but for computing power (for the execution of the algorithms). The complexity of the construction of these networks is shown to be comparable with the complexity of existing algorithms that design multicast scalar linear network-error correcting codes. Finally, we also show that linear network coding is not sufficient for the general network-error correction (detection) problem with arbitrary demands. In particular, for the same number of network errors, we show a network for which there is a nonlinear network-error detecting code satisfying the demands at the sinks, whereas there are no linear network-error detecting codes that do the same.
Abstract:
Bioshields or coastal vegetation structures are currently amongst the most important coastal habitat modification activities in south-east Asia, particularly after the December 2004 tsunami. Coastal plantations have been promoted at a large scale as protection against severe natural disasters despite considerable debate over their efficacy as protection measures. In this paper, we provide an interdisciplinary framework for evaluating and monitoring coastal plantations. We then use this framework in a case study in peninsular India. We conducted a socio-ecological questionnaire-based survey on government and non-government organizations directly involved in coastal plantation efforts in three 2004 Indian Ocean tsunami affected states in mainland India. We found that though coastal protection was stated to be the primary cause, socio-economic factors like providing rural employment were strong drivers of plantation activities. Local communities were engaged primarily as daily wage labour for plantation, rather than in the planning or monitoring phases. Application of ecological criteria has been undermined during the establishment and maintenance of plantations and there was a general lack of awareness about conservation laws relating to coastal forests. While ample flow of international aid has fuelled the plantation of exotics in the study area particularly after the Indian Ocean tsunami in 2004, the long term ecological consequences need further evaluation and rigorous monitoring in the future. (C) 2014 Elsevier Masson SAS. All rights reserved.
Abstract:
Based on an ultrasound-modulated optical tomography experiment, a direct, quantitative recovery of Young's modulus (E) is achieved from the modulation depth (M) in the intensity autocorrelation. The number of detector locations is limited to two in orthogonal directions, reducing the complexity of the data gathering step whilst ensuring against an impoverishment of the measurement by employing ultrasound frequency as a parameter to vary during data collection. M and E are related via two partial differential equations: the first connects M to the amplitude of vibration of the scattering centers in the focal volume, and the second connects this amplitude to E. A (composite) sensitivity matrix mapping the variation of M to that of E is arrived at and used in a (barely regularized) Gauss-Newton algorithm to iteratively recover E. The reconstruction results showing the variation of E are presented. (C) 2015 Optical Society of America
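A structural sketch of the lightly regularized Gauss-Newton iteration mentioned above, with a generic two-parameter forward model standing in for the paper's composite sensitivity map from E to M (the function and parameter choices below are illustrative assumptions, not the paper's model):

```python
import numpy as np

def gauss_newton(f, jac, x0, y_meas, n_iter=25, reg=1e-9):
    """Barely regularized Gauss-Newton update:
    x <- x + (J'J + reg*I)^{-1} J' (y - f(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = y_meas - f(x)                    # data misfit
        J = jac(x)                           # sensitivity matrix at x
        x = x + np.linalg.solve(J.T @ J + reg * np.eye(x.size), J.T @ r)
    return x
```

For example, fitting the stand-in model y = a * exp(-b * t) recovers (a, b) from noiseless synthetic data; in the paper's setting the unknown would be a discretized E field and the data the measured modulation depths.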
Abstract:
Regional frequency analysis is widely used for estimating quantiles of hydrological extreme events at sparsely gauged/ungauged target sites in river basins. It involves identification of a region (group of watersheds) resembling watershed of the target site, and use of information pooled from the region to estimate quantile for the target site. In the analysis, watershed of the target site is assumed to completely resemble watersheds in the identified region in terms of mechanism underlying generation of extreme event. In reality, it is rare to find watersheds that completely resemble each other. Fuzzy clustering approach can account for partial resemblance of watersheds and yield region(s) for the target site. Formation of regions and quantile estimation requires discerning information from fuzzy-membership matrix obtained based on the approach. Practitioners often defuzzify the matrix to form disjoint clusters (regions) and use them as the basis for quantile estimation. The defuzzification approach (DFA) results in loss of information discerned on partial resemblance of watersheds. The lost information cannot be utilized in quantile estimation, owing to which the estimates could have significant error. To avert the loss of information, a threshold strategy (TS) was considered in some prior studies. In this study, it is analytically shown that the strategy results in under-prediction of quantiles. To address this, a mathematical approach is proposed in this study and its effectiveness in estimating flood quantiles relative to DFA and TS is demonstrated through Monte-Carlo simulation experiments and case study on Mid-Atlantic water resources region, USA. (C) 2015 Elsevier B.V. All rights reserved.
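The difference between discarding low-membership sites and letting every site contribute in proportion to its membership can be illustrated with a toy pooling function. This is a hypothetical simplification of regional pooling, not the paper's proposed mathematical approach:

```python
def pooled_estimate(site_stats, memberships, threshold=None):
    """Membership-weighted pooling of at-site statistics for a target
    site.  With `threshold`, sites whose fuzzy membership falls below
    it are discarded (the information-losing strategy); with
    threshold=None every site contributes in proportion to its
    membership."""
    pairs = [(s, w) for s, w in zip(site_stats, memberships)
             if threshold is None or w >= threshold]
    total_weight = sum(w for _, w in pairs)
    return sum(s * w for s, w in pairs) / total_weight
```

With three sites carrying statistics 10, 20, 30 and memberships 0.5, 0.3, 0.2, full pooling gives 17.0, while a 0.3 threshold drops the third site and shifts the estimate to 13.75, showing how thresholding changes the pooled value.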
Abstract:
We investigate the problem of timing recovery for two-dimensional magnetic recording (TDMR) channels. We develop a timing-error model for the TDMR channel that accounts for phase and frequency offsets in the presence of noise. We propose a 2-D data-aided phase-locked loop (PLL) architecture for tracking variations in the position and movement of the read head in the down-track and cross-track directions, and analyze the convergence of the algorithm under non-separable timing errors. We further develop a 2-D interpolation-based timing recovery scheme that works in conjunction with the 2-D PLL. We quantify the efficiency of our proposed algorithms by simulations over a 2-D magnetic recording channel with timing errors.
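A single-axis sketch of a data-aided second-order PLL of the kind described; the 2-D version would run coupled loops for the down-track and cross-track directions, and the loop gains here are illustrative assumptions:

```python
def pll_track(phase_meas, k_p=0.1, k_f=0.01):
    """Second-order data-aided PLL for one axis: the integral path
    accumulates the frequency offset, the proportional path corrects
    the instantaneous phase.  Returns the error sequence so that
    convergence can be inspected."""
    phase_hat, freq_hat, errors = 0.0, 0.0, []
    for p in phase_meas:
        err = p - phase_hat                 # timing-error detector output
        errors.append(err)
        freq_hat += k_f * err               # integral path: frequency offset
        phase_hat += freq_hat + k_p * err   # proportional path + NCO update
    return errors
```

Because of the integral path, a constant frequency offset (a phase ramp) is tracked with vanishing steady-state error, which is the behavior needed when the read head drifts at a steady rate.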
Abstract:
A closed-form expression for the dual of dissipation potential is derived within the framework of irreversible thermodynamics using the principles of dimensional analysis and self-similarity. Through this potential, a damage evolution law is proposed for concrete under fatigue loading using the concepts of damage mechanics in conjunction with fracture mechanics. The proposed law is used to compute damage in a volume element when a member is subjected to fatigue loading. The evolution of damage from microcracking to macrocracking of the entire member is captured through a series of volume elements failing one after the other. The number of loading cycles to failure of the member is obtained as the summation of number of cycles to failure for each individual volume element. A parametric study is conducted to determine the effect of the size of the volume element on the model's prediction of fatigue life. A global damage index is also defined, and the residual moment carrying capacity of damaged beams is evaluated. Through a deterministic sensitivity analysis, it is found that the load range and maximum aggregate size are the most influencing parameters on the fatigue life of a plain concrete beam.
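The series summation of element lives described above can be sketched with a constant per-cycle damage rate standing in for the thermodynamic evolution law (a hypothetical simplification; the paper's damage increments vary with the load history):

```python
import math

def cycles_to_failure(damage_rates):
    """Sequential element failure: each volume element accumulates
    damage at its own per-cycle rate; when its damage reaches 1 the
    element fails and loading passes to the next one.  The member's
    fatigue life is the sum of the elements' lives."""
    total = 0
    for rate in damage_rates:                # one entry per volume element
        total += math.ceil(1.0 / rate)       # cycles for this element to fail
    return total
```

For instance, two elements damaged at rates 0.5 and 0.25 per cycle give a member life of 2 + 4 = 6 cycles.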
Abstract:
Compressive Sensing (CS) theory combines signal sampling and compression for sparse signals, resulting in a reduction in sampling rate. In recent years, many recovery algorithms have been proposed to reconstruct the signal efficiently. Subspace Pursuit and Compressive Sampling Matching Pursuit are some of the popular greedy methods. Also, Fusion of Algorithms for Compressed Sensing is a recently proposed method in which several CS reconstruction algorithms participate and the final estimate of the underlying sparse signal is determined by fusing the estimates obtained from the participating algorithms. All these methods involve solving a least squares problem which may be ill-conditioned, especially in the low-dimension measurement regime. In this paper, we propose a step prior to least squares to ensure the well-conditioning of the least squares problem. Using Monte Carlo simulations, we show that in the low-dimension measurement scenario, this modification improves the reconstruction capability of the algorithm in clean as well as noisy measurement cases.
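A minimal sketch of a conditioning guard inserted before the least squares step; the paper's specific pre-step may differ, and the condition threshold and regularization level below are illustrative assumptions:

```python
import numpy as np

def safe_least_squares(A, y, cond_max=1e8, reg=1e-6):
    """Guarded least squares for the support submatrix A: if A is
    ill-conditioned, fall back to a Tikhonov-regularized normal-equations
    solve; otherwise solve the plain least squares problem."""
    if np.linalg.cond(A) > cond_max:
        return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ y)
    return np.linalg.lstsq(A, y, rcond=None)[0]
```

In a greedy recovery loop, this routine would replace the unguarded least squares solve performed on the currently selected columns.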
Abstract:
The structure of a new cysteine framework (-C-CC-C-C-C) "M"-superfamily conotoxin, Mo3964, shows it to have a beta-sandwich structure that is stabilized by inter-sheet cross disulfide bonds. Mo3964 decreases outward K+ currents in rat dorsal root ganglion neurons and increases the reversal potential of the Na(V)1.2 channels. The structure of Mo3964 (PDB ID: 2MW7) is constructed from the disulfide connectivity pattern, i.e., 1-3, 2-5, and 4-6, that is hitherto undescribed for the "M"-superfamily conotoxins. The tertiary structural fold has not been described for any of the known conus peptides. NOE (549), dihedral angle (84), and hydrogen bond (28) restraints, obtained by measurement of h3J(NC') scalar couplings, were used as input for structure calculation. The ensemble of structures showed a backbone root mean square deviation of 0.68 ± 0.18 Å, with 87% and 13% of the backbone dihedral (phi, psi) angles lying in the most favored and additional allowed regions of the Ramachandran map, respectively. The conotoxin Mo3964 represents a new bioactive peptide fold that is stabilized by disulfide bonds and adds to the existing repertoire of scaffolds that can be used to design stable bioactive peptide molecules.