25 results for fictitious translations
Abstract:
At an EMBO Workshop on DNA Curvature and Bending, held at Churchill College, Cambridge, on 10-15 September 1988, two sessions were scheduled on definitions of the parameters used to describe the geometry of nucleic acid chains and helices, and on a common nomenclature for these parameters. The most widely used library of helix analysis programs, HELIB (Fratini et al., 1982; Dickerson, 1985), suffers from the fact that the translations and rotations as defined are not fully independent and depend to a certain extent on the choice of overall helix axis. Several research groups have been engaged independently in developing alternative programs for the geometrical analysis of polynucleotide chains, but with different definitions of the quantities calculated and with widely different nomenclature even when the same parameter is involved.
Abstract:
Load-deflection curves for a notched beam under three-point load are determined using the Fictitious Crack Model (FCM) and Blunt Crack Model (BCM). Two values of fracture energy GF are used in this analysis: (i) GF obtained from the size effect law and (ii) GF obtained independently of the size effect. The predicted load-deflection diagrams are compared with the experimental ones obtained for the beams tested by Jenq and Shah. In addition, the values of maximum load (Pmax) obtained by the analyses are compared with the experimental ones for beams tested by Jenq and Shah and by Bažant and Pfeiffer. The results indicate that the descending portion of the load-deflection curve is very sensitive to the GF value used.
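The size effect law mentioned above is Bažant's relation for the nominal strength of geometrically similar specimens, sigma_N = B f_t / sqrt(1 + D/D0). As a minimal sketch only: the constants B, f_t, and D0 below are hypothetical placeholders, not values from the Jenq-Shah or Bažant-Pfeiffer tests.

```python
import math

def nominal_strength(D, B=1.0, ft=4.0, D0=100.0):
    """Bazant's size effect law: sigma_N = B*ft / sqrt(1 + D/D0).

    D  : characteristic structure size (e.g. beam depth, mm)
    ft : tensile strength (MPa); B and D0 are empirical constants,
         normally fitted to Pmax data from geometrically similar beams.
    """
    return B * ft / math.sqrt(1.0 + D / D0)

# Nominal strength decreases with size, approaching the LEFM
# D^(-1/2) asymptote for large D.
strengths = [nominal_strength(D) for D in (50, 100, 200, 400, 800)]
```

In the paper's comparison, fitting such Pmax data is one of the two routes to the fracture energy GF used in the cohesive-crack analyses.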
Abstract:
This paper presents a new approach to modeling concrete cracking using a hybrid of the displacement discontinuity element method and the direct boundary element method, incorporating the fictitious crack model. A fracture mechanics approach is followed, using Hillerborg's fictitious crack model. A boundary element based substructure method and the hybrid displacement discontinuity/direct boundary element technique are compared in this paper. In order to represent the process zone ahead of the crack, closing forces are assumed to act in such a way that they obey a linear normal stress-crack opening displacement law. Plain concrete beams with and without an initial crack under three-point loading were analyzed by both methods. The numerical results were shown to agree well with results from the existing finite element method. The model is capable of reproducing the whole range of load-deflection response, including strain-softening and snap-back behavior, as illustrated in the numerical examples. (C) 2011 Elsevier Ltd. All rights reserved.
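The linear normal stress-crack opening law referred to above can be written sigma(w) = f_t * (1 - w/w_c) for 0 <= w <= w_c, with the fracture energy GF = f_t * w_c / 2 as the area under the curve. A minimal sketch, with hypothetical values of f_t and w_c (the paper does not state these):

```python
def cohesive_stress(w, ft=3.0, wc=0.16):
    """Linear softening law for the closing forces in the process
    zone: sigma = ft*(1 - w/wc) for 0 <= w <= wc, zero beyond wc.

    ft : tensile strength (MPa), wc : critical crack opening (mm).
    """
    if w < 0:
        raise ValueError("crack opening must be non-negative")
    return ft * (1.0 - w / wc) if w <= wc else 0.0

# Fracture energy = area under the triangular sigma(w) diagram
GF = 0.5 * 3.0 * 0.16  # N/mm, for the placeholder ft and wc above
```

The hybrid method applies these closing forces over the fictitious crack faces; the law itself is the one-parameter-family input to that analysis.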
Abstract:
We have studied the behaviour of a charged particle in an axially symmetric magnetic field having a neutral point, so as to explore the possibility of confining a charged particle in a thermonuclear device. In order to study the motion we have reduced the three-dimensional motion to a two-dimensional one by introducing a fictitious potential. Following Schmidt, we have classified the motion as ‘off-axis motion’ or ‘encircling motion’, depending on the behaviour of this potential. We see that the particle performs a hybrid type of motion along the negative z-axis, i.e. at some instant it is in ‘off-axis motion’ while at another it is in ‘encircling motion’. We have also solved the equation of motion numerically, and the graphs of the particle trajectory verify our analysis. We find that in most cases the particle is contained. The magnetic moment is found to be moderately adiabatic.
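The "fictitious potential" in such reductions is the standard effective potential obtained from conservation of the canonical angular momentum p_phi in an axially symmetric field: U(r, z) = (p_phi - q*r*A_phi(r, z))^2 / (2*m*r^2). A minimal sketch; the uniform-field vector potential used as a default is a toy illustration, not the neutral-point field of the paper:

```python
def effective_potential(r, z, p_phi, q=1.0, m=1.0, A_phi=None):
    """Fictitious (effective) potential for a charged particle in an
    axially symmetric magnetic field, from conservation of the
    canonical angular momentum p_phi:

        U(r, z) = (p_phi - q*r*A_phi(r, z))**2 / (2*m*r**2)

    The zeros and wells of U classify the motion (off-axis vs
    encircling, in Schmidt's terminology).
    """
    if A_phi is None:
        # toy default: uniform axial field B0 = 1, so A_phi = B0*r/2
        A_phi = lambda r, z: 0.5 * r
    return (p_phi - q * r * A_phi(r, z)) ** 2 / (2.0 * m * r ** 2)
```

The particle's radial-axial motion then proceeds in U(r, z) exactly as a two-dimensional mechanical problem, which is the reduction described above.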
Abstract:
Purpose: The authors aim to develop a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT), making use of a shape-based reconstruction strategy that enables representing a cross section of an inhomogeneous tumor boundary by a general closed curve. Methods: The optical parameter fields to be recovered are approximated via an expansion based on circular harmonics (CH) (Fourier basis functions), and the EnKF is used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross-sections of a cylinder, with background absorption and (reduced) scattering coefficients chosen as mu_a^b = 0.01 mm^(-1) and mu_s'^b = 1.0 mm^(-1), respectively. We also assume mu_a = 0.02 mm^(-1) within the inhomogeneity (for the single-inhomogeneity case) and mu_a = 0.02 and 0.03 mm^(-1) (for the two-inhomogeneities case). The reconstruction results by the PD-EnKF are shown to be consistently superior to those obtained through a deterministic and explicitly regularized Gauss-Newton algorithm.
We have also estimated the unknown mu_a from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed data. Conclusions: The PD-EnKF, which exhibits little sensitivity to variations in the fictitiously introduced noise processes, is also proven to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of the shape-based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shape of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
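For reference, the generic stochastic (perturbed-observation) EnKF analysis step that underlies derivative-free filters of this kind can be sketched as follows. This is the textbook EnKF update, not the paper's PD-EnKF; the dimensions, operator, and noise levels are illustrative only.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic EnKF analysis step.

    X : (n, N) ensemble matrix, columns are state members
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation noise covariance
    """
    _, N = X.shape
    m = len(y)
    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)     # predicted-obs anomalies
    Pyy = HA @ HA.T / (N - 1) + R                # innovation covariance
    Pxy = A @ HA.T / (N - 1)                     # state-obs cross-covariance
    K = np.linalg.solve(Pyy.T, Pxy.T).T          # Kalman gain: Pxy @ inv(Pyy)
    # perturbed observations, one per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
    return X + K @ (Y - HX)
```

In the PD-EnKF setting the "state" would be the CH expansion coefficients and the inclusion parameter value, updated against fluence data through the forward DOT model rather than a linear H.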
Abstract:
The study of the fracture behaviour of concrete structures using the fictitious crack model requires two fracture properties of the concrete mix, namely the size-independent specific fracture energy G_F and the corresponding tension softening relation sigma(w) between the residual stress carrying capacity sigma and the crack opening w in the fracture process zone ahead of a real crack. In this paper, bi-linear tension softening diagrams of three different concrete mixes, ranging in compressive strength from 57 to 122 MPa, whose size-independent specific fracture energy had been previously determined, have been constructed in an inverse manner, based on the concept of a non-linear hinge, from the load-crack mouth opening plots of notched three-point bend beams. (C) 2013 Elsevier Ltd. All rights reserved.
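A bi-linear softening diagram of the kind constructed here is a piecewise-linear sigma(w) through (0, f_t), a kink point (w_1, sigma_1), and the critical opening (w_c, 0), with G_F equal to the area under it. A sketch with hypothetical parameter values (the paper's inversely identified diagrams are mix-specific):

```python
def bilinear_softening(w, ft=4.0, w1=0.03, s1=1.0, wc=0.3):
    """Bi-linear tension softening sigma(w): starts at (0, ft),
    kinks at (w1, s1), reaches zero at the critical opening wc.
    Units: stresses in MPa, openings in mm."""
    if w < 0:
        raise ValueError("crack opening must be non-negative")
    if w <= w1:
        return ft + (s1 - ft) * (w / w1)    # steep first branch
    if w <= wc:
        return s1 * (wc - w) / (wc - w1)    # shallow second branch
    return 0.0

# G_F is the area under the diagram: trapezoid + tail triangle
GF = 0.5 * (4.0 + 1.0) * 0.03 + 0.5 * 1.0 * (0.3 - 0.03)
```

Inverse identification adjusts (w1, s1, wc) so that the non-linear hinge model reproduces the measured load-CMOD plots while the area matches the measured G_F.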
Abstract:
Heterodimeric proteins with homologous subunits of the same fold are involved in various biological processes. The objective of this study is to understand the evolution of the structural and functional features of such heterodimers. Using a non-redundant dataset of 70 such heterodimers of known 3D structure and an independent dataset of 173 heterodimers from yeast, we note that the mean sequence identity between interacting homologous subunits is only 23-24%, suggesting that, generally, highly diverged paralogues assemble to form such a heterodimer. We also note that the functional roles of the interacting subunits/domains are generally quite different. This suggests that, though the interacting subunits/domains are homologous, their high evolutionary divergence is reflected in a high functional divergence, which contributes to a gross function for the heterodimer considered as a whole. The inverse relationship between sequence identity and RMSD of interacting homologues in heterodimers is not followed. We also addressed the question of the formation of homodimers by the subunits of heterodimers, by generating models of fictitious homodimers on the basis of the 3D structures of the heterodimers. The interaction energies associated with these homodimers suggest that, in the overwhelming majority of cases, such homodimers are unlikely to be stable. The majority of the homologues of heterodimers of known structure form heterodimers (51.8%), and a small proportion (14.6%) form homodimers. Comparison of the 3D structures of heterodimers with homologous homodimers suggests that the interfacial nature of residues is not well conserved. In over 90% of the cases we note that the interacting subunits of heterodimers are co-localized in the cell. Proteins 2015; 83:1766-1786. (c) 2015 Wiley Periodicals, Inc.
Abstract:
Identifying translations from comparable corpora is a well-known problem with several applications, e.g. dictionary creation in resource-scarce languages. The scarcity of high-quality corpora, especially in Indian languages, makes this problem hard; e.g. state-of-the-art techniques achieve a mean reciprocal rank (MRR) of 0.66 for English-Italian, but a mere 0.187 for Telugu-Kannada. There exist comparable corpora in many Indian languages paired with other "auxiliary" languages. We observe that translations have many topically related words in common in the auxiliary language. To model this, we define the notion of a translingual theme, a set of topically related words from auxiliary-language corpora, and present a probabilistic framework for translation induction. Extensive experiments on 35 comparable corpora, using English and French as auxiliary languages, show that this approach can yield dramatic improvements in performance (e.g. MRR improves by 124%, to 0.419, for Telugu-Kannada). A user study on WikiTSu, a system for cross-lingual Wikipedia title suggestion that uses our approach, shows a 20% improvement in the quality of titles suggested.
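The MRR figures quoted above average, over all queries, the reciprocal rank of the first correct translation in the system's ranked candidate list. A minimal sketch of the metric (the sample data is made up):

```python
def mean_reciprocal_rank(ranked_lists, gold):
    """MRR over a set of queries.

    ranked_lists : per-query candidate lists, best candidate first
    gold         : per-query correct answer; queries whose answer
                   never appears in the list contribute 0
    """
    total = 0.0
    for candidates, answer in zip(ranked_lists, gold):
        if answer in candidates:
            total += 1.0 / (candidates.index(answer) + 1)
    return total / len(ranked_lists)
```

So moving the correct Telugu-Kannada translation from, on average, far down the list to near the top is what drives the reported jump from 0.187 to 0.419.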
Abstract:
Land for conventional air-insulated substations is becoming increasingly difficult to obtain, not only in urban but also in semiurban areas. When the land made available is highly uneven, the associated technoeconomic factors favor the erection of substations on a steplike ground surface, and such constructions have been in service for more than ten years in some parts of southern India. Noting that the literature on the performance of ground grids in such a construction is rather scarce, the present work was taken up. Evaluation of the performance of earthing elements in steplike ground forms the main goal of the present work. For the numerical evaluation, a suitable boundary-based methodology is employed. This method retains the classical Galerkin approach for the conductors, while the interfaces are replaced by equivalent fictitious surface sources defined over an unstructured mesh. Details of the implementation of this numerical method, along with special measures to minimize the computation, are presented. The performance of basic earthing elements, such as the driven rod, counterpoise, and simple grids buried in steplike ground, is analyzed and compared with that for the case of a uniform soil surface. It is shown that, more than the earthing resistances, the step potentials can be significantly affected.
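For the uniform-soil baseline case of the driven rod, a standard closed-form estimate (not the paper's boundary-based method) is Dwight's formula, R = rho/(2*pi*L) * (ln(4L/a) - 1). A sketch with illustrative values:

```python
import math

def rod_resistance(rho, L, a):
    """Dwight's formula for the earthing resistance of a vertical
    rod driven into uniform soil:

        R = rho / (2*pi*L) * (ln(4*L/a) - 1)

    rho : soil resistivity (ohm-m)
    L   : rod length (m)
    a   : rod radius (m)
    """
    return rho / (2.0 * math.pi * L) * (math.log(4.0 * L / a) - 1.0)

# illustrative: 3 m rod, 16 mm diameter, 100 ohm-m soil
R = rod_resistance(100.0, 3.0, 0.008)
```

Against such uniform-soil baselines, the paper's numerical method quantifies how a steplike ground surface shifts the resistance and, more importantly, the step potentials.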
Abstract:
The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A common form of the filter is the Gaussian bilateral filter, in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires O(sigma^2) operations per pixel, where sigma is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approximation algorithm that can cut down the computational complexity to O(1) per pixel for any arbitrary sigma (constant-time implementation). This is based on the observation that the range kernel operates via translations of a fixed Gaussian over the range space, and that these translated Gaussians can be accurately approximated using the so-called Gauss-polynomials. The overall algorithm emerging from this approximation involves a series of spatial Gaussian filtering operations, which can be efficiently implemented (in parallel) using separability and recursion. We present some preliminary results to demonstrate that the proposed algorithm compares favorably with some of the existing fast algorithms in terms of speed and accuracy.
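The identity behind the approximation: a translated Gaussian factors exactly as exp(-(t-tau)^2/2s^2) = exp(-t^2/2s^2) * exp(-tau^2/2s^2) * exp(t*tau/s^2), and truncating the Taylor series of the last factor yields a Gauss-polynomial. A sketch checking the truncation error numerically (the degree N, sigma, and the evaluation range are illustrative):

```python
import math

def translated_gaussian(t, tau, sigma):
    """Exact range kernel: a Gaussian translated by tau."""
    return math.exp(-((t - tau) ** 2) / (2.0 * sigma ** 2))

def gauss_polynomial(t, tau, sigma, N):
    """Gauss-polynomial approximation: the exp(t*tau/sigma^2)
    factor is replaced by its degree-(N-1) Taylor polynomial."""
    s2 = sigma ** 2
    poly = sum((t * tau / s2) ** n / math.factorial(n) for n in range(N))
    return math.exp(-t * t / (2.0 * s2)) * math.exp(-tau * tau / (2.0 * s2)) * poly

# worst-case error over t in [-3, 3] for tau = 1.5, sigma = 1
err = max(abs(translated_gaussian(t / 10.0, 1.5, 1.0)
              - gauss_polynomial(t / 10.0, 1.5, 1.0, 20))
          for t in range(-30, 31))
```

Because the approximation is a polynomial in t, the filter reduces to spatial Gaussian smoothing of a few powers of the image, which is what makes the per-pixel cost independent of sigma.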