898 results for Flail space model


Relevance:

80.00%

Abstract:

The association of σ factors with RNA polymerase dictates the expression profile of a bacterial cell. Major changes to the transcription profile are achieved by the use of multiple sigma factors that confer distinct promoter selectivity to the holoenzyme. The cellular concentration of a sigma factor is regulated by diverse mechanisms involving transcription, translation and post-translational events. The number of sigma factors varies substantially across bacteria. The interactions between sigma factors are also diverse, ranging from collaboration and competition to partial redundancy in some cellular or environmental contexts. These interactions can be rationalized by a mechanistic model referred to as the partitioning of σ space model of bacterial transcription. The structural similarity between different sigma/anti-sigma complexes, despite poor sequence conservation and differing cellular localization, reveals an elegant route to incorporate diverse regulatory mechanisms within a structurally conserved scaffold. These features are described here with a focus on sigma/anti-sigma complexes from Mycobacterium tuberculosis. In particular, we discuss recent data on the conditional regulation of sigma/anti-sigma factor interactions. Specific stages of M. tuberculosis infection, such as the latent phase, as well as the remarkable adaptability of this pathogen to diverse environmental conditions, can be rationalized by the synchronized action of different σ factors.

Relevance:

80.00%

Abstract:

In this work, we introduce a novel approach for phase estimation from noisy reconstructed interference fields in digital holographic interferometry using an unscented Kalman filter. Unlike the conventionally used unwrapping algorithms and piecewise polynomial approximation approaches, this paper proposes, for the first time to the best of our knowledge, a signal-tracking approach for phase estimation. The state space model derived in this approach uses a Taylor series expansion of the phase function as the process model, and polar-to-Cartesian conversion as the measurement model. We characterize the approach in simulations and validate its performance on experimental data (holograms) recorded under various practical conditions. Our study reveals that the proposed approach outperforms various phase estimation methods in the literature at lower SNR values (especially in the range 0-20 dB). It is also demonstrated with experimental data that the proposed approach is a better choice for estimating rapidly varying phase with high dynamic range and noise. (C) 2014 Optical Society of America
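To make the signal-tracking formulation concrete, here is a minimal sketch using the open-source filterpy package: the state is [phase, phase rate] (a first-order Taylor process model) and the measurement is the polar-to-Cartesian pair (cos φ, sin φ). The noise levels, initial state, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 1.0
def fx(x, dt):
    # process model from a first-order Taylor expansion of the phase
    return np.array([x[0] + x[1] * dt, x[1]])

def hx(x):
    # measurement model: polar-to-Cartesian conversion of the unit phasor
    return np.array([np.cos(x[0]), np.sin(x[0])])

points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.0, 0.0])
ukf.P *= 0.1
ukf.R = np.eye(2) * 0.05   # measurement noise (assumed)
ukf.Q = np.eye(2) * 1e-4   # process noise (assumed)

# track a synthetic, slowly modulated phase from noisy cos/sin samples
rng = np.random.default_rng(0)
true_phase = 0.2 * np.arange(200) + 2.0 * np.sin(0.05 * np.arange(200))
estimates = []
for phi in true_phase:
    z = np.array([np.cos(phi), np.sin(phi)]) + 0.1 * rng.normal(size=2)
    ukf.predict()
    ukf.update(z)
    estimates.append(ukf.x[0])
```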

Relevance:

80.00%

Abstract:

This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant feature transform descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The integration of the coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by the epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The system has been tested in indoor and outdoor environments, and the results show that our approach is efficient and reliable. © 2006 IEEE.
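The coarse stage can be pictured as plain text retrieval over "visual words". The sketch below, with a made-up vocabulary size and random word counts, ranks locations by cosine similarity of tf-idf vectors; it is a schematic of an LVSM-style index, not the paper's code.

```python
import numpy as np

# each location is a bag of visual words (quantized interest-point descriptors);
# vocabulary size and counts here are placeholders
rng = np.random.default_rng(0)
n_locations, vocab = 50, 200
counts = rng.poisson(0.3, size=(n_locations, vocab)).astype(float)

# tf-idf weighting exactly as in text retrieval
df = (counts > 0).sum(axis=0) + 1e-9            # document frequency of each word
idf = np.log(n_locations / df)
tfidf = counts * idf
tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True) + 1e-12

def coarse_localize(query_counts, top_k=5):
    """Rank locations by cosine similarity of tf-idf vectors (coarse stage)."""
    q = query_counts * idf
    q /= np.linalg.norm(q) + 1e-12
    scores = tfidf @ q
    return np.argsort(scores)[::-1][:top_k]      # candidates for the fine stage

query = counts[7] + rng.poisson(0.1, vocab)      # a noisy view of location 7
print(coarse_localize(query))
```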

Relevance:

80.00%

Abstract:

Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
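For intuition, the sketch below runs a bootstrap particle filter on a stochastic volatility model and computes the naive path-space estimate of the filter derivative via Fisher's identity. This is the simple estimator whose variance grows over time; the O(N²) marginal algorithm of Poyiadjis et al., whose stability the paper analyzes, is not reproduced here. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# stochastic volatility model: x_n = phi*x_{n-1} + sigma*v_n,  y_n = beta*exp(x_n/2)*w_n
phi, sigma, beta, T, N = 0.95, 0.3, 0.7, 500, 1000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.normal()
    y[t] = beta * np.exp(x[t] / 2) * rng.normal()

def loglik_obs(yt, xt):
    var = beta ** 2 * np.exp(xt)
    return -0.5 * (np.log(2 * np.pi * var) + yt ** 2 / var)

# bootstrap particle filter carrying per-particle score terms d/dphi log p(x_{0:n})
X = rng.normal(0.0, sigma / np.sqrt(1 - phi ** 2), N)   # stationary initial draw
alpha = np.zeros(N)                                     # accumulated path-space score
for t in range(1, T):
    Xprev = X
    X = phi * Xprev + sigma * rng.normal(size=N)        # propagate
    alpha = alpha + (X - phi * Xprev) * Xprev / sigma**2  # d/dphi log f(x_t | x_{t-1})
    logw = loglik_obs(y[t], X)
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                    # multinomial resampling
    X, alpha = X[idx], alpha[idx]

# Fisher's identity: score ~= expectation of alpha under the filter
print("path-space estimate of d/dphi log-likelihood:", alpha.mean())
```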

Relevance:

80.00%

Abstract:

Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
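A minimal version of such a multiple-context, single-rate model can be written in a few lines: one state per object orientation, all states updated through a generalization function over angular distance. The retention rate, learning rate, and Gaussian tuning width below are illustrative assumptions, not the fitted values.

```python
import numpy as np

A, B = 0.99, 0.1                       # retention and learning rates (single process)
orientations = np.arange(0, 360, 45)   # contexts: object shown at these angles
x = np.zeros(len(orientations))        # internal state for each context

def gen(d, width=30.0):
    # Gaussian generalization over wrapped angular distance between contexts
    d = np.minimum(np.abs(d), 360 - np.abs(d))
    return np.exp(-0.5 * (d / width) ** 2)

def trial(context_idx, perturbation):
    """One trial in the given context; returns the error and updates all states."""
    global x
    error = perturbation - x[context_idx]
    g = gen(orientations - orientations[context_idx])
    x = A * x + B * g * error          # states update in proportion to generalization
    return error

errors = [trial(0, 1.0) for _ in range(60)]   # adapt in one context
print(x)   # neighbouring contexts partially adapt; distant ones barely move
```

Note how repeated exposure in one context also lets all other states decay through the shared retention factor A, mirroring the slow de-adaptation across contexts predicted and confirmed in the paper.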

Relevance:

80.00%

Abstract:

Motor task variation has been shown to be a key ingredient in skill transfer, retention, and structural learning. However, many studies only compare training of randomly varying tasks to either blocked or null training, and it is not clear how experiencing different nonrandom temporal orderings of tasks might affect the learning process. Here we study learning in human subjects who experience the same set of visuomotor rotations, evenly spaced between -60° and +60°, either in a random order or in an order in which the rotation angle changed gradually. We compared subsequent learning of three test blocks of +30°→-30°→+30° rotations. The groups that underwent either random or gradual training showed significant (P < 0.01) facilitation of learning in the test blocks compared with a control group who had not experienced any visuomotor rotations before. We also found that movement initiation times in the random group during the test blocks were significantly (P < 0.05) lower than for the gradual or the control group. When we fit a state-space model with fast and slow learning processes to our data, we found that the differences in performance in the test block were consistent with the gradual or random task variation changing the learning and retention rates of only the fast learning process. Such adaptation of learning rates may be a key feature of ongoing meta-learning processes. Our results therefore suggest that both gradual and random task variation can induce meta-learning and that random learning has an advantage in terms of shorter initiation times, suggesting less reliance on cognitive processes.
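For reference, here is a sketch of the standard two-state (fast/slow) model fitted in the study; the retention (A) and learning (B) rates below are illustrative assumptions, and in the paper's fit only the fast process's rates change with gradual or random training.

```python
import numpy as np

Af, Bf = 0.80, 0.20    # fast process: learns quickly, forgets quickly (assumed)
As, Bs = 0.992, 0.02   # slow process: learns slowly, retains well (assumed)

def run(perturbations):
    xf = xs = 0.0
    errors = []
    for p in perturbations:
        e = p - (xf + xs)              # error = perturbation minus net adaptation
        xf = Af * xf + Bf * e
        xs = As * xs + Bs * e
        errors.append(e)
    return np.array(errors)

# the +30 -> -30 -> +30 test schedule from the abstract, 60 trials per block
schedule = np.concatenate([np.full(60, 30.0), np.full(60, -30.0), np.full(60, 30.0)])
print(run(schedule)[:5])
```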

Relevance:

80.00%

Abstract:

In this paper, we present an expectation-maximisation (EM) algorithm for maximum likelihood estimation in multiple target tracking (MTT) models with Gaussian linear state-space dynamics. We show that estimation of sufficient statistics for EM in a single Gaussian linear state-space model can be extended to the MTT case, along with a Monte Carlo approximation for inference of the unknown associations of targets. The stochastic approximation EM algorithm that we present can be used with any Monte Carlo method developed for tracking in MTT models, such as Markov chain Monte Carlo and sequential Monte Carlo methods. We demonstrate the performance of the algorithm with a simulation. © 2012 ISIF (International Society of Information Fusion).
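As background, the single-target building block can be sketched with the open-source pykalman package, which runs EM for a linear Gaussian state-space model (smoothed sufficient statistics in the E-step, closed-form updates in the M-step). The model and data below are synthetic assumptions; the Monte Carlo treatment of target associations is beyond this snippet.

```python
import numpy as np
from pykalman import KalmanFilter

rng = np.random.default_rng(0)
T = 200
x = np.cumsum(rng.normal(0, 0.1, T))     # latent random-walk state of one target
y = x + rng.normal(0, 0.5, T)            # noisy observations

kf = KalmanFilter(transition_matrices=[[1.0]], observation_matrices=[[1.0]],
                  em_vars=['transition_covariance', 'observation_covariance'])
kf = kf.em(y.reshape(-1, 1), n_iter=20)  # EM: smoother E-step, closed-form M-step
print(kf.transition_covariance, kf.observation_covariance)
```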

Relevance:

80.00%

Abstract:

This paper presents a three-dimensional comprehensive model for the calculation of vibration in a pile-supported building due to trains moving in a nearby underground tunnel. The model calculates the power spectral density (PSD) of the building's responses due to trains moving on floating-slab tracks with random roughness. The tunnel and its surrounding soil are modelled as a cylindrical shell embedded in a half-space using the well-known PiP model. The building and its piles are modelled as a 2D frame using the dynamic stiffness matrix. Coupling between the foundation and the ground is performed using the theory of joining subsystems in the frequency domain, which requires the transfer functions of a half-space model; a convenient choice based on the thin-layer method is selected in this work for calculating the responses in a half-space due to circular strip loadings. The coupling accounts for the influence of the building's dynamics on the incident wave field from the tunnel, but ignores any reflections of the building's waves from the tunnel. The derivation in the paper shows that the incident vibration field at the building's foundation is modified by a term reflecting the coupling and the dynamics of the building and its foundation. The comparisons presented in the paper show that the dynamics of the building and its foundation significantly change the incident vibration field from the tunnel and can lead to a loss of prediction accuracy if not considered in the calculation.
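The joining-subsystems step can be illustrated schematically: at each frequency, the coupled interface motion follows from the incident (free-field) motion and the dynamic stiffness matrices of the ground and the building. The relation below is one common form of such coupling, not the paper's full formulation, and every matrix and number is a placeholder.

```python
import numpy as np

def coupled_response(K_building, K_ground, u_incident):
    """u_coupled = (K_b + K_g)^-1 K_g u_ff: the incident field modified by coupling.
    Limiting checks: K_b -> 0 gives u_ff back; K_b -> inf gives zero motion."""
    return np.linalg.solve(K_building + K_ground, K_ground @ u_incident)

omega = 2 * np.pi * 40.0                                   # one frequency line, 40 Hz
K_g = np.array([[4e8 + 2e7j, 0], [0, 3e8 + 1.5e7j]])       # placeholder ground stiffness
m, k = 5e5, 6e8                                            # toy 2-DOF frame mass/stiffness
K_b = np.array([[k - m * omega**2, -k], [-k, k - m * omega**2]])
u_ff = np.array([1e-6 + 0j, 0.8e-6])                       # incident field (e.g. from PiP)
print(coupled_response(K_b, K_g, u_ff))
```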

Relevance:

80.00%

Abstract:

Word sense disambiguation (WSD) has long been a key problem in natural language understanding, and the quality of its solution directly affects the effectiveness of many applications in natural language processing. Because representing natural language knowledge is difficult, and hand-crafted rules have failed to achieve satisfactory disambiguation results, various supervised machine learning methods have been applied to the WSD task. Drawing on earlier work, we introduce the term-weighting techniques of the vector space model from information retrieval to represent the knowledge of the senses of polysemous words, propose a method for computing context position weights, and present a supervised machine learning approach to WSD based on the vector space model. The method maps the senses and contexts of a polysemous word into a vector space; by computing the distance between a context vector and each sense vector, it assigns the context to a sense class using k-NN (k = 1). The method achieved outstanding results in both open and closed tests on nine high-frequency polysemous Chinese words (average accuracy of 96.31% in the closed test and 92.98% in the open test), verifying its effectiveness.
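A schematic of the method, with English words standing in for the Chinese data and made-up vectors: senses and contexts are mapped into one term space, context terms receive position weights that decay with distance from the target word, and a context is labelled by its nearest (k = 1) sense vector.

```python
import numpy as np

vocab = {w: i for i, w in enumerate(
    ['bank', 'river', 'water', 'money', 'loan', 'fish', 'interest'])}

def context_vector(words, target_pos, decay=0.5):
    # terms weighted by an assumed position-decay function around the target word
    v = np.zeros(len(vocab))
    for pos, w in enumerate(words):
        if w in vocab and pos != target_pos:
            v[vocab[w]] += decay ** abs(pos - target_pos)
    return v

# sense vectors would normally be built from sense-tagged training contexts
sense_vectors = {
    'bank/finance': context_vector(['money', 'loan', 'bank', 'interest'], 2),
    'bank/river':   context_vector(['river', 'water', 'bank', 'fish'], 2),
}

def disambiguate(words, target_pos):
    # 1-NN in the vector space: pick the sense vector closest to the context vector
    c = context_vector(words, target_pos)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(sense_vectors, key=lambda s: cos(sense_vectors[s], c))

print(disambiguate(['fish', 'swim', 'river', 'bank'], 3))   # -> 'bank/river'
```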

Relevance:

80.00%

Abstract:

The velocity field is important for investigating the motion and strain parameters of a block. It is also important for investigating deformation at faults on the block boundary, for example the accumulation of strain and stress. The dislocation model is a classic method for simulating the velocity field: aseismic crustal deformation is regarded as the sum of rigid block motion and the effect of the locked fault. We modify the dislocation model in two respects. First, block motion is taken to be the sum of rotation and linear strain rather than rigid motion alone. Second, an elastic layered-earth model, rather than a homogeneous half-space model, is used to calculate the effect of the locked part. Annual Global Positioning System (GPS) velocity data of the Taiwan area for 1990-1995 are used in our dislocation model, and the misfit of the modified model is clearly smaller than that of the original model. Our simulation shows that, in the eastern Coastal Range, velocity decreases rapidly northward from the Chimei Fault, which may result from the high crustal compression rate of about 30 mm/a at the Chimei Fault. Fault locking is generally stronger in the south than in the north. In western Taiwan, the most strongly locked faults appear in the southern Coastal Plain, where damaging earthquakes occur frequently. The calculated strain and rotation rates are consistent with previous results in most areas. The strain-rate field reveals nearly NW-SE compression in most parts of Taiwan, with a fan-shaped distribution. The rotation-rate field generally reveals anticlockwise rotation in eastern and southern Taiwan and clockwise rotation in western and northern Taiwan.
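The modified block-motion term can be sketched directly: the horizontal velocity at a site is a rotation plus a homogeneous linear strain rate, v = (W + E) r, to which the locked-fault (dislocation) contribution from the layered-earth model is then added. All numerical values below are placeholders.

```python
import numpy as np

W = np.array([[0.0, -5e-9], [5e-9, 0.0]])     # antisymmetric: rotation rate (rad/a), assumed
E = np.array([[2e-8, 1e-8], [1e-8, -3e-8]])   # symmetric: strain-rate tensor (1/a), assumed

def block_velocity(r_km):
    """Block-motion velocity (mm/a) at a site given in km relative to the block centre."""
    r = np.asarray(r_km) * 1e3                # position in metres
    return (W + E) @ r * 1e3                  # m/a -> mm/a

print(block_velocity([50.0, -20.0]))          # the dislocation term would be added to this
```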

Relevance:

80.00%

Abstract:

Describing the space-time properties of geological phenomena visually is one of the most important parts of geological research. Such visual images are usually helpful for analyzing geological phenomena and for discovering the regularities behind them. This report studies three application problems of scientific visualization in geology. (1) Visualizing geological bodies. A new geometric modeling technique with trimmed surface patches has been developed to visualize geological bodies. Constructional surfaces are represented as trimmed surfaces, and a constructional solid is represented by upper and lower surfaces composed of trimmed surface patches from constructional surfaces. The technique can completely and unambiguously represent the structure of a geological body. It has been applied to visualization of the coal deposit in Huolinhe, the aquifer thermal energy storage in Tianjin, and the structure of the meteorite impact in Cangshan, among others. (2) Visualizing geological space fields. Efficient visualization methods are discussed. The Marching Cubes algorithm is improved and used to extract iso-surfaces from 3D data sets, iso-lines from 2D data sets, and iso-points from 1D data sets. The improved method has been used to visualize the distribution and evolution of abnormal pressures in the Zhungaer Basin. (3) Visualizing porous space. A novel way is proposed to define the distance from any point to a convex set. On this basis, a convex-set, skeleton-based implicit surface modeling technique is developed and used to construct a simplified porous space model. A buoyancy-percolation numerical simulation platform has been developed to simulate the migration of oil in porous media saturated with water.
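As an aside, the stock Marching Cubes step that the report builds on is readily reproduced with scikit-image: extract an iso-surface mesh from a 3D scalar field. The field below is a synthetic stand-in, and the report's improved variant is not reproduced.

```python
import numpy as np
from skimage import measure

# synthetic scalar field standing in for, e.g., an abnormal-pressure volume
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
field = x**2 + y**2 + z**2

# marching cubes returns a triangle mesh of the iso-surface at the given level
verts, faces, normals, values = measure.marching_cubes(field, level=0.5)
print(verts.shape, faces.shape)
```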

Relevance:

80.00%

Abstract:

With major developments in seismic source theory, computing technology and survey instruments, we can model and reconstruct the rupture process of earthquakes more realistically, so that the properties of earthquake sources and the laws of tectonic activity can be understood more clearly. The research reported in this paper is as follows. Based on the generalized ray method, expressions are obtained for the displacement on the surface of a half-space due to an arbitrarily oriented shear and tensile dislocation. Kinematically, fault-normal motion is equivalent to tensile faulting, and there is evidence that such motion occurs in many earthquakes. Expressions for static displacements on the surface of a layered half-space due to a static point moment-tensor source are given in terms of the generalized reflection and transmission coefficient matrix method. The validity and precision of the new method are illustrated by the consistency of our results with the analytical solution given by Okada's code for the same point source and a homogeneous half-space model. The computed vertical ground displacement using the moment-tensor solution of the Lancang-Gengma earthquake displays considerable differences from that of a double-couple source. The effect of a soft layer at the top of a homogeneous half-space on a shallow normal-faulting earthquake is also analyzed. Our results show that more seismic information can be obtained using a seismic moment-tensor source and a layered half-space model. The rupture process of the 1999 Chi-Chi, Taiwan, earthquake is investigated using co-seismic surface displacements from GPS observations and far-field P-wave records. Based on tectonic analysis and the distribution of aftershocks, we introduce a three-segment bending fault plane into our model, and use both elastic half-space models and layered-earth models to invert the distribution of co-seismic slip along the Chi-Chi earthquake rupture. The results indicate that a pure shear-slip model cannot fit the horizontal and vertical co-seismic displacements together unless a fault-normal (tensile) component is added to the inversion. The Chi-Chi rupture process is then obtained by joint inversion of the seismograms and GPS observations. The fault-normal motions determined by the inversion concentrate on the shallow northern bending fault from Fengyuan to Shuangji, where the surface ruptures are more complex and flexural-slip folding structures are more developed than in other portions of the rupture zone. To understand the perturbation of surface displacements caused by near-surface complex structures, we carried out a numerical test synthesizing and inverting the surface displacements for a pop-up structure composed of a main thrust and a back thrust. The result indicates that the pop-up structure, the typical shallow complex rupture that occurred in the northern bending fault zone from Fengyuan to Shuangji, is modeled better by a thrust fault with a negative tensile component than by a simple thrust fault. We interpret the negative tensile distribution concentrated on the shallow northern bending fault from Fengyuan to Shuangji as the combined effect of the complexities of the property and geometry of the rupture; the rupture process also reveals greater spatial and temporal complexity from Fengyuan to Shuangji.
From three-component teleseismic records, the S-wave velocity structure beneath 59 stations in Taiwan is obtained using the transfer-function method and SA techniques. The integrated results, a 3D crustal structure of Taiwan, reveal that the thickest crust is located beneath the western Central Range, consistent with the Bouguer gravity anomaly. The orogeny of Taiwan is at a young stage; the developing root of the Central Range is not in static (isostatic) balance, and the crust of Taiwan remains in a state of dynamic equilibrium. The rupture process of the February 24, 2003 Jiashi, Xinjiang earthquake is estimated with a finite-fault model using far-field broadband P-wave records from the CDSN and IRIS. The results indicate that the earthquake occurred on a north-dipping thrust fault with some left-lateral strike-slip. The focal mechanism differs from those of the earthquakes of 1997 and 1998, but is similar to that of the 1996 Artux, Xinjiang earthquake. We interpret the thrust faulting as a result of the Tarim Basin pushing northward and the orogeny of the Tianshan mountains. Finally, a future research subject is briefly outlined: building an Internet-based real-time distributed system for the rupture processes of large earthquakes.
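The linear slip-inversion step described above can be sketched generically: surface displacements d relate to fault-patch slip s through a matrix G of Green's functions (computed from a half-space or layered-earth model), solved by damped least squares. G, the data, and the damping weight below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_patch = 120, 40
G = rng.normal(size=(n_obs, n_patch))           # Green's functions: observations x patches
s_true = np.maximum(0, np.sin(np.linspace(0, np.pi, n_patch))) * 3.0
d = G @ s_true + rng.normal(0, 0.05, n_obs)     # GPS-like displacements plus noise

lam = 1.0                                       # damping weight (assumed)
L = np.eye(n_patch)                             # identity damping; a Laplacian would smooth
A = np.vstack([G, lam * L])
b = np.concatenate([d, np.zeros(n_patch)])
s_hat, *_ = np.linalg.lstsq(A, b, rcond=None)   # damped least-squares slip estimate
print(np.round(s_hat[:5], 2))
```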

Relevance:

80.00%

Abstract:

The CSAMT method has recently played an important role in geothermal exploration and in pre-construction surveys for tunnel projects. To support the interpretation of field data, this paper develops forward methods from 1D to 3D and inversion methods in 1D and 2D for artificial-source magnetotellurics in the frequency domain. In general, artificial-source data are inverted only after the near field is corrected on the basis of a homogeneous half-space assumption; however, this method is not suitable for complex structures, where the assumption is no longer valid. Recently, a new inversion scheme without near-field correction has been published to avoid near-field correction errors, and we discuss different 1D and 2D inversion schemes using data without near-field correction. The numerical integration method is used for forward modeling in the 1D CSAMT method. An infinite line source is used in the 2D finite-element forward modeling, where a near-field effect occurs as in the CSAMT method because of the artificial source. A pseudo-delta function is used to model the source distribution, which reduces the singularity when solving the finite-element equations. The effect on the exploration area when an anomalous body lies under the source, or between the source and the exploration area, is discussed. A series of numerical tests shows that the 2D finite-element method is correct, and the modeling results are significant for CSAMT data interpretation. For 3D finite-element forward modeling, the finite-element equation is derived by the Galerkin method with the divergence condition enforced, and the forward-modeling result for a homogeneous half-space model is correct. The new inversion idea without near-field correction is followed to develop new 1D and 2D inversion methods in this paper. All of the inversion schemes use data without near-field correction, which avoids introducing near-field correction errors. A modified grid-parameter method and a layer-by-layer inversion method are combined in the 1D inversion scheme; an RRI method with an artificial source is developed, and a finite-element inversion method is used, in the 2D scheme. The inversion results for synthetic data and field data agree with the model and the known geological data, respectively, which shows that inversion without near-field correction is feasible. The feasibility of inverting only the data in the exploration area when an anomalous body lies between the source and the exploration area is also discussed.
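For orientation, the far-field (plane-wave) limit of the 1D CSAMT forward problem reduces to the standard magnetotelluric impedance recursion sketched below; the near-field source effects that motivate the thesis are deliberately not included, and the layer model is an arbitrary example.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def mt1d_apparent_resistivity(freqs, rho, h):
    """1D MT forward model. rho: layer resistivities (ohm-m, last = basal half-space);
    h: thicknesses of the upper layers (m). Returns apparent resistivity and phase."""
    rho_a, phase = [], []
    for f in freqs:
        w = 2 * np.pi * f
        zeta = np.sqrt(1j * w * MU0 * np.asarray(rho, float))  # intrinsic impedances
        Z = zeta[-1]                                           # start at the half-space
        for j in range(len(h) - 1, -1, -1):                    # recurse upward
            gamma = np.sqrt(1j * w * MU0 / rho[j])
            t = np.tanh(gamma * h[j])
            Z = zeta[j] * (Z + zeta[j] * t) / (zeta[j] + Z * t)
        rho_a.append(abs(Z) ** 2 / (w * MU0))
        phase.append(np.degrees(np.angle(Z)))
    return np.array(rho_a), np.array(phase)

freqs = np.logspace(-1, 3, 30)
print(mt1d_apparent_resistivity(freqs, rho=[100.0, 10.0, 1000.0], h=[500.0, 1000.0])[0][:3])
```

A quick sanity check of the recursion: with all layer resistivities equal, the apparent resistivity returns that single value at every frequency.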

Relevance:

80.00%

Abstract:

Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TF) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often only provides partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference on the protein-DNA interaction landscape.

We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.

We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.

Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.

This dissertation provides a foundation for future research by taking a step toward the genome-wide inference of protein-DNA interaction landscape through data integration.
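A caricature of the Bayes-factor-style nucleosome scoring idea (not the dissertation's actual model, which also handles the quadratic digestion trend and integrates over priors): compare windowed DNase-seq cut counts under a low, roughly 10-bp-oscillatory nucleosome rate against a flat open-chromatin background. All rates are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

W = 147                                  # nucleosome body, in bp
pos = np.arange(W)
nuc_rate = 0.2 * (1.0 + 0.5 * np.cos(2 * np.pi * pos / 10.3))  # rotational oscillation
bg_rate = np.full(W, 1.0)                # flat open-chromatin cut rate (assumed)

def log_score(cuts):
    """Log likelihood-ratio score: nucleosome model vs background model."""
    ll_nuc = poisson.logpmf(cuts, nuc_rate).sum()
    ll_bg = poisson.logpmf(cuts, bg_rate).sum()
    return ll_nuc - ll_bg

rng = np.random.default_rng(0)
occupied = rng.poisson(nuc_rate)         # counts drawn from the nucleosome model
open_dna = rng.poisson(bg_rate)          # counts drawn from the background model
print(log_score(occupied) > 0, log_score(open_dna) < 0)   # expected: True True
```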

Relevance:

80.00%

Abstract:

This paper points out a serious flaw in dynamic multivariate statistical process control (MSPC). Principal component analysis of a linear time-series model, employed to capture auto- and cross-correlation in recorded data, may produce a considerable number of variables to be analysed. To give a dynamic representation of the data (based on variable correlation) and circumvent the production of a large time-series structure, a linear state space model is used here instead. The paper demonstrates that, by incorporating a state space model, the number of variables to be analysed dynamically can be reduced considerably compared with conventional dynamic MSPC techniques.
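A sketch of the idea under assumed model matrices: estimate a low-dimensional state sequence with a Kalman filter, then monitor a Hotelling T² statistic on the states, instead of running PCA on a large lagged-variable time-series structure.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])          # state transition (2 states, assumed)
C = rng.normal(size=(8, 2))                     # 8 measured process variables
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(8)

def kalman_states(Y):
    """Standard Kalman filter; returns the filtered 2-dimensional state sequence."""
    x, P = np.zeros(2), np.eye(2)
    states = []
    for y in Y:
        x, P = A @ x, A @ P @ A.T + Q                         # predict
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)                        # Kalman gain
        x, P = x + K @ (y - C @ x), (np.eye(2) - K @ C) @ P   # update
        states.append(x)
    return np.array(states)

# simulate in-control data, then monitor T^2 on just 2 states, not 8+ lagged variables
X = np.zeros((300, 2))
for t in range(1, 300):
    X[t] = A @ X[t - 1] + rng.multivariate_normal(np.zeros(2), Q)
Y = X @ C.T + rng.multivariate_normal(np.zeros(8), R, size=300)

Z = kalman_states(Y)
S_inv = np.linalg.inv(np.cov(Z.T))
D = Z - Z.mean(axis=0)
T2 = np.einsum('ij,jk,ik->i', D, S_inv, D)      # Hotelling T^2 per sample
print(T2[:5])
```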