888 results for Discrete Time Branching Processes


Relevance:

30.00%

Publisher:

Abstract:

Dicer is a member of the RNase III family that catalyzes the cleavage of double-stranded RNA into small interfering RNAs and microRNAs, which then direct sequence-specific gene silencing. In this paper, the full-length cDNA of Dicer-1 was cloned from the white shrimp Litopenaeus vannamei (designated LvDcr1). The cDNA was 7636 bp long, including a poly(A) tail, a 5' UTR of 136 bp, a 3' UTR of 78 bp, and an open reading frame (ORF) of 7422 bp encoding a putative protein of 2473 amino acids. The predicted amino acid sequence comprised all recognized functional domains found in other Dicer-1 homologues and showed the highest similarity (97.7%) to the Dicer-1 of the tiger shrimp Penaeus monodon. Quantitative real-time PCR was employed to investigate the tissue distribution of LvDcr1 mRNA, its expression in shrimp under virus challenge, and its expression in larvae at different developmental stages. LvDcr1 mRNA was detected in all examined tissues, with the highest expression level in hemocytes, and was up-regulated in hemocytes and gills after virus injection. These results indicate that LvDcr1 is involved in antiviral defense in adult shrimp. From the fertilized egg to postlarva VII, LvDcr1 was constitutively expressed at all examined developmental stages, but the expression level varied significantly. The highest expression level was observed in fertilized eggs, followed by a decrease from the fertilized egg to the nauplius I stage; higher expression levels were then detected at the nauplius V and postlarva stages. Within the nauplius, zoea and mysis stages, LvDcr1 expression was consistently higher in the later phase of each stage than in the earlier phase. The differential expression of LvDcr1 across the larval stages may provide clues for understanding early innate immunity during shrimp larval development. (C) 2010 Elsevier Ltd. All rights reserved.
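The expression comparisons above rest on quantitative real-time PCR. As an illustration only (the paper does not state its exact quantification formula), the following sketch shows the standard 2^-ΔΔCt relative-quantification calculation; the function name and Ct values are hypothetical.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt method: expression of a target gene relative to a
    reference gene, normalized to a calibrator sample."""
    delta_ct_sample = ct_target - ct_ref              # dCt in the sample
    delta_ct_calibrator = ct_target_cal - ct_ref_cal  # dCt in the calibrator
    ddct = delta_ct_sample - delta_ct_calibrator
    return 2.0 ** (-ddct)

# e.g. the target crosses threshold 4 cycles earlier (relative to the
# reference gene) in the sample than in the calibrator: 16-fold expression
fold = relative_expression(20.0, 18.0, 24.0, 18.0)
```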

Relevance:

30.00%

Publisher:

Abstract:

Suspended particulate matter (SPM) measurements obtained along a cross-section in the central English Channel (Wight-Cotentin transect) indicate that the area may be differentiated into: (1) an English coastal zone, associated with the highest concentrations; (2) a French coastal zone, with intermediate concentrations; and (3) the offshore waters of the Channel, characterised by a very low suspended-sediment load. The SPM particle-size distribution was modal close to the English coast (main mode 10-12 μm); the remainder of the area was characterised by flat SPM distributions. Examination of the diatom communities in the SPM suggests that material resuspended in the intertidal zone and the estuarine environments was advected towards the offshore waters of the English Channel. Considerable variations in SPM concentrations occurred during a tidal cycle: maximum concentrations were sometimes up to 3 times higher than the minimum concentrations. Empirical orthogonal function (EOF) analysis of the SPM concentration time series indicates that, although the bottom waters were more turbid than the surface waters, this was not likely to be the result of in situ sediment resuspension. Instead, the observed variations appear to be controlled mainly by advective mechanisms. The limited resuspension was probably caused by: (1) the limited availability of fine-grained material within the bottom sediments, and (2) 'bed-armouring' processes which protect the finer-grained fractions of the seabed material from erosion and entrainment within the overlying flow during the less energetic stages of the tide.
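The EOF analysis applied above to the SPM time series can be sketched numerically. The following SVD-based EOF decomposition of a (time × stations) concentration matrix is a generic illustration, not the authors' implementation; all names are assumptions.

```python
import numpy as np

def eof_analysis(X):
    """EOF analysis of a time-series matrix X (time x stations).

    Returns the spatial patterns (EOFs), the principal-component time
    series, and the fraction of variance explained by each mode.
    """
    anomalies = X - X.mean(axis=0)   # remove the time mean at each station
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = Vt                        # rows are spatial modes
    pcs = U * s                      # temporal amplitudes of each mode
    variance_fraction = s**2 / np.sum(s**2)
    return eofs, pcs, variance_fraction
```

For a signal dominated by a single advective mode, the first EOF captures nearly all the variance, which is the kind of diagnostic the abstract relies on.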

Relevance:

30.00%

Publisher:

Abstract:

This paper combines the Smith predictor technique with the inverse Nyquist array method to design discrete-time control systems for multivariable plants with multiple time delays. Controllers designed by this approach are easy to implement on a computer, and the system simulation results are satisfactory.
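The paper designs multivariable controllers via the inverse Nyquist array; as a minimal single-loop sketch of the Smith-predictor idea only (the plant parameters, gains, and PI controller here are assumptions, not taken from the paper), a discrete first-order plant with input delay can be simulated as follows.

```python
from collections import deque

def simulate_smith_predictor(a=0.9, b=0.1, delay=5, kp=1.5, ki=0.08, steps=300):
    """Discrete Smith predictor for the plant y[k+1] = a*y[k] + b*u[k-delay].

    A delay-free internal model runs in parallel; the controller acts on
    the predicted undelayed output plus a model-mismatch correction, so
    the dead time is effectively removed from the feedback loop.
    """
    setpoint = 1.0
    y = 0.0                                      # plant output
    ym = 0.0                                     # delay-free model output
    u_buf = deque([0.0] * delay, maxlen=delay)   # plant input delay line
    ym_buf = deque([0.0] * delay, maxlen=delay)  # delayed model outputs
    integ = 0.0                                  # PI integrator state
    for _ in range(steps):
        feedback = ym + (y - ym_buf[0])          # Smith-predictor feedback
        e = setpoint - feedback
        integ += ki * e
        u = kp * e + integ
        y = a * y + b * u_buf[0]                 # plant sees delayed input
        u_buf.append(u)
        ym = a * ym + b * u                      # model sees current input
        ym_buf.append(ym)
    return y
```

With a perfect internal model the correction term vanishes and the controller effectively regulates the delay-free model, which is the property the abstract exploits for multi-delay plants.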

Relevance:

30.00%

Publisher:

Abstract:

Offshore seismic exploration involves high investment and high risk, and faces many problems such as multiples; technology for high-resolution, high-S/N-ratio marine seismic data processing has therefore become an important research topic. In this paper, based on an analysis of marine seismic exploration, a survey of the literature, and an integration of current mainstream and emerging technologies, a multi-scale decomposition technique for both prestack and poststack seismic data, built on the wavelet transform and the Hilbert-Huang transform, is proposed together with a theory of phase deconvolution, and the related algorithms are studied. The pyramid algorithm for decomposition and reconstruction, given by the Mallat algorithm of the discrete wavelet transform, is introduced into seismic data processing, and its validity is demonstrated by tests with field data. The main idea of the Hilbert-Huang transform is empirical mode decomposition, by which any complicated data set can be decomposed into a finite and often small number of intrinsic mode functions that admit a well-behaved Hilbert transform. After the decomposition, an analytic signal is constructed by the Hilbert transform, from which the instantaneous frequency and amplitude, and hence the Hilbert spectrum, can be obtained. This decomposition method is adaptive and highly efficient; since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and non-stationary processes. The phenomena of fitting overshoot and undershoot and of end swings in the Hilbert-Huang transform are analyzed, and effective methods for eliminating them are studied in this paper. The multi-scale decomposition of both prestack and poststack seismic data achieves amplitude-preserved processing, greatly enhances seismic data resolution, and overcomes the problem that conventional methods cannot restore the amplitudes of different frequency components uniformly.
The phase deconvolution method overcomes the minimum-phase limitation of traditional deconvolution and better matches the basic fact that seismic wavelets are mixed-phase in practice, so it yields a more reliable result. In the applied research, high-resolution, relative-amplitude-preserved processing results were obtained by careful analysis and application of the above methods to seismic data from four different target areas of the China Sea. Finally, a set of processing flows and a method system were formed, which have been applied in actual production and have yielded good progress and substantial economic benefit.
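The analytic-signal construction at the heart of the Hilbert-Huang attribute extraction can be sketched as follows. This is a generic FFT-based Hilbert transform (the frequency-domain window follows the standard analytic-signal recipe), not the authors' production code.

```python
import numpy as np

def instantaneous_attributes(x, dt):
    """Instantaneous amplitude and frequency via the analytic signal.

    The analytic signal x + i*H[x] is built in the frequency domain by
    zeroing negative frequencies and doubling positive ones.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) / (2 * np.pi * dt)   # instantaneous frequency
    return amplitude, freq
```

Applied to each intrinsic mode function in turn, these attributes assemble into the Hilbert spectrum mentioned above.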

Relevance:

30.00%

Publisher:

Abstract:

The processes of seismic wave propagation in phase space and of one-way wave extrapolation in the frequency-space domain are, in the absence of dissipation, essentially transformations under the action of one-parameter Lie groups. Consequently, the numerical methods used to compute the propagation ought themselves to be Lie group transformations, known as Lie group methods. After a fruitful study of fast methods for matrix inversion, several Lie group methods for seismic numerical modeling and depth migration are presented here. First, a Lie group description and method for seismic wave propagation in phase space is proposed; this is, in other words, a symplectic group description and method, since the symplectic group is a Lie subgroup and symplectic methods are a special class of Lie group methods. In the Hamiltonian framework, the propagation of seismic waves is a one-parameter symplectic group transformation, so its numerical computation ought to use symplectic methods. After discretizing the wave field in time and phase space, many explicit, implicit and leap-frog symplectic schemes are derived for numerical modeling. Compared with symplectic schemes, the finite-difference (FD) method is an approximation of the symplectic method. Explicit, implicit and leap-frog symplectic schemes and the FD method are therefore applied under identical conditions to compute wave fields in a constant-velocity model, a synthetic model and the Marmousi model; the results illustrate the potential power of symplectic methods. As an application, a symplectic method is employed to produce a synthetic seismic record of the Qinghai foothills model. Another application is the development of a ray + symplectic reverse-time migration method.
To strike a reasonable balance between computational efficiency and accuracy, we combine the multi-valued wave field and Green function algorithm with symplectic reverse-time migration, and thus develop a new ray + wave-equation prestack depth migration method. Marmousi model data and Qinghai foothills model data are processed; the results show that our method is a better alternative to ray migration for imaging complex structures. Similarly, the extrapolation of one-way waves in the frequency-space domain is a Lie group transformation with one parameter z, so its numerical computation ought to use Lie group methods. After discretizing the wave field in depth and space, the Lie group transformation takes the form of a matrix exponential, and each approximation of it yields a Lie group algorithm. Although the symmetric Padé series approximation of the matrix exponential gives an extrapolation method traditionally regarded as implicit FD migration, it benefits the theoretical and applied study of seismic imaging because it represents depth extrapolation and migration in an entirely different way. Meanwhile, the technique of coordinates of the second kind for approximating the matrix exponential opens a new way to develop migration operators. Matrix inversion plays a vital role in the numerical migration method given by the symmetric Padé series approximation. The matrix has a Toeplitz structure with a helical boundary condition and is easy to invert by LU decomposition. An efficient LU decomposition method is spectral factorization: after the minimum-phase correlative function of each array of the matrix is obtained by spectral factorization, all of the functions are arranged according to their former locations to form a lower triangular matrix. The major merit of LU decomposition by spectral factorization (SF decomposition) is its efficiency in handling a large number of matrices.
Once a table of the spectral factorization results for each array of the matrix has been set up, SF decomposition can produce the lower triangular matrix simply by reading the table. However, this method ignores the relationships among the arrays, which introduces decomposition errors; for numerical calculations in complex models these errors are fatal. Direct elimination gives the exact LU decomposition, but even when simplified for our case, the large number of decompositions costs unendurable computer time. A hybrid method is proposed here that combines spectral factorization with direct elimination. Its decomposition errors are ten times smaller than those of spectral factorization, and it is considerably faster than direct elimination, especially when dealing with a large number of matrices. With the hybrid method, 3D implicit migration can be expected to be applied to real seismic data. Finally, the impulse response of the 3D implicit migration operator is presented.
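The defining advantage claimed for symplectic schemes, a bounded rather than drifting energy error, can be illustrated on the simplest Hamiltonian system. This harmonic-oscillator sketch of the leap-frog (Störmer-Verlet) scheme is illustrative only and is not one of the wave-equation schemes derived in the paper.

```python
import numpy as np

def leapfrog_oscillator(omega=1.0, dt=0.01, steps=10000):
    """Leap-frog (Stormer-Verlet) integration of q'' = -omega^2 * q.

    Leap-frog is symplectic: the discrete flow preserves the symplectic
    form, so the energy error stays bounded over long runs instead of
    drifting as it does with explicit Euler.
    """
    q, p = 1.0, 0.0
    energies = []
    for _ in range(steps):
        p -= 0.5 * dt * omega**2 * q   # half kick
        q += dt * p                    # drift
        p -= 0.5 * dt * omega**2 * q   # half kick
        energies.append(0.5 * p**2 + 0.5 * omega**2 * q**2)
    return q, p, np.array(energies)
```

After 10,000 steps the computed energy still oscillates tightly around its initial value of 0.5, the behavior the abstract's model comparisons rely on.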

Relevance:

30.00%

Publisher:

Abstract:

The photofragmentation of C6H5I at 266 nm is investigated on a universal crossed molecular beam machine, and the translational spectroscopy as well as the angular distribution of the I atom are measured. The results reveal that at a laser intensity of 10(R) W/cm(2), single-photon dissociation competes with multi-photon processes. In single-photon dissociation the anisotropy parameter beta is 0.4 and the average translational energy is only 1.04 kcal/mol, which indicates that this process is a slow predissociation. In two-photon photofragmentation the average translational energy is 51.64 kcal/mol, which accounts for about 35% of the available energy. Another photofragmentation channel is even faster; its peak in the time-of-flight spectra corresponds to four- or five-photon absorption. The branching ratio of these three channels is determined to be about 3:3:4.

Relevance:

30.00%

Publisher:

Abstract:

In this report, I discuss the use of vision to support concrete, everyday activity. I will argue that a variety of interesting tasks can be solved using simple and inexpensive vision systems. I will provide a number of working examples in the form of a state-of-the-art mobile robot, Polly, which uses vision to give primitive tours of the seventh floor of the MIT AI Laboratory. By current standards, the robot has a broad behavioral repertoire and is both simple and inexpensive (the complete robot was built for less than $20,000 using commercial board-level components). My approach is to treat the structure of the agent's activity, its task and environment, as positive resources for the vision system designer. By performing a careful analysis of task and environment, the designer can determine a broad space of mechanisms which can perform the desired activity. My principal thesis is that for a broad range of activities, the space of applicable mechanisms will be broad enough to include a number of mechanisms which are simple and economical. The simplest mechanisms that solve a given problem will typically be quite specialized to that problem. One thus worries that building simple vision systems will require a great deal of ad hoc engineering that cannot be transferred to other problems. My second thesis is that specialized systems can be analyzed and understood in a principled manner, one that allows general lessons to be extracted from specialized systems. I will present a general approach to analyzing specialization through the use of transformations that provably improve performance. By demonstrating a sequence of transformations that derive a specialized system from a more general one, we can summarize the specialization of the former in a compact form that makes explicit the additional assumptions it makes about its environment. The summary can be used to predict the performance of the system in novel environments.
Individual transformations can be recycled in the design of future systems.

Relevance:

30.00%

Publisher:

Abstract:

The influence of laser-field parameters, such as intensity and pulse width, on the population of a molecular excited state is investigated using the time-dependent wavepacket method. For a two-state system in intense laser fields, the populations of the upper and lower states are given by the wavefunctions obtained by solving the Schrödinger equation with the split-operator scheme. The calculation shows that both the laser intensity and the pulse width have a strong effect on the population of the molecular excited state, and that, as a common feature of light-matter interaction (LMI), the periodic change of the population of each state with evolution time can be interpreted by Rabi oscillation and the area theorem. The results illustrate that by controlling these two parameters, the desired population of the excited state of interest can be obtained, which provides a foundation for light manipulation of molecular processes. (C) 2005 Elsevier B.V. All rights reserved.
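The Rabi-oscillation interpretation above can be checked with a minimal two-level propagation. This resonant rotating-wave sketch (not the paper's full split-operator wavepacket calculation; units with hbar = 1 are assumed) reproduces P_excited(t) = sin²(Ωt/2), so a pulse area of π fully inverts the population, consistent with the area theorem.

```python
import numpy as np

def rabi_population(omega_rabi, t_final, n_steps=2000):
    """Propagate a resonant two-level system (rotating-wave approximation)
    and return the excited-state population at t_final."""
    dt = t_final / n_steps
    c = np.array([1.0 + 0j, 0.0 + 0j])   # start in the ground state
    # one-step propagator exp(-i*H*dt) with H = (omega_rabi/2) * sigma_x
    theta = omega_rabi * dt / 2
    U = np.array([[np.cos(theta), -1j * np.sin(theta)],
                  [-1j * np.sin(theta), np.cos(theta)]])
    for _ in range(n_steps):
        c = U @ c
    return abs(c[1])**2
```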

Relevance:

30.00%

Publisher:

Abstract:

A new continuous configuration time-dependent self-consistent field method has been developed to study polyatomic dynamical problems by using the discrete variable representation for the reaction system, and applied to a reaction system coupled to a bath. The method is very efficient because the equations involved are as simple as those in the traditional single configuration approach, and can account for the correlations between the reaction system and bath modes rather well. (C) American Institute of Physics.

Relevance:

30.00%

Publisher:

Abstract:

Shen, Q., Zhao, R., Tang, W. (2008). Modelling random fuzzy renewal reward processes. IEEE Transactions on Fuzzy Systems, 16(5), 1379-1385.

Relevance:

30.00%

Publisher:

Abstract:

Gough, John (2004). 'Holevo-Ordering and the Continuous-Time Limit for Open Floquet Dynamics'. Letters in Mathematical Physics, 67(3), pp. 207-221. RAE2008

Relevance:

30.00%

Publisher:

Abstract:

A new approach is proposed for clustering time-series data. The approach can be used to discover groupings of similar object motions that were observed in a video collection. A finite mixture of hidden Markov models (HMMs) is fitted to the motion data using the expectation-maximization (EM) framework. Previous approaches for HMM-based clustering employ a k-means formulation, where each sequence is assigned to only a single HMM. In contrast, the formulation presented in this paper allows each sequence to belong to more than a single HMM with some probability, and the hard decision about the sequence class membership can be deferred until a later time when such a decision is required. Experiments with simulated data demonstrate the benefit of using this EM-based approach when there is more "overlap" in the processes generating the data. Experiments with real data show the promising potential of HMM-based motion clustering in a number of applications.
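The key difference from k-means-style clustering described above, soft rather than hard sequence-to-model assignment, shows up in the E-step. As a minimal sketch (not the paper's implementation; it assumes the per-model sequence log-likelihoods have already been computed by HMM forward passes), the responsibilities can be computed as:

```python
import numpy as np

def responsibilities(log_likelihoods, log_weights):
    """E-step soft assignment for a mixture of sequence models.

    log_likelihoods: (n_sequences, n_models) array of log P(seq | model k)
    log_weights:     (n_models,) log mixture weights
    Returns P(model k | seq), computed with the log-sum-exp trick.
    Unlike a k-means formulation, every sequence keeps a probability
    for every model rather than a single hard label.
    """
    log_post = log_likelihoods + log_weights   # unnormalized log posteriors
    log_norm = np.logaddexp.reduce(log_post, axis=1, keepdims=True)
    return np.exp(log_post - log_norm)
```

A hard decision, when eventually required, is just an argmax over each row of this matrix.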

Relevance:

30.00%

Publisher:

Abstract:

We propose and evaluate an admission control paradigm for RTDBS, in which a transaction is submitted to the system as a pair of processes: a primary task and a recovery block. The execution requirements of the primary task are not known a priori, whereas those of the recovery block are known a priori. Upon the submission of a transaction, an Admission Control Mechanism is employed to decide whether to admit or reject that transaction. Once admitted, a transaction is guaranteed to finish executing before its deadline. A transaction is considered to have finished executing if exactly one of two things occurs: either its primary task is completed (successful commitment), or its recovery block is completed (safe termination). Committed transactions bring a profit to the system, whereas a terminated transaction brings no profit. The goal of the admission control and scheduling protocols (e.g., concurrency control, I/O scheduling, memory management) employed in the system is to maximize system profit. We describe a number of admission control strategies and contrast (through simulations) their relative performance.
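The admission decision described above must guarantee that, in the worst case, every admitted transaction can still complete its recovery block (whose execution time is known a priori) by its deadline. A minimal sketch of one such conservative test, an EDF-style schedulability check on the recovery blocks alone, follows; the paper evaluates several strategies, and this is an illustrative possibility rather than its specific mechanism.

```python
def admissible(admitted, candidate, now=0.0):
    """Conservative admission test.

    admitted:  list of (recovery_time, deadline) pairs already admitted
    candidate: (recovery_time, deadline) pair of the new transaction
    Admit only if, processing recovery blocks in earliest-deadline-first
    order starting at `now`, every one finishes by its deadline.
    """
    jobs = sorted(admitted + [candidate], key=lambda job: job[1])
    finish = now
    for rec_time, deadline in jobs:
        finish += rec_time
        if finish > deadline:
            return False   # some guarantee would be violated: reject
    return True            # safe termination is guaranteed for all
```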

Relevance:

30.00%

Publisher:

Abstract:

A growing wave of behavioral studies, using a wide variety of paradigms that were introduced or greatly refined in recent years, has generated a new wealth of parametric observations about serial order behavior. What was a mere trickle of neurophysiological studies has grown to a more steady stream of probes of neural sites and mechanisms underlying sequential behavior. Moreover, simulation models of serial behavior generation have begun to open a channel to link cellular dynamics with cognitive and behavioral dynamics. Here we summarize the major results from prominent sequence learning and performance tasks, namely immediate serial recall, typing, 2XN, discrete sequence production, and serial reaction time. These populate a continuum from higher to lower degrees of internal control of sequential organization. The main movement classes covered are speech and keypressing, both involving small-amplitude movements that are very amenable to parametric study. A brief synopsis of classes of serial order models, vis-à-vis the detailing of major effects found in the behavioral data, leads to a focus on competitive queuing (CQ) models. Recently, the many behavioral predictive successes of CQ models have been joined by successful prediction of distinctively patterned electrophysiological recordings in prefrontal cortex, wherein the parallel activation dynamics of multiple neural ensembles strikingly match the parallel dynamics predicted by CQ theory. An extended CQ simulation model, the N-STREAMS neural network model, is then examined to highlight issues in ongoing attempts to accommodate a broader range of behavioral and neurophysiological data within a CQ-consistent theory. Important contemporary issues, such as the nature of working memory representations for sequential behavior and the development and role of chunks in hierarchical control, are prominent throughout.
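The competitive queuing principle reviewed above is simple to state computationally: all items of a planned sequence are active in parallel, and serial order emerges from an iterated choose-the-most-active-then-suppress cycle. A minimal sketch (the activation values here are hypothetical, and real CQ models add noise and continuous dynamics):

```python
import numpy as np

def cq_sequence(activations):
    """Competitive queuing readout.

    All planned items are active in parallel; at each step the most
    active item wins, is executed, and is then suppressed, so the
    activation gradient determines the serial order of output.
    """
    acts = np.array(activations, dtype=float)
    order = []
    for _ in range(len(acts)):
        winner = int(np.argmax(acts))   # competitive choice
        order.append(winner)
        acts[winner] = -np.inf          # suppress the executed item
    return order
```

This is why parallel pre-output activation gradients, like those recorded in prefrontal cortex, can predict the forthcoming serial order.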

Relevance:

30.00%

Publisher:

Abstract:

This thesis contributes to the understanding of the processes involved in the formation and transformation of identities. It achieves this goal by establishing the critical importance of ‘background’ and ‘liminality’ in the shaping of identity. Drawing mainly from the work of cultural anthropology and philosophical hermeneutics a theoretical framework is constructed from which transformative experiences can be analysed. The particular experience at the heart of this study is the phenomenon of conversion and the dynamics involved in the construction of that process. Establishing the axial age as the horizon from which the process of conversion emerged will be the main theme of the first part of the study. Identifying the ‘birth’ of conversion allows a deeper understanding of the historical dynamics that make up the process. From these fundamental dynamics a theoretical framework is constructed in order to analyse the conversion process. Applying this theoretical framework to a number of case-studies will be the central focus of this study. The transformative experiences of Saint Augustine, the fourteenth century nun Margaret Ebner, the communist revolutionary Karl Marx and the literary figure of Arthur Koestler will provide the material onto which the theoretical framework can be applied. A synthesis of the Judaic religious and the Greek philosophical traditions will be the main findings for the shaping of Augustine’s conversion experience. The dissolution of political order coupled with the institutionalisation of the conversion process will illuminate the mystical experiences of Margaret Ebner at a time when empathetic conversion reached its fullest expression. The final case-studies examine two modern ‘conversions’ that seem to have an ideological rather than a religious basis to them. 
On closer examination it will be found that the German tradition of Biblical Criticism played a most influential role in the 'conversion' of Marx, and that mythology is the best medium for understanding the experiences of Koestler. The main ideas emerging from this study highlight the fluidity of identity and the important role of 'background' in its transformation. The theoretical framework constructed for this study is found to be a useful methodological tool that can offer insights into experiences, such as conversion, that would otherwise remain hidden from our enquiries.