25 results for Computational Delay-Time
Abstract:
The time interval between successive migrations of biological species causes a delay time in the reaction-diffusion equations describing their space-time dynamics. This lowers the predicted speed of the waves of advance, as compared to classical models. It has been shown that this delay-time effect improves the modeling of human range expansions. Here, we demonstrate that it can also be important for other species. We present two new examples where the predictions of the time-delayed and the classical (Fisher) approaches are compared to experimental data. No free or adjustable parameters are used. We show that the importance of the delay effect depends on the dimensionless product of the initial growth rate and the delay time. We argue that the delay effect should be taken into account in the modeling of range expansions for biological species.
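To make the role of that dimensionless product concrete, here is a minimal Python sketch. It is illustrative only: the classical Fisher speed 2*sqrt(a*D) is standard, but the 1/(1 + a*T/2) delayed correction is a commonly quoted second-order form assumed here purely for demonstration (the exact expression depends on the derivation discussed in these abstracts), and the parameter values are hypothetical rather than taken from the cited experiments.

```python
import math

def fisher_speed(a, D):
    """Classical Fisher front speed, v_F = 2*sqrt(a*D)."""
    return 2.0 * math.sqrt(a * D)

def delayed_speed(a, D, T):
    """Illustrative time-delayed speed, v = 2*sqrt(a*D) / (1 + a*T/2).

    The 1/(1 + a*T/2) correction is an assumption used here only to show how
    the dimensionless product a*T controls the size of the delay effect; the
    exact second-order expression depends on the derivation.
    """
    return fisher_speed(a, D) / (1.0 + 0.5 * a * T)

# Hypothetical parameters (not taken from the cited experiments):
# growth rate a [1/yr], diffusivity D [km^2/yr], delay time T [yr].
a, D, T = 0.03, 15.0, 25.0
print(f"a*T = {a * T:.2f}")
print(f"Fisher speed : {fisher_speed(a, D):.3f} km/yr")
print(f"Delayed speed: {delayed_speed(a, D, T):.3f} km/yr")
```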
Abstract:
We generalize to arbitrary waiting-time distributions some results which were previously derived for discrete distributions. We show that for any two waiting-time distributions with the same mean delay time, that with higher dispersion will lead to a faster front. Experimental data on the speed of virus infections in a plaque are correctly explained by the theoretical predictions using a Gaussian delay-time distribution, which is more realistic for this system than the Dirac delta distribution considered previously [J. Fort and V. Méndez, Phys. Rev. Lett. 89, 178101 (2002)].
Abstract:
The spread of viruses in growing plaques predicted by classical models is greater than that measured experimentally. There is a widespread belief that this discrepancy is due to biological factors. Here we show that the observed speeds can be satisfactorily predicted by a purely physical model that takes into account the delay time due to virus reproduction inside infected cells. No free or adjustable parameters are used.
Abstract:
Report for the scientific sojourn carried out at the Université Catholique de Louvain, Belgium, from March until June 2007. In the first part, the impact of important geometrical parameters such as source and drain thickness, fin spacing, spacer width, etc., on the parasitic fringing capacitance component of multiple-gate field-effect transistors (MuGFETs) is analyzed in depth using finite element simulations. Several architectures, such as single-gate, FinFETs (double-gate), and triple-gate represented by Pi-gate MOSFETs, are simulated and compared in terms of channel and fringing capacitances for the same occupied die area. Simulations highlight the great impact of diminishing the spacing between fins for MuGFETs and the trade-off between the reduction of parasitic source and drain resistances and the increase of fringing capacitances when Selective Epitaxial Growth (SEG) technology is introduced. The impact of these technological solutions on the transistor cut-off frequencies is also discussed. The second part deals with the effect of volume inversion (VI) on the capacitances of undoped Double-Gate (DG) MOSFETs. For that purpose, we present simulation results for the capacitances of undoped DG MOSFETs using an explicit and analytical compact model. It demonstrates that the transition from the volume inversion regime to dual-gate behaviour is well simulated. The model shows an accurate dependence on the silicon layer thickness, consistent with two-dimensional numerical simulations, for both thin and thick silicon films. Whereas the current drive and transconductance are enhanced in the volume inversion regime, our results show that intrinsic capacitances present higher values as well, which may limit the high-speed (delay-time) behaviour of DG MOSFETs under the volume inversion regime.
Abstract:
Purpose: To determine, in patients with primary rhegmatogenous retinal detachment who presented to our centre, the delay time between the onset of the first symptoms and the visit to the surgeon. The secondary objectives are to describe the factors that influenced this delay time, to determine the relationship between the duration of the primary rhegmatogenous retinal detachment, the status of the macula and the functional outcome after surgery, and to describe the symptoms reported by the patients. Material and Methods: Prospective descriptive study of 59 eyes of 59 patients with primary rhegmatogenous retinal detachment who presented to the ophthalmology department of Vall d'Hebron hospital between March and June 2008. A detailed history and ophthalmological examination were performed on arrival, the patients underwent surgery by pars plana vitrectomy, and they were followed up for a minimum of 6 months to determine the functional results of the surgery. Results: The mean delay time from symptom onset to the first consultation with a physician was 4.10 days. The mean duration of the rhegmatogenous retinal detachment was 17.03 days. Of the patients with a detached macula, 84.1% had a duration of 15 days or less and 15.9% a duration of more than 15 days. The mean postoperative visual acuity of patients with an attached macula was 0.55 on the decimal scale; in patients with macular involvement of less than 15 days' duration it was 0.41, and in patients with macular involvement of more than 15 days' duration it was 0.33. The most frequent symptom was blurred vision (98.3%), followed by floaters (28.8%). Conclusions: The delay between the onset of the first symptoms of RRD and the visit to the surgeon is longer from the physician's referral to the surgeon than from symptom onset to the patient's consultation with the physician. Underestimation of the severity by the patient is the most frequently reported cause of delay. Patients with a longer duration have a higher percentage of macular involvement. Patients with an attached macula had a better functional outcome after RRD surgery than patients with a detached macula.
Abstract:
In order to explain the speed of Vesicular Stomatitis Virus (VSV) infections, we develop a simple model that improves previous approaches to the propagation of virus infections. For VSV infections, we find that the delay time elapsed between the adsorption of a viral particle into a cell and the release of its progeny has a very important effect. Moreover, this delay time makes the adsorption rate essentially irrelevant in order to predict VSV infection speeds. Numerical simulations are in agreement with the analytical results. Our model satisfactorily explains the experimentally measured speeds of VSV infections.
Abstract:
The second differential of the entropy is used for analysing the stability of a thermodynamic climatic model. A delay time for the heat flux is introduced whereby it becomes an independent variable. Two different expressions for the second differential of the entropy are used: one follows classical irreversible thermodynamics theory; the second is related to the introduction of response time and is due to the extended irreversible thermodynamics theory. The second differential of the classical entropy leads to unstable solutions for high values of delay times. The extended expression always implies stable states for an ice-free earth. When the ice-albedo feedback is included, a discontinuous distribution of stable states is found for high response times. Following the thermodynamic analysis of the model, the maximum rates of entropy production at the steady state are obtained. A latitudinally isothermal earth produces the extremum in global entropy production. The material contribution to entropy production (by which we mean the production of entropy by material transport of heat) is a maximum when the latitudinal distribution of temperatures becomes less homogeneous than present values.
Abstract:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system, which are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be efficiently solved using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
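The paper's conditions are specific LMIs built from the Lyapunov-Krasovskii functional; as a generic illustration of how such LMI feasibility problems are handled with convex programming, the sketch below checks the plain delay-free Lyapunov inequality A^T P + P A < 0 with cvxpy. The system matrix and tolerance are hypothetical, and this condition is far simpler than the neutral-, discrete- and distributed-delay-dependent ones established in the paper.

```python
import numpy as np
import cvxpy as cp

# Toy stable system matrix (hypothetical; not from the paper's examples).
A = np.array([[-2.0, 1.0],
              [ 0.0, -1.5]])
n = A.shape[0]

# Decision variable: symmetric P with P > 0 and A^T P + P A < 0
# (the basic delay-free Lyapunov LMI, standing in for the paper's conditions).
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print("LMI feasible (stability certified)?", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```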
Abstract:
Networks are evolving toward a ubiquitous model in which heterogeneous devices are interconnected. Cryptographic algorithms are required for developing security solutions that protect network activity. However, the computational and energy limitations of network devices jeopardize the actual implementation of such mechanisms. In this paper, we perform a wide analysis of the expenses of launching symmetric and asymmetric cryptographic algorithms, hash chain functions, elliptic curve cryptography and pairing-based cryptography on personal agendas, and compare them with the costs of basic operating system functions. Results show that although cryptographic power costs are high and such operations shall be restricted in time, they are not the main limiting factor of the autonomy of a device.
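As a rough illustration of this kind of cost comparison (not the paper's platform or methodology, which targeted personal agendas), the sketch below times a symmetric hash operation against two basic operating system calls using only Python's standard library.

```python
import hashlib
import os
import time

def time_it(label, fn, runs=10000):
    """Print the average wall-clock time per call, in microseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    elapsed = time.perf_counter() - start
    print(f"{label:<20} {1e6 * elapsed / runs:8.2f} us/call")

payload = os.urandom(1024)  # 1 KiB of data to hash

time_it("SHA-256 (1 KiB)", lambda: hashlib.sha256(payload).digest())
time_it("os.getpid()", os.getpid)
time_it("os.stat('.')", lambda: os.stat("."))
```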
Abstract:
The inverse scattering problem concerning the determination of the joint time-delay-Doppler-scale reflectivity density characterizing continuous target environments is addressed by recourse to the generalized frame theory. A reconstruction formula, involving the echoes of a frame of outgoing signals and its corresponding reciprocal frame, is developed. A "realistic" situation with respect to the transmission of a finite number of signals is further considered. In such a case, our reconstruction formula is shown to yield the orthogonal projection of the reflectivity density onto a subspace generated by the transmitted signals.
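For readers unfamiliar with reciprocal (dual) frames, the identity underlying this kind of reconstruction is the standard frame expansion shown below; the paper applies it to the reflectivity density and the echoes of the transmitted signals, which is not reproduced here.

```latex
% Generic dual-frame reconstruction (standard frame theory, not the paper's
% specific formula): for a frame {\psi_k} with reciprocal (dual) frame
% {\tilde\psi_k}, every f in the Hilbert space satisfies
\[
  f \;=\; \sum_{k} \langle f, \psi_k \rangle\, \tilde{\psi}_k
    \;=\; \sum_{k} \langle f, \tilde{\psi}_k \rangle\, \psi_k .
\]
% The paper's finite-signal result then amounts to recovering the orthogonal
% projection of the reflectivity density onto the span of the transmitted signals.
```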
Abstract:
Minimal models for the explanation of decision-making in computational neuroscience are based on the analysis of the evolution of the average firing rates of two interacting neuron populations. While these models typically lead to a multi-stable scenario for the basic derived dynamical systems, noise is an important feature of the model, taking into account finite-size effects and the robustness of the decisions. These stochastic dynamical systems can be analyzed by studying carefully their associated Fokker-Planck partial differential equation. In particular, we discuss the existence, positivity and uniqueness of the solution of the stationary equation, as well as of the time-evolving problem. Moreover, we prove convergence of the solution to the stationary state representing the probability distribution of finding the neuron families in each of the decision states characterized by their average firing rates. Finally, we propose a numerical scheme allowing for simulations performed on the Fokker-Planck equation which are in agreement with those obtained recently by a moment method applied to the stochastic differential system. Our approach leads to a more detailed analytical and numerical study of this decision-making model in computational neuroscience.
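As a much-simplified illustration of relaxation toward a stationary distribution, the sketch below integrates a one-dimensional Fokker-Planck equation with a crude explicit finite-difference scheme. The double-well drift is a hypothetical stand-in for the bistable decision dynamics; the paper's model is two-dimensional (two firing rates) and its numerical scheme is more careful than this one.

```python
import numpy as np

# Hypothetical 1D stand-in for the bistable decision dynamics:
# drift f(x) = x - x**3 (double well) and constant noise intensity sigma^2.
L, N = 3.0, 201
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
sigma2 = 0.5
drift = x - x**3

# Initial density: narrow Gaussian at the barrier x = 0, normalised.
p = np.exp(-x**2 / 0.05)
p /= np.trapz(p, x)

dt = 0.2 * dx**2 / sigma2            # conservative explicit time step
for _ in range(20000):
    flux = drift * p - 0.5 * sigma2 * np.gradient(p, dx)
    p = p - dt * np.gradient(flux, dx)
    p = np.clip(p, 0.0, None)        # crude positivity preservation
    p /= np.trapz(p, x)              # crude mass conservation

# After relaxation the density should be larger near the wells x = +/-1
# than at the barrier x = 0 (bimodal stationary distribution).
print("mass  :", round(float(np.trapz(p, x)), 4))
print("p(0)  :", round(float(p[N // 2]), 4))
print("p(+1) :", round(float(np.interp(1.0, x, p)), 4))
```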
Abstract:
A time-delayed second-order approximation for the front speed in reaction-dispersion systems was obtained by Fort and Méndez [Phys. Rev. Lett. 82, 867 (1999)]. Here we show that taking proper care of the effect of the time delay on the reactive process yields a different evolution equation and, therefore, an alternative equation for the front speed. We apply the new equation to the Neolithic transition. For this application the new equation yields speeds about 10% slower than the previous one.
Abstract:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that such a highest scoring gene can be stored and updated. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Indeed, the definition of valid gene structures is externally defined in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
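The linear-scan idea can be sketched in a few lines of Python. The version below is a deliberate simplification: exons are just (acceptor, donor, score) triples, "compatible" is reduced to non-overlap, and reading frame, strand and the Gene Model are ignored. It only shows how keeping a running "best gene ending before this acceptor" avoids rescanning all preceding exons for every candidate.

```python
from typing import List, Tuple

Exon = Tuple[int, int, float]   # (acceptor position, donor position, score)

def best_gene_score(exons: List[Exon]) -> float:
    """Highest additive score over chains of non-overlapping exons.

    Simplified sketch: a "compatible preceding exon" is one whose donor lies
    strictly before this exon's acceptor.  After the two sorts, the joint
    scan over acceptor and donor positions is linear in the number of exons.
    """
    order_acc = sorted(range(len(exons)), key=lambda i: exons[i][0])
    order_don = sorted(range(len(exons)), key=lambda i: exons[i][1])

    best_ending_at = [0.0] * len(exons)   # best gene score ending in exon i
    best_prefix = 0.0                     # best score among exons already "closed"
    j = 0                                 # pointer into order_don
    answer = 0.0

    for i in order_acc:
        acc, _, score = exons[i]
        # Close every exon whose donor lies before this acceptor; any of them
        # could end the best compatible preceding gene.
        while j < len(order_don) and exons[order_don[j]][1] < acc:
            best_prefix = max(best_prefix, best_ending_at[order_don[j]])
            j += 1
        best_ending_at[i] = score + best_prefix
        answer = max(answer, best_ending_at[i])
    return answer

# Tiny usage example with made-up coordinates and scores.
demo = [(10, 50, 3.0), (60, 90, 2.0), (40, 120, 4.5), (130, 160, 1.0)]
print(best_gene_score(demo))   # chain of exons 1, 2 and 4: 3.0 + 2.0 + 1.0 = 6.0
```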
Abstract:
Intuitively, music has both predictable and unpredictable components. In this work we assess this qualitative statement in a quantitative way using common time series models fitted to state-of-the-art music descriptors. These descriptors cover different musical facets and are extracted from a large collection of real audio recordings comprising a variety of musical genres. Our findings show that music descriptor time series exhibit a certain predictability not only for short time intervals, but also for mid-term and relatively long intervals. This fact is observed independently of the descriptor, musical facet and time series model we consider. Moreover, we show that our findings are not only of theoretical relevance but can also have practical impact. To this end we demonstrate that music predictability at relatively long time intervals can be exploited in a real-world application, namely the automatic identification of cover songs (i.e. different renditions or versions of the same musical piece). Importantly, this prediction strategy yields a parameter-free approach for cover song identification that is substantially faster, allows for reduced computational storage and still maintains highly competitive accuracies when compared to state-of-the-art systems.
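A minimal way to quantify this kind of descriptor predictability (not the paper's actual descriptors, models or collection) is to fit a short autoregressive model by least squares and compare its forecast error with a naive "repeat the last value" baseline, as sketched below on a synthetic stand-in for a descriptor time series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a music descriptor time series: a slowly varying
# component plus noise (real descriptors would come from audio analysis).
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 80) + 0.3 * rng.standard_normal(t.size)

def ar_forecast_error(x, order=4, split=0.8):
    """Fit an AR(order) model by least squares on the first part of x and
    return the one-step mean squared error on the held-out part."""
    n_train = int(split * len(x))
    # Lag matrix: row for time i holds [x[i-1], ..., x[i-order]].
    rows = [x[i - order:i][::-1] for i in range(order, len(x))]
    X, y = np.array(rows), x[order:]
    cut = n_train - order
    coef, *_ = np.linalg.lstsq(X[:cut], y[:cut], rcond=None)
    pred = X[cut:] @ coef
    return np.mean((y[cut:] - pred) ** 2)

mse_ar = ar_forecast_error(series)
mse_naive = np.mean(np.diff(series[int(0.8 * len(series)):]) ** 2)  # repeat last value
print(f"AR(4) one-step MSE: {mse_ar:.4f}   naive MSE: {mse_naive:.4f}")
```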
Abstract:
The current research in Music Information Retrieval (MIR) is showing the potential that Information Technologies can have in music-related applications. A major research challenge in that direction is how to automatically describe/annotate audio recordings and how to use the resulting descriptions to discover and appreciate music in new ways. But music is a complex phenomenon, and the description of an audio recording has to deal with this complexity. For example, each music culture has specificities and emphasizes different musical and communication aspects, thus the musical recordings of each culture should be described differently. At the same time, these cultural specificities give us the opportunity to pay attention to musical concepts and facets that, despite being present in most world musics, are not easily noticed by listeners. In this paper we present some of the work done in the CompMusic project, including ideas and specific examples of how to take advantage of the cultural specificities of different musical repertoires. We will use examples from the art music traditions of India, Turkey and China.