202 results for frailty models
Abstract:
We carry out an extensive numerical study of the dynamics of spiral waves of electrical activation, in the presence of periodic deformation (PD) in two-dimensional simulation domains, in the biophysically realistic mathematical models of human ventricular tissue due to (a) ten-Tusscher and Panfilov (the TP06 model) and (b) ten-Tusscher, Noble, Noble, and Panfilov (the TNNP04 model). We first consider simulations in cable-type domains, in which we calculate the conduction velocity theta and the wavelength lambda of a plane wave; we show that PD leads to a periodic, spatial modulation of theta and a temporally periodic modulation of lambda; both these modulations depend on the amplitude and frequency of the PD. We then examine three types of initial conditions for both the TP06 and TNNP04 models and show that the imposition of PD leads to a rich variety of spatiotemporal patterns in the transmembrane potential, including states with a single rotating spiral (RS) wave, a spiral-turbulence (ST) state with a single meandering spiral, an ST state with multiple broken spirals, and a state (SA) in which all spirals are absorbed at the boundaries of our simulation domain. We find, for both the TP06 and TNNP04 models, that spiral-wave dynamics depends sensitively on the amplitude and frequency of the PD and on the initial condition. We examine how these different types of spiral-wave states can be eliminated in the presence of PD by the application of low-amplitude pulses via square- and rectangular-mesh suppression techniques. We suggest specific experiments that can test the results of our simulations.
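To give a concrete, if drastically simplified, picture of the kind of spiral-wave simulation described above, the sketch below integrates the two-variable Barkley model of excitable media on a small 2D grid and initiates a spiral with a standard cross-field protocol. It is an illustrative toy only: it uses neither the TP06/TNNP04 ionic models nor the periodic deformation studied in the paper, and the grid size, time step, and model parameters are all assumptions.

```python
# Minimal 2D excitable-media sketch: Barkley model spiral wave (illustrative toy,
# NOT the TP06/TNNP04 ionic models and without periodic deformation).
import numpy as np

N, L = 200, 20.0          # grid points per side, domain size (dimensionless, assumed)
dx = L / N
dt = 0.8 * dx**2 / 4.0    # explicit-Euler stability margin for D = 1
a, b, eps, D = 0.75, 0.06, 0.02, 1.0   # typical Barkley parameters (assumed)

u = np.zeros((N, N))      # fast (excitation) variable
v = np.zeros((N, N))      # slow (recovery) variable
u[:, : N // 2] = 1.0      # excited half-plane ...
v[N // 2 :, :] = a / 2.0  # ... with a refractory half: cross-field spiral initiation

def laplacian(f):
    """5-point Laplacian with no-flux (reflecting) boundaries."""
    g = np.pad(f, 1, mode="edge")
    return (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:] - 4 * f) / dx**2

for step in range(20000):
    uth = (v + b) / a                                     # excitation threshold
    du = (u * (1 - u) * (u - uth)) / eps + D * laplacian(u)
    dv = u - v
    u += dt * du
    v += dt * dv

# With these (assumed) parameters the broken wavefront typically curls into a rotating spiral.
print("final mean excitation:", u.mean())
```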
Abstract:
The performance of prediction models is often based on "abstract metrics" that estimate the model's ability to limit residual errors between the observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains require holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging "big data" domain of Smart Power Grids, we propose a suite of performance measures to rationally compare models along the dimensions of scale independence, reliability, volatility and cost. We include both application-independent and application-dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures are generalizable to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, which are relevant for energy sustainability. Our results underscore the value of the proposed measures to offer a deeper insight into models' behavior and their impact on real applications, which benefits both data mining researchers and practitioners.
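As a rough illustration of measures along those dimensions (not the paper's actual suite), the sketch below computes a scale-independent error, a tolerance-based reliability score, and an error-volatility score on a synthetic energy-use series; the tolerance parameter and the series itself are assumptions.

```python
# Illustrative stand-ins for scale-independence, reliability, and volatility measures;
# the paper's exact suite and parameterizations are not reproduced here.
import numpy as np

def mape(y, yhat):
    """Scale-independent error: mean absolute percentage error."""
    return np.mean(np.abs((y - yhat) / y)) * 100.0

def reliability(y, yhat, tol=0.10):
    """Fraction of predictions within a relative tolerance (tol is an assumed parameter)."""
    return np.mean(np.abs(y - yhat) <= tol * np.abs(y))

def volatility(y, yhat):
    """Spread of the relative error across the evaluation window."""
    return np.std((y - yhat) / y)

# Toy hourly energy-use series (kWh), purely synthetic.
rng = np.random.default_rng(0)
y = 50 + 10 * np.sin(np.linspace(0, 8 * np.pi, 200))
yhat = y + rng.normal(0, 2, size=y.size)
print(mape(y, yhat), reliability(y, yhat), volatility(y, yhat))
```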
Abstract:
We update the constraints on two-Higgs-doublet models (2HDMs), focusing on the parameter space relevant to explain the present muon g - 2 anomaly, Delta a(mu), in four different types of models: type I, type II, "lepton specific" (or X) and "flipped" (or Y). We show that the strong constraints provided by the electroweak precision data on the mass of the pseudoscalar Higgs, whose contribution may account for Delta a(mu), are evaded in regions where the charged scalar is degenerate with the heavy neutral one and the mixing angles alpha and beta satisfy the Standard Model limit beta - alpha approximate to pi/2. We combine theoretical constraints from vacuum stability and perturbativity with direct and indirect bounds arising from collider and B physics. Possible future constraints from the electron g - 2 are also considered. If the 126 GeV resonance discovered at the LHC is interpreted as the light CP-even Higgs boson of the 2HDM, we find that only models of type X can satisfy all the considered theoretical and experimental constraints.
Abstract:
The ability of Coupled General Circulation Models (CGCMs) participating in the Intergovernmental Panel on Climate Change's fourth assessment report (IPCC AR4) for the 20th century climate (20C3M scenario) to simulate the daily precipitation over the Indian region is explored. The skill is evaluated on a 2.5 degrees x 2.5 degrees grid square against the India Meteorological Department's (IMD) gridded dataset, and every GCM is ranked for each of these grids based on its skill score. Skill scores (SSs) are estimated from the probability density functions (PDFs) obtained from the observed IMD datasets and the GCM simulations. The methodology takes into account (high) extreme precipitation events simulated by GCMs. The results are analyzed and presented for three categories and six zones. The three categories are the monsoon season (JJASO - June to October), the non-monsoon season (JFMAMND - January to May, November, December) and the entire year ("Annual"). The six precipitation zones are peninsular, west central, northwest, northeast, central northeast India, and the hilly region. Sensitivity analysis was performed for three spatial scales - 2.5 degrees grid square, zones, and all of India - in the three categories. The models were ranked based on the SS. The category JFMAMND had a higher SS than the JJASO category. The northwest zone had higher SSs, whereas the peninsular and hilly regions had lower SSs. No single GCM can be identified as the best for all categories and zones. Some models consistently outperformed the model ensemble, and one model had particularly poor performance. Results show that most models underestimated the daily precipitation rates in the 0-1 mm/day range and overestimated them in the 1-15 mm/day range.
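A common way to turn observed and simulated precipitation PDFs into a single skill score is the binned-overlap measure (sum over bins of the minimum of the two empirical frequencies). The sketch below shows that calculation for one grid cell; the bin edges and the synthetic rainfall samples are illustrative assumptions, not the study's data or exact scoring rule.

```python
# Sketch of a PDF-overlap skill score for daily precipitation at one grid cell:
# SS = sum over bins of min(observed frequency, modeled frequency).
import numpy as np

def skill_score(obs, sim, bins):
    p_obs, _ = np.histogram(obs, bins=bins)
    p_sim, _ = np.histogram(sim, bins=bins)
    p_obs = p_obs / p_obs.sum()
    p_sim = p_sim / p_sim.sum()
    return np.minimum(p_obs, p_sim).sum()   # 1 = perfect overlap, 0 = disjoint PDFs

rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.8, scale=5.0, size=5000)       # synthetic daily rainfall (mm/day)
sim = rng.gamma(shape=0.7, scale=6.0, size=5000)       # synthetic GCM output
bins = np.concatenate(([0, 1], np.arange(5, 105, 5)))  # finer bins at low rain rates (assumed)
print("skill score:", skill_score(obs, sim, bins))
```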
Abstract:
We define two general classes of nonabelian sandpile models on directed trees (or arborescences), as models of nonequilibrium statistical physics. Unlike usual applications of the well-known abelian sandpile model, these models have the property that sand grains can enter only through specified reservoirs. In the Trickle-down sandpile model, sand grains are allowed to move one at a time. For this model, we show that the stationary distribution is of product form. In the Landslide sandpile model, all the grains at a vertex topple at once, and here we prove formulas for all eigenvalues, their multiplicities, and the rate of convergence to stationarity. The proofs use wreath products and the representation theory of monoids.
Bayesian parameter identification in dynamic state space models using modified measurement equations
Abstract:
When Markov chain Monte Carlo (MCMC) samplers are used in problems of system parameter identification, one faces computational difficulties in dealing with large amounts of measurement data and (or) low levels of measurement noise. Such exigencies are likely to occur in problems of parameter identification in dynamical systems, where the amount of vibratory measurement data and the number of parameters to be identified can be large. In such cases, the posterior probability density function of the system parameters tends to have regions of narrow support, and a finite-length MCMC chain is unlikely to cover the pertinent regions. The present study proposes strategies, based on modification of measurement equations and subsequent corrections, to alleviate this difficulty. This involves artificial enhancement of measurement noise, assimilation of transformed packets of measurements, and a global iteration strategy to improve the choice of prior models. Illustrative examples cover laboratory studies on a time-variant dynamical system and a bending-torsion coupled, geometrically non-linear building frame under earthquake support motions. (C) 2015 Elsevier Ltd. All rights reserved.
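The sketch below illustrates only the first of those ingredients: a random-walk Metropolis sampler in which the measurement-noise standard deviation is artificially inflated so that the posterior support is broad enough for the chain to explore. The single-parameter model, the inflation factor, and the synthetic data are assumptions; the paper's subsequent corrections, packet-wise assimilation, and global iteration are not reproduced here.

```python
# Minimal random-walk Metropolis sketch with artificially enhanced measurement noise
# (sigma_infl > sigma_true) to broaden the posterior support.
import numpy as np

rng = np.random.default_rng(2)
k_true, sigma_true = 2.0, 0.01
t = np.linspace(0, 5, 500)
data = np.sin(np.sqrt(k_true) * t) + rng.normal(0, sigma_true, t.size)  # synthetic response

def log_post(k, sigma):
    """Gaussian log-likelihood with a flat prior on k > 0."""
    if k <= 0:
        return -np.inf
    resid = data - np.sin(np.sqrt(k) * t)
    return -0.5 * np.sum(resid**2) / sigma**2

sigma_infl = 10 * sigma_true        # artificial noise enhancement (assumed factor)
k, chain = 1.0, []
for _ in range(20000):
    k_prop = k + rng.normal(0, 0.02)
    if np.log(rng.uniform()) < log_post(k_prop, sigma_infl) - log_post(k, sigma_infl):
        k = k_prop
    chain.append(k)
print("posterior mean of k (inflated-noise run):", np.mean(chain[5000:]))
```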
Abstract:
Eleven general circulation models/global climate models (GCMs) - BCCR-BCCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCM1, UKMO-HADCM3 and UKMO-HADGEM1 - are evaluated for Indian climate conditions using the performance indicator skill score (SS). Two climate variables, temperature T (at three levels, i.e. 500, 700 and 850 mb) and precipitation rate (Pr), are considered, resulting in four SS-based evaluation criteria (T500, T700, T850, Pr). The multicriterion decision-making method Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is applied to rank the 11 GCMs. Efforts are made to rank the GCMs for the Upper Malaprabha catchment and two river basins, namely, Krishna and Mahanadi (covered by 17 and 15 grids of size 2.5 degrees x 2.5 degrees, respectively). Similar efforts are also made for India (covered by 73 grid points of size 2.5 degrees x 2.5 degrees), for which an ensemble of GFDL2.0, INGV-ECHAM4, UKMO-HADCM3, MIROC3, BCCR-BCCM2.0 and GFDL2.1 is found to be suitable. It is concluded that the proposed methodology can be applied to similar situations with ease.
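The sketch below shows the standard TOPSIS procedure on a small made-up decision matrix of GCM skill scores under the four criteria; the scores, the equal weights, and treating all criteria as benefit criteria are assumptions for illustration, not values or choices from the study.

```python
# Minimal TOPSIS sketch: rank alternatives (GCMs) on several benefit criteria
# (here the four skill scores T500, T700, T850, Pr). All numbers are hypothetical.
import numpy as np

scores = np.array([[0.62, 0.58, 0.55, 0.40],   # hypothetical GCM A
                   [0.70, 0.66, 0.60, 0.35],   # hypothetical GCM B
                   [0.55, 0.52, 0.50, 0.45]])  # hypothetical GCM C
weights = np.full(4, 0.25)                     # equal weights (assumed)

norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization per criterion
v = norm * weights                               # weighted normalized decision matrix
ideal, anti = v.max(axis=0), v.min(axis=0)       # ideal and anti-ideal solutions
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)              # higher = closer to the ideal
print("TOPSIS ranking (best first):", np.argsort(-closeness))
```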
Abstract:
Retransmission protocols such as HDLC and TCP are designed to ensure reliable communication over noisy channels (i.e., channels that can corrupt messages). Thakkar et al. [15] have recently presented an algorithmic verification technique for deterministic streaming string transducer (DSST) models of such protocols. The verification problem is posed as equivalence checking between the specification and protocol DSSTs. In this paper, we argue that more general models need to be obtained using non-deterministic streaming string transducers (NSSTs). However, equivalence checking is undecidable for NSSTs. We present two classes of protocols whose models belong to a sub-class of NSSTs for which equivalence checking is decidable. (C) 2015 Elsevier B.V. All rights reserved.
Abstract:
Early afterdepolarizations (EADs), which are abnormal oscillations of the membrane potential during the plateau phase of an action potential, are implicated in the development of cardiac arrhythmias like Torsade de Pointes. We carry out extensive numerical simulations of the TP06 and ORd mathematical models for human ventricular cells with EADs. We investigate the different regimes in both these models, namely, the parameter regimes where they exhibit (1) a normal action potential (AP) with no EADs, (2) an AP with EADs, and (3) an AP with EADs that does not return to the resting potential. We also study the dependence of EADs on the rate at which we pace a cell, with the specific goal of elucidating EADs that are induced by slow- or fast-rate pacing. In our simulations in two- and three-dimensional domains, in the presence of EADs, we find the following wave types: (A) waves driven by the fast sodium current and the L-type calcium current (Na-Ca-mediated waves); (B) waves driven only by the L-type calcium current (Ca-mediated waves); (C) phase waves, which are pseudo-travelling waves. Furthermore, we compare the wave patterns of the various wave types (Na-Ca-mediated, Ca-mediated, and phase waves) in both these models. We find that the two models produce qualitatively similar results in terms of exhibiting Na-Ca-mediated wave patterns that are more chaotic than those for the Ca-mediated and phase waves. However, there are quantitative differences in the wave patterns of each wave type. The Na-Ca-mediated waves in the ORd model show short-lived spirals, but those in the TP06 model do not. The TP06 model supports more Ca-mediated spirals than the ORd model does, and the TP06 model exhibits more phase-wave patterns than does the ORd model.
Abstract:
This paper presents a comprehensive and robust strategy for the estimation of battery model parameters from noise-corrupted data. The deficiencies of existing methods for parameter estimation are studied, and the proposed strategy improves on earlier methods by working optimally for low as well as high discharge currents, providing accurate estimates even under high levels of noise, and remaining robust over a wide range of initial values. Testing on different data sets confirms the performance of the proposed parameter estimation strategy.
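As a rough illustration of the underlying task (fitting battery model parameters to noisy discharge data), the sketch below fits a generic discharge-voltage shape with a nonlinear least-squares routine from a deliberately crude initial guess. The model form, the true parameters, the noise level, and the initial guess are all assumptions; this is not the battery model or the estimation strategy proposed in the paper.

```python
# Illustrative least-squares fit of a generic discharge-voltage model
# V(t) = E0 - K*t + A*exp(-B*t) to noisy synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def v_model(t, E0, K, A, B):
    # Open-circuit level, linear droop, and an initial exponential transient (assumed form).
    return E0 - K * t + A * np.exp(-B * t)

rng = np.random.default_rng(3)
t = np.linspace(0, 3600, 400)              # one-hour discharge (s)
true = (3.7, 1e-4, 0.2, 5e-3)              # hypothetical "true" parameters
v_noisy = v_model(t, *true) + rng.normal(0, 0.02, t.size)   # heavy sensor noise

p0 = (3.5, 5e-5, 0.1, 1e-3)                # deliberately rough initial guess
popt, _ = curve_fit(v_model, t, v_noisy, p0=p0, maxfev=20000)
print("estimated (E0, K, A, B):", np.round(popt, 5))
```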
Abstract:
Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in^2 using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial response maximum likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in the case of 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial response equalization for TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB gain in SNR over uncoded data compared with noise-predictive maximum likelihood detection for the same choice of channel model parameters, to achieve a channel bit density of 1.3 Tb/in^2 with a media grain center-to-center distance of 10 nm. The DDNP algorithm is also observed to give a gain of about 10% in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal-density points.
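To illustrate the first ingredient above (a Voronoi-based granular medium), the sketch below scatters random grain centers over a small patch and rasterizes the resulting Voronoi tessellation by nearest-center assignment; the patch size, grain spacing, and raster resolution are assumptions, and this is not the paper's media model.

```python
# Sketch of a Voronoi-based granular medium: each cell of the tessellation is one grain.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
patch = 200.0                          # patch side length in nm (assumed)
spacing = 10.0                         # mean grain center-to-center distance ~10 nm
n_grains = int((patch / spacing) ** 2)
centers = rng.uniform(0, patch, size=(n_grains, 2))

# Assigning every raster point to its nearest grain center is exactly the
# Voronoi tessellation of the patch.
tree = cKDTree(centers)
xs = np.linspace(0, patch, 400)
X, Y = np.meshgrid(xs, xs)
_, grain_id = tree.query(np.column_stack([X.ravel(), Y.ravel()]))
grain_map = grain_id.reshape(X.shape)
print("grains:", n_grains, "distinct grains on the raster:", np.unique(grain_map).size)
```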
Abstract:
We study the dynamical behaviors of two types of spiral- and scroll-wave turbulence states, respectively, in two-dimensional (2D) and three-dimensional (3D) mathematical models of human ventricular myocyte cells that are attached to randomly distributed interstitial fibroblasts; these turbulence states are promoted by (a) the steep slope of the action-potential-duration-restitution (APDR) plot or (b) early afterdepolarizations (EADs). Our single-cell study shows that (1) the myocyte-fibroblast (MF) coupling G_j and (2) the number N_f of fibroblasts in an MF unit lower the steepness of the APDR slope and eliminate the EAD behaviors of myocytes; we explore the pacing dependence of such EAD suppression. In our 2D simulations, we observe that a spiral-turbulence (ST) state evolves into a state with a single rotating spiral (RS) if either (a) G_j is large or (b) the maximum possible number of fibroblasts per myocyte, N_f^max, is large. We also observe that the minimum value of G_j for the transition from the ST to the RS state decreases as N_f^max increases. We find that, for the steep-APDR-induced ST state, once the MF coupling suppresses ST, the rotation period of a spiral in the RS state increases as (1) G_j increases, with fixed N_f^max, and (2) N_f^max increases, with fixed G_j. We obtain the boundary between the ST and RS stability regions in the N_f^max-G_j plane. In particular, for low values of N_f^max, the value of G_j at the ST-RS boundary depends on the realization of the randomly distributed fibroblasts; this dependence decreases as N_f^max increases. Our 3D studies show a similar transition from scroll-wave turbulence to a single rotating scroll-wave state because of the MF coupling. We examine the experimental implications of our study and propose that the suppression of (a) the steep slope of the APDR plot or (b) EADs can eliminate spiral- and scroll-wave turbulence in heterogeneous cardiac tissue with randomly distributed fibroblasts.
Abstract:
The calculation of the first passage time (moreover, even its probability density in time) has so far been generally viewed as an ill-posed problem in the domain of quantum mechanics. The reasons can be summarily seen in the fact that quantum probabilities in general do not satisfy the Kolmogorov sum rule: the probabilities for entering and not entering of Feynman paths into a given region of space-time do not in general add up to unity, owing largely to the interference of alternative paths. In the present work, it is pointed out that a special case exists (within the quantum framework) in which, by design, there exists one and only one available path (i.e., doorway) to mediate the (first) passage - no alternative path to interfere with. Further, it is identified that a popular family of quantum systems - namely, the 1d tight-binding Hamiltonian systems - falls under this special category. For these model quantum systems, the first passage time distributions are obtained analytically by suitably applying a method originally devised for classical (stochastic) mechanics (by Schroedinger in 1915). This result is interesting especially given the fact that tight-binding models are extensively used in describing everyday phenomena in condensed matter physics.
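For context, the classical construction on which Schroedinger's 1915 method rests is the renewal decomposition of the propagator over the time of first arrival. It is reproduced below only as the backbone of the approach; the notation (P for the occupation propagator, F for the first-passage density, s for the Laplace variable) is an assumption for illustration, and the abstract does not specify how the relation carries over to the quantum single-doorway setting.

```latex
% Classical renewal relation underlying Schroedinger's (1915) first-passage method
% (notation P, F, and the Laplace variable s are assumptions for illustration).
\begin{align}
  P_{a \to b}(t) &= \int_{0}^{t} F_{a \to b}(t')\, P_{b \to b}(t - t')\, \mathrm{d}t' , \\
  \widetilde{F}_{a \to b}(s) &= \frac{\widetilde{P}_{a \to b}(s)}{\widetilde{P}_{b \to b}(s)} ,
\end{align}
```

Here P_{a->b}(t) denotes the probability of being at b at time t having started from a, F_{a->b}(t) is the first-passage-time density, and tildes denote Laplace transforms, so the first-passage density follows from the two propagators by deconvolution.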
Abstract:
The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints-based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose, and we discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints-based approach to understand the dynamic growth strategies of microorganisms. (C) 2015 Elsevier Ireland Ltd. All rights reserved.
Abstract:
Northeast India and its adjoining areas are characterized by very high seismic activity. According to the Indian seismic code, the region falls under seismic zone V, which represents the highest seismic-hazard level in the country. This region has experienced a number of great earthquakes, such as the Assam (1950) and Shillong (1897) earthquakes, that caused huge devastation in the entire northeast and adjacent areas through flooding, landslides, liquefaction, and damage to roads and buildings. In this study, an attempt has been made to find the probability of occurrence of a major earthquake (M_w > 6) in this region using an updated earthquake catalog collected from different sources. After dividing the catalog into six seismic regions based on different tectonic features and seismogenic factors, the probability of occurrence was estimated using three models: the lognormal, Weibull, and gamma distributions. We calculated the logarithm of the likelihood function (ln L) for all six regions, and for the entire northeast, for all three stochastic models; a higher value of ln L indicates a better-fitting model, and a lower value a worse one. The results show that different models suit different seismic zones, but the majority follow the lognormal distribution, which is better for forecasting magnitude size. According to the results, the Weibull model shows the highest conditional probabilities among the three models for small as well as large elapsed times T and time intervals t, whereas the lognormal model shows the lowest and the gamma model intermediate probabilities. Only for elapsed time T = 0 does the lognormal model show the highest conditional probabilities among the three models at smaller time intervals (t = 3-15 yr); the opposite is observed at larger time intervals (t = 15-25 yr), where the Weibull model shows the highest probabilities. Based on this study, the Indo-Burma Range and the Eastern Himalaya show a probability of occurrence greater than 90% in the 5 yr period 2012-2017.
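The conditional probability used in this kind of renewal analysis is P(event within the next t years | T years already elapsed) = (F(T + t) - F(T)) / (1 - F(T)), where F is the fitted inter-event-time distribution. The sketch below evaluates it under lognormal, Weibull, and gamma fits, together with ln L for model comparison; the inter-event times, the elapsed time, and the window are synthetic assumptions, not the study's catalog or results.

```python
# Conditional probability P(next event within t yr | T yr elapsed) under three
# inter-event-time models fitted by maximum likelihood (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
inter_event = rng.lognormal(mean=2.0, sigma=0.6, size=30)   # synthetic gaps (years)

models = {
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
    "gamma":     stats.gamma,
}

T_elapsed, t_window = 5.0, 5.0   # assumed elapsed time and forecast window (years)
for name, dist in models.items():
    params = dist.fit(inter_event, floc=0)                  # fix location at zero
    F = lambda x: dist.cdf(x, *params)
    cond = (F(T_elapsed + t_window) - F(T_elapsed)) / (1.0 - F(T_elapsed))
    loglik = np.sum(dist.logpdf(inter_event, *params))      # ln L for model comparison
    print(f"{name:9s}  ln L = {loglik:7.2f}   P(event in next {t_window:.0f} yr) = {cond:.3f}")
```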