Abstract:

Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3) GCM simulations. Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling models to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the performances of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR to link the predictor variables at the two grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) than the spatial interpolation technique used in earlier studies.
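
As a hedged illustration of the kind of pipeline described above, the sketch below chains standardization, PCA, and an SVM regression (scikit-learn) to map gridded predictors to monthly rainfall. The arrays, component count, and SVR hyperparameters are placeholders, not the paper's configuration.

```python
# Illustrative sketch (not the paper's exact setup): PCA on large-scale
# NCEP/GCM predictors followed by an SVM regression to estimate monthly
# rainfall at a station. Data arrays here are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(360, 24))              # 30 years x 12 months, 24 gridded predictors
y = np.abs(rng.normal(100, 40, size=360))   # monthly rainfall (mm), placeholder

model = make_pipeline(StandardScaler(), PCA(n_components=8), SVR(C=10.0, epsilon=1.0))
model.fit(X[:300], y[:300])                 # calibrate on reanalysis-era predictors
print(model.predict(X[300:305]))            # downscale GCM-era predictors
```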

Abstract:

Vernacular dwellings are climate-responsive designs that use local materials and skills to provide comfortable indoor environments under local climatic conditions. These naturally ventilated, passive dwellings have enabled communities to endure even extreme climates, and the design and the physiological resilience of the inhabitants have co-evolved to be attuned to local climatic and environmental conditions. Such adaptations challenge modern theories of human thermal comfort, which evolved in the era of electricity and air-conditioned buildings. Vernacular building elements like rubble walls and mud roofs have given way to burnt-brick walls and reinforced cement concrete and tin roofs. Over 60% of the Indian population is rural, so the implications of such transitions for thermal comfort and energy use in buildings are crucial to understand. The energy use associated with a building's life cycle includes its embodied energy, operational and maintenance energy, and demolition and disposal energy. Embodied energy (EE) represents the total energy consumed in constructing a building, i.e., the embodied energy of the building materials, material transportation energy, and building construction energy; the embodied energy of the building materials is the major contributor. Operational energy (OE), contributed mainly by space conditioning and lighting, depends on the climatic conditions of the region and the comfort requirements of the occupants. Traditional buildings use less energy-intensive natural materials, so their EE is low; the transition to manufactured, energy-intensive materials like brick, cement, steel, and glass significantly raises the embodied energy of these dwellings. This paper studies the increase in the EE of a dwelling attributable to changes in wall materials. Climatic location strongly influences operational energy: buildings in regions with extreme climates require more operational energy to meet heating and cooling demands throughout the year, whereas traditional buildings adopt passive, non-mechanical methods of space conditioning and hence need less operational energy. This study assesses operational energy in a traditional dwelling with respect to changes in wall material and climatic location; OE is assessed for hot-dry, warm-humid, and moderate climatic zones. The choice of thermal-comfort model is yet another factor that greatly influences operational energy assessment; the paper adopts two popular models, viz., the ASHRAE comfort standards and the TSI of Sharma and Ali, to investigate thermal-comfort aspects and the impact of these comfort models on OE assessment in traditional dwellings. A naturally ventilated vernacular dwelling in Sugganahalli, a village close to Bangalore (India), set in a warm-humid climate, is considered for the present investigation of the impact of transitions in building materials, climatic location, and choice of thermal-comfort model on energy in buildings. The study includes rigorous real-time monitoring of the thermal performance of the dwelling. Dynamic simulation models validated against the measured data are then used to determine the impact of the transition from vernacular to modern material configurations. Results of the study and an appraisal of appropriate thermal-comfort standards for computing operational energy are presented and discussed. (c) 2014 K.I. Praseeda. Published by Elsevier Ltd.
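
The embodied-energy bookkeeping described above can be illustrated with a minimal sketch; the coefficients and material quantities below are rough placeholders, not the values measured for the Sugganahalli dwelling.

```python
# Minimal sketch of the embodied-energy (EE) bookkeeping described above:
# EE = sum(material quantity x EE coefficient) + transport + construction energy.
# Coefficients and quantities are illustrative placeholders only.
EE_COEFF_MJ_PER_KG = {"mud": 0.1, "rubble": 0.2, "burnt_brick": 3.0, "cement": 5.5, "steel": 30.0}

def embodied_energy(quantities_kg, transport_mj=0.0, construction_mj=0.0):
    material_mj = sum(EE_COEFF_MJ_PER_KG[m] * q for m, q in quantities_kg.items())
    return material_mj + transport_mj + construction_mj

vernacular = {"mud": 20000, "rubble": 30000}                     # kg of each material
transition = {"burnt_brick": 25000, "cement": 4000, "steel": 500}
print(embodied_energy(vernacular), embodied_energy(transition))  # MJ
```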

Abstract:

We carry out an extensive numerical study of the dynamics of spiral waves of electrical activation, in the presence of periodic deformation (PD) in two-dimensional simulation domains, in the biophysically realistic mathematical models of human ventricular tissue due to (a) ten Tusscher and Panfilov (the TP06 model) and (b) ten Tusscher, Noble, Noble, and Panfilov (the TNNP04 model). We first consider simulations in cable-type domains, in which we calculate the conduction velocity theta and the wavelength lambda of a plane wave; we show that PD leads to a periodic spatial modulation of theta and a temporally periodic modulation of lambda, and that both modulations depend on the amplitude and frequency of the PD. We then examine three types of initial conditions for both the TP06 and TNNP04 models and show that the imposition of PD leads to a rich variety of spatiotemporal patterns in the transmembrane potential, including states with a single rotating spiral (RS) wave, a spiral-turbulence (ST) state with a single meandering spiral, an ST state with multiple broken spirals, and a state (SA) in which all spirals are absorbed at the boundaries of the simulation domain. We find, for both models, that the spiral-wave dynamics depends sensitively on the amplitude and frequency of the PD and on the initial condition. We examine how these different types of spiral-wave states can be eliminated, in the presence of PD, by the application of low-amplitude pulses via square- and rectangular-mesh suppression techniques. We suggest specific experiments that can test the results of our simulations.
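
A minimal post-processing sketch of the cable-domain quantities mentioned above: conduction velocity estimated from activation times at two sites, and wavelength taken as velocity times APD. The numbers are placeholders, not TP06/TNNP04 output.

```python
# Toy post-processing sketch: estimate conduction velocity (theta) from
# activation times recorded at two cable sites, and take the wavelength as
# lambda = theta * APD. Numbers are placeholders, not model output.
import numpy as np

x = np.array([10.0, 60.0])        # recording sites (mm)
t_act = np.array([25.0, 150.0])   # activation times (ms)
apd = 300.0                       # action-potential duration (ms)

theta = np.diff(x)[0] / np.diff(t_act)[0]   # mm/ms, numerically equal to m/s
lam = theta * apd                           # wavelength (mm)
print(f"theta = {theta:.3f} m/s, lambda = {lam:.1f} mm")
```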

Abstract:

The performance of prediction models is often evaluated with ``abstract metrics'' that estimate a model's ability to limit residual errors between observed and predicted values. However, meaningful evaluation and selection of prediction models for end-user domains require holistic and application-sensitive performance measures. Inspired by energy consumption prediction models used in the emerging ``big data'' domain of Smart Power Grids, we propose a suite of performance measures for rationally comparing models along the dimensions of scale independence, reliability, volatility and cost. We include both application-independent and application-dependent measures, the latter parameterized to allow customization by domain experts to fit their scenario. While our measures generalize to other domains, we offer an empirical analysis using real energy use data for three Smart Grid applications: planning, customer education and demand response, all of which are relevant to energy sustainability. Our results underscore the value of the proposed measures in offering deeper insight into model behavior and its impact on real applications, benefiting both data mining researchers and practitioners.
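
The sketch below illustrates the flavor of such application-oriented measures with generic stand-ins for scale-independent error, reliability, and volatility; these definitions are assumptions for illustration, not the paper's exact formulations.

```python
# Illustrative stand-ins for application-oriented measures: a scale-independent
# error (CV of RMSE), a reliability fraction, and an error-volatility measure.
import numpy as np

def cvrmse(y, yhat):                 # scale-independent error
    return np.sqrt(np.mean((y - yhat) ** 2)) / np.mean(y)

def reliability(y, yhat, tol=0.1):   # fraction of predictions within a relative tolerance
    return np.mean(np.abs(y - yhat) <= tol * np.abs(y))

def volatility(y, yhat, window=24):  # spread of the rolling mean absolute error
    err = np.abs(y - yhat)
    rolled = err[: len(err) // window * window].reshape(-1, window).mean(axis=1)
    return rolled.std()

y = np.random.default_rng(1).gamma(2.0, 5.0, size=24 * 30)            # placeholder kWh series
yhat = y * (1 + 0.1 * np.random.default_rng(2).normal(size=y.size))   # placeholder forecast
print(cvrmse(y, yhat), reliability(y, yhat), volatility(y, yhat))
```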

Abstract:

We update the constraints on two-Higgs-doublet models (2HDMs), focusing on the parameter space relevant to explaining the present muon g - 2 anomaly, Delta a(mu), in four different types of models: type I, type II, ``lepton specific'' (or X) and ``flipped'' (or Y). We show that the strong constraints placed by the electroweak precision data on the mass of the pseudoscalar Higgs, whose contribution may account for Delta a(mu), are evaded in regions where the charged scalar is degenerate with the heavy neutral one and the mixing angles alpha and beta satisfy the Standard Model limit beta - alpha approximate to pi/2. We combine theoretical constraints from vacuum stability and perturbativity with direct and indirect bounds arising from collider and B physics. Possible future constraints from the electron g - 2 are also considered. If the 126 GeV resonance discovered at the LHC is interpreted as the light CP-even Higgs boson of the 2HDM, we find that only models of type X can satisfy all the considered theoretical and experimental constraints.

Abstract:

The ability of coupled general circulation models (CGCMs) participating in the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4), for the 20th-century climate (20C3M scenario), to simulate daily precipitation over the Indian region is explored. The skill is evaluated on a 2.5 degrees x 2.5 degrees grid square against the Indian Meteorological Department's (IMD) gridded dataset, and every GCM is ranked for each of these grids based on its skill score. Skill scores (SSs) are estimated from the probability density functions (PDFs) obtained from the observed IMD datasets and the GCM simulations. The methodology takes into account (high) extreme precipitation events simulated by the GCMs. The results are analyzed and presented for three categories and six zones. The three categories are the monsoon season (JJASO - June to October), the non-monsoon season (JFMAMND - January to May, November, December) and the entire year (``Annual''). The six precipitation zones are peninsular, west central, northwest, northeast, and central northeast India, and the hilly region. Sensitivity analysis was performed for three spatial scales (the 2.5 degrees grid square, the zones, and all of India) in the three categories. The models were ranked based on the SS. The JFMAMND category had a higher SS than the JJASO category. The northwest zone had higher SSs, whereas the peninsular and hilly regions had lower SSs. No single GCM can be identified as the best for all categories and zones. Some models consistently outperformed the model ensemble, and one model had particularly poor performance. Results show that most models underestimated the daily precipitation rates in the 0-1 mm/day range and overestimated them in the 1-15 mm/day range.
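
A common way to build such a PDF-based skill score is to bin both datasets and sum the bin-wise minimum of the two empirical PDFs; the sketch below assumes this form, with synthetic data and 1 mm/day bins as illustrative choices.

```python
# Sketch of a PDF-overlap skill score: bin observed and simulated daily
# rainfall into common bins and sum the bin-wise minimum of the two empirical
# PDFs (SS = 1 for identical distributions). Bin edges and the synthetic data
# below are assumptions for illustration.
import numpy as np

def skill_score(obs, sim, bins):
    p_obs, _ = np.histogram(obs, bins=bins)
    p_sim, _ = np.histogram(sim, bins=bins)
    p_obs = p_obs / p_obs.sum()
    p_sim = p_sim / p_sim.sum()
    return np.minimum(p_obs, p_sim).sum()

rng = np.random.default_rng(3)
obs = rng.gamma(0.6, 8.0, size=5000)      # observed daily rainfall (mm/day), placeholder
sim = rng.gamma(0.5, 9.0, size=5000)      # GCM-simulated rainfall (mm/day), placeholder
bins = np.arange(0.0, 101.0, 1.0)         # 1 mm/day bins up to 100 mm/day
print(skill_score(obs, sim, bins))
```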

Abstract:

We define two general classes of nonabelian sandpile models on directed trees (or arborescences), as models of nonequilibrium statistical physics. Unlike usual applications of the well-known abelian sandpile model, these models have the property that sand grains can enter only through specified reservoirs. In the Trickle-down sandpile model, sand grains are allowed to move one at a time. For this model, we show that the stationary distribution is of product form. In the Landslide sandpile model, all the grains at a vertex topple at once, and here we prove formulas for all eigenvalues, their multiplicities, and the rate of convergence to stationarity. The proofs use wreath products and the representation theory of monoids.
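
A heavily simplified toy of the landslide-style rule, in which a vertex at or above its threshold sends all of its grains to its parent, is sketched below; it only illustrates the data structure and is not the exact nonabelian dynamics analyzed in the paper.

```python
# Toy illustration only: grains enter at leaf "reservoirs" of a directed tree
# (arborescence), and a vertex at or above its threshold topples all of its
# grains to its parent at once, in the spirit of the landslide rule.
import random

parent = {1: 0, 2: 0, 3: 1, 4: 1}       # tree rooted at 0; vertices 3 and 4 are leaves
threshold = {1: 3, 2: 3, 3: 2, 4: 2}    # the root (0) acts as a sink and never topples
height = {v: 0 for v in [0, 1, 2, 3, 4]}

def stabilize():
    unstable = [v for v in parent if height[v] >= threshold[v]]
    while unstable:
        v = unstable.pop()
        height[parent[v]] += height[v]  # landslide: all grains move at once
        height[v] = 0
        p = parent[v]
        if p in parent and height[p] >= threshold[p]:
            unstable.append(p)

random.seed(0)
for _ in range(20):
    height[random.choice([3, 4])] += 1  # a grain enters through a leaf reservoir
    stabilize()
print(height)                           # grain heights after 20 additions
```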

Abstract:

When Markov chain Monte Carlo (MCMC) samplers are used in problems of system parameter identification, one faces computational difficulties in dealing with large amounts of measurement data and (or) low levels of measurement noise. Such exigencies are likely to occur in parameter identification for dynamical systems, where the amount of vibratory measurement data and the number of parameters to be identified can be large. In such cases, the posterior probability density function of the system parameters tends to have regions of narrow support, and a finite-length MCMC chain is unlikely to cover the pertinent regions. The present study proposes strategies, based on modification of the measurement equations and subsequent corrections, to alleviate this difficulty. These involve artificial enhancement of the measurement noise, assimilation of transformed packets of measurements, and a global iteration strategy to improve the choice of prior models. Illustrative examples cover laboratory studies on a time-variant dynamical system and a bending-torsion coupled, geometrically non-linear building frame under earthquake support motions. (C) 2015 Elsevier Ltd. All rights reserved.
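
A minimal sketch in the spirit of the noise-enhancement idea: a random-walk Metropolis sampler whose likelihood uses an artificially inflated noise level so that a short chain can explore the posterior. The one-parameter linear model and all numbers are placeholders, not the paper's formulation.

```python
# Minimal random-walk Metropolis sketch: the measurement-noise level used in
# the likelihood is artificially inflated (0.1 instead of the true 0.01) so
# the posterior support is wide enough for a short chain to traverse. The
# linear "system model" and all numbers are placeholders.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 500)
theta_true = 1.3
data = theta_true * t + rng.normal(0.0, 0.01, t.size)   # low-noise measurements

def log_post(theta, sigma):                  # flat prior assumed
    resid = data - theta * t
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis(sigma, n=5000, step=0.01, theta0=1.2):
    theta, lp = theta0, log_post(theta0, sigma)
    chain = [theta]
    for _ in range(n):
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop, sigma)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)

chain = metropolis(sigma=0.1)                # artificially enhanced noise level
print(chain[2500:].mean(), chain[2500:].std())
```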

Abstract:

Eleven general circulation models/global climate models (GCMs) - BCCR-BCCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCMI, UKMO-HADCM3 and UKMO-HADGEM1 - are evaluated for Indian climate conditions using a performance indicator, the skill score (SS). Two climate variables, temperature T (at three levels, i.e. 500, 700 and 850 mb) and precipitation rate (Pr), are considered, resulting in four SS-based evaluation criteria (T500, T700, T850, Pr). The multicriterion decision-making method known as the technique for order preference by similarity to an ideal solution (TOPSIS) is applied to rank the 11 GCMs. Efforts are made to rank the GCMs for the Upper Malaprabha catchment and two river basins, namely, Krishna and Mahanadi (covered by 17 and 15 grids of size 2.5 degrees x 2.5 degrees, respectively). Similar efforts are also made for India (covered by 73 grid points of size 2.5 degrees x 2.5 degrees), for which an ensemble of GFDL2.0, INGV-ECHAM4, UKMO-HADCM3, MIROC3, BCCR-BCCM2.0 and GFDL2.1 is found to be suitable. It is concluded that the proposed methodology can be applied to similar situations with ease.
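
A standard TOPSIS ranking over a GCM-by-criterion skill-score matrix can be sketched as below; the scores and equal weights are placeholders for illustration.

```python
# Standard TOPSIS sketch for ranking GCMs from a skill-score matrix
# (rows = GCMs, columns = the four criteria T500, T700, T850, Pr).
# The scores and equal weights below are placeholders.
import numpy as np

def topsis(scores, weights):
    norm = scores / np.sqrt((scores**2).sum(axis=0))   # vector normalization
    v = norm * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)         # all criteria treated as benefits
    d_plus = np.sqrt(((v - ideal) ** 2).sum(axis=1))   # distance to ideal solution
    d_minus = np.sqrt(((v - anti) ** 2).sum(axis=1))   # distance to anti-ideal solution
    closeness = d_minus / (d_plus + d_minus)
    return np.argsort(-closeness)                      # GCM indices, best first

scores = np.random.default_rng(5).uniform(0.3, 0.9, size=(11, 4))  # 11 GCMs x 4 criteria
weights = np.full(4, 0.25)
print(topsis(scores, weights))
```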

Abstract:

Retransmission protocols such as HDLC and TCP are designed to ensure reliable communication over noisy channels (i.e., channels that can corrupt messages). Thakkar et al. [15] have recently presented an algorithmic verification technique for deterministic streaming string transducer (DSST) models of such protocols, in which the verification problem is posed as equivalence checking between the specification and protocol DSSTs. In this paper, we argue that more general models need to be obtained using non-deterministic streaming string transducers (NSSTs). However, equivalence checking is undecidable for NSSTs in general. We present two classes of protocol models that belong to a sub-class of NSSTs for which equivalence checking is decidable. (C) 2015 Elsevier B.V. All rights reserved.
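
For readers unfamiliar with the machine model, the toy below shows a one-state deterministic streaming string transducer with a single copyless register (the classic string-reversal example); it only illustrates the DSST formalism, not the HDLC/TCP protocol transducers of the cited work.

```python
# Toy deterministic streaming string transducer (DSST): one state, one string
# register x updated copylessly as x := symbol + x, so the output at the end
# of the stream is the reversed input. Illustration of the model class only.
def reverse_dsst(stream):
    x = ""                      # string register
    for symbol in stream:
        x = symbol + x          # copyless update: x appears exactly once
    return x                    # output function emits the register

print(reverse_dsst("msg1;msg2;msg3"))   # -> "3gsm;2gsm;1gsm"
```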

Abstract:

Early afterdepolarizations (EADs), which are abnormal oscillations of the membrane potential during the plateau phase of an action potential, are implicated in the development of cardiac arrhythmias like Torsade de Pointes. We carry out extensive numerical simulations of the TP06 and ORd mathematical models for human ventricular cells with EADs. We investigate the different regimes in both models, namely, the parameter regimes where they exhibit (1) a normal action potential (AP) with no EADs, (2) an AP with EADs, and (3) an AP with EADs that does not return to the resting potential. We also study the dependence of EADs on the rate at which we pace a cell, with the specific goal of elucidating EADs that are induced by slow- or fast-rate pacing. In our simulations in two- and three-dimensional domains, in the presence of EADs, we find the following wave types: (A) waves driven by both the fast sodium current and the L-type calcium current (Na-Ca-mediated waves); (B) waves driven only by the L-type calcium current (Ca-mediated waves); and (C) phase waves, which are pseudo-travelling waves. Furthermore, we compare the wave patterns of the various wave types (Na-Ca-mediated, Ca-mediated, and phase waves) in both models. We find that the two models produce qualitatively similar results, in that the Na-Ca-mediated wave patterns are more chaotic than the Ca-mediated and phase-wave patterns; however, there are quantitative differences between the models for each wave type. The Na-Ca-mediated waves in the ORd model show short-lived spirals, whereas those in the TP06 model do not. The TP06 model supports more Ca-mediated spirals than the ORd model, and it also exhibits more phase-wave patterns than the ORd model.
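
A toy sketch of how EADs can be flagged in a voltage trace, by counting extra local maxima during the plateau/repolarization window, is given below; the trace and thresholds are synthetic placeholders, not TP06/ORd output.

```python
# Toy sketch: flag early afterdepolarizations (EADs) as extra local maxima of
# the membrane potential during the plateau/repolarization phase of a single
# action potential. The trace below is synthetic, not TP06/ORd output.
import numpy as np

t = np.linspace(0.0, 500.0, 5001)                        # ms
v = -85.0 + 125.0 * np.exp(-t / 250.0)                   # slowly repolarizing plateau
v += 10.0 * np.exp(-0.5 * ((t - 320.0) / 12.0) ** 2)     # an EAD-like bump
v[:20] = np.linspace(-85.0, 40.0, 20)                    # fast upstroke

def count_eads(v, t, window=(50.0, 450.0)):
    mask = (t > window[0]) & (t < window[1])
    seg = v[mask]
    # strict local maxima above -50 mV within the plateau window
    peaks = (seg[1:-1] > seg[:-2]) & (seg[1:-1] > seg[2:]) & (seg[1:-1] > -50.0)
    return int(peaks.sum())

print("EAD-like peaks:", count_eads(v, t))
```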

Abstract:

This paper presents a comprehensive and robust strategy for the estimation of battery model parameters from noise corrupted data. The deficiencies of the existing methods for parameter estimation are studied and the proposed parameter estimation strategy improves on earlier methods by working optimally for low as well as high discharge currents, providing accurate estimates even under high levels of noise, and with a wide range of initial values. Testing on different data sets confirms the performance of the proposed parameter estimation strategy.
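
As a hedged illustration of battery parameter estimation (not the paper's specific model or estimator), the sketch below fits a first-order Thevenin equivalent-circuit response to a noisy constant-current discharge pulse by least squares.

```python
# Sketch: fit a first-order Thevenin equivalent-circuit model to a noisy
# constant-current discharge pulse,
#   V(t) = OCV - I*R0 - I*R1*(1 - exp(-t / (R1*C1))),
# using scipy's curve_fit. A generic stand-in, not the paper's estimator.
import numpy as np
from scipy.optimize import curve_fit

OCV = 3.7                                           # measured rest voltage (V)
I = 2.0                                             # discharge current (A)

def v_model(t, r0, r1, c1):
    return OCV - I * r0 - I * r1 * (1.0 - np.exp(-t / (r1 * c1)))

rng = np.random.default_rng(6)
t = np.linspace(0.0, 600.0, 601)                    # seconds
true = (0.05, 0.03, 2000.0)                         # R0 (ohm), R1 (ohm), C1 (F)
v_meas = v_model(t, *true) + rng.normal(0.0, 0.01, t.size)   # noisy measurement

p0 = (0.01, 0.01, 1000.0)                           # rough initial guess
popt, _ = curve_fit(v_model, t, v_meas, p0=p0, bounds=(0, np.inf))
print(popt)                                         # estimated R0, R1, C1
```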

Abstract:

Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in(2) using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter-noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization in TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB gain in SNR over uncoded data compared with noise-predictive maximum-likelihood detection, for the same choice of channel model parameters, at a channel bit density of 1.3 Tb/in(2) with a media grain center-to-center distance of 10 nm. The DDNP algorithm is also observed to give an approximately 10% gain in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
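
A toy readback model in the spirit of the setup above, convolving recorded bits with an assumed separable 2-D partial-response target and adding electronics noise, is sketched below; grain jitter and the DDNP filter bank are not modeled.

```python
# Toy 2-D ISI readback sketch: convolve recorded bits with a separable 2-D
# partial-response target and add electronics noise. Grain/jitter noise and
# the DDNP filter bank are not modeled; the target is an assumption.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(7)
bits = rng.integers(0, 2, size=(64, 64)) * 2 - 1            # +/-1 recorded bits
h1d = np.array([1.0, 2.0, 1.0])
target = np.outer(h1d, h1d) / 4.0                           # separable 2-D PR target
readback = convolve2d(bits, target, mode="same")            # 2-D ISI
readback += rng.normal(0.0, 0.2, readback.shape)            # electronics noise
print(readback.shape, readback[:2, :2])
```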

Abstract:

We study the dynamical behaviors of two types of spiral- and scroll-wave turbulence states in, respectively, two-dimensional (2D) and three-dimensional (3D) mathematical models of human ventricular myocyte cells that are attached to randomly distributed interstitial fibroblasts; these turbulence states are promoted by (a) a steep slope of the action-potential-duration-restitution (APDR) plot or (b) early afterdepolarizations (EADs). Our single-cell study shows that (1) the myocyte-fibroblast (MF) coupling G(j) and (2) the number N-f of fibroblasts in an MF unit lower the steepness of the APDR slope and eliminate the EAD behaviors of myocytes; we explore the pacing dependence of such EAD suppression. In our 2D simulations, we observe that a spiral-turbulence (ST) state evolves into a state with a single rotating spiral (RS) if either (a) G(j) is large or (b) the maximum possible number of fibroblasts per myocyte, N-f(max), is large. We also observe that the minimum value of G(j) for the transition from the ST to the RS state decreases as N-f(max) increases. We find that, for the steep-APDR-induced ST state, once the MF coupling suppresses ST, the rotation period of a spiral in the RS state increases as (1) G(j) increases, at fixed N-f(max), and (2) N-f(max) increases, at fixed G(j). We obtain the boundary between the ST and RS stability regions in the N-f(max)-G(j) plane. In particular, for low values of N-f(max), the value of G(j) at the ST-RS boundary depends on the realization of the randomly distributed fibroblasts; this dependence weakens as N-f(max) increases. Our 3D studies show a similar transition from scroll-wave turbulence to a single rotating scroll-wave state because of the MF coupling. We examine the experimental implications of our study and propose that suppression of (a) the steep slope of the APDR or (b) EADs can eliminate spiral- and scroll-wave turbulence in heterogeneous cardiac tissue with randomly distributed fibroblasts.
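
A minimal sketch of how the MF coupling enters the equations, with a passive fibroblast attached to a myocyte through a gap-junctional conductance G(j) and N-f fibroblasts per myocyte, is given below; the myocyte ionic current and all parameter values are crude placeholders, not the TP06 model.

```python
# Minimal sketch of how the myocyte-fibroblast (MF) coupling enters the model
# equations: a passive fibroblast attached to a myocyte through a gap-junctional
# conductance Gj, with Nf identical fibroblasts per myocyte. The myocyte ionic
# current and every parameter value here are crude placeholders, not TP06.
Cm, Cf = 185.0, 6.3          # membrane capacitances (pF), placeholders
Gf, Ef = 4.0, -50.0          # passive-fibroblast conductance (nS) and reversal (mV)
Gj, Nf = 8.0, 2              # MF gap-junctional coupling (nS), fibroblasts per myocyte
dt = 0.02                    # time step (ms)

def i_ion_myocyte(v):        # placeholder leak-like ionic current (pA), NOT TP06
    return 12.0 * (v + 85.0)

vm, vf = 20.0, -50.0         # start with the myocyte depolarized
for _ in range(int(400.0 / dt)):
    i_gap = Gj * (vm - vf)                           # pA flowing into each fibroblast
    vm += dt * (-(i_ion_myocyte(vm) + Nf * i_gap) / Cm)
    vf += dt * (-(Gf * (vf - Ef) - i_gap) / Cf)
print(round(vm, 2), round(vf, 2))                    # coupled steady-state potentials
```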

Abstract:

The calculation of the first-passage time (and, moreover, of its probability density in time) has so far been generally viewed as an ill-posed problem in the domain of quantum mechanics. The reason, in summary, is that quantum probabilities in general do not satisfy the Kolmogorov sum rule: the probabilities for Feynman paths entering and not entering a given region of space-time do not in general add up to unity, largely owing to the interference of alternative paths. In the present work, it is pointed out that a special case exists (within the quantum framework) in which, by design, there is one and only one available path (i.e., door-way) to mediate the (first) passage - no alternative path to interfere with. Further, it is identified that a popular family of quantum systems - namely, 1d tight-binding Hamiltonian systems - falls under this special category. For these model quantum systems, the first-passage time distributions are obtained analytically by suitably applying a method originally devised for classical (stochastic) mechanics (by Schroedinger in 1915). This result is interesting especially given that tight-binding models are extensively used to describe everyday phenomena in condensed matter physics.
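
The sketch below only sets up the 1d tight-binding Hamiltonian and its unitary evolution, monitoring the probability accumulated beyond a single door-way site; it does not reproduce the Schroedinger-type first-passage construction of the paper.

```python
# Sketch of the 1-D tight-binding setup only: build H with nearest-neighbour
# hopping J, evolve an initially localized state unitarily, and monitor the
# probability that has accumulated past a chosen "door-way" site. This
# illustrates the model, not the first-passage distribution derived in the paper.
import numpy as np
from scipy.linalg import expm

N, J, dt = 60, 1.0, 0.1
H = -J * (np.eye(N, k=1) + np.eye(N, k=-1))       # tight-binding Hamiltonian
U = expm(-1j * H * dt)                            # one-step propagator (hbar = 1)

psi = np.zeros(N, dtype=complex)
psi[5] = 1.0                                      # particle starts at site 5
door = 30                                         # single door-way site
for step in range(1, 401):
    psi = U @ psi
    if step % 100 == 0:
        p_beyond = np.sum(np.abs(psi[door:]) ** 2)
        print(f"t = {step * dt:5.1f}   P(site >= {door}) = {p_beyond:.3f}")
```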