29 results for Models and Methods
Abstract:
Regular electrical activation waves in cardiac tissue lead to the rhythmic contraction and expansion of the heart that ensures blood supply to the whole body. Irregularities in the propagation of these activation waves can result in cardiac arrhythmias, like ventricular tachycardia (VT) and ventricular fibrillation (VF), which are major causes of death in the industrialised world. Indeed, there is growing consensus that spiral or scroll waves of electrical activation in cardiac tissue are associated with VT, whereas, when these waves break to yield spiral- or scroll-wave turbulence, VT develops into life-threatening VF: in the absence of medical intervention, this makes the heart incapable of pumping blood, and a patient dies roughly two-and-a-half minutes after the initiation of VF. Thus, studies of spiral- and scroll-wave dynamics in cardiac tissue pose important challenges for in vivo and in vitro experimental studies and for in silico numerical studies of mathematical models for cardiac tissue. A major goal here is to develop low-amplitude defibrillation schemes for the elimination of VT and VF, especially in the presence of inhomogeneities that occur commonly in cardiac tissue. We present a detailed and systematic study of spiral- and scroll-wave turbulence and spatiotemporal chaos in four mathematical models for cardiac tissue, namely, the Panfilov, Luo-Rudy phase 1 (LRI), and reduced Priebe-Beuckelmann (RPB) models, and the model of ten Tusscher, Noble, Noble, and Panfilov (TNNP). In particular, we use extensive numerical simulations to elucidate the interaction of spiral and scroll waves in these models with conduction and ionic inhomogeneities; we also examine the suppression of spiral- and scroll-wave turbulence by low-amplitude control pulses. Our central qualitative result is that, in all these models, the dynamics of such spiral waves depends very sensitively on such inhomogeneities. We also study two types of control schemes that have been suggested for the control of spiral turbulence, via low-amplitude current pulses, in such mathematical models for cardiac tissue; our investigations here are designed to examine the efficacy of such control schemes in the presence of inhomogeneities. We find that a local pulsing scheme does not suppress spiral turbulence in the presence of inhomogeneities, but a scheme that uses control pulses on a spatially extended mesh is more successful in eliminating spiral turbulence. We discuss the theoretical and experimental implications of our study that have a direct bearing on defibrillation, i.e., the control of life-threatening cardiac arrhythmias such as ventricular fibrillation.
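As a rough illustration of the kind of in silico experiment described above, the sketch below integrates a generic two-variable excitable medium (a Barkley-type caricature, not the Panfilov, LRI, RPB, or TNNP models actually studied in the paper) on a 2D grid; the parameter values and the cross-field initial condition are illustrative assumptions.

```python
# Minimal sketch: spiral-wave dynamics in a generic two-variable excitable medium
# (a Barkley-type caricature; NOT the Panfilov, LRI, RPB, or TNNP models of the paper).
# Parameter values and the cross-field initial condition are illustrative assumptions.
import numpy as np

N, L = 200, 100.0          # grid points per side, domain size (arbitrary units)
dx = L / N
dt = 0.01
a, b, eps, D = 0.75, 0.06, 0.02, 1.0   # assumed excitability parameters

u = np.zeros((N, N))       # fast (excitation) variable
v = np.zeros((N, N))       # slow (recovery) variable
u[:, : N // 2] = 1.0       # cross-field stimulus: excite the left half ...
v[: N // 2, :] = 0.5       # ... with a refractory upper half, to seed a spiral

def laplacian(f):
    """5-point Laplacian with no-flux (Neumann-like) boundaries."""
    fp = np.pad(f, 1, mode="edge")
    return (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:] - 4 * f) / dx**2

for _ in range(20000):     # explicit Euler time stepping
    uth = (v + b) / a                       # local excitation threshold
    du = D * laplacian(u) + u * (1 - u) * (u - uth) / eps
    dv = u - v
    u += dt * du
    v += dt * dv

# 'u' now holds a snapshot of the excitation field; plotting it (e.g. with
# matplotlib.pyplot.imshow) shows whether a single spiral or broken-up
# spiral turbulence has developed for the chosen parameters.
```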
Abstract:
The paper proposes two methodologies for damage identification from the measured natural frequencies of a contiguously damaged reinforced concrete beam, idealised with a distributed damage model. The first method identifies damage from Iso-Eigen-Value-Change contours, plotted between pairs of different frequencies. The performance of the method is checked for a wide variation of damage positions and extents. The method is also extended to a discrete structure in the form of a five-storey shear building, and the simplicity of the method is demonstrated. The second method uses a smeared damage model, where the damage is assumed constant over different segments of the beam and the lengths and centres of these segments are the known inputs. A first-order perturbation method is used to derive the relevant expressions. Both methods are based on distributed damage models and have been checked against an experimental programme on simply supported reinforced concrete beams subjected to different stages of symmetric and unsymmetric damage. The results of the experiments are encouraging and show that both methods can be adopted together in a damage identification scenario.
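For context on the second method, the standard first-order eigenvalue perturbation relation below (a textbook result stated here for orientation; the paper's own derivation may differ in detail) links the change in the i-th eigenvalue to segment-wise stiffness reductions. For mass-normalised modes \(\phi_i\) and a stiffness perturbation \(\Delta K = -\sum_j \alpha_j K_j\), with \(\alpha_j\) the damage factor and \(K_j\) the undamaged stiffness contribution of segment \(j\):

\[
\Delta\lambda_i \;\approx\; \phi_i^{\mathsf{T}}\,\Delta K\,\phi_i
\;=\; -\sum_j \alpha_j\,\phi_i^{\mathsf{T}} K_j \phi_i , \qquad i = 1,\dots,m .
\]

Measured frequency (eigenvalue) changes in several modes therefore give a linear system in the unknown segment damage factors \(\alpha_j\).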
Abstract:
Objective: The main objective of this work was to study the antipyretic and antibacterial activity of C. erectus (Buch.-Ham.) Verdcourt leaf extract in an experimental albino rat model. Materials and Methods: The methanol extract of C. erectus leaf (MECEL) was evaluated for its antipyretic potential on normal body temperature and Brewer's yeast-induced pyrexia in an albino rat model. The antibacterial activity of MECEL was investigated against five Gram-negative and three Gram-positive bacterial strains, and its antimycotic activity against four fungi, using agar disk diffusion and microdilution methods. Result: Yeast suspension (10 mL/kg b.w.) elevated the rectal temperature 19 h after subcutaneous injection. Oral administration of MECEL at 100 and 200 mg/kg b.w. produced a significant reduction of normal rectal body temperature and of the yeast-provoked elevated temperature (38.8 ± 0.2 and 37.6 ± 0.4, respectively, at 2-3 h) in a dose-dependent manner, and the effect was comparable to that of the standard antipyretic drug, paracetamol (150 mg/kg b.w.). MECEL at 2 mg/disk showed a broad spectrum of growth-inhibition activity against both groups of bacteria. However, MECEL was not effective against the yeast strains tested in this study. Conclusion: This study revealed that the methanol extract of C. erectus exhibited significant antipyretic activity in the tested models, as well as antibacterial activity, and may provide the scientific rationale for its popular use as an antipyretic agent in Khamptis folk medicine.
Abstract:
In this paper, we study the representation of KL-divergence minimization, in cases where integer sufficient statistics exist, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. In particular, we also study the case of the Kullback-Csiszar iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner-basis method for computing an implicit representation of minimum-KL-divergence models.
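To illustrate the flavour of the implicitization step (with a toy model of my own choosing, not an example taken from the paper), the sketch below uses SymPy's Gröbner-basis routine to eliminate the parameters of the 2x2 independence model and recover its implicit equation.

```python
# Hedged sketch: implicitization of a toy parametric model via Groebner bases,
# in the spirit of the paper's method (the 2x2 independence model is my own
# illustrative choice, not an example taken from the paper).
from sympy import symbols, groebner

a1, a2, b1, b2 = symbols('a1 a2 b1 b2')          # model parameters
p11, p12, p21, p22 = symbols('p11 p12 p21 p22')  # cell probabilities

# Parametric description: p_ij = a_i * b_j (independence of two binary variables)
ideal = [p11 - a1*b1, p12 - a1*b2, p21 - a2*b1, p22 - a2*b2]

# Lexicographic order with the parameters listed first performs elimination:
# basis elements free of a_i, b_j generate the implicit description of the toy model.
G = groebner(ideal, a1, a2, b1, b2, p11, p12, p21, p22, order='lex')
implicit = [g for g in G.exprs if not g.free_symbols & {a1, a2, b1, b2}]
print(implicit)   # expect the familiar relation p11*p22 - p12*p21 = 0
```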
Abstract:
This is a review of the measurement of 1/f noise in certain classes of materials that have a wide range of potential applications, including metal films, semiconductors, metallic oxides, and inhomogeneous systems such as composites. The review contains a basic introduction to this field and its theories and models, and follows this with a discussion of measurement methods. Specific examples of the application of noise spectroscopy in the field of materials science are also discussed. (C) 2002 Elsevier Science Ltd. All rights reserved.
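As an illustration of the kind of spectral measurement such noise spectroscopy relies on (the synthetic signal and all parameters below are assumptions, not data or methods from the review), the sketch estimates the power spectral density of an approximately 1/f signal with Welch's method.

```python
# Hedged sketch: estimating a noise power spectral density with Welch's method,
# a common step in 1/f-noise spectroscopy. The synthetic signal and parameters
# are illustrative assumptions, not data from the review.
import numpy as np
from scipy.signal import welch

fs, n = 1000.0, 2**18                 # sampling rate (Hz) and record length
rng = np.random.default_rng(0)

# Synthesize approximately 1/f ("flicker") noise by shaping white noise in
# the frequency domain with a 1/sqrt(f) amplitude filter.
white = rng.standard_normal(n)
spec = np.fft.rfft(white)
f = np.fft.rfftfreq(n, d=1.0 / fs)
spec[1:] /= np.sqrt(f[1:])            # leave the DC bin untouched
flicker = np.fft.irfft(spec, n)

# Welch periodogram: averaging over segments reduces the estimator variance.
freqs, psd = welch(flicker, fs=fs, nperseg=4096)
mask = freqs > 0
slope = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)[0]
print(f"fitted spectral slope ~ {slope:.2f} (expect about -1 for 1/f noise)")
```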
Abstract:
Background: Levamisole, an imidazo[2,1-b]thiazole derivative, has been reported to be a potential antitumor agent. In the present study, we have investigated the mechanism of action of one of the recently identified analogues, 4a (2-benzyl-6-(4'-fluorophenyl)-5-thiocyanato-imidazo[2,1-b][1,3,4]thiadiazole). Materials and Methods: ROS production and the expression of various apoptotic proteins were measured following 4a treatment in leukemia cell lines. Tumor animal models were used to evaluate the effect of 4a, in comparison with Levamisole, on the progression of breast adenocarcinoma and on survival. Immunohistochemistry and western blotting studies were performed to understand the mechanism of 4a action both ex vivo and in vivo. Results: We determined the IC50 value of 4a in many leukemic and breast cancer cell lines and found CEM cells to be the most sensitive (IC50 of 5 μM). Results showed that 4a treatment leads to the accumulation of ROS. Western blot analysis showed upregulation of the pro-apoptotic proteins t-BID and BAX upon treatment with 4a. Furthermore, dose-dependent activation of p53, along with FAS, FAS-L, and cleavage of CASPASE-8, suggests that 4a induces the death-receptor-mediated apoptotic pathway in CEM cells. More importantly, we observed a reduction in tumor growth and a significant increase in survival upon oral administration of 4a (20 mg/kg, six doses) in mice. In comparison, 4a was found to be more potent than its parental analogue Levamisole in both ex vivo and in vivo studies. Further, immunohistochemistry and western blotting studies indicate that 4a treatment led to abrogation of tumor cell proliferation and activation of apoptosis by the extrinsic pathway in animal models as well. Conclusion: Thus, our results suggest that 4a could be used as a potent chemotherapeutic agent.
Abstract:
N-gram language models and lexicon-based word recognition are popular methods in the literature for improving the recognition accuracy of online and offline handwritten data. However, very few works deal with the application of these techniques to online Tamil handwritten data. In this paper, we explore methods of developing symbol-level language models and a lexicon from a large Tamil text corpus and their application to improving symbol and word recognition accuracies. On a test database of around 2000 words, we find that bigram language models improve symbol (3%) and word (8%) recognition accuracies; lexicon methods offer much greater improvements (30%) in word recognition, but depend strongly on choosing the right lexicon. For comparison with the lexicon- and language-model-based methods, we have also explored re-evaluation techniques that use expert classifiers to improve symbol and word recognition accuracies.
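A minimal sketch of how bigram language-model scores can be combined with per-symbol classifier scores via Viterbi decoding (the symbols, probabilities, and weighting below are hypothetical, not the paper's Tamil symbol set, corpus statistics, or tuning):

```python
# Hedged sketch: re-ranking symbol hypotheses with a bigram language model.
# Alphabet, probabilities, and the log-linear weight are illustrative assumptions.
import math

def rescore(candidates, bigram, lm_weight=0.5):
    """candidates: list over positions of {symbol: classifier_prob};
    bigram: dict mapping (prev_symbol, symbol) -> P(symbol | prev_symbol).
    Returns the best symbol sequence under a log-linear score (Viterbi search)."""
    best = {s: ((1 - lm_weight) * math.log(p), [s])
            for s, p in candidates[0].items()}
    for col in candidates[1:]:
        new_best = {}
        for s, p in col.items():
            for prev, (score, path) in best.items():
                lm = bigram.get((prev, s), 1e-6)          # floor unseen bigrams
                cand = score + (1 - lm_weight) * math.log(p) + lm_weight * math.log(lm)
                if s not in new_best or cand > new_best[s][0]:
                    new_best[s] = (cand, path + [s])
        best = new_best
    return max(best.values(), key=lambda t: t[0])[1]

# Toy usage with a hypothetical two-position lattice:
cands = [{'ka': 0.6, 'ta': 0.4}, {'na': 0.45, 'va': 0.55}]
bigrams = {('ka', 'na'): 0.8, ('ka', 'va'): 0.2, ('ta', 'na'): 0.3, ('ta', 'va'): 0.7}
print(rescore(cands, bigrams))   # ['ka', 'na']: the bigram evidence flips the second symbol
```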
Abstract:
The aim of this work is to enable the seamless transformation of product concepts into CAD models. This necessitates the availability of 3D product sketches. The present work concerns the intuitive generation of 3D strokes and intrinsic support for space sharing and articulation among the components of the product being sketched. Direct creation of 3D strokes in air lacks precision, stability, and control. The inadequacy of proprioceptive feedback for the task is complemented in this work with stereo vision and haptics. Three novel methods for the haptic rendering of strokes, based on a pencil-paper interaction analogy, have been investigated. The pen-tilt-based rendering is simpler and is found to be more effective. For spatial conformity, two modes of constraint on the stylus movements, corresponding to motion on a control surface and motion in a control volume, have been studied using novel reactive and field-based haptic rendering schemes. The field-based haptics, which in effect creates an attractive force field near a surface, though non-realistic, provides highly effective support for the control-surface constraint. The efficacy of the reactive haptic rendering scheme for constrained environments has been demonstrated using scribble strokes. This can enable distributed collaborative 3D concept development. The notion of motion constraints, defined through sketch strokes, enables the intuitive generation of articulated 3D sketches and the direct exploration of motion annotations found in most product concepts. The work thus establishes that modeling of constraints is a central issue in 3D sketching.
Abstract:
The formulation of higher-order structural models and their discretization using the finite element method is difficult owing to their complexity, especially in the presence of non-linearities. In this work, a new algorithm for automating the formulation and assembly of hyperelastic higher-order structural finite elements is developed. A hierarchic series of kinematic models is proposed for modeling structures with special geometries, and the algorithm is formulated to automate the study of this class of higher-order structural models. The algorithm developed in this work sidesteps the need for an explicit derivation of the governing equations for the individual kinematic modes. Using a novel procedure involving a nodal degree-of-freedom based automatic assembly algorithm, automatic differentiation, and higher-dimensional quadrature, the relevant finite element matrices are computed directly from the variational statement of elasticity and the higher-order kinematic model. Another significant feature of the proposed algorithm is that natural boundary conditions are handled implicitly for arbitrary higher-order kinematic models. The validity of the algorithm is illustrated with examples involving linear elasticity and hyperelasticity. (C) 2013 Elsevier Inc. All rights reserved.
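The following toy sketch illustrates the automatic-differentiation idea on a deliberately trivial element (a linear two-node bar with a quadratic strain energy, not the hyperelastic higher-order elements of the paper): the internal force vector and tangent stiffness matrix are obtained directly as the gradient and Hessian of a stored-energy function, with no hand-derived element equations.

```python
# Hedged sketch: element matrices from an energy functional by automatic
# differentiation. The 1D two-node bar and its material constants are my own
# toy example, not the formulation developed in the paper.
import jax
import jax.numpy as jnp

E, A, L = 210e9, 1e-4, 1.0       # assumed Young's modulus, area, element length

def strain_energy(u):
    """Internal energy of a linear two-node bar: 1/2 * EA/L * (u2 - u1)^2."""
    return 0.5 * E * A / L * (u[1] - u[0]) ** 2

internal_force = jax.grad(strain_energy)     # force vector = gradient of energy
stiffness = jax.hessian(strain_energy)       # tangent stiffness = Hessian of energy

u = jnp.zeros(2)                  # evaluate at the undeformed configuration
print(internal_force(u))          # zero internal force at u = 0
print(stiffness(u))               # expect EA/L * [[1, -1], [-1, 1]]
```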
Abstract:
Ice volume estimates are crucial for assessing the water reserves stored in glaciers. Due to its large glacier coverage, such estimates are of particular interest for the Himalayan-Karakoram (HK) region. In this study, different existing methodologies are used to estimate the ice reserves: three area-volume relations, one slope-dependent volume estimation method, and two ice-thickness distribution models are applied to a recent, detailed, and complete glacier inventory of the HK region, spanning the period 2000-2010 and revealing an ice coverage of 40 775 km². An uncertainty and sensitivity assessment is performed to investigate the influence of the observed glacier area and important model parameters on the resulting total ice volume. Results of the two ice-thickness distribution models are validated against local ice-thickness measurements at six glaciers. The resulting ice volumes for the entire HK region range from 2955 to 4737 km³, depending on the approach. This range is lower than most previous estimates. Results from the ice-thickness distribution models and the slope-dependent thickness estimations agree well with the measured local ice thicknesses. However, total volume estimates from area-related relations are larger than those from the other approaches. The study provides evidence of the significant effect of the selected method on the results and underlines the importance of a careful and critical evaluation.
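For context, one family of methods compared in the study is area-volume (power-law) scaling; the sketch below applies such a relation with commonly cited literature constants, which are used purely for illustration and are not necessarily the constants adopted in the paper.

```python
# Hedged sketch: a glacier area-volume scaling relation V = c * A**gamma of the
# kind compared in the study. The coefficient and exponent are commonly cited
# literature values used for illustration only; the paper's calibrated
# relations may differ. Glacier areas below are hypothetical.
def glacier_volume_km3(area_km2, c=0.034, gamma=1.375):
    """Estimate ice volume (km^3) from glacier area (km^2) by power-law scaling."""
    return c * area_km2 ** gamma

# Applying the relation glacier-by-glacier and summing is NOT the same as
# applying it to the total inventory area, because the relation is non-linear:
inventory = [0.5, 2.0, 15.0, 120.0]          # hypothetical glacier areas (km^2)
total_by_glacier = sum(glacier_volume_km3(a) for a in inventory)
total_lumped = glacier_volume_km3(sum(inventory))
print(f"per-glacier sum: {total_by_glacier:.1f} km^3, lumped: {total_lumped:.1f} km^3")
```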
Abstract:
Aerosol loading over the South Asian region has the potential to affect the monsoon rainfall, Himalayan glaciers, and regional air quality, with implications for the billions of people in this region. While field campaigns and network observations provide primary data, they tend to be location- or season-specific. Numerical models are useful for regionalizing such location-specific data. Studies have shown that numerical models underestimate the aerosol scenario over the Indian region, mainly due to shortcomings related to meteorology and the emission inventories used. In this context, we have evaluated the performance of two such chemistry-transport models, WRF-Chem and SPRINTARS, over an India-centric domain. The models differ in many aspects, including physical domain, horizontal resolution, and meteorological forcing. Despite these differences, both models simulated similar spatial patterns of black carbon (BC) mass concentration (with a spatial correlation of 0.9 with each other) and reasonable estimates of its concentration, though both underestimated it vis-a-vis the observations. While the emissions are lower (higher) in SPRINTARS (WRF-Chem), the overestimation of wind parameters in WRF-Chem caused the concentrations to be similar in the two models. Additionally, we quantified the underestimation of anthropogenic BC emissions in the inventories used in these two models and in three other widely used emission inventories. Our analysis indicates that all these emission inventories underestimate the emissions of BC over India by a factor that ranges from 1.5 to 2.9. We have also studied the model simulations of aerosol optical depth (AOD) over the Indian region. The models differ significantly in their simulations of AOD, with WRF-Chem showing better agreement with satellite observations of AOD as far as the spatial pattern is concerned. It is important to note that, in addition to BC, dust can also contribute significantly to AOD. The models differ in their simulations of the spatial pattern of mineral dust over the Indian region. We find that both the meteorological forcing and the emission formulation contribute to these differences. Since AOD is a column-integrated parameter, the description of vertical profiles in the two models could also be a contributing factor, especially since elevated aerosol layers are often observed over the Indian region. Additionally, differences in the prescription of the optical properties of BC between the models appear to affect the AOD simulations. We also compared the simulation of sea-salt concentration in the two models and found that WRF-Chem underestimated its concentration vis-a-vis SPRINTARS. The differences in near-surface oceanic wind speeds appear to be the main source of this difference. In spite of these differences, we note that there are similarities in the models' simulations of the spatial patterns of various aerosol species (with each other and with observations), and hence the models could be valuable tools for aerosol-related studies over the Indian region. Better estimation of emission inventories could improve aerosol-related simulations. (C) 2015 Elsevier Ltd. All rights reserved.
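The sketch below shows the kind of spatial-pattern correlation quoted above (the 0.9 agreement between the models' BC fields); the cosine-latitude area weighting and the toy gridded fields are assumptions for illustration, not necessarily the exact metric or data used in the study.

```python
# Hedged sketch: area-weighted spatial correlation between two gridded fields,
# illustrating the kind of pattern-agreement metric quoted in the abstract.
# The grid, weighting convention, and random fields are illustrative assumptions.
import numpy as np

def spatial_correlation(field_a, field_b, lats_deg):
    """Area-weighted Pearson correlation between two (nlat, nlon) fields."""
    w = np.cos(np.deg2rad(lats_deg))[:, None] * np.ones_like(field_a)
    w = w / w.sum()
    ma, mb = (w * field_a).sum(), (w * field_b).sum()
    cov = (w * (field_a - ma) * (field_b - mb)).sum()
    va = (w * (field_a - ma) ** 2).sum()
    vb = (w * (field_b - mb) ** 2).sum()
    return cov / np.sqrt(va * vb)

# Toy usage with two partly correlated random fields on an India-centric grid:
rng = np.random.default_rng(1)
lats = np.linspace(5, 40, 71)
base = rng.random((71, 141))
model_a = base + 0.1 * rng.random((71, 141))
model_b = base + 0.1 * rng.random((71, 141))
print(round(float(spatial_correlation(model_a, model_b, lats)), 2))
```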
Abstract:
This paper provides an overview of the rich legacy of models and theories that have emerged in the last fifty years of the relatively young discipline of design research, and identifies some of the major areas for further research. It addresses the following questions: What are the major theories and models of design? How are a design theory and a design model defined, and what is their purpose? What criteria must they satisfy to be considered a design theory or model? How should a theory or model of design be evaluated or validated? What are the major directions for further research?
Abstract:
The solubilities of two lipid derivatives, geranyl butyrate and 10-undecen-1-ol, in supercritical carbon dioxide (SCCO2) were measured at different operating conditions of temperature (308.15 to 333.15 K) and pressure (10 to 18 MPa). The solubilities (in mole fraction) ranged from 2.1 × 10⁻³ to 23.2 × 10⁻³ for geranyl butyrate and from 2.2 × 10⁻³ to 25.0 × 10⁻³ for 10-undecen-1-ol. The solubility data showed retrograde behavior in the pressure and temperature range investigated. Various combinations of association and solution theory, along with different activity coefficient models, were developed. The experimental solubility data for 21 liquid solutes, together with those for geranyl butyrate and 10-undecen-1-ol, were correlated using both the newly derived models and the existing models. The average deviation of the correlations of the new models was below 15%.
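For illustration only, the sketch below fits a Chrastil-type density-based correlation to hypothetical solubility data; this is a common baseline for supercritical-CO2 solubility and is not one of the association/solution-theory models actually developed in the paper.

```python
# Hedged sketch: fitting a Chrastil-type density-based correlation,
# ln(S) = k*ln(rho) + a/T + b, to supercritical-CO2 solubility data. This is a
# common baseline model, NOT the models developed in the paper; the data
# points below are purely hypothetical placeholders.
import numpy as np

# (CO2 density kg/m^3, temperature K, solubility mole fraction) -- hypothetical
data = [
    (630.0, 308.15, 2.1e-3),
    (720.0, 308.15, 6.0e-3),
    (790.0, 308.15, 1.4e-2),
    (500.0, 333.15, 2.5e-3),
    (660.0, 333.15, 1.0e-2),
]
rho, T, S = (np.array(col) for col in zip(*data))

# Linear least squares in the transformed variables ln(rho), 1/T, 1.
X = np.column_stack([np.log(rho), 1.0 / T, np.ones_like(T)])
(k, a, b), *_ = np.linalg.lstsq(X, np.log(S), rcond=None)
S_fit = np.exp(X @ np.array([k, a, b]))
aard = np.mean(np.abs(S_fit - S) / S) * 100    # average absolute relative deviation
print(f"k={k:.2f}, a={a:.0f}, b={b:.2f}, average deviation={aard:.1f}%")
```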
Abstract:
With advances in technology, seismological theory, and data acquisition, a number of high-resolution seismic tomography models have been published. However, discrepancies between tomography models often arise from different theoretical treatments of seismic wave propagation, different inversion strategies, and different data sets. Using a fixed velocity-to-density scaling and a fixed radial viscosity profile, we compute global mantle flow models associated with the different tomography models and test their impact on explaining surface geophysical observations (geoid, dynamic topography, stress, and strain rates). We use the joint modeling of lithosphere and mantle dynamics approach of Ghosh and Holt (2012) to compute the full lithosphere stresses, except that we use HC for the mantle circulation model, which accounts for the primary flow-coupling features associated with density-driven mantle flow. Our results show that the seismic tomography models S40RTS and SAW642AN provide a better match with surface observables on a global scale than the other models tested. These two tomography models share important features, including upwellings located beneath the Pacific, eastern Africa, Iceland, and the mid-ocean ridges in the Atlantic and Indian Oceans, and downwelling flow mainly located beneath the Andes, the Middle East, and central and Southeast Asia.