41 results for Models and modeling


Relevance: 100.00%

Abstract:

The oxygen content of liquid Ni-Mn alloy equilibrated with spinel solid solution, (Ni,Mn)O·(1+x)Al2O3, and α-Al2O3 has been measured by suction sampling and inert gas fusion analysis. The corresponding oxygen potential of the three-phase system has been determined with a solid state cell incorporating (Y2O3)ThO2 as the solid electrolyte and Cr + Cr2O3 as the reference electrode. The equilibrium composition of the spinel phase formed at the interface of the alloy and alumina crucible was obtained using EPMA. The experimental data are compared with a thermodynamic model based on the free energies of formation of end-member spinels, free energy of solution of oxygen in liquid nickel, interaction parameters, and the activities in liquid Ni-Mn alloy and spinel solid solution. Mixing properties of the spinel solid solution are derived from a cation distribution model. The computational results agree with the experimental data on oxygen concentration, potential, and composition of the spinel phase.
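
As a hedged illustration (not the paper's exact formulation), a model of this kind couples standard equilibria such as the dissolution of oxygen in the liquid metal and the formation of an end-member spinel; the activities and interaction parameters named above enter through the equilibrium constants.

% Schematic equilibria of the kind combined in such a thermodynamic model;
% illustrative only, the paper's exact formulation may differ.
\begin{align}
  \tfrac{1}{2}\,\mathrm{O_2(g)} &= [\mathrm{O}]_{\mathrm{Ni}}, &
  \Delta G_1^\circ &= -RT\ln\frac{f_\mathrm{O}\,[\%\mathrm{O}]}{p_{\mathrm{O_2}}^{1/2}} \\
  \mathrm{NiO} + \mathrm{Al_2O_3} &= \mathrm{NiAl_2O_4}, &
  \Delta G_2^\circ &= -RT\ln\frac{a_{\mathrm{NiAl_2O_4}}}{a_{\mathrm{NiO}}\,a_{\mathrm{Al_2O_3}}}
\end{align}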

Relevance: 100.00%

Abstract:

This paper presents a novel algorithm for compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the discrete cosine transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by differential pulse code modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
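
A minimal sketch of such a pipeline, assuming synthetic data: the pole-zero fit is replaced here by a plain least-squares AR surrogate (not the paper's weighted Steiglitz-McBride iteration), and the DPCM step quantizes successive parameter differences.

# Sketch: DCT, low-order parametric fit, DPCM of the parameters.
import numpy as np
from scipy.fft import dct

def fit_ar(x, p):
    # Least-squares AR(p) fit: x[n] ~ sum_k a[k] * x[n-k-1].
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def dpcm_encode(params, step=1e-3):
    # First-order DPCM: quantize successive differences to integer codes.
    return np.round(np.diff(params, prepend=0.0) / step).astype(np.int32)

def dpcm_decode(codes, step=1e-3):
    return np.cumsum(codes * step)

t = np.linspace(0, 4, 1024)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63   # toy signal with sharp, QRS-like peaks

coeffs = dct(ecg, norm="ortho")
a = fit_ar(coeffs, p=12)                  # model the DCT-domain signal
codes = dpcm_encode(a)                    # integer codes, ready for an entropy coder
print("max parameter error:", np.abs(a - dpcm_decode(codes)).max())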

Relevance: 100.00%

Abstract:

Joint experimental and theoretical work is presented on two quadrupolar D-pi-A-pi-D chromophores characterized by the same bulky donor (D) group and two different central cores. The first chromophore, a newly synthesized species with a malononitrile-based acceptor (A) group, has a V-shaped structure that makes its absorption spectrum very broad, covering most of the visible region. The second chromophore has a squaraine-based core and therefore a linear structure, as also evinced from its absorption spectra. Both chromophores show an anomalous red shift of the absorption band upon increasing solvent polarity, a feature that is ascribed to the large, bulky structure of the molecules. For these molecules, the basic description of polar solvation in terms of a uniform reaction field fails. Indeed, a simple extension of the model to account for two independent reaction fields associated with the two molecular arms quantitatively reproduces the observed linear absorption and fluorescence as well as fluorescence anisotropy spectra, fully rationalizing their nontrivial dependence on solvent polarity. The model derived from the analysis of linear spectra is adopted to predict nonlinear spectra, specifically hyper-Rayleigh scattering (HRS) and two-photon absorption spectra. In polar solvents, the V-shaped chromophore is predicted to have a large HRS response in a wide spectral region (approximately 600-1300 nm). Anomalously large and largely solvent-dependent HRS responses for the linear chromophores are ascribed to symmetry lowering induced by polar solvation and amplified in this bulky system by the presence of two reaction fields.
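
For context, the uniform reaction field that fails for these molecules is, in the textbook Onsager picture, the single field below; the paper's extension assigns one such field to each molecular arm.

% Onsager reaction field of a point dipole mu in a spherical cavity of
% radius a in a solvent of dielectric constant epsilon (textbook relation,
% not the paper's two-field model).
F_R = \frac{2\mu}{a^{3}}\,\frac{\varepsilon - 1}{2\varepsilon + 1}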

Relevance: 100.00%

Abstract:

This paper presents advanced analytical methodologies, namely the double-G and double-K models, for fracture analysis of concrete specimens made of high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Brief details about the characterization and experimentation of HSC, HSC1 and UHSC are provided. The double-G model is based on an energy concept and couples Griffith's brittle fracture theory with the bridging softening property of concrete. The double-K fracture model is based on the stress intensity factor approach. Fracture parameters, namely the cohesive fracture toughness (K-Ic(c)), unstable fracture toughness (K-Ic(un)) and initiation fracture toughness (K-Ic(ini)), have been evaluated based on linear elastic fracture mechanics and nonlinear fracture mechanics principles. Both the double-G and double-K methods use the secant compliance at the peak point of the measured P-CMOD curves to determine the effective crack length. A bilinear tension softening model has been employed to account for cohesive stresses ahead of the crack tip. From the studies, it is observed that the fracture parameters obtained using the double-G and double-K models are in good agreement with each other. Crack extension resistance has been estimated using the fracture parameters obtained through the double-K model. It is observed that the values of the crack extension resistance at the critical unstable point are almost equal to the values of the unstable fracture toughness K-Ic(un) of the materials. The computed fracture parameters will be useful for crack growth studies and for remaining life and residual strength evaluation of concrete structural components.
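
For reference, the three toughness parameters are related, in the standard double-K formulation, by the superposition below (a textbook relation; the paper's notation may differ slightly).

% Double-K superposition: unstable toughness = initiation toughness
% + cohesive toughness.
K_{Ic}^{\mathrm{un}} = K_{Ic}^{\mathrm{ini}} + K_{Ic}^{c}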

Relevance: 100.00%

Abstract:

The SUSY Les Houches Accord (SLHA) 2 extended the first SLHA to include various generalisations of the Minimal Supersymmetric Standard Model (MSSM) as well as its simplest next-to-minimal version. Here, we propose further extensions to it, to include the most general and well-established see-saw descriptions (types I/II/III, inverse, and linear) in both an effective and a simple gauged extension of the MSSM framework. In addition, we generalise the PDG numbering scheme to reflect the properties of the particles.
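
For readers unfamiliar with the accord, SLHA data are exchanged as plain-text blocks of the kind sketched below (generic SLHA-style entries with standard PDG codes; the new see-saw blocks defined in the paper are not reproduced here).

# Generic SLHA-style fragment, illustrative only.
BLOCK MODSEL                 # model selection
    1    1                   # example: mSUGRA
BLOCK MASS                   # mass spectrum, keyed by PDG code
  1000022   9.70071979E+01   # ~chi_10
  1000024   1.81696474E+02   # ~chi_1+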

Relevance: 100.00%

Abstract:

Authentication protocols are essential for secure communication in mobile ad hoc networks (MANETs). A number of authentication protocols for MANETs have been proposed in the literature which provide the basic authentication service while trying to optimize their performance and resource consumption parameters. A problem with most of these protocols is that the underlying networking environments in which they are applicable have been left unspecified. As a result, the lack of specifications about the networking environments applicable to an authentication protocol can be misleading about the performance and the applicability of the protocol. In this paper, we first characterize the networking environment of a MANET as its 'Membership Model', defined as a set of specifications related to the 'Membership Granting Server' (MGS) and the 'Membership Set Pattern' (MSP) of the MANET. We then identify various types of possible membership models for a MANET. To illustrate that the underlying membership model must be considered when designing an authentication protocol for a MANET, we study a set of six representative authentication protocols and analyze their applicability for the membership models enumerated in this paper. The analysis shows that the same protocol may not perform equally well in all membership models. In addition, there may be membership models which are important from the point of view of users, but for which no authentication protocol is available.

Relevance: 100.00%

Abstract:

Thermal decomposition studies of 3-carene, a bio-fuel, have been carried out behind reflected shock waves in a single pulse shock tube for temperatures ranging from 920 K to 1220 K. The observed products of the thermal decomposition of 3-carene are acetylene, allene, butadiene, isoprene, cyclopentadiene, hexatriene, benzene, toluene and p-xylene. The overall rate constant for 3-carene decomposition was found to be k = 10^(9.95 +/- 0.54) exp[-(40.88 +/- 2.71 kcal mol^-1)/RT] s^-1. Ab initio theoretical calculations were carried out to find the minimum energy pathway that could explain the formation of the observed products in the thermal decomposition experiments. These calculations were carried out at the B3LYP/6-311+G(d,p) and G3 levels of theory. A kinetic mechanism explaining the observed products in the thermal decomposition experiments has been derived. It is concluded that the linear hydrocarbons are the primary products in the pyrolysis of 3-carene.
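
As a quick worked example grounded in the reported parameters (central values only, uncertainties ignored), the Arrhenius expression can be evaluated across the experimental temperature range:

# Illustrative evaluation of the reported Arrhenius expression.
import math

A = 10 ** 9.95        # pre-exponential factor, s^-1
Ea = 40.88e3          # activation energy, cal mol^-1
R = 1.987             # gas constant, cal mol^-1 K^-1

for T in (920.0, 1070.0, 1220.0):   # span of the shock-tube experiments
    k = A * math.exp(-Ea / (R * T))
    print(f"T = {T:.0f} K, k = {k:.3e} s^-1")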

Relevance: 100.00%

Abstract:

With advances in technology, seismological theory, and data acquisition, a number of high-resolution seismic tomography models have been published. However, discrepancies between tomography models often arise from different theoretical treatments of seismic wave propagation, different inversion strategies, and different data sets. Using a fixed velocity-to-density scaling and a fixed radial viscosity profile, we compute global mantle flow models associated with the different tomography models and test their impact in explaining surface geophysical observations (geoid, dynamic topography, stress, and strain rates). We use the joint modeling of lithosphere and mantle dynamics approach of Ghosh and Holt (2012) to compute the full lithosphere stresses, except that we use HC for the mantle circulation model, which accounts for the primary flow-coupling features associated with density-driven mantle flow. Our results show that the seismic tomography models S40RTS and SAW642AN provide a better match with surface observables on a global scale than the other models tested. Both of these tomography models share important similarities, including upwellings located in the Pacific, Eastern Africa, Iceland, and the mid-ocean ridges in the Atlantic and Indian Oceans, and downwelling flows mainly located beneath the Andes, the Middle East, and central and Southeast Asia.
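
The fixed velocity-to-density scaling referred to above has, in its standard form, the linear relation below (with a constant scaling factor R; the abstract does not give its value).

% Linear scaling from shear-velocity anomalies to density anomalies.
\frac{\delta\rho}{\rho} = R\,\frac{\delta V_s}{V_s}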

Relevance: 100.00%

Abstract:

Solidification processes are complex in nature, involving multiple phases and several length scales. The properties of solidified products are dictated by the microstructure, the macrostructure, and various defects present in the casting. These, in turn, are governed by the multiphase transport phenomena occurring at different length scales. In order to control and improve the quality of cast products, it is important to have a thorough understanding of the various physical and physicochemical phenomena occurring at various length scales, preferably through predictive models and controlled experiments. In this context, the modeling of transport phenomena during alloy solidification has evolved over the last few decades due to the complex multiscale nature of the problem. Despite this, a model accounting for all the important length scales directly is computationally prohibitive. Thus, in the past, single-phase continuum models have often been employed with respect to a single length scale to model solidification processing. However, continuous development in understanding the physics of solidification at various length scales on one hand, and the phenomenal growth of computational power on the other, have allowed researchers to use increasingly complex multiphase/multiscale models in recent times. These models have allowed greater understanding of the coupled micro/macro nature of the process and have made it possible to predict solute segregation and microstructure evolution at different length scales. In this paper, a brief overview of the current status of modeling of convection and macrosegregation in alloy solidification processing is presented.

Relevance: 100.00%

Abstract:

Representation and quantification of uncertainty in climate change impact studies are difficult tasks. Several sources of uncertainty arise in studies of hydrologic impacts of climate change, such as those due to the choice of general circulation models (GCMs), scenarios and downscaling methods. Recently, much work has focused on uncertainty quantification and modeling in regional climate change impacts. In this paper, an uncertainty modeling framework is evaluated, which uses a generalized uncertainty measure to combine GCM, scenario and downscaling uncertainties. The Dempster-Shafer (D-S) evidence theory is used for representing and combining uncertainty from various sources. A significant advantage of the D-S framework over the traditional probabilistic approach is that it allows for the allocation of a probability mass to sets or intervals, and can hence handle both aleatory (stochastic) uncertainty and epistemic (subjective) uncertainty. This paper shows how the D-S theory can be used to represent beliefs in hypotheses such as hydrologic drought or wet conditions, describe uncertainty and ignorance in the system, and give a quantitative measurement of belief and plausibility in results. The D-S approach has been used in this work for information synthesis using various evidence combination rules having different conflict modeling approaches. A case study is presented for hydrologic drought prediction using downscaled streamflow in the Mahanadi River at Hirakud in Orissa, India. Projections of the n most likely monsoon streamflow sequences are obtained from a conditional random field (CRF) downscaling model, using an ensemble of three GCMs for three scenarios, which are converted to monsoon standardized streamflow index (SSFI-4) series. This range is used to specify the basic probability assignment (bpa) for a Dempster-Shafer structure, which represents uncertainty associated with each of the SSFI-4 classifications. These uncertainties are then combined across GCMs and scenarios using various evidence combination rules given by the D-S theory. A Bayesian approach is also presented for this case study, which models the uncertainty in projected frequencies of SSFI-4 classifications by deriving a posterior distribution for the frequency of each classification, using an ensemble of GCMs and scenarios. Results from the D-S and Bayesian approaches are compared, and the relative merits of each approach are discussed. Both approaches show an increasing probability of extreme, severe and moderate droughts and a decreasing probability of normal and wet conditions in Orissa as a result of climate change.
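
A minimal sketch of the evidence-combination step over three hydrologic classes: Dempster's rule itself is standard, but the basic probability assignments below are hypothetical, not the paper's values derived from SSFI-4 projections.

# Dempster's rule of combination over hydrologic classes (sketch).
from itertools import product

def combine(m1, m2):
    # m(A) is proportional to the sum over B, C with B & C == A of
    # m1(B) * m2(C), renormalized by the total conflict mass.
    raw, conflict = {}, 0.0
    for (B, b), (C, c) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            raw[A] = raw.get(A, 0.0) + b * c
        else:
            conflict += b * c
    return {A: v / (1.0 - conflict) for A, v in raw.items()}

drought, normal, wet = "drought", "normal", "wet"
m_gcm1 = {frozenset({drought}): 0.6, frozenset({drought, normal}): 0.4}
m_gcm2 = {frozenset({drought}): 0.5, frozenset({drought, normal, wet}): 0.5}
print(combine(m_gcm1, m_gcm2))   # combined masses across two sources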

Relevance: 100.00%

Abstract:

In this paper, the reduced level of rock in Bangalore, India, is derived from data for 652 boreholes in an area covering 220 sq. km. To predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth, ordinary kriging and Support Vector Machine (SVM) models have been developed. In ordinary kriging, knowledge of the semivariogram of the reduced level of rock from the 652 points in Bangalore is used to predict the reduced level of rock at any point in the subsurface where field measurements are not available. A cross-validation (Q1 and Q2) analysis has also been carried out for the developed ordinary kriging model. The SVM, a learning machine based on statistical learning theory, uses a regression technique with an epsilon-insensitive loss function to predict the reduced level of rock from a large data set. A comparison between the ordinary kriging and SVM models demonstrates that the SVM is superior to ordinary kriging in predicting rock depth.
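
A minimal sketch of the SVM step, assuming synthetic borehole data; scikit-learn's SVR uses exactly the epsilon-insensitive loss mentioned above, though the kernel and hyperparameters here are illustrative.

# SVM regression for rock level at unsampled locations (sketch).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 20, size=(652, 2))             # easting, northing (km)
y = 900 - 0.5 * X[:, 0] + rng.normal(0, 2, 652)   # reduced level of rock (m)

model = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X, y)
print(model.predict([[10.0, 10.0]]))              # prediction at an unsampled point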

Relevance: 100.00%

Abstract:

The formulation of higher order structural models and their discretization using the finite element method is difficult owing to their complexity, especially in the presence of non-linearities. In this work, a new algorithm for automating the formulation and assembly of hyperelastic higher-order structural finite elements is developed. A hierarchic series of kinematic models is proposed for modeling structures with special geometries, and the algorithm is formulated to automate the study of this class of higher order structural models. The algorithm developed in this work sidesteps the need for an explicit derivation of the governing equations for the individual kinematic modes. Using a novel procedure involving a nodal degree-of-freedom based automatic assembly algorithm, automatic differentiation and higher dimensional quadrature, the relevant finite element matrices are directly computed from the variational statement of elasticity and the higher order kinematic model. Another significant feature of the proposed algorithm is that natural boundary conditions are implicitly handled for arbitrary higher order kinematic models. The validity of the algorithm is illustrated with examples involving linear elasticity and hyperelasticity.
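
A hedged sketch of the core idea using JAX-style automatic differentiation: element quantities are derived by differentiating an energy functional rather than by hand-deriving governing equations. The 1D bar energy below is a stand-in for the paper's hyperelastic higher-order kinematic models.

# Residual and tangent stiffness obtained by differentiating an energy.
import jax
import jax.numpy as jnp

def strain_energy(u, x, EA=1.0):
    # Total energy of a 1D bar discretized with linear elements.
    eps = jnp.diff(u) / jnp.diff(x)              # element strains
    return 0.5 * EA * jnp.sum(eps**2 * jnp.diff(x))

x = jnp.linspace(0.0, 1.0, 11)                   # nodal coordinates
u = 0.1 * x                                      # trial displacement field

residual = jax.grad(strain_energy)(u, x)         # internal force vector
stiffness = jax.hessian(strain_energy)(u, x)     # tangent stiffness matrix
print(residual.shape, stiffness.shape)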

Relevance: 100.00%

Abstract:

Structural Health Monitoring (SHM) systems require integration of non-destructive evaluation (NDE) technologies into structural design and operational processes. Modeling and simulation of complex NDE inspection processes are important aspects in the development and deployment of SHM technologies. Ray tracing techniques are vital simulation tools for visualizing the wave path inside a material. They also help in optimizing the location of transducers and their orientation with respect to the zone of interrogation, increasing the chances of detection and identification of a flaw in that zone. While current state-of-the-art techniques such as ray tracing based on geometric principles help in such visualization, other information, such as signal losses due to the spherical or cylindrical shape of the wave front, is rarely taken into consideration. The problem becomes more complicated in the case of dispersive guided wave propagation and near-field defect scattering. We review the existing models and tools for ultrasonic NDE simulation in structural components. As an initial step, we develop a ray-tracing approach in which phase and spectral information are preserved. This enables one to study wave scattering beyond simple time-of-flight calculation of rays. Challenges in the theory and modelling of defects of various kinds are discussed. Additional considerations such as signal decay and the physics of scattering are reviewed, and the challenges involved in realistic computational implementation are discussed. The potential application of this approach to SHM system design is highlighted: applied to complex structural components such as airframe structures, SHM is shown to provide additional value in terms of lighter weight and/or enhanced longevity, extending the damage tolerance design principle without compromising safety and reliability.
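
A toy illustration of a phase-preserving ray, assuming a single straight path and a spherical 1/r spreading loss (both assumptions, not the paper's method): a narrowband pulse is propagated by applying a per-frequency phase delay.

# Propagate a pulse spectrum along a ray, keeping phase information.
import numpy as np

def propagate(spectrum, freqs, path_len, c=5900.0):
    # Phase delay exp(-i*2*pi*f*L/c) plus a 1/L spreading loss.
    phase = np.exp(-2j * np.pi * freqs * path_len / c)
    return spectrum * phase / max(path_len, 1e-9)

fs = 10e6
t = np.arange(0, 20e-6, 1 / fs)
pulse = np.sin(2 * np.pi * 1e6 * t) * np.hanning(t.size)   # 1 MHz burst
S = np.fft.rfft(pulse)
f = np.fft.rfftfreq(t.size, 1 / fs)

S_out = propagate(S, f, path_len=0.05)      # 50 mm path; c ~ 5900 m/s in steel
received = np.fft.irfft(S_out, n=t.size)
peak = t[np.argmax(np.abs(received))]
print("envelope peak at", peak, "s")        # shifted by ~L/c, about 8.5 microseconds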

Relevance: 100.00%

Abstract:

It is shown, in the composite fermion models studied by 't Hooft and others, that the requirements of Adler-Bell-Jackiw anomaly matching and n-independence are sufficient to fix the indices of composite representations. The third requirement, namely that of decoupling relations, follows from these two constraints in such models and hence is inessential.
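
Schematically, the matching constraint equates the anomaly coefficient built from massless composites (with indices l_r) to that of the elementary fermions; the notation below is generic, not the paper's.

% 't Hooft anomaly-matching constraint, schematic form.
\sum_{r\,\in\,\text{composites}} \ell_r\, A(r) = A_{\text{elementary}}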

Relevance: 100.00%

Abstract:

Regular electrical activation waves in cardiac tissue lead to the rhythmic contraction and expansion of the heart that ensures blood supply to the whole body. Irregularities in the propagation of these activation waves can result in cardiac arrhythmias, like ventricular tachycardia (VT) and ventricular fibrillation (VF), which are major causes of death in the industrialised world. Indeed there is growing consensus that spiral or scroll waves of electrical activation in cardiac tissue are associated with VT, whereas, when these waves break to yield spiral- or scroll-wave turbulence, VT develops into life-threatening VF: in the absence of medical intervention, this makes the heart incapable of pumping blood and a patient dies in roughly two-and-a-half minutes after the initiation of VF. Thus studies of spiral- and scroll-wave dynamics in cardiac tissue pose important challenges for in vivo and in vitro experimental studies and for in silico numerical studies of mathematical models for cardiac tissue. A major goal here is to develop low-amplitude defibrillation schemes for the elimination of VT and VF, especially in the presence of inhomogeneities that occur commonly in cardiac tissue. We present a detailed and systematic study of spiral- and scroll-wave turbulence and spatiotemporal chaos in four mathematical models for cardiac tissue, namely, the Panfilov, Luo-Rudy phase 1 (LR1), and reduced Priebe-Beuckelmann (RPB) models, and the model of ten Tusscher, Noble, Noble, and Panfilov (TNNP). In particular, we use extensive numerical simulations to elucidate the interaction of spiral and scroll waves in these models with conduction and ionic inhomogeneities; we also examine the suppression of spiral- and scroll-wave turbulence by low-amplitude control pulses. Our central qualitative result is that, in all these models, the dynamics of such spiral waves depends very sensitively on such inhomogeneities. We also study two types of control schemes that have been suggested for the suppression of spiral turbulence via low-amplitude current pulses in such mathematical models for cardiac tissue; our investigations here are designed to examine the efficacy of such control schemes in the presence of inhomogeneities. We find that a local pulsing scheme does not suppress spiral turbulence in the presence of inhomogeneities, but a scheme that uses control pulses on a spatially extended mesh is more successful in the elimination of spiral turbulence. We discuss the theoretical and experimental implications of our study that have a direct bearing on defibrillation and the control of life-threatening cardiac arrhythmias such as ventricular fibrillation.
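
A minimal single-cell sketch of a two-variable Aliev-Panfilov-type excitation model (forward Euler, commonly quoted parameter values); the tissue-scale models studied in the paper add diffusion and, for LR1, RPB and TNNP, detailed ionic currents.

# Single-cell Aliev-Panfilov-type dynamics: one action-potential-like pulse.
k, a, eps0, mu1, mu2 = 8.0, 0.15, 0.002, 0.2, 0.3
u, v, dt = 0.0, 0.0, 0.01

trace = []
for step in range(20000):
    if step == 100:                       # brief suprathreshold stimulus
        u = 0.3
    eps = eps0 + mu1 * v / (u + mu2)      # slow recovery rate
    du = -k * u * (u - a) * (u - 1.0) - u * v
    dv = eps * (-v - k * u * (u - a - 1.0))
    u, v = u + dt * du, v + dt * dv
    trace.append(u)

print("peak u:", max(trace))              # excitation followed by recovery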