909 results for EVALUATION MODEL
Abstract:
This paper presents advanced analytical methodologies, namely the double-G and double-K models, for fracture analysis of specimens made of high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Brief details about the characterization and experimentation of HSC, HSC1 and UHSC are provided. The double-G model is based on an energy concept and couples Griffith's brittle fracture theory with the bridging softening property of concrete. The double-K fracture model is based on the stress intensity factor approach. Fracture parameters such as the cohesive fracture toughness (K-Ic(c)), unstable fracture toughness (K-Ic(un)) and initiation fracture toughness (K-Ic(ini)) have been evaluated based on linear elastic and nonlinear fracture mechanics principles. Both the double-G and double-K methods use the secant compliance at the peak point of measured P-CMOD curves to determine the effective crack length. A bilinear tension softening model has been employed to account for cohesive stresses ahead of the crack tip. From the studies, it is observed that the fracture parameters obtained using the double-G and double-K models are in good agreement with each other. Crack extension resistance has been estimated using the fracture parameters obtained through the double-K model. It is observed that the values of the crack extension resistance at the critical unstable point are almost equal to the values of the unstable fracture toughness K-Ic(un) of the materials. The computed fracture parameters will be useful for crack growth study, remaining life and residual strength evaluation of concrete structural components.
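The double-K relations referred to above can be sketched numerically. A minimal sketch, assuming the standard additive partition K-Ic(un) = K-Ic(ini) + K-Ic(c) and the abstract's observation that the crack extension resistance at the unstable point approximately equals K-Ic(un); the numerical values are hypothetical, not the paper's measurements:

```python
def unstable_toughness(k_ini, k_c):
    """Double-K partition: unstable fracture toughness K-Ic(un) is the
    sum of the initiation toughness K-Ic(ini) and the cohesive
    toughness K-Ic(c)."""
    return k_ini + k_c

# Illustrative (hypothetical) values, in MPa*m^0.5
k_un = unstable_toughness(k_ini=0.8, k_c=0.6)

# Per the abstract, the crack extension resistance at the critical
# unstable point is approximately K-Ic(un) itself.
crack_extension_resistance_at_instability = k_un
```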
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) offers huge potential for design trade-offs involving energy, power, temperature and performance of computing systems. In this paper, we evaluate three different DVFS schemes - our enhancement of a Petri net performance model based DVFS method for sequential programs to stream programs, a simple profile based linear scaling method, and an existing hardware based DVFS method for multithreaded applications - using multithreaded stream applications in a full system Chip Multiprocessor (CMP) simulator. From our evaluation, we find that the software based methods achieve significant Energy/Throughput^2 (ET^-2) improvements. The hardware based scheme degrades performance heavily and suffers an ET^-2 loss. Our results indicate that the simple profile based scheme achieves the benefits of the complex Petri net based scheme for stream programs, and they present a strong case for independent voltage/frequency control for the different cores of a CMP, which is lacking in most state-of-the-art CMPs. This is in contrast to the conclusions of a recent evaluation of per-core DVFS schemes for multithreaded applications on CMPs.
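The ET^-2 figure of merit used above is straightforward to compute. A minimal sketch with hypothetical energy and throughput numbers, not the paper's measurements:

```python
def et2(energy_joules, throughput):
    """Energy/Throughput^2 (ET^-2) figure of merit: lower is better."""
    return energy_joules / throughput ** 2

def improvement(baseline_et2, candidate_et2):
    """Fractional ET^-2 improvement of a candidate over a baseline."""
    return (baseline_et2 - candidate_et2) / baseline_et2

# Hypothetical numbers: a DVFS scheme saves 20% energy at a 5% throughput cost.
gain = improvement(et2(100.0, 1.00), et2(80.0, 0.95))
```

Because throughput enters squared, a scheme only wins on ET^-2 when its energy saving outweighs twice its relative slowdown, which is why the hardware scheme's heavy performance loss translates into an ET^-2 loss.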
Abstract:
This paper deals with the evaluation of the component-laminate load-carrying capacity, i.e., the loads that cause the failure of the individual layers and of the component-laminate as a whole, in a four-bar mechanism. The component-laminate load-carrying capacity is evaluated using the Tsai-Wu-Hahn failure criterion for various lay-ups. The reserve factor of each ply in the component-laminate is calculated using the maximum resultant force and the maximum resultant moment occurring at different time steps at the joints of the mechanism. Here, all component bars of the mechanism are made of fiber-reinforced laminates and have thin rectangular cross-sections. They could, in general, be pre-twisted and/or possess initial curvature, either by design or by defect. They are linked to each other by means of revolute joints. We restrict ourselves to linear materials with small strains within each elastic body (strip-like beam). Each component of the mechanism is modeled as a beam based on geometrically non-linear 3-D elasticity theory. The component problems are thus split into 2-D analyses of the reference beam cross-sections and non-linear 1-D analyses along the three beam reference curves. For the thin rectangular cross-sections considered here, the 2-D cross-sectional nonlinearity is also overwhelming. This can be perceived from the fact that such sections constitute a limiting case between thin-walled open and closed sections, thus inviting the non-linear phenomena observed in both. The strong elastic couplings of anisotropic composite laminates complicate the model further. However, a powerful mathematical tool called the Variational Asymptotic Method (VAM) not only enables such a dimensional reduction, but also provides asymptotically correct analytical solutions to the non-linear cross-sectional analysis.
Such closed-form solutions are used here in conjunction with numerical techniques for the rest of the problem, yielding predictions more quickly and accurately than would otherwise be possible. Local 3-D stress, strain and displacement fields for representative sections in the component bars are recovered, based on the stress resultants from the 1-D global beam analysis. A numerical example is presented which illustrates the failure of each component-laminate and the mechanism as a whole.
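The per-ply reserve factor mentioned above is commonly obtained as the positive root of the quadratic Tsai-Wu equation a*R^2 + b*R = 1; a minimal sketch under that assumption (assembling a and b from the ply stress state and the strength coefficients is laminate-specific and not shown here):

```python
import math

def reserve_factor(a, b):
    """Positive root R of the Tsai-Wu quadratic a*R^2 + b*R - 1 = 0,
    where a = F_ij*s_i*s_j (quadratic strength term, a > 0) and
    b = F_i*s_i (linear strength term) for the ply stress state s.
    R > 1 means the ply survives; R < 1 means it fails."""
    return (-b + math.sqrt(b * b + 4.0 * a)) / (2.0 * a)
```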
Abstract:
Using continuous and near-real time measurements of the mass concentrations of black carbon (BC) aerosols near the surface, for a period of 1 year (from January to December 2006) from a network of eight observatories spread over different environments of India, a space-time synthesis is generated. The strong seasonal variations observed, with a winter high and summer low, are attributed to the combined effects of changes in synoptic air mass types, modulated strongly by atmospheric boundary layer dynamics. The spatial distribution shows much higher BC concentrations over the Indo-Gangetic Plain (IGP) than over the peninsular Indian stations. These were examined against simulations using two chemical transport models, GOCART (Goddard Global Ozone Chemistry Aerosol Radiation and Transport) and CHIMERE, for the first time over the Indian region. Both model simulations deviated significantly from the measurements at all stations, more so during the winter and pre-monsoon seasons and over mega cities. However, the CHIMERE model simulations show better agreement with the measurements. Notwithstanding this, both models captured the temporal variations, at seasonal and subseasonal timescales, and the natural variabilities (intra-seasonal oscillations) fairly well, especially at the off-equatorial stations. It is hypothesized that an improvement in the atmospheric boundary layer (ABL) parameterization scheme for the tropical environment might lead to better results with GOCART.
Abstract:
Orthogonal frequency-division multiple access (OFDMA) systems divide the available bandwidth into orthogonal subchannels and exploit multiuser diversity and frequency selectivity to achieve high spectral efficiencies. However, they require a significant amount of channel state feedback for scheduling and rate adaptation and are sensitive to feedback delays. We develop a comprehensive analysis of OFDMA system throughput in the presence of feedback delays as a function of the feedback scheme, frequency-domain scheduler, and rate adaptation rule. Also derived are expressions for the outage probability, which captures the inability of a subchannel to successfully carry data due to the feedback scheme or feedback delays. Our model encompasses the popular best-n and threshold-based feedback schemes and the greedy, proportional fair, and round-robin schedulers, which cover a wide range of throughput versus fairness tradeoffs. It helps quantify the differing robustness of the schedulers to feedback overhead and delays. It shows that, even at low vehicular speeds, small feedback delays markedly degrade the throughput and increase the outage probability. Further, given the feedback delay, the throughput degradation depends primarily on the feedback overhead and not on the feedback scheme itself. We also show how to optimize the rate adaptation thresholds as a function of feedback delay.
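The best-n feedback scheme and greedy scheduler discussed above can be sketched as follows; the data structures and SNR values here are illustrative, not the paper's model:

```python
def best_n_report(snrs, n):
    """best-n feedback: a user reports only the indices of its n
    strongest subchannels."""
    return sorted(range(len(snrs)), key=lambda i: snrs[i], reverse=True)[:n]

def greedy_schedule(reports):
    """reports maps user -> {subchannel: reported SNR}.  The greedy
    scheduler assigns each subchannel to the reporting user with the
    highest SNR; a subchannel nobody reported is in outage."""
    assignment = {}
    for user, chans in reports.items():
        for ch, snr in chans.items():
            if ch not in assignment or snr > assignment[ch][1]:
                assignment[ch] = (user, snr)
    return assignment

# Each user feeds back only its 2 best of 4 subchannels
report = best_n_report([1.0, 5.0, 3.0, 4.0], 2)
```

With stale (delayed) reports, the scheduler still assigns by the reported SNRs, which is how feedback delay translates into rate-mismatch outages.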
Abstract:
The delineation of seismic source zones plays an important role in the evaluation of seismic hazard. In most studies, seismic source delineation is done based on geological features. In the present study, an attempt has been made to delineate seismic source zones in the study area (south India) based on seismicity parameters. Seismicity parameters and the maximum probable earthquake for these source zones were evaluated and used in the hazard evaluation. The probabilistic evaluation of seismic hazard for south India was carried out using a logic tree approach. Two different types of seismic sources, linear and areal, were considered in the present study to model the seismic sources in the region more precisely. In order to properly account for the attenuation characteristics of the region, three different attenuation relations were used with different weighting factors. Seismic hazard evaluation was done for probabilities of exceedance (PE) of 10% and 2% in 50 years. The spatial variation of rock-level peak horizontal acceleration (PHA) and spectral acceleration (Sa) values corresponding to return periods of 475 and 2500 years for the entire study area is presented in this work. The peak ground acceleration (PGA) values at ground surface level were estimated for the different NEHRP site classes by considering local site effects.
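The 10% and 2% in 50 years exceedance probabilities map to the quoted return periods under the usual Poisson occurrence assumption; a minimal sketch:

```python
import math

def return_period(pe, exposure_years):
    """Return period T under a Poisson occurrence model, where the
    probability of exceedance PE over an exposure time satisfies
    PE = 1 - exp(-exposure / T), so T = -exposure / ln(1 - PE)."""
    return -exposure_years / math.log(1.0 - pe)

# 10% in 50 years -> ~475-year return period;
# 2% in 50 years  -> ~2475 years, conventionally quoted as 2500.
t_10pct = return_period(0.10, 50.0)
t_2pct = return_period(0.02, 50.0)
```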
Abstract:
In the present investigation, an attempt has been made to develop a new co-polymeric material for controlled-release tablet formulations. Acrylamide grafting was successfully performed on the backbone of sago starch. The modified starch was tested for acute toxicity and drug-excipient compatibility. The grafted material was used to make controlled-release tablets of lamivudine. The formulations were evaluated for physical characteristics such as hardness, friability, % drug content and weight variation. The in vitro release study showed that the optimized formulation exhibited the highest correlation (R) value for the Higuchi model, and its release mechanism was predominantly a combination of diffusion and erosion. A significant difference in the pharmacokinetic parameters (T-max, C-max, AUC, V-d, T-1/2 and MDT) of the optimized formulation compared with the marketed conventional tablet Lamivir(R) was observed. The pharmacokinetic parameters showed a controlled pattern and better bioavailability. The optimized formulation exhibited a good stability and release profile under accelerated stability conditions. (c) 2013 Elsevier B.V. All rights reserved.
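The Higuchi-model correlation mentioned above amounts to checking the linearity of cumulative release against the square root of time; a minimal sketch (the time and release values are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def higuchi_correlation(times, cumulative_release):
    """R for the Higuchi model Q = k * sqrt(t): correlate cumulative
    release against the square root of time."""
    return pearson_r([math.sqrt(t) for t in times], cumulative_release)

# Hypothetical data following Q = 10 * sqrt(t) exactly gives R = 1
r = higuchi_correlation([1.0, 4.0, 9.0, 16.0], [10.0, 20.0, 30.0, 40.0])
```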
Abstract:
In this paper, we evaluate the performance of a burst retransmission method for an optical burst switched (OBS) network with the intermediate-node-initiation (INI) signaling technique. The proposed method tries to reduce the burst contention probability at the intermediate core nodes. We develop an analytical model for the burst contention probability and burst loss probability of an OBS network with INI signaling, and we simulate the performance of optical burst retransmission. Simulation results show that at low traffic loads the loss probability is low compared to conventional burst retransmission in the OBS network. Results also show that the retransmission method for an OBS network with INI signaling significantly reduces the burst loss probability.
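The abstract does not reproduce its analytical model; a common building block for burst blocking in OBS analyses is the Erlang-B formula, sketched here under that assumption:

```python
def erlang_b(offered_load, channels):
    """Erlang-B blocking probability via the numerically stable
    recursion B(E, 0) = 1; B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)),
    for offered load E (in Erlangs) and m wavelength channels."""
    b = 1.0
    for m in range(1, channels + 1):
        b = offered_load * b / (m + offered_load * b)
    return b
```

Retransmission effectively raises the offered load while giving blocked bursts further attempts, which is why its benefit is largest at low loads, where the extra load costs little.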
Abstract:
Gene expression in living systems is inherently stochastic, and tends to produce varying numbers of proteins over repeated cycles of transcription and translation. In this paper, an expression is derived for the steady-state protein number distribution starting from a two-stage kinetic model of the gene expression process involving p proteins and r mRNAs. The derivation is based on an exact path integral evaluation of the joint distribution, P(p, r, t), of p and r at time t, which can be expressed in terms of the coupled Langevin equations for p and r that represent the two-stage model in continuum form. The steady-state distribution of p alone, P(p), is obtained from P(p, r, t) (a bivariate Gaussian) by integrating out the r degrees of freedom and taking the limit t -> infinity. P(p) is found to be proportional to the product of a Gaussian and a complementary error function. It provides a generally satisfactory fit to simulation data on the same two-stage process when the translational efficiency (a measure of intrinsic noise levels in the system) is relatively low; it is less successful as a model of the data when the translational efficiency (and noise levels) are high.
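The steady-state form described above, a Gaussian multiplied by a complementary error function, can be sketched directly; the parameters mu, sigma and c below stand in for combinations of the kinetic rate constants and are hypothetical:

```python
import math

def steady_state_shape(p, mu, sigma, c):
    """Unnormalised steady-state protein-number distribution P(p):
    a Gaussian multiplied by a complementary error function.
    mu, sigma and c are placeholders for combinations of the
    transcription/translation/degradation rates."""
    gaussian = math.exp(-((p - mu) ** 2) / (2.0 * sigma ** 2))
    return gaussian * math.erfc(-(p - c) / (math.sqrt(2.0) * sigma))
```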
Abstract:
This paper presents a comparative evaluation of the average and switching models of a dc-dc boost converter from the point of view of real-time simulation. Both models are used to simulate the converter in real time on a Field Programmable Gate Array (FPGA) platform. The converter is considered to function over a wide range of operating conditions, and can transition between continuous conduction mode (CCM) and discontinuous conduction mode (DCM). While the average model is known to be computationally efficient from the perspective of off-line simulation, it is shown here to consume more logical resources than the switching model for real-time simulation of the dc-dc converter. Further, evaluation of the boundary condition between CCM and DCM is found to be the main reason for the increased resource consumption of the average model.
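The CCM/DCM boundary evaluation mentioned above reduces, for an ideal boost converter, to checking whether the inductor valley current reaches zero; a minimal sketch under the lossless assumption (the component values below are illustrative):

```python
def boost_conduction_mode(vin, vout, i_out, fsw, L):
    """CCM/DCM boundary test for an ideal (lossless) boost converter.

    Duty ratio D = 1 - vin/vout; average inductor current equals the
    input current i_out/(1-D); peak-to-peak ripple = vin*D/(L*fsw).
    DCM occurs when the valley current would drop below zero, i.e.
    when the average current is less than half the ripple."""
    d = 1.0 - vin / vout
    i_l_avg = i_out / (1.0 - d)
    ripple = vin * d / (L * fsw)
    return "DCM" if i_l_avg < ripple / 2.0 else "CCM"
```

This comparison must be re-evaluated at every simulation step, which is consistent with the paper's finding that the boundary check dominates the average model's resource cost on the FPGA.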
Abstract:
Propranolol, a beta-adrenergic receptor blocker, is presently considered a potential therapeutic intervention under investigation for its role in the prevention and treatment of osteoporosis. However, no studies have compared the osteoprotective properties of propranolol with well-accepted therapeutic interventions for the treatment of osteoporosis. To address this question, this study was designed to evaluate the bone protective effects of zoledronic acid, alfacalcidol and propranolol in an animal model of postmenopausal osteoporosis. Five days after ovariectomy, 36 ovariectomized (OVX) rats were divided into 6 equal groups, randomized to treatment with zoledronic acid (100 μg/kg, intravenous single dose), alfacalcidol (0.5 μg/kg, oral gavage daily) or propranolol (0.1 mg/kg, subcutaneously 5 days per week) for 12 weeks. Untreated OVX and sham OVX rats were used as controls. At the end of the study, rats were killed under anesthesia. For bone porosity evaluation, whole fourth lumbar vertebrae (LV4) were removed. LV4 were also used to measure bone mechanical properties. Left femurs were used for bone histology. Propranolol showed a significant decrease in bone porosity in comparison to the OVX control. Moreover, propranolol significantly improved bone mechanical properties and bone quality when compared with the OVX control. The osteoprotective effect of propranolol was comparable with those of zoledronic acid and alfacalcidol. Based on this comparative study, the results strongly suggest that propranolol might be a new therapeutic intervention for the management of postmenopausal osteoporosis in humans.
Abstract:
Background & objectives: Pre-clinical toxicology evaluation of biotechnology products is a challenge to the toxicologist. The present investigation is an attempt to evaluate the safety profile of the first indigenously developed recombinant DNA anti-rabies vaccine [DRV (100 µg)] and combination rabies vaccine [CRV (100 µg DRV and 1.25 IU of cell-culture-derived inactivated rabies virus vaccine)], which are intended for clinical use by the intramuscular route, in Rhesus monkeys. Methods: As per the regulatory requirements, the study was designed for acute (single dose - 14 days), sub-chronic (repeat dose - 28 days) and chronic (intended clinical dose - 120 days) toxicity tests using three dose levels, viz. therapeutic, average (2x therapeutic dose) and highest dose (10x therapeutic dose), in monkeys. The monkey was selected as the model based on affinity and the rapid, higher antibody response observed during the efficacy studies. An attempt was made to evaluate all parameters, including physical, physiological, clinical, haematological and histopathological profiles of all target organs, as well as Tier I, II and III immunotoxicity parameters. Results: In acute toxicity there was no mortality in spite of exposing the monkeys to 10x DRV. In the sub-chronic and chronic toxicity studies there were no abnormalities in physical, physiological, neurological or clinical parameters after administration of the test compound at the intended and 10 times the clinical dosage schedule of DRV and CRV under the experimental conditions. Clinical chemistry, haematology, organ weights and histopathology studies were essentially unremarkable, except for the presence of residual DNA at femtogram levels at the site of injection in animals which received 10x DRV in the chronic toxicity study. The No Observed Adverse Effect Level (NOAEL) of DRV is 1000 µg/dose (10 times the therapeutic dose) when administered on days 0, 4, 7, 14 and 28.
Interpretation & conclusions: The information generated by this study not only draws attention to the need for national and international regulatory agencies to formulate guidelines for pre-clinical safety evaluation of biotech products, but also facilitates the development of biopharmaceuticals as safe potential therapeutic agents.
Abstract:
The objective of the current study is to evaluate the fidelity of load cell readings during impact testing in a drop-weight impactor using lumped parameter modeling. For the most common configuration of a moving impactor-load cell system, in which the dynamic load is transferred from the impactor head to the load cell, a quantitative assessment is made of the possible discrepancy that can result in the load cell response. A 3-DOF (degrees-of-freedom) LPM (lumped parameter model) is considered to represent a given impact testing set-up. In this model, a test specimen in the form of a steel hat section similar to the front rails of cars is represented by a nonlinear spring, while the load cell is assumed to behave in a linear manner due to its high stiffness. Assuming a given load-displacement response obtained in an actual test to be the true behavior of the specimen, the numerical solution of the governing differential equations following an implicit time integration scheme is shown to yield an excellent reproduction of the mechanical behavior of the specimen, thereby confirming the accuracy of the numerical approach. The spring representing the load cell, however, predicts a response that qualitatively matches the assumed load-displacement response of the test specimen with a perceptibly lower magnitude of load.
Abstract:
The basic objective in the present study is to show that for the most common configuration of an impactor system, an accelerometer cannot exactly reproduce the dynamic response of a specimen subject to impact loading. Assessment of the accelerometer mounted in a drop-weight impactor setup for an axially loaded specimen is done with the aid of an equivalent lumped parameter model (LPM) of the setup. A steel hat-type specimen under the impact loading is represented as a non-linear spring of varying stiffness, while the accelerometer is assumed to behave in a linear manner due to its high stiffness. A suitable numerical approach has been used to solve the non-linear governing equations for a 3 degrees-of-freedom system in a piece-wise linear manner. The numerical solution following an explicit time integration scheme is used to yield an excellent reproduction of the mechanical behavior of the specimen thereby confirming the accuracy of the numerical approach. The spring representing the accelerometer, however, predicts a response that qualitatively matches the assumed load–displacement response of the test specimen with a perceptibly lower magnitude of load.
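The explicit time-integration approach described above can be illustrated on a reduced model; below is a 1-DOF central-difference sketch with a user-supplied (possibly piecewise-linear) spring stiffness. The masses, stiffnesses and time step are illustrative only, not the paper's 3-DOF parameters:

```python
import math

def central_difference(mass, stiffness_of, x0, v0, dt, steps):
    """Explicit central-difference integration of m*x'' = -k(x)*x.

    stiffness_of(x) returns the (possibly piecewise-linear) spring
    stiffness at displacement x.  Returns the displacement history."""
    xs = [x0]
    a0 = -stiffness_of(x0) * x0 / mass
    x_prev, x = x0, x0 + v0 * dt + 0.5 * a0 * dt * dt  # Taylor start-up step
    xs.append(x)
    for _ in range(steps - 1):
        a = -stiffness_of(x) * x / mass
        x_prev, x = x, 2.0 * x - x_prev + a * dt * dt
        xs.append(x)
    return xs

# Sanity check on a linear spring: m = 1, k = (2*pi)^2 gives a 1 s period,
# so after integrating exactly one period the mass returns to x0 = 1.
xs = central_difference(1.0, lambda x: (2.0 * math.pi) ** 2, 1.0, 0.0, 1e-3, 1000)
```

A piecewise-linear spring is handled simply by making `stiffness_of` return a different constant in each displacement interval, which mirrors the piecewise-linear solution strategy described in the abstract.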
Abstract:
We present a comparison of the Global Ocean Data Assimilation System (GODAS) five-day ocean analyses against in situ daily data from Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA) moorings at locations 90 degrees E, 12 degrees N; 90 degrees E, 8 degrees N; 90 degrees E, 0 degrees N and 90 degrees E, 1.5 degrees S in the equatorial Indian Ocean and the Bay of Bengal during 2002-2008. We find that the GODAS temperature analysis does not adequately capture a prominent signal of the Indian Ocean dipole mode of 2006 seen in the mooring data, particularly at 90 degrees E, 0 degrees N and 90 degrees E, 1.5 degrees S in the eastern Indian Ocean. The analysis, using simple statistics such as bias and root-mean-square deviation, indicates that the standard GODAS temperature has definite biases and significant differences with observations on both subseasonal and seasonal scales. Subsurface salinity has serious deficiencies as well, but this may not be surprising considering the poorly constrained fresh water forcing and possible model deficiencies in subsurface vertical mixing. The GODAS reanalysis needs improvement to make it more useful for the study of climate variability and for creating ocean initial conditions for prediction.
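The bias and root-mean-square deviation statistics used in the comparison above are simple to compute for paired model-observation series; a minimal sketch:

```python
import math

def bias(model, obs):
    """Mean model-minus-observation difference."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmsd(model, obs):
    """Root-mean-square deviation between model and observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))
```

Bias flags a systematic offset (e.g. a warm or cool model layer), while RMSD additionally penalizes scatter, so the two together distinguish a constant offset from a failure to track subseasonal variability.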