909 results for EVALUATION MODEL
Abstract:
We develop a communication-theoretic framework for modeling 2-D magnetic recording channels. Using the model, we define the signal-to-noise ratio (SNR) for the channel in terms of several physical parameters, such as the channel bit density, code rate, bit aspect ratio, and noise parameters. We analyze the problem of optimizing the bit aspect ratio to maximize SNR. The read channel architecture comprises a novel 2-D joint self-iterating equalizer and detection system with noise prediction capability. We evaluate the system performance based on our channel model through simulations. The coded performance with the 2-D equalizer detector indicates an SNR gain of approximately 5.5 dB over uncoded data.
Abstract:
The present work reports the biocompatibility of injection-molded HDPE-HA-Al2O3 hybrid composites. In vitro cytocompatibility results reveal that osteogenic cell viability and bone mineralization are favorably supported, in a statistically significant manner, on the HDPE-20% HA-20% Al2O3 composite in comparison to HDPE-40 wt.% HA or HDPE-40 wt.% Al2O3. The difference in cytocompatibility is explained in terms of differences in substrate wettability/surface energy and, importantly, both cell proliferation at 7 days and bone mineralization at 21 days on the HDPE-20% HA-20% Al2O3 composite are comparable to or better than those on sintered HA. The progressive healing of cylindrical femoral bone defects in a rabbit model was assessed by implantation experiments over 1, 4, and 12 weeks. Based on the histological analysis as well as the histomorphometrical evaluation, a better efficacy of HDPE-20% HA-20% Al2O3 over high-density polyethylene (HDPE) for bone regeneration and neobone formation at the host bone-implant interface was established. Taken together, the present study unequivocally establishes that, despite the presence of 20% Al2O3, HDPE-based hybrid composites are as biocompatible as HA in vitro and better than HDPE in vivo.
Abstract:
Three possible contact conditions may prevail at a contact interface depending on the magnitude of the normal and tangential loads: the stick condition, the partial slip condition, or the gross sliding condition. Numerical techniques have been used to evaluate the stress field under partial slip and gross sliding conditions. The Cattaneo-Mindlin approach has been adapted to model the partial slip condition. The shear strain energy density and the normalized strain energy release rate have been evaluated at the surface and in the subsurface region. The present study shows that the shear strain energy density gives a fair prediction of damage nucleation, whereas crack propagation is controlled by the normalized strain energy release rate. Further, the intensity of damage has been observed to depend strongly on the coefficient of friction and the contact conditions prevailing at the contact interface. (C) 2014 Elsevier B.V. All rights reserved.
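The Cattaneo-Mindlin partial-slip traction for a 2-D Hertzian contact can be sketched numerically. The friction coefficient, peak pressure, and load ratio below are illustrative values, not parameters from the study:

```python
import numpy as np

# Cattaneo-Mindlin partial-slip shear traction for a 2-D Hertzian contact:
# full-sliding traction mu*p0*sqrt(1-(x/a)^2), minus a corrective term over
# the stick zone |x| < c, with c/a = sqrt(1 - Q/(mu*P)).
mu, p0, a = 0.6, 1.0, 1.0            # friction coeff., peak pressure, half-width
P = 0.5 * np.pi * p0 * a             # total normal load per unit length
Q = 0.5 * mu * P                     # tangential load below gross sliding
c = a * np.sqrt(1.0 - Q / (mu * P))  # stick-zone half-width

x = np.linspace(-a, a, 2001)
q = mu * p0 * np.sqrt(np.clip(1.0 - (x / a) ** 2, 0.0, None))
stick = np.abs(x) < c
q[stick] -= mu * p0 * (c / a) * np.sqrt(1.0 - (x[stick] / c) ** 2)

# Integrating q over the contact should recover the applied tangential load Q
Q_num = float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(x)))
print(f"c/a = {c / a:.3f}, recovered Q/(mu*P) = {Q_num / (mu * P):.3f}")
```

The traction is maximal in the slip annuli near the contact edges, which is where fretting damage is expected to nucleate under partial slip.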
Abstract:
The objective of this study is to evaluate the ability of a European chemistry transport model, 'CHIMERE', driven by the US meteorological model MM5, to simulate aerosol concentrations [dust, PM10, and black carbon (BC)] over the Indian region. An evaluation of a meteorological event (a dust storm) and of the impact of changes in soil-related parameters and meteorological input grid resolution on these aerosol concentrations has been performed. A dust storm simulation over the Indo-Gangetic basin indicates the ability of the model to capture dust storm events. Measured (AERONET data) and simulated parameters such as the aerosol optical depth (AOD) and the Angstrom exponent are used to evaluate the performance of the model in capturing the dust storm event. A sensitivity study is performed to investigate the impact of changes in soil characteristics (thickness of the soil layer in contact with air, volumetric water, and air content of the soil) and meteorological input grid resolution on the aerosol (dust, PM10, BC) distribution. Results show that the soil parameters and the meteorological input grid resolution have an important impact on the spatial distribution of aerosol (dust, PM10, BC) concentrations.
Abstract:
Several statistical downscaling models have been developed in the past couple of decades to assess the hydrologic impacts of climate change by projecting station-scale hydrological variables from large-scale atmospheric variables simulated by general circulation models (GCMs). This paper presents and compares different statistical downscaling models that use multiple linear regression (MLR), positive coefficient regression (PCR), stepwise regression (SR), and support vector machine (SVM) techniques for estimating monthly rainfall amounts in the state of Florida. Mean sea level pressure, air temperature, geopotential height, specific humidity, U wind, and V wind are used as the explanatory variables/predictors in the downscaling models. Data for these variables are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis dataset and the Canadian Centre for Climate Modelling and Analysis (CCCma) Coupled Global Climate Model, version 3 (CGCM3) GCM simulations. Principal component analysis (PCA) and the fuzzy c-means clustering method (FCM) are used as part of the downscaling models to reduce the dimensionality of the dataset and to identify clusters in the data, respectively. Evaluation of the performance of the models using different error and statistical measures indicates that the SVM-based model performed better than all the other models in reproducing most monthly rainfall statistics at 18 sites. Output from the third-generation CGCM3 GCM for the A1B scenario was used for future projections. For the projection period 2001-10, MLR was used to relate variables at the GCM and NCEP grid scales. Use of MLR in linking the predictor variables at the GCM and NCEP grid scales yielded better reproduction of monthly rainfall statistics at most of the stations (12 out of 18) compared to the spatial interpolation technique used in earlier studies.
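As an illustration of the PCA-plus-regression stage of such downscaling models, the following sketch reduces a synthetic predictor set with PCA and fits an MLR model on the retained components. All data, dimensions, and coefficients are made up, not the NCEP-NCAR or CGCM3 values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly data: 240 months x 12 large-scale predictors, where the
# first two columns carry most of the variance (hypothetical dominant modes),
# and a synthetic station rainfall anomaly driven by those two columns.
scales = np.array([5.0, 4.0] + [0.5] * 10)
X = rng.normal(size=(240, 12)) * scales
y = 0.4 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.3, size=240)

# PCA via SVD on the centered predictors; keep the leading components
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5                                # retained principal components
Z = Xc @ Vt[:k].T                    # reduced predictor set

# Multiple linear regression (MLR) on the principal components
A = np.column_stack([np.ones(len(Z)), Z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = float(np.sqrt(np.mean((y - A @ coef) ** 2)))
print(f"downscaling fit RMSE: {rmse:.3f}")
```

The SVM variant of the paper would replace the final regression step with support vector regression on the same reduced predictors.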
Abstract:
The ability of coupled general circulation models (CGCMs) participating in the Intergovernmental Panel on Climate Change's fourth assessment report (IPCC AR4) for the 20th-century climate (20C3M scenario) to simulate daily precipitation over the Indian region is explored. The skill is evaluated on a 2.5° × 2.5° grid square against the Indian Meteorological Department's (IMD) gridded dataset, and every GCM is ranked for each of these grids based on its skill score. Skill scores (SSs) are estimated from the probability density functions (PDFs) obtained from the observed IMD datasets and the GCM simulations. The methodology takes into account (high) extreme precipitation events simulated by the GCMs. The results are analyzed and presented for three categories and six zones. The three categories are the monsoon season (JJASO: June to October), the non-monsoon season (JFMAMND: January to May, November, December), and the entire year ("Annual"). The six precipitation zones are peninsular, west central, northwest, northeast, central northeast India, and the hilly region. Sensitivity analysis was performed for three spatial scales (2.5° grid square, zones, and all of India) in the three categories. The models were ranked based on the SS. The category JFMAMND had a higher SS than the JJASO category. The northwest zone had higher SSs, whereas the peninsular and hilly regions had lower SSs. No single GCM can be identified as the best for all categories and zones. Some models consistently outperformed the model ensemble, and one model had particularly poor performance. Results show that most models underestimated the daily precipitation rates in the 0-1 mm/day range and overestimated them in the 1-15 mm/day range.
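A PDF-overlap skill score of the kind used for this ranking can be sketched as follows. The binning and the synthetic "observed" and "GCM" rainfall series are illustrative assumptions, not the IMD or AR4 data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily precipitation series (mm/day): observed vs. one GCM
obs = rng.gamma(shape=0.6, scale=8.0, size=5000)
gcm = rng.gamma(shape=0.5, scale=10.0, size=5000)

def skill_score(obs, sim, bins):
    """PDF-overlap skill score: sum over bins of min(P_obs, P_sim).
    Equals 1.0 for identical empirical PDFs, near 0 for disjoint ones."""
    p_obs, _ = np.histogram(obs, bins=bins)
    p_sim, _ = np.histogram(sim, bins=bins)
    p_obs = p_obs / p_obs.sum()
    p_sim = p_sim / p_sim.sum()
    return float(np.minimum(p_obs, p_sim).sum())

bins = np.arange(0.0, 101.0, 1.0)    # 1 mm/day bins up to 100 mm/day
ss = skill_score(obs, gcm, bins)
print(f"skill score: {ss:.2f}")
```

Including bins out to high rain rates is what lets the score penalize models that miss the extreme-precipitation tail.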
Abstract:
Multifrequency atomic force microscopy is a powerful nanoscale imaging and characterization technique that involves excitation of the atomic force microscope (AFM) probe and measurement of its response at multiple frequencies. This paper reports the design, fabrication, and evaluation of AFM probes with a specified set of torsional eigenfrequencies that enhance sensitivity in multifrequency AFM. A general approach is proposed to design the probes, which includes the design of their generic geometry, the adoption of a simple lumped-parameter model, guidelines for determining the initial dimensions, and an iterative scheme to obtain a probe with the specified eigenfrequencies. The proposed approach is employed to design a harmonic probe in which the second and third eigenfrequencies are the corresponding harmonics of the first eigenfrequency. The probe is subsequently fabricated and evaluated. The experimentally measured eigenfrequencies and associated mode shapes are shown to closely match the theoretical results. Finally, a simulation study demonstrates significant improvements in sensitivity to the second- and third-harmonic spectral components of the tip-sample interaction force with the harmonic probe compared to a conventional probe.
Abstract:
The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation Highly Resolved Observational Data Integration Toward Evaluation of the Water Resources (APHRODITE) data over a subtropical region of the Indian subcontinent, for seasonal rainfall over the 6 years from June 2002 to September 2007. The data products examined include v7 data from the TRMM Microwave Imager (TMI) and the precipitation radar (PR), namely 2A12, 2A25, and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified based on performance metrics derived from the contingency table. For seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that the random errors from multiplicative error models were homoscedastic and better represented rainfall estimates from the 2A12 algorithm. Error decomposition techniques, performed to disentangle systematic and random errors, verify that the multiplicative error model representing rainfall from the 2A12 algorithm captured a greater percentage of the systematic error than for the 2A25 or 2B31 algorithms. The results verify that although the radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India shows that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
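A minimal sketch of fitting a multiplicative error model of the kind described, using synthetic paired rainfall (all parameter values are made up), separates the systematic part, fitted in log space, from the random residual:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired daily rainfall (mm/day): reference vs. satellite,
# generated from a known multiplicative law sat = a * ref**b * eps
ref = rng.gamma(shape=1.2, scale=10.0, size=2000) + 0.1
sat = 1.3 * ref ** 0.9 * np.exp(rng.normal(scale=0.4, size=ref.size))

# Fit the multiplicative error model in log space: the fitted (a, b) is the
# systematic component; the residual is the (homoscedastic) random error.
log_ref, log_sat = np.log(ref), np.log(sat)
b, log_a = np.polyfit(log_ref, log_sat, 1)
resid = log_sat - (log_a + b * log_ref)

print(f"a = {np.exp(log_a):.2f}, b = {b:.2f}, residual std = {resid.std():.2f}")
```

A constant residual spread across the range of log_ref is what the abstract means by the random errors being homoscedastic.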
Abstract:
Purpose: The composition of coronary artery plaque is known to play a critical role in heart attack. While calcified plaque can easily be diagnosed by conventional CT, CT fails to distinguish between fibrous and lipid-rich plaques. In the present paper, the authors discuss the experimental techniques and obtain a numerical algorithm by which the electron density (rho_e) and the effective atomic number (Z_eff) can be obtained from dual energy computed tomography (DECT) data. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. Methods: For calibration of the CT machine, the authors prepare aqueous samples whose calculated values of (rho_e, Z_eff) lie in the range 2.65 × 10^23 <= rho_e <= 3.64 × 10^23 cm^-3 and 6.80 <= Z_eff <= 8.90. The authors fill the phantom with these known samples and experimentally determine HU(V1) and HU(V2), with V1, V2 = 100 and 140 kVp, for the same pixels, and thus determine the coefficients of inversion that allow (rho_e, Z_eff) to be determined from the DECT data. The HU(100) and HU(140) for the coronary artery plaque are obtained by filling the channel of the coronary artery with a viscous solution of methyl cellulose in water containing 2% contrast. These (rho_e, Z_eff) values of the coronary artery plaque are used for its characterization on the basis of theoretical models of the atomic compositions of the plaque materials. These results are compared with the histopathological report. Results: The authors find that the calibration gives rho_e with an accuracy of 3.5%, while Z_eff is found within 1% of the actual value, at 95% confidence. The HU(100) and HU(140) are found to be considerably different for the same plaque at the same position, and there is a linear trend between these two HU values.
It is noted that pure lipid-type plaques are practically nonexistent, and microcalcification, as observed in histopathology, has to be taken into account to explain the nature of the observed (rho_e, Z_eff) data. This also enables the authors to judge the composition of the plaque in terms of a basic model that considers the plaque to be composed of fibres, lipids, and microcalcification. Conclusions: This simple and reliable method has the potential to serve as an effective modality to investigate the composition of noncalcified coronary artery plaques and thus help in their characterization. In this inversion method, (rho_e, Z_eff) of the scanned sample can be found by eliminating the effects of the CT machine while ensuring that the determinations of the two unknowns do not interfere with each other, and the nature of the plaque can be identified in terms of a three-component model. (C) 2015 American Association of Physicists in Medicine.
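The two-voltage inversion described above can be sketched as a linear solve once HU at each voltage is modeled as a*rho_e + b*rho_e*Z_eff^n + c (a common photoelectric-plus-Compton parametrization). The exponent n and all machine coefficients below are illustrative assumptions, not the calibrated values of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical forward model: HU is linear in x1 = rho_e and
# x2 = rho_e * Z_eff**n. The exponent and coefficients are made up.
n = 3.3
def hu_model(rho_e, z_eff, coef):
    a, b, c = coef
    return a * rho_e + b * rho_e * z_eff ** n + c

true_100 = (55.0, 0.08, -1000.0)   # made-up machine coefficients, 100 kVp
true_140 = (60.0, 0.03, -1000.0)   # made-up machine coefficients, 140 kVp

# Calibration samples with known (rho_e, Z_eff), as with the phantom study
rho = rng.uniform(2.6, 3.7, size=12)    # units of 1e23 / cm^3
zeff = rng.uniform(6.8, 8.9, size=12)
hu100 = hu_model(rho, zeff, true_100)
hu140 = hu_model(rho, zeff, true_140)

# Fit the coefficients of inversion by least squares from the calibration
A = np.column_stack([rho, rho * zeff ** n, np.ones_like(rho)])
c100, *_ = np.linalg.lstsq(A, hu100, rcond=None)
c140, *_ = np.linalg.lstsq(A, hu140, rcond=None)

# Inversion for an unknown pixel: with x1 = rho_e and x2 = rho_e * Z_eff**n,
# the two HU equations are linear in (x1, x2) and solve directly.
hu_p = np.array([hu_model(3.2, 7.5, true_100) - c100[2],
                 hu_model(3.2, 7.5, true_140) - c140[2]])
M = np.array([[c100[0], c100[1]], [c140[0], c140[1]]])
x1, x2 = np.linalg.solve(M, hu_p)
print(f"rho_e = {x1:.2f}, Z_eff = {(x2 / x1) ** (1 / n):.2f}")
```

Because the two unknowns enter through separate columns of the 2×2 system, their determinations do not interfere with each other, which is the decoupling property the abstract emphasizes.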
Abstract:
The first objective of this paper is to show that a single-stage adsorption-based cooling-cum-desalination system cannot be used if air-cooled heat rejection is employed under tropical conditions. This objective is achieved by operating a silica gel + water adsorption chiller first in a single-stage mode and then in a 2-stage mode, with 2 beds per stage in each case. The second objective is to improve upon the simulation results obtained earlier by empirically describing the thermal wave phenomena during the switching of beds between adsorption and desorption and vice versa. Performance indicators, namely cooling capacity, coefficient of performance, and desalinated water output, are extracted for various evaporator pressures and half-cycle times. The improved simulation model is found to interpret the experimental results more closely than the earlier one. Reasons for the decline in performance indicators between the theoretical and actual scenarios are appraised. (C) 2015 Elsevier Ltd and IIR. All rights reserved.
Abstract:
Aerosol loading over the South Asian region has the potential to affect the monsoon rainfall, Himalayan glaciers, and regional air quality, with implications for the billions living in this region. While field campaigns and network observations provide primary data, they tend to be location- or season-specific. Numerical models are useful to regionalize such location-specific data. Studies have shown that numerical models underestimate the aerosol loading over the Indian region, mainly due to shortcomings related to meteorology and the emission inventories used. In this context, we have evaluated the performance of two such chemistry-transport models, WRF-Chem and SPRINTARS, over an India-centric domain. The models differ in many aspects, including physical domain, horizontal resolution, and meteorological forcing. Despite these differences, both models simulated similar spatial patterns of black carbon (BC) mass concentration (with a spatial correlation of 0.9 with each other) and reasonable estimates of its concentration, though both underestimated it vis-a-vis the observations. While the emissions are lower (higher) in SPRINTARS (WRF-Chem), an overestimation of wind parameters in WRF-Chem caused the concentrations to be similar in the two models. Additionally, we quantified the underestimation of anthropogenic BC emissions in the inventories used in these two models and in three other widely used emission inventories. Our analysis indicates that all these emission inventories underestimate BC emissions over India by a factor ranging from 1.5 to 2.9. We have also studied the model simulations of aerosol optical depth (AOD) over the Indian region. The models differ significantly in their simulations of AOD, with WRF-Chem agreeing better with satellite observations of AOD as far as the spatial pattern is concerned. It is important to note that, in addition to BC, dust can also contribute significantly to AOD.
The models differ in their simulations of the spatial pattern of mineral dust over the Indian region. We find that both meteorological forcing and emission formulation contribute to these differences. Since AOD is a column-integrated parameter, the description of vertical profiles in the two models, especially since elevated aerosol layers are often observed over the Indian region, could also be a contributing factor. Additionally, differences in the prescription of the optical properties of BC between the models appear to affect the AOD simulations. We also compared the simulation of sea-salt concentration in the two models and found that WRF-Chem underestimated its concentration vis-a-vis SPRINTARS. Differences in near-surface oceanic wind speeds appear to be the main source of this discrepancy. In spite of these differences, there are similarities in the simulated spatial patterns of the various aerosol species (with each other and with observations), and hence the models can be valuable tools for aerosol-related studies over the Indian region. Better estimation of emission inventories could improve aerosol-related simulations. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
With advances in technology, seismological theory, and data acquisition, a number of high-resolution seismic tomography models have been published. However, discrepancies between tomography models often arise from different theoretical treatments of seismic wave propagation, different inversion strategies, and different data sets. Using a fixed velocity-to-density scaling and a fixed radial viscosity profile, we compute global mantle flow models associated with the different tomography models and test their impact on explaining surface geophysical observations (geoid, dynamic topography, stress, and strain rates). We use the joint modeling of lithosphere and mantle dynamics approach of Ghosh and Holt (2012) to compute the full lithosphere stresses, except that we use HC for the mantle circulation model, which accounts for the primary flow-coupling features associated with density-driven mantle flow. Our results show that the seismic tomography models S40RTS and SAW642AN provide a better match with surface observables on a global scale than the other models tested. Both of these tomography models share important similarities, including upwellings located beneath the Pacific, Eastern Africa, Iceland, and the mid-ocean ridges in the Atlantic and Indian Oceans, and downwelling flows mainly located beneath the Andes, the Middle East, and central and Southeast Asia.
Abstract:
Using spectroscopic ellipsometry (SE), we have measured the optical properties and optical gaps of a series of amorphous carbon (a-C) films ∼100-300 Å thick, prepared using a filtered beam of C+ ions from a cathodic arc. Such films exhibit a wide range of sp3-bonded carbon contents, from 20 to 76 at.%, as measured by electron energy loss spectroscopy (EELS). The Tauc optical gaps of the a-C films increase monotonically from 0.65 eV for 20 at.% sp3 C to 2.25 eV for 76 at.% sp3 C. Spectra in the ellipsometric angles (1.5-5 eV) have been analyzed using different effective medium theories (EMTs), applying a simplified optical model for the dielectric function of a-C that assumes a composite material with sp2 C and sp3 C components. The most widely used EMT, namely that of Bruggeman (with three-dimensionally isotropic screening), yields atomic fractions of sp3 C that correlate monotonically with those obtained from EELS. The results of the SE analysis, however, range from 10 to 25 at.% higher than those from EELS. In fact, we have found that the volume percent sp3 C from SE using the Bruggeman EMT shows good numerical agreement with the atomic percent sp3 C from EELS. The SE-EELS discrepancy has been reduced by using an optical model in which the dielectric function of the a-C is determined as a volume-fraction-weighted average of the dielectric functions of the sp2 C and sp3 C components. © 1998 Elsevier Science S.A.
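The Bruggeman effective-medium equation for a two-component mixture with three-dimensionally isotropic screening can be solved directly, since clearing denominators yields a quadratic in the effective dielectric function. The sp2/sp3 dielectric values below are illustrative placeholders, not the measured a-C data:

```python
import numpy as np

# Bruggeman effective-medium approximation for a two-component mixture:
#   f1*(e1 - e)/(e1 + 2e) + f2*(e2 - e)/(e2 + 2e) = 0
# Clearing denominators gives the quadratic
#   -2e^2 + [(3f1 - 1)e1 + (3f2 - 1)e2]e + e1*e2 = 0,
# whose physical root has a non-negative imaginary part.
def bruggeman(e1, e2, f1):
    f2 = 1.0 - f1
    a = -2.0
    b = (3 * f1 - 1) * e1 + (3 * f2 - 1) * e2
    c = e1 * e2
    roots = np.roots([a, b, c])
    return roots[np.argmax(roots.imag)]   # physical branch: Im(e) >= 0

e_sp2 = 6.0 + 4.0j   # hypothetical sp2-C dielectric function at one photon energy
e_sp3 = 5.0 + 0.5j   # hypothetical sp3-C dielectric function
e_eff = bruggeman(e_sp2, e_sp3, f1=0.4)
print(f"effective dielectric function: {e_eff.real:.3f} + {e_eff.imag:.3f}j")
```

Fitting the measured ellipsometric spectra then amounts to adjusting the volume fraction f1 until the computed e_eff reproduces the data at each photon energy.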
Abstract:
State-of-the-art large vocabulary continuous speech recognition (LVCSR) systems often combine outputs from multiple subsystems developed at different sites. Cross-system adaptation can be used as an alternative to direct hypothesis-level combination schemes such as ROVER. In normal cross adaptation it is assumed that useful diversity among systems exists only at the acoustic level. However, complementary features among complex LVCSR systems also manifest themselves in other layers of the modelling hierarchy, e.g., the subword and word levels. It is thus interesting to also cross-adapt language models (LMs) to capture them. In this paper, cross adaptation of multi-level LMs modelling both syllable and word sequences was investigated to improve LVCSR system combination. Significant relative error rate gains of up to 6.7% were obtained over ROVER and acoustic-model-only cross adaptation when combining 13 Chinese LVCSR subsystems used in the 2010 DARPA GALE evaluation. © 2010 ISCA.
Abstract:
Based on the scaling criteria for polymer flooding reservoirs obtained in our previous work, in which gravity and capillary forces, compressibility, non-Newtonian behavior, adsorption, dispersion, and diffusion are considered, eight partial similarity models are designed. A new numerical approach to sensitivity analysis is suggested to quantify the dominance degree of the relaxed dimensionless parameters of a partial similarity model. A sensitivity factor quantifying the dominance degree of each relaxed dimensionless parameter is defined. By solving the dimensionless governing equations including all dimensionless parameters, the sensitivity factor of each relaxed dimensionless parameter is calculated for each partial similarity model; thus, the dominance degree of the relaxed parameter is quantitatively determined. Based on the sensitivity analysis, the effect coefficient of a partial similarity model is defined as the sum over the relaxed dimensionless parameters of the product of the sensitivity factor and the relative relaxation quantity. The effect coefficient is used as a criterion to evaluate each partial similarity model. The partial similarity model with the smallest effect coefficient can then be singled out as the best approximation to the prototype. Results show that the precision of a partial similarity model is determined not only by the number of satisfied dimensionless parameters but also by the relative relaxation quantities of the relaxed ones.
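A minimal sketch of ranking partial similarity models by such an effect coefficient (the sum, over relaxed dimensionless parameters, of sensitivity factor times relative relaxation quantity) with hypothetical numbers:

```python
import numpy as np

# Effect coefficient E = sum_i |S_i| * |delta_i|, where S_i is the sensitivity
# factor of a relaxed dimensionless parameter and delta_i is its relative
# relaxation quantity. All numbers below are hypothetical illustrations.
def effect_coefficient(sensitivities, relaxations):
    return float(np.sum(np.abs(sensitivities) * np.abs(relaxations)))

models = {
    "model A": ([0.8, 0.1], [0.30, 0.50]),          # relaxes 2 parameters
    "model B": ([0.8, 0.1, 0.05], [0.05, 0.40, 0.60]),  # relaxes 3 parameters
}
scores = {name: effect_coefficient(s, d) for name, (s, d) in models.items()}
best = min(scores, key=scores.get)
print(scores, "->", best)
```

In this made-up example the model relaxing three parameters scores lower than the one relaxing two, illustrating the paper's point that the count of satisfied dimensionless parameters alone does not determine precision.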