896 results for deduced optical model parameters
Abstract:
Ice volume estimates are crucial for assessing the water reserves stored in glaciers. Due to its large glacier coverage, such estimates are of particular interest for the Himalayan-Karakoram (HK) region. In this study, different existing methodologies are used to estimate the ice reserves: three area-volume relations, one slope-dependent volume estimation method, and two ice-thickness distribution models are applied to a recent, detailed, and complete glacier inventory of the HK region, spanning the period 2000-2010 and revealing an ice coverage of 40 775 km². An uncertainty and sensitivity assessment is performed to investigate the influence of the observed glacier area and important model parameters on the resulting total ice volume. Results of the two ice-thickness distribution models are validated with local ice-thickness measurements at six glaciers. The resulting ice volumes for the entire HK region range from 2955 to 4737 km³, depending on the approach. This range is lower than most previous estimates. Results from the ice-thickness distribution models and the slope-dependent thickness estimations agree well with the measured local ice thicknesses. However, total volume estimates from the area-volume relations are larger than those from the other approaches. The study provides evidence of the significant effect of the selected method on the results and underlines the importance of a careful and critical evaluation.
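The abstract does not list the specific area-volume relations applied; as an illustration only, the sketch below uses the generic power-law scaling V = c * A**gamma that such relations typically take, with coefficient values (c = 0.034, gamma = 1.375) that are common literature choices for mountain glaciers rather than the ones used in this study.

```python
# Minimal sketch of a glacier volume-area scaling relation, V = c * A**gamma.
# The coefficients below are illustrative literature values, not the study's.

def volume_from_area(area_km2, c=0.034, gamma=1.375):
    """Estimate ice volume (km^3) from glacier area (km^2) via power-law scaling."""
    return c * area_km2 ** gamma

# Because the relation is nonlinear, it is applied glacier by glacier and the
# results are summed; applying it to the total area would overestimate the volume.
areas = [1.2, 5.7, 0.4, 23.0]  # hypothetical individual glacier areas, km^2
total_volume = sum(volume_from_area(a) for a in areas)
print(f"Regional ice volume: {total_volume:.2f} km^3")
```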
Abstract:
Estimating municipal solid waste settlement and the contribution of each of its components is essential for estimating the volume of waste that can be accommodated in a landfill and for extending the post-closure use of the landfill. This article describes an experimental methodology for estimating and separating primary settlement, settlement owing to creep, and biodegradation-induced settlement. Primary and secondary settlements were estimated and separated based on the time for 100% pore-pressure dissipation and the coefficient of consolidation. Mechanical creep and biodegradation settlements were estimated and separated based on the observed time required for landfill gas production. A series of laboratory triaxial tests, creep tests and anaerobic reactor cell setups was conducted to characterize the components of settlement. All the tests were conducted on municipal solid waste (compost reject) samples. It was observed that biodegradation accounted for more than 40% of the total settlement, whereas mechanical creep contributed more than 20% towards the total settlement. The essential model parameters, such as the compression ratio (C_c'), rate of mechanical creep (c), coefficient of mechanical creep (b), rate of biodegradation (d) and the total strain owing to biodegradation (E_DG), are useful in estimating both the total settlement and its components in a landfill.
Abstract:
SARAS is a correlation spectrometer connected to a frequency-independent antenna that is purpose-designed for precision measurements of the radio background at long wavelengths. The design, calibration, and observing strategies admit solutions for the internal additive contributions to the radiometer response, and hence a separation of these contaminants from the antenna temperature. We present here a wideband measurement of the radio sky spectrum by SARAS that provides an accurate measurement of the absolute brightness and spectral index between 110 and 175 MHz. Accuracy in the measurement of absolute sky brightness is limited by systematic errors of magnitude 1.2%; errors in calibration and in the joint estimation of sky and system model parameters are relatively smaller. We use this wide-angle measurement of the sky brightness, made with the precision wide-band dipole antenna, to provide an improved absolute calibration for the 150 MHz all-sky map of Landecker and Wielebinski: subtracting an offset of 21.4 K and scaling by a factor of 1.05 will reduce the overall offset error to 8 K (from 50 K) and the scale error to 0.8% (from 5%). The SARAS measurement of the temperature spectral index is in the range -2.3 to -2.45 in the 110-175 MHz band and indicates that the region toward the Galactic bulge has a comparatively flat index.
Abstract:
This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by the bootstrap, which is also used to infer the component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing a real dataset as well as a simulated one. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that, in this model, application of the bootstrap results in a significant improvement over the simple maximum likelihood estimates.
Abstract:
An open question within the Bienenstock-Cooper-Munro theory for synaptic modification concerns the specific mechanism that is responsible for regulating the sliding modification threshold (SMT). In this conductance-based modeling study on hippocampal pyramidal neurons, we quantitatively assessed the impact of seven ion channels (R- and T-type calcium, fast sodium, delayed rectifier, A-type, and small-conductance calcium-activated (SK) potassium and HCN) and two receptors (AMPAR and NMDAR) on a calcium-dependent Bienenstock-Cooper-Munro-like plasticity rule. Our analysis with R- and T-type calcium channels revealed that differences in their activation-inactivation profiles resulted in differential impacts on how they altered the SMT. Further, we found that the impact of SK channels on the SMT critically depended on the voltage dependence and kinetics of the calcium sources with which they interacted. Next, we considered interactions among all the seven channels and the two receptors through global sensitivity analysis on 11 model parameters. We constructed 20,000 models through uniform randomization of these parameters and found 360 valid models based on experimental constraints on their plasticity profiles. Analyzing these 360 models, we found that similar plasticity profiles could emerge with several nonunique parametric combinations and that parameters exhibited weak pairwise correlations. Finally, we used seven sets of virtual knock-outs on these 360 models and found that the impact of different channels on the SMT was variable and differential. These results suggest that there are several nonunique routes to regulate the SMT, and call for a systematic analysis of the variability and state dependence of the mechanisms underlying metaplasticity during behavior and pathology.
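As a minimal sketch of the generate-and-filter workflow described above (20,000 models from uniform randomization of 11 parameters, retaining those that satisfy experimental constraints), the snippet below uses placeholder parameter bounds and a stand-in validity test; the actual bounds and plasticity-profile constraints of the study are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bounds for the 11 randomized channel/receptor parameters.
n_params, n_models = 11, 20_000
lower = np.full(n_params, 0.5)  # placeholder lower bounds
upper = np.full(n_params, 2.0)  # placeholder upper bounds

# Uniform randomization of all parameters for every candidate model.
models = rng.uniform(lower, upper, size=(n_models, n_params))

def satisfies_constraints(params):
    """Stand-in for the experimental constraints on the plasticity profile."""
    return params.mean() < 1.2  # placeholder criterion, not the study's

valid_models = models[np.array([satisfies_constraints(m) for m in models])]
print(f"{valid_models.shape[0]} of {n_models} models satisfy the constraints")

# Weak pairwise correlations among valid-model parameters can be checked directly.
corr = np.corrcoef(valid_models, rowvar=False)
```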
Abstract:
Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in the case of 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization for TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB gain in SNR over uncoded data compared with noise-predictive maximum-likelihood detection for the same choice of channel model parameters, to achieve a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm. The DDNP algorithm is also observed to give an approximately 10% gain in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
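A hedged sketch of the kind of Voronoi-based granular media model mentioned in contribution 1): grain centers are generated by jittering a regular lattice and tessellated with a Voronoi diagram; the lattice pitch and jitter magnitude below are illustrative, not the paper's media parameters.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(2)

# Grain centers: a square lattice with a ~10 nm pitch, jittered to mimic
# irregular grain positions (jitter magnitude is illustrative).
pitch_nm, n = 10.0, 32
xs, ys = np.meshgrid(np.arange(n) * pitch_nm, np.arange(n) * pitch_nm)
centers = np.column_stack([xs.ravel(), ys.ravel()])
centers += rng.normal(scale=0.2 * pitch_nm, size=centers.shape)

# The Voronoi tessellation of the jittered centers defines the grain boundaries;
# each written bit then covers the grains whose centers fall inside its footprint.
vor = Voronoi(centers)
print(f"{len(vor.point_region)} grains tessellated")
```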
Abstract:
We study the optimal control problem of maximizing the spread of an information epidemic on a social network. Information propagation is modeled as a susceptible-infected (SI) process, and the campaign budget is fixed. Direct recruitment and word-of-mouth incentives are the two strategies (controls) used to accelerate information spreading. We allow for multiple controls depending on the degree of the nodes/individuals. The solution optimally allocates the scarce resource over the campaign duration and the degree-class groups. We study the impact of the degree distribution of the network on the controls and present results for Erdős-Rényi and scale-free networks. Results show that more of the resource is allocated to high-degree nodes in the case of scale-free networks, but to medium-degree nodes in the case of Erdős-Rényi networks. We study the effects of various model parameters on the optimal strategy and quantify the improvement offered by the optimal strategy over static and bang-bang control strategies. The effect of a time-varying spreading rate on the controls is explored, as the interest level of the population in the subject of the campaign may change over time. We show the existence of a solution to the formulated optimal control problem, which has nonlinear isoperimetric constraints, using novel techniques that are general and can be used in other similar optimal control problems. This work may be of interest to political, social awareness, or crowdfunding campaigners and product marketing managers, and with some modifications may be used for mitigating biological epidemics.
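As a rough illustration of controlled susceptible-infected dynamics (not the paper's degree-based optimal-control formulation), the sketch below integrates a mean-field SI model in which a hypothetical direct-recruitment control u(t) informs susceptibles directly and a word-of-mouth incentive v(t) raises the effective spreading rate; all parameter values and control schedules are assumptions.

```python
# Minimal mean-field sketch of a controlled SI information epidemic.
# i(t): informed fraction; u(t): direct recruitment; v(t): word-of-mouth incentive.
beta, T, dt = 0.3, 10.0, 0.01
steps = int(T / dt)

def u(t):  # hypothetical direct-recruitment schedule
    return 0.05 if t < 2.0 else 0.0

def v(t):  # hypothetical word-of-mouth incentive schedule
    return 0.1 if t < 5.0 else 0.0

i = 0.01
for k in range(steps):
    t = k * dt
    # word-of-mouth raises the effective spreading rate; recruitment informs
    # susceptibles directly, independently of contact with informed nodes
    di = ((beta + v(t)) * i + u(t)) * (1.0 - i)
    i += dt * di
print(f"Informed fraction at t={T}: {i:.3f}")
```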
Abstract:
In this work, we present a numerical study of the flow of shear-thinning viscoelastic fluids in rectangular lid-driven cavities for a wide range of aspect ratios (depth-to-width ratio) varying from 1/16 to 4. In particular, the effect of elasticity, inertia, model parameters and polymer concentration on the flow features in the rectangular driven cavity has been studied for two shear-thinning viscoelastic fluids, namely, Giesekus and linear PTT. We perform numerical simulations using the symmetric square-root representation of the conformation tensor to stabilize the numerical scheme against the high Weissenberg number problem. The variation of the flow structures with the flow parameters, associated with the merging and splitting of elongated vortices in shallow cavities and the coalescence of corner eddies into a second primary vortex in deep cavities, is discussed. We discuss the effect of the dominant eigenvalues and the corresponding eigenvectors on the location of the primary eddy in the cavity. We also demonstrate, by performing numerical simulations for shallow and deep cavities, that while the Deborah number (based on the convective time scale) characterizes the elastic behaviour of the fluid in deep cavities, the Weissenberg number (based on the shear rate) should be used for shallow cavities.
Abstract:
We present up-to-date electroweak fits of various Randall-Sundrum (RS) models. We consider the bulk RS, deformed RS, and custodial RS models. For the bulk RS case we find the lightest Kaluza-Klein (KK) mode of the gauge boson to be around 8 TeV, while for the custodial case it is around 3 TeV. The deformed model is the least fine-tuned of all and can give a good fit for KK masses below 2 TeV, depending on the choice of the model parameters. We also comment on the fine-tuning in each case.
Abstract:
Changepoints are abrupt variations in the generative parameters of a data sequence. Online detection of changepoints is useful in the modelling and prediction of time series in application areas such as finance, biometrics, and robotics. While frequentist methods have yielded online filtering and prediction techniques, most Bayesian papers have focused on the retrospective segmentation problem. Here we examine the case where the model parameters before and after the changepoint are independent, and we derive an online algorithm for exact inference of the most recent changepoint. We compute the probability distribution of the length of the current "run", or time since the last changepoint, using a simple message-passing algorithm. Our implementation is highly modular, so the algorithm may be applied to a variety of types of data. We illustrate this modularity by demonstrating the algorithm on three different real-world data sets.
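The run-length recursion sketched below is one concrete instance of the message-passing algorithm described above, assuming Gaussian observations with known variance, a conjugate Normal prior on the mean, and a constant changepoint hazard; the modular design means any predictive model could be substituted.

```python
import numpy as np
from scipy.stats import norm

def bocpd_gaussian(data, hazard=1 / 250, mu0=0.0, tau0=1.0, sigma=1.0):
    """Run-length recursion for Gaussian observations with known variance
    sigma**2, a Normal(mu0, tau0**2) prior on the mean, and a constant
    changepoint hazard. Returns the run-length distribution after each step."""
    T = len(data)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    counts = np.array([0.0])  # per-run-length observation counts
    sums = np.array([0.0])    # per-run-length observation sums
    for t, x in enumerate(data):
        # posterior over the mean for each candidate run length
        post_prec = 1.0 / tau0**2 + counts / sigma**2
        post_mean = (mu0 / tau0**2 + sums / sigma**2) / post_prec
        pred_sd = np.sqrt(1.0 / post_prec + sigma**2)
        pred = norm.pdf(x, loc=post_mean, scale=pred_sd)
        growth = R[t, : t + 1] * pred * (1.0 - hazard)  # the run continues
        cp = np.sum(R[t, : t + 1] * pred * hazard)       # a changepoint occurs
        R[t + 1, 1 : t + 2] = growth
        R[t + 1, 0] = cp
        R[t + 1] /= R[t + 1].sum()
        counts = np.concatenate(([0.0], counts + 1.0))
        sums = np.concatenate(([0.0], sums + x))
    return R
```

After processing t observations, np.argmax(R[t]) gives the most probable time since the last changepoint.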
Abstract:
The refractive index and thickness of SiO2 thin films naturally grown on Si substrates were determined simultaneously within the wavelength range of 220-1100 nm with variable-angle spectroscopic ellipsometry. Different angles of incidence and wavelength ranges were chosen to enhance the analysis sensitivity for more accurate results. Several optical models describing the practical SiO2-Si system were investigated, and the best results were obtained with the optical model that includes an interface layer between SiO2 and Si, supporting the existence of the interface layer described in other publications.
Abstract:
The effect of translational nonequilibrium on performance modeling of flowing chemical oxygen-iodine lasers (COIL) is emphasized in this paper. The spectral line broadening (SLB) model is a basic factor in predicting the performance of a flowing COIL. The Voigt profile function is a well-known SLB model and is usually utilized. When the gas pressure in the laser cavity is less than 5 torr, a low-pressure-limit expression of the Voigt profile function is used. These two SLB models imply that all lasing particles can interact with monochromatic laser radiation. Basically, inhomogeneous broadening effects are not considered in these two SLB models, and they cannot predict the spectral content. The latter requires consideration of the finite translational relaxation rate. Unfortunately, it is rather difficult to solve simultaneously the Navier-Stokes (NS) equations and the conservation equations for the number of lasing particles per unit volume and per unit frequency interval. Under the operating conditions of a flowing COIL, it is possible to obtain a perturbational solution of the conservation equations for the lasing particles and deduce a new relation between the gain and the optical intensity, i.e., a new gain-saturation relation. By coupling the gain-saturation relation with the other governing equations (such as the NS equations, the chemical reaction equations and the optical model of gain-equal-loss), we have numerically calculated the performance of a flowing COIL. The present results are compared with those obtained by the common rate-equation (RE) model, in which the Voigt profile function and its low-pressure-limit expression are used. The results of the different models differ considerably. For instance, when the lasing frequency coincides with the central frequency of the line profile and the gas pressure is very low, the gain-saturation relation of the present model is quite different from that of the RE model.
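For reference, the Voigt profile (the convolution of Gaussian Doppler broadening with Lorentzian pressure broadening) can be evaluated through the Faddeeva function; the sketch below is a standard implementation with purely illustrative line-width parameters, not a reproduction of the paper's gain-saturation model.

```python
import numpy as np
from scipy.special import wofz

def voigt_profile(nu, nu0, sigma, gamma):
    """Voigt line profile: convolution of a Gaussian (std. dev. sigma, Doppler
    broadening) and a Lorentzian (HWHM gamma, pressure broadening), evaluated
    via the Faddeeva function wofz. Normalized to unit area in nu."""
    z = ((nu - nu0) + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Illustrative low-pressure case: Lorentzian width much smaller than the
# Doppler width, approaching the pure-Gaussian (inhomogeneous) limit.
nu = np.linspace(-5.0, 5.0, 1001)
profile = voigt_profile(nu, nu0=0.0, sigma=1.0, gamma=0.05)
```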
Abstract:
Using spectroscopic ellipsometry (SE), we have measured the optical properties and optical gaps of a series of amorphous carbon (a-C) films ∼ 100-300 Å thick, prepared using a filtered beam of C+ ions from a cathodic arc. Such films exhibit a wide range of sp3-bonded carbon contents, from 20 to 76 at.%, as measured by electron energy loss spectroscopy (EELS). The Tauc optical gaps of the a-C films increase monotonically from 0.65 eV for 20 at.% sp3 C to 2.25 eV for 76 at.% sp3 C. Spectra in the ellipsometric angles (1.5-5 eV) have been analyzed using different effective medium theories (EMTs), applying a simplified optical model for the dielectric function of a-C that assumes a composite material with sp2 C and sp3 C components. The most widely used EMT, namely that of Bruggeman (with three-dimensionally isotropic screening), yields atomic fractions of sp3 C that correlate monotonically with those obtained from EELS. The results of the SE analysis, however, range from 10 to 25 at.% higher than those from EELS. In fact, we have found that the volume percent sp3 C from SE using the Bruggeman EMT shows good numerical agreement with the atomic percent sp3 C from EELS. The SE-EELS discrepancy has been reduced by using an optical model in which the dielectric function of the a-C is determined as a volume-fraction-weighted average of the dielectric functions of the sp2 C and sp3 C components.
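A minimal sketch of the Bruggeman effective-medium approximation with three-dimensionally isotropic screening, as referenced above: for two components the self-consistency condition f1*(eps1 - e)/(eps1 + 2e) + f2*(eps2 - e)/(eps2 + 2e) = 0 reduces to a quadratic in the effective dielectric function. The sp2/sp3 dielectric constants below are placeholders, not values from the study.

```python
import numpy as np

def bruggeman_two_phase(eps1, eps2, f1):
    """Effective dielectric function of a two-component composite from the
    Bruggeman condition, written as the quadratic 2*e**2 - b*e - eps1*eps2 = 0
    with b = (3*f1 - 1)*eps1 + (3*f2 - 1)*eps2. The root with the larger
    imaginary part (physical for an absorbing medium) is returned."""
    f2 = 1.0 - f1
    b = (3.0 * f1 - 1.0) * eps1 + (3.0 * f2 - 1.0) * eps2
    roots = np.roots([2.0, -b, -eps1 * eps2])
    return roots[np.argmax(roots.imag)]

# Placeholder dielectric constants for the sp2 (graphitic) and sp3 (diamond-like)
# components at a single photon energy; purely illustrative values.
eps_sp2, eps_sp3 = 5.0 + 4.0j, 6.5 + 0.1j
eps_eff = bruggeman_two_phase(eps_sp2, eps_sp3, f1=0.3)
```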
Abstract:
This paper presents a method for the fast and direct extraction of model parameters of capacitive MEMS resonators, such as the quality factor, resonant frequency, and motional resistance, from their measured transmission response. We show that these parameters may be extracted without having to first de-embed the resonator motional current from the feedthrough. The series and parallel resonances of the measured electrical transmission are used to determine the MEMS resonator circuit parameters. The theoretical basis for the method is elucidated using both the Nyquist and susceptance frequency-response plots; the method is applicable in the limit where C_F > C_m Q, which is commonly the case when characterizing MEMS resonators at RF. The method is then applied to the measured electrical transmission of capacitively transduced MEMS resonators and compared against parameters obtained using a Lorentzian fit to the measured response. Close agreement between the two methods is reported herein.
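As a hedged illustration of extracting motional parameters from the series and parallel resonances, the sketch below uses the textbook relations for a motional RLC branch shunted by a feedthrough capacitance C_F; these are standard expressions and not necessarily the exact ones derived in the paper.

```python
import math

def extract_motional_params(f_s, f_p, Q, C_F):
    """Estimate the motional elements of a capacitive MEMS resonator from the
    series resonance f_s, parallel resonance f_p of the measured transmission,
    the quality factor Q, and the feedthrough capacitance C_F (SI units).
    Uses standard relations for an RLC branch shunted by C_F."""
    C_m = C_F * ((f_p / f_s) ** 2 - 1.0)             # motional capacitance
    L_m = 1.0 / ((2.0 * math.pi * f_s) ** 2 * C_m)   # motional inductance
    R_m = 2.0 * math.pi * f_s * L_m / Q              # motional resistance
    return C_m, L_m, R_m

# Hypothetical measurement: 10 MHz resonator, Q = 10,000, 50 fF feedthrough.
C_m, L_m, R_m = extract_motional_params(f_s=10.0e6, f_p=10.001e6, Q=1.0e4, C_F=50e-15)
print(f"C_m = {C_m:.3e} F, L_m = {L_m:.3e} H, R_m = {R_m:.1f} ohm")
```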
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear, non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011], an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. L_p bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these L_p bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
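The filter-derivative recursion itself is beyond a short sketch, but the bootstrap particle filter underlying it is easy to illustrate; the snippet below runs one on the standard stochastic volatility model with illustrative parameter values (phi, sigma, beta), which are assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard stochastic volatility model (illustrative parameters):
#   x_t = phi * x_{t-1} + sigma * v_t,   v_t ~ N(0, 1)
#   y_t = beta * exp(x_t / 2) * w_t,     w_t ~ N(0, 1)
phi, sigma, beta = 0.95, 0.3, 0.7
T, N = 200, 1000

# simulate synthetic data from the model
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
y = beta * np.exp(x / 2.0) * rng.standard_normal(T)

# bootstrap particle filter: propagate, weight by the observation density, resample
particles = rng.standard_normal(N)
loglik = 0.0
for t in range(T):
    particles = phi * particles + sigma * rng.standard_normal(N)
    obs_sd = beta * np.exp(particles / 2.0)
    logw = -0.5 * np.log(2.0 * np.pi * obs_sd**2) - 0.5 * (y[t] / obs_sd) ** 2
    w = np.exp(logw - logw.max())
    loglik += logw.max() + np.log(w.mean())
    particles = rng.choice(particles, size=N, p=w / w.sum())  # multinomial resampling
print(f"Particle estimate of the log-likelihood: {loglik:.1f}")
```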