299 results for Parameter Optimization


Relevance: 20.00%

Abstract:

Experimental quantum simulation of a Hamiltonian H requires unitary operator decomposition (UOD) of its evolution unitary U = exp(-iHt) in terms of native unitary operators of the experimental system. Here, using a genetic algorithm, we numerically evaluate the most generic UOD (valid over a continuous range of Hamiltonian parameters) of the unitary operator U, termed fidelity-profile optimization. The optimization is obtained by systematically evaluating the functional dependence of experimental unitary operators (such as single-qubit rotations and time-evolution unitaries of the system interactions) on the Hamiltonian (H) parameters. Using this technique, we have solved the experimental unitary decomposition of a controlled-phase gate (for any phase value), the evolution unitary of the Heisenberg XY interaction, and the simulation of the Dzyaloshinskii-Moriya (DM) interaction in the presence of the Heisenberg XY interaction. Using these decompositions, we studied the entanglement dynamics of a Bell state under the DM interaction and experimentally verified the entanglement preservation procedure of Hou et al. [Ann. Phys. (N.Y.) 327, 292 (2012)] in a nuclear magnetic resonance quantum information processor.
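A minimal sketch of the objective such a genetic search optimizes is the gate fidelity between a target unitary and a parameterized candidate. Everything below (the controlled-phase target, the coarse grid search standing in for the genetic algorithm, and the example phase 0.7) is illustrative, not the authors' decomposition:

```python
import numpy as np

def cphase(phi):
    """Controlled-phase gate for an arbitrary phase value phi."""
    return np.diag([1, 1, 1, np.exp(1j * phi)])

def gate_fidelity(U, V):
    """Gate fidelity |Tr(U^dagger V)| / d between two unitaries."""
    d = U.shape[0]
    return abs(np.trace(U.conj().T @ V)) / d

# Crude stand-in for the genetic search: scan one candidate parameter
# and keep the value maximizing fidelity to the target gate.
target = cphase(0.7)
thetas = np.linspace(0, np.pi, 2001)
best = max(thetas, key=lambda t: gate_fidelity(target, cphase(t)))
```

A fidelity-profile optimization would repeat such a search over a continuous range of Hamiltonian parameters, keeping decompositions whose fidelity stays near unity across the whole range.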

Relevance: 20.00%

Abstract:

An optimal measurement selection strategy based on incoherence among rows (corresponding to measurements) of the sensitivity (or weight) matrix is proposed for near-infrared diffuse optical tomography. As incoherence among the measurements can be seen as providing maximally independent information for the estimation of optical properties, it offers the high level of optimization required to assess how independent a particular measurement is of its counterparts. The proposed method was compared with the recently established data-resolution matrix-based approach for the optimal choice of independent measurements and shown, using simulated and experimental gelatin phantom data sets, to be superior, as it provides the same information without requiring an optimal regularization parameter. (C) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
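The row-incoherence criterion can be illustrated with a toy greedy selector. The random sensitivity matrix and the specific greedy rule below are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def row_coherence(J):
    """Absolute normalized inner products between rows of J."""
    Jn = J / np.linalg.norm(J, axis=1, keepdims=True)
    G = np.abs(Jn @ Jn.T)
    np.fill_diagonal(G, 0.0)
    return G

def select_incoherent(J, k):
    """Greedily pick k measurement rows, each maximally incoherent
    with the rows already chosen."""
    G = row_coherence(J)
    chosen = [int(np.argmax(np.linalg.norm(J, axis=1)))]  # start from the strongest row
    while len(chosen) < k:
        worst = G[:, chosen].max(axis=1)  # worst-case coherence with the chosen set
        worst[chosen] = np.inf            # exclude rows already chosen
        chosen.append(int(np.argmin(worst)))
    return chosen

rng = np.random.default_rng(0)
J = rng.standard_normal((20, 50))  # 20 candidate measurements, 50 unknowns
picked = select_incoherent(J, 5)
```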

Relevance: 20.00%

Abstract:

Using a realistic nonlinear mathematical model of melanoma dynamics and the technique of optimal dynamic inversion (exact feedback linearization with static optimization), a multimodal automatic drug dosage strategy is proposed in this paper for complete regression of melanoma in humans. The proposed strategy computes different drug dosages and gives a nonlinear state feedback solution for driving the number of cancer cells to zero. However, it is observed that once the tumor has regressed below a certain value, no external drug dosages are needed, as the immune system and other therapeutic states regress the tumor at a faster-than-exponential rate. As the model has three different drug dosages, after applying the dynamic inversion philosophy, the dosages can be selected in an optimized manner without crossing their toxicity limits. The combination of drug dosages is decided by appropriately selecting the control design parameter values based on physical constraints. The process is automated for all possible combinations of the chemotherapy and immunotherapy drug dosages, with preferential emphasis on having the maximum possible variety of drug inputs at any given point in time. A simulation study with a standard patient model shows that tumor cells are regressed from 2 × 10^7 to the order of 10^5 cells by external drug dosages in 36.93 days. After this, no external drug dosages are required, as the immune system and other therapeutic states regress the tumor at a faster-than-exponential rate; hence, the tumor goes to zero (fewer than 0.01 cells) in 48.77 days and the healthy immune system of the patient is restored. A study with different chemotherapy drug resistance values is also carried out. (C) 2014 Elsevier Ltd. All rights reserved.
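The dynamic inversion idea (exact feedback linearization) can be shown on a scalar toy model. The growth and control terms below are invented for illustration and are not the melanoma model used in the paper:

```python
# Dynamic inversion on a toy scalar system dx/dt = f(x) + g(x)*u:
# choosing u = (-f(x) - k*x) / g(x) enforces dx/dt = -k*x, i.e.
# exponential regression of the state to zero.
f = lambda x: 0.3 * x * (1 - x / 5.0)  # illustrative nonlinear growth term
g = lambda x: -1.0                     # illustrative control effectiveness
k = 1.0                                # design parameter setting the decay rate

x, dt = 2.0, 0.01
for _ in range(1000):                  # forward-Euler simulation of 10 time units
    u = (-f(x) - k * x) / g(x)         # inversion-based feedback law
    x += dt * (f(x) + g(x) * u)        # closed loop reduces to dx/dt = -k*x
```

In the paper's multimodal setting, the analogous inversion step is combined with a static optimization that distributes the required control effort over three drug inputs subject to toxicity limits.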

Relevance: 20.00%

Abstract:

Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including Gaussian, Cauchy, and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that this class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
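A one-sided SF gradient estimator with the Gaussian kernel (the q -> 1 member of the q-Gaussian family) can be sketched as follows; the toy quadratic objective and all constants are illustrative:

```python
import numpy as np

def sf_gradient(f, x, beta=0.1, n_samples=5000, rng=None):
    """One-sided smoothed-functional gradient estimate:
    grad f(x) ~ E[(eta / beta) * (f(x + beta*eta) - f(x))], eta ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(x, dtype=float)
    fx = f(x)
    for _ in range(n_samples):
        eta = rng.standard_normal(x.shape)
        g += eta * (f(x + beta * eta) - fx) / beta
    return g / n_samples

f = lambda x: float(np.sum(x ** 2))  # toy objective; true gradient is 2x
x = np.array([1.0, -2.0])
g = sf_gradient(f, x)                # should land close to [2, -4]
```

In the algorithms of the paper, such estimates drive projected gradient steps, with the averaging and parameter updates run on two different timescales.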

Relevance: 20.00%

Abstract:

Friction stir processing (FSP) is emerging as one of the most competent severe plastic deformation (SPD) methods for producing bulk ultra-fine-grained materials with improved properties. Optimizing the process parameters for a defect-free process is one of the challenging aspects of FSP on its way to commercial use. For a commercial aluminium alloy 2024-T3 plate of 6 mm thickness, a bottom-up approach has been attempted to optimize the major independent parameters of the process, such as plunge depth, tool rotation speed, and traverse speed. Tensile properties of the optimally friction stir processed sample were correlated with microstructural characterization performed using a Scanning Electron Microscope (SEM) and Electron Back-Scattered Diffraction (EBSD). Optimum parameters from the bottom-up approach led to a defect-free FSP with a maximum strength of 93% of the base material strength. Micro-tensile testing of samples taken from the center of the processed zone showed an increased strength of 1.3 times the base material. The measured maximum longitudinal residual stress on the processed surface was only 30 MPa, which was attributed to the solid-state nature of FSP. Microstructural observation reveals significant grain refinement with little variation in grain size across the thickness and a large amount of grain boundary precipitation compared to the base metal. The proposed experimental bottom-up approach can be applied as an effective method for optimizing parameters during FSP of aluminium alloys, which is otherwise difficult through analytical methods due to the complex interactions between work-piece, tool, and process parameters. Precipitation mechanisms during FSP were responsible for the fine-grained microstructure in the nugget zone that provided better mechanical properties than the base metal. (C) 2014 Elsevier Ltd. All rights reserved.

Relevance: 20.00%

Abstract:

Two Chrastil-type expressions have been developed to model the solubility of supercritical fluids/gases in liquids. The proposed three-parameter expressions correlate the solubility as a function of temperature, pressure, and density. The equation can also be used to check the self-consistency of experimental liquid-phase composition data for supercritical fluid-liquid equilibria. Fifty-three different binary systems (carbon dioxide + liquid) with around 2700 data points, encompassing a wide range of compounds such as esters, alcohols, carboxylic acids, and ionic liquids, were successfully modeled over a wide range of temperatures and pressures. Besides the test for self-consistency, based on data at one temperature, the model can be used to predict the solubility of supercritical fluids in liquids at different temperatures. (C) 2014 Elsevier B.V. All rights reserved.
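A Chrastil-type correlation of the form ln S = k·ln(ρ) + a/T + b is linear in its three parameters, so it can be fitted by ordinary least squares. The parameter values and data ranges below are synthetic, chosen only to show the fitting step, not taken from the paper:

```python
import numpy as np

# Chrastil-type correlation: ln S = k*ln(rho) + a/T + b (three parameters).
# Synthetic data are generated from known parameters, then recovered by a
# linear least-squares fit.
k_true, a_true, b_true = 1.8, -1500.0, 4.0
rng = np.random.default_rng(1)
rho = rng.uniform(200, 900, 40)   # density (kg/m^3), illustrative range
T = rng.uniform(308, 338, 40)     # temperature (K), illustrative range
lnS = k_true * np.log(rho) + a_true / T + b_true

A = np.column_stack([np.log(rho), 1.0 / T, np.ones_like(T)])
k_fit, a_fit, b_fit = np.linalg.lstsq(A, lnS, rcond=None)[0]
```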

Relevance: 20.00%

Abstract:

A finite difference method for a time-dependent singularly perturbed convection-diffusion-reaction problem involving two small parameters in one space dimension is considered. We use the classical implicit Euler method for time discretization and an upwind scheme on the Shishkin-Bakhvalov mesh for spatial discretization. The method is analysed for convergence and shown to be uniformly convergent with respect to both perturbation parameters. The use of the Shishkin-Bakhvalov mesh gives first-order convergence, unlike the Shishkin mesh, where convergence deteriorates due to the presence of a logarithmic factor. Numerical results are presented to validate the theoretical estimates obtained.
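The role of the transition point is easiest to see in the classical Shishkin construction, where the layer region gets half the intervals and the ln N in the transition point is the source of the logarithmic loss the abstract mentions (Bakhvalov-type grading avoids it). The single-layer mesh below is a generic sketch, not the paper's two-parameter Shishkin-Bakhvalov mesh:

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0, 1] for a boundary layer at
    x = 1: half the intervals resolve the layer region of width tau."""
    tau = min(0.5, sigma * eps * np.log(N))  # the log(N) factor lives here
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)
    return np.concatenate([coarse, fine[1:]])

x = shishkin_mesh(64, 1e-4)  # 65 monotone points, dense near x = 1
```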

Relevance: 20.00%

Abstract:

The effects of two major electrodeposition process conditions, electrolyte bath temperature and current density, on the microstructure and crystallographic texture of pure tin coatings on brass and, ultimately, on the extent of whisker formation have been examined. The grain size of the deposited coatings increased with increasing electrolyte bath temperature and current density, which significantly affected the dominant texture: (211) or (420) was the dominant texture at low current densities, whereas, depending on deposition temperature, (200) or (220) became dominant at high current densities. After deposition, the coatings were subjected to different environmental conditions, for example isothermal aging (room temperature, 50 °C, or 150 °C) for up to 90 days and thermal cycling between -25 °C and 85 °C for 100 cycles, and whisker growth was studied. The Sn coatings with low Miller index planes, for example (200) and (220), and at moderate aging temperature were more prone to whiskering than coatings with high Miller index planes, for example (420), and at high aging temperature. A processing route involving the optimum combination of current density and deposition temperature is proposed for suppressing whisker growth.

Relevance: 20.00%

Abstract:

When Markov chain Monte Carlo (MCMC) samplers are used in problems of system parameter identification, one faces computational difficulties in dealing with large amounts of measurement data and (or) low levels of measurement noise. Such exigencies are likely to occur in parameter identification for dynamical systems, where the amount of vibratory measurement data and the number of parameters to be identified can be large. In such cases, the posterior probability density function of the system parameters tends to have regions of narrow support, and a finite-length MCMC chain is unlikely to cover the pertinent regions. The present study proposes strategies based on modification of the measurement equations and subsequent corrections to alleviate this difficulty. This involves artificial enhancement of the measurement noise, assimilation of transformed packets of measurements, and a global iteration strategy to improve the choice of prior models. Illustrative examples cover laboratory studies on a time-variant dynamical system and a bending-torsion coupled, geometrically nonlinear building frame under earthquake support motions. (C) 2015 Elsevier Ltd. All rights reserved.
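The baseline sampler in such studies is a random-walk Metropolis-Hastings chain; on a toy one-parameter identification problem it looks as follows (the data, noise level, and step size are illustrative). With very low measurement noise the posterior support narrows sharply, which is the regime the proposed noise-enhancement strategy targets:

```python
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings sampler (minimal sketch)."""
    rng = rng or np.random.default_rng(2)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        chain.append(x)
    return np.array(chain)

# Toy identification: posterior of a parameter theta given noisy
# observations y_i ~ N(theta, sigma^2), flat prior.
y = np.array([2.1, 1.9, 2.05, 1.95])
sigma = 0.1
log_post = lambda th: -0.5 * np.sum((y - th) ** 2) / sigma ** 2
chain = metropolis(log_post, x0=0.0)
est = chain[1000:].mean()  # posterior mean estimate after burn-in
```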

Relevance: 20.00%

Abstract:

Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty resulting from the use of multiple GCMs. It is important to study these uncertainties and propagate them to the future for accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to the data and from the multiple GCM models using a Bayesian approach. The posterior distribution of the parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. The Markov Chain Monte Carlo (MCMC) method using the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs, along with four RCP scenarios, are considered for studying the effects of climate change and obtaining projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed for obtaining short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high compared to that of longer durations. Further, it is observed that the parameter uncertainty is large compared to the model uncertainty. (C) 2015 Elsevier Ltd. All rights reserved.
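Once distribution parameters are available, each parameter set maps to a return level; for a Gumbel fit that transform is closed-form. The sketch below uses a simple method-of-moments point fit on synthetic annual maxima rather than the paper's Bayesian MCMC, purely to show the return-level step:

```python
import numpy as np

def gumbel_return_level(mu, beta, T):
    """Return level for return period T under a Gumbel fit:
    z_T = mu - beta * ln(-ln(1 - 1/T))."""
    return mu - beta * np.log(-np.log(1.0 - 1.0 / T))

# Synthetic annual-maximum daily rainfall (mm), illustrative values only.
rng = np.random.default_rng(3)
annual_max = rng.gumbel(loc=80.0, scale=15.0, size=60)

# Method-of-moments Gumbel fit: mean = mu + 0.5772*beta, std = beta*pi/sqrt(6).
beta_hat = np.sqrt(6.0) * annual_max.std(ddof=1) / np.pi
mu_hat = annual_max.mean() - 0.5772 * beta_hat
z100 = gumbel_return_level(mu_hat, beta_hat, 100.0)  # 100-year return level
```

In a Bayesian treatment, applying this transform to every posterior sample of (mu, beta) yields a full distribution of the return level rather than a point value.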

Relevance: 20.00%

Abstract:

The inversion of canopy reflectance models is widely used for the retrieval of vegetation properties from remote sensing. This study evaluates the retrieval of the soybean biophysical variables leaf area index, leaf chlorophyll content, canopy chlorophyll content, and equivalent leaf water thickness from proximal reflectance data integrated into broadbands corresponding to the Moderate Resolution Imaging Spectroradiometer, Thematic Mapper, and Linear Imaging Self-Scanning sensors, through inversion of the canopy radiative transfer model PROSAIL. Three different inversion approaches, namely the look-up table, genetic algorithm, and artificial neural network, were used and their performances evaluated. Application of the genetic algorithm to crop parameter retrieval is a new attempt among the variety of optimization problems in remote sensing, and it has been successfully demonstrated in the present study. Its performance was as good as that of the look-up table approach, while the artificial neural network was a poor performer. The general order of estimation accuracy for the parameters, irrespective of inversion approach, was leaf area index > canopy chlorophyll content > leaf chlorophyll content > equivalent leaf water thickness. Performance of the inversion was comparable for broadband reflectances of all three sensors in the optical region, with insignificant differences in estimation accuracy among them.
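Look-up-table inversion reduces to a nearest-spectrum search over pre-simulated model runs. The forward model below is a made-up stand-in for PROSAIL (which takes many more inputs); only the search logic is the point:

```python
import numpy as np

def lut_invert(observed, lut_spectra, lut_params):
    """Return the parameter whose simulated spectrum has minimum RMSE
    against the observed reflectance."""
    rmse = np.sqrt(((lut_spectra - observed) ** 2).mean(axis=1))
    return lut_params[np.argmin(rmse)]

# Toy forward model standing in for PROSAIL: reflectance in 4 broadbands
# as a saturating function of leaf area index (LAI) alone.
def toy_forward(lai):
    return np.array([0.05, 0.08, 0.30, 0.25]) * (1 - np.exp(-0.5 * lai))

lai_grid = np.linspace(0.1, 8.0, 400)
lut = np.array([toy_forward(l) for l in lai_grid])
obs = toy_forward(3.0)                   # pretend observation
lai_hat = lut_invert(obs, lut, lai_grid)
```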

Relevance: 20.00%

Abstract:

We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial, as workload varies widely and the large number of system parameters rules out a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimum staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of a single-stage cost function while adhering to constraints on queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving the above problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process are themselves smooth (as a function of the continuous-valued parameter): a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For the sake of comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable to adaptive labor staffing in real-life service systems.
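The appeal of SPSA is that one gradient estimate for all coordinates costs just two function evaluations. A projected-descent step on a toy staffing cost is sketched below; the cost function, box constraints, and constants are invented for illustration, and the paper's Lagrangian machinery and smooth discrete projection are omitted:

```python
import numpy as np

def spsa_step(f, x, a=0.1, c=0.1, lo=0.0, hi=10.0, rng=None):
    """One SPSA iteration: two evaluations of f give a gradient estimate
    in every coordinate, followed by a projected descent step."""
    rng = rng or np.random.default_rng(4)
    delta = rng.choice([-1.0, 1.0], size=x.shape)       # Rademacher perturbation
    ghat = (f(x + c * delta) - f(x - c * delta)) / (2 * c) * (1.0 / delta)
    return np.clip(x - a * ghat, lo, hi)                # project onto the box

# Toy staffing cost: quadratic penalty around an "ideal" staffing vector.
ideal = np.array([4.0, 6.0, 2.0])
cost = lambda x: float(np.sum((x - ideal) ** 2))

x = np.full(3, 5.0)
rng = np.random.default_rng(4)
for _ in range(200):
    x = spsa_step(cost, x, rng=rng)      # iterates converge towards `ideal`
```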

Relevance: 20.00%

Abstract:

We revisit a problem studied by Padakandla and Sundaresan [SIAM J. Optim., August 2009] on the minimization of a separable convex function subject to linear ascending constraints. The problem arises as the core optimization in several resource allocation problems in wireless communication settings. It is also a special case of an optimization of a separable convex function over the bases of a specially structured polymatroid. We give an alternative proof of the correctness of the algorithm of Padakandla and Sundaresan. In the process we relax some of the restrictions they placed on the objective function.

Relevance: 20.00%

Abstract:

Time Projection Chamber (TPC) based X-ray polarimeters using Gas Electron Multipliers (GEMs) are currently being developed to make sensitive measurements of polarization in the 2-10 keV energy range. The emission direction of the photoelectron ejected via the photoelectric effect carries the information on the polarization of the incident X-ray photon. The performance of a gas-based polarimeter is affected by the operating drift parameters, such as gas pressure, drift field, and drift gap. We present simulation studies carried out to understand the effect of these operating parameters on the modulation factor of a TPC polarimeter. Garfield models are used to study photoelectron interaction in the gas and the drift of the electron cloud towards the GEM. Our study is aimed at achieving higher modulation factors by optimizing the drift parameters. The study has shown that Ne/DME (50/50) at lower pressure and drift field can lead to the desired performance of a TPC polarimeter.
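The figure of merit in such simulations, the modulation factor, can be computed from a histogram of reconstructed photoelectron emission angles. The synthetic angles below follow the polarized cross-section shape N(phi) proportional to 1 + mu*cos(2*phi) with an assumed mu of 0.4; none of this is the Garfield simulation itself:

```python
import numpy as np

def modulation_factor(phi, n_bins=36):
    """mu = (N_max - N_min) / (N_max + N_min) from a histogram of
    reconstructed photoelectron emission angles in [0, pi)."""
    counts, _ = np.histogram(phi, bins=n_bins, range=(0, np.pi))
    return (counts.max() - counts.min()) / (counts.max() + counts.min())

# Rejection-sample angles from N(phi) ~ 1 + mu0*cos(2*phi).
rng = np.random.default_rng(5)
mu0 = 0.4
phi = rng.uniform(0, np.pi, 200_000)
keep = rng.uniform(0, 1 + mu0, phi.size) < 1 + mu0 * np.cos(2 * phi)
mu_hat = modulation_factor(phi[keep])   # recovers roughly mu0
```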

Relevance: 20.00%

Abstract:

Fracture toughness measurements at the small scale have gained prominence over the years due to the continuing miniaturization of structural systems. Measurements carried out on bulk materials cannot be extrapolated to smaller length scales, either due to the complexity of the microstructure or due to size and geometric effects. Many new geometries have been proposed for fracture property measurements at small length scales, depending on the material behaviour and the type of device used in service. In situ testing provides the necessary environment to observe fracture at these length scales and determine the actual failure mechanism in these systems. In this paper, several improvements are incorporated into a previously proposed geometry of bending a doubly clamped beam for fracture toughness measurements. Both monotonic and cyclic loading conditions have been imposed on the beam to study R-curve and fatigue effects. In addition to the advantages that in situ SEM-based testing offers in such tests, FEM has been used as a simulation tool to replace cumbersome and expensive experiments in optimizing the geometry. A description of all the improvements made to this specific geometry of clamped beam bending to enable a variety of fracture property measurements is given in this paper.