922 results for Chance-constrained programming
Abstract:
Dendritic microenvironments defined by the dynamic internal cavities of a dendrimer were probed through the geometric isomerization of stilbene and azobenzene. A third-generation poly(alkyl aryl ether) dendrimer with a hydrophilic exterior and hydrophobic interior was used as a reaction cavity in aqueous medium. The dynamic inner cavity sizes were varied by changing the alkyl linkers that connect the branch junctures from ethyl to n-pentyl moieties (C(2)G(3)-C(5)G(3)). Dendrimers built with the n-pentyl linker afforded higher solubilities of stilbene and azobenzene. Direct irradiation of trans-stilbene showed that the C(5)G(3) and C(4)G(3) dendrimers afforded considerable phenanthrene formation in addition to cis-stilbene, whereas C(3)G(3) and C(2)G(3) gave only cis-stilbene. Electron-transfer-sensitized trans-cis isomerization, using cresyl violet perchlorate as the sensitizer, led to similar results. Thermal isomerization of cis-azobenzene to trans-azobenzene within the dendritic microenvironments revealed that the activation energy for cis-to-trans isomerization increases in the series C(5)G(3) < C(4)G(3) < C(3)G(3).
Abstract:
The throughput-optimal discrete-rate adaptation policy, when nodes are subject to constraints on the average power and bit error rate, is governed by a power control parameter, for which a closed-form characterization has remained an open problem. The parameter is essential in determining the rate adaptation thresholds and the transmit rate and power at any time, and ensuring adherence to the power constraint. We derive novel insightful bounds and approximations that characterize the power control parameter and the throughput in closed-form. The results are comprehensive as they apply to the general class of Nakagami-m (m >= 1) fading channels, which includes Rayleigh fading, uncoded and coded modulation, and single and multi-node systems with selection. The results are appealing as they are provably tight in the asymptotic large average power regime, and are designed and verified to be accurate even for smaller average powers.
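As a toy illustration of discrete-rate adaptation (not the paper's derivation), the sketch below picks the highest rate whose switching threshold the instantaneous channel power gain meets; the threshold values here are made up, whereas in the paper they follow from the power control parameter.

```python
def select_rate(gain, thresholds):
    # Discrete-rate adaptation sketch: transmit at the highest rate whose
    # switching threshold the instantaneous channel power gain meets.
    # Rate index 0 denotes no transmission.  Thresholds are hypothetical;
    # in the paper they are determined by the power control parameter.
    rate = 0
    for k, t in enumerate(thresholds, start=1):
        if gain >= t:
            rate = k
    return rate
```

For example, with thresholds [0.1, 1.0, 3.0], a gain of 0.5 selects rate 1 and a gain of 5.0 selects the top rate.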
Abstract:
The effect of incorporating a centrally positioned Ac(6)c-Xxx segment, where Xxx = (L)Val/(D)Val, into a host oligopeptide composed of L-amino acid residues has been investigated. Studies of four designed octapeptides, Boc-Leu-Phe-Val-Ac(6)c-Xxx-Leu-Phe-Val-OMe (Xxx = (D)Val 1, (L)Val 2) and Boc-Leu-Val-Val-Ac(6)c-Xxx-Leu-Val-Val-OMe (Xxx = (D)Val 3, (L)Val 4), are reported. Diagnostic nuclear Overhauser effects characteristic of hairpin conformations are observed for the Xxx = (D)Val peptides (1 and 3), while continuous helical conformations characterized by sequential N(i)H <-> N(i+1)H NOEs are favored for the Xxx = (L)Val peptides (2 and 4) in methanol solutions. Temperature coefficients of NH chemical shifts are in agreement with distinctly different conformational preferences upon changing the configuration of the residue at position 5. Crystal structures of peptides 2 and 4 (Xxx = (L)Val) establish helical conformations in the solid state, in agreement with the structures deduced from NMR data. The results support the design principle that centrally positioned type I beta-turns may be used to nucleate helices in short peptides, while type I' beta-turns can facilitate folding into beta-hairpins.
Abstract:
Background & objectives: There is a need for an affordable and reliable tool for hearing screening of neonates in resource-constrained, medically underserved areas of developing nations. This study evaluates a strategy of health-worker-based screening of neonates using a low-cost mechanical calibrated noisemaker, followed up with parental monitoring of age-appropriate auditory milestones, for detecting severe-profound hearing impairment in infants by 6 months of age. Methods: A trained health worker under the supervision of a qualified audiologist screened 425 neonates, of whom 20 had confirmed severe-profound hearing impairment. Mechanical calibrated noisemakers of 50, 60, 70 and 80 dB (A) were used to elicit behavioural responses. The parents of screened neonates were instructed to monitor the normal language and auditory milestones till 6 months of age. This strategy was validated against a reference standard consisting of a battery of tests, namely auditory brain stem response (ABR), otoacoustic emissions (OAE) and behavioural assessment at 2 years of age. Bayesian prevalence-weighted measures of screening were calculated. Results: Sensitivity and specificity were high, with the fewest false-positive referrals for the 70 and 80 dB (A) noisemakers. All the noisemakers had 100 per cent negative predictive value. The 70 and 80 dB (A) noisemakers had high positive likelihood ratios of 19 and 34, respectively. The pre- to post-test differences in positive probability were 43 and 58 for the 70 and 80 dB (A) noisemakers, respectively. Interpretation & conclusions: In a controlled setting, health workers with primary education can be trained to use a mechanical calibrated noisemaker made of locally available material to reliably screen for severe-profound hearing loss in neonates. The monitoring of auditory responses could be done by informed parents. Multi-centre field trials of this strategy need to be carried out to examine the feasibility of community health care workers using it in resource-constrained settings of developing nations to implement an effective national neonatal hearing screening programme.
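The prevalence-weighted screening measures reported above are standard functions of a 2x2 confusion table; a short sketch with made-up counts (not the study's data):

```python
def screening_measures(tp, fp, fn, tn):
    # Standard screening-test measures from a 2x2 confusion table:
    # true/false positives (tp, fp) and false/true negatives (fn, tn).
    sens = tp / (tp + fn)          # sensitivity
    spec = tn / (tn + fp)          # specificity
    return {
        'sensitivity': sens,
        'specificity': spec,
        'ppv': tp / (tp + fp),     # positive predictive value
        'npv': tn / (tn + fn),     # negative predictive value
        # positive likelihood ratio = sensitivity / (1 - specificity)
        'lr_pos': sens / (1 - spec) if spec < 1 else float('inf'),
    }
```

For a hypothetical table with 20 true positives, 20 false positives, no false negatives and 380 true negatives, sensitivity is 1.0, specificity 0.95 and the positive likelihood ratio 20.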
Abstract:
We develop an online actor-critic reinforcement learning algorithm with function approximation for a problem of control under inequality constraints. We consider the long-run average cost Markov decision process (MDP) framework in which both the objective and the constraint functions are suitable policy-dependent long-run averages of certain sample path functions. The Lagrange multiplier method is used to handle the inequality constraints. We prove the asymptotic almost sure convergence of our algorithm to a locally optimal solution. We also provide the results of numerical experiments on a problem of routing in a multi-stage queueing network with constraints on long-run average queue lengths. We observe that our algorithm exhibits good performance on this setting and converges to a feasible point.
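The Lagrange-multiplier treatment of inequality constraints can be illustrated on a toy continuous problem; the sketch below mimics the two-timescale idea (fast primal updates, slow multiplier ascent) but is not the paper's actor-critic algorithm.

```python
def primal_dual(f_grad, g, g_grad, x0, steps=2000, ax=0.05, al=0.005):
    # Toy two-timescale Lagrangian iteration for min f(x) s.t. g(x) <= 0.
    # The primal variable (the analogue of the actor's policy parameter)
    # moves on a fast timescale (ax), the Lagrange multiplier on a slow
    # one (al).  Step sizes and iteration count are illustrative.
    x, lam = x0, 0.0
    for _ in range(steps):
        x = x - ax * (f_grad(x) + lam * g_grad(x))  # fast primal descent
        lam = max(0.0, lam + al * g(x))             # slow multiplier ascent
    return x, lam
```

For example, minimizing x^2 subject to x >= 1 (i.e. g(x) = 1 - x <= 0) has optimum x = 1 with multiplier lam = 2, and the iteration converges close to that point.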
Abstract:
A novel approach that can more effectively use the structural information provided by traditional imaging modalities in multimodal diffuse optical tomographic imaging is introduced. This approach is based on a prior-image-constrained l(1)-minimization scheme and has been motivated by recent progress in sparse image reconstruction techniques. It is shown that the proposed framework is more effective in terms of localizing the tumor region and recovering the optical property values, in both numerical and gelatin phantom cases, compared to traditional methods that use structural information. (C) 2012 Optical Society of America
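The workhorse of l(1)-minimization schemes is the soft-thresholding (shrinkage) operator; a minimal sketch, with the prior-image weighting only hinted at in the comment:

```python
import math

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry of v toward
    # zero by t, zeroing entries smaller than t in magnitude.  In a
    # prior-image-constrained scheme one would typically use a smaller
    # threshold (or weight) where the structural prior indicates tissue;
    # this uniform version is an illustrative sketch only.
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]
```

For example, thresholding [3.0, -2.0, 0.5] at t = 1.0 gives [2.0, -1.0, 0.0].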
Abstract:
Critical applications like cyclone tracking and earthquake modeling require simultaneous high-performance simulations and online visualization for timely analysis. Faster simulations and simultaneous visualization enable scientists to provide real-time guidance to decision makers. In this work, we have developed an integrated user-driven and automated steering framework that simultaneously performs numerical simulations and efficient online remote visualization of critical weather applications in resource-constrained environments. It considers application dynamics, like the criticality of the application, and resource dynamics, like storage space, network bandwidth and the available number of processors, to adapt various application and resource parameters such as simulation resolution, simulation rate and the frequency of visualization. We formulate the problem of finding an optimal set of simulation parameters as a linear programming problem. This leads to a 30% higher simulation rate and 25-50% lower storage consumption than a naive greedy approach. The framework also gives the user control over various application parameters, like the region of interest and simulation resolution. We have also devised an adaptive algorithm to reduce the lag between the simulation and visualization times. Using experiments with different network bandwidths, we find that our adaptive algorithm is able to reduce lag as well as visualize the most representative frames.
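The parameter-selection trade-off can be conveyed with a toy feasibility-and-objective check; the paper solves a linear program, but for a handful of hypothetical discrete candidates a direct scan gives the same answer:

```python
def best_setting(settings, storage_cap, bandwidth_cap):
    # Among candidate parameter sets (all numbers below are made up),
    # keep those that fit the storage and bandwidth budgets and return
    # the one with the highest simulation rate; None if none fit.
    feasible = [s for s in settings
                if s['storage'] <= storage_cap and s['bandwidth'] <= bandwidth_cap]
    return max(feasible, key=lambda s: s['sim_rate']) if feasible else None
```

With hypothetical candidates, a tight budget forces a lower-resolution setting, while a generous budget admits the high-resolution one.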
Abstract:
Backbone alkylation has been shown to result in a dramatic reduction in the conformational space that is sterically accessible to alpha-amino acid residues in peptides. By extension, the presence of geminal dialkyl substituents at backbone atoms also restricts the available conformational space for beta and gamma residues. Five peptides containing the achiral beta(2,2)-disubstituted beta-amino acid residue 1-(aminomethyl)cyclohexanecarboxylic acid (beta(2,2)Ac6c) have been structurally characterized in crystals by X-ray diffraction. The tripeptide Boc-Aib-beta(2,2)Ac6c-Aib-OMe (1) adopts a novel fold stabilized by two intramolecular H-bonds (C11 and C9) of opposite directionality. The tetrapeptide Boc-[Aib-beta(2,2)Ac6c]2-OMe (2) and pentapeptide Boc-[Aib-beta(2,2)Ac6c]2-Aib-OMe (3) form short stretches of a hybrid alpha-beta C11 helix stabilized by two and three intramolecular H-bonds, respectively. The structure of the dipeptide Boc-Aib-beta(2,2)Ac6c-OMe (5) does not reveal any intramolecular H-bond. The aggregation pattern in the crystal provides an example of an extended conformation of the beta(2,2)Ac6c residue, forming a polar sheet-like H-bonded arrangement. The protected derivative Ac-beta(2,2)Ac6c-NHMe (4) adopts a locally folded gauche conformation about the C(beta)-C(alpha) bonds (theta = -55.7 degrees). Of the seven examples of beta(2,2)Ac6c residues reported here, six adopt gauche conformations, a feature which promotes local folding when incorporated into peptides. A comparison of the conformational properties of beta(2,2)Ac6c and beta(3,3)Ac6c residues in peptides is presented. Backbone torsional parameters of H-bonded alpha-beta/beta-alpha turns are derived from the structures presented in this study and earlier reports.
Abstract:
This paper presents methodologies for incorporating phasor measurements into a conventional state estimator. The angle measurements obtained from Phasor Measurement Units are handled as angle-difference measurements rather than being incorporated directly. Handling them in this manner overcomes the problems arising from the choice of reference bus. Current measurements obtained from Phasor Measurement Units are treated as equivalent pseudo-voltage measurements at the neighboring buses. Two solution approaches, namely the normal-equations approach and the linear programming approach, are presented to show how the Phasor Measurement Unit measurements can be handled. A comparative evaluation of both approaches is also presented. Test results on the IEEE 14-bus system are presented to validate both approaches.
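Re-expressing absolute PMU angles as angle differences, so that the estimator no longer depends on the choice of reference bus, can be sketched as follows (bus names and angle values are hypothetical):

```python
def angle_differences(pmu_angles, ref_bus):
    # Convert absolute PMU phase angles (degrees) into angle-difference
    # measurements against one chosen PMU bus.  Any common offset from
    # the reference-bus choice cancels in the differences.
    ref = pmu_angles[ref_bus]
    return {bus: a - ref for bus, a in pmu_angles.items() if bus != ref_bus}
```

Note that shifting every input angle by the same constant leaves the output unchanged, which is exactly the reference-bus independence the abstract describes.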
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge-splitting strategy. Thus our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
Abstract:
In the underlay mode of cognitive radio, secondary users are allowed to transmit when the primary is transmitting, but under tight interference constraints that protect the primary. However, these constraints limit the secondary system performance. Antenna selection (AS)-based multiple antenna techniques, which exploit spatial diversity with less hardware, help improve secondary system performance. We develop a novel and optimal transmit AS rule that minimizes the symbol error probability (SEP) of an average interference-constrained multiple-input-single-output secondary system that operates in the underlay mode. We show that the optimal rule is a non-linear function of the power gain of the channel from the secondary transmit antenna to the primary receiver and from the secondary transmit antenna to the secondary receive antenna. We also propose a simpler, tractable variant of the optimal rule that performs as well as the optimal rule. We then analyze its SEP with L transmit antennas, and extensively benchmark it with several heuristic selection rules proposed in the literature. We also enhance these rules in order to provide a fair comparison, and derive new expressions for their SEPs. The results bring out new inter-relationships between the various rules, and show that the optimal rule can significantly reduce the SEP.
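One of the heuristic selection rules commonly benchmarked in this setting is a ratio rule: pick the transmit antenna maximizing the ratio of the secondary-link gain to the interference-link gain. This is a simple baseline of the kind the paper compares against, not its optimal non-linear rule:

```python
def ratio_rule_select(g_sec, g_int):
    # Heuristic "ratio rule" for transmit antenna selection in underlay
    # cognitive radio: choose the antenna index with the largest ratio
    # of the power gain to the secondary receiver (g_sec) over the power
    # gain to the primary receiver (g_int).  Gains are hypothetical.
    return max(range(len(g_sec)), key=lambda i: g_sec[i] / g_int[i])
```

Intuitively, this favors antennas that deliver strong signal to the secondary receiver while leaking little interference to the primary.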
Abstract:
We present a novel multi-timescale Q-learning algorithm for average cost control in a Markov decision process subject to multiple inequality constraints. We formulate a relaxed version of this problem through the Lagrange multiplier method. Our algorithm is different from Q-learning in that it updates two parameters - a Q-value parameter and a policy parameter. The Q-value parameter is updated on a slower time scale as compared to the policy parameter. Whereas Q-learning with function approximation can diverge in some cases, our algorithm is seen to be convergent as a result of the aforementioned timescale separation. We show the results of experiments on a problem of constrained routing in a multistage queueing network. Our algorithm is seen to exhibit good performance and the various inequality constraints are seen to be satisfied upon convergence of the algorithm.
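The flavor of a Q-value update with a Lagrangian-penalized cost can be sketched in tabular form; note this is a discounted-cost stand-in for the paper's average-cost setting, and the slow-timescale updates of the policy parameter and multiplier are omitted:

```python
def q_update(Q, s, a, cost, d_cost, lam, s_next, actions,
             alpha=0.5, gamma=0.9):
    # One tabular Q update where the one-step cost is penalized by the
    # Lagrange multiplier: c + lam * d, with d the constraint cost.
    # Hypothetical discounted-cost variant for illustration only.
    target = cost + lam * d_cost + gamma * min(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```

Raising lam makes constraint-violating transitions look more expensive, steering the learned policy toward feasibility.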
Abstract:
Crop type classification using remote sensing data plays a vital role in planning cultivation activities and in the optimal use of available fertile land. A reliable and precise classification of agricultural crops can thus help improve agricultural productivity. In this paper, a gene expression programming (GEP) based fuzzy logic approach for multiclass crop classification using multispectral satellite images is proposed. The purpose of this work is to utilize the optimization capabilities of GEP for tuning the fuzzy membership functions. The capabilities of GEP as a classifier are also studied. The proposed method is compared to Bayesian and maximum likelihood classifiers in terms of performance evaluation. From the results we conclude that the proposed method is effective for classification.
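Tuning fuzzy membership functions amounts to adjusting a few shape parameters; a common choice is the triangular membership function, shown below with hand-picked (not GEP-tuned) parameters:

```python
def tri_mf(x, a, b, c):
    # Triangular fuzzy membership with feet a and c and peak b.
    # In the paper these shape parameters are tuned by gene expression
    # programming; here they would be fixed by hand for illustration.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

A pixel's spectral value is mapped to a membership degree in [0, 1] for each crop class, and the class with the highest degree wins.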
Abstract:
We propose an eigenvalue-based technique to solve the Homogeneous Quadratically Constrained Quadratic Programming (HQCQP) problem with at most three constraints, which arises in many signal processing problems. Semi-Definite Relaxation (SDR) is the only known approach and is computationally intensive. We study the performance of the proposed fast eigen approach through simulations in the context of MIMO relays and show that the solution converges to the one obtained using the SDR approach, with a significant reduction in complexity.
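In the simplest instance, a homogeneous QCQP with a single unit-norm constraint, min x^T A x subject to ||x||^2 = 1, the optimal value is the smallest eigenvalue of A. A closed-form 2x2 sketch of that base case (the paper's method for up to three constraints is more involved):

```python
import math

def min_eig_2x2(a, b, c):
    # Smallest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]],
    # via the characteristic polynomial: lam = (tr - sqrt(tr^2 - 4 det))/2.
    # For min x^T A x s.t. ||x||^2 = 1 this eigenvalue is the optimal
    # objective value -- the simplest eigenvalue view of a homogeneous QCQP.
    tr, det = a + c, a * c - b * b
    return (tr - math.sqrt(tr * tr - 4 * det)) / 2
```

For diag(2, 5) the minimum is 2, attained at the corresponding unit eigenvector.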
Abstract:
In this brief, variable structure systems theory based guidance laws, to intercept maneuvering targets at a desired impact angle, are presented. Choosing the missile's lateral acceleration (latax) to enforce sliding mode, which is the principal operating mode of variable structure systems, on a switching surface defined by the line-of-sight angle leads to a guidance law that allows the achievement of the desired terminal impact angle. As will be shown, this law does not ensure interception for all states of the missile and the target during the engagement. Hence, additional switching surfaces are designed and a switching logic is developed that allows the latax to switch between enforcing sliding mode on one of these surfaces so that the target can be intercepted at the desired impact angle. The guidance laws are designed using nonlinear engagement dynamics for the general case of a maneuvering target.
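The core of a sliding-mode guidance law is a switching (signum-like) command that drives the switching-surface variable s to zero; a minimal sketch with a boundary layer to soften chattering (gain and layer width are illustrative, not from the paper):

```python
def smc_latax(s, K, phi=0.01):
    # Sliding-mode lateral-acceleration command: push the switching
    # surface value s toward zero with a saturated signum law.  Inside
    # the boundary layer |s| < phi the command varies linearly, which
    # tames the chattering of a pure sign(s) switch.  Hypothetical gains.
    sat = max(-1.0, min(1.0, s / phi))
    return -K * sat
```

Outside the boundary layer the command saturates at +/-K, reproducing the ideal switching behavior that enforces sliding mode on the surface.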