166 results for adaptive variability
Abstract:
In China, the recent outbreak of the novel influenza A/H7N9 virus has been considered severe, and it may become even more so in the near future. To develop highly protective vaccines and drugs against the A/H7N9 virus, it is critical to determine the selection pressure acting on each amino acid site. In the present study, six different statistical methods, consisting of four independent codon-based maximum likelihood (CML) methods, one hierarchical Bayesian (HB) method and one branch-site (BS) method, were employed to determine whether each amino acid site of the A/H7N9 virus is under natural selection pressure. Functions of both positively and negatively selected sites were inferred by annotating these sites with experimentally verified amino acid sites. Overall, the single amino acid site 627 of the PB2 protein was inferred to be positively selected, and its function was identified as a T-cell epitope (TCE). Among the 26 negatively selected amino acid sites of the PB2, PB1, PA, HA, NP, NA, M1 and NS2 proteins, only 16 were identified as being involved in TCEs. In addition, 7 amino acid sites, including 608 and 609 of PA, 480 of NP, and 24, 25, 109 and 205 of M1, were identified as being involved in both B-cell epitopes (BCEs) and TCEs. Conversely, the function of positions 62 of PA and 43 and 113 of HA remained unknown. In conclusion, the seven amino acid sites engaged in both BCEs and TCEs were identified as highly suitable targets, as these sites are predicted to play a principal role in inducing strong humoral and cellular immune responses against the A/H7N9 virus. (C) 2014 Elsevier Inc. All rights reserved.
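Codon-based selection analyses of the kind listed above rest on the standard nonsynonymous-to-synonymous rate ratio; a minimal statement of that criterion (the usual formulation, not anything specific to this study) is:

\[
\omega = \frac{d_N}{d_S}, \qquad
\begin{cases}
\omega > 1 & \text{positive (diversifying) selection,} \\
\omega = 1 & \text{neutral evolution,} \\
\omega < 1 & \text{negative (purifying) selection.}
\end{cases}
\]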
Abstract:
In this paper, we consider a singularly perturbed boundary-value problem for a fourth-order ordinary differential equation (ODE) whose highest-order derivative is multiplied by a small perturbation parameter. To solve this ODE, we transform the differential equation into a coupled system of two singularly perturbed ODEs. The classical central difference scheme is used to discretize the system of ODEs on a nonuniform mesh generated by equidistribution of a positive monitor function. We show that the proposed technique provides first-order accuracy independent of the perturbation parameter. Numerical experiments are provided to validate the theoretical results.
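One standard model problem of this type, and the reduction to a coupled second-order system that such methods typically use (shown here only as an illustration; the abstract does not state the exact problem), is:

\[
\varepsilon\, y^{(4)}(x) - b(x)\, y''(x) + c(x)\, y(x) = f(x), \quad x \in (0,1),
\qquad y(0) = y(1) = y''(0) = y''(1) = 0,
\]
with \(0 < \varepsilon \ll 1\). Setting \(y_1 = y\) and \(y_2 = -y''\) gives the coupled system
\[
-y_1''(x) - y_2(x) = 0, \qquad
-\varepsilon\, y_2''(x) + b(x)\, y_2(x) + c(x)\, y_1(x) = f(x),
\]
which the central difference scheme then discretizes on the equidistributed mesh.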
Abstract:
A neural-network-aided, nonlinear dynamic inversion-based hybrid technique for model reference adaptive control flight-control system design is presented in this paper. Here, the gains of the nonlinear dynamic inversion-based flight-control system are dynamically selected so that the resulting controller mimics a single network adaptive control optimal nonlinear controller for state regulation. Traditional model reference adaptive control methods use a linearized reference model, whereas the presented control design method employs a nonlinear reference model to compute the nonlinear dynamic inversion gains. This innovation of designing the gain elements after synthesizing the single network adaptive controller preserves the advantages that an optimal controller offers, yet it retains a simple closed-form control expression in state feedback form, which can easily be modified for tracking problems without demanding any a priori knowledge of the reference signals. The strength of the technique is demonstrated by considering the longitudinal motion of a nonlinear aircraft system. An extended single network adaptive control/nonlinear dynamic inversion adaptive control design architecture is also presented, which adapts online to three failure conditions, namely, a thrust failure, an elevator failure, and an inaccuracy in the estimation of \(C_{M_\alpha}\). Simulation results demonstrate that the presented adaptive flight controller generates a near-optimal response when compared to a traditional nonlinear dynamic inversion controller.
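In generic form (an illustration of dynamic inversion itself, not the specific control law of the paper), for control-affine dynamics the inversion-based law is

\[
\dot{x} = f(x) + g(x)\,u, \qquad
u = g(x)^{-1}\bigl[\dot{x}_d + K\,(x_d - x) - f(x)\bigr],
\]
and the gain matrix \(K\) is what the hybrid scheme selects dynamically so that the closed loop mimics the single network adaptive controller.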
Abstract:
Structures of crystals of Mycobacterium tuberculosis RecA, grown and analysed under different conditions, provide insights into hitherto underappreciated details of molecular structure and plasticity. In particular, they yield information on the invariant and variable features of the geometry of the P-loop, whose binding to ATP is central to all the biochemical activities of RecA. The strengths of interaction of the ligands with the P-loop reveal significant differences. This in turn affects the magnitude of the motion of the 'switch' residue, Gln195 in M. tuberculosis RecA, which triggers the transmission of ATP-mediated allosteric information to the DNA binding region. M. tuberculosis RecA is substantially rigid compared with its counterparts from M. smegmatis and E. coli, which exhibit concerted internal molecular mobility. The interspecies variability in the plasticity of the two mycobacterial proteins is particularly surprising as they have similar sequences and 3D structures. Details of the interactions of ligands with the protein, characterized in the structures reported here, could be useful for the design of inhibitors against M. tuberculosis RecA.
Abstract:
The aim in this paper is to allocate the 'sleep time' of the individual sensors in an intrusion detection application so that the energy consumption of the sensors is reduced while keeping the tracking error to a minimum. We propose two novel reinforcement learning (RL) based algorithms that attempt to minimize a certain long-run average cost objective. Both our algorithms incorporate feature-based representations to handle the curse of dimensionality associated with the underlying partially observable Markov decision process (POMDP). Further, the feature selection scheme used in our algorithms intelligently manages the energy cost and tracking cost factors, which in turn assists the search for the optimal sleeping policy. We also extend these algorithms to a setting where the intruder's mobility model is not known, by incorporating a stochastic iterative scheme for estimating the mobility model. The simulation results on a synthetic 2-D network setting are encouraging.
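The core computational piece such algorithms share is a feature-based, long-run average-cost value update; a minimal sketch of that update, with the feature map, cost structure, and transition model left as placeholders (they are not the paper's), is:

```python
import numpy as np

def average_cost_td(features, step, n_actions=2, num_steps=10_000,
                    alpha=0.01, beta=0.001, seed=0):
    """Linear average-cost TD(0) with feature-based approximation: learns
    weights w of a differential value function and an estimate rho of the
    long-run average cost, while following a (here, uniformly random) policy.

    features(state) -> 1-D feature vector for the observation/belief
    step(state, action) -> (next_state, cost), e.g. energy + tracking cost
    """
    rng = np.random.default_rng(seed)
    state = 0
    w = np.zeros_like(features(state), dtype=float)  # value-function weights
    rho = 0.0                                         # average-cost estimate
    for _ in range(num_steps):
        action = rng.integers(n_actions)              # e.g. 0 = sleep, 1 = sense
        next_state, cost = step(state, action)
        phi, phi_next = features(state), features(next_state)
        td_error = cost - rho + w @ phi_next - w @ phi
        w = w + alpha * td_error * phi                # critic update
        rho = rho + beta * td_error                   # slower average-cost update
        state = next_state
    return w, rho
```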
Abstract:
3-D full-wave method of moments (MoM) based electromagnetic analysis is a popular means of obtaining accurate solutions of Maxwell's equations. The time and memory bottlenecks associated with such a solution have been addressed over the last two decades by linear-complexity fast solver algorithms. However, solving the 3-D full-wave MoM system on an arbitrary mesh of a package-board structure does not guarantee an accurate result, since the discretization may not be fine enough to capture spatial changes in the solution variable. At the same time, uniform over-meshing of the entire structure generates a large number of solution variables and therefore requires an unnecessarily large matrix solution. In this paper, different refinement criteria are studied in an adaptive mesh refinement platform. Consequently, the most suitable conductor mesh refinement criterion for MoM-based electromagnetic package-board extraction is identified, and the advantages of this adaptive strategy are demonstrated from both accuracy and speed perspectives. The results are also compared with those of the recently reported integral equation-based h-refinement strategy. Finally, a new methodology to expedite each adaptive refinement pass is proposed.
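Whatever the specific criterion, the adaptive platform described follows the usual solve-estimate-refine cycle; a generic sketch of that loop (the paper's actual criteria and mesh operations are not reproduced here) is:

```python
def adaptive_refine(mesh, solve, error_indicator, refine, tol=1e-3, max_passes=10):
    """Generic adaptive refinement loop (illustrative only).

    solve(mesh)                     -> solution on the current mesh
    error_indicator(mesh, solution) -> per-element indicator values
    refine(mesh, elements)          -> new mesh with the flagged elements subdivided
    """
    for _ in range(max_passes):
        solution = solve(mesh)
        eta = error_indicator(mesh, solution)
        flagged = [e for e, err in enumerate(eta) if err > tol]
        if not flagged:               # every element meets the tolerance
            break
        mesh = refine(mesh, flagged)  # refine only where the indicator is large
    return mesh, solution
```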
Abstract:
A block-structured adaptive mesh refinement (AMR) technique has been used to obtain numerical solutions for many scientific applications. Some block-structured AMR approaches have focused on forming patches of non-uniform sizes, where the size of a patch can be tuned to the geometry of a region of interest. In this paper, we develop strategies for adaptive execution of block-structured AMR applications on GPUs, for hyperbolic directionally split solvers. While effective hybrid execution strategies exist for applications with uniform patches, our work considers efficient execution of non-uniform patches with different workloads. Our techniques include bin-packing work units to load balance GPU computations, adaptive asynchronism between CPU and GPU executions using a knapsack formulation, and scheduling communications for multi-GPU executions. Our experiments with synthetic and real data, for single-GPU and multi-GPU executions, on Tesla S1070 and Fermi C2070 clusters, show that our strategies result in a speedup of up to 3.23 over existing strategies.
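Of the three techniques, the bin-packing step is the most self-contained; a first-fit-decreasing sketch of how non-uniform patch workloads might be packed into GPU work units (an illustration under assumed cost estimates, not the paper's implementation) is:

```python
def pack_patches(workloads, capacity):
    """First-fit-decreasing bin packing of AMR patches into GPU work units.

    workloads: estimated cost of each patch
    capacity:  target work per GPU work unit / kernel launch
    Returns a list of bins, each a list of patch indices.
    """
    order = sorted(range(len(workloads)), key=lambda i: workloads[i], reverse=True)
    bins, loads = [], []
    for i in order:
        for b, load in enumerate(loads):
            if load + workloads[i] <= capacity:   # first bin with room
                bins[b].append(i)
                loads[b] += workloads[i]
                break
        else:                                     # no bin fits: open a new one
            bins.append([i])
            loads.append(workloads[i])
    return bins

# e.g. pack_patches([7, 3, 5, 2, 6], capacity=10) -> [[0, 1], [4, 3], [2]]
```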
Abstract:
The mode I fracture toughness, K_Ic, of ductile bulk metallic glasses (BMGs) exhibits a high degree of specimen-to-specimen variability. By conducting fracture experiments in modes I and II, we demonstrate that the observed high variability in mode I, vis-a-vis mode II, is a result of the highly variable propensity for the conversion of shear bands into cracks in mode I, whereas in mode II the crack growth direction is fixed. Thus, the measured variability in K_Ic is intrinsic to the nature of BMGs. (C) 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Abstract:
Remote sensing of physiological parameters could be a cost-effective approach to improving health care, and low-power sensors are essential for remote sensing because these sensors are often energy constrained. This paper presents a power-optimized photoplethysmographic sensor interface to sense arterial oxygen saturation, a technique to dynamically trade off SNR for power during sensor operation, and a simple algorithm to choose when to acquire samples in photoplethysmography. A prototype of the proposed pulse oximeter, built using commercial off-the-shelf (COTS) components, is tested on 10 adults. The dynamic adaptation techniques described reduce power consumption considerably compared to our reference implementation, and our approach is competitive with state-of-the-art implementations. The techniques presented in this paper may be applied to low-power sensor interface designs where acquiring samples is expensive in terms of power, as epitomized by pulse oximetry.
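The sample-acquisition algorithm itself is not spelled out in the abstract; one hypothetical duty-cycling rule in the same spirit (purely an assumed illustration, not the paper's algorithm) might look like:

```python
def next_sample_interval(recent_spo2, current_interval, base=1.0, cap=8.0, tol=1.0):
    """Hypothetical rule: lengthen the sampling interval while recent SpO2
    readings stay within `tol` percent of each other, and drop back to the
    base rate as soon as they drift. All thresholds here are assumptions.

    recent_spo2: last few SpO2 readings (percent)
    Returns the time in seconds to wait before the next acquisition.
    """
    stable = len(recent_spo2) >= 2 and max(recent_spo2) - min(recent_spo2) < tol
    return min(cap, current_interval * 2) if stable else base
```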
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial due to large variations in workload, and the large number of system parameters does not allow for a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimum staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of the single-stage cost function, while adhering to the constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving the above problem. The algorithms include both first-order and second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process is itself smooth (as a function of the continuous-valued parameter): a critical requirement for proving the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For the sake of comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them well suited for adaptive labor staffing in real-life service systems.
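The SPSA estimator at the heart of such algorithms needs only two cost evaluations per iteration, regardless of the parameter dimension; a bare-bones sketch (core estimator and plain first-order descent only, with rounding as a crude stand-in for the paper's generalized smooth projection and without the Lagrangian machinery) is:

```python
import numpy as np

def spsa_gradient(cost, theta, delta=0.1, rng=None):
    """One SPSA gradient estimate: perturb all coordinates simultaneously with
    a random +/-1 (Rademacher) vector and use two cost evaluations."""
    rng = rng or np.random.default_rng()
    perturb = rng.choice([-1.0, 1.0], size=theta.shape)
    c_plus = cost(theta + delta * perturb)
    c_minus = cost(theta - delta * perturb)
    return (c_plus - c_minus) / (2.0 * delta * perturb)

def spsa_descent(cost, theta0, steps=200, lr=0.05, delta=0.1, seed=0):
    """Plain SPSA descent on a continuous relaxation of the staffing levels;
    the final rounding is only a crude placeholder for a smooth projection."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta -= lr * spsa_gradient(cost, theta, delta, rng)
    return np.round(theta)
```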
Abstract:
Understanding the changing nature of the intraseasonal oscillatory (ISO) modes of the Indian summer monsoon, manifested by active and break phases, and their association with extreme rainfall events is necessary for probabilistic estimation of flood-related risks in a warming climate. Here, using ground-based observed rainfall, we define an index to measure the strength of monsoon ISOs and show that the relative strength of the northward-propagating low-frequency ISO (20-60 days) modes has had a significant decreasing trend during the past six decades, possibly attributable to the weakening of large-scale circulation in the region during the monsoon season. This reduction is compensated by a gain in synoptic-scale (3-9 days) variability. The decrease in low-frequency ISO variability is associated with a significant decreasing trend in the percentage of extreme events during the active phase of the monsoon. However, this decrease is balanced by significant increasing trends in the percentage of extreme events in the break and transition phases. We also find a significant rise in the occurrence of extremes during early and late monsoon months, mainly over eastern coastal regions. Our study highlights the redistribution of rainfall intensity among periodic (low-frequency) and non-periodic (extreme) modes in a changing climate scenario.
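The exact index definition is not given in the abstract; a plausible building block for it, separating the two bands mentioned (20-60 day and 3-9 day) by Butterworth bandpass filtering of daily rainfall anomalies, might look like this (the band edges are the abstract's, everything else is an assumption):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_variance(rain, low_days, high_days, order=3):
    """Variance of daily rainfall anomalies within a given period band.

    rain: 1-D daily rainfall anomaly series
    low_days, high_days: period band in days, e.g. (20, 60) or (3, 9)
    """
    nyquist = 0.5                                     # cycles/day for daily data
    band = [1.0 / high_days / nyquist, 1.0 / low_days / nyquist]
    b, a = butter(order, band, btype="band")
    filtered = filtfilt(b, a, rain)
    return np.var(filtered)

def iso_strength_index(rain):
    """Ratio of low-frequency (20-60 day) to synoptic (3-9 day) variance."""
    return bandpass_variance(rain, 20, 60) / bandpass_variance(rain, 3, 9)
```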
Abstract:
Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. At such low MTBFs, employing periodic checkpointing alone will result in low efficiency because of the high number of application failures, resulting in a large amount of lost work due to rollbacks. In such scenarios, it is highly necessary to have proactive fault tolerance mechanisms that can help avoid a significant number of failures. In this work, we have developed a mechanism for proactive fault tolerance using partial replication of a set of application processes. Our fault tolerance framework adaptively changes the set of replicated processes periodically, based on failure predictions, to avoid failures. We have developed an MPI prototype implementation, PAREP-MPI, that allows changing the replica set. We have shown that our strategy involving adaptive process replication significantly outperforms existing mechanisms, providing up to 20 percent improvement in application efficiency even for exascale systems.
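One simple way to realize the "change the replica set from failure predictions" step (an assumed illustration; PAREP-MPI's actual selection and replica-migration logic is not described in the abstract) is to replicate the ranks predicted to be most failure-prone within a fixed replication budget:

```python
def choose_replica_set(failure_prob, replica_budget):
    """Pick which application processes to replicate in the next interval.

    failure_prob: {rank: predicted probability of failing before the next
                   adaptation point}
    replica_budget: number of processes that can be shadowed by replicas
    """
    ranked = sorted(failure_prob, key=failure_prob.get, reverse=True)
    return set(ranked[:replica_budget])   # replicate the most failure-prone ranks

# e.g. choose_replica_set({0: 0.02, 1: 0.4, 2: 0.15, 3: 0.31}, 2) -> {1, 3}
```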
Abstract:
Scalable stream processing and continuous dataflow systems are gaining traction with the rise of big data due to the need for processing high-velocity data in near real time. Unlike in batch processing systems such as MapReduce and workflows, static scheduling strategies fall short for continuous dataflows due to variations in the input data rates and the need for sustained throughput. The elastic resource provisioning of cloud infrastructure is valuable for meeting the changing resource needs of such continuous applications. However, multi-tenant cloud resources introduce yet another dimension of performance variability that impacts the application's throughput. In this paper, we propose PLAStiCC, an adaptive scheduling algorithm that balances resource cost and application throughput using a prediction-based lookahead approach. It addresses not only variations in the input data rates but also those in the underlying cloud infrastructure. In addition, we propose several simpler static scheduling heuristics that operate in the absence of an accurate performance prediction model. These static and adaptive heuristics are evaluated through extensive simulations using performance traces obtained from the Amazon AWS IaaS public cloud. Our results show an improvement of up to 20% in the overall profit as compared to the reactive adaptation algorithm.
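A prediction-based lookahead decision of this kind reduces, at its simplest, to scoring candidate resource allocations against the predicted data rates and VM performance over a horizon; a greedy sketch (an assumed illustration, not PLAStiCC's actual algorithm) is:

```python
def lookahead_rebalance(predict_rate, predict_perf, actions, horizon, profit):
    """Pick the candidate allocation with the best predicted profit.

    predict_rate(t) -> predicted input data rate at future step t
    predict_perf(t) -> predicted multi-tenant VM performance at step t
    actions         -> candidate allocations (e.g. number/size of VMs)
    profit(action, rate, perf) -> profit for one time step
    """
    def score(action):
        return sum(profit(action, predict_rate(t), predict_perf(t))
                   for t in range(horizon))
    return max(actions, key=score)
```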
Abstract:
Complex-systems-inspired analysis suggests the hypothesis that financial meltdowns are abrupt critical transitions that occur when the system reaches a tipping point. Theoretical and empirical studies of climatic and ecological dynamical systems have shown that the approach to tipping points is preceded by a generic phenomenon called critical slowing down, i.e. an increasingly slow response of the system to perturbations. Therefore, it has been suggested that critical slowing down may be used as an early warning signal of imminent critical transitions. Whether financial markets exhibit critical slowing down prior to meltdowns remains unclear. Here, our analysis reveals that three major US markets (Dow Jones Index, S&P 500 and NASDAQ) and two European markets (DAX and FTSE) did not exhibit critical slowing down prior to major financial crashes over the last century. However, all markets showed strong trends of rising variability, quantified by time series variance and the spectral function at low frequencies, prior to crashes. These results suggest that financial crashes are not critical transitions that occur in the vicinity of a tipping point. Using a simple model, we argue that financial crashes are likely to be stochastic transitions which can occur even when the system is far away from the tipping point. Specifically, we show that a gradually increasing strength of stochastic perturbations may have caused abrupt transitions in the financial markets. Broadly, our results highlight the importance of stochastically driven abrupt transitions in real-world scenarios. Our study offers rising variability as a precursor of financial meltdowns, albeit with the limitation that it may signal false alarms.
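Both quantities involved, critical slowing down (commonly tracked via lag-1 autocorrelation) and rising variability (tracked via variance), are rolling-window statistics; a minimal sketch of how they are typically computed (window length and any detrending choices here are assumptions, not the paper's settings) is:

```python
import numpy as np

def early_warning_indicators(series, window):
    """Rolling variance and lag-1 autocorrelation over a sliding window:
    rising autocorrelation indicates critical slowing down, rising variance
    indicates growing fluctuations."""
    variances, autocorrs = [], []
    for start in range(len(series) - window + 1):
        w = np.asarray(series[start:start + window], dtype=float)
        variances.append(np.var(w))
        autocorrs.append(np.corrcoef(w[:-1], w[1:])[0, 1])  # lag-1 autocorrelation
    return np.array(variances), np.array(autocorrs)
```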