130 results for Greedy randomized adaptive search procedure
Abstract:
The author presents adaptive control techniques for controlling the flow of real-time jobs from the peripheral processors (PPs) to the central processor (CP) of a distributed system with a star topology. He considers two classes of flow control mechanisms: (1) proportional control, where a certain proportion of the load offered to each PP is sent to the CP, and (2) threshold control, where there is a maximum rate at which each PP can send jobs to the CP. The problem is to obtain good algorithms for dynamically adjusting the control level at each PP in order to prevent overload of the CP, when the load offered by the PPs is unknown and varying. The author formulates the problem approximately as a standard system control problem in which the system has unknown parameters that are subject to change. Using well-known techniques (e.g., naive-feedback-controller and stochastic approximation techniques), he derives adaptive controls for the system control problem. He demonstrates the efficacy of these controls in the original problem by using the control algorithms in simulations of a queuing model of the CP and the load controls.
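A minimal sketch of the threshold-control idea, assuming a toy one-step queuing abstraction and made-up rates rather than the author's actual model: a stochastic-approximation iteration nudges each PP's threshold so that the total rate into the CP tracks a target.

```python
import random

def simulate_cp_load(thresholds, offered_loads):
    """Toy model: each PP forwards at most its threshold rate to the CP."""
    return sum(min(t, lam) for t, lam in zip(thresholds, offered_loads))

target_cp_rate = 10.0          # assumed CP capacity target
thresholds = [5.0, 5.0, 5.0]   # initial control levels for three PPs

for n in range(1, 1001):
    offered = [random.uniform(2.0, 8.0) for _ in thresholds]  # unknown, varying load
    cp_rate = simulate_cp_load(thresholds, offered)
    error = target_cp_rate - cp_rate      # negative when the CP is overloaded
    step = 1.0 / n                        # decreasing SA step size
    thresholds = [max(0.0, t + step * error / len(thresholds)) for t in thresholds]
```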
Abstract:
Diabetes is a serious disease in which the body's production and use of insulin are impaired, causing the glucose concentration level to increase in the bloodstream. Regulating blood glucose levels as close to normal as possible leads to a substantial decrease in the long-term complications of diabetes. In this paper, an intelligent neural network on-line optimal feedback treatment strategy based on nonlinear optimal control theory is presented for the disease using a subcutaneous treatment strategy. A simple mathematical model of the nonlinear dynamics of glucose and insulin interaction in the blood system is considered based on Bergman's minimal model. A glucose infusion term representing the effect of glucose intake resulting from a meal is introduced into the model equations. The efficiency of the proposed controllers is shown using random parameters and random initial conditions in the presence of physical disturbances like food intake. A comparison study with linear quadratic regulator theory brings out the advantages of the nonlinear control synthesis approach. Simulation results show that unlike linear optimal control, the proposed on-line continuous infusion strategy never leads to severe hypoglycemia problems.
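For readers unfamiliar with the model, here is a minimal simulation sketch of Bergman's minimal model with a meal disturbance term; the parameter values, meal profile, and the crude on/off feedback law are illustrative assumptions, not the paper's neural-network strategy.

```python
import math

p1, p2, p3, nI = 0.028, 0.025, 1.3e-5, 0.09  # assumed model parameters
Gb, Ib = 81.0, 15.0                          # basal glucose (mg/dl) and insulin (mU/l)

def meal(t):
    """Assumed exponentially decaying glucose appearance after a meal at t = 0."""
    return 3.0 * math.exp(-0.05 * t)

G, X, I = 250.0, 0.0, 15.0    # hyperglycemic initial condition (illustrative)
dt = 0.1                      # minutes
for k in range(int(400 / dt)):
    t = k * dt
    u = 0.5 if G > 120.0 else 0.0    # crude on/off insulin infusion (placeholder)
    dG = -p1 * (G - Gb) - X * G + meal(t)
    dX = -p2 * X + p3 * (I - Ib)
    dI = -nI * (I - Ib) + u
    G, X, I = G + dt * dG, X + dt * dX, I + dt * dI
print(G)    # glucose after 400 min under the placeholder feedback law
```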
Abstract:
The issue of dynamic spectrum scene analysis in any cognitive radio network becomes extremely complex when low-probability-of-intercept, spread spectrum systems are present in the environment. The detection and estimation become more complex if the frequency hopping spread spectrum is adaptive in nature. In this paper, we propose a two-phase approach for the detection and estimation of frequency hopping signals. A polyphase filter bank is proposed as the architecture of choice for the detection phase to efficiently detect the presence of a frequency hopping signal. Based on the modeling of the frequency hopping signal, it can be shown that parametric methods of line spectral analysis are well suited for the estimation of frequency hopping signals if the issues of order estimation and time localization are resolved. An algorithm using line spectra parameter estimation and wavelet-based transient detection is proposed which resolves the above issues in a computationally efficient manner suitable for implementation in a cognitive radio. The simulations show promising results, proving that adaptive frequency hopping signals can be detected and demodulated in a noncooperative context, even at a very low signal-to-noise ratio, in real time.
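As a rough illustration of the detection phase, the sketch below channelizes a synthetic hopping tone and applies a per-channel energy test; it uses a trivial rectangular prototype filter (so the "polyphase bank" degenerates to a plain FFT filter bank), an assumption made for brevity rather than the proposed architecture.

```python
import numpy as np

def channelizer_energies(x, n_channels):
    """Per-frame, per-channel energies from an FFT filter bank with a
    rectangular prototype filter (a simplifying assumption)."""
    usable = len(x) // n_channels * n_channels
    frames = x[:usable].reshape(-1, n_channels)
    return np.abs(np.fft.fft(frames, axis=1)) ** 2

fs, n_ch = 1024.0, 16
t = np.arange(1024) / fs
hop_freqs = [100.0, 300.0, 220.0, 60.0]          # assumed hop sequence (Hz)
x = np.concatenate([np.cos(2 * np.pi * f * t) for f in hop_freqs])
x += 0.5 * np.random.randn(len(x))               # additive noise

E = channelizer_energies(x, n_ch)
hops = E[:, : n_ch // 2].argmax(axis=1)          # strongest channel per frame
```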
Abstract:
Motion estimation is one of the most power-hungry operations in video coding. While optimal search (e.g., full search) methods give the best quality, non-optimal methods are often used in order to reduce cost and power. Various algorithms have been used in practice that trade off quality vs. complexity. Global elimination is an algorithm based on pixel averaging that reduces the complexity of motion search while keeping performance close to that of full search. We propose an adaptive version of the global elimination algorithm that extracts individual macro-block features using the Hadamard transform to optimize the search. The performance achieved is close to that of the full search method and global elimination. Operational complexity, and hence power, is reduced by 30% to 45% compared to the global elimination method.
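A small sketch of the kind of Hadamard-based macro-block feature such a scheme might extract; the 4x4 transform and the "activity" measure (sum of absolute AC coefficients) are illustrative assumptions, not the paper's exact feature.

```python
import numpy as np

def hadamard4():
    """4x4 Hadamard matrix built from the 2x2 kernel."""
    H2 = np.array([[1, 1], [1, -1]])
    return np.kron(H2, H2)

def block_activity(block):
    """Sum of absolute AC Hadamard coefficients of a 4x4 block."""
    H = hadamard4()
    coeffs = H @ block @ H.T
    return np.abs(coeffs).sum() - abs(coeffs[0, 0])   # drop the DC term

mb = np.random.randint(0, 256, size=(16, 16))         # a 16x16 macro-block
# per-sub-block activities could steer how aggressively candidates are pruned
acts = [block_activity(mb[i:i + 4, j:j + 4])
        for i in range(0, 16, 4) for j in range(0, 16, 4)]
```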
Abstract:
This paper presents a singular edge-based smoothed finite element method (sES-FEM) for mechanics problems with singular stress fields of arbitrary order. The sES-FEM uses a basic mesh of three-noded linear triangular (T3) elements and a special layer of five-noded singular triangular elements (sT5) connected to the singular point of the stress field. The sT5 element has an additional node on each of the two edges connected to the singular point. It allows us to represent a simple and efficient enrichment with the desired terms for the displacement field near the singular point while satisfying the partition-of-unity property. The stiffness matrix of the discretized system is then obtained using the assumed displacement values (not the derivatives) over smoothing domains associated with the edges of elements. An adaptive procedure for the sES-FEM is proposed to enhance the quality of the solution with a minimized number of nodes. Several numerical examples are provided to validate the reliability of the present sES-FEM.
Abstract:
Compressive Sampling Matching Pursuit (CoSaMP) is one of the popular greedy methods in the emerging field of Compressed Sensing (CS). In addition to its appealing empirical performance, CoSaMP also has splendid theoretical guarantees for convergence. In this paper, we propose a modification of CoSaMP that adaptively chooses the dimension of the search space in each iteration, using a threshold-based approach. Using Monte Carlo simulations, we show that this modification improves the reconstruction capability of the CoSaMP algorithm in both clean and noisy measurement cases. From empirical observations, we also propose an optimum value for the threshold to use in applications.
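A sketch of a CoSaMP iteration with a threshold-based identification step standing in for the paper's modification; the specific rule (keep proxy entries above a fraction of the peak magnitude, instead of the fixed 2s largest) is an assumption.

```python
import numpy as np

def cosamp_adaptive(A, y, s, thresh=0.5, iters=20):
    """CoSaMP with a threshold-based identification step (illustrative)."""
    n = A.shape[1]
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        proxy = A.T @ r
        # adaptive step: keep proxies above a fraction of the peak magnitude
        idx = np.flatnonzero(np.abs(proxy) >= thresh * np.abs(proxy).max())
        support = np.union1d(idx, np.flatnonzero(x))
        b = np.zeros(n)
        b[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-s:]          # prune to the s largest
        x = np.zeros(n)
        x[keep] = b[keep]
        r = y - A @ x
        if np.linalg.norm(r) < 1e-9 * np.linalg.norm(y):
            break
    return x

m, n, s = 60, 200, 5
A = np.random.randn(m, n) / np.sqrt(m)
x_true = np.zeros(n)
x_true[np.random.choice(n, s, replace=False)] = np.random.randn(s)
x_hat = cosamp_adaptive(A, A @ x_true, s)
```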
Abstract:
This paper investigates the use of adaptive group testing to find a spectrum hole of a specified bandwidth in a given wideband of interest. We propose a group testing-based spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy by testing a group of adjacent subbands in a single test. This is enabled by a simple and easily implementable sub-Nyquist sampling scheme for signal acquisition by the cognitive radios (CRs). The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of signals from adjacent subbands. Energy-based hypothesis tests are used to provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes of a specified bandwidth. We extend this framework to a multistage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including noncontiguous spectrum hole search. Furthermore, we provide the analytical means to optimize the group tests with respect to the detection thresholds, number of samples, group size, and number of stages to minimize the detection delay under a given error probability constraint. Our analysis allows one to identify the sparsity and SNR regimes where group testing can lead to significantly lower detection delays compared with a conventional bin-by-bin energy detection scheme; the latter is, in fact, a special case of the group test when the group size is set to 1 bin. We validate our analytical results via Monte Carlo simulations.
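The group test itself is easy to sketch. Below, aliased acquisition is modeled by simply summing the subband signals (an assumption standing in for the sub-Nyquist sampler), and the group is declared occupied when the energy of the sum exceeds a noise-scaled threshold.

```python
import numpy as np

def group_occupied(subband_signals, noise_var, alpha=1.5):
    """Energy test on the aliased sum of a group of subbands."""
    aliased = np.sum(subband_signals, axis=0)   # stands in for aliased acquisition
    energy = np.mean(aliased ** 2)
    threshold = alpha * noise_var * len(subband_signals)
    return energy > threshold

N, noise_var = 1000, 1.0
group = [np.sqrt(noise_var) * np.random.randn(N) for _ in range(4)]
group[2] += 2.0 * np.random.randn(N)            # one occupied subband (6 dB SNR)
print(group_occupied(group, noise_var))         # likely True
```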
Abstract:
The aim in this paper is to allocate the 'sleep time' of the individual sensors in an intrusion detection application so that the energy consumption of the sensors is reduced while keeping the tracking error to a minimum. We propose two novel reinforcement learning (RL) based algorithms that attempt to minimize a certain long-run average cost objective. Both our algorithms incorporate feature-based representations to handle the curse of dimensionality associated with the underlying partially observable Markov decision process (POMDP). Further, the feature selection scheme used in our algorithms intelligently manages the energy cost and tracking cost factors, which in turn assists the search for the optimal sleeping policy. We also extend these algorithms to a setting where the intruder's mobility model is not known, by incorporating a stochastic iterative scheme for estimating the mobility model. The simulation results on a synthetic 2-d network setting are encouraging.
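As a compact illustration of feature-based average-cost RL in this spirit, the sketch below runs a semi-gradient R-learning-style update with a tiny handcrafted feature vector; the features, cost terms, and belief dynamics are all illustrative assumptions, not the paper's algorithms.

```python
import random

ACTIONS = [0, 1, 2, 3]           # candidate sleep durations for a sensor

def features(belief, action):
    """Tiny handcrafted feature vector over (belief about intruder, action)."""
    return [1.0, belief, float(action), belief * action]

theta = [0.0] * 4                # linear weights
rho = 0.0                        # running average-cost estimate

def q(belief, a):
    return sum(w * f for w, f in zip(theta, features(belief, a)))

belief = 0.5
for step in range(10_000):
    # epsilon-greedy choice of sleep duration
    a = min(ACTIONS, key=lambda u: q(belief, u)) if random.random() > 0.1 \
        else random.choice(ACTIONS)
    energy_cost = 1.0 - 0.2 * a            # sleeping longer saves energy...
    tracking_cost = belief * a             # ...but risks losing the intruder
    cost = energy_cost + tracking_cost
    belief2 = min(1.0, max(0.0, belief + random.uniform(-0.1, 0.1)))
    delta = cost - rho + min(q(belief2, u) for u in ACTIONS) - q(belief, a)
    theta = [w + 0.01 * delta * f for w, f in zip(theta, features(belief, a))]
    rho += 0.001 * (cost - rho)            # track the average cost
    belief = belief2
```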
Abstract:
Aims. In this work we search for the signatures of low-dimensional chaos in the temporal behavior of the Kepler-field blazar W2R 1946+42. Methods. We use a publicly available, ~160 000-point-long, and mostly equally spaced light curve of W2R 1946+42. We apply the correlation integral method to both the real datasets and phase-randomized surrogates. Results. We are not able to confirm the presence of low-dimensional chaos in the light curve. This result, however, still leads to some important implications for blazar emission mechanisms, which are discussed.
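For reference, a minimal Grassberger-Procaccia correlation-integral sketch of the kind of test described; the embedding dimension, delay, and the noise stand-in for the light curve are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_integral(x, m, tau, radii):
    """Grassberger-Procaccia C(r) for a delay-embedded series x."""
    N = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + N] for i in range(m)])
    d = pdist(emb)                          # all pairwise distances
    return np.array([np.mean(d < r) for r in radii])

x = np.random.randn(2000)                   # stand-in for a (surrogate) light curve
radii = np.logspace(-0.5, 1.0, 12)
C = correlation_integral(x, m=5, tau=1, radii=radii)
# For low-dimensional chaos the log-log slope saturates at the attractor
# dimension as m grows; for noise it keeps growing with m.
slope = np.polyfit(np.log(radii), np.log(np.maximum(C, 1e-12)), 1)[0]
print(slope)
```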
Abstract:
We consider the problem of optimizing the workforce of a service system. Adapting the staffing levels in such systems is non-trivial due to large variations in workload, and the large number of system parameters does not allow for a brute-force search. Further, because these parameters change on a weekly basis, the optimization should not take longer than a few hours. Our aim is to find the optimum staffing levels from a discrete high-dimensional parameter set that minimize the long-run average of the single-stage cost function, while adhering to the constraints relating to queue stability and service-level agreement (SLA) compliance. The single-stage cost function balances the conflicting objectives of utilizing workers better and attaining the target SLAs. We formulate this problem as a constrained Markov cost process parameterized by the (discrete) staffing levels. We propose novel simultaneous perturbation stochastic approximation (SPSA)-based algorithms for solving the above problem. The algorithms include both first-order as well as second-order methods and incorporate SPSA-based gradient/Hessian estimates for primal descent, while performing dual ascent for the Lagrange multipliers. Both algorithms are online and update the staffing levels in an incremental fashion. Further, they involve a certain generalized smooth projection operator, which is essential to project the continuous-valued worker parameter tuned by our algorithms onto the discrete set. The smoothness is necessary to ensure that the underlying transition dynamics of the constrained Markov cost process is itself smooth (as a function of the continuous-valued parameter): a critical requirement to prove the convergence of both algorithms. We validate our algorithms via performance simulations based on data from five real-life service systems. For the sake of comparison, we also implement a scatter-search-based algorithm using the state-of-the-art optimization toolkit OptQuest. From the experiments, we observe that both our algorithms converge empirically and consistently outperform OptQuest in most of the settings considered. This finding, coupled with the computational advantage of our algorithms, makes them amenable for adaptive labor staffing in real-life service systems.
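A bare-bones sketch of the SPSA primal step on a toy staffing cost; the cost function, step-size schedules, and the naive rounding at the end (in place of the paper's generalized smooth projection operator) are illustrative assumptions.

```python
import random

def cost(levels):
    """Toy single-stage cost: worker cost plus an SLA penalty (assumed)."""
    workers = sum(levels)
    sla_gap = max(0.0, 50.0 - 2.0 * workers)
    return workers + 10.0 * sla_gap

theta = [10.0, 10.0, 10.0]              # continuous-valued staffing parameter
for n in range(1, 501):
    c = 1.0 / n ** 0.25                 # SPSA perturbation size
    a = 0.1 / n                         # SPSA step size
    delta = [random.choice([-1.0, 1.0]) for _ in theta]   # Rademacher perturbation
    plus = [t + c * d for t, d in zip(theta, delta)]
    minus = [t - c * d for t, d in zip(theta, delta)]
    # two-measurement SPSA gradient estimate
    g = [(cost(plus) - cost(minus)) / (2.0 * c * d) for d in delta]
    theta = [max(0.0, t - a * gi) for t, gi in zip(theta, g)]

staffing = [round(t) for t in theta]    # crude projection onto the discrete set
```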
Abstract:
Executing authenticated computation on outsourced data is currently an area of major interest in cryptology. Large databases are being outsourced to untrusted servers without appreciable verification mechanisms. As an adversarial server could produce erroneous output, clients should not trust the server's response blindly. Primitive set operations like union, set difference, and intersection can be invoked on outsourced data in different concrete settings and should be verifiable by the client. One such interesting adaptation is to authenticate email search results, where the untrusted mail server has to provide a proof along with the search result. Recently, Ohrimenko et al. proposed a scheme for authenticating email search. We suggest significant improvements over their proposal in terms of client computation and communication resources by properly recasting it in a two-party setting. In contrast to Ohrimenko et al., we are able to make the number of bilinear pairing evaluations, the costliest operation in the verification procedure, independent of the result set cardinality for the union operation. We also provide an analytical comparison of our scheme with their proposal, which is further corroborated through experiments.
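The pairing-based construction is too involved to sketch briefly, so the following stand-in (plainly a different, much simpler technique: a Merkle hash tree) only illustrates the overall client/server pattern, where the client keeps a short digest and verifies each returned email against a server-supplied proof.

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])               # duplicate the odd node
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(leaves, i):
    """Sibling path for leaf i (the server's proof)."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[i ^ 1], i % 2))        # (sibling hash, am-I-right-child)
        level = [h(level[j], level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sib, is_right in path:
        node = h(sib, node) if is_right else h(node, sib)
    return node == root

emails = [b"mail-0", b"mail-1", b"mail-2", b"mail-3"]
root = merkle_root(emails)                        # client keeps only this digest
assert verify(root, emails[2], prove(emails, 2))  # check one returned email
```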
Abstract:
In a classic study, Kacser & Burns (1981, Genetics 97, 639-666) demonstrated that given certain plausible assumptions, the flux in a metabolic pathway was more or less indifferent to the activity of any of the enzymes in the pathway taken singly. It was inferred from this that the observed dominance of most wild-type alleles with respect to loss-of-function mutations did not require an adaptive, meaning selectionist, explanation. Cornish-Bowden (1987, J. theor. Biol. 125, 333-338) showed that the Kacser-Burns inference was not valid when substrate concentrations were large relative to the relevant Michaelis constants. We find that in a randomly constructed functional pathway, even when substrate levels are small, one can expect high values of control coefficients for metabolic flux in the presence of significant nonlinearities as exemplified by enzymes with Hill coefficients ranging from two to six, or by the existence of oscillatory loops. Under these conditions the flux can be quite sensitive to changes in enzyme activity as might be caused by inactivating one of the two alleles in a diploid. Therefore, the phenomenon of dominance cannot be a trivial "default" consequence of physiology but must be intimately linked to the manner in which metabolic networks have been moulded by natural selection.
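The sensitivity claim is easy to probe numerically. The sketch below estimates flux control coefficients by finite differences in a toy two-enzyme pathway with a Hill coefficient of 4; the rate laws are illustrative assumptions, chosen only to show the machinery (by the summation theorem the two coefficients still sum to ~1).

```python
def steady_flux(E1, E2, h=4.0, S0=10.0):
    """Integrate dS1/dt = v1 - v2 to steady state; return the flux J = v2."""
    S1 = 1.0
    for _ in range(200_000):
        v1 = E1 * S0 / (1.0 + S0) / (1.0 + S1 ** h)   # product-inhibited supply
        v2 = E2 * S1 ** h / (1.0 + S1 ** h)           # Hill kinetics, h = 4
        S1 = max(1e-9, S1 + 1e-3 * (v1 - v2))         # Euler relaxation
    return v2

def control_coefficient(i, E=(1.0, 1.0), eps=0.01):
    """C_i = (E_i / J) * dJ/dE_i, estimated by a finite difference."""
    Ep = list(E)
    Ep[i] *= 1.0 + eps
    J0, J1 = steady_flux(*E), steady_flux(*Ep)
    return (J1 - J0) / (J0 * eps)

print(control_coefficient(0), control_coefficient(1))  # they sum to ~1
```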
Abstract:
Selective introduction and removal of protecting groups is of great significance in organic synthesis.1 The benzyl ether function is one of the most common protecting groups for alcohols. Selective oxidative removal of the 4-methoxybenzyl (MPM) ethers in the presence of benzyl ethers made the MPM moiety an alternative protecting group, and its utility in carbohydrate chemistry is well established. Several procedures have been developed for the cleavage of the 4-methoxybenzyl moiety, e.g., DDQ oxidation (eq 1),2 electrochemical oxidation,3 homogeneous electron transfer,4 photoinduced single electron transfer,5 boron trichloride-dimethyl sulfide,6 etc. However, in all these methods isolation of the alcohol from the inevitable byproduct, 4-methoxybenzaldehyde [also dichlorodicyanohydroquinone (DDHQ) in the most commonly used method employing DDQ], can be troublesome. Recently Wallace and Hedgetts7 discovered that acetic acid at 90 °C cleaves the aromatic MPM ethers into the corresponding phenols and 4-methoxybenzyl acetate (eq 2), whereas the aliphatic MPM ethers generated, instead of alcohols, the corresponding acetates (eq 3). Complementary to this methodology, herein we report that sodium cyanoborohydride and boron trifluoride etherate reductively cleave, cleanly and efficiently, the aliphatic MPM ethers to an easily separable mixture of the corresponding alcohols and 4-methylanisole.
Abstract:
Numerical analysis of cracked structures often involves numerical estimation of stress intensity factors (SIFs) at a crack tip/front. A newly developed formulation called the universal crack closure integral (UCCI) for the evaluation of potential energy release rates (PERRs) and the corresponding SIFs is presented in this paper. Unlike the existing element-dedicated forms of crack closure integrals (MCCI, VCCI), whose application is limited to finite element analysis, this new numerical SIF/PERR estimation technique is independent of the basic stress analysis procedure, making it universally applicable. The second merit of this procedure is that it avoids the generally error-producing zones close to the crack tip/front singularity. The UCCI procedure, based on Irwin's original CCI, is formulated and explored using a simple 2D problem of a straight crack in an infinite sheet. It is then applied to some three-dimensional crack geometries with the stresses and displacements obtained from a boundary element program.
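As a toy numerical check of the underlying idea, the sketch below evaluates Irwin's crack closure integral with the analytic near-tip fields of a mode-I crack in an infinite plane-stress sheet and recovers G = K^2/E; this verifies the classical integral itself, not the UCCI formulation.

```python
import math

K_exact = 5.0          # target mode-I SIF (MPa*sqrt(m)), an assumed value
E = 200e3              # Young's modulus, plane stress (MPa)

def sigma_yy(r):       # stress ahead of the crack tip
    return K_exact / math.sqrt(2.0 * math.pi * r)

def u_y(r):            # half crack-opening displacement behind the tip
    return 4.0 * K_exact / E * math.sqrt(r / (2.0 * math.pi))

da, n = 1e-3, 4000     # virtual crack extension and quadrature points
G = 0.0
for i in range(n):     # midpoint rule avoids the r = 0 singularity
    r = (i + 0.5) * da / n
    G += sigma_yy(r) * u_y(da - r) * (da / n)
G /= da                # closure work per unit crack extension

K_est = math.sqrt(G * E)
print(K_est)           # ~= 5.0
```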
Abstract:
The aim of this study is to propose a method to assess the long-term chemical weathering mass balance for a regolith developed on a heterogeneous silicate substratum at the small experimental watershed scale by adopting a combined approach of geophysics, geochemistry and mineralogy. We initiated in 2003 a study of the steep climatic gradient and associated geomorphologic features of the edge of the rifted continental passive margin of the Karnataka Plateau, Peninsular India. In the sub-humid transition zone of this climatic gradient we have studied the pristine forested small watershed of Mule Hole (4.3 km^2), developed mainly on a gneissic substratum. Mineralogical, geochemical and geophysical investigations were carried out (i) in characteristic red soil profiles and (ii) in boreholes up to 60 m deep in order to take into account the effect of the weathering mantle roots. In addition, 12 Electrical Resistivity Tomography (ERT) profiles, with an investigation depth of 30 m, were generated at the watershed scale to spatially extend the information gathered in boreholes and soil profiles. The location of the ERT profiles is based on a previous electromagnetic survey with an investigation depth of about 6 m. The soil cover thickness was inferred from the electromagnetic survey combined with a geological/pedological survey. Taking into account the parent rock heterogeneity, the degree of weathering of each of the regolith samples has been defined using both the mineralogical composition and the geochemical indices (Loss on Ignition, Weathering Index of Parker, Chemical Index of Alteration). Comparing these indices with electrical resistivity logs, it has been found that a value of 400 Ohm m clearly delineates the parent rocks from the weathered materials. The 12 inverted ERT profiles were then constrained with this value after verifying the uncertainty due to the inversion procedure; synthetic models based on the field data were used for this purpose. The estimated average regolith thickness at the watershed scale is 17.2 m, including 15.2 m of saprolite and 2 m of soil cover. Finally, using these estimates of the thicknesses, the long-term mass balance is calculated for the average gneiss-derived saprolite and red soil. In the saprolite, the open-system mass-transport function tau indicates that all the major elements except Ca are depleted. The chlorite and biotite crystals, the chief sources for Mg (95%), Fe (84%), Mn (86%) and K (57%, biotite only), are the first to undergo weathering, while the oligoclase crystals remain relatively intact within the saprolite, with a loss of only 18%. The Ca accumulation can be attributed to the precipitation of CaCO3 from the percolating solution under the current and/or paleoclimatic conditions. Overall, the most important losses occur for Si, Mg and Na, with -286 x 10^6 mol/ha (62% of the total mass loss), -67 x 10^6 mol/ha (15%) and -39 x 10^6 mol/ha (9%), respectively. Al, Fe and K account for 7%, 4% and 3% of the total mass loss, respectively. In the red soil profiles, the open-system mass-transport functions indicate that all major elements except Mn are depleted. Most of the oligoclase crystals have broken down, with a loss of 90%. The most important losses occur for Si, Na and Mg, with -55 x 10^6 mol/ha (47% of the total mass loss), -22 x 10^6 mol/ha (19%) and -16 x 10^6 mol/ha (14%), respectively. Ca, Al, K and Fe account for 8%, 6%, 4% and 2% of the total mass loss, respectively. Overall these findings confirm the immaturity of the saprolite at the watershed scale. The soil profiles are more evolved than the saprolite but still contain primary minerals that can undergo further weathering and hence consume atmospheric CO2.
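For reference, a one-function sketch of the open-system mass-transport function used above, with Zr assumed as the immobile reference element and illustrative concentrations rather than the study's data:

```python
def tau(C_j_w, C_j_p, C_i_w, C_i_p):
    """tau_j = (C_j,w / C_j,p) * (C_i,p / C_i,w) - 1, where j is the mobile
    element and i the immobile reference (Zr assumed here); w = weathered
    material, p = parent rock. tau < 0: element j depleted; tau > 0: enriched."""
    return (C_j_w / C_j_p) * (C_i_p / C_i_w) - 1.0

# e.g. Na in saprolite vs. parent gneiss, normalized to Zr (made-up values)
print(tau(C_j_w=1.8, C_j_p=3.0, C_i_w=220.0, C_i_p=180.0))  # ~ -0.51: depleted
```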