896 results for Jump linear quadratic (JLQ) control
Abstract:
On cover: Control Data 1604/1604-A computer.
Abstract:
This paper investigates the non-linear bending behaviour of functionally graded plates that are bonded with piezoelectric actuator layers and subjected to transverse loads and a temperature gradient based on Reddy's higher-order shear deformation plate theory. The von Karman-type geometric non-linearity, piezoelectric and thermal effects are included in mathematical formulations. The temperature change is due to a steady-state heat conduction through the plate thickness. The material properties are assumed to be graded in the thickness direction according to a power-law distribution in terms of the volume fractions of the constituents. The plate is clamped at two opposite edges, while the remaining edges can be free, simply supported or clamped. Differential quadrature approximation in the X-axis is employed to convert the partial differential governing equations and the associated boundary conditions into a set of ordinary differential equations. By choosing the appropriate functions as the displacement and stress functions on each nodal line and then applying the Galerkin procedure, a system of non-linear algebraic equations is obtained, from which the non-linear bending response of the plate is determined through a Picard iteration scheme. Numerical results for zirconia/aluminium rectangular plates are given in dimensionless graphical form. The effects of the applied actuator voltage, the volume fraction exponent, the temperature gradient, as well as the characteristics of the boundary conditions are also studied in detail. Copyright (C) 2004 John Wiley & Sons, Ltd.
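The final step of the procedure above, determining the non-linear response via a Picard scheme, is a fixed-point iteration x = g(x). A minimal sketch, assuming a toy two-variable system in place of the plate's algebraic equations:

```python
import numpy as np

def picard_solve(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point (Picard) iteration: x_{k+1} = g(x_k) until convergence."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = g(x)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

# Illustrative nonlinear system (not the plate equations):
# x = cos(x), y = 0.5*sin(x)
g = lambda v: np.array([np.cos(v[0]), 0.5 * np.sin(v[0])])
root = picard_solve(g, [0.0, 0.0])
```

Convergence of such a scheme requires the map g to be contractive near the solution, which is why Picard iteration is typically paired with under-relaxation or load stepping in structural problems.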
Abstract:
Histological sections of primary segmental arteries and associated interarterial anastomoses and secondary vessels from the long-finned eel Anguilla reinhardtii were examined by light and transmission electron microscopy. Interarterial anastomoses were found to originate from the primary vasculature as depressions through the tunica intima and media, from where they ran perpendicularly to the adventitial layer, before coiling extensively. From here the anastomoses travelled a relatively linear path in the outer margin of the adventitia to anastomose with a secondary vessel running in parallel with the primary counterpart. In contrast to findings from other species, secondary vessels had a structure quite similar to that of primary vessels; they were lined by endothelial cells on a continuous basement membrane, with a single layer of smooth muscle cells surrounding the vessel. Smooth muscle cells were also found in the vicinity of interarterial anastomoses in the adventitia, but these appeared more longitudinally orientated. The presence of smooth muscle cells on all aspects of the secondary circulation suggests that this vascular system is regulated in a similar manner as the primary vascular system. Because interarterial anastomoses are structurally integrated with the primary vessel from which they originate, it is anticipated that flow through secondary vessels to some extent is affected by the vascular tone of the primary vessel. Immunohistochemical studies showed that primary segmental arteries displayed moderate immunoreactivity to antibodies against 5-hydroxytryptamine and substance P, while interarterial anastomoses and secondary vessels showed dense immunoreactivity. No immunoreactivity was observed on primary or secondary arteries against neuropeptide Y or calcitonin gene-related peptide.
Abstract:
Photon counting induces an effective non-linear optical phase shift in certain states derived by linear optics from single photons. Although this non-linearity is non-deterministic, it is sufficient in principle to allow scalable linear optics quantum computation (LOQC). The most obvious way to encode a qubit optically is as a superposition of the vacuum and a single photon in one mode-so-called 'single-rail' logic. Until now this approach was thought to be prohibitively expensive (in resources) compared to 'dual-rail' logic where a qubit is stored by a photon across two modes. Here we attack this problem with real-time feedback control, which can realize a quantum-limited phase measurement on a single mode, as has been recently demonstrated experimentally. We show that with this added measurement resource, the resource requirements for single-rail LOQC are not substantially different from those of dual-rail LOQC. In particular, with adaptive phase measurements an arbitrary qubit state α|0⟩ + β|1⟩ can be prepared deterministically.
Abstract:
It has long been recognized that demographic structure within a population can significantly affect the likely outcomes of harvest. Many studies have focussed on equilibrium dynamics and maximization of the value of the harvest taken. However, in some cases the management objective is to maintain the population at an abundance that is significantly below the carrying capacity. Achieving such an objective by harvest can be complicated by the presence of significant structure (age or stage) in the target population. In such cases, optimal harvest strategies must account for differences among age- or stage-classes of individuals in their relative contribution to the demography of the population. In addition, structured populations are also characterized by transient non-linear dynamics following perturbation, such that even under an equilibrium harvest, the population may exhibit significant momentum, increasing or decreasing before cessation of growth. Using simple linear time-invariant models, we show that if harvest levels are set dynamically (e.g., annually) then transient effects can be as or more important than equilibrium outcomes. We show that appropriate harvest rates can be complicated by uncertainty about the demographic structure of the population, or limited control over the structure of the harvest taken. (c) 2006 Elsevier B.V. All rights reserved.
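The transient-momentum point can be illustrated with a toy stage-structured projection: two populations with the same total abundance and the same proportional harvest rate, but different stage structures, follow very different short-term trajectories. The matrix entries below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical 3-stage projection matrix (juvenile, subadult, adult)
A = np.array([[0.0, 1.2, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.6, 0.8]])

def project(A, n0, h, steps):
    """Project total abundance under a constant proportional harvest rate h."""
    n = np.asarray(n0, dtype=float)
    sizes = [n.sum()]
    for _ in range(steps):
        n = (1.0 - h) * (A @ n)   # grow one time step, then harvest fraction h
        sizes.append(n.sum())
    return np.array(sizes)

# Equal totals, equal harvest, different structure:
skewed = project(A, [100, 0, 0], h=0.2, steps=10)   # all juveniles
even = project(A, [34, 33, 33], h=0.2, steps=10)    # mixed stages
```

A harvest rate tuned to the equilibrium (stable stage distribution) can therefore over- or under-shoot when applied to a population with transient structure.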
Abstract:
Peak adolescent fracture incidence at the distal end of the radius coincides with a decline in size-corrected BMD in both boys and girls. Peak gains in bone area preceded peak gains in BMC in a longitudinal sample of boys and girls, supporting the theory that the dissociation between skeletal expansion and skeletal mineralization results in a period of relative bone weakness. Introduction: The high incidence of fracture in adolescence may be related to a period of relative skeletal fragility resulting from dissociation between bone expansion and bone mineralization during the growing years. The aim of this study was to examine the relationship between changes in size-corrected BMD (BMDsc) and peak distal radius fracture incidence in boys and girls. Materials and Methods: Subjects were 41 boys and 46 girls measured annually (DXA; Hologic 2000) over the adolescent growth period and again in young adulthood. Ages of peak height velocity (PHV), peak BMC velocity (PBMCV), and peak bone area (BA) velocity (PBAV) were determined for each child. To control for maturational differences, subjects were aligned on PHV. BMDsc was calculated by first regressing the natural logarithms of BMC and BA. The power coefficient (pc) values from this analysis were used as follows: BMDsc = BMC/BA^pc. Results: BMDsc decreased significantly before the age of PHV and then increased until 4 years after PHV. The peak rates in radial fractures (reported from previous work) in both boys and girls coincided with the age of negative velocity in BMDsc; the age of peak BA velocity (PBAV) preceded the age of peak BMC velocity (PBMCV) by 0.5 years in both boys and girls. Conclusions: There is a clear dissociation between PBMCV and PBAV in boys and girls. BMDsc declines before age of PHV before rebounding after PHV. The timing of these events coincides directly with reported fracture rates of the distal end of the radius.
Thus, the results support the theory that there is a period of relative skeletal weakness during the adolescent growth period caused, in part, by a draw on cortical bone to meet the mineral demands of the expanding skeleton resulting in a temporary increased fracture risk.
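The size correction described above (BMDsc = BMC/BA^pc, with the power coefficient pc taken from a log-log regression of BMC on BA) can be sketched as follows; the measurements here are synthetic, not the study's data:

```python
import numpy as np

# Synthetic longitudinal measurements (illustrative only, not the study's data)
BMC = np.array([1.10, 1.35, 1.70, 2.10, 2.60])  # bone mineral content (g)
BA = np.array([10.0, 11.5, 13.2, 15.0, 17.1])   # bone area (cm^2)

# Power coefficient pc: slope of ln(BMC) regressed on ln(BA)
pc, intercept = np.polyfit(np.log(BA), np.log(BMC), 1)

# Size-corrected BMD as defined in the abstract
BMDsc = BMC / BA**pc
```

Because pc is fitted rather than fixed at 1 (as in areal BMD = BMC/BA), BMDsc removes the part of BMD change that is explained purely by skeletal expansion.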
Abstract:
By carefully controlling the concentration of alpha,omega-thiol polystyrene in solution, we achieved formation of unique monocyclic polystyrene chains (i.e., polymer chains with only one disulfide linkage). The presence of cyclic polystyrene was confirmed by its lower than expected molecular weight due to a lower hydrodynamic volume and loss of thiol groups as detected by using Ellman's reagent. The alpha,omega-thiol polystyrene was synthesized by polymerizing styrene in the presence of a difunctional RAFT agent and subsequent conversion of the dithioester end groups to thiols via the addition of hexylamine. Oxidation gave either monocyclic polymer chains (i.e., with only one disulfide linkage) or linear multiblock polymers with many disulfide linkages depending on the concentration of polymer used with greater chance of cyclization in more dilute solutions. At high polymer concentrations, linear multiblock polymers were formed. To control the MWD of these linear multiblocks, monofunctional X-PSTY (X = PhCH2C(S)-S-) was added. It was found that the greatest ratio of X-PSTY to X-PSTY-X resulted in a low Mn and PDI. We have shown that we can control both the structure and MWD using this chemistry, but more importantly such disulfide linkages can be readily reduced back to the starting polystyrene with thiol end groups, which has potential use for a recyclable polymer material.
Abstract:
We consider a problem of robust performance analysis of linear discrete time varying systems on a bounded time interval. The system is represented in the state-space form. It is driven by a random input disturbance with imprecisely known probability distribution; this distributional uncertainty is described in terms of entropy. The worst-case performance of the system is quantified by its a-anisotropic norm. Computing the anisotropic norm is reduced to solving a set of difference Riccati and Lyapunov equations and a special form equation.
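The anisotropic-norm computation itself couples difference Riccati and Lyapunov equations with a scalar equation for the anisotropy level. As a minimal illustration of the kind of backward difference Riccati recursion involved, here is a finite-horizon recursion for a discrete state-space system (the matrices are illustrative, not from the paper):

```python
import numpy as np

def riccati_backward(A, B, Q, R, QN, N):
    """Backward difference Riccati recursion:
    P_k = Q + A'P A - A'P B (R + B'P B)^{-1} B'P A, starting from P_N = QN."""
    P = QN.copy()
    for _ in range(N):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)  # feedback gain at this step
        P = Q + A.T @ P @ (A - B @ K)
    return P

# Illustrative time-invariant data (a double integrator with dt = 0.1)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P = riccati_backward(A, B, Q, R, QN=np.eye(2), N=50)
```

For a time-varying system on a bounded interval, the same recursion runs with step-dependent matrices A_k, B_k, so no stationary limit is assumed.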
Abstract:
This paper presents results from the first use of neural networks for the real-time feedback control of high temperature plasmas in a Tokamak fusion experiment. The Tokamak is currently the principal experimental device for research into the magnetic confinement approach to controlled fusion. In the Tokamak, hydrogen plasmas, at temperatures of up to 100 million K, are confined by strong magnetic fields. Accurate control of the position and shape of the plasma boundary requires real-time feedback control of the magnetic field structure on a time-scale of a few tens of microseconds. Software simulations have demonstrated that a neural network approach can give significantly better performance than the linear technique currently used on most Tokamak experiments. The practical application of the neural network approach requires high-speed hardware, for which a fully parallel implementation of the multi-layer perceptron, using a hybrid of digital and analogue technology, has been developed.
Abstract:
The main aim of this paper is to provide a tutorial on regression with Gaussian processes. We start from Bayesian linear regression, and show how by a change of viewpoint one can see this method as a Gaussian process predictor based on priors over functions, rather than on priors over parameters. This leads into a more general discussion of Gaussian processes in Section 4. Section 5 deals with further issues, including hierarchical modelling and the setting of the parameters that control the Gaussian process, the covariance functions for neural network models and the use of Gaussian processes in classification problems.
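The change of viewpoint the tutorial describes, predicting from a prior over functions, reduces to a few linear-algebra steps once a covariance function is chosen. A minimal sketch with a squared-exponential covariance (hyperparameters chosen arbitrarily for illustration):

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xs, noise=1e-2):
    """GP posterior mean and pointwise variance at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))  # training covariance + noise
    Ks = rbf(X, Xs)                         # train-test covariance
    Kss = rbf(Xs, Xs)                       # test covariance
    mean = Ks.T @ np.linalg.solve(K, y)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Noisy observations of a smooth function
X = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(X)
mean, var = gp_predict(X, y, np.array([0.5]))
```

The posterior mean is a weighted combination of the training targets, and the posterior variance shrinks near the data, which is the function-space picture of Bayesian linear regression with infinitely many basis functions.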
Abstract:
How do signals from the 2 eyes combine and interact? Our recent work has challenged earlier schemes in which monocular contrast signals are subject to square-law transduction followed by summation across eyes and binocular gain control. Much more successful was a new 'two-stage' model in which the initial transducer was almost linear and contrast gain control occurred both pre- and post-binocular summation. Here we extend that work by: (i) exploring the two-dimensional stimulus space (defined by left- and right-eye contrasts) more thoroughly, and (ii) performing contrast discrimination and contrast matching tasks for the same stimuli. Twenty-five base-stimuli, made from 1 c/deg patches of horizontal grating, were defined by the factorial combination of five contrasts for the left eye (0.3-32%) with five contrasts for the right eye (0.3-32%). Other than in contrast, the gratings in the two eyes were identical. In a 2IFC discrimination task, the base-stimuli were masks (pedestals), where the contrast increment was presented to one eye only. In a matching task, the base-stimuli were standards to which observers matched the contrast of either a monocular or binocular test grating. In the model, discrimination depends on the local gradient of the observer's internal contrast-response function, while matching equates the magnitude (rather than gradient) of response to the test and standard. With all model parameters fixed by previous work, the two-stage model successfully predicted both the discrimination and the matching data and was much more successful than linear or quadratic binocular summation models. These results show that performance measures and perception (contrast discrimination and contrast matching) can be understood in the same theoretical framework for binocular contrast vision. © 2007 VSP.
Abstract:
A sieve plate distillation column has been constructed and interfaced to a minicomputer with the necessary instrumentation for dynamic, estimation and control studies with special bearing on low-cost and noise-free instrumentation. A dynamic simulation of the column with a binary liquid system has been compiled using deterministic models that include fluid dynamics via Brambilla's equation for tray liquid holdup calculations. The simulation predictions have been tested experimentally under steady-state and transient conditions. The simulator's predictions of the tray temperatures have shown reasonably close agreement with the measured values under steady-state conditions and in the face of a step change in the feed rate. A method of extending linear filtering theory to highly nonlinear systems with very nonlinear measurement functional relationships has been proposed and tested by simulation on binary distillation. The simulation results have proved that the proposed methodology can overcome the typical instability problems associated with the Kalman filters. Three extended Kalman filters have been formulated and tested by simulation. The filters have been used to refine a much simplified model sequentially and to estimate parameters such as the unmeasured feed composition using information from the column simulation. It is first assumed that corrupted tray composition measurements are made available to the filter and then corrupted tray temperature measurements are accessed instead. The simulation results have demonstrated the powerful capability of the Kalman filters to overcome the typical hardware problems associated with the operation of on-line analyzers in relation to distillation dynamics and control by, in effect, replacing them. A method of implementing estimator-aided feedforward (EAFF) control schemes has been proposed and tested by simulation on binary distillation.
The results have shown that the EAFF scheme provides much better control and energy conservation than the conventional feedback temperature control in the face of a sustained step change in the feed rate or multiple changes in the feed rate, composition and temperature. Further extensions of this work are recommended as regards simulation, estimation and EAFF control.
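The extended Kalman filters discussed above follow the standard predict/update cycle, linearizing the nonlinear state and measurement models at the current estimate. A generic single step, sketched with a toy one-state, nonlinear-measurement example (the functions and noise levels are illustrative, not the distillation model):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One extended Kalman filter predict/update step.
    f, h: nonlinear state-transition and measurement functions;
    F, H: their Jacobians evaluated at the current estimate."""
    # Predict
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Update
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R            # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

# Toy example: random-walk state observed through a squared measurement
x, P = np.array([1.0]), np.eye(1)
f = lambda x, u: x
F = lambda x, u: np.eye(1)
h = lambda x: x**2
H = lambda x: np.array([[2 * x[0]]])
Q, R = 0.01 * np.eye(1), np.array([[0.1]])
x, P = ekf_step(x, P, None, np.array([1.44]), f, F, h, H, Q, R)
```

The instability problems mentioned in the abstract typically arise when the linearization is poor or the covariance loses positive definiteness, which motivates the modified formulations the authors test.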
Abstract:
Over the last ten years our understanding of early spatial vision has improved enormously. The long-standing model of probability summation amongst multiple independent mechanisms with static output nonlinearities responsible for masking is obsolete. It has been replaced by a much more complex network of additive, suppressive, and facilitatory interactions and nonlinearities across eyes, area, spatial frequency, and orientation that extend well beyond the classical receptive field (CRF). A review of a substantial body of psychophysical work performed by ourselves (20 papers), and others, leads us to the following tentative account of the processing path for signal contrast. The first suppression stage is monocular, isotropic, non-adaptable, accelerates with RMS contrast, most potent for low spatial and high temporal frequencies, and extends slightly beyond the CRF. Second and third stages of suppression are difficult to disentangle but are possibly pre- and post-binocular summation, and involve components that are scale invariant, isotropic, anisotropic, chromatic, achromatic, adaptable, interocular, substantially larger than the CRF, and saturated by contrast. The monocular excitatory pathways begin with half-wave rectification, followed by a preliminary stage of half-binocular summation, a square-law transducer, full binocular summation, pooling over phase, cross-mechanism facilitatory interactions, additive noise, linear summation over area, and a slightly uncertain decision-maker. The purpose of each of these interactions is far from clear, but the system benefits from area and binocular summation of weak contrast signals as well as area and ocularity invariances above threshold (a herd of zebras doesn't change its contrast when it increases in number or when you close one eye). One of many remaining challenges is to determine the stage or stages of spatial tuning in the excitatory pathway.
Abstract:
Our understanding of early spatial vision owes much to contrast masking and summation paradigms. In particular, the deep region of facilitation at low mask contrasts is thought to indicate a rapidly accelerating contrast transducer (eg a square-law or greater). In experiment 1, we tapped an early stage of this process by measuring monocular and binocular thresholds for patches of 1 cycle deg^-1 sine-wave grating. Threshold ratios were around 1.7, implying a nearly linear transducer with an exponent around 1.3. With this form of transducer, two previous models (Legge, 1984 Vision Research 24 385 - 394; Meese et al, 2004 Perception 33 Supplement, 41) failed to fit the monocular, binocular, and dichoptic masking functions measured in experiment 2. However, a new model with two stages of divisive gain control fits the data very well. Stage 1 incorporates nearly linear monocular transducers (to account for the high level of binocular summation and slight dichoptic facilitation), and monocular and interocular suppression (to fit the profound dichoptic masking). Stage 2 incorporates steeply accelerating transduction (to fit the deep regions of monocular and binocular facilitation), and binocular summation and suppression (to fit the monocular and binocular masking). With all model parameters fixed from the discrimination thresholds, we examined the slopes of the psychometric functions. The monocular and binocular slopes were steep (Weibull β ≈ 3-4) at very low mask contrasts and shallow (β ≈ 1.2) at all higher contrasts, as predicted by all three models. The dichoptic slopes were steep (β ≈ 3-4) at very low contrasts, and very steep (β > 5.5) at high contrasts (confirming Meese et al, loc. cit.). A crucial new result was that intermediate dichoptic mask contrasts produced shallow slopes (β ≈ 2).
Only the two-stage model predicted the observed pattern of slope variation, so providing good empirical support for a two-stage process of binocular contrast transduction. [Supported by EPSRC GR/S74515/01]
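The two-stage architecture described above can be sketched schematically: a nearly linear monocular stage with interocular suppression, binocular summation, then a steeply accelerating second stage with self-suppression. The parameter values and exact functional form below are illustrative assumptions, not the fitted model:

```python
def two_stage_response(cl, cr, m=1.3, s=1.0, p=8.0, q=6.5, z=0.01):
    """Schematic two-stage binocular gain-control model (illustrative parameters).
    Stage 1: nearly linear transduction (exponent m) divided by a gain pool
    that includes the other eye's contrast (interocular suppression).
    Stage 2: steep transduction (p, q) of the binocular sum."""
    stage1_l = cl**m / (s + cl + cr)
    stage1_r = cr**m / (s + cl + cr)
    b = stage1_l + stage1_r          # binocular summation
    return b**p / (z + b**q)

# Binocular presentation yields a larger response than monocular at equal contrast
mono = two_stage_response(0.1, 0.0)
bino = two_stage_response(0.1, 0.1)
```

In a model of this shape, discrimination thresholds follow the local gradient of the response function, which is how the slope pattern across monocular, binocular, and dichoptic conditions constrains the two stages separately.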