970 results for Sensorless speed control


Relevance:

30.00%

Publisher:

Abstract:

Blurred edges appear sharper in motion than when they are stationary. We (Vision Research 38 (1998) 2108) have previously shown how such distortions in perceived edge blur may be accounted for by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. If the form of the transducer is fixed (independent of contrast) for a given speed, then a strong prediction of the model is that motion sharpening should increase with increasing contrast. We measured the sharpening of periodic patterns over a large range of contrasts, blur widths and speeds. The results indicate that whilst sharpening increases with speed it is practically invariant with contrast. The contrast invariance of motion sharpening is not explained by an early, static compressive non-linearity alone. However, several alternative explanations are also inconsistent with these results. We show that if a dynamic contrast gain control precedes the static non-linear transducer then motion sharpening, its speed dependence, and its invariance with contrast, can be predicted with reasonable accuracy. © 2003 Elsevier Science Ltd. All rights reserved.
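
A toy illustration of the prediction being tested above: with a fixed compressive (saturating) transducer, a high-contrast blurred edge is flattened near its extremes far more than a low-contrast one, so the predicted sharpening grows with contrast. The edge profile, the saturating form and its semi-saturation constant are assumed purely for illustration; the dynamic gain-control stage that restores contrast invariance is not reproduced here.

```python
import numpy as np

def width_25_75(profile, x):
    """Distance over which a monotonic edge profile rises from 25% to 75% of its range."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    return x[np.searchsorted(p, 0.75)] - x[np.searchsorted(p, 0.25)]

def transducer(c, s=0.2):
    """Static saturating nonlinearity: progressively more compressive at high contrast."""
    return np.sign(c) * np.abs(c) / (np.abs(c) + s)

x = np.linspace(-4.0, 4.0, 4001)
edge = np.tanh(x)                                   # unit-contrast blurred edge
print(f"physical 25-75% width: {width_25_75(edge, x):.2f}")

for contrast in (0.1, 0.4, 0.8):
    w = width_25_75(transducer(contrast * edge), x)
    print(f"contrast {contrast:.1f}: apparent width after fixed transducer {w:.2f}")
# The apparent width shrinks as contrast rises, i.e. the fixed transducer predicts
# sharpening that increases with contrast -- unlike the measured contrast invariance.
```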

Relevance:

30.00%

Publisher:

Abstract:

Blurred edges appear sharper in motion than when they are stationary. We have previously shown how such distortions in perceived edge blur may be explained by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. To test this model further, we measured the sharpening of drifting, periodic patterns over a large range of contrasts, blur widths, and speeds. The results indicate that, while sharpening increased with speed, it was practically invariant with contrast. This contrast invariance cannot be explained by a fixed compressive nonlinearity, since that predicts almost no sharpening at low contrasts. We show by computational modelling of spatiotemporal responses that, if a dynamic contrast gain control precedes the static nonlinear transducer, then motion sharpening, its speed dependence, and its invariance with contrast can be predicted with reasonable accuracy.

Relevance:

30.00%

Publisher:

Abstract:

A major application of computers has been to control physical processes, in which the computer is embedded within some large physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysis of such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at the early stages of their development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of the firing rules for SFC is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. Thus a standardised approach to the development of real-time process-control systems is required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
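
The firing-rule formalisation itself is not reproduced in the abstract, but the flavour of a token-based execution semantics can be shown with a minimal, generic Petri-net sketch. The net, place names and marking below are invented for the example and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class PetriNet:
    marking: dict        # place -> token count
    transitions: dict    # name -> (input places, output places)

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(self.marking.get(p, 0) > 0 for p in ins)

    def fire(self, t):
        """Classical firing rule: consume one token per input place, produce one per output place."""
        if not self.enabled(t):
            raise ValueError(f"transition {t} is not enabled")
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] -= 1
        for p in outs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Two sequential steps guarded by a shared resource, in the spirit of an SFC sequence.
net = PetriNet(
    marking={"step1": 1, "resource": 1},
    transitions={
        "t1": (["step1", "resource"], ["step2"]),
        "t2": (["step2"], ["step1", "resource"]),
    },
)
net.fire("t1")
print(net.marking)   # {'step1': 0, 'resource': 0, 'step2': 1}
```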

Relevance:

30.00%

Publisher:

Abstract:

The thesis describes an investigation into methods for the specification, design and implementation of computer control systems for flexible manufacturing machines comprising multiple, independent, electromechanically driven mechanisms. An analysis is made of the elements of conventional mechanically coupled machines in order that the operational functions of these elements may be identified. This analysis is used to define the scope of requirements necessary to specify the format, function and operation of a flexible, independently driven mechanism (IDM) machine. A discussion of how this type of machine can accommodate the modern manufacturing needs of high speed and flexibility is presented. A sequential method of capturing requirements for such machines is detailed, based on a hierarchical partitioning of machine requirements from product down to individual independent drive mechanism. A classification of mechanisms using notations including data flow diagrams and Petri nets is described, which supports the capture and validation of requirements. A generic design for a modular IDM machine controller is derived, based upon the hierarchy of control identified in these machines. A two-mechanism experimental machine, used to demonstrate the application of the specification, design and implementation techniques, is detailed. A computer controller prototype and a fully flexible implementation for the IDM machine, based on Petri-net models described using the concurrent programming language Occam, are detailed. The ability of this modular computer controller to support flexible, safe and fault-tolerant operation of the two intermittent-motion, discretely synchronised independent drive mechanisms is presented. The application of the machine development methodology to industrial projects is established.
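
To give a feel for how independent drives can replace a mechanical coupling, the sketch below commands two mechanisms from a common software "virtual master" cycle using simple cam-style motion laws. The motion profiles, strokes and timings are invented for illustration and are not those of the experimental machine described above.

```python
import math

def dwell_rise_dwell(theta, rise_start, rise_end):
    """Cycloidal rise between two dwells, returned as a fraction of full stroke."""
    if theta <= rise_start:
        return 0.0
    if theta >= rise_end:
        return 1.0
    t = (theta - rise_start) / (rise_end - rise_start)
    return t - math.sin(2.0 * math.pi * t) / (2.0 * math.pi)   # cycloidal motion law

stroke_a, stroke_b = 30.0, 12.0          # mm, per-mechanism strokes (assumed)

for step in range(11):
    theta = step / 10.0                                        # master cycle position, 0..1
    pos_a = stroke_a * dwell_rise_dwell(theta, 0.25, 0.75)     # mechanism A
    pos_b = stroke_b * dwell_rise_dwell(theta, 0.40, 0.60)     # mechanism B, offset timing
    print(f"theta={theta:.2f}  A={pos_a:6.2f} mm  B={pos_b:6.2f} mm")
```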

Relevance:

30.00%

Publisher:

Abstract:

Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data-logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model: a mass-transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and the backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimising the differences between the experimental and model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured, and very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled both as a multi-loop decentralised SISO (single-input single-output) problem and as a centralised MIMO (multi-input multi-output) problem, using conventional as well as model-based control techniques such as IMC (internal model control) and MPC (model predictive control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with a controlled variable according to interaction analysis and pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The resulting loops, rotor speed-raffinate concentration and solvent flowrate-extract concentration, showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques since it accounts for loop interactions, time delays, and constraints on the input and output variables.
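
As a pointer to how the pairing decision above is typically made, the snippet below computes a relative gain array from a 2x2 steady-state gain matrix. The gain values are invented for illustration; they are not the gains identified in the thesis.

```python
import numpy as np

# rows: raffinate conc., extract conc.; columns: rotor speed, solvent flowrate
G = np.array([[-0.8,  0.2],
              [ 0.3,  1.1]])

RGA = G * np.linalg.inv(G).T      # element-wise product with the inverse transpose
print(RGA)
# Diagonal elements close to 1 favour pairing rotor speed with raffinate
# concentration and solvent flowrate with extract concentration (weak interaction).
```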

Relevance:

30.00%

Publisher:

Abstract:

The thesis describes an investigation into methods for the design of flexible high-speed product processing machinery, consisting of independent electromechanically actuated machine functions which operate under software coordination and control. An analysis is made of the elements of traditionally designed cam-actuated, mechanically coupled machinery, so that the operational functions and principal performance limitations of the separate machine elements may be identified. These are then used to define the requirements for independent-actuator machinery, with a discussion of how this type of design approach is more suited to modern manufacturing trends. A distributed machine controller topology is developed which is a hybrid of hierarchical and pipeline control. An analysis is made, with the aid of dynamic simulation modelling, which confirms the suitability of the controller for flexible machinery control. The simulations include complex models of multiple independent-actuator systems, which enable product flow and failure analyses to be performed. An analysis is made of high-performance brushless d.c. servomotors, and their suitability for actuating machine motions is assessed. Procedures are developed for the selection of brushless servomotors for intermittent machine motions. An experimental rig is described which has enabled the actuation and control methods developed to be implemented. With reference to this, an evaluation is made of the suitability of the machine design method, and a discussion is given of the developments which are necessary for operational independent-actuator machinery to be attained.
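
The servomotor-selection procedures themselves are not reproduced in the abstract; the sketch below shows only the generic duty-cycle check that such procedures rest on, with torque and timing figures invented for illustration.

```python
import math

segments = [            # (torque in N*m, duration in s) over one machine cycle
    ( 6.0, 0.05),       # accelerate
    ( 1.5, 0.10),       # constant-speed work
    (-5.0, 0.05),       # decelerate
    ( 0.2, 0.30),       # dwell (holding/friction torque only)
]

t_total = sum(dt for _, dt in segments)
t_rms = math.sqrt(sum(T * T * dt for T, dt in segments) / t_total)
t_peak = max(abs(T) for T, _ in segments)

print(f"RMS torque  = {t_rms:.2f} N*m  (compare with the motor's continuous rating)")
print(f"Peak torque = {t_peak:.2f} N*m  (compare with the motor's peak rating)")
```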

Relevance:

30.00%

Publisher:

Abstract:

The high capital cost of robots prohibits their economic application. One method of making their application more economic is to increase their operating speed. This can be done in a number of ways, e.g. redesigning the robot geometry, improving the actuators and improving the control system design. In this thesis the control system design is considered. The literature review identifies two aspects of robot control system design that have not been addressed in any great detail by previous researchers: how significant are the coupling terms in the dynamic equations of the robot, and what is the effect of the coupling terms on the performance of a number of typical independent-axis control schemes? The work in this thesis addresses these two questions in detail. A program was designed to automatically calculate the path and trajectory and to calculate the significance of the coupling terms in an example application of a robot manipulator tracking a part on a moving conveyor. The inertial and velocity coupling terms were shown to be significant when the manipulator was considered to be directly driven. A simulation of the robot manipulator following the planned trajectory was established in order to assess the performance of the independent-axis control strategies. The inertial coupling was shown to reinforce the control torque at the corner points of the trajectory, where there was an abrupt acceleration demand in each axis but of opposite sign. This reduced the tracking error; however, the effect was not controllable. A second effect was due to the velocity coupling terms: at high trajectory speeds it was shown, by means of a root locus analysis, that the velocity coupling terms caused the system to become unstable.
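
To make the "how significant are the coupling terms" question concrete, the sketch below evaluates the off-diagonal inertial term and the Coriolis/centrifugal term of the standard two-link planar-arm model at one operating point. The link parameters and motion state are invented; the thesis's manipulator and trajectory data are not reproduced here.

```python
import math

m1, m2 = 10.0, 5.0          # link masses (kg)
l1 = 0.5                    # length of link 1 (m)
lc1, lc2 = 0.25, 0.25       # distances to link centres of mass (m)
I1, I2 = 0.4, 0.2           # link inertias about their centres (kg*m^2)

q2 = math.radians(45)       # joint 2 angle
q1d, q2d = 2.0, -3.0        # joint velocities (rad/s)
q1dd, q2dd = 10.0, -15.0    # demanded joint accelerations (rad/s^2)

# Mass matrix of the standard two-link model (horizontal plane, gravity ignored)
M11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*math.cos(q2)) + I2
M12 = m2*(lc2**2 + l1*lc2*math.cos(q2)) + I2
h = m2*l1*lc2*math.sin(q2)              # velocity-coupling coefficient

tau1_own      = M11*q1dd                       # decoupled inertial torque at joint 1
tau1_inertial = M12*q2dd                       # inertial coupling from joint 2
tau1_velocity = -h*(2*q1d*q2d + q2d**2)        # Coriolis/centrifugal coupling

print(f"joint 1: own inertia {tau1_own:7.1f} N*m, "
      f"inertial coupling {tau1_inertial:6.1f} N*m, "
      f"velocity coupling {tau1_velocity:6.1f} N*m")
```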

Relevance:

30.00%

Publisher:

Abstract:

Deep hole drilling is one of the most complicated metal cutting processes and one of the most difficult to perform on CNC machine tools or machining centres under conditions of limited manpower or unmanned operation. This research work investigates aspects of the deep hole drilling process with small-diameter twist drills and presents a prototype system for real-time process monitoring and adaptive control; two main research objectives are fulfilled. The first objective is the experimental investigation of the mechanics of the deep hole drilling process, using twist drills without internal coolant supply, in the range of diameters Ø2.4 to Ø4.5 mm and working lengths up to 40 diameters. This includes the definition of the problems associated with the low strength of these tools and the study of the mechanisms of catastrophic failure, which manifest themselves well before, and along with, the classic mechanism of tool wear. The relationships of drilling thrust and torque with depth of penetration and the various machining conditions are also investigated, and the experimental evidence suggests that the process is inherently unstable at depths beyond a few diameters. The second objective is the design and implementation of a system for intelligent CNC deep hole drilling, the main task of which is to ensure the integrity of the process and the safety of the tool and the workpiece. This task is achieved by interfacing the CNC system of the machine tool to an external computer which performs the following functions: on-line monitoring of the drilling thrust and torque; adaptive control of feed rate, spindle speed and tool penetration (Z-axis); indirect monitoring of tool wear by pattern recognition of variations of the drilling thrust with cumulative cutting time and drilled depth; operation as a database for tools and workpieces; and, finally, the issuing of alarms and diagnostic messages.
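
The supervisory behaviour described above can be sketched as a simple monitoring loop. The thresholds, gains and simulated thrust/torque readings below are invented for illustration; the thesis's actual monitoring and adaptive-control algorithms are not reproduced.

```python
THRUST_TARGET = 180.0      # N, desired working thrust (assumed)
THRUST_LIMIT = 260.0       # N, retract above this (assumed)
TORQUE_LIMIT = 0.9         # N*m, retract above this for a small twist drill (assumed)

def supervise(feed, thrust, torque, k_feed=0.002):
    """Return (new feed in mm/rev, action) for one monitoring interval."""
    if thrust > THRUST_LIMIT or torque > TORQUE_LIMIT:
        return feed * 0.5, "retract"          # peck out to clear chips, resume more gently
    # proportional correction of the feed towards the thrust target
    feed = max(0.01, feed + k_feed * (THRUST_TARGET - thrust) / THRUST_TARGET)
    return feed, "drill"

feed = 0.05                                   # mm/rev starting feed (assumed)
for thrust, torque in [(150, 0.30), (200, 0.50), (275, 0.70), (160, 0.35)]:
    feed, action = supervise(feed, thrust, torque)
    print(f"thrust={thrust:5.0f} N  torque={torque:.2f} N*m -> {action}, feed={feed:.3f} mm/rev")
```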

Relevance:

30.00%

Publisher:

Abstract:

A re-examination of fundamental concepts and a formal structuring of the waveform analysis problem is presented in Part I: for example, the nature of frequency is examined, and a novel alternative to the classical methods of detection is proposed and implemented which has the advantages of speed and independence from amplitude. Waveform analysis provides the link between Parts I and II. Part II is devoted to human factors and the adaptive task technique. The historical, technical and intellectual development of the technique is traced in a review which examines the evidence of its advantages relative to non-adaptive, fixed-task methods of training, skill assessment and man-machine optimisation. A second review examines research evidence on the effect of vibration on manual control ability. Findings are presented in terms of the percentage increment or decrement in performance relative to performance without vibration, in the range 0-0.6 RMS g. Primary task performance was found to vary by as much as 90% between tasks at the same RMS g; differences in task difficulty accounted for this variation. Within tasks, vibration-added difficulty accounted for the effects of vibration intensity. Secondary tasks were found to be largely insensitive to vibration, except for those which involved fine manual adjustment of minor controls. Three experiments are then reported in which an adaptive technique was used to measure the percentage task difficulty added by vertical random and sinusoidal vibration to a critical compensatory tracking task. At vibration intensities between 0 and 0.09 RMS g it was found that random vibration added (24.5 × RMS g)/7.4 × 100% to the difficulty of the control task. An equivalence relationship between random and sinusoidal vibration effects was established based upon added task difficulty. Waveform analyses applied to the experimental data served to validate phase-plane analysis and uncovered the development of a control strategy and possibly a vibration-isolation strategy. The submission ends with an appraisal of the subjects mentioned in the thesis title.
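
Reading the quoted relation as added difficulty = (24.5 × RMS g)/7.4, expressed as a percentage (an interpretation of the abstract's wording, not a restatement of the thesis), the numbers work out as follows:

```python
# Worked example of the quoted relation at a few vibration intensities.
for g_rms in (0.03, 0.06, 0.09):
    added = 24.5 * g_rms / 7.4 * 100.0
    print(f"RMS g = {g_rms:.2f}  ->  added task difficulty of about {added:4.1f}%")
```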

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes work completed on the application of H∞ controller synthesis to the design of controllers for single-axis, high-speed independent drive design examples. H∞ controller synthesis was used both in a single-controller format and in a self-tuning regulator, a type of adaptive controller. Three types of industrial design example were attempted using H∞ controller synthesis, both in simulation and on a Drives Test Facility at Aston University. The results were benchmarked against a Proportional, Integral and Derivative (PID) controller with velocity feedforward (VFF), the industrial standard for this application. An analysis of the differences between an H∞ controller and a PID with VFF controller was completed. A direct-form H∞ controller was determined for a limited class of weighting functions and plants, which shows the relationship between the weighting function, the nominal plant and the controller parameters. The direct-form controller was utilised in two ways. Firstly, it allowed the production of simple guidelines for the industrial design of H∞ controllers. Secondly, it was used as the controller modifier in a self-tuning regulator (STR). The STR had a controller modification time (including nominal model parameter estimation) of 8 ms. A Set-Point Gain Scheduling (SPGS) controller was developed and applied to an industrial design example. The applicability of each control strategy, PID with VFF, H∞, SPGS and STR, was investigated, and a set of general guidelines for their use was determined. All controllers developed were implemented using standard industrial equipment.
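
For reference, a generic discrete-time PID-with-velocity-feedforward position loop of the kind used as the benchmark above can be sketched as follows. The gains, sample time and the very crude rigid-axis model are assumed for illustration and are not the thesis's tuned values.

```python
KP, KI, KD, KVFF = 40.0, 200.0, 0.5, 1.0   # assumed gains
DT = 0.001                                 # s, 1 kHz servo update (assumed)

def pid_vff(pos_ref, vel_ref, pos_meas, state):
    """One servo update; returns (demand, new state)."""
    integ, prev_err = state
    err = pos_ref - pos_meas
    integ += err * DT
    deriv = (err - prev_err) / DT
    u = KP * err + KI * integ + KD * deriv + KVFF * vel_ref   # velocity feedforward term
    return u, (integ, err)

# crude simulation following a constant-velocity (ramp) reference
state, pos = (0.0, 0.0), 0.0
for k in range(5):
    t = k * DT
    pos_ref, vel_ref = 0.1 * t, 0.1
    u, state = pid_vff(pos_ref, vel_ref, pos, state)
    pos += u * DT             # treat the demand as an axis velocity (deliberately crude)
    print(f"t={t*1000:4.1f} ms  ref={pos_ref:.6f}  pos={pos:.6f}")
```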

Relevance:

30.00%

Publisher:

Abstract:

The unmitigated transmission of undesirable vibration can cause human discomfort, machinery and equipment failure, and loss of quality in a manufacturing process. When identifiable transmission paths are discernible, vibration from the source can be isolated from the rest of the system, which prevents or minimises these problems. The approach proposed here for vibration isolation is active force cancellation at points close to the vibration source. It uses force feedback for multiple-input multiple-output (MIMO) control at the mounting locations. This is particularly attractive for the rigid mounting of a machine on a relatively flexible base, where machine alignment and motions are to be restricted. The force transfer function matrix is used as a disturbance rejection performance specification for the design of MIMO controllers. For a machine soft-mounted via flexible isolators, a model for this matrix has been derived. Under certain conditions, a simple multiplicative uncertainty model is obtained that shows the amount of perturbation a flexible base has on the machine-isolator-rigid-base transmissibility matrix. Such a model is very suitable for use with the robust control design paradigm. A different model is derived for the machine on hard mounts without the flexible isolators. With this model, the level of force transmitted from a machine to a final mounting structure can be determined using measurements taken with the machine running on another mounting structure; the two mounting structures have dissimilar dynamic characteristics. Experiments have verified the usefulness of the expression, and the model compares well with other methods in the literature. The disadvantage lies with the large amount of data that has to be collected. Active force cancellation is demonstrated on an experimental rig using an industrial AC motor hard-mounted onto a relatively flexible structure. The force transfer function matrix, determined from measurements, is used to design H∞ and static output feedback controllers. Both types of controller are stable and robust to modelling errors within the identified frequency range. They reduce the RMS of the transmitted force by 30-80% at all mounting locations for the machine running at 1340 rpm. At the rated speed of 1440 rpm only the static gain controller is able to provide a 30-55% reduction at all locations; the H∞ controllers, on the other hand, could only give a small reduction at one mount location. This is due in part to deficiencies of the model used in the design: higher-frequency dynamics were ignored. This can be resolved by the use of a higher-order model, which in turn results in a high-order controller. A low-order static gain controller, with some tuning, performs better, but it lacks the analytical framework for analysis and design.
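
A small utility of the kind used to report the reductions quoted above: the percentage drop in the RMS of the transmitted force at a mount, with the controller off and on. The force records here are synthetic and generated purely for illustration.

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1e-3)                # 1 s record at 1 kHz (assumed)
f_run = 1340.0 / 60.0                        # running speed in Hz (1340 rpm)

f_open = 5.0 * np.sin(2 * np.pi * f_run * t) + 0.5 * rng.standard_normal(t.size)
f_closed = 0.3 * f_open + 0.5 * rng.standard_normal(t.size)   # synthetic ~70% cancellation

reduction = 100.0 * (1.0 - rms(f_closed) / rms(f_open))
print(f"RMS transmitted-force reduction at this mount: {reduction:.0f}%")
```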

Relevance:

30.00%

Publisher:

Abstract:

Speed's theory makes two predictions for the development of analogical reasoning. Firstly, young children should not be able to reason analogically due to an undeveloped PFC neural network. Secondly, category knowledge enables the reinforcement of structural features over surface features, and thus the development of sophisticated analogical reasoning. We outline existing studies that support these predictions and highlight some critical remaining issues. Specifically, we argue that the development of inhibition must be directly compared alongside the development of reasoning strategies in order to support Speed's account. © 2010 Psychology Press.

Relevance:

30.00%

Publisher:

Abstract:

In human (D. H. Baker, T. S. Meese, & R. J. Summers, 2007b) and in cat (B. Li, M. R. Peterson, J. K. Thompson, T. Duong, & R. D. Freeman, 2005; F. Sengpiel & V. Vorobyov, 2005) there are at least two routes to cross-orientation suppression (XOS): a broadband, non-adaptable, monocular (within-eye) pathway and a more narrowband, adaptable interocular (between the eyes) pathway. We further characterized these two routes psychophysically by measuring the weight of suppression across spatio-temporal frequency for cross-oriented pairs of superimposed flickering Gabor patches. Masking functions were normalized to unmasked detection thresholds and fitted by a two-stage model of contrast gain control (T. S. Meese, M. A. Georgeson, & D. H. Baker, 2006) that was developed to accommodate XOS. The weight of monocular suppression was a power function of the scalar quantity ‘speed’ (temporal-frequency/spatial-frequency). This weight can be expressed as the ratio of non-oriented magno- and parvo-like mechanisms, permitting a fast-acting, early locus, as befits the urgency for action associated with high retinal speeds. In contrast, dichoptic-masking functions superimposed. Overall, this (i) provides further evidence for dissociation between the two forms of XOS in humans, and (ii) indicates that the monocular and interocular varieties of XOS are space/time scale-dependent and scale-invariant, respectively. This suggests an image-processing role for interocular XOS that is tailored to natural image statistics—very different from that of the scale-dependent (speed-dependent) monocular variety.
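
Purely to illustrate the reported functional form (a suppressive weight that is a power function of retinal speed, i.e. temporal frequency divided by spatial frequency), here is a toy calculation. The constant and exponent are placeholders, not the fitted values from the paper.

```python
def monocular_weight(tf_hz, sf_cpd, k=0.1, p=0.5):
    """Placeholder power-law weight: speed = TF/SF in deg/s for a drifting grating."""
    speed = tf_hz / sf_cpd
    return k * speed ** p

for tf, sf in [(1, 4), (4, 4), (15, 1)]:
    print(f"TF={tf:2d} Hz, SF={sf} c/deg -> speed {tf/sf:5.2f} deg/s, "
          f"weight {monocular_weight(tf, sf):.3f}")
```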

Relevance:

30.00%

Publisher:

Abstract:

It has been reported that high-speed communication network traffic exhibits both long-range dependence (LRD) and burstiness, which poses new challenges in network engineering. While many models have been studied for capturing traffic LRD, they are not capable of efficiently capturing traffic impulsiveness. It is desirable to develop a model that can capture both LRD and burstiness. In this letter, we propose a truncated α-stable LRD process model for this purpose, which can characterize both LRD and burstiness accurately. A procedure is further developed to estimate the model parameters from real traffic. Simulations demonstrate that our proposed model has higher accuracy than existing models and is flexible in capturing the characteristics of high-speed network traffic. © 2012 Springer-Verlag GmbH.
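
A sketch of one generic way to build a truncated α-stable, long-range dependent trace: heavy-tailed α-stable innovations are truncated and then filtered with slowly decaying moving-average weights to induce long memory. This is an illustration only, not the construction or estimation procedure of the letter; α, the truncation level and the memory parameter are assumed.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
n, alpha, d, trunc = 5_000, 1.6, 0.35, 50.0

x = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)   # symmetric alpha-stable innovations
x = np.clip(x, -trunc, trunc)                               # truncate the extreme bursts

lags = np.arange(1, 501)
weights = lags ** (d - 1.0)                  # ~ k^(d-1): slowly decaying, long-memory weights
traffic = np.convolve(x, weights, mode="valid")             # bursty, long-range dependent trace

print(traffic[:5])
```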

Relevance:

30.00%

Publisher:

Abstract:

Purpose - The aim of the study was to determine the effect of optimal spectral filters on reading performance following stroke. Methods - Seventeen stroke subjects, aged 43-85, were studied together with an age-matched control group (n = 17). Subjects undertook the Wilkins Rate of Reading Test on three occasions: (i) using an optimally selected spectral filter; (ii) after the subjects had been randomly assigned to two groups, Group 1 using an optimal filter and Group 2 using a grey filter, for two weeks (the grey filter had similar photopic reflectance to the optimal filters and was intended as a surrogate for a placebo); (iii) after the groups had been crossed over, with Group 1 using a grey filter and Group 2 given an optimal filter, for two weeks, before undertaking the task once more. An increase in reading speed of >5% was considered clinically relevant. Results - Initial use of a spectral filter in the stroke cohort increased reading speed by ~8% and almost halved error scores; these findings were not replicated in controls. Prolonged use of an optimal spectral filter increased reading speed by >9% for stroke subjects, with errors more than halved. When the same subjects switched to using a grey filter, reading speed reduced by ~4%. The second group of stroke subjects used a grey filter first; reading speed decreased by ~3% but increased by ~4% with an optimal filter, with error scores almost halving. Conclusions - The present study has shown that spectral filters can immediately improve reading speed and accuracy following stroke, whereas prolonged use does not increase these benefits significantly. © 2013 Spanish General Council of Optometry.