32 results for Speed and torque observers
Abstract:
Purpose - The aim of the study was to determine the effect of optimal spectral filters on reading performance following stroke. Methods - Seventeen stroke subjects, aged 43-85, were compared with an age-matched Control Group (n = 17). Subjects undertook the Wilkins Rate of Reading Test on three occasions: (i) using an optimally selected spectral filter; (ii) subjects were randomly assigned to two groups: Group 1 used an optimal filter, whereas Group 2 used a grey filter, for two weeks. The grey filter had similar photopic reflectance to the optimal filters and was intended as a surrogate for a placebo; (iii) the groups were crossed over, with Group 1 using a grey filter and Group 2 given an optimal filter, for two weeks, before undertaking the task once more. An increase in reading speed of >5% was considered clinically relevant. Results - Initial use of a spectral filter in the stroke cohort increased reading speed by ~8% and almost halved error scores, findings not replicated in controls. Prolonged use of an optimal spectral filter increased reading speed by >9% for stroke subjects; errors more than halved. When the same subjects switched to using a grey filter, reading speed reduced by ~4%. A second group of stroke subjects used a grey filter first; reading speed decreased by ~3% but increased by ~4% with an optimal filter, with error scores almost halving. Conclusions - The present study has shown that spectral filters can immediately improve reading speed and accuracy following stroke, whereas prolonged use does not increase these benefits significantly. © 2013 Spanish General Council of Optometry.
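The clinical-relevance threshold above is a simple percent-change criterion on the Wilkins Rate of Reading Test score (words read per minute). A minimal sketch of that calculation, with purely illustrative numbers rather than data from the study:

```python
def percent_change(baseline_wpm: float, filtered_wpm: float) -> float:
    """Relative change in Wilkins reading rate (words per minute), in percent."""
    return 100.0 * (filtered_wpm - baseline_wpm) / baseline_wpm

# Illustrative values only: 100 wpm without a filter, 108 wpm with an
# optimally selected spectral filter (~8%, in line with the reported gain).
gain = percent_change(100.0, 108.0)
print(f"{gain:.1f}% change; clinically relevant: {gain > 5.0}")
```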
Abstract:
We report the performance of a group of adult dyslexics and matched controls in an array-matching task where two strings of either consonants or symbols are presented side by side and have to be judged to be the same or different. The arrays may differ either in the order or identity of two adjacent characters. This task does not require naming – which has been argued to be the cause of dyslexics’ difficulty in processing visual arrays – but, instead, has a strong serial component, as demonstrated by the fact that, in both groups, reaction times (RTs) increase monotonically with the position of a mismatch. The dyslexics are clearly impaired in all conditions, and performance in the identity conditions predicts performance across orthographic tasks even after age, performance IQ and phonology are partialled out. Moreover, the shapes of the serial position curves are revealing of the underlying impairment. In the dyslexics, RTs increase with position at the same rate as in the controls (lines are parallel), ruling out reduced processing speed or difficulties in shifting attention. Instead, error rates show a catastrophic increase for positions which are either searched later or more subject to interference. These results are consistent with a reduction in the attentional capacity needed in a serial task to bind together identity and positional information. This capacity is best seen as a reduction in the number of spotlights into which attention can be split to process information at different locations, rather than as a more generic reduction of resources which would also affect processing the details of single objects.
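Partialling out age, performance IQ and phonology, as described above, amounts to correlating regression residuals. A hedged sketch of that computation (random placeholder data, not the study's measurements):

```python
import numpy as np

def residualise(y, covariates):
    """Residuals of y after linear regression on the covariate matrix."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_correlation(x, y, covariates):
    """Correlation between x and y with the covariates partialled out."""
    rx, ry = residualise(x, covariates), residualise(y, covariates)
    return np.corrcoef(rx, ry)[0, 1]

# Illustrative random data standing in for the real measurements.
rng = np.random.default_rng(0)
n = 40
covs = rng.normal(size=(n, 3))      # age, performance IQ, phonology
identity_rt = rng.normal(size=n)    # array-matching identity-condition score
orthographic = rng.normal(size=n)   # orthographic task score
print(partial_correlation(identity_rt, orthographic, covs))
```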
Abstract:
A hybrid WDM/TDM-enabled, microstructure-based optical fiber sensor network with large capacity is proposed. Assisted by a Fabry-Perot filter, a demodulation system with a high speed of 500 Hz and a wavelength resolution better than 4.91 pm is realized. © OSA 2015.
Abstract:
Deep hole drilling is one of the most complicated metal cutting processes and one of the most difficult to perform on CNC machine tools or machining centres under conditions of limited manpower or unmanned operation. This research work investigates aspects of the deep hole drilling process with small diameter twist drills and presents a prototype system for real-time process monitoring and adaptive control; two main research objectives are fulfilled. The first objective is the experimental investigation of the mechanics of the deep hole drilling process, using twist drills without internal coolant supply, in the range of diameters Ø2.4 to Ø4.5 mm and working lengths up to 40 diameters; the definition of the problems associated with the low strength of these tools; and the study of the mechanisms of catastrophic failure which manifest themselves well before and along with the classic mechanism of tool wear. The relationships of drilling thrust and torque with the depth of penetration and the various machining conditions are also investigated, and the experimental evidence suggests that the process is inherently unstable at depths beyond a few diameters. The second objective is the design and implementation of a system for intelligent CNC deep hole drilling, the main task of which is to ensure the integrity of the process and the safety of the tool and the workpiece. This task is achieved by interfacing the CNC system of the machine tool to an external computer which performs the following functions: on-line monitoring of the drilling thrust and torque; adaptive control of feed rate, spindle speed and tool penetration (Z-axis); indirect monitoring of tool wear by pattern recognition of variations of the drilling thrust with cumulative cutting time and drilled depth; operation as a database for tools and workpieces; and finally the issuing of alarms and diagnostic messages.
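The abstract does not give the control law itself; the following is a hedged sketch of one way such thrust/torque-supervised feed-rate adaptation could look, with hypothetical names and limits, not the system implemented in the thesis:

```python
from dataclasses import dataclass

@dataclass
class DrillLimits:
    thrust_max: float   # N, near the failure load of the small twist drill
    torque_max: float   # N*cm, torsional limit

def adapt_feed(feed: float, thrust: float, torque: float,
               limits: DrillLimits, feed_min: float, feed_max: float) -> float:
    """Reduce feed when thrust or torque approach their limits, otherwise
    recover feed slowly; return 0.0 to signal a retract (pecking) cycle."""
    if thrust > limits.thrust_max or torque > limits.torque_max:
        return 0.0                      # withdraw the tool to clear chips
    load = max(thrust / limits.thrust_max, torque / limits.torque_max)
    if load > 0.8:
        feed *= 0.9                     # back off before instability develops
    else:
        feed *= 1.02                    # cautiously restore productivity
    return min(max(feed, feed_min), feed_max)

limits = DrillLimits(thrust_max=450.0, torque_max=60.0)   # hypothetical Ø3 mm drill
print(adapt_feed(0.04, 300.0, 30.0, limits, 0.01, 0.06))  # mm/rev; moderate load, feed grows
```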
Abstract:
Information technology has increased both the speed and medium of communication between nations. It has brought the world closer, but it has also created new challenges for translation — how we think about it, how we carry it out and how we teach it. Translation and Information Technology has brought together experts in computational linguistics, machine translation, translation education, and translation studies to discuss how these new technologies work, the effect of electronic tools, such as the internet, bilingual corpora, and computer software, on translator education and the practice of translation, as well as the conceptual gaps raised by the interface of human and machine.
Abstract:
The aim of this research was to assess the effect of oxygenated hydrocarbons on the knocking characteristics of an engine when blended with low-leaded gasoline. Alcohols, ethers, esters and ketones were tested individually and in various combinations, up to an oxygen content of 4% wt/wt, in a blend with Series F-7 gasoline of 90, 92, 94 and 96 RON. Tests were carried out at wide open throttle, constant speed and standard timing setting. Engine speed was varied using a dynamometer, and knock was detected by two piezoelectric transducers, one on the cylinder head monitoring all four cylinders and one monitoring the cylinder most prone to knock. The engine speeds associated with trace and light knock of a continuous nature were noted. For each oxygenate blend, curves of base RON against engine speed were produced for the two knock conditions and compared with those produced using pure Series F-7 fuels. From this a suggested RON of the blend was derived. The RON increase was less when using a higher RON base fuel in the blend. Most individual oxygenates showed similar effects in similar concentrations when their oxygen content was comparable. Blends containing more than one oxygenate showed some variation, with methanol/MTBE/3-methylbutan-2-one and methanol/MTBE/4-methylpentan-2-one knocking less than expected and methanol/MTBE/TBA also showing good knock resistance. Further tests to optimise initial findings suggested a blend of methanol and MTBE to be superior, although partial replacement of MTBE by 4-methylpentan-2-one resulted in a fuel of comparable performance. Exhaust emissions were tested for a number of oxygenated blends in 2-star gasoline; 2-star and 4-star fuels were also tested for reference. All oxygenate blends reduced carbon monoxide emissions as expected, and hydrocarbon emissions were also reduced. The largest reduction in carbon monoxide occurred using a 14.5% (1:1:1) methanol/MTBE/4-methylpentan-2-one blend. Hydrocarbon emissions were most markedly reduced by a blend containing 25.5% 4-methylpentan-2-one. Power output was tested for the blends and indicated a maximum increase of about 5% at low engine speeds. The most advantageous blends were methanol/4-methylpentan-2-one (6:5) at 11% in 2-star and methanol/MTBE/4-methylpentan-2-one (6:3:2) at 11% in 2-star. In conclusion, methanol/MTBE (6:5) and (5:5), and various combinations of methanol/MTBE/4-methylpentan-2-one, notably (6:3:2), gave good results in all tests conducted. CFR testing of these blends showed them to increase both RON and MON substantially.
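The 4% wt/wt oxygen ceiling follows from the oxygen mass fraction of each oxygenate. A small sketch of that arithmetic (molar masses are standard values; the example blend is hypothetical, not one of the blends tested):

```python
# Mass fraction of oxygen in common oxygenates (16.00 g/mol of O per molecule,
# divided by the molar mass of each compound).
O_FRACTION = {
    "methanol": 16.00 / 32.04,               # CH3OH
    "MTBE": 16.00 / 88.15,                   # C5H12O
    "4-methylpentan-2-one": 16.00 / 100.16,  # C6H12O (MIBK)
    "TBA": 16.00 / 74.12,                    # C4H10O, tert-butanol
}

def blend_oxygen_wt(oxygenate_wt: dict) -> float:
    """Overall oxygen content (% wt/wt) of a gasoline blend, given the
    weight-percent of each oxygenate in the finished fuel."""
    return sum(wt * O_FRACTION[name] for name, wt in oxygenate_wt.items())

# Illustrative check that a hypothetical 3% methanol + 5% MTBE blend stays
# below the 4% wt/wt oxygen ceiling used in the experiments.
print(blend_oxygen_wt({"methanol": 3.0, "MTBE": 5.0}))  # ~2.41% oxygen
```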
Abstract:
Pin-on-disc wear machines were used to study the boundary lubricated friction and wear of AISI 52100 steel sliding partners. Boundary conditions were obtained by using speed and load combinations which resulted in friction coefficients in excess of 0.1. Lubrication was achieved using zero, 15 and 1000 ppm concentrations of an organic dimeric acid additive in a hydrocarbon base stock. Experiments were performed at sliding speeds of 0.2, 0.35 and 0.5 m/s for a range of loads up to 220 N. Wear rate, frictional force and pin temperature were continually monitored throughout the tests and, where possible, complementary methods of measurement were used to improve accuracy. A number of analytical techniques were used to examine wear surfaces, debris and lubricants, namely: Scanning Electron Microscopy (SEM), Auger Electron Spectroscopy (AES), Powder X-ray Diffraction (XRD), X-ray Photoelectron Spectroscopy (XPS), optical microscopy, Backscattered Electron Detection (BSED) and several metallographic techniques. Friction forces and wear rates were found to vary linearly with load for any given combination of speed and additive concentration. The additive itself was found to act as a surface oxidation inhibitor and as a lubricity enhancer, particularly in the case of the higher (1000 ppm) concentration. Wear was found to be due to a mild oxidational mechanism at low additive concentrations and a more severe metallic mechanism at higher concentrations, with evidence of metallic delamination in the latter case. Scuffing loads were found to increase with increasing additive concentration and decrease with increasing speed, as would be predicted by classical models of additive behaviour as an organometallic soap film. Heat flow considerations tended to suggest that surface temperature was not the overriding controlling factor in oxidational wear, and a model is proposed which suggests that oxygen concentration in the lubricant is the controlling factor in oxide growth and wear.
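For context, the friction coefficient and a specific wear rate are routinely derived from the monitored quantities. A brief sketch using the standard definitions (the Archard-style wear rate is a textbook relation, not necessarily the analysis used in the thesis; numbers are illustrative):

```python
def friction_coefficient(friction_force_n: float, normal_load_n: float) -> float:
    """mu = F / N; values above ~0.1 were taken to indicate boundary lubrication."""
    return friction_force_n / normal_load_n

def specific_wear_rate(volume_loss_mm3: float, normal_load_n: float,
                       sliding_distance_m: float) -> float:
    """Archard-style specific wear rate k = V / (N * s) in mm^3 / (N * m)."""
    return volume_loss_mm3 / (normal_load_n * sliding_distance_m)

# Illustrative pin-on-disc run: 0.35 m/s for 2 hours at a 100 N load.
distance = 0.35 * 2 * 3600                      # sliding distance in metres
print(friction_coefficient(12.0, 100.0))        # mu = 0.12 -> boundary regime
print(specific_wear_rate(0.8, 100.0, distance)) # mm^3 per N*m of sliding
```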
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT. This thesis describes a detailed study of advanced optical fibre sensors based on fibre Bragg gratings (FBGs), tilted fibre Bragg gratings (TFBGs) and long-period gratings (LPGs) and their applications in optical communications and sensing. The major contributions presented in this thesis are summarised below. The most important contribution from the research work presented in this thesis is the implementation of in-fibre grating based refractive index (RI) sensors, which could be good candidates for optical biochemical sensing. Several fibre grating based RI sensors have been proposed and demonstrated by exploring novel grating structures and different fibre types, and by employing an efficient hydrofluoric acid etching technique to enhance the RI sensitivity. All the RI devices discussed in this thesis have been used to measure the concentration of sugar solution to simulate chemical sensing. Efforts have also been made to overcome the RI-temperature cross-sensitivity for practical application. The demonstrated in-fibre grating based RI sensors could be further implemented as potential optical biosensors by applying bioactive coatings to realise high bio-sensitivity and bio-selectivity. Another major contribution of this thesis is the application of TFBGs. A prototype interrogation system using a TFBG with a CCD array was implemented to perform wavelength division multiplexing (WDM) interrogation around the 800 nm wavelength region, with the advantages of compact size, fast detection speed and low cost. As a highlight, a novel in-fibre twist sensor utilising the strong polarisation-dependent coupling behaviour of an 81°-TFBG was presented, demonstrating high torsion sensitivity and the capability of direction recognition.
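All of these grating sensors rest on the Bragg condition and on converting a measured wavelength shift into the measurand. A hedged sketch using textbook relations and typical silica-fibre constants, not the calibration values from the thesis:

```python
# Typical textbook constants for silica fibre (assumptions, not thesis data).
THERMO_OPTIC_PLUS_EXPANSION = 6.7e-6   # per degC
EFFECTIVE_PHOTOELASTIC = 0.22          # p_e

def bragg_wavelength(n_eff: float, period_nm: float) -> float:
    """Bragg condition: lambda_B = 2 * n_eff * Lambda."""
    return 2.0 * n_eff * period_nm

def shift_to_temperature(d_lambda_pm: float, lambda_b_nm: float) -> float:
    """Convert a Bragg wavelength shift (pm) into a temperature change (degC)."""
    return (d_lambda_pm * 1e-3 / lambda_b_nm) / THERMO_OPTIC_PLUS_EXPANSION

def shift_to_strain(d_lambda_pm: float, lambda_b_nm: float) -> float:
    """Convert a Bragg wavelength shift (pm) into strain (microstrain)."""
    return (d_lambda_pm * 1e-3 / lambda_b_nm) / (1 - EFFECTIVE_PHOTOELASTIC) * 1e6

lam = bragg_wavelength(1.447, 276.4)   # ~800 nm grating, illustrative numbers
print(lam, shift_to_temperature(54.0, lam), shift_to_strain(10.0, lam))
```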
Abstract:
When viewing a drifting plaid stimulus, perceived motion alternates over time between coherent pattern motion and a transparent impression of the two component gratings. It is known that changing the intrinsic attributes of such patterns (e.g. speed, orientation and spatial frequency of components) can influence percept predominance. Here, we investigate the contribution of extrinsic factors to perception, specifically contextual motion and eye movements. In the first experiment, the percept most similar to the speed and direction of surround motion increased in dominance, implying a tuned integration process. This shift primarily involved an increase in dominance durations of the consistent percept. The second experiment measured eye movements under similar conditions. Saccades were not associated with perceptual transitions, though blink rate increased around the time of a switch. This indicates that saccades do not cause switches, yet saccades in a congruent direction might help to prolong a percept because i) more saccades were directionally congruent with the currently reported percept than expected by chance, and ii) when observers were asked to make deliberate eye movements along one motion axis, this increased percept reports in that direction. Overall, we find evidence that perception of bistable motion can be modulated by information from spatially adjacent regions, and changes to the retinal image caused by blinks and saccades.
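Predominance and dominance durations of the two percepts can be recovered from the observer's report timeline. A minimal sketch of that bookkeeping (report times are invented for illustration):

```python
def dominance_stats(report_times, percepts, trial_end):
    """Given the time of each perceptual report and the percept reported
    ('coherent' or 'transparent'), return the mean dominance duration and
    predominance (fraction of trial time) for each percept."""
    totals, counts = {}, {}
    for i, (t, p) in enumerate(zip(report_times, percepts)):
        end = report_times[i + 1] if i + 1 < len(report_times) else trial_end
        totals[p] = totals.get(p, 0.0) + (end - t)
        counts[p] = counts.get(p, 0) + 1
    mean_duration = {p: totals[p] / counts[p] for p in totals}
    predominance = {p: totals[p] / trial_end for p in totals}
    return mean_duration, predominance

# Illustrative report sequence for a 60 s trial; coherent motion dominates.
means, pred = dominance_stats(
    [0.0, 8.5, 14.0, 27.5, 33.0, 51.0],
    ["coherent", "transparent", "coherent", "transparent", "coherent", "transparent"],
    60.0)
print(means, pred)  # coherent: ~13.3 s mean, ~0.67 predominance
```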
Abstract:
Various micro-radial compressor configurations were investigated using one-dimensional meanline and computational fluid dynamics (CFD) techniques for use in a micro gas turbine (MGT) domestic combined heat and power (DCHP) application. Blade backsweep, shaft speed, and blade height were varied at a constant pressure ratio. Shaft speeds were limited to 220 000 r/min, to enable the use of a turbocharger bearing platform. Off-design compressor performance was established and used to determine the MGT performance envelope; this in turn was used to assess potential cost and environmental savings in a heat-led DCHP operating scenario within the target market of a detached family home. A low target-stage pressure ratio provided an opportunity to reduce diffusion within the impeller. Critically for DCHP, this produced very regular flow, which improved impeller performance for a wider operating envelope. The best performing impeller was a low-speed, 170 000 r/min, low-backsweep, 15° configuration producing 71.76 per cent stage efficiency at a pressure ratio of 2.20. This produced an MGT design point system efficiency of 14.85 per cent at 993 W, matching prime movers in the latest commercial DCHP units. Cost and CO2 savings were 10.7 per cent and 6.3 per cent, respectively, for annual power demands of 17.4 MWht and 6.1 MWhe compared to a standard condensing boiler (with grid) installation. The maximum cost saving (on design point) was 14.2 per cent for annual power demands of 22.62 MWht and 6.1 MWhe corresponding to an 8.1 per cent CO2 saving. When sizing, maximum savings were found with larger heat demands. When sized, maximum savings could be made by encouraging more electricity export either by reducing household electricity consumption or by increasing machine efficiency.
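The quoted stage efficiency and pressure ratio fix the compressor temperature rise and specific work through the standard isentropic-efficiency relations; a short sketch (inlet conditions and gas properties are assumed textbook values, not taken from the paper):

```python
# Standard compressor-stage relations; illustrative only, not the meanline code
# used in the study.
GAMMA = 1.4          # ratio of specific heats for air
CP = 1005.0          # J/(kg K)

def compressor_exit_temp(t_inlet_k: float, pressure_ratio: float, eta: float) -> float:
    """Total exit temperature from the isentropic relation corrected by stage efficiency."""
    t_ideal = t_inlet_k * pressure_ratio ** ((GAMMA - 1) / GAMMA)
    return t_inlet_k + (t_ideal - t_inlet_k) / eta

def specific_work(t_inlet_k: float, pressure_ratio: float, eta: float) -> float:
    """Compressor work input per unit mass flow, J/kg."""
    return CP * (compressor_exit_temp(t_inlet_k, pressure_ratio, eta) - t_inlet_k)

# Pressure ratio 2.20 and 71.76 per cent stage efficiency, as quoted above,
# at an assumed 288 K inlet.
print(compressor_exit_temp(288.0, 2.20, 0.7176))    # ~389 K
print(specific_work(288.0, 2.20, 0.7176) / 1000.0)  # ~102 kJ/kg
```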
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operation conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was studied first experimentally under carefully selected operation conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system. This system was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured. A very good agreement of the two profiles was achieved, within a percent relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulative variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) as well as a centralised MIMO (Multi-Input Multi-Output) system using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). Control performance of each control scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction. Multivariable MPC showed more effective performance compared to the other conventional techniques, since it accounts for loop interaction, time delays, and input-output variable constraints.
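The RGA pairing criterion mentioned above is a one-line matrix computation; a brief sketch for a 2x2 gain matrix with purely hypothetical gains (not the identified process gains from the thesis):

```python
import numpy as np

def relative_gain_array(G: np.ndarray) -> np.ndarray:
    """RGA = G .* (G^-1)^T; elements near 1 indicate weakly interacting pairings."""
    return G * np.linalg.inv(G).T

# Hypothetical 2x2 steady-state gain matrix (inputs: rotor speed, solvent flowrate;
# outputs: raffinate concentration, extract concentration); values are illustrative.
G = np.array([[-1.8, 0.4],
              [0.3, 1.1]])
print(relative_gain_array(G))
# Diagonal elements close to 1 (~0.94 here) support pairing rotor speed with
# raffinate concentration and solvent flowrate with extract concentration.
```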
Abstract:
A re-examination of fundamental concepts and a formal structuring of the waveform analysis problem are presented in Part I: for example, the nature of frequency is examined, and a novel alternative to the classical methods of detection is proposed and implemented which has the advantage of speed and independence from amplitude. Waveform analysis provides the link between Parts I and II. Part II is devoted to Human Factors and the Adaptive Task Technique. The historical, technical and intellectual development of the technique is traced in a review which examines the evidence of its advantages relative to non-adaptive fixed-task methods of training, skill assessment and man-machine optimisation. A second review examines research evidence on the effect of vibration on manual control ability. Findings are presented in terms of percentage increment or decrement in performance relative to performance without vibration in the range 0-0.6 Rms 'g'. Primary task performance was found to vary by as much as 90% between tasks at the same Rms 'g'. Differences in task difficulty accounted for this difference. Within tasks, vibration-added difficulty accounted for the effects of vibration intensity. Secondary tasks were found to be largely insensitive to vibration, except secondaries which involved fine manual adjustment of minor controls. Three experiments are reported next in which an adaptive technique was used to measure the percentage of task difficulty added by vertical random and sinusoidal vibration to a Critical Compensatory Tracking task. At vibration intensities between 0 and 0.09 Rms 'g' it was found that random vibration added (24.5 x Rms 'g')/7.4 x 100% to the difficulty of the control task. An equivalence relationship between random and sinusoidal vibration effects was established based upon added task difficulty. Waveform analyses applied to the experimental data served to validate phase plane analysis and uncovered the development of a control strategy and possibly a vibration isolation strategy. The submission ends with an appraisal of subjects mentioned in the thesis title.
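Reading the reported added-difficulty expression literally, the contribution of random vibration can be evaluated directly; a tiny sketch over the 0-0.09 Rms 'g' range studied:

```python
def added_difficulty_percent(rms_g: float) -> float:
    """Percentage of task difficulty added by vertical random vibration,
    taking the reported expression (24.5 x Rms 'g')/7.4 x 100% at face value."""
    return (24.5 * rms_g) / 7.4 * 100.0

# Illustrative values across the 0-0.09 Rms 'g' range.
for g in (0.03, 0.06, 0.09):
    print(g, round(added_difficulty_percent(g), 1))  # ~9.9%, ~19.9%, ~29.8%
```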
Abstract:
Task classification is introduced as a method for the evaluation of monitoring behaviour in different task situations. On the basis of an analysis of different monitoring tasks, a task classification system comprising four task 'dimensions' is proposed. The perceptual speed and flexibility of closure categories, which are identified with signal discrimination type, comprise the principal dimension in this taxonomy, the others being sense modality, the time course of events, and source complexity. It is also proposed that decision theory provides the most complete method for the analysis of performance in monitoring tasks. Several different aspects of decision theory in relation to monitoring behaviour are described. A method is also outlined whereby both accuracy and latency measures of performance may be analysed within the same decision theory framework. Eight experiments and an organizational study are reported. The results show that a distinction can be made between the perceptual efficiency (sensitivity) of a monitor and his criterial level of response, and that in most monitoring situations there is no decrement in efficiency over the work period, but an increase in the strictness of the response criterion. The range of tasks exhibiting either or both of these performance trends can be specified within the task classification system. In particular, it is shown that a sensitivity decrement is only obtained for 'speed' tasks with a high stimulation rate. A distinctive feature of 'speed' tasks is that target detection requires the discrimination of a change in a stimulus relative to preceding stimuli, whereas in 'closure' tasks the information required for the discrimination of targets is presented at the same point in time. In the final study, the specification of tasks yielding sensitivity decrements is shown to be consistent with a task classification analysis of the monitoring literature. It is also demonstrated that the signal type dimension has a major influence on the consistency of individual differences in performance in different tasks. The results provide an empirical validation for the 'speed' and 'closure' categories, and suggest that individual differences are not completely task specific but are dependent on the demands common to different tasks. Task classification is therefore shown to enable improved generalizations to be made of the factors affecting 1) performance trends over time, and 2) the consistency of performance in different tasks. A decision theory analysis of response latencies is shown to support the view that criterion shifts are obtained in some tasks, while sensitivity shifts are obtained in others. The results of a psychophysiological study also suggest that evoked potential latency measures may provide temporal correlates of criterion shifts in monitoring tasks. Among other results, the finding that the latencies of negative responses do not increase over time is taken to invalidate arousal-based theories of performance trends over a work period. An interpretation in terms of expectancy, however, provides a more reliable explanation of criterion shifts. Although the mechanisms underlying the sensitivity decrement are not completely clear, the results rule out 'unitary' theories such as observing response and coupling theory. It is suggested that an interpretation in terms of the memory data limitations on information processing provides the most parsimonious explanation of all the results in the literature relating to the sensitivity decrement.
Task classification therefore enables the refinement and selection of theories of monitoring behaviour in terms of their reliability in generalizing predictions to a wide range of tasks. It is thus concluded that task classification and decision theory provide a reliable basis for the assessment and analysis of monitoring behaviour in different task situations.
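The sensitivity/criterion distinction drawn throughout is the standard signal detection decomposition of hit and false-alarm rates; a short sketch of those measures (illustrative rates, not the experimental data):

```python
from scipy.stats import norm

def sensitivity_and_criterion(hit_rate: float, fa_rate: float):
    """Signal-detection measures: d' = z(H) - z(FA) indexes perceptual
    efficiency; c = -0.5 * (z(H) + z(FA)) indexes response-criterion strictness."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_fa, -0.5 * (z_h + z_fa)

# Illustrative watch-period comparison: efficiency (d') stays roughly constant
# while the criterion (c) becomes stricter, the pattern reported for most tasks above.
print(sensitivity_and_criterion(0.80, 0.10))  # early in the watch: d' ~2.12, c ~0.22
print(sensitivity_and_criterion(0.65, 0.04))  # later in the watch: d' ~2.14, c ~0.68
```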