18 results for Control test
in Aston University Research Archive
Abstract:
Objectives - Powdered and granulated particulate materials make up most of the ingredients of pharmaceuticals and are often at risk of undergoing unwanted agglomeration, or caking, during transport or storage. This is particularly acute when bulk powders are exposed to extreme swings in temperature and relative humidity, which is now common as drugs are produced and administered in increasingly hostile climates and are stored for longer periods of time prior to use. This study explores the possibility of using a uniaxial unconfined compression test to compare the strength of caked agglomerates exposed to different temperatures and relative humidities. This is part of a longer-term study to construct a protocol to predict the caking tendency of a new bulk material from individual particle properties. The main challenge is to develop techniques that provide repeatable results yet are simple enough to be useful to a wide range of industries. Methods - Powdered sucrose, a major pharmaceutical ingredient, was poured into a split die and exposed to high and low relative humidity cycles at room temperature. The typical ranges were 20–30% for the lower value and 70–80% for the higher value. The outer die casing was then removed and the resultant agglomerate was subjected to an unconfined compression test using a plunger fitted to a Zwick compression tester. Force against displacement was logged so that the dynamics of failure as well as the failure load of the sample could be recorded. The experimental matrix included varying the number of cycles, the difference between the maximum and minimum relative humidity, the height and diameter of the samples, and the particle size. Results - Trends showed that the tensile strength of the agglomerates increased with the number of cycles and also with the more extreme swings in relative humidity. This agrees with previous work on alternative methods of measuring the tensile strength of sugar agglomerates formed from humidity cycling (Leaper et al 2003). Conclusions - The results show that at the very least the uniaxial tester is a good comparative tester to examine the caking tendency of powdered materials, with a simple arrangement and operation that are compatible with the requirements of industry. However, further work is required to continue to optimize the height/diameter ratio used during tests.
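A note on how a failure stress could be obtained from the logged force–displacement record (the exact reduction is not stated in the abstract, so the formula below is the standard assumption for unconfined uniaxial compression of a cylindrical agglomerate of diameter d):

```latex
% Nominal failure stress of a cylindrical agglomerate under unconfined
% uniaxial compression; F_max is the peak force read from the logged
% force-displacement curve.  Assumed reduction, not quoted in the abstract.
\sigma_f = \frac{F_{\max}}{A} = \frac{4\,F_{\max}}{\pi d^{2}}
```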
Abstract:
We have proposed a novel robust inversion-based neurocontroller that searches for the optimal control law by sampling from the estimated Gaussian distribution of the inverse plant model. However, for problems involving the prediction of continuous variables, a Gaussian model approximation provides only a very limited description of the properties of the inverse model. This is usually the case for problems in which the mapping to be learned is multi-valued or involves hysteretic transfer characteristics, as often arises in the solution of inverse plant models. In order to obtain a complete description of the inverse model, a more general multi-component distribution must be modeled. In this paper we test whether our proposed sampling approach can be used with arbitrary conditional probability distributions, modeled here by a mixture density network. Importance sampling provides a structured and principled approach to constraining the complexity of the search space for the ideal control law. The effectiveness of importance sampling from an arbitrary conditional probability distribution is demonstrated using a simple single-input single-output static nonlinear system with hysteretic characteristics in the inverse plant model.
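As a rough illustration of the sampling idea only (not the authors' implementation: the plant, the mixture parameters and the selection rule below are hypothetical placeholders), candidate controls can be drawn from the mixture-density-network's conditional distribution and scored against a plant model:

```python
# Minimal sketch of sampling a control action from a mixture density network
# (MDN) approximation of an inverse plant model.  Everything below is an
# illustrative placeholder, not the system or algorithm used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def plant(u):
    # Hypothetical SISO static nonlinearity standing in for the real plant.
    return np.tanh(2.0 * u)

def sample_mdn(pi, mu, sigma, n):
    """Draw n control candidates from a 1-D Gaussian mixture p(u | y_target)."""
    k = rng.choice(len(pi), size=n, p=pi)    # pick mixture components
    return rng.normal(mu[k], sigma[k])        # sample within each component

def best_control(pi, mu, sigma, y_target, n=500):
    """Sample candidates from the MDN proposal and keep the one whose
    simulated plant output is closest to the target (a simplification of
    the importance-weighted selection discussed in the paper)."""
    u = sample_mdn(pi, mu, sigma, n)
    errors = (plant(u) - y_target) ** 2
    return u[np.argmin(errors)]

# Example: a two-component (multi-valued) inverse model for y_target = 0.5.
pi = np.array([0.5, 0.5])
mu = np.array([-0.9, 0.3])       # two candidate inverse solutions
sigma = np.array([0.1, 0.1])
u_star = best_control(pi, mu, sigma, y_target=0.5)
print(u_star, plant(u_star))
```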
Abstract:
The human visual system combines contrast information from the two eyes to produce a single cyclopean representation of the external world. This task requires both summation of congruent images and inhibition of incongruent images across the eyes. These processes were explored psychophysically using narrowband sinusoidal grating stimuli. Initial experiments focussed on binocular interactions within a single detecting mechanism, using contrast discrimination and contrast matching tasks. Consistent with previous findings, dichoptic presentation produced greater masking than monocular or binocular presentation. Four computational models were compared, two of which performed well on all data sets. Suppression between mechanisms was then investigated, using orthogonal and oblique stimuli. Two distinct suppressive pathways were identified, corresponding to monocular and dichoptic presentation. Both pathways act prior to binocular summation of signals, and differ in their strengths, tuning, and response to adaptation, consistent with recent single-cell findings in cat. Strikingly, the magnitude of dichoptic masking was found to be spatiotemporally scale invariant, whereas monocular masking was dependent on stimulus speed. Interocular suppression was further explored using a novel manipulation, whereby stimuli were presented in dichoptic antiphase. Consistent with the predictions of a computational model, this produced weaker masking than in-phase presentation. This allowed the bandwidths of suppression to be measured without the complicating factor of additive combination of mask and test. Finally, contrast vision in strabismic amblyopia was investigated. Although amblyopes are generally believed to have impaired binocular vision, binocular summation was shown to be intact when stimuli were normalized for interocular sensitivity differences. An alternative account of amblyopia was developed, in which signals in the affected eye are subject to attenuation and additive noise prior to binocular combination.
Abstract:
Dyslexia and attentional difficulty have often been linked, but little is known of the nature of the supposed attentional disorder. The Sustained Attention to Response Task (SART: Robertson, Manly, Andrade, Baddeley and Yiend, 1997) was designed as a measure of sustained attention and requires the withholding of responses to rare (one in nine) targets. To investigate the nature of the attentional disorder in dyslexia, this paper reports two studies which examined the performance of teenagers with dyslexia and their age-matched controls on the SART, the squiggle SART (a modification of the SART using novel and unlabellable stimuli rather than digits) and the go-gap-stop test of response inhibition (GGST). Teenagers with dyslexia made significantly more errors than controls on the original SART, but not the squiggle SART. There were no group differences on the GGST. After controlling for speed of reaction time in a sequential multiple regression predicting SART false alarms, false alarms on the GGST accounted for up to 22% extra variance in the control groups (although less on the squiggle SART) but negligible amounts of variance in the dyslexic groups. We interpret the results as reflecting a stimulus recognition automaticity deficit in dyslexia, rather than a sustained attention deficit. Furthermore, results suggest that response inhibition is an important component of performance on the standard SART when stimuli are recognised automatically.
Abstract:
We studied the visual mechanisms that serve to encode spatial contrast at threshold and supra-threshold levels. In a 2AFC contrast-discrimination task, observers had to detect the presence of a vertical 1 cycle deg⁻¹ test grating (of contrast dc) that was superimposed on a similar vertical 1 cycle deg⁻¹ pedestal grating, whereas in pattern masking the test grating was accompanied by a very different masking grating (horizontal 1 cycle deg⁻¹, or oblique 3 cycles deg⁻¹). When expressed as threshold contrast (dc at 75% correct) versus mask contrast (c), our results confirm previous ones in showing a characteristic 'dipper function' for contrast discrimination but a smoothly increasing threshold for pattern masking. However, fresh insight is gained by analysing and modelling performance (p; percent correct) as a joint function of (c, dc) - the performance surface. In contrast discrimination, psychometric functions (p versus log dc) are markedly less steep when c is above threshold, but in pattern masking this reduction of slope did not occur. We explored a standard gain-control model with six free parameters. Three parameters control the contrast response of the detection mechanism and one parameter weights the mask contrast in the cross-channel suppression effect. We assume that signal-detection performance (d') is limited by additive noise of constant variance. Noise level and lapse rate are also fitted parameters of the model. We show that this model accounts very accurately for the whole performance surface in both types of masking, and thus explains the threshold functions and the pattern of variation in psychometric slopes. The cross-channel weight is about 0.20. The model shows that the mechanism response to contrast increment (dc) is linearised by the presence of pedestal contrasts but remains nonlinear in pattern masking.
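A hedged sketch of the kind of gain-control response and decision stage described (the exact parameterisation fitted in the paper is not given in the abstract, so the symbols below are illustrative):

```latex
% Schematic gain-control response to pedestal contrast c with a cross-channel
% mask of contrast c_m (weight w); p, q and z stand for the three
% contrast-response parameters and sigma for the SD of the additive noise.
% Illustrative form only, not the fitted model quoted from the paper.
r(c) = \frac{c^{\,p}}{z + c^{\,q} + w\,c_m^{\,q}},
\qquad
d'(c,\Delta c) = \frac{r(c+\Delta c) - r(c)}{\sigma}
```

Percent correct would then follow from d' together with a lapse rate, matching the count of six free parameters given in the abstract (three response parameters, the cross-channel weight, the noise level and the lapse rate).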
Abstract:
Blurred edges appear sharper in motion than when they are stationary. We have previously shown how such distortions in perceived edge blur may be explained by a model which assumes that luminance contrast is encoded by a local contrast transducer whose response becomes progressively more compressive as speed increases. To test this model further, we measured the sharpening of drifting, periodic patterns over a large range of contrasts, blur widths, and speeds. The results indicate that, while sharpening increased with speed, it was practically invariant with contrast. This contrast invariance cannot be explained by a fixed compressive nonlinearity, since that predicts almost no sharpening at low contrasts. We show by computational modelling of spatiotemporal responses that, if a dynamic contrast gain control precedes the static nonlinear transducer, then motion sharpening, its speed dependence, and its invariance with contrast can be predicted with reasonable accuracy.
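One plausible way to write the two-stage scheme described, a dynamic divisive gain control followed by a fixed compressive transducer; the functional forms below are assumptions for illustration, not the equations fitted in the paper:

```latex
% Illustrative two-stage scheme: contrast-driven divisive gain control
% followed by a fixed compressive transducer T.  Forms are assumed here,
% not taken from the paper.
r(x,t) = T\!\left( \frac{c(x,t)}{s + \bar{c}(x,t)} \right),
\qquad
T(u) = \operatorname{sign}(u)\,\lvert u\rvert^{\gamma}, \quad 0 < \gamma < 1
```

In a scheme of this kind the gain control divides out the prevailing local contrast \bar{c}, so the drive to the compressive transducer, and hence the degree of sharpening, is largely independent of stimulus contrast while still varying with speed through the temporal response that determines \bar{c}.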
Abstract:
Control and governance theories recognize that exchange partners are subject to two general forms of control, the unilateral authority of one firm and bilateral expectations extending from their social bond. In this way, a supplier both exerts unilateral, authority-based controls and is subject to socially-based, bilateral controls as it attempts to manage its brand successfully through reseller channels. Such control is being challenged by suppliers’ growing relative dependence on increasingly dominant resellers in many industries. Yet the impact of supplier relative dependence on the efficacy of control-based governance in the supplier’s channel is not well understood. To address this gap, we specify and test a control model moderated by relative dependence involving the conceptualization and measurement of governance at the level of specific control processes: incenting, monitoring, and enforcing. Our empirical findings show relative dependence undercuts the effectiveness of certain unilateral and bilateral control processes while enhancing the effectiveness of others, largely supporting our dual suppositions that each control process operates through a specialized behavioral mechanism and that these underlying mechanisms are differentially impacted by relative dependence. We offer implications of these findings for managers and identify our contributions to channel theory and research.
Abstract:
Previously, specifications for mechanical properties of casting alloys were based on separately cast test bars. This practice provided consistently reproducible results; thus, any change in conditions was reflected in changes in the mechanical properties of the test coupons. These test specimens, however, did not necessarily reflect the actual mechanical properties of the castings they were supposed to represent. Factors such as section thickness and casting configuration affect the solidification rate and soundness of the casting, thereby raising or lowering its mechanical properties in comparison with separately cast test specimens. In the work now reported, casting shapes were developed to investigate the effects of variations in section thickness, chemical analysis and heat treatment on the mechanical properties of a high-strength Aluminium alloy under varying chilling conditions. In addition, an insight was sought into the behaviour of chills under more practical conditions. Finally, it was demonstrated that additional information could be derived from the radiographs which form an essential part of the quality control of premium quality castings. As a result of the work, it is now possible to select the analysis and chilling conditions to optimize the as-cast and heat-treated mechanical properties of Aluminium 7% Silicon 0.3% Magnesium alloy.
Abstract:
Traditional machinery for manufacturing processes is characterised by actuators powered and co-ordinated by mechanical linkages driven from a central drive. Increasingly, these linkages are replaced by independent electrical drives, each performing a different task and following a different motion profile, co-ordinated by computers. A design methodology for the servo control of high-speed multi-axis machinery is proposed, based on the concept of a highly adaptable generic machine model. In addition to the dynamics of the drives and the loads, the model includes the inherent interactions between the motion axes and thus provides a Multi-Input Multi-Output (MIMO) description. In general, inherent interactions such as structural couplings between groups of motion axes are undesirable and need to be compensated. On the other hand, imposed interactions such as the synchronisation of different groups of axes are often required. It is recognised that a suitable MIMO controller can simultaneously achieve these objectives and reconcile their potential conflicts. Both analytical and numerical methods for the design of MIMO controllers are investigated. At present, it is not possible to implement high-order MIMO controllers for practical reasons. Based on simulations of the generic machine model under full MIMO control, however, it is possible to determine a suitable topology for a blockwise decentralised control scheme. The Block Relative Gain array (BRG) is used to compare the relative strength of closed-loop interactions between sub-systems. A number of approaches to the design of the smaller decentralised MIMO controllers for these sub-systems have been investigated. For the purpose of illustration, a benchmark problem based on a three-axis test rig has been carried through the design cycle to demonstrate the working of the design methodology.
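For context, the simplest element-wise form of the relative-gain idea that the Block Relative Gain generalises can be sketched as follows; the gain matrix is an arbitrary illustrative coupling, not data from the three-axis test rig:

```python
# Minimal sketch: the element-wise Relative Gain Array (RGA) for a
# steady-state gain matrix.  The thesis uses the Block Relative Gain (BRG),
# which extends this idea to block partitions of the plant; the matrix below
# is an arbitrary illustration, not measured axis couplings.
import numpy as np

def rga(G):
    """Relative Gain Array: element-wise product of G and inv(G) transposed."""
    return G * np.linalg.inv(G).T

G = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.4, 1.0]])   # hypothetical steady-state axis couplings

print(np.round(rga(G), 3))        # entries near 1 suggest weak closed-loop interaction
```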
Abstract:
This thesis describes an investigation into methods for controlling the mode distribution in multimode optical fibres. The major contributions presented in this thesis are summarised below. Emerging standards for Gigabit Ethernet transmission over multimode optical fibre have led to a resurgence of interest in the precise control, and specification, of modal launch conditions. In particular, commercial LED and OTDR test equipment does not, in general, comply with these standards. There is therefore a need for mode control devices which can ensure compliance with the standards. A novel device consisting of a point-load mode-scrambler in tandem with a mode-filter is described in this thesis. The device, which has been patented, may be tuned to achieve a wide range of mode distributions and has been implemented in a ruggedised package for field use. Various other techniques for mode control have been described in this work, including the use of Long Period Gratings and air-gap mode-filters. Some of the methods have been applied to other applications, such as speckle suppression and in sensor technology. A novel, self-referencing, sensor comprising two modal groups in the Mode Power Distribution has been designed and tested. The feasibility of a two-channel Mode Group Diversity Multiplexed system has been demonstrated over 985m. A test apparatus for measuring mode distribution has been designed and constructed. The apparatus consists of a purpose-built video microscope, and comprehensive control and analysis software written in Visual Basic. The system may be fitted with a Silicon camera or an InGaAs camera, for measurement in the 850nm and 1300nm transmission windows respectively. A limitation of the measurement method, when applied to well-filled fibres, has been identified and an improvement to the method has been proposed, based on modelled Laguerre Gauss field solutions.
Abstract:
The operating state of a photovoltaic Module Integrated Converter (MIC) is subject to change under different source and load conditions, and in past research the state-swap has usually been implemented with a flow-chart-based sequential controller. In this paper, the signatures of the different operational states are evaluated and investigated, leading to an effective control-integrated finite state machine (CIFSM) that provides real-time state-swaps as fast as the local control loop. The proposed CIFSM is implemented digitally for a boost-type MIC prototype and tested under a variety of load and source conditions. The test results prove the effectiveness of the proposed CIFSM design.
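A minimal sketch of the control-integrated state-machine idea, with hypothetical states and thresholds (the paper's actual states and signature tests are not given in the abstract): the state-swap decision is evaluated from measured signatures on every control-loop iteration rather than by a separate sequential supervisor.

```python
# Sketch of a control-integrated finite state machine for a PV converter.
# States, signatures and thresholds are hypothetical placeholders, not the
# design reported in the paper.
from enum import Enum, auto

class State(Enum):
    STANDBY = auto()
    MPPT = auto()          # maximum power point tracking
    OUTPUT_LIMIT = auto()  # load-limited (constant-output) operation

def next_state(prev, v_pv, i_out, v_pv_min=15.0, i_out_max=8.0):
    """Pick the operating state from instantaneous source/load signatures.
    'prev' is available for hysteresis but unused in this simple sketch."""
    if v_pv < v_pv_min:
        return State.STANDBY          # insufficient PV input voltage
    if i_out > i_out_max:
        return State.OUTPUT_LIMIT     # load demands more than rated current
    return State.MPPT

# Evaluated once per control-loop iteration, e.g.:
state = State.STANDBY
for v_pv, i_out in [(10.0, 0.0), (30.0, 3.0), (30.0, 9.5)]:
    state = next_state(state, v_pv, i_out)
    print(v_pv, i_out, state.name)
```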
Abstract:
Astrocytes are essential for neuronal function and survival, so both cell types were included in a human neurotoxicity test-system to assess the protective effects of astrocytes on neurons, compared with a culture of neurons alone. The human NT2.D1 cell line was differentiated to form either a co-culture of post-mitotic NT2.N neuronal (TUJ1, NF68 and NSE positive) and NT2.A astrocytic (GFAP positive) cells (∼2:1 NT2.A:NT2.N), or an NT2.N mono-culture. Cultures were exposed to human toxins, for 4 h at sub-cytotoxic concentrations, in order to compare levels of compromised cell function and thus evidence of an astrocytic protective effect. Functional endpoints examined included assays for cellular energy (ATP) and glutathione (GSH) levels, generation of hydrogen peroxide (H2O2) and caspase-3 activation. Generally, the NT2.N/A co-culture was more resistant to toxicity, maintaining superior ATP and GSH levels and sustaining smaller significant increases in H2O2 levels compared with neurons alone. However, the pure neuronal culture showed a significantly lower level of caspase activation. These data suggest that besides their support for neurons through maintenance of ATP and GSH and control of H2O2 levels, following exposure to some substances, astrocytes may promote an apoptotic mode of cell death. Thus, it appears the use of astrocytes in an in vitro predictive neurotoxicity test-system may be more relevant to human CNS structure and function than neuronal cells alone.
Abstract:
Field material testing provides firsthand information on pavement conditions, which is most helpful in evaluating performance and identifying preventive maintenance or overlay strategies. The high variability of field asphalt concrete due to construction raises the demand for accuracy of the test. Accordingly, the objective of this study is to propose a reliable and repeatable methodology to evaluate the fracture properties of field-aged asphalt concrete using the overlay test (OT). The OT is selected because of its efficiency and feasibility for asphalt field cores with diverse dimensions. The fracture properties refer to the Paris’ law parameters based on the pseudo J-integral (A and n) because of the sound physical significance of the pseudo J-integral with respect to characterizing the cracking process. In order to determine A and n, a two-step OT protocol is designed to characterize the undamaged and damaged behaviors of asphalt field cores. To ensure the accuracy of the determined undamaged and fracture properties, a new analysis method is then developed for data processing, which combines finite element simulations with mechanical analysis of the viscoelastic force equilibrium and the evolution of pseudo displacement work in the OT specimen. Finally, theoretical equations are derived to calculate A and n directly from the OT data, and the accuracy of the determined fracture properties is verified. The proposed methodology is applied to a total of 27 asphalt field cores obtained from a field project in Texas, including the control Hot Mix Asphalt (HMA) and two types of warm mix asphalt (WMA). The results demonstrate a high linear correlation between n and −log A for all the tested field cores. Investigations of the effect of field aging on the fracture properties confirm that n is a good indicator to quantify the cracking resistance of asphalt concrete, and that summer climatic conditions clearly accelerate the rate of aging. The impact of the WMA technologies on the fracture properties of asphalt concrete is visualized by comparing the n-values: the Evotherm WMA technology slightly improves the cracking resistance, while the foaming WMA technology provides fracture properties comparable with those of the HMA. After 15 months of aging in the field, the cracking resistance does not exhibit a significant difference between the HMA and WMAs, which is confirmed by observations of field distresses.
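For reference, the crack-growth relation usually meant by "Paris' law based on the pseudo J-integral" is given below in its standard form; the abstract itself does not reproduce the equation, so this is quoted from the general literature rather than from the paper:

```latex
% Paris' law written in terms of the pseudo J-integral J_R; c is crack
% length and N the number of load cycles.  Standard form, not quoted
% directly from the paper.
\frac{\mathrm{d}c}{\mathrm{d}N} = A\,(J_R)^{\,n}
```

A and n are the fracture parameters that the two-step OT protocol and the derived theoretical equations are designed to recover.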
Abstract:
We investigate the effects of organizational culture and personal values on performance under individual and team contest incentives. We develop a model of regard for others and in-group favoritism that predicts interaction effects between organizational values and personal values in contest games. These predictions are tested in a computerized lab experiment with exogenous control of both organizational values and incentives. In line with our theoretical model we find that prosocial (proself) orientated subjects exert more (less) effort in team contests in the primed prosocial organizational values condition, relative to the neutrally primed baseline condition. Further, when the prosocial organizational values are combined with individual contest incentives, prosocial subjects no longer outperform their proself counterparts. These findings provide a first, affirmative, causal test of person-organization fit theory. They also suggest the importance of a 'triple-fit' between personal preferences, organizational values and incentive mechanisms for prosocially orientated individuals.
Abstract:
How are the image statistics of global image contrast computed? We answered this by using a contrast-matching task for checkerboard configurations of ‘battenberg’ micro-patterns where the contrasts and spatial spreads of interdigitated pairs of micro-patterns were adjusted independently. Test stimuli were 20 × 20 arrays with various sized cluster widths, matched to standard patterns of uniform contrast. When one of the test patterns contained a pattern with much higher contrast than the other, that determined global pattern contrast, as in a max() operation. Crucially, however, the full matching functions had a curious intermediate region where low contrast additions for one pattern to intermediate contrasts of the other caused a paradoxical reduction in perceived global contrast. None of the following models predicted this: RMS, energy, linear sum, max, Legge and Foley. However, a gain control model incorporating wide-field integration and suppression of nonlinear contrast responses predicted the results with no free parameters. This model was derived from experiments on summation of contrast at threshold, and masking and summation effects in dipper functions. Those experiments were also inconsistent with the failed models above. Thus, we conclude that our contrast gain control model (Meese & Summers, 2007) describes a fundamental operation in human contrast vision.
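One generic way of writing the kind of model described, with wide-field integration of nonlinear contrast responses in the numerator and wide-field suppression in the denominator; the exponents and constant are placeholders, since the abstract does not give the fitted form:

```latex
% Schematic contrast gain control with wide-field integration (numerator)
% and wide-field suppression (denominator) of local nonlinear contrast
% responses r_i; p, q and z are placeholders, not fitted values.
R = \frac{\displaystyle\sum_{i \in \text{field}} r_i^{\,p}}
         {\, z + \displaystyle\sum_{j \in \text{field}} r_j^{\,q} \,}
```

In a scheme of this kind a low-contrast addition can contribute more to the suppressive denominator than to the excitatory numerator, which is the sort of behaviour needed to produce the paradoxical reduction in perceived global contrast described above.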