57 results for experimental analysis of behaviour
Abstract:
In any investigation in optometry involving more than two treatment or patient groups, an investigator should use ANOVA to analyse the results, assuming that the data conform reasonably well to the assumptions of the analysis. Ideally, specific null hypotheses should be built into the experiment from the start so that the variation between treatments can be partitioned to test these effects directly. If 'post-hoc' tests are used, then an experimenter should examine the degree of protection offered by the test against the possibility of making either a type 1 or a type 2 error. All experimenters should be aware of the complexity of ANOVA. The present article describes only one common form of the analysis, viz., that which applies to a single classification of the treatments in a randomised design. There are many different forms of the analysis, each of which is appropriate to a specific experimental design. The uses of some of the most common forms of ANOVA in optometry have been described in a further article. If in any doubt, an investigator should consult a statistician with experience of the analysis of experiments in optometry since, once embarked upon an experiment with an unsuitable design, there may be little that a statistician can do to help.
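As a hedged illustration of the single-classification (one-way) ANOVA described above, the sketch below analyses three hypothetical treatment groups with SciPy; the group means, sample sizes and the use of Tukey's HSD as the protected post-hoc test are assumptions for the example, not taken from the article.

```python
# A minimal sketch (hypothetical data) of a single-classification ANOVA in a
# completely randomised design with three treatment groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(0.00, 0.10, size=12)   # hypothetical scores, treatment A
group_b = rng.normal(0.05, 0.10, size=12)   # treatment B
group_c = rng.normal(0.12, 0.10, size=12)   # treatment C

# One-way ANOVA partitions total variation into between- and within-treatment parts.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# If the overall F-test is significant, a protected post-hoc comparison such as
# Tukey's HSD limits the chance of a type 1 error across all pairwise contrasts.
tukey = stats.tukey_hsd(group_a, group_b, group_c)
print(tukey)
```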
Abstract:
Grafting of antioxidants and other modifiers onto polymers by reactive extrusion has been performed successfully by the Polymer Processing and Performance Group at Aston University. Traditionally, the optimum conditions for the grafting process have been established within a Brabender internal mixer. Transfer of this batch process to a continuous processor, such as an extruder, has typically been empirical. To have more confidence in the success of direct transfer of the process requires knowledge of, and comparison between, residence times, mixing intensities, shear rates and flow regimes in the internal mixer and in the continuous processor. The continuous processor chosen for the current work is the closely intermeshing, co-rotating twin-screw extruder (CICo-TSE). CICo-TSEs contain screw elements that convey material with a self-wiping action and are widely used for polymer compounding and blending. Of the different mixing modules contained within the CICo-TSE, the trilobal elements, which impose intensive mixing, and the mixing discs, which impose extensive mixing, are of importance when establishing the intensity of mixing. In this thesis, the flow patterns within the various regions of the single-flighted conveying screw elements and within both the trilobal element and mixing disc zones of a Betol BTS40 CICo-TSE have been modelled using the computational fluid dynamics package Polyflow. A major obstacle encountered when solving the flow problem within all of these sets of elements arises from both the complex geometry and the time-dependent flow boundaries as the elements rotate about their fixed axes. Simulation of the time-dependent boundaries was overcome by selecting a number of sequential 2D and 3D geometries, used to represent partial mixing cycles. The flow fields were simulated using the ideal rheological properties of polypropylene and characterised in terms of velocity vectors, shear stresses generated and a parameter known as the mixing efficiency. The majority of the large 3D simulations were performed on the Cray J90 supercomputer situated at the Rutherford Appleton Laboratory, with pre- and post-processing operations achieved via a Silicon Graphics Indy workstation. A mechanical model was constructed consisting of various CICo-TSE elements rotating within a transparent outer barrel. A technique has been developed using coloured viscous clays whereby the flow patterns and mixing characteristics within the CICo-TSE may be visualised. In order to test and verify the simulated predictions, the patterns observed within the mechanical model were compared with the flow patterns predicted by the computational model. The flow patterns within the single-flighted conveying screw elements, in particular, showed good agreement between the experimental and simulated results.
Abstract:
The primary objective of this research has been to determine the potential of fluorescence spectroscopy as a method for analysis of surface deposition on contact lenses. In order to achieve this, it was first necessary to ascertain whether fluorescence analysis would be able to detect and distinguish between protein and lipid deposited on a lens surface. In conjunction with this, it was important to determine the specific excitation wavelengths at which these deposited species were detected with the greatest sensitivity. Experimental observations showed that an excitation wavelength of 360 nm would detect lipid deposited on a lens surface, and an excitation wavelength of 280 nm would detect and distinguish between protein and lipid deposited on a contact lens. It was also very important to determine whether clean, unspoilt lenses showed significant levels of fluorescence themselves. Fluorescence spectra recorded from a variety of unworn contact lenses at excitation wavelengths of 360 nm and 280 nm indicated that most contact lens materials do not themselves fluoresce to any great extent. Following these initial experiments, various clinically and laboratory based studies were performed using fluorescence spectroscopy as a method of analysing contact lens deposition levels. The clinically based studies enabled contact lenses with known wear backgrounds to be rapidly and individually analysed following discontinuation of wear. Deposition levels in the early stages of lens wear were determined for various lens materials. The effect of surfactant cleaning on deposition levels was also investigated. The laboratory based studies involved comparing some of the in vivo results with those of identical lenses that had been spoilt using an in vitro method. Finally, an examination of lysozyme migration into and out of stored ionic high water content contact lenses was made.
Abstract:
This work concerns the development of a proton induced X-ray emission (PIXE) analysis system and a multi-sample scattering chamber facility. The characteristics of the beam pulsing system and its counting rate capabilities were evaluated by observing the ion-induced X-ray emission from pure thick copper targets, with and without beam pulsing operation. The characteristic X-rays were detected with a high resolution Si(Li) detector coupled to a multi-channel analyser. The removal of the pile-up continuum by the use of the on-demand beam pulsing is clearly demonstrated in this work. This new on-demand pulsing system, with counting rate capabilities of 25, 18 and 10 kPPS corresponding to main amplifier time constants of 2, 4 and 8 µs respectively, enables thick targets to be analysed more readily. Reproducibility of the on-demand beam pulsing system operation was checked by repeated measurements of the system throughput curves, with and without beam pulsing. The reproducibility of the analysis performed using this system was also checked by repeated measurements of the intensity ratios from a number of standard binary alloys during the experimental work. A computer programme has been developed to calculate the X-ray yields from thick targets bombarded by protons, taking into account the secondary X-ray yield produced by fluorescence when the characteristic X-ray energy of one element lies above the absorption edge energy of the other element present in the target. This effect was studied on metallic binary alloys such as Fe/Ni and Cr/Fe. The quantitative analysis of Fe/Ni and Cr/Fe alloy samples to determine their elemental composition, taking this enhancement into account, has been demonstrated in this work. Furthermore, the usefulness of the Rutherford backscattering (RBS) technique to obtain the depth profiles of the elements in the upper micron of the sample is discussed.
Abstract:
A participant observation method was employed in the study of a 20-week stoppage at Ansells Brewery Limited, a constituent company of Allied Breweries (U.K.). The strike, involving 1,000 workers, began in opposition to the implementation of a four-day working week and culminated in the permanent closure of the brewery. The three main phases of the strike's development (i.e., its initiation, maintenance and termination) were analysed according to a social-cognitive approach, based on the psychological imagery, beliefs, values and perceptions underlying the employees' behaviour. Previous psychological treatments of strikes have tended to ignore many of the aspects of social definition, planning and coordination that are an integral part of industrial action. The present study is, therefore, unique in concentrating on the thought processes by which striking workers make sense of their current situation and collectively formulate an appropriate response. The Ansells strike provides an especially vivid illustration of the ways in which the seminal insights of a small number of individuals are developed, via processes of communication and influence, into a consensual interpretation of reality. By adopting a historical perspective, it has been possible to demonstrate how contemporary definitions are shaped by the prior history of union-management relations, particularly with regard to: (a) the way that previous events were subjectively interpreted, and (b) the lessons that were learned on the basis of that experience. The present approach is psychological insofar as it deals with the cognitive elements of strike action. However, to the extent that it draws from relevant sections of the industrial relations, organizational behaviour, sociology, anthropology and linguistics literatures, it can claim to be truly interdisciplinary.
Abstract:
This investigation is in two parts, theory and experimental verification. (1) Theoretical study: it is, for obvious reasons, necessary to analyse the concept of formability first. For the purpose of the present investigation it is sufficient to define the four aspects of formability as follows: (a) the formability of the material at a critical section, (b) the formability of the material in general, (c) process efficiency, and (d) the proportional increase in surface area. A method of quantitative assessment is proposed for each of the four aspects of formability. The theoretical study also includes the distinction between coaxial and non-coaxial strains, which occur, respectively, in axisymmetrical and unsymmetrical forming processes, and the inadequacy of the circular grid system for the assessment of formability is explained in the light of this distinction. (2) Experimental study: as one of the bases of the experimental work, the determination of the end point of a forming process, which sets the limit to the formability of the work material, is discussed. The effects of three process parameters on draw-in are shown graphically. The delay of fracture in sheet metal forming resulting from draw-in is then analysed in kinematical terms, namely through the radial displacements, the radial and circumferential strains, and the projected thickness of the workpiece. Through the equilibrium equation of the membrane stresses, the effect of draw-in on the shape of the unsupported region of the workpiece, and hence on the position of the critical section, is explained. The effect of draw-in on the four aspects of formability is discussed throughout the investigation. The triangular coordinate system is used to present and analyse the triaxial strains involved; this coordinate system has the advantage of showing all three principal strains in a material simultaneously, as well as representing clearly the many types of strain involved in sheet metal work.
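As a hedged aside (not taken from the thesis), the short sketch below shows how the three principal true strains might be recovered from a deformed grid-circle measurement under an assumption of constancy of volume, together with the proportional increase in surface area mentioned as aspect (d); the diameters used are hypothetical.

```python
# A minimal sketch: principal (triaxial) true strains from one deformed grid
# circle, assuming constancy of volume. All measurements are hypothetical.
import numpy as np

d0 = 2.50                         # original grid-circle diameter [mm]
d_major, d_minor = 3.40, 2.10     # measured ellipse axes after forming [mm]

eps1 = np.log(d_major / d0)       # major principal true strain
eps2 = np.log(d_minor / d0)       # minor principal true strain
eps3 = -(eps1 + eps2)             # thickness strain from volume constancy

area_increase = np.exp(eps1 + eps2) - 1.0   # proportional increase in surface area

print(f"principal strains: {eps1:.3f}, {eps2:.3f}, {eps3:.3f} (sum = 0)")
print(f"proportional increase in surface area: {area_increase:.1%}")
# Plotted in a triangular coordinate system, the three strains can be shown
# simultaneously, which a circular grid reading alone cannot do.
```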
Abstract:
The finite element method is now well established among engineers as an extremely useful tool in the analysis of problems with complicated boundary conditions. One aim of this thesis has been to produce a set of computer algorithms capable of efficiently analysing complex three-dimensional structures. This set of algorithms has been designed to permit much versatility: provisions such as the use of only those parts of the system which are relevant to a given analysis, and the facility to extend the system by the addition of new elements, are incorporated. Five element types have been programmed, these being prismatic members, rectangular plates, triangular plates and curved plates. The 'in and out of plane' stiffness matrices for a curved plate element are derived using the finite element technique. The performance of this type of element is compared with two other theoretical solutions as well as with a set of independent experimental observations. Additional experimental work was then carried out by the author to further evaluate the acceptability of this element. Finally, the analyses of two large civil engineering structures, the shell of an electrical precipitator and a concrete bridge, are presented to investigate the performance of the algorithms. Comparisons are made between the computer time, core store requirements and the accuracy of the analysis for the proposed system and those of another program.
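To make the finite element bookkeeping concrete, here is a minimal sketch, not taken from the thesis code, of assembling and solving the global stiffness system for two prismatic (axial bar) elements; the material properties, connectivity and load are illustrative assumptions.

```python
# A minimal sketch: assemble and solve K u = f for two axial bar elements in
# series, with the left end fixed. All numbers (E, A, L, load) are hypothetical.
import numpy as np

E, A, L = 210e9, 1e-4, 1.0          # Young's modulus [Pa], area [m^2], element length [m]
k_e = (E * A / L) * np.array([[1.0, -1.0],
                              [-1.0, 1.0]])   # 2-node axial element stiffness

n_nodes = 3
K = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2)]:        # element connectivity
    K[np.ix_([i, j], [i, j])] += k_e

f = np.zeros(n_nodes)
f[2] = 1_000.0                       # 1 kN axial load at the free end

# Apply the fixed boundary condition at node 0 and solve the reduced system.
free = [1, 2]
u = np.zeros(n_nodes)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print("nodal displacements [m]:", u)  # tip value should equal F*(2L)/(E*A)
```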
Abstract:
Combinatorial libraries continue to play a key role in drug discovery. To increase structural diversity, several experimental methods have been developed. However, limited effort has so far been directed at quantifying the diversity of the broadly used diversity-oriented synthesis (DOS) libraries. Herein we report a comprehensive characterization of 15 bis-diazacyclic combinatorial libraries obtained through libraries from libraries, which is a DOS approach. Using MACCS keys, radial and different pharmacophoric fingerprints, as well as six molecular properties, the increased structural and property diversity of the libraries from libraries over the individual libraries was demonstrated. Comparison of the libraries with existing drugs, NCI Diversity and the Molecular Libraries Small Molecule Repository revealed the structural uniqueness of the combinatorial libraries (mean similarity < 0.5 for any fingerprint representation). In particular, bis-cyclic thiourea libraries were the most structurally dissimilar to drugs while retaining drug-like character in property space. This study represents the first comprehensive quantification of the diversity of libraries from libraries, providing a solid quantitative approach to compare and contrast the diversity of DOS libraries with existing drugs or any other compound collection.
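As a hedged illustration of the fingerprint-based comparison described above (not the authors' pipeline), the sketch below computes a MACCS-keys Tanimoto similarity between a library-like molecule and a reference drug using RDKit; both SMILES strings are arbitrary placeholders rather than compounds from the study.

```python
# A minimal sketch of MACCS-key Tanimoto similarity with RDKit; the molecules
# are placeholders, not structures from the reported libraries.
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys

library_member = Chem.MolFromSmiles("O=C(N1CCNCC1)N1CCNCC1")   # hypothetical diazacycle-containing urea
reference_drug = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as an arbitrary reference

fp_lib = MACCSkeys.GenMACCSKeys(library_member)
fp_ref = MACCSkeys.GenMACCSKeys(reference_drug)

similarity = DataStructs.TanimotoSimilarity(fp_lib, fp_ref)
print(f"MACCS Tanimoto similarity: {similarity:.2f}")
# In the study, mean similarities below 0.5 against drug collections were read
# as evidence of the structural uniqueness of the combinatorial libraries.
```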
Abstract:
This thesis addresses the kineto-elastodynamic analysis of a four-bar mechanism running at high speed where all links are assumed to be flexible. First, the mechanism, at static configurations, is considered as a structure. Two methods are used to model the system, namely the finite element method (FEM) and the dynamic stiffness method. The natural frequencies and mode shapes at different positions from both methods are calculated and compared. The FEM is used to model the mechanism running at high speed. The governing equations of motion are derived using Hamilton's principle. The equations obtained are a set of stiff ordinary differential equations with periodic coefficients. A model is developed whereby the FEM and the dynamic stiffness method are used conjointly to provide high-precision results with only one element per link. The principal concern of the mechanism designer is the behaviour of the mechanism at steady state. Few algorithms have been developed to deliver the steady-state solution without resorting to costly time-marching simulation. In this study two algorithms are developed to overcome the limitations of the existing algorithms, and the superiority of the new algorithms is demonstrated. The notion of critical speeds is clarified and a distinction is drawn between "critical speeds", where stresses are at a local maximum, and "unstable bands", where the mechanism deflections grow without bound. Floquet theory is used to assess the stability of the system. A simple method to locate the critical speeds is derived. It is shown that the critical speeds of the mechanism coincide with the local maxima of the eigenvalues of the transition matrix with respect to the rotational speed of the mechanism.
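To illustrate the Floquet-type stability check mentioned above, here is a minimal sketch applied to a single-degree-of-freedom Mathieu-type equation rather than the full flexible-mechanism model; the parameters delta and epsilon, the period and the tolerances are all assumptions made for the illustration.

```python
# A minimal Floquet stability sketch for x'' + (delta + epsilon*cos t) x = 0.
# The transition (monodromy) matrix over one period is built column by column,
# and the system is flagged unstable if any Floquet multiplier exceeds unit
# magnitude. Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

delta, epsilon = 1.0, 0.4            # hypothetical stiffness and parametric amplitude
T = 2.0 * np.pi                      # period of the coefficient variation

def rhs(t, y):
    x, v = y
    return [v, -(delta + epsilon * np.cos(t)) * x]

# Integrate unit initial conditions over one period to get the columns of Phi(T).
Phi = np.zeros((2, 2))
for k, y0 in enumerate(np.eye(2)):
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-9, atol=1e-12)
    Phi[:, k] = sol.y[:, -1]

multipliers = np.linalg.eigvals(Phi)
print("Floquet multipliers:", multipliers)
print("unstable band" if np.any(np.abs(multipliers) > 1.0 + 1e-6) else "stable at this speed")
```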
Abstract:
Firstly, we numerically model a practical 20 Gb/s undersea configuration employing the Return-to-Zero Differential Phase Shift Keying data format. The modelling is completed using the Split-Step Fourier Method to solve the Generalised Nonlinear Schrödinger Equation. We optimise the dispersion map and per-channel launch power of these channels and investigate how the choice of pre/post compensation can influence the performance. After obtaining these optimal configurations, we investigate the Bit Error Rate estimation of these systems and we see that estimation based on Gaussian statistics of the electrical current is appropriate for systems of this type, indicating quasi-linear behaviour. The introduction of narrower pulses due to the deployment of quasi-linear transmission decreases the tolerance to chromatic dispersion and intra-channel nonlinearity. We use tools from mathematical statistics to study the behaviour of these channels in order to develop new methods to estimate the Bit Error Rate. In the final section, we consider the estimation of the Eye Closure Penalty, a popular measure of signal distortion. Using a numerical example and assuming the symmetry of eye closure, we see that the Eye Closure Penalty can be estimated simply using Gaussian statistics. We also see that the statistics of the logical ones dominate the statistics of signal distortion in the case of Return-to-Zero On-Off Keying configurations.
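As a hedged sketch of the Split-Step Fourier Method named above (not the thesis model of the 20 Gb/s undersea system), the code below propagates a single low-power Gaussian pulse through the scalar nonlinear Schrödinger equation; the fibre parameters, pulse width and step sizes are illustrative assumptions.

```python
# A minimal symmetric split-step Fourier sketch for
# dA/dz = -i*(beta2/2)*d^2A/dt^2 + i*gamma*|A|^2*A (loss neglected).
import numpy as np

n_t, t_window = 1024, 400e-12                 # samples and time window [s]
t = np.linspace(-t_window / 2, t_window / 2, n_t, endpoint=False)
dt = t[1] - t[0]
omega = 2.0 * np.pi * np.fft.fftfreq(n_t, d=dt)

beta2 = -21.7e-27                             # group-velocity dispersion [s^2/m]
gamma = 1.3e-3                                # Kerr nonlinearity [1/(W m)]
dz, n_steps = 100.0, 500                      # 100 m steps over a 50 km span

A = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (20e-12) ** 2))   # 1 mW peak Gaussian pulse

linear_half = np.exp(0.5j * beta2 * omega**2 * dz / 2)    # half-step dispersion operator
for _ in range(n_steps):
    A = np.fft.ifft(linear_half * np.fft.fft(A))          # half linear step
    A = A * np.exp(1j * gamma * np.abs(A) ** 2 * dz)      # full nonlinear step
    A = np.fft.ifft(linear_half * np.fft.fft(A))          # half linear step

print("output peak power [mW]:", 1e3 * np.max(np.abs(A) ** 2))
```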
Abstract:
The reliability of printed circuit board assemblies under dynamic environments, such as those found onboard airplanes, ships and land vehicles, is receiving more attention. This research analyses the dynamic characteristics of a printed circuit board (PCB) supported by edge retainers and plug-in connectors. By modelling the wedge retainers and connector as simply supported boundary conditions with appropriate rotational spring stiffnesses along their respective edges, with the aid of finite element codes, natural frequencies for the board that agree closely with experimental natural frequencies are obtained. For a PCB supported by two opposite wedge retainers and a plug-in connector, with its remaining edge free of any restraint, it is found that these real supports behave somewhere between the simply supported and clamped boundary conditions and provide a percentage fixity of 39.5% more than the classical simply supported case. By using an eigensensitivity method, the rotational stiffnesses representing the boundary supports of the PCB can be updated effectively and are capable of representing the dynamics of the PCB accurately. The result shows that the percentage error in the fundamental frequency of the PCB finite element model is substantially reduced from 22.3% to 1.3%. The procedure demonstrates the effectiveness of using only the vibration test frequencies as reference data when the mode shapes of the original untuned model are almost identical to the referenced modes/experimental data. When only modal frequencies are used in model improvement, the analysis is very much simplified; furthermore, the time taken to obtain the experimental data is substantially reduced because the experimental mode shapes are not required. In addition, this thesis advocates a relatively simple method for determining the support locations that maximise the fundamental frequency of vibrating structures. The technique is simple and does not require any optimisation or sequential search algorithm in the analysis. The key to the procedure is to position the necessary supports so as to eliminate the lower modes of the original configuration. This is accomplished by introducing point supports along the nodal lines of the highest possible mode of the original configuration, so that all the other lower modes are eliminated by the introduction of the new or extra supports to the structure. It also proposes inspecting the average driving point residues along the nodal lines of vibrating plates to find the optimal locations of the supports. Numerical examples are provided to demonstrate its validity. By applying the method to the PCB supported on three sides by two wedge retainers and a connector, it is found that the single point constraint yielding the maximum fundamental frequency is located at the mid-point of the nodal line, namely node 39. This point support has the effect of increasing the structure's fundamental frequency from 68.4 Hz to 146.9 Hz, or 115% higher.
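The support-placement argument above can be illustrated with a hedged one-dimensional analogue (a pinned-pinned beam, not the thesis PCB model): placing a point support on the nodal line of the second mode eliminates the original fundamental, so the new fundamental rises to the old second natural frequency. The discretisation, beam length and grid size below are assumptions.

```python
# A 1-D analogue: finite-difference eigenvalues of a pinned-pinned beam
# (EI = rho*A = 1), before and after adding a point support at mid-span,
# which is the nodal point of the second mode. Dimensions are illustrative.
import numpy as np

n = 199                              # interior grid points
L = 1.0
h = L / (n + 1)

# Central-difference operator for w'''' with simply supported ends (w = w'' = 0).
D4 = (np.diag(6.0 * np.ones(n)) + np.diag(-4.0 * np.ones(n - 1), 1)
      + np.diag(-4.0 * np.ones(n - 1), -1) + np.diag(np.ones(n - 2), 2)
      + np.diag(np.ones(n - 2), -2)) / h**4
D4[0, 0] = D4[-1, -1] = 5.0 / h**4   # ghost-point elimination from w'' = 0 at the pins

def lowest_two(matrix):
    return np.sort(np.linalg.eigvalsh(matrix))[:2]

lam_original = lowest_two(D4)

# Constrain the mid-span point (nodal point of mode 2) by removing that DOF.
mid = n // 2
keep = [i for i in range(n) if i != mid]
lam_supported = lowest_two(D4[np.ix_(keep, keep)])

print("two lowest eigenvalues, original supports   :", lam_original)
print("two lowest eigenvalues, extra mid-span support:", lam_supported)
# The new fundamental matches the old second eigenvalue, as the
# support-placement argument predicts.
```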
Abstract:
An experimental testing system for the study of the dynamic behavior of fluid-loaded rectangular micromachined silicon plates is designed and presented in this paper. In this experimental system, the base-excitation technique, combined with a pseudo-random signal and cross-correlation analysis, is applied to test fluid-loaded microstructures. A theoretical model is also derived to reveal the mechanism of such an experimental system in the application of testing fluid-loaded microstructures. The dynamic experiments cover a series of tests of various microplates with different boundary conditions and dimensions, both in air and immersed in water. This paper is the first to demonstrate the capability and performance of base excitation in the dynamic testing of microstructures in a natural fluid environment. Traditional modal analysis approaches are used to evaluate natural frequencies, modal damping and mode shapes from the experimental data. The obtained experimental results are discussed and compared with theoretical predictions. This research experimentally determines the dynamic characteristics of the fluid-loaded silicon microplates, which can contribute to the design of plate-based microsystems. The experimental system and testing approaches presented in this paper can be widely applied to the investigation of the dynamics of microstructures and nanostructures.
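As a hedged sketch of the pseudo-random excitation and cross-correlation idea (using synthetic data rather than the measured microplate response), the code below estimates a frequency response function with the H1 cross-spectral estimator and reads off the resonance; the sampling rate, resonance frequency, damping and noise level are assumptions.

```python
# A minimal sketch: H1 frequency response estimation from pseudo-random
# excitation of a synthetic single-mode structure (not the paper's microplate).
import numpy as np
from scipy import signal

fs = 10_000.0                                  # sampling rate [Hz]
t = np.arange(0, 20.0, 1.0 / fs)
rng = np.random.default_rng(0)
x = rng.standard_normal(t.size)                # pseudo-random base excitation

# Synthetic single-mode response: 1.2 kHz resonance with 1 % damping.
f_n, zeta = 1200.0, 0.01
wn = 2.0 * np.pi * f_n
plant = signal.TransferFunction([wn**2], [1.0, 2.0 * zeta * wn, wn**2])
_, y, _ = signal.lsim(plant, U=x, T=t)
y = y + 0.05 * rng.standard_normal(t.size)     # measurement noise on the response

# H1 estimator: cross-spectral density of input and output divided by the
# input auto-spectral density, from averaged (Welch) periodograms.
f, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)
_, Pxx = signal.welch(x, fs=fs, nperseg=4096)
H1 = Pxy / Pxx

print(f"identified resonance ~ {f[np.argmax(np.abs(H1))]:.0f} Hz (true value {f_n:.0f} Hz)")
```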
Abstract:
This paper investigates the vibration characteristics of the coupled system of a microscale fluid-loaded rectangular isotropic plate carrying a uniformly distributed attached mass. Previous literature has studied, separately, the changes in plate vibration induced by an acoustic field or by attached mass loading; this paper considers the two types of loading acting simultaneously. Based on Lamb's assumption for the fluid-loaded structure and the Rayleigh–Ritz energy method, this paper presents an analytical solution for the natural frequencies and mode shapes of the coupled system. Numerical results for microplates with different types of boundary conditions have also been obtained and compared with experimental and numerical results from previous literature. The theoretical model and novel analytical solution are of particular interest in the design of microplate-based biosensing devices.
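A greatly simplified back-of-the-envelope version of this coupling (not the paper's Rayleigh–Ritz solution) assumes the mode shape is unchanged, so the attached-mass ratio and a Lamb-type added-virtual-mass factor simply add in the kinetic energy; the in-vacuo frequency, mass ratio and fluid factor below are assumed values.

```python
# A simplified Rayleigh-quotient estimate: with an unchanged mode shape, a
# uniformly distributed attached mass and the fluid's added virtual mass both
# scale the kinetic energy, so each frequency falls by 1/sqrt(1 + mass ratios).
import numpy as np

f_vacuum = 125.0e3           # hypothetical in-vacuo natural frequency [Hz]
mass_ratio_attached = 0.30   # attached distributed mass / plate mass (assumed)
beta_fluid = 1.8             # Lamb-type added virtual mass incidence factor (assumed)

f_loaded = f_vacuum / np.sqrt(1.0 + mass_ratio_attached + beta_fluid)
print(f"estimated fluid- and mass-loaded frequency: {f_loaded / 1e3:.1f} kHz")
```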
Abstract:
The 21-day experimental gingivitis model, an established noninvasive model of inflammation in response to increasing bacterial accumulation in humans, is designed to enable the study of both the induction and resolution of inflammation. Here, we have analyzed gingival crevicular fluid, an oral fluid comprising a serum transudate and tissue exudates, by LC-MS/MS using Fourier transform ion cyclotron resonance mass spectrometry and iTRAQ isobaric mass tags, to establish meta-proteomic profiles of inflammation-induced changes in proteins in healthy young volunteers. Across the course of experimentally induced gingivitis, we identified 16 bacterial and 186 human proteins. Although the abundances of the bacterial proteins identified did not vary temporally, Fusobacterium outer membrane proteins were detected; Fusobacterium species have previously been associated with periodontal health or disease. The human proteins identified spanned a wide range of compartments (both extracellular and intracellular) and functions, including serum proteins, proteins displaying antibacterial properties, and proteins with functions associated with cellular transcription, DNA binding, the cytoskeleton, cell adhesion, and cilia. PolySNAP3 clustering software was used in a multilayered analytical approach. Clusters of proteins that were associated with changes in the clinical parameters included neuronal and synapse-associated proteins.
Abstract:
Analysis of covariance (ANCOVA) is a useful method of ‘error control’, i.e., it can reduce the size of the error variance in an experimental or observational study. An initial measure obtained before the experiment, which is closely related to the final measurement, is used to adjust the final measurements, thus reducing the error variance. When this method is used to reduce the error term, the X variable must not itself be affected by the experimental treatments, because part of the treatment effect would then also be removed. Hence, the method can only be safely used when X is measured before the experiment. A further limitation of the analysis is that only the linear component of the relationship between Y and X is removed, and it is possible that Y is a curvilinear function of X. A question often raised is whether ANCOVA should be used routinely in experiments rather than a randomized blocks or split-plot design, which may also reduce the error variance. The answer to this question depends on the relative precision of the different methods in each scenario. Considerable judgment is often required to select the best experimental design, and statistical help should be sought at an early stage of an investigation.
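As a hedged sketch of the adjustment described above (hypothetical data, not from the article), the code below fits the same response with and without the pre-treatment covariate using statsmodels, so the reduction in the error mean square can be seen directly; the group effects, covariate slope and noise level are assumed.

```python
# A minimal ANCOVA sketch: the pre-treatment measure x is entered as a
# covariate, shrinking the error variance for the treatment comparison.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_per_group = 15
treatment = np.repeat(["A", "B", "C"], n_per_group)
x = rng.normal(50, 8, size=treatment.size)            # baseline measure, taken before treatment
effect = {"A": 0.0, "B": 2.0, "C": 4.0}               # assumed treatment effects
y = (5.0 + 0.6 * x
     + np.array([effect[g] for g in treatment])
     + rng.normal(0, 3, size=treatment.size))         # final measure

df = pd.DataFrame({"y": y, "x": x, "treatment": treatment})

anova_model = smf.ols("y ~ C(treatment)", data=df).fit()         # no covariate
ancova_model = smf.ols("y ~ C(treatment) + x", data=df).fit()    # linear adjustment for x

print("error mean square without covariate:", round(anova_model.mse_resid, 2))
print("error mean square with covariate   :", round(ancova_model.mse_resid, 2))
print(sm.stats.anova_lm(ancova_model, typ=2))
```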