37 results for simplicity
in Aston University Research Archive
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
Advances in our understanding of pathological mechanisms can inform the identification of various biomarkers for risk stratification, for monitoring drug efficacy and toxicity, and for enabling careful monitoring of polypharmacy. Biomarkers in the broadest sense refer to 'biological markers', which can be blood-based (e.g. fibrin D-dimer, von Willebrand factor), urine-based (e.g. thromboxane), or even related to cardiac or cerebral imaging (1). Most biomarkers offer improvements over clinical risk scores in predicting high-risk patients - at least statistically - but usually at the cost of the simplicity and practicality needed for easy application in everyday clinical practice. Given that different biomarkers reflect different aspects of pathophysiology (e.g. inflammation, clotting, collagen turnover), they can nevertheless contribute to a better understanding of underlying disease processes (2). Indeed, many age-related diseases share common modifiable underpinning mechanisms, e.g. inflammation, oxidative stress and visceral adiposity.
Abstract:
The n-tuple recognition method is briefly reviewed, summarizing the main theoretical results. Large-scale experiments carried out on StatLog project datasets confirm this method as a viable competitor to more popular methods, owing to its speed, simplicity, and accuracy on the majority of a wide variety of classification problems. A further investigation into the failure of the method on certain datasets traces the problem largely to a mismatch between the scales that describe generalization and data sparseness.
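The n-tuple scoring described above can be sketched in a few lines. The following is a minimal illustrative Python implementation, not the paper's code: the class name, tuple count and tuple size are hypothetical choices. Each tuple samples n random bit positions of the input, training records the addresses seen per class, and classification counts matching addresses.

```python
import random

class NTupleClassifier:
    """Minimal binary n-tuple (RAMnet-style) classifier sketch."""

    def __init__(self, input_bits, n=4, num_tuples=20, seed=0):
        rng = random.Random(seed)
        positions = list(range(input_bits))
        # Each tuple is a fixed random choice of n bit positions.
        self.tuples = [rng.sample(positions, n) for _ in range(num_tuples)]
        self.memory = {}  # class label -> one set of seen addresses per tuple

    def _addresses(self, bits):
        # The n sampled bits of each tuple form that tuple's RAM address.
        for tup in self.tuples:
            yield tuple(bits[i] for i in tup)

    def train(self, bits, label):
        rams = self.memory.setdefault(label, [set() for _ in self.tuples])
        for ram, addr in zip(rams, self._addresses(bits)):
            ram.add(addr)  # mark this address as seen for this class

    def score(self, bits, label):
        rams = self.memory[label]
        return sum(addr in ram
                   for ram, addr in zip(rams, self._addresses(bits)))

    def classify(self, bits):
        # Predict the class whose RAMs match the most tuple addresses.
        return max(self.memory, key=lambda lab: self.score(bits, lab))
```

Training is a single pass that sets memory locations; there is no cost function being minimised, which is the source of the potential sub-optimality discussed in the abstract.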
Abstract:
This thesis is concerned with the effect of polymer structure on the miscibility of three-component blends based on poly(lactic acid) (PLA), prepared using blending techniques. The examination of novel PLA homologues (pre-synthesised poly(α-esters)), including a range of aliphatic and aromatic poly(α-esters), is an important aspect of the work. Because of their structural simplicity and similarity to PLA, they provide an ideal system for studying the effect of polyester structure on the miscibility of PLA polymer blends. The miscibility behaviour of the PLA homologues is compared with other aliphatic polyesters (e.g. poly(ε-caprolactone) (PCL) and poly(hydroxybutyrate hydroxyvalerate) (P(HB-HV))), together with a series of cellulose-based polymers (e.g. cellulose acetate butyrate (CAB)). The work started with the exploration of a technique for preliminary observation of blend miscibility, referred to as “a rapid screening method”; the miscibility of binary blends was then observed and characterised by percent transmittance together with the Coleman and Painter miscibility approach. It was observed that symmetrical structures (e.g. α1 (dimethyl), α2 (diethyl)) promote close packing, which restricts their chains from intermingling with poly(L-lactide) (PLLA) chains and renders the blends immiscible, whereas asymmetrical structures (e.g. α4 (cyclohexyl)) behave in the opposite way. α6 (chloromethyl-methyl) would be expected to interact well with PLLA through its polar chloride group, but it does not, because the helical structure of PLLA is difficult to disrupt. PLA was immiscible with PCL, P(HB-HV), and compatibilisers (e.g. G40, LLA-co-PCL), but miscible with CAB, which is a hydrogen-bonding polymer. These binary blends nevertheless provided a useful indication for the exploration of the novel three-component blends. In summary, the three-component blends are miscible even if only two of the polymers are miscible.
This is the benefit of the three-component blends studied in this thesis; the work does not attempt a theoretical explanation of the miscibility of three-component blend systems.
Abstract:
In this paper, we propose and demonstrate a novel scheme for simultaneous measurement of liquid level and temperature based on a simple uniform fiber Bragg grating (FBG), by monitoring both the short-wavelength-loss peaks and the Bragg resonance. The liquid level can be measured from the amplitude changes of the short-wavelength-loss peaks, while temperature can be measured from the wavelength shift of the Bragg resonance. Both theoretical simulation results and experimental results are presented. Such a scheme has several advantages, including robustness, simplicity, flexibility in choosing sensitivity, and simultaneous temperature measurement capability.
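Because the two readouts above are carried by different observables (loss-peak amplitude for level, Bragg wavelength for temperature), decoding reduces to two independent calibrations. A minimal sketch under assumed linear responses; the calibration slopes and reference temperature below are hypothetical placeholders, not values from the paper:

```python
def decode_fbg(peak_loss_db, bragg_shift_nm,
               level_per_db=12.5, temp_per_nm=100.0, temp_ref_c=25.0):
    """Decode liquid level and temperature from one uniform FBG.

    peak_loss_db   : amplitude change of the short-wavelength-loss peaks
    bragg_shift_nm : wavelength shift of the Bragg resonance
    The slopes (level_per_db, temp_per_nm) and temp_ref_c stand in for
    experimentally measured calibration constants.
    """
    level_mm = level_per_db * peak_loss_db                # level from amplitude
    temp_c = temp_ref_c + temp_per_nm * bragg_shift_nm    # temperature from wavelength
    return level_mm, temp_c
```

Because each observable responds to only one measurand here, no matrix inversion is needed, which is part of the scheme's claimed simplicity.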
Abstract:
The future broadband information network will undoubtedly integrate the mobility and flexibility of wireless access systems with the huge bandwidth capacity of photonics solutions to enable a communication system capable of handling the anticipated demand for interactive services. Towards wide coverage and low-cost implementations of such broadband wireless photonics communication networks, various aspects of the enabling technologies continue to generate intense research interest. Among the core technologies, the optical generation and distribution of radio frequency signals over fibres, and the fibre optic signal processing of optical and radio frequency signals, have been the subjects of study in this thesis. Based on the intrinsic properties of single-mode optical fibres, and in conjunction with the concepts of optical fibre delay line filters and fibre Bragg gratings, a number of novel fibre-based devices, potentially suitable for applications in the future wireless photonics communication systems, have been realised. Special single-mode fibres, namely the high birefringence (Hi-Bi) fibre and the Er/Yb doped fibre, have been employed so as to exploit their merits to achieve practical and cost-effective all-fibre architectures. A number of fibre-based complex signal processors for optical and radio frequencies using novel Hi-Bi fibre delay line filter architectures have been illustrated. In particular, operations such as multichannel flat-top bandpass filtering, simultaneous complementary outputs and bidirectional nonreciprocal wavelength interleaving have been demonstrated. The proposed configurations featured greatly reduced environmental sensitivity typical of coherent fibre delay line filter schemes, reconfigurable transfer functions, negligible chromatic dispersion, and ease of implementation, not easily achievable with other techniques.
A number of unique fibre grating devices for signal filtering and fibre laser applications have been realised. The concept of superimposed fibre Bragg gratings has been extended to non-uniform grating structures and into Hi-Bi fibres to achieve highly useful grating devices, such as an overwritten phase-shifted fibre grating structure and widely/narrowly spaced polarization-discriminating filters, that are not limited by the intrinsic fibre properties. In terms of fibre-based optical millimetre-wave transmitters, unique approaches based on fibre laser configurations have been proposed and demonstrated. The ability of dual-mode distributed feedback (DFB) fibre lasers to generate high spectral purity, narrow linewidth heterodyne signals without complex feedback mechanisms has been illustrated. A novel co-located dual DFB fibre laser configuration, based on the proposed superimposed phase-shifted fibre grating structure, has been further realised with highly desired operating characteristics, without the need for costly high frequency synthesizers and complex feedback controls. Lastly, a novel cavity mode condition monitoring and optimisation scheme for short-length, linear-cavity fibre lasers has been proposed and achieved. Based on the concept and simplicity of the superimposed fibre laser cavity structure, in conjunction with feedback controls, enhanced output performance from the fibre lasers has been achieved. The importance of such cavity mode assessment and feedback control for optimised fibre laser output performance has been illustrated.
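The fibre delay line filters discussed above have a transfer function that follows from summing the delayed, weighted taps. A minimal sketch of that relationship (tap weights and unit delay are illustrative, not design values from the thesis):

```python
import cmath

def delay_line_response(taps, freq_hz, delay_s):
    """Complex transfer function of a tapped fibre delay line filter:

        H(f) = sum_k taps[k] * exp(-j * 2*pi * f * k * delay_s)

    The response is periodic in f with free spectral range 1/delay_s;
    the tap weights shape the passband (e.g. flat-top designs).
    """
    return sum(a * cmath.exp(-2j * cmath.pi * freq_hz * k * delay_s)
               for k, a in enumerate(taps))
```

For four uniform taps the magnitude peaks at multiples of the free spectral range and nulls at quarter-FSR offsets, which is the basic mechanism that tap-weight design then refines into flat-top or complementary responses.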
Spatial pattern analysis of beta-amyloid (Aβ) deposits in Alzheimer disease by linear regression
Abstract:
The spatial patterns of discrete beta-amyloid (Aβ) deposits in brain tissue from patients with Alzheimer disease (AD) were studied using a statistical method based on linear regression, the results being compared with the more conventional variance/mean (V/M) method. Both methods suggested that Aβ deposits occurred in clusters (400 to <12,800 μm in diameter) in all but 1 of the 42 tissues examined. In many tissues, a regular periodicity of the Aβ deposit clusters parallel to the tissue boundary was observed. In 23 of 42 (55%) tissues, the two methods revealed essentially the same spatial patterns of Aβ deposits; in 15 of 42 (36%), the regression method indicated the presence of clusters at a scale not revealed by the V/M method; and in 4 of 42 (9%), there was no agreement between the two methods. Perceived advantages of the regression method are that there is a greater probability of detecting clustering at multiple scales, the dimension of larger Aβ clusters can be estimated more accurately, and the spacing between the clusters may be estimated. However, both methods may be useful, with the regression method providing greater resolution and the V/M method providing greater simplicity and ease of interpretation. Estimates of the distance between regularly spaced Aβ clusters were in the range 2,200-11,800 μm, depending on tissue and cluster size. The regular periodicity of Aβ deposit clusters in many tissues would be consistent with their development in relation to clusters of neurons that give rise to specific neuronal projections.
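The variance/mean (V/M) method referred to above can be illustrated compactly: deposit counts in contiguous field units give a V/M ratio near 1 for a random (Poisson) pattern, above 1 for clustering, and below 1 for regular spacing, and adjacent units are combined into larger blocks to probe clustering at increasing scales. A minimal sketch (function names are illustrative):

```python
from statistics import mean, variance

def variance_mean_ratio(counts):
    """Index of dispersion (V/M) for deposit counts in contiguous field
    units: ~1 for a random (Poisson) pattern, > 1 for clustering,
    < 1 for a regular distribution of deposits."""
    return variance(counts) / mean(counts)

def combine_units(counts, block):
    """Sum adjacent field units into larger blocks so clustering can be
    assessed at increasing spatial scales (an incomplete trailing block
    is dropped)."""
    return [sum(counts[i:i + block])
            for i in range(0, len(counts) - block + 1, block)]
```

Repeating the ratio at successive block sizes is what lets the method estimate the scale at which deposits cluster; the regression method in the abstract refines this by fitting the count-versus-scale relationship directly.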
Abstract:
A modified polarization spectroscopy method was applied to determine the angular momenta of autoionizing states of Pu in multistep resonance ionization processes. In comparison with the known method, ours does not require circular polarization at all; only linear polarizations are needed. This simplicity was achieved using a three-dimensional excitation geometry. Angular momenta of nine new autoionizing <sup>242</sup>Pu states were determined. The suggested method could be applied to improve efficiency in multistep RIMS applications, as well as for odd-even isotope separation for elements with a J = 0 ground state (Pu, Yb, Sm, etc.).
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, resembling the ranges employed in practice under stable and efficient operation. Data were collected at steady state using adequate sampling techniques for the dispersed and continuous phases, as well as during column transients, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and model-predicted concentration profiles at steady state using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured.
A very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) system as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capability and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops rotor speed-raffinate concentration and solvent flowrate-extract concentration showed weak interaction.
Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and constraints on the input and output variables.
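The RGA-based pairing analysis mentioned above is straightforward to compute for a 2x2 process such as this column. A minimal sketch, using a hypothetical steady-state gain matrix rather than the thesis's identified model:

```python
def rga_2x2(g):
    """Relative Gain Array of a 2x2 steady-state gain matrix
    g = [[g11, g12], [g21, g22]].

    lambda11 = g11*g22 / det(G); the remaining elements follow because
    every row and column of the RGA sums to 1. Pairings whose relative
    gain is close to 1 interact weakly.
    """
    (g11, g12), (g21, g22) = g
    l11 = g11 * g22 / (g11 * g22 - g12 * g21)
    return [[l11, 1.0 - l11], [1.0 - l11, l11]]
```

A diagonal relative gain near 1 supports the diagonal pairing (first manipulated variable with first controlled variable), which is the kind of weak-interaction result the abstract reports for the rotor speed-raffinate and solvent flowrate-extract loops.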
Abstract:
Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example by finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data, and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important in the high-dimensional, sparse datasets typical of geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that by including prior structural information it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing-data imputation results by 3 to 13%.
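The block-structured correlation matrix mentioned above can be illustrated simply: variables within a block share a common pairwise correlation, and variables in different blocks are uncorrelated. A minimal sketch of how such a prior might be constructed (this is not the paper's GTM implementation; the block sizes and correlation value are illustrative):

```python
def block_correlation(block_sizes, rho):
    """Correlation matrix with uncorrelated blocks: variables within a
    block share pairwise correlation rho, variables in different blocks
    have correlation 0, and the diagonal is 1. Returns a nested list."""
    n = sum(block_sizes)
    mat = [[0.0] * n for _ in range(n)]
    start = 0
    for size in block_sizes:
        # Fill this block's square with rho off-diagonal, 1 on-diagonal.
        for i in range(start, start + size):
            for j in range(start, start + size):
                mat[i][j] = 1.0 if i == j else rho
        start += size
    return mat
```

Encoding the variable groupings this way is what lets strongly correlated variables be modelled jointly rather than independently, which is the structural prior the paper exploits.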
Abstract:
The subject of this thesis is the n-tuple network (RAMnet). The major advantages of RAMnets are their speed and the simplicity with which they can be implemented in parallel hardware. On the other hand, the method is not a universal approximator and the training procedure does not involve the minimisation of a cost function. Hence RAMnets are potentially sub-optimal. It is important to understand the source of this sub-optimality and to develop the analytical tools that allow us to quantify the generalisation cost of using this model for any given data. We view RAMnets as classifiers and function approximators and try to determine how critical their lack of universality and optimality is. In order to better understand the inherent restrictions of the model, we review RAMnets, showing their relationship to a number of well-established general models such as Associative Memories, Kanerva's Sparse Distributed Memory, Radial Basis Functions, General Regression Networks and Bayesian Classifiers. We then benchmark the binary RAMnet model against 23 other algorithms using real-world data from the StatLog Project. This large-scale experimental study indicates that RAMnets are often capable of delivering results which are competitive with those obtained by more sophisticated, computationally expensive models. The Frequency Weighted version is also benchmarked and shown to perform worse than the binary RAMnet for large values of the tuple size n. We demonstrate that the main issue in Frequency Weighted RAMnets is adequate probability estimation, and propose Good-Turing estimates in place of the more commonly used Maximum Likelihood estimates. Having established the viability of the method numerically, we focus on providing an analytical framework that allows us to quantify the generalisation cost of RAMnets for a given dataset. For the classification network we provide a semi-quantitative argument based on the notion of tuple distance.
It gives a good indication of whether the network will fail for the given data. A rigorous Bayesian framework with Gaussian process prior assumptions is given for the regression n-tuple net. We show how to calculate the generalisation cost of this net and verify the results numerically for one dimensional noisy interpolation problems. We conclude that the n-tuple method of classification based on memorisation of random features can be a powerful alternative to slower cost driven models. The speed of the method is at the expense of its optimality. RAMnets will fail for certain datasets but the cases when they do so are relatively easy to determine with the analytical tools we provide.
Abstract:
A detailed literature survey confirmed cold roll-forming to be a complex and little-understood process. In spite of its growing value, the process remains largely un-automated, with few principles used in the set-up of the rolling mill. This work concentrates on experimental investigations of operating conditions in order to gain a scientific understanding of the process. The operating conditions are: inter-pass distance, roll load, roll speed and horizontal roll alignment. Fifty tests were carried out under varied operating conditions, measuring section quality and longitudinal straining to give a picture of bending. A channel section was chosen for its simplicity and compatibility with previous work. Quality was measured in terms of vertical bow, twist and cross-sectional geometric accuracy, and a complete method of classifying quality has been devised. The longitudinal strain profile was recorded by strain gauges attached to the strip surface at five locations. Parameter control is shown to be important in allowing consistency in section quality. At present, rolling mills are constructed with large tolerances on operating conditions. By reducing the variability in parameters, section consistency is maintained and mill down-time is reduced. Roll load, alignment and differential roll speed are all shown to affect quality, and can be used to control it. Set-up time is reduced by improving the design of the mill so that parameter values can be measured and set without the need for judgment by eye. Values of parameters can be guided by models of the process, although elements of experience are still unavoidable. Despite increased parameter control, section quality remains variable, if only due to variability in strip material properties. Parameters must therefore be changed during rolling. Ideally this can take place by closed-loop feedback control. Future work lies in overcoming the problems connected with this control.
Abstract:
The compaction behaviour of powders with soft and hard components is of particular interest to the paint processing industry. Unfortunately, at the present time, very little is known about the internal mechanisms within such systems and therefore suitable tests are required to help in the interpretative process. The TRUBAL, Distinct Element Method (D.E.M.) program was the method of investigation used in this study. Steel (hard) and rubber (soft) particles were used in the randomly-generated, binary assemblies because they provided a sharp contrast in physical properties. For reasons of simplicity, isotropic compression of two-dimensional assemblies was also initially considered. The assemblies were first subject to quasi-static compaction, in order to define their behaviour under equilibrium conditions. The stress-strain behaviour of the assemblies under such conditions was found to be adequately described by a second-order polynomial expansion. The structural evolution of the simulation assemblies was also similar to that observed for real powder systems. Further simulation tests were carried out to investigate the effects of particle size on the compaction behaviour of the two-dimensional, binary assemblies. Later work focused on the quasi-static compaction behaviour of three-dimensional assemblies, because they represented more realistic particle systems. The compaction behaviour of the assemblies during the simulation experiments was considered in terms of percolation theory concepts, as well as more familiar macroscopic and microstructural parameters. Percolation theory, which is based on ideas from statistical physics, has been found to be useful in the interpretation of the mechanical behaviour of simple, elastic lattices. However, from the evidence of this study, percolation theory is also able to offer a useful insight into the compaction behaviour of more realistic particle assemblies.
Abstract:
The thesis aims to define further the biometric correlates in anisometropic eyes in order to provide a structural foundation for propositions concerning the development of ametropia. Biometric data are presented for 40 anisometropes and 40 isometropic controls drawn from Caucasian and Chinese populations. The principal finding was that the main structural correlate of myopia is an increase in axial rather than equatorial dimensions of the posterior globe. This finding has not been previously reported for in vivo work on humans. The computational method described in the thesis is a more accessible method for determination of eye shape than current imaging techniques such as magnetic resonance imaging or laser Doppler interferometry (LDI). Retinal contours derived from LDI and computation were shown to be closely matched. Corneal topography revealed no differences in corneal characteristics in anisometropic eyes, which supports the finding that anisometropia arises from differences in vitreous chamber depth. The corollary to axial expansion in myopia, that is retinal stretch in central regions of the posterior pole, was investigated by measurement of disc-to-fovea distances (DFD) using a scanning laser ophthalmoscope. DFD was found to increase with increased myopia, which demonstrates the primary contribution made by posterior central regions of the globe to axial expansion. The ocular pulse volume and choroidal blood flow, measured with the Ocular Blood Flow Tonograph, were found to be reduced in myopia; the reductions were found to be significantly correlated with vitreous chamber depth.
The thesis includes preliminary data on whether the relationship arises from the influx of a blood bolus into eyes of different posterior volumes or represents actual differences in choroidal blood flow. The results presented in this thesis show the utility of computed retinal contour and demonstrate that the structural correlate of myopia is axial rather than equatorial expansion of the vitreous chamber. The technique is suitable for large population studies and its relative simplicity makes it feasible for longitudinal studies on the development of ametropia in, for example, children.
Abstract:
Many workers have studied the ocular components of eyes exhibiting differing amounts of central refractive error, but few have considered the additional information that could be derived from a study of peripheral refraction. Until now, peripheral refraction has either been measured in real eyes or modelled in schematic eyes of varying levels of sophistication. Several differences occur between measured and modelled results which, if accounted for, could yield more information regarding the nature of the optical and retinal surfaces and their asymmetries. Measurements of ocular components and peripheral refraction, however, have never been made in the same sample of eyes. In this study, ocular component and peripheral refractive measurements were made in a sample of young near-emmetropic, myopic and hyperopic eyes. The data for each refractive group were averaged. A computer program was written to construct spherical-surfaced schematic eyes from these data. More sophisticated eye models were developed making use of a linear algebraic ray-tracing program. This method allowed rays to be traced through toroidal aspheric surfaces which were translated or rotated with respect to each other. For simplicity, the gradient-index optical nature of the crystalline lens was neglected. Various alterations were made in these eye models to reproduce the measured peripheral refractive patterns. Excellent agreement was found between the modelled and measured peripheral refractive values over the central 70° of the visual field. This implied that the additional biometric features incorporated in each eye model were representative of those present in the measured eyes. As some of these features are not otherwise obtainable using in vivo techniques, it is proposed that the variation of refraction in the periphery offers a very useful optical method for studying human ocular component dimensions.