13 results for Innovative monitoring techniques
in Aston University Research Archive
Abstract:
There is currently no ideal system for studying nasal drug delivery in vitro. Existing techniques, such as the Ussing chamber and cell culture, all have major disadvantages. Most importantly, none of the existing techniques accurately represents the interior of the nasal cavity, with its airflow and humidity; neither do they allow the investigation of solid dosage forms. The work in this thesis represents the development of an in vitro model system in which the interior characteristics of the nasal cavity are closely represented, and solid or minimal-volume dosage forms can be investigated. The complete nasal chamber consists of two sections: a lower tissue viability chamber and an upper nasal chamber. The lower tissue viability chamber has been shown, using existing tissue viability monitoring techniques, to maintain the viability of a number of epithelial tissues, including porcine and rabbit nasal tissue, and rat ileal and Peyer's patch tissue. The complete chamber, including the upper nasal chamber, has been shown to provide tissue viability for porcine and rabbit nasal tissue above that available using the existing Ussing chamber techniques. Adaptation of the complete system, and the development of the necessary experimental protocols for aerosol particle sizing together with videography, has shown that the new factors investigated, humidity and airflow, have a measurable effect on the delivered dose from a typical nasal pump. Similarly, adaptation of the chamber to fit under a confocal microscope, and the development of the necessary protocols, has shown the effect of surface and size on the penetration of microparticulate materials into nasal epithelial tissues. The system developed in this thesis has been shown to be flexible, allowing the development of the confocal and particle-sizing systems. For future nasal drug delivery studies, the ability to measure such factors as the size of the delivered system in the nasal cavity and the depth of penetration of the formulation into the tissue is essential. Additionally, having access to other data, such as drug transport measurements in the same system, and having the tissue available for histological examination, represents a significant advance in the usefulness of such an in vitro technique for nasal delivery.
Abstract:
A study of the information available on the settlement characteristics of backfill in restored opencast coal mining sites and other similar earthworks projects has been undertaken. In addition, the methods of opencast mining, compaction controls, monitoring and test methods have been reviewed. To consider and develop methods of predicting the settlement of fill, three sites in the West Midlands have been examined; at each, the backfill had been placed in a controlled manner. In addition, use has been made of a finite element computer program to compare a simple two-dimensional linear elastic analysis with field observations of surface settlements in the vicinity of buried highwalls. On controlled backfill sites, settlement predictions have been accurately made, based on a linear relationship between settlement (expressed as a percentage of fill height) and the logarithm of time. This 'creep' settlement was found to be effectively complete within 18 months of restoration. A decrease in this percentage settlement was observed with increasing fill thickness; this is believed to be related to the speed with which the backfill is placed. A rising water table within the backfill is indicated to cause additional gradual settlement. A prediction method, based on settlement monitoring, has been developed and used to determine the pattern of settlement across highwalls and buried highwalls. The zone of appreciable differential settlement was found to be mainly limited to the highwall area, its magnitude dictated by the highwall inclination. With a backfill cover of about 15 metres over a buried highwall the magnitude of differential settlement was negligible. Use has been made of the proposed settlement prediction method and of monitoring to control the re-development of restored opencast sites. The specifications, tests and monitoring techniques developed in recent years have been used to aid this. Such techniques have been valuable in restoring land previously derelict due to past underground mining.
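The reported linear relationship between percentage settlement and the logarithm of time lends itself to a simple regression-based prediction. A minimal sketch is given below, assuming hypothetical monitoring records rather than data from the thesis.

```python
import numpy as np

# Minimal sketch of a creep-settlement prediction: percentage settlement is
# modelled as linear in log10(time since restoration), as described above.
# The monitoring records below are hypothetical.
t_months = np.array([1, 2, 4, 6, 9, 12])                     # time since restoration (months)
s_percent = np.array([0.10, 0.19, 0.27, 0.32, 0.37, 0.40])   # settlement as % of fill height

# Fit s = a + b * log10(t) to the monitoring records
b, a = np.polyfit(np.log10(t_months), s_percent, 1)

# Extrapolate to 18 months, by which time creep settlement was found to be
# effectively complete
s_18 = a + b * np.log10(18)
print(f"predicted settlement at 18 months: {s_18:.2f}% of fill height")
```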
Abstract:
The economic and efficient exploitation of composite materials in critical load bearing applications relies on the ability to predict safe operational lives without excessive conservatism. Developing life prediction and monitoring techniques in these complex, inhomogeneous materials requires an understanding of the various failure mechanisms which can take place. This article describes a range of damage mechanisms which are observed in polymer, metal and ceramic matrix composites.
Abstract:
Non-intrusive monitoring of the health state of induction machines within industrial processes and harsh environments poses a technical challenge. In the field, winding failures are a major fault, accounting for over 45% of total machine failures. In the literature, many condition monitoring techniques based on different failure mechanisms and fault indicators have been developed, among which machine current signature analysis (MCSA) is currently a very popular and effective method. However, it is extremely difficult to distinguish different types of failure, and hard to obtain local information, if a non-intrusive method is adopted. Typically, some sensors need to be installed inside the machines for collecting key information, which disrupts machine operation and adds cost. This paper presents a new non-invasive monitoring method based on giant magnetoresistance (GMR) sensors to measure stray flux leaked from the machines. The focus is on the influence of potential winding failures on the stray magnetic flux in induction machines. Finite element analysis and experimental tests on a 1.5-kW machine are presented to validate the proposed method. With time-frequency spectrogram analysis, the method is shown to be effective in detecting several winding faults from the stray-flux information. The novelty lies in the implementation of GMR sensing and its use in the analysis of machine faults.
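As an illustration of the time-frequency analysis step, the sketch below computes a spectrogram of a synthetic stray-flux signal; the 50 Hz fundamental, the assumed 75 Hz fault sideband and all amplitudes are invented for demonstration and are not taken from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative time-frequency analysis of a stray-flux signal (synthetic data).
# A 50 Hz fundamental and a hypothetical fault-related sideband are combined
# purely to show how a spectrogram can expose winding-fault signatures.
fs = 10_000                                        # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)                      # 2 s record
flux = np.sin(2 * np.pi * 50 * t)                  # fundamental component
flux += 0.05 * np.sin(2 * np.pi * 75 * t)          # assumed fault sideband
flux += 0.01 * np.random.randn(t.size)             # measurement noise

f, tt, Sxx = spectrogram(flux, fs=fs, nperseg=2048, noverlap=1024)

# Inspect the band around the assumed sideband frequency
band = (f > 60) & (f < 90)
print("peak sideband power:", Sxx[band].max())
```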
Abstract:
Technological innovation has been widely studied; however, surprisingly little is known about the experience of managing the process. Most reports tend to be generalistic and/or prescriptive, whereas it is argued that multiple sources of variation in the process limit the value of these. A description of the innovation process is given, together with a presentation of what is known from existing studies. Gaps identified in this area suggest that a variety of organisational influences are important, and an attempt is made to identify some of these at individual, group and organisational level. A simple system model of the innovation management process is developed. Further investigation of the influence of these factors was made possible through an extended on-site case study. The methodology for this, based upon participant observation coupled with a wide and flexible range of techniques, is described. Evidence is presented about many aspects of the innovation process from a number of different levels and perspectives: the aim is to demonstrate the extent to which variation due to contingent influences takes place. It is argued that the problems identified all relate to the issue of integration. This theme is also developed from an analytical viewpoint, and it is suggested that the organisational response to increases in complexity in the external environment will be to match them with internal complexity. Differentiation of this kind will require extensive and flexible integration, especially in those inherently uncertain areas associated with innovation. Whilst traditionally a function of management, it is argued that integration needs have increased to the point where a new specialism is required. The concept of the integration specialist is developed from this analysis, and attempts at simple integrative change during the research are described. Finally, a strategy for integration - or rather for building in integrative capability - in the organisation studied is described.
Abstract:
The work presents a new method that combines plasma etching with extrinsic techniques to simultaneously measure matrix and surface protein and lipid deposits. The acronym for this technique is PEEMS - Plasma Etching and Emission Monitoring System. Previous work has identified the presence of proteinaceous and lipoidal deposition on the surface of contact lenses and highlighted the probability that penetration of these spoilants will occur. The technique developed here allows unambiguous identification of the depth of penetration of spoilants for various material types, and it is for this reason that the technique has been employed in this thesis. The technique is applied as a 'molecular' scalpel, removing known amounts of material from the target - in this case from both the anterior and posterior surfaces of a 'soft' contact lens. The residual material is then characterised by other analytical techniques, such as UV/visible and fluorescence spectroscopy. Several studies have been carried out on both in vivo and in vitro spoilt materials. The analysis and identification of absorbed protein and lipid on the substrate revealed the importance of many factors in the absorption and adsorption processes. The effects of material structure, protein nature (in terms of size, shape and charge) and environmental conditions were examined in order to determine the relative uptake of tear proteins. The studies were extended to real cases in order to study the patient-dependent factors and lipoidal penetration.
Abstract:
The case for monitoring large-scale sea level variability is established in the context of the estimation of the extent of anthropogenic climate change. Satellite altimeters are identified as having the potential to monitor this change with high resolution and accuracy. Possible sources of systematic errors and instabilities in these instruments, which would hinder the most accurate monitoring of such ocean signals, are examined. Techniques for employing tide gauges to combat such inaccuracies are proposed and developed. The tide gauge at Newhaven in Sussex is used in conjunction with the nearby satellite laser ranger and high-resolution ocean models to estimate the absolute bias of the TOPEX, Poseidon, ERS 1 and ERS 2 altimeters. The theory which underlies the augmentation of altimeter measurements with tide gauge data is developed. In order to apply this, the tide gauges of the World Ocean Circulation Experiment are assessed and their suitability for altimeter calibration is determined, yielding a reliable subset of these gauges. A method of intra-altimeter calibration is developed, using these tide gauges to remove the effect of variability over long time scales. In this way the long-term instability in the TOPEX range measurement is inferred and the drift arising from the on-board ultra-stable oscillator is detected. An extension to this work develops a method for inter-altimeter calibration, allowing the systematic differences between unconnected altimeters to be measured. This is applied to the TOPEX and ERS 1 altimeters.
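A minimal sketch of the intra-altimeter calibration idea, assuming synthetic altimeter and tide-gauge records and an invented drift rate: the common ocean signal cancels in the altimeter-minus-gauge difference, leaving the instrumental drift to be estimated by linear regression.

```python
import numpy as np

# Illustrative estimate of long-term altimeter drift from altimeter-minus-gauge
# differences. All data below are synthetic; the 2 mm/yr drift is assumed.
t_years = np.linspace(0, 5, 60)                       # monthly samples over 5 years
ocean = 0.08 * np.sin(2 * np.pi * t_years)            # common ocean signal (m)
gauge = ocean + 0.01 * np.random.randn(t_years.size)  # tide-gauge sea level (m)
altimeter = ocean + 0.002 * t_years + 0.01 * np.random.randn(t_years.size)  # drifting altimeter

# The ocean signal cancels in the difference; fit a linear trend to what remains
drift_rate, offset = np.polyfit(t_years, altimeter - gauge, 1)
print(f"estimated drift: {drift_rate * 1000:.2f} mm/yr")
```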
Abstract:
An initial aim of this project was to evaluate the conventional techniques used in the analysis of newly prepared, environmentally friendly water-borne automotive coatings and compare them with solvent-borne coatings having comparable formulations. The investigation was carried out on microtomed layers as well as on complete automotive multi-layer paint systems. Methods used included the very traditional measures of gloss and hardness and the commonly used photo-oxidation index (from FTIR spectral analysis). All methods enabled the durability to weathering of the automotive coatings to be initially investigated. However, a primary aim of this work was to develop methods for analysing the early stages of chemical and property changes in both the solvent-borne and water-borne coating systems that take place during outdoor natural weathering exposures and under accelerated artificial exposures. This was achieved by using dynamic mechanical analysis (DMA), in tension mode on the microtomed films (at all depths of the coating systems, from the uppermost clear-coat right down to the electro-coat) and in bending mode on the full (unmicrotomed) systems, as well as MALDI-ToF analysis of the movement of the stabilisers in the full systems. Changes in glass transition temperature and relative cross-link density were determined after weathering, and these were related to changes in the chemistries of the binder systems of the coatings after weathering. Concentration profiles of the UV stabilisers (UVA and HALS) in the coating systems were analysed as a consequence of migration, using separate microtomed layers of the paint samples (depth profiling) after weathering, and diffusion coefficients and solubility parameters were determined for the UV stabilisers in the coating systems. The methods developed were used to determine the various physical and chemical changes that take place during weathering (photo-oxidation) of the different (water-borne and solvent-borne) systems. The solvent-borne formulations showed smaller changes after weathering (both natural and accelerated) than the corresponding water-borne formulations, due to the lower level of cross-links in the binders of the water-borne systems. The silver systems examined were more durable than the blue systems due to the reflecting power of the aluminium and the lower temperature of the silver coatings.
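For reference, a photo-oxidation index of the kind mentioned above is typically computed as a ratio of FTIR band areas; the sketch below assumes a synthetic absorbance spectrum and illustrative band limits, neither of which is taken from this work.

```python
import numpy as np

# Illustrative photo-oxidation index from an FTIR absorbance spectrum: the
# integrated carbonyl band divided by an internal reference band. The spectrum
# and the band limits are assumed purely for illustration.
wavenumber = np.linspace(400, 4000, 3601)                      # cm^-1
absorbance = 0.4 * np.exp(-((wavenumber - 1720) / 30) ** 2)    # synthetic carbonyl band
absorbance += 1.0 * np.exp(-((wavenumber - 2930) / 40) ** 2)   # synthetic C-H reference band

def band_area(lo, hi):
    # Integrate the absorbance over a wavenumber window
    mask = (wavenumber >= lo) & (wavenumber <= hi)
    return np.trapz(absorbance[mask], wavenumber[mask])

photo_oxidation_index = band_area(1650, 1800) / band_area(2800, 3000)
print(f"photo-oxidation index: {photo_oxidation_index:.3f}")
```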
Abstract:
This thesis presents an experimental investigation of different effects and techniques that can be used to upgrade legacy WDM communication systems. The main issue in upgrading legacy systems is that the fundamental setup, including component settings such as EDFA gains, cannot be altered; the improvement must therefore be carried out at the network terminal. A general introduction to optical fibre communications is given at the beginning, including optical communication components and system impairments. Experimental techniques for performing laboratory optical transmission experiments are presented before the experimental work of this thesis. These techniques include optical transmitter and receiver designs as well as the design and operation of the recirculating loop. The main experimental work includes three different studies. The first study involves the development of line monitoring equipment that can be reliably used to monitor the performance of optically amplified long-haul undersea systems. This equipment can provide instant location of faults along the legacy communication link, which in turn enables rapid repair to be performed, hence upgrading the legacy system. The second study investigates the effect of changing the number of transmitted 1s and 0s on the performance of a WDM system. This effect can, in reality, be seen in some coding schemes, e.g. forward-error correction (FEC) techniques, where the proportion of 1s and 0s is changed at the transmitter by adding extra bits to the original bit sequence. The final study presents transmission results after all-optical format conversion from NRZ to CSRZ and from RZ to CSRZ using a semiconductor optical amplifier in a nonlinear optical loop mirror (SOA-NOLM). This study is mainly motivated by the fact that all-optical processing, including format conversion, has become attractive for future data networks that are proposed to be all-optical. The feasibility of the SOA-NOLM device for converting single and WDM signals is described. The optical conversion bandwidth and its limitations for WDM conversion are also investigated. All studies in this thesis employ 10 Gbit/s single-channel or WDM signals transmitted over a dispersion-managed fibre span in the recirculating loop. The fibre span is composed of single-mode fibres (SMF) whose losses and dispersion are compensated using erbium-doped fibre amplifiers (EDFAs) and dispersion compensating fibres (DCFs), respectively. Different configurations of the fibre span are presented in different parts of the thesis.
Abstract:
Reactive, but not a reactant. Heterogeneous catalysts play an unseen role in many of today's processes and products. With the increasing emphasis on sustainability in both products and processes, this handbook is the first to combine the hot topics of heterogeneous catalysis and clean technology. It focuses on the development of heterogeneous catalysts for use in clean chemical synthesis, dealing with how modern spectroscopic techniques can aid the design of catalysts for use in liquid phase reactions, their application in industrially important chemistries - including selective oxidation, hydrogenation, solid acid- and base-catalyzed processes - as well as the role of process intensification and use of renewable resources in improving the sustainability of chemical processes. With its emphasis on applications, this book is of high interest to those working in the industry.
Abstract:
In recent years, we have witnessed the mushrooming of pro-democracy and protest movements not only in the Arab world, but also within Europe and the Americas. Such movements have ranged from popular upheavals, as in Tunisia and Egypt, to the organization of large-scale demonstrations against unpopular policies, as in Spain, Greece and Poland. What connects these different events are not only their democratic aspirations, but also their innovative forms of communication and organization through online means, which are sometimes considered to be outside of the State's control. At the same time, however, it has become more and more apparent that countries are attempting to increase their understanding of, and control over, their citizens' actions in the digital sphere. This involves striving to develop surveillance instruments, control mechanisms and processes engineered to dominate the digital public sphere, which necessitates the assistance and support of private actors such as Internet intermediaries. Examples include the growing use of Internet surveillance technology with which online data traffic is analysed, and the extensive monitoring of social networks. Despite increased media attention, academic debate on the ambivalence of these technologies, mechanisms and techniques remains relatively limited, as is discussion of the involvement of corporate actors. The purpose of this edited volume is to reflect on how Internet-related technologies, mechanisms and techniques may be used as a means to enable expression, but also to restrict speech, manipulate public debate and govern global populaces.
Abstract:
Liquid-level sensing technologies have attracted great prominence, because such measurements are essential to industrial applications such as fuel storage, flood warning and the biochemical industry. Traditional liquid level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages, such as high accuracy, cost-effectiveness, compact size and ease of multiplexing, several optical fiber liquid level sensors have been investigated, based on different operating principles such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure, while sensors below the surface will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression of the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach. First, the use of linear regression over multiple sensors is inherently more accurate than using a single pressure reading to estimate depth. Second, common-mode temperature-induced wavelength shifts in the individual sensors are automatically compensated. Third, temperature-induced changes in the sensor pressure sensitivity are also compensated. Fourth, the approach provides the possibility to detect and compensate for malfunctioning sensors. Finally, the system is immune to changes in the density of the monitored fluid and even to changes in the effective force of gravity, as might be obtained in an aerospace application. The performance of an individual sensor was characterized and displays a sensitivity of 54 pm/cm, more than a factor of two higher than that of a sensor head configuration based on a silica FBG published in the literature, a consequence of the much lower elastic modulus of POF. Furthermore, the temperature/humidity behavior and measurement resolution were also studied in detail. The proposed configuration also displays a highly linear response, high resolution and good repeatability. The results suggest the new configuration can be a useful tool in many different applications, such as aircraft fuel monitoring, and biochemical and environmental sensing, where accuracy and stability are fundamental.
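A minimal sketch of the multi-sensor regression described above, assuming synthetic sensor heights, pressure readings and ambient-pressure threshold: the sub-surface sensors are identified from their above-ambient readings, and the level follows from where the fitted line meets ambient pressure.

```python
import numpy as np

# Illustrative liquid-level estimation from an array of pressure sensors at
# known heights. The readings, heights and tolerance below are synthetic.
heights_cm = np.array([0, 20, 40, 60, 80, 100])        # sensor heights above the container bottom
readings_kpa = np.array([107.8, 105.9, 103.9, 102.0, 101.3, 101.3])
ambient_kpa = 101.3

# Sensors reading above ambient (with a small tolerance) are below the surface
submerged = readings_kpa > ambient_kpa + 0.1
slope, intercept = np.polyfit(heights_cm[submerged], readings_kpa[submerged], 1)

# The liquid surface lies where the regression line meets ambient pressure
level_cm = (ambient_kpa - intercept) / slope
print(f"estimated liquid level: {level_cm:.1f} cm")
```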