14 results for Force balance system

in Aston University Research Archive


Relevance: 30.00%

Abstract:

A structured approach to process improvement is described in the context of the human resources division of a UK police force. The approach combines a number of established process improvement techniques, such as the balanced scorecard and process mapping, with a scoring system developed to prioritise processes for improvement. The methodology presents one way of ensuring that the correct processes are identified and redesigned at an operational level in a way that supports the organisation's strategic aims. In addition, a performance measurement system is used to check that the implemented changes actually achieve the desired effect over time. The case demonstrates the need to choose, and in some cases develop in-house, tools and techniques appropriate to the context of the process improvement effort.
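The prioritisation step can be illustrated with a toy weighted-scoring sketch. The criteria, weights and scores below are invented for illustration; the abstract does not publish the scoring model actually developed for the police force.

```python
# Hypothetical sketch of a weighted scoring system for prioritising
# processes for improvement (criteria and weights are illustrative).
def prioritise(processes, weights):
    """Rank processes by weighted score, highest first."""
    def score(p):
        return sum(weights[c] * p["scores"][c] for c in weights)
    return sorted(processes, key=score, reverse=True)

weights = {"strategic_fit": 0.5, "performance_gap": 0.3, "feasibility": 0.2}
processes = [
    {"name": "recruitment",
     "scores": {"strategic_fit": 4, "performance_gap": 5, "feasibility": 3}},
    {"name": "payroll",
     "scores": {"strategic_fit": 2, "performance_gap": 2, "feasibility": 5}},
]
ranked = prioritise(processes, weights)
print([p["name"] for p in ranked])  # ['recruitment', 'payroll']
```

Recruitment scores 4.1 against payroll's 2.6, so it is redesigned first; in practice the weights would be derived from the organisation's strategic aims.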

Relevance: 30.00%

Abstract:

Issues of wear and tribology are increasingly important in computer hard drives as slider flying heights become lower and disk protective coatings thinner to minimise spacing loss and allow higher areal density. Friction, stiction and wear between the slider and disk in a hard drive were studied using Accelerated Friction Test (AFT) apparatus. Contact Start Stop (CSS) and constant-speed drag tests were performed using commercial rigid disks and two different air bearing slider types. Friction and stiction were captured during testing by a set of strain gauges. System parameters were varied to investigate their effect on tribology at the head/disk interface. The chosen parameters were disk spinning velocity, slider fly height, temperature, humidity and intercycle pause. The effect of different disk texturing methods was also studied. Models were proposed to explain the influence of these parameters on tribology. Atomic Force Microscopy (AFM) and Scanning Electron Microscopy (SEM) were used to study head and disk topography at various test stages and to provide physical parameters to verify the models. X-ray Photoelectron Spectroscopy (XPS) was employed to identify surface composition and determine whether any chemical changes had occurred as a result of testing. The parameters most likely to influence the interface were identified for both CSS and drag testing. Neural network modelling was used to substantiate the results. Topographical AFM scans of disk and slider were exported numerically to file and explored extensively, and techniques were developed which improved line and area analysis. A method for detecting surface contacts was also deduced; its results supported and explained the observed AFT behaviour. Finally, surfaces were computer-generated to simulate real disk scans, allowing contact analysis to be performed on many types of surface. Conclusions were drawn about which disk characteristics most affected contacts and hence friction, stiction and wear.
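The contact-detection idea lends itself to a simple sketch: thresholding asperity heights along an AFM scan line against the nominal fly height. The heights and fly height below are synthetic, and this is only an illustration of the principle, not the method deduced in the thesis.

```python
# Illustrative sketch: flag likely slider/disk contacts where surface
# asperities reach or exceed the slider's nominal fly height.
def contact_points(profile, fly_height):
    """Return indices along the scan line where contact is likely."""
    return [i for i, h in enumerate(profile) if h >= fly_height]

# Heights in nm along one AFM scan line (synthetic data).
profile = [1.2, 3.8, 0.9, 5.1, 2.0, 4.9]
print(contact_points(profile, fly_height=4.5))  # [3, 5]
```

Counting and locating such contacts over a full area scan is one way the relationship between disk topography and friction/stiction could be explored.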

Relevance: 30.00%

Abstract:

The purpose of this investigation was to design a novel magnetic drive and bearing system for a new centrifugal rotary blood pump (CRBP). The drive system consists of two components: (i) permanent magnets within the impeller of the CRBP; and (ii) the driving electromagnets. Orientation of the magnets varies from axial through to 60° included out-lean (conical configuration). Permanent magnets replace the electromagnet drive to allow easier characterization. The performance characteristics tested were the axial force of attraction between the stator and rotor at angles of rotational alignment, φ, and the corresponding torque at those angles. The drive components were tested for various magnetic cone angles, θ. The test was repeated for three backing conditions: (i) non-backed; (ii) steel-cupped; and (iii) steel plate back-iron, performed on an Instron tensile testing machine. Experimental results were expanded upon through finite element and boundary element (BEM) analysis. The force/torque characteristics were maximal for a 12-magnet configuration at 0° cone angle with steel back-iron (axial force = 60 N, torque = 0.375 Nm). BEM showed how introducing a cone angle increases the radial restoring force threefold while not compromising axial bearing force. Magnets in the drive system may be orientated not only to provide adequate coupling to drive the CRBP, but also to provide significant axial and radial bearing forces capable of withstanding over 100 m/s² shock excitation on the impeller. Although the 12-magnet 0° (θ) configuration yielded the greatest force/torque characteristic, it was seen as potentially unattractive because this magnetic cone angle yielded poor radial restoring force characteristics.
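A toy first-harmonic model gives a feel for how coupling torque varies with rotational misalignment. The sinusoidal form and the pole-pair count are assumptions for illustration only; the study obtained its force/torque characteristics experimentally and via BEM, and only the 0.375 Nm peak is taken from the abstract.

```python
import math

# Toy model (assumed, not the study's): with n pole pairs, coupling
# torque varies sinusoidally with rotational misalignment phi and
# peaks a quarter of a pole pitch out of alignment.
def coupling_torque(phi_deg, n_pole_pairs=6, torque_peak=0.375):
    """Torque (Nm) at misalignment phi_deg, in this toy model."""
    return torque_peak * math.sin(math.radians(n_pole_pairs * phi_deg))

# With 6 pole pairs the model peaks at 90/6 = 15 degrees of misalignment.
print(round(coupling_torque(15.0), 3))  # 0.375
```

In such a model the torque is zero at perfect alignment and maximal partway to the next pole, which is qualitatively the behaviour measured when torque is recorded against alignment angle.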

Relevance: 30.00%

Abstract:

A Jeffcott rotor consists of a disc at the centre of an axle supported at its ends by bearings. A bolted Jeffcott rotor is formed by two discs, each with a shaft on one side, held together by spring-loaded bolts near the outer edge. When the rotor turns there is a tendency for the discs to separate on one side; this effect is more marked if the rotor is unbalanced, especially at resonance speeds. The equations of motion of the system have been developed with four degrees of freedom to include the rotor and bearing movements in the respective axes. These equations, which include non-linear terms caused by the rotor opening, are subject to external forces such as those from rotor imbalance. A simulation model based on these equations was created using SIMULINK. An experimental test rig was used to characterise the dynamic features. The rotor discs open at a lateral rotor displacement of 0.8 mm; this is the threshold value used to mark the change from high stiffness to low stiffness. The experimental results, which measure the vibration amplitude of the rotor, show the dynamic behaviour of the bolted rotor due to imbalance. Close agreement between the experimental and theoretical results from time histories, waterfall plots, pseudo-phase plots and rotor orbit plots indicated the validity of the model and the existence of the non-linear jump phenomenon.
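The piecewise-stiffness behaviour can be illustrated with a minimal one-degree-of-freedom sketch: stiffness drops when the joint opens beyond the 0.8 mm threshold. All parameter values here are invented for illustration; the thesis uses a four-degree-of-freedom SIMULINK model.

```python
import math

# Minimal one-DOF sketch (invented parameters): lateral stiffness
# switches from a high "closed" value to a low "open" value once the
# bolted joint opens beyond 0.8 mm, under imbalance-like forcing.
def simulate(k_closed=1.0e6, k_open=2.0e5, threshold=8e-4, m=10.0,
             c=50.0, force=200.0, omega=300.0, dt=1e-5, steps=200_000):
    """Integrate with semi-implicit Euler; return peak |displacement| (m)."""
    x, v, x_max = 0.0, 0.0, 0.0
    for i in range(steps):
        k = k_closed if abs(x) < threshold else k_open  # joint opens
        a = (force * math.sin(omega * i * dt) - c * v - k * x) / m
        v += a * dt
        x += v * dt
        x_max = max(x_max, abs(x))
    return x_max

# Forcing near the closed-joint resonance drives the rotor past the
# 0.8 mm opening threshold onto the low-stiffness branch.
print(simulate() > 8e-4)
```

Because the stiffness falls once the threshold is crossed, the response detunes from resonance, which is the kind of amplitude-dependent behaviour behind the non-linear jump phenomenon the thesis observes.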

Relevance: 30.00%

Abstract:

This thesis describes research carried out on the development of a novel hardwired tactile sensing system tailored for a next generation of surgical robotic and clinical devices, namely a steerable endoscope with tactile feedback and a surface plate for patient posture and balance. Two case studies are examined. The first is a one-dimensional sensor for the steerable endoscope retrieving shape and 'touch' information. The second is a two-dimensional surface which interprets the three-dimensional motion of a contacting moving load. This research can be used to retrieve information from a distributive tactile sensing surface of a different configuration, and can interpret dynamic and static disturbances. This novel approach to sensing has the potential to discriminate contact and palpation in minimally invasive surgery (MIS) tools, and posture and balance in patients. The hardwired technology uses an embedded system based on Field Programmable Gate Arrays (FPGAs) as the platform to perform the sensory signal processing in real time. High-speed, robust operation is an advantage of this system, enabling versatile applications involving dynamic real-time interpretation as described in this research. The sensory signal processing uses neural networks to derive information from the input patterns produced by the contacting surface. Three neural network architectures, namely single, multiple and cascaded, were introduced in an attempt to find the optimum solution for discriminating the contacting outputs. These architectures were modelled and implemented on the FPGA. With the recent introduction of modern digital design flows and synthesis tools that essentially take a high-level sensory processing behaviour specification for a design, fast prototyping of the neural network function can be achieved easily. This thesis outlines the challenges of the implementations and the verification of their performance.
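A minimal sketch of the single-network case shows the kind of mapping involved: a small feed-forward network takes a pattern of sensor outputs and produces a contact score. The weights and sizes below are made up; the actual networks were trained for the application and implemented on the FPGA.

```python
import math

# Illustrative single-architecture sketch (invented weights): a tiny
# feed-forward network maps a distributive-sensor input pattern to a
# scalar contact score.
def forward(inputs, w_hidden, w_out):
    """One hidden tanh layer followed by a linear output unit."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

# Four sensing points -> two hidden units -> one output score.
pattern = [0.1, 0.8, 0.7, 0.2]  # normalised sensor outputs
w_hidden = [[0.5, 1.0, -0.3, 0.2], [-0.4, 0.6, 0.9, -0.1]]
w_out = [1.2, -0.7]
print(round(forward(pattern, w_hidden, w_out), 3))
```

The multiple and cascaded architectures described in the thesis would chain or parallelise several such networks; the forward pass above is the unit each is built from, and its multiply-accumulate structure is what makes FPGA implementation attractive.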

Relevance: 30.00%

Abstract:

The world food crisis, Britain's reliance on imported food and feedstuffs, and balance of payments difficulties were some of the factors which lent weight to the call for increased self-sufficiency in Britain's agriculture in the 1970s. This project considers two main areas: an investigation of the impact on the balance of payments of radical agricultural change designed to increase self-sufficiency; and an appraisal of the potential role of the food industry within a radically different food system. The study proceeded by: an examination of the principles of agricultural policy and its development in Britain; an overview of the mechanism and meaning of the balance of payments; a consideration of the debate on agricultural import saving; the construction of radical agricultural strategies; the estimation of the effects of the strategies, particularly on the balance of payments; the role of the food industry and possible innovations within the strategies; a case study of textured vegetable proteins; and the wider implications of implementing radical agricultural alternatives. Two strategies were considered: a vegan system, involving no livestock; and an intermediate system, including some livestock and dairy cattle. The study concludes that although agricultural change could in principle make a contribution to the balance of payments, implementation of agricultural change cannot be justified for this purpose alone. First, balance of payments problems can be solved by more appropriate methods. Second, the UK's balance of payments problem has disappeared for the time being owing to North Sea oil and economic recession. Third, the political and social consequences of the changes investigated would be unacceptable. Progress in UK food policy is likely to be in the form of an integrated food and health policy.

Relevance: 30.00%

Abstract:

The research developed in this thesis explores the sensing and inference of human movement in a dynamic way, as opposed to conventional measurement systems, which are only concerned with discrete evaluations of stimuli at sequential points in time. Typically, conventional approaches such as vision and motion tracking devices are used to infer the dynamic movement of the body, with either a human diagnosis or a complex image processing algorithm to classify the movement. This research is therefore the first of its kind to attempt to provide a movement-classifying algorithm through the use of minimal sensing points; the application for this novel system is to classify human movement during a golf swing. There are two main categories of force sensing. First, array-type systems consist of many sensing elements and are the most commonly researched and commercially available. Second, reduced force sensing element systems (RFSES), also known as distributive systems, have only recently been exploited in the academic world. The fundamental difference between these systems is that array systems handle the data captured from each sensor as unique outputs and suffer the effects of resolution. The effect of resolution is the error in the load position measurement between sensing elements, as the output is quantised in terms of position. This can be compared with a reduced sensor element system that maximises the data received through the coupling of data from a distribution of sensing points to describe the output in discrete time. This can also be extended to a coupling of transients in the time domain to describe an activity or dynamic movement. It is the RFSES that is examined and exploited here, being attractive to the commercial sector due to its advantages over array-based approaches, such as reduced design and computational complexity and cost.
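The resolution argument can be sketched with synthetic numbers: an array system quantises the load position to the location of the strongest element, while coupling a distribution of outputs gives a continuous estimate. The centroid estimator below is a generic illustration, not the thesis's algorithm.

```python
# Illustrative comparison (synthetic numbers) of array-style quantised
# position readout versus a coupled, distributive estimate.
def array_estimate(positions, outputs):
    """Nearest-element estimate: position of the strongest sensor."""
    return positions[outputs.index(max(outputs))]

def distributive_estimate(positions, outputs):
    """Output-weighted centroid across all sensing points."""
    total = sum(outputs)
    return sum(p * o for p, o in zip(positions, outputs)) / total

positions = [0.0, 10.0, 20.0, 30.0]  # sensor locations, mm (synthetic)
outputs = [0.1, 0.9, 0.7, 0.1]       # load placed near 14 mm (synthetic)
print(array_estimate(positions, outputs))                 # 10.0 (quantised)
print(round(distributive_estimate(positions, outputs), 2))  # 14.44
```

The array reading snaps to the nearest element (10 mm), while the coupled estimate lands much closer to the true load position, which is why fewer sensing points can suffice in a distributive system.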

Relevance: 30.00%

Abstract:

In many areas of northern India, salinity renders groundwater unsuitable for drinking and even for irrigation. Though membrane treatment can be used to remove the salt, there are some drawbacks to this approach, e.g. (1) depletion of the groundwater due to over-abstraction, (2) saline contamination of surface water and soil caused by concentrate disposal and (3) high electricity usage. To address these issues, a system is proposed in which a photovoltaic-powered reverse osmosis (RO) system is used to irrigate a greenhouse (GH) in a stand-alone arrangement. The concentrate from the RO is supplied to an evaporative cooling system, thus reducing the volume of the concentrate so that finally it can be evaporated in a pond to solid for safe disposal. Based on typical meteorological data for Delhi, calculations based on mass and energy balance are presented to assess the sizing and cost of the system. It is shown that solar radiation, freshwater output and evapotranspiration demand are readily matched due to the approximately linear relation among these variables. The demand for concentrate varies independently, however, thus favouring the use of a variable recovery arrangement. Though enough water may be harvested from the GH roof to provide year-round irrigation, this would require considerable storage. Some practical options for storage tanks are discussed. An alternative use of rainwater is in misting to reduce peak temperatures in the summer. An example optimised design provides internal temperatures below 30 °C (monthly average daily maxima) for 8 months of the year and costs about €36,000 for the whole system with GH floor area of 1000 m². Further work is needed to assess technical risks relating to scale-deposition in the membrane and evaporative pads, and to develop a business model that will allow such a project to succeed in the Indian rural context.
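The recovery-ratio bookkeeping behind such sizing calculations can be sketched as a simple mass balance. All figures below are invented for illustration and are not the paper's numbers.

```python
# Minimal mass-balance sketch (illustrative figures): splitting RO feed
# into permeate (irrigation) and concentrate (evaporative cooling) at a
# given recovery ratio, checked against an irrigation demand.
def ro_balance(feed_m3_day, recovery):
    """Return (permeate, concentrate) flows in m3/day."""
    permeate = feed_m3_day * recovery            # freshwater for irrigation
    concentrate = feed_m3_day * (1.0 - recovery)  # to evaporative cooling
    return permeate, concentrate

demand_m3_day = 4.0  # assumed evapotranspiration demand (invented figure)
permeate, concentrate = ro_balance(feed_m3_day=8.0, recovery=0.5)
print(permeate >= demand_m3_day, round(concentrate, 1))  # True 4.0
```

Varying `recovery` shifts water between the two streams without changing the feed, which is exactly why a variable-recovery arrangement helps when concentrate demand moves independently of irrigation demand.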

Relevance: 30.00%

Abstract:

Apoptosis is an important cell death mechanism by which multicellular organisms remove unwanted cells. It culminates in a rapid, controlled removal of cell corpses by neighboring or recruited viable cells. Whilst many of the molecular mechanisms that mediate corpse clearance are components of the innate immune system, clearance of apoptotic cells is an anti-inflammatory process. Control of cell death is dependent on competing pro-apoptotic and anti-apoptotic signals. Evidence now suggests a similar balance of competing signals is central to the effective removal of cells, through so-called 'eat me' and 'don't eat me' signals. Competing signals are also important for the controlled recruitment of phagocytes to sites of cell death. Consequently, recruitment of phagocytes to and from sites of cell death can underlie the resolution or inappropriate propagation of cell death and inflammation. This article highlights our understanding of mechanisms mediating clearance of dying cells and discusses those mechanisms controlling phagocyte migration and how inappropriate control may promote important pathologies. © The authors, publisher and licensee Libertas Academica Limited.

Relevance: 30.00%

Abstract:

The UK Police Force is required to operate communications centres under increased funding constraints. Staff represent the main cost of operating the facility, and the key issue for the efficient deployment of staff, in this case call handler staff, is to ensure that sufficient staff are available to respond promptly to customer calls when the timing of individual calls is difficult to predict. A discrete-event simulation study is presented of an investigation of a new shift pattern for call handler staff that aims to improve operational efficiency. The communications centre can be considered a specialised case of a call centre, but an important issue for Police Force management is the particularly stressful nature of the work staff are involved in when responding to emergency calls. Thus decisions regarding changes to the shift system were made in the context both of attempting to improve efficiency by matching staff supply with customer demand, and of ensuring a reasonable workload pattern for staff over time.
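The kind of discrete-event model described can be sketched as a simple multi-server queue: Poisson call arrivals served by a fixed number of handlers, reporting the fraction of calls answered within a target wait. The rates, staffing level and target below are invented, not those of the study.

```python
import heapq
import random

# Simplified discrete-event sketch (invented rates): calls arrive as a
# Poisson process and are taken by the earliest-free handler; report the
# fraction answered within a target waiting time.
def simulate_shift(n_handlers, arrival_rate, service_rate,
                   n_calls, target_wait, seed=1):
    random.seed(seed)
    t = 0.0
    free_at = [0.0] * n_handlers  # time each handler next becomes free
    heapq.heapify(free_at)
    answered_in_target = 0
    for _ in range(n_calls):
        t += random.expovariate(arrival_rate)    # next call arrives
        start = max(t, heapq.heappop(free_at))   # earliest free handler
        if start - t <= target_wait:
            answered_in_target += 1
        heapq.heappush(free_at, start + random.expovariate(service_rate))
    return answered_in_target / n_calls

print(round(simulate_shift(n_handlers=8, arrival_rate=1.0,
                           service_rate=0.2, n_calls=5000,
                           target_wait=0.5), 3))
```

Re-running the model with different `n_handlers` per shift period is the basic mechanism by which a proposed shift pattern can be tested against both service-level and workload criteria.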

Relevance: 30.00%

Abstract:

There is a presumption that invention is good. It provides us with innovative goods, services and ways of doing things, leading to greater employment, wealth and health. This article looks at the two recent UK cases regarding statutory extra compensation that may be awarded to employee inventors under the Patents Act 1977. Most universities worldwide and many companies have individual inventor reward schemes. Researchers now work in teams made up of both industry and academic researchers who are often based in different countries where different legal regimes apply. Is leaving the decision to award employees extra financial compensation up to individual companies unfair, unequal and demotivating? Is having differing legislative systems in different European countries counterproductive and a barrier to economic growth? There must be a balance between the inventor and the innovator. Do we have it right and, if not, what should it be?
Legislation: Patents Act 1977 s.39, s.40, s.41
Cases: Kelly v GE Healthcare Ltd [2009] EWHC 181 (Pat); [2009] R.P.C. 12 (Ch D (Patents Ct)); Shanks v Unilever Plc [2010] EWCA Civ 1283; [2011] R.P.C. 12 (CA (Civ Div))

Relevance: 30.00%

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

One of the current research trends in Enterprise Resource Planning (ERP) involves examining the critical factors for its successful implementation. However, such research is limited to system implementation and does not focus on the flexibility of ERP to respond to changes in business. Therefore, this study explores a combination system, made up of an ERP and informality, intended to provide organisations with efficient and flexible performance simultaneously. In addition, this research analyses the benefits and challenges of using the system. The research was based on socio-technical system (STS) theory, which contains two dimensions: 1) a technical dimension, which evaluates the performance of the system; and 2) a social dimension, which examines the impact of the system on an organisation. A mixed-method approach has been followed in this research. The qualitative part aims to understand the constraints of using a single ERP system, and to define a new system corresponding to these problems. To achieve this goal, four Chinese companies operating in different industries were studied, all of which faced challenges in using an ERP system due to complexity and uncertainty in their business environments. The quantitative part contains a discrete-event simulation study intended to examine the impact on operational performance when a company implements the hybrid system in a real-life situation. Moreover, this research conducts a further qualitative case study, the better to understand the influence of the system on an organisation. The empirical aspect of the study reveals that an ERP with pre-determined business activities cannot react promptly to unanticipated changes in a business. Incorporating informality into an ERP allows it to react to different situations by using different procedures that are based on the practical knowledge of frontline employees. Furthermore, the simulation study shows that the combination system can achieve a balance between efficiency and flexibility. Unlike existing research, which emphasises a continuous improvement in the IT functions of an enterprise system, this research contributes a definition of a new system in theory, which has mixed performance and contains both the formal practices embedded in an ERP and informal activities based on human knowledge. It supports both cost-efficiency in executing business transactions and flexibility in coping with business uncertainty. This research also indicates risks of using the system, such as using an ERP with limited functions; a high cost of performing informally; and low system acceptance, owing to a shift in organisational culture. With respect to practical contribution, this research suggests that companies can choose the most suitable enterprise system approach in accordance with their operational strategies. The combination system can be implemented in a company that needs to operate with a medium amount of volume and variety. By contrast, the traditional ERP system is better suited to a company that operates in a high-volume market, while an informal system is more suitable for a firm with a requirement for a high level of variety.

Relevance: 30.00%

Abstract:

Full text: The idea of producing proteins from recombinant DNA hatched almost half a century ago. In his PhD thesis, Peter Lobban foresaw the prospect of inserting foreign DNA (from any source, including mammalian cells) into the genome of a λ phage in order to detect and recover protein products from Escherichia coli [1 and 2]. Only a few years later, in 1977, Herbert Boyer and his colleagues succeeded in the first ever expression of a peptide-coding gene in E. coli — they produced recombinant somatostatin [3] followed shortly after by human insulin. The field has advanced enormously since those early days and today recombinant proteins have become indispensable in advancing research and development in all fields of the life sciences. Structural biology, in particular, has benefitted tremendously from recombinant protein biotechnology, and an overwhelming proportion of the entries in the Protein Data Bank (PDB) are based on heterologously expressed proteins. Nonetheless, synthesizing, purifying and stabilizing recombinant proteins can still be thoroughly challenging. For example, the soluble proteome is organized to a large part into multicomponent complexes (in humans often comprising ten or more subunits), posing critical challenges for recombinant production. A third of all proteins in cells are located in the membrane, and pose special challenges that require a more bespoke approach. Recent advances may now mean that even these most recalcitrant of proteins could become tenable structural biology targets on a more routine basis. In this special issue, we examine progress in key areas that suggests this is indeed the case. Our first contribution examines the importance of understanding quality control in the host cell during recombinant protein production, and pays particular attention to the synthesis of recombinant membrane proteins.
A major challenge faced by any host cell factory is the balance it must strike between its own requirements for growth and the fact that its cellular machinery has essentially been hijacked by an expression construct. In this context, Bill and von der Haar examine emerging insights into the role of the dependent pathways of translation and protein folding in defining high-yielding recombinant membrane protein production experiments for the common prokaryotic and eukaryotic expression hosts. Rather than acting as isolated entities, many membrane proteins form complexes to carry out their functions. To understand their biological mechanisms, it is essential to study the molecular structure of the intact membrane protein assemblies. Recombinant production of membrane protein complexes is still a formidable, at times insurmountable, challenge. In these cases, extraction from natural sources is the only option to prepare samples for structural and functional studies. Zorman and co-workers, in our second contribution, provide an overview of recent advances in the production of multi-subunit membrane protein complexes and highlight recent achievements in membrane protein structural research brought about by state-of-the-art near-atomic resolution cryo-electron microscopy techniques. E. coli has been the dominant host cell for recombinant protein production. Nonetheless, eukaryotic expression systems, including yeasts, insect cells and mammalian cells, are increasingly gaining prominence in the field. The yeast species Pichia pastoris is a well-established recombinant expression system for a number of applications, including the production of a range of different membrane proteins. Byrne reviews high-resolution structures that have been determined using this methylotroph as an expression host. Although it is not yet clear why P.
pastoris is suited to producing such a wide range of membrane proteins, its ease of use and the availability of diverse tools that can be readily implemented in standard bioscience laboratories mean that it is likely to become an increasingly popular option in structural biology pipelines. The contribution by Columbus concludes the membrane protein section of this volume. In her overview of post-expression strategies, Columbus surveys the four most common biochemical approaches for the structural investigation of membrane proteins. Limited proteolysis has successfully aided structure determination of membrane proteins in many cases. Deglycosylation of membrane proteins following production and purification analysis has also facilitated membrane protein structure analysis. Moreover, chemical modifications, such as lysine methylation and cysteine alkylation, have proven their worth to facilitate crystallization of membrane proteins, as well as NMR investigations of membrane protein conformational sampling. Together these approaches have greatly facilitated the structure determination of more than 40 membrane proteins to date. It may be an advantage to produce a target protein in mammalian cells, especially if authentic post-translational modifications such as glycosylation are required for proper activity. Chinese Hamster Ovary (CHO) cells and Human Embryonic Kidney (HEK) 293 cell lines have emerged as excellent hosts for heterologous production. The generation of stable cell-lines is often an aspiration for synthesizing proteins expressed in mammalian cells, in particular if high volumetric yields are to be achieved. In his report, Buessow surveys recent structures of proteins produced using stable mammalian cells and summarizes both well-established and novel approaches to facilitate stable cell-line generation for structural biology applications. The ambition of many biologists is to observe a protein's structure in the native environment of the cell itself. 
Until recently, this seemed to be more of a dream than a reality. Advances in nuclear magnetic resonance (NMR) spectroscopy techniques, however, have now made possible the observation of mechanistic events at the molecular level of protein structure. Smith and colleagues, in an exciting contribution, review emerging ‘in-cell NMR’ techniques that demonstrate the potential to monitor biological activities by NMR in real time in native physiological environments. A current drawback of NMR as a structure determination tool derives from size limitations of the molecule under investigation and the structures of large proteins and their complexes are therefore typically intractable by NMR. A solution to this challenge is the use of selective isotope labeling of the target protein, which results in a marked reduction of the complexity of NMR spectra and allows dynamic processes even in very large proteins and even ribosomes to be investigated. Kerfah and co-workers introduce methyl-specific isotopic labeling as a molecular tool-box, and review its applications to the solution NMR analysis of large proteins. Tyagi and Lemke next examine single-molecule FRET and crosslinking following the co-translational incorporation of non-canonical amino acids (ncAAs); the goal here is to move beyond static snap-shots of proteins and their complexes and to observe them as dynamic entities. The encoding of ncAAs through codon-suppression technology allows biomolecules to be investigated with diverse structural biology methods. In their article, Tyagi and Lemke discuss these approaches and speculate on the design of improved host organisms for ‘integrative structural biology research’. Our volume concludes with two contributions that resolve particular bottlenecks in the protein structure determination pipeline. The contribution by Crepin and co-workers introduces the concept of polyproteins in contemporary structural biology. Polyproteins are widespread in nature. 
They represent long polypeptide chains in which individual smaller proteins with different biological function are covalently linked together. Highly specific proteases then tailor the polyprotein into its constituent proteins. Many viruses use polyproteins as a means of organizing their proteome. The concept of polyproteins has now been exploited successfully to produce hitherto inaccessible recombinant protein complexes. For instance, by means of a self-processing synthetic polyprotein, the influenza polymerase, a high-value drug target that had remained elusive for decades, has been produced, and its high-resolution structure determined. In the contribution by Desmyter and co-workers, a further, often imposing, bottleneck in high-resolution protein structure determination is addressed: The requirement to form stable three-dimensional crystal lattices that diffract incident X-ray radiation to high resolution. Nanobodies have proven to be uniquely useful as crystallization chaperones, to coax challenging targets into suitable crystal lattices. Desmyter and co-workers review the generation of nanobodies by immunization, and highlight the application of this powerful technology to the crystallography of important protein specimens including G protein-coupled receptors (GPCRs). Recombinant protein production has come a long way since Peter Lobban's hypothesis in the late 1960s, with recombinant proteins now a dominant force in structural biology. The contributions in this volume showcase an impressive array of inventive approaches that are being developed and implemented, ever increasing the scope of recombinant technology to facilitate the determination of elusive protein structures. Powerful new methods from synthetic biology are further accelerating progress. Structure determination is now reaching into the living cell with the ultimate goal of observing functional molecular architectures in action in their native physiological environment. 
We anticipate that even the most challenging protein assemblies will be tackled by recombinant technology in the near future.

Relevance: 30.00%

Abstract:

Liquid-level sensing technologies have attracted great interest, because such measurements are essential to industrial applications such as fuel storage, flood warning and the biochemical industry. Traditional liquid-level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size and ease of multiplexing, several optical fiber liquid-level sensors have been investigated, based on different operating principles such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid-level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors, as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure. Sensors below the surface of the liquid will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid-level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression of the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach.
First, the use of linear regression with multiple sensors is inherently more accurate than using a single pressure reading to estimate depth. Second, common-mode temperature-induced wavelength shifts in the individual sensors are automatically compensated. Third, temperature-induced changes in the sensor pressure sensitivity are also compensated. Fourth, the approach provides the possibility to detect and compensate for malfunctioning sensors. Finally, the system is immune to changes in the density of the monitored fluid and even to changes in the effective force of gravity, as might be encountered in an aerospace application. The performance of an individual sensor was characterized and displays a sensitivity (54 pm/cm) enhanced by more than a factor of 2 compared with a sensor head configuration based on a silica FBG published in the literature, resulting from the much lower elastic modulus of POF. Furthermore, the temperature/humidity behavior and measurement resolution were also studied in detail. The proposed configuration also displays a highly linear response, high resolution and good repeatability. The results suggest the new configuration can be a useful tool in many different applications, such as aircraft fuel monitoring, and biochemical and environmental sensing, where accuracy and stability are fundamental. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
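The regression step described above can be sketched with synthetic readings. The depths, pressures and hydrostatic slope (0.1 kPa/cm) below are hypothetical; only the principle, fitting a line through sub-surface (depth, pressure) points and finding where it crosses ambient pressure, comes from the paper.

```python
# Sketch of level-finding by linear regression (synthetic readings):
# sub-surface pressures rise linearly with depth, so the least-squares
# line p = a*d + b crosses ambient pressure at the liquid surface.
def surface_level(depths_cm, pressures_kpa, ambient_kpa):
    """Return the depth (cm) at which the fitted line meets ambient."""
    sub = [(d, p) for d, p in zip(depths_cm, pressures_kpa)
           if p > ambient_kpa]                    # sub-surface sensors only
    n = len(sub)
    sx = sum(d for d, _ in sub)
    sy = sum(p for _, p in sub)
    sxx = sum(d * d for d, _ in sub)
    sxy = sum(d * p for d, p in sub)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope, kPa/cm
    b = (sy - a * sx) / n                          # intercept, kPa
    return (ambient_kpa - b) / a

depths = [0.0, 20.0, 40.0, 60.0, 80.0]          # sensor depths, cm (synthetic)
pressures = [101.3, 101.3, 103.1, 105.1, 107.1]  # kPa; 0.1 kPa/cm below surface
print(round(surface_level(depths, pressures, ambient_kpa=101.3), 1))  # 22.0
```

The two shallow sensors read ambient and are excluded; the fit through the remaining three places the surface at 22 cm, well inside the 20-40 cm gap that a nearest-sensor reading alone could resolve.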