Abstract:
The global rise in antibiotic resistance is a significant problem facing healthcare professionals. In particular, within the cystic fibrosis (CF) lung, bacteria can establish chronic infection and resistance to a wide array of antibiotic therapies. One of the principal pathogens associated with chronic infection in the CF lung is Pseudomonas aeruginosa, which can establish chronic infection partly through the use of the biofilm mode of growth. This biofilm mode of growth offers a considerable degree of protection from a wide variety of challenges, such as the host immune system or antibiotic therapy. The threat posed by the emergence of chronic pathogens is prompting the development of next-generation antimicrobials. Because the biofilm mode of growth is often central to the establishment of chronic infection and the development of antibiotic resistance, targeting biofilm formation has emerged as one of the principal strategies for the development of next-generation antimicrobials. In this thesis, two separate approaches were used to identify potential anti-biofilm targets. The first strategy focused on the identification of novel genes with a role in biofilm formation. High-throughput screening identified almost 300 genes with a role in biofilm formation, a number of which were characterised at a phenotypic and a molecular level. The second strategy focused on the identification of compounds capable of inhibiting biofilm formation. A collection of marine-sponge-isolated bacteria were screened for the ability to inhibit quorum sensing, the central pathway regulating biofilm formation. A number of distinct isolates with quorum sensing inhibition activity were identified, from which a Pseudomonas isolate was selected for further characterisation. A specific compound capable of inhibiting quorum sensing was identified in the supernatant of this marine isolate using analytical chemistry techniques.
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving the core optical communication links closer and closer to their maximum capacity. The research community has clearly identified the approaching nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats, contained in Chapter 3 of this thesis, was undertaken. The work investigates the proposed energy-efficient four-dimensional modulation formats. It begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams, and then carries out one of the first implementations of one such modulation format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode fibre and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and assesses the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing, and compares the results to numerical models, along with capacity limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines some of the development and characterisation of these components. The world's first wavelength division multiplexed (WDM) subsystem directly implemented at 2μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2μm. References: [1] P. P. Mitra and J. B. Stark, Nature, 411, 1027-1030, 2001. [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
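For context (this sketch is not drawn from the thesis itself): the nonlinear Shannon limit cited in [1,2] is often summarised by a heuristic capacity model in which amplifier noise is augmented by a Kerr-nonlinearity penalty growing with the cube of the launch power P, so that capacity peaks at a finite optimum power rather than growing indefinitely with P:

```latex
% Heuristic nonlinear capacity model (illustrative form only):
% N_ASE = amplifier noise power, \eta = fibre nonlinearity coefficient.
\mathrm{SNR}_{\mathrm{eff}} = \frac{P}{N_{\mathrm{ASE}} + \eta P^{3}},
\qquad
C \approx B \log_{2}\!\left(1 + \mathrm{SNR}_{\mathrm{eff}}\right)
```

Setting dC/dP = 0 gives an optimum launch power P_opt = (N_ASE / 2η)^{1/3}, beyond which nonlinear distortion erodes any gain from additional signal power.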
Abstract:
Long reach passive optical networks (LR-PONs), which integrate fibre-to-the-home with metro networks, have been the subject of intensive research in recent years and are considered one of the most promising candidates for the next generation of optical access networks. Such systems ideally have reaches greater than 100km and bit rates of at least 10Gb/s per wavelength in the downstream and upstream directions. Due to the limited equipment sharing that is possible in access networks, the laser transmitters in the terminal units, which are usually the most expensive components, must be as cheap as possible. However, the requirement for low cost is generally incompatible with the need for a transmitter chirp characteristic that is optimised for such long reaches at 10Gb/s, and hence dispersion compensation is required. In this thesis electronic dispersion compensation (EDC) techniques are employed to increase the chromatic dispersion tolerance and to enhance the system performance at the expense of moderate additional implementation complexity. In order to use such EDC in LR-PON architectures, a number of challenges associated with the burst-mode nature of the upstream link need to be overcome. In particular, the EDC must be made adaptive from one burst to the next (burst-mode EDC, or BM-EDC) in time scales on the order of tens to hundreds of nanoseconds. Burst-mode operation of EDC has received little attention to date. The main objective of this thesis is to demonstrate the feasibility of such a concept and to identify the key BM-EDC design parameters required for applications in a 10Gb/s burst-mode link. This is achieved through a combination of simulations and transmission experiments utilising off-line data processing. The research shows that burst-to-burst adaptation can in principle be implemented efficiently, opening the possibility of low overhead, adaptive EDC-enabled burst-mode systems.
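As an illustration of the burst-mode constraint described above (a minimal sketch, not the thesis's implementation): a feed-forward equalizer whose taps are re-converged on each burst's known preamble with the LMS algorithm, then frozen for the payload. The tap count, step size, and one-sample-per-symbol assumption are all illustrative choices.

```python
import numpy as np

def burst_mode_ffe(rx, preamble, n_taps=7, mu=0.01):
    """Re-converge a feed-forward equalizer on a single burst.

    rx       : received samples for one burst (one per symbol, assumed)
    preamble : known training symbols aligned with the start of rx
    """
    w = np.zeros(n_taps)
    w[0] = 1.0                               # start as a pass-through filter
    pad = np.concatenate([np.zeros(n_taps - 1), rx])
    # LMS adaptation on the preamble only: burst-to-burst re-convergence
    # must complete within the preamble (tens to hundreds of ns at 10Gb/s).
    for k, d in enumerate(preamble):
        x = pad[k:k + n_taps][::-1]          # tap delay line, newest first
        e = d - np.dot(w, x)                 # error against known symbol
        w += mu * e * x                      # LMS tap update
    # Equalize the whole burst with the frozen taps.
    return np.convolve(rx, w)[:len(rx)], w
```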
Abstract:
For efficient use of metal oxides, such as MnO2 and RuO2, in pseudocapacitors and other electrochemical applications, the poor conductivity of the metal oxide is a major problem. To tackle this problem, we have designed a ternary nanocomposite film composed of a metal oxide (MnO2), carbon nanotubes (CNT), and a conducting polymer (CP). Each component in the MnO2/CNT/CP film provides a unique and critical function in achieving optimized electrochemical properties. The electrochemical performance of the film is evaluated by cyclic voltammetry and constant-current charge/discharge cycling techniques. The specific capacitance (SC) of the ternary composite electrode can reach 427 F/g. Even at high mass loading and a high concentration of MnO2 (60%), the film still showed an SC value as high as 200 F/g. The electrode also exhibited an excellent charge/discharge rate and good cycling stability, retaining over 99% of its initial charge after 1000 cycles. The results demonstrate that MnO2 is effectively utilized with the assistance of the other components (f-FWNTs and poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate)) in the electrode. Such a ternary composite is very promising for next-generation high-performance electrochemical supercapacitors.
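For reference, specific capacitance from a constant-current (galvanostatic) discharge is conventionally computed as C_sp = I·Δt / (m·ΔV). A minimal sketch with purely hypothetical numbers (not taken from the paper):

```python
def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Galvanostatic estimate: C_sp = I * dt / (m * dV), in F/g."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

# Hypothetical example: a 1 mg electrode discharged at 1 mA over a
# 0.8 V window in 340 s gives 425 F/g, comparable in scale to the
# 427 F/g reported for the ternary composite.
print(specific_capacitance(1e-3, 340.0, 1e-3, 0.8))  # 425.0
```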
Abstract:
Neurodegenerative diseases such as Alzheimer's and Parkinson's disease are associated with elevated levels of iron, copper, and zinc and, consequently, high levels of oxidative stress. Given the multifactorial nature of these diseases, it is becoming evident that the next generation of therapies must have multiple functions to combat multiple mechanisms of disease progression. Metal-chelating agents provide one such function as an intervention for ameliorating metal-associated damage in degenerative diseases. Targeting chelators to adjust localized metal imbalances in the brain, however, presents significant challenges. In this perspective, we focus on some noteworthy advances in the area of multifunctional metal chelators as potential therapeutic agents for neurodegenerative diseases. In addition to metal-chelating ability, these agents also contain features designed to improve their uptake across the blood-brain barrier, increase their selectivity for metals in damage-prone environments, increase antioxidant capabilities, lower Aβ peptide aggregation, or inhibit disease-associated enzymes such as monoamine oxidase and acetylcholinesterase.
Abstract:
Ambient sampling for the Pittsburgh Air Quality Study (PAQS) was conducted from July 2001 to September 2002. The study was designed (1) to characterize particulate matter (PM) in the Pittsburgh region by examination of its size, surface area, and volume distribution; its chemical composition as a function of size and on a single-particle basis; its morphology; and its temporal and spatial variability; (2) to quantify the impact of the various sources (transportation, power plants, biogenic sources, etc.) on the aerosol concentrations in the area; and (3) to develop and evaluate the next generation of atmospheric aerosol monitoring and modeling techniques. The PAQS objectives, study design, site descriptions, and routine and intensive measurements are presented. Special study days are highlighted, including those associated with elevated concentrations of daily average PM2.5 mass. Monthly average and diurnal patterns in aerosol number concentration; aerosol nitrate, sulfate, elemental carbon, and organic carbon concentrations; light scattering; and gas-phase ozone, nitrogen oxides, and carbon monoxide are discussed, with emphasis on the processes affecting them. Preliminary findings reveal day-to-day variability in aerosol mass and composition, but consistency in seasonal average diurnal profiles and concentrations. For example, the seasonal average variations in the diurnal PM2.5 mass were predominantly driven by the sulfate component.
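As an illustration of how such seasonal-average diurnal profiles are typically built (a sketch with hypothetical column names, not the study's actual processing code):

```python
import pandas as pd

def seasonal_diurnal_profile(df, column, months):
    """Average diurnal cycle of one species over a set of calendar months.

    df must have a DatetimeIndex; 'column' names a species such as
    PM2.5 mass or sulfate (column names here are illustrative).
    """
    subset = df[df.index.month.isin(months)]
    # Group all observations by hour of day and average across the season.
    return subset.groupby(subset.index.hour)[column].mean()

# e.g. summer (JJA) average diurnal profile of PM2.5 mass:
# profile = seasonal_diurnal_profile(hourly_data, "pm25_mass", [6, 7, 8])
```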
Abstract:
This study assesses the skill of advanced regional climate models (RCMs) in simulating southeastern United States (SE US) summer precipitation and explores the physical mechanisms responsible for the simulation skill at a process level. Analysis of the RCM output for the North American Regional Climate Change Assessment Program indicates that the RCM simulations of summer precipitation show the largest biases and a remarkable spread over the SE US compared to other regions in the contiguous US. The causes of this spread are investigated by performing simulations using the Weather Research and Forecasting (WRF) model, a next-generation RCM developed by the US National Center for Atmospheric Research. The results show that the simulated biases in SE US summer precipitation are due mainly to the misrepresentation of the modeled North Atlantic subtropical high (NASH) western ridge. In the WRF simulations, the NASH western ridge shifts 7° northwestward compared with the reanalysis ensemble, leading to a dry bias in the simulated summer precipitation, according to the relationship between the NASH western ridge and summer precipitation over the southeast. Experiments utilizing the four-dimensional data assimilation technique further suggest that an improved representation of the circulation patterns (i.e., wind fields) associated with the NASH western ridge substantially reduces the bias in the simulated SE US summer precipitation. Our analysis of circulation dynamics indicates that the NASH western ridge in the WRF simulations is significantly influenced by the simulated planetary boundary layer (PBL) processes over the Gulf of Mexico. Specifically, a decrease (increase) in the simulated PBL height tends to stabilize (destabilize) the lower troposphere over the Gulf of Mexico, and thus inhibits (favors) the onset and/or development of convection. Such changes in tropical convection induce a tropical–extratropical teleconnection pattern, which modulates the circulation along the NASH western ridge in the WRF simulations and contributes to the modeled precipitation biases over the SE US. In conclusion, our study demonstrates that the NASH western ridge is an important factor responsible for the RCM skill in simulating SE US summer precipitation. Furthermore, improvements in the PBL parameterizations for the Gulf of Mexico might help advance RCM skill in representing the NASH western ridge circulation and summer precipitation over the SE US.
Abstract:
Transcranial magnetic stimulation (TMS) is a widely used, noninvasive method for stimulating nervous tissue, yet its mechanisms of effect are poorly understood. Here we report new methods for studying the influence of TMS on single neurons in the brain of alert non-human primates. We designed a TMS coil that focuses its effect near the tip of a recording electrode, and recording electronics that enable direct acquisition of neuronal signals at the site of peak stimulus strength, minimally perturbed by stimulation artifact, in awake monkeys (Macaca mulatta). We recorded action potentials within ∼1 ms after 0.4-ms TMS pulses and observed changes in activity that differed significantly for active stimulation as compared with sham stimulation. This methodology is compatible with standard equipment in primate laboratories, allowing easy implementation. Application of these tools will facilitate the refinement of next-generation TMS devices, experiments, and treatment protocols.
Abstract:
X-ray mammography has been the gold standard for breast imaging for decades, despite the significant limitations posed by its two-dimensional (2D) image acquisitions. Difficulty in diagnosing lesions close to the chest wall and axilla, a high degree of structural overlap, and patient discomfort due to compression are only some of these limitations. To overcome these drawbacks, three-dimensional (3D) breast imaging modalities have been developed, including dual-modality single photon emission computed tomography (SPECT) and computed tomography (CT) systems. This thesis focuses on the development and integration of the next generation of such a device for dedicated breast imaging. The goals of this dissertation work are to: [1] understand and characterize any effects of fully 3D trajectories on reconstructed image scatter correction, absorbed dose, and Hounsfield Unit accuracy, and [2] design, develop, and implement the fully flexible, third-generation hybrid SPECT-CT system, capable of traversing complex 3D orbits about a pendant breast volume with neither subsystem interfering with the other. Such a system would overcome artifacts resulting from incompletely sampled divergent cone beam imaging schemes and allow imaging closer to the chest wall, which other systems currently under research and development elsewhere cannot achieve.
The dependence of x-ray scatter radiation on object shape, size, material composition, and the CT acquisition trajectory was investigated with a well-established beam stop array (BSA) scatter correction method. While the 2D scatter-to-primary ratio (SPR) was the main metric used to characterize total system scatter, a new metric called ‘normalized scatter contribution’ was developed to compare the results of scatter correction on 3D reconstructed volumes. Scatter estimation studies were undertaken with a sinusoidal saddle (±15° polar tilt) orbit and a traditional circular (AZOR) orbit. Clinical studies to acquire data for scatter correction were used to evaluate the 2D SPR on a small set of patients scanned with the AZOR orbit. Clinical SPR results showed a clear dependence of scatter on breast composition and glandular tissue distribution, but were otherwise consistent with the overall phantom-based size and density measurements. Additionally, SPR dependence on the acquisition trajectory was also observed, where 2D scatter increased with an increase in the polar tilt angle of the system.
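A minimal sketch of the beam stop array principle described above (hypothetical code, not the dissertation's): behind an opaque stop the primary beam is blocked, so the detected signal there is scatter alone, and the primary follows by subtraction from an open-field projection.

```python
import numpy as np

def spr_from_bsa(open_field, bsa_field, stop_mask):
    """Estimate the 2D scatter-to-primary ratio from a beam-stop-array pair.

    open_field : projection without the array (primary + scatter)
    bsa_field  : projection with the array in place
    stop_mask  : boolean image, True inside each beam stop's shadow
    """
    # Signal behind an opaque stop is scatter only.
    scatter = np.mean(bsa_field[stop_mask])
    # Primary in those same pixels = total (open field) minus scatter.
    primary = np.mean(open_field[stop_mask]) - scatter
    return scatter / primary  # in practice scatter is interpolated between
                              # stops rather than averaged globally
```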
The dose delivered by any imaging system is of primary importance from the patient’s point of view, and therefore trajectory-related differences in the dose distribution in a target volume were evaluated. Monte Carlo simulations as well as physical measurements using radiochromic film were undertaken using the saddle and AZOR orbits. Results illustrated that both orbits deliver a comparable dose to the target volume and differ only slightly in distribution within the volume. Simulations and measurements showed similar results, and all measured dose values were within the standard screening-mammography 6 mGy dose limit, which is used as a benchmark for dose comparisons.
Hounsfield Units (HU) are used clinically to differentiate tissue types in a reconstructed CT image, and therefore the HU accuracy of a system is very important, especially when using non-traditional trajectories. Uniform phantoms filled with various uniform-density fluids were used to investigate differences in HU accuracy between the saddle and AZOR orbits. Results illustrate the considerably better performance of the saddle orbit, especially close to the chest and nipple regions of what would clinically be a pendant breast volume. The AZOR orbit causes shading artifacts near the nipple due to insufficient sampling, rendering a major portion of the scanned phantom unusable, whereas the saddle orbit performs exceptionally well and provides a tighter distribution of HU values in reconstructed volumes.
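For reference, the Hounsfield scale maps the reconstructed linear attenuation coefficient μ onto a fixed scale on which water is 0 HU and air is -1000 HU, which is why sampling artifacts in a uniform fluid phantom appear directly as HU bias and spread:

```latex
% Hounsfield scale: water -> 0 HU, air -> -1000 HU by definition.
\mathrm{HU} = 1000 \times
  \frac{\mu - \mu_{\mathrm{water}}}{\mu_{\mathrm{water}} - \mu_{\mathrm{air}}}
```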
Finally, the third-generation, fully-suspended SPECT-CT system was designed and developed in our lab. A novel mechanical method using a linear motor was developed for tilting the CT system. A new x-ray source and a custom-made 40 x 30 cm2 detector were integrated onto this system. The SPECT system was nested in the center of the gantry, orthogonal to the CT source-detector pair. The SPECT system tilts on a goniometer, and the newly developed CT tilting mechanism allows a maximum ±15° polar tilt of the CT system. The entire gantry is mounted on a rotation stage, allowing complex arbitrary trajectories for each system, without interference from the other, while maintaining a common field of view. This hybrid system shows potential for clinical use as a diagnostic tool for dedicated breast imaging.
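A sketch of what such a saddle source trajectory might look like (the sinusoidal parameterization below is an assumption for illustration, not the system's documented motion profile):

```python
import numpy as np

def saddle_orbit(n_views=360, radius_mm=500.0, tilt_max_deg=15.0):
    """Source positions for a sinusoidal saddle orbit.

    Assumes the polar tilt varies as tilt_max * sin(2 * azimuth), giving
    two upward and two downward excursions per gantry rotation; the
    radius and view count are illustrative values.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    phi = np.deg2rad(tilt_max_deg) * np.sin(2.0 * theta)  # polar tilt per view
    x = radius_mm * np.cos(phi) * np.cos(theta)
    y = radius_mm * np.cos(phi) * np.sin(theta)
    z = radius_mm * np.sin(phi)      # excursion toward chest wall / nipple
    return np.stack([x, y, z], axis=1)
```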
Abstract:
Computer-based mathematical models describing the aircraft evacuation process and aircraft fire have a role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, and in post-mortem accident investigation. As the cost and risk involved in performing large-scale fire/evacuation experiments for the next generation 'Very Large Aircraft' (VLA) are expected to be high, the development and use of these modelling tools may become essential if these aircraft are to prove a viable reality. By describing the present capabilities and limitations of the EXODUS evacuation model and associated fire models, this paper will examine the future development and data requirements of these models.
Abstract:
Computer-based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, in cabin crew training, and in post-mortem accident investigation. As the risk of personal injury and the costs involved in performing large-scale evacuation experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. In this paper the capabilities and limitations of the airEXODUS evacuation model are described. Its successful application to the prediction of a recent certification trial, prior to the actual trial taking place, is described. Also described is a newly defined parameter known as OPS which can be used as a measure of evacuation trial optimality. In addition, sample evacuation simulations in the presence of fire atmospheres are described. Finally, the data requirements of the airEXODUS evacuation model are discussed, along with several projects currently underway at the University of Greenwich designed to obtain this data. Included in this discussion is a description of the AASK (Aircraft Accident Statistics and Knowledge) database, which contains detailed information from aircraft accident survivors.
Abstract:
Computer-based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, and in cabin crew training and post-mortem accident investigation. As the risk of personal injury and costs involved in performing large-scale evacuation experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. This paper describes the capabilities and limitations of the airEXODUS evacuation model and some attempts at validation, including its successful application to the prediction of a recent certification trial, prior to the actual trial taking place. Also described is a newly defined parameter known as OPS which can be used as a measure of evacuation trial optimality. In addition, sample evacuation simulations in the presence of fire atmospheres are described.
Abstract:
Computer-based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, the implementation of safer and more rigorous certification criteria, cabin crew training, and post-mortem accident investigation. As the risk of personal injury and the costs involved in performing large-scale evacuation experiments for the next generation ultra high capacity aircraft (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. This paper describes the capabilities and limitations of the airEXODUS evacuation model and some attempts at validation, including its successful application to the prediction of a recent certification trial, prior to the actual trial taking place. Also described is a newly defined performance parameter known as OPS that can be used as a measure of evacuation trial optimality. In addition, sample evacuation simulations in the presence of fire atmospheres are described.
Abstract:
Computer-based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, and in post-mortem accident investigation. As the risk of personal injury and costs involved in performing large-scale evacuation experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. In this paper the capabilities and limitations of the airEXODUS evacuation model are described. Its successful application to the prediction of a recent certification trial, prior to the actual trial taking place, is described. Also described is a newly defined parameter known as OPS which can be used as a measure of evacuation trial optimality. Finally, the data requirements of aircraft evacuation models are discussed, along with several projects currently underway at the University of Greenwich designed to obtain this data. Included in this discussion is a description of the AASK (Aircraft Accident Statistics and Knowledge) database, which contains detailed information from aircraft accident survivors.
Abstract:
Metal casting is a process governed by the interaction of a range of physical phenomena. Most computational models of this process address only what are conventionally regarded as the primary phenomena: heat conduction and solidification. However, predicting the formation of porosity (a factor of key importance in cast quality) requires modelling the interaction of the fluid flow, heat transfer, solidification, and the development of stress-deformation in the solidified part of a component. In this paper, a model of the casting process is described which addresses all the main continuum phenomena involved in a coupled manner. The model is solved numerically using novel finite volume unstructured mesh techniques, and then applied both to the prediction of shape deformation (plus the subsequent formation of a gap at the metal-mould interface and its impact on the heat transfer behaviour) and to porosity formation in solidifying metal components. Although the porosity prediction model is phenomenologically simplistic, it is based on the interaction of the continuum phenomena and yields good agreement with available experimental results. This work represents the first of the next generation of casting simulation tools designed to predict aspects of the structure of cast components.
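A representative, textbook-style statement of the kind of coupled continuum system such a model solves; this is a common enthalpy-porosity formulation, offered as an illustration rather than the paper's exact equations:

```latex
% Energy equation with latent-heat release
% (f_s = solid fraction, L = latent heat):
\frac{\partial (\rho c T)}{\partial t}
  + \nabla \cdot (\rho c \, \mathbf{u} T)
  = \nabla \cdot (k \nabla T)
  + \rho L \frac{\partial f_s}{\partial t}

% Momentum equation with a Darcy term that damps flow in the mushy zone,
% where the permeability K(f_s) -> 0 as the metal solidifies:
\rho \frac{\partial \mathbf{u}}{\partial t}
  + \rho (\mathbf{u} \cdot \nabla) \mathbf{u}
  = -\nabla p + \mu \nabla^{2} \mathbf{u}
  - \frac{\mu}{K(f_s)} \, \mathbf{u}
```

Porosity prediction then hinges on how solidification-driven shrinkage and the evolving pressure and temperature fields interact within this coupled system.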