14 results for S960 QC
in Glasgow Theses Service
Abstract:
Quantum mechanics, optics and indeed any wave theory exhibit the phenomenon of interference. In this thesis we present two problems investigating interference due to indistinguishable alternatives, and a mostly unrelated investigation into the free-space propagation speed of light pulses in particular spatial modes. In chapter 1 we introduce the basic properties of the electromagnetic field needed for the subsequent chapters. In chapter 2 we review the properties of interference using the beam splitter and the Mach-Zehnder interferometer. In particular we review what happens when one of the paths of the interferometer is marked in some way, so that a particle having traversed it carries information as to which path it went down (followed up in chapter 3), and we review Hong-Ou-Mandel interference at a beam splitter (followed up in chapter 5). In chapter 3 we present the first of the interference problems. This consists of a nested Mach-Zehnder interferometer in which each of the free-space propagation segments is weakly marked by mirrors vibrating at different frequencies [1]. The original experiment drew the conclusion that the photons followed disconnected paths. We partition the description of the light in the interferometer according to the number of paths about which it contains which-way information, and reinterpret the results reported in [1] in terms of the interference of paths spatially connected from source to detector. In chapter 4 we briefly review optical angular momentum, entanglement and spontaneous parametric down-conversion. These concepts feed into chapter 5, in which we present the second of the interference problems, namely Hong-Ou-Mandel interference with particles possessing two degrees of freedom. We analyse the problem in terms of exchange symmetry for both boson and fermion pairs and show that the particle statistics at a beam splitter can be controlled for suitably chosen states. We propose an experimental test of these ideas using orbital angular momentum entangled photons. In chapter 6 we look at the effect that the transverse spatial structure of the mode in which a pulse of light is excited has on its group velocity. We show that the resulting group velocity is slower than the plane-wave speed of light in vacuum, and that this reduction in the group velocity is related to the spread in wave vectors required to create the transverse spatial structure. We present experimental results of the measurement of this slowing using Hong-Ou-Mandel interference.
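The link between transverse structure and group velocity described in chapter 6 can be illustrated with a back-of-envelope sketch. The model below assumes an idealised Bessel-like beam whose plane-wave components all sit on a cone of half-angle alpha, so the axial group velocity is roughly c·cos(alpha); the cone angle and propagation distance are purely illustrative values, not numbers from the thesis.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum (m/s)

def bessel_beam_delay(cone_angle_rad: float, distance_m: float) -> float:
    """Extra arrival time of a Bessel-like beam relative to a plane wave.

    Assumes every transverse wave-vector component lies on a cone of
    half-angle `cone_angle_rad`, so the axial group velocity is ~c*cos(angle).
    """
    v_group = C * np.cos(cone_angle_rad)
    return distance_m / v_group - distance_m / C

# Illustrative numbers only: a 1 m path and a few-milliradian cone angle give
# a delay of tens of femtoseconds, the scale a Hong-Ou-Mandel dip can resolve.
alpha = 4.5e-3   # cone half-angle (rad), hypothetical
L = 1.0          # propagation distance (m), hypothetical
print(f"group-velocity delay over {L} m: {bessel_beam_delay(alpha, L) * 1e15:.1f} fs")
```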
Abstract:
Little is known about how historic wood ages naturally. Instead, most studies focus on biological decay, as it is often assumed that wood otherwise remains stable with age. This PhD project was organised by Historic Scotland and the University of Glasgow to investigate the natural chemical and physical aging of wood. The natural aging of wood was a concern for Historic Scotland because traditional timber replacement is the standard form of repair used in wooden cultural heritage: rotten timber is replaced with new timber of the same species. The project was set up to look at what chemical and physical differences could exist between old and new wood that could put unforeseen stress on the joint between them. Through Historic Scotland it was possible to work with genuine historic wood of two species, oak and Scots pine, both from the 1500s, rather than relying on artificial aging. Artificial aging of wood is still a debated topic, with ongoing discussion of whether it truly mimics the aging process or simply damages the wood cells. The chemical stability of wood was investigated using Fourier-transform infrared (FTIR) microscopy, as well as wet chemistry methods including a test for soluble sugars from the possible breakdown of the wood polymers. The physical properties were assessed using a tensile testing machine to uncover possible differences in mechanical behaviour. An environmental chamber was used to test the response to moisture of wood of different ages, as moisture is the most damaging environmental factor for wooden cultural objects. The project uncovered several differences, both physical and chemical, between the modern and historic wood which could affect the success of traditional ‘like for like’ repairs. Both oak and pine lost acetyl groups from their hemicellulose polymers over historic time. This chemical reaction releases acetic acid, which had no effect on the historic oak but was associated with reduced stiffness in historic pine, probably due to degradation of the hemicellulose polymers by acid hydrolysis. The stiffness of historic oak and pine was also reduced by decay. Visible pest decay led to loss of wood density, but there was evidence that fungal decay, extending beyond what was visible, degraded the S2 layer of the pine cell walls, reducing the stiffness of the wood by depleting the cellulose microfibrils most closely aligned with the grain. Fungal decay of polysaccharides in pine wood left behind sugars that attracted increased levels of moisture. The degradation of essential polymers in the wood structure with age had different impacts on the two species, and raised questions concerning both the mechanism of wood aging and the way traditional repairs are implemented, especially in Scots pine. These repairs need to be done with more care and precision, especially in choosing new timber to match the old. Within this project a quantitative method of measuring the microfibril angle (MFA) of wood using polarised FTIR microscopy was developed, allowing the MFA of both new and historic pine to be measured. This provides some of the information needed for a more specific match when selecting replacement timbers for historic buildings.
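The polarised-FTIR route to microfibril angle is only named above, not described. Purely as a generic illustration (not the method developed in the thesis), one common way to extract a dominant orientation angle from polarised absorbance data is to fit a cos² dependence on polariser angle and read off the phase; the band, angles and noise level below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def dichroism_model(theta_deg, a0, a1, phi_deg):
    """Absorbance of an oriented absorber vs. polariser angle (cos^2 law)."""
    return a0 + a1 * np.cos(np.radians(theta_deg - phi_deg)) ** 2

def fit_orientation(theta_deg, absorbance):
    """Return the polariser angle of maximum absorbance, a proxy for the
    dominant fibril orientation relative to the chosen reference axis."""
    p0 = [absorbance.min(), np.ptp(absorbance), 0.0]
    (a0, a1, phi), _ = curve_fit(dichroism_model, theta_deg, absorbance, p0=p0)
    return phi % 180.0

# Hypothetical data: absorbance of a cellulose-associated band recorded
# every 10 degrees of polariser rotation, with a little measurement noise.
angles = np.arange(0, 180, 10)
rng = np.random.default_rng(0)
data = dichroism_model(angles, 0.20, 0.15, 25.0) + rng.normal(0, 0.005, angles.size)
print(f"fitted orientation: {fit_orientation(angles, data):.1f} deg")
```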
Abstract:
Generalised refraction is a topic which has, thus far, garnered far less attention than it deserves. The purpose of this thesis is to highlight the potential that generalised refraction has to offer with regard to imaging and its application to designing new passive optical devices. Specifically, in this thesis we will explore two types of generalised refraction which take place across a planar interface: refraction by generalised confocal lenslet arrays (gCLAs), and refraction by ray-rotation sheets. We will show that the corresponding laws of refraction for these interfaces produce, in general, light-ray fields with non-zero curl, and as such do not have a corresponding outgoing waveform. We will then show that gCLAs perform integral, geometrical imaging, and that this enables them to be considered as approximate realisations of metric-tensor interfaces. The concept of piecewise transformation optics will be introduced, and we will show that it is possible to use gCLAs along with other optical elements, such as lenses, to design simple piecewise transformation-optics devices such as invisibility cloaks and insulation windows. Finally, we shall show that ray-rotation sheets can be interpreted as performing geometrical imaging into complex space, and that as a consequence, ray-rotation sheets and gCLAs may in fact be more closely related than first realised. We conclude with a summary of potential future projects which lead naturally from the results of this thesis.
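The claim that such sheets generate light-ray fields with non-zero curl can be checked numerically in a toy model. The sketch below is not taken from the thesis: it assumes a ray-rotation sheet acts by rotating the transverse part of each incident ray direction by a fixed angle about the sheet normal, applies this to rays diverging from a point source, and compares the normal component of the curl of the outgoing direction field with and without rotation.

```python
import numpy as np

def ray_rotation_field(alpha_rad, half_width=1.0, n=101):
    """Outgoing ray-direction field behind an idealised ray-rotation sheet.

    Incident rays diverge from a point source one unit behind the sheet; each
    transmitted ray has the transverse part of its direction rotated by
    `alpha_rad` about the sheet normal (a toy model, not the thesis's code).
    """
    x, y = np.meshgrid(np.linspace(-half_width, half_width, n),
                       np.linspace(-half_width, half_width, n), indexing="xy")
    norm = np.sqrt(x**2 + y**2 + 1.0)
    dx, dy = x / norm, y / norm                    # incident transverse direction
    c, s = np.cos(alpha_rad), np.sin(alpha_rad)
    return c * dx - s * dy, s * dx + c * dy, x, y  # rotated transverse direction

def curl_z(fx, fy, x, y):
    """Normal component of the curl of the transverse direction field."""
    dfy_dx = np.gradient(fy, x[0, :], axis=1)
    dfx_dy = np.gradient(fx, y[:, 0], axis=0)
    return dfy_dx - dfx_dy

fx, fy, x, y = ray_rotation_field(alpha_rad=np.radians(30))
print("max |curl_z| with rotation:   ", np.abs(curl_z(fx, fy, x, y)).max())
fx0, fy0, _, _ = ray_rotation_field(alpha_rad=0.0)
print("max |curl_z| without rotation:", np.abs(curl_z(fx0, fy0, x, y)).max())
```

With the rotation switched on the curl is of order unity, while the unrotated (ordinary point-source) field is curl-free up to finite-difference error, which is the sense in which no outgoing waveform corresponds to the rotated ray field.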
Epidemiology and genetic architecture of blood pressure: a family based study of Generation Scotland
Abstract:
Hypertension is a major risk factor for cardiovascular disease and mortality, and a growing global public health concern, with up to one-third of the world’s population affected. Despite the vast amount of evidence for the benefits of blood pressure (BP) lowering accumulated to date, elevated BP is still the leading risk factor for disease and disability worldwide. It is well established that hypertension and BP are common complex traits, where multiple genetic and environmental factors contribute to BP variation. Furthermore, family and twin studies confirm the genetic component of BP, with heritability estimates in the range of 30-50%. Contemporary genomic tools, enabling the genotyping of millions of genetic variants across the human genome in an efficient, reliable, and cost-effective manner, have transformed hypertension genetics research. This has been accompanied by international consortia offering unprecedentedly large sample sizes for genome-wide association studies (GWASs). While GWASs for hypertension and BP have identified more than 60 loci, variants in these loci are associated with modest effects on BP and in aggregate explain less than 3% of the variance in BP. The aim of this thesis is to study the genetic and environmental factors that influence BP and hypertension traits in the Scottish population by performing several genetic epidemiological analyses. The first part of the thesis assesses the burden of hypertension in the Scottish population, along with the familial aggregation and heritability of BP and hypertension traits. The second part validates the association of common SNPs reported in large GWASs and estimates the variance explained by these variants. Comprehensive genetic epidemiology analyses were performed on Generation Scotland: Scottish Family Health Study (GS:SFHS), one of the largest population-based family design studies. The availability of clinical and biological samples, self-reported information, and medical records for study participants allowed several assessments of the factors that influence BP variation in the Scottish population. Of the 20,753 subjects genotyped in the study, a total of 18,470 individuals (grouped into 7,025 extended families) passed the stringent quality control (QC) criteria and were available for all subsequent analyses. Based on the sources of BP-lowering treatment exposure, subjects were further classified into two groups: first, subjects with both a self-reported medications (SRMs) history and electronic prescription records (EPRs; n = 12,347); second, all subjects with at least one medication history source (n = 18,470). In the first group, the analysis showed good concordance between SRMs and EPRs (kappa = 71%), indicating that SRMs can be used as a surrogate to assess exposure to BP-lowering medication in GS:SFHS participants. Although both sources suffer from some limitations, SRMs can be considered the best available source to estimate drug exposure history in those without EPRs. The prevalence of hypertension was 40.8%, with a higher prevalence in men (46.3%) than in women (35.8%). The prevalences of awareness, treatment and controlled hypertension, as defined in the study, were 25.3%, 31.2%, and 54.3%, respectively.
These findings are lower than those of similar studies in other populations, with the exception of the prevalence of controlled hypertension, which compares favourably with other populations. Odds of hypertension were higher in men, obese or overweight individuals, people with a parental history of hypertension, and those living in the most deprived areas of Scotland. On the other hand, deprivation was associated with higher odds of treatment, awareness and controlled hypertension, suggesting that people living in the most deprived areas may have been receiving better quality of care, or may have higher comorbidity levels requiring greater engagement with doctors. These findings highlight the need for further work to improve hypertension management in Scotland. The family design of GS:SFHS allowed family-based analyses to assess the familial aggregation and heritability of BP and hypertension traits. The familial correlations of BP traits ranged from 0.07 to 0.20 for parent-offspring pairs and from 0.18 to 0.34 for sibling pairs. Higher correlations of BP traits were observed among first-degree relatives than among other types of relative pairs. A variance-component model adjusted for sex, body mass index (BMI), age, and age-squared was used to estimate the heritability of BP traits, which ranged from 24% to 32%, with pulse pressure (PP) having the lowest estimate. The genetic correlations between BP traits were high between systolic (SBP), diastolic (DBP) and mean arterial pressure (MAP) (G: 81% to 94%), but lower with PP (G: 22% to 78%). The sibling recurrence risk ratios (λS) for hypertension and treatment were 1.60 and 2.04, respectively. These findings confirm the genetic component of BP traits in GS:SFHS and justify further work to investigate the genetic determinants of BP. Genetic variants reported in recent large GWASs of BP traits were selected for genotyping in GS:SFHS using a custom-designed TaqMan® OpenArray®. The genotyping plate included 44 single nucleotide polymorphisms (SNPs) previously reported to be associated with BP or hypertension at genome-wide significance. A linear mixed model adjusted for age, age-squared, sex, and BMI was used to test for association between the genetic variants and BP traits. Of the 43 variants that passed QC, 11 showed a statistically significant association with at least one BP trait. The phenotypic variance explained by these variants was 1.4%, 1.5%, 1.6%, and 0.8% for SBP, DBP, MAP, and PP, respectively. A genetic risk score (GRS) constructed from the selected variants showed a positive association with BP level and hypertension prevalence, with an average effect of a one mmHg increase for each 0.80-unit increase in the GRS across the different BP traits. The impact of BP-lowering medication on genetic association studies of BP traits is well established, the typical practice being to add a fixed value (i.e. 15/10 mmHg) to the measured BP values to adjust for treatment. Using the subset of participants with both treatment exposure sources (i.e. SRMs and EPRs), the influence of using either source to justify the addition of this fixed value on SNP association signals was analysed. BP phenotypes derived from EPRs were considered the true phenotypes, and those derived from SRMs were considered less accurate, with some phenotypic noise.
Comparing SNP association signals for the four BP traits between the two models derived from the different adjustments showed that MAP was the least affected by the phenotypic noise: the same significant SNPs overlapped between the two models for MAP, while the other BP traits showed some discrepancy between the two sources.
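The abstract only names the genetic risk score construction; the snippet below shows the standard weighted-allele-count form of a GRS as a purely illustrative sketch. The SNP identifiers and per-allele effect sizes are hypothetical placeholders, not the 43 genotyped variants or published weights used in the thesis.

```python
import numpy as np

# Hypothetical per-allele effect sizes (mmHg per risk allele) for a handful of
# BP-associated SNPs; the thesis used published effect sizes for its variants.
effect_sizes = {"rsA": 0.45, "rsB": 0.32, "rsC": 0.51, "rsD": 0.28}

def genetic_risk_score(dosages: dict[str, int]) -> float:
    """Weighted sum of risk-allele counts (0, 1 or 2) over the chosen SNPs."""
    return sum(effect_sizes[snp] * dosages.get(snp, 0) for snp in effect_sizes)

# One illustrative individual: an association model would then regress the
# treatment-adjusted BP phenotype on this score (plus age, sex, BMI, etc.).
person = {"rsA": 2, "rsB": 1, "rsC": 0, "rsD": 1}
print(f"GRS = {genetic_risk_score(person):.2f}")
```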
Abstract:
Since it has been found that the MadGraph Monte Carlo generator offers superior flavour-matching capability compared to Alpgen, the suitability of MadGraph for the generation of tt̄bb̄ events is explored, with a view to simulating this background in searches for the Standard Model Higgs production and decay process tt̄H, H → bb̄. Comparisons are performed between the output of MadGraph and that of Alpgen, showing that satisfactory agreement in their predictions can be obtained with the appropriate generator settings. A search for the Standard Model Higgs boson, produced in association with a top-quark pair and decaying into a bb̄ pair, using 20.3 fb⁻¹ of 8 TeV collision data collected in 2012 by the ATLAS experiment at CERN’s Large Hadron Collider, is presented. The GlaNtp analysis framework, together with the RooFit package and associated software, is used to obtain an expected 95% confidence-level limit of 4.2^{+4.1}_{-2.0} times the Standard Model expectation, and the corresponding observed limit is found to be 5.9; this is within experimental uncertainty of the published result of the analysis performed by the ATLAS collaboration. A search for a heavy charged Higgs boson of mass mH± in the range 200 ≤ mH±/GeV ≤ 600, where the Higgs mediates the five-flavour beyond-the-Standard-Model physics process gb → tH± → ttb, with one top quark decaying leptonically and the other decaying hadronically, is presented, using the 20.3 fb⁻¹ 8 TeV ATLAS data set. Upper limits on the product of the production cross-section and the branching ratio of the H± boson are computed for six mass points, and these are found to be compatible within experimental uncertainty with those obtained by the corresponding published ATLAS analysis.
Abstract:
Droplet microfluidics is an active multidisciplinary area of research that evolved out of the larger field of microfluidics. It enables the user to handle, process and manipulate micrometer-sized emulsion droplets on a microfabricated platform. The capability to carry out a large number of individual experiments per unit time makes droplet microfluidic technology an ideal high-throughput platform for the analysis of biological and biochemical samples. The objective of this thesis was to use this technology to design systems with novel applications in the newly emerging field of synthetic biology. Chapter 4, the first results chapter, introduces a novel method of droplet coalescence using a flow-focusing capillary device. In Chapter 5, the development of a microfluidic platform for the fabrication of a cell-free micro-environment for site-specific gene manipulation and protein expression is described. Furthermore, a novel fluorescent reporter system which functions both in vivo and in vitro is introduced in this chapter. Chapter 6 covers the microfluidic fabrication of polymeric vesicles from poly(2-methyloxazoline-b-dimethylsiloxane-b-2-methyloxazoline) tri-block copolymer. The polymersome made from this polymer was used in the next chapter for the study of a chimeric membrane protein called mRFP1-EstA*. In Chapter 7, the application of microfluidics to the fabrication of synthetic biological membranes, recreating artificial cell-like chassis structures for the reconstitution of a membrane-anchored protein, is described.
Abstract:
Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and more chemically complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide two-dimensional projection (shadow) images of the 3D structure, leaving the three-dimensional information hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the ET tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed ET tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of the post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed ET tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET). DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, sparsity is applied on overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulated and real ET experiments on several morphologies were performed with a variety of setups. The reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields an improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select, or whether the images used strictly follow the preconditions of a certain transform (e.g. strictly piecewise constant for Total Variation minimisation). This also avoids artifacts that can be introduced by specific sparsifying transforms (e.g. the staircase artifacts that may result when using Total Variation minimisation).
Moreover, this thesis shows how reliable, elementally sensitive tomography is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise ratio inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material. Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
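The alternating structure of DLET (learn a patch dictionary and denoise in one step, restore the data in the other) is described above only in words. The following is a rough, generic sketch of that kind of alternation using off-the-shelf scikit-learn dictionary learning on 2D patches; it is not the thesis's algorithm, it omits the tomographic projection operator, and it stands in for the data-consistency step with a simple weighted blend toward the measured image.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dictionary_denoise(image, patch_size=(8, 8), n_atoms=64, sparsity=5):
    """One 'sparsify' half-step: learn a patch dictionary from the current
    estimate and rebuild the image from sparse (OMP) codes of its patches."""
    patches = extract_patches_2d(image, patch_size)
    X = patches.reshape(patches.shape[0], -1)
    mean = X.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=256,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=sparsity,
                                       random_state=0)
    codes = dico.fit_transform(X - mean)
    rebuilt = (codes @ dico.components_ + mean).reshape(patches.shape)
    return reconstruct_from_patches_2d(rebuilt, image.shape)

def toy_alternation(noisy, n_iter=3, data_weight=0.5):
    """Alternate patch-sparsity denoising with a crude data-consistency blend.
    A real DLET iteration would instead enforce consistency with the measured
    tilt series through the tomographic projection operator."""
    estimate = noisy.copy()
    for _ in range(n_iter):
        estimate = dictionary_denoise(estimate)
        estimate = data_weight * noisy + (1 - data_weight) * estimate
    return estimate

rng = np.random.default_rng(0)
phantom = np.zeros((64, 64)); phantom[20:44, 20:44] = 1.0   # toy piecewise object
noisy = phantom + rng.normal(0, 0.2, phantom.shape)
print("residual to phantom:", np.linalg.norm(toy_alternation(noisy) - phantom))
```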
Abstract:
This thesis describes the application of multispectral imaging to several novel oximetry applications. Chapter 1 motivates optical microvascular oximetry, outlines oxygen transport in the body, describes the theory of oximetry, and describes the challenges associated with in vivo oximetry, in particular imaging through tissue. Chapter 2 reviews various imaging techniques for quantitative in vivo oximetry of the microvasculature, including multispectral and hyperspectral imaging, photoacoustic imaging, optical coherence tomography, and laser speckle techniques. Chapter 3 describes a two-wavelength oximetry study of two microvascular beds in the anterior segment of the eye: the bulbar conjunctival and episcleral microvasculature. This study reveals previously unseen oxygen diffusion from ambient air into the bulbar conjunctival microvasculature, altering the oxygen saturation of the bulbar conjunctiva. The response of the bulbar conjunctival and episcleral microvascular beds to acute mild hypoxia is quantified, and the rate at which oxygen diffuses into bulbar conjunctival vessels is measured. Chapter 4 describes the development and application of a highly novel non-invasive retinal angiography technique: Oximetric Ratio Contrast Angiography (ORCA). ORCA requires only multispectral imaging and a small perturbation of blood oxygen saturation to produce angiographic sequences. A pilot study of ORCA in human subjects was conducted, demonstrating that ORCA can produce angiographic sequences with features such as sequential vessel filling and laminar flow. The applications and challenges of ORCA are discussed, with emphasis on comparison with other angiography techniques, such as fluorescein angiography. Chapter 5 describes the development of a multispectral microscope for oximetry in the spinal cord dorsal vein of rats. Measurements of blood oxygen saturation are made in the dorsal vein of both healthy rats and rats with the experimental autoimmune encephalomyelitis (EAE) disease model of multiple sclerosis. The venous blood oxygen saturation of EAE disease model rats was found to be significantly lower than that of healthy controls, indicating increased oxygen uptake from blood in the EAE disease model of multiple sclerosis. Chapter 6 describes the development of video-rate red-eye oximetry, a technique which could enable stand-off oximetry of the blood supply of the eye with high temporal resolution. The various challenges associated with video-rate red-eye oximetry are investigated and their influence quantified. The eventual aim of this research is to track circulating deoxygenation perturbations as they arrive in both eyes, which could provide a screening method for carotid artery stenosis, a major risk factor for stroke. However, due to time constraints, it was not possible to thoroughly investigate whether video-rate red-eye oximetry can detect such perturbations. Directions and recommendations for future research are outlined.
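Two-wavelength oximetry of the kind used in chapter 3 ultimately reduces to a small linear system: the measured optical densities at two wavelengths are modelled as linear combinations of the oxy- and deoxyhaemoglobin extinction spectra. The sketch below is generic, with placeholder extinction coefficients and measurements rather than the calibrated values a real imaging oximeter would use.

```python
import numpy as np

# Placeholder molar extinction coefficients [HbO2, Hb] at the two wavelengths
# (arbitrary units); real oximetry uses tabulated spectra and a calibrated
# path-length / scattering model.
EXTINCTION = np.array([
    [0.29, 1.10],   # wavelength 1: Hb-dominated
    [1.06, 0.78],   # wavelength 2: HbO2-sensitive
])

def oxygen_saturation(od: np.ndarray) -> float:
    """Estimate SO2 from optical densities at the two wavelengths.

    Solves OD = E @ [c_HbO2, c_Hb] for the two concentrations and returns
    c_HbO2 / (c_HbO2 + c_Hb).
    """
    c_hbo2, c_hb = np.linalg.solve(EXTINCTION, od)
    return c_hbo2 / (c_hbo2 + c_hb)

# Illustrative measurement: optical densities of a vessel segment relative to
# the surrounding tissue at the two wavelengths.
print(f"SO2 ~ {100 * oxygen_saturation(np.array([0.80, 0.95])):.1f}%")
```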
Abstract:
Crossing the Franco-Swiss border, the Large Hadron Collider (LHC), designed to collide 7 TeV proton beams, is the world's largest and most powerful particle accelerator, the operation of which was originally intended to commence in 2008. Unfortunately, due to an interconnect discontinuity in one of the main dipole circuit's 13 kA superconducting busbars, a catastrophic quench event occurred during initial magnet training, causing significant physical damage to the system. Furthermore, investigation into the cause found that such discontinuities were present not only in the circuit in question, but throughout the entire LHC. This prevented further magnet training and ultimately resulted in the maximum sustainable beam energy being limited to approximately half the design nominal, 3.5-4 TeV, for the first three years of operation (Run 1, 2009-2012), and in a major consolidation campaign being scheduled for the first long shutdown (LS 1, 2012-2014). Throughout Run 1, a series of studies attempted to predict the number of post-installation training quenches still required to qualify each circuit to nominal-energy current levels. With predictions in excess of 80 quenches (each having a recovery time of 8-12+ hours) just to achieve 6.5 TeV, and close to 1000 quenches for 7 TeV, it was decided that for Run 2 all systems should be qualified for at least 6.5 TeV operation. However, even with all interconnect discontinuities scheduled to be repaired during LS 1, numerous other concerns regarding circuit stability arose: in particular, observations of erratic behaviour of the magnet bypass diodes and of the degradation of other potentially weak busbar sections, as well as observations of seemingly random millisecond spikes in beam losses, known as unidentified falling object (UFO) events, which, if they persist at 6.5 TeV, may eventually deposit sufficient energy to quench adjacent magnets. In light of the above, the thesis hypothesis states that, even with the observed issues, the LHC main dipole circuits can safely support and sustain near-nominal proton beam energies of at least 6.5 TeV. Research into minimising the risk of magnet training led to the development and implementation of a new qualification method, capable of providing conclusive evidence that all aspects of all circuits, other than the magnets and their internal joints, can safely withstand a quench event at near-nominal current levels, allowing magnet training to be carried out both systematically and without risk. This method has become known as the Copper Stabiliser Continuity Measurement (CSCM). Results were a success, with all circuits eventually being subjected to a full current decay from 6.5 TeV-equivalent current levels with no measurable damage occurring. Research into UFO events led to the development of a numerical model capable of simulating typical UFO events, reproducing the entire set of Run 1 measured event data and extrapolating to 6.5 TeV to predict the likelihood of UFO-induced magnet quenches. The results provided interesting insights into the phenomena involved, as well as confirming the possibility of UFO-induced magnet quenches. The model was also capable of predicting whether such events, if left unaccounted for, are likely to be commonplace, resulting in significant long-term issues for 6.5+ TeV operation.
Addressing the thesis hypothesis, the following written works detail the development and results of all CSCM qualification tests and subsequent magnet training, as well as the development and simulation results of both 4 TeV and 6.5 TeV UFO event modelling. The thesis concludes, post-LS 1, with the LHC successfully sustaining 6.5 TeV proton beams, but with UFO events, as predicted, causing magnet quenches that would not otherwise have been initiated and remaining at the forefront of system availability issues.
Abstract:
The results of two separate searches for the rare two-body charmless baryonic decays B0 -> p pbar and B0s -> p pbar at the LHCb experiment are reported in this thesis. The first analysis uses a data sample, corresponding to an integrated luminosity of 0.9 fb^-1, of proton-proton collision data collected by the LHCb experiment at a centre-of-mass energy of 7 TeV. An excess of B0 -> p pbar candidates with respect to background expectations is seen with a statistical significance of 3.3 standard deviations. This constitutes the first evidence for a two-body charmless baryonic B0 decay. No significant B0s -> p pbar signal was observed. However, a small excess of B0s -> p pbar events allowed the extraction of two-sided confidence intervals for the B0s -> p pbar branching fraction using the Feldman-Cousins frequentist method. This improved the upper limit on the B0s -> p pbar branching fraction by three orders of magnitude over previous bounds. The 68.3% confidence level intervals on the branching fractions were measured to be BF(B0 -> p pbar) = ( 1.47 ^{+0.62}_{-0.51} ^{+0.35}_{-0.14} ) x 10^-8 and BF(B0s -> p pbar) = ( 2.84 ^{+2.03}_{-1.68} ^{+0.85}_{-0.18} ) x 10^-8, where the first uncertainty is statistical and the second is systematic. The second analysis followed on from the first LHCb result and included the full 2011 and 2012 samples of proton-proton collision data at centre-of-mass energies of 7 and 8 TeV, corresponding to a total integrated luminosity of 3.122 fb^-1.
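The Feldman-Cousins construction mentioned above can be illustrated with a simple Poisson counting model (signal mean mu on a known background b): for each trial value of mu, candidate outcomes are ranked by their likelihood ratio to the best-fit mu and accepted until the desired coverage is reached, and the interval for an observed count is the set of mu whose acceptance region contains it. The sketch below is generic, with invented yields, and is not the branching-fraction analysis actually performed in the thesis.

```python
import numpy as np
from scipy.stats import poisson

def feldman_cousins_interval(n_obs, background, cl=0.683,
                             mu_grid=np.linspace(0, 20, 401), n_max=60):
    """Feldman-Cousins interval for a Poisson signal mean with known background."""
    accepted_mu = []
    n_vals = np.arange(n_max + 1)
    for mu in mu_grid:
        prob = poisson.pmf(n_vals, mu + background)
        mu_best = np.maximum(n_vals - background, 0.0)             # best-fit signal for each n
        ratio = prob / poisson.pmf(n_vals, mu_best + background)   # likelihood-ratio ordering
        order = np.argsort(ratio)[::-1]
        coverage, accept = 0.0, set()
        for n in order:                        # add outcomes in decreasing rank
            accept.add(int(n))
            coverage += prob[n]
            if coverage >= cl:
                break
        if n_obs in accept:
            accepted_mu.append(mu)
    return min(accepted_mu), max(accepted_mu)

# Illustrative numbers only (not the thesis's yields): 8 observed events on an
# expected background of 3.2 gives a 68.3% CL interval on the signal mean.
lo, hi = feldman_cousins_interval(n_obs=8, background=3.2)
print(f"68.3% CL interval for mu: [{lo:.2f}, {hi:.2f}]")
```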
Abstract:
This thesis explores the potential of chiral plasmonic nanostructures for the ultrasensitive detection of protein structure. These nanostructures support the generation of fields with enhanced chirality relative to circularly polarised light and are an extremely incisive probe of protein structure. In chapter 4 we introduce a nanopatterned Au film (Templated Plasmonic Substrate, TPS) fabricated using a high-throughput injection moulding technique, which is a viable alternative to expensive lithographically fabricated nanostructures. The optical and chiroptical properties of TPS nanostructures are found to be highly dependent on the coupling between the electric and magnetic modes of the constituent solid and inverse structures. Significantly, refractive-index-based measurements of strongly coupled TPSs display a similar sensitivity to protein structure as previous lithographic nanostructures. We subsequently endeavour to improve the sensing properties of TPS nanostructures by developing a high-throughput nanoscale chemical functionalisation technique. This process involves a chemical protection/deprotection strategy. The protection step generates a self-assembled monolayer (SAM) of a thermally responsive polymer on the TPS surface which inhibits protein binding. The deprotection step exploits the presence of nanolocalised thermal gradients in the water surrounding the TPS upon irradiation with an 8 ns pulsed laser to modify the SAM conformation on surfaces with high net chirality. This allows binding of biomaterial in these regions and subsequently enhances the TPS sensitivity levels. In chapter 6 an alternative method for the detection of protein structure using TPS nanostructures is introduced. This technique relies on mediation of the electric/magnetic coupling in the TPS by the adsorbed protein. This phenomenon is probed through both linear reflectance and nonlinear second harmonic generation (SHG) measurements. Detection of protein structure using this method does not require the presence of fields of enhanced chirality, and it is also sensitive to a larger array of secondary structure motifs than the measurements in chapters 4 and 5. Finally, a preliminary investigation into the detection of mesoscale biological structure is presented. Sensitivity to the mesoscale helical pitch of insulin amyloid fibrils is demonstrated through the asymmetry in the circular dichroism (CD) of lithographic gammadions of varying thickness upon adsorption of insulin amyloid fibril spherulites and fragmented fibrils. The proposed model for this sensitivity to the helical pitch relies on the vertical height of the nanostructures relative to this structural property, as well as on the binding orientation of the fibrils.
Abstract:
The origin of divergent logarithmic contributions to gauge theory cross sections arising from soft and collinear radiation is explored, and a general prescription for tackling next-to-soft logarithms is presented. The NNLO Abelian-like contributions to the Drell-Yan K-factor are reproduced using this generalised prescription. The soft limit of gravity is explored, where the interplay between the eikonal phase and Reggeization of the graviton is explained using Wilson line techniques. The Wilson line technique is then implemented to treat the set of next-to-soft contributions arising from dressing external partons with a next-to-soft Wilson line.
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak-signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data streams. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting them to incorporate the antenna's motion. However, none of the observables survived the flexing of the arms, in that they no longer led to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises which simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produces two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. The transformation of the raw data using the corresponding eigenvectors also produces data that are free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produce the same outcome, that is, data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. To test the connection between the principal components and the TDI observables, a 10 x 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed, unequal arm lengths and stationary noises with equal variances for each noise type. The results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables, and therefore analysis using principal components should give the same results as analysis using the traditional observables. This was proven by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, the arm lengths and the noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which appear in the covariance matrix; from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix is destroyed, which affects any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction in the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices are block diagonal, which simplifies the computation of the eigenvalues by allowing it to be done separately for each block. In general the results showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
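A toy version of the eigendecomposition argument above can be set up in a few lines: build synthetic data streams that all share the same large laser-frequency noise plus small independent photodetector noise, form the sample covariance matrix, and inspect its eigenvalues. The laser noise dominates a small number of directions, while the remaining eigenvectors span combinations that are (nearly) laser-noise-free, a crude analogue of the TDI-like observables. This is a generic illustration with a single unshifted laser noise, not the 10 x 10 integer-valued model used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels = 20_000, 6

# Shared laser-frequency noise (huge) appearing in every channel, plus small
# independent photodetector noise per channel. Real LISA data streams would
# carry time-shifted copies of several lasers; this toy keeps one, unshifted.
laser = 1e3 * rng.normal(size=n_samples)
data = np.outer(laser, np.ones(n_channels)) + rng.normal(size=(n_samples, n_channels))

cov = np.cov(data, rowvar=False)          # 6x6 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order

print("eigenvalues:", np.round(eigvals, 2))
# The largest eigenvalue carries the laser noise; projecting the data onto the
# remaining eigenvectors gives combinations in which it has cancelled.
clean = data @ eigvecs[:, :-1]
print("std of laser-free combinations:", np.round(clean.std(axis=0), 2))
```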
Abstract:
The ability to measure tiny variations in the local gravitational acceleration allows – amongst other applications – the detection of hidden hydrocarbon reserves, magma build-up before volcanic eruptions, and subterranean tunnels. Several technologies are available that achieve the sensitivities (tens of μGal/√Hz) and stabilities (over periods of days to weeks) required for such applications: free-fall gravimeters, spring-based gravimeters, superconducting gravimeters, and atom interferometers. All of these devices can observe the Earth tides: the elastic deformation of the Earth’s crust as a result of tidal forces. This is a universally predictable gravitational signal that requires both high sensitivity and high stability over timescales of several days to measure. All present gravimeters, however, have the limitations of excessive cost (£70 k) and high mass (>8 kg). In this thesis, the building of a microelectromechanical system (MEMS) gravimeter with a sensitivity of 40 μGal/√Hz in a package size of only a few cubic centimetres is discussed. MEMS accelerometers – found in most smart phones – can be mass-produced remarkably cheaply, but most are not sensitive enough, and none have been stable enough to be called a ‘gravimeter’. The remarkable stability and sensitivity of the device is demonstrated with a measurement of the Earth tides. Such a measurement has never before been undertaken with a MEMS device, and it proves the long-term stability of the instrument compared with any other MEMS device, making it the first MEMS accelerometer that can be classed as a gravimeter. This heralds a transformative step in MEMS accelerometer technology. Due to their small size and low cost, MEMS gravimeters could create a new paradigm in gravity mapping: exploration surveys could be carried out with drones instead of low-flying aircraft; they could be used for distributed land surveys in exploration settings and for the monitoring of volcanoes; or they could be built into multi-pixel density-contrast imaging arrays.
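To put the quoted 40 μGal/√Hz figure in context, a roughly white amplitude spectral density translates into a statistical uncertainty that shrinks with the square root of the averaging time, which is what makes a multi-day Earth-tide signal resolvable. The sketch below treats the tidal amplitude as a nominal round number of 100 μGal purely for scale; it is a back-of-envelope estimate, not a calculation from the thesis.

```python
import numpy as np

NOISE_ASD = 40.0        # device sensitivity, micro-Gal per sqrt(Hz)
TIDE_AMPLITUDE = 100.0  # nominal Earth-tide signal scale, micro-Gal (order of magnitude)

def uncertainty_after_averaging(asd_ugal_rthz: float, seconds: float) -> float:
    """Approximate 1-sigma uncertainty of a mean gravity value after averaging
    white noise of the given amplitude spectral density for `seconds`."""
    return asd_ugal_rthz / np.sqrt(seconds)

for minutes in (1, 10, 100):
    sigma = uncertainty_after_averaging(NOISE_ASD, minutes * 60)
    print(f"{minutes:4d} min average: +/-{sigma:5.2f} uGal "
          f"(tide SNR ~ {TIDE_AMPLITUDE / sigma:.0f})")
```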