6 results for Topology-based methods

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance: 100.00%

Abstract:

The research work included in this thesis examines the synthesis, characterization and chromatographic evaluation of novel bonded silica stationary phases. Innovative methods of preparing silica hydride intermediates and octadecylsilica using a “green chemistry” approach eliminate the use of toxic organic solvents and exploit the solvating power and enhanced diffusivity of supercritical carbon dioxide to produce phases with a surface coverage of bonded ligands that is comparable to, or exceeds, that achieved using traditional organic solvent-based methods. A new stationary phase is also discussed which displays chromatographic selectivity based on molecular recognition. Chapter 1 introduces the chemistry of silica stationary phases; the retention mechanisms and theories on which reversed-phase liquid chromatography and hydrophilic interaction chromatography are based; the art and science of achieving a well-packed liquid chromatography column; the properties of supercritical carbon dioxide; and molecular recognition chemistry. Chapter 2 compares the properties of silica hydride materials prepared using supercritical carbon dioxide as the reaction medium with those synthesized in an organic solvent. A higher coverage of hydride groups on the silica surface is seen when a monofunctional silane is reacted in supercritical carbon dioxide, while trifunctional silanes produce a phase whose properties differ depending on the reaction medium used. The differing chromatographic behaviour of the silica hydride materials prepared in supercritical carbon dioxide and in organic solvent is explored in Chapter 3. Chapter 4 focuses on the preparation of octadecylsilica using mono-, di- and trifunctional alkoxysilanes in supercritical carbon dioxide and in anhydrous toluene. The surface coverage of octadecyl groups, as calculated from thermogravimetric analysis and elemental analysis, is highest when a trifunctional alkoxysilane is reacted with silica in supercritical carbon dioxide. A novel silica stationary phase which displays selectivity for analytes based on their hydrogen-bonding capabilities is discussed in Chapter 5. The phase is also highly selective for barbituric acid and may have a future application in the solid-phase extraction of barbiturates from biological samples.
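Surface coverage values of the kind quoted above are conventionally derived from the carbon percentage measured by elemental analysis. As a general illustration (the widely used Berendsen–de Galan form, not necessarily the exact calculation performed in the thesis):

```latex
% Surface coverage of a bonded ligand from elemental analysis
% (Berendsen-de Galan form):
%   P_C     carbon percentage (%) from elemental analysis
%   n_C     number of carbon atoms in the bonded ligand
%   M       molecular weight of the bonded ligand (g/mol)
%   S_BET   specific surface area of the bare silica (m^2/g)
\alpha\,(\mu\mathrm{mol\,m^{-2}})
  = \frac{10^{6}\,P_{C}}{\bigl(1200\,n_{C} - P_{C}\,(M-1)\bigr)\,S_{\mathrm{BET}}}
```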

Relevance: 90.00%

Abstract:

Many studies have shown the considerable potential of remote-sensing-based methods for deriving estimates of lake water quality. However, the reliable application of these methods across time and space is complicated by the diversity of lake types, sensor configurations, and the multitude of different algorithms proposed. This study tested one operational and 46 empirical algorithms sourced from the peer-reviewed literature, each of which had individually shown potential for estimating lake water quality in the form of chlorophyll-a (a proxy for algal biomass) and Secchi disc depth (SDD, a measure of water transparency) in independent studies. Nearly half (19) of the algorithms were unsuitable for use with the remote-sensing data available for this study. The remaining 28 were assessed using the Terra/Aqua satellite archive to identify the best-performing algorithms in terms of accuracy and transferability within the period 2001–2004 in four test lakes, namely Vänern, Vättern, Geneva, and Balaton. These lakes represent the broad continuum of large European lake types, varying in eco-region (latitude/longitude and altitude), morphology, mixing regime, and trophic status. All algorithms were tested for each lake separately and for the lakes combined, to assess their applicability across ecologically different sites. None of the algorithms assessed in this study exhibited promise when all four lakes were combined into a single data set, and most performed poorly even for specific lake types. A chlorophyll-a retrieval algorithm originally developed for eutrophic lakes showed the most promising results (R² = 0.59) in oligotrophic lakes. Two SDD retrieval algorithms, one originally developed for turbid lakes and the other for lakes with various characteristics, exhibited promising results in relatively less turbid lakes (R² = 0.62 and 0.76, respectively). The results presented here highlight the complexity associated with remotely sensed lake water quality estimates and the high degree of uncertainty arising from various limitations, including lake optical properties and the choice of methods.
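A minimal sketch of this per-lake versus pooled assessment: the band-ratio retrieval, its coefficients, and the synthetic "matchup" data below are all hypothetical stand-ins for the study's actual algorithms and Terra/Aqua observations.

```python
# Sketch: score one hypothetical chlorophyll-a retrieval per lake and pooled.
import numpy as np

def r_squared(observed, predicted):
    """Coefficient of determination between in-situ and retrieved values."""
    ss_res = np.sum((observed - predicted) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def band_ratio_chl(red, nir, a=25.0, b=-5.0):
    """Hypothetical empirical retrieval: chlorophyll-a from a NIR/red ratio."""
    return a * (nir / red) + b

rng = np.random.default_rng(42)
lakes = {}
# "bias" perturbs each lake's ratio-to-chlorophyll relationship, mimicking
# differences in optical properties across ecologically different lakes.
for name, mean_chl, bias in [("Vanern", 2.0, 1.00), ("Vattern", 1.5, 0.95),
                             ("Geneva", 4.0, 1.10), ("Balaton", 12.0, 1.25)]:
    chl = mean_chl * rng.lognormal(0.0, 0.4, size=30)   # in-situ chl-a (mg/m^3)
    red = 0.02 + rng.normal(0.0, 0.001, size=30)        # synthetic reflectance
    ratio = (chl + 5.0) / 25.0 * bias + rng.normal(0.0, 0.03, size=30)
    lakes[name] = (chl, red, red * ratio)

# Per-lake skill, as in the study's lake-by-lake assessment.
for name, (chl, red, nir) in lakes.items():
    print(f"{name:8s} R^2 = {r_squared(chl, band_ratio_chl(red, nir)):.2f}")

# Pooled skill: all four lakes combined into a single data set.
chl_all = np.concatenate([v[0] for v in lakes.values()])
pred_all = np.concatenate([band_ratio_chl(v[1], v[2]) for v in lakes.values()])
print(f"combined R^2 = {r_squared(chl_all, pred_all):.2f}")
```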

Relevance: 80.00%

Abstract:

There is an increasing appreciation of the polymicrobial nature of the bacterial infections associated with Cystic Fibrosis (CF) and of the important role of interspecies interactions in influencing bacterial virulence and response to therapy. Patients with CF are often co-infected with Pseudomonas aeruginosa, Burkholderia cenocepacia and Stenotrophomonas maltophilia. The latter two bacteria produce signal molecules of the diffusible signal factor (DSF) family, which are cis-2-unsaturated fatty acids. Previous studies showed that DSF from S. maltophilia leads to altered biofilm formation and increased antibiotic tolerance in P. aeruginosa, and that these responses require the P. aeruginosa sensor kinase PA1396. The work in this thesis aims to further elucidate the influence and mechanism of DSF signalling on P. aeruginosa and to examine the role such interspecies signalling plays in infection of the CF airway. Next-generation sequencing technologies targeting the 16S ribosomal RNA gene were applied to DNA and RNA isolated from sputum taken from cohorts of CF and non-CF subjects to characterise the bacterial community. In parallel, metabolomic analysis of sputum provided insight into the environment of the CF airway. This analysis yielded several observations: differences in metabolites occur between sputum from clinically stable CF patients and sputum from patients with exacerbation, and the DNA- and RNA-based methods suggested a strong relationship between the abundance of specific strict anaerobes and fluctuations in metabolite levels during exacerbation. DSF family signals were also detected in the sputum, and a correlation with the presence of DSF-producing organisms was observed. To examine the signal transduction mechanisms used by P. aeruginosa, bioinformatic analysis combined with site-directed mutagenesis was employed to identify signalling partners for PA1396. A pathway was observed suggesting roles for a number of proteins in the regulation of several factors following DSF recognition by PA1396.
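A minimal sketch of the abundance/metabolite correlation step: the taxa, the metabolite, and all values below are hypothetical placeholders, and Spearman rank correlation is used here as a common choice for such data, not necessarily the method used in the thesis.

```python
# Sketch: correlate per-taxon 16S relative abundance with a metabolite level.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_samples = 20                                    # sputum samples (stable + exacerbation)
metabolite = rng.lognormal(1.0, 0.5, n_samples)   # e.g. a fermentation product level

# Synthetic read counts (rows: taxa, columns: samples); the first "anaerobe"
# is made to track the metabolite so the sketch has a signal to find.
counts = rng.poisson(
    lam=np.outer([5.0, 2.0, 8.0], np.ones(n_samples)), size=(3, n_samples)
).astype(float)
counts[0] += 2.0 * metabolite
rel_abund = counts / counts.sum(axis=0)           # relative abundance per sample

for taxon, row in zip(["Prevotella", "Veillonella", "Streptococcus"], rel_abund):
    rho, p = spearmanr(row, metabolite)
    print(f"{taxon:14s} Spearman rho = {rho:+.2f}  (p = {p:.3g})")
```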

Relevance: 80.00%

Abstract:

New compensation methods are presented that can greatly reduce the slit errors (i.e. transition location errors) and interval errors induced by non-idealities in square-wave optical incremental encoders. An M/T-type, constant sample-time digital tachometer (CSDT) is selected for measuring the velocity of the sensor drives. Using these data, three encoder compensation techniques (two pseudoinverse-based methods and an iterative method) are presented that improve velocity measurement accuracy. The methods do not require precise knowledge of shaft velocity. During the initial learning stage of the compensation algorithm (possibly performed in situ), slit errors/interval errors are calculated through pseudoinverse-based solutions of simple approximate linear equations, which provide fast solutions, or through an iterative method that requires very little memory storage. Subsequent operation of the motion system uses the adjusted slit positions for more accurate velocity calculation. The theoretical analysis of the compensation of encoder errors considers error sources such as random electrical noise and error in the estimated reference velocity. Initially, the proposed learning compensation techniques are validated by implementing the algorithms in MATLAB, showing a 95% to 99% improvement in velocity measurement. However, the efficiency of the algorithms decreases in the presence of greater non-repetitive random noise and/or errors in the reference velocity calculations. The performance improvement in velocity measurement is also demonstrated experimentally using motor-drive systems, each of which includes a field-programmable gate array (FPGA) for CSDT counting/timing purposes and a digital signal processor (DSP). Results from open-loop velocity measurement and closed-loop servo-control applications, on three optical incremental square-wave encoders and two motor drives, are compiled. When these algorithms are implemented experimentally on different drives (with and without a flywheel) and on encoders of different resolutions, slit error reductions of 60% to 86% are obtained (typically approximately 80%).
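A minimal sketch of one pseudoinverse-based learning step, under simplifying assumptions (near-constant shaft speed during learning, a single channel, no counter quantisation); the encoder parameters, noise levels, and the particular linear model are illustrative, not those developed in the thesis.

```python
# Sketch: learn per-slit position errors from transition timings via pinv.
import numpy as np

N = 16                                   # slits per revolution (illustrative)
REVS = 50                                # revolutions observed during learning
rng = np.random.default_rng(7)

true_delta = rng.normal(0.0, 0.002, N)   # true slit position errors (rad)
true_delta -= true_delta.mean()          # errors are defined up to a constant

omega = 100.0                            # true shaft speed (rad/s)
nominal = 2.0 * np.pi / N                # nominal interval angle (rad)

# Simulated transition intervals: true interval angle / speed, plus timing noise.
interval_angle = nominal + np.diff(np.r_[true_delta, true_delta[0]])
dt = np.tile(interval_angle / omega, REVS)
dt += rng.normal(0.0, 1e-7, dt.size)     # electrical/timing noise

# Reference speed per revolution from the full-revolution time, which is
# insensitive to slit errors (a full revolution is exactly 2*pi).
omega_ref = 2.0 * np.pi / dt.reshape(REVS, N).sum(axis=1)

# Approximate linear model per interval:
#   delta[(i+1) % N] - delta[i] = omega_ref * dt - nominal
A = np.zeros((REVS * N, N))
b = np.empty(REVS * N)
for r in range(REVS):
    for i in range(N):
        row = r * N + i
        A[row, (i + 1) % N] = 1.0        # +delta of the next slit
        A[row, i] -= 1.0                 # -delta of this slit
        b[row] = omega_ref[r] * dt[row] - nominal

# A is rank-deficient (adding a constant to all deltas changes nothing);
# the pseudoinverse returns the minimum-norm, i.e. zero-mean, estimate.
est_delta = np.linalg.pinv(A) @ b
print("max residual slit error (rad):", np.abs(est_delta - true_delta).max())
```

The learned `est_delta` would then be used to adjust the assumed slit positions, so subsequent per-interval velocity estimates divide by the corrected interval angles rather than the nominal ones.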

Relevance: 80.00%

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, one or more processor cores run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial to satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. A fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific piece of software is key to choosing appropriate algorithms for writing power-efficient code. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. The need has therefore arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code which repeat during execution, then building a power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup compared with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm, based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
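A minimal sketch of both prediction styles: the per-operation energies, instruction counts, and model coefficients below are hypothetical placeholders, not the ACSL/8051 or LEON3 figures measured in the thesis, and the comparison count is estimated empirically rather than via MOQA.

```python
# Sketch: (1) weighted-instruction-count ALU energy, (2) average-case model.
import random

# --- Style 1: data-independent per-operation energies (ACSL-like ALU) -------
# If each ALU operation costs a fixed energy regardless of operand values,
# total ALU energy reduces to a weighted instruction count.
ALU_ENERGY_PJ = {"add": 3.0, "sub": 3.1, "and": 2.2, "xor": 2.4}  # hypothetical
profile = {"add": 120_000, "sub": 45_000, "and": 30_000, "xor": 8_000}
alu_energy_pj = sum(ALU_ENERGY_PJ[op] * n for op, n in profile.items())
print(f"estimated ALU energy: {alu_energy_pj / 1e6:.3f} uJ")

# --- Style 2: average-case energy from expected comparison counts -----------
def insertion_sort_comparisons(a):
    """Sort a copy of `a`, returning the number of key comparisons made."""
    a, comps = list(a), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comps += 1                    # one comparison per loop test
            if a[j] <= key:
                break
            a[j + 1] = a[j]               # shift the larger element right
            j -= 1
        a[j + 1] = key
    return comps

n, trials = 64, 2_000
avg = sum(insertion_sort_comparisons(random.sample(range(n), n))
          for _ in range(trials)) / trials
print(f"measured average comparisons (n={n}): {avg:.1f}")
print(f"n(n-1)/4 leading-order estimate:      {n * (n - 1) / 4:.1f}")

# Linear energy model: fixed overhead plus per-comparison cost (hypothetical).
E_BASE_NJ, E_CMP_NJ = 50.0, 0.8
print(f"predicted sort energy: {E_BASE_NJ + E_CMP_NJ * avg:.1f} nJ")
```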

Relevance: 40.00%

Abstract:

Background: Many European countries, including Ireland, lack high-quality, ongoing, population-based estimates of maternal behaviours and experiences during pregnancy. PRAMS is a CDC surveillance program established in the United States in 1987 to generate high-quality, population-based data to reduce infant mortality rates and improve maternal and infant health. PRAMS is the only ongoing population-based surveillance system of maternal behaviours and experiences before, during and after pregnancy worldwide.

Methods: The objective of this study was to adapt, test and evaluate a modified CDC PRAMS methodology in Ireland. The birth certificate file, the standard sampling frame for PRAMS in the United States, was not available for the PRAMS Ireland study. Consequently, delivery record books for the period between 3 and 5 months before the study start date at a large urban obstetric hospital [8,900 births per year] were used to randomly sample 124 women. Name, address, maternal age, infant sex, gestational age at delivery, delivery method, APGAR score and birth weight were manually extracted from the records. Stillbirths and early neonatal deaths were excluded using APGAR scores and hospital records. Women were sent a letter of invitation to participate, including an option to opt out, followed by a modified PRAMS survey, a reminder letter and a final survey.

Results: The response rate for the pilot was 67%. Two per cent of women refused the survey, 7% opted out of the study and 24% did not respond. Survey items were at least 88% complete for all 82 respondents. Prevalence estimates of socially undesirable behaviours such as alcohol consumption during pregnancy were high [>50%] and comparable with international estimates.

Conclusion: PRAMS is a feasible and valid method of collecting information on maternal experiences and behaviours during pregnancy in Ireland. With further work, PRAMS may offer a solution to data deficits in maternal health behaviour indicators in Ireland. This study is important to researchers in Europe and elsewhere who may be interested in tailoring an established CDC methodology to their own settings to resolve data deficits in maternal health.