47 results for STEP-NC format

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance:

30.00%

Publisher:

Abstract:

In this thesis a novel transmission format, named Coherent Wavelength Division Multiplexing (CoWDM), is proposed and studied for use in high information spectral density optical communication networks. In Chapter I a historical view of fibre optic communication systems, as well as an overview of state of the art technology, is presented to provide an introduction to the subject area. We see that, in general, the aim of modern optical communication system designers is to provide high bandwidth services while reducing the overall cost per transmitted bit of information. In the remainder of the thesis a range of investigations, both theoretical and experimental in nature, are carried out using the CoWDM transmission format. These investigations are designed to consider features of CoWDM such as its dispersion tolerance, its compatibility with forward error correction and its suitability for use in currently installed long haul networks, amongst others. A high bit rate optical test bed constructed at the Tyndall National Institute facilitated most of the experimental work outlined in this thesis, and a collaboration with France Telecom enabled long haul transmission experiments using the CoWDM format to be carried out. Research was also carried out on ancillary topics such as optical comb generation, forward error correction and phase stabilisation techniques. The aim of these investigations is to verify the suitability of CoWDM as a cost effective solution for use in both current and future high bit rate optical communication networks.

Relevance:

30.00%

Publisher:

Abstract:

Objectives: To measure the step-count accuracy of an ankle-worn accelerometer, a thigh-worn accelerometer and one pedometer in older and frail inpatients. Design: Cross-sectional design study. Setting: Research room within a hospital. Participants: Convenience sample of inpatients aged ≥65 years, able to walk 20 metres unassisted, with or without a walking-aid. Intervention: Patients completed a 40-minute programme of predetermined tasks while wearing the three motion sensors simultaneously. Video-recording of the procedure provided the criterion measurement of step-count. Main Outcome Measures: Mean percentage (%) errors were calculated for all tasks, slow versus fast walkers, independent versus walking-aid-users, and over shorter versus longer distances. The Intra-class Correlation was calculated and accuracy was visually displayed by Bland-Altman plots. Results: Thirty-two patients (78.1 ± 7.8 years) completed the study. Fifteen were female and 17 used walking-aids. Their median speed was 0.46 m/sec (interquartile range, IQR 0.36-0.66). The ankle-worn accelerometer overestimated steps (median 1% error, IQR -3 to 13). The other two motion sensors underestimated steps, with median errors of -40% (IQR -51 to -35) and -38% (IQR -93 to -27), respectively. The ankle-worn accelerometer proved more accurate over longer distances (3% error, IQR 0 to 9) than over shorter distances (10%, IQR -23 to 9). Conclusions: The ankle-worn accelerometer gave the most accurate step-count measurement and was most accurate over longer distances. Neither of the other motion sensors had acceptable margins of error.
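As an illustration of the outcome measures described above, the following sketch shows how per-task percentage errors and Bland-Altman statistics (bias and 95% limits of agreement) might be computed from device step counts against the video criterion; the step counts used here are hypothetical and are not data from the study.

    import numpy as np

    # Hypothetical step counts: video-recorded criterion vs. a worn motion sensor
    video = np.array([120, 95, 210, 60, 180], dtype=float)
    device = np.array([122, 93, 215, 58, 190], dtype=float)

    # Percentage error per task, relative to the criterion (video) count
    pct_error = 100.0 * (device - video) / video
    print("median % error:", np.median(pct_error))

    # Bland-Altman statistics: mean difference (bias) and 95% limits of agreement
    diff = device - video
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    print(f"bias = {bias:.1f} steps, limits of agreement = "
          f"[{bias - half_width:.1f}, {bias + half_width:.1f}]")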

Relevance:

20.00%

Publisher:

Abstract:

Wind energy is the energy source that contributes most to the renewable energy mix of European countries. While there are good wind resources throughout Europe, the intermittency of the wind represents a major problem for the deployment of wind energy into the electricity networks. To ensure grid security, a Transmission System Operator today needs, for each kilowatt of wind energy, either an equal amount of spinning reserve or a forecasting system that can predict the amount of energy that will be produced from wind over a period of 1 to 48 hours. In the range from 5 m/s to 15 m/s a wind turbine's production increases with the third power of the wind speed. For this reason, a Transmission System Operator requires an accuracy of 1 m/s for wind speed forecasts in this wind speed range. Forecasting wind energy with a numerical weather prediction model in this context forms the background of this work. The author's goal was to present a pragmatic solution to this specific problem in the "real world". This work therefore has to be seen in a technical context and hence does not provide, nor intends to provide, a general overview of the benefits and drawbacks of wind energy as a renewable energy source. In the first part of this work the accuracy requirements of the energy sector for wind speed predictions from numerical weather prediction models are described and analysed. A unique set of numerical experiments was carried out in collaboration with the Danish Meteorological Institute to investigate the forecast quality of an operational numerical weather prediction model for this purpose. The results of this investigation revealed that the accuracy requirements for wind speed and wind power forecasts from today's numerical weather prediction models can only be met at certain times. This means that the uncertainty of the forecast quality becomes a parameter that is as important as the wind speed and wind power itself. Quantifying the uncertainty of a forecast valid for tomorrow requires an ensemble of forecasts. In the second part of this work such an ensemble of forecasts was designed and verified for its ability to quantify the forecast error. This was accomplished by correlating the measured error and the forecast uncertainty of area-integrated wind speed and wind power in Denmark and Ireland. A correlation of 93% was achieved in these areas. This method cannot by itself satisfy the accuracy requirements of the energy sector. By knowing the uncertainty of the forecasts, however, the focus can be put on the accuracy requirements at times when it is possible to predict the weather accurately. Thus, this result represents a major step forward in making wind energy a compatible energy source in the future.
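A minimal sketch of the spread-error correlation idea described above, assuming synthetic ensemble data rather than the operational model output used in the thesis: the ensemble spread (forecast uncertainty) is correlated with the absolute error of the ensemble-mean wind speed.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic example: 200 forecast cases with 25 ensemble members each (hypothetical sizes)
    n_cases, n_members = 200, 25
    truth = rng.uniform(5.0, 15.0, n_cases)           # "observed" area-integrated wind speed (m/s)
    spread_true = rng.uniform(0.2, 2.0, n_cases)      # per-case forecast uncertainty
    members = truth[:, None] + rng.normal(0.0, spread_true[:, None], (n_cases, n_members))

    ens_mean = members.mean(axis=1)
    forecast_uncertainty = members.std(axis=1, ddof=1)    # ensemble spread
    measured_error = np.abs(ens_mean - truth)             # error of the ensemble-mean forecast

    # Spread-error correlation: how well does the forecast uncertainty predict the actual error?
    r = np.corrcoef(forecast_uncertainty, measured_error)[0, 1]
    print(f"spread-error correlation: {r:.2f}")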

Relevance:

20.00%

Publisher:

Abstract:

In 1966, Roy Geary, Director of the ESRI, noted that “the absence of any kind of import and export statistics for regions is a grave lacuna” and further noted that if regional analyses were to be developed then regional Input-Output Tables must be put on the “regular statistical assembly line”. Forty-five years later, the lacuna lamented by Geary still exists and remains the most significant challenge to the construction of regional Input-Output Tables in Ireland. The continued paucity of sufficient regional data to compile effective regional Supply and Use and Input-Output Tables has retarded the capacity to construct sound regional economic models and to provide a robust evidence base with which to formulate and assess regional policy. This study makes a first step towards addressing this gap by presenting the first set of fully integrated, symmetric Supply and Use and domestic Input-Output Tables compiled for the NUTS 2 regions in Ireland: the Border, Midland and Western region and the Southern & Eastern region. These tables are general purpose in nature and are fully consistent with the official national Supply & Use and Input-Output Tables and with the regional accounts. The tables are constructed using a survey-based or bottom-up approach rather than employing modelling techniques, yielding more robust and credible tables. They are used to present a descriptive statistical analysis of the two administrative NUTS 2 regions in Ireland, drawing particular attention to the underlying structural differences in regional trade balances and in the composition of Gross Value Added in those regions. By deriving regional employment multipliers, Domestic Demand Employment matrices are constructed to quantify and illustrate the supply chain impact on employment. In the final part of the study, the predictive capability of the Input-Output framework is tested over two time periods. For both periods, the static Leontief production function assumptions are relaxed to allow for labour productivity. Comparative results from this experiment are presented.
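To make the Input-Output mechanics concrete, here is a minimal sketch of how employment effects are derived from a Leontief model; the two-sector technical coefficients and employment coefficients are invented for illustration and are not figures from the regional tables.

    import numpy as np

    # Hypothetical technical-coefficients matrix A for a two-sector region:
    # A[i, j] = input required from sector i per unit of output of sector j
    A = np.array([[0.20, 0.15],
                  [0.10, 0.30]])

    # Leontief inverse: total (direct + indirect) output per unit of final demand
    L = np.linalg.inv(np.eye(2) - A)

    # Hypothetical direct employment coefficients (jobs per unit of output)
    e = np.array([8.0, 12.0])

    total_employment = e @ L                    # jobs supported per unit of final demand in each sector
    type1_multiplier = total_employment / e     # Type-I employment multipliers

    print("Leontief inverse:\n", L)
    print("employment per unit of final demand:", total_employment)
    print("Type-I employment multipliers:", type1_multiplier)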

Relevance:

20.00%

Publisher:

Abstract:

The development of ultra high speed (~20 Gsamples/s) analogue to digital converters (ADCs), and the delayed deployment of 40 Gbit/s transmission due to the economic downturn, have stimulated the investigation of digital signal processing (DSP) techniques for the compensation of optical transmission impairments. In the future, DSP will offer an entire suite of tools to compensate for optical impairments and facilitate the use of advanced modulation formats. Chromatic dispersion is a very significant impairment for high speed optical transmission. This thesis investigates a novel electronic method of dispersion compensation which allows for cost-effective, accurate detection of the amplitude and phase of the optical field in the radio frequency domain. The first electronic dispersion compensation (EDC) schemes accessed only the amplitude information using square-law detection and achieved an increase in transmission distances. This thesis presents a method that uses a frequency-sensitive filter to estimate the phase of the received optical field so that, in conjunction with the amplitude information, the entire field can be digitised using ADCs. This allows DSP technologies to take the next step in optical communications without requiring complex coherent detection, which is of particular interest in metropolitan area networks. The full-field receiver investigated requires only an additional asymmetrical Mach-Zehnder interferometer and balanced photodiode to achieve a 50% increase in EDC reach compared with amplitude-only detection.
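As a rough illustration of electronic dispersion compensation applied to a digitised full field, the sketch below applies the inverse of the fibre's quadratic phase in the frequency domain; the sampling rate, dispersion value and toy field are assumptions, and the sign of the phase term depends on the convention used for the field.

    import numpy as np

    def compensate_dispersion(field, fs, beta2, length):
        """Frequency-domain chromatic dispersion compensation of a complex baseband field.

        field  : complex samples of the received optical field (amplitude and phase)
        fs     : sampling rate in Hz
        beta2  : group-velocity dispersion parameter in s^2/m
        length : fibre length in m
        """
        n = field.size
        omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)      # angular frequency grid
        # Inverse of the fibre's quadratic-phase transfer function (sign convention assumed)
        h_inv = np.exp(-1j * 0.5 * beta2 * omega**2 * length)
        return np.fft.ifft(np.fft.fft(field) * h_inv)

    # Hypothetical example: a toy complex field sampled at 40 Gsamples/s over 100 km of fibre
    fs = 40e9
    t = np.arange(4096) / fs
    field = np.exp(1j * 0.1 * np.sin(2 * np.pi * 1e9 * t))
    recovered = compensate_dispersion(field, fs, beta2=-2.17e-26, length=100e3)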

Relevance:

20.00%

Publisher:

Abstract:

There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
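The dependence of a Gröbner basis on the chosen term order can be seen in a small, generic example; the sketch below uses SymPy on a toy polynomial ideal and is not an implementation of the congruence-based algorithm developed in the thesis.

    from sympy import groebner, symbols

    x, y = symbols('x y')
    polys = [x**2 + y**2 - 1, x*y - 1]

    # The basis, and hence the minimal leading terms, depends on the chosen term order
    for order in ('lex', 'grevlex'):
        G = groebner(polys, x, y, order=order)
        print(order, list(G))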

Relevance:

20.00%

Publisher:

Abstract:

Cream liqueurs manufactured by a one-step process, where alcohol was added before homogenisation, were more stable than those processed by a two-step process, which involved addition of alcohol after homogenisation. Using the one-step process, it was possible to produce creaming-stable liqueurs using one pass through a homogeniser (27.6 MPa) equipped with "liquid whirl" valves. Test procedures to characterise cream liqueurs and to predict shelf life were studied in detail. A turbidity test proved simple, rapid and sensitive for characterising particle size and homogenisation efficiency. Prediction of age thickening/gelation in cream liqueurs during incubation at 45 °C depended on the age of the sample when incubated. Samples that gelled at 45 °C may not do so at ambient temperature. Commercial cream liqueurs were similar in gross chemical composition and, unlike experimentally produced liqueurs, did not exhibit age-gelation at either ambient or elevated temperatures. Solutions of commercial sodium caseinates from different sources varied in their calcium sensitivity. When incorporated into cream liqueurs, caseinates influenced the rate of viscosity increase, coalescence and, possibly, gelation during incubated storage. Mild heat and alcohol treatment modified the properties of caseinate used to stabilise non-alcoholic emulsions, while the presence of alcohol in emulsions was important in preventing clustering of globules. The response to added trisodium citrate varied; in many cases, addition of the recommended level (0.18%) did not prevent gelation. Addition of small amounts of NaOH with 0.18% trisodium citrate before homogenisation was beneficial. The stage at which citrate was added during processing was critical to the degree of viscosity increase (as opposed to gelation) in the product during 45 °C incubation. The component responsible for age-gelation was present in the milk-solids-non-fat portion of the cream, and variations in the creams used were important in the age-gelation phenomenon. Results indicated that, in addition to possibly Ca++, the micellar casein portion of serum may play a role in gelation. The role of the low molecular weight surfactants sodium stearoyl lactylate and monodiglycerides in preventing gelation was influenced by the presence of trisodium citrate. Clustering of fat globules and age-gelation were inhibited when 0.18% citrate was included. Inclusion of sodium stearoyl lactylate, but not monodiglycerides, reduced the extent of viscosity increase at 45 °C in citrate-containing liqueurs.

Relevance:

20.00%

Publisher:

Abstract:

The concept of police accountability is not susceptible to a universal or concise definition. In the context of this thesis it is treated as embracing two fundamental components. First, it entails an arrangement whereby an individual, a minority and the whole community have the opportunity to participate meaningfully in the formulation of the principles and policies governing police operations. Second, it presupposes that those who have suffered as victims of unacceptable police behaviour should have an effective remedy. These ingredients, however, cannot operate in a vacuum. They must find an accommodation with the equally vital requirement that the burden of accountability should not be so demanding that the delivery of an effective police service is fatally impaired. While much of the current debate on police accountability in Britain and the USA revolves around the issue of where the balance should be struck in this accommodation, Ireland lacks the very foundation for such a debate as it suffers from a serious deficit in research and writing on the police generally. This thesis aims to fill that gap by laying the foundations for an informed debate on police accountability and related aspects of policing in Ireland. Broadly speaking, the thesis contains three major interrelated components. The first is concerned with the concept of police in Ireland and the legal, constitutional and political context in which it operates. This reveals that although the Garda Siochana is established as a national force, the legal prescriptions concerning its role and governance are very vague. Although a similar legislative format in Britain, and elsewhere, has been interpreted as conferring operational autonomy on the police, it has not stopped successive Irish governments from exercising close control over the police. The second component analyses the structure and operation of the traditional police accountability mechanisms in Ireland, namely the law and the democratic process. It concludes that some basic aspects of the peculiar legal, constitutional and political structures of policing seriously undermine their capacity to deliver effective police accountability. In the case of the law, for example, the status of, and the broad discretion vested in, each individual member of the force ensure that the traditional legal actions cannot always provide redress where individuals or collective groups feel victimised. In the case of the democratic process, the integration of the police into the excessively centralised system of executive government, coupled with the refusal of the Minister for Justice to accept responsibility for operational matters, projects a barrier between the police and their accountability to the public. The third component details proposals on how the current structures of police accountability in Ireland can be strengthened without interfering with the fundamentals of the law, the democratic process or the legal and constitutional status of the police. The key elements in these proposals are the establishment of an independent administrative procedure for handling citizen complaints against the police and the establishment of a network of local police-community liaison councils throughout the country, coupled with a centralised parliamentary committee on the police.
While these proposals are analysed from the perspective of maximising the degree of police accountability to the public, they also take into account the need to ensure that the police capacity to deliver an effective police service is not unduly impaired as a result.

Relevance:

20.00%

Publisher:

Abstract:

This thesis is concerned with an investigation of the anodic behaviour of ruthenium and iridium in aqueous solution and particularly of oxygen evolution on these metals. The latter process is of major interest in the large-scale production of hydrogen gas by the electrolysis of water. The presence of low levels of ruthenium trichloride (ca. 10⁻⁴ mol dm⁻³) in acid solution gives a considerable increase in the rate of oxygen evolution from platinum and gold, but not graphite, anodes. The mechanism of this catalytic effect was investigated using potential step and a.c. impedance techniques. Earlier suggestions that the effect is due to catalysis by metal ions in solution were proved to be incorrect and it was shown that ruthenium species were incorporated into the surface oxide film. Changes in the oxidation state of these ruthenium species are probably responsible for the lowering of the oxygen overvoltage. Both the theoretical and practical aspects of the reaction were complicated by the fact that, at constant potential, the rates of both the catalysed and the uncatalysed oxygen evolution processes exhibit an appreciable, continuous decrease with either time or degree of oxidation of the substrate. The anodic behaviour of iridium in the oxide layer region has been investigated using conventional electrochemical techniques such as cyclic voltammetry. Applying a triangular voltage sweep at 10 Hz between 0.01 and 1.50 V increases the amount of electric charge which the surface can store in the oxide region. This activation effect and the mechanism of charge storage are discussed in terms of both an expanded lattice theory for oxide growth on noble metals and a more recent theory of irreversible oxide formation with subsequent stoichiometry changes. The lack of hysteresis between the anodic and cathodic peaks at ca. 0.9 V suggests that the process involved here is proton migration in a relatively thick surface layer, i.e. that the reaction involved is some type of oxide-hydroxide transition. Lack of chloride ion inhibition in the anodic region also supports the irreversible oxide formation theory; however, to account for the hydrogen region of the potential sweep a compromise theory involving partial reduction of the outer regions of the iridium oxide film is proposed. The loss of charge storage capacity when the activated iridium surface is anodized for a short time above ca. 1.60 V is attributed to loss by corrosion of the outer active layer from the metal surface. The behaviour of iridium at higher anodic potentials in acid solution was also investigated. Current-time curves at constant potential and Tafel plots suggested that a change in the mechanism of the oxygen evolution reaction occurs at ca. 1.8 V. Above this potential, corrosion of the metal occurred, giving rise to an absorbance in the visible spectrum of the electrolyte (λmax = 455 nm). It is suggested that the species involved was Ir(O2)2+. A similar investigation in the case of alkaline electrolyte gave no evidence for a change in mechanism at 1.8 V and corrosion of the iridium was not observed. Oxygen evolution overpotentials were much lower for iridium than for platinum in both acidic and alkaline solutions.

Relevance:

20.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency becomes a major design constraint. The dissipated energy is often referred to as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry standard design flows integrate systematic methods of optimising either area or timing, while for power consumption optimisation one often employs heuristics which are characteristic to a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how can a design flow be built which incorporates academic and industry standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate into this flow academic tools and methodologies. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is to obtain a multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay. The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under a zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied on the AIG nodes to minimise the switching power or the longest path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and the Uniform Cost Search algorithm. Simulated Annealing (SMA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We used SMA to decide probabilistically between moving from one optimised solution to another such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints.
A good approximation to the global optimum under the energy constraint is obtained. Uniform Cost Search (UCS) is a tree search algorithm used for traversing or searching a weighted tree, tree structure, or graph. We have used the Uniform Cost Search algorithm to search within the AIG network for a specific AIG node order for the application of the reordering rules. After the reordering rules are applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network re-structuring, AIG node reordering, dynamic power and longest path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay with minimal overhead are achieved, compared to the best known ABC results. Our approach has also been implemented on a number of processors with combinational and sequential components, and significant savings are achieved.
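A toy sketch of the delay-constrained simulated annealing idea described above: moves that violate a delay limit are rejected, and among the remaining moves power-increasing ones are accepted with a temperature-dependent probability. The cost functions here are arbitrary stand-ins, not the AIG-based power and delay estimators used in the thesis.

    import math
    import random

    random.seed(1)

    def power(x):
        # Hypothetical stand-in for switching-power estimation of a circuit configuration
        return (x - 3.0) ** 2 + 5.0

    def delay(x):
        # Hypothetical stand-in for longest-path delay estimation
        return abs(x) + 2.0

    DELAY_LIMIT = 6.0   # delay constraint under which power is minimised

    def anneal(x0, t0=10.0, cooling=0.95, steps=500):
        x, best, t = x0, x0, t0
        for _ in range(steps):
            cand = x + random.uniform(-0.5, 0.5)     # e.g. one reordering move
            if delay(cand) > DELAY_LIMIT:
                continue                             # reject moves that violate the delay constraint
            d_power = power(cand) - power(x)
            if d_power < 0 or random.random() < math.exp(-d_power / t):
                x = cand                             # accept improving or, probabilistically, worsening moves
                if power(x) < power(best):
                    best = x
            t *= cooling                             # cool the temperature
        return best

    print(anneal(x0=0.0))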

Relevance:

20.00%

Publisher:

Abstract:

Garda Youth Diversion Projects (GYDPs) have since their beginnings in the early 1990s gained an increasingly important role and now constitute a central feature of Irish youth justice provision. Managed by the Irish Youth Justice Service and implemented by the Gardai and a variety of youth work organisations as well as independent community organisations, GYDPs are located at the crossroads of welfarist and corporatist approaches to youth justice, combining diversionary and preventative aspects in their work. To date, these projects have been subjected to very little systematic analysis and they have thus largely escaped critical scrutiny. To address this gap, this thesis locates the analysis of GYDP policy and practice within a post-structuralist theoretical framework and deploys discourse analysis primarily based on the work of Michel Foucault. It makes visible the official youth crime prevention and GYDP policy discourses and identifies how official discourses relating to youth crime prevention, young people and their offending behaviour, are drawn upon, negotiated, rejected or re-contextualised by project workers and JLOs. It also lays bare how project workers and JLOs draw upon a variety of other discourses, resulting in multi-layered, complex and sometimes contradictory constructions of young people, their offending behaviour and corresponding interventions. At a time when the projects are undergoing significant changes in terms of their repositioning to operate as the support infrastructure underpinning the statutory Garda Youth Diversion Programme, the thesis traces the discursive shifts and the implications for practice that are occurring as the projects move away from a youth work orientation towards a youth justice orientation. A key contribution of this thesis is the insight it provides into how young people and their families are being constituted in individualising and sometimes pathologising ways in GYDP discourses and practices. It reveals the part played by the GYDP intervention in favouring individual and narrow familial causes of offending behaviour while broader societal contexts are sidelined. By explicating the very assumptions upon which contemporary youth crime prevention policy, as well as GYDP policy and practice are based, this thesis offers a counterpoint to the prevailing evidence-based agenda of much research in the field of Irish youth justice theory and youth studies more generally. Rather, it encourages the reader to take a step back and examine some of the most fundamental and unquestioned assumptions about the construction of young people, their offending behaviour and ways of addressing this, in contemporary Irish youth crime prevention policy and practice.

Relevance:

20.00%

Publisher:

Abstract:

The technological role of handheld devices is fundamentally changing. Portable computers were traditionally application specific. They were designed and optimised to deliver a specific task. However, it is now commonly acknowledged that future handheld devices need to be multi-functional and need to be capable of executing a range of high-performance applications. This thesis has coined the term pervasive handheld computing systems to refer to this type of mobile device. Portable computers are faced with a number of constraints in trying to meet these objectives. They are physically constrained by their size, their computational power, their memory resources, their power usage, and their networking ability. These constraints challenge pervasive handheld computing systems in achieving their multi-functional and high-performance requirements. This thesis proposes a two-pronged methodology to enable pervasive handheld computing systems meet their future objectives. The methodology is a fusion of two independent and yet complementary concepts. The first step utilises reconfigurable technology to enhance the physical hardware resources within the environment of a handheld device. This approach recognises that reconfigurable computing has the potential to dynamically increase the system functionality and versatility of a handheld device without major loss in performance. The second step of the methodology incorporates agent-based middleware protocols to support handheld devices to effectively manage and utilise these reconfigurable hardware resources within their environment. The thesis asserts the combined characteristics of reconfigurable computing and agent technology can meet the objectives of pervasive handheld computing systems.

Relevance:

20.00%

Publisher:

Abstract:

Comfort is, in essence, satisfaction with the environment, and with respect to the indoor environment it is primarily satisfaction with the thermal conditions and air quality. Improving comfort has social, health and economic benefits, and is more financially significant than any other building cost. Despite this, comfort is not strictly managed throughout the building lifecycle. This is mainly due to the lack of an appropriate system to adequately manage comfort knowledge through the construction process into operation. Previous proposals to improve knowledge management have not been successfully adopted by the construction industry. To address this, the BabySteps approach was devised. BabySteps is an approach, proposed by this research, which states that for an innovation to be adopted into the industry it must be implementable through a number of small changes. This research proposes that improving the management of comfort knowledge will improve comfort. ComMet is a new methodology proposed by this research that manages comfort knowledge. It enables comfort knowledge to be captured, stored and accessed throughout the building life-cycle, thus allowing it to be re-used in future stages of the building project and in future projects. It does this using the following: Comfort Performances – these are simplified numerical representations of the comfort of the indoor environment. Comfort Performances quantify the comfort at each stage of the building life-cycle using standard comfort metrics. Comfort Ratings – these are a means of classifying the comfort conditions of the indoor environment according to an appropriate standard. Comfort Ratings are generated by comparing different Comfort Performances. Comfort Ratings provide additional information relating to the comfort conditions of the indoor environment which is not readily determined from the individual Comfort Performances. Comfort History – this is a continuous descriptive record of the comfort throughout the project, with a focus on documenting the items and activities, proposed and implemented, which could potentially affect comfort. Each aspect of the Comfort History is linked to the relevant comfort entity it references. These three components create a comprehensive record of the comfort throughout the building lifecycle. They are then stored and made available in a common format in a central location, which allows them to be re-used ad infinitum. The LCMS System was developed to implement the ComMet methodology. It uses current and emerging technologies to capture, store and allow easy access to comfort knowledge as specified by ComMet. LCMS is an IT system that is a combination of the following six components: Building Standards; Modelling & Simulation; Physical Measurement through the specially developed Egg-Whisk (Wireless Sensor) Network; Data Manipulation; Information Recording; and Knowledge Storage and Access. Results from a test case application of the LCMS system – an existing office room at a research facility – highlighted that while some aspects of comfort were being maintained, the building's environment was not in compliance with the acceptable levels stipulated by the relevant building standards. The implementation of ComMet, through LCMS, demonstrates how comfort, typically only considered during early design, can be measured and managed appropriately through systematic application of the methodology as a means of ensuring a healthy internal environment in the building.
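The three ComMet components could be represented as simple records along the lines sketched below; the class and field names are illustrative assumptions rather than the data model actually used in LCMS.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ComfortPerformance:
        """Simplified numerical representation of indoor comfort at one life-cycle stage."""
        stage: str        # e.g. "design", "construction", "operation"
        metric: str       # a standard comfort metric (hypothetical label)
        value: float

    @dataclass
    class ComfortRating:
        """Classification of comfort conditions against a standard, derived from Performances."""
        standard: str     # the standard used for classification (hypothetical field)
        category: str

    @dataclass
    class ComfortHistory:
        """Continuous descriptive record of items/activities that could affect comfort."""
        entries: List[str] = field(default_factory=list)

        def log(self, note: str, linked_entity: str) -> None:
            # Each entry is linked to the comfort entity it references
            self.entries.append(f"{linked_entity}: {note}")

    history = ComfortHistory()
    history.log("glazing specification revised", "window W-101")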

Relevance:

20.00%

Publisher:

Abstract:

Petrochemical plastics/polymers are a common feature of day-to-day living, as they occur in packaging, furniture, mobile phones, computers, construction equipment etc. However, these materials are produced from non-renewable materials and are resistant to microbial degradation in the environment. Considerable research has therefore been carried out into the production of sustainable, biodegradable polymers amenable to microbial catabolism to CO2 and H2O. A key group of microbial polyesters, widely considered as optimal replacement polymers, are the Polyhydroxyalkanoates (PHAs). Primary research in this area has focused on using recombinant pure cultures to optimise PHA yields; however, despite considerable success, the high costs of pure culture fermentation have thus far hindered the commercial viability of PHAs thus produced. In more recent years work has begun to focus on mixed cultures for the optimisation of PHA production, with waste incorporation offering optimal production cost reductions. The scale of dairy processing in Ireland, and the high organic load wastewaters generated, represent an excellent potential substrate for bioconversion to PHAs in a mixed culture system. The current study sought to investigate the potential for such bioconversion in a laboratory scale biological system and to establish key operational and microbial characteristics of same. Two sequencing batch reactors were set up and operated along the lines of an enhanced biological phosphate removal (EBPR) system, which has PHA accumulation as a key step within repeated rounds of anaerobic/aerobic cycling. Influents to the reactors varied only in the carbon sources provided. Reactor 1 received artificial wastewater with acetate alone, which is known to be readily converted to PHA in the anaerobic step of EBPR. Reactor 2 wastewater influent contained acetate and skim milk to imitate a dairy processing effluent. Chemical monitoring of nutrient remediation within the reactors was continuously applied and EBPR-consistent performances were observed. Qualitative analysis of the sludge was carried out using fluorescence microscopy with the Nile Blue A lipophilic stain and PHA production was confirmed in both reactors. Quantitative analysis via HPLC detection of crotonic acid derivatives revealed the fluorescence to be short chain length polyhydroxybutyrate, with biomass dry weight accumulations of 11% and 13% being observed in reactors 1 and 2, respectively. Gas chromatography-mass spectrometry for medium chain length methyl ester derivatives revealed the presence of hydroxyoctanoic, -decanoic and -dodecanoic acids in reactor 1. Similar analyses in reactor 2 revealed monomers of 3-hydroxydodecenoic and 3-hydroxytetradecanoic acids. Investigation of the microbial ecology of both reactors was conducted in an attempt to identify key species potentially contributing to reactor performance. Culture dependent investigations indicated that quite different communities were present in the two reactors. Reactor 1 isolates demonstrated the following species distribution: Pseudomonas (82%), Delftia acidovorans (3%), Acinetobacter sp. (5%), Aminobacter sp. (3%), Bacillus sp. (3%), Thauera sp. (3%) and Cytophaga sp. (3%). Relative species distributions among reactor 2 profiled isolates were more evenly distributed between Pseudoxanthomonas (32%), Thauera sp. (24%), Acinetobacter (24%), Citrobacter sp. (8%), Lactococcus lactis (5%), Lysinibacillus (5%) and Elizabethkingia (2%).
In both reactors Gammaproteobacteria dominated the cultured isolates. Culture independent 16S rRNA gene analyses revealed differing profiles for the two reactors. Reactor 1 clone distribution was as follows: Zooglea resiniphila (83%), Zooglea oryzae (2%), Pedobacter composti (5%), Neissericeae sp. (2%), Rhodobacter sp. (2%), Runella defluvii (3%) and Streptococcus sp. (3%). RFLP-based species distribution among the reactor 2 clones was as follows: Runella defluvii (50%), Zoogloea oryzae (20%), Flavobacterium sp. (9%), Simplicispira sp. (6%), uncultured Sphingobacteria sp. (6%), Arcicella (6%) and Leadbetterella bysophila (3%). Betaproteobacteria dominated the 16S rRNA gene clones identified in both reactors. FISH analysis with Nile Blue dual staining resolved these divergent findings, identifying the Betaproteobacteria as the dominant PHA accumulators within the reactor sludges, although species/strain specific allocations could not be made. GC analysis of the sludge had indicated the presence of both medium chain length and short chain length PHAs accumulating in both reactors. In addition, the cultured isolates from the reactors had been identified previously as mcl- and scl-PHA producers, respectively. Characterisations of the PHA monomer profiles of the individual isolates were therefore performed to screen for potential novel scl-mcl PHAs. Nitrogen-limitation-driven PHA accumulation in E2 minimal media revealed a greater propensity among isolates for mcl-PHA production. HPLC analysis indicated that PHB production was not a major feature of the reactor isolates and this was supported by the low presence of scl phaC1 genes among PCR-screened isolates. A high percentage distribution of phaC2 mcl-PHA synthase genes was recorded, with the majority sharing high percentage homology with class II synthases from Pseudomonas sp. The common presence of a phaC2 homologue was not reflected in the production of a common polymer. Considerable variation was noted in both the monomer composition and ratios following GC analysis. While co-polymer production could not be demonstrated, potentially novel synthase substrate specificities were noted which could be exploited further in the future.

Relevance:

20.00%

Publisher:

Abstract:

Semiconductor nanowires, particularly group 14 semiconductor nanowires, have been the subject of intensive research in the recent past. They have been demonstrated to provide an effective, versatile route towards the continued miniaturisation and improvement of microelectronics. This thesis aims to highlight some novel ways of fabricating and controlling various aspects of the growth of Si and Ge nanowires. Chapter 1 highlights the primary technique used for the growth of nanowires in this study, namely, supercritical fluid (SCF) growth reactions. The advantages (and disadvantages) of this technique for the growth of Si and Ge nanowires are highlighted, citing numerous examples from the past ten years. The many variables involved in this technique are discussed along with the resultant characteristics of nanowires produced (diameter, doping, orientation etc.). Chapter 2 outlines the experimental methodologies used in this thesis. The analytical techniques used for the structural characterisation of nanowires produced are also described as well as the techniques used for the chemical analysis of various surface terminations. Chapter 3 describes the controlled self-seeded growth of highly crystalline Ge nanowires, in the absence of conventional metal seed catalysts, using a variety of oligosilylgermane precursors and mixtures of germane and silane compounds. A model is presented which describes the main stages of self-seeded Ge nanowire growth (nucleation, coalescence and Ostwald ripening) from the oligosilylgermane precursors and in conjunction with TEM analysis, a mechanism of growth is proposed. Chapter 4 introduces the metal assisted etching (MAE) of Si substrates to produce Si nanowires. A single step metal-assisted etch (MAE) process, utilising metal ion-containing HF solutions in the absence of an external oxidant, was developed to generate heterostructured Si nanowires with controllable porous (isotropically etched) and non-porous (anisotropically etched) segments. In Chapter 5 the bottom-up growth of Ge nanowires, similar to that described in Chapter 3, and the top down etching of Si, described in Chapter 4, are combined. The introduction of a MAE processing step in order to “sink” the Ag seeds into the growth substrate, prior to nanowire growth, is shown to dramatically decrease the mean nanowire diameters and to narrow the diameter distributions. Finally, in Chapter 6, the biotin – streptavidin interaction was explored for the purposes of developing a novel Si junctionless nanowire transistor (JNT) sensor.