Abstract:
Hazard perception has been found to correlate with crash involvement, and has thus been suggested as the most likely source of any skill gap between novice and experienced drivers. The most commonly used method for measuring hazard perception is to evaluate the perception-reaction time to filmed traffic events. It can be argued that this method lacks ecological validity and may be of limited value in predicting the actions drivers will take in response to the hazards they encounter. The first two studies of this thesis compare novice and experienced drivers' performance on a hazard detection test, requiring discrete button-press responses, with their behaviour in a more dynamic driving environment requiring hazard handling ability. Results indicate that the hazard handling test is more successful at identifying experience-related differences in response time to hazards. Hazard detection test scores were strongly related to performance on a driver theory test, implying that traditional hazard perception tests may be focusing more on declarative knowledge of driving than on the procedural knowledge required to successfully avoid hazards while driving. One in five Irish drivers crash within a year of passing their driving test. This suggests that the current driver training system does not fully prepare drivers for the dangers they will encounter. Thus, the third and fourth studies in this thesis focus on the development of two simulator-based training regimes. In the third study, participants receive intensive training on the molar elements of driving, i.e. speed and distance evaluation. The fourth study focuses on training higher-order situation awareness skills, including perception, comprehension and projection. Results indicate significant improvement in aspects of speed, distance and situation awareness across training days.
However, neither training programme leads to significant improvements in hazard handling performance, highlighting the difficulties of applying learning to situations not previously encountered.
Abstract:
For at least two millennia and probably much longer, the traditional vehicle for communicating geographical information to end-users has been the map. With the advent of computers, the means of both producing and consuming maps have been radically transformed, while the inherent nature of the information product has also expanded and diversified rapidly. This has given rise in recent years to the new concept of geovisualisation (GVIS), which draws on the skills of the traditional cartographer, but extends them into three spatial dimensions and may also add temporality, photorealistic representations and/or interactivity. Demand for GVIS technologies and their applications has increased significantly in recent years, driven by the need to study complex geographical events, and in particular their associated consequences, and to communicate the results of these studies to a diversity of audiences and stakeholder groups. GVIS involves data integration, multi-dimensional spatial display, advanced modelling techniques, dynamic design and development environments, and field-specific application needs. To meet these needs, GVIS tools should be both powerful and inherently usable, in order to facilitate their role in helping interpret and communicate geographic problems. However, no framework currently exists for ensuring this usability. The research presented here seeks to fill this gap by addressing the challenges of incorporating user requirements in GVIS tool design. It starts from the premise that usability in GVIS should be incorporated and implemented throughout the whole design and development process. To facilitate this, Subject Technology Matching (STM) is proposed as a new approach to assessing and interpreting user requirements. Based on STM, a new design framework called Usability Enhanced Coordination Design (UECD) is then presented, with the purpose of improving the overall usability of the design outputs.
UECD places GVIS experts in a new key role in the design process, creating a more coordinated and integrated workflow and more focused, interactive usability testing. To prove the concept, these theoretical elements of the framework have been implemented in two test projects: one is the creation of a coastal inundation simulation for Whitegate, Cork, Ireland; the other is a flood mapping tool for Zhushan Town, Jiangsu, China. The two case studies successfully demonstrated the potential merits of the UECD approach when GVIS techniques are applied to geographic problem solving and decision making. The thesis delivers a comprehensive understanding of the development and challenges of GVIS technology, its usability concerns, and the associated user-centred design (UCD); it explores the possibility of applying a UCD framework to GVIS design; it constructs a new theoretical design framework, UECD, which aims to make the whole design process usability-driven; and it develops the key concept of STM into a template set to improve the performance of a GVIS design. These key conceptual and procedural foundations can be built on by future research aimed at further refining and developing UECD as a useful design methodology for GVIS scholars and practitioners.
Abstract:
RNA editing is a biological phenomenon that alters nascent RNA transcripts by insertion, deletion and/or substitution of one or a few nucleotides. It is ubiquitous in all kingdoms of life and in viruses. The predominant editing event in organisms with a developed central nervous system is Adenosine-to-Inosine deamination. Inosine is recognised as Guanosine by the translational machinery and by reverse transcriptase. In primates, RNA editing occurs frequently in transcripts from repetitive regions of the genome. In humans, more than 500,000 editing instances have been identified by applying computational pipelines to available ESTs and high-throughput sequencing data, and by using chemical methods. However, the functions of only a small number of cases have been studied thoroughly. RNA editing instances have been found to play roles in the synthesis of peptide variants by non-synonymous codon substitutions, in transcript variants through alterations in splicing sites, and in gene silencing through modifications of miRNA sequences. We established the Database of RNA EDiting (DARNED) to accommodate the reference genomic coordinates of substitution editing in human, mouse and fly transcripts from the published literature, with additional information on edited genomic coordinates collected from various databases, e.g. UCSC and NCBI. DARNED contains mostly Adenosine-to-Inosine editing and allows searches based on genomic region, gene ID, and user-provided sequence. The database is accessible at http://darned.ucc.ie. RNA editing instances in coding regions are likely to result in recoding during protein synthesis. This encouraged me to focus my research on the occurrence of RNA editing in CDS and non-Alu exonic regions. By applying various filters to discrepancies between available ESTs and their corresponding reference genomic sequences, putative RNA editing candidates were identified. High-throughput sequencing was used to validate these candidates.
All predicted coordinates appeared to be either SNPs or unedited.
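The region-based search that DARNED offers can be pictured, in a much simplified form, as an interval filter over a coordinate table. The sketch below is purely illustrative: the records and field names are invented and do not reflect DARNED's actual schema or contents.

```python
# Toy table of editing sites; all coordinates and records are invented.
sites = [
    {"chrom": "chr1", "pos": 10500, "edit": "A-to-I", "gene": "GENE_A"},
    {"chrom": "chr1", "pos": 20750, "edit": "A-to-I", "gene": "GENE_B"},
    {"chrom": "chr2", "pos": 3300,  "edit": "C-to-U", "gene": "GENE_C"},
]

def query_region(sites, chrom, start, end):
    """Return all editing sites falling inside [start, end] on `chrom`."""
    return [s for s in sites if s["chrom"] == chrom and start <= s["pos"] <= end]

hits = query_region(sites, "chr1", 10000, 15000)
print([h["gene"] for h in hits])  # → ['GENE_A']
```

A production database would of course index coordinates (e.g. with an interval tree) rather than scan linearly, but the query semantics are the same.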
Abstract:
Aim: To develop and evaluate the psychometric properties of an instrument for the measurement of self-neglect (SN). Conceptual Framework: An elder self-neglect (ESN) conceptual framework guided the literature review and scale development. The framework has two key dimensions, physical/psycho-social and environmental, and seven sub-dimensions representative of the factors that can contribute to intentional and unintentional SN. Methods: A descriptive cross-sectional design was adopted to achieve the research aim. The study was conducted in two phases. Phase 1 involved the development of the questionnaire content and structure. Phase 2 focused on establishing the psychometric properties of the instrument. Content validity was established by a panel of 8 experts, and the instrument was piloted with 9 health and social care professionals. The instrument was subsequently posted, with a stamped addressed envelope, to 566 health and social care professionals who met specific eligibility criteria across the four HSE areas. A total of 341 questionnaires were returned, a response rate of 60%, and 305 (50%) completed responses were included in exploratory factor analysis (EFA). Item and factor analyses were performed to elicit the instrument's underlying factor structure and establish preliminary construct validity. Findings: Item and factor analyses resulted in a logically coherent, 37-item, five-factor solution explaining 55.6% of the cumulative variance. The factors were labelled: 'Environment', 'Social Networks', 'Emotional and Behavioural Liability', 'Health Avoidance' and 'Self-Determinism'. The factor loadings were >0.40 for all items on each of the five subscales. Preliminary construct validity was supported by the findings. Conclusion: The main outcome of this research is a 37-item Self-Neglect (SN-37) measurement instrument, developed through EFA and underpinned by an ESN conceptual framework. Preliminary psychometric evaluation of the instrument is promising.
Future work should be directed at establishing the construct and criterion-related validity of the instrument.
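The extraction-and-retention logic described above (a five-factor EFA with items retained when loadings exceed 0.40) can be sketched as follows. This is a minimal illustration on synthetic, standardised data using principal-component extraction as a stand-in for the study's factoring method; the sample size, item count and 0.40 threshold follow the abstract, but nothing else does.

```python
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items, n_factors = 305, 37, 5

# Synthetic standardised item responses driven by 5 latent factors plus noise.
latent = rng.normal(size=(n_respondents, n_factors))
structure = rng.normal(scale=0.8, size=(n_factors, n_items))
x = latent @ structure + rng.normal(scale=0.5, size=(n_respondents, n_items))
x = (x - x.mean(0)) / x.std(0)

# Principal-component extraction: loadings from the item correlation matrix.
corr = np.corrcoef(x, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1][:n_factors]
loadings = eigvec[:, order] * np.sqrt(eigval[order])   # shape: (items, factors)

cum_var = eigval[order].sum() / n_items                # cumulative variance explained
retained = np.abs(loadings).max(axis=1) > 0.40         # |loading| > 0.40 rule
print(f"{retained.sum()}/{n_items} items retained; {cum_var:.1%} variance explained")
```

A real analysis would also apply a rotation (e.g. varimax or oblimin) before interpreting and labelling the factors.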
Abstract:
Background: Assessing child growth and development is complex. Delayed identification of growth or developmental problems until school entry has health, educational and social consequences for children and families. Health care professionals (HCPs), including Public Health Nurses (PHNs), work with parents to elicit and attend to their growth and development concerns. It is known that parents have concerns about their children's growth and development which are not expressed in a timely manner. Measuring parental concern has not been fully effective to date, and little is known about parents' experiences of expressing concerns. Aim: To understand how parents make sense of child growth or development concerns. Method: The study was qualitative, using Interpretative Phenomenological Analysis (IPA). A purposeful sample of 15 parents of pre-school children referred by their PHN to second-tier services was used. Data were collected by semi-structured interviews. NVivo version 10 was used for data management purposes and IPA for analysis. Findings: Findings yielded two contextual themes, which captured how parents described The Concern – 'telling it as it is' – and their experiences of being Referred on. Four superordinate themes were found which encapsulated the Uncertainty – 'a little bit not sure' – of parents as they made sense of the child's growth and development problems. They were influenced by Parental Knowledge – 'being and getting in the know' – which aided their sense-making before being prompted by Triggers to action. Parents then described Getting the child's problem checked out as they went to express their concerns to HCPs. Conclusion and Implications: Parental expression of concerns about their child is a complex process that may not be readily understood by HCPs. A key implication of the findings is the need to reappraise how parental concern is elicited and attended to, in order to promote early referral and intervention for children who may have growth and development problems.
Abstract:
Ventral midbrain (VM) dopaminergic (DA) neurons, which project to the dorsal striatum via the nigrostriatal pathway, progressively degenerate in Parkinson's disease (PD). The identification of the instructive factors that regulate midbrain DA neuron development, and the subsequent elucidation of the molecular bases of their effects, is vital. Such an understanding would facilitate the generation of transplantable DA neurons from stem cells and the identification of developmentally-relevant neurotrophic factors, the two most promising therapeutic approaches for PD. Two related members of the bone morphogenetic protein (BMP) family, BMP2 and growth/differentiation factor (GDF) 5, which signal via a canonical Smad 1/5/8 signalling pathway, have been shown to have neurotrophic effects on midbrain DA neurons both in vitro and in vivo, and may function to regulate VM DA neuronal development. However, the molecular (signalling pathway(s)) and cellular (direct neuronal, or indirect via glial cells) mechanisms of their effects remain to be elucidated. The present thesis hypothesised that canonical Smad signalling mediates the direct effects of BMP2 and GDF5 on the development of VM DA neurons. By activating, modulating and/or inhibiting various components of the BMP-Smad signalling pathway, this research demonstrated that GDF5- and BMP2-induced neurite outgrowth from midbrain DA neurons is dependent on BMP type I receptor activation of the Smad signalling pathway. The roles of glial cell line-derived neurotrophic factor (GDNF) signalling, dynamin-dependent endocytosis and Smad-interacting protein-1 (Sip1) regulation in the neurotrophic effects of BMP2 and GDF5 were then determined. Finally, the in vitro development of VM neural stem cells (NSCs) was characterised, and the ability of GDF5 and BMP2 to induce these VM NSCs towards DA neuronal differentiation was investigated.
Taken together, these experiments identify GDF5 and BMP2 as novel regulators of midbrain DA neuronal induction and differentiation, and demonstrate that their effects on DA neurons are mediated by canonical BMPR-Smad signalling.
Abstract:
A new science curriculum was introduced to primary schools in the Republic of Ireland in 2003. This curriculum, broader in scope than its 1971 predecessor (Curaclam na Bunscoile, 1971), requires teachers at all levels of primary school to teach science. A review carried out in 2008 of children's experiences of this curriculum found that its implementation throughout the country was uneven. This finding, together with the increasing numbers of teachers who were requesting support to implement this curriculum, suggested the need for a review of Irish primary teachers' needs in the area of science. The research study described in this thesis was undertaken to establish the extent of Irish primary teachers' needs in the area of science by conducting a national survey. The data from this survey, together with data from international studies, were used to develop a theoretical framework for a model of Continuing Professional Development (CPD). This theoretical framework was used to design the Whole-School, In-School (WSIS) CPD model, which was trialled in two case-study schools. The participants in these 'action-research' case studies acted as co-researchers who contributed to the development and evolution of the CPD model in each school. Analysis of the data gathered as part of the evaluation of the WSIS model of CPD found an improved experience of science for children and improved confidence for teachers teaching at all levels of the primary school. In addition, a template for the establishment of a culture of collaborative CPD in schools has been developed from an analysis of the data.
Abstract:
This PhD covers the development of planar inversion-mode and junctionless Al₂O₃/In₀.₅₃Ga₀.₄₇As metal-oxide-semiconductor field-effect transistors (MOSFETs). An implant activation anneal was developed for the formation of the source and drain (S/D) of the inversion-mode MOSFET. Fabricated inversion-mode devices were used as test vehicles to investigate the impact of forming gas annealing (FGA) on device performance. Following FGA, the devices exhibited a subthreshold swing (SS) of 150 mV/dec and an I_ON/I_OFF of 10⁴, and the transconductance, drive current and peak effective mobility increased by 29%, 25% and 15%, respectively. An alternative technique, based on fitting the measured full-gate capacitance vs gate voltage using a self-consistent Poisson-Schrödinger solver, was developed to extract the trap energy profile across the full In₀.₅₃Ga₀.₄₇As bandgap and beyond. A multi-frequency inversion-charge pumping approach was proposed to (1) study the traps located at energy levels aligned with the In₀.₅₃Ga₀.₄₇As conduction band and (2) separate the trapped-charge and mobile-charge contributions. The analysis revealed an effective mobility (μ_eff) peaking at ~2850 cm²/V·s for an inversion-charge density (N_inv) of 7×10¹¹ cm⁻² and rapidly decreasing to ~600 cm²/V·s for N_inv = 1×10¹³ cm⁻², consistent with a μ_eff limited by surface roughness scattering. Atomic force microscopy measurements confirmed a large surface roughness of 1.95±0.28 nm on the In₀.₅₃Ga₀.₄₇As channel, caused by the S/D activation anneal. In order to circumvent the issues related to S/D formation, a junctionless In₀.₅₃Ga₀.₄₇As device was developed. A digital etch was used to thin the In₀.₅₃Ga₀.₄₇As channel and investigate the impact of channel thickness (t_InGaAs) on device performance. Scaling of the SS with t_InGaAs was observed for t_InGaAs going from 24 to 16 nm, yielding an SS of 115 mV/dec for t_InGaAs = 16 nm. Flat-band μ_eff values of 2130 and 1975 cm²/V·s were extracted on devices with t_InGaAs of 24 and 20 nm, respectively.
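The subthreshold swing figures quoted above can be illustrated numerically. The sketch below extracts SS from a synthetic transfer characteristic generated with an ideal 150 mV/dec slope; it is a generic textbook extraction, not the device-specific procedure used in the thesis.

```python
import numpy as np

SS_TRUE_MV = 150.0                                 # slope used to generate the data
vg = np.linspace(0.0, 0.3, 61)                     # gate voltage sweep (V)
id_a = 1e-9 * 10 ** (vg / (SS_TRUE_MV * 1e-3))     # subthreshold drain current (A)

# SS = dVG / d(log10 ID), expressed in mV per decade of drain current.
ss_mv_dec = 1e3 / np.gradient(np.log10(id_a), vg)
print(f"extracted SS ≈ {ss_mv_dec.min():.0f} mV/dec")  # → 150 mV/dec
```

On measured data one would restrict the fit to the exponential subthreshold region and typically report the minimum (steepest) point of the sweep, as done here.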
Abstract:
The application of biological effect monitoring for the detection of environmental chemical exposure in domestic animals is still in its infancy. This study investigated blood sample preparations in vitro for their use in biological effect monitoring. When peripheral blood mononuclear cells (PBMCs), isolated following the collection of multiple blood samples from sheep in the field, were cryopreserved and subsequently cultured for 24 hours, a reduction in cell viability (<80%) was attributed to delays in processing following collection. Alternative blood sample preparations using rat and sheep blood demonstrated that 3- to 5-hour incubations can be undertaken without significant alterations in the viability of the lymphocytes; however, a substantial reduction in viability was observed after 24 hours in frozen blood. Early and late apoptosis, as well as increased levels of reactive oxygen species (ROS), were detectable in frozen sheep blood samples. The addition of ascorbic acid partly reversed this effect and reduced the loss in cell viability. The response of the rat and sheep blood sample preparations to genotoxic compounds ex vivo showed that EMS caused comparable dose-dependent genotoxic effects in all sample preparations (fresh and frozen), as detected by the Comet assay. In contrast, the effects of CdCl₂ were dependent on the duration of exposure as well as the sample preparation. The analysis of leukocyte subsets in frozen sheep blood showed no alterations in the percentages of T and B lymphocytes, but a major decrease in the percentage of granulocytes compared to fresh samples. The percentages of IFN-γ- and IL-4-positive, but not IL-6-positive, cells were comparable between fresh and frozen sheep blood after 4-hour stimulation with phorbol 12-myristate 13-acetate and ionomycin (PMA+I). These results show that frozen blood gives responses comparable to fresh blood samples in the toxicological and immune assays used.
Abstract:
Defects in commercial cheese result in a downgrading of the final cheese and a consequent economic loss to the producer. The development of defects in cheese is often not fully understood and therefore not controllable by the producer. This research investigated the underlying factors in the development of the split and secondary fermentation defect, and of pinking defects, in commercial Irish cheeses. The split defect in Swiss-type cheese is a common defect associated with eye formation, manifesting as slits and cracks visible in the cut cheese loaf (Reinbold, 1972; Daly et al., 2010). No consensus exists as to the definitive causes of the defect, and possible contributing factors were reviewed. Models were derived to describe the relationship between moisture, pH and salt levels and the distance from the sample location to the closest external block surface during cheese ripening. Significant gradients within the cheese blocks were observed for all measured parameters at 7 days post-manufacture. No significant pH gradient was found within the blocks on exit from hot-room ripening or at three months post-exit from the hot-room. Moisture content reached equilibrium within the blocks between exit from the hot-room and 3 months after exit, while salt and salt-to-moisture levels had not reached equilibrium within the cheese blocks even at three months after exit from hot-room ripening. A characterisation of Swiss-type cheeses produced from a seasonal milk supply was undertaken. Cheeses were sampled on two days per month of the production year, at three different times during the manufacturing day, at internal and external regions of the cheese block, and at four ripening time points (7 days post-manufacture, post hot-room, 14 days post hot-room, and 3 months in a cold room after exit from the hot-room).
Compositional, biochemical and microbial indices were determined, and the results were analysed as a split-plot design with a factorial arrangement of treatments (season, time of day, area) on the main plot and ripening time on the sub-plot. Season (and its interactions) had a significant effect on pH and salt-in-moisture (S/M) levels, mean viable counts of L. helveticus, propionic acid bacteria and non-starter lactic acid bacteria (NSLAB), levels of primary and secondary proteolysis, and cheese firmness. Levels of proteolysis increased significantly during hot-room ripening but also during cold-room storage, signifying continued development of cheese ripening during cold storage (>8°C). Rheological parameters (e.g. springiness and cohesiveness) were significantly affected by interactions between ripening and location within the cheese blocks. Time of day of manufacture significantly affected mean cheese calcium levels at 7 days post-manufacture, mean levels of arginine, and mean viable counts of NSLAB. Cheeses produced during the middle of the production day had the best grading scores and were more consistent compared to cheeses produced early or late in the manufacturing day. Cheeses with low S/M levels and low values of resilience were associated with poor grades at 7 days post-manufacture. Cheeses which had high elastic index values and low values of springiness in the external areas after exit from hot-room ripening also obtained good commercial grades. Development of a pink colour defect is an intermittent defect in ripened cheese, which may or may not contain an added colourant, e.g. annatto. Factors associated with the defect were reviewed. Attempts at extraction and identification of the pink discolouration were unsuccessful. The pink colour partitioned with the water-insoluble protein fraction. No significant difference was observed between ripened control and defect cheese for oxygen levels and redox potential, or for the results of elemental analysis. A possible relationship between starter activity and defect development was established in cheeses with added colourant, as lower levels of residual galactose and lactose were observed in defective cheese compared to control cheese free of the defect. Swiss-type cheese without added colourant had significantly higher levels of arginine and significantly lower lactate levels. Flow cytometry indicated that levels of bacterial cell viability and metabolic state differed between control and defect cheeses (without added colourant). Pyrosequencing analysis of cheese samples with and without the defect detected a bacterium not previously reported in cheese, Deinococcus thermus (a potential carotenoid producer). Defective Swiss-type cheeses had elevated levels of Deinococcus thermus compared to control cheeses; however, the direct cause of the pink defect was not linked to this bacterium alone. Overall, this research not only examined underlying factors associated with the development of specific defects in commercial cheese, but also characterised the dynamic changes in key microbial and physicochemical parameters during cheese ripening and storage. This will enable the development of processing technologies for seasonal manipulation of manufacture protocols, to minimise compositional and biochemical variability and to reduce and inhibit the occurrence of specific quality defects.
Abstract:
The primary focus of this thesis was the asymmetric peroxidation of α,β-unsaturated aldehydes and the development of this methodology to include the synthesis of bioactive chiral 1,2-dioxane and 1,2-dioxolane rings. Chapter 1 reviews the new and improved methods of the last decade for the acyclic introduction of peroxide functionality to substrates, including a detailed examination of metal-mediated transformations, chiral peroxidation by organocatalytic means, and improvements in the methodology of well-established peroxidation pathways. The second chapter discusses the method by which peroxidation of our various substrates was attempted and the optimisation studies associated with these reactions. The method by which the enantioselectivity of our β-peroxyaldehydes was determined is also reviewed. Chapters 3 and 4 focus on improving the enantioselectivity associated with our asymmetric peroxidation reaction. A comprehensive analysis exploring the effects of solvent, concentration and temperature on enantioselectivity was carried out. The effect that different catalytic systems have on enantioselectivity and reactivity was also investigated in depth. Chapter 5 details the various transformations that β-peroxyaldehydes can undergo and the manipulation of these transformations towards the establishment of several routes for the formation of chiral 1,2-dioxane and 1,2-dioxolane rings. Chapter 6 details the full experimental procedures, including spectroscopic and analytical data, for the compounds prepared during this research.
Development of large-scale colloidal crystallisation methods for the production of photonic crystals
Abstract:
Colloidal photonic crystals have potential light-manipulation applications, including the fabrication of efficient lasers and LEDs, improved optical sensors and interconnects, and improved photovoltaic efficiencies. One road-block for colloidal self-assembly is the films' inherent defects; however, they can be manufactured cost-effectively into large-area films compared to micro-fabrication methods. This thesis investigates the production of 'large-area' colloidal photonic crystals by sonication, under-oil co-crystallisation and controlled evaporation, with a view to reducing cracking and other defects. A simple monotonic Stöber particle synthesis method was developed, producing silica particles in the range of 80 to 600 nm in a single step. An analytical method that assesses the quality of surface particle ordering in a semi-quantitative manner was also developed: using fast Fourier transform (FFT) spot intensities, a grey-scale symmetry-area method was used to quantify the FFT profiles. Adding ultrasonic vibrations during film formation demonstrated that large areas could be assembled rapidly; however, film ordering suffered as a result. Under-oil co-crystallisation results in the particles being bound together during film formation. While it has the potential to form large areas, it requires further refinement to be established as a production technique. Achieving high-quality photonic crystals bonded with low concentrations (<5%) of polymeric adhesives, while maintaining refractive index contrast, proved difficult and degraded the film's uniformity. A controlled evaporation method, using a mixed-solvent suspension, represents the most promising route to high-quality films over large areas (75 mm × 25 mm). In this mixed-solvent approach, the film is kept in the wet state longer, thus reducing the cracks that develop during the drying stage.
These films are crack-free up to a critical thickness and show very large domains, which are visible in low-magnification SEM images as Moiré fringe patterns. Higher magnification reveals that the separations between alternate fringe patterns are domain boundaries between individual crystalline growth fronts.
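The idea behind FFT-based ordering assessment can be illustrated with a toy metric: in a well-ordered film the FFT power concentrates into discrete spots, while a disordered surface spreads it across the spectrum. The code below is a sketch only; the peak-fraction measure is an assumption standing in for the thesis's grey-scale symmetry-area method, and the "images" are synthetic arrays rather than SEM micrographs.

```python
import numpy as np

def fft_peak_fraction(img, keep=20):
    """Fraction of (DC-excluded) 2-D FFT power held by the `keep` largest bins."""
    power = np.abs(np.fft.fft2(img)) ** 2
    power[0, 0] = 0.0                              # ignore the DC term
    flat = np.sort(power.ravel())[::-1]
    return flat[:keep].sum() / flat.sum()

n = 128
y, x = np.mgrid[0:n, 0:n]
ordered = np.cos(2 * np.pi * x / 8) * np.cos(2 * np.pi * y / 8)  # periodic "lattice"
disordered = np.random.default_rng(1).normal(size=(n, n))        # no long-range order

print(fft_peak_fraction(ordered))      # close to 1: power sits in a few spots
print(fft_peak_fraction(disordered))   # close to 0: power is spread thinly
```

Ranking films by such a spot-concentration score gives the semi-quantitative ordering comparison the abstract describes.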
Abstract:
Very Long Baseline Interferometry (VLBI) polarisation observations of the relativistic jets from Active Galactic Nuclei (AGN) allow the magnetic field environment around the jet to be probed. In particular, multi-wavelength observations of AGN jets allow the creation of Faraday rotation measure maps which can be used to gain an insight into the magnetic field component of the jet along the line of sight. Recent polarisation and Faraday rotation measure maps of many AGN show possible evidence for the presence of helical magnetic fields. The detection of such evidence is highly dependent both on the resolution of the images and the quality of the error analysis and statistics used in the detection. This thesis focuses on the development of new methods for high resolution radio astronomy imaging in both of these areas. An implementation of the Maximum Entropy Method (MEM) suitable for multi-wavelength VLBI polarisation observations is presented and the advantage in resolution it possesses over the CLEAN algorithm is discussed and demonstrated using Monte Carlo simulations. This new polarisation MEM code has been applied to multi-wavelength imaging of the Active Galactic Nuclei 0716+714, Mrk 501 and 1633+382, in each case providing improved polarisation imaging compared to the case of deconvolution using the standard CLEAN algorithm. The first MEM-based fractional polarisation and Faraday-rotation VLBI images are presented, using these sources as examples. Recent detections of gradients in Faraday rotation measure are presented, including an observation of a reversal in the direction of a gradient further along a jet. Simulated observations confirming the observability of such a phenomenon are conducted, and possible explanations for a reversal in the direction of the Faraday rotation measure gradient are discussed. These results were originally published in Mahmud et al. (2013). 
Finally, a new error model for the CLEAN algorithm is developed which takes into account correlation between neighbouring pixels. Comparison of error maps calculated using this new model with Monte Carlo maps shows striking similarities when the sources considered are well resolved, indicating that the method correctly reproduces at least some component of the overall uncertainty in the images. The calculation of many useful quantities using this model is demonstrated, and the advantages it offers over traditional single-pixel calculations are illustrated. The limitations of the model as revealed by Monte Carlo simulations are also discussed; unfortunately, the error model does not work well when applied to compact regions of emission.
Abstract:
Background: The Early Development Instrument (EDI) is a population-level measure of five developmental domains at school-entry age. The overall aim of this thesis was to explore the potential of the EDI as an indicator of early development in Ireland. Methods: A cross-sectional study was conducted in 47 primary schools in 2011 using the EDI and a linked parental questionnaire. EDI (teacher-completed) scores were calculated for 1,344 children in their first year of full-time education. Those scoring in the lowest 10% of the sample population in one or more domains were deemed to be 'developmentally vulnerable'. Scores were correlated with contextual data from the parental questionnaire and with indicators of area- and school-level deprivation. Rasch analysis was used to determine the validity of the EDI. Results: Over one quarter (27.5%) of all children in the study were developmentally vulnerable. Individual characteristics associated with increased risk of vulnerability were being male, being under 5 years old, and having English as a second language. Adjusted for these demographics, low birth weight, poor parent/child interaction and lower maternal education showed the most significant odds ratios for developmental vulnerability. Vulnerability did not follow the area-level deprivation gradient as measured by a composite index of material deprivation. Children considered by the teacher to be in need of assessment also had lower scores, which were not significantly different from those of children with a clinical diagnosis of special needs. All domains showed at least reasonable fit to the Rasch model, supporting the validity of the instrument. However, there was a need for further refinement of the instrument in the Irish context. Conclusion: This thesis provides a unique snapshot of early development in Ireland. The EDI and linked parental questionnaires are promising indicators of the extent, distribution and determinants of developmental vulnerability.
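The vulnerability cut-off described above (lowest 10% of the sample in one or more domains) reduces to a simple percentile rule. The sketch below applies it to synthetic scores; the sample size and domain count follow the abstract, but the scores are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n_children, n_domains = 1344, 5
scores = rng.normal(size=(n_children, n_domains))   # one column per EDI domain

cutoffs = np.percentile(scores, 10, axis=0)         # 10th percentile per domain
vulnerable = (scores < cutoffs).any(axis=1)         # low in one or more domains
print(f"{vulnerable.mean():.1%} flagged as developmentally vulnerable")
```

With five independent domains, the expected flag rate is 1 − 0.9⁵ ≈ 41%; real EDI domains are positively correlated, which pulls the rate down towards figures like the 27.5% observed in the study.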
Abstract:
Absorption heat transformers are thermodynamic systems capable of recycling industrial waste heat by increasing its temperature. Triple-stage absorption heat transformers (TAHTs) can increase the temperature of this waste heat by up to approximately 145°C. The principal factors influencing the thermodynamic performance of a TAHT, and general points of operating optima, were identified using a multivariate statistical analysis, prior to using heat exchange network modelling techniques to dissect the design of the TAHT and systematically reassemble it in order to minimise internal exergy destruction within the unit. This enabled first and second law efficiency improvements of up to 18.8% and 31.5%, respectively, compared to conventional TAHT designs. The economic feasibility of such a thermodynamically optimised cycle was investigated by applying it to an oil refinery in Ireland, demonstrating that in general the capital cost of a TAHT makes it difficult to achieve acceptable rates of return. Decreasing the TAHT's capital cost may be achieved by redesigning its individual pieces of equipment and reducing their size. The potential benefits of using a bubble column absorber were therefore investigated in this thesis. An experimental bubble column was constructed and used to track the collapse of steam bubbles being absorbed into a hotter lithium bromide salt solution. Extremely high mass transfer coefficients of approximately 0.0012 m/s were observed, showing significant improvements over previously investigated absorbers. Two separate models were developed: a combined heat and mass transfer model describing the rate of collapse of the bubbles, and a stochastic model describing the hydrodynamic motion of the collapsing vapour bubbles, taking into consideration random fluctuations observed in the experimental data.
Both models showed good agreement with the collected data, and demonstrated that the difference between the solution's temperature and its boiling temperature is the primary factor influencing the absorber's performance.
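To give a feel for the magnitude of the reported mass transfer coefficient, the sketch below integrates a deliberately simplified collapse law in which the absorption velocity at the interface is held constant, so dV/dt = −v·A implies the radius shrinks linearly at that velocity. This is an assumption for illustration only, not the combined heat and mass transfer model developed in the thesis (where the driving force changes as the solution heats and dilutes).

```python
# Simplified bubble-collapse sketch; V_ABS and R0 are illustrative values.
V_ABS = 0.0012      # effective absorption velocity (m/s), order of the reported k
R0 = 1.0e-3         # initial bubble radius: 1 mm
dt = 1e-4           # time step (s)

# Euler integration of dR/dt = -V_ABS until the bubble is fully absorbed.
r, t = R0, 0.0
while r > 0.0:
    r -= V_ABS * dt
    t += dt

print(f"collapse time ≈ {t:.3f} s")   # analytically R0 / V_ABS ≈ 0.833 s
```

Even this crude estimate shows why such coefficients imply millimetre-scale steam bubbles collapsing in under a second, consistent with the rapid absorption the experiments track.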