47 results for STEP-NC format


Relevance:

20.00%

Publisher:

Abstract:

Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100 Gb/s) to alleviate the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. Nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements with respect to ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy, using 3 ps pulses from mode-locked laser sources, was utilized to accurately measure the carrier dynamics in the device(s) under test. The research is divided into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom epitaxy design EAM with fast carrier sweep-out dynamics. The principal aim was to identify the optimum operating conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.
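
A minimal, hypothetical sketch of how a time constant might be extracted from such a pump-probe trace, assuming a single-exponential gain recovery; the delay axis, amplitudes and noise below are synthetic placeholders, and real SOA/EAM recovery typically needs multi-exponential fits.

```python
# Minimal sketch: extracting a gain-recovery time constant from a pump-probe
# trace by fitting a single-exponential recovery.  The trace is synthetic.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, dg0, tau, offset):
    """Single-exponential recovery of the probe transmission change."""
    return offset + dg0 * np.exp(-t / tau)

delays = np.linspace(0, 200, 400)                        # pump-probe delay, ps
true = recovery(delays, dg0=-0.4, tau=45.0, offset=0.0)  # -0.4 = initial gain compression
trace = true + np.random.normal(0, 0.01, delays.size)    # add measurement noise

popt, _ = curve_fit(recovery, delays, trace, p0=(-0.3, 30.0, 0.0))
print(f"fitted gain-recovery time constant: {popt[1]:.1f} ps")
```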

Relevance:

20.00%

Publisher:

Abstract:

The aim of this project is to integrate neuronal cell culture with commercial or in-house built micro-electrode arrays and MEMS devices. The resulting device is intended to support neuronal cell culture on its surface, expose specific portions of a neuronal population to different environments using microfluidic gradients, and stimulate/record neuronal electrical activity using micro-electrode arrays. Additionally, through the integration of chemical surface patterning, such a device can be used to build neuronal cell networks of specific size, conformation and composition. The design of this device takes inspiration from the nervous system, whose development and regeneration are heavily influenced by surface chemistry and fluidic gradients. Hence, this device is intended to be a step forward in neuroscience research because it utilizes concepts similar to those found in nature. A large part of this research revolved around solving technical issues associated with the integration of biology, surface chemistry, electrophysiology and microfluidics. Commercially available micro-electrode arrays (MEAs) are mechanically and chemically brittle, making them unsuitable for certain surface modification and microfluidic integration techniques described in the literature. In order to successfully integrate all the aspects into one device, some techniques were heavily modified to ensure that their effects on the MEA were minimal. In terms of experimental work, this thesis consists of three parts. The first part dealt with the characterization and optimization of surface patterning and microfluidic perfusion. Through extensive image analysis, the optimal conditions required for micro-contact printing and microfluidic perfusion were determined. The second part applied a number of these optimized techniques to culturing patterned neural cells on a range of substrates including Pyrex, cyclo-olefin and SiN-coated Pyrex; it also described culturing neurons on MEAs and recording electrophysiological activity. The third part of the thesis described the integration of MEAs with patterned neuronal culture and microfluidic devices. Although integration of all methodologies proved difficult, a large amount of data relating to biocompatibility, neuronal patterning, electrophysiology and integration was collected. Original solutions were successfully applied to solve a number of issues relating to the consistency of micro-contact printing and microfluidic integration, leading to successful integration of techniques and device components.
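
As a loose illustration of the kind of image analysis mentioned above, the sketch below scores micro-contact printing fidelity by thresholding a (here synthetic) fluorescence image and measuring how much of the intended stripe pattern is covered; the pattern geometry, threshold and data are assumptions, not the procedure used in the thesis.

```python
# Minimal sketch: semi-quantitative check of micro-contact printing quality.
# 'image' stands in for a fluorescence micrograph of the printed protein and
# 'stripe_mask' for the intended pattern; both are placeholders.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((512, 512))                 # stand-in for a fluorescence image

cols = np.arange(512)
stripe_mask = np.broadcast_to((cols % 100) < 50, (512, 512))  # 50 px stripes, 100 px pitch

threshold = image.mean() + image.std()          # simple global threshold
printed = image > threshold

coverage = (printed & stripe_mask).sum() / stripe_mask.sum()    # fill of intended stripes
bleed = (printed & ~stripe_mask).sum() / (~stripe_mask).sum()   # ink outside the stripes
print(f"stripe coverage: {coverage:.2f}, background bleed: {bleed:.2f}")
```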

Relevance:

20.00%

Publisher:

Abstract:

Thin film dielectrics based on titanium, zirconium or hafnium oxides are being introduced to increase the permittivity of insulating layers in transistors for micro/nanoelectronics and memory devices. Atomic layer deposition (ALD) is the process of choice for fabricating these films, as it allows for high control of composition and thickness in thin, conformal films which can be deposited on substrates with high-aspect-ratio features. The success of this method depends crucially on the chemical properties of the precursor molecules. A successful ALD precursor should be volatile and stable in the gas phase, but reactive on the substrate and growing surface, leading to inert by-products. In recent years, many different ALD precursors for metal oxides have been developed, but many of them suffer from low thermal stability. Much promise is shown by group 4 metal precursors that contain cyclopentadienyl (Cp = C5H5-xRx) ligands, one of the main advantages of which is their thermal stability. In this work, ab initio calculations were carried out at the level of density functional theory (DFT) on a range of heteroleptic metallocenes [M(Cp)4-n(L)n], M = Hf/Zr/Ti, L = Me and OMe, in order to find mechanistic reasons for their observed behaviour during ALD. Based on optimized monomer structures, reactivity is analyzed with respect to ligand elimination. The order in which different ligands are eliminated during ALD follows their energetics, in agreement with experimental measurements. Titanocene-derived precursors, TiCp*(OMe)3, do not yield TiO2 films in ALD with water, while Ti(OMe)4 does. DFT was used to model the ALD reaction sequence and find the reason for this difference in growth behaviour. Both precursors adsorb initially via hydrogen bonding. The simulations reveal that the Cp* ligand of TiCp*(OMe)3 lowers the Lewis acidity of the Ti centre and prevents its coordination to surface O (densification) during both ALD pulses. Blocking this step hinders further ALD reactions, and for that reason no ALD growth is observed from TiCp*(OMe)3 and water. The thermal stability in the gas phase of Ti, Zr and Hf precursors that contain cyclopentadienyl ligands was also considered. The reaction found using DFT is an intramolecular α-H transfer that produces an alkylidene complex. The analysis shows that the thermal stabilities of complexes of the type MCp2(CH3)2 increase down group 4 (M = Ti, Zr and Hf) due to an increase in the HOMO-LUMO gap of the reactants, which itself increases with the electrophilicity of the metal. The reverse reaction of α-hydrogen abstraction in ZrCp2Me2 is the 1,2-addition of a C-H bond to a Zr=C bond. The same mechanism is investigated to determine whether it operates for the 1,2-addition of the tBu C-H across Hf=N in a corresponding Hf dimer complex. The aim of this work is to understand the orbital interactions, how bonds break and new bonds form, and in what state hydrogen is transferred during the reaction. Calculations reveal two synchronous and concerted electron transfers within a four-membered cyclic transition state in the plane between the cyclopentadienyl rings, one π(M=X)-to-σ(M-C) involving metal d orbitals and the other σ(C-H)-to-σ(X-H) mediating the transfer of neutral H, where X = C or N. The reaction of the hafnium dimer complex with CO, studied for the purpose of understanding C-H bond activation, has another interesting application, namely the cleavage of an N-N bond and the resulting N-C bond formation. Analysis of the orbital plots reveals repulsion between the occupied orbitals on CO and the N-N unit when CO approaches along the N-N axis. These repulsions are minimized by instead forming an asymmetrical intermediate in which CO first coordinates to one Hf and then to N. This breaks the symmetry of the N-N unit, and the resultant mixing of MOs allows σ(NN) to be polarized, localizing electrons on the more distant N. This allows σ(CO) and π(CO) donation to N and back-donation of π*(Hf2N2) to CO. Improved understanding of the chemistry of metal complexes can be gained from atomic-scale modelling, and this provides valuable information for the design of new ALD precursors. The information gained from the model decomposition pathway can additionally be used to understand the chemistry of molecules in the ALD process as well as in catalytic systems.
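
As a toy illustration of ranking ligand-elimination channels by their energetics (not the thesis' actual DFT data), the sketch below compares reaction energies computed from placeholder total energies for a hypothetical Zr precursor reacting with water.

```python
# Minimal sketch: ranking ligand-elimination reactions by DFT reaction energy,
#   M(Cp)4-n(L)n + H2O  ->  M(Cp)4-n(L)n-1(OH) + LH
# Total energies below are placeholders (hartree), NOT values from the thesis;
# in practice they would come from geometry-optimised DFT calculations.
E_H = 27.2114  # eV per hartree

energies = {
    "ZrCp2(OMe)2 + H2O": -1000.000,        # reactant side (placeholder)
    "ZrCp2(OMe)(OH) + MeOH": -1000.012,    # OMe eliminated as methanol
    "ZrCp(OMe)2(OH) + CpH": -1000.004,     # Cp eliminated as cyclopentadiene
}

reactant = energies["ZrCp2(OMe)2 + H2O"]
for products, E in energies.items():
    if products == "ZrCp2(OMe)2 + H2O":
        continue
    dE = (E - reactant) * E_H
    print(f"{products}: dE = {dE:+.2f} eV")
# The more exothermic channel is eliminated first; the placeholder numbers make
# OMe loss easier than Cp loss, mirroring the ligand-elimination ordering above.
```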

Relevance:

20.00%

Publisher:

Abstract:

This ethnographic study makes a number of original contributions to the consumer identity projects and marketplace cultures dimensions of consumer culture theory research. The study introduces the notion of the brand-orientated play-community, a novel consumption community form which displays, as its locus, a desire to play. This contributes to our understanding of the fluid relationship between subcultures of consumption, consumer tribes and brand community. It was found that the brand-orientated play-community’s prime celebration, conceptualised as the ‘branded carnival’, displays characteristics of the archetypal carnival. The community accesses carnivalistic life and a world-upside-down ethos via the use and misuse of marketplace resources. The branded carnival is further supported by the community’s enactment of ‘toxic play’, which entails abnormal alcohol consumption, illegal black-market resources, edgework activities, hegemonic masculinity and upsetting the public. This play-community is discussed in terms of a hyper-masculine playpen, as the play enacted has a direct relationship with the enactment of strong masculine roles. It was found that male play-ground members enact the extremes of contrasting masculine roles as a means to subvert the calculated and sedate ‘man-of-action-hero’ synthesis. Carnivals are unisex, and hence women have begun entering the play-ground. Female members have successfully renegotiated their role within the community, from playthings to players: they have achieved player equality, which within the liminoid zone is more powerful than gender equality. However, while toxic play is essential to the maintenance of collective identity within the culture, so too is a more serious form of play: the toxic sport of professional beer pong. The author conceptualises beer pong as a ‘toxic sport’, as it displays the contradictory play foundations of agon and corrupt ilinx; this is understood as a milestone step in the emergence of the postmodern sport era, in which spontaneity and the carnivalesque will dominate.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this thesis was to improve the dissolution rate of the poorly water-soluble drug fenofibrate by processing it with a high surface area carrier, mesoporous silica. The properties of the resulting drug–silica composite were studied in terms of drug distribution within the silica matrix, solid state and release properties. Prior to commencing any experimental work, the properties of unprocessed mesoporous silica and fenofibrate were characterised (chapter 3); this allowed for comparison with the processed samples studied in later chapters. Fenofibrate was a highly stable, crystalline drug that did not adsorb moisture, even under long-term accelerated storage conditions. It maintained its crystallinity even after supercritical carbon dioxide (SC-CO2) processing. Its dissolution rate was limited and dependent on the characteristics of the particular in vitro media studied. Mesoporous silica had a large surface area and mesopore volume and readily picked up moisture when stored under long-term accelerated storage conditions (75% RH, 40 °C). It maintained its mesopore character after SC-CO2 processing. A variety of methods were employed to process fenofibrate with mesoporous silica, including physical mixing, the melt method, solvent impregnation and novel methods such as liquid and supercritical carbon dioxide processing (chapter 4). It was found to be important to break down the fenofibrate particulate structure to a molecular state to enable drug molecules to enter the silica mesopores. While all processing methods led to some improvement in fenofibrate release properties, the impregnation, liquid and SC-CO2 methods produced the most rapid release rates. SC-CO2 processing was further studied with a view to optimising the processing parameters to achieve the highest drug-loading efficiency possible (chapter 5). It was found that SC-CO2 processing pressure had a bearing on drug-loading efficiency, while neither pressure, duration nor depressurisation rate affected drug solid state or release properties. The amount of drug that could be loaded onto the mesoporous silica was also investigated at different ratios of drug mass to silica surface area under constant SC-CO2 conditions; as the drug–silica ratio increased, the drug-loading efficiency decreased, while there was no effect on drug solid state or release properties. The influence of the number of drug-loading steps was investigated (chapter 6) with a view to increasing the drug-loading efficiency. This multiple-step approach did not yield an increase in drug-loading efficiency compared to the single-step approach. It was also an objective of this chapter to understand how much drug could be loaded into the silica mesopores; a method based on the known volume of the mesopores and the true density of the drug was investigated. However, this approach had serious repercussions for the subsequent solid state nature of the drug and its release performance; there was significant drug crystallinity and a reduced release extent. The impact of in vitro release media on fenofibrate release was also studied (chapter 6). Here it was seen that media containing HCl led to reduced drug release over time compared to equivalent media not containing HCl. The key findings of this thesis, discussed in chapter 7, include: (1) the drug–silica processing method strongly influenced drug distribution within the silica matrix, drug solid state and release; (2) the silica surface area and mesopore volume also influenced how much drug could be loaded. It was shown that SC-CO2 processing variables such as processing pressure (13.79-41.37 MPa), duration (4-24 h) and depressurisation rate (rapid or controlled) did not influence the drug distribution within the SBA-15 matrix, drug solid state form or release. Possible avenues of research going forward include the development and application of high-resolution imaging techniques to visualise drug molecules within the silica mesopores. Also, the issues surrounding SBA-15 usage in a pharmaceutical manufacturing environment should be addressed.
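
A minimal worked example of the pore-filling estimate referred to above (maximum drug load = mesopore volume per gram of silica x true density of the drug); the numerical values are illustrative placeholders, not the measured values from the thesis.

```python
# Minimal sketch of the pore-filling estimate: the maximum mass of drug that fits
# inside the silica mesopores is the pore volume per gram of silica multiplied by
# the true density of the drug.  Numbers are illustrative placeholders.
pore_volume = 1.0        # cm^3 of mesopore volume per gram of SBA-15 (placeholder)
true_density = 1.2       # g/cm^3, true density of the drug (placeholder)

max_drug_per_g_silica = pore_volume * true_density             # g drug per g silica
loading_pct = 100 * max_drug_per_g_silica / (1 + max_drug_per_g_silica)
print(f"theoretical maximum loading: {loading_pct:.0f}% w/w of the composite")
```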

Relevance:

20.00%

Publisher:

Abstract:

The electroencephalogram (EEG) is a medical technology that is used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool-of-choice with which to continuously monitor the brain where temporal resolution, ease-of-use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade the performance of automated neurological event detection systems. This thesis, therefore, contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients. In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) the problem of head-movement artefacts in ambulatory EEG, (ii) the high numbers of false detections in state-of-the-art automated epileptiform activity detection systems and (iii) false detections in state-of-the-art automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG using supervised machine learning classifiers. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional signals, in the form of gyroscope recordings, are used to detect head movements and, in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection which compares favourably with other state-of-the-art systems is achieved. Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner: blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefacts from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world clinical domain one step closer.
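
A minimal sketch of the supervised-classifier idea described above: simple per-epoch features are extracted from EEG and gyroscope channels and fed to a support vector machine to flag head-movement artefacts. The features, synthetic data and parameters are placeholders, not the thesis' actual feature set or fusion scheme.

```python
# Minimal sketch: fusing EEG and gyroscope information for head-movement
# artefact detection with a support vector machine.  Data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def features(epoch):
    """Simple per-epoch features: variance and line length."""
    return [np.var(epoch), np.sum(np.abs(np.diff(epoch)))]

def make_epoch(artefact):
    """Synthetic 2 s epoch: clean EEG vs. EEG corrupted by a movement transient."""
    eeg = rng.normal(0, 10, 512)                 # background EEG, arbitrary units
    gyro = rng.normal(0, 0.05, 512)              # head at rest
    if artefact:
        eeg[200:300] += 80 * np.hanning(100)     # slow movement artefact
        gyro[200:300] += 2 * np.hanning(100)     # gyroscope sees the movement
    return features(eeg) + features(gyro)

X = np.array([make_epoch(a) for a in ([0] * 200 + [1] * 200)])
y = np.array([0] * 200 + [1] * 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(Xtr, ytr)
print("epoch-level accuracy:", accuracy_score(yte, clf.predict(Xte)))
```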

Relevance:

20.00%

Publisher:

Abstract:

Avalanche Photodiodes (APDs) have been used in a wide range of low-light sensing applications such as DNA sequencing, quantum key distribution, LIDAR and medical imaging. To operate the APDs, control circuits are required to achieve the desired performance characteristics. This thesis presents work on the development of three control circuits: a bias circuit, an active quench and reset circuit and a gain control circuit, all of which are used for control and performance enhancement of the APDs. The bias circuit is used to bias planar APDs for operation in both linear and Geiger modes. The circuit is based on a dual charge-pump configuration and operates from a 5 V supply. It is capable of providing milliamp load currents for shallow-junction planar APDs that operate up to 40 V. With novel voltage regulators, the bias voltage provided by the circuit can be accurately controlled and easily adjusted by the end user. The circuit is highly integrable and provides an attractive solution for applications requiring a compact integrated APD device. The active quench and reset circuit is designed for APDs that operate in Geiger mode and are required for photon counting. The circuit enables linear changes in the hold-off time of the Geiger-mode APD (GM-APD) from several nanoseconds to microseconds with a stable setting step of 6.5 ns. This facilitates setting the optimal ‘afterpulse-free’ hold-off time for any GM-APD via user-controlled digital inputs. In addition, this circuit does not require an additional monostable or pulse generator to reset the detector, thus simplifying the circuit. Compared to existing solutions, this circuit provides more accurate and simpler control of the hold-off time while maintaining a comparable maximum count rate of 35.2 Mcounts/s. The third circuit is a gain control circuit, based on the idea of using two matched APDs to set and stabilize the gain. The circuit can provide the high bias voltage needed to operate the planar APD, precisely set the APD’s gain (with errors of less than 3%) and compensate for changes in temperature to maintain a more stable gain. The circuit operates without the need for external temperature sensing and control electronics, thus lowering system cost and complexity. It also provides a simpler and more compact solution compared to previous designs. The three circuits designed in this project were developed independently of each other and are used for improving different performance characteristics of the APD. Further research on the combination of the three circuits will produce a more compact APD-based solution for a wide range of applications.
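
A small sketch of the user-programmable hold-off control described above, assuming a linear mapping from an n-bit digital code to the hold-off time in 6.5 ns steps; the code width and register mapping are assumptions for illustration only.

```python
# Minimal sketch of the programmable hold-off time: an n-bit digital input
# selects the hold-off in steps of 6.5 ns (the step size quoted above).
STEP_NS = 6.5

def hold_off_ns(code: int, n_bits: int = 8) -> float:
    """Hold-off time for a given digital input code (linear mapping, assumed)."""
    if not 0 <= code < 2 ** n_bits:
        raise ValueError("code out of range")
    return code * STEP_NS

for code in (1, 16, 154):
    print(f"code {code:3d} -> hold-off {hold_off_ns(code):7.1f} ns")
# e.g. code 154 gives roughly 1.0 us, spanning the "several nanoseconds to
# microseconds" range; the maximum count rate is roughly the inverse of the
# hold-off plus the quench/reset overhead.
```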

Relevance:

20.00%

Publisher:

Abstract:

To screen for novel ribosomally synthesised antimicrobials, in-silico genome mining was performed on all publicly available fully sequenced bacterial genomes. Forty-nine novel type 1 lantibiotic clusters were identified from a number of species, genera and phyla not usually associated with lantibiotic production, indicating a high prevalence. A crucial step towards the commercialisation of fermented beverages is the characterisation of their microbial content. To achieve this goal, we applied next-generation sequencing techniques to analyse the bacterial and yeast populations of the organic, symbiotically fermented beverages kefir, water kefir and kombucha. A number of minor components were revealed, many of which had not previously been associated with these beverages. The dominant microorganism in each of the water kefir grains and fermentates was Zymomonas, an ethanol-producing bacterium that had not previously been detected on such a scale. These studies represent the most accurate description of these populations to date, and should aid in future starter design and in determining which species are responsible for specific attributes of the beverages. Finally, high-throughput robotics was applied to screen for the presence of antimicrobial producers associated with these beverages. This revealed a low frequency of bacteriocin production amongst the bacterial isolates, with only lactococcins A, B and LcnN of lactococcin M being identified. However, a proteinaceous antimicrobial produced by the yeast Dekkera bruxellensis, isolated from kombucha, was found to be active against Lactobacillus bulgaricus. This peptide was partially purified.
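
As a rough illustration of the genome-mining idea (not the screening pipeline actually used), the sketch below flags short peptides that are unusually rich in Ser/Thr/Cys, the residues that form (methyl)lanthionine bridges; the length window, threshold and toy sequences (the first is modelled on the nisin precursor) are assumptions, and a real pipeline would also search for the modification enzymes.

```python
# Minimal sketch of in-silico screening for candidate lanthipeptide precursors:
# short ORF products that are unusually rich in Ser/Thr/Cys.  Thresholds are
# illustrative assumptions only.
def is_candidate(peptide: str, min_len=30, max_len=80, frac=0.20) -> bool:
    if not (min_len <= len(peptide) <= max_len):
        return False
    stc = sum(peptide.count(aa) for aa in "STC")
    return stc / len(peptide) >= frac

orfs = {
    # toy positive control modelled on the nisin precursor (leader + core)
    "putative_orf_1": "MSTKDFNLDLVSVSKKDSGASPRITSISLCTPGCKTGALMGCNMKTATCHCSIHVSK",
    # toy negative control: ordinary short protein, low Ser/Thr/Cys content
    "putative_orf_2": "MSTKDLLAKAAEILGIDPAEVKAFIEGLRAAGVDLEDLLAELKKQGLL",
}
for name, seq in orfs.items():
    print(name, "candidate" if is_candidate(seq) else "-")
```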

Relevance:

20.00%

Publisher:

Abstract:

This thesis is structured in the format of a three-part Portfolio of Exploration, intended to facilitate transformation in my ways of knowing and so enhance an experienced business practitioner’s capabilities and effectiveness. A key factor in my ways of knowing, as opposed to what I know, is my exploration of context and assumptions. By interacting with my cultural, intellectual, economic, and social history, I seek to become critically aware of the biographical, historical, and cultural context of my beliefs and feelings about myself. This Portfolio is not exclusively for historians of economics or historians of ideas, but also for those interested in becoming more aware of how culturally assimilated frames of reference and bundles of assumptions influence the way they perceive, think, decide, feel and interpret their experiences, so that they can operate more effectively in their professional and organisational lives. In the first part of my Portfolio, I outline and reflect upon the Portfolio’s overarching theory of adult development: the writings of Harvard’s Robert Kegan and Columbia University’s Jack Mezirow. The second part delves further into meaning-making, the activity by which one organises and makes sense of the world, and how meaning-making evolves to different levels of complexity. I explore how past experience and our interpretations of history influence our understanding, since all perception is inevitably tinged with bias and entrenched ‘theory-laden’ assumptions. In the third part, I explore the 1933 inaugural University College Dublin Finlay Lecture delivered by the economist John Maynard Keynes. My findings provide a new perspective on and understanding of Keynes’s 1933 lecture by not relying solely on the text of the three contextualised essay versions of his lecture. The purpose and context of Keynes’s original, longer lecture version were quite different from those of the three shorter essay versions published for the American, British and German audiences.

Relevance:

20.00%

Publisher:

Abstract:

The abundance of many commercially important fish stocks is declining, and this has led to widespread concern about the performance of traditional approaches to fisheries management. Quantitative models are used to obtain estimates of population abundance, and management advice is based on annual harvest levels (total allowable catch, TAC), where only a certain amount of catch is allowed from specific fish stocks. However, these models are data intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct and hence can be affected by disturbances that may account for both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as Cumulative Sum (CUSUM) control charts, are useful in classifying these effects and hence can be used to trigger a management response only when a significant impact occurs to the stock biomass. This thesis explores how empirical indicators, along with CUSUM, can be used for monitoring, assessment and management of fish stocks. I begin my thesis by exploring various age-based catch indicators to identify those which are potentially useful in tracking the state of fish stocks. The sensitivity and response of these indicators to changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected to the fishing gear, or Large Fish Indicators (LFIs), are the most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control charts used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point (‘control mean’) to detect out-of-control (significant impact) situations. The sensitivity and specificity of the SS-CUSUM showed that its performance is robust when LFIs are used. Once an out-of-control situation is detected, the next step is to determine how much shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporation into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which can measure the shift size in stock biomass with the highest accuracy. Results showed that methods based on Grubbs' harmonic rule gave reliable shift-size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock-recruitment and the LFI, because this may account for impacts independent of fishing. The procedure of integrating both SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using the SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme will be useful for sustaining the initial in-control state of the fish stock until more observations become available for quantitative assessments.
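
A minimal sketch of a two-sided, decision-interval (tabular) CUSUM applied to a standardised stock indicator such as an LFI; the target mean, allowance k and decision interval h below are illustrative choices, not the values tuned in the thesis.

```python
# Minimal sketch of a two-sided, decision-interval CUSUM on a standardised
# indicator series.  Data and chart parameters are illustrative only.
import numpy as np

def cusum(z, k=0.5, h=4.0):
    """Tabular CUSUM on standardised observations z; returns alarm indices."""
    c_pos, c_neg, alarms = 0.0, 0.0, []
    for t, zt in enumerate(z):
        c_pos = max(0.0, c_pos + zt - k)   # accumulates upward shifts
        c_neg = max(0.0, c_neg - zt - k)   # accumulates downward shifts
        if c_pos > h or c_neg > h:
            alarms.append(t)
            c_pos = c_neg = 0.0            # restart after signalling
    return alarms

rng = np.random.default_rng(2)
indicator = rng.normal(0.0, 1.0, 40)                          # in-control years
indicator = np.append(indicator, rng.normal(-1.5, 1.0, 20))   # decline in biomass
print("out-of-control signalled in years:", cusum(indicator))
```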

Relevance:

20.00%

Publisher:

Abstract:

The aim of this thesis is to examine whether a difference exists in income across different categories of drinkers in Ireland, using the 2007 Slán data set. The possible impact of alcohol consumption on health status and health care utilisation is also examined. Potential endogeneity and selection bias are accounted for throughout. Endogeneity arises where an independent variable included in the model is determined within the context of the model (Chenhall and Moers, 2007). An endogenous relationship between income and alcohol, and between health and alcohol, is accounted for by the use of separate income equations and separate health status equations for each category of drinker, similar to what was done in previous studies of the effects of alcohol on earnings (Hamilton and Hamilton, 1997; Barrett, 2002). Sample selection bias arises when sector selection is non-random due to individuals choosing a particular sector because of their personal characteristics (Heckman, 1979; Zhang, 2004). In relation to alcohol consumption, selection bias may arise because people may select into a particular drinker group knowing that doing so will not have a negative effect on their income or health (Hamilton and Hamilton, 1997; Di Pietro and Pedace, 2008; Barrett, 2002). Selection bias in alcohol consumption is accounted for using the Multinomial Logit OLS Two-Step Estimator proposed by Lee (1982), which is an extension of the Heckman Probit OLS Two-Step Estimator. Alcohol status as an ordered variable is examined, and possible estimation methods that account for this ordinality while also accounting for selection bias are considered. Limited Information and Full Information Methods of estimation of simultaneous equations are assessed and compared. Findings show that in Ireland moderate drinkers have a higher income than abstainers or heavy drinkers. Some studies, such as Barrett (2002), argue that this is a consequence of alcohol improving one’s health, which in turn can influence one’s productivity and ultimately be reflected in earnings, given that previous studies have found moderate levels of alcohol consumption to be beneficial to health status. This study goes on to examine the relationship between health status and alcohol consumption and whether the correlation between income and the consumption of alcohol is similar in sign and magnitude to the correlation between health status and the consumption of alcohol. Results indicate that moderate drinkers have a higher income than non-drinkers or heavy drinkers, with the weekly household income of moderate drinkers being €660.10, that of non-drinkers €546.75 and that of heavy drinkers €449.99. Moderate drinkers also report better health status than non-drinkers and slightly better health status than heavy drinkers. More non-drinkers report poor health than either moderate or heavy drinkers. As part of the analysis of the effect of alcohol consumption on income and on health status, the relationship of other socio-economic variables, such as gender, age and education, with income, health and alcohol status is examined.
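
A hypothetical sketch of the Lee-type multinomial logit / OLS two-step idea on synthetic data: fit a multinomial logit for drinker category, form a selection-correction term from the predicted probability of the observed category, and include it in group-specific OLS income equations. The variable names, data and the exact correction formula shown are illustrative assumptions, not the thesis' specification.

```python
# Minimal sketch of a Lee-type two-step: (1) multinomial logit for drinker
# category, (2) selection correction lambda = phi(Phi^-1(P_hat)) / P_hat from
# the predicted probability of the chosen category, (3) group-specific OLS
# income equations including lambda.  All data are synthetic placeholders.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 2000
age = rng.uniform(20, 65, n)
educ = rng.integers(1, 5, n).astype(float)

# synthetic drinker category (0 = abstainer, 1 = moderate, 2 = heavy)
v = np.column_stack([np.zeros(n), 0.03 * age - 0.8, 0.6 * (educ < 2) - 0.5])
drinker = (v + rng.gumbel(size=(n, 3))).argmax(axis=1)
income = 300 + 4 * age + 40 * educ + 80 * (drinker == 1) + rng.normal(0, 50, n)

X_sel = sm.add_constant(np.column_stack([age, educ]))
mnl = sm.MNLogit(drinker, X_sel).fit(disp=False)
p_hat = mnl.predict(X_sel)[np.arange(n), drinker]    # P(observed category)
lam = norm.pdf(norm.ppf(p_hat)) / p_hat               # Lee-type correction term

for g, label in enumerate(["abstainer", "moderate", "heavy"]):
    idx = drinker == g
    X_out = sm.add_constant(np.column_stack([age[idx], educ[idx], lam[idx]]))
    ols = sm.OLS(income[idx], X_out).fit()
    print(label, "income equation coefficients:", np.round(ols.params, 2))
```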

Relevance:

20.00%

Publisher:

Abstract:

One problem with most three-dimensional (3D) scalar data visualization techniques is that they often fail to depict the uncertainty that comes with the 3D scalar data; they therefore do not faithfully present the data and risk misleading users’ interpretations, conclusions or even decisions. This thesis therefore focuses on the study of uncertainty visualization in 3D scalar data; we seek to create better uncertainty visualization techniques, as well as to establish the advantages and disadvantages of state-of-the-art uncertainty visualization techniques. To do this, we address three specific hypotheses: (1) the proposed Texture uncertainty visualization technique enables users to better identify scalar/error data, and provides reduced visual overload and more appropriate brightness than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (2) The proposed Linked Views and Interactive Specification (LVIS) uncertainty visualization technique enables users to better search for max/min scalar and error data than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (3) The proposed Probabilistic Query uncertainty visualization technique, in comparison to traditional Direct Volume Rendering (DVR) methods, enables radiologists/physicians to better identify possible alternative renderings relevant to a diagnosis and the classification probabilities associated with the materials appearing in these renderings; this leads to improved decision support for diagnosis, as demonstrated in the domain of medical imaging. Each hypothesis is tested by following a unified framework that consists of three main steps. The first main step is uncertainty data modeling, which defines and generates the types of uncertainty associated with the given 3D scalar data. The second main step is uncertainty visualization, which transforms the 3D scalar data and the associated uncertainty generated in the first step into two-dimensional (2D) images for insight, interpretation or communication. The third main step is evaluation, which transforms the 2D images generated in the second step into quantitative scores according to specific user tasks and statistically analyzes the scores. As a result, the quality of each uncertainty visualization technique is determined.
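
A minimal sketch of the first step of this framework (uncertainty data modelling) under simple assumptions: per-voxel mean and standard deviation are derived from a synthetic ensemble of 3D scalar fields, and one slice is exposed as a scalar map plus an uncertainty map. This is generic, not the Texture, LVIS or Probabilistic Query techniques proposed in the thesis.

```python
# Minimal sketch: ensemble-based uncertainty modelling of a 3D scalar field,
# rendered as a 2D slice pair (scalar + uncertainty).  Data are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
x, y, z = np.meshgrid(*[np.linspace(-1, 1, 64)] * 3, indexing="ij")
truth = np.exp(-(x**2 + y**2 + z**2) / 0.3)                       # smooth 3D scalar field
ensemble = truth + rng.normal(0, 0.05 * (1 + np.abs(x)), (20, 64, 64, 64))

mean = ensemble.mean(axis=0)   # best estimate per voxel
std = ensemble.std(axis=0)     # per-voxel uncertainty

fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(8, 4))
ax0.imshow(mean[:, :, 32], cmap="viridis"); ax0.set_title("scalar (slice z=32)")
ax1.imshow(std[:, :, 32], cmap="magma");    ax1.set_title("uncertainty (std. dev.)")
plt.savefig("uncertainty_slice.png", dpi=150)
```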

Relevance:

20.00%

Publisher:

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media and e-government. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnects. Today, transceivers for these applications achieve up to 100 Gb/s by multiplexing 10 x 10 Gb/s or 4 x 25 Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400 Gb/s up to 1 Tb/s. The crucial challenge is to achieve this in the same footprint (the same transceiver module) and with similar power consumption as today’s technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: a 1 Tb/s transceiver would require the integration of 40 VCSELs (vertical-cavity surface-emitting laser diodes, widely used for short-reach optical interconnects), 40 photodiodes and the electronics operating at 25 Gb/s in the same module as today’s 100 Gb/s transceiver. Pushing the bit rate on such links beyond today’s commercially available 100 Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends a set of designs specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65 nm and 28 nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis of the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per individual channel, which helps to overcome the challenges mentioned above in realising 400 Gb/s to 1 Tb/s transceivers.
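
A small sketch of the PAM-4 idea and a simple two-tap pre-emphasis of the kind used to boost signal edges; the Gray mapping and tap weight are illustrative choices, not the transceiver's actual settings.

```python
# Minimal sketch: mapping a bit stream to PAM-4 symbols (2 bits per symbol) and
# applying a simple 2-tap FIR pre-emphasis that boosts transitions (edges).
import numpy as np

bits = np.random.default_rng(5).integers(0, 2, 32)
pairs = bits.reshape(-1, 2)

gray_levels = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # Gray-coded PAM-4
symbols = np.array([gray_levels[tuple(p)] for p in pairs], dtype=float)

# 2-tap pre-emphasis: y[n] = (1 + a) * x[n] - a * x[n-1]; emphasises level changes
a = 0.25
emphasised = np.convolve(symbols, [1 + a, -a])[: len(symbols)]

print("PAM-4 symbols :", symbols.astype(int))
print("pre-emphasised:", np.round(emphasised, 2))
# Each symbol carries 2 bits, so a 25 GBd PAM-4 lane delivers 50 Gb/s, which is
# the throughput-per-channel gain that motivates PAM-4 above.
```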

Relevance:

20.00%

Publisher:

Abstract:

Solar energy is a clean and abundant energy source that can help reduce reliance on fossil fuels, about which questions persist regarding their contribution to climate change and their long-term availability. Monolithic triple-junction solar cells are currently the state-of-the-art photovoltaic devices, with champion cell efficiencies exceeding 40%, but their ultimate efficiency is restricted by the current-matching constraint of series-connected cells. The objective of this thesis was to investigate the use of solar cells with lattice constants equal to that of InP in order to reduce the constraint of current matching in multi-junction solar cells. This was addressed by two approaches. Firstly, the formation of mechanically stacked solar cells (MSSC) was investigated through the addition of separate connections to the individual cells that make up a multi-junction device. An electrical and optical modelling approach identified separately connected InGaAs bottom cells stacked under dual-junction GaAs-based top cells as a route to high efficiency. An InGaAs solar cell was fabricated on an InP substrate with a measured 1-Sun conversion efficiency of 9.3%. A comparative study of adhesives found benzocyclobutene to be the most suitable for bonding component cells in a mechanically stacked configuration, owing to its higher thermal conductivity and refractive index compared with other candidate adhesives. A flip-chip process was developed to bond single-junction GaAs and InGaAs cells, with a measured 4-terminal MSSC efficiency of 25.2% under 1-Sun conditions. Additionally, a novel InAlAs solar cell was identified which can provide an alternative to the well-established GaAs solar cell. As wide-bandgap InAlAs solar cells have not been extensively investigated for use in photovoltaics, single-junction cells were fabricated and their properties relevant to PV operation analysed. Minority carrier diffusion lengths in the micrometre range were extracted, confirming InAlAs as a suitable material for use in III-V solar cells, and a 1-Sun conversion efficiency of 6.6% was measured for cells with 800 nm thick absorber layers. Given the cost and small diameter of commercially available InP wafers, InGaAs and InAlAs solar cells were also fabricated on alternative substrates, namely GaAs. As a first demonstration, the lattice constant of a GaAs substrate was graded to that of InP using an InxGa1-xAs metamorphic buffer layer onto which the cells were grown. This was the first demonstration of an InAlAs solar cell on an alternative substrate and an initial step towards fabricating these cells on Si. The results presented offer a route to developing multi-junction solar cell devices based on the InP lattice parameter, thus extending the range of available bandgaps for high-efficiency cells.
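
A toy calculation of why separate connections relax the current-matching constraint: in a series-connected (2-terminal) tandem the string current is pinned to the worst subcell, whereas a 4-terminal stack lets each cell deliver its own maximum power. The operating-point numbers below are illustrative, not measured values.

```python
# Minimal toy comparison of the current-matching constraint.  In a 2-terminal
# series stack the current is limited by the lowest subcell current, while a
# mechanically stacked 4-terminal device lets each cell run at its own maximum
# power point.  All numbers are illustrative placeholders.
subcells = {                      # (Jmp [mA/cm^2], Vmp [V]) at each cell's own MPP
    "GaAs top":      (13.0, 0.90),
    "InGaAs bottom": (18.0, 0.45),
}

p_4t = sum(j * v for j, v in subcells.values())          # 4-terminal: powers add

j_series = min(j for j, _ in subcells.values())          # 2-terminal: current pinned
p_2t = j_series * sum(v for _, v in subcells.values())   # (simplified: voltages add)

print(f"4-terminal stack: {p_4t:.1f} mW/cm^2")
print(f"2-terminal stack: {p_2t:.1f} mW/cm^2 (limited by J = {j_series} mA/cm^2)")
```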

Relevance:

20.00%

Publisher:

Abstract:

Colloidal photonic crystals have potential light-manipulation applications including the fabrication of efficient lasers and LEDs, improved optical sensors and interconnects, and improved photovoltaic efficiencies. One roadblock for colloidal self-assembly is its inherent defects; however, colloidal crystals can be manufactured cost-effectively into large-area films compared with micro-fabrication methods. This thesis investigates the production of ‘large-area’ colloidal photonic crystals by sonication, under-oil co-crystallisation and controlled evaporation, with a view to reducing cracking and other defects. A simple monotonic Stöber particle synthesis method was developed, producing silica particles in the range of 80 to 600 nm in a single step. An analytical method that assesses the quality of surface particle ordering in a semi-quantitative manner was also developed: a grey-scale symmetry-area method based on fast Fourier transform (FFT) spot intensities is used to quantify the FFT profiles. Adding ultrasonic vibrations during film formation demonstrated that large areas could be assembled rapidly; however, film ordering suffered as a result. Under-oil co-crystallisation results in the particles being bound together during film formation. While it has the potential to form large areas, it requires further refinement to be established as a production technique. Achieving high-quality photonic crystals bonded with low concentrations (<5%) of polymeric adhesives while maintaining refractive index contrast proved difficult and degraded the film’s uniformity. A controlled evaporation method, using a mixed-solvent suspension, represents the most promising route to produce high-quality films over large areas (75 mm x 25 mm). During this mixed-solvent approach, the film is kept in the wet state for longer, thus reducing the cracks that develop during the drying stage. These films are crack-free up to a critical thickness and show very large domains, which are visible in low-magnification SEM images as Moiré fringe patterns. Higher magnification reveals that the separations between alternate fringe patterns are domain boundaries between individual crystalline growth fronts.
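
In the spirit of the FFT spot-intensity analysis mentioned above (not the exact grey-scale symmetry-area procedure), the sketch below Fourier-transforms a synthetic 'SEM' image and scores ordering by how strongly the power at the dominant spatial frequency is concentrated into discrete spots.

```python
# Minimal sketch: semi-quantitative ordering score from the 2D FFT of a
# particle-layer image.  The images below are synthetic stand-ins for SEM data.
import numpy as np

def ordering_score(img):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    cy, cx = np.array(spec.shape) // 2
    yy, xx = np.indices(spec.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)
    # pick the radius (spatial frequency) carrying the most power, ignoring DC
    r_peak = np.argmax(np.bincount(r.ravel(), weights=spec.ravel())[5:]) + 5
    vals = np.sort(spec[r == r_peak])[::-1]
    return vals[:6].sum() / vals.sum()   # fraction of ring power in 6 brightest pixels

x, y = np.meshgrid(np.arange(256), np.arange(256))
ordered = sum(np.cos(2 * np.pi * (x * np.cos(t) + y * np.sin(t)) / 12)
              for t in (0, np.pi / 3, 2 * np.pi / 3))        # hexagonal-like lattice
noise = np.random.default_rng(6).random((256, 256))          # disordered film

print("ordered film score   :", round(ordering_score(ordered), 2))
print("disordered film score:", round(ordering_score(noise), 2))
```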