7 results for Expected First Passage Time
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
Receptor modelling was performed on quadrupole unit mass resolution aerosol mass spectrometer (Q-AMS) sub-micron particulate matter (PM) chemical speciation measurements from Windsor, Ontario, an industrial city situated across the Detroit River from Detroit, Michigan. Aerosol and trace gas measurements were collected on board Environment Canada's CRUISER mobile laboratory. Positive matrix factorization (PMF) was performed on the AMS full particle-phase mass spectrum (PMF-Full MS), encompassing both organic and inorganic components. This approach was compared to the more common method of analysing only the organic mass spectra (PMF-Org MS). PMF of the full mass spectrum revealed that variability in the non-refractory sub-micron aerosol concentration and composition was best explained by six factors: an amine-containing factor (Amine); an ammonium sulphate and oxygenated organic aerosol containing factor (Sulphate-OA); an ammonium nitrate and oxygenated organic aerosol containing factor (Nitrate-OA); an ammonium chloride containing factor (Chloride); a hydrocarbon-like organic aerosol (HOA) factor; and a moderately oxygenated organic aerosol factor (OOA). PMF of the organic mass spectrum revealed three factors of similar composition to some of those revealed through PMF-Full MS: Amine, HOA and OOA. Including both the inorganic and organic mass proved to be a beneficial approach to analysing the unit mass resolution AMS data for several reasons. First, it provided a method for potentially calculating more accurate sub-micron PM mass concentrations, particularly when unusual factors are present, in this case, an Amine factor. As this method does not rely on a priori knowledge of chemical species, it circumvents the need for any adjustments to the traditional AMS species fragmentation patterns to account for atypical species, and can thus lead to more complete factor profiles. It is expected that this method would be even more useful for HR-ToF-AMS data, due to the ability to better understand the chemical nature of atypical factors from high resolution mass spectra. Second, utilizing PMF to extract factors containing inorganic species allowed for the determination of the extent of neutralization, which could have implications for aerosol parameterization. Third, subtler differences in organic aerosol components were resolved through the incorporation of inorganic mass into the PMF matrix. The additional temporal features provided by the inorganic aerosol components allowed for the resolution of more types of oxygenated organic aerosol than could be reliably resolved from PMF of organics alone. Comparison of findings from the PMF-Full MS and PMF-Org MS methods showed that for the Windsor airshed, the PMF-Full MS method enabled additional conclusions to be drawn in terms of aerosol sources and chemical processes. While performing PMF-Org MS can provide important distinctions between types of organic aerosol, it is shown that including inorganic species in the PMF analysis can permit further apportionment of organics for unit mass resolution AMS mass spectra.
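As a rough illustration of the factorization step described above, the sketch below applies non-negative matrix factorization (a simplification of PMF, which additionally weights each matrix entry by its measurement uncertainty) to a synthetic time-by-m/z matrix standing in for the full organic-plus-inorganic AMS spectrum. The matrix shapes, variable names and six-factor setting are assumptions for illustration only, not values from the study.

```python
# Illustrative sketch only: scikit-learn's NMF is used as a stand-in for PMF.
# True PMF (e.g. EPA PMF / PMF2) weights each element by its measurement
# uncertainty, which plain NMF does not.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Rows = time steps, columns = m/z channels of the full (organic + inorganic) spectrum.
X = np.abs(rng.normal(loc=1.0, scale=0.3, size=(500, 300)))

model = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # factor time series (contributions), shape (500, 6)
F = model.components_        # factor mass-spectral profiles, shape (6, 300)

# Each row of F is a candidate factor profile (e.g. Amine, Sulphate-OA, ...);
# each column of G gives that factor's contribution over time.
print(G.shape, F.shape)
```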
Abstract:
Silicon (Si) is the base material for electronic technologies and is emerging as a very attractive platform for photonic integrated circuits (PICs). PICs allow optical systems to be made more compact, with higher performance than discrete optical components. Applications for PICs are in the areas of fibre-optic communication, biomedical devices, photovoltaics and imaging. Germanium (Ge), due to its suitable bandgap for telecommunications and its compatibility with Si technology, is preferred over III-V compounds as an integrated on-chip detector at near infrared wavelengths. There are two main approaches to Ge/Si integration: epitaxial growth and direct wafer bonding. The lattice mismatch of ~4.2% between Ge and Si is the main problem of the former technique, as it leads to a high density of dislocations, while the bond strength and conductivity of the interface are the main challenges of the latter. Both result in trap states which are expected to play a critical role. Understanding the physics of the interface is a key contribution of this thesis. This thesis investigates Ge/Si diodes made using these two methods. The effects of interface traps on the static and dynamic performance of Ge/Si avalanche photodetectors have been modelled for the first time. The thesis outlines the original process development and characterization of mesa diodes which were fabricated by transferring a ~700 nm thick layer of p-type Ge onto n-type Si using direct wafer bonding and layer exfoliation. The effects of low temperature annealing on the device performance and on the conductivity of the interface have been investigated. It is shown that the diode ideality factor and the series resistance of the device are reduced after annealing. The carrier transport mechanism is shown to be dominated by generation-recombination before annealing, and by direct tunnelling in forward bias and band-to-band tunnelling in reverse bias after annealing. The thesis presents a novel technique to realise photodetectors in which one of the substrates is thinned by chemical mechanical polishing (CMP) after bonding the Si and Ge wafers. Based on this technique, Ge/Si detectors with remarkably high responsivities, in excess of 3.5 A/W at 1.55 μm at −2 V, under surface normal illumination have been measured. By performing electrical and optical measurements at various temperatures, the carrier transport through the hetero-interface is analysed by monitoring the Ge band bending, from which a detailed band structure of the Ge/Si interface is proposed for the first time. The above-unity responsivity of the detectors was explained by light-induced potential barrier lowering at the interface. To our knowledge this is the first report of light-gated responsivity for vertically illuminated Ge/Si photodiodes. The wafer bonding approach followed by layer exfoliation or by CMP is a low temperature, wafer-scale process. In principle, the technique could be extended to other materials, such as Ge on GaAs or Ge on SOI. The unique results reported here are compatible with surface normal illumination and are capable of being integrated with CMOS electronics and readout units in the form of 2D arrays of detectors. One potential future application is a low-cost, Si process-compatible near infrared camera.
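The abstract reports extracting the diode ideality factor from I-V measurements before and after annealing. The sketch below shows one standard way to do this, fitting the forward-bias Shockley relation to synthetic data; the values of I0, n, temperature and the bias range are assumptions, not measurements from the thesis, and the fit ignores series resistance.

```python
# Hedged sketch: extracting a diode ideality factor n from forward-bias I-V data
# via the Shockley relation I ≈ I0 * exp(qV / (n k T)), valid where series
# resistance is negligible. The data below are synthetic placeholders.
import numpy as np

q = 1.602176634e-19   # elementary charge, C
k = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0             # temperature, K

# Synthetic forward-bias data with an assumed n = 1.8 and I0 = 1e-9 A.
V = np.linspace(0.15, 0.40, 50)
I = 1e-9 * np.exp(q * V / (1.8 * k * T))

# Linear fit of ln(I) vs V: slope = q / (n k T).
slope, intercept = np.polyfit(V, np.log(I), 1)
n_fit = q / (slope * k * T)
print(f"ideality factor ≈ {n_fit:.2f}")   # recovers ≈ 1.8
```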
Abstract:
We examine the role of liquidity risk, both as a stock characteristic and as systematic liquidity risk, in UK mutual fund performance for the first time. Using four alternative measures of stock liquidity, we extract principal components across stocks in order to construct systematic or market liquidity factors. We find that on average UK mutual funds are tilted towards liquid stocks (except for small-stock funds, as might be expected) but that, counter-intuitively, liquidity as a stock characteristic is positively priced in the cross-section of fund performance. We also find that systematic liquidity risk is positively priced in the cross-section of fund performance. Overall, our results reveal a strong role for the stock liquidity level and systematic liquidity risk in fund performance evaluation models.
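One common way to build a systematic liquidity factor of the kind described above is to extract the first principal component from a panel of stock-level liquidity measures. The sketch below illustrates this with random data and a single measure; the paper itself uses four alternative liquidity measures, and the panel dimensions and column names here are placeholders.

```python
# Rough sketch of constructing a market-wide liquidity factor via PCA across stocks.
# The data are random; a real application would use e.g. monthly Amihud illiquidity
# or bid-ask spread series per stock.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Rows = months, columns = stocks; entries = a liquidity measure per stock-month.
panel = pd.DataFrame(rng.normal(size=(120, 50)),
                     columns=[f"stock_{i}" for i in range(50)])

# Standardise each stock's series, then take the first principal component
# as the systematic (market) liquidity factor.
standardised = (panel - panel.mean()) / panel.std()
pca = PCA(n_components=1)
liquidity_factor = pca.fit_transform(standardised.values).ravel()  # length 120

print(liquidity_factor[:5], pca.explained_variance_ratio_)
```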
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions and each decision has a set of alternatives. Each alternative depends on the state of the world, and is evaluated with respect to a number of criteria. In this thesis, we consider decision making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); and the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves the efficiency.
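To make the dominance tests concrete, the sketch below implements standard Pareto dominance together with a cone-based check of the matrix-multiplication flavour mentioned above, where the preference cone induced by an elicited trade-off is written as a set of linear inequalities. The specific trade-off and the matrix W are assumed examples for illustration, not the formalism defined in the thesis.

```python
# Minimal sketch of dominance tests between utility vectors (maximisation setting).
# The cone-based test assumes the preference cone is given in inequality form:
# u dominates v iff W @ (u - v) >= 0 componentwise.
import numpy as np

def pareto_dominates(u, v):
    """u weakly exceeds v in every objective and strictly in at least one."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return bool(np.all(u >= v) and np.any(u > v))

def cone_dominates(u, v, W):
    """u is preferred to v under the polyhedral cone {d : W @ d >= 0}."""
    d = np.asarray(u, float) - np.asarray(v, float)
    return bool(np.all(W @ d >= 0) and np.any(d != 0))

# Facet inequalities of the cone generated by the Pareto directions (1, 0), (0, 1)
# and one made-up elicited trade-off (1, -0.5): "1 extra unit of objective 0
# compensates a loss of up to 0.5 units of objective 1".
W = np.array([[1.0, 0.0],
              [0.5, 1.0]])

print(pareto_dominates([3, 1.5], [2, 2]))     # False: worse on objective 1
print(cone_dominates([3, 1.5], [2, 2], W))    # True once the trade-off is used
```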
Abstract:
Amorphous silicon has become the material of choice for many technologies, with major applications in large area electronics: displays, image sensing and thin film photovoltaic cells. This technology development has occurred because amorphous silicon is a thin film semiconductor that can be deposited on large, low-cost substrates at low temperature. In this thesis, classical molecular dynamics and first principles DFT calculations have been performed to generate structural models of amorphous and hydrogenated amorphous silicon and interfaces of amorphous and crystalline silicon, with the ultimate aim of understanding the photovoltaic properties of core-shell crystalline/amorphous Si nanowire structures. We have shown, unexpectedly, from the simulations that our understanding of hydrogenated bulk a-Si needs to be revisited, with the robust finding that when fully saturated with hydrogen, bulk a-Si exhibits a constant optical energy gap, irrespective of the hydrogen concentration in the sample. Unsaturated a-Si:H, with a lower than optimum hydrogen content, shows a smaller optical gap that increases with hydrogen content until saturation is reached. The mobility gaps obtained from an analysis of the electronic states show similar behaviour. We also find that the optical and mobility gaps follow a volcano curve as the H content is varied from 7% (undersaturation) to 18% (mild oversaturation). In the case of mild oversaturation, the mid-gap states arise exclusively from an increase in the density of strained Si-Si bonds. Analysis of our structures shows that the extra H atoms in this case form a bridge between neighbouring silicon atoms, which increases the corresponding Si-Si distance and promotes bond length disorder in the sample. This has the potential to enhance the Staebler-Wronski effect. Planar interface models of amorphous-crystalline silicon have been generated for the Si (100), (110) and (111) surfaces. The interface models are characterized by their structure, radial distribution function (RDF), electronic density of states and optical absorption spectrum. We find that the least stable (100) surface results in the formation of the thickest amorphous silicon layer, while the most stable (110) surface forms the smallest amorphous region. We calculated for the first time the band offsets of a-Si:H/c-Si heterojunctions from first principles and examined the influence of different surface orientations and amorphous layer thickness on the offsets, and the implications for device performance. The band offsets depend on the amorphous layer thickness and increase with thickness. By controlling the amorphous layer thickness we can potentially optimise the solar cell parameters. Finally, we have successfully generated a-Si/c-Si and a-Si:H/c-Si 5 nm nanowires with different amorphous layer thicknesses by heating and quenching. We perform structural analysis of the a-Si/c-Si nanowires. The RDF, Si-Si bond length distributions and coordination number distributions of the amorphous regions of the nanowires show similar behaviour to bulk amorphous silicon. In the final part of this thesis we examine different surface-terminating chemical groups (-H, -OH and -NH2) on (001) Ge nanowires (GeNWs). Our work shows that the diameter of Ge nanowires and the nature of the surface-terminating groups play a significant role in both the magnitude and the nature of the nanowire band gaps, allowing tuning of the band gap by up to 1.1 eV.
We also show for the first time how the nanowire diameter and surface termination shift the absorption edge in the Ge nanowires to longer wavelengths. Thus, the combination of nanowire diameter and surface chemistry can be effectively utilised to tune the band gaps and thus the light absorption properties of small diameter Ge nanowires.
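Since the amorphous structures above are characterized partly through their radial distribution functions, the sketch below computes g(r) for a single frame of atoms in a cubic periodic cell. The coordinates, cell size and bin settings are placeholders; the actual analysis would be run on the MD/DFT-generated structures, typically with a dedicated tool such as ASE or OVITO.

```python
# Illustrative sketch of a radial distribution function g(r) for one frame of
# N atoms in a cubic box with periodic boundary conditions. All inputs are
# placeholder values.
import numpy as np

def rdf(positions, box_length, r_max, n_bins=200):
    """g(r) for a single frame, cubic cell, minimum-image convention."""
    n = len(positions)
    # Minimum-image pair separations.
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box_length * np.round(diff / box_length)
    dist = np.linalg.norm(diff, axis=-1)
    dist = dist[np.triu_indices(n, k=1)]            # unique pairs only

    counts, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 * np.pi * r**2 * np.diff(edges)
    density = n / box_length**3
    # Normalise by the ideal-gas expectation at the same density.
    g = counts / (shell_vol * density * n / 2.0)
    return r, g

rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 16.3, size=(216, 3))         # 216 atoms in a 16.3 Å cell
r, g = rdf(pos, box_length=16.3, r_max=8.0)
print(r[:3], g[:3])
```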
Abstract:
Introduction and Rationale: A central argument in the thesis is that performative acts of control, sexual potency and spontaneity are central to the continuous construction of embodied masculine identities. The acts of control, and particularly issues of spontaneity, are central to understanding and addressing the difficulties men face at varying levels of embodied identity. Using Watson's (2000) 'Male body schema', I will explore the challenges and opportunities men face when negotiating normative, pragmatic and experiential embodiment. I will then explore the importance of these levels of embodiment to achieving visceral embodiment, or what I would define as a renewed unconscious satisfaction and ability to achieve and maintain normative, pragmatic and experiential forms of embodiment. Purpose and Objectives: Using the concepts of liminality and permanent liminality, the thesis explores how we can interpret and understand men's experience of prostate cancer diagnosis and treatment, and their struggle to regain power and control in the context of diagnosis and of the side effects of treatment. The strategies men adopt in seeking out personalised medical programmes of treatment with their doctors are explored in detail. The power and control that can be exercised over medical professionals and treatment options is demonstrated. Method: Collecting responses online from prostate-specific discussion boards via gatekeepers, and from interviews on the 'health talk' online database, three intersecting conceptual categories - liminality, masculinity and the body/embodiment - are combined in this research. Liminality and 'time' are directly linked to notions of 'success' and 'outcome' during the treatment process, and mark distinct points at which men, and their families, expect measures or limits to have been reached. Exploring liminality within the context of Turner's 'rites of passage', I explore the difficulty men face in concluding the third stage of the rites: reintegration. Results: Prostate cancer diagnosis and treatment, and impotence and incontinence in particular, have profound implications for the continuous construction of embodied masculine identities, and thus identity in general, making the construction of hegemonic ideals in the context of a highly 'performative' society deeply troublesome. The issue of 'spontaneity' in the construction of various forms of embodied identities is of particular concern for men who contributed to this study.
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. The multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is: can a generic solution be identified for the monitoring and analysis of data that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner? The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating any hidden bias that may exist. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. In order to demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
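The platform described above combines analysis techniques with a maintainable record of provenance. The sketch below illustrates the general idea with an assumed, minimal schema: each analysis step logs what it consumed, the parameters used and what it produced, so a third party can retrace a derived result. None of the names or fields reflect the platform's actual design.

```python
# Minimal, assumed sketch of provenance-tracked analysis steps; field names and
# structure are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class ProvenanceRecord:
    step: str                      # name of the analysis technique applied
    inputs: list[str]              # identifiers of the data streams consumed
    parameters: dict[str, Any]     # configuration used, kept for reproducibility
    output_id: str                 # identifier of the produced artefact
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

log: list[ProvenanceRecord] = []

def run_step(step, inputs, parameters, analyse):
    """Apply an analysis function and append a provenance record for its output."""
    output_id = f"{step}:{len(log)}"
    result = analyse(inputs, **parameters)
    log.append(ProvenanceRecord(step, inputs, parameters, output_id))
    return output_id, result

# Example: a placeholder artefact-detection step over a raw EEG stream identifier.
out_id, _ = run_step("artefact_filter", ["eeg/raw/chan01"], {"threshold": 0.8},
                     lambda inputs, threshold: {"kept": inputs, "threshold": threshold})
print(out_id, log[0])
```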