14 results for synchrotron-based techniques

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance: 80.00%

Abstract:

This PhD thesis concerns the computational modeling of the electronic and atomic structure of point defects in technologically relevant materials. Identifying the atomistic origin of defects observed in the electrical characteristics of electronic devices has been a long-term goal of first-principles methods. In this thesis, first-principles simulations, consisting of density functional theory (DFT) supplemented with many-body perturbation theory (MBPT) methods, are performed for native defects in bulk and slab models of In0.53Ga0.47As. The latter consist of (100)-oriented surfaces passivated with Al2O3. Our results indicate that the experimentally extracted midgap interface state density (Dit) peaks are not the result of defects directly at the semiconductor/oxide interface, but originate from defects in a more bulk-like chemical environment. This conclusion is reached by considering the energy of charge transition levels for defects at the interface as a function of distance from the oxide. Our work provides insight into the types of defects responsible for the observed departure from ideal electrical behaviour in III-V metal-oxide-semiconductor (MOS) capacitors. In addition, the formation energetics and electron scattering properties of point defects in carbon nanotubes (CNTs) are studied using DFT in conjunction with Green's function based techniques. The latter are applied to evaluate the low-temperature, low-bias Landauer conductance spectrum, from which mesoscopic transport properties such as the elastic mean free path and localization length of technologically relevant CNT sizes can be estimated from computationally tractable CNT models. Our calculations show that at CNT diameters pertinent to interconnect applications, the 555777 divacancy defect results in increased scattering and hence higher electrical resistance for electron transport near the Fermi level.
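
As a rough illustration of how mesoscopic quantities can be read off a Landauer calculation of the kind mentioned above, the sketch below converts an assumed per-defect transmission at the Fermi level into a defect resistance, an elastic mean free path and a localization length using textbook ohmic-scaling relations. The function name, the defect spacing and the transmission value are illustrative assumptions, not numbers from the thesis.

```python
# Hypothetical illustration: estimating mean free path and localization length
# from a per-defect Landauer transmission, using textbook mesoscopic relations.
H_OVER_2E2 = 12.906e3  # resistance quantum h/(2e^2) in ohms

def transport_estimates(T_defect, n_channels, defect_spacing_nm):
    """Crude ohmic-regime estimates from the transmission through one defect.

    T_defect          -- total transmission at the Fermi level with one defect
                         (illustrative value; in practice it would come from a
                         DFT + Green's-function calculation)
    n_channels        -- number of conducting channels (2 for a pristine metallic CNT)
    defect_spacing_nm -- assumed average distance between defects
    """
    # Scattering (defect) resistance, with the contact resistance subtracted
    r_defect = H_OVER_2E2 * (n_channels - T_defect) / (n_channels * T_defect)
    # Elastic mean free path: length over which the accumulated defect
    # resistance equals the ballistic resistance h/(2e^2 N)
    mfp_nm = defect_spacing_nm * n_channels * T_defect / (n_channels - T_defect)
    # Localization length, rough multichannel estimate xi ~ N * mean free path
    loc_len_nm = n_channels * mfp_nm
    return r_defect, mfp_nm, loc_len_nm

# Example with made-up numbers: a defect that halves the two-channel transmission
r, mfp, xi = transport_estimates(T_defect=1.0, n_channels=2, defect_spacing_nm=100.0)
print(f"defect resistance ~ {r/1e3:.1f} kOhm, mean free path ~ {mfp:.0f} nm, "
      f"localization length ~ {xi:.0f} nm")
```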

Relevance: 40.00%

Abstract:

Real-time monitoring of oxygenation and respiration is at the cutting edge of bioanalysis, including studies of cell metabolism, bioenergetics, mitochondrial function and drug toxicity. This thesis presents the development and evaluation of new luminescent probes and techniques for intracellular O2 sensing and imaging. A new oxygen consumption rate (OCR) platform, based on commercial microfluidic perfusion channel μ-slides and compatible with extra- and intracellular O2-sensitive probes, different cell lines and measurement conditions, was developed. The design of semi-closed channels allowed cell treatments, multiplexing with other assays and two-fold higher sensitivity compared with the microtiter plate. We compared three common OCR platforms: hermetically sealed quartz cuvettes for absolute OCRs, 96-WPs partially sealed with mineral oil for relative OCRs, and open 96-WPs for local cell oxygenation. Both 96-WP platforms were calibrated against the absolute OCR platform with the MEF cell line, the phosphorescent O2 probe MitoXpress-Intra and a time-resolved fluorescence reader. The correlations found allow tracing of cell respiration over time in a high-throughput format, with the possibility of cell stimulation and of changing measurement conditions. A new multimodal intracellular O2 probe, based on the phosphorescent reporter dye PtTFPP, the fluorescent FRET donor and two-photon antenna PFO and cationic RL-100 nanoparticles, is described. This probe, called MM2, possesses high brightness, photo- and chemical stability, low toxicity and efficient cell staining, and enables high-resolution intracellular O2 imaging of 2D and 3D cell cultures in intensity, ratiometric and lifetime-based modalities with luminescence readers and FLIM microscopes. An extended range of O2-sensitive probes was designed and studied in order to optimize their spectral characteristics and intracellular targeting, using different NP materials, delivery vectors, ratiometric pairs and IR dyes. The presented improvements provide a useful tool for highly sensitive monitoring and imaging of intracellular O2 in different measurement formats, with a wide range of physiological applications.
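
An oxygen consumption rate of the kind discussed above is typically obtained by converting the probe's phosphorescence lifetime to an O2 concentration via the Stern-Volmer relation and fitting the slope of the depletion curve. The sketch below illustrates that arithmetic only; the calibration constants (tau0, Ksv), the cell count and the synthetic data are placeholder assumptions rather than values used in the thesis.

```python
import numpy as np

def o2_from_lifetime(tau_us, tau0_us, ksv):
    """Stern-Volmer: tau0/tau = 1 + Ksv*[O2]  ->  [O2] = (tau0/tau - 1)/Ksv."""
    return (tau0_us / np.asarray(tau_us) - 1.0) / ksv

def ocr(time_min, tau_us, tau0_us=60.0, ksv=0.05, n_cells=5e4):
    """Oxygen consumption rate from a phosphorescence-lifetime time course.

    All calibration numbers (tau0, Ksv, cell count) are placeholders; real
    values depend on the probe and the measurement platform.
    Returns OCR in uM O2 per minute per cell (positive = consumption).
    """
    o2_uM = o2_from_lifetime(tau_us, tau0_us, ksv)
    slope, _ = np.polyfit(time_min, o2_uM, 1)   # linear fit of [O2] vs time
    return -slope / n_cells

# Example with synthetic data: lifetime rises as cells deplete O2
t = np.linspace(0, 30, 16)                       # minutes
tau = 60.0 / (1.0 + 0.05 * (200 - 3.0 * t))      # fabricated lifetimes (us)
print(f"OCR ~ {ocr(t, tau):.2e} uM/min/cell")
```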

Relevance: 40.00%

Abstract:

The importance of non-destructive techniques (NDT) in structural health monitoring programmes has become critical in recent times. The quality of the measured data, often affected by various environmental conditions, can be a guiding factor for the usefulness and prediction efficiency of the various detection and monitoring methods used in this regard. Often, preprocessing the acquired data in relation to the affecting environmental parameters can improve the information quality and lead to a significantly more efficient and correct prediction process. The improvement can be directly related to the final decision-making policy about a structure or a network of structures and is compatible with general probabilistic frameworks of such assessment and decision-making programmes. This paper considers a preprocessing technique employed in an image-analysis-based structural health monitoring methodology to identify submarine pitting corrosion in the presence of variable luminosity, contrast and noise affecting the quality of the images. Preprocessing the gray-level threshold of the various images is observed to bring about a significant improvement in damage detection compared with an automatically computed gray-level threshold. The case-dependent adjustment of the threshold enables the best possible information to be obtained from an existing image. The corresponding improvements are observed in a qualitative manner in the present study.
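
A minimal sketch of the kind of gray-level thresholding discussed above is given below: it computes an automatic (Otsu) threshold and then applies a case-dependent offset to it before binarising the image. The synthetic image, the offset value and the function names are illustrative assumptions, not the paper's actual data or procedure.

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's automatic gray-level threshold (maximises between-class variance)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class probability
    mu = np.cumsum(p * np.arange(256))         # class mean * probability
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def segment_pits(gray, offset=0):
    """Binary pit map: pixels darker than the (possibly adjusted) threshold.

    `offset` stands in for the case-dependent correction for luminosity,
    contrast and noise; offset=0 reproduces the purely automatic threshold.
    """
    return gray < (otsu_threshold(gray) + offset)

# Example on a synthetic noisy image containing one dark "pit"
rng = np.random.default_rng(0)
img = np.clip(rng.normal(170, 15, (64, 64)), 0, 255)
img[20:30, 20:30] -= 90                        # simulated corrosion pit
auto = segment_pits(img)
tuned = segment_pits(img, offset=-20)          # manually adjusted threshold
print(auto.sum(), tuned.sum())                 # pit pixels found by each setting
```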

Relevance: 40.00%

Abstract:

The use of structural health monitoring of civil structures is ever expanding, and by assessing the dynamic condition of structures, informed maintenance management can be conducted at both individual and network levels. With the continued growth of information-age technology, the potential arises for smart monitoring systems to be integrated with civil infrastructure to provide efficient information on the condition of a structure. The focus of this thesis is the integration of smart technology with civil infrastructure for the purposes of structural health monitoring. The technology considered in this regard is devices based on energy harvesting materials. While there has been considerable focus on the development and optimisation of such devices under steady-state loading conditions, their applications for civil infrastructure are less well known. Although research is still at an initial stage, studies into such applications are very promising. Using the dynamic response of structures to a variety of loading conditions, the energy harvesting outputs from such devices are established and the potential power output determined. Through a power variance output approach, damage detection of deteriorating structures using the energy harvesting devices is investigated. Further applications of the integration of energy harvesting devices with civil infrastructure investigated by this research include the use of the power output as an indicator for control. Four approaches are undertaken to determine the potential applications arising from integrating smart technology with civil infrastructure, namely:

• Theoretical analysis to determine the applications of energy harvesting devices for vibration-based health monitoring of civil infrastructure.
• Laboratory experimentation to verify the performance of different energy harvesting configurations for civil infrastructure applications.
• Scaled model testing as a method to experimentally validate the integration of the energy harvesting devices with civil infrastructure.
• Full-scale deployment of an energy harvesting device on a bridge structure.

These four approaches validate the application of energy harvesting technology with civil infrastructure from a theoretical, experimental and practical perspective.
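
The power variance idea mentioned above can be illustrated with a very small sketch: harvested power is taken as proportional to the square of the measured response, and the variance of that power stream is compared against a healthy baseline. The harvester model (P = c_e * v^2), the fabricated signals and the index definition are simplifying assumptions for illustration, not the formulation used in the thesis.

```python
import numpy as np

def harvested_power(velocity, c_e=1.0):
    """Very simplified electrical power from a linear harvester: P = c_e * v^2.
    c_e lumps the (assumed) electromechanical coupling and load resistance."""
    return c_e * np.asarray(velocity) ** 2

def power_variance_index(baseline_velocity, current_velocity):
    """Damage indicator: ratio of harvested-power variance to the healthy baseline.
    A sustained drift away from 1.0 flags a change in the structural response."""
    return np.var(harvested_power(current_velocity)) / np.var(harvested_power(baseline_velocity))

# Example with fabricated bridge-response records: "damage" is modelled here as
# a frequency and amplitude shift of the dominant vibration mode
t = np.linspace(0, 10, 2000)
healthy = np.sin(2 * np.pi * 3.0 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
damaged = 1.2 * np.sin(2 * np.pi * 2.7 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
print(f"power-variance index: {power_variance_index(healthy, damaged):.2f}")
```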

Relevance: 30.00%

Abstract:

Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems and product configurators. The system the user interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanation fail to provide sufficient information. We define new forms of explanation that aim to be more informative. Even though explanation generation is a very hard task, in the applications we consider we must provide a satisfactory level of interactivity and therefore cannot afford long computation times. We introduce the concept of representative sets of relaxations: a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and we present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow. We present algorithms to compute such relaxations in times compatible with interactivity, achieving this by making use, interchangeably, of different types of compiled representations. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and allows explanation-related queries to be addressed in an efficient way. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
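
To make the notion of relaxations concrete, the brute-force sketch below enumerates the subset-maximal relaxations of an over-constrained set of user requirements on a toy product catalogue; together they show the user at least one way to keep, and one way to give up, each requirement. The catalogue, the requirement names and the enumeration are illustrative assumptions and do not reproduce the thesis's representative-set or most-soluble-relaxation algorithms, which are far more efficient.

```python
from itertools import combinations

# Toy product catalogue and user requirements (all names and values hypothetical)
catalogue = [
    {"price": 900,  "ram": 8,  "screen": 13},
    {"price": 1400, "ram": 16, "screen": 15},
    {"price": 1100, "ram": 16, "screen": 13},
]
requirements = {
    "cheap":       lambda p: p["price"] <= 1000,
    "lots_of_ram": lambda p: p["ram"] >= 16,
    "big_screen":  lambda p: p["screen"] >= 15,
}

def products_allowed(req_names):
    """Products in the catalogue satisfying every requirement in req_names."""
    return [p for p in catalogue if all(requirements[r](p) for r in req_names)]

def maximal_relaxations(req_names):
    """All subset-maximal relaxations: satisfiable subsets of the requirements
    that cannot be extended without becoming unsatisfiable (brute force)."""
    sat = [set(c) for k in range(len(req_names), 0, -1)
           for c in combinations(req_names, k) if products_allowed(c)]
    return [s for s in sat if not any(s < t for t in sat)]

full = list(requirements)
if not products_allowed(full):               # the user has over-constrained the choice
    for relax in maximal_relaxations(full):
        print(f"keep {sorted(relax)}, give up {sorted(set(full) - relax)}")
```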

Relevance: 30.00%

Abstract:

A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations which were controlled from centralised locations. The trend in modern power networks is for generated power to be produced by a diverse array of energy sources spread over a large geographical area. As a result, controlling these systems from a centralised controller is impractical. Thus, future power networks will be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. Smart Grid is the umbrella term used to denote this combination of power systems, artificial intelligence and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, with a particular focus on iterative distributed MPC. A novel convergence and stability proof for iterative distributed MPC based on the Alternating Direction Method of Multipliers (ADMM) is derived. The performance of distributed MPC, centralised MPC and an optimised PID controller is then compared when applied to a highly interconnected, nonlinear, MIMO testbed based on a part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both the closed-loop performance and the communication overhead associated with the desired control.
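
For readers unfamiliar with the Alternating Direction Method of Multipliers, the sketch below runs the standard consensus form of ADMM on a toy problem in which three local controllers with private quadratic costs must agree on a single shared decision value. The cost coefficients, penalty parameter and tolerance are arbitrary illustrative choices; the thesis applies the method to full distributed MPC problems, not to this scalar example.

```python
import numpy as np

# Local quadratic costs f_i(x) = 0.5 * a_i * (x - b_i)^2, one per controller
a = np.array([1.0, 4.0, 2.0])
b = np.array([0.0, 2.0, 5.0])
rho = 1.0                                  # ADMM penalty parameter

x = np.zeros(3)                            # local copies of the shared decision
z = 0.0                                    # consensus variable
u = np.zeros(3)                            # scaled dual variables

for k in range(200):
    # Local, parallelisable minimisations: argmin f_i(x) + (rho/2)(x - z + u_i)^2
    x = (a * b + rho * (z - u)) / (a + rho)
    z = np.mean(x + u)                     # coordination (consensus) step
    u = u + x - z                          # dual update
    if np.max(np.abs(x - z)) < 1e-6:       # primal consensus reached
        break

print(f"consensus value {z:.4f} after {k + 1} iterations")
print(f"centralised optimum {np.sum(a * b) / np.sum(a):.4f}")   # weighted average
```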

Relevance: 30.00%

Abstract:

There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique, and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start, and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
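
The dependence of a Gröbner basis on the chosen term order, noted above, can be seen directly with a computer algebra system. The following minimal SymPy sketch computes bases of the same toy ideal under the lex and grevlex orders; it illustrates term orders only and does not implement the incremental module algorithm developed in the thesis.

```python
from sympy import symbols, groebner

x, y = symbols("x y")
ideal = [x**2 + y**2 - 1, x*y - 2]           # a toy ideal in Q[x, y]

# The same ideal under two term orders: the resulting bases differ, which is
# part of the flexibility of the Groebner-basis machinery described above.
for order in ("lex", "grevlex"):
    gb = groebner(ideal, x, y, order=order)
    print(order, list(gb))
```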

Relevance: 30.00%

Abstract:

An aim of proactive risk management strategies is the timely identification of safety-related risks. One way to achieve this is by deploying early warning systems. Early warning systems aim to provide, in a timely manner, useful information on the presence of potential threats to the system, on the level of vulnerability of a system, or on both. This information can then be used to take proactive safety measures. The United Nations has recommended that any early warning system should have four essential elements: the risk knowledge element, a monitoring and warning service, dissemination and communication, and a response capability. This research deals with the risk knowledge element of an early warning system. The risk knowledge element of an early warning system contains models of possible accident scenarios. These accident scenarios are created using hazard analysis techniques, which can be categorised as traditional or contemporary. The assumption in traditional hazard analysis techniques is that accidents occur due to a sequence of events, whereas the assumption of contemporary hazard analysis techniques is that safety is an emergent property of complex systems. The problem is that no software editor is available which analysts can use to create models of accident scenarios based on contemporary hazard analysis techniques and, at the same time, generate computer code that represents the models. This research aims to enhance the process of generating computer code based on graphical models that associate early warning signs and causal factors with a hazard, based on contemporary hazard analysis techniques. For this purpose, the thesis investigates the use of Domain Specific Modeling (DSM) technologies. The contribution of this thesis is the design and development of a set of three graphical Domain Specific Modeling Languages (DSMLs) that, when combined, provide all of the constructs necessary to enable safety experts and practitioners to conduct hazard and early warning analysis based on a contemporary hazard analysis approach. The languages represent the elements and relations necessary to define accident scenarios and their associated early warning signs. The three DSMLs were incorporated into a prototype software editor that enables safety scientists and practitioners to create and edit hazard and early warning analysis models in a usable manner and, as a result, to generate executable code automatically. This research demonstrates that DSM technologies can be used to develop a set of three DSMLs which allow users to conduct hazard and early warning analysis in a more usable manner. Furthermore, the three DSMLs and their dedicated editor, which are presented in this thesis, may provide a significant enhancement to the process of creating the risk knowledge element of computer-based early warning systems.
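
As a loose, plain-Python illustration of the model-to-code idea (not the graphical DSMLs or the DSM tooling developed in the thesis), the sketch below represents an accident scenario as a small object model and generates an executable monitoring function from it. All class names, fields and the example scenario are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class EarlyWarningSign:
    name: str           # a monitored signal
    threshold: float    # level above which the sign is considered active

@dataclass
class AccidentScenario:
    hazard: str
    signs: list = field(default_factory=list)

def generate_monitor(scenario):
    """Emit (and compile) a Python function that evaluates the scenario's
    early warning signs against a dict of live readings."""
    checks = " + ".join(
        f"int(readings.get({s.name!r}, 0.0) > {s.threshold})" for s in scenario.signs
    )
    src = (
        f"def monitor(readings):\n"
        f"    \"\"\"Number of active early warning signs for: {scenario.hazard}\"\"\"\n"
        f"    return {checks}\n"
    )
    namespace = {}
    exec(src, namespace)          # turn the generated source into a callable
    return src, namespace["monitor"]

scenario = AccidentScenario(
    hazard="tank overpressure",
    signs=[EarlyWarningSign("pressure_bar", 8.0), EarlyWarningSign("temperature_C", 90.0)],
)
source, monitor = generate_monitor(scenario)
print(source)
print("active signs:", monitor({"pressure_bar": 9.1, "temperature_C": 85.0}))
```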

Relevance: 30.00%

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase customer satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and the related preference relations. Product configuration, meeting scheduling and the construction of autonomous agents are among several artificial intelligence tasks that involve a process of constrained optimization, that is, optimization of behavior or options subject to given constraints and with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as the branch and bound technique, aim at directing the search towards the best assignments, allowing the bounding functions to prune more branches of the search tree. Several constrained optimization problems exhibit dominance relations. These dominance relations can be particularly useful in constrained optimization problems as they can suggest new ways (rules) of pruning non-optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules used within the branch and bound technique to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules we have developed, and their different combinations, have varying impact on the performance of the branch and bound technique.
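
The role of dominance-based pruning inside branch and bound can be illustrated with a toy search. The sketch below solves a small 0/1 selection problem and prunes any partial assignment that is Pareto-dominated by an already-expanded state at the same depth (no more weight, no less value). The items, capacity and the dominance rule are illustrative assumptions; the thesis derives its pruning rules from comparative preference relations rather than from this weight/value dominance.

```python
from collections import defaultdict

items = [(3, 5), (4, 6), (2, 3), (5, 8)]    # (weight, value) pairs, illustrative
capacity = 8

best = {"value": 0}                          # best objective found so far
seen = defaultdict(list)                     # depth -> states already expanded

def dominated(depth, weight, value):
    """Dominance prune: a previously expanded state at the same depth with no
    more weight and no less value makes this branch redundant."""
    return any(w <= weight and v >= value for w, v in seen[depth])

def search(depth=0, weight=0, value=0):
    if value > best["value"]:
        best["value"] = value
    if depth == len(items) or dominated(depth, weight, value):
        return
    seen[depth].append((weight, value))
    w, v = items[depth]
    if weight + w <= capacity:               # branch: take the item
        search(depth + 1, weight + w, value + v)
    search(depth + 1, weight, value)         # branch: skip the item

search()
print("best value:", best["value"])          # optimum is 13 for these toy numbers
```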

Relevance: 30.00%

Abstract:

Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are used ubiquitously in communication, data storage and elsewhere. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword; however, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from an algorithmic as well as an architectural standpoint. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced compared with the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended to shortened RS codes, and a software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is shown to perform better than Kötter's decoder for high-rate codes. The thesis also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes over several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, and from these the dimension and a bound for the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes that have complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
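
The evaluation view of Reed-Solomon codes mentioned above is easy to state concretely: the k message symbols are the coefficients of a polynomial, the n codeword symbols are its values at n distinct field elements, and any k error-free symbols determine the message by interpolation. The sketch below works over a toy prime field GF(13); the field size, code parameters and function names are illustrative and bear no relation to the hardware architectures developed in the thesis.

```python
# Reed-Solomon over a toy prime field GF(P): encode by evaluating the message
# polynomial at n distinct points; any k error-free symbols recover the message.
P = 13

def poly_eval(coeffs, x):
    """Horner evaluation of a polynomial (ascending coefficients) over GF(P)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def poly_mul(a, b):
    """Product of two polynomials with coefficients in GF(P)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

def rs_encode(message, n):
    """Evaluation-based encoding: codeword[i] = m(i) for i = 0..n-1."""
    return [poly_eval(message, i) for i in range(n)]

def rs_interpolate(points):
    """Lagrange interpolation over GF(P): recover the k message coefficients
    from k known (x, y) pairs (erasure-only decoding, no errors)."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        num, den = [1], 1                       # i-th Lagrange basis polynomial
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = poly_mul(num, [(-xj) % P, 1])
                den = den * (xi - xj) % P
        scale = yi * pow(den, -1, P) % P        # yi / den in GF(P)
        for d, c in enumerate(num):
            coeffs[d] = (coeffs[d] + scale * c) % P
    return coeffs

message = [5, 1, 7]                              # k = 3 symbols: 5 + x + 7x^2
codeword = rs_encode(message, n=7)               # an (n, k) = (7, 3) RS code
print("codeword: ", codeword)
print("recovered:", rs_interpolate([(i, codeword[i]) for i in (0, 3, 6)]))
```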

Relevance: 30.00%

Abstract:

There are difficulties with utilising self-report and physiological measures of assessment amongst forensic populations. This study investigates implicit-based measures amongst sexual offenders, non-sexual offenders and low-risk samples. Implicit measurement is a term applied to measurement methods that make it difficult to influence responses through conscious control. The test battery includes the Implicit Association Test (IAT), Rapid Serial Visual Presentation (RSVP), Viewing Time (VT) and the Structured Clinical Interview for Disorders (SCID). The IAT proposes that people will perform better on a task when they can rely on well-practised cognitive associations. The RSVP task requires participants to identify a single target image presented amongst a series of rapidly presented visual images. RSVP operates on the premise that if two target images are presented within 500 milliseconds of each other, the possibility that the participant will recognize the second target is significantly reduced when the first target is of salience to the individual. This is the attentional blink phenomenon. VT is based on the principle that people will look longer at images that are of salience to them. Results showed that on the VT task, child sexual offenders took longer to view images of children than low-risk groups. Nude images induced a greater attentional blink than clothed images amongst low-risk and offending samples on the RSVP task. Sexual offenders took longer than low-risk groups on word-pairing tasks where sexual words were paired with adult words on the IAT. The SCID highlighted differences between the offending and non-offending groups on the subscales for personality disorders. More erotic stimulus items on the VT and RSVP measures are recommended to better differentiate sexual preference between offending and non-offending samples. A pictorial IAT is also recommended. The findings provide the basis for further development of implicit measures within the assessment of sexual offenders.

Relevance: 30.00%

Abstract:

In developing a biosensor, the most important aspects that need to be emphasised are the specificity and selectivity of the transducer. These two vital prerequisites are of paramount importance in ensuring a robust and reliable biosensor. Improvements in electrochemical sensors can be achieved by using microelectrodes and by modifying the electrode surface (using chemical or biological recognition layers to improve the sensitivity and selectivity). The fabrication and characterisation of silicon-based and glass-based gold microelectrode arrays with various geometries (band and disc) and dimensions (ranging from 10 μm to 100 nm) are reported. It was found that silicon-based transducers with a 10 μm gold microelectrode array exhibited the most stable and reproducible electrochemical measurements, hence this dimension was selected for further study. Chemical electrodeposition on both the 10 μm microband and microdisc arrays was found to be viable, by electro-assisted self-assembly of a sol-gel silica film and by nanoporous-gold electrodeposition, respectively. The fabrication and characterisation of an on-chip electrochemical cell are also reported, with a fixed diameter/width dimension and varying interspacing. In this regard, the 10 μm microelectrode array with an interspacing distance of 100 μm exhibited the best electrochemical response. Surface functionalisation of single-chip planar gold macroelectrodes was also studied for the immobilisation of histidine-tagged protein and antibody. Imaging techniques such as atomic force microscopy, fluorescence microscopy and scanning electron microscopy were employed to complement the electrochemical characterisations. A long-chain thiol self-assembled monolayer with NTA-metal ligand coordination was selected for the histidine-tagged protein, while a silanisation technique was selected for the antibody immobilisation. The final part of the thesis describes the development of a label-free T-2 immunosensor using an impedimetric approach. Good antibody calibration curves were obtained for both the 10 μm microband and 10 μm microdisc arrays. For the establishment of the T-2/HT-2 toxin calibration curve, it was found that a larger microdisc array dimension was required to produce a better calibration curve. The calibration curves established in buffer solution show that the microelectrode arrays were sensitive and able to detect levels of T-2/HT-2 toxin as low as 25 ppb (25 μg kg-1), with a limit of quantitation of 4.89 ppb for the 10 μm microband array and 1.53 ppb for the 40 μm microdisc array.
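
For context on how a limit of quantitation like those quoted above is typically derived, the sketch below fits a linear calibration curve and applies the usual 3-sigma / 10-sigma blank criteria for the detection and quantitation limits. All of the numbers are fabricated placeholders, not the thesis's measurements.

```python
import numpy as np

# Fabricated calibration data: toxin concentration (ppb) vs normalised
# impedance change; real curves would come from the impedimetric immunosensor.
conc = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
signal = np.array([0.10, 0.20, 0.41, 0.79, 1.62])
blank_replicates = np.array([0.010, 0.013, 0.008, 0.011, 0.012])

slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration fit
sd_blank = blank_replicates.std(ddof=1)          # blank standard deviation

lod = 3 * sd_blank / slope       # limit of detection, 3-sigma criterion
loq = 10 * sd_blank / slope      # limit of quantitation, 10-sigma criterion

print(f"sensitivity {slope:.4f} per ppb, LOD ~ {lod:.2f} ppb, LOQ ~ {loq:.2f} ppb")
```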

Relevance: 30.00%

Abstract:

The work presented in this thesis describes the development of low-cost sensing and separation devices with electrochemical detection for health applications. This research employs macro-, micro- and nano-technology. The first sensing device developed was a toner-based micro-device. The initial development of microfluidic devices was based on glass or quartz devices that are often expensive to fabricate; however, the introduction of new types of materials, such as plastics, offered a new route to fast prototyping and the development of disposable devices. One such microfluidic device is based on the lamination of laser-printed polyester films using a computer, printer and laminator. The resulting toner-based microchips demonstrated potential viability for chemical assays, coupled with several detection methods, particularly chip-electrophoresis-chemiluminescence (CE-CL) detection, which has never before been reported in the literature. Following on from the toner-based microchip, a three-electrode micro-configuration was developed on an acetate substrate. This is the first time that a micro-electrode configuration made from gold, silver and platinum has been fabricated onto acetate by means of patterning and deposition techniques, using the central fabrication facilities in Tyndall National Institute. These electrodes have been designed to facilitate the integration of a 3-electrode configuration as part of the fabrication process. Since the electrodes are on acetate, the dicing step can automatically be eliminated. The stability of these sensors has been investigated using electrochemical techniques, with excellent outcomes. Following on from the general testing of the electrodes, these sensors were then coupled with capillary electrophoresis. The final sensing devices were on a macro scale and involved the modification of screen-printed electrodes. Screen-printed electrodes (SPEs) are generally seen to be far less sensitive than more expensive electrodes, including gold, boron-doped diamond and glassy carbon electrodes. To enhance the sensitivity of these electrodes, they were treated with metal nanoparticles of gold and palladium. Following on from this, another modification was introduced: the carbonaceous material carbon monolith was drop-cast onto the SPE and the metal nanoparticles were then electrodeposited onto the monolith material.

Relevance: 30.00%

Abstract:

Quantitative analysis of penetrative deformation in sedimentary rocks of fold and thrust belts has largely been carried out using clast-based strain analysis techniques. These methods analyse the geometric deviations from an original state that populations of clasts, or strain markers, have undergone. The characterisation of these geometric changes, or strain, in the early stages of rock deformation is not entirely straightforward. This is in part due to the paucity of information on the original state of the strain markers, but also to the uncertainty in the relative rheological properties of the strain markers and their matrix during deformation, as well as the interaction of two competing fabrics, such as bedding and cleavage. Furthermore, one of the single largest setbacks for accurate strain analysis has been associated with the methods themselves: they are traditionally time-consuming and labour-intensive, and results can vary between users. A suite of semi-automated techniques has been tested and found to work very well, but in low-strain environments the problems discussed above persist. Additionally, these techniques have been compared to Anisotropy of Magnetic Susceptibility (AMS) analyses, which provide a particularly sensitive tool for the characterisation of low strain in sedimentary lithologies.
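
As a point of reference for the clast-based methods discussed here, the sketch below computes two quick Rf/phi-style estimates of the sectional strain ratio that such analyses often start from. Both estimators, the validity note and the input values are illustrative textbook approximations, not the semi-automated techniques or the AMS analysis evaluated in the thesis.

```python
import numpy as np

def strain_ratio_estimates(rf, phi_deg):
    """Quick Rf/phi-style estimates of the sectional strain ratio Rs from clast
    axial ratios Rf and long-axis orientations phi (degrees). Only meaningful
    when the strain exceeds the initial clast ellipticity, i.e. when the
    orientation fluctuation stays well below 90 degrees."""
    rf = np.asarray(rf, dtype=float)
    fluctuation = np.ptp(phi_deg)                 # spread of long-axis orientations
    rs_extremes = np.sqrt(rf.max() * rf.min())    # Rs ~ sqrt(Rf_max * Rf_min)
    rs_harmonic = rf.size / np.sum(1.0 / rf)      # harmonic-mean approximation
    return rs_extremes, rs_harmonic, fluctuation

# Fabricated clast measurements, e.g. digitised from a thin-section image
rf = [2.4, 1.6, 2.0, 1.8, 2.9, 2.2]
phi = [5.0, -12.0, 8.0, 3.0, -2.0, 10.0]
rs1, rs2, fluc = strain_ratio_estimates(rf, phi)
print(f"Rs (extremes) ~ {rs1:.2f}, Rs (harmonic mean) ~ {rs2:.2f}, "
      f"orientation fluctuation = {fluc:.0f} deg")
```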