998 results for Science Foundation Ireland


Relevance: 100.00%

Abstract:

Trophoblasts of the placenta are the frontline cells involved in communication and exchange of materials between the mother and fetus. Within trophoblasts, calcium signalling proteins are richly expressed. Intracellular free calcium ions are a key second messenger, regulating various cellular activities. Transcellular Ca2+ transport through trophoblasts is essential in fetal skeleton formation. Ryanodine receptors (RyRs) are high-conductance cation channels that mediate Ca2+ release from intracellular stores to the cytoplasm. To date, the roles of RyRs in trophoblasts have not been reported. Using reverse transcription PCR and western blotting, the current study revealed that RyRs are expressed in model trophoblast cell lines (BeWo and JEG-3) and in human first trimester and term placental villi. Immunohistochemistry of human placental sections indicated that both syncytiotrophoblast and cytotrophoblast cell layers were positively stained by antibodies recognising RyRs; likewise, expression of RyR isoforms was also revealed in BeWo and JEG-3 cells by immunofluorescence microscopy. In addition, changes in [Ca2+]i were observed in both BeWo and JEG-3 cells upon application of various RyR agonists and antagonists, using fura-2 fluorescent videomicroscopy. Furthermore, endogenous placental peptide hormones, namely angiotensin II, arginine vasopressin and endothelin 1, were demonstrated to increase [Ca2+]i in BeWo cells, and such increases were suppressed by RyR antagonists and by blockers of the corresponding peptide hormone receptors. These findings indicate that 1) multiple RyR subtypes are expressed in human trophoblasts; 2) functional RyRs in BeWo and JEG-3 cells respond to both RyR agonists and antagonists; 3) RyRs in BeWo cells mediate Ca2+ release from intracellular stores in response to indirect stimulation by endogenous peptides.
These observations suggest that RyRs contribute to trophoblastic cellular Ca2+ homeostasis; trophoblastic RyRs are also involved in the functional regulation of the human placenta by coupling to endogenous placental peptide-induced signalling pathways.

Relevance: 100.00%

Abstract:

Motivated by accurate average-case analysis, MOdular Quantitative Analysis (MOQA) was developed at the Centre for Efficiency Oriented Languages (CEOL). In essence, MOQA allows the programmer to determine the average running time of a broad class of programs directly from the code in a (semi-)automated way. The MOQA approach has the property of randomness preservation, which means that applying any operation to a random structure results in an output isomorphic to one or more random structures; this is the key to systematic timing. Based on original MOQA research, we discuss the design and implementation of a new domain-specific scripting language built on randomness-preserving operations and random structures. It is designed to facilitate compositional timing by systematically tracking the distributions of inputs and outputs. The labelled partial order (LPO) is the basic data type of the language. The programmer uses built-in MOQA operations together with restricted control flow statements to design MOQA programs. The MOQA language is formally specified, both syntactically and semantically, in this thesis, and a practical language interpreter implementation is provided and discussed. By analysing new algorithms and data restructuring operations, we demonstrate the wide applicability of the MOQA approach. We also extend MOQA theory to a number of other domains besides average-case analysis, showing the strong connection between MOQA and parallel computing, reversible computing and data entropy analysis.
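The LPO data type at the heart of the language can be pictured with a minimal sketch (hypothetical Python, not the MOQA interpreter itself): an LPO is a set of nodes with covering edges, and a random structure is the uniform distribution over the label assignments that respect the order.

```python
from itertools import permutations

class LPO:
    """Minimal labelled partial order: nodes plus covering (Hasse)
    edges. Hypothetical sketch, not the actual MOQA implementation."""
    def __init__(self, nodes, edges):
        self.nodes = list(nodes)
        self.edges = set(edges)   # (lo, hi) pairs meaning lo < hi

    def respects_order(self, labelling):
        # A labelling is consistent if labels increase along every edge.
        return all(labelling[a] < labelling[b] for a, b in self.edges)

    def consistent_labellings(self, labels):
        # Enumerate all order-respecting assignments of the labels;
        # a "random structure" is the uniform distribution over these.
        out = []
        for perm in permutations(labels):
            lab = dict(zip(self.nodes, perm))
            if self.respects_order(lab):
                out.append(lab)
        return out

# V-shaped order: one minimum below two incomparable maxima.
v = LPO(["bot", "a", "b"], [("bot", "a"), ("bot", "b")])
print(len(v.consistent_labellings([1, 2, 3])))  # 2 consistent labellings
```

Randomness preservation means a MOQA operation applied to the uniform distribution over such labellings yields outputs that are again uniform over (one or more) random structures, which is what makes compositional timing tractable.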

Relevance: 100.00%

Abstract:

The work presented in this thesis covers four major topics of research related to the grid integration of wave energy. More specifically, the grid impact of a wave farm on the power quality of its local network is investigated. Two estimation methods were developed for the flicker level Pst generated by a wave farm, in relation both to its rated power and to the impedance angle ψk of the grid node to which it is connected. The electrical design of a typical wave farm is also studied in terms of the minimum rating of three costly pieces of equipment, namely the VAr compensator, the submarine cables and the overhead line. The power losses dissipated within the farm's electrical network are also evaluated. The feasibility of transforming a test site into a commercial site of greater rated power is investigated from the perspective of power quality and of cable and overhead line thermal loading. Finally, the generic modelling of ocean devices, referring here to both wave and tidal current devices, is investigated.
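The dependence of Pst on rated power and impedance angle can be illustrated by analogy with the IEC 61400-21 convention for fluctuating generation (an assumption here; the thesis's own wave-farm estimation methods are not reproduced):

```python
def flicker_severity(c_psi, s_rated_mva, s_sc_mva):
    """Short-term flicker severity Pst at the point of connection,
    following the IEC 61400-21 convention: Pst = c(psi_k) * Sn / Sk,
    where c(psi_k) is the flicker coefficient measured for the
    network impedance angle psi_k, Sn the farm's rated apparent
    power and Sk the short-circuit power of the node."""
    return c_psi * s_rated_mva / s_sc_mva

# Example: flicker coefficient 4.0, 20 MVA farm on a 400 MVA node.
print(round(flicker_severity(4.0, 20.0, 400.0), 3))  # 0.2
```

The formula makes the two dependencies studied in the thesis explicit: Pst scales linearly with rated power and varies with ψk through the measured coefficient c(ψk).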

Relevance: 100.00%

Abstract:

The wave energy industry is progressing towards an advanced stage of development, with consideration being given to the selection of suitable sites for the first commercial installations. An informed and accurate characterisation of the wave energy resource is an essential aspect of this process. Ireland is exposed to an energetic wave climate; however, many features of this resource are not well understood. This thesis assesses and characterises the wave energy resource that has been measured and modelled at the Atlantic Marine Energy Test Site, a facility for conducting sea trials of floating wave energy converters that is being developed near Belmullet, on the west coast of Ireland. This characterisation is undertaken through the analysis of metocean datasets that have previously been unavailable for exposed Irish sites. A number of commonly made assumptions in the calculation of wave power are contested, and the uncertainties resulting from their application are demonstrated. The relationship between commonly used wave period parameters is studied, and its importance in the calculation of wave power quantified, while it is also shown that a disconnect exists between the sea states which occur most frequently at the site and those that contribute most to the incident wave energy. Additionally, observations of the extreme wave conditions that have occurred at the site and estimates of the future storms that devices will need to withstand are presented. The implications of these results for the design and operation of wave energy converters are discussed. The foremost contribution of this thesis is an enhanced understanding of the fundamental nature of the wave energy resource at the Atlantic Marine Energy Test Site. The results presented here also have a wider relevance, and can be considered typical of other, similarly exposed, locations on Ireland's west coast.
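The standard deep-water wave power calculation, whose underlying assumptions this kind of resource assessment examines, can be sketched as follows (illustrative sea-state values, not measurements from the site):

```python
import math

RHO = 1025.0   # sea-water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def wave_power_flux(hm0, te):
    """Omnidirectional wave power per metre of wave crest (W/m) for a
    deep-water sea state: P = rho * g^2 * Hm0^2 * Te / (64 * pi),
    where hm0 is the significant wave height (m) and te the energy
    period (s). The deep-water assumption is one of those commonly
    made, and contested, in resource characterisation."""
    return RHO * G**2 * hm0**2 * te / (64.0 * math.pi)

# A typical energetic Atlantic sea state: Hm0 = 3 m, Te = 10 s.
print(round(wave_power_flux(3.0, 10.0) / 1000.0, 1), "kW/m")  # 44.2 kW/m
```

Because Te is rarely measured directly, it is often inferred from other period parameters (e.g. Te ≈ α·Tp with an assumed ratio), which is exactly the kind of relationship between wave period parameters whose effect on computed power the thesis quantifies.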

Relevance: 100.00%

Abstract:

With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements compared to implementations on general-purpose processors. Each of the proposed designs is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method of securing data, offering security levels similar to traditional systems such as RSA but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates, covering dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy are measured on an FPGA. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes.
Secondly, implementation methods for another component of a cryptographic system, namely the hash functions developed in the recently concluded SHA-3 competition, are presented. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface that allows fair comparison of the different hash functions when operating in a standardised and constrained environment. Different implementation methods for the designs, and their subsequent performance, are examined in terms of throughput, area and energy costs using various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial. Another aim of this thesis is therefore the development of generic interfaces, used both to reduce implementation and test time and to enable fair baseline comparisons of different algorithms operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). It makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system. The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
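The Double-and-Add scalar multiplication at the core of such an elliptic curve accelerator can be sketched in software (a textbook toy curve over F_17, not the key and field sizes implemented in hardware; the SPA-secure and twisted-Edwards variants replace this basic ladder):

```python
# Double-and-add scalar multiplication on the short-Weierstrass curve
# y^2 = x^3 + 2x + 2 over F_17. The point at infinity O is None.
P_MOD, A = 17, 2

def point_add(p, q):
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                      # p + (-p) = O
    if p == q:                           # tangent slope for doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:                                # chord slope for addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    lam %= P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Left-to-right double-and-add: scan the bits of k, doubling at
    each step and adding p whenever the bit is set."""
    result = None
    for bit in bin(k)[2:]:
        result = point_add(result, result)     # double
        if bit == "1":
            result = point_add(result, p)      # add
    return result

G = (5, 1)                 # generator of a subgroup of order 19
print(scalar_mult(2, G))   # (6, 3)
print(scalar_mult(19, G))  # None: 19*G = O
```

The data-dependent add step is precisely what simple power analysis (SPA) exploits, which is why the hardware designs also include SPA-secure algorithms with a uniform operation sequence.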

Relevance: 100.00%

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase customer satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important for them to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and the related preference relations.
Product configuration, meeting scheduling and the construction of autonomous agents are among the artificial intelligence tasks that involve constrained optimization, that is, optimization of behavior or options subject to given constraints with respect to a set of preferences. When solving a constrained optimization problem, pruning techniques such as branch and bound aim to direct the search towards the best assignments, allowing the bounding functions to prune more branches of the search tree. Several constrained optimization problems exhibit dominance relations. These dominance relations can be particularly useful as they can instigate new pruning rules for discarding non-optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules, used within the branch and bound technique, to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules we have developed, and their different combinations, have varying impact on the performance of the branch and bound technique.
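The interplay of bounding and dominance-based pruning can be illustrated on a 0/1 knapsack (a deliberately simple stand-in; the thesis's dominance algorithm operates on comparative preferences, which this sketch does not model):

```python
# Branch and bound with an optimistic bound plus a dominance-based
# pruning rule: a node is discarded when an already-explored node at
# the same depth carries no more weight yet no less value.

def knapsack_bb(values, weights, capacity):
    n, best = len(values), [0]
    seen = {}   # depth -> (weight, value) pairs of explored nodes

    def bound(i, w, v):
        # Optimistic bound: add remaining items, last one fractionally.
        for j in range(i, n):
            if w + weights[j] <= capacity:
                w, v = w + weights[j], v + values[j]
            else:
                return v + values[j] * (capacity - w) / weights[j]
        return v

    def dominated(i, w, v):
        return any(w2 <= w and v2 >= v and (w2, v2) != (w, v)
                   for w2, v2 in seen.get(i, []))

    def search(i, w, v):
        best[0] = max(best[0], v)
        if i == n or bound(i, w, v) <= best[0] or dominated(i, w, v):
            return   # pruned by bound or by dominance, or a leaf
        seen.setdefault(i, []).append((w, v))
        if w + weights[i] <= capacity:
            search(i + 1, w + weights[i], v + values[i])  # take item i
        search(i + 1, w, v)                               # skip item i

    search(0, 0, 0)
    return best[0]

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

The dominance check mirrors the idea described above: dominated partial solutions cannot lead to an optimum, so pruning them shrinks the search tree before the (potentially expensive) pairwise comparisons are ever performed.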

Relevance: 100.00%

Abstract:

The Li-ion battery has for several years been at the forefront of powering an ever-increasing number of modern consumer electronic devices such as laptops, tablet PCs, cell phones and portable music players, while in more recent times it has also been sought to power a range of emerging electric and hybrid-electric vehicle classes. Given their extreme popularity, a number of features which define the performance of the Li-ion battery have become targets of improvement and have garnered tremendous research effort over the past two decades. Features such as battery capacity, voltage, lifetime and rate performance, together with important considerations such as safety, environmental benignity and cost, have all attracted attention. Although properties such as cell voltage and theoretical capacity are bound by the selection of electrode materials which constitute its interior, other performance markers of the Li-ion battery such as actual capacity, lifetime and rate performance may be improved by tailoring such materials with characteristics favourable to Li+ intercalation. One such tailoring route involves shrinking the constituent electrode materials to the nanoscale, where the ultra-small diameters may bestow favourable Li+ intercalation properties while providing the necessary mechanical robustness during routine electrochemical operation. The work detailed in this thesis describes a range of synthetic routes taken in nanostructuring a selection of choice Li-ion positive electrode candidates, together with a review of their respective Li-ion performances. Chapter one of this thesis serves to highlight a number of key advancements which have been made and detailed in the literature over recent years pertaining to the use of nanostructured materials in Li-ion technology.
Chapter two provides an overview of the experimental conditions and techniques employed in the synthesis and electrochemical characterisation of the as-prepared electrode materials constituting this doctoral thesis. Chapter three details the synthesis of small-diameter V2O5 and V2O5/TiO2 nanocomposite structures prepared by a novel carbon nanocage templating method using liquid precursors. Chapter four details a hydrothermal synthesis and characterisation of nanostructured β-LiVOPO4 powders together with an overview of their Li+ insertion properties while chapter five focuses on supercritical fluid synthesis as one technique in the tailoring of FeF2 and CoF2 powders having potentially appealing Li-ion 'conversion' properties. Finally, chapter six summarises the overall conclusions drawn from the results presented in this thesis, coupled with an indication of potential future work which may be explored upon the materials described in this work.

Relevance: 100.00%

Abstract:

This thesis is focused on the application of numerical atomic basis sets in studies of the structural, electronic and transport properties of silicon nanowire structures from first principles within the framework of Density Functional Theory. First we critically examine the applied methodology and then offer predictions regarding the transport properties and realisation of silicon nanowire devices. The performance of numerical atomic orbitals is benchmarked against calculations performed with plane-wave basis sets. After establishing the convergence of total energy and electronic structure calculations with increasing basis size, we have shown that their quality greatly improves with the optimisation of the contraction for a fixed basis size. The double zeta polarised basis offers a reasonable approximation for studying structural and electronic properties, and transferability exists between various nanowire structures; this is most important for reducing the computational cost. The impact of basis sets on transport properties in silicon nanowires with oxygen and dopant impurities has also been studied. It is found that whilst transmission features converge quantitatively with increasing contraction, there is a weaker dependence on basis set for the mean free path; the double zeta polarised basis offers a good compromise, whereas the single zeta basis set yields qualitatively reasonable results. Studying the transport properties of nanowire-based transistor setups with p+-n-p+ and p+-i-p+ doping profiles, it is shown that charge self-consistency affects the I-V characteristics more significantly than the basis set choice. It is predicted that such ultrascaled (3 nm length) transistors would show degraded performance due to relatively high source-drain tunnelling currents. Finally, it is shown that the hole mobility of Si nanowires nominally doped with boron decreases monotonically with decreasing width at fixed doping density and with increasing dopant concentration.
Significant mobility variations are identified which can explain experimental observations.
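The transport quantities discussed here follow the Landauer picture: zero-bias conductance is the conductance quantum times the summed channel transmissions, and a mean free path can be extracted from how the average transmission decays with wire length. A small sketch with illustrative values (not the thesis's nanowire data):

```python
# Landauer conductance and mean-free-path extraction, as commonly used
# in atomistic transport studies. Values below are illustrative only.
E_CHARGE = 1.602176634e-19        # elementary charge, C
H_PLANCK = 6.62607015e-34         # Planck constant, J s
G0 = 2 * E_CHARGE**2 / H_PLANCK   # conductance quantum, ~77.5 uS

def conductance(transmissions):
    """G = (2e^2/h) * sum of per-channel transmission probabilities."""
    return G0 * sum(transmissions)

def mean_free_path(t_avg, n_channels, length):
    """Invert the Ohmic scaling <T(L)> = N / (1 + L/lambda) for
    lambda; the result has the same units as `length`."""
    return length / (n_channels / t_avg - 1)

print(round(conductance([1.0, 0.5]) / G0, 2))   # 1.5 (in units of G0)
print(round(mean_free_path(1.0, 2, 10.0), 1))   # 10.0 (same units as L)
```

The weaker basis-set dependence of the mean free path noted above refers to the extracted lambda, which averages over many disorder configurations and is therefore less sensitive than individual transmission features.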

Relevance: 100.00%

Abstract:

This thesis investigated the thin-film characteristics and pattern formation of block copolymers (BCPs), using a set of predetermined molecular weights of PS-b-PMMA and PS-b-PDMS. After BCP pattern fabrication on the required base substrate, a dry plasma etch process was utilised to successfully transfer the BCP resist pattern onto the underlying substrate. The resultant sub-10 nm device features were used in front-end-of-line (FEoL) fabrication of active device components in integrated circuits (ICs). The potential use of BCP templates was further extended to metal and metal-oxide nanowire fabrication. These nanowires were further investigated in real-time applications as novel sensors and supercapacitors.

Relevance: 100.00%

Abstract:

Case-Based Reasoning (CBR) uses past experiences to solve new problems. The quality of the past experiences, which are stored as cases in a case base, is a big factor in the performance of a CBR system. The system's competence may be improved by adding problems to the case base after they have been solved and their solutions verified to be correct. However, from time to time, the case base may have to be refined to reduce redundancy and to get rid of any noisy cases that may have been introduced. Many case base maintenance algorithms have been developed to delete noisy and redundant cases. However, different algorithms work well in different situations and it may be difficult for a knowledge engineer to know which one is the best to use for a particular case base. In this thesis, we investigate ways to combine algorithms to produce better deletion decisions than the decisions made by individual algorithms, and ways to choose which algorithm is best for a given case base at a given time. We analyse five of the most commonly-used maintenance algorithms in detail and show how the different algorithms perform better on different datasets. This motivates us to develop a new approach: maintenance by a committee of experts (MACE). MACE allows us to combine maintenance algorithms to produce a composite algorithm which exploits the merits of each of the algorithms that it contains. By combining different algorithms in different ways we can also define algorithms that have different trade-offs between accuracy and deletion. While MACE allows us to define an infinite number of new composite algorithms, we still face the problem of choosing which algorithm to use. To make this choice, we need to be able to identify properties of a case base that are predictive of which maintenance algorithm is best. We examine a number of measures of dataset complexity for this purpose. These provide a numerical way to describe a case base at a given time. 
We use the numerical description to develop a meta-case-based classification system. This system uses previous experience about which maintenance algorithm was best to use for other case bases to predict which algorithm to use for a new case base. Finally, we give the knowledge engineer more control over the deletion process by creating incremental versions of the maintenance algorithms. These incremental algorithms suggest one case at a time for deletion rather than a group of cases, which allows the knowledge engineer to decide whether or not each case in turn should be deleted or kept. We also develop incremental versions of the complexity measures, allowing us to create an incremental version of our meta-case-based classification system. Since the case base changes after each deletion, the best algorithm to use may also change. The incremental system allows us to choose which algorithm is the best to use at each point in the deletion process.
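The committee idea behind MACE can be sketched as a voting scheme: each maintenance "expert" flags cases for deletion, and the composite algorithm deletes a case only when enough experts agree. The expert rules below are placeholders, not the five maintenance algorithms analysed in the thesis:

```python
# Committee-of-experts deletion: a case is deleted only when at least
# `quorum` of the member algorithms vote to delete it. Varying the
# quorum trades deletion aggressiveness against accuracy.

def committee_deletions(case_base, experts, quorum):
    votes = {case: 0 for case in case_base}
    for expert in experts:
        for case in expert(case_base):   # each expert returns its deletion set
            votes[case] += 1
    return {case for case, n in votes.items() if n >= quorum}

# Toy case base of (id, noisy?, redundant?) records.
cases = [("c1", True, False), ("c2", False, True), ("c3", False, False)]
noise_expert = lambda cb: {c for c in cb if c[1]}        # flags noisy cases
redundancy_expert = lambda cb: {c for c in cb if c[2]}   # flags redundant cases
union_expert = lambda cb: {c for c in cb if c[1] or c[2]}

# quorum=2: a case must be flagged by two of the three experts.
deleted = committee_deletions(cases, [noise_expert, redundancy_expert, union_expert], 2)
print(sorted(c[0] for c in deleted))  # ['c1', 'c2']
```

An incremental variant, as described above, would instead rank the voted cases and surface one deletion candidate at a time for the knowledge engineer to accept or reject.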

Relevance: 100.00%

Abstract:

The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d'Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0,∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. An EPT function can be represented as f(x) = c e^{Ax} b, where A is a square matrix, b a column vector and c a row vector; the triple (A, b, c) is the minimal realization of the EPT function and is unique only up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a pointmass at zero, and coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process which has stochastically independent 2-EPT random variables as increments. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a pointmass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 from a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation. The non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation.
Sufficient and necessary conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes. An asset's log returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European options with specific times to maturity. Formulae for discretely monitored lookback options and 2-period Bermudan options are also provided. Certain Greeks of these options, including Delta and Gamma, are also computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions. Numerical option pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
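Evaluating an EPT function from its minimal realization (A, b, c) reduces to a matrix exponential, which the following sketch illustrates (the thesis provides MATLAB scripts; this Python analogue with a 1x1 realization of the exponential density is purely illustrative):

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential by truncated Taylor series; adequate for the
    small, well-scaled matrices used in this example."""
    result, power = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        power = power @ M / k
        result = result + power
    return result

def ept(x, A, b, c):
    """Evaluate an EPT function f(x) = c * e^(Ax) * b from its
    minimal realization (A, b, c)."""
    return float(c @ expm(A * x) @ b)

# 1x1 realization of the exponential density lam * e^(-lam * x):
lam = 2.0
A, b, c = np.array([[-lam]]), np.array([1.0]), np.array([lam])
print(round(ept(1.0, A, b, c), 4))   # lam * e^-2 ≈ 0.2707
```

Eigenvalues of A with non-zero imaginary part contribute the trigonometric factors, and repeated eigenvalues the polynomial factors, which is how this single representation covers the whole exponential-polynomial-trigonometric class.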

Relevance: 100.00%

Abstract:

Despite studies demonstrating that inhibition of cyclooxygenase-2 (COX-2)-derived prostaglandin E2 (PGE2) has significant chemotherapeutic benefits in vitro and in vivo, inhibition of COX enzymes is associated with serious gastrointestinal and cardiovascular side effects, limiting the clinical utility of these drugs. PGE2 signals through four different receptors (EP1–EP4) and targeting individual receptor(s) may avoid these side effects, while retaining significant anticancer benefits. Here, we show that targeted inhibition of the EP1 receptor in the tumor cells and the tumor microenvironment resulted in the significant inhibition of tumor growth in vivo. Both dietary administration and direct injection of the EP1 receptor-specific antagonist, ONO-8713, effectively reduced the growth of established CT26 tumors in BALB/c mice, with suppression of the EP1 receptor in the tumor cells alone less effective in reducing tumor growth. This antitumor effect was associated with reduced Fas ligand expression and attenuated tumor-induced immune suppression. In particular, tumor infiltration by CD4+CD25+Foxp3+ regulatory T cells was decreased, whereas the cytotoxic activity of isolated splenocytes against CT26 cells was increased. F4/80+ macrophage infiltration was also decreased; however, there was no change in macrophage phenotype. These findings suggest that the EP1 receptor represents a potential target for the treatment of colon cancer.

Relevance: 100.00%

Abstract:

The ever-increasing demand for broadband communications requires sophisticated devices. Photonic integrated circuits (PICs) are an approach that fulfills those requirements. PICs enable the integration of different optical modules on a single chip. Low-loss fiber coupling and simplified packaging are key issues in keeping the price of PICs at a low level. Integrated spot size converters (SSC) offer an opportunity to accomplish this. Design, fabrication and characterization of SSCs based on an asymmetric twin waveguide (ATG) at a wavelength of 1.55 μm are the main elements of this dissertation. It is theoretically and experimentally shown that a passive ATG facilitates a polarization filter mechanism. A reproducible InP process guideline is developed that achieves vertical waveguides with smooth sidewalls. Birefringence and resonant coupling are used in an ATG to enable a polarization filtering and splitting mechanism. For the first time such a filter is experimentally shown. At a wavelength of 1610 nm a power extinction ratio of (1.6 ± 0.2) dB was measured for the TE-polarization in a single approximately 372 μm long TM-pass polarizer. A TE-pass polarizer with a similar length was demonstrated with a TM/TE-power extinction ratio of (0.7 ± 0.2) dB at 1610 nm. The refractive indices of two different InGaAsP compositions, required for a SSC, are measured by the reflection spectroscopy technique. A SSC layout for dielectric-free fabricated compact photodetectors is adjusted to those index values. The development and the results of the final fabrication procedure for the ATG concept are outlined. The etch rate, sidewall roughness and selectivity of a Cl2/CH4/H2 based inductively coupled plasma (ICP) etch are investigated by a design of experiment approach. The passivation effect of CH4 is illustrated for the first time. Conditions are determined for etching smooth and vertical sidewalls up to a depth of 5 μm.

Relevance: 100.00%

Abstract:

The great demand for power-optimised devices shows promising economic potential and has drawn considerable attention in both industry and research. Due to the continuously shrinking CMOS process, not only dynamic power but also static power has emerged as a major concern in power reduction. Beyond power optimization, average-case power estimation is significant for power budget allocation, but is also challenging in terms of time and effort. In this thesis, we introduce a methodology to support modular quantitative analysis for estimating the average power of circuits, on the basis of two concepts named Random Bag Preserving and Linear Compositionality. It can shorten simulation time while sustaining high accuracy, increasing the feasibility of power estimation for large systems. For power saving, firstly, we take advantage of the low-power characteristics of adiabatic logic and asynchronous logic to achieve ultra-low dynamic and static power. We propose two memory cells, which can run in adiabatic and non-adiabatic modes. About 90% of dynamic power can be saved in adiabatic mode when compared to other up-to-date designs, and about 90% of leakage power is saved. Secondly, a novel logic, named Asynchronous Charge Sharing Logic (ACSL), is introduced, in which the realization of completion detection is simplified considerably. Beyond the power reduction, ACSL brings another promising feature for average power estimation, data-independence, which makes power estimation effortless and meaningful for modular quantitative average-case analysis. Finally, a new asynchronous Arithmetic Logic Unit (ALU) with a ripple carry adder, implemented using the logically reversible/bidirectional characteristic and exhibiting ultra-low power dissipation with a sub-threshold operating point, is presented. The proposed adder is able to operate multi-functionally.
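The first-order CMOS power model underlying such comparisons can be made concrete (illustrative numbers only, not measurements from the memory cells described in the thesis):

```python
# Dynamic power P_dyn = alpha * C * Vdd^2 * f (alpha: switching
# activity, C: switched capacitance, f: clock frequency), plus static
# leakage P_leak = Vdd * I_leak. Adiabatic logic attacks the CV^2
# term by recovering charge instead of dissipating it each cycle.

def dynamic_power(alpha, c_load_f, vdd, freq_hz):
    return alpha * c_load_f * vdd**2 * freq_hz

def leakage_power(vdd, i_leak_a):
    return vdd * i_leak_a

p_dyn = dynamic_power(0.2, 50e-15, 1.0, 1e9)   # 0.2 activity, 50 fF, 1 V, 1 GHz
p_adiabatic = 0.1 * p_dyn                      # the ~90% saving cited above
print(round(p_dyn * 1e6, 1), "uW dynamic")     # 10.0 uW dynamic
print(round(leakage_power(1.0, 1e-9) * 1e9, 1), "nW leakage")
```

Average-case estimation then amounts to computing the expected switching activity alpha over the input distribution, which is exactly where the Random Bag Preserving and Linear Compositionality concepts allow the expectation to be composed module by module.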

Relevance: 100.00%

Abstract:

This thesis is focused on the design and synthesis of a diverse range of novel organosulfur compounds (sulfides, sulfoxides and sulfones), with the objective of studying their solid state properties and thereby developing an understanding of how the molecular structure of the compounds impacts upon their solid state crystalline structure. In particular, robust intermolecular interactions which determine the overall structure were investigated. These synthons were then exploited in the development of a molecular switch. Chapter One provides a brief overview of crystal engineering, the key hydrogen bonding interactions utilized in this work and also a general insight into “molecular machines” reported in the literature of relevance to this work. Chapter Two outlines the design and synthetic strategies for the development of two scaffolds suitable for incorporation of terminal alkynes, organosulfur and ether functionalities, in order to investigate the robustness and predictability of the S=O•••H-C≡C- and S=O•••H-C(α) supramolecular synthons. Crystal structures and a detailed analysis of the hydrogen bond interactions observed in these compounds are included in this chapter. Also the biological activities of four novel tertiary amines are discussed. Chapter Three focuses on the design and synthesis of diphenylacetylene compounds bearing amide and sulfur functionalities, and the exploitation of the N-H•••O=S interactions to develop a “molecular switch”. The crystal structures, hydrogen bonding patterns observed, NMR variable temperature studies and computer modelling studies are discussed in detail. Chapter Four provides the overall conclusions from chapter two and chapter three and also gives an indication of how the results of this work may be developed in the future. 
Chapter Five contains the full experimental details and spectral characterisation of all novel compounds synthesised in this project, while details of the NCI (National Cancer Institute) biological test results are included in the appendix.