12 results for 3-LEVEL SYSTEMS
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
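To make the incremental step concrete: the Berlekamp-Massey algorithm, which solves the Reed-Solomon key equation one sequence element (one congruence) at a time while maintaining a minimal solution, is a classical special case of this kind of iteration. A minimal Python sketch over a prime field, offered as illustration only; the thesis's algorithm is far more general (arbitrary term orders and shift/homomorphism-based congruences):

```python
def berlekamp_massey(seq, p):
    """Shortest linear recurrence (connection polynomial, c[0] == 1) of seq mod prime p."""
    c, b = [1], [1]      # current / previous connection polynomials
    L, m, bd = 0, 1, 1   # recurrence length, shift since last length change, last discrepancy
    for n, s in enumerate(seq):
        # discrepancy between the current recurrence's prediction and s
        d = (s + sum(c[i] * seq[n - i] for i in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(bd, p - 2, p) % p          # d / bd in GF(p)
        t = c[:]
        if len(c) < len(b) + m:                   # make room for the shifted update
            c = c + [0] * (len(b) + m - len(c))
        for i, bi in enumerate(b):                # c <- c - (d/bd) * x^m * b
            c[i + m] = (c[i + m] - coef * bi) % p
        if 2 * L <= n:                            # length change: remember old state
            L, b, bd, m = n + 1 - L, t, d, 1
        else:
            m += 1
    return c[:L + 1]

# Fibonacci mod 7 satisfies s_n = s_{n-1} + s_{n-2}; minimal polynomial 1 - x - x^2:
print(berlekamp_massey([1, 1, 2, 3, 5, 1, 6, 0], 7))  # -> [1, 6, 6]  (== [1, -1, -1] mod 7)
```

Note how the loop body is exactly an "incremental step": each new sequence element either confirms the current minimal solution or triggers an update from a stored earlier state, so the input need not all be available at the start.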
Abstract:
Background: Irritable bowel syndrome (IBS) is a common disorder that affects 10–15% of the population. Although characterised by a lack of reliable biological markers, the disease state is increasingly viewed as a disorder of the brain-gut axis. In particular, accumulating evidence points to the involvement of both the central and peripheral serotonergic systems in disease symptomatology. Furthermore, altered tryptophan metabolism and indoleamine 2,3-dioxygenase (IDO) activity are hallmarks of many stress-related disorders. The kynurenine pathway of tryptophan degradation may serve to link these findings to the low-level immune activation recently described in IBS. In this study, we investigated tryptophan degradation in a male IBS cohort (n = 10) and control subjects (n = 26). Methods: Plasma samples were obtained from patients and healthy controls. Tryptophan and its metabolites were measured by high-performance liquid chromatography (HPLC) and neopterin, a sensitive marker of immune activation, was measured using a commercially available ELISA. Results: Both kynurenine levels and the kynurenine:tryptophan ratio were significantly increased in the IBS cohort compared with healthy controls. Neopterin was also increased in the IBS subjects and the concentration of the neuroprotective metabolite kynurenic acid was decreased, as was the kynurenic acid:kynurenine ratio. Conclusion: These findings suggest that the activity of IDO, the immunoresponsive enzyme responsible for the degradation of tryptophan along this pathway, is enhanced in IBS patients relative to controls. This study provides novel evidence for an immune-mediated degradation of tryptophan in a male IBS population and identifies the kynurenine pathway as a potential source of biomarkers in this debilitating condition.
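The headline biomarker here is a plain ratio of plasma concentrations compared between groups. A hypothetical sketch of that comparison; every concentration below is an invented placeholder, not the study's data:

```python
# Hypothetical sketch: comparing kynurenine:tryptophan (K:T) ratios between an
# IBS group and controls, as described above. All values are synthetic.
from statistics import mean
from scipy.stats import mannwhitneyu

ibs_kyn,  ibs_trp  = [2.1, 2.4, 1.9, 2.6], [48.0, 45.0, 50.0, 44.0]   # umol/L, synthetic
ctrl_kyn, ctrl_trp = [1.6, 1.5, 1.8, 1.7], [55.0, 52.0, 49.0, 53.0]   # umol/L, synthetic

ibs_ratio  = [k / t for k, t in zip(ibs_kyn, ibs_trp)]
ctrl_ratio = [k / t for k, t in zip(ctrl_kyn, ctrl_trp)]

# non-parametric one-sided test: is the K:T ratio higher in the IBS group?
stat, pval = mannwhitneyu(ibs_ratio, ctrl_ratio, alternative="greater")
print(f"mean K:T  IBS={mean(ibs_ratio):.3f}  control={mean(ctrl_ratio):.3f}  p={pval:.3f}")
```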
Abstract:
The composition of equine milk differs considerably from that of the milk of the principal dairying species, i.e., the cow, buffalo, goat and sheep. Because equine milk resembles human milk in many respects and is claimed to have special therapeutic properties, it is becoming increasingly popular in Western Europe, where it is produced on large farms in several countries. Equine milk is considered to be highly digestible, rich in essential nutrients and to possess an optimum whey protein:casein ratio, making it very suitable as a substitute for bovine milk in paediatric dietetics. There is some scientific basis for the special nutritional and health-giving properties of equine milk, but this study provides the comprehensive analysis of the composition and physico-chemical properties of equine milk that is required to fully exploit its potential in human nutrition. Quantification and distribution of the nitrogenous components and principal salts of equine milk are reported. The effects of the high concentration of ionic calcium, large casein micelles (~ 260 nm), low protein, lack of a sulphydryl group in equine β-lactoglobulin and a very low level of κ-casein on the physico-chemical properties of equine milk are reported. This thesis provides an insight into the stability of equine casein micelles to heat, ethanol, high pressure, rennet or acid. Differences in rennet- and acid-induced coagulation between equine and bovine milk are attributed not only to the low casein content of equine milk but also to differences in the mechanism by which the respective micelles are stabilized. It has been reported that β-casein plays a role in the stabilization of equine casein micelles and proteomic techniques support this view. In this study, equine κ-casein appeared to be resistant to hydrolysis by calf chymosin but equine β-casein was readily hydrolysed. Resolution of equine milk proteins by urea-PAGE showed the multi-phosphorylated isoforms of equine αs- and β-caseins and capillary zone electrophoresis showed 3 to 7 phosphorylated residues in equine β-casein. In vitro digestion of equine β-casein by pepsin and Corolase PP™ did not produce casomorphins BCM-5 or BCM-7, believed to be harmful to human health. Electron microscopy provided very clear, detailed images of equine casein micelles in their native state and when renneted or acidified. Equine milk formed flocs rather than a gel when renneted or acidified, a finding supported by dynamic oscillatory analysis. The results presented in this thesis will assist in the development of new products from equine milk for human consumption which will retain some of its unique compositional and health-giving properties.
Abstract:
Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems and product configurators. The system the user interacts with must be able to assist by showing the consequences of the stated requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanations fail to provide sufficient information. We define new forms of explanations that aim to be more informative. Although explanation generation is a very hard task, the applications we consider demand a satisfactory level of interactivity, so we cannot afford long computation times. We introduce the concept of representative sets of relaxations: a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and we present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow, and present algorithms to compute such relaxations in times compatible with interactivity, achieved by making use of different types of compiled representations interchangeably. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
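To make "relaxation" concrete: when the user's requirements are jointly unsatisfiable, a relaxation is a satisfiable subset of them, and a representative set covers each requirement both ways (kept by at least one relaxation, dropped by at least one). A toy brute-force sketch on an invented configuration problem; the thesis's algorithms achieve this efficiently via compiled representations, which this sketch does not attempt:

```python
# Toy "representative set of relaxations" on an invented product-configuration
# problem. Brute force over subsets, for illustration only.
from itertools import combinations, product

domains = {"colour": ["red", "blue"], "engine": ["petrol", "electric"], "trim": ["base", "sport"]}
requirements = {                       # jointly unsatisfiable user requirements
    "r1": lambda c: c["colour"] == "red",
    "r2": lambda c: c["engine"] == "electric",
    "r3": lambda c: not (c["colour"] == "red" and c["engine"] == "electric"),
}

def satisfiable(names):
    configs = (dict(zip(domains, vals)) for vals in product(*domains.values()))
    return any(all(requirements[n](cfg) for n in names) for cfg in configs)

names = list(requirements)
sat_subsets = [set(s) for k in range(len(names), 0, -1)
               for s in combinations(names, k) if satisfiable(s)]
maximal = [r for r in sat_subsets if not any(r < other for other in sat_subsets)]

# greedy representative set: every requirement is kept by some chosen relaxation
# and dropped by some chosen relaxation
need_kept, need_dropped = set(names), set(names)
representative = []
for r in maximal:
    if (need_kept & r) or (need_dropped - r):
        representative.append(sorted(r))
        need_kept -= r
        need_dropped -= set(names) - r

print("maximal relaxations:", [sorted(r) for r in maximal])
print("representative set:", representative)
```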
Abstract:
The use of optical sensor technology for non-invasive determination of key pack quality parameters can improve package and product quality. This technology can be used for optimization of packaging processes, improvement of product shelf-life and maintenance of quality. In recent years, there has been a major focus on O2 and CO2 sensor development as these are key gases used in modified atmosphere packaging (MAP) of food. The first and second experimental chapters (chapters 2 and 3) describe the development of O2, pH and CO2 solid-state sensors and their (potential) use for food packaging applications. A dual-analyte sensor for dissolved O2 and pH with one bi-functional reporter dye (meso-substituted Pd- or Pt-porphyrin) embedded in a plasticized PVC membrane was developed in chapter 2. The CO2 sensor developed in chapter 3 comprised a phosphorescent reporter dye, Pt(II)-tetrakis(pentafluorophenyl)porphyrin (PtTFPP), and a colourimetric pH indicator, α-naphtholphthalein (NP), incorporated in a plastic matrix together with a phase transfer agent, tetraoctyl- or cetyltrimethylammonium hydroxide (TOA-OH or CTA-OH). The third experimental chapter, chapter 4, describes the development of liquid O2 sensors for rapid microbiological determination, which is important for the improvement and assurance of food safety systems. This automated screening assay, applied to various raw fish and horticultural samples, produced characteristic profiles with a sharp increase in fluorescence above the baseline level at a threshold time (TT) that can be correlated with the sample's initial microbial load. Chapter 5, the fourth experimental chapter, reports the successful application of the developed O2 and CO2 sensors for quality assessment of MAP mushrooms during storage for 7 days at 4°C.
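The TT readout described above is easy to state operationally: TT is the first time the fluorescence profile exceeds its baseline by a set factor, and log10 of the initial load is roughly linear in TT. A hedged sketch with synthetic profiles; the threshold factor, time base and load values are all invented for illustration:

```python
# Hedged sketch of the threshold-time (TT) readout: TT = first time fluorescence
# exceeds a fixed multiple of its baseline, regressed against log10 of the
# initial microbial load. All data are synthetic.
import numpy as np

def threshold_time(t, signal, factor=1.2, baseline_pts=5):
    """First time the signal exceeds `factor` times its initial baseline."""
    baseline = np.mean(signal[:baseline_pts])
    above = np.nonzero(signal > factor * baseline)[0]
    return t[above[0]] if above.size else None

t = np.linspace(0, 24, 200)                # hours
loads = np.array([1e2, 1e3, 1e4, 1e5])     # CFU/g, synthetic: higher load -> earlier rise
tts = []
for n0 in loads:
    sig = 100 + 900 / (1 + np.exp(-(t - (20 - 2.5 * np.log10(n0)))))  # toy sigmoid profile
    tts.append(threshold_time(t, sig))

slope, intercept = np.polyfit(tts, np.log10(loads), 1)
print("TT (h):", [f"{x:.1f}" for x in tts])
print(f"log10(CFU) ~= {slope:.2f}*TT + {intercept:.2f}")
```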
Abstract:
The ever-increasing demand for broadband communications requires sophisticated devices. Photonic integrated circuits (PICs) are an approach that fulfills those requirements. PICs enable the integration of different optical modules on a single chip. Low-loss fiber coupling and simplified packaging are key issues in keeping the price of PICs at a low level. Integrated spot size converters (SSCs) offer an opportunity to accomplish this. Design, fabrication and characterization of SSCs based on an asymmetric twin waveguide (ATG) at a wavelength of 1.55 μm are the main elements of this dissertation. It is theoretically and experimentally shown that a passive ATG facilitates a polarization filter mechanism. A reproducible InP process guideline is developed that achieves vertical waveguides with smooth sidewalls. Birefringence and resonant coupling are used in an ATG to enable a polarization filtering and splitting mechanism. For the first time such a filter is experimentally demonstrated. At a wavelength of 1610 nm a power extinction ratio of (1.6 ± 0.2) dB was measured for the TE-polarization in a single, approximately 372 μm long TM-pass polarizer. A TE-pass polarizer of similar length was demonstrated with a TM/TE power extinction ratio of (0.7 ± 0.2) dB at 1610 nm. The refractive indices of two different InGaAsP compositions, required for an SSC, are measured by the reflection spectroscopy technique. An SSC layout for dielectric-free fabricated compact photodetectors is adjusted to those index values. The development and results of the final fabrication procedure for the ATG concept are outlined. The etch rate, sidewall roughness and selectivity of a Cl2/CH4/H2-based inductively coupled plasma (ICP) etch are investigated by a design-of-experiments approach. The passivation effect of CH4 is illustrated for the first time. Conditions are determined for etching smooth and vertical sidewalls up to a depth of 5 μm.
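For reference, the quoted extinction ratios follow the standard decibel definition ER_dB = 10·log10(P_pass/P_blocked); the small sketch below (generic formula, not thesis code) shows, for instance, that 1.6 dB corresponds to roughly a 1.45x power ratio:

```python
# Standard dB extinction-ratio arithmetic (generic definition, not thesis code).
import math

def extinction_ratio_db(p_pass, p_blocked):
    """Power extinction ratio in dB between the passed and blocked polarization."""
    return 10 * math.log10(p_pass / p_blocked)

print(f"1.6 dB -> {10 ** (1.6 / 10):.2f}x power ratio")   # ~1.45x
print(f"{extinction_ratio_db(1.45, 1.0):.2f} dB")         # ~1.6 dB, inverting the example
```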
Abstract:
The present study aimed to investigate interactions of components in high-solids systems during storage. The systems included (i) lactose–maltodextrin (MD) with various dextrose equivalent (DE) values at different mixing ratios, (ii) whey protein isolate (WPI)–oil [olive oil (OO) or sunflower oil (SO)] at a 75:25 ratio, and (iii) WPI–oil–{glucose (G)–fructose (F) 1:1 syrup [70% (w/w) total solids]} at a component ratio of 45:15:40. Crystallization of lactose was delayed and increasingly inhibited with increasing MD contents and higher DE values (small molecular size or low molecular weight), although all systems showed similar glass transition temperatures at each aw. The water sorption isotherms of non-crystalline lactose and lactose–MD (0.11 to 0.76 aw) could be derived from the sum of the sorbed water contents of the individual amorphous components. The GAB equation was fitted to data of all non-crystalline systems. The protein–oil and protein–oil–sugar materials showed maximum protein oxidation and disulfide bonding after 2 weeks of storage at 20 and 40°C. The WPI–OO system showed denaturation and pre-aggregation of proteins during storage at both temperatures. The presence of G–F in WPI–oil increased the Tonset and Tpeak of protein aggregation, and the oxidative damage of the protein during storage, especially in systems with a higher level of unsaturated fatty acids. Lipid oxidation and glycation products in the sugar-containing systems promoted oxidation of proteins, increased changes in protein conformation and aggregation of proteins, and resulted in insolubility of solids or increased hydrophobicity concomitantly with hardening of structure, covalent crosslinking of proteins, and formation of stable polymerized solids, especially after storage at 40°C. Using dynamic mechanical analysis, we found protein hydration transitions preceding denaturation transitions in all high-protein systems, as well as the glass transition of confined water in the protein systems.
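The GAB (Guggenheim-Anderson-de Boer) isotherm mentioned above has the standard three-parameter form w = wm·C·K·aw / [(1 − K·aw)(1 − K·aw + C·K·aw)]. A minimal fitting sketch; the (aw, water content) pairs below are placeholders, not the thesis's measurements:

```python
# Minimal sketch of fitting the GAB sorption isotherm to water-sorption data.
# The data points are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, wm, C, K):
    """GAB isotherm: wm = monolayer water content, C and K = sorption constants."""
    kaw = K * aw
    return wm * C * kaw / ((1 - kaw) * (1 - kaw + C * kaw))

aw = np.array([0.11, 0.23, 0.33, 0.44, 0.54, 0.65, 0.76])   # water activity
w  = np.array([2.1, 3.4, 4.3, 5.4, 6.6, 8.5, 11.8])         # g water / 100 g solids, synthetic

(wm, C, K), _ = curve_fit(gab, aw, w, p0=[5.0, 10.0, 0.8])
print(f"monolayer wm={wm:.2f} g/100 g, C={C:.1f}, K={K:.3f}")
```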
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government, etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnects. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical-cavity surface-emitting lasers, widely used for short-reach optical interconnects), 40 photodiodes and the associated 25Gb/s electronics in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s/fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per individual channel, which helps to overcome the challenges mentioned above in realizing 400Gb/s to 1Tb/s transceivers.
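PAM-4's throughput gain is simple to state: two bits per symbol instead of one, so the same 25 GBd lane carries 50 Gb/s. A minimal Gray-coded mapping sketch, illustrative only and unrelated to the thesis's circuits:

```python
# Minimal PAM-4 illustration: two bits per symbol, Gray-coded onto four levels
# so adjacent levels differ by one bit (limits the cost of a single level error).
GRAY_TO_LEVEL = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map an even-length bit list to PAM-4 amplitude levels."""
    assert len(bits) % 2 == 0
    return [GRAY_TO_LEVEL[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [0, 0, 1, 0, 1, 1, 0, 1]
print(pam4_encode(bits))                               # [-3, 3, 1, -1]
print("2 bits/symbol -> 25 GBd carries", 25 * 2, "Gb/s per lane")
```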
Abstract:
In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through the use of instruction set extensions of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. FPGA implementations of recent hash function designs from the SHA-3 competition are discussed and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed; a novel aspect of the design is the secure manner in which private-key data is handled.
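The core ECC primitive such coprocessors accelerate is scalar multiplication by double-and-add. A Python sketch on the textbook toy curve y^2 = x^3 + 2x + 2 over GF(17), which has group order 19; purely illustrative, as real hardware uses hardened, constant-time field arithmetic:

```python
# Double-and-add scalar multiplication on a toy curve (illustration only).
# pow(x, -1, m) for modular inverses needs Python 3.8+.
P_MOD, A = 17, 2
O = None                                  # point at infinity

def ec_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                          # P + (-P) = O
    if P == Q:                            # tangent slope for doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                 # chord slope for addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    R = O
    while k:                              # LSB-first double-and-add
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (5, 1)
print(scalar_mult(2, G))                  # (6, 3)
print(scalar_mult(19, G))                 # None: 19*G = O, since the group order is 19
```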
Abstract:
Leachate may be defined as any liquid percolating through deposited waste and emitted from or contained within a landfill. If leachate migrates from a site it may pose a severe threat to the surrounding environment. Increasingly stringent environmental legislation at both European and national level (Republic of Ireland) regarding the operation of landfill sites, control of associated emissions, as well as requirements for restoration and aftercare management (up to 30 years), has prompted research for this project into the design and development of a low-cost, low-maintenance, low-technology trial system to treat landfill leachate at Kinsale Road Landfill Site, located on the outskirts of Cork city. A trial leachate treatment plant was constructed consisting of 14 separate treatment units (10 open-top cylindrical cells [Ø 1.8 m x 2.0 m high] and four reed beds [5.0 m x 5.0 m x 1.0 m]) incorporating various alternative natural treatment processes including reed beds (vertical flow [VF] and horizontal flow [HF]), grass treatment planes, compost units, timber chip units, compost-timber chip units, stratified sand filters and willow treatment plots. High treatment efficiencies were achieved in units operating in sequence containing compost and timber chip media, vertical flow reed beds and grass treatment planes. Pollutant load removal rates of 99% for NH4, 84% for BOD5, 46% for COD, 63% for suspended solids, 94% for iron and 98% for manganese were recorded in the final effluent of successfully operated sequences at irrigation rates of 945 l/m2/day in the cylindrical cells and 96 l/m2/day in the VF reed beds and grass treatment planes. Almost total pathogen removal (E. coli) occurred in the final effluent of the same sequence. Denitrification rates of 37% were achieved for a limited period. A draft, up-scaled leachate treatment plant design is presented, based on the treatment performance of the trial plant.
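The quoted removal rates are the usual load-based efficiencies, 100 x (in − out)/in. A small sketch of the arithmetic; the influent/effluent concentrations are invented so that the outputs match the quoted percentages:

```python
# Pollutant load removal efficiency as used for the figures above.
# Influent/effluent concentrations are invented for illustration.
def removal_pct(c_in, c_out):
    """Percentage of the influent load removed by the treatment sequence."""
    return 100.0 * (c_in - c_out) / c_in

leachate = {"NH4": (350.0, 3.5), "BOD5": (180.0, 28.8), "COD": (900.0, 486.0)}  # mg/L, synthetic
for param, (cin, cout) in leachate.items():
    print(f"{param}: {removal_pct(cin, cout):.0f}% removal")   # 99%, 84%, 46%
```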
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial in satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process, and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code which repeat during execution, then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
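The second approach reduces to the average number of comparisons insertion sort performs (about n²/4 for random input), plugged into a linear energy model. A hedged sketch that measures the count empirically; the energy coefficients are invented placeholders, not the thesis's measured LEON3 values:

```python
# Empirical average comparison count for insertion sort, fed into a linear
# energy model E ~= E0 + E_CMP * comparisons. Coefficients are hypothetical.
import random

def insertion_sort_comparisons(a):
    """Sort a copy of `a`, returning the number of key comparisons performed."""
    a, cmps = list(a), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            cmps += 1                    # one key comparison
            if a[j] > key:
                a[j + 1] = a[j]          # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return cmps

n, trials = 64, 2000
avg = sum(insertion_sort_comparisons(random.sample(range(10**6), n))
          for _ in range(trials)) / trials
print(f"n={n}: avg comparisons {avg:.0f} (theory ~ n^2/4 = {n * n / 4:.0f})")

E0, E_CMP = 1.2e-6, 3.0e-9               # joules, hypothetical model coefficients
print(f"predicted energy ~= {E0 + E_CMP * avg:.2e} J")
```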
Abstract:
Background: Diagnostic decision-making is made through a combination of System 1 (intuition or pattern-recognition) and System 2 (analytic) thinking. The purpose of this study was to use the Cognitive Reflection Test (CRT) to evaluate and compare the level of System 1 and System 2 thinking among medical students in pre-clinical and clinical programs. Methods: The CRT is a three-question test designed to measure the ability of respondents to activate metacognitive processes and switch to System 2 (analytic) thinking where System 1 (intuitive) thinking would lead them astray. Each CRT question has a correct analytical (System 2) answer and an incorrect intuitive (System 1) answer. A group of medical students in Years 2 and 3 (pre-clinical) and Year 4 (in clinical practice) of a 5-year medical degree were studied. Results: Ten percent (13/128) of students gave the intuitive answers to all three questions (suggesting they generally relied on System 1 thinking), while almost half (44%) answered all three correctly (indicating full analytical, System 2 thinking). Only 3-13% gave incorrect answers that were neither the analytical nor the intuitive responses. Non-native English-speaking students (n = 11) had a lower mean number of correct answers compared with native English speakers (n = 117; 1.0 vs 2.12 respectively; p < 0.01). As students progressed through questions 1 to 3, the percentage of correct System 2 answers increased and the percentage of intuitive answers decreased in both the pre-clinical and clinical students. Conclusions: Up to half of the medical students demonstrated full or partial reliance on System 1 (intuitive) thinking in response to these analytical questions. While their CRT performance makes no claims as to their future expertise as clinicians, the test may be used to help students understand the importance of awareness and regulation of their thinking processes in clinical practice.
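Scoring the CRT amounts to classifying each response as analytic, intuitive, or other. A small sketch using the standard Frederick (2005) CRT answer keys (the well-known bat-and-ball, widget and lily-pad items); these keys come from the original CRT, not from this study's data:

```python
# Sketch of CRT scoring: each item has a correct (System 2) answer and a known
# intuitive (System 1) lure. Answer keys are the standard Frederick (2005) items.
CRT_KEY = {
    "bat_ball_cents":  {"analytic": 5,  "intuitive": 10},
    "widgets_minutes": {"analytic": 5,  "intuitive": 100},
    "lilypads_days":   {"analytic": 47, "intuitive": 24},
}

def score(responses):
    """Classify each response as analytic, intuitive, or other."""
    counts = {"analytic": 0, "intuitive": 0, "other": 0}
    for item, answer in responses.items():
        key = CRT_KEY[item]
        if answer == key["analytic"]:
            counts["analytic"] += 1
        elif answer == key["intuitive"]:
            counts["intuitive"] += 1
        else:
            counts["other"] += 1
    return counts

print(score({"bat_ball_cents": 10, "widgets_minutes": 5, "lilypads_days": 47}))
# -> {'analytic': 2, 'intuitive': 1, 'other': 0}
```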