877 results for next generation matrix
Abstract:
The design demands on water and sanitation engineers are rapidly changing. The global population is set to rise from 7 billion to 10 billion by 2083. Urbanisation in developing regions is increasing at such a rate that a predicted 56% of the global population will live in an urban setting by 2025. Compounding these problems, the global water and energy crises are impacting the Global North and South alike. High-rate anaerobic digestion offers a low-cost, low-energy treatment alternative to the energy-intensive aerobic technologies used today. Widespread implementation, however, is hindered by the lack of capacity to engineer high-rate anaerobic digestion for the treatment of complex wastes such as sewage. This thesis utilises the Expanded Granular Sludge Bed (EGSB) bioreactor as a model system in which to study the ecology, physiology and performance of high-rate anaerobic digestion of complex wastes. The impacts of a range of engineered parameters, including reactor geometry, wastewater type, operating temperature and organic loading rate, are systematically investigated using lab-scale EGSB bioreactors. Next generation sequencing of 16S amplicons is used to monitor microbial ecology. Microbial community physiology is monitored by means of specific methanogenic activity testing, and a range of physical and chemical methods are applied to assess reactor performance. Finally, the limit state approach is trialled as a method for testing the EGSB and is proposed as a standard method for biotechnology testing, enabling improved process control at full scale. The resulting data are assessed both qualitatively and quantitatively. Lab-scale reactor design is demonstrated to significantly influence the spatial distribution of the underlying ecology and community physiology, a vital finding for both researchers and full-scale plant operators responsible for monitoring EGSB reactors. Recurrent trends in the data indicate that hydrogenotrophic methanogenesis dominates in high-rate anaerobic digestion at both full and lab scale when subjected to engineered or operational stresses, including low temperature and variable feeding regimes. This is of relevance for those seeking to define new directions in fundamental understanding of syntrophic and competitive relations in methanogenic communities, and also to design engineers in determining operating parameters for full-scale digesters. The adoption of the limit state approach enabled the identification of biological indicators providing early warning of failure under high-solids loading, a vital insight for those currently working empirically towards the development of new biotechnologies at lab scale.
Abstract:
Understanding and measuring the interaction of light with sub-wavelength structures and atomically thin materials is of critical importance for the development of next generation photonic devices. One approach to achieve the desired optical properties in a material is to manipulate its mesoscopic structure or its composition in order to affect the properties of the light-matter interaction. There has been tremendous recent interest in so-called two-dimensional materials, consisting of only a single to a few layers of atoms arranged in a planar sheet. These materials have demonstrated great promise as a platform for studying unique phenomena arising from the low dimensionality of the material and for developing new types of devices based on these effects. A thorough investigation of the optical and electronic properties of these new materials is essential to realizing their potential. In this work we present studies that explore the nonlinear optical properties and carrier dynamics in nanoporous silicon waveguides, two-dimensional graphite (graphene), and atomically thin black phosphorus. We first present an investigation of the nonlinear response of nanoporous silicon optical waveguides using a novel pump-probe method. A two-frequency heterodyne technique is developed in order to measure the pump-induced transient change in phase and intensity in a single measurement. The experimental data reveal a characteristic material response time and temporally resolved intensity and phase behavior matching a physical model dominated by free-carrier effects that are significantly stronger and faster than those observed in traditional silicon-based waveguides. These results shed light on the large optical nonlinearity observed in nanoporous silicon and demonstrate a new measurement technique for heterodyne pump-probe spectroscopy. Next we explore the optical properties of low-doped graphene in the terahertz spectral regime, where both intraband and interband effects play a significant role. Probing the graphene at intermediate photon energies enables the investigation of the nonlinear optical properties of the graphene as its electron system is heated by the intense pump pulse. By simultaneously measuring the reflected and transmitted terahertz light, a precise determination of the pump-induced change in absorption can be made. We observe that as the intensity of the terahertz radiation is increased, the optical properties of the graphene change from interband, semiconductor-like absorption to a more metallic behavior with increased intraband processes. This transition reveals itself in our measurements as an increase in the terahertz transmission through the graphene at low fluence, followed by a decrease in transmission and the onset of a large, photo-induced reflection as the fluence is increased. A hybrid optical-thermodynamic model successfully describes our observations and predicts that this transition will persist across mid- and far-infrared frequencies. This study further demonstrates the important role that reflection plays, since the absorption saturation intensity (an important figure of merit for graphene-based saturable absorbers) can be underestimated if only the transmitted light is considered. These findings are expected to contribute to the development of new optoelectronic devices designed to operate in the mid- and far-infrared frequency range.
Lastly we discuss recent work with black phosphorus, a two-dimensional material that has recently attracted interest due to its high mobility and direct, configurable band gap (300 meV to 2 eV), depending on the number of atomic layers comprising the sample. In this work we examine the pump-induced change in optical transmission of mechanically exfoliated black phosphorus flakes using a two-color optical pump-probe measurement. The time-resolved data reveal a fast pump-induced transparency accompanied by a slower absorption that we attribute to Pauli blocking and free-carrier absorption, respectively. Polarization studies show that these effects are also highly anisotropic, underscoring the importance of crystal orientation in the design of optical devices based on this material. We conclude our discussion of black phosphorus with a study that employs this material as the active element in a photoconductive detector capable of gigahertz-class detection of mid-infrared frequencies at room temperature.
Abstract:
'Abnormal vertical growth' (AVG) was recognised in Australia as a dysfunction of macadamia (Macadamia spp.) in the mid-1990s. Affected trees displayed unusually erect branching, and poor flowering and yield. Since 2002, the commercial significance of AVG, its cause, and strategies to alleviate its effects have been studied. The cause is still unknown, and AVG remains a serious threat to orchard viability. AVG affects both commercial and urban macadamia. It occurs predominantly in the warmer, drier production regions of Queensland and New South Wales. An estimated 100,000 orchard trees are affected, equating to an annual loss of $10.5 M. In orchards, AVG occurs as aggregations of affected trees, the number of affected trees can increase by 4.5% per year, and yield reduction can exceed 30%. The more upright cultivars 'HAES 344' and '741' are highly susceptible, while the more spreading cultivars 'A4', 'A16' and 'A268' show tolerance. Incidence is higher (p<0.05) in soils of high permeability and good drainage. No soil chemical anomaly has been found. Fine root dry weight (0-15 cm depth) was found to be lower (p<0.05) in AVG trees than in non-AVG trees. Next generation sequencing has led to the discovery of a new Bacillus sp. and a bipartite Geminivirus, which may have a role in the disease. Trunk cinctures increase (p<0.05) the yield of moderately affected trees. Further research is needed to clarify whether a pathogen is the cause, to define the role of soil moisture in AVG, and to develop a varietal solution.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and much research has been devoted to solving them heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, a new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, and learning can thus amount to 'counting' in the case of multinomial distributions.
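Because the network structure is fixed and all variables are observed, the "learning by counting" step and the subsequent node-by-node sampling of new rule strings can be illustrated with a minimal sketch. The code below is hypothetical (the rule names, chain-shaped dependency structure and string length are invented for illustration) and is not the algorithm described above; it only shows how conditional probabilities could be estimated by counting over promising rule strings and then sampled to propose new ones.

import random
from collections import defaultdict

# Hypothetical chain-structured model: the rule chosen at step i depends on
# the rule chosen at step i-1. Conditionals are learned by counting over a
# set of promising rule strings, then sampled to generate new strings.
RULES = ["R1", "R2", "R3"]   # construction rules (invented)
N_STEPS = 5                  # moves needed to build one schedule

def learn_counts(good_strings):
    # Multinomial learning by counting, with Laplace smoothing (default = 1).
    counts = defaultdict(lambda: defaultdict(lambda: 1))
    for s in good_strings:
        prev = None
        for rule in s:
            counts[prev][rule] += 1
            prev = rule
    return counts

def sample_string(counts):
    # Generate a new rule string node by node from the learned conditionals.
    string, prev = [], None
    for _ in range(N_STEPS):
        weights = [counts[prev][r] for r in RULES]
        rule = random.choices(RULES, weights=weights)[0]
        string.append(rule)
        prev = rule
    return string

# Toy usage: pretend these two strings produced good schedules.
good = [["R1", "R2", "R2", "R3", "R1"], ["R1", "R2", "R3", "R3", "R1"]]
model = learn_counts(good)
print(sample_string(model))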
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected by using the Roulette Wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
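As a companion to the BOA sketch, the strength-update loop just described (constant initialization, roulette-wheel selection, reinforcement of rules used in a good solution) can also be outlined in a few lines. Again, this is only an illustrative sketch with invented rule names, stage count and reward size, not the improved LCS algorithm itself.

import random

RULES = ["R1", "R2", "R3"]   # invented rules
N_STAGES = 5
REWARD = 0.1

# Initialization: constant initial strength for every rule at every stage.
strength = {(stage, r): 1.0 for stage in range(N_STAGES) for r in RULES}

def roulette_pick(stage):
    # Roulette-wheel selection proportional to current strengths.
    weights = [strength[(stage, r)] for r in RULES]
    return random.choices(RULES, weights=weights)[0]

def build_solution():
    return [roulette_pick(stage) for stage in range(N_STAGES)]

def reinforce(solution):
    # Reinforce rules used in the previous (good) solution; others unchanged.
    for stage, rule in enumerate(solution):
        strength[(stage, rule)] += REWARD

# Toy usage: build a solution, pretend it scored well, reinforce its rules.
sol = build_solution()
reinforce(sol)
print(sol, strength[(0, sol[0])])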
Abstract:
We present ideas about creating a next generation Intrusion Detection System (IDS) based on the latest immunological theories. The central challenge with computer security is determining the difference between normal and potentially harmful activity. For half a century, developers have protected their systems by coding rules that identify and block specific events. However, the nature of current and future threats, in conjunction with ever larger IT systems, urgently requires the development of automated and adaptive defensive tools. A promising solution is emerging in the form of Artificial Immune Systems (AIS): the Human Immune System (HIS) can detect and defend against harmful and previously unseen invaders, so can we not build a similar Intrusion Detection System (IDS) for our computers? Presumably, such systems would then have the same beneficial properties as the HIS, such as error tolerance, adaptation and self-monitoring. Current AIS have been successful on test systems, but the algorithms rely on self-nonself discrimination, as stipulated in classical immunology. However, immunologists are increasingly finding fault with traditional self-nonself thinking and a new 'Danger Theory' (DT) is emerging. This new theory suggests that the immune system reacts to threats based on the correlation of various (danger) signals and it provides a method of 'grounding' the immune response, i.e. linking it directly to the attacker. Little is currently understood of the precise nature and correlation of these signals and the theory is a topic of hot debate. It is the aim of this research to investigate this correlation and to translate the DT into the realms of computer security, thereby creating AIS that are no longer limited by self-nonself discrimination. It should be noted that we do not intend to defend this controversial theory per se, although as a deliverable this project will add to the body of knowledge in this area. Rather, we are interested in its merits for scaling up AIS applications by overcoming self-nonself discrimination problems.
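To make the danger-signal idea concrete, the toy sketch below correlates a few host-level signals into a single score and attaches any resulting alert to the offending process (the "grounding" step). The signal names, weights and threshold are entirely hypothetical and are not drawn from the project described above.

# Toy illustration of correlating danger signals and grounding the response.
DANGER_WEIGHTS = {              # hypothetical signal weights
    "cpu_spike": 0.3,
    "abnormal_syscalls": 0.5,
    "outbound_conn_burst": 0.2,
}
ALERT_THRESHOLD = 0.6

def danger_score(signals):
    # Weighted correlation of normalised (0..1) danger signals.
    return sum(DANGER_WEIGHTS[name] * value for name, value in signals.items())

def evaluate(process_id, signals):
    # Raise a grounded alert (tied to the source process) if the score is high.
    score = danger_score(signals)
    if score >= ALERT_THRESHOLD:
        return f"ALERT: process {process_id} (danger score {score:.2f})"
    return f"ok: process {process_id} (danger score {score:.2f})"

print(evaluate(4242, {"cpu_spike": 0.9, "abnormal_syscalls": 0.8,
                      "outbound_conn_burst": 0.1}))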
Abstract:
Thirty-four microsatellite loci were isolated from three reef fish species: golden snapper Lutjanus johnii, blackspotted croaker Protonibea diacanthus and grass emperor Lethrinus laticaudis, using a next generation sequencing approach. Both IonTorrent single reads and Illumina MiSeq paired-end reads were used, with the latter demonstrating a higher read quality than the IonTorrent. From the 1–1.5 million raw reads per species, we successfully obtained 10–13 polymorphic loci for each species that satisfied stringent design criteria. We developed multiplex panels for the amplification of the golden snapper and blackspotted croaker loci, as well as post-amplification pooling panels for the grass emperor loci. The microsatellites characterized in this work were tested across three locations in northern Australia; they can detect population differentiation across northern Australia and may be used for genetic structure studies and stock identification.
Abstract:
The next generation of vehicles will be equipped with automated Accident Warning Systems (AWSs) capable of warning neighbouring vehicles about hazards that might lead to accidents. The key enabling technology for these systems is the Vehicular Ad-hoc Network (VANET), but the dynamics of such networks make the crucial timely delivery of warning messages challenging. While most previously attempted implementations have used broadcast-based data dissemination schemes, these do not cope well as data traffic load or network density increases. This thesis addresses the problem of sending warning messages in a timely manner by employing a network coding technique. The proposed NETwork COded DissEmination (NETCODE) is a VANET-based AWS responsible for generating and sending warnings to the vehicles on the road. NETCODE offers an XOR-based data dissemination scheme that sends multiple warnings in a single transmission and therefore reduces the total number of transmissions required to send the same number of warnings that broadcast schemes send. Hence, it reduces contention and collisions in the network, improving the delivery time of the warnings. The first part of this research (Chapters 3 and 4) asserts that in order to build a warning system, it is necessary to ascertain the system requirements, the information to be exchanged, and the protocols best suited for communication between vehicles. Therefore, a study of these factors is carried out, along with a review of existing proposals identifying their strengths and weaknesses. An analysis of existing broadcast-based warning schemes is then conducted, which concludes that although this is the most straightforward approach, increasing load can result in an effective collapse, producing unacceptably long transmission delays. The second part of this research (Chapter 5) proposes the NETCODE design, including the main contribution of this thesis, a pair of encoding and decoding algorithms that make use of an XOR-based technique to reduce transmission overheads and thus allow warnings to be delivered in time. The final part of this research (Chapters 6--8) evaluates the performance of the proposed scheme in terms of how it reduces the number of transmissions in the network in response to growing data traffic load and network density, and investigates its capacity to detect potential accidents. The evaluations use a custom-built simulator to model real-world scenarios such as city areas, junctions, roundabouts, motorways and so on. The study shows that the reduction in the number of transmissions significantly reduces contention in the network, allowing vehicles to deliver warning messages more rapidly to their neighbours. It also examines the relative performance of NETCODE when handling both sudden event-driven and longer-term periodic messages in diverse scenarios under stress caused by increasing numbers of vehicles and transmissions per vehicle. This work confirms the thesis' primary contention that XOR-based network coding provides a potential solution on which a more efficient AWS data dissemination scheme can be built.
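The core XOR trick can be illustrated in a few lines. The sketch below is not the NETCODE encoding/decoding algorithms from the thesis; it is a minimal, hypothetical example (payloads invented) showing how two warning payloads can be combined into one coded transmission and how a receiver that already holds one warning recovers the other.

# Minimal XOR network-coding illustration (hypothetical payloads).
def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two byte strings, padding the shorter one with zero bytes.
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\x00"), b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

warning_1 = b"HAZARD:ICE@J5"
warning_2 = b"CRASH:A1-NB-KM12"

coded = xor_bytes(warning_1, warning_2)   # one transmission carries both warnings

# A vehicle that already received warning_1 decodes warning_2 from the coded packet.
recovered = xor_bytes(coded, warning_1)
print(recovered.rstrip(b"\x00"))          # b'CRASH:A1-NB-KM12'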
Abstract:
The poor heating efficiency of most reported magnetic nanoparticles (MNPs), combined with the lack of comprehensive biocompatibility and haemodynamic studies, hampers the spread of multifunctional nanoparticles as the next generation of therapeutic bio-agents in medicine. The present work reports the synthesis and characterization, with special focus on biological/toxicological compatibility, of superparamagnetic nanoparticles with a diameter around 18 nm, suitable for theranostic applications (i.e. simultaneous diagnosis and therapy of cancer). To gain more insight into the complex interaction between nanoparticles and the red blood cell (RBC) membrane, the deformability of human RBCs in contact with MNPs was assessed for the first time with a microfluidic extensional approach and used as an indicator of haematological disorders, in comparison with a conventional haematological test, i.e. haemolysis analysis. The microfluidic results highlight the potential of this microfluidic tool over traditional haemolysis analysis, detecting small increments in the rigidity of the blood cells where traditional haemotoxicology analysis showed no significant alteration (haemolysis rates lower than 2%). The detected rigidity is predicted to be due to the wrapping of small MNPs by the bilayer membrane of the RBCs, which is directly related to MNP size, shape and composition. The proposed microfluidic tool adds a new dimension to the field of nanomedicine and can be applied as a high-sensitivity technique capable of bringing a better understanding of the biological impact of nanoparticles developed for clinical applications.
Abstract:
Congenital hypothyroidism due to thyroid dysgenesis (CHTD; ectopy in more than 80% of cases) has a prevalence of 1 in 4,000 live births. CHTD results from the failure of the embryonic thyroid to differentiate, to be maintained, or to migrate to its anatomical location (the anterior part of the neck), leading either to a complete absence of the thyroid (athyreosis) or to thyroid ectopy (lingual or sublingual). CHTD is mostly non-syndromic (98% of cases are non-familial), shows a 92% discordance rate in monozygotic twins, and has a female and ethnic (i.e., Caucasian) predominance. Most cases of CHTD have no known cause but are associated with a severe thyroid hormone deficiency (hypothyroidism). Germline mutations in thyroid-related transcription factors (NKX2.1, FOXE1, PAX8, NKX2.5) have been identified in only 3% of patients with sporadic CHTD, and linkage analysis excludes these genes in the rare multiplex families with CHTD. We hypothesise that the lack of clear familial transmission of CHTD may result from the need for at least two different genetic 'hits' in genes important for thyroid development. To address our research questions, we used two different approaches: 1) a candidate-gene approach focused on FOXE1, the only gene implicated in ectopy in the mouse model, and 2) an approach using next-generation sequencing (NGS) to find genetic variants that could explain this pathology in a cohort of patients with CHTD. For the first approach, a case-control study was performed on the FOXE1 promoter. A region of the FOXE1 promoter was recently found to be differentially methylated at two consecutive CpG dinucleotides, defining a crucial zone controlling FOXE1 expression. Haplotype-based association analysis revealed that one haplotype (Hap1: ACCCCCCdel1C) is associated with CHTD in Caucasians (p = 5x10^-3). A significant reduction in luciferase activity is observed for Hap1 (68% reduction, p<0.001) compared with the wild-type FOXE1 promoter. A 50% reduction of FOXE1 expression in a human thyroid cell line is sufficient to significantly reduce cell migration (55% reduction, p<0.05). Another haplotype (Hap2: ACCCCCCC) is observed less frequently in African Americans than in Caucasians (p = 1.7x10^-3), and Hap2 decreases luciferase activity (26% reduction, p<0.001). Two distinct haplotypes are frequently found in controls of Black African descent. The first (Hap3: GTCCCAAC) is frequent (30.2%) in African-American controls compared with Caucasian controls (6.3%; p = 2.59x10^-9), whereas the second (Hap4: GTCCGCAC) is found exclusively in African-American controls (9.4%) and is absent in Caucasian controls (p = 2.59x10^-6). For the second approach, exome sequencing of leukocyte DNA from the discordant monozygotic twins revealed no differences; hence the interest of sequencing the DNA and RNA of ectopic and orthotopic thyroids, in which random monoallelic expression was observed, which could explain how a monoallelic mutation can have pathogenic consequences.
Finally, exome sequencing of a cohort of 36 CHTD cases identified new, probably pathogenic variants in the recurrently mutated genes RYR3, SSPO, IKBKE and TNXB. These four genes are involved in focal adhesion (which plays a role in cell migration), suggesting a direct role in thyroid migration defects. Migration assays show a strong decrease (at least 60% at 5 h) in the migration of thyroid cells infected with shRNA compared with shCtrl for two of these genes. Knockout zebrafish (-/- and +/-) for these new genes will be generated to assess their impact on thyroid embryology.
Abstract:
Across the international educational landscape, numerous higher education institutions (HEIs) offer postgraduate programmes in occupational health psychology (OHP). These seek to empower the next generation of OHP practitioners with the knowledge and skills necessary to advance the understanding and prevention of workplace illness and injury, improve working life and promote healthy work through the application of psychological principles and practices. Among the OHP curricula operated within these programmes there exists considerable variability in the topics addressed. This is due, inter alia, to the youthfulness of the discipline and the fact that the development of educational provision has been managed at the level of the HEI, where it has remained undirected by external forces such as the discipline's representative bodies. Such variability makes it difficult to discern the key characteristics of a curriculum, which is important for programme accreditation purposes, the professional development and regulation of practitioners and, ultimately, the long-term sustainability of the discipline. This chapter has as its focus the imperative for, and development of, consensus surrounding OHP curriculum areas. It begins by examining the factors that are currently driving curriculum developments and explores some of the barriers to them. It then reviews the limited body of previous research that has attempted to discern key OHP curriculum areas. This provides a foundation upon which to describe a study conducted by the current authors, involving the elicitation of subject matter expert opinion from an international sample of academics engaged in OHP-related teaching and research on the question of which topic areas might be considered important for inclusion within an OHP curriculum. The chapter closes by drawing conclusions on steps that could be taken by the discipline's representative bodies towards the consolidation and accreditation of a core curriculum.
Abstract:
Next Generation Sequencing (NGS) allows the whole genome of an organism to be sequenced, whereas Maxam-Gilbert and Sanger sequencing could, with difficulty, sequence only a single gene. Removing the separation of DNA fragments by electrophoresis and developing techniques that allow parallelisation (analysing several DNA fragments simultaneously) have been crucial improvements to this process. The new companies in this field, Roche and Illumina, adopted different protocols to achieve these goals. Illumina opted for sequencing by synthesis (SBS), which requires library preparation and the use of adapters. Illumina has since displaced Roche because of its lower misincorporation rate, which makes it ideal for studies of genetic variability, transcriptomics, epigenomics and metagenomics, the focus of this study. It is noteworthy, however, that the latest progress in sequencing comes from third-generation sequencing, which uses nanotechnology to design small sequencers that sequence the whole genome of an organism quickly and inexpensively. These systems also provide more reliable data than current ones because they sequence a single molecule, solving the synchronisation problem. In this way, PacBio and Nanopore enable great progress in diagnostics and personalised medicine. Metagenomics makes it possible to carry out a qualitative and quantitative analysis of the various species present in a sample. The main advantage of this technique is that isolation and growth of the species are not necessary, allowing the analysis of non-culturable species. The Illumina protocol studies the variable regions of the 16S rRNA gene, which contains both conserved and variable regions and thereby provides a phylogenetic classification. Metagenomics is therefore a topic of interest for characterising the biodiversity of complex ecosystems and for studying the microbiome of patients, given the strong association between certain microbial profiles and certain metabolic diseases.
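As a purely illustrative aside, once 16S amplicon reads have been assigned to taxa by a classifier, the quantitative side of a metagenomic profile reduces to counting those assignments per sample and reporting relative abundances. The sketch below uses invented taxon labels and is not tied to any particular pipeline.

from collections import Counter

# Hypothetical per-read taxon assignments produced by an external classifier.
read_assignments = [
    "Bacteroides", "Bacteroides", "Faecalibacterium",
    "Escherichia", "Bacteroides", "Faecalibacterium",
]

counts = Counter(read_assignments)
total = sum(counts.values())

# Report absolute counts and relative abundances per taxon.
for taxon, n in counts.most_common():
    print(f"{taxon}\t{n}\t{n / total:.1%}")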
Abstract:
Knowing a cell's transcriptome is a fundamental requisite for analysing its response to the environment. Microarrays have revolutionised this field, as they are able to yield an overview of gene expression under any environmental condition on a genome-wide scale. The technique consists in hybridising a previously labelled nucleic acid sample with a probe (which may be made up of cDNA, oligonucleotides or PCR products) anchored to a solid surface (made of glass, plastic, silicon...), producing a grid of dots which reveals, after image analysis, which genes are being expressed. Nevertheless, this can only be achieved if information on the species' genome has been generated. Different kinds of expression microarrays exist, depending on the nature of the probe and the method used for its synthesis. In this poster two of these are treated. Spotted microarrays, for which the probe is synthesised prior to its fixation to the array, allow the analysis of two targets simultaneously; they can be easily customised, but lack high reproducibility and sensitivity. Oligonucleotide microarrays are characterised by the direct printing of the probe on the array; in this case the probes invariably consist of oligonucleotides complementary to a small fraction of the gene they represent on the microarray. Their application is somewhat restricted, but this makes them more reproducible. Currently, transcriptome studies based on Next Generation Sequencing technologies offer a large volume of information in a short time and need less prior information on the target organism than microarrays do, but their high price limits their use. The versatility of microarrays, together with their reduced costs in comparison to other techniques, makes them an interesting resource in applications that may need less complexity.
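For a two-colour spotted array of the kind described above, the downstream analysis typically boils down to comparing the two channel intensities per spot. The minimal sketch below assumes Cy5 is the test channel and Cy3 the reference, with invented gene names and intensities; it is illustrative only.

import math

# Hypothetical per-spot intensities after image analysis: (test, reference).
spots = {
    "geneA": (5200.0, 1300.0),
    "geneB": (800.0, 790.0),
    "geneC": (150.0, 1200.0),
}

# The log2 ratio per gene summarises up- or down-regulation.
for gene, (test, ref) in spots.items():
    ratio = math.log2(test / ref)
    call = "up" if ratio > 1 else "down" if ratio < -1 else "unchanged"
    print(f"{gene}\tlog2 ratio = {ratio:+.2f}\t{call}")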
Abstract:
The quality and speed of genome sequencing have advanced as technological boundaries have been stretched. This advancement has so far been divided into three generations. The first-generation methods enabled sequencing of clonal DNA populations, the second generation massively increased throughput by parallelising many reactions, and the third-generation methods allow direct sequencing of single DNA molecules. The first techniques to sequence DNA were not developed until the mid-1970s, when two distinct sequencing methods appeared almost simultaneously, one by Alan Maxam and Walter Gilbert, and the other by Frederick Sanger. The first is a chemical method that cleaves DNA at specific points, and the second uses ddNTPs to terminate synthesis of a copy of the template DNA strand. Both methods generate fragments of varying lengths that are then separated by electrophoresis. Until the 1990s, DNA sequencing remained relatively expensive and was seen as a long process; the use of radiolabelled nucleotides compounded the problem through safety concerns and prevented automation. Advances within the first generation include the replacement of radioactive labels by fluorescently labelled ddNTPs and cycle sequencing with thermostable DNA polymerases, which allowed automation and signal amplification, making the process cheaper, safer and faster. Another method is pyrosequencing, which is based on the 'sequencing by synthesis' principle; it differs from Sanger sequencing in that it relies on the detection of pyrophosphate release upon nucleotide incorporation. By the end of the last millennium, parallelisation of this method launched Next Generation Sequencing (NGS), with 454 as the first of many platforms able to process multiple samples, referred to as second-generation sequencing; here electrophoresis was eliminated completely. One method that is sometimes used is SOLiD, based on sequencing by ligation of fluorescently dye-labelled di-base probes, which compete to ligate to the sequencing primer. Specificity of the di-base probe is achieved by interrogating every 1st and 2nd base in each ligation reaction. The widely used Solexa/Illumina method uses modified dNTPs containing so-called 'reversible terminators', which block further polymerisation; the terminator also carries a fluorescent label that can be detected by a camera. The step preceding the third generation came from Ion Torrent, which developed a sequencing-by-synthesis technique whose main feature is the detection of the hydrogen ions released during base incorporation. The third generation, in turn, exploits nanotechnology to process single DNA molecules, either in real-time synthesis sequencing systems such as PacBio or, finally, in nanopore sequencing (envisaged since 1995), which uses nanosensors built from bacterial channel proteins to conduct the sample past a sensor that detects each nucleotide residue in the DNA strand. The technological advances we have today have been so rapid that one wonders: how do we imagine the next generation?
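To illustrate the two-base interrogation mentioned for SOLiD, the sketch below uses the commonly described colour-space convention in which bases map to 2-bit codes (A=0, C=1, G=2, T=3) and each colour is the XOR of the two bases it spans, so decoding a colour read requires the known leading base from the primer. It is a didactic sketch, not vendor code.

BASE_TO_BITS = {"A": 0, "C": 1, "G": 2, "T": 3}
BITS_TO_BASE = {v: k for k, v in BASE_TO_BITS.items()}

def encode(seq):
    # Encode a DNA sequence into colours, one per overlapping base pair.
    return [BASE_TO_BITS[a] ^ BASE_TO_BITS[b] for a, b in zip(seq, seq[1:])]

def decode(first_base, colours):
    # Recover the sequence from the known leading base plus the colour calls.
    bases = [first_base]
    for colour in colours:
        bases.append(BITS_TO_BASE[BASE_TO_BITS[bases[-1]] ^ colour])
    return "".join(bases)

seq = "TACGGT"
colours = encode(seq)
print(colours)                      # [3, 1, 3, 0, 1]
print(decode("T", colours) == seq)  # True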