908 results for throughput
Abstract:
The subject of this thesis is multicolour bioluminescence analysis and how it can provide new tools for drug discovery and development. The mechanism of colour tuning in bioluminescent reactions is not yet fully understood, but it is the object of intense research and several hypotheses have been generated. In the past decade, key residues in the active site of the enzyme, or on the surface surrounding the active site, have been identified as responsible for different colour emission. However, since the bioluminescence reaction is strictly dependent on the interaction between the enzyme and its substrate D-luciferin, modification of the substrate can also lead to a different emission spectrum. In recent years firefly luciferase and other luciferases have undergone mutagenesis in order to obtain mutants with different emission characteristics. Thanks to these new discoveries in the bioluminescence field, multicolour luciferases can nowadays be employed in bioanalysis for assay development and imaging purposes. The use of multicolour bioluminescent enzymes has expanded the potential of a range of applications in vitro and in vivo: multiple analyses and more information can be obtained from the same analytical session, saving cost and time. This thesis focuses on several applications of multicolour bioluminescence for high-throughput screening and in vivo imaging. Multicolour luciferases can be employed as new tools for drug discovery and development, and examples are provided in the different chapters. New red codon-optimized luciferases have been demonstrated to be improved tools for bioluminescence imaging in small animals, and the possibility of combining red and green luciferases for BLI has been achieved, even if some aspects of the methodology remain challenging and need further improvement. In vivo bioluminescence imaging has progressed rapidly since its first application no more than 15 years ago and is becoming an indispensable tool in pharmacological research. At the same time, the development of more sensitive microscopes and low-light imagers for better visualization and quantification of multicolour signals would boost research and discovery in the life sciences in general, and in drug discovery and development in particular.
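Resolving the contributions of co-expressed red- and green-emitting luciferases from filtered intensity measurements reduces, in the simplest case, to solving a small linear system. The sketch below is a minimal illustration of such dual-colour spectral unmixing, assuming a hypothetical 2x2 matrix of filter transmission coefficients calibrated from single-reporter controls; it is not the specific procedure used in this thesis.

```python
import numpy as np

# Hypothetical unmixing matrix: rows = optical filters, columns = reporters.
# M[f, r] = fraction of reporter r's emission transmitted by filter f,
# as would be calibrated from single-reporter (green-only, red-only) controls.
M = np.array([
    [0.85, 0.10],   # "green" band-pass filter
    [0.15, 0.80],   # "red" long-pass filter
])

def unmix(filtered_counts):
    """Estimate the activities of the green and red luciferases
    from photon counts measured through the two filters."""
    counts = np.asarray(filtered_counts, dtype=float)
    activities, *_ = np.linalg.lstsq(M, counts, rcond=None)
    return {"green": activities[0], "red": activities[1]}

# Example: counts acquired through the green and red filters in one session.
print(unmix([5200.0, 3100.0]))
```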
Abstract:
Leber's hereditary optic neuropathy (LHON) and autosomal dominant optic atrophy (ADOA) are the two most common inherited optic neuropathies, and both are the result of mitochondrial dysfunction. Although the primary mutations causing these disorders are different, an mtDNA mutation in subunits of complex I in LHON and defects in the nuclear gene encoding the mitochondrial protein OPA1 in ADOA, both pathologies share some peculiar features, such as variable penetrance and tissue specificity of the pathological processes. Probably one of the most interesting and unclear aspects of LHON is its variable penetrance. This phenomenon is common in LHON families, most of which are homoplasmic mutant. Inter-family variability of penetrance may be caused by nuclear or mitochondrial 'secondary' genetic determinants or by other predisposing or triggering factors. We identified a compensatory mechanism in LHON patients able to distinguish affected individuals from unaffected mutation carriers: carrier individuals were more efficient than affected subjects in increasing mitochondrial biogenesis to compensate for the energetic defect. Thus, the activation of mitochondrial biogenesis may be a crucial factor in modulating penetrance, determining the fate of subjects harbouring LHON mutations. Furthermore, mtDNA content can be used as a molecular biomarker which, for the first time, clearly differentiates LHON affected from LHON carrier individuals, providing a valid mechanism that may be exploited for the development of therapeutic strategies. Although mitochondrial biogenesis gained a relevant role in LHON pathogenesis, we failed to identify a genetic modifying factor for the variable penetrance in a set of candidate genes involved in the regulation of this process. A more systematic high-throughput approach will be necessary to select the genetic variants responsible for the different efficiency in activating mitochondrial biogenesis. A genetic modifying factor was instead identified in the MnSOD gene. The SNP Ala16Val in this gene seems to modulate LHON penetrance, since the Ala allele at this position significantly predisposes carriers to be affected. Thus, we propose that high MnSOD activity in mitochondria of LHON subjects may produce an overload of H2O2 for the antioxidant machinery, leading to release of this radical from mitochondria and promoting severe cell damage and death. ADOA is due to mutations in the OPA1 gene in the large majority of cases. The causative nuclear defects in the remaining families with DOA have not been identified yet, but a small number of families have been mapped to other chromosomal loci (OPA3, OPA4, OPA5, OPA7, OPA8). Recently, a form of DOA with premature cataract (ADOAC) has been associated with pathogenic mutations of the OPA3 gene, encoding a mitochondrial protein. In the last year OPA3 has been investigated by two different groups, but a clear function for this protein and the pathogenic mechanism leading to ADOAC are still unclear. Our study on OPA3 provides new information about the pattern of expression of the two isoforms OPA3V1 and OPA3V2 and, moreover, suggests that OPA3 may have a different function in mitochondria from OPA1, the major site for ADOA mutations. In fact, based on our results, we propose that OPA3 is not involved in the mitochondrial fusion process but, on the contrary, may regulate mitochondrial fission.
Furthermore, in contrast to OPA1, we excluded a role for OPA3 in mtDNA maintenance, and we failed to identify a direct interaction between OPA3 and OPA1. Considering the results from overexpression and silencing of OPA3, we can conclude that overexpression has more drastic consequences on the cells than silencing, suggesting that OPA3 may cause optic atrophy via a gain-of-function mechanism. These data provide a new starting point for future investigations aimed at identifying the exact function of OPA3 and the pathogenic mechanism causing ADOAC.
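The abstract refers to mtDNA content as a biomarker separating affected individuals from carriers. A common way to express relative mtDNA content from real-time qPCR data is the 2^(-dCt) method, comparing a mitochondrial target to a single-copy nuclear reference; the sketch below illustrates that calculation with hypothetical gene choices and Ct values, and is not necessarily the quantification protocol used in this work.

```python
def relative_mtdna_content(ct_mito, ct_nuclear):
    """Relative mtDNA content per nuclear genome using the 2^-dCt method.

    ct_mito    -- qPCR threshold cycle for a mitochondrial amplicon (e.g. MT-ND1)
    ct_nuclear -- threshold cycle for a single-copy nuclear reference gene
    """
    delta_ct = ct_mito - ct_nuclear
    return 2.0 ** (-delta_ct)

# Hypothetical measurements: carriers show higher mtDNA content than affected subjects.
samples = {"carrier_1": (17.2, 24.9), "affected_1": (18.8, 24.7)}
for name, (ct_mt, ct_nuc) in samples.items():
    print(name, round(relative_mtdna_content(ct_mt, ct_nuc), 1))
```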
Abstract:
Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on the way nanoscale events are discovered, monitored and controlled. The key point of this thesis is to evaluate the impact of such an approach on applications of ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable and cheap sensors. There are numerous advantages to embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, the readout miniaturization allows the organization of sensors into arrays, increasing the capability of the platform in terms of the amount of acquired data, as required in the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic-based and electronic-based signals. The work presented in this thesis shows a first example of a complete parallel readout system with single ion channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the signal-to-noise ratio and bandwidth trade-off. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a demonstrative working example, the platform was used to acquire the ultra-small currents arising from single non-covalent molecular binding between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
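Binding of beta-cyclodextrin to an alpha-hemolysin pore transiently reduces the open-pore ionic current, so individual binding events can be picked out of the recorded trace with a simple threshold-crossing analysis. The sketch below is a minimal, idealized event detector run on a synthetic current trace; the current levels, threshold and sampling rate are illustrative assumptions, not the analysis pipeline of the thesis.

```python
import numpy as np

fs = 10_000.0                      # sampling rate [Hz], illustrative
rng = np.random.default_rng(0)

# Synthetic trace: ~100 pA open-pore current with two blockade events at ~60 pA.
trace = np.full(20_000, 100.0)
trace[4_000:4_800] = 60.0
trace[12_000:12_500] = 60.0
trace += rng.normal(0.0, 2.0, trace.size)   # measurement noise

threshold = 80.0                   # pA, halfway between open and blocked levels
blocked = trace < threshold

# Find rising/falling edges of the boolean "blocked" signal.
edges = np.flatnonzero(np.diff(blocked.astype(int)))
events = edges.reshape(-1, 2)      # pairs of (start, end) sample indices

for start, end in events:
    print(f"event: dwell time {(end - start) / fs * 1e3:.1f} ms")
```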
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups over an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference, but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
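A compressed storage scheme for triangular matrices typically keeps only the n(n+1)/2 non-trivial entries in a flat array, which roughly halves the memory footprint and can improve memory access patterns on a GPU. The sketch below shows a conventional row-major packed layout for a lower-triangular matrix; it illustrates the general idea only and is an assumption, not the specific GPU-friendly layout developed in the thesis.

```python
import numpy as np

def packed_index(i, j):
    """Flat index of element (i, j), j <= i, of a lower-triangular matrix
    stored row by row in a 1-D array of length n*(n+1)//2."""
    assert j <= i, "only the lower triangle is stored"
    return i * (i + 1) // 2 + j

def pack_lower(A):
    """Store the lower triangle of a square matrix in packed form."""
    n = A.shape[0]
    packed = np.empty(n * (n + 1) // 2, dtype=A.dtype)
    for i in range(n):
        for j in range(i + 1):
            packed[packed_index(i, j)] = A[i, j]
    return packed

L = np.tril(np.arange(1.0, 17.0).reshape(4, 4))   # example lower-triangular matrix
p = pack_lower(L)
assert p[packed_index(3, 1)] == L[3, 1]
print(p)       # 10 stored entries instead of 16
```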
Abstract:
Animal neocentromeres are defined as ectopic centromeres that have formed in non-centromeric locations and lack some of the features, such as satellite DNA sequences, that normally characterize canonical centromeres. Despite this, they are stable, functional centromeres inherited through generations. The very existence of neocentromeres provides convincing evidence that centromere specification is determined by epigenetic rather than sequence-specific mechanisms. For all these reasons, we used them as simplified models to investigate the molecular mechanisms that underlie the formation and maintenance of functional centromeres. We collected human cell lines carrying neocentromeres in different positions. To investigate the regions involved in the process at the DNA sequence level, we applied a recent technology that integrates chromatin immunoprecipitation and DNA microarrays (ChIP-on-chip), using rabbit polyclonal antibodies directed against the human centromeric proteins CENP-A and CENP-C. These DNA-binding proteins are required for kinetochore function and are exclusively targeted to functional centromeres; thus, immunoprecipitation of the DNA bound by these proteins allows the isolation of centromeric sequences, including those of neocentromeres. Neocentromeres arise even in protein-coding gene regions. We further analyzed whether the increased number of scaffold attachment sites, and the correspondingly tighter chromatin, of the region involved in the neocentromerization process remained permissive to transcription of the genes encoded within it. Centromere repositioning is a phenomenon in which a neocentromere that has arisen without altering the gene order, followed by inactivation of the canonical centromere, becomes fixed in the population. It is a process of chromosome rearrangement fundamental in evolution and at the basis of speciation. The repeat-free region where the neocentromere initially forms progressively acquires extended arrays of satellite tandem repeats that may contribute to its functional stability. In this view, our attention focused on the repositioned horse ECA11 centromere. ChIP-on-chip analysis was used to define the region involved, and SNP studies mapping within the region involved in neocentromerization were carried out. We were able to describe the structural polymorphism of the chromosome 11 centromeric domain in the Equus caballus population; this polymorphism was observed even between homologous chromosomes of the same cells, a finding never described before. Genomic plasticity has had a fundamental role in evolution, and centromeres are not static, packaged regions of genomes. The key question that fascinates biologists is how this centromere plasticity can be reconciled with the stability and maintenance of centromeric function. Starting from the epigenetic point of view that underlies centromere formation, we decided to analyze the RNA content of centromeric chromatin. RNA, as well as secondary chemical modifications involving both histones and DNA, represents a good candidate to guide, in some way, centromere formation and maintenance. Many observations suggest that transcription of centromeric DNA or of other non-coding RNAs could affect centromere formation. To date there has been no thorough investigation addressing the identity of chromatin-associated RNAs (CARs) on a global scale. This prompted us to develop techniques to identify CARs in a genome-wide approach using high-throughput genomic platforms.
The future goal of this study will be to focus attention on what happens, specifically, inside centromeric chromatin.
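In a ChIP-on-chip experiment, centromeric regions show up as stretches of array probes whose immunoprecipitated (IP) signal is enriched over the input DNA. A minimal way to flag such stretches is to compute per-probe log2(IP/input) ratios and smooth them over a sliding window of neighbouring probes; the sketch below illustrates this with synthetic probe intensities and an arbitrary threshold, and is not the specific peak-calling procedure used in this study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-probe intensities along a tiled region: probes 40-60 are
# enriched in the CENP-A/CENP-C immunoprecipitated sample.
n_probes = 100
input_dna = rng.lognormal(mean=7.0, sigma=0.2, size=n_probes)
ip = input_dna * rng.lognormal(mean=0.0, sigma=0.2, size=n_probes)
ip[40:60] *= 4.0                       # simulated centromeric enrichment

log_ratio = np.log2(ip / input_dna)

# Smooth with a sliding window of 5 probes and call enriched runs.
window = 5
smoothed = np.convolve(log_ratio, np.ones(window) / window, mode="same")
enriched = np.flatnonzero(smoothed > 1.0)   # arbitrary log2-ratio cutoff

print("enriched probes:", enriched.min(), "-", enriched.max())
```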
Abstract:
In the last decade, the reverse vaccinology approach shifted the paradigm of vaccine discovery from conventional culture-based methods to high-throughput genome-based approaches for the development of recombinant protein-based vaccines against pathogenic bacteria. Besides reaching its main goal of identifying new vaccine candidates, this new procedure also produced a huge amount of molecular knowledge related to them. In the present work, we explored this knowledge in a species-independent way and performed a systematic in silico molecular analysis of more than 100 protective antigens, looking at their sequence similarity, domain composition and protein architecture in order to identify possible common molecular features. This meta-analysis revealed that, despite low sequence similarity, most of the known bacterial protective antigens share structural/functional Pfam domains as well as specific protein architectures. Based on this, we formulated the hypothesis that the occurrence of these molecular signatures can be predictive of the protective properties of other proteins in different bacterial species. We tested this hypothesis in Streptococcus agalactiae and identified four new protective antigens. Moreover, in order to provide a second proof of concept for our approach, we used Staphylococcus aureus as a second pathogen and identified five new protective antigens. This new knowledge-driven selection process, named MetaVaccinology, represents the first in silico vaccine discovery tool based on conserved and predictive molecular and structural features of bacterial protective antigens, and not dependent upon the prediction of their sub-cellular localization.
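The selection principle described here amounts to ranking a pathogen's proteins by how well their annotated Pfam domains and domain architectures match signatures recurrent among known protective antigens. The sketch below is a toy illustration of that matching step, with entirely hypothetical protein identifiers and domain accessions; it is not the MetaVaccinology pipeline itself.

```python
# Hypothetical signatures: Pfam domains recurrently found in known protective antigens.
PROTECTIVE_DOMAINS = {"PF00746", "PF01456", "PF05738"}          # illustrative accessions
PROTECTIVE_ARCHITECTURES = {("PF00746", "PF01456")}              # illustrative domain orders

# Hypothetical candidate proteins of a target pathogen with their Pfam annotations.
candidates = {
    "protein_A": ("PF00746", "PF01456"),
    "protein_B": ("PF07554",),
    "protein_C": ("PF05738", "PF00089"),
}

def score(domains):
    """Count shared signature domains and add a bonus for a full architecture match."""
    shared = len(set(domains) & PROTECTIVE_DOMAINS)
    bonus = 2 if tuple(domains) in PROTECTIVE_ARCHITECTURES else 0
    return shared + bonus

ranked = sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)
for name, domains in ranked:
    print(name, domains, "score =", score(domains))
```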
Abstract:
Crystallization of arbutin synthase and strictosidine glucosidase, two enzymes of the secondary glycoside metabolism of Rauvolfia serpentina. The present work deals with the crystallization and structural analysis of arbutin synthase (AS) and strictosidine glucosidase (SG), both enzymes from the medicinal plant Rauvolfia serpentina. For the crystallization of arbutin synthase, about 2500 different conditions were tested experimentally. For some of these experiments the enzyme was modified by molecular-biological and chemical means; nevertheless, no crystals could be obtained. The results obtained from these modifications were analyzed by comparison with the structures of other glycosyltransferases of the same family. During purification of AS, a homogeneous solution could never be produced with any of the separation systems used. The likely reason for this poor isolability, and thus for the difficult crystallization, lies in the unusually high number of cysteines in the protein sequence. Three cysteines, Cys171, Cys253 and Cys461, were found which, according to a structural comparison, lie on the protein surface and may form, through cross-linking with other protein molecules, a heterogeneous mixture that cannot crystallize in an ordered manner. Targeted mutation of these three amino acids could make crystallization possible in the future. For SG, conditions were already known under which enzyme crystals (needles) unsuitable for diffraction measurements grew. In wide-ranging experiments, however, these crystals could not be induced to grow in three dimensions. New crystallization conditions were found by HTS screening, and subsequently the native structure and the strictosidine/enzyme complex could be measured and solved. SG belongs to family 1 of the glucosidases (GH-1) and possesses the (beta/alpha)8-barrel fold conserved in this family. Substrate binding was examined by comparison with 16 known glycosidases of the GH-1 family. The sugar binding conserved within the family was found, but large differences in aglycone binding were discovered. Conditions for the conformational change of Trp388 were identified; this conformational change directs the aglycone part of the substrate towards different sides of the substrate-binding pocket and thus divides the GH-1 family into two groups.
Abstract:
The term cloud originates from the telecommunications world, when providers began to offer services based on virtual private networks (VPNs) for data communication. Cloud computing deals with computation, software, data access and storage services in such a way that the end user has no idea of the physical location of the data or of the configuration of the system on which they reside. Cloud computing is a recent trend in IT that moves computation and data away from desktops and laptops into large data centres. The NIST definition states that cloud computing is a model enabling on-demand network access to a shared pool of computational resources that can be rapidly provisioned and released with minimal management effort or interaction with the service provider. With the large-scale proliferation of the Internet around the world, applications can now be delivered as services over the Internet; as a result, the overall costs of these services are reduced. The main objective of cloud computing is to make better use of distributed resources, combining them to achieve higher throughput and to solve large-scale computational problems. Companies that rely on cloud services save on infrastructure costs and on the maintenance of computational resources, since they transfer this burden to the provider; in this way they can concentrate exclusively on their core business. As cloud computing becomes more popular, concerns are being raised about the security issues introduced by this new model. The characteristics of this new deployment model differ widely from those of traditional architectures, and traditional security mechanisms turn out to be inefficient or useless. Cloud computing offers many benefits, but it is also more vulnerable to threats: there are many challenges and risks in cloud computing that increase the threat of data compromise. These concerns make companies reluctant to adopt cloud computing solutions, slowing down its diffusion. In recent years much effort has gone into research on the security of cloud environments, on threat classification and on risk analysis; unfortunately the problems of the cloud lie at various levels and no single solution exists. After a brief introduction to cloud computing in general, the aim of this work is to provide an overview of the main vulnerabilities of the cloud model according to its characteristics, and then to carry out a risk analysis from the customer's point of view regarding the use of the cloud. In this way, by weighing risks and opportunities, a customer can decide whether to adopt a cloud solution. Finally, a framework is presented that aims to address one specific problem, namely malicious traffic on the cloud network. The work is structured as follows: the first chapter gives an overview of cloud computing, highlighting its characteristics, architecture, service models, deployment models and open issues. The second chapter gives an introduction to security in the IT domain and then moves on specifically to security in the cloud computing model.
The vulnerabilities arising from the technologies and characteristics that make up the cloud are considered, followed by a risk analysis. The risks are of different kinds, from purely technological risks to those arising from legal or administrative issues, up to risks that are not specific to the cloud but nonetheless concern it. For each risk, the assets affected in case of attack are listed and a risk level is assigned, ranging from low to very high. Each risk must be weighed against the opportunities offered by the aspect from which that risk arises. The last chapter presents a framework for protecting the internal network of the cloud, installing an Intrusion Detection System with pattern recognition and anomaly detection.
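A customer-side risk analysis of the kind described here is often summarized with a simple risk matrix, where each risk's level is derived from an estimated likelihood and impact. The sketch below is a generic illustration of such scoring with made-up cloud-related entries; it does not reproduce the actual risk catalogue or levels used in this work.

```python
# Generic risk-matrix scoring: level = f(likelihood, impact), both on a 1-5 scale.
def risk_level(likelihood, impact):
    """Map a likelihood x impact product onto a qualitative risk level."""
    score = likelihood * impact           # 1 .. 25
    if score <= 4:
        return "low"
    if score <= 9:
        return "medium"
    if score <= 16:
        return "high"
    return "very high"

# Illustrative (made-up) entries: (risk, likelihood, impact, affected assets).
catalogue = [
    ("data breach in multi-tenant storage", 3, 5, ["customer data", "reputation"]),
    ("provider lock-in", 4, 2, ["service continuity"]),
    ("malicious insider at the provider", 2, 4, ["customer data"]),
]

for name, likelihood, impact, assets in catalogue:
    print(f"{name}: {risk_level(likelihood, impact)} (assets: {', '.join(assets)})")
```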
Abstract:
The improvement of devices enabled by nanotechnology has put forward new classes of sensors, called bio-nanosensors, which are very promising for the detection of biochemical molecules in a large variety of applications. Their use in lab-on-a-chip systems could give rise to new opportunities in many fields, from health care and bio-warfare to environmental monitoring and high-throughput screening for the pharmaceutical industry. Bio-nanosensors have great advantages in terms of cost, performance, and parallelization: they require very low quantities of reagents and improve the overall signal-to-noise ratio, thanks to the larger binding-signal variation per unit area and the reduction of stray capacitances. At the same time, they give rise to new challenges, such as the need to design high-performance, low-noise integrated electronic interfaces. This thesis concerns the design of high-performance advanced CMOS interfaces for electrochemical bio-nanosensors. The main focus of the thesis is: 1) a critical analysis of noise in sensing interfaces, 2) new techniques for noise reduction in discrete-time approaches, and 3) new architectures for low-noise, low-power sensing interfaces. The manuscript reports a multi-project activity focusing on low-noise design and presents two integrated circuits (ICs) developed as examples of advanced CMOS interfaces for bio-nanosensors. The first project concerns a low-noise current-sensing interface for DC and transient measurements of electrophysiological signals. The focus of this research activity is the noise optimization of the electronic interface; a new noise reduction technique has been developed so as to realize an integrated CMOS interface with performance comparable to state-of-the-art instrumentation. The second project aims to realize a stand-alone, high-accuracy electrochemical impedance spectroscopy interface. The system is tailored for conductivity-temperature-depth sensors in environmental applications, as well as for bio-nanosensors. It is based on a band-pass delta-sigma technique and combines low-noise performance with low-power requirements.
Abstract:
372 osteochondrodysplasias and genetically determined dysostoses were reported in 2007 [Superti-Furga and Unger, 2007]. For 215 of these conditions an association with one or more genes can be stated, while the molecular changes underlying the remaining syndromes remain elusive to date. The present dissertation therefore aims at the identification of novel genes involved in processes concerning cartilage/bone formation, growth, differentiation and homeostasis, which may serve as candidate genes for the above-mentioned conditions. Two different approaches were undertaken. Firstly, a high-throughput EST sequencing project from a human fetal cartilage library was performed to identify novel genes in early skeletal development (20th week of gestation until the 2nd year of life) that could be investigated as potential candidate genes. 5000 EST sequences were generated and analyzed, representing 1573 individual transcripts corresponding to known genes (1400) and to novel, as yet uncharacterized genes (173). About 7% of the proteins had already been described in cartilage/bone development or homeostasis, showing that the generated library is tissue specific. The expression profile of this library was compared to previously published libraries from different time points (8th-12th week, 18th-20th week and adult human cartilage), which showed a similar distribution, reflecting the quality of the library presented here. Furthermore, three potential candidate genes (LRRC59, CRELD2, ZNF577) were investigated in more detail and their potential involvement in skeletogenesis was discussed. Secondly, a disease-oriented approach was undertaken to identify downstream targets of LMX1B, the gene causing Nail-Patella syndrome (NPS), and to investigate similar conditions. Like NPS, Genitopatellar syndrome (GPS) is characterized by aplasia or hypoplasia of the patella and by renal anomalies. Therefore, six GPS patients were enrolled in a study to investigate the molecular changes responsible for this relatively rare disease. A 3.07 Mb deletion including LMX1B and NR5A1 (SF1) was found in one female patient who showed features of both NPS and GPS; investigations revealed a 46,XY karyotype and ovotestes, indicating true hermaphroditism. The microdeletion was not seen in any of the five other patients with GPS features only, but a potential regulatory element between the two genes cannot be ruled out yet. Since Lmx1b is expressed in the dorsal limb bud and in podocytes, proteomic approaches and expression profiling were performed with murine material from the limbs and the kidneys to identify its downstream targets. After 2D-gel electrophoresis with protein extracts from E13.5 forelimb buds and newborn kidneys of Lmx1b wild-type and knock-out mice, followed by mass spectrometry analysis, only two proteins, agrin and carbonic anhydrase 2, remained of interest, but further analysis of the two genes did not show transcriptional downregulation by Lmx1b. The focus was then switched to expression profiles, and RNA from newborn Lmx1b wild-type and knock-out kidneys was compared by microarray analysis. Potential Lmx1b targets were almost impossible to study because of the early death of Lmx1b-deficient mice, when the glomeruli, which contain the podocytes, are still immature.
Because Lmx1b is also expressed during limb development, RNA from wild-type and knock-out Lmx1b E11.5 forelimb buds was investigated by microarray, revealing four potential Lmx1b downstream targets: neuropilin 2, single-stranded DNA binding protein 2, peroxisome proliferative activated receptor gamma co-activator 1 alpha, and short stature homeobox 2. Whole-mount in situ hybridization strengthened the case for a potential downregulation of neuropilin 2 by Lmx1b, but further investigations, including in situ hybridization and protein-protein interaction studies, will be needed.
Abstract:
The research presented in my PhD thesis is part of a wider European project, FishPopTrace, focused on the traceability of fish populations and products. My work was aimed at developing and analyzing novel genetic tools for a widely distributed marine fish species, the European hake (Merluccius merluccius), in order to investigate population genetic structure and explore potential applications to traceability scenarios. A total of 395 SNPs (Single Nucleotide Polymorphisms) were discovered from a massive collection of Expressed Sequence Tags obtained by high-throughput sequencing, and validated on 19 geographic samples from the Atlantic and the Mediterranean. Genome-scan approaches were applied to identify polymorphisms in genes potentially under divergent selection (outlier SNPs), showing higher genetic differentiation among populations with respect to the average observed across loci. Comparative analyses of population structure were carried out on putatively neutral and outlier loci at wide (Atlantic and Mediterranean samples) and regional (samples within each basin) spatial scales, to disentangle the effects of demographic and adaptive evolutionary forces on the genetic structure of European hake populations. The results demonstrated the potential of outlier loci to unveil fine-scale genetic structure, possibly identifying locally adapted populations, despite the weak signal shown by putatively neutral SNPs. The application of outlier SNPs within the framework of fishery resources management was also explored. A minimum panel of SNP markers showing maximum discriminatory power was selected and applied to a traceability scenario aimed at identifying the basin (and hence the stock) of origin, Atlantic or Mediterranean, of individual fish. This case study illustrates how molecular analytical technologies have operational potential in real-world contexts and, more specifically, the potential to support fisheries control and enforcement and fish and fish-product traceability.
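Outlier detection in a genome scan rests on per-locus measures of differentiation such as FST: loci whose FST greatly exceeds the typical genome-wide value are candidates for divergent selection. The sketch below computes a basic Wright-style FST per locus from allele frequencies in two population samples (e.g. Atlantic vs. Mediterranean); it is a simplified illustration with made-up frequencies, not the outlier-test method used in the thesis.

```python
import statistics

def fst_biallelic(p1, p2):
    """Wright's FST for one biallelic locus from allele frequencies in two
    equally weighted population samples: FST = (HT - HS) / HT."""
    p_bar = (p1 + p2) / 2.0
    h_t = 2.0 * p_bar * (1.0 - p_bar)                    # expected heterozygosity, pooled
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0  # mean within-population heterozygosity
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

# Made-up allele frequencies for a few SNPs in Atlantic vs. Mediterranean samples.
loci = {"snp_001": (0.52, 0.48), "snp_002": (0.50, 0.55), "snp_003": (0.15, 0.80)}

fst = {name: fst_biallelic(p_atl, p_med) for name, (p_atl, p_med) in loci.items()}
median_fst = statistics.median(fst.values())
for name, value in fst.items():
    flag = "<- outlier candidate" if value > 10 * median_fst else ""
    print(f"{name}: FST = {value:.3f} {flag}")
```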
Abstract:
A novel design based on electric-field-free open microwell arrays for the automated continuous-flow sorting of single cells or small clusters of cells is presented. The main feature of the proposed device is the parallel analysis of cell-cell and cell-particle interactions in each microwell of the array. High-throughput sample recovery, with fast and separate transfer from the microsites to standard microtiter plates, is also possible thanks to the flexible printed circuit board technology, which makes it possible to produce cost-effective, large-area arrays with geometries compatible with laboratory equipment. Particle isolation is performed via negative dielectrophoretic forces, which convey the particles into the microwells. Particles such as cells and beads flow in electrically active microchannels on whose substrate the electrodes are patterned. The introduction of particles into the microwells is performed automatically by generating the required feedback signal through a microscope-based optical counting and detection routine. In order to isolate a controlled number of particles, we created two particular configurations of the electric field within the structure: the first permits their isolation, whereas the second creates a net force that repels the particles from the microwell entrance. To increase the parallelism with which the cell-isolation function is implemented, a new technique based on coplanar electrodes to detect particle presence was developed. A lock-in amplification scheme was used to monitor the impedance of the channel as it is perturbed by particles flowing in high-conductivity suspension media. The impedance measurement module was also combined with the dielectrophoretic focusing stage situated upstream of the measurement stage, to limit the dispersion of the measured signal amplitude due to variations of the particles' position within the microchannel. In conclusion, the designed system complies with the initial specifications, making it suitable for cellomics and biotechnology applications.
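Lock-in detection recovers the small impedance perturbation caused by a passing particle by multiplying the measured signal with in-phase and quadrature references at the excitation frequency and low-pass filtering the products. The sketch below is a minimal digital lock-in demodulation over a synthetic signal; the excitation frequency, modulation depth and noise level are illustrative assumptions, not parameters of the device described here.

```python
import numpy as np

fs = 1_000_000.0            # sampling rate [Hz], illustrative
f_ref = 100_000.0           # excitation frequency [Hz], illustrative
t = np.arange(0, 0.01, 1.0 / fs)

rng = np.random.default_rng(2)
# Synthetic channel signal: carrier whose amplitude dips by 2% while a
# particle transits the coplanar electrodes (between 4 ms and 6 ms).
amplitude = np.where((t > 0.004) & (t < 0.006), 0.98, 1.00)
signal = amplitude * np.sin(2 * np.pi * f_ref * t) + rng.normal(0, 0.05, t.size)

# Lock-in demodulation: mix with quadrature references, then low-pass by block averaging.
i_mix = signal * np.sin(2 * np.pi * f_ref * t)
q_mix = signal * np.cos(2 * np.pi * f_ref * t)

block = 1000                               # 1 ms averaging window
i_avg = i_mix[: t.size // block * block].reshape(-1, block).mean(axis=1)
q_avg = q_mix[: t.size // block * block].reshape(-1, block).mean(axis=1)
envelope = 2.0 * np.hypot(i_avg, q_avg)    # demodulated amplitude per 1 ms block

print(np.round(envelope, 3))               # dip visible in the 4-6 ms blocks
```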
Abstract:
The rise of containerized transport over the last few decades has brought about a profound revolution in international maritime transport. Cargo unitization and the technological innovation of the means used for transport and handling now make it possible to manage huge traffic volumes quickly and at relatively low cost. The use of standard units has also made possible the development of intermodal transport and the creation of complex logistics chains. This thesis analyzes the problems related to the management of the operations carried out inside container terminals, the fundamental nodes of intermodal transport. In particular, the case of the new container terminal of the Port of Ravenna was studied. Since the terminal is still in the design phase, methodologies were applied that allow a preliminary assessment of the potential of the new terminal. First, the potential throughput of the terminal was determined, as a function of the storage areas and of the operational capacity of the quay, together with the average number of handling vehicles needed to move that annual traffic volume. Then, specific analytical models were applied to evaluate the performance of the terminal equipment. Finally, the results obtained were used to study the interactions among the terminal's main sub-systems by means of queueing theory, in order to assess their level of service and identify possible bottlenecks.
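The two quantities mentioned in the abstract, the quay-side throughput potential and the queueing behaviour of the terminal's sub-systems, can both be sketched with elementary formulas: throughput as cranes x productivity x working hours, and waiting at a service station with an M/M/c model (Erlang C). The numbers below are purely illustrative and are not the Ravenna terminal's design figures.

```python
import math

# --- Quay throughput potential (illustrative figures) -----------------------
cranes = 4                      # quay cranes
moves_per_hour = 25             # net crane productivity [moves/h]
working_hours = 6000            # quay working hours per year
teu_per_move = 1.6              # average TEU per move
annual_throughput_teu = cranes * moves_per_hour * working_hours * teu_per_move
print(f"potential throughput: {annual_throughput_teu:,.0f} TEU/year")

# --- M/M/c queue for one sub-system, e.g. trucks served by yard cranes ------
def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving customer has to wait (Erlang C formula)."""
    a = arrival_rate / service_rate                      # offered load
    rho = a / servers                                    # utilization, must be < 1
    summation = sum(a**k / math.factorial(k) for k in range(servers))
    top = a**servers / (math.factorial(servers) * (1 - rho))
    return top / (summation + top)

lam, mu, c = 18.0, 5.0, 4        # trucks/h arriving, trucks/h per crane, cranes
p_wait = erlang_c(lam, mu, c)
wq = p_wait / (c * mu - lam)     # mean waiting time in queue [h]
print(f"P(wait) = {p_wait:.2f}, mean wait = {wq * 60:.1f} min")
```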
Abstract:
Beamforming entails the joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique which allows the nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload means burdening the link between the satellite and the gateway with an extensive amount of signaling, and possibly calling for much more expensive multiple-gateway infrastructures. This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and the space segment.
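As a point of reference for the phase-alignment problem, a widely cited baseline is the one-bit-feedback random search, in which each node perturbs its phase slightly, keeps the perturbation if the receiver reports an improved signal strength, and discards it otherwise. The sketch below simulates that baseline; it illustrates the problem the thesis addresses, not the new, more energy-efficient algorithms it proposes.

```python
import numpy as np

rng = np.random.default_rng(3)
n_nodes = 20
channel_phase = rng.uniform(0, 2 * np.pi, n_nodes)   # unknown per-node channel phases
tx_phase = np.zeros(n_nodes)                          # phases the nodes control

def rss(tx):
    """Received signal strength: magnitude of the coherent sum at the receiver."""
    return np.abs(np.sum(np.exp(1j * (tx + channel_phase))))

best = rss(tx_phase)
for _ in range(2000):
    # Each node applies a small random perturbation to its transmit phase.
    trial = tx_phase + rng.uniform(-0.1, 0.1, n_nodes)
    trial_rss = rss(trial)
    # The receiver feeds back a single bit: improved or not.
    if trial_rss > best:
        tx_phase, best = trial, trial_rss

print(f"coherent gain achieved: {best:.1f} out of {n_nodes} (ideal)")
```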
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns time and resource assignment to a set of activities, to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications, specified as SDFGs, onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
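In a modular (cyclic) schedule with period lambda, each activity has a start time s in [0, lambda) plus an iteration index, and a precedence i -> j with duration d_i is satisfied whenever s_j + k*lambda >= s_i + d_i for the chosen iteration distance k. The sketch below is a minimal checker of this condition, computing the smallest iteration distance that satisfies a precedence; it only illustrates the arithmetic behind a modular precedence constraint, not the filtering algorithms proposed in the thesis.

```python
import math

def min_iteration_distance(s_i, d_i, s_j, period):
    """Smallest non-negative integer k such that s_j + k*period >= s_i + d_i,
    i.e. activity j (k iterations later) starts only after i has finished."""
    if period <= 0:
        raise ValueError("period must be positive")
    return max(0, math.ceil((s_i + d_i - s_j) / period))

def check_precedence(s_i, d_i, s_j, period, k):
    """True if the modular precedence i -> j holds with iteration distance k."""
    return s_j + k * period >= s_i + d_i

# Example: with period 10, activity i starts at 7 and lasts 5, activity j starts at 3.
period, s_i, d_i, s_j = 10, 7, 5, 3
k = min_iteration_distance(s_i, d_i, s_j, period)   # -> 1: j must run one iteration later
print(k, check_precedence(s_i, d_i, s_j, period, k))
```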