947 results for Bio-inspired optimization techniques
Abstract:
We report the results of a study into the factors controlling the quality of nanolithographic imaging. Self-assembled monolayer (SAM) coverage, subsequent postetch pattern definition, and minimum feature size all depend on the quality of the Au substrate used in material mask atomic nanolithographic experiments. We find that sputtered Au substrates yield much smoother surfaces and a higher density of {111}-oriented grains than evaporated Au surfaces. Phase imaging with an atomic force microscope shows that the quality and percentage coverage of SAM adsorption are much greater for sputtered Au surfaces. Exposure of the self-assembled monolayer to an optically cooled atomic Cs beam traversing a two-dimensional array of submicron material masks mounted a few microns above the self-assembled monolayer surface allowed determination of the minimum average Cs dose (2 Cs atoms per self-assembled monolayer molecule) to write the monolayer. Suitable wet etching, with etch rates of 2.2 nm min⁻¹, results in optimized pattern definition. Utilizing these optimizations, material mask features as small as 230 nm in diameter with a fractional depth gradient of 0.820 nm were realized.
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities and, as a result of this fact coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for assessing modern CT scanners that have implemented the aforementioned dose reduction technologies.
Thus, the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection, FBP, vs. Advanced Modeled Iterative Reconstruction, ADMIRE). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
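To make the observer-model comparison concrete, the following minimal numpy sketch shows how a contrast-to-noise ratio and a non-prewhitening matched-filter detectability index could be computed from ROI ensembles; the function names and array layout are illustrative assumptions, not the dissertation's actual code.

    import numpy as np

    def cnr(signal_roi, background_roi):
        """Contrast-to-noise ratio: mean signal minus mean background over background noise."""
        contrast = signal_roi.mean() - background_roi.mean()
        return contrast / background_roi.std()

    def npw_dprime(mean_signal_roi, mean_background_roi, noise_roi_stack):
        """Non-prewhitening matched-filter detectability index d'.

        mean_*_roi: ensemble-averaged ROIs with/without the signal (2-D arrays).
        noise_roi_stack: stack (n, ny, nx) of signal-absent ROIs used to estimate
        the variance of the template response to noise.
        """
        template = mean_signal_roi - mean_background_roi            # expected signal
        signal_term = np.sum(template * template)                   # template energy
        # response of the template to each noise-only realization
        responses = np.tensordot(noise_roi_stack, template, axes=([1, 2], [0, 1]))
        return signal_term / np.sqrt(responses.var())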
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
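A minimal sketch of the image-subtraction idea, assuming two repeated scans of the same phantom are available as numpy arrays (the function and variable names are hypothetical):

    import numpy as np

    def quantum_noise(scan_a, scan_b, roi):
        """Quantum noise estimated from two repeated scans of the same phantom.

        roi is a boolean mask selecting the region of interest. Subtracting the
        two scans removes the (identical) phantom background so that only
        stochastic noise remains; the sqrt(2) accounts for the doubled variance
        of the difference image.
        """
        diff = (scan_a.astype(float) - scan_b.astype(float)) / np.sqrt(2.0)
        return diff[roi].std()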
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., the NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing image quality of iterative algorithms.
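For reference, the standard ensemble NPS estimate from repeated scans can be sketched as below; handling irregularly shaped ROIs (the novel contribution described above) is beyond this simplified rectangular-ROI version, and the shapes and units are assumptions:

    import numpy as np

    def nps_2d(noise_rois, pixel_size_mm):
        """Ensemble 2-D noise power spectrum from a stack of noise ROIs.

        noise_rois: array (n_realizations, ny, nx), e.g. each repeated scan minus
        the ensemble-mean image. Returns the NPS in HU^2 * mm^2, fftshifted so
        the zero frequency sits at the center.
        """
        n, ny, nx = noise_rois.shape
        rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)  # detrend each ROI
        dft = np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))
        return (pixel_size_mm ** 2 / (nx * ny)) * np.mean(np.abs(dft) ** 2, axis=0)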
To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
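As a hedged illustration of what such an analytical lesion model might look like, the sketch below gives a 2-D, radially symmetric simplification with a smooth edge profile; the parameter names are hypothetical and this is not the dissertation's actual model:

    import numpy as np

    def lesion_profile(shape, center, radius_px, contrast_hu, edge_width_px=2.0):
        """Radially symmetric lesion model with a sigmoidal edge profile.

        Returns an additive contrast map (HU) that could be added to a patient
        image to create a 'hybrid' image with exactly known ground truth.
        """
        yy, xx = np.indices(shape)
        r = np.hypot(yy - center[0], xx - center[1])
        # logistic fall-off around the nominal radius controls edge sharpness
        return contrast_hu / (1.0 + np.exp((r - radius_px) / edge_width_px))

    # hypothetical usage: add a subtle -15 HU lesion to a CT slice
    # hybrid = ct_slice + lesion_profile(ct_slice.shape, (256, 300),
    #                                    radius_px=12, contrast_hu=-15.0)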
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
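For readers relating the observer-model detectability index to the 2AFC experiment, the standard conversion is PC = Φ(d'/√2); a one-line sketch (offered as background, not as the dissertation's analysis code):

    from math import erf

    def two_afc_percent_correct(d_prime):
        """Expected 2AFC proportion correct: PC = Phi(d'/sqrt(2)) = 0.5*(1 + erf(d'/2))."""
        return 0.5 * (1.0 + erf(d_prime / 2.0))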
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Cancer comprises a collection of diseases, all of which begin with abnormal tissue growth from various stimuli, including (but not limited to) heredity, genetic mutation, exposure to harmful substances, radiation, as well as poor diet and lack of exercise. The early detection of cancer is vital to providing life-saving, therapeutic intervention. However, current methods for detection (e.g., tissue biopsy, endoscopy and medical imaging) often suffer from low patient compliance and an elevated risk of complications in elderly patients. As such, many are looking to “liquid biopsies” for clues into the presence and status of cancer, owing to their minimal invasiveness and ability to provide rich information about the native tumor. In such liquid biopsies, peripheral blood is drawn from patients and screened for key biomarkers, chiefly circulating tumor cells (CTCs). Capturing, enumerating and analyzing the genetic and metabolomic characteristics of these CTCs may hold the key to guiding doctors to better understand the source of cancer at an earlier stage for more efficacious disease management.
The isolation of CTCs from whole blood, however, remains a significant challenge due to their (i) low abundance, (ii) lack of a universal surface marker and (iii) epithelial-mesenchymal transition that down-regulates common surface markers (e.g., EpCAM), reducing their likelihood of detection via positive selection assays. These factors underscore the need for an improved cell isolation strategy that can collect CTCs via both positive and negative selection modalities, so as to avoid reliance on a single marker, or set of markers, for more accurate enumeration and diagnosis.
The technologies proposed herein offer a unique set of strategies to focus, sort and template cells in three independent microfluidic modules. The first module exploits ultrasonic standing waves and a class of elastomeric particles for the rapid and discriminate sequestration of cells. This type of cell handling holds promise not only in sorting, but also in the isolation of soluble markers from biofluids. The second module contains components to focus (i.e., arrange) cells via forces from acoustic standing waves and to separate cells in a high-throughput fashion via free-flow magnetophoresis. The third module uses a printed array of micromagnets to capture magnetically labeled cells into well-defined compartments, enabling on-chip staining and single-cell analysis. These technologies can operate in standalone formats or can be adapted to work with established analytical technologies, such as flow cytometry. A key advantage of these innovations is their ability to process erythrocyte-lysed blood in a rapid (and thus high-throughput) fashion. They can process fluids at a variety of concentrations and flow rates, target cells with various immunophenotypes, and sort cells via positive (and potentially negative) selection. These technologies are chip-based and fabricated using standard clean room equipment, pointing towards a disposable clinical tool. With further optimization in design and performance, these technologies might aid in the early detection, and potentially treatment, of cancer and various other physical ailments.
Abstract:
A tenet of modern radiotherapy (RT) is to identify the treatment target accurately, following which the high-dose treatment volume may be expanded into the surrounding tissues in order to create the clinical and planning target volumes. Respiratory motion can induce errors in target volume delineation and dose delivery in radiation therapy for thoracic and abdominal cancers. Historically, radiotherapy treatment planning in the thoracic and abdominal regions has used 2D or 3D images acquired under uncoached free-breathing conditions, irrespective of whether the target tumor is moving or not. Once the gross target volume has been delineated, standard margins are commonly added in order to account for motion. However, these generic margins do not usually take the target motion trajectory into consideration. That may lead to under- or over-estimation of motion, with a subsequent risk of missing the target during treatment or irradiating excessive normal tissue, and introduces systematic errors into treatment planning and delivery. In clinical practice, four-dimensional (4D) imaging has become popular for RT motion management. It provides temporal information about tumor and organ-at-risk motion, and it permits patient-specific treatment planning. The most common contemporary imaging technique for identifying tumor motion is 4D computed tomography (4D-CT). However, CT has poor soft-tissue contrast and involves an ionizing radiation hazard. In the last decade, 4D magnetic resonance imaging (4D-MRI) has become an emerging tool for imaging respiratory motion, especially in the abdomen, because of its superior soft-tissue contrast. Recently, several 4D-MRI techniques have been proposed, including prospective and retrospective approaches. Nevertheless, 4D-MRI techniques face several challenges: 1) suboptimal and inconsistent tumor contrast with large inter-patient variation; 2) relatively low temporal-spatial resolution; and 3) lack of a reliable respiratory surrogate. In this research work, novel 4D-MRI techniques applying MRI weightings not used in existing 4D-MRI techniques, including T2/T1-weighted, T2-weighted and diffusion-weighted MRI, were investigated. A result-driven retrospective phase sorting method was proposed and applied both to image space and to the k-space of MR imaging. Novel image-based respiratory surrogates were developed, improved and evaluated.
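A simplified sketch of retrospective phase sorting in image space, assuming a stack of fast 2-D frames and a per-frame image-based respiratory surrogate; the function and variable names are hypothetical, and each phase bin is assumed to receive at least one frame:

    import numpy as np

    def retrospective_phase_sort(frames, surrogate, n_phases=10):
        """Sort dynamic MR frames into respiratory phase bins retrospectively.

        frames:    array (n_frames, ny, nx) of fast 2-D acquisitions.
        surrogate: 1-D respiratory signal sampled once per frame (e.g. an
                   image-based surrogate such as body area or diaphragm position).
        Returns a (n_phases, ny, nx) array with one averaged image per phase bin.
        """
        # assign each frame a phase in [0, 1) based on its position between peaks
        peaks = np.flatnonzero((surrogate[1:-1] > surrogate[:-2]) &
                               (surrogate[1:-1] >= surrogate[2:])) + 1
        phase = np.zeros(len(surrogate))
        for a, b in zip(peaks[:-1], peaks[1:]):
            phase[a:b] = np.linspace(0.0, 1.0, b - a, endpoint=False)
        bins = np.minimum((phase * n_phases).astype(int), n_phases - 1)
        return np.stack([frames[bins == k].mean(axis=0) for k in range(n_phases)])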
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years. The Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by the significant bottleneck of lengthy optimization runtimes. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, whose results yield significant, highly scalable and nearly linear speedups of up to 6.9 and 14.5 on distributed 8-core and 16-core systems, respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared to their sequential counterparts. This is a milestone in realizing DNO. Further, the techniques may be applied to similar applications based on greedy optimization algorithms.
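The paper's specific algorithm is not reproduced here, but the general pattern, scoring all greedy candidates in parallel and committing the best one sequentially, can be sketched as follows; the objective and names are placeholders:

    from multiprocessing import Pool

    def evaluate(candidate):
        """Score one candidate network change (placeholder objective)."""
        # ... an expensive network-quality evaluation would go here ...
        return candidate, sum(candidate)  # illustrative score only

    def parallel_greedy(candidates, n_steps, workers=8):
        """Greedy loop whose evaluation phase is farmed out to worker processes.

        Each iteration scores all remaining candidates in parallel and commits
        the best one, mirroring the data-parallel structure that makes greedy
        network-optimization algorithms amenable to near-linear speedup.
        """
        chosen = []
        remaining = list(candidates)
        with Pool(workers) as pool:
            for _ in range(n_steps):
                scored = pool.map(evaluate, remaining)        # parallel evaluation
                best, _ = max(scored, key=lambda cs: cs[1])   # sequential commit
                chosen.append(best)
                remaining.remove(best)
        return chosen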
Abstract:
As the largest contributor to renewable energy, biomass (especially lignocellulosic biomass) has significant potential to address atmospheric emission and energy shortage issues. The bio-fuels derived from lignocellulosic biomass are popularly referred to as second-generation bio-fuels. To date, several thermochemical conversion pathways for the production of second-generation bio-fuels have shown commercial promise; however, most of these remain at various pre-commercial stages. In view of their imminent commercialization, it is important to conduct a profound and comprehensive comparison of these production techniques. Accordingly, the scope of this review is to fill this essential knowledge gap by mapping the entire value chain of second-generation bio-fuels, from technical, economic, and environmental perspectives. This value chain covers i) the thermochemical technologies used to convert solid biomass feedstock into easier-to-handle intermediates, such as bio-oil, syngas, methanol, and Fischer-Tropsch fuel; and ii) the upgrading technologies used to convert intermediates into end products, including diesel, gasoline, renewable jet fuels, hydrogen, char, olefins, and oxygenated compounds. This review also provides an economic and commercial assessment of these technologies, with the aim of identifying the most adaptable technology for the production of bio-fuels, fuel additives, and bio-chemicals. A detailed mapping of the carbon footprints of the various thermochemical routes to second-generation bio-fuels is also carried out. The review concludes by identifying key challenges and future trends for second-generation petroleum substitute bio-fuels.
Abstract:
This paper outlines the development of a cross-correlation algorithm and a spiking neural network (SNN) for sound localisation based on real sound recorded in a noisy and dynamic environment by a mobile robot. The SNN architecture aims to simulate the sound localisation ability of the mammalian auditory pathways by exploiting the binaural cue of interaural time difference (ITD). The medial superior olive was the inspiration for the SNN architecture, which required the integration of an encoding layer that produced biologically realistic spike trains, a model of the bushy cells found in the cochlear nucleus, and a supervised learning algorithm. The experimental results demonstrate that biologically inspired sound localisation achieved using an SNN can compare favourably to the more classical technique of cross-correlation.
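A minimal sketch of the classical cross-correlation baseline for ITD estimation, assuming two single-channel numpy signals sampled at fs; the ±0.7 ms lag window is an assumption based on typical head geometry, not a value taken from the paper:

    import numpy as np

    def itd_by_cross_correlation(left, right, fs, max_itd_s=0.0007):
        """Estimate the interaural time difference between two microphone signals.

        Cross-correlates the left and right channels over physically plausible
        lags and returns the lag (in seconds) with the highest correlation.
        """
        max_lag = int(max_itd_s * fs)
        lags = np.arange(-max_lag, max_lag + 1)
        corr = [np.sum(left[max(0, -l):len(left) - max(0, l)] *
                       right[max(0, l):len(right) - max(0, -l)])
                for l in lags]
        return lags[int(np.argmax(corr))] / fs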
Abstract:
In the near future it may be economically viable to invest in the use of wood-based biomass in steel production. A prerequisite is that the fuel properties of the biomass are improved through a form of charring, more precisely slow pyrolysis. Forest residues and similar materials are suitable biomass sources. The benefit of using biomass is that it reduces fossil CO2 emissions and thereby mitigates global warming; this, however, requires replanting in order to recapture the CO2 released during harvesting, transport, processing and combustion of the biomass. An investment in a pyrolysis plant integrated into a steelworks can be profitable if the emission tax exceeds 20 € per tonne of CO2 when the biomass cost is 40 € per tonne of dry matter. This is not the case today, and in Finland these charges depend on political decisions at the European level. It could nevertheless be of political interest to support biomass use in the steel industry, since this would create new jobs in the steel industry itself as well as in the forest industry, and possibly also in the chemical industry, depending on how the resulting pyrolysis products (charcoal, gas and bio-oils) are utilized. Considering that nearly one fifth of all industrial CO2 emissions originate from the steel industry, it is clear that more environmentally friendly alternatives will be required in the future.
Abstract:
In the aluminum industry, calcined petroleum coke is considered the main component of the anode. A decline in the quality of petroleum coke has been observed as its impurity content has increased. This matters greatly to aluminum smelters because these impurities, in addition to reducing anode performance, contaminate the metal produced. Petroleum coke is also a source of fossil carbon and, as it is consumed during the electrolysis process, CO2 is produced. CO2 is a greenhouse gas and is well known for its role in global warming and climate change. Charcoal is available and produced worldwide in large quantities. It could be an attractive alternative to petroleum coke in the manufacture of the carbon anodes used in electrolysis cells for aluminum production. However, since it does not meet the anode manufacturing criteria, its use represents a major challenge. Its main known drawbacks are its high porosity, its disordered structure and its high mineral content. In addition, its density and electrical conductivity have been reported to be lower than those of petroleum coke. The objective of this work is to explore the effect of heat treatment on the properties of charcoal, with the aim of identifying those that come closest to the specifications required for anode production. The structural evolution of charcoal calcined at high temperature was followed using various techniques. The reduction of its mineral content was achieved through treatments with hydrochloric acid at different concentrations. Finally, different combinations of these two treatments, calcination and acid leaching, were tested in order to find the best treatment conditions.
Abstract:
Abstract not available
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all information that would be relevant for their research. As a response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale, to scan the whole publicly available biomedical literature and extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among different tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction constitutes the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction has given rise to a number of event extraction systems, several of which have been applied at a large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine-learning approaches and are trained on the narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are spotted by end-users. This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes/gene-products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as types and directions for these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at The 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
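As an illustration of the kind of topological features such a link-prediction formulation can use (not the thesis's actual feature set), a small self-contained sketch:

    def topology_features(edges, u, v):
        """Simple topological features for scoring a candidate interaction (u, v).

        edges: iterable of (gene_a, gene_b) pairs from a large event network.
        Returns the common-neighbour count and Jaccard coefficient, two standard
        link-prediction features that a supervised classifier could consume.
        """
        neigh = {}
        for a, b in edges:
            neigh.setdefault(a, set()).add(b)
            neigh.setdefault(b, set()).add(a)
        nu, nv = neigh.get(u, set()), neigh.get(v, set())
        common = len(nu & nv)
        union = len(nu | nv)
        return {"common_neighbours": common,
                "jaccard": common / union if union else 0.0}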
Abstract:
Constraint programming is a powerful technique for solving, among other things, large-scale scheduling problems. Scheduling aims to allocate tasks to resources over time. While it executes, a task consumes a resource at a constant rate. Generally, one seeks to optimize an objective function such as the total duration of a schedule. Solving a scheduling problem means determining when each task must start and which resource must execute it. Most scheduling problems are NP-hard. Consequently, no known algorithm can solve them in polynomial time. However, there exist specializations of scheduling problems that are not NP-complete. These problems can be solved in polynomial time using dedicated algorithms. Our objective is to explore these scheduling algorithms in several varied contexts. Filtering techniques have evolved considerably in recent years in constraint-based scheduling. The prominence of filtering algorithms rests on their ability to reduce the search tree by excluding domain values that do not participate in any solution to the problem. We propose improvements and present more efficient filtering algorithms for solving classical scheduling problems. In addition, we present adaptations of filtering techniques for the case where tasks can be delayed. We also consider various properties of industrial problems and solve more efficiently problems where the optimization criterion is not necessarily the completion time of the last task. For example, we present polynomial-time algorithms for the case where the amount of resource fluctuates over time, or where the cost of executing a task at time t depends on t.
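As one concrete example of the kind of filtering rule discussed (not one of the thesis's own algorithms), a simplified time-table filtering pass for the cumulative constraint can be sketched as follows; the data layout is an assumption:

    def time_table_filter(tasks, capacity, horizon):
        """One pass of time-table filtering for the cumulative constraint.

        tasks: dict name -> {'est': earliest start, 'lct': latest completion,
                             'p': duration, 'c': resource demand}.
        Tightens earliest start times using the profile of compulsory parts
        (the interval [lct - p, est + p) that a task must occupy in every
        solution). Returns the updated earliest start times.
        """
        # build the compulsory-part profile, remembering who contributes where
        profile = [0] * horizon
        owner_demand = {name: [0] * horizon for name in tasks}
        for name, t in tasks.items():
            lst, ect = t['lct'] - t['p'], t['est'] + t['p']
            for time in range(max(0, lst), min(horizon, ect)):
                profile[time] += t['c']
                owner_demand[name][time] = t['c']

        new_est = {}
        for name, t in tasks.items():
            est = t['est']
            time = est
            # push the start past any time point that would overload the resource
            while time < est + t['p'] and time < horizon:
                load = profile[time] - owner_demand[name][time] + t['c']
                if load > capacity:
                    est = time + 1      # the task cannot overlap this time point
                    time = est
                else:
                    time += 1
            new_est[name] = est
        return new_est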
Abstract:
We describe a one-step bio-refinery process for shrimp composite by-products. Its originality lies in a simple, rapid (6 h) biotechnological cuticle fragmentation process that recovers all major compounds (chitins, peptides and minerals, in particular calcium). The process consists of a controlled exogenous enzymatic proteolysis in a food-grade acidic medium, allowing chitin purification (solid phase) and recovery of peptides and minerals (liquid phase). At a pH between 3.5 and 4, protease activity is effective and peptides are preserved. Solid-phase demineralization kinetics were followed for phosphoric, hydrochloric, acetic, formic and citric acids with pKa values ranging from 2.1 to 4.76. Formic acid met the initial aims of (i) a 99% demineralization yield and (ii) a 95% deproteinization yield at a pH close to 3.5 and a molar ratio of 1.5. The proposed one-step process is proven to be efficient. To formalize the necessary elements for the future optimization of the process, two models to predict shell demineralization kinetics were studied, one based on simplified physical considerations and a second, empirical one. The first model did not accurately describe the kinetics for times exceeding 30 minutes, whereas the empirical one performed adequately.
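As a hedged illustration of how an empirical kinetic model of this kind might be fitted, the sketch below uses a first-order saturation curve and clearly hypothetical, invented placeholder data; it is not the paper's actual model, parameters, or data:

    import numpy as np
    from scipy.optimize import curve_fit

    def empirical_kinetics(t, x_inf, k):
        """First-order empirical model: demineralization yield X(t) = X_inf * (1 - exp(-k*t))."""
        return x_inf * (1.0 - np.exp(-k * t))

    # hypothetical placeholder measurements: time (min) and yield (%), for illustration only
    t_min = np.array([0, 5, 10, 20, 30, 60, 120, 360])
    x_pct = np.array([0, 35, 55, 75, 85, 95, 98, 99])

    params, _ = curve_fit(empirical_kinetics, t_min, x_pct, p0=[100.0, 0.05])
    print("X_inf = %.1f %%, k = %.3f min^-1" % tuple(params))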
Abstract:
In this dissertation I draw a connection between quantum adiabatic optimization, spectral graph theory, heat diffusion, and sub-stochastic processes through the operators that govern these processes and their associated spectra. In particular, we study Hamiltonians which have recently become known as ``stoquastic'' or, equivalently, the generators of sub-stochastic processes. The operators corresponding to these Hamiltonians are of interest in all of the settings mentioned above. I predominantly explore the connection between the spectral gap of an operator, or the difference between the two lowest energies of that operator, and certain equilibrium behavior. In the context of adiabatic optimization, this corresponds to the likelihood of solving the optimization problem of interest. I will provide an instance of an optimization problem that is easy to solve classically, but leaves open the possibility of being difficult adiabatically. Aside from this concrete example, the work in this dissertation is predominantly mathematical and we focus on bounding the spectral gap. Our primary tool for doing this is spectral graph theory, which provides the most natural approach to this task by simply considering Dirichlet eigenvalues of subgraphs of host graphs. I will derive tight bounds for the gap of one-dimensional, hypercube, and general convex subgraphs. The techniques used will also adapt methods recently used by Andrews and Clutterbuck to prove the long-standing ``Fundamental Gap Conjecture''.
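To make the Dirichlet-eigenvalue approach tangible, here is a small numpy sketch computing the spectral gap of the Dirichlet Laplacian on a one-dimensional path subgraph; this is a standard textbook construction, not code from the dissertation:

    import numpy as np

    def dirichlet_spectral_gap(n):
        """Spectral gap of the Dirichlet Laplacian on a path of n interior vertices.

        The Dirichlet Laplacian is the graph Laplacian of the host path restricted
        to the interior vertices (boundary vertices fixed to zero). The gap is the
        difference between its two smallest eigenvalues.
        """
        laplacian = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        eigvals = np.linalg.eigvalsh(laplacian)
        return eigvals[1] - eigvals[0]

    # the gap of the path shrinks roughly like 3*pi^2 / n^2 as the path grows
    for n in (4, 8, 16, 32):
        print(n, dirichlet_spectral_gap(n))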
Abstract:
The Matosinhos Refinery is one of Galp Energia's industrial complexes. Its industrial wastewater treatment plant (ETARI), internally designated Unit 7000, comprises four treatment stages: pre-treatment, physico-chemical treatment, biological treatment and post-treatment. Given how these stages are interconnected, optimizing each of them is essential. The objectives of this work were to identify problems and/or opportunities for improvement in the pre-treatment, physico-chemical treatment and post-treatment stages and, above all, to optimize the biological treatment of the plant. In the pre-treatment it was found that the separation of oils and sludge was not effective, since emulsions of these two phases are formed. The addition of demulsifying agents was suggested as a solution but proved economically unfeasible. As an alternative, techniques for treating the emulsion generated were suggested, such as solvent extraction, centrifugation, ultrasound and microwaves. In the physico-chemical treatment it was found that the dissolved-air saturation unit was controlled based on visual inspection by the operators, which can lead to operating conditions far from the optimum for this treatment. It was therefore suggested that an optimization study of this unit be carried out to determine the optimum air-to-solids ratio for this effluent. In addition, coagulant consumption increased by about --% over the last year, so a study of the feasibility of electrocoagulation as a replacement for the existing coagulation system was suggested. In the post-treatment, the filter backwashing process was identified as the step with potential for optimization. A preliminary study concluded that continuously washing one filter per shift improved filter performance. It was also found that introducing compressed air into the wash water promotes greater removal of debris from the sand bed; however, this practice appears to affect filter performance negatively. In the biological treatment, problems were identified with the hydraulic retention time of biological treatment II, which showed high variability. Although identified, this problem proved difficult to solve. It was also found that dissolved oxygen was not monitored, so the installation of a dissolved-oxygen probe in a low-turbulence zone of the aeration tank was suggested. It was concluded that oxygen was distributed homogeneously throughout the aeration tank, and an attempt was made to identify the factors influencing this parameter; however, given the high variability of the effluent and of the treatment conditions, this was not possible. It was also found that phosphate dosing for biological treatment II was inefficient, since on --% of days low phosphate levels were observed in the mixed liquor (< - mg/L). Replacing the current gravity dosing system with a dosing pump was therefore proposed. In addition, consumption of this nutrient increased significantly over the last year (by about --%), a situation found to be related to an increase in the microbial population over this period.
It was possible to relate the frequent appearance of sludge on the surface of the secondary clarifiers to sudden increases in conductivity, so it was suggested that the effluent be stored in the stormwater basins in such situations. Nitrogen removal was found to be practically ineffective, since the conversion of ammonia nitrogen to nitrates was very low. The use of bio-augmentation, or the conversion of the activated sludge system into a two-stage system, was therefore suggested. Finally, it was found that the temperature of the effluent at the plant inlet is quite high for biological treatment (approximately -- °C), so the installation of a temperature probe in the aeration tank was suggested in order to control the mixed-liquor temperature more effectively. Still with regard to the biological treatment, a set of tools was developed aimed at optimizing its operation. Several improvement suggestions were put forward: using the sludge volume index as an indicator of sludge quality, as an alternative to the sludge percentage; a set of flowcharts was developed to guide field operators in troubleshooting; an “operating window” was created to serve as an operational support guide; and frequent monitoring of the sludge age and of the food-to-microorganism ratio was also proposed.
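For reference, the two routine indicators mentioned above have simple textbook definitions; a short sketch with assumed units, not plant-specific code:

    def sludge_volume_index(settled_volume_ml_per_l, mlss_g_per_l):
        """Sludge Volume Index (mL/g): 30-min settled sludge volume per gram of MLSS."""
        return settled_volume_ml_per_l / mlss_g_per_l

    def food_to_microorganism_ratio(bod_load_kg_per_day, mlvss_kg):
        """F/M ratio (kg BOD per kg MLVSS per day) for an activated-sludge basin."""
        return bod_load_kg_per_day / mlvss_kg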