993 results for Processes optimization
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all information that would be relevant for their research. In response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale to scan the whole publicly available biomedical literature and extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among different tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction is the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction have given rise to a number of event extraction systems, several of which have been applied at large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine learning and are trained on the narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events that are spotted by end-users. This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes/gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as the types and directions of these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
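As an illustration of casting hypothesis generation as supervised edge prediction on an event network, the minimal sketch below builds simple topological features for gene pairs and ranks unconnected pairs by a classifier's confidence. The toy graph, feature set and classifier are assumptions for illustration, not the thesis's actual pipeline or feature set.

```python
# Hedged sketch: hypothesis generation as supervised edge prediction on an event network.
# Graph, features (degrees, common neighbours) and toy data are illustrative assumptions.
import itertools
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

# Toy directed "event network": nodes are genes/gene-products, edges are extracted events.
G = nx.DiGraph()
G.add_edges_from([
    ("geneA", "geneB"), ("geneB", "geneC"), ("geneA", "geneC"),
    ("geneC", "geneD"), ("geneB", "geneD"), ("geneE", "geneA"),
])

def pair_features(g, u, v):
    """Simple topological features for an ordered gene pair (u, v)."""
    und = g.to_undirected()
    common = len(set(und[u]) & set(und[v]))
    return [g.out_degree(u), g.in_degree(v), common]

# Label every ordered pair by whether an edge already exists (positive) or not (negative).
pairs = [(u, v) for u, v in itertools.permutations(G.nodes, 2)]
X = [pair_features(G, u, v) for u, v in pairs]
y = [int(G.has_edge(u, v)) for u, v in pairs]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Rank unconnected pairs by predicted probability: these are candidate hypotheses.
candidates = [(u, v) for u, v in pairs if not G.has_edge(u, v)]
scores = clf.predict_proba([pair_features(G, u, v) for u, v in candidates])[:, 1]
for (u, v), s in sorted(zip(candidates, scores), key=lambda t: -t[1])[:3]:
    print(f"hypothesised edge {u} -> {v}: score {s:.2f}")
```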
Abstract:
This research project was driven by recurring complaints and concerns voiced in the media by residents living in the Valley area of the community of Happy Valley-Goose Bay, Labrador. Drinking water in this town is supplied by two water treatment plants (a municipal treatment plant and a DND treatment plant), which draw raw water from two different sources (groundwater from multiple wells versus surface water from Spring Gulch brook) and use two different drinking-water treatment processes. The drinking water supplied in the Valley area therefore has a unique distribution arrangement: to meet demand, the area is served by a blend of treated waters from a storage reservoir (Sandhill reservoir), which is fed by both treatment plants. Most of the time, treated water from the municipal treatment plant dominates the mixture. As water travels through the distribution system and household plumbing, reactions can occur in the water itself and/or at the solid–liquid interface at the pipe walls; these are strongly influenced by the physical and chemical characteristics of the water. Such reactions can introduce undesirable chemical compounds and/or favor the growth of bacteria in the drinking water, causing deterioration of the quality of the water reaching consumer taps. In the distribution system in general, these chemical constituents and bacteria may pose threats to health or to the water's aesthetic qualities (smell, taste or appearance). Drinking water should be not only safe, but also palatable.
Abstract:
Heterotrophic feeding plays an important role in the growth and reproduction of mixotrophic corals. The soft coral Sarcophyton cf. glaucum is a good candidate for aquaculture due to its economic interest for the marine aquarium trade and for the bioprospecting of marine natural products. The lack of information on heterotrophic feeding of this species with preserved microalgae led to the development of this work. The present study aimed to evaluate the effect of the conservation process of microalgae on their suitability as heterotrophic feed for the mixotrophic coral S. cf. glaucum. Additionally, we aimed to identify the most suitable freeze-dried microalgae species and cell density to be employed in the culture of this mixotrophic coral species. Two experiments were performed: in the first experiment the microalga Nannochloropsis oculata was supplied to coral fragments in three different preservation forms (live paste, frozen and freeze-dried) at a concentration of 10⁶ cells mL⁻¹; in the second experiment three different microalgae species (Nannochloropsis oculata, Isochrysis galbana and Phaeodactylum tricornutum) were tested at two different amounts: 7.33 mg L⁻¹ (corresponding to a concentration of 10⁶ cells mL⁻¹ of Nannochloropsis oculata) and 3.66 mg L⁻¹. Growth rate, survival, organic weight and photobiology of the coral fragments, as well as water quality in the culture tanks, were evaluated in both experiments. The preserved forms of microalgae did not produce differences in growth rate, organic weight or survival rate of the coral fragments, but did affect water quality. Freeze-dried microalgae seem to be a good feed supply for coral aquaculture, as they gave the best results and have the longest shelf life and the lowest associated costs. Among the species evaluated in the second experiment, Isochrysis galbana promoted the highest specific growth rate and the highest percentage of organic weight in the coral fragments; additionally, the culture tanks supplied with this microalgae species also presented better water quality at the end of the experiment.
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness that exists naturally in connected objects consisting of a root object and its reachable set, 'rooted sub-graphs'. Second, the dissertation leverages this locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. The new NUMA-aware garbage collector is evaluated using seven benchmarks of the established real-world DaCapo benchmark suite, the widely used SPECjbb benchmark, a Neo4j graph database Java benchmark, and an artificial benchmark. On a multi-hop NUMA architecture, the NUMA-aware garbage collector shows an average performance improvement of 15%, and this gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy relies on outdated assumptions and generates a constant thread count, and it is still used in the production version of the JVM. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring this optimal number yields better collection throughput than the default policy. Fifth, the dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average improvement of 21% in garbage collection performance for the DaCapo benchmarks.
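The node-local tracing and same-node work-stealing idea described above can be illustrated with a minimal scheduling sketch. This is not Hotspot's implementation (which lives inside the JVM in C++); the node layout, queue contents and steal order below are illustrative assumptions only.

```python
# Hedged sketch: each GC thread drains a queue of roots placed on its own NUMA node and,
# when idle, steals first from sibling threads on the same node before crossing nodes.
from collections import deque

NUM_NODES = 2
THREADS_PER_NODE = 2

# work_queues[node][thread] holds "rooted sub-graphs" (here just labelled work items).
work_queues = {
    n: {t: deque(f"node{n}-root{i}" for i in range(3)) for t in range(THREADS_PER_NODE)}
    for n in range(NUM_NODES)
}

def next_work(node, thread):
    """Return the next work item: local queue, then same-node siblings, then remote nodes."""
    local = work_queues[node][thread]
    if local:
        return local.popleft()                   # local root: likely same-node objects
    for sib, q in work_queues[node].items():     # prefer same-node steal victims
        if sib != thread and q:
            return q.pop()
    for other in range(NUM_NODES):               # last resort: remote-node steal
        if other == node:
            continue
        for q in work_queues[other].values():
            if q:
                return q.pop()
    return None

# Drive one simulated thread until all work is exhausted.
while (item := next_work(0, 0)) is not None:
    print("thread (node 0, id 0) traced", item)
```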
Abstract:
In this dissertation I draw a connection between quantum adiabatic optimization, spectral graph theory, heat diffusion, and sub-stochastic processes through the operators that govern these processes and their associated spectra. In particular, we study Hamiltonians which have recently become known as "stoquastic" or, equivalently, the generators of sub-stochastic processes. The operators corresponding to these Hamiltonians are of interest in all of the settings mentioned above. I predominantly explore the connection between the spectral gap of an operator, i.e. the difference between the two lowest energies of that operator, and certain equilibrium behavior. In the context of adiabatic optimization, this corresponds to the likelihood of solving the optimization problem of interest. I provide an instance of an optimization problem that is easy to solve classically but leaves open the possibility of being difficult adiabatically. Aside from this concrete example, the work in this dissertation is predominantly mathematical and we focus on bounding the spectral gap. Our primary tool for doing so is spectral graph theory, which provides the most natural approach to this task by simply considering Dirichlet eigenvalues of subgraphs of host graphs. I derive tight bounds for the gap of one-dimensional, hypercube, and general convex subgraphs. The techniques used also adapt methods recently used by Andrews and Clutterbuck to prove the long-standing "Fundamental Gap Conjecture".
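A minimal numerical illustration of the Dirichlet-eigenvalue approach mentioned above: the spectral gap of a one-dimensional path subgraph of the integer line, compared against its closed form. The host graph, subgraph and normalisation are assumptions for illustration, not taken from the dissertation itself.

```python
# Hedged sketch: Dirichlet spectral gap of a path subgraph with n interior vertices.
import numpy as np

def dirichlet_gap_path(n):
    """Gap (lambda_2 - lambda_1) of the Dirichlet Laplacian on a path of n interior vertices."""
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # graph Laplacian with Dirichlet boundary
    eigvals = np.linalg.eigvalsh(L)
    return eigvals[1] - eigvals[0]

for n in (4, 16, 64):
    numeric = dirichlet_gap_path(n)
    # Closed form for the path: lambda_k = 2 - 2*cos(k*pi/(n+1)), k = 1..n
    exact = (2 - 2 * np.cos(2 * np.pi / (n + 1))) - (2 - 2 * np.cos(np.pi / (n + 1)))
    print(f"n={n:3d}  gap={numeric:.6f}  closed-form={exact:.6f}")
```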
Abstract:
Poster presented at: 21st World Hydrogen Energy Conference 2016. Zaragoza, Spain, 13-16 June 2016
Abstract:
In Part 1 of this thesis, we propose that biochemical cooperativity is a fundamentally non-ideal process. We show quantal effects underlying biochemical cooperativity and highlight apparent ergodic breaking at small volumes. The apparent ergodic breaking manifests itself in a divergence of deterministic and stochastic models. We further predict that this divergence of deterministic and stochastic results is a failure of the deterministic methods rather than an issue of stochastic simulations.
Ergodic breaking at small volumes may allow these molecular complexes to function as switches to a greater degree than has previously been shown. We propose that this ergodic breaking is a phenomenon the synapse might exploit to differentiate Ca²⁺ signaling that would lead to either the strengthening or weakening of a synapse. Techniques such as lattice-based statistics and rule-based modeling are tools that allow us to confront this non-ideality directly. A natural next step to understanding the chemical physics that underlies these processes is to consider in silico methods, specifically atomistic simulation methods, that might augment our modeling efforts.
In the second part of this thesis, we use evolutionary algorithms to optimize in silico methods that might be used to describe biochemical processes at the subcellular and molecular levels. While we have applied evolutionary algorithms to several methods, this thesis focuses on the optimization of charge equilibration methods. Accurate charges are essential to understanding the electrostatic interactions involved in ligand binding, as frequently discussed in the first part of this thesis.
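As a rough illustration of the kind of optimization described above, the sketch below uses a simple evolutionary loop to tune per-element electronegativity and hardness parameters of a toy charge-equilibration expression so that predicted diatomic charges match reference values. The two-atom QEq formula (Coulomb coupling neglected), the reference charges and the EA settings are simplified assumptions, not the methods or data of the thesis.

```python
# Hedged sketch: an evolutionary algorithm tuning charge-equilibration parameters
# (per-element electronegativity chi and hardness eta) against reference charges.
import random

random.seed(0)

# Reference (element pair) -> target partial charge on the first atom (illustrative values).
reference = {("H", "O"): 0.35, ("H", "N"): 0.31, ("C", "O"): 0.40}
elements = ["H", "C", "N", "O"]

def predicted_charge(params, a, b):
    """Neutral diatomic QEq charge, Coulomb coupling neglected: q = (chi_b - chi_a)/(eta_a + eta_b)."""
    chi_a, eta_a = params[a]
    chi_b, eta_b = params[b]
    return (chi_b - chi_a) / (eta_a + eta_b)

def fitness(params):
    return sum((predicted_charge(params, a, b) - q_ref) ** 2 for (a, b), q_ref in reference.items())

def random_individual():
    return {el: (random.uniform(1.0, 10.0), random.uniform(5.0, 15.0)) for el in elements}

def mutate(params, scale=0.3):
    el = random.choice(elements)
    chi, eta = params[el]
    child = dict(params)
    child[el] = (chi + random.gauss(0, scale), max(1.0, eta + random.gauss(0, scale)))
    return child

# (mu + lambda)-style evolution: keep the best individuals, refill with mutants.
population = [random_individual() for _ in range(20)]
for generation in range(200):
    population.sort(key=fitness)
    population = population[:5] + [mutate(random.choice(population[:5])) for _ in range(15)]

best = min(population, key=fitness)
print("residual error:", round(fitness(best), 5))
```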
Abstract:
International audience
Abstract:
In the framework of industrial problems, the application of Constrained Optimization is known to have very good modeling capability and performance overall, and stands as one of the most powerful, explored, and exploited tools for addressing prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, scheduling, and much more. The main reason behind this success lies in the remarkable effort made over the last decades by the OR community to develop realistic models and devise exact or approximate methods for solving the largest variety of constrained or combinatorial optimization problems, together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a wealth of data never seen before, which increasingly pushes towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like Image Recognition, Natural Language Processing and game playing, as well as the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to build systems that leverage the strengths of both: this would open the way to exploiting decades of research on resolution techniques for COPs while constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects carried out in collaboration with Optit are presented.
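To make the general notion of injecting knowledge into learning models through constraints concrete, the sketch below adds a penalty term to an ordinary least-squares fit so that one coefficient respects a sign constraint. This is a generic illustration only; it is not the Moving Target algorithm introduced in the thesis, and the model, data and constraint are assumptions.

```python
# Hedged sketch: generic constraint injection via a penalty term (NOT the Moving Target algorithm).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
lam = 10.0          # weight of the constraint penalty
lr = 0.01

# Domain knowledge to inject: the coefficient on feature 1 should be non-negative.
for _ in range(2000):
    grad_loss = 2 * X.T @ (X @ w - y) / len(y)           # gradient of the squared loss
    grad_pen = np.array([0.0, 2 * min(w[1], 0.0), 0.0])  # gradient of min(w1, 0)^2
    w -= lr * (grad_loss + lam * grad_pen)

print("learned weights with constraint:", np.round(w, 3))   # w[1] is pushed towards >= 0
```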
Abstract:
Nowadays, the chemical industry has achieved significant goals in producing essential components for human beings. The growing competitiveness of the market has caused a significant acceleration of R&D activities, introducing new opportunities and procedures for defining process improvement and optimization. In this dynamic context, sustainability is becoming one of the key aspects of technological progress, encompassing economic, environmental-protection and safety aspects. With respect to the conceptual definition of sustainability, the literature reports an extensive discussion of strategies, as well as sets of specific principles and guidelines. However, literature procedures are not completely suitable and applicable to process design activities. Therefore, the development and introduction of sustainability-oriented methodologies is a necessary step to enhance process and plant design. The definition of key drivers as a support system is a focal point for early process design decisions or for the implementation of process modifications. In this context, three different methodologies are developed to support design activities, providing criteria and guidelines from a sustainability perspective. In this framework, a set of Key Performance Indicators is selected and adopted to characterize the environmental, safety, economic and energetic aspects of a reference process. The methodologies are based on heat and material balances, and the level of detail of the input data is compatible with the information available for the specific application. Multiple case studies are defined to prove the effectiveness of the methodologies. The principal application is the polyolefin production lifecycle chain, with particular focus on polymerization technologies. In this context, different design phases are investigated, spanning from early process feasibility studies to operational and improvement assessments. This flexibility allows the methodologies to be applied at any level of design, providing supporting guidelines for design activities, comparing alternative solutions, monitoring operating processes and identifying potential improvements.
Development of processes for the valorization of lignocellulosic biomass based on renewable energies
Abstract:
The world grapples with climate change driven by fossil fuel reliance, prompting Europe to pivot to renewable energy. Among renewables, biomass is a source of bioenergy and bio-carbon, used to create high-value biomolecules that replace fossil-based products. Alkyl levulinates, derived from biomass, hold promise as bio-additives and biofuels, especially via acid solvolysis of hexose sugars, and warrant further exploration. Alkyl levulinates can in turn be converted into γ-valerolactone (GVL), a bio-solvent produced via hydrogenation with molecular hydrogen. Hydrogen, a key reagent and energy carrier, aids renewable energy integration. This thesis presents a biorefinery system study, aligned with sustainability goals, that integrates biomass valorization, energy production, and hydrogen generation. It investigates and optimizes technologies for butyl levulinate production and its subsequent hydrogenation to GVL. Sustainability remains pivotal, reflecting the global shift towards renewable and carbon bio-resources. The research initially focuses on identifying, experimentally, the optimal technology for producing butyl levulinate from the biomass-derived hexose fructose. It examines the solvolysis process, investigating optimal conditions, kinetic modeling, and the impact of solvents on fructose conversion. The subsequent part concentrates on the technological aspects of hydrogenating butyl levulinate into GVL, including conceptual design, simulation, and optimization of the fructose-to-GVL process scheme based on process intensification. In the final part, the study applies the process to a real case study in Normandy, France, adapting it to local biomass availability and wind energy. It defines a methodology for designing and integrating the energy-supply system and evaluates different scenarios. A sustainability assessment using economic, environmental, and social indicators culminates in an overall sustainability index, indicating that scenarios integrating the GVL biorefinery system with wind power and hydrogen energy storage are promising due to high profitability and reduced environmental impact. Sensitivity analyses validate the reliability of the methodology, which can potentially be extended to other technological systems.
Abstract:
Nowadays, product development in all its phases plays a fundamental role in the industrial chain. The need for a company to compete at a high level and to respond quickly to market demands, and therefore to engineer the product rapidly and with a high level of quality, has led to the adoption of new, more advanced methods and processes. In recent years, design and production have been moving away from the 2D-based approach and towards the concept of Model Based Definition. With this approach, increasingly complex systems become easier to deal with and, above all, cheaper to obtain. Thanks to Model Based Definition it is possible to share data in a lean and simple way with the entire engineering and production chain of the product; the great advantage of this approach is precisely the uniqueness of the information. In this thesis work, this approach has been exploited in the context of tolerances with the aid of CAD/CAT software. Tolerance analysis, or dimensional variation analysis (DVA), is a way to understand how sources of variation in part dimensions and assembly constraints propagate between parts and assemblies, and how that variation affects the ability of a design to meet its requirements. It is critically important to note that tolerances directly affect the cost and performance of products. Worst Case Analysis (WCA) and statistical analysis (RSS, root sum square) are the two principal methods in DVA. The thesis aims to show the advantages of using statistical dimensional analysis by creating and examining various case studies, using PTC CREO software for CAD modeling and CETOL 6σ for tolerance analysis. Moreover, a comparison between manual 1D and 3D analysis is provided, focusing on the information lost in the 1D case. The results obtained highlight the need to use this approach from the early stages of the product design cycle.
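The difference between the two stack-up methods named above can be illustrated on a toy one-dimensional tolerance chain: worst-case analysis adds tolerances linearly, while RSS combines them as a root sum of squares. The dimensions and tolerances below are invented for illustration; the thesis's case studies use CAD models in PTC CREO with CETOL 6σ.

```python
# Hedged sketch: worst-case vs. RSS stack-up on an invented 1D tolerance chain.
import math

# (nominal dimension, symmetric tolerance) for each part in the stack, in millimetres.
stack = [(25.0, 0.10), (40.0, 0.15), (12.5, 0.05), (8.0, 0.08)]

nominal = sum(d for d, _ in stack)
worst_case = sum(t for _, t in stack)              # WCA: tolerances add linearly
rss = math.sqrt(sum(t ** 2 for _, t in stack))     # RSS: statistical root-sum-square

print(f"nominal stack     : {nominal:.2f} mm")
print(f"worst-case spread : +/- {worst_case:.3f} mm")
print(f"RSS spread        : +/- {rss:.3f} mm (less conservative than WCA)")
```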
Abstract:
Insulin was used as a model protein to develop innovative solid lipid nanoparticles (SLNs) for the delivery of hydrophilic biotech drugs, with potential use in medicinal chemistry. SLNs were prepared by double emulsion with the purpose of promoting stability and enhancing protein bioavailability. Softisan® 100 was selected as the solid lipid matrix. The amounts of surfactants (Tween® 80, Span® 80 and Lipoid® S75) and insulin were chosen by applying a 2² factorial design with a triplicate central point, and their influence on the dependent variables polydispersity index (PI), mean particle size (z-AVE), zeta potential (ZP) and encapsulation efficiency (EE) was evaluated using ANOVA. Thermodynamic stability, polymorphism and matrix crystallinity were checked by Differential Scanning Calorimetry (DSC) and Wide Angle X-ray Diffraction (WAXD), whereas the toxicity of the SLNs was assessed in HepG2 and Caco-2 cells. Results showed a mean particle size (z-AVE) between 294.6 nm and 627.0 nm, a PI in the range of 0.425-0.750, a ZP of about -3 mV, and an EE between 38.39% and 81.20%. After tempering the bulk lipid (mimicking the end of the production process), the lipid showed amorphous characteristics, with a melting point of ca. 30 °C. The toxicity of the SLNs, evaluated in the two distinct cell lines, was found to depend on particle concentration in HepG2 cells, while no toxicity was observed in Caco-2 cells. SLNs were stable for 24 h in an in vitro human serum albumin (HSA) solution. The resulting SLNs fabricated by double emulsion may provide a promising approach for the administration of protein therapeutics and antigens.
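A minimal sketch of a 2² factorial design with a triplicate centre point analysed by ANOVA, mirroring the design strategy described above; the factor coding and the response values (encapsulation efficiency, %) are invented for illustration and are not the study's data.

```python
# Hedged sketch: 2^2 factorial design with triplicate centre point, analysed with ANOVA.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

runs = pd.DataFrame({
    "surfactant": [-1, 1, -1, 1, 0, 0, 0],   # coded level of surfactant amount
    "insulin":    [-1, -1, 1, 1, 0, 0, 0],   # coded level of insulin amount
    "ee":         [42.1, 63.5, 55.0, 80.7, 61.2, 60.4, 62.0],  # response: encapsulation efficiency (%)
})

model = smf.ols("ee ~ surfactant * insulin", data=runs).fit()
print(sm.stats.anova_lm(model, typ=2))       # significance of main effects and interaction
```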
Abstract:
The influence of acid drainage on water and sediment quality was investigated in a coal mining area (southern Brazil). Mine drainage showed pH between 3.2 and 4.6 and elevated concentrations of sulfate, As and metals, of which Fe, Mn and Zn exceeded the limits for the emission of effluents stated in Brazilian legislation; arsenic also exceeded the limit, but only slightly. Groundwater monitoring wells at active mines and tailings piles showed a pH interval and chemical concentrations similar to those of the mine drainage. However, the river and groundwater samples of municipal public water supplies revealed a pH range from 7.2 to 7.5 and low chemical concentrations, although the Cd concentration slightly exceeded the limit adopted by Brazilian legislation for groundwater. In general, surface waters showed a large pH range (6 to 10.8), and the changes caused by acid drainage in the chemical composition of these waters were not very significant. Locally, acid drainage appears to have dissolved carbonate rocks present in the local stratigraphic sequence, attenuating the dispersion of metals and As. Stream sediments presented anomalies of these elements, which were strongly dependent on the proximity of tailings piles and abandoned mines. We found that precipitation processes in the sediments and the dilution of dissolved phases were responsible for attenuating the concentrations of metals and As in the mixing zone between acid drainage and river water. In general, a larger influence of mining activities on the chemical composition of surface waters and sediments was observed when enrichment factors relative to regional background levels were used.
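The enrichment-factor normalisation alluded to above is typically computed relative to a conservative reference element; the sketch below assumes Al as the reference and uses invented concentrations for illustration, not the study's measurements.

```python
# Hedged sketch: enrichment factor of a metal relative to a regional background,
# normalised to a conservative reference element (Al assumed here).
def enrichment_factor(c_metal_sample, c_ref_sample, c_metal_background, c_ref_background):
    """EF = (metal/reference)_sample / (metal/reference)_background."""
    return (c_metal_sample / c_ref_sample) / (c_metal_background / c_ref_background)

# Example: Zn in a stream sediment near a tailings pile vs. regional background (mg/kg).
ef_zn = enrichment_factor(c_metal_sample=320.0, c_ref_sample=65000.0,
                          c_metal_background=60.0, c_ref_background=72000.0)
print(f"Zn enrichment factor: {ef_zn:.1f}")  # values well above 1 suggest anthropogenic input
```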
Abstract:
The Centers for High Cost Medication (Centros de Medicação de Alto Custo, CEDMAC), Health Department, São Paulo, were instituted through a project in partnership with the Clinical Hospital of the Faculty of Medicine, USP, sponsored by the Foundation for Research Support of the State of São Paulo (Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP), aimed at forming a statewide network for the comprehensive care of patients referred for the use of immunobiological agents in rheumatological diseases. The CEDMAC of the Hospital de Clínicas, Universidade Estadual de Campinas (HC-Unicamp), implemented by the Division of Rheumatology, Faculty of Medical Sciences, identified the need to standardize the multidisciplinary team's practices, given the specificity of the care provided, and recognized the importance of describing its operational and technical processes in manual format. The aim of this study is to present the methodology applied to the elaboration of the CEDMAC/HC-Unicamp Manual as an institutional tool for offering the best quality of care and administration. In the methodology used to prepare manuals at HC-Unicamp since 2008, the premise has been to obtain a document that is participatory and multidisciplinary, focused on work processes integrated with institutional rules, with objective and didactic descriptions, in a standardized format and with electronic dissemination. The CEDMAC/HC-Unicamp Manual was elaborated in 10 months, with the involvement of the entire multidisciplinary team, and contains 19 chapters on work processes and techniques, in addition to those concerning the organizational structure and its annexes. Published in the electronic portal of HC Manuals in July 2012 as an e-Book (ISBN 978-85-63274-17-5), the manual has been a valuable instrument in guiding professionals in healthcare, teaching and research activities.