902 results for Graph-based methods


Relevance: 80.00%

Abstract:

Streptococcus pneumoniae is a human pathobiont that colonizes the nasopharynx. S. pneumoniae causes non-invasive and invasive disease such as otitis, pneumonia, meningitis, and sepsis, and is a leading cause of infectious disease worldwide. Owing to similarities with closely related species sharing the same niche, it can be challenging to correctly distinguish S. pneumoniae from its relatives when using only non-culture-based methods such as real-time PCR (qPCR). In 2007, Carvalho and collaborators proposed a qPCR assay targeting the major autolysin gene (lytA) of S. pneumoniae to identify pneumococcus, and this method has since been widely used worldwide. In 2013, Trzcinzki and collaborators proposed that the gene encoding the ABC iron transporter lipoprotein PiaA be used in parallel with the lytA qPCR assay. However, lytA gene homologues have been described in closely related species such as S. pseudopneumoniae and S. mitis, and the piaA gene is not ubiquitous among S. pneumoniae. The hyaluronate lyase gene (hylA) has been described as ubiquitous in S. pneumoniae but has not so far been used as a target for the identification of S. pneumoniae. The aims of our study were to evaluate the specificity, sensitivity, positive predictive value (PPV) and negative predictive value (NPV) of the lytA and piaA qPCR methods; to design and implement a new assay targeting the hylA gene and evaluate the same parameters; and to analyze the assays independently and in their possible combinations to assess the best qPCR approach for identifying S. pneumoniae. A total of 278 previously characterized strains were tested: 61 S. pseudopneumoniae, 37 Viridans group strains, 30 type strains from other streptococcal species and 150 S. pneumoniae strains. The collection included both carriage and disease isolates. By Multilocus Sequence Analysis (MLSA) we confirmed that strains of S. pseudopneumoniae can be misidentified as S. pneumoniae when the lytA qPCR assay is used. As a single target, lytA had the best combination of specificity, sensitivity, PPV and NPV: 98.5%, 100.0%, 98.7% and 100.0%, respectively. The combination of targets with the best specificity, sensitivity, PPV and NPV was lytA plus piaA, with 100.0%, 93.3%, 97.9% and 92.6%, respectively. Nonetheless, MLSA confirmed that strains of S. pseudopneumoniae could be misidentified as S. pneumoniae, and some capsulated (23F, 6B and 11A) and non-capsulated S. pneumoniae were not identified using this combined assay. The hylA gene as a single target had the lowest PPV, yet it correctly identified all S. pneumoniae.
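The four reported performance measures follow directly from the 2x2 confusion matrix of test calls against the reference characterization. A minimal sketch in Python (the counts below are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard test-performance measures from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical example: 150 true pneumococci and 128 non-pneumococci tested.
print(diagnostic_metrics(tp=150, fp=2, tn=126, fn=0))
```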

Relevance: 80.00%

Abstract:

Presently, avocado germplasm is conserved ex situ in field repositories across the globe, including in Australia. Maintaining germplasm in the field is costly, labour- and land-intensive, exposed to natural disasters, and always at risk from abiotic and biotic stresses. The aim of this study was to overcome these problems by using cryopreservation to store avocado (Persea americana Mill.) somatic embryos (SE). Two vitrification-based cryopreservation methods (cryovial- and droplet-vitrification) were optimised using four avocado cultivars ('A10', 'Reed', 'Velvick' and 'Duke-7'). SE of the four cultivars were stored short-term (one hour) in liquid nitrogen (LN) using the cryovial-vitrification method and showed viabilities of 91%, 73%, 86% and 80%, respectively, while with the droplet-vitrification method viabilities of 100%, 85% and 93% were recorded for 'A10', 'Reed' and 'Velvick'. For long-term storage, SE of cultivars 'A10', 'Reed' and 'Velvick' were successfully recovered with viabilities of 65–100% after 3 months of LN storage. For cultivars 'Reed' and 'Velvick', SE were recovered after 12 months of LN storage with viabilities of 67% and 59%, respectively. This work contributes towards the establishment of a cryopreservation protocol applicable across multiple avocado cultivars.

Relevance: 80.00%

Abstract:

Background: Despite declining incidence, Helicobacter pylori (H. pylori) remains one of the most common bacterial infections in humans. Infection with H. pylori is a risk factor for diseases such as gastroduodenal ulcers, gastric carcinoma and MALT (mucosa-associated lymphoid tissue) lymphoma. Various invasive and non-invasive procedures are available for diagnosing H. pylori. The 13C-urea breath test is recommended for confirming the success of eradication therapy, but is currently not used routinely in Germany for the primary diagnosis of H. pylori.

Research question: What is the medical and health-economic benefit of testing for H. pylori colonization with the 13C-urea breath test in primary diagnosis, compared with invasive and non-invasive diagnostic procedures?

Methods: Based on a systematic literature search combined with a hand search, studies on the test accuracy and cost-effectiveness of the 13C-urea breath test relative to other diagnostic procedures for the primary detection of H. pylori are identified. Only medical studies that compare the 13C-urea breath test directly with other H. pylori tests are included; the gold standard is one, or a combination, of the biopsy-based tests. For the health-economic assessment, only full health-economic evaluations in which the cost-effectiveness of the 13C-urea breath test is compared directly with other H. pylori tests are considered.

Results: Thirty medical studies are included in this report. Compared with the immunoglobulin G (IgG) test, the sensitivity of the 13C-urea breath test is higher in twelve comparisons, lower in six and equal in one, and its specificity is higher in 13, lower in three and equal in two. Compared with the stool antigen test, the sensitivity of the breath test is higher in nine comparisons, lower in three and equal in one, and its specificity is higher in nine, lower in two and equal in two. Compared with the rapid urease test, the sensitivity of the breath test is higher in four comparisons, lower in three and equal in four, and its specificity is higher in five, lower in five and equal in one. Compared with histology, the sensitivity is higher in one comparison and lower in two, and the specificity is higher in two and lower in one. One comparison each shows no difference between the 13C- and the 14C-urea breath test, and a lower sensitivity but higher specificity compared with the polymerase chain reaction (PCR). Whether the reported differences are statistically significant is stated in six of the 30 studies. Nine health-economic evaluations are considered in this report. The test-and-treat strategy based on the 13C-urea breath test is compared with a serology-based test-and-treat strategy in six studies and with a stool-antigen-based test-and-treat strategy in three. The breath-test strategy is cost-effective compared with the serological approach in three studies and is dominated by the stool antigen strategy in one. Four studies compare the breath-test-based test-and-treat strategy with empirical antisecretory therapy, the breath-test strategy proving cost-effective in two, and two studies compare it with empirical eradication therapy. In five studies, the breath-test-based test-and-treat strategy is compared with an endoscopy-based strategy: the breath-test strategy dominates the endoscopic procedure in two and is dominated by it in one.

Discussion: Both the medical and the economic studies show more or less serious shortcomings and yield heterogeneous results. The majority of the medical studies provide no information on the statistical significance of the reported differences between tests. In direct comparison, the 13C-urea breath test mostly shows higher test accuracy than the IgG test and the stool antigen test. No trend regarding sensitivity can be derived from the comparisons with the rapid urease test, whereas the specificity of the breath test appears to be higher. Too few results are available for the comparisons with histology, the 14C-urea breath test and PCR. In the economic literature, some results point to cost-effectiveness of the breath-test-based test-and-treat strategy relative to serology-based test-and-treat and to empirical antisecretory therapy. Valid results and economic evidence are lacking for trends in cost-effectiveness relative to the stool-antigen-based test-and-treat strategy and to empirical eradication therapy, and the findings on comparisons with endoscopy-based procedures are too heterogeneous. Overall, none of the economic models fully captures the complexity of managing patients with dyspeptic complaints.

Conclusions/recommendations: In summary, the evidence base for the medical and economic assessment of the 13C-urea breath test compared with other diagnostic methods is insufficient to recommend the breath test, instead of an endoscopy-based method, as the standard primary diagnostic procedure within a test-and-treat strategy for managing patients with dyspeptic complaints in the German health-care setting, particularly against the background of the guidelines of the German Society for Digestive and Metabolic Diseases (Deutsche Gesellschaft für Verdauungs- und Stoffwechselkrankheiten, DGVS).

Relevance: 80.00%

Abstract:

Forty-four species of Colletotrichum are confirmed as present in Australia based on DNA sequencing analyses. Many of these species were identified directly as a result of two workshops organised by the Subcommittee on Plant Health Diagnostics in Australia in 2015 that covered morphological and molecular approaches to identification of Colletotrichum. There are several other species of Colletotrichum reported from Australia that remain to be substantiated by DNA sequence-based methods. This body of work aims to provide a basis from which to critically examine a number of isolates of Colletotrichum deposited in Australian culture collections.

Relevance: 80.00%

Abstract:

At the ecosystem level, sustainable exploitation of fisheries resources depends not only on the status of target species but also on that of bycatch species, some of which are even more sensitive to exploitation. This is the case for a number of elasmobranch (skate, ray and shark) species whose abundance declined during the 20th century. Further, the biology of elasmobranchs is still poorly known, and traditional stock assessment methods that estimate abundance from fisheries catches and scientific survey data are expensive or even inapplicable given the small numbers observed. The GenoPopTaille project attempts to apply recent genetic methods for absolute population abundance estimation to the case of the thornback ray (Raja clavata), as well as to characterize its genetic diversity and population structure in the Northeast Atlantic. The poster will present the objectives, challenges and progress made so far by the project.
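One prominent family of genetic methods for absolute abundance estimation is close-kin mark-recapture (CKMR); whether the project uses exactly this design is an assumption here, so the sketch below is purely illustrative. The simplest parent-offspring-pair estimator:

```python
def ckmr_adult_abundance(n_adults: int, n_juveniles: int, pops_found: int) -> float:
    """Naive close-kin mark-recapture estimate of adult abundance.

    Each juvenile has two parents among N adults, so a randomly sampled
    adult is a parent of a randomly sampled juvenile with probability 2/N.
    Equating the expected number of parent-offspring pairs (POPs),
    n_adults * n_juveniles * 2/N, to the observed count and solving for N
    gives the estimator below.
    """
    return 2.0 * n_adults * n_juveniles / pops_found

# Hypothetical numbers: 500 adults and 800 juveniles genotyped, 16 POPs found.
print(ckmr_adult_abundance(500, 800, 16))  # -> 50000.0 adults
```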

Relevance: 80.00%

Abstract:

Leishmania donovani is the known causative agent of both cutaneous (CL) and visceral leishmaniasis in Sri Lanka. CL is considered to be under-reported, partly due to the relatively poor sensitivity and specificity of microscopic diagnosis. We compared the robustness of three previously described polymerase chain reaction (PCR) based methods for detecting Leishmania DNA in 38 punch biopsy samples from patients presenting with suspected lesions in 2010. Both the Leishmania genus-specific JW11/JW12 kDNA and the LITSR/L5.8S internal transcribed spacer (ITS) 1 PCR assays detected 92% (35/38) of the samples, whereas a kDNA assay specific for L. donovani (LdF/LdR) detected only 71% (27/38). All positive samples showed an L. donovani banding pattern upon HaeIII ITS1 PCR-restriction fragment length polymorphism analysis. Assay specificity was evaluated in samples containing Mycobacterium tuberculosis, Mycobacterium leprae, and human DNA: there was no cross-amplification in the JW11/JW12 and LITSR/L5.8S PCR assays, and the LdF/LdR PCR assay did not amplify M. leprae or human DNA, although 500 bp and 700 bp bands were observed in M. tuberculosis samples. In conclusion, this study showed that Sri Lankan CL can be diagnosed with high accuracy, to both genus and species level, using Leishmania DNA PCR assays.

Relevance: 80.00%

Abstract:

Light absorption by aerosols has a great impact on climate. A photoacoustic spectrometer (PA) coupled with aerosol-based classification techniques provides an in situ method that can quantify aerosol light absorption in real time, yet significant differences have been reported between this method and filter-based methods, or the so-called difference method based on light extinction and light scattering measurements. This dissertation focuses on developing calibration techniques for the instruments used in measuring the light absorption cross section: particle diameter measurements by the differential mobility analyzer (DMA) and light absorption measurements by the PA. Appropriate reference materials were explored for the calibration and validation of both measurements, and the light absorption of carbonaceous aerosols was investigated to provide a fundamental understanding of the absorption mechanism. The first topic of this dissertation is the development of calibration nanoparticles. Bionanoparticles were confirmed to be a promising reference material for particle diameter as well as ion mobility: experimentally, they demonstrated outstanding homogeneity in mobility compared with currently used calibration particles, and their high stability was also confirmed. A numerical method was developed to calculate the true distribution and to explain the broadening of the measured distribution. For the PA measurement, three aerosols with spherical or near-spherical shapes were investigated as candidate reference standards: C60, copper and silver. Comparisons between experimental photoacoustic absorption data and Mie theory calculations identified C60 particles with mobility diameters of 150 nm to 400 nm as an absorbing standard at wavelengths of 405 nm and 660 nm; copper particles with mobility diameters of 80 nm to 300 nm are also a promising reference candidate at a wavelength of 405 nm. The second topic of this dissertation is the investigation of light absorption by carbonaceous particles using the PA. Optical absorption spectra of size- and mass-selected laboratory-generated aerosols, consisting of black carbon (BC), BC with a non-absorbing coating (ammonium sulfate or sodium chloride), and BC with a weakly absorbing coating (brown carbon derived from humic acid), were measured across the visible to near-IR (500 nm to 840 nm). The manner in which BC mixes with each coating material was investigated, and the absorption enhancement of BC was found to be wavelength dependent. Optical absorption spectra were also taken of size- and mass-selected smoldering smoke produced from six common types of wood in a laboratory-scale apparatus.
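Comparing measured absorption with Mie theory amounts to computing the absorption cross section of a sphere from its size parameter and complex refractive index. A minimal sketch using the miepython package (the refractive index below is a placeholder, not a value from the dissertation):

```python
import numpy as np
import miepython

wavelength = 405.0   # nm
diameter = 150.0     # nm mobility diameter, assuming a spherical particle
m = 2.2 - 0.6j       # hypothetical complex refractive index (n - ik convention)

x = np.pi * diameter / wavelength           # size parameter
qext, qsca, qback, g = miepython.mie(m, x)  # Mie efficiencies
qabs = qext - qsca                          # absorption efficiency

geom = np.pi * (diameter / 2.0) ** 2        # geometric cross section, nm^2
c_abs = qabs * geom                         # absorption cross section, nm^2
print(f"C_abs = {c_abs:.1f} nm^2 at {wavelength:.0f} nm")
```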

Relevance: 80.00%

Abstract:

Part 1: Introduction

Relevance: 80.00%

Abstract:

Very high resolution remotely sensed images are an important tool for monitoring fragmented agricultural landscapes, allowing farmers and policy makers to make better decisions about management practices. An object-based methodology is proposed for the automatic generation of thematic maps of the classes present in a scene, combining edge-based and superpixel processing for small agricultural parcels. The methodology employs superpixels rather than pixels as the minimal processing units and links them to meaningful objects (obtained by the edge-based method) in order to facilitate the analysis of parcels. Performance analysis on a scene dominated by small agricultural parcels indicates that the combination of the superpixel and edge-based methods achieves a classification accuracy slightly better than either method alone, and comparable to that of traditional object-based analysis, while remaining fully automatic.
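A rough sketch of one way to combine the two ingredients with scikit-image: SLIC superpixels as processing units, scored against a Sobel edge map (an illustration of the general idea, not the paper's exact pipeline):

```python
import numpy as np
from skimage import color, data, filters, segmentation

image = data.astronaut()   # stand-in for a very-high-resolution scene
gray = color.rgb2gray(image)

# Superpixels: the minimal processing units instead of raw pixels.
superpixels = segmentation.slic(image, n_segments=600, compactness=10, start_label=0)

# Edge map: candidate parcel boundaries from the edge-based side.
edges = filters.sobel(gray)
boundaries = segmentation.find_boundaries(superpixels)

# Score each superpixel by the mean edge strength along its border.
scores = {}
for label in np.unique(superpixels):
    border = boundaries & (superpixels == label)
    if border.any():
        scores[label] = edges[border].mean()

# Superpixels whose borders have little edge support are candidates for
# merging into larger, meaningful parcel objects.
weak = sorted(scores, key=scores.get)[:10]
print("merge candidates:", weak)
```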

Relevance: 80.00%

Abstract:

The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a-posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering and solves the MAP problem as well as Gibbs sampling does, while requiring only a fraction of the computational effort. (Freely available code implementing the MAP-DP algorithm for Gaussian mixtures can be found at http://www.maxlittle.net/.) Unlike related small variance asymptotics (SVA), our method is non-degenerate and so inherits the “rich get richer” property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood, which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it with variational, SVA and sampling approaches, both from a computational complexity perspective and in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model whose performance contrasts favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model in which the random effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
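For intuition, here is a heavily simplified sketch of a MAP-DP-style hard assignment step for spherical Gaussian clusters with fixed variance and a prior cluster mean at the origin (the actual MAP-DP algorithm handles conjugate priors far more carefully; this only illustrates the DP-means-like simplicity with the CRP "rich get richer" term):

```python
import numpy as np

def map_dp_sketch(X, alpha=1.0, sigma2=1.0, max_iter=50):
    """Toy MAP-DP-style clustering for spherical Gaussians.

    Each point joins the cluster minimising a negative log posterior:
    squared distance to the cluster mean plus a Chinese restaurant
    process prior term -log(N_k); opening a new cluster (centred on
    the prior mean, here the origin) costs -log(alpha) instead.
    """
    n = X.shape[0]
    z = np.zeros(n, dtype=int)          # start with everything in one cluster
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            mask = np.arange(n) != i    # leave point i out
            zi, Xi = z[mask], X[mask]
            costs, cands = [], []
            for k in np.unique(zi):
                Nk = np.sum(zi == k)
                mu = Xi[zi == k].mean(axis=0)
                costs.append(np.sum((X[i] - mu) ** 2) / (2 * sigma2) - np.log(Nk))
                cands.append(k)
            costs.append(np.sum(X[i] ** 2) / (2 * sigma2) - np.log(alpha))
            cands.append(z.max() + 1)   # label for a brand-new cluster
            best = cands[int(np.argmin(costs))]
            if best != z[i]:
                z[i], changed = best, True
        if not changed:
            break
    return z

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
print(np.unique(map_dp_sketch(X), return_counts=True))  # ~two clusters
```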

Relevance: 80.00%

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system there are one or more processor cores that run the software and interact with the other hardware components, and their power consumption has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. A fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific piece of software is the key to choosing appropriate algorithms for writing power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. The need has therefore arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for predicting processor power consumption. Power predictability is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution and building the power model on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power-predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and a more than 100-fold speedup compared with conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm, based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
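The average-case idea can be made concrete: on a uniformly random permutation, each of the n(n-1)/2 element pairs is inverted with probability 1/2, so insertion sort performs roughly n(n-1)/4 comparisons on average. A sketch of the resulting linear energy model (the per-event coefficients below are placeholders, not the LEON3 measurements):

```python
def avg_comparisons(n: int) -> float:
    # Expected comparisons of insertion sort on a random permutation:
    # roughly n(n-1)/4, since each pair is an inversion with probability 1/2.
    return n * (n - 1) / 4.0

def estimated_energy_nj(n: int,
                        e_per_comparison_nj: float = 2.0,   # placeholder value
                        e_fixed_overhead_nj: float = 150.0  # placeholder value
                        ) -> float:
    """Average-case energy (nJ) as a linear function of the comparison count."""
    return e_fixed_overhead_nj + e_per_comparison_nj * avg_comparisons(n)

for n in (16, 64, 256):
    print(n, f"{estimated_energy_nj(n):.0f} nJ")
```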

Relevance: 80.00%

Abstract:

This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation towards high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory to high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or the more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections onto subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases. We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. In particular, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels, and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
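The classical statistic underlying all of this is easy to state for a single series. A minimal sketch of the CUSUM test for one change in the mean of an independent sequence (the thesis treats far more general Hilbert-space-valued, dependent data):

```python
import numpy as np

def cusum_change_in_mean(x: np.ndarray):
    """Standardised CUSUM statistic and change point estimate (iid errors)."""
    n = x.size
    s = np.cumsum(x - x.mean())              # CUSUM process; s[n-1] == 0
    sigma = x.std(ddof=1)                    # crude scale estimate (iid assumption)
    stat = np.abs(s[:-1]).max() / (sigma * np.sqrt(n))
    k_hat = int(np.argmax(np.abs(s[:-1]))) + 1
    return stat, k_hat                       # large stat -> reject "no change"

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
print(cusum_change_in_mean(x))               # k_hat should land near 100
```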

Relevance: 80.00%

Abstract:

The aims of this thesis were to determine the animal health status of organic dairy farms in Europe and to identify drivers for improving the current situation by means of a systemic approach. Prevalences of production diseases were determined in 192 herds in Germany, France, Spain and Sweden (Paper I), and stakeholder consultations were performed to investigate potential drivers for improving animal health at the sector level (ibid.). Interactions between farm variables were assessed through impact analysis and evaluated to identify general system behaviour and to classify components according to their outgoing and incoming impacts (Papers II-III). The mean values and variances of the prevalences indicate that the common rules of organic dairy farming in Europe do not result in consistently low levels of production diseases. Stakeholders deemed it necessary to improve the current status and were generally in favour of establishing thresholds for the prevalence of production diseases in organic dairy herds, as well as of taking action to improve farms below those thresholds. In order to close the gap between the organic principle of health and organic farming practice, a common objective of good animal health needs to be formulated, together with instruments to ensure, and prove, that the aim is pursued by all dairy farmers in Europe who sell their products under the organic label. Regular monitoring and evaluation of herd health performance against reference values are considered preconditions for identifying farms that do not reach the target and are thus in need of improvement. Graph-based impact analysis was shown to be a suitable method for modelling and evaluating the manifold interactions between farm factors and for identifying the most influential components at the farm level, taking into account direct and indirect impacts as well as impact strengths. Variables likely to affect the system as a whole, and the prevalence of production diseases in particular, varied largely between farms despite some general tendencies. This finding reflects the diversity of farm systems and underlines the importance of applying systemic approaches in health management. By reducing the complexity of farm systems and indicating farm-specific drivers, i.e. areas of a farm where changes will have a large impact, the presented approach has the potential to complement and enrich current advisory practice and to support farmers' decision-making on animal health.
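A toy version of such a graph-based impact analysis, with networkx: farm variables are nodes, weighted directed edges are impacts, and each variable is classified by its total outgoing (active) versus incoming (passive) impact. All node names and weights below are invented for illustration:

```python
import networkx as nx

# Hypothetical impact graph of farm variables.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("housing", "lameness", 0.8),
    ("feeding", "mastitis", 0.5),
    ("hygiene", "mastitis", 0.9),
    ("lameness", "milk_yield", 0.6),
    ("mastitis", "milk_yield", 0.7),
])

# Classify each variable: drivers push more impact out than they receive.
for v in G.nodes:
    active = G.out_degree(v, weight="weight")
    passive = G.in_degree(v, weight="weight")
    role = "driver" if active > passive else "outcome"
    print(f"{v:10s} active={active:.1f} passive={passive:.1f} -> {role}")
```

Indirect impacts (paths of length greater than one) could be folded in by, e.g., summing weighted path products or powers of the adjacency matrix.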

Relevance: 80.00%

Abstract:

Research has demonstrated that mining activities can cause serious impacts on the environment as well as on the surrounding communities, mainly due to the unsafe storage of mine tailings. This research focuses on the sustainability assessment of new technologies for the recovery of metals from mine residues. The assessment consists of evaluating the environmental, economic and social impacts through the life cycle based methods Life Cycle Assessment (LCA), Life Cycle Costing (LCC) and Social Life Cycle Assessment (SLCA). The analyses are performed on the Mondo Minerals bioleaching project, whose aim is to recover nickel and cobalt from the Sotkamo and Vuonos mine tailings. The LCA demonstrates that the project contributes to the avoided production of nickel and cobalt concentrates from new resources, hence reducing several environmental impacts. The LCC analysis shows that the company's main costs are linked to the bioleaching process, driven by electricity consumption and the chemicals used. The SLCA analyses the impacts on three main stakeholder categories: workers, local community and society. The results show that fair salary (or the absence of it) impacts the workers the most, while the impacts on the local community relate to access to material resources, and health and safety is the most affected category for society. The environmental and economic analyses demonstrate that the recovery of mine tailings may represent a good opportunity for mining companies both to reduce the environmental impacts linked to mine tailings and to increase profitability. In particular, the project helps reduce the amount of metal extracted from new resources and demonstrates that the use of bioleaching technology for the extraction of metals can be economically profitable.

Relevance: 80.00%

Abstract:

Biology is now a “Big Data Science” thanks to technological advancements allowing the characterization of the whole macromolecular content of a cell or a collection of cells. This opens interesting perspectives, but only a small portion of these data can be experimentally characterized. From this derives the demand for accurate and efficient computational tools for the automatic annotation of biological molecules. This is even more true for membrane proteins, on which my research project is focused, and it led to the development of two machine-learning-based methods: BetAware-Deep and SVMyr. BetAware-Deep is a tool for the detection and topology prediction of transmembrane beta-barrel proteins found in Gram-negative bacteria. These proteins are involved in many biological processes and are primary candidates as drug targets. BetAware-Deep combines a deep learning framework (bidirectional long short-term memory) with a probabilistic graphical model (grammatical-restrained hidden conditional random field). Moreover, it introduces a modified formulation of the hydrophobic moment, designed to include evolutionary information. BetAware-Deep outperformed all available methods in topology prediction and reported high scores in the detection task. Glycine myristoylation in eukaryotes is the binding of a myristic acid to an N-terminal glycine. SVMyr is a fast method based on support vector machines designed to predict this modification in datasets of proteomic scale. It takes octapeptides as input and exploits computational scores derived from experimental examples together with mean physicochemical features. SVMyr outperformed all available methods for co-translational myristoylation prediction and, as a unique feature, also allows the prediction of post-translational myristoylation. Both tools were designed with the best practices for machine-learning-based tools outlined by the bioinformatics community in mind, and both are made available via user-friendly web servers. All this makes them valuable tools for filling the gap between sequence data and annotated data.
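As an illustration of the SVMyr-style setup (not the published feature set or data), a support vector machine can be trained on fixed-length octapeptides encoded by simple per-residue physicochemical values:

```python
import numpy as np
from sklearn.svm import SVC

# Kyte-Doolittle hydrophobicity values for the residues used below.
KD = {"G": -0.4, "A": 1.8, "L": 3.8, "S": -0.8, "K": -3.9,
      "N": -3.5, "C": 2.5, "T": -0.7, "V": 4.2, "Q": -3.5}

def encode(octapeptide: str) -> list:
    # One numeric feature per residue; real tools use much richer encodings.
    return [KD.get(aa, 0.0) for aa in octapeptide]

# Toy training set: N-terminal octapeptides, label 1 = myristoylated.
# Sequences and labels are invented for illustration only.
peptides = ["GAQLSTLG", "GNAAAAKK", "GCVQSTAK", "SSKLQALG", "AKLSTGSV", "TTQLSNVG"]
labels   = [1, 1, 1, 0, 0, 0]

X = np.array([encode(p) for p in peptides])
clf = SVC(kernel="rbf").fit(X, labels)

query = "GQTLSKLG"   # hypothetical query peptide
print(clf.predict([encode(query)]), clf.decision_function([encode(query)]))
```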