69 results for E-tailing
Abstract:
Graduate Program in Civil and Environmental Engineering - FEB
Abstract:
Recent Salmonella outbreaks have prompted the need for new processing options for peanut products. Traditional heating kill-steps have proven ineffective in lipid-rich matrices such as peanut products. High pressure processing is one such option for peanut sauce because the sauce has a high water activity, which has proved to be a large contributing factor in microbial lethality under high pressure processing. Four different formulations of peanut sauce were inoculated with a five-strain Salmonella cocktail and high pressure processed. Results indicate that increasing pressure or increasing hold time increases log10 reductions. The Weibull model was fitted to each kill curve, with b and n values significantly optimized for each curve (p-value < 0.05). Most curves had an n parameter value less than 1, indicating that the population underwent a dramatic initial reduction but tailed off as time increased, leaving a small resistant population. ANOVA of the b and n parameters shows more significant differences between b parameters than between n parameters, meaning that most treatments showed a similar tailing effect but differed in the shape of the curve. Comparisons between peanut sauce formulations at the same pressure treatments indicate that increasing the amount of organic peanut butter within the sauce formulation decreases log10 reductions. This could be due to a protective effect from the lipids in the peanut butter, or to other factors such as nutrient availability or water activity. Sauces pressurized at lower temperatures had decreased log10 reductions, indicating that cooler temperatures offered some protective effect. Log10 reductions exceeded 5 logs, indicating that high pressure processing may be a suitable kill-step for Salmonella in industrial processing of peanut sauces.
Future research should include high pressure processing of other peanut products with high water activities, such as sauces and syrups, as well as research to determine the effects of water activity and lipid composition within a food matrix such as peanut sauce.
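The fitted model above is the Weibull form for microbial survival, log10(N/N0) = -b * t^n, where n < 1 produces the steep initial drop followed by tailing described in the abstract. A minimal sketch of such a fit with SciPy, using synthetic illustrative data (not the thesis's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log10_survival(t, b, n):
    """Weibull survival model: log10(N/N0) = -b * t^n."""
    return -b * np.power(t, n)

# Synthetic hold times (min) and log10 reductions; illustrative values only.
t = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
true_b, true_n = 2.0, 0.4  # n < 1: sharp initial kill, then a resistant tail
y = weibull_log10_survival(t, true_b, true_n)

# Fit b and n to the (here noise-free) kill curve
(b_fit, n_fit), _ = curve_fit(weibull_log10_survival, t, y, p0=(1.0, 1.0))
```

A value of n_fit below 1 would be read, as in the abstract, as a population whose reduction rate slows over time, leaving a small resistant subpopulation.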
Abstract:
Within this PhD thesis, several methods were developed and validated that are suitable for environmental samples and material science, and should be applicable to the monitoring of particular radionuclides and to the analysis of the chemical composition of construction materials within the framework of the ESS project. The study demonstrated that ICP-MS is a powerful analytical technique for the ultrasensitive determination of 129I, 90Sr and lanthanides in both artificial and environmental samples such as water and soil. In particular, ICP-MS with a collision cell allows measuring extremely low isotope ratios of iodine. It was demonstrated that 129I/127I isotope ratios as low as 10^-7 can be measured with an accuracy and precision suitable for distinguishing sample origins. ICP-MS with a collision cell, in particular in combination with cool plasma conditions, reduces the influence of isobaric interferences at m/z = 90 and is therefore well suited for 90Sr analysis in water samples. However, the ICP-CC-QMS applied in this work is limited for the measurement of 90Sr due to the tailing of 88Sr+ and, in particular, Daly detector noise. Hyphenation of capillary electrophoresis with ICP-MS was shown to resolve atomic ions of all lanthanides and polyatomic interferences. The elimination of polyatomic and isobaric ICP-MS interferences was accomplished without compromising sensitivity by using the high resolution mode available on ICP-SFMS. The combination of laser ablation with ICP-MS allowed direct micro-local uranium isotope ratio measurements at ultratrace concentrations on the surface of biological samples. In particular, the application of a cooled laser ablation chamber improves the precision and accuracy of uranium isotope ratio measurements by up to one order of magnitude in comparison to a non-cooled laser ablation chamber.
To reduce the quantification problem, an on-line mono-gas solution-based calibration was built, based on the insertion of a DS-5 microflow nebulizer directly into the laser ablation chamber. A micro-local method to determine the lateral element distribution on NiCrAlY-based alloy and coating after oxidation in air was tested and validated. Calibration procedures involving external calibration, quantification by relative sensitivity coefficients (RSCs) and solution-based calibration were investigated. The analytical method was validated by comparison of the LA-ICP-MS results with data acquired by EDX.
Abstract:
The Web is constantly evolving: thanks to the Web 2.0 transition, new HTML5 features and the advent of cloud computing, the gap between Web and traditional desktop applications is tailing off. Web-apps are more and more widespread and bring several benefits compared to traditional ones. On the other hand, the reference technologies, JavaScript primarily, are not keeping pace, so a paradigm shift is taking place in Web programming, and many new languages and technologies are emerging. The first objective of this thesis is to survey the reference and state-of-the-art technologies for client-side Web programming, focusing in particular on concurrency and asynchronous programming. Taking into account the problems that affect existing technologies, we finally design simpAL-web, an innovative approach to Web-app development based on the Agent-oriented programming abstraction and the simpAL language.
Abstract:
Atmospheric aerosol particles affect humans and the environment in many ways. An accurate characterization of the particles helps to understand their effects and to assess the consequences. Particles can be characterized by their size, their shape and their chemical composition. Laser ablation mass spectrometry makes it possible to determine the size and the chemical composition of individual aerosol particles. In this work, the SPLAT (Single Particle Laser Ablation Time-of-flight mass spectrometer) was further developed for improved analysis of atmospheric aerosol particles in particular. The aerosol inlet was optimized to transfer as wide a particle size range as possible (80 nm - 3 µm) into the SPLAT and to focus the particles into a narrow beam. A new description of the relationship between particle size and particle velocity in vacuum was found. The alignment of the inlet was automated using stepper motors. The optical detection of the particles was improved so that particles smaller than 100 nm can now be detected. Building on the optical detection and the automated tilting of the inlet, a new method for characterizing the particle beam was developed. The control electronics of the SPLAT were improved so that the maximum analysis frequency is limited only by the ablation laser, which can ablate at a rate of at most about 10 Hz. Optimization of the vacuum system reduced the ion loss in the mass spectrometer by a factor of 4.
Besides the hardware development of the SPLAT, a large part of this work consisted of designing and implementing a software solution for analyzing the raw data acquired with the SPLAT. CRISP (Concise Retrieval of Information from Single Particles) is a software package built on IGOR PRO (Wavemetrics, USA) that allows efficient evaluation of the single-particle raw data. CRISP contains a newly developed algorithm for the automatic mass calibration of each individual mass spectrum, including the suppression of noise and of problems with signals that exhibit intense tailing. CRISP provides methods for the automatic classification of the particles; implemented are k-means, fuzzy-c-means and a form of hierarchical clustering based on a minimum spanning tree. CRISP offers the possibility to pre-process the data so that the automatic classification of the particles runs faster and yields results of higher quality. In addition, CRISP can easily sort particles according to predefined criteria. The data structures and infrastructure underlying CRISP were designed with maintainability and extensibility in mind.
In the course of this work, the SPLAT was successfully deployed in several campaigns, and the capabilities of CRISP were demonstrated on the acquired data sets. The SPLAT is now able to operate efficiently in the field for the characterization of atmospheric aerosol, while CRISP enables fast and targeted evaluation of the data.
Abstract:
We have sequenced the genome of Desulfosporosinus sp. OT, a Gram-positive, acidophilic sulfate-reducing Firmicute isolated from copper tailing sediment in the Norilsk mining-smelting area in Northern Siberia, Russia. This represents the first sequenced genome of a Desulfosporosinus species. The genome has a size of 5.7 Mb and encodes 6,222 putative proteins.
Abstract:
Microarrays have become established as instrumental for bacterial detection, identification and genotyping, as well as for transcriptomic studies. For gene expression analyses using limited numbers of bacteria (derived from in vivo or ex vivo origin, for example), RNA amplification is often required prior to labeling and hybridization onto microarrays. Evaluation of the fidelity of the amplification methods is crucial for the robustness and reproducibility of microarray results. We report here the first use of random primers and the highly processive Phi29 phage polymerase to amplify material for transcription profiling analyses. We compared two commercial amplification methods (GenomiPhi and MessageAmp kits) with direct reverse transcription as the reference method, focusing on the robustness of mRNA quantification using either microarrays or quantitative RT-PCR. Both amplification methods, using either poly-A tailing followed by in vitro transcription or direct strand displacement polymerase, showed appreciable linearity. The strand displacement technique was particularly affordable compared to in vitro transcription-based (IVT) amplification methods and consisted of a single-tube reaction leading to high amplification yields. Real-time measurements using low-, medium- and highly expressed genes revealed that this simple method provided linear amplification, with results equivalent in terms of relative messenger abundance to those obtained by conventional direct reverse transcription.
Abstract:
Modified nucleoside triphosphates (dA(Hs)TP, dU(POH)TP, and dC(Val)TP) bearing imidazole, hydroxyl, and carboxylic acid residues connected to the purine and pyrimidine bases through alkyne linkers were prepared. These modified dN*TPs were excellent substrates for various DNA polymerases in primer extension reactions. Moreover, the combined use of terminal deoxynucleotidyl transferase (TdT) and the modified dNTPs led to efficient tailing reactions that rival those of natural counterparts. Finally, the triphosphates were tolerated by polymerases under PCR conditions, and the ensuing modified oligonucleotides served as templates for the regeneration of unmodified DNA. Thus, these modified dN*TPs are fully compatible with in vitro selection methods and can be used to develop artificial peptidases based on DNA.
Abstract:
Stable carbon isotope analysis of methane (delta C-13 of CH4) on atmospheric samples is one key method to constrain the current and past atmospheric CH4 budget. A frequently applied measurement technique is gas chromatography (GC) isotope ratio mass spectrometry (IRMS) coupled to a combustion-preconcentration unit. This report shows that the atmospheric trace gas krypton (Kr) can severely interfere during the mass spectrometric measurement, leading to significant biases in delta C-13 of CH4, if krypton is not sufficiently separated during the analysis. According to our experiments, the krypton interference is likely composed of two individual effects, with the lateral tailing of the doubly charged Kr-86 peak affecting the neighbouring m/z 44 and partially the m/z 45 Faraday cups. Additionally, a broad signal affecting m/z 45 and especially m/z 46 is assumed to result from scattered ions of singly charged krypton. The introduced bias in the measured isotope ratios is dependent on the chromatographic separation, the krypton-to-CH4 mixing ratio in the sample, the focusing of the mass spectrometer as well as the detector configuration and can amount to up to several per mil in delta C-13. Apart from technical solutions to avoid this interference, we present correction routines to a posteriori remove the bias.
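The report's actual correction routines are instrument-specific; purely as an illustrative sketch of the a posteriori idea, one can picture the Kr-induced bias as a function of the krypton-to-CH4 mixing ratio, calibrated from standards. All names and numbers below are hypothetical:

```python
# Hypothetical a-posteriori correction of delta13C(CH4) for a krypton
# interference. We assume (for illustration only) that the bias grows
# linearly with the Kr/CH4 ratio, with a slope k determined from
# measurements of standards spiked with known amounts of Kr.

def correct_delta13c(delta_measured, kr_ch4_ratio, k=0.5):
    """Remove an assumed linear Kr-induced bias (per mil per unit ratio)."""
    return delta_measured - k * kr_ch4_ratio

# Example: a measured -47.0 permil at an illustrative Kr/CH4 ratio of 0.8
corrected = correct_delta13c(-47.0, 0.8)
```

In practice the dependence also involves chromatographic separation, focusing and detector configuration, as the abstract notes, so a real routine would be calibrated per instrument setup.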
Abstract:
Acid rock drainage (ARD) is a problem of international relevance with substantial environmental and economic implications. Reactive transport modeling has proven a powerful tool for the process-based assessment of metal release and attenuation at ARD sites. Although a variety of models has been used to investigate ARD, a systematic model intercomparison has not been conducted to date. This contribution presents such an intercomparison, involving three synthetic benchmark problems designed to evaluate model results for the most relevant processes at ARD sites. The first benchmark (ARD-B1) focuses on the oxidation of sulfide minerals in an unsaturated tailing impoundment affected by the ingress of atmospheric oxygen. ARD-B2 extends the first problem to include pH buffering by primary mineral dissolution and secondary mineral precipitation. The third problem (ARD-B3) additionally considers the kinetic, pH-dependent dissolution of silicate minerals under low-pH conditions. The set of benchmarks was solved by four reactive transport codes, namely CrunchFlow, Flotran, HP1 and MIN3P. The results comparison focused on spatial profiles of dissolved concentrations, pH and pE, pore gas composition, and mineral assemblages. In addition, transient profiles for selected elements and cumulative mass loadings were considered in the intercomparison. Despite substantial differences in model formulations, very good agreement was obtained between the various codes. Residual deviations between the results are analyzed and discussed in terms of their implications for capturing system evolution and long-term mass loading predictions.
Abstract:
Although sea-ice extent in the Bellingshausen-Amundsen (BA) seas sector of the Antarctic has shown significant decline over several decades, there are not enough data to draw any conclusion on sea-ice thickness and its change, either for the BA sector or for the entire Southern Ocean. This paper presents our results on snow and ice thickness distributions from the SIMBA 2007 experiment in the Bellingshausen Sea, using four different methods (ASPeCt ship observations, downward-looking camera imaging, ship-based electromagnetic induction (EM) sounding, and in situ measurements using ice drills). A snow freeboard and ice thickness model generated from the in situ measurements was then applied to contemporaneous ICESat (satellite laser altimetry) freeboard measurements to derive ice thickness at the ICESat footprint scale. Errors from the in situ measurements and from the ICESat freeboard estimates were incorporated into the model, so a thorough evaluation of the model and of the uncertainty of the ICESat ice thickness estimates is possible. Our results indicate that the ICESat-derived snow freeboard and ice thickness distributions (asymmetrical, unimodal, tailing to the right) for first-year ice (0.29 ± 0.14 m mean snow freeboard and 1.06 ± 0.40 m mean ice thickness), multi-year ice (0.48 ± 0.26 and 1.59 ± 0.75 m, respectively), and all ice together (0.42 ± 0.24 and 1.38 ± 0.70 m, respectively) for the study area are reasonable compared with the values from the in situ measurements, ASPeCt observations and EM measurements. The EM measurements can act as an appropriate supplement to the hourly ASPeCt observations from the ship's bridge and provide reasonable ice and snow distributions under homogeneous ice conditions.
Our proposed approaches, (1) using empirical equations relating snow freeboard to ice thickness based on in situ measurements, and (2) using isostatic equations that replace snow depth with snow freeboard (or empirical equations that convert freeboard to snow depth), are efficient and important ways to derive ice thickness from ICESat altimetry at the footprint scale for Antarctic sea ice. Deriving spatial and temporal snow and ice thickness from satellite altimetry for the BA sector, and for the entire Southern Ocean, is therefore possible.
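Approach (2) rests on the standard hydrostatic balance for floating, snow-covered ice. A minimal sketch, using typical literature densities rather than the paper's calibrated values:

```python
# Isostatic conversion of snow freeboard F (height of the snow surface above
# the waterline, as measured by ICESat) into ice thickness h_i. Densities are
# typical literature values, not the paper's calibrated numbers.

RHO_W = 1024.0  # sea water density (kg/m^3)
RHO_I = 915.0   # sea ice density (kg/m^3)
RHO_S = 320.0   # snow density (kg/m^3)

def ice_thickness(freeboard, snow_depth,
                  rho_w=RHO_W, rho_i=RHO_I, rho_s=RHO_S):
    """Hydrostatic balance: rho_w * draft = rho_i * h_i + rho_s * h_s,
    with draft = h_i + h_s - freeboard. Solved for h_i."""
    return (rho_w * freeboard + (rho_s - rho_w) * snow_depth) / (rho_w - rho_i)

# Replacing snow depth with the snow freeboard itself (the substitution the
# abstract describes) collapses the relation to h_i = rho_s * F / (rho_w - rho_i):
h_i = ice_thickness(0.42, 0.42)  # 0.42 m = mean snow freeboard for all ice
```

With these assumed densities, the 0.42 m mean freeboard maps to an ice thickness near the 1.38 m mean reported for all ice together, which is why the substitution is attractive where snow depth is unknown.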
Abstract:
Wide research is nowadays available on the characterization of hydraulic fills in terms of either static or dynamic behavior. However, comprehensive analyses of these soils when used in port or mining works are scarce. Moreover, the semi-empirical procedures for assessing the silo effect on cells in floating caissons, and the liquefaction potential of these soils during sudden loads or earthquakes, are based on studies in which the underlying influence parameters are not well known, yielding results with significant scatter. This is the case, for instance, of the hazards reported by the Barcelona Liquefaction working group, with the failure of harbor caissons in the Port of Barcelona in 2007. By virtue of this, a combined theoretical-numerical and experimental methodology is proposed to evaluate these problems. Within the theoretical and numerical scope, the study focuses on the framework and numerical tools capable of facing the different challenges of this problem. The complexity is manifold: the highly non-linear behavior of loose, lightly confined soils consolidating under self-weight; their potentially liquefiable nature; the significance of the hydromechanics of the soil-structure contact (a preferential path for water flow and lateral consolidation); and initial conditions with practically negligible effective stresses.
Within the experimental scope, a straightforward laboratory methodology is introduced for the hydromechanical characterization of the soil and the interface, without the need for complex laboratory devices or cumbersome procedures. The study therefore includes a brief overview of hydraulic fill execution, its main uses (land reclamation, filled cells, tailing dams, etc.) and the underlying phenomena (self-weight consolidation, silo effect, liquefaction, etc.), in order to establish a starting point for the present work. This overview ranges from the evolution of the traditional consolidation equations (Terzaghi, 1943) (Gibson, English & Hussey, 1967) and solving methodologies (Townsend & McVay, 1990) (Fredlund, Donaldson & Gitirana, 2009) to the contributions on the silo effect (Ranssen, 1895) (Ravenet, 1977) and on liquefaction phenomena (Casagrande, 1936) (Castro, 1969) (Been & Jefferies, 1985) (Pastor & Zienkiewicz, 1986). The novelty of the study lies in the development of a Finite Element Method (FEM) code, implemented in MATLAB and formulated exclusively for this problem. A theoretical (Biot, 1941) (Zienkiewicz & Shiomi, 1984) (Segura & Carol, 2004) and numerical framework (Zienkiewicz & Taylor, 1989) (Huerta & Rodríguez, 1992) (Segura & Carol, 2008) is introduced for multidimensional consolidation problems with frictional boundary conditions, together with the corresponding constitutive models (Pastor & Zienkiewicz, 1986) (Fu & Liu, 2011). An experimental methodology is presented for the laboratory tests, the calibration of the constitutive models and the characterization of index and flow parameters (Castro, 1969) (Bahda, 1997) (Been & Jefferies, 2006), using Hostun sand as the reference hydraulic fill. As a main contribution, a series of new direct shear tests is included for the hydromechanical characterization of the soil-concrete interface, for different formwork types and roughnesses. Finally, specific algorithms are designed for the solution of the set of governing differential equations that define this problem.
These algorithms are of great importance for handling the transient simulation of the consolidation of hydraulic fills and of related effects arising when the fills are placed in caisson cells, such as the silo effect and self-induced liquefaction. To this end, a 2D axisymmetric model with a coupled u-p formulation was implemented, with continuum elements and zero-thickness interface elements, aimed at simulating the conditions and self-weight consolidation of hydraulic fills once placed into floating caisson cells or close to retaining structures. This case study concerns granular materials in a very loose initial state with negligible effective stresses, that is, with practically all excess pore pressures generated by the self-weight consolidation process. Specific numerical algorithms and particular constitutive models are therefore required for both the continuum and the interface elements. Simulating the different fill placement procedures required modifying the algorithms so that the placement of these materials could be represented numerically and the results for the different procedures compared. The continuous updating of the soil parameters also makes the algorithm a powerful tool, providing an insightful set of variable profiles such as density, void ratio, solid fraction, excess pore pressure, and stresses and strains. Altogether, the model provides a better understanding of the silo effect, the term commonly used for the transient gradient of lateral pressures on silo-like retaining structures.
Finally, a series of comparisons between the model results and studies from the technical literature is included, both for self-weight consolidation (Fredlund, Donaldson & Gitirana, 2009) and for the silo effect (Puertos del Estado, 2006; EuroCode, 2006; Japan Tech. Stands., 2009). The study closes with the design of a decantation column prototype with frictional walls as the main future line of research.
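The thesis solves a coupled u-p FEM problem; as a far simpler illustration of the transient excess pore pressure dissipation it simulates, here is the classical 1D Terzaghi consolidation equation solved with an explicit finite-difference scheme (a sketch in Python, not the thesis's MATLAB code):

```python
import numpy as np

# 1D Terzaghi consolidation, du/dt = cv * d2u/dz2, explicit finite
# differences. Normalized units: cv = 1, layer height H = 1, uniform initial
# excess pore pressure u0 = 1. Drained top, impermeable base.
cv, H, nz = 1.0, 1.0, 21
dz = H / (nz - 1)
dt = 0.001
r = cv * dt / dz**2          # explicit stability requires r <= 0.5 (here 0.4)

u = np.ones(nz)              # initial excess pore pressure profile
u[0] = 0.0                   # drained top boundary
for _ in range(500):         # integrate to time factor Tv = cv*t/H^2 = 0.5
    un = u.copy()
    u[1:-1] = un[1:-1] + r * (un[2:] - 2.0 * un[1:-1] + un[:-2])
    u[-1] = u[-2]            # zero-gradient (impermeable) bottom
    u[0] = 0.0

# Average degree of consolidation; the analytical value at Tv = 0.5 is ~0.76
U = 1.0 - u.mean()
```

The coupled, large-strain, frictional-interface problem of the thesis adds much beyond this, but the same transient dissipation of excess pore pressure is at its core.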
Resumo:
Abandoned mining waste facilities from metal mining can pose serious environmental and safety risks because of their pollution potential and the possibility of accidents caused by structural failure. Unfortunately, in most European Union countries there are large numbers of closed tailings dams, tailings ponds, leach heaps and mine waste-rock dumps that remain abandoned without any structural or environmental control, because they originated in extractive activities that ended before the first applicable environmental regulations appeared. Some of these abandoned facilities may also contain appreciable metal reserves, since their wastes come from past extractive operations that used metallurgical and hydrometallurgical concentration processes with recovery efficiencies lower than those achievable with current extraction techniques. This doctoral thesis develops an analysis methodology that can serve as a useful decision-making tool in the remediation of mining waste facilities from metal mining. The methodology is built on the results obtained from an environmental risk analysis of the facilities under study, and establishes the most appropriate remediation techniques according to the specific risks each facility presents.
The proposed methodology also includes an analysis of the possibility of beneficiating the mining wastes contained in these facilities by extracting their metals. For this purpose, a series of expressions has been developed that can be used to assess the economic, environmental and sustainability viability of such recovery. These expressions are derived from the most recent experiences reported in the literature and from a study of the investment and operating costs that such operations may entail, depending on the extractive technologies employed. Finally, the last part of the thesis presents a practical case study applying the methodology, in which the remediation options for three abandoned tailings dams from lead and zinc extraction are studied and the possibility of beneficiating their wastes by extracting metals from the tailings is analyzed.
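The viability expressions themselves are not reproduced in the abstract. As a minimal sketch of the kind of screening calculation involved, the hypothetical Python function below computes the net present value of a tailings-reprocessing project from assumed tonnage, grade, recovery, price and cost figures; all names and numbers are illustrative assumptions, not values from the thesis.

```python
def reprocessing_npv(tonnes, grade, recovery, price_per_t_metal,
                     capex, opex_per_t, years, discount_rate):
    """Screening-level NPV of recovering metal from tailings.

    All parameters are illustrative assumptions, not values from the thesis:
    tonnes            -- total tailings to reprocess (t)
    grade             -- metal content (fraction, e.g. 0.02 for 2 % Zn)
    recovery          -- fraction of contained metal actually recovered
    price_per_t_metal -- metal price (currency units per tonne of metal)
    capex             -- up-front plant investment (paid in year 0)
    opex_per_t        -- operating cost per tonne of tailings treated
    years             -- project life; throughput is spread evenly
    discount_rate     -- annual discount rate (fraction)
    """
    throughput = tonnes / years                      # tonnes treated per year
    annual_revenue = throughput * grade * recovery * price_per_t_metal
    annual_cost = throughput * opex_per_t
    annual_cash_flow = annual_revenue - annual_cost
    # Discount each year's cash flow to year 0 and subtract the investment.
    npv = -capex + sum(annual_cash_flow / (1 + discount_rate) ** t
                       for t in range(1, years + 1))
    return npv

# Illustrative run: 1 Mt of tailings at 2 % Zn, 70 % recovery.
value = reprocessing_npv(tonnes=1_000_000, grade=0.02, recovery=0.70,
                         price_per_t_metal=2500, capex=15_000_000,
                         opex_per_t=12, years=5, discount_rate=0.08)
```

A real assessment of this kind would also fold in the environmental and sustainability terms the thesis develops; this sketch covers only the economic component.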
Abstract:
The zinc beneficiation process used at Vazante by Companhia Mineira de Metais (CMM) produces an alkaline tailing with low nutrient availability. This dissertation evaluates the potential of nodulated, mycorrhizal leguminous species for revegetating the CMM tailings dam. Two field experiments were installed on areas previously planted with Brachiaria sp. The first experiment comprised 36 treatments formed by combining 17 species + 1 control (no plants) with the presence or absence of cattle manure (2.0 L) in the planting hole. Each experimental unit consisted of 20 individuals of the same species planted in manually dug holes (25 x 25 x 25 cm) at 2 x 2 m spacing. All holes received a basal fertilization of 125 g of single superphosphate and 60 g of potassium chloride. Of the 17 species evaluated, 3 do not belong to the family Leguminosae and received, in addition to the basal fertilization, about 25 g of ammonium sulfate as top dressing. The second experiment was set up to evaluate the potential of leguminous species to benefit the establishment and growth of non-leguminous species in the revegetation of the CMM tailings dam. Three leguminous species (Enterolobium schomburgkii, Acacia mangium and Acacia holosericea) and three non-leguminous species (Lithraea brasiliensis, Cinnamomum glaziovii and Eugenia jambolana) were used in a (3 x 3) factorial scheme + 1 control, forming ten treatments arranged in randomized blocks with three replicates. Each plot consisted of 20 plants (10 leguminous + 10 non-leguminous) planted at 2 x 2 m spacing and with the same basal fertilization used in the first experiment.
All leguminous species used were previously inoculated with selected strains of atmospheric-nitrogen-fixing bacteria and with a mixture of mycorrhizal fungi supplied by Embrapa/Agrobiologia. The experiments were evaluated for plant establishment and growth (height and root-collar diameter) at 4, 12 and 24 months after planting. The results indicate that, among the species evaluated, the most suitable for the first stage of revegetation of the CMM tailings dam are: Acacia holosericea, Acacia farnesiana, Acacia auriculiformis, Mimosa caesalpiniifolia, Leucaena leucocephala, Mimosa bimucronata, Enterolobium schomburgkii and Prosopis juliflora. The success of a consortium of leguminous and non-leguminous species depends on choosing species combinations that avoid effective competition for water, nutrients and light, which could harm the less plastic species. Of the combinations evaluated, those with the greatest potential for the CMM tailings-dam revegetation program are the ones involving the species Lithraea brasiliensis.
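The treatment counts of the two designs above follow directly from their factorial layouts: (17 species + 1 control) crossed with manure presence/absence gives 36 treatments, and 3 leguminous x 3 non-leguminous combinations plus 1 control gives 10. The short sketch below only enumerates those layouts; the placeholder names for Experiment 1 are illustrative, while the Experiment 2 species names are taken from the abstract.

```python
from itertools import product

# Experiment 1: (17 species + 1 no-plant control) x (manure present/absent).
species = [f"species_{i}" for i in range(1, 18)] + ["control"]  # placeholders
manure = ["with_manure", "without_manure"]
experiment1 = list(product(species, manure))            # 18 * 2 = 36 treatments

# Experiment 2: 3 leguminous x 3 non-leguminous combinations + 1 control.
legumes = ["Enterolobium schomburgkii", "Acacia mangium", "Acacia holosericea"]
non_legumes = ["Lithraea brasiliensis", "Cinnamomum glaziovii",
               "Eugenia jambolana"]
experiment2 = list(product(legumes, non_legumes)) + [("control", "control")]
```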
Abstract:
Purpose – This article investigates whether intermediaries reduce loss aversion in the context of a high-involvement, non-frequently purchased hedonic product (tourism packages). Design/methodology/approach – The study incorporates the reference-dependent model into a multinomial logit model with random parameters, which controls for heterogeneity and allows representation of different correlation patterns between non-independent alternatives. Findings – Differentiated loss aversion is found: consumers buying high-involvement, non-frequently purchased hedonic products are less loss averse when using an intermediary than when dealing with each provider separately and booking their services independently. This result can be taken as identifying consumer-based added value provided by the intermediaries. Practical implications – Knowing the effect of a price increase is crucial for tourism collective brands (e.g. "sun and sea", "inland", "green destinations", "World Heritage destinations"). This is especially relevant today because many destinations have lowered prices to attract tourists, and will eventually have to restore them to normal levels. The negative effect of raising prices can be absorbed more easily via indirect channels than by individual providers, as the influence of loss aversion is lower for the former than for the latter. The key implication is that intermediaries can – and should – add value in competition with direct e-tailing. Originality/value – Research on loss aversion in retailing has been prolific but focused exclusively on low-involvement, frequently purchased products, without distinguishing between direct and indirect distribution channels. Less is known about other product types, such as high-involvement, non-frequently purchased hedonic products.
This article focuses on the latter and analyzes different patterns of loss aversion in direct and indirect channels.
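The abstract does not specify the reference-dependent utility function. As a minimal sketch under assumed functional forms, the hypothetical code below uses a plain multinomial logit (not the random-parameters specification of the article) in which price deviations from a reference price enter asymmetrically, with a larger coefficient on losses (prices above the reference) than on gains. All coefficients and prices are illustrative assumptions.

```python
import math

def reference_dependent_utility(price, reference_price, beta_gain, beta_loss):
    """Price contribution to utility relative to a reference price.

    Losses (price above reference) are weighted by beta_loss, gains by
    beta_gain; loss aversion corresponds to beta_loss > beta_gain.
    All coefficients here are illustrative assumptions.
    """
    deviation = reference_price - price      # positive = gain (cheaper)
    if deviation >= 0:
        return beta_gain * deviation
    return beta_loss * deviation             # deviation < 0: a loss

def logit_probabilities(utilities):
    """Standard multinomial-logit choice probabilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Two hypothetical tourism packages sharing one reference price.
prices = [900.0, 1100.0]
reference = 1000.0
# Stronger loss aversion, e.g. booking each provider directly ...
u_direct = [reference_dependent_utility(p, reference, 0.002, 0.006)
            for p in prices]
# ... versus weaker loss aversion when buying through an intermediary.
u_intermediary = [reference_dependent_utility(p, reference, 0.002, 0.003)
                  for p in prices]

p_direct = logit_probabilities(u_direct)
p_intermediary = logit_probabilities(u_intermediary)
# The package priced above the reference is penalized less under the
# intermediary's weaker loss aversion, so its choice probability is higher.
```

In this toy setup, raising a price above the reference costs the seller less demand in the indirect channel than in the direct one, which is the qualitative pattern the article's findings describe.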