878 results for physically based modeling
Abstract:
The aim of this thesis, carried out within the THESEUS project, is the development of a two-phase 2DV mathematical model, based on the existing IH-2VOF code developed by the University of Cantabria, able to represent the overtopping phenomenon and sediment transport together. Several numerical simulations were carried out in order to analyze the flow characteristics on a dike crest. The results show that the seaward/landward slope does not affect the evolution of the flow depth and velocity over the dike crest; the most important parameter is instead the relative submergence. Wave heights decrease and flow velocities increase as waves travel over the crest. In particular, with increasing submergence, the wave-height decay and the velocity increase are less marked. In addition, appropriate curves fitting the variation of the wave height and velocity over the dike crest were found. For both the wave height and the wave velocity, different fitting coefficients were determined on the basis of the submergence and of the significant wave height, and an equation describing the trend of the dimensionless coefficient c_h for the wave height was derived. These conclusions could be taken into consideration for design criteria and the upgrade of structures. In the second part of the thesis, new equations for the representation of sediment transport were introduced into the IH-2VOF model in order to represent beach erosion while waves run up and overtop sea banks during storms. The new model makes it possible to calculate sediment fluxes in the water column together with the sediment concentration, and to model the evolution of the bed profile. Different tests were performed under low-intensity regular waves with a homogeneous layer of sand on the bottom of a channel in order to analyze the erosion-deposition patterns and verify the model results.
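As a rough illustration of the kind of fitting described above, the sketch below recovers a decay coefficient from wave heights sampled along a crest. The exponential form, the function name, and the synthetic numbers are assumptions for illustration, not the thesis's actual fitted curve or coefficients.

```python
import numpy as np

def fit_decay_coefficient(x, H):
    """Fit H(x) = H0 * exp(-c * x) to wave heights sampled along the
    crest via a log-linear least-squares fit; returns (H0, c).
    The exponential form is an illustrative assumption."""
    slope, intercept = np.polyfit(x, np.log(H), 1)
    return float(np.exp(intercept)), float(-slope)

# synthetic data: incident height 0.5 m decaying with c = 0.8 m^-1
x = np.linspace(0.0, 2.0, 20)
H = 0.5 * np.exp(-0.8 * x)
H0, c = fit_decay_coefficient(x, H)
```

In practice the fitted coefficients would then be tabulated against the relative submergence and significant wave height, as the abstract describes.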
Abstract:
This work focuses on the analysis of sea-level change over the last century, based mainly on instrumental observations. Individual components of sea-level change during this period are investigated at both global and regional scales. Some of the geophysical processes responsible for current sea-level change, such as glacial isostatic adjustment and the present-day melting of terrestrial ice sources, have been modeled and compared with observations. A new value of global mean sea-level change based on tide gauge observations has been independently assessed at 1.5 mm/year, using corrections for glacial isostatic adjustment obtained with different models as a criterion for the tide gauge selection. The long-wavelength spatial variability of the main components of sea-level change has been investigated by means of traditional and new spectral methods. Complex non-linear trends and abrupt sea-level variations shown by tide gauge records have been addressed by applying different approaches to regional case studies. The Ensemble Empirical Mode Decomposition technique has been used to analyse tide gauge records from the Adriatic Sea and ascertain the existence of cyclic sea-level variations. An early-warning approach has been adopted to detect tipping points in sea-level records of the North East Pacific and their relationship with oceanic modes. Global sea-level projections to the year 2100 have been obtained by a semi-empirical approach based on an artificial neural network method. In addition, a model-based approach has been applied to the case of the Mediterranean Sea, obtaining sea-level projections to the year 2050.
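The global-mean trend assessment described above reduces, at its core, to a least-squares linear fit over a tide-gauge record. A minimal sketch, using a synthetic record and a hypothetical helper name, is:

```python
import numpy as np

def linear_trend_mm_per_year(years, level_mm):
    """Least-squares linear trend (mm/year) of a tide-gauge record."""
    slope, _ = np.polyfit(years, level_mm, 1)
    return float(slope)

# synthetic record rising at 1.5 mm/year over the 20th century,
# matching the globally averaged value reported above
years = np.arange(1900, 2001)
level_mm = 1.5 * (years - 1900)
trend = linear_trend_mm_per_year(years, level_mm)
```

Real records would first be corrected for glacial isostatic adjustment, as the abstract notes, before such a trend is meaningful.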
Abstract:
Parkinson’s disease is a neurodegenerative disorder caused by the death of the dopaminergic neurons of the substantia nigra of the basal ganglia. The process that leads to these neural alterations is still unknown. Parkinson’s disease affects above all the motor sphere, with a wide array of impairments such as bradykinesia, akinesia, tremor, postural instability, and singular phenomena such as freezing of gait. Moreover, in the last few years increasing attention has been paid to the fact that the degeneration of the basal ganglia circuitry induces not only motor but also cognitive alterations, not necessarily implying dementia, and that dopamine loss has further implications through dopamine-driven synaptic plasticity. At present, no neuroprotective treatment is available, and even though dopamine-replacement therapies and electrical deep brain stimulation can improve the life conditions of patients, they often present side effects in the long term and cannot recover the neural loss, which continues to advance. In the present thesis both motor and cognitive aspects of Parkinson’s disease and the basal ganglia circuitry were investigated: first by focusing on the sensory and balance issues of Parkinson’s disease by means of a new instrumented method, based on inertial sensors, that provides further information about postural control and the postural strategies used to attain balance; then by applying this newly developed approach to assess balance control in mild and severe patients, both ON and OFF levodopa replacement. Given the inability of levodopa to recover balance issues and the new physiological findings that underline the importance of non-dopaminergic neurotransmitters in Parkinson’s disease, an original computational model was then developed focusing on acetylcholine, the most promising neurotransmitter according to physiology, and its role in synaptic plasticity.
The rationale of this thesis is that such a multidisciplinary approach can provide insight into features of Parkinson’s disease that are still unresolved.
Abstract:
In the oil and gas industry, pore-scale imaging and simulation are on their way to becoming routine applications. Their further potential can be exploited in the environmental field, e.g., for the transport and fate of contaminants in the subsurface, the storage of carbon dioxide, and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) provides a non-destructive 3D imaging technique that is frequently used to investigate the internal structure of geological samples. The first aim of this dissertation was the implementation of an image-processing technique that removes the beam-hardening artifact of X-ray computed tomography and simplifies the segmentation of its data. The second aim of this work was to investigate the combined effects of pore-space characteristics and pore tortuosity, together with flow simulation and transport modeling in pore spaces, using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that beam hardening in the reconstructed images is a radial function from the sample edge to the center, and the different phases could be segmented automatically. Furthermore, beam-hardening effects of arbitrarily shaped objects were corrected by a surface-fitting algorithm. The least-squares support vector machine (LSSVM) method is characterized by a modular structure and is very well suited for pattern recognition and classification. For this reason, the LSSVM method was implemented as a pixel-based classification method. This algorithm is able to classify complex geological samples correctly, but in that case requires longer computation times, since multi-dimensional training data sets must be used.
The dynamics of the immiscible phases air and water were investigated by a combination of the pore-morphology approach and the lattice Boltzmann method for drainage and imbibition processes in 3D data sets of soils obtained by synchrotron-based XCT. Although the pore-morphology approach is a simple method of fitting spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis was observed for the capillary pressure and the hydraulic conductivity, caused by the predominantly connected pore networks and the available pore-size distribution. The hydraulic conductivity is a function of the water-saturation level and was compared with macroscopic calculations from empirical models; the data agree well, especially for high water saturations. To predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry, and fluid flow velocity was studied in a soil aggregate, using, e.g., the microorganism Escherichia coli. The asymmetric, long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and by the heterogeneity of the flow field. The biocolloid residence time was observed to be a function of the pressure gradient as well as of the colloid size. Our modeling results agree very well with previously published data.
Abstract:
In this thesis, different approaches for the modeling and simulation of the blood protein fibrinogen are presented. The approaches are meant to systematically connect the multiple time and length scales involved in the dynamics of fibrinogen in solution and at inorganic surfaces. The first part of the thesis covers simulations of fibrinogen at the all-atom level. Simulations of the fibrinogen protomer and dimer are performed in explicit solvent to characterize the dynamics of fibrinogen in solution. These simulations reveal an unexpectedly large and fast bending motion that is facilitated by molecular hinges located in the coiled-coil region of fibrinogen. This behavior is characterized by a bending and a dihedral angle, and the distribution of these angles is measured. As a consequence of the atomistic detail of the simulations, it is possible to illuminate small-scale behavior in the binding pockets of fibrinogen that hints at a previously unknown allosteric effect. In a second step, atomistic simulations of the fibrinogen protomer are performed at graphite and mica surfaces to investigate initial adsorption stages. These simulations highlight the different adsorption mechanisms at the hydrophobic graphite surface and the charged, hydrophilic mica surface. It is found that initial adsorption happens in a preferred orientation on mica. Many effects of practical interest involve aggregates of many fibrinogen molecules. To investigate such systems, time and length scales need to be simulated that are not attainable in atomistic simulations. It is therefore necessary to develop lower-resolution models of fibrinogen, which is done in the second part of the thesis. First, a systematically coarse-grained model is derived and parametrized based on the atomistic simulations of the first part. In this model the fibrinogen molecule is represented by 45 beads instead of nearly 31,000 atoms.
The intra-molecular interactions of the beads are modeled as a heterogeneous elastic network, while inter-molecular interactions are assumed to be a combination of electrostatic and van der Waals interactions. A method is presented that determines the charges assigned to the beads by matching the electrostatic potential in the atomistic simulation. Lastly, a phenomenological model is developed that represents fibrinogen by five beads connected by rigid rods with two hinges. This model only captures the large-scale dynamics seen in the atomistic simulations, but can shed light on experimental observations of fibrinogen conformations at inorganic surfaces.
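A heterogeneous elastic network of the kind used for the intra-molecular bead interactions can be sketched as harmonic springs with a separate stiffness per bead pair. Everything below (three beads instead of 45, the spring constants, the names) is an illustrative assumption, not the parametrized model of the thesis.

```python
import numpy as np

def elastic_network_energy(coords, ref_coords, pairs, k):
    """Heterogeneous elastic network energy:
    E = 0.5 * sum_ij k_ij * (|r_i - r_j| - |r0_i - r0_j|)**2,
    with one spring constant k_ij per connected bead pair."""
    E = 0.0
    for (i, j), kij in zip(pairs, k):
        d = np.linalg.norm(coords[i] - coords[j])
        d0 = np.linalg.norm(ref_coords[i] - ref_coords[j])
        E += 0.5 * kij * (d - d0) ** 2
    return E

# three beads on a line (toy stand-in for the 45-bead model)
ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
pairs = [(0, 1), (1, 2)]
k = [10.0, 5.0]          # heterogeneous: different stiffness per pair
stretched = ref.copy()
stretched[2, 0] = 2.5    # stretch the second spring by 0.5
E = elastic_network_energy(stretched, ref, pairs, k)
```

The energy is zero at the reference configuration by construction, which is what makes such a network a fluctuation model around the atomistic average structure.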
Abstract:
Ozone (O3) is an important oxidizing and greenhouse gas in the Earth's atmosphere. It influences climate, air quality, human health, and vegetation. Ecosystems such as forests are sinks for tropospheric ozone and will become more heterogeneous in the future owing to storms, plant pests, and changes in land use. These heterogeneities are expected to reduce the uptake of greenhouse gases and to cause significant feedbacks on the climate system. The atmosphere-biosphere exchange of ozone is governed by stomatal uptake, deposition on plant surfaces and soils, and chemical transformations. Understanding these processes and quantifying the ozone exchange for different ecosystems are prerequisites for scaling up from local measurements to regional ozone fluxes.
The eddy covariance method is used to measure vertical turbulent ozone fluxes. The use of closed-path eddy covariance systems based on fast chemiluminescence ozone sensors can lead to errors in the flux measurement. A direct comparison of ozone sensors mounted side by side provided insight into the factors that influence measurement accuracy. Systematic differences between individual sensors and the influence of different inlet tube lengths were examined by analyzing frequency spectra and determining correction factors for the ozone fluxes.
The experimentally determined correction factors showed no significant difference from correction factors derived using theoretical transfer functions, confirming the applicability of the theoretically derived factors for correcting ozone fluxes.
In the summer of 2011, measurements were carried out within the EGER (ExchanGE processes in mountainous Regions) project to contribute to a better understanding of atmosphere-biosphere ozone exchange in disturbed ecosystems. Ozone fluxes were measured on both sides of a forest edge separating a spruce forest from a windthrow. On the road-like clearing created by the storm "Kyrill" (2007), a secondary vegetation developed that differed from the originally dominant spruce forest in its phenology and leaf physiology. The mean nighttime flux above the spruce forest was -6 to -7 nmol m^-2 s^-1 and decreased to -13 nmol m^-2 s^-1 around noon. The ozone fluxes showed a clear relationship to plant transpiration and CO2 uptake, indicating that during the day most of the ozone was taken up by the plant stomata. The relatively high nighttime deposition was caused by non-stomatal processes. Deposition above the forest was roughly twice as high as above the clearing throughout the day. This ratio was consistent with the ratio of the plant area index (PAI). The disturbance of the ecosystem thus reduced the ability of the vegetation to act as a sink for tropospheric ozone. The pronounced difference in the ozone fluxes of the two vegetation types highlighted the challenge of regionalizing ozone fluxes in heterogeneously forested areas.
The measured fluxes were also compared with simulations performed with the chemistry model MLC-CHEM.
To evaluate the model with respect to the calculation of ozone fluxes, measured and modeled fluxes from two positions in the EGER site were used. Although the magnitudes of the fluxes agreed, the results showed a significant difference between measured and modeled fluxes. In addition, there was a clear dependence of this difference on relative humidity, with the difference decreasing at increasing humidity, showing that the model requires further improvement before being used for extensive ozone-flux studies.
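The eddy covariance method mentioned above computes a turbulent flux as the covariance of vertical wind speed and scalar concentration over an averaging period. A minimal sketch, with made-up series standing in for real high-frequency sensor data, is:

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Turbulent vertical flux as the covariance of vertical wind speed w
    and scalar concentration c: F = mean(w'c'), where primes denote
    deviations from the averaging-period mean (Reynolds decomposition)."""
    wp = w - w.mean()
    cp = c - c.mean()
    return float(np.mean(wp * cp))

# made-up series: updrafts carry ozone-depleted air -> deposition (F < 0)
w = np.array([1.0, -1.0, 1.0, -1.0])
c = np.array([-1.0, 1.0, -1.0, 1.0])
F = eddy_covariance_flux(w, c)
```

A negative flux, as in this toy example, corresponds to the downward ozone deposition discussed in the abstract; closed-path systems additionally require the inlet-tube corrections described above.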
Abstract:
The present thesis proposes a new physical equivalent-circuit model for a recently proposed semiconductor transistor, the 2-drain MSET (Multiple State Electrostatically Formed Nanowire Transistor), and presents a new software-based experimental setup developed for carrying out numerical simulations on the device and on equivalent circuits. As of 2015, we have already approached the scaling limits of the ubiquitous CMOS technology that has been at the forefront of mainstream technological advancement, so many researchers are exploring different ideas in the realm of electrical devices for logic applications, among them MSET transistors. The idea underlying MSETs is that a single multiple-terminal device could replace many traditional transistors. In particular, a 2-drain MSET is akin to a silicon multiplexer: a junction FET with independent gates but with a split drain, so that a voltage-controlled conductive path can connect either of the drains to the source. The first chapter of this work presents the theory of classical JFETs and their common equivalent-circuit models: the physical model and its derivation are presented, and the current state of equivalent circuits for the JFET is discussed. A physical model of a JFET with two independent gates, derived from previous results, is presented at the end of the chapter. A review of the characteristics of the MSET device is given in chapter 2, where the proposed physical model and its formulation are presented; a listing of the SPICE model is attached as an appendix at the end of this document. Chapter 3 concerns the results of the numerical simulations on the device. First the search for a suitable geometry is discussed, and then comparisons are made between results from finite-element simulations and equivalent-circuit runs.
Where points of challenging divergence were found between the two numerical results, the relevant physical processes are discussed. The fourth chapter describes the experimental setup: the GUI-based environments that allow the user to explore the four-dimensional solution space and to analyze the physical variables inside the device. It is shown how this software project has been structured to overcome the technical challenges of running multiple simulations in sequence and to provide a flexible platform for future research in the field.
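For reference, the theory of classical JFETs reviewed in the first chapter is commonly summarized by the square-law model, sketched below. The default parameter values are illustrative, not fitted to the MSET device, and this is the textbook single-gate model rather than the thesis's two-gate extension.

```python
def jfet_drain_current(vgs, vds, idss=10e-3, vp=-4.0):
    """Classical square-law model of an n-channel JFET.

    idss: saturation current at vgs = 0; vp: pinch-off voltage.
    Both default values are illustrative placeholders."""
    if vgs <= vp:                       # cutoff
        return 0.0
    vov = vgs - vp                      # overdrive voltage
    beta = idss / vp ** 2
    if vds < vov:                       # triode (ohmic) region
        return beta * (2.0 * vov * vds - vds ** 2)
    return beta * vov ** 2              # saturation: idss * (1 - vgs/vp)**2
```

A two-independent-gate model, as derived in the thesis, would replace the single vgs dependence with separate channel constrictions from each gate.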
Abstract:
Statistical shape models (SSMs) have been widely used as a basis for segmenting and interpreting complex anatomical structures. The robustness of these models is sensitive to the registration procedure, i.e., the establishment of a dense correspondence across a training data set. In this work, two SSMs based on the same training data set of scoliotic vertebrae, but on different registration procedures, were compared. The first model was constructed from the original binary masks without applying any image pre- or post-processing, and the second was obtained by applying a feature-preserving smoothing method to the original training data set, followed by a standard rasterization algorithm. The accuracy of the correspondences was assessed quantitatively by means of the maximum of the mean minimum distance (MMMD) and the Hausdorff distance (HD). The anatomical validity of the models was quantified by means of three different criteria: compactness, specificity, and model generalization ability. The objective of this study was to compare quasi-identical models based on standard metrics. Preliminary results suggest that the MMMD distance and the eigenvalues are not sensitive metrics for evaluating the performance and robustness of SSMs.
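The two correspondence metrics named above can be sketched directly from their standard definitions; the function names and toy 2D point sets are assumptions for illustration.

```python
import numpy as np

def mean_min_distance(A, B):
    """Mean, over the points of A, of the distance to the closest point of B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

def mmmd(A, B):
    """Maximum of the two directed mean minimum distances."""
    return max(mean_min_distance(A, B), mean_min_distance(B, A))

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A and B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [2.0, 0.0]])
```

The Hausdorff distance picks out the single worst mismatch, while the MMMD averages before taking the maximum, which is one reason the two can rank model pairs differently.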
Abstract:
A feature represents a functional requirement fulfilled by a system. Since many maintenance tasks are expressed in terms of features, it is important to establish the correspondence between a feature and its implementation in source code. Traditional approaches to establishing this correspondence exercise features to generate a trace of runtime events, which is then processed by post-mortem analysis. These approaches typically generate large amounts of data to analyze and, due to their static nature, do not support incremental and interactive analysis of features. We propose a radically different approach called live feature analysis, which provides a runtime model of features. Our approach analyzes features on a running system, makes it possible to grow feature representations by exercising different scenarios of the same feature, and identifies execution elements down to the sub-method level. We describe how live feature analysis is implemented effectively by annotating structural representations of code based on abstract syntax trees. We illustrate our live analysis with a case study where we achieve a more complete feature representation by exercising and merging variants of feature behavior, and we demonstrate the efficiency of our technique with benchmarks.
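A toy sketch of the core idea, annotating an abstract syntax tree with feature tags, is shown below using Python's ast module as a stand-in; the paper's actual infrastructure instruments a running system rather than a set of executed line numbers, so all names here are hypothetical.

```python
import ast

def annotate_features(source, feature, executed_lines):
    """Parse source into an AST and tag each function whose body lines
    were touched while exercising a feature (a toy stand-in for the
    structural annotation described above). Tags accumulate, so
    exercising further scenarios grows the feature representation."""
    tree = ast.parse(source)
    tagged = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            lines = {n.lineno for n in ast.walk(node) if hasattr(n, "lineno")}
            if lines & executed_lines:
                tagged.setdefault(node.name, set()).add(feature)
    return tagged

src = "def pay():\n    return 1\n\ndef audit():\n    return 2\n"
tags = annotate_features(src, "checkout", executed_lines={2})
```

Because the annotation lives on the structural representation rather than in a post-mortem trace, merging a second scenario is just another call with the same dictionary of tags, which is the incremental property the approach emphasizes.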
Abstract:
This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a high engine ΔP (the difference between exhaust and intake manifold pressures) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
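One standard way to handle the transport delays and sensor lags corrected above is to estimate the delay by maximizing the cross-correlation between a reference signal and the lagged sensor signal. This sketch is illustrative of that general technique, not the thesis's actual processing method; the names and synthetic signals are assumptions.

```python
import numpy as np

def estimate_lag(reference, delayed):
    """Estimate the delay (in samples) of a lagged sensor signal
    relative to a reference by maximizing their cross-correlation."""
    ref = reference - reference.mean()
    sig = delayed - delayed.mean()
    xcorr = np.correlate(sig, ref, mode="full")
    # index (len(ref) - 1) of the full correlation corresponds to zero lag
    return int(np.argmax(xcorr) - (len(ref) - 1))

# synthetic impulse delayed by 5 samples
ref = np.zeros(50)
ref[10] = 1.0
sig = np.zeros(50)
sig[15] = 1.0
lag = estimate_lag(ref, sig)
```

Once the lag is known, the sensor channel can be shifted back by that many samples so that transient events line up across instruments before modeling.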
Abstract:
A computationally efficient procedure for modeling the alkaline hydrolysis of esters is proposed based on calculations performed on methyl acetate and methyl benzoate systems. Extensive geometry and energy comparisons were performed on the simple ester methyl acetate. The effectiveness of performing high level single point ab initio energy calculations on the geometries obtained from semiempirical and ab initio methods was determined. The AM1 and PM3 semiempirical methods are evaluated for their ability to model the transition states and intermediates for ester hydrolysis. The Cramer/Truhlar SM3 solvation method was used to determine activation energies. The most computationally efficient way to model the transition states of large esters is to use the PM3 method. The PM3 transition structure can then be used as a template for the design of haptens capable of inducing catalytic antibodies.
Abstract:
Extensive research conducted over the past several decades has indicated that semipermeable membrane behavior (i.e., the ability of a porous medium to restrict the passage of solutes) may have a significant influence on solute migration through a wide variety of clay-rich soils, including both natural clay formations (aquitards, aquicludes) and engineered clay barriers (e.g., landfill liners and vertical cutoff walls). Restricted solute migration through clay membranes generally has been described using coupled flux formulations based on nonequilibrium (irreversible) thermodynamics. However, these formulations have differed depending on the assumptions inherent in the theoretical development, resulting in some confusion regarding the applicability of the formulations. Accordingly, a critical review of coupled flux formulations for liquid, current, and solutes through a semipermeable clay membrane under isothermal conditions is undertaken with the goals of explicitly resolving differences among the formulations and illustrating the significance of the differences from theoretical and practical perspectives. Formulations based on single-solute systems (i.e., uncharged solute), single-salt systems, and general systems containing multiple cations or anions are presented. Also, expressions relating the phenomenological coefficients in the coupled flux equations to relevant soil properties (e.g., hydraulic conductivity and effective diffusion coefficient) are summarized for each system. A major difference in the formulations is shown to exist depending on whether counter diffusion or salt diffusion is assumed. This difference between counter and salt diffusion is shown to affect the interpretation of values for the effective diffusion coefficient in a clay membrane based on previously published experimental data. 
Solute transport theories based on both counter and salt diffusion are then used to re-evaluate previously published column-test data for the same clay membrane. The results indicate that, despite the theoretical inconsistency between the counter-diffusion assumption and the salt-diffusion conditions of the experiments, the predictive ability of solute transport theory based on the assumption of counter diffusion is not significantly different from that based on the assumption of salt diffusion, provided that the input parameters used in each theory are derived under the same assumption inherent in the theory. Nonetheless, salt-diffusion theory is fundamentally correct and, therefore, is more appropriate for problems involving salt diffusion in clay membranes. Finally, the fact that solute diffusion cannot occur in an ideal or perfect membrane is not explicitly captured in any of the theoretical expressions for total solute flux in clay membranes, but rather is generally accounted for via inclusion of an effective porosity, n_e, or a restrictive tortuosity factor, tau_r, in the formulation of Fick's first law for diffusion. Both n_e and tau_r have been correlated as a linear function of membrane efficiency. This linear correlation is supported theoretically by pore-scale modeling of solid-liquid interactions, but experimental support is limited. Additional data are needed to bolster the validity of the linear correlation for clay membranes.
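Fick's first law with a restrictive tortuosity factor, combined with the linear correlation to membrane efficiency discussed above, can be written in a few lines. Taking tau_r = 1 - omega is an illustrative choice of that linear function (it makes diffusion vanish in an ideal membrane, omega = 1), not a value established by the review.

```python
def diffusive_flux(D0, dC_dx, n_e, omega):
    """Restricted Fickian flux: J = -n_e * tau_r * D0 * dC/dx.

    D0: free-solution diffusion coefficient; dC_dx: concentration
    gradient; n_e: effective porosity; omega: membrane efficiency.
    tau_r = 1 - omega is an illustrative linear correlation, chosen so
    that solute diffusion vanishes in an ideal membrane (omega = 1)."""
    tau_r = 1.0 - omega
    return -n_e * tau_r * D0 * dC_dx
```

The sign convention follows Fick's first law: flux runs down the concentration gradient, and the membrane terms only scale its magnitude.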
Abstract:
Smoke spikes occurring during transient engine operation have detrimental health effects and increase fuel consumption by requiring more frequent regeneration of the diesel particulate filter. This paper proposes a decision-tree approach to real-time detection of smoke spikes for control and on-board diagnostics purposes. A contemporary, electronically controlled heavy-duty diesel engine was used to investigate the deficiencies of smoke control based on the fuel-to-oxygen-ratio limit. With the aid of transient and steady-state data analysis and empirical as well as dimensional modeling, it was shown that the fuel-to-oxygen ratio was not estimated correctly during the turbocharger lag period. This inaccuracy was attributed to the large manifold pressure ratios and low exhaust gas recirculation flows recorded during the turbocharger lag period, which meant that the engine control module correlations for the exhaust gas recirculation flow and the volumetric efficiency had to be extrapolated. The engine control module correlations were based on steady-state data, and it was shown that, unless the turbocharger efficiency is artificially reduced, the large manifold pressure ratios observed during the turbocharger lag period cannot be achieved at steady state. Additionally, the cylinder-to-cylinder variation during this period was shown to be sufficiently significant to make the average fuel-to-oxygen ratio a poor predictor of the transient smoke emissions. The steady-state data also showed higher smoke emissions with higher exhaust gas recirculation fractions at constant fuel-to-oxygen-ratio levels. This suggests that, even if the fuel-to-oxygen ratios were estimated accurately for each cylinder, they would still be ineffective as smoke limiters.
A decision tree trained on snap-throttle data and pruned with engineering knowledge was able to use the inaccurate engine control module estimates of the fuel-to-oxygen ratio, together with the engine control module estimate of the exhaust gas recirculation fraction, the engine speed, and the manifold pressure ratio, to predict 94% of all spikes occurring over the Federal Test Procedure cycle. The advantages of this non-parametric approach over other commonly used parametric empirical methods such as regression are described. An application of accurate smoke spike detection, in which the injection pressure is increased at points with high opacity to reduce the cumulative particulate matter emissions substantially with a minimal increase in the cumulative nitrogen oxide emissions, is illustrated with dimensional and empirical modeling.
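The pruned tree itself is not reproduced in the abstract; the sketch below only shows the shape of such a rule set over the four named predictors. Every split threshold is an illustrative placeholder, not a trained or published value.

```python
def smoke_spike_predicted(fuel_o2_ratio, egr_fraction,
                          engine_speed_rpm, manifold_pressure_ratio):
    """Hand-written stand-in for a pruned decision tree over the four
    predictors named above. All thresholds are placeholders."""
    if manifold_pressure_ratio > 1.5:            # turbocharger-lag regime
        if fuel_o2_ratio > 0.8:                  # first opacity mode
            return True
        return egr_fraction > 0.25               # second opacity mode
    if engine_speed_rpm < 1200 and fuel_o2_ratio > 0.9:
        return True
    return False
```

A tree of this form is cheap enough to evaluate every engine cycle, which is what makes it usable for the real-time injection-pressure intervention described above.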
Abstract:
Region-specific empirically based ground-truth (EBGT) criteria used to estimate the epicentral-location accuracy of seismic events have been developed for the Main Ethiopian Rift and the Tibetan plateau. Explosions recorded during the Ethiopia-Afar Geoscientific Lithospheric Experiment (EAGLE) and the International Deep Profiling of Tibet and the Himalaya (INDEPTH III) experiment provided the necessary GT0 reference events. In each case, the local crustal structure is well known and handpicked arrival times were available, facilitating the establishment of the location-accuracy criteria through stochastic forward modeling of arrival times for epicentral locations. In the vicinity of the Main Ethiopian Rift, a seismic event is required to be recorded on at least 8 stations within the local Pg/Pn crossover distance and to yield a network-quality metric of less than 0.43 in order to be classified as EBGT5(95%) (GT5 with 95% confidence). These criteria were subsequently used to identify 10 new GT5 events with magnitudes greater than 2.1 recorded on the Ethiopian Broadband Seismic Experiment (EBSE) network and 24 events with magnitudes greater than 2.4 recorded on the EAGLE broadband network. The criteria for the Tibetan plateau are similar to the Ethiopia criteria, yet slightly less restrictive, as the network-quality metric needs only to be less than 0.45. Twenty-seven seismic events with magnitudes greater than 2.5 recorded on the INDEPTH III network were identified as GT5 based on the derived criteria. When considered in conjunction with criteria developed previously for the Kaapvaal craton in southern Africa, it is apparent that increasing restrictions on the network-quality metric mirror increases in the complexity of geologic structure from craton to plateau to rift.
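The screening rule stated above reduces to a small predicate; the station count and metric thresholds come from the abstract, while the function name and region keys are hypothetical identifiers.

```python
def is_gt5(n_local_stations, network_quality_metric, region):
    """EBGT5(95%) screening as described above: the event must be
    recorded on at least 8 stations within the local Pg/Pn crossover
    distance and beat the region-specific network-quality threshold
    (0.43 for the Main Ethiopian Rift, 0.45 for the Tibetan plateau).
    The region keys are hypothetical identifiers."""
    thresholds = {"main_ethiopian_rift": 0.43, "tibetan_plateau": 0.45}
    return (n_local_stations >= 8
            and network_quality_metric < thresholds[region])
```

The tighter rift threshold encodes the abstract's closing observation: more complex geologic structure demands a more restrictive network-quality metric.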