896 results for Liouvillean, thermal equilibrium, return to equilibrium
Abstract:
Graduate Program in Animal Science - FCAV
Abstract:
An interdisciplinary study was conducted to evaluate the effects of drying method and storage time on the quality of natural and fully washed coffee beans, either sun-dried in the yard or mechanically dried at 60/40°C in an air dryer. Coffee (Coffea arabica L.) harvested as cherries was processed by the dry and wet methods, subjected to pre-drying in the yard, and then dried either in the yard in the sun or with air heated to 60/40°C until reaching a water content of 11% (wb). After reaching thermal equilibrium with the environment, the beans were packed in five-kilogram jute bags and stored in an uncontrolled environment for one year, with material from each treatment sampled every three months. Different methodologies were used to characterize the effect of drying and storage time on coffee quality. The fully washed coffee dried at 60/40°C required the shortest drying time, and therefore the least energy, to reach the storage point. For the natural coffee there was a significant effect of storage time on chemical, biochemical, and sensory quality; fully washed coffee proved more tolerant to drying than natural coffee regardless of drying method, showing better cup quality and less variation in chemical and biochemical composition.
Abstract:
We review recent progress in the mathematical theory of quantum disordered systems: the Anderson transition, including some joint work with Marchetti, the (quantum and classical) Edwards-Anderson (EA) spin-glass model and return to equilibrium for a class of spin-glass models, which includes the EA model initially in a very large transverse magnetic field. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4770066]
Abstract:
Spin systems in the presence of disorder are described by two sets of degrees of freedom, associated with orientational (spin) and disorder variables, which may be characterized by two distinct relaxation times. Disordered spin models have been mostly investigated in the quenched regime, which is the usual situation in solid state physics, and in which the relaxation time of the disorder variables is much larger than the typical measurement times. In this quenched regime, disorder variables are fixed, and only the orientational variables are duly thermalized. Recent studies in the context of lattice statistical models for the phase diagrams of nematic liquid-crystalline systems have stimulated the interest in going beyond the quenched regime. The phase diagrams predicted by these calculations for a simple Maier-Saupe model turn out to be qualitatively different from the quenched case if the two sets of degrees of freedom are allowed to reach thermal equilibrium during the experimental time, which is known as the fully annealed regime. In this work, we develop a transfer matrix formalism to investigate annealed disordered Ising models on two hierarchical structures, the diamond hierarchical lattice (DHL) and the Apollonian network (AN). The calculations follow the same steps used for the analysis of simple uniform systems, which amounts to deriving proper recurrence maps for the thermodynamic and magnetic variables in terms of the generations of the construction of the hierarchical structures. In this context, we may consider different kinds of disorder, and different types of ferromagnetic and anti-ferromagnetic interactions. In the present work, we analyze the effects of dilution, which are produced by the removal of some magnetic ions. The system is treated in a "grand canonical" ensemble. 
The introduction of two extra fields, related to the concentration of two different types of particles, leads to higher-rank transfer matrices as compared with the formalism for the usual uniform models. Preliminary calculations on a DHL indicate that there is a phase transition for a wide range of dilution concentrations. Ising spin systems on the AN are known to be ferromagnetically ordered at all temperatures; in the presence of dilution, however, there are indications of a disordered (paramagnetic) phase at low concentrations of magnetic ions.
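The recurrence-map construction described above has a textbook special case: the uniform (undiluted) ferromagnetic Ising model on the b = 2 diamond hierarchical lattice, where the bond renormalization reads x' = 2x²/(1 + x⁴) in the variable x = tanh K. A minimal sketch of that map and its unstable fixed point (my illustration; the dilution fields and higher-rank transfer matrices of the present work are not included):

```python
import math

def dhl_map(x):
    """One renormalization step for the b = 2 diamond hierarchical lattice:
    two bonds in series multiply tanh-couplings, two parallel branches add
    couplings; in x = tanh(K) this gives x' = 2x^2 / (1 + x^4)."""
    return 2 * x**2 / (1 + x**4)

def critical_point():
    """Unstable fixed point x* of the map, found by bisection on
    f(x) = x^4 - 2x + 1 in (0.1, 0.9); x* separates the paramagnetic
    flow (x -> 0) from the ferromagnetic flow (x -> 1)."""
    f = lambda x: x**4 - 2 * x + 1
    lo, hi = 0.1, 0.9          # brackets the nontrivial root
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_star = critical_point()
K_star = math.atanh(x_star)    # critical coupling J / (k_B T_c)
```

Iterating `dhl_map` from any x below x* ≈ 0.544 flows to the disordered fixed point, and from any x above it to the ordered one, which is how the phase transition shows up in this formalism.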
Abstract:
The Calabrian-Peloritani arc represents a key site for unraveling the evolution of surface processes on top of a subducting lithosphere. During the Pleistocene the arc uplifted at rates on the order of 1 mm/yr, forming a high-standing, low-relief upland (figure 2). Our study focuses on the relationship between tectonics and landscape evolution in the Sila Massif, the Messina strait, and the Peloritani Mts. Landforms reflect a competition between tectonic, climatic, and surficial processes. Many landscape evolution models that explore feedbacks between these competing processes predict, given steady forcing, a state of erosional equilibrium in which the rates of river incision and hillslope erosion balance rock uplift. It has been suggested that this may be the final constructive stage of orogenic systems. Assumptions of steady erosion and incision are used in interpreting exhumation and uplift rates from different geologic data, and in formulating fluvial incision and hillslope evolution models. In the Sila Massif we carried out cosmogenic isotope analyses on 24 samples of modern fluvial sediments to constrain the long-term (~10^3 yr) erosion rate averaged over the catchment area. Thirty-five longitudinal river profiles were analyzed to study the tectonic signal in the landscape evolution. The rivers analyzed exhibit a wide variety of profile forms, diverging from the equilibrium form. Generally the river profiles show at least two, and often three, distinct concave-up knickpoint-bounded segments, characterized by different values of the concavity and steepness indices. The river profiles suggest three main stages of incision. The values of ks and θ in the lower segments point to a decrease in river incision, probably due to an increasing uplift rate. The cosmogenic erosion rates show that the old upland landscape is eroding slowly, at ~0.1 mm/yr. By contrast, the flanks of the massif are eroding faster, at 0.4 to 0.5 mm/yr, due to river incision and hillslope processes. 
The cosmogenic erosion rates correlate linearly with the steepness indices and with the average hillslope gradient. In the Messina area the long-term erosion rates from low-T thermochronometry are of the same order as the millennial-scale cosmogenic erosion rates (1-2 mm/yr). In this part of the chain fast erosion has been active for several million years, probably controlled by the extensional tectonic regime. In the Peloritani Mts, apatite fission-track and (U-Th)/He thermochronometry were applied to constrain the thermal history of the basement rocks. Apatite fission-track ages range between 29.0±5.5 and 5.5±0.9 Ma, while apatite (U-Th)/He ages vary from 19.4 to 1.0 Ma. Most of the AFT ages are younger than the overlying terrigenous sequence, which in turn postdates the main orogenic phase. By coupling thermal modelling with the stratigraphic record, a Middle Miocene thermal event due to tectonic burial is unraveled. This event affected an inner-intermediate portion of the Peloritani belt delimited by the distribution of young AFT ages (<15 Ma). We interpret this thermal event as due to out-of-sequence thrusting in the inner portion of the belt. Young (U-Th)/He ages (c. 5 Ma) record a final exhumation stage with increasing denudation rates since Pliocene times, due to postorogenic extensional tectonics and regional uplift. In the final chapter we change the spatial scale to place the digital topography analysis and field data within a geodynamic model that can explain the surface evidence produced by the subduction process.
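The steepness and concavity indices mentioned above are conventionally extracted from the slope-area relation S = ks·A^(−θ) by log-log regression. A minimal sketch on synthetic data (the channel parameters below are invented, not the authors' 35 measured profiles):

```python
import math

def fit_slope_area(areas, slopes):
    """Least-squares fit of log10(S) = log10(ks) - theta * log10(A),
    returning (ks, theta) for the slope-area relation S = ks * A**-theta."""
    xs = [math.log10(a) for a in areas]
    ys = [math.log10(s) for s in slopes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return 10 ** intercept, -slope   # ks, theta

# Synthetic channel: ks = 120, theta = 0.45, drainage areas in m^2
areas = [10 ** e for e in range(5, 10)]
slopes = [120 * A ** -0.45 for A in areas]
ks, theta = fit_slope_area(areas, slopes)
```

In practice each knickpoint-bounded segment would be fitted separately, which is how the distinct (ks, θ) pairs of the two or three segments per profile are obtained.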
Abstract:
This thesis presents new methods to simulate systems with hydrodynamic and electrostatic interactions. Part 1 is devoted to computer simulations of Brownian particles with hydrodynamic interactions. The main influence of the solvent on the dynamics of Brownian particles is that it mediates hydrodynamic interactions. In the method, this is simulated by numerical solution of the Navier-Stokes equation on a lattice. To this end, the Lattice-Boltzmann method is used, namely its D3Q19 version. This model is capable of simulating compressible flow, which allows us to treat dense systems, in particular away from thermal equilibrium. The Lattice-Boltzmann equation is coupled to the particles via a friction force. In addition to this force, acting on point particles, we construct another coupling force, which comes from the pressure tensor. The coupling is purely local, i.e. the algorithm scales linearly with the total number of particles. In order to be able to map the physical properties of the Lattice-Boltzmann fluid onto a Molecular Dynamics (MD) fluid, the case of an almost incompressible flow is considered. The Fluctuation-Dissipation theorem for the hybrid coupling is analyzed, and a geometric interpretation of the friction coefficient in terms of a Stokes radius is given. Part 2 is devoted to the simulation of charged particles. We present a novel method for obtaining Coulomb interactions as the potential of mean force between charges which are dynamically coupled to a local electromagnetic field. This algorithm scales linearly, too. We focus on the Molecular Dynamics version of the method and show that it is intimately related to the Car-Parrinello approach, while being equivalent to solving Maxwell's equations with a freely adjustable speed of light. The Lagrangian formulation of the coupled particles-fields system is derived. The quasi-Hamiltonian dynamics of the system is studied in great detail. 
For implementation on the computer, the equations of motion are discretized with respect to both space and time. The discretization of the electromagnetic fields on a lattice, as well as the interpolation of the particle charges onto the lattice, is given. The algorithm is as local as possible: only nearest-neighbor sites of the lattice interact with a charged particle. Unphysical self-energies arise as a result of the lattice interpolation of charges and are corrected by a subtraction scheme based on the exact lattice Green's function. The method allows easy parallelization using standard domain decomposition. Some benchmarking results of the algorithm are presented and discussed.
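The friction coupling and its fluctuation-dissipation balance can be illustrated in one dimension, stripped of the lattice machinery. This is a plain Langevin sketch, not the thesis' D3Q19 implementation, and all parameter values are arbitrary:

```python
import random

def friction_step(v, u, gamma, m, kT, dt, rng):
    """One Euler-Maruyama step for a point particle frictionally coupled
    to the local fluid velocity u.  The thermal-force variance
    2*gamma*kT*dt is fixed by the fluctuation-dissipation theorem, so the
    particle equilibrates at temperature kT in the fluid's rest frame."""
    drag = -gamma * (v - u)                              # friction toward fluid velocity
    kick = rng.gauss(0.0, (2 * gamma * kT * dt) ** 0.5)  # thermal noise
    return v + (drag * dt + kick) / m

# Deterministic limit (kT = 0): the particle velocity relaxes to the fluid's.
rng = random.Random(0)
v = 0.0
for _ in range(5000):
    v = friction_step(v, 0.5, 1.0, 1.0, 0.0, 0.01, rng)

# Thermal case: equipartition <v^2> = kT/m for a fluid at rest.
rng = random.Random(42)
w, v2 = 0.0, []
for i in range(220_000):
    w = friction_step(w, 0.0, 1.0, 1.0, 1.0, 0.01, rng)
    if i >= 20_000:
        v2.append(w * w)
mean_v2 = sum(v2) / len(v2)
```

The same balance between the drag coefficient and the noise amplitude is what the thesis analyzes for the hybrid particle-fluid coupling, there with the fluid velocity interpolated from the lattice.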
Abstract:
The experiment presented in this work for measuring the magnetic moment of the proton is based on measuring the ratio of the cyclotron frequency and the Larmor frequency of a single proton stored in a cryogenic double Penning trap. In this work, for the first time, two of the three motional frequencies of the proton were detected simultaneously and non-destructively, in thermal equilibrium with their respective highly sensitive detection systems, halving the measurement time needed to determine the cyclotron frequency. Furthermore, in the course of this work individual spin transitions of a single proton were detected for the first time, which enables the determination of the Larmor frequency. Using the continuous Stern-Gerlach effect, a so-called magnetic bottle couples the magnetic moment to the axial motional mode of the proton. A change of the spin state therefore causes a jump in the axial oscillation frequency, which can be measured non-destructively. Detection of the spin state is complicated by the fact that the axial frequency depends not only on the spin moment but also on the orbital moment. The great experimental challenge therefore lies in preventing energy fluctuations in the radial motional modes in order to guarantee the detectability of spin transitions. Through systematic studies of the stability of the axial frequency and a complete redesign of the experimental setup, this goal was achieved. For the first time, the spin state of a single proton can be determined with high reliability. This work thus represents a decisive step toward a high-precision measurement of the magnetic moment of the proton.
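For orientation, the frequency-ratio logic of such a measurement can be written down in a few lines: the proton magnetic moment in nuclear magnetons equals the ratio of Larmor to cyclotron frequency, g/2 = ν_L/ν_c. A hedged sketch with rounded CODATA constants; the 1.9 T field and the μ_p/μ_N value used to generate ν_L are illustrative, not results of this thesis:

```python
import math

# Rounded CODATA values (assumptions of this sketch)
Q = 1.602176634e-19     # elementary charge, C
M_P = 1.67262192e-27    # proton mass, kg

def cyclotron_frequency(B):
    """Free-space cyclotron frequency nu_c = q*B / (2*pi*m) of a proton
    in a magnetic field B (tesla)."""
    return Q * B / (2 * math.pi * M_P)

def g_factor(nu_L, nu_c):
    """The g-factor follows from the measured frequency ratio:
    mu_p / mu_N = g/2 = nu_L / nu_c."""
    return 2 * nu_L / nu_c

nu_c = cyclotron_frequency(1.9)     # ~1.9 T trap field (illustrative)
nu_L = nu_c * 2.7928473             # Larmor frequency for the known mu_p/mu_N
```

The point of the ratio is that the magnetic field B cancels, which is why the experiment measures both frequencies on the same stored proton.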
Abstract:
Thermoelectricity describes the reversible interplay of electricity and temperature T in systems away from thermal equilibrium. In such systems, a temperature gradient along a thermoelectric material produces a continuous imbalance in the energy distribution of the charge carriers. This results in a diffusion current of high-energy charge carriers toward the cold end and of low-energy charge carriers toward the hot end. Since no current flows in an open circuit, an imbalance of the currents is compensated by the build-up of an electric field. The resulting voltage is called the Seebeck voltage. Via a suitable load, following Ohm's law, a current can then flow and electrical energy can be extracted. The reverse case is described by the so-called Peltier effect, in which a current flowing through two different materials joined together heats or cools the junction. The efficiency of a thermoelectric material can be characterized by the dimensionless quantity ZT = S²σT/κ, which combines the material-specific quantities electrical conductivity σ and thermal conductivity κ with the Seebeck coefficient S as a measure of the voltage generated for a given temperature difference. This work pursues the approach of synthesizing glass-ceramic materials containing thermoelectric crystal phases, characterizing them structurally, and measuring their thermoelectric properties in order to establish a structure-property correlation. In detail, an electron-conducting glass ceramic (main phase SrTi_xNb_{1-x}O_3) and a hole-conducting glass ceramic (main phase Bi_2Sr_2Co_2O_y) are investigated. Glass ceramics are partially crystalline materials that can be produced from glass melts by controlled crystallization. 
The physical properties of these systems can be tuned via the degree of crystallization and the type of crystal species precipitated. Owing to their residual glass phase, glass ceramics offer low thermal conductivity, and the Fermi energy can be shifted toward the conduction or valence band by doping. Moreover, because they are pore-free, glass-ceramic materials have better mechanical properties than ceramics and are less susceptible to the influence of the oxygen partial pressure p_{O_2} on these parameters. A glass-ceramic and a mixed ceramic/glass-ceramic thermoelectric module made from the developed materials are designed, prepared, contacted, and characterized with respect to their performance.
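The figure of merit ZT = S²σT/κ is a one-line computation; a minimal sketch with invented, order-of-magnitude material values (not measurements from this work):

```python
def figure_of_merit(S, sigma, kappa, T):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.

    S     : Seebeck coefficient, V/K
    sigma : electrical conductivity, S/m
    kappa : thermal conductivity, W/(m K)
    T     : absolute temperature, K
    """
    return S**2 * sigma * T / kappa

# Illustrative oxide-like numbers: S = 150 uV/K, sigma = 5e4 S/m,
# kappa = 2 W/(m K), evaluated at 800 K.
zt = figure_of_merit(150e-6, 5e4, 2.0, 800.0)
```

The formula makes the design trade-off explicit: a low-κ residual glass phase raises ZT only as long as doping keeps σ and S high.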
Abstract:
Brain functions, such as learning, orchestrating locomotion, memory recall, and processing information, all require glucose as a source of energy. During these functions, the glucose concentration decreases as the glucose is consumed by brain cells. By measuring this drop in concentration, it is possible to determine which parts of the brain are used during specific functions and, consequently, how much energy the brain requires to complete the function. One way to measure in vivo brain glucose levels is with a microdialysis probe. The drawback of this analytical procedure, as with many steady-state fluid flow systems, is that the probe fluid will not reach equilibrium with the brain fluid. Therefore, brain concentration is inferred by taking samples at multiple inlet glucose concentrations and finding a point of convergence. The goal of this thesis is to create a three-dimensional, time-dependent, finite element representation of the brain-probe system in COMSOL 4.2 that describes the diffusion and convection of glucose. Once validated with experimental results, this model can then be used to test parameters that experiments cannot access. When simulations were run using published values for physical constants (i.e. diffusivities, density and viscosity), the resulting model glucose concentrations were within the error of the experimental data. This verifies that the model is an accurate representation of the physical system. In addition to accurately describing the experimental brain-probe system, the model I created is able to show the validity of zero-net-flux for a given experiment. A useful discovery is that the slope of the zero-net-flux line is dependent on perfusate flow rate and diffusion coefficients, but is independent of brain glucose concentrations. The model was simplified with the realization that the perfusate is at thermal equilibrium with the brain throughout the active region of the probe. 
This allowed for the assumption that all model parameters are temperature independent. The time to steady-state for the probe is approximately one minute. However, the signal degrades in the exit tubing due to Taylor dispersion, on the order of two minutes for two meters of tubing. Given an analytical instrument requiring a five μL aliquot, the smallest brain process measurable for this system is 13 minutes.
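The zero-net-flux procedure described above can be sketched numerically: the concentration gain C_out − C_in is linear in the inlet concentration, and the brain level is the inlet concentration at which the fitted line crosses zero. A minimal sketch on synthetic data (the extraction fraction and glucose levels are invented):

```python
def zero_net_flux(c_in, c_gain):
    """Fit c_gain = m * c_in + b by least squares and return the inlet
    concentration at which net flux vanishes (the inferred brain
    concentration), c* = -b / m."""
    n = len(c_in)
    mx = sum(c_in) / n
    my = sum(c_gain) / n
    m = sum((x - mx) * (y - my) for x, y in zip(c_in, c_gain)) \
        / sum((x - mx) ** 2 for x in c_in)
    b = my - m * mx
    return -b / m

# Synthetic experiment: true brain glucose 1.2 mM, extraction fraction 0.3,
# so each inlet concentration gains E * (C_brain - C_in) across the membrane.
c_in = [0.0, 0.5, 1.0, 1.5, 2.0]
c_gain = [0.3 * (1.2 - c) for c in c_in]
c_brain = zero_net_flux(c_in, c_gain)
```

The slope of the fitted line plays the role of the extraction fraction, which is why it depends on perfusate flow rate and diffusivities but not on the brain concentration itself.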
Abstract:
Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.
Abstract:
How do sportspeople succeed in a non-collaborative game? An illustration of a perverse side effect of altruism. Are team sports specialists predisposed to collaboration? The scientific literature on this topic is divided. The present article attempts to end this debate by applying experimental game theory. We constituted three groups of volunteers (all students aged around 20): 25 team sports specialists; 23 individual sports specialists (gymnasts, track & field athletes and swimmers) and a control group of 24 non-sportspeople. Each subgroup was divided into 3 teams that played against each other in turn (and not against teams from other subgroups). The teams played a game based on the well-known Prisoner's Dilemma (Tucker, 1950) - the paradoxical "Bluegill Sunbass Game" (Binmore, 1999) with three Nash equilibria (two suboptimal equilibria with a pure strategy and an optimal equilibrium with a mixed, egotistical strategy (p = 1/2)). This game also features a Harsanyi equilibrium (based on constant compliance with a moral code and altruism by empathy: "do not unto others that which you would not have them do unto you"). How, then, was the game played? Two teams of 8 competed on a handball court. Each team wore a distinctive jersey. The game lasted 15 minutes and the players were allowed to touch the handball ball with their feet or hands. After each goal, each team had to return to its own half of the court. Players were allowed to score in either goal and thus cooperate with their teammates or not, as they saw fit. A goal against the nominally opposing team (a "guardian" strategy, by analogy with the Bluegill Sunbass Game) earned a point for everyone in the team. For an own goal (a "sneaker" strategy), only the scorer earned a point - hence the paradox. If all the members of a team work together to score a goal, everyone is happy (the Harsanyi solution). 
However, the situation was not balanced in the Nashian sense: each player had a reason to be disloyal to his/her team at the merest opportunity. But if everyone adopts a "sneaker" strategy, the game becomes a free-for-all and the chances of scoring become much slimmer. In a context in which doubt reigns as to the honesty of team members and "legal betrayals", what type of sportsperson will score the most goals? By analogy with the Bluegill Sunbass Game, we recorded direct motor interactions (passes and shots) based on either a "guardian" tactic (i.e. collaboration within the team) or a "sneaker" tactic (shots and passes against the player's designated team). So, was the group of team sports specialists more collaborative than the other two groups? The answer was no. A statistical analysis (difference from chance in a logistic regression) enabled us to draw three conclusions: (1) for the team sports specialists, the Nash equilibrium (1950) was stronger than the Harsanyi equilibrium (1977); (2) the sporting principles of equilibrium and exclusivity are not appropriate in the Bluegill Sunbass Game and are quickly abandoned by the team sports specialists, who are opportunists focused solely on winning and who do well out of it; (3) the most altruistic players are the main losers in the Bluegill Sunbass Game: they keep the game alive but contribute to their own defeat. In our experiment, the most altruistic players tended to be the females and the individual sports specialists.
Abstract:
An understanding of sediment redox conditions across the Paleocene-Eocene thermal maximum (PETM) (~55 Ma) is essential for evaluating changes in processes that control deep-sea oxygenation, as well as identifying the mechanisms responsible for driving the benthic foraminifera extinction. Sites cored on the flanks of Walvis Ridge (Ocean Drilling Program Leg 208, Sites 1262, 1266, and 1263) allow us to examine changes in bottom and pore water redox conditions across a ~2 km depth transect of deep-sea sediments of PETM age recovered from the South Atlantic. Here we present measurements of the concentrations of the redox-sensitive trace metals manganese (Mn) and uranium (U) in bulk sediment as proxies for redox chemistry at the sediment-water interface and below. All three Walvis Ridge sites exhibit bulk Mn enrichment factors (EF) ranging between 4 and 12 prior to the warming, values at crustal averages (Mn EF = 1) during the warming interval, and a return to pre-event values during the recovery period. U enrichment factors across the PETM remain at crustal averages (U EF = 1) at Site 1262 (deep) and Site 1266 (intermediate depth). U enrichment factors at Site 1263 (shallow) peaked at 5 immediately prior to the PETM and dropped to values near crustal averages during and after the event. All sites were lower in dissolved oxygen content during the PETM. Before and after the PETM, the deep and intermediate sites were oxygenated, while the shallow site was suboxic. Our geochemical results indicate that oxygen concentrations did indeed drop during the PETM but not sufficiently to cause massive extinction of benthic foraminifera.
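The enrichment factors quoted above are conventionally computed by double normalization to aluminum and to average crustal composition, EF = (X/Al)_sample / (X/Al)_crust. A minimal sketch (the sample concentrations and the rounded crustal averages are illustrative, not values from this study):

```python
def enrichment_factor(x_sample, al_sample, x_crust, al_crust):
    """Al-normalized enrichment factor EF = (X/Al)_sample / (X/Al)_crust.
    EF ~ 1 means crustal abundance; EF >> 1 indicates authigenic enrichment
    of the trace metal under the prevailing redox conditions."""
    return (x_sample / al_sample) / (x_crust / al_crust)

# Made-up bulk-sediment measurement (ppm), with rounded upper-crust
# averages: Mn ~ 600 ppm, Al ~ 80,000 ppm.
ef_mn = enrichment_factor(x_sample=1800.0, al_sample=20000.0,
                          x_crust=600.0, al_crust=80000.0)
```

Normalizing to Al corrects for dilution by carbonate or biogenic phases, so changes in EF track redox-driven metal uptake rather than changes in bulk sedimentation.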
Abstract:
Using a new Admittance-based model for electrical noise, able to handle Fluctuations and Dissipations of electrical energy, we explain the phase noise of oscillators that use feedback around L-C resonators. We show that Fluctuations produce the Line Broadening of their output spectrum around its mean frequency f0 and that the Pedestal of phase noise far from f0 comes from Dissipations modified by the feedback electronics. The charge noise power 4FkT/R C²/s that disturbs the otherwise periodic fluctuation of charge these oscillators aim to sustain in their L-C-R resonator is what creates their phase noise, proportional to Leeson's noise figure F and to the charge noise power 4kT/R C²/s of their capacitance C, which today's modelling would consider as the current noise density in A²/Hz of their resistance R. Linked with this (A²/Hz ↔ C²/s) equivalence, R becomes a random series in time of discrete chances to Dissipate energy in Thermal Equilibrium (TE), giving a similar series of discrete Conversions of electrical energy into heat when the resonator is out of TE due to the Signal power it handles. Therefore, phase noise reflects the way oscillators sense thermal exchanges of energy with their environment.
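For context on the line-broadening/pedestal split discussed above, the textbook reference point is Leeson's heuristic for the single-sideband phase-noise spectrum; this is the conventional model the abstract is reinterpreting, not its Admittance-based model, and the component values below are invented:

```python
import math

def leeson_dbc(f_m, f0, Q, F, P_sig, kT=4.14e-21):
    """Leeson's heuristic for single-sideband phase noise L(f_m) in dBc/Hz
    at offset f_m from the carrier f0: a 1/f_m^2 region inside the resonator
    half-bandwidth f0/(2Q), flattening to the pedestal 2*F*kT/P_sig outside.
    F is the noise figure, P_sig the carrier power in watts; kT defaults to
    its room-temperature value in joules (flicker corner neglected)."""
    pedestal = 2 * F * kT / P_sig
    return 10 * math.log10(pedestal * (1 + (f0 / (2 * Q * f_m)) ** 2))

# 10 MHz oscillator, Q = 50, F = 2 (3 dB), 1 mW carrier (illustrative values)
close_in = leeson_dbc(1e3, 10e6, 50, 2.0, 1e-3)   # inside the half-bandwidth
far_out = leeson_dbc(1e7, 10e6, 50, 2.0, 1e-3)    # pedestal region
```

The −20 dB/decade roll-off near the carrier is the Line Broadening and the flat floor far from f0 is the Pedestal, the two features the abstract attributes to Fluctuations and Dissipations respectively.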