983 results for Decay time


Relevance: 30.00%

Abstract:

The time evolution of measured plasma parameters, including the electron energy distribution function (EEDF), in the discharge and post-discharge regimes of a pulsed hydrogen magnetic multipole plasma is presented. The time necessary for the plasma to reach equilibrium has been established as 160 μs. The present results clarify the mechanisms which initiate the discharge. The decay rates of the charged-particle density and energy in the post-discharge have been measured. These measurements indicate that particle transport to the wall is the dominant loss mechanism for both charged-particle density and energy. The time-resolved EEDF is found to be non-Maxwellian in the discharge and Maxwellian in the late post-discharge.
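
For orientation, decay rates like these are usually extracted by fitting an exponential to the measured post-discharge density. A minimal sketch with invented numbers (not the paper's data), assuming a single dominant loss channel:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical post-discharge electron densities (normalized), sampled at
# times t (microseconds) after the heating pulse ends.
t = np.array([0.0, 50, 100, 150, 200, 300, 400])
n_e = np.array([1.0, 0.78, 0.61, 0.47, 0.37, 0.22, 0.14])

def exp_decay(t, n0, tau):
    """Single-exponential loss: n(t) = n0 * exp(-t / tau)."""
    return n0 * np.exp(-t / tau)

(n0, tau), _ = curve_fit(exp_decay, t, n_e, p0=(1.0, 200.0))
print(f"decay time tau ~ {tau:.0f} us")  # ~200 us for this invented data
```

A wall-transport-dominated loss shows up as a density-independent decay time; volume recombination would instead make the effective decay time grow as the density falls.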

Relevance: 30.00%

Abstract:

A Monte Carlo code (artis) for modelling time-dependent three-dimensional spectral synthesis in chemically inhomogeneous models of Type Ia supernova ejecta is presented. Following the propagation of γ-ray photons, emitted by the radioactive decay of the nucleosynthesis products, energy is deposited in the supernova ejecta and the radiative transfer problem is solved self-consistently, enforcing the constraint of energy conservation in the comoving frame. Assuming a photoionization-dominated plasma, the equations of ionization equilibrium are solved together with the thermal balance equation, adopting an approximate treatment of excitation. Since we implement a fully general treatment of line formation, there are no free parameters to adjust. Thus, a direct comparison between observations and synthetic spectra and light curves calculated from hydrodynamic explosion models is feasible. The code is applied to the well-known W7 explosion model and the results tested against other studies. Finally, the effect of asymmetric ejecta on broad-band light curves and spectra is illustrated using an elliptical toy model. © 2009 RAS.
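
For readers unfamiliar with the technique, the heart of such codes is propagating indivisible energy packets until they escape, which enforces energy conservation by construction. A deliberately stripped-down, grey (frequency-independent) toy version of the packet loop follows; it is not the artis implementation, and the optical depth is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy grey Monte Carlo: indivisible energy packets random-walk through a
# uniform sphere of radial optical depth TAU and eventually escape.
# Real codes add frequency-dependent opacity, the comoving-frame
# transformation and the ionization/thermal balance described above.
TAU = 5.0          # assumed radial optical depth (sphere radius = 1)
N_PACKETS = 10_000

def random_direction():
    d = rng.normal(size=3)
    return d / np.linalg.norm(d)

escaped = 0
for _ in range(N_PACKETS):
    r, direction = np.zeros(3), random_direction()
    while True:
        s = -np.log(rng.random()) / TAU   # sampled path length to next event
        r = r + s * direction
        if np.linalg.norm(r) >= 1.0:      # packet leaves the ejecta intact:
            escaped += 1                  # its energy is conserved exactly
            break
        direction = random_direction()    # isotropic scattering / re-emission

print(f"escape fraction: {escaped / N_PACKETS:.2f}")
```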

Relevance: 30.00%

Abstract:

Energy release from radioactive decays contributes significantly to supernova light curves. Previous works, which considered the energy deposited by γ-rays and positrons produced by 56Ni, 56Co, 57Ni, 57Co, 44Ti and 44Sc, have been quite successful in explaining the light curves of both core-collapse and thermonuclear supernovae. We point out that Auger and internal conversion electrons, together with the associated X-ray cascade, constitute an additional heat source. When a supernova is transparent to γ-rays, these electrons can contribute significantly to light curves for reasonable nucleosynthetic yields. In particular, the electrons emitted in the decay of 57Co, which are largely due to internal conversion from a fortuitously low-lying 3/2 state in the daughter 57Fe, constitute an additional significant energy-deposition channel. We show that when the heating by these electrons is accounted for, a slow-down in the light curve of SN 1998bw is naturally obtained for typical hypernova nucleosynthetic yields. Additionally, we show that for generic Type Ia supernova yields, the energy carried by the Auger electrons emitted in the ground-state to ground-state electron capture decay of 55Fe exceeds the energy released by the 44Ti decay chain for many years after the explosion. © 2009 RAS.
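
The bookkeeping behind such light curves is the standard decay-chain (Bateman) solution; for the dominant 56Ni → 56Co → 56Fe chain,

```latex
N_{\mathrm{Co}}(t) = N_{\mathrm{Ni}}(0)\,
  \frac{\lambda_{\mathrm{Ni}}}{\lambda_{\mathrm{Co}}-\lambda_{\mathrm{Ni}}}
  \left(e^{-\lambda_{\mathrm{Ni}}t}-e^{-\lambda_{\mathrm{Co}}t}\right),
\qquad
L(t) = \lambda_{\mathrm{Ni}}\,N_{\mathrm{Ni}}(t)\,Q_{\mathrm{Ni}}
     + \lambda_{\mathrm{Co}}\,N_{\mathrm{Co}}(t)\,Q_{\mathrm{Co}},
```

where each Q is the energy deposited per decay. In these terms, the paper's argument is that Q must include the Auger and internal-conversion electrons and the associated X-ray cascade, not only the γ-ray and positron channels.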

Relevance: 30.00%

Abstract:

Linear wave theory models are commonly applied to predict the performance of bottom-hinged oscillating wave surge converters (OWSCs) in operational sea states. To account for non-linear effects, additional input coefficients not included in the model itself become necessary. In ocean engineering it is common practice to obtain damping coefficients of floating structures from free decay tests. This paper presents results obtained from experimental tank tests and numerical computational fluid dynamics (CFD) simulations of OWSCs. Agreement between the numerical and experimental methods is found to be very good, with CFD providing more data points at small rotation amplitudes. Analysis of the obtained data reveals that linear-quadratic damping, as commonly used in time domain models, cannot accurately model the damping that occurs over the whole range of rotation amplitudes. The authors conclude that a hyperbolic function is most suitable for expressing the instantaneous damping ratio as a function of rotation amplitude, and would be the best choice for coefficient-based time domain models.
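
As an illustration of the model-selection step, the sketch below fits both a linear-plus-quadratic damping law and a hyperbolic one to invented free-decay data; the functional forms are plausible stand-ins, not the authors' exact expressions:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical instantaneous damping ratios extracted from free-decay
# tests, tabulated against rotation amplitude (radians).
amp  = np.array([0.02, 0.05, 0.10, 0.15, 0.20, 0.30, 0.40])
zeta = np.array([0.55, 0.30, 0.19, 0.15, 0.13, 0.11, 0.10])

def lin_quad(a, c1, c2):
    """Damping ratio implied by a linear + quadratic damping moment."""
    return c1 + c2 * a

def hyperbolic(a, c1, c2):
    """Hyperbolic form: damping ratio falls off as 1/amplitude."""
    return c1 + c2 / a

for name, f in [("linear+quadratic", lin_quad), ("hyperbolic", hyperbolic)]:
    p, _ = curve_fit(f, amp, zeta)
    rss = float(np.sum((zeta - f(amp, *p)) ** 2))
    print(f"{name:>16}: params = {p.round(3)}, RSS = {rss:.4f}")
```

On data shaped like the above, the hyperbolic form reproduces the steep rise of damping at small amplitudes that a linear-plus-quadratic law cannot.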

Relevance: 30.00%

Abstract:

We show that the X-ray line flux of the Mn Kα line at 5.9 keV from the decay of 55Fe is a promising diagnostic to distinguish between Type Ia supernova (SN Ia) explosion models. Using radiation transport calculations, we compute the line flux for two three-dimensional explosion models: a near-Chandrasekhar-mass delayed detonation and a violent merger of two (1.1 and 0.9 M⊙) white dwarfs. Both models are based on solar-metallicity zero-age main-sequence progenitors. Due to explosive nuclear burning at higher density, the delayed-detonation model synthesizes ~3.5 times more radioactive 55Fe than the merger model. As a result, we find that the peak Mn Kα line flux of the delayed-detonation model exceeds that of the merger model by a factor of ~4.5. Since in both models the 5.9-keV X-ray flux peaks five to six years after the explosion, a single measurement of the X-ray line emission at this time can place a constraint on the explosion physics that is complementary to those derived from earlier-phase optical spectra or light curves. We perform detector simulations of current and future X-ray telescopes to investigate the possibilities of detecting the X-ray line at 5.9 keV. Of the currently existing telescopes, XMM-Newton/pn is the best instrument for close (≲1-2 Mpc), non-background-limited SNe Ia because of its large effective area. Due to its low instrumental background, Chandra/ACIS is currently the best choice for SNe Ia at distances above ~2 Mpc. For the delayed-detonation scenario, a line detection is feasible with Chandra up to ~3 Mpc for an exposure time of 10⁶ s. We find that it should be possible with currently existing X-ray instruments (with exposure times ≲5 × 10⁵ s) to detect both of our models at sufficiently high S/N to distinguish between them for hypothetical events within the Local Group. The prospects for detection will be better with future missions. For example, the proposed Athena/X-IFU instrument could detect our delayed-detonation model out to a distance of ~5 Mpc. This would make it possible to study future events occurring during its operational life at distances comparable to those of the recent supernovae SN 2011fe (~6.4 Mpc) and SN 2014J (~3.5 Mpc).
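
A back-of-the-envelope version of the flux estimate fits in a few lines, assuming the ejecta are already transparent at 5.9 keV; the 55Fe mass and the Kα photons-per-decay figure below are illustrative assumptions, not the paper's values:

```python
import numpy as np

M_SUN    = 1.989e33           # g
M_FE55   = 3e-3 * M_SUN       # assumed 55Fe yield (g), illustrative only
T_HALF   = 2.74 * 3.156e7     # 55Fe half-life (s)
P_KALPHA = 0.25               # assumed Mn K-alpha photons per decay
D        = 3.086e24           # distance: 1 Mpc in cm

N0  = M_FE55 / (55 * 1.66e-24)   # number of 55Fe atoms
lam = np.log(2) / T_HALF         # decay constant (1/s)

t = 5.5 * 3.156e7                # ~5.5 yr after explosion, near the peak
flux = N0 * lam * np.exp(-lam * t) * P_KALPHA / (4 * np.pi * D**2)
print(f"Mn K-alpha flux at 1 Mpc: {flux:.1e} photons / cm^2 / s")
```

This ignores the ejecta opacity that suppresses the line at early times, which is why the observable flux peaks years after the explosion rather than at t = 0.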

Relevance: 30.00%

Abstract:

Software is constantly evolving, requiring continuous maintenance and development. It undergoes changes throughout its life, whether to add new features or to fix bugs in the code. As software evolves, its architecture tends to degrade over time and to become less adaptable to new user requirements; it becomes more complex and harder to maintain. In some cases, developers prefer to redesign these architectures from scratch rather than extend their lives, which causes a substantial increase in development and maintenance costs. Developers therefore need to understand the factors that drive architectural degradation, so that they can take proactive measures that ease future changes and slow the degradation. Architectural degradation occurs when developers who do not understand the original design make changes to the software. On the one hand, making changes without understanding their impact can introduce bugs and lead to the premature retirement of the software. On the other hand, developers who lack the knowledge and/or experience to solve a design problem may introduce design defects, which in turn make the software harder to maintain and evolve. Developers therefore need mechanisms for understanding the impact of a change on the rest of the software, and tools for detecting design defects so they can be corrected. In this thesis we propose three main contributions. The first contribution concerns the assessment of software architecture degradation. The assessment uses a diagram-matching technique, applied to diagrams such as class diagrams, to identify structural changes between successive versions of a software architecture. This step requires identifying class renamings, so the first stage of our approach identifies class renamings during the evolution of the architecture. The second stage matches several versions of an architecture to identify its stable parts and those that are degrading; we propose bit-vector and clustering algorithms to analyse the correspondence between versions. The third stage measures the degradation of the architecture during the evolution of the software, using a set of metrics defined on its stable parts. The second contribution relates to change-impact analysis. Here we present a new metaphor, inspired by seismology, for identifying the impact of changes. Our approach treats a change to a class as an earthquake that propagates through the software along a long chain of intermediary classes. It combines the analysis of structural dependencies between classes with the analysis of their history (co-change relationships) to measure how far a change propagates through the software, i.e., how a change spreads from the modified class to other classes. The third contribution concerns the detection of design defects. We propose a metaphor inspired by the natural immune system: like any living creature, a system's design is exposed to diseases, which are design defects, and detection approaches are its defence mechanisms. A natural immune system can detect similar pathogens with good precision; this has inspired a family of classification algorithms called artificial immune systems (AIS), which we use to detect design defects. The contributions were evaluated on open-source object-oriented systems, and the results support the following conclusions:
• The Tunnel Triplets Metric (TTM) and the Common Triplets Metric (CTM) give developers good indications of architectural degradation. A decrease in TTM indicates that the original design of the architecture has degraded, while a stable TTM indicates that the original design is stable, meaning the system has adapted to new user requirements.
• Seismology is a useful metaphor for change-impact analysis: changes propagate through systems much as earthquakes do. The impact of a change is strongest around the changed class and decreases progressively with distance from it. Our approach helps developers identify the impact of a change, as illustrated by the sketch below.
• The immune system is a useful metaphor for design-defect detection. Experimental results showed that the precision and recall of our approach are comparable to or better than those of existing approaches.
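
To make the seismology metaphor concrete, here is a toy impact calculation: the score attenuates with dependency distance from the changed class and is reinforced by co-change history. Class names, weights and the attenuation factor are hypothetical:

```python
from collections import deque

# Reverse dependencies: key -> classes that use it. A change to the key
# class can therefore "shake" the classes listed for it.
USED_BY = {
    "Customer": ["Order", "Invoice"],
    "TaxRule":  ["Invoice"],
    "Invoice":  ["Order"],
    "Order":    [],
}
# Historical co-change strength between class pairs (from version control).
CO_CHANGE = {("Customer", "Order"): 0.8, ("Customer", "Invoice"): 0.3}

def impact(changed, attenuation=0.5):
    """Breadth-first 'seismic' propagation: the score decays geometrically
    with distance from the changed class, scaled by co-change strength."""
    scores, queue = {changed: 1.0}, deque([changed])
    while queue:
        cls = queue.popleft()
        for user in USED_BY.get(cls, []):
            cc = CO_CHANGE.get((changed, user),
                               CO_CHANGE.get((user, changed), 0.1))
            s = scores[cls] * attenuation * (1.0 + cc)
            if s > scores.get(user, 0.0):
                scores[user] = s
                queue.append(user)
    return scores

print(impact("Customer"))  # Order is hit harder than Invoice here
```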

Relevance: 30.00%

Abstract:

In order to characterise the laser ablation process from high-Tc superconductors, the time evolution of the plasma produced by a Q-switched Nd:YAG laser from a GdBa2Cu3O7 superconducting sample has been studied using spectroscopic and ion-probe techniques. A fairly large delay is observed in the onset of emission from oxide species compared with that from atoms and ions of the constituent elements present in the plasma. Emission from oxides and ions decays faster than that from neutral atoms. These observations support the view that the oxides are not produced directly from the target, but are formed by recombination while the plasma cools. Plasma parameters such as temperature and velocity are also evaluated.

Relevance: 30.00%

Abstract:

We present an imaginary-time path-integral study of the problem of quantum decay of a metastable state of a uniaxial magnetic particle placed in a magnetic field at an arbitrary angle. Our findings agree with earlier results of Zaslavskii obtained by mapping the spin Hamiltonian onto a particle Hamiltonian. In the limit of a low barrier, a weak dependence of the decay rate on the angle is found, except for fields almost normal to the anisotropy axis, where the rate is sharply peaked, and for fields approaching the parallel orientation, where the rate rapidly goes to zero. This distinct angular dependence, together with the dependence of the rate on the field strength, provides an independent test for macroscopic spin tunneling.
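
The quantity computed in such imaginary-time studies is the instanton escape rate, generically

```latex
\Gamma \;\sim\; A\, e^{-S_E/\hbar},
```

where S_E is the Euclidean action along the imaginary-time (instanton) trajectory and A a fluctuation prefactor. The angular dependence described above enters through S_E, which tracks the barrier height that the field angle controls; this is the generic instanton form, stated for orientation, not the paper's explicit result.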

Relevance: 30.00%

Abstract:

The real-time dynamics of Na_n (n = 3-21) cluster multiphoton ionization and fragmentation has been studied in beam experiments applying femtosecond pump-probe techniques in combination with ion and electron spectroscopy. Three-dimensional wave-packet motions in the trimer Na_3 ground state X and excited state B have been observed. We report the first study of cluster properties (energy, bandwidth and lifetime of intermediate resonances Na_n^*) with femtosecond laser pulses. The observation of four absorption resonances for the cluster Na_8, with different energy widths and different decay patterns, is more readily interpreted in terms of molecular structure and dynamics than in terms of surface-plasmon-like resonances. Time-resolved fragmentation of cluster ions Na_n^+ indicates that direct photo-induced fragmentation processes are more important at short times than statistical unimolecular decay.

Relevance: 30.00%

Abstract:

1. Wildlife managers often require estimates of abundance. Direct methods of estimation are often impractical, especially in closed-forest environments, so indirect methods such as dung or nest surveys are increasingly popular.
2. Dung and nest surveys typically have three elements: surveys to estimate abundance of the dung or nests; experiments to estimate the production (defecation or nest construction) rate; and experiments to estimate the decay or disappearance rate. The last of these is usually the most problematic, and was the subject of this study.
3. The design of experiments to allow robust estimation of mean time to decay was addressed. In most studies to date, dung or nests have been monitored until they disappear. Instead, we advocate that fresh dung or nests are located, with a single follow-up visit to establish whether the dung or nest is still present or has decayed.
4. Logistic regression was used to estimate probability of decay as a function of time, and possibly of other covariates. Mean time to decay was estimated from this function.
5. Synthesis and applications. Effective management of mammal populations usually requires reliable abundance estimates. The difficulty in estimating abundance of mammals in forest environments has increasingly led to the use of indirect survey methods, in which abundance of sign, usually dung (e.g. deer, antelope and elephants) or nests (e.g. apes), is estimated. Given estimated rates of sign production and decay, sign abundance estimates can be converted to estimates of animal abundance. Decay rates typically vary according to season, weather, habitat, diet and many other factors, making reliable estimation of mean time to decay of signs present at the time of the survey problematic. We emphasize the need for retrospective rather than prospective rates, propose a strategy for survey design, and provide analysis methods for estimating retrospective rates.
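
In code, the proposed estimator amounts to a logistic regression of "sign still present" against age, with mean time to decay obtained by integrating the fitted survival curve. A sketch with fabricated data and no covariates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Retrospective design: fresh dung piles are marked, each is revisited
# exactly once at age t_days, and we record whether it is still present.
# These observations are fabricated for illustration.
t_days  = np.array([10, 20, 30, 40, 60, 80, 100, 120, 150, 180])
present = np.array([ 1,  1,  1,  1,  0,  1,   0,   0,   0,   0])

model = LogisticRegression().fit(t_days.reshape(-1, 1), present)

# Mean time to decay = integral over age of P(still present at that age).
grid = np.linspace(0, 365, 1000)
p_present = model.predict_proba(grid.reshape(-1, 1))[:, 1]
mean_time_to_decay = float(p_present.sum() * (grid[1] - grid[0]))
print(f"estimated mean time to decay: {mean_time_to_decay:.0f} days")
```

Season, habitat and other covariates enter as extra columns of the design matrix, which is what makes the estimated retrospective rates specific to the survey at hand.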

Relevance: 30.00%

Abstract:

Peat soils consist of poorly decomposed plant detritus, preserved by low decay rates, and deep peat deposits are globally significant stores in the carbon cycle. High water tables and low soil temperatures are commonly held to be the primary reasons for low peat decay rates. However, recent studies suggest a thermodynamic limit to peat decay, whereby the slow turnover of peat soil pore water may lead to high concentrations of phenols and dissolved inorganic carbon. In sufficient concentrations, these chemicals may slow or even halt microbial respiration, providing a negative feedback to peat decay. We document the analysis of a simple, one-dimensional theoretical model of peatland pore water residence time distributions (RTDs). The model suggests that broader, thicker peatlands may be more resilient to rapid decay caused by climate change because of slow pore water turnover in deep layers. Even shallow peat deposits may also be resilient to rapid decay if rainfall rates are low. However, the model suggests that even thick peatlands may be vulnerable to rapid decay under prolonged high rainfall rates, which may act to flush pore water with fresh rainwater. We also used the model to illustrate a particular limitation of the diplotelmic (i.e., acrotelm and catotelm) model of peatland structure. Model peatlands of contrasting hydraulic structure exhibited identical water tables but contrasting RTDs. These scenarios would be treated identically by diplotelmic models, although the thermodynamic limit suggests contrasting decay regimes. We therefore conclude that the diplotelmic model be discarded in favor of model schemes that consider continuous variation in peat properties and processes.
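
The turnover intuition can be caricatured with a piston-flow column: pore water at depth z has an age of roughly (water stored above z) / recharge, so deep layers of thick peatlands turn over slowly unless rainfall is high. This is only a caricature of the paper's one-dimensional RTD model, with invented parameter values:

```python
import numpy as np

porosity = 0.85                      # assumed drainable porosity
depths   = np.linspace(0, 5.0, 6)    # m; a 5-m-deep peat column

# Piston-flow age: water at depth z entered as rain and was pushed down,
# so its age is the storage above it divided by the recharge rate.
for recharge in (0.1, 0.5, 1.0):     # m/yr of rain reaching the column
    age = porosity * depths / recharge
    print(f"R = {recharge:.1f} m/yr -> pore-water age at base = {age[-1]:.1f} yr")
```

Even this caricature reproduces the qualitative result: high rainfall flushes the column with young water, while thick, low-recharge peat keeps old, solute-rich water at depth.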

Relevance: 30.00%

Abstract:

The case is made for a more careful analysis of the large-time asymptotics of infinite particle systems in the thermodynamic limit beyond zero density. The insufficiency of current analysis, even in the model case of free particles, is demonstrated. Recent advances based on more sophisticated analytical tools, such as functions of mean variation and Hardy spaces, are sketched.

Relevance: 30.00%

Abstract:

We examine to what degree we can expect to obtain accurate temperature trends for the last two decades near the surface and in the lower troposphere. We compare temperatures obtained from surface observations and radiosondes as well as satellite-based measurements from the Microwave Sounding Units (MSU), which have been adjusted for orbital decay and non-linear instrument-body effects, and reanalyses from the European Centre for Medium-Range Weather Forecasts (ERA) and the National Centers for Environmental Prediction (NCEP). In regions with abundant conventional data coverage, where the MSU has no major influence on the reanalysis, temperature anomalies obtained from microwave sounders, radiosondes and from both reanalyses agree reasonably. Where coverage is insufficient, in particular over the tropical oceans, large differences are found between the MSU and either reanalysis. These differences apparently relate to changes in the satellite data availability and to differing satellite retrieval methodologies, to which both reanalyses are quite sensitive over the oceans. For NCEP, this results from the use of raw radiances directly incorporated into the analysis, which makes the reanalysis sensitive to changes in the underlying algorithms, e.g. those introduced in August 1992. For ERA, the bias-correction of the one-dimensional variational analysis may introduce an error when the satellite relative to which the correction is calculated is itself biased, or when radiances change on a time scale longer than a couple of months, e.g. due to orbit decay. ERA inhomogeneities are apparent in April 1985, October/November 1986 and April 1989. These dates can be identified with the replacements of satellites. It is possible that a negative bias in the sea surface temperatures (SSTs) used in the reanalyses may have been introduced over the period of the satellite record. This could have resulted from a decrease in the number of ship measurements, a concomitant increase in the importance of satellite-derived SSTs, and a likely cold bias in the latter. Alternatively, a warm bias in SSTs could have been caused by an increase in the percentage of buoy measurements (relative to deeper ship intake measurements) in the tropical Pacific. No indications of uncorrected inhomogeneities in land surface temperatures could be found. Near-surface temperatures have biases in the boundary layer in both reanalyses, presumably due to the incorrect treatment of snow cover. The increase of near-surface relative to lower-tropospheric temperatures in the last two decades may be due to a combination of several factors, including high-latitude near-surface winter warming due to an enhanced NAO and upper-tropospheric cooling due to stratospheric ozone decrease.

Relevance: 30.00%

Abstract:

The question of linear sheared-disturbance evolution in constant-shear parallel flow is here reexamined with regard to the temporary-amplification phenomenon first noted by Orr in 1907. The results apply directly to Rossby waves on a beta-plane, and are also relevant to the Eady model of baroclinic instability. It is shown that an isotropic initial distribution of standing waves maintains a constant energy level throughout the shearing process, the amplification of some waves being precisely balanced by the decay of the others. An expression is obtained for the energy of a distribution of disturbances whose wavevectors lie within a given angular wedge, and an upper bound is derived. It is concluded that the case for ubiquitous amplification made in recent studies may have been somewhat overstated: while carefully chosen individual Fourier components can amplify considerably before they decay, a general distribution will tend to exhibit little or no amplification.
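
The algebra behind this balance is compact. In a constant shear U = Λy, a Fourier component with initial wavevector (k, l₀) is tilted over in time, and for enstrophy-conserving disturbances its energy follows

```latex
l(t) = l_0 - k\Lambda t,
\qquad
\frac{E(t)}{E(0)} = \frac{k^2 + l_0^2}{k^2 + l(t)^2},
```

so a component amplifies while l(t) approaches zero and decays thereafter. Summed over an isotropic distribution of initial orientations, the transient gains and losses cancel, which is the constant-energy result stated above. (Standard Orr-mechanism algebra, included here for orientation.)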

Relevance: 30.00%

Abstract:

Basic concepts of the form of high-latitude ionospheric flows and their excitation and decay are discussed in the light of recent high time-resolution measurements made by ground-based radars. It is first pointed out that it is in principle impossible to adequately parameterize these flows by any single quantity derived from concurrent interplanetary conditions. Rather, even at its simplest, the flow must be considered to consist of two basic time-dependent components. The first is the flow driven by magnetopause coupling processes alone, principally by dayside reconnection. These flows may indeed be reasonably parameterized in terms of concurrent near-Earth interplanetary conditions, principally by the interplanetary magnetic field (IMF) vector. The second is the flow driven by tail reconnection alone. As a first approximation these flows may also be parameterized in terms of interplanetary conditions, principally the north-south component of the IMF, but with a delay in the flow response of around 30-60 min relative to the IMF. A delay in the tail response of this order must be present due to the finite speed of information propagation in the system, and we show how "growth" and "decay" of the field and flow configuration then follow as natural consequences. To discuss the excitation and decay of the two reconnection-driven components of the flow we introduce the concept of a flow-free equilibrium configuration for a magnetosphere which contains a given (arbitrary) amount of open flux. Reconnection events act either to create or destroy open flux, thus causing departures of the system from the equilibrium configuration. Flow is then excited which moves the system back towards equilibrium with the changed amount of open flux. We estimate that the overall time scale associated with the excitation and decay of the flow is about 15 min. The response of the system to both impulsive (flux transfer event) and continuous reconnection is discussed in these terms.
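
The bookkeeping in this picture reduces to a conservation law plus a relaxation law: open flux changes at the difference between the dayside and nightside reconnection rates, while the flow relaxes toward its driven level on the ~15-min time scale estimated above. A sketch with an invented driving waveform:

```python
import numpy as np

TAU   = 15 * 60                # flow excitation/decay time scale (s)
dt, T = 10.0, 4 * 3600         # time step and run length (s)

F, flow = 4e8, 0.0             # open flux (Wb, arbitrary start); flow proxy (V)
for i in range(int(T / dt)):
    t = i * dt
    phi_day   = 5e4 if t < 3600 else 0.0          # dayside reconnection (V) for 1 h
    phi_night = 5e4 if 1800 < t < 7200 else 0.0   # tail reconnection, delayed ~30 min
    F += (phi_day - phi_night) * dt               # open flux created minus destroyed
    flow += ((phi_day + phi_night) - flow) * dt / TAU  # both excite flow; flow then decays
print(f"final open flux {F:.2e} Wb, residual flow {flow:.1f} V")
```

Dayside reconnection alone grows the polar cap ("growth"), tail reconnection alone shrinks it ("decay"), and the excited flow always works to return the configuration toward the flow-free equilibrium for the instantaneous amount of open flux.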