977 results for CONVECTIVE PARAMETERIZATION


Relevance:

10.00%

Publisher:

Abstract:

To interpret the mean depth of cosmic ray air shower maximum and its dispersion, we parametrize those two observables as functions of the first two moments of the lnA distribution. We examine the goodness of this simple method through simulations of test mass distributions. The application of the parameterization to Pierre Auger Observatory data allows one to study the energy dependence of the mean lnA and of its variance under the assumption of selected hadronic interaction models. We discuss possible implications of these dependences in terms of interaction models and astrophysical cosmic ray sources.
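As a rough illustration of this moment-based approach, here is a minimal Python sketch in which <Xmax> is linear in <lnA> and Var(Xmax) splits into shower-to-shower fluctuations plus a composition-mixing term; the coefficients (XMAX_PROTON, F_E, SIGMA2_SHOWER) are hypothetical placeholders, not values fitted to any hadronic interaction model or to Auger data.

```python
import numpy as np

# Hypothetical coefficients of the moment-based parameterization
XMAX_PROTON = 750.0      # g/cm^2, placeholder proton <Xmax> at a reference energy
F_E = -25.0              # g/cm^2 per unit lnA, placeholder slope
SIGMA2_SHOWER = 40.0**2  # g^2/cm^4, placeholder shower-to-shower variance

def mean_xmax(mean_lnA):
    """<Xmax> as a linear function of the first moment <lnA>."""
    return XMAX_PROTON + F_E * mean_lnA

def var_xmax(var_lnA):
    """Var(Xmax) as shower fluctuations plus the composition-mixing term."""
    return SIGMA2_SHOWER + F_E**2 * var_lnA

# Example: a 50/50 proton-iron mixture (lnA = 0 and ln 56)
lnA = np.array([0.0, np.log(56.0)])
w = np.array([0.5, 0.5])
m1 = np.sum(w * lnA)             # first moment <lnA>
m2 = np.sum(w * (lnA - m1)**2)   # second central moment sigma^2(lnA)
print(mean_xmax(m1), np.sqrt(var_xmax(m2)))
```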

Relevance:

10.00%

Publisher:

Abstract:

A theoretical approach is used here to explain experimental results obtained from the electrosynthesis of polypyrrole-2-carboxylic acid (PPY-2-COOH) films in nonaqueous medium. An analysis of the Fukui function (a reactivity index) indicates that the monomer (pyrrole-2-carboxylic acid, PY-2-COOH), as well as dimers and trimers, is oxidized at the C4 or C5 positions of the heterocyclic ring of the PY-2-COOH structure. After calculating heats of formation with the semiempirical Austin Model 1 parameterization for the dimer species, both the C4 and C5 positions adjacent to the aromatic rings of PPY-2-COOH were considered the most susceptible to oxidative coupling reactions. The ZINDO-S/CI semiempirical method was used to simulate the electronic transitions typically seen in the UV-VIS-NIR range in the monomer and in oligomers with different conjugation lengths. The use of an electrochemical quartz crystal microbalance provides sufficient information to propose a polymerization mechanism for PY-2-COOH based on molecular modeling and experimental results.
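For readers unfamiliar with the reactivity index mentioned above, the sketch below shows how a condensed Fukui function for electrophilic attack, f-(k) = q_k(N-1) - q_k(N), would rank candidate ring positions; the atomic charges are invented illustrative numbers, not AM1 results for PY-2-COOH.

```python
# Condensed Fukui function f^-(k): the site with the largest value is the
# one most susceptible to electrophilic attack, i.e. to oxidative coupling.
# Charges below are hypothetical, for illustration only.

# Atomic partial charges q_k for the N-electron molecule and the
# (N-1)-electron cation, indexed by ring position.
q_N   = {"C2": 0.10, "C3": -0.20, "C4": -0.25, "C5": -0.22}
q_Nm1 = {"C2": 0.18, "C3": -0.12, "C4": -0.05, "C5": -0.04}

# f^-(k) = q_k(N-1) - q_k(N)
fukui_minus = {k: q_Nm1[k] - q_N[k] for k in q_N}
best = max(fukui_minus, key=fukui_minus.get)
print(fukui_minus, "-> most reactive site:", best)
```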

Relevance:

10.00%

Publisher:

Abstract:

[EN] Peak aerobic power in humans (VO2,peak) is markedly affected by inspired O2 tension (FIO2). The question to be answered in this study is which factor plays the major role in the limitation of muscle peak VO2 in hypoxia: arterial O2 partial pressure (Pa,O2) or O2 content (Ca,O2)? Thus, cardiac output (dye dilution with Cardio-green), leg blood flow (thermodilution), intra-arterial blood pressure and femoral arterial-to-venous differences in blood gases were determined in nine lowlanders studied during incremental exercise using a large (two-legged cycle ergometer exercise: Bike) and a small (one-legged knee extension exercise: Knee) muscle mass in normoxia, acute hypoxia (AH) (FIO2 = 0.105) and after 9 weeks of residence at 5260 m (CH). Reducing the size of the active muscle mass blunted by 62% the effect of hypoxia on VO2,peak in AH and completely abolished the effect of hypoxia on VO2,peak after altitude acclimatization. Acclimatization improved Bike peak exercise Pa,O2 from 34 +/- 1 mmHg in AH to 45 +/- 1 mmHg in CH (P < 0.05) and Knee Pa,O2 from 38 +/- 1 to 55 +/- 2 mmHg (P < 0.05). Peak cardiac output and leg blood flow were reduced in hypoxia only during Bike. Acute hypoxia resulted in reductions of systemic O2 delivery (46 and 21%) and leg O2 delivery (47 and 26%) during Bike and Knee, respectively, almost matching the corresponding reductions in VO2,peak. Altitude acclimatization fully restored peak systemic and leg O2 delivery in CH (2.69 +/- 0.27 and 1.28 +/- 0.11 l min-1, respectively) to sea level values (2.65 +/- 0.15 and 1.16 +/- 0.11 l min-1, respectively) during Knee, but not during Bike. During Knee in CH, leg oxygen delivery was similar to normoxia and, therefore, so was VO2,peak, in spite of a Pa,O2 of 55 mmHg. Reducing the size of the active mass improves pulmonary gas exchange during hypoxic exercise, attenuates the Bohr effect on oxygen loading at the lungs and preserves sea level convective O2 transport to the active muscles. Thus, the altitude-acclimatized human potentially has an exercise capacity similar to that at sea level when the exercise model allows for adequate oxygen delivery (blood flow x Ca,O2), with only a minor role for Pa,O2 per se when Pa,O2 is above 55 mmHg.
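The conclusion hinges on convective O2 delivery, i.e. blood flow x Ca,O2. Below is a minimal sketch of that arithmetic using the standard O2-content formula; the input values (hemoglobin, saturation, flow) are illustrative assumptions, not the study's measurements.

```python
# Convective O2 delivery = blood flow x arterial O2 content (CaO2).
# Standard constants; input numbers are illustrative only.

HB_BINDING = 1.34      # ml O2 per g Hb (binding capacity of hemoglobin)
O2_SOLUBILITY = 0.003  # ml O2 per dl blood per mmHg (dissolved O2)

def cao2(hb_g_dl, sao2, pao2_mmHg):
    """Arterial O2 content in ml O2 per dl of blood."""
    return HB_BINDING * hb_g_dl * sao2 + O2_SOLUBILITY * pao2_mmHg

def o2_delivery(q_l_min, cao2_ml_dl):
    """Convective O2 delivery in l O2 per min (10 dl/l, 1000 ml/l)."""
    return q_l_min * cao2_ml_dl / 100.0

# Acclimatized knee-extension case: Pa,O2 ~ 55 mmHg, but elevated Hb keeps
# CaO2, and hence delivery, near sea-level values (illustrative numbers).
print(o2_delivery(q_l_min=6.0, cao2_ml_dl=cao2(18.0, 0.85, 55.0)), "l O2/min")
```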

Relevance:

10.00%

Publisher:

Abstract:

[EN] Hypoxia-induced hyperventilation is critical to improve blood oxygenation, particularly when the arterial PO2 lies in the steep region of the O2 dissociation curve of hemoglobin (ODC). Hyperventilation increases alveolar PO2 and, by increasing pH, left-shifts the ODC, increasing arterial saturation (SaO2) by 6 to 12 percentage units. Pulmonary gas exchange (PGE) is efficient at rest and, hence, the alveolar-arterial PO2 difference (PAO2-PaO2) remains close to 0 to 5 mm Hg. The PAO2-PaO2 difference increases with exercise duration and intensity and with the level of hypoxia. During exercise in hypoxia, diffusion limitation explains most of the additional PAO2-PaO2. With altitude acclimatization, the exercise PAO2-PaO2 is reduced, but does not reach the low values observed in high altitude natives, who possess an exceptionally high DLO2. Convective O2 transport depends on arterial O2 content (CaO2), cardiac output (Q), and leg blood flow (LBF). During whole-body exercise in severe acute hypoxia and in chronic hypoxia, peak Q and LBF are blunted, contributing to the limitation of maximal oxygen uptake (VO2max). During small-muscle exercise in hypoxia, PGE is less perturbed, CaO2 is higher, and peak Q and LBF achieve values similar to normoxia. Although the PO2 gradient driving O2 diffusion into the muscles is reduced in hypoxia, similar levels of muscle O2 diffusion are observed during small-mass exercise in chronic hypoxia and in normoxia, indicating that humans have a functional reserve in muscle O2 diffusing capacity, which is likely utilized during exercise in hypoxia. In summary, hypoxia reduces VO2max because it limits O2 diffusion in the lung.
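A short worked example of the alveolar-arterial PO2 difference discussed above, using the standard alveolar gas equation; the barometric pressures, PaCO2 and PaO2 values are illustrative assumptions, not data from the paper.

```python
# PAO2 - PaO2 via the alveolar gas equation:
#   PAO2 = FIO2 * (PB - PH2O) - PaCO2 / RER
# Hyperventilation lowers PaCO2 and thereby raises PAO2.

PH2O = 47.0  # mmHg, water vapour pressure at body temperature

def alveolar_po2(fio2, pb_mmHg, paco2_mmHg, rer=0.8):
    """Alveolar PO2 from the alveolar gas equation."""
    return fio2 * (pb_mmHg - PH2O) - paco2_mmHg / rer

def a_a_gradient(fio2, pb_mmHg, paco2_mmHg, pao2_mmHg, rer=0.8):
    """PAO2 - PaO2: widens with exercise intensity and hypoxia severity."""
    return alveolar_po2(fio2, pb_mmHg, paco2_mmHg, rer) - pao2_mmHg

# Rest at sea level: gradient of roughly 0-5 mmHg
print(a_a_gradient(fio2=0.21, pb_mmHg=760, paco2_mmHg=40, pao2_mmHg=95))
# Exercise at ~5260 m (illustrative): hyperventilation defends PAO2,
# yet the gradient is wider than at rest
print(a_a_gradient(fio2=0.21, pb_mmHg=405, paco2_mmHg=25, pao2_mmHg=35))
```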

Relevance:

10.00%

Publisher:

Abstract:

[EN] 1. The present study examined whether reductions in muscle blood flow with exercise-induced dehydration would reduce substrate delivery and metabolite and heat removal to and from active skeletal muscles during prolonged exercise in the heat. A second aim was to examine the effects of dehydration on fuel utilisation across the exercising leg and to identify factors related to fatigue. 2. Seven cyclists performed two cycle ergometer exercise trials in the heat (35°C; 61 +/- 2% of maximal oxygen consumption rate, VO2,max), separated by 1 week. During the first trial (dehydration, DE), they cycled until volitional exhaustion (135 +/- 4 min, mean +/- s.e.m.), while developing progressive DE and hyperthermia (3.9 +/- 0.3% body weight loss and 39.7 +/- 0.2°C oesophageal temperature, Toes). In the second trial (control), they cycled for the same period of time, maintaining euhydration by ingesting fluids and stabilising Toes at 38.2 +/- 0.1°C. 3. After 20 min of exercise in both trials, leg blood flow (LBF) and leg exchange of lactate, glucose, free fatty acids (FFA) and glycerol were similar. During the 20 to 135 +/- 4 min period of exercise, LBF declined significantly in DE but tended to increase in control. Therefore, after 120 and 135 +/- 4 min of DE, LBF was 0.6 +/- 0.2 and 1.0 +/- 0.3 l min-1 lower (P < 0.05), respectively, compared with control. 4. The lower LBF after 2 h in DE did not alter glucose or FFA delivery compared with control. However, DE resulted in lower (P < 0.05) net FFA uptake and higher (P < 0.05) muscle glycogen utilisation (45%), muscle lactate accumulation (4.6-fold) and net lactate release (52%), without altering net glycerol release or net glucose uptake. 5. In both trials, the mean convective heat transfer from the exercising legs to the body core ranged from 6.3 +/- 1.7 to 7.2 +/- 1.3 kJ min-1, thereby accounting for 35-40% of the estimated rate of heat production (approximately 18 kJ min-1). 6. At exhaustion in DE, blood lactate values were low whereas blood glucose and muscle glycogen levels were still high. Exhaustion coincided with high body temperature (approximately 40°C). 7. In conclusion, the present results demonstrate that reductions in exercising muscle blood flow with dehydration impair neither the delivery of glucose and FFA nor the removal of lactate during moderately intense prolonged exercise in the heat. However, dehydration during exercise in the heat elevates carbohydrate oxidation and lactate production. A major finding is that more than one-half of the metabolic heat liberated in the contracting leg muscles is dissipated directly to the surrounding environment. The present results indicate that hyperthermia, rather than altered metabolism, is the main factor underlying the early fatigue with dehydration during prolonged exercise in the heat.
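Point 5 estimates convective heat transfer as leg blood flow times the venous-arterial temperature difference times the heat capacity of blood. A minimal sketch of that calculation follows; the flow, the temperatures and the ~3.6 kJ l-1 °C-1 heat capacity are illustrative assumptions, not the trial data.

```python
# Convective heat removal from the legs:
#   heat (kJ/min) = LBF (l/min) * (T_venous - T_arterial) * c_blood

BLOOD_HEAT_CAPACITY = 3.6  # kJ per l per degC (approximate value for blood)

def convective_heat_removal(lbf_l_min, t_venous, t_arterial):
    """Heat convected from the legs to the body core, kJ/min."""
    return lbf_l_min * (t_venous - t_arterial) * BLOOD_HEAT_CAPACITY

heat = convective_heat_removal(lbf_l_min=6.0, t_venous=38.5, t_arterial=38.2)
print(heat, "kJ/min")  # ~6.5 kJ/min, about a third of ~18 kJ/min produced
```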

Relevance:

10.00%

Publisher:

Abstract:

Quasars and AGN play an important role in many aspects of modern cosmology. Of particular interest is the issue of the interplay between AGN activity and the formation and evolution of galaxies and structures. Studies on nearby galaxies revealed that most (and possibly all) galaxy nuclei contain a super-massive black hole (SMBH) and that between a third and a half of them show some evidence of activity (Kormendy and Richstone, 1995). The discovery of a tight relation between black hole mass and the velocity dispersion of the host galaxy suggests that the growth of SMBHs and the evolution of their host galaxies are linked. In this context, studying the evolution of AGN, through the luminosity function (LF), is fundamental to constrain the theories of galaxy and SMBH formation and evolution. Recently, many theories have been developed to describe the physical processes possibly responsible for a common formation scenario for galaxies and their central black holes (Volonteri et al., 2003; Springel et al., 2005a; Vittorini et al., 2005; Hopkins et al., 2006a), and an increasing number of observations in different bands are focused on collecting larger and larger quasar samples. Many issues, however, are not yet fully understood. In the context of the VVDS (VIMOS-VLT Deep Survey), we collected and studied an unbiased sample of spectroscopically selected faint type-1 AGN with a unique and straightforward selection function. Indeed, the VVDS is a large, purely magnitude limited spectroscopic survey of faint objects, free of any morphological and/or color preselection. We studied the statistical properties of this sample and its evolution up to redshift z ~ 4. Because of the contamination of the AGN light by their host galaxies at the faint magnitudes explored by our sample, we observed that a significant fraction of AGN in our sample would be missed by the UV excess and morphological criteria usually adopted for the pre-selection of optical QSO candidates. If not properly taken into account, this failure to select particular sub-classes of AGN could, in principle, affect some of the conclusions drawn from samples of AGN based on these selection criteria. The absence of any pre-selection in the VVDS gives us a very complete sample of AGN, including objects with unusual colors and continuum shapes. The VVDS AGN sample in fact shows redder colors than expected by comparison, for example, with the color track derived from the SDSS composite spectrum. In particular, the faintest objects have on average redder colors than the brightest ones. This can be attributed both to a large fraction of dust-reddened objects and to a significant contamination from the host galaxy. We have tested these possibilities by examining the global spectral energy distribution of each object using, in addition to the U, B, V, R and I-band magnitudes, also the UV-Galex and the IR-Spitzer bands, and fitting it with a combination of AGN and galaxy emission, allowing also for the possibility of extinction of the AGN flux. We found that for 44% of our objects the contamination from the host galaxy is not negligible, and this fraction decreases to 21% if we restrict the analysis to a bright subsample (M1450 < -22.15). Our estimated integral surface density at IAB < 24.0 is ~500 AGN per square degree, which represents the highest surface density of a spectroscopically confirmed sample of optically selected AGN. We derived the luminosity function in the B band for 1.0 < z < 3.6 using the 1/Vmax estimator.
Our data, more than one magnitude fainter than previous optical surveys, allow us to constrain the faint end of the luminosity function up to high redshift. A comparison of our data with the 2dF sample at low redshift (1 < z < 2.1) shows that the VVDS data cannot be well fitted with the pure luminosity evolution (PLE) models derived from previous optically selected samples. Qualitatively, this appears to be due to the fact that our data suggest an excess of faint objects at low redshift (1.0 < z < 1.5) with respect to these models. By combining our faint VVDS sample with the large sample of bright AGN extracted from the SDSS DR3 (Richards et al., 2006b) and testing a number of different evolutionary models, we find that the model which best represents the combined luminosity functions, over a wide range of redshift and luminosity, is a luminosity-dependent density evolution (LDDE) model, similar to those derived from the major X-ray surveys. Such a parameterization allows the redshift of the AGN density peak to change as a function of luminosity, thus fitting the excess of faint AGN that we find at 1.0 < z < 1.5. On the basis of this model we find, for the first time from the analysis of optically selected samples, that the peak of the AGN space density shifts significantly towards lower redshift for lower luminosity objects. The position of this peak moves from z ~ 2.0 for MB < -26.0 to z ~ 0.65 for -22 < MB < -20. This result, already found in a number of X-ray selected samples of AGN, is consistent with a scenario of "AGN cosmic downsizing", in which the density of more luminous AGN, possibly associated with more massive black holes, peaks earlier in the history of the Universe (i.e. at higher redshift) than that of low luminosity ones, which reaches its maximum later (i.e. at lower redshift). This behavior has long been claimed to be present in elliptical galaxies and is not easy to reproduce in the hierarchical cosmogonic scenario, where more massive Dark Matter Halos (DMH) form on average later by merging of less massive halos.
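A minimal sketch of the 1/Vmax estimator named above: each object contributes 1/Vmax to the comoving space density of its magnitude bin, where Vmax is the volume within which the object would still satisfy the survey limits. The cosmological volume computation is elided here (in a real analysis each Vmax would come from, e.g., astropy.cosmology); all numbers are invented for illustration, not VVDS data.

```python
import numpy as np

def binned_lf(M, Vmax, edges):
    """Phi(M) per magnitude: sum of 1/Vmax in each absolute-magnitude bin."""
    M, Vmax = np.asarray(M), np.asarray(Vmax)
    phi = np.zeros(len(edges) - 1)
    for i in range(len(phi)):
        sel = (M >= edges[i]) & (M < edges[i + 1])
        phi[i] = np.sum(1.0 / Vmax[sel]) / (edges[i + 1] - edges[i])
    return phi

# Illustrative absolute magnitudes and (hypothetical) Vmax values in Mpc^3
M = [-24.1, -23.8, -22.5, -22.3, -21.9]
Vmax = [5.0e8, 4.5e8, 1.2e8, 1.0e8, 6.0e7]
print(binned_lf(M, Vmax, edges=np.array([-25.0, -23.0, -21.0])))
```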

Relevance:

10.00%

Publisher:

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, which defines the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. With this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this innovative analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components, each of which is then studied with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of wavelet application in geophysics are the object of study of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic model; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the Continuous Wavelet Transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain Local Correlation Maps between different models at the same depth or between different profiles of the same model.
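To make the two usage modes concrete, here is a small Python sketch using the PyWavelets package on a toy 2-D "velocity anomaly" map: a discrete wavelet decomposition as a compressed multi-scale representation, and a continuous wavelet transform of one profile as an analysis kernel. The wavelet choices (db4, Morlet), the threshold and the synthetic model are arbitrary illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic model: smooth background plus a sharp local anomaly
x = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(x, x)
model = np.exp(-4 * (X**2 + Y**2))
model[30:34, 30:34] += 1.0  # the discontinuity wavelets localize well

# (1) basis representation: threshold small coefficients to compress the
# model while keeping its multi-scale features
coeffs = pywt.wavedec2(model, wavelet="db4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
arr[np.abs(arr) < 0.05] = 0.0  # crude hard threshold
recon = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), "db4")
print("max reconstruction error:", np.max(np.abs(recon - model)))

# (2) integration kernel: continuous wavelet transform of one profile
profile = model[32, :]
cwt_coeffs, freqs = pywt.cwt(profile, scales=np.arange(1, 16), wavelet="morl")
print("CWT coefficient matrix:", cwt_coeffs.shape)
```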

Relevance:

10.00%

Publisher:

Abstract:

The physically based, distributed, event-scale rainfall-runoff and erosion model Kineros2 was applied to two mountain catchments in the province of Bologna (Italy) in order to test and evaluate its performance in an Apennine environment. After parameterization of the two basins, Kineros2 was calibrated and validated using experimental discharge and suspended sediment concentration data, collected at the basin outlets thanks to two hydro-turbidimetric monitoring stations. The modelling made it possible to evaluate the model's ability to correctly reproduce the observed hydrological dynamics, and to draw conclusions on its potential and limitations.
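The abstract does not name the goodness-of-fit score used in calibration; as one common choice for judging how well a rainfall-runoff model reproduces an observed hydrograph, here is a minimal sketch of the Nash-Sutcliffe efficiency applied to an invented discharge series (NSE = 1 is a perfect fit, NSE <= 0 is no better than the observed mean).

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency of a simulated series against observations."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative event hydrograph (m^3/s) at a basin outlet
q_obs = [0.2, 0.5, 1.8, 3.2, 2.1, 1.0, 0.4]
q_sim = [0.3, 0.6, 1.5, 3.0, 2.4, 1.1, 0.5]
print("NSE:", nse(q_obs, q_sim))
```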

Relevance:

10.00%

Publisher:

Abstract:

[EN] We present a new method to construct a trivariate T-spline representation of complex solids for the application of isogeometric analysis. We take a genus-zero solid as the basis of our study, but at the end of the work we explain how to generalize the results to solids of any genus. The proposed technique only demands a surface triangulation of the solid as input data. The key to this method lies in obtaining a volumetric parameterization between the solid and the parametric domain, the unitary cube. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying a mesh untangling and smoothing procedure...
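The central object here is a volumetric parameterization S(u,v,w) mapping the unit cube onto the solid. As a schematic stand-in for the trivariate T-spline, the sketch below evaluates the simplest such tensor-product map, a trilinear patch over a 2x2x2 control lattice; the control points are invented for illustration.

```python
import numpy as np

def trilinear_map(ctrl, u, v, w):
    """Evaluate S(u,v,w) for a 2x2x2 lattice ctrl[i][j][k] of 3-D control points."""
    c = np.asarray(ctrl, float)  # shape (2, 2, 2, 3)
    bu = np.array([1 - u, u])    # degree-1 basis in u
    bv = np.array([1 - v, v])    # degree-1 basis in v
    bw = np.array([1 - w, w])    # degree-1 basis in w
    return np.einsum("i,j,k,ijkd->d", bu, bv, bw, c)

# A slightly sheared box as the "solid" (corner control points)
ctrl = [[[[0, 0, 0],   [0, 0, 1]],   [[0, 1, 0],   [0, 1, 1]]],
        [[[1, 0, 0.2], [1, 0, 1.2]], [[1, 1, 0.2], [1, 1, 1.2]]]]
print(trilinear_map(ctrl, 0.5, 0.5, 0.5))  # image of the cube centre
```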

Relevance:

10.00%

Publisher:

Abstract:

[EN] We present a new method to construct a trivariate T-spline representation of complex genus-zero solids for the application of isogeometric analysis. The proposed technique only demands a surface triangulation of the solid as input data. The key to this method lies in obtaining a volumetric parameterization between the solid and the parametric domain, the unitary cube. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying a mesh untangling and smoothing procedure. The control points of the trivariate T-spline are calculated by imposing the interpolation conditions on points situated both in the interior and on the surface of the solid...

Relevance:

10.00%

Publisher:

Abstract:

The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, with a prohibitively expensive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observation system simulation experiments in a perfect model setting, and with two types of model error as well: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency with which regime changes are tracked, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In Numerical Weather Prediction models, tuning of parameters, and in particular estimating the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
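A toy sketch of the AUS idea in the Lorenz 1963 setting described above: breed a perturbation to estimate the leading unstable direction, then confine the analysis increment to that direction with a scalar gain. This is only a schematic illustration, not the Trevisan-Uboldi-Carrassi implementation; all numerical choices (cycle length, observation noise, breeding amplitude) are our own assumptions.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(s, dt=0.01):
    """One fourth-order Runge-Kutta step of the Lorenz 1963 model."""
    k1 = lorenz63(s); k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2); k4 = lorenz63(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rng = np.random.default_rng(0)
truth = np.array([1.0, 1.0, 1.0])
forecast = truth + 0.5                        # wrong initial condition
bred = forecast + 1e-3 * rng.standard_normal(3)
H = np.array([1.0, 0.0, 0.0])                 # observe only the x component
R = 0.01                                      # observation error variance

for cycle in range(400):
    for _ in range(25):                       # forecast between observations
        truth, forecast, bred = rk4(truth), rk4(forecast), rk4(bred)
    e = bred - forecast
    e /= np.linalg.norm(e)                    # bred vector ~ unstable direction
    y = H @ truth + np.sqrt(R) * rng.standard_normal()
    gain = (H @ e) / ((H @ e) ** 2 + R)       # scalar gain along e only
    forecast = forecast + e * gain * (y - H @ forecast)
    bred = forecast + 1e-3 * e                # rescale the bred perturbation

print("final analysis error:", np.linalg.norm(forecast - truth))
```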

Relevance:

10.00%

Publisher:

Abstract:

[EN] The meccano method is a novel and promising mesh generation technique for simultaneously creating adaptive tetrahedral meshes and volume parameterizations of a complex solid. The method combines several earlier procedures: a mapping from the meccano boundary to the solid surface, a 3-D local refinement algorithm and a simultaneous mesh untangling and smoothing. In this paper we present the main advantages of our method over other standard mesh generation techniques. We show that our method constructs meshes that can be locally refined using the Kossaczky bisection rule while maintaining a high mesh quality. Finally, we generate a volume T-mesh for isogeometric analysis, based on the volume parameterization obtained by the method…
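As a schematic illustration of the smoothing ingredient only (the meccano method actually uses a simultaneous untangling-and-smoothing objective, which is more elaborate), the sketch below applies plain Laplacian smoothing to a toy node set; the mesh and its connectivity are invented.

```python
import numpy as np

def laplacian_smooth(coords, neighbours, interior, iters=10):
    """Move each interior node to the centroid of its neighbours."""
    coords = np.asarray(coords, float).copy()
    for _ in range(iters):
        for n in interior:
            coords[n] = coords[neighbours[n]].mean(axis=0)
    return coords

# Toy 2-D mesh: 4 fixed boundary corners and 1 badly placed interior node
coords = [[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.9]]
neighbours = {4: [0, 1, 2, 3]}   # interior node 4 connects to all corners
print(laplacian_smooth(coords, neighbours, interior=[4]))  # node 4 -> (0.5, 0.5)
```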

Relevance:

10.00%

Publisher:

Abstract:

[EN] We present a new method to construct a trivariate T-spline representation of complex solids for the application of isogeometric analysis. The proposed technique only demands the surface of the solid as input data. The key to this method lies in obtaining a volumetric parameterization between the solid and a simple parametric domain. To do that, an adaptive tetrahedral mesh of the parametric domain is isomorphically transformed onto the solid by applying the meccano method. The control points of the trivariate T-spline are calculated by imposing the interpolation conditions on points situated both in the interior and on the surface of the solid...

Relevance:

10.00%

Publisher:

Abstract:

Congresses and conferences

Relevance:

10.00%

Publisher:

Abstract:

[EN] We present a new strategy, based on the meccano method [1, 2, 3], to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method only demands a boundary representation of the geometry as input data. As a result, the algorithm obtains a high-quality parametric transformation between the 2D object and the parametric domain, the unit square. The key to the method lies in defining an isomorphic transformation between the parametric and physical T-meshes, finding the optimal positions of the interior nodes by applying a new T-mesh untangling and smoothing procedure. The bivariate T-spline representation is calculated by imposing the interpolation conditions on points situated both in the interior and on the boundary of the geometry…
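The last step, fixing control points by imposing interpolation conditions, amounts to solving a linear system. In the sketch below a single bilinear patch stands in for the bivariate T-spline: the collocation matrix of basis values at the parametric sites is assembled and solved in least squares; all points are invented for illustration.

```python
import numpy as np

def bilinear_basis(u, v):
    """Degree-1 tensor-product basis values at (u, v)."""
    return np.array([(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v])

# Interpolation conditions: parametric sites and target physical points
uv = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
Q = np.array([[0, 0], [2, 0], [0, 1], [2, 1], [1.1, 0.55]])

B = np.array([bilinear_basis(u, v) for u, v in uv])  # collocation matrix
P, *_ = np.linalg.lstsq(B, Q, rcond=None)            # least-squares control net
print(P)  # rows: control points P00, P10, P01, P11
```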