41 results for "proportional to absolute temperature (PTAT)"

in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

100.00%

Publisher:

Abstract:

Sparus aurata larvae reared under controlled water-temperature conditions during the first 24 days after hatching displayed a linear relationship between age (t) and standard length (SL): SL = 2.68 + 0.19t (r² = 0.911). Increments were laid down in the sagittae with daily periodicity starting on the day of hatching. Standard length and sagitta radius (OR) were correlated: SL(mm) = 2.65 + 0.012 OR(mm). The series of measurements of daily growth increment widths (DWI), food density and water temperature were analyzed by means of time series analysis. The DWI series were strongly autocorrelated: growth on any one day depended upon growth on the previous day. The time series of water temperature showed, as expected, a random pattern of variation, while the food consumed daily was a function of the food consumed on the two previous days. The DWI series and the food density series were positively correlated at lags 1 and 2. The results provide evidence of the importance of food intake for sagitta growth when temperature is optimal (20°C). Sagitta growth was correlated with growth on the previous day, so this should be taken into account when fish growth is derived from sagitta growth rates.
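The lagged relationship described above can be sketched with synthetic data (the series, coefficients and lags below are hypothetical placeholders, for illustration only — not the study's data):

```python
import numpy as np

def lag_corr(x, y, lag):
    """Pearson correlation between x[t] and y[t - lag] (y leads x by `lag` days)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if lag > 0:
        x, y = x[lag:], y[:-lag]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(0)
food = rng.gamma(2.0, 1.0, 200)               # hypothetical daily food density
# hypothetical increment widths driven by food on the two previous days
dwi = 0.5 * np.roll(food, 1) + 0.3 * np.roll(food, 2) + rng.normal(0, 0.1, 200)
dwi, food = dwi[2:], food[2:]                 # drop the wrap-around artifacts of roll

print(lag_corr(dwi, food, 1))                 # positive correlation at lag 1
print(lag_corr(dwi, food, 2))                 # positive correlation at lag 2
```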

Relevance:

100.00%

Publisher:

Abstract:

We propose a simple adaptive procedure for playing a game. In this procedure, players depart from their current play with probabilities that are proportional to measures of regret for not having used other strategies (these measures are updated every period). It is shown that our adaptive procedure guarantees that, with probability one, the sample distributions of play converge to the set of correlated equilibria of the game. To compute these regret measures, a player needs to know his payoff function and the history of play. We also offer a variation in which every player knows only his own realized payoff history (but not his payoff function).
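A minimal sketch of such a regret-based procedure, in the spirit of the abstract: each period a player switches away from its current action with probabilities proportional to the positive average regrets. The example game (matching pennies), the normalization constant mu and all parameter choices below are our own assumptions, not the paper's:

```python
import numpy as np

def regret_play(U1, U2, T=20000, seed=0):
    """Two players repeatedly play the bimatrix game (U1, U2); each departs
    from its current action with probability proportional to the positive
    average regret for not having used the other actions."""
    rng = np.random.default_rng(seed)
    U = [np.asarray(U1, float), np.asarray(U2, float)]
    n = [U[0].shape[0], U[0].shape[1]]
    # mu large enough that switching probabilities always sum to less than 1
    mu = [2.0 * n[i] * (U[i].max() - U[i].min() + 1.0) for i in range(2)]
    D = [np.zeros((n[i], n[i])) for i in range(2)]   # cumulative regret sums
    a = [int(rng.integers(n[0])), int(rng.integers(n[1]))]
    counts = np.zeros((n[0], n[1]))
    for t in range(1, T + 1):
        counts[a[0], a[1]] += 1
        payoff = [U[0][a[0], a[1]], U[1][a[0], a[1]]]
        for i in range(2):
            # what player i would have earned by playing each k instead
            alt = U[i][:, a[1]] if i == 0 else U[i][a[0], :]
            D[i][a[i], :] += alt - payoff[i]
        for i in range(2):
            # switch from current action j to k with prob R+(j,k) / mu
            p = np.maximum(D[i][a[i], :], 0.0) / (mu[i] * t)
            p[a[i]] = 0.0
            p[a[i]] = 1.0 - p.sum()
            a[i] = int(rng.choice(n[i], p=p))
    avg_regret = [np.maximum(D[i], 0.0) / T for i in range(2)]
    return counts / T, avg_regret

# matching pennies (hypothetical example game); its unique correlated
# equilibrium is the uniform distribution over the four action profiles
U1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
dist, regrets = regret_play(U1, -U1)
print(dist)
```

As the theory predicts, the average regrets vanish as T grows, and the empirical joint distribution approaches the correlated-equilibrium set.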

Relevance:

100.00%

Publisher:

Abstract:

Using the extended Thomas-Fermi version of density-functional theory (DFT), calculations are presented for the barrier of the fusion reaction Na20+ + Na20+ → Na40(2+). The deviation from the simple Coulomb barrier is shown to be proportional to the electron density at the bond midpoint of the supermolecule (Na20+)2. An extension of conventional quantum-chemical studies of homonuclear diatomic molecular ions is then applied to the supermolecular ions of the alkali metals. This allows the Na results to be used to make semiquantitative predictions of the position and height of the maximum of the fusion barrier for other alkali clusters. These predictions are confirmed by means of similar DFT calculations for K clusters.

Relevance:

100.00%

Publisher:

Abstract:

We present numerical results of the deterministic Ginzburg-Landau equation with a concentration-dependent diffusion coefficient, for different values of the volume fraction phi of the minority component. The morphology of the domains affects the dynamics of phase separation. The effective growth exponents, but not the scaled functions, are found to be temperature dependent.
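As an illustration of the kind of model described (not the authors' code), here is a minimal 1D sketch of a conserved Ginzburg-Landau/Cahn-Hilliard equation with a concentration-dependent mobility; the choice M(phi) = 1 − phi², the grid and all parameters are assumptions of this sketch:

```python
import numpy as np

def step(phi, dt=1e-3, dx=1.0, eps2=1.0):
    """One explicit Euler step of dphi/dt = d/dx [ M(phi) d/dx mu ], with
    chemical potential mu = phi**3 - phi - eps2 * phi_xx and mobility
    M(phi) = 1 - phi**2 (a concentration-dependent diffusion coefficient),
    on a periodic grid. The face-centered flux makes the update conservative."""
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
    mu = phi**3 - phi - eps2 * lap
    M = 1.0 - phi**2
    M_face = 0.5 * (M + np.roll(M, -1))                 # mobility at faces i+1/2
    flux = M_face * (np.roll(mu, -1) - mu) / dx         # flux through each face
    return phi + dt * (flux - np.roll(flux, 1)) / dx    # divergence of the flux

rng = np.random.default_rng(1)
phi = -0.4 + 0.05 * rng.standard_normal(128)  # the mean sets the volume fraction
phi0 = phi.mean()
for _ in range(2000):
    phi = step(phi)
print(phi.mean())   # the mean (hence the volume fraction) is conserved
```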

Relevance:

100.00%

Publisher:

Abstract:

Bulk and single-particle properties of hot hyperonic matter are studied within the Brueckner-Hartree-Fock approximation extended to finite temperature. The bare interaction in the nucleon sector is the Argonne V18 potential supplemented with an effective three-body force that reproduces the saturation properties of nuclear matter. The modern Nijmegen NSC97e potential is employed for the hyperon-nucleon and hyperon-hyperon interactions. The effect of temperature on the in-medium effective interaction is found to be, in general, very small, and the single-particle potentials differ by at most 25% for temperatures in the range from 0 to 60 MeV. The bulk properties of infinite baryonic matter are obtained, either for isospin-symmetric nuclear matter or for a β-stable composition that includes a nonzero fraction of hyperons. It is found that the presence of hyperons can modify the thermodynamical properties of the system in a non-negligible way.

Relevance:

100.00%

Publisher:

Abstract:

We use wave packet mode quantization to compute the creation of massless scalar quantum particles in a colliding plane wave spacetime. The background spacetime represents the collision of two gravitational shock waves followed by trailing gravitational radiation which focus into a Killing-Cauchy horizon. The use of wave packet modes simplifies the problem of mode propagation through the different spacetime regions which was previously studied with the use of monochromatic modes. It is found that the number of particles created in a given wave packet mode has a thermal spectrum with a temperature which is inversely proportional to the focusing time of the plane waves and which depends on the mode trajectory.

Relevance:

100.00%

Publisher:

Abstract:

Background: Research on epistasis, or gene-gene interaction detection, for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation of statistical epistasis into biological epistasis, and attempts to integrate different omics information sources into epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies require memory proportional to the squared number of SNPs, so a genome-wide epistasis search would require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT that requires an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data, and illustrate the software on real-life data for Crohn's disease.
Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1,000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four quad-core AMD Opteron 2352 processors at 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing-corrected p-value below 0.05 on real-life Crohn's disease (CD) data.
Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn's disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and that can be explained from a biological point of view. This demonstrates the power of our software to find relevant higher-order phenotype-genotype associations.
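The key memory idea of a maxT-style permutation correction can be sketched as follows: for each permutation, keep only the running maximum statistic across all tests (an O(#permutations) vector) instead of the full tests-by-permutations matrix. This is a hedged illustration, not the MBMDR-3.0.3 implementation; `stat_fn` and the toy data are placeholders:

```python
import numpy as np

def maxT_adjusted_pvalues(stat_fn, n_tests, n_perm=999):
    """maxT-style family-wise correction whose memory does not grow with the
    number of tests: stream over tests, keeping one running maximum per
    permutation. `stat_fn(j, b)` returns the statistic of test j under
    permutation b (b = 0 is the observed labeling)."""
    max_per_perm = np.full(n_perm, -np.inf)   # only O(n_perm) memory
    observed = np.empty(n_tests)
    for j in range(n_tests):                  # stream over tests
        observed[j] = stat_fn(j, 0)
        for b in range(1, n_perm + 1):
            s = stat_fn(j, b)
            if s > max_per_perm[b - 1]:
                max_per_perm[b - 1] = s
    # adjusted p-value: fraction of permutation maxima reaching the observed value
    return np.array([(1 + np.sum(max_per_perm >= t)) / (n_perm + 1)
                     for t in observed])

# toy example: 5 tests on 60 individuals, test 0 truly associated with the trait
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 60).astype(float)
X = rng.standard_normal((60, 5))
X[:, 0] += 2.0 * y
perms = [np.arange(60)] + [rng.permutation(60) for _ in range(999)]

def toy_stat(j, b):
    """|correlation| between the b-th permuted trait and predictor j."""
    return abs(np.corrcoef(y[perms[b]], X[:, j])[0, 1])

p_adj = maxT_adjusted_pvalues(toy_stat, n_tests=5)
print(p_adj)
```

Crucially, the same permutation of the trait is reused across all tests (via `perms`), which is what makes the per-permutation maximum meaningful.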

Relevance:

100.00%

Publisher:

Abstract:

Past temperature variations are usually inferred from proxy data or estimated using general circulation models. Comparisons between climate estimates derived from proxy records and from model simulations help to better understand the mechanisms driving climate variations, and also offer the possibility of identifying deficiencies in both approaches. This paper presents regional temperature reconstructions based on tree-ring maximum density series in the Pyrenees, and compares them with the output of global simulations for this region and with regional climate model simulations conducted for the target region. An ensemble of 24 reconstructions of May-to-September regional mean temperature was derived from 22 maximum density tree-ring site chronologies distributed over the larger Pyrenees area. Four different tree-ring series standardization procedures were applied, combining two detrending methods: a 300-yr spline and the regional curve standardization (RCS). Additionally, different methodological variants of the regional chronology were generated by using three different aggregation methods. Calibration-verification trials were performed in split periods using two methods: regression and simple variance matching. The resulting set of temperature reconstructions was compared with climate simulations performed with global (ECHO-G) and regional (MM5) climate models. The 24 variants of the May-to-September temperature reconstruction reveal a generally coherent pattern of inter-annual to multi-centennial temperature variations in the Pyrenees region for the last 750 yr. However, some reconstructions display a marked positive trend over the entire length of the reconstruction, indicating that the application of the RCS method to a suboptimal set of samples may lead to unreliable results.
Climate model simulations agree with the tree-ring based reconstructions at multi-decadal time scales, suggesting solar variability and volcanism as the main factors controlling preindustrial mean temperature variations in the Pyrenees. Nevertheless, the comparison also highlights differences with the reconstructions, mainly in the amplitude of past temperature variations and in the 20th-century trends. Neither proxy-based reconstructions nor model simulations are able to track the temperature variations of the instrumental record perfectly, suggesting that both approaches still need further improvement.
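The "simple variance matching" calibration mentioned above can be sketched like this: the proxy series is linearly rescaled so that, over the calibration period, its mean and standard deviation equal those of the instrumental target. All data below are synthetic placeholders, not the Pyrenees series:

```python
import numpy as np

def variance_matching(proxy, target, calib):
    """Rescale `proxy` so that over the calibration indices `calib` it has the
    same mean and standard deviation as `target` (variance-matching calibration)."""
    p, t = np.asarray(proxy, float), np.asarray(target, float)
    a = t[calib].std() / p[calib].std()        # match the variance
    b = t[calib].mean() - a * p[calib].mean()  # match the mean
    return a * p + b

rng = np.random.default_rng(2)
temp = 12 + rng.standard_normal(100)                  # hypothetical May-September temperatures
proxy = 0.5 * (temp - 12) + rng.normal(0, 0.3, 100)   # hypothetical tree-ring density index
calib = np.arange(50)                                 # calibrate on the first half
recon = variance_matching(proxy, temp, calib)
print(recon[calib].mean(), recon[calib].std())        # equal to the target's over calib
```

Unlike regression, this calibration preserves the proxy's full variance, which is one reason the two methods yield reconstructions with different amplitudes.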

Relevance:

100.00%

Publisher:

Abstract:

Study based on a stay at the Royal Brompton Hospital, London, United Kingdom, during October and November 2006. The benefits of beta-adrenergic stimulation in patients with acute lung injury (ALI) are known, but no data are available on a possible anti-inflammatory effect. Exhaled breath condensate (EBC) is a non-invasive technique for collecting samples from the lower respiratory tract, and may be useful for monitoring respiratory diseases. Biological markers in the EBC of mechanically ventilated patients with ALI were used to study the possible anti-inflammatory effect that salbutamol might exert. EBC was collected before and after the administration of inhaled salbutamol. Immediately afterwards, conductivity and pH were measured before and after degassing with helium. The concentrations of nitrites and nitrates were measured. Samples were lyophilized and stored at -80°C. The concentration of leukotriene B4 was measured after reconstitution of the sample. Results are expressed as mean (standard error). No differences were detected between the baseline EBC values of patients with ALI and the reference values of the healthy population of Barcelona. It is concluded that EBC is a non-invasive technique that can be used for monitoring mechanically ventilated patients. Inhaled salbutamol significantly increases the EBC pH of patients with ALI, although a direct effect of the salbutamol inhalation itself cannot be ruled out.

Relevance:

100.00%

Publisher:

Abstract:

We present sharpened lower bounds on the size of cut-free proofs in first-order logic. Prior lower bounds for eliminating cuts from a proof established superexponential growth as a stack of exponentials, with the height of the stack proportional to the maximum depth d of the formulas in the original proof. Our new lower bounds remove the constant of proportionality, giving an exponential stack of height equal to d − O(1). The proof method is based on more efficiently expressing the Gentzen-Solovay cut formulas as low-depth formulas.
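In iterated-exponential notation (the notation below is ours, introduced only to make "stack of exponentials" precise; the constants are schematic, not the paper's):

```latex
% stack-of-exponentials (iterated exponential) of height k:
2_0(x) = x, \qquad 2_{k+1}(x) = 2^{\,2_k(x)}.
% Prior bounds: cut elimination can force cut-free proofs of size at least
%   2_{c\,d}(1) \quad\text{for some constant } c < 1,
% where d is the maximum formula depth. The sharpened bound raises the
% stack height from c*d to d - O(1):
\text{cut-free proof size} \;\geq\; 2_{\,d - O(1)}(1).
```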

Relevance:

100.00%

Publisher:

Abstract:

The preceding two editions of CoDaWork included talks on the possible consideration of densities as infinite compositions: Egozcue and Díaz-Barrero (2003) extended the Euclidean structure of the simplex to a Hilbert space structure on the set of densities within a bounded interval, and van den Boogaart (2005) generalized this to the set of densities bounded by an arbitrary reference density. From the many variations of the Hilbert structures available, we work with three cases. For bounded variables, a basis derived from Legendre polynomials is used. For variables with a lower bound, we standardize them with respect to an exponential distribution and express their densities as coordinates in a basis derived from Laguerre polynomials. Finally, for unbounded variables, a normal distribution is used as reference, and coordinates are obtained with respect to a Hermite-polynomial-based basis. To get the coordinates, several approaches can be considered. A numerical accuracy problem occurs if one estimates the coordinates directly by using discretized scalar products. Thus we propose to use a weighted linear regression approach, where all k-order polynomials are used as predictor variables and weights are proportional to the reference density. Finally, for the case of 2-order Hermite polynomials (normal reference) and 1-order Laguerre polynomials (exponential reference), one can also derive the coordinates from their relationships to the classical mean and variance. Apart from these theoretical issues, this contribution focuses on the application of this theory to two main problems in sedimentary geology: the comparison of several grain size distributions, and the comparison among different rocks of the empirical distribution of a property measured on a batch of individual grains from the same rock or sediment, such as their composition.
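The weighted-regression route to the coordinates can be sketched for the normal-reference (Hermite) case. The grid, the weight normalization and the test density below are assumptions of this illustration, not the contribution's actual setup:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def hermite_coordinates(logratio, x, k):
    """Weighted least-squares fit of log(f/phi) on probabilists' Hermite
    polynomials He_0..He_k evaluated on the grid x, with weights proportional
    to the standard normal reference density phi; returns the coordinates
    for He_1..He_k (He_0 only absorbs the normalizing constant)."""
    w = np.sqrt(np.exp(-0.5 * x**2))          # sqrt of weights ∝ reference density
    A = hermevander(x, k)                      # columns He_0(x), ..., He_k(x)
    coef, *_ = np.linalg.lstsq(A * w[:, None], logratio * w, rcond=None)
    return coef[1:]

# check on a density whose coordinates are known analytically:
# for f = N(m, 1) against phi = N(0, 1), log(f/phi) = m*x - m^2/2,
# so the He_1 coordinate is m and all higher ones vanish
x = np.linspace(-4.0, 4.0, 201)
m = 0.5
logratio = m * x - m**2 / 2
coords = hermite_coordinates(logratio, x, k=3)
print(coords)   # ≈ (0.5, 0, 0)
```

Because the log-ratio lies exactly in the span of He_0 and He_1, the weighted regression recovers the coordinates exactly; for empirical densities, the weighting keeps the ill-conditioned tails of the grid from dominating the fit.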

Relevance:

100.00%

Publisher:

Abstract:

CO2 emissions induced by human activities are the major cause of climate change; hence, strong environmental policy that limits the growing dependence on fossil fuels is indispensable. Tradable permits and environmental taxes are the usual tools used in CO2 reduction strategies. Such economic tools provide incentives to polluting industries to reduce their emissions through market signals. The aim of this work is to investigate the direct and indirect effects of an environmental tax on Spanish products and services. We apply an environmentally extended input-output (EIO) model to identify the CO2 emission intensities of products and services and, accordingly, we estimate a tax proportional to these intensities. The short-term price effects are analyzed using an input-output price model. The effect of the tax introduction on consumption prices, and its influence on consumers' welfare, are determined. We also quantify the environmental impact of such taxation in terms of the reduction in CO2 emissions. The results, based on the Spanish economy for the year 2007, show that sectors with a relatively poor environmental profile are subject to high environmental tax rates. Consequently, applying a CO2 tax to these sectors increases production prices, induces a slight increase in the consumer price index and causes a decrease in private welfare. The revenue from the tax could be used to counterbalance the negative effects on social welfare and also to stimulate an increase in the share of renewable energy in the sectors with the greatest impact. Finally, our analysis highlights that the environmental and economic goals cannot be met at the same time by environmental taxation alone, which shows the necessity of finding other (complementary or alternative) measures to ensure both economic and ecological efficiency. Keywords: CO2 emissions; environmental tax; input-output model; effects of environmental taxation.
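The input-output price mechanism can be sketched with a toy table. In the Leontief price model, unit prices solve p = A'p + v (costs of intermediate inputs plus value added); a tax proportional to each sector's emission intensity simply adds t·e to unit costs. All coefficients below are illustrative, not the 2007 Spanish data:

```python
import numpy as np

# hypothetical 3-sector economy
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.10, 0.10],
              [0.05, 0.05, 0.20]])   # technical coefficients (input i per unit output j)
e = np.array([0.8, 0.3, 0.1])        # CO2 emitted per unit of output
tax_rate = 0.05                      # tax per unit of CO2

I = np.eye(3)
# value added per unit output, chosen so baseline prices are normalized to 1
v = np.ones(3) - A.sum(axis=0)
p0 = np.linalg.solve(I - A.T, v)                    # baseline prices (all 1)
p1 = np.linalg.solve(I - A.T, v + tax_rate * e)     # prices with the CO2 tax
print(p1 - p0)   # direct + indirect price increase per sector
```

The price change (I − A')⁻¹·(t·e) captures both the direct cost of the tax and the indirect cost propagated through intermediate inputs, so the most emission-intensive sector sees the largest increase.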

Relevance:

100.00%

Publisher:

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from this pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. For additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the genes ending at each compatible preceding exon. The algorithm presented here relies on the simple fact that this highest scoring gene can be stored and updated as the scan proceeds. This requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model. Instead, the definition of valid gene structures is externally specified in the so-called Gene Model, which states which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows great flexibility in formulating the gene identification problem; in particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
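The linear-scan idea, keeping a single running best score instead of re-scanning all preceding exons for every candidate, can be sketched as follows. This is a hedged simplification: frame compatibility and the Gene Model are omitted, "compatible" is reduced to "donor before acceptor", and the exon coordinates are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Exon:
    start: int    # acceptor position (assumed <= end)
    end: int      # donor position
    score: float

def best_gene_score(exons):
    """Best-scoring chain of nonoverlapping exons. After sorting, exons are
    scanned simultaneously by increasing acceptor and by increasing donor
    position; a single running maximum replaces the quadratic re-scan of all
    compatible preceding exons."""
    by_start = sorted(exons, key=lambda e: e.start)
    by_end = sorted(exons, key=lambda e: e.end)
    best = {}             # best gene score ending at each exon
    running_max = 0.0     # best gene score among exons whose donor lies behind us
    j = 0
    for e in by_start:
        # fold in every exon whose donor precedes this exon's acceptor;
        # each exon is folded in exactly once, so the scan stays linear
        while j < len(by_end) and by_end[j].end < e.start:
            running_max = max(running_max, best[id(by_end[j])])
            j += 1
        best[id(e)] = e.score + running_max
    return max(best.values())

exons = [Exon(0, 10, 2.0), Exon(5, 20, 5.0), Exon(12, 30, 1.5), Exon(25, 40, 3.0)]
print(best_gene_score(exons))   # → 8.0, the chain Exon(5, 20) + Exon(25, 40)
```

Sorting costs O(n log n); the scan itself is linear, because the `while` loop advances `j` monotonically over the donor-sorted list.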

Relevance:

100.00%

Publisher:

Abstract:

This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, first proposed in earlier work and shown to attain full transmission diversity. We study the iterative threshold performance of these codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus the fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of two fading blocks, the iterative threshold of Root-LDPC codes is proportional to the product (α1 α2) of the corresponding fading gains. From this result, the full diversity property of Root-LDPC codes immediately follows.

Relevance:

100.00%

Publisher:

Abstract:

We compare two methods for visualising contingency tables and develop a method called the ratio map which combines the good properties of both. The first is a biplot based on the logratio approach to compositional data analysis. This approach is founded on the principle of subcompositional coherence, which assures that results are invariant to considering subsets of the composition. The second approach, correspondence analysis, is based on the chi-square approach to contingency table analysis. A cornerstone of correspondence analysis is the principle of distributional equivalence, which assures invariance in the results when rows or columns with identical conditional proportions are merged. Both methods may be described as singular value decompositions of appropriately transformed matrices. Correspondence analysis includes a weighting of the rows and columns proportional to the margins of the table. If this idea of row and column weights is introduced into the logratio biplot, we obtain a method which obeys both principles of subcompositional coherence and distributional equivalence.
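The "SVD of an appropriately transformed matrix" view of correspondence analysis, with row and column weights proportional to the margins, can be sketched as follows (the contingency table is hypothetical):

```python
import numpy as np

def correspondence_analysis(N):
    """Correspondence analysis as the SVD of the matrix of standardized
    residuals: with P = N / n and margins r, c, decompose
    S = D_r^{-1/2} (P - r c^T) D_c^{-1/2}; principal coordinates are obtained
    by undoing the margin weighting and scaling by the singular values."""
    P = np.asarray(N, float) / np.sum(N)
    r, c = P.sum(axis=1), P.sum(axis=0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U / np.sqrt(r)[:, None]) * sv     # row principal coordinates
    cols = (Vt.T / np.sqrt(c)[:, None]) * sv  # column principal coordinates
    return rows, cols, sv

N = np.array([[20, 10, 5],
              [10, 25, 10],
              [5, 10, 30]])     # hypothetical contingency table
rows, cols, sv = correspondence_analysis(N)
print(sv**2)                    # principal inertias; their sum is chi-square / n
```

The squared singular values sum to the total inertia (Pearson chi-square divided by the grand total), which is the quantity the biplot decomposes axis by axis.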