57 results for additive representability
Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension
Resumo:
In this paper, we establish lower and upper Gaussian bounds for the probability density of the mild solution to the stochastic heat equation with multiplicative noise and in any space dimension. The driving perturbation is a Gaussian noise which is white in time with some spatially homogeneous covariance. These estimates are obtained using tools of the Malliavin calculus. The most challenging part is the lower bound, which is obtained by adapting a general method developed by Kohatsu-Higa to the underlying spatially homogeneous Gaussian setting. Both lower and upper estimates have the same form: a Gaussian density with a variance which is equal to that of the mild solution of the corresponding linear equation with additive noise.
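Schematically (with generic constants $c_1, c_2, C_1, C_2$, writing $F_0$ for the deterministic contribution of the initial condition and $\sigma_t^2$ for the variance of the mild solution of the linear additive-noise equation; the exact normalization is left generic here), two-sided estimates of the kind stated above read:

```latex
\frac{c_1}{\sigma_t}\,\exp\!\left(-\frac{(y-F_0)^2}{c_2\,\sigma_t^2}\right)
\;\le\; p_{t,x}(y) \;\le\;
\frac{C_1}{\sigma_t}\,\exp\!\left(-\frac{(y-F_0)^2}{C_2\,\sigma_t^2}\right)
```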
Resumo:
Time series regression models are especially suitable in epidemiology for evaluating the short-term effects of time-varying exposures on health. The problem is that the potential for confounding in time series regression is very high; thus, it is important that trend and seasonality are properly accounted for. Our paper reviews the statistical models commonly used in time-series regression methods, especially those allowing for serial correlation, which makes them potentially useful for selected epidemiological purposes. In particular, we discuss the use of time-series regression for counts using a wide range of Generalised Linear Models as well as Generalised Additive Models. In addition, critical points in using statistical software for GAMs have recently been raised, and reanalyses of time series data on air pollution and health were performed in order to update already published results. Applications are offered through an example on the relationship between asthma emergency admissions and photochemical air pollutants.
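As a sketch of the core computation behind such count regressions, the following minimal Python snippet fits a log-linear Poisson model log E[y] = b0 + b1·x by Newton scoring. The covariate could be, say, a daily pollutant level; all names and data are illustrative rather than taken from the paper, and a real analysis would use a full GLM/GAM package with trend and seasonality terms:

```python
import math

def poisson_fit(x, y, iters=25):
    """Newton scoring for the Poisson model log E[y_i] = b0 + b1 * x_i."""
    b0 = math.log(sum(y) / len(y))  # start from the overall mean rate
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # score vector U = X'(y - mu) and 2x2 Fisher information
        # X'WX with W = diag(mu)
        u0 = sum(yi - mi for yi, mi in zip(y, mu))
        u1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
        i00 = sum(mu)
        i01 = sum(xi * mi for xi, mi in zip(x, mu))
        i11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = i00 * i11 - i01 * i01
        # Newton step: beta += (X'WX)^{-1} U
        b0 += (i11 * u0 - i01 * u1) / det
        b1 += (i00 * u1 - i01 * u0) / det
    return b0, b1
```

On the log-link scale covariate effects are additive, so exp(b1) is the rate ratio associated with a one-unit increase in exposure.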
Resumo:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent –essential zeros– or because it is below detection limit –rounded zeros. Because the second kind of zeros is usually understood as “a trace too small to measure”, it seems reasonable to replace them by a suitable small value, and this has been the traditional approach. As stated, e.g. by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts –and thus the metric properties– should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003) where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is “natural” in the sense that it recovers the “true” composition if replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved.
As a generalization of the multiplicative replacement, in the same paper a substitution method for missing values on compositional data sets is introduced.
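The multiplicative strategy described above can be sketched as follows, assuming a closed composition summing to 1 and a single common replacement value δ (a hypothetical helper for illustration, not code from the cited papers):

```python
def multiplicative_replacement(x, delta=0.005):
    """Replace rounded zeros in a composition that sums to 1.

    Zero parts are set to delta; non-zero parts are rescaled by a common
    multiplicative factor so the result is again a composition. Because
    all non-zero parts are multiplied by the same factor, their ratios
    (and hence the structure of zero-free subcompositions) are preserved.
    """
    zero_mass = delta * sum(1 for xi in x if xi == 0)
    return [delta if xi == 0 else xi * (1 - zero_mass) for xi in x]
```

For example, replacing the zero in [0.5, 0.3, 0.0, 0.2] with δ = 0.01 rescales the non-zero parts by 0.99, so the result still sums to 1 and the ratio between the first two parts stays 5/3.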
Resumo:
In accordance with the general objectives of the project and the work plan foreseen for this year, cellulose fibres and microfibres were obtained from two sources: plant cellulose from pine and eucalyptus, and bacterial cellulose. The microfibrils were used as reinforcement for the fabrication of composite materials based on natural rubber, polycaprolactone and polyvinyl alcohol. The samples were prepared by the casting technique in aqueous medium at room temperature, and were characterized in terms of their mechanical, physical and thermal properties. It was observed that, in general, the addition of cellulose microfibrils to the polymer matrices produces a substantial improvement in the mechanical properties of the material compared with the unreinforced polymer. The results can be summarized as follows: 1. Fabrication of composite materials based on natural rubber and cellulose fibres. Cellulose fibres and nanofibres were obtained, chemically modified and used as reinforcement in a rubber matrix. The results showed improved mechanical properties, mainly in the composites reinforced with nanofibres. 2. Production of cellulose whiskers and their use as reinforcement in a polycaprolactone matrix. Cellulose whiskers were obtained from bleached pulp. Their addition to a polycaprolactone matrix produced composites with mechanical properties superior to those of the matrix, with good dispersion of the whiskers. 3. Production of bacterial cellulose fibres and cellulose nanofibres, their isolation and use in a polyvinyl alcohol matrix. Bacterial cellulose was obtained from the bacterium Gluconacetobacter xylinum, and cellulose nanofibres were also prepared from bleached eucalyptus.
Bacterial cellulose as reinforcement did not produce significant improvements in the mechanical properties of the matrix; in contrast, notable improvements were observed with the nanofibres as reinforcement.
Resumo:
Pippenger [Pi77] showed the existence of a (6m,4m,3m,6)-concentrator for each positive integer m using a probabilistic method. We generalize his approach and prove the existence of a (6m,4m,3m,5.05)-concentrator (which is no longer regular, but has fewer edges). We apply this result to improve the constant of approximation of almost additive set functions by additive set functions from 44.5 (established by Kalton and Roberts in [KaRo83]) to 39. We show a more direct connection of the latter problem to the Whitney-type estimate for the approximation of continuous functions on a cube in ℝ^d by linear functions, and improve the estimate of this Whitney constant from 802 (proved by Brudnyi and Kalton in [BrKa00]) to 73.
Resumo:
Background: GTF2I codes for TFII-I, a general transcription factor and calcium channel regulator with high and ubiquitous expression, and a strong candidate for involvement in the morphological and neurodevelopmental anomalies of Williams-Beuren syndrome (WBS). WBS is a genetic disorder due to a recurring deletion of about 1.55–1.83 Mb containing 25–28 genes in chromosome band 7q11.23, including GTF2I. Complete homozygous loss of either Gtf2i or Gtf2ird1 function in mice provided additional evidence for the involvement of both genes in the craniofacial and cognitive phenotype. Unfortunately, nothing is known about the behavioral characterization of heterozygous mice. Methods: By gene targeting we have generated mutant mice with a deletion of the first 140 amino acids of TFII-I. mRNA and protein expression analyses were used to document the effect of this deletion. We performed a behavioral characterization of heterozygous mutant mice to document the in vivo implications of TFII-I in the cognitive profile of WBS patients. Results: Homozygous and heterozygous mutant mice exhibit craniofacial alterations, most clearly in the homozygous condition. Behavioral tests demonstrate that heterozygous mutant mice exhibit some neurobehavioral alterations and hyperacusis or odynacusis that could be associated with specific features of the WBS phenotype. Homozygous mutant mice present highly compromised embryonic viability and fertility. Regarding the cellular model, we documented retarded growth in heterozygous MEFs with respect to homozygous or wild-type MEFs. Conclusion: Our data confirm that, although additive effects of haploinsufficiency at several genes may contribute to the full craniofacial or neurocognitive features of WBS, correct expression of GTF2I is one of the main players.
In addition, these findings show that deletion of the first 140 amino acids of TFII-I alters its correct function, leading to a clear phenotype at both the cellular and the in vivo level.
Resumo:
Background: Prolificacy is the most important trait influencing the reproductive efficiency of pig production systems. The low heritability and sex-limited expression of prolificacy have hindered to some extent the improvement of this trait through artificial selection. Moreover, the relative contributions of additive, dominant and epistatic QTL to the genetic variance of pig prolificacy remain to be defined. In this work, we have addressed this issue by performing one-dimensional and bi-dimensional genome scans for number of piglets born alive (NBA) and total number of piglets born (TNB) in a three-generation Iberian by Meishan F2 intercross. Results: The one-dimensional genome scan for NBA and TNB revealed the existence of two genome-wide highly significant QTL located on SSC13 (P < 0.001) and SSC17 (P < 0.01) with effects on both traits. This relative paucity of significant results contrasted very strongly with the wide array of highly significant epistatic QTL that emerged in the bi-dimensional genome-wide scan analysis. As many as 18 epistatic QTL were found for NBA (four at P < 0.01 and five at P < 0.05) and TNB (three at P < 0.01 and six at P < 0.05), respectively. These epistatic QTL were distributed across multiple genomic regions, which covered 13 of the 18 pig autosomes, and they had small individual effects that ranged from 3 to 4% of the phenotypic variance. Different patterns of interactions (a × a, a × d, d × a and d × d) were found amongst the epistatic QTL pairs identified in the current work. Conclusions: The complex inheritance of prolificacy traits in pigs has been evidenced by identifying multiple additive (SSC13 and SSC17), dominant and epistatic QTL in an Iberian × Meishan F2 intercross.
Our results demonstrate that a significant fraction of the phenotypic variance of swine prolificacy traits can be attributed to first-order gene-by-gene interactions, emphasizing that the phenotypic effects of alleles might be strongly modulated by the genetic background in which they segregate.
Resumo:
In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of nonoverlapping frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. Considering additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. The quadratic algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the genes ending at each compatible preceding exon. The algorithm presented here relies on the simple fact that this highest scoring compatible gene can be stored and updated, which requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model: the definition of valid gene structures is externally given in the so-called Gene Model, which simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows for great flexibility in formulating the gene identification problem; in particular, it allows for multiple-gene two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
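The linear-time idea can be sketched in Python as follows. This is a deliberate simplification that ignores frame compatibility, strands and the Gene Model: exons are illustrative (acceptor, donor, score) triples, a valid gene places each exon's acceptor strictly after the previous exon's donor, and an additive scoring function sums the exon scores:

```python
def best_gene_score(exons):
    """Highest-scoring chain of non-overlapping exons, linear after sorting.

    Two pointers sweep the exons simultaneously: one by increasing
    acceptor, one by increasing donor. best_prev stores (and updates)
    the best already-assembled gene whose last donor precedes the
    current acceptor, so each exon is handled in amortized O(1).
    """
    n = len(exons)
    acc_order = sorted(range(n), key=lambda i: exons[i][0])
    don_order = sorted(range(n), key=lambda i: exons[i][1])
    total = [0.0] * n   # best gene score ending at each exon
    best_prev = 0.0     # best score among exons with donor < current acceptor
    best = 0.0
    j = 0
    for i in acc_order:                 # sweep by increasing acceptor
        acc, don, score = exons[i]
        while j < n and exons[don_order[j]][1] < acc:  # sweep by donor
            best_prev = max(best_prev, total[don_order[j]])
            j += 1
        total[i] = best_prev + score    # extend the best compatible gene
        best = max(best, total[i])
    return best
```

For example, with exons (1,5,3.0), (6,10,4.0), (2,8,6.0) and (11,12,2.0), the best assembly chains the first, second and fourth exons for a total score of 9.0.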
Resumo:
Murine models and association studies in eating disorder (ED) patients have shown a role for the brain-derived neurotrophic factor (BDNF) in eating behavior. Some studies have shown association of BDNF -270C/T single-nucleotide polymorphism (SNP) with bulimia nervosa (BN), while BDNF Val66Met variant has been shown to be associated with both BN and anorexia nervosa (AN). To further test the role of this neurotrophin in humans, we screened 36 SNPs in the BDNF gene and tested for their association with ED and plasma BDNF levels as a quantitative trait. We performed a family-based association study in 106 ED nuclear families and analyzed BDNF blood levels in 110 ED patients and in 50 sib pairs discordant for ED. The rs7124442T/rs11030102C/rs11030119G haplotype was found associated with high BDNF levels (mean BDNF TCG haplotype carriers = 43.6 ng/ml vs. mean others 23.0 ng/ml, P = 0.016) and BN (Z = 2.64; P recessive = 0.008), and the rs7934165A/270T haplotype was associated with AN (Z =-2.64; P additive = 0.008). The comparison of BDNF levels in 50 ED discordant sib pairs showed elevated plasma BDNF levels for the ED group (mean controls = 41.0 vs. mean ED = 52.7; P = 0.004). Our data strongly suggest that altered BDNF levels modulated by BDNF gene variability are associated with the susceptibility to ED, providing physiological evidence that BDNF plays a role in the development of AN and BN, and strongly arguing for its involvement in eating behavior and body weight regulation.
Resumo:
Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept is introduced, the flat-partition, that provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
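As a toy illustration of the multiplicative property for Shamir's threshold scheme over a prime field (the archetypal multiplicative LSSS): with threshold t and n = 2t + 1 players, the pointwise products of two share vectors lie on a polynomial of degree 2t whose value at 0 is the product of the two secrets, so the product can be reconstructed by linear interpolation. This is a didactic sketch, not one of the constructions studied in the paper:

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is in the field GF(P)

def share(secret, t, n):
    """Shamir shares: points (x, f(x)) of a random degree-t poly, f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(P)."""
    s = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse of den
    return s

t, n = 1, 3                      # n = 2t + 1 players
sh_a = share(7, t, n)
sh_b = share(6, t, n)
# Each player multiplies its two shares locally; the products are shares
# of the product secret on a degree-2t polynomial.
prod = [(x, (ya * yb) % P) for (x, ya), (_, yb) in zip(sh_a, sh_b)]
```

Here reconstruct(prod) recovers 7 · 6 = 42 from the locally multiplied shares.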
Resumo:
Price bubbles in an Arrow-Debreu valuation equilibrium in an infinite-time economy are a manifestation of the lack of countable additivity of the valuation of assets. In contrast, known examples of price bubbles in sequential equilibrium in infinite time cannot be attributed to the lack of countable additivity of valuation. In this paper we develop a theory of valuation of assets in sequential markets (with no uncertainty) and study the nature of price bubbles in light of this theory. We consider an operator, called the payoff pricing functional, that maps a sequence of payoffs to the minimum cost of an asset holding strategy that generates it. We show that the payoff pricing functional is linear and countably additive on the set of positive payoffs if and only if there is no Ponzi scheme, provided that there is no restriction on long positions in the assets. In the known examples of equilibrium price bubbles in sequential markets, valuation is linear and countably additive. The presence of a price bubble indicates that the asset's dividends can be purchased in sequential markets at a cost lower than the asset's price. We also present examples of equilibrium price bubbles in which valuation is nonlinear and not countably additive.
Resumo:
We analyze a model of conflict with endogenous choice of effort, where subsets of the contenders may force the resolution to be sequential: first the alliance fights it out with the rest and, in case they win, they later fight it out among themselves. For three-player games, we find that it will not be in the interest of any two of them to form an alliance. We obtain this result under two different scenarios: equidistant preferences with varying relative strengths, and vicinity of preferences with equal distribution of power. We conclude that the commonly made assumption of super-additive coalitional worth is suspect.
Resumo:
Nanocrystalline TiO2 modified with Nb has been produced through the sol-gel technique. Nanopowders have been obtained by means of the hydrolysis of pure alkoxides with deionized water and peptization of the resulting hydrolysate with diluted nitric acid at 100 °C. The addition of Nb stabilizes the anatase phase to higher temperatures: XRD spectra of the undoped and Nb-doped samples show that the undoped sample has been almost totally converted to rutile at 600 °C, whereas the doped samples still present only a low percentage of the rutile phase. Nanocrystalline powders stabilized at 600 °C with grain sizes of about 17 nm have been successfully synthesized by the addition of Nb at a concentration of 2 at.%, which appears to be an adequate additive concentration to improve gas sensor performance, as suggested by the catalytic conversion efficiency experiments performed with FTIR measurements. FTIR absorbance spectra show that catalytic conversion of CO occurs at lower temperatures when niobium is introduced. The electrical response of the films to different concentrations of CO and ethanol has been monitored in dry and wet environments in order to test the influence of humidity on the sensor response. The addition of Nb decreases the working temperature and increases the stability of the layers. A large improvement in response time is also obtained, even at lower working temperatures. Moreover, humidity effects on the gas sensor response toward CO and ethanol are less important in Nb-doped samples than in the undoped ones.
Resumo:
This document illustrates, in a practical way, the use of three tools that allow the actuary to define tariff classes and estimate risk premiums in the ratemaking process for non-life insurance. The first is segmentation analysis (CHAID and XAID), first used in 1997 by UNESPA on its common automobile portfolio. The second is a stepwise selection process based on distance-based regression. The third is a process based on the well-known generalized linear regression model, which represents the most modern technique in the actuarial literature. With the latter, by combining different link functions and error distributions, the classical additive and multiplicative models can be obtained.
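The last point can be illustrated with a minimal sketch (the rating variables and coefficients below are hypothetical, not values from the paper): under an identity link the tariff variables combine additively on the premium scale, while under a log link the same linear predictor yields a multiplicative model:

```python
import math

# Hypothetical fitted coefficients for two tariff variables
b0, b_age, b_zone = 0.5, 0.3, -0.2

def linear_predictor(age, zone):
    return b0 + b_age * age + b_zone * zone

def premium_identity(age, zone):
    # identity link: effects combine additively on the premium itself
    return linear_predictor(age, zone)

def premium_log(age, zone):
    # log link: premium = base * factor_age * factor_zone (multiplicative)
    return math.exp(linear_predictor(age, zone))

# With the log link, raising the age class by one unit always scales
# the premium by the same factor exp(b_age), whatever the zone.
ratio = premium_log(2, 1) / premium_log(1, 1)
```

With the identity link the same one-unit change always adds b_age to the premium instead of scaling it.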