974 results for Variance Models
Abstract:
Pliocene and Pleistocene sediments of the Oman margin and Owen Ridge are characterized by continuous alternation of light and dark layers of nannofossil ooze and marly nannofossil ooze and by cyclic variation of wet-bulk density. The origin of the wet-bulk density and color cycles was examined at Ocean Drilling Program Site 722 on the Owen Ridge and Site 728 on the Oman margin using 3.4-m.y.-long GRAPE (gamma ray attenuation) wet-bulk density records and records of sediment color represented as changes in gray level on black-and-white core photographs. At Sites 722 and 728 sediments display a weak correlation of decreasing wet-bulk density with increasing darkness of sediment color. Wet-bulk density is inversely related to organic carbon concentration and displays little relation to calcium carbonate concentration, which varies inversely with the abundance of terrigenous sediment components. Sediment color darkens with increasing terrigenous sediment abundance (decreasing carbonate content) and with increasing organic carbon concentration. Upper Pleistocene sediments at Site 722 display a regular pattern of dark intervals coinciding with glacial periods, whereas at Site 728 the pattern of color variation is more irregular. There is no consistent relationship between the dark intervals and their relative wet-bulk density in the upper Pleistocene sections at Sites 722 and 728, suggesting that the dominance of organic matter or terrigenous sediment as the primary coloring agent varies. Spectra of the wet-bulk density and optical density time series display concentration of variance at orbital periodicities of 100, 41, 23, and 19 k.y. A strong 41-k.y. periodicity characterizes wet-bulk density and optical density variation at both sites throughout most of the past 3.4 m.y. Cyclicity at the 41-k.y. periodicity is characterized by a lack of coherence between wet-bulk density and optical density, suggesting that the bulk density and color cycles reflect the mixed influence of varying abundance of terrigenous sediments and organic matter. The 23-k.y. periodicity in the wet-bulk density and sediment color cycles is generally characterized by significant coherence between wet-bulk density and optical density, which reflects an inverse relationship between these parameters. Varying organic matter abundance, associated with changes in productivity or preservation, is inferred to more strongly influence changes in wet-bulk density and sediment color at this periodicity.
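The coherence analysis described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the sampling interval, record length, and synthetic stand-ins for the GRAPE density and gray-level series are assumptions, and SciPy's Welch-based coherence estimator is used.

```python
# Minimal sketch of cross-spectral coherence between two evenly resampled
# proxy time series; all series and parameters here are illustrative.
import numpy as np
from scipy.signal import coherence

dt_kyr = 2.0                          # assumed sample spacing after resampling (kyr)
t = np.arange(0, 3400, dt_kyr)        # ~3.4-m.y.-long record

rng = np.random.default_rng(0)
density = (np.cos(2 * np.pi * t / 41) + 0.5 * np.cos(2 * np.pi * t / 23)
           + 0.5 * rng.standard_normal(t.size))     # stand-in for wet-bulk density
optical = (np.cos(2 * np.pi * t / 41 + np.pi / 3) - 0.5 * np.cos(2 * np.pi * t / 23)
           + 0.5 * rng.standard_normal(t.size))     # stand-in for gray-level series

# Magnitude-squared coherence; values near 1 at 1/41 or 1/23 cycles per kyr
# indicate orbital bands where the two proxies covary.
freq, coh = coherence(density, optical, fs=1.0 / dt_kyr, nperseg=512)
for period in (100, 41, 23, 19):
    idx = np.argmin(np.abs(freq - 1.0 / period))
    print(f"{period:>3} kyr band: coherence ~ {coh[idx]:.2f}")
```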
Abstract:
Ten ODP sites drilled in a depth transect (2164-4775 m water depth) during Leg 172 recovered high-deposition-rate (>20 cm/kyr) sedimentary sections from sediment drifts in the western North Atlantic. For each site an age model covering the past 0.8-0.9 Ma has been developed. The time scales have a resolution of 10-20 kyr and are derived by tuning variations in estimated carbonate content to the orbital parameters precession and obliquity. Based on the similarity in the signature of the proxy records and the spectral character of the time series, the sites are divided into two groups: precession cycles are better developed in carbonate records from a group of shallow sites (2164-2975 m water depth, Sites 1055-1058), while the deeper sites (2995-4775 m water depth, Sites 1060-1063) are characterized by higher spectral density in the obliquity band. The resulting time scales show excellent coherence with other dated carbonate and isotope records from low latitudes. Besides the typical Milankovitch cyclicity, significant variance of the resulting carbonate time series is concentrated in millennial-scale changes with periods of about 12, 6, 4, 2.5, and 1.5 kyr. Comparisons of carbonate records from the Blake Bahama Outer Ridge and the Bermuda Rise reveal a remarkable similarity in the time and frequency domains, indicating a basin-wide uniform sedimentation pattern during the last 0.9 Ma.
Abstract:
Based on the faunal record of planktonic foraminifers in three long gravity sediment cores from the eastern equatorial Atlantic, the sea-surface temperature history over the last 750,000 years was studied at a resolution of 3,000 to 10,000 years. Detailed oxygen-isotope and paleomagnetic stratigraphy helped to identify the following major faunal events: Globorotaloides hexagonus and Globorotalia tumida flexuosa became extinct in the eastern tropical Atlantic at the isotope stage 4/5 boundary, now dated at 68,000 years B.P., and the persistent occurrence of the pink variety of Globigerinoides ruber started during late stage 12 at 410,000 years B.P. (CARTUNE age). The latter datum may provide an easily detectable faunal stratigraphic marker for the mid-Brunhes Chron. The updated scheme of the Ericson zones helped to recognize a hiatus on the northwestern slope of the Sierra Leone Basin covering oxygen-isotope stages 10 to 12. Classifying the planktonic foraminifer counts into six faunal assemblages, according to the factor-analysis-derived model of Pflaumann (1985), the tropical and tropical-upwelling communities account for 57 % and 86 % of the variance of the faunal record at Sites 16415 and 13519, respectively. A largely continuous paleotemperature record for both winter and summer seasons was obtained from the top of the Sierra Leone Rise, with winter temperatures ranging between 20 and 25 °C and summer temperatures between 24 and 30 °C. The record of cores from greater water depths is frequently interrupted by samples with no-analogue faunal communities and/or poor preservation. Based on the seasonality signal, the thermal equator shifted during cold periods to a geographically more asymmetrical northern position. Dissolution altering the faunal communities becomes stronger with greater water depth: the estimated mean minimum loss of specimens increases from 70 % to 80 % between 2,860 and 3,850 m water depth, although some species are more susceptible than others. Enhanced dissolution occurred during stage 4 but also during cold phases within warm stages 7 and 9. Correlations between the Foraminiferal Dissolution Index and the estimated sea-surface temperatures are significant. Foraminiferal flux rates, negatively correlated with the flux rates of organic carbon and of diatoms, may be a result of enhanced dissolution during cold stages, destroying still more of the faunal signal than indicated by the calculated minimum loss. The fluctuations of the oxygen-isotope curves and the winter sea-surface temperatures are fairly coherent. During warm oxygen-isotope stages the temperature maxima often lag the respective isotope minima by 5 to 15 ka. During cold stages, sea-surface temperature changes are partly out of phase and contain additional fluctuations.
Abstract:
Stochastic model updating must be considered for quantifying the uncertainties inherently present in real-world engineering structures. By this means the statistical properties, instead of deterministic values, of structural parameters can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly in terms of theoretical complexity and low computational efficiency. This study proposes a simple and cost-efficient method by decomposing a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation, and easy inverse optimization. Monte Carlo simulation is adopted for generating samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic values of the parameters. The parameter means and variances can then be statistically estimated from the parameter predictions obtained by running all the samples. Meanwhile, the analysis of variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method offers similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
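The decomposition idea can be illustrated with a short sketch, under assumptions: the quadratic surrogate, the "measured" response statistics, and the sample size below are placeholders, not the study's actual response surface or data.

```python
# Sketch: replace the FE model with a response-surface surrogate, draw Monte
# Carlo samples of the responses, solve one deterministic inverse problem per
# sample, then estimate parameter means and variances from the solutions.
import numpy as np
from scipy.optimize import least_squares

def surrogate(theta):
    """Assumed quadratic response surface: two parameters -> two responses."""
    k, m = theta
    return np.array([2.0 * k + 0.5 * m + 0.1 * k * m,
                     1.0 * k - 0.3 * m + 0.05 * k ** 2])

rng = np.random.default_rng(1)
resp_mean = surrogate(np.array([1.0, 2.0]))   # pretend measured response means
resp_cov = np.diag([0.02, 0.01])              # pretend measured response covariance

samples = rng.multivariate_normal(resp_mean, resp_cov, size=200)
estimates = []
for target in samples:                        # one deterministic inverse per sample
    sol = least_squares(lambda th: surrogate(th) - target, x0=[0.5, 1.0])
    estimates.append(sol.x)
estimates = np.array(estimates)

print("parameter means:    ", estimates.mean(axis=0))
print("parameter variances:", estimates.var(axis=0, ddof=1))
```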
Abstract:
The present study explores a "hydrophobic" energy function for folding simulations of the protein lattice model. The contribution of each monomer to the conformational energy is the product of its "hydrophobicity" and the number of contacts it makes, i.e., E(h⃗, c⃗) = −∑_{i=1}^{N} c_i h_i = −(h⃗ · c⃗) is the negative scalar product between two vectors in N-dimensional Cartesian space: h⃗ = (h_1, … , h_N), which represents monomer hydrophobicities and is sequence-dependent; and c⃗ = (c_1, … , c_N), which represents the number of contacts made by each monomer and is conformation-dependent. A simple theoretical analysis shows that restrictions are imposed concomitantly on both sequences and native structures if the stability criterion for protein-like behavior is to be satisfied. Given a conformation with vector c⃗, the best sequence is a vector h⃗ along the direction onto which the projection of c⃗ − c̄⃗ is maximal, where c̄⃗ is the diagonal vector with components equal to c̄, the average number of contacts per monomer in the unfolded state. Best native conformations are suggested to be not maximally compact, as assumed in many studies, but those with the largest variance of contacts among their monomers, i.e., with monomers tending to occupy completely buried or completely exposed positions. This inside/outside segregation is reflected in an apolar/polar distribution on the corresponding sequence. Monte Carlo simulations in two dimensions corroborate this general scheme. Sequences targeted to conformations with large contact variances folded cooperatively, with the thermodynamics of a two-state transition. Sequences targeted to maximally compact conformations, which have lower contact variance, were found either to have a degenerate ground state or to fold with much lower cooperativity.
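A tiny numeric illustration of the energy function and of the contact variance emphasized above; the hydrophobicities and contact counts are hypothetical, not taken from the study.

```python
# E = -sum_i c_i h_i = -(h . c): h_i is monomer hydrophobicity (sequence),
# c_i the number of contacts monomer i makes in a given lattice conformation.
import numpy as np

h = np.array([0.9, 0.1, 0.8, 0.2, 0.7])   # hypothetical hydrophobicities
c = np.array([3, 0, 2, 1, 3])             # hypothetical contact counts

energy = -np.dot(h, c)
contact_variance = c.var()                # the quantity argued to be large in good folders
print(f"E = {energy:.2f}, Var(contacts) = {contact_variance:.2f}")
```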
Abstract:
Variability in population growth rate is thought to have negative consequences for organism fitness. Theory for matrix population models predicts that the variance in population growth rate should be the sum, over matrix entries, of the variance in each entry times the squared sensitivity for that entry. I analyzed the stage-specific demography of 30 field populations from 17 published studies for patterns between the variance of a demographic term and its contribution to population growth. There were no instances in which a matrix entry both was highly variable and had a large effect on population growth rate; instead, correlations between estimates of temporal variance in a term and its contribution to population growth (sensitivity or elasticity) were overwhelmingly negative. In addition, survivorship or growth sensitivities or elasticities always exceeded those of fecundity, implying that the former two terms always contributed more to population growth rate. These results suggest that variable life history stages tend to contribute relatively little to population growth rates because natural selection may alter life histories to minimize stages with both high sensitivity and high variation.
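The quoted approximation, Var(λ) ≈ Σᵢⱼ Var(aᵢⱼ)·Sᵢⱼ², can be made concrete with a small numeric sketch. The projection matrix and entry variances below are hypothetical, and sensitivities are computed with the standard eigenvector formula Sᵢⱼ = vᵢwⱼ/⟨v, w⟩.

```python
# Hypothetical 2-stage projection matrix: approximate Var(lambda) from entry
# variances and sensitivities of the dominant eigenvalue.
import numpy as np

A = np.array([[0.0, 1.5],        # hypothetical stage-structured projection matrix
              [0.4, 0.8]])
var_A = np.array([[0.00, 0.30],  # hypothetical temporal variances of each entry
                  [0.01, 0.02]])

eigvals, right = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals.real[k]
w = right[:, k].real                     # right eigenvector (stable stage structure)

eigvals_T, left = np.linalg.eig(A.T)
kT = np.argmax(eigvals_T.real)
v = left[:, kT].real                     # left eigenvector (reproductive values)

S = np.outer(v, w) / np.dot(v, w)        # sensitivities d(lambda)/d(a_ij)
var_lambda = np.sum(var_A * S ** 2)
print(f"lambda = {lam:.3f}, approx Var(lambda) = {var_lambda:.4f}")
```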
Abstract:
In recent years, VAR models have become the main econometric tool for testing whether a relationship between variables may exist and for evaluating the effects of economic policies. This thesis studies three different identification approaches starting from reduced-form VAR models (including the sampling period, the set of endogenous variables, and deterministic terms). For VAR models we use the Granger causality test to verify the ability of one variable to predict another; in the case of cointegration we use VECM models to jointly estimate the long-run and short-run coefficients; and in the case of small data sets and overfitting problems we use Bayesian VAR models with impulse response functions and variance decomposition to analyse the effect of shocks on macroeconomic variables. To this end, the empirical studies are carried out using time series of specific data and formulating different hypotheses. Three VAR models were used: first, to study monetary policy decisions and discriminate among the various post-Keynesian theories of monetary policy, in particular the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015) and the nominal GDP rule in the Euro Area (paper 1); second, to extend the evidence on the money endogeneity hypothesis by assessing the effects of bank securitization on the monetary policy transmission mechanism in the United States (paper 2); and third, to evaluate the effects of ageing on health expenditure in Italy in terms of economic policy implications (paper 3). The thesis is introduced by Chapter 1, which outlines the context, motivation, and purpose of this research, while the structure and summary, as well as the main results, are described in the remaining chapters. Chapter 2 examines, using a first-difference VAR model with quarterly Euro Area data, whether monetary policy decisions can be interpreted in terms of a "monetary policy rule", with specific reference to the so-called "nominal GDP targeting rule" (McCallum 1988; Hall and Mankiw 1994; Woodford 2012). The results highlight a causal relationship running from the gap between the growth rates of nominal GDP and target GDP to changes in the three-month market interest rate. The same analysis does not appear to confirm the existence of a significant causal relationship in the opposite direction, from changes in the market interest rate to the gap between the growth rates of nominal GDP and target GDP. Similar results were obtained when the market interest rate was replaced with the ECB refinancing rate. This confirmation of only one of the two directions of causality does not support an interpretation of monetary policy based on the nominal GDP targeting rule and, more generally, raises doubts about the applicability of the Taylor rule and of all conventional monetary policy rules to the case in question. The results instead appear more in line with other possible approaches, such as those based on certain post-Keynesian and Marxist analyses of monetary theory and, more specifically, the so-called "solvency rule" (Brancaccio and Fontana 2013, 2015).
These lines of research challenge the simplistic thesis that the scope of monetary policy is the stabilization of inflation, real GDP, or nominal income around a "natural" equilibrium level. Rather, they suggest that central banks actually pursue a more complex aim, namely the regulation of the financial system, with particular reference to the relationships between creditors and debtors and the relative solvency of economic units. Chapter 3 analyses the supply of loans by considering the endogeneity of money arising from banks' securitization activity over the period 1999-2012. Although much of the literature investigates the endogeneity of the money supply, this approach has rarely been adopted to investigate money endogeneity in the short and long run with a study of the United States during its two main crises: the bursting of the dot-com bubble (1998-1999) and the sub-prime mortgage crisis (2008-2009). In particular, we consider the effects of financial innovation on the lending channel using the loan series adjusted for securitization, in order to verify whether the American banking system is driven to seek cheaper sources of funding, such as securitization, under restrictive monetary policy (Altunbas et al., 2009). The analysis is based on the monetary aggregates M1 and M2. Using VECM models, we examine a long-run relationship among the variables in levels and assess the effects of the money supply by analysing how strongly monetary policy affects short-run deviations from the long-run relationship. The results show that securitization influences the impact of loans on M1 and M2. This implies that the money supply is endogenous, confirming the structuralist approach and indicating that economic agents are motivated to increase securitization as a precautionary hedge against monetary policy shocks. Chapter 4 investigates the relationship between per capita health expenditure, per capita GDP, the ageing index, and life expectancy in Italy over the period 1990-2013, using Bayesian VAR models and annual data drawn from the OECD and Eurostat databases. The impulse response functions and the variance decomposition highlight a positive relationship running from per capita GDP to per capita health expenditure, from life expectancy to health expenditure, and from the ageing index to per capita health expenditure. The impact of ageing on health expenditure is more significant than that of the other variables. Overall, our results suggest that disabilities closely related to ageing may be the main driver of health expenditure in the short-to-medium term. Good healthcare management helps improve patient well-being without increasing total health expenditure. However, policies that improve the health status of older people may be needed to achieve lower per capita demand for health and social services.
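A generic, hedged sketch of the reduced-form VAR workflow described above (fit, Granger causality, impulse responses, forecast-error variance decomposition), using statsmodels; the variable names and simulated data are placeholders, not the thesis's series.

```python
# Reduced-form VAR: Granger causality, IRFs, and variance decomposition.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(42)
n = 120                                           # e.g. quarterly observations
gdp_gap = rng.standard_normal(n).cumsum() * 0.1   # placeholder nominal-GDP-gap series
rate = 0.3 * np.roll(gdp_gap, 1) + rng.standard_normal(n) * 0.2   # placeholder rate
data = pd.DataFrame({"gdp_gap": gdp_gap, "rate": rate})

res = VAR(data).fit(maxlags=4, ic="aic")

# Does gdp_gap help predict rate? (Granger-causality F-type test)
print(res.test_causality("rate", ["gdp_gap"], kind="f").summary())

res.irf(8)              # orthogonalized impulse responses (call .plot() to view)
res.fevd(8).summary()   # forecast-error variance decomposition, 8 steps ahead
```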
Abstract:
Understanding spatial distributions and how environmental conditions influence catch-per-unit-effort (CPUE) is important for increased fishing efficiency and sustainable fisheries management. This study investigated the relationship between CPUE, spatial factors, temperature, and depth using generalized additive models. Combinations of factors, and not one single factor, were frequently included in the best model. Parameters which best described CPUE varied by geographic region. The amount of variance, or deviance, explained by the best models ranged from a low of 29% (halibut, Charlotte region) to a high of 94% (sablefish, Charlotte region). Depth, latitude, and longitude influenced most species in several regions. On the broad geographic scale, depth was associated with CPUE for every species, except dogfish. Latitude and longitude influenced most species, except halibut (Areas 4 A/D), sablefish, and cod. Temperature was important for describing distributions of halibut in Alaska, arrowtooth flounder in British Columbia, dogfish, Alaska skate, and Aleutian skate. The species-habitat relationships revealed in this study can be used to create improved fishing and management strategies.
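A rough sketch of fitting a generalized additive model of CPUE on depth, latitude, longitude, and temperature. It assumes the third-party pyGAM package (not necessarily the software used in the study), and the data are simulated placeholders.

```python
# GAM of CPUE on spatial factors, depth, and temperature (simulated data).
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(7)
n = 500
depth = rng.uniform(50, 800, n)
lat = rng.uniform(48, 60, n)
lon = rng.uniform(-140, -125, n)
temp = rng.uniform(3, 9, n)
# Placeholder CPUE with a smooth, non-linear dependence on depth and temperature
cpue = 5 + 0.01 * depth - 8e-6 * depth ** 2 + np.sin(temp) + rng.normal(0, 0.5, n)

X = np.column_stack([depth, lat, lon, temp])
gam = LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X, cpue)
gam.summary()   # smooth-term significance and (pseudo) deviance explained
```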
Abstract:
Late Pleistocene signals of calcium carbonate, organic carbon, and opaline silica concentration and accumulation are documented in a series of cores from a zonal/meridional/depth transect in the equatorial Atlantic Ocean to reconstruct the regional sedimentary history. Spectral analysis reveals that maxima and minima in biogenous sedimentation occur with glacial-interglacial cyclicity as a function of both (1) primary production at the sea surface modulated by orbitally forced variation in trade wind zonality and (2) destruction at the seafloor by variation in the chemical character of advected intermediate and deep water from high latitudes modulated by high-latitude ice volume. From these results a pattern emerges in which the relative proportion of signal variance from the productivity signal centered on the precessional (23 kyr) band decreases while that of the destruction signal centered on the obliquity (41 kyr) and eccentricity (100 kyr) periods increases below ~3600-m ocean depth.
Abstract:
It is shown that variance-balanced designs can be obtained from Type I orthogonal arrays for many general models with two kinds of treatment effects, including ones for interference, with general dependence structures. These designs can be used to obtain optimal and efficient designs. Some examples and design comparisons are given.
Abstract:
Single male sexually selected traits have been found to exhibit substantial genetic variance, even though natural and sexual selection are predicted to deplete genetic variance in these traits. We tested whether genetic variance in multiple male display traits of Drosophila serrata was maintained under field conditions. A breeding design involving 300 field-reared males and their laboratory-reared offspring allowed the estimation of the genetic variance-covariance matrix for six male cuticular hydrocarbons (CHCs) under field conditions. Despite individual CHCs displaying substantial genetic variance under field conditions, the vast majority of genetic variance in CHCs was not closely associated with the direction of sexual selection measured on field phenotypes. Relative concentrations of three CHCs correlated positively with body size in the field, but not under laboratory conditions, suggesting condition-dependent expression of CHCs under field conditions. Therefore condition dependence may not maintain genetic variance in preferred combinations of male CHCs under field conditions, suggesting that the large mutational target supplied by the evolution of condition dependence may not provide a solution to the lek paradox in this species. Sustained sexual selection may be adequate to deplete genetic variance in the direction of selection, perhaps as a consequence of the low rate of favorable mutations expected in multiple trait systems.
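The central quantity discussed above, how much genetic variance lies along the direction of sexual selection, has a simple form, βᵀGβ/(βᵀβ). A small numeric illustration with a hypothetical G matrix and selection gradient:

```python
# Genetic variance along an assumed selection gradient beta, versus total variance.
import numpy as np

G = np.array([[0.50, 0.30, 0.10],     # hypothetical genetic (co)variance matrix
              [0.30, 0.40, 0.05],     # for three CHC traits
              [0.10, 0.05, 0.20]])
beta = np.array([0.1, -0.05, 0.9])    # hypothetical sexual selection gradient

v_beta = beta @ G @ beta / (beta @ beta)   # genetic variance along beta
print(f"variance along beta: {v_beta:.3f} of total {np.trace(G):.3f}")
```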
Abstract:
Determining the dimensionality of G provides an important perspective on the genetic basis of a multivariate suite of traits. Since the introduction of Fisher's geometric model, the number of genetically independent traits underlying a set of functionally related phenotypic traits has been recognized as an important factor influencing the response to selection. Here, we show how the effective dimensionality of G can be established, using a method for determining the dimensionality of the effect space from a multivariate general linear model introduced by Amemiya (1985). We compare this approach with two other available methods, factor-analytic modeling and bootstrapping, using a half-sib experiment that estimated G for eight cuticular hydrocarbons of Drosophila serrata. In our example, eight pheromone traits were shown to be adequately represented by only two underlying genetic dimensions by Amemiya's approach and by factor-analytic modeling of the covariance structure at the sire level. In contrast, bootstrapping identified four dimensions with significant genetic variance. A simulation study indicated that while the performance of Amemiya's method was more sensitive to power constraints, it performed as well as or better than factor-analytic modeling in correctly identifying the original genetic dimensions at moderate to high levels of heritability. The bootstrap approach consistently overestimated the number of dimensions in all cases and performed less well than Amemiya's method at subspace recovery.
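The simplest diagnostic behind the dimensionality question is an eigendecomposition of G: how many eigenvalues carry non-trivial genetic variance? The sketch below uses a hypothetical 4-trait G; the formal procedures compared in the abstract (Amemiya's method, factor-analytic modelling, bootstrapping) go beyond this descriptive step.

```python
# Eigen-decompose a (hypothetical) G matrix and report the share of genetic
# variance captured by each principal axis.
import numpy as np

G = np.array([[0.40, 0.35, 0.30, 0.25],
              [0.35, 0.38, 0.28, 0.24],
              [0.30, 0.28, 0.30, 0.22],
              [0.25, 0.24, 0.22, 0.21]])   # hypothetical 4-trait G

eigvals = np.linalg.eigvalsh(G)[::-1]       # sorted, largest first
proportion = eigvals / eigvals.sum()
for i, (lam, p) in enumerate(zip(eigvals, proportion), start=1):
    print(f"g{i}: eigenvalue = {lam:.3f} ({p:.1%} of genetic variance)")
```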
Abstract:
Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry, including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design are discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
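Two of the designs reviewed above, a one-way fixed-effects ANOVA and a two-way ANOVA in randomised blocks, can be run via ordinary least squares in statsmodels. The data below are made up, and this sketch covers only the fixed-effects cases, not the random-effects or repeated-measures variants discussed in the article.

```python
# One-way and randomised-block ANOVA on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "treatment": np.repeat(["A", "B", "C"], 8),
    "block": np.tile(np.repeat(["I", "II", "III", "IV"], 2), 3),
})
df["response"] = (df["treatment"].map({"A": 10, "B": 12, "C": 11}).astype(float)
                  + rng.normal(0, 1, len(df)))

# One-way ANOVA (fixed effects)
print(anova_lm(smf.ols("response ~ C(treatment)", data=df).fit(), typ=2))

# Two-way ANOVA in randomised blocks (treatment + block, no interaction term)
print(anova_lm(smf.ols("response ~ C(treatment) + C(block)", data=df).fit(), typ=2))
```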
Abstract:
The modelling of mechanical structures using finite element analysis has become an indispensable stage in the design of new components and products. Once the theoretical design has been optimised, a prototype may be constructed and tested. What can the engineer do if the measured and theoretically predicted vibration characteristics of the structure are significantly different? This thesis considers the problem of changing the parameters of the finite element model to improve the correlation between a physical structure and its mathematical model. Two new methods are introduced to perform the systematic parameter updating. The first uses the measured modal model to derive the parameter values with minimum variance. The user must provide estimates for the variance of the theoretical parameter values and of the measured data. Previous authors using similar methods have assumed that the estimated parameters and measured modal properties are statistically independent. This will generally be the case during the first iteration but will not be the case subsequently. The second method updates the parameters directly from the frequency response functions. The order of the finite element model of the structure is reduced as a function of the unknown parameters. A method related to a weighted equation-error algorithm is used to update the parameters. After each iteration the weighting changes so that, on convergence, the output error is minimised. The suggested methods are extensively tested using simulated data. An H-frame is then used to demonstrate the algorithms on a physical structure.
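A generic, hedged sketch of a single iteration of a minimum-variance update of the kind described: prior parameter estimates and measured modal data are combined, each weighted by its assumed covariance. The sensitivity matrix, frequencies, and covariances below are placeholders, and this is not claimed to be the thesis's exact formulation.

```python
# One minimum-variance (Bayesian-style) update of FE parameters from measured
# natural frequencies; all numbers are illustrative.
import numpy as np

theta = np.array([1.0, 1.0])                 # current FE parameter estimates
P = np.diag([0.10, 0.05])                    # assumed parameter covariance
z_model = np.array([12.1, 33.4, 61.0])       # predicted natural frequencies (Hz)
z_meas = np.array([11.8, 34.0, 60.2])        # measured natural frequencies (Hz)
R = np.diag([0.04, 0.04, 0.09])              # assumed measurement covariance
S = np.array([[5.0, 1.0],                    # sensitivities d(frequency)/d(parameter)
              [2.0, 6.0],
              [4.0, 3.0]])

K = P @ S.T @ np.linalg.inv(S @ P @ S.T + R) # minimum-variance gain
theta_new = theta + K @ (z_meas - z_model)   # updated parameters
P_new = (np.eye(2) - K @ S) @ P              # updated parameter covariance
print("updated parameters:", theta_new)
```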