989 results for Dependent Observations
Abstract:
It is shown that variance-balanced designs can be obtained from Type I orthogonal arrays for many general models with two kinds of treatment effects, including ones for interference, with general dependence structures. These designs can be used to obtain optimal and efficient designs. Some examples and design comparisons are given. (C) 2002 Elsevier B.V. All rights reserved.
Abstract:
2000 Mathematics Subject Classification: 62L10.
Abstract:
We consider the problem of estimating a population size from successive catches taken during a removal experiment and propose two estimating-function approaches: the traditional quasi-likelihood (TQL) approach for dependent observations and the conditional quasi-likelihood (CQL) approach using the conditional mean and conditional variance of the catch given previous catches. The asymptotic covariance of the estimates and the relationship between the two methods are derived. Simulation results and an application to catch data for smallmouth bass show that the proposed estimating functions perform better than other existing methods, especially in the presence of overdispersion.
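The conditional-mean structure used in this abstract lends itself to a simple illustration. If E[c_i | past] = p(N − T_{i−1}), where T_{i−1} is the cumulative catch before occasion i, then regressing each catch on the cumulative prior catch recovers p and N. The sketch below is a Leslie-type least-squares version of that idea, not the authors' TQL/CQL estimators, and the catch data are made up:

```python
# Conditional-moment (Leslie-type) sketch for a removal experiment,
# assuming E[c_i | past] = p * (N - T_{i-1}); illustrative only.

def removal_estimate(catches):
    """Estimate capture probability p and population size N by
    regressing each catch on the cumulative prior catch."""
    T = [0]                          # cumulative catch before each occasion
    for c in catches[:-1]:
        T.append(T[-1] + c)
    n = len(catches)
    mx = sum(T) / n
    my = sum(catches) / n
    sxx = sum((x - mx) ** 2 for x in T)
    sxy = sum((x - mx) * (y - my) for x, y in zip(T, catches))
    slope = sxy / sxx                # estimates -p
    intercept = my - slope * mx      # estimates p * N
    p_hat = -slope
    N_hat = intercept / p_hat
    return p_hat, N_hat

# Made-up declining catches, roughly consistent with p = 0.3, N = 100
p, N = removal_estimate([30, 21, 15, 10, 7])
```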
Abstract:
The main objective of this thesis is the development of robust variogram estimators with good efficiency properties. The variogram is a fundamental tool in Geostatistics, since it models the dependence structure of the process under study and decisively influences the prediction of new observations. Traditional variogram estimation methods are not robust, i.e. they are sensitive to small deviations from the model assumptions. This issue matters because the properties that motivate the use of such methods may fail to hold in neighbourhoods of the assumed model. The thesis begins with a review of the main concepts of Geostatistics and of traditional variogram estimation, followed by a summary of fundamental notions of statistical robustness. A new variogram estimation method, called the multiple variogram estimator, is then presented. The method consists of four stages, in which robustness and efficiency criteria alternately prevail: from the initial sample, pointwise variogram estimates are computed robustly; from these pointwise estimates, the model parameters are estimated by least squares; the two previous stages are repeated, generating a set of multiple estimates of the variogram function; finally, the variogram estimate is defined as the median of the estimates obtained. In this way an estimator with good robustness properties and good efficiency under Gaussian processes is obtained. The research also revealed that, when discrete estimates are used in the first stage of variogram estimation, there are situations in which the identifiability of the parameters is not guaranteed. For the most common variogram models, mildly restrictive conditions were established that guarantee uniqueness of the solution in variogram estimation.
Variogram estimation always assumes stationarity of the mean of the process. Since objective procedures for assessing this condition are important, this work proposes a test for validating that hypothesis. The test statistic is an MM-estimator whose distribution is unknown under the assumed dependence conditions. To approximate it, a version of the bootstrap method suited to dependent observations from spatial processes is presented. Finally, the multiple variogram estimator is assessed in terms of its practical application. The work includes a simulation study that confirms the established properties. In all cases analysed, the multiple variogram estimator produced better results than the usual alternatives, both under the assumed distribution and under contaminated distributions.
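The multiple variogram estimator's four-stage idea (robust pointwise estimates, least-squares model fit, repetition, median of the fits) can be sketched schematically. The robust pointwise estimator (a median of squared increments), the exponential variogram model, the grid-search fit, and the resampling scheme below are all illustrative choices, not necessarily those of the thesis:

```python
import math
import random
import statistics

def pointwise_variogram(data, lags, tol=0.5):
    """Stage 1: robust pointwise estimates -- here the median of half
    squared increments at each lag (an illustrative robust choice)."""
    est = {}
    for h in lags:
        sq = [(z1 - z2) ** 2 / 2
              for (x1, z1) in data for (x2, z2) in data
              if abs(abs(x1 - x2) - h) <= tol]
        est[h] = statistics.median(sq)
    return est

def fit_exponential(est, ranges, sills=(0.5, 1.0, 1.5, 2.0)):
    """Stage 2: least-squares fit of an exponential variogram
    gamma(h) = s * (1 - exp(-h / a)) over a small (s, a) grid."""
    best, best_sse = None, float("inf")
    for a in ranges:
        for s in sills:
            sse = sum((g - s * (1 - math.exp(-h / a))) ** 2
                      for h, g in est.items())
            if sse < best_sse:
                best, best_sse = (s, a), sse
    return best

def multiple_variogram(data, lags, reps=20):
    """Stages 3-4: repeat the estimation on resamples and take the
    median of the fitted parameters as the final estimate."""
    fits = []
    for _ in range(reps):
        sub = random.sample(data, int(0.8 * len(data)))
        fits.append(fit_exponential(pointwise_variogram(sub, lags),
                                    ranges=[1, 2, 4, 8]))
    sill = statistics.median(f[0] for f in fits)
    rng = statistics.median(f[1] for f in fits)
    return sill, rng

# Made-up 1-D data on a regular transect
random.seed(0)
field = [(i, random.gauss(0.0, 1.0)) for i in range(30)]
sill, rng = multiple_variogram(field, [1, 2, 3])
```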
Abstract:
In this work, the pressure dependence of molecular dynamics was investigated by means of 2H NMR and viscosity measurements. The low-molecular-weight organic glass former ortho-terphenyl (OTP) was chosen for the measurements, since the large body of existing work allows it to be regarded as a model substance. In addition, measurements on salol were carried out. The investigations covered a wide pressure and temperature range, extending from the melt far into the supercooled liquid. For experimental reasons, this range was always reached by increasing the pressure. Under pressure variation, both substances showed behaviour very similar to that observed for temperature variation at ambient pressure. On molecular-dynamics time scales from 10^-9 s to 10^2 s, a pressure-temperature-time superposition principle was therefore discussed using OTP as an example. Moreover, a temperature-density scaling with rho*T^(-1/4) was carried out successfully, corresponding to a purely repulsive potential with exponent -12±3. To decide whether the widths of the distribution of mean rotational correlation times are affected by pressure variation, results from other experimental methods were also taken into account. Taking all measurement results together, both a temperature and a pressure dependence of the distribution width can be confirmed. For the evaluation of viscosity data, a procedure was presented that permits a quantitative statement about the fragility index of supercooled liquids even when the measurements do not extend down to the glass transition temperature Tg. Evaluation of the pressure-dependent viscosity data of OTP and salol reveals a distinctly different pressure dependence of the fragility index for the two glass formers.
OTP first shows a slight decrease and then an increase of the fragility index; this result is also supported by simulation data taken from the literature. Salol, by contrast, first shows a marked increase and then a decrease of the fragility index. The different behaviour of the two glass formers, which have similar fragility indices at ambient pressure, is attributed to the hydrogen bonds within salol.
Abstract:
Factorial experiments with spatially arranged units occur in many situations, particularly in agricultural field trials. The design of such experiments when observations are spatially correlated is investigated in this paper. We show that having a large number of within-factor level changes in rows and columns is important for efficient and robust designs, and demonstrate how designs with these properties can be constructed. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
In early generation variety trials, large numbers of new breeders' lines need to be compared, and usually there is little seed available for each new line. A so-called unreplicated trial has each new line on just one plot at a site, but includes several (often around five) replicated check or control (or standard) varieties. The total proportion of check plots is usually between 10% and 20%. The aim of the trial is to choose some good-performing lines (usually around 1/3 of those tested) to go on for further testing, rather than precise estimation of their mean yield. Now that spatial analyses of data from field experiments are becoming more common, there is interest in an efficient layout of an experiment given a proposed spatial analysis. Some possible design criteria are discussed, and efficient layouts under spatial dependence are considered.
Abstract:
In early generation variety trials, large numbers of new breeders' lines (varieties) may be compared, with each having little seed available. A so-called unreplicated trial has each new variety on just one plot at a site, but includes several replicated control varieties, making up around 10% to 20% of the trial. The aim of the trial is to choose some good-performing new varieties (usually around one third) to go on for further testing, rather than precise estimation of their mean yields. Now that spatial analyses of data from field experiments are becoming more common, there is interest in an efficient layout of an experiment given a proposed spatial analysis and an efficiency criterion. Common optimal design criteria values depend on the usual C-matrix, which is very large, and hence it is time consuming to calculate its inverse. Since most varieties are unreplicated, the variety incidence matrix has a simple form, and some matrix manipulations can dramatically reduce the computation needed. However, there are many designs to compare, and numerical optimisation lacks insight into good design features. Some possible design criteria are discussed, and approximations to their values considered. These allow the features of efficient layouts under spatial dependence to be given and compared. (c) 2006 Elsevier Inc. All rights reserved.
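As a rough illustration of the kind of criterion involved, the following computes an A-type value (the average variance of pairwise variety differences, derived from a generalized inverse of an information matrix) for a small one-dimensional layout under AR(1) plot correlation. The model, correlation structure, and layout are assumptions for illustration, not the paper's C-matrix setup:

```python
import numpy as np

def a_value(design, rho=0.5):
    """design: list of variety labels in field (plot) order.
    Returns the average variance of pairwise variety differences
    under the simple model y = X*tau + e with AR(1) errors."""
    n = len(design)
    labels = sorted(set(design))
    X = np.zeros((n, len(labels)))          # variety incidence matrix
    for i, v in enumerate(design):
        X[i, labels.index(v)] = 1.0
    # AR(1) correlation along the line of plots
    Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    C = X.T @ np.linalg.inv(Sigma) @ X      # information matrix
    Cinv = np.linalg.pinv(C)                # generalized inverse
    m = len(labels)
    pair_vars = [Cinv[i, i] + Cinv[j, j] - 2 * Cinv[i, j]
                 for i in range(m) for j in range(i + 1, m)]
    return float(np.mean(pair_vars))

# Replicated check (variety 0) interspersed among unreplicated test lines
layout = [0, 1, 2, 0, 3, 4, 0, 5, 6]
crit = a_value(layout)
```

Smaller values of this criterion indicate more precise pairwise comparisons; candidate layouts could be ranked by it.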
Abstract:
The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.
We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures for the internal state of, and electronic transport within, the electrode.
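The second-order fabric tensor referred to here has the standard contact-normal definition N = (1/Nc) Σ_k n_k n_kᵀ, which can be computed directly once the contact normals are extracted from the tomography; the contact set below is made up:

```python
import numpy as np

def fabric_tensor(normals):
    """Second-order fabric tensor from inter-particle contact normals:
    N = (1/Nc) * sum_k n_k (outer) n_k, with each normal renormalized."""
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    return np.einsum('ki,kj->ij', n, n) / len(n)

# Made-up, perfectly isotropic contact set: normals along coordinate axes
contacts = [(1, 0, 0), (-1, 0, 0),
            (0, 1, 0), (0, -1, 0),
            (0, 0, 1), (0, 0, -1)]
F = fabric_tensor(contacts)   # equals I/3 for this isotropic set
```

The trace is always 1 for unit normals, so the deviatoric part of F measures the anisotropy of the contact distribution.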
We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision in the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide a computational savings.
The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.
The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.
Abstract:
The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed in which the overall detection delay is minimized subject to constraints on both the error probabilities and the communication cost. Two types of problems are investigated under the communication-efficient formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the reported statistics from local sensors, but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. In addition, decentralized change detection with a communication cost constraint is also investigated. A person-by-person optimum change detection algorithm is proposed, where transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, where the threshold values in the algorithm are determined by a combination of sequential detection analysis and constrained optimization.
In both decentralized hypothesis testing and change detection problems, tradeoffs in parameter choices are investigated through Monte Carlo simulations.
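The fusion-center test in the hypothesis-testing case is a sequential probability ratio test. A minimal centralized SPRT sketch with Gaussian observations follows; the distributions, thresholds, and constant input stream are illustrative, and the thesis's local quantization, reporting times, and communication constraint are omitted:

```python
import math

def sprt(observations, alpha=0.01, beta=0.01, mu0=0.0, mu1=1.0, sigma=1.0):
    """Wald's SPRT for H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2).
    Returns (decision, number of samples used)."""
    upper = math.log((1 - beta) / alpha)    # accept H1 when LLR >= upper
    lower = math.log(beta / (1 - alpha))    # accept H0 when LLR <= lower
    llr = 0.0
    for t, x in enumerate(observations, start=1):
        # Gaussian log-likelihood ratio increment
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return ("H1" if llr > 0 else "H0"), len(observations)

# Constant stream at the H1 mean (deterministic illustration):
# each sample adds 0.5 to the LLR, so H1 is accepted at t = 10
data = [1.0] * 20
decision, n_used = sprt(data)
```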
Abstract:
We report a theoretical study of the multiple oxidation states (1+, 0, 1−, and 2−) of a meso,meso-linked diporphyrin, namely bis[10,15,20-triphenylporphyrinatozinc(II)-5-yl]butadiyne (4), using Time-Dependent Density Functional Theory (TDDFT). The origin of electronic transitions of singlet excited states is discussed in comparison to experimental spectra for the corresponding oxidation states of the close analogue bis{10,15,20-tris[3‘,5‘-di-tert-butylphenyl]porphyrinatozinc(II)-5-yl}butadiyne (3). The latter were measured in previous work under in situ spectroelectrochemical conditions. Excitation energies and orbital compositions of the excited states were obtained for these large delocalized aromatic radicals, which are unique examples of organic mixed-valence systems. The radical cations and anions of butadiyne-bridged diporphyrins such as 3 display characteristic electronic absorption bands in the near-IR region, which have been successfully predicted with use of these computational methods. The radicals are clearly of the “fully delocalized” or Class III type. The key spectral features of the neutral and dianionic states were also reproduced, although due to the large size of these molecules, quantitative agreement of energies with observations is not as good in the blue end of the visible region. The TDDFT calculations are largely in accord with a previous empirical model for the spectra, which was based simplistically on one-electron transitions among the eight key frontier orbitals of the C4 (1,4-butadiyne) linked diporphyrins.
Abstract:
Fracture behavior of Cu-Ni laminate composites has been investigated by tensile testing. It was found that as the individual layer thickness decreases from 100 nm to 20 nm, the resultant fracture angle of the Cu-Ni laminate changes from 72 degrees to 50 degrees. Cross-sectional observations reveal that the fracture of the Ni layers transforms from opening to shear mode as the layer thickness decreases, while the fracture of the Cu layers remains in shear mode. Competing mechanisms were proposed to explain the variation in fracture mode of the metallic laminate composites associated with length scale.
Abstract:
Thin films of epoxy nanocomposites modified by multiwall carbon nanotubes (MWCNTs) were prepared by shear mixing and spin casting. The electrical behaviour and its dependence on temperature between 243 and 353 K were characterized by measuring the direct-current (DC) conductivity. Depending on the fabrication process, both linear and non-linear relationships between conductivity and temperature were observed. In addition, the thermal history also played a role in dictating the conductivity. The implications of these observations for the potential application of these films as strain sensors are discussed.
Abstract:
For over half a century, it has been known that the rate of morphological evolution appears to vary with the time frame of measurement. Rates of microevolutionary change, measured between successive generations, were found to be far higher than rates of macroevolutionary change inferred from the fossil record. More recently, it has been suggested that rates of molecular evolution are also time dependent, with the estimated rate depending on the timescale of measurement. This followed surprising observations that estimates of mutation rates, obtained in studies of pedigrees and laboratory mutation-accumulation lines, exceeded long-term substitution rates by an order of magnitude or more. Although a range of studies have provided evidence for such a pattern, the hypothesis remains relatively contentious. Furthermore, there is ongoing discussion about the factors that can cause molecular rate estimates to be dependent on time. Here we present an overview of our current understanding of time-dependent rates. We provide a summary of the evidence for time-dependent rates in animals, bacteria and viruses. We review the various biological and methodological factors that can cause rates to be time dependent, including the effects of natural selection, calibration errors, model misspecification and other artefacts. We also describe the challenges in calibrating estimates of molecular rates, particularly on the intermediate timescales that are critical for an accurate characterization of time-dependent rates. This has important consequences for the use of molecular-clock methods to estimate timescales of recent evolutionary events.