996 results for Dependent Observations


Relevance: 100.00%

Abstract:

We establish the validity of subsampling confidence intervals for the mean of a dependent series with heavy-tailed marginal distributions. Using point process theory, we study both linear and nonlinear GARCH-like time series models. We propose a data-dependent method for the optimal block size selection and investigate its performance by means of a simulation study.
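
As a rough illustration of the kind of procedure described, the Python sketch below builds a subsampling confidence interval for the mean from overlapping blocks, using a studentized block statistic so that the heavy-tail convergence rate need not be estimated explicitly. It is a generic Politis-Romano-style sketch, not the authors' exact method, and the block size b is fixed by hand rather than chosen by their data-dependent rule.

    import numpy as np

    def subsampling_ci(x, b, alpha=0.05):
        """Subsampling CI for the mean of a dependent series.

        Uses overlapping blocks of length b and a studentized block
        statistic, which sidesteps explicit estimation of the
        (heavy-tail) convergence rate.
        """
        n = len(x)
        xbar = x.mean()
        stats = []
        for start in range(n - b + 1):
            block = x[start:start + b]
            s = block.std(ddof=1)
            if s > 0:  # guard against constant blocks
                stats.append(np.sqrt(b) * (block.mean() - xbar) / s)
        lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
        s_n = x.std(ddof=1)
        # Invert the subsampling approximation of the studentized statistic.
        return xbar - hi * s_n / np.sqrt(n), xbar - lo * s_n / np.sqrt(n)

    # Toy usage on an AR(1) series with heavy-tailed innovations; in the
    # paper, b would come from the proposed data-dependent selection rule.
    rng = np.random.default_rng(0)
    e = rng.standard_t(df=3, size=5000)
    x = np.zeros(5000)
    for t in range(1, 5000):
        x[t] = 0.5 * x[t - 1] + e[t]
    print(subsampling_ci(x, b=200))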

Relevance: 100.00%

Abstract:

It is shown that variance-balanced designs can be obtained from Type I orthogonal arrays for many general models with two kinds of treatment effects, including ones for interference, with general dependence structures. These designs can be used to obtain optimal and efficient designs. Some examples and design comparisons are given. (C) 2002 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: 62L10.

Relevance: 60.00%

Abstract:

In this work, the pressure dependence of molecular dynamics was investigated by means of 2H-NMR and viscosity measurements. The low-molecular-weight organic glass former ortho-terphenyl (OTP) was chosen for the measurements, since the large body of existing work allows it to be regarded as a model substance. Measurements on salol were also carried out. The investigations covered a wide pressure and temperature range, extending from the melt far into the supercooled liquid; owing to experimental constraints, this range was always reached by increasing the pressure.

As a function of pressure, both substances showed behavior closely resembling that observed under temperature variation at ambient pressure. On a molecular-dynamics time scale from 10^-9 s up to 10^2 s, a pressure-temperature-time superposition principle was therefore discussed using OTP as an example. In addition, a temperature-density scaling with ρT^(-1/4) could be carried out successfully; this corresponds to a purely repulsive potential varying as r^(-12±3).

To decide whether the widths of the distributions of mean rotational correlation times are affected by pressure variation, results from other experimental methods were also consulted. Taking all measurement results together, both a temperature and a pressure dependence of the distribution width can be confirmed. For the evaluation of viscosity data, a procedure was presented that permits a quantitative statement about the fragility index of supercooled liquids even when the measurements do not extend down to the glass transition temperature Tg. The evaluation of the pressure-dependent viscosity data of OTP and salol reveals quite distinct pressure dependences of the fragility index for the two glass formers: OTP first shows a slight decrease and then again an increase of the fragility index, a result also supported by simulation data taken from the literature, whereas salol first shows a marked increase and then a decrease. The different behavior of the two glass formers, which have similar fragility indices at ambient pressure, is attributed to the hydrogen bonds within salol.
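
To make the scaling statement concrete: temperature-density scaling with ρT^(-1/4) means that relaxation times measured at different pressures and temperatures collapse onto a single master curve when plotted against Γ = ρT^(-1/4). The toy below is hypothetical (the function tau is constructed to have this property exactly); it only illustrates what the collapse means, not the measured OTP data.

    import numpy as np

    def tau(rho, T):
        # Toy relaxation time depending on state only through rho * T**(-1/4),
        # i.e. equivalently through rho**4 / T (an r**-12 repulsive potential).
        gamma = rho * T ** (-0.25)
        return 1e-12 * np.exp(30.0 * gamma)   # toy super-Arrhenius-like growth

    # Two different (rho, T) state points sharing the same scaling variable
    # fall on the same point of the master curve:
    g = 1.10 * 300.0 ** (-0.25)
    T2 = 350.0
    rho2 = g * T2 ** 0.25
    print(tau(1.10, 300.0), tau(rho2, T2))    # identical by construction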

Relevance: 60.00%

Abstract:

Factorial experiments with spatially arranged units occur in many situations, particularly in agricultural field trials. The design of such experiments when observations are spatially correlated is investigated in this paper. We show that having a large number of within-factor level changes in rows and columns is important for efficient and robust designs, and demonstrate how designs with these properties can be constructed. (C) 2003 Elsevier B.V. All rights reserved.
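
A minimal sketch of the efficiency argument, under assumed parameters not taken from the paper (a single row of plots, AR(1) errors with correlation 0.6, one two-level factor): the generalized-least-squares variance of the factor-effect estimate is computed for a layout with a single level change and for a fully alternating layout.

    import numpy as np

    n, phi = 16, 0.6
    # AR(1) correlation matrix along the row of plots.
    Sigma = phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    Sinv = np.linalg.inv(Sigma)

    def effect_variance(levels):
        X = np.column_stack([np.ones(n), levels])   # intercept + factor (-1/+1)
        info = X.T @ Sinv @ X                       # GLS information matrix
        return np.linalg.inv(info)[1, 1]            # variance of the effect

    blocked     = np.array([-1] * (n // 2) + [1] * (n // 2))  # one level change
    alternating = np.array([(-1) ** i for i in range(n)])     # n-1 level changes
    print("blocked    :", effect_variance(blocked))
    print("alternating:", effect_variance(alternating))

Under positive spatial correlation the alternating layout gives a markedly smaller variance, since the factor contrast is carried by low-variance differences between adjacent, correlated plots; this is the intuition behind favoring many within-row level changes.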

Relevance: 60.00%

Abstract:

In early generation variety trials, large numbers of new breeders' lines need to be compared, and usually there is little seed available for each new line. A so-called unreplicated trial has each new line on just one plot at a site, but includes several (often around five) replicated check or control (or standard) varieties. The total proportion of check plots is usually between 10% and 20%. The aim of the trial is to choose some good-performing lines (usually around 1/3 of those tested) to go on for further testing, rather than to estimate their mean yields precisely. Now that spatial analyses of data from field experiments are becoming more common, there is interest in an efficient layout of an experiment given a proposed spatial analysis. Some possible design criteria are discussed, and efficient layouts under spatial dependence are considered.

Relevance: 60.00%

Abstract:

In early generation variety trials, large numbers of new breeders' lines (varieties) may be compared, with each having little seed available. A so-called unreplicated trial has each new variety on just one plot at a site, but includes several replicated control varieties, making up between 10% and 20% of the trial. The aim of the trial is to choose some (usually around one third) good-performing new varieties to go on for further testing, rather than precise estimation of their mean yields. Now that spatial analyses of data from field experiments are becoming more common, there is interest in an efficient layout of an experiment given a proposed spatial analysis and an efficiency criterion. Common optimal design criteria values depend on the usual C-matrix, which is very large, and hence it is time-consuming to calculate its inverse. Since most varieties are unreplicated, the variety incidence matrix has a simple form, and some matrix manipulations can dramatically reduce the computation needed. However, there are many designs to compare, and numerical optimisation lacks insight into good design features. Some possible design criteria are discussed, and approximations to their values considered. These allow the features of efficient layouts under spatial dependence to be given and compared. (c) 2006 Elsevier Inc. All rights reserved.
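
The sketch below evaluates one candidate layout in this spirit, under assumptions not taken from the paper: a single row of plots, AR(1) errors, one replicated check variety filling about 15% of the plots, and an A-type criterion (average variance of pairwise differences among the new varieties) computed from the C-matrix. A real search would compare many such layouts and would exploit the simple structure of the incidence matrix rather than forming a pseudoinverse directly.

    import numpy as np

    rng = np.random.default_rng(1)
    n, phi = 40, 0.5
    n_check = 6                                  # ~15% check plots
    v_new = n - n_check                          # unreplicated new varieties
    variety = np.array(list(range(v_new)) + [v_new] * n_check)  # v_new = check
    rng.shuffle(variety)                         # one random layout

    Sigma = phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    W = np.linalg.inv(Sigma)
    X = np.zeros((n, v_new + 1))
    X[np.arange(n), variety] = 1.0
    one = np.ones((n, 1))
    # C-matrix: variety information adjusted for the overall mean (GLS).
    P = W - W @ one @ np.linalg.inv(one.T @ W @ one) @ one.T @ W
    C = X.T @ P @ X
    Cplus = np.linalg.pinv(C)

    idx = np.arange(v_new)                       # new varieties only
    sub = Cplus[np.ix_(idx, idx)]
    d = np.diag(sub)
    off_mean = (sub.sum() - np.trace(sub)) / (v_new * (v_new - 1))
    avg_pair_var = 2 * d.mean() - 2 * off_mean   # mean var of tau_i - tau_j
    print("average pairwise variance of new-variety differences:", avg_pair_var)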

Relevance: 60.00%

Abstract:

The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.

We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
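
As a concrete illustration of the formalism, the second-order fabric tensor is the average outer product of the unit contact normals, and its eigenvalue spread measures the anisotropy of the contact network. The normals below are synthetic stand-ins for contacts segmented from tomography.

    import numpy as np

    rng = np.random.default_rng(0)
    normals = rng.normal(size=(500, 3))
    normals[:, 2] *= 2.0               # hypothetical bias toward the z-axis,
                                       # e.g. after compression in calendering
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    # Second-order fabric tensor F = (1/N) * sum_k n_k n_k^T; trace(F) = 1.
    F = normals.T @ normals / len(normals)
    eigvals = np.linalg.eigvalsh(F)
    print("fabric eigenvalues:", eigvals)   # spread away from 1/3 => anisotropy
    print("anisotropy (max - min):", eigvals[-1] - eigvals[0])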

We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision in the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide a computational savings.
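
The tie-breaking effect can be seen in a deliberately simple toy, which is only a caricature of the 1D cohesive-element study: when nominally identical interfaces are loaded uniformly, all of them reach their strength at the same load step and floating-point round-off decides the outcome, whereas explicitly randomized strengths give a well-defined failure sequence.

    import numpy as np

    rng = np.random.default_rng(42)
    N = 1000
    uniform = np.full(N, 1.0)                         # bitwise-identical strengths
    perturbed = 1.0 + 1e-3 * rng.standard_normal(N)   # small material randomness

    for name, strength in [("uniform", uniform), ("perturbed", perturbed)]:
        stress = strength.min()     # ramp load until the first interface fails
        simultaneous = np.sum(strength <= stress)
        print(f"{name:9s}: {simultaneous} interface(s) fail at the same load step")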

The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
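
A one-dimensional caricature of the idea: a level-set function marks the damage front, and damage ramps from 0 to 1 over the length scale l_c behind it, so the damage gradient is bounded by roughly 1/l_c. The linear ramp used here is an illustrative choice, not necessarily the damage profile used in this work.

    import numpy as np

    x = np.linspace(0.0, 10.0, 201)
    front, l_c = 4.0, 2.0           # crack-front position and length scale
    phi = front - x                 # signed distance: phi > 0 behind the front
    d = np.clip(phi / l_c, 0.0, 1.0)  # damage grows over a band of width l_c
    print("max damage gradient:", np.abs(np.gradient(d, x)).max(), "~", 1 / l_c)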

Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.

The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.

Relevance: 60.00%

Abstract:

The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between the local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed, in which the overall detection delay is minimized under constraints on both the error probabilities and the communication cost. Two types of problems are investigated based on this communication-efficient formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the reported statistics from the local sensors, but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. In addition, decentralized change detection with a communication cost constraint is investigated. A person-by-person optimum change detection algorithm is proposed, where transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, where the threshold values are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, tradeoffs in parameter choices are investigated through Monte Carlo simulations.
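
For reference, the classical building block at the fusion center is Wald's sequential probability ratio test, sketched below for a Gaussian mean shift with thresholds set from the target error probabilities. The thesis' algorithms additionally exploit the reporting times of the sensors, which this toy ignores.

    import numpy as np

    def sprt(samples, alpha=0.01, beta=0.01, mu0=0.0, mu1=1.0, sigma=1.0):
        """Wald SPRT for H0: mean mu0 vs H1: mean mu1, Gaussian observations."""
        a = np.log(beta / (1 - alpha))       # lower (accept-H0) threshold
        b = np.log((1 - beta) / alpha)       # upper (accept-H1) threshold
        llr = 0.0
        for t, x in enumerate(samples, start=1):
            # Log-likelihood ratio increment for a Gaussian mean shift.
            llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
            if llr >= b:
                return "H1", t
            if llr <= a:
                return "H0", t
        return "undecided", len(samples)

    rng = np.random.default_rng(3)
    # Data generated under H1: expect ("H1", small stopping time).
    print(sprt(rng.normal(1.0, 1.0, size=1000)))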

Relevance: 30.00%

Abstract:

The persistent nature of addiction has been associated with activity-induced plasticity of neurons within the striatum and nucleus accumbens (NAc). To identify the molecular processes leading to these adaptations, we performed Cre/loxP-mediated genetic ablations of two key regulators of gene expression in response to activity, the Ca2+/calmodulin-dependent protein kinase IV (CaMKIV) and its postulated main target, the cAMP-responsive element binding protein (CREB). We found that acute cocaine-induced gene expression in the striatum was largely unaffected by the loss of CaMKIV. On the behavioral level, mice lacking CaMKIV in dopaminoceptive neurons displayed increased sensitivity to cocaine as evidenced by augmented expression of locomotor sensitization and enhanced conditioned place preference and reinstatement after extinction. However, the loss of CREB in the forebrain had no effect on either of these behaviors, even though it robustly blunted acute cocaine-induced transcription. To test the relevance of these observations for addiction in humans, we performed an association study of CAMK4 and CREB promoter polymorphisms with cocaine addiction in a large sample of addicts. We found that a single nucleotide polymorphism in the CAMK4 promoter was significantly associated with cocaine addiction, whereas variations in the CREB promoter regions did not correlate with drug abuse. These findings reveal a critical role for CaMKIV in the development and persistence of cocaine-induced behaviors, through mechanisms dissociated from acute effects on gene expression and CREB-dependent transcription.

Relevance: 30.00%

Abstract:

Local-scale wind-field and air-mass characteristics during the onset of two foehn wind events in an alpine hydro-catchment are presented. Grounding of the topographically modified foehn was found to depend on daytime surface heating and topographic channelling of flow. The foehn front was observed to advance down-valley until the valley widened significantly. The foehn wind appeared to decouple from the surface downstream of the accelerated flow associated with the valley constriction, and to be lifted above local thermally generated circulations, including a lake breeze. Towards evening, the foehn front retreated up-valley in response to reduced surface heating and the intrusion into the study area of a deep, cool air mass associated with a regional-scale mountain-plain circulation. Differences in the local wind field observed during the two case-study events reflect the importance of different thermal and dynamic forcings on airflow in complex terrain; these result from variations in surface energy exchanges and from channelling and blocking of airflow. The observations presented here have both theoretical and applied implications for forecasting foehn onset, wind hazard management, recreational activities, and air quality management in alpine settings.

Relevance: 30.00%

Abstract:

For the first time it was possible to observe regular quasiperiodic scintillations (QPS) in VHF radio-satellite transmissions from orbiting satellites simultaneously at short (2.1 km) and long (121 km) meridional baselines in the vicinity of a typical mid-latitude station (Brisbane; 27.5°S, 152.9°E geographic; 35.6° invariant latitude), using three sites (St. Lucia-S and Taringa-T in Brisbane, and Boreen Pt.-B, north of Brisbane). A few pronounced quasiperiodic (QP) events were recorded showing unambiguous regular structures at the sites, which made it possible to deduce the time displacement of the regular fading minimum at S, T and B. The QP structure is highly dependent on the geometry of the ray-path from a satellite to the observer, which is manifested as a change of a QP event from symmetrical to non-symmetrical for stations separated by 2.1 km, and as a radical change in the structure of the event over a distance of 121 km. It is suggested that the short-duration intense QP events are due to Fresnel diffraction (or a reflection mechanism) of radio-satellite signals by a single ionospheric irregularity in the form of an ellipsoid with a large ionization gradient along the major axis. The structure of a QP event depends on the angle from which the irregular blob is viewed from a radio-satellite. In view of this, it is suggested that the reported variety of the ionization formations responsible for different types of QPS is only apparent, not real. (C) 2003 Elsevier Science Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures; as shown in Settle [19], this technique is equivalent to the maximum likelihood estimator. The projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
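
Before turning to the second approach, here is a minimal numerical sketch of the orthogonal subspace projection idea discussed above. The random signatures stand in for a real spectral library, and the abundances are chosen arbitrarily for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    L = 50                                  # number of spectral bands
    d = rng.random(L)                       # desired endmember signature
    U = rng.random((L, 3))                  # undesired endmember signatures
    # Projector onto the orthogonal complement of span(U): P = I - U U^+.
    P = np.eye(L) - U @ np.linalg.pinv(U)
    pixel = 0.4 * d + U @ np.array([0.2, 0.3, 0.1])   # noiseless linear mixture
    score = d @ P @ pixel / (d @ P @ d)     # abundance estimate for d
    print(score)                            # ~0.4: undesired components suppressed

For noiseless data the score recovers the true abundance exactly; with noise it plays the role of the maximum-likelihood-equivalent estimator noted above.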
In the second approach, ICA is based on the assumption of mutually independent sources, which does not hold for hyperspectral data: since the sum of the abundance fractions is constant, the abundances are dependent. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, source densities and noise covariance are estimated from the observed data by maximum likelihood; second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of most pure pixels in the data.

Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data; the MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
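
The dependence problem is easy to reproduce numerically. In the sketch below (synthetic data, not the chapter's experiments), abundances drawn from a Dirichlet distribution sum to one per pixel, so they are dependent by construction, and FastICA consequently cannot recover them perfectly even at high SNR.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    n_pix, n_end, L = 5000, 3, 20
    A = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=n_pix)  # sums to 1 per pixel
    M = rng.random((L, n_end))                            # endmember signatures
    Y = A @ M.T + 0.01 * rng.standard_normal((n_pix, L))  # noisy linear mixtures

    S = FastICA(n_components=n_end, random_state=0, max_iter=500).fit_transform(Y)
    # Best absolute correlation of each true abundance with any ICA source:
    corr = np.abs(np.corrcoef(A.T, S.T)[:n_end, n_end:])
    print(corr.max(axis=1))   # typically well below 1: recovery is imperfect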
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
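
A sketch of the generative side of this proposal, with illustrative parameters only: abundance vectors are drawn from a two-component mixture of Dirichlet densities, which enforces positivity and full additivity by construction, and are then mixed linearly with additive noise. The EM inference of the mixing matrix is not shown.

    import numpy as np

    rng = np.random.default_rng(2)
    n_pix, L = 1000, 30
    weights = [0.6, 0.4]                        # mixture weights (illustrative)
    alphas = [np.array([9.0, 3.0, 1.0]),        # two Dirichlet modes
              np.array([1.0, 4.0, 6.0])]
    comp = rng.choice(2, size=n_pix, p=weights)
    A = np.vstack([rng.dirichlet(alphas[c]) for c in comp])
    M = rng.random((L, 3))                      # endmember signatures
    Y = A @ M.T + 0.005 * rng.standard_normal((n_pix, L))
    print(A.sum(axis=1)[:5])                    # each row sums to 1 (full additivity)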