1000 results for PSEUDO-OBSERVATIONS
Abstract:
Regression models for the mean quality-adjusted survival time are specified from hazard functions of transitions between two states, and the mean quality-adjusted survival time may be a complex function of covariates. We discuss a regression model for the mean quality-adjusted survival (QAS) time based on pseudo-observations, which has the advantage of directly modeling the effect of covariates on the QAS time. Both Monte Carlo simulations and a real data set are studied. Copyright (C) 2009 John Wiley & Sons, Ltd.
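For readers new to the approach, a minimal sketch of how pseudo-observations are typically constructed follows (the estimator interface is hypothetical; this is not the authors' code). Each subject's pseudo-observation is the leave-one-out jackknife quantity, which can then be regressed directly on covariates, e.g. with generalized estimating equations:

import numpy as np

def pseudo_observations(times, events, estimator):
    # Jackknife pseudo-observation for subject i:
    #   theta_i = n * theta_hat - (n - 1) * theta_hat_without_i
    n = len(times)
    theta_full = estimator(times, events)
    pseudo = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        pseudo[i] = n * theta_full - (n - 1) * estimator(times[mask], events[mask])
    return pseudo  # regress these directly on covariates (e.g. GEE)

Here estimator would be any consistent estimator of the mean QAS time computed from the full (censored) sample.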
Abstract:
Targeted observations are generally taken in regions of high baroclinicity, but often show little impact. One plausible explanation is that important dynamical information, such as upshear tilt, is not extracted from the targeted observations by the data assimilation scheme and used to correct initial condition error. This is investigated by generating pseudo targeted observations which contain a singular vector (SV) structure that is not present in the background field or routine observations, i.e. assuming that the background has an initial condition error with tilted growing structure. Experiments were performed for a single case-study with varying numbers of pseudo targeted observations. These were assimilated by the Met Office four-dimensional variational (4D-Var) data assimilation scheme, which uses a 6 h window for observations and background-error covariances calculated using the National Meteorological Centre (NMC) method. The forecasts were run using the operational Met Office Unified Model on a 24 km grid. The results presented clearly demonstrate that a 6 h window 4D-Var system is capable of extracting baroclinic information from a limited set of observations and using it to correct initial condition error. To capture the SV structure well (projection of 0.72 in total energy), 50 sondes over an area of 1×10⁶ km² were required. When the SV was represented by only eight sondes along an example targeting flight track covering a smaller area, the projection onto the SV structure was lower; the resulting forecast perturbations showed an SV structure with increased tilt and reduced initial energy. The total energy contained in the perturbations decreased as the SV structure was less well described by the set of observations (i.e. as fewer pseudo observations were assimilated). The assimilated perturbation had lower energy than the SV unless the pseudo observations were assimilated with the dropsonde observation errors halved from operational values. Copyright © 2010 Royal Meteorological Society
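As a point of reference, the "projection in total energy" quoted above can be read as a normalised inner product between the assimilated increment and the singular-vector perturbation under a total-energy weighting; the paper's exact norm is not given here, so the following is only the common convention:

p = \frac{\langle \delta x,\, x_{SV} \rangle_E}{\|\delta x\|_E\, \|x_{SV}\|_E},
\qquad \langle a, b \rangle_E = a^T E\, b,

where E is the positive-definite total-energy weighting matrix. A value of p = 0.72 thus means the increment captures most, but not all, of the SV structure.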
Abstract:
The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis: the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any subset of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. Copyright © 2004 Royal Meteorological Society
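For the linear analysis update x_a = x_b + K(y - H x_b), the diagnostics described above have compact closed forms (standard notation, not quoted verbatim from the paper):

\hat{y} = H x_a = H K\, y + (I - H K)\, H x_b,
\qquad S = \frac{\partial \hat{y}}{\partial y} = H K,
\qquad K = B H^T (H B H^T + R)^{-1}.

The self-sensitivities are the diagonal entries S_{ii}, the degrees of freedom for signal is tr(S), and the 15%/85% split quoted above corresponds to the average self-sensitivity tr(S)/m versus its complement over the m assimilated observations.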
Conditioning of incremental variational data assimilation, with application to the Met Office system
Abstract:
Implementations of incremental variational data assimilation require the iterative minimization of a series of linear least-squares cost functions. The accuracy and speed with which these linear minimization problems can be solved is determined by the condition number of the Hessian of the problem. In this study, we examine how different components of the assimilation system influence this condition number. Theoretical bounds on the condition number for a single parameter system are presented and used to predict how the condition number is affected by the observation distribution and accuracy and by the specified lengthscales in the background error covariance matrix. The theoretical results are verified in the Met Office variational data assimilation system, using both pseudo-observations and real data.
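Concretely, each inner-loop minimization solves a linear least-squares problem whose Hessian and condition number take the standard forms (a sketch; the paper's bounds specialise these to a single-parameter system):

J(\delta x) = \tfrac{1}{2}\, \delta x^T B^{-1} \delta x
 + \tfrac{1}{2}\, (H \delta x - d)^T R^{-1} (H \delta x - d),
\qquad S = B^{-1} + H^T R^{-1} H,
\qquad \kappa(S) = \frac{\lambda_{\max}(S)}{\lambda_{\min}(S)}.

Denser or more accurate observations increase the weight of H^T R^{-1} H, and longer background-error lengthscales widen the eigenvalue spread of B; both effects tend to raise \kappa and slow the minimization.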
Abstract:
One of the prerequisites for achieving skill in decadal climate prediction is to initialize and predict the circulation in the Atlantic Ocean successfully. The RAPID array measures the Atlantic Meridional Overturning Circulation (MOC) at 26°N. Here we develop a method to include these observations in the Met Office Decadal Prediction System (DePreSys). The proposed method uses covariances of overturning transport anomalies at 26°N with ocean temperature and salinity anomalies throughout the ocean to create the density structure necessary to reproduce the observed transport anomaly. Assimilating transport alone in this way effectively reproduces the observed transport anomalies at 26°N and is better than using basin-wide temperature and salinity observations alone. However, when the transport observations are combined with in situ temperature and salinity observations in the analysis, the transport is not currently reproduced so well. The reasons for this are investigated using pseudo-observations in a twin experiment framework. Sensitivity experiments show that the MOC on monthly time-scales, at least in the HadCM3 model, is modulated by a mechanism where non-local density anomalies appear to be more important for transport variability at 26°N than local density gradients.
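A minimal sketch of the covariance-based update described above, assuming the model anomaly fields and transport series are already available (array names are hypothetical):

import numpy as np

def transport_increment(obs_anom, q_anom, T_anom, S_anom):
    # Regress the model temperature/salinity anomaly fields on the 26N
    # transport anomaly q, then scale by the observed transport anomaly.
    # q_anom: (n,) time series; T_anom, S_anom: (n, ...) anomaly fields.
    n = len(q_anom)
    var_q = q_anom @ q_anom / n
    cov_T = np.tensordot(q_anom, T_anom, axes=(0, 0)) / n
    cov_S = np.tensordot(q_anom, S_anom, axes=(0, 0)) / n
    return (cov_T / var_q) * obs_anom, (cov_S / var_q) * obs_anom

The returned temperature and salinity increments create the density structure needed to reproduce the observed transport anomaly when added to the model state.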
Abstract:
We consider the problem of dichotomizing a continuous covariate when performing a regression analysis based on a generalized estimation approach. The problem involves estimation of the cutpoint for the covariate and testing the hypothesis that the binary covariate constructed from the continuous covariate has a significant impact on the outcome. Due to the multiple testing used to find the optimal cutpoint, we need to make an adjustment to the usual significance test to preserve the type-I error rate. We illustrate the techniques on a data set of patients given unrelated hematopoietic stem cell transplantation. Here the question is whether the CD34 cell dose given to the patient affects the outcome of the transplant and what the smallest cell dose needed for good outcomes is. (C) 2010 Elsevier B.V. All rights reserved.
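A hypothetical sketch of the two ingredients, an "optimal cutpoint" search and a permutation-based adjustment of its p-value, using a t-test as a stand-in for the paper's regression-based test:

import numpy as np
from scipy import stats

def best_cutpoint(x, y, grid):
    # Maximally selected statistic over candidate cutpoints; grid should
    # lie strictly inside the range of x so both groups are non-empty.
    t = [abs(stats.ttest_ind(y[x <= c], y[x > c]).statistic) for c in grid]
    i = int(np.argmax(t))
    return grid[i], t[i]

def adjusted_pvalue(x, y, grid, n_perm=1000, seed=0):
    # The naive test is anti-conservative because the cutpoint was chosen
    # to maximise the statistic; comparing against the permutation null of
    # the *maximal* statistic preserves the type-I error rate.
    rng = np.random.default_rng(seed)
    _, t_obs = best_cutpoint(x, y, grid)
    null = [best_cutpoint(x, rng.permutation(y), grid)[1] for _ in range(n_perm)]
    return float(np.mean(np.asarray(null) >= t_obs))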
The impact of common versus separate estimation of orbit parameters on GRACE gravity field solutions
Abstract:
Gravity field parameters are usually determined from observations of the GRACE satellite mission together with arc-specific parameters in a generalized orbit determination process. When the estimation of gravity field parameters is separated from the determination of the satellites’ orbits, correlations between orbit parameters and gravity field coefficients are ignored and the latter parameters are biased towards the a priori force model. We are thus confronted with a kind of hidden regularization. To decipher the underlying mechanisms, the Celestial Mechanics Approach is complemented by tools to modify the impact of the pseudo-stochastic arc-specific parameters at the normal-equation level and to efficiently generate ensembles of solutions. By introducing a time-variable a priori model and solving for hourly pseudo-stochastic accelerations, a significant reduction of noisy striping in the monthly solutions can be achieved. Setting up more frequent pseudo-stochastic parameters results in a further reduction of the noise, but also in a notable damping of the observed geophysical signals. To quantify the effect of the a priori model on the monthly solutions, the process of fixing the orbit parameters is replaced by an equivalent introduction of special pseudo-observations, i.e., by explicit regularization. The contribution of the a priori information introduced in this way is determined by a contribution analysis. The mechanism presented is universally valid. It may be used to separate any subset of parameters by pseudo-observations of a special design and to quantify the damage imposed on the solution.
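The equivalence invoked above can be written down directly: fixing a parameter subset q at its a priori value q_0 acts like assimilating pseudo-observations of q. In normal-equation form (a schematic, assuming a partitioning of the parameter vector):

N x = b \;\longrightarrow\; (N + \tilde{P})\, x = b + \tilde{P} x_0,
\qquad \tilde{P} = \begin{pmatrix} 0 & 0 \\ 0 & P \end{pmatrix},

where P is the weight given to the pseudo-observations of q. The limit of infinite P reproduces strict fixing, while finite P is an explicit regularization whose share in the final solution can be quantified by a contribution analysis.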
Abstract:
The NASA mission GRAIL (Gravity Recovery and Interior Laboratory) inherited its concept from the GRACE (Gravity Recovery and Climate Experiment) mission to determine the gravity field of the Moon. We present lunar gravity fields based on the data of GRAIL’s primary mission phase. Gravity field recovery is realized in the framework of the Celestial Mechanics Approach, using a development version of the Bernese GNSS Software along with Ka-band range-rate data series as observations and the GNI1B positions provided by NASA JPL as pseudo-observations. By comparing our results with the official level-2 GRAIL gravity field models we show that the lunar gravity field can be recovered with high quality by adapting the Celestial Mechanics Approach, even when using pre-GRAIL gravity field models as a priori fields and when replacing sophisticated models of non-gravitational accelerations by appropriately spaced pseudo-stochastic pulses (i.e., instantaneous velocity changes). We present and evaluate two lunar gravity field solutions up to degree and order 200: AIUB-GRL200A and AIUB-GRL200B. While the first solution uses no gravity field information beyond degree 200, the second is obtained by using the official GRAIL field GRGM900C up to degree and order 660 as a priori information. This reduces the omission errors and demonstrates the potential quality of our solution had we resolved the gravity field to a higher degree.
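For orientation, "degree and order 200" refers to the truncation n_max of the standard spherical-harmonic expansion of the gravitational potential (the usual convention, not a formula quoted from the paper):

V(r, \phi, \lambda) = \frac{GM}{r} \sum_{n=0}^{n_{\max}} \left(\frac{R}{r}\right)^n
 \sum_{m=0}^{n} \bar{P}_{nm}(\sin\phi)\,
 \left[ \bar{C}_{nm} \cos m\lambda + \bar{S}_{nm} \sin m\lambda \right].

Signal beyond the truncation is the omission error mentioned above, which is why supplying GRGM900C coefficients up to degree 660 as a priori information improves the AIUB-GRL200B solution.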
Abstract:
Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate on photogrammetry concepts, namely UAV-Photogrammetry Systems (UAV-PS). Such systems are used in applications where both geospatial and visual information of the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police-related services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated based on their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution and high-accuracy information of the environment (e.g. 3D modeling with less than one centimeter accuracy and resolution). In other applications, lower levels of accuracy might be sufficient (e.g. wildlife management needing a few decimeters of resolution). However, even in those applications, the specific characteristics of UAV-PSs should be well considered in the steps of both system development and application in order to yield satisfying results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, where the objective was to determine the challenges that remote-sensing applications of UAV systems currently face. This review also made it possible to recognize the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies. Accordingly, the focus of the first part of this thesis is on exploring the methodological and experimental aspects of implementing a UAV-PS. The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and for performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling, in terms of scale changes and terrain relief variations as well as structure and texture diversities. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justifies our efforts to improve the developed UAV-PS to its maximum capacity. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera, and an inertial navigation system. The software of the system included in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction.
The proposed solutions are alternatives to traditional aerial photogrammetry techniques, properly adapted to the specific characteristics of unmanned, low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method was its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency compared to state-of-the-art techniques. Secondly, a block bundle adjustment (BBA) strategy was proposed based on the integration of intrinsic camera calibration parameters as pseudo-observations into the Gauss-Helmert model. The principal advantage of this strategy was that it controls the adverse effect of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, which allowed its application to data sets containing large numbers of tie points. Finally, the concepts of intrinsic curves were revisited for dense stereo matching. The proposed technique achieves a high level of accuracy and efficiency by searching only a small fraction of the whole disparity search space and by internally handling occlusions and matching ambiguities. These photogrammetric solutions were extensively tested using synthetic data, close-range images and the images acquired from the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrates the success of this system for high-precision modeling of the environment.
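The pseudo-observation idea in the BBA can be summarised as augmenting the least-squares problem with weighted priors on the intrinsic parameters c (a schematic Gauss-Markov-style form; the thesis embeds it in the full Gauss-Helmert model):

\min_{x,\, c}\; \| A x + C c - \ell \|_{W}^{2} \;+\; \| c - c_0 \|_{W_c}^{2},

where x are the exterior and object-space parameters, c_0 the pre-calibrated intrinsics, and W_c controls how far self-calibration may pull the intrinsics away from c_0; a strong W_c stabilises the adjustment on weak networks with noisy observations.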
Abstract:
The kinetics of liquid-phase semiconductor photocatalytic and photoassisted reactions are an area of some debate, reignited recently by an article by Ollis [1] in which he proposed a simple pseudo-steady-state model to interpret the Langmuir-Hinshelwood-type kinetics commonly observed in such systems. In the current article, support for this model, over other models, is provided by a reinterpretation of the results of a study, reported initially in 1999 [2], of the photoassisted mineralization of 4-chlorophenol, 4-CP, by titania films and dispersions as a function of incident light intensity, I. On the basis of this model, these results indicate that 4-CP is adsorbed more strongly on P25 TiO2 when it is in a dispersed, rather than a film, form, due to a higher rate constant for adsorption, k1. In addition, the kinetics of 4-CP removal appear to depend on I^β, where β = 1 or 0.6 when the TiO2 is in a film or a dispersed form, respectively. These findings are discussed both in terms of the pseudo-steady-state model and other popular kinetic models.
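For reference, the Langmuir-Hinshelwood rate form that the pseudo-steady-state model reinterprets, together with the observed intensity dependence (the standard expression, not reproduced from the article):

r = \frac{k\, K\, [\text{4-CP}]}{1 + K\, [\text{4-CP}]},
\qquad r \propto I^{\beta}.

In the pseudo-steady-state reading, K is not a true adsorption equilibrium constant but a ratio of rate constants that includes the adsorption rate constant k1, which is why the apparent K, and hence the observed kinetics, can vary with the light intensity I.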
Abstract:
Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose locations where measurements are taken so as to maximise the information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design which maximises the average utility. We use models for correlations of observations on the stream network that are based on stream network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of location values or estimation of parameters. We propose an algorithmic approach to design in which the mean utility of a design is estimated using Monte Carlo techniques and an exchange algorithm searches for optimal sampling designs. In particular, we focus on the problems of finding an optimal design from a set of fixed designs and of finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and to the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these, but given the utility function, designs are relatively robust to the type of response variable.
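A minimal sketch (hypothetical interfaces) of the approach: the expected utility of a design is estimated by Monte Carlo averaging over prior draws, and an exchange algorithm swaps one sampling location at a time, keeping swaps that increase the estimate:

import numpy as np

def expected_utility(design, utility, prior_draws):
    # Pseudo-Bayesian criterion: average the utility over the prior.
    return np.mean([utility(design, theta) for theta in prior_draws])

def exchange(candidates, k, utility, prior_draws, n_pass=5, seed=0):
    # Start from a random k-point design drawn from the candidate set.
    rng = np.random.default_rng(seed)
    design = list(rng.choice(len(candidates), size=k, replace=False))
    best = expected_utility([candidates[i] for i in design], utility, prior_draws)
    for _ in range(n_pass):
        for pos in range(k):
            for j in range(len(candidates)):
                if j in design:
                    continue
                trial = design.copy()
                trial[pos] = j
                u = expected_utility([candidates[i] for i in trial], utility, prior_draws)
                if u > best:
                    design, best = trial, u
    return [candidates[i] for i in design], best

The choice of the utility function (prediction versus parameter estimation) is what drives the optimal design towards space-filling or clustered layouts.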
Abstract:
Two new line-clipping algorithms, the opposite-corner algorithm and the perpendicular-distance algorithm, based on simple geometric observations, are presented. These algorithms do not require computation of outcodes, nor do they depend on the parametric representations of the lines. It is shown that the opposite-corner algorithm performs consistently better than an algorithm due to Nicholl, Lee, and Nicholl, which is claimed to be better than the classic algorithm due to Cohen-Sutherland and the more recent Liang-Barsky algorithm. The pseudo-code of the opposite-corner algorithm is provided in the Appendix.
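For context, the outcode computation that the new algorithms avoid is the heart of the classic Cohen-Sutherland method: each endpoint gets a 4-bit region code against the clip rectangle (a standard sketch, not the paper's pseudo-code):

# Cohen-Sutherland region codes against (xmin, ymin, xmax, ymax).
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

# A segment is trivially accepted when both outcodes are 0 and trivially
# rejected when their bitwise AND is nonzero; otherwise it is clipped
# against one boundary and the test repeats.

The opposite-corner and perpendicular-distance algorithms dispense with these codes entirely, working from direct geometric tests instead.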
Abstract:
An area of about 22,000 km² on the northern Blake Plateau, off the coast of South Carolina, contains an estimated 2 billion metric tons of phosphorite concretions, and about 1.2 billion metric tons of mixed ferromanganese-phosphorite pavement. Other offshore phosphorites occur between the Blake Plateau and known continental deposits, buried under variable thicknesses of sediments. The phosphorite resembles other marine phosphorites in composition, consisting primarily of carbonate-fluorapatite, some calcite, minor quartz and other minerals. The apatite is optically pseudo-isotropic and contains about 6% [CO₃]²⁻ replacing [PO₄]³⁻ in its structure. JOIDES drillings and other evidence show that the phosphorite is a lag deposit derived from Miocene strata correlatable with phosphatic Middle Tertiary sediments on the continent. It has undergone variable cycles of erosion, reworking, partial dissolution and reprecipitation. Its present form varies from phosphatized carbonate debris, loose pellets, and pebbles, to continuous pavements, plates, and conglomeratic boulders weighing hundreds of kilograms. No primary phosphatization is currently taking place on the Blake Plateau. The primary phosphate-depositing environment involved reducing conditions and required at least temporary absence of the powerful Gulf Stream current that now sweeps the bottom of the Blake Plateau and has eroded away the bulk of the Hawthorne-equivalent sediments with which the phosphorites were once associated.
Abstract:
It is thought that Lysias’ speech XXIII, Against Pancleon, was delivered in a paragraphe or ‘counter-indictment’ process, called antigraphe in an initial phase. However, a review of these concepts and, more generally, of some aspects of Athenian judicial procedure allows us to conclude that the speech in question was delivered by the plaintiff, a client of the logographer, against the defendant in an ‘action for false testimony’, dike pseudomartyrion.