25 results for Studentized residuals
Abstract:
Insect returns from the UK's Doppler weather radars were collected in the summers of 2007 and 2008, to ascertain their usefulness in providing information about boundary layer winds. Such observations could be assimilated into numerical weather prediction models to improve forecasts of convective showers before precipitation begins. Significant numbers of insect returns were observed during daylight hours on a number of days through this period, when they were detected at up to 30 km range from the radars, and up to 2 km above sea level. The range of detectable insect returns was found to vary with time of year and temperature. There was also a very weak correlation with wind speed and direction. Use of a dual-polarized radar revealed that the insects did not orient themselves at random, but showed distinct evidence of common orientation on several days, sometimes at an angle to their direction of travel. Observation-minus-model-background residuals of wind profiles showed greater bias and standard deviation than those of other wind measurement types, which may be due to the insects' headings/airspeeds and to imperfect data extraction. The method used here, similar to the Met Office's procedure for extracting precipitation returns, requires further development as clutter contamination remained one of the largest error contributors. Wind observations derived from the insect returns would then be useful for data assimilation applications.
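The observation-minus-background comparison described above reduces, in the simplest case, to computing the bias and standard deviation of O-B residuals. A minimal sketch with synthetic winds (the numbers are placeholders, not the study's data):

```python
import numpy as np

# Synthetic stand-ins: model background winds and biased, noisy retrievals
rng = np.random.default_rng(6)
background = rng.normal(5.0, 3.0, 500)                # background wind (m/s)
obs = background + 1.0 + rng.normal(0.0, 2.0, 500)    # insect-derived winds (m/s)

omb = obs - background          # observation-minus-background residuals
bias = omb.mean()               # systematic offset
std = omb.std(ddof=1)           # random-error spread
```

In practice the background winds would come from the NWP model interpolated to the radar observation locations, and the statistics would be compared against those of other wind observation types.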
Abstract:
Carbonate rocks are important hydrocarbon reservoir rocks with complex textures and petrophysical properties (porosity and permeability) mainly resulting from various diagenetic processes (compaction, dissolution, precipitation, cementation, etc.). These complexities make prediction of reservoir characteristics (e.g. porosity and permeability) from their seismic properties very difficult. To explore the relationship between the seismic, petrophysical and geological properties, ultrasonic compressional- and shear-wave velocity measurements were made under a simulated in situ condition of pressure (50 MPa hydrostatic effective pressure) at frequencies of approximately 0.85 MHz and 0.7 MHz, respectively, using a pulse-echo method. The measurements were made both in vacuum-dry and fully saturated conditions in oolitic limestones of the Great Oolite Formation of southern England. Some of the rocks were fully saturated with oil. The acoustic measurements were supplemented by porosity and permeability measurements, petrological and pore geometry studies of resin-impregnated polished thin sections, X-ray diffraction analyses and scanning electron microscope studies to investigate submicroscopic textures and micropores. It is shown that the compressional- and shear-wave velocities (Vp and Vs, respectively) decrease with increasing porosity and that Vp decreases approximately twice as fast as Vs. The systematic differences in pore structures (e.g. the aspect ratio) of the limestones produce large residuals in the velocity versus porosity relationship. It is demonstrated that the velocity versus porosity relationship can be improved by removing the pore-structure-dependent variations from the residuals. The introduction of water into the pore space decreases the shear moduli of the rocks by about 2 GPa, suggesting that there exists a fluid/matrix interaction at grain contacts, which reduces the rigidity. 
The predicted Biot-Gassmann velocities are greater than the measured velocities because neither this rock-fluid interaction nor the velocity dispersion due to a local flow mechanism is accounted for in the Biot-Gassmann models. The velocities predicted by the Raymer and time-average relationships overestimate the measured velocities even more than the Biot model does.
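The Biot-Gassmann prediction referred to here can be sketched with Gassmann's fluid-substitution relation, which predicts the saturated bulk modulus while leaving the shear modulus unchanged; the roughly 2 GPa shear weakening reported above is exactly what this relation cannot capture. The input values below are generic illustrations for a calcite-framed rock, not measurements from the paper:

```python
import numpy as np

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from Gassmann's fluid-substitution relation (GPa)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative inputs: dry-rock, calcite mineral and water bulk moduli (GPa)
k_dry, k_min, k_fl, phi = 20.0, 77.0, 2.25, 0.20
k_sat = gassmann_ksat(k_dry, k_min, k_fl, phi)

mu = 12.0      # shear modulus (GPa); Gassmann assumes the fluid leaves it unchanged
rho = 2450.0   # saturated bulk density (kg/m^3)
vp = np.sqrt((k_sat + 4.0 * mu / 3.0) * 1e9 / rho)  # predicted P-wave velocity (m/s)
```

A measured shear modulus about 2 GPa below `mu` after saturation, as reported above, pulls the measured velocities below this prediction.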
Abstract:
Mostly because of a lack of observations, fundamental aspects of the St. Lawrence Estuary's wintertime response to forcing remain poorly understood. The results of a field campaign over the winter of 2002/03 in the estuary are presented. The response of the system to tidal forcing is assessed through the use of harmonic analyses of temperature, salinity, sea level, and current observations. The analyses confirm previous evidence for the presence of semidiurnal internal tides, albeit at greater depths than previously observed for ice-free months. The low-frequency tidal streams were found to be mostly baroclinic in character and to produce an important neap tide intensification of the estuarine circulation. Despite stronger atmospheric momentum forcing in winter, the response is found to be less coherent with the winds than seen in previous studies of ice-free months. The tidal residuals show the cold intermediate layer in the estuary is renewed rapidly (~14 days) in late March by the advection of a wedge of near-freezing waters from the Gulf of St. Lawrence. In situ processes appeared to play a lesser role in the renewal of this layer. In particular, significant wintertime deepening of the estuarine surface mixed layer was prevented by surface stability, which remained high throughout the winter. The observations also suggest that the bottom circulation was intensified during winter, with the intrusion in the deep layer of relatively warm Atlantic waters, such that the 3°C isotherm rose from below 150 m to near 60 m.
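The harmonic analysis used above amounts to a least-squares fit of sinusoids at known tidal frequencies; what is left over is the tidal residual. A sketch for a single semidiurnal M2 constituent on a synthetic hourly sea-level record (the record is invented; only the M2 period is standard):

```python
import numpy as np

t = np.arange(0.0, 30 * 24.0)            # 30 days of hourly samples (hours)
omega = 2.0 * np.pi / 12.4206            # M2 angular frequency (rad/hour)
rng = np.random.default_rng(0)
eta = 0.5 + 1.2 * np.cos(omega * t - 0.8) + 0.05 * rng.standard_normal(t.size)

# Design matrix: mean level plus cosine/sine at the M2 frequency
G = np.column_stack([np.ones_like(t), np.cos(omega * t), np.sin(omega * t)])
coef, *_ = np.linalg.lstsq(G, eta, rcond=None)

amplitude = np.hypot(coef[1], coef[2])   # recovered M2 amplitude
phase = np.arctan2(coef[2], coef[1])     # recovered M2 phase (rad)
residual = eta - G @ coef                # the tidal residual
```

With real records, additional constituents (S2, K1, O1, ...) add more column pairs to `G`, and the same residual series is formed for temperature, salinity, and currents.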
Abstract:
Measurements of anthropogenic tracers such as chlorofluorocarbons and tritium must be quantitatively combined with ocean general circulation models as a component of systematic model development. The authors have developed and tested an inverse method, using a Green's function, to constrain general circulation models with transient tracer data. Using this method chlorofluorocarbon-11 and -12 (CFC-11 and -12) observations are combined with a North Atlantic configuration of the Miami Isopycnic Coordinate Ocean Model with 4/3 degrees resolution. Systematic differences can be seen between the observed CFC concentrations and prior CFC fields simulated by the model. These differences are reduced by the inversion, which determines the optimal gas transfer across the air-sea interface, accounting for uncertainties in the tracer observations. After including the effects of unresolved variability in the CFC fields, the model is found to be inconsistent with the observations because the model/data misfit slightly exceeds the error estimates. By excluding observations in waters ventilated north of the Greenland-Scotland ridge (σ₀ < 27.82 kg m⁻³; shallower than about 2000 m), the fit is improved, indicating that the Nordic overflows are poorly represented in the model. Some systematic differences in the model/data residuals remain and are related, in part, to excessively deep model ventilation near Rockall and deficient ventilation in the main thermocline of the eastern subtropical gyre. Nevertheless, there do not appear to be gross errors in the basin-scale model circulation. Analysis of the CFC inventory using the constrained model suggests that the North Atlantic Ocean shallower than about 2000 m was near 20% saturated in the mid-1990s. Overall, this basin is a sink for 22% of the total atmosphere-to-ocean CFC-11 flux, twice the global average value. The average water mass formation rates over the CFC transient are 7.0 and 6.0 Sv (1 Sv = 10⁶ m³ s⁻¹) for subtropical mode water and subpolar mode water, respectively.
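The Green's function approach can be caricatured as weighted least squares: simulated responses to a few control perturbations form the columns of a matrix, and the coefficients are chosen to minimise the error-weighted model/data misfit. A toy sketch (dimensions and values are illustrative, not the MICOM configuration):

```python
import numpy as np

rng = np.random.default_rng(8)
n_obs, n_par = 300, 4
G = rng.standard_normal((n_obs, n_par))   # columns: Green's function responses
m_true = np.array([1.0, -0.5, 0.3, 0.8])  # "true" control-parameter adjustments
err = 0.1 * np.ones(n_obs)                # stated observation error estimates
d = G @ m_true + err * rng.standard_normal(n_obs)

W = np.diag(1.0 / err ** 2)               # inverse observation-error covariance
m_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
misfit = (d - G @ m_hat) / err            # normalised model/data residuals
```

A normalised misfit with mean square near 1 indicates consistency with the stated error estimates; the slight excess reported above is the analogous diagnostic for the real inversion.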
Abstract:
This paper provides a generalisation of the structural time series version of the Almost Ideal Demand System (AIDS) that allows for time-varying coefficients (TVC/AIDS) in the presence of cross-equation constraints. An empirical appraisal of the TVC/AIDS is made using a dynamic AIDS with trending intercept as the baseline model with a data set from the Italian Household Budget Survey (1986-2001). The assessment is based on four criteria: adherence to theoretical constraints, statistical diagnostics on residuals, forecasting performance and economic meaningfulness. No clear evidence is found for superior performance of the TVC/AIDS, apart from improved short-term forecasts.
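The time-varying-coefficient idea behind the TVC/AIDS can be illustrated on a single equation: a regression coefficient that evolves as a random walk, estimated with the Kalman filter. This is a deliberately minimal sketch of the structural-time-series machinery, not the full demand system with cross-equation constraints; all numbers are synthetic:

```python
import numpy as np

# y_t = x_t * b_t + e_t,   b_t = b_{t-1} + w_t  (random-walk coefficient)
rng = np.random.default_rng(9)
n = 300
x = rng.standard_normal(n)
b_true = 1.0 + np.cumsum(0.05 * rng.standard_normal(n))
y = x * b_true + 0.1 * rng.standard_normal(n)

var_e, var_w = 0.1 ** 2, 0.05 ** 2        # measurement and state noise variances
b, P = 0.0, 10.0                          # vague initial state
b_filt = np.empty(n)
for t in range(n):
    P = P + var_w                         # predict: coefficient uncertainty grows
    S = x[t] ** 2 * P + var_e             # innovation variance
    K = P * x[t] / S                      # Kalman gain
    b = b + K * (y[t] - x[t] * b)         # update with the new observation
    P = (1.0 - K * x[t]) * P
    b_filt[t] = b
```

Setting `var_w = 0` recovers a fixed-coefficient model, which is the comparison implicit in the four assessment criteria above.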
Abstract:
This paper exploits a structural time series approach to model the time pattern of multiple and resurgent food scares and their direct and cross-product impacts on consumer response. A structural time series Almost Ideal Demand System (STS-AIDS) is embedded in a vector error correction framework to allow for dynamic effects (VEC-STS-AIDS). Italian aggregate household data on meat demand is used to assess the time-varying impact of a resurgent BSE crisis (1996 and 2000) and the 1999 Dioxin crisis. The VEC-STS-AIDS model monitors the short-run impacts and performs satisfactorily in terms of residuals diagnostics, overcoming the major problems encountered by the customary vector error correction approach.
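The error-correction component of the VEC-STS-AIDS can be sketched in its simplest univariate form: two cointegrated series, with the lagged disequilibrium entering the short-run equation and a negative adjustment coefficient pulling the system back to equilibrium. Entirely synthetic, and far simpler than the paper's multivariate share system:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 400
x = np.cumsum(rng.standard_normal(n))     # common stochastic trend (I(1))
y = 0.5 * x + rng.standard_normal(n)      # cointegrated with x

ect = (y - 0.5 * x)[:-1]                  # lagged error-correction term
dy = np.diff(y)
X = np.column_stack([np.ones(n - 1), ect])
coef, *_ = np.linalg.lstsq(X, dy, rcond=None)
alpha = coef[1]     # adjustment speed; negative when the system error-corrects
```

In the paper the analogous system also carries the structural-time-series components that track the BSE and Dioxin scare effects.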
Abstract:
This research addresses a goal of the horticultural sector of southwestern Colombia: to obtain climatic information, specifically to predict the monthly average temperature at sites where it has not been measured. The data correspond to monthly average temperature, and were recorded in meteorological stations at Valle del Cauca, Colombia, South America. Two components are identified in the data of this research: (1) a component due to temporal aspects, determined by characteristics of the time series, the distribution of the monthly average temperature through the months, and the temporal phenomena that increased (El Niño) and decreased (La Niña) the temperature values, and (2) a component due to the sites, determined by the clear differentiation of two populations, the valley and the mountains, which are associated with the pattern of monthly average temperature and with altitude. Finally, due to the closeness between meteorological stations, it is possible to find spatial correlation between data from nearby sites. In the first instance a random coefficient model without spatial covariance structure in the errors is obtained for each month and geographical location (mountains and valley). Models for wet periods in the mountains show a normal distribution in the errors; models for the valley and for dry periods in the mountains do not exhibit a normal pattern in the errors. In models for the mountains in wet periods, omnidirectional weighted variograms for the residuals show spatial continuity. Both the random coefficient model without spatial covariance structure in the errors and the random coefficient model with spatial covariance structure in the errors capture the influence of the El Niño and La Niña phenomena, which indicates that the inclusion of the random part in the model is appropriate. The altitude variable contributes significantly in the models for the mountains. In general, the cross-validation process indicates that the random coefficient models with spherical and Gaussian spatial covariance structures are the best models for wet periods in the mountains, and the worst model is the one used by the Colombian Institute for Meteorology, Hydrology and Environmental Studies (IDEAM) to predict temperature.
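The omnidirectional variograms used above to check residual spatial continuity can be sketched as binned semivariances of residuals against station separation. The coordinates and residual field below are synthetic stand-ins for the Valle del Cauca stations:

```python
import numpy as np

rng = np.random.default_rng(10)
coords = rng.uniform(0.0, 100.0, (80, 2))        # synthetic station locations (km)
resid = np.sin(coords[:, 0] / 20.0) + 0.2 * rng.standard_normal(80)

# Pairwise distances and half-squared residual differences
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
half_sq = 0.5 * (resid[:, None] - resid[None, :]) ** 2
iu = np.triu_indices(80, k=1)                    # count each pair once
dist, gamma = d[iu], half_sq[iu]

lags = np.arange(0, 60, 10)                      # 10 km lag bins
semivar = [gamma[(dist >= lo) & (dist < lo + 10)].mean() for lo in lags]
```

Semivariance rising with lag, as in `semivar` here, is the spatial-continuity signature that motivates adding a spatial covariance structure to the model errors.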
Abstract:
Combinations of drugs are increasingly being used for a wide variety of diseases and conditions. A pre-clinical study may allow the investigation of the response at a large number of dose combinations. In determining the response to a drug combination, interest may lie in seeking evidence of synergism, in which the joint action is greater than the actions of the individual drugs, or of antagonism, in which it is less. Two well-known response surface models representing no interaction are Loewe additivity and Bliss independence, and Loewe or Bliss synergism or antagonism is defined relative to these. We illustrate an approach to fitting these models for the case in which the marginal single drug dose-response relationships are represented by four-parameter logistic curves with common upper and lower limits, and where the response variable is normally distributed with a common variance about the dose-response curve. When the dose-response curves are not parallel, the relative potency of the two drugs varies according to the magnitude of the desired effect and the models for Loewe additivity and synergism/antagonism cannot be explicitly expressed. We present an iterative approach to fitting these models without the assumption of parallel dose-response curves. A goodness-of-fit test based on residuals is also described. Implementation using the SAS NLIN procedure is illustrated using data from a pre-clinical study. Copyright © 2007 John Wiley & Sons, Ltd.
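The marginal dose-response fits described above use four-parameter logistic curves; a single-drug version can be sketched with `scipy.optimize.curve_fit` (the paper uses the SAS NLIN procedure, and the doses and responses below are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def fourpl(dose, lower, upper, ec50, hill):
    """Four-parameter logistic curve (decreasing in dose when hill > 0)."""
    return lower + (upper - lower) / (1.0 + (dose / ec50) ** hill)

# Synthetic single-drug data; not from the pre-clinical study
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
rng = np.random.default_rng(1)
resp = fourpl(dose, 5.0, 100.0, 1.0, 1.5) + rng.normal(0.0, 0.5, dose.size)

popt, _ = curve_fit(fourpl, dose, resp, p0=[0.0, 95.0, 0.5, 1.0])
residuals = resp - fourpl(dose, *popt)   # inputs to a residual-based fit test
```

In the joint model, the two marginal curves share `lower` and `upper`; when the two `hill` slopes differ, relative potency varies with the effect level and the iterative scheme described above is needed because the Loewe surface has no closed form.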
Abstract:
Heterogeneity in lifetime data may be modelled by multiplying an individual's hazard by an unobserved frailty. We test for the presence of frailty of this kind in univariate and bivariate data with Weibull distributed lifetimes, using statistics based on the ordered Cox-Snell residuals from the null model of no frailty. The form of the statistics is suggested by outlier testing in the gamma distribution. We find through simulation that the sum of the k largest or k smallest order statistics, for suitably chosen k, provides a powerful test when the frailty distribution is assumed to be gamma or positive stable, respectively. We provide recommended values of k for sample sizes up to 100 and simple formulae for estimated critical values for tests at the 5% level.
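The test statistics described above can be sketched directly: fit the null Weibull model, form the Cox-Snell residuals (the fitted cumulative hazards, which behave like a unit-exponential sample under the null), and sum the k largest or smallest order statistics. The data are synthetic and k is chosen arbitrarily here rather than from the paper's recommended values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
t = 2.0 * rng.weibull(1.5, size=100)      # lifetimes: Weibull, shape 1.5, scale 2

# Null model: Weibull with no frailty; floc=0 pins the location at zero
shape, _, scale = stats.weibull_min.fit(t, floc=0)

# Cox-Snell residuals = fitted cumulative hazards H(t) = (t/scale)**shape
r = np.sort((t / scale) ** shape)

k = 5
T_large = r[-k:].sum()   # sum of k largest: sensitive to gamma frailty
T_small = r[:k].sum()    # sum of k smallest: sensitive to positive stable frailty
```

Critical values would come from the simulated tables the paper provides; a useful sanity check is that the residuals of the fitted null model average close to 1, as a unit-exponential sample should.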
Abstract:
In this study we quantify the relationship between the aerosol optical depth increase from a volcanic eruption and the severity of the subsequent surface temperature decrease. This investigation is made by simulating 10 different sizes of eruption in a global circulation model (GCM) by changing stratospheric sulfate aerosol optical depth at each time step. The sizes of the simulated eruptions range from Pinatubo‐sized up to the magnitude of supervolcanic eruptions around 100 times the size of Pinatubo. From these simulations we find that there is a smooth monotonic relationship between the global mean maximum aerosol optical depth anomaly and the global mean temperature anomaly and we derive a simple mathematical expression which fits this relationship well. We also construct similar relationships between global mean aerosol optical depth and the temperature anomaly at every individual model grid box to produce global maps of best‐fit coefficients and fit residuals. These maps are used with caution to find the eruption size at which a local temperature anomaly is clearly distinct from the local natural variability and to approximate the temperature anomalies which the model may simulate following a Tambora‐sized eruption. To our knowledge, this is the first study which quantifies the relationship between aerosol optical depth and resulting temperature anomalies in a simple way, using the wealth of data that is available from GCM simulations.
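The abstract does not give the derived expression, so the sketch below fits an assumed logarithmic form to invented (optical depth, temperature anomaly) pairs, simply to illustrate deriving a smooth monotonic fit and its residuals:

```python
import numpy as np
from scipy.optimize import curve_fit

def cooling(aod, a, b):
    """Assumed smooth monotonic form; NOT the expression derived in the paper."""
    return -a * np.log1p(b * aod)

# Invented eruption sizes (peak AOD anomaly) and global-mean cooling (K)
aod = np.array([0.15, 0.5, 1.5, 4.0, 10.0, 15.0])
rng = np.random.default_rng(3)
dT = cooling(aod, 2.0, 1.0) + rng.normal(0.0, 0.1, aod.size)

popt, _ = curve_fit(cooling, aod, dT, p0=[1.0, 1.0])
resid = dT - cooling(aod, *popt)   # per-point fit residuals
```

In the study the same kind of fit is repeated at every model grid box, producing the maps of best-fit coefficients and fit residuals mentioned above.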
Abstract:
The performance of flood inundation models is often assessed using satellite observed data; however these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency a new method of evaluating flood inundation model performance is developed by using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran’s I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near identical result to calibration using spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variations found in the subsamples of the observed data it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
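Moran's I, used above to screen subsamples for spatial dependency, can be sketched with inverse-distance weights (a common choice; the paper's exact weighting scheme is not restated here). Synthetic points stand in for the water-elevation residuals:

```python
import numpy as np

def morans_i(values, coords):
    """Moran's I with inverse-distance spatial weights (zero diagonal)."""
    z = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.zeros_like(d)
    mask = d > 0
    w[mask] = 1.0 / d[mask]
    n = values.size
    return (n / w.sum()) * (z @ w @ z) / (z @ z)

rng = np.random.default_rng(12)
coords = rng.uniform(0.0, 1.0, (100, 2))   # synthetic flood-margin sample points
gradient = coords[:, 0]                    # spatially dependent field
noise = rng.standard_normal(100)           # spatially independent field

i_dep = morans_i(gradient, coords)         # positive under spatial dependence
i_ind = morans_i(noise, coords)            # near the null expectation -1/(n-1)
```

The subsampling strategy would repeatedly draw subsets of points, keep those whose I is not significantly different from its null expectation, calibrate against each, and average the results.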
Abstract:
Total ozone trends are typically studied using linear regression models that assume a first-order autoregression of the residuals [so-called AR(1) models]. We consider total ozone time series over 60°S–60°N from 1979 to 2005 and show that most latitude bands exhibit long-range correlated (LRC) behavior, meaning that ozone autocorrelation functions decay by a power law rather than exponentially as in AR(1). At such latitudes the uncertainties of total ozone trends are greater than those obtained from AR(1) models and the expected time required to detect ozone recovery correspondingly longer. We find no evidence of LRC behavior in southern middle and high subpolar latitudes (45°–60°S), where the long-term ozone decline attributable to anthropogenic chlorine is the greatest. We thus confirm an earlier prediction based on an AR(1) analysis that this region (especially the highest latitudes, and especially the South Atlantic) is the optimal location for the detection of ozone recovery, with a statistically significant ozone increase attributable to chlorine likely to be detectable by the end of the next decade. In northern middle and high latitudes, on the other hand, there is clear evidence of LRC behavior. This increases the uncertainties on the long-term trend attributable to anthropogenic chlorine by about a factor of 1.5 and lengthens the expected time to detect ozone recovery by a similar amount (from ∼2030 to ∼2045). If the long-term changes in ozone are instead fit by a piecewise-linear trend rather than by stratospheric chlorine loading, then the strong decrease of northern middle- and high-latitude ozone during the first half of the 1990s and its subsequent increase in the second half of the 1990s projects more strongly on the trend and makes a smaller contribution to the noise. 
This both increases the trend and weakens the LRC behavior at these latitudes, to the extent that ozone recovery (according to this model, and in the sense of a statistically significant ozone increase) is already on the verge of being detected. The implications of this rather controversial interpretation are discussed.
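The distinction drawn above between AR(1) and long-range-correlated residuals shows up in the autocorrelation function: AR(1) decays exponentially, rho(k) = phi**k, whereas LRC behaviour decays as a power law, roughly k**(-gamma). A sketch that simulates the AR(1) null and checks its ACF (phi is an arbitrary illustrative value, not fitted to ozone data):

```python
import numpy as np

rng = np.random.default_rng(4)
phi, n = 0.6, 50_000
eps = rng.standard_normal(n)
r = np.empty(n)
r[0] = eps[0]
for t in range(1, n):                 # AR(1): r_t = phi * r_{t-1} + eps_t
    r[t] = phi * r[t - 1] + eps[t]

def acf(x, k):
    """Sample autocorrelation at lag k."""
    x = x - x.mean()
    return (x[:-k] @ x[k:]) / (x @ x)

rho = np.array([acf(r, k) for k in range(1, 6)])   # close to 0.6**k
```

An LRC series would instead show rho falling off much more slowly at long lags, leaving extra low-frequency variability and hence the wider trend uncertainties described above.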
Abstract:
In this paper, we present a polynomial-based noise variance estimator for multiple-input multiple-output single-carrier block transmission (MIMO-SCBT) systems. It is shown that the optimal pilots for noise variance estimation satisfy the same condition as that for channel estimation. Theoretical analysis indicates that the proposed estimator is statistically more efficient than the conventional sum of squared residuals (SSR) based estimator. Furthermore, we obtain an efficient implementation of the estimator by exploiting its special structure. Numerical results confirm our theoretical analysis.
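The conventional SSR-based estimator that the proposed method is compared against can be sketched for a generic linear observation model (a real-valued stand-in for the MIMO-SCBT block model; dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, sigma2 = 256, 16, 0.25             # observations, parameters, true noise variance
H = rng.standard_normal((N, M))          # known observation matrix
y = H @ rng.standard_normal(M) + np.sqrt(sigma2) * rng.standard_normal(N)

x_hat = np.linalg.lstsq(H, y, rcond=None)[0]
ssr = np.sum((y - H @ x_hat) ** 2)       # sum of squared residuals
sigma2_hat = ssr / (N - M)               # unbiased noise-variance estimate
```

Dividing by N - M rather than N removes the bias introduced by fitting M parameters; the paper's polynomial-based estimator is shown to be statistically more efficient than this conventional estimate.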
Abstract:
Most of the operational Sea Surface Temperature (SST) products derived from satellite infrared radiometry use multi-spectral algorithms. They show, in general, reasonable performances with root mean square (RMS) residuals around 0.5 K when validated against buoy measurements, but have limitations, particularly a component of the retrieval error that relates to such algorithms' limited ability to cope with the full variability of atmospheric absorption and emission. We propose to use forecast atmospheric profiles and a radiative transfer model to simulate the algorithmic errors of multi-spectral algorithms. In the practical case of SST derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG), we demonstrate that simulated algorithmic errors do explain a significant component of the actual errors observed for the nonlinear (NL) split window algorithm in operational use at the Centre de Météorologie Spatiale (CMS). The simulated errors, used as correction terms, significantly reduce both the regional biases of the NL algorithm and the standard deviation of the differences from drifting buoy measurements. The availability of atmospheric profiles associated with observed satellite-buoy differences allows us to analyze the origins of the main algorithmic errors observed in the SEVIRI field of view: a negative bias in the inter-tropical zone, and a mid-latitude positive bias. We demonstrate how these errors are explained by the sensitivity of observed brightness temperatures to the vertical distribution of water vapour, propagated through the SST retrieval algorithm.
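For orientation, a generic split-window retrieval takes the form below: the 11-12 µm brightness-temperature difference, scaled by a first-guess SST, corrects for water-vapour absorption. The coefficients here are made up for illustration and are not the CMS operational NL coefficients:

```python
# Schematic split-window SST retrieval; a, b, c, d are illustrative only
def split_window_sst(t11, t12, t_clim, a=1.0, b=1.0, c=0.05, d=0.0):
    # t11, t12: ~10.8 and ~12.0 um brightness temperatures (K)
    # t_clim: first-guess SST (K); the (t11 - t12) term grows with water vapour
    return a * t11 + (b + c * (t_clim - 273.15)) * (t11 - t12) + d

t11, t12, t_clim = 292.0, 290.5, 293.0
sst = split_window_sst(t11, t12, t_clim)   # retrieved SST (K)
```

The simulated-error correction described above adjusts the output of such an algorithm using radiative-transfer calculations on forecast atmospheric profiles.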
Abstract:
This paper considers the effect of using a GARCH filter on the properties of the BDS test statistic as well as a number of other issues relating to the application of the test. It is found that, for certain values of the user-adjustable parameters, the finite sample distribution of the test is far removed from asymptotic normality. In particular, when data generated from some completely different model class are filtered through a GARCH model, the frequency of rejection of iid falls, often substantially. The implication of this result is that it might be inappropriate to use non-rejection of iid of the standardised residuals of a GARCH model as evidence that the GARCH model ‘fits’ the data.
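The GARCH filtering step at issue can be sketched as follows: given (here, known) parameters, the GARCH(1,1) recursion recovers conditional variances, and the standardised residuals r_t/sigma_t are what the BDS test then examines for iid behaviour. Parameters and data below are synthetic:

```python
import numpy as np

def garch_filter(r, omega, alpha, beta):
    """Conditional variances via the GARCH(1,1) recursion; returns z_t = r_t/sigma_t."""
    sigma2 = np.empty(r.size)
    sigma2[0] = r.var()                  # initialise at the sample variance
    for t in range(1, r.size):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return r / np.sqrt(sigma2)

# Simulate a true GARCH(1,1) series, then filter it with the true parameters
rng = np.random.default_rng(7)
omega, alpha, beta = 0.1, 0.1, 0.8
n = 2000
z = rng.standard_normal(n)
r = np.empty(n)
s2 = omega / (1.0 - alpha - beta)        # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(s2) * z[t]
    s2 = omega + alpha * r[t] ** 2 + beta * s2

std_resid = garch_filter(r, omega, alpha, beta)
```

The paper's point is that passing series from some entirely different model class through this filter can also yield standardised residuals that the BDS test fails to reject as iid, so non-rejection is weak evidence that GARCH ‘fits’.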