999 results for remote reading
Abstract:
This is a longitudinal case study of a child who taught herself to read before she went to school. This case study is drawn from a wider study of a group of precocious readers, all of whom had received no explicit instruction, but who had had positive literacy experiences in their homes. The subject of this study was able to read fluently at the age of 5 years and 4 months. Her reading was at least 5 years ahead of her chronological age and her spelling was 4 years ahead. Her reading speed was also very proficient. Moreover, tests indicated that her pseudoword reading was highly accurate and that she was highly proficient on a series of measures of phonemic awareness. Her performance was also assessed at the ages of 6, 7, and 11 years. She continued to show high levels of ability in all aspects of literacy. This study contrasts with recent case studies on very precocious readers who showed poor levels of phonological awareness and who were unable to spell at an early age.
Abstract:
This paper presents an image motion model for airborne three-line-array (TLA) push-broom cameras. Both aircraft velocity and attitude instability are taken into account in modeling image motion. The effects of aircraft pitch, roll, and yaw on image motion are analyzed based on geometric relations in designated coordinate systems. Image motion is modeled mathematically as image motion velocity multiplied by exposure time. A quantitative analysis of image motion velocity is then conducted in simulation experiments. The results show that image motion caused by aircraft velocity is space invariant, while image motion caused by attitude instability is more complicated. Pitch, roll, and yaw all contribute to image motion to different extents: pitch dominates the along-track image motion, and both roll and yaw contribute strongly to the cross-track image motion. These results provide a valuable basis for image motion compensation to ensure high-accuracy imagery in aerial photogrammetry.
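The core relation in the abstract — image displacement equals image motion velocity times exposure time — can be sketched for the simplest case, the along-track component caused by aircraft forward velocity alone. The attitude (pitch/roll/yaw) terms of the full model are omitted, and all parameter values below are illustrative assumptions, not the paper's.

```python
# Sketch of the along-track (forward-motion) image displacement for a
# push-broom camera. The abstract's full model also includes pitch, roll,
# and yaw terms, which are omitted here; all symbols are illustrative.

def along_track_image_motion(ground_speed, flying_height, focal_length, exposure_time):
    """Image-plane displacement (same units as focal_length) caused by
    aircraft forward velocity during one exposure."""
    # Scale the ground velocity to the image plane by the ratio f / H
    image_motion_velocity = focal_length * ground_speed / flying_height
    return image_motion_velocity * exposure_time

# Example: 70 m/s ground speed, 2000 m altitude, 100 mm lens, 1 ms exposure
d = along_track_image_motion(70.0, 2000.0, 0.100, 0.001)  # metres on the focal plane
```

For these (assumed) values the displacement is 3.5 µm, i.e. on the order of a pixel for typical detector pitches, which is why motion compensation matters.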
Abstract:
Airborne lidar provides accurate height information of objects on the earth and has been recognized as a reliable and accurate surveying tool in many applications. In particular, lidar data offer vital and significant features for urban land-cover classification, which is an important task in urban land-use studies. In this article, we present an approach in which lidar data, fused with co-registered images (i.e. aerial colour images containing red, green and blue (RGB) bands and near-infrared (NIR) images) and other derived features, are used for accurate urban land-cover classification. The proposed approach begins with an initial classification performed by the Dempster–Shafer theory of evidence with a specifically designed basic probability assignment function. It outputs two results, i.e. the initial classification and pseudo-training samples, which are selected automatically according to the combined probability masses. Second, a support vector machine (SVM)-based probability estimator is adopted to compute the class conditional probability (CCP) for each pixel from the pseudo-training samples. Finally, a Markov random field (MRF) model is established to combine spatial contextual information into the classification. In this stage, the initial classification result and the CCP are exploited. An efficient belief propagation (EBP) algorithm is developed to search for the global minimum-energy solution for the maximum a posteriori (MAP)-MRF framework, in which three techniques are developed to speed up the standard belief propagation (BP) algorithm. Lidar and its co-registered data acquired by Toposys Falcon II are used in performance tests. The experimental results show that fusing the height data and optical images is particularly suited for urban land-cover classification. No training samples are needed in the proposed approach, and the computational cost is relatively low. An average classification accuracy of 93.63% is achieved.
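The evidence-fusion step named in the abstract is Dempster's rule of combination, which can be sketched in a few lines. The frame of discernment and the mass values below are made-up examples, not the paper's basic probability assignment function.

```python
# Minimal sketch of Dempster's rule of combination, the evidence-fusion
# step of Dempster-Shafer theory. Masses and classes are illustrative.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    # Normalize by the non-conflicting mass (Dempster's normalization)
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two hypothetical evidence sources over the classes {building, tree}:
# one from lidar height, one from an NDVI-like optical feature.
m_height = {frozenset({'building'}): 0.6, frozenset({'building', 'tree'}): 0.4}
m_ndvi = {frozenset({'tree'}): 0.7, frozenset({'building', 'tree'}): 0.3}
fused = dempster_combine(m_height, m_ndvi)
```

After combination the fused masses again sum to one, and the singleton masses can be compared (or converted to pignistic probabilities) to pick a class or to select confident pixels as pseudo-training samples.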
Abstract:
This letter presents an effective approach for selection of appropriate terrain modeling methods in forming a digital elevation model (DEM). This approach achieves a balance between modeling accuracy and modeling speed. A terrain complexity index is defined to represent a terrain's complexity. A support vector machine (SVM) classifies terrain surfaces into either complex or moderate based on this index associated with the terrain elevation range. The classification result recommends a terrain modeling method for a given data set in accordance with its required modeling accuracy. Sample terrain data from the lunar surface are used in constructing an experimental data set. The results have shown that the terrain complexity index properly reflects the terrain complexity, and the SVM classifier derived from both the terrain complexity index and the terrain elevation range is more effective and generic than that designed from either the terrain complexity index or the terrain elevation range only. The statistical results have shown that the average classification accuracy of SVMs is about 84.3% ± 0.9% for terrain types (complex or moderate). For various ratios of complex and moderate terrain types in a selected data set, the DEM modeling speed increases up to 19.5% with given DEM accuracy.
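The letter's abstract does not give the exact definition of its terrain complexity index, so the sketch below uses one plausible stand-in — the standard deviation of local slope magnitudes over a DEM grid — paired with the elevation range, the second feature fed to the SVM. Both the index definition and the test surfaces are illustrative assumptions.

```python
# Sketch of a terrain complexity index as the standard deviation of local
# slope magnitudes, paired with the elevation range. This is an assumed
# stand-in for the letter's index, whose definition is not in the abstract.
import numpy as np

def terrain_complexity(dem, cell_size=1.0):
    """dem: 2-D array of elevations. Returns (complexity_index, elevation_range)."""
    gy, gx = np.gradient(dem, cell_size)  # elevation gradients per axis
    slope = np.hypot(gx, gy)              # slope magnitude at each cell
    return float(slope.std()), float(dem.max() - dem.min())

# A tilted plane should score near zero; random rough terrain, much higher.
flat = np.fromfunction(lambda i, j: 0.1 * j, (32, 32))
rough = np.random.default_rng(0).normal(0.0, 5.0, (32, 32))
```

In the letter's scheme, such a feature pair would be the input on which the SVM decides between "complex" and "moderate" terrain, which in turn selects the DEM modeling method.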
Abstract:
Satellite-based Synthetic Aperture Radar (SAR) has proved useful for obtaining information on flood extent, which, when intersected with a Digital Elevation Model (DEM) of the floodplain, provides water level observations that can be assimilated into a hydrodynamic model to decrease forecast uncertainty. With an increasing number of operational satellites with SAR capability, information on the relationship between satellite first visit and revisit times and forecast performance is required to optimise the operational scheduling of satellite imagery. Using an Ensemble Transform Kalman Filter (ETKF) and a synthetic analysis with the 2D hydrodynamic model LISFLOOD-FP, based on a real flood that affected an urban area (summer 2007, Tewkesbury, Southwest UK), we evaluate the sensitivity of forecast performance to these visit parameters. We emulate a generic hydrologic-hydrodynamic modelling cascade by imposing a bias and spatiotemporal correlations on the inflow error ensemble entering the hydrodynamic domain. First, in agreement with previous research, estimating and correcting for this bias leads to a clear improvement in keeping the forecast on track. Second, imagery obtained early in the flood is shown to have a large influence on forecast statistics. The revisit interval is most influential for early observations. The results are promising for the future of remote sensing-based water level observations for real-time flood forecasting in complex scenarios.
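The ETKF analysis step named in the abstract can be sketched generically in NumPy. This is a standard ensemble-space formulation for a linear observation operator, not the paper's LISFLOOD-FP/flood-extent configuration, and the toy dimensions in the usage example are assumptions.

```python
# Minimal Ensemble Transform Kalman Filter (ETKF) analysis step for a linear
# observation operator H and observation-error covariance R. Generic textbook
# formulation, sketched for illustration only.
import numpy as np

def sqrtm_sym(M):
    """Symmetric matrix square root via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T

def etkf_update(X, y, H, R):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation vector.
    Returns the analysis ensemble."""
    n = X.shape[1]
    xm = X.mean(axis=1, keepdims=True)
    Xp = X - xm                                    # forecast perturbations
    Yp = H @ Xp                                    # perturbations in obs space
    Rinv = np.linalg.inv(R)
    # Analysis covariance in ensemble space: [(n-1) I + Yp^T R^-1 Yp]^-1
    A = np.linalg.inv((n - 1) * np.eye(n) + Yp.T @ Rinv @ Yp)
    wm = A @ Yp.T @ Rinv @ (y - (H @ xm).ravel())  # mean-update weights
    W = sqrtm_sym((n - 1) * A)                     # perturbation transform
    return xm + Xp @ wm[:, None] + Xp @ W
```

For a scalar state observed directly, this reproduces the familiar Kalman update: a forecast ensemble [0, 1, 2] (mean 1, sample variance 1) with observation 3 and observation variance 0.5 yields analysis mean 7/3 and analysis variance 1/3.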
Abstract:
Approximately 1–2% of net primary production by land plants is re-emitted to the atmosphere as isoprene and monoterpenes. These emissions play major roles in atmospheric chemistry and air pollution–climate interactions. Phenomenological models have been developed to predict their emission rates, but limited understanding of the function and regulation of these emissions has led to large uncertainties in model projections of air quality and greenhouse gas concentrations. We synthesize recent advances in diverse fields, from cell physiology to atmospheric remote sensing, and use this information to propose a simple conceptual model of volatile isoprenoid emission based on regulation of metabolism in the chloroplast. This may provide a robust foundation for scaling up emissions from the cellular to the global scale.
Abstract:
Ensemble-based data assimilation is rapidly proving itself as a computationally efficient and skilful assimilation method for numerical weather prediction, which can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. In this paper, a data selection strategy is presented that aims to assimilate only the observational components that matter most, and that can be used with both stochastic and deterministic ensemble filters. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias into the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data-selection procedure allows the use of larger localization domains, which may lead to a more balanced analysis. Results from the use of this data selection technique with a two-dimensional linear and a nonlinear advection model, using both in situ and remote sounding observations, are discussed.
Abstract:
Remote sensing observations often have correlated errors, but the correlations are typically ignored in data assimilation for numerical weather prediction. The assumption of zero correlations is often used with data thinning methods, resulting in a loss of information. As operational centres move towards higher-resolution forecasting, there is a requirement to retain data providing detail on appropriate scales. Thus an alternative approach to dealing with observation error correlations is needed. In this article, we consider several approaches to approximating observation error correlation matrices: diagonal approximations, eigendecomposition approximations and Markov matrices. These approximations are applied in incremental variational assimilation experiments with a 1-D shallow water model using synthetic observations. Our experiments quantify analysis accuracy in comparison with a reference or ‘truth’ trajectory, as well as with analyses using the ‘true’ observation error covariance matrix. We show that it is often better to include an approximate correlation structure in the observation error covariance matrix than to incorrectly assume error independence. Furthermore, by choosing a suitable matrix approximation, it is feasible and computationally cheap to include error correlation structure in a variational data assimilation algorithm.
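Two of the approximations discussed above — a diagonal approximation versus a truncated eigendecomposition — can be compared directly on a correlated observation-error matrix. The Markov-type correlation ρ^|i−j| used below is a common illustrative choice for such experiments, not necessarily the article's exact matrices, and the dimensions are arbitrary.

```python
# Sketch comparing a diagonal approximation with a truncated (rank-k)
# eigendecomposition of a correlated observation-error matrix. The
# Markov-type correlation rho**|i-j| is an illustrative assumption.
import numpy as np

def markov_correlation(n, rho):
    """Correlation matrix C[i, j] = rho**|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def eig_approx(C, k):
    """Keep only the k leading eigenpairs of a symmetric matrix."""
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:k]          # indices of largest eigenvalues
    return (vecs[:, order] * vals[order]) @ vecs[:, order].T

C = markov_correlation(20, 0.8)
err_diag = np.linalg.norm(C - np.diag(np.diag(C)))  # diagonal approximation
err_eig = np.linalg.norm(C - eig_approx(C, 5))      # rank-5 eigendecomposition
```

For strongly correlated errors the low-rank approximation retains far more of the matrix (in the Frobenius sense) than assuming independence, which mirrors the article's conclusion that an approximate correlation structure beats incorrectly assuming error independence.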
Abstract:
The atmospheric circulation over the North Atlantic-European sector experienced exceptional but highly contrasting conditions in the recent 2010 and 2012 winters (November–March; with the year dated by the relevant January). Evidence is given for the remarkably different locations of the eddy-driven westerly jet over the North Atlantic. In the 2010 winter the maximum of the jet stream was systematically between 30°N and 40°N (the 'south jet regime'), while in the 2012 winter it was predominantly located around 55°N (the 'north jet regime'). These jet features underline the occurrence of either weak flow (2010) or strong and persistent ridges throughout the troposphere (2012). This is confirmed by the very different occurrence of blocking systems over the North Atlantic, associated with episodes of strong cyclonic (anticyclonic) Rossby wave breaking in the 2010 (2012) winter. These dynamical features underlie strong precipitation and temperature anomalies over parts of Europe, with detrimental impacts on many socioeconomic sectors. Despite the highly contrasting atmospheric states, mid- and high-latitude boundary conditions do not reveal strong differences in these two winters. The two winters were associated with opposite ENSO phases, but there is no causal evidence of a remote forcing from the Pacific sea surface temperatures. Finally, the exceptionality of the two winters is demonstrated in relation to the last 140 years. It is suggested that these winters may be seen as archetypes of North Atlantic jet variability under current climate conditions.
Abstract:
Full-waveform laser scanning data acquired with a Riegl LMS-Q560 instrument were used to classify an orange orchard into orange trees, grass and ground using waveform parameters alone. Gaussian decomposition was performed on data captured during the National Airborne Field Experiment in November 2006, using a custom peak-detection procedure and a trust-region-reflective algorithm for fitting Gaussian functions. Calibration was carried out using waveforms returned from a road surface, and the backscattering coefficient c was derived for every waveform peak. The processed data were then analysed according to the number of returns detected within each waveform and classified into three classes based on pulse width and c. For single-peak waveforms, the scatterplot of c versus pulse width was used to distinguish between ground, grass and orange trees. In the case of multiple returns, the relationship between first (or first plus middle) and last return c values was used to separate ground from other targets. Refinement of this classification, and further sub-classification into grass and orange trees, was performed using the c versus pulse width scatterplots of last returns. In all cases the separation was carried out using a decision tree with empirical relationships between the waveform parameters. Ground points were successfully separated from orange tree points. The most difficult class to separate and verify was grass, but those points generally corresponded well with the grass areas identified in the aerial photography. The overall accuracy reached 91%, using photography and relative elevation as ground truth. The overall accuracy for two classes, orange tree and a combined class of grass and ground, reached 95%. Finally, the backscattering coefficient c of single-peak waveforms was also used to derive reflectance values for the three classes.
The reflectance of the orange tree class (0.31) and ground class (0.60) are consistent with published values at the wavelength of the Riegl scanner (1550 nm). The grass class reflectance (0.46) falls in between the other two classes as might be expected, as this class has a mixture of the contributions of both vegetation and ground reflectance properties.
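The peak-detection stage of a Gaussian waveform decomposition like the one above can be sketched with plain NumPy: strict local maxima above a noise threshold become candidate echoes, and each echo's pulse width is read off at half maximum. The trust-region-reflective fitting step is not reproduced, and the two-echo waveform below is synthetic.

```python
# Sketch of waveform peak detection and width estimation for full-waveform
# lidar. The Gaussian least-squares fitting stage is omitted; the two-echo
# waveform (e.g. canopy return + ground return) is synthetic.
import numpy as np

def detect_peaks(w, threshold):
    """Indices of strict local maxima of w that exceed threshold."""
    i = np.arange(1, len(w) - 1)
    mask = (w[i] > w[i - 1]) & (w[i] > w[i + 1]) & (w[i] > threshold)
    return i[mask]

def fwhm(w, peak):
    """Full width at half maximum around a detected peak, in samples."""
    half = w[peak] / 2.0
    left = peak
    while left > 0 and w[left] > half:
        left -= 1
    right = peak
    while right < len(w) - 1 and w[right] > half:
        right += 1
    return right - left

t = np.arange(200.0)
wave = (1.0 * np.exp(-0.5 * ((t - 60.0) / 4.0) ** 2)
        + 0.8 * np.exp(-0.5 * ((t - 140.0) / 6.0) ** 2))
peaks = detect_peaks(wave, threshold=0.1)
```

In a real decomposition the detected peak positions, amplitudes and widths would seed the nonlinear fit, and the calibrated amplitude of each fitted Gaussian would yield the backscattering coefficient used for classification.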
Abstract:
The effect of spatial and temporal variations in the radiative damping rate on the response to an imposed forcing or diabatic heating is examined in a zonal-mean model of the middle atmosphere. Attention is restricted to the extratropics, where a linear approach is viable. It is found that regions with weak radiative damping rates are more sensitive in terms of temperature to the remote influence of the diabatic circulation. The delay in the response in such regions can mean that ‘downward’ control is not achieved on seasonal time-scales. A seasonal variation in the radiative damping rate modulates the evolution of the response and leaves a transient-like signature in the annual mean temperature field. Several idealized examples are considered, motivated by topical questions. It is found that wave drag outside the polar vortex can significantly affect the temperatures in its interior, so that high-latitude, high-altitude gravity-wave drag is not the only mechanism for warming the southern hemisphere polar vortex. Diabatic mass transport through the 100 hPa surface is found to lag the seasonal evolution of the wave drag that drives the transport, and thus cannot be considered to be in the downward control regime. On the other hand, the seasonal variation of the radiative damping rate is found to make only a weak contribution to the annual mean temperature increase that has been observed above the ozone hole. Copyright © 2002 Royal Meteorological Society.
Abstract:
Currently there are few observations of the urban wind field at heights other than rooftop level. Remote sensing instruments such as Doppler lidars provide wind speed data at many heights, which would be useful in determining wind loadings of tall buildings, and predicting local air quality. Studies comparing remote sensing with traditional anemometers carried out in flat, homogeneous terrain often use scan patterns which take several minutes. In an urban context the flow changes quickly in space and time, so faster scans are required to ensure little change in the flow over the scan period. We compare 3993 h of wind speed data collected using a three-beam Doppler lidar wind profiling method with data from a sonic anemometer (190 m). Both instruments are located in central London, UK; a highly built-up area. Based on wind profile measurements every 2 min, the uncertainty in the hourly mean wind speed due to the sampling frequency is 0.05–0.11 m s⁻¹. The lidar tended to overestimate the wind speed by ≈0.5 m s⁻¹ for wind speeds below 20 m s⁻¹. Accuracy may be improved by increasing the scanning frequency of the lidar. This method is considered suitable for use in urban areas.
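The three-beam retrieval mentioned above rests on a simple geometric fact: each beam measures the projection of the wind vector onto its pointing direction, so three non-coplanar beams give a 3×3 linear system for (u, v, w). The beam angles below are illustrative, not the instrument's actual scan geometry.

```python
# Sketch of retrieving the wind vector (u, v, w) from three Doppler-lidar
# radial velocities. Beam geometry is an illustrative assumption.
import numpy as np

def beam_unit_vector(azimuth_deg, elevation_deg):
    """Pointing direction of a beam in (east, north, up) components."""
    az, el = np.radians([azimuth_deg, elevation_deg])
    return np.array([np.cos(el) * np.sin(az),   # east
                     np.cos(el) * np.cos(az),   # north
                     np.sin(el)])               # up

def retrieve_wind(beams, radial_velocities):
    """beams: list of (azimuth, elevation) in degrees. Solves A @ wind = v_r."""
    A = np.array([beam_unit_vector(az, el) for az, el in beams])
    return np.linalg.solve(A, np.asarray(radial_velocities))

# Forward-simulate a known wind, then invert it from the three radials
beams = [(0.0, 75.0), (120.0, 75.0), (240.0, 75.0)]
true_wind = np.array([5.0, -3.0, 0.2])  # (u, v, w) in m/s
v_r = [beam_unit_vector(az, el) @ true_wind for az, el in beams]
wind = retrieve_wind(beams, v_r)
```

The retrieval is exact for a wind field that is uniform across the three beam volumes; in the heterogeneous urban flow described above, that homogeneity assumption is precisely why fast scans are needed.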
Abstract:
Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship with a stronger fit for the spatially aggregated NDVI products (R² = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R² = 0.5064). This relationship serves as a reference for evaluation of the differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions.
The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution, at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
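The basic quantity behind the variogram modeling in this abstract is the empirical semivariogram, which can be sketched for a 1-D transect: half the mean squared difference between values separated by each lag, with the sill approximated by the variance of the data. The synthetic field below (a moving average of white noise, giving a short correlation length) is an illustrative assumption, not the study's imagery.

```python
# Sketch of an empirical semivariogram for a 1-D transect. The synthetic
# field is a moving average of white noise; real NDVI imagery would be 2-D.
import numpy as np

def semivariogram(z, max_lag):
    """gamma(h) = 0.5 * mean((z[i+h] - z[i])**2) for h = 1..max_lag."""
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])

rng = np.random.default_rng(1)
# Moving average of white noise: correlated over ~10 samples, then flat
z = np.convolve(rng.normal(size=2000), np.ones(10) / 10.0, mode='valid')
gamma = semivariogram(z, 30)
sill = z.var()  # the semivariogram plateaus near the field variance
```

The lag at which gamma approaches the sill is the length scale; comparing sills and length scales across products resampled to a common grid is, in essence, the diagnostic the study applies to the multisensor NDVI records.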