25 results for satellite data processing

in Aston University Research Archive


Relevance: 100.00%

Abstract:

Satellite data, in combination with conventional point-source measurements, can be a valuable source of hydrological information. This thesis is devoted to the spatial estimation of areal rainfall over a region using both measurements from dense and sparse networks of rain-gauges and images from meteorological satellites. A primary concern is to study the effect of such satellite-assisted rainfall estimates on the performance of rainfall-runoff models. Low-cost image processing systems and peripherals are used to process and manipulate the data. Both primary and secondary satellite images were used in the analysis; the secondary data was obtained from the in-house satellite receiver, and the primary data from an outside source. Ground-truth data was obtained from the local Water Authority. A number of algorithms are presented that combine the satellite and conventional data sources to produce areal rainfall estimates, and the results are compared with some of the more traditional methodologies. The results indicate that satellite cloud information is valuable in assessing the spatial distribution of areal rainfall, for both half-hourly and daily rainfall estimates. It is also demonstrated how the performance of a simple multiple regression rainfall-runoff model is improved when satellite cloud information is used as a separate input in addition to rainfall estimates from conventional means. The use of low-cost equipment, from image processing systems to satellite imagery, makes it possible for developing countries to introduce such systems in areas where the benefits are greatest.
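
No code accompanies this thesis record. Purely as an illustration of the kind of multiple regression rainfall-runoff model the abstract describes, the sketch below (synthetic data, hypothetical variable names) shows how a satellite-derived cloud index might enter as a separate regressor alongside gauge-based rainfall, and how one could compare the fit with and without it:

```python
# Illustrative sketch only: a multiple regression rainfall-runoff model in
# which a satellite-derived cloud index enters as a second input alongside
# gauge-based areal rainfall. All variables and data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # half-hourly time steps
rainfall = rng.gamma(2.0, 1.5, n)          # areal rainfall from rain-gauges (mm)
cloud = np.clip(rainfall / 6 + rng.normal(0, 0.1, n), 0, 1)   # satellite cloud index
runoff = 0.6 * rainfall + 2.0 * cloud + rng.normal(0, 0.3, n) # synthetic runoff

# Design matrices: rainfall alone vs. rainfall plus satellite cloud information.
X1 = np.column_stack([np.ones(n), rainfall])
X2 = np.column_stack([np.ones(n), rainfall, cloud])

for name, X in [("rainfall only", X1), ("rainfall + cloud", X2)]:
    beta, *_ = np.linalg.lstsq(X, runoff, rcond=None)
    resid = runoff - X @ beta
    print(f"{name}: R^2 = {1 - resid.var() / runoff.var():.3f}")
```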

Relevance: 100.00%

Abstract:

Photonic technologies for data processing in the optical domain are expected to play a major role in future high-speed communications. Nonlinear effects in optical fibres have many attractive features and great, though not yet fully explored, potential for optical signal processing. Here we provide an overview of our recent advances in developing novel techniques and approaches to all-optical processing based on fibre nonlinearities.

Relevance: 100.00%

Abstract:

We present the first experimental implementation of a recently designed quasi-lossless fiber span with strongly reduced signal power excursion. The resulting fiber waveguide medium can be advantageously used both in lightwave communications and in all-optical nonlinear data processing.
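
The abstract gives no implementation details, and the toy model below is not necessarily the scheme used in the paper. It sketches a first-order bidirectional Raman pumping model with assumed fibre parameters, only to illustrate the underlying idea: distributed gain balancing the fibre loss keeps the signal power excursion far below the ~15 dB end-to-end loss of an equivalent unpumped span.

```python
# Toy model (assumed parameters): signal power evolution along a span with
# bidirectional first-order Raman pumping, integrated with an Euler step.
import numpy as np

L = 75.0                     # span length (km)
alpha_s = 0.046              # signal loss, ~0.2 dB/km in linear units (1/km)
alpha_p = 0.058              # pump loss, ~0.25 dB/km (1/km)
g_R = 0.4                    # Raman gain efficiency (1/(W km))
P_pump = 0.25                # pump power launched from each end (W)

z = np.linspace(0.0, L, 2000)
dz = z[1] - z[0]
P = np.empty_like(z)
P[0] = 1e-3                  # 0 dBm input signal

for i in range(len(z) - 1):
    # Counter-propagating pumps decay exponentially from each span end.
    pump = P_pump * (np.exp(-alpha_p * z[i]) + np.exp(-alpha_p * (L - z[i])))
    P[i + 1] = P[i] + dz * (g_R * pump - alpha_s) * P[i]

P_dB = 10 * np.log10(P / P[0])
print(f"power excursion along span: {P_dB.max() - P_dB.min():.2f} dB")
print(f"net span gain: {P_dB[-1]:.2f} dB")
print(f"(an unpumped span would lose {alpha_s * L * 4.343:.1f} dB end to end)")
```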

Relevance: 100.00%

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different parameters were tested using three fs laser systems, with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica, and their use in characterising many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed both to support the characterisation of an OCT system from phantom images and to improve the quality of the OCT images; the characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing can take from several minutes to a few hours, making data processing a significant bottleneck. One alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs); more recently, however, graphics processing unit (GPU) based methods have been developed to minimise processing and rendering time. These include standard-processing methods, i.e. the set of algorithms that process the raw interference data recorded by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system, which now processes and renders data in real time, with throughput limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and fine tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several pieces of research that are relevant not only to OCT but also more broadly: extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications, such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is useful in other fields.
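
The record describes but does not reproduce the processing chain. As a rough sketch under assumed array shapes and synthetic calibration data, the standard FD-OCT pipeline referred to above (background subtraction, k-linearisation, windowing, FFT to A-scans) looks roughly like this in NumPy; a GPU implementation such as the one described would perform the same steps, e.g. in CUDA or with an array library like CuPy:

```python
# Minimal sketch of standard FD-OCT processing: background subtraction,
# resampling onto a uniform wavenumber grid, windowing, and FFT per A-scan.
# Shapes and calibration data below are hypothetical.
import numpy as np

def process_bscan(raw, background, k_nonlinear, window=None):
    """raw: (n_ascans, n_samples) spectra from the line camera."""
    n_ascans, n_samples = raw.shape
    spectra = raw - background                    # remove DC/reference spectrum
    # k-linearisation: resample each spectrum onto a uniform wavenumber grid.
    k_uniform = np.linspace(k_nonlinear[0], k_nonlinear[-1], n_samples)
    spectra = np.stack([np.interp(k_uniform, k_nonlinear, s) for s in spectra])
    if window is None:
        window = np.hanning(n_samples)            # suppress FFT side lobes
    spectra = spectra * window
    ascans = np.abs(np.fft.fft(spectra, axis=1))[:, : n_samples // 2]
    return 20 * np.log10(ascans + 1e-12)          # log-scaled A-scans

# Synthetic demonstration data.
n_ascans, n_samples = 512, 2048
k = np.linspace(1.0, 1.2, n_samples) ** 1.01      # mildly nonlinear k sampling
raw = np.random.default_rng(1).normal(1000.0, 5.0, (n_ascans, n_samples))
bscan = process_bscan(raw, background=raw.mean(axis=0), k_nonlinear=k)
print(bscan.shape)                                # (512, 1024)
```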

Relevance: 100.00%

Abstract:

Tonal, textural and contextual properties are used in the manual photointerpretation of remotely sensed data. This study used these three attributes to produce a lithological map of semi-arid northwest Argentina by semi-automatic computer classification of remotely sensed data. Three types of satellite data were investigated: LANDSAT MSS, TM and SIR-A imagery. Supervised classification using tonal features only produced poor results: LANDSAT MSS gave classification accuracies in the range of 40 to 60%, while accuracies of 50 to 70% were achieved using LANDSAT TM data. The addition of SIR-A data increased the classification accuracy. The improvement of TM over MSS is due to the better discrimination of geological materials afforded by the middle-infrared bands of the TM sensor. The maximum likelihood classifier consistently produced classification accuracies 10 to 15% higher than either the minimum distance to means or the decision tree classifier; this improved accuracy was obtained at the cost of greatly increased processing time. A new type of classifier, the spectral shape classifier, which is computationally as fast as the minimum distance to means classifier, is described; however, its results were disappointing, being lower in most cases than those of the minimum distance or decision tree procedures. Because the classification results using only tonal features were unacceptably poor, textural attributes were investigated. Texture is an important attribute used by photogeologists to discriminate lithology. For TM data, texture measures were found to increase the classification accuracy by up to 15%, whereas for LANDSAT MSS data they provided no significant increase. For TM data, second-order texture, especially the SGLDM-based measures, produced the highest classification accuracy. Contextual post-processing was found to increase classification accuracy and to improve the visual appearance of the classified output by removing isolated misclassified pixels, which tend to clutter classified images. Simple contextual features, such as mode filters, were found to outperform more complex features such as the gravitational filter or minimal-area replacement methods; generally, the larger the filter, the greater the increase in accuracy. Production rules were used to build a knowledge-based system which used tonal and textural features to identify sedimentary lithologies in each of the two test sites; it was able to identify six out of ten lithologies correctly.
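
No code accompanies the record. The sketch below (synthetic two-band data, hypothetical class statistics) illustrates the per-class Gaussian maximum likelihood rule that the study found most accurate, assigning each pixel to the class with the highest log-likelihood:

```python
# Hedged sketch of a per-class Gaussian maximum likelihood classifier.
# Class statistics and pixel data here are synthetic, for illustration only.
import numpy as np

def ml_classify(pixels, means, covs):
    """pixels: (n, bands); means[c]: (bands,); covs[c]: (bands, bands)."""
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # Log-likelihood up to a constant: -0.5 * (log|C| + d^T C^{-1} d).
        scores.append(-0.5 * (logdet + np.einsum("ni,ij,nj->n", d, inv, d)))
    return np.argmax(scores, axis=0)   # most likely lithology class per pixel

rng = np.random.default_rng(2)
means = [np.array([50.0, 80.0]), np.array([120.0, 60.0])]
covs = [np.eye(2) * 25.0, np.eye(2) * 36.0]
pixels = np.vstack([rng.multivariate_normal(m, c, 100) for m, c in zip(means, covs)])
labels = ml_classify(pixels, means, covs)
print(np.bincount(labels))             # roughly 100 pixels per class
```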

Relevance: 100.00%

Abstract:

This thesis describes the development of an operational river-basin water resources information management system. The river or drainage basin is the fundamental unit of the system, both in the modelling and prediction of hydrological processes and in the monitoring of the effects of catchment management policies. A primary concern of the study is the collection of sufficient, and sufficiently accurate, information to model hydrological processes. Remote sensing, in combination with conventional point-source measurement, can be a valuable source of information, but is often overlooked by hydrologists due to the cost of acquisition and processing. This thesis describes a number of cost-effective methods of acquiring remotely sensed imagery, from airborne video survey to real-time ingestion of meteorological satellite data. Inexpensive micro-computer systems and peripherals are used throughout to process and manipulate the data. Spatial information systems provide a means of integrating these data with topographic and thematic cartographic data and with historical records. For the system to have any real potential, the data must be stored in a readily accessible format and be easily manipulated within the database. The design of efficient man-machine interfaces and the use of software engineering methodologies are therefore included in this thesis as a major part of the design of the system. The use of low-cost technologies, from micro-computers to video cameras, enables the introduction of water resources information management systems into developing countries, where the potential benefits are greatest.

Relevance: 90.00%

Abstract:

This paper reviews some basic issues and methods involved in using neural networks to respond in a desired fashion to a temporally varying environment. Some popular network models and training methods are introduced. A speech recognition example is then used to illustrate the central difficulty of temporal data processing: learning to notice and remember relevant contextual information. Feedforward network methods are applicable to cases where this problem is not severe. The application of these methods is explained, and applications are discussed in the areas of pure mathematics, chemical and physical systems, and economic systems. A more powerful but less practical algorithm for temporal problems, the moving targets algorithm, is sketched and discussed. For completeness, a few remarks are made on reinforcement learning.
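
As a concrete illustration of the windowing idea that makes feedforward methods applicable when only short-range context matters, here is a small self-contained sketch (toy task, illustrative network size, not the paper's own models) that converts a time series into fixed-length context windows and trains a one-hidden-layer network on next-sample prediction:

```python
# Sliding-window ("time-delay") trick: fixed-length context windows turn a
# temporal prediction task into a static one a feedforward net can learn.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(2000)
series = np.sin(0.05 * t) + 0.1 * rng.normal(size=t.size)

W = 8                                  # window length: the visible context
X = np.stack([series[i : i + W] for i in range(series.size - W)])
y = series[W:]                         # target: the next sample

# One hidden layer, trained by plain gradient descent on squared error.
H = 16
W1 = rng.normal(0, 0.3, (W, H)); b1 = np.zeros(H)
w2 = rng.normal(0, 0.3, H);      b2 = 0.0
lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W1 + b1)
    err = (h @ w2 + b2) - y
    gh = np.outer(err, w2) * (1 - h**2)          # backprop through tanh
    W1 -= lr * X.T @ gh / len(y); b1 -= lr * gh.mean(axis=0)
    w2 -= lr * h.T @ err / len(y); b2 -= lr * err.mean()
print(f"RMS prediction error: {np.sqrt((err**2).mean()):.3f}")
```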

Relevance: 90.00%

Abstract:

All-optical data processing is expected to play a major role in future optical communications. The fiber nonlinear optical loop mirror (NOLM) is a valuable tool in optical signal processing applications. This paper presents an overview of our recent advances in developing NOLM-based all-optical processing techniques for application in fiber-optic communications. The use of in-line NOLMs as a general technique for all-optical passive 2R (reamplification, reshaping) regeneration of return-to-zero (RZ) on-off keyed signals in both high-speed, ultralong-distance transmission systems and terrestrial photonic networks is reviewed. In this context, a theoretical model is presented that describes the stable propagation of carrier pulses with periodic all-optical self-regeneration in fiber systems with in-line deployment of nonlinear optical devices. A novel, simple pulse processing scheme using nonlinear broadening in normal dispersion fiber and loop mirror intensity filtering is described, and its use is demonstrated both as an optical decision element at an RZ receiver and as an in-line device realizing a transmission technique of periodic all-optical RZ to nonreturn-to-zero-like format conversion. The important issue of phase-preserving regeneration of phase-encoded signals is also addressed by presenting a new design of NOLM based on distributed Raman amplification in the loop fiber. © 2008 Elsevier Inc. All rights reserved.
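
For readers unfamiliar with the device, the behaviour that makes the NOLM useful for 2R regeneration can be seen from its textbook power transfer function, P_out = P_in * (1 - 2*rho*(1-rho)*(1 + cos((1-2*rho)*gamma*L*P_in))): low-power noise falls in the suppressed region while marks sit near a flat, power-limiting maximum. The short sketch below evaluates it with illustrative parameter values, not those of the systems reviewed:

```python
# Textbook NOLM power transfer function (illustrative parameters).
import numpy as np

rho = 0.4           # coupler power-splitting ratio
gamma = 2.0         # fibre nonlinearity (1/(W km))
L = 5.0             # loop length (km)

P_in = np.linspace(0.0, 4.0, 500)                # input peak power (W)
dphi = (1 - 2 * rho) * gamma * L * P_in          # phase difference between
                                                 # counter-propagating fields
P_out = P_in * (1 - 2 * rho * (1 - rho) * (1 + np.cos(dphi)))

switch = P_out / np.where(P_in > 0, P_in, 1)     # transmission vs. power
print(f"transmission at low power: {switch[1]:.3f}")   # ~ (1-2*rho)^2 = 0.04
i_pk = np.argmax(switch)
print(f"peak transmission {switch[i_pk]:.3f} at P_in = {P_in[i_pk]:.2f} W")
```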

Relevance: 90.00%

Abstract:

This thesis describes the development of a complete data visualisation system for large tabular databases, such as those commonly found in a business environment. A state-of-the-art 'cyberspace cell' data visualisation technique was investigated and a powerful visualisation system using it was implemented. Although it allowed databases to be explored and conclusions to be drawn, it had several drawbacks, most of them due to the three-dimensional nature of the visualisation. A novel two-dimensional generic visualisation system, known as MADEN, was then developed and implemented, based upon a 2-D matrix of 'density plots'. MADEN allows an entire high-dimensional database to be visualised in one window, while permitting close analysis in 'enlargement' windows. Selections of records can be made and examined, and dependencies between fields can be investigated in detail. MADEN was used as a tool for investigating and assessing a range of data processing algorithms: first data-reducing (clustering) methods, then dimensionality-reducing techniques. These included a new 'directed' form of principal components analysis, several novel applications of artificial neural networks, and discriminant analysis techniques which illustrated how groups within a database can be separated. To illustrate the power of the system, MADEN was used to explore customer databases from two financial institutions, resulting in a number of discoveries that would be of interest to a marketing manager. Finally, the database of results from the 1992 UK Research Assessment Exercise was analysed. MADEN allowed both universities and disciplines to be compared graphically, and supplied some startling revelations, including empirical evidence of the 'Oxbridge factor'.
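
MADEN itself is not publicly available; the sketch below only illustrates the underlying presentation idea, a matrix of pairwise 2-D density plots over the fields of a table, on synthetic data (all names and parameters are illustrative):

```python
# Illustrative density-plot matrix over the fields of a tabular dataset:
# 2-D histograms off the diagonal, 1-D histograms on it.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
n, fields = 5000, 4
data = rng.normal(size=(n, fields))
data[:, 1] += 0.8 * data[:, 0]        # build in a dependency to make visible

fig, axes = plt.subplots(fields, fields, figsize=(8, 8))
for i in range(fields):
    for j in range(fields):
        ax = axes[i, j]
        if i == j:
            ax.hist(data[:, i], bins=40)
        else:
            h, xe, ye = np.histogram2d(data[:, j], data[:, i], bins=40)
            ax.imshow(np.log1p(h).T, origin="lower", aspect="auto")
        ax.set_xticks([]); ax.set_yticks([])
plt.savefig("density_matrix.png", dpi=120)
```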

Relevance: 90.00%

Abstract:

All-optical data processing is expected to play a major role in future optical communications. Nonlinear effects in optical fibres have many attractive features and great, though not yet fully explored, potential for optical signal processing. Here, we provide an overview of our recent advances in developing novel techniques and approaches to all-optical processing based on optical fibre nonlinearities.