964 results for Point Data


Relevance:

30.00%

Publisher:

Abstract:

The Accelerating Moment Release (AMR) preceding earthquakes with magnitude above 5 in Australia that occurred during the last 20 years was analyzed to test the Critical Point Hypothesis. Twelve earthquakes in the catalog were chosen based on a criterion for the number of nearby events. Results show that seven sequences with numerous events recorded leading up to the main earthquake exhibited accelerating moment release. Two occurred close in time and space to other earthquakes preceded by AMR. The remaining three sequences had very few events in the catalog, so the lack of AMR detected in the analysis may be related to catalog incompleteness. Spatio-temporal scanning of AMR parameters shows that 80% of the areas in which AMR occurred experienced large events. In areas of similar background seismicity with no large events, 10 out of 12 cases exhibit no AMR, and two others are false alarms where AMR was observed but no large event followed. The relationship between AMR and the Load-Unload Response Ratio (LURR) was also studied. Both methods predict similar critical region sizes; however, the critical point time estimated from AMR is slightly earlier than the time of the critical-point LURR anomaly.
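
As a rough illustration of the kind of analysis described above, the sketch below (a minimal Python example, not the authors' code) builds a cumulative Benioff strain curve from an event catalog and quantifies acceleration with the commonly used curvature parameter C, the ratio of the RMS misfit of a power-law fit to that of a linear fit. The magnitude-energy relation and the C < 1 acceleration criterion are standard assumptions in the AMR literature rather than details taken from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def cumulative_benioff_strain(times, magnitudes):
    """Cumulative Benioff strain: running sum of sqrt(seismic energy), using the
    common magnitude-energy relation log10(E) = 1.5*M + 4.8 (an assumption here)."""
    times = np.asarray(times, dtype=float)
    energy = 10.0 ** (1.5 * np.asarray(magnitudes, dtype=float) + 4.8)
    order = np.argsort(times)
    return times[order], np.cumsum(np.sqrt(energy[order]))

def amr_curvature(times, strain, t_failure):
    """Curvature parameter C: RMS misfit of a power-law (AMR) fit divided by the
    RMS misfit of a linear fit; C < 1 is commonly read as accelerating release."""
    power_law = lambda t, A, B, m: A + B * (t_failure - t) ** m
    p0 = (strain[-1], -1.0, 0.3)                        # rough starting guesses
    popt, _ = curve_fit(power_law, times, strain, p0=p0, maxfev=10000)
    rms_power = np.sqrt(np.mean((strain - power_law(times, *popt)) ** 2))
    slope, intercept = np.polyfit(times, strain, 1)
    rms_linear = np.sqrt(np.mean((strain - (slope * times + intercept)) ** 2))
    return rms_power / rms_linear
```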

Relevance:

30.00%

Publisher:

Abstract:

Background Our aim was to calculate the global burden of disease and risk factors for 2001, to examine regional trends from 1990 to 2001, and to provide a starting point for the analysis of the Disease Control Priorities Project (DCPP). Methods We calculated mortality, incidence, prevalence, and disability-adjusted life years (DALYs) for 136 diseases and injuries, for seven income/geographic country groups. To assess trends, we re-estimated all-cause mortality for 1990 with the same methods as for 2001. We estimated mortality and disease burden attributable to 19 risk factors. Findings About 56 million people died in 2001. Of these, 10.6 million were children, 99% of whom lived in low- and middle-income countries. More than half of child deaths in 2001 were attributable to acute respiratory infections, measles, diarrhoea, malaria, and HIV/AIDS. The ten leading diseases for global disease burden were perinatal conditions, lower respiratory infections, ischaemic heart disease, cerebrovascular disease, HIV/AIDS, diarrhoeal diseases, unipolar major depression, malaria, chronic obstructive pulmonary disease, and tuberculosis. There was a 20% reduction in global disease burden per head due to communicable, maternal, perinatal, and nutritional conditions between 1990 and 2001. Almost half the disease burden in low- and middle-income countries is now from non-communicable diseases (disease burden per head in Sub-Saharan Africa and the low- and middle-income countries of Europe and Central Asia increased between 1990 and 2001). Undernutrition remains the leading risk factor for health loss. An estimated 45% of global mortality and 36% of global disease burden are attributable to the joint hazardous effects of the 19 risk factors studied. Uncertainty in all-cause mortality estimates ranged from around 1% in high-income countries to 15-20% in Sub-Saharan Africa. Uncertainty was larger for mortality from specific diseases, and for incidence and prevalence of non-fatal outcomes. Interpretation Despite uncertainties about mortality and burden of disease estimates, our findings suggest that substantial gains in health have been achieved in most populations, countered by the HIV/AIDS epidemic in Sub-Saharan Africa and setbacks in adult mortality in countries of the former Soviet Union. Our results on major disease, injury, and risk factor causes of loss of health, together with information on the cost-effectiveness of interventions, can assist in accelerating progress towards better health and reducing the persistent differentials in health between poor and rich countries.
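
For readers unfamiliar with the burden-of-disease metric used above, the following is a minimal sketch of the DALY calculation (years of life lost plus years lived with disability). It omits the age-weighting and time discounting applied in the actual GBD/DCPP estimates, and the input numbers are purely illustrative.

```python
def dalys(deaths, life_expectancy_at_death, incident_cases, disability_weight, duration):
    """Simplified DALY = YLL + YLD: years of life lost (deaths x remaining life
    expectancy) plus years lived with disability (cases x weight x duration).
    Ignores the age-weighting and discounting used in the GBD/DCPP estimates."""
    yll = deaths * life_expectancy_at_death
    yld = incident_cases * disability_weight * duration
    return yll + yld

# Illustrative numbers only: 1000 deaths with 30 remaining years each, plus 5000
# cases living 10 years with a condition of disability weight 0.2.
print(dalys(1000, 30, 5000, 0.2, 10))   # 40000.0 DALYs
```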

Relevance:

30.00%

Publisher:

Abstract:

The coupling of sandy beach aquifers with the swash zone in the vicinity of the water table exit point is investigated through simultaneous measurements of the instantaneous shoreline (swash front) location, pore pressures, and the water table exit point. The field observations reveal new insights into swash-aquifer coupling not previously gleaned from measurements of pore pressure only. In particular, for the case where the exit point is seaward of the observation point, the pore pressure response is correlated with the distance between the exit point and the shoreline: when the distance is large the rate of pressure drop is fast, and when the distance is small the rate decreases. The observations expose limitations in a simple model describing exit point dynamics which is based only on the force balance on a particle of water at the sand surface and neglects subsurface pressures. A new modified form of the model is shown to significantly improve the model-data comparison through a parameterization of the effects of capillarity into the aquifer storage coefficient. The model enables sufficiently accurate predictions of the exit point to determine when the swash uprush propagates over a saturated or a partially saturated sand surface, potentially an important factor in the morphological evolution of the beach face. Observations of the shoreward propagation of the swash-induced pore pressure waves ahead of the runup limit show that the magnitude of the pressure fluctuation decays exponentially and that there is a linear increase in time lags, behavior similar to that of tidally induced water table waves. The location of the exit point and the intermittency of wave runup events are also shown to be significant in terms of the shore-normal energy distribution. Seaward of the mean exit point location, peak energies are small because the saturated sand surface within the seepage face acts as a "rigid lid" and limits pressure fluctuations. Landward of the mean exit point the peak energies grow before decreasing landward of the maximum shoreline position.

Relevance:

30.00%

Publisher:

Abstract:

Large amounts of information can be overwhelming and costly to process, especially when transmitting data over a network. A typical modern Geographical Information System (GIS) brings all types of data together based on the geographic component of the data and provides simple point-and-click query capabilities as well as complex analysis tools. Querying a Geographical Information System, however, can be prohibitively expensive due to the large amounts of data which may need to be processed. Since the use of GIS technology has grown dramatically in the past few years, there is now a greater need than ever to provide users with the fastest and least expensive query capabilities, especially since an estimated 80% of data stored in corporate databases has a geographical component. However, not every application requires the same, high quality data for its processing. In this paper we address the issues of reducing the cost and response time of GIS queries by pre-aggregating data, compromising data accuracy and precision. We present computational issues in the generation of multi-level resolutions of spatial data and show that the problem of finding the best approximation for a given region and a real-valued function on this region, under a predictable error, is in general NP-complete.
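
The sketch below gives one hypothetical flavour of multi-level pre-aggregation: point data are summarised onto grids of increasing resolution so that coarse queries can be answered cheaply at the cost of accuracy. It is only an illustration of the general idea; the paper's own algorithm and its error model are not reproduced here, and all names and data are invented.

```python
import numpy as np

def preaggregate(points, values, extent, levels):
    """Hypothetical multi-level pre-aggregation: at each level the region is split
    into 2^level x 2^level cells and the mean attribute value per cell is stored,
    trading accuracy and precision for query speed."""
    xmin, ymin, xmax, ymax = extent
    pyramid = {}
    for level in range(levels):
        n = 2 ** level
        ix = np.clip(((points[:, 0] - xmin) / (xmax - xmin) * n).astype(int), 0, n - 1)
        iy = np.clip(((points[:, 1] - ymin) / (ymax - ymin) * n).astype(int), 0, n - 1)
        total = np.zeros((n, n))
        count = np.zeros((n, n))
        np.add.at(total, (ix, iy), values)
        np.add.at(count, (ix, iy), 1)
        with np.errstate(invalid="ignore"):
            pyramid[level] = total / count   # NaN where a cell holds no points
    return pyramid

# Usage with made-up data: 1000 random points carrying a scalar attribute.
pts = np.random.default_rng(0).uniform(0, 100, size=(1000, 2))
vals = pts[:, 0] * 0.1 + pts[:, 1] * 0.05
levels = preaggregate(pts, vals, (0, 0, 100, 100), levels=4)
```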

Relevance:

30.00%

Publisher:

Abstract:

In developing neural network techniques for real world applications it is still very rare to see estimates of confidence placed on the neural network predictions. This is a major deficiency, especially in safety-critical systems. In this paper we explore three distinct methods of producing point-wise confidence intervals using neural networks. We compare and contrast Bayesian, Gaussian Process and Predictive error bars evaluated on real data. The problem domain is concerned with the calibration of a real automotive engine management system for both air-fuel ratio determination and on-line ignition timing. This problem requires real-time control and is a good candidate for exploring the use of confidence predictions due to its safety-critical nature.
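
As a concrete illustration of point-wise error bars of the Gaussian-process kind compared in the paper, the sketch below fits a GP to synthetic engine-mapping-style data with scikit-learn and reads off a predictive mean and standard deviation. The data, kernel and 95% interval are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic two-input calibration data (e.g. normalised speed and load) -> output.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))
y = np.sin(4 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(50)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X, y)

# Point-wise predictive mean and standard deviation give the confidence band.
X_test = rng.uniform(0, 1, size=(5, 2))
mean, std = gp.predict(X_test, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # ~95% point-wise interval
```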

Relevance:

30.00%

Publisher:

Abstract:

The use of quantitative methods has become increasingly important in the study of neurodegenerative disease. Disorders such as Alzheimer's disease (AD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This article reviews the advantages and limitations of the different methods of quantifying the abundance of pathological lesions in histological sections, including estimates of density, frequency, coverage, and the use of semiquantitative scores. The major sampling methods by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are also described. In addition, the data analysis methods commonly used to analyse quantitative data in neuropathology, including analyses of variance (ANOVA) and principal components analysis (PCA), are discussed. These methods are illustrated with reference to particular problems in the pathological diagnosis of AD and dementia with Lewy bodies (DLB).
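
A minimal sketch of two of the steps mentioned above: converting quadrat counts of lesions into a density estimate, and ordinating cases with PCA. The lesion measures, quadrat area and case matrix are hypothetical and serve only to show the shape of the calculation.

```python
import numpy as np
from sklearn.decomposition import PCA

def lesion_density(counts_per_quadrat, quadrat_area_mm2):
    """Mean lesion density (lesions per mm^2) from counts in equal-sized quadrats."""
    return np.mean(counts_per_quadrat) / quadrat_area_mm2

# Hypothetical case x measure matrix, e.g. plaque, tangle and Lewy body densities
# for three cases, ordinated with PCA to look for groupings such as AD vs DLB.
measures = np.array([[12.0, 3.1, 0.0],
                     [18.5, 5.2, 0.4],
                     [ 6.2, 1.0, 2.8]])
scores = PCA(n_components=2).fit_transform(measures)
print(lesion_density([4, 7, 5, 6], quadrat_area_mm2=0.25), scores.shape)
```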

Relevance:

30.00%

Publisher:

Abstract:

The growth and advances made in computer technology have led to the present interest in picture processing techniques. When considering image data compression the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images, with the emphasis almost totally on transform coding. The transform technique adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders. Thereafter the techniques of source coding differ substantially: one technique involves zonal coding, the other threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed, first in the absence, and then in the presence, of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. Results obtained from each particular combined source and channel error protection scheme, each of which is described in full, are then presented. Comparisons are made between the schemes and indicate the best one to use given a particular channel error rate.
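
The two source-coding strategies named above can be sketched in a few lines: both transform an image block with the DCT, but zonal coding retains a fixed low-frequency zone of coefficients while threshold coding retains the largest-magnitude coefficients wherever they fall. The block size, retention fraction and zone size below are illustrative choices, not the thesis's parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def threshold_code(block, keep_fraction=0.1):
    """Threshold coding: keep only the largest-magnitude DCT coefficients."""
    coeffs = dctn(block, norm="ortho")
    thresh = np.quantile(np.abs(coeffs), 1 - keep_fraction)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return idctn(coeffs, norm="ortho")

def zonal_code(block, zone=4):
    """Zonal coding: keep only the low-frequency zone x zone corner of coefficients."""
    coeffs = dctn(block, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:zone, :zone] = 1.0
    return idctn(coeffs * mask, norm="ortho")

# Example on a random 8x8 block (real use would tile an image into such blocks).
block = np.random.default_rng(0).random((8, 8))
approx = threshold_code(block, keep_fraction=0.2)
```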

Relevance:

30.00%

Publisher:

Abstract:

Overlaying maps using a desktop GIS is often the first step of a multivariate spatial analysis. The potential of this operation has increased considerably as data sources and Web services to manipulate them are becoming widely available via the Internet. Standards from the OGC enable such geospatial ‘mashups’ to be seamless and user driven, involving discovery of thematic data. The user is naturally inclined to look for spatial clusters and ‘correlation’ of outcomes. Using classical cluster detection scan methods to identify multivariate associations can be problematic in this context, because of a lack of control on or knowledge about background populations. For public health and epidemiological mapping this limiting factor can be critical, but often the focus is on the spatial identification of risk factors associated with health or clinical status. In this article we point out that this association itself can ensure some control on the underlying populations, and develop an exploratory scan statistic framework for multivariate associations. Inference using statistical map methodologies can be used to test the clustered associations. The approach is illustrated with a hypothetical data example and an epidemiological study on community MRSA. Scenarios of potential use for online mashups are introduced, but full implementation is left for further research.
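
A highly simplified sketch of the kind of scan evaluated in the article: points carrying one attribute act as the background population, points also carrying a second attribute act as cases, and each circular window is scored with a Bernoulli likelihood ratio. Monte Carlo significance testing and the article's multivariate extensions are omitted; the function and its arguments are hypothetical.

```python
import numpy as np

def bernoulli_ll(k, m):
    """Log-likelihood of k 'cases' among m points under their empirical rate."""
    p = k / m
    return k * np.log(p + 1e-12) + (m - k) * np.log(1 - p + 1e-12)

def scan_association(coords, is_case, centres, radius):
    """Simplified circular scan for a clustered association: score each window
    with a Kulldorff-style Bernoulli likelihood ratio and return the best one."""
    C, N = int(is_case.sum()), len(is_case)
    best_centre, best_llr = None, 0.0
    for centre in centres:
        inside = np.linalg.norm(coords - centre, axis=1) <= radius
        n, c = int(inside.sum()), int(is_case[inside].sum())
        if n == 0 or n == N or c / n <= C / N:
            continue   # only windows with an elevated case proportion
        llr = bernoulli_ll(c, n) + bernoulli_ll(C - c, N - n) - bernoulli_ll(C, N)
        if llr > best_llr:
            best_centre, best_llr = centre, llr
    return best_centre, best_llr
```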

Relevance:

30.00%

Publisher:

Abstract:

Over recent years much has been learned about the way in which depth cues are combined (e.g. Landy et al., 1995). The majority of this work has used subjective measures, a rating scale or a point of subjective equality, to deduce the relative contributions of different cues to perception. We have adopted a very different approach by using two interval forced-choice (2IFC) performance measures and a signal processing framework. We performed summation experiments for depth cue increment thresholds between pairs of pictorial depth cues in displays depicting slanted planar surfaces made from arrays of circular 'contrast' elements. Summation was found to be ideal when size-gradient was paired with contrast-gradient for a wide range of depth-gradient magnitudes in the null stimulus. For a pairing of size-gradient and linear perspective, substantial summation (> 1.5 dB) was found only when the null stimulus had intermediate depth gradients; when flat or steeply inclined surfaces were depicted, summation was diminished or abolished. Summation was also abolished when one of the target cues was (i) not a depth cue, or (ii) added in conflict. We conclude that vision has a depth mechanism for the constructive combination of pictorial depth cues and suggest two generic models of summation to describe the results. Using similar psychophysical methods, Bradshaw and Rogers (1996) revealed a mechanism for the depth cues of motion parallax and binocular disparity. Whether this is the same or a different mechanism from the one reported here awaits elaboration.
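
For orientation, the dB figures quoted above are log-ratios of increment thresholds. The short sketch below converts a threshold ratio to dB and shows what the > 1.5 dB criterion corresponds to as a ratio; the example numbers are arbitrary and not taken from the study.

```python
import numpy as np

def summation_db(threshold_single, threshold_pair):
    """Summation in dB: 20*log10 of the ratio between the increment threshold
    for the better single cue and the threshold for the cue pair."""
    return 20 * np.log10(threshold_single / threshold_pair)

# A pair threshold half the single-cue threshold corresponds to ~6 dB of summation;
# the > 1.5 dB criterion quoted above is a threshold ratio of about 1.19.
print(round(summation_db(1.0, 0.5), 2))   # 6.02
print(round(10 ** (1.5 / 20), 2))         # 1.19
```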

Relevance:

30.00%

Publisher:

Abstract:

We have attempted to establish normative values for components of the magnetic evoked field to flash and pattern reversal stimuli prior to clinical use of the MEG. Full visual field, binocular evoked magnetic fields were recorded from 100 subjects 16 to 86 years of age with a single channel DC SQUID (BTI) second-order gradiometer at a point 5-6 cm above the inion. The majority of subjects showed a large positive component (outgoing magnetic field) of mean latency 115 ms (SD range 2.5-11.8 in different decades of life) to the pattern reversal stimulus. In many subjects, this P100M was preceded and succeeded by negative deflections (ingoing field). About 6% of subjects showed an inverted response, i.e. a PNP wave. Waveforms to flash were more variable in shape, with several positive components, the most consistent having a mean latency of 110 ms (SD range 6.4-23.2). Responses to both stimuli were consistent when measured on the same subject on six different occasions (SD range 4.8 to 7.3). The data suggest that norms can be established for evoked magnetic field components, in particular for the pattern reversal P100M, which could be used in the diagnosis of neuro-ophthalmological disease.

Relevance:

30.00%

Publisher:

Abstract:

Linear Programming (LP) is a powerful decision making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
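
A minimal sketch of the least-squares (normal-equations) solve that appears in each interior point iteration, using SciPy's conjugate gradient routine with a Jacobi (diagonal) preconditioner. The matrix names and the choice of preconditioner are illustrative assumptions; the thesis's own preconditioners are not reproduced here.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def solve_normal_equations(A, d, rhs):
    """Solve (A D A^T) y = rhs, the system arising at each interior point
    iteration, with a Jacobi-preconditioned conjugate gradient method.
    A: sparse constraint matrix, d: positive scaling vector (diagonal of D)."""
    ADAt = (A @ sp.diags(d) @ A.T).tocsr()
    diag = ADAt.diagonal()
    precond = LinearOperator(ADAt.shape, matvec=lambda x: x / diag)  # Jacobi
    y, info = cg(ADAt, rhs, M=precond)   # info == 0 indicates convergence
    return y, info
```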

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the development of an operational river basin water resources information management system. The river or drainage basin is the fundamental unit of the system, both in the modelling and prediction of hydrological processes and in the monitoring of the effect of catchment management policies. A primary concern of the study is the collection of sufficient, and sufficiently accurate, information to model hydrological processes. Remote sensing, in combination with conventional point source measurement, can be a valuable source of information, but is often overlooked by hydrologists due to the cost of acquisition and processing. This thesis describes a number of cost-effective methods of acquiring remotely sensed imagery, from airborne video survey to real-time ingestion of meteorological satellite data. Inexpensive micro-computer systems and peripherals are used throughout to process and manipulate the data. Spatial information systems provide a means of integrating these data with topographic and thematic cartographic data, and with historical records. For the system to have any real potential the data must be stored in a readily accessible format and be easily manipulated within the database. The design of efficient man-machine interfaces and the use of software engineering methodologies are therefore included in this thesis as a major part of the design of the system. The use of low-cost technologies, from micro-computers to video cameras, enables the introduction of water resources information management systems into developing countries, where the potential benefits are greatest.

Relevance:

30.00%

Publisher:

Abstract:

Satellite information, in combination with conventional point source measurements, can be a valuable source of information. This thesis is devoted to the spatial estimation of areal rainfall over a region using both measurements from dense and sparse networks of rain-gauges and images from meteorological satellites. A primary concern is to study the effects of such satellite-assisted rainfall estimates on the performance of rainfall-runoff models. Low-cost image processing systems and peripherals are used to process and manipulate the data. Both secondary and primary satellite images were used for analysis. The secondary data was obtained from the in-house satellite receiver and the primary data was obtained from an outside source. Ground truth data was obtained from the local Water Authority. A number of algorithms are presented that combine the satellite and conventional data sources to produce areal rainfall estimates, and the results are compared with some of the more traditional methodologies. The results indicate that the satellite cloud information is valuable in the assessment of the spatial distribution of areal rainfall, for both half-hourly and daily estimates of rainfall. It is also demonstrated how the performance of a simple multiple regression rainfall-runoff model is improved when satellite cloud information is used as a separate input in addition to rainfall estimates from conventional means. The use of low-cost equipment, from image processing systems to satellite imagery, makes it possible for developing countries to introduce such systems in areas where the benefits are greatest.
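
A toy sketch of the comparison described at the end of the abstract: a multiple regression rainfall-runoff model fitted with rainfall alone versus rainfall plus a satellite-derived cloud index as a separate input. All series and numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical daily series: gauge rainfall (mm), a satellite cloud index, runoff.
rain   = np.array([0.0, 5.2, 12.1, 3.4, 0.0, 8.8, 15.0, 1.2])
cloud  = np.array([0.1, 0.6, 0.90, 0.4, 0.2, 0.7, 0.95, 0.3])
runoff = np.array([0.4, 1.1, 3.00, 1.5, 0.6, 2.2, 3.80, 0.7])

# Rainfall-only model versus rainfall plus cloud index as a separate input.
m1 = LinearRegression().fit(rain.reshape(-1, 1), runoff)
m2 = LinearRegression().fit(np.column_stack([rain, cloud]), runoff)
print(m1.score(rain.reshape(-1, 1), runoff),
      m2.score(np.column_stack([rain, cloud]), runoff))   # R^2 of each fit
```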

Relevance:

30.00%

Publisher:

Abstract:

This thesis addresses data assimilation, which typically refers to the estimation of the state of a physical system given a model and observations, and its application to short-term precipitation forecasting. A general introduction to data assimilation is given, from both a deterministic and a stochastic point of view. Data assimilation algorithms are reviewed, first in the static case (when no dynamics are involved), then in the dynamic case. A double experiment on two non-linear models, the Lorenz 63 and the Lorenz 96 models, is run, and the comparative performance of the methods is discussed in terms of quality of the assimilation, robustness in the non-linear regime, and computational time. Following the general review and analysis, data assimilation is discussed in the particular context of very short-term rainfall forecasting (nowcasting) using radar images. An extended Bayesian precipitation nowcasting model is introduced. The model is stochastic in nature and relies on the spatial decomposition of the rainfall field into rain "cells". Radar observations are assimilated using a Variational Bayesian method in which the true posterior distribution of the parameters is approximated by a more tractable distribution. The motion of the cells is captured by a 2D Gaussian process. The model is tested on two precipitation events, the first dominated by convective showers, the second by precipitation fronts. Several deterministic and probabilistic validation methods are applied and the model is shown to retain reasonable prediction skill at up to 3 hours lead time. Extensions to the model are discussed.
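
The Lorenz 63 system mentioned above is a standard test bed for data assimilation methods. The sketch below integrates it and draws noisy synthetic observations of the kind a twin experiment would assimilate; the integration window, observation spacing and noise level are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz (1963) system, a common non-linear test bed for assimilation methods."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# A 'truth' run from which synthetic observations are drawn for assimilation tests.
truth = solve_ivp(lorenz63, (0.0, 20.0), [1.0, 1.0, 1.0],
                  t_eval=np.linspace(0.0, 20.0, 2001))
obs = truth.y[:, ::100] + np.random.default_rng(1).normal(scale=1.0, size=(3, 21))
```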

Relevance:

30.00%

Publisher:

Abstract:

Motivated by the increasing demand for, and challenges of, video streaming, in this thesis we investigate methods by which the quality of the video can be improved. We utilise overlay networks, created by implementing relay nodes, to produce path diversity, and show through analytical and simulation models in which environments path diversity can improve the packet loss probability. We take the simulation and analytical models further by implementing a real overlay network on top of PlanetLab, and show that when the network conditions remain constant the video quality received by the client can be improved. In addition, we show that in the environments where path diversity improves the video quality, forward error correction can be used to further enhance that quality. We then investigate the effect of the IEEE 802.11e Wireless LAN standard, with quality of service enabled, on the video quality received by a wireless client. We find that assigning all the video to a single class outperforms a cross-class assignment scheme proposed by other researchers. The issue of virtual contention at the access point is also examined. We increase the intelligence of our relay nodes and enable them to cache video. In order to maximise the usefulness of these caches, we introduce a measure, called the PSNR profit, and present an optimal caching method for achieving the maximum PSNR profit at the relay nodes, where partitioned video contents are stored and provide enhanced quality for the client. We also show that with the optimised cache the degradation in the video quality received by the client becomes more graceful than with a non-optimised system when the network experiences packet loss or is congested.
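
A simplified numerical sketch of why path diversity can reduce the packet loss seen by an FEC-protected stream: if a path is occasionally in a high-loss state, splitting a block over two independent paths makes it less likely that too few packets arrive to recover the block. The loss model, FEC parameters and probabilities below are illustrative assumptions, not the thesis's analytical model.

```python
from math import comb

def arrivals_dist(m, p_loss):
    """Distribution of the number of arrivals out of m packets, with independent
    per-packet loss probability p_loss."""
    return [comb(m, i) * (1 - p_loss) ** i * p_loss ** (m - i) for i in range(m + 1)]

def fec_failure_single(n, k, q_bad, p_good, p_bad):
    """FEC block failure (< k of n packets arrive) on one path that is in a
    'bad' state with probability q_bad and otherwise in a 'good' state."""
    fail = lambda p: sum(arrivals_dist(n, p)[:k])
    return q_bad * fail(p_bad) + (1 - q_bad) * fail(p_good)

def fec_failure_split(n, k, q_bad, p_good, p_bad):
    """Same block split evenly over two independent paths, each with its own state."""
    half, rest = n // 2, n - n // 2
    prob = 0.0
    for s1, p1 in ((q_bad, p_bad), (1 - q_bad, p_good)):
        for s2, p2 in ((q_bad, p_bad), (1 - q_bad, p_good)):
            d1, d2 = arrivals_dist(half, p1), arrivals_dist(rest, p2)
            prob += s1 * s2 * sum(d1[i] * d2[j]
                                  for i in range(half + 1)
                                  for j in range(rest + 1) if i + j < k)
    return prob

# With n=20, k=16, a 10% chance of a 30%-loss path and 1% loss otherwise,
# splitting the block over two paths lowers the block failure probability.
print(fec_failure_single(20, 16, 0.1, 0.01, 0.3))
print(fec_failure_split(20, 16, 0.1, 0.01, 0.3))
```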