982 results for scattered data interpolation


Relevance: 40.00%

Publisher:

Abstract:

Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential framework for inference in such projected processes is presented, where the observations are considered one at a time. We introduce a C++ library for carrying out such projected, sequential estimation which adds several novel features. In particular we have incorporated the ability to use a generic observation operator, or sensor model, to permit data fusion. We can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the variogram parameters is based on maximum likelihood estimation. We illustrate the projected sequential method in application to synthetic and real data sets. We discuss the software implementation and suggest possible future extensions.
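For concreteness, a minimal sketch of a projected (subset-of-regressors) process updated one observation at a time, in Python rather than the paper's C++ library. The RBF kernel, inducing-site layout and Gaussian noise model are illustrative assumptions; the library's generic observation operator and non-Gaussian error handling are not reproduced.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf2=1.0):
    """Squared-exponential covariance between point sets a (n,d) and b (m,d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

class ProjectedSequentialGP:
    """Reduced-rank GP: f(x) ~= k(x, Z) @ alpha, processed one datum at a time."""
    def __init__(self, Z, noise_var=0.01):
        self.Z = Z                                   # m inducing ("projection") sites
        Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
        self.mu = np.zeros(len(Z))                   # posterior mean of alpha
        self.S = np.linalg.inv(Kzz)                  # prior covariance of alpha is Kzz^-1
        self.noise_var = noise_var

    def update(self, x, y):
        """Rank-one Bayesian update for a single observation (x, y)."""
        phi = rbf(x[None, :], self.Z).ravel()        # feature row k(x, Z)
        v = self.S @ phi
        denom = self.noise_var + phi @ v
        self.mu += v * (y - phi @ self.mu) / denom
        self.S -= np.outer(v, v) / denom

    def predict(self, X):
        return rbf(X, self.Z) @ self.mu

# Toy usage: recover a 1-D function from noisy scattered samples, one at a time.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)
gp = ProjectedSequentialGP(Z=np.linspace(-3, 3, 15)[:, None])
for xi, yi in zip(X, y):
    gp.update(xi, yi)
print(gp.predict(np.array([[0.5]])))                 # should be close to sin(0.5) ~ 0.48
```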

Relevance: 40.00%

Publisher:

Abstract:

We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first one relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input early-time transient response data to the late-time response, while at the same time providing a means to both interpolate and extrapolate the frequency-domain data used. The aforementioned hybrid methodology can be considered a generalization of frequency-domain rational function fitting, which utilizes frequency-domain response data only, and of time-domain rational function fitting, which utilizes transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide for a more stable rational function fitting process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. It is shown that, with regard to the computational cost of the rational function fitting process, such an element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of different elements in the network matrix, such an approach provides for improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
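As an illustration of the core fitting step, a hedged sketch of a rational function approximation with a fixed set of trial poles, solved element-by-element as a linear least-squares problem for the residues. Vector-fitting-style codes additionally relocate the poles iteratively, which is omitted here; all names and values are illustrative, not taken from the paper.

```python
import numpy as np

def fit_rational(freqs_hz, H, poles):
    """Least-squares fit of H(s) ~ d + sum_i r_i / (s - p_i) with poles fixed.

    freqs_hz : frequency sample points (measured or simulated)
    H        : complex frequency response samples at those points
    poles    : chosen stable real poles; a full vector-fitting implementation
               would relocate these iteratively.
    """
    s = 2j * np.pi * np.asarray(freqs_hz)
    A = np.column_stack([1.0 / (s[:, None] - poles), np.ones_like(s)])
    # Stack real and imaginary parts so the unknowns come out real.
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    residues, d = x[:-1], x[-1]
    return residues, d

# Toy usage: fit a synthetic one-pole response, element-by-element style.
f = np.linspace(1e6, 1e9, 400)
s = 2j * np.pi * f
H_true = 5.0 / (s + 2e8) + 0.3
poles = -np.logspace(7, 9, 6)              # trial real poles spread over the band
r, d = fit_rational(f, H_true, poles)
H_fit = (r / (s[:, None] - poles)).sum(1) + d
print(np.max(np.abs(H_fit - H_true)))      # maximum fit error over the band
```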

Relevance: 30.00%

Publisher:

Abstract:

Emerging data streaming applications in Wireless Sensor Networks require reliable and energy-efficient transport protocols. Our recent Wireless Sensor Network deployment in the Burdekin delta, Australia, for water monitoring [T. Le Dinh, W. Hu, P. Sikka, P. Corke, L. Overs, S. Brosnan, Design and deployment of a remote robust sensor network: experiences from an outdoor water quality monitoring network, in: Second IEEE Workshop on Practical Issues in Building Sensor Network Applications (SenseApp 2007), Dublin, Ireland, 2007] is one such example. This application involves streaming sensed data such as pressure, water flow rate, and salinity periodically from many scattered sensors to the sink node, which in turn relays them via an IP network to a remote site for archiving, processing, and presentation. While latency is not a primary concern in this class of application (the sampling rate is usually in terms of minutes or hours), energy-efficiency is. Continuous long-term operation and reliable delivery of the sensed data to the sink are also desirable. This paper proposes ERTP, an Energy-efficient and Reliable Transport Protocol for Wireless Sensor Networks. ERTP is designed for data streaming applications, in which sensor readings are transmitted from one or more sensor sources to a base station (or sink). ERTP uses a statistical reliability metric which ensures that the number of data packets delivered to the sink exceeds a defined threshold. Our extensive discrete event simulations and experimental evaluations show that ERTP is significantly more energy-efficient than current approaches, reducing energy consumption by more than 45%. Consequently, sensor nodes are more energy-efficient and the lifespan of the unattended WSN is increased.
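A sketch of the kind of statistical reliability computation the abstract describes: assuming independent Bernoulli packet losses (an assumption made here, not stated in the abstract), the smallest per-hop transmission limit achieving a target end-to-end delivery probability can be found directly. Function names and numbers are illustrative, not taken from ERTP.

```python
def min_retransmissions(p_loss, n_hops, target):
    """Smallest per-hop transmission limit R such that, assuming independent
    Bernoulli losses, the end-to-end delivery probability
        P(delivered) = (1 - p_loss**R) ** n_hops
    meets the target reliability (e.g. 0.95)."""
    for R in range(1, 100):
        if (1.0 - p_loss ** R) ** n_hops >= target:
            return R
    raise ValueError("target unreachable within 100 transmissions per hop")

# E.g. 10% per-hop loss over a 6-hop path, 95% delivery target:
print(min_retransmissions(p_loss=0.10, n_hops=6, target=0.95))  # -> 3
```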

Relevance: 30.00%

Publisher:

Abstract:

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal, as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, such a medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
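As a hedged illustration of the oscillation-counting thermometry described above, the sketch below counts fringes in a recorded intensity trace and scales by a per-fringe calibration constant. The constant and signal are synthetic stand-ins, not LiNbO3 measurements.

```python
import numpy as np
from scipy.signal import find_peaks

def temperature_change(intensity, dT_per_fringe):
    """Estimate the temperature change of a birefringent crystal by counting
    intensity oscillations (fringes) in a transmitted-beam record.

    dT_per_fringe is a calibration constant (kelvin per oscillation) set by
    the crystal's birefringence, thermo-optic response and thickness; the
    value used below is purely illustrative, not a LiNbO3 datum."""
    peaks, _ = find_peaks(intensity, prominence=0.1 * np.ptp(intensity))
    return len(peaks) * dT_per_fringe

# Synthetic record containing about 8 fringes plus detector noise.
t = np.linspace(0, 1, 2000)
trace = np.cos(2 * np.pi * 8 * t) + 0.05 * np.random.default_rng(1).standard_normal(t.size)
print(temperature_change(trace, dT_per_fringe=1.2))   # ~ 8 fringes * 1.2 K
```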

Relevance: 30.00%

Publisher:

Abstract:

The use of stable isotope ratios δ18O and δ2H is well established in the assessment of groundwater systems and their hydrology. The conventional approach is based on x/y plots and relation to various MWLs, and plots of either ratio against parameters such as Cl or EC. An extension of interpretation is the use of 2D maps and contour plots, and 2D hydrogeological vertical sections. An enhancement of presentation and interpretation is the production of "isoscapes", usually as 2.5D surface projections. We have applied groundwater isotopic data to a 3D visualisation, using the alluvial aquifer system of the Lockyer Valley. The 3D framework is produced in GVS (Groundwater Visualisation System). This format enables enhanced presentation by displaying the spatial relationships and allowing interpolation between "data points", i.e. borehole screened zones where groundwater enters. The relative variations in the δ18O and δ2H values are similar in these ambient temperature systems. However, δ2H better reflects hydrological processes, whereas δ18O also reflects aquifer/groundwater exchange reactions. The 3D model has the advantage that it displays borehole relations to spatial features, enabling isotopic ratios and their values to be associated with, for example, bedrock groundwater mixing, interaction between aquifers, relation to stream recharge, and near-surface and return irrigation water evaporation. Some specific features are also shown, such as zones of leakage of deeper groundwater (in this case with a GAB signature). Variations in the source of recharging water at a catchment scale can be displayed. Interpolation between bores is not always possible, depending on bore numbers and spacing, and on the elongate configuration of the alluvium. In these cases, the visualisation uses discs around the screens that can be manually expanded to test extent or intersections. Separate displays are used for each of δ18O and δ2H, with colour coding for isotope values.
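The interpolation scheme inside GVS is not specified here, so the following is a generic inverse-distance-weighting sketch for estimating an isotope ratio at an arbitrary 3D location from scattered borehole screen values; coordinates and δ2H values are invented for illustration.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate of scattered data (e.g. d2H measured
    at borehole screen midpoints) at arbitrary 3-D query locations. A generic
    scheme, not the interpolant actually used inside GVS."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)             # eps guards against zero distance
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Toy usage: five screens with d2H values (permil), one mid-aquifer query point.
screens = np.array([[0., 0., -12.], [50., 10., -15.], [20., 40., -9.],
                    [80., 70., -20.], [35., 25., -14.]])
d2H = np.array([-28.1, -30.5, -26.9, -33.2, -29.4])
print(idw(screens, d2H, np.array([[30., 30., -13.]])))
```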

Relevance: 30.00%

Publisher:

Abstract:

Purpose: Intensity modulated radiotherapy (IMRT) treatments require more beam-on time and produce more linac head leakage than conventional, unmodulated radiotherapy treatments delivering similar doses. It is necessary to take this increased leakage into account when evaluating the results of radiation surveys around bunkers that are, or will be, used for IMRT. The recommended procedure of applying a monitor-unit based workload correction factor to secondary barrier survey measurements, to account for this increased leakage, can lead to potentially costly overestimation of the required barrier thickness. This study aims to provide initial guidance on the validity of reducing the value of the correction factor when applied to different radiation barriers (primary barriers, doors, maze walls and other walls) by evaluating three different bunker designs. Methods: Radiation survey measurements of primary, scattered and leakage radiation were obtained at each of five survey points around each of three different radiotherapy bunkers, and the contribution of leakage to the total measured radiation dose at each point was evaluated. Measurements at each survey point were made with the linac gantry set to 12 equidistant positions from 0° to 330°, to assess the effects of radiation beam direction on the results. Results: For all three bunker designs, less than 0.5% of the dose measured at and alongside the primary barriers, less than 25% of the dose measured outside the bunker doors, and up to 100% of the dose measured outside other secondary barriers was found to be caused by linac head leakage. Conclusions: Results of this study suggest that IMRT workload corrections are unnecessary for survey measurements made at and alongside primary barriers. Use of reduced IMRT workload correction factors is recommended when evaluating survey measurements around a bunker door, provided that a subset of the measurements used in this study is repeated for the bunker in question. Reduction of the correction factor for other secondary barrier survey measurements is not recommended unless the contribution from leakage is separately evaluated.

Relevance: 30.00%

Publisher:

Abstract:

Staphylococcus aureus (S. aureus) is a prominent human and livestock pathogen investigated widely using omic technologies. Critically, due to limited availability, low visibility or scattered resources, robust network and statistical contextualisation of the resulting data is generally under-represented. Here, we present novel meta-analyses of freely accessible molecular network and gene ontology annotation information resources for S. aureus omics data interpretation. Furthermore, through the application of the gene ontology annotation resources we demonstrate their value and ability (or lack thereof) to summarise and statistically interpret the emergent properties of gene expression and protein abundance changes using publicly available data. This analysis provides simple metrics for network selection and demonstrates the availability and impact that gene ontology annotation selection can have on the contextualisation of bacterial omics data.

Relevance: 30.00%

Publisher:

Abstract:

Species distribution models (SDMs) are considered to exemplify pattern-based rather than process-based models of a species' response to its environment. Hence, when used to map species distribution, the purpose of SDMs can be viewed as interpolation: species response is measured at a few sites in the study region, and the aim is to interpolate species response at intermediate sites. Increasingly, however, SDMs are also being used to extrapolate species-environment relationships beyond the limits of the study region as represented by the training data. Regardless of whether SDMs are used for interpolation or extrapolation, the debate over how to implement them focuses on evaluating the quality of the SDM, both ecologically and mathematically. This paper proposes a framework that includes useful tools previously employed to address uncertainty in habitat modelling. Together with existing frameworks for addressing uncertainty more generally when modelling, we outline how these existing tools help inform development of a broader framework for addressing uncertainty specifically when building habitat models. As discussed earlier, we focus on extrapolation rather than interpolation, where the emphasis on predictive performance is diluted by concerns for robustness and ecological relevance. We are cognisant of the dangers of excessively propagating uncertainty. Thus, although the framework provides a smorgasbord of approaches, it is intended that the exact menu selected for a particular application be small in size and target the most important sources of uncertainty. We conclude with some guidance on a strategic approach to identifying these important sources of uncertainty. Whilst various aspects of uncertainty in SDMs have previously been addressed, either as the main aim of a study or as a necessary element of constructing SDMs, this is the first paper to provide a more holistic view.

Relevance: 30.00%

Publisher:

Abstract:

This paper presents a validation study on the application of a novel interslice interpolation technique for musculoskeletal structure segmentation of articulated joints and muscles in human magnetic resonance imaging data. The interpolation technique is based on morphological shape-based interpolation combined with intensity-based voxel classification. Shape-based interpolation in the absence of the original intensity image has been investigated intensively. However, in some applications of medical image analysis, the intensity image of the slice to be interpolated is available. For example, when manual segmentation is conducted on selected slices, the segmentation of the unselected slices can be obtained by interpolation. We propose a two-step interpolation method that utilizes both the shape information in the manual segmentation and local intensity information in the image. The method was tested on segmentations of knee, hip and shoulder joint bones and hamstring muscles, and the results were compared with two existing interpolation methods. Based on the calculated Dice similarity coefficient and normalized error rate, the proposed method outperformed the other two methods.
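A minimal sketch of the morphological half of such a method: classic shape-based interpolation that blends signed distance maps of two segmented slices and re-thresholds. The paper's second, intensity-based voxel classification step is not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the structure, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slice(mask_a, mask_b, t=0.5):
    """Shape-based interpolation between two segmented slices: blend the
    signed distance maps and re-threshold at zero. Covers only the
    morphological first step of a two-step shape + intensity method."""
    sd = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sd >= 0.0

# Toy usage: two offset discs; the midway slice is a disc in between.
yy, xx = np.mgrid[0:64, 0:64]
a = (xx - 24) ** 2 + (yy - 32) ** 2 < 10 ** 2
b = (xx - 40) ** 2 + (yy - 32) ** 2 < 14 ** 2
mid = interpolate_slice(a, b, t=0.5)
```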

Relevance: 30.00%

Publisher:

Abstract:

Contamination of urban streams is a rising topic worldwide, but the assessment and investigation of stormwater-induced contamination is limited by the large amount of water quality data needed to obtain reliable results. In this study, stream bed sediments were studied to determine their degree of contamination and their applicability in monitoring aquatic metal contamination in urban areas. The interpretation of sedimentary metal concentrations is, however, not straightforward, since the concentrations commonly show spatial and temporal variations in response to natural processes. The variations of, and controls on, metal concentrations were examined at different scales to increase understanding of the usefulness of sediment metal concentrations in detecting anthropogenic metal contamination patterns. The acid extractable concentrations of Zn, Cu, Pb and Cd were determined from the surface sediments and water of small streams in the Helsinki Metropolitan region, southern Finland. The data consist of two datasets: sediment samples from 53 sites located in the catchment of the Stream Gräsanoja, and sediment and water samples from 67 independent catchments scattered around the metropolitan region. Moreover, the sediment samples were analyzed for their physical and chemical composition (e.g. total organic carbon, clay-%, Al, Li, Fe, Mn) and the speciation of metals (in the dataset of the Stream Gräsanoja). The metal concentrations revealed that the stream sediments were moderately contaminated and caused no immediate threat to the biota. However, at some sites the sediments appeared to be polluted with Cu or Zn. The metal concentrations increased with increasing intensity of urbanization, but site-specific factors, such as point sources, were responsible for the occurrence of the highest metal concentrations. The sediment analyses thus revealed a need for more detailed studies on the processes and factors that cause the hot spot metal concentrations. The sediment composition and metal speciation analyses indicated that organic matter is a very strong indirect control on metal concentrations, and it should be accounted for when studying anthropogenic metal contamination patterns. The fine-scale spatial and temporal variations of metal concentrations were low enough to allow meaningful interpretation of substantial metal concentration differences between sites. Furthermore, the metal concentrations in the stream bed sediments correlated better with the urbanization of the catchment than did the total metal concentrations in the water phase. These results suggest that stream sediments show true potential for wider use in detecting spatial differences in the metal contamination of urban streams. Consequently, using the sediment approach, regional estimates of stormwater-related metal contamination could be obtained fairly cost-effectively, and the stability and reliability of the results would be higher compared to analyses of single water samples. Nevertheless, water samples remain essential for analysing the dissolved concentrations of metals, and momentary discharges from point sources in particular.

Relevance: 30.00%

Publisher:

Abstract:

Two methods based on wavelet/wavelet packet expansion to denoise and compress optical tomography data containing scattered noise are presented. In the first, the wavelet expansion coefficients of noisy data are shrunk using a soft threshold. In the second, the data are expanded into a wavelet packet tree upon which a best-basis search is done, and the resulting coefficients are truncated on the basis of energy content. The first method results in efficient denoising of experimental data when the scattering particle density in the medium surrounding the object is up to 12.0 × 10^6 per cm^3, and achieves a compression ratio of approximately 8:1. The wavelet packet based method results in a compression of up to 11:1 and also exhibits reasonable noise reduction capability. Tomographic reconstructions obtained from denoised data are presented.
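A hedged sketch of the first method using PyWavelets: expand, soft-threshold the detail coefficients, reconstruct. The universal threshold with a robust noise estimate is one common choice, not necessarily the exact rule used in the paper; the wavelet-packet variant would use pywt.WaveletPacket with a best-basis search instead.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Wavelet shrinkage denoising: decompose, soft-threshold the detail
    coefficients, reconstruct. Threshold rule is the common 'universal'
    choice with a median-based noise estimate (an assumption here)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # robust noise scale
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))     # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(signal)]

# Toy usage: a noisy Gaussian profile standing in for projection data.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 512)
clean = np.exp(-((x - 0.5) / 0.1) ** 2)
noisy = clean + 0.1 * rng.standard_normal(x.size)
denoised = wavelet_denoise(noisy)
```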

Relevance: 30.00%

Publisher:

Abstract:

Data mining is concerned with analysing large volumes of (often unstructured) data to automatically discover interesting regularities or relationships which in turn lead to better understanding of the underlying processes. The field of temporal data mining is concerned with such analysis in the case of ordered data streams with temporal interdependencies. Over the last decade many interesting techniques of temporal data mining were proposed and shown to be useful in many applications. Since temporal data mining brings together techniques from different fields such as statistics, machine learning and databases, the literature is scattered among many different sources. In this article, we present an overview of techniques of temporal data mining. We mainly concentrate on algorithms for pattern discovery in sequential data streams. We also describe some recent results regarding statistical analysis of pattern discovery methods.

Relevance: 30.00%

Publisher:

Abstract:

We develop iterative diffraction tomography algorithms, which are similar to the distorted Born algorithms, for inverting scattered intensity data. Within the Born approximation, the unknown scattered field is expressed as a multiplicative perturbation to the incident field. With this, the forward equation becomes stable, which helps us compute nearly oscillation-free solutions that have immediate bearing on the accuracy of the Jacobian computed for use in a deterministic Gauss-Newton (GN) reconstruction. However, since the data are inherently noisy and the sensitivity of measurement to refractive index away from the detectors is poor, we report a derivative-free evolutionary stochastic scheme, providing strictly additive updates in order to bridge the measurement-prediction misfit, to arrive at the refractive index distribution from intensity transport data. The superiority of the stochastic algorithm over the GN scheme for similar settings is demonstrated by the reconstruction of the refractive index profile from simulated and experimentally acquired intensity data.
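For reference, a generic damped Gauss-Newton loop of the kind the deterministic reconstruction uses; the forward model below is a toy stand-in, not a diffraction tomography operator, and the damping and iteration counts are illustrative.

```python
import numpy as np

def gauss_newton(forward, jacobian, x0, data, lam=1e-3, iters=20):
    """Generic damped Gauss-Newton iteration: linearise the forward model,
    solve the regularised normal equations, update. The stochastic scheme in
    the abstract replaces this Jacobian-based step with derivative-free,
    strictly additive updates."""
    x = x0.astype(float)
    for _ in range(iters):
        r = data - forward(x)                        # measurement-prediction misfit
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), J.T @ r)
        x += dx
    return x

# Toy usage on a small nonlinear model y = exp(A @ x).
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 5))
x_true = 0.3 * rng.standard_normal(5)
y = np.exp(A @ x_true)
fwd = lambda x: np.exp(A @ x)
jac = lambda x: A * np.exp(A @ x)[:, None]           # d/dx exp(Ax) = diag(exp(Ax)) A
x_rec = gauss_newton(fwd, jac, np.zeros(5), y)
print(np.max(np.abs(x_rec - x_true)))                # should be near zero
```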

Relevance: 30.00%

Publisher:

Abstract:

This thesis presents two different forms of the Born approximation for acoustic and elastic wavefields and discusses their application to the inversion of seismic data. The Born approximation is valid for small-amplitude heterogeneities superimposed on a slowly varying background. The first method is related to frequency-wavenumber migration methods. It is shown to properly recover two independent acoustic parameters, within the bandpass of the source time function of the experiment, for contrasts of about 5 percent from data generated using an exact theory for flat interfaces. The independent determination of the two parameters is shown to depend on the angle coverage of the medium. For surface data, the impedance profile is well recovered.
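For reference, the standard first-order Born integral that such inversions start from, in notation chosen here rather than taken from the thesis:

```latex
% Writing the total field as u = u_0 + u_1, with u_0 the field in the slowly
% varying background, G_0 the background Green's function, and V(r') the small
% perturbation in the medium parameters (density and bulk modulus):
\[
  u_1(\mathbf{r}) \;=\; \int G_0(\mathbf{r},\mathbf{r}')\, V(\mathbf{r}')\,
  u_0(\mathbf{r}')\, d^3 r' ,
  \qquad \text{valid while } \lvert u_1 \rvert \ll \lvert u_0 \rvert .
\]
```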

The second method explored is mathematically similar to iterative tomographic methods recently introduced in the geophysical literature. Its basis is an integral relation between the scattered wavefield and the medium parameters, obtained by applying a far-field approximation to the first-order Born approximation. The Davidon-Fletcher-Powell algorithm is used since it converges faster than the steepest descent method. It consists essentially of successive backprojections of the recorded wavefield, with angular and propagation weighting coefficients for density and bulk modulus. After each backprojection, the forward problem is computed and the residual evaluated. Each backprojection is similar to a before-stack Kirchhoff migration and is therefore readily applicable to seismic data. Several examples of reconstruction for simple point scatterer models are performed. Recovery of the amplitudes of the anomalies is improved with successive iterations, and iterations also improve the sharpness of the images.
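A minimal sketch of the Davidon-Fletcher-Powell update named above, wrapped around a generic misfit functional; the actual inversion applies this kind of iteration to wavefield backprojections, which are not reproduced here.

```python
import numpy as np

def dfp_minimize(f, grad, x0, iters=50):
    """Minimal Davidon-Fletcher-Powell quasi-Newton loop with Armijo
    backtracking; H approximates the inverse Hessian of the misfit."""
    x, H = x0.astype(float), np.eye(x0.size)
    g = grad(x)
    for _ in range(iters):
        d = -H @ g                              # quasi-Newton search direction
        step = 1.0
        while f(x + step * d) > f(x) + 1e-4 * step * (g @ d):
            step *= 0.5                         # backtrack until sufficient decrease
            if step < 1e-12:
                return x
        s = step * d
        g_new = grad(x + s)
        y = g_new - g
        if s @ y > 1e-12:                       # DFP rank-two inverse-Hessian update
            H += np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        x, g = x + s, g_new
    return x

# Toy usage: quadratic misfit with minimiser at (1, -2).
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
target = np.array([1.0, -2.0])
misfit = lambda x: 0.5 * (x - target) @ Q @ (x - target)
gradient = lambda x: Q @ (x - target)
print(dfp_minimize(misfit, gradient, np.zeros(2)))   # ~ [1.0, -2.0]
```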

The elastic Born approximation, with the addition of a far-field approximation, is shown to correspond physically to a sum of WKBJ-asymptotic scattered rays. Four types of scattered rays enter the sum, corresponding to P-P, P-S, S-P and S-S pairs of incident-scattered rays. Incident rays propagate in the background medium, interacting only once with the scatterers; scattered rays likewise propagate as if in the background medium, with no further interaction with the scatterers. An example of P-wave impedance inversion is performed on a VSP data set consisting of three offsets recorded in two wells.