802 results for Data-driven Methods
Abstract:
This thesis presents two different forms of the Born approximation for acoustic and elastic wavefields and discusses their application to the inversion of seismic data. The Born approximation is valid for small-amplitude heterogeneities superimposed on a slowly varying background. The first method is related to frequency-wavenumber migration methods. It is shown to properly recover two independent acoustic parameters, within the bandpass of the source time function of the experiment, for contrasts of about 5 percent, from data generated using an exact theory for flat interfaces. The independent determination of the two parameters is shown to depend on the angular coverage of the medium. For surface data, the impedance profile is well recovered.
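For orientation, and in our own notation rather than the thesis's, the first-order acoustic Born approximation expresses the scattered field linearly in the medium perturbation $m$:

```latex
u_{\mathrm{sc}}(\mathbf{x}_r,\omega) \;\approx\;
  \omega^{2} \int G_0(\mathbf{x}_r,\mathbf{x};\omega)\,
  m(\mathbf{x})\, u_0(\mathbf{x},\mathbf{x}_s;\omega)\,\mathrm{d}\mathbf{x}
```

where $G_0$ is the background Green's function, $u_0$ the incident field, and $\mathbf{x}_s$, $\mathbf{x}_r$ the source and receiver positions. It is this linearity in $m$ that makes inversion within the source bandpass tractable.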
The second method explored is mathematically similar to iterative tomographic methods recently introduced in the geophysical literature. Its basis is an integral relation between the scattered wavefield and the medium parameters, obtained by applying a far-field approximation to the first-order Born approximation. The Davidon-Fletcher-Powell algorithm is used since it converges faster than the steepest-descent method. It consists essentially of successive backprojections of the recorded wavefield, with angular and propagation weighting coefficients for density and bulk modulus. After each backprojection, the forward problem is computed and the residual evaluated. Each backprojection is similar to a prestack Kirchhoff migration and is therefore readily applicable to seismic data. Several reconstructions for simple point-scatterer models are performed. Recovery of the amplitudes of the anomalies is improved with successive iterations, which also sharpen the images.
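The Davidon-Fletcher-Powell step is a standard quasi-Newton update; a minimal sketch in Python, assuming a generic gradient callable (the names dfp_minimize and grad_f are ours, and the fixed step stands in for a proper line search):

```python
import numpy as np

def dfp_minimize(grad_f, x0, iters=100, step=0.05):
    """Davidon-Fletcher-Powell quasi-Newton descent (illustrative sketch).

    H approximates the inverse Hessian; each iteration steps along -H @ g
    and refines H from the observed changes in position (s) and gradient (y).
    """
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                    # initial inverse-Hessian estimate
    g = grad_f(x)
    for _ in range(iters):
        x_new = x - step * (H @ g)        # fixed step in place of a line search
        g_new = grad_f(x_new)
        s, y = x_new - x, g_new - g
        sy, Hy = s @ y, H @ y
        if sy > 1e-12 and y @ Hy > 1e-12:
            # DFP rank-two update: curvature information sharpens H, which is
            # why the method outpaces steepest descent (H fixed at identity).
            H += np.outer(s, s) / sy - np.outer(Hy, Hy) / (y @ Hy)
        x, g = x_new, g_new
    return x

# Example on a quadratic f(x) = x.T @ A @ x / 2, whose gradient is A @ x.
A = np.diag([1.0, 10.0])
x_star = dfp_minimize(lambda x: A @ x, np.ones(2))   # converges toward zero
```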
The elastic Born approximation, with the addition of a far-field approximation, is shown to correspond physically to a sum of WKBJ-asymptotic scattered rays. Four types of scattered rays enter the sum, corresponding to the P-P, P-S, S-P, and S-S pairs of incident and scattered rays. Incident rays propagate in the background medium, interacting only once with the scatterers; the scattered rays then propagate through the background medium without further interaction. An example of P-wave impedance inversion is performed on a VSP data set consisting of three offsets recorded in two wells.
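Schematically, again in our own notation, the far-field elastic Born field takes the form of a four-term ray sum,

```latex
\delta u \;\approx\; \sum_{q,r \,\in\, \{P,S\}} A_{qr}(\mathbf{x})\,
  e^{\,i\omega\left[\tau_q(\mathbf{x}_s,\mathbf{x}) + \tau_r(\mathbf{x},\mathbf{x}_r)\right]}
```

where $\tau_q$ and $\tau_r$ are background traveltimes along the incident and scattered legs and $A_{qr}$ collects the radiation-pattern and geometrical-spreading factors for each incident-scattered pair.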
Abstract:
Commercial catches taken in southwestern Australian waters by trawl fisheries targeting prawns and scallops, and by gillnet and longline fisheries targeting sharks, were sampled at different times of the year between 2002 and 2008. This sampling yielded 33 elasmobranch species representing 17 families. Multivariate statistics elucidated the ways in which the species compositions of elasmobranchs differed among fishing methods and provided benchmark data for detecting future changes in the elasmobranch fauna. Virtually all elasmobranchs caught by trawling, predominantly rays, were discarded as bycatch, as were approximately a quarter of the elasmobranchs caught by both gillnetting and longlining. The maximum lengths and lengths at maturity of four abundant bycatch species (Heterodontus portusjacksoni, Aptychotrema vincentiana, Squatina australis, and Myliobatis australis) were greater for females than for males. The L50 at maturity determined for the males of these species using full clasper calcification as the maturity criterion did not differ significantly from the corresponding L50 derived from gonadal data. The proportions of individuals of these species shorter than the length at which 50% reach maturity were far greater in trawl samples than in gillnet and longline samples, a result due to differences in gear selectivity and to trawling being undertaken in shallow inshore waters that act as nursery areas for these species. Sound quantitative data on the species compositions of elasmobranchs caught by commercial fisheries, and on the biological characteristics of the main elasmobranch bycatch species, are crucial for developing strategies for conserving these important species and thus the marine ecosystems of which they are part.
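The L50 values referred to here are conventionally obtained by fitting a logistic maturity ogive to binary maturity-at-length observations; a minimal sketch with invented data (the study's own estimation details may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def maturity_ogive(L, L50, L95):
    """Logistic proportion-mature at length L, parameterised by L50 and L95."""
    return 1.0 / (1.0 + np.exp(-np.log(19.0) * (L - L50) / (L95 - L50)))

# Invented data: lengths (cm) and maturity (1 = mature) for one species/sex.
lengths = np.array([30, 35, 40, 45, 50, 55, 60, 65, 70, 75], dtype=float)
mature  = np.array([ 0,  0,  0,  0,  1,  0,  1,  1,  1,  1], dtype=float)

(L50, L95), _ = curve_fit(maturity_ogive, lengths, mature, p0=[50.0, 65.0])
print(f"L50 = {L50:.1f} cm, L95 = {L95:.1f} cm")
```

Fitting the same ogive once with clasper calcification and once with gonadal data as the maturity criterion yields the two L50 estimates the abstract compares.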
Abstract:
In one-class classification (OCC), one class, the target class, has to be distinguished from all other possible objects, considered non-targets. This situation arises in many biomedical problems, for example in diagnosis, image-based tumor recognition, or the analysis of electrocardiogram data. In this paper an approach to OCC based on a typicality test is experimentally compared with reference state-of-the-art OCC techniques (Gaussian, mixture of Gaussians, naive Parzen, Parzen, and support vector data description) on biomedical data sets. We evaluate the procedures on twelve experimental data sets containing not necessarily continuous data. As there are few benchmark data sets for one-class classification, all data sets considered in the evaluation have multiple classes; each class in turn is treated as the target class, and the units in the other classes are treated as new units to be classified. The comparison shows the good performance of the typicality approach, which remains applicable to high-dimensional data; notably, it can be used for any kind of data (continuous, discrete, or nominal), whereas applying the state-of-the-art approaches is not straightforward when nominal variables are present.
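The evaluation protocol described here (each class of a multi-class set treated in turn as the target) is easy to reproduce; a minimal sketch using scikit-learn's OneClassSVM as a stand-in for one of the reference techniques (the typicality test itself is specific to the paper):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import OneClassSVM

# Recycle a multi-class benchmark for OCC: train only on the target class,
# then see how well targets are accepted and non-targets rejected.
X, y = load_iris(return_X_y=True)

for target in np.unique(y):
    clf = OneClassSVM(gamma="scale", nu=0.1).fit(X[y == target])
    pred = clf.predict(X)                        # +1 = accepted as target
    accept_target = np.mean(pred[y == target] == 1)
    reject_other = np.mean(pred[y != target] == -1)
    print(f"class {target}: target acceptance {accept_target:.2f}, "
          f"non-target rejection {reject_other:.2f}")
```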
Abstract:
Estimating rare events from zero-heavy data (data with many zero values) is a common challenge in fisheries science and ecology. For example, loggerhead sea turtles (Caretta caretta) and leatherback sea turtles (Dermochelys coriacea) account for less than 1% of total catch in the U.S. Atlantic pelagic longline fishery. Nevertheless, the Southeast Fisheries Science Center (SEFSC) of the National Marine Fisheries Service (NMFS) is charged with assessing the effect of this fishery on these federally protected species. Annual estimates of loggerhead and leatherback bycatch in a fishery can affect fishery management and species conservation decisions. However, current estimates have wide confidence intervals, and their accuracy is unknown. We evaluate three estimation methods, each at two spatiotemporal scales, in simulations of five spatial scenarios representing incidental capture of sea turtles by the U.S. Atlantic pelagic longline fishery. The delta-lognormal method of estimating bycatch for calendar-quarter and fishing-area strata was the least biased estimation method in the spatial scenarios believed to be most realistic. This result supports the current estimation procedure used by the SEFSC.
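The delta-lognormal estimator combines the proportion of non-zero sets with a lognormal model of the positive catches; a minimal per-stratum sketch (strata, catches, and effort figures are invented for illustration):

```python
import numpy as np

def delta_lognormal_mean(catch_per_set):
    """Delta-lognormal estimate of mean bycatch per set (illustrative sketch).

    Combines the proportion of non-zero sets with a lognormal model of the
    positive catches, which suits zero-heavy data such as rare-turtle bycatch.
    """
    catch = np.asarray(catch_per_set, dtype=float)
    positives = catch[catch > 0]
    if positives.size == 0:
        return 0.0
    p_pos = np.mean(catch > 0)              # proportion of sets with bycatch
    log_pos = np.log(positives)
    mu = log_pos.mean()
    var = log_pos.var(ddof=1) if log_pos.size > 1 else 0.0
    return p_pos * np.exp(mu + var / 2.0)   # back-transformed lognormal mean

# Stratified use: estimate per calendar-quarter/area stratum, scale by effort.
observed = {"Q1-areaA": [0, 0, 0, 1, 0, 2, 0], "Q2-areaA": [0, 0, 0, 0, 1, 0]}
total_sets = {"Q1-areaA": 1200, "Q2-areaA": 900}     # invented effort figures
total = sum(delta_lognormal_mean(c) * total_sets[k]
            for k, c in observed.items())
print(f"estimated total bycatch: {total:.0f} turtles")
```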
Abstract:
We present a gradient-based motion capture system that robustly tracks a human hand from abstracted visual information: silhouettes. Despite the ambiguity in the visual data, and despite the vulnerability of gradient-based methods in the face of such ambiguity, we minimise problems related to misfit by using a model of the hand's physiology which is entirely non-visual, subject-invariant, and assumed to be known a priori. By modelling seven distinct aspects of the hand's physiology we derive prior densities, which are incorporated into the tracking system within a Bayesian framework. We demonstrate how the posterior is formed and how our formulation leads to the extraction of the maximum a posteriori estimate using a gradient-based search. Our results demonstrate a substantial improvement in tracking precision and reliability, while also achieving near real-time performance. © 2009 IEEE.
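A minimal sketch of the MAP extraction step, assuming hypothetical callables silhouette_misfit (a negative log-likelihood from the silhouette data) and priors (negative log physiological priors); the paper's actual likelihood, priors, and analytic gradients are not reproduced here:

```python
import numpy as np

def map_estimate(theta0, silhouette_misfit, priors, lr=1e-2, iters=500):
    """Gradient-based MAP search (illustrative sketch, not the paper's code).

    The posterior over hand pose theta combines a silhouette likelihood with
    physiological priors; we minimise the negative log posterior by descent.
    """
    theta = np.asarray(theta0, dtype=float)
    eps = 1e-5

    def neg_log_post(t):
        return silhouette_misfit(t) + sum(p(t) for p in priors)

    for _ in range(iters):
        # central finite-difference gradient of the negative log posterior
        grad = np.array([
            (neg_log_post(theta + eps * e) - neg_log_post(theta - eps * e))
            / (2 * eps)
            for e in np.eye(theta.size)
        ])
        theta -= lr * grad
    return theta

# Toy usage: quadratic "misfit" plus a Gaussian-style prior pulling to zero.
theta_hat = map_estimate(
    np.array([1.0, -2.0]),
    silhouette_misfit=lambda t: float(np.sum((t - 0.5) ** 2)),
    priors=[lambda t: 0.1 * float(np.sum(t ** 2))],
)
```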
Abstract:
Data quality (DQ) assessment can be significantly enhanced by using the right DQ assessment methods, which provide automated solutions for assessing DQ. The range of DQ assessment methods is broad, from data profiling and semantic profiling to data matching and data validation. This paper gives an overview of current methods for DQ assessment and classifies them into an existing taxonomy of DQ problems. Specific examples of the placement of each DQ method in the taxonomy are provided, illustrating why the method is relevant to that taxonomy position. The gaps in the taxonomy, where no current DQ methods exist, show where new methods are required and can guide future research and DQ tool development.
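To make the method families concrete, here is a minimal sketch of the kinds of automated checks such a taxonomy classifies: column profiling for missing values, format validation, and a domain check (the column names and rules are hypothetical):

```python
import pandas as pd

# Hypothetical table with a missing value, a format violation, and a
# domain violation, checked by simple profiling/validation rules.
df = pd.DataFrame({
    "email": ["a@x.org", None, "not-an-email"],
    "age":   [34, 210, 28],
})

profile = {
    "completeness": 1.0 - df.isna().mean(),             # missing-value profiling
    "email_format_ok": df["email"].dropna().str.match(
        r"^[^@\s]+@[^@\s]+\.[^@\s]+$").mean(),          # format validation
    "age_in_domain": df["age"].between(0, 120).mean(),  # domain validation
}
for check, score in profile.items():
    print(check, score)
```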
Abstract:
DNA microarrays provide a huge amount of data and therefore require dimensionality-reduction methods to extract meaningful biological information. Independent Component Analysis (ICA) has been proposed by several authors as a promising approach. Unfortunately, experimental data are usually of poor quality because of noise, outliers, and a lack of samples. Robustness to these hurdles is thus a key feature for an ICA algorithm. This paper identifies a robust contrast function and proposes a new ICA algorithm. © 2007 IEEE.
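For orientation, a standard (non-robust) ICA decomposition of a microarray-style matrix using scikit-learn's FastICA; the robust contrast function proposed in the paper is not part of this sketch:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in expression matrix: rows are genes, columns are samples.
rng = np.random.default_rng(0)
genes, samples, components = 500, 20, 4
X = rng.normal(size=(genes, samples))

# Decompose into statistically independent expression "modes"; outliers and
# noise in real data are exactly what the paper's robust contrast addresses.
ica = FastICA(n_components=components, random_state=0)
S = ica.fit_transform(X)          # shape (500, 4): gene loadings per component
print(S.shape)
```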
Abstract:
Data in an organisation often contains business secrets that the organisation does not want to release. However, there are occasions when it is necessary to release the data, such as when outsourcing work or using the cloud for Data Quality (DQ) related tasks like data cleansing. Currently, there is no mechanism that allows organisations to release their data for DQ tasks while ensuring that business secrets are suitably protected. The aim of this paper is therefore to present our current progress in determining which methods are able to modify secret data while retaining its DQ problems. So far we have identified ways in which the data-swapping and SHA-2 hash-function alteration methods can be used to preserve the missing-data, incorrectly-formatted-value, and domain-violation DQ problems while minimising the risk of disclosing secrets. © 2012 by the AIS/ICIS Administrative Office. All rights reserved.
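A minimal sketch of the SHA-2 alteration idea, assuming the goal is to mask values while keeping the missing-data problem observable (the function name and salt are illustrative, not the paper's implementation):

```python
import hashlib
import pandas as pd

def mask_column(values: pd.Series, salt: str = "org-secret") -> pd.Series:
    """Replace secret values with SHA-256 digests while preserving missingness.

    Missing entries stay missing, so the masked data retains the missing-data
    DQ problem; identical inputs map to identical digests, so duplicates and
    referential links also survive. Illustrative sketch, not the paper's tool.
    """
    def h(v):
        if pd.isna(v):
            return v                 # keep the missing-data problem visible
        return hashlib.sha256((salt + str(v)).encode()).hexdigest()

    return values.map(h)

df = pd.DataFrame({"customer": ["Acme Ltd", None, "Acme Ltd"]})
print(mask_column(df["customer"]))   # two equal digests, one missing value
```

Note that hashing destroys value formats, so format-violation problems would need a different alteration (such as the data swapping the abstract also names) to survive masking.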