1000 results for Data Cube


Relevance:

70.00%

Publisher:

Abstract:

Astronomy has evolved almost exclusively through the use of spectroscopic and imaging techniques, operated separately. With the development of modern technologies, it is possible to obtain data cubes that combine both techniques simultaneously, producing images with spectral resolution. Extracting information from them can be quite complex, and hence the development of new methods of data analysis is desirable. We present a method of analysis for data cubes (data from single-field observations, containing two spatial dimensions and one spectral dimension) that uses Principal Component Analysis (PCA) to express the data in a form of reduced dimensionality, facilitating efficient information extraction from very large data sets. PCA transforms the system of correlated coordinates into a system of uncorrelated coordinates ordered by principal components of decreasing variance. The new coordinates are referred to as eigenvectors, and the projections of the data onto these coordinates produce images we call tomograms. The association of the tomograms (images) with the eigenvectors (spectra) is important for the interpretation of both. The eigenvectors are mutually orthogonal, and this property is fundamental for their handling and interpretation. When the data cube shows objects that present uncorrelated physical phenomena, the eigenvectors' orthogonality may be instrumental in separating and identifying them. By handling eigenvectors and tomograms, one can enhance features, remove noise, compress data, extract spectra, etc. We applied the method, for illustration purposes only, to the central region of the low-ionization nuclear emission region (LINER) galaxy NGC 4736, and demonstrate that it has a type 1 active nucleus, not known before. Furthermore, we show that it is displaced from the centre of its stellar bulge.
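
A minimal numpy sketch of the PCA tomography procedure described above, assuming a toy random cube; array names and sizes are illustrative only:

```python
import numpy as np

def pca_tomography(cube):
    """PCA tomography of a data cube with shape (ny, nx, nl): returns
    eigenvectors (eigenspectra), their variances, and tomograms (the
    projections of the data onto each eigenvector, reshaped to images)."""
    ny, nx, nl = cube.shape
    X = cube.reshape(ny * nx, nl).astype(float)
    X -= X.mean(axis=0)                      # subtract the mean spectrum
    # SVD of the mean-subtracted data matrix; rows of Vt are the
    # mutually orthogonal eigenvectors, ordered by decreasing variance
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    variances = s ** 2 / (X.shape[0] - 1)
    tomograms = (X @ Vt.T).T.reshape(-1, ny, nx)
    return Vt, variances, tomograms

# toy cube: 20x20 spatial pixels, 50 spectral channels
cube = np.random.rand(20, 20, 50)
eigvecs, var, toms = pca_tomography(cube)
print(var[:5] / var.sum())   # fraction of variance per component
```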

Relevance:

70.00%

Publisher:

Abstract:

* Supported partially by the Bulgarian National Science Fund under Grant MM-1405/2004

Relevance:

60.00%

Publisher:

Abstract:

As e-commerce becomes more and more popular, the number of customer reviews that a product receives grows rapidly. In order to enhance customer satisfaction and shopping experiences, it has become important to analyze customer reviews and extract the opinions they express about the products customers buy. Opinion Mining is thus becoming increasingly important, especially for analyzing and forecasting customer behavior for business purposes. The right decision in producing new products or services, based on data about customers' characteristics, means profit for the organization or company. This paper proposes a new architecture for Opinion Mining, which uses a multidimensional model to integrate customers' characteristics and their comments about products (or services). The key step to achieve this objective is to transfer comments (opinions) into a fact table that includes several dimensions, such as customer, product, time and location. This research presents a comprehensive way to calculate customers' opinion orientation for all possible product attributes.
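
Below is a minimal sketch, assuming a toy star schema, of the fact table step described above; the dimension names, the +1/-1 orientation scoring, and the sample rows are illustrative assumptions, not the paper's design:

```python
from collections import defaultdict

# Each fact row links one extracted opinion to the cube's dimensions:
# (customer_id, product_id, attribute, time, location, orientation)
fact_rows = [
    ("c1", "phone-x", "battery", "2024-01", "NY", +1),
    ("c2", "phone-x", "battery", "2024-01", "LA", -1),
    ("c1", "phone-x", "screen",  "2024-02", "NY", +1),
]

# Aggregate opinion orientation per product attribute: a simple
# roll-up over the product/attribute dimensions of the cube
totals = defaultdict(list)
for cust, prod, attr, t, loc, sign in fact_rows:
    totals[(prod, attr)].append(sign)

for key, signs in totals.items():
    print(key, sum(signs) / len(signs))   # mean orientation in [-1, 1]
```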

Relevance:

60.00%

Publisher:

Abstract:

This research proposes a multi-dimensional model for Opinion Mining, which integrates customers' characteristics and their opinions about products (or services). Customer opinions are valuable for companies seeking to deliver the right products or services to their customers. This research presents a comprehensive framework to evaluate opinion orientation based on a product's hierarchy of attributes. It also provides an alternative way to obtain opinion summaries for different groups of customers and different categories of products.
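
A hedged sketch of an opinion roll-up over a product attribute hierarchy, in the spirit of the group/category summaries described above; the hierarchy, attribute names and scores are illustrative assumptions:

```python
from collections import defaultdict

# Toy attribute hierarchy (child -> parent) and leaf opinion scores
hierarchy = {"battery life": "battery", "charge time": "battery",
             "battery": "phone", "screen": "phone"}
scores = {"battery life": +0.6, "charge time": -0.2, "screen": +0.8}

def rollup(scores, hierarchy):
    """Average child orientations into each parent node."""
    children = defaultdict(list)
    for child, parent in hierarchy.items():
        if child in scores:
            children[parent].append(scores[child])
    return {p: sum(v) / len(v) for p, v in children.items()}

print(rollup(scores, hierarchy))  # roughly {'battery': 0.2, 'phone': 0.8}
```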

Relevance:

60.00%

Publisher:

Abstract:

In this paper, through extensive study and design, a technical plan for establishing the exploration database center is developed, combining imported and self-developed techniques. Through research and repeated experiments, a modern database center has been set up, with hardware and network of advanced performance, a well-configured system, complete data storage and management, and fast, direct data support. Based on the study of decision theory, methods and models, an exploration decision assistant schema is designed, and one decision plan, the well location decision support system, has been evaluated and put into action.

1. Study on the establishment of the Shengli exploration database center. Research covers the hardware configuration of the database center, including its workstations and all connected hardware and systems. The hardware of the database center is formed by connecting workstations, microcomputer workstations, disk arrays, and the equipment used for seismic processing and interpretation. Research on data storage and management includes analysis of the contents to be managed, data flow, data standards, data QC, backup and restore policy, and optimization of the database system. A reasonable data management regulation and workflow has been established and a scientific exploration data management system created. Data loading was carried out according to a schedule; more than 200 seismic survey projects have been loaded, amounting to 25 TB.

2. Exploration work support system and its application. The seismic data processing support system has the following features: automatic extraction of seismic attributes, GIS navigation, data ordering, extraction of data cubes of any size, a pseudo huge-capacity disk array, and standard output exchange formats. Prestack data can be accessed by the processing system directly, or transferred to other processing systems through a standard exchange format. For seismic interpretation, features such as automatic scanning and storage of interpretation results and internal data quality control are provided; the interpretation system is connected directly with the database center to obtain real-time support of seismic, formation and well data. Comprehensive geological study is supported through the intranet, with the ability to query and display data graphically on the navigation system under geological constraints. The production management support system is mainly used to collect, analyze and display production data, with its core technology being controlled data collection and the creation of multiple standard forms.

3. Exploration decision support system design. By classifying the workflow and data flow of all exploration stages, and studying decision theory and methods, the target of each decision step, and the decision models and requirements, three conceptual models have been formed for the Shengli exploration decision support system: the exploration distribution support system, the well location support system, and the production management support system. The well location decision support system has passed evaluation and been put into action.

4. Technical advances. Hardware and software are matched for high performance in the database center. Parallel computer systems, database servers, a huge-capacity ATL, disk arrays, a network and a firewall are combined to create the first exploration database center in China with a reasonable configuration, high performance, and the capacity to manage the complete exploration data sets. A management technology for huge volumes of exploration data has been formed, in which exploration data standards and management regulations guarantee data quality, safety and security. A multifunction query and support system provides comprehensive exploration information support, covering geological study, seismic processing and interpretation, and production management; many new database and computer technologies are used in the system to provide real-time information support for exploration work. Finally, the Shengli exploration decision support system is designed.

5. Application and benefit. Data storage has reached 25 TB, with thousands of users in the Shengli oil field accessing data, improving work efficiency several times over. The technology has also been applied by many other units of SINOPEC. Its application in providing data to the project "Exploration Achievements and Evaluation of Favorable Targets in the Hekou Area" shortened the data preparation period from 30 days to 2 days and increased data abundance by 15 percent, with the database center providing complete information support. Its application in providing previously processed results for the project "Prestack Depth Migration in the Guxi Fracture Zone" reduced repeated processing, shortened the work period by one month, improved processing precision and quality, and saved about 30 million yuan in data processing investment. Its application in automatically providing a project database for the project "Geological and Seismic Study of the Southern Slope Zone of the Dongying Sag" shortened data preparation time so that researchers had more time for research, improving interpretation precision and quality.
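
A hedged illustration of the "extraction of data cubes of any size" feature mentioned above, assuming a toy seismic volume indexed by inline, crossline and time sample; the array, shapes and index ranges are stand-ins:

```python
import numpy as np

# Toy stand-in for a seismic volume: (inline, crossline, time sample)
volume = np.random.rand(80, 60, 150).astype(np.float32)

def extract_cube(vol, il, xl, t):
    """Return a copy of the sub-cube vol[il0:il1, xl0:xl1, t0:t1]."""
    (il0, il1), (xl0, xl1), (t0, t1) = il, xl, t
    return np.array(vol[il0:il1, xl0:xl1, t0:t1])

sub = extract_cube(volume, (10, 30), (5, 25), (40, 90))
print(sub.shape)   # (20, 20, 50)
```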

Relevance:

60.00%

Publisher:

Abstract:

Geophysical inversion studies the common mathematical and physical properties of inverse problems and the construction and appraisal of solutions in the geophysical domain, i.e. the use of physical phenomena observed at the Earth's surface to infer the spatial variation and physical property structure of the medium within the Earth. Seismic inversion is a branch of geophysical inversion. Its basic purpose is to use the laws of seismic wave propagation in the subsurface medium to infer stratal structure and the spatial distribution of physical properties from acquired, processed and interpreted data, and thereby to provide a vital foundation for exploration and development. Poststack inversion is convenient and fast, and its acoustic impedance products can reflect internal reservoir variation to a certain degree, but poststack data lack the abundant amplitude and traveltime information contained in prestack data, because stacking superimposes multiple traces and weakens the sensitivity to reservoir properties. Compared with poststack seismic inversion, prestack seismic inversion has better fidelity and more complete information. Prestack seismic inversion, including waveform inversion, is not only suitable for inverting the physical properties of thin strata; it can also invert for the oil-bearing capacity of a reservoir. Prestack seismic inversion and prestack elastic impedance inversion preserve AVO information, making full use of seismic gather data at different incidence angles, partial angle stacks, and gradient and intercept seismic data cubes. Both prestack and poststack inversion technologies are studied in this dissertation. A joint inversion method is proposed that combines prestack elastic waveform inversion, prestack elastic impedance inversion and poststack inversion, making full use of the rich information of prestack inversion and the relatively fast and stable character of poststack inversion. The proposed method extracts rock physics attribute cubes with clear physical significance that reflect reservoir characteristics, such as P-wave and S-wave impedance, P-wave and S-wave velocity, velocity ratio, density, Poisson's ratio and the Lamé constants. Taking the loose sand reservoir in the lower member of the Minghuazhen Formation, in the 32-6 south district of Qinhuangdao, as the research object, and addressing the differences between shallow loose sands and deep tight sands: first, physical property parameters suitable for this kind of heavy-oil pool were acquired through experimental study, and an initial P-wave/S-wave relational model was established; next, prestack elastic wave forward modeling and inversion research was performed, rules were summarized under the guidance of theoretical research and numerical simulation, elastic impedance inversion was performed, and rock physics attributes were calculated; finally, sand body distribution was predicted from the rock physics parameters, and favorable oil areas were predicted in combination with well-logging material, with good results.
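
As a concrete illustration of the gradient-and-intercept attributes mentioned above, here is a minimal sketch that fits AVO intercept and gradient to a synthetic angle gather using the two-term Shuey approximation R(theta) ~ A + B*sin^2(theta); the synthetic amplitudes and noise level are assumptions:

```python
import numpy as np

# Synthetic angle gather under the two-term Shuey approximation
angles = np.deg2rad(np.arange(5, 35, 5))           # incidence angles
A_true, B_true = 0.08, -0.15
refl = A_true + B_true * np.sin(angles) ** 2        # synthetic amplitudes
refl += np.random.normal(0, 0.002, refl.shape)      # measurement noise

# Least-squares fit with design matrix [1, sin^2(theta)]
G = np.column_stack([np.ones_like(angles), np.sin(angles) ** 2])
(A_est, B_est), *_ = np.linalg.lstsq(G, refl, rcond=None)
print(f"intercept={A_est:.3f}, gradient={B_est:.3f}")
```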

Relevance:

60.00%

Publisher:

Abstract:

We apply a coded aperture snapshot spectral imager (CASSI) to fluorescence microscopy. CASSI records a two-dimensional (2D) spectrally filtered projection of a three-dimensional (3D) spectral data cube. We minimize a convex quadratic function with total variation (TV) constraints to estimate the data cube from the 2D snapshot. We adapt the TV minimization algorithm for direct fluorescent bead identification from CASSI measurements by incorporating a priori knowledge of the spectra associated with each bead type. Our proposed method creates a 2D bead identity image. Simulated fluorescence CASSI measurements are used to evaluate the behavior of the algorithm. We also record real CASSI measurements of a ten-bead-type fluorescence scene and create a 2D bead identity map. A baseline image from a filtered-array imaging system verifies CASSI's 2D bead identity map.
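
A hedged numpy sketch of the TV-constrained recovery idea described above, using a simplified coded-projection model (without CASSI's dispersive shift) and plain gradient descent in place of the paper's solver; all sizes, the code pattern and the toy scene are assumptions:

```python
import numpy as np

ny, nx, nl = 16, 16, 4
rng = np.random.default_rng(0)
cube = np.zeros((ny, nx, nl)); cube[4:10, 4:10, 2] = 1.0  # toy scene
code = rng.integers(0, 2, (ny, nx)).astype(float)          # aperture code

def forward(c):   # coded projection: mask each band, sum over bands
    return (c * code[..., None]).sum(axis=2)

y = forward(cube)                                           # 2D snapshot

def tv_grad(c, eps=1e-3):  # gradient of smoothed spatial TV per band
    gx = np.diff(c, axis=0, append=c[-1:, :, :])
    gy = np.diff(c, axis=1, append=c[:, -1:, :])
    n = np.sqrt(gx**2 + gy**2 + eps)
    div = (np.diff(gx / n, axis=0, prepend=0) +
           np.diff(gy / n, axis=1, prepend=0))
    return -div

x = np.zeros_like(cube)
for _ in range(200):   # descent on 0.5*||Ax - y||^2 + lam * TV(x)
    r = forward(x) - y
    grad = r[..., None] * code[..., None] + 0.05 * tv_grad(x)
    x -= 0.05 * grad
print(np.abs(x - cube).mean())   # reconstruction error on the toy cube
```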

Relevance:

60.00%

Publisher:

Abstract:

Full-field Fourier-domain optical coherence tomography (3F-OCT) is a full-field version of spectral-domain/swept-source optical coherence tomography. A set of two-dimensional Fourier holograms is recorded at discrete wavenumbers spanning the swept-source tuning range. The resultant three-dimensional data cube contains comprehensive information on the three-dimensional spatial properties of the sample, including its morphological layout and optical scatter. The morphological layout can be reconstructed in software via three-dimensional discrete Fourier transformation. The spatial resolution of the 3F-OCT reconstructed image, however, is degraded by the presence of a phase cross-term, whose origin and effects are addressed in this paper. We present a theoretical and experimental study of the imaging performance of 3F-OCT, with particular emphasis on elimination of the deleterious effects of the phase cross-term.
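
A minimal sketch of the software reconstruction step described above, assuming placeholder hologram data: a stack of 2D Fourier holograms indexed by wavenumber is inverted with a 3D discrete Fourier transform:

```python
import numpy as np

nk, ny, nx = 64, 128, 128            # wavenumber samples x hologram size
# Stand-in complex measurements in place of recorded Fourier holograms
holograms = (np.random.rand(nk, ny, nx) +
             1j * np.random.rand(nk, ny, nx))

volume = np.fft.ifftn(holograms)     # 3D inverse DFT over (k, y, x)
intensity = np.abs(volume) ** 2      # reconstructed scatter intensity
print(intensity.shape)
```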

Relevance:

60.00%

Publisher:

Abstract:

We report a new approach to optical coherence tomography (OCT) called full-field Fourier-domain OCT (3F-OCT). A three-dimensional image of a sample is obtained by digital reconstruction of a three-dimensional data cube acquired with a Fourier holography recording system illuminated with a swept source. We present a theoretical and experimental study of the signal-to-noise ratio of the 3F-OCT approach versus the serial image acquisition (flying-spot OCT) approach.

Relevance:

60.00%

Publisher:

Abstract:

Full-field Fourier-domain optical coherence tomography (3F-OCT) is a full-field version of spectral-domain/swept-source optical coherence tomography. A set of two-dimensional Fourier holograms is recorded at discrete wavenumbers spanning the swept-source tuning range. The resultant three-dimensional data cube contains comprehensive information on the three-dimensional morphological layout of the sample, which can be reconstructed in software via a three-dimensional discrete Fourier transform. This method of recording the OCT signal confers a signal-to-noise ratio improvement in comparison with "flying-spot" time-domain OCT. The spatial resolution of the 3F-OCT reconstructed image, however, is degraded by the presence of a phase cross-term, whose origin and effects are addressed in this paper. We present a theoretical and experimental study of the imaging performance of 3F-OCT, with particular emphasis on elimination of the deleterious effects of the phase cross-term.

Relevance:

60.00%

Publisher:

Abstract:

This thesis introduces two related lines of study on the classification of hyperspectral images with nonlinear methods. First, it describes a quantitative and systematic evaluation, by the author, of each major component in a pipeline for classifying hyperspectral images (HSI) developed earlier in a joint collaboration [23]. The pipeline, with its novel use of nonlinear classification methods, has reached beyond the state of the art in classification accuracy on commonly used benchmarking HSI data [6], [13]. More importantly, it provides a clutter map, with respect to a predetermined set of classes, for real application situations where the image pixels do not necessarily fall into a predetermined set of classes to be identified, detected or classified.

The particular components evaluated are a) band selection with band-wise entropy spread, b) feature transformation with spatial filters and spectral expansion with derivatives, c) graph spectral transformation via locally linear embedding for dimension reduction, and d) statistical ensembles for clutter detection. The quantitative evaluation of the pipeline verifies that these components are indispensable for high-accuracy classification.
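
A hedged sketch of component a), band selection with band-wise entropy; the histogram-based entropy and the top-k selection rule are simplifying assumptions, not the pipeline's exact criterion:

```python
import numpy as np

def band_entropy(band, bins=64):
    """Shannon entropy of one band's intensity histogram."""
    hist, _ = np.histogram(band, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

cube = np.random.rand(32, 32, 100)          # toy HSI cube (y, x, bands)
entropies = np.array([band_entropy(cube[:, :, b])
                      for b in range(cube.shape[2])])
keep = np.argsort(entropies)[-20:]          # keep the 20 most entropic bands
reduced = cube[:, :, np.sort(keep)]
print(reduced.shape)
```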

Secondly, the work extends the HSI classification pipeline from a single HSI data cube to multiple HSI data cubes. Each cube, with feature variation, is to be classified into multiple classes. The main challenge is deriving the cube-wise classification from the pixel-wise classification. The thesis presents an initial attempt to address this and discusses the potential for further improvement.

Relevance:

60.00%

Publisher:

Abstract:

Terrestrial remote sensing imagery involves the acquisition of information from the Earth's surface without physical contact with the area under study. Among the remote sensing modalities, hyperspectral imaging has recently emerged as a powerful passive technology. This technology has been widely used in the fields of urban and regional planning, water resource management, environmental monitoring, food safety, counterfeit drug detection, oil spill and other chemical contamination detection, biological hazard prevention, and target detection for military and security purposes [2-9]. Hyperspectral sensors sample the reflected solar radiation from the Earth's surface in the portion of the spectrum extending from the visible region through the near-infrared and mid-infrared (wavelengths between 0.3 and 2.5 µm) in hundreds of narrow (of the order of 10 nm) contiguous bands [10]. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics [6]. However, this huge spectral resolution yields large amounts of data to be processed. For example, the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [11] collects a 512 (along track) X 614 (across track) X 224 (bands) X 12 (bits) data cube in 5 s, corresponding to about 140 MB. Similar data collection rates are achieved by other spectrometers [12]. Such huge data volumes put stringent requirements on communications, storage, and processing. The problem of signal subspace identification of hyperspectral data represents a crucial first step in many hyperspectral processing algorithms, such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction (DR), yielding gains in data storage and retrieval and in computational time and complexity. Additionally, DR may also improve algorithm performance, since it reduces data dimensionality without losses in the useful signal components. The computation of statistical estimates is a relevant example of the advantages of DR, since the number of samples required to obtain accurate estimates increases drastically with the dimensionality of the data (the Hughes phenomenon) [13].
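
A simplified sketch of the signal subspace identification step discussed above: the subspace dimension is estimated from the eigenvalues of the sample correlation matrix, and the data are projected onto the leading eigenvectors. The variance-threshold rule and the synthetic mixture are illustrative stand-ins, not the paper's algorithm:

```python
import numpy as np

ny, nx, nb = 32, 32, 120
rng = np.random.default_rng(1)
endmembers = rng.random((5, nb))                    # 5 signal components
abund = rng.dirichlet(np.ones(5), size=ny * nx)     # mixing coefficients
X = abund @ endmembers + 0.01 * rng.standard_normal((ny * nx, nb))

R = (X.T @ X) / X.shape[0]                          # sample correlation
eigvals, eigvecs = np.linalg.eigh(R)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # descending order
# subspace dimension: smallest k capturing 99.9% of the energy
k = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.999)) + 1
Xr = X @ eigvecs[:, :k]                             # reduced representation
print(k, Xr.shape)
```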

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a unified framework using the unit cube for the measurement, representation and use of the range of motion (ROM) of body joints with multiple degrees of freedom (d.o.f.), to be used for digital human models (DHM). Traditional goniometry needs skill and knowledge; it is intrusive and has limited applicability for multi-d.o.f. joints. Measurements using motion capture systems often involve complicated mathematics that itself needs validation. In this paper we use change of orientation as the measure of rotation; this definition does not require the identification of any fixed axis of rotation. A two-d.o.f. joint ROM can be represented as a Gaussian map. Spherical polygon representation of ROM, though popular, remains inaccurate, vulnerable to singularities on the parametric sphere, and difficult to use for point classification. The unit cube representation overcomes these difficulties. In the work presented here, electromagnetic trackers have been used effectively to measure the relative orientation of a body segment of interest with respect to another body segment. The orientation is then mapped onto a surface-gridded cube. As the body segment is moved, the grid cells visited are identified and visualized. Using the visual display as feedback, the subject is instructed to cover as many grid cells as possible. In this way we obtain a connected patch of contiguous grid cells; the boundary of this patch represents the active ROM of the joint concerned. The tracker data is converted into the motion of a direction aligned with the axis of the segment, plus a rotation about this axis. The direction identifies the grid cells on the cube, and the rotation about the axis is represented as a range and visualized using color codes. Thus the present methodology provides a simple, intuitive and accurate determination and representation of ROM for joints with up to 3 d.o.f. Basic results are presented for the shoulder. The measurement scheme to be used for the wrist and neck, and an approach for estimating the statistical distribution of ROM for a given population, are also discussed.
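
A minimal sketch of the direction-to-grid-cell mapping described above; the grid resolution and the face-indexing convention are assumptions for illustration:

```python
import numpy as np

def cube_cell(d, n=16):
    """Map a unit vector d to (face, row, col) on an n x n gridded cube."""
    d = np.asarray(d, dtype=float)
    axis = int(np.argmax(np.abs(d)))          # dominant axis picks a face
    face = 2 * axis + (0 if d[axis] > 0 else 1)
    p = d / np.abs(d[axis])                   # project onto that face
    uv = np.delete(p, axis)                   # 2D face coords in [-1, 1]
    cells = np.clip(((uv + 1) / 2 * n).astype(int), 0, n - 1)
    return face, cells[0], cells[1]

print(cube_cell([0.2, 0.9, -0.3]))   # a cell on the +y face, e.g. (2, 9, 5)
```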

Relevance:

30.00%

Publisher:

Abstract:

We address the problem of mining targeted association rules over multidimensional market-basket data. Here, each transaction has, in addition to the set of purchased items, ancillary dimension attributes associated with it. Based on these dimensions, transactions can be visualized as distributed over the cells of an n-dimensional cube. In this framework, a targeted association rule is of the form {X -> Y}_R, where R is a convex region in the cube and X -> Y is a traditional association rule within region R. We first describe the TOARM algorithm, based on classical techniques, for identifying targeted association rules. Then, we discuss the concepts of bottom-up aggregation and cubing, leading to the CellUnion technique. This approach is further extended, using notions of cube-count interleaving and credit-based pruning, to derive the IceCube algorithm. Our experiments demonstrate that IceCube consistently provides the best execution time performance, especially for large and complex data cubes.
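
A hedged sketch of the targeted-rule setting described above: transactions are restricted to a region R of the dimension cube, and ordinary support and confidence are computed inside it. The brute-force counting here stands in for TOARM/IceCube, whose optimizations the paper develops; the data and the region are toy assumptions:

```python
from itertools import combinations
from collections import Counter

transactions = [
    {"dims": {"age": 25, "city": "NY"}, "items": {"milk", "bread"}},
    {"dims": {"age": 31, "city": "NY"}, "items": {"milk", "bread", "jam"}},
    {"dims": {"age": 52, "city": "LA"}, "items": {"beer"}},
]

def in_region(d):          # region R: a convex box over the dimensions
    return 20 <= d["age"] <= 40 and d["city"] == "NY"

region_tx = [t["items"] for t in transactions if in_region(t["dims"])]
# support counts for all 1- and 2-itemsets within region R
supp = Counter(frozenset(s) for t in region_tx
               for r in (1, 2) for s in combinations(sorted(t), r))

# confidence of the rule {milk} -> {bread} within region R
conf = supp[frozenset({"milk", "bread"})] / supp[frozenset({"milk"})]
print(conf)   # 1.0 in this toy region
```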