855 results for Automatic Data Processing.


Relevance: 90.00%

Abstract:

Programming for parallel architectures that do not have a shared address space is extremely difficult due to the need for explicit communication between memories of different compute devices. A heterogeneous system with CPUs and multiple GPUs, or a distributed-memory cluster, are examples of such systems. Past works that try to automate data movement for distributed-memory architectures can lead to excessive redundant communication. In this paper, we propose an automatic data movement scheme that minimizes the volume of communication between compute devices in heterogeneous and distributed-memory systems. We show that by partitioning data dependences in a particular non-trivial way, one can generate data movement code that results in the minimum volume for a vast majority of cases. The techniques are applicable to any sequence of affine loop nests and work on top of any choice of loop transformations, parallelization, and computation placement. The data movement code generated minimizes the volume of communication for that particular configuration. We use a combination of powerful static analyses relying on the polyhedral compiler framework and lightweight runtime routines they generate to build a source-to-source transformation tool that automatically generates communication code. We demonstrate that the tool is scalable and leads to substantial gains in efficiency. On a heterogeneous system, the communication volume is reduced by a factor of 11X to 83X over state-of-the-art, translating into a mean execution time speedup of 1.53X. On a distributed-memory cluster, our scheme reduces the communication volume by a factor of 1.4X to 63.5X over state-of-the-art, resulting in a mean speedup of 1.55X. In addition, our scheme yields a mean speedup of 2.19X over hand-optimized UPC codes.
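
The core idea, sending each value only once from its last writer to the devices that actually read it, can be illustrated with a deliberately simplified sketch; the polyhedral analysis of the paper is replaced here by plain Python dictionaries, and the device/element maps and function names are hypothetical:

```python
# Illustrative only: builds a non-redundant communication plan for one
# synchronization point, assuming we already know, per data element, its last
# writer device and the devices that will read it next. The paper derives such
# sets statically with polyhedral analysis; here they are plain dicts.

def communication_plan(last_writer, readers):
    """last_writer: {element: device}, readers: {element: set(devices)}.
    Returns {(src, dst): set(elements)} where each element is sent at most once
    to each reading device, and never to the device that produced it."""
    plan = {}
    for elem, src in last_writer.items():
        for dst in readers.get(elem, set()) - {src}:
            plan.setdefault((src, dst), set()).add(elem)
    return plan

if __name__ == "__main__":
    last_writer = {("A", 0): "gpu0", ("A", 1): "gpu0", ("A", 2): "gpu1"}
    readers = {("A", 0): {"gpu1"}, ("A", 1): {"gpu0", "gpu1"}, ("A", 2): {"gpu1"}}
    for (src, dst), elems in communication_plan(last_writer, readers).items():
        print(f"{src} -> {dst}: {sorted(elems)}")
```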

Relevance: 90.00%

Abstract:

Based on a computer-integrated, flexible laser processing system, an intelligent measuring sub-system is developed. A novel model is built to compensate for deviations of the main frame, and a newly developed 3-D laser tracker system is applied to adjust the accuracy of the system. The characteristics of the various automobile dies that are the main processing objects of the laser processing system are analyzed in order to classify the types of surface and border that need to be measured and processed. According to the different types of surface and border, a 2-D adaptive measuring method based on Bézier curves and a 3-D adaptive measuring method based on spline curves are developed. For the data processing, a new 3-D probe compensation method is described in detail. Measuring experiments and laser processing experiments are carried out to verify the methods. All the methods have been applied in the computer-integrated, flexible laser processing system developed by the Institute of Mechanics, CAS.
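
As a rough illustration of the curve primitive on which such a 2-D adaptive measuring method rests (not the authors' implementation, which also includes the adaptive sampling logic), a point on a Bézier curve can be evaluated with de Casteljau's algorithm:

```python
# Minimal de Casteljau evaluation of a 2-D Bézier curve; the adaptive
# measuring method builds on such curves, but the adaptation logic is omitted.
from typing import List, Tuple

def de_casteljau(ctrl: List[Tuple[float, float]], t: float) -> Tuple[float, float]:
    """Evaluate a Bézier curve with control points `ctrl` at parameter t in [0, 1]."""
    pts = [list(p) for p in ctrl]
    n = len(pts)
    for r in range(1, n):
        for i in range(n - r):
            pts[i][0] = (1 - t) * pts[i][0] + t * pts[i + 1][0]
            pts[i][1] = (1 - t) * pts[i][1] + t * pts[i + 1][1]
    return tuple(pts[0])

# Example: sample a cubic Bézier at five parameter values.
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
print([de_casteljau(ctrl, t / 4) for t in range(5)])
```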

Relevance: 90.00%

Abstract:

Radio Frequency Identification (RFID) technology allows automatic data capture from tagged objects moving in a supply chain. This data can be very useful if it is used to answer traceability queries; however, it is distributed across many different repositories owned by different companies. © 2012 IEEE.
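
A toy sketch of what answering a traceability query over such distributed repositories can look like; the repository layout, event fields and helper names below are invented for illustration and are not the architecture of the paper:

```python
# Hypothetical: each company exposes its own event store; tracing a tagged
# object means following shipping events from one repository to the next.
repositories = {
    "manufacturer": [{"tag": "T1", "event": "shipped", "to": "distributor"}],
    "distributor":  [{"tag": "T1", "event": "shipped", "to": "retailer"}],
    "retailer":     [{"tag": "T1", "event": "received", "to": None}],
}

def trace(tag, start):
    """Follow a tag through the chain of repositories, collecting its history."""
    history, repo = [], start
    while repo is not None:
        events = [e for e in repositories.get(repo, []) if e["tag"] == tag]
        history.extend((repo, e["event"]) for e in events)
        repo = events[-1]["to"] if events else None
    return history

print(trace("T1", "manufacturer"))
```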

Relevance: 90.00%

Abstract:

Statistical analysis of diffusion tensor imaging (DTI) data requires a computational framework that is both numerically tractable (to account for the high dimensional nature of the data) and geometric (to account for the nonlinear nature of diffusion tensors). Building upon earlier studies exploiting a Riemannian framework to address these challenges, the present paper proposes a novel metric and an accompanying computational framework for DTI data processing. The proposed approach grounds the signal processing operations in interpolating curves. Well-chosen interpolating curves are shown to provide a computational framework that is at the same time tractable and information relevant for DTI processing. In addition, and in contrast to earlier methods, it provides an interpolation method which preserves anisotropy, a central information carried by diffusion tensor data. © 2013 Springer Science+Business Media New York.
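
For orientation, the standard affine-invariant Riemannian construction that such frameworks build on (the usual baseline, not the novel metric proposed in the paper) interpolates two diffusion tensors S1 and S2 along the geodesic

```latex
% Affine-invariant geodesic between symmetric positive-definite tensors
% (baseline construction; the paper proposes a different, anisotropy-preserving approach).
\gamma(t) \;=\; S_1^{1/2}\,\bigl(S_1^{-1/2}\, S_2\, S_1^{-1/2}\bigr)^{t}\, S_1^{1/2},
\qquad t \in [0,1].
```

Along such geodesics the determinant behaves well, but the anisotropy of the endpoints is in general not preserved in between, which is precisely the limitation the interpolating-curve framework described above is designed to address.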

Relevance: 90.00%

Abstract:

Through extensive study and design, a technical plan for establishing an exploration database center was drawn up, combining imported and self-developed techniques. Through research and repeated experiments, a modern database center has been set up with high-performance hardware and network, a well-configured system, complete data storage and management, and fast, direct data support. Based on the study of decision theory, methods and models, an exploration decision assistant schema was designed, and one of its decision components, the well location decision support system, was evaluated and put into operation.

1. Establishment of the Shengli exploration database center. Research was carried out on the hardware configuration of the database center, including its workstations and all connected hardware and systems. The hardware of the database center is formed by connecting workstations, microcomputer workstations, disk arrays, and the equipment used for seismic processing and interpretation. Research on data storage and management covered analysis of the contents to be managed, data flow, data standards, data QC, backup and restore policy, and optimization of the database system. Reasonable data management regulations and workflows were established and a scientific exploration data management system was created. Data loading was carried out according to a schedule, and more than 200 seismic survey projects, amounting to 25 TB, have been loaded.

2. Exploration work support system and its application. The seismic data processing support system offers automatic extraction of seismic attributes, GIS navigation, data ordering, extraction of arbitrarily sized data cubes, a pseudo huge-capacity disk array, a standard output exchange format, etc. Prestack data can be accessed by the processing system directly or transferred to other processing systems through the standard exchange format. For the seismic interpretation system, features such as automatic scanning and storage of interpretation results and internal data quality control are provided; the interpretation system is connected directly to the database center to obtain real-time support of seismic, formation and well data. Comprehensive geological study support is provided through the intranet, with the ability to query or display data graphically on the navigation system under geological constraints. The production management support system is mainly used to collect, analyze and display production data, with its core technology being controlled data collection and the creation of multiple standard forms.

3. Exploration decision support system design. By classifying the workflows and data flows of all exploration stages and studying decision theory and methods, the target of each decision step, the decision models and the requirements, three concept models were formed for the Shengli exploration decision support system: the exploration distribution support system, the well location support system and the production management support system. The well location decision support system has passed evaluation and been put into operation.

4. Technical advances. The hardware and software of the database center are matched for high performance. By combining a parallel computer system, database servers, a huge-capacity ATL, disk arrays, the network and a firewall, the first exploration database center in China was created, with a reasonable configuration, high performance and the ability to manage the complete exploration data sets. A technology for managing huge volumes of exploration data was formed, in which exploration data standards and management regulations guarantee data quality, safety and security. A multifunction query and support system provides comprehensive exploration information support, including support for geological study, seismic processing and interpretation, and production management; many new database and computer technologies are used in the system to provide real-time information support for exploration work. Finally, the Shengli exploration decision support system was designed.

5. Application and benefit. Data storage has reached 25 TB, with thousands of users in the Shengli oil field accessing the data and improving work efficiency severalfold. The technology has also been applied by many other units of SINOPEC. Its application in providing data to the project "Exploration achievements and Evaluation of Favorable Targets in Hekou Area" shortened the data preparation period from 30 days to 2 days, enriched data abundance by 15 percent and provided complete information support from the database center. Its application in providing previously processed results for the project "Pre-stack depth migration in Guxi fracture zone" reduced the amount of repeated processing, shortened the work period by one month, improved processing precision and quality, and saved 30 million yuan of capital investment in data processing. Its application in automatically providing a project database for the project "Geological and seismic study of southern slope zone of Dongying Sag" shortened data preparation time so that researchers had more time for research, thus improving interpretation precision and quality.

Relevance: 90.00%

Abstract:

Reflectivity sequence extraction is a key part of impedance inversion in seismic exploration. Although many valid inversion methods exist, with crosswell seismic data the frequency band of the seismic data cannot be broadened enough to satisfy practical needs, and this remains an urgent problem to be solved. Pre-stack depth migration, which has developed rapidly in recent years, has become increasingly robust in exploration; it is a powerful technology for imaging geological objects with complex structure, and its final result is reflectivity imaging. Based on reflectivity imaging of crosswell seismic data and the wave equation, this paper completes the following work. First, the workflow of blind deconvolution is completed: the Cauchy criterion is used to regularize the inversion (sparse inversion), and the preconditioned conjugate gradient (PCG) method based on Krylov subspaces is combined with it to decrease the computation and improve the speed, so that the transition matrix no longer needs to be positive definite and symmetric. This method is applied to the high-frequency recovery of crosswell seismic sections and the result is satisfactory. Second, rotation transforms and the Viterbi algorithm are applied in the preprocessing for wave-equation prestack depth migration. In wave-equation prestack depth migration the grid of the seismic dataset is required to be regular, but due to the influence of complex terrain and folds the acquisition geometry sometimes becomes irregular; at the same time, to avoid the aliasing produced by sparse sampling along the inline direction, interpolation should be done between tracks. In this paper, a rotation transform is used to make the inline direction run parallel with the coordinate axes, and the Viterbi algorithm is used to complete the automatic picking of events; the result is satisfactory. Third, imaging is a key part of pre-stack depth migration besides extrapolation: however accurate the extrapolation operator is, the imaging condition can greatly influence the final result of reflectivity sequence imaging. The author performs migration of the Marmousi model under different imaging conditions and analyzes the methods according to the results. The computations show that the imaging condition which stabilizes the source wavefield and the least-squares estimation imaging condition presented in this paper are better than the conventional correlation imaging condition. Finally, the traditional pattern of "distributed computing and mass decision" is widely adopted in the field of seismic data processing and is becoming an obstacle to improving the level of enterprise management. Thus, at the end of this paper a systematic solution scheme employing the mode of "distributed computing - centralized storage - instant release" is brought forward, based on the combination of C/S and B/S release models. The architecture of the solution, the corresponding web technology and the client software are introduced, and the application demonstrates the validity of this scheme.
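
A heavily simplified sketch of Cauchy-regularized sparse deconvolution solved by iteratively reweighted least squares with conjugate gradients may help fix ideas; here the wavelet is assumed known (the paper treats the blind case), and the parameter values and names are illustrative only:

```python
# Sketch only: estimate a sparse reflectivity r from d = W r + noise by
# minimizing ||W r - d||^2 + lam * sum(log(1 + r_i^2 / sigma^2)).
# The Cauchy prior is handled by IRLS; each inner system is solved with CG.
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import cg, LinearOperator

def cauchy_deconv(d, wavelet, lam=0.1, sigma=0.05, n_irls=10):
    n = len(d)
    W = toeplitz(np.r_[wavelet, np.zeros(n - len(wavelet))], np.zeros(n))
    r = np.zeros(n)
    for _ in range(n_irls):
        q = lam / (sigma**2 + r**2)                  # IRLS weights from the Cauchy prior
        A = LinearOperator((n, n), matvec=lambda x: W.T @ (W @ x) + q * x)
        r, _ = cg(A, W.T @ d, x0=r, maxiter=200)     # symmetric positive-definite system
    return r

rng = np.random.default_rng(0)
true_r = np.zeros(200); true_r[[40, 90, 150]] = [1.0, -0.7, 0.5]
wav = np.array([0.2, 0.6, 1.0, 0.6, 0.2])
d = np.convolve(true_r, wav)[:200] + 0.01 * rng.standard_normal(200)
est = cauchy_deconv(d, wav)
print("largest estimated spikes at:", np.argsort(np.abs(est))[-3:])
```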

Relevance: 90.00%

Abstract:

Huelse, M., Barr, D. R. W., Dudek, P.: Cellular Automata and non-static image processing for embodied robot systems on a massively parallel processor array. In: Adamatzky, A. et al. (eds.) AUTOMATA 2008: Theory and Applications of Cellular Automata. Luniver Press, 2008, pp. 504-510. Sponsorship: EPSRC

Relevance: 90.00%

Abstract:

A regularized algorithm for the recovery of band-limited signals from noisy data is described. The regularization is characterized by a single parameter. Iterative and non-iterative implementations of the algorithm are shown to have useful properties, the former offering the advantage of flexibility and the latter a potential for rapid data processing. Comparative results, using experimental data obtained in laser anemometry studies with a photon correlator, are presented both with and without regularization. © 1983 Taylor & Francis Ltd.
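
A minimal sketch of one way a single-parameter, regularized, iterative recovery of a band-limited signal can be organised (a relaxed Gerchberg-Papoulis-type iteration, shown only to illustrate the general idea, not the algorithm of the paper; the parameter names and test signal are made up):

```python
# Sketch: recover/extend a band-limited signal from noisy samples known only
# on part of the grid. `alpha` plays the role of the single regularization
# parameter (relaxation of the data constraint to damp noise amplification).
import numpy as np

def bandlimit(x, keep):
    X = np.fft.rfft(x)
    X[keep:] = 0.0                      # project onto the band-limited subspace
    return np.fft.irfft(X, len(x))

def recover(d, known, keep, alpha=0.2, n_iter=100):
    x = np.where(known, d, 0.0)
    for _ in range(n_iter):
        x = bandlimit(x, keep)          # enforce the band limit
        x = np.where(known, x + alpha * (d - x), x)   # softly re-impose noisy data
    return x

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
clean = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
known = t < 160                         # data observed only on the first part
d = clean + 0.05 * rng.standard_normal(n)
x_hat = recover(d, known, keep=8)
print("rms error on the unobserved part:",
      np.sqrt(np.mean((x_hat[~known] - clean[~known]) ** 2)))
```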

Relevance: 90.00%

Abstract:

The detection of dense harmful algal blooms (HABs) by satellite remote sensing is usually based on analysis of chlorophyll-a as a proxy. However, this approach does not provide information about the potential harm of a bloom, nor can it identify the dominant species. The developed HAB risk classification method employs a fully automatic data-driven approach to identify key characteristics of water-leaving radiances and derived quantities, and to classify pixels into “harmful”, “non-harmful” and “no bloom” categories using Linear Discriminant Analysis (LDA). Discrimination accuracy is increased through the use of spectral ratios of water-leaving radiances, absorption and backscattering. To reduce the false alarm rate, data that cannot be reliably classified are automatically labelled as “unknown”. This method can be trained on different HAB species or extended to new sensors and then applied to generate independent HAB risk maps; these can be fused with other sensors to fill gaps or improve spatial or temporal resolution. The HAB discrimination technique has obtained accurate results on MODIS and MERIS data, correctly identifying 89% of Phaeocystis globosa HABs in the southern North Sea and 88% of Karenia mikimotoi blooms in the Western English Channel. A linear transformation of the ocean colour discriminants is used to estimate harmful cell counts, demonstrating greater accuracy than if based on chlorophyll-a; this will facilitate its integration into a HAB early warning system operating in the southern North Sea.
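
A compact sketch of the classification idea (LDA on spectral-ratio features, with low-confidence pixels labelled "unknown" to limit false alarms); the synthetic features, class means and threshold below are placeholders rather than the operational settings:

```python
# Sketch: train LDA on labelled pixels described by spectral-ratio features and
# label pixels whose posterior probability is too low as "unknown".
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
classes = np.array(["harmful", "non-harmful", "no bloom"])

# Synthetic training data: 3 spectral-ratio features per pixel (placeholder values).
X_train = np.vstack([rng.normal(loc=m, scale=0.3, size=(100, 3))
                     for m in ([1.0, 0.2, 0.5], [0.2, 1.0, 0.5], [0.1, 0.1, 0.1])])
y_train = np.repeat(classes, 100)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

def classify(X, threshold=0.8):
    """Assign a class only when the LDA posterior probability is confident enough."""
    post = lda.predict_proba(X)
    labels = lda.classes_[post.argmax(axis=1)]
    labels[post.max(axis=1) < threshold] = "unknown"
    return labels

X_new = rng.normal(loc=[0.6, 0.6, 0.3], scale=0.3, size=(5, 3))
print(classify(X_new))
```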

Relevance: 90.00%

Abstract:

A Time of Flight (ToF) mass spectrometer that is suitable, in terms of sensitivity, detector response and time resolution, for fast transient Temporal Analysis of Products (TAP) kinetic catalyst characterization is reported. Technical difficulties associated with such an application, as well as the solutions implemented in terms of adaptations of the ToF apparatus, are discussed. The performance of the ToF was validated and the full linearity of the detector over the full dynamic range was explored in order to ensure its applicability to TAP experiments. The reported TAP-ToF setup is the first system that achieves a level of sensitivity allowing monitoring of the full 0-200 AMU range simultaneously with sub-millisecond time resolution. In this new setup, the high sensitivity allows the use of low-intensity pulses, ensuring that transport through the reactor occurs in the Knudsen diffusion regime and that the data can therefore be fully analysed using the reported theoretical TAP models and data processing.

Relevance: 90.00%

Abstract:

Data processing is an essential part of Acoustic Doppler Profiler (ADP) surveys, which have become the standard tool in assessing flow characteristics at tidal power development sites. In most cases, further processing beyond the capabilities of the manufacturer-provided software tools is required. These additional tasks are often implemented by every user in mathematical toolboxes like MATLAB, Octave or Python. This requires the transfer of the data from one system to another and thus increases the possibility of errors. The application of dedicated tools for visualisation of flow or geographic data is also often beneficial, and a wide range of tools are freely available, though again problems arise from the necessity of transferring the data. Furthermore, the ADP manufacturers directly support almost exclusively PCs, whereas small computing solutions like tablet computers, often running Android or Linux operating systems, seem better suited for online monitoring or data acquisition in field conditions. While many manufacturers offer support for developers, any solution is limited to a single device of a single manufacturer. A common data format for all ADP data would allow development of applications and quicker distribution of new post-processing methodologies across the industry.
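
The kind of self-describing common format being argued for could, for example, resemble the following minimal netCDF layout; the variable names, dimensions and attributes are hypothetical and not a proposed standard:

```python
# Hypothetical sketch of a self-describing ADP dataset: velocities on a
# (time, bin) grid plus instrument metadata, written to netCDF so that any
# toolbox (MATLAB, Python, GIS tools) can read it without format conversion.
import numpy as np
import xarray as xr

n_time, n_bin = 1200, 25
ds = xr.Dataset(
    data_vars={
        "vel_east":  (("time", "bin"), np.zeros((n_time, n_bin)), {"units": "m s-1"}),
        "vel_north": (("time", "bin"), np.zeros((n_time, n_bin)), {"units": "m s-1"}),
        "vel_up":    (("time", "bin"), np.zeros((n_time, n_bin)), {"units": "m s-1"}),
    },
    coords={
        "time": np.arange("2015-01-01", "2015-01-01T00:20", dtype="datetime64[s]")[:n_time],
        "bin_depth": ("bin", 1.0 + 0.5 * np.arange(n_bin), {"units": "m"}),
    },
    attrs={"instrument": "generic ADP", "site": "example tidal site"},
)
ds.to_netcdf("adp_survey.nc")   # portable across PCs, tablets and analysis tools
```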

Relevance: 90.00%

Abstract:

Quantile normalization (QN) is a technique for microarray data processing and is the default normalization method in the Robust Multi-array Average (RMA) procedure, which was primarily designed for analysing gene expression data from Affymetrix arrays. Given the abundance of Affymetrix microarrays and the popularity of the RMA method, it is crucially important that the normalization procedure is applied appropriately. In this study we carried out simulation experiments and also analysed real microarray data to investigate the suitability of RMA when it is applied to datasets with different groups of biological samples. From our experiments, we showed that RMA with QN does not preserve the biological signal included in each group, but rather mixes the signals between the groups. We also showed that the Median Polish method in the summarization step of RMA has a similar mixing effect. RMA is one of the most widely used methods in microarray data processing and has been applied to a vast volume of data in biomedical research. The problematic behaviour of this method suggests that previous studies employing RMA could have been misadvised or adversely affected. Therefore we think it is crucially important that the research community recognizes the issue and starts to address it. The two core elements of the RMA method, quantile normalization and Median Polish, both have the undesirable effects of mixing biological signals between different sample groups, which can be detrimental to drawing valid biological conclusions and to any subsequent analyses. Based on the evidence presented here and that in the literature, we recommend exercising caution when using RMA as a method of processing microarray gene expression data, particularly in situations where there are likely to be unknown subgroups of samples.
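
The mixing effect follows directly from what quantile normalization does: after QN every array shares exactly the same distribution of values, whichever biological group it came from. A minimal implementation (not the RMA code itself; the toy group shift is illustrative) makes this explicit:

```python
# Minimal quantile normalization: each column (array) is forced to share one
# common reference distribution (the mean of the sorted columns). Any
# systematic difference in distribution between sample groups is removed.
import numpy as np

def quantile_normalize(X):
    """X: genes x arrays. Returns the quantile-normalized matrix (ties ignored)."""
    order = np.argsort(X, axis=0)                   # rank order within each array
    reference = np.sort(X, axis=0).mean(axis=1)     # common target distribution
    Xn = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        Xn[order[:, j], j] = reference              # map rank -> reference value
    return Xn

rng = np.random.default_rng(0)
group_a = rng.normal(5.0, 1.0, size=(1000, 3))      # e.g. control arrays
group_b = rng.normal(6.0, 1.0, size=(1000, 3))      # e.g. globally shifted group
Xn = quantile_normalize(np.hstack([group_a, group_b]))
print(Xn[:, :3].mean(), Xn[:, 3:].mean())           # the group difference is gone
```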

Relevance: 90.00%

Abstract:

Field programmable gate array devices boast abundant resources with which custom accelerator components for signal, image and data processing may be realised; however, realising high-performance, low-cost accelerators currently demands manual register transfer level design. Software-programmable 'soft' processors have been proposed as a way to reduce this design burden, but they are unable to support performance and cost comparable to custom circuits. This paper proposes a new soft processing approach for FPGA which promises to overcome this barrier. A high-performance, fine-grained streaming processor, known as a Streaming Accelerator Element, is proposed which realises accelerators as large-scale custom multicore networks. By adopting a streaming execution approach with advanced program control and memory addressing capabilities, typical program inefficiencies can be almost completely eliminated, enabling performance and cost which are unprecedented amongst software-programmable solutions. When used to realise accelerators for fast Fourier transform, motion estimation, matrix multiplication and Sobel edge detection, the proposed architecture is shown to enable real-time operation with performance and cost comparable to hand-crafted custom circuit accelerators and up to two orders of magnitude beyond existing soft processors.

Relevance: 90.00%

Abstract:

Studying the mechanisms underlying speech production is a complex and demanding task, requiring data obtained through a variety of techniques, including several imaging modalities. Among these, Magnetic Resonance Imaging (MRI) has gained prominence in recent years, positioning itself as one of the most promising modalities in the field of speech production. An important contribution of this work is the optimization and implementation of MRI protocols and the proposal of image processing strategies adjusted to the requirements of speech production in general and to the specificities of the different sounds. In addition, motivated by the scarcity of data for European Portuguese (EP), a further objective is to obtain articulatory data that complement existing information and clarify some questions regarding the production of EP sounds (namely, lateral consonants and nasal vowels). For the lateral consonants, MR images (2D and 3D) were obtained from sustained productions using a fast Gradient Echo (GE) sequence (3D VIBE) in the sagittal plane, covering the entire vocal tract. The corpus, acquired from seven speakers, covered different syllable positions and vowel contexts. For the nasal vowels, real-time images were acquired from three speakers with a Spoiled Gradient Echo sequence (TurboFLASH) in the sagittal and coronal planes, achieving a temporal resolution of 72 ms (14 frames/s). Image acquisition was synchronized with the acoustic signal using an optical microphone. Several semi-automatic algorithms were used for image processing and analysis. The processing and analysis of the data allowed an articulatory description of the lateral consonants, anchored in qualitative data (e.g., 3D visualizations, contour comparisons) and quantitative data including areas, vocal tract area functions, extent and area of the lateral passages, assessment of contextual and positional effects, etc. Regarding the velarization of the alveolar lateral /l/, the results point to a velarized /l/ regardless of its syllable position. As for /L/, for which the available information was scarce, it was found that its articulation is considerably more fronted than traditionally described and also more extensive than that of the alveolar lateral. The temporal resolution of 72 ms achieved with the real-time MRI acquisitions proved adequate for studying the dynamic characteristics of the nasal vowels, namely aspects such as the duration of the velar gesture, the oral gesture, and the coordination between gestures, complementing and corroborating results already available for EP obtained with other instrumental techniques. In addition, new production data relevant to a better understanding of nasality were obtained (variation of nasal/oral areas over time, nasal/oral proportion). This study demonstrates the versatility and potential of MRI for the study of speech production, with clear and important contributions to a better knowledge of Portuguese articulation, to the development of articulatory-based speech synthesis models, and to future application in more clinical areas (e.g., speech disorders).

Relevance: 90.00%

Abstract:

One of the fundamental problems with image processing of petrographic thin sections is that the appearance (colour / intensity) of a mineral grain will vary with the orientation of the crystal lattice to the preferred direction of the polarizing filters on a petrographic microscope. This makes it very difficult to determine grain boundaries, grain orientation and mineral species from a single captured image. To overcome this problem, the Rotating Polarizer Stage was used to replace the fixed polarizer and analyzer on a standard petrographic microscope. The Rotating Polarizer Stage rotates the polarizers while the thin section remains stationary, allowing for better data gathering possibilities. Instead of capturing a single image of a thin section, six composite data sets are created by rotating the polarizers through 90° (or 180° if quartz c-axes measurements need to be taken) in both plane and cross polarized light. The composite data sets can be viewed as separate images and consist of the average intensity image, the maximum intensity image, the minimum intensity image, the maximum position image, the minimum position image and the gradient image. The overall strategy used by the image processing system is to gather the composite data sets, determine the grain boundaries using the gradient image, classify the different mineral species present using the minimum and maximum intensity images and then perform measurements of grain shape and, where possible, partial crystallographic orientation using the maximum intensity and maximum position images.
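
As a rough sketch of how the six composite data sets can be derived from the captured rotation series (the array shapes, angle sampling and Sobel-based gradient are illustrative choices, not necessarily those of the actual system):

```python
# Sketch: derive the six composite data sets from a stack of images captured
# while the polarizers rotate. `stack` has shape (n_angles, rows, cols); the
# gradient image here is simply a Sobel magnitude of the max-intensity image.
import numpy as np
from scipy import ndimage

def composites(stack, angles):
    avg_int = stack.mean(axis=0)                 # average intensity image
    max_int = stack.max(axis=0)                  # maximum intensity image
    min_int = stack.min(axis=0)                  # minimum intensity image
    max_pos = angles[stack.argmax(axis=0)]       # polarizer angle at maximum
    min_pos = angles[stack.argmin(axis=0)]       # polarizer angle at minimum
    grad = np.hypot(ndimage.sobel(max_int, 0), ndimage.sobel(max_int, 1))
    return avg_int, max_int, min_int, max_pos, min_pos, grad

angles = np.linspace(0.0, 90.0, 18, endpoint=False)        # degrees
stack = np.random.default_rng(0).random((18, 128, 128))    # placeholder images
for name, img in zip(["avg", "max", "min", "max_pos", "min_pos", "grad"],
                     composites(stack, angles)):
    print(name, img.shape)
```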