846 results for image processing and analysis


Relevance:

100.00%

Publisher:

Abstract:

Compared with structured data sources, which are usually stored and analyzed in spreadsheets, relational databases, and single data tables, unstructured construction data sources such as text documents, site images, web pages, and project schedules have been studied less intensively because of the additional challenges they pose in data preparation, representation, and analysis. In this paper, our vision for data management and mining that addresses these challenges is presented, together with related research results from previous work and our recent developments in data mining on text-based, web-based, image-based, and network-based construction databases.
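As one illustration of the text-mining direction mentioned in this abstract, the following is a minimal sketch (not from the paper) of representing text-based construction documents as TF-IDF vectors and grouping them with k-means; the example documents and the cluster count are illustrative assumptions.

# Minimal sketch (not from the paper): text-based construction documents as
# TF-IDF vectors, grouped with k-means. Documents and cluster count are
# illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "daily site report: concrete pour delayed by rain",
    "RFI: clarification on rebar spacing in slab B2",
    "schedule update: steel erection moved to week 14",
    "safety memo: scaffold inspection overdue on tower crane",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)          # sparse term-document matrix

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                        # cluster id per document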

Relevance:

100.00%

Publisher:

Abstract:

Bighead carp is one of the most important freshwater filter-feeding fish in Chinese aquaculture. In recent decades there have been a number of contradictory conclusions about the digestibility of algae by bighead carp, based on results from gut-content and digestive-enzyme analyses or radiolabelled-isotope techniques. Phytoplankton in the gut contents of bighead carp (cultured in a large net cage in Lake Donghu) were studied from March to May. By biomass, the dominant phytoplankters in the fore-gut contents were the centric diatom Cyclotella (average 54.5%, range 33.8-74.3%) and the dinoflagellate Cryptomonas (average 22.8%, range 6.8-55.8%). Phytoplankton in water samples were generally present in proportionate amounts in samples from the fore-guts of bighead carp. Most phytoplankton present in the intestine of bighead carp were between 8 and 20 μm in length. Bighead carp were also able to collect particles (as small as 5-6 μm) much smaller than their filtering net meshes, suggesting the importance of mucus in collecting small particles. Examination of the change in the integrity of Cyclotella on passage through the esophagus of bighead carp indicated that disruption of the algal cell walls is caused principally by the pharyngeal teeth, explaining the previous contradictory conclusions. (C) 2001 Elsevier Science B.V. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

A programmable vision chip with variable resolution and row-pixel-mixed parallel image processors is presented. The chip consists of a CMOS sensor array with row-parallel 6-bit algorithmic ADCs, row-parallel gray-scale image processors, a pixel-parallel SIMD processing element (PE) array, and an instruction controller. The resolution of the image is variable: high resolution for a focused area and low resolution for the general view. The chip implements gray-scale and binary mathematical morphology algorithms in series to carry out low-level and mid-level image processing and outputs image features for various applications. It can perform image processing at over 1,000 frames/s (fps). A prototype chip with 64 × 64 pixel resolution and 6-bit gray-scale output was fabricated in a 0.18 μm standard CMOS process. The chip area is 1.5 mm × 3.5 mm; each pixel measures 9.5 μm × 9.5 μm and each processing element 23 μm × 29 μm. The experimental results demonstrate that the chip can perform low-level and mid-level image processing and can be applied in real-time vision applications such as high-speed target tracking.
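As a rough software illustration of the gray-scale morphology primitives named in this abstract, the following sketch shows erosion and dilation with a flat 3 × 3 structuring element. It is a plain NumPy analogue, not the chip's SIMD implementation; the 64 × 64, 6-bit test frame is only an assumption matching the prototype's format.

# Software analogue (not the chip's SIMD implementation) of gray-scale
# morphology: erosion and dilation with a flat 3x3 structuring element.
import numpy as np

def gray_erode(img: np.ndarray) -> np.ndarray:
    """Gray-scale erosion: each pixel becomes the minimum of its 3x3 neighbourhood."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + 3, x:x + 3].min()
    return out

def gray_dilate(img: np.ndarray) -> np.ndarray:
    """Gray-scale dilation: each pixel becomes the maximum of its 3x3 neighbourhood."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + 3, x:x + 3].max()
    return out

if __name__ == "__main__":
    frame = np.random.randint(0, 64, size=(64, 64), dtype=np.uint8)  # 6-bit gray-scale, 64x64
    opened = gray_dilate(gray_erode(frame))  # morphological opening removes small bright specks
    print(opened.shape)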

Relevance:

100.00%

Publisher:

Abstract:

A synthesized photochromic compound, pyrrylfulgide, is prepared as a thin film doped into a polymethylmethacrylate (PMMA) matrix. Under UV irradiation the film converts from the bleached state into a colored state that has a maximum absorption at 635 nm and is thermally stable at room temperature. When the colored state is irradiated with a linearly polarized 650 nm laser, the film returns to the bleached state, and photoinduced anisotropy is produced during this process. The application of optical image processing methods that use the photoinduced anisotropy of the pyrrylfulgide/PMMA film is described. Examples of non-Fourier optical image processing, such as contrast reversal and image subtraction and summation, as well as of Fourier optical image processing, such as low-pass filtering and edge enhancement, are presented. (c) 2006 Optical Society of America.
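For readers less familiar with the Fourier-plane operations named in this abstract, here is a minimal digital analogue (not the paper's optical 4f setup) of low-pass filtering and edge enhancement using a circular mask in the Fourier plane; the toy input image and the cutoff radius are illustrative assumptions.

# Digital analogue (not the optical setup from the paper) of Fourier-plane
# filtering: low-pass smoothing and high-pass edge enhancement.
import numpy as np

def fourier_filter(img: np.ndarray, cutoff: float, high_pass: bool = False) -> np.ndarray:
    """Apply a circular low-pass (or high-pass, for edge enhancement) mask in the Fourier plane."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)
    mask = radius > cutoff if high_pass else radius <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[40:88, 40:88] = 1.0                                  # toy input scene: a bright square
    smoothed = fourier_filter(img, cutoff=15)                # low-pass: blurred image
    edges = fourier_filter(img, cutoff=15, high_pass=True)   # high-pass: edge-enhanced image
    print(smoothed.shape, edges.shape)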

Relevance:

100.00%

Publisher:

Abstract:

A 2.5-D and 3-D multi-fold GPR survey was carried out in the Archaeological Park of Aquileia (northern Italy). The primary objective of the study was the identification of targets of potential archaeological interest in an area designated by the local archaeological authorities. The second, geophysical, objective was to test 2-D and 3-D multi-fold methods and to study localised targets of unknown shape and dimensions in hostile soil conditions. Several portions of the acquisition grid were processed in common-offset (CO), common-shot (CSG), and common-midpoint (CMP) geometry. An 8 × 8 m area was studied with orthogonal CMPs, thus achieving 3-D subsurface coverage with the azimuthal range limited to two normal components. Coherent noise components were identified in the pre-stack domain and removed by FK filtering of the CMP records. Stacking velocities were obtained from conventional velocity analysis and from azimuthal velocity analysis of 3-D pre-stack gathers. Two major discontinuities were identified in the study area. The deeper one most probably coincides with the paleosol at the base of the layer associated with human activity in the area over the last 2500 years; this interpretation agrees with the results obtained from nearby cores and excavations. The shallower discontinuity is observed in part of the investigated area and shows local interruptions with a linear distribution on the grid; such interruptions may correspond to buried targets of archaeological interest. The marked enhancement of the subsurface images obtained with multi-fold techniques, compared with the relatively poor quality of the conventional single-fold georadar sections, indicates that multi-fold methods are well suited to high-resolution studies in archaeology.
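The stacking step underlying the multi-fold processing described in this abstract can be sketched briefly. The following is a minimal illustration (not the authors' processing flow) of normal-moveout correction of a CMP gather with a single stacking velocity, followed by stacking of the corrected traces; the sample interval, offsets, and velocity are typical GPR values assumed for illustration.

# Minimal sketch (not the authors' flow) of normal-moveout (NMO) correction
# and stacking of a CMP gather. Sample interval, offsets, and velocity are
# illustrative assumptions.
import numpy as np

def nmo_stack(gather: np.ndarray, offsets: np.ndarray, dt: float, v: float) -> np.ndarray:
    """gather: (n_traces, n_samples) CMP gather; returns the stacked zero-offset trace."""
    n_traces, n_samples = gather.shape
    t0 = np.arange(n_samples) * dt                      # zero-offset two-way times
    stacked = np.zeros(n_samples)
    for i, x in enumerate(offsets):
        tx = np.sqrt(t0**2 + (x / v) ** 2)              # hyperbolic moveout t(x)
        # map each output time t0 to the sample recorded at time tx on this trace
        corrected = np.interp(tx, t0, gather[i], left=0.0, right=0.0)
        stacked += corrected
    return stacked / n_traces

if __name__ == "__main__":
    dt, v = 0.5e-9, 1.0e8                               # 0.5 ns sampling, 0.1 m/ns velocity
    offsets = np.linspace(0.2, 2.0, 8)                  # source-receiver offsets in metres
    gather = np.random.randn(len(offsets), 512)         # placeholder CMP gather
    print(nmo_stack(gather, offsets, dt, v).shape)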

Relevance:

100.00%

Publisher:

Abstract:

As a by-product of the 'information revolution' currently unfolding, lifetimes of man (and indeed computer) hours are being devoted to the automated and intelligent interpretation of data. This is particularly true in medical and clinical settings, where research into machine-assisted diagnosis of physiological conditions gains momentum daily. Among the conditions that have been addressed, however, automated classification of allergy has not been investigated, even though the number of allergic persons is rising and undiagnosed allergies are the most likely to have fatal consequences. On the basis of observations by allergists who conduct oral food challenges (OFCs), activity-based analyses of allergy tests were performed. Algorithms were investigated and validated by a pilot study, which verified that accelerometer-based measurement of human movement is well suited to the objective appraisal of activity. However, when these analyses were applied to OFCs, accelerometer-based investigations provided very poor separation between allergic and non-allergic persons, and it was concluded that these avenues are inadequate for the classification of allergy.

Heart rate variability (HRV) analysis is known to provide significant diagnostic information for many conditions. Electrocardiograms (ECGs) were therefore recorded during OFCs to assess the effect of allergy on HRV features. It was found that, with appropriate analysis, excellent separation between allergic and non-allergic subjects can be obtained. These results were, however, obtained with manual QRS annotations, which are not a viable methodology for real-time diagnostic applications. Even so, this is the first work to categorically correlate changes in HRV features with the onset of allergic events, and the manual annotations give undeniable affirmation of this.

Encouraged by the successful results obtained with manual classifications, automatic QRS detection algorithms were investigated to enable fully automated classification of allergy. The results obtained by this process are very promising. Most importantly, the work presented in this thesis did not produce any false positive classifications. This is a highly desirable result for OFC classification, as it allows complete confidence to be placed in classifications of allergy. Furthermore, these results could be particularly advantageous in clinical settings, as machine-based classification can detect the onset of allergy and thereby allow early termination of OFCs. Consequently, machine-based monitoring of OFCs has in this work been shown to have the capacity to significantly and safely advance the current clinical state of the art in allergy diagnosis.
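As a small, purely illustrative complement to the HRV analysis described in this abstract, the following sketch computes two common time-domain HRV features (SDNN and RMSSD) from R-peak times such as those produced by QRS annotation or detection; these particular features and the synthetic beat times are assumptions for illustration, not necessarily the thesis's feature set.

# Illustrative sketch (not the thesis's feature set): time-domain HRV features
# computed from R-peak times, as would follow QRS detection on an ECG.
import numpy as np

def hrv_features(r_peak_times_s: np.ndarray) -> dict:
    """Compute SDNN and RMSSD (in ms) from R-peak times given in seconds."""
    rr = np.diff(r_peak_times_s) * 1000.0          # RR intervals in milliseconds
    sdnn = np.std(rr, ddof=1)                      # overall RR variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))     # short-term beat-to-beat variability
    return {"sdnn_ms": sdnn, "rmssd_ms": rmssd}

if __name__ == "__main__":
    # synthetic R-peak times at roughly 75 beats/min with small jitter
    rng = np.random.default_rng(0)
    peaks = np.cumsum(0.8 + 0.02 * rng.standard_normal(120))
    print(hrv_features(peaks))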

Relevance:

100.00%

Publisher:

Abstract:

It is estimated that the quantity of digital data being transferred, processed, or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to grow by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids, and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate.

Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats those characteristics as the dominant factor affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, namely heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is whether a generic solution for the monitoring and analysis of data can be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be interpreted and exploited effectively and transparently. The approach proposed in this dissertation acquires, analyses, and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating those techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation, and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions.

To demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse the data using different analysis techniques, uncover information, incorporate that information into the system, and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, it is complex and no comprehensive solution exists; secondly, it requires tight interaction with domain experts, and thus the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need for a solution in this domain.
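The workflow-with-provenance idea described in this abstract can be sketched very roughly. The following is a purely illustrative example in which each analysis step consumes data, produces a result, and appends a provenance record so that a third party can trace how a conclusion was derived; the function names and record fields are assumptions, not the dissertation's actual platform API.

# Purely illustrative sketch (not the dissertation's platform): each analysis
# step records what was done, on what input, and when, so derived conclusions
# remain traceable by an independent third party.
import hashlib, json, time
from typing import Any, Callable

provenance_log: list[dict] = []

def run_step(name: str, func: Callable[[Any], Any], data: Any) -> Any:
    """Run one analysis step and append a provenance record for it."""
    result = func(data)
    provenance_log.append({
        "step": name,
        "input_digest": hashlib.sha256(json.dumps(data, default=str).encode()).hexdigest(),
        "output_digest": hashlib.sha256(json.dumps(result, default=str).encode()).hexdigest(),
        "timestamp": time.time(),
    })
    return result

if __name__ == "__main__":
    samples = [0.2, 0.9, 0.4, 1.3]                     # placeholder data-stream window
    mean = run_step("window_mean", lambda xs: sum(xs) / len(xs), samples)
    flagged = run_step("threshold", lambda m: m > 0.5, mean)
    print(flagged, json.dumps(provenance_log, indent=2))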

Relevance:

100.00%

Publisher: