918 results for High-speed digital imaging
Abstract:
The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for the time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially-available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially-available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an 80,000 programme to implement the system proposed by the author.
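To make the iso-lux representation concrete, the sketch below generates a synthetic two-dimensional illuminance array and draws illuminance contours with matplotlib; the beam shape, angular ranges and contour levels are invented for illustration and do not come from the thesis.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 2-D illuminance grid: columns = horizontal beam angle, rows = vertical.
h = np.linspace(-40, 40, 161)                       # horizontal angle (deg)
v = np.linspace(-15, 10, 51)                        # vertical angle (deg)
H, V = np.meshgrid(h, v)
lux = 30.0 * np.exp(-((H + 5.0) ** 2 / 200.0 + (V + 2.0) ** 2 / 8.0))  # synthetic beam

plt.contour(H, V, lux, levels=[0.5, 1, 2, 5, 10, 20])  # iso-lux contour lines
plt.xlabel("Horizontal angle (deg)")
plt.ylabel("Vertical angle (deg)")
plt.title("Synthetic iso-lux diagram")
plt.show()
```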
Abstract:
We present an information-theory analysis of the tradeoff between bit-error-rate improvement and data-rate loss when using skewed channel coding to suppress pattern-dependent errors in digital communications. Without loss of generality, we apply the developed general theory to the particular example of a high-speed fiber communication system with a strong patterning effect. © 2007 IEEE.
Abstract:
A new mesoscale simulation model for solids dissolution based on a computationally efficient and versatile digital modelling approach (DigiDiss) is considered and validated against analytical solutions and published experimental data for simple geometries. As the digital model is specifically designed to handle irregular shapes and complex multi-component structures, use of the model is explored for single crystals (sugars) and clusters. Single crystals and the cluster were first scanned using X-ray microtomography to obtain a digital version of their structures. The digitised particles and clusters were used as a structural input to the digital simulation. The same particles were then dissolved in water and the dissolution process was recorded by a video camera and analysed, yielding the overall dissolution times and images of particle size and shape during dissolution. The results demonstrate the ability of the simulation method to reproduce experimental behaviour, based on the known chemical and diffusion properties of the constituent phases. The paper discusses how further refinements of the modelling approach will need to include other important effects, such as complex disintegration effects (particle ejection, uncertainties in chemical properties). The nature of the digital modelling approach is well suited to future implementation with high-speed computation using hybrid conventional (CPU) and graphics processor (GPU) systems.
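As a toy illustration of a voxel-based ("digital") dissolution update, the sketch below erodes a binary solid according to how many solvent voxels each solid voxel touches; the kernel, the rate constant and the update rule are assumptions for illustration only and are not the DigiDiss formulation.

```python
import numpy as np
from scipy.ndimage import convolve

def dissolve_step(solid, mass, rate=0.1):
    """One explicit update of a toy voxel dissolution model.

    solid : 3-D boolean array, True where solid material remains
    mass  : 3-D float array of remaining mass per voxel
    rate  : mass lost per exposed face per step (arbitrary units)
    """
    # Count the six face-neighbouring solvent voxels of every voxel.
    kernel = np.zeros((3, 3, 3))
    kernel[0, 1, 1] = kernel[2, 1, 1] = 1
    kernel[1, 0, 1] = kernel[1, 2, 1] = 1
    kernel[1, 1, 0] = kernel[1, 1, 2] = 1
    solvent_neighbours = convolve((~solid).astype(float), kernel,
                                  mode="constant", cval=1.0)
    # Solid voxels lose mass in proportion to their exposed surface area.
    mass = np.clip(mass - rate * solvent_neighbours * solid, 0.0, None)
    return mass > 0.0, mass
```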
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures that allow for quick upgradability. Technology continues to advance in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, but with tradeoffs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. There is a need for new architectures that keep pace with the fast innovations in video and imaging. This dissertation therefore includes dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting the occurrence of occlusion during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analysed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient-threshold approach for accurate classification. (4) We design a gesture recognition system using a hardware/software co-simulation neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gestures involved in different applications may vary; it is therefore essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
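A minimal software sketch of running-average background subtraction with a globally adapted threshold, in the spirit of the RAMT idea described in contribution (1); the learning rate, the multiplier and the update rule are assumptions, and the dissertation's actual FPGA design is not reproduced here.

```python
import numpy as np

def detect_targets(frames, alpha=0.05, k=2.5):
    """Yield a boolean foreground mask for each grayscale frame.

    frames : iterable of 2-D arrays (video frames)
    alpha  : background learning rate
    k      : multiplier on the global mean of the difference image
    """
    frames = iter(frames)
    background = np.asarray(next(frames), dtype=np.float64)
    for frame in frames:
        frame = np.asarray(frame, dtype=np.float64)
        diff = np.abs(frame - background)
        # Global threshold adapted from the running scene statistics, so the
        # same code copes with indoor/outdoor lighting without manual tuning.
        mask = diff > k * diff.mean()
        # Update the background model only where no target was detected.
        background[~mask] = (1 - alpha) * background[~mask] + alpha * frame[~mask]
        yield mask
```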
Abstract:
A methodology has been developed and presented to enable the use of small to medium scale acoustic hover facilities for the quantitative measurement of rotor impulsive noise. The methodology was applied to the University of Maryland Acoustic Chamber, resulting in accurate measurements of High Speed Impulsive (HSI) noise for rotors running at tip Mach numbers between 0.65 and 0.85, with accuracy increasing as the tip Mach number was increased. Several factors contributed to the success of this methodology:
• High Speed Impulsive (HSI) noise is characterized by very distinct pulses radiated from the rotor. The pulses radiate high-frequency energy, but the energy is contained in short-duration pulses.
• The first reflections from these pulses can be tracked (using ray theory) and, through adjustment of the microphone position and suitably applied acoustic treatment at the reflecting surface, reduced to small levels. A computer code was developed that automates this process. The code also tracks first-bounce reflection timing, making it possible to position the first-bounce reflections outside of a measurement window.
• Using a rotor with a small number of blades (preferably one) reduces the number of interfering first-bounce reflections and generally improves the measured signal fidelity.
The methodology will help in gathering quantitative hovering rotor noise data in less than optimal acoustic facilities and thus enable basic rotorcraft research and rotor blade acoustic design.
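The reflection-tracking step above can be illustrated with the standard image-source construction for a single flat surface: mirror the source in the surface, compute the reflected path length, and compare the extra arrival delay with the measurement window. The sketch below is a minimal stand-in for that geometric reasoning, not the computer code developed in the work.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degC

def first_bounce_delay(source_xyz, mic_xyz, floor_z=0.0):
    """Extra arrival time (s) of the floor reflection relative to the direct path."""
    direct = math.dist(source_xyz, mic_xyz)
    # Image-source construction: mirror the source in the plane z = floor_z.
    sx, sy, sz = source_xyz
    image_source = (sx, sy, 2.0 * floor_z - sz)
    reflected = math.dist(image_source, mic_xyz)
    return (reflected - direct) / SPEED_OF_SOUND

# Usage: check whether the first bounce falls outside a 5 ms measurement window,
# e.g. first_bounce_delay((0.0, 0.0, 1.5), (3.0, 0.0, 1.5)) > 0.005
```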
Abstract:
Strawberries harvested for processing as frozen fruits are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e. the calyx) and incurs many disadvantages when performed by hand. Not only does it necessitate maintaining cutting-tool sanitation, but it also increases labor time and the exposure of the de-capped strawberries before in-plant processing. This leads to labor inefficiency and decreased harvest yield. By moving the calyx removal process from the fields to the processing plants, this new practice would reduce field labor and improve management and logistics, while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine utilizes commercially available equipment to produce a relatively low-cost automated de-calyxing system that can be retrofitted into existing food processing facilities. This dissertation is broken up into five sections. The first two sections comprise a machine overview and a 12-week processing plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx removal system. In short, the loading system utilizes rotational energy to orient conical strawberries. The machine vision system determines cut locations through real-time RGB feature extraction. The high-speed multi-waterjet knife system uses direct-drive actuation to position 30,000 psi cutting streams at precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine is seen to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
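The RGB feature extraction mentioned above is not detailed in the abstract; as a purely illustrative stand-in, the sketch below segments green (calyx-like) pixels in an RGB frame with a simple channel-comparison rule. The margin value and the rule itself are assumptions, not the dissertation's method.

```python
import numpy as np

def calyx_mask(rgb_frame, green_margin=20):
    """Return a boolean mask of pixels whose green channel dominates red and blue.

    rgb_frame    : H x W x 3 uint8 array from the camera
    green_margin : how much greener than red and blue a pixel must be
    """
    img = np.asarray(rgb_frame, dtype=np.int16)   # avoid uint8 wrap-around
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return (g > r + green_margin) & (g > b + green_margin)
```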
Abstract:
The article investigates patterns of performance in, and relationships between, grip strength, gait speed and self-rated health, considering the variables of gender, age and family income. The study was conducted on a probabilistic sample of community-dwelling elderly people aged 65 and over, members of a population study on frailty. A total of 689 elderly people without cognitive deficit suggestive of dementia underwent tests of gait speed and grip strength. Comparisons between groups were based on low, medium and high speed and strength. Self-rated health was assessed using a 5-point scale. The males and the younger elderly individuals scored significantly higher on grip strength and gait speed than the females and the oldest did; the richest scored higher than the poorest on grip strength and gait speed; women and men aged over 80 had weaker grip strength and lower gait speed; slow gait speed and low income emerged as risk factors for a worse health evaluation. Lower muscular strength affects the self-rated assessment of health because it results in a reduction in functional capacity, especially in the presence of poverty and a lack of compensatory factors.
Abstract:
Considering the difficulties in finding good-quality images for the development and testing of computer-aided diagnosis (CAD) schemes, this paper presents a public online mammographic image database, free to all interested viewers, aimed at helping to develop and evaluate CAD schemes. The mammographic images are digitized with contrast and spatial resolution suitable for processing purposes. The comprehensive retrieval system allows the user to search by image, exam, or patient characteristics. Comparison with other databases currently available has shown that the presented database has a sufficient number of images, is of high quality, and is the only one to include a functional search system.
Abstract:
An accurate estimate of machining time is very important for predicting delivery time and manufacturing costs, and also to help production process planning. Most commercial CAM software systems estimate the machining time in milling operations simply by dividing the entire tool path length by the programmed feed rate. This time estimate differs drastically from the real process time because the feed rate is not always constant, due to machine and computer numerical control (CNC) limitations. This study presents a practical mechanistic method for milling time estimation when machining free-form geometries. The method considers a variable called machine response time (MRT), which characterizes the real CNC machine's capacity to move at high feed rates in free-form geometries. MRT is a global performance feature which can be obtained for any type of CNC machine configuration by carrying out a simple test. To validate the methodology, a workpiece was used to generate NC programs for five different types of CNC machines. A practical industrial case study was also carried out to validate the method. The results indicated that MRT, and consequently the real machining time, depends on the CNC machine's potential; furthermore, the greater the MRT, the larger the difference between predicted milling time and real milling time. The proposed method achieved an error range of 0.3% to 12% of the real machining time, whereas the CAM estimation errors ranged from 211% to 1244%. The MRT-based process is also suggested as an instrument to help in machine tool benchmarking.
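The abstract contrasts the naive CAM estimate (path length divided by programmed feed rate) with a method based on machine response time. The sketch below reproduces the naive estimate and, as a stand-in for limited machine response, an acceleration-limited per-segment estimate; the acceleration model and default value are assumptions for illustration, not the paper's MRT formulation.

```python
import math

def naive_cam_estimate(segment_lengths_mm, feed_mm_per_min):
    """Typical CAM estimate: total tool-path length divided by the programmed feed rate."""
    return sum(segment_lengths_mm) / feed_mm_per_min  # minutes

def accel_limited_estimate(segment_lengths_mm, feed_mm_per_min, accel=500.0):
    """Pessimistic stand-in for limited machine response: assume the axes stop and
    re-accelerate (at `accel` mm/s^2) for every short segment of a free-form path."""
    v = feed_mm_per_min / 60.0                 # programmed feed in mm/s
    total_s = 0.0
    for length in segment_lengths_mm:
        d_acc = v * v / (2.0 * accel)          # distance needed to reach the feed rate
        if length < 2.0 * d_acc:
            total_s += 2.0 * math.sqrt(length / accel)   # feed rate never reached
        else:
            total_s += 2.0 * v / accel + (length - 2.0 * d_acc) / v
    return total_s / 60.0                      # minutes

# Example: 2000 segments of 0.5 mm at 3000 mm/min.
# naive_cam_estimate gives about 0.33 min; the acceleration-limited estimate is
# several times longer, illustrating why the naive CAM figure undershoots badly.
```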
Abstract:
This paper presents a novel algorithm to successfully achieve viable integrity and authenticity addition and verification for n-frame DICOM medical images using cryptographic mechanisms. The aim of this work is the enhancement of DICOM security measures, especially for multiframe images. Current approaches have limitations that should be properly addressed for improved security. The algorithm proposed in this work uses data encryption to provide integrity and authenticity, along with a digital signature. Relevant header data and the digital signature are used as inputs to cipher the image. Therefore, the original data can be retrieved if and only if the images and the inputs are correct. The encryption process itself is a cascading scheme, where a frame is ciphered with data related to the previous frames, also generating additional data on image integrity and authenticity. Decryption is similar to encryption, also featuring the standard security verification of the image. The implementation was done in Java, and a performance evaluation was carried out comparing the speed of the algorithm with other existing approaches. The evaluation showed a good performance of the algorithm, which is an encouraging result for its use in a real environment.
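A minimal sketch of a cascading per-frame encryption, assuming the header data and digital signature seed a key chain and each frame's key also depends on the preceding ciphertext; the hash-then-Fernet construction here is an illustrative stand-in and is not the paper's algorithm.

```python
import base64
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_frames(frames, header_bytes, signature_bytes):
    """Cascade: the key chain is seeded with header + signature, and each frame's
    key also depends on the previous ciphertext, so frame i is only recoverable
    when the header, the signature, and frames 0..i-1 are all intact."""
    chain = hashlib.sha256(header_bytes + signature_bytes).digest()
    out = []
    for frame in frames:                      # frame: raw pixel data as bytes
        key = base64.urlsafe_b64encode(hashlib.sha256(chain).digest())
        ct = Fernet(key).encrypt(frame)
        out.append(ct)
        chain = hashlib.sha256(chain + ct).digest()
    return out

def decrypt_frames(ciphertexts, header_bytes, signature_bytes):
    """Mirror of encrypt_frames; raises if any input or preceding frame was altered."""
    chain = hashlib.sha256(header_bytes + signature_bytes).digest()
    out = []
    for ct in ciphertexts:
        key = base64.urlsafe_b64encode(hashlib.sha256(chain).digest())
        out.append(Fernet(key).decrypt(ct))   # raises InvalidToken on tampering
        chain = hashlib.sha256(chain + ct).digest()
    return out
```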
Abstract:
One of the goals of the ARC-funded eResearch project called Sharing access and analytical tools for ethnographic digital media using high speed networks, or simply EthnoER, is to take the outputs of normal linguistic analytical processes and present them online in a system we have called the EthnoER online presentation and annotation system, or EOPAS.
Abstract:
The cholinergic system is thought to play an important role in hippocampal-dependent learning and memory. However, the mechanism of action of the cholinergic system in these processes is not well understood. Here we examined the effect of muscarinic receptor stimulation in hippocampal CA1 pyramidal neurons using whole-cell recordings in acute brain slices coupled with high-speed imaging of intracellular calcium. Activation of muscarinic acetylcholine receptors by synaptic stimulation of cholinergic afferents or application of a muscarinic agonist in CA1 pyramidal neurons evoked a focal rise in free calcium in the apical dendrite that propagated as a wave into the soma and invaded the nucleus. The calcium rise to a single action potential was reduced during muscarinic stimulation. Conversely, the calcium rise during trains of action potentials was enhanced during muscarinic stimulation. The enhancement of free intracellular calcium was most pronounced in the somatic and nuclear regions. In many cases, the calcium rise was distinguished by a clear inflection in the rising phase of the calcium transient, indicative of a regenerative response. Both calcium waves and the amplification of action potential-induced calcium transients were blocked by emptying of intracellular calcium stores or by antagonism of inositol 1,4,5-trisphosphate receptors with heparin or caffeine. Ryanodine receptors were not essential for the calcium waves or the enhancement of calcium responses. Because rises in nuclear calcium are known to initiate the transcription of novel genes, we suggest that these actions of cholinergic stimulation may underlie its effects on learning and memory.
Abstract:
Olm MA, Kogler JE Jr, Macchione M, Shoemark A, Saldiva PH, Rodrigues JC. Primary ciliary dyskinesia: evaluation using cilia beat frequency assessment via spectral analysis of digital microscopy images. J Appl Physiol 111: 295-302, 2011. First published May 5, 2011; doi:10.1152/japplphysiol.00629.2010. Ciliary beat frequency (CBF) measurements provide valuable information for the diagnosis of primary ciliary dyskinesia (PCD). We developed a system for measuring CBF, used it in association with electron microscopy to diagnose PCD, and then analyzed the characteristics of PCD patients. The CBF measurement system was based on power spectra measured through digital imaging. Twenty-four patients suspected of having PCD (age 1-19 yr) were selected from a group of 75 children and adolescents with pneumopathies of unknown cause. Ten healthy, nonsmoking volunteers (age >= 17 yr) served as a control group. Nasal brush samples were collected, and CBF measurement and electron microscopy were performed. PCD was diagnosed in 12 patients: 5 had radial spoke defects, 3 showed absent central microtubule pairs with transposition, 2 had outer dynein arm defects, 1 had a shortened outer dynein arm, and 1 had a normal ultrastructure. Previous studies have reported that the most common cilia defects are in the dynein arm. As expected, the mean CBF was higher in the control group (P < 0.001) and in patients with normal ultrastructure (P < 0.002) than in those diagnosed with ciliary ultrastructural defects (i.e., PCD patients). An obstructive ventilatory pattern was observed in 70% of the PCD patients who underwent pulmonary function tests. All PCD patients presented bronchial wall thickening on chest computed tomography scans. The protocol and diagnostic techniques employed allowed us to diagnose PCD in 16% of the patients in this study.
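The power-spectrum idea can be sketched in a few lines: take the intensity of a region of beating cilia over time, compute its power spectrum, and report the dominant peak as the beat frequency. The frequency band limits and the use of numpy below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ciliary_beat_frequency(intensity_series, frame_rate_hz, fmin=3.0, fmax=30.0):
    """Estimate beat frequency as the dominant peak of the intensity power spectrum.

    intensity_series : 1-D array of mean pixel intensities over a ciliated region,
                       one sample per video frame
    frame_rate_hz    : acquisition frame rate of the digital microscopy video
    fmin, fmax       : plausible frequency band to search (Hz), assumed values
    """
    x = np.asarray(intensity_series, dtype=float)
    x -= x.mean()                                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / frame_rate_hz)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]   # peak frequency in Hz
```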
Abstract:
In this paper, methods are presented for automatic detection of the nipple and the pectoral muscle edge in mammograms via image processing in the Radon domain. Radon-domain information was used for the detection of straight-line candidates with high gradient. The longest straight-line candidate was used to identify the pectoral muscle edge. The nipple was detected as the convergence point of breast tissue components, indicated by the largest response in the Radon domain. Percentages of false-positive (FP) and false-negative (FN) areas were determined by comparing the areas of the pectoral muscle regions delimited manually by a radiologist and by the proposed method applied to 540 mediolateral-oblique (MLO) mammographic images. The average FP and FN were 8.99% and 9.13%, respectively. In the detection of the nipple, an average error of 7.4 mm was obtained with reference to the nipple as identified by a radiologist on 1,080 mammographic images (540 MLO and 540 craniocaudal views).
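A minimal sketch of the Radon-domain idea, assuming scikit-image: take the gradient magnitude of the mammogram, compute its Radon transform, and read the strongest straight line from the peak. The paper's method additionally ranks straight-line candidates by length and locates the nipple as a convergence point, neither of which is reproduced here.

```python
import numpy as np
from skimage.filters import sobel
from skimage.transform import radon

def strongest_line(image, angles=None):
    """Return the (angle in degrees, radial offset in pixels) of the strongest
    straight edge, found as the peak of the Radon transform of the gradient image."""
    if angles is None:
        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
    edges = sobel(np.asarray(image, dtype=float))       # high-gradient pixels
    sinogram = radon(edges, theta=angles, circle=False)
    # Rows of the sinogram are projection offsets, columns are projection angles.
    offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    return angles[angle_idx], offset_idx - sinogram.shape[0] // 2
```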
Abstract:
Purpose: The objective of this in vitro study was to compare the degree of microleakage of composite restorations performed by lasers and conventional drills associated with two adhesive systems. Materials and Methods: Sixty bovine teeth were divided into 6 groups (n = 10). The preparations were performed in groups 1 and 2 with a high-speed drill (HD), in groups 3 and 5 with an Er:YAG laser, and in groups 4 and 6 with an Er,Cr:YSGG laser. The specimens were restored with resin composite associated with an etch-and-rinse two-step adhesive system (Single Bond 2 [SB]) (groups 1, 3, 4) or a self-etching adhesive (One-Up Bond F [OB]) (groups 2, 5, 6). After storage, the specimens were polished, thermocycled, immersed in 50% silver nitrate tracer solution, and then sectioned longitudinally. The specimens were placed under a stereomicroscope (25X) and digital images were obtained. These were evaluated by three blinded evaluators, who assigned a microleakage score (0 to 3). The original data were submitted to Kruskal-Wallis and Mann-Whitney statistical tests. Results: The occlusal/enamel margins demonstrated no differences in microleakage for all treatments (p > 0.05). The gingival/dentin margins presented similar microleakage in cavities prepared with Er:YAG, Er,Cr:YSGG, and HD using the etch-and-rinse two-step adhesive system (SB) (p > 0.05); however, both Er:YAG and Er,Cr:YSGG lasers demonstrated lower microleakage scores with the OB adhesive than with SB (p < 0.05). Conclusion: The microleakage score at gingival margins depends on the interaction of the hard-tissue removal tool and the adhesive system used. The self-etching adhesive system had a lower microleakage score at dentin margins for cavities prepared with Er:YAG and Er,Cr:YSGG than the etch-and-rinse two-step adhesive system.