943 results for "Digital processing"


Relevance: 30.00%

Abstract:

OBJECTIVES: To develop a method for objective assessment of fine motor timing variability in Parkinson's disease (PD) patients, using digital spiral data gathered by a touch screen device. BACKGROUND: A retrospective analysis was conducted on data from 105 subjects: 65 patients with advanced PD (group A), 15 intermediate patients experiencing motor fluctuations (group I), 15 early-stage patients (group S), and 10 healthy elderly subjects (HE). The subjects were asked to perform repeated upper limb motor tasks by tracing, with the dominant hand and an ergonomic pen stylus, a pre-drawn Archimedes spiral shown on the screen of the device. The test was repeated three times per test occasion and the subjects were instructed to complete it within 10 seconds. Digital spiral data, including stylus position (x-y coordinates) and timestamps (milliseconds), were collected and used in subsequent analysis. The total numbers of observations with the test battery were as follows: Swedish group (n=10079), Italian I group (n=822), Italian S group (n=811), and HE (n=299). METHODS: The raw spiral data were processed with three data processing methods. To quantify motor timing variability during the spiral drawing tasks, the Approximate Entropy (APEN) method was applied to the digitized spiral data. APEN is designed to capture the amount of irregularity or complexity in a time series. It requires two parameters: the window size and the similarity measure. In our work, after experimentation, the window size was set to 4 and the similarity measure to 0.2 (20% of the standard deviation of the time series). The final APEN score was normalized by total drawing completion time and used in subsequent analysis; the score generated by this method is henceforth denoted APEN. Two further methods were applied to the digital spiral data and their scores used in subsequent analysis. The first, based on the Discrete Wavelet Transform and Principal Component Analysis, generated a score representing spiral drawing impairment, henceforth denoted WAV. The second was based on the standard deviation of frequency-filtered drawing velocity, henceforth denoted SDDV. Linear mixed-effects (LME) models were used to evaluate mean differences of the spiral scores of the three methods across the four subject groups. Test-retest reliability of the three scores was assessed as the mean of the three possible correlations (Spearman's rank coefficients) between the three test trials. Internal consistency of the methods was assessed by calculating correlations between their scores. RESULTS: When comparing mean spiral scores between the four subject groups, the APEN scores increasingly diverged from the HE subjects across the three patient groups (P=0.626 for group S with a 9.9% mean value difference, P=0.089 for group I with 30.2%, and P=0.0019 for group A with 44.1%), reaching statistical significance only for group A. There were no significant differences in the mean scores of the other two methods, except for WAV between the HE and A groups (P<0.001). WAV and SDDV were highly and significantly correlated with each other, with a coefficient of 0.69, whereas APEN was correlated with neither WAV nor SDDV (coefficients 0.11 and 0.12, respectively). Test-retest reliability coefficients of the three scores were as follows: APEN (0.90), WAV (0.83) and SDDV (0.55).
CONCLUSIONS: The results show that the digital spiral analysis-based objective APEN measure can significantly differentiate healthy subjects from patients at the advanced stage. In contrast to the other two methods (WAV and SDDV), which are designed to quantify dyskinesias (over-medication), this method may be useful for characterizing Off symptoms in PD. APEN was correlated with neither of the other two methods, indicating that it measures a different construct of upper limb motor function in PD patients than WAV and SDDV. APEN also had better test-retest reliability, indicating that it is more stable and consistent over time than WAV and SDDV.
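The abstract specifies APEN's two parameters (window size 4; similarity tolerance of 0.2 times the series' standard deviation) but not an implementation. A minimal NumPy sketch of the standard Approximate Entropy computation under those settings might look like the following; the function name and the final normalization comment are assumptions:

```python
import numpy as np

def approximate_entropy(x, m=4, r_frac=0.2):
    """Approximate Entropy of a 1-D time series.

    m      -- window size (set to 4 in the abstract)
    r_frac -- similarity tolerance as a fraction of the standard
              deviation of the series (0.2 in the abstract)
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_frac * np.std(x)

    def phi(m):
        # All overlapping windows of length m.
        w = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of windows.
        d = np.max(np.abs(w[:, None, :] - w[None, :, :]), axis=2)
        # For each window, the fraction of windows within tolerance r.
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Per the abstract, the raw APEN value would then be normalized by the
# total drawing completion time before further analysis:
# score = approximate_entropy(timing_series) / total_time
```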

Relevance: 30.00%

Abstract:

This paper summarises the results of using image processing techniques to obtain information about the load of timber trucks before their arrival, using digital, geo-tagged images. Once the images are captured and sent to the sawmill by drivers from the forest, we can predict the trucks' arrival time from the geo-tagged coordinates, count the number of timber logs piled on a truck, identify their type, and calculate their diameter. With this information we can schedule and prioritise the inflow and unloading of trucks in the light of production schedules and the raw material stocks available at the sawmill yard. It is important to keep all the actors in the supply chain integrated and coordinated, so that optimal working routines can be reached in the sawmill yard.
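The abstract does not detail how arrival times are predicted from the geo-tagged coordinates. One plausible minimal sketch is a great-circle (haversine) distance from the photo's geotag to the sawmill, divided by an assumed average speed; everything here (function names, the 60 km/h default, the coordinates) is illustrative, not the paper's method:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two geotagged points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def eta_hours(truck, sawmill, avg_speed_kmh=60.0):
    """Crude ETA: straight-line distance over an assumed average speed."""
    return haversine_km(truck[0], truck[1], sawmill[0], sawmill[1]) / avg_speed_kmh

# Hypothetical truck geotag and sawmill location:
print(eta_hours((60.65, 17.10), (60.67, 15.60)))
```

A production system would of course use road-network distances rather than straight lines; the sketch only shows where the geotag enters the scheduling pipeline.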

Relevance: 30.00%

Abstract:

The demands on image-processing-related systems include robustness, high recognition rates, the capability to handle incomplete digital information, and great flexibility in capturing the shape of an object in an image. It is exactly here that convex hulls come into play. The objective of this paper is twofold. First, we summarize the state of the art in computational convex hull development for researchers interested in using convex hulls in image processing, to build their intuition or generate nontrivial models. Second, we present several applications involving convex hulls in image processing tasks. By this, we have striven to show researchers the rich and varied set of applications they can contribute to. This paper also makes a humble effort to enthuse prospective researchers in this area. We hope that the resulting awareness will result in new advances for specific image recognition applications.
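As a concrete example of the computational convex hull constructions such a survey covers, Andrew's monotone chain is a standard O(n log n) algorithm for 2-D point sets (e.g. foreground pixel coordinates). This sketch is illustrative and not taken from the paper:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Last point of each half is the first of the other; drop duplicates.
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```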

Relevance: 30.00%

Abstract:

The effectiveness of Cognitive Behavioral Therapy (CBT) for eating disorders has established a link between cognitive processes and unhealthy eating behaviors. However, the relationship between individual differences in unhealthy eating behaviors not associated with clinical eating disorders, such as overeating and restrained eating, and the processing of food-related verbal stimuli remains undetermined. Furthermore, the cognitive processes that promote unhealthy and healthy exercise patterns remain virtually unexplored. The present study compared individual differences in attitudes and behaviors around eating and exercise with responses to food- and exercise-related words in a Lexical Decision Task (LDT). Participants were recruited from Colby (n = 61) and the greater Waterville community (n = 16). The results indicate the following trends: individuals who scored high on "thin ideal" responded faster to food-related words than individuals with low "thin ideal" scores did. Regarding the exercise-related data, individuals who engage in more low-intensity exercise responded faster to exercise-related words than individuals who engage in less low-intensity exercise did. These findings suggest that cognitive schemata about food and exercise might mediate individuals' eating and exercise patterns.

Relevance: 30.00%

Abstract:

This work develops a mathematical foundation for digital signal processing from the point of view of interval mathematics. It addresses the open problem of the precision and representation of data in digital systems through an interval version of signal representation. Signal processing is a rich and complex area, so this work restricts its focus to linear time-invariant systems. A vast literature exists in the area, but some concepts of interval mathematics need to be redefined or elaborated for the construction of a solid theory of interval signal processing. We construct the basic foundations of signal processing in the interval version, such as the basic properties of linearity, stability and causality, and an interval version of linear systems and their properties. Interval versions of the convolution and the Z-transform are presented. Convergence of systems is analyzed using the interval Z-transform, an essentially interval distance, interval complex numbers, and an application to an interval filter.
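As an illustration of the interval version of convolution that the abstract mentions, here is a minimal sketch using closed intervals with standard endpoint arithmetic; the class and function names are assumptions, not the thesis's notation:

```python
from itertools import product

class Interval:
    """A closed real interval [lo, hi] with interval arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        # Product interval: min and max over all endpoint products.
        p = [a * b for a, b in product((self.lo, self.hi), (other.lo, other.hi))]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def interval_convolution(x, h):
    """Discrete convolution of two finite interval-valued signals."""
    y = []
    for k in range(len(x) + len(h) - 1):
        acc = Interval(0, 0)
        for m in range(len(h)):
            if 0 <= k - m < len(x):
                acc = acc + (h[m] * x[k - m])
        y.append(acc)
    return y

x = [Interval(0.9, 1.1), Interval(1.9, 2.1)]    # signal with uncertainty
h = [Interval(0.5, 0.5), Interval(0.25, 0.25)]  # degenerate-interval filter
print(interval_convolution(x, h))
```

The point of the interval formulation is that the output intervals are guaranteed enclosures of the exact result under the stated input uncertainty.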

Relevance: 30.00%

Abstract:

Modern wireless systems employ adaptive techniques to provide high throughput while observing desired coverage, Quality of Service (QoS) and capacity. An alternative to further enhance data rates is to apply cognitive radio concepts, where a system is able to exploit unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques like Automatic Modulation Classification (AMC) could help, or even be vital, in such scenarios. Usually, AMC implementations rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g. Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework, known as the correntropy coefficient. It is capable of extracting similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, e.g., the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
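The abstract names the correntropy coefficient but gives no formula. A common definition from the ITL literature is the centered correntropy normalized by its auto terms, with a Gaussian kernel; this NumPy sketch follows that definition, with the kernel width sigma as an assumed free parameter rather than the paper's setting:

```python
import numpy as np

def correntropy_coefficient(x, y, sigma=1.0):
    """Centered correntropy coefficient between two equal-length series.

    Uses a Gaussian kernel of width sigma; values near 1 indicate strong
    (possibly nonlinear) similarity, values near 0 indicate independence.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    kern = lambda u: np.exp(-u ** 2 / (2.0 * sigma ** 2))

    def centered(a, b):
        v = np.mean(kern(a - b))                        # correntropy V(a, b)
        cross = np.mean(kern(a[:, None] - b[None, :]))  # mean over all index pairs
        return v - cross                                # centered correntropy U(a, b)

    return centered(x, y) / np.sqrt(centered(x, x) * centered(y, y))
```

In an AMC setting, a received signal could be scored against templates of candidate modulations and assigned to the one with the highest coefficient.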

Relevance: 30.00%

Abstract:

Fiber-reinforced epoxy composites are used in a wide variety of applications in the aerospace field. These materials have high specific moduli and high specific strength, and their properties can be tailored to application requirements. In order to screen for optimum material behavior, the effects of external environments on the mechanical properties during usage must be clearly understood. Environmental action, such as high moisture concentration, high temperatures, corrosive fluids or ultraviolet (UV) radiation, can affect the performance of advanced composites during service. These factors can limit the applications of composites by deteriorating the mechanical properties over a period of time. Property deterioration is attributed to chemical and/or physical damage in the polymer matrix, loss of adhesion at the fiber/resin interface, and/or reduction of fiber strength and stiffness. The dynamic elastic properties are important characteristics of glass fiber reinforced composites (GFRC). They control the damping behavior of composite structures and are also an ideal tool for monitoring the development of GFRC mechanical properties during processing or service. One of the most widely used tests is vibration damping. In this work, the measurement consisted of recording the vibration decay of a rectangular plate excited by a controlled mechanism, in order to identify the elastic and damping properties of the material under test. The frequencies and amplitudes were measured by accelerometers and calculated using a digital method. The present studies were performed to explore the relations between the dynamic mechanical properties, the damping test, and the influence of high moisture concentration on glass fiber reinforced composites (plain weave). The results show that E' decreased with increasing exposure time for glass fiber/epoxy composite specimens exposed at 80 degrees C and 90% RH. The E' values found were 26.7, 26.7, 25.4, 24.7 and 24.7 GPa for 0, 15, 30, 45 and 60 days of exposure, respectively. (c) 2005 Springer Science + Business Media, Inc.
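The abstract describes recording the vibration decay of a plate and extracting elastic and damping properties digitally, without stating the algorithm. A common approach, sketched here under that assumption, estimates the natural frequency from the FFT peak and the damping ratio from the logarithmic decrement of successive decay peaks; function names and the peak-picking details are illustrative:

```python
import numpy as np

def damping_from_decay(signal, fs):
    """Estimate natural frequency (Hz) and damping ratio from a
    free-vibration decay record sampled at fs Hz."""
    sig = np.asarray(signal, float)

    # Natural frequency from the dominant spectral peak (skip the DC bin).
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    fn = freqs[np.argmax(spec[1:]) + 1]

    # Successive positive peaks of the decaying oscillation.
    peaks = [i for i in range(1, len(sig) - 1)
             if sig[i] > sig[i - 1] and sig[i] > sig[i + 1] and sig[i] > 0]

    # Logarithmic decrement over the spanned cycles, then damping ratio.
    delta = np.log(sig[peaks[0]] / sig[peaks[-1]]) / (len(peaks) - 1)
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
    return fn, zeta
```

The storage modulus E' can then be derived from the identified natural frequency together with the plate's geometry and density.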

Relevance: 30.00%

Abstract:

This paper investigates digital radiography for the inspection of welded pipes to be installed in deep-water offshore gas and oil pipelines, such as those of the pre-salt fields in Brazil. The aim is to use digital radiography for nondestructive testing of welds, as is already done in the medical, aerospace, security, automotive, and petrochemical sectors. Among the current options, the DDA (Digital Detector Array) is considered one of the best solutions to replace industrial film, as well as to increase sensitivity and reduce the inspection cycle time. This paper shows the results of this new technique, comparing it to radiography with industrial film systems. Twenty test specimens of longitudinally welded pipe joints, specially prepared with artificial defects such as cracks, lack of fusion, lack of penetration, porosity and slag inclusions of varying dimensions, and in six different base metal wall thicknesses, were tested and the techniques compared. These experiments verified the proposed rules for the definition and selection of parameters to control the required digital radiographic image quality, as described in the draft international standard ISO/DIS 10893-7. This draft is the first standard establishing the parameters for digital radiography of the weld seams of welded steel pipes for pressure purposes to be used in gas and oil pipelines.

Relevance: 30.00%

Abstract:

Purpose: To evaluate the reproducibility and precision of ocular measurements made by digital photograph analysis, together with the transformation of the measures according to the individual iris diameter as an oculometric reference. Methods: Twenty-four eyes were digitally photographed in a standardized way at two distances. Two researchers analyzed the printed images using a caliper and the digital files using ImageJ 1.37. Several external ocular parameters were estimated (in mm and as fractions of the iris diameter), and the measurement methods were compared regarding their precision, agreement and correlation. Results: Caliper and digital analyses of the oculometric measures showed significant agreement and correlation; nevertheless, the precision of the digital measures was higher. The numeric transformation of the oculometric measures according to the individual iris diameter correlated strongly with the caliper measures and showed high agreement between photographs taken at the two distances. Conclusions: Facial digital photographs allowed precise and reproducible oculometric estimates, endorsing their usefulness in clinical research. Using the iris diameter as an individual oculometric reference provided high reproducibility when facial photographs were taken at different distances.
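The transformation the abstract describes, expressing each measure relative to the same photograph's iris diameter, amounts to a simple ratio in which the pixels-per-millimeter scale cancels, which is why it tolerates different camera distances. A tiny illustrative sketch (all pixel values hypothetical):

```python
def to_iris_units(measure_px, iris_diameter_px):
    """Express an ocular measure as a fraction of the iris diameter
    measured in the same photograph; the pixel scale cancels out."""
    return measure_px / iris_diameter_px

# The same eye photographed at two distances gives different pixel
# values but (ideally) the same iris-referenced measure:
near = to_iris_units(measure_px=240.0, iris_diameter_px=300.0)
far = to_iris_units(measure_px=160.0, iris_diameter_px=200.0)
print(near, far)  # both 0.8 iris diameters
```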

Relevance: 30.00%

Abstract:

OBJECTIVE: To evaluate the performance of digital image analysis in estimating the area affected by chronic lower-limb ulcers. METHODS: A prospective study in which ulcers were measured using the classic planimetric method: their contours were traced onto transparent plastic film and the area was later measured on millimeter-grid paper. These values were used as the standard for comparison with area estimates obtained from standardized digital photographs of the ulcers and of their plastic-film tracings. To create a reference for converting pixels into millimeters, an adhesive sticker of known size was placed adjacent to the ulcer. RESULTS: Forty-two lesions were evaluated in 20 patients with chronic lower-limb ulcers. The ulcer areas ranged from 0.24 to 101.65 cm². A strong correlation was observed between the planimetric measurements and the photographs of the ulcers (R²=0.86, p<0.01), but the correlation between the planimetric measurements and the digital photographs of the ulcer tracings was even higher (R²=0.99, p<0.01). CONCLUSION: Standardized digital photography proved to be a fast, precise and non-invasive method capable of estimating the area affected by ulcers. Photographic measurement of the ulcer contours should be preferred over analysis of direct photographs of the ulcers.
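The pixel-to-area conversion described in the methods, using an adhesive sticker of known size next to the ulcer, reduces to a single scale factor; a minimal illustrative sketch (function name and counts hypothetical):

```python
def lesion_area_cm2(lesion_px, sticker_px, sticker_cm2):
    """Scale a lesion's pixel count to cm^2 using an adhesive sticker of
    known physical area captured in the same photograph."""
    return lesion_px * (sticker_cm2 / sticker_px)

# Hypothetical pixel counts from a segmented photo, with a 1 cm^2 sticker:
print(lesion_area_cm2(lesion_px=18500, sticker_px=2300, sticker_cm2=1.0))
```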

Relevance: 30.00%

Abstract:

Purpose: The purpose of this in vivo study was to compare the accuracy of primary incisor length determined by direct digital radiography (straight-line measurement and grid superimposition) with the actual tooth length. Methods: Twenty-two primary maxillary incisors that required extraction were selected from 3- to 5-year-old children. The teeth were radiographed with an intraoral sensor using the long-cone technique and a sensor holder (30-cm focus-to-sensor distance). The exposure time was 0.3 seconds. Tooth length was estimated using the straight-line and grid measurements provided by the distance measurement feature of the Computed Dental Radiography digital imaging system. The actual tooth length was obtained by measuring the extracted tooth with a digital caliper. Data were analyzed statistically by Pearson's correlation coefficient and a paired t test. Results: There were statistically significant differences (P=.007) between the 2 measurement techniques and between the actual tooth lengths and the grid measurements. There was no statistically significant difference (P=.38) between the straight-line measurements and the actual tooth lengths, showing that the straight-line measurements were more accurate. Underestimation of the actual tooth length, however, occurred in 45% of the straight-line measurements and in 73% of the grid measurements. Conclusion: It is possible to determine primary tooth length on digital radiographs using onscreen measurements with a reasonable degree of accuracy.

Relevance: 30.00%

Abstract:

The aim of this study was to analyze, using the CIE L*a*b* system, the color alterations in digital images of shade guide tabs obtained photographically in automatic and manual modes. This study also sought to examine the observers' agreement in quantifying the coordinates. Four Vita Lumin Vacuum shade guide tabs were used: A3.5, B1, B3 and C4. A Canon EOS digital camera was used to record the digital images of the shade tabs, and the images were processed using Adobe Photoshop software. A total of 80 observations (five replicates of each shade, by two observers, in the two modes, automatic and manual) yielded values of L*, a* and b*. The color difference (ΔE) between the modes was calculated and classified as either clinically acceptable or unacceptable. The results indicated agreement between the two observers in obtaining the L*, a* and b* values for all tabs. The B1, B3 and C4 shade tabs had ΔE values classified as clinically acceptable (ΔE = 0.44, ΔE = 2.04 and ΔE = 2.69, respectively), whereas the A3.5 shade tab had a ΔE value classified as clinically unacceptable (ΔE = 4.17), as it presented higher luminosity in the automatic mode (L* = 54.0) than in the manual mode (L* = 50.6). It was concluded that the B1, B3 and C4 shade tabs can be photographed in either camera mode (manual or automatic), unlike the A3.5 shade tab.
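The color difference used here is the standard CIE76 ΔE, the Euclidean distance between two L*a*b* triplets. A minimal sketch follows; the L* values are those reported for the A3.5 tab, while the a* and b* example values are hypothetical:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triplets."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

# Automatic- vs manual-mode readings of one tab (a*, b* hypothetical):
print(delta_e_cie76((54.0, 1.2, 14.3), (50.6, 1.0, 13.9)))
```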

Relevance: 30.00%

Abstract:

A body of research has developed within the context of nonlinear signal and image processing that deals with the automatic, statistical design of digital window-based filters. Based on pairs of ideal and observed signals, a filter is designed in an effort to minimize the error between the ideal and filtered signals. The goodness of an optimal filter depends on the relation between the ideal and observed signals, but the goodness of a designed filter also depends on the amount of sample data from which it is designed. In order to lessen the design cost, a filter is often chosen from a given class of filters, thereby constraining the optimization and increasing the error of the optimal filter. To a great extent, the problem of filter design concerns striking the correct balance between the degree of constraint and the design cost. From a different perspective and in a different context, the problem of constraint versus sample size has been a major focus of study within the theory of pattern recognition. This paper discusses the design problem for nonlinear signal processing, shows how the issue naturally transitions into pattern recognition, and then provides a review of salient related pattern-recognition theory. In particular, it discusses classification rules, constrained classification, the Vapnik-Chervonenkis theory, and implications of that theory for morphological classifiers and neural networks. The paper closes by discussing some design approaches developed for nonlinear signal processing, and how their nature naturally leads to a decomposition of the error of a designed filter into a sum of the following components: the Bayes error of the unconstrained optimal filter, the cost of constraint, the cost of reducing complexity by compressing the original signal distribution, the design cost, and the contribution of prior knowledge to a decrease in the error. The main purpose of the paper is to present fundamental principles of pattern recognition theory within the framework of active research in nonlinear signal processing.
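The statistical design described here, choosing for each observed window pattern the output that minimizes empirical error against the ideal signal, can be illustrated with a minimal plug-in sketch for binary 1-D signals; the function names, the majority-vote tie-breaking, and the identity fallback for unseen patterns are assumptions, not the paper's formulation:

```python
from collections import defaultdict

def design_window_filter(observed, ideal, w=3):
    """Plug-in statistical design of a binary window-based filter:
    for each observed window pattern, pick the ideal value that
    occurred most often in the training pair."""
    counts = defaultdict(lambda: [0, 0])  # pattern -> [count of 0s, count of 1s]
    half = w // 2
    for i in range(half, len(observed) - half):
        pattern = tuple(observed[i - half:i + half + 1])
        counts[pattern][ideal[i]] += 1
    return {p: int(c[1] >= c[0]) for p, c in counts.items()}

def apply_window_filter(filt, signal, w=3):
    half = w // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        # Patterns never seen in training fall back to the identity,
        # one concrete manifestation of the design cost.
        out[i] = filt.get(tuple(signal[i - half:i + half + 1]), signal[i])
    return out

# Hypothetical training pair: a noisy observation of an ideal signal.
ideal = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
observed = [0, 1, 1, 1, 1, 0, 0, 1, 0, 0]
filt = design_window_filter(observed, ideal)
print(apply_window_filter(filt, observed))
```

Constraining the filter class, e.g. to increasing (stack) filters, would shrink the table that must be estimated, trading constraint error against design cost exactly as the paper discusses.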