10 results for CNPQ::ENGENHARIAS::ENGENHARIA BIOMEDICA::BIOENGENHARIA::PROCESSAMENTO DE SINAIS BIOLOGICOS

at Universidade Federal de Uberlândia


Relevance: 100.00%

Abstract:

Skeletal muscle consists of muscle fiber types with different physiological and biochemical characteristics. Fibers can be broadly classified into type I and type II, which differ in, among other features, contraction speed and sensitivity to fatigue. These fiber types coexist in skeletal muscles, and their relative proportions are modulated according to muscle function and the stimuli to which the muscle is submitted. To identify the proportions of fiber types in muscle composition, many studies use biopsy as the standard procedure. Since surface electromyography (sEMG) makes it possible to extract information about the recruitment of different motor units, this study is based on the assumption that sEMG can be used to identify different proportions of fiber types in a muscle. The goal of this study was to identify the sEMG signal features that most precisely distinguish different proportions of fiber types, and to investigate combinations of features using appropriate mathematical models. To this end, signals were simulated with different proportions of recruited motor units and different signal-to-noise ratios. Thirteen time- and frequency-domain features were extracted from the emulated signals. The results for each extracted feature were submitted to the k-means clustering algorithm to separate the different proportions of motor units recruited in the emulated signals. Mathematical techniques (confusion matrix and capability analysis) were applied to select the features able to identify different proportions of muscle fiber types. As a result, the mean frequency and median frequency were selected as the features that most precisely distinguish the proportions of different muscle fiber types.
Subsequently, the most capable features were analyzed jointly through principal component analysis. Two principal components were found for the signals emulated without noise (CP1 and CP2) and two for the noisy signals (here denoted CP1′ and CP2′). The first principal components (CP1 and CP1′) were identified as able to distinguish different proportions of muscle fiber types. The selected features (median frequency, mean frequency, CP1 and CP1′) were then used to analyze real sEMG signals, comparing sedentary people with physically active people who practice strength training (weight training). The results obtained with the two groups of volunteers show that the physically active people obtained higher values of mean frequency, median frequency and principal components than the sedentary people. Moreover, these values decreased with increasing power level for both groups, but the decline was more pronounced for the physically active group. Based on these results, it is assumed that the volunteers in the physically active group have higher proportions of type II fibers than the sedentary people. We conclude that the selected features were able to distinguish different proportions of muscle fiber types, both for the emulated and for the real signals. These features can be used in several contexts, for example, to evaluate the progress of people with myopathies and neuromyopathies undergoing physiotherapy, or to monitor the development of athletes seeking to improve their muscle capacity for their sport. In both cases, extracting these features from surface electromyography signals provides feedback to the physiotherapist or physical trainer, who can track the increase in the proportion of a given fiber type, as desired in each case.
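The two spectral features the study selects can be computed directly from a signal's power spectrum. A minimal NumPy sketch, with function and variable names of our own choosing (the thesis's exact estimators, windowing and epoching are not specified here):

```python
import numpy as np

def mean_median_frequency(signal, fs):
    """Mean and median frequency of a signal's power spectrum.

    A common pair of spectral EMG features; names here are
    illustrative, not taken from the thesis.
    """
    # One-sided power spectrum via the FFT
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    total_power = spectrum.sum()
    # Mean frequency: power-weighted average of the frequency axis
    mnf = (freqs * spectrum).sum() / total_power
    # Median frequency: frequency splitting the spectral power in half
    cumulative = np.cumsum(spectrum)
    mdf = freqs[np.searchsorted(cumulative, total_power / 2.0)]
    return mnf, mdf

# Example: a 50 Hz sine sampled at 1 kHz should give MNF ≈ MDF ≈ 50 Hz
fs = 1000
t = np.arange(0, 1, 1 / fs)
mnf, mdf = mean_median_frequency(np.sin(2 * np.pi * 50 * t), fs)
```

For real sEMG one would typically estimate the spectrum over band-pass-filtered epochs (e.g. with Welch's method) rather than taking a raw FFT of the whole record.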

Relevance: 100.00%

Abstract:

Data variability analysis has been the focus of a number of studies seeking to capture differences in the patterns generated by biological systems. Although several gait studies employ variability analysis, we noticed a lack of such information for subjects with unilateral coxarthrosis undergoing total hip arthroplasty (THA). To address this gap, we conducted a treadmill gait study with 10 healthy subjects (30.7 ± 6.75 years old), group G1, and 24 subjects (65 ± 8.5 years old) with unilateral THA, group G2. Using two inertial measurement units (IMUs) positioned on the pelvis, we developed a method to detect steps and strides, calculate their intervals, and extract signal features. Variability analysis (coefficient of variation) was performed on the extracted features and on the step and stride times. The averages and the 95% confidence intervals of the step and stride times for each group were in agreement with the literature. The mean coefficient of variation of the step and stride times was calculated and compared between groups by the Kruskal-Wallis test at the 95% confidence level. Each X, Y and Z component of the two IMUs' sensors (accelerometer, magnetometer and gyroscope) corresponded to a variable; the resultant of each sensor, the linear velocity (accelerometers) and the instantaneous angular displacement (gyroscopes) completed the set of variables. Features were extracted from the signals of these variables to assess variability in groups G1 and G2. There were significant differences (p < 0.05) between G1 and G2 in the average step and stride times. The variability of the step and stride times, as well as that of all other evaluated features, was higher for group G2 (p < 0.05).
The method proposed in this study proved suitable for measuring the variability of the biomechanical parameters related to the extracted features. All the extracted features distinguished the groups. Group G2 showed greater variability, so it is possible that both age and the pathological condition of the hip contributed to this result.
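The variability comparison described above pairs a simple statistic (the coefficient of variation) with a non-parametric group test. A small sketch using NumPy and SciPy; the stride times below are invented for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import kruskal

def coefficient_of_variation(x):
    """CV = sample standard deviation / mean, the variability measure used above."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

# Illustrative stride times in seconds (made-up values):
# a steady healthy-like series vs. a more irregular one.
g1_strides = np.array([1.02, 1.00, 0.99, 1.01, 1.03, 1.00])
g2_strides = np.array([1.20, 0.95, 1.35, 1.05, 0.88, 1.28])

cv_g1 = coefficient_of_variation(g1_strides)
cv_g2 = coefficient_of_variation(g2_strides)

# Kruskal-Wallis test comparing the two groups' stride times
stat, p_value = kruskal(g1_strides, g2_strides)
```

With more than two groups the same `kruskal` call accepts additional samples, which is what makes it a natural fit for this kind of between-group comparison.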

Relevance: 100.00%

Abstract:

A number of studies in Biomedical Engineering and the Health Sciences have employed machine learning tools to develop methods capable of identifying patterns in different data sets. Despite its elimination in many countries of the developed world, Hansen's disease still affects a large part of the population in countries such as India and Brazil. In this context, this research proposes to develop a method that may eventually make it possible to understand how Hansen's disease affects the facial muscles. Using surface electromyography, a system was adapted to capture signals from the largest possible number of facial muscles. We first reviewed the literature to learn how researchers around the globe have been studying diseases that affect the peripheral nervous system and how electromyography has contributed to the understanding of these diseases. From these data, a protocol was proposed for collecting facial surface electromyographic (sEMG) signals with a high signal-to-noise ratio. After collecting the signals, we sought a way of visualizing this information that would guarantee that the method used presented satisfactory results. Having established the method's efficiency, we investigated which information could be extracted from the electromyographic signals representing the collected data. Since no studies were found in the literature demonstrating which information could contribute to a better understanding of this pathology, amplitude, frequency and entropy parameters were extracted from the signal, and feature selection was performed to find the features that best distinguish a healthy individual from a pathological one.
We then sought the classifier that best discriminates individuals from the different groups, together with the set of classifier parameters yielding the best outcome. The protocol proposed in this study, with its adaptation of commercially available disposable electrodes, proved effective and suitable for use in other studies that collect facial electromyography data. The feature selection algorithm also showed that not all features extracted from the signal are significant for classification, with some more relevant than others. The Support Vector Machine (SVM) classifier proved efficient when an adequate kernel function was matched to the muscle from which information was to be extracted: each investigated muscle gave different results with linear, radial and polynomial kernel functions. Even though we focused on Hansen's disease, the method applied here can be used to study facial electromyography in other pathologies.
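The kernel comparison described above can be reproduced in outline with scikit-learn. The sketch below uses synthetic data in place of the sEMG feature matrix, so the dataset shape and sizes are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in feature matrix: rows = sEMG recordings, columns = extracted
# features (amplitude, frequency, entropy); synthetic, not thesis data.
X, y = make_classification(n_samples=200, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit one SVM per kernel family mentioned in the abstract
# (linear, radial basis, polynomial) and record test accuracy.
scores = {}
for kernel in ("linear", "rbf", "poly"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    clf.fit(X_train, y_train)
    scores[kernel] = clf.score(X_test, y_test)
```

Running such a loop per muscle, as the study does, makes it easy to see that the best kernel is muscle-dependent rather than universal.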


Relevance: 100.00%

Abstract:

Access technologies for communication based on scanning methods open new communication opportunities for individuals with severe motor dysfunction. One of the most common examples of this type of technology is single-switch scanning. Single-switch scanning keyboards are often used as augmentative and alternative communication devices for individuals with severe mobility restrictions and compromised speech and writing. They consist of a matrix of keys and simulate the operation of a physical keyboard for writing messages. One limitation of these systems is their low performance: low communication rates and frequent errors are among the problems that users of these devices face in daily use. Developing and evaluating new strategies in augmentative and alternative communication is essential to improve the communication opportunities of users of such technology. Thus, this work explores different strategies to increase the communication rate and reduce user errors. Computational and practical analyses were performed to evaluate the proposed strategies.
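One way to see why scanning keyboards are slow, and why layout strategies matter, is a simple timing model for row-column scanning: reaching the key at row r and column c (0-based) costs roughly r + c + 2 scan steps (the highlight walks down to the row, then across to the column, with one switch press per dimension). The model, layouts and letter probabilities below are illustrative assumptions, not taken from this work:

```python
def scan_steps(row, col):
    """Scan steps to select the key at (row, col) in row-column scanning."""
    return (row + 1) + (col + 1)

def average_selection_time(layout, probs, scan_period=1.0):
    """Expected selection time (s) given per-key usage probabilities."""
    total = 0.0
    for row, keys in enumerate(layout):
        for col, key in enumerate(keys):
            total += probs.get(key, 0.0) * scan_steps(row, col) * scan_period
    return total

# Toy 2x2 keyboard: placing frequent letters near the top-left
# lowers the expected selection time.
probs = {"e": 0.5, "t": 0.3, "q": 0.1, "z": 0.1}
frequency_ordered = [["e", "t"], ["q", "z"]]
alphabet_ordered = [["q", "z"], ["e", "t"]]

t_fast = average_selection_time(frequency_ordered, probs)
t_slow = average_selection_time(alphabet_ordered, probs)
```

Strategies such as frequency-ordered layouts and word prediction both act on the same quantity: the expected number of scan steps per selected symbol.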

Relevance: 100.00%

Abstract:

Lung cancer is the most common malignant tumor, with 1.59 million new cases worldwide in 2012. Early detection is the main factor determining the survival of patients affected by this disease, and correct classification is important for defining the most appropriate therapeutic approach as well as suggesting the prognosis and clinical evolution of the disease. Among the exams used to detect lung cancer, computed tomography (CT) has been the most widely indicated. However, CT images are inherently complex, and even medical experts are subject to errors of detection or classification. To assist in the detection of malignant tumors, computer-aided diagnosis systems have been developed to help reduce the number of biopsies caused by false positives. In this work, an automatic classification system for pulmonary nodules in CT images was developed using Artificial Neural Networks. Morphological, texture and intensity attributes were extracted from tomographic image patches of lung nodules using elliptical regions of interest, which were subsequently segmented by Otsu's method. These features were ranked and selected through statistical tests that compare populations (Student's t-test and the Mann-Whitney U test). The selected features were fed into backpropagation Artificial Neural Networks forming a cascade classifier: one network to classify nodules as malignant or benign (network 1), and another to classify two types of malignancy (network 2). The best networks were combined, and their performance was measured by the area under the ROC curve: networks 1 and 2 achieved 0.901 and 0.892, respectively.

Relevance: 100.00%

Abstract:

Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single- and multi-image methods. This thesis focuses on developing algorithms, based on mathematical theories, for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although existing methods already perform well, they do not take the geometry of the data into account when regularizing the solution, clustering data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learning dictionaries (often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS, a structure-tensor-based regularization term, in order to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size; aSOB outperforms both the PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR. Our proposed G2SR algorithm shows better visual and quantitative results than state-of-the-art methods.
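The structure tensor behind SE-ASDS summarizes local gradient behavior: at a coherent edge its larger eigenvalue dominates the smaller one. A NumPy sketch of that quantity, using a flat box window instead of the usual Gaussian for brevity (the exact weighting in SE-ASDS may differ):

```python
import numpy as np

def structure_tensor(img, window_radius=1):
    """Eigenvalues (l1 >= l2) of the per-pixel 2x2 structure tensor."""
    gy, gx = np.gradient(img.astype(float))
    # Tensor components before windowing
    jxx, jxy, jyy = gx * gx, gx * gy, gy * gy

    def box_smooth(a, r):
        # Flat (2r+1)x(2r+1) averaging window, edge-padded
        out = np.zeros_like(a)
        pad = np.pad(a, r, mode="edge")
        for dy in range(2 * r + 1):
            for dx in range(2 * r + 1):
                out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (2 * r + 1) ** 2

    jxx, jxy, jyy = (box_smooth(c, window_radius) for c in (jxx, jxy, jyy))
    # Closed-form eigenvalues of [[jxx, jxy], [jxy, jyy]]
    tr, det = jxx + jyy, jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0))
    return tr / 2 + disc, tr / 2 - disc

# Vertical step edge: l1 should dominate l2 along the edge,
# which is exactly the anisotropy an edge-sharpening term exploits.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
l1, l2 = structure_tensor(img)
```

A regularizer can then penalize blur along the dominant eigenvector direction while smoothing across it, which is the intuition behind structure-tensor-based terms.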

Relevance: 100.00%

Abstract:

Health professionals in several areas (pediatricians, nutritionists, orthopedists, endocrinologists, dentists, etc.) use bone age assessment to diagnose growth disorders in children. Through interviews with specialists in diagnostic imaging and a review of the literature, we identified the Tanner-Whitehouse (TW) method as the most efficient. Even though it achieves better results than other methods, it is still not the most widely used, owing to the complexity of its application. This work demonstrates the possibility of automating the method and thereby making its use more widespread. Two important steps in bone age evaluation are addressed: the identification and the classification of regions of interest. Even in radiographs in which the positioning of the hand was not suitable for the TW method, the finger identification algorithm showed good results. Likewise, the use of Active Appearance Models (AAM) gave good results in identifying regions of interest, even in radiographs with large contrast and brightness variation. Appearance-based classification of the epiphyses into their stages of development also performed well; the middle epiphysis of finger III (the middle finger) was chosen to demonstrate performance. The final results show an average hit rate of 90%, and among the misclassified cases the error was only one stage away from the correct stage.
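The reported results combine an exact hit rate with the observation that errors stayed within one stage. Both can be expressed as a small ordinal metric; the stage values below are invented for illustration, not thesis data:

```python
def stage_accuracy(true_stages, predicted_stages):
    """Exact-stage accuracy plus within-one-stage accuracy.

    Stages are ordinal (e.g. TW epiphyseal stages encoded as integers),
    so the distance between predicted and true stage is meaningful.
    """
    n = len(true_stages)
    exact = sum(t == p for t, p in zip(true_stages, predicted_stages))
    within_one = sum(abs(t - p) <= 1
                     for t, p in zip(true_stages, predicted_stages))
    return exact / n, within_one / n

# Made-up example: 8/10 exact hits, every error off by a single stage.
true_stages = [3, 4, 5, 5, 6, 7, 7, 8, 8, 8]
pred_stages = [3, 4, 5, 6, 6, 7, 7, 8, 8, 7]
exact, within_one = stage_accuracy(true_stages, pred_stages)
```

Reporting both numbers, as the abstract does, separates "how often the classifier is right" from "how badly it misses when it is wrong".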

Relevance: 100.00%

Abstract:

In this work, mathematical resolutions were developed, taking maximum permissible intensity values as parameters for the analysis of electric and magnetic field interference, and two families of virtual computing systems supporting CDMA and WCDMA technologies were produced. In the first family, computational resources were developed to calculate electric and magnetic fields and power densities at radio base stations using CDMA technology in the 800 MHz band, taking into account the permissible values referenced by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). The first family is divided into two calculation segments operated virtually. The first segment computes the field radiated by the base station from input information such as radio channel power, antenna gain, number of radio channels, operating frequency, cable losses, directional attenuation, minimum distance, and reflections. This computing system makes it possible, quickly and without deploying measurement instruments, to obtain the following calculated values: effective radiated power; sector power density; electric field in the sector; magnetic field in the sector; magnetic flux density; and the maximum permissible exposure point for electric field and power density. The results are shown in charts, for clarity in viewing the power density in the sector as well as in defining the coverage area. The computing module also includes specification folders for the antennas, cables and towers used in cellular telephony from the following manufacturers: RFS World, Andrew, Karthein and BRASILSAT, and several Internet links are provided to supplement the cable and antenna specifications. The second segment of the first family works with more variables, seeking to perform calculations quickly and safely to assist in obtaining the radio signal loss produced by the base station.
This module displays screens representing two propagation systems, denominated "A" and "B". With propagation "A", radio signal attenuation is calculated for urban, dense urban, suburban, and open rural area models. The reflection calculations include the reflection coefficients, the standing wave ratio, the return loss, the reflected power ratio, and the signal loss due to impedance mismatch. With propagation "B", the system computes radio signal losses over line-of-sight and non-line-of-sight paths, the effective area, the power density, the received power, the coverage radius, the conversion levels and the conversion gain of radiating systems. The second family of the virtual computing system consists of 7 modules, of which 5 are geared towards WCDMA design and 2 towards the calculation of telephone traffic for CDMA and WCDMA; it also includes a portfolio of the radiating systems used at the site. Module 1 computes: frequency reuse distance, channel capacity with and without noise, Doppler frequency, modulation rate and channel efficiency. Module 2 computes the cell area, thermal noise, noise power (dB), noise figure, signal-to-noise ratio, and bit power (dBm). Module 3 calculates: breakpoint, processing gain (dB), path loss from the BTS, noise power (W), chip period and frequency reuse factor. Module 4 computes effective radiated power, sectorization gain, voice activity and load effect. Module 5 calculates the processing gain (Hz/bps), bit time, and bit energy (Ws). Module 6 deals with telephone traffic and computes: traffic volume, occupancy intensity, average occupancy time, traffic intensity, completed calls, and congestion. Module 7 also deals with telephone traffic and calculates completed and uncompleted calls in the HMM.
Field performance tests were performed on the mobile network to obtain data on: CINP, CPI, RSRP, RSRQ, EARFCN, Drop Call, Block Call, Pilot, Data BLER, RSCP, Short Call, Long Call and Data Call; Ec/Io for Short Call and Long Call; and Data Call throughput. Surveys of the electric and magnetic fields at a base station were also conducted, in order to observe the degree of exposure to non-ionizing radiation experienced by the general public and by occupational personnel. The results were compared with the permissible values for health endorsed by the ICNIRP and CENELEC.
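The first-segment quantities (effective radiated power, sector power density) follow standard link-budget formulas. A sketch under common far-field assumptions; the operating values are illustrative, not taken from the work:

```python
import math

def erp_watts(tx_power_w, antenna_gain_dbd, cable_loss_db):
    """Effective radiated power: transmitter power times net gain.

    Gain is referenced to a half-wave dipole (dBd), as ERP
    conventionally is.
    """
    net_gain_db = antenna_gain_dbd - cable_loss_db
    return tx_power_w * 10 ** (net_gain_db / 10)

def power_density_w_m2(erp_w, distance_m):
    """Far-field power density from an isotropic spreading model.

    S = EIRP / (4*pi*d^2); converting ERP to EIRP adds 2.15 dB.
    Real exposure assessments also account for reflections and
    antenna patterns, which this sketch ignores.
    """
    eirp_w = erp_w * 10 ** (2.15 / 10)
    return eirp_w / (4 * math.pi * distance_m ** 2)

# Illustrative 800 MHz base-station sector: 20 W per radio channel,
# 15 dBd antenna, 3 dB cable loss, evaluated 50 m from the antenna.
erp = erp_watts(20.0, 15.0, 3.0)
s = power_density_w_m2(erp, 50.0)
```

Comparing `s` against the applicable ICNIRP reference level for the operating frequency is the final step the text describes.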

Relevance: 100.00%

Abstract:

The hydrocycloning operation aims to separate solid-liquid suspensions and liquid-liquid emulsions through the action of centrifugal force. Hydrocyclones are compact devices used for both clarification and thickening; they are applied in many areas, such as petrochemical and mineral processing, and combine advantages such as versatility and low maintenance cost. However, the demand to improve processes and reduce costs has motivated several equipment optimization studies. The filtering hydrocyclone is a non-conventional device developed at FEQUI/UFU with the objective of improving hydrocycloning separation efficiency. The purpose of this study is to evaluate the effects of feed concentration and underflow diameter on the performance of a filtering geometry optimized to minimize energy costs. The filtration effect was investigated by comparing the performance of the Optimized Filtering Hydrocyclone (HCOF) with that of the Optimized Concentrator Hydrocyclone (HCO). Because the performances of the two hydrocyclones were similar, filtration had no significant effect on the performance of the HCOF. In this geometry, decreasing the underflow diameter proved very favorable to the thickening operation: a quartzite suspension at 1.0% solids by volume was concentrated about 42-fold when the 3 mm underflow diameter was used. Increasing the feed solids percentage reduced the energy spent, so that a minimum Euler number of 730 was achieved at CVA = 10.0% by volume. However, a greater amount of solids in suspension leads to a lower equipment efficiency. Therefore, to minimize the underflow-to-throughput ratio while keeping a high efficiency, it is advisable to work with a dilute suspension (CVA = 1.0%) and a 3 mm underflow diameter (η = 67%).
If it is necessary to work at high feed concentration, a 5 mm underflow diameter raises the efficiency. The HCO hydrocyclone was compared with the traditional Rietema family of hydrocyclones and showed advantages such as higher efficiency (34% higher on average) and lower energy costs (20% lower on average). Finally, efficiency curves and a design equation were obtained for the HCO hydrocyclone, each with a satisfactory fit.
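The Euler number quoted above is a dimensionless pressure drop. A sketch under one common convention (characteristic velocity taken in the cylindrical section; the thesis may normalize differently), with made-up operating values:

```python
import math

def euler_number(pressure_drop_pa, density_kg_m3, flow_rate_m3_s,
                 cyl_diameter_m):
    """Eu = dP / (rho * u^2 / 2), one common hydrocyclone convention.

    u is the superficial velocity in the cylindrical section; a lower
    Eu means less pressure drop (energy) per unit of dynamic pressure.
    """
    area = math.pi * cyl_diameter_m ** 2 / 4
    u = flow_rate_m3_s / area
    return pressure_drop_pa / (0.5 * density_kg_m3 * u ** 2)

# Illustrative operating point (not thesis data): water-like suspension
# at 1000 kg/m3, 60 kPa pressure drop, 0.287 L/s through a 30 mm
# cylindrical section, chosen to land near the Eu ≈ 730 regime cited.
eu = euler_number(60e3, 1000.0, 0.287e-3, 0.030)
```

Tracking how `eu` falls as feed concentration rises is exactly the energy-cost trend the abstract reports.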