954 results for Electromyography analysis techniques
Abstract:
Objectives: The aim of this study was to compare the fracture strength of three techniques used to re-attach tooth fragments in sound and endodontically treated fractured teeth with or without fiber post placement. Material and methods: Ninety human lower incisors were randomly divided into three groups of 30 teeth each. In group A, teeth were not subjected to endodontic treatment, while teeth from groups B and C were endodontically treated and had the pulp chamber restored with a composite resin. All teeth were fractured by an axial load applied to the buccal area in order to obtain tooth fragments. Teeth from each group were then divided into three subgroups according to the re-attachment technique: bonded-only, buccal chamfer and circumferential chamfer. Before the re-attachment procedures, fiber posts were placed in teeth from group C using a dual cure resin luting cement (Duo-Link). All teeth (groups A-C) had the fragments re-attached using the same dual cure resin luting cement. In the bonded-only subgroup, no additional preparation was made. After re-attachment of the fragment, teeth from the buccal chamfer and circumferential chamfer subgroups had a 1.0 mm deep chamfer placed along the fracture line, either on the buccal surface or along the buccal and lingual surfaces, respectively. Increments of microhybrid composite resin (Tetric Ceram) were used in the buccal chamfer and circumferential chamfer subgroups to restore the chamfer. The specimens were loaded until fracture in the same pre-determined area. The force required to detach each fragment was recorded, and the data were subjected to a three-way analysis of variance, with Group and Re-attachment technique as independent-measures factors and Time of fracture (first and second) as a repeated-measures factor, followed by Tukey's test (alpha = 0.05). Results: The main factors Re-attachment technique (p = 0.04) and Time of fracture (p = 0.02) were statistically significant. The buccal and circumferential chamfer techniques were statistically similar (p > 0.05) and superior to the bonded-only group (p < 0.05). The first time of fracture was statistically superior to the second time of fracture (p < 0.001). Conclusions: The use of a fiber post is not necessary to reinforce the tooth structure in re-attachment of endodontically treated teeth. When bonding a fractured fragment, the buccal or circumferential chamfer re-attachment techniques should be preferred over simple re-attachment without any additional preparation. None of the techniques used for re-attachment restored the fracture strength of the intact teeth. (C) 2008 Elsevier Ltd. All rights reserved.
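The analysis above mixes two between-specimen factors with one repeated factor. A minimal sketch of how such a mixed design could be set up in Python is shown below, using a linear mixed model as an approximation of the three-way ANOVA with a repeated measure, followed by a Tukey HSD comparison of the techniques; the data frame, factor levels and force values are synthetic placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder long-format data: one row per tooth per fracture event.
rng = np.random.default_rng(0)
rows = []
for tooth in range(90):
    group = ["A", "B", "C"][tooth // 30]
    technique = ["bonded-only", "buccal chamfer", "circumferential chamfer"][tooth % 3]
    for time in ["first", "second"]:
        rows.append({"tooth": tooth, "group": group, "technique": technique,
                     "time": time, "force": rng.normal(500, 80)})
df = pd.DataFrame(rows)

# Mixed model: fixed effects for the three factors and their interactions,
# random intercept per tooth to account for the repeated fracture of the same specimen.
model = smf.mixedlm("force ~ C(group) * C(technique) * C(time)",
                    data=df, groups=df["tooth"]).fit()
print(model.summary())

# Tukey HSD post-hoc comparison of the re-attachment techniques (alpha = 0.05).
print(pairwise_tukeyhsd(df["force"], df["technique"], alpha=0.05))
```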
Abstract:
Objective. The aim of this study was to identify the behavior of masticatory muscles after fractures of the zygomatico-orbital complex (ZOC) and subsequent surgical treatment, by using analyses of bite force, electromyography (EMG), and mandible mobility during a 6-month period after surgery. Study design. Five patients with fractured ZOCs treated surgically by using an intraoral approach and fixation exclusively in the region of the zygomaticomaxillary buttress were evaluated. The control group included 12 other patients. During postoperative follow-up, bite force, mandible mobility, and EMG analysis of the masticatory muscles were evaluated. Results. There was an increase in bite force with time, but a decline in EMG activity during the same period. In the mandible mobility analysis, only maximum mouth-opening values increased significantly after the surgical treatment. Conclusions. The masticatory musculature, according to bite force and EMG, returned to its normal condition by the second month after surgery, and maximum mouth opening was observed after the first month. (Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2011;111:e1-e7)
Abstract:
The masseter and temporal muscles of patients with maxillary and mandibular osteoporosis were submitted to electromyographic analysis and compared with a control group. This study aimed to examine electromyographically the masseter and temporal muscles of patients with maxillary and mandibular osteoporosis and to compare these patients with control patients during habitual and non-habitual mastication. Sixty individuals of both genders, with an average age of 53.0 +/- 5 years, took part in the study, distributed into two groups of 30 individuals each: (1) individuals with osteoporosis and (2) control patients. The electromyographic apparatus used was a Myosystem-BR1 (DataHomins Technology Ltda.), with five acquisition channels and active differential electrodes. Statistical analysis of the results was performed using SPSS version 15.0 (Chicago, IL, USA). The result of the Student's t test indicated no significant differences (p > 0.05) between the normalized values of the ensemble average obtained in masticatory cycles in the two groups. Based on the results of this study, it was concluded that individuals with osteoporosis did not show significantly lower masticatory cycle performance and efficiency compared to control subjects during habitual and non-habitual mastication. This result is important because it demonstrates that the complex physiological process of mastication remains functional in individuals with osteoporosis of the bones that compose the face.
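The comparison above rests on time-normalized ensemble averages of EMG masticatory cycles compared between groups with a t test. The sketch below illustrates that general idea only; the cycle segmentation, amplitude normalization and all numeric values are synthetic placeholders, not the study's processing chain.

```python
import numpy as np
from scipy.stats import ttest_ind

def ensemble_average(cycles, n_points=100):
    """Time-normalize each masticatory cycle to n_points and average across cycles."""
    resampled = [np.interp(np.linspace(0, 1, n_points),
                           np.linspace(0, 1, len(c)), c) for c in cycles]
    return np.mean(resampled, axis=0)

# Placeholder data: per-subject lists of rectified EMG cycles of varying length,
# assumed to be already amplitude-normalized.
rng = np.random.default_rng(0)
def fake_subject():
    return [np.abs(rng.normal(1.0, 0.3, size=rng.integers(80, 120))) for _ in range(10)]

osteoporosis = [ensemble_average(fake_subject()).mean() for _ in range(30)]
controls     = [ensemble_average(fake_subject()).mean() for _ in range(30)]

t_stat, p_value = ttest_ind(osteoporosis, controls)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # p > 0.05 would mirror the reported result
```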
Abstract:
Estimation of total body water by measuring bioelectrical impedance at a fixed frequency of 50 kHz is useful in assessing body composition in healthy populations. However, in cirrhosis, the distribution of total body water between the extracellular and intracellular compartments is of greater clinical importance. We report an evaluation of a new multiple-frequency bioelectrical-impedance analysis technique (MFBIA) that may quantify the distribution of total body water in cirrhosis. In 21 cirrhotic patients and 21 healthy control subjects, impedance to the flow of current was measured at frequencies ranging from 4 to 1012 kHz. These measurements were used to estimate body water compartments and then compared with total body water and extracellular water determined by isotope methodology. In cirrhotic patients, extracellular water and total body water (as determined by isotope methods) were well predicted by MFBIA (r = 0.73 and 0.89, respectively). However, the 95% confidence intervals of the limits of agreement between MFBIA and the isotope methods were +/-14% and +/-9% for cirrhotics (extracellular water and total body water, respectively) and +/-9% and +/-9% for cirrhotics without ascites. The 95% confidence intervals estimated from the control group were +/-10% and +/-5% for extracellular water and total body water, respectively. Thus, despite strong correlations between MFBIA and isotope measurements, the relatively large limits of agreement with accepted techniques suggest that the MFBIA technique requires further refinement before it can be routinely used in the nutritional assessment of individual cirrhotic patients. Nutrition 2001;17:31-34. (C) Elsevier Science Inc. 2001.
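A brief sketch of the two agreement statistics reported above, a Pearson correlation and Bland-Altman style 95% limits of agreement expressed as a percentage of the reference mean, is given below; the arrays are placeholder values, not the study's measurements.

```python
import numpy as np

isotope = np.array([38.2, 41.5, 35.0, 44.1, 39.8])   # total body water, litres (placeholder)
mfbia   = np.array([37.0, 43.0, 34.2, 45.5, 38.9])   # MFBIA estimate, litres (placeholder)

r = np.corrcoef(isotope, mfbia)[0, 1]

diff = mfbia - isotope
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                 # 95% limits of agreement around the bias
loa_percent = 100 * loa / isotope.mean()      # expressed relative to the reference mean

print(f"r = {r:.2f}, bias = {bias:.2f} L, limits of agreement = +/-{loa_percent:.1f}%")
```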
Abstract:
Objective-To compare the accuracy and feasibility of harmonic power Doppler and digitally subtracted colour coded grey scale imaging for the assessment of perfusion defect severity by single photon emission computed tomography (SPECT) in an unselected group of patients. Design-Cohort study. Setting-Regional cardiothoracic unit. Patients-49 patients (mean (SD) age 61 (11) years; 27 women, 22 men) with known or suspected coronary artery disease were studied with simultaneous myocardial contrast echo (MCE) and SPECT after standard dipyridamole stress. Main outcome measures-Regional myocardial perfusion by SPECT, performed with Tc-99m tetrofosmin, scored qualitatively and also quantitated as per cent maximum activity. Results-Normal perfusion was identified by SPECT in 225 of 270 segments (83%). Contrast echo images were interpretable in 92% of patients. The proportions of normal MCE by the grey scale, subtracted, and power Doppler techniques were respectively 76%, 74%, and 88% (p < 0.05) at > 80% of maximum counts, compared with 65%, 69%, and 61% at < 60% of maximum counts. For each technique, specificity was lowest in the lateral wall, although power Doppler was the least affected. The grey scale and subtraction techniques were least accurate in the septal wall, but power Doppler showed particular problems in the apex. On a per patient analysis, the sensitivity for detection of coronary artery disease was 67%, 75%, and 83% using grey scale, colour coded, and power Doppler, respectively, with a significant difference between power Doppler and grey scale only (p < 0.05). Specificity was also highest for power Doppler, at 55%, but not significantly different from subtracted colour coded images. Conclusions-Myocardial contrast echo using harmonic power Doppler has greater accuracy than grey scale imaging and digital subtraction. However, power Doppler appears to be less sensitive for mild perfusion defects.
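The per-patient figures above are ordinary sensitivity/specificity calculations against SPECT taken as the reference standard. A minimal sketch of that calculation follows; the label vectors are placeholders, not the study's patient-level data.

```python
import numpy as np

def sensitivity_specificity(reference, test):
    """Sensitivity and specificity of a test against a binary reference standard."""
    reference, test = np.asarray(reference, bool), np.asarray(test, bool)
    tp = np.sum(reference & test)
    tn = np.sum(~reference & ~test)
    fn = np.sum(reference & ~test)
    fp = np.sum(~reference & test)
    return tp / (tp + fn), tn / (tn + fp)

spect_positive = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]   # disease by reference (placeholder)
power_doppler  = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1]   # technique under test (placeholder)

sens, spec = sensitivity_specificity(spect_positive, power_doppler)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```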
Abstract:
The blending of coals has become popular as a way to improve the performance of coals, to meet power plant specifications and to reduce the cost of coals. This article reviews the results and provides new information on ignition, flame stability, and carbon burnout studies of blended coals. The reviewed studies were conducted in laboratory-, pilot-, and full-scale facilities; the new information was obtained from pilot-scale studies. The results generally show that blending a high-volatile coal with a low-volatile coal or anthracite can improve the ignition, flame stability and burnout of the blends. This paper discusses two general methods to predict the performance of blended coals: (1) experiment; and (2) indices. Laboratory- and pilot-scale tests provide, at least, a relative ranking of the combustion performance of coals and blends in power station boilers. Several indices, including volatile matter content, heating value and a maceral index, can be used to predict the relative ranking of ignitability and flame stability of coals and blends. The maceral index, fuel ratio, and vitrinite reflectance can also be used to predict, within limits, the absolute carbon burnout of coals and blends. (C) 2000 Elsevier Science Ltd. All rights reserved.
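As an illustration of the index-based approach mentioned above, the sketch below computes mass-weighted blend properties and the fuel ratio, taking the common convention of fuel ratio as fixed carbon over volatile matter; the proximate-analysis values and blend ratio are placeholders, not data from the reviewed studies.

```python
def blend_property(values, fractions):
    """Mass-weighted average of a proximate-analysis property for a blend."""
    return sum(v * f for v, f in zip(values, fractions))

# Placeholder proximate analyses (% dry basis) for a high- and a low-volatile coal.
volatile_matter = [35.0, 12.0]
fixed_carbon    = [55.0, 80.0]
fractions       = [0.7, 0.3]          # blend ratio by mass

vm_blend = blend_property(volatile_matter, fractions)
fc_blend = blend_property(fixed_carbon, fractions)
fuel_ratio = fc_blend / vm_blend      # a higher ratio generally implies harder ignition

print(f"blend volatile matter = {vm_blend:.1f}%, fuel ratio = {fuel_ratio:.2f}")
```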
Abstract:
It has been recognised that, in order to study the displacement, timing and co-ordination of articulatory components (i.e., tongue, lips, jaw) in speech production, it is desirable to obtain high-resolution movement data on multiple structures inside and outside the vocal tract. Until recently, with the exception of X-ray techniques such as cineradiography, the study of speech movements has been hindered by the inaccessibility of the oral cavity during speech. X-ray techniques are generally not used because of unacceptable radiation exposure. The aim of the present study was to demonstrate the use of a new physiological device, the electromagnetic articulograph, for assessing articulatory dysfunction subsequent to traumatic brain injury. The components of the device, together with the measuring principle, are described and data collected from a single case are presented. A 19-year-old male who exhibited dysarthria subsequent to a traumatic brain injury was fitted with the electromagnetic articulograph (Carstens AG-100), and a kinematic analysis of his tongue movements during production of the lingual consonants /t, s, k/ within single-syllable words was performed. Examination of kinematic parameters, including movement trajectories, velocity, and acceleration, revealed differences in the speed and accuracy of his tongue movements compared to those produced by a non-neurologically impaired adult male. It was concluded that the articulograph is a useful device for diagnosing speed and accuracy disorders in tongue movements during speech and that the device has potential for incorporation into physiologically based rehabilitation programs as a real-time biofeedback instrument.
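The kinematic parameters mentioned above (velocity and acceleration) are typically derived from sampled position traces by numerical differentiation. The following is a minimal sketch of that step only; the sampling rate and tongue-displacement trace are synthetic placeholders, not articulograph recordings.

```python
import numpy as np

fs = 200.0                                   # sampling rate in Hz (placeholder)
t = np.arange(0, 1, 1 / fs)
position = 5.0 * np.sin(2 * np.pi * 2 * t)   # placeholder tongue displacement in mm

velocity = np.gradient(position, 1 / fs)     # mm/s
acceleration = np.gradient(velocity, 1 / fs) # mm/s^2

print(f"peak velocity = {np.max(np.abs(velocity)):.1f} mm/s, "
      f"peak acceleration = {np.max(np.abs(acceleration)):.1f} mm/s^2")
```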
Abstract:
Neurological disease or dysfunction in newborn infants is often first manifested by seizures. Prolonged seizures can result in impaired neurodevelopment or even death. In adults, the clinical signs of seizures are well defined and easily recognized. In newborns, however, the clinical signs are subtle and may be absent or easily missed without constant close observation. This article describes the use of adaptive signal processing techniques for removing artifacts from newborn electroencephalogram (EEG) signals. Three adaptive algorithms have been designed in the context of EEG signals. This preprocessing is necessary before attempting a fine time-frequency analysis of EEG rhythmical activities, such as electrical seizures, corrupted by high amplitude signals. After an overview of newborn EEG signals, the authors describe the data acquisition set-up. They then introduce the basic physiological concepts related to normal and abnormal newborn EEGs and discuss the three adaptive algorithms for artifact removal. They also present time-frequency representations (TFRs) of seizure signals and discuss the estimation and modeling of the instantaneous frequency related to the main ridge of the TFR.
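The article's three algorithms are not reproduced here, but a classic adaptive noise-cancellation scheme of the same family is sketched below: a reference channel correlated with the artifact is used to estimate and subtract the artifact from the EEG channel via an LMS weight update. All signals and parameters are synthetic placeholders.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """Return the cleaned signal: primary minus the LMS estimate of the artifact."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        estimate = np.dot(w, x)             # current artifact estimate
        e = primary[n] - estimate           # error = cleaned EEG sample
        w += 2 * mu * e * x                 # LMS weight update
        cleaned[n] = e
    return cleaned

# Synthetic example: EEG-like noise corrupted by a reference-correlated artifact.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 256)
eeg = rng.normal(scale=0.5, size=t.size)
artifact_ref = np.sin(2 * np.pi * 1.2 * t)
corrupted = eeg + 3.0 * artifact_ref

cleaned = lms_cancel(corrupted, artifact_ref)
print(f"residual artifact power: {np.var(cleaned - eeg):.3f}")
```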
Abstract:
The personal computer revolution has resulted in the widespread availability of low-cost image analysis hardware. At the same time, new graphic file formats have made it possible to handle and display images at resolutions beyond the capability of the human eye. Consequently, there has been a significant research effort in recent years aimed at making use of these hardware and software technologies for flotation plant monitoring. Computer-based vision technology is now moving out of the research laboratory and into the plant to become a useful means of monitoring and controlling flotation performance at the cell level. This paper discusses the metallurgical parameters that influence surface froth appearance and examines the progress that has been made in image analysis of flotation froths. The texture spectrum and pixel tracing techniques developed at the Julius Kruttschnitt Mineral Research Centre are described in detail. The commercial implementation, JKFrothCam, is one of a number of froth image analysis systems now reaching maturity. In plants where it is installed, JKFrothCam has shown a number of performance benefits. Flotation runs more consistently, meeting product specifications while maintaining high recoveries. The system has also shown secondary benefits in that reagent costs have been significantly reduced as a result of improved flotation control. (C) 2002 Elsevier Science B.V. All rights reserved.
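To give a concrete sense of the texture spectrum idea referenced above, the rough sketch below encodes each pixel's 3x3 neighbourhood relative to the centre pixel and histograms the resulting texture units. It follows the general texture-spectrum concept only; it is not the JKFrothCam implementation, and the image patch is a random placeholder.

```python
import numpy as np

def texture_spectrum(image):
    """Histogram of base-3 texture units computed over 3x3 neighbourhoods."""
    img = np.asarray(image, dtype=int)
    h, w = img.shape
    units = []
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i, j]
            unit = 0
            for k, (di, dj) in enumerate(offsets):
                v = img[i + di, j + dj]
                code = 0 if v < centre else (1 if v == centre else 2)
                unit += code * 3 ** k
            units.append(unit)
    return np.bincount(units, minlength=3 ** 8)

# Placeholder froth image patch (grey levels); in practice this would be a camera frame.
patch = np.random.default_rng(1).integers(0, 256, size=(32, 32))
spectrum = texture_spectrum(patch)
print(f"number of occupied texture units: {np.count_nonzero(spectrum)}")
```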
Abstract:
Background: Diagnosis of the HIV-associated lipodystrophy syndrome is based on clinical assessment, in the absence of a consensus about case definition and reference methods. Three bedside methods were compared in their diagnostic value for lipodystrophy. Patients and Methods: Consecutive HIV-infected outpatients (n = 278) were investigated, 128 of whom also had data from 1997 available. Segmental bioelectrical impedance analysis (BIA) was performed, and waist, hip and thigh circumferences were measured. Changes in seven body regions were rated by physicians and patients using linear analogue scale assessment (LASA). Diagnostic cut-off values were sought by receiver operator characteristics. Results: Lipodystrophy was diagnosed in 85 patients (31%). BIA demonstrated higher fat-free mass in patients with lipodystrophy, but not after controlling for body mass index and sex. Segmental BIA was not superior to whole body BIA in detecting lipodystrophy. Fat-free mass increased from 1997 to 1999 independently of lipodystrophy. Waist-hip and waist-thigh ratios were higher in patients with lipodystrophy. BIA, anthropometry and LASA did not provide sufficient diagnostic cut-off values for lipodystrophy. Agreement between methods, and between patient and physician rating, was poor. Conclusion: These methods do not fulfil the urgent need for quantitative diagnostic tools for lipodystrophy. BIA estimates of fat-free mass may be biased by lipodystrophy, indicating a need for re-calibration in HIV-infected populations. (C) 2001 Harcourt Publishers Ltd.
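The receiver-operating-characteristic cut-off search described above can be illustrated by choosing the threshold that maximises the Youden index (sensitivity + specificity - 1). The sketch below shows that generic procedure for a waist-hip ratio; the diagnoses and ratios are placeholder values, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

lipodystrophy = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])   # clinical diagnosis (placeholder)
waist_hip     = np.array([0.98, 1.02, 0.95, 0.88, 0.91,
                          0.86, 0.99, 0.90, 0.93, 0.89])    # placeholder ratios

fpr, tpr, thresholds = roc_curve(lipodystrophy, waist_hip)
youden = tpr - fpr                      # Youden index at each candidate threshold
best = thresholds[np.argmax(youden)]

print(f"AUC = {roc_auc_score(lipodystrophy, waist_hip):.2f}, best cut-off = {best:.2f}")
```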
Abstract:
The development of the new TOGA (titration and off-gas analysis) sensor for the detailed study of biological processes in wastewater treatment systems is outlined. The main innovation of the sensor is the amalgamation of titrimetric and off-gas measurement techniques. The resulting measured signals are: hydrogen ion production rate (HPR), oxygen transfer rate (OTR), nitrogen transfer rate (NTR), and carbon dioxide transfer rate (CTR). While OTR and NTR are applicable to aerobic and anoxic conditions, respectively, HPR and CTR are useful signals under all of the conditions found in biological wastewater treatment systems, namely, aerobic, anoxic and anaerobic. The sensor is therefore a powerful tool for studying the key biological processes under all these conditions. A major benefit from the integration of the titrimetric and off-gas analysis methods is that the acid/base buffering systems, in particular the bicarbonate system, are properly accounted for. Experimental data resulting from the TOGA sensor in aerobic, anoxic, and anaerobic conditions demonstrate the strength of the new sensor. In the aerobic environment, carbon oxidation (using acetate as an example carbon source) and nitrification are studied. Both the carbon and ammonia removal rates measured by the sensor compare very well with those obtained from off-line chemical analysis. Further, the aerobic acetate removal process is examined at a fundamental level using the metabolic pathway and stoichiometry established in the literature, whereby the rate of formation of storage products is identified. Under anoxic conditions, the denitrification process is monitored and, again, the measured rate of nitrogen gas transfer (NTR) matches well with the removal of the oxidised nitrogen compounds (measured chemically). In the anaerobic environment, the enhanced biological phosphorus removal process was investigated. In this case, the measured sensor signals (HPR and CTR) resulting from acetate uptake were used to determine the ratio of the rates of carbon dioxide production by competing groups of microorganisms, which consequently is a measure of the activity of these organisms. The sensor involves the use of expensive equipment such as a mass spectrometer and requires special gases to operate, thus incurring significant capital and operational costs. This makes the sensor more an advanced laboratory tool than an on-line sensor. (C) 2003 Wiley Periodicals, Inc.
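As a minimal illustration of the kind of gas-phase mass balance underlying an off-gas transfer-rate signal such as OTR, the sketch below uses a simplified balance (dry gas, constant flow, ideal-gas molar volume); all flow, composition and volume figures are placeholders, not values or equations from the TOGA work.

```python
# Simplified off-gas oxygen transfer rate (OTR) from an O2 balance over the reactor.
Q_gas = 1.0            # gas flow rate, L/min (placeholder)
V_liquid = 4.0         # reactor liquid volume, L (placeholder)
y_O2_in = 0.2095       # O2 mole fraction entering (air)
y_O2_out = 0.1950      # O2 mole fraction in the off-gas (placeholder measurement)
molar_volume = 24.0    # L/mol at roughly 20 degC and 1 atm

# Moles of O2 consumed per minute, normalised by liquid volume.
OTR = Q_gas * (y_O2_in - y_O2_out) / molar_volume / V_liquid   # mol O2 / (L * min)
OTR_mg_per_L_h = OTR * 32.0 * 1000 * 60                        # convert to mg O2 / (L * h)

print(f"OTR = {OTR_mg_per_L_h:.1f} mg O2 per litre per hour")
```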
Abstract:
Background and Purpose. This study evaluated an electromyographic technique for the measurement of muscle activity of the deep cervical flexor (DCF) muscles. Electromyographic signals were detected from the DCF, sternocleidomastoid (SCM), and anterior scalene (AS) muscles during performance of the craniocervical flexion (CCF) test, which involves performing 5 stages of increasing craniocervical flexion range of motion-the anatomical action of the DCF muscles. Subjects. Ten volunteers without known pathology or impairment participated in this study. Methods. Root-mean-square (RMS) values were calculated for the DCF, SCM, and AS muscles during performance of the CCF test. Myoelectric signals were recorded from the DCF muscles using bipolar electrodes placed over the posterior oropharyngeal wall. Reliability estimates of normalized RMS values were obtained by evaluating intraclass correlation coefficients and the normalized standard error of the mean (SEM). Results. A linear relationship was evident between the amplitude of DCF muscle activity and the incremental stages of the CCF test (F=239.04, df=36, P<.0001). Normalized SEMs in the range 6.7% to 10.3% were obtained for the normalized RMS values for the DCF muscles, providing evidence of reliability for these variables. Discussion and Conclusion. This approach for obtaining a direct measure of the DCF muscles, which differs from those previously used, may be useful for the examination of these muscles in future electromyographic applications.
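The core signal-processing step above is a root-mean-square amplitude per test stage, normalized to a reference value. A minimal sketch of that calculation follows; the EMG segments and reference contraction are synthetic placeholders, not the study's recordings.

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of a 1-D EMG segment."""
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal ** 2))

rng = np.random.default_rng(2)
reference = rng.normal(scale=1.0, size=2000)          # placeholder reference contraction
stages = [rng.normal(scale=0.2 * (i + 1), size=2000)  # placeholder CCF test stages
          for i in range(5)]

rms_ref = rms(reference)
normalized = [100 * rms(stage) / rms_ref for stage in stages]
print(["%.1f%%" % v for v in normalized])   # roughly linear increase across the stages
```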
Abstract:
The utility of 16S rDNA restriction fragment length polymorphism (RFLP) analysis for the partial genomovar differentiation of Burkholderia cepacia complex bacteria is well documented. We compared the 16S rDNA RFLP signatures for a number of non-fermenting gram-negative bacilli (NF GNB) LMG control strains and clinical isolates pertaining to the genera Burkholderia, Pseudomonas, Achromobacter (Alcaligenes), Ralstonia, Stenotrophomonas and Pandoraea. A collection of 24 control strains (LMG) and 25 clinical isolates was included in the study. Using conventional PCR, a 1.2 kbp 16S rDNA fragment was generated for each organism. Following restriction digestion and electrophoresis, each clinical isolate RFLP signature was compared to those of the control strain panel. Nineteen different RFLP signatures were detected from the 28 control strains included in the study. Twenty-one of the 25 clinical isolates could be classified by RFLP analysis into a single genus and species when compared to the patterns produced by the control strain panel. Four clinical B. pseudomallei isolates produced RFLP signatures which were indistinguishable from those of B. cepacia genomovars I, III and VIII. The identity of these four isolates was confirmed using B. pseudomallei-specific PCR. 16S rDNA RFLP analysis can be a useful identification strategy when applied to NF GNB, particularly those which exhibit colistin sulfate resistance. The use of this molecular-based methodology has proved very useful in the setting of a CF referral laboratory, particularly when utilised in conjunction with B. cepacia complex and genomovar-specific PCR techniques. Species-specific PCR or sequence analysis should be considered for selected isolates, especially where discrepancies between epidemiological, phenotypic and genotypic characteristics occur.
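The RFLP "signature" compared above is, in essence, the set of fragment lengths produced when an amplicon is cut at each occurrence of an enzyme's recognition site. The toy sketch below illustrates that idea with a simple string search; the sequence and recognition site are illustrative placeholders, not the enzymes or loci used in the study, and the cut position within the site is ignored.

```python
def fragment_lengths(sequence, site):
    """Return restriction fragment lengths after cutting at each site occurrence."""
    cuts, start = [], 0
    while True:
        pos = sequence.find(site, start)
        if pos == -1:
            break
        cuts.append(pos)
        start = pos + 1
    boundaries = [0] + cuts + [len(sequence)]
    return [boundaries[i + 1] - boundaries[i] for i in range(len(boundaries) - 1)]

amplicon = "ATGAATTCGGCTAGCTAGAATTCGGATCCTTAGAATTCAA"   # placeholder 16S fragment
print(fragment_lengths(amplicon, "GAATTC"))             # placeholder recognition site
```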
Abstract:
Within the development of motor vehicles, crash safety (e.g. occupant protection, pedestrian protection, low speed damageability) is one of the most important attributes. In order to fulfill the increased requirements within the framework of shorter cycle times and rising pressure to reduce costs, car manufacturers keep intensifying the use of virtual development tools such as those in the domain of Computer Aided Engineering (CAE). For crash simulations, the explicit finite element method (FEM) is applied. The accuracy of the simulation process is highly dependent on the accuracy of the simulation model, including the midplane mesh. One of the roughest approximations typically made concerns the actual part thickness which, in reality, can vary locally. However, a constant thickness value is almost always defined throughout the entire part for reasons of complexity. On the other hand, correct thickness consideration is one key enabler for precise fracture analysis within FEM. Thus, the availability of per-element thickness information, which does not exist explicitly in the FEM model, can significantly contribute to improved crash simulation quality, especially regarding fracture prediction. Even though the thickness is not explicitly available from the FEM model, it can be inferred from the original CAD geometric model through geometric calculations. This paper proposes and compares two thickness estimation algorithms based on ray tracing and nearest neighbour 3D range searches. A systematic quantitative analysis of the accuracy of both algorithms is presented, as well as a thorough identification of particular geometric arrangements under which their accuracy can be compared. These results enable the identification of each technique's weaknesses and hint towards a new, integrated approach to the problem that linearly combines the estimates produced by each algorithm.
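The nearest-neighbour variant of the idea can be sketched as follows: for each midplane element centroid, the distances to the closest sampled points on the two opposing CAD surfaces are summed to approximate local thickness. The point clouds below are synthetic placeholders, and the paper's actual algorithms (including the ray-tracing variant) are more elaborate than this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Placeholder sampled CAD surfaces: two parallel sheets 2.5 mm apart.
xy = rng.uniform(0, 100, size=(5000, 2))
top_surface = np.column_stack([xy, np.full(len(xy), 1.25)])
bottom_surface = np.column_stack([xy, np.full(len(xy), -1.25)])

# Placeholder midplane element centroids (z = 0).
centroids = np.column_stack([rng.uniform(5, 95, size=(200, 2)), np.zeros(200)])

top_tree, bottom_tree = cKDTree(top_surface), cKDTree(bottom_surface)
d_top, _ = top_tree.query(centroids)        # distance to nearest point on the top skin
d_bottom, _ = bottom_tree.query(centroids)  # distance to nearest point on the bottom skin
thickness = d_top + d_bottom

print(f"estimated thickness: mean = {thickness.mean():.2f} mm, "
      f"max error vs 2.5 mm = {np.max(np.abs(thickness - 2.5)):.2f} mm")
```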
Abstract:
Current software development often relies on non-trivial coordination logic to combine autonomous services, possibly running on different platforms. As a rule, however, such a coordination layer is strongly woven into the application at the source code level. Therefore, its precise identification becomes a major methodological (and technical) problem and a challenge to any program understanding or refactoring process. The approach introduced in this paper resorts to slicing techniques to extract coordination data from source code. Such data are captured in a specific dependency graph structure from which a coordination model can be recovered, either in the form of an Orc specification or as a collection of code fragments corresponding to typical coordination patterns identified in the system. Tool support is also discussed.
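A very small illustration of the slicing idea is given below: given a program dependency graph, a backward slice from each statement that invokes an external service collects the code on which that coordination point depends. The graph, node labels and notion of "service call" are toy placeholders, not the paper's Orc-based tooling.

```python
import networkx as nx

# Placeholder program dependency graph: an edge points from a statement to a
# statement that depends on it.
pdg = nx.DiGraph()
pdg.add_edges_from([
    ("read_config", "build_request"),
    ("build_request", "call_inventory_service"),
    ("read_config", "call_billing_service"),
    ("call_inventory_service", "merge_results"),
    ("call_billing_service", "merge_results"),
    ("format_report", "print_report"),          # unrelated to coordination
])

service_calls = {"call_inventory_service", "call_billing_service"}

# Backward slice: each service call plus everything it transitively depends on.
coordination_slice = set()
for call in service_calls:
    coordination_slice |= {call} | nx.ancestors(pdg, call)

print(sorted(coordination_slice))
```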