15 results for Signal Processing, Computer-Assisted

in Digital Commons at Florida International University


Relevance:

100.00%

Abstract:

The need to provide computers with the ability to distinguish the affective state of their users is a major requirement for the practical implementation of affective computing concepts. This dissertation proposes applying signal processing methods to physiological signals to extract features that pattern recognition systems can then use to infer a person's affective state. In particular, combining physiological information sensed non-invasively from a user's left hand with pupil diameter information from an eye-tracking system may provide a computer with an awareness of its user's affective responses in the course of human-computer interactions. In this study, an integrated hardware-software setup was developed to achieve automatic assessment of the affective status of a computer user. A computer-based "Paced Stroop Test" was designed as a stimulus to elicit emotional stress in the subject during the experiment. Four signals were monitored and analyzed to differentiate affective states in the user: the Galvanic Skin Response (GSR), the Blood Volume Pulse (BVP), the Skin Temperature (ST), and the Pupil Diameter (PD). Several signal processing techniques were applied to the collected signals to extract their most relevant features, which were then analyzed with learning classification systems to accomplish the affective state identification. Three learning algorithms (Naïve Bayes, Decision Tree, and Support Vector Machine) were applied to this identification process and their levels of classification accuracy were compared. The results indicate that the monitored physiological signals do, in fact, correlate strongly with changes in the emotional states of the experimental subjects, and that including pupil diameter information significantly improved the performance of the emotion recognition system.
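
As a rough illustration of the classification stage described above, the sketch below extracts simple per-channel summary features from segmented signals and compares the three classifier families named in the abstract. Everything here is an assumption for illustration: the synthetic data, the extract_features helper, and the feature definitions are stand-ins, not the dissertation's actual pipeline.

```python
# Sketch: compare Naive Bayes, Decision Tree, and SVM classifiers on simple
# statistical features from segmented physiological signals. Synthetic data
# stands in for the GSR/BVP/ST/PD recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_features(segment):
    """Per-segment summary features (mean, std, slope) -- illustrative only."""
    t = np.arange(len(segment))
    slope = np.polyfit(t, segment, 1)[0]
    return [segment.mean(), segment.std(), slope]

# 200 segments x 4 channels of fake signal data, with binary stress labels.
segments = rng.normal(size=(200, 4, 256))
labels = rng.integers(0, 2, size=200)
X = np.array([[f for ch in seg for f in extract_features(ch)] for seg in segments])

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier()),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```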

Relevance:

100.00%

Abstract:

This research pursued the conceptualization and real-time verification of a system that allows a computer user to control the cursor of a computer interface without using his/her hands. The target user groups for this system are individuals who are unable to use their hands due to spinal dysfunction or other afflictions, and individuals who must use their hands for higher priority tasks while still requiring interaction with a computer.

The system receives two forms of input from the user: Electromyogram (EMG) signals from muscles in the face and point-of-gaze coordinates produced by an Eye Gaze Tracking (EGT) system. In order to produce reliable cursor control from the two forms of user input, the development of this EMG/EGT system addressed three key requirements: an algorithm was created to accurately translate EMG signals due to facial movements into cursor actions, a separate algorithm was created that recognized an eye gaze fixation and provided an estimate of the associated eye gaze position, and an information fusion protocol was devised to efficiently integrate the outputs of these algorithms.

Experiments were conducted to compare the performance of EMG/EGT cursor control to EGT-only control and mouse control. These experiments took the form of two different types of point-and-click trials. The data produced by these experiments were evaluated using statistical analysis, Fitts' Law analysis and target re-entry (TRE) analysis.

The experimental results revealed that though EMG/EGT control was slower than EGT-only and mouse control, it provided effective hands-free control of the cursor without a spatial accuracy limitation, and it also facilitated a reliable click operation. This combination of qualities is not possessed by either EGT-only or mouse control, making EMG/EGT cursor control a unique and practical alternative for a user's cursor control needs.
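
For context on the Fitts' Law analysis mentioned above, the sketch below computes the standard index of difficulty and throughput for a pointing trial. The trial numbers are invented for illustration; the formulation (Shannon) is the one commonly used in such analyses, though the dissertation's exact variant is not stated here.

```python
# Fitts' law metrics for point-and-click trials.
# Shannon formulation: ID = log2(D/W + 1); throughput = ID / movement time.
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Bits per second for one pointing trial."""
    return index_of_difficulty(distance, width) / movement_time_s

# Hypothetical trial: 400 px to a 40 px target in 1.8 s (EMG/EGT was slower
# than mouse control, as the experiments found).
print(f"ID = {index_of_difficulty(400, 40):.2f} bits")
print(f"TP = {throughput(400, 40, 1.8):.2f} bits/s")
```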

Relevance:

100.00%

Abstract:

Recent research has indicated that the pupil diameter (PD) in humans varies with their affective states. However, this signal has not been fully investigated for affective sensing purposes in human-computer interaction systems, possibly because of the dominant separate effect of the pupillary light reflex (PLR), which shrinks the pupil when light intensity increases. In this dissertation, an adaptive interference canceller (AIC) system using the H∞ time-varying (HITV) adaptive algorithm was developed to minimize the impact of the PLR on the measured pupil diameter signal. The modified pupil diameter (MPD) signal obtained from the AIC was expected to reflect primarily the pupillary affective responses (PAR) of the subject. Additional manipulations of the AIC output resulted in a processed MPD (PMPD) signal, from which a classification feature, PMPDmean, was extracted. This feature was used to train and test a support vector machine (SVM) for the identification of stress states in the subject from whom the pupil diameter signal was recorded, achieving an accuracy rate of 77.78%. The advantages of affective recognition through the PD signal were verified by comparing the classification of stress and relaxation states through features derived from the simultaneously recorded galvanic skin response (GSR) and blood volume pulse (BVP) signals, with and without the PD feature. The discriminating potential of each individual feature extracted from GSR, BVP, and PD was studied through its receiver operating characteristic (ROC) curve; the ROC curve for the PMPDmean feature enclosed the largest area (0.8546) of all the single-feature ROCs investigated. These encouraging results in affective sensing based on pupil diameter monitoring were obtained in spite of intermittent illumination increases purposely introduced during the experiments, and they confirm both the benefits of using the AIC implementation with the HITV adaptive algorithm to isolate the PAR and the potential of PD monitoring for sensing the evolving affective states of a computer user.
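
The dissertation's canceller uses an H∞ time-varying adaptive algorithm; as a simplified stand-in, the sketch below implements the same adaptive-interference-cancellation structure with a normalized LMS (NLMS) weight update instead. The signals are synthetic and the nlms_canceller helper is illustrative, not the author's method: an illumination reference is adaptively filtered and subtracted from the measured pupil diameter, and the residual plays the role of the MPD.

```python
# Adaptive interference canceller sketch: predict the light-driven (PLR)
# component of the pupil diameter from an illumination reference, subtract
# it, and keep the residual. NLMS stands in for the H-infinity algorithm.
import numpy as np

def nlms_canceller(primary, reference, taps=8, mu=0.5, eps=1e-6):
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]   # recent reference samples (PLR proxy)
        e = primary[n] - w @ x            # residual = PD minus predicted PLR
        w += mu * e * x / (x @ x + eps)   # NLMS weight update
        out[n] = e
    return out

rng = np.random.default_rng(1)
light = rng.normal(size=2000)                            # illumination reference
affect = 0.3 * np.sin(np.linspace(0, 6 * np.pi, 2000))   # slow affective trend
pd_signal = affect - 0.8 * light + 0.05 * rng.normal(size=2000)
mpd = nlms_canceller(pd_signal, light)                   # approximates "affect"
```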

Relevance:

100.00%

Abstract:

Communication has become an essential function in our civilization. With the increasing demand for communication channels, it is now necessary to find ways to optimize the use of their bandwidth. One way to achieve this is by transforming the information before it is transmitted, and one of the newest techniques for doing so is the wavelet transform, which breaks a signal down into components called details and trends using small waveforms that have a zero average in the time domain. After this transformation the data can be compressed by discarding the details and transmitting only the trends; at the receiving end, the trends are used to reconstruct the image. In this work, the wavelet used for the transformation of an image is selected from a library of available bases, and the accuracy of the reconstruction, after the details are discarded, depends on the wavelets chosen from this basis library. The system developed in this thesis takes a 2-D image and decomposes it using a wavelet bank, with a digital signal processor used to achieve near real-time performance in this transformation task. A contribution of this thesis project is the development of a DSP-based test bed for the future development of new real-time wavelet transformation algorithms.
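
A minimal sketch of the compress-by-discarding-details idea, using the PyWavelets library on a host PC rather than the thesis's DSP hardware. The image is a random stand-in, and 'db4' is just one plausible basis from a wavelet library, not necessarily the one the thesis selects.

```python
# One-level 2-D DWT: keep the approximation ("trend"), drop the three
# detail subbands, and reconstruct from the trend alone.
import numpy as np
import pywt  # pip install PyWavelets

image = np.random.default_rng(2).random((256, 256))  # stand-in for a real image

cA, (cH, cV, cD) = pywt.dwt2(image, "db4")           # trend + detail subbands

# "Compression": transmit only the trend; the receiver zeros the details.
zeros = np.zeros_like(cH)
reconstructed = pywt.idwt2((cA, (zeros, zeros, zeros)), "db4")

err = np.abs(reconstructed[:256, :256] - image).mean()
print(f"mean reconstruction error: {err:.4f}")
```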

Relevance:

100.00%

Abstract:

This study examined the effects of computer-assisted instruction (CAI) for 1 hour per week over 18 weeks on changes in computational scores and attitudes of developmental mathematics students at schools with predominantly Black enrollment. Comparisons were made between students using CAI with differing software (PLATO, CSR, or both together) and students using traditional instruction (TI) only.

The study was conducted in the Dade County Public School System from February through June 1991, at two senior high schools. The dependent variables, the State Student Assessment Test (SSAT) and the School Subjects Attitude Scales (SSAS), measured students' computational scores and attitudes toward mathematics in three categories (interest, usefulness, and difficulty), respectively.

Univariate analyses of variance were performed on the least squares mean differences from pretest to posttest to test main effects and interactions, with t tests used to assess their significance. Results were interpreted at the .01 level of significance.

Null hypotheses 1, 2, and 3 compared versions of CAI with the control group for changes in mathematical computation scores measured with the SSAT. It could not be concluded that changes in standardized mathematics test scores of students using CAI with differing software for 1 hour per week over 18 class hours, combined with TI, were significantly higher than changes in test scores for students receiving TI only.

Null hypotheses 4, 5, and 6 tested the effects of CAI on attitudes toward mathematics for the experimental groups against the control groups, measured with the SSAS. Changes in attitudes toward mathematics of students using CAI with differing software, combined with TI, were not significantly higher than attitude changes for students receiving TI only.

Teacher effect on students' computational scores was a more influential variable than CAI. No interaction was found between gender and learning method on standardized mathematics test scores (null hypothesis 7).
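
For readers unfamiliar with the reported comparison, the sketch below shows the general form of an independent-samples t test on pretest-to-posttest score changes for a CAI group versus a TI-only group. The score vectors are fabricated for illustration; the study's actual data and group sizes are not reproduced here.

```python
# Independent-samples t test on score changes, of the kind used to test
# the null hypotheses above. Data is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cai_gain = rng.normal(loc=4.0, scale=6.0, size=40)  # CAI + TI score changes
ti_gain = rng.normal(loc=3.0, scale=6.0, size=40)   # TI-only score changes

t, p = stats.ttest_ind(cai_gain, ti_gain)
print(f"t = {t:.2f}, p = {p:.3f}  (interpreted at the .01 level, as in the study)")
```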

Relevance:

100.00%

Abstract:

An alternating-treatments design was used to compare the effects of three student response conditions (Clicking, Repeating, and Listening) during computer-assisted instruction on the learning and maintenance of social-studies facts. Results showed that all students learned and maintained more social-studies facts taught in the Repeating condition, followed by the Clicking condition.

Relevance:

100.00%

Abstract:

The use of computer-assisted instruction (CAI) simulations as an instructional strategy provides nursing students with a critical thinking approach for evaluating risks and benefits and choosing correct alternatives in "safe" patient care situations. It was hypothesized that using CAI simulations during an upper-level nursing review course would have a positive effect on the students' posttest scores. Subjects (n = 36) were senior nursing students enrolled in a nursing review course in an undergraduate baccalaureate program; a limitation of the study was the small sample size. The study employed a modified group experimental design using the t test for independent samples. The group that received the CAI simulations during the physiological system review demonstrated a significant increase (p < .01) in the posttest score mean when compared to the lecture-discussion group score mean. There was no significant difference between high and low clinical grade point average (GPA) students in the CAI and lecture-discussion groups in their score means on the posttest; however, the score mean differences of the low clinical GPA students showed a greater increase for the CAI group than for the lecture-discussion group. There was no significant difference between the groups in their system content subscore means on the exit examination completed three weeks later. It was concluded that CAI simulations are as effective as lecture-discussion in assisting upper-level students to process information for clinical decision making. CAI simulations can be considered as an instructional strategy to supplement or replace lecture content during a review course, allowing more efficient use of faculty time. It is recommended that the study be repeated using a larger sample size, and that further investigations compare the effectiveness of computer software formats and various instructional strategies for other learning situations and student populations.

Relevance:

100.00%

Abstract:

The purpose of this study was to compare the effects of three student response conditions during computer-assisted instruction on the acquisition and maintenance of social-studies facts. Two of the conditions required active student responding (ASR), whereas the other required an on-task (OT) response. Participants were five fifth-grade students with learning disabilities enrolled in a private school. An alternating-treatments design with a best-treatments phase was used to compare the effects of the response procedures on three major dependent measures: same-day tests, next-day tests, and maintenance tests.

Each week for six weeks, participants were provided daily one-to-one instruction on sets of 21 unknown social-studies facts using a hypermedia computer program, with a new set of facts being practiced each week. Each set of 21 facts was divided randomly into three conditions: Clicking-ASR, Repeating-ASR, and Listening-OT. Each weekly hypermedia lesson began with the concept introduction lesson, followed by practice and testing, which occurred four days per week per set. During Clicking-ASR, practice involved selecting a social-studies response by clicking on an item on the hypermedia card with the mouse. Repeating-ASR instruction required students to orally repeat the social-studies facts when prompted by the computer. During Listening-OT, students listened to the social-studies facts being read by the computer. During weeks seven and eight, instruction occurred with seven unknown facts using only the best treatment.

Test results showed that, for all 5 students, the Repeating-ASR practice procedure resulted in more social-studies facts stated correctly on same-day tests, next-day tests, and one- and two-week maintenance tests; Clicking-ASR was the next most effective procedure. During the seventh and eighth weeks of instruction, when only the best practice condition was implemented, Repeating-ASR produced higher scores than all conditions of the first six weeks of the study, including Repeating-ASR itself.

The results lend further support to the growing body of literature demonstrating the positive relation between ASR and student achievement. Much of the ASR literature has focused on the effects of increased ASR during teacher-led or peer-mediated instruction; this study adds a dimension to that research by demonstrating the importance of ASR during computer-assisted instruction, and it further suggests that the type of ASR used may influence learning. Future research is needed to investigate the effectiveness of other types of ASR during computer-assisted instruction and to identify other fundamental characteristics of effective computer-assisted instruction.

Relevance:

100.00%

Abstract:

Intraoperative neurophysiologic monitoring (IONM) is an integral part of spinal surgeries and involves the recording of somatosensory evoked potentials (SSEP). However, clinical application of IONM still requires anywhere between 200 and 2,000 trials to obtain an SSEP signal, which is excessive and introduces a significant delay in detecting possible neurological damage during surgery. The aim of this study is to develop a means of obtaining the SSEP from a much smaller number of recordings: twelve. The preliminary step was to distinguish the SSEP from the ongoing brain activity. We first establish that the brain activity is indeed quasi-stationary, whereas an SSEP is expected to be identical every time a trial is recorded. An algorithm was developed using Chebyshev time windowing to precondition the SSEP trials and retain their morphological characteristics. This preconditioning was followed by a principal component analysis (PCA)-based algorithm that exploits the quasi-stationarity of the EEG across the 12 preconditioned trials. A Walsh transform operation was then used to identify the position of the SSEP event. An alarm is raised when there is a 10% deviation in latency and/or a 50% deviation in peak-to-peak amplitude, as per the clinical requirements. The algorithm gives consistent results when monitoring SSEP in surgical procedures of up to 6 hours, even with this significantly reduced number of trials. The analysis was performed on data recorded from 29 patients undergoing surgery, during which the posterior tibial nerve was stimulated and the SSEP response was recorded from the scalp. The method is shown empirically to be more clinically viable than present-day approaches: in all 29 cases, the algorithm takes 4 s to extract an SSEP signal, compared to the several minutes taken by conventional methods. The monitoring process using the algorithm was successful and proved conclusive under the clinical constraints throughout the different surgical procedures, with an accuracy of 91.5%. The higher accuracy and faster execution time observed in the present study provide a much improved and effective neurophysiological monitoring process.
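
The sketch below approximates the shape of this pipeline: Chebyshev windowing of 12 trials, PCA across trials to separate the repeatable evoked response from the quasi-stationary EEG, and the stated alarm rule. It is an assumption-laden outline, not the published algorithm; in particular, the Walsh-transform localization step is omitted and the recordings are synthetic.

```python
# Reduced-trial SSEP extraction sketch: window 12 trials, take the first
# principal component as the common (repeatable) evoked response, and
# apply the abstract's clinical alarm rule.
import numpy as np
from scipy.signal.windows import chebwin
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_trials, n_samples = 12, 500
trials = rng.normal(size=(n_trials, n_samples))   # stand-in recordings

win = chebwin(n_samples, at=100)                  # Chebyshev taper (100 dB)
preconditioned = trials * win

pca = PCA(n_components=1)
pca.fit(preconditioned)
ssep_estimate = pca.components_[0]                # common component across trials

def alarm(baseline_latency, latency, baseline_p2p, p2p):
    """Alarm rule quoted in the abstract: 10% latency or 50% amplitude deviation."""
    return (abs(latency - baseline_latency) > 0.10 * baseline_latency
            or abs(p2p - baseline_p2p) > 0.50 * baseline_p2p)
```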

Relevance:

100.00%

Abstract:

With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation of the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, the wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured with a wavefront aberrometer, as a reference for the precompensation model; the dynamic precompensation was then generated from this aberration, rescaled according to the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method by showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation, and its visual benefit was further confirmed by the subjective assessments collected from the evaluation participants.
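
A minimal sketch of the precompensation idea, under simplifying assumptions: given the eye's point spread function (PSF), pre-filter the screen image with a regularized (Wiener-style) inverse so that subsequent blurring by the eye approximately cancels. The dissertation derives the PSF from measured Zernike wavefront aberrations; a Gaussian PSF and the helper names below are illustrative stand-ins.

```python
# Wiener-style precompensation: apply H* / (|H|^2 + k) in the frequency
# domain, where H is the eye's optical transfer function.
import numpy as np

def gaussian_psf(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def precompensate(image, psf, k=1e-2):
    """Regularized inverse filter applied to the on-screen image."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

image = np.random.default_rng(5).random((128, 128))  # stand-in target
psf = gaussian_psf(128, sigma=2.0)                   # stand-in ocular PSF
pre = precompensate(image, psf)

# Simulate viewing: blurring the precompensated image by the same PSF
# approximately recovers the intended target.
H = np.fft.fft2(np.fft.ifftshift(psf), s=pre.shape)
viewed = np.real(np.fft.ifft2(np.fft.fft2(pre) * H))
```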

Relevance:

100.00%

Abstract:

There is great demand for incorporating advanced engineering tools into biology, biochemistry, and medicine, yet many of the existing instruments are expensive and require special facilities. With the advent of nanotechnology in the past decade, new approaches to developing devices and tools have been generated by academia and industry.

One such technology, NMR spectroscopy, has been used by biochemists for more than two decades to study the molecular structure of chemical compounds. However, NMR spectrometers are very expensive and require special laboratory rooms for proper operation; high magnetic fields, with strengths on the order of several tesla, make these instruments unaffordable to most research groups.

This doctoral research proposes a new technology for building NMR spectrometers that can operate at field strengths below 0.5 tesla, using an inexpensive permanent magnet and spin-dependent nanoscale magnetic devices. This portable NMR system is intended to analyze samples as small as a few nanoliters.

The main problem to resolve when downscaling these variables is obtaining an NMR signal with a high signal-to-noise ratio (SNR). A special Tunneling Magneto-Resistive (TMR) sensor design was developed to achieve this goal. The minimum specifications for each component of the proposed NMR system were established, and a complete NMR system was designed based on these minimum requirements; the goal throughout was to find cost-effective, realistic components. The novel design uses technologies such as Direct Digital Synthesis (DDS), Digital Signal Processing (DSP), and a special backpropagation neural network that finds the best match for the NMR spectrum. The system was designed, calculated, and simulated with excellent results. In addition, a general method to design TMR sensors was developed; the technique was automated, and a computer program was written to help the designer perform this task interactively.
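
As a quick sanity check on the operating regime a sub-0.5 T permanent magnet implies, the snippet below computes the proton Larmor frequency, f = (γ/2π)·B₀, with γ/2π ≈ 42.577 MHz/T for ¹H. This is standard physics, not material from the dissertation.

```python
# Proton Larmor frequencies at low field strengths.
GAMMA_OVER_2PI_MHZ_PER_T = 42.577  # 1H gyromagnetic ratio / 2*pi, MHz per tesla

for b0 in (0.1, 0.3, 0.5):  # field strength in tesla
    f_mhz = GAMMA_OVER_2PI_MHZ_PER_T * b0
    print(f"B0 = {b0:.1f} T  ->  f_Larmor = {f_mhz:.2f} MHz")
# ~4-21 MHz: RF electronics at these frequencies are inexpensive,
# unlike the instrumentation needed for multi-tesla systems.
```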

Relevance:

100.00%

Abstract:

Many students enter colleges and universities in the United States underprepared in mathematics; national statistics indicate that only about one-third of students in developmental mathematics courses pass. When underprepared students repeatedly enroll in courses that do not count toward their degree, it costs them money and delays graduation. This study investigated a possible solution to this problem: whether a particular computer-assisted learning strategy combined with mastery learning techniques improved the overall performance of students in a developmental mathematics course. Participants received one of three teaching strategies: (a) group A was taught using traditional instruction with mastery learning, supplemented with computer-assisted instruction; (b) group B was taught using traditional instruction supplemented with computer-assisted instruction, without mastery learning; and (c) group C was taught using traditional instruction without mastery learning or computer-assisted instruction. Participants were students in MAT1033, a developmental mathematics course at a large public 4-year college. An analysis of covariance using participants' pretest scores as the covariate tested the null hypothesis that there was no significant difference in the adjusted mean final examination scores among the three groups; group A had a significantly higher adjusted mean posttest score than group C. A chi-square test tested the null hypothesis that there were no significant differences in the proportions of students who passed MAT1033 among the treatment groups. A significant difference in the proportion of passing students was found among all three groups, with group A having the highest pass rate and group C the lowest. A discriminant factor analysis revealed that time on task correctly predicted the passing status of 89% of the participants.

It was concluded that the most efficacious strategy for teaching developmental mathematics was mastery learning supplemented by computer-assisted instruction. It was also noted that time on task was a strong predictor of academic success, over and above the predictive ability of a measure of previous knowledge of mathematics.
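
The chi-square comparison of pass rates has the general form sketched below. The counts are invented for illustration; the study reports only that group A passed at the highest rate and group C the lowest.

```python
# Chi-square test of independence on pass/fail counts across the three
# treatment groups. Counts are hypothetical.
from scipy.stats import chi2_contingency

#              passed  failed
table = [[30, 10],   # group A: mastery learning + CAI
         [24, 16],   # group B: CAI without mastery learning
         [18, 22]]   # group C: traditional instruction only

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```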

Relevance:

100.00%

Abstract:

The primary goal of this dissertation is to develop point-based rigid and non-rigid image registration methods with better accuracy than existing methods. We first present PoIRe, which provides a framework for point-based global rigid registration. It allows a choice of search strategies, including (a) branch-and-bound, (b) probabilistic hill-climbing, and (c) a novel hybrid method that combines the best characteristics of the other two. We use a robust similarity measure that is insensitive to the noise often introduced during feature extraction, and we demonstrate this robustness by using PoIRe to register images obtained with an electronic portal imaging device (EPID), which have large amounts of scatter and low contrast. To evaluate PoIRe we used (a) simulated images and (b) images with fiducial markers; PoIRe was extensively tested with 2D EPID images and with images generated by 3D Computed Tomography (CT) and Magnetic Resonance (MR) imaging. PoIRe was also evaluated on benchmark data sets from the blind Retrospective Image Registration Evaluation (RIRE) project. We show that PoIRe outperforms existing methods such as Iterative Closest Point (ICP) and methods based on mutual information. We also present a novel point-based local non-rigid shape registration algorithm, extending the robust similarity measure used in PoIRe to non-rigid registration by adapting it to a free-form deformation (FFD) model and making it robust to local minima, a drawback common to existing non-rigid point-based methods. For non-rigid registration we show that it performs better than existing methods and is less sensitive to starting conditions; we test it using available benchmark data sets for shape registration. Finally, we explore the extraction of features invariant to changes in perspective and illumination, and how they can help improve the accuracy of multi-modal registration. For multimodal registration of EPID-DRR images we present a method based on a local descriptor defined by a vector of complex responses to a circular Gabor filter.
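
For context on the underlying problem, the sketch below shows the classical least-squares (Kabsch/Procrustes) rigid alignment of corresponding point sets. This is the baseline formulation PoIRe addresses, not PoIRe itself: the dissertation's branch-and-bound and hill-climbing search with a robust similarity measure are not reproduced here, and the point sets are synthetic.

```python
# Least-squares rigid registration of corresponding 3-D point sets.
import numpy as np

def rigid_fit(src, dst):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1] * (src.shape[1] - 1) + [d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(6)
pts = rng.random((50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
moved = pts @ Rz.T + np.array([0.5, -0.2, 0.1])       # known transform
R, t = rigid_fit(pts, moved)
print(np.allclose(R, Rz), np.allclose(t, [0.5, -0.2, 0.1]))  # True True
```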

Relevance:

100.00%

Abstract:

Advanced technologies and social networks that allow data to be shared widely across the Internet have produced an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data. In response, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and to incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results, and the TMCA algorithm is then proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. Beyond the semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis; in this framework, an affinity propagation-based summarization method is therefore also proposed to transform unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
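
A minimal sketch of the affinity-propagation summarization step: cluster feature vectors and keep each cluster's exemplar as the "summary" of otherwise unorganized data. The random vectors stand in for multimedia feature descriptors; the dissertation's actual features and similarity definition are not reproduced.

```python
# Affinity-propagation summarization: exemplars serve as the organized
# summary of a large unstructured collection.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(7)
features = rng.normal(size=(300, 32))   # e.g., per-shot video descriptors

# Higher damping helps convergence on unstructured data.
ap = AffinityPropagation(damping=0.9, random_state=0).fit(features)
exemplars = features[ap.cluster_centers_indices_]
print(f"{len(features)} items summarized by {len(exemplars)} exemplars")
```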