938 results for Pattern Analysis
Abstract:
An effective aperture approach is used as a tool for the analysis and parameter optimization of the most common ultrasound imaging systems: phased array systems, compounding systems and synthetic aperture imaging systems. Two characteristics of an imaging system, the effective aperture function and the corresponding two-way radiation pattern, provide information about two of the most important parameters of the images produced by an ultrasound system: lateral resolution and contrast. Therefore, during design, optimizing the effective aperture function leads to an optimal choice of the system parameters that influence the lateral resolution and contrast of the resulting images. It is shown that the effective aperture approach can be used to optimize a sparse synthetic transmit aperture (STA) imaging system. A new two-stage algorithm is proposed for optimizing both the positions of the transmit elements and the weights of the receive elements. The proposed system employs a 64-element array with only four elements active during transmit. The numerical results show that Hamming apodization gives the best compromise between image contrast and lateral resolution.
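The relationship the abstract relies on can be illustrated with a short, self-contained sketch. Assuming the standard definitions (the effective aperture is the convolution of the transmit and receive aperture functions, and the two-way radiation pattern is its Fourier transform), the Python snippet below builds a hypothetical 64-element configuration with four active transmit elements and Hamming receive apodization; the element positions and the FFT size are illustrative, not the paper's optimized values.

import numpy as np

n_elements = 64
tx = np.zeros(n_elements)
tx[[0, 21, 42, 63]] = 1.0                 # four active transmit elements (hypothetical positions)
rx = np.hamming(n_elements)               # Hamming apodization on receive

effective_aperture = np.convolve(tx, rx)                       # length 2*n_elements - 1
pattern = np.fft.fftshift(np.fft.fft(effective_aperture, 2048))
pattern_db = 20 * np.log10(np.abs(pattern) / np.abs(pattern).max() + 1e-12)

# A narrower main lobe means better lateral resolution; lower side lobes mean better contrast.
print("effective aperture length:", effective_aperture.size)
print("samples within 6 dB of the peak (crude main-lobe width proxy):", int((pattern_db > -6).sum()))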
Abstract:
An optical in-fiber modal interferometer-based volume strain sensor for earthquake prediction is proposed and experimentally demonstrated. The sensing element is formed by wrapping a multimode-singlemode-multimode fiber structure onto a polyurethane hollow column. Due to the modal interference between the excited guided modes in the fiber, a strong interference pattern can be observed in the transmission spectrum. Theoretical analysis verifies that the resonant wavelength shifts with the volume strain variation caused by the column deformation, following a square-root relationship. A sensitivity greater than 3.93 pm/με over a volume strain range of 0 to 1300 με is also demonstrated experimentally. When the response to bidirectional changes of volume strain and the sluggish character of the employed sensing material are taken into consideration, the sensing system presents good repeatability and stability. © 2001-2012 IEEE.
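As a quick arithmetic check on the figures quoted above, the snippet below converts the reported sensitivity and strain range into an approximate full-range wavelength shift. Treating the sensitivity as a constant slope is a deliberate simplification of the square-root dependence described in the abstract.

# Approximate full-range wavelength shift from the quoted sensitivity and strain range.
sensitivity_pm_per_ue = 3.93            # pm per microstrain (quoted lower bound)
strain_range_ue = 1300.0                # microstrain
max_shift_nm = sensitivity_pm_per_ue * strain_range_ue / 1000.0
print("approximate full-range wavelength shift: %.1f nm" % max_shift_nm)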
Abstract:
Congenital nystagmus (CN) is an ocular-motor disorder that appears at birth or during the first few months of life; it is characterised by involuntary, conjugated, bilateral, to-and-fro ocular oscillations. The pathogenesis of congenital nystagmus is still unknown. Eye movement recordings allow the main nystagmus features, such as shape, amplitude and frequency, to be extracted and analysed; depending on the morphology of the oscillations, nystagmus can be classified into different categories (pendular, jerk, horizontal unidirectional, bidirectional). In general, CN patients show a considerable decrease in visual acuity: image fixation on the retina is disturbed by the continuous nystagmus oscillations; however, image stabilisation is still achieved during the short foveation periods in which eye velocity slows down while the target image is placed on the fovea. Visual acuity was found to depend mainly on the duration of foveation periods, but cycle-to-cycle foveation repeatability and reduced retinal image velocities also contribute to increasing visual acuity. This study concentrates on the cycle-to-cycle variation of image position on the fovea, aiming to characterise the sequences of foveation positions. Eye-movement recordings (infrared oculography or electro-oculography), obtained at different gaze positions from more than 30 CN patients, were analysed. Preliminary results suggest that sequences of foveations show a cyclic pattern with a dominant frequency (around 0.3 Hz on average) much lower than that of the nystagmus (about 3.3 Hz on average). The sequences of foveations reveal a horizontal ocular swing of more than 2 degrees on average, which can explain the low visual acuity of CN patients. Current CN therapies, pharmacological treatment or surgery of the ocular muscles, mainly aim to increase the patient's visual acuity. Hence, it is fundamental to have an objective parameter (expected visual acuity) for therapy planning. Information about the sequences of foveations can improve the estimation of patient visual acuity. © 2008 Springer-Verlag.
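The frequency analysis described above can be illustrated with a minimal sketch: a synthetic sequence of one foveation position per nystagmus cycle, whose dominant frequency is estimated with a periodogram. The 3.3 Hz sampling rate, the 0.3 Hz swing and the amplitudes only mirror the averages quoted in the abstract; the actual processing of the recordings may differ.

import numpy as np

rng = np.random.default_rng(0)
nystagmus_rate = 3.3                         # one foveation sample per nystagmus cycle (Hz)
t = np.arange(0, 60, 1 / nystagmus_rate)     # 60 s of foveation samples
# slow horizontal swing of about 1 degree amplitude plus cycle-to-cycle jitter
foveation_pos = 1.0 * np.sin(2 * np.pi * 0.3 * t) + 0.2 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(foveation_pos - foveation_pos.mean())) ** 2
freqs = np.fft.rfftfreq(foveation_pos.size, d=1 / nystagmus_rate)
print("dominant foveation frequency: %.2f Hz" % freqs[spectrum.argmax()])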
Abstract:
A novel approach to automatic ECG analysis based on a scale-space signal representation is proposed. The approach uses a curvature scale-space (CSS) representation to locate the main ECG waveform limits and peaks, and may be used either to correct the results of other ECG analysis techniques or on its own. Moreover, dynamic matching of ECG CSS representations provides robust preliminary recognition of ECG abnormalities, as confirmed by experimental results.
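A minimal sketch of a CSS-style construction is given below: a synthetic ECG beat is smoothed with Gaussians of increasing scale, and the zero crossings of curvature are recorded at each scale. It only illustrates the general idea (using SciPy), not the paper's exact CSS construction or its dynamic matching step.

import numpy as np
from scipy.ndimage import gaussian_filter1d

fs = 360.0
t = np.arange(0, 0.8, 1 / fs)
# crude synthetic beat: P wave, QRS spike and T wave as Gaussian bumps
ecg = (0.1 * np.exp(-((t - 0.15) ** 2) / 0.0008)
       + 1.0 * np.exp(-((t - 0.40) ** 2) / 0.00005)
       + 0.3 * np.exp(-((t - 0.65) ** 2) / 0.002))

def curvature_zero_crossings(x, sigma):
    """Indices where the curvature of the smoothed signal changes sign."""
    xs = gaussian_filter1d(x, sigma)
    d1 = np.gradient(xs)
    d2 = np.gradient(d1)
    kappa = d2 / (1 + d1 ** 2) ** 1.5          # curvature of the curve (t, x(t))
    return np.where(np.diff(np.sign(kappa)) != 0)[0]

css = {sigma: curvature_zero_crossings(ecg, sigma) for sigma in (1, 2, 4, 8, 16)}
for sigma, idx in css.items():
    print("sigma=%2d -> %d curvature zero crossings" % (sigma, idx.size))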
Abstract:
Most existing approaches to Twitter sentiment analysis assume that sentiment is explicitly expressed through affective words. Nevertheless, sentiment is often implicitly expressed via latent semantic relations, patterns and dependencies among words in tweets. In this paper, we propose a novel approach that automatically captures patterns of words of similar contextual semantics and sentiment in tweets. Unlike previous work on sentiment pattern extraction, our proposed approach does not rely on external, fixed sets of syntactic templates/patterns, nor does it require deep analysis of the syntactic structure of sentences in tweets. We evaluate our approach on tweet- and entity-level sentiment analysis tasks by using the extracted semantic patterns as classification features in both tasks. We use 9 Twitter datasets in our evaluation and compare the performance of our patterns against 6 state-of-the-art baselines. Results show that our patterns consistently outperform all other baselines on all datasets, by 2.19% at the tweet level and 7.5% at the entity level in average F-measure.
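The general flavour of such pattern features can be sketched as follows (this is not the authors' algorithm): cluster words by their co-occurrence context, represent each tweet by how many of its words fall in each cluster, and feed those counts to a sentiment classifier. The tweets, labels and cluster count below are toy placeholders.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

tweets = ["great phone love it", "terrible battery hate it",
          "love the battery life", "hate this terrible phone"]
labels = [1, 0, 1, 0]                                  # 1 = positive, 0 = negative (toy data)

vec = CountVectorizer()
X = vec.fit_transform(tweets)                          # tweet-by-word count matrix
word_contexts = (X.T @ X).toarray()                    # word-by-word co-occurrence contexts
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(word_contexts)

# tweet features: number of words from each context cluster
features = np.array([[X[i].toarray().ravel()[clusters == c].sum()
                      for c in range(2)] for i in range(X.shape[0])])
clf = LogisticRegression().fit(features, labels)
print("training accuracy:", clf.score(features, labels))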
Abstract:
Background: During the last decade, the use of ECG recordings in biometric recognition studies has increased. The characteristics of the ECG make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, in spite of the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This study aimed at providing a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is proposed here, providing a unifying framework with which to appreciate previous studies and, hopefully, guide future research. Methods: We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1st March 2015. In our selection we included published research in peer-reviewed journals, book chapters and conference proceedings. The search was performed for English language documents. Results: 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502, ages range from 16 to 86, and both male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does the verification rate. Many studies refer to publicly available databases (the Physionet ECG databases repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and of the equal error rate in authentication scenarios. The identification rate was 94.95 % while the equal error rate was 0.92 %. Conclusions: Biometric recognition is a mature field of research. Nevertheless, the use of physiological signal features, such as ECG traits, needs further improvement. ECG features have the potential to be used in daily activities such as access control and patient handling as well as in wearable electronics applications. However, some barriers still limit their growth. Further analysis should address the use of single-lead recordings and the study of features that do not depend on the recording sites (e.g. fingers, hand palms). Moreover, it is expected that new techniques will be developed that combine fiducial and non-fiducial features in order to capture the best of both approaches. ECG recognition in pathological subjects is also worthy of additional investigation.
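The pooling step mentioned in the Results can be illustrated in a few lines: identification rates from individual studies averaged with weights proportional to the number of subjects. The study sizes and rates below are invented, and weighting by subject count is an assumption about the survey's exact procedure.

# Hypothetical study sizes and identification rates, for illustration only.
study_subjects = [10, 50, 120, 502]
study_id_rates = [99.0, 96.5, 95.0, 94.2]           # identification rate (%)

weighted_id_rate = sum(n * r for n, r in zip(study_subjects, study_id_rates)) / sum(study_subjects)
print("weighted average identification rate: %.2f %%" % weighted_id_rate)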
Abstract:
This research evaluates pattern recognition techniques on a subclass of big data where the dimensionality of the input space (p) is much larger than the number of observations (n). Specifically, we evaluate massive gene expression microarray cancer data where the ratio κ is less than one. We explore the statistical and computational challenges inherent in these high dimensional low sample size (HDLSS) problems and present the statistical machine learning methods used to tackle and circumvent these difficulties. Regularization and kernel algorithms were explored in this research using seven datasets where κ < 1. These techniques require special attention to tuning, necessitating the investigation of several extensions of cross-validation to support better predictive performance. While no single algorithm was universally the best predictor, the regularization technique produced lower test errors in five of the seven datasets studied.
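A minimal sketch of the kind of pipeline this describes is shown below: an L2-regularized classifier tuned by cross-validation on synthetic p >> n data. The data, the choice of logistic regression and the parameter grid are all illustrative assumptions, not the methods or datasets of the dissertation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold

rng = np.random.default_rng(0)
n, p = 60, 5000                                    # n observations, p features, so the ratio of n to p < 1
X = rng.standard_normal((n, p))
y = (X[:, :10].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)   # labels driven by 10 features

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
grid = GridSearchCV(LogisticRegression(penalty="l2", max_iter=5000),
                    param_grid={"C": [0.01, 0.1, 1.0, 10.0]}, cv=cv)
grid.fit(X, y)
print("best C:", grid.best_params_["C"], " cross-validated accuracy: %.2f" % grid.best_score_)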
Abstract:
Cloud computing is a new technological paradigm offering computing infrastructure, software and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise as they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated to Markov decision processes (MDPs). Key costing and usage queries are identified and expressed in a probabilistic variant of temporal logic and calculated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments. © 2012 Springer-Verlag Berlin Heidelberg.
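The flavour of the costing queries can be sketched without the paper's Java library or temporal-logic machinery: the snippet below encodes a tiny Markov chain of monthly resource-usage patterns (an MDP with a single action, for simplicity) and computes the expected cumulative cost over a fixed horizon by dynamic programming. The states, transition probabilities and prices are invented.

import numpy as np

# usage states: low, medium, high
P = np.array([[0.7, 0.2, 0.1],          # month-to-month transition probabilities
              [0.3, 0.5, 0.2],
              [0.1, 0.4, 0.5]])
cost = np.array([20.0, 55.0, 120.0])    # cost incurred per month in each state ($)

horizon = 12
expected = np.zeros(3)                  # expected remaining cost, starting from a 0-month horizon
for _ in range(horizon):
    expected = cost + P @ expected      # one backward dynamic-programming step
print("expected 12-month cost starting in 'low': $%.2f" % expected[0])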
Abstract:
With the latest developments in computer science, multivariate data analysis methods have become increasingly popular among economists. Pattern recognition in complex economic data and empirical model construction can be more straightforward with the proper application of modern software. However, despite the appealing simplicity of some popular software packages, the interpretation of data analysis results requires strong theoretical knowledge. This book aims at combining the development of both theoretical and application-related data analysis knowledge. The text is designed for advanced level studies and assumes acquaintance with elementary statistical terms. After a brief introduction to selected mathematical concepts, the highlighting of selected model features is followed by a practice-oriented introduction to the interpretation of SPSS outputs for the described data analysis methods. Learning data analysis is usually time-consuming and requires effort, but with tenacity the learning process can bring about a significant improvement in individual data analysis skills.
Abstract:
To achieve the goal of sustainable development, the building energy system was evaluated from the point of view of both the first and second laws of thermodynamics. The relationship between exergy destruction and sustainable development was discussed first, followed by descriptions of the resource abundance model, the life cycle analysis model and the economic investment effectiveness model. By combining the foregoing models, a new sustainability index was proposed. Several green building case studies in the U.S. and China were presented. The influences of building function, geographic location, climate pattern, the regional energy structure, and the future technology improvement potential of renewable energy were discussed. Life cycle analyses of the building envelope, HVAC system and on-site renewable energy system were compared from energy, exergy, environmental and economic perspectives. It was found that climate pattern had a dramatic influence on the life cycle investment effectiveness of the building envelope. The energy performance of the building HVAC system was much better than its exergy performance. To further increase the exergy efficiency, renewable energy rather than fossil fuel should be used as the primary energy source. A building life cycle cost and exergy consumption regression model was set up. The optimal building insulation level depends on whether a cost minimization or an exergy consumption minimization approach is taken; the exergy approach leads to a higher insulation level than the cost approach. The influence of energy price on the system selection strategy was discussed. Two photovoltaic (PV) systems, a stand-alone and a grid-tied system, were compared by the life cycle assessment method; the superiority of the latter was quite obvious. The analysis also showed that over its life span PV technology was less attractive economically because electricity prices in the U.S. and China did not fully reflect the associated environmental burden. However, if future energy price surges and PV system cost reductions are considered, the technology could be very promising for sustainable buildings.
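The insulation trade-off mentioned above can be illustrated with a toy model: life-cycle cost and life-cycle exergy consumption are each written as an investment term proportional to thickness plus an operational term proportional to heat loss (taken as inversely proportional to thickness), and each is minimized separately. All coefficients are invented; the sketch only shows why the exergy-optimal thickness can exceed the cost-optimal one.

import numpy as np

thickness = np.linspace(0.02, 0.40, 200)                     # insulation thickness (m)
heat_loss = 1.0 / thickness                                  # relative heat loss, ~ 1/thickness

life_cycle_cost = 150.0 * thickness + 8.0 * heat_loss        # $/m^2: material plus energy bills (invented)
life_cycle_exergy = 900.0 * thickness + 120.0 * heat_loss    # MJ/m^2: embodied plus operational exergy (invented)

t_cost = thickness[np.argmin(life_cycle_cost)]
t_exergy = thickness[np.argmin(life_cycle_exergy)]
print("cost-optimal thickness:   %.2f m" % t_cost)
print("exergy-optimal thickness: %.2f m" % t_exergy)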
Abstract:
To carry out their specific roles in the cell, genes and gene products often work together in groups, forming many relationships among themselves and with other molecules. Such relationships include physical protein-protein interactions, regulatory relationships, metabolic relationships, genetic relationships, and much more. With advances in science and technology, high throughput technologies have been developed to simultaneously detect tens of thousands of pairwise protein-protein interactions and protein-DNA interactions. However, the data generated by high throughput methods are prone to noise. Furthermore, the technology itself has its limitations and cannot detect all kinds of relationships between genes and their products. Thus there is a pressing need to investigate all kinds of relationships and their roles in a living system using bioinformatic approaches; this is a central challenge in Computational Biology and Systems Biology. This dissertation focuses on exploring relationships between genes and gene products using bioinformatic approaches. Specifically, we consider problems related to regulatory relationships, protein-protein interactions, and semantic relationships between genes. A regulatory element is an important pattern or "signal", often located in the promoter of a gene, which is used in the process of turning a gene "on" or "off". Predicting regulatory elements is a key step in exploring the regulatory relationships between genes and gene products. In this dissertation, we consider the problem of improving the prediction of regulatory elements by using comparative genomics data. With regard to protein-protein interactions, we have developed bioinformatic techniques to estimate support for the data on these interactions. While protein-protein interactions and regulatory relationships can be detected by high throughput biological techniques, there is another type of relationship, the semantic relationship, that cannot be detected by a single technique but can be inferred using multiple sources of biological data. The contributions of this thesis involve the development and application of a set of bioinformatic approaches that address the challenges mentioned above. These include (i) an EM-based algorithm that improves the prediction of regulatory elements using comparative genomics data, (ii) an approach for estimating the support of protein-protein interaction data, with application to the functional annotation of genes, (iii) a novel method for inferring functional networks of genes, and (iv) techniques for clustering genes using multi-source data.
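To make the EM idea in contribution (i) concrete, here is a minimal motif-finding sketch under the classic one-occurrence-per-sequence model with a uniform background: the E-step computes a posterior over motif start positions, and the M-step re-estimates the position weight matrix from the expected letter counts. The toy sequences, motif width and pseudocount are placeholders, and the dissertation's algorithm additionally exploits comparative genomics data, which this sketch ignores.

import numpy as np

seqs = ["ACGTTGACGTAA", "TTTGACGTCCAG", "GGGACGTAATTC", "CATTGACGTGGA"]
alphabet = "ACGT"
w = 5                                           # motif width
idx = [[alphabet.index(c) for c in s] for s in seqs]

rng = np.random.default_rng(1)
pwm = rng.dirichlet(np.ones(4), size=w)         # w x 4 position weight matrix, random start

for _ in range(50):
    counts = np.zeros((w, 4))
    for s in idx:
        n_pos = len(s) - w + 1
        # E-step: posterior probability of each motif start position
        scores = np.array([np.prod([pwm[j, s[p + j]] for j in range(w)])
                           for p in range(n_pos)])
        post = scores / scores.sum()
        # M-step accumulation: expected letter counts per motif column
        for p, gamma in enumerate(post):
            for j in range(w):
                counts[j, s[p + j]] += gamma
    pwm = (counts + 0.1) / (counts + 0.1).sum(axis=1, keepdims=True)   # re-estimate with pseudocounts

consensus = "".join(alphabet[i] for i in pwm.argmax(axis=1))
print("estimated consensus motif:", consensus)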
Abstract:
In China in particular, large, planned special events (e.g., the Olympic Games) are viewed as great opportunities for economic development. Large numbers of visitors from other countries and provinces can be expected to attend such events, bringing in significant tourism dollars. However, as a direct result of such events, the transportation system is likely to face great challenges as travel demand increases beyond its original design capacity. Special events in central business districts (CBDs) in particular will further exacerbate traffic congestion on surrounding freeway segments near event locations. To manage the transportation system, it is necessary to plan and prepare for such special events, which requires prediction of traffic conditions during the events. This dissertation presents a set of novel prototype models to forecast traffic volumes along freeway segments during special events. Almost all research to date has focused solely on traffic management techniques under special event conditions. These studies, at most, provided qualitative analyses, and an easy-to-implement method for quantitative analysis was lacking. This dissertation presents a systematic approach, based separately on a univariate time series model with intervention analysis and a multivariate time series model with intervention analysis, for forecasting traffic volumes on freeway segments near an event location. A case study was carried out, which involved analyzing and modelling the historical time series data collected from loop-detector traffic monitoring stations on the Second and Third Ring Roads near Beijing Workers Stadium. The proposed time series models, with expected intervention, are found to provide reasonably accurate and efficient forecasts of traffic pattern changes. They may be used to support transportation planning and management for special events.
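A minimal sketch of intervention-style forecasting is shown below: an ARIMA-type state-space model with an exogenous dummy marking event hours, fitted on simulated hourly volumes that contain a past event and then asked to forecast a day with a planned ("expected") intervention. The model orders, the dummy construction and the simulated data are illustrative assumptions, not the dissertation's fitted models.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 24 * 14                                                  # two weeks of hourly volumes
base = 1000 + 400 * np.sin(2 * np.pi * np.arange(n) / 24)    # daily traffic cycle
event = np.zeros(n)
event[24 * 6 + 18:24 * 6 + 22] = 1.0                         # past event on day 7, 18:00-22:00
event[24 * 13 + 18:24 * 13 + 22] = 1.0                       # planned event on day 14 (to be forecast)
volume = base + 600 * event + rng.normal(0, 50, n)

train = 24 * 13
model = SARIMAX(pd.Series(volume[:train]), exog=event[:train].reshape(-1, 1),
                order=(1, 0, 1), seasonal_order=(1, 0, 0, 24)).fit(disp=False)
forecast = model.forecast(steps=24, exog=event[train:].reshape(-1, 1))
print(forecast.iloc[16:23])                                  # hours around the planned event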
Abstract:
The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development. It is widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address these difficulties. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e. objects and classes. It is one of the most commonly used UML diagrams in practice. However, there has been little research on sequence-diagram modeling. The current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would be greatly beneficial to novice analysts who, unlike experienced systems analysts, do not possess the relevant prior experience that makes it easy to learn how to develop a sequence diagram. There is a need for an effective sequence-diagram modeling technique for novices. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of the approaches provided in many textbooks as well as in the practitioner literature. The results indicated that novice analysts were able to perform better using the CHOP technique. This outcome seems to have been enabled by the pattern-based heuristics provided by the technique. Meanwhile, novice analysts rated the CHOP technique as more useful, although not significantly easier to use, than the control technique. The study established that the CHOP technique is an effective sequence-diagram modeling technique for novice analysts.
Abstract:
This dissertation establishes a novel data-driven method to identify language network activation patterns in pediatric epilepsy through the use of Principal Component Analysis (PCA) on functional magnetic resonance imaging (fMRI). A total of 122 subjects’ data sets from five different hospitals were included in the study through a web-based repository site designed at FIU. Research was conducted to evaluate different classification and clustering techniques for identifying hidden activation patterns and their associations with meaningful clinical variables. The results were assessed through agreement analysis with the conventional methods of lateralization index (LI) and visual rating. What is unique in this approach is the new mechanism designed for projecting language network patterns into the PCA-based decisional space. Synthetic activation maps were randomly generated from real data sets to uniquely establish nonlinear decision functions (NDFs), which are then used to classify any new fMRI activation map as typical or atypical. The best nonlinear classifier was obtained in a 4D space with a complexity (nonlinearity) degree of 7. Based on the significant association of language dominance and intensities with the top eigenvectors of the PCA decisional space, a new algorithm was deployed to delineate primary cluster members without intensity normalization. In this case, three distinct activation patterns (groups) were identified (average kappa of 0.65 with visual rating and 0.76 with LI) and were characterized by: (1) the left inferior frontal gyrus (IFG) and left superior temporal gyrus (STG), considered typical for the language task; (2) the IFG, left mesial frontal lobe and right cerebellum, representing a variant left-dominant pattern with higher activation; and (3) the right homologues of the first pattern in Broca's and Wernicke's language areas. Interestingly, group 2 was found to reflect a language compensation mechanism different from reorganization. Its high-intensity activation suggests a possible remote effect of the right-hemisphere focus on traditionally left-lateralized functions. Overall, this data-driven method provides new insights into mechanisms for brain compensation/reorganization and neural plasticity in pediatric epilepsy.
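The decisional-space idea can be sketched as follows: project the activation maps onto a few principal components and separate typical from atypical patterns with a nonlinear classifier. The 4-component projection and degree-7 nonlinearity mirror the figures quoted above, but the data are random stand-ins and the polynomial-kernel SVM is only an assumed substitute for the dissertation's nonlinear decision functions.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_maps, n_voxels = 122, 5000                       # 122 maps as in the study; voxel count is arbitrary
maps = rng.standard_normal((n_maps, n_voxels))     # random stand-ins for fMRI activation maps
labels = rng.integers(0, 2, n_maps)                # 1 = typical, 0 = atypical (synthetic labels)

clf = make_pipeline(PCA(n_components=4), SVC(kernel="poly", degree=7))
clf.fit(maps, labels)

new_map = rng.standard_normal((1, n_voxels))
print("predicted pattern:", "typical" if clf.predict(new_map)[0] == 1 else "atypical")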
Abstract:
The purpose of this study was to better understand the study behaviors and habits of university undergraduate students. It was designed to determine whether undergraduate students could be grouped based on their self-reported study behaviors and, if any grouping system could be determined, whether group membership was related to students’ academic achievement. A total of 152 undergraduate students voluntarily participated in the study by completing the Study Behavior Inventory instrument. All participants were enrolled in the fall semester of 2010 at Florida International University. The Q factor analysis technique, using principal components extraction and a varimax rotation, was used to examine the participants in relation to each other and to detect a pattern of intercorrelations among participants based on their self-reported study behaviors. The Q factor analysis yielded a two-factor structure representing two distinct student types among the participants with regard to their study behaviors. The first student type (i.e., Factor 1) describes proactive learners who organize both their study materials and study time well. Type 1 students are labeled “Proactive Learners with Well-Organized Study Behaviors”. The second type (i.e., Factor 2) represents students who are poorly organized as well as very likely to procrastinate. Type 2 students are labeled “Disorganized Procrastinators”. Hierarchical linear regression was employed to examine the relationship between student type and academic achievement as measured by current grade point averages (GPAs). The results showed significant differences in GPAs between Type 1 and Type 2 students at the .05 significance level. Furthermore, student type was found to be a significant predictor of academic achievement over and above students’ attribute variables, including sex, age, major, and enrollment status. The study has several implications for educational researchers, practitioners, and policy makers in terms of improving college students' learning behaviors and outcomes.
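The Q-technique step can be sketched in a few lines: the person-by-item response matrix is transposed so that participants become the variables, a two-factor solution with varimax rotation is extracted, and each participant is assigned to the factor on which they load most strongly. The random responses and the item count are placeholders for the Study Behavior Inventory data, and the study's principal-components extraction is approximated here with scikit-learn's FactorAnalysis.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_students, n_items = 152, 46                     # 46 items is a placeholder count
responses = rng.integers(1, 6, size=(n_students, n_items)).astype(float)   # fake Likert responses

# Q technique: transpose so items are the observations and students the variables
q_matrix = (responses - responses.mean(axis=1, keepdims=True)).T
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(q_matrix)

loadings = fa.components_.T                       # one row of factor loadings per student
student_type = np.abs(loadings).argmax(axis=1)    # assign each student to the dominant factor
print("students per type:", np.bincount(student_type))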