930 results for Pattern classification
Abstract:
A classical application of biosignal analysis has been the psychophysiological detection of deception, also known as the polygraph test, which is currently part of the standard practices of law enforcement agencies and several other institutions worldwide. Although there is little consensus on its validity, the underlying psychophysiological principles remain an interesting add-on for more informal applications. In this paper we present an experimental off-the-person hardware setup, propose a set of feature extraction criteria, and provide a comparison of two classification approaches, targeting the detection of deception in the context of a role-playing interactive multimedia environment. Our work is primarily targeted at recreational use in the context of a science exhibition, where the main goal is to present basic concepts related to knowledge discovery, biosignal analysis, and psychophysiology in an educational way, using techniques that are simple enough to be understood by children of different ages. Nonetheless, this setting will also allow us to build a significant data corpus, annotated with ground-truth information and collected with non-intrusive sensors, enabling more advanced research on the topic. Experimental results have shown interesting findings and provided useful guidelines for future work.
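The abstract does not detail the feature set or the two classifiers that are compared; as a rough illustration of such a pipeline, a sketch using simple per-window statistics and two standard classifiers (k-nearest neighbours and naive Bayes, on synthetic data) might look as follows. All names and choices here are assumptions, not the paper's actual method.

```python
# Hypothetical sketch: simple features from biosignal windows, then a
# comparison of two off-the-shelf classifiers. Feature choices and models
# are illustrative assumptions, not the paper's actual pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def extract_features(window):
    """Per-window statistics often used with electrodermal/cardiac signals."""
    return [window.mean(), window.std(), np.ptp(window),
            np.abs(np.diff(window)).mean()]

# Synthetic stand-in data: 200 windows of 256 samples, binary labels.
windows = rng.normal(size=(200, 256))
labels = rng.integers(0, 2, size=200)
windows[labels == 1] += 0.3  # crude class separation for the demo

X = np.array([extract_features(w) for w in windows])
for clf in (KNeighborsClassifier(5), GaussianNB()):
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(type(clf).__name__, f"accuracy={acc:.2f}")
```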
Abstract:
Liver steatosis is mainly a textural abnormality of the hepatic parenchyma due to fat accumulation in the hepatic vesicles. Today, the assessment is subjectively performed by visual inspection. Here a classifier based on features extracted from ultrasound (US) images is described for the automatic diagnosis of this pathology. The proposed algorithm estimates the original ultrasound radio-frequency (RF) envelope signal, from which the noiseless anatomic information and the textural information encoded in the speckle noise are extracted. The features characterizing the textural information are the coefficients of the first-order autoregressive model that describes the speckle field. A binary Bayesian classifier was implemented and the Bayes factor was calculated. The classification revealed an overall accuracy of 100%. The Bayes factor could be helpful in the graphical display of the quantitative results for diagnosis purposes.
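As an illustration of the decision rule, here is a minimal sketch of a binary Bayesian classifier with a Bayes factor, assuming Gaussian class-conditional densities over two AR-coefficient features; the densities and data are stand-ins, not the paper's estimated model.

```python
# Minimal sketch of a binary Bayesian classifier with a Bayes factor,
# assuming Gaussian class-conditional densities over AR(1) coefficients.
# The densities and data here are illustrative, not the paper's model.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Synthetic AR-coefficient features for two classes (normal vs. steatotic).
f0 = rng.normal([0.2, 0.1], 0.05, size=(50, 2))
f1 = rng.normal([0.4, 0.3], 0.05, size=(50, 2))

# Fit one Gaussian per class from training data.
pdf0 = multivariate_normal(f0.mean(0), np.cov(f0.T))
pdf1 = multivariate_normal(f1.mean(0), np.cov(f1.T))

def bayes_factor(x):
    """BF > 1 favours class 1 (steatosis) over class 0 (normal)."""
    return pdf1.pdf(x) / pdf0.pdf(x)

x = np.array([0.35, 0.25])
bf = bayes_factor(x)
print(f"Bayes factor = {bf:.2f} ->", "steatosis" if bf > 1 else "normal")
```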
Abstract:
Background and aim: Cardiorespiratory fitness (CRF) and diet have been identified as significant factors in the prevention of cardio-metabolic diseases. This study aimed to assess the impact of the combined associations of CRF and adherence to the Southern European Atlantic Diet (SEADiet) on the clustering of metabolic risk factors in adolescents. Methods and Results: A cross-sectional school-based study was conducted on 468 adolescents aged 15–18, from the Azorean Islands, Portugal. We measured fasting glucose, insulin, total cholesterol (TC), HDL-cholesterol, triglycerides, systolic blood pressure, waist circumference and height. HOMA, TC/HDL-C ratio and waist-to-height ratio were calculated. For each of these variables, a Z-score was computed by age and sex. A metabolic risk score (MRS) was constructed by summing the Z-scores of all individual risk factors. High risk was considered when the individual had ≥ 1 SD of this score. CRF was measured with the 20 m Shuttle Run Test. Adherence to the SEADiet was assessed with a semi-quantitative food frequency questionnaire. Logistic regression showed that, after adjusting for potential confounders, unfit adolescents with low adherence to the SEADiet had the highest odds of having a high MRS (OR = 9.4; 95% CI: 2.6–33.3), followed by the unfit ones with high adherence to the SEADiet (OR = 6.6; 95% CI: 1.9–22.5), when compared to those who were fit and had higher adherence to the SEADiet.
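A minimal sketch of how such a summed risk score can be computed, assuming Z-standardization within age-and-sex strata and a cut-off of 1 SD; the variable names and data are illustrative, not the study's dataset.

```python
# Illustrative sketch of a summed metabolic risk score (MRS):
# Z-score each risk factor within age/sex strata, sum, then flag
# individuals at or above 1 SD of the summed score.
import pandas as pd
import numpy as np

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "age": rng.integers(15, 19, 468),
    "sex": rng.choice(["M", "F"], 468),
    "glucose": rng.normal(90, 10, 468),
    "homa": rng.normal(2.0, 0.6, 468),
    "tc_hdl": rng.normal(3.5, 0.8, 468),
})

risk_vars = ["glucose", "homa", "tc_hdl"]
z = df.groupby(["age", "sex"])[risk_vars].transform(
    lambda s: (s - s.mean()) / s.std())
df["mrs"] = z.sum(axis=1)
df["high_risk"] = df["mrs"] >= df["mrs"].mean() + df["mrs"].std()
print(df["high_risk"].mean())  # proportion flagged as high risk
```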
Abstract:
The definition and programming of distributed applications has become a major research issue due to the increasing availability of (large-scale) distributed platforms and the requirements posed by economic globalization. However, such a task requires a huge effort due to the complexity of distributed environments: a large number of users may communicate and share information across different authority domains; moreover, the "execution environment" or "computations" are dynamic, since the number of users and the computational infrastructure change over time. Grid environments, in particular, promise to be an answer to such complexity, by providing high-performance execution support to a large number of users, and resource sharing across different organizations. Nevertheless, programming in Grid environments is still a difficult task. There is a lack of high-level programming paradigms and support tools that may guide the application developer and allow reusability of state-of-the-art solutions. Specifically, the main goal of the work presented in this thesis is to contribute to the simplification of the development cycle of applications for Grid environments by bringing structure and flexibility to three stages of that cycle through a common model. The stages are: the design phase, the execution phase, and the reconfiguration phase. The common model is based on the manipulation of patterns through pattern operators, and on the division of both patterns and operators into two categories, namely structural and behavioural. Moreover, both structural and behavioural patterns are first-class entities at each of the aforesaid stages. At the design phase, patterns can be manipulated like other first-class entities such as components. This allows a more structured way to build applications by reusing and composing state-of-the-art patterns. At the execution phase, patterns are units of execution control: it is possible, for example, to start, stop, or resume the execution of a pattern as a single entity. At the reconfiguration phase, patterns can also be manipulated as single entities, with the additional advantage that it is possible to perform a structural reconfiguration while keeping some of the behavioural constraints, and vice-versa. For example, it is possible to replace a behavioural pattern, which was applied to some structural pattern, with another behavioural pattern. In this thesis, besides the proposal of the methodology for distributed application development, as sketched above, a definition of a relevant set of pattern operators was made. The methodology and the expressivity of the pattern operators were assessed through the development of several representative distributed applications. To support this validation, a prototype was designed and implemented, encompassing some relevant patterns and a significant part of the pattern operators defined. This prototype was based on the Triana environment; Triana supports the development and deployment of distributed applications in the Grid through a dataflow-based programming model. Additionally, this thesis also presents the analysis of a mapping of some operators for execution control onto the Distributed Resource Management Application API (DRMAA). This assessment confirmed the suitability of the proposed model, as well as the generality and flexibility of the defined pattern operators.
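The abstract does not fix a concrete programming interface; a hypothetical sketch of structural and behavioural patterns as first-class entities manipulated by operators might look like this, where all class and operator names are illustrative assumptions rather than the thesis's actual definitions.

```python
# Hypothetical sketch: patterns as first-class entities manipulated by
# operators, split into structural and behavioural categories as in the
# thesis. Class and operator names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class StructuralPattern:
    """Topology of components, e.g. a pipeline or a master/worker star."""
    name: str
    components: list = field(default_factory=list)
    behaviour: "BehaviouralPattern | None" = None

@dataclass
class BehaviouralPattern:
    """Flow of control/data over a structure, e.g. streaming, round-robin."""
    name: str

def embed(outer: StructuralPattern, inner: StructuralPattern) -> StructuralPattern:
    """Structural operator: nest one pattern inside another."""
    outer.components.append(inner)
    return outer

def apply_behaviour(s: StructuralPattern, b: BehaviouralPattern) -> StructuralPattern:
    """Behavioural operator: attach (or replace) the behaviour of a structure."""
    s.behaviour = b  # replacing b later reconfigures behaviour, structure intact
    return s

pipe = StructuralPattern("pipeline", ["stage1", "stage2"])
app = embed(StructuralPattern("master-worker"), pipe)
app = apply_behaviour(app, BehaviouralPattern("streaming"))
print(app)
```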
Abstract:
Wireless Sensor Networks (WSNs) are increasingly used in various application domains such as home automation, agriculture, industry, and infrastructure monitoring. As applications tend to leverage larger geographical deployments of sensor networks, the availability of an intuitive and user-friendly programming abstraction becomes a crucial factor in enabling faster and more efficient development and reprogramming of applications. We propose a programming pattern named sMapReduce, inspired by the Google MapReduce framework, for mapping application behaviors onto a sensor network and enabling complex data aggregation. The proposed pattern requires a user to create a network-level application in two functions, sMap and Reduce, abstracting away the low-level details without sacrificing the control needed to develop complex logic. Such a two-fold division of programming logic is a natural fit for typical sensor network operation, making sensing and topological modalities accessible to the user.
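A toy sketch of the two-function idea, assuming sMap emits keyed readings per node and Reduce aggregates them network-wide; the interface shown is an assumption, not the framework's real API.

```python
# Toy sketch of the sMapReduce idea: sMap assigns a sensing behaviour to
# each node, Reduce aggregates the keyed readings network-wide. The API
# shown is an illustrative assumption, not the framework's real interface.
from collections import defaultdict
from statistics import mean

def s_map(node):
    """Per-node behaviour: emit (key, value) pairs from local sensing."""
    yield (node["region"], node["temperature"])

def reduce_fn(key, values):
    """Network-level aggregation: average temperature per region."""
    return key, mean(values)

nodes = [
    {"id": 1, "region": "field-A", "temperature": 21.5},
    {"id": 2, "region": "field-A", "temperature": 22.1},
    {"id": 3, "region": "field-B", "temperature": 19.8},
]

groups = defaultdict(list)
for node in nodes:            # in a real WSN this would run on each node
    for key, value in s_map(node):
        groups[key].append(value)

print(dict(reduce_fn(k, v) for k, v in groups.items()))
```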
Abstract:
Optimization problems arise in science, engineering, economics, and elsewhere, and we need to find the best solution for each one. The methods used to solve these problems depend on several factors, including the amount and type of accessible information, the available algorithms for solving them, and, obviously, the intrinsic characteristics of the problem. There are many kinds of optimization problems and, consequently, many kinds of methods to solve them. When the involved functions are nonlinear and their derivatives are not known or are very difficult to calculate, fewer methods are available. Such functions are frequently called black-box functions. To solve such problems without constraints (unconstrained optimization), we can use direct search methods, which do not require any derivatives or approximations of them. But when the problem has constraints (nonlinear programming problems) and, additionally, the constraint functions are black-box functions, it is much more difficult to find the most appropriate method. Penalty methods can then be used. They transform the original problem into a sequence of other problems, derived from the initial one, all without constraints. This sequence of unconstrained problems can then be solved using the methods available for unconstrained optimization. In this chapter, we present a classification of some of the existing penalty methods and describe some of their assumptions and limitations. These methods allow the solving of optimization problems with continuous, discrete, and mixed constraints, without requiring continuity, differentiability, or convexity. Thus, penalty methods can be used as the first step in the resolution of constrained problems, by means of methods that are typically used for unconstrained problems. We also discuss a new class of penalty methods for nonlinear optimization, which adjust the penalty parameter dynamically.
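A minimal sketch of a classical quadratic penalty method, pairing the penalized subproblems with a derivative-free solver since the functions are treated as black boxes; the test problem and penalty schedule are illustrative choices, not the chapter's examples.

```python
# Sketch of a classical quadratic penalty method: solve a sequence of
# unconstrained problems with a growing penalty parameter, using a
# derivative-free solver since the functions are treated as black boxes.
import numpy as np
from scipy.optimize import minimize

def f(x):            # black-box objective
    return (x[0] - 1) ** 2 + (x[1] - 2.5) ** 2

def g(x):            # black-box equality constraint g(x) = 0
    return x[0] + x[1] - 3.0

def penalized(x, mu):
    return f(x) + mu * g(x) ** 2

x = np.zeros(2)
mu = 1.0
for _ in range(8):   # increase the penalty parameter geometrically
    res = minimize(penalized, x, args=(mu,), method="Nelder-Mead")
    x, mu = res.x, mu * 10
print(x, "constraint violation:", abs(g(x)))
```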
Abstract:
OBJECTIVE To investigate the factors related to the granting of preliminary court orders [injunctions] in drug litigations. METHODS A retrospective descriptive study of drug lawsuits in the State of Minas Gerais, Southeastern Brazil, was conducted from October 1999 to 2009. The database consists of 6,112 lawsuits, out of which 6,044 had motions for injunctions and 5,167 included the requisition of drugs. Those with more than one beneficiary were excluded, which left a total of 5,072 examined suits. The variables for complete, partial, and suppressed motions were treated as dependent and assessed in relation to the independent ones – lawsuits (year, type, legal representation, defendant, court in which it was filed, adjudication time), drugs (level five of the Anatomical Therapeutic Chemical classification), and diseases (chapter of the International Classification of Diseases). Statistical analyses were performed using the chi-square test. RESULTS Out of the 5,072 lawsuits with injunction motions, 4,184 (82.5%) had the injunctions granted. Granting varied from 95.8% of the total lawsuits in 2004 to 76.9% in 2008. Where there was legal representation, granting exceeded 80.0%; in lawsuits without representation, it did not exceed 66.9%. In public civil actions (89.1%), granting was higher relative to ordinary lawsuits (82.8%) and injunctions (80.1%). Federal courts granted only 68.6% of the injunctions, while the state courts granted 84.8%. Diseases of the digestive system and neoplasms received up to 87.0% in granting, while diseases of the nervous system, mental and behavioral disorders, and diseases of the skin and subcutaneous tissue received granting below 78.6% and showed a high proportion of suspended injunctions (10.9%). Injunctions involving paroxetine, somatropin, and ferrous sulfate were all granted, while less than 54.0% of those involving escitalopram, diclofenac sodium, and nortriptyline were granted. CONCLUSIONS There are significant differences in the granting of injunctions, depending on the procedural and clinical variables. Important trends in the pattern of judicial action were observed, particularly the reduced granting [of injunctions] over the period.
Abstract:
In practice, robotic manipulators present some degree of unwanted vibration. The advent of lightweight arm manipulators, mainly in the aerospace industry, where weight is an important issue, leads to the problem of intense vibrations. On the other hand, robots interacting with the environment often generate impacts that propagate through the mechanical structure and also produce vibrations. In order to analyze these phenomena, a robot signal acquisition system was developed. The manipulator motion produces vibrations, either from the structural modes or from end-effector impacts. The instrumentation system acquires signals from several sensors that capture the joint positions, mass accelerations, forces and moments, and electrical currents in the motors. Afterwards, an analysis package, running off-line, reads the data recorded by the acquisition system and extracts the signal characteristics. Due to the multiplicity of sensors, the data obtained can be redundant, because the same type of information may be seen by two or more sensors. Given the price of the sensors, this aspect can be considered in order to reduce the cost of the system. On the other hand, the placement of the sensors is an important issue in order to obtain suitable signals of the vibration phenomenon. Moreover, the study of these issues can help in the design optimization of the acquisition system. In this line of thought, a sensor classification scheme is presented. Several authors have addressed the subject of sensor classification schemes. White (White, 1987) presents a flexible and comprehensive categorizing scheme that is useful for describing and comparing sensors. The author organizes the sensors according to several aspects: measurands, technological aspects, detection means, conversion phenomena, sensor materials and fields of application. Michahelles and Schiele (Michahelles & Schiele, 2003) systematize the use of sensor technology. They identified several dimensions of sensing that represent the sensing goals for physical interaction. A conceptual framework is introduced that allows categorizing existing sensors and evaluating their utility in various applications. This framework not only guides application designers in choosing meaningful sensor subsets, but can also inspire new systems and lead to the evaluation of existing applications. Today's technology offers a wide variety of sensors. In order to use all the data from this diversity of sensors, a framework for integration is needed. Sensor fusion, fuzzy logic, and neural networks are often mentioned when dealing with the problem of combining information from several sensors to get a more general picture of a given situation. The study of data fusion has been receiving considerable attention (Esteban et al., 2005; Luo & Kay, 1990). A survey of the state of the art in sensor fusion for robotics can be found in (Hackett & Shah, 1990). Henderson and Shilcrat (Henderson & Shilcrat, 1984) introduced the concept of the logical sensor, which defines an abstract specification of the sensors to integrate in a multisensor system. The recent development of micro-electro-mechanical sensors (MEMS) with wireless communication capabilities enables sensor networks with interesting capabilities. This technology has been applied in several areas (Arampatzis & Manesis, 2005), including robotics. Cheekiralla and Engels (Cheekiralla & Engels, 2005) propose a classification of wireless sensor networks according to their functionalities and properties.
This paper presents the development of a sensor classification scheme based on the frequency spectrum of the signals and on statistical metrics. Bearing these ideas in mind, this paper is organized as follows. Section 2 briefly describes the robotic system enhanced with the instrumentation setup. Section 3 presents the experimental results. Finally, Section 4 draws the main conclusions and points out future work.
Abstract:
In this paper an automatic classification algorithm is proposed for the diagnosis of liver steatosis, also known as fatty liver, from ultrasound images. The features used by the classifier, automatically extracted from the ultrasound images, are basically the ones used by physicians in the diagnosis of the disease based on visual inspection of the ultrasound images. The main novelty of the method is the utilization of the speckle noise that corrupts the ultrasound images to compute textural features of the liver parenchyma relevant for the diagnosis. The algorithm uses the Bayesian framework to compute a noiseless image, containing the anatomic and echogenic information of the liver, and a second image containing only the speckle noise, which is used to compute the textural features. The classification results with the Bayes classifier, using manually classified data as ground truth, show that the automatic classifier reaches an accuracy of 95% and a sensitivity of 100%.
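A generic sketch of the decomposition idea follows, with a median filter standing in for the paper's Bayesian estimation step and illustrative texture statistics computed on the residual speckle field; none of the concrete choices below are the authors'.

```python
# Generic sketch of the image decomposition idea: estimate a "noiseless"
# image and a residual speckle field, then compute texture statistics on
# the speckle. A median filter stands in for the paper's Bayesian
# estimation step; the texture statistics are illustrative.
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(3)
us_image = rng.gamma(shape=2.0, scale=50.0, size=(128, 128))  # toy US image

smooth = median_filter(us_image, size=7)       # anatomic/echogenic estimate
speckle = us_image / np.maximum(smooth, 1e-6)  # multiplicative residual field

# Simple texture descriptors of the speckle field (illustrative features).
features = [speckle.mean(), speckle.std(),
            np.corrcoef(speckle[:, :-1].ravel(), speckle[:, 1:].ravel())[0, 1]]
print(features)  # e.g. lag-1 horizontal correlation relates to AR modelling
```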
Abstract:
This chapter analyzes the signals captured during impacts and vibrations of a mechanical manipulator. Eighteen signals are captured and several metrics are calculated between them, such as the correlation, the mutual information and the entropy. A sensor classification scheme based on the multidimensional scaling technique is presented.
Abstract:
This paper analyzes the signals captured during impacts and vibrations of a mechanical manipulator. To test the impacts, a flexible beam is clamped to the end-effector of a manipulator that is programmed in such a way that the beam moves against a rigid surface. Eighteen signals are captured and their correlations are calculated. A sensor classification scheme based on the multidimensional scaling technique is presented.
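A sketch of such a correlation-based scheme, with synthetic signals standing in for the eighteen measured ones: pairwise correlations are converted into dissimilarities and embedded in two dimensions with MDS, where similar sensors cluster together.

```python
# Sketch of correlation-based sensor classification via multidimensional
# scaling (MDS): pairwise correlations between sensor signals are turned
# into distances and embedded in 2-D, where similar sensors cluster.
# Synthetic signals stand in for the eighteen measured ones.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 500)
base = np.sin(2 * np.pi * 5 * t)

# Three groups of similar sensors (e.g. positions, accelerations, forces).
signals = np.array(
    [base + 0.1 * rng.normal(size=t.size) for _ in range(6)] +
    [base ** 2 + 0.1 * rng.normal(size=t.size) for _ in range(6)] +
    [rng.normal(size=t.size) for _ in range(6)])

corr = np.corrcoef(signals)               # 18 x 18 correlation matrix
dist = 1.0 - np.abs(corr)                 # correlation -> dissimilarity
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords.round(2))                    # nearby points = similar sensors
```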
Abstract:
Locomotion has been a major research issue in the last few years. Many models for the locomotion rhythms of quadrupeds, hexapods, bipeds and other animals have been proposed. This study has also been extended to the control of rhythmic movements of adaptive legged robots. In this paper, we consider a fractional version of a central pattern generator (CPG) model for locomotion in bipeds. A fractional derivative D^α f(x), with α non-integer, is a generalization of the concept of the integer-order derivative, which corresponds to α = 1. The integer CPG model was proposed by Golubitsky, Stewart, Buono and Collins, and studied later by Pinto and Golubitsky. It is a network of four coupled identical oscillators with dihedral symmetry. We study parameter regions where periodic solutions, identified with legs' rhythms in bipeds, occur, for 0 < α ≤ 1. We find that the amplitude and the period of the periodic solutions, identified with biped rhythms, increase as α varies from near 0 to values close to unity.
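A numerical sketch of the Grünwald-Letnikov construction, the standard discrete generalization behind fractional derivatives of order 0 < α ≤ 1 (the paper's CPG model itself is not reproduced here); it is verified on f(t) = t, whose fractional derivative is known in closed form.

```python
# Numerical sketch of a Grünwald-Letnikov fractional derivative, a
# standard discretization of D^alpha. Verified on f(t) = t, whose
# derivative of order alpha is t^(1 - alpha) / Gamma(2 - alpha).
import numpy as np
from math import gamma

def gl_derivative(f_samples, alpha, h):
    """D^alpha f at the last sample, step h, by the GL finite sum."""
    n = len(f_samples)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):                  # recursive GL weights
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return (w * f_samples[::-1]).sum() / h ** alpha

h, alpha = 1e-3, 0.5
t = np.arange(0, 1 + h, h)
approx = gl_derivative(t, alpha, h)        # f(t) = t, evaluated at t = 1
exact = 1.0 ** (1 - alpha) / gamma(2 - alpha)
print(approx, exact)                       # the two should closely agree
```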
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
OBJECTIVE To describe the increase in cases of malaria in Mozambique. METHODS A cross-sectional study was conducted in 2014, in Mozambique, with data from the national weekly epidemiological bulletin. I analyzed the number of recorded cases in the 2009-2013 period, which led to the creation of an endemic channel using the quartile and C-sum methods. Monthly incidence rates were calculated for the first half of 2014, making it possible to determine the pattern of endemicity. Months in which the incidence rates exceeded the third quartile or the C-sum line were declared epidemic months. RESULTS The provinces of Nampula, Zambezia, Sofala, and Inhambane accounted for 52.7% of all cases in the first half of 2014. Also during this period, the provinces of Nampula, Sofala and Tete were responsible for 54.9% of the deaths from malaria. The incidence rates of malaria in children, and in all ages, showed patterns in the epidemic zone. For all ages, the incidence rate peaked in April (2,573 cases/100,000 inhabitants). CONCLUSIONS The results suggest the occurrence of an epidemic pattern of malaria in the first half of 2014 in Mozambique. It is strategic to have more accurate surveillance at all levels (central, provincial and district) to target prevention and control interventions in a timely manner.
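A sketch of the quartile method for building an endemic channel, with synthetic weekly counts standing in for the 2009-2013 series: each week's thresholds are the quartiles of the historical counts for that week, and a week is flagged as epidemic when the current incidence exceeds the third quartile.

```python
# Sketch of the quartile method for an endemic channel: for each
# epidemiological week, quartiles of five historical years set the
# channel bands; a week is epidemic when the current count exceeds
# the third quartile. Synthetic counts stand in for real data.
import numpy as np

rng = np.random.default_rng(5)
history = rng.poisson(lam=200, size=(5, 52))   # 5 years x 52 weeks
current = rng.poisson(lam=260, size=52)        # year under surveillance

# Channel bands per week: success (Q1), safety (median), alert (Q3).
q1, q2, q3 = np.percentile(history, [25, 50, 75], axis=0)
epidemic_weeks = np.flatnonzero(current > q3)  # weeks above the channel
print("epidemic weeks:", epidemic_weeks + 1)
```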
Abstract:
Many learning problems require handling high-dimensional datasets with a relatively small number of instances. Learning algorithms are thus confronted with the curse of dimensionality, and need to address it in order to be effective. Examples of these types of data include the bag-of-words representation in text classification problems and gene expression data for tumor detection/classification. Usually, among the high number of features characterizing the instances, many may be irrelevant (or even detrimental) for the learning tasks. It is thus clear that there is a need for adequate techniques for feature representation, reduction, and selection, to improve classification accuracy and to reduce memory requirements. In this paper, we propose combined unsupervised feature discretization and feature selection techniques, suitable for medium and high-dimensional datasets. The experimental results on several standard datasets, with both sparse and dense features, show the efficiency of the proposed techniques as well as improvements over previous related techniques.
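As a generic illustration of combining the two steps (equal-frequency binning and a dispersion-based ranking are stand-ins here, not the authors' exact criteria), a sketch might look as follows.

```python
# Illustrative sketch of unsupervised feature discretization followed by
# feature selection. The concrete criteria -- equal-frequency binning and
# a dispersion score -- are generic stand-ins for the paper's techniques.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 2000))        # few instances, many features
X[:, :20] *= 5.0                        # a few high-dispersion features

# Unsupervised discretization: equal-frequency bins per feature.
disc = KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="quantile")
Xd = disc.fit_transform(X)

# Unsupervised selection: keep the k features whose discretized values
# have the highest dispersion.
k = 50
scores = Xd.var(axis=0)
selected = np.argsort(scores)[-k:]
print("kept features:", np.sort(selected)[:10], "...")
```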