52 results for Vetores
Resumo:
This study investigates the influence of the balance-of-payments constraint on economic growth in Brazil from 1991 to 2010. To this end, some of the Keynesian balance-of-payments-constrained growth models inspired by Thirlwall (1979) and Kaldor (1970) are presented; they share important common ground, such as adherence to the principle of effective demand. Given that, within this theoretical perspective, there is no consensus on the best model to explain the growth rate allowed by the balance-of-payments constraint, representative results from the empirical literature on the topic are presented, which are necessary for understanding the Brazilian case. From the estimation of the income elasticity of imports (0.85) via vector autoregression with error correction (VEC), five income growth rates were calculated, as predicted by the models of Thirlwall (1979), Thirlwall and Hussain (1982), Moreno-Brid (1998, 2003) and Lourenço et al. (2011), and compared with the actual growth rate. The empirical analysis showed that the presence of an external constraint on the Brazilian economy cannot be rejected, and that the growth rates predicted by the different models of externally constrained growth are strongly similar. In addition, when quarterly data for the period after 1990 are used, there are no factors that could cause instability in the parameters of the import function (the income and price elasticities of imports) within the period, which indicates that the structural break widely associated with the year 1994 was not confirmed by this study.
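The core relation these models share can be sketched numerically. The income elasticity of imports (0.85) comes from the abstract; the export growth rate and the helper name `thirlwall_growth_rate` below are hypothetical illustrations, not the study's data.

```python
# Thirlwall's law: the balance-of-payments-constrained growth rate is
# y_bp = x / pi, where x is the export growth rate and pi is the income
# elasticity of imports.
def thirlwall_growth_rate(export_growth: float, income_elasticity_imports: float) -> float:
    """Income growth rate allowed by the balance-of-payments constraint."""
    return export_growth / income_elasticity_imports

pi = 0.85     # estimated income elasticity of imports (from the abstract)
x = 0.051     # hypothetical 5.1% export growth rate, for illustration only
y_bp = thirlwall_growth_rate(x, pi)
print(y_bp)   # predicted BP-constrained income growth (6% here)
```

The richer models cited (Thirlwall and Hussain, Moreno-Brid) add terms for capital flows and interest payments to this same ratio, which is why their predicted rates can be compared on a common footing.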
Resumo:
Skin cancer is the most common of all cancers, and the increase in its incidence is due, in part, to people's behavior regarding sun exposure. In Brazil, non-melanoma skin cancer is the most frequent in the majority of regions. Dermatoscopy and videodermatoscopy are the main types of examination for diagnosing dermatological diseases of the skin. The field involving the use of computational tools to support medical diagnosis of dermatological lesions is very recent, and some methods have been proposed for the automatic classification of skin pathologies from images. The present work presents a new intelligent methodology for the analysis and classification of skin cancer images, based on digital image processing techniques for the extraction of color, shape and texture characteristics, using the Wavelet Packet Transform (WPT) and a learning technique called the Support Vector Machine (SVM). The WPT is applied to extract texture characteristics from the images; it consists of a set of basis functions that represent the image in different frequency bands, each with a distinct resolution corresponding to each scale. Moreover, the color characteristics of the lesion, which depend on a visual context influenced by the colors in its surroundings, are also computed, as are shape attributes obtained through Fourier descriptors. The SVM, which is based on the structural risk minimization principle from statistical learning theory, is used for the classification task. Its objective is to construct optimal hyperplanes that represent the separation between classes; the generated hyperplane is determined by a subset of the training points, called support vectors.
For the database used in this work, the results showed good performance, with an overall accuracy of 92.73% for melanoma, and 86% for non-melanoma and benign lesions. The extracted descriptors and the SVM classifier constitute a method capable of recognizing and classifying the analyzed skin lesions.
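The SVM classification stage can be sketched with scikit-learn. The feature vectors below are synthetic stand-ins for the WPT texture, color and Fourier shape descriptors the abstract describes; every value and name here is illustrative, not from the thesis.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-ins for the feature vectors the abstract describes
# (WPT texture energies, context-dependent color attributes, Fourier
# shape descriptors); a real system would compute these from images.
rng = np.random.default_rng(0)
melanoma     = rng.normal(loc=3.0, scale=0.3, size=(40, 8))
non_melanoma = rng.normal(loc=0.0, scale=0.3, size=(40, 8))
X = np.vstack([melanoma, non_melanoma])
y = np.array([1] * 40 + [0] * 40)          # 1 = melanoma, 0 = non-melanoma

# The SVM builds the separating hyperplane; the support vectors are the
# training points that determine it.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.predict(melanoma[:2]))
```

Because the two synthetic clusters are well separated, the fitted SVM classifies them cleanly; real lesion features overlap far more, which is what makes descriptor choice the hard part of the pipeline.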
Resumo:
The human voice is an important communication tool, and any voice disorder can have profound implications for the social and professional life of an individual. Digital signal processing techniques have been used for the acoustic analysis of vocal disorders caused by laryngeal pathologies, owing to their simplicity and noninvasive nature. This work deals with the acoustic analysis of voice signals affected by laryngeal pathologies, specifically edema and nodules on the vocal folds. The purpose of this work is to develop a voice classification system to aid the pre-diagnosis of laryngeal pathologies, as well as to monitor pharmacological and post-surgical treatments. Linear Prediction Coefficients (LPC), Mel Frequency Cepstral Coefficients (MFCC) and coefficients obtained through the Wavelet Packet Transform (WPT) are applied to extract relevant characteristics of the voice signal. The Support Vector Machine (SVM), which aims to build optimal hyperplanes that maximize the margin of separation between the classes involved, is used for the classification task; the generated hyperplane is determined by the support vectors, which are subsets of the points in these classes. For the database used in this work, the results showed good performance, with a hit rate of 98.46% for the classification of normal versus pathological voices in general, and 98.75% for the classification between the pathologies edema and nodules.
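Of the three feature families mentioned, LPC is the simplest to sketch from scratch. Below is a minimal autocorrelation-method implementation via the Levinson-Durbin recursion, checked on a synthetic AR(1) signal; a real system would compute LPC per frame of the voice recording, and this sketch is an assumption of mine, not the thesis's code.

```python
import numpy as np

def lpc(signal, order):
    """Linear Prediction Coefficients via the Levinson-Durbin recursion
    (autocorrelation method). Returns [1, a_1, ..., a_order] such that
    x[n] is predicted by -a_1 x[n-1] - ... - a_order x[n-order]."""
    n = len(signal)
    r = np.array([signal[: n - k] @ signal[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:i] @ r[i - 1:0:-1]) / err   # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]          # update inner coefficients
        a[i] = k
        err *= 1.0 - k * k                           # prediction error update
    return a

# Sanity check on a synthetic AR(1) signal x[n] = 0.9 x[n-1] + e[n]:
# the order-1 predictor coefficient should recover roughly 0.9.
rng = np.random.default_rng(1)
e = rng.normal(size=20000)
x = np.empty_like(e)
x[0] = e[0]
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + e[t]
print(-lpc(x, 1)[1])   # close to 0.9
```

The LPC vector (or cepstral coefficients derived from it) then feeds the SVM classifier in the same way as the MFCC and WPT features.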
Resumo:
With the rapid growth of databases of various types (text, multimedia, etc.), there is a need for methods to order, access and retrieve data in a simple and fast way. Image databases, in addition to these needs, require a representation of the images in which their semantic content is taken into account. Accordingly, several proposals have been made, such as retrieval based on textual annotations. In the annotation approach, retrieval is based on the comparison between the textual description a user gives of an image and the descriptions of the images stored in the database. Among its drawbacks, the textual description is highly dependent on the observer, in addition to the effort required to describe all the images in the database. Another approach is content-based image retrieval (CBIR), where each image is represented by low-level features such as color, shape and texture. Results in the CBIR area have been very promising; however, representing image semantics through low-level features remains an open problem. New feature extraction algorithms, as well as new indexing methods, have been proposed in the literature, but these algorithms have become increasingly complex. It is therefore natural to ask whether there is a relationship between the semantics of an image and the low-level features extracted from it; if so, which descriptors best represent the semantics; and, consequently, how descriptors should be used to represent the content of the images. The work presented in this thesis proposes a method to analyze the relationship between low-level descriptors and semantics in an attempt to answer these questions.
Furthermore, it was observed that there are three possibilities for indexing images: using composite feature vectors, using parallel and independent index structures (one for each descriptor or set of descriptors), and using feature vectors sorted in a sequential order. The first two forms have been widely studied and applied in the literature, but no record was found of the third having been explored. This thesis therefore also proposes indexing through a sequential structure of descriptors, in which the order of the descriptors is based on the relationship between each descriptor and the users' semantics. Finally, the index proposed in this thesis proved better than the traditional approaches, and it was shown experimentally that the order in this sequence matters and that there is a direct relationship between this order and the relationship of the low-level descriptors with the users' semantics.
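One way to read the third, sequential option is as cascaded filtering: the descriptor most related to the users' semantics shortlists candidates, and later descriptors re-rank that shortlist. The sketch below is my interpretation under that assumption, with toy vectors and a hypothetical `cascaded_retrieval` helper; it is not the thesis's index structure.

```python
import numpy as np

def cascaded_retrieval(query, database, descriptor_slices, shortlist=4, top=2):
    """Retrieve images by applying descriptors sequentially: the first
    (assumed most semantically relevant) descriptor shortlists candidates,
    each following descriptor re-ranks the surviving shortlist."""
    candidates = np.arange(len(database))
    for sl in descriptor_slices:
        d = np.linalg.norm(database[candidates][:, sl] - query[sl], axis=1)
        keep = min(shortlist, len(candidates))
        candidates = candidates[np.argsort(d, kind="stable")[:keep]]
        shortlist = top  # tighten the list on later stages
    return candidates

# Toy database of 6 images: dims 0-1 stand in for one descriptor
# (say, color) and dims 2-3 for another (say, texture).
db = np.array([[0, 0, 0, 0], [0, 0, 9, 9], [1, 1, 1, 1],
               [9, 9, 0, 0], [9, 9, 9, 9], [0, 1, 0, 1]], float)
q = np.array([0, 0, 0, 0], float)
print(cascaded_retrieval(q, db, [slice(0, 2), slice(2, 4)]))  # [0 5]
```

The design question the thesis raises, which descriptor to apply first, corresponds here to the ordering of `descriptor_slices`.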
Resumo:
One of the most important goals of bioinformatics is the ability to identify genes in uncharacterized DNA sequences in worldwide databases. Gene expression in prokaryotes initiates when the RNA-polymerase enzyme interacts with DNA regions called promoters, where the main regulatory elements of the transcription process are located. Despite the improvement of in vitro techniques for molecular biology analysis, characterizing and identifying a great number of promoters in a genome is a complex task, and the main drawback is the absence of a large set of promoters from which to identify conserved patterns among species. Hence, an in silico method to predict promoters in any species is a challenge, and improved promoter prediction methods can be one step towards developing more reliable ab initio gene prediction methods. In this work, we present an empirical comparison of Machine Learning (ML) techniques, such as Naïve Bayes, Decision Trees, Support Vector Machines, Neural Networks, Voted Perceptron, PART, k-NN and ensemble approaches (Bagging and Boosting), applied to the task of predicting promoters in Bacillus subtilis. To do so, we first built two data sets of promoter and non-promoter sequences, one for B. subtilis and a hybrid one. To evaluate the ML methods, a cross-validation procedure was applied. Good results were obtained with ML methods such as SVM and Naïve Bayes on the B. subtilis data set; however, good results were not reached on the hybrid database.
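The evaluation pipeline (sequence features, a probabilistic classifier, cross-validation) can be sketched with scikit-learn. The sequences below are synthetic, with "promoters" made artificially AT-rich as a toy proxy for real promoter composition; the data, parameters and helper names are all illustrative assumptions, not the study's data sets.

```python
import numpy as np
from itertools import product
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
BASES = list("ACGT")
KMERS = ["".join(p) for p in product(BASES, repeat=3)]

def random_seq(n, at_rich=False):
    # Toy stand-in: real work would use curated B. subtilis promoter and
    # non-promoter sequences; here "promoters" are simply AT-rich.
    probs = [0.4, 0.1, 0.1, 0.4] if at_rich else [0.25] * 4
    return "".join(rng.choice(BASES, size=n, p=probs))

def kmer_counts(seq):
    # 3-mer occurrence counts as a fixed-length feature vector
    return [seq.count(m) for m in KMERS]

seqs = [random_seq(80, at_rich=True) for _ in range(60)] + \
       [random_seq(80) for _ in range(60)]
X = np.array([kmer_counts(s) for s in seqs])
y = np.array([1] * 60 + [0] * 60)        # 1 = promoter, 0 = non-promoter

# Cross-validation, matching the abstract's evaluation protocol
scores = cross_val_score(MultinomialNB(), X, y, cv=5)
print(scores.mean())
```

Swapping `MultinomialNB` for an SVM, a decision tree, or a bagged/boosted ensemble reproduces the kind of side-by-side comparison the study performs.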
Resumo:
This dissertation presents a new proposal for the Direction of Arrival (DOA) detection problem for more than one signal impinging simultaneously on an antenna array with linear or planar geometry, using intelligent algorithms. The DOA estimator is developed using techniques of Conventional Beamforming (CBF), Blind Source Separation (BSS) and the neural estimator MRBF (Modular Structure of Radial Basis Functions). The developed MRBF estimator has its capacity extended through interaction with the BSS technique: the BSS estimates the steering vectors of the multiple plane waves that reach the array at the same frequency, that is, it manages to separate mixed signals without a priori information. The technique developed in this work makes it possible to identify the directions of multiple sources and to identify and exclude interference sources.
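The CBF building block, scanning steering vectors against the array covariance, can be sketched in a few lines. The array size, arrival angle and snapshot count below are illustrative choices of mine, not parameters from the dissertation.

```python
import numpy as np

def steering_vector(theta_deg, n_elements, d_over_lambda=0.5):
    """Steering vector of a uniform linear array (ULA) for a plane wave
    arriving from angle theta (degrees from broadside)."""
    n = np.arange(n_elements)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.deg2rad(theta_deg)))

# One plane wave from 20 degrees on an 8-element half-wavelength ULA,
# plus a small amount of receiver noise.
rng = np.random.default_rng(0)
true_doa, n_el, snapshots = 20.0, 8, 200
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
X = np.outer(steering_vector(true_doa, n_el), s)
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

R = X @ X.conj().T / snapshots                      # sample covariance matrix
angles = np.arange(-90.0, 90.5, 0.5)
spectrum = [np.real(steering_vector(a, n_el).conj() @ R @ steering_vector(a, n_el))
            for a in angles]
peak = angles[int(np.argmax(spectrum))]
print(peak)                                         # close to true_doa
```

With several simultaneous sources, the CBF spectrum alone blurs; this is where the dissertation brings in BSS to recover the individual steering vectors and the MRBF network to refine the estimates.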
Resumo:
Nowadays, classifying proteins into structural classes, which concerns the inference of patterns in their 3D conformation, is one of the most important open problems in Molecular Biology. The main reason is that the function of a protein is intrinsically related to its spatial conformation; however, such conformations are very difficult to obtain experimentally in the laboratory, so this problem has drawn the attention of many researchers in Bioinformatics. Considering the great difference between the number of protein sequences already known and the number of three-dimensional structures determined experimentally, the demand for automated techniques for the structural classification of proteins is very high. In this context, computational tools, especially Machine Learning (ML) techniques, have become essential to deal with this problem. In this work, ML techniques are used in the recognition of protein structural classes: Decision Trees, k-Nearest Neighbor, Naive Bayes, Support Vector Machine and Neural Networks. These methods were chosen because they represent different learning paradigms and have been widely used in the Bioinformatics literature. Aiming to improve the performance of these techniques (individual classifiers), homogeneous (Bagging and Boosting) and heterogeneous (Voting, Stacking and StackingC) multi-classification systems are used. Moreover, since the protein database used in this work suffers from class imbalance, artificial class-balancing techniques (Random Undersampling, Tomek Links, CNN, NCL and OSS) are used to minimize this problem. To evaluate the ML methods, a cross-validation procedure is applied, in which the accuracy of the classifiers is measured as the mean classification error rate on independent test sets. These means are compared, two by two, by hypothesis tests, to assess whether there is a statistically significant difference between them.
With respect to the results obtained with the individual classifiers, the Support Vector Machine presented the best accuracy. The multi-classification systems (homogeneous and heterogeneous) showed, in general, performance superior or similar to that achieved by the individual classifiers, especially Boosting with Decision Trees and StackingC with Linear Regression as the meta-classifier. The Voting method, despite its simplicity, proved adequate for the problem addressed in this work. The class-balancing techniques, on the other hand, did not produce a significant improvement in the overall classification error; nevertheless, their use did improve the classification error for the minority class, and in this context the NCL technique proved the most appropriate.
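Of the class-balancing techniques compared, random undersampling is the simplest and can be sketched directly. The data below are toy values and the helper name is mine; the study's actual protein data set is not reproduced here.

```python
import numpy as np

def random_undersample(X, y, rng=None):
    """Balance a dataset by randomly discarding samples from larger
    classes until every class has the size of the smallest one."""
    if rng is None:
        rng = np.random.default_rng(0)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    keep.sort()
    return X[keep], y[keep]

# Toy imbalanced set: 10 minority vs 50 majority samples (illustrative).
X = np.arange(60).reshape(60, 1)
y = np.array([1] * 10 + [0] * 50)
Xb, yb = random_undersample(X, y)
print(np.bincount(yb))   # both classes reduced to the minority count
```

Techniques such as Tomek Links and NCL refine this idea by choosing which majority samples to discard (borderline or noisy ones) rather than discarding at random, which is consistent with NCL's better minority-class results reported above.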
Resumo:
The magnetic order of bilayers composed of a ferromagnetic film (F) coupled to an antiferromagnetic film (AF) is studied. The films are described as stacks of coupled monolayers, and the interfilm coupling is described by an exchange interaction between the magnetic moments at the interface. The F film has a cubic anisotropy, while the AF film has a uniaxial anisotropy. We analyze the effects of an external dc magnetic field applied parallel to the interface. We consider the intralayer coupling strong enough to keep all moments of a monolayer parallel, so that each monolayer is described by one vector proportional to its magnetization; the interlayer coupling is represented by an exchange interaction between these vectors. The magnetic energy of the system is the sum of the exchange, anisotropy and Zeeman energies, and the equilibrium configuration is the one that gives the absolute minimum of the total energy. The magnetization of the system is calculated, and the influence of the external dc field combined with the interfilm coupling and the unidirectional anisotropy is studied, with special attention given to the region near the transition fields. The torque equation is used to study the dynamical behavior of these systems: we consider small oscillations around the equilibrium position and neglect nonlinear terms to obtain the natural frequencies of the system. The dependence of the frequencies on the external dc field and their behavior in the phase transition region are analyzed.
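Schematically, the total energy minimized here has the form below. The notation is a generic placeholder of mine (interface exchange constant, unspecified anisotropy functionals, applied field), not the thesis's own symbols:

```latex
E \;=\; \underbrace{-J_{\mathrm{int}}\,\hat{m}_{F}\cdot\hat{m}_{AF}}_{\text{interface exchange}}
\;+\; \underbrace{E_{\mathrm{anis}}^{F}(\hat{m}_{F}) + E_{\mathrm{anis}}^{AF}(\hat{m}_{AF})}_{\text{cubic / uniaxial anisotropy}}
\;-\; \underbrace{\mu_{0}\,\vec{H}\cdot\big(\vec{M}_{F}+\vec{M}_{AF}\big)}_{\text{Zeeman}}
```

Minimizing $E$ over the layer magnetization directions gives the equilibrium configuration, and linearizing the torque equation about that minimum yields the natural frequencies studied in the text.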
Resumo:
Broadly speaking, gene therapy involves the transfer of genetic material into a cell, tissue or organ in order to cure a disease or at least improve the clinical status of a patient. Put simply, gene therapy consists in the insertion of functional genes into cells containing defective genes, substituting, complementing or inhibiting them. Achieving the expression of foreign DNA in a population of cells requires its transfer to the target; it is therefore a key issue to create systems, the vectors, able to transfer and protect the DNA until it reaches the target. The disadvantages related to the use of viral vectors have encouraged efforts to develop emulsions as non-viral vectors: they are easily produced, present controllable stability and enable transfection. The aim of this work was to develop an emulsion for gene therapy and to evaluate its ability to compact nucleic acids through the formation of a complex with the plasmid pIRES2-EGFP. The first step was to determine the Hydrophilic-Lipophilic Balance (HLB) of Captex® 355 (the oily internal phase of the emulsion) through long- and short-term stability assays. Based on the results, emulsions composed of Captex® 355, Tween 20® and Span 60® with an HLB of 10.7 were produced by three different methods: phase inversion, spontaneous emulsification and sonication. The results showed that the smallest droplet diameter and the best stability were achieved by the sonication method. Cationic emulsions were then made by adding DOTAP to the basic emulsion, and their association with pIRES2-EGFP was evaluated by electrophoresis. Several emulsion-to-DNA ratios were evaluated, and the results showed that 100% of the complex was formed when the DOTAP/DNA ratio (nmol/µg) was 130. In conclusion, the overall results show the ability of the proposed emulsion to compact pIRES2-EGFP, which is a requirement for successful transfection.
Therefore, such a formulation may be considered a promising candidate for gene therapy.
Resumo:
The development of epidemiological practices in the last years of the nineteenth century and the early twentieth century was characterized both by the influence of medical geography and by the emergence of theories of microbes and disease vectors. Both were used to explain outbreaks in Rio Grande do Norte, especially in Natal. In this process, new institutions linked to public health were organized, unhealthy spaces were delimited and hygiene measures were prescribed. The redefinitions of these spaces were linked to updated elements of Hippocratic medicine, such as aerism and the emphasis on medical topography. How did the town's physicians organize themselves in the face of new meanings and fields of expertise, in the demarcation of diseases and the regulation of their own practices against illegal medical practitioners? Likewise, the very occurrence of epidemics mobilized people, urban institutions and apparatuses. But how was the Hippocratic legacy leading to the idea of bad air originating from swamps, inherited from the eighteenth and nineteenth centuries, linked to the new microbial assumptions and disease vectors of the early twentieth century? How did an invader from Africa (the mosquito A. gambiae) mobilize transnational efforts to combat malaria and redefine epidemiological practices? The aim of this work is to understand how epidemiological practices redefine the way spaces, practices and disease are defined, from an approach influenced by a relational history of spaces and by a theoretical synergy that includes topics from Science Studies, Post-Structuralist Geography and some elements of Feminist Studies. The documentary research surveyed the reports of the provincial presidents, government messages to the Provincial Assembly, specialized medical articles and theses, documents from the Rockefeller Foundation, and national and international journals.
In this regard, attention is given to both the material and the discursive aspects of the epidemiological practices related to the spaces of Natal and, more generally, of Rio Grande do Norte, between bad air and malaria.
Resumo:
This thesis aims to discuss the articulations that have been produced in the socio-cultural field of the Psychiatric Reform process and their pertinence to the strengthening of the Psychosocial Care Strategy (EAPS) in Fortaleza/CE. This interest is justified by the need to promote not only the production of these networks, but also interfaces that enable strategies of support and sociability from the perspective of the deinstitutionalization of madness. Inspired by the cartographic perspective of Deleuze and Guattari, we set the following objectives: 1) to discuss the complexity of the Psychiatric Reform process and analyze the EAPS as a model for the country's current Mental Health policy; 2) to map socio-cultural strategies connected to the CAPS network in the city, investigating experiences that already exist or may be constituted as everyday social support networks; 3) based on that mapping, to define and discuss some aspects that converge towards the accomplishment of this new mental health paradigm, drawing a cartography of the issues and movements in progress. The mapping was carried out in 2009 and consisted of semi-structured interviews with the coordinators of the 14 existing CAPS and with some people connected to the Coordination of Mental Health. Besides, throughout the development of the study, we took part in public events that gave us clues about the connection between mental health and culture. From the survey produced, we defined three vectors for discussion (Art, Labour and Partnership with Social Movements), which stood out as effective possibilities of intervention in the socio-cultural field of the Psychiatric Reform in Fortaleza and which reveal important paths in the process of fulfilling a new pattern of care.
For each of these axes, we chose a field of empirical research (Projeto Arte e Saúde, COOPCAPS and MSMCBJ) in which we could better understand their strengths and difficulties, starting from open interviews with some of their actors and the production of a diary of sensations in 2010. We have seen that they are articulated with the proposal of the EAPS, being part of the concerns of the National Mental Health Policy and also of the municipal administration. However, we have noticed that it is necessary to promote those dimensions further, focusing on their complexity at the macro- and micro-political levels, so as to advance the Psychiatric Reform process.
Resumo:
This thesis upholds the idea that the therapeutic residential service (TRS), as a hybrid device in the recent process of deinstitutionalization in mental health, works as a producer of problems while also indicating challenges and potentialities in this process, in mental health care and in its own production of care. To that end, we work with the cartographic perspective, approaching reality as a field of production of subjectivities whose transformations and intensities are the major propellants of thought. From this perspective, it was possible to produce three maps from meetings with actors and groups involved with the TRS and from the theoretical study carried out. On the first map, we charted the conditions of possibility of this device and its design amid the process of deinstitutionalization and health policies; we indicated the configuration of the TRS as a hybrid and problematized its proposition as a means of "social rehabilitation" that can work as a mechanism of social homogenization. On the second map, we charted the asylum-like captures, through images and ways historically built around madness, present in the contemporary biopolitical game, and we indicated that resistance to such captures should be built daily and politically, as an important vector of the deinstitutionalization process in mental health. Finally, on the third map, we charted the care produced in the TRS by analyzing the transition from the psychiatric hospital to the TRS and the work of the caregivers' team. In this mapping, the care, owing to weaknesses in the field of co-responsibility, is revealed to be crossed by asylum-like, disciplinary and normalizing elements, but it is also built in the resistance born from the links in the intersubjective field of the caring work. We conclude, then, that the power of the TRS and of the deinstitutionalization process itself lies in building and strengthening, through affective labor, micro-political networks that produce life and liberty.
Resumo:
This work presents an extension of the haRVey prover aimed at the verification of proof obligations generated according to the B method. The B method for software development covers the specification, design and implementation phases of the software life cycle. In the context of verification, the proof tools Prioni, Z/EVES and Atelier-B/Click n Prove stand out. They describe formalisms that support checking the satisfiability of formulas of axiomatic set theory, i.e., they can be applied to the B method. SMT checking consists of checking the satisfiability of quantifier-free first-order formulas with respect to a decidable theory. The SMT-checking approach implemented by the automatic theorem prover haRVey is presented; it adopts the theory of arrays, which does not allow expressing all the constructions needed by set-based specifications. Thus, to extend SMT checking to set theories, the Zermelo-Fraenkel (ZFC) and von Neumann-Bernays-Gödel (NBG) set theories stand out. Given that the SMT-checking approach implemented in haRVey requires a finite theory and can be extended to undecidable theories, the NBG theory is a suitable option for expanding haRVey's deductive power to set theory. Thus, by mapping the set operators provided by the B language to classes of the NBG theory, an alternative SMT-checking approach applied to the B method is obtained.
Resumo:
Nonogram is a logic puzzle whose associated decision problem is NP-complete. It has applications in pattern recognition and data compression, among others. The puzzle consists in determining an assignment of colors to pixels distributed in an N × M matrix that satisfies line and column constraints. A Nonogram is encoded by a vector whose elements specify the number of pixels in each row and column of a figure without specifying their coordinates. This work presents exact and heuristic approaches to solve Nonograms. Depth-first search was one of the chosen exact approaches because it is a typical, easy-to-implement brute-force search algorithm. Another exact approach was based on the Las Vegas algorithm, with the intent of investigating whether the randomness introduced by a Las Vegas-based algorithm would be an advantage over depth-first search. The Nonogram is also transformed into a Constraint Satisfaction Problem. Three heuristic approaches are proposed: a Tabu Search and two memetic algorithms. A new objective function is also proposed. The approaches are applied to 234 instances, with sizes ranging from 5 × 5 to 100 × 100, including logic-solvable and random Nonograms.
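The line constraints at the heart of all these approaches can be sketched as a simple feasibility check that any of the solvers (exact or heuristic) would call on a candidate assignment. The 3x3 example puzzle below is hypothetical, chosen only to exercise the check.

```python
def blocks(line):
    """Lengths of the maximal runs of filled cells (1s) in a row or column."""
    runs, count = [], 0
    for cell in line:
        if cell:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

def satisfies(grid, row_clues, col_clues):
    """Check a full 0/1 assignment against the Nonogram's line constraints."""
    rows_ok = all(blocks(r) == c for r, c in zip(grid, row_clues))
    cols_ok = all(blocks(col) == c for col, c in zip(zip(*grid), col_clues))
    return rows_ok and cols_ok

# A hypothetical 3x3 puzzle whose unique solution is a plus sign.
grid = [[0, 1, 0],
        [1, 1, 1],
        [0, 1, 0]]
print(satisfies(grid, [[1], [3], [1]], [[1], [3], [1]]))   # True
```

A depth-first solver enumerates partial assignments and prunes with checks like this; the heuristic approaches instead score how far a candidate grid is from satisfying all clues, which is the role of the objective function mentioned above.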