53 results for "Acurácia Posicional" (Positional Accuracy)


Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo

Relevance:

10.00%

Publisher:

Abstract:

The episodic memory system allows us to retrieve information about events, including their contextual aspects. It has been suggested that episodic memory comprises two independent components: recollection and familiarity. Recollection refers to the vivid and detailed retrieval of item and contextual information, while familiarity is the ability to recognize previously seen items as familiar. Although emotion is one of the most influential processes affecting memory, only a few studies have investigated its effect on recollection and familiarity. Another limitation of studies on the effect of emotion on memory is that most of them have not adequately separated the effects of arousal from those of positive/negative valence. The main purpose of the current work is to investigate the independent effects of emotional valence and arousal on recollection and familiarity, as well as to test some hypotheses that have been suggested about the effect of emotion on episodic memory. The participants performed a recognition task on three lists of emotional pictures: high-arousal negative, high-arousal positive and low-arousal positive. At the test session, participants also rated the confidence level of their responses. The confidence ratings were used to plot ROC curves and estimate the contributions of recollection and familiarity to recognition performance. As the main results, we found that negative valence enhanced the recollection component without any effect on familiarity or recognition accuracy. Arousal did not affect recognition performance or its components, but high arousal was associated with a higher proportion of false memories. This work highlights the importance of considering both the emotional dimensions and the episodic memory components in the study of emotion effects on episodic memory, since they interact in complex and independent ways.

Relevance:

10.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance:

10.00%

Publisher:

Abstract:

Traditional applications of feature selection in areas such as data mining, machine learning and pattern recognition aim to improve the accuracy and to reduce the computational cost of the model. This is done by removing redundant, irrelevant or noisy data, finding a representative subset of the data that reduces its dimensionality without loss of performance. With the development of research on ensembles of classifiers, and the observation that this type of model outperforms individual models when the base classifiers are diverse, a new field of application for feature selection research has emerged. In this new field, the goal is to find diverse subsets of features for building the base classifiers of ensemble systems. This work proposes an approach that maximizes the diversity of the ensembles by selecting feature subsets using a model that is independent of the learning algorithm and has low computational cost. This is done using bio-inspired metaheuristics with filter-based evaluation criteria.
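As a toy illustration of the idea described above, the sketch below selects feature subsets with a simple genetic algorithm whose fitness is a filter criterion (mean absolute feature-label correlation, penalized by subset size). The dataset, fitness function and GA parameters are illustrative assumptions, not the author's actual experimental setup.

```python
# Sketch of filter-based feature-subset selection with a genetic algorithm.
import random

random.seed(42)

# Toy data: 6 features, but only features 0 and 3 carry the signal.
def make_data(n=200):
    X, y = [], []
    for _ in range(n):
        row = [random.gauss(0, 1) for _ in range(6)]
        X.append(row)
        y.append(1 if row[0] + row[3] > 0 else 0)
    return X, y

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def fitness(mask, X, y):
    # Filter criterion: mean |feature-label correlation| of the selected
    # features, with a small penalty on subset size (favoring compactness).
    chosen = [j for j, bit in enumerate(mask) if bit]
    if not chosen:
        return 0.0
    rel = sum(abs(correlation([r[j] for r in X], y)) for j in chosen) / len(chosen)
    return rel - 0.01 * len(chosen)

def ga_select(X, y, pop_size=20, gens=30, p_mut=0.1):
    n_feat = len(X[0])
    pop = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(gens):
        # Elitist survival, one-point crossover, bit-flip mutation.
        survivors = sorted(pop, key=lambda m: fitness(m, X, y), reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_feat)
            child = [bit ^ (random.random() < p_mut) for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, X, y))

X, y = make_data()
best = ga_select(X, y)
print("selected features:", [j for j, bit in enumerate(best) if bit])
```

In an ensemble setting, running this search several times with different seeds (or with an explicit diversity term in the fitness) would yield the distinct feature subsets used to train the base classifiers.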

Relevance:

10.00%

Publisher:

Abstract:

Committees of classifiers can be used to improve the accuracy of classification systems; in other words, different classifiers applied to the same problem can be combined to create a more accurate system, called a committee of classifiers. For this to succeed, the classifiers must make mistakes on different objects of the problem, so that the errors of one classifier are overridden by the other, correct classifiers when the committee's combination method is applied. This characteristic of classifiers erring on different objects is called diversity. However, most diversity measures fail to capture this property directly. Recently, two diversity measures (good diversity and bad diversity) were proposed with the aim of helping to generate more accurate committees. This work performs an experimental analysis of these measures applied directly to the construction of committees of classifiers. The adopted construction method is modeled as a search, over the feature sets of the problem's databases and over the best set of committee members, for the committee of classifiers that produces the most accurate classification. This problem is solved by metaheuristic optimization techniques, in their mono- and multi-objective versions. Analyses are performed to verify whether using or adding the measures of good diversity and bad diversity to the optimization objectives yields more accurate committees. Thus, the contribution of this study is to determine whether good diversity and bad diversity can be used in mono-objective and multi-objective optimization techniques as objectives for building committees of classifiers that are more accurate than those built by the same process using only classification accuracy as the optimization objective.
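A minimal sketch of the good/bad diversity decomposition (in the sense of Brown and Kuncheva) computed from a vote matrix is given below; the vote matrix is a made-up example, not data from the study. The decomposition satisfies: majority-vote error = mean individual error - good diversity + bad diversity.

```python
# Good/bad diversity from a matrix of classifier votes.
from collections import Counter

def good_bad_diversity(votes, labels):
    """votes[i][j]: prediction of classifier j on example i."""
    L = len(votes[0])
    N = len(labels)
    good = bad = maj_err = ind_err = 0.0
    for preds, y in zip(votes, labels):
        majority = Counter(preds).most_common(1)[0][0]
        wrong = sum(p != y for p in preds)
        ind_err += wrong / L
        if majority == y:
            good += wrong / L       # disagreement "absorbed" by a correct vote
        else:
            maj_err += 1
            bad += (L - wrong) / L  # correct individuals outvoted
    return good / N, bad / N, maj_err / N, ind_err / N

votes = [
    [1, 1, 0],  # majority 1
    [0, 1, 0],  # majority 0
    [1, 0, 0],  # majority 0
    [1, 1, 1],  # majority 1
]
labels = [1, 0, 1, 0]
g, b, e_maj, e_ind = good_bad_diversity(votes, labels)
print(g, b, e_maj, e_ind)
# Decomposition check: e_maj == e_ind - g + b
assert abs(e_maj - (e_ind - g + b)) < 1e-12
```

Used as optimization objectives, one would then maximize good diversity and minimize bad diversity alongside (or instead of) raw accuracy.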

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work was to describe the methodological procedures required to produce 3D digital imaging of the external and internal geometry of reservoir analogue outcrops and to build Virtual Outcrop Models (VOMs). The external geometry was imaged using laser scanner, geodetic GPS and total station surveys, while the internal geometry was imaged with GPR (Ground Penetrating Radar). The resulting VOMs were then enriched with more detailed information through the addition of geological data and gamma-ray and permeability profiles. To illustrate the methodological procedures used in this work, two outcrops located in the eastern part of the Parnaíba Basin were selected. The first contains rocks from the aeolian deposits of the Piauí Formation (Neo-Carboniferous) and tidal flat deposits of the Pedra de Fogo Formation (Permian), exposed in large outcrops between Floriano and Teresina (Piauí). The second area, located in the Sete Cidades National Park, also in Piauí, exposes rocks of the Cabeças Formation, deposited in fluvial-deltaic systems during the Late Devonian. From the data of the adapted VOMs it was possible to identify lines, surfaces and 3D geometries and, therefore, to quantify the geometries of interest. Among the parameterization values obtained, the most relevant was a table containing the thicknesses and widths of channel and lobe deposits measured at the Paredão and Biblioteca outcrops; this table can be used as input for stochastic reservoir simulation. An example of the direct use of such a table and the associated radargrams was the identification of bounding surfaces in the aeolian deposits of the Piauí Formation.
Although radargrams supply only two-dimensional data, lines acquired along a mesh of profiles were used to add a third dimension to the imaging of the internal geometry, an approach that proved valid for all studied outcrops. In conclusion, the tool presented here can become a new methodology in which the advantages of digital imaging acquired with the laser scanner (precision, accuracy and speed of acquisition) are combined with those of the total station (precision) and the classical digital photomosaic technique.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents models of Sea Surface Layer (SSL) parameters, such as chlorophyll-a, sea surface temperature (SST), Primary Productivity (PP) and Total Suspended Matter (TSM), for the region adjacent to the continental shelf of Rio Grande do Norte (RN), Brazil. Concentrations of these parameters measured in situ were compared, quasi-synchronously in time, with AQUA-MODIS images between 2003 and 2011. Coefficients of determination between in situ samples and the reflectance bands of the AQUA-MODIS sensor were representative. From this, concentrations of SSL parameters were derived for the eastern and northern RN continental shelf, and the geographic distribution of the variation of these parameters between 2009 and 2012 was analyzed. Geographic and seasonal variations, influenced mainly by global climate phenomena such as El Niño and La Niña, were found through the analysis of AQUA-MODIS images by Principal Component Analysis (PCA). The images show qualitatively the variance and availability of TSM in these regions, as well as their relationship with coastal erosion hotspots monitored along the RN coast. In one of the areas identified as having limited availability of TSM, we developed a methodology for the generation and evaluation of Digital Elevation Models (DEM) of beach surfaces (emerged and submerged sections) from the integration of topographic and bathymetric data measured in situ and accurately georeferenced, suitable for studies of geomorphology and short-term coastal dynamics. The methodology consisted of topographic and bathymetric surveys with GNSS positioning operated in relative kinematic mode, referenced to the stations of the geodetic network of the study area, which provided a geodetic link to the Brazilian Geodetic System (SGB): univocal, fixed, and relatively stable over time.
In this study, Ponta Negra Beach, Natal/RN, characterized by intense human occupation and intense coastal erosion in recent decades, was identified as a region of low variance and availability of TSM in the offshore region. The results demonstrate the potential of the proposed methodology in terms of accuracy and productivity, and the progress achieved in relation to classical methods of surveying beach profiles.

Relevance:

10.00%

Publisher:

Abstract:

Currently, there are different definitions of fuzzy implication accepted in the literature. From a theoretical point of view, this lack of consensus shows that there is disagreement about the real meaning of "logical implication" in the Boolean and fuzzy contexts. From a practical point of view, it raises doubts about which "implication operators" software engineers should consider when implementing a Fuzzy Rule-Based System (FRBS). A poor choice of these operators can result in FRBSs that are less accurate and less appropriate to their application domains. One way around this situation is to understand fuzzy logical connectives better, which requires knowing which properties such connectives can satisfy. Therefore, in order to contribute to the meaning of fuzzy implication and to the implementation of more appropriate FRBSs, several Boolean laws have been generalized and studied as equations or inequations in fuzzy logics. Such generalizations are called Boolean-like laws, and they are not generally valid under every fuzzy semantics. In this scenario, this dissertation presents an investigation of the sufficient and necessary conditions under which three Boolean-like laws, namely y ≤ I(x, y), I(x, I(y, x)) = 1 and I(x, I(y, z)) = I(I(x, y), I(x, z)), remain valid in the fuzzy context, considering six classes of fuzzy implications as well as implications generated by automorphisms. In addition, still aiming at the implementation of more appropriate FRBSs, we propose an extension of these systems.
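As a numerical illustration, the sketch below checks two of the Boolean-like laws mentioned above, y ≤ I(x, y) and I(x, I(y, x)) = 1, for two standard fuzzy implications (Łukasiewicz and Gödel) on a grid of [0, 1] values. The grid check is illustrative only; it does not replace the algebraic proofs the dissertation is concerned with.

```python
# Grid check of two Boolean-like laws for two classic fuzzy implications.

def lukasiewicz(x, y):
    return min(1.0, 1.0 - x + y)

def goedel(x, y):
    return 1.0 if x <= y else y

grid = [i / 20 for i in range(21)]

results = {}
for name, I in [("Lukasiewicz", lukasiewicz), ("Goedel", goedel)]:
    # Law 1: y <= I(x, y); Law 2: I(x, I(y, x)) = 1 (floating-point tolerance).
    law1 = all(y <= I(x, y) + 1e-12 for x in grid for y in grid)
    law2 = all(abs(I(x, I(y, x)) - 1.0) < 1e-9 for x in grid for y in grid)
    results[name] = (law1, law2)
    print(name, law1, law2)
```

Both laws hold for these two implications; the interesting cases studied in the dissertation are the implication classes and automorphism-generated implications for which such laws can fail.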

Relevance:

10.00%

Publisher:

Abstract:

This work discusses the application of ensemble techniques to the development of multimodal recognition systems with revocable biometrics. Biometric systems are the identification and access-control techniques of the future, and proof of this is the constant growth of such systems in today's society. However, much remains to be developed, mainly with regard to the accuracy, security and processing time of such systems. In the search for more efficient techniques, multimodal systems and the use of revocable biometrics are promising, and can address many of the problems involved in traditional biometric recognition. A multimodal system is characterized by combining different biometric security techniques, thereby overcoming many limitations, such as failures in feature extraction or in processing the dataset. Among the various possibilities for developing a multimodal system, the use of ensembles is quite promising, motivated by the performance and flexibility they have demonstrated over the years in their many applications. Regarding security, one of the biggest problems is that biometric data are permanently tied to the user and cannot be changed if compromised. This problem has been addressed by techniques known as revocable biometrics, which apply a transformation to the biometric data in order to protect its unique characteristics, making cancellation and replacement possible. In order to contribute to this important subject, this work compares the performance of individual classifiers, as well as ensembles of classifiers, on the original data and on biometric spaces transformed by different functions. Another noteworthy factor is the use of Genetic Algorithms (GA) in different parts of the system, seeking to further maximize its efficiency.
One motivation of this development is to evaluate the gain that ensemble systems tuned by different GAs can bring on data in the transformed space. Another relevant factor is the generation of even more efficient revocable systems by combining two or more transformation functions, demonstrating that it is possible to extract information of a similar standard by applying different transformation functions. All this makes clear the importance of revocable biometrics, ensembles and GAs in the development of more efficient biometric systems, something increasingly important in the present day.
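The core idea of a revocable transformation can be sketched with a toy scheme: a key-driven permutation plus additive offsets applied to a feature vector. Matching happens in the transformed domain, and issuing a new key revokes the old template. This is an illustrative invention, not one of the transformation functions evaluated in the work.

```python
# Toy cancelable-biometrics transformation: key-driven permutation + offsets.
import random

def transform(features, key):
    rng = random.Random(key)          # the key is the user's revocable secret
    idx = list(range(len(features)))
    rng.shuffle(idx)                  # key-dependent permutation
    offsets = [rng.uniform(-1, 1) for _ in idx]
    return [features[i] + o for i, o in zip(idx, offsets)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

enrolled = [0.8, 0.1, 0.4, 0.9, 0.3]        # user's reference features
probe_same = [0.82, 0.12, 0.38, 0.91, 0.3]  # fresh sample, same user
probe_other = [0.1, 0.9, 0.7, 0.2, 0.8]     # different user

t_enrolled = transform(enrolled, key=1234)
# Under the same key, distances are preserved: genuine probes still match.
d_same = distance(t_enrolled, transform(probe_same, key=1234))
d_other = distance(t_enrolled, transform(probe_other, key=1234))
# Revocation: a new key yields an unrelated template for the same user.
d_revoked = distance(t_enrolled, transform(enrolled, key=9999))
print(round(d_same, 3), round(d_other, 3), round(d_revoked, 3))
```

Combining two such functions (as the work proposes for stronger schemes) amounts to composing transforms with independent keys.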

Relevance:

10.00%

Publisher:

Abstract:

Objective: To translate and evaluate the psychometric properties of the Mobility Assessment Tool Physical Activity (MAT-PA) in community-dwelling Brazilian older adults. Methods: This is a translation, cross-cultural adaptation and accuracy study of the MAT-PA instrument, in which 329 community-dwelling older adults aged 60 years or over were assessed. Participants completed an assessment form consisting of: a sociodemographic and perceived-health questionnaire; physical assessment; Leganés Cognitive Test (PCL); Center for Epidemiologic Studies Depression Scale (CES-D); International Physical Activity Questionnaire (IPAQ); and the Mobility Assessment Tool Physical Activity (MAT-PA). From this total sample, 42 older adults wore an accelerometer for 8 days. To verify the test-retest reliability of the MAT-PA, the instrument was reapplied to 34 older adults 8 days after the first assessment. Statistical analysis used Spearman correlation, the Intraclass Correlation Coefficient, Cronbach's α coefficient, Bland-Altman analysis and the paired t-test. Results: Correlations of the IPAQ and accelerometer data with the total MAT-PA score were significant, with Spearman correlation coefficients of 0.13 and 0.41, respectively. Reliability was also analyzed, with the following results: internal consistency, by Cronbach's alpha coefficient (α = 0.70); test-retest agreement, by the intraclass correlation coefficient (ICC = 0.53; p < 0.001). Conclusion: The Brazilian version of the Mobility Assessment Tool Physical Activity (MAT-PA) proved to be a valid and reliable instrument for assessing physical activity in older adults.

Relevance:

10.00%

Publisher:

Abstract:

Automatic detection of blood components is an important topic in the field of hematology. Segmentation is an important stage because it allows components to be grouped into common areas and processed separately, while leukocyte differential classification enables them to be analyzed individually. With auto-segmentation and differential classification, this work contributes to the analysis of blood components by providing tools that reduce manual labor while increasing accuracy and efficiency. Using digital image processing techniques associated with a generic and automatic fuzzy approach, this work proposes two Fuzzy Inference Systems, labeled I and II, for the auto-segmentation of blood components and the differential classification of leukocytes, respectively, in microscopic blood smear images. Using Fuzzy Inference System I, the proposed technique segments the image into four regions: the leukocyte's nucleus and cytoplasm, the erythrocyte area and the plasma area; using Fuzzy Inference System II and the segmented leukocyte (nucleus and cytoplasm), it classifies leukocytes differentially into five types: basophils, eosinophils, lymphocytes, monocytes and neutrophils. For testing, 530 images containing microscopic samples of blood smears prepared with different methods were used. The images were processed, and their accuracy indices against Gold Standards were calculated and compared with the manual results and with other results found in the literature for the same problems. Regarding segmentation, the developed technique achieved accuracies of 97.31% for leukocytes, 95.39% for erythrocytes and 95.06% for blood plasma. As for the differential classification, the percentage varied between 92.98% and 98.39% for the different leukocyte types.
In addition to providing auto-segmentation and differential classification, the proposed technique also contributes to the definition of new descriptors and to the construction of an image database using various hematological staining processes.
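A miniature of the fuzzy inference step can be sketched as triangular membership functions over a gray-level intensity, with each pixel assigned to the region of highest membership. The membership ranges here are invented for the sketch; the actual Fuzzy Inference Systems I and II use richer color features and rule bases.

```python
# Toy fuzzy pixel labeling with triangular membership functions.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed gray-level profiles: nuclei stain darkest, plasma is brightest.
REGIONS = {
    "nucleus":     lambda g: tri(g, -1, 40, 110),
    "cytoplasm":   lambda g: tri(g, 80, 140, 200),
    "erythrocyte": lambda g: tri(g, 120, 180, 235),
    "plasma":      lambda g: tri(g, 200, 255, 256),
}

def label_pixel(gray):
    # Defuzzification by maximum membership.
    return max(REGIONS, key=lambda r: REGIONS[r](gray))

print(label_pixel(50))   # dark pixel
print(label_pixel(240))  # bright pixel
```

Running the same max-membership step over every pixel of an image yields the four-region segmentation; the differential classification stage would apply a second rule base to shape and texture descriptors of the segmented leukocyte.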

Relevance:

10.00%

Publisher:

Abstract:

Data visualization is widely used to facilitate the comprehension of information and to find relationships between data. One of the most widely used techniques for visualizing multivariate data (4 or more variables) is the 2D scatterplot. This technique associates each data item with a visual mark in the following way: two variables are mapped to Cartesian coordinates, so that the mark can be placed on the Cartesian plane; the other variables are mapped progressively to visual properties of the mark, such as size, color and shape. As the number of variables to be visualized increases, so does the number of visual properties associated with the mark, and the complexity of the final visualization grows. However, increasing the complexity of the visualization does not necessarily imply a better visualization; sometimes the opposite happens, producing a visually polluted and confusing visualization, a problem called visual properties overload. This work investigates whether it is possible to work around the overload of the visual channel and improve insight into multivariate data through a modification of the 2D scatterplot technique. In this modification, we map the variables of the data items to multisensory marks, composed not only of visual properties but also of haptic properties, such as vibration, viscosity and elastic resistance. We believed that this approach could ease the insight process by transposing properties from the visual channel to the haptic channel. The hypothesis was verified through experiments in which we analyzed (a) the accuracy of the answers, (b) response time, and (c) the degree of personal satisfaction with the proposed approach. However, the hypothesis was not validated.
The results suggest an equivalence between the investigated visual and haptic properties in all analyzed aspects, though in strictly numeric terms the multisensory visualization achieved better results in response time and personal satisfaction.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this study was to analyze the behavior of sell-side analysts and to propose a classification of analysts, considering the performance of their price forecasts and recommendations (sell-hold-buy) in the Brazilian stock market. For this, the first step was to analyze the consensus of analysts to understand the importance of this collective intervention in the market; the second was to analyze the analysts individually to understand how their analyses improve over time; the third was to understand the main ranking methods used in markets; finally, we propose a form of classification that reflects the aspects discussed. To investigate the hypotheses proposed in the study, linear panel models were used to capture effects over time. The data on price forecasts and analyst recommendations, individually and in consensus, for the period 2005-2013, were obtained from Bloomberg®. The main results were: (i) superior performance of consensus recommendations compared with individual analyses; (ii) the association between the number of analysts issuing recommendations and improved accuracy suggests that this number may be linked to increased consensus strength and hence accuracy; (iii) the anchoring effect in analysts' consensus revisions makes their predictions biased, overvaluing the assets; (iv) analysts need to exercise greater caution in times of economic turbulence, also observing foreign markets such as the USA, since these may cause shifts in bias between optimism and pessimism; (v) effects due to changes in bias, such as increased pessimism, can cause an excessive increase in the number of buy recommendations.
In this case, analysts should be more cautious in their analyses, mainly regarding consistency between recommendation and expected price; (vi) the analyst's experience with the asset and with the asset's economic sector contributes to the improvement of forecasts, whereas overall experience showed the opposite evidence; (vii) optimism associated with overall experience shows, over time, behavior similar to overconfidence, which could reduce accuracy; (viii) the conflicting effect of general experience on accuracy versus observed return suggests that, over time, the analyst develops effects similar to the endowment bias towards assets, which would result in a conflict between recommendations and forecasts; (ix) although focusing on fewer sectors contributes to accuracy, the same does not occur with focusing on fewer assets, so analysts may enjoy economies of scale when covering more assets within the same industry; and finally, (x) it was possible to develop a proposal for classifying analysts that considers both the returns and the consistency of their predictions, called the Analysis Coefficient. This ranking produced better results in terms of return over standard deviation.

Relevance:

10.00%

Publisher:

Abstract:

Google Docs (GD) is an online word processor with which multiple authors can work on the same document, in a synchronous or asynchronous manner, which can help develop the ability to write in English (WEISSHEIMER; SOARES, 2012). As they write collaboratively, learners find more opportunities to notice the gaps in their written production, since they are exposed to more input from their co-authors (WEISSHEIMER; BERGSLEITHNER; LEANDRO, 2012), and prioritize the process of text (re)construction over concern with the final product, i.e., the final version of the text (LEANDRO; WEISSHEIMER; COOPER, 2013). Moreover, when it comes to second language (L2) learning, producing language enables the consolidation of existing knowledge as well as the internalization of new knowledge (SWAIN, 1985; 1993). Taking this into consideration, this mixed-method (DÖRNYEI, 2007) quasi-experimental (NUNAN, 1999) study investigates the impact of collaborative writing through GD on the development of the writing skill in English and on the noticing of syntactic structures (SCHMIDT, 1990). Thirty-four university students of English made up the study cohort: twenty-five were assigned to the experimental group and nine to the control group. All learners took a pre-test and a post-test so that we could measure their noticing of syntactic structures. Learners in the experimental group were exposed to a blended learning experience, in which they took reading and writing classes at the university and collaboratively wrote three pieces of flash fiction (a complete story told in a hundred words), outside the classroom, online through GD, over eleven weeks. Learners in the control group took reading and writing classes at the university but did not practice collaborative writing.
The first and last stories produced by the learners in the experimental group were analysed in terms of grammatical accuracy, operationalized as the number of grammar errors per hundred words (SOUSA, 2014), and lexical density, which refers to the relationship between the number of words with lexical properties and the number of words with grammatical properties (WEISSHEIMER, 2007; MEHNERT, 1998). Additionally, learners in the experimental group answered an online questionnaire on the blended learning experience they were exposed to. The quantitative results showed that the collaborative task led to the production of more lexically dense texts over the 11 weeks. The noticing and grammatical accuracy results were different from what we expected; however, they provide insights on measurement issues, in the case of noticing, and on the participants' positive attitude towards collaborative writing with flash fiction. The qualitative results also shed light on the usefulness of computer-mediated collaborative writing in L2 learning.
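The two text measures above can be sketched in a few lines: errors per hundred words (given an error count) and one common operationalization of lexical density (lexical words over total words). The small function-word list is an illustrative stand-in for a proper part-of-speech tagger, and the sample sentence is invented.

```python
# Toy versions of the grammatical accuracy and lexical density measures.

# Rough stand-in for "words with grammatical properties" (function words).
FUNCTION_WORDS = {
    "a", "an", "the", "and", "or", "but", "of", "in", "on", "at", "to",
    "is", "are", "was", "were", "be", "been", "it", "he", "she", "they",
    "i", "you", "we", "that", "this", "with", "for", "as", "by", "not",
}

def errors_per_hundred_words(error_count, text):
    words = text.lower().split()
    return 100.0 * error_count / len(words)

def lexical_density(text):
    words = text.lower().split()
    lexical = [w for w in words if w not in FUNCTION_WORDS]
    return len(lexical) / len(words)

story = "the old sailor walked to the harbor and watched the grey ships depart"
print(round(lexical_density(story), 3))
print(round(errors_per_hundred_words(2, story), 2))
```

Comparing these two numbers between the first and last flash-fiction stories is, in miniature, the quantitative analysis the study performs.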

Relevance:

10.00%

Publisher:

Abstract:

Educational Data Mining is an application domain of artificial intelligence that has been extensively explored in recent years. Technological advances and, in particular, the increasing use of virtual learning environments have allowed the generation of considerable amounts of data to be investigated. Among the activities addressed in this context is the prediction of students' school performance, which can be accomplished through the use of machine learning techniques. Such techniques may be used to classify students into predefined labels. One strategy for applying these techniques is to combine them into multi-classifier systems, whose efficiency has been demonstrated by results achieved in studies conducted in several other areas, such as medicine, commerce and biometrics. The data used in the experiments were obtained from student interactions in Moodle, one of the most widely used virtual learning environments. In this context, this paper presents the results of several experiments using a specific kind of multi-classifier system, called an ensemble, aiming at better results in school performance prediction, that is, seeking the highest accuracy in the classification of students. The paper thus presents a significant exploration of educational data and analyses of the relevant results of these experiments.
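The ensemble idea applied to performance prediction can be sketched as follows: three weak rule-based classifiers over simple activity counts (logins, forum posts, submitted tasks) combined by majority vote. The thresholds and student records are invented; the study works with real Moodle interaction logs and trained classifiers.

```python
# Toy majority-vote ensemble for pass/fail prediction from activity counts.
from collections import Counter

# Each record: (logins, forum_posts, tasks_submitted) and a true label.
students = [
    ((40, 12, 9), "pass"),
    ((35, 2, 8), "pass"),
    ((10, 1, 3), "fail"),
    ((8, 6, 2), "fail"),
    ((30, 0, 7), "pass"),
    ((5, 0, 1), "fail"),
]

# Three deliberately weak base classifiers, each using one feature.
classifiers = [
    lambda s: "pass" if s[0] >= 20 else "fail",  # enough logins
    lambda s: "pass" if s[1] >= 3 else "fail",   # active in forums
    lambda s: "pass" if s[2] >= 5 else "fail",   # submits tasks
]

def ensemble_predict(s):
    votes = [clf(s) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]  # majority vote

correct = sum(ensemble_predict(s) == y for s, y in students)
print("ensemble accuracy:", correct / len(students))
```

On this toy data the forum-based classifier alone gets only half the students right, while the majority vote classifies all of them correctly, which is the effect the ensemble experiments in the paper seek at scale.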