76 results for "Integridade de dados" (data integrity)


Relevance: 20.00%

Abstract:

In this work we studied the asymptotic unbiasedness and the strong and uniform strong consistency of a class of kernel estimators f_n as estimators of a density function f defined on a k-dimensional sphere.
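
For the circular case (k = 1), an estimator of this kind can be sketched with a von Mises kernel. The function below is an illustrative reconstruction, not the estimator studied in the work; the name and the concentration parameter kappa (which plays the role of an inverse bandwidth) are assumptions of this sketch.

```python
import math

def vm_kernel_density(theta, samples, kappa=20.0):
    """Von Mises kernel density estimate on the circle at angle theta:
    f_n(theta) = (1/n) * sum_i exp(kappa * cos(theta - x_i)) / (2*pi*I0(kappa)).
    """
    # Modified Bessel function I0(kappa) via its power series (converges fast).
    i0 = sum((kappa / 2.0) ** (2 * k) / math.factorial(k) ** 2 for k in range(40))
    norm = 2.0 * math.pi * i0
    n = len(samples)
    return sum(math.exp(kappa * math.cos(theta - x)) for x in samples) / (n * norm)
```

Because each kernel integrates to one over the circle, the estimate itself integrates to one, mirroring the density-estimation property the abstract refers to.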

Relevance: 20.00%

Abstract:

Lucid dreaming (LD) is a mental state in which subjects are aware that they are dreaming while the dream is ongoing. The prevalence of LD among Europeans, North Americans and Asians is quite variable (between 26 and 92%) (Stepansky et al., 1998; Schredl & Erlacher, 2011; Yu, 2008); among Latin Americans it is yet to be investigated. Furthermore, the neural bases of LD remain controversial. Different studies have observed that LD presents power increases in the alpha frequency band (Tyson et al., 1984), in beta oscillations recorded from the parietal cortex (Holzinger et al., 2006) and in gamma rhythm recorded from the frontal cortex (Voss et al., 2009), in comparison with non-lucid dreaming. In this thesis we report epidemiological and neurophysiological investigations of LD. To investigate the epidemiology of LD (Study 1), we developed an online questionnaire about dreams that was answered by 3,427 volunteers. In this sample, 56% were women, 24% were men and 20% did not state their gender (the median age was 25 years). A total of 76.5% of the subjects reported recalling dreams at least once a week, and about two-thirds of them reported always dreaming in the first person, i.e. the dreamer observes the dream from his or her own perspective, not as another dream character. Dream reports typically depicted actions (93.3%), known people (92.9%), sounds/voices (78.5%), and colored images (76.3%). The oneiric content was related to plans for upcoming days (37.8%) and memories of the previous day (13.8%). Nightmares were characterized by general anxiety/fear (65.5%), a feeling of being chased (48.5%), and non-painful unpleasant sensations (47.6%). With regard to LD, 77.2% of the subjects reported having experienced LD at least once in their lifetime (44.9% reported up to 10 episodes ever). LD frequency was weakly correlated with dream recall frequency (r = 0.20, p < 0.001) and was higher in men (χ² = 10.2, p = 0.001).
Control of LD was rare (29.7%) and inversely correlated with LD duration (r = -0.38, p < 0.001), which is usually short: for 48.5% of the subjects, LD lasts less than one minute. LD occurrence is mainly associated with sleeping without a fixed time to wake up (38.3%), which increases the chance of having REM sleep (REMS). LD is also associated with stress (30.1%), which increases REMS transitions into wakefulness. Overall, the data suggest that dreams and nightmares can be evolutionarily understood as simulations of common life situations that are related to our social, psychological and biological integrity. The results also indicate that LD is a relatively common (but not recurrent) experience, often elusive and difficult to control, suggesting that LD is an incomplete stationary stage (or phase transition) between REMS and the waking state. Moreover, despite the variability of LD prevalence among North Americans, Europeans and Asians, our data from Latin Americans strengthen the notion that LD is a general phenomenon of the human species. To further investigate the neural bases of LD (Study 2), we performed sleep recordings of 32 non-frequent lucid dreamers (sample 1) and 6 frequent lucid dreamers (sample 2). In sample 1, we applied two cognitive-behavioral techniques to induce LD: pre-sleep LD suggestion (n=8) and light pulses applied during REMS (n=8); in a control group we made no attempt to influence dreaming (n=16). The results indicate that it is quite difficult but still possible to induce LD, since we could induce LD in a single subject, using the suggestion technique. EEG signals from this subject exhibited alpha (7-14 Hz) bursts prior to LD. These bursts were brief (about 3 s), occurred without significant change in muscle tone, and were independent of the presence of rapid eye movements. No such bursts were observed in the remaining 31 subjects.
In addition, LD exhibited significantly higher occipital alpha and right temporo-parietal gamma (30-50 Hz) power in comparison with non-lucid REMS. In sample 2, LD presented increased frontal high-gamma (50-100 Hz) power on average in comparison with non-lucid REMS; however, this was not consistent across subjects, being a clear phenomenon in just one of them. We also observed that four of these volunteers showed an increase in alpha rhythm power over the occipital region immediately before or during LD. Altogether, our preliminary results suggest that LD has neurophysiological characteristics that distinguish it from both waking and typical REMS. To the extent that the right temporo-parietal and frontal regions are related to the formation of self-consciousness and the internal body image, we suggest that increased activity in these regions during sleep may be the neurobiological mechanism underlying LD. The alpha rhythm bursts, as well as the alpha power increase over the occipital region, may represent micro-arousals, which facilitate the sleeping brain's contact with the external environment, favoring the occurrence of LD. This also strengthens the notion that LD is an intermediary state between sleep and wakefulness.
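
The band-power comparisons reported above (alpha 7-14 Hz, gamma 30-50 Hz) can be sketched with a plain FFT periodogram. This is an illustrative reconstruction, not the thesis's actual analysis pipeline; the function name and the periodogram choice (rather than, say, Welch averaging) are assumptions of this sketch.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` within the [f_lo, f_hi] Hz band,
    using a simple FFT periodogram (fs = sampling rate in Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()
```

For a synthetic 10 Hz oscillation, this function reports much more power in the alpha band than in the gamma band, which is the kind of contrast the EEG comparisons above rely on.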

Relevance: 20.00%

Abstract:

Circadian rhythms are variations in physiological processes that help living beings adapt to environmental cycles. These rhythms are generated by, and synchronized to the light-dark cycle through, the suprachiasmatic nucleus. The integrity of circadian rhythmicity has major implications for human health: it is currently known that disturbances in circadian rhythms are related to present-day problems such as obesity, propensity for certain types of cancer, and mental disorders. Circadian rhythmicity can be studied through experiments with animal models and directly in humans. In this work we use computational models to bring together experimental results from the literature and to explain results from our laboratory. Another focus of this study was the analysis of experimentally obtained activity-rest rhythm data. We review the variables used to analyze these data and propose an update on how to calculate them. Our models were able to reproduce the main experimental results in the literature and provided explanations for the results of experiments performed in our laboratory. The updated variables used to analyze the activity-rest rhythm in humans were more efficient in describing the fragmentation and synchronization of this rhythm. The work therefore contributed to improving existing tools for the study of circadian rhythms in mammals.
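
The abstract does not name the activity-rest variables it updates. One standard example of such a variable is interdaily stability (IS), which quantifies how well the rhythm is synchronized to a 24-hour cycle; the sketch below computes the classical (not the updated) form of IS and is offered only as illustration.

```python
def interdaily_stability(hourly_counts):
    """Interdaily stability (IS): variance of the average 24-hour activity
    profile divided by the total variance. Ranges from ~0 (no 24-h pattern)
    to 1 (the same profile repeats every day).

    hourly_counts: flat list of hourly activity values spanning whole days.
    """
    n = len(hourly_counts)
    assert n % 24 == 0, "need whole days of hourly data"
    mean = sum(hourly_counts) / n
    days = n // 24
    # Average activity for each hour of the day, across all recorded days.
    profile = [sum(hourly_counts[h::24]) / days for h in range(24)]
    num = n * sum((p - mean) ** 2 for p in profile)
    den = 24 * sum((x - mean) ** 2 for x in hourly_counts)
    return num / den
```

A perfectly repeating daily pattern yields IS = 1, while data without a 24-hour structure scores much lower, which is why IS is used to describe synchronization of the rhythm.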

Relevance: 20.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior

Relevance: 20.00%

Abstract:

There is a lack of clinical studies evaluating functional impression techniques for partially edentulous arches. The aim of this double-blind, non-randomized controlled clinical trial was to compare the efficacy of the altered cast impression (ACI) and direct functional impression (DFI) techniques. Efficacy was evaluated with regard to the number of occlusal units on denture teeth, mucosa integrity at the 24-hour follow-up, and denture base extension. The sample included 51 patients (female and male) with a mean age of 58.96 years treated at the Dental Department of UFRN. The patients, exhibiting an edentulous maxilla and a mandibular Kennedy class I arch, were divided into two groups (group ACI, n=29; group DFI, n=22). Clinical evaluation was based on the number of occlusal units on natural and/or artificial teeth, mucosa integrity at the 24-hour follow-up, and denture base extension. Statistical analysis was conducted using SPSS 17.0® (SPSS Inc., Chicago, Illinois). Student's t-test was used to assess the association between the number of occlusal units and impression technique, while the chi-square test assessed the association between mucosa integrity and impression technique. Fisher's exact test was applied to the association between denture base extension and impression technique, at a 95% significance level. No significant difference was observed between the groups regarding the number of occlusal units, mucosa integrity or denture base extension. The altered cast technique did not provide significant improvement over the direct technique when the number of occlusal units, mucosa integrity and denture base extension were evaluated.
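
The chi-square association test mentioned above can be sketched for a 2x2 table (technique x outcome). This helper is illustrative, not the study's SPSS output; it omits Yates' continuity correction and uses the closed-form p-value for one degree of freedom.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test for a 2x2 contingency table [[a, b], [c, d]].
    Returns (statistic, p_value); 1 degree of freedom, no continuity
    correction. For df = 1, P(X > x) = erfc(sqrt(x / 2))."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p
```

A table with identical proportions in both groups gives a statistic of zero (p = 1), while a strongly imbalanced table gives a large statistic and a small p-value, matching how "no significant difference" is read off in the trial.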

Relevance: 20.00%

Abstract:

Among the various aspects of elderly health, oral health deserves special attention because, historically, dental services have not treated this population group as a priority. Hence the need for a multidimensional indicator capable of measuring all the oral alterations found in an elderly person, making it easier to categorize oral health as a whole. Such an indicator will be an important instrument for ranking care priorities for the elderly population. This study therefore proposes the construction and validation of an oral health indicator from secondary data collected by the SB Brasil 2010 project for the 65-74 age group. The sample comprised the 7,619 individuals aged 65 to 74 who took part in the survey across the five regions of Brazil. These individuals underwent an epidemiological assessment of oral health conditions based on the CPO-d, CPI and PIP indices. In addition, the use of and need for dental prostheses were recorded, along with social, economic and demographic characteristics. A factor analysis identified a relatively small number of common factors through principal component analysis. After the factors were named, the factor scores were summed per individual; finally, dichotomizing this sum yielded the proposed oral health indicator. Twelve oral health variables from the SB Brasil 2010 database and three socioeconomic and demographic variables were included in the factor analysis. Based on the Kaiser criterion, five factors were retained, explaining 70.28% of the total variance of the variables included in the model. Factor 1 alone explains 32.02% of this variance, factor 2 explains 14.78%, and factors 3, 4 and 5 explain 8.90%, 7.89% and 6.68%, respectively.
Based on the factor loadings, factor one was named "sound teeth and little prosthesis use", factor two "periodontal disease present", factor three "need for rehabilitation", and the fourth and fifth factors were named "caries" and "favorable social condition", respectively. To ensure the representativeness of the proposed indicator, a second factor analysis was carried out on a subsample of the investigated elderly population. The applicability of the indicator was tested by associating it with other study variables. Finally, it is worth noting that the indicator produced here was able to aggregate diverse information on the oral health and social conditions of these individuals, translating many data points into a single simple measure that helps health managers see the real needs for oral health interventions in a given population.
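
The factor-retention and score-dichotomization steps described above can be sketched with a plain principal component analysis. This is an illustrative reconstruction, not the study's actual procedure: the function names are mine, and the median cutoff used for dichotomization is an assumption (the study does not state its cutoff).

```python
import numpy as np

def kaiser_factors(X):
    """Kaiser criterion: retain components whose eigenvalue of the
    correlation matrix exceeds 1. Returns (number retained, eigenvalues
    in descending order)."""
    eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return int((eig > 1.0).sum()), np.sort(eig)[::-1]

def factor_score_indicator(X, n_factors):
    """Sum the first `n_factors` principal-component scores per individual
    and dichotomize at the median (assumed cutoff), mimicking the
    indicator-construction steps."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigval)[::-1]
    scores = Z @ eigvec[:, order[:n_factors]]
    total = scores.sum(axis=1)
    return (total > np.median(total)).astype(int)
```

On data generated from two latent factors, the Kaiser criterion retains exactly two components, which is the mechanism behind the "five factors retained" statement above.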

Relevance: 20.00%

Abstract:

The International Labour Organization (ILO) estimates that there are around 118 million children subjected to child labor around the world. In Brazil, there are 3.5 million workers aged between 5 and 17. This exploitation constitutes a serious social problem, including for public health, since these workers are exposed to a wide range of risks, such as those related to health, physical integrity and even life, which may cause them to become sick adults and/or end their lives prematurely. This research therefore investigates the relationship between the frequency of child labor in the 10-13 age group and selected socioeconomic indicators. It is a quantitative, ecological study whose units of analysis are the Brazilian municipalities grouped into 161 regions defined by socioeconomic criteria. The dependent variable of this study was the prevalence of child labor in the 10-13 age group. The independent variables were selected after correlating the 2010 Census data on child labor in this age group with secondary data, adopting two main independent variables: funds from the Bolsa Família Program (PBF) per 1,000 inhabitants and funds from the Child Labor Eradication Program (PETI) per 1,000 inhabitants. Initially, a descriptive analysis of the study variables was conducted; a bivariate analysis followed, and the correlation matrix was built. Finally, stratified multiple linear regression was performed. The results indicate that PBF and PETI funds per 1,000 inhabitants allocated to municipalities with an HDI below 0.697 are associated with a decrease in the child labor rate, whereas the same funds invested in municipalities with an HDI of 0.697 or above have no effect on the child labor rate.
Other adjustment variables showed significance, among them the municipal Human Development Index (HDI), years of schooling at age 18, illiteracy at age 15 or older, employees without a formal employment contract at age 18, and the Gini index. Child labor is understood to be a complex issue. The problem is associated with, although not restricted to, the poverty, social exclusion and inequality that exist in Brazil, but other factors of a cultural and economic nature, as well as the organization of production, also account for its aggravation. Fighting child labor requires broad intersectoral coordination, shared and integrated across several public policies, among them health, sports, culture, agriculture, labor and human rights, with a view to guaranteeing the integrality of the rights of children and adolescents in labor situations and of their respective families.
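
The stratified multiple linear regression described above can be sketched as ordinary least squares fitted separately within each HDI stratum. The helper names and the synthetic data are illustrative assumptions, not the survey's actual model or results.

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares; returns (intercept, coefficient array)."""
    A = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[0], beta[1:]

def stratified_ols(X, y, strata_mask):
    """Fit separate OLS models in two strata (e.g. HDI below vs. at-or-above
    a cutoff); returns the two fitted (intercept, coefficients) pairs."""
    return ols_fit(X[strata_mask], y[strata_mask]), ols_fit(X[~strata_mask], y[~strata_mask])
```

In the synthetic check below, the predictor reduces the outcome in one stratum and has no effect in the other, which is exactly the qualitative pattern the stratified analysis above reports for the transfer programs.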

Relevance: 20.00%

Abstract:

In systems that combine the outputs of classification methods (combination systems), such as ensembles and multi-agent systems, one of the main constraints is that the base components (classifiers or agents) should be diverse among themselves. In other words, there is clearly no accuracy gain in a system composed of a set of identical base components. One way of increasing diversity is through the use of feature selection or data distribution methods in combination systems. In this work, the impact of using data distribution methods among the components of combination systems is investigated. Different methods of data distribution are used, and the combination systems are analyzed under several different configurations. The aim of this analysis is to detect which combination systems are more suitable for feature distribution among their components.
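
One common way to distribute features among components is the random-subspace approach: each base classifier sees a different feature subset, and the outputs are combined by majority vote. The abstract does not name the specific distribution methods used, so the sketch below (a 1-nearest-neighbour base classifier with random feature subsets; all names are mine) is generic illustration only.

```python
import random

def nn_predict(train, labels, feats, x):
    """1-nearest-neighbour prediction using only the feature subset `feats`."""
    dists = [sum((t[f] - x[f]) ** 2 for f in feats) for t in train]
    return labels[dists.index(min(dists))]

def subspace_ensemble_predict(train, labels, x, n_members=5, subset_size=2, seed=1):
    """Random-subspace ensemble: each member classifies with a different
    random feature subset; the final label is the majority vote."""
    rng = random.Random(seed)
    n_feats = len(train[0])
    votes = []
    for _ in range(n_members):
        feats = rng.sample(range(n_feats), subset_size)
        votes.append(nn_predict(train, labels, feats, x))
    return max(set(votes), key=votes.count)
```

Because each member works on different features, the members can disagree in ways a single classifier cannot, which is the diversity mechanism the abstract refers to.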

Relevance: 20.00%

Abstract:

In scientific computing, data must be as precise and accurate as possible; however, the imprecision of the input data may stem from measurements obtained by equipment that delivers truncated or rounded values, so that computations on these data produce imprecise results. The most common errors in scientific computing are truncation errors, which arise when infinite data are truncated or cut off, and rounding errors, which account for the imprecision of computations over finite sequences of arithmetic operations. Facing this kind of problem, Moore introduced interval mathematics in the 1960s, defining a data type that makes it possible to work with continuous data and even to bound the maximum size of the error. Interval mathematics is a way out of this problem, since it allows automatic error control and analysis. However, the algebraic properties of intervals are not the same as those of the real numbers, even though the reals can be seen as degenerate intervals and the algebraic properties of degenerate intervals are exactly those of the reals. Starting from this, and considering algebraic specification techniques, a language is needed that can implement an auxiliary notion of equivalence, introduced by Santiago [6], that "simulates" the algebraic properties of the real numbers on intervals. CASL, the Common Algebraic Specification Language [1], is an algebraic specification language for describing functional requirements and modular software designs, developed since 1996 by CoFI, the Common Framework Initiative [2]. The development of CASL is still in progress and represents a joint effort of leading figures in algebraic specification to create a standard for the field.
This dissertation presents a CASL specification of the interval type, equipped with Moore's arithmetic, so that it can extend systems that manipulate continuous data, enabling not only the control and analysis of approximation errors but also the algebraic verification of properties of the kind of system mentioned here. The interval specification presented was built from the specification of the rational numbers proposed by Mossakowski in 2001 [3] and introduces the notion of local equality proposed by Santiago [6, 5, 4].
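
Moore's interval arithmetic, and the algebraic anomaly that motivates the local-equality notion, can be sketched in a few lines. This class is an illustration (the dissertation specifies the type in CASL, not in executable code); note in particular that X - X is generally not the degenerate interval [0, 0], which is one way the interval algebra fails to satisfy the laws of the reals.

```python
class Interval:
    """Closed interval [lo, hi] with Moore's arithmetic."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]: the result bounds every x - y.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Product bounds are the min and max over all endpoint products.
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"
```

Degenerate intervals such as [2, 2] behave exactly like real numbers under these operations, while proper intervals only satisfy weaker laws (e.g. subdistributivity), which is precisely the gap the "simulated" equivalence is meant to bridge.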

Relevance: 20.00%

Abstract:

The increasing complexity of applications has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not been successful in meeting these applications' constraints. General-purpose processors are inherently flexible, since they perform several tasks; however, they cannot reach high performance when compared to application-specific devices. Application-specific devices, in turn, perform only a few tasks and thus achieve high performance, although with less flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this paradigm is to modify the device's behavior according to the application, making it possible to balance flexibility and performance and to meet application constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of intensive data-flow applications to accelerate their execution on the reconfigurable logic. Instruction-level parallelism is extracted at compile time, so this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE views the reconfigurable units through reusability levels, which provides area savings and datapath simplification. The architecture was implemented in a hardware description language (VHDL) and validated through simulation and prototyping. Benchmarks were used for performance analysis and demonstrated a speedup of 11x on the execution of some applications.

Relevance: 20.00%

Abstract:

The use of clustering methods for the discovery of cancer subtypes has drawn a great deal of attention in the scientific community. While bioinformaticians have proposed new clustering methods that take advantage of characteristics of gene expression data, the medical community has a preference for classic clustering methods. No study thus far has performed a large-scale evaluation of different clustering methods in this context. This work presents the first large-scale analysis of seven clustering methods and four proximity measures on 35 cancer gene expression data sets. Results reveal that the finite mixture of Gaussians, followed closely by k-means, exhibited the best performance in terms of recovering the true structure of the data sets. These methods also exhibited, on average, the smallest difference between the actual number of classes in the data sets and the best number of clusters as indicated by our validation criteria. Furthermore, hierarchical methods, which have been widely used by the medical community, exhibited poorer recovery performance than the other methods evaluated. Moreover, as a stable basis for the assessment and comparison of different clustering methods for cancer gene expression data, this study provides a common group of data sets (benchmark data sets) to be shared among researchers and used for comparisons with new methods.
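
A standard way to score how well a clustering "recovers the true structure" is the adjusted Rand index (ARI), which compares a produced partition against the known class labels. The abstract does not name its validation criteria, so using ARI here is an assumption for illustration.

```python
from math import comb

def adjusted_rand(labels_a, labels_b):
    """Adjusted Rand index between two partitions of the same items:
    1.0 for identical partitions, ~0 for agreement expected by chance."""
    pairs = lambda n: comb(n, 2)
    classes = sorted(set(labels_a))
    clusters = sorted(set(labels_b))
    # Contingency table: how many items fall in class i and cluster j.
    table = [[sum(1 for x, y in zip(labels_a, labels_b) if x == i and y == j)
              for j in clusters] for i in classes]
    sum_comb = sum(pairs(v) for row in table for v in row)
    a = sum(pairs(sum(row)) for row in table)                      # row margins
    b = sum(pairs(sum(row[j] for row in table)) for j in range(len(clusters)))
    n = pairs(len(labels_a))
    expected = a * b / n
    max_index = (a + b) / 2
    return (sum_comb - expected) / (max_index - expected)
```

Note that ARI is invariant to label renaming: a clustering that swaps the cluster ids but keeps the same groups still scores 1.0.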

Relevance: 20.00%

Abstract:

The use of middleware technology in various types of systems, in order to abstract low-level details related to the distribution of application logic, is increasingly common. Among the many systems that can benefit from these components, we highlight distributed systems, where communication between software components located on different physical machines must be supported. An important issue in the communication between distributed components is the provision of mechanisms for managing quality of service. This work presents a metamodel for modeling component-based middleware that provides an application with the abstraction of communication between the components involved in a data stream, regardless of their location. Another feature of the metamodel is self-adaptation of the communication mechanism, either by updating the values of its configuration parameters or by replacing it with another mechanism when the specified quality-of-service restrictions are not being met. To this end, monitoring of the communication state is planned (applying techniques such as a feedback control loop) and the related performance metrics are analyzed. The Model-Driven Development paradigm was used to generate the implementation of a middleware that serves as a proof of concept of the metamodel, together with the configuration and reconfiguration policies related to the dynamic adaptation processes. In this sense, the metamodel associated with the communication configuration process was defined. The MDD application also comprises the definition of the following transformations: from the architectural model of the middleware to Java code, and from the configuration model to XML.
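
One iteration of a feedback control loop of the kind mentioned above can be sketched as a simple proportional controller. The adapted parameter (a batching window), the gain, and the bounds are hypothetical choices of this sketch, not elements of the metamodel.

```python
def adapt_parameter(measured_latency, target_latency, current_window,
                    gain=0.5, min_window=1, max_window=64):
    """One feedback-control iteration: shrink the (hypothetical) batching
    window when measured latency exceeds the QoS target, enlarge it when
    there is slack; the result is clamped to [min_window, max_window]."""
    error = target_latency - measured_latency  # positive error = slack
    new_window = current_window + gain * error
    return max(min_window, min(max_window, new_window))
```

Run repeatedly inside the monitoring loop, this drives the parameter toward a value at which measured latency meets the target; replacing the mechanism entirely (the metamodel's other adaptation path) would be triggered when no in-range parameter value satisfies the restriction.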

Relevance: 20.00%

Abstract:

The main goal of this work is to investigate the suitability of applying cluster ensemble techniques (ensembles or committees) to gene expression data. More specifically, we develop experiments with three different cluster ensemble methods that have been used in many works in the literature: the co-association matrix, relabeling and voting, and ensembles based on graph partitioning. The inputs for these methods are the partitions generated by three clustering algorithms representing different paradigms: k-means, Expectation-Maximization (EM), and the hierarchical method with average linkage. These algorithms have been widely applied to gene expression data. In general, the results obtained in our experiments indicate that the cluster ensemble methods perform better than the individual techniques. This happens mainly for the heterogeneous ensembles, that is, ensembles built from base partitions generated with different clustering algorithms.

Relevance: 20.00%

Abstract:

The increasing complexity of integrated circuits has boosted the development of communication architectures like Networks-on-Chip (NoCs) as an architectural alternative for the interconnection of Systems-on-Chip (SoCs). Networks-on-Chip rely on component reuse, parallelism and scalability, enhancing reusability in projects for dedicated applications. Many proposals in the literature suggest different configurations for network-on-chip architectures. Among them, the IPNoSys architecture is a non-conventional one, since it allows operations to be executed while the communication process is performed. This study evaluates the execution of data-flow-based applications on IPNoSys, focusing on their adaptation to the design constraints. Data-flow-based applications are characterized by a continuous stream of data on which operations are executed. We expect this type of application to be improved when running on IPNoSys, because its programming model is similar to the execution model of this network. By observing the behavior of these applications when running on IPNoSys, changes were made to the IPNoSys execution model, allowing the implementation of instruction-level parallelism. For these purposes, analyses of the implementations of data-flow applications were performed and compared.

Relevance: 20.00%

Abstract:

Symbolic Data Analysis (SDA) aims mainly to provide tools for reducing large databases in order to extract knowledge, and techniques to describe such data in terms of complex units, such as intervals or histograms. The objective of this work is to extend classical clustering methods to symbolic interval data using interval-based distances. The main advantage of using an interval-based distance for interval-based data lies in the fact that it preserves the underlying imprecision of the intervals, which is usually lost when real-valued distances are applied. This work also includes an approach that allows existing indices to be adapted to the interval context. The proposed methods with interval-based distances are compared with point-based distances from the literature through experiments with simulated and real interval data.
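
One classical interval-based distance is the Hausdorff distance, which for closed intervals on the real line reduces to a simple endpoint formula. Whether this is the distance adopted in the work is not stated; the sketch below is an illustration of the general idea that an interval-valued distance keeps the imprecision (interval width) that a midpoint-only distance would discard.

```python
def hausdorff_interval(a, b):
    """Hausdorff distance between closed intervals a = (lo, hi) and
    b = (lo, hi); for intervals on the line this equals
    max(|a_lo - b_lo|, |a_hi - b_hi|)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))
```

Two intervals with the same midpoint but very different widths are far apart under this distance, whereas a point distance on the midpoints would call them identical; that difference is exactly the "preserved imprecision" argued for above. On degenerate intervals the distance reduces to the usual absolute difference of reals.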