952 results for Average Entropy
Abstract:
The electrocardiogram (ECG) signal has been widely used to study the physiological substrates of emotion. However, searching for better filtering techniques in order to obtain a signal with better quality and with the maximum relevant information remains an important issue for researchers in this field. Signal processing is largely performed for ECG analysis and interpretation, but this process can be susceptible to error in the delineation phase. In addition, it can lead to the loss of important information that is usually considered as noise and, consequently, discarded from the analysis. The goal of this study was to evaluate whether ECG noise allows for the classification of emotions, using its entropy as input to a decision tree classifier. We collected the ECG signal from 25 healthy participants while they were presented with videos eliciting negative (fear and disgust) and neutral emotions. The results indicated that the neutral condition showed perfect identification (100%), whereas the classification of negative emotions indicated good identification performance (60% sensitivity and 80% specificity). These results suggest that the entropy of noise contains relevant information that can be useful to improve the analysis of the physiological correlates of emotion.
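The abstract does not specify the implementation; the sketch below illustrates the underlying idea only, assuming a histogram-based Shannon entropy over noise segments and scikit-learn's DecisionTreeClassifier. The window length, bin count, and simulated segments are illustrative assumptions, not the authors' pipeline.

    # Minimal sketch (not the authors' code): Shannon entropy of an ECG noise
    # segment used as the single feature of a decision tree classifier.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def shannon_entropy(segment, bins=32, value_range=(-5.0, 5.0)):
        # Histogram-based Shannon entropy (bits); fixed bin edges keep the
        # values comparable across segments of different spread.
        counts, _ = np.histogram(segment, bins=bins, range=value_range)
        p = counts / counts.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Hypothetical noise segments: label 0 = neutral, 1 = negative emotion.
    rng = np.random.default_rng(0)
    X = np.array([[shannon_entropy(rng.normal(0, s, 500))]
                  for s in [0.5] * 20 + [1.5] * 20])
    y = np.array([0] * 20 + [1] * 20)

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.score(X, y))  # training accuracy on the toy data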
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make the optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software/application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast and high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code that repeat during execution, then building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) for the 8051 microcontroller. The ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and a more than 100-fold speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place in the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and an orders-of-magnitude speedup over simulation-based methods.
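As a concrete illustration of the second approach (a sketch under stated assumptions, not the thesis code), the snippet below evaluates the average-case comparison count of insertion sort on n distinct random keys, n(n-1)/4 + n - H_n, and plugs it into a linear energy model. The per-comparison and fixed-cost coefficients are hypothetical placeholders; in the thesis the corresponding parameters are measured on the LEON3 core.

    # Average-case energy model sketch: energy grows linearly with the expected
    # number of comparisons performed by insertion sort.
    from math import fsum

    def avg_insertion_sort_comparisons(n):
        # Expected comparisons on n distinct random keys: n(n-1)/4 + n - H_n.
        harmonic = fsum(1.0 / k for k in range(1, n + 1))
        return n * (n - 1) / 4 + n - harmonic

    def predicted_energy(n, e_per_comparison=1.0e-9, e_fixed=5.0e-6):
        # Hypothetical coefficients (joules); not the measured LEON3 values.
        return e_fixed + e_per_comparison * avg_insertion_sort_comparisons(n)

    for n in (16, 64, 256):
        print(n, round(avg_insertion_sort_comparisons(n), 1), predicted_energy(n))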
Abstract:
This gives statistics on the average daily inmate population in SC prisons from 1970 to 2016.
Abstract:
This statistical sheet gives the statewide average millage rate estimates from 1992 to 2014 broken down by tax year, millage rate, applicable fiscal year and growth rate.
Abstract:
This is a 2013 estimate of the average total millage rate broken down by county, millage, and percent relative to the statewide average.
Abstract:
Organizations and their environments are complex systems. Such systems are difficult to understand and to predict. Nevertheless, prediction is a fundamental task for business management and for decision-making, which always involves risk. Classical prediction methods (among them linear regression, autoregressive moving average models, and exponential smoothing) rely on assumptions such as linearity and stability in order to remain mathematically and computationally tractable. The limitations of these methods, however, have been demonstrated by various means. In recent decades, new prediction methods have emerged that aim to embrace, rather than avoid, the complexity of organizational systems and their environments. Among them, the most promising are bio-inspired prediction methods (e.g., neural networks, genetic/evolutionary algorithms, and artificial immune systems). This article aims to establish a situational overview of current and potential applications of bio-inspired prediction methods in management.
Abstract:
Solar resource assessment is essential for the different phases of solar energy projects, such as preliminary design engineering, financing (including due diligence) and, later, the insurance phase. An important aspect is long-term resource estimation. This kind of estimation can only be obtained through the statistical analysis of long-term data series of solar radiation measurements, preferably ground measurements. This paper is a first step in this direction, presenting an initial statistical analysis of the radiation data from a national measurement network consisting of eighty-nine meteorological stations. These preliminary results are presented in figures that show the annual average values of Global Horizontal Irradiation (GHI) and its variability in the Portuguese continental territory. The results show that the South of Portugal is the most suitable area for the implementation of medium- to large-scale solar plants.
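The statistical details are not given in the abstract; as one plausible reading, the sketch below derives a per-station annual GHI average and an inter-annual variability figure (coefficient of variation) from yearly totals, assuming pandas. Station names, years, and values are made up.

    # Illustrative only: annual average GHI and inter-annual variability per station.
    import pandas as pd

    df = pd.DataFrame({
        "station": ["Evora"] * 3 + ["Porto"] * 3,
        "year": [2010, 2011, 2012] * 2,
        "ghi_kwh_m2": [1820, 1790, 1855, 1510, 1470, 1540],  # hypothetical yearly totals
    })

    stats = df.groupby("station")["ghi_kwh_m2"].agg(["mean", "std"])
    stats["variability_pct"] = 100 * stats["std"] / stats["mean"]  # coefficient of variation
    print(stats)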
Abstract:
Reinforcement Learning (RL) provides a powerful framework to address sequential decision-making problems in which the transition dynamics are unknown or too complex to be represented. The RL approach is based on speculating what is the best decision to make given sample estimates obtained from previous interactions, a recipe that led to several breakthroughs in various domains, ranging from game playing to robotics. Despite their success, current RL methods hardly generalize from one task to another, and achieving the kind of generalization obtained through unsupervised pre-training in non-sequential problems seems unthinkable. Unsupervised RL has recently emerged as a way to improve the generalization of RL methods. Like its non-sequential counterpart, the unsupervised RL framework comprises two phases: an unsupervised pre-training phase, in which the agent interacts with the environment without external feedback, and a supervised fine-tuning phase, in which the agent aims to efficiently solve a task in the same environment by exploiting the knowledge acquired during pre-training. In this thesis, we study unsupervised RL via state entropy maximization, in which the agent makes use of the unsupervised interactions to pre-train a policy that maximizes the entropy of its induced state distribution. First, we provide a theoretical characterization of the learning problem by considering a convex RL formulation that subsumes state entropy maximization. Our analysis shows that maximizing the state entropy in finite trials is inherently harder than RL. Then, we study the state entropy maximization problem from an optimization perspective. In particular, we show that the primal formulation of the corresponding optimization problem can be (approximately) addressed through tractable linear programs. Finally, we provide the first practical methodologies for state entropy maximization in complex domains, both when the pre-training takes place in a single environment and when it spans multiple environments.
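The thesis's algorithms are not reproduced here; the snippet below is a minimal sketch of the pre-training signal under one common practical assumption, a k-nearest-neighbour estimate of the entropy of the visited-state distribution (up to additive constants). The estimator choice and the toy state batches are illustrative.

    # k-NN (Kozachenko-Leonenko style) estimate of state-distribution entropy,
    # up to additive constants: wider state coverage -> larger value.
    import numpy as np

    def knn_state_entropy(states, k=3):
        # states: array of shape (n_samples, state_dim)
        dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)
        kth = np.sort(dists, axis=1)[:, k - 1]      # distance to the k-th neighbour
        return states.shape[1] * float(np.mean(np.log(kth + 1e-12)))

    rng = np.random.default_rng(0)
    spread_out = rng.uniform(-1, 1, size=(200, 2))  # high-entropy visitation
    clustered = rng.normal(0, 0.05, size=(200, 2))  # low-entropy visitation
    print(knn_state_entropy(spread_out), knn_state_entropy(clustered))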
Abstract:
The current climate crisis requires a comprehensive understanding of biodiversity to acknowledge how ecosystems’ responses to anthropogenic disturbances may result in feedback that can either mitigate or exacerbate global warming. Although ecosystems are dynamic and macroecological patterns change drastically in response to disturbance, dynamic macroecology has received insufficient attention and theoretical formalisation. In this context, the maximum entropy principle (MaxEnt) could provide an effective inference procedure to study ecosystems. Since the improper usage of entropy outside its scope often leads to misconceptions, the opening chapter will clarify its meaning by following its evolution from classical thermodynamics to information theory. The second chapter introduces the study of ecosystems from a physicist’s viewpoint. In particular, the MaxEnt Theory of Ecology (METE) will be the cornerstone of the discussion. METE predicts the shapes of macroecological metrics in relatively static ecosystems using constraints imposed by static state variables. However, in disturbed ecosystems with macroscale state variables that change rapidly over time, its predictions tend to fail. In the final chapter, DynaMETE is therefore presented as an extension of METE from static to dynamic. By predicting how macroecological patterns are likely to change in response to perturbations, DynaMETE can contribute to a better understanding of disturbed ecosystems’ fate and the improvement of conservation and management of carbon sinks, like forests. Targeted strategies in ecosystem management are now indispensable to enhance the interdependence of human well-being and the health of ecosystems, thus avoiding climate change tipping points.
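As a small worked illustration of the MaxEnt inference step (added for clarity; these are not the METE equations), the maximum-entropy distribution on n = 1, ..., N subject to a fixed mean takes the exponential form p(n) proportional to exp(-λn), and the Lagrange multiplier λ can be recovered numerically:

    # Recover the Lagrange multiplier of a mean-constrained MaxEnt distribution.
    import numpy as np
    from scipy.optimize import brentq

    N, target_mean = 100, 10.0
    n = np.arange(1, N + 1)

    def mean_given_lam(lam):
        w = np.exp(-lam * n)
        return np.sum(n * w) / np.sum(w)

    lam = brentq(lambda l: mean_given_lam(l) - target_mean, 1e-6, 10.0)
    p = np.exp(-lam * n)
    p /= p.sum()
    print(lam, np.sum(n * p))  # multiplier and the recovered constrained mean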
Abstract:
The purpose of this thesis is to clarify the role of non-equilibrium stationary currents of Markov processes in the context of the predictability of future states of the system. Once the connection between predictability and conditional entropy is established, we provide a comprehensive approach to the definition of a multi-particle Markov system. In particular, starting from the well-known theory of random walks on networks, we derive the non-linear master equation for an interacting multi-particle system under the one-step process hypothesis, highlighting the limits of its tractability and the properties of its stationary solution. Lastly, in order to study the impact of the non-equilibrium stationary state (NESS) on predictability at short times, we analyze the conditional entropy by modulating the intensity of the stationary currents, for both a single-particle and a multi-particle Markov system. The results obtained analytically are numerically tested on a 5-node cycle network and put in correspondence with the stationary entropy production. Furthermore, because of the low dimensionality of the single-particle system, an analysis of its spectral properties as a function of the modulated stationary currents is performed.
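A minimal numerical sketch of the quantity at stake (written under my own assumptions, not taken from the thesis): for a biased single-particle random walk on a 5-node cycle, the bias parameter q controls the intensity of the stationary current, and the conditional entropy H(X_{t+1} | X_t) follows from the transition matrix and its stationary distribution.

    # Conditional entropy of a biased random walk on a 5-node cycle.
    import numpy as np

    def cycle_walk(n=5, q=0.5):
        # Move clockwise with probability q, counter-clockwise with 1 - q.
        P = np.zeros((n, n))
        for i in range(n):
            P[i, (i + 1) % n] = q
            P[i, (i - 1) % n] = 1 - q
        return P

    def conditional_entropy(P):
        # Stationary distribution: left eigenvector of P for eigenvalue 1.
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
        pi = pi / pi.sum()
        logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
        return float(-np.sum(pi[:, None] * P * logP))

    for q in (0.5, 0.7, 0.9):   # stronger current -> more predictable -> lower H
        print(q, conditional_entropy(cycle_walk(q=q)))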
Abstract:
The aim of the study was to develop a culturally adapted translation of the 12-item smell identification test from Sniffin' Sticks (SS-12) for the Estonian population in order to help diagnose Parkinson's disease (PD). A standard translation of the SS-12 was created and 150 healthy Estonians were questioned about the smells used as response options in the test. Unfamiliar smells were replaced by culturally familiar options. The adapted SS-12 was applied to 70 controls in all age groups, and thereafter to 50 PD patients and 50 age- and sex-matched controls. 14 of the 48 response options used in the SS-12 were replaced with familiar smells in the adapted version, in which the mean rate of correct response was 87% (range 73-99) compared to 83% with the literal translation (range 50-98). In PD patients, the average adapted SS-12 score (5.4/12) was significantly lower than in controls (average score 8.9/12), p < 0.0001. A multiple linear regression using the SS-12 score as the outcome measure showed that diagnosis and age independently influenced the result of the SS-12. A logistic regression using the SS-12 and age as covariates showed that the SS-12 (but not age) correctly classified 79.0% of subjects into the PD and control categories; using a cut-off of <7 gave a sensitivity of 76% and a specificity of 86% for the diagnosis of PD. The developed SS-12 cultural adaptation is appropriate for testing olfaction in Estonia for the purpose of PD diagnosis.
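A brief numeric illustration of the cut-off logic (hypothetical scores, not the study data): with PD and control score distributions centred near the reported means, a cut-off of <7 yields sensitivity and specificity figures in the same spirit as those above.

    # Sensitivity/specificity of the <7 SS-12 cut-off on simulated scores.
    import numpy as np

    rng = np.random.default_rng(1)
    pd_scores = rng.normal(5.4, 2.0, 50).round().clip(0, 12)        # PD patients
    control_scores = rng.normal(8.9, 2.0, 50).round().clip(0, 12)   # controls

    cutoff = 7
    sensitivity = np.mean(pd_scores < cutoff)        # PD correctly flagged
    specificity = np.mean(control_scores >= cutoff)  # controls correctly cleared
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")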
Abstract:
Revascularization outcome depends on microbial elimination because apical repair will not happen in the presence of infected tissues. This study evaluated the microbial composition of traumatized immature teeth and assessed microbial reduction during different stages of the revascularization procedures performed with 2 intracanal medicaments. Fifteen patients (7-17 years old) with immature teeth underwent revascularization procedures; they were divided into 2 groups according to the intracanal medicament used: TAP group (n = 7), medicated with a triple antibiotic paste, and CHP group (n = 8), dressed with calcium hydroxide + 2% chlorhexidine gel. Samples were taken before any treatment (S1), after irrigation with 6% NaOCl (S2), after irrigation with 2% chlorhexidine (S3), after intracanal dressing (S4), and after 17% EDTA irrigation (S5). Cultivable bacteria recovered from the 5 stages were counted and identified by means of polymerase chain reaction assay (16S rRNA). Both groups had colony-forming unit counts significantly reduced after S2 (P < .05); however, no significant difference was found between the irrigants (S2 and S3, P = .99). No difference in bacteria counts was found between the intracanal medicaments used (P = .95). The most prevalent bacteria detected were Actinomyces naeslundii (66.67%), followed by Porphyromonas endodontalis, Parvimonas micra, and Fusobacterium nucleatum, which were detected in 33.34% of the root canals. An average of 2.13 species per canal was found, and no statistical correlation was observed between bacterial species and clinical/radiographic features. The microbial profile of infected immature teeth is similar to that of primarily infected permanent teeth. The greatest bacterial reduction was promoted by the irrigation solutions. The revascularization protocols that used the tested intracanal medicaments were efficient in reducing viable bacteria in necrotic immature teeth.
Abstract:
This is an ecological, analytical, and retrospective study comprising the 645 municipalities in the State of São Paulo, the aim of which was to determine the relationship between socioeconomic and demographic variables, the model of care, and infant mortality rates in the period from 1998 to 2008. The average annual rate of change of each indicator was calculated per coverage stratum. Infant mortality was analyzed using a model for repeated measures over time, adjusted for the following variables: the city's population, proportion of Family Health Programs (PSFs) deployed, proportion of Growth Acceleration Programs (PACs) deployed, per capita GDP, and the SPSRI (São Paulo social responsibility index). The analysis was performed with generalized linear models, considering the gamma distribution. Multiple comparisons were performed with likelihood ratio tests with an approximate chi-square distribution, considering a significance level of 5%. There was a decrease in infant mortality over the years (p < 0.05), with no significant difference from 2004 to 2008 (p > 0.05). The proportion of PSFs deployed (p < 0.0001) and per capita GDP (p < 0.0001) were significant in the model. The decline of infant mortality in this period was influenced by the growth of per capita GDP and the expansion of PSFs.
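The analysis script is not part of the abstract; the snippet below only sketches the kind of gamma-family generalized linear model described, assuming statsmodels and using simulated covariates (PSF coverage and per capita GDP) with made-up coefficients. The default inverse link is used for simplicity; a log link is another common choice.

    # Gamma GLM sketch for a positive, right-skewed rate outcome.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    psf_coverage = rng.uniform(0, 1, n)        # proportion of PSFs deployed (simulated)
    gdp_per_capita = rng.uniform(5, 40, n)     # thousands (simulated)
    mu = np.exp(3.0 - 0.8 * psf_coverage - 0.02 * gdp_per_capita)
    mortality_rate = rng.gamma(shape=10.0, scale=mu / 10.0)   # simulated outcome

    X = sm.add_constant(np.column_stack([psf_coverage, gdp_per_capita]))
    fit = sm.GLM(mortality_rate, X, family=sm.families.Gamma()).fit()
    print(fit.params)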
Abstract:
The aim of this study is to test the feasibility and reproducibility of diffusion-weighted magnetic resonance imaging (DW-MRI) evaluations of the fetal brains in cases of twin-twin transfusion syndrome (TTTS). From May 2011 to June 2012, 24 patients with severe TTTS underwent MRI scans for evaluation of the fetal brains. Datasets were analyzed offline on axial DW images and apparent diffusion coefficient (ADC) maps by two radiologists. The subjective evaluation was described as the absence or presence of water diffusion restriction. The objective evaluation was performed by the placement of 20-mm² circular regions of interest on the DW image and ADC maps. Subjective interobserver agreement was assessed by the kappa correlation coefficient. Objective intraobserver and interobserver agreements were assessed by proportionate Bland-Altman tests. Seventy-four DW-MRI scans were performed. Sixty of them (81.1%) were considered to be of good quality. Agreement between the radiologists was 100% for the absence or presence of diffusion restriction of water. For both intraobserver and interobserver agreement of ADC measurements, proportionate Bland-Altman tests showed average percentage differences of less than 1.5% and 95% CI of less than 18% for all sites evaluated. Our data demonstrate that DW-MRI evaluation of the fetal brain in TTTS is feasible and reproducible.
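For readers unfamiliar with the agreement statistic, this is a minimal sketch of a proportionate Bland-Altman comparison on hypothetical ADC readings (not the study data): differences between two observers are expressed as a percentage of their pairwise mean, and the bias plus 95% limits of agreement are reported.

    # Proportionate Bland-Altman agreement between two observers.
    import numpy as np

    rng = np.random.default_rng(2)
    reader1 = rng.normal(1500, 150, 74)             # hypothetical ADC values, observer 1
    reader2 = reader1 + rng.normal(0, 20, 74)       # observer 2 with small random differences

    pair_mean = (reader1 + reader2) / 2
    pct_diff = 100 * (reader1 - reader2) / pair_mean   # proportionate differences
    bias = pct_diff.mean()
    loa = 1.96 * pct_diff.std(ddof=1)                  # half-width of 95% limits of agreement
    print(f"bias = {bias:.2f}%, limits of agreement = +/- {loa:.2f}%")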
Abstract:
The Atlantic rainforest species Ocotea catharinensis, Ocotea odorifera, and Ocotea porosa have been extensively harvested in the past for timber and oil extraction and are currently listed as threatened due to overexploitation. To investigate the genetic diversity and population structure of these species, we developed 8 polymorphic microsatellite markers for O. odorifera from an enriched microsatellite library using 2 dinucleotide repeats. The microsatellite markers were tested for cross-amplification in O. catharinensis and O. porosa. The average number of alleles per locus was 10.2, considering all loci over 2 populations of O. odorifera. Observed and expected heterozygosities for O. odorifera ranged from 0.39 to 0.93 and 0.41 to 0.92 across populations, respectively. Cross-amplification of all loci was successfully observed in O. catharinensis and O. porosa, except for 1 locus that lacked polymorphism in O. porosa. Combined probabilities of identity in the studied Ocotea species were very low, ranging from 1.0 × 10⁻²⁴ to 7.7 × 10⁻²⁴. The probability of exclusion over all loci estimated for O. odorifera indicated a 99.9% chance of correctly excluding a random nonparent individual. The microsatellite markers described in this study have high information content and will be useful for further investigations on genetic diversity within these species and for subsequent conservation purposes.
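For context on the per-locus quantities reported above, here is a short sketch with made-up allele frequencies (not the study data): expected heterozygosity He = 1 - sum(p_i^2) per locus, and the probability of identity PI = sum(p_i^4) + sum over i<j of (2 p_i p_j)^2, multiplied across loci to obtain the combined value.

    # Expected heterozygosity and (combined) probability of identity.
    import numpy as np
    from itertools import combinations

    def expected_heterozygosity(p):
        return 1.0 - float(np.sum(np.square(p)))

    def probability_of_identity(p):
        homozygote_term = float(np.sum(np.power(p, 4)))
        heterozygote_term = sum((2 * p[i] * p[j]) ** 2
                                for i, j in combinations(range(len(p)), 2))
        return homozygote_term + heterozygote_term

    loci = [np.array([0.4, 0.3, 0.2, 0.1]), np.array([0.5, 0.25, 0.25])]  # made-up frequencies
    print([round(expected_heterozygosity(p), 3) for p in loci])
    print(np.prod([probability_of_identity(p) for p in loci]))  # combined PI across loci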