826 results for arithmetic


Relevância:

10.00%

Publicador:

Resumo:

Human activities are altering greenhouse gas concentrations in the atmosphere and causing global climate change. The impacts of human-induced climate change have become increasingly important in recent years. The objective of this work was to develop a database of climate information for future scenarios using Geographic Information System (GIS) tools. Future scenarios focused on the decades of the 2020s, 2050s, and 2080s (scenarios A2 and B2) were obtained from the General Circulation Models (GCMs) available at the Data Distribution Centre of the Third Assessment Report (TAR) of the Intergovernmental Panel on Climate Change (IPCC). The TAR comprises six GCMs with different spatial resolutions (ECHAM4: 2.8125×2.8125°, HadCM3: 3.75×2.5°, CGCM2: 3.75×3.75°, CSIROMk2b: 5.625×3.214°, and CCSR/NIES: 5.625×5.625°). Monthly means of the climate variables were obtained by averaging the available models using GIS spatial analysis tools (arithmetic operations). Maps of monthly mean temperature, minimum temperature, maximum temperature, rainfall, relative humidity, and solar radiation were produced at a spatial resolution of 0.5°×0.5° latitude and longitude. Elaborating the maps with GIS tools allowed the spatial distribution of the future climate scenarios to be evaluated. This database is now being used in Embrapa projects studying the impacts of climate change on plant disease.
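As a sketch of the arithmetic averaging step described above, the following Python snippet computes a per-cell ensemble mean over a stack of model fields; the grid values are made up, only three of the models are shown, and the fields are assumed to be already regridded to a common resolution:

```python
import numpy as np

# Hypothetical monthly mean-temperature grids (already on a common
# resolution) from three of the GCMs; the values are illustrative only.
echam4 = np.array([[21.0, 22.5], [19.0, 20.0]])
hadcm3 = np.array([[20.0, 23.5], [18.5, 21.0]])
cgcm2  = np.array([[22.0, 21.5], [19.5, 20.5]])

# The ensemble mean is a cell-by-cell arithmetic average of the model
# grids, mirroring the GIS "arithmetic operation" described above.
ensemble_mean = np.mean(np.stack([echam4, hadcm3, cgcm2]), axis=0)
print(ensemble_mean)
```

In a GIS workflow the same operation is typically applied with raster map algebra; here NumPy stands in for that spatial analysis tool.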

Resumo:

The Half-Unit-Biased (HUB) format is based on shifting the representation line of binary numbers by half a unit in the last place (ULP). The main feature of this format is that rounding to nearest is carried out by simple truncation, preventing any carry propagation and saving time and area. Algorithms and architectures have been defined for addition/subtraction and multiplication under this format; the division operation, however, has not been confronted yet. In this paper we deal with floating-point division under the HUB format, studying an architecture for the digit-recurrence method, including the on-the-fly conversion of the signed-digit quotient.
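To illustrate why HUB rounding needs no carry propagation, the following hypothetical Python model places HUB values on the grid (k + 0.5)·ULP, so truncating the stored bits already yields the nearest representable value (real HUB hardware works on bit patterns; this is only a numeric sketch):

```python
import math

def hub_round(x, frac_bits):
    """Round x to the nearest HUB-format value with `frac_bits` fractional bits.

    A HUB number is an ordinary binary number with an implicit extra '1' bit,
    i.e. it lies on the grid (k + 0.5) * ulp. Truncating x to k and appending
    the implicit half-ulp yields the nearest grid point -- no carry needed.
    """
    ulp = 2.0 ** -frac_bits
    k = math.floor(x / ulp)          # plain truncation of the stored bits
    return (k + 0.5) * ulp           # implicit half-ulp appended

print(hub_round(0.30, 3))  # grid step 0.125 -> nearest HUB point is 0.3125
```

Any x inside the interval [k·ulp, (k+1)·ulp) is within half a ULP of the single HUB point (k + 0.5)·ulp, which is why truncation and round-to-nearest coincide.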

Resumo:

The increasing use of fossil fuels, in line with the demographic explosion of cities, leads to huge environmental impacts on society. To mitigate these impacts, regulatory requirements have positively influenced the environmental consciousness of society, as well as the strategic behavior of businesses. Along with this environmental awareness, regulatory bodies have formulated new laws to control potentially polluting activities, notably in the gas station sector. Seeking to increase market competitiveness, this sector needs to respond quickly to internal and external pressures, adapting strategically to the new required standards in order to obtain the Green Badge. Gas stations have incorporated new strategies to attract and retain customers, who present increasing social demands. In the social dimension, these projects help the local economy by generating jobs and income distribution. The present research aims to align the social, economic, and environmental dimensions to set sustainable performance indicators for the gas station sector in the city of Natal/RN. A Sustainable Balanced Scorecard (SBSC) framework was created with a set of indicators for mapping the production process of gas stations; this mapping aimed at identifying operational inefficiencies through multidimensional indicators. To carry out this research, a system for evaluating sustainability performance was developed applying Data Envelopment Analysis (DEA), a quantitative method, to detect the system's efficiency level. To capture the systemic complexity, sub-organizational processes were analyzed with Network Data Envelopment Analysis (NDEA), modeling their micro-activities to identify and diagnose the real causes of overall inefficiency.
The sample comprised 33 gas stations, and the conceptual model included 15 indicators distributed over the three dimensions of sustainability: social, environmental, and economic. These dimensions were measured by means of classical input-oriented DEA-CCR models. To unify the performance scores of the individual dimensions, a single grouping index was designed based on two means: arithmetic and weighted. Another analysis then measured the four SBSC perspectives (learning and growth, internal processes, customers, and financial), unifying the performance scores by averaging. The NDEA results showed that no company achieved excellence in sustainability performance, and some gas stations with higher NDEA efficiency proved inefficient under certain SBSC perspectives. A comparative analysis of sustainable performance among the gas stations was then carried out, enabling entrepreneurs to evaluate their performance against market competitors. Diagnoses were also obtained to support entrepreneurs' decision-making in improving the management of organizational resources and to provide guidelines for regulators. Finally, the average sustainable performance index was 69.42%, reflecting the sector's environmental-compliance efforts. These results point to significant awareness in this segment, but further action is still needed to enhance sustainability in the long term.
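The two grouping indexes can be sketched as follows; the dimension scores and weights below are illustrative, not the study's data:

```python
# Hypothetical DEA efficiency scores for one gas station in the three
# sustainability dimensions, plus illustrative weights.
scores  = {"social": 0.80, "environmental": 0.60, "economic": 0.90}
weights = {"social": 0.25, "environmental": 0.50, "economic": 0.25}

# Arithmetic grouping index: simple average of the dimension scores.
arithmetic_index = sum(scores.values()) / len(scores)

# Weighted grouping index: weights let one dimension (here, the
# environmental one) count more toward the unified performance score.
weighted_index = sum(weights[d] * scores[d] for d in scores)

print(arithmetic_index, weighted_index)
```

The same averaging step is what unifies the four SBSC perspective scores in the analysis described above.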

Resumo:

Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Matemática, Programa de Mestrado Profissional em Matemática em Rede Nacional, 2016.

Resumo:

Pitch estimation, also known as fundamental frequency (F0) estimation, has been a popular research topic for many years and is still investigated today. The goal of pitch estimation is to find the pitch, or fundamental frequency, of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch estimation of real instruments is a very hard task: each instrument has its own physical characteristics, which are reflected in different spectral characteristics; moreover, recording conditions vary from studio to studio, and background noise must be considered. This dissertation presents a novel approach to pitch estimation using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To help with the codification of the problem, we built a highly flexible CGP toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the one known as 1 + λ, where the value of λ can be chosen. The toolbox is simple to use: settings such as the mutation probability and the number of runs and generations are configurable. The Cartesian representation of CGP can take multiple forms and is able to encode function parameters. The toolbox can handle different types of fitness functions, both minimization and maximization of f(x), and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each classifier: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). The F-measure was used as the fitness function.
Signals with the same pitch as the classifier that were correctly identified count as true positives; those not identified count as false negatives. Signals with a different pitch that were not identified count as true negatives; those that were identified count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth, and square waves. Our function set is basically composed of filtering operations on vectors and arithmetic operations with constants and vectors. All the classifiers correctly identified the true positive signals and rejected the true negative signals. We then moved to real audio recordings, testing the classifiers on audio signals different from those used during the training phase. The results of this first approach were very promising but could be improved; with slight changes to the approach, the number of false positives was reduced by 33% compared to the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of pitch estimation.
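The F-measure fitness can be computed directly from the four confusion counts defined above; the counts in this example are hypothetical:

```python
def f_measure(tp, fp, fn):
    """F1 score used as the fitness function: the harmonic mean of
    precision and recall, computed from the confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical evaluation of one note classifier on 40 training signals:
# 18 true positives, 4 false positives, 2 false negatives.
print(f_measure(tp=18, fp=4, fn=2))
```

True negatives do not appear in the formula, which is why the F-measure suits this task: most signals do not match a given note's pitch.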

Resumo:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. Every embedded system contains one or more processor cores that run the software and interact with the other hardware components, and the power consumption of these cores has an important impact on the total power dissipated in the system. Processor power optimization is therefore crucial to satisfying power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation: a fast and accurate method for estimating processor power at design time helps the designer explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific piece of software is key to choosing appropriate algorithms for writing power-efficient code. Simulation-based methods for measuring processor power achieve very high accuracy but are available only late in the design process and are often quite slow. Hence the need for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for predicting processor power consumption. Power predictability is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code that repeat during execution and building a power model based on their average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup compared to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm, based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
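The instruction-counting idea behind the ALU model can be sketched as follows; the opcode set and the per-operation energy constant are invented for illustration, the point being that ACSL's input-independence makes a single constant per operation plausible:

```python
# Assumed constant energy per ALU operation (nJ); made up for illustration.
# ACSL's input-data independence is what justifies using one fixed value.
ENERGY_PER_ALU_OP_NJ = 0.12

def estimate_alu_energy(instruction_trace):
    """Count ALU-related opcodes in a trace and scale by the fixed per-op energy."""
    alu_opcodes = {"ADD", "SUB", "MUL", "DIV", "AND", "OR", "XOR"}
    alu_count = sum(1 for op in instruction_trace if op in alu_opcodes)
    return alu_count * ENERGY_PER_ALU_OP_NJ

trace = ["MOV", "ADD", "ADD", "JMP", "XOR", "MOV", "SUB"]
print(estimate_alu_energy(trace))  # 4 ALU-related instructions in the trace
```

The same counting argument drives the second model: with an average of about n(n-1)/4 comparisons, insertion sort's average-case energy scales with a per-comparison cost.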

Resumo:

Researchers interested in the neurobiology of the acute stress response in humans require a valid and reliable acute stressor that can be used under experimental conditions. The Trier Social Stress Test (TSST) provides such a testing platform. It induces stress by requiring participants to make an interview-style presentation, followed by a surprise mental arithmetic test, in front of an interview panel who do not provide feedback or encouragement. In this review, we outline the methodology of the TSST and discuss key findings under conditions of health and stress-related disorder. The TSST has unveiled differences between males and females, as well as between age groups, in their neurobiological response to acute stress. It has also deepened our understanding of how genotype may moderate the cognitive neurobiology of acute stress, and exciting new inroads have been made in understanding epigenetic contributions to the biological regulation of the acute stress response using the TSST. A number of innovative adaptations have been developed which allow the TSST to be used in group settings, with children, in combination with brain imaging, and with virtual committees. Future applications may incorporate the emerging links between the gut microbiome and the stress response, and future research should maximise use of the behavioural data generated by the TSST. Alternative acute stress paradigms may have utility over the TSST in certain situations, such as those that require repeat testing. Nonetheless, we expect the TSST to remain the gold standard for examining the cognitive neurobiology of acute stress in humans.

Resumo:

Mathematical skills that we acquire during formal education mostly entail exact numerical processing. Besides this specifically human faculty, an additional system exists to represent and manipulate quantities in an approximate manner. We share this innate approximate number system (ANS) with other nonhuman animals and are able to use it to process large numerosities long before we can master the formal algorithms taught in school. Dehaene's (1992) Triple Code Model (TCM) states that even after the onset of formal education, approximate processing is carried out in this analogue magnitude code, regardless of whether the original problem is presented nonsymbolically or symbolically. Despite the wide acceptance of the model, most research uses only nonsymbolic tasks to assess ANS acuity. Due to this silent assumption that genuine approximation can only be tested with nonsymbolic presentations, important implications in research domains of high practical relevance remain unclear, and existing potential is not fully exploited. For instance, it has been found that nonsymbolic approximation can predict math achievement one year later (Gilmore, McCarthy, & Spelke, 2010), that it is robust against the detrimental influence of learners' socioeconomic status (SES), and that it is suited to fostering performance in exact arithmetic in the short term (Hyde, Khanum, & Spelke, 2014). We provide evidence that symbolic approximation might be equally, and in some cases even better, suited to generate predictions and foster more formal math skills independently of SES. In two longitudinal studies, we realized exact and approximate arithmetic tasks in both a nonsymbolic and a symbolic format. With first graders, we demonstrated that performance in symbolic approximation at the beginning of term was the only measure consistently not varying according to children's SES, and among both approximate tasks it was the better predictor of math achievement at the end of first grade.
In part, the strong connection seems to come about through mediation by ordinal skills. In two further experiments, we tested the suitability of both approximation formats for inducing an arithmetic principle in elementary school children. We found that symbolic approximation was as effective as direct instruction in making children exploit the additive law of commutativity in a subsequent formal task, whereas nonsymbolic approximation had no beneficial effect. The positive influence of the symbolic approximate induction was strongest in children just starting school and decreased with age; however, even third graders still profited from the induction. The results show that symbolic problems, too, can be processed as genuine approximation, and that beyond this they have their own specific value with regard to didactic-educational concerns. Our findings furthermore demonstrate that the two often confounded factors 'format' and 'demanded accuracy' cannot easily be disentangled in first graders' numerical understanding, and that children's SES also influences the existing interrelations between the different abilities tested here.

Resumo:

Dyscalculia is usually perceived as a specific learning difficulty for mathematics or, more appropriately, arithmetic. Definitions and diagnoses of dyscalculia are in their infancy and sometimes contradictory; mathematical learning difficulties, however, are certainly not in their infancy, and they are very prevalent and often devastating in their impact. Co-occurrence of learning disorders appears to be the rule rather than the exception, and it is generally assumed to be a consequence of risk factors that are shared between disorders, for example working memory. However, it should not be assumed that all dyslexics have problems with mathematics, although the percentage may be very high, or that all dyscalculics have problems with reading and writing. Because mathematics is very developmental, any insecurity or uncertainty in early topics will impact later topics, hence the need to take intervention back to basics. The severity of the difficulty may nevertheless be reduced. For example, disMAT, an app developed for Android, may help children apply mathematical concepts without much effort, which makes it a promising tool for dyscalculia treatment. Thus, this work focuses on the development of a Decision Support System to estimate evidence of dyscalculia in children, based on data obtained on-the-fly with disMAT. The computational framework is built on top of a Logic Programming approach to Knowledge Representation and Reasoning, grounded on a case-based approach to computing that allows for the handling of incomplete, unknown, or even self-contradictory information.
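The case-based flavour of such a framework can be illustrated with a minimal retrieval sketch; the cases, feature profiles, and similarity measure below are hypothetical and stand in for the actual system's knowledge base:

```python
# Hypothetical case base: each case stores a child's performance profile
# (e.g. scores on three disMAT task groups) and the assessed outcome.
case_base = [
    {"profile": (0.9, 0.8, 0.7), "evidence_of_dyscalculia": False},
    {"profile": (0.3, 0.2, 0.4), "evidence_of_dyscalculia": True},
]

def retrieve(new_profile):
    """Return the stored case most similar to the new profile
    (nearest case by negated squared distance over the features)."""
    def similarity(case):
        return -sum((a - b) ** 2 for a, b in zip(case["profile"], new_profile))
    return max(case_base, key=similarity)

best = retrieve((0.35, 0.25, 0.30))
print(best["evidence_of_dyscalculia"])
```

A real case-based system would also adapt and retain cases, and the Logic Programming layer described above would handle incomplete or contradictory case information; this sketch shows only the retrieval step.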

Resumo:

This paper proposes arithmetic and geometric Paasche quality-adjusted price indexes that combine micro data from the base period with macro data on the averages of asset prices and characteristics at the index period.
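A minimal sketch of the two index forms, leaving the quality adjustment aside and using made-up prices and quantities for two assets:

```python
import math

# Hypothetical data: base-period prices, index-period prices, and
# index-period quantities (Paasche indexes weight by the current basket).
p0 = [100.0, 50.0]
p1 = [110.0, 60.0]
q1 = [2.0, 4.0]

# Arithmetic Paasche: the current-period basket valued at index-period
# prices, relative to the same basket at base-period prices.
arithmetic_paasche = (sum(p * q for p, q in zip(p1, q1))
                      / sum(p * q for p, q in zip(p0, q1)))

# Geometric Paasche (one common form): price relatives weighted by
# index-period expenditure shares.
total = sum(p * q for p, q in zip(p1, q1))
shares = [p * q / total for p, q in zip(p1, q1)]
geometric_paasche = math.prod((a / b) ** s for a, b, s in zip(p1, p0, shares))

print(arithmetic_paasche, geometric_paasche)
```

The paper's quality adjustment would additionally control for changes in asset characteristics between the two periods, which this sketch omits.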

Resumo:

Arachis pintoi and A. repens are legumes with a high forage value that are used to feed ruminants in intercropping systems. Not only do they increase the persistence and quality of pastures, they are also used for ornamental purposes and as green cover. The objective of this study was to analyze microsatellite markers in order to assess the genetic diversity of 65 forage peanut germplasm accessions in the section Caulorrhizae of the genus Arachis from the Jequitinhonha, São Francisco, and Paranã River valleys of Brazil. Fifty-seven accessions of A. pintoi and eight of A. repens were analyzed using 17 microsatellites, and the observed heterozygosity (HO), expected heterozygosity (HE), number of alleles per locus, discriminatory power, and polymorphism information content were estimated. Ten loci (58.8%) were polymorphic, and 125 alleles were found in total. HE ranged from 0.30 to 0.94, and HO ranged from 0.03 to 0.88. Bayesian analysis differentiated the accessions into three gene pools. Neither the unweighted pair group method with arithmetic mean (UPGMA) nor a neighbor-joining analysis clustered samples by species, origin, or collection area. These results reveal a very weak genetic structure without defined clusters and a high degree of similarity between the two species.
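The expected-heterozygosity statistic reported above follows the standard formula HE = 1 − Σ pᵢ²; a minimal sketch with illustrative allele frequencies for one locus:

```python
def expected_heterozygosity(allele_freqs):
    """HE = 1 - sum(p_i^2): the chance that two alleles drawn at
    random from the population differ at this locus."""
    return 1.0 - sum(p ** 2 for p in allele_freqs)

# Hypothetical locus with three alleles at frequencies 0.5, 0.3, 0.2.
freqs = [0.5, 0.3, 0.2]
print(expected_heterozygosity(freqs))
```

Observed heterozygosity (HO), by contrast, is simply the fraction of genotyped individuals that are heterozygous at the locus; comparing HO with HE is what reveals departures from random mating.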

Resumo:

This thesis aimed to investigate the cognitive underpinnings of math skills, with particular reference to cognitive and linguistic markers, core mechanisms of number processing, and environmental variables. In particular, the issue of intergenerational transmission of math skills was examined by comparing parents' and children's basic and formal math abilities. This pattern of relationships was considered in two different age ranges: preschool and primary school children. The first chapter offers a general introduction to mathematical skills, from seminal works up to recent studies and the latest findings, and concludes with a review of studies on the influence of environmental variables, in particular home numeracy and intergenerational transmission. The first study analyzed the relationship between the mathematical skills of children attending primary school and those of their mothers, with the objective of understanding the influence of mothers' math abilities on those of their children. In the second study, the relationship between parents' and children's numerical processing was examined in a sample of preschool children; the goal was to understand how the mathematical skills of parents are relevant for the development of children's numerical skills, taking into account children's cognitive and linguistic skills as well as the role of home numeracy. The third study investigated whether the verbal and nonverbal cognitive skills presumed to underlie arithmetic are also related to reading: primary school children were administered measures of reading and arithmetic to understand the relationship between these two abilities and to test for possible shared cognitive markers. Finally, the general discussion presents a summary of the main findings across the studies, together with clinical and theoretical implications.

Resumo:

The goal of this thesis is to present a simulation tool for studying and eliminating various numerical problems observed while analyzing the behavior of the MIND cable during fast voltage polarity reversal. The tool is built in the MATLAB environment, where several simulations were run to achieve oscillation-free results. This thesis adds to earlier research on HVDC cables subjected to polarity reversals. The code performs numerical simulations to analyze the electric-field and charge-density behavior of a MIND cable before, during, and after polarity reversal; the primary goal, however, is to remove numerical oscillations from the charge-density profile. The generated code is notable for its use of the Arithmetic Mean Approach and the Non-Uniform Field Approach for filtering and minimizing oscillations even under time and temperature variations.
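The Arithmetic Mean Approach can be pictured as a centered moving average over the oscillating profile; the sketch below (in Python rather than the thesis's MATLAB, with a synthetic signal) shows the smoothing idea, not the actual cable model:

```python
import numpy as np

def arithmetic_mean_filter(profile, window=3):
    """Replace each sample by the mean of its neighbourhood
    (centered moving average; endpoints use a shrunken window)."""
    half = window // 2
    return np.array([
        profile[max(0, i - half): i + half + 1].mean()
        for i in range(len(profile))
    ])

# Synthetic oscillating "charge density" profile.
rho = np.array([1.0, 3.0, 1.0, 3.0, 1.0])
print(arithmetic_mean_filter(rho))
```

Averaging damps the point-to-point oscillation at the cost of some spatial resolution, which is the usual trade-off when filtering numerical artifacts out of a physical profile.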

Resumo:

Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) devoted to deploying leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory storage of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels and exploiting parallelism, heterogeneity, and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 microcontroller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude.
To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities of SoA MobileNetV2 models, showing two orders of magnitude performance improvements over current SoA analog/digital solutions.
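As a rough illustration of the sub-byte integer arithmetic that XpulpNN accelerates in hardware, this Python sketch packs two unsigned 4-bit values per byte and computes a dot product by unpacking nibbles into a wider accumulator (the hardware performs this with dedicated packed-SIMD instructions; this is only a functional model):

```python
def pack4(values):
    """Pack pairs of 4-bit unsigned ints (0..15) into bytes, two per byte."""
    return bytes((values[i] << 4) | values[i + 1]
                 for i in range(0, len(values), 2))

def dot4(packed_a, packed_b):
    """Unpack high and low nibbles and accumulate products in a wide int."""
    acc = 0
    for a, b in zip(packed_a, packed_b):
        acc += (a >> 4) * (b >> 4) + (a & 0xF) * (b & 0xF)
    return acc

a = pack4([1, 2, 3, 4])   # two bytes hold four 4-bit weights
b = pack4([5, 6, 7, 8])
print(dot4(a, b))         # 1*5 + 2*6 + 3*7 + 4*8
```

Halving the bit-width doubles the number of operands per memory word, which is why sub-byte formats pay off on memory-constrained MCUs once the ISA can operate on them natively.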