943 results for arithmetic
Abstract:
This study offers a classification of the main logic and logical-reasoning problems found in competitive public examinations and mathematics tests, according to their concepts and characteristics, whether or not they involve mathematics. In addition, the historical evolution of logic was examined through the three major crises in the foundations of mathematics; this analysis helped to define logic as a science quite distinct from mathematics. To relate logical and mathematical thinking, the three types of knowledge described by Piaget were presented, among them logical-mathematical knowledge. The study also reviews the basic concepts of propositional and predicate logic, which aid in classifying problems as involving logical reasoning, formal logic, or algebraic, geometric, or arithmetic knowledge, using Venn diagrams. Furthermore, the problems most frequently found in such tests are solved and classified as described above. As a result, the proposed classification was created and exemplified with eighteen logic problems, duly solved and explained.
Abstract:
This text presents research carried out with 5th-year elementary school students at a public school in the city of Taubaté-SP, involving problem solving through mental calculation. The authors reviewed show that mental calculation is relevant to the production of mathematical knowledge, as it favors students' autonomy and makes them more critical. Official documents that guide educational practice, such as the Parâmetros Curriculares Nacionais, also emphasize that work with mental arithmetic should be encouraged, since it has the potential to foster the production of mathematical knowledge by the student. In this undergraduate thesis research, the tasks proposed to the students, which constituted the fieldwork for data production, were designed, developed, and analyzed under a phenomenological approach. The intention of the research was to understand students' perceptions when faced with situations that encourage them to apply appropriate mental calculation techniques and procedures. We analyze how students express and carry out mental calculation strategies in their search for solutions to problem situations.
Abstract:
The aim of this study was to compare the behavioral profile and school performance of school-age children living with a mother with a clinical history of recurrent depression, diagnosed according to ICD-10 criteria, in order to verify the influence of such adversity. Thirty-eight mother-child dyads were evaluated using tests, interviews, and questionnaires. Approximately two-thirds of the children presented behavioral and school performance difficulties, with a predominance of emotional and relationship problems and impairment in the three areas of school performance assessed (writing, arithmetic, and reading). Such difficulties may be associated with the negative impact of maternal depression. One-third of the children did not present difficulties, which suggests the action of protective mechanisms. The study highlights the importance of considering differences in children's profiles when planning mental health practices.
Abstract:
At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization subproblem to some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in the usual convergence theories. In practice, however, one may be unable to solve the subproblem to the required precision, for different reasons. One of them is that an excessively large penalty parameter can impair the performance of the box-constrained optimization solver. In this paper a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem are discussed. As a result, an improved Augmented Lagrangian method is presented, which handles numerical difficulties in a satisfactory way while preserving a suitable convergence theory. Numerical experiments are presented involving all the test problems of the CUTEr collection.
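The outer loop the abstract refers to can be sketched in a few lines. The following is a minimal illustration under assumed simplifications (a single equality constraint instead of bounds, a plain gradient-descent inner solver, and a hypothetical halving rule for decreasing the penalty parameter when the subproblem is not solved to tolerance); it is not the authors' algorithm:

```python
import numpy as np

def augmented_lagrangian(f_grad, h, h_grad, x, lam=0.0, rho=10.0,
                         tol=1e-8, max_outer=50, max_inner=10_000):
    """Classical augmented Lagrangian outer loop for min f(x) s.t. h(x) = 0.
    If the inner solver fails to reach its tolerance, the penalty parameter
    is *decreased* (the situation discussed in the abstract); otherwise it
    is increased when feasibility does not improve enough."""
    prev_infeas = abs(h(x))
    for _ in range(max_outer):
        # inner solve: gradient descent on the augmented Lagrangian
        step = 1.0 / (2.0 + 2.0 * rho)   # safe step for this quadratic model
        inner_ok = False
        for _ in range(max_inner):
            g = f_grad(x) + (lam + rho * h(x)) * h_grad(x)
            if np.linalg.norm(g) < tol:
                inner_ok = True
                break
            x = x - step * g
        infeas = abs(h(x))
        if inner_ok and infeas < tol:
            break
        lam = lam + rho * h(x)           # first-order multiplier update
        if not inner_ok:
            rho = max(rho / 2.0, 1e-3)   # hypothetical decrease rule
        elif infeas > 0.5 * prev_infeas:
            rho = rho * 10.0             # standard increase rule
        prev_infeas = infeas
    return x, lam

# min (x0-1)^2 + (x1-2)^2  s.t.  x0 + x1 = 1   (solution: x = (0, 1))
f_grad = lambda x: 2.0 * (x - np.array([1.0, 2.0]))
h      = lambda x: x[0] + x[1] - 1.0
h_grad = lambda x: np.array([1.0, 1.0])
x, lam = augmented_lagrangian(f_grad, h, h_grad, np.zeros(2))
print(x)  # approximately [0, 1]
```

On this toy problem the penalty parameter never needs adjusting; the decrease branch merely marks where the paper's strategy would intervene.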
Abstract:
Bark extracts of Stryphnodendron adstringens (Mart.) Coville, a Leguminosae species well known in Brazil as barbatimao, are popularly used as a healing agent. The objective of this work was to determine the genetic diversity of S. adstringens populations and to correlate genetic distances with tannin production. S. adstringens accessions from populations found in Cerrado regions of the states of Goias, Minas Gerais, and Sao Paulo were analyzed using the AFLP (Amplified Fragment Length Polymorphism) technique. A total of 236 polymorphic bands were scored, and a higher proportion of genetic diversity was found within populations (70.9%) than among populations (29.1%). The F-ST value was significantly greater than zero (0.2906), demonstrating the complex genetic structure of S. adstringens populations. Accessions collected in Cristalina, GO, showed the highest percentage of polymorphic loci (87.3%) and the highest genetic diversity. The lowest genetic variability was detected among accessions from the population growing in Caldas Novas, GO. The genetic distance among populations was estimated using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA), which grouped the populations into 3 clusters. Moreover, chemotypes with tannin concentrations above 40% showed higher genetic similarity. AFLP analysis proved to be an efficient molecular marker technique to determine the genetic diversity among remaining populations of S. adstringens. The results obtained may be employed to implement further strategies for the conservation of this medicinal plant. (C) 2011 Elsevier Ltd. All rights reserved.
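UPGMA, used above to cluster the populations, is ordinary agglomerative clustering whose distance update is a size-weighted arithmetic mean. A naive sketch (the four labels and distances are made-up toy data, not the study's; in practice `scipy.cluster.hierarchy.linkage(..., method='average')` is the standard tool):

```python
import itertools

def upgma(dist, labels):
    """Naive UPGMA: repeatedly merge the closest pair of clusters, updating
    distances with the size-weighted arithmetic mean (whence the name)."""
    clusters = {lab: 1 for lab in labels}          # cluster -> number of leaves
    d = {frozenset(p): dist[p] for p in itertools.combinations(labels, 2)}
    merges = []
    while len(clusters) > 1:
        pair = min(d, key=d.get)                   # closest pair of clusters
        a, b = tuple(pair)
        height = d.pop(pair)
        na, nb = clusters.pop(a), clusters.pop(b)
        new = f"({a},{b})"
        for c in list(clusters):
            # weighted arithmetic mean of the two old distances
            dac = d.pop(frozenset((a, c)))
            dbc = d.pop(frozenset((b, c)))
            d[frozenset((new, c))] = (na * dac + nb * dbc) / (na + nb)
        clusters[new] = na + nb
        merges.append((new, height))
    return merges

# toy ultrametric distances between four hypothetical accessions
dist = {("A", "B"): 2, ("A", "C"): 6, ("A", "D"): 10,
        ("B", "C"): 6, ("B", "D"): 10, ("C", "D"): 10}
merges = upgma(dist, ["A", "B", "C", "D"])
print([h for _, h in merges])  # merge heights 2, 6, 10
```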
Abstract:
Objectives: This study evaluated the surface microhardness (SM) and roughness (SR) alterations of dental resins submitted to pH-catalysed degradation regimens. Methods: Thirty discs each of TPH Spectrum (Dentsply), Z100 (3M-ESPE), and an unfilled experimental bis-GMA/TEGDMA resin were fabricated, totaling 90 specimens. Each specimen was polymerized for 40 s, finished, polished, and individually stored in deionized water at 37 degrees C for 7 days. Specimens were randomly assigned to the following pH solutions: 1.0, 6.9 or 13, and to SM or SR evaluations (n = 5). Baseline Knoop hardness of each specimen was obtained as the arithmetic mean of five random micro-indentations. For SR, mean baseline values were obtained from five random surface tracings (R-a). Specimens were then soaked in one of the following storage media at 37 degrees C: (1) 0.1 M, pH 1.0 HCl, (2) 0.1 N, pH 13.0 NaOCl, and (3) deionized water (pH 6.9). Solutions were replaced daily. Repeated SM and SR measurements were performed at the 3-, 7- and 14-day storage intervals. For each test and resin, data were analysed by two-way ANOVA followed by Tukey's test (alpha = 0.05). Results: There was a significant decrease in SM and increase in SR values of the composites after storage in alkaline medium. TPH and Z100 presented similar behaviour for SM and SR after immersion in the different media, whereas the unfilled resin values showed no significant change. Conclusion: Hydrolytic degradation of resin composites seems to begin with the silanized inorganic particles and therefore to depend on their composition. Significance: To accelerate composite hydrolysis and produce quick in vitro microstructural damage, alkaline medium appears to be more suitable than acidic medium. Contemporary resin composite properties seem to withstand neutral and acidic oral environments tolerably well. (C) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Tribocharged polymers display macroscopically patterned positive and negative domains, verifying the fractal geometry of electrostatic mosaics previously detected by electric probe microscopy. Excess charge on contacting polyethylene (PE) and polytetrafluoroethylene (PTFE) follows the triboelectric series but with one caveat: net charge is the arithmetic sum of patterned positive and negative charges, as opposed to the usual assumption of uniform but opposite-sign charging on each surface. Extraction with n-hexane preferentially removes positive charges from PTFE, while 1,1-difluoroethane and ethanol largely remove both positive and negative charges. Using suitable analytical techniques (electron energy-loss spectral imaging, infrared microspectrophotometry and carbonization/colorimetry) and theoretical calculations, the positive species were identified as hydrocarbocations and the negative species as fluorocarbanions. A comprehensive model is presented for PTFE tribocharging with PE: mechanochemical chain homolytic rupture is followed by electron transfer from hydrocarbon free radicals to the more electronegative fluorocarbon radicals. Polymer ions self-assemble according to Flory-Huggins theory, thus forming the experimentally observed macroscopic patterns. These results show that tribocharging can only be understood by considering the complex chemical events triggered by mechanical action, coupled to well-established physicochemical concepts. Patterned polymers can be cut and mounted to make macroscopic electrets and multipoles.
Abstract:
Studies involving amplified fragment length polymorphism of cDNA (cDNA-AFLP) have often used polyacrylamide gels with radiolabeled primers to establish the best primer combinations and to analyze and recover transcript-derived fragments. Using an automatic sequencer to establish the best primer combinations is convenient because it saves time, reduces costs and the risks of contamination with radioactive material and acrylamide, and allows objective band matching and more precise evaluation of transcript-derived fragment intensities. This study aimed at examining the gene expression of commercial cultivars of P. guajava subjected to water stress and mechanical injury, combining analyses by automatic sequencer and fluorescent kits for polyacrylamide gel electrophoresis. First, 64 combinations of EcoRI and MseI primers were tested. The ten combinations with the highest numbers of polymorphic fragments were then selected for transcript-derived fragment recovery and cluster analysis, involving 45 saplings of P. guajava. Two groups were obtained, one composed of the control saplings and another formed by the saplings undergoing stress, with no clear distinction between stress treatments. The results revealed the convenience of combining an automatic sequencer with fluorescent kits for polyacrylamide gel electrophoresis to examine gene expression profiles. The Unweighted Pair Group Method with Arithmetic Mean analysis using Euclidean distances points to a similar induced response mechanism in P. guajava undergoing water stress and mechanical injury.
Abstract:
The purpose of this study was to analyze the influence of lactation and the dry period on the constituents of lipid and glucose metabolism of buffaloes. One hundred forty-seven samples of serum and plasma were collected between November 2009 and July 2010 from farms raising Murrah, Mediterranean and crossbred buffaloes, located in the State of Sao Paulo, Brazil. Biochemical analysis was performed by determining the contents of serum cholesterol, triglycerides, beta-hydroxybutyrate (β-HBO), non-esterified fatty acids (NEFA) and plasma glucose. Arithmetic means and standard errors of the mean were calculated using the SAS procedure, version 9.2. Tests for normality of residuals and homogeneity of variances were performed using the SAS Guide Data Analysis. Data were analyzed by ANOVA using the SAS Glimmix procedure. Group (lactation stage), farm, and age were used in the statistical models. Group means were compared using the Least Square Means (LSMeans) of SAS, with significance set at P ≤ 0.05. It was possible to conclude that buffaloes at peak lactation need to metabolize body reserves to compensate for the lower amounts of circulating lipids while they remain in negative energy balance. In the dry period, there were significant changes in the lipid profile, characterized by decreased nutritional requirements, with consequent improvement in the general condition of the animals.
Abstract:
Humans process numbers in a way similar to animals: countless studies report comparable performance between animals and humans (adults and/or children). Three models have been developed to explain the cognitive mechanisms underlying number processing. The triple-code model (Dehaene, 1992) posits a mental number line as the preferred way to represent magnitude. The mental number line shows three characteristic effects: the distance, magnitude, and SNARC effects. The SNARC effect reveals a spatial association between number and space representations: small numbers are associated with left space, while large numbers are associated with right space. Recently, a vertical SNARC effect has also been found (Ito & Hatta, 2004; Schwarz & Keus, 2004), reflecting a bottom-to-top spatial representation of numbers. Horizontal and vertical magnitude representations may influence performance in explicit and implicit digit tasks. This research project aimed to investigate the spatial components of number representation using different experimental designs and tasks. Experiment 1 focused on horizontal and vertical number representations in within- and between-subjects designs, using parity and magnitude comparison tasks with positive or negative Arabic digits (1-9, excluding 5). Experiment 1A replicated the SNARC and distance effects in both spatial arrangements. Experiment 1B showed a horizontal reversed SNARC effect in both tasks, while a vertical reversed SNARC effect was found only in the comparison task. In Experiment 1C, two groups of subjects performed both tasks under two different instruction-to-responding-hand assignments with positive numbers. The results did not show any significant difference between the two assignments, even though the vertical number line seemed more flexible than the horizontal one. On the whole, Experiment 1 suggests a contextual (i.e., task-set) influence on the nature of the SNARC effect. Experiment 2 focused on the effect of horizontal and vertical number representations on spatial biases in paper-and-pencil bisection tasks. In Experiment 2A, participants were asked to bisect physical lines and digit strings (2 or 9) arranged horizontally and vertically. The findings showed that, horizontally, strings of the digit 9 tended to generate a more rightward bias than strings of the digit 2. In the vertical condition, however, strings of the digit 2 generated a more upward bias than strings of the digit 9, suggesting a top-to-bottom number line. In Experiment 2B, participants were asked to bisect lines flanked by numbers (1 or 7) in four spatial arrangements: horizontal, vertical, right-diagonal, and left-diagonal. Four number conditions were created according to congruent or incongruent number-line representations: 1-1, 1-7, 7-1, and 7-7. The main results showed a more reliable rightward bias in the horizontal congruent condition (1-7) than in the incongruent condition (7-1). Vertically, the incongruent condition (1-7) produced a significant bias towards the bottom of the line compared with the congruent condition (7-1). Experiment 2 suggests a rather rigid horizontal number line, whereas in the vertical condition the number representation may be more flexible. In Experiment 3 we used the materials of Experiment 2B to look for a number-line effect on temporal (motor) performance. Participants were presented with horizontal, vertical, right-diagonal, and left-diagonal lines flanked by the same digits (1-1 or 7-7) or by different digits (1-7 or 7-1), spatially congruent or incongruent with their hypothesized mental representations. Participants were instructed to touch the lines either close to the large digit, close to the small digit, or at the midpoint. Number processing influenced movement execution more than movement planning, and number congruency influenced spatial biases mostly along the horizontal but also along the vertical dimension. These results support a two-dimensional magnitude representation. Finally, Experiment 4 addressed the visuo-spatial manipulation of number representations for accessing and retrieving arithmetic facts. Participants performed a number-matching and an addition-verification task. The findings showed an interference effect between sum nodes and neutral nodes only with a horizontal presentation of the digit cues in the number-matching task. In the addition-verification task, performance was similar for horizontal and vertical presentations of the arithmetic problems. In conclusion, the data seem to show an automatic activation of a horizontal number line that is also used to retrieve arithmetic facts. The horizontal number line appears to be more rigid and the preferred way to order numbers from left to right, a possible explanation being the left-to-right direction of reading and writing. The vertical number line appears to be more flexible and more task-dependent, perhaps reflecting the many environmental examples that represent numbers either bottom-to-top or top-to-bottom. The bottom-to-top number line, however, seems to be activated only by explicit task demands.
Abstract:
The thesis deals with the modularity conjecture for three-dimensional Calabi-Yau varieties. This is a generalization of the work of A. Wiles and others on the modularity of elliptic curves. Modularity connects the number of points on varieties with the coefficients of certain modular forms. In chapter 1 we collect the basics on the arithmetic of Calabi-Yau manifolds, including general modularity results and strategies for modularity proofs. In chapters 2, 3, 4 and 5 we investigate examples of modular Calabi-Yau threefolds, including all examples occurring in the literature and many new ones. Double octics, i.e. double coverings of projective 3-space branched along an octic surface, are studied in detail. In chapter 6 we deal with examples connected with the same modular forms. According to the Tate conjecture there should be correspondences between them; many correspondences are constructed explicitly. We finish by formulating conjectures on the newforms that occur, especially their levels. In the appendices we compile tables of coefficients of weight 2 and weight 4 newforms and many examples of double octics.
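The connection between point counts and modular form coefficients can be made concrete in the one-dimensional case the abstract generalizes. For the elliptic curve E: y² = x³ − x (a standard example of conductor 32, chosen by us, not taken from the thesis), the traces a_p = p + 1 − #E(F_p) reproduce the q-expansion coefficients of the attached weight-2, level-32 newform:

```python
def a_p(p):
    """Trace of Frobenius a_p = p + 1 - #E(F_p) for E: y^2 = x^3 - x,
    counting projective points over F_p by brute force."""
    count = 1  # the point at infinity
    squares = {}
    for y in range(p):          # tally how many y give each square value
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    for x in range(p):          # each x contributes one point per square root
        rhs = (x * x * x - x) % p
        count += squares.get(rhs, 0)
    return p + 1 - count

print([a_p(p) for p in [3, 5, 7, 11, 13]])  # [0, -2, 0, 0, 6]
```

The zeros at p ≡ 3 (mod 4) reflect the curve's complex multiplication; for a Calabi-Yau threefold the analogous counts are compared against weight-4 newform coefficients, as in the thesis's tables.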
Abstract:
Part 1: Known constructions. This thesis first gives a detailed survey of previous developments in the classical field of hypersurfaces with many singularities. The maximal number mu^n(d) of singularities on a hypersurface of degree d in P^n(C) is known only in very few cases; in P^3(C), for example, only for d<=6. Apart from such exceptions, only upper and lower bounds exist. Part 2: New constructions. For small degrees d it is often possible to obtain better results than those given by the general bounds. In this thesis we describe several algorithmic approaches to this end, one of which uses computer algebra in characteristic 0. Our other algorithmic methods are based on a search over finite fields. Lifting the hypersurfaces found experimentally in this way, by exploiting their geometry or arithmetic, yields for example a surface of degree 7 with 99 real ordinary double points and a surface of degree 9 with 226 ordinary double points. These constructions give the first lower bounds for mu^3(d) for odd degree d>5 that exceed the general bound. Our algorithm also has the potential to be applied to many other problems in algebraic geometry. Besides these algorithmic methods, we describe a construction of hypersurfaces of degree d in P^n with many A_j singularities, j>=2. These examples, whose existence we prove using the theory of dessins d'enfants, exceed the known lower bounds in most cases and in particular yield new asymptotic lower bounds for j>=2, n>=3. Part 3: Visualization. We conclude with an application of our new visualization software surfex, which combines the strengths of several existing programs, to the construction of affine equations for all 45 topological types of real cubic surfaces.
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic scheduling problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (implementing scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns the assignment of times and resources to a set of activities, to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, in which the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for computing a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint, along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions.
The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on practical-size problems.
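The modular form of a cyclic precedence can be illustrated with a small helper. This is our own illustrative sketch of the underlying arithmetic, not the paper's propagator: in a schedule of period T, activity j may belong to a later iteration than activity i, which shifts its start by an integer multiple of T.

```python
def min_iteration_distance(s_i, d_i, s_j, period):
    """For a precedence i -> j in a cyclic schedule with the given period,
    occurrence k of j may start only after occurrence 0 of i has finished:
        s_i + d_i <= s_j + k * period.
    Returns the smallest non-negative integer k satisfying this; k is the
    kind of auxiliary quantity a modular precedence constraint reasons
    about (an assumed simplification of the paper's formulation)."""
    k = -(-(s_i + d_i - s_j) // period)  # ceiling division
    return max(0, k)

# i starts at 7 with duration 5 (ends at 12); j starts at 4 within the
# period of length 10, so j must belong to the next iteration (k = 1).
print(min_iteration_distance(7, 5, 4, 10))
```

Because k depends on the period, fixing T turns the constraint linear (the generate-and-test approach mentioned above), whereas the paper's model keeps T as a decision variable.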
Abstract:
This thesis provides efficient and robust algorithms for the computation of the intersection curve between a torus and a simple surface (e.g. a plane, a natural quadric or another torus), based on algebraic and numeric methods. The algebraic part includes the classification of the topological type of the intersection curve and the detection of degenerate situations like embedded conic sections and singularities. Moreover, reference points for each connected intersection curve component are determined. The required computations are realised efficiently by solving polynomials of at most quartic degree, and exactly by using exact arithmetic. The numeric part includes algorithms for tracing each intersection curve component, starting from the previously computed reference points. Using interval arithmetic, accidental errors such as jumping between branches or skipping parts of the curve are prevented. Furthermore, the neighbourhoods of singularities are treated correctly. Our algorithms are complete in the sense that any kind of input can be handled, including degenerate and singular configurations. They are verified, since the results are topologically correct and approximate the real intersection curve up to any given error bound. The algorithms are robust, since no human intervention is required, and efficient in that the treatment of algebraic equations of high degree is avoided.
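The role interval arithmetic plays in such curve tracing is that every evaluation returns a guaranteed enclosure, so a tracing step can be rejected whenever the enclosure excludes the expected value. A minimal sketch (our own toy class; a real implementation would additionally use outward-directed rounding, which is omitted here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the product interval is bounded by the four endpoint products
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def contains(self, x):
        return self.lo <= x <= self.hi

# Evaluating f(x) = x*x + x over [-1, 1] yields an enclosure of every true
# value of f on that interval; a tracer can flag a suspected branch jump
# whenever the enclosure excludes the value it expects.
x = Interval(-1.0, 1.0)
y = x * x + x
print(y.lo, y.hi)
```

Note the enclosure [-2, 2] is wider than the true range [-0.25, 2] of x² + x on [-1, 1]: naive interval evaluation is conservative (the dependency problem), which is safe for the rejection test above.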
Abstract:
In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, applying in part graph theory, algebraic geometry and number theory. The three main topics are the graph-theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph-theoretic properties of these polynomials. By the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. Using a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore, we explain the relationship of the sector decomposition method to the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to appear in results for the Laurent coefficients of certain dimensionally regularized Feynman integrals. Using the extended sector decomposition algorithm, we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
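The matrix-tree construction of the first Symanzik polynomial mentioned above can be illustrated with a short sympy sketch. Taking edge weight 1/x_e in the Laplacian and multiplying the minor determinant by the product of all x_e gives the sum over spanning trees of the parameters *not* in the tree; the graph encoding and helper function are our own illustrative choices:

```python
import sympy as sp

def first_symanzik(n_vertices, edges):
    """First Symanzik polynomial U via the matrix-tree theorem:
    U = (prod_e x_e) * det(minor of the Laplacian with weights 1/x_e)."""
    xs = sp.symbols(f"x1:{len(edges) + 1}")
    L = sp.zeros(n_vertices, n_vertices)
    for w, (u, v) in zip(xs, edges):
        L[u, u] += 1 / w
        L[v, v] += 1 / w
        L[u, v] -= 1 / w
        L[v, u] -= 1 / w
    minor = L[1:, 1:]  # delete one row and the corresponding column
    U = sp.prod(xs) * minor.det()
    return sp.expand(sp.cancel(sp.together(U)))

# one-loop bubble: two vertices joined by two propagators -> U = x1 + x2
print(first_symanzik(2, [(0, 1), (0, 1)]))
# two-loop sunrise: three propagators -> U = x1*x2 + x1*x3 + x2*x3
print(first_symanzik(2, [(0, 1), (0, 1), (0, 1)]))
```

For the bubble the spanning trees are the single edges, so each tree contributes the other edge's parameter, reproducing the well-known U = x1 + x2.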