852 results for Initial data problem
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Visually representing the external appearance of an extinct animal requires, for a reasonably reliable and expressive reconstruction, a careful compilation and arrangement of the scientific conclusions drawn from the fossil findings. This work proposes an initial model of a briefing to be applied in the paleodesign process of a paleovertebrate. A briefing can be understood as the gathering of all data necessary to carry out a project. We point out what must be known about the relevant structures in order to access all the data, and why such information is important. We expect the briefing suggested here to be applied with flexibility, serving as a facilitating interface between paleoartists and paleontologists.
Abstract:
This research discusses Physics teachers' undergraduate education and how their professional performance relates to the knowledge acquired during this initial education. More specifically, we address questions such as: How do future teachers evaluate the knowledge acquired during their initial education, in terms of both subject-matter knowledge and pedagogical knowledge? What are their formative needs and their expectations about future professional performance and the school teaching environment? Data were collected over one semester from a sample of 26 future high school physics teachers who were taking the supervised curricular training in an undergraduate Physics education program (called Licenciatura in Brazil) at a São Paulo State public university. Besides the final report of this training, the future teachers were asked to answer a questionnaire probing their conceptions of their initial education program, their formative needs, their expectations about the profession, and the high school teaching environment. According to the future teachers, the program they were about to finish was satisfactory in terms of specific Physics content; however, regarding pedagogical content knowledge and pedagogical practice, they reported being unsatisfied and insecure. Most questionnaire responses indicated that they feel a lack of teaching experience. Moreover, the future teachers emphasized other factors related to their professional performance: possible difficulties in dealing with student indiscipline, the poor physical structure of schools, the limited number of Physics classes at the high school level, the lack of didactic laboratories, and a fear that experienced teachers will not be collaborative with newcomers.
In this sense, the research outcomes show the need for discussions of questions involving teacher knowledge in both the Physics conceptual domain and the pedagogical one, since these matter directly to future teachers' professional performance. Such discussions can also support the evaluation and restructuring of programs designed for initial and continuing teacher education.
Abstract:
The aim was to investigate the difficulties and limits faced by four future mathematics teachers in conducting classes under a problem-solving approach. After participating in a course on this approach, the undergraduates elaborated three didactic sequences, which they taught during the classroom-teaching activity of the Supervised Curricular Training course. After this work, each participated in an individual interview to report on what they had developed in the classroom. The results showed difficulties in the following aspects: elaborating the didactic sequences, and providing an environment for discussing the students' resolution strategies. Furthermore, the data analysis revealed limits related to the lack of room given by the school teacher for implementing the lessons developed, and to the students' lack of basic mathematical knowledge.
Abstract:
In instrumental records of daily precipitation, we often encounter one or more periods in which values below some threshold were not registered. Such periods, besides lacking small values, also have a large number of dry days. Their cumulative distribution function is shifted to the right relative to that of other portions of the record having more reliable observations. Such problems are examined in this work, based mostly on the two-sample Kolmogorov–Smirnov (KS) test, in which the portion of the series with the larger number of dry days is compared with the portion with the smaller number. Another relatively common problem in daily rainfall data is the prevalence of integers, either throughout the period of record or in some part of it, likely resulting from truncation during data compilation prior to archiving or from coarse rounding of daily readings by observers. This problem is identified by a simple calculation of the proportion of integers in the series, taking the expected proportion as 10%. The above two procedures were applied to the daily rainfall data sets from the European Climate Assessment (ECA), Southeast Asian Climate Assessment (SACA), and Brazilian Water Resources Agency (BRA). Taking a KS statistic D > 0.15 with corresponding p-value < 0.001 as the condition for classifying a series as suspicious, the proportions of the ECA, SACA, and BRA series falling into this category are, respectively, 34.5%, 54.3%, and 62.5%. With respect to the coarse-rounding problem, the proportions of series exceeding twice the 10% reference level are 3%, 60%, and 43% for the ECA, SACA, and BRA data sets, respectively. A simple way to visualize the two problems addressed here is to plot the time series of daily rainfall over a limited range, for instance 0–10 mm day⁻¹.
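The two screening procedures described above are simple enough to sketch in a few lines. The snippet below uses invented daily-rainfall values purely for illustration; in practice one would use a library routine (e.g. SciPy's `ks_2samp`) to obtain the exact p-value, and the D > 0.15 and 10% thresholds follow the text.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic D = max |F_a(x) - F_b(x)|."""
    a, b = sorted(sample_a), sorted(sample_b)
    def ecdf(sorted_s, x):  # fraction of values <= x
        return bisect.bisect_right(sorted_s, x) / len(sorted_s)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

def integer_proportion(series):
    """Proportion of integer values among wet (non-zero) days."""
    wet = [x for x in series if x > 0]
    return sum(1 for x in wet if float(x).is_integer()) / len(wet)

# Hypothetical daily rainfall (mm): the second portion lacks values below
# 2 mm, mimicking the shifted-CDF problem described above.
good_portion = [0.2, 0.5, 1.1, 2.3, 3.8, 5.0, 7.4, 0.9, 1.6, 4.2]
suspect_portion = [2.1, 3.0, 4.5, 6.2, 8.0, 2.8, 5.5, 7.1, 3.3, 9.0]

D = ks_statistic(good_portion, suspect_portion)
print(D > 0.15)  # True: the pair is flagged as suspicious
print(integer_proportion([1.0, 2.5, 3.0, 0.0]))  # 2 of 3 wet values are integers
```

The p-value test adds a significance check on top of D; here only the statistic itself is computed, which suffices to show the mechanics of the comparison.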
Abstract:
This paper presents a mathematical model adapted from the literature for the crop rotation problem with demand constraints (CRP-D). The main aim of the present work is to study metaheuristics and their performance in a real context. The proposed algorithms for solving the CRP-D are a genetic algorithm, simulated annealing, and two hybrid approaches: a genetic algorithm combined with simulated annealing, and a genetic algorithm combined with a local search algorithm. A new constructive heuristic was also developed to provide initial solutions for the metaheuristics. Computational experiments were performed using a real planting area and semi-randomly generated instances created by varying the number, positions, and dimensions of the lots. The computational results showed that these algorithms determine good feasible solutions in a short computing time compared with the time spent to obtain optimal solutions, proving their efficacy for this practical application of the CRP-D.
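To give a flavor of the metaheuristic side, here is a minimal simulated-annealing skeleton applied to a toy rotation constraint (repeating a crop in consecutive periods is penalized). All names, parameters, and the cost function are invented for this sketch; the paper's actual operators and CRP-D model are far more elaborate.

```python
import math
import random

def simulated_annealing(initial, cost, neighbor, t0=10.0, cooling=0.95,
                        iterations=500, seed=42):
    """Generic SA loop: accept worse moves with probability exp(-delta/t)."""
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(iterations):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - current_cost
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy instance: 8 planting periods, 3 hypothetical crops (0, 1, 2);
# the cost counts consecutive periods planted with the same crop.
def rotation_cost(plan):
    return sum(1 for a, b in zip(plan, plan[1:]) if a == b)

def change_one_period(plan, rng):
    plan = list(plan)
    plan[rng.randrange(len(plan))] = rng.randrange(3)
    return plan

plan, penalty = simulated_annealing([0] * 8, rotation_cost, change_one_period)
print(penalty)  # far below the initial cost of 7
```

The hybrid approaches in the paper embed a loop like this one inside a genetic algorithm, using it to refine individual chromosomes.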
Abstract:
Graduate Program in Production Engineering (Pós-graduação em Engenharia de Produção) - FEG
Abstract:
The recent likely extinction of the baiji (Chinese river dolphin [Lipotes vexillifer]) (Turvey et al. 2007) makes the vaquita (Gulf of California porpoise [Phocoena sinus]) the most endangered cetacean. The vaquita has the smallest range of any porpoise, dolphin, or whale and, like the baiji, has long been threatened primarily by accidental deaths in fishing gear (bycatch) (Rojas-Bracho et al. 2006). Despite repeated recommendations from scientific bodies and conservation organizations, no effective actions have been taken to remove nets from the vaquita’s environment. Here, we address three questions that are important to vaquita conservation: (1) How many vaquitas remain? (2) How much time is left to find a solution to the bycatch problem? and (3) Are further abundance surveys or bycatch estimates needed to justify the immediate removal of all entangling nets from the range of the vaquita? Our answers are, in short: (1) there are about 150 vaquitas left, (2) there are at most 2 years within which to find a solution, and (3) further abundance surveys or bycatch estimates are not needed. The answers to the first two questions make clear that action is needed now, whereas the answer to the last question removes the excuse of uncertainty as a delay tactic. Herein we explain our reasoning.
Abstract:
Dynamic conferencing refers to a scenario wherein any subset of users in a universe of users forms a conference for sharing confidential information among themselves. The key distribution (KD) problem in dynamic conferencing is to compute a shared secret key for such a dynamically formed conference. In the literature, KD schemes for dynamic conferencing are either computationally unscalable or require communication among users, which is undesirable. The extended symmetric polynomial based dynamic conferencing scheme (ESPDCS) is one such KD scheme, with a high computational complexity that depends on the universe size. In this paper we present an enhancement of the ESPDCS scheme, a KD scheme called universe-independent SPDCS (UI-SPDCS), whose complexity is independent of the universe size. However, the UI-SPDCS scheme does not scale with the conference size. We therefore propose a relatively scalable KD scheme, termed DH-SPDCS, that combines the UI-SPDCS scheme with the tree-based group Diffie-Hellman (TGDH) key exchange protocol. The proposed DH-SPDCS scheme provides a configurable trade-off between its computation and communication complexity.
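The group Diffie-Hellman primitive underlying protocols like TGDH rests on the commutativity of modular exponentiation: folding member secrets over a generator yields the same key regardless of order. The toy sketch below (tiny textbook parameters, no blinding, no tree structure, entirely insecure) illustrates only that algebra, not the DH-SPDCS protocol itself.

```python
def group_key(p, g, secrets):
    """Iterated Diffie-Hellman: K = g^(x1*x2*...*xn) mod p."""
    key = g
    for x in secrets:
        key = pow(key, x, p)
    return key

# Classic textbook parameters (far too small for real use).
p, g = 23, 5
alice, bob = 6, 15

# Two-party case: each side raises the other's public value to its own secret.
public_a = pow(g, alice, p)          # 5^6 mod 23
public_b = pow(g, bob, p)            # 5^15 mod 23
shared_a = pow(public_b, alice, p)
shared_b = pow(public_a, bob, p)
print(shared_a == shared_b)          # True: both sides derive the same key

# The same algebra extends to a conference of n members; in TGDH the
# exponentiations are arranged along a binary key tree rather than a chain,
# which is what makes membership changes cheap.
print(group_key(p, g, [6, 15]) == group_key(p, g, [15, 6]))  # True
```

TGDH's contribution is precisely the tree arrangement: a join or leave only invalidates the keys on one root-to-leaf path, so rekeying costs O(log n) exponentiations instead of O(n).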
Abstract:
In this action research study of my 8th grade mathematics classroom, I investigated the use of daily warm-ups written in a problem-solving format. Data were collected to determine whether such warm-ups would affect students' problem-solving abilities and their overall attitudes toward problem solving, and whether the activity could also enhance their readiness each day to learn new mathematics concepts. I also hoped this practice would help maximize the amount of time I have with my students for math instruction. I discovered that daily exposure to problem-solving practices did impact the students' overall abilities and achievement (though sometimes not positively), and the students' attitudes likewise showed slight changes. It certainly seemed to improve their readiness for the day's lesson, as class started in a more timely manner and students were more actively involved in learning mathematics (or perhaps working on mathematics) than in other classes not involved in the research. As a result of this study, I plan to continue using daily problem-solving warm-ups (perhaps on a less formal or regimented level) and to continue gathering data to determine whether this methodology can be useful in improving students' overall mathematical skills, abilities, and achievement.
Abstract:
In 1979, the Game Division Administration of the Wyoming Game and Fish Department (WGFD) appointed John Demaree and Tim Fagan to develop a handbook that would address the ever-increasing problem of wildlife depredation. Field personnel were oftentimes at a loss on how to deal with or evaluate the assorted types of damage situations they were encountering. Because Wyoming requires landowners to be reimbursed for damage done by big and trophy game and game birds to their crops and livestock, an evaluation and techniques handbook was desperately needed. The initial handbook, completed in January 1981, was 74 pages, and both John and I considered it a masterpiece. It did not take long, however, for this handbook to become somewhat lacking in information and outdated. In 1990, our administration approached us again, asking this time for an update of our ten-year-old handbook. John and I went to work and, with the assistance of Evin Oneale of the Wyoming Cooperative Fish and Wildlife Research Unit, and Bill Hepworth and John Schneidmiller of the WGFD, have just completed the second edition. This edition is over 600 pages and titled "The Handbook of Wildlife Depredation Techniques." Neither of us cares to be around when a third edition is needed. In this handbook we have attempted to cover any type of damage situation our personnel may encounter. Although the primary function of this manual is to inform department personnel about proper and uniform damage prevention and evaluation techniques, it also provides relevant and pertinent information concerning the many aspects of wildlife depredation. Information for this handbook has been compiled from techniques developed by our personnel and by personnel from other states and provinces, and from published data on wildlife depredation. There are nine chapters, a reprint, and an Appendix section in this handbook. We will briefly summarize the contents of each chapter.
Abstract:
Background: A current challenge in gene annotation is to define gene function in the context of a network of relationships rather than of single genes. The inference of gene networks (GNs) has emerged as an approach to better understand the biology of the system and to study how the components of such a network interact with each other and keep their functions stable. In general, however, there are not enough data to accurately recover GNs from expression levels alone, leading to the curse of dimensionality, in which the number of variables exceeds the number of samples. One way to mitigate this problem is to integrate biological data rather than using only expression profiles in the inference process. The use of additional biological information in inference methods has increased significantly in recent years, with the goal of better recovering the connections between genes and reducing false positives. What makes this strategy so interesting is the possibility of confirming known connections through the included biological data, and of discovering new relationships between genes when observing the expression data. Although several works on data integration have increased the performance of network inference methods, the real contribution of each type of biological information to the obtained improvement is not clear. Methods: We propose a methodology for including biological information in an inference algorithm in order to assess the prediction gain from using biological information and expression profiles together. We also evaluated and compared the gain from adding four types of biological information: (a) protein-protein interaction, (b) Rosetta stone fusion proteins, (c) KEGG, and (d) KEGG+GO. Results and conclusions: This work presents a first comparison of the gain from using prior biological information in the inference of GNs, considering the eukaryote P. falciparum.
Our results indicate that information based on direct interaction can produce a greater improvement in the gain than data about less specific relationships such as GO or KEGG. Also, as expected, the results show that the use of biological information is a very important approach for improving the inference. We also compared the gain in the inference of the global network and of the hubs only. The results indicate that the use of biological information can improve the identification of the most connected proteins.
Abstract:
The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been widely investigated for such problems. These optimization methods simultaneously generate a large number of potential solutions to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (the population). For a tree with n nodes, the most efficient data structures available in the literature require O(n) time to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and we demonstrate that, using this encoding, generating a new spanning forest requires average time O(√n). Experiments with an EA based on the NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds only small constants and lower-order terms to the theoretical bound.
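The key property behind such encodings can be shown with the simpler node-depth idea: storing a tree as a preorder list of (node, depth) pairs makes every subtree a contiguous run of the array, so a subtree can be located and detached without traversing the whole tree. The sketch below is illustrative only; the actual NDDR additionally stores node degrees and achieves the O(√n) average bound through more careful bookkeeping.

```python
def subtree_range(nd, i):
    """Positions [i, j) of the subtree rooted at position i of a
    node-depth array (a preorder list of (node, depth) pairs)."""
    root_depth = nd[i][1]
    j = i + 1
    while j < len(nd) and nd[j][1] > root_depth:
        j += 1  # descendants form a contiguous run of greater depth
    return i, j

# Tree:  0
#       / \
#      1   2
#     / \
#    3   4
tree = [(0, 0), (1, 1), (3, 2), (4, 2), (2, 1)]

start, end = subtree_range(tree, 1)  # subtree rooted at node 1
pruned = tree[start:end]
print(pruned)     # [(1, 1), (3, 2), (4, 2)]

# A "prune" mutation slices the run out; a "graft" reinserts it under a new
# parent with depths shifted -- both touch only the subtree, not the whole tree.
remaining = tree[:start] + tree[end:]
print(remaining)  # [(0, 0), (2, 1)]
```

Because the cost of a prune/graft is proportional to the size of the moved subtree rather than to n, mutations over random subtrees run in sublinear time on average, which is the effect the NDDR bound formalizes.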
Abstract:
Background: This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA, associated with Mycobacterium tuberculosis. This problem arises in rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design-related applications, especially since decision trees are simple to understand, interpret, and validate. Several decision-tree induction algorithms are available for general use, but each has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision-tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide a biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree induction algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate with a flexible receptor.
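To give a flavor of tree induction on this kind of regression target, here is a minimal decision stump: an exhaustive search for the single threshold split that minimizes the squared error of the predicted binding energy. The feature values and energies are invented; the paper's hyper-heuristic designs complete induction algorithms, not single stumps.

```python
def sse(values):
    """Sum of squared errors around the mean (0 for an empty partition)."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_stump(xs, ys):
    """Exhaustive search for the threshold minimizing left/right squared error."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs))[:-1]:  # splitting above the maximum is pointless
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        err = sse(left) + sse(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Hypothetical docking feature (e.g. a contact distance) vs. binding energy.
distances = [1.0, 1.2, 1.5, 4.0, 4.3, 4.8]
energies = [-9.1, -9.0, -9.2, -5.1, -5.0, -5.2]

threshold, error = best_stump(distances, energies)
print(threshold)  # 1.5: cleanly separates the tight binders from the loose ones
```

Algorithms like C4.5 grow a full tree by applying such a split search recursively (with a different split criterion for classification); the paper's approach goes a level higher and searches over the design choices of the induction algorithm itself.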