978 results for reasoning about loops
Abstract:
This dissertation directs its interest toward Swedish education policy and citizenship. In times of globalization, and with Sweden's established membership in the European Union, national policymaking around education's citizen-forming function has come under increased pressure. The study examines how this is handled in Swedish education policy during the 1990s, a period marked by significant changes in the Swedish education sector. More precisely, the question is what direction is staked out in Swedish education policy at this time for the school's legally enshrined mission to foster democratic citizens. By attending to the goals, visions, and motives formulated in educational policy texts of the 1990s, the study clarifies the understandings of citizenship that characterize Swedish education policy during this period. A broader historical analysis is also carried out, from which emerge the historical aims of the school's citizenship education that precede those of the 1990s. The purpose of the study is critical: the understandings are examined with respect to what they include and what they exclude, what possible consequences they may have for different individuals and groups in society, and whether there are openings for conceivable alternatives. The study identifies two historical shifts in the content and meaning of citizenship in domestic policymaking. The first shift takes place in the early 1990s, when an established society-building citizen role is broken up in favor of other, more market-oriented citizen roles. During the later part of the 1990s, when the market orientation is strengthened in a neoliberal direction, a second shift occurs: a historically accepted idea of community, the nation, is broken up as the basis for civic community. This idea is replaced by a globalization-oriented one, which turns out to have different mechanisms of inclusion and exclusion for different individuals and social groups. On the basis of these findings, the dissertation concludes by sketching the contours of an alternative way of thinking about citizenship and community. This alternative takes shape in the ambition to reason, to a greater extent than is visible in Swedish education policy, about the possibilities of a citizenship beyond predetermined grounds for demarcating a "we". Language as a force for political and social change is given central importance in the dissertation. In analyzing how the texts speak of the school's citizen-fostering role, the study examines ongoing political struggles over the privilege to speak for, and to interpret, the school's fostering goals. The outcomes of these struggles are illuminated through three areas of citizenship education identified as central: a political one, a cultural one, and one oriented toward the economy and working life. Through these, prevailing citizenship discourses have taken form, out of which the education policy understandings of citizenship are portrayed and discussed.
Abstract:
In today's big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly used to derive value out of this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership, and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying. This provides data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
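The contrast between the vertex-centric and neighborhood-centric models is easiest to see in code. The following is a minimal single-machine sketch in Python (using networkx), not NSCALE's actual API, which the abstract does not specify: the user program receives an entire extracted subgraph, here a 1-hop ego network, rather than the state of a single vertex.

```python
# Minimal single-machine sketch of the neighborhood-centric model.
# Hypothetical illustration only; NSCALE is a distributed system and
# its real programming interface is not shown in the abstract.
import networkx as nx

def local_clustering(subgraph, ego):
    """A user program written against a whole neighborhood: the
    fraction of the ego's neighbor pairs that are connected."""
    neighbors = list(subgraph.neighbors(ego))
    if len(neighbors) < 2:
        return 0.0
    links = sum(1 for i, u in enumerate(neighbors)
                for v in neighbors[i + 1:]
                if subgraph.has_edge(u, v))
    return 2.0 * links / (len(neighbors) * (len(neighbors) - 1))

G = nx.karate_club_graph()
# "Declaratively specify the subgraphs of interest": here, the 1-hop
# ego network of every vertex; then run the user program on each.
results = {v: local_clustering(nx.ego_graph(G, v, radius=1), v)
           for v in G.nodes}
print(results[0])
```

A vertex-centric framework would need multiple message-passing rounds to assemble the same neighborhood state at each vertex, which is exactly the overhead the neighborhood-centric model is designed to avoid.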
Abstract:
In the past decade, systems that extract information from millions of Internet documents have become commonplace. Knowledge graphs -- structured knowledge bases that describe entities, their attributes, and the relationships between them -- are a powerful tool for understanding and organizing this vast amount of information. However, a significant obstacle to knowledge graph construction is the unreliability of the extracted information, due to noise and ambiguity in the underlying data, errors made by the extraction system, and the complexity of reasoning about the dependencies between these noisy extractions. My dissertation addresses these challenges by exploiting the interdependencies between facts to improve the quality of the knowledge graph in a scalable framework. I introduce a new approach called knowledge graph identification (KGI), which resolves the entities, attributes, and relationships in the knowledge graph by incorporating uncertain extractions from multiple sources, entity co-references, and ontological constraints. I define a probability distribution over possible knowledge graphs and infer the most probable knowledge graph using a combination of probabilistic and logical reasoning. Such probabilistic models are frequently dismissed due to scalability concerns, but my implementation of KGI maintains tractable performance on large problems through the use of hinge-loss Markov random fields, which have a convex inference objective. This allows the inference of large knowledge graphs with 4M facts and 20M ground constraints in 2 hours. To further scale the solution, I develop a distributed approach to the KGI problem which runs in parallel across multiple machines, reducing inference time by 90%. Finally, I extend my model to the streaming setting, where a knowledge graph is continuously updated by incorporating newly extracted facts. I devise a general approach for approximately updating inference in convex probabilistic models, and quantify the approximation error by defining and bounding inference regret for online models. Together, my work retains the attractive features of probabilistic models while providing the scalability necessary for large-scale knowledge graph construction. These models have been applied to a number of real-world knowledge graph projects, including the NELL project at Carnegie Mellon and the Google Knowledge Graph.
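For concreteness, hinge-loss Markov random fields define a density over continuous truth-value assignments $\mathbf{y}$ given observed extractions $\mathbf{x}$; in the standard published form,

$$P(\mathbf{y} \mid \mathbf{x}) \propto \exp\Big(-\sum_{r=1}^{m} \lambda_r\,\phi_r(\mathbf{y},\mathbf{x})\Big), \qquad \phi_r(\mathbf{y},\mathbf{x}) = \big(\max\{\ell_r(\mathbf{y},\mathbf{x}),\,0\}\big)^{p_r},$$

where each $\ell_r$ is a linear function obtained by relaxing a logical rule (for example, an ontological constraint), $\lambda_r \ge 0$ is the rule's weight, and $p_r \in \{1, 2\}$. Because every potential $\phi_r$ is convex in $\mathbf{y}$, finding the most probable knowledge graph is a convex optimization problem, which is the source of the scalability claimed above.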
Abstract:
Coinduction is a proof rule, the dual of induction. It allows reasoning about non-well-founded structures such as lazy lists or streams, and is of particular use for reasoning about equivalences. A central difficulty in the automation of coinductive proof is the choice of a relation (called a bisimulation). We present an automation of coinductive theorem proving based on the idea of proof planning. Proof planning constructs the higher-level steps in a proof, using knowledge of the general structure of a family of proofs and exploiting this knowledge to control the proof search. Part of proof planning involves the use of failure information to modify the plan through a proof critic, which exploits the information gained from the failed proof attempt. Our approach to the problem was to develop a strategy that makes an initial simple guess at a bisimulation and then uses generalisation techniques, motivated by a critic, to refine this guess, so that a larger class of coinductive problems can be automatically verified. The implementation of this strategy has focused on the use of coinduction to prove the equivalence of programs in a small lazy functional language similar to Haskell. We have developed a proof plan for coinduction and a critic associated with this proof plan. These have been implemented in CoClam, an extended version of Clam, with encouraging results: the planner has been successfully tested on a number of theorems.
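As a concrete instance of the rule, for infinite streams a relation $R$ is a bisimulation when related streams agree on their heads and $R$ relates their tails:

$$\forall (s,t) \in R.\;\; \mathsf{head}(s) = \mathsf{head}(t) \;\wedge\; (\mathsf{tail}(s), \mathsf{tail}(t)) \in R,$$

and the coinduction rule concludes $s = t$ from $(s,t) \in R$ for any such $R$. The automation difficulty described above is precisely the choice of $R$: a relation that is too small fails the tail condition, and the critic's job is to generalise the failed candidate.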
Abstract:
Coinduction is a method of growing importance in reasoning about functional languages, due to the increasing prominence of lazy data structures. Through the use of bisimulations, and proofs that bisimilarity is a congruence in various domains, it can be used to prove the equivalence of two processes. A coinductive proof requires a relation to be chosen which can be proved to be a bisimulation. We use proof planning to develop a heuristic method which automatically constructs a candidate relation. If this relation does not allow the proof to go through, a proof critic analyses the reasons why it failed and modifies the relation accordingly. Several proof tools have been developed to aid coinductive proofs, but all require user interaction. Crucially, they require the user to supply an appropriate relation, which the system can then prove to be a bisimulation.
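A standard worked example (our illustration, not taken from the abstract) shows why the candidate relation must be refined. To prove $\mathsf{map}\,f\,(\mathsf{iterate}\,f\,x) = \mathsf{iterate}\,f\,(f\,x)$ for a fixed $x$, the naive candidate is the singleton relation containing just that pair. Both sides have head $f\,x$, but their tails form the pair $(\mathsf{map}\,f\,(\mathsf{iterate}\,f\,(f\,x)),\; \mathsf{iterate}\,f\,(f\,(f\,x)))$, which lies outside the singleton, so the singleton is not a bisimulation. Generalising the fixed $x$ to an arbitrary $y$,

$$R = \{(\mathsf{map}\,f\,(\mathsf{iterate}\,f\,y),\; \mathsf{iterate}\,f\,(f\,y)) \mid y \text{ arbitrary}\},$$

gives a relation closed under tails (take $y := f\,y$), hence a bisimulation, and the equation follows by coinduction. This generalisation step is the kind of modification a proof critic can propose from the failed attempt.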
Abstract:
Internship report for obtaining the degree of Master in Pre-school Education and Teaching in the 1st Cycle of Basic Education
Abstract:
This study looks at how upper secondary school teachers gender-stereotype aspects of students' mathematical reasoning. Girls were attributed symbols including insecurity, the use of standard methods, and imitative reasoning. Boys were assigned symbols such as multiple strategies (especially on the calculator), guessing, and chance-taking.
Abstract:
The subject of the text is the issue of the "political", which is defined as the nature and level of the final judgment and ultimate reasoning. The text attempts to distinguish this kind of the "political" within political science. It focuses on: (1) the scientist as an agent of final judgment and reasoning, (2) the subject of study of political science, and (3) "theoretical strategies" in the science of politics. The latter problem is discussed mainly using the example of Polish political science. Among the topics discussed are: (1) "the dilemma of scale", (2) limited operational capacity (methodological and theoretical), (3) aesthetic imagery of political life, and (4) structural ignorance in the field of ontology, epistemology, and methodology.
Abstract:
Second-order phase-locked loops (PLLs) are devices that are able to provide synchronization between the nodes in a network, even under severe quality restrictions on signal propagation. Consequently, they are widely used in telecommunication and control. Conventional master-slave (M-S) clock-distribution systems are being replaced by mutually connected (MC) ones due to their good potential for use in new types of application such as wireless sensor networks, distributed computation, and communication systems. Here, by using analytical reasoning, a nonlinear algebraic system of equations is proposed to establish the existence conditions for the synchronous state in an MC PLL network. Numerical experiments confirm the analytical results and provide ideas about how the network parameters affect the reachability of the synchronous state. The phase-difference oscillation amplitudes are related to the node parameters, helping in the design of PLL neural networks. Furthermore, estimation of the acquisition time as a function of the node parameters allows the performance evaluation of time-distribution systems and neural networks based on phase-locked techniques.
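For orientation, a common textbook model of this setting (a second-order PLL with a one-pole loop filter, ignoring propagation delays; the paper's actual equations are not given in the abstract) writes the phase $\theta_i$ of node $i$ as

$$\ddot{\theta}_i(t) + \mu\,\dot{\theta}_i(t) = \frac{\mu G}{|N_i|} \sum_{j \in N_i} \sin\big(\theta_j(t) - \theta_i(t)\big),$$

with filter cutoff $\mu$, loop gain $G$, and neighbor set $N_i$. In the synchronous state all phase differences $\theta_j - \theta_i$ are constant, so their derivatives vanish and the differential equations reduce to a nonlinear algebraic system in the phase differences, whose solvability yields existence conditions for synchronization.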
Abstract:
This paper is research oriented and aims to contribute empirical evidence about how students develop their reasoning and how they arrive at a proof construction in a school context. Its main theme is epistemology. It describes the way in which four 9th-grade students explored a task related to the discovery of axes of symmetry in various geometric figures. The proof constructed by the students had essentially an explanatory function and concerned the axes of symmetry of regular polygons. The teacher's role in negotiating the meaning of the proof, and the need for it, is described through illustrative episodes. The paper presents part of a study whose purpose is to analyse the nature of mathematical proof in the classroom, its role, and the nature of the relationship between the construction of a proof and social interactions. Assuming a social perspective, attention is focussed on the social construction of knowledge and on the structuring resources that shape mathematical experience. The study's methodology is interpretative in nature. One outcome of the study discussed here is that students first develop a practical understanding, with no awareness of the reasons grounding mathematical statements, and only afterwards a theoretical one that leads them to the elaboration of a proof.
Abstract:
In order to study the impact of premature birth and low income on mother–infant interaction, four Portuguese samples were gathered: full-term, middle-class (n=99); premature, middle-class (n=63); full-term, low-income (n=22); and premature, low-income (n=21). Infants were filmed in a free-play situation with their mothers, and the interactions were scored using the CARE Index. By means of multinomial regression analysis, socioeconomic status (SES) was found to be the best predictor of maternal sensitivity and infant cooperative behavior within a set of medical and social factors. Contrary to the expectations of the cumulative risk perspective, two risk factors (premature birth together with low SES) were as negative for mother–infant interaction as low SES alone. In this study, as previous studies have shown, maternal sensitivity and infant cooperative behavior were highly correlated, as were maternal control and infant compliance. Our results further indicate that, when maternal lack of responsiveness is high, the infant displays passive behavior, whereas when maternal lack of responsiveness is medium, the infant displays difficult behavior. Indeed, our findings suggest that, in these cases, the link between types of maternal and infant interactive behavior depends more on the degree of maternal lack of responsiveness than on birth status or SES. The results are discussed within a developmental and evolutionary framework.
Abstract:
In this paper we present a Self-Optimizing module, inspired by Autonomic Computing, that gives a scheduling system the ability to automatically select a meta-heuristic to use in the optimization process, as well as its parameterization. Case-based Reasoning is used so that the system can learn from acquired experience in the resolution of similar problems. The results obtained confirm the benefit of its use.
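A minimal sketch of the Case-based Reasoning cycle described above, in Python, with entirely hypothetical problem features, meta-heuristics, and similarity measure (the paper's actual case representation is not given in the abstract):

```python
import math

# A case pairs the features of a solved scheduling problem with the
# meta-heuristic and parameterization that worked well on it.
# All feature names and parameter values below are hypothetical.
case_base = [
    # (features: n_jobs, n_machines, due_date_tightness), choice
    ((120, 10, 0.8), ("tabu_search",   {"tenure": 15})),
    ((500, 20, 0.3), ("genetic_alg",   {"pop": 200, "mut": 0.05})),
    ((60,   5, 0.9), ("simulated_ann", {"t0": 100.0, "alpha": 0.95})),
]

def distance(a, b):
    """Euclidean distance over problem features (a real system
    would normalise the features first)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_and_reuse(features):
    """Retrieve the most similar past case and reuse its
    meta-heuristic and parameterization for the new problem."""
    best = min(case_base, key=lambda case: distance(case[0], features))
    return best[1]

def retain(features, choice):
    """Retain: after the optimization run, store the new experience
    so the system learns from it."""
    case_base.append((features, choice))

method, params = retrieve_and_reuse((110, 12, 0.7))
print(method, params)   # nearest case: tabu_search {'tenure': 15}
retain((110, 12, 0.7), (method, params))
```

A real system would also revise the retrieved parameterization and measure the resulting schedule quality before retaining the new case.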
Abstract:
In this paper we explore the importance of analyzing the exercises that textbooks offer for the study of Mathematics, because errors in them that are difficult to identify can interfere with children's capabilities. We work with exercises related to the theme of temporal notions, based on a survey of textbooks for the 1st and 2nd grades (K-1 and K-2). Our concern is to draw attention to the importance of reflecting on the content of these books, in order to promote a teaching-learning process tailored to the needs of children. The activities present in the textbooks should allow children to develop their logical-mathematical reasoning, so that they are later able to understand and apply Mathematics. To this end, we present some reflections on the exercises in the textbooks and give our opinion about what is correct and incorrect. Some activities are also suggested, among which are those implemented with 2nd-grade (K-2) children during the experiments that support our work.
Abstract:
This contribution presents novel concepts for the analysis of pressure–volume curves, which offer information about the time-domain dynamics of the respiratory system. The aim is to verify whether a mapping of respiratory diseases can be obtained, allowing analysis of (dis)similarities between the dynamical breathing patterns of children. The groups investigated here are children diagnosed as healthy, asthmatic, or with cystic fibrosis. The pressure–volume curves were measured by means of the noninvasive forced oscillation technique during breathing at rest. The geometrical fractal dimension is extracted from the pressure–volume curves, and a power-law behavior is observed in the data. The power-law model coefficients are identified from the three sets, and the results show that significant differences are present between the groups. This conclusion supports the idea that the respiratory system changes with disease in terms of airway geometry and tissue parameters, leading in turn to variations in the fractal dimension of the respiratory tree and its dynamics.
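As a sketch of the quantity involved (the abstract does not give the exact estimator used): a geometrical fractal dimension $D$ can be read off a power law between a measuring scale $\varepsilon$ and the number $N(\varepsilon)$ of boxes of that scale needed to cover the curve,

$$N(\varepsilon) \propto \varepsilon^{-D} \quad\Longrightarrow\quad D = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)},$$

so in practice $D$ is identified as the slope of $\log N(\varepsilon)$ against $\log(1/\varepsilon)$ over the measured range, and group differences then show up as differences in the fitted slope.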