987 results for mathematics computing
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. 
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. 
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
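The object-oriented framework described above can be illustrated with a minimal sketch. The class and method names here are assumptions for illustration, not the project's actual RMI; the earliest-deadline-first policy stands in for one member of the family of scheduling models:

```python
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Hypothetical sketch of an abstract Resource Management Interface:
    the framework fixes these interfaces, while concrete subclasses supply
    a scheduling model suited to a specific environment."""
    @abstractmethod
    def register_resource(self, resource_id, capacity): ...
    @abstractmethod
    def register_task(self, task_id, demand, deadline): ...
    @abstractmethod
    def schedule(self): ...

class EDFManager(ResourceManager):
    """One concretization: earliest-deadline-first ordering, as an example
    of choosing an algorithm to meet real-time constraints."""
    def __init__(self):
        self.resources, self.tasks = {}, {}
    def register_resource(self, resource_id, capacity):
        self.resources[resource_id] = capacity
    def register_task(self, task_id, demand, deadline):
        self.tasks[task_id] = (demand, deadline)
    def schedule(self):
        # order registered tasks by deadline (earliest first)
        return sorted(self.tasks, key=lambda t: self.tasks[t][1])
```

A different environment would swap in another `ResourceManager` subclass (priority-driven, probabilistic, fault-tolerant) behind the same interface.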
Abstract:
We discuss the design principles of TCP within the context of heterogeneous wired/wireless networks and mobile networking. We identify three shortcomings in TCP's behavior: (i) the protocol's error detection mechanism, which does not distinguish different types of errors and thus does not suffice for heterogeneous wired/wireless environments, (ii) the error recovery, which is not responsive to the distinctive characteristics of wireless networks such as transient or burst errors due to handoffs and fading channels, and (iii) the protocol strategy, which does not control the tradeoff between performance measures such as goodput and energy consumption, and often entails a wasteful effort of retransmission and energy expenditure. We discuss a solution-framework based on selected research proposals and the associated evaluation criteria for the suggested modifications. We highlight an important angle that has so far not attracted the attention it requires: the need for new performance metrics, appropriate for evaluating the impact of protocol strategies on battery-powered devices.
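To make the metrics point concrete, here is a toy sketch of the kind of battery-aware measure the abstract calls for. Both function names and the energy model (a fixed cost per transmitted byte) are illustrative assumptions, not metrics proposed in the paper:

```python
def goodput(useful_bytes, elapsed_s):
    """Classical metric: useful (non-duplicate) bytes delivered per second."""
    return useful_bytes / elapsed_s

def energy_per_useful_byte(total_bytes_sent, useful_bytes, energy_per_byte_j):
    """Battery-aware metric sketch: retransmissions inflate the bytes
    actually sent without adding useful bytes, so wasted transmission
    energy shows up as a higher joule cost per delivered byte."""
    return total_bytes_sent * energy_per_byte_j / useful_bytes
```

Under this kind of metric, two strategies with identical goodput can differ sharply once retransmission effort on a battery-powered device is priced in.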
Abstract:
A difficulty in lung image registration is accounting for changes in the size of the lungs due to inspiration. We propose two methods for computing a uniform scale parameter for use in lung image registration that account for size change. A scaled rigid-body transformation allows analysis of corresponding lung CT scans taken at different times and can serve as a good low-order transformation to initialize non-rigid registration approaches. Two different features are used to compute the scale parameter. The first method uses lung surfaces. The second uses lung volumes. Both approaches are computationally inexpensive and improve the alignment of lung images over rigid registration. The two methods produce different scale parameters and may highlight different functional information about the lungs.
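The volume-based method lends itself to a one-line worked example: since volume scales as the cube of a linear dimension, a uniform scale factor is the cube root of the volume ratio. The function name and the composition with a rigid transform are illustrative assumptions, not the paper's implementation:

```python
def uniform_scale_from_volumes(vol_source, vol_target):
    """Isotropic scale factor from segmented lung volumes (same units,
    e.g. mm^3): volume grows as the cube of linear size, so the linear
    scale is the cube root of the volume ratio."""
    return (vol_target / vol_source) ** (1.0 / 3.0)

def apply_scaled_rigid(point, scale, rotation, translation):
    """Scaled rigid-body transform of a 3-D point: x' = s * R x + t,
    with R given as a 3x3 nested list and t as a 3-vector."""
    rotated = [sum(rotation[i][j] * point[j] for j in range(3)) for i in range(3)]
    return [scale * rotated[i] + translation[i] for i in range(3)]
```

For instance, a lung that doubles in each linear dimension on inspiration has eight times the volume, and the cube root recovers the factor of two.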
Abstract:
Many people suffer from conditions that lead to deterioration of motor control and make access to the computer using traditional input devices difficult. In particular, they may lose control of hand movement to the extent that the standard mouse cannot be used as a pointing device. Most current alternatives use markers or specialized hardware to track and translate a user's movement to pointer movement. These approaches may be perceived as intrusive, for example, wearable devices. Camera-based assistive systems that use visual tracking of features on the user's body often require cumbersome manual adjustment. This paper introduces an enhanced computer-vision-based strategy where features, for example on a user's face, viewed through an inexpensive USB camera, are tracked and translated into pointer movement. The main contributions of this paper are (1) enhancing a video-based interface with a mechanism for mapping feature movement to pointer movement, which allows users to navigate to all areas of the screen even with very limited physical movement, and (2) providing a customizable, hierarchical navigation framework for human-computer interaction (HCI). This framework provides effective use of the vision-based interface system for accessing multiple applications in an autonomous setting. Experiments with several users show the effectiveness of the mapping strategy and its usage within the application framework as a practical tool for desktop users with disabilities.
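One simple way such a feature-to-pointer mapping can amplify limited movement is a clamped gain: small tracked displacements in the camera frame are multiplied up so the full screen stays reachable. The gain value, screen size, and function name below are illustrative assumptions, not the paper's mapping:

```python
def feature_to_pointer(dx, dy, gain=8.0, screen=(1920, 1080), pointer=(960, 540)):
    """Map a tracked-feature displacement (dx, dy) in camera pixels to a new
    pointer position: amplify by a fixed gain, then clamp to screen bounds
    so small head or face movements can still reach every screen region."""
    x = min(max(pointer[0] + gain * dx, 0), screen[0] - 1)
    y = min(max(pointer[1] + gain * dy, 0), screen[1] - 1)
    return (x, y)
```

A practical system would add smoothing and a per-user gain calibration, which is where the cumbersome manual adjustment the paper aims to avoid usually creeps in.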
Abstract:
Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system the user interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanations fail to provide sufficient information. We define new forms of explanations that aim to be more informative. Although explanation generation is a very hard task, in the applications we consider we must provide a satisfactory level of interactivity and therefore cannot afford long computation times. We introduce the concept of representative sets of relaxations, a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them, and present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow. We present algorithms to compute such relaxations in times compatible with interactivity, achieving this by making use of different types of compiled representations interchangeably. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach in compilation, and allows explanation-related queries to be addressed in an efficient way. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
Abstract:
This research study investigates the image of mathematics held by 5th-year post-primary students in Ireland. For this study, “image of mathematics” is conceptualized as a mental representation or view of mathematics, presumably constructed as a result of past experiences, mediated through school, parents, peers or society. It is also understood to include attitudes, beliefs, emotions, self-concept and motivation in relation to mathematics. This study explores the image of mathematics held by a sample of 356 5th-year students studying ordinary level mathematics. Students were aged between 15 and 18 years. In addition, this study examines the factors influencing students' images of mathematics and the possible reasons for students choosing not to study higher level mathematics for the Leaving Certificate. The design for this study is chiefly explorative. A questionnaire survey was created containing both quantitative and qualitative methods to investigate the research interest. The quantitative aspect incorporated eight pre-established scales to examine students' attitudes, beliefs, emotions, self-concept and motivation regarding mathematics. The qualitative element explored students' past experiences of mathematics, their causal attributions for success or failure in mathematics and their influences in mathematics. The quantitative and qualitative data was analysed for all students and also for students grouped by gender, prior achievement, type of post-primary school attending, co-educational status of the post-primary school and the attendance of a Project Maths pilot school. Students' images of mathematics were seen to be strongly indicated by their attitudes (enjoyment and value), beliefs, motivation, self-concept and anxiety, with each of these elements strongly correlated with each other, particularly self-concept and anxiety. 
Students' current images of mathematics were found to be influenced by their past experiences of mathematics, by their mathematics teachers, parents and peers, and by their prior mathematical achievement. Gender differences occur for students in their images of mathematics, with males having more positive images of mathematics than females and this is most noticeable with regards to anxiety about mathematics. Mathematics anxiety was identified as a possible reason for the low number of students continuing with higher level mathematics for the Leaving Certificate. Some students also expressed low mathematical self-concept with regards to higher level mathematics specifically. Students with low prior achievement in mathematics tended to believe that mathematics requires a natural ability which they do not possess. Rote-learning was found to be common among many students in the sample. The most positive image of mathematics held by students was the “problem-solving image”, with resulting implications for the new Project Maths syllabus in post-primary education. Findings from this research study provide important insights into the image of mathematics held by the sample of Irish post-primary students and make an innovative contribution to mathematics education research. In particular, findings contribute to the current national interest in Ireland in post-primary mathematics education, highlighting issues regarding the low uptake of higher level mathematics for the Leaving Certificate and also making a preliminary comparison between students who took part in the piloting of Project Maths and students who were more recently introduced to the new syllabus. This research study also holds implications for mathematics teachers, parents and the mathematics education community in Ireland, with some suggestions made on improving students' images of mathematics.
Abstract:
The technological role of handheld devices is fundamentally changing. Portable computers were traditionally application specific. They were designed and optimised to deliver a specific task. However, it is now commonly acknowledged that future handheld devices need to be multi-functional and need to be capable of executing a range of high-performance applications. This thesis has coined the term pervasive handheld computing systems to refer to this type of mobile device. Portable computers are faced with a number of constraints in trying to meet these objectives. They are physically constrained by their size, their computational power, their memory resources, their power usage, and their networking ability. These constraints challenge pervasive handheld computing systems in achieving their multi-functional and high-performance requirements. This thesis proposes a two-pronged methodology to enable pervasive handheld computing systems to meet their future objectives. The methodology is a fusion of two independent and yet complementary concepts. The first step utilises reconfigurable technology to enhance the physical hardware resources within the environment of a handheld device. This approach recognises that reconfigurable computing has the potential to dynamically increase the system functionality and versatility of a handheld device without major loss in performance. The second step of the methodology incorporates agent-based middleware protocols to support handheld devices in effectively managing and utilising these reconfigurable hardware resources within their environment. The thesis asserts that the combined characteristics of reconfigurable computing and agent technology can meet the objectives of pervasive handheld computing systems.
Abstract:
Phase-locked loops (PLLs) are a crucial component in modern communications systems. Comprising a phase detector, linear filter, and controllable oscillator, they are widely used in radio receivers to retrieve the information content from remote signals. As such, they are capable of signal demodulation, phase and carrier recovery, frequency synthesis, and clock synchronization. Continuous-time PLLs are a mature area of study, and have been covered in the literature since the early classical work by Viterbi [1] in the 1950s. With the rise of computing in recent decades, discrete-time digital PLLs (DPLLs) are a more recent discipline; most of the literature published dates from the 1990s onwards. Gardner [2] is a pioneer in this area. It is our aim in this work to address the difficulties encountered by Gardner [3] in his investigation of the DPLL output phase-jitter where additive noise to the input signal is combined with frequency quantization in the local oscillator. The model we use in our novel analysis of the system is also applicable to another of the cases looked at by Gardner, namely the DPLL with a delay element integrated in the loop. This gives us the opportunity to look at this system in more detail, our analysis providing some unique insights into the variance `dip' seen by Gardner in [3]. We initially provide background on probability theory and stochastic processes. These branches of mathematics are the basis for the study of noisy analogue and digital PLLs. We give an overview of the classical analogue PLL theory as well as the background on both the digital PLL and circle map, referencing the model proposed by Teplinsky et al. [4, 5]. For our novel work, the case of the combined frequency quantization and noisy input from [3] is investigated first numerically, and then analytically as a Markov chain via its Chapman-Kolmogorov equation. 
The resulting delay equation for the steady-state jitter distribution is treated using two separate asymptotic analyses to obtain approximate solutions. It is shown how the variance obtained in each case matches well to the numerical results. Other properties of the output jitter, such as the mean, are also investigated. In this way, we arrive at a more complete understanding of the interaction between quantization and input noise in the first order DPLL than is possible using simulation alone. We also carry out an asymptotic analysis of a particular case of the noisy first-order DPLL with delay, previously investigated by Gardner [3]. We show that a unique feature of the simulation results, namely the variance `dip' seen for certain levels of input noise, is explained by this analysis. Finally, we look at the second-order DPLL with additive noise, using numerical simulations to see the effects of low levels of noise on the limit cycles. We show how these effects are similar to those seen in the noise-free loop with non-zero initial conditions.
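The numerical side of such a study can be sketched in a few lines: a first-order loop whose frequency corrections are snapped to a quantized grid, driven by an input with a small frequency offset and additive phase noise. This is an illustrative Monte Carlo toy, not Gardner's or the thesis's model, and all parameter values are assumptions:

```python
import random

def simulate_dpll_phase_jitter(n_steps=20000, kp=0.1, freq_offset=0.003,
                               noise_std=0.02, levels=64, seed=0):
    """Toy first-order DPLL: the local oscillator frequency is quantized to
    1/levels cycles per sample, the phase-error measurement carries additive
    Gaussian noise, and a proportional correction is applied each sample.
    Returns the steady-state variance of the phase error (output jitter)."""
    rng = random.Random(seed)
    q = 1.0 / levels              # oscillator frequency quantization step
    phi = 0.0                     # phase error in cycles
    samples = []
    for k in range(n_steps):
        noisy_phi = phi + rng.gauss(0.0, noise_std)   # additive input noise
        # proportional correction, snapped to the quantized frequency grid
        correction = q * round(kp * noisy_phi / q)
        phi += freq_offset - correction
        if k > n_steps // 2:      # discard the acquisition transient
            samples.append(phi)
    mean = sum(samples) / len(samples)
    return sum((x - mean) ** 2 for x in samples) / len(samples)
```

Because the correction is quantized, the loop hunts around its equilibrium even at low noise, which is exactly the quantization/noise interaction the analytical Markov-chain treatment above is designed to capture.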
Abstract:
This thesis traces a genealogy of the discourse of mathematics education reform in Ireland at the beginning of the twenty first century at a time when the hegemonic political discourse is that of neoliberalism. It draws on the work of Michel Foucault to identify the network of power relations involved in the development of a single case of curriculum reform – in this case Project Maths. It identifies the construction of an apparatus within the fields of politics, economics and education, the elements of which include institutions like the OECD and the Government, the bureaucracy, expert groups and special interest groups, the media, the school, the State, state assessment and international assessment. Five major themes in educational reform emerge from the analysis: the arrival of neoliberal governance in Ireland; the triumph of human capital theory as the hegemonic educational philosophy here; the dominant role of OECD/PISA and its values in the mathematics education discourse in Ireland; the fetishisation of western scientific knowledge and knowledge as commodity; and the formation of a new kind of subjectivity, namely the subjectivity of the young person as a form of human-capital-to-be. In particular, it provides a critical analysis of the influence of OECD/PISA on the development of mathematics education policy here – especially on Project Maths curriculum, assessment and pedagogy. It unpacks the arguments in favour of curriculum change and lays bare their ideological foundations. This discourse contextualises educational change as occurring within a rapidly changing economic environment where the concept of the State’s economic aspirations and developments in science, technology and communications are reshaping both the focus of business and the demands being put on education. 
Within this discourse, education is to be repurposed and its consequences measured against the paradigm of the Knowledge Economy – usually characterised as the inevitable or necessary future of a carefully defined present.
Abstract:
The authors explore nanoscale sensor processor (nSP) architectures. Their design includes a simple accumulator-based instruction-set architecture, sensors, limited memory, and instruction-fused sensing. Using nSP technology based on optical resonance energy transfer logic helps them decrease the design's size; their smallest design is about the size of the largest-known virus. © 2006 IEEE.
Abstract:
In planning units and lessons every day, teachers face the problem of designing a sequence of activities to promote learning. In particular, they are expected to foster the development of learning goals in their students. Based on the idea of the learning path of a task, we describe a heuristic procedure to enable teachers to characterize a learning goal in terms of its cognitive requirements and to analyze and select tasks based on this characterization. We then present an example of how a group of future teachers used this heuristic in a preservice teacher training course and discuss its contributions and constraints.
Abstract:
We have described the changes and innovations that have taken place in Spain in mathematics education research during the last 25 years, especially highlighting the fast development of the last 10 years. None of these great and striking changes would have taken place had there not been an evolution within Spanish society and, particularly, within its educational system. Thanks to this, we have found the appropriate conditions for research development.
Basic components in the scientific-didactical training of secondary school mathematics teachers
Abstract:
Secondary mathematics teacher training in Spain is currently the subject of a heated debate over its revision. The speed of social, cultural, scientific and economic change has left a hundred-year-old teacher training model well behind. However, academic inertia and professional interests are impeding a genuinely new training of the mathematics teacher as an autonomous mathematical educator. Teachers of Didactics of Mathematics and the Spanish associations of mathematics teachers have recently been discussing the issue. Their conclusions are included here.
Abstract:
In this paper we summarize some reflections on the role that technology can play in the study of semiotic systems of representation, which constitute the core for understanding the processes by which students construct mathematical knowledge. This entry corresponds to the published one-page abstract.