563 results for Iterative methods (Mathematics)
Abstract:
With the rapid and continuing growth of learning support initiatives in mathematics and statistics in many parts of the world, and with the likelihood that this trend will continue, there is a need to ensure that robust and coherent measures are in place to evaluate the effectiveness of these initiatives. The nature of learning support brings challenges for the measurement and analysis of its effects. After briefly reviewing the purpose of, rationale for, and extent of current provision, this article provides a framework for those working in learning support to think about how their efforts can be evaluated. It provides references and specific examples of how workers in this field are collecting, analysing and reporting their findings. The framework is used to structure evaluation in terms of usage of the facilities, resources and services provided, and also in terms of improvements in the performance of the students and staff who engage with them. Very recent developments have started to address the effects of learning support on the development of deeper approaches to learning, the affective domain and the development of communities of practice of both learners and teachers. This article is intended as a stimulus to those who work in mathematics and statistics support to gather even richer, more valuable forms of data. It provides a 'toolkit' for those interested in the evaluation of learning support and closes by referring to an on-line resource being developed to archive the growing body of evidence. © 2011 Taylor & Francis.
Abstract:
We present an iterative hierarchical algorithm for multi-view stereo. The algorithm attempts to utilise as much contextual information as is available to compute highly accurate and robust depth maps. There are three novel aspects to the approach: 1) we incrementally improve the depth fidelity as the algorithm progresses through the image pyramid; 2) we show how to incorporate visual hull information (when available) to constrain depth searches; and 3) we show how to simultaneously enforce the consistency of each depth map by continual comparison with neighbouring depth maps. We show that this approach produces highly accurate depth maps and, since it is essentially a local method, is both extremely fast and simple to implement.
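To illustrate the coarse-to-fine strategy that such hierarchical schemes share, here is a minimal sketch that refines a depth estimate level by level through a pyramid, narrowing the local search as resolution increases. The cost function, pyramid size and search radii are placeholder assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def photometric_cost(depth, level):
    # Placeholder cost: a real system would compare reprojections across
    # neighbouring views; here a synthetic quadratic with its minimum at 5.0.
    return (depth - 5.0) ** 2

def refine_level(depth, level, search_radius, steps=11):
    # Local search around the current per-pixel estimate at this level.
    best = depth.copy()
    best_cost = photometric_cost(depth, level)
    for d in np.linspace(-search_radius, search_radius, steps):
        cost = photometric_cost(depth + d, level)
        improved = cost < best_cost
        best[improved] = depth[improved] + d
        best_cost = np.minimum(best_cost, cost)
    return best

def hierarchical_depth(shape=(8, 8), levels=4, init=1.0):
    # Start at the coarsest level with a wide search, then upsample the
    # estimate and halve the search radius at each finer level.
    depth = np.full((shape[0] >> (levels - 1), shape[1] >> (levels - 1)), init)
    for level in reversed(range(levels)):
        radius = 4.0 / 2 ** (levels - 1 - level)
        depth = refine_level(depth, level, radius)
        if level > 0:
            depth = np.kron(depth, np.ones((2, 2)))  # upsample 2x
    return depth

print(hierarchical_depth().mean())  # approaches the cost minimum at 5.0
```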
Abstract:
Under pressure from both the ever-increasing level of market competition and the global financial crisis, clients in the consumer electronics (CE) industry are keen to understand how to choose the most appropriate procurement method and hence improve their competitiveness. Four rounds of a Delphi questionnaire survey were conducted with 12 experts in order to identify the most appropriate procurement method in the Hong Kong CE industry. Five key selection criteria in the CE industry are highlighted: product quality, capability, price competition, flexibility and speed. The study also revealed that product quality was the most important criterion for “First type used commercially” and “Major functional improvements” projects. For “Minor functional improvements” projects, price competition was the most crucial factor to be considered during procurement method selection. These research findings provide owners with useful insights for selecting procurement strategies.
Abstract:
Research Interests: Are parents complying with the legislation? Is this the same for urban, regional and rural parents? Indigenous parents? What difficulties do parents experience in complying? Do parents understand why the legislation was put in place? Have there been negative consequences for other organisations or sectors of the community?
Abstract:
The ability to decode graphics is an increasingly important component of mathematics assessment and curricula. This study examined 50 students aged 9 to 10 years (23 male, 27 female) as they solved items from six distinct graphical languages (e.g., maps) that are commonly used to convey mathematical information. The results of the study revealed: 1) factors which contribute to success or hinder performance on tasks with various graphical representations; and 2) how the literacy and graphical demands of tasks influence the mathematical sense making of students. The outcomes of this study highlight the changing nature of assessment in school mathematics and identify the function and influence of graphics in the design of assessment tasks.
Abstract:
Recent studies have started to explore context-awareness as a driver in the design of adaptable business processes. While the challenge of identifying and considering contextual drivers in the environment of a business process is well understood, typical methods used in business process modeling do not yet consider this additional contextual information in their process designs. In this chapter, we describe our research towards innovative and advanced process modeling methods that include mechanisms to incorporate relevant contextual drivers, and their impacts on business processes, in process design models. We report on our ongoing work with an Australian insurance provider and describe the design science approach we employed to develop these innovative and useful artifacts as part of a context-aware method framework. We discuss the utility of these artifacts in an application to the claims handling process at the case organization.
Abstract:
This paper reviews the current state of the application of infrared methods, particularly mid-infrared (mid-IR) and near-infrared (NIR), for evaluating the structural and functional integrity of articular cartilage. While a considerable amount of research has been conducted on tissue characterization using mid-IR, it is almost certain that full-thickness cartilage assessment is not feasible with this method. In contrast, the considerably greater penetration capacity of NIR suggests that it is a suitable candidate for full-thickness cartilage evaluation. Nevertheless, significant research is still required to improve the specificity and clinical applicability of the method before it can be used to distinguish between functional and dysfunctional cartilage.
Abstract:
Background: Integrating 3D virtual world technologies into educational subjects continues to draw the attention of educators and researchers alike. The focus of this study is the use of a virtual world, Second Life, in higher education teaching. In particular, it explores the potential of using a virtual world experience as a learning component situated within a curriculum delivered predominantly through face-to-face teaching methods. Purpose: This paper reports on a research study into the development of a virtual world learning experience designed for marketing students taking a Digital Promotions course. The experience was a field trip into Second Life to allow students to investigate how business branding practices were used for product promotion in this virtual world environment. The paper discusses the issues involved in developing and refining the virtual course component over four semesters. Methods: The study used a pedagogical action research approach, with iterative cycles of development, intervention and evaluation over four semesters. The data analysed were quantitative and qualitative student feedback collected after each field trip, as well as lecturer reflections on each cycle. Sample: Small-scale convenience samples of second- and third-year students, studying marketing within a Bachelor of Business degree and taking the Digital Promotions subject at a metropolitan university in Queensland, Australia, participated in the study. The samples included students who had and had not experienced the field trip. The number of students taking part in the field trip ranged from 22 to 48 across the four semesters. Findings and Implications: The findings from the four iterations of the action research plan helped identify key considerations for incorporating technologies into learning environments. Feedback and reflections from the students and lecturer suggested that an innovative learning opportunity had been developed. However, the pedagogical potential was limited, in part, by technological difficulties and by student perceptions of relevance.
Abstract:
Design for Manufacturing (DFM) is an integral methodology in product development, applied from the concept development phase onwards, with the aim of improving manufacturing productivity while maintaining product quality. While Design for Assembly (DFA) focuses on eliminating parts or combining them with other components (Boothroyd, Dewhurst and Knight, 2002), which in most cases amounts to performing a function or manufacturing operation in a simpler way, DFM follows a more holistic approach. In DFM, the considerable background work required in the conceptual phase is compensated for by a shortening of later development phases. Current DFM projects normally apply an iterative step-by-step approach and eventually transfer to the development team. Although DFM has been a well-established methodology for about 30 years, a Fraunhofer IAO study from 2009 found that it was still one of the key challenges of the German manufacturing industry. A new, knowledge-based approach to DFM, eliminating steps of DFM, was introduced in Paul and Al-Dirini (2009). The concept focuses on a concurrent engineering process between the manufacturing engineering and product development systems, whereas current product realization cycles depend on a rigorous back-and-forth examine-and-correct approach to ensure the compatibility of any proposed design with the DFM rules and guidelines adopted by the company. The key to achieving reductions is to incorporate DFM considerations into the early stages of the design process. A case study of DFM application in an automotive powertrain engineering environment is presented. It is argued that a DFM database needs to be interfaced with the CAD/CAM software, which will constrain designers to the DFM criteria; a notable reduction of development cycles can thereby be achieved. The case study follows the hypothesis that current DFM methods do not improve product design in the manner claimed by the DFM method. The critical case was to identify DFA/DFM recommendations or program actions that appear repeatedly in different sources. Repetitive DFM measures are identified and analyzed, and it is shown how a modified DFM process can mitigate a non-fully integrated DFM approach.
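As a concrete illustration of interfacing a DFM rule database to design tools, the sketch below checks a proposed design's parameters against stored rules and reports violations. The rule names, limits and parameter keys are hypothetical examples, not rules from the case study.

```python
from dataclasses import dataclass

@dataclass
class DFMRule:
    name: str
    parameter: str
    min_value: float
    max_value: float

# Illustrative rules only; a real DFM database would hold company guidelines.
RULES = [
    DFMRule("minimum wall thickness (die casting)", "wall_thickness_mm", 2.0, 12.0),
    DFMRule("draft angle", "draft_angle_deg", 1.0, 5.0),
]

def check_design(parameters):
    """Return the names of DFM rules violated by a proposed design."""
    violations = []
    for rule in RULES:
        value = parameters.get(rule.parameter)
        if value is None or not rule.min_value <= value <= rule.max_value:
            violations.append(rule.name)
    return violations

# A design that violates the wall-thickness rule:
print(check_design({"wall_thickness_mm": 1.2, "draft_angle_deg": 2.0}))
```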
Abstract:
Purpose: To compare the accuracy of different methods for calculating human lens power when lens thickness is not available. Methods: Lens power was calculated by four methods. Three methods were used with previously published biometry and refraction data of 184 emmetropic and myopic eyes of 184 subjects (age range [18, 63] years, spherical equivalent range [–12.38, +0.75] D). These three methods consist of the Bennett method, which uses lens thickness, and our modification of the Stenström method and the Bennett-Rabbetts method, both of which do not require knowledge of lens thickness. These methods include c constants, which represent distances from lens surfaces to principal planes. Lens powers calculated with these methods were compared with those calculated using phakometry data available for a subgroup of 66 emmetropic eyes (66 subjects). Results: Lens powers obtained from the Bennett method corresponded well with those obtained by phakometry for emmetropic eyes, although individual differences up to 3.5 D occurred. Lens powers obtained from the modified Stenström and Bennett-Rabbetts methods deviated significantly from those obtained with either the Bennett method or phakometry. Customizing the c constants improved this agreement, but applying these constants to the entire group gave mean lens power differences of 0.71 ± 0.56 D compared with the Bennett method. By further optimizing the c constants, the agreement with the Bennett method was within ±1 D for 95% of the eyes. Conclusion: With appropriate constants, the modified Stenström and Bennett-Rabbetts methods provide a good approximation of the Bennett lens power in emmetropic and myopic eyes.
Abstract:
During the course of several natural disasters in recent years, Twitter has been found to play an important role as an additional medium for many-to-many crisis communication. Emergency services are successfully using Twitter to inform the public about current developments, and are increasingly also attempting to source first-hand situational information from Twitter feeds (such as relevant hashtags). However, the further study of the uses of Twitter during natural disasters relies on the development of flexible and reliable research infrastructure for tracking and analysing Twitter feeds at scale and in close to real time. This article outlines two approaches to the development of such infrastructure: one which builds on the readily available open source platform yourTwapperkeeper to provide a low-cost, simple and basic solution; and one which establishes a more powerful and flexible framework by drawing on highly scalable, state-of-the-art technology.
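As a minimal illustration of the kind of tweet-tracking infrastructure discussed, the sketch below filters a stream of newline-delimited JSON tweet records by hashtag. The field names follow Twitter's classic JSON payloads, but the input source and matching rules here are simplifying assumptions, not code from yourTwapperkeeper or the authors' framework.

```python
import json

def track_hashtags(lines, hashtags):
    """Yield tweet records whose text mentions any tracked hashtag."""
    wanted = {tag.lower().lstrip("#") for tag in hashtags}
    for line in lines:
        try:
            tweet = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed records rather than halting the tracker
        for word in tweet.get("text", "").lower().split():
            if word.startswith("#") and word.strip("#.,!?") in wanted:
                yield tweet
                break

sample = ['{"created_at": "Sat Jan 15 2011", "text": "Evacuation update #qldfloods"}']
for tweet in track_hashtags(sample, {"#qldfloods"}):
    print(tweet["text"])
```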
Abstract:
Background: Previous research identified that primary brain tumour patients have significant psychological morbidity and unmet needs, particularly the need for more information and support. However, the utility of strategies to improve information provision in this setting is unknown. This study involved the development and piloting of a brain tumour specific question prompt list (QPL). A QPL is a list of questions patients may find useful to ask their health professionals, and is designed to facilitate communication and information exchange. Methods: Thematic analysis of QPLs developed for other chronic diseases and of brain tumour specific patient resources informed a draft QPL. Subsequent refinement of the QPL involved an iterative process of interviews and review with 12 recently diagnosed patients and six caregivers. Final revisions were made following readability analyses and review by health professionals. Piloting of the QPL is underway using a non-randomised control group trial with patients undergoing treatment for a primary brain tumour in Brisbane, Queensland. Following baseline interviews, consenting participants are provided with the QPL or standard information materials. Follow-up interviews four to six weeks later allow assessment of the acceptability of the QPL, how it is used by patients, its impact on information needs, and the feasibility of recruitment, implementation and outcome assessment. Results: The final QPL was determined to be readable at the sixth grade level. It contains seven sections: diagnosis, prognosis, symptoms and changes, the health professional team, support, treatment and management, and post-treatment concerns. At this time, fourteen participants have been recruited for the pilot, and data collection has been completed for eleven. Data collection and preliminary analysis are expected to be completed by the time of the conference and presented there. Conclusions: If acceptable to participants, the QPL may encourage patients, doctors and nurses to communicate more effectively, reducing unmet information needs and ultimately improving psychological wellbeing.
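For readers unfamiliar with readability analyses like the one mentioned above, the sketch below computes the standard Flesch-Kincaid grade level, 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The syllable counter is a rough vowel-group heuristic of our own, so results are approximate; the abstract does not say which readability measure was actually used.

```python
import re

def count_syllables(word):
    # Approximate syllables as runs of vowels; discount a silent trailing 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(round(fk_grade("What symptoms should I watch for? Who do I call if they appear?"), 1))
```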
Abstract:
Volume measurements are useful in many branches of science and medicine. They are usually accomplished by acquiring a sequence of cross-sectional images through the object using an appropriate scanning modality, for example x-ray computed tomography (CT), magnetic resonance (MR) or ultrasound (US). In the cases of CT and MR, a dividing cubes algorithm can be used to describe the surface as a triangle mesh. However, such algorithms are not suitable for US data, especially when the image sequence is multiplanar (as it usually is). This problem may be overcome by manually tracing regions of interest (ROIs) on the registered multiplanar images and connecting the points into a triangular mesh. In this paper we describe and evaluate a new discrete form of Gauss' theorem which enables the calculation of the volume of any enclosed surface described by a triangular mesh. The volume is calculated by summing, over all surface triangles, the scalar product of each triangle's centroid with its area-weighted normal. The algorithm was tested on computer-generated objects, US-scanned balloons, livers and kidneys and CT-scanned clay rocks. The results, expressed as the mean percentage difference ± one standard deviation, were 1.2 ± 2.3, 5.5 ± 4.7, 3.0 ± 3.2 and −1.2 ± 3.2% for balloons, livers, kidneys and rocks respectively. The results compare favourably with other volume estimation methods such as planimetry and tetrahedral decomposition.
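This computation follows from the divergence theorem: for a closed, outward-oriented triangle mesh, V = (1/3) Σ c_i · n_i, where c_i is a triangle's centroid and n_i its area-weighted normal. A minimal sketch, tested here on a unit cube of our own construction rather than the paper's scanned data:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume of a closed triangle mesh with outward-oriented faces."""
    v = np.asarray(vertices, dtype=float)
    vol = 0.0
    for i, j, k in triangles:
        a, b, c = v[i], v[j], v[k]
        centroid = (a + b + c) / 3.0
        normal = 0.5 * np.cross(b - a, c - a)  # area-weighted outward normal
        vol += centroid @ normal
    return vol / 3.0

# Unit cube as 12 outward-oriented triangles; expected volume 1.0.
V = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
T = [(0, 1, 3), (0, 3, 2), (4, 6, 7), (4, 7, 5), (0, 4, 5), (0, 5, 1),
     (2, 3, 7), (2, 7, 6), (0, 2, 6), (0, 6, 4), (1, 5, 7), (1, 7, 3)]
print(mesh_volume(V, T))  # 1.0
```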
Abstract:
The purpose of this article is to describe a project with one Torres Strait Islander community. It provides some insights into parents' funds of knowledge that are mathematical in nature, such as sorting shells and giving fish. The idea of funds of knowledge is based on the premise that people are competent and hold knowledge that has been historically and culturally accumulated into a body of knowledge and skills essential for their functioning and well-being. This knowledge is practised throughout their lives and passed on to the next generation of children. Through a community research approach, funds of knowledge that validate the community's identities as knowledgeable people can also be used as foundations for future learning for teachers, parents and children in the early years of school. They can be the bridge that joins a community's funds of knowledge with schools, validating that knowledge.
Abstract:
A standard method for the numerical solution of partial differential equations (PDEs) is the method of lines. In this approach the PDE is discretised in space using finite differences or similar techniques, and the resulting semidiscrete problem in time is integrated using an initial value problem solver. A significant challenge when applying the method of lines to fractional PDEs is that the non-local nature of the fractional derivatives results in a discretised system in which each equation involves contributions from many (possibly all) spatial nodes. This has important consequences for the efficiency of the numerical solver. First, since the cost of evaluating the discrete equations is high, it is essential to minimise the number of evaluations required to advance the solution in time. Second, since the Jacobian matrix of the system is dense (partially or fully), methods that avoid the need to form and factorise this matrix are preferred. In this paper, we consider a nonlinear two-sided space-fractional diffusion equation in one spatial dimension. A key contribution of this paper is to demonstrate how an effective preconditioner is crucial for improving the efficiency of the method of lines for solving this equation. In particular, we show how to construct suitable banded approximations to the system Jacobian for preconditioning purposes that permit high orders and large stepsizes to be used in the temporal integration, without requiring dense matrices to be formed. The results of numerical experiments are presented that demonstrate the effectiveness of this approach.
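To make the banded-preconditioning idea concrete, the sketch below builds a dense stand-in for the Jacobian of a discretised fractional operator, retains only a few central bands, factorises that banded approximation once, and wraps the factorisation as a preconditioner for GMRES. The matrix, bandwidth and decay rate are illustrative assumptions, not the paper's actual discretisation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def banded_approximation(J, bandwidth):
    """Keep only the entries within `bandwidth` diagonals of the main one."""
    offsets = list(range(-bandwidth, bandwidth + 1))
    return sp.diags([np.diagonal(J, k) for k in offsets], offsets, format="csc")

n, bw = 200, 5
# Dense stand-in Jacobian: a strong diagonal plus slowly decaying off-diagonal
# tails, mimicking the non-local coupling of a fractional derivative.
i, j = np.indices((n, n))
J = -((1.0 + np.abs(i - j)) ** -1.8)
np.fill_diagonal(J, 4.0)

lu = spla.splu(banded_approximation(J, bw))      # cheap banded factorisation
precond = spla.LinearOperator((n, n), matvec=lu.solve)
b = np.ones(n)
x, info = spla.gmres(sp.csr_matrix(J), b, M=precond)
print("converged" if info == 0 else f"gmres info = {info}")
```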