241 results for moment problem
Abstract:
Mathematical problem solving has been the subject of substantial and often controversial research for several decades. We use the term "problem solving" here in a broad sense to cover a range of activities that challenge and extend one’s thinking. In this chapter, we initially present a sketch of past decades of research on mathematical problem solving and its impact on the mathematics curriculum. We then consider some of the factors that have limited previous research on problem solving. In the remainder of the chapter we address some ways in which we might advance the fields of problem-solving research and curriculum development.
Abstract:
This study reported on the issues surrounding the acquisition of problem-solving competence by middle-year students who had been ascertained as above average in intelligence, but underachieving in problem-solving competence. In particular, it looked at the possible links between problem-posing skills development and improvements in problem-solving competence. A cohort of Year 7 students at a private, non-denominational, co-educational school was chosen as participants for the study, as they undertook a series of problem-posing sessions each week throughout a school term. The lessons were facilitated by the researcher in the students’ school setting. Two criteria were chosen to identify participants for this study. Firstly, each participant scored above the 60th percentile in the standardized Middle Years Ability Test (MYAT) (Australian Council for Educational Research, 2005) and secondly, the participants all scored below the cohort average for Criterion B (Problem-solving Criterion) in their school mathematics tests during the first semester of Year 7. Two mutually exclusive groups of participants were investigated, with one constituting the Comparison Group and the other constituting the Intervention Group. The Comparison Group was chosen from a Year 7 cohort for whom no problem-posing intervention had occurred, while the Intervention Group was chosen from the Year 7 cohort of the following year. This second group received the problem-posing intervention in the form of a teaching experiment. That is, the Comparison Group was only pre-tested and post-tested, while the Intervention Group was involved in the teaching experiment and received the pre-testing and post-testing at the same time of the year, but in the following year, when the Comparison Group had moved on to the secondary part of the school. The groups were chosen from consecutive Year 7 cohorts to avoid cross-contamination of the data. A constructionist framework was adopted for this study that allowed the researcher to gain an “authentic understanding” of the changes that occurred in the development of problem-solving competence of the participants in the context of a classroom setting (Richardson, 1999). Qualitative and quantitative data were collected through a combination of methods including researcher observation and journal writing, videotaping, student workbooks, informal student interviews, student surveys, and pre-testing and post-testing. This combination of methods was required to increase the validity of the study’s findings through triangulation of the data. The study findings showed that participation in problem-posing activities can facilitate the re-engagement of disengaged, middle-year mathematics students. In addition, participation in these activities can result in improved problem-solving competence and associated developmental learning changes. Some of the changes that were evident as a result of this study included improvements in self-regulation, increased integration of prior knowledge with new knowledge, and increased and contextualised socialisation.
Abstract:
In the paper, the flow-shop scheduling problem with parallel machines at each stage (machine center) is studied. For each job, its release and due dates as well as a processing time for each of its operations are given. The scheduling criterion consists of three parts: the total weighted earliness, the total weighted tardiness and the total weighted waiting time. The criterion takes into account the costs of storing semi-manufactured products in the course of production and ready-made products, as well as penalties for not meeting the deadlines stated in the conditions of the contract with the customer. To solve the problem, three constructive algorithms and three metaheuristics (based on Tabu Search and Simulated Annealing techniques) are developed and experimentally analyzed. All the proposed algorithms operate on the notion of the so-called operation processing order, i.e. the order of operations on each machine. We show that the problem of schedule construction on the basis of a given operation processing order can be reduced to a linear programming task. We also propose an approximation algorithm for schedule construction and show the conditions under which it is optimal.
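As a sketch of the three-part criterion just described (the notation below is an illustrative assumption, not the paper's own symbols): with $C_j$, $d_j$ and $W_j$ denoting the completion time, due date and waiting time of job $j$, and $u_j$, $w_j$, $v_j$ the corresponding weights, the objective is to minimise

$$\sum_{j}\Bigl( u_j \max(0,\, d_j - C_j) \;+\; w_j \max(0,\, C_j - d_j) \;+\; v_j\, W_j \Bigr),$$

where the three terms are the total weighted earliness, the total weighted tardiness and the total weighted waiting time, respectively.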
Abstract:
Interdisciplinary studies are fundamental to the signature practices for the middle years of schooling. Middle years researchers claim that interdisciplinarity in teaching appropriately meets the needs of early adolescents by tying concepts together, providing frameworks for the relevance of knowledge, and demonstrating the linking of disparate information for the solution of novel problems. Cognitive research is not wholeheartedly supportive of this position. Learning theorists assert that the application of knowledge in novel situations for the solution of problems is actually dependent on deep discipline-based understandings. The present research contrasts the capabilities of early adolescent students from discipline-based and interdisciplinary curriculum schooling contexts to successfully solve multifaceted real-world problems. This will inform the development of effective management of the middle years of schooling curriculum.
Abstract:
In this thesis we are interested in financial risk and the instrument we want to use is Value-at-Risk (VaR). VaR is the maximum loss over a given period of time at a given confidence level. Many definitions of VaR exist and some will be introduced throughout this thesis. There are two main ways to measure risk and VaR: through volatility and through percentiles. Large volatility in financial returns implies a greater probability of large losses, but also a larger probability of large profits. Percentiles describe tail behaviour. The estimation of VaR is a complex task. It is important to know the main characteristics of financial data to choose the best model. The existing literature is very wide, sometimes controversial, but helpful in drawing a picture of the problem. It is commonly recognised that financial data are characterised by heavy tails, time-varying volatility, asymmetric response to bad and good news, and skewness. Ignoring any of these features can lead to underestimating VaR, with a possible ultimate consequence being the default of the protagonist (firm, bank or investor). In recent years, skewness has attracted special attention. An open problem is the detection and modelling of time-varying skewness. Is skewness constant, or is there some significant variability which in turn can affect the estimation of VaR? This thesis aims to answer this question and to open the way to a new approach to modelling simultaneously time-varying volatility (conditional variance) and skewness. The new tools are modifications of the Generalised Lambda Distributions (GLDs). They are four-parameter distributions, which allow the first four moments to be modelled nearly independently: in particular we are interested in what we will call para-moments, i.e., mean, variance, skewness and kurtosis. The GLDs will be used in two different ways. Firstly, semi-parametrically, we consider a moving window to estimate the parameters and calculate the percentiles of the GLDs. Secondly, parametrically, we attempt to extend the GLDs to include time-varying dependence in the parameters. We used local linear regression to estimate the conditional mean and conditional variance semi-parametrically. The method is not efficient enough to capture all the dependence structure in the three indices (ASX 200, S&P 500 and FT 30); however, it provides an idea of the underlying data generating process (DGP) and helps in choosing a good technique to model the data. We find that the GLDs suggest that moments up to the fourth order do not always exist; their existence appears to vary over time. This is a very important finding, considering that past papers (see for example Bali et al., 2008; Hashmi and Tay, 2007; Lanne and Pentti, 2007) modelled time-varying skewness, implicitly assuming the existence of the third moment. However, the GLDs suggest that the mean, variance, skewness and in general the conditional distribution vary over time, as already suggested by the existing literature. The GLDs give good results in estimating VaR on three real indices, ASX 200, S&P 500 and FT 30, with results very similar to those provided by historical simulation.
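As a rough illustration of the percentile view of VaR described above, the sketch below (all names are assumptions; this is not code from the thesis) estimates VaR in a moving window of returns. The thesis fits a Generalised Lambda Distribution in each window and reads VaR off the fitted percentile; here the empirical quantile stands in for the fitted-GLD percentile purely for illustration.

```python
# Minimal sketch: rolling-window VaR as an empirical percentile of returns.
# The thesis uses a GLD fitted in each window; np.quantile is a stand-in.
import numpy as np

def rolling_var(returns, window=250, alpha=0.01):
    """One VaR estimate (reported as a positive loss) per window of returns."""
    returns = np.asarray(returns, dtype=float)
    estimates = []
    for end in range(window, len(returns) + 1):
        sample = returns[end - window:end]
        estimates.append(-np.quantile(sample, alpha))  # alpha-quantile of returns
    return np.array(estimates)

# Example with synthetic heavy-tailed daily returns (hypothetical data)
rng = np.random.default_rng(0)
fake_returns = 0.01 * rng.standard_t(df=4, size=1000)
print(rolling_var(fake_returns)[:5])
```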
Abstract:
The concept of "fair basing" is widely acknowledged as a difficult area of patent law. This article maps the development of fair basing law to demonstrate how some of the difficulties have arisen. Part I of the article traces the development of the branches of patent law that were swept under the nomenclature of "fair basing" by British legislation in 1949. It looks at the early courts' approach to patent construction, and examines the early origin of fair basing and what it was intended to achieve. Part II of the article considers the modern interpretation of fair basing, which provides a striking contrast to its historical context. Without any consistent judicial approach to construction, the doctrine has developed inappropriately, giving rise to both over-strict and over-generous approaches.
Abstract:
In an earlier article the concept of fair basing in Australian patent law was described as a "problem child", often unruly and unpredictable in practice, but nevertheless understandable and useful in policy terms. The article traced the development of several different branches of patent law that were swept under the nomenclature of "fair basing" in Britain in 1949. It then went on to examine the adoption of fair basis into Australian law, the modern interpretation of the requirement, and its problems. This article provides an update. After briefly recapping on the relevant historical issues, it examines the recent Lockwood "internal" fair basing case in the Federal and High Courts.
Abstract:
Currently the Bachelor of Design is the generic degree offered to the four disciplines of Architecture, Landscape Architecture, Industrial Design, and Interior Design within the School of Design at the Queensland University of Technology. Regardless of discipline, Digital Communication is a core unit taken by the 600 first year students entering the Bachelor of Design degree. Within the design disciplines the communication of the designer's intentions is achieved primarily through the use of graphic images, with written information being considered as supportive or secondary. As such, Digital Communication attempts to educate learners in the fundamentals of this graphic design communication, using a generic digital or software tool. Past iterations of the unit have not acknowledged the subtle differences in design communication among the different design disciplines involved, and have used a single generic software tool. Following a review of the unit in 2008, it was decided that a single generic software tool was no longer entirely sufficient. This decision was based on the recognition that there was an increasing emergence of discipline-specific digital tools, and an expressed student desire and apparent aptitude to learn these discipline-specific tools. As a result the unit was reconstructed in 2009 to offer both discipline-specific and generic software instruction, if elected by the student. This paper, apart from offering the general context and pedagogy of the existing and restructured units, will more importantly offer research data that validates the changes made to the unit. Most significant of this new data are the results of surveys that authenticate actual student aptitude versus desire in learning discipline-specific tools. This is done through an exposure of student self-efficacy in problem resolution and technological prowess, both generally and specifically within the unit. More traditional means of validation are also presented, including the results of the generic university-wide Learning Experience Survey of the unit, as well as a comparison between the assessment results of the restructured unit and those of the previous year.
Abstract:
Cloud computing is a new computing paradigm in which applications, data and IT services are provided over the Internet. Cloud computing has become a main medium for Software as a Service (SaaS) providers to host their SaaS as it can provide the scalability a SaaS requires. The challenges in the composite SaaS placement process arise from several factors, including the large size of the Cloud network, competing SaaS resource requirements, interactions between the SaaS components, and interactions between the SaaS and its data components. However, existing application placement methods for data centres are not concerned with the placement of a component’s data. In addition, a Cloud network is much larger than the data centre networks that have been discussed in existing studies. This paper proposes a penalty-based genetic algorithm (GA) for the composite SaaS placement problem in the Cloud. We believe this is the first attempt at placing a SaaS together with its data on a Cloud provider’s servers. Experimental results demonstrate the feasibility and the scalability of the GA.
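As an illustration of the penalty-based idea mentioned above, the sketch below (all names and data structures are assumptions, not the paper's implementation) scores a candidate placement by the traffic it sends between servers and adds a large penalty for any server whose capacity is exceeded, so the GA can traverse infeasible placements without ultimately accepting them.

```python
# Minimal sketch of a penalty-based fitness for composite SaaS placement.
# All identifiers and structures are illustrative assumptions.
def placement_fitness(placement, components, servers, traffic, penalty=1000.0):
    """placement: component -> server; traffic: (comp_a, comp_b) -> data volume.
    Lower fitness is better."""
    # Communication cost: traffic between components placed on different servers
    comm_cost = sum(volume for (a, b), volume in traffic.items()
                    if placement[a] != placement[b])
    # Capacity penalty: total CPU demand exceeding a server's capacity
    load = {s: 0.0 for s in servers}
    for comp, server in placement.items():
        load[server] += components[comp]["cpu"]
    violation = sum(max(0.0, load[s] - servers[s]["cpu"]) for s in servers)
    return comm_cost + penalty * violation
```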
Abstract:
Web service composition is an important problem in web service based systems. It is about how to build a new value-added web service using existing web services. A web service may have many implementations, all of which have the same functionality but may have different QoS values. Thus, a significant research problem in web service composition is how to select an implementation for each of the web services such that the composite web service gives the best overall performance. This is the so-called optimal web service selection problem. There may be mutual constraints between some web service implementations. Sometimes when an implementation is selected for one web service, a particular implementation for another web service must be selected. This is a so-called dependency constraint. Sometimes when an implementation for one web service is selected, a set of implementations for another web service must be excluded from the web service composition. This is a so-called conflict constraint. Thus, the optimal web service selection is a typical constrained combinatorial optimization problem from the computational point of view. This paper proposes a new hybrid genetic algorithm for the optimal web service selection problem. The hybrid genetic algorithm has been implemented and evaluated. The evaluation results have shown that the hybrid genetic algorithm outperforms two other existing genetic algorithms when the number of web services and the number of constraints are large.
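As a minimal sketch of how the two constraint types described above might be checked for a candidate selection (the encoding is an assumption chosen for illustration, not the paper's representation): each abstract web service maps to one chosen implementation, and a selection is feasible only if every dependency is honoured and no conflict is triggered.

```python
# Minimal sketch of dependency and conflict checks for a candidate selection.
# The encoding below (dicts, tuples and sets) is an illustrative assumption.
def satisfies_constraints(selection, dependencies, conflicts):
    """selection: service -> chosen implementation.
    dependencies: ((s1, i1), (s2, i2)) pairs -- choosing i1 for s1 forces i2 for s2.
    conflicts: ((s1, i1), (s2, excluded)) pairs -- choosing i1 for s1 rules out
    every implementation of s2 in the set 'excluded'."""
    for (s1, i1), (s2, i2) in dependencies:
        if selection[s1] == i1 and selection[s2] != i2:
            return False
    for (s1, i1), (s2, excluded) in conflicts:
        if selection[s1] == i1 and selection[s2] in excluded:
            return False
    return True
```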
Abstract:
This paper demonstrates some interesting connections between the hitherto disparate fields of mobile robot navigation and image-based visual servoing. A planar formulation of the well-known image-based visual servoing method leads to a bearing-only navigation system that requires no explicit localization and directly yields the desired velocity. The well-known benefits of image-based visual servoing, such as robustness, also apply to the planar case. Simulation results are presented.
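For background on the image-based visual servoing law the abstract builds on (standard IBVS, not the paper's own derivation), the classical control is

$$\mathbf{v} = -\lambda\,\widehat{\mathbf{L}_s}^{+}\,(\mathbf{s} - \mathbf{s}^{*}),$$

where $\mathbf{s}$ is the vector of measured image features, $\mathbf{s}^{*}$ the desired features, $\widehat{\mathbf{L}_s}^{+}$ an estimate of the pseudo-inverse of the interaction matrix, and $\lambda$ a positive gain. In the planar, bearing-only setting the features reduce to bearing angles to landmarks, so the law yields a desired planar velocity directly from bearings, without explicit localization.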
Abstract:
Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: A total of 6,594 patients (4,854 men, 1,740 women) who had been referred to a hospital alcohol and drug service for alcohol-use disorders voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages to employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)
Abstract:
Cloud computing has become a main medium for Software as a Service (SaaS) hosting as it can provide the scalability a SaaS requires. One of the challenges in hosting a SaaS is the placement process, which has to consider the interactions between the SaaS components and the interactions between the SaaS and its data components. Previous research has tackled this problem using a classical genetic algorithm (GA) approach. This paper proposes a cooperative coevolutionary algorithm (CCEA) approach. The CCEA has been implemented and evaluated, and the results show that the CCEA produces higher-quality solutions than the GA.
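As a minimal sketch of the cooperative coevolutionary idea (the split into an application-placement subpopulation and a data-placement subpopulation, and every identifier below, are assumptions for illustration rather than the paper's implementation): each subpopulation is evaluated by pairing its individuals with the best collaborator from the other subpopulation.

```python
# Minimal sketch of a cooperative coevolutionary loop for SaaS placement.
# eval_joint(app_placement, data_placement) returns a cost (lower is better);
# the usual variation operators (selection, crossover, mutation) are omitted.
def coevolve(eval_joint, app_population, data_population, generations=50):
    best_app, best_data = app_population[0], data_population[0]
    for _ in range(generations):
        # Score application placements against the current best data placement
        app_population.sort(key=lambda app: eval_joint(app, best_data))
        best_app = app_population[0]
        # Score data placements against the current best application placement
        data_population.sort(key=lambda data: eval_joint(best_app, data))
        best_data = data_population[0]
        # ... evolve each subpopulation here (variation operators omitted) ...
    return best_app, best_data
```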