Abstract:
Excessive consumption of alcohol is a serious public health problem. While intensive treatments are suitable for those who are physically dependent on alcohol, they are not cost-effective options for the vast majority of problem drinkers who are not dependent. There is good evidence that brief interventions are effective in reducing overall alcohol consumption, alcohol-related problems, and health-care utilisation among nondependent problem drinkers. Psychologists are in an ideal position to opportunistically detect people who drink excessively and to offer them brief advice to reduce their drinking. In this paper we outline the process involved in providing brief opportunistic screening and intervention for problem drinkers. We also discuss methods that psychologists can employ if a client is not ready to reduce drinking, or is ambivalent about change. Depending on the client's level of motivation to change, psychologists can adopt an education-clarification approach, a commitment-enhancement approach, or a skills-training approach. Routine engagement in opportunistic intervention is an important public-health approach to reducing alcohol-related harm in the community.
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables. Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕ_n = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we will compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and we give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
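As a concrete illustration of the sampling step described above, the sketch below is a minimal NumPy/SciPy implementation of the plain Lanczos approximation f(A)b ≈ ||b|| V_m f(T_m) e_1 with f(t) = t^(-α/2). It is not the thesis code; the function name lanczos_fAb, the step count m, and the shifted 1-D Laplacian test matrix are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def lanczos_fAb(A, b, m=50, alpha=1.0):
    """Approximate A^(-alpha/2) b with an m-step Lanczos process."""
    beta0 = np.linalg.norm(b)
    n = b.shape[0]
    V = np.zeros((n, m))
    diag = np.zeros(m)                     # diagonal of the tridiagonal T_m
    off = np.zeros(max(m - 1, 1))          # off-diagonal of T_m
    V[:, 0] = b / beta0
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= off[j - 1] * V[:, j - 1]
        diag[j] = V[:, j] @ w
        w -= diag[j] * V[:, j]
        if j < m - 1:
            off[j] = np.linalg.norm(w)
            if off[j] == 0.0:              # invariant subspace found: stop early
                m = j + 1
                V, diag = V[:, :m], diag[:m]
                break
            V[:, j + 1] = w / off[j]
    T = np.diag(diag) + np.diag(off[:m - 1], 1) + np.diag(off[:m - 1], -1)
    evals, Q = np.linalg.eigh(T)           # spectral decomposition of the small T_m
    fT_e1 = Q @ (evals ** (-alpha / 2.0) * Q[0, :])   # f(T_m) e_1
    return beta0 * (V @ fT_e1)             # ||b|| V_m f(T_m) e_1

# Approximate GMRF-style sample x = A^(-1/2) z for a sparse SPD test matrix
n = 500
A = sp.diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
z = np.random.standard_normal(n)
x = lanczos_fAb(A, z, m=60, alpha=1.0)
```

The same routine with alpha left general gives the matrix transfer approximation ϕ_n = A^(-α/2)b when A is a discretised Laplacian.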
Abstract:
In this paper, the train scheduling problem is modelled as a blocking parallel-machine job shop scheduling (BPMJSS) problem. In the model, trains, single-track sections and multiple-track sections, respectively, are synonymous with jobs, single machines and parallel machines, and an operation is regarded as the movement/traversal of a train across a section. Due to the lack of buffer space, the real-life case should consider blocking or hold-while-wait constraints, which means that a track section cannot release and must hold the train until the next section on the routing becomes available. Based on a literature review and our analysis, it is very hard to find a feasible complete schedule directly for BPMJSS problems. Firstly, a parallel-machine job shop scheduling (PMJSS) problem is solved by an improved shifting bottleneck procedure (SBP) algorithm without considering blocking conditions. Inspired by the proposed SBP algorithm, a feasibility satisfaction procedure (FSP) algorithm is then developed to solve and analyse the BPMJSS problem, using an alternative graph model that extends the classical disjunctive graph model. The proposed algorithms have been implemented and validated using real-world data from Queensland Rail. Sensitivity analysis has been applied by considering train length, upgrading track sections, increasing train speed and changing bottleneck sections. The outcomes show that the proposed methodology would be a very useful tool for real-life train scheduling problems.
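To make the hold-while-wait rule concrete, the toy evaluator below propagates entry times for a fixed (and assumed deadlock-free) train order on each section, so that a train releases a section only when it enters the next one on its route. It is not the paper's SBP or FSP algorithm; the routes, traversal times and names are invented for illustration.

```python
def blocking_start_times(routes, ptimes, order):
    """routes[t]: ordered sections of train t; ptimes[t][k]: traversal time of
    train t on its k-th section; order[s]: fixed sequence of trains on section s.
    Returns start[t][k], the time train t enters its k-th section."""
    start = {t: [0.0] * len(r) for t, r in routes.items()}
    changed = True
    while changed:                           # Bellman-Ford-style fixed point
        changed = False
        for t, route in routes.items():
            for k, s in enumerate(route):
                lb = 0.0
                if k > 0:                    # must finish traversing the previous section
                    lb = max(lb, start[t][k - 1] + ptimes[t][k - 1])
                pos = order[s].index(t)
                if pos > 0:                  # blocking: the preceding train must have left s
                    u = order[s][pos - 1]
                    j = routes[u].index(s)   # assumes each train visits a section once
                    if j + 1 < len(routes[u]):
                        lb = max(lb, start[u][j + 1])             # u left s when entering its next section
                    else:
                        lb = max(lb, start[u][j] + ptimes[u][j])  # s was u's last section
                if lb > start[t][k]:
                    start[t][k] = lb
                    changed = True
    return start

# Two trains following each other along a single-track corridor A -> B -> C
routes = {"T1": ["A", "B", "C"], "T2": ["A", "B", "C"]}
ptimes = {"T1": [3, 5, 2], "T2": [4, 5, 3]}
order  = {"A": ["T1", "T2"], "B": ["T1", "T2"], "C": ["T1", "T2"]}
print(blocking_start_times(routes, ptimes, order))
# T2 enters B only at t=8 (when T1 clears B), although it finishes traversing A at t=7
```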
Abstract:
In this chapter I introduce an ecological-philosophical approach to artmaking that has guided my work over the past 16 years. I call this ‘Ecosophical praxis’. To illustrate how this infuses and directs my research methodologies, I draw upon a case study called Knowmore (House of Commons), an emerging interactive installation due for first showings in late 2008. This allows me to tease out the complex interrelationships between research and practice within my work, and describe how they comment upon and model these eco-cultural theories. I conclude with my intentions and hopes for the continued emergence of a contemporary eco-political modality of new media praxis that self-reflexively questions how we might re-focus future practices upon ‘sustaining the sustainable’.
Abstract:
The melting of spherical nanoparticles is considered from the perspective of heat flow in a pure material and as a moving boundary (Stefan) problem. The dependence of the melting temperature on both the size of the particle and the interfacial tension is described by the Gibbs-Thomson effect, and the resulting two-phase model is solved numerically using a front-fixing method. Results show that interfacial tension increases the speed of the melting process, and furthermore, the temperature distribution within the solid core of the particle exhibits behaviour that is qualitatively different to that predicted by the classical models without interfacial tension.
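For reference (not quoted from the abstract), a commonly used form of the Gibbs-Thomson relation for the size-dependent melting temperature of a spherical particle of radius R, in terms of the bulk melting temperature T_M^∞, solid-liquid interfacial tension σ_sl, solid density ρ_s and latent heat of fusion L_f, is

```latex
T_M(R) = T_M^{\infty}\left(1 - \frac{2\sigma_{sl}}{\rho_s L_f R}\right),
```

so smaller particles and larger interfacial tensions depress the melting temperature below the bulk value, consistent with the faster melting reported above.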
Abstract:
It has long been recognised that government and public sector services suffer an innovation deficit compared to private or market-based services. This paper argues that this can be explained as an unintended consequence of the concerted public sector drive toward the elimination of waste through efficiency, accountability and transparency. Yet in an evolving economy this can be a false efficiency, as it also eliminates the 'good waste' that is a necessary cost of experimentation. This results in a systematic trade-off in the public sector between the static efficiency of minimizing the misuse of public resources and the dynamic efficiency of experimentation. This trade-off is inherently biased against risk and uncertainty and thereby explains why governments find service innovation so difficult. In the drive to eliminate static inefficiencies, many political systems have subsequently overshot and stifled policy innovation. I propose the 'Red Queen' solution of adaptive economic policy.
Abstract:
Problem-based learning (PBL) is a pedagogical methodology that presents the learner with a problem to be solved to stimulate and situate learning. This paper presents key characteristics of a problem-based learning environment that determine its suitability as a data source for work-related research studies. To date, little has been written about the availability and validity of PBL environments as a data source and their suitability for work-related research. We describe problem-based learning and use a research project case study to illustrate the challenges associated with industry work samples. We then describe the PBL course used in our research case study and use this example to illustrate the key attributes of problem-based learning environments and show how the chosen PBL environment met the work-related research requirements of the research case study. We propose that the more realistic the PBL work context and work group composition, the better the PBL environment is as a data source for work-related research. The work context is more realistic when relevant and complex project-based problems are tackled in industry-like work conditions over longer time frames. Work group composition is more realistic when participants with industry-level education and experience enact specialized roles in different disciplines within a professional community.
Abstract:
Mathematical problem solving has been the subject of substantial and often controversial research for several decades. We use the term 'problem solving' here in a broad sense to cover a range of activities that challenge and extend one’s thinking. In this chapter, we initially present a sketch of past decades of research on mathematical problem solving and its impact on the mathematics curriculum. We then consider some of the factors that have limited previous research on problem solving. In the remainder of the chapter we address some ways in which we might advance the fields of problem-solving research and curriculum development.
Abstract:
This study reported on the issues surrounding the acquisition of problem-solving competence of middle-year students who had been ascertained as above average in intelligence, but underachieving in problem-solving competence. In particular, it looked at the possible links between problem-posing skills development and improvements in problem-solving competence. A cohort of Year 7 students at a private, non-denominational, co-educational school was chosen as participants for the study, as they undertook a series of problem-posing sessions each week throughout a school term. The lessons were facilitated by the researcher in the students’ school setting. Two criteria were chosen to identify participants for this study. Firstly, each participant scored above the 60th percentile in the standardized Middle Years Ability Test (MYAT) (Australian Council for Educational Research, 2005) and secondly, the participants all scored below the cohort average for Criterion B (Problem-solving Criterion) in their school mathematics tests during the first semester of Year 7. Two mutually exclusive groups of participants were investigated, one constituting the Comparison Group and the other constituting the Intervention Group. The Comparison Group was chosen from a Year 7 cohort for whom no problem-posing intervention had occurred, while the Intervention Group was chosen from the Year 7 cohort of the following year. This second group received the problem-posing intervention in the form of a teaching experiment. That is, the Comparison Group was only pre-tested and post-tested, while the Intervention Group was involved in the teaching experiment and received the pre-testing and post-testing at the same time of the year, but in the following year, when the Comparison Group had moved on to the secondary part of the school. The groups were chosen from consecutive Year 7 cohorts to avoid cross-contamination of the data. A constructionist framework was adopted for this study that allowed the researcher to gain an “authentic understanding” of the changes that occurred in the development of problem-solving competence of the participants in the context of a classroom setting (Richardson, 1999). Qualitative and quantitative data were collected through a combination of methods including researcher observation and journal writing, videotaping, student workbooks, informal student interviews, student surveys, and pre-testing and post-testing. This combination of methods was required to increase the validity of the study’s findings through triangulation of the data. The study findings showed that participation in problem-posing activities can facilitate the re-engagement of disengaged, middle-year mathematics students. In addition, participation in these activities can result in improved problem-solving competence and associated developmental learning changes. Some of the changes that were evident as a result of this study included improvements in self-regulation, increased integration of prior knowledge with new knowledge, and increased and contextualised socialisation.
Abstract:
In the paper, the flow-shop scheduling problem with parallel machines at each stage (machine center) is studied. For each job, its release and due dates as well as a processing time for each of its operations are given. The scheduling criterion consists of three parts: the total weighted earliness, the total weighted tardiness and the total weighted waiting time. The criterion takes into account the costs of storing semi-manufactured products in the course of production and ready-made products, as well as penalties for not meeting the deadlines stated in the conditions of the contract with the customer. To solve the problem, three constructive algorithms and three metaheuristics (based on Tabu Search and Simulated Annealing techniques) are developed and experimentally analyzed. All the proposed algorithms operate on the notion of a so-called operation processing order, i.e. the order of operations on each machine. We show that the problem of schedule construction on the basis of a given operation processing order can be reduced to a linear programming task. We also propose an approximation algorithm for schedule construction and show the conditions for its optimality.
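Once completion and waiting times for a candidate schedule are known, the three-part criterion can be evaluated directly. The sketch below is an illustrative cost evaluation only, not one of the paper's constructive algorithms or metaheuristics; the Job fields and the two-job example data are invented.

```python
from dataclasses import dataclass

@dataclass
class Job:
    due: float    # due date d_j
    we: float     # earliness weight
    wt: float     # tardiness weight
    ww: float     # waiting-time weight

def criterion(jobs, completion, waiting):
    """Total weighted earliness + tardiness + waiting time for a given schedule.
    completion[j] is the completion time C_j; waiting[j] is the total time job j
    spends waiting between (and before) its operations."""
    total = 0.0
    for name, job in jobs.items():
        c = completion[name]
        total += job.we * max(0.0, job.due - c)   # weighted earliness
        total += job.wt * max(0.0, c - job.due)   # weighted tardiness
        total += job.ww * waiting[name]           # weighted waiting time
    return total

# Invented two-job example: J1 finishes early, J2 finishes late
jobs = {"J1": Job(due=10, we=1.0, wt=2.0, ww=0.5),
        "J2": Job(due=8,  we=1.0, wt=3.0, ww=0.5)}
print(criterion(jobs, {"J1": 9, "J2": 11}, {"J1": 1, "J2": 4}))   # 1 + 0.5 + 9 + 2 = 12.5
```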
Abstract:
Interdisciplinary studies are fundamental to the signature practices for the middle years of schooling. Middle years researchers claim that interdisciplinarity in teaching appropriately meets the needs of early adolescents by tying concepts together, providing frameworks for the relevance of knowledge, and demonstrating the linking of disparate information for the solution of novel problems. Cognitive research is not wholeheartedly supportive of this position. Learning theorists assert that application of knowledge in novel situations for the solution of problems is actually dependent on deep discipline-based understandings. The present research contrasts the capabilities of early adolescent students from discipline-based and interdisciplinary curriculum schooling contexts to successfully solve multifaceted real-world problems. This will inform the development of effective management of the middle years of schooling curriculum.