955 results for Shortest path problem
Abstract:
Increasing numbers of medical schools in Australia and overseas have moved away from didactic teaching methodologies and embraced problem-based learning (PBL) to improve clinical reasoning and communication skills and to encourage self-directed lifelong learning. In January 2005, the first cohort of students entered the new MBBS program at the Griffith University School of Medicine, Gold Coast, embarking upon a fully integrated curriculum that uses PBL and combines electronic delivery, communication and evaluation systems incorporating the cognitive principles that underpin the PBL process. This chapter examines the educational philosophies and design of the e-learning environment underpinning the processes developed to deliver, monitor and evaluate the curriculum. The key initiatives promoted within the conceptual model for the curriculum to foster student engagement and innovative, distinctive approaches to student learning at Griffith are (a) Student engagement, (b) Pastoral care, (c) Staff engagement, (d) Monitoring and (e) Curriculum/Program Review. © 2007 Springer-Verlag Berlin Heidelberg.
Abstract:
The problem of reconstruction of a refractive-index distribution (RID) in optical refraction tomography (ORT) with optical path-length difference (OPD) data is solved using two adaptive-estimation-based extended-Kalman-filter (EKF) approaches. First, a basic single-resolution EKF (SR-EKF) is applied to a state variable model describing the tomographic process, to estimate the RID of an optically transparent refracting object from noisy OPD data. The initialization of the biases and covariances corresponding to the state and measurement noise is discussed. The state and measurement noise biases and covariances are adaptively estimated. An EKF is then applied to the wavelet-transformed state variable model to yield a wavelet-based multiresolution EKF (MR-EKF) solution approach. To numerically validate the adaptive EKF approaches, we evaluate them with benchmark studies of standard stationary cases, where comparative results with commonly used efficient deterministic approaches can be obtained. Detailed reconstruction studies for the SR-EKF and two versions of the MR-EKF (with Haar and Daubechies-4 wavelets) compare well with those obtained from a typically used variant of the (deterministic) algebraic reconstruction technique, the average correction per projection method, thus establishing the capability of the EKF for ORT. To the best of our knowledge, the present work contains unique reconstruction studies encompassing the use of EKF for ORT in single-resolution and multiresolution formulations, and also in the use of adaptive estimation of the EKF's noise covariances. (C) 2010 Optical Society of America
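The abstract does not reproduce the filter equations; for orientation, a generic extended-Kalman-filter predict/update cycle can be sketched as below. This is a minimal sketch of the standard EKF recursion, not the paper's specific RID state model or its adaptive noise estimation; all function and variable names here are illustrative.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P         : prior state estimate and covariance
    z            : new measurement
    f, h         : process and measurement functions
    F_jac, H_jac : their Jacobians evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    """
    # Predict: propagate the state and covariance through the process model.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the measurement via the Kalman gain.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R          # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the paper's setting, the state would hold the discretized refractive-index distribution and the measurement function would map it to optical path-length differences; the adaptive variants additionally re-estimate Q and R from the innovation sequence.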
Abstract:
We describe a real-time system that supports the design of optimal flight paths over terrain. These paths either maximize view coverage or minimize the vehicle's exposure to the ground. A volume-rendered display of multi-viewpoint visibility and a haptic interface assist the user in selecting, assessing, and refining the computed flight path. We design a three-dimensional scalar field representing the visibility of a point above the terrain, describe an efficient algorithm to compute the visibility field, and develop visual and haptic schemes to interact with it. Given the origin and destination, the desired flight path is computed using an efficient simulation of an articulated rope under the influence of the visibility gradient. The simulation framework also accepts user input via the haptic interface, thereby allowing manual refinement of the flight path.
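The rope simulation itself is not specified in the abstract; the general idea of relaxing a polyline between fixed endpoints under a cost gradient plus an elastic smoothing term can be sketched as follows. The weights, field, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def relax_path(start, goal, grad_cost, n_pts=20, alpha=0.5, beta=0.1, iters=200):
    """Relax a polyline between fixed endpoints under a cost gradient.

    grad_cost(p) -> gradient of an exposure/visibility cost at point p.
    alpha        -> step size along the negative cost gradient.
    beta         -> smoothing weight pulling each node toward its
                    neighbours' midpoint (the 'rope' tension).
    """
    pts = np.linspace(start, goal, n_pts)  # initial straight-line path
    for _ in range(iters):
        interior = pts[1:-1]
        # Elastic term: pull each interior node toward its neighbours' midpoint.
        mid = 0.5 * (pts[:-2] + pts[2:])
        # Descend the cost field while staying smooth; endpoints stay fixed.
        interior += beta * (mid - interior) - alpha * np.array(
            [grad_cost(p) for p in interior])
    return pts
```

With a synthetic Gaussian "exposure" peak placed just off the straight line between the endpoints, the relaxed path bends around the peak while the endpoints remain pinned, which is the qualitative behaviour the abstract describes for the visibility gradient.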
Abstract:
The study analyses European social policy as a political project that proceeds under the guidance of the European Commission. In the name of modernisation, the project aims to build a new idea for the welfare state. To understand the project, it is necessary to distance oneself from both the juridical competence of the European Union and the traditional national welfare state models. The question is about sharing problems, as well as solutions to them: it is the creation and sharing of common views, concepts and images that play a key role in European integration. Drawing on texts and speeches produced by the European Commission, the study throws light on the development of European social policy during the first years of the 2000s. The study "freeze-frames" the welfare debate having its starting points in the nation states in the name of the entity of Europe. The first article approaches the European social model as a story in itself, a preparatory, persuasive narrative that concerns the management of change. The article shows how the audience can be motivated to work towards a set target by using discursive elements in a persuasive manner: the function of a persuasive story is to convince the target audience of the appropriateness of the chosen direction and to shape their identity so that they are favourably disposed to the desired political targets. This is a kind of "intermediate state" where the story, despite its inner contradictions and inaccuracies, succeeds in appearing as an almost self-evident path towards a modern social policy that Europe is currently seen to be in need of. The second article outlines the European social model as a question of governance. Health as a sector of social policy is detached from the old political order, which was based on the welfare state, and is closely linked to economy. At the same time the population is primarily seen as an economic resource. 
The Commission is working towards a "Europe of Health" that grapples with the problem of governance with the help of the "healthisation" of society, healthy citizenship and health economics. The way the Commission speaks is guided by the Union's powerful interest to act as "Europe" in the field of welfare policy. At the same time, the traditional separateness of health policy is effaced in order to be able to make health policy reforms a part of the Union's wider modernisation targets. The third article then shows the European social policy as its own area of governance. The article uses an approach based on critical discourse analysis in examining the classification systems and presentation styles adopted by Commission communications, as well as the identities that they help build. In analysing the "new start" of the Lisbon strategy from the perspective of social policy, the article shows how the emphasis has shifted from the persuasive arguments for change with necessary common European targets in the early stages of the strategy towards the implementation of reforms: from a narrative to a vision and from a diagnosis to healing. The phase of global competition represents "the modern" with which European society with its culture and ways of life now has to be matched. The Lisbon strategy is a way to direct this societal change, thus building a modern European social policy. The fourth article describes how the Commission uses its communications policy to build practices and techniques of governance and how it persuades citizens to participate in the creation of a European project of change. This also requires a new kind of agency: agents for whom accountability and responsibilities mean integration into and commitment to European society. Accountability is shaped into a decisive factor in implementing the European Union's strategy of change. As such it will displace hierarchical confrontations and emphasise common action with a view to modernising Europe. 
However, the Union's discourse cannot be described as a political language that genuinely rouses and convinces the audience at the level of everyday life. Keywords: European social policy, EU policy, European social model, European Commission, modernisation of welfare, welfare state, communications, discursiveness.
Abstract:
It is well known that the numerical accuracy of a series solution to a boundary-value problem by the direct method depends on the technique of approximate satisfaction of the boundary conditions and on the stage of truncation of the series. On the other hand, it does not appear to be generally recognized that, when the boundary conditions can be described in alternative equivalent forms, the convergence of the solution is significantly affected by the actual form in which they are stated. The importance of the last aspect is studied for three different techniques of computing the deflections of simply supported regular polygonal plates under uniform pressure. It is also shown that it is sometimes possible to modify the technique of analysis to make the accuracy independent of the description of the boundary conditions.
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim for the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science.
Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
Abstract:
Abstract (Mig or mej, själ or sjel? Problems and solutions in the transcription of Swedish song texts): In this article I point out and discuss problems and solutions concerning the phonetic transcription of Swedish song texts. My material consists of 66 Swedish songs phonetically transcribed. The transcriptions were published by The Academy of Finnish Art Song in 2009. The first issue was which level of accuracy should be chosen. The transcriptions were created to be clear at a glance and suitable for the interpretive needs of non-Swedish-speaking singers. The principle was to use as few signs and symbols as possible without sacrificing accuracy. Certain songs were provided with additional information whenever there was a chance of misinterpretation. The second issue was which geographic variety of the language should be visible in the transcription, Standard Swedish or Finland-Swedish. The songs in the volume are a selection of well-known works that are also of international interest. Most were composed by Jean Sibelius (1865–1957), a substantial number of whose songs were based on poems written by Finland's national poet, Johan Ludvig Runeberg (1804–1877). I therefore chose to use the variety of Swedish spoken in Finland, in order to reflect the cultural origin of the songs. This variety differs slightly from the variety spoken in Sweden at both the prosodic and the phonetic level. In singing, the musical notation gives the interpreter enough information about prosody, so the differences concern mostly the phonemes. A fully consistent transcription was, however, difficult to make, owing to vocal requirements. For example, in an unstressed final syllable the vowel was often indicated as a central vowel, which in singing is given a more direct emphasis than in literal pronunciation, even though this central vowel does not occur in spoken Finland-Swedish.
Abstract:
In this paper we first give a numerical procedure for the solution of second-order non-linear ordinary differential equations of the type y″ = f(x, y, y′) with given initial conditions. The method is based on a geometrical interpretation of the equation, which suggests a simple geometrical construction of the integral curve. We then translate this geometrical method into a numerical procedure adaptable to desk calculators and digital computers. We study the efficacy of the method with the help of an illustrative example with a known exact solution, and also compare it with the Runge-Kutta method. We then apply the method to a physical problem, namely the study of the temperature distribution in a semi-infinite homogeneous solid medium with a temperature-dependent conductivity coefficient.
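The paper's own geometrical construction is not reproduced in the abstract; the Runge-Kutta method it is compared against, applied to y″ = f(x, y, y′) through the standard reduction to a first-order system in (y, y′), can be sketched as follows (the function and variable names are illustrative):

```python
import math

def rk4_second_order(f, x0, y0, v0, h, n):
    """Integrate y'' = f(x, y, y') with classical fourth-order Runge-Kutta,
    treating (y, v) with v = y' as a first-order system."""
    x, y, v = x0, y0, v0
    for _ in range(n):
        # Slopes of (y, v) sampled at the start, midpoint (twice), and end.
        k1y, k1v = v,            f(x,       y,            v)
        k2y, k2v = v + h/2*k1v,  f(x + h/2, y + h/2*k1y,  v + h/2*k1v)
        k3y, k3v = v + h/2*k2v,  f(x + h/2, y + h/2*k2y,  v + h/2*k2v)
        k4y, k4v = v + h*k3v,    f(x + h,   y + h*k3y,    v + h*k3v)
        # Weighted average of the four slopes advances the step.
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return y, v
```

For y″ = −y with y(0) = 0, y′(0) = 1 the exact solution is sin x, which this scheme reproduces with fourth-order global accuracy; a benchmark of this kind is the natural way to make the comparison the abstract mentions.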
Abstract:
Following Weisskopf, the kinematics of quantum mechanics is shown to lead to a modified charge distribution for a test electron embedded in the Fermi-Dirac vacuum with interesting consequences.
Abstract:
Thin films are the basis of much of recent technological advance, ranging from coatings with mechanical or optical benefits to platforms for nanoscale electronics. In the latter, semiconductors have been the norm ever since silicon became the main construction material for a multitude of electronic components. The array of characteristics of silicon-based systems can be widened by manipulating the structure of the thin films at the nanoscale - for instance, by making them porous. The different characteristics of different films can then to some extent be combined by simple superposition. Thin films can be manufactured using many different methods. One emerging field is cluster beam deposition, where aggregates of hundreds or thousands of atoms are deposited one by one to form a layer, the characteristics of which depend on the parameters of deposition. One critical parameter is the deposition energy, which dictates how porous, if at all, the layer becomes. Other parameters, such as sputtering rate and aggregation conditions, affect the size and consistency of the individual clusters. Understanding nanoscale processes, which cannot be observed experimentally, is fundamental to optimizing experimental techniques and inventing new possibilities for advances at this scale. Atomistic computer simulations offer a window to the world of nanometers and nanoseconds in a way unparalleled by the most accurate of microscopes. Transmission electron microscope image simulations can then bridge the gap between the simulated and the experimental by providing a tangible link between the two. In this thesis, the entire process of cluster beam deposition is explored using molecular dynamics and image simulations. The process begins with the formation of the clusters, which is investigated for Si/Ge in an Ar atmosphere. The structure of the clusters is optimized to bring it as close to the experimental ideal as possible.
Then, clusters are deposited, one by one, onto a substrate, until a sufficiently thick layer has been produced. Finally, the concept is expanded by further deposition with different parameters, resulting in multiple superimposed layers of different porosities. This work demonstrates how the aggregation of clusters is not entirely understood within the scope of the approximations used in the simulations; yet, it is also shown how the continued deposition of clusters with a varying deposition energy can lead to a novel kind of nanostructured thin film: a multielemental porous multilayer. According to theory, these new structures have characteristics that can be tailored for a variety of applications, with precision heretofore unseen in conventional multilayer manufacture.
Abstract:
Ingarden (1962, 1964) postulates that artworks exist in an “Objective purely intentional” way. According to this view, objectivity and subjectivity are opposed forms of existence, parallel to the opposition between realism and idealism. Using arguments from cognitive science, experimental psychology, and semiotics, this lecture proposes that, particularly in aesthetic phenomena, realism and idealism are not pure oppositions; rather, they are aspects of a single process of cognition operating in different strata. Furthermore, the concept of realism can be conceived as an empirical extreme of idealism, and the concept of idealism as a pre-operative extreme of realism. Both kinds of systems of knowledge are mutually associated by a synecdoche, performing major tasks of mental ordering and categorisation. This contribution suggests that the supposed opposition between objectivity and subjectivity raises, first of all, a problem of translatability rather than a problem of existential categories. Synecdoche seems to be a very basic transaction of the mind, establishing ontologies (in the more Ingardenian sense of the term). Wegrzecki (1994, 220) defines ontology as “the central domain of philosophy to which other its parts directly or indirectly refer”. Thus, ontology operates within philosophy as the synecdoche does within language, pointing the sense of the general into the particular and/or vice versa. The many affinities and similarities between different sign systems, like those found across the interrelationships of the arts, are embedded in a transversal, synecdochic intersemiosis. An important question, from this view, is whether Ingarden's pure objectivities rest basically on the impossibility of translation, and are therefore absolutely self-referential constructions. In such a case, it would be impossible to translate pure intentionality into something else, such as acts or products.