26 results for Shortest path problem
in Helda - Digital Repository of the University of Helsinki
Abstract:
In this study I consider what kind of perspective on the mind-body problem is taken, and can be taken, by a philosophical position called non-reductive physicalism. Many positions fall under this label. The form of non-reductive physicalism which I discuss is in essential respects the position taken by Donald Davidson (1917-2003) and Georg Henrik von Wright (1916-2003). I defend their positions and discuss the unrecognized similarities between their views. Non-reductive physicalism combines two theses: (a) everything that exists is physical; (b) mental phenomena cannot be reduced to states of the brain. This means that according to non-reductive physicalism the mental aspect of humans (be it a soul, mind, or spirit) is an irreducible part of the human condition. Davidson and von Wright likewise claim that, in some important sense, the mental aspect of a human being does not reduce to the physical aspect, and that there is a gap between these aspects that cannot be closed. I claim that their arguments for this conclusion are convincing. I also argue that whereas von Wright and Davidson give interesting arguments for the irreducibility of the mental, their physicalism is unwarranted. These philosophers do not give good reasons for believing that reality is thoroughly physical. Notwithstanding the materialistic consensus in contemporary philosophy of mind, the ontology of mind is still uncharted territory where real breakthroughs are not to be expected until a radically new ontological position is developed. The third main claim of this work is that the problem of mental causation cannot be solved from the Davidsonian - von Wrightian perspective. The problem of mental causation is the problem of how mental phenomena like beliefs can cause physical movements of the body. As I see it, the essential point of non-reductive physicalism - the irreducibility of the mental - and the problem of mental causation are closely related. If mental phenomena do not reduce to causally effective states of the brain, then what justifies the belief that mental phenomena have causal powers? If mental causes do not reduce to physical causes, then how can one tell when - or whether - the mental causes in terms of which human actions are explained are actually effective? I argue that this - how to decide when mental causes really are effective - is the real problem of mental causation. The motivation to explore and defend a non-reductive position stems from the belief that reductive physicalism leads to serious ethical problems. My claim is that Davidson's and von Wright's ultimate reason for defending a non-reductive view comes down to their belief that a reductive understanding of human nature would be a narrow and possibly harmful perspective. The final conclusion of my thesis is that von Wright's and Davidson's positions provide a starting point from which the current scientistic philosophy of mind can be critically explored further in the future.
Abstract:
Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays, the psychological theories of design, also called the design cognition literature, describe the design process from the information-processing viewpoint. These models co-exist with normative standards of how designs should be crafted, and in many places there are concrete discrepancies between the two, in a way that resembles the difference between actual and ideal decision-making. This study aimed to explore one such possible discrepancy, related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been preliminarily investigated. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there are implicit and explicit forms of decomposition, but have not provided a theoretical basis for the two. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition. The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, as inferred from their use of structured strategies, the designers relied throughout on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.
Abstract:
The thesis aims to link the biolinguistic research program and the results of studies in conceptual combination from cognitive psychology. The thesis derives a theory of the syntactic structure of noun and adjectival compounds from the Empty Lexicon Hypothesis. Two compound-forming operations are described: root-compounding and word-compounding. The aptness of the theory is tested with Finnish and Greek compounds. From the syntactic theory, semantic requirements for the conceptual system are derived, especially requirements for handling morphosyntactic features. These requirements are compared to three prominent theories of conceptual combination: the relational theory CARIN, Dual-Process theory, and C3-theory. The claims for the explanatory power of the relational distributions of the modifier in the CARIN theory are discarded, as the method for sampling and building relational distributions is not reliable and the algorithmic instantiation of the theory does not compute what it claims to compute. From the relational theory there nevertheless remain results supporting the existence of 'easy' relations for certain concepts. Dual-Process theory is found to provide results that cannot, in theory, be affected by the linguistic system, but the basic idea of property compounds is kept. C3-theory is found not to be computationally realistic, but its basic results on diagnosticity and the local properties (domains) of the conceptual system are solid. The three conceptual combination models are rethought as a problem of finding the shortest route between two concepts. The suggested new basis for modelling is a bare conceptual landscape in which morphosyntactic or semantic features act as guidance; the structural features of the landscape are essentially unknown, except insofar as they react to features from the linguistic system. Minimalistic principles for conceptual modelling are suggested.
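Recasting conceptual combination as a shortest-route search suggests a standard graph formulation. A minimal sketch using Dijkstra's algorithm over a weighted concept graph follows; the graph, the concept names, and the edge weights are illustrative assumptions, not the thesis's model:

```python
import heapq

def shortest_route(graph: dict, start: str, goal: str):
    """Dijkstra's algorithm: cheapest path between two concepts.

    graph maps a concept to {neighbour: edge_cost}; a lower cost
    could model an 'easier' semantic relation between two concepts.
    """
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# Toy landscape: 'mountain' combines with 'stream' most cheaply via
# a LOCATED-type relation node rather than a MATERIAL-type one.
graph = {
    "mountain": {"location": 0.3, "material": 0.9},
    "location": {"stream": 0.2},
    "material": {"stream": 0.8},
}
print(shortest_route(graph, "mountain", "stream"))
# -> (0.5, ['mountain', 'location', 'stream'])
```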
Abstract:
Achieving sustainable consumption patterns is a crucial step on the way towards sustainability. The scientific knowledge used to decide which priorities to set and how to enforce them has to converge with societal, political, and economic initiatives on various levels: from individual household decision-making to agreements and commitments in global policy processes. The aim of this thesis is to draw a comprehensive and systematic picture of sustainable consumption, and to do this it develops the concept of Strong Sustainable Consumption Governance. In this concept, consumption is understood as resource consumption. This includes consumption by industries, public consumption, and household consumption. Alongside the availability of resources (including the available sink capacity of the ecosystem) and their use and distribution among the Earth's population, the thesis also considers their contribution to human well-being. This implies giving specific attention to the levels and patterns of consumption. Methods: The thesis introduces the terminology and various concepts of Sustainable Consumption and of Governance. It briefly elaborates on the methodology of Critical Realism and its potential for analysing Sustainable Consumption. It describes the various methods on which the research is based and sets out the political implications a governance approach towards Strong Sustainable Consumption may have. Two models are developed: one for the assessment of the environmental relevance of consumption activities, another to identify the influences of globalisation on the determinants of consumption opportunities. Results: One of the major challenges for Strong Sustainable Consumption is that it is not in line with the current political mainstream, that is, the belief that economic growth can cure all our problems, so its proponents have to battle against a strong headwind. Their motivation, however, is the conviction that there is no alternative. Efforts have to be taken on multiple levels by multiple actors, and all of them are needed, as they constitute the individual strings that together make up the rope. However, everyone must ensure that they are pulling in the same direction. It might be useful to apply a carrot-and-stick strategy to stimulate public debate. The stick in this case is to create a sense of urgency. The carrot would be to articulate more clearly to the public the message that a shrinking of the economy is not as much of a disaster as mainstream economics tends to suggest. In parallel to this it is necessary to demand that governments take responsibility for governance. The dominant strategy is still information provision, but there is ample evidence that hard policies, such as regulatory and economic instruments, are most effective. As for Civil Society Organizations, it is recommended that they overcome the habit of promoting Sustainable (in fact green) Consumption by using marketing strategies and instead foster public debate on values and well-being. This includes appreciating the potential of social innovation. Countless such initiatives are under way, but their potential is still insufficiently explored. Beyond the question of how to multiply such approaches, it is also necessary to establish political macro-structures to foster them.
Abstract:
Phosphorus is a nutrient needed in crop production. While boosting crop yields, it may also accelerate eutrophication in the surface waters receiving the phosphorus runoff. The privately optimal level of phosphorus use is determined by the input and output prices and the crop response to phosphorus. Socially optimal use also takes into account the impact of phosphorus runoff on water quality. Increased eutrophication decreases the economic value of surface waters by deteriorating fish stocks, curtailing the potential for recreational activities, and increasing the probability of mass algae blooms. In this dissertation, the optimal use of phosphorus is modelled as a dynamic optimization problem. The potentially plant-available phosphorus accumulated in soil is treated as a dynamic state variable, the control variable being the annual phosphorus fertilization. For crop response to phosphorus, the state variable is more important than the annual fertilization. The level of this state variable is also a key determinant of the runoff of dissolved, reactive phosphorus. The loss of particulate phosphorus due to erosion is also considered in the thesis, as well as its mitigation by constructing vegetative buffers. The dynamic model is applied to crop production on clay soils. At the steady state, the analysis focuses on the effects of prices, damage parameterization, discount rate and soil phosphorus carryover capacity on optimal steady-state phosphorus use. The economic instruments needed to sustain the social optimum are also analyzed. According to the results, the economic incentives should be conditioned on soil phosphorus values directly, rather than on annual phosphorus applications. The results also emphasize the substantial effect that differences between the discount rates of the farmer and the social planner have on the optimal instruments. The thesis analyzes the optimal soil phosphorus paths from alternative initial levels. It also examines how the erosion susceptibility of a parcel affects these optimal paths. The results underline the significance of the prevailing soil phosphorus status for optimal fertilization levels. With very high initial soil phosphorus levels, both the privately and socially optimal phosphorus application levels are close to zero as the state variable is driven towards its steady state. The soil phosphorus processes are slow; therefore, depleting high-phosphorus soils may take decades. The thesis also presents a methodologically interesting phenomenon in problems of maximizing the flow of discounted payoffs. When both the benefits and the damages are related to the same state variable, the steady-state solution may have an interesting property under very general conditions: the tail of the payoffs of the privately optimal path, as well as of the steady state, may provide a higher social welfare than the respective tail of the socially optimal path. The result is formalized and applied to the framework of optimal phosphorus use developed here.
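As an expository illustration (the functional forms below are assumptions for exposition, not the dissertation's calibrated model), the problem described above has the shape of a discounted dynamic optimization with soil phosphorus $s_t$ as the state and annual fertilization $x_t$ as the control:

\[
\max_{\{x_t\}} \sum_{t=0}^{\infty} \beta^{t}\left[\, p\, f(s_t, x_t) \;-\; c\, x_t \;-\; D\bigl(r(s_t)\bigr) \,\right] \quad \text{s.t.} \quad s_{t+1} = g(s_t, x_t),
\]

where $f$ is the crop response, $p$ and $c$ are the output and input prices, $r(s_t)$ is the dissolved phosphorus runoff driven by the soil phosphorus state, $D$ is the eutrophication damage (omitted in the private optimum), and $g$ captures the soil phosphorus carryover dynamics.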
Abstract:
The problem of recovering information from measurement data has been studied for a long time. In the beginning the methods were mostly empirical, but as early as the late 1960s Backus and Gilbert started the development of mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. The measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are supposed to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove a partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
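For reference (a standard definition, not a result specific to this thesis), a fractional Brownian motion $B_H$ with Hurst parameter $H \in (0,1)$ and amplitude $\sigma$ is the centred Gaussian process with covariance

\[
\mathbb{E}\bigl[B_H(t)\,B_H(s)\bigr] \;=\; \frac{\sigma^{2}}{2}\Bigl(|t|^{2H} + |s|^{2H} - |t-s|^{2H}\Bigr),
\]

so the noisy-data model of the abstract can be read as $y = B_{H_1}^{\mathrm{trend}} + B_{H_2}^{\mathrm{noise}}$, with the pairs $(\sigma_i, H_i)$ as the unknowns to be recovered by statistical inversion.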
Abstract:
The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity. This problem is to find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions, the Dirichlet problem at infinity is solved by assuming only that the sectional curvature has a certain upper bound. A sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set, from above and from below, by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.
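For orientation (standard definitions, not results of the dissertation): a function $u$ is p-harmonic if it is a weak solution of the p-Laplace equation, and A-harmonic equations generalize the divergence-form operator to a suitably structured map $\mathcal{A}$:

\[
\operatorname{div}\bigl(|\nabla u|^{p-2}\,\nabla u\bigr) = 0, \quad 1 < p < \infty; \qquad \operatorname{div}\mathcal{A}(x, \nabla u) = 0.
\]

For $p = 2$ the first equation reduces to the Laplace equation, so the Dirichlet problem at infinity extends the classical problem for harmonic functions.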
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve the distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be exclusively or non-exclusively accessible to end-users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end-users, and fairness of the protocol for end-users. On the one hand, this multifaceted analysis allows us to select the best-suited protocols among the various available ones, based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new rules for the optimality we should take into account, and new properties of the networks that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, where an optimization is applied to a general-view backoff function. This leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme in order to achieve fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities. For the former we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study the reputation scheme for overlays and peer-to-peer networks, where the resource is not placed on a common station but is spread across the network. The theoretical analysis suggests what behavior will be selected by the end station under such a reputation mechanism.
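As a hedged illustration of the kind of scheme analyzed (the thesis optimizes a general-view backoff function; truncated binary exponential backoff below is only the familiar special case, and the constants are assumptions):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 64.0) -> float:
    """Truncated binary exponential backoff with full jitter.

    After the k-th collision a station waits a random time drawn from
    [0, min(cap, base * 2**k)), which decorrelates the retries of
    contending stations sharing the medium.
    """
    window = min(cap, base * 2 ** attempt)
    return random.uniform(0.0, window)

# A station retrying after successive collisions:
for attempt in range(5):
    print(f"collision {attempt}: wait {backoff_delay(attempt):.2f} slots")
```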
Abstract:
The main focus of this study is the epilogue of 4QMMT (4QMiqṣat Maʿaśeh ha-Torah), a text of obscure genre containing a halakhic section, found in Cave 4 at Qumran. In the official edition published in the series Discoveries in the Judaean Desert (DJD X), the extant document was divided by its editors, Elisha Qimron and John Strugnell, into three literary divisions: Section A) the calendar section representing a 364-day solar calendar, Section B) the halakhot, and Section C) an epilogue. The work begins with a text-critical inspection of the manuscripts containing text from the epilogue (mss 4Q397, 4Q398, and 4Q399). However, since the relationship of the epilogue to the other sections of the whole document 4QMMT is under investigation, the calendrical fragments (4Q327 and 4Q394 3-7, lines 1-3) and the halakhic section also receive some attention, albeit more limited and purpose-oriented. In Ch. 2, after a transcription of the fragments of the epilogue, a synopsis is presented in order to evaluate the composite text of the DJD X edition in light of the evidence provided by the individual manuscripts. As a result, several critical comments are offered, and finally an alternative arrangement of the fragments of the epilogue, with an English translation. In the following chapter (Ch. 3), the diversity of the two main literary divisions, the halakhic section and the epilogue, is discussed, and it is demonstrated that the author(s) of 4QMMT adopted and adjusted the covenantal pattern known from biblical law collections, more specifically Deuteronomy. The question of the genre of 4QMMT is investigated in Ch. 4. The final chapter (Ch. 5) contains an analysis of the use of Scripture in the epilogue. In a close reading, both the explicit citations and the more subtle allusions are investigated in an attempt to trace the theology of the epilogue. The main emphases of the epilogue are covenantal faithfulness, repentance, and return. The contents of the document reflect a grave concern for the purity of the cult in Jerusalem, and in the epilogue Deuteronomic language and expressions are used to convince the readers of the necessity of a reformation. The large number of late copies found in Cave 4 at Qumran witnesses to the significance of 4QMMT and the continuing importance of the Jerusalem Temple for the Qumran community.
Abstract:
The study analyses European social policy as a political project that proceeds under the guidance of the European Commission. In the name of modernisation, the project aims to build a new idea of the welfare state. To understand the project, it is necessary to distance oneself both from the juridical competence of the European Union and from the traditional national welfare-state models. The question is about sharing problems, as well as solutions to them: it is the creation and sharing of common views, concepts and images that play a key role in European integration. Drawing on texts and speeches produced by the European Commission, the study throws light on the development of European social policy during the first years of the 2000s. The study "freeze-frames" the welfare debate, which has its starting points in the nation states, in the name of Europe as an entity. The first article approaches the European social model as a story in itself, a preparatory, persuasive narrative that concerns the management of change. The article shows how an audience can be motivated to work towards a set target by using discursive elements in a persuasive manner: the function of a persuasive story is to convince the target audience of the appropriateness of the chosen direction and to shape their identity so that they are favourably disposed to the desired political targets. This is a kind of "intermediate state" where the story, despite its inner contradictions and inaccuracies, succeeds in appearing as an almost self-evident path towards the modern social policy that Europe is currently seen to be in need of. The second article outlines the European social model as a question of governance. Health as a sector of social policy is detached from the old political order, which was based on the welfare state, and is closely linked to the economy. At the same time the population is seen primarily as an economic resource. The Commission is working towards a "Europe of Health" that grapples with the problem of governance with the help of the "healthisation" of society, healthy citizenship, and health economics. The way the Commission speaks is guided by the Union's strong interest in acting as "Europe" in the field of welfare policy. At the same time, the traditional separateness of health policy is effaced so that health policy reforms can be made part of the Union's wider modernisation targets. The third article then shows European social policy as its own area of governance. The article uses an approach based on critical discourse analysis in examining the classification systems and presentation styles adopted by Commission communications, as well as the identities that they help build. In analysing the "new start" of the Lisbon strategy from the perspective of social policy, the article shows how the emphasis has shifted from the persuasive arguments for change, with necessary common European targets in the early stages of the strategy, towards the implementation of reforms: from a narrative to a vision, and from a diagnosis to healing. The phase of global competition represents "the modern" with which European society, with its culture and ways of life, now has to be matched. The Lisbon strategy is a way to direct this societal change, thus building a modern European social policy. The fourth article describes how the Commission uses its communications policy to build practices and techniques of governance and how it persuades citizens to participate in the creation of a European project of change.
This also requires a new kind of agency: agents for whom accountability and responsibility mean integration into and commitment to European society. Accountability is shaped into a decisive factor in implementing the European Union's strategy of change. As such it will displace hierarchical confrontations and emphasise common action with a view to modernising Europe. However, the Union's discourse cannot be described as a political language that genuinely rouses and convinces the audience at the level of everyday life. Keywords: European social policy, EU policy, European social model, European Commission, modernisation of welfare, welfare state, communications, discursiveness.
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. During the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim for the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science. Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
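For orientation: in Chalmers's sense, a combinatorial-state automaton (CSA) has states that are vectors of substates, with each component of the next state vector determined by the current vector and the input. A minimal sketch of one CSA step follows; the three-component toy rules are illustrative assumptions, not the thesis's formal definition:

```python
from typing import Callable, Sequence, Tuple

State = Tuple[int, ...]   # a CSA state is a vector of substates
Input = Tuple[int, ...]

def csa_step(state: State, inp: Input,
             rules: Sequence[Callable[[State, Input], int]]) -> State:
    """Advance a combinatorial-state automaton by one step.

    Each component i of the next state vector is fixed by its own
    rule, which may depend on the whole current vector and the input.
    """
    return tuple(rule(state, inp) for rule in rules)

# Toy example: substates shift right while the first XORs in the input bit.
rules = [
    lambda s, i: s[0] ^ i[0],
    lambda s, i: s[0],
    lambda s, i: s[1],
]
state = (0, 1, 0)
state = csa_step(state, (1,), rules)
print(state)  # -> (1, 0, 1)
```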
Abstract:
Abstract (Mig or mej, själ or sjel? Problems and solutions in the transcription of Swedish song texts): In this article I point out and discuss problems and solutions concerning the phonetic transcription of Swedish song texts. My material consists of 66 phonetically transcribed Swedish songs. The transcriptions were published by The Academy of Finnish Art Song in 2009. The first issue was which level of accuracy should be chosen. The transcriptions were created to be clear at a glance and suitable for the interpretive needs of non-Swedish-speaking singers. The principle was to use as few signs and symbols as possible without sacrificing accuracy. Certain songs were provided with additional information wherever there was a chance of misinterpretation. The second issue was which geographic variety of the language should be visible in the transcription: Standard Swedish or Finland-Swedish? The songs in the volume are a selection of well-known works that are also of international interest. Most were composed by Jean Sibelius (1865–1957), a substantial number of whose songs were based on poems written by Finland's national poet, Johan Ludvig Runeberg (1804–1877). Thus I chose to use the variety of Swedish spoken in Finland, in order to reflect the cultural origin of the songs. This variety differs slightly from the variety spoken in Sweden on both the prosodic and the phonetic level. In singing, the score gives the interpreter enough information about prosody, so the differences mostly concern the phonemes. A fully consistent transcription was, however, difficult to achieve, due to vocal requirements. Thus, for example, in an unstressed final syllable the vowel was often indicated as a central vowel, which in singing receives a more distinct emphasis than in spoken pronunciation, even though this central vowel does not occur in spoken Finland-Swedish.
Abstract:
Thin films are the basis of much recent technological advance, ranging from coatings with mechanical or optical benefits to platforms for nanoscale electronics. In the latter, semiconductors have been the norm ever since silicon became the main construction material for a multitude of electronic components. The array of characteristics of silicon-based systems can be widened by manipulating the structure of the thin films at the nanoscale - for instance, by making them porous. The different characteristics of different films can then, to some extent, be combined by simple superposition. Thin films can be manufactured using many different methods. One emerging field is cluster beam deposition, where aggregates of hundreds or thousands of atoms are deposited one by one to form a layer, the characteristics of which depend on the deposition parameters. One critical parameter is the deposition energy, which dictates how porous, if at all, the layer becomes. Other parameters, such as the sputtering rate and aggregation conditions, affect the size and consistency of the individual clusters. Understanding nanoscale processes, which cannot be observed experimentally, is fundamental to optimizing experimental techniques and inventing new possibilities for advances at this scale. Atomistic computer simulations offer a window into the world of nanometers and nanoseconds in a way unparalleled by the most accurate of microscopes. Transmission electron microscope image simulations can then bridge this gap by providing a tangible link between the simulated and the experimental. In this thesis, the entire process of cluster beam deposition is explored using molecular dynamics and image simulations. The process begins with the formation of the clusters, which is investigated for Si/Ge in an Ar atmosphere. The structure of the clusters is optimized to bring it as close to the experimental ideal as possible. Then, clusters are deposited, one by one, onto a substrate, until a sufficiently thick layer has been produced. Finally, the concept is expanded by further deposition with different parameters, resulting in multiple superimposed layers of different porosities. This work demonstrates that the aggregation of clusters is not entirely understood within the scope of the approximations used in the simulations; yet it also shows how the continued deposition of clusters with varying deposition energy can lead to a novel kind of nanostructured thin film: a multielemental porous multilayer. According to theory, these new structures have characteristics that can be tailored for a variety of applications, with a precision heretofore unseen in conventional multilayer manufacture.
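To make the molecular dynamics step concrete, here is a minimal sketch of a velocity-Verlet integrator with a toy Lennard-Jones force model; production simulations of Si/Ge clusters use many-body potentials, so the force model, units, and parameters below are illustrative assumptions only:

```python
import numpy as np

def lj_forces(pos: np.ndarray, eps: float = 1.0, sigma: float = 1.0) -> np.ndarray:
    """Pairwise Lennard-Jones forces (toy stand-in for the many-body
    potentials actually used for Si/Ge). pos has shape (n_atoms, 3)."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = np.dot(r, r)
            inv6 = (sigma * sigma / d2) ** 3          # (sigma/r)**6
            fij = 24 * eps * (2 * inv6 * inv6 - inv6) / d2 * r
            f[i] += fij
            f[j] -= fij
    return f

def velocity_verlet(pos, vel, mass, dt, steps):
    """Integrate Newton's equations with the velocity-Verlet scheme."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass   # half kick
        pos += dt * vel              # drift
        f = lj_forces(pos)           # recompute forces
        vel += 0.5 * dt * f / mass   # half kick
    return pos, vel

# Two-atom toy system near the LJ minimum (r ~ 2**(1/6) * sigma):
pos = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=0.005, steps=100)
```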