20 results for Fokker-Planck problem
in Helda - Digital Repository of the University of Helsinki
Abstract:
In this study I consider what kind of perspective on the mind-body problem is taken, and can be taken, by a philosophical position called non-reductive physicalism. Many positions fall under this label. The form of non-reductive physicalism which I discuss is in essential respects the position taken by Donald Davidson (1917-2003) and Georg Henrik von Wright (1916-2003). I defend their positions and discuss the unrecognized similarities between their views. Non-reductive physicalism combines two theses: (a) Everything that exists is physical; (b) Mental phenomena cannot be reduced to states of the brain. This means that according to non-reductive physicalism the mental aspect of humans (be it a soul, mind, or spirit) is an irreducible part of the human condition. Davidson and von Wright also claim that, in some important sense, the mental aspect of a human being does not reduce to the physical aspect, that there is a gap between these aspects that cannot be closed. I claim that their arguments for this conclusion are convincing. I also argue that whereas von Wright and Davidson give interesting arguments for the irreducibility of the mental, their physicalism is unwarranted. These philosophers do not give good reasons for believing that reality is thoroughly physical. Notwithstanding the materialistic consensus in contemporary philosophy of mind, the ontology of mind is still uncharted territory where real breakthroughs are not to be expected until a radically new ontological position is developed. The third main claim of this work is that the problem of mental causation cannot be solved from the Davidsonian-von Wrightian perspective. The problem of mental causation is the problem of how mental phenomena such as beliefs can cause physical movements of the body. As I see it, the essential point of non-reductive physicalism - the irreducibility of the mental - and the problem of mental causation are closely related.
If mental phenomena do not reduce to causally effective states of the brain, then what justifies the belief that mental phenomena have causal powers? If mental causes do not reduce to physical causes, then how can we tell when - or whether - the mental causes in terms of which human actions are explained are actually effective? I argue that this - how to decide when mental causes really are effective - is the real problem of mental causation. The motivation to explore and defend a non-reductive position stems from the belief that reductive physicalism leads to serious ethical problems. My claim is that Davidson's and von Wright's ultimate reason for defending a non-reductive view comes down to their belief that a reductive understanding of human nature would be a narrow and possibly harmful perspective. The final conclusion of my thesis is that von Wright's and Davidson's positions provide a starting point from which the current scientistic philosophy of mind can be critically explored further in the future.
Abstract:
Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays the psychological theories of design, also called the design cognition literature, describe the design process from the information-processing viewpoint. These models co-exist with normative standards of how designs should be crafted. In many places there are concrete discrepancies between the two, in a way that resembles the differences between actual and ideal decision-making. This study aimed to explore a possible difference related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been investigated in a preliminary way. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there are implicit and explicit forms of decomposition, but have not provided a theoretical basis for the two. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition.
The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, judging from the use of structured strategies, the designers relied throughout on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered, along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.
Abstract:
The problem of recovering information from measurement data has already been studied for a long time. In the beginning, the methods were mostly empirical, but already towards the end of the sixties Backus and Gilbert started the development of mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. The measurement data is usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are supposed to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove a partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
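To make the fractional Brownian motion (fBm) setting concrete, here is a small numerical sketch. It is not the statistical-inversion method of the thesis: it simulates fBm via a Cholesky factorization of the exact increment covariance and recovers the Hurst parameter from the variance scaling of increments; the function names are illustrative only.

```python
import numpy as np

def fbm_sample(n, H, seed=0):
    """Simulate fractional Brownian motion at n points by drawing
    fractional Gaussian noise (the increments) from its exact
    covariance via a Cholesky factorization, then cumulating."""
    rng = np.random.default_rng(seed)
    k = np.arange(n)
    # autocovariance of unit-variance fractional Gaussian noise
    gamma = 0.5 * ((k + 1.0)**(2*H) - 2.0*k**(2*H) + np.abs(k - 1.0)**(2*H))
    C = gamma[np.abs(k[:, None] - k[None, :])]      # Toeplitz covariance
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))   # jitter for stability
    return np.cumsum(L @ rng.standard_normal(n))

def estimate_hurst(x, lags=range(2, 20)):
    """Estimate H from the scaling Var[x(t+tau) - x(t)] ~ tau^(2H):
    the slope of log-variance against log-lag equals 2H."""
    lags = np.asarray(list(lags))
    v = np.array([np.var(x[l:] - x[:-l]) for l in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0
```

For example, `estimate_hurst(fbm_sample(1500, 0.7))` should land near 0.7; the thesis instead treats the amplitude and the Hurst parameter jointly within a statistical-inversion (Bayesian) framework.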
Abstract:
The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity. This problem is to find a p-harmonic function on the space that extends continuously to the boundary at infinity and attains given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions the Dirichlet problem at infinity is solved by assuming only that the sectional curvature has a certain upper bound. A sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set, from above and from below, by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.
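For orientation, the p-harmonic functions in question are solutions of the p-Laplace equation; in standard notation (not quoted from the dissertation itself):

```latex
\Delta_p u \;:=\; \operatorname{div}\!\bigl(\lvert \nabla u \rvert^{p-2}\,\nabla u\bigr) \;=\; 0,
\qquad 1 < p < \infty .
```

The Dirichlet problem at infinity then asks, for given boundary data \(f \in C(\partial_\infty M)\), for a p-harmonic function \(u\) on \(M\) with \(u \in C(\overline{M})\) and \(u|_{\partial_\infty M} = f\). For \(p = 2\) this reduces to the classical Dirichlet problem at infinity for harmonic functions.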
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve the distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end-users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end-users, and fairness of the protocol for end-users. On the one hand, this multifaceted analysis allows us to select the best-suited protocols among the various available ones, based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new optimality rules that we should take into account, and new properties of networks that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, where an optimization is applied to a general-view backoff function. It leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme in order to achieve fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities.
For the first we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study the reputation scheme for overlays and peer-to-peer networks, where the resource is not placed on a common station but is spread across the network. The theoretical analysis suggests what behavior will be selected by an end station under such a reputation mechanism.
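As a toy illustration of the backoff setting studied here, the following sketch simulates a generic slotted binary exponential backoff. This is not the thesis's optimized general-view backoff function, just the textbook doubling rule; all parameter values are hypothetical.

```python
import random

def simulate_backoff(n_nodes=8, n_slots=10000, w_min=2, w_max=256, seed=1):
    """Slotted binary exponential backoff: each node draws a uniform
    wait from its current contention window; the window doubles on a
    collision and resets on a success. Returns the fraction of slots
    carrying exactly one (successful) transmission."""
    rng = random.Random(seed)
    window = [w_min] * n_nodes
    timer = [rng.randrange(w) for w in window]
    successes = 0
    for _ in range(n_slots):
        ready = [i for i in range(n_nodes) if timer[i] == 0]
        if len(ready) == 1:                 # exactly one sender: success
            i = ready[0]
            successes += 1
            window[i] = w_min
            timer[i] = rng.randrange(window[i])
        elif len(ready) > 1:                # collision: everyone backs off
            for i in ready:
                window[i] = min(2 * window[i], w_max)
                timer[i] = rng.randrange(window[i])
        for i in range(n_nodes):            # the slot elapses
            if timer[i] > 0:
                timer[i] -= 1
    return successes / n_slots
```

With the default parameters the simulated throughput settles well above zero but below the channel capacity of 1; optimizing the backoff function itself, as the thesis does, is precisely about improving this trade-off.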
Abstract:
The main focus of this study is the epilogue of 4QMMT (4QMiqsat Ma'aseh ha-Torah), a text of obscure genre containing a halakhic section, found in cave 4 at Qumran. In the official edition published in the series Discoveries in the Judaean Desert (DJD X), the extant document was divided by its editors, Elisha Qimron and John Strugnell, into three literary divisions: Section A, the calendar section representing a 364-day solar calendar; Section B, the halakhot; and Section C, an epilogue. The work begins with a text-critical inspection of the manuscripts containing text from the epilogue (mss 4Q397, 4Q398, and 4Q399). However, since the relationship of the epilogue to the other sections of the whole document 4QMMT is under investigation, the calendrical fragments (4Q327 and 4Q394 3-7, lines 1-3) and the halakhic section also receive some attention, albeit more limited and purpose-oriented. In Ch. 2, after a transcription of the fragments of the epilogue, a synopsis is presented in order to evaluate the composite text of the DJD X edition in light of the evidence provided by the individual manuscripts. As a result, several critical comments are offered, and finally an alternative arrangement of the fragments of the epilogue, with an English translation. In the following chapter (Ch. 3), the diversity of the two main literary divisions, the halakhic section and the epilogue, is discussed, and it is demonstrated that the author(s) of 4QMMT adopted and adjusted the covenantal pattern known from biblical law collections, more specifically Deuteronomy. The question of the genre of 4QMMT is investigated in Ch. 4. The final chapter (Ch. 5) contains an analysis of the use of Scripture in the epilogue. In a close reading, both the explicit citations and the more subtle allusions are investigated in an attempt to trace the theology of the epilogue. The main emphases of the epilogue are covenantal faithfulness, repentance, and return.
The contents of the document reflect a grave concern for the purity of the cult in Jerusalem, and in the epilogue Deuteronomic language and expressions are used to convince the readers of the necessity of a reformation. The large number of late copies found in cave 4 at Qumran witnesses to the significance of 4QMMT and the continuing importance of the Jerusalem Temple for the Qumran community.
Abstract:
The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves, and local light-element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman-alpha forests, type Ia supernovae, baryon acoustic oscillations, and Sunyaev-Zeldovich regions, to name a few.

The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during some fraction of the first second after the Big Bang.

This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment's analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage.

We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory.
Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model with correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact, due to their power in model selection.
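The destriping idea mentioned above can be illustrated with a minimal numpy sketch: the time-ordered data are modeled as a sky map scanned through a pointing, plus one constant offset per noise baseline, and the map and offsets are re-estimated alternately. A real destriper solves a generalized least-squares system in one go; this idealized iteration, and all names in it, are illustrative only.

```python
import numpy as np

def destripe(tod, pix, n_pix, baseline_len, n_iter=20):
    """Minimal destriper for tod = map[pix] + offset(baseline) + noise.
    Alternately (1) bin the offset-cleaned data into a map and
    (2) re-estimate each baseline's offset from the map residual.
    Assumes len(tod) is a multiple of baseline_len."""
    base_idx = np.arange(len(tod)) // baseline_len
    offsets = np.zeros(len(tod) // baseline_len)
    for _ in range(n_iter):
        cleaned = tod - offsets[base_idx]
        hits = np.bincount(pix, minlength=n_pix)
        m = np.bincount(pix, weights=cleaned, minlength=n_pix) / np.maximum(hits, 1)
        resid = tod - m[pix]
        offsets = np.bincount(base_idx, weights=resid) / baseline_len
        offsets -= offsets.mean()   # fix the degenerate overall level
    return m, offsets
```

The mean-subtraction step reflects a genuine degeneracy: a constant can be traded freely between the map and the offsets, so only zero-mean offsets are recoverable.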
Abstract:
Aims: To develop and validate tools for estimating the residual noise covariance in Planck frequency maps, to quantify signal error effects, and to compare different techniques for producing low-resolution maps.
Methods: We derive analytical estimates of the covariance of the residual noise contained in low-resolution maps produced using a number of map-making approaches. We test these analytical predictions, and their impact on angular power spectrum estimation, using Monte Carlo simulations. We use simulations to quantify the level of signal error incurred in the different resolution-downgrading schemes considered in this work.
Results: We find excellent agreement between the optimal residual noise covariance matrices and Monte Carlo noise maps. For destriping map-makers, the extent of agreement is dictated by the knee frequency of the correlated noise component and the chosen baseline offset length. Signal striping is shown to be insignificant when properly dealt with. In map-resolution downgrading, we find that a carefully selected window function is required to reduce aliasing to the sub-percent level at multipoles ℓ > 2N_side, where N_side is the HEALPix resolution parameter. We show that sufficient characterization of the residual noise is unavoidable if one is to draw reliable constraints on large-scale anisotropy.
Conclusions: We have described how to compute low-resolution maps with a controlled sky signal level and a reliable estimate of the covariance of the residual noise. We have also presented a method for smoothing the residual noise covariance matrices to describe the noise correlations in smoothed, bandwidth-limited maps.
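The analytic-versus-Monte-Carlo comparison described in the Methods can be mimicked in a toy setting where the analytic answer is known exactly: uniform hit count and uncorrelated white noise, so the pixel-noise covariance is diagonal. All numbers here are illustrative, not Planck values.

```python
import numpy as np

def mc_noise_cov(n_pix, hits, sigma, n_sims, seed=0):
    """Monte Carlo covariance of binned white-noise maps: each pixel
    is the average of `hits` noise samples of std `sigma`, so the
    analytic covariance is (sigma**2 / hits) * I."""
    rng = np.random.default_rng(seed)
    maps = sigma * rng.standard_normal((n_sims, n_pix, hits)).mean(axis=2)
    return np.cov(maps, rowvar=False)   # pixels are the variables
```

For `hits = 4` and `sigma = 1` the sample covariance should be close to 0.25 on the diagonal and near zero elsewhere; for real destriped maps the residual correlated noise makes the matrix dense, which is exactly why the analytic estimates matter.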
Abstract:
According to certain arguments, computation is observer-relative, either in the sense that many physical systems implement many computations (Hilary Putnam) or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. In the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim of the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, at least to some degree. This observation helps clarify the role the notion of computation can play in cognitive science.
Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
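Combinatorial-state automata give the automaton state internal structure: the state is a vector of substates, and each component of the next state may depend on the whole current state vector and the input. The following toy sketch conveys the idea (a 2-bit counter); it is not the formal definition given in the thesis, and all names are illustrative.

```python
class CSA:
    """A toy combinatorial-state automaton: the state is a vector of
    substates, and every component of the next state may depend on the
    entire current state vector and the current input."""
    def __init__(self, transition):
        self.transition = transition   # (state, input) -> next state

    def run(self, state, inputs):
        for inp in inputs:
            state = self.transition(state, inp)
        return state

# Example: a 2-bit binary counter as a CSA over substates (b1, b0).
def count_step(state, inp):
    b1, b0 = state
    if inp == 1:                        # increment on input 1
        return (b1 ^ (b0 == 1), 1 - b0)
    return state                        # hold on input 0

counter = CSA(count_step)
```

Here `counter.run((0, 0), [1, 1, 1])` steps through (0, 1) and (1, 0) to (1, 1). The point of such structured states is that they are harder to trivialize than the monolithic state-to-state mappings exploited by Putnam- and Searle-style arguments.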
Abstract:
Abstract (Mig or mej, själ or sjel? Problems and solutions in the transcription of Swedish song texts): In this article I point out and discuss problems and solutions concerning the phonetic transcription of Swedish song texts. My material consists of 66 Swedish songs phonetically transcribed. The transcriptions were published by The Academy of Finnish Art Song in 2009. The first issue was which level of accuracy should be chosen. The transcriptions were created to be clear at a glance and suitable for the interpretive needs of non-Swedish-speaking singers. The principle was to use as few signs and symbols as possible without sacrificing accuracy. Certain songs were provided with additional information whenever there was a chance of misinterpretation. The second issue was which geographic variety of the language should be visible in the transcription: Standard Swedish or Finland-Swedish? The songs in the volume are a selection of well-known works that are also of international interest. Most were composed by Jean Sibelius (1865–1957), a substantial number of whose songs were based on poems written by Finland’s national poet, Johan Ludvig Runeberg (1804–1877). Thus I chose to use the variety of Swedish spoken in Finland, in order to reflect the cultural origin of the songs. This variety differs slightly from the variety spoken in Sweden on both the prosodic and the phonetic level. In singing, the note-text gives the interpreter enough information about prosody, so the differences concern mostly the phonemes. A fully consistent transcription was, however, difficult to make, due to vocal requirements. For example, in an unstressed final syllable the vowel was often indicated as a central vowel, which in singing is given a more direct emphasis than in literal pronunciation, even though this central vowel does not occur in spoken Finland-Swedish.
Abstract:
Ingarden (1962, 1964) postulates that artworks exist in an “Objective purely intentional” way. According to this view, objectivity and subjectivity are opposed forms of existence, parallel to the opposition between realism and idealism. Using arguments from cognitive science, experimental psychology, and semiotics, this lecture proposes that, particularly in aesthetic phenomena, realism and idealism are not pure oppositions; rather, they are aspects of a single process of cognition operating in different strata. Furthermore, the concept of realism can be conceived as an empirical extreme of idealism, and the concept of idealism as a pre-operative extreme of realism. Both kinds of systems of knowledge are mutually associated by a synecdoche, performing major tasks of mental ordering and categorisation. This contribution suggests that the supposed opposition between objectivity and subjectivity raises, first of all, a problem of translatability, more than a problem of existential categories. Synecdoche seems to be a very basic transaction of the mind, establishing ontologies (in the more Ingardenian sense of the term). Wegrzecki (1994, 220) defines ontology as “the central domain of philosophy to which other its parts directly or indirectly refer”. Thus, ontology operates within philosophy as the synecdoche does within language, pointing the sense of the general into the particular and/or vice versa. The many affinities and similarities between different sign systems, like those found across the interrelationships of the arts, are embedded in a transversal, synecdochic intersemiosis. An important question, from this view, is whether Ingarden’s pure objectivities rest basically on the impossibility of translation, therefore being absolutely self-referential constructions. In such a case, it would be impossible to translate pure intentionality into something else, such as acts or products.
Abstract:
We still know little about why strategy processes often involve participation problems. In this paper, we argue that this crucial issue is linked to fundamental assumptions about the nature of strategy work. Hence, we need to examine how strategy processes are typically made sense of and what roles are assigned to specific organizational members. For this purpose, we adopt a critical discursive perspective that allows us to discover how specific conceptions of strategy work are reproduced and legitimized in organizational strategizing. Our empirical analysis is based on an extensive research project on strategy work in 12 organizations. As a result of our analysis, we identify three central discourses that seem to be systematically associated with nonparticipatory approaches to strategy work: “mystification,” “disciplining,” and “technologization.” However, we also distinguish three strategy discourses that promote participation: “self-actualization,” “dialogization,” and “concretization.” Our analysis shows that strategy as practice involves alternative and even competing discourses that have fundamentally different kinds of implications for participation in strategy work. We argue from a critical perspective that it is important to be aware of the inherent problems associated with dominant discourses as well as to actively advance the use of alternative ones.