26 results for DDAP Dock Door Assignment Problem
Abstract:
In this study I consider what kind of perspective on the mind-body problem is taken, and can be taken, by a philosophical position called non-reductive physicalism. Many positions fall under this label. The form of non-reductive physicalism which I discuss is in essential respects the position taken by Donald Davidson (1917-2003) and Georg Henrik von Wright (1916-2003). I defend their positions and discuss the unrecognized similarities between their views. Non-reductive physicalism combines two theses: (a) Everything that exists is physical; (b) Mental phenomena cannot be reduced to states of the brain. This means that according to non-reductive physicalism the mental aspect of humans (be it a soul, mind, or spirit) is an irreducible part of the human condition. Davidson and von Wright also claim that, in some important sense, the mental aspect of a human being does not reduce to the physical aspect, that there is a gap between these aspects that cannot be closed. I claim that their arguments for this conclusion are convincing. I also argue that whereas von Wright and Davidson give interesting arguments for the irreducibility of the mental, their physicalism is unwarranted. These philosophers do not give good reasons for believing that reality is thoroughly physical. Notwithstanding the materialistic consensus in contemporary philosophy of mind, the ontology of mind is still uncharted territory where real breakthroughs are not to be expected until a radically new ontological position is developed. The third main claim of this work is that the problem of mental causation cannot be solved from the Davidsonian - von Wrightian perspective. The problem of mental causation is the problem of how mental phenomena like beliefs can cause physical movements of the body. As I see it, the essential point of non-reductive physicalism - the irreducibility of the mental - and the problem of mental causation are closely related.
If mental phenomena do not reduce to causally effective states of the brain, then what justifies the belief that mental phenomena have causal powers? If mental causes do not reduce to physical causes, then how can we tell when - or whether - the mental causes in terms of which human actions are explained are actually effective? I argue that this - how to decide when mental causes really are effective - is the real problem of mental causation. The motivation to explore and defend a non-reductive position stems from the belief that reductive physicalism leads to serious ethical problems. My claim is that Davidson's and von Wright's ultimate reason to defend a non-reductive view traces back to their belief that a reductive understanding of human nature would be a narrow and possibly harmful perspective. The final conclusion of my thesis is that von Wright's and Davidson's positions provide a starting point from which the current scientistic philosophy of mind can be critically explored further in the future.
Abstract:
In the summer of 1788 the Swedish navy was struck by a severe febrile disease. The fever, later identified as febris recurrens, or relapsing fever, had its origin in the Russian fleet: the crew aboard the ship Vladislav, the war prize from the battle of Hogland, carried a large number of infected body lice. After the fleet's arrival at Sveaborg the disease spread rapidly among the crews, and also within the fortress garrison. Conditions in the military, both in the land army and above all in the navy, favored the spread of epidemics. The cramped quarters, the monotonous diet, the substandard drinking water and the poor hygiene all promoted the emergence and spread of various epidemics, and the deteriorating general condition of the crews made the diseases more devastating than they would otherwise have been. The shortage of men and materiel during Gustav III's Russian war was enormous; among other things, the lack of medical personnel and equipment was glaring. When the navy and the army were struck by an epidemic of catastrophic dimensions, the authorities stood helpless. The epidemic clearly shows how ill-prepared the entire war was and how mismanaged the navy's medical care was. At Sveaborg conditions were appalling: half the garrison is said to have died, and piles of corpses lay everywhere. Barracks and other buildings were converted into makeshift hospitals, and there was a shortage of everything. The medical authorities were represented by the second field surgeon, sent to the fortress, who together with the physicians there did his best according to the treatment methods of the time. When the Swedish battle fleet sailed over to Karlskrona in November, the epidemic spread in the town, and the disease raged among the civilian population as well. Since the cause of the disease and its mode of transmission were unknown, it was possible neither to prevent the epidemic from spreading nor to apply adequate treatment. On the contrary, with the discharged boatswains the disease spread to the other parts of the realm as well. During 1789 the fleet was nearly incapable of operations because of the many cases of illness. During the late winter and spring of 1790 the epidemic subsided. The epidemic was a difficult medical problem. To investigate the situation in Karlskrona, the acting government, the commission of inquiry and the Collegium medicum each sent their own representatives to the town. The various physicians' views of the disease were grounded mainly in the belief that diseases arise through miasma, and improving the air quality was seen as an essential form of treatment. This study compares the views of the different authorities, and of some of the physicians on the spot, on the nature of the disease, its causes and its origin. Most trace the disease to the Russian fleet and mention some form of contagion; as the principal cause of the disease, however, they cite miasma, and the recommended treatments reflected the humoral-pathological view. The first admiralty physician, Arvid Faxe, held a different opinion, believing solely in the transmission of the disease by contagion. The epidemic was also a political problem. It remained a local concern until the navy's operations were hampered by the shortage of men, whereupon it became a matter at the highest level. The king intervened in the summer of 1789 by founding a commission with rather extensive powers. In Karlskrona the military authorities and the physicians seem to have distrusted and blamed one another for the catastrophe, and the relationship between the investigators sent to the town and the military was likewise inflamed. The sources reflect rivalry, envy and mutual competition. The shortage of personnel was severe, and a culprit was sought outside one's own circle; the Danish-born apothecary, with his allegedly unfit medicines, became a perfect scapegoat. The battle fleet is estimated to have lost about 10,000 men to disease, mainly in Karlskrona (civilians included). The army and the archipelago fleet are likewise reported to have lost about 10,000 men, whereas the number of army soldiers killed in battle was only about 1,500. In total, then, some 20,000 people are believed to have lost their lives, to relapsing fever but also to other epidemics raging at the same time; this figure does not include the other parts of the realm. The epidemic in question can thus with good reason be considered the greatest medical catastrophe of eighteenth-century Sweden.
Abstract:
Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays, the psychological theories of design, also called the design cognition literature, describe the design process from the information-processing viewpoint. These models co-exist with the normative standards of how designs should be crafted. In many places there are concrete discrepancies between the two, in a way that resembles the differences between actual and ideal decision-making. This study aimed to explore a possible difference related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been preliminarily investigated. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there is both implicit and explicit decomposition, but have not provided a theoretical basis for the distinction. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition.
The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, as inferred from the use of structured strategies, the designers consistently relied on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.
Abstract:
Understanding the process of cell division is crucial for modern cancer medicine due to the central role of uncontrolled cell division in this disease. Cancer involves unrestrained proliferation as a result of cells losing normal control and being driven through the cell cycle, where they normally would be non-dividing or quiescent. Progression through the cell cycle is thought to be dependent on the sequential activation of cyclin-dependent kinases (Cdks). The full activation of Cdks requires the phosphorylation of a conserved residue (threonine-160 on human Cdk2) on the T-loop of the kinase domain. In metazoan species, a trimeric complex consisting of Cdk7, cyclin H and Mat1 has been suggested to be the T-loop kinase of several Cdks. In addition, Cdk7 has been implicated in the regulation of transcription: Cdk7, cyclin H, and Mat1 can be found as subunits of the general transcription factor TFIIH. Cdk7, in this context, phosphorylates the carboxy-terminal domain (CTD) of the large subunit of RNA polymerase II (RNA pol II), specifically on serine-5 residues of the CTD repeat. The regulation of Cdk7 in these and other functions is not well known, and the unambiguous characterization of the in vivo role of Cdk7 in both T-loop activation and CTD serine-5 phosphorylation has proved challenging. In this study, the fission yeast Cdk7-cyclin H homologous complex, Mcs6-Mcs2, is identified as the in vivo T-loop kinase of Cdk1 (Cdc2). The study also identifies multiple levels of regulation of Mcs6 kinase activity: association with Pmh1, a novel fission yeast protein that is the apparent homolog of metazoan Mat1, and T-loop phosphorylation of Mcs6, mediated by Csk1, a monomeric T-loop kinase with similarity to Cak1 of budding yeast. In addition, Skp1, a component of the SCF (Skp1-Cullin-F-box protein) ubiquitin ligase, is identified by its interactions with Mcs2 and Pmh1.
The Skp1 association with Mcs2 and Pmh1 is, however, SCF-independent and does not involve proteolytic degradation, but may reflect a novel mechanism to modulate the activity or complex assembly of Mcs6. In addition to Cdk7, Cdk8 has also been shown to have CTD serine-5 kinase activity in vitro. Cdk8 is not essential in yeast but has been shown to function as a transcriptional regulator; the function of Cdk8 is unknown in flies and mammals. This prompted the investigation of murine Cdk8 and its potential role as a redundant CTD serine-5 kinase. We find that Cdk8 is required for development prior to implantation, at a time that is coincident with a burst of Cdk8 expression during normal development. The results do not support a role for Cdk8 as a serine-5 CTD kinase in vivo but rather show an unexpected requirement for Cdk8 early in mammalian development. The results presented in this thesis extend our current knowledge of the regulation of the cell cycle by characterizing the function of two distinct cell-cycle-regulating T-loop kinases, including the unambiguous identification of Mcs6, the fission yeast Cdk7 homolog, as the T-loop kinase of Cdk1. The results also indicate that the function of Mcs6 is conserved from fission yeast to human Cdk7 and suggest novel mechanisms by which the distinct functions of Cdk7 and Mcs6 could be regulated. These findings are important for our understanding of how progression of the cell cycle and proper transcription are controlled during normal development and tissue homeostasis, but also under conditions where cells have escaped these control mechanisms, e.g. in cancer.
Abstract:
The problem of recovering information from measurement data has been studied for a long time. In the beginning the methods were mostly empirical, but towards the end of the 1960s Backus and Gilbert started the development of mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. The measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are supposed to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
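As a concrete illustration of the setting described above, the following Python sketch simulates fractional Brownian motion exactly via the Cholesky factor of its covariance and recovers the Hurst parameter from the variance scaling of its increments. This is an illustrative assumption on my part, not code or the estimator from the thesis: the statistical inversion method discussed there is a Bayesian approach, whereas the crude log-log regression below only shows what "retrieving the Hurst parameter" means operationally.

```python
import numpy as np

def fbm_cholesky(n, hurst, amplitude=1.0, seed=0):
    """Simulate fBm at t = 1..n via the Cholesky factor of its exact
    covariance (O(n^3); fine for illustration, not for large n)."""
    t = np.arange(1, n + 1)
    s, u = np.meshgrid(t, t)
    # Cov(B_H(s), B_H(t)) = a^2/2 * (s^{2H} + t^{2H} - |t - s|^{2H})
    cov = 0.5 * amplitude**2 * (
        s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst)
    )
    L = np.linalg.cholesky(cov)
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal(n)

def hurst_from_increments(x):
    """Crude Hurst estimate: for fBm, Var(X_{t+m} - X_t) ~ m^{2H},
    so the slope of log Var against log m is 2H."""
    lags = np.arange(2, 20)
    v = [np.var(x[m:] - x[:-m]) for m in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0

path = fbm_cholesky(500, hurst=0.7)
h = hurst_from_increments(path)  # should land reasonably near 0.7
```

The stationarity of fBm increments is what makes the variance-scaling estimate meaningful; a fully Bayesian treatment, as in the thesis, would instead place a prior on the amplitude and Hurst parameter and compute a posterior given the measurement vector.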
Abstract:
The object of this dissertation is to study globally defined bounded p-harmonic functions on Cartan-Hadamard manifolds and Gromov hyperbolic metric measure spaces. Such functions are constructed by solving the so-called Dirichlet problem at infinity, i.e. the problem of finding a p-harmonic function on the space that extends continuously to the boundary at infinity and attains given boundary values there. The dissertation consists of an overview and three published research articles. In the first article the Dirichlet problem at infinity is considered for more general A-harmonic functions on Cartan-Hadamard manifolds. In the special case of two dimensions the Dirichlet problem at infinity is solved by assuming only that the sectional curvature has a certain upper bound, and a sharpness result is proved for this upper bound. In the second article the Dirichlet problem at infinity is solved for p-harmonic functions on Cartan-Hadamard manifolds under the assumption that the sectional curvature is bounded outside a compact set, from above and from below, by functions that depend on the distance to a fixed point. The curvature bounds allow examples of quadratic decay and examples of exponential growth. In the final article a generalization of the Dirichlet problem at infinity for p-harmonic functions is considered on Gromov hyperbolic metric measure spaces. Existence and uniqueness results are proved, and Cartan-Hadamard manifolds are considered as an application.
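For reference, the central objects can be written out explicitly; this is the standard formulation (my summary of common definitions, not text quoted from the dissertation):

```latex
% p-harmonic functions are (weak) solutions of the p-Laplace equation
\[
  \Delta_p u \;=\; \operatorname{div}\!\bigl(|\nabla u|^{p-2}\,\nabla u\bigr) \;=\; 0,
  \qquad 1 < p < \infty,
\]
% which reduces to the ordinary Laplace equation when p = 2. The
% Dirichlet problem at infinity on a Cartan-Hadamard manifold M with
% boundary at infinity \partial_\infty M asks: given boundary data
% \theta \in C(\partial_\infty M), find u \in C(\bar M) such that
\[
  \Delta_p u = 0 \ \text{in } M, \qquad u\big|_{\partial_\infty M} = \theta .
\]
```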
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve the distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end users, and fairness of the protocol for end users. On the one hand, this multifaceted analysis allows us to select the best-suited protocols from a set of available ones based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new optimality criteria that we should take into account, as well as new properties of the networks that can no longer be neglected. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol we present an analysis of a general backoff scheme, in which an optimization is applied to a general-form backoff function. This leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme in order to achieve fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities.
For the former we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study the reputation scheme for overlays and peer-to-peer networks, where the resource is not located at a common station but is spread across the network. The theoretical analysis suggests which behavior an end station will select under such a reputation mechanism.
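The general backoff schemes analyzed in the thesis include, as the classic special case, binary exponential backoff. The following Python sketch is illustrative only; the function names and the toy channel model are my own, not the thesis's protocols. It shows the basic mechanism a general backoff function generalizes: doubling the contention window after each collision and drawing a random wait from it.

```python
import random

def binary_exponential_backoff(attempt, base_slot=1.0, cap=10):
    """Classic binary exponential backoff: after the k-th collision,
    wait a uniform number of slots drawn from [0, 2^min(k, cap) - 1]."""
    window = 2 ** min(attempt, cap)
    return base_slot * random.randrange(window)

def send_with_backoff(channel_busy, max_attempts=16, seed=42):
    """Retry until the shared channel is free, backing off after each
    collision; returns the number of attempts used."""
    random.seed(seed)
    for attempt in range(max_attempts):
        if not channel_busy(attempt):
            return attempt + 1
        _ = binary_exponential_backoff(attempt)  # time.sleep() in a real system
    raise RuntimeError("gave up after max_attempts collisions")

# Toy channel that is busy for the first three attempts.
attempts_used = send_with_backoff(lambda k: k < 3)  # returns 4
```

A general-form backoff function, in the spirit of the analysis above, would replace the fixed doubling rule with an arbitrary mapping from attempt number to window size, which is what makes an optimality condition over the whole family meaningful.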
The Mediated Immediacy: João Batista Libanio and the Question of Latin American Liberation Theology
Abstract:
This study is a systematic analysis of mediated immediacy in the production of the Brazilian professor of theology João Batista Libanio. He stresses both ethical mediation and the immediate character of the faith. Libanio has sought an answer to the problem of science and faith. He makes use of the neo-scholastic distinction between matter and form. According to St. Thomas Aquinas, God cannot be known as a scientific object, but it is possible to predicate a formal theological content of other subject matter with the help of revelation. This viewpoint was emphasized in neo-Thomism and supported by the liberation theologians. For them, the material starting point was social science. It becomes a theologizable or revealable (revelabile) reality. This social science has its roots in Latin American Marxism, which was influenced by the school of Louis Althusser and considered Marxism a "science of history". The synthesis of Thomism and Marxism is a challenge Libanio faced, especially in his Teologia da libertação from 1987. He emphasized the need for a genuinely spiritual and ethical discernment, and was particularly critical of the ethical implications of class struggle. Libanio's thinking has a strong hermeneutic flavor: it is more important to understand than to explain. He does not deny the need for social-scientific data, but maintains that they cannot be the exclusive starting point of theology. There are different readings of the world, both scientific and theological. A holistic understanding of the nature of religious experience is needed. Libanio follows the interpretation given by H. C. de Lima Vaz, according to whom the Hegelian dialectic is a rational circulation between the totality and its parts. He also recalls Oscar Cullmann's idea of God's Kingdom as "already and not yet". In other words, there is a continuous mediation of grace into the natural world. This dialectic is reflected in ethics: faith must be verified in good works.
Libanio uses the Thomist fides caritate formata principle and the modern orthopraxis thinking represented by Edward Schillebeeckx. One needs both the "ortho" of good faith and the "praxis" of right action. The mediation of praxis is the mediation of human and divine love. Libanio's theology has strong roots in the Jesuit spirituality that places the emphasis on contemplation in action.
Abstract:
The main focus of this study is the epilogue of 4QMMT (4QMiqsat Ma'aseh ha-Torah), a text of obscure genre containing a halakhic section, found in Cave 4 at Qumran. In the official edition published in the series Discoveries in the Judaean Desert (DJD X), the extant document was divided by its editors, Elisha Qimron and John Strugnell, into three literary divisions: Section A, the calendar section representing a 364-day solar calendar; Section B, the halakhot; and Section C, an epilogue. The work begins with a text-critical inspection of the manuscripts containing text from the epilogue (mss 4Q397, 4Q398, and 4Q399). However, since the relationship of the epilogue to the other sections of the whole document 4QMMT is under investigation, the calendrical fragments (4Q327 and 4Q394 3-7, lines 1-3) and the halakhic section also receive some attention, albeit more limited and purpose-oriented. In Ch. 2, after a transcription of the fragments of the epilogue, a synopsis is presented in order to evaluate the composite text of the DJD X edition in light of the evidence provided by the individual manuscripts. As a result, several critical comments are offered, and finally an alternative arrangement of the fragments of the epilogue with an English translation. In the following chapter (Ch. 3), the diversity of the two main literary divisions, the halakhic section and the epilogue, is discussed, and it is demonstrated that the author(s) of 4QMMT adopted and adjusted the covenantal pattern known from biblical law collections, more specifically Deuteronomy. The question of the genre of 4QMMT is investigated in Ch. 4. The final chapter (Ch. 5) contains an analysis of the use of Scripture in the epilogue. In a close reading, both the explicit citations and the more subtle allusions are investigated in an attempt to trace the theology of the epilogue. The main emphases of the epilogue are covenantal faithfulness, repentance, and return.
The contents of the document reflect a grave concern for the purity of the cult in Jerusalem, and in the epilogue Deuteronomic language and expressions are used to convince the readers of the necessity of a reformation. The large number of late copies found in Cave 4 at Qumran attests to the significance of 4QMMT and the continuing importance of the Jerusalem Temple for the Qumran community.
Abstract:
Aim: To characterize the inhibition of platelet function by paracetamol in vivo and in vitro, to evaluate the possible interaction of paracetamol with diclofenac or valdecoxib in vivo, and to assess the analgesic effect of the drugs in an experimental pain model. Methods: Healthy volunteers received increasing doses of intravenous paracetamol (15, 22.5 and 30 mg/kg), or the combination of paracetamol 1 g and diclofenac 1.1 mg/kg or valdecoxib 40 mg (as the pro-drug parecoxib). Inhibition of platelet function was assessed with photometric aggregometry, the platelet function analyzer (PFA-100), and the release of thromboxane B2 (TxB2). Analgesia was assessed with the cold pressor test. The inhibition coefficient of platelet aggregation by paracetamol was determined, as was the nature of the interaction between paracetamol and diclofenac, by an isobolographic analysis in vitro. Results: Paracetamol inhibited platelet aggregation and TxB2 release dose-dependently in volunteers and concentration-dependently in vitro. The inhibition coefficient was 15.2 mg/L (95% CI 11.8-18.6). Paracetamol augmented the platelet inhibition by diclofenac in vivo, and the isobole showed that this interaction is synergistic. Paracetamol showed no interaction with valdecoxib. The PFA-100 appeared insensitive in detecting platelet dysfunction caused by paracetamol, and the cold pressor test showed no analgesia. Conclusions: Paracetamol inhibits platelet function in vivo and shows synergism when combined with diclofenac. This effect may increase the risk of bleeding in surgical patients with an impaired haemostatic system. The combination of paracetamol and valdecoxib may be useful in patients at low risk for thromboembolism. The PFA-100 seems unsuitable for the detection of platelet dysfunction, and the cold pressor test for the detection of analgesia, by paracetamol.
Abstract:
The continuous production of blood cells, a process termed hematopoiesis, is sustained throughout the lifetime of an individual by a relatively small population of cells known as hematopoietic stem cells (HSCs). HSCs are unique cells characterized by their ability to self-renew and to give rise to all types of mature blood cells. Given their high proliferative potential, HSCs need to be tightly regulated on the cellular and molecular levels, as they could otherwise turn malignant. On the other hand, the tight regulatory control of HSC function also translates into difficulties in culturing and expanding HSCs in vitro. In fact, it is currently not possible to maintain or expand HSCs ex vivo without rapid loss of self-renewal. Increased knowledge of the unique features of important HSC niches and of the key transcriptional regulatory programs that govern HSC behavior is thus needed. Additional insight into the mechanisms of stem cell formation could enable us to recapitulate the processes of HSC formation and self-renewal/expansion ex vivo, with the ultimate goal of creating an unlimited supply of HSCs from, e.g., human embryonic stem cells (hESCs) or induced pluripotent stem cells (iPS) to be used in therapy. We thus asked: How are hematopoietic stem cells formed, and in what cellular niches does this happen (Papers I, II)? What are the molecular mechanisms that govern hematopoietic stem cell development and differentiation (Papers III, IV)? Importantly, we could show that the placenta is a major fetal hematopoietic niche that harbors a large number of HSCs during midgestation (Paper I; Gekas et al., 2005). In order to address whether the HSCs found in the placenta were formed there, we utilized the Runx1-LacZ knock-in and Ncx1 knockout mouse models (Paper II). Importantly, we could show that HSCs emerge de novo in the placental vasculature in the absence of circulation (Rhodes et al., 2008).
Furthermore, we could identify defined microenvironmental niches within the placenta with distinct roles in hematopoiesis: the large vessels of the chorioallantoic mesenchyme serve as sites of HSC generation, whereas the placental labyrinth is a niche supporting HSC expansion (Rhodes et al., 2008). Overall, these studies illustrate the importance of distinct milieus in the emergence and subsequent maturation of HSCs. To ensure proper function of HSCs, several regulatory mechanisms are in place. The microenvironment in which HSCs reside provides soluble factors and cell-cell interactions. In the cell nucleus, these cell-extrinsic cues are interpreted in the context of cell-intrinsic developmental programs, which are governed by transcription factors. An essential transcription factor for the initiation of hematopoiesis is Scl/Tal1 (stem cell leukemia gene/T-cell acute leukemia gene 1). Loss of Scl results in early embryonic death and a total lack of all blood cells, yet deactivation of Scl in the adult does not affect HSC function (Mikkola et al., 2003b). In order to define the temporal window of Scl requirement during fetal hematopoietic development, we deactivated Scl in all hematopoietic lineages shortly after hematopoietic specification in the embryo. Interestingly, maturation, expansion and function of fetal HSCs were unaffected, and, as in the adult, red blood cell and platelet differentiation was impaired (Paper III; Schlaeger et al., 2005). These findings highlight that, once specified, the hematopoietic fate is stable even in the absence of Scl and is maintained through mechanisms that are distinct from those required for the initial fate choice. As the critical downstream targets of Scl remain unknown, we sought to identify and characterize target genes of Scl (Paper IV).
We could identify the transcription factor Mef2C (myocyte enhancer factor 2C) as a novel direct target gene of Scl specifically in the megakaryocyte lineage, which largely explains the megakaryocyte defect observed in Scl-deficient mice. In addition, we observed an Scl-independent requirement for Mef2C in the B-cell compartment, as loss of Mef2C leads to accelerated B-cell aging (Gekas et al., submitted). Taken together, these studies identify key extracellular microenvironments and intracellular transcriptional regulators that dictate different stages of HSC development, from emergence to lineage choice to aging.
Abstract:
According to certain arguments, computation is observer-relative either in the sense that many physical systems implement many computations (Hilary Putnam), or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. In the course of this examination, I give a formal definition of the class of "combinatorial-state automata", upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Toward the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems, and offer this as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that the definition does not imply Searle's claim of the universal implementation of computations. However, the definition may support claims that are weaker than Searle's, yet still troubling to the computationalist. There remains a kernel of relativity in implementation at any rate, since the interpretation of physical systems seems itself to be an observer-relative matter, to some degree at least. This observation helps clarify the role the notion of computation can play in cognitive science.
Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
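To make the object of the implementation debate concrete, here is a minimal Turing machine rendered as a state-transition system in Python. This is an illustrative sketch of the abstract machine whose physical implementation is at issue, not the thesis's formal definition; the encoding choices (a sparse tape as a dictionary, a designated halting state) are mine.

```python
def run_tm(delta, tape, state="A", halt="H", blank="_", max_steps=10_000):
    """Minimal Turing machine. delta maps (state, symbol) to
    (next_state, written_symbol, head_move in {-1, 0, +1})."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        if state == halt:
            return "".join(cells.get(i, blank) for i in range(len(tape)))
        sym = cells.get(head, blank)
        state, cells[head], move = delta[(state, sym)]
        head += move
    raise RuntimeError("step budget exhausted")

# One-state bit flipper: complement each symbol, halt on the first blank.
flip = {
    ("A", "0"): ("A", "1", +1),
    ("A", "1"): ("A", "0", +1),
    ("A", "_"): ("H", "_", 0),
}
result = run_tm(flip, "101")  # returns "010"
```

On the relativist arguments, the contentious question is precisely under what conditions a physical system's state evolution counts as mirroring a transition table like `flip`; the table itself is uncontroversial.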