1000 results for concurrency theory
Abstract:
The thesis analyses Norman Fairclough's theory of critical discourse analysis and the criticism directed at it. The aim is to reconcile these differing views and to offer solutions to one central problem of critical discourse analysis, namely the inadequacy of emancipation (the identification and resolution of social problems). Possibilities emerging from the theoretical part are applied to text analysis. The object of study is the text Rebuilding America’s Defenses: Strategy, Forces and Resources For a New Century and, to some extent, the organization that produced it, the Project for the New American Century. These are examined above all as social phenomena and in relation to each other. The greatest problems of Fairclough's model turn out to be a traditional conception of language, according to which the abstract, internal relations of the language system are what matter most, and an ideological confrontation as the starting point of critique. The former leads to an unsatisfactory ability of linguistic findings to explain social observations, the latter to political or ideological debate that admits no new insights. The thesis concludes that by focusing on subject matter rather than linguistic structure, and by understanding the producer of a text as a single, bounded social actor, analysis can gain openness and precision. Critical discourse analysis needs such a perspective to support its linguistic analyses and to find a new kind of relevance.
Abstract:
This paper is a historical companion to a previous one, in which the so-called abstract Galois theory, as formulated by the Portuguese mathematician José Sebastião e Silva, was studied (see da Costa, Rodrigues (2007)). Our purpose is to present some applications of abstract Galois theory to higher-order model theory, to discuss Silva's notion of expressibility, and to outline a classical Galois theory that can be obtained inside the two versions of the abstract theory, those of Mark Krasner and of Silva. Some comments are made on the universal theory of (set-theoretic) structures.
Abstract:
When Hume, in the Treatise of Human Nature, began his examination of the relation of cause and effect, and in particular of the idea of necessary connection which is its essential constituent, he identified two preliminary questions that should guide his research: (1) For what reason we pronounce it necessary that every thing whose existence has a beginning should also have a cause, and (2) Why we conclude that such particular causes must necessarily have such particular effects (1.3.2, 14-15). Hume observes that our belief in these principles can result neither from an intuitive grasp of their truth nor from a reasoning that could establish them by demonstrative means. In particular, with respect to the first, Hume examines and rejects some arguments with which Locke, Hobbes and Clarke tried to demonstrate it, and suggests, by exclusion, that the belief we place in it can only come from experience. Somewhat surprisingly, however, Hume does not proceed to show how that derivation from experience could be made, but proposes instead to move directly to an examination of the second principle, saying that it will "perhaps be found in the end, that the same answer will serve for both questions" (1.3.3, 9). Hume's answer to the second question is well known, but the first question is never answered in the rest of the Treatise, and it is even doubtful that it could be, which would explain why Hume simply chose to remove any mention of it when he recompiled his theses on causation in the Enquiry concerning Human Understanding. Given this situation, an interesting question that naturally arises is to investigate the relations of logical or conceptual implication between these two principles. Hume seems to have thought that an answer to (2) would also suffice to provide an answer to (1). Henry Allison, for his part, argued (in Custom and Reason in Hume, p. 94-97) that the two questions are logically independent.
My proposal here is to try to show that there is indeed a logical dependency between them, but that the implication runs, rather, from (1) to (2). If accepted, this result may be particularly interesting for an interpretation of the scope of the so-called "Kant's reply to Hume" in the Second Analogy of Experience, which is structured as a proof of the a priori character of (1) but whose implications for (2) remain controversial.
Abstract:
In this article I intend to show that certain aspects of A.N. Whitehead's philosophy of organism, and especially his epochal theory of time as mainly expounded in his well-known work Process and Reality, can help clarify the underlying assumptions that shape nonstandard mathematical theories as such and as metatheories of quantum mechanics. Concerning the latter issue, I point to an already significant body of research on nonstandard versions of quantum mechanics; two of these approaches are chosen to be critically presented in relation to the scope of this work. The main point of the paper is that, insofar as we can refer a nonstandard mathematical entity to a kind of axiomatic formalization essentially 'codifying' an underlying mental process indescribable as such by analytic means, we can possibly apply certain principles of Whitehead's metaphysical scheme, focused on the key notion of process, generally conceived as the becoming of actual entities. This is done in the sense of a unifying approach that provides an interpretation of nonstandard mathematical theories as such and also, in their metatheoretical status, as a formalization of the empirical-experimental context of quantum mechanics.
Abstract:
My thesis deals with how disordered materials conduct electric current. The materials studied include conducting polymers, i.e. plastics that conduct current, and organic semiconductors more generally. Electronic components have been built from these materials, and there is hope of printing entire circuits from organic materials. For these applications it is important to understand how the materials themselves conduct electric current. The term disordered materials refers to materials that lack crystal structure. The disorder causes the electron states to become localized in space, so that an electron in a given state is confined, for example, to one molecule or one segment of a polymer. This contrasts with crystalline materials, where an electron state is spread out over the entire crystal (but instead has a well-defined momentum). The electrons (or holes) in a disordered material can move by tunnelling between the localized states. Starting from the properties of this tunnelling process, the transport properties of the material as a whole can be determined. This is the starting point of the so-called hopping transport model, which I have used. The hopping transport model contains several drastic simplifications. For example, the electron states are treated as point-like, so that the tunnelling probability between two states depends only on the distance between them, not on their relative orientation. Another simplification is to treat the quantum-mechanical tunnelling problem as a classical process, a random walk. Despite these crude approximations, the hopping transport model still reproduces many of the phenomena that appear in the real materials one wants to model. One might say that the hopping transport model is the simplest model of disordered materials that is still interesting to study.
No exact analytical solutions to the hopping transport model have been found, so approximations and numerical methods are used, often in the form of computer calculations. We have used both analytical methods and numerical computations to study various aspects of the hopping transport model. An important part of the articles on which my thesis is based is the comparison of analytical and numerical results. My share of the work has mainly been to develop the numerical methods and apply them to the hopping transport model, so I focus on that part of the work in the introductory part of the thesis. One way to study the hopping transport model numerically is to carry out a random-walk process directly in a computer program. By collecting statistics on the random walk, various transport properties of the model can be computed. This is a so-called Monte Carlo method, since the computation itself is a random process. Instead of following the trajectories of individual electrons, one can compute the equilibrium probability of finding an electron in each state. One sets up a system of equations relating the probabilities of finding the electron in the various states of the system to the flow, the current, between the states. Solving this system of equations yields the probability distribution of the electrons, from which the current and the transport properties of the material can be computed. One aspect of the hopping transport model that we have studied is the diffusion of the electrons, i.e. their random motion. A collection of electrons spreads out over a larger region with time. The diffusion rate is known to depend on the electric field: the electrons spread faster when subjected to an electric field. We have investigated this process and shown that the behaviour is very different in one-dimensional systems compared to two- and three-dimensional ones.
In two and three dimensions the diffusion coefficient depends quadratically on the electric field, whereas in one dimension the dependence is linear. Another aspect we have studied is negative differential conductivity, i.e. the current through a material decreasing as the voltage across it is increased. Since this phenomenon has been measured in organic memory cells, we wanted to investigate whether it can also arise in the hopping transport model. It turned out that the model contains two different mechanisms that can give rise to negative differential conductivity. First, the electrons can get stuck in traps, dead ends in the system, that are harder to escape when the electric field is strong. The mean velocity of the electrons, and thereby the current through the material, can then decrease with increasing field. Electrostatic interaction between the electrons can also lead to the same behaviour, through a so-called Coulomb blockade. A Coulomb blockade can arise if the number of conduction electrons in the material increases with increasing voltage: the electrons repel each other, and a larger number of electrons can make transport slower, i.e. decrease the current.
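As a toy illustration of the Monte Carlo approach described above, the following Python sketch performs a field-biased random walk on a 1D lattice and estimates the drift velocity from walker statistics. The regular lattice, the nearest-neighbour hops, and the field entering as a simple hop-probability bias are my own simplifications for illustration, even cruder than the hopping transport model itself:

```python
import random

def simulate_walk(n_steps, bias, seed=0):
    """One biased 1D random walk; returns final displacement.

    bias in [0, 1): field-induced asymmetry of the hop direction,
    so the expected displacement per hop equals bias."""
    rng = random.Random(seed)
    p_right = 0.5 * (1.0 + bias)  # the field tilts the hop probability
    x = 0
    for _ in range(n_steps):
        x += 1 if rng.random() < p_right else -1
    return x

def drift_velocity(n_walkers, n_steps, bias):
    """Estimate mean displacement per hop from an ensemble of walkers."""
    total = sum(simulate_walk(n_steps, bias, seed=i) for i in range(n_walkers))
    return total / (n_walkers * n_steps)
```

With enough walkers the estimate converges to the expected per-hop displacement, which for this toy bias scheme is simply the bias itself; statistics on the squared displacement would analogously give the diffusion coefficient.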
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Unlike “traditional” biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods originating in the fields of computer science and mathematics for the construction and analysis of computational models in systems biology. In particular, the performed research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton.
The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
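As a minimal, hypothetical illustration of the kind of ODE-based mass-action modelling used in such case studies (this is not the thesis's heat shock or filament-assembly model; the rate constants k_s and k_d are invented for the example), a constant-synthesis, first-order-degradation protein model can be integrated with forward Euler:

```python
def euler_integrate(deriv, y0, t_end, dt):
    """Forward-Euler integration of dy/dt = deriv(y) from t = 0 to t_end."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * deriv(y)
        t += dt
    return y

# Toy kinetics: protein synthesized at constant rate k_s and degraded
# with first-order rate k_d; the analytical steady state is k_s / k_d.
k_s, k_d = 2.0, 0.5
protein_level = euler_integrate(lambda p: k_s - k_d * p, 0.0, 40.0, 0.01)
```

By t = 40 the transient has decayed and the integrator sits at the steady state k_s / k_d = 4; even this toy model already raises the identifiability question discussed above, since only the ratio of the two rate constants is visible in the steady state.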
Abstract:
Lectio praecursoria at the University of Tampere, 17 August 2010.
Abstract:
The application of Extreme Value Theory (EVT) to model the probability of occurrence of extremely low Standardized Precipitation Index (SPI) values improves our knowledge of the occurrence of extremely dry months. This sort of analysis can be carried out by means of two approaches: block maxima (BM; associated with the Generalized Extreme Value distribution) and peaks-over-threshold (POT; associated with the Generalized Pareto distribution). Each of these procedures has its own advantages and drawbacks. Thus, the main goal of this study is to compare the performance of BM and POT in characterizing the probability of occurrence of extremely dry SPI values obtained from the weather station of Ribeirão Preto-SP (1937-2012). According to the goodness-of-fit tests, both BM and POT can be used to assess the probability of occurrence of the aforementioned extremely dry monthly SPI values. However, the scalar measures of accuracy and the return level plots indicate that POT provides the best-fitting distribution. The study also indicated that the uncertainties in the parameter estimates of a probabilistic model should be taken into account when the probability associated with a severe/extreme dry event is under analysis.
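The block-maxima side of such an analysis can be sketched with scipy's GEV implementation. The synthetic standard-normal "SPI" series below is a stand-in of mine (the actual study uses the Ribeirão Preto record), and dry extremes are handled by negating the annual minima so that the GEV is fitted to maxima:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical stand-in for a monthly SPI record: 75 years x 12 months.
spi = rng.normal(loc=0.0, scale=1.0, size=(75, 12))

# Block maxima for *dry* extremes: the minima of SPI are the maxima
# of -SPI, so we negate the annual minima before fitting the GEV.
block_maxima = -spi.min(axis=1)

# Maximum-likelihood fit of the Generalized Extreme Value distribution.
shape, loc, scale = stats.genextreme.fit(block_maxima)

# 50-year return level: the magnitude exceeded, on average, once
# every 50 blocks (years).
return_level_50 = stats.genextreme.ppf(1 - 1 / 50, shape, loc=loc, scale=scale)
```

Uncertainty in the fitted parameters, which the study emphasizes, could then be assessed by bootstrap resampling of the block maxima; the POT approach would analogously fit `stats.genpareto` to the exceedances over a chosen threshold.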
Abstract:
For a number of reasons, social responsibility has become a more essential part of corporate business operations than before. Corporate social responsibility (CSR) is addressed through different means and from different aspects, but its overall effect on an organisation's performance, communication and underlying actions is indisputable. The thesis describes corporate social responsibility; the main objective was to observe how CSR has developed in our case company by answering the main research question: how has CSR reporting evolved at UPM-Kymmene Oyj? In addition, the following questions were addressed: Is there a monetary value to CSR? What does a proficient CSR report consist of? What does corporate social responsibility consist of? A qualitative research method, content analysis to be precise, was chosen, and an extensive literature study was performed to establish the theoretical background for the empirical part of the study. Data for the empirical part was collected from UPM-Kymmene Oyj financial data and annual reports. The study shows that UPM-Kymmene Oyj's engagement with CSR and its CSR reporting have improved over time, but a few managerial implications could still be identified. UPM-Kymmene Oyj's economic key figures build only shareholder value, and stakeholders are identified at a very general level. CSR data is also scattered throughout the annual report, which causes problems for readers. The scientific importance of this thesis arises from the profound, holistic way in which CSR has been addressed, giving a good basis for understanding the underlying reasons for CSR from society towards the organisation and vice versa.
Abstract:
After decades of mergers and acquisitions and successive technology trends such as CRM, ERP and DW, the data in enterprise systems is scattered and inconsistent. Global organizations face the challenge of addressing local uses of shared business entities, such as customer and material, while at the same time maintaining a consistent, unique, and consolidated view of financial indicators. In addition, current enterprise systems do not accommodate the pace of organizational change, and immense efforts are required to maintain data. When it comes to systems integration, ERPs are considered “closed” and expensive: data structures are complex, and the “out-of-the-box” integration options offered are not based on industry standards. Therefore, expensive and time-consuming projects are undertaken in order to have the required data flowing according to the needs of business processes. Master Data Management (MDM) emerges as a discipline focused on ensuring long-term data consistency. Presented as a technology-enabled business discipline, it emphasizes business process and governance to model and maintain the data related to key business entities. There are immense technical and organizational challenges in accomplishing the “single version of the truth” MDM mantra. Adding one central repository of master data might prove unfeasible in some scenarios, so an incremental approach is recommended, starting from the areas most critically affected by data issues. This research aims at understanding the current literature on MDM and contrasting it with views from professionals. The data collected from interviews revealed details of the complexities of data structures and data management practices in global organizations, reinforcing the call for more in-depth research on the organizational aspects of MDM.
The most difficult piece of master data to manage is the “local” part: the attributes related to the sourcing and storing of materials in one particular warehouse in the Netherlands, or a complex set of pricing rules for a subsidiary of a customer in Brazil. From a practical perspective, this research evaluates one MDM solution under development at a Finnish IT solution provider. By applying an existing assessment method, the research attempts to provide the company with one possible tool for evaluating its product from a vendor-agnostic perspective.
Abstract:
Book review