772 results for "Learning by doing"
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This study evaluated the hypoosmotic swelling test (HOST) with deionized water (0 mOsmol) as a method of post-thaw ram semen evaluation and correlated its findings with other semen evaluation techniques. Twenty semen samples from 20 different adult rams were assessed for kinetic sperm parameters by a computerized system (IVOS 12, Hamilton Thorne Biosciences, Beverly, MA, USA) and by subjective analysis. Sperm membrane viability was assessed with an association of fluorescent probes (propidium iodide, JC-1 and FITC-PSA). The structural integrity of the plasma membrane was also studied with the supravital eosin test, and the functional integrity of the membrane was evaluated with the hypoosmotic swelling test with deionized water (0 mOsmol) in the following proportions: one part of semen to 10 (HOST 10), 50 (HOST 50) and 100 (HOST 100) parts of water. After dilution in the different proportions, the semen was fixed in formalin-buffered saline and analyzed for the percentage of HOST-reactive sperm (bent/coiled tails). The reaction percentages obtained for HOST 10 (33.1%), HOST 50 (32.8%) and HOST 100 (31.8%) did not differ significantly. HOST 10 showed a positive correlation with plasma membrane integrity as assessed by the eosin test (r = 0.80; p < 0.05). Positive correlations were observed between HOST 50 and HOST 100 and the sperm subpopulation with intact membranes as assessed by fluorescence (r = 0.83 and r = 0.85; p < 0.01). The findings suggest that the HOST with deionized water can provide additional information for the post-thaw evaluation of ram sperm viability.
Abstract:
This study investigated the relation between literacy practices in and out of school in rural Tanzania. Using the perspective of linguistic anthropology, literacy practices in five villages in Karagwe district in north-west Tanzania were analysed. The outcome may be used as a basis for educational planning and literacy programmes. The analysis revealed an intimate relation between language, literacy and power. In Karagwe, traditional élites have drawn on literacy to construct and reconstruct their authority, while new élites, such as individual women and some young people, have been able to use literacy as a tool to gain access to power. The study also revealed a high level of bilingualism and a strong emphasis on education in the area, which indicate potential for future education there. At the same time, discontinuity in language use, mainly caused by the stigmatisation of what is perceived as local and traditional, such as the mother tongue of the majority of the children, and the high status accorded to all that is perceived as Western, has turned out to constitute a great obstacle to pupils' learning. The use of ethnographic perspectives has enabled comparisons between interactional patterns in school and outside school. This has revealed communicative patterns in school that hinder pupils' learning, while the same patterns in other discourses reinforce learning. By using ethnography, relations between explicit and implicit language ideologies and their impact in educational contexts may be revealed. This knowledge may then be used to make educational plans and literacy programmes more relevant and efficient, not only in poor post-colonial settings such as Tanzania, but also elsewhere, including Western settings.
Abstract:
Recently, a rising interest in political and economic integration/disintegration issues has developed in the field of political economy. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision, and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how the economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public opinion attitudes towards the EU. Subsequently, it explores the links between this political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union.
Chapter 2 first proposes a model of citizens' support for membership of international unions, with explicit reference to the EU; it then tests the model on a panel of EU countries. What factors influence public opinion support for the European Union (EU)? In international relations theory, the idea that citizens' support for the EU depends on the material benefits deriving from integration, i.e. whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kaltenthaler 1996), but only on the basis of informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economy model drawing on the recent economic literature on integration and disintegration processes. The basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries. The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state's ability to determine its most favoured bundle of public goods.
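The trade-off just described can be condensed into a reduced-form support function. The notation below is purely illustrative and is not the model actually derived in the thesis; it only encodes the comparative statics that the abstract states in words:

```latex
S_i \;=\; \alpha\,\bar{y}_U \;-\; \beta\,y_i \;+\; \delta\,\ell_i \;-\; \gamma\,h_i,
\qquad \alpha,\beta,\delta,\gamma > 0
```

where $S_i$ is country $i$'s support for membership, $\bar{y}_U$ the union's average income, $y_i$ the country's own average income, $\ell_i$ the efficiency loss the country would suffer outside the union, and $h_i$ the heterogeneity of its preferences relative to the union's common policy.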
In the simple model presented in Chapter 2, support for membership of the union is increasing in the union’s average income and in the loss of efficiency stemming from being outside the union, and decreasing in a country’s average income, while increasing heterogeneity of preferences among countries points to a reduced scope of the union. Afterwards we empirically test the model with data on the EU; more precisely, we perform an econometric analysis employing a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, differences in income among member states and heterogeneity of preferences shape citizens’ attitude towards their country’s membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called to express their opinion in national referenda, which may well end up in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a European-wide political crisis. These events show that nowadays understanding public attitude towards the EU is not only of academic interest, but has a strong relevance for policy-making too. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy. 
Over the last few decades, the twin tendencies towards supranationalism and regional autonomy that have characterised some European states have taken a very interesting form in this country: Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the Member States and the regions is now especially topical. The process leading to the drafting of the European Constitution (even though it never came into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a thorough new constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called "devolution", i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al. (2005b), which concentrated only on EU activity (treaties, legislation, and European Court of Justice rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislation substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exercised in the various policy areas? Is their exercise consistent with the normative recommendations of the economic literature on their optimal allocation among different levels of government?
The main results show that, first, there seems to be a certain substitutability between EU and national legislation (even if not a very strong one), but not between regional and national legislation. Second, the EU concentrates its legislative activity mainly in international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are most active. Third, at least two levels of government (in some cases all of them) are significantly involved in legislative activity in many sectors, even where the rationale for this is, at best, very questionable, indicating that they actually share a larger number of policy tasks than economic theory suggests. It appears, therefore, that an excessive number of competences are shared among different levels of government. From an economic perspective, it may well be recommended that some competences be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This not only may lead to a slower and less efficient policy-making process, but also risks making it too complicated for citizens to understand; citizens, on the contrary, should be able to know who is really responsible for a certain policy when they vote in national, local or European elections or in referenda on national or European constitutional issues.
Abstract:
Background: It has been well known, since the pioneering observation by Jenkins and Dallenbach (Am J Psychol 1924;35:605-12), that a period of sleep provides a specific advantage for the consolidation of newly acquired information. Recent research on the possible enhancing effect of sleep on memory consolidation has focused on procedural memory (part of the non-declarative memory system, according to Squire's taxonomy), as it appears to be the memory sub-system for which the available data are most consistent. The acquisition of a procedural skill follows a typical time course, consisting of substantial practice-dependent learning followed by slow, off-line improvement. Sleep seems to play a critical role in promoting this process of slow learning, consolidating memory traces and making them more stable and resistant to interference. If sleep is critical for the consolidation of a procedural skill, then an alteration of the organization of sleep should result in less effective consolidation, and therefore in reduced memory performance. Such an alteration can be experimentally induced, as in a deprivation protocol, or it can occur naturally in some sleep disorders, for example narcolepsy. In this research, a group of narcoleptic patients and a group of matched healthy controls were tested on two different procedural abilities, in order to better define the size and time course of sleep's contribution to memory consolidation. Experimental Procedure: A Texture Discrimination Task (Karni & Sagi, Nature 1993;365:250-2) and a Finger Tapping Task (Walker et al., Neuron 2002;35:205-11) were administered to two independent samples of drug-naive patients with newly diagnosed narcolepsy with cataplexy (International Classification of Sleep Disorders, 2nd ed., 2005), and to two samples of matched healthy controls.
In the Texture Discrimination Task, subjects (n=22) had to learn to recognize a complex visual array on the screen of a personal computer, while in the Finger Tapping Task (n=14) they had to press a numeric sequence on a standard keyboard as quickly and accurately as possible. Three experimental sessions were scheduled for each participant: a training session, a first retrieval session the next day, and a second retrieval session one week later. To test for possible circadian effects on learning, half of the subjects performed the training session at 11 a.m. and half at 5 p.m. Performance in the training session was taken as a measure of practice-dependent learning, while performance in the subsequent sessions was taken as a measure of the consolidation level achieved after one and seven nights of sleep, respectively. Between the training and first retrieval sessions, all participants spent a night in a sleep laboratory and underwent polygraphic recording. Results and Discussion: In both experimental tasks, while healthy controls improved their performance after one night of undisturbed sleep, narcoleptic patients showed no statistically significant improvement. Despite this, by the second retrieval session both healthy controls and narcoleptic patients had improved their skills. Narcoleptics improved relatively more than controls between the first and second retrieval sessions in the texture discrimination ability, while their performance remained markedly lower in the motor (FTT) ability. Sleep parameters showed greater fragmentation in the sleep of the pathological group, and a different distribution of Stage 1 and Stage 2 NREM sleep in the two groups, consistent with the hypothesis of a lower consolidating power of sleep in narcoleptic patients.
Moreover, the REM density in the first part of the night in healthy subjects showed a significant correlation with the amount of improvement achieved at the first retrieval session in the TDT, supporting the hypothesis that REM sleep plays an important role in the consolidation of visuo-perceptual skills. Taken together, these results speak in favor of a slower, rather than lower, consolidation of procedural skills in narcoleptic patients. Finally, an explanation of the results is proposed, based on the possible role of sleep in counteracting the interference produced by task repetition.
Abstract:
Secondary organic aerosol (SOA) is an important component of atmospheric aerosol particles. Atmospheric aerosols are significant because they influence the climate through direct effects (scattering and absorption of radiation) and indirect effects (cloud condensation nuclei). According to current estimates, SOA formation from biogenic hydrocarbons is globally far more important than SOA formation from anthropogenic hydrocarbons. The terpenes are reactive hydrocarbons that are emitted in large quantities by vegetation and are considered the most important precursors of biogenic SOA. In the present work, a method was developed that allows the quantification of acidic products of terpene oxidation. Size-selected aerosol (PM 2.5) was collected on quartz filters, which were extracted with methanol under ultrasonication. After concentration, solvent exchange to water, and standard addition, the samples were analysed with a capillary HPLC-ESI-MSn method. The ion-trap mass spectrometer used (LCQ-DECA) offers the possibility of structure elucidation through selective fragmentation of the quasi-molecular ions. Quantification was partly carried out in MS/MS mode, which improved selectivity and detection limits. To identify terpene oxidation products that were not available as standards, ozonolysis experiments were performed. This enabled the identification of a number of oxidation products in real samples. Besides already known terpene oxidation products, several products were unambiguously identified for the first time in real samples as products of α-pinene. In the samples from the ozonolysis experiments, products with high molecular weight (>300 u) were also detected, which show similarity to the substances described in the literature as dimers or polymers.
However, these could not be found in field samples. In the course of five measurement campaigns in Germany and Finland, samples of the atmospheric particle phase were taken. The quantification of oxidation products of α-pinene, β-pinene, 3-carene, sabinene and limonene in these samples revealed a wide temporal and spatial variation of the concentrations. The concentration of pinic acid, for example, ranged between about 0.4 and 21 ng/m³ across all campaigns. Products of several different terpenes could always be detected. Products of some terpenes are even suitable as marker substances for different plant species. Sabinene products such as sabinic acid can be used as markers for the emissions of deciduous trees such as beech or birch, while carene products such as caronic acid can serve as markers for conifers, especially pines. Using the quantified substances as markers, together with measurements of the organic and elemental carbon content of the aerosol, the fraction of secondary organic aerosol (SOA) originating from the ozonolysis of terpenes was calculated. Surprisingly, only 1% to 8% of the SOA could be attributed to terpene ozonolysis. This contradicts the prevailing view that terpene ozonolysis is the most important source of biogenic SOA. Reasons for this discrepancy are discussed in the thesis. However, further efforts are needed to fully understand the atmospheric processes of SOA formation.
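The marker-based apportionment described above (scaling a measured tracer up to total terpene-derived SOA, then dividing by total SOA) can be sketched in a few lines. Every number and name below is a hypothetical placeholder, not a value or method taken from the thesis:

```python
# Toy sketch of marker-based source apportionment for terpene-derived SOA.
# All numeric values are made-up placeholders for illustration only.

def terpene_soa_fraction(marker_ng_m3, marker_mass_fraction, total_soa_ng_m3):
    """Estimate the fraction of total SOA attributable to terpene ozonolysis.

    marker_ng_m3:         measured concentration of an oxidation product (ng/m^3)
    marker_mass_fraction: hypothetical share of terpene-derived SOA mass that
                          this marker represents (e.g. from chamber studies)
    total_soa_ng_m3:      total SOA mass estimated from organic-carbon data
    """
    terpene_soa = marker_ng_m3 / marker_mass_fraction  # scale marker up to terpene SOA
    return terpene_soa / total_soa_ng_m3

# Invented example: 5 ng/m^3 of pinic acid, assumed to be 8% of the
# alpha-pinene SOA mass, against 2000 ng/m^3 of total SOA.
fraction = terpene_soa_fraction(5.0, 0.08, 2000.0)
print(f"{fraction:.1%}")  # prints 3.1%
```

The same scaling would be repeated per terpene marker and summed, which is one way such low single-digit percentages (as the 1%-8% reported above) can arise even when markers are always detectable.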
Abstract:
Against the background of the changing (media) life-worlds of school students, attempts to define the media-pedagogical competences of teachers are gaining importance. The acquisition of media-pedagogical competence, understood as a dynamic interplay of domain-specific knowledge and application-oriented skill, is identified in this thesis as an essential learning goal of media-pedagogical teacher education. Drawing on constructivist ideas about teaching and learning, the author proposes problem-oriented learning as one way to foster media-pedagogical competence. The first part of the thesis discusses models and concepts that provide building blocks for a model of media-pedagogical competence. The second part presents an empirical study on the acquisition of media-pedagogical competence based on a model developed by the author, and discusses its results. The theoretical approach to competence builds on two concepts: Jürgen Habermas's concept of communicative competence and its transfer into media pedagogy by Dieter Baacke. In addition, the recent education-policy debate on competence is analysed with reference to Franz E. Weinert. This is followed by an overview of the methodological approaches to measuring competences in educational science and their applicability to media-pedagogical competence research. The currently available proposals for defining the content of media-pedagogical competences are discussed (Sigrid Blömeke, Werner Sesink, International Society for Technology in Education). Drawing on constructivist learning theory, problem-oriented learning is given considerable weight in the development of competences.
The thesis attaches particular importance to David Jonassen's work on a constructivist-instructionist approach to designing problem-oriented learning environments (see also the Goal-based Scenarios approach of Roger Schank and the Learning by Design approach of Janet Kolodner). The second part presents the intervention study, which used a control-group design. Based on a model of media-pedagogical competence with the two dimensions of knowledge and skill, students (n=59) were tested on these dimensions in a pre-post design. The students in the intervention group (n=30) worked for one semester with a problem-oriented learning application, while the students in the control group (n=29) attended a traditional seminar. The main finding of the study is that the intervention led to a measurable learning gain in media-pedagogical skill. In the discussion of the results, recommendations for the design of problem-oriented learning environments are formulated, and the potential of problem-oriented learning settings for teaching and learning in higher education is highlighted.
Abstract:
Microcredit has long been a tool to alleviate poverty. This research aims to assess the efficiency of microcredit in the field of social exclusion. Questionnaires were developed and existing tools were used to observe the tangible and intangible intertwining of microcredit, and in doing so the effort concentrated on observing whether or not microcredit has a direct effect on social exclusion. Bangladesh was chosen for the field study, and 85 samples were taken for the analysis. It is a time-period study, and one year was set to collect the sample and carry out the statistical analysis. The tangible aspect was based on a World Bank questionnaire, and the social capital questionnaire was developed from several well-established tools. The research sample, borrowers of Grameen Bank in Bangladesh, shows a strong correlation between their tangible activity and social life. Significant changes in the tangible aspect and in social participation were observed in the research. A strong correlation between the two aspects was also found, taking into account that the borrowers themselves have a vibrant social life in the village.
Synthetic glycopeptides with a sulfatyl-Lewis X structure as potential inhibitors of cell adhesion
Abstract:
Cell adhesion processes are of great importance for numerous biological processes, such as the immune response, wound healing and embryogenesis. They also play a decisive role in the course of inflammatory processes. Several classes of adhesion molecules are involved in cell adhesion. The first loose "rolling" adhesion of leukocytes at the site of an inflammation is mediated by the selectins. These bind to their specific ligands via the carbohydrate structures sialyl-Lewisx and sialyl-Lewisa through a calcium-dependent carbohydrate-protein interaction, establishing the first cell contact before other adhesion molecules (cadherins, integrins) bring about firm adhesion and transmigration through the endothelium. Pathogenic overexpression of the selectins, however, leads to numerous chronic diseases such as rheumatoid arthritis, coronary heart disease and reperfusion syndrome. Selectin-mediated cell contacts are also believed to be involved in the metastasis of carcinoma cells. One approach to treating the diseases mentioned above is the administration of soluble competitive inhibitors of the selectins. The aim of this work was to modify the sialyl-Lewisx lead motif to increase its metabolic stability, and to incorporate it into the peptide sequence of the binding domain of the endogenous selectin ligand PSGL-1. To this end, glycosylated amino-acid building blocks carrying a modified Lewisx structure were prepared (Fig. 1). The use of arabinose and a sulfate group instead of fucose and sialic acid was also expected to contribute to an increased metabolic stability of the synthetic ligand. The glycosyl amino acids thus obtained were then to be used in solid-phase peptide synthesis. Because of their pronounced acid lability, the standard procedure (Wang resin, cleavage with TFA) could not be used here.
Therefore, a novel UV-labile linker system was employed, and a protocol for the synthesis and cleavage of peptides on this new system was developed. With it, the synthesis of the non-glycosylated peptide backbone, as well as of a glycopeptide carrying the sulfated Lewisx motif, was achieved. A fourfold sulfated glycopeptide, which was to be prepared using tyrosine building blocks chemically sulfated beforehand, could be detected by mass spectrometry.
Abstract:
This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while fulfilling the tasks for which they were deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea underlying intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. One of the central elements in this work is therefore the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora.
To illustrate four major challenges of constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity-management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
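The two schematic formulas, G + L + C → S and G + L + S → L', describe a parse-then-learn cycle. The toy sketch below illustrates that cycle only; all names, data structures and the frequency threshold are hypothetical stand-ins, not the thesis's actual HPSG-based system:

```python
# Toy sketch of the acquisition cycle: parse with grammar G and lexicon L,
# then exploit the resulting structures S to refine L. The "structures" here
# are just (word, preceding-word) pairs standing in for real linguistic
# analyses; the threshold models the "enough input" decision of point c).
from collections import Counter

def parse(grammar, lexicon, corpus):
    """G + L + C -> S: analyse each utterance, collecting observed facts."""
    structures = []
    for utterance in corpus:
        words = utterance.split()
        for i, w in enumerate(words):
            prev = words[i - 1] if i > 0 else "<s>"
            structures.append((w, prev))
    return structures

def learn(lexicon, structures, threshold=2):
    """G + L + S -> L': keep a lexical hypothesis only once it has been
    observed often enough; rarer hypotheses are held back for revision."""
    counts = Counter(structures)
    updated = dict(lexicon)
    for (word, ctx), n in counts.items():
        if n >= threshold:
            updated.setdefault(word, set()).add(ctx)
    return updated

corpus = ["the dog sleeps", "the dog barks", "a cat sleeps"]
lexicon = {}
structures = parse(None, lexicon, corpus)   # grammar unused in this toy
lexicon = learn(lexicon, structures)
print(lexicon)  # {'the': {'<s>'}, 'dog': {'the'}}
```

Iterating the cycle on more input would let the improved lexicon sharpen later parses, which is the feedback loop that formula (2) adds to formula (1).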
Abstract:
"God is dead", the Nietzschean madman proclaims again and again. It is under the anxious aegis of this paroxysmal assertion, expressing the "discontent of civilisation" evoked by Freud, that twentieth-century European thought, literature and art evolved. Yet Christianity, whose extreme decadence this cry signals, is not alone in permeating the artistic productions of the century, even the most avowedly atheist ones; above all, the figure of Christ, around which both this religion and its system of belief are structured, seems, literally and paradoxically, to haunt the twentieth-century imagination in more or less phantasmatic forms. This work therefore sets out to study, in an interdisciplinary perspective spanning literature, art and cinema, this controversial dynamic, its causes, the processes underlying it and its effects, through the works of three authors: Artaud, Beckett and Pasolini. The aim is to provide a key to reading this problem that highlights how the "conversion of belief", as Deleuze defines it, in which these authors participate, does not produce a purely profanatory rejection of Christianity but, on the contrary, sets in motion a movement as violent as it is liberating, which Nancy calls the "deconstruction of Christianity". This work thus intends to study, first in the light of Bataille's inner experience, the Christic imagination underlying their productions, and then to analyse its movements and effects by questioning them on the basis of the ambivalent dynamic that Grossman calls the "disfiguration of the Christic form". The delirious excesses of Artaud, the cutting irony of Beckett and the ambiguous passion of Pasolini thus prove to participate in a common movement which, oscillating between reprise and rejection, leads to an attitude as destructive as it is revitalising towards the foundations of Christianity.
Resumo:
The thesis analyses one part of the Johnson administration's foreign policy, more specifically the opening of a dialogue with the USSR on non-proliferation and arms control and the revision of China policy, framing both within the adaptation of Cold War strategy to the evolving international system and arguing that détente, understood as a relaxation of tensions and a search for common ground for dialogue, was at least one of the political instruments the administration chose to use. The first chapter analyses the changes that affected the Soviet bloc and the international communist movement between the late 1950s and the early 1960s, above all the rupture of the Sino-Soviet alliance, and their impact on the bipolar system on which the Cold War rested. The second chapter deals more specifically with the evolution of relations between the United States and the Soviet Union, the pursuit of a policy of détente after the Cuban missile crisis, and how this related to the status of the Soviet leadership in the wake of the changes that had taken place. Focusing on arms control and on the path that led to the signing of the Non-Proliferation Treaty, it analyses how the new course of dialogue on strategic questions also marked a general change of direction in the conception of the Cold War and the introduction of détente as a political instrument. The third chapter addresses the modification of policy towards Beijing and the tortuous, convoluted process through which the Johnson administration came to break away from the China policy followed until then.
Resumo:
The present study investigated and assessed certain cognitive abilities of the dog: the ability to discriminate quantities and the capacity to learn by imitation; the latter was then related to attachment to the owner. For the first investigation two tests were developed. The first relied exclusively on a visual stimulus: different quantities of food, differing from one another by 50%, were presented to the dog, and the choice made by the tested subjects was rewarded with either differential or non-differential reinforcement. The second test was divided into two parts: different quantities of food, again differing by 50%, were presented to the dog, but in the first part the sensory input was exclusively auditory, while in the second part it was both auditory and visual. Where possible, a heart-rate monitor was fitted to the dogs in order to assess variations in heart rate during the test. The aim was to evaluate whether the tested subjects were able to discriminate the larger quantity. The second investigation analysed the learning abilities of 36 subjects, divided into working dogs and pet dogs. The subjects of the study performed the Mirror Test for the assessment of learning by imitation. They underwent thermographic scanning at the beginning and at the end of the test, and their respiratory rate was recorded in the initial and final phases of the test. For 11 of the subjects who had performed the previous test it was also possible to carry out the Strange Situation Test for the assessment of attachment to the owner; these tests were video-recorded and analysed with dedicated software (OBSERVER XT 10).
Resumo:
Understanding the biology of Multiple Myeloma (MM) is of primary importance in the struggle to achieve a cure for this as yet incurable neoplasm. A better knowledge of the mechanisms underlying the development of MM can guide us in the development of new treatment strategies. Studies of both solid and haematological tumours have shown that cancer comprises a collection of related but subtly different clones, a feature that has been termed "intra-clonal heterogeneity". From a "Darwinian" natural-selection perspective, this intra-clonal heterogeneity is likely to be the essential substrate for cancer evolution, disease progression and relapse. In this context the critical mechanism for tumour progression is competition between individual clones (and cancer stem cells) for the same microenvironmental "niche", combined with the processes of adaptation and natural selection. The Darwinian behavioural characteristics of cancer stem cells are applicable to MM. The knowledge that intra-clonal heterogeneity is an important feature of tumour biology has changed the way we address cancer, now considered a composite mixture of clones rather than a linearly evolving disease. In this variable therapeutic landscape it is important for clinicians and researchers to consider the impact that evolutionary biology and intra-clonal heterogeneity have on the treatment of myeloma and the emergence of treatment resistance. It is clear that if we want to cure myeloma effectively, it is of primary importance to understand disease biology and evolution. Only by doing so will we be able to use all of the new tools at our disposal to cure myeloma and to use treatment in the most effective way possible. The aim of the present research project was to investigate at different levels the presence of intra-clonal heterogeneity in MM patients, and to evaluate the impact of treatment on clonal evolution and on patients' outcomes.
Resumo:
Through studying German, Polish and Czech publications on Silesia, Mr. Kamusella found that most of them, instead of trying to analyse the past objectively, are devoted to proving some essential "Germanness", "Polishness" or "Czechness" of this region. He believes that the terminology and thought-patterns of nationalist ideology are so deeply entrenched in the minds of researchers that they do not consider themselves nationalist. However, he notes that, due to the spread of the results of the latest studies on ethnicity/nationalism (by Gellner, Hobsbawm, Smith, Eriksen and Billig, amongst others), German publications on Silesia have become quite objective since the 1980s, and the same process (impeded by underfunding) has been taking place in Poland and the Czech Republic since 1989. His own research totals some 500 pages, in English, presented on disc. So what are the traps into which historians have been inclined to fall? There is a tendency for them to treat Silesia as an entity which has existed forever, though Mr. Kamusella points out that it emerged as a region only at the beginning of the 11th century. These same historians speak of Poles, Czechs and Germans in Silesia, though Mr. Kamusella found that before the mid-19th century, identification was with an inhabitant's local area, religion or dynasty. In fact, a German national identity started to be forged in Prussian Silesia only during the Liberation War against Napoleon (1813-1815). It was concretised in 1861 in the form of the first Prussian census, when the language a citizen spoke was equated with his/her nationality. A similar census was carried out in Austrian Silesia only in 1881. The censuses forced the Silesians to choose a nationality despite their multiethnic, multicultural identities.
It was the active promotion of a German identity in Prussian Silesia, and Vienna's uneasy acceptance of the national identities in Austrian Silesia, which stimulated the development of Polish national, Moravian ethnic and Upper Silesian ethnic-regional identities in Upper Silesia, and of Polish national, Czech national, Moravian ethnic and Silesian ethnic identities in Austrian Silesia. While traditional historians speak of the "nationalist struggle" as though it were a permanent characteristic of Silesia, Mr. Kamusella points out that such a struggle only developed in earnest after 1918. What is more, he shows how it has been conveniently forgotten that, besides the national players, there were also significant ethnic movements of Moravians, Upper Silesians, Silesians and the tutejsi (i.e. those who still chose to identify with their locality). At this point Mr. Kamusella moves into the area of linguistics. While historians have traditionally spoken of the conflicts between the three national languages (German, Polish and Czech), Mr. Kamusella reminds us that the standardised forms of these languages, which we choose to dub "national", were developed only in the mid-18th century (German), after 1869 (when Polish became the official language in Galicia) and after the 1870s (when Czech became the official language in Bohemia). As for standard German, it was only widely promoted in Silesia from the mid-19th century onwards. In fact, the majority of the population of Prussian Upper Silesia and Austrian Silesia were bi- or even multilingual. What is more, the "Polish" and "Czech" that Silesians spoke were not the standard languages we know today, but a continuum of West Slavic dialects in the countryside and a continuum of West Slavic/German creoles in the urbanised areas.
Such was the linguistic confusion that, from time to time, some ethnic/regional and Church activists strove to create a distinctive Upper Silesian/Silesian language on the basis of these dialects/creoles, but their efforts were thwarted by the staunch promotion of standard German and, after 1918, of standard Polish and Czech. Still on the subject of language, Mr. Kamusella draws attention to a problem around place names and personal names. Polish historians use current Polish versions of the Silesian place names, Czechs use current Polish/Czech versions, and Germans use the German versions which were in use in Silesia up to 1945. Mr. Kamusella attempted to avoid this, in his view nationalist, tendency by using the version of a place name appropriate to a given period and providing its modern counterpart in parentheses; in the case of modern place names he gives the German version in parentheses. As for the names of historical figures, he strove to use the name entered on the birth certificate of the person involved, thereby avoiding such confusion as surrounds, for instance, the Austrian Silesian pastor L.J. Sherschnik, who in German became Scherschnick, in Polish Szersznik, and in Czech Sersnik. Indeed, Mr. Kamusella suggests, the prospective Silesian scholar should know not only the three languages directly involved in the area itself but also English and French, since many documents and books on the subject have been published in these languages, and even Latin when dealing in depth with the period before the mid-19th century. Mr. Kamusella divides the policies of ethnic cleansing into two categories. The first he classifies as soft, meaning that policy is confined to the educational system, army, civil service and church, and the aim is that everyone learn the language of the dominant group. The second is the group of hard policies, which amount to what is popularly labelled as ethnic cleansing.
This category of policy aims at the total assimilation and/or physical liquidation of the non-dominant groups non-congruent with the ideal of homogeneity of a given nation-state. Mr. Kamusella found that soft policies were consciously and systematically employed by Prussia/Germany in Prussian Silesia from the 1860s to 1918, whereas in Austrian Silesia, Vienna quite inconsistently dabbled in them from the 1880s to 1917. In the inter-war period, the emergence of the nation-states of Poland and Czechoslovakia led to full employment of the soft policies and partial employment of the hard ones (curbed by the League of Nations minorities protection system) in Czechoslovakian Silesia, German Upper Silesia and the Polish parts of Upper and Austrian Silesia. In 1939-1945, Berlin started consistently using all the "hard" methods to homogenise Polish and Czechoslovakian Silesia which fell, in their entirety, within the Reich's borders. After World War II Czechoslovakia regained its prewar part of Silesia while Poland was given its prewar section plus almost the whole of the prewar German province. Subsequently, with the active involvement and support of the Soviet Union, Warsaw and Prague expelled the majority of Germans from Silesia in 1945-1948 (there were also instances of the Poles expelling Upper Silesian Czechs/Moravians, and of the Czechs expelling Czech Silesian Poles/pro-Polish Silesians). During the period of communist rule, the same two countries carried out a thorough Polonisation and Czechisation of Silesia, submerging this region into a new, non-historically based administrative division. Democratisation in the wake of the fall of communism, and a gradual retreat from the nationalist ideal of the homogeneous nation-state with a view to possible membership of the European Union, caused the abolition of the "hard" policies and phasing out of the "soft" ones. 
Consequently, limited revivals of various ethnic/national minorities have been observed in Czech and Polish Silesia, whereas Silesian regionalism has become popular in the westernmost part of Silesia, which remained part of Germany. Mr. Kamusella believes it is possible that, with the overcoming of the nation-state discourse in European politics, when the expression of multiethnicity and multilingualism has become the order of the day in Silesia, regionalism will hold sway in this region, uniting its ethnically/nationally variegated population in accordance with the principle of subsidiarity championed by the European Union.