865 results for CASE-II-DIFFUSION
Abstract:
This paper is a continuation of Dokuchaev and Novikov (2010) [8]. The interaction between partial projective representations and twisted partial actions of groups considered in Dokuchaev and Novikov (2010) [8] is now treated in categorical language. In the case of a finite group G, a structural result on the domains of factor sets of partial projective representations of G is obtained in terms of elementary partial actions. For arbitrary G we study the component pM'(G) of totally defined factor sets in the partial Schur multiplier pM(G) using the structure of Exel's semigroup. A complete characterization of the elements of pM'(G) is obtained for algebraically closed fields. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
Treatment of Class II malocclusion without tooth extractions has been gaining popularity in the orthodontic community for three decades. Fixed functional appliances have been increasingly used by practitioners to promote dentoalveolar compensations and correct Class II malocclusion. The most significant effects are observed in patients with a horizontal growth pattern. A clinical case of Class II correction in a female patient using the fixed Twin Force Bite Corrector appliance is reported. This fixed-anchorage device dispenses with removable functional appliances and does not depend on patient compliance.
Abstract:
OBJECTIVE: To define and compare the numbers and types of occlusal contacts in maximum intercuspation. METHODS: The study consisted of clinical and photographic analysis of occlusal contacts in maximum intercuspation. Twenty-six Caucasian Brazilian subjects were selected before orthodontic treatment, 20 males and 6 females, aged between 12 and 18 years. The subjects were diagnosed and grouped as follows: 13 with Angle Class I malocclusion and 13 with Angle Class II Division 1 malocclusion. After analysis, the occlusal contacts were classified according to the established criteria as: tripodism, bipodism, monopodism (respectively three, two, or one contact point with the slope of the fossa); cusp to a marginal ridge; cusp to two marginal ridges; cusp tip to opposing inclined plane; surface to surface; and edge to edge. RESULTS: The mean number of occlusal contacts per subject was 43.38 in Class I malocclusion and 44.38 in Class II Division 1 malocclusion; this difference was not statistically significant (p>0.05). CONCLUSIONS: A variety of factors influence the number of occlusal contacts in Class I and Class II Division 1 malocclusions. There is no standardization of occlusal contact type according to the studied malocclusions. A proper selection of occlusal contact types, such as cusp to fossa or cusp to marginal ridge, and of their location on the teeth should be defined individually according to the demands of each case. Adequate occlusal contact leads to a correct distribution of forces, promoting periodontal health.
Abstract:
Structural properties of model membranes, such as lipid vesicles, may be investigated through the addition of fluorescent probes. After incorporation, the fluorescent molecules are excited with linearly polarized light, and the fluorescence emission is depolarized by translational as well as rotational diffusion during the lifetime of the excited state. The emitted light is monitored by time-resolved fluorescence: the intensity of the emitted light yields the fluorescence decay times, and the decay of the polarized components of the emitted light yields rotational correlation times, which provide information on the fluidity of the medium. The fluorescent molecule DPH, of uniaxial symmetry, is rather hydrophobic and has collinear absorption and emission transition moments. It has frequently been used as a probe for monitoring the fluidity of the lipid bilayer along the phase transition of the chains. The interpretation of experimental data requires models for the localization of the fluorescent molecules as well as for possible restrictions on their movement. In this study, we develop calculations for two models of uniaxial diffusion of fluorescent molecules, such as DPH, suggested in several articles in the literature, together with a zeroth-order test model consisting of a dipole rotating freely and randomly in a homogeneous solution, which serves as the basis for the study of diffusion models in anisotropic media. In the second model, we consider random rotations of emitting dipoles distributed within cones whose axes are perpendicular to the spherical surface of the vesicle. In the third model, the dipole rotates in the plane of the bilayer's spherical surface, a movement that might occur between the monolayers forming the bilayer.
For each of the models analysed, we use two methods to analyse the rotational diffusion: (I) solution of the corresponding rotational diffusion equation for a single molecule, with the boundary conditions imposed by the model, for the probability of finding the fluorescent molecule in a given configuration at time t. Considering the distribution of molecules in the proposed geometry, we obtain an analytical expression for the fluorescence anisotropy, except for the cone geometry, for which the solution is obtained numerically; (II) numerical simulation of a restricted rotational random walk in the two geometries corresponding to the two models. The latter method may be very useful for low-symmetry or composite geometries.
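Method (II) can be sketched for the zeroth-order test model, the freely rotating dipole, where the anisotropy is known to decay as r(t) = r0 exp(-6Dt). This is a minimal illustrative Monte Carlo, not the study's code: the dipole count, step angle, and step count are arbitrary choices, and r0 = 0.4 is the theoretical maximum anisotropy for collinear absorption and emission moments such as DPH's.

```python
import numpy as np

def random_unit_vectors(n, rng):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def simulate_free_rotation(n_dipoles=20000, n_steps=200, step_angle=0.05, seed=0):
    """Unrestricted rotational random walk on the unit sphere.

    Each dipole is rotated by a fixed small angle about a random axis in its
    tangent plane; the ensemble anisotropy r(t) = r0 <P2(u(t) . u(0))> then
    decays approximately as r0 * exp(-6 D t), with D = step_angle**2 / 4 per step.
    """
    rng = np.random.default_rng(seed)
    u0 = random_unit_vectors(n_dipoles, rng)
    u = u0.copy()
    r0 = 0.4  # fundamental anisotropy for collinear transition moments
    anisotropy = [r0]
    for _ in range(n_steps):
        a = rng.normal(size=u.shape)
        a -= (a * u).sum(axis=1, keepdims=True) * u   # project onto tangent plane
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        u = np.cos(step_angle) * u + np.sin(step_angle) * a
        cos_t = (u * u0).sum(axis=1)
        p2 = 0.5 * (3.0 * cos_t**2 - 1.0)             # second Legendre polynomial
        anisotropy.append(r0 * p2.mean())
    return np.array(anisotropy)

r = simulate_free_rotation()
```

For the cone and in-plane models the same walk is run with each step rejected or reflected whenever it would leave the allowed region, which is what makes the simulation approach convenient for the restricted geometries.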
Abstract:
Doctoral program in oceanography
Abstract:
The Ph.D. dissertation analyses the reasons for which political actors (governments, legislatures and political parties) consciously decide to give away a source of power by increasing the political significance of the courts. It focuses on a single case of particular significance: the passage of the Constitutional Reform Act 2005 in the United Kingdom. This Act has deeply changed the governance and organization of the English judicial system, providing a much clearer separation of powers and a stronger independence of the judiciary from the executive and the legislative. What is more, this strengthening of judicial independence was decided in a period in which the political role of the English judges was evidently increasing. I argue that the reform can be interpreted as a "paradigm shift" (Hall 1993) that has changed the way in which judicial power is regarded. The most widespread conceptions in the sub-system of English judicial policies have shifted, and a new paradigm has become dominant. The new paradigm includes: (i) a stronger separation of powers, (ii) a collective (as well as individual) conception of the independence of the judiciary, (iii) a reduction of the political accountability of judges, (iv) the formalization of the guarantees of judicial independence, (v) a principle-driven (instead of pragmatic) approach to reform, and (vi) the transformation of a non-codified constitution into a codified one. Judicialization through political decisions represents an important, but not fully explored, field of research. The literature, in particular, has focused on factors unable to explain the English case: the competitiveness of the party system (Ramseyer 1994), political uncertainty at the time of constitutional design (Ginsburg 2003), cultural divisions within the polity (Hirschl 2004), and federal institutions and division of powers (Shapiro 2002).
All these contributions link the decision to enhance the political relevance of judges to some kind of diffusion of political power. In contemporary England, characterized by a relatively high concentration of power in the government, the reasons for such a reform must be located elsewhere. I argue that the Constitutional Reform Act 2005 can be interpreted as the result of three different kinds of reasons: (i) the social and demographic transformations of the English judiciary, which made most of the previous mechanisms of governance inefficient, (ii) the role played by judges in the policy process, and (iii) the cognitive and normative influences originating from the European context, as a consequence of the United Kingdom's membership in the European Union and the Council of Europe. My thesis is that only a full analysis of all three aspects can explain the decision to reform the judicial system and the content of the Constitutional Reform Act 2005. Above all, only the cultural influences coming from the European legal complex can explain the paradigm shift previously described.
Abstract:
Healthcare, Human Computer Interfaces (HCI), security and biometrics are the most promising application scenarios directly involved in the evolution of Body Area Networks (BANs). Both wearable devices and sensors integrated directly into garments envision a world in which each of us is supervised by an invisible assistant monitoring our health and daily-life activities. New opportunities are enabled by improvements in sensor miniaturization and in the transmission efficiency of wireless protocols, which have brought high computational power to independent, energy-autonomous, small-form-factor devices. Application purposes are various: (I) data collection for off-line knowledge discovery; (II) notifying users of their activities or of danger; (III) biofeedback rehabilitation; (IV) remote alarm activation when the subject needs assistance; (V) a more natural interaction with the surrounding computerized environment; (VI) user identification by physiological or behavioral characteristics. Telemedicine and mHealth [1] are two of the leading concepts directly related to healthcare. The ability to wear unobtrusive devices supports users' autonomy: the user gains a new sense of freedom, backed not only by psychological reassurance but by a real improvement in safety. Furthermore, the medical community aims to introduce new devices to innovate patient treatment, in particular by extending ambulatory analysis into real-life scenarios through continuous acquisition. The wide diffusion of emerging portable wellness equipment has extended the use of wearable devices to fitness and training, monitoring user performance on the task at hand. Learning the right execution techniques for work, sport, or music can be supported by an electronic trainer furnishing adequate aid.
HCIs made real the concepts of Ubiquitous Computing, Pervasive Computing, and Calm Technology introduced in 1988 by Mark Weiser and John Seely Brown. They promoted the creation of pervasive environments enhancing the human experience: context-aware, adaptive, and proactive environments serve and help people by becoming sensitive and reactive to their presence, since electronics is ubiquitous and deployed everywhere. In this thesis we pay attention to the integration of all the aspects involved in BAN development. Starting from the choice of sensors, we design the node, configure the radio network, implement real-time data analysis, and provide feedback to the user. We present algorithms to be implemented in a wearable assistant for posture and gait analysis and to provide assistance in different walking conditions, preventing falls. Our aim to contribute to the development of non-proprietary solutions drove us to integrate commercial and standard components in our devices: we used sensors available on the market, avoided designing specialized sensors in ASIC technologies, and employed standard radio protocols and open-source projects wherever possible. The specific contributions of the PhD research activities are presented and discussed in the following. • We designed and built several wireless sensor nodes providing both sensing and actuation capability, focusing on flexibility, small form factor, and low power consumption. The key idea was to develop a simple, general-purpose architecture for rapid analysis, prototyping, and deployment of BAN solutions. Two different sensing units are integrated: kinematic (3D accelerometer and 3D gyroscopes) and kinetic (foot-floor contact pressure forces). Two kinds of feedback were implemented: audio and vibrotactile.
• Since the system built is a suitable platform for testing and measuring the features and constraints of a sensor network (radio communication, network protocols, power consumption, and autonomy), we compared Bluetooth and ZigBee performance in terms of throughput and energy efficiency. Field tests evaluated usability in the fall-detection scenario. • To prove the flexibility of the architecture designed, we implemented a wearable system for human posture rehabilitation. The application was developed in conjunction with biomedical engineers, who provided the audio algorithms furnishing biofeedback to the user about his/her stability. • We explored off-line gait analysis of collected data, developing an algorithm to detect foot inclination in the sagittal plane during walking. • In collaboration with the Wearable Lab at ETH Zurich, we developed an algorithm to monitor the user in several walking conditions in which the user carries a load. The remainder of the thesis is organized as follows. Chapter I gives an overview of Body Area Networks (BANs), illustrating the relevant features of this technology and the key challenges still open; it concludes with a short list of real solutions and prototypes proposed by academic research and manufacturers. The domain of posture and gait analysis, the methodologies, and the technologies used to provide real-time feedback on detected events are illustrated in Chapter II. Chapters III and IV present the BANs developed to detect falls and monitor gait using two inertial measurement units and baropodometric insoles. Chapter V reports an audio-biofeedback system to improve balance based on information about the user's centre of mass. A walking assistant based on a KNN classifier to detect gait alterations under load carriage is described in Chapter VI.
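The KNN-based walking assistant can be illustrated with a minimal sketch. The two features below (stride-time variance and trunk inclination) and all numeric values are hypothetical stand-ins, not the thesis's real accelerometer features; only the classifier logic, majority vote among the k nearest labelled gait windows, is the generic technique being named.

```python
import numpy as np

def knn_predict(train_x, train_y, queries, k=5):
    """Label each query window by majority vote among its k nearest neighbours."""
    preds = []
    for q in queries:
        dist = np.linalg.norm(train_x - q, axis=1)   # Euclidean feature distance
        nearest = train_y[np.argsort(dist)[:k]]
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Hypothetical gait features per window: (stride-time variance, trunk inclination)
rng = np.random.default_rng(1)
unloaded = rng.normal([0.02, 2.0], [0.01, 0.5], size=(100, 2))
loaded = rng.normal([0.06, 6.0], [0.01, 0.5], size=(100, 2))
x = np.vstack([unloaded, loaded])
y = np.array([0] * 100 + [1] * 100)  # 0 = normal walking, 1 = load carriage
queries = np.array([[0.021, 2.1], [0.058, 5.9]])
print(knn_predict(x, y, queries))  # expected: [0 1]
```

In practice the features would be standardized before computing distances, since KNN is sensitive to feature scale; that step is omitted here because the synthetic clusters are already well separated.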
Abstract:
The AMANDA-II detector is primarily designed for the directionally resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, such as those expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective increase in the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae, as well as experimental data from supernova SN1987A, were studied. In addition, the sensitivities of the optical modules were redetermined. For this purpose, the energy losses of charged particles in South Pole ice had to be investigated and a simulation of photon propagation developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. As part of this work, an algorithm for the real-time search for supernova signals was implemented as a submodule of the data acquisition. It includes various improvements over the version previously used by the AMANDA collaboration. Thanks to an optimization for computational speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. The disqualification of optical modules exhibiting unsuitable behaviour happens in real time. However, module behaviour must be assessed from buffered data for this purpose, so the analysis of the data from the qualified modules cannot proceed without a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals for several minutes for later evaluation.
Since the noise-rate data of the optical modules are otherwise available in intervals of 500 ms, the analysis time base can be chosen freely in units of 500 ms. Within this work, three analyses of this kind were activated at the South Pole: one with the data-acquisition time base of 500 ms, one with a time base of 4 s, and one with a time base of 10 s. This maximizes the sensitivity for signals with a characteristic exponential decay time of 3 s while maintaining good sensitivity over a wide range of exponential decay times. These analyses were examined in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and were correspondingly well understood. Based on the measured data, the expected supernova signals were simulated. From a comparison between this simulation, the measured data from 2000 to 2003, and the simulation of the expected statistical background, it can be concluded at a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. The number of expected events from the statistical background at this level is less than one in a million. Nevertheless, one such event was measured. With the chosen significance threshold, 74% of all possible supernova progenitor stars in the galaxy are monitored. In combination with the last result published by the AMANDA collaboration, an upper limit of only 2.6 supernovae per year is even obtained.
In the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate excess before a notification of the detection of a supernova candidate is sent. With this threshold, the monitored fraction of stars in the galaxy is 81%, but the frequency of false alarms rises to about 2 events per week. The alarm messages are transmitted to the northern hemisphere via an Iridium modem and will soon contribute to SNEWS, the worldwide network for the early detection of supernovae.
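For intuition on the significance thresholds quoted above, a naive calculation shows what a purely Gaussian, independent background would imply for the false-alarm rate. The 500 ms analysis cadence is taken from the abstract; the Gaussian-independence assumption is ours, and the fact that the observed false-alarm rate of about 2 per week at 5.5 sigma far exceeds this estimate reflects the correlated, non-Gaussian noise of the real optical modules.

```python
import math

def false_alarms_per_week(threshold_sigma, analyses_per_second):
    """Expected number of one-sided Gaussian upward fluctuations above the
    significance threshold, for independent analyses at a fixed cadence."""
    tail_prob = 0.5 * math.erfc(threshold_sigma / math.sqrt(2.0))  # one-sided tail
    return tail_prob * analyses_per_second * 7 * 24 * 3600

# One analysis every 500 ms, as in the data acquisition described above
per_week_55 = false_alarms_per_week(5.5, 2.0)  # alarm threshold
per_week_74 = false_alarms_per_week(7.4, 2.0)  # supernova identification threshold
```

Under this idealized model the 5.5 sigma cut would fire only about once a year, and the 7.4 sigma cut essentially never, which is why real-time disqualification of misbehaving modules matters so much for the actual analysis.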
Abstract:
Stars with an initial mass between about 8 and 25 solar masses end their existence in an enormous explosion, a Type II supernova. The high-entropy bubble formed in this process is a region at the edge of the forming neutron star and is considered a possible site of the r-process. Because of the high temperature T inside the bubble, the matter there is completely photodisintegrated. The ratio of neutrons to protons is described by the electron fraction Ye. The thermodynamic evolution of the system is given by the entropy S. Since the expansion of the bubble proceeds rapidly, it can be regarded as adiabatic. The entropy S is then proportional to T^3/rho, where rho denotes the density. The explicit time evolution of T and rho, as well as the process duration, depend on Vexp, the expansion velocity of the bubble. The first part of this dissertation deals with the process of charged-particle reactions, the alpha-process. This process ends at temperatures of about 3 x 10^9 K, the so-called "alpha-rich" freeze-out, in which predominantly alpha particles, free neutrons, and a small fraction of intermediate-mass "seed" nuclei in the mass region around A=100 are formed. The ratio of free neutrons to seed nuclei, Yn/Yseed, is decisive for whether an r-process can take place. The second part of this work deals with the r-process proper, which occurs at neutron number densities of up to 10^27 neutrons per cm^3 and, within at most 400 ms, forms very neutron-rich "progenitor" isotopes of elements up to thorium and uranium. During the subsequent freeze-out of the neutron-capture reactions at 10^9 K and 10^20 neutrons per cm^3, the beta back-decay of the original r-process nuclei towards the valley of stability takes place. This non-equilibrium phase is investigated in detail in a parameter study in the present work.
Finally, astrophysical conditions are defined under which the entire distribution of solar r-process isotopic abundances can be reproduced.
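The thermodynamic relations quoted above can be written compactly, using the abstract's own symbols. The last line, relating the final r-process mass number to the neutron-to-seed ratio by simple mass balance, is a standard heuristic from the r-process literature rather than a result stated in the abstract:

```latex
S \;\propto\; \frac{T^{3}}{\rho} = \mathrm{const.}\ \text{(adiabatic expansion)}
\qquad\Longrightarrow\qquad
\rho(t) \,\propto\, T(t)^{3},
\qquad
A_{\mathrm{final}} \;\approx\; A_{\mathrm{seed}} + \frac{Y_{n}}{Y_{\mathrm{seed}}},
\qquad A_{\mathrm{seed}} \approx 100 .
```

This is why Yn/Yseed at freeze-out is decisive: reaching thorium and uranium (A ~ 230-240) from seeds around A = 100 requires a neutron-to-seed ratio of order 130 or more.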
Abstract:
My work concerns two different systems of equations used in the mathematical modeling of semiconductors and plasmas: the Euler-Poisson system and the quantum drift-diffusion system. The first is given by the Euler equations for the conservation of mass and momentum, with a Poisson equation for the electrostatic potential. The second takes into account the physical effects due to the smallness of the devices (quantum effects); it is a simple extension of the classical drift-diffusion model, which consists of two continuity equations for the charge densities together with a Poisson equation for the electrostatic potential. Using an asymptotic expansion method, we study (in the steady-state case for a potential flow) the limit to zero of the three physical parameters which arise in the Euler-Poisson system: the electron mass, the relaxation time and the Debye length. For each limit, we prove the existence and uniqueness of profiles for the asymptotic expansion and some error estimates. For a vanishing electron mass or a vanishing relaxation time, this method gives us a new approach to the convergence of the Euler-Poisson system to the incompressible Euler equations. For a vanishing Debye length (also called the quasineutral limit), we obtain a new approach to the existence of solutions when boundary layers can appear (i.e. when no compatibility condition is assumed). Moreover, using an iterative method and a finite volume scheme or a penalized mixed finite volume scheme, we numerically exhibit the smallness condition on the electron mass needed for the existence of solutions to the system, a condition which has already been established in the literature. In the quantum drift-diffusion model for the transient bipolar case in one space dimension, we show, by using a time discretization and energy estimates, the existence of solutions (for a general doping profile). We also prove rigorously the quasineutral limit (for a vanishing doping profile).
Finally, using a new time discretization and an algorithmic construction of entropies, we prove some regularity properties for the solutions of the equation obtained in the quasineutral limit (for a vanishing pressure). This new regularity permits us to prove the positivity of solutions to this equation for sufficiently large times.
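For orientation, the classical drift-diffusion model mentioned above has the following standard scaled form; the notation (lambda for the scaled Debye length, C(x) for the doping profile) is conventional and not necessarily the exact scaling used in the thesis:

```latex
\begin{aligned}
\partial_t n - \operatorname{div}\!\left(\nabla n - n\,\nabla V\right) &= 0,\\
\partial_t p - \operatorname{div}\!\left(\nabla p + p\,\nabla V\right) &= 0,\\
-\lambda^{2}\,\Delta V &= n - p - C(x).
\end{aligned}
```

The quantum drift-diffusion model adds a Bohm-potential correction of the form $\pm\,\varepsilon^{2}\,\Delta\sqrt{n}/\sqrt{n}$ to the electrochemical potentials, and the quasineutral limit discussed above corresponds to $\lambda \to 0$.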
Abstract:
This research has focused on the study of the behavior and collapse of masonry arch bridges. Recent decades have seen an increasing interest in this structural type, which is still present and in use despite the passage of time and the changes in means of transport. Several strategies have been developed over time to simulate the response of this type of structure, although even today there is no generally accepted standard for the assessment of masonry arch bridges. The aim of this thesis is to compare the principal analytical and numerical methods existing in the literature on case studies, trying to highlight strengths and weaknesses. The methods examined are mainly three: i) the Thrust Line Analysis Method; ii) the Mechanism Method; iii) the Finite Element Method. The Thrust Line Analysis Method and the Mechanism Method are analytical methods derived from two of the fundamental theorems of plastic analysis, while the Finite Element Method is a numerical method that uses different discretization strategies to analyze the structure. Every method is applied to the case studies through computer-based implementations that allow a user-friendly application of the principles explained. A particular closed-form approach based on an elasto-plastic material model and developed by some Belgian researchers is also studied. To compare the three methods, two different case studies have been analyzed: i) a generic single-span masonry arch bridge; ii) a real masonry arch bridge, the Clemente Bridge, built over the Savio River in Cesena. In the analyses performed, all models are two-dimensional so that results are comparable between the different methods examined. The methods have been compared with each other in terms of collapse load and hinge positions.
Abstract:
The present work deals with the preparation of latex particles in non-aqueous emulsion systems. The background of the investigations was the question of whether the use of non-aqueous emulsions makes it possible to employ both water-sensitive monomers and moisture-sensitive polymerizations for the preparation of polymer latex particles and their primary dispersions. The basic concept of the work was to form non-aqueous emulsions based on two immiscible organic solvents of different polarity and then to exploit the dispersed phase of the emulsion for the synthesis of the latex particles. For this purpose, various non-aqueous emulsion systems were developed that contained a polar solvent as the dispersed phase and a non-polar solvent as the continuous phase. Building on these results, subsequent investigations first studied the applicability of such emulsions to the preparation of various acrylate and methacrylate polymer dispersions by radical polymerization. To show that the non-aqueous emulsions developed here are also suitable for step-growth reactions, polyester, polyamide, and polyurethane latex particles were likewise prepared. The molecular weights of the polymers obtained were up to 40,000 g/mol, a factor of five to 30 higher than in aqueous emulsion and miniemulsion polymerization systems. It can be assumed that two factors are mainly responsible for the high molecular weights: first, the water-free conditions, which prevent hydrolysis of the reactive groups, and second, the partially fulfilled Schotten-Baumann conditions, which ensure a diffusion-controlled, balanced stoichiometry of the reaction partners at the interface between the dispersed and continuous phases.
Thus it is possible for the first time to prepare high-molecular-weight polyesters, polyamides, and polyurethanes as primary dispersions in a single synthesis step. The versatility of the non-aqueous emulsions was further demonstrated in additional examples by the synthesis of various electrically conductive latices, such as polyacetylene latex particles. This work showed that the non-aqueous emulsions developed have an extremely broad applicability for the preparation of polymer latex particles. Owing to the water-free conditions, the emulsion processes described allow latex particles and the corresponding non-aqueous dispersions to be prepared not only by traditional radical polymerization but also by other polymerization mechanisms (catalytic, oxidative, or by polycondensation or polyaddition).
Abstract:
This thesis is about plant breeding in early 20th-century Italy. The stories of the two most prominent Italian plant breeders of the time, Nazareno Strampelli and Francesco Todaro, are used to explore a fragment of the often-neglected history of Italian agricultural research. While Italy was not at the forefront of agricultural innovation, research programs aimed at varietal innovation did emerge in the country, along with an early diffusion of Mendelism. Using philosophical as well as historical analysis, plant breeding is analysed throughout this thesis as a process: a sequence of steps that rests on practical skills and theoretical assumptions, acting on various elements of production. Systematic plant-breeding programs in Italy started from small individual efforts, attracting more and more resources until they became a crucial part of the fascist regime's infamous agricultural policy. Hybrid varieties developed in the early 20th century survived World War II and are now ancestors of the varieties still cultivated today. Despite this relevance, the history of Italian wheat hybrids is today largely forgotten: this thesis is an effort to re-evaluate a part of it. The research allowed previously unknown or neglected facts to emerge, giving a new perspective on the infamous alliance between plant-breeding programs and the fascist regime. This thesis undertakes an analysis of Italian plant-breeding programs as processes. Those processes had a practical as well as a theoretical side, and involved various elements of production. Although a complete history of Italian plant breeding still remains to be written, the Italian case can now be considered alongside the other case studies that scholars have developed in the history of plant breeding. The hope is that this historical and philosophical analysis will contribute to the ongoing effort to understand the history of plants.
Abstract:
Although communist publishing policy represents a fundamental field of inquiry in research on the PCI, the party's publishing activity has fallen into historical oblivion. Taking the book as the material support and vehicle of communist political culture, and the publishing house as a channel of socialization, this research investigates the processes of their construction and diffusion. The research moves in two directions. The first chapter attempts to account for the methodological rationale of the inquiry and the development of the research hypotheses on the "publisher party", taking up some challenges posed to political history by other disciplines, such as sociology and political science, which represent a fruitful vein for our inquiry. The second, empirical direction concerns the survey of sources and analytical tools for reconstructing the history of the "publisher party" from 1944 to 1956. The division of the research into two parts, 1944-1947 and 1947-1956, broadly follows the classical periodization identified by the historiography of the PCI's cultural policy, and is built on four historical breaks (1944, with the "Salerno turn"; 1947, with the "Cominform turn"; 1953, with the death of Stalin and the thaw; 1956, with the 20th Congress and the events in Hungary) which proved significant for our research on communist publishing as well. Finally, the present work rests on three levels of analysis: identifying the mechanisms of political decision-making and the organization adopted by communist publishing, examining the aims and the internal organizational changes of the party in order to understand how strategic and tactical shifts were reflected in publishing activity; reconstructing communist publishing output; and, finally, identifying the distribution processes and the reading policies promoted by the PCI.