932 results for multimotor drives


Relevance:

10.00%

Publisher:

Abstract:

In recent years, industrial processes have shown an ever-increasing degree of automation. This increase is driven by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs in design, realization, and maintenance. This trend toward complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda and buy boxed products such as food or cigarettes. A further indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large share of the manufacturing machine industry is located in Italy, notably the packaging machine industry; a great concentration of this kind of industry is found around Bologna, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still attributable to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, as a support for the maintenance operations of the machine. The facilities that designers can find directly on the market, in terms of software component libraries, provide adequate support for the implementation of both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs. Historically, the design and verification process for complex automated industrial systems has been performed empirically, without a clear distinction between functional and technological-implementation concepts and without a systematic method for dealing organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives, and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different and usually very "unstructured" way. No clear distinction is made between functions and implementations, or between functional architectures and technological architectures and platforms. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control: in logic control, discrete-event dynamics replace time-driven dynamics, so most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly tied to the adopted implementation technology (relays in the past, software nowadays), again leading to deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability, and reusability are strongly emphasized and profitably realized in so-called object-oriented methodologies. Industrial automation has lately been receiving this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, many contributions have already been proposed in the scientific and technical literature to establish a suitable modelling framework for industrial automation. In recent years there has been considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability, and safety: the control system should not only deal with the nominal behaviour, but should also handle other important duties such as diagnosis and fault isolation, recovery, and safety management. Indeed, together with high performance, fault occurrences increase in complex systems.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, complex systems such as AMS contain, together with reliable mechanical elements, an increasing number of electronic devices, which are more vulnerable by their own nature. The diagnosis and fault isolation problem for a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function with the desired level of reliability and safety; the next step is to prevent faults and eventually reconfigure the control system so that faults are tolerated. On this topic, important improvements in the formal verification of logic control, fault diagnosis, and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics of industrial automated systems in Chapter 1. Chapter 2 surveys the state of software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to achieve better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems that should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4, and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
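
To give a flavour of the Discrete Event Systems machinery invoked here, the following minimal Python sketch models a logic-control component as a finite automaton and answers a simple verification query by reachability analysis. The component, its states, and its events are hypothetical illustrations, not taken from the thesis:

```python
from collections import deque

# Hypothetical automaton for a pneumatic gripper: states and
# event-labelled transitions (a toy stand-in for a logic-control component).
TRANSITIONS = {
    ("idle", "cmd_close"): "closing",
    ("closing", "closed_sensor"): "holding",
    ("holding", "cmd_open"): "opening",
    ("opening", "open_sensor"): "idle",
    ("closing", "timeout"): "fault",   # incipient-fault detection
}

def reachable(initial="idle"):
    """Breadth-first exploration of the reachable state set."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

states = reachable()
print("reachable states:", states)

# A simple verification query: the 'fault' state must be entered only
# through the observable 'timeout' event, so a diagnoser can detect it.
fault_events = {e for (s, e), d in TRANSITIONS.items() if d == "fault"}
assert fault_events == {"timeout"}
```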

Relevance:

10.00%

Publisher:

Abstract:

The function of Rho GTPases in Toll receptor-induced signal transduction pathways. The Toll-like receptor 2 (hTLR2), like the TNFα receptor and the Imd protein identified in Drosophila, is able to induce both the activation of NF-κB and apoptosis via a mechanism that has so far remained unexplained. In this work it was shown that the active form of the GTPase Rho exerts a decisive control function in both signal transduction pathways. Stimulation of TLR2 leads to the activation of RhoA in epithelial and monocytic cell lines. The activated GTPase recruits the kinase PKCζ and thereby induces the IκB-independent activation of the p65/Rel transcription complex. Active RhoA furthermore controls an additional signal transduction pathway that brings about the TLR2-dependent early apoptotic membrane changes with the involvement of the kinases ROCK and MLCK. The Rho-dependent regulation of these opposing signal responses is made possible by direct interaction with specific downstream targets, each of which belongs to only one of the pathways. The GTPase Rho thus represents a key element in the TLR2-induced primary immune response.

Relevance:

10.00%

Publisher:

Abstract:

We investigate the statics and dynamics of a glassy, non-entangled, short bead-spring polymer melt with molecular dynamics simulations. Temperature ranges from slightly above the mode-coupling critical temperature to the liquid regime where features of a glassy liquid are absent. Our aim is to work out the polymer-specific effects on relaxation and particle correlation. We find the intra-chain static structure unaffected by temperature; it depends only on the distance of monomers along the backbone. In contrast, the distinct inter-chain structure shows pronounced site-dependence effects at the length scales of the chain and the nearest-neighbor distance. There, we also find the strongest temperature dependence, which drives the glass transition. Both the site-averaged coupling of the monomer and center of mass (CM) and the CM-CM coupling are weak and presumably not responsible for a peak in the coherent relaxation time at the chain's length scale. Chains rather emerge as soft, easily interpenetrating objects. Three-particle correlations are well reproduced by the convolution approximation, with the exception of model-dependent deviations. In the spatially heterogeneous dynamics of our system we identify highly mobile monomers which tend to follow each other in one-dimensional paths forming "strings". These strings have an exponential length distribution and are generally short compared to the chain length. Thus, a relaxation mechanism in which neighboring mobile monomers move along the backbone of the chain seems unlikely. However, the correlation of bonded neighbors is enhanced. When liquids are confined between two surfaces in relative sliding motion, kinetic friction is observed. We study a generic model setup by molecular dynamics simulations for a wide range of sliding speeds, temperatures, loads, and lubricant coverings for simple and molecular fluids. Instabilities in the particle trajectories are identified as the origin of kinetic friction. They lead to high particle velocities of fluid atoms which are gradually dissipated, resulting in a friction force. In commensurate systems fluid atoms follow continuous trajectories for sub-monolayer coverings and, consequently, friction vanishes at low sliding speeds. For incommensurate systems the velocity probability distribution exhibits approximately exponential tails. We connect this velocity distribution to the kinetic friction force, which reaches a constant value at low sliding speeds. This approach agrees well with the friction obtained directly from simulations and explains Amontons' law on the microscopic level. Molecular bonds in commensurate systems lead to incommensurate behavior, but do not change the qualitative behavior of incommensurate systems. However, crossed chains form stable load-bearing asperities which strongly increase friction.
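
As an illustration of the tail analysis described for incommensurate systems, here is a minimal sketch using synthetic data in place of actual MD trajectories: histogram (hypothetical) fluid-atom velocities and fit the tail with an exponential form P(v) ~ exp(-v/v0). The numbers are purely illustrative, not the thesis's data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
# Stand-in for MD output: velocity magnitudes with an exponential tail.
v = rng.exponential(scale=0.3, size=100_000)

# Density-normalized histogram of the velocity magnitudes.
counts, edges = np.histogram(v, bins=80, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Fit only the tail, where the exponential form is expected to hold.
tail = centers > 0.5
def expo(v, a, v0):
    return a * np.exp(-v / v0)

(a, v0), _ = curve_fit(expo, centers[tail], counts[tail], p0=(1.0, 0.1))
print(f"fitted decay velocity v0 = {v0:.3f}")
```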

Relevance:

10.00%

Publisher:

Abstract:

Hydrothermal fluids are a fundamental resource for understanding and monitoring volcanic and non-volcanic systems. This thesis focuses on the study of hydrothermal systems through numerical modeling with the geothermal simulator TOUGH2. Several simulations are presented, and the geophysical and geochemical observables arising from fluid circulation are analyzed in detail throughout the thesis. In a volcanic setting, the fluids feeding fumaroles and hot springs may play a key role in hazard evaluation. The evolution of fluid circulation is caused by a strong interaction between magmatic and hydrothermal systems. A simultaneous analysis of different geophysical and geochemical observables is a sound approach for interpreting monitored data and inferring a consistent conceptual model. The observables analyzed are ground displacement, gravity changes, electrical conductivity, the amount, composition, and temperature of the gases emitted at the surface, and the extent of the degassing area. Results highlight the different temporal responses of the considered observables, as well as their different radial patterns of variation. However, the magnitude, temporal response, and radial pattern of these signals depend not only on the evolution of fluid circulation: a main role is played by the assumed rock properties. Numerical simulations highlight the differences that arise from assuming different permeabilities, for both homogeneous and heterogeneous systems. Rock properties affect hydrothermal fluid circulation, controlling both the range of variation and the temporal evolution of the observable signals. Low-temperature fumaroles with low discharge rates may be affected by atmospheric conditions. Detailed parametric simulations were performed, aimed at understanding the effects of system properties, such as permeability and gas reservoir overpressure, on diffuse degassing when air temperature and barometric pressure changes are applied at the ground surface. Hydrothermal circulation, however, is not only a characteristic of volcanic systems. Hot fluids are involved in several practical problems, such as geothermal engineering, nuclear waste propagation in porous media, and Geological Carbon Sequestration (GCS). The current concept for large-scale GCS is the direct injection of supercritical carbon dioxide into deep geological formations which typically contain brine. Upward displacement of such brine from deep reservoirs, driven by the pressure increases resulting from carbon dioxide injection, may occur through abandoned wells, permeable faults, or permeable channels. Brine intrusion into aquifers may degrade groundwater resources. Numerical results show that the pressure rise drives dense water up the conduits, but does not necessarily result in continuous flow. Rather, the overpressure leads to a new hydrostatic equilibrium if the fluids are initially density stratified. If warm, salty fluid does not cool while passing through the conduit, an oscillatory solution is possible. Parameter studies delineate steady-state (static) and oscillatory solutions.
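
As a back-of-the-envelope companion to the last point (a static balance under assumed, illustrative numbers, not the TOUGH2 model used in the thesis): continuous upward brine flow requires the injection overpressure to exceed the extra hydrostatic head of the denser brine column; otherwise the interface settles at a new equilibrium.

```python
# Illustrative static balance for brine rise in a conduit (not TOUGH2).
G = 9.81            # m/s^2
RHO_FRESH = 1000.0  # kg/m^3, shallow aquifer water (assumed)
RHO_BRINE = 1100.0  # kg/m^3, deep brine (assumed)

def max_static_rise(overpressure_pa):
    """Height the brine column can be lifted before a new
    hydrostatic equilibrium is reached (density-stratified case)."""
    return overpressure_pa / (G * (RHO_BRINE - RHO_FRESH))

for dp_mpa in (0.1, 0.5, 1.0):
    h = max_static_rise(dp_mpa * 1e6)
    print(f"overpressure {dp_mpa:.1f} MPa -> brine rises ~{h:,.0f} m")
# If this rise is shorter than the conduit, flow stalls at the new
# equilibrium; continuous discharge needs overpressure beyond this head.
```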

Relevance:

10.00%

Publisher:

Abstract:

The Gulf of Aqaba represents a small-scale, easy-to-access regional analogue of larger oceanic oligotrophic systems. In this Gulf, the seasonal cycle of stratification and mixing drives the seasonal phytoplankton dynamics. In summer and fall, when nutrient concentrations are very low, Prochlorococcus and Synechococcus are more abundant in the surface water. These two populations are exposed to phosphate limitation. During winter mixing, when nutrient concentrations are high, Chlorophyceae and Cryptophyceae are dominant, but they are scarce or absent during summer. In this study, a simulation model based on historical data was developed to predict the phytoplankton dynamics in the northern Gulf of Aqaba. The purpose is to understand what forces operate, and how, in determining the phytoplankton dynamics in this Gulf. To build the models, data sampled at two different sampling stations (Fish Farm Station and Station A) were used. Data on chemical, biological, and physical factors are available from 14 January 2007 to 28 December 2009. The Fish Farm Station was located near a fish farm that was operational until 17 June 2008, the complete closure date of the farm, about halfway through the total sampling period. The Station A sampling point is about 13 km away from the Fish Farm Station. The models were built with the MATLAB software (version 7.6.0.324, R2008a), in particular the Simulink tool. The Fish Farm Station models show that the fish farm activity altered the nutrient concentrations and, as a consequence, the normal phytoplankton dynamics. Despite the distance between the two sampling stations, there might be an influence of the fish farm activities on the Station A ecosystem as well. The models for this sampling station show that the fish farm impact appears to be much lower than at the Fish Farm Station, because the phytoplankton dynamics appear to be driven mainly by the seasonal mixing cycle.
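
The thesis builds its models in Simulink; as a language-neutral illustration of the same mechanism (a deliberately simplified sketch with assumed parameters, not the thesis model), a two-compartment nutrient-phytoplankton ODE shows how a seasonal mixing term can drive a winter-peaking phytoplankton cycle:

```python
import numpy as np
from scipy.integrate import solve_ivp

def np_model(t, y, mu=0.8, ks=0.3, m=0.1, n_deep=2.0):
    """Toy nutrient (N) - phytoplankton (P) model; units arbitrary.
    Seasonal mixing injects deep nutrients in winter (t = 0 mod 365)."""
    n, p = y
    mixing = 0.05 * (1 + np.cos(2 * np.pi * t / 365.0))  # peaks in winter
    uptake = mu * n / (ks + n) * p                       # Monod uptake
    dn = mixing * (n_deep - n) - uptake
    dp = uptake - m * p - mixing * p                     # mortality + dilution
    return [dn, dp]

sol = solve_ivp(np_model, (0, 3 * 365), [1.0, 0.1], max_step=1.0)
print(f"P ranges from {sol.y[1].min():.2f} to {sol.y[1].max():.2f}")
```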

Relevance:

10.00%

Publisher:

Abstract:

At the global level, the population is increasingly concentrated in cities. In Europe, around 75% of the population lives in urban areas and, according to the European Environment Agency (2010), the urban population is foreseen to increase to 80% by 2020. At the same time, the quality of life in cities is declining and urban pollution keeps increasing in terms of carbon dioxide (CO2) emissions, waste, noise, and lack of greenery. Many European cities struggle to cope with the social, economic, and environmental problems resulting from pressures such as overcrowding or decline, social inequity, and health problems related to food security and pollution. Nowadays, local authorities try to address these environmental sustainability problems through various urban logistics measures, which directly and indirectly affect the urban food supply system; an integrated approach including freight transport and food provisioning policy issues is therefore needed. This research centres on the urban food transport system and its impact on the environmental sustainability of the city. The main question that drives the analysis is: "How does the urban food distribution system affect the ecological sustainability of modern cities?" The research analyses the city logistics project for food transport implemented in Parma, Italy, by the wholesale produce market. The case study investigates the renewed role of the wholesale market in the urban food supply chain as a commercial and logistics operator, with reference to the concept of the food hub. A preliminary analysis of urban food transport for the city of Bologna is then presented. The research aims at suggesting a methodological framework to estimate urban food demand and supply and to assess urban food transport performance, in order to identify external cost indicators that help policymakers evaluate the environmental sustainability of different logistics measures.

Relevance:

10.00%

Publisher:

Abstract:

Theories of post-industrialization use, as empirical evidence of a change in the historical process, the entry into a new social structure distinguished, among other things, by the shift from goods to services and by the formation of new professional and managerial structures. In this context, it is first of all interesting to understand how these new forms of economic and social organization have influenced the direct tax systems of the Italian and French states in the formation and refinement of the notion of self-employment income and of the models for its direct taxation. The research therefore initially concentrates on the process of construction and evolution of the notion of self-employment income and of the national forms of direct taxation of such income; a process developed over the modern and contemporary eras which, from a fiscal-historical point of view, stands as an age of great change also with regard to the direct taxation of income from movable wealth. Secondly, it is important to clarify whether the current notions of self-employment income can be reconstructed with a view to approximating the modalities of its direct taxation to those adopted for business income in the Italian and French direct tax systems; the research is therefore oriented towards the description and analysis of the questions concerning the objective and subjective fiscal definition of self-employment income, the direction in which the national models for its direct taxation should currently move, and the reasons justifying it. The research further extends to a comparative analysis highlighting the elements of convergence and divergence needed to draw precise conclusions on the approximation, at the national and European levels, of the notions of self-employment income and of the principles and modalities of its direct taxation, in order to guarantee the freedoms of establishment and of provision of services and the principles of non-discrimination and fiscal non-differentiation for cross-border self-employed workers in the internal market. Thirdly, this research seeks to clarify whether, within the treaty and European frameworks, a notion of self-employment income exists, and whether there is a unified European model or, on the contrary, merely an approximation of the national models of direct taxation of self-employment income.
A further subject of the research is therefore the analysis of treaty rules and European legislation, as well as of the case law of the Court of Justice of the European Union, concerning the construction of a treaty-based and European notion of self-employment in the field of direct taxation, and the impact of treaty principles and of the European freedoms of establishment and provision of services on the direct taxation of the income of self-employed workers with respect to the principles of non-discrimination and fiscal non-differentiation. This analysis shows the absence of a harmonized treaty or European model for the direct taxation of self-employment income, owing to the prevalence of the principle of fiscal sovereignty in the field of direct taxation; one can therefore speak only of an approximation of the national models of direct taxation of self-employment income, aimed at guaranteeing the European freedoms of establishment and provision of services for self-employed workers and the treaty principles of non-discrimination and fiscal non-differentiation. Finally, it is interesting to clarify whether a correlation exists between tax treaties and European tax law as regards the notion of self-employment income and the relevant tax principles. The research is therefore completed by an analysis of the tax regime of self-employment income as set out in the OECD Model Convention and in the Italy-France Convention for the elimination of double taxation. This analysis, in analogy with European tax law, highlights the approximation of self-employment income to business income in the OECD Model Convention and the absence of a treaty model for the direct taxation of self-employment income; unlike European tax law, however, it also shows the presence of certain criteria adopted by treaty rules to guarantee the elimination of double taxation and the principle of non-discrimination which, in substance, are points of convergence with European tax law. Moreover, the analysis of the OECD treaty rules, unlike those of the double-taxation convention concluded by Italy and France, shows an evolution of direct taxation with regard to self-employed workers, reflected in the adoption of the criteria of direct corporate income taxation, from which derives the approximation of the notion of self-employment income to that of business income; in substance, income deriving from economic activities.
In light of the above, the convergence between the national rules on the direct taxation of self-employment income, the treaty rules of the OECD Model Convention, and European legislation is clear; a convergence that confirms the new direction in which the notions and models of direct taxation of self-employment income in national tax systems are heading: approximation to the national models of direct corporate income taxation, with a view to approximating the notions of income derived from economic activities.

Relevance:

10.00%

Publisher:

Abstract:

This research work focuses on a novel multiphase multilevel AC motor drive system well suited to low-voltage, high-current power applications: specifically, a six-phase asymmetrical induction motor with an open-end stator winding configuration, fed by four standard two-level three-phase voltage source inverters (VSIs). The proposed synchronous-reference-frame control algorithm shares the total DC source power among the four VSIs in each switching cycle with three degrees of freedom. Precisely, the first degree of freedom concerns the current sharing between the two three-phase stator windings. A modified multilevel space vector pulse width modulation then shares the voltage between the individual VSIs of each three-phase stator winding through the second and third degrees of freedom, yielding proper multilevel output waveforms. A complete model of the whole AC motor drive, based on the three-phase space vector decomposition approach, was developed in PLECS, a numerical simulation package working in the MATLAB environment. The proposed synchronous-reference-frame control algorithm was implemented in MATLAB together with the modified multilevel space vector pulse width modulator, and the effectiveness of the entire AC motor drive system was tested. Detailed simulation results are given for symmetrical and asymmetrical power-sharing conditions. Furthermore, the three degrees of freedom are exploited to investigate fault-tolerant capabilities in post-fault conditions; a complete set of simulation results is provided for the cases in which one, two, or three VSIs are faulty. A hardware prototype of the quad-inverter was implemented with two passive three-phase open-winding loads, using two TMS320F2812 DSP controllers. A McBSP (multi-channel buffered serial port) communication algorithm was developed to control the four VSIs for PWM communication and synchronization. An open-loop control scheme based on the inverse three-phase decomposition approach was developed to control the entire quad-inverter configuration and was tested under balanced and unbalanced operating conditions with simplified PWM techniques. Both simulation and experimental results are in good agreement with the theoretical developments.
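
To make the "first degree of freedom" concrete, here is a minimal Python sketch showing how a total current reference can be split between the two three-phase winding sets, including the post-fault case in which one set carries everything. The amplitude-invariant Clarke transform and the sharing coefficient k are generic textbook constructs, not the thesis's exact algorithm:

```python
import numpy as np

# Amplitude-invariant Clarke transform (abc -> alpha-beta) for one
# three-phase set; standard textbook form.
CLARKE = (2.0 / 3.0) * np.array([
    [1.0, -0.5, -0.5],
    [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2],
])

def clarke(i_abc):
    return CLARKE @ np.asarray(i_abc)

def split_reference(i_alpha_beta, k):
    """First degree of freedom: share the machine current reference
    between the two three-phase winding sets. k = 0.5 is symmetrical
    sharing; k = 1.0 loads set 1 only (e.g. post-fault on set 2)."""
    i_ab = np.asarray(i_alpha_beta)
    return k * i_ab, (1.0 - k) * i_ab

# Balanced sinusoidal currents at one sampling instant (10 A peak).
theta = np.deg2rad(30.0)
i_abc = 10.0 * np.cos(theta - np.array([0, 2 * np.pi / 3, 4 * np.pi / 3]))
i_ab = clarke(i_abc)

for k in (0.5, 0.7, 1.0):
    i1, i2 = split_reference(i_ab, k)
    print(f"k={k:.1f}: set1={np.round(i1, 2)}, set2={np.round(i2, 2)}")
```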

Relevance:

10.00%

Publisher:

Abstract:

This thesis describes a generalized approach to the control of three-phase electrical machines. The first part is devoted to developing a general modelling methodology, i.e. one capable of describing, from a mathematical point of view, the behaviour of a generic electrical machine, and thus of embedding within itself the salient characteristics of every specific type of electrical machine. The next step is to build a machine control algorithm that rests on this generalized theory and uses, for its operation, the quantities provided by the unified machine model. The type of control adopted is what is commonly called field-oriented control (FOC), for which several measures were identified to improve its dynamic performance and the control of the delivered torque. Finally, a series of experimental tests is presented with the aim of highlighting some crucial aspects of controlling electrical machines with a field-oriented algorithm and, above all, of verifying the soundness of the generalized approach to three-phase machines. The experimental results confirm the applicability of the method to different types of machine (induction and synchronous), verified under the most critical operating conditions: low speed, high speed, low loads, slow dynamics, and fast dynamics.
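
As background to the field-oriented control scheme described above, here is a minimal sketch of its core loop, with assumed gains and a hypothetical measurement (the textbook FOC structure, not the thesis's generalized algorithm): measured currents are rotated into the rotor-flux dq frame, where two PI regulators control the flux- and torque-producing components independently.

```python
import numpy as np

def park(i_alpha_beta, theta):
    """Rotate stationary alpha-beta quantities into the rotating dq frame."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]]) @ np.asarray(i_alpha_beta)

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integ = kp, ki, dt, 0.0
    def step(self, err):
        self.integ += self.ki * err * self.dt
        return self.kp * err + self.integ

dt = 1e-4                            # control period, s (assumed)
pi_d = PI(kp=2.0, ki=400.0, dt=dt)   # flux-axis current regulator
pi_q = PI(kp=2.0, ki=400.0, dt=dt)   # torque-axis current regulator

# One control step: dq current references vs. a measured sample.
id_ref, iq_ref = 1.0, 4.0
i_dq = park([3.2, 2.9], theta=0.8)   # hypothetical alpha-beta measurement
vd = pi_d.step(id_ref - i_dq[0])
vq = pi_q.step(iq_ref - i_dq[1])
print(f"dq voltage commands: vd={vd:.2f}, vq={vq:.2f}")
```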

Relevance:

10.00%

Publisher:

Abstract:

Biological membranes are lipid bilayers that behave like two-dimensional fluids. The energy of such a fluid surface can often be described with the help of a Hamiltonian that is invariant under reparametrizations of the surface and depends only on its geometry. Contributions of internal degrees of freedom and of the environment can be incorporated into the formalism. In this thesis, this approach is used to study the mechanics of fluid membranes and similar surfaces. Stresses and torques in the surface can be expressed in terms of covariant tensors. These can then be used, for example, to determine the equilibrium position of the contact line at which two adhering surfaces detach from each other. With the exception of capillary phenomena, the surface energy depends not only on translations of the contact line but also on changes in slope or even curvature. The resulting boundary conditions correspond to the equilibrium conditions on forces and torques if the contact line can move freely. If one of the surfaces is rigid, the variation must locally follow that surface; stresses and torques then contribute to a single equilibrium condition, and their contributions can no longer be identified individually. In order to make quantitative statements about the behaviour of a fluid surface, its elastic properties must be known. The "nanodrum" experimental setup makes it possible to probe membrane properties locally: it consists of a pore-spanning membrane which, during the experiment, is pushed into the pore by the tip of an atomic force microscope. The linear shape of the resulting force-distance curves can be reproduced with the theory developed in this thesis if the influence of adhesion between tip and membrane is neglected. If this effect is included in the calculations, the result changes considerably: force-distance curves are no longer linear, and hysteresis and non-vanishing detachment forces appear. The predictions of the calculations could be used in future experiments to determine parameters such as the bending rigidity of the membrane with nanometre resolution. Once the material properties are known, problems of membrane mechanics can be examined more closely. Surface-mediated interactions are an interesting example in this context. With the help of the stress tensor mentioned above, analytical expressions can be derived for the curvature-mediated force between two particles representing, for instance, proteins. In addition, the equilibrium of forces and torques is used to derive several conditions on the geometry of the membrane. For the case of two infinitely long cylinders on the membrane, these conditions are combined with profile calculations to make quantitative statements about the interaction. Theory and experiment reach their limits when it comes to correctly assessing the relevance of curvature-mediated interactions in the biological cell. In such a case, computer simulations offer an alternative approach: the simulations presented here predict that proteins can aggregate and form membrane vesicles as soon as each protein induces a minimum curvature in the membrane.
The radius of the vesicles depends strongly on the locally imprinted curvature. The simulation results are qualitatively confirmed in this thesis by an approximate theoretical model.
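
For reference, the reparametrization-invariant, purely geometric surface Hamiltonian alluded to at the beginning is, in the standard membrane literature, the Canham-Helfrich functional; the abstract does not spell out its form, so it is quoted here as background:

```latex
% Canham-Helfrich energy of a fluid membrane surface \Sigma:
% \sigma = surface tension, \kappa = bending rigidity,
% H = mean curvature, K = Gaussian curvature,
% C_0 = spontaneous curvature, \bar{\kappa} = Gaussian modulus.
\begin{equation}
  \mathcal{H}[\Sigma] \;=\; \int_{\Sigma} \mathrm{d}A
  \left[\, \sigma \;+\; 2\kappa \left(H - \tfrac{1}{2}C_0\right)^{2}
  \;+\; \bar{\kappa}\, K \,\right]
\end{equation}
% By the Gauss-Bonnet theorem, the K term is a topological constant
% for surfaces of fixed topology and boundary.
```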

Relevance:

10.00%

Publisher:

Abstract:

Within scientific research in sport, Performance Analysis is carving out a growing space of interest. Performance Analysis means the analysis of competitive performance both from a biomechanical point of view and from the point of view of notational analysis. In this thesis, competitive performance in table tennis was analysed with the tool of notational analysis, starting from the study of the most important performance indicators from a technical-tactical point of view and from their selection through a study of the reliability of data collection. Attention was then placed on an original technical aspect, the link between footwork and strokes, recalling that good footwork technique allows players to move quickly in the direction of the ball to play the best stroke. Finally, the main objective of the thesis was to compare three selected categories of athletes: world-class male (M), elite European junior (J), and world-class female (F). Most rallies begin with a short serve to the middle of the table and continue with a push return (M) or a backhand flick (J). The next stroke is mainly the forehand topspin after a pivot step, or a backhand topspin without footwork. M and J athletes counter-attack more with forehand top-versus-top strokes, while F athletes prefer less risky strokes, blocking with the backhand and continuing with backhand drives. Through the study of the performance of athletes of different categories and genders, it is possible to improve strategic choices before and during matches. Multivariate statistical analyses (log-linear models) made it possible to scientifically validate both the procedures already used in the literature and the innovative ones developed for the first time for this study.
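
The log-linear models mentioned in the closing sentence are typically fitted as Poisson regressions on contingency-table counts. Here is a minimal sketch with hypothetical counts and factor names, not the thesis data:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical stroke-count contingency table: category x return stroke.
df = pd.DataFrame({
    "category": ["M", "M", "J", "J", "F", "F"],
    "stroke":   ["push", "flick"] * 3,
    "count":    [54, 21, 18, 47, 40, 25],
})

# Saturated log-linear model: the interaction term tests whether
# stroke choice depends on athlete category.
model = smf.glm("count ~ C(category) * C(stroke)",
                data=df, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```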

Relevance:

10.00%

Publisher:

Abstract:

Complex network analysis turns out to be a very promising field of research, as testified by the many research projects and works that span different fields. Such analyses have usually focused on characterizing a single aspect of the system, and a study that considers the several informative axes along which a network evolves is lacking. We propose a new multidimensional analysis that is able to inspect networks in the two most important dimensions, space and time. To achieve this goal, we studied each dimension separately and investigated how the variation of the constituting parameters drives changes in the network as a whole. Focusing on the space dimension, we characterized spatial alteration in terms of abstraction levels. We propose a novel algorithm that, by applying a fuzziness function, can reconstruct networks at different levels of detail. We verified that statistical indicators depend strongly on the granularity with which a system is described and on the class of networks. Keeping the space axis fixed, we then isolated the dynamics behind the network evolution process. We detected new incentives that trigger social network usage and spread the adoption of novel communities. We formalized this enhanced social network evolution by adopting special nodes (called sirens) that, thanks to their ability to attract new links, are able to construct efficient connection patterns. We simulated the dynamics of the system by considering three well-known growth models. Applying this framework to real and synthetic networks, we showed that the sirens, even when used for a limited time span, effectively shrink the time needed to bring a network to a mature state. In order to give our findings a concrete context, we formalized the cost of setting up such an enhancement and provide the best combinations of the system's parameters, such as the number of sirens, their time span of utilization, and their attractiveness.
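
A minimal sketch of the "siren" idea under assumed parameters (the attachment rule and boost values are illustrative, not the thesis's three growth models): preferential-attachment growth in which one node's attractiveness is artificially boosted for an initial time window.

```python
import random
import networkx as nx

def grow_with_siren(n_nodes=500, m=2, siren_boost=50, siren_until=150, seed=42):
    """Degree-proportional attachment with one 'siren' node whose
    attractiveness is boosted while it is active (new node id < siren_until)."""
    random.seed(seed)
    G = nx.complete_graph(m + 1)
    siren = 0  # designate node 0 as the siren
    for new in range(m + 1, n_nodes):
        weights = {v: G.degree(v) for v in G}
        if new < siren_until:
            weights[siren] += siren_boost
        targets = random.choices(list(weights), weights=list(weights.values()), k=m)
        for t in set(targets):   # dedupe: occasionally attaches < m edges
            G.add_edge(new, t)
    return G, siren

G, siren = grow_with_siren()
print(f"siren degree: {G.degree(siren)}, "
      f"max other degree: {max(d for v, d in G.degree() if v != siren)}")
```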

Relevance:

10.00%

Publisher:

Abstract:

The term IPC (ischemic preconditioning) denotes a phenomenon whereby exposing the heart to brief cycles of sublethal ischemia before a prolonged ischemic insult confers profound resistance to infarction, one of the main causes of disability and mortality worldwide. Recent studies have suggested that IPC can improve the survival, mobilization, and integration of stem cells in ischemic areas, and that it may provide a new strategy to enhance the efficacy of cardiac cell therapy, a continuously developing area of research. IPC is difficult to transfer into clinical practice, but it has long been well documented that opioids and their receptors play a cardioprotective role and activate the signalling pathways involved in IPC: they are therefore ideal candidates for a possible pharmacological alternative to IPC. Treating cardiomyocytes with the opioid receptor agonists Dynorphin B, DADLE, and Met-Enkephalin might therefore protect the cells from the apoptosis caused by an ischemic environment, but might also induce them to produce factors that attract stem cells. To test this hypothesis, an in vitro model of an "ischemic microenvironment" was set up on rat H9c2 cardiomyoblasts, and it was shown that preconditioning the cells "continuously" with Dynorphin B and DADLE (twenty-four hours of preconditioning with opioids, followed by twenty-four hours of damage induction while continuing to administer the opioid peptides) provides direct protection from apoptosis. Subsequently, migration and adhesion assays showed that DADLE acts on "ischemic" H9c2 cells, prompting them to create a microenvironment capable of attracting human mesenchymal stem cells (FMhMSC) and of enhancing the adhesive capabilities of the FMhMSC. The data obtained also suggest that the ability of the DADLE-treated ischemic microenvironment to attract stem cells may be attributable to the greater expression of chemokines by the H9c2 cells.

Relevance:

10.00%

Publisher:

Abstract:

The main subject of this thesis is the concept of the ending in serial narrative universes. The "The End" of a novel or a film is often understood as a climactic moment, and endings are linked to a teleology that guides the text as a whole. As a result of this way of approaching endings, one of the most common opinions resembles that of Henry James [1884], who spoke of a "distribution at the last of prizes, pensions, husbands, wives, babies, millions, appended paragraphs, and cheerful remarks". But it is very difficult to apply James's position to a modernist novel or a postmodern film, and even less to so-called serial narrative universes, in which the story develops over decades. In our contemporary media landscape, the text is no longer conceived as a work: it must be constructed and conceived as a network, an ecosystem in which new economic connections and new bottom-up relations shape an unprecedented structure. This new structure can reconfigure the sense of the ending and of endings in general, yet even of vast narratives it is often said that "the ending did not match the spirit of the story" or "the ending was disappointing". We could argue that the concept of the ending is still important, even though it has been superseded from a theoretical point of view. To analyse whether the ending is constructed in a non-linear manner but perceived as teleological, the thesis is structured in two parts and four chapters: the first part, "History" [1. Literature; 2. Cinema], and the second, "Forms/structures" [3. Transmedia; 4. Remix].

Relevance:

10.00%

Publisher:

Abstract:

This dissertation is about collective action issues in common property resources. Its focus is the “threshold hypothesis,” which posits the existence of a threshold in group size that drives the process of institutional change. This hypothesis is tested using a six-century dataset concerning the management of the commons by hundreds of communities in the Italian Alps. The analysis seeks to determine the group size threshold and the institutional changes that occur when groups cross this threshold. There are five main findings. First, the number of individuals in villages remained stable for six centuries, despite the population in the region tripling in the same period. Second, the longitudinal analysis of face-to-face assemblies and community size led to the empirical identification of a threshold size that triggered the transition from informal to more formal regimes to manage common property resources. Third, when groups increased in size, gradual organizational changes took place: large groups split into independent subgroups or structured interactions into multiple layers while maintaining a single formal organization. Fourth, resource heterogeneity seemed to have had no significant impact on various institutional characteristics. Fifth, social heterogeneity showed statistically significant impacts, especially on institutional complexity, consensus, and the relative importance of governance rules versus resource management rules. Overall, the empirical evidence from this research supports the “threshold hypothesis.” These findings shed light on the rationale of institutional change in common property regimes, and clarify the mechanisms of collective action in traditional societies. Further research may generalize these conclusions to other domains of collective action and to present-day applications.