961 results for "conformance checking"


Relevance:

10.00%

Publisher:

Abstract:

Service Oriented Computing is a new programming paradigm for addressing distributed system design issues. Services are autonomous computational entities which can be dynamically discovered and composed in order to form more complex systems able to achieve different kinds of tasks. E-government, e-business and e-science are some examples of the IT areas where Service Oriented Computing will be exploited in the coming years. At present, the most credited Service Oriented Computing technology is that of Web Services, whose specifications are enriched day by day by industrial consortia without following a precise and rigorous approach. This PhD thesis aims, on the one hand, at modelling Service Oriented Computing in a formal way in order to precisely define the main concepts it is based upon and, on the other hand, at defining a new approach, called the bipolar approach, for addressing system design issues by synergistically exploiting choreography and orchestration languages related by means of a mathematical relation called conformance. Choreography allows us to describe systems of services from a global viewpoint, whereas orchestration supplies a means for addressing the same issue from a local perspective. In this work we present SOCK, a process-algebra-based language inspired by the Web Service orchestration language WS-BPEL, which captures the essentials of Service Oriented Computing. Building on the definition of SOCK, we define a general model for Service Oriented Computing where services and systems of services are related to the design of finite state automata and process algebra concurrent systems, respectively. Furthermore, we introduce a formal language for dealing with choreography. Such a language is equipped with a formal semantics and forms, together with a subset of the SOCK calculus, the bipolar framework. Finally, we present JOLIE, a Java implementation of a subset of the SOCK calculus and part of the bipolar framework we intend to promote.
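The conformance relation between choreography (global view) and orchestration (local view) can be illustrated, in a drastically simplified form, as trace inclusion: every interaction trace the orchestrated services can produce must be permitted by the choreography. The sketch below uses invented traces and is not the SOCK/bipolar formalism itself:

```python
# Choreography: the set of globally allowed interaction traces.
choreography = {
    ("request", "process", "reply"),
    ("request", "reject"),
}

# Orchestration: the traces actually producible by the composed local services.
orchestration = {
    ("request", "process", "reply"),
}

def conforms(orch, chor):
    """An orchestration conforms to a choreography if every trace it
    can produce is permitted by the choreography (trace inclusion)."""
    return orch <= chor

assert conforms(orchestration, choreography)       # respects the global contract
assert not conforms({("reply",)}, choreography)    # violates the global contract
```

Real conformance relations (including the one defined in the thesis) are defined over behavioural models rather than finite trace sets, but the inclusion intuition carries over.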


Interaction protocols establish how different computational entities can interact with each other. The interaction may be aimed at the exchange of data, as in "communication protocols", or oriented towards achieving some result, as in "application protocols". Moreover, with the increasing complexity of modern distributed systems, protocols are also used to manage such complexity and to ensure that the system as a whole evolves with certain features. However, the extensive use of protocols has raised some issues, from the language for specifying them to the various verification aspects. Computational Logic provides models, languages and tools that can be effectively adopted to address such issues: its declarative nature can be exploited for a protocol specification language, while its operational counterpart can be used to reason upon such specifications. In this thesis we propose a proof-theoretic framework, called SCIFF, together with its extensions. SCIFF is based on Abductive Logic Programming and provides a formal specification language with a clear declarative semantics (based on abduction). The operational counterpart is given by a proof procedure that makes it possible to reason upon the specifications and to test the conformance of given interactions w.r.t. a defined protocol. Moreover, by suitably adapting the SCIFF framework, we propose solutions for addressing (1) the verification of protocol properties (g-SCIFF framework), and (2) the a-priori conformance verification of peers w.r.t. a given protocol (AlLoWS framework). We also introduce an agent-based architecture, the SCIFF Agent Platform, where the same protocol specification can be used both to program the interacting peers and to ease their implementation.
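SCIFF's run-time conformance test can be caricatured as checking that every expectation raised by the protocol is eventually fulfilled by a happened event. The sketch below is a toy approximation of that idea, not the SCIFF proof procedure, and the `ask`/`answer` protocol is invented:

```python
def check_conformance(happened, rules):
    """Toy expectation-based conformance check.
    happened: ordered list of observed events;
    rules: maps a trigger event to the event it makes expected afterwards.
    A run is conformant if no expectation remains pending at the end."""
    pending = []
    for ev in happened:
        # discharge any expectation fulfilled by this event
        pending = [e for e in pending if e != ev]
        # raise new expectations triggered by this event
        if ev in rules:
            pending.append(rules[ev])
    return len(pending) == 0

rules = {"ask": "answer"}                         # E(answer) once 'ask' happens
assert check_conformance(["ask", "answer"], rules)
assert not check_conformance(["ask"], rules)      # expectation never fulfilled
```

SCIFF itself handles variables, constraints and negative expectations abductively; this sketch keeps only the fulfilled/violated distinction.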


Primary stability of stems in cementless total hip replacements is recognised to play a critical role in long-term survival, and thus in the success of the overall surgical procedure. In the literature, several studies have addressed this important issue, exploring different approaches for evaluating the extent of stability achieved during surgery. Some of these are in-vitro protocols, while other tools are conceived for the post-operative assessment of prosthesis migration relative to the host bone. The in-vitro protocols reported in the literature are not exportable to the operating room, although most of them show good overall accuracy. RSA, EBRA and radiographic analysis are currently used to check the healing process of the implanted femur at different follow-ups, evaluating implant migration and the occurrence of bone resorption or osteolysis at the interface. These methods are important for follow-up and clinical studies, but do not assist the surgeon during implantation. At the time I started my Ph.D. study in Bioengineering, only one study had been undertaken to measure stability intra-operatively, and no follow-up had been presented describing further results obtained with that device. In this scenario, it was believed that an instrument able to measure intra-operatively the stability achieved by an implanted stem would considerably improve the rate of success. Such an instrument should be accurate and should give the surgeon, during implantation, a quick answer concerning the stability of the implanted stem. With this aim, an intra-operative device was designed, developed and validated. The device is meant to help the surgeon decide how much to press-fit the implant.
It essentially consists of a torsional load cell, able to measure the torque applied by the surgeon to test primary stability; an angular sensor that measures the relative angular displacement between stem and femur; a rigid connector that enables connecting the device to the stem; and all the electronics for signal conditioning. The device was successfully validated in-vitro, showing good overall accuracy in discriminating stable from unstable implants. Repeatability tests showed that the device was reliable. A calibration procedure was then performed to convert the angular readout into a linear displacement measurement, which is information that is clinically relevant and simple for the surgeon to read in real time. The second study reported in my thesis concerns the possibility of obtaining predictive information on the primary stability of a cementless stem by measuring the micromotion of the last rasp used by the surgeon to prepare the femoral canal. This information would be very useful to the surgeon, who could check, prior to implantation, whether the planned stem size can achieve a sufficient degree of primary stability under optimal press-fitting conditions. An intra-operative tool was developed to this aim. It was derived from the previously validated device, adapted for this specific purpose, and is able to measure the relative micromotion between the femur and the rasp when a torsional load is applied. An in-vitro protocol was developed and validated on both composite and cadaveric specimens. High correlation was observed between one of the parameters extracted from the acquisitions made on the rasp and the stability of the corresponding stem, when optimally press-fitted by the surgeon.
After tuning the protocol in-vitro in a closed loop, verification was performed on two hip patients, confirming the in-vitro results and highlighting the independence of the rasp indicator from the bone quality, anatomy and preservation conditions of the tested specimens, and from the sharpening of the rasp blades. The third study is related to an approach that has recently been explored in the orthopaedic community, but was already in use in other scientific fields: vibration analysis. This method has been successfully used to investigate the mechanical properties of bone, and its application to evaluating the fixation of dental implants has been explored, even though its validity in that field is still under discussion. Several studies have recently been published on the stability assessment of hip implants by vibration analysis. The aim of the reported study was to develop and validate a prototype device based on vibration analysis to measure intra-operatively the extent of implant stability. The expected advantages of a vibration-based device are easier clinical use, smaller dimensions and lower overall cost with respect to devices based on direct micromotion measurement. The prototype consists of a piezoelectric exciter connected to the stem and an accelerometer attached to the femur. Preliminary tests were performed on four composite femurs implanted with a conventional stem. The results showed that the input signal was repeatable and the output could be recorded accurately. The fourth study concerns the application of the vibration-analysis device to several cases, considering both composite and cadaveric specimens. Different degrees of bone quality were tested, as well as different femur anatomies, and several levels of press-fitting were considered.
The aim of the study was to verify whether it is possible to discriminate between stable and quasi-stable implants, since this is the most challenging distinction for the surgeon in the operating room. Moreover, it was possible to validate the measurement protocol by comparing the acquisitions made with the vibration-based tool against two reference measurements made by means of a validated technique and a validated device. The results highlighted that the parameter most sensitive to stability is the shift in resonance frequency of the stem-bone system, which showed high correlation with residual micromotion on all the tested specimens. It thus seems possible to discriminate between many levels of stability, from the grossly loosened implant, through the quasi-stable implants, to the definitely stable one. Finally, an additional study was performed on a different type of hip prosthesis, which has recently gained great interest and has become fairly popular in some countries: the hip resurfacing prosthesis. The study was motivated by the following rationale: although bone-prosthesis micromotion is known to influence the stability of total hip replacement, its effect on the outcome of resurfacing implants has so far been investigated only clinically, not in-vitro. The work therefore aimed at verifying whether one of the intra-operative devices just validated could be applied to measure micromotion in resurfacing implants. To this end, a preliminary study was performed to evaluate the extent of migration and the typical elastic movement of an epiphyseal prosthesis. An in-vitro procedure was developed to measure the micromotion of resurfacing implants, including a set of in-vitro loading scenarios that spans the range of directions covered by hip resultant forces in the most typical motor tasks.
The applicability of the protocol was assessed on two different commercial designs and on different head sizes. The repeatability and reproducibility were excellent (comparable to the best previously published protocols for standard cemented hip stems). Results showed that the procedure is accurate enough to detect micromotions of the order of a few microns. The proposed protocol was thus completely validated. The results of the study demonstrated that the application of an intra-operative device to resurfacing implants is not necessary, as the typical micromovement associated with this type of prosthesis can be considered negligible and thus not critical for the stabilisation process. In conclusion, four intra-operative tools have been developed and fully validated during these three years of research activity. Use in the clinical setting was tested for one of the devices, which could be used right now by the surgeon to evaluate the degree of stability achieved through the press-fitting procedure. The tool adapted for use on the rasp was a good predictor of stem stability, and could thus help the surgeon check whether the pre-operative planning was correct. The device based on the vibration technique showed great accuracy and small dimensions, and thus has great potential to become an instrument appreciated by surgeons; it still needs clinical evaluation and must be industrialised as well. The in-vitro tool worked very well and can be applied for assessing resurfacing implants pre-clinically.
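The key observable of the vibration-based device, the shift in the stem-bone resonance frequency between loose and stable conditions, can be illustrated with a minimal spectral-peak estimate. The signals and frequencies below are synthetic stand-ins, not data from the thesis:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the largest peak in the amplitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 10_000                               # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)           # 1 s of samples
stable = np.sin(2 * np.pi * 1200 * t)     # stiffer interface: higher resonance
loose = np.sin(2 * np.pi * 800 * t)       # looser interface: lower resonance

shift = dominant_frequency(stable, fs) - dominant_frequency(loose, fs)
print(round(shift))                       # prints 400
```

In practice the response to the piezoelectric excitation is noisy and multi-modal, so peak estimation is less clean than in this idealised example.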


Self-organisation is increasingly being regarded as an effective approach to tackle the complexity of modern systems. The self-organisation approach allows the development of systems exhibiting complex dynamics and adapting to environmental perturbations without requiring complete knowledge of the future surrounding conditions. However, the development of self-organising systems (SOS) is driven by different principles than traditional software engineering. For instance, engineers typically design systems by combining smaller elements, where the composition rules depend on the reference paradigm but typically produce predictable results. Conversely, SOS display non-linear dynamics that can hardly be captured by deterministic models and, although robust with respect to external perturbations, are quite sensitive to changes in their inner working parameters. In this thesis, we describe methodological aspects concerning the early design stage of SOS built on the multi-agent paradigm: in particular, we refer to the A&A metamodel, where MAS are composed of agents and artefacts, i.e. environmental resources. We then describe an architectural pattern extracted from a recurrent solution in designing self-organising systems: this pattern is based on a MAS environment formed by artefacts, modelling non-proactive resources, and environmental agents acting on artefacts so as to enable self-organising mechanisms. In this context, we propose a scientific approach for the early design stage of the engineering of self-organising systems: the process is iterative, and each cycle is articulated in four stages: modelling, simulation, formal verification, and tuning. During the modelling phase we mainly rely on the existence of a self-organising strategy observed in nature and, ideally, encoded as a design pattern.
Simulations of an abstract system model are used to drive design choices until the required quality properties are obtained, thus providing guarantees that the subsequent design steps will lead to a correct implementation. However, system analysis based exclusively on simulation results does not provide sound guarantees for the engineering of complex systems: to this end, we envision the application of formal verification techniques, specifically model checking, in order to characterise the system behaviours exactly. During the tuning stage, parameters are tweaked in order to meet the target global dynamics and feasibility constraints. In order to evaluate the methodology, we analysed several systems: in this thesis, we describe only three of them, i.e. the most representative ones for each of the three years of the PhD course. We analyse each case study using the presented method, describing the formal tools and techniques exploited.
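The simulate-then-tune portion of the cycle can be sketched as a parameter sweep over an abstract model: simulate a candidate working parameter, check the target global property, and move on if it is not met. Everything below (the mean-field adoption model, the parameter values, the 0.9 target) is an invented toy, not a system from the thesis:

```python
def simulate(p_adopt, steps=200):
    """Deterministic mean-field sketch of a toy self-organising model:
    the fraction of agents having adopted a convention grows by
    (1 - frac) * p_adopt at each step."""
    frac = 0.0
    for _ in range(steps):
        frac += (1 - frac) * p_adopt
    return frac

def tune(target=0.9, candidates=(0.001, 0.01, 0.1)):
    """Tuning stage: sweep the working parameter until the simulated
    global dynamics meet the target property."""
    for p in candidates:
        if simulate(p) >= target:
            return p
    return None

print(tune())   # prints 0.1 -- the smallest candidate meeting the target
```

In the methodology proper, the "check" step is performed by stochastic simulation and probabilistic model checking rather than a closed-form sweep, but the feedback structure is the same.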


Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm, which is based on the notions of agent and systems of interacting agents as fundamental abstractions for designing, developing and managing at run time software systems that are typically distributed. However, today's engineer often works with technologies that do not support the abstractions used in the design of the systems; for this reason, research on methodologies becomes a central point of the scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are often poorly documented, and are frequently defined and presented by focusing only on specific aspects. The role played by meta-models thus becomes fundamental for comparing and evaluating methodologies: a meta-model specifies the concepts, rules and relationships used to define a methodology. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e.
the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products to be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to model all the aspects of multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; it is clear, however, that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent systems community, so the environment should be explicitly accounted for, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent the logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.


Many research fields are pushing the engineering of large-scale, mobile, and open systems towards the adoption of techniques inspired by self-organisation: pervasive computing, but also distributed artificial intelligence, multi-agent systems, social networks, peer-to-peer and grid architectures exploit adaptive techniques to make global system properties emerge in spite of the unpredictability of interactions and behaviour. Such a trend is also visible in coordination models and languages, whenever a coordination infrastructure needs to cope with managing interactions in highly dynamic and unpredictable environments. As a consequence, self-organisation can be regarded as a feasible metaphor for defining a radically new conceptual coordination framework. The resulting framework defines a novel coordination paradigm, called self-organising coordination, based on the idea of spreading coordination media over the network and charging them with services that manage interactions based on local criteria, resulting in the emergence of desired and fruitful global coordination properties of the system. Features like topology, locality, time-reactiveness, and stochastic behaviour play a key role both in the definition of this conceptual framework and in the consequent development of self-organising coordination services. According to this framework, the thesis presents several self-organising coordination techniques developed during the PhD course, mainly concerning data distribution in tuple-space-based coordination systems. Some of these techniques have also been implemented in ReSpecT, a coordination language for tuple spaces based on logic tuples and reactions to events occurring in a tuple space.
In addition, the key role played by simulation and formal verification has been investigated, leading to an analysis of how automatic verification techniques like probabilistic model checking can be exploited to formally prove the emergence of desired behaviours in coordination approaches based on self-organisation. To this end, a concrete case study is presented and discussed.
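As a toy illustration of self-organising coordination, one can model a network of tuple spaces where a purely local reaction moves tuples toward emptier neighbours, letting a balanced global distribution emerge from local criteria only. This is an illustrative Python sketch, not ReSpecT syntax, and the three-node topology is invented:

```python
def diffuse(spaces, topology):
    """One reaction round: each node moves at most one tuple to a
    neighbour holding noticeably fewer tuples.
    spaces: node -> tuple count; topology: node -> list of neighbours."""
    updates = dict(spaces)
    for node, count in spaces.items():
        for nb in topology[node]:
            if count > spaces[nb] + 1:      # local criterion only
                updates[node] -= 1
                updates[nb] += 1
                break                       # one move per node per round
    return updates

spaces = {"a": 8, "b": 0, "c": 0}           # all tuples start at one node
topology = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
for _ in range(20):
    spaces = diffuse(spaces, topology)
print(spaces)   # prints {'a': 3, 'b': 3, 'c': 2} -- near-uniform emerges
```

No node ever sees the global state, yet the distribution converges to within one tuple of uniform; this "emergence from local rules" is the property that probabilistic model checking is then used to certify.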


This research is part of a survey of the hydraulic and geotechnical conditions of river embankments funded by the Reno River Basin Regional Technical Service of the Emilia-Romagna Region. The hydraulic safety of the Reno River, one of the main rivers in north-eastern Italy, is indeed of primary importance to the Emilia-Romagna regional administration. The large longitudinal extent of the banks (several hundred kilometres) has generated great interest in non-destructive geophysical methods, which, compared to other methods such as drilling, allow faster and often less expensive acquisition of high-resolution data. The present work aims to assess Ground Penetrating Radar (GPR) for the detection of local non-homogeneities (mainly stratigraphic contacts, cavities and conduits) inside the embankments of the Reno River and its tributaries, taking into account supplementary data collected with traditional destructive tests (boreholes, cone penetration tests, etc.). A comparison with other non-destructive methodologies, such as electrical resistivity tomography (ERT), multi-channel analysis of surface waves (MASW) and FDEM induction, was also carried out in order to verify the usability of GPR and to support the integration of various geophysical methods into the process of regular maintenance and monitoring of embankment conditions. The first part of this thesis is dedicated to the state of the art concerning the geographic, geomorphological and geotechnical characteristics of the embankments of the Reno River and its tributaries, as well as the description of some geophysical applications on embankments of European and North-American rivers, which served as the bibliographic basis for this thesis.
The second part is an overview of the geophysical methods employed in this research (with particular attention to GPR), reporting their theoretical basis and examining in depth some techniques for the analysis and representation of geophysical data when applied to river embankments. The subsequent chapters, following the main scope of this research, i.e. to highlight advantages and drawbacks in the use of Ground Penetrating Radar on the embankments of the Reno River and its tributaries, show the results obtained by analysing different cases that could lead to the formation of weakness zones and, subsequently, to embankment failure. Among the advantages, a considerable acquisition speed and a spatial resolution of the acquired data unmatched by the other methodologies were recorded. With regard to the drawbacks, some factors related to the attenuation of wave propagation, due to the different content of clay, silt and sand, as well as surface effects, significantly limited the correlation between GPR profiles and geotechnical information, and therefore compromised the embankment safety assessment. In summary, Ground Penetrating Radar could represent a suitable tool for checking river dike conditions, but its use is significantly limited by the geometric and geotechnical characteristics of the levees of the Reno River and its tributaries. As a matter of fact, only the shallower part of the embankment could be investigated, yielding information related only to changes in electrical properties, without any quantitative measurement. Thus, GPR is ineffective for a preliminary assessment of embankment safety conditions, while for detailed campaigns at shallow depth, which aim to achieve immediate results with optimal precision, its use is highly recommended.
The cases where the multidisciplinary approach was tested reveal an optimal interconnection of the various geophysical methodologies employed, producing qualitative results in the preliminary phase (FDEM), a quantitative and highly reliable description of the subsoil (ERT) and, finally, fast and highly detailed analysis (GPR). As a recommendation for future research, the simultaneous exploitation of several geophysical devices to assess the safety conditions of river embankments is strongly suggested, especially when facing a possible flood event, when the entire extent of the embankments must be investigated.
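A small example of the quantitative reasoning behind GPR interpretation: converting a reflector's two-way travel time into depth requires the wave velocity in the medium, which decreases with the relative permittivity. The permittivity values below are typical textbook figures for dry sand and wet silt, not measurements from this survey:

```python
C = 0.3  # speed of light in vacuum, m/ns

def reflector_depth(twt_ns, eps_r):
    """Depth (m) of a reflector from its two-way travel time (ns) and
    the relative permittivity eps_r of the material."""
    v = C / eps_r ** 0.5       # wave velocity in the medium, m/ns
    return v * twt_ns / 2.0    # two-way time: halve it

# The same 40 ns echo reads much shallower in wet silt than in dry sand,
# one reason velocity/attenuation contrasts complicate correlating
# radargrams with geotechnical logs.
print(round(reflector_depth(40, 4), 2))    # dry sand  (eps_r ~ 4)  -> 3.0 m
print(round(reflector_depth(40, 25), 2))   # wet silt  (eps_r ~ 25) -> 1.2 m
```

The higher permittivity of wet, fine-grained material also comes with far higher attenuation, which is what restricted the investigation to the shallower part of the embankments.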


The aim of this doctoral thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine, for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications; then the turbofan or turboprop best suited to the specific application is chosen. The aeronautical piston engine field has been dormant for several decades, as interest shifted towards turbine aircraft; however, new materials with improved performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, and several non-linearities are needed to describe the behaviour of the phenomena. In this case the objective function has many local extremes, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a single-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration with minimum mass while satisfying the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The design variable bounds, which describe the solution domain from a geometrical point of view, are also introduced.
In the Matlab® Optimization environment, the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step, elitism is applied in order to protect the fittest individuals from disruption by mutation and recombination, letting them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file, in order to automatically build a CAD 3D model of each component of the propulsive system, providing a direct preview of the final product while still in the engine's preliminary design phase. To demonstrate the performance of the algorithm and validate the optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a four-cylinder, four-stroke Diesel. Many verifications are made on the mechanical components of the engine, in order to test their feasibility and decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations. The geometrical bounds of the design variables are taken from data on actual engines and similar design cases. Among the many simulations run for algorithm testing, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod have been automatically built.
In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison to the original configuration, and an acceptable robustness of the method has been demonstrated. The algorithm developed here proves to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyse quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could boost aeronautical piston engine development, speeding up the design process and joining modern computational performance and technological awareness with long-standing traditional design experience.
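The structure described above (mass as objective, constraints enforced by rejection, elitist reproduction with crossover and mutation) can be sketched in a few lines. This is a toy illustration, not the thesis's Matlab® implementation: the two design variables, their bounds and the `a * b >= 4` strength constraint are invented stand-ins for the real geometric and structural inequalities.

```python
import random

def fitness(x):
    """Toy objective: total mass of two sized components, with a large
    penalty when a (hypothetical) strength constraint is violated."""
    a, b = x
    mass = a + b
    penalty = 0.0 if a * b >= 4.0 else 1e3   # stand-in structural inequality
    return mass + penalty

def genetic_minimise(bounds, pop=40, gens=60, seed=1):
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        elite = P[: pop // 4]                 # elitism: fittest survive intact
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(p1, p2)]     # crossover
            i = rng.randrange(len(child))                     # mutate one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        P = elite + children
    return min(P, key=fitness)

best = genetic_minimise([(0.5, 5.0), (0.5, 5.0)])
# analytic optimum of a + b subject to a*b >= 4 is a = b = 2, mass = 4
```

Because elites are copied unchanged, the best fitness is monotonically non-increasing, mirroring the role elitism plays in the thesis's algorithm.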


This thesis builds on a series of previous works analysing the correlation between AUML models and Petri nets, in order to provide a methodology for translating the former into the latter. This translation makes it possible to apply model checking techniques to the resulting nets, so as to establish the properties the system must satisfy in order to be actually realised. An implementation of this algorithm developed in tuProlog is then discussed, together with a first approach to model checking using the Maude system. Moreover, with small modifications to the algorithm used to convert AUML diagrams into Petri nets, it was possible to build a system for the automatic implementation of the previously analysed protocols on two platforms for the development of multi-agent systems: Jason and TuCSoN. Three different implementations are therefore presented: the first for the Jason platform, which uses BDI agents to realise the interaction protocol; the second for the TuCSoN platform, which adopts the A&A metamodel to be compatible with a distributed environment, while following the structure of the previous implementation; and the third, again for TuCSoN, which exploits the tools provided by ReSpecT reactions to generate artefacts offering an infrastructure that guarantees the realisation of the interaction protocol to the participating agents. Finally, the characteristics of these three implementations are discussed on a real case study, analysing their key points.
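The target of such a translation is a Petri net executed under the standard firing rule: a transition is enabled when every input place holds enough tokens, and firing consumes and produces tokens accordingly. The sketch below implements only that token game on an invented request/reply net; it is not the thesis's AUML-to-Petri-net algorithm:

```python
def enabled(marking, pre):
    """A transition is enabled if each input place has enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume the pre-set, produce the post-set."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy net for a request/reply exchange: (pre-set, post-set) per transition.
transitions = {
    "send_request": ({"init_ready": 1}, {"in_transit": 1}),
    "handle_reply": ({"in_transit": 1}, {"done": 1}),
}

m = {"init_ready": 1}
for t in ("send_request", "handle_reply"):
    m = fire(m, *transitions[t])
print(m)   # prints {'init_ready': 0, 'in_transit': 0, 'done': 1}
```

A model checker then explores all interleavings of enabled transitions from the initial marking, which is what makes properties of the translated interaction protocol verifiable.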

Relevância:

10.00%

Publicador:

Resumo:

On the linearity of the Teichmüller modular group of the twice-punctured torus. In my thesis I study representations of the Teichmüller modular group of the torus with two punctures. My approach is to embed the Teichmüller modular group into a p-adic Lie group. Let F be the free group generated by two elements and Aut(F) its automorphism group. The first chapter shows that the following statements are equivalent: the Teichmüller modular group of the twice-punctured torus is linear; Aut(F) is linear; F possesses a p-congruence structure whose terms are fixed by Aut(F), i.e. are characteristic. In the second chapter it is shown, among other things, that a subgroup of finite index of Aut(F) can be embedded into the automorphism group of a simple p-adic Lie group. It is still unknown whether the Burau representation is faithful. In this thesis an infinite linear system of equations is presented whose solutions are exactly the coefficients of the words in the kernel of the Burau representation. In the third chapter, the methods of the first chapter are used to show that the Teichmüller modular group of the twice-punctured torus is linear if and only if the Teichmüller modular group of the five-punctured sphere is. It is well known that the fourth braid group is linear. Now the fourth braid group is essentially the Teichmüller modular group of the closed disc with five punctures. Identifying the boundary points of the disc with one another and then removing them yields the five-punctured sphere. With the map just described one can show that the Teichmüller modular group of the five-punctured sphere is linear.

Relevância:

10.00%

Publicador:

Resumo:

In this work, aqueous suspensions of charge-stabilised colloidal particles were investigated with respect to their behaviour under the influence of electric fields. In particular, the electrophoretic mobility µ was studied over a wide range of particle concentrations, in order to compare the individual behaviour of single particles with the so far little-studied collective behaviour of particle ensembles (specifically of fluid-like or crystalline ordered ensembles). For this purpose, a super-heterodyne Doppler velocimetric light-scattering experiment with integral and local data acquisition was designed, which makes it possible to study the velocity of the particles in electric fields. The experiment was first tested successfully in the regime of non-ordered and fluid-like ordered suspensions. With this instrument, the electrophoretic behaviour of crystalline ordered suspensions could then be investigated for the first time. A complex flow behaviour was observed and documented in detail, revealing effects not previously reported in this context, such as block flow, shear banding, shear melting and elastic resonances. On the other hand, this behaviour made it necessary to develop a new evaluation routine for µ in the crystalline state, for which the heterodyne light-scattering theory had to be extended to the super-heterodyne case with shear. This was first carried out for non-ordered systems. This approximate description proved sufficient to interpret the light-scattering behaviour of sheared crystalline systems under the given experimental conditions. As a further important result, a general mobility-concentration curve could thus be obtained. It shows the already known increase at low particle concentrations and a plateau at intermediate concentrations; at high concentrations the mobility decreases again. For the interpretation of this behaviour in terms of
particle charge, only theories for non-interacting particles are currently available. Applying these, one finds a surprisingly good agreement between the electrophoretically determined particle charge Z*µ and numerically determined effective particle charges Z*PBC.

Relevância:

10.00%

Publicador:

Resumo:

This thesis investigates Decomposition and Reformulation as a means of solving Integer Linear Programming problems. The method is often very successful computationally, producing high-quality solutions for well-structured combinatorial optimization problems such as vehicle routing, cutting stock, p-median and generalized assignment. Until now, however, the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposition and auto-reformulation of the input problem, applicable as a black-box solver and working as a complement and an alternative to standard solution techniques. The idea of Decomposition and Reformulation (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedron(s) obtained. For a given MIP, several decompositions can be defined, depending on which sets of constraints are convexified. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation); otherwise the procedure stops.
The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (Branch and Price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a substantial speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
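In the classic cutting-stock case, the pricing step of the loop just described reduces to an integer knapsack over the dual values of the restricted master LP, with a column entering only when its reduced cost 1 - sum(pi_i * a_i) is negative. The sketch below is illustrative: the widths, dual values and roll width are invented, not taken from the thesis.

```python
def price_pattern(widths, duals, roll_width):
    """Pricing as an integer knapsack, solved by dynamic programming.

    Maximises sum(duals[i] * a[i]) subject to
    sum(widths[i] * a[i]) <= roll_width, a[i] >= 0 integer.
    Returns (best_dual_value, cutting_pattern).
    """
    best = [0.0] * (roll_width + 1)     # best dual value at each capacity
    choice = [None] * (roll_width + 1)  # item chosen at that capacity
    for cap in range(1, roll_width + 1):
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap] = best[cap - w] + duals[i]
                choice[cap] = i
    # Reconstruct the cutting pattern from the DP choices.
    pattern = [0] * len(widths)
    cap = roll_width
    while cap > 0 and choice[cap] is not None:
        i = choice[cap]
        pattern[i] += 1
        cap -= widths[i]
    return best[roll_width], pattern

widths = [3, 5, 7]        # piece widths (illustrative)
duals = [0.4, 0.55, 0.8]  # duals pi_i from a hypothetical master LP
value, pattern = price_pattern(widths, duals, roll_width=10)
reduced_cost = 1.0 - value  # each new roll costs 1 in the master objective
if reduced_cost < 0:
    print("add column", pattern)  # -> add column [3, 0, 0]
```

This is precisely why the approach pays off when the slave has special structure: here the pricing subproblem is a polynomial-size DP rather than a general MIP, so each column-generation iteration stays cheap.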

Relevância:

10.00%

Publicador:

Resumo:

Mechanical grape harvesting still meets resistance, tied to the fear of degrading product quality and of incurring high harvesting losses. In this context, four experimental trials were carried out, aimed at defining the interactions between machine, plant and harvested product, and at evaluating new possibilities for adjusting grape harvesters and managing the harvested product. The first two trials were performed with horizontal- and vertical-shaking harvesters. The objective was to determine the influence of the beater frequency on harvesting efficiency and product quality, and to assess the damage caused by the harvester's interception and conveying components. The results demonstrated the importance of correctly adjusting the beater of horizontal-shaking harvesters, which act directly on the productive band of the vineyard. This adjustment is simpler on vertical-shaking machines, which act indirectly on the supporting wires of double-curtain trellises. Measurements of the stresses inside the machine revealed values high enough to endanger the integrity of the harvested product, depending on the construction of the interception and conveying components. The third trial evaluated the effectiveness of two new accessories for grape harvesters: an adjustment of the beater amplitude and a sensor for continuously measuring the degree of must release caused. The results demonstrated their validity for improving the operating performance of harvesters and for giving operators a real-time tool to monitor harvest quality. Finally, considering that grape harvesters always cause some crushing of the grapes, we tested an innovative system that allows the free-run must to be protected earlier, already during transport from the field. The system proved simple, effective and economical.
These experiences have shown that mechanical harvesting, if correctly managed, makes it possible to obtain excellent results in qualitative, technological and economic terms.

Relevância:

10.00%

Publicador:

Resumo:

The aim of this work was the enzymatic activation of chelicerate hemocyanin in order to investigate its phenoloxidase activity. Two hemocyanins were used in comparative studies: the well-known 24-mer from the spider Eurypelma californicum and the likewise 24-meric hemocyanin of the scorpion Pandinus imperator, whose structure was elucidated here. In electron microscopy and dynamic light scattering the two hemocyanins are very similar, and they also sediment identically in analytical ultracentrifugation (sedimentation coefficient of 37 S (S20,W)). Dissociation in an alkaline milieu yields up to twelve subunits, nine of which can be distinguished immunologically. The absorption-spectroscopic behaviour of P. imperator and E. californicum hemocyanin, as well as secondary-structure analysis by CD spectroscopy, are nearly identical. The stability of the hemocyanin against temperature and denaturants was examined by circular dichroism and fluorescence spectroscopy as well as via the enzymatic activity. For the first time, the hemocyanins of P. imperator and E. californicum could not only be converted into a stable diphenoloxidase, but a monophenol hydroxylase activity could also be induced and regulated. For the latter activity, the presence of Tris or Hepes buffer is essential. While the monophenol hydroxylase activity is observed only at the level of the oligomeric states, the isolated subunit types show merely a diphenoloxidase activity. In the spider hemocyanin, the subunits b and c show the strongest catalytic activity; in P. imperator hemocyanin, three to four enzymatically active subunits are found. Activation with SDS indicates that the quaternary structure is shifted into a different conformation rather than being denatured by SDS.
Addition of Mg2+ regulates the phenoloxidase activity and, in P. imperator hemocyanin, shifts the enzymatic activity in favour of the diphenoloxidase. With none of the available methods, however, could a conformational transition be demonstrated unambiguously. The stability does not appear to be impaired by the low SDS concentrations. The very long lag phase of the monophenol hydroxylase activity could be shortened drastically by adding catalytic amounts of diphenol, which points to a true tyrosinase activity of the activated hemocyanin. An in vivo activator has not yet been found. Nevertheless, hemocyanins appear to play an important role in the immunology of chelicerates by taking over the role of the tyrosinases / phenoloxidases or catecholoxidases, which do not occur in chelicerates. Further means by which the chelicerate immune system can fend off invading organisms were also investigated. The absence of a 'true' phenoloxidase activity in chelicerates, capable of converting both mono- and diphenolic substrates, supports the hypothesis that activated hemocyanin takes the place of the phenoloxidase in vivo.

Relevância:

10.00%

Publicador:

Resumo:

Objective: To evaluate the psychopathological profile of patients with primary Restless Legs Syndrome (p-RLS), with and without nocturnal eating disorder (NED), analysing obsessive-compulsive traits, mood and anxiety disorders, and the two domains of personality proposed by Cloninger, temperament and character. Methods: We tested ten p-RLS patients without NED, ten p-RLS patients with NED and ten age- and sex-matched healthy control subjects, using the Hamilton Depression and Anxiety Rating Scales, the State-Trait Anxiety Inventory, the Maudsley Obsessive Compulsive Inventory (MOCI) and the Temperament and Character Inventory - revised (TCI). Results: p-RLS patients, particularly those with NED, had increased anxiety factor scores. MOCI total, doubting and checking-compulsion scores, and TCI harm-avoidance scores were significantly higher in p-RLS patients with NED. p-RLS patients without NED had significantly higher MOCI doubting scores and a trend toward higher checking-compulsion and harm-avoidance scores, with an apparent grading from controls to p-RLS patients without NED to p-RLS patients with NED. Conclusions: Higher harm avoidance might predispose to obsessive-compulsive symptoms, to RLS and then, with increasing severity, to compulsive nocturnal eating. RLS and NED could represent a pathological continuum in which a dysfunction of the limbic system, possibly driven by a dopaminergic dysfunction, is the underlying pathophysiological mechanism.