29 results for Complex system

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

70.00%

Publisher:

Abstract:

This thesis reports the evolution of methods for analysing techno-social systems, through an account of several research experiences faced directly. The first case presented is a study based on data mining of a word-association dataset named Human Brain Cloud: its validation is addressed and, also through non-trivial modeling, a better understanding of language properties is presented. A real complex-system experiment is then introduced: the WideNoise experiment, in the context of the EveryAware European project. The project and the course of the experiment are illustrated and the data analysis is presented. The Experimental Tribe platform for social computation is then introduced. It has been conceived to help researchers implement web experiments, and it also aims to catalyse the cumulative growth of experimental methodologies and the standardization of such tools. In the last part, three further research experiences that already took place on the Experimental Tribe platform are discussed in detail, from the design of the experiment to the analysis of the results and, eventually, to the modeling of the systems involved. The experiments are: CityRace, on the measurement of human strategies in facing traffic; laPENSOcosì, aiming to unveil the structure of political opinion; and AirProbe, implemented again within the EveryAware framework, which consisted in monitoring the shift in opinion on air quality of a community informed about local air pollution. In the end, the evolution of the methods for investigating techno-social systems emerges, together with the opportunities and the threats offered by this new scientific path.

Relevance:

60.00%

Publisher:

Abstract:

Supramolecular architectures can be built up from a single molecular component (building block) to obtain a complex of organic or inorganic interactions creating a new emergent condensed phase of matter, such as gels, liquid crystals and solid crystals. Furthermore, the generation of multicomponent supramolecular hybrid architectures, a mix of organic and inorganic components, increases the complexity of the condensed aggregate, with functional properties useful for important areas of research such as materials science, medicine and nanotechnology. One may design a molecule storing a recognition pattern, and programming an informed self-organization process enables it to grow into a hierarchical architecture. From the molecular level to the supramolecular level, in a bottom-up fashion, it is possible to create a new emergent structure-function, where the system, as a whole, is open to its own environment to exchange energy, matter and information. "The emergent property of the whole assembly is superior to the sum of its single parts." In this thesis I present new architectures and functional materials built through the self-assembly of guanosine, in the absence or in the presence of a cation, in solution and on surfaces. By appropriate manipulation of intermolecular non-covalent interactions, the spatial (structural) and temporal (dynamic) features of these supramolecular architectures are controlled. Guanosine G7 (5',3'-di-decanoyl-deoxy-guanosine) is able to interconvert reversibly between a supramolecular polymer and a discrete octameric species by dynamic cation binding and release. Guanosine G16 (2',3'-O-isopropylidene-5'-O-decylguanosine) shows binding selectivity among cations of different nature. Remarkably, reversibility, selectivity, adaptability and serendipity are mutual features through which to appreciate the creativity of a complex molecular self-organization system in a multilevel-scale hierarchical growth.
Creativity - in a general sense, the creation of a new thing, a new way of thinking, a new functionality or a new structure - emerges from a cross-contamination of different disciplines such as biology, chemistry, physics, architecture, design, philosophy and the science of complexity.

Relevance:

60.00%

Publisher:

Abstract:

The Thrace Basin is the largest and thickest Tertiary sedimentary basin of the eastern Balkans region and constitutes an important hydrocarbon province. It is located between the Rhodope-Strandja Massif to the north and west, the Marmara Sea and Biga Peninsula to the south, and the Black Sea to the east. It consists of a complex system of depocenters and uplifts, with a very articulated paleotopography indicated by abrupt lateral facies variations. Its southeastern margin is strongly deformed by the Ganos Fault, a segment of the North Anatolian strike-slip fault system. Most of the Thrace Basin fill ranges from the Eocene to the Late Oligocene. The maximum total thickness, including the Neogene-Quaternary succession, reaches 9,000 meters in a few narrow depocenters. This sedimentary succession consists mainly of basin-plain turbiditic deposits with a significant volcaniclastic component, which evolve upwards into shelf deposits and continental facies, with deltaic bodies prograding towards the basin center in the Oligocene. This work deals with the provenance of the Eocene-Oligocene clastic sediments of the southern and western part of the Thrace Basin in Turkey and Greece. Sandstone compositional data (78 gross-composition analyses and 40 heavy-mineral analyses) were used to understand the change in detrital modes, which reflects the provenance and geodynamic evolution of the basin. Samples were collected at six localities, which are, from west to east: Gökçeada, Gallipoli and South Ganos (south of the Ganos Fault), and Alexandroupolis, Korudağ and North Ganos (north of the Ganos Fault). Petrologic data (framework composition and heavy-mineral analyses) and stratigraphic-sedimentologic data (analysis of sedimentologic facies associations along representative stratigraphic sections, paleocurrents) allowed the discrimination of six petrofacies; for each petrofacies the sediment dispersal system was delineated.
The Thrace Basin fill is made mainly of lithic arkoses and arkosic litharenites with variable amounts of low-grade metamorphic lithics (also ophiolitic), neovolcanic lithics, and carbonate grains (mainly extrabasinal). Picotite is the most widespread heavy mineral in all petrofacies. Petrological data on the analyzed successions show a complex sediment dispersal pattern and evolution of the basin, indicating one principal detrital input from a source area located to the south, along both the İzmir-Ankara and Intra-Pontide suture lines, and a possible secondary source area, represented by the Rhodope Massif to the west. A significant portion of the Thrace Basin sediments in the study area was derived from ophiolitic source rocks and from their oceanic cover, whereas epimetamorphic detrital components came from a low-grade crystalline basement. An important penecontemporaneous volcanic component is widespread in late Eocene-Oligocene times, indicating widespread post-collisional (collapse?) volcanism following the closure of the Vardar ocean. Large-scale sediment mass wasting from south to north along the southern margin of the Thrace Basin is indicated (i) in late Eocene time by large olistoliths of ophiolites and penecontemporaneous carbonates, and (ii) in the mid-Oligocene by large volcaniclastic olistoliths. The late Oligocene paleogeographic scenario was characterized by large deltaic bodies prograding northward (Osmancik Formation). This clearly indicates that the southern margin of the basin acted as a major sediment source area throughout its Eocene-Oligocene history. Another major sediment source area is represented by the Rhodope Massif, in particular the Circum-Rhodopic belt, especially for plutonic and metamorphic rocks.
Considering preexisting data on the petrologic composition of Thrace Basin siliciclastic sediments in Greece and Bulgaria (Caracciolo, 2009), a Rhodopian provenance can be considered mostly for areas of the Thrace Basin outside our study area, particularly in its northern-central portions. In summary, the most important source area for the sediment of the Thrace Basin in the study area was the exhumed subduction-accretion complex along the southern margin of the basin (Biga Peninsula and western-central Marmara Sea region). Most measured paleocurrent indicators show an eastward paleoflow, but this is most likely the result of gravity-flow deflection, reflecting strong control by the east-west-trending synsedimentary transcurrent faults which cut the Thrace Basin, generating a series of depocenters and uplifts that deeply influenced sediment dispersal and the areal distribution of paleoenvironments. The Thrace Basin was long interpreted as a forearc basin between a magmatic arc to the north and a subduction-accretion complex to the south, developed in a context of northward subduction. This interpretation was challenged by more recent data emphasizing the lack of a coeval magmatic arc in the north, and by the reinterpretation of the chaotic deposits which crop out south of the Ganos Fault as olistoliths and large submarine slumps, derived from the erosion and sedimentary reworking of an older mélange unit located to the south (not as a tectonic mélange formed in an accretionary prism).
The present study instead corroborates the hypothesis of a post-collisional origin of the Thrace Basin, due to a phase of orogenic collapse that generated a series of mid-Eocene depocenters all along the İzmir-Ankara suture (following closure of the Vardar-İzmir-Ankara ocean and the ensuing collision); subsequently, slab roll-back of the remnant Pindos ocean played an important role in enhancing subsidence and creating additional accommodation space for sediment deposition.

Relevance:

60.00%

Publisher:

Abstract:

In recent years, an ever-increasing degree of automation has been observed in most industrial processes. This increase is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency, and low costs of design, realization and maintenance. This growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of mechatronics, is merging with other technologies such as informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy boxed products such as food or cigarettes. Another indication of their complexity is that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner.
Even if the success of a modern AMS, from a functional and behavioural point of view, is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore optimal operating conditions; and managing diagnostic information in real time, as a support for machine maintenance operations. The facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of both top-level and bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, by focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, design and verification through formal and simulation tools have traditionally been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power-electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different, usually very "unstructured" way. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. This difference is probably due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been adopting this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. Over the last years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and electronic devices are more vulnerable by their own nature. The problem of diagnosis and fault isolation in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important improvement to formal verification of logic control, fault diagnosis and fault-tolerant control derives from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2 a survey of the state of the software engineering paradigms applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5 a new approach based on Discrete Event Systems is presented for the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
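The discrete-event view of logic control and diagnosis described above can be illustrated with a toy monitor: the plant is modelled as a finite automaton over observable events, and any observed event that is not enabled in the current state is reported as a fault symptom. This is a deliberately simplified sketch of the general idea, not the architecture developed in the thesis; the state and event names are invented.

```python
# Hypothetical plant model of a pick-and-place cycle as a finite automaton:
# (current state, observable event) -> next state.
TRANSITIONS = {
    ("idle", "start"): "picking",
    ("picking", "picked"): "placing",
    ("placing", "placed"): "idle",
}

def monitor(events, state="idle"):
    """Replay a trace of observed events against the model; collect any
    (state, event) pair that the model does not enable as a fault symptom."""
    faults = []
    for e in events:
        nxt = TRANSITIONS.get((state, e))
        if nxt is None:
            faults.append((state, e))  # unexpected event: possible fault
        else:
            state = nxt
    return state, faults
```

A nominal cycle such as `["start", "picked", "placed"]` returns to `idle` with no symptoms, while a trace that skips an event immediately produces a symptom that a diagnoser could map back to candidate faults.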

Relevance:

60.00%

Publisher:

Abstract:

This research work deals with the topic of urban sprawl. After a brief review of the various definitions of the phenomenon found in the literature - both qualitative and quantitative - and a description of the limits present in each of these definitions, the historical evolution of urban sprawl in the Western world is described. Once the object of the research has been defined and historically contextualized, its causes are analysed, together with the complex system of consequences that this urban phenomenon brings with it. The main sociological theories through which urban sprawl can be interpreted are then presented, and the various forms in which it can be expressed are described: there is in fact no uniformity among suburban landscapes, but rather a great internal diversity in the forms in which settlement dispersion manifests itself. While what has been examined so far, especially at the bibliographic level, belongs to the North American literature, at this point the attention of the work shifts to the European continent, examining the emergence of the periurban within our continent and attempting to describe both the continuities and the differences between urban sprawl and the periurban phenomenon. Finally, adopting a "funnel" procedure, the work focuses on the situation of our country with regard to the topic in question. The last section of the research comprises an empirical part. Since, as emerged from the theoretical framework, many elements characterize urban sprawl and the periurban, we set out to verify whether, and possibly which, of the elements described are present in a well-delimited area of the Bologna territory, in order to understand whether one can speak of a "Bolognese periurban" and what characteristics it presents.

Relevance:

60.00%

Publisher:

Abstract:

This thesis proposes design methods and test tools for optical systems which may be used in an industrial environment, where not only precision and reliability but also ease of use is important. The approach to the problem has been conceived to be as general as possible, although in the present work the design of a portable device for automatic identification applications has been studied, because this doctorate has been funded by Datalogic Scanning Group s.r.l., a world-class producer of barcode readers. The main functional components of the complete device are the electro-optical imaging, illumination and pattern generator systems. As regards the electro-optical imaging system, a characterization tool and an analysis tool have been developed to check whether the desired performance of the system has been achieved. Moreover, two design tools for optimizing the imaging system have been implemented. The first optimizes just the core of the system, the optical part, improving its performance while ignoring all other contributions, and generating a good starting point for the optimization of the whole complex system. The second tool optimizes the system taking into account its behaviour, with a model as close as possible to reality, including optics, electronics and detection. As regards the illumination and pattern generator systems, two tools have been implemented. The first allows the design of free-form lenses, described by an arbitrary analytical function and excited by an incoherent source, and is able to provide custom illumination conditions for all kinds of applications. The second tool consists of a new method to design Diffractive Optical Elements excited by a coherent source for large pattern angles, using the Iterative Fourier Transform Algorithm. Validation of the design tools has been obtained, whenever possible, by comparing the performance of the designed systems with that of fabricated prototypes. In other cases, simulations have been used.
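The Iterative Fourier Transform Algorithm mentioned above can be sketched in its classic Gerchberg-Saxton-style form: alternate between the element plane and the far field, imposing the target amplitude in one and the phase-only constraint in the other. This is a generic sketch of the standard technique under simplifying assumptions (phase-only element, far field modelled by a single FFT), not the specific method developed in the thesis; all names and sizes are illustrative.

```python
import numpy as np

def ifta(target_intensity, iterations=100, seed=0):
    """Iteratively compute a phase-only DOE profile whose far-field
    (Fourier-plane) intensity approximates target_intensity."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_intensity.shape)
    target_amp = np.sqrt(target_intensity)
    for _ in range(iterations):
        # Propagate to the far field and impose the target amplitude,
        # keeping the computed phase
        far = np.fft.fft2(np.exp(1j * phase))
        far = target_amp * np.exp(1j * np.angle(far))
        # Back-propagate and keep only the phase (phase-only constraint)
        near = np.fft.ifft2(far)
        phase = np.angle(near)
    return phase
```

In practice the pattern-angle extension described in the abstract would change how propagation is modelled; the alternating-projection skeleton stays the same.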

Relevance:

60.00%

Publisher:

Abstract:

In the Soviet Union the Party put in place a system of institutions to control the cultural world and written production: Glavlit, the highest censorship body; the Union of Writers; a centralized, state-run publishing industry; and a single admissible creative method, socialist realism. The sector of literary translation and the reception of foreign literature were likewise placed under control. Within the Union of Writers operated the Translators' Section, responsible for training the new Soviet translators, and the Foreign Commission, which established which Western authors and books were to be translated. The Foreign Department of Glavlit controlled printed material coming from abroad, its distribution and the modalities of its consultation, and carried out censorship both on the foreign-language text and on the one translated into Russian. In parallel, the aesthetic and normative code of socialist realism began to influence the development of translation theory. Translation studies aligned itself with official literary criticism and promoted a free approach to the text which allowed the introduction of arbitrary textual modifications by the translator or the editor.

Relevance:

60.00%

Publisher:

Abstract:

The profound transformations that have affected the food industry, together with the increased capacity of the medical and epidemiological sciences to identify causal links between the consumption of certain substances and the onset of diseases, have compelled the legislator to intervene in the field of so-called food safety, putting in place articulated and complex systems aimed at protecting the health of citizens. This objective is pursued, on the one hand, through provisions of a public and preventive nature and, on the other, through the instrument of civil liability. These two perspectives on the protection of people's health constitute distinct parts which are at the same time strongly integrated within a unitary logic. This perspective emerges clearly in the United States system: in that legal order the public regulation of food safety - defined by the Food and Drug Administration - constitutes an essential point of reference even when it comes to establishing whether a food product is defective and whether, as a consequence, the producer must compensate for the damage arising from its use. The effective synergy established between the public dimension (Public Enforcement) and the compensatory one (Private Enforcement) is further enhanced by the presence of effective instruments of collective redress, among which the class action assumes fundamental importance. Starting precisely from the analysis of the US system, the investigation focuses at first on identifying the gaps and critical issues that characterize the national system and, more generally, the European one.
Subsequently, attention focuses on identifying interpretative and de iure condendo solutions which, also drawing inspiration from the instruments of protection proper to US law, may contribute to making the synergy between preventive food-safety rules and compensatory rules on producer liability more effective.

Relevance:

60.00%

Publisher:

Abstract:

The ingestion of a meal evokes a series of digestive processes, which consist of the essential functions of the digestive system: food transport, secretory activity, absorption of nutrients and the expulsion of undigested or unabsorbed residues. Gastrointestinal chemosensitivity is mediated by cellular elements of the endocrine gastrointestinal mucosa and by nerve fibers, in particular of vagal nature. A wide range of endocrine and/or paracrine mediators can be released from various endocrine cells in response to nutrients in the diet. These hormones, in addition to their direct activity, act through specific receptors, activating some of the most important functions in the control of energy intake and energy homeostasis in the body. Completing this complex system of control of gastrointestinal chemosensitivity, recent evidence demonstrates the presence of taste receptors (TRs), belonging to the family of G-protein-coupled receptors, expressed in the mucosa of the gastrointestinal tract of different mammals and of humans. This thesis is divided into several research projects conceived in order to clarify the relationship between TRs and nutrients. To define this relationship I used various scientific approaches, which evaluated: changes in TR signalling molecules, in particular α-transducin, in the fasting state and after refeeding with a standard diet in the gastrointestinal tract of the pig; the mapping of the same signalling molecule in the gastrointestinal tract of a fish (Dicentrarchus labrax); the signalling pathway of bitter TRs in the STC-1 endocrine cell line; and, finally, the involvement of bitter TRs, in particular T2R38, in patients with an excessive caloric intake. The results showed that there is a close correlation between nutrients, TRs and hormonal release, and that TRs are not only useful in taste perception but are also likely to be involved in chronic diseases such as obesity.

Relevance:

60.00%

Publisher:

Abstract:

This PhD thesis is aimed at studying the suitability of the proteases released by Yarrowia lipolytica to hydrolyse proteins of different origins available as industrial food by-products. Several strains of Y. lipolytica have been screened for the production of extracellular proteases by zymography. On the basis of the results, some strains released only a protease having a MW of 37 kDa, which corresponds to the already reported acidic protease, while others produced prevalently or only a protease with a MW higher than 200 kDa. The proteases have been screened for their "cold attitude" on gelatin, gluten and skim milk. This property can be relevant from a biotechnological point of view in order to reduce energy consumption during industrial processes. Most of the strains used were endowed with proteolytic activity at 6 °C on all three proteins. The proteolytic breakdown profiles of the proteins, detected at 27 °C, differed according to the specific strain of Y. lipolytica. The time course of the hydrolysis, tested on gelatin, affected the final bioactivities of the peptide mixtures produced. In particular, an increase in both the antioxidant and the antimicrobial activities was detected when the protease of the strain Y. lipolytica 1IIYL4A was used. The final part of this work was focused on the improvement of the peptides' bioactivities through a novel process based on the production of glycopeptides. Firstly, the main reaction parameters were optimized in a model system; secondly, a more complex system, based on gluten hydrolysates, was taken into consideration to produce glycopeptides. The presence of the sugar moiety reduced the hydrophobicity of the glycopeptides, thus affecting the final antimicrobial activity, which was significantly improved. This procedure could be highly effective in modifying peptides and can be employed to create innovative functional peptides using a mild-temperature process.

Relevance:

60.00%

Publisher:

Abstract:

The application of dexterous robotic hands outside research laboratories has been limited by the intrinsic complexity that these devices present. This is directly reflected in an economically unreasonable cost and a low overall reliability. The research reported in this thesis shows how the problem of complexity in the design of robotic hands can be tackled by taking advantage of modern technologies (i.e. rapid prototyping), leading to innovative concepts for the design of the mechanical structure and of the actuation and sensory systems. The solutions adopted drastically reduce the prototyping and production costs and increase the reliability, reducing the number of parts required and averaging their individual reliability factors. In order to derive guidelines for the design process, the problem of robotic grasping and manipulation by a dual arm/hand system has been reviewed. In this way, the requirements that should be fulfilled at the hardware level to guarantee successful execution of the task have been highlighted. The contribution of this research on the manipulation planning side focuses on the redundancy resolution that arises in the execution of a task by a dexterous arm/hand system. In the literature, the problem of coordinating arm and hand during the manipulation of an object has been widely analyzed in theory, but often experimentally demonstrated only in simplified robotic setups. Our aim is to fill this gap and to evaluate the problem experimentally in a complex system such as an anthropomorphic arm/hand system.
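Redundancy resolution at the velocity level is commonly handled with the Jacobian pseudoinverse plus a null-space projection: the primary task velocity is tracked exactly, while a secondary objective (posture, joint-limit avoidance) is projected into the self-motion of the redundant joints. The following is a generic sketch of that standard textbook technique, not the specific coordination scheme developed in the thesis.

```python
import numpy as np

def redundant_ik_step(J, x_dot, q_dot_secondary):
    """One velocity-level inverse-kinematics step for a redundant system:
    track the task velocity x_dot, and project a secondary joint-velocity
    objective into the null space of the task Jacobian J."""
    J_pinv = np.linalg.pinv(J)
    # Null-space projector: joint motions in this subspace do not
    # disturb the primary task
    N = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ x_dot + N @ q_dot_secondary
```

For an arm/hand system the same structure applies with a stacked task Jacobian, so the hand can exploit the arm's extra degrees of freedom without perturbing the grasp.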

Relevância:

40.00% 40.00%

Publicador:

Resumo:

Over the last 60 years, computers and software have enabled incredible advancements in every field. Nowadays, however, these systems are so complicated that it is difficult, if not impossible, to understand whether they meet some requirement or are able to exhibit some desired behaviour or property. This dissertation introduces a Just-In-Time (JIT) a posteriori approach to conformance checking, which identifies any deviation from the desired behaviour as soon as possible and, where feasible, applies corrections. The declarative framework implementing our approach, developed entirely on the promising open-source forward-chaining Production Rule System (PRS) named Drools, consists of three components: (1) a monitoring module based on a novel, efficient implementation of Event Calculus (EC); (2) a general-purpose hybrid reasoning module (the first of its kind) merging temporal, semantic, fuzzy and rule-based reasoning; (3) a logic formalism based on the concept of expectations, introducing Event-Condition-Expectation rules (ECE-rules) to assess the global conformance of a system. The framework is also accompanied by an optional module that provides Probabilistic Inductive Logic Programming (PILP). By shifting the conformance check from after execution to just in time, this approach combines the advantages of many a posteriori and a priori methods proposed in the literature. Quite remarkably, if the corrective actions are explicitly given, the reactive nature of this methodology makes it possible to reconcile any deviation from the desired behaviour as soon as it is detected. In conclusion, the proposed methodology advances the state of conformance checking, helping to fill the gap between humans and increasingly complex technology.
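The expectation-based conformance idea can be illustrated with a minimal sketch (plain Python, purely illustrative; the actual framework is rule-based and built on Drools): a triggering event raises an expectation with a deadline, and the monitor reports a violation as soon as the event stream advances past an unfulfilled expectation's deadline.

```python
from dataclasses import dataclass

@dataclass
class Expectation:
    expected: str      # event type that must occur...
    deadline: float    # ...no later than this timestamp
    fulfilled: bool = False

class ECEMonitor:
    """Toy just-in-time conformance monitor: a trigger event raises an
    expectation; a violation is reported as soon as an observation
    arrives after an unfulfilled expectation's deadline."""

    def __init__(self, trigger, expected, timeout):
        self.trigger, self.expected, self.timeout = trigger, expected, timeout
        self.pending = []
        self.violations = []

    def observe(self, event, t):
        # Just-in-time check: expire overdue expectations first.
        for exp in self.pending:
            if not exp.fulfilled and t > exp.deadline:
                self.violations.append((exp.expected, exp.deadline))
        self.pending = [e for e in self.pending
                        if not e.fulfilled and t <= e.deadline]
        if event == self.expected and self.pending:
            self.pending[0].fulfilled = True   # fulfil the oldest expectation
        if event == self.trigger:
            self.pending.append(Expectation(self.expected, t + self.timeout))

mon = ECEMonitor(trigger="request", expected="response", timeout=5.0)
for ev, t in [("request", 0.0), ("response", 3.0), ("request", 4.0), ("tick", 10.0)]:
    mon.observe(ev, t)
print(mon.violations)   # the second request got no response by t = 9.0
```

The key point mirrored from the approach above is that the violation is surfaced at the first observation after the deadline, not in a batch analysis after execution ends.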

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons:
• Portable mobile devices have modest sizes and weights, and therefore inadequate resources: low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems.
• On the other hand, multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding.
This innate conflict introduces key research challenges in the design of multimedia applications and in device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, and thus provide functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood, both in research and in industry, that system configuration and management cannot be controlled efficiently by relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. 
A second main objective has been the definition and implementation of software facilities (toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms.
Enhancing Energy Efficiency and Programmability of Modern Multi-Processor Systems-on-Chip (MPSoCs)
Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high-performance, real-time and, even more importantly, low-power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high-performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks on the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on the processors of a distributed real-time system is NP-hard. There is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements, and which also tries to optimize the power consumption of the entire multiprocessor platform. This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications on multiprocessor architectures. It is a well-known problem in the literature: optimization problems of this kind are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. 
Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristic or, more generally, incomplete search is that it introduces an optimality gap of unknown size: it provides very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gap, by formulating accurate models which account for a number of "non-idealities" of real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms.
Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor
Despite the ever-increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, and gaming and navigation devices. There is a clear trend towards larger LCDs to exploit the multimedia capabilities of portable devices that can receive and render high-definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel-matrix driving circuits, and is typically proportional to the panel area; as a result, its contribution is likely to remain considerable in future mobile appliances. 
To address this issue, companies are proposing low-power technologies suitable for mobile applications, supporting low-power states and image control techniques. On the research side, several power-saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with the crystal polarization; others aim at decreasing the backlight level while offsetting the luminance reduction, and thus the user-perceived quality degradation, through pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation presents an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS. The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modification.
Thesis Overview
The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs; the methodology is based on functional simulation and full-system power estimation. Chapter 4 targets allocation and scheduling of pipelined, stream-oriented applications on top of distributed-memory architectures with messaging support. 
We tackled the complexity of the problem by means of decomposition and no-good generation, and proved the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework that solves the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers with efficient software implementation on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable-device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 surveys several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions discussed throughout this dissertation.
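The backlight-scaling idea can be sketched numerically (an illustrative software model with made-up pixel values, not the hardware-assisted implementation the dissertation develops): dimming the backlight saves power roughly in proportion to the dimming factor, while pixel luminance is boosted by the inverse factor to preserve perceived brightness, clipping at full white, which is where the quality loss occurs.

```python
def compensate(pixels, dimming):
    """Boost 8-bit pixel luminance by 1/dimming to offset a dimmed
    backlight; values clip at 255 (full white)."""
    assert 0.0 < dimming <= 1.0
    return [min(255, round(p / dimming)) for p in pixels]

def clipped_fraction(pixels, dimming):
    # Fraction of pixels that saturate: a crude QoS-degradation proxy.
    return sum(p / dimming > 255 for p in pixels) / len(pixels)

frame = [10, 80, 120, 200, 250]        # toy luminance samples
dim = 0.8                              # backlight at 80% of full level
print(compensate(frame, dim))          # [12, 100, 150, 250, 255]
print(clipped_fraction(frame, dim))    # 0.2 (only the brightest sample clips)
```

In practice the dimming factor would be chosen per frame so that the clipped fraction stays below a perceptual threshold, trading backlight power against image fidelity.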

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm, which is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems; for this reason, research on methodologies becomes a central point of the scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage, still in the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented, and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models thus becomes fundamental for comparing and evaluating methodologies: a meta-model specifies the concepts, rules and relationships used to define a methodology. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. 
the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products to be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to model naturally all the aspects related to multi-agent systems. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent its spatial structure, either logical or physical). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This thesis describes modelling tools and methods suited for complex systems, i.e. systems that are typically represented by a plurality of models. The basic idea is that all the models representing the system should be linked by well-defined model operations, in order to build a structured repository of information: a hierarchy of models. The port-Hamiltonian framework is a good candidate for this kind of problem, as it natively supports the most important model operations. In particular, the thesis addresses the problem of integrating distributed-parameter systems into a model hierarchy, and shows two possible mechanisms for doing so: a finite-element discretization in port-Hamiltonian form, and a structure-preserving model order reduction for discretized models obtainable from commercial finite-element packages.
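For reference, the standard input-state-output port-Hamiltonian form (textbook notation; the abstract itself does not spell it out) is:

```latex
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\frac{\partial H}{\partial x}(x) + g(x)\,u,
\qquad
y = g(x)^{\top}\,\frac{\partial H}{\partial x}(x),
```

where $H(x)$ is the stored energy (Hamiltonian), $J(x) = -J(x)^{\top}$ encodes the power-preserving interconnection structure, and $R(x) = R(x)^{\top} \succeq 0$ the dissipation. The resulting power balance $\dot{H} \le y^{\top}u$ is preserved when such models are interconnected through their ports, which is what makes the framework naturally suited to composing and linking models in a hierarchy.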