26 results for Building blocks in elastomer composite fabrication
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Phenol and cresols are a good example of primary chemical building blocks, of which 2.8 million tons are currently produced in Europe each year. At present, these primary phenolic building blocks are obtained by refining fossil hydrocarbons: 5% of world production comes from coal (which contains 0.2% phenols) through distillation of the tar residue left after coke production, while 95% of current world phenol production comes from the distillation and cracking of crude oil. In nature, phenolic compounds occur in higher terrestrial plants and ferns in several different chemical structures, while they are essentially absent in lower organisms and in animals. Biomass (which contains 3-8% phenols) represents a substantial, presently underexploited source of secondary chemical building blocks. These phenolic derivatives are currently used, in tens of thousands of tons, to produce high-value products such as food additives and flavours (e.g. vanillin), fine chemicals (e.g. non-steroidal anti-inflammatory drugs such as ibuprofen or flurbiprofen) and polymers (e.g. poly(p-vinylphenol), a photosensitive polymer for electronic and optoelectronic applications). European agrifood waste represents a low-cost, abundant raw material (250 million tons per year) which does not subtract land use or processing resources from necessary sustainable food production. The class of phenolic compounds essentially comprises simple phenols, phenolic acids, hydroxycinnamic acid derivatives, flavonoids and lignans. As in the case of coke production, removing the phenolic content from biomass also upgrades the residual biomass. Focusing on the phenolic component of agrifood wastes opens huge processing and marketing opportunities, since phenols are used as chemical intermediates for a large number of applications, ranging from pharmaceuticals and agricultural chemicals to food ingredients. Following this approach, we developed a biorefining process to recover the phenolic fraction of wheat bran, based on commercial enzymatic biocatalysts in a completely water-based process and on polymeric resins, with the aim of substituting secondary chemical building blocks with the same compounds naturally present in biomass. We characterized several industrial enzymatic products for their ability to hydrolyse the different molecular features present in wheat bran cell wall structures, focusing on the hydrolysis of polysaccharide chains and phenolic cross-links. These industrial biocatalysts were tested on wheat bran, and the optimized process liquefied up to 60% of the treated matter. The enzymatic treatment was also able to solubilise up to 30% of the alkali-extractable ferulic acid. An extraction process for the phenolic fraction of the hydrolysed wheat bran, based on adsorption/desorption on the styrene-divinylbenzene weak anion-exchange resin Amberlite IRA 95, was then developed. The efficiency of the resin was tested on different model systems containing ferulic acid, and the adsorption and desorption working parameters were optimized for the crude enzymatic wheat bran hydrolysate. The extraction process had an overall yield of 82% and yielded concentrated extracts containing up to 3000 ppm of ferulic acid. The crude enzymatic wheat bran hydrolysate and the concentrated extract were finally used as substrates in a bioconversion of ferulic acid into vanillin by resting-cell fermentation.
The bioconversion gave vanillin yields of 60-70% within 5-6 hours of fermentation. Our findings are a first step towards demonstrating the economic feasibility of recovering biophenols from agrifood wastes through a whole-crop approach in a sustainable biorefining process.
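The reported figures lend themselves to a quick back-of-the-envelope check. The sketch below is a minimal illustration only: treating ppm as mg/L, taking the 60-70% bioconversion as a molar yield, and using standard molar masses are assumptions not stated in the abstract. It chains the extraction and bioconversion yields and estimates the vanillin titre obtainable from a 3000 ppm ferulic acid extract.

```python
# Minimal mass-balance sketch chaining the efficiencies reported in the abstract.
# Molar masses are standard values; treating 3000 ppm as mg/L and the 60-70%
# bioconversion yield as a molar yield are illustrative assumptions.

M_FERULIC = 194.18   # g/mol, ferulic acid (C10H10O4)
M_VANILLIN = 152.15  # g/mol, vanillin (C8H8O3)

def vanillin_from_extract(ferulic_ppm: float, molar_yield: float) -> float:
    """Vanillin concentration (mg/L) expected from a ferulic acid extract."""
    ferulic_mmol_per_l = ferulic_ppm / M_FERULIC          # ppm ~ mg/L
    vanillin_mmol_per_l = ferulic_mmol_per_l * molar_yield
    return vanillin_mmol_per_l * M_VANILLIN

extraction_yield = 0.82            # resin adsorption/desorption step
for bioconv in (0.60, 0.70):       # resting-cell fermentation yield range
    overall = extraction_yield * bioconv
    print(f"bioconversion {bioconv:.0%}: overall ferulic->vanillin recovery "
          f"{overall:.0%}, ~{vanillin_from_extract(3000, bioconv):.0f} mg/L vanillin")
```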
Abstract:
The challenging requirements set on new full-composite aeronautical structures are mostly related to the demonstration of the damage tolerance capability of their primary structures, required by the airworthiness bodies. While composite structures inherently demonstrate exceptional fatigue properties, in real-life working conditions a number of external factors can lead to impact damage, drastically reducing their fatigue resistance through fiber delamination, disbonding or breakage. This PhD thesis aims to contribute to a better understanding of the behavior of primary composite aeronautical structures after near-edge impacts, which are inevitable during the service life of an aircraft. The behavior of CFRP structures after impact is only one small piece of the big picture that is the certification of CFRP-built aircraft, where several other parameters need to be evaluated in order to fulfill the airworthiness requirements. These parameters are also discussed in this PhD thesis in order to give a better understanding of the complex task of CFRP structure certification, in which the behavior of the impacted structure plays an important role. An experimental and numerical campaign was carried out to determine the level of delamination damage in CFRP specimens after near-edge impacts. By calibrating the numerical model with experimental data, it was possible, for different configurations and energy levels, to predict the extent of delamination in a CFRP structure and to estimate its residual static strength using a very simple but robust technique. The original contribution of this work to the analysis of CFRP structures is the creation of a model applicable to a wide range of thicknesses and stacking sequences, thus potentially suitable for industrial application as well.
Abstract:
The aim of this PhD thesis was to study different liquid crystal (LC) systems at a microscopic level, in order to determine their physical properties, resorting to two distinct methodologies: one involving computer simulations, and the other spectroscopic techniques, in particular electron spin resonance (ESR) spectroscopy. By means of the computer simulation approach we sought to demonstrate the effectiveness of this tool for calculating anisotropic static properties of an LC material, as well as for predicting its behaviour and features. This required the development and adoption of suitable molecular models based on convenient intermolecular potentials reflecting the essential molecular features of the investigated system. Concerning the simulation approach, we set up models for discotic liquid crystal dimers and we studied, by means of Monte Carlo simulations, their phase behaviour and self-assembling properties with respect to the simple monomer case. Each discotic dimer is described by two oblate Gay-Berne ellipsoids connected by a flexible spacer, modelled as a harmonic "spring" of three different lengths. In particular, we investigated the effects of dimerization on the transition temperatures, as well as on the characteristics of the molecular aggregation displayed and the associated orientational order. Moving to the experimental results, among the many experimental techniques typically employed to evaluate the distinctive features of LC systems, ESR has proved to be a powerful tool for the microscopic-scale investigation of the properties, structure, order and dynamics of these materials. We took advantage of the high sensitivity of the ESR spin-probe technique to investigate increasingly complex LC systems, ranging from devices in which LC molecules are confined as nanodroplets within a polymer matrix, to biaxial liquid crystalline elastomers, and to dimers whose monomeric units or lateral groups are constituted by rod-like mesogens (11BCB). Reflection-mode holographic polymer-dispersed liquid crystals (H-PDLCs) are devices in which LCs are confined into nanosized (50-300 nm) droplets, arranged in layers which alternate with polymer layers, forming a diffraction grating. We determined the configuration of the LC local director and derived a model of the nanodroplet organization inside the layers. Resorting also to additional information on the nanodroplet size and shape distribution provided by SEM images of the H-PDLC cross-section, the observed director configuration has been modelled as a bidimensional distribution of elongated nanodroplets whose long axis is, on average, parallel to the layers and whose internal director configuration is a uniaxial quasi-monodomain aligned along the nanodroplet long axis. The results suggest that the molecular organization is dictated mainly by the confinement, explaining, at least in part, the need for significantly higher switching voltages and the faster turn-off times observed in H-PDLCs compared to standard PDLC devices. Liquid crystal elastomers consist of cross-linked polymers in which mesogens represent the monomers constituting the main chain or the laterally attached side groups. They bring together three important aspects: orientational order in amorphous soft materials, responsive molecular shape and quenched topological constraints.
In biaxial nematic liquid crystalline elastomers (BLCEs), two orthogonal directions, rather than the single one of ordinary uniaxial nematics, can be controlled, greatly enhancing their potential value for applications as novel actuators. Two versions of side-chain BLCEs were characterized: side-on and end-on. Many tests were carried out on both types of LCE, the main features detected being the lack of a significant dynamical behaviour, together with a strong permanent alignment along the principal director, and the confirmation of the transition temperatures already determined by DSC measurements. The end-on sample shows a less hindered rotation of the side-group mesogenic units and a greater freedom of alignment to the magnetic field, as already shown by previous NMR studies. Biaxial nematic ESR static spectra were also computed on the basis of biaxial configurations generated by Molecular Dynamics, to be compared with the experimentally determined ones, as a means to establish a possible relation between biaxiality and the spectral features. This provides a concrete example of the advantages of combining the computer simulation and spectroscopic approaches. Finally, the dimer α,ω-bis(4'-cyanobiphenyl-4-yl)undecane (11BCB), synthesized in the "quest" for the biaxial nematic phase, was analysed. Its importance lies in the significance of dimers as building blocks in the development of new materials for innovative technological applications, such as faster-switching displays, exploiting the easier aligning ability of the secondary director in biaxial phases. A preliminary series of tests revealed that the population of mesogenic molecules is divided into two groups: one of elongated, straightened conformers sharing a common director, and one of bent molecules which display no order, being equally distributed in the three dimensions. Employing this model, the calculated values show a consistent trend, confirming the transition temperatures indicated by the DSC measurements, together with rotational diffusion tensor values that closely follow those of the constituent monomer 5CB.
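To give a flavour of the simulation side, the sketch below implements only the flexible-spacer contribution and a Metropolis acceptance step for a dimer of two discotic sites. The spring constant, equilibrium length and reduced temperature are placeholders, and the oblate Gay-Berne site-site energy used in the thesis is deliberately omitted; this is a minimal illustration of the sampling scheme, not the actual model parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

K_SPRING = 10.0   # assumed harmonic spacer constant (reduced units)
L0 = 1.2          # assumed equilibrium spacer length; the thesis explores three lengths
T = 1.0           # reduced temperature

def spacer_energy(r1: np.ndarray, r2: np.ndarray) -> float:
    """Harmonic 'spring' linking the two oblate sites of a discotic dimer."""
    d = np.linalg.norm(r1 - r2)
    return 0.5 * K_SPRING * (d - L0) ** 2

def metropolis_step(r1, r2, max_disp=0.1):
    """Randomly displace the second site and accept/reject with the Metropolis
    rule. A full simulation would add the oblate Gay-Berne site-site energy here."""
    trial = r2 + rng.uniform(-max_disp, max_disp, size=3)
    delta_e = spacer_energy(r1, trial) - spacer_energy(r1, r2)
    if rng.random() < np.exp(-delta_e / T):
        return trial, True
    return r2, False

r1 = np.zeros(3)
r2 = np.array([0.0, 0.0, L0])
accepted = 0
for _ in range(10_000):
    r2, ok = metropolis_step(r1, r2)
    accepted += ok
print(f"acceptance ratio: {accepted / 10_000:.2f}, "
      f"final spacer length: {np.linalg.norm(r2 - r1):.2f}")
```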
Abstract:
The electrocatalytic reduction of CO2 (CO2RR) is a promising strategy for the conversion of CO2 into fuels, towards a carbon-neutral circular economy. In recent years, research has focused on the development of new materials and technologies capable of capturing and converting CO2 into useful products. The main problem of CO2RR is its poor selectivity, which can lead to the formation of numerous reaction products, to the detriment of efficiency. For this reason, the design of new electrocatalysts that reduce CO2 selectively and efficiently is a fundamental step for the future exploitation of this technology. Here we present a new class of electrocatalysts designed with a modular approach, namely, deriving from the combination of different building blocks in a single nanostructure. With this approach it is possible to obtain materials with an innovative design and new functionalities, where the interconnections between the various components are essential to obtain a highly selective and efficient reduction of CO2, thus opening up new possibilities in the design of optimized electrocatalytic materials. By combining the unique physico-chemical properties of carbon nanostructures (CNS) with nanocrystalline metal oxides (MO), we were able to modulate the selectivity of CO2RR, producing formic acid and syngas at low overpotentials. The CNS not only stabilize the MO nanoparticles: the creation of an optimal interface between the two nanostructures also improves the catalytic activity of the active phase of the material, while the presence of oxygen atoms in the MO creates defects that accelerate the reaction kinetics and stabilize certain reaction intermediates, thus selecting the reaction pathway. Finally, part of the work was dedicated to the study of the experimental parameters influencing the CO2RR, with the aim of improving the experimental setup in order to reach catalytic performance of commercial relevance.
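Selectivity in CO2RR experiments is commonly quantified by the faradaic efficiency of each product. The snippet below is a generic illustration of that bookkeeping (standard electron counts per product; the example product amounts and the charge passed are hypothetical values, not data from the thesis).

```python
F = 96485.0  # C/mol, Faraday constant

# electrons transferred per product molecule (standard CO2RR/HER values)
Z = {"CO": 2, "HCOOH": 2, "H2": 2}

def faradaic_efficiency(moles: dict, charge_C: float) -> dict:
    """Fraction of the total charge that went into each product."""
    return {p: Z[p] * n * F / charge_C for p, n in moles.items()}

# hypothetical electrolysis: 50 C passed, products quantified downstream
products_mol = {"HCOOH": 1.5e-4, "CO": 4.0e-5, "H2": 5.0e-5}
for product, fe in faradaic_efficiency(products_mol, 50.0).items():
    print(f"{product}: FE = {fe:.1%}")
```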
Abstract:
For cast-in-place reinforced concrete constructions, the most commonly used structural systems are moment-resisting frames, load-bearing walls, or a combination of both. Since the 1960s, a very large number of studies have addressed the seismic behaviour of RC frame structures. The same can be said for constructions consisting of walls coupled with frames. In particular, the seismic design of these building types has mainly concerned tall buildings, in which walls were evidently employed to limit their high deformability. The seismic behaviour of structures made entirely of RC load-bearing walls has been studied less over the years, even though buildings built with such structural systems have generally shown remarkable strength reserves against earthquakes, even of high intensity. In the last 10 years, earthquake engineering has been focusing on investigating the capabilities of construction types that have been widely used in the past (typically in continental Europe, Latin America, the USA and also Italy), but for which adequate scientific knowledge of their behaviour in seismic areas was lacking. These types essentially concern structural systems made entirely of RC load-bearing walls for low-rise buildings, usually employed in low-cost construction (residential and/or office buildings). The general objective of the research work presented here is the study of the seismic behaviour of low-rise structures made entirely of RC load-bearing walls (low-cost construction). In particular, the walls studied here are characterized by low geometric reinforcement ratios and are built using stay-in-place formwork technology. To the author's knowledge, no experimental and analytical studies have been carried out to date to determine the seismic behaviour of such structural systems, whereas their static behaviour is well known. In detail, this research work has the twofold aim of:
• obtaining a structural system characterized by high seismic performance;
• developing practical design tools (consistent and compatible with current codes and therefore immediately usable by designers) for the seismic design of the RC load-bearing panels studied here.
In order to study the seismic behaviour and to identify practical design tools, the research was organized as follows:
• identification of the characteristics of the structures studied, through the development/specialization of suitable analytical formulations;
• design, supervision and interpretation of an extensive experimental campaign carried out on full-scale RC load-bearing walls, in order to verify their effective behaviour under cyclic loading;
• development of simple design indications (rules) for the RC wall structures studied, in order to obtain the desired performance characteristics.
The experimental results were in agreement with the analytical predictions, confirming the validity of the tools for predicting the behaviour of these panels. The very high performance observed, both in terms of strength and of ductility, showed that the structures studied, developed in this way, exhibit a more than satisfactory seismic behaviour.
Abstract:
This thesis deals with Context Aware Services, Smart Environments, Context Management and solutions for Device and Service Interoperability. Multi-vendor devices offer an increasing number of services and end-user applications that base their value on the ability to exploit information originating from the surrounding environment by means of an increasing number of embedded sensors, e.g. GPS, compass, RFID readers, cameras and so on. However, such devices are usually not able to exchange information because of the lack of shared data storage and common information exchange methods. A large number of standards and domain-specific building blocks are available and heavily used in today's products. However, the use of these solutions based on ready-to-use modules is not without problems. The integration and cooperation of different kinds of modules can be daunting because of growing complexity and dependency. In such scenarios it is interesting to have an infrastructure that makes the coexistence of multi-vendor devices easy, while enabling low-cost development and smooth access to services. This sort of technological glue should reduce both software and hardware integration costs by removing the trouble of interoperability. The result should also lead to faster and simplified design, development and deployment of cross-domain applications. This thesis is mainly focused on SW architectures supporting context-aware service providers, especially on the following subjects:
- user preference based service adaptation
- context management
- content management
- information interoperability
- multi-vendor device interoperability
- communication and connectivity interoperability
Experimental activities were carried out in several domains including Cultural Heritage and indoor and personal smart spaces – all of which are considered significant test-beds in Context Aware Computing. The work evolved within European and national projects: on the European side, I carried out my research activity within EPOCH, the FP6 Network of Excellence on "Processing Open Cultural Heritage", and within SOFIA, a project of the ARTEMIS JU on embedded systems. I worked in cooperation with several international establishments, including the University of Kent, VTT (the Technical Research Centre of Finland) and Eurotech. On the national side I contributed to a one-to-one research contract between ARCES and Telecom Italia. The first part of the thesis is focused on the problem statement and related work, and addresses interoperability issues and related architecture components. The second part is focused on specific architectures and frameworks:
- MobiComp: a context management framework that I used in cultural heritage applications
- CAB: a context, preference and profile based application broker which I designed within the EPOCH Network of Excellence
- M3: a "Semantic Web based" information sharing infrastructure for smart spaces designed by Nokia within the European project SOFIA
- NoTA: a service and transport independent connectivity framework
- OSGi: the well-known Java based service support framework
The final section is dedicated to the middleware, the tools and the SW agents developed during my doctorate to support context-aware services in smart environments.
Abstract:
This dissertation deals with the design and characterization of novel reconfigurable silicon-on-insulator (SOI) devices to filter and route optical signals on-chip. The design is carried out through circuit simulations based on basic circuit elements (Building Blocks, BBs), in order to prove the feasibility of an approach that moves the design of Photonic Integrated Circuits (PICs) toward the system level. CMOS compatibility and large integration scale make SOI one of the most promising materials for realizing PICs. The concepts of generic foundry and BB-based circuit simulation are emerging as a solution to reduce costs and increase the achievable circuit complexity. To validate the BB-based approach, the development of some of the most important BBs is performed first. A novel tunable coupler is also presented and demonstrated to be a valuable alternative to the known solutions. Two novel multi-element PICs are then analysed: a narrow-linewidth single-mode resonator and a passband filter with widely tunable bandwidth. Extensive circuit simulations are carried out to determine their performance, taking fabrication tolerances into account. The first PIC is based on two Grating Assisted Couplers in a ring resonator (RR) configuration. It is shown that a trade-off between performance, resonance bandwidth and device footprint has to be made. The device could be employed to realize reconfigurable add-drop de/multiplexers; sensitivity to fabrication tolerances and spurious effects is however observed. The second PIC is based on an unbalanced Mach-Zehnder interferometer loaded with two RRs. Overall good performance and robustness to fabrication tolerances and nonlinear effects confirm its applicability to the realization of flexible optical systems. Simulated and measured device behaviour is shown to be in agreement, thus demonstrating the viability of a BB-based approach to the design of complex PICs.
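For context, a single add-drop ring resonator — one of the basic building blocks mentioned above — has a well-known analytic power response (see e.g. Bogaerts et al., "Silicon microring resonators"). The sketch below evaluates it for illustrative coupling and loss values; the specific parameter values are assumptions, not those of the devices designed in the thesis.

```python
import numpy as np

def add_drop_response(phi, r1=0.95, r2=0.95, a=0.99):
    """Power transmission of an add-drop ring resonator.

    phi : round-trip phase, r1/r2 : self-coupling coefficients of the two
    couplers, a : single-pass amplitude transmission (loss).
    """
    denom = 1 - 2 * r1 * r2 * a * np.cos(phi) + (r1 * r2 * a) ** 2
    t_through = (r2**2 * a**2 - 2 * r1 * r2 * a * np.cos(phi) + r1**2) / denom
    t_drop = ((1 - r1**2) * (1 - r2**2) * a) / denom
    return t_through, t_drop

phi = np.linspace(-np.pi, np.pi, 5)
for p, (t, d) in zip(phi, zip(*add_drop_response(phi))):
    print(f"phi = {p:+.2f} rad: through = {t:.3f}, drop = {d:.3f}")
```

At resonance (phi = 0) the drop-port transmission peaks while the through port dips, which is the behaviour exploited in reconfigurable add-drop de/multiplexers.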
Abstract:
In recent decades, Organic Thin Film Transistors (OTFTs) have attracted a great deal of interest due to their low cost, large area and flexibility, which have led them to be considered the building blocks of future organic electronics. Experimentally, devices based on the same organic material deposited in different ways, i.e. by varying the deposition rate of the molecules, show different electrical performance. As predicted theoretically, this is due to the speed and rate at which charge carriers can be transported by hopping in organic thin films, a transport process that depends on the arrangement of the molecules. This strongly suggests a correlation between the morphology of the organic semiconductor and the performance of the OTFT, and hence motivated us to carry out an in-situ, real-time SPM study of organic semiconductor growth as an almost unprecedented experiment, with the aim of fully describing the morphological evolution of the ultra-thin film and of finding the relevant morphological parameters affecting the OTFT electrical response. For the case of 6T on silicon oxide, we have shown that the growth mechanism is 2D+3D, with a roughening transition at the third layer followed by rapid roughening. Relevant morphological parameters have been extracted from the AFM images. We also developed an original mathematical model to estimate theoretically, and more accurately than before, the capacitance of an EFM tip in front of a metallic substrate. Finally, we obtained Ultra High Vacuum (UHV) AFM images of the 6T lying-molecules layer both on silicon oxide and on top of 6T islands. Moreover, we performed ex-situ AFM imaging on a bilayer film composed of pentacene (a p-type semiconductor) and C60 (an n-type semiconductor).
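The thesis develops its own, more accurate model for the tip capacitance; as a simple baseline for comparison, the classical sphere-plane image-charge series (a textbook result, explicitly not the model proposed in the thesis) can be evaluated as below, with the tip apex approximated by a sphere of radius R at height z above a metallic plane. The tip radius and distances used are hypothetical.

```python
import math

EPS0 = 8.8541878128e-12  # F/m

def sphere_plane_capacitance(R: float, z: float, n_terms: int = 200) -> float:
    """Image-charge series for a sphere of radius R at gap z above a grounded
    plane: C = 4*pi*eps0*R*sinh(a) * sum_n 1/sinh(n*a), with cosh(a) = 1 + z/R.
    Used here only as a crude baseline for an EFM tip apex."""
    alpha = math.acosh(1.0 + z / R)
    s = sum(1.0 / math.sinh(n * alpha) for n in range(1, n_terms + 1))
    return 4 * math.pi * EPS0 * R * math.sinh(alpha) * s

# hypothetical tip: 20 nm apex radius, scanned at a few tip-sample distances
for z_nm in (5, 10, 20, 50):
    C = sphere_plane_capacitance(R=20e-9, z=z_nm * 1e-9)
    print(f"z = {z_nm:3d} nm: C ≈ {C * 1e18:.2f} aF")
```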
Abstract:
In Bosnia and Herzegovina, the development of clear policy objectives and the endorsement of a long-term, coherent and shared agricultural and rural development policy have also been affected by structural problems: a lack of reliable information on population and other relevant issues, and the absence of an adequate land registry and cadastre system. Moreover, in BiH the agricultural and rural sectors are characterized by many factors that have typically affected transition countries, such as land fragmentation, lack of agricultural mechanization, outdated production technologies, rural aging, high unemployment and out-migration. In such a framework, the condition and role of women in rural areas have suffered from the lack of gender-disaggregated data and the consequently poor information, which has led to the exclusion of gender-related questions from the agenda of public institutions and to the absence of targeted policy interventions. The aim of the research is to investigate the role and condition of women in the rural development process of the Republic of Srpska and to analyse the capacity of extension services to stimulate their empowerment. Specific research questions concern the status of women in the rural areas of the Republic of Srpska, the role of government in fostering the empowerment of rural women, and the role of the extension service in supporting rural women. The methodology - inspired by the case study method developed by R. Yin - is designed around the three specific research questions, which are used as building blocks. Each of the three research questions is investigated with a combination of methodological tools - including surveys, expert interviews and focus groups - aimed at overcoming the lack of data and knowledge that characterizes the research objectives.
Abstract:
Low molecular weight gelators (LMWGs) based on pseudo-peptides are studied here for the preparation of supramolecular materials. These compounds can self-assemble through non-covalent interactions such as hydrogen bonds and π-π stacking, forming fibres and gels. A wide variety of materials can be prepared starting from these building blocks, which can be tuned and functionalised depending on the application. In this work, derivatives of the three aromatic amino acids L-Phenylalanine, L-Tyrosine and L-DOPA (3,4-dihydroxyphenylalanine) were synthesised and tested as gelators for water or organic solvents. First, the optimal gelation conditions were studied for each compound, varying concentration, solvent and trigger. The materials were then characterised in terms of mechanical properties and morphology. Water remediation from dye pollution was the first focus of this work. Organogels were studied as absorbents of dyes from contaminated water. Hydrogels functionalised with TiO2 nanoparticles and graphene platelets were proposed as efficient materials for the photo-degradation of dyes. An efficient method for the incorporation of graphene inside hydrogels, using the gelator itself as dispersant, was proposed. In these materials a high storage modulus coexists with good self-healing and biocompatibility. The incorporation of a mineral phase inside the gel matrix was then investigated, leading to the preparation of composite organic/inorganic materials. In a first study, the growth of calcium carbonate crystals was achieved inside the hydrogel, which preserved its structure after crystal formation. Then the self-assembled fibres made of LMWGs were used for the first time, instead of polymeric ones, as reinforcement inside calcium phosphate cements (CPCs) for bone regeneration. Gel-to-crystal transitions occurring with time in a metastable gel were also examined. The formation of organic crystals in gels can be achieved in multicomponent systems, in which a second gelator constitutes an independent gel network. Finally, some compounds unable to gelate were tested as underwater adhesives.
Abstract:
Gossip protocols have proved to be a viable solution for setting up and managing large-scale P2P services and applications in a fully decentralised scenario. The gossip, or epidemic, communication scheme is heavily based on stochastic behaviour and is the fundamental idea behind many large-scale P2P protocols. It provides many remarkable features, such as scalability, robustness to failures, emergent load-balancing capabilities, fast spreading, and redundancy of information. In some sense, these services and protocols mimic the behaviour of natural systems in order to achieve their goals. The key observation of this work is that the remarkable properties of gossip hold only when all the participants follow the rules dictated by the actual protocols. If one or more malicious nodes join the network and start cheating according to some strategy, the result can be catastrophic. In order to study how serious the threat posed by malicious nodes can be, and what can be done to prevent attackers from cheating, we focused on a general attack model aimed at defeating a key service in gossip overlay networks (the Peer Sampling Service [JGKvS04]). We also focused on the problem of protecting against forged information exchanged in gossip services. We propose a solution technique for each problem; both techniques are general enough to be applied to distinct service implementations. Like gossip protocols, our solutions are based on stochastic behaviour and are fully decentralised. In addition, each technique's behaviour is abstracted by a general primitive function extending the basic gossip scheme; this approach allows our solutions to be adopted with minimal changes in different scenarios. We provide an extensive experimental evaluation to support the effectiveness of our techniques. Essentially, these techniques aim to serve as building blocks or P2P architecture guidelines for building more resilient and more secure P2P services.
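To make the setting concrete, the sketch below shows a minimal view-shuffling round in the spirit of a gossip-based Peer Sampling Service. Node identifiers, view size and the exchange policy are illustrative assumptions: this is neither the protocol of [JGKvS04] nor the defence techniques proposed in the thesis.

```python
import random

VIEW_SIZE = 4  # assumed partial-view size

class Node:
    """Minimal gossip peer: keeps a bounded partial view of other node ids."""
    def __init__(self, nid, peers):
        self.nid = nid
        self.view = set(peers)

    def select_peer(self):
        return random.choice(sorted(self.view))

    def exchange(self, other: "Node"):
        """Symmetric shuffle: merge both views and keep a random bounded subset."""
        merged = self.view | other.view | {self.nid, other.nid}
        self.view = set(random.sample(sorted(merged - {self.nid}), VIEW_SIZE))
        other.view = set(random.sample(sorted(merged - {other.nid}), VIEW_SIZE))

random.seed(1)
ids = list(range(10))
nodes = {i: Node(i, random.sample([j for j in ids if j != i], VIEW_SIZE)) for i in ids}

for _ in range(50):                      # a few gossip rounds
    n = nodes[random.choice(ids)]
    n.exchange(nodes[n.select_peer()])

print({i: sorted(nodes[i].view) for i in ids})
```

A malicious node deviating from this shuffle (e.g. always pushing its own identifier) can quickly pollute the views of honest peers, which is exactly the kind of threat the attack model above targets.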
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies becomes a central point of the scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers and, as a result, most of them are at an early stage and still within the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies: in fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to model in a natural way all the aspects related to multi-agent systems. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, acting as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions - entities of the environment encapsulating some functions - and topology abstractions - entities of the environment that represent its (either logical or physical) spatial structure. In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology, in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents. The success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be the aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology to answer the increasing requests for dynamic bandwidth allocation and to configure multiple topologies over the same physical-layer infrastructure; however, optical networks today are still far from being directly accessible for configuring and offering network services, and need to be enriched with more user-oriented functionalities. Moreover, current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and certainly cannot meet future network service requirements, e.g. the coordinated control of resources. The overall objective of this work is to provide the network with improved usability and accessibility of the services provided by the Optical Network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer. The definition of a service-oriented networking architecture based on advanced optical network technologies facilitates user and application access to abstracted levels of information regarding the offered advanced network services. This thesis addresses the problem of defining a Service Oriented Architecture and its relevant building blocks, protocols and languages. In particular, this work focuses on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with a Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to promote the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. Both technologies promise to provide all-optical burst or packet switching instead of the current circuit switching. However, the electronic domain is still present in the scheduler forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both memory and forwarding logic is needed. This open issue is addressed in this thesis by proposing a highly efficient implementation of the burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
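The abstract reduces channel scheduling to a min/max computation. A minimal horizon-based interpretation of that idea (in the spirit of LAUC-style burst schedulers; the data structures, parameters and dropping policy below are illustrative assumptions, not the hardware implementation developed in the thesis) is sketched here.

```python
def schedule_burst(horizons, arrival, duration):
    """Pick the wavelength channel whose horizon (the time it becomes free) is
    the latest one still <= the burst arrival: a single max over the eligible
    channels, which minimises the void created before the burst. Returns the
    chosen channel and updated horizons, or None if the burst must be dropped."""
    eligible = [(h, ch) for ch, h in enumerate(horizons) if h <= arrival]
    if not eligible:
        return None, horizons
    _, best = max(eligible)                 # latest-available-channel rule
    horizons = list(horizons)
    horizons[best] = arrival + duration     # channel busy until the burst ends
    return best, horizons

horizons = [0.0, 0.0, 0.0, 0.0]             # four wavelength channels, all idle
for arrival, duration in [(1.0, 2.0), (1.5, 1.0), (2.0, 0.5), (2.2, 1.0)]:
    ch, horizons = schedule_burst(horizons, arrival, duration)
    print(f"burst@{arrival}: channel {ch}, horizons -> {horizons}")
```

Because each decision is a single min/max scan over a fixed number of channels, its cost is essentially independent of the traffic load, which is the property the abstract highlights.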