921 results for Adaptive design, D-optimal design, MCMC, Pharmacokinetics
Abstract:
Current SoC design trends are characterized by the integration of a larger number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements. In this scenario, networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M²AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To assess the efficiency of our approach, we compare our results with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications on the same SoC. The experimental results show that M²AIA reduces power consumption and latency on average by 27.3% and 42.1%, respectively, compared to the branch-and-bound approach, and by 29.3% and 36.1% compared to the genetic approach.
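Since the abstract compares candidate mappings on the two objectives of latency and power, a minimal sketch of the Pareto-dominance test that underlies such multi-objective comparisons may help; the candidate mappings and their objective values below are hypothetical, and this is not the authors' M²AIA implementation.

    # Pareto dominance over (latency, power), both to be minimized.
    def dominates(a, b):
        """True if a is no worse than b in every objective and strictly better in one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    # Hypothetical candidate IP mappings with (latency, power) values:
    candidates = {"m1": (42.0, 3.1), "m2": (40.0, 3.5), "m3": (45.0, 3.9)}
    front = [n for n, obj in candidates.items()
             if not any(dominates(o, obj) for m, o in candidates.items() if m != n)]
    print(front)  # ['m1', 'm2']: m3 is dominated by m1

An immune or genetic algorithm of the kind compared in the paper typically maintains such a non-dominated front as its archive of best latency/power trade-offs.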
Abstract:
Background: Atrial fibrillation is a serious public health problem, posing a considerable burden not only to patients but also to the healthcare environment, due to high rates of morbidity, mortality, and medical resource utilization. There are limited data on the variation in treatment practice patterns across different countries and healthcare settings, and on the associated health outcomes. Methods/design: RHYTHM-AF was a prospective observational multinational study of the management of recent-onset atrial fibrillation patients considered for cardioversion, designed to collect data on international treatment patterns and short-term outcomes related to cardioversion. We present data collected in 10 countries between May 2010 and June 2011. Enrollment was ongoing in Italy and Brazil at the time of data analysis. Data were collected at the time of the atrial fibrillation episode in all countries (Australia, Brazil, France, Germany, Italy, Netherlands, Poland, Spain, Sweden, United Kingdom), and cumulative follow-up data were collected at day 60 (+/- 10) in all but Spain. Information was collected on center characteristics, enrollment data, patient demographics, details of the atrial fibrillation episode, medical history, diagnostic procedures, acute treatment of atrial fibrillation, discharge information, and follow-up data on major events and rehospitalizations up to day 60. Discussion: A total of 3940 patients were enrolled from 175 acute care centers. 70.5% of the centers were either academic (44%) or teaching (26%) hospitals, with an overall median capacity of 510 beds. The sites were mostly specialized, with anticoagulation (65.9%), heart failure (75.1%) and hypertension (60.1%) clinics available. The RHYTHM-AF registry will provide insight into the regional variability of antiarrhythmic and antithrombotic treatment of atrial fibrillation, the appropriateness of such treatments with respect to outcomes, and their cost-efficacy. These observations will help inform strategies to improve cardiovascular outcomes in patients with atrial fibrillation.
Abstract:
We report herein the synthesis and trypanocidal profile of new (E)-cinnamic N-acylhydrazones (NAHs), designed by exploiting molecular hybridization between the potent cruzain inhibitors (E)-1-(benzo[d][1,3]dioxol-5-yl)-3-(4-bromophenyl)prop-2-en-1-one and (E)-3-hydroxy-N'-((2-hydroxynaphthalen-1-yl)methylene)-7-methoxy-2-naphthohydrazide. These derivatives were evaluated against both amastigote and trypomastigote forms of Trypanosoma cruzi and led us to identify two compounds that were approximately two times more active than the reference drug, benznidazole, with good cytotoxicity indices. Although designed as cruzain inhibitors, the weak potency displayed by the best cinnamyl NAH derivatives indicated that another mechanism of action was likely responsible for their trypanocidal activity. (C) 2012 Elsevier Masson SAS. All rights reserved.
Abstract:
This study presents an economic optimization method to design telescopic (multidiameter) irrigation laterals with regularly spaced outlets. The proposed analytical hydraulic solution was validated by means of a pipeline composed of three different diameters. The minimum acquisition cost of the telescopic pipeline was determined by an ideal arrangement of the lengths and respective diameters of each of the three segments. The mathematical optimization method, based on Lagrange multipliers, provides a strategy for finding the maximum or minimum of a function subject to certain constraints. In this case, the objective function describes the acquisition cost of the pipes, and the constraints are determined from hydraulic parameters such as the length of the irrigation lateral and the total head loss permitted. The developed analytical solution provides the ideal combination of each pipe segment length and respective diameter, resulting in a decreased acquisition cost.
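As a hedged illustration of the kind of constrained formulation the abstract describes (the symbols are mine, not the authors'): with segment lengths L_i, unit pipe costs c_i, total lateral length L_tot, and total permitted head loss h_{f,max}, the problem can be posed via the Lagrangian

    \mathcal{L}(L,\lambda,\mu) = \sum_{i=1}^{3} c_i L_i
        + \lambda \Big( \sum_{i=1}^{3} L_i - L_{\mathrm{tot}} \Big)
        + \mu \big( h_f(L_1, L_2, L_3) - h_{f,\max} \big)

whose stationarity conditions, \partial\mathcal{L}/\partial L_i = c_i + \lambda + \mu\,\partial h_f / \partial L_i = 0 (one per segment), combined with the two constraints, determine the optimal segment lengths. Note that h_f is a nonlinear function of the lengths, since the flow rate in a lateral decreases past each outlet.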
Abstract:
This paper proposes two new approaches to the sensitivity analysis of multiobjective design optimization problems whose performance functions are highly susceptible to small variations in the design variables and/or design environment parameters. In both methods, the less sensitive design alternatives are preferred over others during the multiobjective optimization process. In the first approach, the designer chooses the design variable and/or parameter that causes the uncertainties, then associates a robustness index with each design alternative and adds each index as an objective function in the optimization problem. In the second approach, the designer must know, a priori, the interval of variation of the design variables or of the design environment parameters, because the designer will be accepting the resulting interval of variation of the objective functions. The second method does not require any probability distribution law for the uncontrollable variations. Finally, the authors give two illustrative examples to highlight the contributions of the paper.
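The paper's actual robustness index is not specified in the abstract; as a hedged sketch of the idea, the index below scores a design by the worst-case relative change of an objective under small perturbations of each design variable, and would then be minimized as an extra objective. The objective function and the design point are hypothetical.

    import numpy as np

    def robustness_index(f, x, delta=0.01):
        """Worst-case relative change of f over +/-delta perturbations of each design variable."""
        x = np.asarray(x, dtype=float)
        f0 = f(x)
        worst = 0.0
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                xp = x.copy()
                xp[i] += sign * delta * max(abs(x[i]), 1.0)
                worst = max(worst, abs(f(xp) - f0) / max(abs(f0), 1e-12))
        return worst  # appended as an additional objective to be minimized

    f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * np.sin(5.0 * x[1])
    print(robustness_index(f, [1.2, 0.3]))  # larger value = more sensitive design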
Abstract:
In deterministic optimization, the uncertainties of the structural system (i.e., dimensions, model, material, loads, etc.) are not explicitly taken into account. Hence, the resulting optimal solutions may lead to reduced reliability levels. The objective of reliability-based design optimization (RBDO) is to optimize structures while guaranteeing that a minimum level of reliability, chosen a priori by the designer, is maintained. Since reliability analysis using the First Order Reliability Method (FORM) is an optimization procedure itself, RBDO (in its classical version) is a double-loop strategy: the reliability analysis (inner loop) and the structural optimization (outer loop). The coupling of these two loops leads to very high computational costs. To reduce the computational burden of RBDO based on FORM, several authors propose decoupling the structural optimization and the reliability analysis. These procedures may be divided into two groups: (i) serial single-loop methods and (ii) unilevel methods. The basic idea of serial single-loop methods is to decouple the two loops and solve them sequentially, until some convergence criterion is achieved. Unilevel methods, on the other hand, employ different strategies to obtain a single optimization loop that solves the RBDO problem. This paper presents a review of such RBDO strategies. A comparison of the performance (computational cost) of the main strategies is presented for several variants of two benchmark problems from the literature and for a structure modeled using the finite element method.
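A minimal sketch of the classical double loop may make the cost issue concrete; this is the generic textbook structure, not any of the specific implementations reviewed in the paper. The inner loop runs the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration of FORM in standard normal space; the outer loop is an ordinary optimizer constrained by the resulting reliability index beta. The toy limit state is mine.

    import numpy as np
    from scipy.optimize import minimize

    def form_beta(g, grad_g, n, it_max=50, tol=1e-8):
        """Inner loop: HL-RF iteration; returns beta = ||u*|| at the design point."""
        u = np.zeros(n)
        for _ in range(it_max):
            gr = grad_g(u)
            u_new = ((gr @ u - g(u)) / (gr @ gr)) * gr
            if np.linalg.norm(u_new - u) < tol:
                return np.linalg.norm(u_new)
            u = u_new
        return np.linalg.norm(u)

    # Toy example: standard normal loads U1, U2 and limit state g = d - (U1 + U2),
    # so beta(d) = d / sqrt(2) analytically. Outer loop: minimize the "size" d
    # subject to beta(d) >= 3. Expected optimum: d = 3 * sqrt(2) ~= 4.243.
    beta = lambda d: form_beta(lambda u: d[0] - u.sum(), lambda u: -np.ones(2), 2)
    res = minimize(lambda d: d[0], x0=[5.0], bounds=[(0.1, 10.0)],
                   constraints={"type": "ineq", "fun": lambda d: beta(d) - 3.0})
    print(res.x)  # ~[4.243]; note one full FORM run per outer-loop evaluation

The nesting is visible in the last lines: every evaluation of the outer constraint triggers a complete inner FORM run, which is exactly the cost that serial single-loop and unilevel methods try to avoid.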
Abstract:
The use of composite materials such as fiber-reinforced concrete is becoming increasingly frequent and widespread. However, the choice of new materials requires a thorough analysis of their characteristics and behavior. The advantages provided by adding steel fibers to a brittle material such as concrete lie in improved ductility and increased energy absorption. The addition of fibers therefore improves the structural behavior of the composite, giving rise to a new material capable of working not only in compression but also, to a small extent, in tension, and above all characterized by appreciable ductility and good plastic capacity. The aim of this thesis was the analysis of the characteristics of these fiber-reinforced cementitious composites. Starting from classical experimental tests such as tension and compression tests, the materials were characterized through an experimental campaign based on the application of the UNI 11039/2003 standard. The main objective of this work is to analyze and compare concretes reinforced with fibers of two different lengths and at different dosages. By studying these concretes, we sought to understand these materials better and to find practical confirmation of the behavior described in well-established theories. The comparison of the test results made it possible to highlight differences between materials reinforced with short fibers and those with long fibers, but also to show and underline the analogies that characterize these fiber-reinforced materials. The aspects related to the production of these materials were also addressed, both from a theoretical and from a practical point of view. Finally, an analytical model based on the definition of specific stress-strain diagrams was developed; the results of this model were then compared with the experimental data obtained in the laboratory.
Abstract:
Herpes simplex virus 1 (HSV-1) infects oral epithelial cells, then spreads to the nerve endings and establishes latency in sensory ganglia, from where it may, or may not, reactivate. Diseases caused by virus reactivation include mild conditions such as muco-cutaneous lesions, as well as more severe and even life-threatening encephalitis, or systemic infections affecting diverse organs. Herpes simplex virus represents the most comprehensive example of virus-receptor interaction in the Herpesviridae family, and the prototype virus encoding multipartite entry genes. In fact, it encodes 11-12 glycoproteins and a number of additional membrane proteins; five of these proteins play key roles in virus entry into susceptible cells. Thus, glycoprotein B (gB) and glycoprotein C (gC) interact with heparan sulfate proteoglycan to enable initial attachment to cell surfaces. In the next step in the entry cascade, gD binds a specific surface receptor such as nectin1 or HVEM. The interaction of glycoprotein D with the receptor alters the conformation of gD to enable the activation of gB, glycoprotein H, and glycoprotein L, a trio of glycoproteins that execute the fusion of the viral envelope with the plasma membrane. In this thesis, I describe two distinct projects: I. The retargeting of viral tropism for the design of oncolytic Herpesviruses: • capable of infecting cells through the human epidermal growth factor receptor 2 (HER2), which is overexpressed in highly malignant mammary and ovarian tumors and correlates with a poor prognosis; • detargeted from its natural receptors, HVEM and nectin1. To this end, we inserted a ligand to HER2 in gD. Because HER2 has no natural ligand, the selected ligand was a single-chain antibody (scFv) derived from MAb4D5 (a monoclonal antibody to HER2), herein designated scHER2. All recombinant viruses were targeted to the HER2 receptor, but only two viruses (R-LM113 and R-LM249) were completely detargeted from HVEM and nectin1. To engineer R-LM113, we removed a large portion of the N-terminus of gD (from aa 6 to aa 38) and inserted the scHER2 sequence plus a 9-aa serine-glycine flexible linker at position 39. On the other hand, to engineer R-LM249, we replaced the Ig-folded core of gD (from aa 61 to aa 218) with scHER2 flanked by Ser-Gly linkers. In summary, these results provide evidence that: i. gD can tolerate an insert almost as big as gD itself; ii. the Ig-like domain of gD can be removed; iii. the large portion at the N-terminus of gD (from aa 6 to aa 38) can be removed without loss of key function; iv. the R-LM113 and R-LM249 recombinants are ready to be assayed in animal models of mammary and ovarian tumors. This finding and the availability of a large number of scFvs greatly increase the collection of potential receptors to which HSV can be redirected. II. The production and purification of a recombinant truncated form of the heterodimer gHgL. We cloned a stable insect cell line expressing a soluble form of gH in complex with gL under the control of a metalloprotein-inducible promoter, and purified the heterodimer by means of the ONE-STrEP-tag system by IBA. With respect to biological function, the purified heterodimer is capable: • of reacting with antibodies that recognize conformation-dependent epitopes and neutralize virion infectivity; • of binding a variety of cells at the cell surface. No doubt, the availability of the biologically active purified gHgL heterodimer, in sufficient quantities, will speed up the efforts to solve its crystal structure and make it feasible to identify more clearly whether gHgL has a cellular partner, and what the role of this interaction is in virus entry.
Abstract:
We present a new strategy, based on the meccano method [1, 2, 3], to construct a T-spline parameterization of 2D geometries for the application of isogeometric analysis. The proposed method demands only a boundary representation of the geometry as input data. The algorithm obtains, as a result, a high-quality parametric transformation between 2D objects and the parametric domain, the unit square. The key to the method lies in defining an isomorphic transformation between the parametric and physical T-meshes, finding the optimal position of the interior nodes by applying a new T-mesh untangling and smoothing procedure. The bivariate T-spline representation is calculated by imposing interpolation conditions on points located both in the interior and on the boundary of the geometry…
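The abstract's untangling and smoothing procedure optimizes interior node positions; as a loose, hedged illustration of the simplest member of that family (not the meccano method's actual quality-based objective), plain Laplacian smoothing moves each free interior node to the centroid of its neighbors while boundary nodes stay fixed.

    import numpy as np

    def laplacian_smooth(coords, neighbors, interior, iters=20):
        """coords: (n, 2) node positions; neighbors[i]: indices adjacent to node i;
        interior: indices of nodes free to move (boundary nodes stay fixed)."""
        pts = coords.copy()
        for _ in range(iters):
            new = pts.copy()
            for i in interior:
                new[i] = pts[neighbors[i]].mean(axis=0)
            pts = new
        return pts

    # Hypothetical 3x3 grid with the center node badly misplaced:
    pts = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1.8, 1.9], [2, 1],
                    [0, 2], [1, 2], [2, 2]], dtype=float)
    nbrs = {4: [1, 3, 5, 7]}
    print(laplacian_smooth(pts, nbrs, interior=[4])[4])  # -> [1. 1.]

Laplacian smoothing can itself produce tangled elements on concave domains, which is precisely why untangling-aware procedures such as the one the abstract mentions are needed.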
Abstract:
Chemists have long sought to extrapolate the power of biological catalysis and recognition to synthetic systems. These efforts have focused largely on low molecular weight catalysts and receptors; however, biological systems themselves rely almost exclusively on polymers, proteins and RNA, to perform complex chemical functions. Proteins and RNA are unique in their ability to adopt compact, well-ordered conformations, and specific folding provides precise spatial orientation of the functional groups that comprise the “active site”. These features suggest that the identification of new polymer backbones with discrete and predictable folding propensities (“foldamers”) will provide a basis for the design of molecular machines with unique capabilities. The foldamer approach complements current efforts to design unnatural properties into polypeptides and polynucleotides. The aim of this thesis is the synthesis and conformational study of new classes of foldamers, using a peptidomimetic approach. Moreover, their suitability for use as ionophores, catalysts, and nanobiomaterials was analyzed in solution and in the solid state. This thesis is divided into thematic chapters, which are outlined below. It begins with a very general introduction (page 4) which is useful, but not strictly necessary, to the expert reader. It is worth mentioning that paragraph I.3 (page 22) is the starting point of this work and paragraph I.5 (page 32) is required to better understand the results of chapters 4 and 5. Chapter 1 (page 39) reports the synthesis and conformational analysis of a novel class of foldamers containing (S)-β3-homophenylglycine [(S)-β3-hPhg] and D-4-carboxy-oxazolidin-2-one (D-Oxd) residues in alternating order. The experimental conformational analysis, performed in solution by IR, 1H NMR, and CD spectroscopy, unambiguously proved that these oligomers fold into ordered structures with increasing sequence length. Theoretical calculations employing ab initio MO theory suggest a helix with 11-membered hydrogen-bonded rings as the preferred secondary structure type. The novel structures enrich the field of peptidic foldamers and might be useful in the mimicry of native peptides. In chapter 2, cyclo-(L-Ala-D-Oxd)3 and cyclo-(L-Ala-D-Oxd)4 were prepared in the liquid phase with good overall yields and were utilized for the chelation of divalent ions (Ca2+, Mg2+, Cu2+, Zn2+ and Hg2+); their chelation ability was analyzed by ESI-MS, CD and 1H NMR techniques, and the best results were obtained with cyclo-(L-Ala-D-Oxd)3 and Mg2+ or Ca2+. Chapter 3 describes an application of oligopeptides as catalysts for aldol reactions. Paragraph 3.1 concerns the use of prolinamides as catalysts of the cross-aldol addition of hydroxyacetone to aromatic aldehydes, whereas paragraphs 3.2 and 3.3 are about the catalyzed aldol addition of acetone to isatins. By means of DFT and AIM calculations, the steric and stereoelectronic effects that control the enantioselectivity in the cross-aldol addition of acetone to isatin catalysed by L-proline have been studied, also in the presence of small quantities of water. Chapter 4 reports the synthesis and analysis of a new fiber-like material, obtained from the self-aggregation of the dipeptide Boc-L-Phe-D-Oxd-OBn, which spontaneously forms uniform fibers consisting of parallel infinite linear chains arising from single intermolecular N-H···O=C hydrogen bonds. This is the absolute borderline case of a parallel β-sheet structure. Longer oligomers of the same series, with general formula Boc-(L-Phe-D-Oxd)n-OBn (where n = 2-5), are described in chapter 5. Their properties in solution and in the solid state were analyzed, in correlation with their propensity to form intramolecular hydrogen bonds. Chapter 6 reports the synthesis of imidazolidin-2-one-4-carboxylate and (tetrahydro)-pyrimidin-2-one-5-carboxylate via an efficient modification of the Hofmann rearrangement. The reaction affords the desired compounds from protected asparagine or glutamine in good to high yield, using PhI(OAc)2 as the source of iodine(III).
Abstract:
The running innovation processes of the microwave transistor technologies used in the implementation of microwave circuits have to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology's potential. After the choice of the technology to be used in the particular application, the circuit designer has few degrees of freedom when carrying out the design; in most cases, due to technological constraints, all the foundries develop and provide customized processes optimized for a specific performance such as power, low noise, linearity, broadband operation, etc. For these reasons circuit design is always a “compromise”, a search for the best trade-off between the desired performances. This approach becomes crucial in the design of microwave systems to be used in satellite applications; the tight space constraints impose reaching the best performance under properly de-rated electrical and thermal conditions, with respect to the maximum ratings provided by the used technology, in order to ensure adequate levels of reliability. In particular, this work is about one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main power dissipation source and thus the element that weighs most heavily on the space, weight and cost of telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many transactions and publications describe different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. Starting from existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories of power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control of the dynamic Load Line and its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the Load Line at the intrinsic terminals of the selected active device. The methodology proposed in this first part is based on the assumption that the designer has an accurate electrical model of the device available; the variety of publications on this subject demonstrates how difficult it is to produce a CAD model capable of taking into account all the non-ideal phenomena which occur when the amplifier operates at such high frequency and power levels. For this reason, especially for the emerging technology of Gallium Nitride (GaN), the second section describes a new approach to power amplifier design, based on the experimental characterization of the intrinsic Load Line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of developing my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programs requested by space agencies, with the aim of supporting the technological transfer from universities to the industrial world and of promoting science-based entrepreneurship. For these reasons the proposed design methodology will be explained on the basis of many experimental results.
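For orientation, the classical load-line estimate from power amplifier theory, which is in the spirit of the load-line shaping discussed above but is not claimed to be this thesis's exact procedure: for a class-A stage with supply voltage V_{DC}, knee voltage V_k, and maximum drain current I_{max}, the optimum intrinsic load resistance is

    R_{opt} = \frac{2\,(V_{DC} - V_k)}{I_{\max}}

so that the dynamic load line swings the full available voltage range 2(V_{DC} - V_k) exactly as the current traverses 0 to I_{max}, maximizing output power within the device limits. De-rating for reliability, as required in space applications, amounts to shrinking the allowed voltage and current excursions before applying such a load-line condition.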
Abstract:
The aim of this Ph.D. project has been the design and characterization of new and more efficient luminescent tools, in particular sensors and labels, for analytical chemistry, medical diagnostics and imaging. Indeed, both the increasing temporal and spatial resolution demanded by those branches, coupled with the sensitivity required to reach single-molecule resolution, can be provided by the wide range of techniques based on luminescence spectroscopy. As far as the development of new chemical sensors is concerned, as chemists we were interested in the preparation of new, efficient sensing materials. In this context, we kept developing new molecular chemosensors, by exploiting the supramolecular approach, for different classes of analytes. In particular, we studied a family of luminescent tetrapodal hosts based on aminopyridinium units with pyrenyl groups for the detection of anions. These systems exhibited noticeable changes in their photophysical properties depending on the nature of the anion; in particular, addition of chloride resulted in a conformational change, giving an initial increase in excimeric emission. A good selectivity for dicarboxylic acids was also found. In the search for higher sensitivities, we also moved our attention to systems able to perform amplification effects. In this context, we described the metal-ion binding properties of three photoactive poly-(arylene ethynylene) co-polymers with different complexing units, and we highlighted, for one of them, a ten-fold amplification of the response upon addition of Zn2+, Cu2+ and Hg2+ ions. In addition, we were able to demonstrate the formation of complexes with Yb3+ and Er3+ and an efficient sensitization of their typical metal-centered NIR emission upon excitation of the polymer structure, a feature of particular interest for possible applications in optical imaging and in optical amplification for telecommunication purposes. An amplification effect was also observed during this research in silica nanoparticles derivatized with a suitable zinc probe. In this case we were able to prove, for the first time, that nanoparticles can work as “off-on” chemosensors with signal amplification. Fluorescent silica nanoparticles can thus be seen as innovative multicomponent systems in which the organization of photophysically active units gives rise to fruitful collective effects. These precious effects can be exploited for biological imaging, medical diagnostics and therapeutics, as evidenced also by some results reported in this thesis. In particular, the observed amplification effect has been obtained thanks to a suitable organization of molecular probe units on the surface of the nanoparticles. In the effort to gain deeper insight into the mechanisms which lead to the final amplification effects, we also attempted to find a correlation between the synthetic route and the final organization of the active molecules in the silica network, and thus with the mutual interactions between them which result in the emerging, collective behavior responsible for the desired signal amplification. In this context, we first investigated the process of formation of silica nanoparticles doped with a pyrene derivative and showed that the dyes are not uniformly dispersed inside the silica matrix; thus, core-shell structures can be formed spontaneously in a one-step synthesis. Moreover, as far as the design of new labels is concerned, we report a new synthetic approach to obtain a class of robust, biocompatible silica core-shell nanoparticles with long-term stability. Taking advantage of this new approach, we also present the synthesis and photophysical properties of core-shell NIR-absorbing and NIR-emitting materials that proved to be very valuable for in-vivo imaging. In general, the dye-doped silica nanoparticles prepared in the framework of this project combine unique properties, such as very high brightness, due to the possibility of including many fluorophores per nanoparticle, high stability, because of the shielding effect of the silica matrix, and, to date, no toxicity, with a simple and low-cost preparation. All these features make these nanostructures suitable for reaching the low detection limits that are nowadays required for effective clinical and environmental applications, fulfilling in this way the initial expectations of this research project.
Abstract:
In this thesis we study three combinatorial optimization problems belonging to the classes of Network Design and Vehicle Routing problems, which are strongly linked in the context of the design and management of transportation networks: the Non-Bifurcated Capacitated Network Design Problem (NBP), the Period Vehicle Routing Problem (PVRP) and the Pickup and Delivery Problem with Time Windows (PDPTW). These problems are NP-hard and contain as special cases some well-known difficult problems such as the Traveling Salesman Problem and the Steiner Tree Problem. Moreover, they model the core structure of many practical problems arising in logistics and telecommunications. The NBP is the problem of designing the optimum network to satisfy a given set of traffic demands. Given a set of nodes, a set of potential links and a set of point-to-point demands called commodities, the objective is to select the links to install and dimension their capacities so that all the demands can be routed between their respective endpoints, and the sum of link fixed costs and commodity routing costs is minimized. The problem is called non-bifurcated because the solution network must allow each demand to follow a single path, i.e., the flow of each demand cannot be split. Although this is the case in many real applications, the NBP has received significantly less attention in the literature than other capacitated network design problems that allow bifurcation. We describe an exact algorithm for the NBP, based on solving with an integer programming solver a formulation of the problem strengthened by simple valid inequalities, together with four new heuristic algorithms. One of these heuristics is an adaptive memory metaheuristic, based on partial enumeration, that could be applied to a wider class of structured combinatorial optimization problems. In the PVRP, a fleet of vehicles of identical capacity must be used to service a set of customers over a planning period of several days. Each customer specifies a service frequency, a set of allowable day-combinations and a quantity of product that the customer must receive every time he is visited. For example, a customer may require to be visited twice during a 5-day period, imposing that these visits take place on Monday-Thursday or Monday-Friday or Tuesday-Friday. The problem consists in simultaneously assigning a day-combination to each customer and designing the vehicle routes for each day so that each customer is visited the required number of times, the number of routes on each day does not exceed the number of vehicles available, and the total cost of the routes over the period is minimized. We also consider a tactical variant of this problem, called the Tactical Planning Vehicle Routing Problem, where customers require to be visited on a specific day of the period but a penalty cost, called the service cost, can be paid to postpone the visit to a later day than that required. To our knowledge, all the algorithms proposed in the literature for the PVRP are heuristics. In this thesis we present, for the first time, an exact algorithm for the PVRP, based on different relaxations of a set partitioning-like formulation. The effectiveness of the proposed algorithm is tested on a set of instances from the literature and on a new set of instances. Finally, the PDPTW consists in servicing a set of transportation requests using a fleet of identical vehicles of limited capacity located at a central depot. Each request specifies a pickup location and a delivery location, and requires that a given quantity of load be transported from the pickup location to the delivery location. Moreover, each location can be visited only within an associated time window. Each vehicle can perform at most one route, and the problem is to satisfy all the requests using the available vehicles so that each request is serviced by a single vehicle, the load on each vehicle does not exceed its capacity, and all locations are visited according to their time windows. We formulate the PDPTW as a set partitioning-like problem with additional cuts, and we propose an exact algorithm based on different relaxations of the mathematical formulation and a branch-and-cut-and-price algorithm. The new algorithm is tested on two classes of problems from the literature and compared with a recent branch-and-cut-and-price algorithm from the literature.
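Both exact algorithms above build on set partitioning-like formulations; for orientation, here is the generic route-based set partitioning model (notation mine, and the problem-specific cuts and day-combination variables the thesis mentions are omitted). With R the set of feasible routes, c_r the cost of route r, a_{ir} = 1 if route r serves customer i, and m available vehicles:

    \min \sum_{r \in R} c_r x_r
    \quad \text{s.t.} \quad \sum_{r \in R} a_{ir} x_r = 1 \;\; \forall i, \qquad
    \sum_{r \in R} x_r \le m, \qquad x_r \in \{0, 1\}

Since |R| grows exponentially with the number of customers, such models are handled by relaxation and column generation inside branch-and-cut-and-price, pricing out routes of negative reduced cost, which matches the algorithmic machinery named in the abstract.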
Abstract:
The following Ph.D. work was mainly focused on catalysis, as a key technology, to achieve the objectives of sustainable (green) chemistry. After introducing the concepts of sustainable (green) chemistry and an assessment of new sustainable chemical technologies, the relationship between catalysis and sustainable (green) chemistry is briefly discussed and illustrated via an analysis of some selected, relevant examples. Afterwards, as a continuation of the ongoing interest of Dr. Marco Bandini's group in organometallic and organocatalytic processes, I addressed my efforts to the design and development of novel catalytic green methodologies for the synthesis of enantiomerically enriched molecules. In the first two projects, attention was focused on the employment of solid supports to carry out reactions that still remain a prerogative of homogeneous catalysis. Firstly, particular emphasis was placed on the discovery of catalytic enantioselective variants of the nitroaldol condensation (commonly termed the Henry reaction), using a complex consisting of polyethylene-supported diamino thiophene (DATx) ligands and copper as the active species. In the second project, a new class of surfaces electrochemically modified with DATx palladium complexes was presented. The DATx-graphite system proved to be efficient in promoting the Suzuki reaction. Moreover, in collaboration with Prof. Wolf at the University of British Columbia (Vancouver), cyclic voltammetry studies were reported. This study disclosed new opportunities for carbon–carbon bond-forming processes using heterogeneous, electrodeposited catalyst films. A straightforward metal-free catalysis allowed the exploration of the world of organocatalysis. In fact, three different and novel methodologies, using Cinchona, guanidine and phosphine derivatives, were envisioned in the three following projects. An interesting variant of the nitroaldol condensation with simple trifluoromethyl ketones, and also their application in a non-conventional activation of indolyl cores by Friedel-Crafts functionalization, led to two novel synthetic protocols. These approaches allowed the preparation of synthetically useful trifluoromethyl derivatives bearing quaternary stereocenters. Lastly, in the sixth project, the first γ-alkylation of allenoates with conjugated carbonyl compounds was envisioned. The last part of this Ph.D. thesis, based on an extraordinary collaboration with Prof. Balzani and Prof. Gigli, involved the synthesis and characterization of a new type of heteroleptic cyclometalated Ir(III) complexes bearing bis-oxazolines (BOXs) as ancillary ligands. The new heteroleptic complexes were fully characterized and, in order to examine the electroluminescent properties of FIrBOX(CH2), an Organic Light Emitting Device was realized.
Abstract:
Recently, an ever-increasing degree of automation has been observed in most industrial automation processes. This increase is motivated by the higher requirements on systems with great performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This trend in the growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as constituted by a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottles of water or soda, or buy products in boxes, such as food or cigarettes, and so on. Another indication of their complexity derives from the fact that the consortium of machine producers has estimated around 350 types of manufacturing machines. A large number of manufacturing machine industries are present in Italy, notably the packaging machine industry; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often, this is the case in large-scale systems, organized in a modular and distributed manner. Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices operated in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent to the automation of the machine cycles, the supervisory system is called on to perform other main functions such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and to the different operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; directing the operator devoted to the machine to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing in real time information on diagnostics, as a support to the maintenance operations of the machine. The kind of facilities that designers can directly find on the market, in terms of software component libraries, in fact provides adequate support as regards the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices.
What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to specific needs. Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. Traditionally, in the field of analog and digital control, design and verification through formal and simulation tools have been adopted for a long time, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In industrial automation software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies. Industrial automation has been receiving this approach lately, as testified by the IEC standards IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. During the last years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, in complex systems fault occurrences increase.
This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS, together with reliable mechanical elements, an increasing number of electronic devices are also present, which are more vulnerable by their own nature. The diagnosis and fault isolation problem in a generic dynamical system consists in the design of an elaboration unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance. The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, an important improvement to formal verification of logic control, fault diagnosis and fault-tolerant control results derives from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2, a survey on the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5, a new approach based on Discrete Event Systems is presented for the problem of software formal verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader in understanding some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
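Since the formal-verification approach of Chapter 5 rests on Discrete Event Systems, a toy illustration of the kind of check involved may help; the three-state machine and its events below are hypothetical, not taken from the thesis, and real DES verification uses far richer models (observability, diagnosers, supervisory control).

    from collections import deque

    def reachable(trans, init, enabled):
        """States reachable from `init` using only `enabled` events.
        trans: dict mapping (state, event) -> next state."""
        seen, frontier = {init}, deque([init])
        while frontier:
            s = frontier.popleft()
            for (q, e), q2 in trans.items():
                if q == s and e in enabled and q2 not in seen:
                    seen.add(q2)
                    frontier.append(q2)
        return seen

    # Hypothetical machine cycle: idle -start-> busy -done-> idle, busy -fault-> failed.
    trans = {("idle", "start"): "busy", ("busy", "done"): "idle",
             ("busy", "fault"): "failed"}
    # Safety check: with the 'fault' event disabled (e.g., by an interlock),
    # the forbidden state 'failed' must be unreachable.
    print("failed" in reachable(trans, "idle", {"start", "done"}))  # False

Verifying that a forbidden state is unreachable under a given set of enabled events is the discrete-event analogue of the closed-loop guarantees sought in analog/digital control, and fault diagnosis extends the same machinery to inferring the occurrence of unobservable fault events.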