20 results for evaluation design
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The scaling down of transistor technology allows microelectronics manufacturers such as Intel and IBM to build ever more sophisticated systems on a single microchip. The classical interconnection solutions based on shared buses or direct connections between the modules of the chip are becoming obsolete, as they struggle to sustain the increasingly tight bandwidth and latency constraints that these systems demand. The most promising solution for future chip interconnects is the Network on Chip (NoC). NoCs are networks composed of routers and channels used to interconnect the different components installed on a single microchip. Examples of advanced processors based on NoC interconnects are the IBM Cell processor, composed of eight CPUs and installed in the Sony PlayStation 3, and the Intel Teraflops project, composed of 80 independent (simple) microprocessors. On-chip integration is becoming popular not only in the Chip Multi Processor (CMP) research area but also in the wider and more heterogeneous world of Systems on Chip (SoC). SoCs comprise all the electronic devices that surround us, such as cell phones, smartphones, home embedded systems, automotive systems, set-top boxes, etc. SoC manufacturers such as ST Microelectronics, Samsung and Philips, and also universities such as Bologna University, M.I.T. and Berkeley, are all proposing proprietary frameworks based on NoC interconnects. These frameworks help engineers switch design methodology and speed up the development of new NoC-based systems on chip. In this thesis we give an introduction to CMP and SoC interconnection networks. Then, focusing on SoC systems, we propose:
• a detailed simulation-based analysis of the Spidergon NoC, an ST Microelectronics solution for SoC interconnects. The Spidergon NoC differs from many classical solutions inherited from the parallel computing world. We give a detailed analysis of this NoC topology and its routing algorithms; furthermore, we propose a new routing algorithm designed to optimize the use of the network's resources while also increasing its performance;
• a methodology flow based on modified publicly available tools that, combined, can be used to design, model and analyze any kind of System on Chip;
• a detailed analysis of an ST Microelectronics proprietary transport-level protocol that the author of this thesis helped to develop;
• a comprehensive simulation-based comparison of different network interface designs proposed by the author and the researchers at the AST lab, in order to integrate shared-memory and message-passing based components on a single System on Chip;
• a powerful and flexible solution to the timing closure issue in the design of synchronous Networks on Chip. Our solution is based on relay-station repeaters and reduces the power and area demands of NoC interconnects while also reducing their buffer needs;
• a solution to simplify the design of NoCs while also increasing their performance and reducing their power and area consumption. We propose to replace complex and slow virtual channel-based routers with multiple, flexible, small Multi Plane ones. This solution allows us to reduce the area and power dissipation of any NoC while also increasing its performance, especially when resources are scarce.
This thesis was written in collaboration with the Advanced System Technology laboratory in Grenoble, France, and the Computer Science Department at Columbia University in the City of New York.
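The Spidergon topology's ring-plus-cross-links structure admits a simple shortest-path routing rule. As a hedged illustration (this is the commonly described across-first rule, with node numbering chosen for illustration; it is not the new algorithm the thesis proposes):

```python
def spidergon_next_hop(src, dst, n):
    """Next hop from src toward dst on a Spidergon of n nodes (n even).

    Each node i links to (i + 1) % n (clockwise), (i - 1) % n
    (counterclockwise) and (i + n // 2) % n (across).  Across-first:
    take the cross link when the destination lies roughly opposite,
    otherwise move along the ring in the shorter direction.
    """
    d = (dst - src) % n
    if d == 0:
        return src                    # already at destination
    if d <= n // 4:
        return (src + 1) % n          # short clockwise arc
    if d >= n - n // 4:
        return (src - 1) % n          # short counterclockwise arc
    return (src + n // 2) % n         # jump across the ring


def spidergon_route(src, dst, n):
    """Full hop-by-hop path from src to dst."""
    path = [src]
    while path[-1] != dst:
        path.append(spidergon_next_hop(path[-1], dst, n))
    return path
```

After at most one across hop, the remaining distance is within a quarter ring, so every path uses at most one cross link followed by ring hops.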
Abstract:
Technology advances in recent years have dramatically changed the way users exploit content and services available on the Internet, enforcing pervasive and mobile computing scenarios and enabling access to networked resources almost from everywhere, at any time, and independently of the device in use. In addition, people increasingly want to customize their experience, exploiting specific device capabilities and limitations, inherent features of the communication channel in use, and interaction paradigms that significantly differ from the traditional request/response one. This so-called Ubiquitous Internet scenario calls for solutions that address many different challenges, such as device mobility, session management, content adaptation, context awareness and the provisioning of multimodal interfaces. Moreover, new service opportunities demand simple and effective ways to integrate existing resources into new, value-added applications that can also undergo run-time modifications according to ever-changing execution conditions. Although service-oriented architectural models are gaining momentum to tame the increasing complexity of composing and orchestrating distributed and heterogeneous functionalities, existing solutions generally lack a unified approach and only provide support for specific Ubiquitous Internet aspects. Moreover, they usually target rather static scenarios and scarcely support the dynamic nature of pervasive access to Internet resources, which can quickly render existing compositions obsolete or inadequate, and hence in need of reconfiguration. This thesis proposes a novel middleware approach to deal comprehensively with the facets of the Ubiquitous Internet and to assist in establishing innovative application scenarios.
We claim that a truly viable ubiquity-support infrastructure must neatly decouple the distributed resources it integrates and push any kind of content-related logic outside its core layers, keeping only management and coordination responsibilities. Furthermore, we promote an innovative, open and dynamic resource composition model that makes it easy to describe and enforce complex scenario requirements and to react suitably to changes in the execution conditions.
Abstract:
Alzheimer's disease (AD) and cancer represent two of the main causes of death worldwide. They are complex multifactorial diseases, and several biochemical targets have been recognized to play a fundamental role in their development. Given their complex nature, a promising therapeutic approach could be the so-called "Multi-Target-Directed Ligand" (MTDL) approach. This new strategy is based on the assumption that a single molecule could hit several targets responsible for the onset and/or progression of the pathology. In AD in particular, most currently prescribed drugs aim to increase the level of acetylcholine in the brain by inhibiting the enzyme acetylcholinesterase (AChE). However, clinical experience shows that AChE inhibition is a palliative treatment, and the simple modulation of a single target does not address AD aetiology. Research into newer and more potent anti-AD agents thus focuses on compounds whose properties go beyond AChE inhibition (such as inhibition of the enzyme β-secretase and inhibition of the aggregation of beta-amyloid). The MTDL strategy therefore seems a more appropriate approach for addressing the complexity of AD and may provide new drugs for tackling its multifactorial nature. This thesis describes the design of new MTDLs able to tackle the multifactorial nature of AD. The new MTDLs designed are less flexible analogues of caproctamine, one of the first MTDLs possessing biological properties useful for AD treatment. These new compounds are able to inhibit the enzymes AChE and β-secretase and to inhibit both AChE-induced and self-induced beta-amyloid aggregation. In particular, the most potent compound of the series inhibits AChE in the subnanomolar range and inhibits β-secretase and both AChE-induced and self-induced beta-amyloid aggregation at micromolar concentrations.
Cancer, like AD, is a very complex pathology, and many different therapeutic approaches are currently in use for its treatment. Due to its multifactorial nature, however, the MTDL approach could in principle be applied to this pathology as well. The aim of this thesis has been the development of new molecules possessing different structural motifs and able to interact simultaneously with some of the multitude of targets responsible for the pathology. The designed compounds displayed cytotoxic activity in different cancer cell lines. In particular, the most potent compounds of the series were evaluated further: they were able to bind DNA, proving 100-fold more potent than the reference compound mitonafide. Furthermore, these compounds were able to trigger apoptosis through caspase activation and to inhibit PIN1 (preliminary result). This last protein is a very promising target because it is overexpressed in many human cancers, functions as a critical catalyst for multiple oncogenic pathways, and in several cancer cell lines its depletion determines arrest of mitosis followed by apoptosis induction. In conclusion, this study may represent a promising starting point for the development of new MTDLs, hopefully useful for cancer and AD treatment.
Abstract:
Broad consensus has been reached within the Education and Cognitive Psychology research communities on the need to center the learning process on experimentation and concrete application of knowledge, rather than on a bare transfer of notions. Several advantages arise from this educational approach, ranging from the reinforcement of student learning, to the increased opportunity for a student to gain greater insight into the studied topics, up to the possibility for learners to acquire practical skills and long-lasting proficiency. This is especially true in Engineering education, where integrating conceptual knowledge and practical skills assumes strategic importance. In this scenario, learners are called to play a primary role. They are actively involved in the construction of their own knowledge, instead of passively receiving it. As a result, traditional, teacher-centered learning environments should be replaced by novel learner-centered solutions. Information and Communication Technologies enable the development of innovative solutions that provide suitable answers to the need for experimentation support in educational contexts. Virtual Laboratories, Adaptive Web-Based Educational Systems and Computer-Supported Collaborative Learning environments can significantly foster different learner-centered instructional strategies, offering the opportunity to enhance personalization, individualization and cooperation. More specifically, they allow students to explore different kinds of materials, to access and compare several information sources, to face real or realistic problems and to work on authentic and multi-faceted case studies. In addition, they encourage cooperation among peers and provide support through coached and scaffolded activities aimed at fostering reflection and meta-cognitive reasoning.
This dissertation will guide readers through this research field, presenting both the theoretical and the applied results of a research effort aimed at designing an open, flexible, learner-centered virtual lab for supporting students in learning Information Security.
Abstract:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship, and practices that conserve resources in a manner that allows growth and development to be sustained for the long term without degrading the environment, are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post-construction properties of traditional HMA. Lowering the production temperature reduces fuel usage and emissions, improves conditions for workers, and supports sustainable development. The crumb-rubber modifier (CRM), produced from shredded automobile tires and used in the United States since the mid-1980s, has also proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is relevant not only from an environmental standpoint but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project aims to demonstrate the dual value of these asphalt mixes with regard to environmental and mechanical performance and to suggest a low-environmental-impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design, but it cannot be the only step.
The eco-compatible approach should also be extended to the design method and to material characterization, because only with these phases is it possible to exploit the maximum potential of the materials used. Appropriate asphalt concrete characterization is essential for realistic performance prediction of asphalt concrete pavements. Volumetric (mix design) and mechanical (permanent deformation and fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to use the material correctly. A design method such as the Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under different traffic and environmental conditions, was the application of choice. In particular, this study focuses on CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to surface cracking and rutting respectively. It works in increments of time: using the output from one increment, recursively, as input to the next increment, it predicts the pavement condition in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between the surface layer and the pavement structure in terms of fatigue and permanent deformation under defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as a surface layer of 60 mm thickness. The performance of the pavement was compared to the performance of the same pavement structure with different kinds of asphalt concrete as the surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials were analyzed: two warm mix asphalts and a rubberized asphalt concrete.
The first two chapters summarize the steps needed to satisfy the sustainable pavement design procedure. Chapter I introduces the problem of eco-compatible asphalt pavement design; the low-environmental-impact materials, Warm Mix Asphalt and Rubberized Asphalt Concrete, are described in detail, and the value of a rational asphalt pavement design method is discussed. Chapter II underlines the importance of a deep laboratory characterization based on appropriate material selection and performance evaluation. In Chapter III, CalME is introduced through a specific explanation of the different available design approaches, with particular attention to the I-R procedure. In Chapter IV, the experimental program is presented, with an explanation of the laboratory test devices adopted. The fatigue and rutting performances of the study mixes are shown in Chapters V and VI respectively. From these laboratory test data, the CalME I-R model parameters for the master curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the simulations of asphalt pavement structures with different surface layers are reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rut depth in each bound layer were analyzed.
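The incremental-recursive idea described above — each time increment's damaged layer moduli become the input to the next increment — can be sketched in a few lines. The fatigue law, the softening relation and all constants below are illustrative placeholders, not CalME's calibrated models:

```python
def run_incremental_recursive(e0, n_increments, loads_per_increment):
    """Minimal sketch of an incremental-recursive damage loop.

    e0 is the undamaged surface-layer modulus; each increment computes
    the strain under load, accumulates fatigue damage, and softens the
    modulus, which then feeds the next increment (the recursive step).
    Returns a (modulus, damage) history, one entry per increment.
    """
    modulus = e0
    damage = 0.0
    history = []
    for _ in range(n_increments):
        # strain under load grows as the layer softens (placeholder relation)
        strain = 200e-6 * (e0 / modulus) ** 0.5
        # allowable repetitions at this strain (placeholder fatigue law)
        allowable = 1e7 * (100e-6 / strain) ** 4
        damage = min(1.0, damage + loads_per_increment / allowable)
        # recursive step: this increment's damaged modulus feeds the next
        modulus = e0 * (1.0 - 0.8 * damage)
        history.append((modulus, damage))
    return history
```

Because the softened modulus raises the strain of the following increment, damage accumulation accelerates over time, which is the qualitative behaviour the I-R procedure is designed to capture.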
Design, synthesis and biological evaluation of substituted naphthalene diimides as anticancer agents
Abstract:
It has been shown that naphthalene diimide (NDI) derivatives display anticancer properties as intercalators and G-quadruplex-binding ligands, leading to DNA damage, senescence and down-regulation of oncogene expression. This thesis deals with the design and synthesis of disubstituted and tetrasubstituted NDI derivatives endowed with anticancer activity, interacting with DNA as well as with other targets implicated in cancer development. The disubstituted NDI compounds were designed to provide potential multitarget-directed ligands (MTDLs): molecules able to interact simultaneously with some of the different targets involved in this pathology. The most active compound displayed antiproliferative activity in the submicromolar range, especially against colon and prostate cancer cell lines, together with the ability to bind duplex and quadruplex DNA, to inhibit Taq polymerase and telomerase, to trigger caspase activation by a possible oxidative mechanism, to downregulate the ERK2 protein and to inhibit ERK phosphorylation, without acting directly on microtubules or tubulin. The tetrasubstituted NDI compounds were designed as G-quadruplex-binding ligands endowed with anticancer activity. In order to improve the cellular uptake of the lead compound, the N-methylpiperazine moiety was replaced with different aromatic systems and methoxypropyl groups. The most interesting compound was 1d, which was able to interact with G-quadruplexes both in the telomeric and in the HSP90 promoter region; it was co-crystallized with the human telomeric G-quadruplex to verify directly its ability to bind this kind of structure and to investigate its binding mode. All the morpholino-substituted compounds show antiproliferative activity at submicromolar concentrations, mainly in pancreatic and lung cancer cell lines, and an improved biological profile in comparison with that of the lead compound.
In conclusion, both of these studies may represent a promising starting point for the development of new and interesting molecules useful for the treatment of cancer, underlining the versatility of the NDI scaffold.
Abstract:
This thesis is concerned with the role played by software tools in the analysis and dissemination of linguistic corpora and their contribution to a more widespread adoption of corpora in different fields. Chapter 1 contains an overview of some of the most relevant corpus analysis tools available today, presenting their most interesting features and some of their drawbacks. Chapter 2 begins with an explanation of the reasons why none of the available tools appears to satisfy the requirements of the user community and then continues with a technical overview of the current status of the new system developed as part of this work. This presentation is followed by highlights of features that make the system appealing to users and corpus builders (i.e. scholars willing to make their corpora available to the public). The chapter concludes with an indication of future directions for the project and information on the current availability of the software. Chapter 3 describes the design of an experiment devised to evaluate the usability of the new system in comparison to another corpus tool. Usage of the tool was tested in the context of a documentation task performed on a real assignment during a translation class in a master's degree course. In Chapter 4 the findings of the experiment are presented on two levels of analysis: first, a discussion of how participants interacted with and evaluated the two corpus tools in terms of interface and interaction design, usability and perceived ease of use; then, an analysis of how users interacted with the corpora to complete the task and what kinds of queries they submitted. Finally, some general conclusions are drawn and areas for future work are outlined.
Abstract:
Fibre-reinforced plastics are composite materials composed of thin fibres with high mechanical properties, made to work together with a cohesive plastic matrix. The huge advantages of fibre-reinforced plastics over traditional materials are their high specific mechanical properties, i.e. high stiffness-to-weight and strength-to-weight ratios. This kind of composite material is the most disruptive innovation seen in the structural materials field in recent years, and the areas of potential application are still many. However, a few aspects limit their growth: on the one hand, the information available about their properties and long-term behaviour is still scarce, especially compared with traditional materials, for which an extensive database has been developed through years of use and research. On the other hand, the production technologies are still not as developed as those available for forming plastics, metals and other traditional materials. A third aspect is that the new properties presented by these materials, e.g. their anisotropy, complicate the design of components. This thesis provides several case studies with advancements regarding the three limitations mentioned. In particular, long-term mechanical properties have been studied through an experimental analysis of the impact of seawater on GFRP. Regarding production methods, the autoclave-cured pre-impregnated process was considered: a rapid tooling method to produce moulds is presented, together with a study of the production of thick components. In addition, two liquid composite moulding methods are presented, with a case study of a large sandwich-structure component produced with the Vacuum-Assisted Resin Infusion method, and a case study of a thick con-rod beam produced with the Resin Transfer Moulding process.
The final case study analyses the loads acting during the use of a particular sports component made with FRP layers and a sandwich structure, and practical design rules are provided.
Abstract:
Intangible resources have raised the interest of scholars from different research areas due to their importance as crucial factors for firm performance; yet contributions to this field still lack a theoretical framework. This research analyses the state-of-the-art results reached in the literature concerning intangibles, their main features, and the problems and models of their evaluation. In search of a possible theoretical framework, the research draws a kind of indirect analysis of intangibles through the theories of the firm, their critiques and their developments. The heterodox approaches of evolutionary theory and the resource-based view are indicated as possible frameworks. Based on this theoretical analysis, organization capital (OC) is identified, by virtue of its features, as the most important intangible for firm performance. Empirical studies on the relationship between intangibles and firm performance have been sporadic and have failed to reach firm conclusions with respect to OC; in an attempt to fill this gap, the effect of OC is tested on a large sample of European firms using the Compustat Global database. OC is proxied by capitalizing an income statement item (Selling, General and Administrative expenses) that includes expenses linked to information technology, business process design, reputation enhancement and employee training. This measure of OC is employed in a cross-sectional estimation of a firm-level production function, modeled with different functional specifications (Cobb-Douglas and translog), that measures the contribution of OC to firm output and profitability. Results are robust and confirm the importance of OC for firm performance.
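For the Cobb-Douglas specification, the production-function estimation described above amounts to an OLS regression in logs, with the capitalized-SG&A measure of OC entering as an additional input. A minimal sketch (the variable names and the choice of labor and capital as the other inputs are illustrative assumptions, not the thesis's exact specification):

```python
import numpy as np

def estimate_cobb_douglas(output, labor, capital, oc):
    """OLS fit of ln Y = a + bL*ln L + bK*ln K + bO*ln OC + e.

    Returns [a, bL, bK, bO]; each b is the output elasticity of the
    corresponding input, bO being the contribution of organization
    capital to firm output.
    """
    y = np.log(output)
    X = np.column_stack([
        np.ones(len(y)),   # intercept
        np.log(labor),
        np.log(capital),
        np.log(oc),
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

On noise-free synthetic data generated from known elasticities, this recovers them exactly, which is a quick sanity check before running it on real firm-level data.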
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The issue of the quality of the voltage provided by utilities, and influenced by customers at the various points of a network, emerged only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the concept of quality of the delivered energy was associated mostly with its continuity; hence reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast range of power-quality-degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks to which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested, first under ideal and then under realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of each acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numeric procedure. The last chapter describes a device, designed and realized during the PhD activity, intended to substitute the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study was carried out with the aim of providing an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method much more feasible to apply.
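The core of locating a disturbance from the arrival times registered at distributed stations can be illustrated with the textbook two-ended travelling-wave formula. This is a deliberately simplified sketch of the general idea, not the thesis's multi-station procedure, and the line length and propagation speed below are illustrative numbers:

```python
def locate_fault(t_a, t_b, line_length_m, wave_speed_mps):
    """Distance of a fault from end A of a monitored line section.

    The transient leaves the fault at the same instant in both
    directions, so the difference between the arrival times at the
    two ends fixes the fault position:
        x = (L - v * (t_b - t_a)) / 2
    where L is the section length and v the wave propagation speed.
    """
    return (line_length_m - wave_speed_mps * (t_b - t_a)) / 2.0
```

For example, on a 10 km section with v = 2e8 m/s, a fault 3 km from end A reaches A after 15 µs and B after 35 µs, and the formula returns 3000 m. The quality of the estimate depends directly on the time-synchronization and measurement-chain uncertainties that Chapter 5 propagates into the combined uncertainty on the fault position.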
Abstract:
The aspartic protease BACE1 (β-amyloid precursor protein cleaving enzyme, β-secretase) is recognized as one of the most promising targets in the treatment of Alzheimer's disease (AD). The accumulation of β-amyloid peptide (Aβ) in the brain is a major factor in the pathogenesis of AD. Aβ is formed by initial cleavage of β-amyloid precursor protein (APP) by β-secretase; BACE1 inhibition therefore represents one of the therapeutic approaches to control the progression of AD, by preventing the abnormal generation of Aβ. For this reason, in the last decade many research efforts have focused on the identification of new BACE1 inhibitors as drug candidates. Generally, BACE1 inhibitors are grouped into two families: substrate-based inhibitors, designed as peptidomimetic inhibitors, and non-peptidomimetic ones. Research on non-peptidomimetic small-molecule BACE1 inhibitors remains the most interesting approach, since these compounds offer improved bioavailability after systemic administration, due to good blood-brain-barrier permeability in comparison to peptidomimetic inhibitors. Very recently, our research group discovered a new promising lead compound for the treatment of AD, named lipocrine, a hybrid of lipoic acid and the AChE inhibitor (AChEI) tacrine, characterized by a tetrahydroacridine moiety. Lipocrine is one of the first compounds able to inhibit the catalytic activity of AChE and AChE-induced amyloid-β aggregation and to protect against reactive oxygen species. Due to this interesting profile, lipocrine was also evaluated for BACE1 inhibitory activity, proving a potent lead compound for BACE1 inhibition. Starting from this profile, a series of tetrahydroacridine analogues was synthesised, varying the chain length between the two fragments.
Moreover, following the approach of combining two different pharmacophores in a single molecule, we designed and synthesised different compounds bearing the moieties of known AChEIs (rivastigmine and caproctamine) coupled with lipoic acid, since the dithiolane group had been shown to be an important structural feature of lipocrine for optimal inhibition of BACE1. All the tetrahydroacridine-, rivastigmine- and caproctamine-based compounds were evaluated for BACE1 inhibitory activity in a FRET (fluorescence resonance energy transfer) enzymatic assay (test A). With the aim of enhancing the biological activity of the lead compound, we applied a molecular simplification approach to design and synthesize novel heterocyclic compounds related to lipocrine, in which the tetrahydroacridine moiety was replaced by 4-aminoquinoline or 4-aminoquinazoline rings. All the synthesized compounds were also evaluated in a modified FRET enzymatic assay (test B), in which the fluorescent substrate for enzymatic BACE1 cleavage was changed. This test method guided in-depth structure-activity relationship studies for BACE1 inhibition on the most promising quinazoline-based derivatives. By varying the substituent at the 2-position of the quinazoline ring and by replacing the lipoic acid residue in the lateral chain with different moieties (e.g. trans-ferulic acid, a known antioxidant molecule), a series of quinazoline derivatives was obtained. In order to confirm the inhibitory activity of the most active compounds, they were evaluated with a third FRET assay (test C) which, surprisingly, did not confirm the previous good activity profiles. An evaluation of the kinetic parameters of the three assays revealed that method C has the best specificity and enzymatic efficiency.
Biological evaluation of the modified 2,4-diamino-quinazoline derivatives by method C led to a new lead compound, bearing a trans-ferulic acid residue coupled to the 2,4-diamino-quinazoline core, endowed with good BACE1 inhibitory activity (IC50 = 0.8 μM). We report on the variability of the results across the three different FRET assays, which are known to suffer from interference rates strongly dependent on compound properties. The observed variability could also be ascribed to the different enzyme origins, substrates and fluorescent groups. Inhibitors should therefore be tested in parallel screenings to obtain more reliable data before being advanced to cellular assays. To this end, a preliminary cellular BACE1 inhibition assay carried out on lipocrine confirmed a good cellular activity profile (EC50 = 3.7 μM), supporting the search for a small-molecule non-peptidomimetic BACE1 inhibitor. In conclusion, the present study identified a new lead compound endowed with BACE1 inhibitory activity in the submicromolar range. Further lead optimization of the obtained derivative is needed to obtain a more potent and selective BACE1 inhibitor based on the 2,4-diamino-quinazoline scaffold. A side project on the synthesis of novel enzymatic inhibitors of BACE1, exploring the chemistry of pseudopeptidic transition-state isosteres, was carried out during a research stage at the Université de Montréal (Canada) in Hanessian's group. The aim of this work was the synthesis of the δ-aminocyclohexane carboxylic acid motif with stereochemically defined substitution, in order to incorporate such a constrained core in potential BACE1 inhibitors. This fragment, endowed with reduced peptidic character, was not previously known in the context of peptidomimetic design.
In particular, we envisioned an alternative route based on an organocatalytic asymmetric conjugate addition of nitroalkanes to cyclohexenone in the presence of D-proline and trans-2,5-dimethylpiperazine. The enantioenriched 3-(α-nitroalkyl)-cyclohexanones thus obtained were further functionalized to give the corresponding δ-nitroalkyl cyclohexane carboxylic acids. These intermediates were elaborated to the target structures, 3-(α-aminoalkyl)-1-cyclohexane carboxylic acids, in a new, readily accessible way.
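The IC50 and EC50 potencies quoted in this abstract come from dose-response curves measured in the FRET assays. As a purely illustrative sketch (with invented numbers, not the thesis data), a standard Hill-model inhibition curve and a log-linear IC50 read-out can be written as:

```python
import math

def hill_inhibition(conc, ic50, hill=1.0):
    """Fractional enzyme inhibition under a simple Hill model:
    i = 1 / (1 + (IC50 / c)^h). This is the standard dose-response
    form used to report assay potencies; parameters are illustrative."""
    return 1.0 / (1.0 + (ic50 / conc) ** hill)

def ic50_from_curve(concs, inhibitions):
    """Estimate IC50 by log-linear interpolation of the concentration
    giving 50% inhibition; `concs` must be sorted ascending."""
    for (c1, i1), (c2, i2) in zip(zip(concs, inhibitions),
                                  zip(concs[1:], inhibitions[1:])):
        if i1 <= 0.5 <= i2:
            t = (0.5 - i1) / (i2 - i1)
            return math.exp(math.log(c1) + t * (math.log(c2) - math.log(c1)))
    raise ValueError("50% inhibition not bracketed by the data")
```

For example, an inhibitor with a true IC50 of 0.8 µM produces exactly 50% inhibition at 0.8 µM, and interpolating synthetic curve points recovers that value.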
Abstract:
This PhD thesis discusses the rationale for the design and use of synthetic oligosaccharides in the development of glycoconjugate vaccines, and the role of physicochemical methods in the characterization of these vaccines. The study concerns two infectious diseases that represent a serious problem for national healthcare programs: human immunodeficiency virus (HIV) and Group A Streptococcus (GAS) infections. Both pathogens possess distinctive carbohydrate structures that have been described as suitable targets for vaccine design. The Group A Streptococcus cell membrane polysaccharide (GAS-PS) is an attractive vaccine antigen candidate, based on its conserved, constant expression pattern and its ability to confer immunoprotection in a relevant mouse model. Analysis of the immunogenic response within at-risk populations suggests an inverse correlation between high anti-GAS-PS antibody titres and GAS infection cases. Recent studies show that a chemically synthesized core polysaccharide-based antigen may represent an antigenic structural determinant of the large polysaccharide. Building on GAS-PS structural analysis, the study evaluates the potential of a synthetic design approach to GAS vaccine development and compares the efficiency of synthetic antigens with the full-length isolated GAS polysaccharide. Synthetic GAS-PS structural analogues were specifically designed and generated to explore the impact of antigen length and terminal residue composition. For the HIV-1 glycoantigens, the dense glycan shield on the surface of the envelope protein gp120 was chosen as a target. This shield masks conserved protein epitopes and facilitates virus spread via binding to glycan receptors on susceptible host cells. The broadly neutralizing monoclonal antibody 2G12 binds a cluster of high-mannose oligosaccharides on the gp120 subunit of the HIV-1 Env protein. This oligomannose epitope has been a target of synthetic vaccine development.
The clustered nature of the 2G12 epitope suggested that multivalent antigen presentation would be important for developing a carbohydrate-based vaccine candidate. I describe the development of neoglycoconjugates displaying clustered HIV-1-related oligomannose carbohydrates and their immunogenic properties.
Abstract:
This research work addresses the problem of inserting viscous dampers into Moment Resisting Frames (MRF) for maximum efficiency in mitigating seismic effects, and leads to precise design indications. The fundamental result of the thesis is to show that, even for moment-resisting structures, a system of added viscous dampers can be designed to achieve target performance levels: that is, given a desired reduction factor for the seismic response, to determine the characteristics of the viscous dampers that achieve it.
Abstract:
A micro-opto-mechanical systems (MOMS) based technology for the fabrication of ultrasonic probes on optical fiber is presented. Thanks to the high miniaturization level reached, the realization of an ultrasonic system composed of ultrasonic generating and detecting elements, suitable for minimally invasive applications or high-resolution Non-Destructive Evaluation (NDE) of materials, is demonstrated. Ultrasonic generation is achieved by irradiating a highly absorbing carbon film, patterned on silicon micromachined structures, with a nanosecond pulsed laser source: the conversion of optical energy into heat induces thermal expansion of the film and generates a mechanical shock wave. The short duration of the laser pulse, together with an appropriate emitter design, ensures high-frequency, wide-band ultrasonic generation. Acoustic detection is also realized on a MOMS device using an interferometric receiver, fabricated with a Fabry-Perot optical cavity consisting of a patterned SU-8 layer and two Al metallization levels. To detect the ultrasonic waves, the cavity is interrogated by a laser beam and the reflected power is measured with a photodiode. Various issues related to the design and fabrication of these acoustic probes are investigated in this thesis. First, theoretical models are developed to characterize the opto-acoustic behavior of the devices and to estimate their expected acoustic performance. Test structures are realized to derive the relevant physical parameters of the materials constituting the MOMS devices and to determine the conditions that theoretically ensure the best acoustic emission and detection performance. Moreover, by exploiting the models and the theoretical results, prototypes of acoustic probes are designed and their fabrication process is developed through an extended experimental activity.
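The interferometric read-out described above can be sketched with the textbook Airy formula for an ideal lossless Fabry-Perot cavity (a simplification of the thesis's device models; the mirror reflectivity and wavelength below are invented for illustration). An incident ultrasonic wave modulates the cavity length, which the interrogating laser sees as a change in reflected power:

```python
import math

def fp_reflectance(length_nm, wavelength_nm=1550.0, mirror_r=0.8, n=1.0):
    """Reflected power fraction of an ideal lossless Fabry-Perot cavity
    (Airy function). At resonance (round-trip phase = multiple of 2*pi)
    the reflectance dips to zero; near the steep slope between resonance
    and anti-resonance, small length changes give large power changes,
    which is how the acoustic wave is detected."""
    delta = 4.0 * math.pi * n * length_nm / wavelength_nm  # round-trip phase
    f = 4.0 * mirror_r / (1.0 - mirror_r) ** 2             # coefficient of finesse
    s = math.sin(delta / 2.0) ** 2
    return f * s / (1.0 + f * s)
```

In practice the cavity is biased at the quadrature point of this curve, where the reflected-power response to nanometre-scale acoustic displacements is most linear and most sensitive.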
Abstract:
A recent initiative of the European Space Agency (ESA) aims at the definition and adoption of a software reference architecture for use in the on-board software of future space missions. Our PhD project is placed in the context of that effort. At the outset of our work we gathered the industrial needs relevant to ESA and to the main European space stakeholders, and consolidated them into a set of technical high-level requirements. The conclusion we reached from that phase confirmed that the adoption of a software reference architecture was indeed the best solution for fulfilling the high-level requirements. The software reference architecture we set out to build rests on four constituents: (i) a component model, to design the software as a composition of individually verifiable and reusable software units; (ii) a computational model, to ensure that the architectural description of the software is statically analyzable; (iii) a programming model, to ensure that the implementation of the design entities conforms with the semantics, assumptions and constraints of the computational model; and (iv) a conforming execution platform, to actively preserve at run time the properties asserted by static analysis. The nature, feasibility and fitness of constituents (ii), (iii) and (iv) had already been proved by the author in an international project that preceded the commencement of the PhD work. The core of the PhD project was therefore centered on the design and prototype implementation of constituent (i), the component model. Our proposed component model is centered on: (i) rigorous separation of concerns, achieved through support for design views and by careful allocation of concerns to dedicated software entities; (ii) support for the specification and model-based analysis of extra-functional properties; and (iii) the inclusion of space-specific concerns.
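To make the component-model idea concrete, the following minimal sketch (illustrative only; all names are invented and this is not the thesis's actual metamodel) shows the separation the text describes: a component's functional contract, expressed as typed provided/required ports, is kept apart from its extra-functional annotations, and bindings are checked before being recorded, so the assembly stays statically analyzable:

```python
from dataclasses import dataclass, field

@dataclass
class Port:
    name: str
    interface: str  # name of the service interface carried by the port

@dataclass
class Component:
    """Illustrative component descriptor. Functional contract (ports)
    is separate from extra-functional annotations, which a dedicated
    design view could populate (e.g. period, worst-case execution time)."""
    name: str
    provided: list = field(default_factory=list)   # services offered
    required: list = field(default_factory=list)   # services consumed
    properties: dict = field(default_factory=dict) # e.g. {"period_ms": 50}

def bind(client: Component, port_name: str, server: Component):
    """Bind a required port of `client` to a matching provided port of
    `server`; the interface-compatibility check makes the connection
    verifiable before any code runs."""
    req = next(p for p in client.required if p.name == port_name)
    prov = next((p for p in server.provided if p.interface == req.interface), None)
    if prov is None:
        raise TypeError(f"{server.name} does not provide {req.interface}")
    return (client.name, req.name, server.name, prov.name)
```

A real space-grade component model adds far more (design views, containers, analysis hooks), but the pattern of declared, checkable dependencies is the essence of constituent (i).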