28 results for WIPO Performances and Phonograms Treaty

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

The present study is part of the EU Integrated Project "GEHA – Genetics of Healthy Aging" (Franceschi C et al., Ann N Y Acad Sci. 1100: 21-45, 2007), whose aim is to identify genes involved in healthy aging and longevity, which allow individuals to survive to advanced age in good cognitive and physical condition and free of major age-related diseases.

Aims. The major aims of this thesis were the following:
1. To outline the recruitment procedure for 90+ Italian siblings carried out by the recruiting units of the University of Bologna (UNIBO) and Rome (ISS). The procedures related to the items necessary to perform the study were described and discussed: identification of the area eligible for recruitment, demographic aspects related to the need to obtain census lists of 90+ siblings, mail and phone contact with 90+ subjects and their families, bioethical aspects of the whole procedure, standardization of the recruitment methodology, and set-up of a detailed flow chart to be followed by the European recruitment centres (obtaining the informed consent form, anonymization of data by means of a special code, how to perform the interview, how to collect the blood, and how to enter data into the GEHA Phenotypic Data Base hosted at Odense).
2. To provide an overview of the phenotypic characteristics of the 90+ Italian siblings recruited by the UNIBO and ISS recruiting units. The following items were addressed: socio-demographic characteristics, health status, cognitive assessment, physical condition (handgrip strength test, chair-stand test, physical ability including ADL, vision and hearing ability, movement ability and doing light housework), life-style information (smoking and drinking habits) and subjective well-being (attitude towards life). Moreover, haematological parameters collected in the 90+ sibpairs as optional parameters by the Bologna and Rome recruiting units were used for a more comprehensive evaluation of the results obtained with the above-mentioned phenotypic characteristics reported in the GEHA questionnaire.
3. To assess the health/functional status of the 90+ Italian siblings on the basis of three classification methods proposed in previous studies on centenarians, based respectively on: actual functional capabilities (ADL, SMMSE, visual and hearing abilities) (Gondo et al., J Gerontol. 61A (3): 305-310, 2006); actual functional capabilities and morbidity (ADL, ability to walk, SMMSE, presence of cancer, stroke, renal failure, anaemia and liver diseases) (Franceschi et al., Aging Clin Exp Res, 12: 77-84, 2000); and retrospectively collected data on past morbidity and age of disease onset (hypertension, heart disease, diabetes, stroke, cancer, osteoporosis, neurological diseases, chronic obstructive pulmonary disease and ocular diseases) (Evert et al., J Gerontol A Biol Sci Med Sci. 58A (3): 232-237, 2003). First, these available models to define the health status of long-living subjects were applied to the sample; then, since the classifications by Gondo and Franceschi are both based on present functional status, they were compared in order to better recognize the healthy-aging phenotype and to identify the best group of 90+ subjects within the entire studied population.
4. To investigate the concordance of health and functional status among 90+ siblings in order to divide sibpairs into three categories: the best (both sibs in good shape), the worst (both sibs in bad shape) and an intermediate group (one sib in good shape and the other in bad shape). Moreover, the evaluation aimed to identify which variables are concordant among siblings; concordant variables can be regarded as familial, i.e. determined by the environment or by genetics.
5. To perform a survival analysis using mortality data as of 1 January 2009 from the follow-up as the main outcome and selected functional and clinical parameters as explanatory variables.

Methods. A total of 765 90+ Italian subjects recruited by the UNIBO (549 90+ siblings belonging to 258 families) and ISS (216 90+ siblings belonging to 106 families) recruiting units are included in the analysis. Each subject was interviewed according to a standardized questionnaire, comprising extensively used questions validated in previous European studies on elderly subjects and covering demographic information, life style, living conditions, cognitive status (SMMSE), mood, health status and anthropometric measurements. Moreover, subjects were asked to perform physical tests (hand grip strength test and chair-standing test), and a sample of about 24 mL of blood was collected and processed according to a common protocol for the preparation and storage of DNA aliquots.

Results. The main findings are the following:
- a standardized protocol to assess the cognitive status, physical performance and health status of European nonagenarian subjects was set up, in compliance with ethical requirements, and is available as a reference for other studies in this field;
- GEHA families are enriched in long-living members and extreme survival, and represent an appropriate model for the identification of genes involved in healthy aging and longevity;
- two simplified sets of criteria to classify 90+ siblings according to their health status were proposed, as operational tools for distinguishing healthy from non-healthy subjects;
- cognitive and functional parameters play a major role in categorizing 90+ siblings by health status;
- parameters such as education and good physical abilities (ability to walk 500 metres, ability to go up and down the stairs, high scores in the hand grip and chair-stand tests) are associated with good health status (defined as "cognitive unimpairment and absence of disability");
- male nonagenarians show a more homogeneous phenotype than females and, though far fewer in number, tend to be healthier than females;
- in males good health status is not protective for survival, confirming the male-female health-survival paradox;
- survival after age 90 depended mainly on intact cognitive status and absence of functional disabilities;
- haemoglobin and creatinine levels are both associated with longevity;
- the most concordant items among 90+ siblings are related to functional status, indicating that they contain a familial component. It remains to be investigated to what extent this familial component is determined by genetics, by the environment, or by the interaction between genetics, environment and chance.

Conclusions. In conclusion, this study, in accordance with the main objectives of the whole GEHA project, represents one of the first attempts to identify the biological and non-biological determinants of successful/unsuccessful aging and longevity. The analysis was performed on 90+ siblings recruited in Northern and Central Italy and can be used as a reference for other studies in this field on the Italian population. Moreover, it contributed to the definition of "successful" and "unsuccessful" aging, and categorising a very large cohort of our most elderly subjects into "successful" and "unsuccessful" groups provided an unrivalled opportunity to detect some of the basic genetic/molecular mechanisms which underpin good health as opposed to chronic disability. Discoveries about the biological determinants of healthy aging offer a real possibility of identifying new markers with which to single out subgroups of older European citizens at higher risk of developing age-related diseases and disabilities, and of directing major preventive-medicine strategies against the new epidemic of chronic disease in the 21st century.
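
The survival analysis mentioned in aim 5 is, in general form, the kind of problem a Cox proportional-hazards model addresses. The sketch below is only an illustration of that analysis in Python with the lifelines package; the file name and column names (geha_followup.csv, follow_up_years, deceased, smmse, adl_score, handgrip) are hypothetical placeholders, not the actual GEHA database fields.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical data layout (not the actual GEHA database): one row per 90+
# participant, with follow-up time to death or censoring at 1 January 2009.
df = pd.read_csv("geha_followup.csv")  # assumed file name and columns

covariates = ["sex", "smmse", "adl_score", "handgrip", "haemoglobin", "creatinine"]

cph = CoxPHFitter()
cph.fit(df[["follow_up_years", "deceased"] + covariates],
        duration_col="follow_up_years",   # time from interview to death/censoring
        event_col="deceased")             # 1 = died during follow-up, 0 = censored
cph.print_summary()                       # hazard ratios for each covariate
```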

Relevance: 100.00%

Abstract:

The ever-increasing spread of automation in industry puts the electrical engineer in a central role as a promoter of technological development in a sector, the use of electricity, which underlies all machinery and production processes. Moreover, the spread of drives for motor control and of static converters with ever more complex structures confronts the electrical engineer with new challenges, whose solution hinges on the implementation of digital control techniques meeting the cost and efficiency requirements of the final product. The successful application of solutions based on non-conventional static converters is attracting increasing interest in science and industry because of the promising opportunities it offers. At the same time, however, new problems emerge whose solutions are still under study and debate in the scientific community.

During the Ph.D. course several themes were developed which, while attracting the recent and growing interest of the scientific community, still leave ample room for further research activity and industrial application. The first area of research concerns the control of three-phase induction motors with high dynamic performance and sensorless control in the high speed range. Operating an induction machine without position or speed sensors is of interest to industry because of the increased reliability and robustness of this solution, combined with a lower cost of production and purchase compared with the other technologies available on the market. In this dissertation control techniques are proposed that are able to exploit the total dc-link voltage and, at the same time, the maximum torque capability over the whole speed range with good dynamic performance. The proposed solution preserves the simplicity of tuning of the regulators. Furthermore, in order to validate its effectiveness, the proposed solution is assessed in terms of performance and complexity and compared with two other algorithms presented in the literature. The feasibility of the proposed algorithm is also tested on an induction motor drive fed by a matrix converter.

Another important research area is the development of technology for vehicular applications. In this field, dynamic performance and low power consumption are among the most important goals of an effective algorithm. In this direction, a control scheme for induction motors is presented that integrates, within a coherent solution, some of the features commonly required of an electric vehicle drive. Its main features are the capability to exploit the maximum torque over the whole speed range, a weak dependence on the motor parameters, good robustness against variations of the dc-link voltage and, whenever possible, maximum efficiency.

The second part of this dissertation is dedicated to multi-phase systems. This technology is characterized by a number of issues worthy of investigation that make it competitive with other technologies already on the market. Multiphase systems allow power to be redistributed over a higher number of phases, thus making it possible to build electronic converters which would otherwise be very difficult to achieve owing to the limits of present power electronics. Multiphase drives also have an intrinsic reliability, given by the possibility that a fault in one phase, caused by the failure of a converter component, can be handled without losing the functionality of the machine or introducing a pulsating torque. The control of air-gap magnetic field spatial harmonics of order higher than one makes it possible to reduce torque noise and to obtain high-torque-density motors and multi-motor applications. In one of the following chapters a control scheme able to increase the motor torque by adding a third harmonic component to the air-gap magnetic field is presented. Above the base speed the control system reduces the motor flux so as to ensure the maximum torque capability. The presented analysis considers the drive constraints and shows how these limits modify the motor performance. Multi-motor applications involve a well-defined number of multiphase machines with series-connected stator windings; with a suitable permutation of the phases these machines can be independently controlled by a single multiphase inverter. This solution is presented, together with an electric drive consisting of two five-phase PM tubular actuators fed by a single five-phase inverter. Finally, the modulation strategies for a multi-phase inverter are illustrated. The problem of the space vector modulation of multiphase inverters with an odd number of phases is solved in different ways: both an algorithmic approach and a look-up table solution are proposed. The inverter output voltage capability is investigated, showing that the proposed modulation strategy is able to fully exploit the dc input voltage in both sinusoidal and non-sinusoidal operating conditions.

All these aspects are considered in the following chapters. In particular, Chapter 1 summarizes the mathematical model of the induction motor. Chapter 2 is a brief state of the art on three-phase inverters. Chapter 3 proposes a stator flux vector control for a three-phase induction machine and compares this solution with two other algorithms presented in the literature; furthermore, in the same chapter, a complete electric drive based on a matrix converter is presented. In Chapter 4 a control strategy suitable for electric vehicles is illustrated. Chapter 5 describes the mathematical model of multi-phase induction machines, whereas Chapter 6 analyzes the multi-phase inverter and its modulation strategies. Chapter 7 discusses the minimization of power losses in IGBT multi-phase inverters with carrier-based pulse width modulation. In Chapter 8 an extended stator flux vector control for a seven-phase induction motor is presented. Chapter 9 concerns high-torque-density applications, and in Chapter 10 different fault-tolerant control strategies are analyzed. Finally, the last chapter presents a positioning multi-motor drive consisting of two PM tubular five-phase actuators fed by a single five-phase inverter.
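
As a purely illustrative aside on how a carrier-based modulator can exploit the whole dc-link voltage, the sketch below computes three-phase duty cycles with min-max (zero-sequence) injection; it is a textbook scheme, not one of the modulation strategies developed in the thesis.

```python
import numpy as np

def duty_cycles_minmax(v_ref_abc, v_dc):
    """Carrier-based PWM duty cycles with min-max (zero-sequence) injection.

    v_ref_abc : three phase-voltage references (V)
    v_dc      : dc-link voltage (V)
    Injecting -(max+min)/2 centres the references and extends the linear
    range to v_dc/sqrt(3) peak phase voltage, about 15% more than the
    v_dc/2 limit of pure sinusoidal PWM.
    """
    v = np.asarray(v_ref_abc, dtype=float)
    v_zero = -(v.max() + v.min()) / 2.0          # common-mode injection
    duties = 0.5 + (v + v_zero) / v_dc           # normalised to [0, 1]
    return np.clip(duties, 0.0, 1.0)

# Example: 260 V peak references on a 460 V dc link. Without injection,
# phase a would need a duty of 0.5 + 260/460 = 1.07 (not realisable);
# with injection all duties stay inside [0, 1].
v_abc = 260.0 * np.cos(0.0 - np.array([0.0, 2*np.pi/3, 4*np.pi/3]))
print(duty_cycles_minmax(v_abc, v_dc=460.0))
```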

Relevance: 100.00%

Abstract:

The present PhD thesis summarizes the three-year study on the neutronic investigation of a new-concept nuclear reactor, aimed at the optimization and sustainable management of nuclear fuel in a possible European scenario. A new-generation nuclear reactor for the nuclear renaissance is indeed desired by today's industrialized world, both to answer the energy question arising from continuously growing energy demand and the corresponding reduction of oil availability, and to answer the environmental question with a sustainable energy source free from long-lived radioisotopes and therefore from geological repositories. Among the Generation IV candidate typologies, the Lead Fast Reactor concept has been pursued, being the one rated highest in sustainability. The European Lead-cooled SYstem (ELSY) was investigated first. The neutronic analysis of the ELSY core was performed via deterministic analysis by means of the ERANOS code, in order to obtain a stable configuration for the overall design of the reactor. Further analyses were carried out by means of the Monte Carlo general-purpose transport code MCNP, in order to check the former analysis and to define an accurate model of the system. An innovative system of absorbers was conceived and designed both for the reactivity compensation and regulation of the core over the cycle swing and for safety, in order to guarantee the cold shutdown of the system in case of accident. Aiming at the sustainability of nuclear energy, the steady-state nuclear equilibrium was investigated and generalized into the definition of the "extended" equilibrium state. On this basis the Adiabatic Reactor Theory was developed, together with a New Paradigm for Nuclear Power: in order to design a reactor that does not exchange anything valuable with the environment (hence the term "adiabatic"), in terms of both plutonium and minor actinides, it is indeed required to invert the logical design scheme of nuclear cores, starting from the definition of the equilibrium composition of the fuel and subordinating the whole core design to it. The New Paradigm was then applied to the core design of an Adiabatic Lead Fast Reactor complying with the ELSY overall system layout. A complete core characterization was carried out in order to assess criticality and power flattening; a preliminary evaluation of the main safety parameters was also performed to verify the viability of the system. Burn-up calculations were then performed in order to investigate the operating cycle of the Adiabatic Lead Fast Reactor; the fuel performances were extracted and used in a more general analysis for a European scenario. The present nuclear reactor fleet was modeled and its evolution simulated by means of the COSI code in order to investigate the material fluxes to be managed in the European region. Different plausible scenarios were identified to forecast the evolution of European nuclear energy production, including one involving the introduction of Adiabatic Lead Fast Reactors, and compared in order to better analyze the advantages introduced by the adoption of new-concept reactors.

Finally, since both ELSY and the ALFR are new-concept systems based on innovative solutions, the neutronic design of a demonstrator reactor was carried out: such a system is intended to prove the viability of the technology to be implemented in the First-of-a-Kind industrial power plant, with the aim of validating the general strategy to the largest possible extent. The DEMO design was therefore based on a compromise between the demonstration of developed technology and the testing of emerging technology, in order to reduce the uncertainties about construction and licensing, to validate the main ELSY/ALFR features and performances, and to qualify numerical codes and tools.
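
The idea of an equilibrium fuel composition can be pictured with a zero-dimensional nuclide balance in which the feed of a nuclide equals its removal by decay and neutron absorption. The toy sketch below uses made-up numbers and is in no way the ERANOS/MCNP/COSI methodology of the thesis.

```python
def equilibrium_density(feed_rate, decay_const, sigma_a_barn, flux):
    """Equilibrium solution of the point balance dN/dt = F - (lambda + sigma_a*phi)*N.

    feed_rate    : external feed (atoms/cm^3/s)
    decay_const  : decay constant lambda (1/s)
    sigma_a_barn : microscopic absorption cross section (barn)
    flux         : neutron flux (n/cm^2/s)
    At equilibrium the feed exactly balances decay plus absorption.
    """
    sigma_a = sigma_a_barn * 1e-24            # barn -> cm^2
    removal = decay_const + sigma_a * flux    # total removal rate (1/s)
    return feed_rate / removal

# Toy example: a long-lived actinide fed at 1e10 atoms/cm^3/s in a fast flux.
N_eq = equilibrium_density(feed_rate=1e10, decay_const=1e-11,
                           sigma_a_barn=2.0, flux=3e15)
print(f"equilibrium density ~ {N_eq:.3e} atoms/cm^3")
```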

Relevance: 100.00%

Abstract:

Semiconductor technologies are evolving rapidly, driven by the need for the higher performance demanded by applications. Thanks to the numerous advantages it offers, gallium nitride (GaN) is quickly becoming the technology of reference in the field of power amplification at high frequency. The RF power density of AlGaN/GaN HEMTs (High Electron Mobility Transistors) is an order of magnitude higher than that of gallium arsenide (GaAs) transistors. The first demonstration of GaN devices dates back only to 1993, and although over the past few years some commercial products have started to become available, the development of a new technology is a long process. AlGaN/GaN HEMT technology is not yet fully mature: issues related to dispersive phenomena and to reliability are still present. Dispersive phenomena, also referred to as long-term memory effects, have a detrimental impact on RF performance and are due both to the presence of traps in the device structure and to self-heating effects. A better understanding of these problems is needed to further improve the obtainable performance. Moreover, device models that take these effects into account are necessary for accurate circuit design. New characterization techniques are thus needed, both to gain insight into these problems and improve the technology, and to develop more accurate device models. This thesis presents the research conducted on the development of new characterization and modelling methodologies for GaN-based devices and on the use of this technology for high-frequency power amplifier applications.

Relevance: 100.00%

Abstract:

Organic electronics is an emerging field with a vast number of applications having high potential for commercial success. Although enormous progress has been made in this research area, many organic electronic applications, such as organic opto-electronic devices, organic field-effect transistors and organic bioelectronic devices, still require further optimization to fulfil the requirements for successful commercialization. The main bottleneck hindering large-scale production of these devices is their performance and stability. The performance of organic devices largely depends on the charge transport processes occurring at the interfaces between the various materials they are composed of. As a result, the key ingredient for successfully improving the performance and stability of organic electronic devices is an in-depth knowledge of the interfacial interactions and the charge transport phenomena taking place at the different interfaces. The aim of this thesis is to address the role of the various interfaces between different materials in determining the charge transport properties of organic devices. In this framework, I chose an Organic Field Effect Transistor (OFET) as a model system for this study: an OFET offers various interfaces that can be investigated, as it is made up of stacked layers of different materials. In order to probe the intrinsic properties that govern charge transport, one has to be able to carry out a thorough investigation of the interactions taking place down at the scale of the accumulation layer. However, since organic materials are highly unstable under ambient conditions, it is almost impossible to investigate the intrinsic properties of the material without the influence of extrinsic factors such as air, moisture and light. For this reason, I employed in situ real-time electrical characterization, a technique that enables electrical characterization of the OFET during the growth of the semiconductor.
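
For context, a standard quantity extracted from the electrical characterization of an OFET is the field-effect mobility obtained from the transfer curve. The sketch below shows the textbook linear-regime extraction with entirely hypothetical device parameters; it is not taken from the thesis.

```python
import numpy as np

def linear_mobility(v_g, i_d, v_d, width_cm, length_cm, c_ox):
    """Field-effect mobility from the linear-regime transfer curve of an OFET.

    Gradual-channel model: I_D = (W/L) * mu * C_ox * (V_G - V_T) * V_D,
    hence mu = (L / (W * C_ox * |V_D|)) * |dI_D/dV_G|.
    c_ox is the gate capacitance per unit area (F/cm^2).
    """
    transconductance = np.abs(np.gradient(i_d, v_g))    # |dI_D/dV_G|
    return (length_cm / (width_cm * c_ox * abs(v_d))) * transconductance

# Hypothetical p-type device: W/L = 1000 um / 20 um, 100 nm SiO2 gate dielectric.
v_g = np.linspace(0.0, -40.0, 81)
i_d = -1e-8 * np.clip(-(v_g + 10.0), 0.0, None)         # toy transfer curve, V_T ~ -10 V
mu = linear_mobility(v_g, i_d, v_d=-1.0,
                     width_cm=0.1, length_cm=2e-3, c_ox=3.45e-8)
print(f"on-state mobility ~ {mu[-1]:.1e} cm^2/(V s)")
```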

Relevance: 100.00%

Abstract:

Background. Amyloidotic cardiomyopathy (AC) can mimic true left ventricular hypertrophy (LVH), including hypertrophic cardiomyopathy (HCM) and hypertensive heart disease (HHD). We assessed the diagnostic value of combined electrocardiographic/echocardiographic indexes for identifying AC among patients with increased echocardiographic LV wall thickness due to different etiologies of amyloidosis, HCM or HHD. Methods. First, we studied 469 consecutive patients: 262 with biopsy-/genetically proven AC (with either AL or transthyretin (TTR)-related amyloidosis); 106 with HCM; 101 with HHD. We compared the diagnostic performance of: low QRS voltage; symmetric LVH; low QRS voltage plus interventricular septal thickness >1.98 cm; Sokolow index divided by the cross-sectional area of the LV wall; Sokolow index divided by body-surface-area-indexed LV mass (LVMI); Sokolow index divided by LV wall thickness; Sokolow index divided by (LV wall thickness/height^2.7); peripheral QRS score divided by LVMI; peripheral QRS score divided by LV wall thickness; peripheral QRS score divided by LV wall thickness indexed to height^2.7; total QRS score divided by LVMI; total QRS score divided by LV wall thickness; and total QRS score divided by (LV wall thickness/height^2.7). We tested each criterion, separately in males and females, in the following settings: AC vs. HCM+HHD; AC vs. HCM; AL vs. HCM+HHD; AL vs. HCM; TTR vs. HCM+HHD; TTR vs. HCM. Results. Low QRS voltage showed high specificity but low sensitivity for the identification of AC. All the combined indexes had higher diagnostic accuracy, with total QRS score divided by LV wall thickness or by LVMI yielding the best performance and the largest areas under the ROC curve. These results were validated in 298 consecutive patients with AC, HCM or HHD. Conclusions. In patients with increased LV wall thickness, a combined ECG/echocardiographic analysis provides accurate indexes to non-invasively identify AC. Total QRS score divided by LVMI or by LV wall thickness offers the best diagnostic performance.
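
The combined indexes are simple voltage-to-mass ratios. A minimal sketch of the two best-performing ones (total QRS score divided by LVMI and by mean LV wall thickness) is given below; the field names, units and example values are illustrative only and do not reproduce the cut-offs derived in the study.

```python
def combined_indexes(total_qrs_score_mm, lv_mass_g, bsa_m2, wall_thickness_mm):
    """Voltage-to-mass indexes used to separate amyloidotic cardiomyopathy
    from true LV hypertrophy (low ECG voltage despite thick walls).

    total_qrs_score_mm : sum of QRS amplitudes over the 12 leads (mm, 10 mm = 1 mV)
    lv_mass_g          : echocardiographic LV mass (g)
    bsa_m2             : body surface area (m^2), used to index LV mass
    wall_thickness_mm  : mean LV wall thickness (mm)
    """
    lvmi = lv_mass_g / bsa_m2                     # LV mass index (g/m^2)
    return {
        "qrs_over_lvmi": total_qrs_score_mm / lvmi,
        "qrs_over_wall_thickness": total_qrs_score_mm / wall_thickness_mm,
    }

# Illustrative patient with thick walls but relatively low ECG voltages,
# the discordance pattern suggestive of an infiltrative process.
print(combined_indexes(total_qrs_score_mm=95, lv_mass_g=280,
                       bsa_m2=1.8, wall_thickness_mm=16))
```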

Relevance: 100.00%

Abstract:

Bioinformatics is a recent and emerging discipline which aims at studying biological problems through computational approaches. Most branches of bioinformatics, such as Genomics, Proteomics and Molecular Dynamics, are particularly computationally intensive, requiring huge amounts of computational resources to run algorithms of ever-increasing complexity over data of ever-increasing size. In the search for computational power, the EGEE Grid platform, the world's largest community of interconnected clusters load-balanced as a whole, seems particularly promising and is considered the new hope for satisfying the ever-increasing computational requirements of bioinformatics, as well as of physics and other computational sciences. The EGEE platform, however, is rather new and not yet free of problems. In addition, the specific requirements of bioinformatics need to be addressed in order to use this new platform effectively for bioinformatics tasks. In my three years' Ph.D. work I addressed numerous aspects of this Grid platform, with particular attention to those needed by the bioinformatics domain. I created three major frameworks, Vnas, GridDBManager and SETest, plus an additional smaller standalone solution, to enhance the support for bioinformatics applications in the Grid environment and to reduce the effort needed to create new applications, additionally addressing numerous existing Grid issues and performing a series of optimizations. The Vnas framework is an advanced system for the submission and monitoring of Grid jobs that provides a reliable abstraction over the Grid platform. In addition, Vnas greatly simplifies the development of new Grid applications by providing a callback system that eases the creation of arbitrarily complex multistage computational pipelines, and it provides an abstracted virtual sandbox which bypasses Grid limitations. Vnas also reduces the usage of Grid bandwidth and storage resources by transparently detecting the equality of virtual sandbox files based on their content, across different submissions, even when performed by different users. BGBlast, an evolution of the earlier GridBlast project, now provides a Grid Database Manager (GridDBManager) component for managing and automatically updating biological flat-file databases in the Grid environment. GridDBManager offers novel features such as an adaptive replication algorithm that constantly optimizes the number of replicas of the managed databases in the Grid environment, balancing response times (performance) against storage costs according to a programmed cost formula. GridDBManager also provides highly optimized automated management of older database versions based on reverse delta files, which reduces by two orders of magnitude the storage costs required to keep such older versions available in the Grid environment. The SETest framework provides a way for the user to test and regression-test Python applications riddled with side effects (a common case with Grid computational pipelines), which could not easily be tested using the more standard methods of unit testing or test cases. The technique is based on a new concept of datasets containing invocations and results of filtered calls. The framework hence significantly accelerates the development of new applications and computational pipelines for the Grid environment, and reduces the effort required for maintenance. An analysis of the impact of these solutions is provided in this thesis. This Ph.D. work led to various publications in journals and conference proceedings, as reported in the Appendix. I also presented my work orally at numerous international conferences related to Grid computing and bioinformatics.
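
The content-based detection of identical sandbox files can be illustrated with a simple hashing scheme. The sketch below is a generic illustration of that idea, not the actual Vnas implementation.

```python
import hashlib
from pathlib import Path

def content_digest(path, chunk_size=1 << 20):
    """SHA-256 digest of a file, read in chunks so large sandbox files
    do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def files_to_upload(sandbox_dir, already_stored):
    """Split sandbox files into those to transfer and those already stored.

    already_stored : set of digests of files previously uploaded (by any user);
    files with a known digest can be referenced instead of re-transferred,
    saving bandwidth and storage.
    """
    new_files, reused = [], []
    for path in sorted(Path(sandbox_dir).iterdir()):
        if path.is_file():
            digest = content_digest(path)
            (reused if digest in already_stored else new_files).append((path, digest))
    return new_files, reused
```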

Relevance: 100.00%

Abstract:

This thesis deals with Visual Servoing and the disciplines closely connected to it, such as projective geometry, image processing, robotics and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques: Image Based Visual Servoing (IBVS). In Image Based Visual Servoing the robot is driven on-line by a feedback control loop closed directly in the 2D space of the camera sensor. The work considers the case of a monocular system with a single camera mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and the goal system configurations: the robot Cartesian motion is thus generated only by means of visual information. However, the execution of a positioning control task by IBVS is not straightforward, because singularity problems may occur and local minima may be reached where the current image is very close to the target one but the 3D positioning task is far from being fulfilled: this happens in particular for large camera displacements, when the initial and the goal target views are noticeably different. To overcome the singularity and local-minima drawbacks while maintaining the good robustness properties of IBVS with respect to modelling and camera calibration errors, a suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory being a path plus a time law). The generated image-plane paths must be feasible, i.e. they must be compliant with the rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local-minima problems. In addition, the planned image trajectories must generate camera velocity screws which are smooth and within the allowed bounds of the robot. We show that a scaled 3D motion planning algorithm can be devised in order to generate feasible image-plane trajectories. Since the image paths are generated off-line, it is also possible to tune the planning parameters so as to keep the target inside the camera field of view even in unfortunate cases where the target feature points would otherwise leave the camera image because of the 3D robot motion. To test the validity of the proposed approach, both experimental and simulation results are reported, also taking into account the influence of noise on the path planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector: the results demonstrate the good performance and the feasibility of the proposed approach.
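
For reference, the classical IBVS law drives the camera with a twist proportional to the pseudo-inverse of the stacked point-feature interaction matrices applied to the image error. The sketch below shows this textbook formulation; it is not the path-planning scheme developed in the thesis, which works on top of such a law.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalised image point (x, y)
    at estimated depth Z, relating point velocity to the camera twist [v; w]."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,      -(1 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1 + y * y,  -x * y,       -x],
    ])

def ibvs_velocity(points, points_goal, depths, gain=0.5):
    """Classical IBVS law: camera twist = -gain * pinv(L) * (s - s*).

    points, points_goal : (N, 2) arrays of current and goal normalised features
    depths              : length-N array of (estimated) point depths
    """
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    error = (np.asarray(points) - np.asarray(points_goal)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# Four coplanar target points, slightly displaced from their goal positions.
current = np.array([[0.11, 0.10], [-0.09, 0.10], [-0.10, -0.10], [0.10, -0.11]])
goal    = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
print(ibvs_velocity(current, goal, depths=[1.0, 1.0, 1.0, 1.0]))
```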

Relevance: 100.00%

Abstract:

Research in art conservation has been developing since the early 1950s, making a significant contribution to the conservation and restoration of cultural heritage artefacts. Indeed, only through a profound knowledge of the nature and condition of the constituent materials can suitable decisions on conservation and restoration measures be adopted and preservation practices enhanced. The study of ancient artworks is particularly challenging, as they can be considered heterogeneous and multilayered systems in which numerous interactions between the different components, as well as degradation and ageing phenomena, take place. However, the difficulty of physically separating the different layers, owing to their thickness (1-200 µm), can result in the inaccurate attribution of the identified compounds to a specific layer. Details can therefore only be analysed when the sample preparation method leaves the layer structure intact, as in the case of cross sections embedded in synthetic resins. Hence, spatially resolved analytical techniques are required not only to characterize the nature of the compounds exactly, but also to obtain precise chemical and physical information about ongoing changes. This thesis focuses on the application of FTIR microspectroscopic techniques to cultural heritage materials. The first section introduces the use of FTIR microscopy in conservation science, with particular attention to sampling criteria and sample preparation methods. The second section evaluates and validates the use of different FTIR microscopic analytical methods applied to different art conservation issues that may be encountered when dealing with cultural heritage artefacts: the characterisation of the artistic execution technique (chapter II-1), the study of degradation phenomena (chapter II-2) and, finally, the evaluation of protective treatments (chapter II-3). The third and last section is divided into three chapters which present recent developments in FTIR spectroscopy for the characterisation of paint cross sections, and in particular of thin organic layers: a newly developed preparation method based on embedding systems in infrared-transparent salts (chapter III-1), the new opportunities offered by macro-ATR imaging spectroscopy (chapter III-2) and the possibilities achieved with the different FTIR microspectroscopic techniques available nowadays (chapter III-3). In chapter II-1, FTIR microspectroscopy, as a molecular analysis, is presented within an integrated approach with other analytical techniques. The proposed sequence is optimized as a function of the limited quantity of sample available, and this methodology makes it possible to identify the painting materials and to characterise the adopted execution technique and state of conservation. Chapter II-2 describes the characterisation of degradation products with FTIR microscopy, since the investigation of the ageing processes encountered in old artefacts represents one of the most important issues in conservation research. Metal carboxylates resulting from the interaction between pigments and binding media are characterized using synthesised metal palmitates, and their production is detected on copper-, zinc-, manganese- and lead-based pigments (the latter associated with lead carbonate) dispersed in either oil or egg tempera. Moreover, significant effects seem to be obtained with iron and cobalt (acceleration of triglyceride hydrolysis). For the first time on sienna and umber paints, manganese carboxylates are also observed. Finally, in chapter II-3, FTIR microscopy is combined with further elemental analyses to characterise and assess the performance and stability of newly developed treatments, which should better fit conservation-restoration problems. In the third section, chapter III-1 reports an innovative embedding system in potassium bromide, focusing on the characterisation and localisation of organic substances in cross sections. Not only the identification but also the distribution of proteinaceous, lipidic or resinaceous materials is evidenced directly on different paint cross sections, especially in thin layers of the order of 10 µm. Chapter III-2 describes the use of a conventional diamond ATR accessory coupled with a focal plane array to obtain chemical images of multi-layered paint cross sections. A rapid and simple identification of the different compounds is achieved without the use of any infrared microscope objective. Finally, the latest available FTIR techniques are highlighted in chapter III-3 in a comparative study on the characterisation of paint cross sections. Results in terms of spatial resolution, data quality and chemical information are presented; in particular, a new FTIR microscope equipped with a linear array detector, which reduces the spatial resolution limit to approximately 5 µm, provides very promising results and may represent a good alternative to both mapping and imaging systems.

Relevance: 100.00%

Abstract:

Traceability is often perceived by food industry executives as an additional cost of doing business, one to be avoided if possible. However, a traceability system can in fact ensure compliance with regulatory requirements, increase food safety and recall performance, and improve marketing performance as well as supply chain management. Thus, traceability affects the business performance of firms in terms of the costs and benefits determined by traceability practices. Costs and benefits are in turn affected by factors such as firm characteristics, the level of traceability and, lastly, the costs and benefits perceived prior to traceability implementation. This thesis was undertaken to understand how these factors are linked and how they affect the outcome in terms of costs and benefits. Analysis of the results of a plant-level survey of the Italian fish processing industry revealed that processors generally adopt various levels of traceability, while government support appears to increase the level of traceability as well as the expected and actual costs and benefits. None of the firm characteristics, with the exception of government support, influences costs or the level of traceability. Only firm size and the level of QMS certification are linked with benefits, while the precision of traceability increases benefits without affecting costs. Finally, traceability practices appear to be driven by requests from "external" stakeholders such as government, authorities and customers rather than by "internal" factors (e.g. improving firm management), while the traceability system does not provide any added value from the market in terms of a price premium or an increase in market share.

Relevance: 100.00%

Abstract:

The ongoing innovation in the microwave transistor technologies used to implement microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the potential of the technology. After the choice of the technology to be used in a particular application, the circuit designer has few degrees of freedom when carrying out the design; in most cases, owing to technological constraints, foundries develop and provide customized processes optimized for a specific performance target such as power, low noise, linearity or bandwidth. For these reasons circuit design is always a "compromise", a search for the best solution that reaches a trade-off among the desired performances. This approach becomes crucial in the design of microwave systems for satellite applications: the tight space constraints require the best performance to be reached under electrical and thermal conditions suitably de-rated with respect to the maximum ratings of the adopted technology, in order to ensure adequate levels of reliability. In particular, this work concerns one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element that weighs most heavily on the space, weight and cost of the telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many journal transactions and publications demonstrate different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. Starting from existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories of power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control and shaping of the dynamic Load Line, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the Load Line at the intrinsic terminals of the selected active device. The methodology proposed in this first part is based on the assumption that the designer has an accurate electrical model of the device available; the variety of publications on this topic demonstrates how difficult it is to obtain a CAD model capable of taking into account all the non-ideal phenomena which occur when the amplifier operates at such high frequency and power levels. For this reason, and especially for the emerging Gallium Nitride (GaN) technology, in the second section a new approach to power amplifier design is described, based on the experimental characterization of the intrinsic Load Line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of carrying out my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programmes requested by space agencies, with the aim of supporting the transfer of technology from universities to industry and of promoting science-based entrepreneurship. For these reasons the proposed design methodology is explained on the basis of many experimental results.
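
As background, the simplest load-line reasoning links supply voltage, knee voltage and maximum current to the optimum fundamental load and the ideal class-A output power. The sketch below is a first-cut textbook estimate with hypothetical device numbers, not the low-frequency load-line characterization developed in the thesis.

```python
def loadline_estimates(v_dd, v_knee, i_max):
    """First-cut load-line estimates for a power amplifier stage.

    v_dd   : drain supply voltage (V)
    v_knee : knee voltage below which the device leaves its linear region (V)
    i_max  : maximum drain current swing (A)
    Returns the optimum fundamental load resistance and the ideal class-A
    fundamental output power.
    """
    v_swing = v_dd - v_knee                 # available one-sided voltage swing
    r_opt = 2.0 * v_swing / i_max           # optimum load at the intrinsic drain
    p_out = v_swing * i_max / 4.0           # ideal class-A fundamental power (W)
    return r_opt, p_out

# Hypothetical GaN HEMT cell: 28 V supply, 3 V knee, 2 A maximum current.
r_opt, p_out = loadline_estimates(v_dd=28.0, v_knee=3.0, i_max=2.0)
print(f"R_opt ~ {r_opt:.1f} ohm, ideal class-A P_out ~ {p_out:.1f} W")
```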

Relevance: 100.00%

Abstract:

The OPERA experiment aims at the direct observation of ν_mu -> ν_tau oscillations in the CNGS (CERN Neutrinos to Gran Sasso) neutrino beam produced at CERN; since the ν_e contamination of the CNGS beam is low, OPERA will also be able to study the sub-dominant oscillation channel ν_mu -> ν_e. OPERA is a large-scale hybrid apparatus divided into two supermodules, each equipped with electronic detectors, an iron spectrometer and a highly segmented ~0.7 kton target section made of Emulsion Cloud Chamber (ECC) units. During my research work in the Bologna laboratory I took part in the set-up of the automatic scanning microscopes, studying and tuning the scanning system performance and efficiency with emulsions exposed to a test beam at CERN in 2007. Once the triggered bricks were distributed to the collaboration laboratories, my work centred on the procedure used for the localization and reconstruction of neutrino events.

Relevance: 100.00%

Abstract:

The ALICE experiment at the LHC has been designed to cope with the experimental conditions and observables of a Quark Gluon Plasma reaction. One of the main assets of the ALICE experiment with respect to the other LHC experiments is its particle identification. The large Time-Of-Flight (TOF) detector is the main particle identification detector of the ALICE experiment. The overall time resolution, better than 80 ps, allows particle identification over a large momentum range (up to 2.5 GeV/c for pi/K and 4 GeV/c for K/p). The TOF makes use of the Multi-gap Resistive Plate Chamber (MRPC), a detector with high efficiency, fast response and an intrinsic time resolution better than 40 ps. The TOF detector embeds a highly segmented trigger system that exploits the fast rise time and the relatively low noise of the MRPC strips in order to identify several event topologies. This work aims to provide a detailed description of the TOF trigger system. The results achieved in the 2009 cosmic-ray run at CERN are presented to show the performance and readiness of the TOF trigger system. The proposed trigger configurations for proton-proton and Pb-Pb beams are detailed as well, with estimates of the efficiencies and of the purity of the selected samples.
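
The principle behind TOF particle identification, and the reason the quoted time resolutions matter, can be shown with the textbook relation between momentum, track length and flight time. The numbers below (2 GeV/c momentum, 3.7 m path) are only illustrative.

```python
import math

C = 0.299792458  # speed of light in m/ns

def tof_mass(p_gev, path_m, time_ns):
    """Particle mass (GeV/c^2) from momentum, track length and time of flight.

    beta = L / (c * t);  m = (p / beta) * sqrt(1 - beta^2)
    p_gev  : momentum from tracking (GeV/c)
    path_m : track length to the TOF detector (m)
    time_ns: measured time of flight (ns)
    """
    beta = path_m / (C * time_ns)
    return (p_gev / beta) * math.sqrt(max(1.0 - beta * beta, 0.0))

# Illustrative 2 GeV/c track over a 3.7 m path: the flight-time differences
# between pions, kaons and protons are a few hundred picoseconds, which is
# why a time resolution well below 100 ps is needed.
for name, mass in [("pi", 0.1396), ("K", 0.4937), ("p", 0.9383)]:
    energy = math.sqrt(2.0**2 + mass**2)
    t = 3.7 / (C * (2.0 / energy))       # expected flight time (ns)
    print(f"{name}: t = {t:.3f} ns, reconstructed m = {tof_mass(2.0, 3.7, t):.3f} GeV/c^2")
```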

Relevance: 100.00%

Abstract:

The aim of this Doctoral Thesis is to develop a genetic-algorithm-based optimization method to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts from the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the field of aeronautical piston engines, which has lain dormant for several decades as interest shifted towards turbine aircraft, new materials with improved performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very large, with several non-linearities needed to describe the behaviour of the phenomena involved. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive-system configuration of minimum mass while satisfying the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum values of crankshaft and propeller-shaft speed and the maximum pressure in the combustion chamber. The bounds on the design variables, which describe the solution domain from the geometrical point of view, are also introduced. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step the elitist method is applied, in order to protect the fittest individuals from disruption by mutation and recombination and let them survive undamaged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file in order to automatically build a 3D CAD model of each component of the propulsive system, giving a direct preview of the final product while still in the preliminary design phase of the engine. With the purpose of showing the performance of the algorithm and validating this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a four-cylinder, four-stroke Diesel. Many verifications are made on the mechanical components of the engine in order to test their feasibility and decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations.

The geometrical bounds of the design variables are taken from actual engine data and similar design cases. Among the many simulations run to test the algorithm, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod have been automatically built. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) compared with the original configuration, and an acceptable robustness of the method. The algorithm developed here is thus shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could boost the development of aeronautical piston engines, speeding up the production rate and joining modern computational capabilities and technological awareness to long-standing traditional design experience.
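
For illustration only, a minimal real-coded genetic algorithm with elitism is sketched below in Python; the thesis implementation uses the Matlab® Optimization environment, and the toy cost function here merely stands in for the actual mass objective and feasibility constraints.

```python
import random

def genetic_minimise(fitness, bounds, pop_size=40, generations=200,
                     elite=2, mutation_rate=0.1, seed=0):
    """Minimal real-coded GA with elitism, tournament selection,
    uniform crossover and Gaussian mutation.

    fitness : function mapping a candidate (list of floats) to a cost;
              constraint violations can be handled with a penalty term.
    bounds  : list of (low, high) pairs, one per design variable.
    """
    rng = random.Random(seed)

    def random_individual():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def tournament(pop, scores, k=3):
        best = min(rng.sample(range(len(pop)), k), key=lambda i: scores[i])
        return pop[best]

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        ranked = sorted(range(pop_size), key=lambda i: scores[i])
        new_pop = [pop[i][:] for i in ranked[:elite]]            # elitism
        while len(new_pop) < pop_size:
            p1, p2 = tournament(pop, scores), tournament(pop, scores)
            child = [rng.choice(pair) for pair in zip(p1, p2)]   # uniform crossover
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation_rate:                 # Gaussian mutation
                    child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# Toy "mass" to minimise with a penalised constraint x0 + x1 >= 1.
cost = lambda x: x[0]**2 + 2 * x[1]**2 + 1e3 * max(0.0, 1.0 - x[0] - x[1])
print(genetic_minimise(cost, bounds=[(0, 2), (0, 2)]))
```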

Relevance: 100.00%

Abstract:

Combustion control is one of the key factors for obtaining better performance and lower pollutant emissions in diesel, spark-ignition and HCCI engines. An algorithm that estimates, for example, the mean indicated torque of each cylinder could easily be used in control strategies to carry out a trade-off among the cylinders, control cycle-to-cycle variation, or detect misfires. A tool that evaluates the 50% Mass Fraction Burned angle (MFB50), the net Cumulative Heat Release (CHRNET) or the peak value of the Rate of Heat Release (ROHR) could be used to optimize spark advance or detect knock in gasoline engines and to optimize the injection pattern in diesel engines. Modern engine management systems are based on the control of the mean indicated torque produced by the engine: they need a real or virtual sensor in order to compare the measured value with the target one. Many studies have been performed in order to obtain a torque estimate that is accurate and reliable over time. The aim of this PhD activity was to develop two different algorithms. The first is based on the measurement of instantaneous engine speed fluctuations; the speed signal is picked up directly from the sensor facing the toothed wheel mounted on the engine for other control purposes, and the amplitude of the engine speed fluctuations depends on the combustion and on the amount of torque delivered by each cylinder. The second algorithm processes in-cylinder pressure signals in the angular domain; in this case a crankshaft encoder is not necessary, because the angular reference can be obtained using a standard sensor wheel. The results obtained with these two methodologies are compared in order to evaluate which one is suitable for on-board applications, depending on the accuracy required.
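
The pressure-based quantities mentioned above (ROHR, CHRNET, MFB50) follow from the standard single-zone net heat-release analysis. The sketch below implements that textbook analysis for a generic pressure trace; it is not the algorithm developed in the thesis.

```python
import numpy as np

def rohr_chr_mfb50(theta_deg, p_pa, v_m3, gamma=1.35):
    """Single-zone net heat-release analysis of an in-cylinder pressure trace.

    dQ/dtheta = gamma/(gamma-1) * p * dV/dtheta + 1/(gamma-1) * V * dp/dtheta
    theta_deg : crank angle (deg), p_pa : pressure (Pa), v_m3 : cylinder volume (m^3)
    Returns (ROHR in J/deg, net cumulative heat release in J, MFB50 in deg).
    Assumes the cumulative heat release is monotonically increasing over the
    analysed combustion window, so that MFB50 is well defined.
    """
    dv = np.gradient(v_m3, theta_deg)
    dp = np.gradient(p_pa, theta_deg)
    rohr = gamma / (gamma - 1.0) * p_pa * dv + 1.0 / (gamma - 1.0) * v_m3 * dp
    chr_net = np.cumsum(rohr * np.gradient(theta_deg))   # crude running integral (J)
    mfb = chr_net / chr_net[-1]                           # normalised mass fraction burned
    mfb50 = np.interp(0.5, mfb, theta_deg)                # crank angle of 50% burned
    return rohr, chr_net, mfb50
```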