996 results for compromise


Relevance:

10.00%

Publisher:

Abstract:

The research presented in this thesis, as the title itself suggests, is aimed at reducing fuel consumption in cars with a strongly sporting character and high specific performance. In particular, all the activities described refer to a well-defined car model, the Maserati Quattroporte. The work is set against the backdrop of a strong push towards the reduction of so-called greenhouse gases, i.e. carbon dioxide, in line with the provisions of the Kyoto Protocol. The need to reduce CO2 emissions into the atmosphere is affecting every sector of society: from the heating of private buildings to that of industrial plants, from power generation to production processes in the broadest sense. Within this picture, car manufacturers are clearly called upon to make a considerable effort, since cars are blamed for a considerable share of the carbon dioxide produced every day and released into the atmosphere. To the delicate problem of pollution must be added another, perhaps even more immediate and direct, linked to economic reasons. Fossil fuels, as everyone knows, are a non-renewable energy source whose availability depends on deposits located in particular areas of the planet, and these deposits are not inexhaustible. Moreover, the socio-political situation facing the Middle East, combined with the growing demand from countries whose industrialization has only recently begun at a breakneck pace, has literally driven up the price of oil. As a result, a car that is efficient in the broad sense, and therefore has low fuel consumption, is to all intents and purposes a product feature valued from a marketing standpoint, even in the highest market segments. In this research, fuel consumption has been addressed as a consequence of the overall behaviour of the car in terms of efficiency, assessing the best compromise among the different functional areas that make up the vehicle. A substantial part of the work was devoted to developing a calculation model with which to carry out a series of sensitivity analyses on the influence of the various vehicle parameters on overall fuel consumption. On the basis of these indications, a modification of the ratios of the electro-actuated gearbox was proposed with the aim of optimizing the compromise between fuel consumption and performance, without significantly impairing the latter. The proposed solution was actually built and tested on the car, making it possible to verify the results and to carry out an in-depth correlation of the fuel consumption calculation model. The benefit obtained in terms of range was decidedly significant with reference both to the European homologation cycles and to the US ones. The repercussions on performance were also analysed and, in this case too, the large amount of data collected made it possible to improve the correlation level of the performance simulation model. The car with the newly proposed gear ratios was then compared with a prototype Maserati Quattroporte equipped with an automatic gearbox and torque converter. This further activity made it possible to evaluate the different behaviour of the two solutions, both in terms of instantaneous consumption and in terms of the overall consumption measured during the main chassis-dynamometer missions prescribed by the regulations. The last section of the work was devoted to assessing the energy efficiency of the vehicle system, understood as the resistance to motion encountered while travelling at a given speed. The coast-down curves of the Quattroporte and of some competitors were investigated experimentally, and measures aimed at reducing the aerodynamic drag coefficient were proposed, under the constraint of not altering the car's styling.
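
As a purely illustrative aside (not taken from the thesis), the road load measured by a coast-down test is commonly fitted with a quadratic polynomial in speed, and the fitted curve can then be used to estimate the tractive power needed to hold a constant speed. All coefficients and vehicle figures below are hypothetical placeholders.

```python
# Illustrative road-load model from coast-down coefficients (hypothetical values).
# F(v) = F0 + F1*v + F2*v^2, with v in m/s; the quadratic term is dominated by
# aerodynamic drag, roughly 0.5 * rho * Cd * A for frontal area A and drag coefficient Cd.

def road_load(v, f0=160.0, f1=1.5, f2=0.42):
    """Resistance force [N] at speed v [m/s] from coast-down coefficients."""
    return f0 + f1 * v + f2 * v ** 2

def power_at_wheels(v_kmh):
    """Tractive power [kW] needed to hold a constant speed given in km/h."""
    v = v_kmh / 3.6
    return road_load(v) * v / 1000.0

if __name__ == "__main__":
    for speed in (50, 90, 130, 250):
        print(f"{speed:3d} km/h -> {power_at_wheels(speed):6.1f} kW at the wheels")
```

Sketches of this kind show why lowering the drag coefficient pays off mainly at high speed, while gearing changes act through the engine operating point rather than through the road load itself.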

Relevance:

10.00%

Publisher:

Abstract:

The 1970s are in the limelight of growing historiographic attention, partly due to the recent opening of new archival resources. 1973, in particular, holds a special interest in the historian's eyes, as many are the events that happened that year: to name but a few, the Chilean coup, the October War, the ensuing oil crisis, and the Vietnamese peace treaty. So it is perhaps not entirely surprising that not much attention has been paid to the Year of Europe, a nebulous American initiative destined to amount to nothing practical - as Kissinger himself put it, it was destined to be the Year that never Was.1 It is my opinion, however, that its failure should not conceal its historical interest. Even though transatlantic relations have sometimes been seen as an uninterrupted history of crisis,2 in 1973 they reached what could at the time be considered their unprecedented nadir. I believe that a thorough analysis of the events that, during that year, found the US increasingly at odds with the countries of Western Europe is worth carrying out, not only to cast new light on the dynamics of transatlantic relations but also to deepen our comprehension of the internal dynamics of the actors involved, mainly the Nixon administration and a unifying Europe. The Nixon administration had not carefully planned what the initiative should actually amount to, and its official announcement appears to have been one of Kissinger's coups de théâtre. Yet the Year of Europe responded to the vital priority of revitalising relations with Western Europe, a crucial ally neglected for too long. But 1973 did not end with the solemn renewal of the Atlantic Declaration that Kissinger had sought. On the contrary, it saw, for the first time, the countries of the newly enlarged EC engaged in a real, if short-lived, solidarity on foreign policy, which highlighted the Nixon administration's contradictions regarding European integration. Those contradictions, in addition to the numerous tensions that already strained transatlantic relations, set off a downward spiral of misunderstandings and misperceptions, which the unexpected outbreak of the October War made considerably worse. However, even though the tensions did not disappear, the European front soon started to disintegrate, mainly under the strains imposed by the oil crisis. Significant changes in the leadership of the main European countries helped to bring the tone back to normal. Over the course of 1974-75, the substantial failure of the Euro-Arab dialogue, the Gymnich compromise, and frequent and serene bilateral meetings bear witness that the worst was over.

Relevance:

10.00%

Publisher:

Abstract:

The ongoing innovation in the microwave transistor technologies used to implement microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the potential of the technology. Once the technology to be used in the particular application has been chosen, the circuit designer has few degrees of freedom when carrying out his design; in most cases, due to technological constraints, foundries develop and provide customized processes optimized for a specific performance figure such as power, low noise, linearity or bandwidth. For these reasons circuit design is always a “compromise”, a search for the best solution that achieves a trade-off among the desired performances. This approach becomes crucial in the design of microwave systems to be used in satellite applications: the tight constraints of space applications require the best performance to be reached under properly de-rated electrical and thermal conditions, with respect to the maximum ratings provided by the technology used, in order to ensure adequate levels of reliability. In particular, this work concerns one of the most critical components in the front-end of a satellite antenna, the High Power Amplifier (HPA). The HPA is the main source of power dissipation and therefore the element that weighs most heavily on the space, weight and cost of the telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many journal papers and publications demonstrate different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. Starting from the existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories on power amplifier design, the first section of this work describes the effectiveness of a methodology based on the accurate control of the dynamic load line and its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performances and reliability issues as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part is based on the assumption that the designer has an accurate electrical model of the device available; the variety of publications on this topic shows how difficult it is to produce a CAD model capable of taking into account all the non-ideal phenomena that occur when the amplifier operates at such high frequency and power levels. For this reason, especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design based on the experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the opportunity to carry out my Ph.D. in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programmes requested by space agencies, with the aim of supporting technology transfer from universities to industry and of promoting science-based entrepreneurship. For these reasons the proposed design methodology will be explained on the basis of many experimental results.
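
As a rough illustration of the kind of trade-off the load line controls (a textbook class-A estimate, not the methodology developed in the thesis), the optimum load resistance and the maximum linear output power follow directly from the device voltage and current limits. The device figures used below are hypothetical.

```python
# Classical class-A load-line estimate (first-order textbook relations):
#   R_opt = 2 * (V_dd - V_knee) / I_max
#   P_max = (V_dd - V_knee) * I_max / 4
# Real designs, like the methodology described above, shape the dynamic load line
# at the intrinsic device terminals and must also respect thermal de-rating.

def class_a_load_line(v_dd, v_knee, i_max):
    """Return (optimum load resistance [ohm], max linear output power [W])."""
    v_swing = v_dd - v_knee          # usable voltage amplitude around the bias point
    r_opt = 2.0 * v_swing / i_max    # load resistance seen at the intrinsic terminals
    p_max = v_swing * i_max / 4.0    # sinusoidal output power at full swing
    return r_opt, p_max

if __name__ == "__main__":
    # Hypothetical GaN device: 28 V supply, 3 V knee voltage, 2 A maximum drain current.
    r_opt, p_max = class_a_load_line(28.0, 3.0, 2.0)
    print(f"R_opt = {r_opt:.1f} ohm, P_max = {p_max:.1f} W")
```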

Relevance:

10.00%

Publisher:

Abstract:

The aim of this thesis was to study the effects of extremely low frequency (ELF) magnetic fields on potassium currents in a neural cell line (neuroblastoma SK-N-BE), using the whole-cell patch-clamp technique. This technique is a sophisticated tool capable of investigating electrophysiological activity at the single-cell, and even single-channel, level. The total potassium ion current through the cell membrane was measured while exposing the cells to a combination of static (DC) and alternating (AC) magnetic fields, according to the predictions of the so-called ‘Ion Resonance Hypothesis’. For this purpose we designed and fabricated a magnetic field exposure system that reaches a good compromise between magnetic field homogeneity and accessibility to the biological sample under the microscope. The exposure system consists of three large orthogonal pairs of square coils surrounding the patch-clamp set-up and connected to the signal generation unit, and is able to generate different combinations of static and/or alternating magnetic fields. The system was characterized in terms of field distribution and uniformity through both computation and direct field measurements. No statistically significant changes in the potassium ion currents through the cell membrane were revealed when the cells were exposed to the AC/DC magnetic field combinations prescribed by the aforementioned ‘Ion Resonance Hypothesis’.
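
For context only: most formulations of the Ion Resonance Hypothesis tune the AC frequency to the ion cyclotron frequency set by the static field, f_c = qB/(2πm). A minimal sketch of that arithmetic, using the calcium ion as an example, follows; the 50 µT field value is an arbitrary illustration, not the exposure condition used in the experiments.

```python
import math

# Ion cyclotron frequency f_c = q*B / (2*pi*m): the AC frequency that the
# Ion Resonance Hypothesis pairs with a given static (DC) flux density B.

ELEMENTARY_CHARGE = 1.602176634e-19   # C
ATOMIC_MASS_UNIT = 1.66053906660e-27  # kg

def cyclotron_frequency(b_static_tesla, charge_number, mass_amu):
    """Resonance frequency [Hz] of an ion in a static field B [T]."""
    q = charge_number * ELEMENTARY_CHARGE
    m = mass_amu * ATOMIC_MASS_UNIT
    return q * b_static_tesla / (2.0 * math.pi * m)

if __name__ == "__main__":
    # Ca2+ (mass ~40 u, charge +2) in a 50 microtesla static field.
    f = cyclotron_frequency(50e-6, 2, 40.078)
    print(f"Ca2+ resonance at 50 uT: {f:.1f} Hz")  # ~38 Hz, i.e. in the ELF range
```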

Relevance:

10.00%

Publisher:

Abstract:

The present work offers a comprehensive and comparative study of the different legal and regulatory problems involved in international securitization transactions. First, an introduction to securitization is provided, with the basic elements of the transaction, followed by its different varieties, including dynamic securitization and synthetic securitization structures. Together with this introduction to the intricacies of the structure, an insight into the influence of securitization on the financial and economic crisis of 2007-2009 is also provided, as well as an overview of the process of regulatory competition and cooperation that constitutes the framework for the international aspects of securitization. The next Chapter focuses on the aspects that constitute the foundations of structured finance: the inception of the vehicle, and the transfer of the risks associated with the securitized assets, with particular emphasis on the validity of those elements and on how a securitization transaction could be threatened at its root. In this sense, special importance is given to the validity of the trust as an instrument of finance, to the assignment of future receivables or receivables in block, and to the importance of formalities for the validity of corporations, trusts, assignments, etc., and to the interaction of such formalities, contained in general corporate, trust and assignment law, with those contemplated under specific securitization regulations. The next Chapter (III) then focuses on creditor protection aspects. We provide some insights into the debate on the capital structure of the firm and its inadequacy for assessing the financial soundness problems inherent to securitization. We then proceed to analyze the importance of rules on creditor protection in the context of securitization. The corollary lies in the rules applicable in case of insolvency. In this sense, we distinguish the cases where a party involved in the transaction goes bankrupt from those where the transaction itself collapses. Finally, we focus on the scenario where a substance-over-form analysis may compromise some of the elements of the structure (notably the limited liability of the sponsor, and/or the transfer of assets) by means of veil piercing, substantive consolidation, or recharacterization theories. Once these elements have been covered, the next Chapters focus on the regulatory aspects involved in the transaction. Chapter IV deals with “market” regulations, i.e. those concerned with information disclosure and other rules (appointment of the indenture trustee, and elaboration of a rating by a rating agency) concerning the offering of asset-backed securities to the public. Chapter V, on the other hand, focuses on the “prudential” regulation of the entity entrusted with securitizing assets (the so-called Special Purpose Vehicle) and of the other entities involved in the process. Regarding the SPV, reference is made to licensing requirements, restrictions of activities and governance structures to prevent abuses. Regarding the sponsor of the transaction, the focus is on provisions on sound originating practices and on the servicing function. Finally, we study accounting and banking regulations, including the Basel I and Basel II Frameworks, which determine the consolidation of the SPV and the de-recognition of the securitized asset from the originating company’s balance sheet, as well as the subsequent treatment of those assets, in particular by banks. Chapters VI-IX are concerned with liability matters. Chapter VI is an introduction to the different sources of liability. Chapter VII focuses on the liability of the SPV and its management for the information supplied to investors, the management of the asset pool, and the breach of loyalty (or fiduciary) duties. Chapter VIII refers instead to the liability of the originator as a result of such information and statements, but also as a result of inadequate and reckless originating or servicing practices. Chapter IX finally focuses on the third parties entrusted with ensuring the soundness of the transaction towards the market, the so-called gatekeepers. In this respect, we place special emphasis on the liability of indenture trustees, underwriters and rating agencies. Chapters X and XI focus on the international aspects of securitization. Chapter X contains a conflict-of-laws analysis of the different aspects of structured finance. In this respect, a study is made of the laws applicable to the vehicle, to the transfer of risks (either by assignment or by means of derivatives contracts), and to liability issues; a study is also made of the competent jurisdiction (and applicable law) in bankruptcy cases, as well as in cases where a substance-over-form analysis is performed. Special attention is then devoted to the role of financial and securities regulations, as well as to their territorial limits and the extraterritoriality problems involved. Chapter XI supplements the prior Chapter, for it analyzes the limits to the States’ exercise of regulatory power imposed by the personal and “market” freedoms included in the US Constitution or the EU Treaties. Reference is also made to the (still insufficient) rules of the WTO Framework, and to their significance for the States’ recognition and regulation of securitization transactions.

Relevance:

10.00%

Publisher:

Abstract:

The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (in order to be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. The contact models can be subdivided into two different categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations and the contact points are detected during the dynamic simulation by solving the nonlinear algebraic-differential equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories allow the shape of the contact patch and the tangential forces, respectively, to be evaluated. The semi-elastic approach also considers the wheel and the rail as rigid bodies. However, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry). The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories allow the shape of the contact patch and the tangential forces to be evaluated. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, wheel and rail have to be considered elastic bodies governed by Navier's equations and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case. Many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach assures high generality and accuracy but still entails very large computational costs and memory consumption. Because of this computational load and memory consumption, in the current state of the art the integration between multibody and differential modeling is almost absent from the literature, especially in the railway field. However, this integration is very important because only the differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity will be described. Concerning the global models, two new models belonging to the semi-elastic approach will be presented; the models satisfy the following specifications: 1) the models have to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the models have to consider generic railway tracks and generic wheel and rail profiles; 3) the models have to assure a general and accurate handling of multiple contact without simplifying hypotheses on the problem geometry; in particular the models have to evaluate the number and the position of the contact points and, for each point, the contact forces and torques; 4) the models have to be implementable directly online within the multibody models without look-up tables; 5) the models have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with real-time (RT) and hardware-in-the-loop (HIL) applications; 6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction yields the high numerical efficiency that makes the online implementation of the new procedure possible and allows performance comparable with that of commercial multibody software. At the same time, the analytical approach assures high accuracy and generality. Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) the model has to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the model has to consider generic railway tracks and generic wheel and rail profiles; 3) the model has to assure a general and accurate handling of multiple contact without simplifying hypotheses on the problem geometry; in particular the model has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) the model has to be implementable directly online within the multibody models; 5) the model has to assure high numerical efficiency and a reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis for the local contact models); 6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem. Moreover, during the development of the local model, the achievement of a good compromise between accuracy and efficiency turned out to be very important in order to obtain a good integration between multibody and differential modeling. The contact models have then been inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as benchmark is the Manchester Wagon, whose physical and geometrical characteristics are easily available in the literature. The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while, as regards the contact models, C S-functions have been used; this particular Matlab architecture allows the Matlab/Simulink and C/C++ environments to be connected efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has then also been implemented in Simpack Rail, a commercial multibody software package for railway vehicles that has been widely tested and validated. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained with the Matlab/Simulink model and those obtained with the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, we would like to thank Trenitalia and the Regione Toscana for the support provided throughout the Ph.D. activity. We would also like to thank INTEC GmbH, the company that develops the Simpack Rail software, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
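
As a minimal illustration of the kind of relation the global (semi-elastic) models rely on, and not of the new models developed in the thesis, the Hertzian point-contact solution for an elastic sphere pressed against a half-space links the normal force to the indentation. Radii, indentation and material constants below are assumed values.

```python
import math

# Hertz point contact (sphere on elastic half-space):
#   F = (4/3) * E_star * sqrt(R) * delta^1.5
# with 1/E_star = (1 - nu1^2)/E1 + (1 - nu2^2)/E2. Semi-elastic wheel-rail models
# use relations of this kind to map the computed indentation to a normal force.

def hertz_normal_force(delta_m, radius_m, e1=210e9, nu1=0.28, e2=210e9, nu2=0.28):
    """Normal contact force [N] for indentation delta [m] and effective radius R [m]."""
    e_star = 1.0 / ((1.0 - nu1 ** 2) / e1 + (1.0 - nu2 ** 2) / e2)
    return (4.0 / 3.0) * e_star * math.sqrt(radius_m) * delta_m ** 1.5

if __name__ == "__main__":
    # Hypothetical values: 0.46 m effective radius, 0.05 mm indentation, steel on steel.
    print(f"F = {hertz_normal_force(50e-6, 0.46) / 1e3:.1f} kN")
```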

Relevance:

10.00%

Publisher:

Abstract:

Apple consumption is highly recommended for a healthy diet, and the apple is the most important fruit produced in temperate climate regions. Unfortunately, it is also one of the fruits that most often provokes allergy in atopic patients, and the only treatment available to date for apple-allergic patients is avoidance. Apple allergy is due to the presence of four major classes of allergens: Mal d 1 (PR-10/Bet v 1-like proteins), Mal d 2 (thaumatin-like proteins), Mal d 3 (lipid transfer protein) and Mal d 4 (profilin). In this work new advances in the characterization of the apple allergen gene families have been achieved using a multidisciplinary approach. First of all, a genomic approach was used for the characterization of the allergen gene families of Mal d 1 (Chapter 1), Mal d 2 and Mal d 4 (Chapter 5). In particular, in Chapter 1 the study of two large contiguous blocks of DNA sequence containing the Mal d 1 gene cluster on LG16 yielded many new findings on the number and orientation of the genes in the cluster, their physical distances, their regulatory sequences and the presence of other genes or pseudogenes in this genomic region. Three new members were discovered co-localizing with the other Mal d 1 genes of LG16, suggesting that the complexity of the genetic basis of allergenicity will increase with further advances. Many retrotransposon elements were also found in this cluster. Thanks to the development of molecular markers on the two sequences, the physical and genetic maps of the region were successfully anchored. Moreover, in Chapter 5 the existence of additional loci for the thaumatin-like protein family in apple (Mal d 2.03 on LG4 and Mal d 2.02 on LG17), beyond the one reported so far, was demonstrated for the first time. One new locus for profilins (Mal d 4.04) was also mapped on LG2, close to the Mal d 4.02 locus, suggesting a cluster organization for this gene family as well, as is well documented for the Mal d 1 family. Secondly, a methodological approach was used to set up a highly specific tool to discriminate and quantify the expression of each Mal d 1 allergen gene (Chapter 2). In particular, a set of 20 Mal d 1 gene-specific primer pairs for quantitative real-time PCR was validated and optimized. As a first application, this tool was used on leaf and fruit tissues of the cultivar Florina in order to identify the Mal d 1 allergen genes that are expressed in the different tissues. The differential expression found in this study revealed tissue specificity for some Mal d 1 genes: 10 of the 20 Mal d 1 genes were expressed in fruit and are therefore probably more involved in allergic reactions, while 17 of the 20 were expressed in leaves challenged with the fungus Venturia inaequalis and are therefore of interest for the study of the plant defence mechanism. In Chapter 3 the specific expression levels of the 10 Mal d 1 isoallergen genes found to be expressed in fruit were studied for the first time in the skin and flesh of apples of different genotypes. A complex gene expression profile was obtained, owing to the high gene, tissue and genotype variability. Despite this, the expression patterns of Mal d 1.06A and Mal d 1.07 turned out to be particularly associated with the degree of allergenicity of the different cultivars. They were not the most highly expressed Mal d 1 genes in apple, but it was hypothesized here that both qualitative and quantitative aspects of Mal d 1 gene expression levels play a relevant role in determining allergenicity. In Chapter 4 a clear modulation of all 17 PR-10 genes tested in young leaves of Florina after challenge with the fungus V. inaequalis is reported, although with a distinct expression profile for each gene. Interestingly, all the Mal d 1 genes were up-regulated except Mal d 1.10, which was down-regulated after the challenge with the fungus. The differences in direction, timing and magnitude of induction seem to confirm the hypothesis of a subfunctionalization within the gene family despite the high sequence and structural similarity. Moreover, the modulation of PR-10 genes shown in both compatible (Gala-V. inaequalis) and incompatible (Florina-V. inaequalis) interactions contributes to validating the hypothesis of an indirect role for at least some of these proteins in the induced defence responses. Finally, a certain modulation of PR-10 transcripts, found also in leaves treated with water, confirms their ability to respond to abiotic stress as well. To conclude, the genomic approach used here made it possible to create a comprehensive inventory of all the genes of the allergen families, especially in the case of extended gene families like Mal d 1. This knowledge can be considered a basic prerequisite for many further studies. On the other hand, the specific transcriptional approach makes it possible to evaluate the behaviour of the Mal d 1 genes in different samples and conditions and, therefore, to speculate on their involvement in the apple allergenicity process. Considering the double nature of the Mal d 1 proteins, as apple allergens and as PR-10 proteins, the gene expression analysis upon fungal attack laid the basis for unravelling the biological functions of Mal d 1. In particular, the knowledge acquired in this work about the PR-10 genes putatively more involved in the specific Malus-V. inaequalis interaction will be helpful, in the future, to drive apple breeding for hypo-allergenic genotypes without compromising the plants' response mechanisms to stress conditions. In the future, the survey of the differences in allergenicity among cultivars has to be made more thorough, including other genotypes and allergic patients in the tests. After this, an allelic diversity analysis of the high- and low-allergenic cultivars across all the allergen genes, in particular those whose transcription levels correlate with allergenicity, will provide the genetic background of the low-allergenic ones. This step from genes to alleles will allow the development of molecular markers that might be used to effectively direct apple breeding for hypo-allergenicity. Another important step forward for the study of apple allergens will be the use of a specific proteomic approach, since apple allergy is a multifactorial disease and only an interdisciplinary and integrated approach can be effective for its prevention and treatment.
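
As a generic illustration of how relative expression is commonly derived from real-time PCR data (the exact quantification scheme used in the thesis is not stated here), the 2^-ΔΔCt method compares a target gene with a reference gene across two samples. Gene roles and Ct values in the example are placeholders.

```python
# Livak 2^-DeltaDeltaCt method: relative expression of a target gene (e.g. a
# Mal d 1 isoform) normalised to a reference gene and to a calibrator sample.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_calibrator, ct_ref_calibrator):
    """Fold change of the target gene in 'sample' vs 'calibrator' (2^-ddCt)."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

if __name__ == "__main__":
    # Hypothetical Ct values: target gene in fruit skin vs leaf, normalised to a housekeeping gene.
    fold = relative_expression(24.1, 18.3, 27.5, 18.1)
    print(f"Fold change: {fold:.2f}")
```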

Relevance:

10.00%

Publisher:

Abstract:

The Italian radio telescopes are currently undergoing a major upgrade in response to the growing demand for deep radio observations, such as surveys over large sky areas or observations of large samples of compact radio sources. The optimised employment of the Italian antennas, originally constructed mainly for VLBI activities and provided with a control system (FS, Field System) not tailored to single-dish observations, required important modifications, in particular of the guiding software and of the data acquisition system. The production of a completely new control system, called ESCS (Enhanced Single-dish Control System), for the Medicina dish started in 2007, in synergy with the software development for the forthcoming Sardinia Radio Telescope (SRT). The aim is to produce a system optimised for single-dish observations in continuum, spectrometry and polarimetry. ESCS is also planned to be installed at the Noto site. A substantial part of this thesis work consisted in designing and developing subsystems within ESCS, in order to provide the software with the tools needed to carry out large maps, spanning from the implementation of On-The-Fly fast scans (following both conventional and innovative observing strategies) to the production of standard single-dish output files and the realisation of tools for the quick-look of the acquired data. The test period coincided with the commissioning phase of two devices temporarily installed, while waiting for the SRT to be completed, on the Medicina antenna: an 18-26 GHz 7-feed receiver and the 14-channel analogue backend developed for its use. It is worth stressing that this is the only K-band multi-feed receiver available worldwide at present. The commissioning of the overall hardware/software system constituted a considerable section of the thesis work. Tests were carried out in order to verify the system stability and its capabilities, down to sensitivity levels which had never been reached at Medicina using the previous observing techniques and hardware devices. The aim was also to assess the scientific potential of the multi-feed receiver for the production of wide maps, exploiting its temporary availability on a mid-sized antenna. Dishes like the 32-m antennas at Medicina and Noto, in fact, offer the best conditions for large-area surveys, especially at high frequencies, as they provide a suitable compromise between beam sizes large enough to quickly cover wide areas of the sky (typical of small telescopes) and sensitivity (typical of large telescopes). The KNoWS (K-band Northern Wide Survey) project is aimed at the realisation of a full northern-sky survey at 21 GHz; its pilot observations, performed using the new ESCS tools and a peculiar observing strategy, constituted an ideal test-bed both for ESCS itself and for the multi-feed/backend system. The KNoWS group, of which I am part, supported the commissioning activities, also providing map-making and source-extraction tools in order to complete the necessary data reduction pipeline and assess the general scientific capabilities of the system. The K-band observations, which were carried out in several sessions between December 2008 and March 2010, were accompanied by a 5 GHz test survey performed during the summertime, which is not suitable for high-frequency observations. This activity was conceived in order to check the new analogue backend separately from the multi-feed receiver and, at the same time, to produce original scientific data (the 6-cm Medicina Survey, 6MS, a polar cap survey intended to complete PMN-GB6 and provide all-sky coverage at 5 GHz).
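
As an order-of-magnitude sketch of the constraint that drives On-The-Fly mapping (not the actual ESCS implementation), the beam width sets both the map sampling and the minimum data dump rate for a given scan speed. The beam-size factor and the scan speed used below are assumed, approximate values.

```python
import math

# Approximate beam size FWHM ~ 1.2 * lambda / D, and the dump rate needed to keep
# at least ~2 samples per beam while scanning On-The-Fly at a given angular speed.

C_LIGHT = 299792458.0  # m/s

def beam_fwhm_arcmin(freq_ghz, dish_diameter_m, factor=1.2):
    """Approximate half-power beam width in arcminutes."""
    wavelength = C_LIGHT / (freq_ghz * 1e9)
    fwhm_rad = factor * wavelength / dish_diameter_m
    return math.degrees(fwhm_rad) * 60.0

def min_dump_rate_hz(scan_speed_arcmin_s, fwhm_arcmin, samples_per_beam=2.0):
    """Minimum sampling rate so the scan is adequately sampled along the scan direction."""
    return samples_per_beam * scan_speed_arcmin_s / fwhm_arcmin

if __name__ == "__main__":
    fwhm = beam_fwhm_arcmin(21.0, 32.0)    # ~1.8 arcmin for a 32-m dish at 21 GHz
    rate = min_dump_rate_hz(15.0, fwhm)    # hypothetical 15 arcmin/s OTF scan
    print(f"FWHM ~ {fwhm:.1f} arcmin, dump rate >= {rate:.1f} Hz")
```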

Relevance:

10.00%

Publisher:

Abstract:

In my years of research I focused my studies on different physiological problems. Together with my supervisors, I developed and improved different mathematical models in order to create valid tools for a better understanding of important clinical issues. The aim of all this work is to develop tools for learning and understanding cardiac and cerebrovascular physiology and pathology, generating research questions and developing clinical decision support systems useful for intensive care unit patients.

I. ICP Model Designed for Medical Education. We developed a comprehensive cerebral blood flow and intracranial pressure model to simulate and study the complex interactions in cerebrovascular dynamics caused by multiple simultaneous alterations, including normal and abnormal functional states of cerebral autoregulation. Individual published equations (derived from prior animal and human studies) were implemented into a comprehensive simulation program. Included in the normal physiological modelling were intracranial pressure, cerebral blood flow, blood pressure, and carbon dioxide (CO2) partial pressure. We also added external and pathological perturbations, such as the head-up position and intracranial haemorrhage. The model performed in a clinically realistic way when given inputs from published traumatized patients and from cases encountered by clinicians. The pulsatile nature of the output graphics was easy for clinicians to interpret. The manoeuvres simulated include changes of basic physiological inputs (e.g. blood pressure, central venous pressure, CO2 tension, head-up position, and respiratory effects on vascular pressures) as well as pathological inputs (e.g. acute intracranial bleeding and obstruction of cerebrospinal fluid outflow). Based on the results, we believe the model would be useful to teach the complex relationships of brain haemodynamics and to study clinical research questions such as the optimal head-up position, the effects of intracranial haemorrhage on cerebral haemodynamics, and the best CO2 concentration to reach the optimal compromise between intracranial pressure and perfusion. We believe this model would be useful for both beginners and advanced learners. It could be used by practicing clinicians to model individual patients (entering the effects of the needed clinical manipulations, and then running the model to test for optimal combinations of therapeutic manoeuvres).

II. A Heterogeneous Cerebrovascular Mathematical Model. Cerebrovascular pathologies are extremely complex, due to the multitude of factors acting simultaneously on cerebral haemodynamics. In this work, the mathematical model of cerebral haemodynamics and intracranial pressure dynamics described in point I is extended to account for heterogeneity in cerebral blood flow. The model includes the Circle of Willis, six regional districts independently regulated by autoregulation and CO2 reactivity, distal cortical anastomoses, the venous circulation, the cerebrospinal fluid circulation, and the intracranial pressure-volume relationship. Results agree with data in the literature and highlight the existence of a monotonic relationship between the transient hyperaemic response and the autoregulation gain. During unilateral internal carotid artery stenosis, local blood flow regulation is progressively lost in the ipsilateral territory with the presence of a steal phenomenon, while the anterior communicating artery plays the major role in redistributing the available blood flow. Conversely, the distal collateral circulation plays a major role during unilateral occlusion of the middle cerebral artery. In conclusion, the model is able to reproduce several different pathological conditions characterized by heterogeneity in cerebrovascular haemodynamics and not only can explain generalized results in terms of the physiological mechanisms involved, but also, by individualizing parameters, may represent a valuable tool to help with difficult clinical decisions.

III. Effect of the Cushing Response on Systemic Arterial Pressure. During cerebral hypoxic conditions, the sympathetic system causes an increase in arterial pressure (Cushing response), creating a link between the cerebral and the systemic circulation. This work investigates the complex relationships among cerebrovascular dynamics, intracranial pressure, the Cushing response, and short-term systemic regulation during plateau waves, by means of an original mathematical model. The model incorporates the pulsating heart, the pulmonary circulation and the systemic circulation, with an accurate description of the cerebral circulation and intracranial pressure dynamics (the same model as in the first paragraph). Various regulatory mechanisms are included: cerebral autoregulation, local blood flow control by oxygen (O2) and/or CO2 changes, and sympathetic and vagal regulation of cardiovascular parameters by several reflex mechanisms (chemoreceptors, lung-stretch receptors, baroreceptors). The Cushing response has been described by assuming a dramatic increase in sympathetic activity to the vessels during a fall in brain O2 delivery. With this assumption, the model is able to simulate the cardiovascular effects experimentally observed when intracranial pressure is artificially elevated and maintained at a constant level (arterial pressure increase and bradycardia). According to the model, these effects arise from the interaction between the Cushing response and the baroreflex response (secondary to the arterial pressure increase). Patients with severe head injury were then simulated by reducing intracranial compliance and cerebrospinal fluid reabsorption. With these changes, oscillations with plateau waves developed. In these conditions, model results indicate that the Cushing response may have both positive effects, reducing the duration of the plateau phase via an increase in cerebral perfusion pressure, and negative effects, increasing the intracranial pressure plateau level, with a risk of greater compression of the cerebral vessels. This model may be of value in assisting clinicians to find the balance between the clinical benefits of the Cushing response and its shortcomings.

IV. Comprehensive Cardiopulmonary Simulation Model for the Analysis of Hypercapnic Respiratory Failure. We developed a new comprehensive cardiopulmonary model that takes into account the mutual interactions between the cardiovascular and respiratory systems along with their short-term regulatory mechanisms. The model includes the heart, the systemic and pulmonary circulations, lung mechanics, gas exchange and transport equations, and cardio-ventilatory control. Results show good agreement with published patient data in normoxic and hyperoxic hypercapnia simulations. In particular, the simulations predict a moderate increase in mean systemic arterial pressure and heart rate, with almost no change in cardiac output, paralleled by a relevant increase in minute ventilation, tidal volume and respiratory rate. The model can represent a valid tool for clinical practice and medical research, providing an alternative to purely experience-based clinical decisions.

In conclusion, models are capable not only of summarizing current knowledge, but also of identifying missing knowledge. In the former case they can serve as training aids for teaching the operation of complex systems, especially if the model can be used to demonstrate the outcome of experiments. In the latter case they generate experiments to be performed to gather the missing data.
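
As a minimal, stand-alone illustration of one relationship such models build on (not the model described above), the intracranial pressure-volume curve is often described as monoexponential, so that compliance falls as pressure rises. The baseline pressure and elastance coefficient below are assumed for illustration only.

```python
import math

# Monoexponential intracranial pressure-volume relationship (Marmarou-type):
#   ICP = P0 * exp(E * dV)
# where E is the elastance coefficient. Compliance C = 1 / (E * ICP) therefore
# decreases as pressure rises, which is why small volume additions become dangerous.

def icp_after_volume_load(p0_mmhg, elastance_per_ml, added_volume_ml):
    """Intracranial pressure [mmHg] after adding a volume [ml] to the craniospinal space."""
    return p0_mmhg * math.exp(elastance_per_ml * added_volume_ml)

if __name__ == "__main__":
    # Hypothetical baseline: ICP 10 mmHg, elastance coefficient 0.11 per ml.
    for dv in (0.0, 5.0, 10.0, 20.0):
        icp = icp_after_volume_load(10.0, 0.11, dv)
        print(f"+{dv:4.1f} ml -> ICP {icp:5.1f} mmHg")
```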

Relevance:

10.00%

Publisher:

Abstract:

The aim of the study was to identify expression signatures unique to specific stages of osteoblast differentiation, in order to improve our knowledge of the molecular mechanisms underlying bone repair and regeneration. We performed a microarray analysis of the whole transcriptome of human mesenchymal stem cells (hMSCs) obtained from the femoral canal of patients undergoing hip replacement. By defining different time points within the differentiation and mineralization phases of hMSCs, temporal gene expression changes were visualised. Importantly, the gene expression of adherent bone marrow mononuclear cells, being the undifferentiated progenitors of bone cells, was used as the reference. In addition, only the cultures able to form mineral nodules at the final time point were considered for the gene expression analyses. To obtain the genes of interest, we focused only on genes: i) whose expression was significantly upregulated; ii) which are involved in pathways or biological processes relevant to the proliferation, differentiation and functions of bone cells; iii) which changed considerably during the different steps of differentiation and/or mineralization. Among the 213 genes identified as differentially expressed by the microarray analysis, we selected 65 molecular markers related to specific steps of osteogenic differentiation. These markers are grouped into various gene clusters according to their involvement in processes which play a key role in bone cell biology, such as angiogenesis, ossification, cell communication and development, and in pathways such as the TGF-beta and Wnt signalling pathways. Taken together, these results allow us to monitor hMSC cultures and to distinguish between different stages of differentiation and mineralization. The signatures represent a useful tool for analysing a broad spectrum of functions of hMSCs cultured on scaffolds, especially when the constructs are conceived for releasing growth factors or other signals to promote bone regeneration. Moreover, this work will enhance our understanding of bone development and will enable us to recognize molecular defects that compromise normal bone function, as occurs in pathological conditions.

Relevance:

10.00%

Publisher:

Abstract:

The present PhD thesis summarizes a three-year study on the neutronic investigation of a new-concept nuclear reactor aimed at the optimization and sustainable management of nuclear fuel in a possible European scenario. A new-generation nuclear reactor for the nuclear renaissance is indeed desired by today's industrialized world, both as an answer to the energy question arising from the continuously growing energy demand together with the corresponding reduction in oil availability, and as an answer to the environmental question, requiring a sustainable energy source free from long-lived radioisotopes and therefore from geological repositories. Among the Generation IV candidate typologies, the Lead Fast Reactor concept has been pursued, being the one rated highest in sustainability. The European Lead-cooled SYstem (ELSY) was investigated first. The neutronic analysis of the ELSY core has been performed via deterministic analysis by means of the ERANOS code, in order to retrieve a stable configuration for the overall design of the reactor. Further analyses have been carried out by means of the Monte Carlo general-purpose transport code MCNP, in order to check the former analysis and to define an exact model of the system. An innovative system of absorbers has been conceptualized and designed both for the reactivity compensation and regulation of the core over the cycle swing and for safety, in order to guarantee the cold shutdown of the system in case of accident. Aiming at the sustainability of nuclear energy, the steady-state nuclear equilibrium has been investigated and generalized into the definition of the “extended” equilibrium state. On this basis, the Adiabatic Reactor Theory has been developed, together with a New Paradigm for Nuclear Power: in order to design a reactor that does not exchange anything valuable with the environment (hence the term “adiabatic”), in the sense of both plutonium and minor actinides, it is indeed necessary to invert the logical design scheme of nuclear cores, starting from the definition of the equilibrium composition of the fuel and subordinating the whole core design to it. The New Paradigm has then been applied to the core design of an Adiabatic Lead Fast Reactor complying with the ELSY overall system layout. A complete core characterization has been carried out in order to assess criticality and power flattening; a preliminary evaluation of the main safety parameters has also been performed to verify the viability of the system. Burn-up calculations have then been performed in order to investigate the operating cycle of the Adiabatic Lead Fast Reactor; the fuel performances have therefore been extracted and inserted into a more general analysis for a European scenario. The present nuclear reactor fleet has been modeled and its evolution simulated by means of the COSI code in order to investigate the material fluxes to be managed in the European region. Different plausible scenarios have been identified to forecast the evolution of European nuclear energy production, including one involving the introduction of Adiabatic Lead Fast Reactors, and compared in order to better analyze the advantages introduced by the adoption of new-concept reactors. Finally, since both ELSY and the ALFR represent new-concept systems based on innovative solutions, the neutronic design of a demonstrator reactor has been carried out: such a system is intended to prove the viability of the technology to be implemented in the First-of-a-Kind industrial power plant, with the aim of validating the general strategy to be used to the largest possible extent. It was therefore chosen to base the DEMO design on a compromise between the demonstration of developed technology and the testing of emerging technology, in order to reduce uncertainties about construction and licensing, both by validating the main features and performances of ELSY/ALFR and by qualifying numerical codes and tools.
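
For reference only (a generic textbook relation, not a formula specific to this thesis), the burn-up calculations mentioned above rest on a nuclide balance of the Bateman type, written in one-group form as

\[
\frac{dN_i}{dt} \;=\; \sum_{j \ne i} \left( l_{ij}\,\lambda_j + \phi\, f_{ij}\,\sigma_j \right) N_j \;-\; \left( \lambda_i + \phi\,\sigma_{a,i} \right) N_i ,
\]

where N_i is the atomic density of nuclide i, λ_j the decay constants, σ_j and σ_a,i the one-group reaction and absorption cross sections, φ the neutron flux, and l_ij, f_ij the branching fractions of the decays and reactions leading from nuclide j to nuclide i. Loosely speaking, the “extended” equilibrium sought for the adiabatic core corresponds to requiring that, over a cycle, this balance produces no net build-up or depletion of plutonium and the minor actinides.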

Relevance:

10.00%

Publisher:

Abstract:

Introduction: In recent years cardiac surgery for congenital heart disease (CHD) has dramatically reduced mortality, modifying prognosis, but at the same time has increased morbidity in this patient population. The respiratory and cardiovascular systems are closely connected, both anatomically and functionally, so that alterations of pulmonary hemodynamic conditions modify respiratory function. While very short-term alterations of respiratory mechanics after surgery have been investigated by many authors, far fewer works have focused on long-term changes. In these subjects resting respiratory function may be limited by several factors: the CHD itself (fetal pulmonary perfusion influences vascular and alveolar development), extracorporeal circulation (CEC), thoracotomy and/or sternotomy, rib and sternal contusions, and pleural adhesions and pleural fibrosis secondary to surgical injury. Moreover, the inflammatory cascade triggered by CEC can cause endothelial damage and compromise gas exchange. Aims: The project was conceived to 1) determine the severity of respiratory functional impairment in different CHDs after surgical correction/palliation; 2) identify the CHDs most and least affected by pulmonary impairment; 3) find a correlation between specific hemodynamic conditions and functional anomalies, and 4) between resting respiratory function and the cardiopulmonary exercise test. Materials and methods: We studied 113 subjects with surgically treated CHD, followed by the Pediatric Cardiology and Cardiac Surgery Unit and divided into groups according to pulmonary blood flow (group 0: 28 patients with normal pulmonary flow; group 1: 22 patients with increased flow; group 2: 43 patients with decreased flow; group 3: 20 patients with total cavo-pulmonary anastomosis, TCPC), and we compared them to 37 age- and sex-matched healthy subjects. In the Pediatric Pulmonology Unit all patients performed respiratory function tests (static and dynamic volumes, flow/volume curve, airway resistance (Raw) and conductance (Gaw), lung diffusion of CO (DLCO) and DLCO/alveolar volume), and on the same day the CHD patients underwent a cardiopulmonary exercise test. All were examined, underwent allergological tests, and had a respiratory medical history taken. Results: a restrictive pattern (assessed on total lung capacity, TLC, and vital capacity, VC) was present in all CHD groups, reaching up to 45% in groups 2 and 3. Comparing all groups, we found a significant difference in TLC between healthy subjects and group 2 (p=0.001) and group 3 (p=0.004), and in VC between group 2 and healthy subjects (p=0.001) and group 1 (p=0.034). Inspiratory capacity (IC) was decreased in group 2 relative to healthy subjects (p<0.001) and to group 1 (p=0.037). We showed a direct correlation of TLC and VC with age at surgery (p=0.01) and an inverse correlation with the number of surgical interventions (p=0.03). A reduced FEV1/FVC ratio, reduced Gaw and increased Raw were mostly present in group 3. DLCO was impaired in all groups, but in up to 80% of group 3 and 50% of group 2; when corrected for alveolar volume (DLCO/VA), the reduction persisted in groups 3 (20%), 2 (6.2%) and 0 (7.1%). The exercise test was impaired in all groups: VO2max and VE were markedly reduced in all, but especially in group 3, and the VE/VCO2 slope, a marker of the ventilatory response to exercise, was increased (>36) in 62.5% of group 3, where the other patients in any case had values >32. Comparing groups 3 and 2, the most affected categories, we found differences in VO2max and VE/VCO2 slope (p=0.02 and p<0.0001, respectively). We found correlations between rest and exercise tests, especially in group 0 (between VO2max and FVC, FEV1, VC and IC; inverse relation between the VE/VCO2 slope and FVC, FEV1 and VC), but also in group 1 (VO2max and IC) and group 2 (VO2max and FVC and FEV1); never in group 3. Discussion: In accordance with the literature, we found frequent impairment of resting pulmonary function in all groups, but especially in groups 2 and 3. A restrictive pattern was the most frequent alteration, probably due to compromised pulmonary (vascular and alveolar) development secondary to hypoperfusion in fetal and pre-surgery (and pre-TCPC) life. Parenchymal fibrosis, pleural adhesions and thoracic deformities can add further limitation, as suggested by the correlation with the number of surgical interventions. Exercise tests were limited, particularly in group 3 (complex anatomy and loss of chronotropic response), and we found correlations between rest and exercise tests in all groups except group 3. We speculate that in these patients the hemodynamic limitation outweighs the respiratory contribution, though the latter is markedly decreased.

Relevance:

10.00%

Publisher:

Abstract:

H2 demand is continuously increasing owing to its many relevant applications, for example in ammonia production, refinery processes or fuel cells. The Water Gas Shift (WGS) reaction (CO + H2O = CO2 + H2, ΔH = -41.1 kJ/mol) is a step in H2 production, significantly reducing the CO content and increasing the H2 content of the gas mixtures obtained from steam reforming. Industrially, the reaction is carried out in two stages at different temperatures: the first stage operates at high temperature (350-450 °C) using Fe-based catalysts, while the second one is performed at lower temperature (190-250 °C) over Cu-based catalysts. Recently, however, increasing interest has emerged in developing new catalytic formulations operating in a single stage at middle temperature (MTS), while maintaining optimum characteristics of activity and stability. These formulations may be obtained by improving the activity and selectivity of Fe-based catalysts or by increasing the thermal stability of Cu-based catalysts. In the present work, Cu-based catalysts (Cu/ZnO/Al2O3) prepared starting from hydrotalcite-type precursors show good homogeneity and very interesting physical properties, which worsen as the Cu content increases. Among the catalysts with different Cu contents, the catalyst with 20 wt.% Cu represents the best compromise for obtaining high catalytic activity and stability. On this basis, the catalytic performance seems to depend on both the metallic Cu surface area and the synergetic interactions between Cu and ZnO. Increasing the Al content enhances the homogeneity of the precursors, leading to higher Cu dispersion and consequently better catalytic performance. The catalyst with 20 wt.% Cu and an M(II)/M(III) molar ratio of 2 shows high activity also at 250 °C and good stability at middle temperature. Thus, it may be considered an optimum catalyst for the WGS reaction at middle temperature (about 300 °C). Finally, by replacing 50% (as atomic ratio) of the Zn with Mg (which is not active in the WGS reaction), better physical properties were observed, although associated with poor catalytic performance. This result confirms the important role of ZnO in the catalytic performance, favouring synergetic interactions with metallic Cu.
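
To make the temperature trade-off concrete (an illustrative aside, not part of the experimental work), the WGS equilibrium constant can be estimated with the commonly used Moe correlation, which shows how the exothermic reaction becomes less favoured as the temperature rises.

```python
import math

# Empirical Moe correlation for the Water Gas Shift equilibrium constant:
#   Kp(T) ~ exp(4577.8 / T - 4.33), with T in kelvin.
# Being exothermic (dH ~ -41 kJ/mol), the reaction is favoured at low temperature,
# which is the reason for the two-stage (high-T / low-T) industrial arrangement.

def wgs_equilibrium_constant(temp_celsius):
    """Approximate Kp of CO + H2O <-> CO2 + H2 at the given temperature."""
    t_kelvin = temp_celsius + 273.15
    return math.exp(4577.8 / t_kelvin - 4.33)

if __name__ == "__main__":
    for t in (200, 300, 400):
        print(f"{t} C -> Kp ~ {wgs_equilibrium_constant(t):6.1f}")
```

A single middle-temperature stage around 300 °C therefore trades some equilibrium conversion for faster kinetics and a simpler process layout, which is why catalyst stability at that temperature matters.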

Relevance:

10.00%

Publisher:

Abstract:

Italy and France in Trianon's Hungary: two models of political and cultural penetration. During the first post-war period, Danubian Europe was the theatre of an Italian-French diplomatic contest for hegemony in that part of the continent. Because of its geographical position, Hungary had decisive strategic importance for the ambitions of French and Italian foreign policy. Since culture and propaganda became the fourth dimension of international relations in the 1920s, Rome and Paris developed their diplomatic action in Hungary to affirm not only political and economic influence, but also cultural supremacy. In the 1930s, after Hitler's rise to power, the unstoppable comeback of German political influence in central-eastern Europe brought about the progressive decline of Italian and French political and economic positions in Hungary: only the cultural field allowed Italian-Hungarian and French-Hungarian relations to survive in the context of a Europe dominated by Nazi Germany during the Second World War. Nevertheless, the radical geopolitical changes in Europe after the Second World War did not compromise the Italian and French cultural presence in the new communist Hungary. Although cultural diplomacy is originally motivated by contingent political aims, it does not follow the short time frame of politics; rather, it is the only foreign policy tool that guarantees the preservation of bilateral relations in the long run.

Relevance:

10.00%

Publisher:

Abstract:

Flood disasters are a major cause of fatalities and economic losses, and several studies indicate that global flood risk is currently increasing. In order to reduce and mitigate the impact of river flood disasters, the current trend is to integrate existing structural defences with non-structural measures. This calls for a wider application of advanced hydraulic models for flood hazard and risk mapping, engineering design, and flood forecasting systems. Within this framework, two different hydraulic models for the large-scale analysis of flood events have been developed. The two models, named CA2D and IFD-GGA, adopt an integrated approach based on the diffusive shallow water equations and a simplified finite volume scheme. The models are also designed for massive code parallelization, which is of key importance in reducing run times in large-scale, high-detail applications. The two models were first applied to several numerical test cases, in order to assess the reliability and accuracy of the different model versions. Then, the most effective versions were applied to different real flood events and flood scenarios. The IFD-GGA model showed serious problems that prevented further applications. On the contrary, the CA2D model proved to be fast and robust, and able to reproduce 1D and 2D flow processes in terms of water depth and velocity. In most applications the accuracy of the model results was good and adequate for large-scale analysis. Where complex flow processes occurred, local errors were observed due to the model approximations; however, they did not compromise the correct representation of the overall flow processes. In conclusion, the CA2D model can be a valuable tool for the simulation of a wide range of flood event types, including lowland and flash flood events.
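
A minimal sketch of the scheme family the abstract describes (the actual CA2D and IFD-GGA formulations are not reproduced here): an explicit diffusive-wave update on a regular grid, in which the unit discharge between neighbouring cells follows Manning's formula driven by the water-surface slope. Grid size, roughness, time step and the crude flux limiter below are arbitrary choices made only for the demonstration.

```python
import numpy as np

# Explicit diffusive-wave (zero-inertia) step: the discharge across each cell face
# is given by Manning's formula with the water-surface slope, and depths are then
# updated by a simple finite-volume mass balance. Didactic sketch only.

def _slices(axis, lo):
    s = [slice(None), slice(None)]
    s[axis] = slice(0, -1) if lo else slice(1, None)
    return tuple(s)

def diffusive_wave_step(z, h, dx, dt, n_manning=0.05, h_min=1e-4):
    """Advance the water depth field h [m] over bed elevations z [m] by one time step."""
    eta = z + h                                    # water surface elevation
    h_new = h.copy()
    for axis in (0, 1):                            # faces in the x- and y-directions
        lo, hi = _slices(axis, True), _slices(axis, False)
        # flow depth at the face: highest water surface minus highest bed elevation
        h_face = np.maximum(np.maximum(eta[lo], eta[hi]) - np.maximum(z[lo], z[hi]), 0.0)
        d_eta = eta[hi] - eta[lo]                  # surface difference between neighbours
        q = np.where(h_face > h_min,
                     h_face ** (5.0 / 3.0) / n_manning * np.sqrt(np.abs(d_eta) / dx),
                     0.0)                          # Manning unit discharge [m^2/s]
        dh = q * dt / dx                           # depth exchanged through the face
        dh = np.minimum(dh, np.abs(d_eta) / 4.0)   # crude limiter against oscillations
        dh = np.where(d_eta < 0.0, dh, -dh)        # water moves toward the lower surface
        h_new[lo] -= dh
        h_new[hi] += dh
    return np.maximum(h_new, 0.0)

if __name__ == "__main__":
    z = np.tile(np.linspace(1.0, 0.0, 50), (20, 1))   # gently sloping floodplain
    h = np.zeros_like(z)
    h[:, 0] = 0.5                                     # 0.5 m of water along the upstream edge
    for _ in range(3000):
        h = diffusive_wave_step(z, h, dx=10.0, dt=0.2)
    print(f"wet cells: {(h > 0.01).sum()}, max depth: {h.max():.3f} m")
```

Operational codes of this kind add adaptive time stepping, wetting/drying treatment and parallelization, which is where the run-time gains mentioned in the abstract come from.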