Resumo:
The aim of this paper is to verify the influence of the composition variability of recycled aggregates (RA) from construction and demolition waste (CDW) on the performance of concretes. Performance was evaluated by building mathematical models for compressive strength, modulus of elasticity and drying shrinkage. To obtain these models, an experimental program comprising 50 concrete mixtures was carried out. Specimens were cast and tested, and the results for compressive strength, modulus of elasticity and drying shrinkage were statistically analyzed. The model inputs are the CDW compositions observed in seven Brazilian cities. The results confirm that using RA from CDW in concrete production is quite feasible, regardless of its composition, since compressive strength and modulus of elasticity still reached considerable values. We conclude that the variability presented by recycled aggregates from CDW does not compromise their use in concrete production. However, this information must be used with caution, and experimental tests should always be performed to certify concrete properties.
Resumo:
The ubiquity of time series data across almost all human endeavors has produced a great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used. In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by “suggesting” to the clustering algorithm that subjectively similar, but complex objects belong in a sparser and larger diameter cluster than is truly warranted. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms. 
We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
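The abstract does not spell out the measure itself. In one published formulation of this idea (an assumption here, not stated above), the complexity estimate of a series is the root of the sum of squared successive differences, and the distance is the Euclidean distance scaled by the ratio of the larger to the smaller complexity. A minimal sketch under that assumption:

```python
import numpy as np

def complexity_estimate(t):
    # Root of the sum of squared successive differences:
    # a simple proxy for how "complex" (wiggly) the series is.
    return np.sqrt(np.sum(np.diff(np.asarray(t, float)) ** 2))

def cid(q, c):
    """Complexity-invariant distance: Euclidean distance times a
    correction factor >= 1 that penalizes complexity mismatch."""
    q, c = np.asarray(q, float), np.asarray(c, float)
    ed = np.linalg.norm(q - c)                      # plain Euclidean distance
    ce_q, ce_c = complexity_estimate(q), complexity_estimate(c)
    cf = max(ce_q, ce_c) / max(min(ce_q, ce_c), 1e-12)  # guard against zero
    return ed * cf
```

When the two series have equal complexity the correction factor is 1 and the measure reduces to plain Euclidean distance; a complexity mismatch inflates the distance, which is what pushes complex objects away from simpler classes.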
Resumo:
Studies of skin wound healing in crocodilians are necessary given the frequent occurrence of cannibalism in intensive farming systems. Air temperature affects tissue recovery because crocodilians are ectothermic. Therefore, the kinetics of skin wound healing in Caiman yacare were examined at temperatures of 33°C and 23°C. Sixteen caiman were selected and divided into two groups of eight maintained at 23°C or 33°C. The studied individuals' scars were photographed after 1, 2, 3, 7, 15 and 30 days of the experimental conditions, and samples were collected for histological processing after 3, 7, 15 and 30 days. Macroscopically, the blood clot (heterophilic granuloma) noticeably remained in place covering the wound longer for the caiman kept at 23°C. Microscopically, the temperature of 23°C slowed epidermal migration and skin repair. Comparatively, new blood vessels, labeled using von Willebrand factor (vWF) antibody staining, were more frequently found in the scars of the 33°C group. The collagen fibers in the dermis were denser in the 33°C treatment. Considering the delayed healing at 23°C, producers are recommended to keep wounded animals at 33°C, especially when tanks are cold, to enable rapid wound closure and better repair of collagen fibers because such lesions tend to compromise the use of their skin as leather.
Resumo:
The measurement of mesozooplankton biomass in the ocean requires analytical procedures that destroy the samples. Alternatively, the development of methods to estimate biomass from optical systems and appropriate conversion factors could be a compromise between the accuracy of analytical methods and the need to preserve the samples for further taxonomic studies. Converting the digitized body area of an organism, as recorded by an optical counter or a camera, into individual biomass was suggested as a suitable method to estimate total biomass. In this study, crustacean mesozooplankton from subtropical waters were analyzed, and individual dry weight and body area were compared. The obtained relationships agreed with other measurements of biomass obtained from a previous study in Antarctic waters. Gelatinous mesozooplankton from subtropical and Antarctic waters were also sampled and processed for body area and biomass. As expected, differences between crustacean and gelatinous plankton were highly significant. Transparent gelatinous organisms have a lower dry weight per unit area. Therefore, to estimate biomass from digitized images, pattern recognition capable of discerning at least between crustaceans and gelatinous forms is required.
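Area-to-biomass conversions of this kind are commonly expressed as a power law W = a·A^b fitted by linear regression in log-log space. A minimal sketch; the calibration numbers below are synthetic, not taken from the study:

```python
import numpy as np

# Hypothetical calibration data: digitized body area (mm^2) vs. dry weight (mg).
# These values are illustrative only, generated from an assumed power law.
area = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
weight = 0.05 * area ** 1.5

# Fit W = a * A^b by ordinary least squares on log-transformed data.
b, log_a = np.polyfit(np.log(area), np.log(weight), 1)
a = np.exp(log_a)

def biomass_from_area(A):
    """Estimate individual dry weight (mg) from digitized body area (mm^2)."""
    return a * A ** b
```

In practice separate (a, b) pairs would be fitted per taxon group, which is exactly why the abstract calls for pattern recognition that at least separates crustaceans from gelatinous forms before applying the conversion.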
Resumo:
[EN] This study was performed to test the hypothesis that administration of recombinant human erythropoietin (rHuEpo) in humans increases maximal oxygen consumption by augmenting the maximal oxygen carrying capacity of blood. Systemic and leg oxygen delivery and oxygen uptake were studied during exercise in eight subjects before and after 13 wk of rHuEpo treatment and after isovolemic hemodilution to the same hemoglobin concentration observed before the start of rHuEpo administration. At peak exercise, leg oxygen delivery was increased from 1,777.0+/-102.0 ml/min before rHuEpo treatment to 2,079.8+/-120.7 ml/min after treatment. After hemodilution, oxygen delivery was decreased to the pretreatment value (1,710.3+/-138.1 ml/min). Fractional leg arterial oxygen extraction was unaffected at maximal exercise; hence, maximal leg oxygen uptake increased from 1,511.0+/-130.1 ml/min before treatment to 1,793.0+/-148.7 ml/min with rHuEpo and decreased after hemodilution to 1,428.0+/-111.6 ml/min. Pulmonary oxygen uptake at peak exercise increased from 3,950.0+/-160.7 before administration to 4,254.5+/-178.4 ml/min with rHuEpo and decreased to 4,059.0+/-161.1 ml/min with hemodilution (P=0.22, compared with values before rHuEpo treatment). Blood buffer capacity remained unaffected by rHuEpo treatment and hemodilution. The augmented hematocrit did not compromise peak cardiac output. In summary, in healthy humans, rHuEpo increases maximal oxygen consumption due to augmented systemic and muscular peak oxygen delivery.
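The delivery figures above follow the Fick relation: oxygen delivery is blood flow times arterial oxygen content, where content is dominated by hemoglobin-bound oxygen. A small sketch with illustrative inputs (the flow, hemoglobin and saturation values in the example are assumptions, not the study's data):

```python
def o2_delivery(flow_l_min, hb_g_dl, sao2=0.98, hufner=1.34):
    """Oxygen delivery (ml O2/min) from blood flow and hemoglobin.

    Arterial O2 content = Hufner constant (ml O2 per g Hb) * [Hb] * saturation,
    ignoring the small dissolved-O2 term. Flow in L/min is converted to dl/min.
    """
    cao2 = hufner * hb_g_dl * sao2     # ml O2 per dl of blood
    return flow_l_min * 10.0 * cao2    # ml O2 per minute
```

This makes the study's logic concrete: rHuEpo raises [Hb], which raises content and hence delivery at a given flow, while isovolemic hemodilution lowers [Hb] back and delivery falls with it.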
Resumo:
Doctoral program: Ingeniería de Telecomunicación Avanzada (Advanced Telecommunication Engineering).
Resumo:
Immunosenescence is characterized by a complex remodelling of the immune system, mainly driven by lifelong antigenic burden. Cells of the immune system are constantly exposed to a variety of stressors capable of inducing apoptosis, including antigens and reactive oxygen species continuously produced during the immune response and metabolic pathways. The overall homeostasis of the immune system rests on the balance between antigenic load, oxidative stress and apoptotic processes on one side, and the regenerative potential and renewal of the immune system on the other. Zinc is an essential trace element playing a central role in immune function, being involved in many cellular processes, such as cell death and proliferation, as a cofactor of enzymes, nuclear factors and hormones. In this context, the age-associated changes in the immune system may be due in part to zinc deficiency, which is often observed in aged subjects and can impair several immune functions. Thus, the aim of this work was to investigate the role of zinc in two events essential for immunity during aging, i.e. apoptosis and cell proliferation. Spontaneous and oxidative stress-induced apoptosis were evaluated by flow cytometry in the presence of a physiological concentration of zinc in vitro in peripheral blood mononuclear cells (PBMCs) obtained from healthy subjects of different ages: a group of young subjects, a group of old subjects and a group of nonagenarians. In addition, cell cycle phases were analyzed by flow cytometry in PBMCs obtained from subjects of the same groups in the presence of different concentrations of zinc. We also analyzed the influence of zinc on these processes in relation to the p53 codon 72 polymorphism, known to affect apoptosis and the cell cycle in an age-dependent manner. Zinc significantly reduces spontaneous apoptosis in all age groups, while it significantly increases oxidative stress-induced late apoptosis/necrosis in old and nonagenarian subjects. 
Some factors involved in the apoptotic pathway were studied, and a zinc effect on mitochondrial membrane depolarization, cytochrome C release, caspase-3 activation, PARP cleavage and Bcl-2 expression was found. In conclusion, zinc inhibits spontaneous apoptosis in PBMCs, counteracting the harmful effects of the cell culture conditions. On the other hand, zinc is able to increase toxicity and induce cell death in PBMCs from aged subjects when cells are exposed to stressing agents that compromise antioxidant cellular systems. Concerning the relationship between the susceptibility to apoptosis and the p53 codon 72 genotype, zinc seems to affect apoptosis only in PBMCs from Pro- subjects, suggesting a role of this ion in strengthening the mechanism responsible for the higher propensity of Pro- subjects towards apoptosis. Regarding the cell cycle, high doses of zinc could have a role in the progression of cells from G1 to S phase and from S to G2/M phase. This effect seems to depend on the age of the donor but appears unrelated to the p53 codon 72 genotype. In order to investigate the effect of in vivo zinc supplementation on apoptosis and the cell cycle, PBMCs from a group of aged subjects were studied before and after six weeks of oral zinc supplementation. Zinc supplementation reduces spontaneous apoptosis and strongly reduces oxidative stress-induced apoptosis. On the contrary, no effect of zinc was observed on the cell cycle. Therefore, it is clear that in vitro and in vivo zinc supplementation have different effects on apoptosis and the cell cycle in PBMCs from aged subjects. Further experiments and clinical trials are necessary to clarify the real effect of in vivo zinc supplementation, because these preliminary data could encourage the use of this element in diseases with an oxidative-stress pathogenesis. Moreover, the expression of metallothioneins (MTs), proteins well known for their zinc-binding ability and involved in many cellular processes, i.e. 
apoptosis, metal ion detoxification, oxidative stress and differentiation, was evaluated in total lymphocytes and in CD4+ and CD8+ T lymphocytes from young and old healthy subjects in the presence of different concentrations of zinc in vitro. Literature data report that during ageing the levels of these proteins increase while they concomitantly lose the ability to release zinc. This induces a down-regulation of many biological functions related to zinc, such as metabolism, gene expression and signal transduction. Therefore, these proteins may turn from protective in young-adult age into harmful agents for the immune function in ageing, following the concept that several genes/proteins that increase fitness early in life may have negative effects later in life: the “Antagonistic Pleiotropy Theory of Ageing”. The data obtained in this work indicate a higher and faster expression of MTs with lower doses of zinc in total lymphocytes and in CD4+ and CD8+ T lymphocytes from old subjects, supporting the antagonistic pleiotropic role of these proteins.
Resumo:
The research described in this thesis, as its title suggests, is aimed at reducing fuel consumption in cars of a strongly sporting character and high specific performance. In particular, all the activities described refer to a specific vehicle model, the Maserati Quattroporte. The work is set against the backdrop of a strong push to reduce so-called greenhouse gases, i.e. carbon dioxide, in line with the provisions of the Kyoto Protocol. The need to reduce CO2 emissions into the atmosphere is affecting every sector of society: from the heating of private buildings to that of industrial plants, from energy generation to production processes at large. Within this picture, car manufacturers are clearly called upon for a considerable effort, since a substantial share of the carbon dioxide produced and released into the atmosphere every day is attributed to automobiles. To the delicate problem of pollution must be added an even more immediate and direct one of an economic nature. Fossil fuels are, as is well known, a non-renewable energy source, whose availability depends on deposits located in particular areas of the planet and which are not inexhaustible. Moreover, the socio-political situation the Middle East is facing, combined with the growing demand from countries whose industrialization has recently taken off at a dizzying pace, has literally driven up the price of oil. As a consequence, an efficient and therefore low-consumption vehicle is, to all intents and purposes, a product feature appreciated from a marketing standpoint, even in the highest vehicle segments. 
In this research, fuel consumption was addressed as a consequence of the overall behaviour of the vehicle in terms of efficiency, evaluating the best compromise among the different functional areas that make up the vehicle. A substantial part of the work was devoted to developing a computational model with which to run a series of sensitivity analyses on the influence of the various vehicle parameters on overall fuel consumption. Based on these indications, a modification of the gear ratios of the electro-actuated gearbox was proposed, with the aim of optimizing the trade-off between fuel consumption and performance without significantly impairing the latter. The proposed solution was actually built and tested on the vehicle, making it possible to verify the results and to carry out an in-depth correlation of the fuel-consumption computational model. The benefit obtained in terms of range was decidedly significant with reference to both the European and the US homologation cycles. The repercussions on performance were also analyzed and, here too, the large amount of measured data made it possible to improve the correlation level of the performance simulation model. The vehicle with the newly proposed gearing was then compared with a Maserati Quattroporte prototype equipped with an automatic gearbox and torque converter. This further activity made it possible to evaluate the different behaviour of the two solutions, both in terms of instantaneous consumption and in terms of overall consumption measured during the main chassis-dynamometer missions prescribed by the regulations. The last section of the work was devoted to evaluating the energy efficiency of the vehicle system, understood as the resistance to motion encountered at a given speed. 
The coast-down curves of the Quattroporte and of some competitors were experimentally investigated, and measures were proposed to reduce the aerodynamic drag coefficient while complying with the constraint of not altering the vehicle styling.
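Coast-down testing is commonly reduced to fitting a quadratic road-load curve F(v) = A + B·v + C·v², where A captures rolling resistance and C the aerodynamic term. A minimal sketch of the fit; the coefficients below are synthetic, not the thesis's measured values:

```python
import numpy as np

# Synthetic road-load curve F(v) = A + B*v + C*v^2 with illustrative
# coefficients (A: rolling resistance [N], C: aerodynamic term [N*s^2/m^2]).
v = np.linspace(5.0, 50.0, 10)          # vehicle speed, m/s
F = 180.0 + 1.2 * v + 0.42 * v ** 2     # resistive force, N

# Recover the coefficients by least-squares fit; polyfit returns the
# highest-degree coefficient first, hence the (C, B, A) unpacking order.
C, B, A = np.polyfit(v, F, 2)
```

On real coast-down data v(t) the force is first obtained from F = -m·dv/dt before fitting; reducing C is exactly what the proposed aerodynamic interventions target.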
Resumo:
The 1970s are in the limelight of growing historiographic attention, partly due to the recent opening of new archival resources. 1973, in particular, holds special interest in the historian’s eyes, as many are the events that happened that year: to name but a few, the Chilean coup, the October War, the ensuing oil crisis, and the Vietnamese peace treaty. So it is perhaps not entirely surprising that not much attention has been paid to the Year of Europe, a nebulous American initiative destined to amount to nothing practical - as Kissinger himself put it, it was destined to be the Year that never Was.1 It is my opinion, however, that its failure should not conceal its historical interest. Even though transatlantic relations have sometimes been seen as an uninterrupted history of crisis,2 in 1973 they reached what could then be considered their unprecedented nadir. I believe that a thorough analysis of the events that during that year found the US increasingly at odds with the countries of Western Europe is worth carrying out, not only to cast new light on the dynamics of transatlantic relations but also to deepen our comprehension of the internal dynamics of the actors involved, mainly the Nixon administration and a unifying Europe. The Nixon administration had not carefully planned what the initiative should actually amount to, and its official announcement appears to have been one of Kissinger’s coups de théâtre. Yet the Year of Europe responded to the vital priority of revitalising relations with Western Europe, a crucial ally too long neglected. But 1973 did not end with the solemn renewal of the Atlantic Declaration that Kissinger had sought. On the contrary, it saw, for the first time, the countries of the newly enlarged EC engaged in a real, if short-lived, solidarity on foreign policy, which highlighted the Nixon administration’s contradictions regarding European integration. 
Those, in addition to the numerous tensions that already strained transatlantic relations, gave birth to a downward spiral of incomprehension and misperception, which the unexpected outbreak of the October War seriously worsened. However, even though the tensions did not disappear, the European front soon started to disintegrate, mainly under the strains imposed by the oil crisis. Significant changes in the leadership of the main European countries helped return the tone to normal. During the course of 1974-5, the substantial failure of the Euro-Arab dialogue, the Gymnich compromise, and frequent, serene bilateral meetings bear witness that the worst was over.
Resumo:
The ongoing innovation of the microwave transistor technologies used to implement microwave circuits has to be supported by the study and development of proper design methodologies which, depending on the application, fully exploit the technology's potential. After choosing the technology for a particular application, the circuit designer has few degrees of freedom in carrying out the design; in most cases, due to technological constraints, foundries develop and provide customized processes optimized for a specific performance figure such as power, low noise, linearity or bandwidth. For these reasons circuit design is always a “compromise”: a search for the best solution, a trade-off among the desired performance figures. This approach becomes crucial in the design of microwave systems for satellite applications; the tight space constraints impose reaching the best performance under properly de-rated electrical and thermal conditions, with respect to the maximum ratings of the chosen technology, in order to ensure adequate levels of reliability. In particular, this work concerns one of the most critical components in the front end of a satellite antenna: the High Power Amplifier (HPA). The HPA is the main source of power dissipation and thus the element which weighs most heavily on the space, weight and cost of the telecommunication apparatus; it is clear from the above that design strategies addressing the optimization of power density, efficiency and reliability are of major concern. Many transactions and publications demonstrate different methods for the design of power amplifiers, highlighting the possibility of obtaining very good levels of output power, efficiency and gain. 
Starting from existing knowledge, the target of the research activities summarized in this dissertation was to develop a design methodology capable of optimizing power amplifier performance while complying with all the constraints imposed by space applications, taking the thermal behaviour into account in the same manner as power and efficiency. After a review of the existing theories of power amplifier design, the first section of this work describes the effectiveness of a methodology based on accurate control of the dynamic load line and its shaping, explaining all the steps in the design of two different kinds of high power amplifiers. Considering the trade-off between the main performance figures and reliability as the target of the design activity, we demonstrate that the expected results can be obtained by working on the characteristics of the load line at the intrinsic terminals of the selected active device. The methodology proposed in this first part assumes that the designer has an accurate electrical model of the device available; the variety of publications on this subject shows how difficult it is to build a CAD model capable of taking into account all the non-ideal phenomena which occur when the amplifier operates at such high frequency and power levels. For this reason, and especially for the emerging Gallium Nitride (GaN) technology, the second section describes a new approach to power amplifier design, based on the experimental characterization of the intrinsic load line by means of a low-frequency, high-power measurement bench. Thanks to the possibility of carrying out my Ph.D.
in an academic spin-off, MEC – Microwave Electronics for Communications, the results of this activity have been applied to important research programs requested by space agencies, with the aim of supporting technological transfer from universities to industry and promoting science-based entrepreneurship. For these reasons the proposed design methodology will be explained on the basis of many experimental results.
Resumo:
The aim of this thesis was to study the effects of extremely low frequency (ELF) electromagnetic fields on potassium currents in a neural cell line (neuroblastoma SK-N-BE), using the whole-cell patch clamp technique. This technique is a sophisticated tool capable of investigating electrophysiological activity at the single-cell, and even single-channel, level. The total potassium ion current through the cell membrane was measured while exposing the cells to a combination of static (DC) and alternating (AC) magnetic fields according to the prediction of the so-called “Ion Resonance Hypothesis”. For this purpose we designed and fabricated a magnetic field exposure system reaching a good compromise between magnetic field homogeneity and accessibility to the biological sample under the microscope. The exposure system consists of three large orthogonal pairs of square coils surrounding the patch clamp setup and connected to the signal generation unit, able to generate different combinations of static and/or alternating magnetic fields. The system was characterized in terms of field distribution and uniformity through computation and direct field measurements. No statistically significant changes in the potassium ion currents through the cell membrane were revealed when the cells were exposed to the AC/DC magnetic field combinations prescribed by the aforementioned “Ion Resonance Hypothesis”.
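The Ion Resonance Hypothesis ties the AC frequency to the ion cyclotron frequency set by the DC field, f = qB/(2πm). A small sketch of that relation; the ion and field values in the usage are illustrative choices, not the thesis's exposure parameters:

```python
import math

def ion_resonance_freq(charge_e, mass_u, b_dc_tesla):
    """Cyclotron resonance frequency f = q*B / (2*pi*m) for an ion.

    charge_e: ion charge in elementary charges; mass_u: ion mass in
    atomic mass units; b_dc_tesla: static (DC) magnetic flux density.
    """
    q = charge_e * 1.602176634e-19      # elementary charge, C
    m = mass_u * 1.66053906660e-27      # atomic mass unit, kg
    return q * b_dc_tesla / (2.0 * math.pi * m)

# Example: Ca2+ (charge 2, ~40.078 u) in a 50 uT DC field gives a
# resonance in the tens of Hz, i.e. squarely in the ELF range.
f_ca = ion_resonance_freq(2, 40.078, 50e-6)
```

This is why ELF AC fields are the ones of interest: for biologically relevant ions and geomagnetic-scale DC fields, the predicted resonance frequencies fall well below 100 Hz.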
Resumo:
The present work offers a comprehensive and comparative study of the different legal and regulatory problems involved in international securitization transactions. First, an introduction to securitization is provided, with the basic elements of the transaction, followed by its different varieties, including dynamic securitization and synthetic securitization structures. Together with this introduction to the intricacies of the structure, an insight into the influence of securitization on the financial and economic crisis of 2007-2009 is provided, as well as an overview of the process of regulatory competition and cooperation that constitutes the framework for the international aspects of securitization. The next Chapter focuses on the aspects that constitute the foundations of structured finance: the inception of the vehicle, and the transfer of the risks associated with the securitized assets, with particular emphasis on the validity of those elements and on how a securitization transaction could be threatened at its root. In this sense, special importance is given to the validity of the trust as an instrument of finance, to the assignment of future receivables or receivables in block, and to the importance of formalities for the validity of corporations, trusts, assignments, etc., and to the interaction of the formalities contained in general corporate, trust and assignment law with those contemplated under specific securitization regulations. The next Chapter (III) then focuses on creditor protection. As such, we provide some insights on the debate on the capital structure of the firm and its inadequacy for assessing the financial soundness problems inherent to securitization. We then proceed to analyze the importance of rules on creditor protection in the context of securitization. The corollary lies in the rules applicable in case of insolvency. 
In this sense, we distinguish the cases where a party involved in the transaction goes bankrupt from those where the transaction itself collapses. Finally, we focus on the scenario where a substance-over-form analysis may compromise some of the elements of the structure (notably the limited liability of the sponsor, and/or the transfer of assets) by means of veil piercing, substantive consolidation, or recharacterization theories. Once these elements have been covered, the next Chapters focus on the regulatory aspects involved in the transaction. Chapter IV deals with “market” regulations, i.e. those concerned with information disclosure and other rules (appointment of the indenture trustee, and elaboration of a rating by a rating agency) governing the offering of asset-backed securities to the public. Chapter V, on the other hand, focuses on the “prudential” regulation of the entity entrusted with securitizing assets (the so-called Special Purpose Vehicle) and of other entities involved in the process. Regarding the SPV, reference is made to licensing requirements, restriction of activities and governance structures to prevent abuses. Regarding the sponsor of the transaction, the focus is on provisions on sound originating practices and the servicing function. Finally, we study accounting and banking regulations, including the Basel I and Basel II Frameworks, which determine the consolidation of the SPV and the de-recognition of the securitized assets from the originating company’s balance sheet, as well as the subsequent treatment of those assets, in particular by banks. Chapters VI-IX are concerned with liability matters. Chapter VI is an introduction to the different sources of liability. Chapter VII focuses on the liability of the SPV and its management for the information supplied to investors, the management of the asset pool, and the breach of loyalty (or fiduciary) duties. 
Chapter VIII refers instead to the liability of the originator as a result of such information and statements, but also as a result of inadequate and reckless originating or servicing practices. Chapter IX finally focuses on the third parties entrusted with the soundness of the transaction towards the market, the so-called gatekeepers. In this respect, we place special emphasis on the liability of indenture trustees, underwriters and rating agencies. Chapters X and XI focus on the international aspects of securitization. Chapter X contains a conflict-of-laws analysis of the different aspects of structured finance. In this respect, a study is made of the laws applicable to the vehicle, to the transfer of risks (either by assignment or by means of derivatives contracts), and to liability issues; a study is also made of the competent jurisdiction (and applicable law) in bankruptcy cases, as well as in cases where a substance-over-form analysis is performed. Special attention is then devoted to the role of financial and securities regulations, as well as to their territorial limits and the extraterritoriality problems involved. Chapter XI supplements the prior Chapter, for it analyzes the limits imposed on the States’ exercise of regulatory power by the personal and “market” freedoms included in the US Constitution or the EU Treaties. A reference is also made to the (still insufficient) rules of the WTO Framework, and their significance for the States’ recognition and regulation of securitization transactions.
Resumo:
The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (in order to be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. The contact models can be subdivided into two different categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations, and the contact points are detected during the dynamic simulation by solving the nonlinear algebraic-differential equations associated with the constrained multibody system. Indentation between the bodies is not permitted, and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories make it possible to evaluate the shape of the contact patch and the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies; however, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximated procedures (based on look-up tables and simplifying hypotheses on the problem geometry). 
The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories make it possible to evaluate the shape of the contact patch and the tangential forces. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, wheel and rail have to be considered elastic bodies governed by the Navier equations, and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case. Many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach assures high generality and accuracy but still requires very large computational costs and memory consumption. Due to this high computational load and memory consumption, referring to the current state of the art, the integration between multibody and differential modeling is almost absent in the literature, especially in the railway field. However, this integration is very important, because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity will be described. 
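In the semi-elastic approach described above, the normal force follows from the indentation via Hertz's theory, F = (4/3)·E*·√R·δ^(3/2). A minimal sketch of that force law; the elastic constants and effective radius below are illustrative steel-like values, not parameters from the thesis:

```python
import math

def hertz_normal_force(delta, E=2.1e11, nu=0.3, R=0.46):
    """Hertzian point-contact normal force (N) from indentation delta (m).

    E, nu: Young's modulus and Poisson ratio (assumed identical for wheel
    and rail, steel-like values); R: effective contact radius (m).
    """
    if delta <= 0.0:
        return 0.0                              # bodies separated: no force
    e_star = E / (2.0 * (1.0 - nu ** 2))        # effective modulus, identical bodies
    return (4.0 / 3.0) * e_star * math.sqrt(R) * delta ** 1.5
```

The δ^(3/2) stiffening is the key feature the semi-elastic formulation exploits: allowing a small indentation turns the normal problem into a smooth force law instead of a hard constraint with Lagrange multipliers.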
Concerning the global models, two new models belonging to the semi-elastic approach will be presented; the models satisfy the following specifications: 1) the models have to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the models have to consider generic railway tracks and generic wheel and rail profiles; 3) the models have to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, the models have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) the models have to be implementable directly online within multibody models, without look-up tables; 5) the models have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction yields a high numerical efficiency that makes the online implementation of the new procedure possible and allows performance comparable with that of commercial multibody software. At the same time, the analytical approach assures high accuracy and generality. 
Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) the model has to be 3D and to consider all six relative degrees of freedom between wheel and rail; 2) the model has to consider generic railway tracks and generic wheel and rail profiles; 3) the model has to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, the model has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) the model has to be implementable directly online within multibody models; 5) the model has to assure high numerical efficiency and a reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis for the local contact models); 6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case, the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem. Moreover, during the development of the local model, the achievement of a good compromise between accuracy and efficiency turned out to be very important to obtain a good integration between multibody and differential modeling. At this point, the contact models have been inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as a benchmark is the Manchester Wagon, whose physical and geometrical characteristics are easily available in the literature. 
The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while, as regards the contact models, C S-functions have been used; this particular Matlab architecture allows the Matlab/Simulink and C/C++ environments to be connected efficiently. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has then also been implemented in Simpack Rail, a commercial multibody software for railway vehicles that is widely tested and validated. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained by the Matlab/Simulink model and those obtained by the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, I would like to thank Trenitalia and the Regione Toscana for the support provided during the whole Ph.D. activity. Moreover, I would also like to thank INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
Abstract:
Apple consumption is highly recommended for a healthy diet, and the apple is the most important fruit produced in temperate climate regions. Unfortunately, it is also one of the fruits that most often provoke allergy in atopic patients, and the only treatment available to date for these apple-allergic patients is avoidance. Apple allergy is due to the presence of four major classes of allergens: Mal d 1 (PR-10/Bet v 1-like proteins), Mal d 2 (thaumatin-like proteins), Mal d 3 (lipid transfer protein) and Mal d 4 (profilin). In this work, new advances in the characterization of the apple allergen gene families have been reached using a multidisciplinary approach. First of all, a genomic approach was used for the characterization of the allergen gene families of Mal d 1 (the task of Chapter 1), Mal d 2 and Mal d 4 (the task of Chapter 5). In particular, in Chapter 1 the study of two large contiguous blocks of DNA sequence containing the Mal d 1 gene cluster on LG16 yielded many new findings on the number and orientation of the genes in the cluster, their physical distances, their regulatory sequences and the presence of other genes or pseudogenes in this genomic region. Three new members were discovered co-localizing with the other Mal d 1 genes of LG16, suggesting that the complexity of the genetic basis of allergenicity will increase with new advances. Many retrotransposon elements were also retrieved in this cluster. Thanks to the development of molecular markers on the two sequences, the anchoring of the physical and genetic maps of the region has been successfully achieved. Moreover, in Chapter 5 the existence of loci for the thaumatin-like protein family in apple (Mal d 2.03 on LG4 and Mal d 2.02 on LG17) other than the one reported so far was demonstrated for the first time. Also, one new locus for profilins (Mal d 4.04) was mapped on LG2, close to the Mal d 4.02 locus, suggesting a cluster organization for this gene family, as is well reported for the Mal d 1 family. 
Secondly, a methodological approach was used to set up a highly specific tool to discriminate and quantify the expression of each Mal d 1 allergen gene (the task of Chapter 2). In particular, a set of 20 Mal d 1 gene-specific primer pairs for the quantitative real-time PCR technique was validated and optimized. As a first application, this tool was used on leaf and fruit tissues of the cultivar Florina in order to identify the Mal d 1 allergen genes that are expressed in different tissues. The differential expression retrieved in this study revealed a tissue specificity for some Mal d 1 genes: 10 of the 20 Mal d 1 genes were expressed in fruits and thus are probably more involved in the allergic reactions, while 17 of the 20 Mal d 1 genes were expressed in leaves challenged with the fungus Venturia inaequalis and are therefore probably interesting for the study of the plant defense mechanism. In Chapter 3, the specific expression levels of the 10 Mal d 1 isoallergen genes found to be expressed in fruits were studied for the first time in the skin and flesh of apples of different genotypes. A complex gene expression profile was obtained due to the high gene, tissue and genotype variability. Despite this, the Mal d 1.06A and Mal d 1.07 expression patterns proved to be particularly associated with the degree of allergenicity of the different cultivars. They were not the most expressed Mal d 1 genes in apple, but a relevant importance in the determination of allergenicity was hypothesized here for both qualitative and quantitative aspects of the Mal d 1 gene expression levels. In Chapter 4, a clear modulation of all 17 PR-10 genes tested in young leaves of Florina after challenge with the fungus V. inaequalis has been reported, but with a peculiar expression profile for each gene. Interestingly, all the Mal d 1 genes were up-regulated except Mal d 1.10, which was down-regulated after the challenge with the fungus. 
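Relative quantification in comparisons of real-time PCR data like those described above is commonly performed with the 2^-ΔΔCt (Livak) method, which normalizes the target gene's cycle threshold (Ct) against a reference gene and a calibrator sample. The sketch below illustrates that standard method with purely hypothetical Ct values; the abstract does not state which quantification procedure the thesis actually used.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change in expression via the 2^-DeltaDeltaCt (Livak) method.

    ct_target     : Ct of the gene of interest in the test sample
    ct_ref        : Ct of the reference (housekeeping) gene, same sample
    ct_target_cal : Ct of the gene of interest in the calibrator sample
    ct_ref_cal    : Ct of the reference gene in the calibrator sample
    """
    delta_sample = ct_target - ct_ref          # normalize test sample
    delta_calibrator = ct_target_cal - ct_ref_cal  # normalize calibrator
    return 2.0 ** -(delta_sample - delta_calibrator)


# Hypothetical Ct values: the gene crosses threshold 2 cycles earlier
# (relative to the reference gene) in the test sample than in the
# calibrator, i.e. roughly a 4-fold higher expression.
fold = relative_expression(22.0, 18.0, 24.0, 18.0)  # -> 4.0
```

The method assumes near-100% amplification efficiency for both primer pairs, which is exactly why a primer-pair set must be validated and optimized before use, as the abstract describes.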
The differences in direction, timing and magnitude of induction seem to confirm the hypothesis of a subfunctionalization inside the gene family despite a high sequence and structure similarity. Moreover, a modulation of PR-10 genes shown in both compatible (Gala-V. inaequalis) and incompatible (Florina-V. inaequalis) interactions contributes to validating the hypothesis of an indirect role for at least some of these proteins in the induced defense responses. Finally, a certain modulation of PR-10 transcripts retrieved also in leaves treated with water confirms their ability to respond to abiotic stress as well. To conclude, the genomic approach used here allowed the creation of a comprehensive inventory of all the genes of the allergen families, especially in the case of extended gene families like Mal d 1. This knowledge can be considered a basic prerequisite for many further studies. On the other hand, the specific transcriptional approach made it possible to evaluate the behavior of the Mal d 1 genes in different samples and conditions and, therefore, to speculate on their involvement in the apple allergenicity process. Considering the double nature of Mal d 1 proteins, as apple allergens and as PR-10 proteins, the gene expression analysis upon the attack of the fungus created the basis for unravelling the Mal d 1 biological functions. In particular, the knowledge acquired in this work about the PR-10 genes putatively more involved in the specific Malus-V. inaequalis interaction will be helpful, in the future, to drive apple breeding towards hypo-allergenic genotypes without compromising the mechanisms by which plants respond to stress conditions. In the future, the survey of the differences in allergenicity among cultivars has to be made more thorough by including other genotypes and allergic patients in the tests. 
After this, the allelic diversity analysis of all the allergen genes in high- and low-allergenic cultivars, in particular of the genes whose transcription levels are correlated to allergenicity, will provide the genetic background of the low-allergenic ones. This step from genes to alleles will allow the development of molecular markers that might be used to effectively direct apple breeding for hypo-allergenicity. Another important step forward for the study of apple allergens will be the use of a specific proteomic approach, since apple allergy is a multifactor-determined disease and only an interdisciplinary and integrated approach can be effective for its prevention and treatment.
Abstract:
The Italian radio telescopes are currently undergoing a major upgrade in response to the growing demand for deep radio observations, such as surveys on large sky areas or observations of vast samples of compact radio sources. The optimised employment of the Italian antennas, originally constructed mainly for VLBI activities and provided with a control system (FS - Field System) not tailored to single-dish observations, required important modifications, in particular of the guiding software and data acquisition system. The production of a completely new control system called ESCS (Enhanced Single-dish Control System) for the Medicina dish started in 2007, in synergy with the software development for the forthcoming Sardinia Radio Telescope (SRT). The aim is to produce a system optimised for single-dish observations in continuum, spectrometry and polarimetry. ESCS is also planned to be installed at the Noto site. A substantial part of this thesis work consisted in designing and developing subsystems within ESCS, in order to provide this software with tools to carry out large maps, spanning from the implementation of On-The-Fly fast scans (following both conventional and innovative observing strategies) to the production of single-dish standard output files and the realisation of tools for the quick-look of the acquired data. The test period coincided with the commissioning phase of two devices temporarily installed on the Medicina antenna while waiting for the SRT to be completed: an 18-26 GHz 7-feed receiver and the 14-channel analogue backend developed for its use. It is worth stressing that this is the only K-band multi-feed receiver currently available worldwide. The commissioning of the overall hardware/software system constituted a considerable section of the thesis work. 
Tests were carried out in order to verify the system stability and its capabilities, down to sensitivity levels which had never been reached in Medicina using the previous observing techniques and hardware devices. The aim was also to assess the scientific potential of the multi-feed receiver for the production of wide maps, exploiting its temporary availability on a mid-sized antenna. Dishes like the 32-m antennas at Medicina and Noto, in fact, offer the best conditions for large-area surveys, especially at high frequencies, as they provide a suitable compromise between beam sizes large enough to quickly cover large areas of the sky (typical of small telescopes) and sensitivity (typical of large telescopes). The KNoWS (K-band Northern Wide Survey) project aims at a full-northern-sky survey at 21 GHz; its pilot observations, performed using the new ESCS tools and a peculiar observing strategy, constituted an ideal test-bed for ESCS itself and for the multi-feed/backend system. The KNoWS group, of which I am a member, supported the commissioning activities, also providing map-making and source-extraction tools in order to complete the necessary data reduction pipeline and assess the general scientific capabilities of the system. The K-band observations, carried out in several sessions over the December 2008-March 2010 period, were accompanied by the realisation of a 5 GHz test survey during the summertime, which is not suitable for high-frequency observations. This activity was conceived in order to check the new analogue backend separately from the multi-feed receiver, and to simultaneously produce original scientific data (the 6-cm Medicina Survey, 6MS, a polar cap survey to complete PMN-GB6 and provide an all-sky coverage at 5 GHz).
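The beam-size/sensitivity compromise of a mid-sized dish can be made concrete with the standard diffraction-limited estimate FWHM ≈ k·λ/D. The sketch below is only indicative: it assumes k = 1.22 (uniformly illuminated circular aperture), whereas real feed illumination tapering changes the coefficient somewhat.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def beam_fwhm_arcmin(freq_ghz, dish_diameter_m, k=1.22):
    """Approximate half-power beam width of a single dish, in arcminutes.

    Diffraction-limited estimate FWHM ~ k * lambda / D, with k ~ 1.22
    for a uniformly illuminated circular aperture (indicative only).
    """
    wavelength_m = C / (freq_ghz * 1e9)          # observing wavelength
    fwhm_rad = k * wavelength_m / dish_diameter_m
    return math.degrees(fwhm_rad) * 60.0         # radians -> arcmin


# A 32-m dish at 21 GHz: a beam of roughly 2 arcmin, wide enough to map
# large sky areas quickly while retaining useful collecting area.
print(round(beam_fwhm_arcmin(21.0, 32.0), 2))
```

Under this estimate a larger dish at the same frequency has a proportionally narrower beam, hence more scans are needed to tile the same area, which is the trade-off the abstract alludes to.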