893 results for "Pure points of a measure"


Relevance:

100.00%

Publisher:

Abstract:

We study orthogonal projections to 2-spaces of generic embedded hypersurfaces with boundary in R^4. To this end, we classify simple map germs from R^3 to the plane of codimension less than or equal to 4 whose source contains a distinguished plane preserved by coordinate changes. We also examine their geometrical properties in some detail in order to recognize the cases of codimension less than or equal to 1.

Relevance:

100.00%

Publisher:

Abstract:

Studies of natural and artificial radionuclides in areas such as the Antarctic are key to understanding natural and dynamic processes in marine environments, and are important for determining levels of radioactive elements and local sedimentation rates. Five marine sediment cores were collected at different points of Admiralty Bay, in the Antarctic Peninsula. The purpose of this study was to determine 137Cs, 226Ra and 210Pb activities and sedimentation rates at each site. 137Cs, 210Pb and 226Ra were assayed by gamma counting through direct measurement of the peaks at 661 keV, 47 keV and 609 keV, respectively. Sedimentation rates were obtained from 137Cs and from 210Pb (CIC and CRS models). The activities ranged from 0.84 to 7.09 Bq kg⁻¹ for 137Cs, from 6.77 to 31.07 Bq kg⁻¹ for 226Ra, and from 1.10 to 36.90 Bq kg⁻¹ for 210Pb. The sedimentation rates obtained by the three models ranged from 0.11±0.01 cm y⁻¹ to 0.46±0.05 cm y⁻¹. The levels of 137Cs registered in this study, as in other studies in the Antarctic region, indicate that global fallout is the main source of the artificial radionuclides present in this environment, since the Antarctic has not been directly affected by human activities releasing radioactive elements. Grain-size variations across the studied points of Admiralty Bay may explain the differences found in the vertical distribution of radionuclides, and hence the different sedimentation rates and datings determined in the profiles.
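
For orientation, a minimal sketch of the CIC (constant initial concentration) relation commonly used for 210Pb dating, assuming a constant sedimentation rate s and the 210Pb decay constant λ = ln 2 / 22.3 y⁻¹:

```latex
% CIC model: unsupported 210Pb activity A(z) at depth z
A(z) = A(0)\, e^{-\lambda z / s}
\quad\Longrightarrow\quad
s = \frac{\lambda\, z}{\ln\!\big(A(0)/A(z)\big)}
```

So s follows from a log-linear fit of the unsupported 210Pb profile; the CRS (constant rate of supply) model instead allows the sedimentation rate to vary and dates each depth from the cumulative 210Pb inventory.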

Relevance:

100.00%

Publisher:

Abstract:

The present work proposes a method based on CLV (Clustering around Latent Variables) for identifying groups of consumers in L-shaped data. This kind of data structure is very common in consumer studies, where a panel of consumers is asked to assess the global liking of a certain number of products and the preference scores are arranged in a two-way table Y. External information on both products (physical-chemical description or sensory attributes) and consumers (socio-demographic background, purchase behaviour or consumption habits) may be available in a row descriptor matrix X and a column descriptor matrix Z, respectively. The aim of the method is to automatically provide a consumer segmentation in which all three matrices play an active role in the classification, yielding groups that are homogeneous from all points of view: preference, products and consumer characteristics. The proposed clustering method is illustrated on data from preference studies on food products: juices based on berry fruits and traditional cheeses from Trentino. The hedonic ratings given by the consumer panel on the products under study were explained with respect to the product chemical compounds, sensory evaluations and consumer socio-demographic information, purchase behaviour and consumption habits.
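
As a reference point (our schematic formulation, not spelled out in the abstract): in CLV, each cluster G_k of consumer columns y_j of Y is summarised by a latent component c_k, and the partition and components maximise the summed covariances; in the L-shape setting the latent components can additionally be tied to the external descriptors, e.g. constrained to the column space of X.

```latex
\max_{G_1,\dots,G_K;\; c_1,\dots,c_K}\;
T = \sum_{k=1}^{K} \sum_{j \in G_k} \operatorname{cov}\!\left(y_j,\, c_k\right),
\qquad c_k = X a_k \ \ \text{(when product descriptors are imposed)}
```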

Relevance:

100.00%

Publisher:

Abstract:

Asset Management (AM) is a set of procedures operable at the strategic, tactical and operational levels for managing a physical asset's performance, associated risks and costs over its whole life cycle. AM combines the engineering, managerial and informatics points of view. In addition to internal drivers, AM is driven by the demands of customers (social pull) and regulators (environmental mandates and economic considerations). AM can follow either a top-down or a bottom-up approach. Considering rehabilitation planning at the bottom-up level, the main issue is to rehabilitate the right pipe at the right time with the right technique. Finding the right pipe may be possible and practicable, but determining the timeliness of the rehabilitation and choosing the technique to rehabilitate with is rather abstruse. It is a truism that rehabilitating an asset too early is unwise, just as doing it too late may entail extra expenses en route, in addition to the cost of the rehabilitation exercise per se. One is confronted with a typical Hamlet-esque dilemma: 'to repair or not to repair'; or, put another way, 'to replace or not to replace'. The decision in this case is governed by three factors, not necessarily interrelated: quality of customer service, costs and budget over the life cycle of the asset in question. The goal of replacement planning is to find the juncture in the asset's life cycle where the cost of replacement is balanced by the rising maintenance costs and the declining level of service. System maintenance aims at improving performance and maintaining the asset in good working condition for as long as possible. Effective planning is used to target maintenance activities to meet these goals and minimize costly exigencies. The main objective of this dissertation is to develop a process model for asset replacement planning. The aim of the model is to determine the optimal pipe replacement year by comparing, over time, the annual operating and maintenance costs of the existing asset with the annuity of the investment in a new equivalent pipe at the best market price. It is proposed that risk cost provides an appropriate framework for deciding the balance between investing in replacement and operational expenditure on maintaining an asset. The model describes a practical approach to estimating when an asset should be replaced. A comprehensive list of criteria to be considered is outlined, the main criterion being a vis-à-vis comparison between maintenance and replacement expenditures. The costs of maintaining the assets are described by a cost function related to the asset type, the risks to the safety of people and property owing to the declining condition of the asset, and the predicted frequency of failures. The cost functions reflect the condition of the existing asset at the time the decision to maintain or replace is taken: age, level of deterioration and risk of failure. The process model is applied to the wastewater network of Oslo, the capital city of Norway, and uses available real-world information to forecast life-cycle costs of maintenance and rehabilitation strategies and to support infrastructure management decisions. The case study provides an insight into the various definitions of 'asset lifetime': service life, economic life and physical life.
The results recommend that one common lifetime value should not be applied to all the pipelines in the stock for long-term investment planning; rather, it would be wiser to define different values for different cohorts of pipelines, to reduce the uncertainties associated with generalisations made for simplification. It is envisaged that the more criteria the municipality is able to include in estimating maintenance costs for the existing assets, the more precise the estimation of the expected service life will be. The ability to include social costs makes it possible to compute the asset life based not only on its physical characterisation but also on the sensitivity of network areas to the social impact of failures. This type of economic analysis is very sensitive to model parameters that are difficult to determine accurately. The main value of the approach is its effort to demonstrate that it is possible to include in decision-making factors such as the cost of the risk associated with a decline in the level of performance, the level of this deterioration and the asset's depreciation rate, without looking at age as the sole criterion for replacement decisions.
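
As an illustrative sketch only (the numbers and the exponential deterioration cost are hypothetical assumptions of ours, not the thesis's calibrated cost functions): the replacement year is found where the rising annual operating, maintenance and risk cost of the existing pipe first exceeds the annuity of investing in a new equivalent pipe.

```python
# Hedged sketch of the replacement-timing comparison described above.
# All numbers and the cost function are toy assumptions.

def annuity(investment: float, rate: float, life_years: int) -> float:
    """Equivalent annual cost of an investment spread over its expected life."""
    return investment * rate / (1.0 - (1.0 + rate) ** -life_years)

def replacement_year(annual_cost, investment, rate=0.04, life=80, horizon=60):
    """First year in which keeping the old pipe costs more than replacing it."""
    a = annuity(investment, rate, life)
    for t in range(1, horizon + 1):
        if annual_cost(t) > a:      # O&M + risk cost overtakes the annuity
            return t
    return None                     # replacement not justified within the horizon

# toy cost function: maintenance plus risk cost growing with deterioration
year = replacement_year(lambda t: 400.0 * 1.06 ** t, investment=50_000.0)
print("optimal replacement year:", year)
```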

Relevance:

100.00%

Publisher:

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in medium-voltage power networks, together with the method developed to analyse the data acquired by the measurement system itself and to monitor power quality. In chapter 2 the increasing interest in power quality in electrical systems is illustrated by reporting the international research activity on the problem and the relevant standards and guidelines issued. The issue of the quality of the voltage provided by utilities, and influenced by customers at the various points of a network, has emerged only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the concept of quality of the delivered energy has been associated mostly with its continuity; hence reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is dedicated also to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast scenario of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages. The outcome of the study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis and the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case.
In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions. In chapter 5 the measurement system designed to implement the transient detection and fault location method is presented. The hardware of the measurement chain of every acquisition channel in the remote stations is described. Then the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numeric procedure. The last chapter describes a device, designed and realized during the PhD activity, intended to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out with the aim of providing an alternative to the transducer used, featuring equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
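
As a sketch of how times of occurrence can yield a fault position (the classical double-ended traveling-wave relation, stated under our simplifying assumption of two synchronised stations bounding a line of length ℓ; the thesis's multi-station method may differ in detail):

```latex
% d_A: fault distance from station A; t_A, t_B: transient arrival times
% at the two stations; v: propagation speed of the transient wavefront
d_A = \frac{\ell - v\,(t_B - t_A)}{2}
```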

Relevance:

100.00%

Publisher:

Abstract:

Until recently the debate on the ontology of spacetime had only philosophical significance, since, from a physical point of view, General Relativity has been made "immune" to the consequences of the "Hole Argument" simply by reducing the subject to the assertion that solutions of Einstein's equations which are mathematically different but related by an active diffeomorphism are physically equivalent. From a technical point of view, the natural reading of the consequences of the "Hole Argument" has always been to go further and say that the mathematical representation of spacetime in General Relativity inevitably contains a "superfluous structure" brought to light by the gauge freedom of the theory. This apparent split between the philosophical outcome and the physical one has been corrected thanks to a meticulous and complicated formal analysis of the theory in a fundamental recent (2006) work by Luca Lusanna and Massimo Pauri entitled "Explaining Leibniz equivalence as difference of non-inertial appearances: dis-solution of the Hole Argument and physical individuation of point-events". The main result of this article is to have shown how, from a physical point of view, the point-events of Einstein's empty spacetime, in a particular class of models they consider, are literally identifiable with the autonomous degrees of freedom of the gravitational field (the Dirac observables, DO). In the light of philosophical considerations based on realism assumptions about theories and entities, the two authors then conclude that spacetime point-events have a degree of "weak objectivity", since, depending on a NIF (non-inertial frame), and unlike the points of homogeneous Newtonian space, they are plunged into a rich and complex non-local holistic structure provided by the "ontic part" of the metric field. Therefore, given the complex structure of spacetime that General Relativity highlights, and within the declared limits of a methodology based on a Galilean scientific representation, we can certainly assert that spacetime has "elements of reality"; but the inevitably relational elements involved in the physical detection of point-events in the vacuum of matter (highlighted by the "ontic part" of the metric field, the DO) depend closely on the choice of the global spatiotemporal laboratory in which the dynamics is expressed (NIF). According to the two authors, a peculiar kind of structuralism takes shape: point structuralism, with features common both to the absolutist and substantivalist tradition and to the relationalist one. The intention of this thesis is to propose a method of approaching the problem that is, at least at the beginning, independent of the previous ones, namely an approach based on the possibility of describing the gravitational field at three distinct levels. In other words, keeping the results achieved by the work of Lusanna and Pauri in mind and following their underlying philosophical assumptions, we intend to converge partially on their structuralist approach, but starting from what we believe is the "foundational peculiarity" of General Relativity, that characteristic inherent in the elements that constitute its formal structure: its essentially geometric nature as a theory, considered independently of the empirical necessity of measurement theory.
Observing the theory of General Relativity from this perspective, we can find a "triple modality" for describing the gravitational field that is essentially based on a geometric interpretation of the spacetime structure. The gravitational field is now "visible" no longer in terms of its autonomous degrees of freedom (the DO), which in fact have neither a tensorial nor, therefore, a geometric nature, but is analysable through three levels: a first one, called the potential level (which the theory identifies with the components of the metric tensor); a second one, the connections level (which in the theory determines the forces acting on masses and, as such, offers a level of description related to the one that Newtonian gravitation provides in terms of components of the gravitational field); and, finally, a third level, that of the Riemann tensor, which is peculiar to General Relativity alone. Focusing from the beginning on this "third level" seems to present an immediate first advantage: it leads directly to a description of spacetime properties in terms of gauge-invariant quantities, which allows one to "short-circuit" the long path that, in the treatises analysed, leads to identifying the "ontic part" of the metric field. It is then shown how at this last level it is possible to establish a "primitive level of objectivity" of spacetime in terms of the effects that matter exercises on extended domains of the spacetime geometrical structure; these effects are described by invariants of the Riemann tensor, in particular of its irreducible part: the Weyl tensor. The convergence towards Lusanna and Pauri's affirmation of the existence of a holistic, non-local and relational structure, on which the quantitatively identified properties of point-events depend (in addition to their own intrinsic detection), even though it is obtained from different considerations, is realized, in our opinion, in the assignment of a crucial role to the degree of curvature of spacetime defined by the Weyl tensor, even in the case of empty spacetimes (as in the analysis conducted by Lusanna and Pauri). In the end, matter, regarded as the physical counterpart of spacetime curvature, whose expression is the Weyl tensor, changes the value of this tensor even in spacetimes without matter. In this way, returning to the approach of Lusanna and Pauri, it affects the evolution of the DOs and, consequently, the physical identification of point-events (as our authors claim). In conclusion, we think it is possible to see the holistic, relational and non-local structure of spacetime also through the "behaviour" of the Weyl tensor in terms of the Riemann tensor. This "behaviour", which leads to geometrical effects of curvature, is characterized from the beginning by the fact that it concerns extended domains of the manifold (although it should be pointed out that the values of the Weyl tensor change from point to point), by virtue of the fact that the action of matter elsewhere acts indefinitely. Finally, we think that the characteristic relationality of the spacetime structure should be identified with this "primitive level of organization" of spacetime.
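
For reference, the standard four-dimensional decomposition of the Riemann tensor underlying this "third level" (a textbook identity, quoted here for orientation): in vacuo the Ricci terms vanish, so the curvature is carried entirely by the Weyl tensor.

```latex
% Ricci decomposition in n = 4 (square brackets denote antisymmetrisation)
R_{abcd} = C_{abcd}
  + \left( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} \right)
  - \tfrac{1}{3}\, R\, g_{a[c} g_{d]b}
```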

Relevance:

100.00%

Publisher:

Abstract:

In this thesis we present our work on generalising ideas, techniques and physical interpretations typical of integrable models to one of the most outstanding advances in present-day theoretical physics: the AdS/CFT correspondences. We have undertaken the problem of testing this conjectured duality from various points of view, but with a clear starting point, integrability, and with a clear and ambitious task in mind: to study the finite-size effects in the energy spectrum of certain string solutions on one side and in the anomalous dimensions of the gauge theory on the other. Of course, the final goal would be the exact comparison between these two faces of the gauge/string duality. In a few words, the original part of this work consists in the application of well-known integrability technologies, in large part borrowed from the study of relativistic (1+1)-dimensional integrable quantum field theories, to the highly non-relativistic and much more complicated case of the theories involved in the recently conjectured AdS5/CFT4 and AdS4/CFT3 correspondences. In detail, exploiting the spin-chain nature of the dilatation operator of N = 4 Super-Yang-Mills theory, we concentrated our attention on one of the most important sectors, namely the SL(2) sector, which is also very interesting for the understanding of QCD, by formulating a new type of nonlinear integral equation (NLIE) based on a previously conjectured asymptotic Bethe Ansatz. The solutions of this Bethe Ansatz are characterised by the length L of the corresponding spin chain and by the number s of its excitations. An NLIE allows one, at least in principle, to make analytical and numerical calculations for arbitrary values of these parameters. The results have been rather exciting. In the important regime of high Lorentz spin, the NLIE clarifies how it reduces to a linear integral equation which governs the subleading order in s, O(s^0). This also holds in the regime L → ∞ with L/ln s finite (the case of long operators). This region of parameters has been particularly investigated in the literature, especially because of an intriguing limit onto the O(6) sigma model defined on the string side. One of the most powerful methods to keep the finite-size spectrum of an integrable relativistic theory under control is the so-called thermodynamic Bethe Ansatz (TBA). We proposed a highly non-trivial generalisation of this technique to the non-relativistic case of AdS5/CFT4 and made the first steps towards determining its full spectrum, of energies on the AdS side and of anomalous dimensions on the CFT one, at any value of the coupling constant and of the size. At leading order in the size parameter, the calculation of the finite-size corrections is much simpler and does not require the TBA. It consists in deriving, for a non-relativistic case, a method invented by Lüscher to compute the finite-size effects on the mass spectrum of relativistic theories. We have thus formulated a new version of this approach, adapted to the case of recently found classical string solutions on AdS4 × CP3, within the new conjecture of an AdS4/CFT3 correspondence. Our results in part confirm the string and algebraic-curve calculations, and in part are completely new and may be better understood through the rapidly evolving developments of this extremely exciting research field.
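
As a reference point for the generalisation described above (the standard relativistic single-particle TBA, quoted from the literature rather than from the thesis's non-relativistic equations): the ground-state energy of a theory with mass m and two-body S-matrix S(θ) on a circle of circumference L follows from a nonlinear integral equation for the pseudoenergy ε(θ).

```latex
% kernel \varphi(\theta) = -\,i\,\partial_\theta \ln S(\theta)
\varepsilon(\theta) = m L \cosh\theta
  - \int_{-\infty}^{\infty} \frac{d\theta'}{2\pi}\,
    \varphi(\theta - \theta')\,\ln\!\left(1 + e^{-\varepsilon(\theta')}\right),
\qquad
E_0(L) = -\,m \int_{-\infty}^{\infty} \frac{d\theta}{2\pi}\,
    \cosh\theta\,\ln\!\left(1 + e^{-\varepsilon(\theta)}\right)
```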

Relevance:

100.00%

Publisher:

Abstract:

The present Thesis studies three alternative solvent groups as sustainable replacements for traditional organic solvents. Aspects of fluorinated solvents, supercritical fluids and ionic liquids have been analysed with a critical approach, and their effective "greenness" has been evaluated from the points of view of synthesis, properties and applications. In particular, attention has been paid to environmental and human-health issues, evaluating eco-toxicity, toxicity and persistence, to underline that applicability and sustainability are subjects of equal importance. The "green" features of fluorous solvents and supercritical fluids are fairly well established; in particular, supercritical carbon dioxide (scCO2) is probably the "greenest" solvent among the alternative solvent systems developed in recent years, combining numerous advantages from the point of view both of industrial/technological applications and of eco-compatibility. In the Thesis the analysis of these two classes of alternative solvents has been focused mainly on their applicability rather than on the evaluation of their environmental impact. Specifically, they have been evaluated as alternative media for non-aqueous biocatalysis. For this purpose, hydrophobic ion pairing (HIP), which solubilises enzymes in apolar solvents through ion pairing between the protein and a surfactant, has been investigated as an effective enzymatic derivatisation technique to improve catalytic activity under homogeneous conditions in non-conventional media. The results showed that the enzyme-surfactant complex was much more active, both in fluorous solvents and in supercritical carbon dioxide, than the native form of the enzyme. Ionic liquids, especially imidazolium salts, were proposed some years ago as "fully green" alternative solvents; however, this epithet does not take into account several "brown" aspects, such as their synthesis from petrochemical starting materials, their considerable eco-toxicity, toxicity and resistance to biodegradation, and the difficulty of clearly outlining applications in which ionic liquids are really more advantageous than traditional solvents. For all these reasons, in this Thesis a critical analysis of ionic liquids has been focused on three main topics: i) alternative syntheses, introducing structural moieties which could reduce the toxicity of the best-known liquid salts and using starting materials from renewable resources; ii) the evaluation of their environmental impact through eco-toxicological tests (Daphnia magna and Vibrio fischeri acute toxicity tests, and algal growth inhibition), toxicity tests (MTT test, AChE inhibition and LDH release tests) and the fate and rate of aerobic biodegradation in soil and water; iii) the demonstration of their effectiveness as reaction media in organo-catalysis and as extractive solvents in the recovery of vegetable oil from terrestrial and aquatic biomass. The results of the eco-toxicity tests with Daphnia magna, Vibrio fischeri and algae, and of the toxicity assays using cultured cell lines, clearly indicate that the difference in toxicity between alkyl and oxygenated cations lies in differences of polarity, according to the general trend of decreasing toxicity with decreasing lipophilicity. Independently of the biological approach, in fact, all the results agree, showing a lower toxicity for compounds with oxygenated lateral chains than for those having purely alkyl lateral chains.
These findings indicate that an appropriate choice of cation and anion structures is important not only to design ILs with improved and suitable chemico-physical properties but also to obtain safer and more eco-friendly ILs. Moreover, there is a clear indication that the composition of the abiotic environment has to be taken into account when the toxicity of ILs in various biological test systems is analysed because, for example, the data reported in the Thesis indicate a significant influence of salinity variations on algal toxicity. The aerobic biodegradation in soil of four imidazolium ionic liquids, two alkylated and two oxygenated, was evaluated for the first time. The alkyl ionic liquids were shown to be biodegradable over the 6-month test period; in contrast, no significant mineralisation was observed with the oxygenated derivatives. A different result was observed in the aerobic biodegradation of alkylated and oxygenated pyridinium ionic liquids in water, where all the ionic liquids were almost completely degraded after 10 days, independently of the number of oxygen atoms in the lateral chain of the cation. The synthesis of new ionic liquids from renewable feedstocks has been developed through the synthesis of furan-based ion pairs from furfural. The new ammonium salts were synthesised in very good yields and good purity, and with wide versatility, combining low melting points with high decomposition temperatures and reduced viscosities. Regarding possible applications as surfactants and biocides, furan-based salts could be a valuable alternative to benzyltributylammonium salts and benzalkonium chloride, which are produced from non-renewable resources. A new procedure for the allylation of ketones and aldehydes with tetraallyltin in ionic liquids was developed. The reaction afforded high yields both in sulfonate-containing ILs and in ILs without sulfonate upon addition of a small amount of sulfonic acid. The reaction showed a peculiar chemoselectivity, favouring aliphatic substrates over aromatic ketones, and good stereoselectivity in the allylation of levoglucosenone. Finally, IL-based systems could be easily and successfully recycled, making the described procedure environmentally benign. The potential role of switchable-polarity solvents as a green technology for the extraction of vegetable oil from terrestrial and aquatic biomass has been investigated. The extraction efficiency for terrestrial biomass rich in triacylglycerols, such as soybean flakes and sunflower seeds, was comparable to that of traditional organic solvents, with very similar vegetable-oil recovery yields. Switchable-polarity solvents have also been exploited for the first time in the extraction of hydrocarbons from the microalga Botryococcus braunii, demonstrating the efficiency of the process both on dried microalgal biomass and directly on the aqueous growth medium. The switchable-polarity solvents exhibited better extraction efficiency than conventional solvents, with both dried and liquid samples. This is an important issue, considering that the harvesting and dewatering of algal biomass have a large impact on overall costs and energy balance.

Relevance:

100.00%

Publisher:

Abstract:

Organic electronics has grown enormously during the last decades, driven by encouraging results and by the potential of these materials to enable innovative applications, such as flexible large-area displays, low-cost printable circuits, plastic solar cells and lab-on-a-chip devices. Moreover, their possible fields of application reach from medicine, biotechnology, process control and environmental monitoring to defence and security requirements. However, a large number of questions regarding the mechanisms of device operation remain unanswered. Among the most significant is charge-carrier transport in organic semiconductors, which is not yet well understood. Another example is the correlation between morphology and electrical response. Even though it is recognized that the growth mode plays a crucial role in device performance, it has not been exhaustively investigated. The main goal of this thesis was to find a correlation between growth modes, electrical properties and morphology in organic thin-film transistors (OTFTs). In order to study the thickness dependence of the electrical performance of organic ultra-thin-film transistors, we designed and developed a home-built experimental setup for real-time electrical monitoring and post-growth in situ electrical characterization. We grew pentacene TFTs under high-vacuum conditions, systematically varying the deposition rate at a fixed (room) temperature. The drain-source current IDS and the gate-source current IGS were monitored in real time, and a complete post-growth in situ electrical characterization was carried out. Finally, an ex situ morphological investigation was performed using the atomic force microscope (AFM). In this work we present, for pentacene TFTs, the correlation between growth conditions, Debye length and morphology (through the correlation-length parameter). We have demonstrated that there is a layered charge-carrier distribution, strongly dependent on the growth mode (i.e. the deposition rate at fixed temperature), leading to a variation of the conduction channel from 2 to 7 monolayers (MLs). We reconcile earlier reported results that were apparently contradictory. Our results make evident the necessity of reconsidering the concept of Debye length in a layered low-dimensional device. Additionally, we introduce for the first time a breakthrough technique that makes evident the percolation of the first MLs of pentacene TFTs by monitoring IGS in real time, correlating morphological phenomena with the device's electrical response. The present thesis is organized in the following five chapters. Chapter 1 gives an introduction to organic electronics, illustrating the operating principle of TFTs. Chapter 2 presents organic growth from theoretical and experimental points of view. The second part of this chapter presents the electrical characterization of OTFTs, and the typical performance of pentacene devices is shown. In addition, we introduce a correction technique for the reconstruction of measurements hampered by leakage current. In chapter 3, we describe in detail the design and operation of our innovative home-built experimental setup for real-time and in situ electrical measurements. Some preliminary results and the breakthrough technique for correlating morphological and electrical changes are presented.
Chapter 4 collects the most important results obtained under real-time and in situ conditions, which correlate growth conditions, electrical properties and morphology of pentacene TFTs. In chapter 5 we describe applicative experiments in which the electrical performance of pentacene TFTs has been investigated under ambient conditions, in contact with water or aqueous solutions and, finally, in the detection of DNA concentration as a label-free sensor, within the biosensing framework.
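
For orientation, the textbook bulk Debye length whose applicability the thesis questions for layered low-dimensional devices (general semiconductor form; ε is the permittivity of the semiconductor, N the carrier concentration, q the elementary charge):

```latex
L_D = \sqrt{\frac{\varepsilon\, k_B T}{q^{2}\, N}}
```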

Relevance:

100.00%

Publisher:

Abstract:

This thesis deals with Cesare Garboli's translation of Shakespeare's Measure for Measure, published in 1992 by Einaudi in the series «Scrittori tradotti da scrittori». The translation was conceived for the Teatro Stabile di Torino directed by Luca Ronconi, which premiered it at the Teatro Carignano in 1992; it was later revived, with some variants, by Carlo Cecchi's company in 1998 for a new staging at the Teatro Garibaldi in Palermo. Starting from the most recent developments in Translation Studies, the work carries out a comparative study, from a linguistic point of view and in hermeneutic terms, of Garboli's translation, the original text in the Arden and Cambridge editions, and the Italian translations of Measure for Measure published in the twentieth century. The final part of the thesis is devoted to the stagings in Turin and Palermo: a comparison highlighting the elements that in both belong to the structure of the translated text, and the specific characters of the fictional universes depicted by the two directors.

Relevance:

100.00%

Publisher:

Abstract:

A comparative genomic sequence analysis of a region in human chromosome 11p15.3 and its homologous segment in mouse chromosome 7, between the ST5 and LMO1 genes, has been performed. 158,201 bases were sequenced in the mouse and compared with the syntenic region in human, partially available in the public databases. The analysed region exhibits the typical eukaryotic genomic structure and, compared with the close neighbouring regions, strikingly reflects the mosaic pattern of (G+C) and repeat content despite its relatively short size. Within this region the novel gene STK33 (Stk33 in the mouse) was discovered, which codes for a serine/threonine kinase. The finding of this gene constitutes an excellent example of the strength of the comparative sequencing approach. Poor gene predictions in the mouse genomic sequence were corrected and improved by comparison with the unordered data from the publicly available human genomic sequence. Phylogenetic analysis suggests that STK33 belongs to the calcium/calmodulin-dependent protein kinase group and seems to be a novelty in the chordate lineage. The gene as a whole seems to evolve under purifying selection, whereas some regions appear to be under strong positive selection. Both the human and mouse versions of serine/threonine kinase 33 consist of seventeen exons, highly conserved in the coding regions, particularly in those coding for the core protein kinase domain. The exon/intron structure in the coding regions of the gene is also conserved between human and mouse. The existence and functionality of the gene are supported by entries in the EST databases and were fully confirmed in vivo by isolating specific transcripts from human uterus total RNA and from several mouse tissues. Strong evidence for alternative splicing was found, which may result in tissue-specific transcription start points and, to some extent, different protein N-termini. RT-PCR and hybridisation experiments suggest that STK33/Stk33 is differentially expressed in a few tissues and at relatively low levels. STK33 has been shown to be reproducibly down-regulated in tumor tissues, particularly in ovarian tumors. RNA in situ hybridisation experiments using mouse Stk33-specific probes showed expression in dividing cells of the lung and germinal epithelium, and possibly also in macrophages of the kidney and lungs. Preliminary experiments with antibodies designed in this work, performed in parallel to the preparation of this manuscript, seem to confirm this expression pattern. The fact that the chromosomal region 11p15 in which STK33 is located may be associated with several human diseases, including tumor development, suggests that further investigation is necessary to establish the role of STK33 in human health.
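
For orientation on the selection terminology used above, the standard diagnostic is the ratio of non-synonymous to synonymous substitution rates:

```latex
\omega = d_N / d_S:\qquad
\omega < 1 \ \text{(purifying)},\quad
\omega \approx 1 \ \text{(neutral)},\quad
\omega > 1 \ \text{(positive selection)}
```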

Relevance:

100.00%

Publisher:

Abstract:

This thesis deals with an investigation of decomposition and reformulation to solve Integer Linear Programming problems. This method is often a very successful approach computationally, producing high-quality solutions for well-structured combinatorial optimization problems like vehicle routing, cutting stock, p-median and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is the development of a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposition and auto-reformulation of the input problem, applicable as a black-box solution algorithm and working as a complement and an alternative to the usual solution techniques. The idea of decomposing and reformulating (usually called Dantzig-Wolfe Decomposition, DWD, in the literature) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedron (or polyhedra) obtained. For a given MIP, several decompositions can be defined, depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists; if so, it is added to the master, which is solved again (column generation); otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a strong speed-up in solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
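
To make the master/pricing loop concrete, here is a minimal, self-contained column-generation sketch on the classic cutting-stock problem (the toy data and the use of scipy's linprog are our illustrative choices, not the thesis's generic framework):

```python
# Column generation on cutting stock: the restricted master LP chooses how
# often to use each known cutting pattern; the pricing problem is an
# unbounded knapsack over the dual prices, generating a new pattern
# (column) whenever one with negative reduced cost exists.
import numpy as np
from scipy.optimize import linprog

W = 10                       # stock roll width
w = np.array([3, 4, 5])      # item widths
d = np.array([30, 20, 10])   # item demands

# initial restricted master: one single-item cutting pattern per item
cols = [np.eye(len(w))[i] * (W // w[i]) for i in range(len(w))]

while True:
    A = np.column_stack(cols)
    # master LP: minimise the number of rolls, min 1'x  s.t.  A x >= d, x >= 0
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-d, method="highs")
    y = -res.ineqlin.marginals        # dual prices of the demand rows

    # pricing: unbounded knapsack  max y'a  s.t.  w'a <= W,  a integer >= 0
    best = np.zeros(W + 1)
    take = [np.zeros(len(w), dtype=int) for _ in range(W + 1)]
    for c in range(1, W + 1):
        best[c], take[c] = best[c - 1], take[c - 1].copy()
        for i, wi in enumerate(w):
            if wi <= c and best[c - wi] + y[i] > best[c]:
                best[c] = best[c - wi] + y[i]
                take[c] = take[c - wi].copy()
                take[c][i] += 1

    if best[W] <= 1 + 1e-9:  # reduced cost 1 - y'a >= 0 for every pattern
        break                # LP-optimal: no improving column exists
    cols.append(take[W].astype(float))

print("rolls needed (LP bound):", res.fun)
```

Each added column is an extreme point of the slave polyhedron (here, a feasible cutting pattern); the loop stops when pricing proves that no column has negative reduced cost, at which point the restricted master bound equals the bound of the Dantzig-Wolfe relaxation.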

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this thesis is to investigate the strength and structure of the magnetized medium surrounding radio galaxies via observations of the Faraday effect. This study is based on an analysis of the polarization properties of radio galaxies selected to have a range of morphologies (elongated tails, or lobes with small axial ratios) and to be located in a variety of environments (from rich cluster cores to small groups). The targets include famous objects like M84 and M87. A key aspect of this work is the combination of accurate radio imaging with high-quality X-ray data for the gas surrounding the sources. Although the focus of this thesis is primarily observational, I developed analytical models and performed two- and three-dimensional numerical simulations of magnetic fields. The steps of the thesis are: (a) to analyze new and archival observations of the Faraday rotation measure (RM) across radio galaxies and (b) to interpret these and existing RM images using sophisticated two- and three-dimensional Monte Carlo simulations. The approach has been to select a few bright, very extended and highly polarized radio galaxies. This is essential to obtain high signal-to-noise in polarization over large enough areas to allow computation of spatial statistics such as the structure function (and hence the power spectrum) of the rotation measure, which requires a large number of independent measurements. New and archival Very Large Array observations of the target sources have been analyzed in combination with high-quality X-ray data from the Chandra, XMM-Newton and ROSAT satellites. The work has been carried out by making use of: 1) analytical predictions of the RM structure functions to quantify the RM statistics and to constrain the power spectra of the RM and magnetic field; 2) two-dimensional Monte Carlo simulations to address the effect of incomplete sampling of the RM distribution and so determine errors for the power spectra; 3) methods to combine measurements of RM and depolarization in order to constrain the magnetic-field power spectrum on small scales; 4) three-dimensional models of the group/cluster environments, including different magnetic-field power spectra and gas density distributions. This thesis has shown that the magnetized medium surrounding radio galaxies is more complicated than was apparent from earlier work. Three distinct types of magnetic-field structure are identified: an isotropic component with large-scale fluctuations, plausibly associated with the intergalactic medium not affected by the presence of a radio source; a well-ordered field draped around the front ends of the radio lobes; and a field with small-scale fluctuations in rims of compressed gas surrounding the inner lobes, perhaps associated with a mixing layer.
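
For reference, the standard relations behind these measurements (textbook formulas, quoted for orientation): the polarization angle rotates with wavelength squared, the RM is the line-of-sight integral of the electron density times the parallel magnetic field, and the spatial statistics are captured by the RM structure function.

```latex
\chi(\lambda) = \chi_0 + \mathrm{RM}\,\lambda^{2},
\qquad
\mathrm{RM} = 0.812 \int n_e\, B_\parallel\, dl \ \ \mathrm{rad\,m^{-2}}
\quad (n_e\ \mathrm{in\ cm^{-3}},\ B_\parallel\ \mathrm{in}\ \mu\mathrm{G},\ dl\ \mathrm{in\ pc}),
\qquad
S(\delta) = \left\langle \left[\mathrm{RM}(\mathbf{r}) - \mathrm{RM}(\mathbf{r}+\boldsymbol{\delta})\right]^{2} \right\rangle
```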

Relevance:

100.00%

Publisher:

Abstract:

The general aim of the study is equine welfare, particularly with regard to different husbandry methods and inter-specific relational factors. The specific aim is the evaluation of possible mutual benefits (to humans and to equines) and the analysis of critical factors and strong points of the human-horse relationship within the context of Therapeutic Riding (TR). The peculiarities of the human-horse relationship (compared to the bond with pets) are analyzed with regard to their distinctive socio-anthropological, psychological and psycho-dynamic characteristics. Eight representative European therapeutic riding centers (TRC) were selected on the basis of their different animal husbandry criteria and of the different rehabilitative methodologies adopted. The TRC were investigated through two different questionnaires, specifically designed to assess objective and subjective animal welfare parameters, the quality of the human-horse relationship, and the technicians' emotional experience. Three centers were further selected, and behavioral (145 hours of behavioral recording) and physiological parameters (heart rate and heart rate variability) were evaluated, aimed at assessing equine welfare and the horses' adaptive responses and coping (towards the general environment and towards TR work). Moreover, a specific "handling task" was devised and tested, aimed at measuring the quality of the relationship between TR technicians and horses. We therefore evaluated both the individual horses' responses and the possible differences among centers. The data collected highlight the lack of univocal standardized methods, both for general animal management and for the specific methodologies aimed at improving animal welfare and enhancing TR efficacy. Some positive and some critical aspects were detected concerning the relationship between TR personnel and horses. Another experimental approach evaluated the efficacy (in terms of enhancing mutual benefits) of an "ethologically fitted" TR intervention, aimed at educating children to and through the relationship with horses. Our data show that improving the human-horse relationship through structured educational programs for TR personnel may have important consequences for both human and equine welfare.

Relevance:

100.00%

Publisher:

Abstract:

The interaction between proteins and inorganic surfaces is fascinating from both an applied and a theoretical point of view. It is an important aspect in many applications, including surgical implants and biosensors, and it is also an example of theoretical questions concerning the interface between hard and soft matter. What is certain is that knowledge of the mechanisms involved is required in order to understand, predict and optimize the interaction between proteins and surfaces. Recent advances in experimental research make it possible to investigate direct peptide-metal binding, which has brought the study of the theoretical foundations further into the focus of current research. One way to investigate the interaction between proteins and inorganic surfaces is through computer simulations. Although simulations of metal surfaces or proteins as separate systems have long been common, simulating a combination of the two systems brings new difficulties. Overcoming them requires a multiscale approach: while proteins, as biological systems, can be described adequately with classical molecular dynamics, the description of the delocalized electrons of metallic systems requires a quantum-mechanical formulation. The most important prerequisite of a multiscale approach is consistency between the simulations on the different scales. In this work this is achieved by linking simulations on alternating scales. The thesis begins with an investigation of the thermodynamics of benzene hydration by means of classical molecular dynamics. Then the interaction between water and the [111] metal surfaces of gold and nickel is modelled by means of a multiscale approach. In a further step, the adsorption of benzene on metal surfaces in an aqueous environment is studied. Finally, the modelling is extended to include the amino acids alanine and phenylalanine. This opens up the possibility of treating realistic protein-metal systems in computer simulations and of predicting, on a theoretical basis, the interaction between peptides and surfaces for any kind of peptide and surface.