974 results for Phenomena and statements
Abstract:
In this paper, we consider non-ideal excitation devices such as DC motors with restricted energy output capacity. When such motors are attached to structures that require excitation power levels similar to the source power capacity, jump phenomena and the increase in power required near resonance characterize the Sommerfeld effect, which acts as a sort of energy sink. One of the problems often faced by designers of such structures is how to drive the system through resonance and avoid this energy sink. Our basic structural model is a simple portal frame driven by a non-ideal power source (NIPF). We also investigate the absorption of resonant vibrations (nonlinear and chaotic) by means of a nonlinear sub-structure known as a Nonlinear Energy Sink (NES). The energy exchange between the NIPF and the NES during the passage through resonance is investigated, as well as the suppression of chaos.
Abstract:
We analyze new results on a magnetically levitated body (a block containing a magnet whose bottom pole is oriented so as to repel the upper pole of a magnetic base) excited by a non-ideal energy source (an unbalanced electric motor of limited power supply). These new results concern the jump phenomena and the increase in power required by such sources near resonance; both are manifestations of a non-ideal system and are referred to as the Sommerfeld effect, which emulates an energy sink. In this work, we also discuss control strategies to be applied to this system under resonance conditions in order to decrease its vibration amplitude and avoid this apparent energy sink.
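The passage through resonance of such a non-ideal system can be illustrated with a minimal Kononenko-type model: a damped linear oscillator inertially coupled to an unbalanced rotor whose driving torque is limited. The sketch below is only a simplified reference, not the model used in the study; all parameter values and function names are illustrative.

```python
import math

def nonideal_step(state, dt, zeta=0.05, eta=0.1, gamma=1.5, kappa=1.0):
    """One explicit-Euler step of a minimal Kononenko-type non-ideal system:
    a damped linear oscillator x inertially coupled to an unbalanced rotor
    phi whose driving torque is limited (gamma - kappa*phidot).  Because the
    two accelerations appear in each other's equations, a 2x2 linear system
    is solved at every step."""
    x, xdot, phi, phidot = state
    s, c = math.sin(phi), math.cos(phi)
    # [[1, -eta*s], [-eta*s, 1]] @ [xdd, phidd] = [b1, b2]
    b1 = eta * phidot ** 2 * c - 2.0 * zeta * xdot - x
    b2 = gamma - kappa * phidot
    det = 1.0 - (eta * s) ** 2
    xdd = (b1 + eta * s * b2) / det
    phidd = (b2 + eta * s * b1) / det
    return (x + dt * xdot, xdot + dt * xdd,
            phi + dt * phidot, phidot + dt * phidd)

def run(t_end=200.0, dt=1e-3):
    """Integrate from rest; returns (x, xdot, phi, phidot) at t_end."""
    state = (0.0, 0.0, 0.0, 0.0)
    for _ in range(int(t_end / dt)):
        state = nonideal_step(state, dt)
    return state
```

Sweeping the available torque `gamma` and plotting the steady rotor speed against it reproduces the characteristic jump of the Sommerfeld effect: near the structural resonance the extra power is absorbed by vibration rather than increasing the rotation speed.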
Abstract:
Phenomena such as reconnection scenarios, periodic-orbit collisions, and primary shearless tori have been recognized as features of nontwist maps. Recently, these phenomena and secondary shearless tori were analytically predicted for generic maps in the neighborhood of the tripling bifurcation of an elliptic fixed point. In this paper, we apply a numerical procedure to find internal rotation number profiles that highlight the creation of periodic orbits within islands of stability by a saddle-center bifurcation that emerges out of a secondary shearless torus. In addition to the analytical predictions, our numerical procedure applied to the twist and nontwist standard maps reveals that the atypical secondary shearless torus occurs not only near a tripling bifurcation of the fixed point but also near a quadrupling bifurcation. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4750040]
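Rotation number profiles of this kind can be computed directly from long orbits. A minimal sketch for the twist standard map (function name and parameter values are illustrative, not the procedure of the paper): the rotation number is estimated as the average angular advance per iterate, with the angle left unwrapped so the net winding is measured.

```python
import math

def rotation_number(theta0, p0, K, n=100_000):
    """Estimate the rotation number of an orbit of the (twist) standard map
        p' = p + K*sin(theta),  theta' = theta + p'
    by averaging the angular advance per iterate without wrapping theta,
    so the net winding is measured directly (in units of full turns)."""
    theta, p = theta0, p0
    for _ in range(n):
        p = p + K * math.sin(theta)
        theta = theta + p
    return (theta - theta0) / (2.0 * math.pi * n)

# Sanity check: for K = 0 the map is integrable, p stays constant and
# theta advances by p0 each iterate, so the rotation number is p0/(2*pi).
w = rotation_number(0.1, 0.3, K=0.0, n=10_000)
```

A profile is then obtained by sweeping the initial condition across an island and plotting the rotation number against it; a local extremum in the profile signals a shearless torus.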
Abstract:
Dapsone (DAP) is a synthetic sulfone drug with bacteriostatic activity, mainly against Mycobacterium leprae. In this study we have investigated the interactions of DAP with the cyclodextrins 2-hydroxypropyl-β-cyclodextrin (HPβCD) and β-cyclodextrin (βCD), in the presence and absence of water-soluble polymers, in order to improve its solubility and bioavailability. Solid DAP/HPβCD and DAP/βCD systems, in the presence or absence of polyvinylpyrrolidone (PVP K30) or hydroxypropyl methylcellulose (HPMC), were prepared. The binary and ternary systems were evaluated and characterized by SEM, DSC, XRD and NMR analysis, as well as by phase solubility assays, in order to investigate the interactions between DAP and the excipients in aqueous solution. This study revealed that inclusion complexes of DAP with cyclodextrins (HPβCD and βCD) can be produced to improve DAP solubility and bioavailability, in the presence or absence of polymers (PVP K30 and HPMC). The most stable inclusion complex was obtained with HPβCD, and consequently HPβCD was more efficient in improving DAP solubility than βCD; the addition of polymers had no influence on DAP solubility or on the stability of the DAP/CD complexes.
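For reference, the apparent 1:1 stability constant reported by an A_L-type phase-solubility analysis follows the Higuchi-Connors relation. The sketch below uses purely illustrative numbers, not data from this study.

```python
def stability_constant(slope, s0):
    """Apparent 1:1 stability constant K (M^-1) from an A_L-type phase-
    solubility diagram via the Higuchi-Connors relation
        K = slope / (s0 * (1 - slope)),
    where slope is the linear slope of drug solubility vs. cyclodextrin
    concentration and s0 is the drug's intrinsic solubility (M)."""
    if not 0.0 < slope < 1.0:
        raise ValueError("A_L-type analysis assumes 0 < slope < 1")
    return slope / (s0 * (1.0 - slope))

# Illustrative numbers only (not data from this study):
K = stability_constant(0.05, 1e-4)  # ≈ 526 M^-1
```

A larger K for HPβCD than for βCD at the same s0 is what "more stable inclusion complex" means quantitatively.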
Abstract:
Introduction 1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, and this results in greater persistence under natural conditions. This persistence coupled with their potential carcinogenicity makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with petroleum, gas production and wood preserving industries (Wilson and Jones, 1993). 1.2 Remediation technologies Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have some drawbacks. The first method simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material. Additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material.
The cap and containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to completely destroy the pollutants, if possible, or transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination, UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and the lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants. 1.3 Bioremediation of PAH contaminated soil & groundwater Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on site bioremediation, were developed in recent years. In situ bioremediation is a technique applied to soil and groundwater at the site without removing the contaminated soil or groundwater, based on the provision of optimum conditions for microbiological contaminant breakdown.
Ex situ bioremediation of PAHs, on the other hand, is a technique applied to soil and groundwater which has been removed from the site via excavation (soil) or pumping (water). Hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner. 1.4 Bioavailability of PAH in the subsurface Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a separate phase (NAPL, non-aqueous phase liquids). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1. As shown in Figure 1, biodegradation processes take place in the soil solution while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000).
Seemingly contradictory studies can be found in the literature indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed to soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from being well understood. Besides bioavailability, there are several other factors influencing the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of PAHs and environmental factors (temperature, moisture, pH, degree of contamination). Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000). 1.5 Increasing the bioavailability of PAH in soil Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs although it did not improve the biodegradation rate of PAHs (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the extent of PAH mass transfer from the soil phase to the liquid might prove an efficient and environmentally low-risk alternative way of addressing the problem of slow PAH biodegradation in soil.
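The desorption-limited picture described above (slow release from the sorbed phase, degradation only of the dissolved fraction) can be sketched with a minimal two-compartment first-order model. All rate constants below are illustrative assumptions, not measured values.

```python
import math

def simulate(s0=100.0, k_des=0.01, k_bio=1.0, dt=0.01, t_end=50.0):
    """Minimal two-compartment sketch of desorption-limited biodegradation:
    a sorbed pool S releases contaminant to the aqueous phase C at first-
    order rate k_des, and only the dissolved fraction is degraded, at rate
    k_bio.  When k_des << k_bio the overall removal is governed by k_des."""
    S, C, degraded = s0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        flux = k_des * S      # mass transfer: soil -> solution
        bio = k_bio * C       # microbial degradation in solution
        S -= flux * dt
        C += (flux - bio) * dt
        degraded += bio * dt
    return S, C, degraded

S, C, degraded = simulate()
```

With k_des far below k_bio, the dissolved concentration C stays near zero and the removal curve tracks the desorption rate, which is exactly why increasing mass transfer from the soil phase is the proposed lever.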
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, as well as the method developed to analyze data acquired by the measurement system itself and to monitor power quality. In chapter 2 the increasing interest towards power quality in electrical systems is illustrated, by reporting the international research activity inherent to the problem and the relevant standards and guidelines issued. The aspect of the quality of voltage provided by utilities and influenced by customers at the various points of a network emerged only in recent years, in particular as a consequence of the energy market liberalization. Usually, the concept of quality of the delivered energy has been associated mostly with its continuity. Hence reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the “quality indicators” commonly perceived by most customers; for this reason, a short section is dedicated also to network reliability and its regulation. In this context it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve the system reliability too. Given the vast scenario of power quality degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages.
The outcome of such a study has been the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and have to be detected before the protection equipment intervenes. An important conclusion is that the method can improve the monitored network reliability, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover the abnormal condition and/or the damage. The part of the thesis presenting the results of such a study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems by defining characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of such an approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions.
In chapter 5 the measurement system designed to implement the transient detection and fault location method is presented. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then, the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, such a parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numeric procedure. In the last chapter a device is described that has been designed and realized during the PhD activity with the aim of replacing the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. Such a study has been carried out with the aim of providing an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the method's application much more feasible.
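The location principle underlying such systems, comparing the arrival times of the same transient at two time-synchronized stations, can be sketched with the classical double-ended travel-time formula. The line length and wave speed below are assumed example values, not parameters from the thesis.

```python
def locate_fault(L, v, t1, t2):
    """Double-ended travel-time fault location: the transient generated at
    the fault reaches station 1 (at one end of a line of length L) at t1
    and station 2 (at the other end) at t2.  With propagation speed v the
    distance from station 1 is d = (L + v*(t1 - t2)) / 2."""
    d = (L + v * (t1 - t2)) / 2.0
    if not 0.0 <= d <= L:
        raise ValueError("arrival times inconsistent with this line")
    return d

# Assumed example: a 10 km line, wave speed 1.5e8 m/s (a typical order
# of magnitude for underground cables), fault 4 km from station 1.
t1, t2 = 4_000.0 / 1.5e8, 6_000.0 / 1.5e8
d = locate_fault(10_000.0, 1.5e8, t1, t2)  # ≈ 4000 m
```

The formula makes clear why timing uncertainty dominates the location uncertainty: an error of 1 µs in t1 - t2 shifts the estimate by v/2, i.e. 75 m at this wave speed.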
Abstract:
Abstract. This thesis presents a discussion of a few specific topics regarding the low velocity impact behaviour of laminated composites. These topics were chosen because of their significance as well as the relatively limited attention received so far by the scientific community. The first issue considered is the comparison between the effects induced by a low velocity impact and by a quasi-static indentation experimental test. An analysis of both test conditions is presented, based on the results of experiments carried out on carbon fibre laminates and on numerical computations by a finite element model. It is shown that both quasi-static and dynamic tests led to qualitatively similar failure patterns; three characteristic contact force thresholds, corresponding to the main steps of damage progression, were identified and found to be equal for impact and indentation. On the other hand, an equal energy absorption resulted in a larger delaminated area in quasi-static than in dynamic tests, while the maximum displacement of the impactor (or indentor) was higher in the case of impact, suggesting probably more severe fibre damage than in indentation. Secondly, the effect of different specimen dimensions and boundary conditions on the impact response was examined. Experimental testing showed that the relationships of delaminated area with two significant impact parameters, the absorbed energy and the maximum contact force, did not depend on the in-plane dimensions and on the support condition of the coupons. The possibility of predicting, by means of a simplified numerical computation, the occurrence of delaminations during a specific impact event is also discussed. A study of the compressive behaviour of impact damaged laminates is also presented. Unlike most of the contributions available on this subject, the results of compression after impact tests on thin laminates are described in which the global specimen buckling was not prevented.
Two different quasi-isotropic stacking sequences, as well as two specimen geometries, were considered. It is shown that in the case of rectangular coupons the lay-up can significantly affect the damage induced by impact. Different buckling shapes were observed in laminates with different stacking sequences, in agreement with the results of numerical analysis. In addition, the experiments showed that impact damage can alter the buckling mode of the laminates in certain situations, whereas it did not affect the compressive strength in every case, depending on the buckling shape. Some considerations about the significance of the test method employed are also proposed. Finally, a comprehensive study is presented regarding the influence of pre-existing in-plane loads on the impact response of laminates. Impact events in several conditions, including both tensile and compressive preloads, both uniaxial and biaxial, were analysed by means of numerical finite element simulations; the case of laminates impacted in postbuckling conditions was also considered. The study focused on how the effect of preload varies with the span-to-thickness ratio of the specimen, which was found to be a key parameter. It is shown that a tensile preload has the strongest effect on the peak stresses at low span-to-thickness ratios, leading to a reduction of the minimum impact energy required to initiate damage, whereas this effect tends to disappear as the span-to-thickness ratio increases. On the other hand, a compression preload exhibits the most detrimental effects at medium span-to-thickness ratios, at which the laminate compressive strength and the critical instability load are close to each other, while the influence of preload can be negligible for thin plates or even beneficial for very thick plates. The possibility of obtaining a better explanation of the experimental results described in the literature, in view of the present findings, is highlighted.
Throughout the thesis the capabilities and limitations of the finite element model, which was implemented in an in-house program, are discussed. The program did not include any damage model of the material. It is shown that, although this kind of analysis can yield accurate results as long as damage has little effect on the overall mechanical properties of a laminate, it can be helpful in explaining some phenomena and also in distinguishing between what can be modelled without taking into account the material degradation and what requires an appropriate simulation of damage.
Abstract:
The object of the present study is the process of gas transport in nano-sized materials, i.e. systems having structural elements on the order of nanometers. The aim of this work is to advance the understanding of the gas transport mechanism in such materials, for which traditional models are often not suitable, by providing a correct interpretation of the relationship between diffusive phenomena and structural features. This result would allow the development of new materials with permeation properties tailored to the specific application, especially in packaging systems. The methods used to achieve this goal were a detailed experimental characterization and different simulation methods. The experimental campaign concerned the determination of oxygen permeability and diffusivity in different sets of organic-inorganic hybrid coatings prepared via the sol-gel technique. The polymeric samples coated with these hybrid layers experienced a remarkable enhancement of the barrier properties, which was explained by the strong interconnection at the nano-scale between the organic moiety and silica domains. An analogous characterization was performed on microfibrillated cellulose films, which presented a remarkable barrier effect toward oxygen when dry, while in the presence of water the performance drops significantly. The very low value of water diffusivity at low activities is also an interesting characteristic which relates to its structural properties. Two different simulation approaches were then considered: the diffusion of oxygen through polymer-layered silicates was modeled on a continuum scale with a CFD software, while the properties of n-alkanethiolate self-assembled monolayers on gold were analyzed from a molecular point of view by means of a molecular dynamics algorithm.
Modeling transport properties in layered nanocomposites, resulting from the ordered dispersion of impermeable flakes in a 2-D matrix, allowed the calculation of the enhancement of the barrier effect in relation to the platelets' structural parameters, leading to the derivation of a new expression. On this basis, randomly distributed systems were simulated and the results were analyzed to evaluate the different contributions to the overall effect. The study of more realistic three-dimensional geometries revealed a perfect correspondence with the 2-D approximation. A completely different approach was applied to simulate the effect of temperature on oxygen transport through self-assembled monolayers; the structural information obtained from equilibrium MD simulations showed that raising the temperature makes the monolayer less ordered and consequently less crystalline. This disorder produces a decrease in the barrier free energy and lowers the overall resistance to oxygen diffusion, making the monolayer more permeable to small molecules.
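The thesis derives its own expression for the barrier enhancement; as a classical baseline for aligned impermeable platelets, the Nielsen tortuosity model can be sketched. This is only a simplified reference model, not the new expression developed in the work.

```python
def nielsen_relative_permeability(phi, alpha):
    """Nielsen tortuosity model for a dilute, aligned dispersion of
    impermeable platelets in a polymer matrix: the relative permeability
    of the composite is
        P / P0 = (1 - phi) / (1 + alpha * phi / 2),
    where phi is the platelet volume fraction and alpha the aspect ratio
    (platelet width / thickness)."""
    if not 0.0 <= phi < 1.0:
        raise ValueError("phi must be a volume fraction in [0, 1)")
    return (1.0 - phi) / (1.0 + alpha * phi / 2.0)

# 2 vol% of platelets with aspect ratio 100 roughly halves the
# permeability: (1 - 0.02) / (1 + 1) ≈ 0.49
r = nielsen_relative_permeability(0.02, 100.0)
```

The model shows why aspect ratio matters as much as loading: the tortuosity term scales with alpha*phi, so high-aspect-ratio flakes achieve large barrier improvements at very low volume fractions.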
Abstract:
The present work presents a comprehensive and comparative study of the different legal and regulatory problems involved in international securitization transactions. First, an introduction to securitization is provided, covering the basic elements of the transaction, followed by its different varieties, including dynamic securitization and synthetic securitization structures. Together with this introduction to the intricacies of the structure, an insight into the influence of securitization on the financial and economic crisis of 2007-2009 is provided, as well as an overview of the process of regulatory competition and cooperation that constitutes the framework for the international aspects of securitization. The next Chapter focuses on the aspects that constitute the foundations of structured finance: the inception of the vehicle, and the transfer of risks associated with the securitized assets, with particular emphasis on the validity of those elements, and how a securitization transaction could be threatened at its root. In this sense, special importance is given to the validity of the trust as an instrument of finance, to the assignment of future receivables or receivables in block, and to the importance of formalities for the validity of corporations, trusts, assignments, etc., and the interaction of such formalities contained in general corporate, trust and assignment law with those contemplated under specific securitization regulations. The next Chapter (III) then focuses on creditor protection aspects. We provide some insights into the debate on the capital structure of the firm, and its inadequacy for assessing the financial soundness problems inherent to securitization. We then proceed to analyze the importance of rules on creditor protection in the context of securitization. The corollary lies in the rules applicable in case of insolvency.
In this sense, we distinguish the cases where a party involved in the transaction goes bankrupt from those where the transaction itself collapses. Finally, we focus on the scenario where a substance-over-form analysis may compromise some of the elements of the structure (notably the limited liability of the sponsor, and/or the transfer of assets) by means of veil piercing, substantive consolidation, or recharacterization theories. Once these elements have been covered, the next Chapters focus on the regulatory aspects involved in the transaction. Chapter IV deals with “market” regulations, i.e. those concerned with information disclosure and other rules (appointment of the indenture trustee, and elaboration of a rating by a rating agency) concerning the offering of asset-backed securities to the public. Chapter V, on the other hand, focuses on “prudential” regulation of the entity entrusted with securitizing assets (the so-called Special Purpose Vehicle), and of other entities involved in the process. Regarding the SPV, reference is made to licensing requirements, restriction of activities and governance structures to prevent abuses. Regarding the sponsor of the transaction, the focus is on provisions on sound originating practices, and the servicing function. Finally, we study accounting and banking regulations, including the Basel I and Basel II Frameworks, which determine the consolidation of the SPV, and the de-recognition of the securitized asset from the originating company's balance sheet, as well as the subsequent treatment of those assets, in particular by banks. Chapters VI-IX are concerned with liability matters. Chapter VI is an introduction to the different sources of liability. Chapter VII focuses on the liability of the SPV and its management for the information supplied to investors, the management of the asset pool, and the breach of loyalty (or fiduciary) duties.
Chapter VIII rather refers to the liability of the originator as a result of such information and statements, but also as a result of inadequate and reckless originating or servicing practices. Chapter IX finally focuses on third parties entrusted with the soundness of the transaction towards the market, the so-called gatekeepers. In this respect, we make special emphasis on the liability of indenture trustees, underwriters and rating agencies. Chapters X and XI focus on the international aspects of securitization. Chapter X contains a conflicts of laws analysis of the different aspects of structured finance. In this respect, a study is made of the laws applicable to the vehicle, to the transfer of risks (either by assignment or by means of derivatives contracts), to liability issues; and a study is also made of the competent jurisdiction (and applicable law) in bankruptcy cases; as well as in cases where a substance-over-form is performed. Then, special attention is also devoted to the role of financial and securities regulations; as well as to their territorial limits, and extraterritoriality problems involved. Chapter XI supplements the prior Chapter, for it analyzes the limits to the States’ exercise of regulatory power by the personal and “market” freedoms included in the US Constitution or the EU Treaties. A reference is also made to the (still insufficient) rules from the WTO Framework, and their significance to the States’ recognition and regulation of securitization transactions.
Resumo:
Semiconductor technologies are rapidly evolving, driven by the need for the higher performance demanded by applications. Thanks to the numerous advantages it offers, gallium nitride (GaN) is quickly becoming the technology of reference in the field of power amplification at high frequency. The RF power density of AlGaN/GaN HEMTs (High Electron Mobility Transistors) is an order of magnitude higher than that of gallium arsenide (GaAs) transistors. The first demonstration of GaN devices dates back only to 1993, and although some commercial products have started to become available over the past few years, the development of a new technology is a long process. The AlGaN/GaN HEMT technology is not yet fully mature; some issues related to dispersive phenomena and to reliability are still present. Dispersive phenomena, also referred to as long-term memory effects, have a detrimental impact on RF performance and are due both to the presence of traps in the device structure and to self-heating effects. A better understanding of these problems is needed to further improve the obtainable performance. Moreover, new device models that take these effects into account are necessary for accurate circuit designs. New characterization techniques are thus needed both to gain insight into these problems and improve the technology, and to develop more accurate device models. This thesis presents the research conducted on the development of new characterization and modelling methodologies for GaN-based devices and on the use of this technology for high-frequency power amplifier applications.
Resumo:
This work deals with the optical resonances of metallic nanoparticles located a few nanometers from a metallic interface. The electromagnetic interaction in this “sphere-on-plane” (Kugel-vor-Fläche) geometry gives rise to interesting optical phenomena. It creates a special electromagnetic eigenmode, also called a gap mode, which is essentially localized in the nanogap between sphere and surface. In the quasi-static approximation, the resonance position depends only on the material, the environment, the film-sphere distance, and the sphere radius itself. Theoretical calculations predict a large enhancement of the electromagnetic field in this region under resonance conditions.
To investigate the optical properties of these systems, an efficient plasmon-mediated dark-field mode for confocal scanning microscopy through thin metal films was developed, which exploits the enhancement by surface plasmons in both the excitation and the emission process. This guarantees high-quality dark-field images of the sphere-on-plane systems through the metal films and facilitates the spectroscopy of individual resonators. The optical investigations are complemented by a combination of atomic force and scanning electron microscopy, so that the shape and size of the investigated resonators can be determined in all three dimensions and correlated with the optical resonances. The performance of the newly developed mode is demonstrated for a reference system of polystyrene spheres on a gold film, where particles of the same size indeed show the expected identical resonance.
For an all-gold sphere-on-plane system, in which the gap is created by a self-assembled monolayer of 2-aminoethanethiol, the resonances of gold particles produced by reduction of chloroauric acid are compared with those of ideal gold spheres. The latter are obtained from the conventional gold particles by additional irradiation with a picosecond Nd:YAG laser. Among the non-irradiated particles, with their multitude of different shapes, only one third of the investigated resonators show the behavior predicted by theory, and this does not correlate with their shape or size. In the case of the irradiated gold spheres a noticeable improvement occurs: all resonators agree with the theoretical calculations. A change in the surface roughness of the film, by contrast, shows no influence on the resonances. Although a very well-defined sample geometry was created by combining gold spheres with very smooth metal films, the experimentally determined linewidths of the resonances are still considerably larger than the calculated ones. The scatter of the data, even for these samples, points to further factors influencing the gap modes, such as the exact shape of the gap.
The high field enhancements associated with the nanogaps are investigated by placing a dye-loaded polyphenylene dendrimer in the gap of an all-silver sphere-on-plane system. The dendrimer shell consists solely of phenyl-phenyl bonds and, through the resulting rigidity of the molecule, guarantees outstanding shape stability without being optically active itself. Its 16 dithiolane end groups at the same time provide the necessary affinity to the silver. In this way, the dye contained inside can be placed in the gap between the metal structures with a precision of a few nanometers. The chosen perylene dye, in turn, is distinguished by high photostability and a high fluorescence quantum yield. For all investigated particles, a strong fluorescence signal is found that is at least 1000 times stronger than that of the dye-coated metal film.
The profile of the fluorescence excitation spectrum varies between the particles and, compared to the free dye, shows an additional emission at higher frequencies, referred to in the literature as “hot luminescence”. When the scattering behavior of the resonators is investigated, two different types of resonators can again be distinguished: first, the cases that, apart from the line broadening described above, agree with an ideal sphere-on-plane geometry, and then others that deviate strongly from it. The changes in the fluorescence excitation spectra of the bound dye point to physical mechanisms that play a role at these small metal/dye distances and that go beyond a simple wavelength-dependent enhancement.
Resumo:
The Scilla rock avalanche occurred on 6 February 1783 along the coast of the Calabria region (southern Italy), close to the Messina Strait. It was triggered by a mainshock of the Terremoto delle Calabrie seismic sequence, and it induced a tsunami wave responsible for more than 1500 casualties along the neighboring Marina Grande beach. The main goal of this work is the application of semi-analytical and numerical models to simulate this event. The first is a MATLAB code expressly created for this work that solves the equations of motion for sliding particles on a two-dimensional surface through a fourth-order Runge-Kutta method. The second is a code developed by the Tsunami Research Team of the Department of Physics and Astronomy (DIFA) of the Bologna University that describes a slide as a chain of blocks able to interact while sliding down a slope, adopting a Lagrangian point of view. A broad description of landslide phenomena, and in particular of landslides induced by earthquakes and with tsunamigenic potential, is given in the first part of the work. Subsequently, the physical and mathematical background is presented; in particular, a detailed study of the discretization of derivatives is provided. Later on, the dynamics of a point mass sliding on a surface is described, together with several applications of the numerical and analytical models over ideal topographies. In the last part, the dynamics of points sliding on a surface and interacting with each other is presented, and, similarly, different applications on an ideal topography are shown. Finally, the applications to the 1783 Scilla event are shown and discussed.
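The point-mass model described above can be sketched in a few lines. The following Python snippet (a minimal illustration, not the author's MATLAB code) integrates the equations of motion of a block sliding down a plane of constant inclination with Coulomb friction, using the classical fourth-order Runge-Kutta scheme; the slope angle and friction coefficient are arbitrary illustrative values.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def deriv(state, theta, mu):
    """Time derivative of (position, velocity) for a point mass sliding
    downslope on a plane of inclination theta with friction coefficient mu."""
    x, v = state
    a = G * (math.sin(theta) - mu * math.cos(theta))  # tangential acceleration
    return (v, a)

def rk4_step(state, dt, theta, mu):
    """One classical fourth-order Runge-Kutta step for the (x, v) system."""
    k1 = deriv(state, theta, mu)
    k2 = deriv((state[0] + 0.5 * dt * k1[0], state[1] + 0.5 * dt * k1[1]), theta, mu)
    k3 = deriv((state[0] + 0.5 * dt * k2[0], state[1] + 0.5 * dt * k2[1]), theta, mu)
    k4 = deriv((state[0] + dt * k3[0], state[1] + dt * k3[1]), theta, mu)
    x = state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v = state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return (x, v)

# Example: a block released from rest on a 30-degree slope with mu = 0.2,
# integrated for 10 s with a 0.01 s time step.
state = (0.0, 0.0)
dt, theta, mu = 0.01, math.radians(30), 0.2
for _ in range(1000):
    state = rk4_step(state, dt, theta, mu)
```

On a real topography the slope angle becomes a function of position, so `theta` would be evaluated from the local terrain gradient at each substep; the constant-slope case is used here only because it admits an exact solution against which the integrator can be checked.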
Resumo:
Four experiments investigated the perception of major and minor thirds whose component tones were sounded simultaneously. Effects akin to the categorical perception of speech sounds were found. In the first experiment, musicians demonstrated relatively sharp category boundaries in identification, and discrimination peaks near the boundary, for an interval continuum in which the bottom note was always an F and the top note varied from A to A flat in seven equal logarithmic steps. Nonmusicians showed these effects only to a small extent. The musicians showed higher than predicted discrimination performance overall, and reaction-time increases at category boundaries. In the second experiment, musicians failed to consistently identify or discriminate thirds which varied in absolute pitch but retained the proper interval ratio. In the last two experiments, using selective adaptation, consistent shifts were found in both identification and discrimination, similar to those found in speech experiments. Manipulations of the adapting and test stimuli showed that the mechanism underlying the effect appears to be centrally mediated and confined to a frequency-specific level. A multistage model of interval perception, in which the first stages deal only with specific pitches, may account for the results.
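The seven-step continuum between the two thirds can be reconstructed numerically. The sketch below is an illustration only: the abstract does not specify the octave of the bottom F, so equal-tempered F4 = 349.23 Hz is assumed, and the endpoints are taken as the equal-tempered minor third (300 cents) and major third (400 cents) above it.

```python
F4 = 349.23  # Hz, equal-tempered F above middle C (assumed octave)

def third_continuum(n_steps=7):
    """Top-note frequencies from A-flat (minor third, 300 cents above F)
    to A (major third, 400 cents above F) in equal logarithmic steps."""
    cents = [300 + i * 100.0 / (n_steps - 1) for i in range(n_steps)]
    return [(c, F4 * 2 ** (c / 1200.0)) for c in cents]

for cents, freq in third_continuum():
    print(f"{cents:6.1f} cents above F4 -> {freq:7.2f} Hz")
```

Seven equal logarithmic steps across one semitone give increments of 100/6, or roughly 16.7 cents, which is the kind of sub-semitone spacing needed to probe a category boundary.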
Resumo:
Metamaterials are artificial materials that exhibit properties, such as a negative index of refraction, that are not attainable with natural materials. Due to the many potential applications of negative-index metamaterials, significant progress in the field has been observed in the last decade. However, achieving a negative index at visible frequencies is a challenging task. Generally, fishnet metamaterials are considered a possible route to achieving a negative index in the visible spectrum. However, so far no metamaterial has been demonstrated to exhibit simultaneously negative permittivity and permeability (a double-negative response) beyond the red region of the visible spectrum. This study is mainly focused on achieving higher operating frequencies for low-loss, double-negative metamaterials. Two double-negative metamaterials are proposed that operate at the highest reported frequencies. The first proposed metamaterial is based on the interaction of the surface plasmon polaritons of a thin metal film with the localized surface plasmons of a metallic array placed close to the thin film. It is demonstrated that this metamaterial can easily be scaled to operate at any frequency in the visible spectrum, and possibly into the ultraviolet. Furthermore, the underlying physical phenomena and possible future extensions of the metamaterial are also investigated. The second proposed metamaterial is a modification of the so-called fishnet metamaterial. It is demonstrated that this ‘modified fishnet’ exhibits two double-negative bands in the visible spectrum, with the highest operating frequency in the green region and a considerably high figure of merit. In contrast to most of the fishnet metamaterials proposed in the past, the behavior of this modified fishnet is independent of the polarization of the incident field. In addition to the two negative-index metamaterials proposed in this study, the use of a metamaterial as a spacer, termed a metaspacer, is also investigated.
In contrast to the naturally available dielectric spacers used in microfabrication, metaspacers can be realized with any (positive or negative) permittivity and permeability. As an example, the use of a negative-index metaspacer in place of the dielectric layer in a fishnet metamaterial is investigated. It is shown that the fishnet based on a negative-index metaspacer offers many improved optical properties over the conventional fishnet, such as a wider negative-index band, a higher figure of merit, higher optical transmission and a stronger magnetic response. In addition to the improved properties, the following interesting features were observed in the metaspacer-based fishnet metamaterial. At the resonance frequency, the shape of the permeability curve was ‘inverted’ compared to that of the conventional fishnet metamaterial. Furthermore, the dependence of the resonance frequency on the fishnet geometry was also reversed. Moreover, simultaneously negative group and phase velocities were observed in the low-loss region of the metaspacer-based fishnet metamaterial. Owing to these interesting features, this study opens a new horizon for metamaterial research.
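The double-negative condition and the figure of merit mentioned above follow from the standard relations n = sqrt(eps * mu), with the square-root branch chosen so that Im(n) ≥ 0 for a passive medium, and FOM = |Re(n)| / Im(n). The snippet below is a minimal sketch of this bookkeeping; the material values are illustrative placeholders, not parameters of the proposed structures.

```python
import cmath

def refractive_index(eps, mu):
    """Complex refractive index n = sqrt(eps * mu) for relative permittivity
    eps and permeability mu; the branch is chosen so that Im(n) >= 0, as
    required for a passive medium. For a double-negative medium
    (Re(eps) < 0 and Re(mu) < 0) this choice yields Re(n) < 0."""
    n = cmath.sqrt(eps * mu)
    return -n if n.imag < 0 else n

def figure_of_merit(n):
    """FOM = |Re(n)| / Im(n): phase advance per unit of loss."""
    return abs(n.real) / n.imag

# Illustrative double-negative values (assumed, not fitted to any design):
n = refractive_index(-2 + 0.2j, -1 + 0.1j)
```

With these placeholder values the real part of n comes out negative even though eps * mu is (mostly) positive real, which is exactly why a double-negative band is identified by checking eps and mu separately rather than their product.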
Resumo:
Currently, social work is witnessing a quite polarized debate about what should be the basis for good practice. Simply stated, the different attempts to define the required basis for effective and accountable interventions in social work practice can be grouped into two paradigmatic positions, which seem to stand in strong opposition to each other. On the one hand, the highly influential evidence-based practice movement highlights the necessity of basing practice interventions on effectiveness proven by empirical research. Despite some variations, such as between narrow conceptions of evidence-based practice (see e.g. McNeece/Thyer, 2004) and broader approaches to it (see e.g. Gambrill, 1999, 2001, 2008), the evidence-based practice movement embodies a positivist orientation and more explicitly scientific aspirations for social work, pursued through positivistic empirical strategies. Critics of the evidence-based practice movement argue that its narrow epistemological assumptions are not appropriate for the understanding of social phenomena, and that evidence-based guidelines are insufficient to deal with the extremely complex activities social work practice requires in different and always somewhat unique practice situations (Webb, 2001; Gray & McDonald, 2006; Otto, Polutta & Ziegler, 2009). Furthermore, critics of evidence-based practice argue that it privileges an uncritical and apolitical positivism, which seems highly problematic in the current climate of welfare state reforms, in which the question of ‘what works’ is highly politicized and the legitimacy of professional social work practice is being challenged perhaps more than ever before (Kessl, 2009).
Both opponents and proponents of evidence-based practice argue on the epistemological, methodological and ethical levels to sustain their points of view, and they raise fundamental questions about the real nature of social work practice, so that one could get the impression that social work really stands at a crossroads between two very different conceptions of social work practice and of its further professional development (Stepney, 2009). However, this article is not going to merely rehearse the pros and cons of the different positions invoked in the debate about evidence-based practice. Instead, it aims to go further by identifying the dilemmas underlying these positions, which, so it is argued, re-emerge in the debate about evidence-based practice but are older than this debate. They concern the fundamental ambivalence to which modern professionalization processes in social work were subjected from their very beginnings.