989 results for "basic concepts"
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAH) in the environment

Worldwide industrial and agricultural development has released a large number of natural and synthetic hazardous compounds into the environment through careless waste disposal, illegal waste dumping and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings in various structural configurations (Prabhu and Phale, 2003). As benzene derivatives, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, which results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have drawbacks. The first simply moves the contamination elsewhere and may create significant risks in the excavation, handling and transport of hazardous material; additionally, it is very difficult and increasingly expensive to find new landfill sites for final disposal. The cap-and-containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to destroy the pollutants completely, if possible, or to transform them into harmless substances. Technologies that have been used include high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for the cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska in 1989, the Mega Borg spill off the Texas coast in 1990 and the Burgan Oil Field, Kuwait in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ or on-site bioremediation, have been developed in recent years.
In situ bioremediation is applied to soil and groundwater at the site, without removing the contaminated soil or groundwater, and is based on providing optimum conditions for the microbiological breakdown of contaminants. Ex situ bioremediation of PAHs, on the other hand, is applied to soil and groundwater that have been removed from the site via excavation (soil) or pumping (water); hazardous contaminants are then efficiently converted into harmless compounds in controlled bioreactors.

1.4 Bioavailability of PAH in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a separate phase (NAPL, non-aqueous phase liquids). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than the rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical and biological processes is schematically illustrated in Figure 1: biodegradation takes place in the soil solution, while diffusion occurs in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from well understood. Besides bioavailability, several other factors influence the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, the physical and chemical properties of PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAH in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents or solubility enhancers. However, the introduction of a synthetic surfactant may simply add one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al.
showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs but did not improve their biodegradation rate (Mulder et al., 1998), indicating that further research is required to develop a feasible and efficient remediation method. Enhancing the mass transfer of PAHs from the soil phase to the liquid phase might prove an efficient and environmentally low-risk way of addressing the problem of slow PAH biodegradation in soil.
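As a minimal illustration of how the rate-limiting step described above is usually formalized (an illustrative textbook-style sketch, not a model taken from this work), the aqueous-phase concentration C_w of a PAH can be written as a linear-driving-force desorption term coupled to Monod-type biodegradation:

```latex
\frac{dC_w}{dt} \;=\; k_{des}\!\left(\frac{C_s}{K_d} - C_w\right) \;-\; \frac{\mu_{max}}{Y}\,\frac{C_w}{K_s + C_w}\,X
```

Here C_s is the sorbed concentration, K_d the sorption distribution coefficient, k_des the desorption rate coefficient, and X, Y, μ_max, K_s the usual Monod biomass, yield, maximum specific growth rate and half-saturation constant; all symbols are generic assumptions for illustration. When k_des is small, the first term controls the overall removal rate, which is precisely the bioavailability limitation discussed above.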
Abstract:
Máster Universitario en Eficiencia Energética (SIANI)
Abstract:
The research work concerns an analysis of the foundations of Quantum Field Theory carried out from an educational perspective. The whole research has been driven by two questions:
• How does the concept of object change when moving from classical to contemporary physics?
• How are the concepts of field and interaction shaped and conceptualized within contemporary physics? What makes the quantum field and interaction similar to, and what makes them different from, the classical ones?
The work has been developed through several studies:
1. A study aimed at analyzing the formal and conceptual structures characterizing the description of continuous systems that remain invariant in the transition from classical to contemporary physics.
2. A study aimed at analyzing the changes in the meanings of the concepts of field and interaction in the transition to quantum field theory.
3. A detailed study of the Klein-Gordon equation aimed at analyzing, in a case considered emblematic, some interpretative (conceptual and didactical) problems in the concept of field that university textbooks do not address explicitly.
4. A study concerning the application of the "Discipline-Culture" model elaborated by I. Galili to the analysis of the Klein-Gordon equation, in order to reconstruct the meanings of the equation from a cultural perspective.
5. A critical analysis, in the light of the results of the studies mentioned above, of the existing proposals for teaching the basic concepts of Quantum Field Theory and particle physics at the secondary school level or in introductory university physics courses.
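For reference, the equation at the centre of studies 3 and 4 is the free Klein-Gordon equation, written here in natural units (ħ = c = 1):

```latex
\left(\Box + m^{2}\right)\varphi(x) = 0, \qquad \Box \equiv \partial_{t}^{2} - \nabla^{2}
```

Whether φ is read as a relativistic single-particle wavefunction or as a classical field to be quantized is exactly the kind of interpretive choice that makes the equation an emblematic case for the didactical analysis described above.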
Abstract:
Our work centres on Filosofia dell'ineguaglianza (The Philosophy of Inequality), a fiery epistolary pamphlet of social philosophy composed by Nikolaj Berdjaev at the beginning of 1918. In the fourteen vehement letters that make up the work, he harshly criticizes the idea of social and metaphysical equality propagated by the revolutionaries, siding instead with hierarchical inequality, which he considers the only guarantee of freedom and of the theanthropic stature of man. We have divided our investigation into three parts: the first chapter is a historical-philosophical introduction to the text, highlighting the fundamental concepts of the author's thought; in the second chapter we bring out the link between Berdjaev's "philosophical style" and the religious culture to which he belonged, and then reflect on the translation problems that derive from it, dwelling in particular on the aphoristic character of his thought and on the marked emotional afflatus that pervades his exposition. Finally, the third chapter contains our translation of four of the letters (On Revolution; On the Ontological-Religious Foundations of Sociality; On the State; On the Kingdom of God) and of the afterword added by Berdjaev in Berlin in 1923, on the occasion of the book's publication.
Abstract:
Basic concepts and definitions relating to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use, as input for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). Data from two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is the fulfillment of the Well Mixed Condition (WMC). For the description of dispersion in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration-step selection are discussed. Absolute and relative dispersion experiments are made with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contribution of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
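For orientation, a standard formulation from the LSM literature (not quoted from the abstract itself): a Markov order 0, or random displacement, model for the vertical particle position Z with eddy diffusivity K(z, t) satisfies the Well Mixed Condition when written in Itô form as

```latex
dZ \;=\; \frac{\partial K}{\partial z}\,dt \;+\; \sqrt{2K(Z,t)}\;dW_t
```

where W_t is a Wiener process. The drift correction ∂K/∂z is what keeps an initially well-mixed tracer well mixed, which is why, as noted above, the interpolation of the diffusion coefficient must remain analytically consistent with its derivative.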
Abstract:
One of the basic concepts of molecular self-assembly is that the morphology of the aggregate is directly related to the structure and interaction of the aggregating molecules. This is true not only for aggregation in bulk solution, but also for the formation of Langmuir films at the air/water interface. Thus, molecules at the interface do not necessarily form flat monomolecular films but can also aggregate into multilayers or surface micelles. In this context, various novel synthetic molecules were investigated in terms of their morphology at the air/water interface and in transferred films.

First, the self-assembly of semifluorinated alkanes and their molecular orientation at the air/water interface and in transferred films was studied employing scanning force microscopy (SFM) and Kelvin potential force microscopy. It was found that the investigated semifluorinated alkanes aggregate into circular surface micelles with a diameter of 30 nm, which are constituted of smaller muffin-shaped subunits with a diameter of 10 nm. A further result is that the introduction of an aromatic core into the molecular structure leads to the formation of elongated surface micelles and thus imparts directionality to the self-assembly.

Second, the self-assembly of two different amphiphilic hybrid materials containing a short single-stranded deoxyribonucleic acid (DNA) sequence was investigated at the air/water interface. The first molecule was a single-stranded DNA (11mer) with two hydrophobically modified 5-(dodec-1-ynyl)uracil nucleobases at the terminal 5'-end of the oligonucleotide sequence. Isotherm measurements revealed the formation of semi-stable films at the air/water interface. SFM imaging of films transferred via the Langmuir-Blodgett technique supported this finding and indicated mono-, bi- and multilayer formation, according to the surface pressure applied upon transfer. Within these films, the hydrophilic DNA sequence was oriented towards the air, covering 95% of the substrate. Similar results were obtained with a second type of amphiphile, a DNA block copolymer. Furthermore, the potential to perform molecular recognition experiments at the air/water interface with these DNA hybrid materials was evaluated.

Third, polyglycerol ester (PGE) molecules, which are known to form very stable foams, were studied. The aim was to elucidate the molecular structure of PGE molecules at the air/water interface in order to understand the foam stabilization mechanism. Several model systems mimicking the air/water interface of a PGE foam, and methods for a noninvasive transfer, were tested and characterized by SFM. It could be shown that PGE stabilizes the air/water interface of a foam bubble by forming multiple surfactant layers. Additionally, a new transfer technique, the bubble film transfer, was established and characterized by high-speed camera imaging. The results demonstrate the diversity of structures which can be formed by amphiphilic molecules at the air/water interface and after film transfer, as well as the impact of the chemical structure on the aggregate morphology.
Abstract:
BCJ-relations have a series of important consequences in Quantum Field Theory and in Gravity. In QFT, one can use BCJ-relations to reduce the number of independent colour-ordered partial amplitudes and to relate nonplanar and planar diagrams in loop calculations. In addition, one can use BCJ-numerators to construct gravity scattering amplitudes through a squaring procedure. For these reasons, it is important to find a prescription to obtain BCJ-numerators without requiring a diagram-by-diagram approach. In this thesis, after introducing some basic concepts needed for the discussion, I will examine the existing diagrammatic prescriptions to obtain BCJ-numerators. Subsequently, I will present an algorithm to construct an effective Yang-Mills Lagrangian which automatically produces kinematic numerators satisfying BCJ-relations. A discussion on the kinematic algebra found through scattering equations will then be presented as a way to fix non-uniqueness problems in the algorithm.
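For orientation, the standard tree-level statements behind these claims (textbook results, not specific to this thesis) can be written schematically as

```latex
A_n^{\text{tree}} = g^{\,n-2}\sum_{i}\frac{c_i\,n_i}{D_i},\qquad
c_i + c_j + c_k = 0 \;\Longrightarrow\; n_i + n_j + n_k = 0,\qquad
M_n^{\text{tree}} \;\propto\; \sum_{i}\frac{n_i\,\tilde{n}_i}{D_i}
```

where the sums run over trivalent graphs with colour factors c_i, kinematic numerators n_i and propagator denominators D_i; replacing each colour factor by a second copy of BCJ-satisfying numerators ñ_i yields the gravity amplitude, which is the squaring procedure mentioned above.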
Abstract:
The interpreting profession is currently changing: migration flows, the economic crisis and the fast development of ICTs have brought unexpected changes to our societies and to traditional interpreting services everywhere. Remote interpreting (RI), which encompasses new methods such as videoconference interpreting and telephone interpreting (TI), has developed greatly and now sees interpreters working remotely, connected to service users via videoconference setups or telephone calls. This dissertation aims at studying and analyzing the relevant aspects of interpreter-mediated telephone calls, describing the consequences for interpreters in this new working field, and defining the new strategies and techniques interpreters must develop in order to adjust to the new working context. To these ends, the objectives of this dissertation are the following: to describe the settings in which RI is most used, to study the main consequences for interpreters, and to analyze real interpreter-mediated conversations. The dissertation deals with issues studied by the Shift project, a European project which aims at creating teaching materials for remote interpreting; the project started in 2015, and the University of Bologna, in particular the DIT - Department of Interpreting and Translation, is the coordinating unit and promoting partner. This dissertation is divided into five chapters. Chapter 1 contains an outline of the major research related to RI and videoconference interpreting, as well as a description of its main settings: healthcare, law, business and institutional settings. Chapter 2 focuses on the physiological and psychological implications for interpreters working in RI. The concepts of absence, presence and remoteness are discussed, and some opinions of professional interpreters and legal practitioners (LPs) concerning remote interpreting are offered as well. In Chapter 3, telephone interpreting is presented; basic concepts of conversation analysis and prominent traits of interpreter-mediated calls are also explored. Chapter 4 presents the materials and methodology used for the analysis of the data. The results, discussed in Chapter 5, show that telephone interpreting may be suitable for some specific contexts; however, it is clear that interpreters must receive appropriate training before working in any form of RI. The dissertation finally offers suggestions for the implementation of training in RI for future interpreting students.
Abstract:
In this critical analysis of sociological studies of the political subsystem in Yugoslavia since the fall of communism, Mr. Ilic examined the work of the majority of leading researchers of politics in the country between 1990 and 1996. Where the question of continuity was important, he also looked at previous research by the writers in question. His aim was to demonstrate the overall extent of existing research and at the same time to identify its limits and the social conditions which defined it. Particular areas examined included the problems of defining basic concepts and selecting the theoretically most relevant indicators; the sources of data, including the types of authentic materials exploited; problems of research work (contacts, field control, etc.); problems of analysis; and finally the problems arising from different relations with the people who commission the research. In the first stage of the research, looking at methods of defining key terms, special attention was paid to the analysis of the most frequently used terms, such as democracy, totalitarianism, the political left and right, and populism. Numerous weaknesses were noted in the analytic application of these terms. In studies of the possibilities of creating a democratic political system in Serbia and its possible forms (majoritarian or consensual democracy), the profound social division of Serbian society was neglected. The left-right distinction tends to be identified with the government-opposition relation, as in practical politics. The idea of populism was used to pass responsibility for the policy of war from the manipulator to the manipulated, while the concept of totalitarianism was used in a rather old-fashioned way, with echoes of the Cold War. In general, the terminology used in the majority of recent research on the political subsystem in Yugoslavia is characterised by a particular ideological style and by practical political material rather than by developed theoretical effort. The second section of the analysis considered the wider theoretical background of the research and focused on studies of the processes of transformation and transition in Yugoslav society, particularly the work of Mladen Lazic and Silvano Bolcic, whom he sees as the most important and influential contemporary Yugoslav sociologists. Here Mr. Ilic showed that the meaning of empirical data is closely connected with the stratification schemes towards which they are oriented, so that the same data can have different meanings when shown through different schemes. He went on to place the observed theoretical frames in the context of the authors' wider ideological understandings. Here the emphasis was on the formalistic character of such notions as command economy and command work, which were used in analysing the functioning and the collapse of communist society, although Mr. Ilic passed favourable judgement on Lazic's critique of political over-determination in its various attempts to explain the disintegration of the communist political (sub)system. The next stage of the analysis was devoted to the problem of empirical identification of the observed phenomena. Here again the notions of the political left and right were of key importance. He sees two specific problems in using these notions in talking about Yugoslavia, the first being that the process of transition in the FR Yugoslavia has hardly begun.
The communist government has in effect remained in power continuously since 1945, despite the introduction of a multi-party system in 1990. The process of privatisation of public property was interrupted at a very early stage, and the results of this are evident on the structural level, in the continuous weakening of the social status of the middle class, and on the political level, in that the social structure and dominant form of property direct the majority of votes towards the communists in power. This has been combined with strong chauvinist confusion associated with the wars in Croatia and Bosnia, and these ideas were incorporated by all the relevant Yugoslav political parties, making it more difficult to differentiate between them empirically. In this context he cites the situation of the stream of political scientists who emerged from the Faculty of Political Science in Belgrade. During the time of the one-party regime, this faculty functioned as ideological support for official communist policy, and its teachers were unable to develop views which differed from the official line, treating all contrasting ideas in the same way and neglecting their differences. Following the introduction of a multi-party system, these authors changed their idea of the public enemy, but retained an undifferentiated and theoretically undeveloped approach to the identification of political ideas. The fourth section of the work looked at problems of explanation in studying the political subsystem, and the attempt at an adequate causal explanation of the triumph of Slobodan Milosevic's communists at four successive elections was identified as the key methodological problem. The main problem Mr. Ilic isolated here was the neglect of structural factors in explaining the voters' choice. He then went on to look at the way empirical evidence is collected and studied, pointing out many mistakes in planning and determining the samples used in surveys, as well as in the scientifically incorrect use of results. He found these weaknesses particularly noticeable in the works of representatives of the so-called nationalistic orientation in the Yugoslav sociology of politics, and he pointed out the practical political abuses which these methodological weaknesses made possible. He also identified similar types of mistakes in research carried out by Serbian political parties on the basis of party documentation and using methods of content analysis. He found various one-sided applications of survey data and looked at attempts to apply other sources of data (statistics, official party documents, various research results). Mr. Ilic concluded that there are two main characteristics of modern Yugoslav sociological studies of political subsystems: there are a considerable number of surveys with ambitious aspirations to explain political phenomena, but at the same time there is a clear lack of a developed sociological theory of political (sub)systems. He feels that, in the absence of such theory, most researchers are over-ready to accept the theoretical solutions found for the interpretation of political phenomena in other countries. He sees a need for a stronger methodological basis for future research, either 1) in the complementary use of different sources and ways of collecting data, or 2) in including more of a historical dimension in attempts to explain the political subsystem in Yugoslavia.
Abstract:
Metals price risk management is a key issue related to financial risk in metal markets because of the uncertainty of commodity price fluctuations, exchange rates and interest rate changes, and the huge price risk borne by both metals producers and consumers. It is therefore taken into account by all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators and traders. Managing price risk provides stable income for both metals producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps and options contracts. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have been around in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options.

The first part of the project is a description of basic derivatives and risk management strategies. In addition, this part discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives. In this part, the options pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical option values with their observed market values. Predicting future trends in copper prices is important and essential to managing market price risk successfully. Therefore, the third part is a discussion of econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part aims at showing how LME copper prices can be explained by means of a simultaneous-equation structural model (two-stage least squares regression) connecting supply and demand variables. The simultaneous econometric model built for the copper industry is:

Q_t^D = e^{-5.0485} \cdot P_{t-1}^{-0.1868} \cdot GDP_t^{1.7151} \cdot e^{0.0158\, IP_t}

Q_t^S = e^{-3.0785} \cdot P_{t-1}^{0.5960} \cdot T_t^{0.1408} \cdot P_{OIL,t}^{-0.1559} \cdot USDI_t^{1.2432} \cdot LIBOR_{t-6}^{-0.0561}

Q_t^D = Q_t^S

Solving the system for the price yields the reduced-form equation

P_{t-1}^{CU} = e^{-2.5165} \cdot GDP_t^{2.1910} \cdot e^{0.0202\, IP_t} \cdot T_t^{-0.1799} \cdot P_{OIL,t}^{0.1991} \cdot USDI_t^{-1.5881} \cdot LIBOR_{t-6}^{0.0717}

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, which represents aggregate economic activity; in addition, global industrial production growth, denoted IP_t, is included in the model.
T_t is a time variable, a useful proxy for technological change. The price of oil at time t, denoted P_{OIL,t}, is a proxy for the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 6-month-lagged one-year London Interbank Offered Rate. Although the model can be applied to other base-metal industries, omitted exogenous variables, such as the price of a substitute or a combined variable related to the prices of substitutes, have not been considered in this study. Based on this econometric model and using Monte Carlo simulation, the probabilities that the monthly average copper prices in 2006 and 2007 will exceed specific option strike prices are estimated. The final part evaluates risk management strategies, including options strategies, metal swaps and simple options, in relation to the simulation results. The basic options strategies, such as bull spreads, bear spreads and butterfly spreads, created using both call and put options in 2006 and 2007, are evaluated. Each risk management strategy in 2006 and 2007 is then analyzed on the basis of the daily data and the price prediction model. As a result, applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
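A minimal sketch of the kind of Monte Carlo estimate described above, assuming, purely for illustration, lognormally distributed errors around the econometric price forecast; the forecast, volatility and strike values below are placeholders, not results from this project:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical inputs: model forecast of the monthly average LME copper
# price (USD/tonne), volatility of the forecast error, and option strike.
forecast_price = 6000.0   # placeholder, not a result from the project
sigma = 0.15              # placeholder lognormal volatility of the error
strike = 6500.0           # placeholder strike price
n_draws = 100_000

# Draw simulated monthly average prices around the econometric forecast,
# assuming multiplicative lognormal errors with mean multiplier 1.
simulated = forecast_price * rng.lognormal(mean=-0.5 * sigma**2,
                                           sigma=sigma, size=n_draws)

# Probability that the monthly average price exceeds the strike.
prob_above_strike = np.mean(simulated > strike)
print(f"P(price > {strike:.0f}) ~ {prob_above_strike:.3f}")
```

In the project itself the forecast would come from the reduced-form price equation above, with the simulation propagating the uncertainty in its inputs.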
Abstract:
Understanding how a living cell behaves has become a very important topic in today's research. Hence, different sensors and testing devices have been designed to test the mechanical properties of living cells. This thesis presents a method of micro-fabricating a bio-MEMS-based force sensor which is used to measure the force response of living cells. Initially, the basic concepts of MEMS are discussed and the different micro-fabrication techniques used to manufacture various MEMS devices are described. Many MEMS-based devices have been manufactured and employed for testing nano-materials and bio-materials, and each of the MEMS-based devices described in this thesis uses a novel concept for testing specimens. The specimens tested include nano-tubes, nano-wires, thin-film membranes and biological living cells; the different devices used for material testing and cell mechanics are therefore explained. The micro-fabrication techniques used to fabricate this force sensor are described, and the experiments performed to characterize each step in the fabrication are explained. The fabrication of this force sensor is based on the facilities available at Michigan Technological University. Some interesting and uncommon MEMS phenomena observed during this fabrication are documented in multiple SEM images.
Abstract:
This morning Dr. Battle will review basic concepts of linear functions and piecewise functions and how they can be used as models for real-world applications. She will also introduce techniques for using a spreadsheet to analyze data.
Abstract:
The ability of cryogenic photonic crystals to carry out high performance microwave signal processing operations has been developed into systems that can: rapidly record broadband microwave spectra with fine resolution and high dynamic range; search for patterns in 40 gigabits per second data streams; and communicate via spread-spectrum signals that are well below the noise floor. The basic concepts of the technology and its many applications, along with an overview of university-industry partnerships and the growing photonics industry in Bozeman, will be presented.
Abstract:
The theory of the intensities of 4f-4f transitions introduced by B.R. Judd and G.S. Ofelt in 1962 has become a centerpiece of rare-earth optical spectroscopy over the past five decades. Many fundamental studies have since explored the physical origins of the Judd–Ofelt theory and have proposed numerous extensions to the original model. A great number of studies have applied the Judd–Ofelt theory to a wide range of rare-earth-doped materials, many of them with important applications in solid-state lasers, optical amplifiers, phosphors for displays and solid-state lighting, upconversion and quantum-cutting materials, and fluorescent markers. This paper takes the view of the experimentalist who is interested in appreciating the basic concepts, implications, assumptions and limitations of the Judd–Ofelt theory in order to properly apply it to practical problems. We first present the formalism for calculating the wavefunctions of 4f electronic states in concise form and then show their application to the calculation and fitting of 4f-4f transition intensities. The potential, limitations and pitfalls of the theory are discussed, and a detailed case study of LaCl3:Er3+ is presented.
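For context, the central result an experimentalist applies in practice (the standard Judd–Ofelt expression, not a new result of this paper) writes the electric-dipole line strength of a transition between multiplets J and J' in terms of three fitted intensity parameters Ω_λ:

```latex
S_{ED}(J \rightarrow J') \;=\; \sum_{\lambda=2,4,6} \Omega_\lambda\,
\bigl|\langle \psi' J' \,\|\, U^{(\lambda)} \,\|\, \psi J \rangle\bigr|^{2}
```

The reduced matrix elements of the unit tensor operators U^(λ) are nearly host-independent and widely tabulated, so a fit of measured band intensities determines Ω_2, Ω_4 and Ω_6, from which radiative rates and branching ratios follow.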