Abstract:
We report a photoacoustic (PA) study of the thermal and transport properties of GaAs epitaxial layers doped with Si at varying doping concentrations, grown on GaAs substrates by molecular beam epitaxy. The data are analyzed on the basis of Rosencwaig and Gersho's theory of the PA effect. The amplitude of the PA signal gives information about the various heat-generation mechanisms in semiconductors. The experimental data, obtained from measurements of the PA signal as a function of modulation frequency in a heat-transmission configuration, were fitted with the phase of the PA signal obtained from the theoretical model, treating four parameters (thermal diffusivity, diffusion coefficient, nonradiative recombination time, and surface recombination velocity) as adjustable. The analysis shows that the photoacoustic technique is sensitive to changes in the surface states, which depend on the doping concentration. The study demonstrates the effectiveness of the photoacoustic technique as a noninvasive and nondestructive method for measuring and evaluating the thermal and transport properties of epitaxial layers.
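The fitting procedure described above can be sketched in code. The phase model below is a deliberately simplified single-parameter-pair version (thermally thick, heat-transmission limit) with synthetic data, not the four-parameter model of the study; the layer thickness, diffusivity value, and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified PA phase model in the heat-transmission, thermally thick
# limit: phase(f) = phi0 - l * sqrt(pi * f / alpha), where alpha is the
# thermal diffusivity and l the layer thickness (held fixed here).
def pa_phase(f, alpha, phi0, l=350e-6):
    return phi0 - l * np.sqrt(np.pi * f / alpha)

f = np.linspace(10.0, 300.0, 40)             # modulation frequency (Hz)
rng = np.random.default_rng(0)
alpha_true = 2.6e-5                          # m^2/s, order of magnitude for GaAs
phase = pa_phase(f, alpha_true, -0.5) + rng.normal(0.0, 0.01, f.size)

# Least-squares fit of the phase-vs-frequency curve, as in the abstract,
# but with only (alpha, phi0) adjustable in this reduced sketch.
popt, _ = curve_fit(pa_phase, f, phase, p0=[1e-5, 0.0],
                    bounds=([1e-6, -10.0], [1e-3, 10.0]))
alpha_fit, phi0_fit = popt
print(f"fitted thermal diffusivity: {alpha_fit:.2e} m^2/s")
```

The full analysis would add the carrier diffusion coefficient, recombination time, and surface recombination velocity as further adjustable parameters of a more complete signal model.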
Abstract:
The use of catalysts in chemical and refining processes has increased rapidly since 1945, when oil began to replace coal as the most important industrial raw material. Catalysis has a major impact on the quality of human life as well as on economic development. The demand for catalysts is still increasing, since catalysis is looked upon as a solution for eliminating or replacing polluting processes. Metal oxides represent one of the most important and widely employed classes of solid catalysts, and much effort has been spent on the preparation, characterization and application of metal oxides. Recently, great interest has been devoted to cerium dioxide (CeO2)-containing materials because of their broad range of applications in various fields, from catalysis to ceramics, fuel cell technologies, gas sensors, solid state electrolytes and ceramic biomaterials, in addition to the classical application of CeO2 as an additive in the so-called three-way catalysts (TWC) for automotive exhaust treatment. Moreover, CeO2 can promote the water-gas shift and steam reforming reactions and favours catalytic activity at the interfacial metal-support sites. Solid solutions of ceria with Group IV transition metals deserve particular attention for their applicability in various technologically important catalytic processes. Mesoporous CeO2−ZrO2 solid solutions have been employed in various reactions, including CO oxidation, soot oxidation and the water-gas shift reaction. Inspired by the unique and promising characteristics of ceria-based mixed oxides and solid solutions for various applications, we have selected ceria-zirconia oxides for our studies. The focus of this work is the synthesis and investigation of the structural and catalytic properties of pure and modified ceria-zirconia mixed oxide.
Abstract:
The classical methods of analysing time series by the Box-Jenkins approach assume that the observed series fluctuates around changing levels with constant variance; that is, the time series is assumed to be homoscedastic. Financial time series, however, exhibit heteroscedasticity in the sense that they possess non-constant conditional variance given the past observations. The analysis of financial time series therefore requires modelling such variances, which may depend on some time-dependent factors or on their own past values. This led to the introduction of several classes of models to study the behaviour of financial time series; see Taylor (1986), Tsay (2005), and Rachev et al. (2007). The class of models used to describe the evolution of conditional variances is referred to as stochastic volatility models. The stochastic models available for analysing the conditional variances are based on either normal or log-normal distributions. One of the objectives of the present study is to explore the possibility of employing some non-Gaussian distributions to model the volatility sequences and then to study the behaviour of the resulting return series. This led us to work on the related problem of statistical inference, which is the main contribution of the thesis.
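The idea of driving the volatility sequence with a non-Gaussian distribution can be illustrated with a minimal simulation (this is a generic sketch, not the thesis's model or inference procedure): log-volatility follows an AR(1) with Laplace shocks instead of the usual normal ones, and the resulting returns show the heavy tails typical of financial data.

```python
import numpy as np

# Minimal stochastic-volatility sketch: log-volatility h_t follows an
# AR(1) driven by non-Gaussian (Laplace) shocks; returns are
# r_t = exp(h_t / 2) * eps_t with standard normal eps_t.
rng = np.random.default_rng(42)
T, phi, sigma_eta = 2000, 0.95, 0.2

h = np.zeros(T)                       # log-volatility
for t in range(1, T):
    # Laplace scale sigma_eta / sqrt(2) gives shock variance sigma_eta^2.
    h[t] = phi * h[t - 1] + rng.laplace(0.0, sigma_eta / np.sqrt(2))

returns = np.exp(h / 2) * rng.standard_normal(T)

# Heavy tails show up as excess kurtosis relative to a Gaussian (3.0).
kurt = ((returns - returns.mean())**4).mean() / returns.var()**2
print(f"sample kurtosis: {kurt:.2f}")
```

Inference for such models (the thesis's main contribution) is harder than simulation, since the volatility sequence is latent.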
Abstract:
In this work, magneto-optical storage layers and their mutual couplings were investigated. On the one hand, the RE/TM-alloy layers "classical" for magneto-optical storage technology were used; on the other hand, garnets, which had not previously been used in this field of application, were also successfully integrated. By way of introduction, the magneto-optical methods, the resulting requirements on the thin layers and the corresponding physical fundamentals are discussed. In addition, radio-frequency sputtering of RE/TM alloys is covered and the magneto-optical measurement techniques used are explained [Chs. 2 & 3]. The investigations of RE/TM layers confirm the properties known from the literature. They can be produced effectively, and in a form suitable for magneto-optical applications, by RF sputtering. The immediate layer parameters, such as layer thickness and terbium concentration, can be set via simple relationships. Since the terbium concentration causes a change in the compensation temperature, the latter can be checked by measurements with a Kerr magnetometer. The perpendicular magnetic anisotropy, which is of interest for applications, could likewise be linked to the deposition conditions. When the layers are deposited on a smooth glass surface (float glass), the RE/TM layer already shows, in the first monolayers, a growth behaviour that produces perpendicular anisotropy. On a quartz-glass or ceramic surface, the first layers grow in a structure induced by the substrate; thereafter the growth behaviour changes steadily until perpendicular anisotropy is reached. This behaviour can be influenced only insignificantly even by various buffer layers (aluminium and silicon nitride) [Chs. 5 & 6].
Exchange coupling was demonstrated in directly deposited double layers consisting of a readout layer (GdFeCo) on a storage layer (TbFeCo). Below the compensation temperature the readout layer shows no coupling to the storage layer, while above the compensation temperature a direct coupling of the sublattices takes place. This yields the masking behaviour desired for the MSR effect. The results on compensation temperature and growth behaviour previously obtained from the single layers could be recovered in the double layers. The simplest structure proves to be the ideal case here: the storage layer is deposited on float glass and covered directly with the readout layer [Ch. 7]. Furthermore, it was shown that the Faraday effect of a garnet layer can be used as an amplifying element. In an application-ready, integrated layer system, the garnets produced inexpensively by the sol-gel process could not meet the structural requirements, since cracks and holes formed during fabrication. In an experimental realization with a single-crystalline garnet layer and an RE/TM layer, the fundamental suitability of the layer system could be demonstrated [Ch. 8].
Abstract:
Numerous studies have demonstrated an effect of probable climate change on the hydrosphere's different subsystems. In the 21st century a global and regional redistribution of water has to be expected, and it is very likely that extreme weather phenomena will occur more frequently. From a global view the flood situation will exacerbate. In contrast to these findings, the classical approach of flood frequency analysis provides terms like "mean flood recurrence interval". For this analysis to be valid, however, the distribution parameters must be stationary, which implies that the flood frequencies are constant in time. Newer approaches take into account extreme value distributions with time-dependent parameters, but this implies discarding the traditional terminology that has been used to date in engineering hydrology. On the regional scale, climate change affects the hydrosphere in various ways. The question thus arises whether, in central Europe, the classical approach of flood frequency analysis is no longer usable and whether the traditional terminology should be renewed. In the present case study, hydro-meteorological time series of the Fulda catchment area (6930 km²), upstream of the gauging station Bonaforth, are analyzed for the period 1960 to 2100. First, a distributed catchment model (SWAT2005) is built up, calibrated and finally validated. The Edertal reservoir is also regulated by feedback control of the catchment's output in case of low water. Owing to this intricacy, a special modeling strategy was necessary: the study area is divided into three SWAT basin models, and an additional physically based reservoir model is developed. To further improve the streamflow predictions of the SWAT model, a correction by an artificial neural network (ANN) was tested successfully, which opens a new way to improve hydrological models.
With this extension, the calibration and validation of the SWAT model for the Fulda catchment area are improved significantly. After calibration of the model against observed 20th-century streamflow, the SWAT model is driven by high-resolution climate data of the regional model REMO using the IPCC scenarios A1B, A2, and B1 to generate future runoff time series for the 21st century for the various sub-basins in the study area. In a second step, flood time series HQ(a) are derived from the 21st-century runoff time series (scenarios A1B, A2, and B1). These flood projections are then extensively tested with regard to stationarity, homogeneity and statistical independence. All these tests indicate that the SWAT-predicted 21st-century trends in the flood regime are not significant. Within the projected period, the members of the flood time series prove to be stationary and independent events. Hence, the classical stationary approach of flood frequency analysis can still be used within the Fulda catchment area, notwithstanding the fact that some regional climate change has been predicted under the IPCC scenarios. It should be noted, however, that the present results are not transferable to other catchment areas. Finally, a new method is presented that enables the calculation of extreme flood statistics even if the flood time series is non-stationary and exhibits short- and long-term persistence. This method, called Flood Series Maximum Analysis here, enables the calculation of maximum design floods for a given risk or safety level and time period.
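The classical stationary step, fitting an extreme-value distribution to annual maxima HQ(a) and reading off a design flood, can be sketched as follows. The annual maxima below are synthetic, and the Gumbel (EV1) choice, location, and scale are illustrative assumptions, not values from the Fulda study.

```python
import numpy as np
from scipy import stats

# Classical stationary flood frequency analysis: fit a Gumbel (EV1)
# distribution to annual maximum discharges HQ(a) and compute the
# T-year design flood. Discharges here are synthetic (m^3/s).
rng = np.random.default_rng(1)
hq = stats.gumbel_r.rvs(loc=300.0, scale=80.0, size=60, random_state=rng)

loc, scale = stats.gumbel_r.fit(hq)

def design_flood(T):
    """Discharge with return period T years (non-exceedance prob 1 - 1/T)."""
    return stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)

print(f"HQ100 = {design_flood(100):.0f} m^3/s")
```

The stationarity, homogeneity and independence tests mentioned in the abstract are precisely what justify applying such a fixed-parameter fit to a projected 21st-century series.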
Abstract:
This paper develops some theoretical and methodological considerations for the development of a critical competence model (CCM). The model is defined as a functionally organized set of skills and knowledge that allows measurable results with positive consequences for strategic business objectives. The theoretical approaches of the classical competence model, the contemporary competence model and the human competencies model were reviewed for the development of the proposal. Implementation of the model includes five steps: 1) conduct a job analysis, considering which dimensions or facets are subject to revision; 2) identify people with opposite performance (the highest and lowest performers); 3) identify the critical incidents most relevant to the job position; 4) develop behavioral expectation scales (BES); and 5) have the resulting BES validated by experts in the field. As a final consideration, it is determined that competence models require accurate measurement. Approaches marked by excessive theoreticism may cause the issue of competence to become a business fashion with low or minimal impact, affecting its validity, reliability and deployment in organizations.
Abstract:
In the present work, the toxic activity of extracts of Eupatorium microphyllum L.f. was evaluated on 4th-instar larvae of the mosquito Aedes aegypti (Linnaeus) under laboratory conditions. Aqueous extracts were used at concentrations of 500 mg L-1, 1,500 mg L-1 and 2,500 mg L-1, and acetone extracts at concentrations of 10 mg L-1, 20 mg L-1, 30 mg L-1, 40 mg L-1 and 50 mg L-1. The bioassays were carried out in triplicate, each with 20 larvae exposed for 24 hours to 150 mL of solution. Control groups were employed in all bioassays. In the evaluation of the acetone extracts, a negative control was used to rule out larval mortality caused by the solvent itself. The aqueous extracts showed low to moderate larvicidal action, with mortality below 20%. In contrast, the acetone extracts produced 15% mortality at 10 and 20 mg L-1 and 22% to 38% mortality at 30 and 40 mg L-1. At 50 mg L-1, however, mortality reached 95.4%, a highly statistically significant result. The acetone extract concentrations thus proved the most efficient for the control of the selected mosquitoes. Both types of extract showed a toxic effect on A. aegypti larvae; nevertheless, the acetone extracts had a greater effect than the aqueous extracts of E. microphyllum, which constitutes a viable alternative in the search for new larvicides from natural compounds.
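When a negative (solvent) control shows some mortality of its own, treatment mortality is conventionally adjusted with Abbott's formula; the abstract does not state that this correction was applied, so the sketch below, with hypothetical numbers, only illustrates the standard calculation.

```python
# Abbott's formula corrects observed treatment mortality for mortality
# that also occurs in the (solvent) control group; inputs are percentages.
def abbott_corrected(treated_pct, control_pct):
    return 100.0 * (treated_pct - control_pct) / (100.0 - control_pct)

# Hypothetical example: 95.4% mortality at 50 mg/L with an assumed 5%
# mortality in the solvent control.
print(f"{abbott_corrected(95.4, 5.0):.1f}%")
```

With zero control mortality the correction leaves the treatment value unchanged, which is why a clean negative control simplifies the analysis.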
Abstract:
Despite a growing body of literature on how environmental degradation can fuel civil war, the reverse effect, namely that of conflict on environmental outcomes, is relatively understudied. From a theoretical point of view this effect is ambiguous, with some forces pointing to pressures for environmental degradation and some pointing in the opposite direction. Hence, the overall effect of conflict on the environment is an empirical question. We study this relationship in the case of Colombia. We combine a detailed satellite-based longitudinal dataset on forest cover across municipalities over the period 1990-2010 with a comprehensive panel of conflict-related violent actions by paramilitary militias. We first provide evidence that paramilitary activity significantly reduces the share of forest cover in a panel specification that includes municipal and time fixed effects. Then we confirm these findings by taking advantage of a quasi-experiment that provides us with an exogenous source of variation for the expansion of the paramilitary. Using the distance to the region of Urabá, the epicenter of this expansion, we instrument paramilitary activity in each cross-section for which data on forest cover are available. As a falsification exercise, we show that the instrument ceases to be relevant after the paramilitaries largely demobilized following peace negotiations with the government. Further, after the demobilization the deforestation effect of the paramilitaries disappears. We explore a number of potential mechanisms that may explain the conflict-driven deforestation, and show evidence suggesting that paramilitary violence generates large outflows of people in order to secure areas for growing illegal crops, exploiting mineral resources, and engaging in extensive agriculture. In turn, these activities are associated with deforestation.
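The instrumental-variables logic above, using distance to the expansion epicenter as an instrument for paramilitary activity, amounts to two-stage least squares. The sketch below is a generic 2SLS on synthetic cross-sectional data (no fixed effects, invented coefficients), not the paper's actual specification.

```python
import numpy as np

# Two-stage least squares sketch: instrument an endogenous regressor x
# (think: paramilitary activity) with an instrument z (think: distance to
# the expansion epicenter) to estimate its effect on y (forest cover).
rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)         # endogenous regressor
y = -0.5 * x + u + rng.normal(size=n)        # true causal effect: -0.5

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X1 = np.column_stack([np.ones(n), z])
x_hat = X1 @ ols(X1, x)                              # stage 1: project x on z
beta = ols(np.column_stack([np.ones(n), x_hat]), y)  # stage 2: regress y on x_hat
print(f"2SLS estimate of the causal effect: {beta[1]:.3f}")
```

Because the confounder u raises both x and y, naive OLS of y on x would be biased toward zero or above; the instrument isolates the exogenous variation in x.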
Abstract:
The classical description of Si oxidation given by Deal and Grove has well-known limitations for thin oxides (below 200 Å). Among the large number of alternative models published so far, the interfacial emission model has shown the greatest ability to fit the experimental oxidation curves. It relies on the assumption that during oxidation Si interstitials are emitted into the oxide to release strain and that the accumulation of these interstitials near the interface reduces the reaction rate there. The resulting set of differential equations makes it possible to model diverse oxidation experiments. In this paper, we have compared its predictions with two sets of experiments: (1) the pressure dependence for subatmospheric oxygen pressures and (2) the enhancement of the oxidation rate after annealing in an inert atmosphere. The result is not satisfactory and raises serious doubts about the model's correctness.
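For reference, the classical Deal-Grove description solves the implicit relation x² + Ax = B(t + τ) for the oxide thickness x. The sketch below evaluates its closed-form solution; the parameter values are illustrative numbers of the order reported for dry O2 oxidation of Si near 1000 °C, not taken from this paper.

```python
import math

# Deal-Grove relation: x^2 + A*x = B*(t + tau), solved for thickness
#   x(t) = (A/2) * (sqrt(1 + 4*B*(t + tau)/A^2) - 1)
# A (microns), B (microns^2/hour), tau (hours) are illustrative values.
A, B, tau = 0.165, 0.0117, 0.37

def oxide_thickness(t):
    """Oxide thickness (microns) after t hours of oxidation."""
    return (A / 2.0) * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A**2) - 1.0)

for t in (0.5, 1.0, 4.0):
    print(f"t = {t:4.1f} h  ->  x = {oxide_thickness(t) * 1000:.1f} nm")
```

At short times the relation is linear (reaction-limited, rate B/A) and at long times parabolic (diffusion-limited, rate B); the sub-200 Å regime where it fails is exactly where the interfacial emission model was proposed as a replacement.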
Abstract:
This thesis aims to collect the experience gained in developing an intelligent supervisory system for improving the management of wastewater treatment plants, implementing it in a real plant (the Granollers WWTP) and evaluating its day-to-day operation in typical plant situations. This supervisory system combines and integrates classical control tools for treatment plants (an automatic controller of the dissolved-oxygen level in the biological reactor, the use of descriptive process models...) with tools from the field of artificial intelligence (knowledge-based systems, specifically expert systems and case-based systems, and neural networks). The document is structured in 9 chapters. A first introductory part reviews the current state of WWTP control and explains why the management of these processes is so complex (Chapter 1). This introductory chapter, together with Chapter 2, which presents the background of this thesis, serves to establish the objectives of this work (Chapter 3). Next, Chapter 4 describes the peculiarities and specific features of the plant chosen for implementing the supervisory system. Chapters 5 and 6 of this document present the work done to develop the rule-based system or expert system (Chapter 6) and the case-based system (Chapter 7). Chapter 8 describes the integration of these two reasoning tools into a distributed multi-level architecture. Finally, a last chapter corresponds to the evaluation (verification and validation), first of each tool separately and then of the overall system in the face of real situations arising at the treatment plant.
Abstract:
Facilitating the visual exploration of scientific data has received increasing attention in the past decade or so. Especially in life-science-related application areas, the amount of available data has grown at a breathtaking pace. In this paper we describe an approach that allows for visual inspection of large collections of molecular compounds. In contrast to classical visualizations of such spaces, we incorporate a specific focus of analysis, for example the outcome of a biological experiment such as high-throughput screening results. The presented method uses this experimental data to select molecular fragments of the underlying molecules that have interesting properties and uses the resulting space to generate a two-dimensional map based on a singular value decomposition algorithm and a self-organizing map. Experiments on real datasets show that the resulting visual landscape groups molecules of similar chemical properties in densely connected regions.
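The dimensionality-reduction step of such a pipeline can be sketched with a truncated SVD: a molecules-by-fragments count matrix is projected onto its two leading singular directions to obtain map coordinates. The data here are random stand-ins, and the self-organizing-map stage of the paper is omitted.

```python
import numpy as np

# Sketch of the SVD step: rows are molecules, columns are selected
# fragment counts; the two leading singular directions give 2D map
# coordinates for each molecule.
rng = np.random.default_rng(7)
X = rng.poisson(1.0, size=(100, 40)).astype(float)   # molecule x fragment counts

Xc = X - X.mean(axis=0)                # center each fragment column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = U[:, :2] * s[:2]              # 2D coordinates for the visual map

print(coords.shape)
```

In the full method these coordinates (or the reduced space) would then be fed to a self-organizing map to lay out the final landscape.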
Abstract:
Active Networks can be seen as an evolution of the classical model of packet-switched networks. The traditional and "passive" network model is based on a static definition of the network node behaviour. Active Networks propose an "active" model where the intermediate nodes (switches and routers) can load and execute user code contained in the data units (packets). Active Networks are a programmable network model, where bandwidth and computation are both considered shared network resources. This approach opens up new interesting research fields. This paper gives a short introduction to Active Networks, discusses the advantages they introduce and presents the research advances in this field.
Abstract:
Recent studies into price transmission have recognized the important role played by transport and transaction costs. Threshold models are one approach to accommodating such costs. We develop a generalized Threshold Error Correction Model to test for the presence and form of threshold behavior in price transmission that is symmetric around equilibrium. We use monthly wheat, maize, and soya prices from the United States, Argentina, and Brazil to demonstrate this model. Classical estimation of these generalized models can present challenges, but Bayesian techniques avoid many of these problems. Evidence for thresholds is found in three of the five commodity price pairs investigated.
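The threshold mechanism behind such models can be illustrated with a minimal simulation (a generic band-threshold sketch with invented parameters, not the paper's generalized specification): the price spread mean-reverts only when it lies outside a neutral band created by transaction costs, and follows a random walk inside it.

```python
import numpy as np

# Band-threshold sketch: the spread between two prices adjusts toward
# equilibrium only when |spread| exceeds the transaction-cost band c;
# inside the band there is no error correction, symmetric around zero.
rng = np.random.default_rng(11)
T, c, rho = 5000, 1.0, 0.3            # sample size, band half-width, adjustment speed

spread = np.zeros(T)
for t in range(1, T):
    s = spread[t - 1]
    adjust = -rho * s if abs(s) > c else 0.0   # correction only outside the band
    spread[t] = s + adjust + rng.normal(0.0, 0.2)

# Outer-regime reversion keeps the spread bounded despite the inner random walk.
print(f"max |spread| = {np.abs(spread).max():.2f}")
```

Estimating the band width c and adjustment speeds from data is what makes classical inference hard (the threshold enters non-smoothly), which motivates the Bayesian treatment in the paper.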
Abstract:
The development of genetically modified (GM) crops has led the European Union (EU) to put forward the concept of 'coexistence' to give farmers the freedom to plant both conventional and GM varieties. Should a premium for non-GM varieties emerge in the market, 'contamination' by GM pollen would generate a negative externality for conventional growers. It is therefore important to assess the effect of different 'policy variables' on the magnitude of the externality in order to identify suitable policies to manage coexistence. In this paper, taking GM herbicide-tolerant oilseed rape as a model crop, we start from the model developed in Ceddia et al. [Ceddia, M.G., Bartlett, M., Perrings, C., 2007. Landscape gene flow, coexistence and threshold effect: the case of genetically modified herbicide tolerant oilseed rape (Brassica napus). Ecol. Modell. 205, pp. 169-180], use a Monte Carlo experiment to generate data, and then estimate the effect of the number of GM and conventional fields, the width of buffer areas and the degree of spatial aggregation (i.e. the 'policy variables') on the magnitude of the externality at the landscape level. To represent realistic conditions in agricultural production, we assume that detection of GM material in conventional produce might occur at the field level (no grain mixing occurs) or at the silo level (where grain from different fields in the landscape is mixed). In the former case, the magnitude of the externality depends on the number of conventional fields with average transgenic presence above a certain threshold. In the latter case, it depends on whether the average transgenic presence across all conventional fields exceeds the threshold. In order to quantify the effect of the relevant 'policy variables', we compute the marginal effects and the elasticities.
Our results show that when marginal effects are relied upon to assess the impact of the different 'policy variables', spatial aggregation is far more important when transgenic material is detected at the field level, corroborating previous research. However, when elasticity is used, the effectiveness of spatial aggregation in reducing the externality is almost identical whether detection occurs at the field level or at the silo level. Our results also show that the area planted with GM is the most important 'policy variable' affecting the externality to conventional growers, and that buffer areas on conventional fields are more effective than those on GM fields. The implications of the results for coexistence policies in the EU are discussed.
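The distinction the abstract draws between marginal effects and elasticities can be made concrete with a toy regression (synthetic data and coefficients, not the paper's Monte Carlo model): for a linear response the marginal effect is the constant slope, while the elasticity rescales it by the sample means and so can rank variables differently.

```python
import numpy as np

# For a linear response y = b0 + b1*x, the marginal effect of x is b1,
# while the elasticity is b1 * mean(x) / mean(y): the percentage change
# in y for a one-percent change in x, evaluated at the sample means.
rng = np.random.default_rng(5)
x = rng.uniform(1.0, 10.0, 500)                    # e.g. area planted with GM
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.1, 500)      # externality measure

b1, b0 = np.polyfit(x, y, 1)                       # slope, intercept
marginal_effect = b1
elasticity = b1 * x.mean() / y.mean()
print(f"marginal effect = {marginal_effect:.3f}, elasticity = {elasticity:.3f}")
```

Because the elasticity depends on the means of both variables, two 'policy variables' with similar marginal effects can have very different elasticities, which is why the paper's conclusions change with the chosen measure.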