986 results for semi empirical calculations
Abstract:
Development of empirical potentials for amorphous silica
Amorphous silica (SiO2) is of great importance in geoscience and mineralogy, as well as a raw material in the glass industry. Its structure is characterized as a disordered continuous network of SiO4 tetrahedra. Many efforts have been undertaken to understand the microscopic properties of silica by classical molecular dynamics (MD) simulations. In this method the interatomic interactions are modeled by an effective potential that does not take the electronic degrees of freedom explicitly into account. In this work, we propose a new methodology to parameterize such a potential for silica using ab initio simulations, namely the Car-Parrinello (CP) method [Phys. Rev. Lett. 55, 2471 (1985)]. The proposed new potential is compared to the BKS potential [Phys. Rev. Lett. 64, 1955 (1990)], which is considered the benchmark potential for silica. First, CP simulations were performed on a liquid silica sample at 3600 K. The structural features so obtained were compared to those predicted by the classical BKS potential. Regarding bond lengths, the BKS potential tends to underestimate the Si-O bond, whereas the Si-Si bond is overestimated. The inter-tetrahedral angular distribution functions are also not well described by the BKS potential: the corresponding mean value of the Si-O-Si angle is found to be ≈ 147°, while CP yields a Si-O-Si angle distribution centered around 135°. Our aim is to fit a classical Born-Mayer/Coulomb pair potential using ab initio calculations. To this end, we use the force-matching method proposed by Ercolessi and Adams [Europhys. Lett. 26, 583 (1994)]. The CP configurations and their corresponding interatomic forces were used in a least-squares fitting procedure. Classical MD simulations with the resulting potential led to a structure that is very different from the CP one. Therefore, a different fitting criterion based on the CP partial pair correlation functions was applied. Using this approach, the resulting potential shows better agreement with the CP data than the BKS potential does: pair correlation functions, angular distribution functions, structure factors, density of states, and the pressure/density relation were all improved. At low temperature, the diffusion coefficients appear to be three times higher than those predicted by the BKS model, while showing a similar temperature dependence. Calculations were also carried out on crystalline samples in order to check the transferability of the potential. The equilibrium geometry as well as the elastic constants of α-quartz at 0 K are well described by our new potential, although the crystalline phases were not considered in the parameterization. We have developed a new potential for silica that represents an improvement over the class of pair potentials proposed so far. Furthermore, the fitting methodology developed in this work can be applied to other network-forming systems such as germania, as well as mixtures of SiO2 with other oxides (e.g. Al2O3, K2O, Na2O).
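The fitting strategy described here, matching a Born-Mayer/Coulomb form to ab initio forces, can be sketched compactly. The following is a minimal illustration under simplifying assumptions (a single pair species, per-pair force residuals rather than per-atom force sums, synthetic reference data); it is not the authors' code, and all parameter values are invented:

```python
# Illustrative sketch: force-matching a Born-Mayer/Coulomb pair potential
# to reference (e.g. Car-Parrinello) forces. Parameters and data layout
# are assumptions, not the authors' actual implementation.
import numpy as np
from scipy.optimize import least_squares

# V(r) = q2/r + A*exp(-B*r) - C/r^6  (BKS-like Born-Mayer/Coulomb form)
def pair_force(r, q2, A, B, C):
    """Magnitude of -dV/dr for one pair at separation r."""
    return q2 / r**2 + A * B * np.exp(-B * r) - 6.0 * C / r**7

def residuals(params, pair_r, pair_dirs, f_ref):
    q2, A, B, C = params
    f_mag = pair_force(pair_r, q2, A, B, C)      # (n_pairs,)
    f_model = f_mag[:, None] * pair_dirs         # project on pair unit vectors
    return (f_model - f_ref).ravel()             # flatten for least_squares

# Synthetic "reference" data standing in for CP configurations/forces.
rng = np.random.default_rng(0)
pair_r = rng.uniform(1.5, 4.0, 200)
pair_dirs = rng.normal(size=(200, 3))
pair_dirs /= np.linalg.norm(pair_dirs, axis=1, keepdims=True)
f_ref = pair_force(pair_r, 1.2, 18000.0, 4.8, 130.0)[:, None] * pair_dirs

fit = least_squares(residuals, x0=[1.0, 10000.0, 4.0, 100.0],
                    args=(pair_r, pair_dirs, f_ref))
print(fit.x)  # recovered (q2, A, B, C)
```

A real force-matching fit sums pair contributions into per-atom forces over many configurations; the least-squares structure, however, is exactly the one shown.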
Abstract:
Coupled-cluster theory provides one of the most successful concepts in electronic-structure theory. This work covers the parallelization of coupled-cluster energies, gradients, and second derivatives and its application to selected large-scale chemical problems, besides more practical aspects such as the publication and support of the quantum-chemistry package ACES II MAB and the design and development of a computational environment optimized for coupled-cluster calculations. The main objective of this thesis was to extend the range of applicability of coupled-cluster models to larger molecular systems and their properties, and thereby to bring large-scale coupled-cluster calculations into the day-to-day routine of computational chemistry. A straightforward strategy for the parallelization of CCSD and CCSD(T) energies, gradients, and second derivatives has been outlined and implemented for closed-shell and open-shell references. Starting from the highly efficient serial implementation of the ACES II MAB computer code, an adaptation for affordable workstation clusters has been obtained by parallelizing the most time-consuming steps of the algorithms. Benchmark calculations for systems with up to 1300 basis functions and the presented applications show that the resulting algorithm for energies, gradients, and second derivatives at the CCSD and CCSD(T) levels of theory exhibits good scaling with the number of processors and substantially extends the range of applicability. Within the framework of the 'High accuracy Extrapolated Ab initio Thermochemistry' (HEAT) protocols, the effects of increased basis-set size and higher excitations in the coupled-cluster expansion were investigated. The HEAT scheme was generalized to molecules containing second-row atoms in the case of vinyl chloride, which allowed the different experimentally reported values to be discriminated. In the case of the benzene molecule it was shown that chemical accuracy can be achieved even for molecules of this size. Near-quantitative agreement with experiment (about 2 ppm deviation) for the prediction of fluorine-19 nuclear magnetic shielding constants can be achieved by employing the CCSD(T) model together with large basis sets at accurate equilibrium geometries, provided vibrational averaging and temperature corrections via second-order vibrational perturbation theory are considered. Applying a very similar level of theory to the calculation of the carbon-13 NMR chemical shifts of benzene resulted in quantitative agreement with experimental gas-phase data. An NMR chemical-shift study of the bridgehead 1-adamantyl cation at the CCSD(T) level resolved earlier discrepancies of lower-level theoretical treatments. The equilibrium structure of diacetylene has been determined from the combination of experimental rotational constants of thirteen isotopic species with zero-point vibrational corrections calculated at various quantum-chemical levels; these empirical equilibrium structures agree to within 0.1 pm irrespective of the theoretical level employed. High-level quantum-chemical calculations of the hyperfine structure parameters of the cyanopolyynes were found to be in excellent agreement with experiment. Finally, the theoretically most accurate determination to date of the molecular equilibrium structure of ferrocene is presented.
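As background (standard theory, not specific to this thesis), the coupled-cluster models named here rest on the exponential ansatz

\[ |\Psi_{\mathrm{CC}}\rangle = e^{\hat{T}}\,|\Phi_0\rangle, \qquad \hat{T} = \hat{T}_1 + \hat{T}_2 + \cdots, \]

where \(\Phi_0\) is the Hartree-Fock reference determinant; CCSD truncates \(\hat{T}\) after double excitations, and CCSD(T) adds a perturbative estimate of connected triple excitations.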
Abstract:
In this work, a novel experimental setup, the γ3 experiment, was designed for the measurement of photon-induced nuclear dipole excitations in stable isotopes and installed at the High Intensity γ-Ray Source (HIγS) at Duke University. The high energy resolution and high detection efficiency of the detector setup, which consists of a combination of LaBr scintillators and high-purity germanium detectors, allow for the first time the efficient measurement of γ-γ coincidences in combination with the method of nuclear resonance fluorescence. This method opens access to the decay behaviour of the excited dipole states as an additional observable, which enables a more detailed understanding of the underlying structure of these excitations. The detector setup has already been used successfully in two experimental campaigns in 2012 and 2013 for the investigation of 13 different isotopes. The focus of this work was the analysis of the pygmy dipole resonance (PDR) in the nucleus 140Ce in the energy range from 5.2 MeV to 8.3 MeV, based on the data measured with the γ3 setup. In particular, the decay behaviour of the states participating in the PDR was investigated. The experimental setup, the details of the analysis, and the results are presented in this work. Furthermore, a comparison of the results with theoretical calculations in the quasi-particle phonon model (QPM) allows an interpretation of the observed decay behaviour.
Abstract:
Stemmatology, or the reconstruction of the transmission history of texts, is a field that stands to gain particularly from digital methods. Many scholars already take stemmatic approaches that rely heavily on computational analysis of the collated text (e.g. Robinson and O'Hara 1996; Salemans 2000; Heikkilä 2005; Windram et al. 2008, among many others). Although there is great value in computationally assisted stemmatology, which provides a reproducible result and gives access to the relevant methodological apparatus of related fields such as evolutionary biology, computational stemmatics is not without its critics. The current state of the art effectively forces scholars to choose between a preconceived judgment of the significance of textual differences (the Lachmannian or neo-Lachmannian approach, and the weighted phylogenetic approach) and no judgment at all (the unweighted phylogenetic approach). Some basis for judging the significance of variation is sorely needed for medieval text criticism in particular. By this we mean that there is a need for an empirical statistical profile of the text-genealogical significance of the different sorts of variation in different sorts of medieval texts. The rules that apply to copies of Greek and Latin classics may not apply to copies of medieval Dutch story collections; the practices of copying authoritative texts such as the Bible will most likely have differed from the practices of copying the Lives of local saints and other commonly adapted texts. It is nevertheless imperative that we have a consistent, flexible, and analytically tractable model for capturing these phenomena of transmission. In this article, we present a computational model that captures most of the phenomena of text variation, and a method for analysing one or more stemma hypotheses against the variation model. We apply this method to three 'artificial traditions' (i.e. texts copied under laboratory conditions by scholars to study the properties of text variation) and four genuine medieval traditions whose transmission history is known or deduced to varying degrees. Although our findings are necessarily limited by the small number of texts at our disposal, we demonstrate here some of the wide variety of calculations that can be made using our model. Some of our results call sharply into question the utility of excluding 'trivial' variation such as orthographic and spelling changes from stemmatic analysis.
Abstract:
Context. Planet formation models have been developed during the past years to try to reproduce what has been observed of both the solar system and extrasolar planets. Some of these models have partially succeeded, but they focus on massive planets and, for the sake of simplicity, exclude planets belonging to planetary systems. However, more and more planets are now found in planetary systems. This tendency, which results from radial velocity, transit, and direct imaging surveys, seems to be even more pronounced for low-mass planets. These new observations require improving planet formation models by including new physics and by considering the formation of systems. Aims: In a recent series of papers, we have presented some improvements in the physics of our models, focusing in particular on the internal structure of forming planets and on the computation of the excitation state of planetesimals and their resulting accretion rate. In this paper, we focus on the concurrent formation of more than one planet in the same protoplanetary disc and show the effect of this multiplicity in terms of the architecture and composition of the resulting systems. Methods: We used an N-body calculation including collision detection to compute the orbital evolution of a planetary system. Moreover, we describe the effect of competition for the accretion of gas and solids, as well as the effect of gravitational interactions between planets. Results: We show that the masses and semi-major axes of planets are modified by both competition and gravitational interactions. We also present the effect of the assumed number of forming planets in the same system (a free parameter of the model), as well as the effect of inclination and eccentricity damping. We find that the fraction of ejected planets increases from nearly 0% to 8% as the number of embryos seeded in the system increases from 2 to 20. Moreover, our calculations show that, when considering planets more massive than ~5 M⊕, simulations with 10 or 20 planetary embryos statistically give the same results in terms of mass function and period distribution.
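As a schematic illustration of the Methods statement (an N-body calculation with collision detection), the core of such an integrator could look like the toy sketch below. The abstract does not specify the model's integrator, units, or collision and ejection criteria, so everything here (leapfrog scheme, merge-on-contact rule, G = 1 units) is an assumption:

```python
# Toy N-body integrator with naive collision detection (merge on contact).
# A real planet-formation code would use a high-order integrator and
# physically motivated collision/ejection criteria.
import numpy as np

G = 1.0

def accelerations(pos, mass):
    d = pos[None, :, :] - pos[:, None, :]          # pairwise displacements
    r2 = (d**2).sum(-1) + np.eye(len(mass))        # avoid self-division
    inv_r3 = r2**-1.5 * (1.0 - np.eye(len(mass)))  # zero the self term
    return G * (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

def step(pos, vel, mass, radius, dt):
    # leapfrog (kick-drift-kick)
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    # collision detection: merge any pair closer than the sum of radii
    i, j = np.triu_indices(len(mass), 1)
    d = np.linalg.norm(pos[i] - pos[j], axis=1)
    hit = d < radius[i] + radius[j]
    for a, b in zip(i[hit], j[hit]):
        m = mass[a] + mass[b]
        if m == 0.0:                               # already merged this step
            continue
        pos[a] = (mass[a] * pos[a] + mass[b] * pos[b]) / m   # conserve momentum
        vel[a] = (mass[a] * vel[a] + mass[b] * vel[b]) / m
        mass[a], mass[b] = m, 0.0                  # flag b as merged
    keep = mass > 0
    return pos[keep], vel[keep], mass[keep], radius[keep]

# Minimal usage: 20 embryos, as in the largest runs discussed above.
rng = np.random.default_rng(0)
n = 20
pos = rng.uniform(-1, 1, (n, 3))
vel = rng.normal(0, 0.1, (n, 3))
mass = np.full(n, 1e-3)
radius = np.full(n, 0.02)
for _ in range(1000):
    pos, vel, mass, radius = step(pos, vel, mass, radius, dt=1e-3)
print(len(mass), "bodies remain")
```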
Abstract:
Identifying the drivers of species diversity is a major challenge in understanding and predicting the dynamics of species-rich semi-natural grasslands. In temperate grasslands in particular, changes in land use and their consequences, i.e. increasing fragmentation, the ongoing loss of habitat, and the declining importance of regional processes such as seed dispersal by livestock, are considered key drivers of the diversity loss witnessed within the last decades. It is a largely unresolved question to what degree current temperate grassland communities already reflect a decline of regional processes such as longer-distance seed dispersal. Answering this question is challenging, since it requires both a mechanistic approach to community dynamics and a data basis sufficient to identify general patterns. Here, we present results of a local individual- and trait-based community model that was initialized with plant functional types (PFTs) derived from an extensive empirical data set of species-rich grasslands within the 'Biodiversity Exploratories' in Germany. The model processes include above- and belowground competition, dynamic resource allocation to shoots and roots, clonal growth, grazing, and local seed dispersal. To test for the impact of regional processes, we also simulated seed input from a regional species pool. Model output, with and without regional seed input, was compared with empirical community response patterns along a grazing gradient. Simulated response patterns of changes in PFT richness, Shannon diversity, and biomass production matched observed grazing response patterns surprisingly well when only local processes were considered. Even low levels of additional regional seed input led to stronger deviations from the empirical community patterns. While these findings cannot rule out that regional processes other than those considered in the modelling study play a role in shaping the local grassland communities, our comparison indicates that European grasslands are largely isolated, i.e. local mechanisms explain the observed community patterns to a large extent.
Abstract:
The gas-phase rotational motion of hexafluorobenzene has been measured in real time using femtosecond (fs) time-resolved rotational Raman coherence spectroscopy (RR-RCS) at T = 100 and 295 K. This four-wave mixing method makes it possible to probe the rotation of non-polar gas-phase molecules with fs time resolution over times up to ∼5 ns. The ground-state rotational constant of hexafluorobenzene is determined as B0 = 1029.740(28) MHz (2σ uncertainty) from RR-RCS transients measured in a pulsed seeded supersonic jet, where essentially only the v = 0 state is populated. Using this B0 value, RR-RCS measurements in a room-temperature gas cell give the rotational constants Bv of the five lowest-lying thermally populated vibrationally excited states ν7/8, ν9, ν11/12, ν13, and ν14/15. Their Bv constants differ from B0 by between −1.02 MHz and +2.23 MHz. Combining this B0 with the results of all-electron coupled-cluster CCSD(T) calculations by Demaison et al. [Mol. Phys. 111, 1539 (2013)] and of our own allows us to determine the semi-experimental equilibrium bond lengths re(C-C) = 1.3866(3) Å and re(C-F) = 1.3244(4) Å. These agree with the CCSD(T)/wCVQZ re bond lengths calculated by Demaison et al. to within ±0.0005 Å. We also calculate the semi-experimental thermally averaged bond lengths rg(C-C) = 1.3907(3) Å and rg(C-F) = 1.3250(4) Å. These are at least ten times more accurate than the two sets of experimental gas-phase electron diffraction rg bond lengths measured in the 1960s.
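The "semi-experimental" equilibrium constants behind these bond lengths follow the standard scheme of correcting the measured ground-state constant with computed vibration-rotation interaction constants; schematically (the general textbook relation, assumed here rather than quoted from the paper),

\[ B_e^{\mathrm{SE}} = B_0 + \tfrac{1}{2}\sum_i \alpha_i^{B} d_i , \]

where the \(\alpha_i^{B}\) are ab initio vibration-rotation interaction constants and \(d_i\) the degeneracies of the normal modes; the equilibrium structure is then fitted to the \(B_e^{\mathrm{SE}}\) values of the isotopologues.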
Abstract:
The growth rate of atmospheric carbon dioxide (CO2) concentrations since industrialization is characterized by large interannual variability, mostly resulting from variability in CO2 uptake by terrestrial ecosystems (typically termed the carbon sink). However, the contributions of regional ecosystems to that variability are not well known. Using an ensemble of ecosystem and land-surface models and an empirical observation-based product of global gross primary production, we show that the mean sink, trend, and interannual variability in CO2 uptake by terrestrial ecosystems are dominated by distinct biogeographic regions. Whereas the mean sink is dominated by highly productive lands (mainly tropical forests), the trend and interannual variability of the sink are dominated by semi-arid ecosystems whose carbon balance is strongly associated with circulation-driven variations in both precipitation and temperature.
Abstract:
Stubacher Sonnblickkees (SSK) is located in the Hohe Tauern Range (Eastern Alps) in the south of Salzburg Province (Austria), in the Oberpinzgau region in the upper Stubach Valley. The glacier is situated at the main Alpine crest and faces east, starting at elevations close to 3050 m; in the 1980s it terminated at 2500 m a.s.l. It had an area of 1.7 km² at that time, compared with 1 km² in 2013. The glacier can be classified as a slope glacier, i.e. the relief is covered by a relatively thin ice sheet and there is no regular glacier tongue. The rough subglacial topography makes for a complex surface topography, with various concave and convex patterns. The main reason for selecting this glacier for mass balance observations (as early as 1963) was to verify how the mass balance methods and conclusions derived during the more or less pioneering phase of glaciological investigations in the 1950s and 1960s could be applied to a glacier of such complex shape. The decision was influenced by the fact that close to the SSK there was the Rudolfshütte, a hostel of the Austrian Alpine Club (OeAV), newly constructed in the 1950s to replace the old hut dating from 1874. The new Alpenhotel Rudolfshütte, which was run by the Slupetzky family from 1958 to 1970, was the base station for the long-term observation; the cable car to Rudolfshütte, operated by the Austrian Federal Railways (ÖBB), was a logistic advantage. Another factor in choosing SSK as a glaciological research site was the availability of discharge records of the catchment area from the Austrian Federal Railways, who had turned the nearby lake Weißsee ('White Lake'), a former natural lake, into a reservoir for their hydroelectric power plants. In terms of regional climatic differences between the Central Alps in Tyrol and the Hohe Tauern, the latter experience significantly higher precipitation, so one could expect new insights into the different responses of the two glaciers SSK and Hintereisferner (Ötztal Alps), where a mass balance series goes back to 1952. In 1966 another mass balance series, with an additional focus on runoff recordings, was initiated at Vernagtferner, near Hintereisferner, by the Commission of the Bavarian Academy of Sciences in Munich. The necessary link to climate and climate change was provided by a weather station newly founded by Heinz and Werner Slupetzky at the Rudolfshütte in 1961, which ran until 1967. Along with the extension and enlargement of the hostel into the so-called Alpine Center Rudolfshütte of the OeAV, a climate observatory (suggested by Heinz Slupetzky) has been operating without interruption since 1980 under the responsibility of ZAMG and the Hydrological Service of Salzburg, providing long-term meteorological observations. The weather station is supported by the Berghotel Rudolfshütte (in 2004 the OeAV sold the hotel to a private owner) with accommodation and facilities. Direct yearly mass balance measurements were started in 1963, at first for 3 years as part of a thesis project. In 1965 the project was incorporated into the Austrian glacier measurement sites within the International Hydrological Decade (IHD) 1965-1974 and was afterwards extended via the International Hydrological Program (IHP) 1975-1981. During both periods the main financial support came from the Hydrological Survey of Austria. After 1981 funds were provided by the Hydrological Service of the Federal Government of Salzburg.
The research was conducted from 1965 onwards by Heinz Slupetzky from the (former) Department of Geography of the University of Salzburg. These activities received better recognition when the High Alpine Research Station of the University of Salzburg was founded in 1982, which brought in additional funding from the University. With the recent changes concerning Rudolfshütte, however, it became unfeasible to keep the research station going. Fortunately, at least the weather station at Rudolfshütte is still operating. In the pioneer years of the mass balance recordings at SSK, the main goal was to understand the influence of the complicated topography on the ablation and accumulation processes. With frequent strong southerly winds (foehn) on the one hand, and precipitation coming in with storms from the north to northwest on the other, snow drift is an important factor on the undulating glacier surface. This results in less snow cover in convex zones and in maximum accumulation in concave or flat areas. As a consequence of the accentuated topography, certain characteristic ablation and accumulation patterns can be observed during the summer season every year, and these have been observed regularly for many decades. The process of snow depletion (Ausaperung) runs through a series of stages (described by the AAR) every year; the sequence of stages until the end of the ablation season depends on the weather conditions in a given balance year. One needs a strongly negative mass balance year at the beginning of glacier measurements to find out the regularities; 1965, the second year of observation, resulted in a very positive mass balance with very little ablation but heavy accumulation. To date it is the year with the maximum positive balance in the entire mass balance series since 1959, probably since 1950. The highly complex ablation patterns required a high number of ablation stakes at the beginning of the research, and it took several years to develop a clearer idea of the density of measurement points necessary to ensure high accuracy. A great number of snow pits and probing profiles (and additional measurements at crevasses) were necessary to map the accumulation area and patterns. Mapping the snow depletion, especially at the end of the ablation season, when the depletion boundary coincides with the equilibrium line, provides the main basic data for drawing contour lines of mass balance and calculating the total mass balance (on a regular-shaped valley glacier there might be an equilibrium line following a contour line of elevation, separating the accumulation area and the ablation area, but not at SSK). - An example: in 1969/70, 54 ablation stakes and 22 snow pits were used on the 1.77 km² glacier surface. In the course of the study, the consistency of the accumulation and ablation patterns made it possible to reduce the number of measurement points. - At the SSK the stratigraphic system, i.e. the natural balance year, is used instead of the usual hydrological year. From 1964 to 1981, the yearly mass balance was calculated from direct measurements. Based on these 17 years of records, a regression between the specific net mass balance and the ratio of ablation area to total area (AAR) has been used since then; the basic requirement is mapping the maximum snow depletion at the end of each balance year. Heinz Slupetzky's detailed local knowledge and long-term experience ensured the homogeneity of the series with respect to individual influences on the mass balance calculations.
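The AAR regression described here is a one-predictor least-squares fit; a schematic version with invented numbers (the real fit is calibrated on the 1964-1981 direct measurements) could look like this:

```python
# Schematic AAR-based estimation of specific net mass balance b_n.
# All numbers are invented for illustration. AAR here follows the
# abstract's definition: ratio of ablation area to total area (%).
import numpy as np

aar = np.array([30, 75, 50, 20, 55, 65, 10, 85, 45, 60,
                35, 70, 15, 80, 52, 48, 62], dtype=float)
b_n = np.array([600, -900, 100, 950, -150, -500, 1400, -1300, 250, -300,
                500, -700, 1100, -1100, 0, 150, -400], dtype=float)  # mm w.e.

slope, intercept = np.polyfit(aar, b_n, 1)   # least-squares line
print(f"b_n ≈ {slope:.1f} * AAR {intercept:+.1f}")

# Once calibrated, a year's balance can be estimated from the mapped
# maximum snow depletion (i.e. the AAR) alone, e.g. read from photographs:
aar_1959 = 40.0   # hypothetical value
print("estimated b_n(1959) =", slope * aar_1959 + intercept)
```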
Verification took place as often as possible by means of independent geodetic methods, i.e. monoplotting, aerial and terrestrial photogrammetry, and more recently the application of PHOTOMODELLER and laser scans. The semi-direct mass balance determinations used at SSK were tentatively compared with data from periods of mass/volume change, yielding promising first results on the reliability of the method. In recent years, re-analyses of mass balance series have been conducted by the World Glacier Monitoring Service and will be done for SSK too. - The methods developed at SSK also served another objective, much discussed in the 1960s within the community, namely to achieve time- and labour-saving methods to ensure the continuation of long-term mass balance series. The regression relations were used to extrapolate the mass balance series back to 1959; the maximum depletion could be reconstructed from photographs for those years. R. Günther (1982) extended the mass balance series of SSK back to 1950 by analysing the correlation between meteorological data and the mass balance; he found a strong statistical relationship between measured and reconstructed mass balance figures for SSK. In spite of the complex glacier topography, interesting empirical insights were gained from the mass balance data sets, giving a better understanding of the characteristics of the glacier type, mass balance, and mass exchange. It turned out that there are distinct relations of the specific net balance, net accumulation (defined as Bc/S), and net ablation (Ba/S) to the AAR, resulting in characteristic so-called 'turnover curves'. The diagram for SSK represents the type of a glacier without a glacier tongue. Between 1964 and 1966, a basic method was developed, starting from the idea that instead of measuring for years to cover the range between extremely positive and extremely negative yearly balances, one could record the AAR/snow depletion (Ausaperung) during one or two summers. The new method was applied on Cathedral Massif Glacier in British Columbia, Canada, a cirque glacier with the same area as Stubacher Sonnblickkees, during the summers of 1977 and 1978. It returned exactly the expected relations, e.g. mass turnover curves, as found at SSK. The SSK was mapped several times at scales of 1:5000 to 1:10000. Length variations have been measured since 1960 within the OeAV glacier length measurement programme. Between 1965 and 1981, there was a mass gain of 10 million cubic metres; with a time lag of 10 years, this resulted in an advance until the mid-1980s. Since 1982 there has been a distinct mass loss, amounting to 35 million cubic metres by 2013. In recent years, the glacier has disintegrated faster, forced by the formation of a periglacial lake at the glacier terminus and also by outcrops of rock (typical for the slope glacier type), which have accelerated the meltdown. The formation of this lake is well documented. The glacier has retreated by some 600 m since 1981. - Since August 2002, a runoff gauge installed by the Hydrographical Service of Salzburg has recorded the discharge of the main part of SSK at the outlet of the new Unterer Eisboden See. The annual reports, submitted from 1982 on as a contractual obligation to the Hydrological Service of Salzburg, document the ongoing processes on the one hand, and describe the mass balance of SSK and outline the climatological causes, mainly based on the met data of the Rudolfshütte observatory, on the other.
There is an additional focus on estimating the annual water balance in the catchment area of the lake. Certain preconditions for the water balance equation are met in the area: runoff is recorded by the ÖBB power stations, the mass balance of the now approximately 20% glaciated area (mainly the Sonnblickkees) is measured, and the change in the snow and firn patches/their water content is estimated as well as possible. (Nowadays laser scanning and ground radar are available to measure the snow pack.) There is a network of three precipitation gauges plus the recordings at Rudolfshütte. Evaporation is of minor importance. The long-term annual mean runoff depth in the catchment area is around 3,000 mm/year. The precipitation gauges show measurement deficits between 10% and 35%, on average probably 25% to 30%. This means that the real precipitation in the Weißsee catchment area (at elevations between 2,250 and 3,000 m) is on the order of 3,200 to 3,400 mm a year. The mass balance record of SSK was the first established in the Hohe Tauern region (part of the Hohe Tauern National Park in Salzburg since its foundation in 1983) and is one of the longest measurement series worldwide. Great efforts are under way to continue the series, to safeguard it against interruption, and to guarantee long-term monitoring of the mass balance and volume change of SSK (until the glacier is completely gone, which seems realistic in the near future as a result of ongoing global warming). Heinz Slupetzky, March 2014
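Written out, the catchment water balance being estimated here is the standard bookkeeping identity (with the abstract's numbers inserted):

\[ P = R + E + \Delta S_{\text{glacier}} + \Delta S_{\text{snow/firn}} , \]

with a runoff depth \(R \approx 3{,}000\) mm/yr, minor evaporation \(E\), and storage terms \(\Delta S\) taken from the measured glacier mass balance and the estimated snow/firn changes; correcting the gauged precipitation for an average undercatch of 25-30% then yields the quoted 3,200 to 3,400 mm/yr.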
Abstract:
Object Kinetic Monte Carlo models allow the evolution of the damage created by irradiation to be studied on time scales comparable to those achieved experimentally. The essential Object Kinetic Monte Carlo parameters can therefore be validated through comparison with experiments. However, this validation is not trivial, since a large number of parameters is necessary, including the migration energies of point defects and their clusters, the binding energies of point defects in clusters, and the interaction radii. This is particularly cumbersome when describing an alloy such as the Fe–Cr system, which is of interest for fusion energy applications. In this work we describe an Object Kinetic Monte Carlo model for Fe–Cr alloys in the dilute limit. The parameters used in the model come either from density functional theory calculations or from empirical interatomic potentials. This model is used to reproduce the isochronal resistivity recovery experiments on electron-irradiated dilute Fe–Cr alloys performed by Abe and Kuramoto. The comparison between the calculated results and the experiments reveals that an important parameter is the capture radius between substitutional Cr and self-interstitial Fe atoms. A parametric study of the effect of the capture radius on the simulated recovery curves is presented.
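The role of the migration energies discussed here can be illustrated with the core loop of a residence-time (BKL) kinetic Monte Carlo scheme. The following is a minimal sketch with placeholder parameter values; in an actual OKMC model for Fe–Cr these would come from DFT or empirical potentials, as stated above:

```python
# Minimal residence-time (BKL) kinetic Monte Carlo loop.
# Migration energies, attempt frequency, and temperature are placeholders.
import math, random

KB = 8.617e-5          # Boltzmann constant, eV/K
NU0 = 1.0e13           # attempt frequency, 1/s
T = 300.0              # temperature, K

def rate(e_mig):
    """Arrhenius jump rate for a defect with migration energy e_mig (eV)."""
    return NU0 * math.exp(-e_mig / (KB * T))

# one mobile object per entry: (label, migration energy in eV)
objects = [("SIA", 0.30), ("vacancy", 0.67), ("SIA cluster", 0.42)]

t = 0.0
random.seed(1)
for _ in range(5):
    rates = [rate(e) for _, e in objects]
    total = sum(rates)
    # pick an event with probability proportional to its rate
    x = random.random() * total
    for (label, _), r in zip(objects, rates):
        x -= r
        if x <= 0:
            break
    # advance the clock by an exponentially distributed residence time
    t += -math.log(random.random()) / total
    print(f"t = {t:.3e} s: {label} jumps")
    # a full OKMC code would now move the chosen object and check whether
    # any pair (e.g. substitutional Cr + self-interstitial Fe) lies within
    # its capture radius and, if so, react/recombine the two objects.
```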
Abstract:
With the current industrial and technological development of society, the presence of flammable and/or toxic substances has increased significantly in a large number of activities. The possible dispersion of hazardous gases at storage facilities or during transport operations represents a major threat to health and the environment. Therefore, the characterization of a flammable and/or toxic cloud is a critical point in quantitative risk analysis. The main objective of this thesis was to provide new perspectives to assist risk analysts involved in the analysis of dispersion in complex scenarios, for example, scenarios with barriers or semi-confined ones. The literature review showed that, traditionally, empirical and integral models are used in the dispersion analysis of toxic/flammable substances, providing fast and generally reliable estimates when describing simple scenarios (for example, dispersion in unobstructed environments over flat terrain). Recently, however, the use of CFD tools to simulate dispersion has increased significantly. These tools allow more complex scenarios to be modelled, such as those occurring in semi-confined spaces or in the presence of physical barriers. Among all available CFD tools, the literature indicates that the FLACS® software performs well in simulating such scenarios. However, like other similar tools, it still needs to be fully validated. After a literature review of field tests carried out over the years, some tests were selected for a preliminary performance assessment of the CFD tool used in this study. Possible sources of uncertainty were investigated in terms of reproducibility, grid dependence, and sensitivity to input variables and simulation parameters. The main results of this phase were cast as practical principles to be used by risk analysts when performing dispersion analysis in the presence of barriers with CFD tools. Although the literature review revealed some experimental data available in the literature, none of the sources found includes detailed studies on how to perform accurate CFD simulations or provides precise performance indicators. Therefore, new field tests were carried out in order to provide new data for more comprehensive validation studies. Field tests of propane cloud dispersion (with and without barriers obstructing the flow) were performed at the training field of the company Can Padró Segurança e Proteção (in Barcelona). Four tests were carried out, consisting of propane releases at flow rates of up to 0.5 kg/s, lasting 40 seconds each, over a discharge area of 700 m². The field tests contributed to the re-evaluation of the critical points mapped during the first phases of this study and provided experimental data for use by the international community in dispersion studies and model validation. Simulations performed with the CFD tool were compared with the experimental data obtained in the field tests. In general terms, the simulator performed well with respect to cloud concentration levels. It successfully reproduced the complex geometry and its effects on the dispersion of the cloud, clearly showing the effect of the barrier on the distribution of concentrations.
However, the simulations were not able to represent the full dynamics of the dispersion with respect to the effects of wind variation, since the simulated clouds diluted faster than the experimental ones.
Abstract:
Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required. WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations but cannot capture exceptions correctly; neither of them provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern associations between and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical-validity requirement, and overgeneration, minimised by rule reformulation and by restricting monosyllabic output. The rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Where multiple rules apply to an input suffix, their precedence must be established. The resistance of prefixations to segmentation has been addressed by identifying linking-vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with the morphological rules into a hybrid model, fed only with empirical data collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than in the wordnet component of the model, because the lexicon provides the optimal clustering of word senses. Both the links and the analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
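As a toy illustration of rules specified as character substitutions rather than naive segmentation (the rule set, lexicon, and validity check below are invented and far simpler than the thesis's model):

```python
# Toy morphological rules with character substitutions instead of naive
# segmentation. Rule set and lexicon are illustrative only.
LEXICON = {"happy", "happiness", "deny", "denial", "decide", "decision"}

# (suffix of derived form, replacement yielding the base form)
RULES = [
    ("iness", "y"),    # happiness -> happy  (y -> i substitution)
    ("ial",   "y"),    # denial    -> deny
    ("sion",  "de"),   # decision  -> decide
]

def base_of(word):
    """Return (base, rule) for the first rule whose output is lexically valid."""
    for suffix, repl in RULES:
        if word.endswith(suffix):
            candidate = word[: -len(suffix)] + repl
            if candidate in LEXICON:   # the lexical-validity requirement
                return candidate, (suffix, repl)
    return None, None

for w in ("happiness", "denial", "decision"):
    print(w, "->", base_of(w)[0])
```

Naive segmentation would strip "ness" from "happiness" and yield the non-word "happi"; the substitution rule recovers "happy", and the lexical-validity test curbs overgeneration.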
Abstract:
This thesis examines the effect of rights issue announcements on the stock prices of companies listed on the Kuala Lumpur Stock Exchange (KLSE) between 1987 and 1996. The emphasis is on establishing whether the KLSE is semi-strongly efficient with respect to the announcement of rights issues and on checking whether the implications of corporate finance theories for the effect of such an event can be supported in the context of an emerging market. Once the effect is established, potential determinants of abnormal returns identified by previous empirical work and corporate finance theory are analysed. By examining 70 companies making clean rights issue announcements, this thesis will hopefully shed light on some important issues in long-term corporate financing. Event study analysis is used to check the efficiency of the Malaysian stock market, while cross-sectional regression analysis is carried out to identify possible explanatory variables for the effect of the rights issue announcements. To ensure the results presented are not contaminated, econometric and statistical issues raised in both analyses have been taken into account. Given the small amount of empirical research conducted in this part of the world, the results of this study will hopefully be of use to investors, security analysts, corporate financial managers, regulators, and policy makers, as well as those interested in capital-market-based research on an emerging market. It is found that the Malaysian stock market is not semi-strongly efficient, since there exists a persistent non-zero abnormal return. This finding is not consistent with the hypothesis that security returns adjust rapidly to reflect new information. It is possible that the result is influenced by the sample, which consists mainly of below-average-size companies that tend to be thinly traded; nevertheless, these issues have been addressed. Another important finding to emerge from the study is some evidence suggesting that insider trading activity existed in this market. In addition to these findings, when the effect of the rights issue announcements is compared with the implications of corporate finance theories for predicting the sign of abnormal returns, the signalling model, the asymmetric information model, the perfect substitution hypothesis, and Scholes' information hypothesis cannot be supported.
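The event-study machinery referred to here is standard; in the common market-model form (the general technique, not necessarily the exact specification used in the thesis), abnormal and cumulative abnormal returns are

\[ AR_{it} = R_{it} - \bigl(\hat{\alpha}_i + \hat{\beta}_i R_{mt}\bigr), \qquad CAR_i(t_1,t_2) = \sum_{t=t_1}^{t_2} AR_{it}, \]

where \(R_{mt}\) is the market index return and the coefficients are estimated over a pre-event window; semi-strong efficiency implies average post-announcement CARs statistically indistinguishable from zero, which is what the persistent non-zero abnormal return reported above contradicts.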
Abstract:
In many areas of northern India, salinity renders groundwater unsuitable for drinking and even for irrigation. Though membrane treatment can be used to remove the salt, there are some drawbacks to this approach, e.g. (1) depletion of the groundwater due to over-abstraction, (2) saline contamination of surface water and soil caused by concentrate disposal, and (3) high electricity usage. To address these issues, a system is proposed in which a photovoltaic-powered reverse osmosis (RO) unit is used to irrigate a greenhouse (GH) in a stand-alone arrangement. The concentrate from the RO is supplied to an evaporative cooling system, thus reducing its volume so that it can finally be evaporated to solid in a pond for safe disposal. Based on typical meteorological data for Delhi, calculations based on mass and energy balances are presented to assess the sizing and cost of the system. It is shown that solar radiation, freshwater output, and evapotranspiration demand are readily matched due to the approximately linear relation among these variables. The demand for concentrate varies independently, however, thus favouring the use of a variable recovery arrangement. Though enough water may be harvested from the GH roof to provide year-round irrigation, this would require considerable storage. Some practical options for storage tanks are discussed. An alternative use of rainwater is in misting to reduce peak temperatures in the summer. An example optimised design provides internal temperatures below 30°C (monthly average daily maxima) for 8 months of the year and costs about €36,000 for the whole system with a GH floor area of 1000 m². Further work is needed to assess technical risks relating to scale deposition in the membrane and evaporative pads, and to develop a business model that will allow such a project to succeed in the Indian rural context.
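The "variable recovery arrangement" mentioned above trades freshwater output against concentrate flow through the recovery ratio; in the simplest salt mass balance (an idealized relation assuming full salt rejection, used here only to illustrate the design variable):

\[ r = \frac{Q_p}{Q_f}, \qquad Q_c = (1-r)\,Q_f, \qquad C_c \approx \frac{C_f}{1-r}, \]

where \(Q_f\), \(Q_p\), \(Q_c\) are the feed, permeate, and concentrate flows and \(C_f\), \(C_c\) the corresponding salinities. Raising \(r\) yields more irrigation water but a smaller, more concentrated stream for the evaporative pads, which is why matching the independently varying concentrate demand calls for adjustable recovery.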
Abstract:
The spatial distribution of self-employment in India: evidence from semiparametric geoadditive models, Regional Studies. The entrepreneurship literature has rarely considered spatial location as a micro-determinant of occupational choice. It has also ignored self-employment in developing countries. Using Bayesian semiparametric geoadditive techniques, this paper models spatial location as a micro-determinant of self-employment choice in India. The empirical results suggest the presence of spatial occupational neighbourhoods and a clear north-south divide in self-employment when the entire sample is considered; however, spatial variation in the non-agriculture sector disappears to a large extent when individual factors that influence self-employment choice are explicitly controlled for. The results further suggest non-linear effects of age, education, and wealth on self-employment.
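The Bayesian semiparametric geoadditive specification referred to here typically takes a structured additive predictor of the form (the generic model class; the paper's exact covariate set is only summarized above)

\[ \eta_i = \mathbf{x}_i'\boldsymbol{\beta} + f_1(\text{age}_i) + f_2(\text{education}_i) + f_3(\text{wealth}_i) + f_{\text{spat}}(s_i), \]

where the \(f_j\) are smooth (e.g. P-spline) functions capturing the reported non-linear effects, \(f_{\text{spat}}\) is a spatially structured effect over locations \(s_i\), and \(\eta_i\) enters a binary-choice likelihood for self-employment.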