983 results for modern techniques
Abstract:
In view of the current fragmentation in management and organisation studies, we argue that there is a need to elaborate techniques that help reconcile contradictory and superficially incommensurable standpoints. For this purpose, we draw on ‘pre-modern’ Aristotelian epistemological and methodological sources, particularly the idea of ‘saving the appearances’ (SA), not previously introduced into organisation studies. Using SA as our starting point, we outline a methodology that helps to develop reasonable and acceptable intermediary positions in contemporary debates between ‘modernism’ and ‘post-modernism’. We illustrate the functioning of SA in the case of three issues in the philosophy of science where ‘modernist’ and ‘post-modernist’ scholars seem to have incommensurable standpoints: the nature of scientific knowledge; the conception of causality; and the epistemology of practice. We show in particular how to use the logics of ‘qualification’, ‘new conception’, and ‘complementary combination’ to form the basis for mediating positions which could then be accepted by less extreme proponents of both ‘modernism’ and ‘post-modernism’.
Abstract:
Recently, the focus of real estate investment has expanded from the building-specific level to the aggregate portfolio level. The portfolio perspective requires investment analysis for real estate which is comparable with that of other asset classes, such as stocks and bonds. Thus, despite its distinctive features, such as heterogeneity, high unit value, illiquidity and the use of valuations to measure performance, real estate should not be considered in isolation. This means that techniques which are widely used for other asset classes can also be applied to real estate. An important part of investment strategies which support decisions on multi-asset portfolios is identifying the fundamentals of movements in property rents and returns, and predicting them on the basis of these fundamentals. The main objective of this thesis is to find the key drivers and the best methods for modelling and forecasting property rents and returns in markets which have experienced structural changes. The Finnish property market, which is a small European market with structural changes and limited property data, is used as a case study. The findings in the thesis show that it is possible to use modern econometric tools for modelling and forecasting property markets. The thesis consists of an introductory part and four essays. Essays 1 and 3 model Helsinki office rents and returns, and assess the suitability of alternative techniques for forecasting these series. Simple time series techniques are able to account for structural changes in the way markets operate, and thus provide the best forecasting tool. Theory-based econometric models, in particular error correction models, which are constrained by long-run information, are better for explaining past movements in rents and returns than for predicting their future movements. Essay 2 proceeds by examining the key drivers of rent movements for several property types in a number of Finnish property markets. The essay shows that commercial rents in local markets can be modelled using national macroeconomic variables and a panel approach. Finally, Essay 4 investigates whether forecasting models can be improved by accounting for asymmetric responses of office returns to the business cycle. The essay finds that the forecast performance of time series models can be improved by introducing asymmetries, and that the improvement is sufficient to justify the extra computational time and effort associated with the application of these techniques.
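To make the error-correction idea concrete, the sketch below shows a two-step (Engle-Granger style) error correction model in Python: a long-run levels regression supplies the lagged disequilibrium term that constrains the short-run dynamics. The variable names ("rent", "gdp", "employment") and the synthetic data are placeholders for illustration only, not the series or specification used in the thesis.

```python
# Minimal two-step (Engle-Granger style) error correction model sketch.
# Column names ("rent", "gdp", "employment") are illustrative placeholders,
# not the variables used in the thesis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_ecm(df: pd.DataFrame, y: str, xs: list[str]):
    # Step 1: long-run (levels) regression -> cointegrating relation.
    long_run = sm.OLS(df[y], sm.add_constant(df[xs])).fit()
    ect = long_run.resid.shift(1)          # lagged error-correction term

    # Step 2: short-run dynamics in first differences, constrained by the
    # lagged disequilibrium from the long-run relation.
    d = df.diff()
    X = sm.add_constant(pd.concat([d[xs], ect.rename("ect")], axis=1)).dropna()
    short_run = sm.OLS(d[y].loc[X.index], X).fit()
    return long_run, short_run

# Usage with made-up data:
rng = np.random.default_rng(0)
n = 120
gdp = np.cumsum(rng.normal(0.5, 1, n))
df = pd.DataFrame({"gdp": gdp,
                   "employment": 0.3 * gdp + rng.normal(0, 1, n),
                   "rent": 1.2 * gdp + rng.normal(0, 2, n)})
long_run, short_run = fit_ecm(df, "rent", ["gdp", "employment"])
print(short_run.params)   # coefficient on "ect" should be negative (mean reversion)
```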
Abstract:
The swelling pressure of soil depends upon various soil parameters such as mineralogy, clay content, Atterberg's limits, dry density, moisture content, initial degree of saturation, etc., along with structural and environmental factors. It is very difficult to model and analyze swelling pressure effectively taking all the above aspects into consideration. Various statistical/empirical methods have been attempted to predict the swelling pressure based on index properties of soil. In this paper, the computational intelligence techniques of artificial neural networks and support vector machines have been used to develop models, based on the set of available experimental results, to predict swelling pressure from the inputs: natural moisture content, dry density, liquid limit, plasticity index, and clay fraction. The generalization of the models to data outside the training set, which is required for the successful application of a model, is also discussed. A detailed study of the relative performance of the computational intelligence techniques has been carried out based on different statistical performance criteria.
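As a rough illustration of the modelling setup described above, the following Python sketch fits a small neural network and a support vector regressor to the five inputs named in the abstract. The library choice (scikit-learn), the network architecture, and the synthetic data are assumptions made for illustration; they do not reproduce the paper's experimental data or model settings.

```python
# Sketch of the two model families named in the abstract, using scikit-learn
# (a library choice assumed here) and synthetic placeholder data in place of
# the experimental results.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 200
# Features: moisture content, dry density, liquid limit, plasticity index, clay fraction
X = rng.uniform([5, 1.2, 30, 10, 20], [40, 2.0, 90, 50, 70], size=(n, 5))
y = 0.05 * X[:, 3] * X[:, 4] / X[:, 0] + rng.normal(0, 0.5, n)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(16, 8),
                                      max_iter=5000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name, "R2:", round(r2_score(y_te, pred), 3),
          "RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 3))
```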
Abstract:
The notion of optimization is inherent in protein design. A long linear chain of twenty types of amino acid residues is known to fold to a 3-D conformation that minimizes the combined inter-residue energy interactions. There are two distinct protein design problems, viz. predicting the folded structure from a given sequence of amino acid monomers (the folding problem) and determining a sequence for a given folded structure (the inverse folding problem). These two problems have much similarity to engineering structural analysis and structural optimization problems, respectively. In the folding problem, a protein chain with a given sequence folds to a conformation, called a native state, which has a unique global minimum energy value when compared to all other unfolded conformations. This involves a search in the conformation space. This is somewhat akin to the principle of minimum potential energy that determines the deformed static equilibrium configuration of an elastic structure of given topology, shape, and size that is subjected to certain boundary conditions. In the inverse folding problem, one has to design a sequence with some objectives (having a specific feature of the folded structure, docking with another protein, etc.) and constraints (the sequence being fixed in some portion, a particular composition of amino acid types, etc.) while obtaining a sequence that would fold to the desired conformation satisfying the criteria of folding. This requires a search in the sequence space. This is similar to structural optimization in the design-variable space wherein a certain feature of structural response is optimized subject to some constraints while satisfying the governing static or dynamic equilibrium equations. Based on this similarity, in this work we apply topology optimization methods to protein design, discuss modeling issues and present some initial results.
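To give a flavour of what a search in the sequence space means in practice, here is a toy simulated-annealing sketch in Python. The scoring function is a hypothetical placeholder (rewarding hydrophobic residues at assumed buried positions), not a real folding energy, and the approach shown is generic stochastic search rather than the topology optimization formulation pursued in the paper.

```python
# Toy illustration of a search in sequence space (the inverse-folding side of the
# analogy): simulated annealing over amino acid sequences with a stand-in score.
# The scoring function below is a hypothetical placeholder, not a real force field
# and not the topology-optimization formulation proposed in the paper.
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the twenty residue types
HYDROPHOBIC = set("AVILMFWC")

def score(seq: str, buried_positions: set) -> float:
    # Placeholder objective: reward hydrophobic residues at "buried" positions
    # and polar residues elsewhere (lower is better).
    return -sum((seq[i] in HYDROPHOBIC) == (i in buried_positions)
                for i in range(len(seq)))

def anneal(length=30, steps=20000, t0=2.0):
    buried = set(range(0, length, 3))            # hypothetical buried sites
    seq = [random.choice(AMINO_ACIDS) for _ in range(length)]
    e = score("".join(seq), buried)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3       # linear cooling schedule
        i = random.randrange(length)
        old = seq[i]
        seq[i] = random.choice(AMINO_ACIDS)      # point-mutation move
        e_new = score("".join(seq), buried)
        if e_new <= e or random.random() < math.exp((e - e_new) / t):
            e = e_new                            # accept the move
        else:
            seq[i] = old                         # reject and revert
    return "".join(seq), e

print(anneal())
```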
Abstract:
A thermodynamic study of the Ti-O system at 1573 K has been conducted using a combination of thermogravimetric and emf techniques. The results indicate that the variation of oxygen potential with the nonstoichiometric parameter δ in the stability domain of TiO2-δ with the rutile structure can be represented by the relation Δμ(O2) = -6RT ln δ - 711970(±1600) J/mol. The corresponding relation between the nonstoichiometric parameter δ and the partial pressure of oxygen across the whole stability range of TiO2-δ at 1573 K is δ ∝ p(O2)^(-1/6). It is therefore evident that the oxygen-deficient behavior of nonstoichiometric TiO2-δ is dominated by the presence of doubly charged oxygen vacancies and free electrons. The high-precision measurements enabled the resolution of oxygen potential steps corresponding to the different Magneli phases (Ti_nO_(2n-1)) up to n = 15. Beyond this value of n, the oxygen potential steps were too small to be resolved. Based on the composition of the Magneli phase in equilibrium with TiO2-δ, the maximum value of n is estimated to be 28. The chemical potential of titanium was derived as a function of composition using the Gibbs-Duhem relation. Gibbs energies of formation of the Magneli phases were derived from the chemical potentials of oxygen and titanium. The values of -2441.8(±5.8) kJ/mol for Ti4O7 and -1775.4(±4.3) kJ/mol for Ti3O5 obtained in this study refine the values of -2436.2(±26.1) kJ/mol and -1771.3(±6.9) kJ/mol, respectively, given in the JANAF thermochemical tables.
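The -1/6 exponent quoted above follows from a standard dilute point-defect argument, sketched below in Kröger-Vink notation under the assumption stated in the abstract that doubly charged oxygen vacancies and free electrons dominate; this is a textbook consistency check rather than part of the reported measurements.

```latex
% Dilute-defect (Kroger-Vink) sketch of why doubly charged oxygen vacancies
% give the observed -1/6 exponent; assumes ideal, dilute defect behaviour.
\[
  \mathrm{O_O^{\times}} \;\rightleftharpoons\; \tfrac{1}{2}\,\mathrm{O_2(g)}
  + V_\mathrm{O}^{\bullet\bullet} + 2e',
  \qquad
  K(T) = [V_\mathrm{O}^{\bullet\bullet}]\, n^{2}\, p_{\mathrm{O_2}}^{1/2}.
\]
With $[V_\mathrm{O}^{\bullet\bullet}] = \delta$ and electroneutrality $n = 2\delta$,
\[
  K(T) = 4\,\delta^{3}\, p_{\mathrm{O_2}}^{1/2}
  \quad\Longrightarrow\quad
  \delta = \left(\tfrac{K}{4}\right)^{1/3} p_{\mathrm{O_2}}^{-1/6},
\]
which reproduces $\delta \propto p_{\mathrm{O_2}}^{-1/6}$; and since
$\Delta\mu_{\mathrm{O_2}} = RT \ln p_{\mathrm{O_2}}$, it also gives
$\Delta\mu_{\mathrm{O_2}} = -6RT \ln \delta + \text{const}$, matching the form of
the fitted relation above.
```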
Abstract:
Silicon strip detectors are fast, cost-effective and have an excellent spatial resolution. They are widely used in many high-energy physics experiments. Modern high-energy physics experiments impose harsh operating conditions on the detectors, for example those of the LHC experiments. The high radiation doses cause the detectors to eventually fail as a result of excessive radiation damage. This has led to a need to study radiation tolerance using various techniques. At the same time, a need has arisen to operate sensors approaching the end of their lifetimes. The goal of this work is to demonstrate that novel detectors can survive the environment that is foreseen for future high-energy physics experiments. To reach this goal, measurement apparatuses are built. The devices are then used to measure the properties of irradiated detectors. The measurement data are analyzed, and conclusions are drawn. Three measurement apparatuses built as a part of this work are described: two telescopes measuring the tracks of the beam of a particle accelerator and one telescope measuring the tracks of cosmic particles. The telescopes comprise layers of reference detectors providing the reference track, slots for the devices under test, the supporting mechanics, electronics, software, and the trigger system. All three devices work. The differences between these devices are discussed. The reconstruction of the reference tracks and the analysis of the device under test are presented. Traditionally, silicon detectors have produced a very clear response to the particles being measured. In the case of detectors nearing the end of their lifetimes, this is no longer true. A new method that benefits from the reference tracks to form clusters is presented. The method provides less biased results compared to the traditional analysis, especially when studying the response of heavily irradiated detectors. Means to avoid false results in demonstrating the particle-finding capabilities of a detector are also discussed. The devices and analysis methods are primarily used to study strip detectors made of Magnetic Czochralski silicon. The detectors studied were irradiated to various fluences prior to measurement. The results show that Magnetic Czochralski silicon has a good radiation tolerance and is suitable for future high-energy physics experiments.
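The abstract does not spell out the new clustering method, but its core idea, letting the reference track rather than a signal seed define where the cluster is formed, can be sketched as follows. The window size, strip pitch, and data in this Python snippet are hypothetical placeholders, not the values or software used in the thesis.

```python
# Minimal sketch of track-seeded clustering: instead of seeding a cluster on the
# highest strip signal (which biases results for heavily irradiated, low-signal
# sensors), sum the charge in a fixed window of strips around the position
# predicted by the reference track. Window size, pitch and test data are
# illustrative placeholders, not the values used in the thesis.
import numpy as np

PITCH_UM = 80.0        # hypothetical strip pitch
WINDOW = 2             # strips on each side of the predicted strip

def track_seeded_cluster(strip_adc: np.ndarray, track_x_um: float):
    """Return (cluster charge, charge-weighted position) around the track intercept."""
    seed = int(round(track_x_um / PITCH_UM))
    lo, hi = max(0, seed - WINDOW), min(len(strip_adc), seed + WINDOW + 1)
    window = strip_adc[lo:hi]
    charge = float(window.sum())
    if charge <= 0:
        return charge, None
    strips = np.arange(lo, hi)
    position_um = float((strips * window).sum() / charge) * PITCH_UM  # centre of gravity
    return charge, position_um

# Usage with a made-up event: a small signal buried in noise near strip 64.
rng = np.random.default_rng(1)
adc = rng.normal(0.0, 2.0, 128)
adc[63:66] += [3.0, 8.0, 2.0]
print(track_seeded_cluster(adc, track_x_um=64 * PITCH_UM))
```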
Abstract:
Superfluidity is perhaps one of the most remarkable macroscopic quantum effects observed. Superfluidity appears when a macroscopic number of particles occupies a single quantum state. Using modern experimental techniques, one can create and study excitations of a superfluid such as (dark) solitons and vortices. There is a large body of theoretical work studying the properties of such solitons using semiclassical methods. This thesis describes an alternative method for the study of superfluid solitons. The method used here is a holographic duality between a class of quantum field theories and gravitational theories. The classical limit of the gravitational system maps into a strong coupling limit of the quantum field theory. We use a holographic model of superfluidity to study solitons in these systems. One particularly appealing feature of this technique is that it allows us to take into account finite temperature effects in a large range of temperatures.
Abstract:
The nature of the chemisorbed states of nitrogen on various transition metal surfaces is discussed comprehensively on the basis of the results of electron spectroscopic investigations, augmented by those from other techniques such as LEED and thermal desorption. A brief discussion of the photoemission spectra of free N2, a comparison of adsorbed N2 and CO, as well as of the physisorption of N2 on metal surfaces, is also presented. We discuss the chemisorption of N2 on the surfaces of certain metals (e.g. Ni, Fe, Ru and W) in some detail, paying considerable attention to the effect of electropositive and electronegative surface modifiers. Features of the various chemisorbed states (one or more weakly chemisorbed gamma-states, strongly chemisorbed alpha-states with bond orders between 1 and 2, and dissociatively chemisorbed beta-states) on different surfaces are described and relations between them indicated. While the gamma-state could be a precursor of the alpha-state, the alpha-state could be the precursor of the beta-state, and this kind of information is of direct relevance to ammonia synthesis. The nature of adsorption of N2 on the surfaces of some metals (e.g. Cr, Co) deserves further study, and such investigations may well suggest alternative catalysts for ammonia synthesis.
Abstract:
The habit of "drinking smoke", meaning tobacco smoking, caused a true controversy in early modern England. The new substance was used both for its alleged therapeutic properties and for its narcotic effects. The dispute over tobacco continues the line of written controversies which were an important means of communication in sixteenth- and seventeenth-century Europe. The tobacco controversy is special among medical controversies because the recreational use of tobacco soon spread and outweighed its medicinal use, ultimately causing a social and cultural crisis in England. This study examines how language is used in polemic discourse and argumentation. The material consists of medical texts arguing for and against tobacco in early modern England. The texts were compiled into an electronic corpus of tobacco texts (1577-1670) representing different genres and styles of writing. With the help of the corpus, the tobacco controversy is described and analyzed in the context of early modern medicine. A variety of methods suitable for the study of conflict discourse were used to assess internal and external text variation. The linguistic features examined include personal pronouns, intertextuality, structural components, and statistically derived keywords. A common thread in the work is persuasive language use, manifested, for example, in the form of emotive adjectives and the generic use of pronouns; the latter is especially pronounced in the dichotomy between us and them. Controversies have not been studied in this manner before, but the methods applied have supplemented each other and proven their suitability for the study of conflictive discourse. These methods can also be applied to present-day materials.
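As a concrete illustration of how keywords can be statistically derived in corpus studies of this kind, the Python sketch below computes log-likelihood (G2) keyness of each word in a study corpus against a reference corpus. The token lists and the significance threshold are placeholders; they do not reproduce the thesis's corpora or its actual tooling.

```python
# Sketch of one common way to derive corpus keywords: log-likelihood (G2) keyness
# of each word in a study corpus against a reference corpus. The token lists and
# threshold are placeholders; the thesis's actual corpora are not shown here.
import math
from collections import Counter

def keyness(study_tokens, reference_tokens, min_g2=6.63):  # 6.63 ~ p < 0.01, 1 d.o.f.
    study, ref = Counter(study_tokens), Counter(reference_tokens)
    n_study, n_ref = sum(study.values()), sum(ref.values())
    results = []
    for word in study:
        a, b = study[word], ref.get(word, 0)
        e1 = n_study * (a + b) / (n_study + n_ref)   # expected freq in study corpus
        e2 = n_ref * (a + b) / (n_study + n_ref)     # expected freq in reference corpus
        g2 = 2 * ((a * math.log(a / e1) if a else 0) +
                  (b * math.log(b / e2) if b else 0))
        if g2 >= min_g2 and a / n_study > b / n_ref:  # keep positively key words only
            results.append((word, round(g2, 2)))
    return sorted(results, key=lambda x: -x[1])

# Usage with made-up token lists:
study = "tobacco smoke physic cure vapour tobacco smoke".split()
reference = "physic cure humour blood physic diet".split()
print(keyness(study, reference, min_g2=0.0))
```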
Abstract:
Simple two-dimensional C-13-satellite J/D-resolved experiments have been proposed for the visualization of enantiomers, the extraction of homo- and hetero-nuclear residual dipolar couplings, and also H-1 chemical shift differences between the enantiomers in the anisotropic medium. The significant advantages of the techniques lie in the determination of scalar couplings of larger organic molecules. The scalar couplings specific to a second abundant spin such as F-19 can be selectively extracted from a severely overlapped spectrum. The methodologies are demonstrated on a chiral molecule aligned in a chiral liquid crystal medium and on two different organic molecules in isotropic solutions.
Abstract:
In this paper, the design optimization of laminated composites is carried out using nature-inspired optimization techniques, namely vector evaluated particle swarm optimization (VEPSO) and genetic algorithms (GA). The minimum weight of the laminated composite is sought under different failure criteria. The failure criteria considered are the maximum stress (MS), Tsai-Wu (TW) and failure mechanism based (FMB) failure criteria. Minimum weights of the laminates are obtained for the different failure criteria using VEPSO and GA for different combinations of loading. From the study it is evident that VEPSO and GA predict almost the same minimum weight of the laminate for a given loading. The minimum weights obtained with the different failure criteria differ for some loading combinations. The comparison shows that the FMB failure criterion (FMBFC) provides better results for all combinations of loading.
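To show the shape of such an optimization, here is a minimal genetic-algorithm skeleton in Python that minimizes laminate weight (taken as ply count) subject to a failure constraint. The failure_index function is a crude hypothetical surrogate, not the maximum stress, Tsai-Wu or FMB criteria used in the paper, which require full classical lamination theory; the layup encoding and GA parameters are likewise illustrative.

```python
# Minimal genetic-algorithm skeleton for minimum-weight laminate design. A candidate
# is a stack of ply angles; weight is proportional to ply count. failure_index() is
# a hypothetical placeholder for a real criterion (maximum stress, Tsai-Wu or FMB).
import random

ANGLES = [0, 45, -45, 90]
MAX_PLIES = 32

def failure_index(layup, nx, ny, nxy):
    # Hypothetical surrogate: more plies, and plies aligned with the load, lower the index.
    if not layup:
        return float("inf")
    axial = sum(a == 0 for a in layup) + 1
    trans = sum(a == 90 for a in layup) + 1
    shear = sum(abs(a) == 45 for a in layup) + 1
    return abs(nx) / (100 * axial) + abs(ny) / (100 * trans) + abs(nxy) / (100 * shear)

def fitness(layup, loads):
    weight = len(layup)
    penalty = max(0.0, failure_index(layup, *loads) - 1.0) * 1000  # constraint: index <= 1
    return weight + penalty                                        # lower is better

def evolve(loads, pop_size=60, generations=200, p_mut=0.15):
    pop = [[random.choice(ANGLES) for _ in range(random.randint(4, MAX_PLIES))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, loads))
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, min(len(a), len(b)) - 1) if min(len(a), len(b)) > 1 else 1
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < p_mut and child:
                child[random.randrange(len(child))] = random.choice(ANGLES)  # angle mutation
            if random.random() < p_mut and len(child) > 2:
                child.pop()                            # ply-removal mutation (less weight)
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda ind: fitness(ind, loads))
    return best, fitness(best, loads)

print(evolve(loads=(800.0, 200.0, 100.0)))
```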