955 results for Structure theory


Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new model, based on thermodynamics and the interactions between molecules, to describe the vapour-liquid phase equilibria and surface tension of pure components. The model assumes that the bulk fluid can be characterised as a set of parallel layers. Because of this molecular structure, we call the model the molecular layer structure theory (MLST). Each layer has two energetic components. One is the interaction energy of one molecule of that layer with all surrounding layers. The other is the intra-layer Helmholtz free energy, which accounts for the internal energy and the entropy of that layer. The equilibrium between two coexisting phases is derived from the minimum of the grand potential, and the surface tension is calculated as the excess of the Helmholtz energy of the system. We test this model on a number of components (argon, krypton, ethane, n-butane, iso-butane, ethylene and sulphur hexafluoride), and the results are very satisfactory. (C) 2002 Elsevier Science B.V. All rights reserved.
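The equilibrium and surface-tension conditions described above can be stated schematically as a variational principle on the layer densities; this is a generic thermodynamic sketch, not the paper's specific layer equations:

```latex
\Omega[\{\rho_k\}] = F[\{\rho_k\}] - \mu \sum_k \rho_k ,
\qquad
\frac{\partial \Omega}{\partial \rho_k}\bigg|_{T,\mu} = 0 \ \text{ for every layer } k,
\qquad
\gamma = \frac{F - F_{\text{bulk}}}{A},
```

where $\rho_k$ is the two-dimensional density of layer $k$, $\mu$ the chemical potential fixed by the coexisting bulk phases, $A$ the interfacial area, and $\gamma$ the surface tension obtained as the excess Helmholtz energy per unit area.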

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a detailed analysis of the adsorption of supercritical fluids on nonporous graphitized thermal carbon black. Two methods are employed in the analysis. One is the molecular layer structure theory (MLST), proposed recently by our group, and the other is grand canonical Monte Carlo (GCMC) simulation. They are applied to describe the adsorption of argon, krypton, methane, ethylene, and sulfur hexafluoride on graphitized thermal carbon black. It was found that the MLST describes all the experimental data well at various temperatures. Results from GCMC simulations describe the data well at low pressure but show some deviations at higher pressures for all the adsorbates tested. The question of negative surface excess is also discussed in this paper.
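For reference, the particle insertion and deletion moves in a grand canonical Monte Carlo simulation of adsorption are accepted with the standard Metropolis probabilities (textbook form, independent of the specific adsorbate-carbon model used in the paper):

```latex
P_{\text{ins}} = \min\!\left[1,\ \frac{V}{\Lambda^{3}(N+1)}\, e^{\beta\mu}\, e^{-\beta\,\Delta U}\right],
\qquad
P_{\text{del}} = \min\!\left[1,\ \frac{\Lambda^{3} N}{V}\, e^{-\beta\mu}\, e^{-\beta\,\Delta U}\right],
```

where $\mu$ is the chemical potential set by the bulk gas, $\Lambda$ the thermal de Broglie wavelength, $V$ the simulation-box volume, $N$ the current number of adsorbate molecules, $\beta = 1/k_BT$, and $\Delta U$ the change in configurational energy caused by the trial move.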

Relevance:

100.00%

Publisher:

Abstract:

Adsorption of pure nitrogen, argon, acetone, chloroform and the acetone-chloroform mixture on graphitized thermal carbon black is considered at sub-critical conditions by means of the molecular layer structure theory (MLST). In the present version of the MLST an adsorbed fluid is treated as a sequence of 2D molecular layers, whose Helmholtz free energies are obtained directly from the analysis of the experimental adsorption isotherms of the pure components. The interaction of neighbouring layers is accounted for within a mean-field approximation. This approach allows quantitative correlation of the experimental nitrogen and argon adsorption isotherms both in the monolayer region and in the range of multi-layer coverage up to 10 molecular layers. In the case of acetone and chloroform the approach also leads to excellent quantitative correlation of the adsorption isotherms, while molecular approaches such as the non-local density functional theory (NLDFT) fail to describe those isotherms. We extend our new method to calculate the Helmholtz free energy of an adsorbed mixture using a simple mixing rule, and this allows us to predict mixture adsorption isotherms from pure-component adsorption isotherms. The approach, which accounts for the difference in composition in different molecular layers, is tested against experimental data for acetone-chloroform mixture (a non-ideal mixture) adsorption on graphitized thermal carbon black at 50 degrees C. (C) 2005 Elsevier Ltd. All rights reserved.
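The abstract does not spell out the mixing rule; one simple choice of the kind referred to, shown here purely for illustration and not necessarily the paper's own, combines the pure-component layer free energies with an ideal entropy of mixing:

```latex
a_{\text{mix}}(\rho, \{x_i\}) \approx \sum_i x_i\, a_i(\rho) \;+\; k_B T \sum_i x_i \ln x_i ,
```

where $a_i$ is the intra-layer Helmholtz free energy per molecule of pure component $i$ at layer density $\rho$ and $x_i$ is the mole fraction of component $i$ in that layer.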

Relevance:

100.00%

Publisher:

Abstract:

A new approach is developed to analyze the thermodynamic properties of a sub-critical fluid adsorbed in a slit pore of activated carbon. The approach is based on representing the adsorbed fluid as an ordered structure close to a smoothed solid surface. This ordered structure is modelled as a collection of parallel molecular layers. Such a structure allows us to express the Helmholtz free energy of a molecular layer as the sum of the intrinsic Helmholtz free energy specific to that layer and the potential energy of interaction of that layer with all other layers and the solid surface. The intrinsic Helmholtz free energy of a molecular layer is a function (at a given temperature) of its two-dimensional density, and it can be readily obtained from bulk-phase properties, while the interlayer interaction energy is determined using the 10-4 Lennard-Jones potential. The positions of all layers close to the graphite surface or in a slit pore are taken to correspond to the minimum of the potential energy of the system. This model has led to accurate predictions of nitrogen and argon adsorption on carbon black at their normal boiling points. In the case of adsorption in slit pores, local isotherms are determined from the minimization of the grand potential. The model provides a reasonable description of the 0-1 monolayer transition, phase transitions and packing effects. The adsorption of nitrogen at 77.35 K and argon at 87.29 K on activated carbons is analyzed to illustrate the potential of this theory, and the derived pore-size distribution compares favourably with that obtained by density functional theory (DFT). The model is less time-consuming than methods such as DFT and Monte Carlo simulation, and, most importantly, it can be readily extended to the adsorption of mixtures and to capillary condensation phenomena.
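The 10-4 Lennard-Jones potential mentioned above is commonly written, for a single graphitic plane of surface density $\rho_s$, as:

```latex
\varphi_{10\text{-}4}(z) = 2\pi\,\rho_s\,\varepsilon_{sf}\,\sigma_{sf}^{2}
\left[\frac{2}{5}\left(\frac{\sigma_{sf}}{z}\right)^{10}
      - \left(\frac{\sigma_{sf}}{z}\right)^{4}\right],
```

where $z$ is the distance of a molecular layer from the plane and $\varepsilon_{sf}$, $\sigma_{sf}$ are the solid-fluid Lennard-Jones parameters; in a slit pore the contributions of the two opposing walls are summed.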

Relevance:

70.00%

Publisher:

Abstract:

The use of a robust position controller for a robotic manipulator moving in free space is presented. The aim is to implement in practice a controller that is robust to uncertainties in the model of the system, as well as computationally inexpensive. Variable structure theory provides the technique for the design of such a controller. The design steps are presented, first from a theoretical perspective and then applied to the control of a two-degree-of-freedom manipulator. Simulation results that supported the implementation are presented, followed by the experiments conducted and the results obtained. The conclusion is that variable structure control is readily applicable to industrial robots for robust position control.
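Variable structure (sliding mode) position control drives a sliding variable built from the tracking error to zero with a switching control law. A minimal single-joint sketch with a double-integrator model and made-up gains, not the paper's two-degree-of-freedom controller:

```python
import numpy as np

# Sliding-mode position control of one joint modelled as a double
# integrator with an unknown bounded disturbance torque.
lam, K, dt = 5.0, 8.0, 1e-3        # sliding-surface slope, switching gain, time step
q, qd = 0.0, 0.0                   # joint position and velocity
q_ref = 1.0                        # desired position (step command)

for step in range(5000):
    e, ed = q - q_ref, qd          # tracking error and its derivative
    s = ed + lam * e               # sliding variable s = de/dt + lam*e
    u = -K * np.tanh(s / 0.05)     # switching law with a thin boundary layer
    disturbance = 0.5 * np.sin(0.01 * step)   # unmodelled torque
    qdd = u + disturbance          # double-integrator dynamics
    qd += qdd * dt
    q += qd * dt

print(f"final position: {q:.3f} (target {q_ref})")
```

Once the trajectory reaches the surface s = 0, the error decays exponentially at a rate set by lam regardless of the (bounded) disturbance, which is where the robustness of the scheme comes from.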

Relevance:

70.00%

Publisher:

Abstract:

Research on transition-metal nanoalloy clusters composed of a few atoms is fascinating because of their unusual properties, which arise from the interplay among structure, chemical order and magnetism. Such nanoalloy clusters can be used to construct nanometer devices for technological applications by manipulating their remarkable magnetic, chemical and optical properties. Determining the nanoscopic features exhibited by magnetic alloy clusters requires a systematic global and local exploration of their potential-energy surface in order to identify all the relevant energetically low-lying magnetic isomers. In this thesis the sampling of the potential-energy surface has been performed by employing state-of-the-art spin-polarized density-functional theory in combination with graph theory and basin-hopping global optimization techniques. This combination is vital for a quantitative analysis of the quantum-mechanical energetics. The first approach, i.e., spin-polarized density-functional theory together with the graph-theory method, is applied to study Fe$_m$Rh$_n$ and Co$_m$Pd$_n$ clusters having $N = m+n \leq 8$ atoms. We carried out a thorough and systematic sampling of the potential-energy surface by taking into account all possible initial cluster topologies, all different distributions of the two kinds of atoms within the cluster, the entire concentration range between the pure limits, and different initial magnetic configurations such as ferro- and antiferromagnetic coupling. The remarkable magnetic properties shown by FeRh and CoPd nanoclusters are attributed to the extremely reduced coordination number together with the charge transfer from the 3$d$ to the 4$d$ elements. The second approach, i.e., spin-polarized density-functional theory together with the basin-hopping method, is applied to study the small Fe$_6$, Fe$_3$Rh$_3$ and Rh$_6$ clusters and the larger Fe$_{13}$, Fe$_6$Rh$_7$ and Rh$_{13}$ clusters as illustrative benchmark systems. This method is able to identify the true ground-state structures of Fe$_6$ and Fe$_3$Rh$_3$, which were not obtained with the first approach. However, both approaches predict a similar cluster for the ground state of Rh$_6$. Moreover, the computational time taken by this approach is found to be significantly lower than that of the first approach. The ground-state structure of the Fe$_{13}$ cluster is found to be icosahedral, whereas the Rh$_{13}$ and Fe$_6$Rh$_7$ isomers relax into cage-like and layered-like structures, respectively. All the clusters display a remarkable variety of structural and magnetic behaviors. It is observed that isomers having similar shapes, with only small distortions with respect to each other, can exhibit quite different magnetic moments. This has been interpreted as a probable artifact of the spin-rotational symmetry breaking introduced by the spin-polarized GGA. Combining spin-polarized density-functional theory with other global optimization techniques, such as the minima-hopping method, could be the next step in this direction. Such a combination is expected to be an ideal sampling approach with the advantage of efficiently avoiding the search over irrelevant regions of the potential-energy surface.
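The basin-hopping step of such a workflow can be illustrated with SciPy's generic implementation applied to a toy pairwise Lennard-Jones cluster energy; this sketch shows only the global-optimization loop (random perturbation, local minimization, Metropolis acceptance) and says nothing about the DFT energetics or spin degrees of freedom treated in the thesis:

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(flat_coords):
    """Total Lennard-Jones energy of a small cluster (reduced units)."""
    x = flat_coords.reshape(-1, 3)
    e = 0.0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            r = np.linalg.norm(x[i] - x[j])
            e += 4.0 * (r**-12 - r**-6)
    return e

n_atoms = 6
rng = np.random.default_rng(42)
x0 = rng.uniform(-1.5, 1.5, size=3 * n_atoms)   # random starting geometry

# Each hop = random displacement + local minimization (L-BFGS-B),
# accepted with a Metropolis criterion; the lowest minimum found is kept.
result = basinhopping(lj_energy, x0,
                      minimizer_kwargs={"method": "L-BFGS-B"},
                      niter=200, stepsize=0.5)
print("lowest energy found:", result.fun)   # the LJ6 global minimum is about -12.71
```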

Relevance:

70.00%

Publisher:

Abstract:

We discuss non-steady-state electrical characteristics of a metal-insulator-metal structure. We consider an exponential distribution (in energy) of impurity states in addition to impurity states at a single energy level within the depletion region. We discuss thermal as well as isothermal characteristics and present an expression for the temperature of maximum current (Tm) and a method to calculate the density of exponentially distributed impurity states. We plot the theoretical curves for various sets of parameters and the variation of Tm and Im (the maximum current) with applied potential for various impurity distributions. The present model can explain the available experimental results. Finally, we compare the non-steady-state characteristics in three cases: (i) impurity states only at a single energy level, (ii) a uniform energetic distribution of impurity states, and (iii) an exponential energetic distribution of impurity states.
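An exponential (in energy) distribution of impurity states of the kind considered here is commonly parameterized as follows; this is an illustrative form, and the paper's notation may differ:

```latex
N(E) = \frac{N_t}{E_0}\, \exp\!\left(-\frac{E}{E_0}\right),
```

where $E$ is measured from the band edge into the gap, $N_t$ is the total density of impurity states and $E_0$ is the characteristic decay energy; the single-level and uniform distributions compared in the paper correspond to the limiting cases of a delta function and a constant $N(E)$, respectively.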

Relevance:

70.00%

Publisher:

Abstract:

Regulatory change not seen since the Great Depression swept the U.S. banking industry beginning in the early 1980s, culminating in the Interstate Banking and Branching Efficiency Act of 1994. Significant consolidation has occurred in the banking industry. This paper considers the market-power versus the efficient-structure theories of the positive correlation between banking concentration and performance on a state-by-state basis. Temporal causality tests imply that bank concentration leads bank profitability, supporting the market-power, rather than the efficient-structure, theory of that positive correlation. Our finding suggests that bank regulators, by focusing on local banking markets, missed the initial stages of an important structural change at the state level.
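A temporal (Granger-type) causality test of whether concentration leads profitability can be sketched with statsmodels; the simulated series, variable names and lag choice here are purely illustrative and do not reproduce the paper's state-level data or its exact test:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated series: banking concentration (e.g. a Herfindahl index) and bank
# profitability (ROA), built so that lagged concentration predicts profitability.
rng = np.random.default_rng(0)
T = 60
conc = np.cumsum(rng.normal(scale=0.02, size=T)) + 0.15
roa = np.empty(T)
roa[0] = 0.01
for t in range(1, T):
    roa[t] = 0.4 * roa[t - 1] + 0.05 * conc[t - 1] + rng.normal(scale=0.002)

# grangercausalitytests expects a two-column array and tests whether the
# second column Granger-causes the first.
data = pd.DataFrame({"roa": roa, "conc": conc})
grangercausalitytests(data[["roa", "conc"]], maxlag=2)
```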

Relevance:

60.00%

Publisher:

Abstract:

One of the standard generalized-gradient approximations (GGAs) in use in modern electronic-structure theory [the Perdew-Burke-Ernzerhof (PBE) GGA] and a recently proposed modification designed specifically for solids (PBEsol) are identified as particular members of a family of functionals taking their parameters from different properties of homogeneous or inhomogeneous electron liquids. Three further members of this family are constructed and tested, together with the original PBE and PBEsol, for atoms, molecules, and solids. We find that PBE, in spite of its popularity in solid-state physics and quantum chemistry, is not always the best-performing member of the family, and that PBEsol, in spite of having been constructed specifically for solids, is not the best for solids. The performance of GGAs for finite systems is found to depend sensitively on the choice of constraints stemming from infinite systems. Guidelines both for users and for developers of density functionals emerge from this work.
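For orientation, PBE-type functionals share the same exchange enhancement factor and differ mainly in the values assigned to its gradient coefficient $\mu$ (and to the corresponding correlation coefficient $\beta$); the published values for PBE and PBEsol are quoted below:

```latex
F_x(s) = 1 + \kappa - \frac{\kappa}{1 + \mu s^{2}/\kappa},
\qquad \kappa = 0.804,
```

where $s$ is the reduced density gradient; PBE uses $\mu \approx 0.2195$ and $\beta \approx 0.0667$, while PBEsol uses $\mu = 10/81 \approx 0.1235$ and $\beta \approx 0.046$.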

Relevance:

60.00%

Publisher:

Abstract:

In this work we investigate several important aspects of the structure theory of the recently introduced quasi-Hopf superalgebras (QHSAs), which play a fundamental role in knot theory and integrable systems. In particular we introduce the opposite structure and prove in detail (for the graded case) Drinfeld's result that the coproduct $\Delta' = (S \otimes S) \circ T \circ \Delta \circ S^{-1}$ induced on a QHSA is obtained from the coproduct $\Delta$ by twisting. The corresponding "Drinfeld twist" $F_D$ is explicitly constructed, as well as its inverse, and we investigate the complete QHSA associated with $\Delta'$. We give a universal proof that the coassociator $\Phi' = (S \otimes S \otimes S)\Phi^{321}$ and the canonical elements $\alpha' = S(\beta)$, $\beta' = S(\alpha)$ correspond to twisting the original coassociator $\Phi = \Phi^{123}$ and canonical elements $\alpha$, $\beta$ with the Drinfeld twist $F_D$. Moreover, in the quasi-triangular case, it is shown algebraically that the R-matrix $R' = (S \otimes S)R$ corresponds to twisting the original R-matrix $R$ with $F_D$. This has important consequences in knot theory, which will be investigated elsewhere.
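For context, in the ungraded (quasi-)Hopf setting a twist $F$ acts on the coproduct, R-matrix and coassociator as follows (conventions vary between references, and the graded case carries additional signs worked out in the paper):

```latex
\Delta_F(a) = F\,\Delta(a)\,F^{-1},
\qquad
R_F = F_{21}\, R\, F^{-1},
\qquad
\Phi_F = F_{23}\,(\mathrm{id}\otimes\Delta)(F)\;\Phi\;(\Delta\otimes\mathrm{id})(F)^{-1}\,F_{12}^{-1}.
```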

Relevance:

60.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability-density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision-support systems for environmental data mining, including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces.

The most important and popular machine learning algorithms and models of interest for the geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are: the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation.

Exploratory data analysis (EDA) is the initial and very important part of any data analysis. In this thesis the concepts of exploratory spatial data analysis (ESDA) are treated using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest-neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties.

An important part of the thesis deals with a currently very active topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN significantly outperformed all other approaches, especially under emergency conditions.

The thesis consists of four chapters with the following structure: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, the classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural-hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with a user-friendly and easy-to-use interface.
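The GRNN used for automatic mapping is, in essence, a Gaussian-kernel regression (a Nadaraya-Watson estimator) with a single smoothing parameter. A minimal sketch, not the Machine Learning Office implementation; the coordinates, field values and sigma below are made up for illustration:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to samples
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / (np.sum(w) + 1e-12))
    return np.array(preds)

# Toy usage: interpolate a noisy 2-D field from scattered samples onto a grid.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))                  # sample coordinates
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)       # noisy field values
grid = np.array([[x1, x2] for x1 in np.linspace(0, 10, 5)
                          for x2 in np.linspace(0, 10, 5)])
print(grnn_predict(X, y, grid, sigma=0.8))
```

The smoothing parameter sigma is the only quantity to tune (typically by cross-validation), which is part of what makes the model attractive for automatic mapping.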

Relevance:

60.00%

Publisher:

Abstract:

The aim of this thesis is to identify, using linear regression analysis on panel data, the factors affecting the capital structures of Finnish listed companies in 1999-2004. Based on these factors, it is inferred which capital structure theory or theories these companies follow. Capital structure theories can be divided into two classes according to whether or not they aim at an optimal capital structure. The trade-off theory and the related agency theory aim at an optimal capital structure. In the trade-off theory, the capital structure is chosen by weighing the benefits and costs of debt. The agency theory is otherwise similar to the trade-off theory, but it additionally takes the agency costs of debt into account. The pecking order theory and the market timing theory do not aim at an optimal capital structure. In the pecking order theory, financing is chosen according to a hierarchy (internal financing, debt, mezzanine financing, equity). In the market timing theory, the form of financing that is most advantageous to raise under the prevailing market conditions is chosen. According to the empirical results, leverage depends positively on risk, collateral and intangible assets. Leverage depends negatively on liquidity, stock returns and profitability. Dividends have no effect on leverage. Among industries, industrial goods and services and basic materials have higher leverage ratios than other industries. The results mainly support the pecking order theory and, to some extent, the market timing theory. Other theories receive only limited support.
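A determinants-of-leverage regression on panel data can be sketched as a linear model with firm and year fixed effects; the data, variable names and coefficients below are simulated for illustration only and are not the study's specification or dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel with illustrative regressors.
rng = np.random.default_rng(1)
n_firms, n_years = 50, 6
n = n_firms * n_years
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(1999, 1999 + n_years), n_firms),
    "risk": rng.normal(size=n),
    "collateral": rng.normal(size=n),
    "liquidity": rng.normal(size=n),
    "profitability": rng.normal(size=n),
})
df["leverage"] = (0.3 * df["risk"] + 0.2 * df["collateral"]
                  - 0.4 * df["liquidity"] - 0.3 * df["profitability"]
                  + rng.normal(scale=0.5, size=n))

# Pooled OLS with firm and year fixed effects entered as dummy variables.
model = smf.ols(
    "leverage ~ risk + collateral + liquidity + profitability"
    " + C(firm) + C(year)",
    data=df,
).fit()
print(model.params[["risk", "collateral", "liquidity", "profitability"]])
```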

Relevance:

60.00%

Publisher:

Abstract:

In this paper a general overview of modern molecular structure theory is developed by discussing the ionized hydrogen molecule. We introduce, in a systematic way, some of the approximation methods needed to study the electronic and nuclear spectra. In addition, we have performed calculations in order to illustrate these methods.
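As a concrete example of the kind of approximation involved, the simplest LCAO treatment of the ionized hydrogen molecule (not necessarily the paper's own derivation) builds the molecular orbitals from the two 1s atomic orbitals and gives, within the Born-Oppenheimer approximation at fixed internuclear distance:

```latex
\psi_{\pm} = \frac{\phi_{1s,A} \pm \phi_{1s,B}}{\sqrt{2(1 \pm S)}},
\qquad
E_{\pm} = \frac{H_{AA} \pm H_{AB}}{1 \pm S},
```

where $H_{AA} = \langle \phi_A|\hat H|\phi_A\rangle$, $H_{AB} = \langle \phi_A|\hat H|\phi_B\rangle$ and $S = \langle \phi_A|\phi_B\rangle$; the symmetric combination $\psi_+$ yields the lower-energy (bonding) state.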

Relevance:

60.00%

Publisher:

Abstract:

The worker-recovered factories in Argentina have become an emblematic social movement symbolizing one aspect of the social revolt surrounding the economic crisis of 2001-2002. Recovered factories are enterprises abandoned by their original owners or declared bankrupt, leaving behind unpaid wages and debts. As a result, the workers began to recover their factories, resuming production without their former bosses, under, and for the benefit of, the workers' collective management. The movement is notable for its egalitarian pay and its horizontal management. This work examines the continuity of the recovered factories through the social, political and economic evolution of Argentina's landscape. It also assesses the movement's impact as a challenge to hegemonic, market-oriented economic modes of production. Assuming that the future of the movement depends on two sets of factors, the study analyzes the internal factors through the lens of resource mobilization theory and the external factors through the perspective of political opportunity structure theory. The work concludes that the current situation is a stalemate in which the movement has gained institutional acceptance but has failed to bring about the structural change that would favour its practices and guarantee its long-term security. It argues that the movement must consolidate certain combative aspects: it must consolidate its new identity as a social movement and forge strategic and tactical alliances while preserving its autonomy.