860 results for conceptual data modelling
Abstract:
Studies of living environments and health have traditionally focused on the residential neighbourhood alone. Critics have pointed out that this focus ignores individuals' daily mobility and neglects the other settings where people spend time, that is, their activity space. Although daily mobility is attracting growing interest in public health, few studies have addressed social inequalities in health, even though different social groups do not necessarily have the same capacity to reach health-promoting environments. The link between mobility inequalities and social inequalities in health therefore deserves exploration. In this thesis, I first develop a conceptual proposition that grounds daily mobility in the concept of mobility potential. Mobility potential encompasses the opportunities and places that individuals can choose to access by converting their potential into realized mobility. It is shaped by individual characteristics (e.g., income) and geographic characteristics (e.g., proximity to public transit), as well as by rules governing access to certain resources and places (e.g., the law). These characteristics and rules are unequally distributed across social groups and can therefore produce social inequalities in realized mobility, both in the extent of spatial mobility and in the contextual exposures encountered within the activity space. I discuss several processes through which inequalities in realized mobility may lead to social inequalities in health. For example, disadvantaged groups are more likely than their wealthier counterparts to live and conduct activities in deprived settings, which could contribute to health differences between these groups. This conceptual proposition is tested in two empirical studies, analyzing data from the first wave of the Interdisciplinary Study on Social Inequalities in Health (ISIS) conducted in Montreal, Canada (2011-2012). In this study, 2,093 young adults (18-25 years) completed a questionnaire providing sociodemographic information and data on their smoking and their activity locations. Socioeconomic status was operationalized as the highest level of education attained. Residential and activity locations were used to create 500-metre buffers along the road network, within which measures of deprivation and of tobacco retailer availability were aggregated. In the first empirical study, I compare exposure to deprivation in the residential neighbourhood and in the non-residential activity space between the most and least educated, and identify individual and residential-neighbourhood variables associated with the level of deprivation measured in the activity space. The results show a social gradient in exposure to deprivation both residentially and in the activity space: exposure increases as education decreases. Among the least educated, the gaps in exposure to deprivation are wider in the activity space than in the residential neighbourhood, whereas among the moderately educated they narrow. Lower education, increasing age, being neither in school nor employed, and residential deprivation are positively correlated with deprivation in the activity space. In the second empirical study, I examine the association between smoking and two contextual exposures (deprivation and tobacco retailer availability) measured in the residential neighbourhood and in the non-residential activity space, and assess whether social inequalities in these exposures help explain social inequalities in smoking. I find that young people whose daily activities take place in deprived settings are more likely to smoke. The presence of tobacco retailers in the residential neighbourhood and in the activity space is also associated with the probability of smoking, whereas living in a highly deprived neighbourhood is protective against smoking. However, none of the contextual variables significantly affects the association between education and smoking. The findings of this thesis underline the importance of considering not only the residential neighbourhood but also the places where people conduct their daily activities in order to understand how context relates to social inequalities in health. In the discussion, I elaborate on recognizing daily mobility as a factor of social differentiation among young adults. I conclude that identifying the factors that enable or constrain individuals' daily mobility is necessary in order to: 1) better understand how social inequalities in (potential and realized) mobility arise and influence health, and 2) identify public health intervention targets for creating healthy and equitable environments.
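As a rough illustration of the exposure measure used here, the sketch below aggregates an area-level deprivation score within 500 m buffers around each participant's locations. It is a minimal sketch only: the file names, the column names (`person_id`, `deprivation`) and the use of circular rather than road-network buffers are assumptions, not the ISIS study's actual pipeline.

```python
# Minimal sketch: mean deprivation within 500 m buffers around each
# person's residential and activity locations. Circular buffers stand in
# for the study's road-network buffers; file/column names are assumed.
import geopandas as gpd

def activity_space_deprivation(points_path, areas_path, radius_m=500):
    points = gpd.read_file(points_path)      # one row per location, with person_id
    areas = gpd.read_file(areas_path)        # polygons carrying a 'deprivation' score
    points = points.to_crs(areas.crs)        # assumes a projected CRS in metres
    buffers = points.copy()
    buffers["geometry"] = points.geometry.buffer(radius_m)
    joined = gpd.sjoin(buffers, areas[["deprivation", "geometry"]], how="left")
    # Average the deprivation of all areas intersecting each person's buffers.
    return joined.groupby("person_id")["deprivation"].mean()

exposure = activity_space_deprivation("activity_places.geojson", "census_tracts.geojson")
```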
Abstract:
Since the industrial revolution, technological change has transformed manufacturing. Today, new technologies such as rapid prototyping are making inroads into fields like jewelry making, once the preserve of craftsmanship, and are unsettling its traditions by introducing faster and easier methods. This research addresses two questions: 'How does rapid prototyping influence the practice of jewelry making?' and 'How does it influence potential buyers' appreciation of a piece of jewelry?' The approach consisted of collecting data through three interviews with different jewellers and two focus groups of potential consumers. The results revealed the usefulness of rapid prototyping for overcoming a number of obstacles inherent in handmade work, such as geometry, commercialization, and fineness of detail. However, it creates a distance between the jeweller's hand and the object, changing the nature of the practice. The technology is perceived as less authentic, because the machine evokes mass production and the possibility of serial reproduction destroys the uniqueness of the piece, thereby reducing its emotional charge. This research offers a better understanding of the use of rapid prototyping and of its consequences for jewelry making. It may open the way to research aimed at a better marriage between this technique and traditional methods.
Abstract:
The primary aim of the present study is to acquire a large amount of gravity data, prepare gravity maps, and interpret the data in terms of the crustal structure below the Bavali shear zone and adjacent regions of northern Kerala. Gravity modelling is essentially a tool for inferring the subsurface extension of exposed geological units and their structural relationship with the surroundings. The study is expected to throw light on the nature of the shear zone, the crustal configuration below the high-grade granulite terrain, and the tectonics operating in the region during geological times. The Bavali shear is manifested in the gravity profiles by a steep gravity gradient. The gravity models indicate that the Bavali shear coincides with a steep plane separating two contrasting crustal densities, extending beyond a depth of 30 km and possibly down to the Moho, justifying its interpretation as a mantle fault. It is difficult to construct a generalized model of crustal evolution in its varied manifestations from gravity data alone; however, the data constrain several aspects of crustal evolution and provide insights into some of the major events.
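The "steep gravity gradient" signature described here can be illustrated with a minimal forward model: the anomaly across the edge of a semi-infinite horizontal slab of anomalous density. The thin-slab approximation and all parameter values below are illustrative assumptions, not the thesis's models.

```python
# Minimal forward-modelling sketch: a steep density contrast produces a
# steep gravity gradient across the contact. Semi-infinite thin-slab
# approximation; parameter values are illustrative only.
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def slab_edge_anomaly(x, delta_rho, thickness, depth):
    """Gravity anomaly (mGal) across the edge of a semi-infinite slab.

    x: horizontal distance from the edge (m); positive over the slab.
    """
    dg = 2.0 * G * delta_rho * thickness * (np.pi / 2.0 + np.arctan(x / depth))
    return dg * 1e5  # m/s^2 -> mGal

x = np.linspace(-40e3, 40e3, 201)   # 80 km profile across the contact
anomaly = slab_edge_anomaly(x, delta_rho=100.0, thickness=10e3, depth=15e3)
print(f"total step across the contact: {anomaly.max() - anomaly.min():.1f} mGal")
```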
Abstract:
The thesis deals with some non-linear Gaussian and non-Gaussian time series models, concentrating mainly on the properties and application of a first-order autoregressive process with Cauchy marginal distribution. Time series of prices, consumption, money in circulation, bank deposits and bank clearings, sales and profit in a department store, national income and foreign exchange reserves, and prices and dividends of shares in a stock exchange are examples of economic and business time series. The thesis discusses the application of the threshold autoregressive (TAR) model and fits it to a time series data set; two further important non-linear models considered are the ARCH model and the TARCH model. The main objective is to identify an appropriate model for a given data set. The data considered are daily coconut oil prices over a period of three years; since consecutive prices may not be independent, a time series model is appropriate. The study also examines properties such as ergodicity, mixing and time reversibility, as well as various procedures for estimating the unknown parameters of the process.
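The central process of the thesis admits a compact simulation. Because the Cauchy family is closed under scaling and addition, the AR(1) recursion X_t = a X_{t-1} + e_t has a standard Cauchy marginal when the innovations are Cauchy with scale 1 - |a|. The sketch below, with illustrative parameter values, exploits this.

```python
# Minimal sketch: a stationary first-order autoregressive process with
# standard Cauchy marginals. X_t = a*X_{t-1} + e_t keeps a Cauchy(0, 1)
# marginal when e_t ~ Cauchy(0, 1 - |a|). Parameters are illustrative.
import numpy as np

def cauchy_ar1(n, a=0.6, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.standard_cauchy()                   # start in the stationary law
    innovations = (1.0 - abs(a)) * rng.standard_cauchy(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + innovations[t]
    return x

series = cauchy_ar1(5000)
# The sample median stays near 0 even though the mean does not exist.
print(np.median(series))
```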
Abstract:
Sharing information with those who need it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that well-organized schemes for retrieval and discovery are imperative. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The use of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron. The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal and a proper mechanism for information discovery becomes feasible. An empirical study based on an election database of geographically distributed constituencies provided the required insights; this is manifested in the Election Counting and Reporting Software (ECRS) system. ECRS is an essentially distributed software system designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS possess a "fragile architecture" that makes them prone to collapse when minor faults occur. This is resolved by the proposed penta-tier architecture, which places five different technologies at different tiers. The results of the experiments conducted and their analysis show that such an architecture helps keep the different components of the software intact and impermeable to internal or external faults. The architecture thus evolved needed a mechanism to support information processing and discovery, which necessitated the introduction of the novel concept of infotrons; when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. A second empirical study examined which of the two prominent markup languages, HTML and XML, is better suited for incorporating infotrons; a comparative study of 200 documents in HTML and XML came out in favor of XML. The concepts of the infotron and the infotron dictionary were then applied to implement an Information Discovery System (IDS). IDS is a system that starts from the infotron(s) supplied as clue(s) and brews the information required to satisfy the information discoverer's need from the documents at its disposal (its information space). The components of the system and their interaction follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly; many subsystems interact with the multiple infotron dictionaries maintained in the system. To demonstrate the working of IDS, and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, yielding an information discovery service. IDLIS shows IDS in action and proves that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are covered.
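As a loose illustration of discovery guided by an infotron dictionary, the toy sketch below expands a clue through a dictionary of related terms and matches documents against the expanded set. The dictionary contents and documents are hypothetical, and the thesis's infotron model is richer than this keyword expansion.

```python
# Toy sketch of dictionary-guided discovery, in the spirit of the infotron
# dictionary. Dictionary, terms and documents are hypothetical illustrations.
from typing import Dict, List, Set

# Each "infotron" maps a clue to the terms that elaborate it.
INFOTRON_DICTIONARY: Dict[str, Set[str]] = {
    "election": {"constituency", "counting", "returning officer"},
    "report": {"statutory", "district", "summary"},
}

def discover(clues: List[str], documents: Dict[str, str]) -> List[str]:
    """Return names of documents matching any clue or its expansion."""
    terms = set(clues)
    for clue in clues:
        terms |= INFOTRON_DICTIONARY.get(clue, set())
    return [name for name, text in documents.items()
            if any(term in text.lower() for term in terms)]

docs = {"doc1": "Counting completed in 12 constituencies.",
        "doc2": "Annual budget overview."}
print(discover(["election"], docs))   # -> ['doc1']
```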
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas release. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for identifying and quantifying the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of hazardous material storages in an industrial area of central Kerala, India. A survey of the MAH units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of these chemicals is assessed using consequence modelling. Pool fires of naphtha, cyclohexane, cyclohexanone, benzene and ammonia are modelled using the TNO model. Vapor cloud explosions (VCE) of LPG, cyclohexane and benzene are modelled using the TNT-equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion of toxic chemicals such as chlorine, ammonia and benzene is modelled using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated from the consequence modelling; the distance covered by the threat zone was found to be largest for a chlorine release from a chlor-alkali industry located in the area. The results of the consequence modelling are useful for estimating individual and societal risk in the industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads, and individual and societal risks are estimated at different locations. Threat zones due to the different incident outcome cases from the various MAH industries are mapped with the help of ArcGIS. Fault tree analysis (FTA) is an established technique for hazard evaluation, with the advantage of being both qualitative and quantitative if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate the failure probability of components precisely, owing to insufficient data or the vague characteristics of the basic events, and the availability of failure probability data pertaining to local conditions is reportedly very limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry in the area, using expert elicitation and fuzzy logic. Sensitivity analysis has been carried out to evaluate the percentage contribution of each basic event that could lead to a chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
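The quantitative FTA step can be illustrated with a minimal sketch: basic-event probabilities combined through AND/OR gates, with each event's percentage contribution to the top event obtained by a crude leave-one-out sensitivity. The tree structure and probabilities below are illustrative assumptions, not the thesis's chlorine-release fault tree or its fuzzy extension.

```python
# Minimal FTA sketch: AND/OR gate combination of basic-event probabilities
# and a leave-one-out sensitivity ranking. Tree and values are illustrative.
from math import prod

def p_and(probabilities):            # all basic events must occur
    return prod(probabilities)

def p_or(probabilities):             # at least one basic event occurs
    return 1.0 - prod(1.0 - p for p in probabilities)

basic_events = {"valve_fails": 1e-3, "gasket_leak": 5e-4, "operator_error": 2e-3}

def top_event(p):
    # Top = operator_error OR (valve_fails AND gasket_leak)
    return p_or([p["operator_error"], p_and([p["valve_fails"], p["gasket_leak"]])])

base = top_event(basic_events)
for name in basic_events:            # zero out one event at a time
    reduced = dict(basic_events, **{name: 0.0})
    contribution = 100.0 * (base - top_event(reduced)) / base
    print(f"{name}: {contribution:.1f}% of top-event probability")
```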
Abstract:
Department of Marine Geology and Geophysics, Cochin University of Science and Technology
Abstract:
This thesis deals with the use of simulation as a problem-solving tool for a number of logistic system problems, more specifically studies on transport terminals. Transport terminals are key elements in the supply chains of industrial systems. One problem in using simulation is the multiplicity of models needed to study different problems; methodologies for conceptual modelling are needed to help reduce the number of models required. Three different logistic terminal systems, viz. a railway yard, the container terminal of a port, and an airport terminal, were selected as cases for this study. The standard methodology for simulation development was followed: system study and data collection, conceptual model design, detailed model design and development, model verification and validation, experimentation, analysis of results, and reporting of findings. We found that the models could be classified into tightly pre-scheduled, moderately pre-scheduled and unscheduled systems. Three types of simulation models (called TYPE 1, TYPE 2 and TYPE 3) of the various terminal operations were developed in the simulation package Extend; all were discrete-event simulation models. The simulation models were successfully used to help solve strategic, tactical and operational problems related to the three logistic terminals, as set out in our objectives. As a contribution to conceptual modelling, we have demonstrated that grouping problems into operational, tactical and strategic classes and matching them with tightly pre-scheduled, moderately pre-scheduled and unscheduled systems is a workable approach that reduces the number of models needed to study different terminal-related problems.
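A minimal discrete-event sketch of the "unscheduled" terminal case is given below: trucks arrive at random and queue for a shared crane. The thesis used the Extend package; SimPy is substituted here purely to mirror the modelling style, and the arrival and service rates are illustrative assumptions.

```python
# Minimal discrete-event sketch of an unscheduled terminal: random truck
# arrivals queue for one crane. Rates are illustrative; SimPy stands in
# for the Extend package used in the thesis.
import random
import simpy

def truck(env, crane, waits):
    arrival = env.now
    with crane.request() as req:                     # queue for the crane
        yield req
        waits.append(env.now - arrival)              # record queueing delay
        yield env.timeout(random.expovariate(1 / 8.0))   # ~8 min service

def source(env, crane, waits):
    while True:
        yield env.timeout(random.expovariate(1 / 10.0))  # ~10 min headway
        env.process(truck(env, crane, waits))

random.seed(1)
env = simpy.Environment()
crane = simpy.Resource(env, capacity=1)
waits = []
env.process(source(env, crane, waits))
env.run(until=8 * 60)                                # one 8-hour shift, minutes
print(f"{len(waits)} trucks served, mean wait {sum(waits) / len(waits):.1f} min")
```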
Abstract:
This thesis, entitled Reliability Modelling and Analysis in Discrete Time, presents some concepts and models useful in the analysis of discrete lifetime data. The study consists of five chapters. Chapter II takes up the derivation of some general results useful in reliability modelling involving two-component mixtures: expressions for the failure rate, mean residual life and second moment of residual life of mixture distributions are derived in terms of the corresponding quantities of the component distributions, and some applications of these results are pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed; while describing various reliability characteristics, it was found that they can often be treated as a class. The applicability of these models in single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference for the parameters of a mixture distribution is usually difficult, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, the least-squares function, etc., and the resulting computations very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results, partial moments have not been studied extensively in the literature, especially for discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are as efficient and convenient as many conventional tools for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the investigation and that could be the subject of future work in this area.
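The Chapter II quantities can be made concrete with a small numeric sketch for a two-component geometric mixture: the discrete failure rate h(k) = P(X = k)/P(X >= k) and the mean residual life. The mixing weight and parameters below are illustrative.

```python
# Minimal sketch: failure rate and mean residual life of a two-component
# geometric mixture in discrete time. Mixing weight/parameters illustrative.
import numpy as np

def geometric_pmf(k, p):                  # support k = 0, 1, 2, ...
    return p * (1.0 - p) ** k

k = np.arange(0, 200)
pmf = 0.4 * geometric_pmf(k, 0.10) + 0.6 * geometric_pmf(k, 0.40)
survival = 1.0 - np.cumsum(pmf) + pmf     # P(X >= k)

failure_rate = pmf / survival
# Mean residual life at k: E[X - k | X >= k]
mrl = np.array([(pmf[j:] * (k[j:] - j)).sum() / survival[j] for j in k])

# Unlike a single geometric (constant failure rate), the mixture's failure
# rate decreases as the longer-lived component dominates the survivors.
print(failure_rate[:5].round(4), failure_rate[50:55].round(4))
```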
Abstract:
Upgrading two widely used standard plastics, polypropylene (PP) and high-density polyethylene (HDPE), and generating a variety of useful engineering materials based on their blends has been the main objective of this study. Upgrading was effected using nanomodifiers and/or fibrous modifiers. PP and HDPE were selected for modification because of their attractive inherent properties and wide spectrum of use. Blending is an engineered method of producing new materials with tailor-made properties that combine the advantages of both constituents: PP has high tensile and flexural strength, while HDPE acts as an impact modifier in the resultant blend. Hence an optimized PP/HDPE blend was selected as the matrix material, with nanokaolinite clay and E-glass fibre chosen as modifiers. In the first stage of the work, the mechanical, thermal, morphological, rheological, dynamic mechanical and crystallization characteristics of polymer nanocomposites prepared from the PP/HDPE blend and differently surface-modified nanokaolinite clays were analyzed. In the second stage, the effect of the simultaneous inclusion of nanokaolinite clay (both N100A and N100) and short glass fibres was investigated; the presence of the nanofiller increased the properties of the hybrid composites to a greater extent than those of the microcomposites. In the last stage, micromechanical modelling of both the nano and hybrid composites was carried out to analyze their behavior under load-bearing conditions. These theoretical analyses indicate that the polymer-nanoclay interfacial characteristics partially converge to a state of perfect interfacial bonding (Takayanagi model) with an iso-stress (Reuss, IROM) response. For the hybrid composites, the experimental data follow the trend of the Halpin-Tsai model, implying that the matrix and filler experience varying amounts of strain, and that interfacial adhesion between filler and matrix, and between the two fillers, plays a vital role in determining the modulus of the hybrid composites. A significant observation from this study is that the high fibre loading usually required for efficient reinforcement of polymers can be substantially reduced when the nanofiller is present together with a much lower fibre content. Hybrid composites with both nanokaolinite clay and micron-sized E-glass fibre as reinforcements in a PP/HDPE matrix thus constitute a novel class of high-performance, cost-effective engineering materials.
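The bounds and estimate named above can be illustrated numerically: the Reuss (iso-stress) and Voigt (iso-strain) bounds and the Halpin-Tsai estimate for the composite modulus. The moduli and shape factor below are illustrative assumptions, not the measured PP/HDPE data.

```python
# Minimal micromechanics sketch: Reuss and Voigt bounds vs the Halpin-Tsai
# estimate for composite modulus. All numeric values are illustrative.
def voigt(em, ef, vf):                    # iso-strain upper bound
    return vf * ef + (1 - vf) * em

def reuss(em, ef, vf):                    # iso-stress lower bound
    return 1.0 / (vf / ef + (1 - vf) / em)

def halpin_tsai(em, ef, vf, zeta):        # zeta ~ 2*(aspect ratio) for short fibres
    eta = (ef / em - 1.0) / (ef / em + zeta)
    return em * (1.0 + zeta * eta * vf) / (1.0 - eta * vf)

em, ef = 1.4, 72.0                        # matrix and E-glass moduli, GPa
for vf in (0.05, 0.10, 0.20):
    print(f"Vf={vf:.2f}: Reuss={reuss(em, ef, vf):.2f}  "
          f"Halpin-Tsai={halpin_tsai(em, ef, vf, zeta=10.0):.2f}  "
          f"Voigt={voigt(em, ef, vf):.2f} GPa")
```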
Abstract:
A stand-alone power system is an autonomous system that supplies electricity to a user load without being connected to the electric grid. Such decentralized systems are frequently located in remote and inaccessible areas, and are essential for the roughly one third of the world's population living in undeveloped or isolated regions without access to an electricity utility grid. Most of these people live in remote and rural areas of low population density, lacking even basic infrastructure; extending the utility grid to such locations is not cost-effective and is sometimes technically infeasible. The purpose of this thesis is the modelling and simulation of a stand-alone hybrid power system, referred to as a "hydrogen photovoltaic-fuel cell (PVFC) hybrid system". It couples a photovoltaic (PV) generator, an alkaline water electrolyser, a gas storage tank, a proton exchange membrane fuel cell (PEMFC), and power conditioning units (PCU) in different system topologies. The system is intended to be an environmentally friendly solution, since it seeks to maximise the use of a renewable energy source. Electricity is produced by the PV generator to meet the requirements of the user load. Whenever there is enough solar radiation, the user load can be powered entirely by PV electricity; during periods of low solar radiation, auxiliary electricity is required. An alkaline high-pressure water electrolyser is powered by the excess energy from the PV generator to produce hydrogen and oxygen at pressures up to 30 bar, and the gases are stored without compression for short-term (hourly or daily) and long-term (seasonal) use. The PEM fuel cell keeps the system's reliability at the level of a conventional system while decreasing the environmental impact of the whole system: it consumes the gases produced by the electrolyser to meet the user load demand when the PV generator's energy is deficient, thus acting as an auxiliary generator. Power conditioning units handle the conversion and dispatch of energy between the components of the system. No batteries are used, since they are the weakest component of PV systems, owing to their need for sophisticated control and their short lifetime. The model library, ISET Alternative Power Library (ISET-APL), designed by the Institute of Solar Energy Supply Technology (ISET), is used for the simulation of the hybrid system. The physical, analytical and/or empirical equations of each component are programmed and implemented separately in this library, in C++, for the simulation software Simplorer. The model parameters are derived from manufacturers' data sheets or from measurements reported in the literature. The major hydrogen PVFC hybrid system component models are identified and validated against measured component data, manufacturers' data sheets, or actual system operation. The overall system is then simulated at one-hour intervals over one year of operation, using solar radiation as the primary energy input and hydrogen as energy storage. A comparison between different topologies, such as DC- and AC-coupled systems, is carried out from an energy point of view at two locations with different geographical latitudes: Kassel, Germany (Europe) and Cairo, Egypt (North Africa).
The main conclusion of this work is that the simulation method could successfully be used to visualize and compare these topologies under different conditions in terms of overall system performance. The operational performance of the system depends not only on component efficiency but also on system design and consumption behaviour. The weakest point of the system is the low efficiency of the storage subsystem (electrolyser, gas storage tank and fuel cell), which is around 25-34% in Cairo and 29-37% in Kassel. Future research on this system should therefore concentrate on developing the storage subsystem components, especially the fuel cell.
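The dispatch logic described above reduces to a simple hourly energy balance, sketched below: PV surplus feeds the electrolyser and hydrogen tank, and deficits are covered by the fuel cell. The efficiencies, profiles and tank size are illustrative assumptions rather than the ISET-APL component models; note that the assumed electrolyser and fuel cell efficiencies give a storage round trip of about 29%, in line with the range reported above.

```python
# Minimal hourly energy-balance sketch of the PVFC dispatch logic.
# Efficiencies, profiles and tank size are illustrative assumptions.
def simulate(pv_kw, load_kw, eta_ely=0.65, eta_fc=0.45, tank_kwh=200.0):
    """Return the hours in which the load could not be fully served."""
    h2 = tank_kwh / 2.0                   # start half full (kWh of H2 energy)
    unmet_hours = []
    for hour, (pv, load) in enumerate(zip(pv_kw, load_kw)):
        surplus = pv - load
        if surplus >= 0:                  # store surplus as hydrogen
            h2 = min(tank_kwh, h2 + surplus * eta_ely)
        else:                             # fuel cell covers the deficit
            needed_h2 = -surplus / eta_fc
            if needed_h2 <= h2:
                h2 -= needed_h2
            else:
                unmet_hours.append(hour)  # tank empty: load not fully served
                h2 = 0.0
    return unmet_hours

pv = [0, 0, 1, 3, 5, 6, 5, 3, 1, 0, 0, 0] * 2   # crude daily PV shape, kW
load = [2] * 24                                  # flat 2 kW load
print(simulate(pv, load))
```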
Abstract:
Agriculture plays a central role in the Earth system. It contributes to the greenhouse effect through emissions of CO2, CH4 and N2O, can cause soil degradation and eutrophication, alters regional water cycles, and will itself be strongly affected by climate change. Because all these processes are closely linked through the underlying nutrient and water fluxes, they should be treated within a consistent modelling framework; until recently, however, lack of data and insufficient process understanding have prevented this at the global scale. This work presents the first version of such a consistent global modelling framework, with emphasis on the simulation of agricultural yields and the resulting N2O emissions. This emphasis was chosen because a correct representation of plant growth is an essential prerequisite for simulating all other processes, and because current and potential agricultural yields are important driving forces of land-use change and will be strongly affected by climate change. The second focus is the estimation of agricultural N2O emissions, since no process-based N2O model had previously been applied at the global scale. The existing agroecosystem model Daycent was chosen as the basis for the global modelling. In addition to creating the simulation environment, the required global data sets of soil parameters, climate and agricultural management were compiled. Since no global data base of planting dates exists, and since planting dates will shift with climate change, a routine for computing planting dates was developed; its results agree well with the FAO crop calendars available for some crops and countries. The Daycent model was then parameterized and calibrated for yield simulation of wheat, rice, maize, soybean, millet, pulses, potato, cassava and cotton. The simulation results show that Daycent correctly captures the main climate, soil and management effects on yield formation. Computed country averages agree well with FAO data (R2 = 0.66 for wheat, rice and maize; R2 = 0.32 for soybean), and spatial yield patterns largely correspond to the observed distribution of crops and to subnational statistics. Before modelling agricultural N2O emissions with Daycent, a statistical analysis of N2O and NO emission measurements from natural and agricultural ecosystems was carried out. The parameters identified as significant for N2O (fertilizer amount, soil carbon content, soil pH, texture, crop type, fertilizer type) and for NO (fertilizer amount, soil nitrogen content, climate) largely agree with the results of an earlier analysis. For emissions from soils under natural vegetation, for which no such statistical analysis previously existed, soil carbon content, soil pH, bulk density, drainage and vegetation type significantly influence N2O emissions, while NO emissions depend significantly on soil carbon content and vegetation type. The statistical models derived from these results yield global emissions from arable soils of 3.3 Tg N/yr for N2O and 1.4 Tg N/yr for NO. Such statistical models are useful for deriving estimates and uncertainty ranges of N2O and NO emissions from a large number of measurements; the dynamics of soil nitrogen, as influenced in particular by plant growth, climate change and land-use change, can however only be captured with process-oriented models. To model N2O emissions with Daycent, its trace gas module was first extended by a more detailed computation of nitrification and denitrification and by accounting for freeze-thaw emissions. This revised model version was then tested against N2O emission measurements under various climates and crops. Both the dynamics and the totals of the N2O emissions are reproduced satisfactorily, with model efficiencies for monthly means between 0.1 and 0.66 for most sites. Based on the revised model version, N2O emissions were computed for the previously parameterized crops. Emission rates and crop-specific differences largely agree with values reported in the literature. Fertilizer-induced emissions, currently estimated by the IPCC at 1.25 +/- 1% of the applied fertilizer amount, range from 0.77% (rice) to 2.76% (maize). The sum of the computed emissions from agricultural soils amounts to 2.1 Tg N2O-N/yr for the mid-1990s, consistent with estimates from other studies.
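The fertilizer-induced emission factor quoted above has a simple definition that a short sketch makes explicit: the extra N2O-N emitted per unit of nitrogen applied, relative to an unfertilized control. The plot-level numbers below are illustrative assumptions.

```python
# Minimal sketch of the fertilizer-induced emission factor compared with
# the IPCC default of 1.25%. Plot-level numbers are illustrative.
def fertilizer_induced_ef(e_fertilized, e_control, n_applied):
    """Emission factor (%) = extra N2O-N emitted per unit of N applied."""
    return 100.0 * (e_fertilized - e_control) / n_applied

# kg N2O-N/ha/yr from a fertilized and an unfertilized plot; kg N/ha applied
ef = fertilizer_induced_ef(e_fertilized=2.9, e_control=0.8, n_applied=150.0)
print(f"EF = {ef:.2f}% of applied N")   # 1.40%, inside the IPCC 1.25 +/- 1% range
```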
Abstract:
Concept exploration is a knowledge acquisition tool for interactively exploring the hierarchical structure of finitely generated lattices. Applications include supporting knowledge engineers in constructing a type lattice for conceptual graphs, and exploring large formal contexts in formal concept analysis.
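The derivation operators that drive such exploration can be shown on a toy formal context: every formal concept is a pair (A, B) of an object set and an attribute set with A' = B and B' = A. The sketch below enumerates the concepts of a three-object context; the context itself is an illustration.

```python
# Minimal formal concept analysis sketch: enumerate the formal concepts
# (extent, intent) of a tiny formal context. The context is illustrative.
from itertools import combinations

objects = {"duck": {"swims", "flies"}, "eagle": {"flies"}, "carp": {"swims"}}
attributes = {"swims", "flies"}

def extent(attrs):                        # objects having all given attributes
    return {g for g, has in objects.items() if attrs <= has}

def intent(objs):                         # attributes common to all given objects
    return set.intersection(*(objects[g] for g in objs)) if objs else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):      # close every attribute subset
    for combo in combinations(sorted(attributes), r):
        a = extent(set(combo))
        concepts.add((frozenset(a), frozenset(intent(a))))

for ext, inten in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), sorted(inten))
```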
Abstract:
A conceptual information system consists of a database together with conceptual hierarchies. The management system TOSCANA visualizes arbitrary combinations of conceptual hierarchies as nested line diagrams and allows on-line interaction with the database to analyze data conceptually. The paper describes the conception of conceptual information systems and discusses the use of their visualization techniques for on-line analytical processing (OLAP).
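As a rough analogue of the analysis such a system supports, the sketch below crosses two conceptual hierarchies over a small fact table, the tabular counterpart of a nested line diagram. pandas stands in for the database, and the data values are illustrative.

```python
# Minimal OLAP-style sketch: crossing two conceptual hierarchies (region,
# product) over a fact table. Data values are illustrative.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["north", "north", "south", "south", "south"],
    "product": ["book", "pen", "book", "pen", "pen"],
    "amount":  [120, 30, 200, 45, 25],
})

# Each pivot axis corresponds to one conceptual hierarchy; a nested line
# diagram would display the same combination graphically.
print(pd.pivot_table(sales, values="amount", index="region",
                     columns="product", aggfunc="sum", margins=True))
```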