992 results for Database application
Abstract:
Much information on the flavonoid content of Brazilian foods has already been obtained; however, this information is spread across scientific publications and unpublished data. The objectives of this work were to compile and evaluate the quality of national flavonoid data according to the United States Department of Agriculture's Data Quality Evaluation System (USDA-DQES), with few modifications, for future dissemination in the TBCA-USP (Brazilian Food Composition Database). For the compilation, the most abundant compounds in the flavonoid subclasses were considered (flavonols, flavones, isoflavones, flavanones, flavan-3-ols, and anthocyanidins), and analysis of the compounds by HPLC was adopted as the criterion for data inclusion. The evaluation system considers five categories, and the maximum score assigned to each category is 20. Each data point was assigned a confidence code (CC: A, B, C, or D) indicating the quality and reliability of the information. A total of 773 flavonoid data points for 197 Brazilian foods were evaluated. The CC "C" (average) was attributed to 99% of the data and "B" (above average) to 1%. The categories assigned low average scores were number of samples, sampling plan, and analytical quality control (average scores 2, 5, and 4, respectively). The analytical method category received an average score of 9, and the category with the highest score was sample handling (average score 20). These results show that researchers need to be aware of the importance of the number and plan of evaluated samples, and of the complete description and documentation of all steps of method execution and analytical quality control.
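As a rough illustration of the scoring scheme this abstract describes, the sketch below (Python) aggregates the five category scores into a confidence code. The category names follow the abstract; the cut-off values are hypothetical, chosen only so that the reported average scores map to the code "C", since the exact USDA-DQES mapping is not given here.

```python
# Minimal sketch of the five-category scoring scheme; thresholds are
# hypothetical, for illustration only.
CATEGORIES = ["number_of_samples", "sampling_plan", "sample_handling",
              "analytical_method", "analytical_quality_control"]

def confidence_code(scores: dict) -> str:
    """Map per-category scores (0-20 each) to a confidence code A-D."""
    assert set(scores) == set(CATEGORIES)
    assert all(0 <= s <= 20 for s in scores.values())
    quality_index = sum(scores.values()) / len(scores)  # mean of the five categories
    if quality_index >= 17:   # hypothetical cut-off
        return "A"
    if quality_index >= 11:   # hypothetical cut-off
        return "B"
    if quality_index >= 3.4:  # hypothetical cut-off
        return "C"
    return "D"

# Example using the average scores reported in the abstract:
avg = {"number_of_samples": 2, "sampling_plan": 5, "sample_handling": 20,
       "analytical_method": 9, "analytical_quality_control": 4}
print(confidence_code(avg))  # -> "C", consistent with the 99% figure
```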
Abstract:
Rockburst is characterized by the violent explosion of a block of rock, causing sudden rupture, and is quite common in deep tunnels. It is critical to understand the phenomenon of rockburst, focusing on the patterns of occurrence, so that these events can be avoided and/or managed, saving costs and possibly lives. The failure mechanism of rockburst needs to be better understood. Laboratory experiments are under way at the State Key Laboratory for Geomechanics and Deep Underground Engineering (SKLGDUE) in Beijing, and the system is described. A large number of rockburst tests were performed, and their information was collected, stored in a database, and analyzed. Data Mining (DM) techniques were applied to the database in order to develop predictive models for the rockburst maximum stress (σRB) and the rockburst risk index (IRB), parameters that otherwise require the results of such tests to be determined. With the developed models it is possible to predict these parameters with high accuracy using data from the rock mass and the specific project.
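The DM workflow could look roughly like the following sketch, which fits a regression model to a table of test results. The file name, feature columns, and the choice of a random forest are hypothetical stand-ins; the abstract does not say which algorithms or input variables were used.

```python
# Sketch: fitting a regression model to a database of rockburst tests.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

tests = pd.read_csv("rockburst_tests.csv")  # hypothetical export of the test database
X = tests[["depth_m", "ucs_mpa", "elastic_modulus_gpa"]]  # hypothetical features
y = tests["sigma_rb_mpa"]                                 # rockburst maximum stress

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
model.fit(X, y)  # final model for predicting sigma_RB on new projects
```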
Abstract:
A database containing the global and diffuse components of the surface solar hourly irradiation measured from 1 January 2004 to 31 December 2010 at eight stations of the Egyptian Meteorological Authority is presented. For three of these sites (Cairo, Aswan, and El-Farafra), the direct component is also available. In addition, a series of meteorological variables, including surface pressure, relative humidity, temperature, and wind speed and direction, is provided at the same hourly resolution at all stations. The details of the experimental sites and the instruments used for the acquisition are given. Special attention is paid to the quality of the data, and the procedure applied to flag suspicious or erroneous measurements is described in detail. Between 88% and 99% of the daytime measurements are validated by this quality control. Except at Barrani, where the number is lower (13,500), between 20,000 and 29,000 measurements of global and diffuse hourly irradiation are available at all sites for the 7-year period. Similarly, from 9,000 to 13,000 measurements of direct hourly irradiation are provided for the three sites where this component is measured. With its high temporal resolution, this consistent irradiation and meteorological database constitutes a reliable source for estimating the potential of solar energy in Egypt. It is also suited to the study of high-frequency atmospheric processes, such as the impact of aerosols on atmospheric radiative transfer. In the near future, it is planned to extend the present 2004-2010 database on a regular basis.
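A flagging procedure of the kind mentioned here typically starts from physical-consistency limits. The sketch below shows such checks; the exact tests and thresholds used for this database are not given in the abstract, and the column names are hypothetical.

```python
# Sketch: physical-limit quality control for hourly irradiation data.
import pandas as pd

def qc_pass(df: pd.DataFrame) -> pd.Series:
    """Return True for rows that pass all checks (columns in Wh/m^2)."""
    ok = (df["global"] >= 0) & (df["global"] <= df["extraterrestrial"])  # below top-of-atmosphere
    ok &= df["diffuse"] >= 0
    ok &= df["diffuse"] <= 1.05 * df["global"]  # diffuse cannot exceed global (small tolerance)
    return ok

# df = pd.read_csv("cairo_hourly.csv")  # hypothetical file
# valid = df[qc_pass(df)]               # rows kept; the rest are flagged
```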
Abstract:
Query processing is a commonly performed procedure and a vital and integral part of information processing. It is therefore important and necessary for information processing applications to continuously improve the accessibility of data sources as well as the ability to perform queries on them. It is well known that the relational database model and the Structured Query Language (SQL) are currently the most popular tools for implementing and querying databases. However, a certain level of expertise is needed to use SQL and to access relational databases. This study presents a semantic modeling approach that enables the average user to access and query existing relational databases without concern for the database's structure or technicalities. The method includes an algorithm to represent relational database schemas in a semantically richer way, the result of which is a semantic view of the relational database. The user performs queries using an adapted version of SQL, namely Semantic SQL. This method substantially reduces the size and complexity of queries. Additionally, it shortens the database application development cycle and improves maintenance and reliability by reducing the size of application programs. Furthermore, a Semantic Wrapper tool illustrating the semantic wrapping method is presented. I further extend the use of this semantic wrapping method to heterogeneous database management. Relational databases, object-oriented databases, and Internet data sources are considered part of the heterogeneous database environment. Semantic schemas resulting from the algorithm were employed to describe the structure of these data sources in a uniform way, and Semantic SQL was utilized to query the various data sources. As a result, this method provides users with the ability to access and perform queries on heterogeneous database systems in a more natural way.
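The query-size reduction could plausibly look like the following comparison. Semantic SQL syntax is not specified in the abstract, so the second query is only a sketch of the idea, and the schema (students, enrollments, courses) is invented for the example.

```python
# Hypothetical illustration: conventional SQL requires the user to know
# the join structure of the schema ...
plain_sql = """
SELECT s.name, c.title
FROM students s
JOIN enrollments e ON e.student_id = s.id
JOIN courses c ON c.id = e.course_id
WHERE c.term = 'Fall'
"""

# ... whereas a query over the semantic view leaves the joins implicit,
# since the relationships are captured in the semantic schema.
semantic_sql = """
SELECT name, title
FROM student
WHERE course.term = 'Fall'
"""
```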
Abstract:
When considering the deployment of wave energy converters at a given site, it is of prime importance, from both a technical and an economic point of view, to accurately assess the total yearly energy that can be extracted by the given device. In particular, the efficiency of the device should be assessed over the widest possible span of the sea-state spectral bandwidth. Hence, the aim of this study is to assess the biases and errors introduced when the extracted power is classically computed using spectral data derived from analytical functions such as the JONSWAP spectrum, compared with the power derived from actual wave spectra obtained from a spectral hindcast database.
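For reference, the analytical spectrum against which the hindcast spectra are compared has the standard textbook form below (a sketch of the usual parameterization; the study's exact fit parameters are not given here).

```latex
% JONSWAP spectral density (standard parameterization)
S(f) = \alpha g^{2} (2\pi)^{-4} f^{-5}
       \exp\!\left[-\tfrac{5}{4}\left(\tfrac{f_p}{f}\right)^{4}\right]
       \gamma^{\,r},
\qquad
r = \exp\!\left[-\frac{(f-f_p)^{2}}{2\sigma^{2} f_p^{2}}\right],
% with \sigma = 0.07 for f \le f_p, \sigma = 0.09 for f > f_p,
% and peak-enhancement factor \gamma \approx 3.3.
% In deep water the mean wave power per unit crest width then follows as
J = \rho g \int c_g(f)\, S(f)\, \mathrm{d}f
  = \frac{\rho g^{2}}{4\pi} \int \frac{S(f)}{f}\, \mathrm{d}f .
```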
Abstract:
Understanding the factors involved in the internationalization of SMEs in Colombia entails a complex structure of foundations, strategies, theories, models, and organizational methodology, set in the dynamic context of the commercial and financial world. Accordingly, an analysis was carried out of the theories and models of internationalization and of the factors reflected in them that intervene in the development of small and medium-sized enterprises; this was compared with SMEs in Colombia and supported by statistical data from government entities and databases.
Abstract:
Vision is probably our most dominant sense, from which we derive most of the information about the world around us. Through vision we can perceive what things are like, where they are, and how they move. From the images perceived by our visual system we can extract features such as colour, texture, and shape, and thanks to this information we are able to recognize objects even when they are observed under completely different conditions. For example, we can distinguish the same object when it is observed from different viewpoints, distances, or illumination conditions. Computer Vision attempts to emulate the human visual system by means of an image-capture system, a computer, and a set of programs. The desired goal is none other than to develop a system that can understand an image in a way similar to how a person would. This thesis focuses on texture analysis for surface recognition. The main motivation is to solve the problem of classifying textured surfaces that have been captured under different conditions, such as camera distance or illumination direction, thereby reducing the classification errors caused by these changes in capture conditions. This work presents in detail a texture recognition system that allows images of different surfaces captured under different conditions to be classified. The proposed system is based on a 3D model of the surface (including colour and shape information) obtained by the technique known as 4-Source Colour Photometric Stereo (CPS). This information is subsequently used by a texture prediction method to generate new 2D images of the textures under new conditions. These generated virtual images form the basis of the recognition system, since they are used as reference models for the texture classifier. The proposed recognition system combines co-occurrence matrices, for texture feature extraction, with a nearest-neighbour classifier. This classifier also allows the illumination direction present in the test images to be approximated; that is, the system can predict the illumination angle under which the test images were captured. The results obtained in the various experiments demonstrate the viability of both the texture prediction system and the recognition system.
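The feature-extraction/classification pairing named here (co-occurrence matrices plus nearest neighbour) can be sketched generically as below, using scikit-image and scikit-learn. This is an illustration of the technique, not the thesis implementation; the image arrays and labels are assumed to be prepared elsewhere.

```python
# Sketch: GLCM texture features + 1-nearest-neighbour classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(image: np.ndarray) -> np.ndarray:
    """Texture descriptors from a grey-level co-occurrence matrix (8-bit greyscale)."""
    glcm = graycomatrix(image, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# train_images: rendered reference textures; test_images: captured images
# X_train = np.stack([glcm_features(im) for im in train_images])
# clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)
# predicted = clf.predict([glcm_features(im) for im in test_images])
```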
Abstract:
The common GIS-based approach to regional analyses of soil organic carbon (SOC) stocks and changes is to define geographic layers for which unique sets of driving variables are derived, including land use, climate, and soils. These GIS layers, with their associated attribute data, can then be fed into a range of empirical and dynamic models. Common methodologies for collating and formatting regional data sets on land use, climate, and soils were adopted for the project Assessment of Soil Organic Carbon Stocks and Changes at National Scale (GEFSOC). This permitted the development of a uniform protocol for handling the various inputs to the dynamic GEFSOC Modelling System. Consistent soil data sets for Amazon-Brazil, the Indo-Gangetic Plains (IGP) of India, Jordan, and Kenya, the case study areas considered in the GEFSOC project, were prepared using methodologies developed for the World Soils and Terrain Database (SOTER). The approach involved three main stages: (1) compiling new soil geographic and attribute data in SOTER format; (2) using expert estimates and common sense to fill selected gaps in the measured or primary data; and (3) using a scheme of taxonomy-based pedotransfer rules and expert rules to derive soil parameter estimates for similar soil units with missing soil analytical data. The most appropriate approach varied from country to country, depending largely on the overall accessibility and quality of the primary soil data available in the case study areas. The secondary SOTER data sets discussed here are appropriate for a wide range of environmental applications at national scale, including agro-ecological zoning, land evaluation, modelling of soil C stocks and changes, and studies of soil vulnerability to pollution. Estimates of national-scale SOC stocks, calculated using SOTER methods, are presented as a first example of database application. Independent estimates of SOC stocks are needed to evaluate the outcome of the GEFSOC Modelling System for current conditions of land use and climate.
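A per-layer SOC stock calculation of the kind used to produce such national estimates from SOTER-style attribute data follows the standard depth × bulk density × carbon content × stoniness correction; the sketch below shows the arithmetic, with invented layer values for illustration.

```python
# Sketch: SOC stock from soil-profile attribute data.
def soc_stock_t_per_ha(layers):
    """Sum SOC stock (t C / ha) over soil layers.

    Each layer: (thickness_cm, bulk_density_g_cm3, org_c_percent, coarse_frac).
    t/ha per layer = thickness * BD * %C * (1 - coarse fragment fraction),
    since BD [g/cm^3] * depth [cm] * (%C/100) * 10^8 cm^2/ha / 10^6 g/t
    reduces to exactly this product.
    """
    return sum(d * bd * oc * (1.0 - cf) for d, bd, oc, cf in layers)

profile = [
    (20, 1.3, 1.5, 0.05),  # 0-20 cm (hypothetical values)
    (30, 1.4, 0.8, 0.10),  # 20-50 cm
    (50, 1.5, 0.4, 0.10),  # 50-100 cm
]
print(f"{soc_stock_t_per_ha(profile):.1f} t C/ha to 1 m")  # -> 94.3 t C/ha
```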
Abstract:
Data mining is a relatively new field of research whose objective is to acquire knowledge from large amounts of data. In medical and health care areas, due to regulations and the availability of computers, large amounts of data are becoming available [27]. Practitioners are expected to use all these data in their work, yet such a large amount of data cannot be processed by humans in a short time to make diagnoses, prognoses, and treatment schedules. A major objective of this thesis is to evaluate data mining tools in medical and health care applications in order to develop a tool that can help make reasonably accurate decisions. The goal of this thesis is to find a pattern among patients who contracted pneumonia by clustering laboratory values that were recorded every day. This pattern can then be generalized to patients who have not been diagnosed with the disease but whose laboratory values show the same trend as those of pneumonia patients. Ten tables were extracted from a large database of a hospital in Jena for this work. In the ICU (intensive care unit), COPRA, a patient management system, is used. All tables and data are stored in a German-language database.
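Clustering daily lab values to expose a common trend could be sketched as follows. The lab parameters, table layout, and the choice of k-means are illustrative assumptions; the abstract does not specify the algorithm used.

```python
# Sketch: cluster patients by the trajectory of their daily lab values.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

labs = pd.read_csv("icu_lab_values.csv")  # hypothetical export of the 10 tables
# One row per patient-day; pivot to one row per patient, columns = test x day.
features = labs.pivot_table(index="patient_id",
                            columns="day",
                            values=["crp", "leukocytes"])  # hypothetical tests
features = features.fillna(features.mean())

X = StandardScaler().fit_transform(features)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# Inspect which cluster the confirmed pneumonia patients fall into, then
# flag undiagnosed patients assigned to the same cluster.
```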
Abstract:
Company X develops a laboratory information system (LIS) called System Y. The information system has a two-tier database architecture consisting of a production database and a historical database. A database constitutes the backbone of an IS, which makes its design very important; a poorly designed database can cause major problems within an organization. The two databases in System Y are poorly modelled, particularly the historical database. The cause of the poor modelling was unclear concepts, which have remained in the database and in the company organization and caused a general confusion of concepts. The split database architecture itself has evolved into a bottleneck and is the cause of many problems during the development of System Y. Company X is investigating the possibility of integrating the historical database with the production database. The goal of our thesis is to conduct a consequence analysis of such an integration and its effects on System Y, and to create a new design for the integrated database. We also examine and describe the practical effects of confusion of concepts on a database's conceptual design. To achieve the goal of the thesis, five method steps were performed: a preliminary study of the organization, a change analysis, a consequence analysis, and an investigation of the conceptual design of the database. These steps helped identify the changes necessary for the organization, a new design proposal for an integrated database, the impact of the proposed design, and a number of effects of the confusion of concepts on the database.
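One common way to integrate a production and a historical database is a single table with an explicit lifecycle flag, with views preserving the old two-database interfaces during migration. The sketch below illustrates that pattern; the schema is invented, and the thesis's actual design proposal is not reproduced here.

```python
# Hypothetical sketch of an integrated schema replacing the split architecture.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample_result (
    id          INTEGER PRIMARY KEY,
    sample_code TEXT NOT NULL,
    result      TEXT,
    archived_at TIMESTAMP          -- NULL while the record is 'production' data
);
-- Views emulate the former production and historical databases:
CREATE VIEW production_results AS
    SELECT * FROM sample_result WHERE archived_at IS NULL;
CREATE VIEW historical_results AS
    SELECT * FROM sample_result WHERE archived_at IS NOT NULL;
""")
```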
Abstract:
Urinary tract infection (UTI) in pregnancy is important as a consequence of its high incidence during gestation. It is the third most common clinical complication in pregnancy, affecting 10-12% of women, with prevalence increasing in the first trimester, and it may contribute to maternal and infant mortality. Given the relevance of the obstetric and neonatal complications resulting from UTI, these complications must be prevented, because they can endanger the health of pregnant women and newborns, with a direct effect on perinatal morbidity and mortality. On this basis, the objectives of this research were defined as identifying the profile of nurses from the Family Health Strategy (FHS) in the East and West Health Districts of the city of Natal/RN with regard to women with UTI, and verifying the nurses' performance during prenatal consultations. This is an exploratory study with a quantitative approach using a sample of 40 nurses who were actively working during the survey; it was approved by the Research Ethics Committee of the Universidade Federal do Rio Grande do Norte (Protocol no. 232/10 P-CEP/UFRN, opinion no. 080/2011). The tool for data collection was a structured interview. The data collected were organized into an electronic database in Microsoft Excel 2007, then exported and analyzed using the Statistical Package for the Social Sciences (SPSS) version 17.0, and coded, tabulated, and presented in tables and charts with their respective percentage distributions, using descriptive and inferential statistical analysis and the chi-square test at a 5% significance level (distribution in relative and absolute frequencies) for the independent variables. It was observed from these results that longer service of nurses in the FHS of the East and West health districts of the city of Natal/RN contributed to the development of a greater number of activities to control the incidence of UTI in women attended in prenatal care, as shown by statistical significance.
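The kind of chi-square test described here can be sketched as below, with a hypothetical 2×2 contingency table (length of FHS service versus number of UTI-control activities); the actual study counts are not reproduced.

```python
# Sketch: chi-square test of independence at the 5% level, as in the study.
from scipy.stats import chi2_contingency

#                 few activities   many activities
table = [[12, 6],   # < 5 years in the FHS (hypothetical counts)
         [7, 15]]   # >= 5 years in the FHS (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
if p < 0.05:  # 5% significance level
    print("association between length of service and number of activities")
```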
Abstract:
An extended version of HIER, a query-the-user facility for expert systems, is presented. HIER was developed to run over Prolog programs and has been incorporated into systems that support the design of large and complex applications. The framework of the extended version is described, as well as the major features of the implementation. An example is included to illustrate the use of the tool, involving the design of a specific database application.
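The query-the-user idea can be rendered generically as follows: when the inference engine needs a fact it cannot derive, it asks the user and remembers the answer. HIER itself runs over Prolog; this Python sketch is only illustrative, and the sample rule is invented.

```python
# Sketch: a minimal query-the-user mechanism.
known: dict[str, bool] = {}

def holds(fact: str) -> bool:
    """Return the truth of a fact, querying the user if it is unknown."""
    if fact not in known:
        answer = input(f"Is it true that {fact}? (y/n) ")
        known[fact] = answer.strip().lower().startswith("y")
    return known[fact]

# A rule from a hypothetical database-design application:
def needs_index(table: str) -> bool:
    return holds(f"{table} is queried frequently") and \
           holds(f"{table} has more than 10^5 rows")
```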
Abstract:
We propose a way to incorporate non-tariff barriers (NTBs) for the four workhorse models of the modern trade literature in computable general equilibrium (CGE) models. CGE models feature intermediate linkages and thus allow us to study global value chains (GVCs). We show that the Ethier-Krugman monopolistic competition model, the Melitz firm heterogeneity model, and the Eaton and Kortum model can each be defined as an Armington model with generalized marginal costs, generalized trade costs, and a demand externality. As is already known in the literature, in both the Ethier-Krugman model and the Melitz model, generalized marginal costs are a function of the amount of factor input bundles. In the Melitz model, generalized marginal costs are also a function of the price of the factor input bundles: lower factor prices raise the number of firms that can enter the market profitably (the extensive margin), reducing the generalized marginal costs of a representative firm. For the same reason, the Melitz model features a demand externality: in a larger market more firms can enter. We implement the different models in a CGE setting with multiple sectors, intermediate linkages, non-homothetic preferences, and detailed data on trade costs. We find the largest welfare effects from trade cost reductions in the Melitz model. We also employ the Melitz model to mimic changes in NTBs with a fixed-cost character by analysing the effect of changes in fixed trade costs. While we work here with a model calibrated to the GTAP database, the methods developed can also be applied to CGE models based on the WIOD database.
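Schematically, the common structure described here can be written as CES/Armington trade flows in which the other three models differ only through the generalized cost terms and the demand externality. The rendering below uses our own notation, not necessarily the paper's.

```latex
% Bilateral trade flows in the generalized Armington form: country i's
% sales to market j depend on generalized marginal costs \tilde{c}_i,
% generalized trade costs \tilde{\tau}_{ij}, and a demand externality
% \psi_j (with \psi_j = 1 and ordinary costs in the pure Armington case):
X_{ij} \;=\;
  \frac{\bigl(\tilde{c}_i\,\tilde{\tau}_{ij}\bigr)^{1-\sigma}}
       {\sum_k \bigl(\tilde{c}_k\,\tilde{\tau}_{kj}\bigr)^{1-\sigma}}
  \;\psi_j\,E_j ,
% where E_j is expenditure in market j and \sigma the elasticity of
% substitution. In the Melitz case, \tilde{c}_i falls and \psi_j rises
% with market size, as more firms enter profitably (the extensive margin).
```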