997 results for measurement database
Abstract:
Landscape fires show large variability in the amount of biomass or fuel consumed per unit area burned. Fuel consumption (FC) depends on the biomass available to burn and the fraction of that biomass that is actually combusted, and it can be combined with estimates of area burned to assess emissions. While burned area can be detected from space and estimates are becoming more reliable thanks to improved algorithms and sensors, FC is usually modeled or taken selectively from the literature. We compiled the peer-reviewed literature on FC for various biomes and fuel categories in order to better understand FC and its variability, and to provide a database that can be used to constrain biogeochemical models with fire modules. In total we compiled 77 studies covering 11 biomes, including savanna (15 studies, average FC of 4.6 t DM (dry matter) ha⁻¹ with a standard deviation of 2.2; all following values in t DM ha⁻¹), tropical forest (n = 19, FC = 126 ± 77), temperate forest (n = 12, FC = 58 ± 72), boreal forest (n = 16, FC = 35 ± 24), pasture (n = 4, FC = 28 ± 9.3), shifting cultivation (n = 2, FC = 23, with a range of 4.0-43), crop residue (n = 4, FC = 6.5 ± 9.0), chaparral (n = 3, FC = 27 ± 19), tropical peatland (n = 4, FC = 314 ± 196), boreal peatland (n = 2, FC = 42 [42-43]), and tundra (n = 1, FC = 40). Within biomes, the regional variability in the number of measurements was sometimes large, with, for example, only three measurement locations in boreal Russia versus 35 sites in North America. Substantial regional differences in FC were found within the defined biomes: for example, FC of temperate pine forests in the USA was 37% lower than that of Australian forests dominated by eucalypt trees. Besides showing the differences between biomes, FC estimates were also grouped into different fuel classes. Our results highlight the large variability in FC, not only between biomes but also within biomes and fuel classes. This implies that substantial uncertainties are associated with using biome-averaged values to represent FC for whole biomes. Comparing the compiled FC values with co-located Global Fire Emissions Database version 3 (GFED3) FC indicates that modeling studies aiming to represent FC variability within biomes still require improvement, as they have difficulty representing the dynamics that govern FC.
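The abstract's framing, combining burned area with FC to assess emissions, corresponds to the standard bottom-up formulation E = A × FC × EF. A minimal sketch of that calculation is given below; the FC values are the biome averages quoted above, while the burned area and the CO2 emission factor are illustrative assumptions, not figures from the study.

```python
# Minimal sketch of a bottom-up fire emission estimate: E = A * FC * EF.
# FC values (t DM ha^-1) are the biome averages quoted in the abstract;
# the burned area and the CO2 emission factor are illustrative assumptions.

FC_T_DM_PER_HA = {           # biome-average fuel consumption, t DM ha^-1
    "savanna": 4.6,
    "tropical_forest": 126.0,
    "boreal_forest": 35.0,
}

EF_CO2_G_PER_KG_DM = 1650.0  # assumed CO2 emission factor, g per kg DM burned

def co2_emissions_tonnes(biome: str, burned_area_ha: float) -> float:
    """Estimate CO2 emissions (t) for a burned area in a given biome."""
    fuel_burned_t_dm = burned_area_ha * FC_T_DM_PER_HA[biome]
    return fuel_burned_t_dm * EF_CO2_G_PER_KG_DM / 1000.0  # g/kg -> t/t

if __name__ == "__main__":
    print(co2_emissions_tonnes("savanna", burned_area_ha=10_000))
```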
Abstract:
Data management consists of collecting, storing, and processing data into a format that provides value-adding information for decision-making. Advances in data management have made it possible to design increasingly effective database management systems to support business needs; as a result, not only systems designed specifically for reporting but also operational systems now support reporting and data analysis. The research method in the theoretical part is qualitative research, and the empirical part is a case study. The objective of this thesis is to examine database management system requirements from the perspectives of reporting management and data management. The theoretical part identifies these requirements and evaluates the appropriateness of the relational data model; in addition, key performance indicators applied to the operational monitoring of production are studied. The study reveals that appropriate operational key performance indicators for production take into account time, quality, flexibility, and cost aspects; manufacturing efficiency is highlighted in particular. In this thesis, reporting management is defined as continuous monitoring of given performance measures. According to the literature review, a data management tool should cover performance, usability, reliability, scalability, and data privacy in order to fulfil the demands of reporting management. Based on these requirements, a framework is created for the system development phase and used in the empirical part of the thesis, where such a system is designed and built for reporting management purposes for a company operating in the manufacturing industry. Relational data modeling and database architectures are applied when the system is built on a relational database platform.
Abstract:
A survey was undertaken among Swiss occupational health and safety specialists in 2004 to identify uses, difficulties, and possible developments of exposure models. Occupational hygienists (121), occupational physicians (169), and safety specialists (95) were surveyed with an in-depth questionnaire. The results indicate that models are not used much in practice in Switzerland and are reserved to research groups focusing on specific topics. However, various determinants of exposure are often considered important by professionals (emission rate, work activity) and, in some cases, recorded and used (room parameters, operator activity). These parameters cannot be directly included in present models. Nevertheless, more than half of the occupational hygienists think that it is important to develop quantitative exposure models. Among research institutions, however, there is strong interest in using models to address problems that are difficult to study with direct measurements, i.e., retrospective exposure assessment for specific clinical cases and prospective evaluation of new situations or estimation of the effect of selected parameters. In a recent study on cases of acute pulmonary toxicity following waterproofing-spray exposure, exposure models were used to reconstruct the exposure of a group of patients. Finally, in the context of exposure prediction, it is also important to report on a measurement database that has existed in Switzerland since 1991. [Authors]
Abstract:
Providing reliable energy metering services requires precise documentation of the measurements and of the related work stages, together with clear instructions. The amount of measurement data used in the management and settlement of electricity balances is large, so a documentation system is essential for managing the measurement data; smooth execution of the work stages requires mapping them and drawing up instructions on that basis. The thesis can be divided into four parts. The first part examines how electrical energy measurements are used in balance management and settlement, including a review of metering obligations and of the parties who need the measurements. The second part investigates how measurement data is transferred from the voltage and current transformers to the energy management system, and which work stages must be completed before the measurement data can be forwarded to the customer or to further processing. The third part consists of developing a measurement database with the Microsoft Access database program. All available measurement-related information is centralized in the database, which simplifies many measurement-related investigation tasks because information can be retrieved with many different search criteria. The fourth part reviews shortcomings identified in the operation of the measurement service and proposes corrections for them.
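The core idea above, a centralized measurement-metadata store queried by many different search criteria, can be illustrated with a minimal sketch. The schema, field names, and sample row below are hypothetical and are not taken from the Access database described in the thesis; SQLite stands in for Access only to keep the example self-contained.

```python
# Hypothetical sketch of a measurement-metadata table queried by several criteria.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE measurement_point (
        id INTEGER PRIMARY KEY,
        substation TEXT,
        customer TEXT,
        meter_serial TEXT,
        ct_ratio TEXT,
        vt_ratio TEXT,
        commissioned DATE
    )
""")
conn.execute(
    "INSERT INTO measurement_point VALUES "
    "(1, 'SS-01', 'Acme Oy', 'M12345', '100/5', '20000/100', '2004-06-01')"
)

def find_points(substation=None, customer=None):
    """Query measurement points by any combination of criteria."""
    clauses, params = [], []
    if substation:
        clauses.append("substation = ?")
        params.append(substation)
    if customer:
        clauses.append("customer = ?")
        params.append(customer)
    where = " WHERE " + " AND ".join(clauses) if clauses else ""
    return conn.execute("SELECT * FROM measurement_point" + where, params).fetchall()

print(find_points(substation="SS-01"))
```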
Abstract:
Thesis--University of Maryland.
Abstract:
The purpose of this work is the development of a database for a distributed information, measurement and control system that implements methods of optical spectroscopy for research on plasma physics and atomic collisions, and that provides remote access to information and hardware resources within Intranet/Internet networks. The database is built on the Oracle9i database management system, and the client software was implemented in Java. The software follows the Model-View-Controller architecture, which separates application data from the graphical presentation components and the input-processing logic. The following graphical presentations were implemented: measurement of radiation spectra of beam and plasma objects, excitation functions for inelastic collisions of heavy particles, and analysis of data acquired in preceding experiments. The graphical clients interact with the database in the following ways: browsing information on experiments of a given type, searching for data by various criteria, and inserting information about preceding experiments.
Abstract:
Purpose: Several attempts to determine the transit time of a high dose rate (HDR) brachytherapy unit have been reported in the literature, with controversial results. Determining the source speed is necessary to accurately calculate the transient dose in brachytherapy treatments. In previous studies, only the average speed of the source was measured as a parameter for transit dose calculation, which does not account for the realistic movement of the source and is therefore inaccurate for numerical simulations. The purpose of this work is to report the implementation and technical design of an optical-fiber-based detector that directly measures the instantaneous speed profile of a ¹⁹²Ir source in a Nucletron HDR brachytherapy unit. Methods: To accomplish this task, we developed a setup that uses the Cerenkov light induced in optical fibers as the detection signal for the radiation source moving inside the HDR catheter. As the ¹⁹²Ir source travels between two optical fibers separated by a known distance, the thresholds of the induced signals are used to extract the transit time and thus the velocity. The high resolution of the detector enables measurement of the transit time over short fiber separations, providing the instantaneous speed. Results: Accurate, high-resolution speed profiles of the ¹⁹²Ir radiation source traveling from the safe to the end of the catheter and between dwell positions are presented. The maximum and minimum velocities of the source were found to be 52.0 ± 1.0 and 17.3 ± 1.2 cm/s. The authors demonstrate that the radiation source follows uniformly accelerated linear motion with acceleration |a| = 113 cm/s². In addition, the authors compare the average speed measured with the optical fiber detector to values obtained in the literature, showing deviations of up to 265%. Conclusions: To the best of the authors' knowledge, this is the first direct measurement of the instantaneous speed profile of a radiation source in an HDR brachytherapy unit traveling from the unit safe to the end of the catheter and between interdwell positions. The method is feasible and accurate enough to implement in quality assurance tests and provides a unique database for efficient computational simulations of the transient dose. (C) 2010 American Association of Physicists in Medicine. [DOI: 10.1118/1.3483780]
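The measurement principle described above reduces to v = d / Δt between fibers a known distance apart, with successive segments yielding an instantaneous speed profile and a uniform-acceleration fit. A minimal sketch of that reduction is shown below; the fiber positions and threshold-crossing times are synthetic placeholders, not the paper's data.

```python
# Sketch: instantaneous speed from threshold-crossing times in fiber pairs
# separated by a known distance, plus a uniform-acceleration fit.
# All numbers here are synthetic/illustrative, not the paper's measurements.
import numpy as np

fiber_positions_cm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])          # assumed spacing
crossing_times_s   = np.array([0.000, 0.140, 0.250, 0.345, 0.430])   # assumed times

# Speed over each segment: v = d / dt, reported at the segment midpoint time.
dt = np.diff(crossing_times_s)
dx = np.diff(fiber_positions_cm)
v = dx / dt                                  # cm/s
t_mid = crossing_times_s[:-1] + dt / 2.0

# Uniformly accelerated motion implies v(t) is linear in t; fit v = v0 + a*t.
a, v0 = np.polyfit(t_mid, v, 1)
print("segment speeds (cm/s):", np.round(v, 1))
print("fitted |a| (cm/s^2):", round(abs(a), 1))
```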
Abstract:
Background: The COSMIN checklist is a tool for evaluating the methodological quality of studies on measurement properties of health-related patient-reported outcomes. The aim of this study was to determine the inter-rater agreement and reliability of each item score of the COSMIN checklist (n = 114). Methods: 75 articles evaluating measurement properties were randomly selected from the bibliographic database compiled by the Patient-Reported Outcome Measurement Group, Oxford, UK. Raters were asked to assess the methodological quality of three articles using the COSMIN checklist. In a one-way design, percentage agreement and intraclass kappa coefficients or quadratic-weighted kappa coefficients were calculated for each item. Results: 88 raters participated. Of the 75 selected articles, 26 were rated by four to six participants and 49 by two or three participants. Overall, percentage agreement was appropriate (68% of items were above 80% agreement), but the kappa coefficients for the COSMIN items were low (61% below 0.40, 6% above 0.75). Reasons for low inter-rater agreement were the need for subjective judgement and raters being accustomed to different standards, terminology, and definitions. Conclusions: The results indicate that raters often choose the same response option, but that at the item level it is difficult to distinguish between articles. When using the COSMIN checklist in a systematic review, we recommend obtaining some training and experience, having the checklist completed by two independent raters, and reaching consensus on one final rating. The instructions for using the checklist have been improved.
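For two raters scoring the same set of items, percentage agreement and a quadratic-weighted kappa of the kind reported above can be computed as in the sketch below; the rating vectors are invented for illustration and scikit-learn is assumed to be available.

```python
# Sketch: percentage agreement and quadratic-weighted kappa for two raters.
# The ratings below are invented illustrative data, not COSMIN study data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Item scores from two raters on a 4-point scale (e.g. poor..excellent).
rater1 = np.array([3, 2, 2, 4, 1, 3, 2, 4, 3, 2])
rater2 = np.array([3, 2, 3, 4, 1, 2, 2, 4, 3, 3])

percentage_agreement = np.mean(rater1 == rater2) * 100
kappa_quadratic = cohen_kappa_score(rater1, rater2, weights="quadratic")

print(f"agreement: {percentage_agreement:.0f}%")
print(f"quadratic-weighted kappa: {kappa_quadratic:.2f}")
```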
Abstract:
In the recent decade, customer loyalty programs have become very popular, and almost every retail chain seems to have one. Through loyalty programs, companies can collect information about customer behavior and use this information in business and marketing management to guide decision making and resource allocation. The benefits for loyalty program members are often monetary, which affects the profitability of the program. Not all loyalty program members are equally profitable: some purchase products at the recommended retail price and some buy only discounted products. If the company spends a similar amount of resources on all members, the customer margin is lower for members who buy only discounted products. It is therefore vital for a company to measure the profitability of its members in order to calculate customer value. Several different customer value metrics can be used for this; in recent years, customer lifetime value in particular has received a lot of attention and is seen as superior to other customer value metrics. In this master's thesis, customer lifetime value is implemented for the case company's customer loyalty program. The data were collected from the loyalty program's database and represent the year 2012 on the Finnish market. The data were not complete enough to take full advantage of customer lifetime value, and the conclusion is that a new key performance indicator, customer margin, should be introduced in order to run the loyalty program's business profitably. With the customer margin, the company would be able to compute customer lifetime value on a regular basis, enabling efficient resource allocation in marketing.
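The customer lifetime value metric discussed above is, in its simplest textbook form, the discounted sum of expected per-period customer margins weighted by a retention probability. A minimal sketch of that standard formulation follows; the figures are illustrative and the thesis's own model may differ.

```python
# Sketch of a simple discounted customer lifetime value (CLV):
#   CLV = sum_t margin * retention^t / (1 + discount)^t
# The inputs below are illustrative, not the case company's data.

def customer_lifetime_value(annual_margin: float,
                            retention_rate: float,
                            discount_rate: float,
                            horizon_years: int) -> float:
    return sum(
        annual_margin * retention_rate ** t / (1 + discount_rate) ** t
        for t in range(1, horizon_years + 1)
    )

print(round(customer_lifetime_value(annual_margin=120.0,
                                    retention_rate=0.8,
                                    discount_rate=0.1,
                                    horizon_years=5), 2))
```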
Abstract:
The authors identified several specific problems with the measurement of achievement goals in the current literature and illustrated these problems, focusing primarily on A. J. Elliot and H. A. McGregor's (2001) Achievement Goal Questionnaire (AGQ). They attended to these problems by creating the AGQ-Revised and conducting a study that examined the measure's structural validity and predictive utility with 229 (76 male, 150 female, 3 unspecified) undergraduates. The hypothesized factor and dimensional structures of the measure were confirmed and shown to be superior to a host of alternatives. The predictions were nearly uniformly supported with regard to both the antecedents (need for achievement and fear of failure) and consequences (intrinsic motivation and exam performance) of the 4 achievement goals. In discussing their work, the authors highlight the importance and value of additional precision in the area of achievement goal measurement. (PsycINFO Database Record (c) 2012 APA, all rights reserved) (journal abstract)
Abstract:
The aim of this study was to survey radiographic measurement estimation in the assessment of dental implant length according to dentists' confidence. A 19-point questionnaire with closed-ended questions was used by two graduate students to interview 69 dentists during a dental implant meeting. It included 12 questions on over- and underestimation in three radiographic modalities: panoramic (P), conventional tomography (T), and computerized tomography (CT). The database was analyzed with the Epi-Info 6.04 software, and the values from the two modalities P and T were compared using a χ² test. The results showed that 38.24% of the dentists' confidence lay in overestimation of measurements on P, 30.56% on T, and 0% on CT; for underestimated measurements, the percentages were 47.06% on P, 33.33% on T, and 1.92% on CT. The difference in the frequency of under- and overestimation between P and T was statistically significant (χ² = 6.32; P = .0425). CT was the radiographic modality with the highest measurement precision according to dentists' confidence. In conclusion, the interviewed dentists felt that CT was the best radiographic modality with respect to measurement estimation precision in preoperative dental implant assessment.
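A χ² comparison of the kind reported above (over- versus underestimation frequencies on panoramic versus tomographic images) can be run as a contingency-table test; the counts below are hypothetical and are not the survey data behind χ² = 6.32.

```python
# Sketch: chi-square test on a 2x2 contingency table of
# over- vs. underestimation counts for two radiographic modalities.
# The counts are hypothetical, not the survey's raw data.
from scipy.stats import chi2_contingency

#        overestimation  underestimation
table = [[13, 16],   # panoramic (P)
         [11, 12]]   # conventional tomography (T)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
```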
Abstract:
This paper presents a general modeling approach to investigate and predict measurement errors in active energy meters of both induction and electronic types. The measurement error modeling is based on a Generalized Additive Model (GAM), the Ridge Regression method, and experimental meter results provided by a measurement system. The measurement system provides a database of 26 pairs of test waveforms captured in a real electrical distribution system, with different load characteristics (industrial, commercial, agricultural, and residential), covering different harmonic distortions and balanced and unbalanced voltage conditions. To illustrate the proposed approach, the measurement error models are discussed and several results derived from the experimental tests are presented in the form of three-dimensional graphs and generalized as error equations. © 2009 IEEE.
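A simplified sketch of the regression half of such an approach is given below: ridge regression of the meter error on waveform-derived features. The features and target values are synthetic placeholders, and the study's GAM stage is omitted.

```python
# Sketch: ridge regression of meter measurement error on waveform features.
# Features and targets are synthetic; the paper's GAM stage is not reproduced.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 26  # the study used 26 pairs of test waveforms
X = rng.uniform(size=(n, 3))          # e.g. THD, voltage unbalance, load factor
error_pct = 0.5 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 0.05, n)

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X, error_pct)
print("predicted error (%):", model.predict(X[:3]).round(3))
```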
Specialist tool for monitoring the measurement degradation process of induction active energy meters
Abstract:
This paper presents a methodology and a specialist tool for failure probability analysis of induction-type watt-hour meters, considering the main variables related to their measurement degradation processes. The database of the metering park of a distribution company, Elektro Electricity and Services Co., was used to determine the most relevant variables and to feed the data into the software. The model developed to calculate the watt-hour meters' probability of failure was implemented in a tool with a user-friendly interface written in Delphi. Among the main features of this tool are: analysis of the probability of failure by risk range; geographical localization of the meters in the metering park; and automatic sampling of induction-type watt-hour meters based on a risk-classification expert system, in order to obtain information to aid the management of these meters. The main goals of the specialist tool are to track and manage the measurement degradation, maintenance, and replacement processes for induction watt-hour meters. © 2011 IEEE.
Abstract:
Intangible resources have raised the interest of scholars from different research areas because of their importance as crucial factors for firm performance; yet contributions to this field still lack a theoretical framework. This research analyses the state-of-the-art results reached in the literature concerning intangibles, their main features, and evaluation problems and models. In search of a possible theoretical framework, the research draws a kind of indirect analysis of intangibles through the theories of the firm, their critiques, and their developments. The heterodox approaches of evolutionary theory and the resource-based view are indicated as possible frameworks. Based on this theoretical analysis, organization capital (OC) is identified, for its features, as the most important intangible for firm performance. Empirical studies on the relationship between intangibles and firm performance have been sporadic and have failed to reach firm conclusions with respect to OC; in an attempt to fill this gap, the effect of OC is tested on a large sample of European firms using the Compustat Global database. OC is proxied by capitalizing an income-statement item (Selling, General and Administrative expenses) that includes expenses linked to information technology, business process design, reputation enhancement, and employee training. This measure of OC is employed in a cross-sectional estimation of a firm-level production function, modeled with different functional specifications (Cobb-Douglas and Translog), that measures the contribution of OC to firm output and profitability. The results are robust and confirm the importance of OC for firm performance.
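In its Cobb-Douglas form, the production function described above is log-linear, log Y = log A + α log K + β log L + γ log OC, so the OC elasticity γ can be estimated by ordinary least squares. A minimal sketch on synthetic data (not the Compustat Global sample, and omitting the Translog specification) follows.

```python
# Sketch: OLS estimate of a Cobb-Douglas production function with
# organization capital (OC) as an extra input, on synthetic firm data:
#   log Y = log A + alpha*log K + beta*log L + gamma*log OC + noise
import numpy as np

rng = np.random.default_rng(42)
n = 500
log_K, log_L, log_OC = (rng.normal(size=n) for _ in range(3))
log_Y = 0.3 * log_K + 0.6 * log_L + 0.1 * log_OC + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), log_K, log_L, log_OC])
coef, *_ = np.linalg.lstsq(X, log_Y, rcond=None)
print("log A, alpha, beta, gamma (OC elasticity):", coef.round(3))
```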