997 results for estimated parameters
Abstract:
Background: In an agreement assay, it is of interest to evaluate the degree of agreement between the different methods (devices, instruments, or observers) used to measure the same characteristic. In this study we propose a technical simplification for inference about the total deviation index (TDI) estimate to assess agreement between two devices with normally distributed measurements, and we describe its utility for evaluating inter- and intra-rater agreement when more than one reading per subject is available for each device. Methods: We propose to estimate the TDI by constructing a probability interval of the difference in paired measurements between devices, and we then derive a tolerance interval (TI) procedure as a natural way to make inferences about probability limit estimates. We also describe how the proposed method can be used to compute bounds on the coverage probability. Results: The approach is illustrated with a real case example in which the agreement between two instruments, a handheld mercury sphygmomanometer and an OMRON 711 automatic device, is assessed in a sample of 384 subjects whose systolic blood pressure was measured twice with each device. A simulation study is implemented to evaluate the accuracy of the approach and to compare it with two established methods, showing that the TI approximation produces empirical confidence levels reasonably close to the nominal confidence level. Conclusions: The proposed method is straightforward, since the TDI estimate is derived directly from a probability interval of a normally distributed variable in its original scale, without further transformations. A natural way of making inferences about this estimate is then to derive the appropriate TI. Constructions of TIs based on normal populations are implemented in most standard statistical packages, making it simple for any practitioner to apply our proposal to assess agreement.
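To make the proposal concrete, a minimal Python sketch is given below: the TDI point estimate is taken as the largest absolute limit of a central probability interval of the paired differences, and an upper confidence bound is obtained from a two-sided normal tolerance interval (the chi-square approximation for the tolerance factor). The function name, simulated data, and the specific tolerance factor are illustrative assumptions, not necessarily the paper's exact formulas.

```python
# A minimal sketch of the probability-interval approach to the TDI described
# above, assuming paired differences d are approximately normal. The tolerance
# factor uses the standard chi-square approximation for a two-sided normal
# tolerance interval; names and data are illustrative.
import numpy as np
from scipy import stats

def tdi_tolerance_bound(d, p=0.90, conf=0.95):
    """TDI estimate and upper confidence bound for paired differences d."""
    d = np.asarray(d, dtype=float)
    n, mean, sd = d.size, d.mean(), d.std(ddof=1)
    z = stats.norm.ppf((1 + p) / 2)
    # Point estimate: largest absolute limit of the central interval covering p.
    tdi_hat = abs(mean) + z * sd
    # Tolerance factor: chi-square approximation for a two-sided normal TI.
    chi2 = stats.chi2.ppf(1 - conf, n - 1)
    k = z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)
    tdi_upper = abs(mean) + k * sd
    return tdi_hat, tdi_upper

rng = np.random.default_rng(0)
d = rng.normal(1.5, 6.0, size=384)  # e.g., paired SBP differences (mmHg)
print(tdi_tolerance_bound(d))
```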
Abstract:
The aim of this Master's thesis was to optimize the secondary pre-flotation at the Stora Enso Sachsen GmbH mill. The amount of froth was used as the optimization variable, with ISO brightness, yields, and ash content as the optimization parameters. In addition, the effect of flotation consistency on the mill's other flotation processes was studied. The literature section reviewed the flotation event, the contact between the particles to be removed and the air bubbles, froth formation, and the most important deinking flotation cell designs in use. The experimental section examined the effects of lowering the flotation consistency on the mill's flotation processes in terms of ash content, ISO brightness, and the light scattering and light absorption coefficients. The optimization of the secondary pre-flotation was carried out by varying the amount of froth with three different injector sizes (8 mm, 10 mm, and 13 mm), of which the middle one increases the volume flow of the pulp by 30% in the form of air content. The purpose of the optimization was to increase the ISO brightness of the accepted pulp fraction and to increase the fiber and total yields of the secondary pre-flotation. Lowering the flotation consistency had beneficial effects on ISO brightness and the light scattering coefficient in each flotation. The ash content decreased more in the secondary flotations at lower consistency, whereas in the primary flotations the effect was the opposite. The light absorption coefficient improved in the post-flotations at lower consistency, whereas in the pre-flotations the effect was the opposite. The optimization of the secondary pre-flotation resulted in an almost 5% higher ISO brightness of the accepted pulp fraction. The total yield improved by 5% and the fiber yield by 2% as a result of the optimization. The increase in yields produces annual savings as the production capacity of the deinking plant rises by 0.5%. In addition, the reduction of the pulp stream rejected in the secondary pre-flotation produces further savings at the mill's power plant.
Abstract:
The management of room thermal conditions is an important part of building services design. Room thermal conditions are usually modeled with methods in which the thermal dynamics are calculated at a single computation point in the room air and wall by wall in the structures. Typically only the room air temperature is examined. The aim of this Master's thesis was to develop a simulation model for room thermal conditions in which the thermal dynamics of the structures are calculated non-stationarily by energy analysis and the flow field of the room air is modeled stationarily at a selected moment in time by computational fluid dynamics. This yields, for the flow field, distributions of the quantities essential for design, typically air temperature and velocity. The results of the simulation model were compared with measurements made in test rooms. The results proved accurate enough for building services design. Two rooms requiring more detailed modeling than usual were simulated with the model. Comparison calculations were made with different turbulence models, discretization accuracies, and grid densities. To illustrate the simulation results, a customer report presenting the information essential for design was devised. The simulation model provided additional insight especially into temperature stratification, which has typically been estimated from experience. As background for the development of the simulation model, the indoor climate of buildings, thermal conditions, calculation methods, and commercial programs suitable for the modeling were reviewed. The simulation model provides more accurate and detailed information for the design of thermal condition management. Remaining problems in using the model are the long computation time of the flow calculation, turbulence modeling, the accurate definition of boundary conditions for supply air devices, and the convergence of the calculation. The developed simulation model offers a good basis for developing and combining flow calculation and energy analysis programs into a user-friendly design tool for building services.
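A minimal, hypothetical sketch of the coupling strategy described above may clarify the idea: an unsteady energy-analysis model steps the structures through time, and at a selected instant its wall surface temperature becomes a boundary condition for a stationary flow solution of the room air. The lumped wall model and the stand-in "CFD" profile below are illustrative placeholders, not the thesis software.

```python
# Illustrative coupling: unsteady lumped energy model for the wall,
# steady-state flow snapshot at a chosen instant. Names are hypothetical.
import numpy as np

def step_energy_model(wall_temp, t_out, dt=3600.0, c=5e5, u=1.2):
    """Advance a lumped-capacitance wall one time step (explicit Euler)."""
    return wall_temp + dt * u * (t_out - wall_temp) / c

def solve_steady_cfd(wall_temp, inlet_temp=18.0):
    """Stand-in for a stationary CFD solve: return a coarse temperature
    profile between the supply air inlet and the wall surface."""
    return np.linspace(inlet_temp, wall_temp, num=10)

# Drive the wall through a day, then take a flow snapshot at hour 15.
wall = 20.0
for hour in range(24):
    t_out = 25.0 + 5.0 * np.sin(2 * np.pi * hour / 24)
    wall = step_energy_model(wall, t_out)
    if hour == 15:
        field = solve_steady_cfd(wall)
        print(f"snapshot at h={hour}: wall={wall:.1f} C, "
              f"room profile={np.round(field, 1)}")
```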
Abstract:
New nuclear power plant designs are intended to rely more than before on passive safety systems. Compared with active safety systems, operating experience with these systems is still limited. This work examines the operation of passive safety systems and searches for their possible inherent failure modes. The consequences of inherent failures for system performance were assessed with simple calculations and by modeling with the RELAP5/MOD3.2.2 beta thermal-hydraulics code. The examination was limited to the passive safety systems of two different types of nuclear power plant. Indicative dimensions of the safety systems and the parameters of the operating conditions were obtained from the plant vendors' plant descriptions. It turned out that in failure situations of passive safety systems, geometry has a significant effect on system capacity. The analyses also showed that, at plant scale, the safety function of a gravity-driven emergency make-up water system can be fulfilled even if short-term malfunctions occur, such as condensation in the emergency make-up water tank. In contrast, blockage of the flow paths of the heat transfer circuit can be a physically significant factor impairing operation.
Abstract:
Parkinson disease (PD) is associated with a clinical course of variable duration and severity and a combination of motor and non-motor features. Recent PD research has focused primarily on etiology rather than on clinical progression and long-term outcomes. For PD patients, caregivers, and clinicians, information on expected clinical progression and long-term outcomes is of great importance. Today, it remains largely unknown which factors influence long-term clinical progression and outcomes in PD; recent data indicate that the factors that increase the risk of developing PD differ, at least in part, from those that accelerate clinical progression and lead to worse outcomes. Prospective studies will be required to identify factors that influence progression and outcome. We suggest that data for such studies be collected during routine office visits in order to guarantee the high external validity of such research. We report here the results of a consensus meeting of international movement disorder experts from the Genetic Epidemiology of Parkinson's Disease (GEO-PD) consortium, who convened to define which long-term outcomes are of interest to patients, caregivers, and clinicians, and what is presently known about environmental or genetic factors influencing clinical progression or long-term outcomes in PD. We propose a panel of rating scales that collects a significant amount of phenotypic information, can be administered in a routine office visit, and allows international standardization. Research into the progression and long-term outcomes of PD aims at providing individual prognostic information early, adapting treatment choices, and taking specific measures to provide care optimized to the individual patient's needs.
Abstract:
This paper discusses the levels of degradation of some co- and byproducts of the food chain intended for feed uses. In the first part of a research project, 'Feeding Fats Safety', financed by the Sixth Framework Programme (EC), a total of 123 samples were collected from 10 European countries, corresponding to fat co- and byproducts such as animal fats, fish oils, acid oils from refining, recycled cooking oils, and others. Several composition and degradation parameters (moisture, acid value, diacylglycerols and monoacylglycerols, peroxides, secondary oxidation products, polymers of triacylglycerols, fatty acid composition, tocopherols, and tocotrienols) were evaluated. These findings led to the conclusion that some fat by- and coproducts, such as fish oils, lecithins, and acid oils, show poor, nonstandardized quality and that production processes need to be greatly improved. Conclusions are also put forward about the applicability and utility of each analytical parameter for characterization and quality control.
Abstract:
UNLABELLED: The relationship between bone quantitative ultrasound (QUS) and fracture risk was estimated in an individual-level data meta-analysis of 9 prospective studies comprising 46,124 individuals and 3,018 incident fractures. Low QUS is associated with an increase in fracture risk, including hip fracture. The association with osteoporotic fracture decreases with time. INTRODUCTION: The aim of this meta-analysis was to investigate the association between QUS parameters and risk of fracture. METHODS: In an individual-level analysis, we studied participants in nine prospective cohorts from Asia, Europe, and North America. Heel broadband ultrasonic attenuation (BUA, dB/MHz) and speed of sound (SOS, m/s) were measured at baseline. Fractures during follow-up were collected by self-report and in some cohorts confirmed by radiography. An extension of Poisson regression was used to examine the gradient of risk (GR, hazard ratio per 1 SD decrease) between QUS and fracture risk, adjusted for age and time since baseline in each cohort. Interactions between QUS and age and time since baseline were explored. RESULTS: Baseline measurements were available for 46,124 men and women, mean age 70 years (range 20-100). Three thousand and eighteen osteoporotic fractures (787 hip fractures) occurred during follow-up of 214,000 person-years. The summary GR for osteoporotic fracture was similar for BUA (1.45, 95% confidence interval (CI) 1.40-1.51) and SOS (1.42, 95% CI 1.36-1.47). For hip fracture, the respective GRs were 1.69 (95% CI 1.56-1.82) and 1.60 (95% CI 1.48-1.72). However, the GR was significantly higher for both fracture outcomes at lower baseline BUA and SOS (p < 0.001). The predictive value of QUS was the same for men and women and for all ages (p > 0.20), but the predictive value of both BUA and SOS for osteoporotic fracture decreased with time (p = 0.018 and p = 0.010, respectively). For example, the age-adjusted GR of BUA for osteoporotic fracture was 1.51 (95% CI 1.42-1.61) at 1 year after baseline, but 1.36 (95% CI 1.27-1.46) at 5 years. CONCLUSIONS: Our results confirm that quantitative ultrasound is an independent predictor of fracture for men and women, particularly at low QUS values.
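As a hedged sketch of the kind of model used here: a gradient of risk can be estimated by Poisson regression of fracture counts on standardized QUS, with log person-years as an offset, so that the GR per 1 SD decrease is the exponential of the negated QUS coefficient. The simulated data and variable names below are illustrative; the paper's actual extension (cohort-specific, time-varying effects) is not reproduced.

```python
# Illustrative Poisson regression on person-time: fracture counts regressed
# on standardized heel BUA (per SD) and age, offset by log follow-up years.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(20, 100, n)
bua = rng.normal(110, 10, n)         # heel BUA (dB/MHz), illustrative
years = rng.uniform(1, 10, n)        # follow-up person-years
bua_z = (bua - bua.mean()) / bua.std()
lam = np.exp(-6 + 0.04 * (age - 70) - 0.35 * bua_z) * years
fractures = rng.poisson(lam)

X = sm.add_constant(np.column_stack([bua_z, age]))
fit = sm.GLM(fractures, X, family=sm.families.Poisson(),
             offset=np.log(years)).fit()
# GR = hazard ratio per 1 SD *decrease* in BUA.
gr = np.exp(-fit.params[1])
print(f"GR per SD decrease in BUA: {gr:.2f}")
```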
Abstract:
A headspace-gas chromatography-tandem mass spectrometry (HS-GC-MS/MS) method for the trace measurement of perfluorocarbon compounds (PFCs) in blood was developed. Because of the oxygen-carrying capability of PFCs, their misuse in sport for doping purposes has been speculated. The study was therefore extended to validate the method for F-tert-butylcyclohexane (Oxycyte®), perfluoro(methyldecalin) (PFMD), and perfluorodecalin (PFD). The limit of detection was established at 1.2 µg/mL blood for F-tert-butylcyclohexane, 4.9 µg/mL blood for PFMD, and 9.6 µg/mL blood for PFD. The limit of quantification was estimated at 12 µg/mL blood (F-tert-butylcyclohexane), 48 µg/mL blood (PFMD), and 96 µg/mL blood (PFD). The HS-GC-MS/MS technique allows detection at concentrations 1,000 to 10,000 times lower than the dose estimated to be required to produce a biological effect for the investigated PFCs. Thus, the technique could be used to identify PFC misuse several hours, perhaps days, after an injection or a sporting event. Clinical trials with these compounds are still required to evaluate the validation parameters against the calculated estimates.
Abstract:
PURPOSE: To meta-analyze the literature on the clinical performance of Class V restorations in order to assess the factors that influence retention, marginal integrity, and marginal discoloration of cervical lesions restored with composite resins, glass-ionomer-cement-based materials [glass-ionomer cement (GIC) and resin-modified glass ionomers (RMGICs)], and polyacid-modified resin composites (PMRC). MATERIALS AND METHODS: The English-language literature was searched (MEDLINE and SCOPUS) for prospective clinical trials on cervical restorations with an observation period of at least 18 months. The studies had to report retention, marginal discoloration, marginal integrity, and marginal caries, and include a description of the operative technique (beveling of enamel, roughening of dentin, type of isolation). Eighty-one studies involving 185 experiments on 47 adhesives matched the inclusion criteria. The statistical analysis was carried out using the linear mixed model log(-log(Y/100)) = β + α·log(T) + error, with β = log(λ), where β is a summary measure of the non-linear deterioration occurring in each experiment, including a random study effect. RESULTS: On average, 12.3% of the cervical restorations were lost, 27.9% exhibited marginal discoloration, and 34.6% exhibited deterioration of marginal integrity after 5 years. The calculated clinical index corresponded to 17.4% failures after 5 years and 32.3% after 8 years. Higher variability was found for retention loss and marginal discoloration. Hardly any secondary caries lesions were detected, even in experiments with a follow-up longer than 8 years. Restorations placed under rubber-dam in teeth whose dentin was roughened showed a statistically significantly higher retention rate than those placed in teeth with unprepared dentin or without rubber-dam (p < 0.05). However, enamel beveling had no influence on any of the examined variables. Significant differences were found between pairs of adhesive systems and also between pairs of classes of adhesive systems. One-step self-etching adhesives had a significantly worse clinical index than two-step self-etching and three-step etch-and-rinse adhesives (p = 0.026 and p = 0.002, respectively). CONCLUSION: The clinical performance is significantly influenced by the type of adhesive system and/or the adhesive class to which the system belongs. Whether the dentin/enamel is roughened and whether rubber-dam isolation is used also significantly influence the clinical performance. Composite resin restorations placed with two-step self-etching and three-step etch-and-rinse adhesive systems should be preferred over one-step self-etching adhesive systems, GIC-based materials, and PMRCs.
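As a worked illustration of this model: the fitted equation log(-log(Y/100)) = β + α·log(T) with β = log(λ) is equivalent to a Weibull-type survival curve Y(T) = 100·exp(-λ·T^α). The Python sketch below evaluates such a curve; the parameter values are illustrative, chosen only to roughly reproduce the 12.3% retention loss at 5 years reported above, not the fitted study estimates.

```python
# Weibull-type deterioration curve implied by the mixed model above:
# Y(T) = 100 * exp(-lam * T**alpha). Illustrative parameters only.
import numpy as np

def pct_surviving(t_years, lam, alpha):
    """Percentage of restorations still in place after t_years."""
    return 100.0 * np.exp(-lam * t_years**alpha)

lam, alpha = 0.0263, 1.0  # alpha = 1 reduces to exponential decay
for t in (1, 5, 8):
    print(f"t = {t} y: {100 - pct_surviving(t, lam, alpha):.1f}% lost")
```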
Abstract:
INTRODUCTION: Occupational exposure to grain dust causes respiratory symptoms and pathologies. To reduce these effects, major changes have occurred in the grain processing industry over the last twenty years. However, there are no data on the effects of these changes on workers' respiratory health. OBJECTIVES: The aim of this study was to evaluate the respiratory health of grain workers and farmers involved in different steps of the processing of wheat, the most frequently used cereal in Europe, fifteen years after major improvements in collective protective equipment due to mechanisation. MATERIALS AND METHODS: Information on estimated personal exposure to wheat dust was collected from 87 workers exposed to wheat dust and from 62 controls. Lung function (FEV1, FVC, and PEF), exhaled nitric oxide (FeNO), and respiratory symptoms were assessed after the period of highest exposure to wheat during the year. Linear regression models were used to explore the associations between exposure indices and respiratory effects. RESULTS: Acute symptoms (cough, sneezing, runny nose, scratchy throat) were significantly more frequent in exposed workers than in controls. A higher mean exposure level, higher cumulative exposure, and chronic exposure to more than 6 mg·m⁻³ of inhaled wheat dust were significantly associated with decreased spirometric parameters: FEV1 and PEF (40 mL and 123 mL·s⁻¹), FEV1 and FVC (0.4 mL and 0.5 mL per 100 h·mg·m⁻³), and FEV1 and FVC (20 mL and 20 mL per 100 h at >6 mg·m⁻³), respectively. However, no increase in FeNO was associated with increased exposure indices. CONCLUSIONS: The lung function of wheat-related workers is still affected by cumulative exposure to wheat dust, despite improvements in the use of collective protective equipment.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect the findings of these surveys to be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Because of the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, chiefly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. The ability to locate search interfaces to web databases thus becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is specifically designed for use in deep web characterization studies and for constructing directories of deep web resources. Unlike almost all previous approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
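To illustrate the kind of data model described here, the Python sketch below represents a search interface and a form query in a minimal way; the class and field names are hypothetical, not the I-Crawler's actual API or the thesis's form query language.

```python
# A minimal, hypothetical data model for a search interface and a form query.
from dataclasses import dataclass, field

@dataclass
class FormField:
    label: str            # human-readable label extracted from the page
    name: str             # HTML input name submitted to the server
    kind: str = "text"    # text, select, checkbox, ...
    options: list[str] = field(default_factory=list)

@dataclass
class SearchInterface:
    url: str              # action URL of the form
    method: str           # GET or POST
    fields: list[FormField]

@dataclass
class FormQuery:
    interface: SearchInterface
    values: dict[str, str]        # field name -> user-supplied term

    def as_request(self) -> tuple[str, str, dict[str, str]]:
        """Materialize the query as (method, url, parameters)."""
        return self.interface.method, self.interface.url, self.values

# Illustrative usage: query a hypothetical book database.
iface = SearchInterface(
    url="http://example.org/search",
    method="GET",
    fields=[FormField("Title", "title"), FormField("Author", "author")],
)
query = FormQuery(iface, {"title": "deep web"})
print(query.as_request())
```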
Abstract:
The present work is part of a larger project whose purpose is to qualify a Flash memory for automotive applications using a standardized test and measurement flow. High memory reliability and data retention are the most critical parameters in this application. The current work covers the functional tests and the data retention test. The purpose of the data retention test is to obtain the data retention parameters of the designed memory, i.e., the maximum time for which information can be stored under specified conditions without critical charge leakage. For this purpose, the charge leakage from the cells, which manifests as a decrease in the cells' threshold voltage, was measured after long-term high-temperature treatment at several temperatures. The amount of charge lost at each temperature was used to calculate the Arrhenius constant and the activation energy of the discharge process. With these data, the discharge of the cells over long periods at different temperatures can be predicted and the probability of data loss after years of operation can be calculated. The memory chips investigated in this work were 0.035 µm CMOS Flash memory test chips, designed for further use in systems-on-chip for automotive electronics.
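As a hedged illustration of the Arrhenius extrapolation described above: the charge-loss rate at each bake temperature is fitted to rate = A·exp(-Ea/(kB·T)), and the fit is extrapolated to the operating temperature. All numbers below are illustrative, not the test-chip data.

```python
# A minimal sketch of an Arrhenius fit and extrapolation, assuming the
# measured quantity is a charge-loss rate at each bake temperature.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

# Illustrative bake temperatures (K) and observed charge-loss rates (a.u./h).
temps = np.array([423.0, 448.0, 473.0])          # 150, 175, 200 degC
rates = np.array([2.1e-6, 1.4e-5, 7.6e-5])

# Arrhenius law: rate = A * exp(-Ea / (K_B * T)); fit ln(rate) vs 1/T.
slope, intercept = np.polyfit(1.0 / temps, np.log(rates), 1)
ea = -slope * K_B          # activation energy (eV)
a_const = np.exp(intercept)

# Extrapolate to operating temperature and predict the 10-year charge loss.
t_op = 358.0               # 85 degC
rate_op = a_const * np.exp(-ea / (K_B * t_op))
print(f"Ea = {ea:.2f} eV, predicted 10-year loss = {rate_op * 87600:.3g} a.u.")
```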
Abstract:
Pensions, together with savings and investments made during active life, are key elements of retirement planning. Personal choices about standard of living, bequests, and the replacement ratio of pension income with respect to final salary must be considered. This research contributes to financial planning by helping to quantify the economic needs of long-term care. We estimate life expectancy from retirement age onwards. The economic cost of care per unit of service is linked to the expected duration of needed care and the intensity of required services. The expected individual cost of long-term care from an onset of dependence is estimated separately for men and women. Assumptions on the mortality of dependent people relative to the general population are introduced. Parameters defining eligibility for various forms of coverage by the universal public social care of the welfare system are addressed. The impact of the intensity of social services on individual predictions is assessed, and partial coverage by standard private insurance products is also explored. Data were collected by the Spanish Institute of Statistics in two surveys conducted on the general Spanish population in 1999 and 2008. Official mortality records and life table trends were used to create realistic longevity scenarios. We find empirical evidence that the public long-term care system in Spain effectively mitigates the risk of incurring huge lifetime costs. We also find that the most vulnerable group is citizens with moderate disabilities who do not qualify for public social care support. In the Spanish case, the trends between 1999 and 2008 need to be explored further.
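As a minimal, hypothetical sketch of the expected-cost logic: the expected lifetime cost from an onset of dependence is the sum over future years of the probability of surviving in dependence times the annual cost of the required care intensity, discounted to the onset. The mortality, cost, and discount figures below are illustrative, not the Spanish survey estimates.

```python
# Expected discounted lifetime cost of long-term care from onset of
# dependence; all parameter values are illustrative placeholders.

def expected_ltc_cost(annual_mortality, annual_cost, discount=0.02):
    """Expected discounted lifetime cost from onset of dependence.

    annual_mortality: per-year death probability of the dependent person
                      (assumed higher than in the general population).
    annual_cost:      cost per year of the required care intensity.
    """
    surviving, total = 1.0, 0.0
    for year in range(60):  # cap at 60 years beyond onset
        surviving *= (1.0 - annual_mortality)
        total += surviving * annual_cost / (1.0 + discount) ** year
    return total

# Illustrative: severe dependence, 18,000 EUR/year of services.
print(f"{expected_ltc_cost(0.10, 18000):.0f} EUR")
```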
Abstract:
Although surgical aortic valve replacement has been the standard of care for patients with severe aortic stenosis, transcatheter aortic valve implantation (TAVI) is now an established standard of care for patients who are ineligible for, or at high risk from, surgical treatment. The therapeutic choice between TAVI and surgery takes into account the surgical risk (estimated by the EuroSCORE and STS-PROM) as well as many parameters beyond the echocardiographic assessment of the severity of the valvular disease: a multidisciplinary "Heart Team" assessment is needed to evaluate each case in all its complexity.