11 results for "reasonable accuracy"
in Helda - Digital Repository of the University of Helsinki
Abstract:
Rheumatoid arthritis (RA) is an autoimmune disease characterized by synovitis, progressive joint destruction, and disability. Reactive arthritis (ReA) is a sterile joint inflammation following a distant mucosal infection. The clinical course of these diseases is variable and cannot be predicted with reasonable accuracy by clinical and laboratory markers. The predictive value of circulating soluble interleukin-2 receptor (sIL-2R), a marker of lymphocyte activation, measured with the Immulite® automated immunoassay analyzer, was evaluated in two cohorts of RA patients. In 175 patients with active early RA randomized to treatment with either one disease-modifying antirheumatic drug (DMARD) or a combination of three DMARDs and prednisolone, a low baseline sIL-2R level predicted remission after 6 months in patients treated with a single DMARD. In 24 patients with active RA refractory to DMARDs, a low baseline sIL-2R level predicted a rapid clinical response to treatment with infliximab, an anti-tumour necrosis factor antibody. Furthermore, in a cohort of 26 patients with acute ReA, a high baseline sIL-2R level predicted remission after 6 months. Levels of circulating soluble E-selectin (sE-selectin), a marker of endothelial activation, were measured annually by enzyme-linked immunosorbent assay (ELISA) in a cohort of 85 patients with early RA. During a five-year follow-up, sE-selectin levels were associated with the activity and outcome of RA. Neutrophil and monocyte CD11b/CD18 expression measured by flow cytometry, circulating sE-selectin measured by ELISA, and procalcitonin measured by immunoluminometric assay were compared in 28 patients with acute ReA and 16 patients with early RA. The levels of these markers were comparable in ReA, RA, and healthy control subjects. In conclusion, sIL-2R may provide a new predictive marker in early RA treated with a single DMARD and in refractory RA treated with infliximab. In addition, the sIL-2R level predicts remission in acute ReA.
Abstract:
Acute knee injury is a common event throughout life, usually the result of a traffic accident, simple fall, or twisting injury. Over 90% of patients with an acute knee injury undergo radiography. An overlooked fracture or delayed diagnosis can lead to a poor patient outcome. The major aim of this thesis was to retrospectively study imaging of knee injuries, with a special focus on tibial plateau fractures, in patients referred to a level-one trauma center. Multi-detector computed tomography (MDCT) findings of acute knee trauma were studied and compared with radiography, as was whether non-contrast MDCT can detect cruciate ligaments with reasonable accuracy. The prevalence, type, and location of meniscal injuries in magnetic resonance imaging (MRI) were evaluated, particularly to assess the prevalence of unstable meniscal tears in acute knee trauma with tibial plateau fractures. The possibility of analyzing, with conventional MRI, the signal appearance of menisci repaired with bioabsorbable arrows was also studied. The postoperative use of MDCT in surgically treated tibial plateau fractures was studied to establish the frequency of and indications for MDCT, and to assess the common findings and their clinical impact, in a level-one trauma hospital. This thesis focused on MDCT and MRI of knee injuries, and radiographs were analyzed when applicable. Radiography constitutes the basis for imaging acute knee injury, but MDCT can yield information beyond the capabilities of radiography. Especially in severely injured patients, sufficient radiographs are often difficult to obtain, and in those patients radiography is unreliable for ruling out fractures. MDCT detected intact cruciate ligaments with good specificity, accuracy, and negative predictive value, but the assessment of torn ligaments was unreliable. A total of 36% (14/39) of patients with a tibial plateau fracture had an unstable meniscal tear on MRI. When a meniscal tear is properly detected preoperatively, its treatment can be combined with primary fracture fixation, thus avoiding another operation. The number of meniscal contusions was high. Awareness of the imaging features of this meniscal abnormality can help radiologists increase specificity by avoiding false-positive findings of meniscal tears. Menisci repaired with bioabsorbable arrows showed no difference in MRI signal intensities between patients with an operated ACL and patients with an intact ACL. The highest incidence of menisci with an increased signal intensity extending to the meniscal surface was in patients whose surgery had taken place within the previous 18 months. These results may indicate that a rather long time is necessary for menisci to heal completely after arrow repair. Whether menisci with an increased signal intensity extending to the meniscal surface represent improper healing or a re-tear, or merely an earlier stage of the natural healing process, remains unclear, and further prospective studies are needed to clarify this. Postoperative use of MDCT in tibial plateau fractures was rather infrequent even in this large trauma center, but when performed, it revealed clinically significant information, thus benefiting patients' treatment.
Abstract:
In lake-rich regions, gathering information about water quality is challenging because only a small proportion of the lakes can be assessed each year by conventional methods. One technique for improving the spatial and temporal representativeness of lake monitoring is remote sensing from satellites and aircraft. The experimental material included detailed optical measurements in 11 lakes, air- and spaceborne remote sensing measurements with concurrent field sampling, automatic raft measurements, and a national dataset of routine water quality measurements from over 1100 lakes. Analyses of the spatially high-resolution airborne remote sensing data from eutrophic and mesotrophic lakes showed that one or a few discrete water quality observations obtained by conventional monitoring can yield a clear over- or underestimation of the overall water quality in a lake. Using TM-type satellite instruments in addition to routine monitoring substantially increases the number of lakes for which water quality information can be obtained. Preliminary results indicated that coloured dissolved organic matter (CDOM) can be estimated with TM-type satellite instruments, which could possibly be utilised as an aid in estimating the role of lakes in global carbon budgets. Based on the results of reflectance modelling and experimental data, the MERIS satellite instrument has optimal or near-optimal channels for the estimation of turbidity, chlorophyll a, and CDOM in Finnish lakes. MERIS images with a 300 m spatial resolution can provide water quality information for different parts of large and medium-sized lakes and can fill in the gaps left by conventional monitoring. Algorithms that do not require simultaneous field data for algorithm training would increase the amount of remote sensing-based information available for lake monitoring. The MERIS Boreal Lakes processor, trained with the optical data and concentration ranges provided by this study, enabled turbidity estimation with good accuracy without the need for algorithm correction with field measurements, while chlorophyll a and CDOM estimation require further development of the processor. The accuracy of interpreting chlorophyll a via semi-empirical algorithms can be improved by classifying lakes prior to interpretation according to their CDOM level and trophic status. Optical modelling indicated that the spectral diffuse attenuation coefficient can be estimated with reasonable accuracy from the measured water quality concentrations. This provides more detailed information on light attenuation from routine monitoring measurements than is available through Secchi disk transparency. The results of this study improve the interpretation of lake water quality by remote sensing and encourage the use of remote sensing in lake monitoring.
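The last point admits a compact illustration: a common way to estimate the spectral diffuse attenuation coefficient from monitored concentrations is a linear bio-optical model with one term per optically active constituent. The sketch below shows that general approach only; the wavelengths and all coefficient values are illustrative placeholders, not the values fitted in the thesis.

```python
# Minimal sketch of a linear bio-optical model for the spectral diffuse
# attenuation coefficient K_d(lambda). All coefficients are illustrative
# placeholders, not values derived in the thesis.

K_WATER = {440: 0.02, 550: 0.06, 670: 0.45}  # pure-water term [1/m]
C_CHL   = {440: 0.04, 550: 0.02, 670: 0.03}  # per ug/l chlorophyll a
C_TSM   = {440: 0.05, 550: 0.05, 670: 0.04}  # per mg/l suspended matter
C_CDOM  = {440: 0.90, 550: 0.30, 670: 0.10}  # per 1/m CDOM absorption at 400 nm

def kd(wl_nm: int, chl_a: float, tsm: float, a_cdom_400: float) -> float:
    """K_d(lambda) [1/m]: pure-water attenuation plus constituent terms."""
    return (K_WATER[wl_nm] + C_CHL[wl_nm] * chl_a
            + C_TSM[wl_nm] * tsm + C_CDOM[wl_nm] * a_cdom_400)

# A humic lake: high CDOM dominates the attenuation of blue light.
for wl in (440, 550, 670):
    print(f"K_d({wl} nm) = {kd(wl, chl_a=8.0, tsm=3.0, a_cdom_400=4.0):.2f} 1/m")
```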
Abstract:
Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material
Primary reference standards for new drugs, metabolites, designer drugs, or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially, and some not at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS) and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant nitrogen-based quantification is feasible with a nitrogen-specific detector since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street drug samples were analysed directly by LC-TOFMS and LC-CLND using a dilute-and-shoot approach. In the quantitative analysis of amphetamine, heroin, and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established, and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% for plasma and 17% for whole blood samples, with all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation based on model compounds with similar physicochemical characteristics produced clinically feasible results without reference standards.
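Formula-based identification within a ppm mass tolerance reduces to a simple computation. The sketch below illustrates matching a measured m/z against a target database of exact monoisotopic masses using the 10 ppm tolerance quoted above; the database entries are a small illustrative stub, not the thesis's actual database.

```python
# Minimal sketch of formula-based identification by accurate mass: match a
# measured m/z against a database of exact monoisotopic masses within a
# ppm tolerance (10 ppm, as in the screening method described above).
# The database below is an illustrative stub, not the thesis's database.

def ppm_error(measured: float, theoretical: float) -> float:
    """Relative mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def match_mass(measured_mz, database, tol_ppm=10.0):
    """Return (name, ppm error) for all compounds within the tolerance."""
    hits = [(name, ppm_error(measured_mz, mass))
            for name, mass in database.items()
            if abs(ppm_error(measured_mz, mass)) <= tol_ppm]
    return sorted(hits, key=lambda hit: abs(hit[1]))

# Protonated monoisotopic masses [M+H]+ of a few example drugs.
db = {"amphetamine": 136.1121, "tramadol": 264.1958, "cocaine": 304.1543}

print(match_mass(304.1549, db))  # -> [('cocaine', ~2 ppm)]
```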
Abstract:
Sensor networks represent an attractive tool for observing the physical world. Networks of tiny sensors can be used to detect a fire in a forest, to monitor the level of pollution in a river, or to check the structural integrity of a bridge. Application-specific deployments of static-sensor networks have been widely investigated. Commonly, these networks involve a centralized data-collection point and no sharing of data outside the organization that owns it. Although this approach can accommodate many application scenarios, it significantly deviates from the pervasive computing vision of ubiquitous sensing, where user applications seamlessly access, anytime and anywhere, data produced by sensors embedded in the surroundings. With the ubiquity and ever-increasing capabilities of mobile devices, urban environments can help give substance to the ubiquitous sensing vision through Urbanets, spontaneously created urban networks. Urbanets consist of mobile multi-sensor devices, such as smart phones and vehicular systems, public sensor networks deployed by municipalities, and individual sensors incorporated in buildings, roads, or daily artifacts. My thesis is that "multi-sensor mobile devices can be successfully programmed to become the underpinning elements of an open, infrastructure-less, distributed sensing platform that can bring sensor data out of their traditional closed-loop networks into everyday urban applications". Urbanets can support a variety of services ranging from emergency and surveillance to tourist guidance and entertainment. For instance, cars can be used to provide traffic information services that alert drivers to upcoming traffic jams, and phones to provide shopping recommender services that inform users of special offers at the mall. Urbanets cannot be programmed using traditional distributed computing models, which assume underlying networks with functionally homogeneous nodes, stable configurations, and known delays. Conversely, Urbanets have functionally heterogeneous nodes, volatile configurations, and unknown delays. Instead, solutions developed for sensor networks and mobile ad hoc networks can be leveraged to provide novel architectures that address Urbanet-specific requirements while providing useful abstractions that hide the network complexity from the programmer. This dissertation presents two middleware architectures that can support mobile sensing applications in Urbanets. Contory offers a declarative programming model that views Urbanets as a distributed sensor database and exposes an SQL-like interface to developers. Context-aware Migratory Services provides a client-server paradigm in which services are capable of migrating to different nodes in the network in order to maintain a continuous and semantically correct interaction with clients. Compared to previous approaches to supporting mobile sensing urban applications, our architectures are entirely distributed and do not assume constant availability of Internet connectivity. In addition, they allow on-demand collection of sensor data with the accuracy and at the frequency required by each application. These architectures have been implemented in Java and tested on smart phones. They have proved successful in supporting several prototype applications, and experimental results obtained in ad hoc networks of phones have demonstrated their feasibility with reasonable performance in terms of latency, memory, and energy consumption.
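To make the declarative model concrete, the sketch below mimics the style of an SQL-like sensor query over an Urbanet treated as a distributed sensor database. This is a hypothetical illustration: the keywords and clause names are invented for this sketch and are not Contory's actual grammar.

```python
# Hypothetical illustration only: an SQL-like query over an Urbanet viewed
# as a distributed sensor database. The sampling-rate and duration clauses
# are invented for this sketch and are not Contory's actual syntax.
query = """
    SELECT temperature, location
    FROM   nearby_sensors
    WHERE  sensor_type = 'temperature'
    EVERY  60 s
    FOR    30 min
"""
```

The value of such an interface is that the middleware, not the application, decides which nodes answer the query and how the data are routed, hiding node heterogeneity and network volatility from the programmer.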
Abstract:
The increase in global temperature has been attributed to increased atmospheric concentrations of greenhouse gases (GHG), mainly that of CO2. The threat of severe and complex socio-economic and ecological implications of climate change has initiated an international process that aims to reduce emissions, to increase C sinks, and to protect existing C reservoirs. The Kyoto Protocol is an offspring of this process. The Kyoto Protocol and its accords state that signatory countries need to monitor their forest C pools and to follow the guidelines set by the IPCC in the preparation, reporting, and quality assessment of the C pool change estimates. The aims of this thesis were i) to estimate the changes in the carbon stocks of vegetation and soil in Finnish forests from 1922 to 2004, ii) to evaluate the applied methodology using empirical data, iii) to assess the reliability of the estimates by means of uncertainty analysis, iv) to assess the effect of forest C sinks on the reliability of the entire national GHG inventory, and finally, v) to present an application of model-based stratification to a large-scale sampling design of soil C stock changes. The applied methodology builds on measured forest inventory data (or modelled stand data) and uses statistical modelling to predict biomasses and litter production, as well as a dynamic soil C model to predict the decomposition of litter. The mean vegetation C sink of Finnish forests from 1922 to 2004 was 3.3 Tg C a⁻¹, and the mean soil C sink was 0.7 Tg C a⁻¹. Soil is slowly accumulating C as a consequence of the increased growing stock and of soil C stocks that are unsaturated in relation to the current detritus input, which is higher than at the beginning of the period. Annual estimates of vegetation and soil C stock changes fluctuated considerably during the period and were frequently of opposite sign (e.g. vegetation was a sink while soil was a source). The inclusion of vegetation sinks in the national GHG inventory of 2003 increased its uncertainty from between -4% and 9% to ±19% (95% CI), and the further inclusion of upland mineral soils increased it to ±24%. The uncertainties of annual sinks can be reduced most efficiently by concentrating on the quality of the model input data. Despite the decreased precision of the national GHG inventory, the inclusion of uncertain sinks improves its accuracy due to the larger sectoral coverage of the inventory. If national soil sink estimates were prepared by repeated soil sampling of model-stratified sample plots, the uncertainties would be accounted for in the stratum formation and sample allocation. Otherwise, the gains in sampling efficiency from stratification remain smaller. The highly variable and frequently opposite annual changes in ecosystem C pools imply the importance of full ecosystem C accounting. If forest C sink estimates are to be used in practice, average sink estimates seem a more reasonable basis than annual estimates, because annual forest sinks vary considerably, annual estimates are uncertain, and these properties have severe consequences for the reliability of the total national GHG balance. The estimation of average sinks should still be based on annual or even more frequent data because of the non-linear decomposition process, which is influenced by the annual climate. The methodology used in this study to predict forest C sinks can be transferred to other countries with some modifications. The ultimate verification of sink estimates should be based on comparison with empirical data, in which case the model-based stratification presented in this study can serve to improve the efficiency of the sampling design.
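The reported effect of including uncertain sinks can be illustrated with a small Monte Carlo propagation, a standard tool for this kind of inventory uncertainty analysis. All figures in the sketch are invented for illustration and are not the thesis's inventory values.

```python
import numpy as np

# Illustrative Monte Carlo propagation of inventory uncertainty: combine a
# relatively well-known emission total with an uncertain forest C sink.
# All figures are invented for this sketch, not the thesis's inventory data.
rng = np.random.default_rng(42)
n = 100_000

emissions   = rng.normal(loc=85.0, scale=2.0, size=n)   # other sectors (Tg CO2-eq)
forest_sink = rng.normal(loc=-20.0, scale=4.0, size=n)  # uncertain sink (negative)

net = emissions + forest_sink
lo, hi = np.percentile(net, [2.5, 97.5])
half_width_pct = 100 * (hi - lo) / 2 / abs(net.mean())
print(f"net balance {net.mean():.1f} Tg, 95% CI [{lo:.1f}, {hi:.1f}]"
      f" (±{half_width_pct:.0f}%)")

# Including the uncertain sink lowers the precision of the total balance
# (wider relative CI) while improving its accuracy through fuller coverage.
```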
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using the megavoltage beams of linear accelerators. Tumor eradication and normal tissue complications correlate with the dose absorbed in tissues. This dependence is normally steep, and it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure contain uncertainties, requiring strict quality assurance. From the hospital physicist's point of view, technical quality control (QC), dose calculations, and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called the output, stays within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program that defines the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data. The uncertainty of such data sets the limit on the best achievable calculation accuracy. All these dosimetric measurements require good experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity-modulated radiation therapy (IMRT) than in conventional RT. This is due to the steep dose gradients produced within or close to healthy tissues lying only a few millimetres from the targeted volume. This thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head and neck IMRT. A method was developed for estimating the effect of using different dosimetric QC programs on the overall uncertainty of dose. Data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements. A method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, and a sufficient accuracy level was estimated for the beam data. A method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head and neck IMRT when the function of the major salivary glands is to be spared; these criteria are based on the dose response obtained for the glands. Random measurement errors could be reduced, enabling action levels to be lowered and the measurement time interval to be prolonged from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions, and criteria was found to facilitate the avoidance of maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
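As a minimal sketch of the model-fitting idea, the snippet below separates slow output drift (stability) from the scatter of the QC measurement itself (reproducibility) by fitting a linear trend to simulated monthly output checks; the numbers are illustrative and not the thesis's data or exact method.

```python
import numpy as np

# Minimal sketch: fit a linear drift model to monthly accelerator output
# QC measurements, separating slow output drift (stability) from the
# scatter of the measurement itself (reproducibility). Simulated data.
rng = np.random.default_rng(1)
months = np.arange(12)
output = 100.0 + 0.05 * months + rng.normal(0.0, 0.3, months.size)
# output in % of nominal: slow drift plus random measurement error

drift, baseline = np.polyfit(months, output, 1)
residual_sd = np.std(output - (baseline + drift * months), ddof=2)

print(f"estimated output drift: {drift:+.3f} %/month")
print(f"measurement reproducibility (1 SD): {residual_sd:.2f} %")

# Good reproducibility and small drift argue for lower action levels and a
# longer QC interval without compromising overall dose accuracy.
```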
Abstract:
This study contributes to the neglect effect literature by looking at relative trading volume in terms of value. The results for the Swedish market show a significant positive relationship between the accuracy of estimation and the relative trading volume. Market capitalisation and analyst coverage have been used in prior studies as proxies for neglect. These measures, however, do not take into account the effort analysts put in when estimating corporate pre-tax profits. I also find evidence that the industry of the firm influences the accuracy of estimation. In addition, supporting earlier findings, loss-making firms are associated with larger forecasting errors. Further, I find that the average forecast error in Sweden increased in the year 2000.
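A minimal sketch of the kind of cross-sectional regression behind such results appears below: the absolute forecast error is regressed on log relative trading volume with a loss-firm dummy. All data are simulated, and the variable names are illustrative, not the study's actual specification.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative cross-sectional regression: absolute analyst forecast error
# on log relative trading volume and a loss-firm dummy. Simulated data;
# not the study's actual data or specification.
rng = np.random.default_rng(0)
n = 500
rel_volume = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # relative trading volume
loss_firm = rng.binomial(1, 0.2, size=n)                 # 1 = loss-making firm
abs_error = (0.10 - 0.02 * np.log(rel_volume)            # more volume, smaller error
             + 0.05 * loss_firm                          # losses, larger error
             + rng.normal(0.0, 0.03, size=n))

X = sm.add_constant(np.column_stack([np.log(rel_volume), loss_firm]))
fit = sm.OLS(abs_error, X).fit()
print(fit.summary(xname=["const", "log_rel_volume", "loss_firm"]))
```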
Abstract:
This thesis attempts to improve the models used for predicting forest stand structure for practical purposes, e.g. forest management planning (FMP), in Finland. Comparisons were made between the Weibull and Johnson's SB distributions and between alternative regression estimation methods. The data used for the preliminary studies were local, but the final models were based on representative data. The models were validated mainly in terms of the bias and RMSE of the main stand characteristics (e.g. volume) using independent data. The bivariate SBB distribution model was used to mimic realistic variation in tree dimensions by including within-diameter-class height variation. With the traditional method, using the diameter distribution with the expected height resulted in reduced height variation, whereas the alternative bivariate method utilized the error term of the height model. The lack of models for FMP was covered to some extent by the models for peatland and juvenile stands. The validation of these models showed that the more sophisticated regression estimation methods provided slightly improved accuracy. A flexible prediction application for stand structure consisted of seemingly unrelated regression models for eight stand characteristics, the parameters of three optional distributions, and Näslund's height curve. The cross-model covariance structure was used for a linear prediction application, in which the expected values of the models were calibrated with the known stand characteristics. This provided a framework for validating the optional distributions and the optional sets of stand characteristics. The height distribution is recommended for the earliest stage of stands because it is continuous. From a mean height of about 4 m upwards, the Weibull dbh-frequency distribution is recommended in young stands if the input variables consist of arithmetic stand characteristics. In advanced stands, basal area-dbh distribution models are recommended. Näslund's height curve proved useful. Some efficient transformations of stand characteristics are introduced, e.g. the shape index, which combines the basal area, the stem number, and the median diameter. The shape index enabled the SB model for peatland stands to detect the large variation in stand densities. This model also demonstrated reasonable behaviour for stands on mineral soils.
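Näslund's height curve mentioned above has the closed form h = 1.3 + d²/(a + b·d)², giving tree height h (m) from breast-height diameter d (cm). The sketch below fits it to simulated diameter-height pairs; the data and parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit Näslund's height curve, h = 1.3 + d^2 / (a + b*d)^2,
# to diameter-height pairs. The data below are simulated for illustration.
def naslund(d, a, b):
    """Tree height [m] from breast-height diameter d [cm]."""
    return 1.3 + d**2 / (a + b * d)**2

rng = np.random.default_rng(3)
d = rng.uniform(5.0, 35.0, 60)                          # dbh sample [cm]
h = naslund(d, 1.5, 0.22) + rng.normal(0.0, 0.8, 60)    # "observed" heights

(a_hat, b_hat), _ = curve_fit(naslund, d, h, p0=(1.0, 0.2))
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}")
print(f"height at dbh 20 cm: {naslund(20.0, a_hat, b_hat):.1f} m")
```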
Abstract:
In meteorology, observations and forecasts of a wide range of phenomena (for example, snow, clouds, hail, fog, and tornadoes) can be categorical, that is, they can only have discrete values (e.g., "snow" and "no snow"). Concentrating on satellite-based snow and cloud analyses, this thesis explores methods that have been developed for the evaluation of categorical products and analyses. Different algorithms for satellite products generate different results; sometimes the differences are subtle, sometimes all too visible. In addition to differences between algorithms, the satellite products are influenced by physical processes and conditions, such as diurnal and seasonal variation in solar radiation, topography, and land use. The analysis of satellite-based snow cover analyses from NOAA, NASA, and EUMETSAT, and of snow analyses for numerical weather prediction models from FMI and ECMWF, was complicated by the fact that the true snow extent was not known, so we were forced simply to measure the agreement between the different products. Sammon mapping, a multidimensional scaling method, was then used to visualize the differences between the products. The trustworthiness of the results for cloud analyses [the EUMETSAT Meteorological Products Extraction Facility cloud mask (MPEF), together with the Nowcasting Satellite Application Facility (SAFNWC) cloud masks provided by Météo-France (SAFNWC/MSG) and the Swedish Meteorological and Hydrological Institute (SAFNWC/PPS)] compared with ceilometers of the Helsinki Testbed was estimated by constructing confidence intervals (CIs). Bootstrapping, a statistical resampling method, was used to construct the CIs, especially in the presence of spatial and temporal correlation. Reference data for validation are constantly in short supply. In general, the needs of a particular project drive the requirements for evaluation, for example, the accuracy and the timeliness of the particular data and methods. In this vein, we discuss tentatively how data provided by the general public, e.g., photos shared on the Internet photo-sharing service Flickr, can be used as a new source of validation data. Results show that they are of reasonable quality, and their use for case studies can be warmly recommended. Last, the use of cluster analysis on meteorological in-situ measurements was explored: the AutoClass algorithm was used to construct compact representations of the synoptic conditions of fog at Finnish airports.
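As a minimal sketch of the bootstrapping step, the snippet below constructs a percentile confidence interval for a simple categorical agreement score between a cloud-mask product and ceilometer observations. The data are simulated, and a real analysis would resample blocks of consecutive observations to respect the spatial and temporal correlation mentioned above.

```python
import numpy as np

# Minimal sketch: percentile-bootstrap CI for a categorical agreement score
# (satellite cloud mask vs. ceilometer, cloud / no cloud). Simulated data;
# with correlated observations, resample blocks instead of single points.
rng = np.random.default_rng(7)
n = 1000
ceilometer = rng.binomial(1, 0.6, n)          # 1 = cloud observed
flips = rng.binomial(1, 0.15, n)              # ~15% disagreement rate
cloud_mask = ceilometer ^ flips               # simulated satellite product

agreement = float((ceilometer == cloud_mask).mean())

boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, n, n)               # resample observation indices
    boot[b] = (ceilometer[idx] == cloud_mask[idx]).mean()
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"agreement {agreement:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```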