996 results for Measurement uncertainty
Abstract:
Objective To evaluate the sonographic measurement of subcutaneous and visceral fat in correlation with the grade of hepatic steatosis. Materials and Methods In the period from October 2012 to January 2013, 365 patients were evaluated. The subcutaneous and visceral fat thicknesses were measured with a convex, 3–4 MHz transducer placed transversely 1 cm above the umbilicus. The distance between the internal aspect of the abdominal rectus muscle and the posterior aortic wall in the abdominal midline was taken as the measurement of visceral fat. Increased liver echogenicity, blurring of vascular margins and increased acoustic attenuation were the parameters considered in the quantification of hepatic steatosis. Results Steatosis was found in 38% of the study sample. In the detection of moderate to severe steatosis, the area under the ROC curve was 0.96 for women and 0.99 for men, indicating cut-off values for visceral fat thickness of 9 cm and 10 cm, respectively. Conclusion The present study demonstrated a correlation between steatosis and visceral fat thickness and suggested cut-off values for visceral fat thickness to differentiate normality from risk for steatohepatitis.
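A tiny sketch applying the sex-specific cut-off values quoted above (9 cm for women, 10 cm for men); the function name, the inclusive threshold, and the labels are illustrative assumptions, not from the study:

```python
# Hypothetical helper applying the abstract's sex-specific visceral fat
# thickness cut-offs; whether the cut-off itself counts as "at risk"
# (>= vs >) is an assumption.
def at_risk_for_steatohepatitis(visceral_fat_cm, sex):
    """Return True if visceral fat thickness reaches the sex-specific cut-off."""
    cutoff_cm = {"female": 9.0, "male": 10.0}[sex]
    return visceral_fat_cm >= cutoff_cm

print(at_risk_for_steatohepatitis(9.5, "female"))  # True
print(at_risk_for_steatohepatitis(9.5, "male"))    # False
```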
Abstract:
BACKGROUND: Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries. METHODS: We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m² [underweight], 18·5 kg/m² to <20 kg/m², 20 kg/m² to <25 kg/m², 25 kg/m² to <30 kg/m², 30 kg/m² to <35 kg/m², 35 kg/m² to <40 kg/m², ≥40 kg/m² [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue. FINDINGS: We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m² (95% credible interval 21·3-22·1) in 1975 to 24·2 kg/m² (24·0-24·4) in 2014 in men, and from 22·1 kg/m² (21·7-22·5) in 1975 to 24·4 kg/m² (24·2-24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m² in central Africa and south Asia to 29·2 kg/m² (28·6-29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m² (21·4-22·3) in south Asia to 32·2 kg/m² (31·5-32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5-17·4) to 8·8% (7·4-10·3) in men and from 14·6% (11·6-17·9) to 9·7% (8·3-11·1) in women.
South Asia had the highest prevalence of underweight in 2014, 23·4% (17·8-29·2) in men and 24·0% (18·9-29·3) in women. Age-standardised prevalence of obesity increased from 3·2% (2·4-4·1) in 1975 to 10·8% (9·7-12·0) in 2014 in men, and from 6·4% (5·1-7·8) to 14·9% (13·6-16·1) in women. 2·3% (2·0-2·7) of the world's men and 5·0% (4·4-5·6) of women were severely obese (ie, had BMI ≥35 kg/m²). Globally, prevalence of morbid obesity was 0·64% (0·46-0·86) in men and 1·6% (1·3-1·9) in women. INTERPRETATION: If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025, global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia. FUNDING: Wellcome Trust, Grand Challenges Canada.
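The BMI bands listed in the Methods section can be sketched as a simple classifier; the label strings are paraphrases of the study's categories and the function name is illustrative:

```python
# Illustrative classifier for the study's seven BMI bands (kg/m^2);
# labels in parentheses are paraphrased, not the paper's exact wording.
def bmi_category(bmi):
    """Map a BMI value (kg/m^2) to the study's category bands."""
    bands = [
        (18.5, "underweight (<18.5)"),
        (20.0, "18.5 to <20"),
        (25.0, "20 to <25"),
        (30.0, "25 to <30"),
        (35.0, "30 to <35"),
        (40.0, "35 to <40"),
    ]
    for upper, label in bands:
        if bmi < upper:
            return label
    return "morbid obesity (>=40)"
```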
Abstract:
The most suitable method for the estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables, such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, which assume that the data come from a determinate family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, arises as the most reliable and generalizable method of size diversity evaluation.
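The recommended procedure — standardize sizes by the sample geometric mean, estimate the pdf with a Gaussian kernel, then integrate -p(x) ln p(x) — can be sketched as follows; the fixed bandwidth `h` and the integration grid are illustrative choices, not the paper's bandwidth-selection rule:

```python
import math

def geometric_mean(xs):
    """Geometric mean of positive values."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def gaussian_kde(xs, h):
    """Gaussian kernel density estimate with bandwidth h."""
    n = len(xs)
    norm = 1.0 / (n * h * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs)
    return pdf

def size_diversity(sizes, h=0.5, grid=2000):
    """Shannon size diversity, -integral of p(x) ln p(x) dx, after
    standardizing the sizes by the sample geometric mean."""
    g = geometric_mean(sizes)
    std = [s / g for s in sizes]
    pdf = gaussian_kde(std, h)
    # Midpoint-rule integration over the support of the kernel estimate
    lo, hi = min(std) - 4.0 * h, max(std) + 4.0 * h
    dx = (hi - lo) / grid
    total = 0.0
    for i in range(grid):
        p = pdf(lo + (i + 0.5) * dx)
        if p > 0.0:
            total -= p * math.log(p) * dx
    return total
```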
Abstract:
In mathematical modeling, the estimation of the model parameters is one of the most common problems. The goal is to seek parameters that fit the measurements as well as possible. There is always error in the measurements, which implies uncertainty in the model estimates. In Bayesian statistics, all unknown quantities are presented as probability distributions. If there is knowledge about the parameters beforehand, it can be formulated as a prior distribution. Bayes' rule combines the prior and the measurements into a posterior distribution. Mathematical models are typically nonlinear, so producing statistics for them requires efficient sampling algorithms. In this thesis, the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms as well as Gibbs sampling are introduced, along with different ways to specify prior distributions. The main issue is measurement error estimation and how to obtain prior knowledge of the variance or covariance. Variance and covariance sampling is combined with the algorithms above. Examples of the hyperprior models are applied to the estimation of model parameters and error in a case with outliers.
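A minimal sketch of the random-walk Metropolis-Hastings algorithm mentioned above, targeting a toy standard-normal posterior; the step size, burn-in, and sample count are illustrative choices, not the thesis's settings:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings with a Gaussian proposal."""
    rng = random.Random(seed)
    x = x0
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        # Accept with probability min(1, posterior ratio); work in log space
        if math.log(rng.random() + 1e-300) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy posterior: standard normal, log-density up to an additive constant
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
burned = samples[5000:]  # discard burn-in
post_mean = sum(burned) / len(burned)
```

The chain's mean and variance should approximate the target's 0 and 1 once the burn-in portion is discarded.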
Abstract:
Calculation of the uncertainty of results represents the new paradigm in the area of the quality of measurements in laboratories. The Guide to the Expression of Uncertainty in Measurement of the ISO (International Organization for Standardization) assumes that the analyst is asked to give a parameter that characterizes the range of values that could reasonably be associated with the result of the measurement. In practice, the uncertainty of the analytical result may arise from many possible sources: sampling, sample preparation, matrix effects, equipment, standards and reference materials, among others. This paper suggests a procedure for calculating the uncertainty components of an analytical result due to sample preparation (uncertainty of weights and volumetric equipment) and the instrument's analytical signal (calibration uncertainty). A numerical example is carefully explained based on measurements obtained for cadmium determination by flame atomic absorption spectrophotometry. Results obtained for the components of the total uncertainty showed that the main contribution to the uncertainty of the analytical result came from the calibration procedure.
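The combination of independent standard uncertainty components by root-sum-of-squares, as in the ISO guide, can be sketched as follows; the numeric component values are illustrative assumptions, not the paper's cadmium data:

```python
import math

def combined_uncertainty(components):
    """Combined standard uncertainty of independent (uncorrelated)
    components: root sum of squares, as in the ISO GUM."""
    return math.sqrt(sum(u * u for u in components))

# Illustrative relative standard uncertainties (assumed, not the paper's data)
u_weighing    = 0.0012  # analytical balance / weights
u_volumetric  = 0.0020  # volumetric flasks and pipettes
u_calibration = 0.0085  # calibration curve of the spectrophotometer

u_c = combined_uncertainty([u_weighing, u_volumetric, u_calibration])
U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2 (~95 % level)
```

With these illustrative numbers the calibration term dominates the combined uncertainty, mirroring the paper's conclusion that the calibration procedure was the main contributor.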
Abstract:
Induction motors are widely used in industry, and they are generally considered very reliable. They often have a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One of the causes of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and the outer ring and the rolling elements of a bearing. This phenomenon can be considered similar to that of electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents with special reference to bearing current detection in induction motors. A bearing current detection method based on radio frequency impulse reception and detection is studied. The thesis describes how a motor can work as a "spark gap" transmitter and discusses a discharge in a bearing as a source of radio frequency impulses. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several meters from the motor. The issues of interference, detection, and location techniques are discussed. The applicability of the method is shown with a series of measurements with a specially constructed test motor and an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive way to detect harmful bearing currents in the drive system.
If bearing current mitigation techniques are applied, their effectiveness can be immediately verified with the proposed method. The method also gives a tool to estimate the harmfulness of the bearing currents by making it possible to detect and locate individual discharges inside the bearings of electric motors.
Abstract:
The purpose of this thesis was to investigate creating and improving category purchasing visibility for corporate procurement by utilizing financial information. This thesis was part of the global category-driven spend analysis project of Konecranes Plc. To create a general understanding of building category-driven corporate spend visibility, the IT architecture and the purchasing parameters needed for spend analysis were described. In the case part of the study, three manufacturing plants of the Konecranes Standard Lifting, Heavy Lifting and Services business areas were examined. This included investigating the operative IT system architecture and the processes needed for building corporate spend visibility. The key finding of this study was the identification of the processes needed for gathering purchasing data elements while creating corporate spend visibility in a fragmented source system environment. As an outcome of the study, a roadmap presenting further development areas was introduced for Konecranes.
Abstract:
The research around performance measurement and management has focused mainly on the design, implementation and use of performance measurement systems. However, there is little evidence about the actual impacts of performance measurement on the different levels of business and operations of organisations, as well as the underlying factors that lead to a positive impact of performance measurement. The study thus focuses on this research gap, which can be considered both important and challenging to cover. The first objective of the study was to examine the impacts of performance measurement on different aspects of management, leadership and the quality of working life, after which the factors that facilitate and improve performance and performance measurement at the operative level of an organisation were examined. The second objective was to study how these factors operate in practice. The third objective focused on the construction of a framework for successful operative level performance measurement and the utilisation of the factors in the organisations. The research objectives have been studied through six research papers utilising empirical data from three separate studies, including two sets of interview data and one set of quantitative data. The study applies mainly the hermeneutical research approach. As a contribution of the study, a framework for successful operative level performance measurement was formed by matching the findings of the current study with performance measurement theory. The study extends the prior research regarding the impacts of performance measurement and the factors that have a positive effect on operative level performance and performance measurement. The results indicate that under suitable circumstances, performance measurement has positive impacts on different aspects of management, leadership, and the quality of working life.
The results reveal, for example, that the employees' and the management's perceptions of the impacts of performance measurement on leadership style differ considerably. Furthermore, the fragmented literature has been reorganised into six factors that facilitate and improve the performance of the operations and employees, and the use of performance measurement at the operative level of an organisation. Regarding the managerial implications of the study, managers who work with performance measurement can utilise the framework, for example, by putting its different phases into practice.
Abstract:
This thesis was produced for the Technology Marketing unit at the Nokia Research Center. Technology marketing was a new function at the Nokia Research Center and needed an established framework with the capacity to take multiple aspects of team performance measurement into account. Technology marketing functions had existed in other parts of Nokia, yet no single method had been agreed upon for measuring their performance. The purpose of this study was to develop a performance measurement system for Nokia Research Center Technology Marketing. The target was that Nokia Research Center Technology Marketing would have a framework of separate metrics, including benchmarking of starting levels and target values for future planning (numeric values were kept confidential within the company). As a result of this research, the Balanced Scorecard model of Kaplan and Norton was chosen as the performance measurement system for Nokia Research Center Technology Marketing. This research selected the indicators that were utilized in the chosen performance measurement system. Furthermore, the performance measurement system was defined to guide the Head of Marketing in managing the Nokia Research Center Technology Marketing team. During the research process, the team mission, vision, strategy and critical success factors were outlined.
Abstract:
This thesis presents the calibration and comparison of two systems: a machine vision system that uses 3-channel RGB images and a line-scanning spectral system. Calibration is the process of checking and adjusting the accuracy of a measuring instrument by comparing it with standards. For the RGB system, self-calibrating methods for finding various parameters of the imaging device were developed. Color calibration was done, and the colors produced by the system were compared to the known color values of the target. Software drivers for the Sony Robot were developed, and a mechanical part to connect a camera to the robot was designed. For the line-scanning spectral system, methods for calibrating the alignment of the system and for measuring the dimensions of the line scanned by the system were developed. Color calibration of the spectral system is also presented.
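As a simplified illustration of color calibration against known target values, a per-channel linear (gain/offset) least-squares fit can be sketched as follows; the per-channel linear model and the patch data are assumptions for illustration, not the thesis's actual method:

```python
# Illustrative per-channel color calibration: fit target = gain * measured
# + offset by ordinary least squares for one channel (hypothetical data).
def fit_gain_offset(measured, target):
    """Least-squares gain/offset fit for a single color channel."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(target) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, target))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Hypothetical patch values for one channel of a calibration target
measured = [10.0, 60.0, 120.0, 200.0]
target   = [0.0, 64.0, 128.0, 212.0]
gain, offset = fit_gain_offset(measured, target)
corrected = [gain * x + offset for x in measured]
```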
Abstract:
The aim of this project is to get used to another kind of programming. Until now, I have used very complex programming languages to develop applications or even to program microcontrollers, but the PicoCricket system is evidence that we do not need such complex development tools to build functional devices. The PicoCricket system is a clear example of simple programming that makes devices work the way we program them. There is an easy but effective way to program small devices by simply stating what we want them to do. We cannot implement complex algorithms and mathematical operations, but we can program these devices in a short time. Nowadays, the easier and faster we produce, the more we earn, so the tendency is to develop fast, cheaply and easily, and the PicoCricket system makes that possible.
Abstract:
Aims: This study was carried out to evaluate the feasibility of two different methods to determine free flap perfusion in cancer patients undergoing major reconstructive surgery. The hypothesis was that low perfusion in the flap is associated with flap complications. Patients and methods: Between August 2002 and June 2008, at the Department of Otorhinolaryngology – Head and Neck Surgery, the Department of Surgery, and the PET Centre, Turku, 30 consecutive patients with 32 free flaps were included in this study. The perfusion of the free microvascular flaps was assessed with positron emission tomography (PET) and radioactive water ([15O]H2O) in 40 radiowater injections in 33 PET studies. Furthermore, 24 free flaps were monitored with continuous tissue oxygen measurement using flexible polarographic catheters for an average of three postoperative days. Results: Of the 17 patients operated on for head and neck (HN) cancer and reconstructed with 18 free flaps, three re-operations were carried out due to poor tissue oxygenation as indicated by ptiO2 monitoring results, and three other patients were reoperated on for postoperative hematomas in the operated area. Blood perfusion assessed with PET (BFPET) was above 2.0 mL/min/100 g in all flaps, and a low flap-to-muscle BFPET ratio appeared to correlate with poor survival of the flap. Survival in this group of HN cancer patients was 9.0 months (median, range 2.4-34.2) after a median follow-up of 11.9 months (range 1.0-61.0 months). Seven HN patients of this group are alive without any sign of recurrence, and one patient has died of other causes. All of the 13 breast reconstruction patients included in the study are alive and free of disease at a median follow-up time of 27.4 months (range 13.9-35.7 months). Re-explorations were carried out in three patients due to data provided by ptiO2 monitoring, and one re-exploration was avoided on the basis of adequate blood perfusion assessed with PET.
Two patients had donor-site morbidity and three patients had partial flap necrosis or fat necrosis. There were no total flap losses. Conclusions: PtiO2 monitoring is a feasible method of free flap monitoring when the flap temperature is monitored and maintained close to the core temperature. When other monitoring methods give controversial results or are unavailable, the [15O]H2O PET technique is feasible in the evaluation of the perfusion of newly reconstructed free flaps.