999 results for Oxygen -- Measurement
Abstract:
Prompt production of the charmonium mesons χc0, χc1 and χc2 is studied using proton-proton collisions at the LHC at a centre-of-mass energy of √s = 7 TeV. The χc mesons are identified through their decay to J/ψγ, with J/ψ → μ+μ−, using photons that converted in the detector. A data sample corresponding to an integrated luminosity of 1.0 fb−1, collected by the LHCb detector, is used to measure the relative prompt production rate of χc1 and χc2 in the rapidity range 2.0 < y < 4.5 as a function of the J/ψ transverse momentum from 3 to 20 GeV/c. First evidence for χc0 meson production at a high-energy hadron collider is also presented.
Abstract:
In this thesis (TFG), the results of a comparison of three assays for the measurement of AhR ligand activity are presented. This study was part of a collaborative project aiming at the characterization of the AhR signaling activities of known naturally occurring compounds, to explore the potential of using non-toxic compounds to treat inflammatory diseases via oral administration. The first goal of the project was to find an assay able to measure AhR activity, so different assays were compared in order to find the most convenient one in terms of efficiency, sensitivity and precision. Other operational factors, such as price, toxicity of components and ease of use, were also considered. Using compounds known from the literature to be AhR ligands, three assays were tested: (1) the P450-Glo™ CYP1A2 Induction/Inhibition assay, (2) quantitative Polymerase Chain Reaction (qPCR) and (3) the DR. CALUX® Bioassay. In addition, a further experiment using the last assay was performed to study in vivo the transport of the tested compounds. The results of the TFG suggested the DR. CALUX® Bioassay as the most promising assay for the screening of samples as AhR ligands, because it is quicker, easier to handle and less expensive than qPCR, and more reproducible than the CYP1A2 Induction/Inhibition assay. Moreover, this assay gave a first indication of which compounds are taken up by the epithelial barrier and in which direction the transport occurs.
Abstract:
The objective of this study was to evaluate the methodological characteristics of cost-effectiveness evaluations carried out in Spain since 1990 that include life-years gained (LYG) as an outcome to measure the incremental cost-effectiveness ratio. METHODS: A systematic review of published studies was conducted, describing their characteristics and methodological quality. We analysed the cost per LYG results in relation to a commonly accepted Spanish cost-effectiveness threshold, as well as the possible relation to the cost per quality-adjusted life year (QALY) gained when both were calculated for the same economic evaluation. RESULTS: A total of 62 economic evaluations fulfilled the selection criteria, 24 of them also including the cost per QALY gained. The methodological quality of the studies was good (55%) or very good (26%). A total of 124 cost per LYG results were obtained with a mean ratio of 49,529
Abstract:
Objective: To evaluate the sonographic measurement of subcutaneous and visceral fat in correlation with the grade of hepatic steatosis. Materials and Methods: In the period from October 2012 to January 2013, 365 patients were evaluated. The subcutaneous and visceral fat thicknesses were measured with a convex 3–4 MHz transducer placed transversely 1 cm above the umbilicus. The distance between the internal aspect of the abdominal rectus muscle and the posterior aortic wall in the abdominal midline was taken as the measurement of visceral fat. Increased liver echogenicity, blurring of vascular margins and increased acoustic attenuation were the parameters considered in the quantification of hepatic steatosis. Results: Steatosis was found in 38% of the study sample. In the detection of moderate to severe steatosis, the area under the ROC curve was 0.96 for women and 0.99 for men, indicating cut-off values for visceral fat thickness of 9 cm and 10 cm, respectively. Conclusion: The present study evidenced the correlation between steatosis and visceral fat thickness and suggested visceral fat thickness values to differentiate normality from risk for steatohepatitis.
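The cut-off values reported in the Results translate into a simple decision rule. A minimal sketch, with a hypothetical function name and under the assumption that thickness above the cut-off flags risk of moderate to severe steatosis:

```python
# Decision rule suggested by the abstract: visceral fat thickness cut-offs
# of 9 cm (women) and 10 cm (men). Function name and labels are illustrative.
def steatosis_risk(visceral_fat_cm: float, sex: str) -> bool:
    """Return True if visceral fat thickness exceeds the suggested cut-off."""
    cutoff_cm = 9.0 if sex == "female" else 10.0
    return visceral_fat_cm > cutoff_cm

print(steatosis_risk(9.5, "female"))  # True: above the 9 cm cut-off for women
print(steatosis_risk(9.5, "male"))    # False: below the 10 cm cut-off for men
```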
Abstract:
We determined whether performance and mechanical running alterations during repeated treadmill sprinting differ between severely hot and hypoxic environments. Six male recreational sportsmen (team- and racket-sport background) performed five 5-s sprints with 25-s recovery on an instrumented treadmill, allowing continuous (step-by-step) measurement of running kinetics/kinematics and spring-mass characteristics. These were randomly conducted in control (CON; 25°C/45% RH, inspired fraction of oxygen = 20.9%), hot (HOT; 38°C/21% RH, inspired fraction of oxygen = 20.9%; end-exercise core temperature: ~38.6°C) and normobaric hypoxic (HYP; 25°C/45% RH, inspired fraction of oxygen = 13.3%, simulated altitude of ~3600 m; end-exercise pulse oxygen saturation: ~84%) environments. Running distance was lower (P < 0.05) in HOT compared to CON and HYP for the first sprint, but a larger (P < 0.05) sprint decrement score occurred in HYP versus HOT and CON. Compared to CON, the cumulative distance covered over the five sprints was lower (P < 0.01) in HYP but not in HOT. Irrespective of the environmental condition, significant changes occurred from the first to the fifth sprint repetition (all three conditions compounded) in selected running kinetics (mean horizontal forces, P < 0.01), kinematics (contact and swing times, both P < 0.001; step frequency, P < 0.001) and spring-mass characteristics (vertical stiffness, P < 0.001; leg stiffness, P < 0.01). No significant interaction between sprint number and condition was found for any mechanical data. Preliminary evidence indicates that repeated-sprint ability is more impaired in hypoxia than in a hot environment when compared to a control condition. However, as sprints are repeated, mechanical alterations appear not to be exacerbated in severe (heat, hypoxia) environmental conditions.
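The sprint decrement score mentioned above quantifies fatigue across repeated sprints. As an illustration only (the study may use a different variant), one commonly used definition compares the summed performance to the ideal case of repeating the best sprint every time:

```python
# Hedged sketch of a common sprint decrement score definition:
# 100 * (1 - total performance / (best sprint * number of sprints)).
def sprint_decrement(distances_m: list[float]) -> float:
    """Percentage decrement relative to repeating the best sprint each time."""
    ideal = max(distances_m) * len(distances_m)
    return 100.0 * (1.0 - sum(distances_m) / ideal)

# Hypothetical distances (m) for five 5-s sprints:
print(round(sprint_decrement([30.0, 29.0, 28.5, 28.0, 27.5]), 2))  # → 4.67
```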
Abstract:
BACKGROUND: Underweight and severe and morbid obesity are associated with highly elevated risks of adverse health outcomes. We estimated trends in mean body-mass index (BMI), which characterises its population distribution, and in the prevalences of a complete set of BMI categories for adults in all countries. METHODS: We analysed, with use of a consistent protocol, population-based studies that had measured height and weight in adults aged 18 years and older. We applied a Bayesian hierarchical model to these data to estimate trends from 1975 to 2014 in mean BMI and in the prevalences of BMI categories (<18·5 kg/m² [underweight], 18·5 kg/m² to <20 kg/m², 20 kg/m² to <25 kg/m², 25 kg/m² to <30 kg/m², 30 kg/m² to <35 kg/m², 35 kg/m² to <40 kg/m², ≥40 kg/m² [morbid obesity]), by sex in 200 countries and territories, organised in 21 regions. We calculated the posterior probability of meeting the target of halting by 2025 the rise in obesity at its 2010 levels, if post-2000 trends continue. FINDINGS: We used 1698 population-based data sources, with more than 19·2 million adult participants (9·9 million men and 9·3 million women) in 186 of 200 countries for which estimates were made. Global age-standardised mean BMI increased from 21·7 kg/m² (95% credible interval 21·3-22·1) in 1975 to 24·2 kg/m² (24·0-24·4) in 2014 in men, and from 22·1 kg/m² (21·7-22·5) in 1975 to 24·4 kg/m² (24·2-24·6) in 2014 in women. Regional mean BMIs in 2014 for men ranged from 21·4 kg/m² in central Africa and south Asia to 29·2 kg/m² (28·6-29·8) in Polynesia and Micronesia; for women the range was from 21·8 kg/m² (21·4-22·3) in south Asia to 32·2 kg/m² (31·5-32·8) in Polynesia and Micronesia. Over these four decades, age-standardised global prevalence of underweight decreased from 13·8% (10·5-17·4) to 8·8% (7·4-10·3) in men and from 14·6% (11·6-17·9) to 9·7% (8·3-11·1) in women.
South Asia had the highest prevalence of underweight in 2014, 23·4% (17·8-29·2) in men and 24·0% (18·9-29·3) in women. Age-standardised prevalence of obesity increased from 3·2% (2·4-4·1) in 1975 to 10·8% (9·7-12·0) in 2014 in men, and from 6·4% (5·1-7·8) to 14·9% (13·6-16·1) in women. 2·3% (2·0-2·7) of the world's men and 5·0% (4·4-5·6) of women were severely obese (ie, have BMI ≥35 kg/m²). Globally, prevalence of morbid obesity was 0·64% (0·46-0·86) in men and 1·6% (1·3-1·9) in women. INTERPRETATION: If post-2000 trends continue, the probability of meeting the global obesity target is virtually zero. Rather, if these trends continue, by 2025, global obesity prevalence will reach 18% in men and surpass 21% in women; severe obesity will surpass 6% in men and 9% in women. Nonetheless, underweight remains prevalent in the world's poorest regions, especially in south Asia. FUNDING: Wellcome Trust, Grand Challenges Canada.
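The BMI bands listed in the Methods can be written as a simple lookup. A minimal sketch (function name and interval labels are illustrative; the abstract explicitly names only the extreme categories):

```python
def bmi_category(bmi_kg_m2: float) -> str:
    """Map a BMI value (kg/m²) to the bands used in the study."""
    if bmi_kg_m2 < 18.5:
        return "underweight"
    elif bmi_kg_m2 < 20:
        return "18.5 to <20"
    elif bmi_kg_m2 < 25:
        return "20 to <25"
    elif bmi_kg_m2 < 30:
        return "25 to <30"
    elif bmi_kg_m2 < 35:
        return "30 to <35"
    elif bmi_kg_m2 < 40:
        return "35 to <40"
    return "morbid obesity (>=40)"

print(bmi_category(24.2))  # global male mean BMI in 2014 → "20 to <25"
```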
Abstract:
The most suitable method for estimating size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches to the estimation of the pdf are compared: parametric methods, which assume that the data come from a particular family of pdfs, and nonparametric methods, where the pdf is estimated by some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained whether original sizes or log-transformed data are used, and size measurements with different dimensionality (lengths, areas, volumes or biomasses) may be immediately compared with the simple addition of ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after standardization by division by the sample geometric mean, arises as the most reliable and generalizable method of size diversity evaluation.
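The recommended procedure can be sketched in a few lines: standardize sizes by the sample geometric mean, estimate the pdf with a Gaussian kernel, and integrate −p ln p numerically. A minimal sketch assuming NumPy and Silverman's rule-of-thumb bandwidth (the study's exact bandwidth choice is not stated here; data are simulated):

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = rng.lognormal(mean=1.0, sigma=0.5, size=500)  # simulated individual sizes

# Standardize by the sample geometric mean, as proposed in the abstract.
x = sizes / np.exp(np.log(sizes).mean())

# Gaussian kernel density estimate with Silverman's rule-of-thumb bandwidth.
h = 1.06 * x.std() * len(x) ** (-1 / 5)
grid = np.linspace(0.5 * x.min(), 1.5 * x.max(), 2000)
p = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
p /= len(x) * h * np.sqrt(2 * np.pi)

# Shannon size diversity: -∫ p(x) ln p(x) dx via trapezoidal integration.
safe_p = np.where(p > 0, p, 1.0)   # avoid log(0) in the far tails
integrand = -p * np.log(safe_p)
diversity = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(grid))
print(round(diversity, 2))
```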
Abstract:
The purpose of this work was to determine measurement-system limit values for an optical camera-based dirt count system and to test the functionality of the dirt count system in practice. The goal was to productise the camera-based dirt count analysis into a service product that could be used in screen condition surveys and as a problem-solving tool. The theoretical part consisted of two entities: impurities in pulp, dirt count theory and impurity measurement methods; and marketing, and the productisation and launch processes from the viewpoint of a service product. The experimental part examined factors affecting the camera-based dirt count analysis: camera focus, image sharpness, the colour, grammage and dirt content of the analysed sheet, impregnation, light source, image editing, file format and pixel count. The practical applicability of the camera-based dirt count analysis was tested with a mill case example. It was found that the camera-based dirt count analysis could be used for almost all pulp types. A calibration method was defined for focusing the camera on the sheet plane, as well as a shutter speed analysis for determining the pulp-type-dependent shutter speed. For the camera-based dirt count analysis, a sheet grammage of 60 g/m², an aperture of F5 and a sharpness setting of 5 were specified. It was found that the analysed sheets need not be impregnated or post-processed. No correlation with the Somerville separation efficiency was found. The dirt contents and separation efficiencies of the primary screening stage were determined at the example mill. Based on the mill case results, the oxygen stage and the D0 stage were found to be the most effective at removing impurities.
Abstract:
One of the primary goals of food packaging is to protect food against a harmful environment, especially oxygen and moisture. The gas transmission rate is the total gas transport through the package, both by permeation through the package material and by leakage through pinholes and cracks. The shelf life of a product can be extended if the food is stored in a gas-tight package, so there is a need to test the gas tightness of packages. There are several tightness testing methods, and they can be broadly divided into destructive and nondestructive methods. One of the most sensitive ways to detect leaks is a nondestructive tracer gas technique. Carbon dioxide, helium and hydrogen are the most commonly used tracer gases. Hydrogen is the lightest and the smallest of all gases, which allows it to escape rapidly from leak areas. The low background concentration of H2 in air (0.5 ppm) enables sensitive leak detection. With a hydrogen leak detector it is also possible to locate leaks, which is not possible with many other tightness testing methods. The experimental work focused on investigating the factors that affect the measurement results obtained with the H2 leak detector. Reasons for false results were also sought, so that they could be avoided in subsequent measurements. From the results of these experiments, an appropriate measurement practice was established in order to obtain correct and repeatable results. The most important requirement for good measurement results is to keep the probe of the detector tightly against the leak. Because of its high diffusion rate, the H2 concentration decreases quickly if the probe is held further away from the leak area; the measured H2 leaks would then be incorrect and small leaks could go undetected. In the experimental part, hydrogen, oxygen and water vapour transmissions through laser beam reference holes (diameters 1–100 μm) were also measured and compared.
With the H2 leak detector it was possible to detect even a leakage through a 1 μm (diameter) hole within a few seconds. Water vapour did not penetrate even the largest reference hole (100 μm), even under tropical conditions (38 °C, 90 % RH), whereas some O2 transmission occurred through the reference holes larger than 5 μm. Thus water vapour transmission does not have a significant effect on food deterioration if the diameter of the leak is less than 100 μm, but small leaks (5–100 μm) are more harmful for food products that are sensitive to oxidation.
Abstract:
Induction motors are widely used in industry, and they are generally considered very reliable. They often have a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One cause of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and outer rings and the rolling elements of a bearing. This phenomenon can be considered similar to electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents with special reference to bearing current detection in induction motors. A bearing current detection method based on radio frequency impulse reception and detection is studied. The thesis describes how a motor can work as a "spark gap" transmitter and discusses a discharge in a bearing as a source of a radio frequency impulse. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several meters from the motor. The issues of interference, detection, and location techniques are discussed. The applicability of the method is shown with a series of measurements on a specially constructed test motor and an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive way to detect harmful bearing currents in the drive system.
If bearing current mitigation techniques are applied, their effectiveness can be immediately verified with the proposed method. The method also gives a tool to estimate the harmfulness of the bearing currents by making it possible to detect and locate individual discharges inside the bearings of electric motors.
Abstract:
The purpose of this thesis was to investigate creating and improving category purchasing visibility for corporate procurement by utilizing financial information. The thesis was part of the global category-driven spend analysis project of Konecranes Plc. While creating a general understanding of building category-driven corporate spend visibility, the IT architecture and the purchasing parameters needed for spend analysis were described. In the case part of the study, three manufacturing plants of the Konecranes Standard Lifting, Heavy Lifting and Services business areas were examined. This included investigating the operative IT system architecture and the processes needed for building corporate spend visibility. The key finding of this study was the identification of the processes needed for gathering purchasing data elements when creating corporate spend visibility in a fragmented source-system environment. As an outcome of the study, a roadmap presenting further development areas was introduced for Konecranes.
Abstract:
The research around performance measurement and management has focused mainly on the design, implementation and use of performance measurement systems. However, there is little evidence about the actual impacts of performance measurement on the different levels of business and operations of organisations, or about the underlying factors that lead to a positive impact of performance measurement. The study thus focuses on this research gap, which is both important and challenging to cover. The first objective of the study was to examine the impacts of performance measurement on different aspects of management, leadership and the quality of working life, after which the factors that facilitate and improve performance and performance measurement at the operative level of an organisation were examined. The second objective was to study how these factors operate in practice. The third objective focused on the construction of a framework for successful operative-level performance measurement and the utilisation of the factors in organisations. The research objectives have been studied through six research papers utilising empirical data from three separate studies, comprising two sets of interview data and one set of quantitative data. The study applies mainly the hermeneutical research approach. As a contribution, a framework for successful operative-level performance measurement was formed by matching the findings of the current study with performance measurement theory. The study extends prior research regarding the impacts of performance measurement and the factors that have a positive effect on operative-level performance and performance measurement. The results indicate that, under suitable circumstances, performance measurement has positive impacts on different aspects of management, leadership, and the quality of working life.
The results reveal, for example, that the perceptions of the employees and of the management regarding the impacts of performance measurement on leadership style differ considerably. Furthermore, the fragmented literature has been reorganised into six factors that facilitate and improve the performance of the operations and employees, and the use of performance measurement at the operative level of an organisation. Regarding the managerial implications of the study, managers who work with performance measurement can utilise the framework, for example, by putting its different phases into practice.
Abstract:
This thesis was produced for the Technology Marketing unit at the Nokia Research Center. Technology marketing was a new function at the Nokia Research Center and needed an established framework capable of taking multiple aspects into account when measuring team performance. Technology marketing functions existed in other parts of Nokia, yet no single method had been agreed upon for measuring their performance. The purpose of this study was to develop a performance measurement system for Nokia Research Center Technology Marketing. The target was to give Nokia Research Center Technology Marketing a framework of separate metrics, including benchmarking of the starting level and target values for future planning (numeric values were kept confidential within the company). As a result of this research, the Balanced Scorecard model of Kaplan and Norton was chosen as the performance measurement system for Nokia Research Center Technology Marketing. The research selected the indicators utilised in the chosen performance measurement system. Furthermore, the performance measurement system was defined to guide the Head of Marketing in managing the Nokia Research Center Technology Marketing team. During the research process, the team mission, vision, strategy and critical success factors were outlined.