88 results for direct measurement
Abstract:
The metabolism of an organism consists of a network of biochemical reactions that transform small molecules, or metabolites, into others in order to produce energy and building blocks for essential macromolecules. The goal of metabolic flux analysis is to uncover the rates, or fluxes, of those biochemical reactions. In a steady state, the sum of the fluxes that produce an internal metabolite is equal to the sum of the fluxes that consume the same molecule. Thus the steady state imposes linear balance constraints on the fluxes. In general, the balance constraints imposed by the steady state are not sufficient to uncover all the fluxes of a metabolic network: the fluxes through cycles and through alternative pathways between the same source and target metabolites remain unknown. More information about the fluxes can be obtained from isotopic labelling experiments, where a cell population is fed with labelled nutrients, such as glucose containing 13C atoms. Labels are then transferred by biochemical reactions to other metabolites. The relative abundances of different labelling patterns in internal metabolites depend on the fluxes of the pathways producing them, and thus contain information about the fluxes that cannot be uncovered from the steady-state balance constraints alone. The field of research that estimates the fluxes using measured constraints on the relative abundances of labelling patterns induced by 13C-labelled nutrients is called 13C metabolic flux analysis. There are two approaches to 13C metabolic flux analysis. In the optimization approach, a non-linear optimization task is constructed in which candidate fluxes are iteratively generated until they fit the measured abundances of the different labelling patterns. In the direct approach, the linear balance constraints given by the steady state are augmented with linear constraints derived from the abundances of the different labelling patterns of metabolites. Mathematically involved non-linear optimization methods, which can get stuck in local optima, are thus avoided. On the other hand, the direct approach may require more measurement data than the optimization approach to obtain the same flux information. Furthermore, the optimization framework can easily be applied regardless of the labelling measurement technology and with all network topologies. In this thesis we present a formal computational framework for direct 13C metabolic flux analysis. The aim of our study is to construct as many linear constraints on the fluxes as possible from the 13C labelling measurements, using only computational methods that avoid non-linear techniques and are independent of the type of measurement data, the labelling of external nutrients and the topology of the metabolic network. The presented framework is the first representative of the direct approach to 13C metabolic flux analysis that is free from restricting assumptions about these parameters. In our framework, measurement data is first propagated from the measured metabolites to other metabolites. The propagation is facilitated by a flow analysis of metabolite fragments in the network.
New linear constraints on the fluxes are then derived from the propagated data by applying techniques of linear algebra. Based on the results of the fragment flow analysis, we also present an experiment planning method that selects sets of metabolites whose relative abundances of different labelling patterns are most useful for 13C metabolic flux analysis. Furthermore, we give computational tools to process raw 13C labelling data produced by tandem mass spectrometry into a form suitable for 13C metabolic flux analysis.
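As a minimal illustration of the balance constraints described above (a toy network with hypothetical metabolites A, B, C and fluxes v1-v5, not taken from the thesis framework), the sketch below shows how the steady-state condition S·v = 0 leaves the split between two parallel pathways undetermined, and how one additional labelling-derived linear constraint resolves it.

```python
# Minimal sketch, not the thesis framework: steady-state balance constraints for a
# toy network with two parallel pathways from A to B, one direct (v2) and one via C
# (v3, v4). All metabolite names and flux values are illustrative assumptions.
import numpy as np
from scipy.linalg import null_space

# Reactions: v1: -> A (uptake), v2: A -> B, v3: A -> C, v4: C -> B, v5: B -> (export)
# Rows = internal metabolites A, B, C; columns = fluxes v1..v5.
S = np.array([
    [1, -1, -1,  0,  0],   # A: produced by v1, consumed by v2 and v3
    [0,  1,  0,  1, -1],   # B: produced by v2 and v4, consumed by v5
    [0,  0,  1, -1,  0],   # C: produced by v3, consumed by v4
])

# Steady state: S @ v = 0.  The null-space dimension counts the degrees of freedom
# (here the uptake level and the split between the parallel pathways) left open.
print("degrees of freedom:", null_space(S).shape[1])        # -> 2

# Fixing the measured uptake (v1 = 10) and adding one hypothetical labelling-derived
# linear constraint (v2 - v4 = 2) makes the system uniquely solvable.
A = np.vstack([S,
               [1, 0, 0,  0, 0],     # v1 = 10
               [0, 1, 0, -1, 0]])    # v2 - v4 = 2
b = np.array([0, 0, 0, 10, 2])
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fluxes v1..v5:", np.round(v, 3))                     # -> [10. 6. 4. 4. 10.]
```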
Abstract:
This thesis studies empirically whether measurement errors in aggregate production statistics affect sentiment and future output. Initial announcements of aggregate production are subject to measurement error, because many of the data required to compile the statistics are produced with a lag. This measurement error can be gauged as the difference between the latest revised statistic and its initial announcement. Assuming aggregate production statistics help forecast future aggregate production, these measurement errors are expected to affect macroeconomic forecasts. Assuming agents' macroeconomic forecasts affect their production choices, these measurement errors should affect future output through sentiment. This thesis is primarily empirical, so the theoretical basis, strategic complementarity, is discussed only briefly. Strategic complementarity describes a setting in which higher aggregate production increases each agent's incentive to produce. In this circumstance, a statistical announcement suggesting that aggregate production is high would increase each agent's incentive to produce, thus resulting in higher aggregate production. In this way the existence of strategic complementarity provides the theoretical basis for output fluctuations caused by measurement errors in aggregate production statistics. Previous empirical studies suggest that measurement errors in gross national product affect future aggregate production in the United States. Additionally, it has been demonstrated that measurement errors in the Index of Leading Indicators affect forecasts by professional economists as well as future industrial production in the United States. This thesis aims to verify the applicability of these findings to other countries and to study the link between measurement errors in gross domestic product, sentiment, and future output. Professional forecasts and consumer sentiment in the United States and Finland, as well as producer sentiment in Finland, are used as measures of sentiment. Using statistical techniques, it is found that measurement errors in gross domestic product affect forecasts and producer sentiment; the effect on consumer sentiment is ambiguous. The relationship between measurement errors and future output is explored using data from Finland, the United States, the United Kingdom, New Zealand and Sweden. It is found that measurement errors have affected aggregate production or investment in Finland, the United States, the United Kingdom and Sweden. Specifically, overly optimistic statistical announcements are associated with higher output, and vice versa.
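The empirical strategy sketched above can be illustrated with synthetic data (the series, coefficients and sample size below are invented, not the thesis dataset): the announcement error is the difference between the initial announcement and the latest revised figure, and subsequent output growth is regressed on it.

```python
# Illustrative sketch with synthetic data, not the thesis dataset or its results.
import numpy as np

rng = np.random.default_rng(0)
n = 120                                            # quarters of synthetic data
true_growth = rng.normal(0.5, 1.0, n)              # latest revised GDP growth, %
initial = true_growth + rng.normal(0.0, 0.3, n)    # initially announced growth, %
optimism = initial - true_growth                   # announcement error (initial minus revised)

# Hypothetical sentiment channel: overly optimistic announcements raise next-period output.
next_growth = 0.4 * true_growth + 0.5 * optimism + rng.normal(0.0, 0.5, n)

# OLS of next-period growth on the announcement error, controlling for revised growth.
X = np.column_stack([np.ones(n), true_growth, optimism])
beta, *_ = np.linalg.lstsq(X, next_growth, rcond=None)
print("estimated effect of announcement optimism:", round(beta[2], 2))
```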
Abstract:
Over the past twenty years, several indicator sets have been produced at the international, national and regional levels. Most of the work has concentrated on the selection of the indicators and on the collection of the pertinent data, while less attention has been given to the actual users and their needs. This dissertation focuses on the use of sustainable development indicator sets. It explores the reasons that have deterred the use of the indicators, discusses the role of sustainable development indicators in the policy cycle, and broadens the view of use by recognising three different types of use. The work presents two indicator development processes: the Finnish national sustainable development indicators and the socio-cultural indicators supporting the measurement of eco-efficiency in the Kymenlaakso Region. The sets are compared using a framework created in this work to describe indicator process quality. It includes five principles supported by more specific criteria: high policy relevance, sound indicator quality, efficient participation, effective dissemination and long-term institutionalisation. The framework provided a way to identify the key obstacles to use. The two immediate problems with current indicator sets are that the users are unaware of them and that the indicators are often unsuitable to their needs. The reasons for these major flaws are the irrelevance of the indicators to policy needs, technical shortcomings in content and presentation, failure to engage the users in the development process, non-existent dissemination strategies, and a lack of institutionalisation to promote and update the indicators. The importance of the different obstacles differs among users and use types. In addition to the indicator projects, the materials used in the dissertation include 38 interviews with high-level policy-makers or civil servants close to them, download statistics for the national indicator web pages, citations of the national indicator publication, and the media coverage of both indicator sets. According to the results, the most likely use of a sustainable development indicator set by policy-makers is to learn about the concept; very little evidence of direct use to support decision-making was available. Conceptual use is also common for other user groups, namely the media, civil servants, researchers, students and teachers. Decision-makers themselves consider the most obvious use for the indicators to be the promotion of their own views, which is a form of legitimising use. Sustainable development indicators thus have different types of use in the policy cycle, and the most commonly expected type, instrumental use, is not very likely, or even desirable, at all stages. The stages of persuading the public and decision-makers about new problems, as well as formulating new policies, employ legitimising use. Learning through conceptual use is also inherent to policy-making, as the people involved learn about the new situation. Instrumental use is most likely in policy formulation, implementation and evaluation. The dissertation is an article dissertation comprising five papers published in scientific journals and an extensive introductory chapter that discusses and weaves together the papers.
Abstract:
Background: The incidence of all forms of congenital heart defects is 0.75%. For patients with congenital heart defects, life expectancy has improved with new treatment modalities. Structural heart defects may require surgical or catheter treatment, which may be corrective or palliative. Even those with corrective therapy need regular follow-up due to residual lesions, late sequelae, and possible complications after interventions. Aims: The aim of this thesis was to evaluate cardiac function before and after treatment for volume overload of the right ventricle (RV) caused by atrial septal defect (ASD), volume overload of the left ventricle (LV) caused by patent ductus arteriosus (PDA), and pressure overload of the LV caused by coarctation of the aorta (CoA), and to evaluate cardiac function in patients with Mulibrey nanism. Methods: In Study I, of the 24 children with ASD, 7 underwent surgical correction and 17 percutaneous occlusion of the ASD. Study II included 33 patients with PDA undergoing percutaneous occlusion. In Study III, 28 patients with CoA underwent either surgical correction or percutaneous balloon dilatation of the CoA. Study IV comprised 26 children with Mulibrey nanism. A total of 76 healthy volunteer children were examined as a control group; in each study, controls were matched to patients. All patients and controls underwent clinical cardiovascular examination, two-dimensional (2D) and three-dimensional (3D) echocardiographic examinations, and blood sampling for measurement of natriuretic peptides prior to the intervention and two or three times thereafter. Control children were examined once by 2D and 3D echocardiography. M-mode echocardiography was performed from the parasternal long axis view directed by 2D echocardiography. The left atrium-to-aorta (LA/Ao) ratio was calculated as an index of LA size. The end-diastolic and end-systolic dimensions of the LV as well as the end-diastolic thicknesses of the interventricular septum and LV posterior wall were measured. LV volumes and the fractional shortening (FS) and ejection fraction (EF) as indices of contractility were then calculated, and the z scores of LV dimensions determined. Diastolic function of the LV was estimated from the mitral inflow signal obtained by Doppler echocardiography. In three-dimensional echocardiography, time-volume curves were used to determine end-diastolic and end-systolic volumes, stroke volume, and EF; diastolic and systolic function of the LV was estimated from the calculated first derivatives of these curves. Results: (I): In all children with ASD, during the one-year follow-up, the z score of the RV end-diastolic diameter decreased and that of the LV increased. However, dilatation of the RV did not resolve entirely during the follow-up in either treatment group. In addition, the size of the LV increased more slowly in the surgical subgroup but reached control levels in both groups. Concentrations of natriuretic peptides in patients treated percutaneously increased during the first month after ASD closure and normalized thereafter, but in patients treated surgically they remained higher than in controls. (II): In the PDA group, at baseline, the end-diastolic diameter of the LV exceeded 2 SD in 5 of 33 patients. The median N-terminal pro-brain natriuretic peptide (proBNP) concentration was 72 ng/l in the control group and 141 ng/l in the PDA group before closure (P = 0.001), and 78.5 ng/l 6 months after closure (P = NS).
Patients differed from control subjects in indices of LV diastolic and systolic function at baseline, but by the end of follow-up all these differences had disappeared. Even in the subgroup of patients with a normal-sized LV at baseline, the LV end-diastolic volume decreased significantly during follow-up. (III): Before repair, the size and wall thickness of the LV were greater in patients with CoA than in controls. Systolic blood pressure was a median of 123 mm Hg in patients before repair (P < 0.001), 103 mm Hg one year thereafter, and 101 mm Hg in controls. The diameter of the coarctation segment measured a median of 3.0 mm at baseline and 7.9 mm at the 12-month follow-up (P = 0.006). Thicknesses of the interventricular septum and posterior wall of the LV decreased after repair but increased to the initial level one year thereafter. The velocity time integrals of mitral inflow increased, but no changes were evident in LV dimensions or contractility. During follow-up, serum levels of natriuretic peptides decreased, correlating with diastolic and systolic indices of LV function in 2D and 3D echocardiography. (IV): In 2D echocardiography, the interventricular septum and LV posterior wall were thicker, and the velocity time integrals of mitral inflow shorter, in patients with Mulibrey nanism than in controls. In 3D echocardiography, LV end-diastolic volume measured a median of 51.9 (range 33.3 to 73.4) ml/m² in patients and 59.7 (range 37.6 to 87.6) ml/m² in controls (P = 0.040), and serum levels of ANPN and proBNP were a median of 0.54 (range 0.04 to 4.7) nmol/l and 289 (range 18 to 9170) ng/l in patients, versus 0.28 (range 0.09 to 0.72) nmol/l (P < 0.001) and 54 (range 26 to 139) ng/l (P < 0.001) in controls. These levels correlated with several indices of diastolic LV function. Conclusions: (I): During the one-year follow-up after ASD closure, RV size decreased but did not normalize in all patients. The size of the LV normalized after ASD closure, but the increase in LV size was slower in patients treated surgically than in those treated with the percutaneous technique. Serum levels of ANPN and proBNP were elevated prior to ASD closure and decreased thereafter to control levels in patients treated with the percutaneous technique, but not in those treated surgically. (II): Changes in LV volume and function caused by the PDA disappeared by 6 months after percutaneous closure. Even the children with a normal-sized LV benefited from the procedure. (III): After repair of CoA, the RV size and the velocity time integrals of mitral inflow increased, and serum levels of natriuretic peptides decreased. Patients need close follow-up despite cessation of LV pressure overload, since LV hypertrophy persisted even in normotensive patients with normal growth of the coarctation segment. (IV): In children with Mulibrey nanism, the LV wall was hypertrophied, with myocardial restriction and impairment of LV function. Significant correlations appeared between indices of LV function, the size of the left atrium, and levels of natriuretic peptides, indicating that measurement of serum natriuretic peptide levels can be used in the clinical follow-up of this patient group despite their dependence on loading conditions.
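For reference, the fractional shortening and ejection fraction mentioned above are conventionally defined from the M-mode diameters and the volumes as follows (standard echocardiographic definitions, not quoted from the thesis):

```latex
% D = LV internal diameter, V = LV volume, at end-diastole (ED) and end-systole (ES).
\[
\mathrm{FS} = \frac{D_{\mathrm{ED}} - D_{\mathrm{ES}}}{D_{\mathrm{ED}}} \times 100\%,
\qquad
\mathrm{EF} = \frac{V_{\mathrm{ED}} - V_{\mathrm{ES}}}{V_{\mathrm{ED}}} \times 100\%
\]
```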
Abstract:
Thrombophilia (TF) predisposes to both venous and arterial thrombosis at a young age. TF may also contribute to thrombosis or stenosis of the hemodialysis (HD) vascular access in patients with end-stage renal disease (ESRD). When involved in severe thrombosis, TF may be associated with an inadequate response to anticoagulation. Lepirudin, a potent direct thrombin inhibitor (DTI) indicated for heparin-induced thrombocytopenia-related thrombosis, could offer a treatment alternative in TF. Monitoring of lepirudin, which has a narrow therapeutic range, also demands new insights in the laboratory. These issues constitute the targets of this thesis. We evaluated the prevalence of TF in patients with ESRD and its impact upon thrombosis- or stenosis-free survival of the vascular access. Altogether 237 ESRD patients were prospectively screened for TF and thrombogenic risk factors prior to HD access surgery in 2002-2004 (mean follow-up 3.6 years). TF was evident in 43 (18%) of the ESRD patients, more often in males (23 vs. 9%, p=0.009). Known gene mutations of FV Leiden and FII G20210A occurred in 4%. The vascular access matured sufficiently in 226 (95%). The 1-year thrombosis- and stenosis-free access survival was 72%. Female gender (hazard ratio, HR, 2.5; 95% CI 1.6-3.9) and TF (HR 1.9, 95% CI 1.1-3.3) were independent risk factors for shortened thrombosis- and stenosis-free survival. Additionally, TF or a thrombogenic background was found in relatively young patients with severe thrombosis either in the hepatic veins (Budd-Chiari syndrome, BCS, one patient) or with inoperable critical limb ischemia (CLI, six patients). Lepirudin was evaluated off-label in severe thrombosis after conventional anticoagulation had proved ineffective and no treatment options remained other than major invasive procedures, such as lower-extremity amputation. Lepirudin treatments were repeatedly monitored clinically and with laboratory assessments (e.g. activated partial thromboplastin time, APTT). In our preliminary studies, lepirudin appeared safe in these thrombotic calamities, and no bleeds occurred. Lepirudin, an effective DTI, calmed the thrombosis, and all patients gradually recovered. Only one limb amputation was performed, 3 years later during the follow-up (mean 4 years). Furthermore, we aimed to overcome the limitations of APTT and the confounding effects of warfarin (INR of 1.5-3.9) and lupus anticoagulant (LA). Lepirudin responses were assessed in vitro by five specific laboratory methods. The ecarin chromogenic assay (ECA) and anti-Factor IIa (anti-FIIa) correlated precisely (r=0.99) with each other and with spiked lepirudin in all plasma pools: normal, warfarin-containing, and LA-containing plasma. In contrast, in the presence of warfarin and LA, both APTT and the prothrombinase-induced clotting time (PiCT®) were limited by non-linear and imprecise dose responses. As a global coagulation test, APTT is useful in parallel with the precise chromogenic methods ECA and anti-FIIa in challenging clinical situations. Lepirudin treatment requires a multidisciplinary approach to ensure appropriate patient selection, interpretation of laboratory monitoring, and treatment safety. TF appeared to be associated with complicated thrombotic events in venous (BCS), arterial (CLI), and vascular access systems. TF screening should be targeted at patients with repeated access complications or prior unprovoked thromboembolic events. Lepirudin inhibits both free and clot-bound thrombin, the latter of which heparin fails to inhibit. Lepirudin seems to offer a potent and safe option for the treatment of severe thrombosis.
Multicentre randomized trials are needed to assess the management of complicated thrombotic events with DTIs such as lepirudin and to seek options for preventing access complications.
Abstract:
Gastric motility disorders, including delayed gastric emptying (gastroparesis), impaired postprandial fundic relaxation, and gastric myoelectrical disorders, can occur in type 1 diabetes, chronic renal failure, and functional dyspepsia (FD). Symptoms such as upper abdominal pain, early satiation, bloating, nausea and vomiting may be related to gastroparesis. Diabetic gastroparesis is related to autonomic neuropathy. Scintigraphy is the gold standard for measuring gastric emptying, but it is expensive, requires specific equipment, and exposes patients to radiation. It does, however, also give information about the intragastric distribution of the test meal. The 13C-octanoic acid breath test (OBT) is an alternative, indirect method of measuring gastric emptying with a stable isotope. Electrogastrography (EGG) registers the slow wave originating in the pacemaker area of the stomach that regulates the peristaltic contractions of the antrum. This study compares these three methods of measuring gastric motility in patients with type 1 diabetes, functional dyspepsia, and chronic renal failure. Currently, no effective drugs for treating gastric motility disorders are available. We studied the effect of nizatidine on gastric emptying, because in preliminary studies this drug had shown a prokinetic effect attributed to its cholinergic properties. Of the type 1 diabetes patients, 26% had delayed gastric emptying of solids as measured by scintigraphy. Abnormal intragastric distribution of the test meal occurred in 37% of the patients, indicating impaired fundic relaxation. The autonomic neuropathy score correlated positively with the gastric emptying rate of solids (P = 0.006), but HbA1c, plasma glucose levels, and abdominal symptoms were unrelated to gastric emptying and the intragastric distribution of the test meal. Gastric emptying of both solids and liquids was normal in all FD patients, but abnormal intragastric distribution occurred in 38% of them. Nizatidine improved symptom scores and quality of life in FD patients, but not significantly. Instead of enhancing gastric emptying, nizatidine slowed it in FD patients (P < 0.05). No significant difference appeared in the frequency of the gastric slow waves measured by EGG between the patients and controls. The correlation between gastric half-emptying times of solids measured by scintigraphy and by OBT was poor in both type 1 diabetes and FD patients. According to this study, dynamic dual-tracer scintigraphy is more accurate than OBT or EGG in measuring gastric emptying of solids. Additionally, it provides information about gastric emptying of liquids and the intragastric distribution of the ingested test meal.
Abstract:
Airway inflammation is a key feature of bronchial asthma. According to international guidelines, anti-inflammatory treatment is the gold standard of asthma management. Currently, only conventional procedures (i.e., symptoms, use of rescue medication, PEF variability, and lung function tests) are used both to diagnose asthma and to evaluate the results of treatment with anti-inflammatory drugs. New methods for evaluating the degree of airway inflammation are required. Nitric oxide (NO) is a gas produced in the airways of healthy subjects and, in particularly high amounts, in asthmatic airways. NO can be measured non-invasively from exhaled air. Fractional exhaled NO (FENO) is increased in asthma, and the highest concentrations are measured in asthmatic patients not treated with inhaled corticosteroids (ICS). Steroid-treated patients with asthma had levels of FENO similar to those of healthy controls. Atopic asthmatics had higher levels of FENO than nonatopic asthmatics, indicating that the level of atopy affects the FENO level. Associations between FENO and bronchial hyperresponsiveness (BHR) occur in asthma. The present study demonstrated that measurement of FENO has good reproducibility and that FENO variability was reasonable both short- and long-term in healthy subjects and in patients with respiratory symptoms or asthma. We determined the upper normal limit for healthy subjects, 12 ppb, calculated from two different healthy study populations. We showed that patients with respiratory symptoms who did not fulfil the diagnostic criteria of asthma had FENO values significantly higher than those of healthy subjects, but significantly lower than those of asthma patients. The findings further suggest that BHR to histamine is a sensitive indicator of the effect of ICS and a valuable tool for adjusting corticosteroid treatment in mild asthma, and that intermittent treatment periods of a few weeks' duration are insufficient to provide long-term control of BHR in patients with mild persistent asthma. Moreover, during treatment with ICS, changes in BHR and changes in FENO were associated. The FENO level was associated with BHR measured by a direct (histamine challenge) or an indirect (exercise challenge) method in steroid-naïve, symptomatic, non-smoking asthmatics. Although these associations were found only in atopics, the FENO level was also increased in nonatopic asthma. It can thus be concluded that assessment of airway inflammation by measuring FENO can be useful for clinical purposes. The methodology of FENO measurement is now validated. Especially in patients with respiratory symptoms who do not fulfil the diagnostic criteria of asthma, FENO measurement can aid in treatment decisions. Serial measurement of FENO during treatment with ICS can be a complementary or an alternative method of evaluation in patients with asthma.
Abstract:
In order to predict the current state and future development of the Earth's climate, detailed information on atmospheric aerosols and aerosol-cloud interactions is required. Furthermore, these interactions need to be expressed in such a way that they can be represented in large-scale climate models. The largest uncertainties in the estimate of radiative forcing on the present-day climate are related to the direct and indirect effects of aerosols. In this work aerosol properties were studied at Pallas and Utö in Finland, and at Mount Waliguan in western China. Approximately two years of data from each site were analyzed. In addition, data from two intensive measurement campaigns at Pallas were used. The measurements at Mount Waliguan were the first long-term aerosol particle number concentration and size distribution measurements conducted in this region. They revealed that the number concentrations of aerosol particles at Mount Waliguan were much higher than those measured at similar altitudes in other parts of the world. The particles were concentrated in the Aitken size range, indicating that they were produced within a couple of days prior to reaching the site, rather than being transported over thousands of kilometers. Aerosol partitioning between cloud droplets and cloud interstitial particles was studied at Pallas during the two measurement campaigns, the First Pallas Cloud Experiment (First PaCE) and the Second Pallas Cloud Experiment (Second PaCE). The method of using two differential mobility particle sizers (DMPS) to calculate the number concentration of activated particles was found to agree well with direct measurements of cloud droplet number. Several parameters important in cloud droplet activation were found to depend strongly on the air mass history, and the effects of these parameters partially cancelled each other out. The aerosol number-to-volume concentration ratio was studied at all three sites using data sets with long time series. The ratio was found to vary more than in earlier studies, but less than either the aerosol particle number concentration or the volume concentration alone. Both an air mass dependency and a seasonal pattern were found at Pallas and Utö, but only a seasonal pattern at Mount Waliguan. The number-to-volume concentration ratio was found to follow the seasonal temperature pattern well at all three sites. A new parameterization for the partitioning between cloud droplets and cloud interstitial particles was developed. The parameterization uses the aerosol particle number-to-volume concentration ratio and the aerosol particle volume concentration as the only information on the aerosol number and size distribution. The new parameterization is computationally more efficient than the more detailed parameterizations currently in use, although its accuracy is slightly lower. The new parameterization was also compared to directly observed cloud droplet number concentration data, and good agreement was found.
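The number-to-volume concentration ratio discussed above can be computed directly from a binned size distribution; the sketch below uses an invented five-bin distribution (the diameters and concentrations are illustrative assumptions, not the Pallas, Utö or Mount Waliguan data).

```python
# Illustrative sketch: total number concentration N, volume concentration V, and the
# number-to-volume ratio N/V from a binned particle size distribution (synthetic bins).
import numpy as np

d = np.array([0.02, 0.05, 0.1, 0.2, 0.5])      # bin mean diameters, micrometres
n = np.array([800., 1200., 600., 150., 10.])   # number concentration per bin, cm^-3

N = n.sum()                                    # total number concentration, cm^-3
V = (np.pi / 6.0 * d**3 * n).sum()             # total volume concentration, um^3 cm^-3
print(f"N = {N:.0f} cm^-3, V = {V:.2f} um^3 cm^-3, N/V = {N / V:.0f} um^-3")
```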
Abstract:
There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source and sink strengths, as well as the mixing mechanisms of air pollutants, is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. The analyzed dataset thus constitutes by far the most comprehensive long-term measurements of turbulent fluxes reported in the literature from urban areas. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single point of measurement. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas of high vegetation cover. Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10⁹ m⁻² s⁻¹ and 20 µmol m⁻² s⁻¹, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the daytime vegetation uptake of CO2 in the vegetation sector exceeded the anthropogenic sources and we observed a downward median flux of 8 µmol m⁻² s⁻¹. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces, accompanied by numerical modelling, are required.
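A minimal sketch of the eddy covariance principle used above: the turbulent vertical flux of a scalar is the covariance of the fluctuations of vertical wind speed and scalar concentration over an averaging period (the data below are synthetic, not SMEAR III measurements).

```python
# Minimal eddy covariance sketch with synthetic 10 Hz data (not SMEAR III data).
import numpy as np

rng = np.random.default_rng(1)
n = 10 * 60 * 30                                   # 30 min of 10 Hz samples
w = rng.normal(0.0, 0.3, n)                        # vertical wind speed, m s^-1
c = 400.0 + 5.0 * w + rng.normal(0.0, 2.0, n)      # scalar (e.g. CO2) concentration

w_prime = w - w.mean()                             # fluctuations about the block mean
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)                  # vertical turbulent flux, positive upward
print(f"flux = {flux:.2f} (concentration unit) m s^-1")
```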
Abstract:
The methods for estimating patient exposure in x-ray imaging are based on the measurement of the radiation incident on the patient. In digital imaging, the useful dose range of the detector is large and excessive doses may remain undetected; therefore, real-time monitoring of radiation exposure is important. According to international recommendations, the measurement uncertainty should be lower than 7% (95% confidence level). The kerma-area product (KAP) is a measurement quantity used for monitoring patient exposure to radiation. A field KAP meter is typically attached to an x-ray device, and it is important to recognize the effect of this measurement geometry on the response of the meter. In the tandem calibration method introduced in this study, a field KAP meter is used in its clinical position and calibrated against a reference KAP meter. This method provides a practical way to calibrate field KAP meters; however, the reference KAP meters themselves require comprehensive calibration. In the calibration laboratory it is recommended to use standard radiation qualities, but these qualities do not entirely correspond to the large range of clinical radiation qualities. In this work, the energy dependence of the response of different KAP meter types was examined. According to our findings, the recommended accuracy in KAP measurements is difficult to achieve with conventional KAP meters because of their strong energy dependence. The energy dependence of the response of a novel large KAP meter was found to be much lower than that of a conventional KAP meter, and the accuracy of the tandem method can be improved by using this meter type as the reference meter. A KAP meter cannot be used to determine the radiation exposure of patients in mammography, in which part of the radiation beam is always aimed directly at the detector without attenuation by the tissue. This work assessed whether pixel values from this detector area could be used to monitor the radiation beam incident on the patient. The results were congruent with the tube output calculation, which is the method generally used for this purpose, and the recommended accuracy can be achieved with the studied method. New optimization of radiation qualities and dose levels is needed when other detector types are introduced. In this work, the optimal selections were examined with one direct digital detector type. For this device, the use of radiation qualities with higher energies was recommended, and appropriate image quality was achieved by increasing the low dose level of the system.
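A sketch of the tandem-type calibration idea described above, assuming made-up meter readings and radiation quality labels: the field KAP meter stays in its clinical position, a reference KAP meter is placed in the beam, and a calibration coefficient is determined per radiation quality as the ratio of the reference KAP value to the field meter reading.

```python
# Illustrative sketch only: all readings and radiation quality labels are invented.
readings_field = {"RQR 5": 1.92, "RQR 7": 2.10, "RQR 9": 2.31}   # field KAP meter, uGy*m^2
kap_reference  = {"RQR 5": 2.05, "RQR 7": 2.18, "RQR 9": 2.27}   # reference KAP meter, uGy*m^2

# Calibration coefficient per radiation quality = reference KAP / field meter reading.
calibration = {q: kap_reference[q] / readings_field[q] for q in readings_field}
for quality, coefficient in calibration.items():
    print(f"{quality}: calibration coefficient = {coefficient:.3f}")
```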
Abstract:
This thesis covers three subject areas concerning particulate matter in urban air quality: 1) analysis of measured particulate matter mass concentrations in the Helsinki Metropolitan Area (HMA) at different locations in relation to traffic sources, and at different times of the year and day; 2) the evolution of the number concentrations and sizes of traffic-exhaust-originated particles at the local street scale, studied with a combination of a dispersion model and an aerosol process model; and 3) analysis of selected high particulate matter concentration situations with regard to their meteorological origins, especially temperature inversions, in the HMA and three other European cities. The prediction of the occurrence of meteorological conditions conducive to elevated particulate matter concentrations in the studied cities is examined, and the performance of current numerical weather forecasting models in air pollution episode situations is considered. The study of the ambient measurements revealed a clear diurnal variation of the PM10 concentrations at the HMA measurement sites, irrespective of the year and the season. The diurnal variation of local vehicular traffic flows showed no substantial correlation with the PM2.5 concentrations, indicating that the PM10 concentrations originated mainly from local vehicular traffic (direct emissions and suspension), while the PM2.5 concentrations were mostly of regional and long-range transported origin. The modelling study of traffic exhaust dispersion and transformation showed that the number concentrations of particles originating from street traffic exhaust undergo a substantial change during the first tens of seconds after being emitted from the vehicle tailpipe. The dilution process was shown to dominate the total number concentrations, while only a minimal effect of condensation and coagulation was seen in the Aitken mode number concentrations. The air pollution episodes included were chosen on the basis of their occurrence in either winter or spring and their at least partly local origin. In the HMA, air pollution episodes were shown to be linked to predominantly stable atmospheric conditions with high atmospheric pressure and low wind speeds in conjunction with relatively low ambient temperatures. For the other European cities studied, the best meteorological predictors of elevated PM10 concentrations were shown to be the temporal (hourly) evolution of temperature inversions, stable atmospheric conditions and, in some cases, wind speed. Concerning weather prediction during particulate-matter-related air pollution episodes, the studied models were found to overpredict pollutant dispersion, leading to underprediction of pollutant concentration levels.
Abstract:
Volatile organic compounds (VOCs) affect atmospheric chemistry and thereby also participate in climate change in many ways. The long-lived greenhouse gases and tropospheric ozone are the most important radiative forcing components warming the climate, while aerosols are the most important cooling component. VOCs can have warming effects on the climate: they participate in tropospheric ozone formation and compete for oxidants with the greenhouse gases, thus, for example, lengthening the atmospheric lifetime of methane. Some VOCs, on the other hand, cool the atmosphere by taking part in the formation of aerosol particles. Some VOCs, in addition, have direct health effects, such as the carcinogenic benzene. VOCs are emitted into the atmosphere by various processes. Primary emissions of VOCs include biogenic emissions from vegetation, biomass burning and human activities. VOCs are also produced as secondary emissions from the reactions of other organic compounds. Globally, forests are the largest source of VOCs entering the atmosphere. This thesis focuses on measurements of the emissions and concentrations of VOCs in one of the largest vegetation zones in the world, the boreal zone. An automated sampling system was designed and built for continuous VOC concentration and emission measurements with a proton transfer reaction mass spectrometer (PTR-MS). The system measured one hour at a time in three-hourly cycles: 1) ambient volume mixing ratios of VOCs in the Scots-pine-dominated boreal forest, 2) VOC fluxes above the canopy, and 3) VOC emissions from Scots pine shoots. In addition to the online PTR-MS measurements, we determined the composition and seasonality of the VOC emissions from a Siberian larch with adsorbent samples and GC-MS analysis. The VOC emissions from Siberian larch were reported for the first time in the literature. The VOC emissions were 90% monoterpenes (mainly sabinene), with the rest sesquiterpenes (mainly α-farnesene). The normalized monoterpene emission potentials were highest in late summer, rising again in late autumn. The normalized sesquiterpene emission potentials were also highest in late summer, but decreased towards the autumn. The emissions of mono- and sesquiterpenes from the deciduous Siberian larch, as well as the emissions of monoterpenes measured from the evergreen Scots pine, were well described by the temperature-dependent emission algorithm. In the Scots-pine-dominated forest, canopy-scale emissions of monoterpenes and oxygenated VOCs (OVOCs) were of the same magnitude. Methanol and acetone were the most abundant OVOCs emitted from the forest and also in the ambient air; their annual volume mixing ratios were of the order of 1 ppbv. The monoterpene and the summed isoprene and 2-methyl-3-buten-2-ol (MBO) volume mixing ratios were an order of magnitude lower. The majority of the monoterpene and methanol emissions from the Scots-pine-dominated forest were explained by emissions from Scots pine shoots. The VOCs were divided into three classes based on the dynamics of their summertime concentrations: 1) reactive compounds with local biological, anthropogenic or chemical sources (methanol, acetone, butanol and hexanal), 2) compounds whose emissions are only temperature-dependent (monoterpenes), and 3) long-lived compounds (benzene, acetaldehyde). Biogenic VOC (methanol, acetone, isoprene and MBO, and monoterpene) volume mixing ratios had clear diurnal patterns during summer, whereas the ambient mixing ratios of the other VOCs did not show this behaviour.
During winter we did not observe systematic diurnal cycles for any of the VOCs. Different sources, removal processes and turbulent mixing explained the dynamics of the measured mixing ratios qualitatively. However, quantitative understanding will require long-term emission measurements of the OVOCs and the use of comprehensive chemistry models. Keywords: Hydrocarbons, VOC, fluxes, volume mixing ratio, boreal forest
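The temperature-dependent emission algorithm referred to above is commonly written in the following exponential (Guenther-type) form; the parameter values noted in the comments are the usual defaults in the literature, not necessarily those used in the thesis.

```latex
% E    = emission rate at leaf temperature T
% E_0  = normalized emission potential at the standard temperature T_0 (often 303.15 K)
% beta = empirical temperature coefficient (often about 0.09 K^{-1})
\[
E = E_0 \,\exp\!\bigl(\beta\,(T - T_0)\bigr)
\]
```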
Abstract:
We consider an obstacle scattering problem for linear Beltrami fields. A vector field is a linear Beltrami field if its curl is a constant times the field itself. We study obstacles of Neumann type, that is, obstacles on whose boundary the normal component of the total field vanishes. We prove the unique solvability of the corresponding exterior boundary value problem, in other words, of the direct obstacle scattering model. For the inverse obstacle scattering problem, we deduce the formulas needed to apply the singular sources method. Numerical examples are computed for both the direct and the inverse scattering problems.
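In symbols, the exterior boundary value problem described above reads as follows, where D is the obstacle, ν the outward unit normal and κ a nonzero constant; the precise radiation condition imposed at infinity in the thesis is not reproduced here.

```latex
% Find the total field u in the exterior of the obstacle D such that
\[
\nabla \times u = \kappa\, u \quad \text{in } \mathbb{R}^{3}\setminus\overline{D},
\qquad
u \cdot \nu = 0 \quad \text{on } \partial D,
\]
% with the scattered part of u satisfying a suitable radiation condition at infinity.
```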