88 results for direct measurement

in Helda - Digital Repository of the University of Helsinki


Relevance: 100.00%

Abstract:

We present the first direct measurement of the $W$ production charge asymmetry as a function of the $W$ boson rapidity $y_W$ in $p\bar{p}$ collisions at $\sqrt{s} = 1.96$ TeV. We use a sample of $W \to e\nu$ events in data from 1 fb$^{-1}$ of integrated luminosity collected using the CDF II detector. In the region $|y_W|

Relevance: 60.00%

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important because the direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference.

Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or merely due to chance. Similarity of sequences is measured by their best local alignment score, and from that a p-value is computed: the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
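The quantity whose significance the framework bounds, the best local alignment score, can be sketched with a minimal Smith-Waterman implementation; the scoring parameters below are illustrative, not the thesis's.

```python
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment (Smith-Waterman) score of strings a and b."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are clipped at zero so an alignment
            # can start anywhere.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# The shared substring "ACGTA" (5 matches) gives the best local score.
print(local_alignment_score("ACGTACGT", "TTACGTAA"))   # -> 10
```

The p-value of such a score is then the probability, under the null model, of drawing two sequences whose best score is at least as large.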

Relevance: 60.00%

Abstract:

High-quality platelet analytics requires specialized knowledge and skills. This expertise was applied to analyze platelet activation and aggregation responses in a prospective controlled study of patients with the Finnish type of amyloidosis. The 20 patients with AGel amyloidosis displayed a delayed and more profound platelet shape change than healthy siblings and healthy volunteers, which may be related to altered fragmentation of mutated gelsolin during platelet activation. Alterations in platelet shape change have not previously been reported in association with platelet disorders. In the rare Bernard-Soulier syndrome with the Asn45Ser mutation of glycoprotein (GP) IX, the diagnostic defect in the expression of the GPIb-IX-V complex was characterized in seven Finnish patients, an internationally exceptional patient series in size. When measuring thrombopoietin in serial samples of amniotic fluid and cord blood of 15 pregnant women with confirmed or suspected fetal alloimmune thrombocytopenia, the lower limit of detection could be extended. The results confirmed that thrombopoietin is already present in amniotic fluid. The application of various non-invasive means for diagnosing thrombocytopenia (TP) revealed that techniques for estimating the proportion of young, i.e. large, platelets, such as direct measurement of reticulated platelets and the mean platelet size, would be useful for evaluating platelet kinetics in a given patient. Owing to the different kinetics of thrombopoietin and of the increase of young platelets in the circulation, these measurements may have the most predictive value when measured from simultaneous samples. Platelet autoantibodies were present not only in isolated autoimmune TP but also in patients without TP, in whom the disappearance of platelets might be compensated by increased production. The autoantibodies may also persist after TP has been cured. Simultaneous demonstration of increased young platelets (or increased mean platelet volume) in peripheral blood and the presence of platelet-associated IgG specificities to major glycoproteins (GPIb-IX and GPIIb-IIIa) may be considered diagnostic for autoimmune TP. Measurement of a soluble marker as a sign of thrombin activation and progressing deterioration of platelet components was applied to analyze the alterations under several stress factors (storage, transportation, and lack of continuous shaking under controlled conditions) of platelet products. GPV measured as a soluble factor in the platelet storage medium showed good correlation with an array of other measurements commonly applied in the characterization of stored platelets. The benefits of measuring a soluble analyte in a quantitative assay were evident.

Relevance: 60.00%

Abstract:

Atmospheric aerosol particles have a strong impact on the global climate. A deep understanding of the physical and chemical processes affecting the atmospheric aerosol-climate system is crucial in order to describe those processes properly in global climate models. Besides their climatic effects, aerosol particles can deteriorate e.g. visibility and human health. Nucleation is a fundamental step in atmospheric new particle formation. However, the details of atmospheric nucleation mechanisms have remained unresolved, mainly because no instruments have been capable of measuring neutral newly formed particles in the size range below 3 nm in diameter. This thesis aims to extend the detectable particle size range towards the close-to-molecular sizes (~1 nm) of freshly nucleated clusters, and to obtain by direct measurement the concentrations of sub-3 nm particles in the atmospheric environment and in well-defined laboratory conditions. In the work presented in this thesis, new methods and instruments for sub-3 nm particle detection were developed and tested. The selected approach comprises four different condensation-based techniques and one electrical detection scheme. All of them are capable of detecting particles with diameters well below 3 nm, some even down to ~1 nm. The developed techniques and instruments were deployed in field measurements as well as in laboratory nucleation experiments. Ambient air studies showed that in a boreal forest environment a persistent population of 1-2 nm particles or clusters exists. The observation was made using four different instruments, which showed a consistent capability for the direct measurement of atmospheric nucleation. The results from the laboratory experiments showed that sulphuric acid is a key species in atmospheric nucleation. The mismatch between earlier laboratory data and ambient observations on the dependence of the nucleation rate on sulphuric acid concentration was explained: it was shown to arise from the inefficient growth of the nucleated clusters and from the insufficient detection efficiency of the particle counters used in the previous experiments. Even though the exact molecular steps of nucleation remain an open question, the instrumental techniques developed in this work, as well as their application in laboratory and ambient studies, opened a new view into atmospheric nucleation and prepared the way for investigating nucleation processes with more suitable tools.
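The dependence of the nucleation rate on sulphuric acid concentration discussed above is commonly characterised as a power law; the sketch below (synthetic data, all parameter values assumed for illustration only) shows how the exponent is recovered from a log-log fit.

```python
import numpy as np

# Assume J = k * [H2SO4]**n and recover the exponent n by linear
# regression in log-log space. Synthetic data with multiplicative noise.
rng = np.random.default_rng(0)
h2so4 = np.logspace(6, 8, 30)          # molecules cm^-3, typical ambient range
true_n, true_k = 2.0, 1e-14            # assumed, for the synthetic data only
J = true_k * h2so4 ** true_n * rng.lognormal(0.0, 0.1, h2so4.size)

n_fit, logk_fit = np.polyfit(np.log10(h2so4), np.log10(J), 1)
print(round(n_fit, 1))                 # recovers an exponent close to 2
```

A systematic difference between exponents fitted to laboratory and to ambient data is exactly the kind of mismatch the abstract says was resolved.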

Relevance: 60.00%

Abstract:

Molecular machinery on the micro-scale, believed to comprise the fundamental building blocks of life, involves forces of 1-100 pN and movements of nanometers to micrometers. Micromechanical single-molecule experiments seek to understand the physics of nucleic acids, molecular motors, and other biological systems through direct measurement of forces and displacements. Optical tweezers are a popular choice among several complementary techniques for sensitive force spectroscopy in the field of single-molecule biology. The main objective of this thesis was to design and construct an optical tweezers instrument capable of investigating the physics of molecular motors and the mechanisms of protein/nucleic-acid interactions at the single-molecule level. A double-trap optical tweezers instrument incorporating acousto-optic trap steering, two independent detection channels, and a real-time digital controller was built. A numerical simulation and a theoretical study were performed to assess the signal-to-noise ratio in a constant-force molecular motor stepping experiment. Real-time feedback control of optical tweezers was explored in three studies. Position clamping was implemented and compared to theoretical models using both proportional and predictive control. A force clamp was implemented and tested with a DNA tether in the presence of the enzyme lambda exonuclease. The results indicate that the presented models describing the signal-to-noise ratio in constant-force experiments and feedback-control experiments in optical tweezers agree well with experimental data. The effective trap stiffness can be increased by an order of magnitude using the presented position-clamping method. The force clamp can be used for constant-force experiments, and the results from a proof-of-principle experiment, in which the enzyme lambda exonuclease converts double-stranded DNA to single-stranded DNA, agree with previous research. The main objective of the thesis was thus achieved. The developed instrument and the presented results on feedback control serve as a stepping stone for future contributions to the growing field of single-molecule biology.
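The order-of-magnitude stiffness increase from position clamping can be illustrated with a toy simulation of an overdamped bead in a harmonic trap; all parameter values below are assumed round numbers, not those of the instrument.

```python
import numpy as np

# Proportional feedback steers the trap centre against the measured bead
# position, raising the effective stiffness from k to roughly k*(1 + gain)
# and shrinking the bead's position variance by the same factor.
kBT = 4.11e-21     # J, thermal energy at room temperature
k = 1e-5           # N/m, trap stiffness (typical order of magnitude)
gamma = 1e-8       # N*s/m, drag on a ~1 um bead in water (assumed)
dt = 1e-5          # s, time step, well below the trap relaxation time gamma/k

rng = np.random.default_rng(1)

def position_variance(gain, steps=200_000):
    """Euler-Maruyama simulation; returns the bead position variance (m^2)."""
    kicks = np.sqrt(2.0 * kBT * dt / gamma) * rng.standard_normal(steps)
    xs = np.empty(steps)
    x = 0.0
    for i in range(steps):
        trap_centre = -gain * x                       # proportional steering
        x += -k * (x - trap_centre) * dt / gamma + kicks[i]
        xs[i] = x
    return xs.var()

free = position_variance(0.0)      # ~ kBT/k by equipartition
clamped = position_variance(9.0)   # ~ kBT/(10 k) with gain 9
print(round(free / clamped, 1))    # roughly the factor 1 + gain = 10
```

In a real instrument the achievable gain is limited by measurement noise and feedback latency, which is why the thesis compares proportional against predictive control.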

Relevance: 40.00%

Abstract:

We present a measurement of the top-quark width using $t\bar{t}$ events produced in $p\bar{p}$ collisions at Fermilab's Tevatron collider and collected by the CDF II detector. In the mode where the top quark decays to a $W$ boson and a bottom quark, we select events in which one $W$ decays leptonically and the other hadronically (lepton + jets channel). From a data sample corresponding to 4.3 fb$^{-1}$ of integrated luminosity, we identify 756 candidate events. The top-quark mass and the mass of the $W$ boson that decays hadronically are reconstructed for each event and compared with templates of different top-quark widths ($\Gamma_t$) and deviations from the nominal jet energy scale ($\Delta_{JES}$) to perform a simultaneous fit for both parameters, where $\Delta_{JES}$ is used for the {\it in situ} calibration of the jet energy scale. By applying a Feldman-Cousins approach, we establish an upper limit at 95$\%$ confidence level (CL) of $\Gamma_t $
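The Feldman-Cousins construction mentioned above can be illustrated in its textbook form. The sketch below is a toy setup, nothing from the CDF analysis: a unit-variance Gaussian measurement of a parameter bounded below by zero (as a width like $\Gamma_t$ is), with acceptance regions built from the likelihood-ratio ordering principle.

```python
import numpy as np

def fc_upper_limit(x_obs, cl=0.95):
    """Feldman-Cousins upper limit for x ~ N(mu, 1) with mu >= 0."""
    xs = np.linspace(-8.0, 12.0, 4001)        # measurement grid
    dx = xs[1] - xs[0]
    gauss = lambda x, m: np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)
    mu_hat = np.maximum(xs, 0.0)              # physically allowed best fit
    limit = 0.0
    for mu in np.linspace(0.0, 6.0, 601):
        p = gauss(xs, mu) * dx                # probability of each x bin
        ratio = p / (gauss(xs, mu_hat) * dx)  # FC ordering: L(x|mu)/L(x|mu_hat)
        order = np.argsort(-ratio)            # best-ranked bins first
        cum = np.cumsum(p[order])
        accepted = xs[order[: np.searchsorted(cum, cl) + 1]]
        if accepted.min() <= x_obs <= accepted.max():
            limit = mu                        # x_obs accepted: mu stays in belt
    return limit

print(round(fc_upper_limit(0.0), 2))   # near 1.96: a boundary-respecting limit
```

The ordering principle guarantees a smooth transition between one-sided limits near the physical boundary and two-sided intervals away from it, which is why it suits a width measurement consistent with small $\Gamma_t$.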

Relevance: 30.00%

Abstract:

Aims: The aims of this study were 1) to identify and describe health economic studies that have used quality-adjusted life years (QALYs) based on actual measurements of patients' health-related quality of life (HRQoL); 2) to test the feasibility of routine collection of HRQoL data as an indicator of the effectiveness of secondary health care; and 3) to establish and compare the cost-utility of three large-volume surgical procedures in a real-world setting in the Helsinki University Central Hospital, a large referral hospital providing secondary and tertiary health-care services for a population of approximately 1.4 million.

Patients and methods: To identify studies that have used QALYs as an outcome measure, a systematic search of the literature was performed using the Medline, Embase, CINAHL, SCI and Cochrane Library electronic databases. Initial screening of the identified articles involved two reviewers independently reading the abstracts; the full-text articles were also evaluated independently by two reviewers, with a third reviewer used in cases where the two could not reach a consensus on which articles should be included. The feasibility of routinely evaluating the cost-effectiveness of secondary health care was tested by setting up a system for collecting HRQoL data on approximately 4 900 patients before and after operative treatments performed in the hospital. The HRQoL data used as an indicator of treatment effectiveness were combined with diagnostic and financial indicators routinely collected in the hospital. To compare the cost-effectiveness of three surgical interventions, 712 patients admitted for routine operative treatment completed the 15D HRQoL questionnaire before and also 3-12 months after the operation. QALYs were calculated using the obtained utility data and the expected remaining life years of the patients. Direct hospital costs were obtained from the clinical patient administration database of the hospital, and a cost-utility analysis was performed from the perspective of the provider of secondary health-care services.

Main results: The systematic review (Study I) showed that although QALYs gained are considered an important measure of the effectiveness of health care, the number of studies in which QALYs are based on actual measurements of patients' HRQoL is still fairly limited. Of the reviewed full-text articles, only 70 reported QALYs based on actual before-after measurements using a valid HRQoL instrument. Collection of simple cost-effectiveness data in secondary health care is feasible and could easily be expanded and performed on a routine basis (Study II). It allows meaningful comparisons between various treatments and provides a means for allocating limited health-care resources. The cost per QALY gained was €2 770 for cervical operations and €1 740 for lumbar operations; in cases where surgery was delayed, the cost per QALY doubled (Study III). The cost per QALY varied between subgroups in cataract surgery (Study IV): €5 130 for patients having both eyes operated on and €8 210 for patients with only one eye operated on during the 6-month follow-up. In patients whose first eye had been operated on prior to the study period, the mean HRQoL deteriorated after surgery, precluding the establishment of the cost per QALY. In arthroplasty patients (Study V), the mean cost per QALY gained in a one-year period was €6 710 for primary hip replacement, €52 270 for revision hip replacement, and €14 000 for primary knee replacement.

Conclusions: Although the importance of cost-utility analyses has been stressed during recent years, there are only a limited number of studies in which the evaluation is based on patients' own assessment of the treatment effectiveness. Most cost-effectiveness and cost-utility analyses are based on modeling that employs expert opinion regarding the outcome of treatment, not on patient-derived assessments. Routine collection of effectiveness information from patients entering treatment in secondary health care turned out to be easy enough and did not, for instance, require additional personnel on the wards in which the study was conducted. The mean patient response rate was more than 70%, suggesting that patients were happy to participate and appreciated the fact that the hospital showed an interest in their well-being even after the actual treatment episode had ended. Spinal surgery leads to a statistically significant and clinically important improvement in HRQoL. The cost per QALY gained was reasonable, at less than half of that observed, for instance, for hip replacement surgery. However, prolonged waiting for an operation approximately doubled the cost per QALY gained from the surgical intervention. The mean utility gain following routine cataract surgery in a real-world setting was relatively small and confined mostly to patients who had had both eyes operated on. The cost of cataract surgery per QALY gained was higher than previously reported and was associated with a considerable degree of uncertainty. Hip and knee replacement both improve HRQoL; the cost per QALY gained from knee replacement is twofold compared to hip replacement. Cost-utility results from the three studied specialties showed that there is great variation in the cost-utility of surgical interventions performed in a real-world setting, even when only common, widely accepted interventions are considered. However, the cost per QALY of all the studied interventions, except revision hip arthroplasty, was well below €50 000, a figure sometimes cited in the literature as a threshold for the cost-effectiveness of an intervention. Based on the present study, it may be concluded that routine evaluation of the cost-utility of secondary health care is feasible and produces information essential for a rational and balanced allocation of scarce health-care resources.
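The cost-per-QALY arithmetic underlying the figures above can be sketched as follows; the thesis's actual calculation (15D utilities, expected remaining life years, possible discounting) is more involved, and all numbers here are invented.

```python
# QALYs gained = measured utility change sustained over remaining life years;
# cost-utility = direct cost divided by the QALY gain.
def cost_per_qaly(cost_eur, utility_before, utility_after, remaining_life_years):
    qaly_gain = (utility_after - utility_before) * remaining_life_years
    return cost_eur / qaly_gain

# A 0.05 utility gain sustained over 20 remaining life years is 1 QALY,
# so a 5 000 EUR operation costs 5 000 EUR per QALY gained.
print(round(cost_per_qaly(5000, 0.80, 0.85, 20)))   # -> 5000
```

The same division explains why a small mean utility gain, as in the cataract subgroup above, drives the cost per QALY sharply upwards.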

Relevance: 30.00%

Abstract:

Aerosol particles play an important role in the Earth's atmosphere and in the climate system: they scatter and absorb solar radiation, facilitate chemical processes, and serve as seeds for cloud formation. Secondary new particle formation (NPF) is a globally important source of these particles. Currently, however, the mechanisms of particle formation and the vapors participating in this process are not fully understood. In order to fully explain atmospheric NPF and the subsequent growth, we need to measure directly the very initial steps of the formation processes. This thesis investigates the possibility of studying atmospheric particle formation using the recently developed Neutral cluster and Air Ion Spectrometer (NAIS). First, the NAIS was calibrated and intercompared, and found to be in good agreement with the reference instruments both in the laboratory and in the field. It was concluded that the NAIS can reliably measure small atmospheric ions and particles directly at the sizes where NPF begins. Second, several NAIS systems were deployed simultaneously at 12 European measurement sites, representing a variety of geographical and atmospheric conditions, to quantify the spatial and temporal distribution of particle formation events. NPF events were detected with the NAIS systems at all of the sites during the year-long measurement period. Various particle formation characteristics, such as formation and growth rates, were used as indicators of the relevant processes and participating compounds in the initial formation. In the case of parallel ion and neutral cluster measurements, we also estimated the relative contributions of ion-induced and neutral nucleation to the total particle formation. At most sites, the particle growth rate increased with increasing particle size, indicating that different condensing vapors participate in the growth of different-sized particles. The results suggest that, in addition to sulfuric acid, organic vapors contribute to the initial steps of NPF and to the subsequent growth, not just to the later steps of particle growth. As a significant new result, we found that the total particle formation rate varied much more between the different sites than the formation rate of charged particles. This implies that ion-induced nucleation makes only a minor contribution to particle formation in the boundary layer in most environments. These results provide tools to better quantify the aerosol source provided by secondary NPF in various environments, and the particle formation characteristics determined in this thesis can be used in global models to assess the climatic effects of NPF.
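The growth rate used above as an event characteristic is commonly estimated as the slope of the mode diameter of the growing particle population versus time; the sketch below uses synthetic numbers for illustration.

```python
import numpy as np

# Fit a straight line to mode diameter vs. time; the slope is the growth
# rate GR in nm/h. The data points are invented but of a realistic size.
time_h = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # hours since event start
diameter_nm = np.array([2.0, 5.1, 7.9, 11.2, 13.8])   # fitted mode diameter
gr_nm_per_h, intercept = np.polyfit(time_h, diameter_nm, 1)
print(round(gr_nm_per_h, 1))   # ~3 nm/h, a plausible boreal-forest value
```

Comparing such slopes across size ranges is what reveals that different vapors drive the growth of different-sized particles.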

Relevance: 30.00%

Abstract:

We present a laser-based system to measure the refractive index of air over a long path length. In optical distance measurements it is essential to know the refractive index of air with high accuracy. Commonly, the refractive index of air is calculated from the properties of the ambient air using either the Ciddor or the Edlén equations, where the dominant uncertainty component is in most cases the air temperature. The method developed in this work utilises direct absorption spectroscopy of oxygen to measure the average temperature of the air, and of water vapor to measure the relative humidity. The method allows measurement of temperature and humidity over the same beam path as in the optical distance measurement, providing spatially well-matching data. Indoor and outdoor measurements demonstrate the effectiveness of the method. In particular, we demonstrate effective compensation of the refractive index of air in an interferometric length measurement at a time-variant and spatially non-homogeneous temperature over a long time period. Further, we demonstrate 7 mK RMS noise over a 67 m path length using a 120 s sample time. To our knowledge, this is the best temperature precision reported for a spectroscopic temperature measurement.
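A back-of-the-envelope sketch shows why air temperature dominates the refractive-index correction in long-range interferometry. Near standard conditions n - 1 is about 2.7e-4 and its temperature sensitivity roughly -1e-6 per kelvin; these are approximate round numbers assumed here, whereas the work itself uses the full Ciddor/Edlén equations.

```python
# Length error caused by an uncorrected air-temperature error along the path.
def distance_error_m(path_length_m, temp_error_K, dn_dT=-1e-6):
    # dn_dT is an assumed round value for the temperature sensitivity of n
    return abs(path_length_m * dn_dT * temp_error_K)

# Over the 67 m path of the abstract, a 1 K error maps to ~67 um, while
# the demonstrated 7 mK precision keeps the error below 0.5 um.
print(distance_error_m(67.0, 1.0))      # ~6.7e-05 m
print(distance_error_m(67.0, 0.007))    # ~4.7e-07 m
```

This is why measuring the path-averaged temperature spectroscopically, along the very same beam path, pays off directly in length accuracy.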

Relevance: 20.00%

Abstract:

The aim of this thesis is to develop a fully automatic lameness detection system that operates in a milking robot. The instrumentation, measurement software, algorithms for data analysis, and a neural network model for lameness detection were developed. Automatic milking has become common practice in dairy husbandry; by 2006 about 4 000 farms worldwide used over 6 000 milking robots. There is a worldwide movement towards fully automating every process from feeding to milking. The increase in automation is a consequence of increasing farm sizes, the demand for more efficient production, and the growth of labour costs. As the level of automation increases, the time that the cattle keeper uses for monitoring animals often decreases. This has created a need for systems that automatically monitor the health of farm animals. The popularity of milking robots also offers a new and unique possibility to monitor animals in a single confined space up to four times daily. Lameness is a crucial welfare issue in the modern dairy industry. Limb disorders cause serious welfare, health, and economic problems, especially in loose housing of cattle. Lameness causes losses in milk production and leads to early culling of animals; these costs could be reduced with early identification and treatment. At present, only a few methods for automatically detecting lameness have been developed, and the most common methods used for lameness detection and assessment are various visual locomotion scoring systems. The problem with locomotion scoring is that it requires experience to be conducted properly, it is labour-intensive as an on-farm method, and the results are subjective. A four-balance system for measuring the leg load distribution of dairy cows during milking, in order to detect lameness, was developed and set up at the University of Helsinki research farm Suitia. The leg weights of 73 cows were successfully recorded during almost 10,000 robotic milkings over a period of 5 months. The cows were locomotion scored weekly, and the lame cows were inspected clinically for hoof lesions. Unsuccessful measurements, caused by cows standing outside the balances, were removed from the data with a special algorithm, and the mean leg loads and the number of kicks during milking were calculated. In order to develop an expert system to automatically detect lameness cases, a model was needed; a probabilistic neural network (PNN) classifier was chosen for the task. The data were divided into two parts: 5,074 measurements from 37 cows were used to train the model, and its ability to detect lameness was evaluated on a validation dataset of 4,868 measurements from 36 cows. The model classified 96% of the measurements correctly as sound or lame, and 100% of the lameness cases in the validation data were identified. The proportion of measurements causing false alarms was 1.1%. The developed model has the potential to be used for on-farm decision support and in a real-time lameness monitoring system.
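A probabilistic neural network of the kind chosen above is essentially a Parzen-window density classifier; the minimal sketch below uses two synthetic features as stand-ins for the real leg-load measurements, and the smoothing width sigma is an assumed value.

```python
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.5):
    """Pick the class whose Gaussian-kernel density estimate at x is largest."""
    scores = {}
    for label in np.unique(train_y):
        pts = train_X[train_y == label]
        d2 = np.sum((pts - x) ** 2, axis=1)       # squared distances to x
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy data: class 0 ("sound") clustered near (0, 0), class 1 ("lame")
# clustered near (3, 3).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)), rng.normal(3.0, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(pnn_predict(np.array([2.8, 3.1]), X, y))   # -> 1
```

A PNN needs no iterative training, only the stored training patterns, which suits an on-farm system that accumulates labelled milkings over time.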

Relevance: 20.00%

Abstract:

Breeding approaches for the milk coagulation ability of dairy cows. This dissertation investigated improving the cheese-making quality of dairy cows' milk through selective breeding. The topic is important because an increasing share of milk is used for cheese production. The study focused on milk coagulation ability, as it is one of the key factors affecting cheese yield. Milk coagulation ability varied considerably between cows, sires, herds, breeds, and stages of lactation. Although there were large herd-to-herd differences in the coagulation ability of bulk tank milk, herd explained only a small part of the total variation in coagulation ability. Genetic differences between cows most likely explain the majority of the observed differences in the coagulation ability of herds' bulk tank milk. Good management and feeding nevertheless somewhat reduced the proportion of poorly coagulating bulk tank milk in herds. Holstein-Friesian cows had better milk coagulation ability than Ayrshire cows. Poor coagulation and non-coagulation were only a minor problem in Holstein-Friesians (10%), whereas a third of Ayrshire cows produced poorly coagulating or non-coagulating milk. Milk is classified as poorly coagulating when the curd is not firm enough to be cut half an hour after rennet addition. Milk defined as non-coagulating does not curdle at all within half an hour and is therefore very poor raw material for cheese dairies. About 40% of the differences between cows in milk coagulation ability were explained by genetic factors; coagulation ability can thus be called a highly heritable trait. Three measurements per cow are quite sufficient for estimating a cow's average milk coagulation ability. At present, however, direct selection for coagulation ability is hampered by the lack of an automated measurement device suitable for large-scale use.

For this reason, the dissertation examined the possibilities of improving milk coagulation ability indirectly, through another trait. Such a trait must be sufficiently strongly genetically correlated with coagulation ability for indirect selection to be possible. The traits studied were milk yield and udder health traits, which are already included in the total merit index of sires, as well as milk protein and casein content and milk pH, which are not included in the total merit index. The dissertation also examined the possibilities of marker-assisted selection by studying the heritability of milk non-coagulation and mapping the chromosomal regions associated with it. Based on the results, breeding for udder health also somewhat improves milk coagulation ability and reduces non-coagulation in Ayrshire cows. Milk yield, by contrast, is genetically independent of milk coagulation ability and non-coagulation. Likewise, the genetic correlation of milk protein and casein content with coagulation ability was close to zero. There was a fairly strong genetic correlation between milk pH and coagulation ability, so breeding for milk pH would also improve coagulation ability; however, it would probably not reduce the number of cows producing non-coagulating milk. Because milk non-coagulation is such a common problem in Finnish Ayrshire cows, the dissertation examined the background of the phenomenon in more detail. In all three datasets, about 10% of Ayrshire cows produced non-coagulating milk. During two years of monthly monitoring, some cows produced non-coagulating milk at almost every measurement. Milk non-coagulation was associated with the stage of lactation, but none of the environmental factors could fully explain it. Instead, the evidence for its heritability strengthened as the studies progressed.

Finally, the research group succeeded in mapping the chromosomal regions causing non-coagulation to chromosomes 2 and 18, close to the DNA markers BMS1126 and BMS1355. Based on the results, milk non-coagulation is not linked to the casein genes that are central to the milk coagulation process. Instead, it is possible that the non-coagulation problem is caused by errors in the post-synthesis processing of the caseins; this, however, requires thorough further study. Based on the results of the dissertation, culling animals carrying the non-coagulation gene from the breeding population would be the most effective way to improve milk coagulation ability in the Finnish dairy cattle population.
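The claim that a few measurements per cow suffice can be illustrated with the standard repeatability formula from animal breeding: with repeatability t, the reliability of a cow's mean over n repeated measurements is r(n) = n*t / (1 + (n - 1)*t). The repeatability value below is assumed for illustration, not taken from the dissertation.

```python
# Reliability of a cow's mean over n repeated measurements of a trait
# with repeatability t (assumed t = 0.6 here, purely illustrative).
def reliability(n, t):
    return n * t / (1.0 + (n - 1) * t)

for n in (1, 3, 10):
    print(n, round(reliability(n, 0.6), 2))   # 0.6, 0.82, 0.94
```

The curve flattens quickly, so going from three to ten measurements per cow buys relatively little additional accuracy.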

Relevance: 20.00%

Abstract:

The aim of this thesis was to develop measurement techniques and systems for measuring air quality and to provide information about air quality conditions and the amount of gaseous emissions from semi-insulated and uninsulated dairy buildings in Finland and Estonia. Specialization and intensification in livestock farming, such as in dairy production, is usually accompanied by an increase in concentrated environmental emissions. In addition to high moisture, the presence of dust and corrosive gases, and widely varying gas concentrations in dairy buildings, Finland and Estonia experience winter temperatures below -40 °C and summer temperatures above +30 °C. The adoption of new technologies for long-term air quality monitoring and measurement remains relatively uncommon in dairy buildings because the construction and maintenance of accurate monitoring systems for long-term use are too expensive for the average dairy farmer. Though accurate air quality measurement systems intended mainly for research purposes have been documented in the past, standardised methods and documentation of affordable systems and simple methods for performing air quality and emissions measurements in dairy buildings are unavailable. In this study, we built three measurement systems: 1) a Stationary system with integrated affordable sensors for on-site measurements, 2) a Wireless system with affordable sensors for off-site measurements, and 3) a Mobile system consisting of expensive and accurate sensors for measuring air quality. In addition to assessing existing methods, we developed simplified methods for measuring ventilation and emission rates in dairy buildings. The three measurement systems were successfully used to measure air quality in uninsulated, semi-insulated, and fully-insulated dairy buildings between the years 2005 and 2007. When carefully calibrated, the affordable sensors in the systems gave reasonably accurate readings. The spatial air quality survey showed high variation in microclimate conditions in the dairy buildings measured. The average indoor air concentration of carbon dioxide was 950 ppm, of ammonia 5 ppm, and of methane 48 ppm; the average relative humidity was 70% and the average inside air velocity 0.2 m/s. The average winter and summer indoor temperatures during the measurement period were -7 °C and +24 °C for the uninsulated, +3 °C and +20 °C for the semi-insulated, and +10 °C and +25 °C for the fully-insulated dairy buildings. The measurement results showed that the uninsulated dairy buildings had lower indoor gas concentrations and emissions compared to fully insulated buildings. Although occasionally exceeded, the ventilation rates and average indoor air quality in the dairy buildings were largely within recommended limits. We assessed the traditional heat balance, moisture balance, carbon dioxide balance, and direct airflow methods for estimating ventilation rates. Direct velocity measurement for the estimation of the ventilation rate proved impractical for naturally ventilated buildings. Two new methods were developed for estimating ventilation rates: the first is applicable in buildings in which the ventilation can be stopped or completely closed, and the second is useful in naturally ventilated buildings with large openings and high ventilation rates, where spatial gas concentrations are heterogeneously distributed. Two traditional methods (carbon dioxide and methane balances) and two newly developed methods (theoretical modelling using Fick's law and boundary layer theory, and the recirculation flux-chamber technique) were used to estimate ammonia emissions from the dairy buildings. Using the traditional carbon dioxide balance method, ammonia emissions per cow ranged from 7 g/day to 35 g/day, and methane emissions per cow from 96 g/day to 348 g/day. The developed methods proved to be as accurate as the traditional methods: the variation between the mean emissions estimated with the traditional and the developed methods was less than 20%. The developed modelling procedure provided a sound framework for examining the impact of production systems on ammonia emissions in dairy buildings.
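The carbon dioxide balance method used above for ventilation-rate estimation can be sketched in one line: at steady state the CO2 produced by the animals is carried away by the airflow, so Q = production / (C_in - C_out). All numbers below are illustrative round values, not measurements from the study.

```python
# Steady-state CO2 balance for a naturally ventilated barn.
def ventilation_rate_m3_per_h(co2_production_l_per_h, c_in_ppm, c_out_ppm):
    co2_m3_per_h = co2_production_l_per_h / 1000.0      # L/h -> m3/h
    delta_c = (c_in_ppm - c_out_ppm) * 1e-6             # ppm -> volume fraction
    return co2_m3_per_h / delta_c

# 50 cows at an assumed ~200 L CO2 per hour each; indoor 950 ppm (the
# reported average), outdoor 400 ppm.
q = ventilation_rate_m3_per_h(50 * 200, c_in_ppm=950, c_out_ppm=400)
print(round(q))   # ~18 000 m3/h for the whole building
```

The method's weakness, which motivated the two newly developed methods, is that it assumes well-mixed indoor air, whereas large-opening buildings show heterogeneous concentrations.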

Relevance: 20.00%

Abstract:

We consider an obstacle scattering problem for linear Beltrami fields. A vector field is a linear Beltrami field if its curl is a constant times the field itself. We study obstacles of Neumann type, that is, the normal component of the total field vanishes on the boundary of the obstacle. We prove unique solvability for the corresponding exterior boundary value problem, in other words, for the direct obstacle scattering model. For the inverse obstacle scattering problem, we derive the formulas needed to apply the singular sources method. Numerical examples are computed for both the direct and the inverse scattering problem.
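In symbols, the setting can be sketched as follows (notation assumed here, not taken from the abstract: $D$ the obstacle, $\nu$ the outward unit normal, $\lambda \neq 0$ the Beltrami constant, and $u = u^i + u^s$ the total field):

```latex
\nabla \times u = \lambda u \quad \text{in } \mathbb{R}^3 \setminus \overline{D},
\qquad
\nu \cdot u = 0 \quad \text{on } \partial D,
```

with the scattered part $u^s$ additionally required to satisfy a suitable radiation condition at infinity.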