22 results for Enthalpy calibration

in Helda - Digital Repository of the University of Helsinki


Relevance:

20.00%

Publisher:

Abstract:

A precision measurement of the top quark mass m_t is obtained using a sample of ttbar events from ppbar collisions at the Fermilab Tevatron with the CDF II detector. Selected events require an electron or muon, large missing transverse energy, and exactly four high-energy jets, at least one of which is tagged as coming from a b quark. A likelihood is calculated using a matrix element method with quasi-Monte Carlo integration taking into account finite detector resolution and jet mass effects. The event likelihood is a function of m_t and a parameter DJES used to calibrate the jet energy scale in situ. Using a total of 1087 events, a value of m_t = 173.0 +/- 1.2 GeV/c^2 is measured.
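
The in-situ jet-energy-scale calibration amounts to profiling a nuisance parameter out of the mass likelihood. The following toy sketch is not the CDF matrix-element likelihood; the data model, resolution and the 2% JES constraint are invented to show only the profiling step:

```python
# Toy illustration (not the CDF method): profile a jet-energy-scale
# nuisance parameter DJES out of a top-mass likelihood.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
m_true, djes_true, sigma = 173.0, 1.01, 15.0           # hypothetical values
data = rng.normal(m_true * djes_true, sigma, size=1087)

def nll(m_t, djes):
    """Negative log-likelihood: reconstructed mass ~ N(m_t * DJES, sigma),
    with a Gaussian prior constraint on DJES around 1 (2% width, assumed)."""
    return (-norm.logpdf(data, m_t * djes, sigma).sum()
            + 0.5 * ((djes - 1.0) / 0.02) ** 2)

def profiled_nll(m_t):
    # minimise over DJES at fixed m_t ("profiling" the nuisance parameter)
    return minimize_scalar(lambda d: nll(m_t, d), bounds=(0.9, 1.1),
                           method="bounded").fun

grid = np.linspace(168, 178, 201)
best = grid[np.argmin([profiled_nll(m) for m in grid])]
print(f"profiled m_t estimate: {best:.1f} GeV/c^2")
```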

Relevance:

20.00%

Publisher:

Abstract:

A measurement of the top-quark pair-production cross section in ppbar collisions at sqrt{s}=1.96 TeV using data corresponding to an integrated luminosity of 1.12/fb collected with the Collider Detector at Fermilab is presented. Decays of top-quark pairs into the final states e nu + jets and mu nu + jets are selected, and the cross section and the b-jet identification efficiency are determined using a new measurement technique, which requires that the cross sections measured with exactly one and with multiple identified b-quarks from the top-quark decays agree. Assuming a top-quark mass of 175 GeV/c^2, a cross section of 8.5+/-0.6(stat.)+/-0.7(syst.) pb is measured.
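
The core of the technique can be illustrated with a toy counting model: if each ttbar event contributes two taggable b jets with per-jet tag efficiency eps, then the exactly-one-tag and two-tag samples constrain eps and the cross section simultaneously. All numbers below are invented:

```python
# Toy version of the simultaneous extraction: with two b jets per event,
# P(1 tag) = 2*eps*(1-eps) and P(2 tags) = eps**2.  Requiring both tag
# categories to give the same cross section fixes eps and sigma at once.
def solve_xsec(n1, n2, lumi_times_acc):
    r = n2 / n1                      # ratio of 2-tag to 1-tag counts
    eps = 2 * r / (1 + 2 * r)        # from eps**2 / (2*eps*(1-eps)) = r
    p_tagged = 2 * eps * (1 - eps) + eps ** 2
    sigma = (n1 + n2) / (lumi_times_acc * p_tagged)
    return sigma, eps

# made-up event counts and (luminosity x acceptance), for illustration only
sigma, eps = solve_xsec(n1=800, n2=180, lumi_times_acc=150.0)
print(f"sigma = {sigma:.1f} pb, b-tag efficiency = {eps:.2f}")
```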

Relevance:

10.00%

Publisher:

Abstract:

In order to improve and continuously develop the quality of pharmaceutical products, the process analytical technology (PAT) framework has been adopted by the US Food and Drug Administration. One of the aims of PAT is to identify critical process parameters and their effect on the quality of the final product. Real-time analysis of the process data enables better control of the processes to obtain a high-quality product. The main purpose of this work was to monitor crucial pharmaceutical unit operations (from blending to coating) and to examine the effect of processing on solid-state transformations and physical properties. The tools used were near-infrared (NIR) and Raman spectroscopy combined with multivariate data analysis, as well as X-ray powder diffraction (XRPD) and terahertz pulsed imaging (TPI). To detect process-induced transformations in active pharmaceutical ingredients (APIs), samples were taken after blending, granulation, extrusion, spheronisation, and drying. These samples were monitored by XRPD, Raman, and NIR spectroscopy, which showed hydrate formation in the case of theophylline and nitrofurantoin. For erythromycin dihydrate, formation of the isomorphic dehydrate was critical; thus, the main focus was on the drying process. NIR spectroscopy was applied in-line during a fluid-bed drying process. Multivariate data analysis (principal component analysis) enabled detection of the dehydrate formation at temperatures above 45°C. Furthermore, a small-scale rotating plate device was tested to provide an insight into film coating. The process was monitored using NIR spectroscopy. A calibration model, using partial least squares regression, was set up and applied to data obtained by in-line NIR measurements of a coating drum process. The predicted coating thickness agreed with the measured coating thickness. For investigating the quality of film coatings, TPI was used to create a 3-D image of a coated tablet. With this technique it was possible to determine coating layer thickness, distribution, reproducibility, and uniformity. In addition, it was possible to localise defects of either the coating or the tablet. It can be concluded from this work that the applied techniques increased the understanding of the physico-chemical properties of drugs and drug products during and after processing. They additionally provided useful information to improve and verify the quality of pharmaceutical dosage forms.
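
As a sketch of the coating calibration step, the following builds a partial least squares model relating NIR spectra to coating thickness. The spectra, thickness values and model settings are synthetic stand-ins, not the thesis data:

```python
# Minimal PLS calibration sketch on synthetic "NIR spectra".
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
thickness = rng.uniform(10, 100, n_samples)            # microns (made up)

# synthetic spectra: a thickness-dependent band plus noise
wl = np.linspace(0, 1, n_wavelengths)
X = (thickness[:, None] * np.exp(-((wl - 0.5) ** 2) / 0.01)
     + rng.normal(0, 0.5, (n_samples, n_wavelengths)))

X_tr, X_te, y_tr, y_te = train_test_split(X, thickness, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
print("R^2 on held-out spectra:", round(pls.score(X_te, y_te), 3))
```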

Relevance:

10.00%

Publisher:

Abstract:

B. cereus is one of the most frequently occurring bacteria in foods. It produces several heat-labile enterotoxins and one stable non-protein toxin, cereulide (emetic toxin), which may be pre-formed in food. Cereulide is a heat-stable peptide whose structure and mechanism of action were elucidated in the past decade. Until this work, the detection of cereulide was done by biological assays. With my mentors, I developed the first quantitative chemical assay for cereulide. The assay is based on liquid chromatography (HPLC) combined with ion trap mass spectrometry, and the calibration is done with valinomycin and purified cereulide. To detect and quantitate valinomycin and cereulide, their [NH4+] adducts, m/z 1128.9 and m/z 1171 respectively, were used. This was a breakthrough in cereulide research and became a very powerful tool of investigation. It made it possible to prove for the first time that the toxin produced by B. cereus in heat-treated food caused human illness. Until this thesis work (Paper II), cereulide-producing B. cereus strains were believed to represent a homogeneous group of clonal strains. The cereulide-producing strains investigated in those studies originated mostly from food poisoning incidents. We used strains of many origins and analyzed them using a polyphasic approach. We found that the cereulide-producing B. cereus strains are genetically and biologically more diverse than assumed in earlier studies. The strains diverge in the adenylate kinase (adk) gene (two sequence types), in ribopatterns obtained with EcoRI and PvuII (three patterns), and in tyrosine decomposition, haemolysis and lecithin hydrolysis (two phenotypes). Our study was the first demonstration of diversity within the cereulide-producing strains of B. cereus. To manage the risk of cereulide production in food, understanding is needed of the factors that may upregulate cereulide production in a given food matrix and of the environmental factors affecting it. As a contribution in this direction, we adjusted the growth environment and measured the cereulide production of strains selected for diversity. The temperature range where cereulide is produced was narrower than that for growth for most of the producer strains. Most strains produced the most cereulide at room temperature (20-23 °C). Exceptions to this were two faecal isolates, which produced the same amount of cereulide from 23 °C up to 39 °C. We also found that at 37 °C the growth medium favouring cereulide production differed from that at room temperature. The food composition and temperature may thus be a key to understanding cereulide production in foods as well as in the gut. We investigated the contents of [K+], [Na+] and amino acids of six growth media. Statistical evaluation indicated a significant positive correlation between the ratio [K+]:[Na+] and the production of cereulide, but only when the concentrations of glycine and [Na+] were constant. Of the amino acids, only glycine correlated positively with high cereulide production. Glycine is used worldwide as a food additive (E 640), flavor modifier, humectant and acidity regulator, and is permitted in the European Union countries, with no regulatory quantitative limitation, in most types of food. B. subtilis group members are endospore-forming bacteria ubiquitous in the environment, similar to B. cereus in this respect. Bacillus species other than B. cereus have only sporadically been identified as causative agents of food-borne illnesses.
We found (Paper IV) that food-borne isolates of B. subtilis and B. mojavensis produced amylosin. It is possible that amylosin was the agent responsible for the food-borne illness, since no other toxic substance was found in the strains. This is the first report of amylosin production by strains isolated from food. We found that the temperature requirement for amylosin production was higher for the B. subtilis strain F 2564/96, a mesophilic producer, than for the B. mojavensis strains eela 2293 and B 31, psychrotolerant producers. We also found that a low-oxygen atmosphere did not prevent the production of amylosin. Ready-to-eat foods packaged in a micro-aerophilic atmosphere and/or stored at temperatures above 10 °C may thus pose a risk when toxigenic strains of B. subtilis or B. mojavensis are present.
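
The quantitation step of such an HPLC-MS assay reduces to an external calibration curve of peak area against standard amount, followed by inverse prediction for an unknown. A minimal sketch with invented peak areas:

```python
# Linear calibration-curve quantitation sketch (all values invented).
import numpy as np

std_ng = np.array([5, 10, 25, 50, 100.0])                   # standards (ng)
peak_area = np.array([1.1e5, 2.0e5, 5.2e5, 1.0e6, 2.1e6])   # made-up areas

slope, intercept = np.polyfit(std_ng, peak_area, 1)

unknown_area = 7.4e5
cereulide_ng = (unknown_area - intercept) / slope            # inverse predict
print(f"estimated cereulide: {cereulide_ng:.1f} ng")
```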

Relevance:

10.00%

Publisher:

Abstract:

This thesis consists of two parts. In the first part we performed single-molecule force-extension measurements with 10 kb long DNA molecules from phage λ to validate the calibration and single-molecule capability of our optical tweezers instrument. Fitting the worm-like chain interpolation formula to the data revealed that ca. 71% of the DNA tethers featured a contour length within ±15% of the expected value (3.38 µm). Only 25% of the found DNA tethers had a persistence length between 30 and 60 nm; the correct value should be within 40 to 60 nm. In the second part we designed and built a precise temperature controller to remove the thermal fluctuations that cause drifting of the optical trap. The controller uses feed-forward and PID (proportional-integral-derivative) feedback to achieve 1.58 mK precision and 0.3 K absolute accuracy. During a 5 min test run it reduced drifting of the trap from 1.4 nm/min in open loop to 0.6 nm/min in closed loop.
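
The worm-like chain fit mentioned above is commonly done with the Marko-Siggia interpolation formula, F(x) = (kT/Lp)[1/(4(1 - x/Lc)^2) - 1/4 + x/Lc]. A minimal fit sketch on synthetic data (the noise model and fitting details are assumptions, not the thesis procedure):

```python
# Marko-Siggia worm-like-chain fit for persistence length Lp and
# contour length Lc.  Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

kT = 4.11  # thermal energy at room temperature, pN*nm

def wlc_force(x, Lp, Lc):
    # x, Lp, Lc in nm; returned force in pN
    s = x / Lc
    return (kT / Lp) * (0.25 / (1.0 - s) ** 2 - 0.25 + s)

rng = np.random.default_rng(2)
Lp_true, Lc_true = 45.0, 3380.0            # 45 nm, 3.38 um (from the text)
x = np.linspace(500, 3000, 40)             # extensions, nm
F = wlc_force(x, Lp_true, Lc_true) * rng.normal(1.0, 0.03, x.size)

popt, _ = curve_fit(wlc_force, x, F, p0=[50.0, 3500.0])
print(f"Lp = {popt[0]:.1f} nm, Lc = {popt[1] / 1000:.2f} um")
```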

Relevance:

10.00%

Publisher:

Abstract:

Recent epidemiological studies have shown a consistent association of the mass concentration of urban air thoracic (PM10) and fine (PM2.5) particles with mortality and morbidity among cardiorespiratory patients. However, the chemical characteristics of different particulate size ranges and the biological mechanisms responsible for these adverse health effects are not well known. The principal aims of this thesis were to validate a high volume cascade impactor (HVCI) for the collection of particulate matter for physicochemical and toxicological studies, and to make an in-depth chemical and source characterisation of samples collected during different pollution situations. The particulate samples were collected with the HVCI, virtual impactors and a Berner low pressure impactor in six European cities: Helsinki, Duisburg, Prague, Amsterdam, Barcelona and Athens. The samples were analysed for particle mass, common ions, total and water-soluble elements, as well as elemental and organic carbon. Laboratory calibration and field comparisons indicated that the HVCI can provide unique large-capacity, high-efficiency sampling of size-segregated aerosol particles. The cutoff sizes of the recommended HVCI configuration were 2.4, 0.9 and 0.2 μm. The HVCI mass concentrations were in good agreement with the reference methods, but the chemical composition of especially the fine particulate samples showed some differences. This implies that the chemical characterisation of the exposure variable in toxicological studies needs to be done from the same HVCI samples as used in the cell and animal studies. The data from parallel, low volume reference samplers provide valuable additional information for chemical mass closure and source assessment. The major components of PM2.5 in the virtual impactor samples were carbonaceous compounds, secondary inorganic ions and sea salt, whereas those of coarse particles (PM2.5-10) were soil-derived compounds, carbonaceous compounds, sea salt and nitrate. The major and minor components together accounted for 77-106% and 77-96% of the gravimetrically measured masses of fine and coarse particles, respectively. Relatively large differences between sampling campaigns were observed in the organic carbon content of the PM2.5 samples as well as in the mineral composition of the PM2.5-10 samples. A source assessment based on chemical tracers suggested clear differences in the dominant sources (e.g. traffic, residential heating with solid fuels, metal industry plants, regional or long-range transport) between the sampling campaigns. In summary, the field campaigns exhibited different profiles with regard to particulate sources, size distribution and chemical composition, thus providing a highly useful setup for toxicological studies on the size-segregated HVCI samples.
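
The chemical mass closure quoted above (77-106% for fine particles) is simply the analysed component sum expressed as a percentage of the gravimetric mass. A sketch with invented component concentrations:

```python
# Chemical mass closure check; the component values are illustrative,
# not from the campaigns described above.
components = {                      # ug/m3, invented values
    "carbonaceous": 4.2, "secondary ions": 3.1,
    "sea salt": 0.6, "soil": 0.4, "trace elements": 0.2,
}
gravimetric_mass = 9.0              # ug/m3, invented

closure = 100 * sum(components.values()) / gravimetric_mass
print(f"mass closure: {closure:.0f} %")   # cf. the 77-106 % range above
```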

Relevance:

10.00%

Publisher:

Abstract:

Comprehensive two-dimensional gas chromatography (GC×GC) offers enhanced separation efficiency, reliability in qualitative and quantitative analysis, capability to detect low quantities, and information on the whole sample and its components. These features are essential in the analysis of complex samples, in which the number of compounds may be large or the analytes of interest are present at trace level. This study involved the development of instrumentation, data analysis programs and methodologies for GC×GC and their application in studies on qualitative and quantitative aspects of GC×GC analysis. Environmental samples were used as model samples. Instrumental development comprised the construction of three versions of a semi-rotating cryogenic modulator in which modulation was based on two-step cryogenic trapping with continuously flowing carbon dioxide as coolant. Two-step trapping was achieved by using a motor to rotate the nozzle that sprayed the carbon dioxide. The fastest rotation and highest modulation frequency were achieved with a permanent magnetic motor, and modulation was most accurate when the motor was controlled with a microcontroller containing a quartz crystal. Heated wire resistors were unnecessary for the desorption step when liquid carbon dioxide was used as coolant. With use of the modulators developed in this study, the narrowest peaks were 75 ms at base. Three data analysis programs were developed allowing basic, comparison and identification operations. Basic operations enabled the visualisation of two-dimensional plots and the determination of retention times, peak heights and volumes. The overlaying feature in the comparison program allowed easy comparison of 2D plots. An automated identification procedure based on mass spectra and retention parameters allowed the qualitative analysis of data obtained by GC×GC and time-of-flight mass spectrometry. In the methodological development, sample preparation (extraction and clean-up) and GC×GC methods were developed for the analysis of atmospheric aerosol and sediment samples. Dynamic sonication-assisted extraction was well suited for atmospheric aerosols collected on a filter. A clean-up procedure utilising normal phase liquid chromatography with ultraviolet detection worked well in the removal of aliphatic hydrocarbons from a sediment extract. GC×GC with flame ionisation detection or quadrupole mass spectrometry provided good reliability in the qualitative analysis of target analytes. However, GC×GC with time-of-flight mass spectrometry was needed in the analysis of unknowns. The automated identification procedure that was developed was efficient in the analysis of large data files, but manual searching and analyst knowledge are invaluable as well. Quantitative analysis was examined in terms of calibration procedures and the effect of matrix compounds on GC×GC separation. In addition to calibration in GC×GC with summed peak areas or peak volumes, simplified area calibration based on the normal GC signal can be used to quantify compounds in samples analysed by GC×GC, so long as certain qualitative and quantitative prerequisites are met. In a study of the effect of matrix compounds on GC×GC separation, it was shown that the quality of the separation of PAHs is not significantly disturbed by the amount of matrix, and that quantitativeness suffers only slightly in the presence of matrix when the amount of target compounds is low. The benefits of GC×GC in the analysis of complex samples easily overcome some minor drawbacks of the technique.
The developed instrumentation and methodologies performed well for environmental samples, but they could also be applied to other complex samples.
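
The first step in any GC×GC data analysis program of the kind described above is folding the one-dimensional detector trace into a two-dimensional retention plane using the modulation period. A minimal sketch, with the sampling rate, modulation period and signal all assumed values:

```python
# Fold a 1-D GC x GC detector trace into a 2-D retention plane.
import numpy as np

acquisition_hz = 100          # detector sampling rate (assumed)
modulation_s = 4.0            # modulation period (assumed)
signal = np.random.default_rng(3).random(100 * 1200)  # fake 1200 s run

pts_per_mod = int(acquisition_hz * modulation_s)
n_mod = signal.size // pts_per_mod
plane = signal[: n_mod * pts_per_mod].reshape(n_mod, pts_per_mod)
# plane[i, j]: i -> first-dimension time (i * modulation_s),
#              j -> second-dimension time (j / acquisition_hz)
print(plane.shape)            # (300, 400)
```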

Relevance:

10.00%

Publisher:

Abstract:

Multi- and intralake datasets of fossil midge assemblages in surface sediments of small shallow lakes in Finland were studied to determine the most important environmental factors explaining trends in midge distribution and abundance. The aim was to develop palaeoenvironmental calibration models for the most important environmental variables for the purpose of reconstructing past environmental conditions. The developed models were applied to three high-resolution fossil midge stratigraphies from southern and eastern Finland to interpret environmental variability over the past 2000 years, with special focus on the Medieval Climate Anomaly (MCA), the Little Ice Age (LIA) and recent anthropogenic changes. The midge-based results were compared with physical properties of the sediment, historical evidence and environmental reconstructions based on diatoms (Bacillariophyta), cladocerans (Crustacea: Cladocera) and tree rings. The results showed that the most important environmental factor controlling midge distribution and abundance along a latitudinal gradient in Finland was the mean July air temperature (TJul). However, when the dataset was environmentally screened to include only pristine lakes, water depth at the sampling site became more important. Furthermore, when the dataset was geographically scaled to southern Finland, hypolimnetic oxygen conditions became the dominant environmental factor. The results from an intralake dataset from eastern Finland showed that the most important environmental factors controlling midge distribution within a lake basin were river contribution, water depth and submerged vegetation patterns. In addition, the results of the intralake dataset showed that the fossil midge assemblages represent fauna that lived in close proximity to the sampling sites, thus enabling the exploration of within-lake gradients in midge assemblages. Importantly, this within-lake heterogeneity in midge assemblages may affect midge-based temperature estimations, because samples taken from the deepest point of a lake basin may infer considerably colder temperatures than expected, as shown by the present test results. Therefore, it is suggested here that the samples in fossil midge studies involving shallow boreal lakes should be taken from the sublittoral, where the assemblages are most representative of the whole-lake fauna. Transfer functions between midge assemblages and the environmental forcing factors that were significantly related to the assemblages, including mean air TJul, water depth, hypolimnetic oxygen, stream flow and distance to littoral vegetation, were developed using weighted averaging (WA) and weighted averaging-partial least squares (WA-PLS) techniques, which outperformed all the other tested numerical approaches. Application of the models in downcore studies showed mostly consistent trends. Based on the present results, which agreed with previous studies and historical evidence, the Medieval Climate Anomaly between ca. 800 and 1300 AD in eastern Finland was characterized by warm temperature conditions and dry summers, but probably humid winters. The Little Ice Age (LIA) prevailed in southern Finland from ca. 1550 to 1850 AD, with the coldest conditions occurring at ca. 1700 AD, whereas in eastern Finland the cold conditions prevailed over a longer period, from ca. 1300 until 1900 AD. The recent climatic warming was clearly represented in all of the temperature reconstructions.
In terms of long-term climatology, the present results support the concept that the North Atlantic Oscillation (NAO) index correlates positively with winter precipitation and annual temperature and negatively with summer precipitation in eastern Finland. In general, the results indicate a relatively warm climate with dry summers but snowy winters during the MCA, and a cool climate with rainy summers and dry winters during the LIA. The results of the present reconstructions and the forthcoming applications of the models can be used in assessments of long-term environmental dynamics to refine the understanding of the past environmental reference conditions and natural variability required by environmental scientists, ecologists and policy makers to make decisions concerning the presently occurring global, regional and local changes. The developed midge-based models for temperature, hypolimnetic oxygen, water depth, littoral vegetation shift and stream flow, presented in this thesis, are open for scientific use on request.
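
Of the two techniques named above, plain weighted averaging (WA) is simple enough to sketch in a few lines: taxon optima are abundance-weighted means of the training temperatures, and a fossil sample's inferred temperature is the abundance-weighted mean of those optima. The data below are synthetic, and deshrinking, which a real WA model requires, is omitted:

```python
# Minimal weighted-averaging (WA) transfer function (no deshrinking).
import numpy as np

rng = np.random.default_rng(4)
n_lakes, n_taxa = 50, 12
abund = rng.random((n_lakes, n_taxa))           # training assemblages
t_july = rng.uniform(10, 18, n_lakes)           # mean July air temperature

# taxon optima: abundance-weighted means of the training temperatures
optima = (abund * t_july[:, None]).sum(0) / abund.sum(0)

fossil = rng.random(n_taxa)                     # one fossil assemblage
t_inferred = (fossil * optima).sum() / fossil.sum()
print(f"inferred TJul: {t_inferred:.1f} C")
```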

Relevance:

10.00%

Publisher:

Abstract:

In this thesis, both documentary and natural proxy data have been used to improve the accuracy of palaeoclimatic knowledge in Finland since the 18th century. Early meteorological observations from Turku (1748-1800) were analyzed first as a potential source of information on climate variability. The reliability of the calculated mean temperatures was evaluated by comparing them with contemporary temperature records from Stockholm, St. Petersburg and Uppsala. The resulting monthly, seasonal and yearly mean temperatures from 1748 to 1800 were compared with the present-day mean values (1961-1990): the comparison suggests that the winters of the period 1749-1800 were 0.8 °C colder than today, while the summers were 0.4 °C warmer. Over the same period, springs were 0.9 °C and autumns 0.1 °C colder than today. Despite their uncertainties when compared with modern meteorological data, early temperature measurements offer direct and daily information about the weather for all months of the year, in contrast with other proxies. Secondly, early meteorological observations from Tornio (1737-1749) and Ylitornio (1792-1838) were used to study the temporal behaviour of the climate-tree growth relationship during the past three centuries in northern Finland. Analyses showed that the correlations between ring widths and mid-summer (July) temperatures did not vary significantly as a function of time. Early (June) and late summer (August) mean temperatures were secondary to mid-summer temperatures in controlling the radial growth. According to the dataset used, there was no clear signature of temporally reduced sensitivity of Scots pine ring widths to mid-summer temperatures over the periods of early and modern meteorological observations. Thirdly, plant phenological data together with tree-rings from south-west Finland since 1750 were examined as a palaeoclimate indicator. The information from the fragmentary, partly overlapping and partly nonsystematically biased plant phenological records of 14 different phenomena was combined into one continuous time series of phenological indices. The indices were found to be reliable indicators of the February to June temperature variations. In contrast, there was no correlation between the phenological indices and the precipitation data. Moreover, the correlations between the studied tree-rings and spring temperatures varied as a function of time and hence their use in palaeoclimate reconstruction is questionable. The use of present tree-ring datasets for palaeoclimate purposes may become possible after the application of more sophisticated calibration methods. Climate variability since the 18th century is perhaps best seen in the fourth study, a multiproxy spring temperature reconstruction for south-west Finland. With the help of transfer functions, an attempt was made to utilize both documentary and natural proxies. The reconstruction was verified with statistics showing a high degree of agreement between the reconstructed and observed temperatures. According to the proxies and modern meteorological observations from Turku, springs have become warmer and have featured a warming trend since around the 1850s. Over the period from 1750 to around 1850, springs featured larger multidecadal low-frequency variability, as well as a smaller range of annual temperature variations. The coldest springtimes occurred around the 1840s and 1850s and in the first decade of the 19th century.
Particularly warm periods occurred in the 1760s, 1790s, 1820s, 1930s, 1970s and from 1987 onwards, although cold springs also occurred in this period, such as those of 1994 and 1996. On the basis of the available material, long-term temperature changes have been related to changes in the atmospheric circulation, such as the North Atlantic Oscillation (February-June).
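
The comparisons with present-day means above reduce to seasonal anomalies against the 1961-1990 reference normals. A trivial sketch with placeholder numbers, not the Turku series:

```python
# Seasonal anomaly against a 1961-1990 reference normal (values invented).
early_winter_means = [-6.1, -5.4, -7.0, -5.8]   # degC, placeholder record
reference_normal = -5.2                          # degC, 1961-1990 normal

anomaly = sum(early_winter_means) / len(early_winter_means) - reference_normal
print(f"winter anomaly vs 1961-1990: {anomaly:+.1f} degC")
```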

Relevance:

10.00%

Publisher:

Abstract:

Remote sensing provides methods to infer land cover information over large geographical areas at a variety of spatial and temporal resolutions. Land cover is input data for a range of environmental models, and information on land cover dynamics is required for monitoring the implications of global change. Such data are also essential in support of environmental management and policymaking. Boreal forests are a key component of the global climate and a major sink of carbon. The northern latitudes are expected to experience disproportionate and rapid warming, which can have a major impact on vegetation at forest limits. This thesis examines the use of optical remote sensing for estimating aboveground biomass, leaf area index (LAI), tree cover and tree height in the boreal forests and the tundra-taiga transition zone in Finland. Continuous fields of forest attributes are required, for example, to improve the mapping of forest extent. The thesis focuses on studying the feasibility of satellite data at multiple spatial resolutions, assessing the potential of multispectral, multiangular and multitemporal information, and providing a regional evaluation of global land cover data. Preprocessed ASTER, MISR and MODIS products are the principal satellite data. The reference data consist of field measurements, forest inventory data and fine resolution land cover maps. Fine resolution studies demonstrate that statistical relationships between biomass and satellite data are relatively strong in single-species, low-biomass mountain birch forests in comparison with higher-biomass coniferous stands. The combination of forest stand data and fine resolution ASTER images provides a method for biomass estimation using medium resolution MODIS data. Multiangular data improve the accuracy of land cover mapping in the sparsely forested tundra-taiga transition zone, particularly in mires. Similarly, multitemporal data improve the accuracy of coarse resolution tree cover estimates in comparison to single-date data. Furthermore, the peak of the growing season is not necessarily the optimal time for land cover mapping in the northern boreal regions. The evaluated coarse resolution land cover datasets have considerable shortcomings in northernmost Finland and should be used with caution in similar regions. Quantitative reference data and upscaling methods for integrating multiresolution data are required for the calibration of statistical models and the evaluation of land cover datasets. The preprocessed image products have potential for wider use, as they can considerably reduce the time and effort used for data processing.
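
The upscaling idea, in which fine-resolution biomass estimates calibrate a model for coarse-resolution data, can be sketched as block-averaging followed by regression. All values below are synthetic:

```python
# Block-average fine-resolution biomass to a coarse grid, then regress
# against coarse-resolution reflectance (synthetic data throughout).
import numpy as np

rng = np.random.default_rng(7)
fine_biomass = rng.random((120, 120)) * 100         # t/ha, fine-res estimates

# aggregate the fine grid to the coarse grid with 4x4 block means
coarse_biomass = fine_biomass.reshape(30, 4, 30, 4).mean(axis=(1, 3))

# fake coarse-resolution reflectance, loosely related to biomass
reflectance = 0.5 - 0.003 * coarse_biomass + rng.normal(0, 0.01, (30, 30))

slope, intercept = np.polyfit(reflectance.ravel(), coarse_biomass.ravel(), 1)
print(f"biomass ~ {slope:.1f} * reflectance + {intercept:.1f}")
```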

Relevance:

10.00%

Publisher:

Abstract:

To obtain data on phytoplankton dynamics with improved spatial and temporal resolution, and at reduced cost, traditional phytoplankton monitoring methods have been supplemented with optical approaches. In this thesis, I have explored various fluorescence-based techniques for the detection of phytoplankton abundance, taxonomy and physiology in the Baltic Sea. In the algal cultures used in this thesis, nitrogen availability and light conditions caused changes in pigmentation, and consequently in the light absorption and fluorescence properties of cells. In the Baltic Sea, physical environmental factors (e.g. mixing depth, irradiance and temperature) and the related seasonal succession in the phytoplankton community explained a large part of the seasonal variability in the magnitude and shape of Chlorophyll a (Chla)-specific absorption. The variability in Chla-specific fluorescence was related to the abundance of cyanobacteria, the size structure of the phytoplankton community, and the absorption characteristics of phytoplankton. Cyanobacteria show very low Chla-specific fluorescence, so in the presence of eukaryotic species, Chla fluorescence describes cyanobacteria poorly. During cyanobacterial blooms in the Baltic Sea, phycocyanin fluorescence explained a large part of the variability in Chla concentrations. Thus, both Chla and phycocyanin fluorescence were required to predict Chla concentration. Phycobilins are major light-harvesting pigments for cyanobacteria. In the open Baltic Sea, small picoplanktonic cyanobacteria were the main source of the phycoerythrin fluorescence and absorption signal. Large filamentous cyanobacteria, which form harmful blooms, were the main source of the phycocyanin fluorescence signal, and typically their biomass and phycocyanin fluorescence were linearly related. Using phycocyanin fluorescence, the dynamics of cyanobacterial blooms can be detected at a high spatial and seasonal resolution not possible with other methods. Various taxonomic phytoplankton pigment groups can be separated by spectral fluorescence. I compared multivariate calibration methods for the retrieval of phytoplankton biomass in different taxonomic groups. The partial least squares regression method gave the closest predictions for all taxonomic groups, and the accuracy was adequate for phytoplankton bloom detection. Variable fluorescence has been proposed as a tool to study the physiological state of phytoplankton. My results from the Baltic Sea emphasize that variable fluorescence alone cannot be used to detect nutrient limitation of phytoplankton. However, when combined with experiments involving active nutrient manipulation and with other nutrient limitation indices, variable fluorescence provided valuable information on the physiological responses of the phytoplankton community. This thesis found a severe limitation of a commercial fast repetition rate fluorometer, which could not detect the variable fluorescence of phycoerythrin-lacking cyanobacteria. For these species, the Photosystem II absorption of blue light is very low, and the fluorometer excitation light did not saturate Photosystem II during a measurement. This thesis encourages the use of various in vivo fluorescence methods for the detection of bulk phytoplankton biomass, the biomass of cyanobacteria, the chemotaxonomy of the phytoplankton community, and phytoplankton physiology.
Fluorescence methods can support traditional phytoplankton monitoring by providing continuous measurements of phytoplankton, and thereby strengthen the understanding of the links between biological, chemical and physical processes in aquatic ecosystems.
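
The finding that both fluorescence channels are needed can be illustrated as a two-predictor linear model for Chla, fitted by least squares; the coefficients and noise below are invented:

```python
# Two-predictor linear model: Chla from Chla fluorescence plus
# phycocyanin fluorescence (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
n = 200
f_chla = rng.random(n)                 # in vivo Chla fluorescence
f_pc = rng.random(n)                   # phycocyanin fluorescence
chla = 2.0 * f_chla + 1.5 * f_pc + rng.normal(0, 0.05, n)

X = np.column_stack([f_chla, f_pc, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, chla, rcond=None)
print("coefficients (Chla-F, PC-F, intercept):", np.round(coef, 2))
```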

Relevance:

10.00%

Publisher:

Abstract:

Pack ice is an aggregate of ice floes drifting on the sea surface. The forces controlling the motion and deformation of pack ice are air and water drag forces, sea surface tilt, the Coriolis force and the internal force due to the interaction between ice floes. In this thesis, the mechanical behavior of compacted pack ice is investigated using theoretical and numerical methods, focusing on the three basic material properties: compressive strength, yield curve and flow rule. A high-resolution three-category sea ice model is applied to investigate the sea ice dynamics in two small basins, the whole Gulf of Riga and the inner Pärnu Bay, focusing on the calibration of the compressive strength for thin ice. These two basins are on the scales of 100 km and 20 km, respectively, with typical ice thicknesses of 10-30 cm. The model is found capable of capturing the main characteristics of the ice dynamics. The compressive strength is calibrated to be about 30 kPa, consistent with the values from most large-scale sea ice dynamic studies. In addition, the numerical study in Pärnu Bay suggests that the shear strength drops significantly when the ice-floe size markedly decreases. A characteristic inversion method is developed to probe the yield curve of compacted pack ice. The basis of this method is the relationship between the intersection angle of linear kinematic features (LKFs) in sea ice and the slope of the yield curve. A summary of the observed LKFs shows that they can be basically divided into three groups: intersecting leads, uniaxial opening leads and uniaxial pressure ridges. Based on the available observed angles, the yield curve is determined to be a curved diamond. Comparisons of this yield curve with those from other methods show that it possesses almost all the advantages identified by the other methods. A new constitutive law is proposed, where the yield curve is a diamond and the flow rule is a combination of the normal and co-axial flow rules. The non-normal co-axial flow rule is necessary for the Coulombic yield constraint. This constitutive law not only captures the main features of forming LKFs but also has the advantage of avoiding overestimation of divergence during shear deformation. Moreover, this study provides a method for observing the flow rule for pack ice during deformation.
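
A textbook Mohr-Coulomb simplification of the inversion idea (the thesis's actual method is more elaborate): conjugate fracture lines intersect at 2θ = 90° - φ, so an observed LKF intersection angle yields the internal friction angle φ and hence a yield-curve slope tan φ:

```python
# Mohr-Coulomb stand-in for the characteristic inversion: map an
# observed LKF intersection angle to a friction angle and yield slope.
import math

def yield_slope_from_intersection(angle_deg):
    phi = 90.0 - angle_deg          # internal friction angle, degrees
    return phi, math.tan(math.radians(phi))

for angle in (30.0, 40.0, 50.0):    # example observed intersection angles
    phi, mu = yield_slope_from_intersection(angle)
    print(f"intersection {angle:.0f} deg -> phi = {phi:.0f} deg, "
          f"slope = {mu:.2f}")
```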

Relevance:

10.00%

Publisher:

Abstract:

The methods for estimating patient exposure in x-ray imaging are based on the measurement of the radiation incident on the patient. In digital imaging, the useful dose range of the detector is large and excessive doses may remain undetected. Therefore, real-time monitoring of radiation exposure is important. According to international recommendations, the measurement uncertainty should be lower than 7% (95% confidence level). The kerma-area product (KAP) is a measurement quantity used for monitoring patient exposure to radiation. A field KAP meter is typically attached to an x-ray device, and it is important to recognize the effect of this measurement geometry on the response of the meter. In the tandem calibration method introduced in this study, a field KAP meter is used in its clinical position and calibration is performed with a reference KAP meter. This method provides a practical way to calibrate field KAP meters. However, the reference KAP meters require comprehensive calibration. In the calibration laboratory it is recommended to use standard radiation qualities. These qualities do not entirely correspond to the large range of clinical radiation qualities. In this work, the energy dependence of the response of different KAP meter types was examined. According to our findings, the recommended accuracy in KAP measurements is difficult to achieve with conventional KAP meters because of their strong energy dependence. The energy dependence of the response of a novel large KAP meter was found to be much lower than that of a conventional KAP meter. The accuracy of the tandem method can be improved by using this meter type as the reference meter. A KAP meter cannot be used to determine the radiation exposure of patients in mammography, in which part of the radiation beam is always aimed directly at the detector without attenuation by the tissue. This work assessed whether pixel values from this detector area could be used to monitor the radiation beam incident on the patient. The results were congruent with the tube output calculation, which is the method generally used for this purpose. The recommended accuracy can be achieved with the studied method. New optimization of radiation qualities and dose levels is needed when other detector types are introduced. In this work, the optimal selections were examined with one direct digital detector type. For this device, the use of radiation qualities with higher energies was recommended, and appropriate image quality was achieved by increasing the low dose level of the system.
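
The tandem method itself reduces to taking the ratio of simultaneous reference-meter and field-meter readings over a set of radiation qualities. A sketch with invented readings:

```python
# Tandem calibration coefficient for a field KAP meter in its clinical
# position, against a reference KAP meter (all readings invented).
kap_reference = [1.02, 2.05, 4.11, 8.30]   # Gy*cm^2, reference KAP meter
kap_field = [0.95, 1.93, 3.85, 7.80]       # Gy*cm^2, field KAP meter

coeffs = [r / f for r, f in zip(kap_reference, kap_field)]
k = sum(coeffs) / len(coeffs)              # mean calibration coefficient
spread = (max(coeffs) - min(coeffs)) / k * 100
print(f"calibration coefficient: {k:.3f} (spread {spread:.1f} %)")
```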

Relevance:

10.00%

Publisher:

Abstract:

The need for mutual recognition of accurate measurement results made by competent laboratories has been very widely accepted at the international level, for example at the World Trade Organization. A partial solution to the problem was provided by the International Committee for Weights and Measures (CIPM) in setting up the Mutual Recognition Arrangement (CIPM MRA), which has been signed by National Metrology Institutes (NMIs) around the world. The core idea of the CIPM MRA is to have global arrangements for the mutual acceptance of the calibration certificates of National Metrology Institutes. The CIPM MRA covers all the fields of science and technology for which NMIs maintain national standards. The infrastructure for the metrology of the gaseous compounds carbon monoxide (CO), nitrogen monoxide (NO), nitrogen dioxide (NO2), sulphur dioxide (SO2) and ozone (O3) has been constructed at the national level at the Finnish Meteorological Institute (FMI). The calibration laboratory at the FMI was built to provide calibration services for air quality measurements and to fulfil the requirements of a metrology laboratory. The laboratory participated, with good results, in the first comparison project, which was aimed at defining the state of the art in the preparation and analysis of the gas standards used by European metrology institutes and calibration laboratories in the field of air quality. To confirm the competence of the laboratory, an international external surveillance study was conducted at the laboratory. Based on this evidence, the Centre for Metrology and Accreditation (MIKES) designated the calibration laboratory at the Finnish Meteorological Institute (FMI) as a National Standard Laboratory in the field of air quality. With this designation, the MIKES-FMI Standards Laboratory became a member of the CIPM MRA, and Finland was brought into the internationally accepted forum in the field of gas metrology. The concept of ‘once measured - everywhere accepted’ is the leading theme of the CIPM MRA. The calibration service of the MIKES-FMI Standards Laboratory realizes the SI traceability system for the gas components and is constructed to meet the requirements of the European air quality directives. In addition, all the relevant uncertainty sources that influence the measurement results have been evaluated, and uncertainty budgets for the measurement results have been created.
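
An uncertainty budget of the kind mentioned in the last sentence combines independent standard uncertainties in quadrature and expands the result with a coverage factor k = 2 for roughly 95% confidence (the GUM approach). The component values below are placeholders, not the MIKES-FMI budget:

```python
# GUM-style uncertainty budget: quadrature sum of independent standard
# uncertainties, then expansion with coverage factor k = 2.
import math

components = {                    # relative standard uncertainties, %
    "primary gas standard": 0.5,
    "dilution system": 0.7,
    "analyser repeatability": 0.3,
    "temperature/pressure": 0.2,
}
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"combined: {u_combined:.2f} %, expanded (k=2): {2 * u_combined:.2f} %")
```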

Relevance:

10.00%

Publisher:

Abstract:

In this thesis a manifold learning method is applied to the problem of WLAN positioning and automatic radio map creation. Due to the nature of WLAN signal strength measurements, a signal map created from raw measurements results in non-linear distance relations between measurement points. These signal strength vectors reside in a high-dimensional coordinate system. With the help of the so-called Isomap algorithm, the dimensionality of this map can be reduced, and the map can thus be more easily processed. By embedding position-labeled strategic key points, we can automatically adjust the mapping to match the surveyed environment. The environment is thus learned in a semi-supervised way; gathering training points and embedding them in a two-dimensional manifold gives us a rough mapping of the measured environment. After a calibration phase, where the labeled key points in the training data are used to associate coordinates in the manifold representation with geographical locations, we can perform positioning using the adjusted map. This can be achieved through a traditional supervised learning process, which in our case is a simple nearest-neighbors matching of a sampled signal strength vector. We deployed this system at two locations on the Kumpula campus in Helsinki, Finland. Results indicate that positioning based on the learned radio map can achieve good accuracy, especially in hallways or other areas in the environment where the WLAN signal is constrained by obstacles such as walls.
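
A minimal sketch of the described pipeline, using scikit-learn's Isomap and a nearest-neighbour mapping from the learned manifold to surveyed coordinates; the signal data, access-point count and labelled key points are all synthetic:

```python
# Isomap embedding of RSS vectors, anchored to geographic coordinates
# via labelled key points, then nearest-neighbour positioning.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(6)
n_train, n_aps = 300, 8
rss = rng.normal(-70, 10, (n_train, n_aps))     # fake RSS vectors (dBm)

# reduce the high-dimensional signal map to a 2-D manifold
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(rss)

# pretend 30 of the training points have surveyed (x, y) labels
labeled = rng.choice(n_train, 30, replace=False)
coords = rng.uniform(0, 50, (30, 2))            # metres, made up

# calibration: map manifold coordinates to geographic coordinates
knn = KNeighborsRegressor(n_neighbors=3).fit(embedding[labeled], coords)
position = knn.predict(embedding[:1])           # locate one sample
print("estimated position (m):", np.round(position[0], 1))
```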