15 results for Hot modulus of rupture test

in Helda - Digital Repository of University of Helsinki


Relevance:

100.00%

Publisher:

Abstract:

Three different Norway spruce cutting clones growing in three environments with different soil and climatic conditions were studied. The purpose was to follow variation in the radial growth rate, wood properties and lignin content and to modify wood lignin with a natural monolignol, coniferyl alcohol, by making use of inherent wood peroxidases. In addition, the incorporation of chlorinated anilines into lignin was studied with synthetic model compounds and synthetic lignin preparations to show whether unnatural compounds originating from pesticides could be bound in the lignin polymer. The lignin content of heartwood, sapwood and earlywood was determined by applying Fourier transform infrared (FTIR) spectroscopy and a principal component regression (PCR) technique. Wood blocks were treated with coniferyl alcohol by using a vacuum impregnation method. The effect of impregnation was assessed by FTIR and by a fungal decay test. Trees from a fertile site showed the highest growth rate and sapwood lignin content and the lowest latewood proportion, weight density and modulus of rupture (MOR). Trees from a medium fertile site had the lowest growth rate and the highest latewood proportion, weight density, modulus of elasticity (MOE) and MOR. The most rapidly growing clone showed the lowest latewood proportion, weight density, MOE and MOR. The slowest growing clone had the lowest sapwood lignin content and the highest latewood proportion, weight density, MOE and MOR. Differences between the sites and clones were small, while fairly large variation was found between the individual trees and growing seasons. The cutting clones maintained clone-dependent wood properties in the different growing sites although variation between trees was high and climatic factors affected growth. The coniferyl alcohol impregnation increased the content of different lignin-type phenolic compounds in the wood as well as wood decay resistance against a white-rot fungus, Coriolus versicolor. During the synthetic lignin preparation 3,4-dichloroaniline became bound by a benzylamine bond to β-O-4 structures in the polymer and it could not be released by mild acid hydrolysis. The natural monolignol, coniferyl alcohol, and chlorinated anilines could be incorporated into the lignin polymer in vivo and in vitro, respectively.
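
The FTIR-plus-PCR calibration mentioned above can be illustrated with a minimal principal component regression sketch, assuming hypothetical sample counts, spectral dimensions and lignin values (the thesis's actual data and preprocessing are not given in the abstract): the spectra are compressed into a few principal components, and the reference lignin content is regressed on the component scores.

    # Principal component regression (PCR) sketch for FTIR calibration.
    # All data shapes and values are hypothetical, not the thesis's data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    spectra = rng.random((60, 1200))   # 60 samples x 1200 FTIR wavenumbers
    lignin = rng.uniform(25, 30, 60)   # reference lignin content, % of dry mass

    # Compress the collinear spectra into a few principal components,
    # then fit ordinary least squares on the component scores.
    pcr = make_pipeline(PCA(n_components=5), LinearRegression())
    pcr.fit(spectra, lignin)
    print(pcr.predict(spectra[:3]))    # predicted lignin content, %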

Relevance:

100.00%

Publisher:

Abstract:

Background and Purpose

The often fatal (in 35-50% of cases) subarachnoid hemorrhage (SAH) caused by saccular cerebral artery aneurysm (SCAA) rupture affects mainly the working-age population. The incidence of SAH is 10-11/100 000 in Western countries and twice as high in Finland and Japan. The estimated prevalence of SCAAs is around 2%, and many of them never rupture. Currently, however, there are no diagnostic methods to distinguish rupture-prone SCAAs from quiescent (dormant) ones. Finding diagnostic markers for rupture-prone SCAAs is of primary importance, since an SCAA rupture has such a sinister outcome and all current treatment modalities are associated with morbidity and mortality. The therapies that prevent SCAA rupture also need to be made as minimally invasive as possible. Although the clinical risk factors for SCAA rupture have been extensively studied and documented in large patient series, the cellular and molecular mechanisms by which these risk factors lead to SCAA wall rupture remain incompletely understood. Elucidation of the molecular and cellular pathobiology of the SCAA wall is needed in order to i) develop novel diagnostic tools that could identify rupture-prone SCAAs or patients at risk of SAH, and ii) develop novel biological therapies that prevent SCAA wall rupture.

Materials and Methods

In this study, histological samples from unruptured and ruptured SCAAs and plasma samples from SCAA carriers were compared in order to identify structural changes, cell populations, growth factor receptors, or other molecular markers that would associate with SCAA wall rupture. In addition, experimental saccular aneurysm models and experimental models of mechanical vascular injury were used to study the cellular mechanisms of scar formation in the arterial wall and the adaptation of the arterial wall to increased mechanical stress.

Results and Interpretation

Inflammation and degeneration of the SCAA wall, namely loss of mural cells and degradation of the wall matrix, were found to associate with rupture. Unruptured SCAA walls structurally resembled pads of myointimal hyperplasia, the so-called neointima, which characterizes early atherosclerotic lesions and is the repair and adaptation mechanism of the arterial wall after injury or increased mechanical stress. As in pads of myointimal hyperplasia elsewhere in the vasculature, oxidized LDL (OxLDL) was found in the SCAA walls. Immunity against OxLDL was demonstrated in SAH patients by the detection of circulating anti-OxLDL antibodies, which were significantly associated with the risk of rupture in patients with solitary SCAAs. Growth factor receptors associated with arterial wall remodeling and angiogenesis were more strongly expressed in ruptured SCAA walls. In experimental saccular aneurysm models, capillary growth, arterial wall remodeling and neointima formation were found. The neointimal cells were shown to originate from the experimental aneurysm wall with a minor contribution from the adjacent artery and a negligible contribution of bone marrow-derived neointimal cells. Since loss of mural cells characterizes ruptured human SCAAs and likely impairs the adaptation and repair mechanism of ruptured or rupture-prone SCAAs, we also investigated the hypothesis that bone marrow-derived or circulating neointimal precursor cells could be used to enhance neointima formation and compensate for the impaired repair capacity in ruptured SCAA walls. However, no significant contribution of bone marrow cells or circulating mononuclear cells to neointima formation was found.

Relevance:

100.00%

Publisher:

Abstract:

Poor pharmacokinetics is one of the reasons for the withdrawal of drug candidates from clinical trials. There is an urgent need to investigate in vitro ADME (absorption, distribution, metabolism and excretion) properties and to recognise unsuitable drug candidates as early as possible in the drug development process. The current throughput of in vitro ADME profiling is insufficient because effective new synthesis techniques, such as in silico drug design and combinatorial synthesis, have vastly increased the number of drug candidates. Assay technologies for larger sets of compounds than are currently feasible are critically needed. The first part of this work focused on the evaluation of the cocktail strategy in studies of drug permeability and metabolic stability. N-in-one liquid chromatography-tandem mass spectrometry (LC/MS/MS) methods were developed and validated for the multiple-component analysis of samples in cocktail experiments. Together, cocktail dosing and LC/MS/MS were found to form an effective tool for increasing throughput. First, cocktail dosing, i.e. the use of a mixture of many test compounds, was applied in permeability experiments with the Caco-2 cell culture, a widely used in vitro model of small intestinal absorption. A cocktail of 7-10 reference compounds was successfully evaluated for standardization and routine testing of the performance of Caco-2 cell cultures. Second, the cocktail strategy was used in metabolic stability studies of drugs with UGT isoenzymes, which are among the most important phase II drug-metabolizing enzymes. The study confirmed that the determination of intrinsic clearance (Clint) with a cocktail of seven substrates is possible. The LC/MS/MS methods developed were fast and reliable for the quantitative analysis of a heterogeneous set of drugs from Caco-2 permeability experiments and of the set of glucuronides from in vitro stability experiments. The performance of a new ionization technique, atmospheric pressure photoionization (APPI), was evaluated through comparison with electrospray ionization (ESI), with both techniques used for the analysis of Caco-2 samples. Like ESI, APPI proved to be a reliable technique for the analysis of Caco-2 samples, and it was even more flexible than ESI because of its wider linear dynamic range. The second part of the experimental study focused on metabolite profiling. Different mass spectrometric instruments and commercially available software tools were investigated for profiling metabolites in urine and hepatocyte samples. All the instruments tested (triple quadrupole, quadrupole time-of-flight, ion trap) showed both strengths and weaknesses in searching for and identifying expected and unexpected metabolites. Although current profiling software is helpful, it is still insufficient, and a time-consuming, largely manual approach is still required for metabolite profiling from complex biological matrices.
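
As background to the Caco-2 permeability experiments, the apparent permeability coefficient is conventionally computed as Papp = (dQ/dt) / (A * C0), where dQ/dt is the transport rate across the monolayer, A the membrane area and C0 the initial donor concentration; in a cocktail experiment this is evaluated per compound. A sketch with hypothetical numbers:

    # Apparent permeability (Papp) from a Caco-2 transport experiment.
    # Standard formula; all numbers below are hypothetical examples.
    def papp(dq_dt_nmol_s: float, area_cm2: float, c0_nmol_ml: float) -> float:
        """Papp in cm/s: (dQ/dt) / (A * C0); 1 ml = 1 cm^3, so units cancel."""
        return dq_dt_nmol_s / (area_cm2 * c0_nmol_ml)

    # e.g. 0.012 nmol/s transported, 1.12 cm^2 insert, 100 nmol/ml donor conc.
    print(f"Papp = {papp(0.012, 1.12, 100.0):.2e} cm/s")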

Relevance:

100.00%

Publisher:

Abstract:

This dissertation analyzes the interrelationship between death, the conditions of (wo)man's social being, and the notion of value as it emerges in the fiction of the American novelist Thomas Pynchon (b. 1937). Pynchon's work to date includes six novels, V. (1963), The Crying of Lot 49 (1966), Gravity's Rainbow (1973), Vineland (1990), Mason & Dixon (1997) and Against the Day (2006), as well as several short stories. Death constitutes a central thematic in Pynchon's work, and it emerges through recurrent questions of mortality, suicide, mass destruction, sacrifice, afterlife, entropy, the relationship between the animate and the inanimate, and the limits of representation. In Pynchon, death is never a mere biological given (or event); it is always determined within a certain historical, cultural, and ideological context. Throughout his work, Pynchon questions the strict ontological separation of life and death by showing the relationship between this separation and social power. Conceptual divisions also reflect the relationship between society and its others, and death becomes that through which lines of social demarcation are articulated. Determined as a conceptual and social "other side", death in Pynchon forms a challenge to modern culture and makes an unexpected return: the dead return to haunt the living, the inanimate and the animate fuse, and technoscientific attempts at overcoming and controlling death result in its re-emergence in mass destruction and ecological damage. The questioning of the ontological line also affects the structuration of Pynchon's prose, where the recurrent narrated and narrative desire to reach the limits of representation is openly associated with death. Textualized, death appears in Pynchon's writing as a sudden rupture within the textual functioning, when the "other side", that is, the bare materiality of the signifier, is foregrounded. In this study, Pynchon's cultural criticism and his poetics come together, and I analyze the subversive role of death in his fiction through Jean Baudrillard's genealogy of the modern notion of death in L'échange symbolique et la mort (1976). Baudrillard sees an intrinsic bond between the social repression of death in modernity and the emergence of modern political economy, and in his analysis economy and language appear as parallel systems for generating value (exchange value/sign-value). For Baudrillard, the modern notion of death as negativity in relation to the positivity of life, and the fact that death cannot be given a proper meaning, betray an antagonistic relation between death and the notion of value. As a mode of negativity (that is, non-value), death becomes a moment of rupture in relation to value-based thinking, in short, rationalism. Through this rupture emerges a form of thinking that Baudrillard labels the symbolic, characterized by ambivalence and the subversion of conceptual opposites.

Relevance:

100.00%

Publisher:

Abstract:

The study is an examination of how the distant national past has been conceived and constructed for Finland from the mid-sixteenth century to the Second World War. The author argues that the perception of and need for a national 'Golden Age' have undergone several phases during this period, yet the perceived Greatness of the Ancient Finns has been of great importance for the growth and development of the fundamental concepts of Finnish nationalism. It is a question reaching deeper than simply discussing the Kalevala or the Karelianism of the 1890s. Despite early occurrences of most of the topics the image-makers could utilize for the construction of an Ancient Greatness, a truly national proto-history only became a necessity after 1809, when a new conceptual 'Finnishness' was both conceived and brought forth in reality. In this process of nation-building, ethnic myths of origin and descent provided the core of the nationalist cause - the defence of a primordial national character - and within a few decades the antiquarian issue became a standard element of nationalist public enlightenment. The emerging, archaeologically substantiated nationhood was more than a scholarly construction: it was a 'politically correct' form of ethnic self-imaging, continuously adapting its message to contemporary society and modern progress. Prehistoric and medieval Finnishness became even more relevant for the intellectual defence of the nation during the period of Russian administrative pressure in 1890-1905. With independence, the origins of Finnishness were militarized even further, although the 'hot' phase of antiquarian nationalism ended, as many considered the Finnish state re-established after centuries of 'dependency'. Nevertheless, the distant past of tribal Finnishness and the conceived Golden Age of the Kalevala remained obligating. The decline of public archaeology is quite evident after 1918, even though the national message of the antiquarian pursuits remained present in public history culture. The myths, symbols, images, and constructs of ancient Finnishness had already become embedded in society by the turn of the century, like the patalakki cap, which remains a symbol of Finnishness to this day. The method of approach combines a broad spectrum of previously neglected primary sources, all related to history culture and the subtle banalization of the distant past: school books, postcards, illustrations, festive costumes, drama, satirical magazines, novels, jewellery, and calendars. Tracing the origins of the national myths to their original contexts enables a rather thorough deconstruction of the proto-historical imaginary in this Finnish case study. With regard to Anthony D. Smith's idea of ancient 'ethnies' as the basis for nationalist causes, the author considers such an approach totally misplaced in the Finnish case.

Relevance:

100.00%

Publisher:

Abstract:

Multiple sclerosis (MS) is a chronic, inflammatory disease of the central nervous system, characterized especially by myelin and axon damage. Cognitive impairment in MS is common but difficult to detect without a neuropsychological examination. Valid and reliable methods are needed in clinical practice and research to detect deficits, follow their natural evolution, and verify treatment effects. The Paced Auditory Serial Addition Test (PASAT) is a measure of sustained and divided attention, working memory, and information processing speed, and it is widely used in the neuropsychological evaluation of MS patients. Additionally, the PASAT is the sole cognitive measure in an assessment tool primarily designed for MS clinical trials, the Multiple Sclerosis Functional Composite (MSFC). The aims of the present study were to determine a) the frequency, characteristics, and evolution of cognitive impairment among relapsing-remitting MS patients, and b) the validity and reliability of the PASAT in measuring cognitive performance in MS patients. The subjects were 45 relapsing-remitting MS patients from the Department of Neurology, Seinäjoki Central Hospital, and 48 healthy controls. Both groups underwent comprehensive neuropsychological assessments, including the PASAT, twice in a one-year follow-up; additionally, a sample of 10 patients and controls was evaluated with the PASAT in serial assessments five times in one month. The frequency of cognitive dysfunction among relapsing-remitting MS patients in the present study was 42%. Impairments were characterized especially by slowed information processing speed and memory deficits. During the one-year follow-up, cognitive performance was relatively stable among MS patients at the group level. However, practice effects in cognitive tests were less pronounced among MS patients than among healthy controls. At the individual level, the spectrum of MS patients' cognitive deficits was wide with regard to their characteristics, severity, and evolution. The PASAT was moderately accurate in detecting MS-associated cognitive impairment, and 69% of patients were correctly classified as cognitively impaired or unimpaired when a comprehensive neuropsychological assessment was used as the "gold standard". Self-reported nervousness and poor arithmetical skills seemed to explain misclassifications. MS-related fatigue was objectively demonstrated as fading performance towards the end of the test. Despite the observed practice effect, the reliability of the PASAT was excellent, and it was sensitive to the cognitive decline taking place during the follow-up in a subgroup of patients. The PASAT can be recommended for use in the neuropsychological assessment of MS patients. The test is fairly sensitive but less specific; consequently, the reasons for low scores have to be carefully identified before interpreting them as clinically significant.
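
For readers unfamiliar with the PASAT paradigm: digits are presented at a fixed interval (commonly every 3.0 or 2.4 seconds) and the patient adds each new digit to the immediately preceding one, so a 61-digit list yields 60 possible sums; the score is the number of correct sums. A simplified scoring sketch (the exact response handling here is an assumption):

    # PASAT scoring sketch: count correct consecutive-pair sums.
    # Standard paradigm; this response format is a simplified assumption.
    from typing import Optional

    def pasat_score(digits: list[int], responses: list[Optional[int]]) -> int:
        """Response i should equal digits[i] + digits[i + 1]; None = omitted."""
        correct = [digits[i] + digits[i + 1] for i in range(len(digits) - 1)]
        return sum(1 for want, got in zip(correct, responses) if got == want)

    digits = [3, 7, 2, 5, 9]               # presented one at a time
    responses = [10, 9, 7, 15]             # patient's spoken answers
    print(pasat_score(digits, responses))  # -> 3 (last answer wrong: 5 + 9 = 14)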

Relevance:

100.00%

Publisher:

Abstract:

Type 2 diabetes is an increasing, serious, and costly public health problem. The increase in the prevalence of the disease can mainly be attributed to changing lifestyles leading to physical inactivity, overweight, and obesity. These lifestyle-related risk factors also offer a possibility for preventive interventions. Until recently, proper evidence regarding the prevention of type 2 diabetes had been virtually missing. To be cost-effective, intensive interventions to prevent type 2 diabetes should be directed to people at an increased risk of the disease. The aim of this series of studies was to investigate whether type 2 diabetes can be prevented by lifestyle intervention in high-risk individuals, and to develop a practical method to identify individuals who are at high risk of type 2 diabetes and would benefit from such an intervention. To study the effect of lifestyle intervention on diabetes risk, we recruited 522 volunteer, middle-aged (aged 40-64 at baseline), overweight (body mass index > 25 kg/m2) men (n = 172) and women (n = 350) with impaired glucose tolerance to the Diabetes Prevention Study (DPS). The participants were randomly allocated either to the intensive lifestyle intervention group or the control group. The control group received general dietary and exercise advice at baseline and underwent an annual physician's examination. The participants in the intervention group received, in addition, individualised dietary counselling by a nutritionist. They were also offered circuit-type resistance training sessions and were advised to increase overall physical activity. The intervention goals were to reduce body weight (a reduction of 5% or more from baseline weight), limit dietary fat (< 30% of total energy consumed) and saturated fat (< 10% of total energy consumed), and increase dietary fibre intake (≥ 15 g/1000 kcal) and physical activity (≥ 30 minutes/day). Diabetes status was assessed annually by repeated 75 g oral glucose tolerance testing. The first end-point analysis was completed after a mean follow-up of 3.2 years, and the intervention phase was terminated after a mean duration of 3.9 years. After that, the study participants continued to visit the study clinics for annual examinations for a mean of 3 years. The intervention group showed significantly greater improvement in each intervention goal. After 1 and 3 years, mean weight reductions were 4.5 and 3.5 kg in the intervention group and 1.0 and 0.9 kg in the control group. Cardiovascular risk factors improved more in the intervention group. After a mean follow-up of 3.2 years, the risk of diabetes was reduced by 58% in the intervention group compared with the control group. The reduction in the incidence of diabetes was directly associated with achieved lifestyle goals. Furthermore, those who consumed a moderate-fat, high-fibre diet achieved the largest weight reduction and, even after adjustment for weight reduction, the lowest diabetes risk during the intervention period. After discontinuation of the counselling, the differences in lifestyle variables between the groups still remained favourable for the intervention group. During the post-intervention follow-up period of 3 years, the risk of diabetes was still 36% lower among the former intervention group participants, compared with the former control group participants.
To develop a simple screening tool to identify individuals at high risk of type 2 diabetes, follow-up data from two population-based cohorts of 35-64-year-old men and women were used. The National FINRISK Study 1987 cohort (model development data) included 4435 subjects, with 182 new drug-treated cases of diabetes identified during ten years, and the FINRISK Study 1992 cohort (model validation data) included 4615 subjects, with 67 new cases of drug-treated diabetes during five years, ascertained using the Social Insurance Institution's Drug register. Baseline age, body mass index, waist circumference, history of antihypertensive drug treatment and high blood glucose, physical activity, and daily consumption of fruits, berries or vegetables were selected into the risk score as categorical variables. In the 1987 cohort, the optimal cut-off point of the risk score identified 78% of those who developed diabetes during the follow-up (the sensitivity of the test) and 77% of those who remained free of diabetes (the specificity of the test). In the 1992 cohort the risk score performed equally well. The final Finnish Diabetes Risk Score (FINDRISC) form includes, in addition to the predictors of the model, a question about family history of diabetes and the age category of over 64 years. When applied to the DPS population, the baseline FINDRISC value was associated with diabetes risk among the control group participants only, indicating that the intensive lifestyle intervention given to the intervention group participants abolished the diabetes risk associated with baseline risk factors. In conclusion, the intensive lifestyle intervention produced long-term beneficial changes in diet, physical activity, body weight, and cardiovascular risk factors, and reduced diabetes risk. Furthermore, the effects of the intervention were sustained after the intervention was discontinued. The FINDRISC proved to be a simple, fast, inexpensive, non-invasive, and reliable tool to identify individuals at high risk of type 2 diabetes. The use of FINDRISC to identify high-risk subjects, followed by lifestyle intervention, provides a feasible scheme for preventing type 2 diabetes that could be implemented in the primary health care system.
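
The reported 78% sensitivity and 77% specificity follow from a simple dichotomization at the chosen cut-off: sensitivity = TP/(TP+FN) among those who developed diabetes, and specificity = TN/(TN+FP) among those who did not. A minimal sketch with hypothetical scores and outcomes (the published FINDRISC point weights are not reproduced here):

    # Sensitivity/specificity of a risk-score cut-off, as in the FINDRISC
    # validation. All scores and outcomes below are hypothetical.
    def sens_spec(scores, outcomes, cutoff):
        tp = sum(1 for s, o in zip(scores, outcomes) if s >= cutoff and o)
        fn = sum(1 for s, o in zip(scores, outcomes) if s < cutoff and o)
        tn = sum(1 for s, o in zip(scores, outcomes) if s < cutoff and not o)
        fp = sum(1 for s, o in zip(scores, outcomes) if s >= cutoff and not o)
        return tp / (tp + fn), tn / (tn + fp)

    scores   = [3, 9, 11, 5, 12, 7, 10, 2]   # risk-score points per subject
    outcomes = [0, 1, 1, 0, 1, 0, 0, 0]      # 1 = developed drug-treated diabetes
    sens, spec = sens_spec(scores, outcomes, cutoff=9)
    print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")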

Relevance:

100.00%

Publisher:

Abstract:

The rupture of a cerebral artery aneurysm causes a devastating subarachnoid hemorrhage (SAH), with a mortality of almost 50% during the first month. Each year, 8-11/100 000 people suffer from aneurysmal SAH in Western countries, but the number is twice as high in Finland and Japan. The disease is most common among those of working age, the mean age at rupture being 50-55 years. Unruptured cerebral aneurysms are found in 2-6% of the population, but knowledge about the true risk of rupture is limited. The vast majority of aneurysms should be considered rupture-prone, and treatment for these patients is warranted. Both unruptured and ruptured aneurysms can be treated by either microsurgical clipping or endovascular embolization. In a standard microsurgical procedure, the neck of the aneurysm is closed with a metal clip, sealing off the aneurysm from the circulation. Endovascular embolization is performed by packing the aneurysm from inside the vessel lumen with detachable platinum coils. Coiling is associated with slightly lower morbidity and mortality than microsurgery, but the long-term results of microsurgically treated aneurysms are better. Endovascular treatment methods are constantly being developed further in order to achieve better long-term results. New coils and novel embolic agents need to be tested in a variety of animal models before they can be used in humans. In this study, we developed an experimental rat aneurysm model and showed its suitability for testing endovascular devices. We optimized noninvasive MRI sequences at 4.7 Tesla for the follow-up of coiled experimental aneurysms and for the volumetric measurement of aneurysm neck remnants. We used this model to compare platinum coils with polyglycolic-polylactic acid (PGLA)-coated coils, and showed the benefits of the latter in this model. The experimental aneurysm model and the imaging methods also gave insight into the mechanisms involved in aneurysm formation, and the model can be used in the development of novel imaging techniques. The model is affordable, easily reproducible, reliable, and suitable for MRI follow-up. It is also suitable for endovascular treatment, and it resists spontaneous occlusion.
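
As background to the volumetric step, volume from a segmented MRI mask is simply the voxel count times the voxel volume; the sketch below uses entirely hypothetical mask and voxel dimensions, not the study's actual 4.7 T protocol.

    # Volume of a segmented aneurysm neck remnant from a binary MRI mask.
    # Simplified illustration; segmentation itself (the hard part) is assumed given.
    import numpy as np

    mask = np.zeros((64, 64, 32), dtype=bool)   # hypothetical segmentation mask
    mask[30:34, 30:34, 14:18] = True            # 4 x 4 x 4 voxels of "remnant"

    voxel_mm3 = 0.1 * 0.1 * 0.5                 # hypothetical voxel size, mm^3
    volume_mm3 = mask.sum() * voxel_mm3
    print(f"remnant volume: {volume_mm3:.3f} mm^3")   # 64 voxels -> 0.320 mm^3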

Relevance:

100.00%

Publisher:

Abstract:

Thrombin is a multifunctional protease that has a central role in the development and progression of coronary atherosclerotic lesions and is a possible mediator of myocardial ischemia-reperfusion injury. Its generation and procoagulant activity are greatly upregulated during cardiopulmonary bypass (CPB). On the other hand, activated protein C, a physiologic anticoagulant that is activated by thrombomodulin-bound thrombin, has been beneficial in various models of ischemia-reperfusion. Therefore, our aim in this study was to test whether thrombin generation or protein C activation during coronary artery bypass grafting (CABG) is associated with postoperative myocardial damage or hemodynamic changes. To further investigate the regulation of thrombin during CABG, we tested whether preoperative thrombophilic factors are associated with increased CPB-related generation of thrombin or its procoagulant activity. We also measured the anticoagulant effects of heparin during CPB with a novel coagulation test, prothrombinase-induced clotting time (PiCT), and compared the performance of this test with the present standard of laboratory-based anticoagulation monitoring. One hundred patients undergoing elective on-pump CABG were studied prospectively. A progressive increase in markers of thrombin generation (F1+2), fibrinolysis (D-dimer), and fibrin formation (soluble fibrin monomer complexes) was observed during CPB; the increase was further propagated by reperfusion after myocardial ischemia and continued to peak after the neutralization of heparin with protamine. Thrombin generation during reperfusion after CABG was associated with postoperative myocardial damage and increased pulmonary vascular resistance. Activated protein C levels increased only slightly during CPB before the release of the aortic clamp, but reperfusion, and more markedly heparin neutralization, caused a massive increase in activated protein C levels. Protein C activation was clearly delayed in relation to both thrombin generation and fibrin formation. Even though activated protein C was dynamically associated with postoperative hemodynamic performance, it was not associated with postoperative myocardial damage. Preoperative thrombophilic variables were not associated with perioperative thrombin generation or its procoagulant activity; therefore, our results do not favor routine thrombophilia screening before CABG. There was poor agreement between PiCT and other measurements of heparin effects in the setting of CPB. However, lower heparin levels during CPB were associated with inferior thrombin control, and high heparin levels during CPB were associated with fewer perioperative transfusions of blood products. Overall, our results suggest that hypercoagulation after CABG, especially during reperfusion, might be clinically important.

Relevance:

100.00%

Publisher:

Abstract:

The structure and the mechanical properties of wood of Norway spruce (Picea abies [L.] Karst.) were studied using small samples from Finland and Sweden. X-ray diffraction (XRD) was used to determine the orientation of cellulose microfibrils (microfibril angle, MFA), the dimensions of cellulose crystallites and the average shape of the cell cross-section. X-ray attenuation and X-ray fluorescence measurements were used to study the chemical composition and the trace element content. Tensile testing with in situ XRD was used to characterise the mechanical properties of wood and the deformation of crystalline cellulose within the wood cell walls. Cellulose crystallites were found to be 192-284 Å long and 28.9-33.4 Å wide in chemically untreated wood, and they were longer and wider in mature wood than in juvenile wood. The MFA distribution of individual Norway spruce tracheids and larger samples was asymmetric. In individual cell walls, the mean MFA was 19-30 degrees, while the mode of the MFA distribution was 7-21 degrees. Both the mean MFA and the mode of the MFA distribution decreased as a function of the annual ring. Tangential cell walls exhibited a smaller mean MFA and mode of the MFA distribution than radial cell walls. Maceration of wood material caused narrowing of the MFA distribution and removed contributions observed at around 90 degrees. In wood of both untreated and fertilised trees, the average shape of the cell cross-section changed from circular via ambiguous to rectangular as the cambial age increased. The average shape of the cell cross-section and the MFA distribution did not change as a result of fertilisation. The mass absorption coefficient for X-rays was higher in wood of fertilised trees than in that of untreated trees, and wood of fertilised trees contained more of the elements S, Cl and K, but a smaller amount of Mn. Cellulose crystallites were longer in wood of fertilised trees than in that of untreated trees. Kraft cooking caused widening and shortening of the cellulose crystallites. Tensile tests parallel to the cells showed that if the mean MFA is initially around 10 degrees or smaller, no systematic changes occur in the MFA distribution due to strain. The role of mean MFA in defining the tensile strength or the modulus of elasticity of wood was not as dominant as that reported earlier. Crystalline cellulose elongated much less than the entire samples. The Poisson ratio νca of crystalline cellulose in Norway spruce wood was shown to be largely dependent on the surroundings of crystalline cellulose in the cell wall, varying between -1.2 and 0.8. The Poisson ratio was negative in kraft-cooked wood and positive in chemically untreated wood. In chemically untreated wood, νca was larger in mature wood and in latewood compared to juvenile wood and earlywood.
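
As background to the in situ XRD tensile results, lattice strain is conventionally computed from the shift of a Bragg reflection, eps = (d - d0)/d0, and the Poisson ratio of crystalline cellulose as nu_ca = -eps_a/eps_c (transverse over axial strain, sign reversed). A minimal sketch with hypothetical d-spacings chosen only for illustration:

    # Lattice strain from Bragg d-spacings and the resulting Poisson ratio.
    # Standard definitions; the d-spacing values below are hypothetical.
    def strain(d: float, d0: float) -> float:
        return (d - d0) / d0

    eps_c = strain(d=2.591, d0=2.589)   # axial strain along the chain (004 reflection)
    eps_a = strain(d=3.898, d0=3.900)   # transverse strain (200 reflection)
    nu_ca = -eps_a / eps_c              # Poisson ratio, load along the chain axis
    print(f"eps_c = {eps_c:.2e}, eps_a = {eps_a:.2e}, nu_ca = {nu_ca:.2f}")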

Relevance:

100.00%

Publisher:

Abstract:

This thesis comprises four complementary parts that introduce new approaches to brittle reaction layers and the mechanical compatibility of metalloceramic joints created when fusing dental ceramics to titanium. Several different methods were employed, including atomic layer deposition (ALD), sessile drop contact angle measurements, scanning acoustic microscopy (SAM), three-point bending (TPB, DIN 13 927 / ISO 9693), cross-section microscopy, scanning electron microscopy (SEM), and energy dispersive X-ray spectroscopy (EDS). The first part investigates the effects of TiO2 layer structure and thickness on the joint strength of the titanium-metalloceramic system. Samples with all tested TiO2 thicknesses displayed good ceramic adhesion to Ti and uniform TPB results. The fracture mode was independent of oxide layer thickness and structure. Cracking occurred deeper inside the titanium, in the oxygen-rich Ti[O]x solid solution surface layer. During dental ceramics firing, TiO2 layers dissociate, and joints become brittle as oxygen increasingly dissolves into metallic Ti, reducing the metal's plasticity. This needs to be resolved in order to accomplish an ideal metalloceramic joint. The second part introduces photoinduced superhydrophilicity of TiO2. Test samples with ALD-deposited anatase TiO2 films were produced. The samples were irradiated with UV light to induce superhydrophilicity of the surfaces through a cascade leading to an increased amount of surface hydroxyl groups. Superhydrophilicity (contact angle ~0˚) was achieved within 2 minutes of UV radiation. Partial recovery of the contact angle was observed during the first 10 minutes after UV exposure; total recovery was not observed within 24 h of storage. Photoinduced superhydrophilicity can be used to enhance the wettability of titanium surfaces, an important factor in dental ceramics veneering processes. The third part addresses interlayers designed to restrain oxygen dissolution into Ti during dental ceramics fusing. The main requirements for an ideal interlayer material are proposed. Based on these criteria and a systematic exclusion of possible interlayer materials, silver (Ag) interlayers were chosen. TPB results were significantly better when 5 μm Ag interlayers were used than for Al2O3-blasted samples alone. In samples with these Ag interlayers, multiple cracks occurred inside the dental ceramics but none inside the Ti structure. Ag interlayers of 5 μm on Al2O3-blasted samples can be used efficiently to retard formation of the brittle oxygen-rich Ti[O]x layer, thus enhancing metalloceramic joint integrity. The most brittle component in metalloceramic joints with 5 μm Ag interlayers was the bulk dental ceramics instead of Ti[O]x. The fourth part investigates the importance of mechanical interlocking. According to the results, the significance of mechanical interlocking achieved by conventional surface treatments can be questioned as long as the formation of the brittle layers (mainly oxygen-rich Ti[O]x) cannot be sufficiently controlled. In summary, in contrast to former impressions of thick titanium oxide layers, this thesis clearly demonstrates that diffusion of oxygen from the sintering atmosphere and from SiO2 into Ti structures during dental ceramics firing, and the consequent formation of a brittle Ti[O]x solid solution, are the most important factors predisposing joints between Ti and SiO2-based dental ceramics to low strength. This, among other predisposing factors such as residual stresses created by the mismatch in coefficients of thermal expansion between the dental ceramics and Ti frameworks, can be avoided with Ag interlayers.
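
For reference, the stress at failure in a generic three-point bending test follows the standard beam formula sigma = 3FL/(2bh^2); the ISO 9693 bond-strength value is derived from the debonding load with a tabulated coefficient that is not reproduced here. A sketch with hypothetical specimen dimensions:

    # Flexural stress in a three-point bending (TPB) test: sigma = 3*F*L / (2*b*h^2).
    # Generic beam formula with hypothetical numbers, not ISO 9693's
    # metal-ceramic bond-strength coefficient.
    def tpb_stress_mpa(force_n: float, span_mm: float,
                       width_mm: float, thick_mm: float) -> float:
        return 3 * force_n * span_mm / (2 * width_mm * thick_mm ** 2)  # N/mm^2 = MPa

    # e.g. 15 N failure load, 20 mm span, 3 mm wide, 0.5 mm thick strip
    print(f"{tpb_stress_mpa(15.0, 20.0, 3.0, 0.5):.0f} MPa")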

Relevance:

100.00%

Publisher:

Abstract:

A straightforward computation of the list of the words (the 'tail words' of the list) that are distributionally most similar to a given word (the 'head word' of the list) leads to the question: how semantically similar to the head word are the tail words, that is, how similar are their meanings to its meaning? And can we do better? The experiment was done on the nearly 18,000 most frequent nouns in a Finnish newsgroup corpus. These nouns are considered distributionally similar to the extent that they occur in the same direct dependency relations with the same nouns, adjectives and verbs. The extent of the similarity of their computational representations is quantified with the information radius. The semantic classification of head-tail pairs is intuitive; some tail words seem to be semantically similar to the head word, some do not. Each such pair is also associated with a number of further distributional variables. Individually, the variables' distributions overlap considerably across the semantic classes, but the trained classification-tree models have some success in using combinations of them to predict the semantic class. The training data consist of a random sample of 400 head-tail pairs with the tail word ranked among the 20 distributionally most similar to the head word, excluding names. The models are then tested on a random sample of another 100 such pairs. The best success rates range from 70% to 92% of the test pairs, where a success means that the model predicted my intuitive semantic class of the pair. This seems somewhat promising when distributional similarity is used to capture semantically similar words. The analysis also includes a general discussion of several different similarity formulas, arranged in three groups: those that apply to sets with graded membership, those that apply to the members of a vector space, and those that apply to probability mass functions.
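
The information radius mentioned above is, in one common convention, the Jensen-Shannon divergence: IRad(p, q) = 1/2 D(p||m) + 1/2 D(q||m) with m = (p + q)/2, where D is the Kullback-Leibler divergence (scaling conventions vary by a constant factor). A minimal sketch over two hypothetical context distributions:

    # Information radius (Jensen-Shannon divergence) between two distributions.
    # Common definition; the distributions below are hypothetical.
    import math

    def kl(p, q):
        """Kullback-Leibler divergence in bits; skips zero-probability terms."""
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    def information_radius(p, q):
        m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # hypothetical distributions of two nouns over shared dependency contexts
    p = [0.5, 0.3, 0.2, 0.0]
    q = [0.4, 0.1, 0.3, 0.2]
    print(f"{information_radius(p, q):.3f} bits")  # 0 = identical, 1 = disjoint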

Relevance:

100.00%

Publisher:

Abstract:

Agriculture is an economic activity that relies heavily on the availability of natural resources. Through its role in food production, agriculture is a major factor affecting public welfare and health, and its indirect contribution to gross domestic product and employment is significant. Agriculture also contributes to numerous ecosystem services through the management of rural areas. However, the environmental impact of agriculture is considerable and reaches far beyond the agroecosystems. The questions related to farming for food production are thus manifold and of great public concern. Improving the environmental performance of agriculture and the sustainability of food production, 'sustainabilizing' food production, calls for the application of a wide range of expert knowledge. This study falls within the field of agro-ecology, with interfaces to food systems and sustainability research, and exploits methods typical of industrial ecology. Research in these fields extends from multidisciplinary to interdisciplinary and transdisciplinary, a holistic approach being the key tenet. The methods of industrial ecology have been applied extensively to explore the interaction between human economic activity and resource use. Specifically, the material flow approach (MFA) has established its position through the application of systematic environmental and economic accounting statistics. However, very few studies have applied MFA specifically to agriculture; in this thesis the MFA approach was used in such a context in Finland. The focus of this study is the ecological sustainability of primary production. The aim was to explore the possibilities of assessing the ecological sustainability of agriculture using two different approaches. In the first approach, the MFA methods from industrial ecology were applied to agriculture, whereas the second is based on food consumption scenarios. The two approaches were used in order to capture some of the impacts of dietary changes and of changes in production mode on the environment. The methods were applied at levels ranging from national to sector and local levels. Through the supply-demand approach, the viewpoint changed from that of food production to that of food consumption. The main data sources were official statistics complemented with published research results and expert appraisals. The MFA approach was used to define the system boundaries, to quantify the material flows and to construct eco-efficiency indicators for agriculture. The results were further elaborated for an input-output model that was used to analyse the food flux in Finland and to determine its relationship to the economy-wide physical and monetary flows. The methods based on food consumption scenarios were applied at the regional and local levels for assessing the feasibility and environmental impacts of re-localising food production. The approach was also used for the quantification and source allocation of the greenhouse gas (GHG) emissions of primary production. The GHG assessment thus provided a means of cross-checking the results obtained with the two different approaches. MFA data as such, or expressed as eco-efficiency indicators, are useful in describing the overall development. However, the data are not sufficiently detailed for identifying the hot spots of environmental sustainability. Eco-efficiency indicators should not be used bluntly in environmental assessment: the carrying capacity of nature, the potential exhaustion of non-renewable natural resources and the possible rebound effect also need to be accounted for when striving towards improved eco-efficiency. The input-output model is suitable for nationwide economic analyses, and it shows the distribution of monetary and material flows among the various sectors. Environmental impact can be captured only at a very general level, in terms of total material requirement, gaseous emissions, energy consumption and agricultural land use. Improving the environmental performance of food production requires more detailed and more local information. The approach based on food consumption scenarios can be applied at regional or local scales. Based on various diet options, the method accounts for the feasibility of re-localising food production and the environmental impacts of such re-localisation in terms of nutrient balances, gaseous emissions, agricultural energy consumption, agricultural land use and the diversity of crop cultivation. The approach is applicable anywhere, but the calculation parameters need to be adjusted to comply with the specific circumstances. The food consumption scenario approach thus pays attention to the variability of production circumstances and may provide environmental information that is locally relevant. The approaches based on the input-output model and on food consumption scenarios represent small steps towards more holistic systemic thinking. However, neither one alone, nor the two together, provides sufficient information for sustainabilizing food production. The environmental performance of food production should be assessed together with the other criteria of sustainable food provisioning. This requires the evaluation and integration of research results from many different disciplines in the context of a specified geographic area. A foodshed area that comprises both the rural hinterlands of food production and the population centres of food consumption is suggested as a suitable areal extent for such research. Finding a balance between the various aspects of sustainability is a matter of optimal trade-offs. The balance cannot be universally determined; the assessment methods and the actual measures depend on what the bottlenecks of sustainability are in the area concerned. These have to be agreed upon among the actors of the area.
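
The input-output step rests on the standard Leontief relation x = (I - A)^(-1) d: given a matrix A of technical coefficients and a final-demand vector d, it yields the total (direct plus indirect) output per sector, to which physical quantities such as material use can be attached. A sketch with hypothetical two-sector numbers, not the thesis's actual accounts:

    # Leontief input-output sketch: total output x = (I - A)^-1 d.
    # Hypothetical two-sector economy (agriculture, food industry).
    import numpy as np

    A = np.array([[0.10, 0.30],    # agriculture inputs per unit of each sector's output
                  [0.05, 0.10]])   # food-industry inputs per unit of each sector's output
    d = np.array([20.0, 80.0])     # final demand, e.g. million EUR

    x = np.linalg.solve(np.eye(2) - A, d)        # total output needed to meet demand
    material_per_output = np.array([5.0, 1.2])   # hypothetical tonnes per million EUR
    print("total output:", x.round(1))
    print("material use:", (material_per_output * x).round(1))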

Relevance:

100.00%

Publisher:

Abstract:

Road transport and infrastructure are of fundamental importance to the developing world. Poor quality and inadequate coverage of roads, lack of maintenance operations and outdated road maps continue to hinder economic and social development in developing countries. This thesis focuses on studying the present state of road infrastructure and its mapping in the Taita Hills, south-east Kenya. The study is included as a part of the TAITA project of the Department of Geography, University of Helsinki. The road infrastructure of the study area is studied with remote sensing and GIS-based methodology. As the principal dataset, true-colour airborne digital camera data from 2004 were used to generate an aerial image mosaic of the study area. Auxiliary data include SPOT satellite imagery from 2003, field spectrometry data of road surfaces and relevant literature. Road infrastructure characteristics are interpreted from three test sites using pixel-based supervised classification, object-oriented supervised classification and visual interpretation. The road infrastructure of the test sites is also interpreted visually from a SPOT image. Road centrelines are then extracted from the object-oriented classification results with an automatic vectorisation process. The road infrastructure of the entire image mosaic is mapped by applying the most appropriate of the assessed data and techniques. The spectral characteristics and reflectance of various road surfaces are considered with the acquired field spectra and relevant literature, and the results are compared with the experimented road mapping methods. This study concludes that classification and extraction of roads remains a difficult task, and that the accuracy of the results is inadequate regardless of the high spatial resolution of the image mosaic used in this thesis. Of all the methods experimented with in this thesis, visual interpretation is the most straightforward, accurate and valid technique for road mapping. Certain road surfaces have spectral characteristics and reflectance values similar to other land cover and land use, which greatly affects digital analysis techniques in particular. Road mapping is made even more complicated by rich vegetation and tree canopy, clouds, shadows, low contrast between roads and their surroundings, and the width of narrow roads in relation to the spatial resolution of the imagery used. The results of this thesis may be applied to road infrastructure mapping in developing countries in a more general context, although with certain limits. In particular, unclassified rural roads require updated road mapping schemes to intensify road transport possibilities and to assist in the development of the developing world.
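
The pixel-based supervised classification step can be sketched as training a classifier on labelled pixels and predicting a class for every pixel of the mosaic; the classifier choice and band values below are hypothetical stand-ins, as the abstract does not specify the exact setup.

    # Pixel-based supervised classification sketch (road vs. non-road).
    # Hypothetical RGB training values; not the thesis's actual classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # training pixels: RGB digital numbers with known class labels
    X_train = np.array([[180, 175, 170], [190, 185, 178],   # bare-soil road
                        [60, 110, 55], [70, 120, 62],       # vegetation
                        [40, 45, 50], [35, 40, 48]])        # shadow
    y_train = np.array([1, 1, 0, 0, 0, 0])                  # 1 = road

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_train, y_train)

    image = np.random.default_rng(0).integers(0, 256, (100, 100, 3))  # mock mosaic
    labels = clf.predict(image.reshape(-1, 3)).reshape(100, 100)      # per-pixel class
    print("road pixels:", int(labels.sum()))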

Relevance:

100.00%

Publisher:

Abstract:

Microbes in natural and artificial environments, as well as in the human body, are a key part of the functional properties of these complex systems. The presence or absence of certain microbial taxa correlates with functional status, such as the risk of disease or the course of the metabolic processes of a microbial community. As microbes are highly diverse and mostly not cultivable, molecular markers like gene sequences are a potential basis for the detection and identification of key types. The goal of this thesis was to study molecular methods for the identification of microbial DNA in order to develop a tool for the analysis of environmental and clinical DNA samples. Particular emphasis was placed on the specificity of detection, which is a major challenge when analyzing complex microbial communities. The approach taken in this study was the application and optimization of enzymatic ligation of DNA probes coupled with microarray read-out for high-throughput microbial profiling. The results show that fungal phylotypes and human papillomavirus genotypes could be accurately identified from pools of PCR amplicons generated from purified sample DNA. Approximately 1 ng/μl of sample DNA was needed for representative PCR amplification, as measured by comparisons between clone sequencing and microarray. A minimum of 0.25 amol/μl of PCR amplicons was detectable among 5 ng/μl of background DNA, suggesting that the detection limit of the test, comprising a ligation reaction followed by microarray read-out, was approximately 0.04%. Detection directly from sample DNA was shown to be feasible with probes that form a circular molecule upon ligation, followed by PCR amplification of the probe. In this approach, the minimum detectable relative amount of target genome was found to be 1% of all genomes in the sample, as estimated from 454 deep-sequencing results. The signal-to-noise ratio of contact-printed microarrays could be improved by using an internal microarray hybridization control oligonucleotide probe together with a computational algorithm. The algorithm was based on the identification of a bias in the microarray data and correction of that bias, as shown with simulated and real data. The results further suggest that semiquantitative detection is possible with ligation detection, allowing estimation of target abundance in a sample. In practice, however, comprehensive sequence information on full-length rRNA genes is needed to support probe design for complex samples. This study shows that the DNA microarray has the potential to become an accurate microbial diagnostic platform that takes advantage of increasing sequence data and replaces traditional, less efficient methods that still dominate routine testing in laboratories. The data suggest that a ligation-reaction-based microarray assay can be optimized to a degree that allows a good signal-to-noise ratio and semiquantitative detection.
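
To make the stated detection limit concrete, an amount in attomoles converts to molecule copies through Avogadro's constant, so 0.25 amol corresponds to roughly 1.5 x 10^5 amplicon copies. A one-function sketch:

    # Convert an attomole amount to molecule copies via Avogadro's constant.
    AVOGADRO = 6.022e23          # molecules per mole

    def amol_to_copies(amol: float) -> float:
        return amol * 1e-18 * AVOGADRO   # 1 amol = 1e-18 mol

    print(f"{amol_to_copies(0.25):.2e} copies per microlitre at 0.25 amol/ul")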