859 results for WORK METHODS


Relevance:

30.00%

Publisher:

Abstract:

Wood is an important material for the construction and pulping industries. In the first part of this thesis, the microfibril angle of Sitka spruce wood was studied using x-ray diffraction. Sitka spruce (Picea sitchensis [Bong.] Carr.) is native to the west coast of North America, but due to its fast growth rate it has also been imported to Europe. So far, its nanometre-scale properties have not been systematically characterised. In this thesis the microfibril angle of Sitka spruce was shown to depend significantly on the origin of the tree in the first annual rings near the pith. Wood can be further processed to separate lignin from cellulose and hemicelluloses. Solid cellulose can act as a reducer for metal ions, and it is also a porous support for nanoparticles. By chemically reducing nickel or copper in the solid cellulose support, it is possible to obtain small nanoparticles on the surfaces of the cellulose fibres. Cellulose-supported metal nanoparticles can potentially be used as environmentally friendly catalysts in organic chemistry reactions. In this thesis the size of the nickel- and copper-containing nanoparticles was studied using anomalous small-angle x-ray scattering and wide-angle x-ray scattering. The anomalous small-angle x-ray scattering experiments showed that the crystallite size of the copper oxide nanoparticles was the same as the size of the nanoparticles, so the nanoparticles were single crystals. The nickel-containing nanoparticles were amorphous, but crystallised upon heating. The nanoparticles were observed to be smaller when the reduction of nickel was done in an aqueous ammonium hydrate medium rather than in plain aqueous solution. Lignin is typically seen as a side product of the wood industries. It is the second most abundant natural polymer on Earth, and it has potential to be a useful material for many purposes in addition to being an energy source for pulp mills.
In this thesis, the morphology of several lignins, produced from wood by different separation methods, was studied using small-angle and ultra-small-angle x-ray scattering. It was shown that the fractal model previously proposed for the lignin structure does not apply to most of the extracted lignin types. The only lignin to which the fractal model could be applied was kraft lignin. In aqueous solutions the average shape of the low-molar-mass kraft lignin particles was observed to be elongated and flat. The average shape does not necessarily correspond to the shape of the individual particles, because of the polydispersity of the fraction and the self-association of the particles. Lignins, and especially lignosulfonate, have many uses as dispersants, binders and emulsion stabilisers. In this thesis work the self-association of low-molar-mass lignosulfonate macromolecules was observed using small-angle x-ray scattering. By taking into account the polydispersity of the studied lignosulfonate fraction, the shape of the lignosulfonate particles was determined to be flat by fitting an oblate ellipsoidal model to the scattering intensity.
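The crystallite sizes mentioned above are typically extracted from the broadening of wide-angle x-ray scattering peaks. The abstract does not spell out the analysis used, but the standard route is the Scherrer equation; a minimal sketch, with all numbers purely illustrative:

```python
import math

def scherrer_crystallite_size(wavelength_nm, fwhm_rad, two_theta_rad, k=0.9):
    """Estimate the mean crystallite size (nm) from a WAXS/diffraction
    peak via the Scherrer equation  D = K * lambda / (beta * cos(theta)),
    where beta is the peak FWHM in radians (instrumental broadening
    already subtracted) and theta is the Bragg angle."""
    theta = two_theta_rad / 2.0
    return k * wavelength_nm / (fwhm_rad * math.cos(theta))

# Illustrative numbers (not from the thesis): Cu K-alpha radiation
# (0.154 nm), a peak at 2*theta = 43 degrees with 0.02 rad FWHM.
size = scherrer_crystallite_size(0.154, 0.02, math.radians(43.0))
```

Note that the Scherrer estimate is a lower bound on particle size: it measures the coherently diffracting domain, which is why agreement with the small-angle (whole-particle) size indicates single-crystal particles.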


While significant research has explored the pedagogical benefits of undertaking lengthy social work and human services field placements, very little consideration has been given to the potential financial stress involved for students. This study addresses that knowledge gap. Research was conducted in 2014, using quantitative and qualitative methods, with students and academic and professional staff from six Queensland universities. The findings show a significant relationship between unpaid placements and financial hardship, creating considerable stress for students and, at times, a compromised learning experience while on placement. The limited flexibility in the requirements of professional bodies and universities for how placements are undertaken was identified as a key contributor to financial hardship. Addressing the complexities inherent in this issue requires a collaborative effort from multiple stakeholders and should not be regarded as a problem for students to endure and manage.


Myotonic dystrophy types 1 (DM1) and 2 (DM2) are the most common forms of muscular dystrophy affecting adults. They are autosomal dominant diseases caused by microsatellite tri- or tetranucleotide repeat expansion mutations in transcribed but untranslated gene regions. The mutant RNA accumulates in nuclei, disturbing the expression of several genes. The more recently identified DM2 disease is less well known, yet more than 300 patients have been confirmed in Finland thus far, and the true number is believed to be much higher. DM1 and DM2 share some features in their general clinical presentation and molecular pathology, yet they show distinctive differences, including disease severity and differential muscle and fiber type involvement. However, the molecular differences underlying DM1 and DM2 muscle pathology are not well understood. Although the primary tissue affected is muscle, both DMs show a multisystemic phenotype due to the wide expression of the mutation-carrying genes. DM2 is particularly intriguing, as it shows a remarkably wide spectrum of clinical manifestations, and for this reason it constitutes a real diagnostic challenge. The core symptoms in DM2 include proximal muscle weakness, muscle pain, myotonia, cataracts, cardiac conduction defects and endocrinological disturbances; however, none of these is mandatory for the disease. Myalgic pains may be the most disabling symptom for decades, sometimes leading to incapacity for work. In addition, if left undiagnosed, DM2 may have major socio-economic consequences for the patient due to misunderstanding and false stigmatization. In this thesis work, we have (I) improved DM2 differential diagnostics based on muscle biopsy, and (II) described abnormalities in mRNA and protein expression in DM1 and DM2 patient skeletal muscles, showing partial differences between the two diseases which may contribute to the muscle pathology in these diseases.
This is the first description of histopathological differences between DM1 and DM2 that can be used in differential diagnostics. Two novel high-resolution applications of in situ hybridization are described, which can be used for direct visualization of the DM2 mutation in muscle biopsy sections or for mutation size determination on extended DNA fibers. By measuring protein and mRNA expression in the samples, differential changes were found in the expression patterns of contractile proteins, other structural proteins and calcium-handling proteins in DM2 compared to DM1. The dysregulation at the mRNA level was caused by altered transcription and abnormal splicing. The findings reported here indicate that the extent of aberrant splicing is higher in DM2 than in DM1. In addition, the described abnormalities correlate to some extent with the differences in fiber type involvement in the two disorders.


The occurrence of occupational chronic solvent encephalopathy (CSE) appears to be decreasing, but new cases are still revealed every year. To prevent CSE and the early retirement of solvent-exposed workers, actions should focus on early detection and diagnosis of CSE. Identifying the work tasks and solvent exposures associated with a high risk of CSE is crucial. Clinical and exposure data on all 128 cases diagnosed with CSE as an occupational disease in Finland during 1995-2007 were collected from the patient records at the Finnish Institute of Occupational Health (FIOH) in Helsinki. Data on the number of exposed workers in Finland were gathered from the Finnish Job-Exposure Matrix (FINJEM), and the number of employed from the national workforce survey. We analyzed the work tasks and solvent exposure of the CSE patients and the findings in brain magnetic resonance imaging (MRI), quantitative electroencephalography (QEEG), and event-related potentials (ERP). The annual number of new cases diminished from 18 to 3, and the incidence of CSE decreased from 8.6 to 1.2 per million employed per year. The incidence of CSE was highest in workers whose main exposure was to aromatic hydrocarbons; during 1995-2006 this incidence decreased from 1.2 to 0.3 per 1,000 exposed workers per year. The work tasks with the highest incidence of CSE were floor laying and lacquering, wooden surface finishing, and industrial, metal, or car painting. Among 71 CSE patients, brain MRI revealed atrophy, white matter hyperintensities, or both in 38% of the cases. Atrophy, which was associated with the duration of exposure, was most frequently located in the cerebellum and in the frontal or parietal brain areas. QEEG in a group of 47 patients revealed increased power of the theta band in the frontal brain area. In a group of 86 patients, the P300 amplitude of the auditory ERP was decreased, but at the individual level all the amplitude values were classified as normal.
In 11 CSE patients and 13 age-matched controls, ERPs elicited by a multimodal paradigm, comprising an auditory task, a visual detection task, and a recognition memory task under single- and dual-task conditions, corroborated the decrease of the auditory P300 amplitude in CSE patients in the single-task condition. In dual-task conditions, the auditory P300 component was unrecognizable more often in patients than in controls. Due to the paucity and non-specificity of the findings, brain MRI serves mainly for differential diagnostics in CSE. QEEG and the auditory P300 are insensitive at the individual level and are not useful in the clinical diagnostics of CSE. A multimodal ERP paradigm may, however, provide a more sensitive method for diagnosing slight cognitive disturbances such as those in CSE.


The analysis of lipid compositions from biological samples has become increasingly important. Lipids have a role in cardiovascular disease, metabolic syndrome and diabetes. They also participate in cellular processes such as signalling, inflammatory response, aging and apoptosis. In addition, the mechanisms regulating cell membrane lipid compositions are poorly understood, partly because of a lack of good analytical methods. Mass spectrometry has opened up new possibilities for lipid analysis due to its high resolving power, its sensitivity and the possibility of structural identification by fragment analysis. The introduction of electrospray ionization (ESI) and the advances in instrumentation revolutionized the analysis of lipid compositions. ESI is a soft ionization method, i.e. it avoids unwanted fragmentation of the lipids. Mass spectrometric analysis of lipid compositions is complicated by incomplete separation of the signals, by differences in the instrument response of different lipids, and by the large amount of data generated by the measurements. These factors necessitate the use of computer software for the analysis of the data. The topic of this thesis is the development of methods for the mass spectrometric analysis of lipids. The work includes both computational and experimental aspects of lipid analysis. The first article explores the practical aspects of quantitative mass spectrometric analysis of complex lipid samples and describes how the properties of phospholipids and their concentration affect the response of the mass spectrometer. The second article describes a new algorithm for computing the theoretical mass spectrometric peak distribution, given the elemental isotope composition and the molecular formula of a compound. The third article introduces programs aimed specifically at the analysis of complex lipid samples and discusses different computational methods for separating the overlapping mass spectrometric peaks of closely related lipids.
The fourth article applies the methods developed by simultaneously measuring the progress curves of enzymatic hydrolysis for a large number of phospholipids, which are used to determine the substrate specificity of various A-type phospholipases. The data provide evidence that substrate efflux from the bilayer is the key factor determining the rate of hydrolysis.
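The second article's peak-distribution algorithm is not reproduced in the abstract, but the underlying idea — building a compound's isotope pattern by convolving the patterns of its constituent atoms — can be sketched as follows. This naive repeated convolution illustrates the principle only, not the article's (presumably far more efficient) algorithm; abundances are rounded standard values:

```python
from collections import defaultdict

# Natural isotope patterns as {nominal mass shift in Da: abundance}
# (rounded standard values; minor isotopes beyond these are omitted).
ISOTOPES = {
    "C": {0: 0.9893, 1: 0.0107},              # 12C / 13C
    "H": {0: 0.99988, 1: 0.00012},            # 1H / 2H
    "N": {0: 0.99636, 1: 0.00364},            # 14N / 15N
    "O": {0: 0.99757, 1: 0.00038, 2: 0.00205},# 16O / 17O / 18O
}

def convolve(a, b):
    """Combine two isotope distributions (sum masses, multiply probs)."""
    out = defaultdict(float)
    for ma, pa in a.items():
        for mb, pb in b.items():
            out[ma + mb] += pa * pb
    return dict(out)

def peak_distribution(formula):
    """Nominal isotope-peak distribution for a molecular formula given
    as {element: count}, by repeatedly convolving single-atom patterns."""
    dist = {0: 1.0}
    for elem, count in formula.items():
        for _ in range(count):
            dist = convolve(dist, ISOTOPES[elem])
    return dist

# A hypothetical C16H31O2 fragment (palmitate-like), for illustration.
d = peak_distribution({"C": 16, "H": 31, "O": 2})
```

The relative height of the M+1 peak (here dominated by the ~1.1% 13C contribution per carbon) is exactly the kind of overlap that complicates quantification of closely related lipids differing by one double bond (2 Da).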


Positron emission tomography (PET) is a molecular imaging technique that utilises radiopharmaceuticals (radiotracers) labelled with a positron-emitting radionuclide such as fluorine-18 (18F). The development of a new radiotracer requires an appropriate radiosynthesis method, the most common of which for 18F is nucleophilic substitution with the [18F]fluoride ion. The success of the labelling reaction depends on various factors, such as the reactivity of [18F]fluoride, the structure of the target compound, and the chosen solvent. The overall radiosynthesis procedure must be optimised in terms of radiochemical yield and the quality of the final product. Therefore, both quantitative and qualitative radioanalytical methods are essential in developing radiosynthesis methods. Furthermore, the biological properties of the tracer candidate need to be evaluated in various pre-clinical studies in animal models. In this work, the feasibility of various nucleophilic 18F-fluorination strategies was studied, and a labelling method for a novel radiotracer, N-3-[18F]fluoropropyl-2beta-carbomethoxy-3beta-(4-fluorophenyl)nortropane ([18F]beta-CFT-FP), was optimised. The effect of the solvent was studied by labelling a series of model compounds, 4-(R1-methyl)benzyl R2-benzoates. 18F-Fluorination reactions were carried out both in polar aprotic solvents and in protic solvents (tertiary alcohols). The assessment of 18F-fluorinated products by mass spectrometry (MS), in addition to conventional radiochromatographic methods, was studied using the radiosynthesis of 4-[18F]fluoro-N-[2-[1-(2-methoxyphenyl)-1-piperazinyl]ethyl]-N-2-pyridinyl-benzamide (p-[18F]MPPF) as a model reaction. The labelling of [18F]beta-CFT-FP was studied using two 18F-fluoroalkylation reagents, [18F]fluoropropyl bromide and [18F]fluoropropyl tosylate, as well as by direct 18F-fluorination of a sulfonate ester precursor.
Subsequently, the suitability of [18F]beta-CFT-FP for imaging the dopamine transporter (DAT) was evaluated by determining its biodistribution in rats. The results showed that protic solvents can be useful co-solvents in aliphatic 18F-fluorinations, especially in the labelling of sulfonate esters. Aromatic 18F-fluorination was not promoted in tert-alcohols. The sensitivity of the ion trap MS was sufficient for the qualitative analysis of the 18F-labelled products; p-[18F]MPPF was identified from the isolated product fraction with a mass-to-charge (m/z) ratio of 435 (i.e. the protonated molecule [M+H]+). [18F]beta-CFT-FP was produced most efficiently via [18F]fluoropropyl tosylate, giving sufficient radiochemical yield and specific radioactivity for PET studies. The ex vivo studies in rats showed fast kinetics as well as specific uptake of [18F]beta-CFT-FP in the DAT-rich brain regions. Thus, it was concluded that [18F]beta-CFT-FP has potential as a radiotracer for imaging DAT with PET.


Work/family reconciliation is a crucial question throughout the Western world, both for personal well-being and, at the societal level, for productivity and reproduction. This thesis examines work/family reconciliation at the societal and organisational levels in the Finnish context. The study is based on an initial framework, which it develops further and uses to analyse the results. The methodology is plural, with varying epistemological emphases and both quantitative and qualitative methods. Policy analysis from two different sectors is followed by a survey answered by 113 HR managers and then, based on the quantitative analyses, by interviews in four chosen case companies. The central findings of the thesis are that there indeed are written corporate-level policies for reconciling work and family in companies operating in Finland, in spite of the strong state-level involvement in creating a policy context for work/family reconciliation, and that the existing policies vary in accessibility and use. The most frequently used work/family policies are still the statutory state-level family leave policies, which apply when a baby is born and during his or her first years. Still, new policies are arising, such as a nurse for an employee's child who has fallen ill, that are based on company activity only, which shows in both the accessibility and the use of the policy. The reasons for developing corporate-level work/family policies vary between so-called proactive and reactive companies. In general, family law has a substantial effect on the development of corporate-level policies; headquarters' gender equality strategies and employee demands are also important. Regression analyses showed that corporate image and importance in recruitment are the foremost reasons for companies to develop policies, rather than, for example, the number of female employees in the company.
The reasons for policy development can be summarized as normative, coercive and mimetic pressures, in line with findings from institutional theory. This research, however, takes account of different stakeholder interests and recognizes that institutional theory needs to be complemented with notions of gender and family, which seem to play a part in perceived work/family conflict and in the need for further work/family policies, both in managers' personal lives and at the organisational level. A central finding that demands more attention is the change, perceived by HR managers, in the youngest working generation's (Generation Y) values towards work and commitment to the organisation. This, combined with the need for key personnel, has brought new challenges to companies, especially in the knowledge business, and will presumably lead to the further development of flexible practices in organisations. Access to this flexibility, however, seems to depend even more on the specific knowledge and skills of the employee. How this generation will change organisations remains to be seen in further research.


Instruction scheduling with an automaton-based resource conflict model is well established for normal scheduling. Such models have been generalized to software pipelining in the modulo-scheduling framework. One weakness of the existing methods is that a distinct automaton must be constructed for each combination of a reservation table and an initiation interval. In this work, we present a different approach to modelling conflicts. We construct one automaton for each reservation table, which acts as a compact encoding of all the conflict automata for that table; these can be recovered from it for use in modulo scheduling. The basic premise of the construction is to move away from the Proebsting-Fraser model of the conflict automaton to the Muller model, in which the automaton models issue sequences; the latter turns out to be useful and efficient in this situation. Having constructed this automaton, we show how to improve the estimate of the resource-constrained initiation interval. Such a bound is always better than the average-use estimate, and we show that it is safe: it never exceeds the true initiation interval. This use of the automaton is orthogonal to its use in modulo scheduling. Once we generate the required information during pre-processing, we can compute the lower bound for a program without any further reference to the automaton.
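For context, the average-use estimate that the automaton-based bound improves upon can be computed directly from the reservation tables: for each resource class, sum the cycles of use over all operations and divide by the number of units. A minimal sketch (the resource names and counts are hypothetical, not from the paper):

```python
import math

def average_use_res_mii(reservation_tables, num_units):
    """Classical 'average-use' lower bound on the resource-constrained
    minimum initiation interval (ResMII).

    reservation_tables: one dict per operation to be scheduled, mapping
        resource name -> number of cycles that operation occupies it.
    num_units: dict mapping resource name -> available units per cycle.
    """
    usage = {}
    for table in reservation_tables:
        for res, cycles in table.items():
            usage[res] = usage.get(res, 0) + cycles
    # The bottleneck resource determines the bound.
    return max(math.ceil(usage[r] / num_units[r]) for r in usage)

# Three hypothetical ops on a machine with 1 multiplier and 2 adders.
ops = [{"mul": 2}, {"add": 1, "mul": 1}, {"add": 1}]
mii = average_use_res_mii(ops, {"mul": 1, "add": 2})
```

Here the multiplier is used for 3 cycles per iteration by a single unit, so no schedule can repeat faster than every 3 cycles; the paper's automaton-derived bound tightens this kind of estimate while remaining safe.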


This work analyses the influence of several design methods on the degree of creativity of the design outcome. A design experiment was carried out in which the participants were divided into four teams of three members, and each team was asked to work applying a different design method. The selected methods were Brainstorming, Functional Analysis, and the SCAMPER method. The 'degree of creativity' of each design outcome is assessed by means of a questionnaire offered to a number of experts and by means of three different metrics: the metric of Moss, the metric of Sarkar and Chakrabarti, and the evaluation of innovative potential. The three metrics share the property of measuring creativity as a combination of the degree of novelty and the degree of usefulness. The results show that Brainstorming provides more creative outcomes than when no method is applied, while this could not be shown for SCAMPER or Functional Analysis.
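The three metrics are not defined in the abstract; as a toy illustration of the idea they share — creativity as a combination of novelty and usefulness — one might write the following. The product form follows the usual reading of Sarkar and Chakrabarti's proposal; the actual metrics differ in their details, and the function name and scales here are hypothetical:

```python
def creativity_score(novelty, usefulness, mode="product"):
    """Toy combination of a novelty score and a usefulness score
    (each assumed normalised to [0, 1]) into a single creativity value.
    'product' rewards outcomes that are BOTH novel and useful; the
    additive alternative is more forgiving of a weak component."""
    if mode == "product":
        return novelty * usefulness
    return 0.5 * (novelty + usefulness)

# A highly novel but only moderately useful concept:
score = creativity_score(0.8, 0.5)
```

Under the product form, an outcome that is novel but useless (or useful but commonplace) scores zero, which matches the intuition that both ingredients are necessary for creativity.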


Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called `early warning signals', and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data.
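The abstract does not enumerate the indicators, but the two most widely used early-warning signals — rising variance and rising lag-1 autocorrelation, both symptoms of critical slowing down — can be computed in a sliding window as follows. The toy series below is illustrative, not data from the paper:

```python
import numpy as np

def rolling_indicators(x, window):
    """Rolling-window variance and lag-1 autocorrelation, the two most
    common early-warning indicators of critical slowing down."""
    var, ac1 = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(np.var(w))
        # Lag-1 autocorrelation: correlation of the window with itself
        # shifted by one step.
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# Toy series approaching a transition: an AR(1) process whose
# autocorrelation coefficient drifts towards 1 over time.
rng = np.random.default_rng(0)
n = 1000
x = np.zeros(n)
for t in range(1, n):
    phi = 0.2 + 0.75 * t / n          # progressive critical slowing down
    x[t] = phi * x[t - 1] + rng.normal()

var, ac1 = rolling_indicators(x, 200)
```

In practice the trend in such indicators is usually quantified with a rank correlation (e.g. Kendall's tau) against time, and the series is detrended first; both refinements are omitted here for brevity.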


Electrical failure of insulation is known to be an extremal random process, wherein nominally identical pro-rated specimens of equipment insulation at constant stress fail at inordinately different times, even under laboratory test conditions. In order to estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses, and to acquire and analyze insulation-specific failure data. In the present work, resin-impregnated paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress has been assumed to be based on a temperature-dependent inverse power law model.
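The inverse power law is not written out in the abstract; at a fixed temperature it reduces to L = k * E**(-n) for life L at electrical stress E, which is fitted to accelerated-ageing data by linear regression in log-log space. A sketch using synthetic data (not the RIP measurements; the true exponent of 9 and all stress values are invented for illustration):

```python
import numpy as np

def fit_inverse_power_law(stress, life):
    """Fit the inverse power law life model  L = k * E**(-n)  by linear
    regression in log-log space: log L = log k - n * log E.
    Returns the estimated (k, n)."""
    slope, intercept = np.polyfit(np.log(stress), np.log(life), 1)
    return np.exp(intercept), -slope

# Synthetic accelerated-ageing data with lognormal scatter.
rng = np.random.default_rng(1)
E = np.array([20.0, 25.0, 30.0, 35.0, 40.0])          # stress, e.g. kV/mm
L = 1e12 * E**-9 * np.exp(rng.normal(0.0, 0.05, E.size))  # time to failure

k, n = fit_inverse_power_law(E, L)
```

In a full analysis the scatter at each stress level would itself be modelled (e.g. with a Weibull distribution, using a characteristic life at each stress rather than single failure times), consistent with failure being an extremal process.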


This paper presents an experimental study conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are any differences in the length of time devoted to the different types of activities carried out in the design process, depending on the method employed; in other words, whether the design methods used make a difference in the profile of time spent across the design activities. The second objective was to analyze whether there is any kind of relationship between the time spent on design process activities and the degree of creativity of the solutions obtained. Creativity was evaluated by means of the degree of novelty and the level of resolution of the designed solutions, using the Creative Product Semantic Scale (CPSS) questionnaire. The results show that there are significant differences in the amount of time devoted to activities related to understanding the problem, depending on the type of design method, intuitive or logical, that is used. While the amount of time spent on analyzing the problem is very small with intuitive methods such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. It was also found that the amount of time spent in each design phase has an influence on the results in terms of creativity, but the results are not strong enough to determine to what extent they are affected. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. DOI: 10.1115/1.4007362


The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have also been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G_f) in order to obtain a size-independent value (G_F). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G_f measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G_F is very nearly the same and independent of the size of the specimen. In this paper, we will provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa. (c) 2013 Elsevier Ltd. All rights reserved.


Structural Support Vector Machines (SSVMs) and Conditional Random Fields (CRFs) are popular discriminative methods for classifying structured and complex objects such as parse trees, image segments and part-of-speech tags. The datasets involved are of very high dimension, and the models designed using typical training algorithms for SSVMs and CRFs are non-sparse. This non-sparseness results in slow inference, so there is a need for new algorithms for sparse SSVM and CRF classifier design. The use of the elastic net and the L1 regularizer has already been explored for solving the primal CRF and SSVM problems, respectively, to design sparse classifiers. In this work, we focus on the dual elastic-net-regularized SSVM and CRF. By exploiting the weakly coupled structure of these convex programming problems, we propose a new sequential alternating proximal (SAP) algorithm to solve the dual problems. The algorithm works by sequentially visiting each training-set example and solving a simple subproblem restricted to the small subset of variables associated with that example. Numerical experiments on various benchmark sequence-labeling datasets demonstrate that the proposed algorithm scales well. Further, the classifiers designed are sparser than those designed by solving the respective primal problems, with comparable generalization performance. The proposed SAP algorithm is thus a useful alternative for sparse SSVM and CRF classifier design.
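The SAP updates themselves are not specified in the abstract; proximal methods for elastic-net objectives are, however, built around the standard elastic-net proximal operator, which can be sketched as follows. This is a generic building block under that assumption, not the paper's exact per-example update:

```python
import numpy as np

def prox_elastic_net(v, step, l1, l2):
    """Proximal operator of  step * (l1 * ||x||_1 + (l2 / 2) * ||x||_2^2),
    i.e. argmin_x  0.5 * ||x - v||^2 + step * (l1*||x||_1 + l2/2*||x||_2^2).
    Closed form: componentwise soft-thresholding by step*l1, followed by
    shrinkage by 1 / (1 + step*l2). The L1 term is what zeroes out
    coordinates and produces the sparsity discussed above."""
    soft = np.sign(v) * np.maximum(np.abs(v) - step * l1, 0.0)
    return soft / (1.0 + step * l2)

w = prox_elastic_net(np.array([3.0, -0.5, 1.0]), 1.0, 1.0, 1.0)
```

In a sequential alternating scheme, an operator of this kind would be applied repeatedly to the small block of dual variables attached to one training example, which is what keeps each subproblem cheap.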


Using idealized one-dimensional Eulerian hydrodynamic simulations, we contrast the behaviour of isolated supernovae with that of superbubbles driven by multiple, collocated supernovae. Continuous energy injection via successive supernovae exploding within the hot/dilute bubble maintains a strong termination shock. This strong shock keeps the superbubble over-pressured and drives the outer shock well after it becomes radiative. Isolated supernovae, in contrast, with no further energy injection, become radiative quite early (≲ 0.1 Myr, tens of pc), and stall at scales ≲ 100 pc. We show that isolated supernovae lose almost all of their mechanical energy by 1 Myr, but superbubbles can retain up to ~40 per cent of the input energy in the form of mechanical energy over the lifetime of the star cluster (a few tens of Myr). These conclusions hold even in the presence of realistic magnetic fields and thermal conduction. We also compare various methods for implementing supernova feedback in numerical simulations. For various feedback prescriptions, we derive the spatial scale below which the energy needs to be deposited in order for it to couple to the interstellar medium. We show that a steady thermal wind within the superbubble appears only for a large number (≳ 10^4) of supernovae. For smaller clusters, we expect multiple internal shocks instead of a smooth, dense thermalized wind.