17 results for Spermatic quality analysis

in Helda - Digital Repository of University of Helsinki


Relevance: 80.00%

Abstract:

The nutritional quality of the product, as well as other quality attributes such as microbiological and sensory quality, is an essential factor in the baby food industry, and therefore alternative sterilization methods to conventional heating processes are of great interest in this food sector. This report gives an overview of different sterilization techniques for baby food. The report is part of the work done in work package 3, "QACCP Analysis Processing: Quality-driven distribution and processing chain analysis", in the Core Organic ERANET project Quality analysis of critical control points within the whole food chain and their impact on food quality, safety and health (QACCP). The overall objective of the project is to optimise organic production and processing in order to improve food safety as well as nutritional quality and to increase health-promoting aspects of consumer products. The approach is a chain analysis that addresses the link from farm to fork and backwards from fork to farm. The objective is to improve product-related quality management in farming (towards testing food authenticity) and processing (towards food authenticity and sustainable processes). The articles in this volume do not necessarily reflect the Core Organic ERANET's views and in no way anticipate the Core Organic ERANET's future policy in this area. The contents of the articles in this volume are the sole responsibility of the authors. The information contained herein, including any expression of opinion and any projection or forecast, has been obtained from sources believed by the authors to be reliable but is not guaranteed as to accuracy or completeness. The information is supplied without obligation and on the understanding that any person who acts upon it or otherwise changes his/her position in reliance thereon does so entirely at his/her own risk. The writers gratefully acknowledge the financial support from the Core Organic funding bodies: the Ministry of Agriculture and Forestry, Finland; the Swiss Federal Office for Agriculture, Switzerland; and the Federal Ministry of Consumer Protection, Food and Agriculture, Germany.

Relevance: 80.00%

Abstract:

To enhance the utilization of the wood, sawmills are forced to place more emphasis on planning in order to master the whole production chain from the forest to the end product. One significant obstacle to integrating the forest-sawmill-market production chain is the lack of appropriate information about forest stands. Since the wood procurement point of view has been almost totally disregarded in forest planning systems, there has been a great need to develop an easy and efficient pre-harvest measurement method that allows stands to be measured separately prior to harvesting. The main purpose of this study was to develop a measurement method for pine stands which forest managers could use to describe the properties of the standing trees for sawing production planning. Study materials were collected from ten Scots pine (Pinus sylvestris) stands located in North Häme and South Pohjanmaa, in southern Finland. The data comprise test sawing data on 314 pine stems, dbh and height measurements of all trees, measurements of the quality parameters of pine sawlog stems in all ten study stands, and the locations of all trees in six stands. The study was divided into four sub-studies which deal with pine quality prediction, construction of diameter and dead branch height distributions, sampling designs, and the application of height and crown height models. The final proposal for the pre-harvest measurement method is a synthesis of the individual sub-studies. Quality analysis resulted in choosing dbh, the distance from stump height to the first dead branch (dead branch height), crown height and tree height as the most appropriate quality characteristics of Scots pine. Dbh and dead branch height are measured from each pine sample tree, while height and crown height are derived from dbh measurements with the aid of mixed height and crown height models. Pine and spruce diameter distributions as well as the dead branch height distribution are most effectively predicted by the kernel function. Roughly 25 sample trees seem to be appropriate in pure pine stands. In mixed stands the number of sample trees needs to be increased in proportion to the share of pines in order to attain the same level of accuracy.
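
As an illustration of the kernel-function approach to predicting a stand's diameter distribution, the sketch below fits a Gaussian kernel density estimate to dbh measurements from roughly 25 sample trees. The sample values, bandwidth rule and diameter class are hypothetical and only show the general idea, not the exact estimator developed in the thesis.

# Illustrative sketch: predicting a stand's dbh distribution with a kernel
# density estimate from ~25 sample trees (values are invented for the example).
import numpy as np
from scipy.stats import gaussian_kde

dbh_cm = np.array([18.2, 21.5, 23.0, 24.8, 25.1, 26.3, 27.0, 27.9, 28.4,
                   29.2, 30.0, 30.7, 31.5, 32.1, 33.0, 33.8, 34.5, 35.2,
                   36.0, 37.1, 38.3, 39.0, 40.2, 41.5, 43.0])  # sample tree dbh

kde = gaussian_kde(dbh_cm)                   # Gaussian kernel, bandwidth by Scott's rule
grid = np.linspace(15, 50, 200)              # dbh range of interest (cm)
density = kde(grid)                          # predicted diameter distribution

print(f"Most common dbh class: about {grid[density.argmax()]:.0f} cm")
print(f"Estimated share of stems with dbh 30-40 cm: {kde.integrate_box_1d(30, 40):.2f}")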

Relevance: 30.00%

Abstract:

Solid materials can exist in different physical structures without a change in chemical composition. This phenomenon, known as polymorphism, has several implications for pharmaceutical development and manufacturing. Various solid forms of a drug can possess different physical and chemical properties, which may affect processing characteristics and stability, as well as the performance of the drug in the human body. Therefore, knowledge and control of the solid forms are fundamental to maintaining the safety and high quality of pharmaceuticals. During manufacture, harsh conditions can give rise to unexpected solid phase transformations and therefore change the behavior of the drug. Traditionally, pharmaceutical production has relied on time-consuming off-line analysis of production batches and finished products. This has led to poor understanding of processes and drug products. Therefore, new powerful methods that enable real-time monitoring of pharmaceuticals during manufacturing processes are greatly needed. The aim of this thesis was to apply spectroscopic techniques to solid phase analysis within different stages of drug development and manufacturing, and thus provide a molecular-level insight into the behavior of active pharmaceutical ingredients (APIs) during processing. Applications to polymorph screening and different unit operations were developed and studied. A new approach to dissolution testing, which involves simultaneous measurement of drug concentration in the dissolution medium and in-situ solid phase analysis of the dissolving sample, was introduced and studied. Solid phase analysis was successfully performed during the different stages, enabling a molecular-level insight into the occurring phenomena. Near-infrared (NIR) spectroscopy was utilized in screening of polymorphs and processing-induced transformations (PITs). Polymorph screening was also studied with NIR and Raman spectroscopy in tandem. Quantitative solid phase analysis during fluidized bed drying was performed with in-line NIR and Raman spectroscopy and partial least squares (PLS) regression, and different dehydration mechanisms were studied using in-situ spectroscopy and partial least squares discriminant analysis (PLS-DA). In-situ solid phase analysis with Raman spectroscopy during dissolution testing enabled analysis of dissolution as a whole, and provided a scientific explanation for changes in the dissolution rate. It was concluded that the methods applied and studied provide better process understanding and knowledge of the drug products, and therefore a way to achieve better quality.
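
As a sketch of how quantitative solid phase analysis from in-line spectra can work in principle, the example below calibrates a partial least squares (PLS) regression model on simulated spectra of two-form mixtures and predicts the content of one form in a new spectrum. The spectra, component count and library choice (scikit-learn) are illustrative assumptions, not the instruments or calibration models used in the thesis.

# Illustrative sketch: PLS regression for quantifying a solid form from in-line
# spectra (synthetic data; not the actual calibration used in the thesis).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 40, 200

# Simulate calibration spectra as mixtures of two pure-form spectra plus noise.
pure_a = np.sin(np.linspace(0, 6, n_wavelengths)) + 1.5
pure_b = np.cos(np.linspace(0, 6, n_wavelengths)) + 1.5
fraction_a = rng.uniform(0, 1, n_samples)                  # known form A content
X = np.outer(fraction_a, pure_a) + np.outer(1 - fraction_a, pure_b)
X += rng.normal(scale=0.02, size=X.shape)

pls = PLSRegression(n_components=3)
pls.fit(X, fraction_a)

# Predict the solid form fraction for a new (here simulated) process spectrum.
new_spectrum = 0.3 * pure_a + 0.7 * pure_b
print(pls.predict(new_spectrum.reshape(1, -1)))            # close to 0.3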

Relevance: 30.00%

Abstract:

Aims: The aims of this study were 1) to identify and describe health economic studies that have used quality-adjusted life years (QALYs) based on actual measurements of patients' health-related quality of life (HRQoL); 2) to test the feasibility of routine collection of HRQoL data as an indicator of the effectiveness of secondary health care; and 3) to establish and compare the cost-utility of three large-volume surgical procedures in a real-world setting in the Helsinki University Central Hospital, a large referral hospital providing secondary and tertiary health-care services for a population of approximately 1.4 million. Patients and methods: To identify studies that have used QALYs as an outcome measure, a systematic search of the literature was performed using the Medline, Embase, CINAHL, SCI and Cochrane Library electronic databases. Initial screening of the identified articles involved two reviewers independently reading the abstracts; the full-text articles were also evaluated independently by two reviewers, with a third reviewer used in cases where the two reviewers could not reach a consensus on which articles should be included. The feasibility of routinely evaluating the cost-effectiveness of secondary health care was tested by setting up a system for collecting HRQoL data on approximately 4 900 patients before and after operative treatments performed in the hospital. The HRQoL data used as an indicator of treatment effectiveness were combined with diagnostic and financial indicators routinely collected in the hospital. To compare the cost-effectiveness of three surgical interventions, 712 patients admitted for routine operative treatment completed the 15D HRQoL questionnaire before and 3-12 months after the operation. QALYs were calculated using the obtained utility data and the expected remaining life years of the patients. Direct hospital costs were obtained from the clinical patient administration database of the hospital, and a cost-utility analysis was performed from the perspective of the provider of secondary health care services. Main results: The systematic review (Study I) showed that although QALYs gained are considered an important measure of the effectiveness of health care, the number of studies in which QALYs are based on actual measurements of patients' HRQoL is still fairly limited. Of the reviewed full-text articles, only 70 reported QALYs based on actual before-after measurements using a valid HRQoL instrument. Collection of simple cost-effectiveness data in secondary health care is feasible and could easily be expanded and performed on a routine basis (Study II). It allows meaningful comparisons between various treatments and provides a means for allocating limited health care resources. The cost per QALY gained was €2 770 for cervical operations and €1 740 for lumbar operations. In cases where surgery was delayed, the cost per QALY was doubled (Study III). The cost per QALY varied between subgroups in cataract surgery (Study IV). The cost per QALY gained was €5 130 for patients having both eyes operated on and €8 210 for patients with only one eye operated on during the 6-month follow-up. In patients whose first eye had been operated on prior to the study period, the mean HRQoL deteriorated after surgery, thus precluding the establishment of the cost per QALY.
In arthroplasty patients (Study V) the mean cost per QALY gained over a one-year period was €6 710 for primary hip replacement, €52 270 for revision hip replacement, and €14 000 for primary knee replacement. Conclusions: Although the importance of cost-utility analyses has been stressed during recent years, there are only a limited number of studies in which the evaluation is based on patients' own assessment of the treatment effectiveness. Most cost-effectiveness and cost-utility analyses are based on modeling that employs expert opinion regarding the outcome of treatment, not on patient-derived assessments. Routine collection of effectiveness information from patients entering treatment in secondary health care turned out to be easy and did not, for instance, require additional personnel on the wards in which the study was executed. The mean patient response rate was more than 70%, suggesting that patients were happy to participate and appreciated the fact that the hospital showed an interest in their well-being even after the actual treatment episode had ended. Spinal surgery leads to a statistically significant and clinically important improvement in HRQoL. The cost per QALY gained was reasonable, at less than half of that observed, for instance, for hip replacement surgery. However, prolonged waiting for an operation approximately doubled the cost per QALY gained from the surgical intervention. The mean utility gain following routine cataract surgery in a real-world setting was relatively small and confined mostly to patients who had had both eyes operated on. The cost of cataract surgery per QALY gained was higher than previously reported and was associated with a considerable degree of uncertainty. Hip and knee replacement both improve HRQoL. The cost per QALY gained from knee replacement is two-fold compared to hip replacement. Cost-utility results from the three studied specialties showed that there is great variation in the cost-utility of surgical interventions performed in a real-world setting, even when only common, widely accepted interventions are considered. However, the cost per QALY of all the studied interventions, except for revision hip arthroplasty, was well below €50 000, a figure sometimes cited in the literature as a threshold level for the cost-effectiveness of an intervention. Based on the present study it may be concluded that routine evaluation of the cost-utility of secondary health care is feasible and produces information essential for a rational and balanced allocation of scarce health care resources.
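
The cost-utility arithmetic behind figures like these is straightforward; the sketch below works through it with hypothetical numbers (the utility scores, remaining life years and cost are invented, not taken from Studies III-V).

# Illustrative cost-per-QALY arithmetic (hypothetical numbers, not study data).
def cost_per_qaly(cost_eur, utility_before, utility_after, remaining_life_years):
    """QALYs gained = change in HRQoL utility x expected remaining life years."""
    qalys_gained = (utility_after - utility_before) * remaining_life_years
    return cost_eur / qalys_gained

# E.g. a procedure costing 3 000 EUR that raises the 15D utility score
# from 0.80 to 0.85 for a patient with 20 expected remaining life years:
print(round(cost_per_qaly(3000, 0.80, 0.85, 20)))   # 3000 / (0.05 * 20) = 3000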

Relevance: 30.00%

Abstract:

This thesis reports on investigations into the influence of heat treatment on the manufacturing of oat flakes. Sources of variation in oat flake quality are reviewed, covering the whole chain from the farm to the consumer. The most important quality parameters of oat flakes are the absence of lipid-hydrolysing enzymes, specific weight, thickness, breakage (fines) and water absorption. Flavour, colour and pasting properties are also important, but were not included in the experimental part of this study. Of particular interest was the role of heat processing. The first possible heat treatment may occur already during grain drying, which in Finland generally happens at the farm. At the mill, oats are often kilned to stabilise the product by inactivating lipid-hydrolysing enzymes. Almost invariably, steaming is used during flaking to soften the groats and reduce flake breakage. This thesis presents the use of a material science approach to investigating a complex system, typical of food processes. A combination of fundamental and empirical rheological measurements was used together with a laboratory-scale process to simulate industrial processing. The results were verified by means of industrial trials. Industrially produced flakes at three thickness levels (nominally 0.75, 0.85 and 0.90 mm) were produced from kilned and unkilned oat groats, and the flake strength was measured at different moisture contents. Kilning was not found to significantly affect the force required to puncture a flake with a 2 mm cylindrical probe, which was taken as a measure of flake strength. To further investigate how heat processing contributes to flake quality, dynamic mechanical analysis was used to characterise the effect of heat on the mechanical properties of oats. A marked stiffening of the groat, of up to about a 50% increase in storage modulus, was observed during first heating at around 36 to 57°C. This was also observed in tablets prepared from ground groats and extracted oat starch. This stiffening was thus attributed to increased adhesion between starch granules. Groats were steamed in a laboratory steamer and tempered in an oven at 80-110°C for 30-90 min. The maximum force required to compress the steamed groats to 50% strain increased from 50.7 N to 57.5 N as the tempering temperature was increased from 80 to 110°C. Tempering conditions also affected water absorption. A significantly higher moisture content was observed for kilned (18.9%) compared to unkilned (17.1%) groats, but kilning otherwise had no effect on groat height, maximum force or final force after a 5 s relaxation time. Flakes were produced from the tempered groats using a laboratory flaking machine with a roll gap of 0.4 mm. Apart from specific weight, flake properties were not influenced by kilning. Tempering conditions, however, had significant effects on the specific weight, thickness and water absorption of the flakes, as well as on the amount of fine material (<2 mm) produced during flaking. Flake strength correlated significantly with groat strength and flake thickness. Trial flaking at a commercial mill confirmed that groat temperature after tempering influenced water absorption. Variation in flake strength was observed, but at the groat temperatures required to inactivate lipase it was rather small. Cold flaking of groats resulted in soft, floury flakes. The results presented in this thesis suggest that heating increased the adhesion between starch granules.
This resulted in an increase in the stiffness and brittleness of the groat. Brittle fracture, rather than plastic flow, during flaking could result in flaws and cracks in the flake. These would be expected to increase water absorption. This was indeed observed as tempering temperature increased. Industrial trials, conducted with different groat temperatures, confirmed the main findings of the laboratory experiments. The approach used in the present study allowed the systematic study of the effect of interacting process parameters on product quality. There have been few scientific studies of oat processing, and these results can be used to understand the complex effects of process variables on flake quality. They also offer an insight into what happens as the oat groat is deformed into a flake.

Relevance: 30.00%

Abstract:

Digital elevation models (DEMs) have been an important topic in geography and the surveying sciences for decades due to their geomorphological importance as the reference surface for gravitation-driven material flow, as well as their wide range of uses and applications. When a DEM is used in terrain analysis, for example in automatic drainage basin delineation, errors of the model accumulate in the analysis results. Investigation of this phenomenon is known as error propagation analysis, which has a direct influence on the decision-making process based on interpretations and applications of terrain analysis. Additionally, it may have an indirect influence on data acquisition and DEM generation. The focus of the thesis was on fine toposcale DEMs, which are typically represented on a 5-50 m grid and used at application scales of 1:10 000-1:50 000. The thesis presents a three-step framework for investigating error propagation in DEM-based terrain analysis. The framework includes methods for visualising the morphological gross errors of DEMs, exploring the statistical and spatial characteristics of the DEM error, performing analytical and simulation-based error propagation analyses and interpreting the error propagation analysis results. The DEM error model was built using geostatistical methods. The results show that appropriate and exhaustive reporting of the various aspects of fine toposcale DEM error is a complex task. This is due to the high number of outliers in the error distribution and the morphological gross errors, which are detectable with the presented visualisation methods. In addition, the use of a global characterisation of DEM error is a gross generalisation of reality due to the small extent of the areas in which the assumption of stationarity is not violated. This was shown using an exhaustive high-quality reference DEM based on airborne laser scanning and local semivariogram analysis. The error propagation analysis revealed that, as expected, an increase in the DEM vertical error increases the error in surface derivatives. However, contrary to expectations, the spatial autocorrelation of the model appears to have varying effects on the error propagation analysis depending on the application. The use of a spatially uncorrelated DEM error model has been considered a 'worst-case scenario', but this opinion is now challenged because none of the DEM derivatives investigated in the study had maximum variation with spatially uncorrelated random error. Significant performance improvement was achieved in simulation-based error propagation analysis by applying process convolution in generating realisations of the DEM error model. In addition, a typology of uncertainty in drainage basin delineations is presented.
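
A minimal sketch of simulation-based error propagation in this spirit is shown below: spatially autocorrelated error fields are generated by process convolution (here simply Gaussian smoothing of white noise), added to a synthetic DEM, and the spread of a derived surface parameter (slope) is summarised over the realisations. The grid size, error standard deviation and correlation length are invented parameters, not values from the thesis.

# Monte Carlo error propagation for a DEM derivative (slope), with spatially
# autocorrelated error generated by process convolution (Gaussian smoothing
# of white noise). All parameters are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
n, cell = 200, 10.0                              # grid size and cell size (m)
x, y = np.meshgrid(np.arange(n), np.arange(n))
dem = 50 + 0.02 * cell * x + 5 * np.sin(y / 15)  # a smooth synthetic DEM (m)

def slope_deg(z):
    dzdy, dzdx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

sigma_z, corr_range = 0.5, 3                     # error std (m), correlation length (cells)
realisations = []
for _ in range(100):
    noise = gaussian_filter(rng.normal(size=(n, n)), corr_range)
    noise *= sigma_z / noise.std()               # rescale to the target error std
    realisations.append(slope_deg(dem + noise))

slope_std = np.std(realisations, axis=0)         # per-cell spread of the derived slope
print(f"Mean slope uncertainty: {slope_std.mean():.2f} degrees")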

Relevance: 30.00%

Abstract:

Metabolism is the cellular subsystem responsible for the generation of energy from nutrients and the production of building blocks for larger macromolecules. Computational and statistical modeling of metabolism is vital to many disciplines, including bioengineering, the study of diseases, drug target identification, and understanding the evolution of metabolism. In this thesis, we propose efficient computational methods for metabolic modeling. The techniques presented are targeted particularly at the analysis of large metabolic models encompassing the whole metabolism of one or several organisms. We concentrate on three major themes of metabolic modeling: metabolic pathway analysis, metabolic reconstruction and the study of the evolution of metabolism. In the first part of this thesis, we study metabolic pathway analysis. We propose a novel modeling framework called gapless modeling to study biochemically viable metabolic networks and pathways. In addition, we investigate the utilization of atom-level information on metabolism to improve the quality of pathway analyses. We describe efficient algorithms for discovering both gapless and atom-level metabolic pathways, and conduct experiments with large-scale metabolic networks. The presented gapless approach offers a compromise in terms of complexity and feasibility between the previous graph-theoretic and stoichiometric approaches to metabolic modeling. Gapless pathway analysis shows that microbial metabolic networks are not as robust to random damage as suggested by previous studies. Furthermore, the amino acid biosynthesis pathways of the fungal species Trichoderma reesei discovered from atom-level data are shown to correspond closely to those of Saccharomyces cerevisiae. In the second part, we propose computational methods for metabolic reconstruction in the gapless modeling framework. We study the task of reconstructing a metabolic network that does not suffer from connectivity problems. Such problems often limit the usability of reconstructed models and typically require a significant amount of manual postprocessing. We formulate gapless metabolic reconstruction as an optimization problem and propose an efficient divide-and-conquer strategy to solve it with real-world instances. We also describe computational techniques for solving problems stemming from ambiguities in metabolite naming. These techniques have been implemented in ReMatch, a web-based software tool intended for the reconstruction of models for 13C metabolic flux analysis. In the third part, we extend our scope from single to multiple metabolic networks and propose an algorithm for inferring gapless metabolic networks of ancestral species from phylogenetic data. Experimenting with 16 fungal species, we show that the method is able to generate results that are easily interpretable and that provide hypotheses about the evolution of metabolism.
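
The thesis formulates gapless reconstruction as an optimization problem; the toy reachability check below only illustrates what a "gap" means in this setting: starting from seed metabolites, a reaction can fire only when all of its substrates are producible, and substrates that are never reached indicate gaps. The reaction and metabolite names are hypothetical.

# Toy illustration of gap detection in a metabolic network: expand the set of
# producible metabolites from the seeds; a reaction fires once all of its
# substrates are producible. (This is not the thesis's algorithm, only the
# underlying reachability idea.)
reactions = {                           # hypothetical reaction list
    "r1": ({"glc"}, {"g6p"}),
    "r2": ({"g6p"}, {"f6p"}),
    "r3": ({"f6p", "atp"}, {"fbp"}),    # needs atp, which nothing here produces
}
seeds = {"glc"}

producible = set(seeds)
changed = True
while changed:
    changed = False
    for substrates, products in reactions.values():
        if substrates <= producible and not products <= producible:
            producible |= products
            changed = True

gaps = {m for subs, _ in reactions.values() for m in subs} - producible
print("producible:", producible)                 # {'glc', 'g6p', 'f6p'}
print("unreachable substrates (gaps):", gaps)    # {'atp'}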

Relevance: 30.00%

Abstract:

This thesis studies human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurements of the expression of tens of thousands of genes simultaneously. In a single study, this data is traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, this data has been largely unavailable and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a new large ontology of human cell types, disease states, organism parts and cell lines. The ontology was used in a new text mining and decision tree based method for automatic conversion of human-readable free-text microarray data annotations into a categorised format. Data comparability, and the minimisation of the systematic measurement errors that are characteristic of each laboratory in this large cross-laboratory integrated dataset, was ensured by computation of a range of microarray data quality metrics and exclusion of incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology. A preface and motivation for the construction and analysis of a global map of human gene expression are given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression on a global level.
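
To make the exploratory step concrete, the sketch below runs principal component analysis followed by hierarchical clustering on a synthetic samples-by-genes matrix using scikit-learn and SciPy; the data, component count and linkage method are illustrative assumptions rather than the pipeline actually used for the transcriptome map.

# Illustrative sketch of the exploratory analysis described above: PCA followed
# by hierarchical clustering of a (here synthetic) samples-by-genes matrix.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(42)
# Synthetic expression data: 60 samples x 1000 genes, two sample groups.
expr = rng.normal(size=(60, 1000))
expr[:30, :50] += 3.0                                 # group-specific expression shift

scores = PCA(n_components=10).fit_transform(expr)     # reduce to 10 components
tree = linkage(scores, method="ward")                 # hierarchical clustering
clusters = fcluster(tree, t=2, criterion="maxclust")  # cut the tree into two clusters
print(clusters[:30], clusters[30:])                   # the two groups separate cleanly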

Relevance: 30.00%

Abstract:

Wireless access is expected to play a crucial role in the future of the Internet. The demands of the wireless environment are not always compatible with the assumptions that were made in the era of wired links. At the same time, new services that take advantage of advances in many areas of technology are being invented. These services include the delivery of mass media such as television and radio, Internet phone calls, and video conferencing. The network must be able to deliver these services to the end user with acceptable performance and quality. This thesis presents an experimental study measuring the performance of bulk data TCP transfers, streaming audio flows, and HTTP transfers which compete for the limited bandwidth of a GPRS/UMTS-like wireless link. The wireless link characteristics are modeled with a wireless network emulator. We analyze how different competing workload types behave with regular TCP and how active queue management, Differentiated Services (DiffServ), and a combination of TCP enhancements affect performance and quality of service. We test four link types, including an error-free link and links with different Automatic Repeat reQuest (ARQ) persistency. The analysis consists of comparing the resulting performance in different configurations based on defined metrics. We observed that DiffServ and Random Early Detection (RED) with Explicit Congestion Notification (ECN) are useful, and in some conditions necessary, for quality of service and fairness, because long queuing delays and congestion-related packet losses cause problems without DiffServ and RED. However, we observed situations where there is still room for significant improvement if the link level is aware of the quality of service. Only a very error-prone link diminishes the benefits to nil. The combination of TCP enhancements improves performance. These enhancements include an initial window of four, Control Block Interdependence (CBI) and Forward RTO recovery (F-RTO). The initial window of four helps a later-starting TCP flow to start faster but generates congestion under some conditions. CBI prevents slow-start overshoot and balances slow start in the presence of error drops, and F-RTO successfully reduces unnecessary retransmissions.
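
For readers unfamiliar with RED, the sketch below shows its core idea in simplified form: the router tracks an exponentially weighted moving average of the queue length and marks (or drops) arriving packets with a probability that rises linearly between a minimum and a maximum threshold, signalling congestion early via ECN. The thresholds and weight are illustrative defaults, not the emulator configuration used in the thesis, and the real algorithm adds refinements such as counting packets since the last mark.

# Simplified sketch of RED queue management with ECN marking (illustrative
# parameters only; not the configuration used in the thesis).
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight
        self.avg = 0.0

    def on_enqueue(self, current_qlen):
        # Update the moving average of the queue length.
        self.avg += self.weight * (current_qlen - self.avg)
        if self.avg < self.min_th:
            return "enqueue"                 # no congestion signal
        if self.avg >= self.max_th:
            return "mark"                    # persistent congestion
        # Marking probability grows linearly between the two thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return "mark" if random.random() < p else "enqueue"

q = RedQueue()
for qlen in (2, 8, 12, 20):
    print(qlen, q.on_enqueue(qlen))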

Relevance: 30.00%

Abstract:

Free and Open Source Software (FOSS) has gained increased interest in the computer software industry, but assessing its quality remains a challenge. FOSS development is frequently carried out by globally distributed development teams, and all stages of development are publicly visible. Several product- and process-level quality factors can be measured using the public data. This thesis presents a theoretical background for software quality and metrics and their application in a FOSS environment. Information available from FOSS projects in three information spaces is presented, and a quality model suitable for use in a FOSS context is constructed. The model includes both process and product quality metrics, and takes into account the tools and working methods commonly used in FOSS projects. A subset of the constructed quality model is applied to three FOSS projects, highlighting both theoretical and practical concerns in implementing automatic metric collection and analysis. The experiment shows that useful quality information can be extracted from the vast amount of data available. In particular, the projects vary in their growth rate, complexity, modularity and team structure.
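
As a minimal sketch of what automatic metric collection from public FOSS data can look like, the snippet below counts commits per author from a local git clone as a crude indicator of team structure; the repository path and the choice of metric are assumptions for illustration and not part of the quality model described above.

# Minimal sketch of automated process-metric collection from a FOSS project's
# public version control data (assumes a local git clone; commits per author
# is only a crude team-structure indicator).
import subprocess
from collections import Counter

def commits_per_author(repo_path):
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(out.splitlines())

for author, count in commits_per_author(".").most_common(5):
    print(f"{count:5d}  {author}")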

Relevance: 30.00%

Abstract:

A vast share of public services and goods is contracted through procurement auctions, and it is therefore very important to design these auctions in an optimal way. Typically, we are interested in two different objectives. The first objective is efficiency. Efficiency means that the contract is awarded to the bidder that values it the most, which in the procurement setting means the bidder that has the lowest cost of providing a service of a given quality. The second objective is to maximize public revenue. Maximizing public revenue means minimizing the costs of procurement. Both of these goals are important from the welfare point of view. In this thesis, I analyze field data from procurement auctions and show how empirical analysis can be used to help design auctions that maximize public revenue. In particular, I concentrate on how competition, meaning the number of bidders, should be taken into account in the design of auctions. In the first chapter, the main policy question is whether the auctioneer should spend resources to induce more competition. The information paradigm is essential in analyzing the effects of competition. We speak of a private values information paradigm when the bidders know their valuations exactly. In a common value information paradigm, information about the value of the object is dispersed among the bidders. With private values, more competition always increases public revenue, but with common values the effect of competition is uncertain. I study the effects of competition in the City of Helsinki bus transit market by conducting tests for common values. I also extend an existing test by allowing bidder asymmetry. The information paradigm seems to be that of common values. The bus companies that have garages close to the contracted routes are influenced more by the common value elements than those whose garages are further away. Therefore, attracting more bidders does not necessarily lower procurement costs, and thus the City should not implement costly policies to induce more competition. In the second chapter, I ask how the auctioneer can increase its revenue by changing contract characteristics such as contract sizes and durations. I find that the City of Helsinki should shorten the contract duration in the bus transit auctions, because that would decrease the importance of the common value components and cheaply increase entry, which would then have a more beneficial impact on public revenue. Typically, cartels decrease public revenue in a significant way. In the third chapter, I propose a new statistical method for detecting collusion and compare it with an existing test. I argue that my test is robust to unobserved heterogeneity, unlike the existing test. I apply both methods to procurement auctions for snow removal contracts at schools in Helsinki. According to these tests, the bidding behavior of two of the bidders seems consistent with a contract allocation scheme.
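
The contrast between the two information paradigms can be illustrated with a toy simulation: with private values, the expected lowest cost among bidders falls as more bidders enter, while with common values the most optimistic signal becomes increasingly over-optimistic as the number of bidders grows, so rational bidders must shade their bids upward by more and added competition need not lower the price. The distributions and noise level below are invented for illustration; this is not the econometric test used in the thesis.

# Stylized comparison of private-value and common-value procurement settings.
import numpy as np

rng = np.random.default_rng(7)
draws = 20_000

for n in (2, 4, 8):
    # Private values: each bidder's own cost, uniform on [50, 100].
    private_costs = rng.uniform(50, 100, size=(draws, n))
    exp_lowest_cost = private_costs.min(axis=1).mean()

    # Common value: one true cost per auction, each bidder sees a noisy signal.
    true_cost = rng.uniform(50, 100, size=(draws, 1))
    signals = true_cost + rng.normal(0, 10, size=(draws, n))
    winners_curse = (true_cost[:, 0] - signals.min(axis=1)).mean()

    print(f"n={n}: expected lowest private cost {exp_lowest_cost:5.1f}, "
          f"winner's optimism under common values {winners_curse:4.1f}")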

Relevance: 30.00%

Abstract:

The aims of this study were to describe current Finnish day surgery practice and to evaluate quality of care by assessing postdischarge minor morbidity and quality indicators. Potential treatment options were approached by investigating the role of oral dexamethasone as part of multimodal analgesia and the feasibility of day surgery in patients aged 65 years and older. Over a 2-month period, all patient cases at 14 Finnish day surgery or short-stay units were analyzed (Study I). Quality indicators included rates of and reasons for overnight admission, readmission, reoperation and cancellation, as well as patient satisfaction. Recovery during the first postoperative week was assessed at two units (Study II). Altogether 2732 patients graded the intensity of predefined symptoms daily. Multinomial regression analysis was used to define risk factors for postdischarge symptoms. Sixty patients scheduled to undergo day surgery for hallux valgus were randomized to receive dexamethasone 9 mg or placebo twice perioperatively (Study III). Paracetamol 1 g was administered 3 times daily. Rescue medication (oxycodone) consumption during postoperative days (PODs) 0-3, maximal pain scores and adverse effects were documented. Medically stable patients aged 65 years or older, scheduled for open inguinal hernia repair, were randomized to receive treatment either as day cases or as inpatients (Study IV). Complications, unplanned admissions, healthcare visits, and patients’ acceptance of the type of care provided were assessed during 2 weeks postoperatively. In Study I, unplanned overnight admissions were reported in 5.9%, return hospital visits during PODs 1-28 in 3.7%, and readmissions in 0.7% of patients. Patient satisfaction was high. In Study II, pain was the most common symptom in adult patients (57%). Postdischarge symptoms were more frequent in adults aged < 40 years, children aged ≥ 7 years and females, and following a longer duration of surgery. In Study III, the total median (range) oxycodone consumption during the study period was 45 (0–165) mg in the dexamethasone group, compared with 78 (15–175) mg in the placebo group (P < 0.049). On PODs 0-1, patients in the dexamethasone group reported significantly lower pain scores. Following inguinal hernia repair, no significant differences in outcome measures were seen between the study groups. Patient satisfaction was equally high in day cases and inpatients (Study IV). Finnish day surgery units provide good-quality services. Minor postdischarge symptoms are common, and they are influenced by several patient-, surgery- and anesthesia-related factors. Oral dexamethasone combined with paracetamol improves pain relief and reduces the need for oxycodone rescue medication following correction of hallux valgus. Day surgery for open inguinal hernia repair is safe and well accepted by patients aged 65 years or older, and can be recommended as the primary choice of care for medically stable patients.

Relevance: 30.00%

Abstract:

Rheumatoid arthritis (RA) and other chronic inflammatory joint diseases already begin to affect patients' health-related quality of life (HRQoL) in the earliest phases of these diseases. In the treatment of inflammatory joint diseases, the last two decades have seen new strategies and treatment options introduced. Treatment is started at an earlier phase; combinations of disease-modifying anti-rheumatic drugs (DMARDs) and corticosteroids are used; and in refractory cases new drugs such as tumour necrosis factor (TNF) inhibitors or other biologicals can be started. In patients newly referred to the Department of Rheumatology of the Helsinki University Central Hospital, we evaluated the 15D and the Stanford Health Assessment Questionnaire (HAQ) results at baseline and approximately 8 months after the first visit. Altogether, the analysis included 295 patients with various rheumatic diseases. The mean baseline 15D score (0.822, SD 0.114) was significantly lower than that of the age-matched general population (0.903, SD 0.098). Patients with osteoarthritis (OA) and spondyloarthropathies (SPA) reported the poorest HRQoL. In patients with RA and reactive arthritis (ReA), HRQoL improved in a statistically significant manner during the 8-month follow-up. In addition, a clinically important change appeared in patients with systemic rheumatic diseases. The HAQ score improved significantly in patients with RA, arthralgia and fibromyalgia, and ReA. In a study of 97 RA patients treated either with etanercept or adalimumab, we assessed their HRQoL with the RAND 36-Item Health Survey 1.0 (RAND-36) questionnaire. We also analysed changes in clinical parameters and the HAQ. With both etanercept and adalimumab, the values of all domains of the RAND-36 questionnaire increased during the first 3 months. The efficacy of each in improving HRQoL was statistically significant, and the drug effects were comparable. Compared to Finnish age- and sex-matched general population values, the HRQoL of the RA patients was significantly lower at baseline and, despite the improvement, remained lower also at follow-up. Our RA patients had long-standing and severe disease, which can explain the low HRQoL also at follow-up. In a pharmacoeconomic study of patients treated with infliximab, we evaluated medical and work disability costs for patients with chronic inflammatory joint disease during one year before and one year after institution of infliximab treatment. Clinical and economic data for 96 patients with different arthritis diagnoses showed, in all patients, significantly improved clinical and laboratory variables. However, the medical costs increased significantly during the second period, by €12 015 (95% confidence interval, €6 496 to €18 076). Only a minimal decrease in work disability costs occurred (mean decrease €130; 95% CI, -€1 268 to €1 072). In a study involving a switch from infliximab to etanercept, we investigated the clinical outcome in 49 patients with RA. The reasons for switching were failure to respond by American College of Rheumatology (ACR) 50% criteria in 42%, an adverse event in 12%, and non-medical reasons in 46%, although these patients had responded to infliximab. The Disease Activity Score with 28 joints examined (DAS28) allowed us to measure patients' disease activity and compare outcomes between groups based on the reason for switching.
In the patients in whom infliximab was switched to etanercept for non-medical reasons, etanercept continued to suppress disease activity effectively, and the 1-year drug survival for etanercept was 77% (95% CI, 62 to 97). In the patients in the infliximab failure and adverse event groups, DAS28 values improved significantly during etanercept therapy. However, the 1-year drug survival of etanercept was only 43% (95% CI, 26 to 70) and 50% (95% CI, 33 to 100), respectively. Although the HRQoL of patients with inflammatory joint diseases is significantly lower than that of the general population, the use of early and aggressive treatment strategies, including TNF inhibitors, can improve patients' HRQoL effectively. Further research is needed to find new treatment strategies for those patients who fail to respond or lose their response to TNF inhibitors.

Relevance: 30.00%

Abstract:

There is a growing need to understand the exchange processes of momentum, heat and mass between an urban surface and the atmosphere, as they affect our quality of life. Understanding the source/sink strengths as well as the mixing mechanisms of air pollutants is particularly important due to their effects on human health and climate. This work aims to improve our understanding of these surface-atmosphere interactions based on the analysis of measurements carried out in Helsinki, Finland. The vertical exchange of momentum, heat, carbon dioxide (CO2) and aerosol particle number was measured with the eddy covariance technique at the urban measurement station SMEAR III, where the concentrations of ultrafine, accumulation mode and coarse particle numbers, nitrogen oxides (NOx), carbon monoxide (CO), ozone (O3) and sulphur dioxide (SO2) were also measured. These measurements were carried out over varying measurement periods between 2004 and 2008. In addition, black carbon mass concentration was measured at the Helsinki Metropolitan Area Council site during three campaigns in 1996-2005. The analyzed dataset thus covered by far the most comprehensive long-term measurements of turbulent fluxes reported in the literature from urban areas. Moreover, simultaneously measured urban air pollution concentrations and turbulent fluxes were examined for the first time. The complex measurement surroundings enabled us to study the effect of different urban covers on the exchange processes from a single point of measurement. The sensible and latent heat fluxes closely followed the intensity of solar radiation, and the sensible heat flux always exceeded the latent heat flux due to anthropogenic heat emissions and the conversion of solar radiation to direct heat in urban structures. This urban heat island effect was most evident during winter nights. The effect of land use cover was seen as increased sensible heat fluxes in more built-up areas compared with areas with high vegetation cover. Both aerosol particle and CO2 exchanges were largely affected by road traffic, and the highest diurnal fluxes reached 10⁹ m⁻² s⁻¹ and 20 µmol m⁻² s⁻¹, respectively, in the direction of the road. Local road traffic had the greatest effect on ultrafine particle concentrations, whereas meteorological variables were more important for accumulation mode and coarse particle concentrations. The measurement surroundings of the SMEAR III station served as a source for both particles and CO2, except in summer, when the vegetation uptake of CO2 exceeded the anthropogenic sources in the vegetation sector in the daytime and we observed a downward median flux of 8 µmol m⁻² s⁻¹. This work improved our understanding of the interactions between an urban surface and the atmosphere in a city located at high latitudes in a semi-continental climate. The results can be utilised in urban planning, as the fraction of vegetation cover and vehicular activity were found to be the major environmental drivers affecting most of the exchange processes. However, in order to understand these exchange and mixing processes on a city scale, more measurements above various urban surfaces, accompanied by numerical modelling, are required.
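
The eddy covariance technique mentioned above derives the turbulent flux as the covariance of the fluctuations of vertical wind speed and the scalar of interest over an averaging period; the sketch below applies this to synthetic 10 Hz data. The numbers are invented, not SMEAR III measurements, and conversion to µmol m⁻² s⁻¹ would additionally require the molar density of air.

# Illustrative eddy covariance calculation: the turbulent flux is the covariance
# of vertical wind speed and scalar concentration fluctuations over an
# averaging period (synthetic 10 Hz data, 30 min).
import numpy as np

rng = np.random.default_rng(3)
n = 10 * 60 * 30                           # 10 Hz for 30 minutes
w = rng.normal(0.0, 0.3, n)                # vertical wind speed (m/s)
c = 400 + 0.5 * w + rng.normal(0, 0.2, n)  # CO2 (ppm), partly correlated with w

w_prime = w - w.mean()                     # fluctuations around the means
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)          # covariance = kinematic CO2 flux
# Multiplying by the dry air molar density would give the flux in umol m-2 s-1.
print(f"Kinematic CO2 flux: {flux:.3f} ppm m/s (upward if positive)")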

Relevance: 30.00%

Abstract:

This study analyses personal relationships, linking research to sociological theory on the questions of the social bond and of the self as social. From the viewpoint of disruptive life events and experiences, such as loss, divorce and illness, it aims at understanding how selves are bound to their significant others as those specific people ‘close or otherwise important’ to them. Who forms the configurations of significant others? How do different bonds respond to disruptions, and how do relational processes unfold? How is the embeddedness of selves manifested in the processes of bonding, on the one hand, and in the relational formation of the self, on the other? The bonds are analyzed from an anti-categorical viewpoint based on personal citations of significance, as opposed to given relationship categories such as ‘family’ or ‘friendship’ – the two kinds of relationships that in fact are most frequently significant. The study draws from an analysis of the personal narratives of 37 Finnish women and men (in all 80 interviews) and their entire configurations of those specific people who they cite as ‘close or otherwise important’. The analysis stresses subjective experiences, while also investigating the actualized relational processes and configurations of all personal relationships with certain relationship histories embedded in micro-level structures. The research is based on four empirical sub-studies of personal relationships and a summary discussing the questions of the self and the social bond. The discussion draws on G. H. Mead, C. Cooley, N. Elias, T. Scheff, G. Simmel and contributors to ‘relational sociology’. The sub-studies analyse bonds to others from the viewpoints of biographical disruption and re-configuration of significant others, estranged family bonds, peer support, and the formation of the most intimate relationships into exclusive and inclusive configurations. All analyses examine the dialectics of the social and the personal, asking how different structuring mechanisms and personal experiences and negotiations together contribute to the unfolding of the bonds. The summary elaborates personal relationships as social bonds embedded in wider webs of interdependent people and social settings that are laden with cultural expectations. Regarding the question of the relational self, the study proposes both bonding and individuality as significant. They are seen as interdependent phases of the relationality of the self. Bonding anchors the self to its significant relationships, in which individuality is manifested, for example, in contrasting and differentiating dynamics, but also in active attempts to connect with others. Individuality is not a fixed quality of the self, but a fluid and interdependent phase of the relational self. More specifically, it appears in three forms in the flux of relational processes: as a sense of a unique self (via the cultivation of subjective experiences), as agency, and as (a search for) relative autonomy. The study includes an epilogue addressing the ambivalence between the social expectation of individuality in society and the bonded reality of selves.