892 results for performance studies
Abstract:
Background: Bipolar disorder is a highly heritable polygenic disorder. Recent enrichment analyses suggest that there may be true risk variants for bipolar disorder among expression quantitative trait loci (eQTL) in the brain. Aims: We sought to assess the impact of eQTL variants on bipolar disorder risk by combining data from both bipolar disorder genome-wide association studies (GWAS) and brain eQTL. Method: To detect single nucleotide polymorphisms (SNPs) that influence expression levels of genes associated with bipolar disorder, we jointly analysed data from a bipolar disorder GWAS (7481 cases and 9250 controls) and a genome-wide brain (cortical) eQTL data set (193 healthy controls) using a Bayesian statistical method, with independent follow-up replications. The identified risk SNP was then further tested for association with hippocampal volume (n = 5775) and cognitive performance (n = 342) among healthy individuals. Results: Integrative analysis revealed a significant association between a brain eQTL, rs6088662, on chromosome 20q11.22 and bipolar disorder (log Bayes factor = 5.48; bipolar disorder P = 5.85×10⁻⁵). Follow-up studies across multiple independent samples confirmed the association of the risk SNP (rs6088662) with gene expression and bipolar disorder susceptibility (P = 3.54×10⁻⁸). Further exploratory analysis revealed that rs6088662 is also associated with hippocampal volume and cognitive performance in healthy individuals. Conclusions: Our findings suggest that 20q11.22 is likely a risk region for bipolar disorder; they also highlight the informative value of integrating functional annotation of genetic variants for gene expression in advancing our understanding of the biological basis underlying complex disorders such as bipolar disorder.
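The reported log Bayes factor can be read as a weight of evidence: multiplying the prior odds of association by the Bayes factor gives the posterior odds. A minimal sketch of that conversion follows, assuming a natural-log Bayes factor and a purely hypothetical prior; neither assumption comes from the study itself.

```python
import math

def posterior_odds(log_bayes_factor: float, prior_odds: float, base: float = math.e) -> float:
    """Posterior odds = Bayes factor * prior odds (Bayes' rule in odds form)."""
    bayes_factor = base ** log_bayes_factor
    return bayes_factor * prior_odds

# Illustration only: the reported log Bayes factor for rs6088662 was 5.48.
# The log base and the prior odds below are assumptions, not values from the study.
print(posterior_odds(5.48, prior_odds=1 / 1000))  # ~0.24 if the log is natural
```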
Abstract:
Objective: To evaluate the performance of diagnostic centers in the classification of mammography reports from an opportunistic screening undertaken by the Brazilian public health system (SUS) in the municipality of Goiânia, GO, Brazil, in 2010. Materials and Methods: The present ecological study analyzed data reported to the Sistema de Informação do Controle do Câncer de Mama (SISMAMA) (Breast Cancer Management Information System) by the diagnostic centers involved in the mammographic screening developed by the SUS. Based on the frequency of mammograms per BI-RADS® category and on the limits established for the present study, the authors calculated the rate of conformity for each diagnostic center. Diagnostic centers with equal rates of conformity were considered to have equal performance. Results: Fifteen diagnostic centers performed mammographic studies for the SUS and reported 31,198 screening mammograms. Regarding BI-RADS classification, none of the centers was in conformity for all categories: one center presented conformity in five categories, two centers in four, three centers in three, two centers in two, four centers in one, and three centers in none. Conclusion: The results of the present study demonstrate unevenness in the diagnostic centers' performance in the classification of mammograms reported to SISMAMA from the opportunistic screening undertaken by the SUS.
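A minimal sketch of the per-category conformity check described above: the fraction of reports in each BI-RADS category is compared against an acceptable range, and a centre's performance is the number of categories in conformity. The ranges and counts below are hypothetical placeholders, not the limits established in the study.

```python
# Hypothetical illustration of the per-category conformity check.
# The acceptable ranges below are placeholders, NOT the limits used in the study.
acceptable_range = {          # fraction of screening reports per BI-RADS category
    "0": (0.00, 0.10),
    "1": (0.05, 0.40),
    "2": (0.30, 0.70),
    "3": (0.00, 0.10),
    "4": (0.00, 0.05),
    "5": (0.00, 0.02),
}

def conformity(counts: dict[str, int]) -> tuple[int, int]:
    """Return (number of conforming categories, total categories) for one centre."""
    total = sum(counts.values())
    conforming = 0
    for category, (lo, hi) in acceptable_range.items():
        fraction = counts.get(category, 0) / total
        if lo <= fraction <= hi:
            conforming += 1
    return conforming, len(acceptable_range)

print(conformity({"0": 50, "1": 400, "2": 1200, "3": 80, "4": 30, "5": 5}))  # (6, 6)
```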
Abstract:
BACKGROUND: For the past decade, 18F-fluoro-ethyl-L-tyrosine (FET) and 18F-fluoro-deoxy-glucose (FDG) positron emission tomography (PET) have been used for the assessment of patients with brain tumor. However, direct comparison studies have reported only limited numbers of patients. Our purpose was to compare the diagnostic performance of FET and FDG-PET. METHODS: We examined studies published between January 1995 and January 2015 in the PubMed database. To be included, a study had to (i) use FET and FDG-PET for the assessment of patients with an isolated brain lesion and (ii) use histology as the gold standard. Analysis was performed on a per-patient basis. Study quality was assessed with STARD and QUADAS criteria. RESULTS: Five studies (119 patients) were included. For the diagnosis of brain tumor, FET-PET demonstrated a pooled sensitivity of 0.94 (95% CI: 0.79-0.98) and pooled specificity of 0.88 (95% CI: 0.37-0.99), with an area under the curve of 0.96 (95% CI: 0.94-0.97), a positive likelihood ratio (LR+) of 8.1 (95% CI: 0.8-80.6), and a negative likelihood ratio (LR-) of 0.07 (95% CI: 0.02-0.30), while FDG-PET demonstrated a sensitivity of 0.38 (95% CI: 0.27-0.50) and specificity of 0.86 (95% CI: 0.31-0.99), with an area under the curve of 0.40 (95% CI: 0.36-0.44), an LR+ of 2.7 (95% CI: 0.3-27.8), and an LR- of 0.72 (95% CI: 0.47-1.11). Target-to-background ratios of either FDG or FET, however, did not allow distinction between low- and high-grade gliomas (P > .11). CONCLUSIONS: For brain tumor diagnosis, FET-PET performed much better than FDG-PET and should be preferred when assessing a new isolated brain tumor. For glioma grading, however, both tracers showed similar performance.
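The likelihood ratios quoted above follow from the standard definitions LR+ = sensitivity / (1 - specificity) and LR- = (1 - sensitivity) / specificity. A short sketch using the rounded pooled estimates reproduces the published figures approximately:

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """Standard definitions: LR+ = sens / (1 - spec), LR- = (1 - sens) / spec."""
    lr_plus = sensitivity / (1.0 - specificity)
    lr_minus = (1.0 - sensitivity) / specificity
    return lr_plus, lr_minus

# Using the pooled point estimates reported above (rounded to two decimals),
# so the results only approximate the published LR+ of 8.1 and LR- of 0.07.
print(likelihood_ratios(0.94, 0.88))  # FET-PET -> (~7.8, ~0.07)
print(likelihood_ratios(0.38, 0.86))  # FDG-PET -> (~2.7, ~0.72)
```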
Abstract:
Performance standards for positron emission tomography (PET) were developed to make it possible to compare systems from different generations and manufacturers. This resulted in the NEMA methodology in North America and the IEC methodology in Europe. In practice, NEMA NU 2-2001 is the method of choice today. These standardized methods allow assessment of the physical performance of new commercial dedicated PET/CT tomographs. The point spread in image formation is one of the factors that blur the image; the phenomenon is often called the partial volume effect. Several methods for correcting for partial volume are under research, but no real agreement exists on how to solve the problem. The influence of the effect varies in different clinical settings, and it is likely that new methods are needed. Most clinical PET work is done in the field of oncology, where whole-body PET combined with CT is the standard investigation today. Despite progress in PET imaging technique, visualization and especially quantification of small lesions remain a challenge. In addition to partial volume, movement of the object is a significant source of error; the main causes of movement are respiratory and cardiac motion. Most new commercial scanners are capable of respiratory gating in addition to cardiac gating, and this technique has been used in patients with cancer of the thoracic region and in patients studied for radiation therapy planning. For routine cardiac applications, such as assessment of viability and perfusion, only cardiac gating has been used. However, new targets such as plaque imaging or molecular imaging of new therapies require better control of cardiac motion, including the component caused by respiratory motion. To overcome these problems in cardiac work, a dual gating approach has been proposed. In this study we investigated the physical performance of a new whole-body PET/CT scanner with the NEMA standard, compared methods for partial volume correction in PET studies of the brain, and developed and tested a new robust method for dual cardiac-respiratory gated PET with phantom, animal and human data. Results from the performance measurements showed the feasibility of the new scanner design in 2D and 3D whole-body studies. Partial volume was corrected, but there was no single best method among those tested, as the correction also depends on the radiotracer and its distribution; new methods need to be developed for proper correction. The dual gating algorithm developed is shown to handle dual-gated data, preserving quantification and clearly eliminating the majority of contraction and respiration movement.
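As an illustration of the partial volume problem mentioned above, one simple and widely used correction divides the measured activity by a recovery coefficient obtained from phantom spheres of known size. The sketch below is generic, with a hypothetical recovery-coefficient table, and is not one of the specific methods compared in the thesis.

```python
# Minimal sketch of recovery-coefficient (RC) based partial volume correction.
# The RC table below is hypothetical; in practice it would come from phantom
# measurements on the specific scanner and reconstruction settings.
recovery_coefficients = {  # sphere diameter (mm) -> measured/true activity ratio
    10: 0.45,
    13: 0.60,
    17: 0.75,
    22: 0.85,
    28: 0.92,
    37: 0.97,
}

def correct_for_partial_volume(measured_activity: float, lesion_diameter_mm: float) -> float:
    """Divide the measured activity by the RC of the closest phantom sphere size."""
    closest = min(recovery_coefficients, key=lambda d: abs(d - lesion_diameter_mm))
    return measured_activity / recovery_coefficients[closest]

print(correct_for_partial_volume(measured_activity=3.2, lesion_diameter_mm=12))  # ~5.3
```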
Abstract:
Score-based biotic indices are widely used to evaluate the water quality of streams and rivers. Few adaptations of these indices have been made for South America because there is a lack of knowledge on macroinvertebrate taxonomy, distribution and tolerance to pollution in the region. Several areas in the Andes are densely populated, and there is a need for methods to assess the impact of increasing human pressures on aquatic ecosystems. Considering the unique ecological and geographical features of the Andes, macroinvertebrate indices used in other regions must be adapted with caution. Here we present a review of the literature on macroinvertebrate distribution and tolerance to pollution in Andean areas above 2,000 m a.s.l. Using these data, we propose an Andean Biotic Index (ABI), which is based on the BMWP index. In general, the ABI includes fewer macroinvertebrate families than BMWP adaptations used in other regions of the world because altitude restricts the distribution of several families. Our review shows that in the high Andes, the tolerance of several macroinvertebrate families to pollution differs from that reported in other areas. We tested the ABI in two basins in Ecuador and Peru and compared it to other BMWP adaptations using the reference condition approach. The ABI is extremely useful for detecting the general impairment of rivers, but class quality boundaries should be defined independently for each basin because reference conditions may differ. The ABI is widely used in Ecuador and Peru, with high correlations with land-use pressures in several studies. The ABI is an integral part of the new multimetric index designed for high Andean streams (IMEERA). Rev. Biol. Trop. 62 (Suppl. 2): 249-273. Epub 2014 April 01.
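As a sketch of how a BMWP-type score such as the ABI is computed, each macroinvertebrate family recorded at a site contributes a fixed tolerance score and the scores are summed; higher totals indicate better water quality. The family scores below are placeholders, not the published ABI values.

```python
# Sketch of a BMWP-type score: each family found at a site contributes its
# tolerance score once, and the scores are summed.
# The scores below are placeholders, NOT the published ABI family scores.
family_scores = {
    "Perlidae": 10,
    "Leptophlebiidae": 8,
    "Hydropsychidae": 5,
    "Baetidae": 4,
    "Chironomidae": 2,
    "Oligochaeta": 1,
}

def abi_score(families_present: set[str]) -> int:
    """Sum the tolerance scores of the families recorded at the site."""
    return sum(family_scores.get(family, 0) for family in families_present)

site_a = {"Perlidae", "Baetidae", "Hydropsychidae"}
site_b = {"Chironomidae", "Oligochaeta"}
print(abi_score(site_a), abi_score(site_b))  # higher score -> better water quality
```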
Abstract:
Research on performance measurement and management has focused mainly on the design, implementation and use of performance measurement systems. However, there is little evidence about the actual impacts of performance measurement on the different levels of business and operations of organisations, or about the underlying factors that lead to a positive impact of performance measurement. The study thus focuses on this research gap, which can be considered both important and challenging to cover. The first objective of the study was to examine the impacts of performance measurement on different aspects of management, leadership and the quality of working life, after which the factors that facilitate and improve performance and performance measurement at the operative level of an organisation were examined. The second objective was to study how these factors operate in practice. The third objective focused on the construction of a framework for successful operative-level performance measurement and the utilisation of these factors in organisations. The research objectives were studied through six research papers utilising empirical data from three separate studies, comprising two sets of interview data and one set of quantitative data. The study applies mainly the hermeneutical research approach. As a contribution of the study, a framework for successful operative-level performance measurement was formed by matching the findings of the current study with performance measurement theory. The study extends prior research regarding the impacts of performance measurement and the factors that have a positive effect on operative-level performance and performance measurement. The results indicate that, under suitable circumstances, performance measurement has positive impacts on different aspects of management, leadership and the quality of working life. The results also reveal, for example, that employees and management perceive the impacts of performance measurement on leadership style quite differently. Furthermore, the fragmented literature has been reorganised into six factors that facilitate and improve the performance of operations and employees and the use of performance measurement at the operative level of an organisation. Regarding the managerial implications of the study, managers who work with performance measurement can utilise the framework, for example, by putting its different phases into practice.
Abstract:
Two experimental studies evaluated the effect of changes in aerobic and membrane aeration on sludge properties, biological nutrient removal and filtration processes in a pilot-plant membrane bioreactor. The optimal operating conditions were found at an aerobic dissolved oxygen (DO) set-point of 0.5 mg O2 L⁻¹ and a membrane specific aeration demand (SADm) of 1 m h⁻¹, where membrane aeration can be used for nitrification. Under these conditions, a total flow reduction of 42% was achieved (75% energy reduction) without compromising nutrient removal efficiencies, while maintaining sludge characteristics and controlled filtration. Below these optimal operating conditions, nutrient removal efficiency was reduced, soluble microbial products increased by 20%, capillary suction time increased by 14% and filterability decreased by 15%. Below this DO set-point, fouling increased, with a transmembrane pressure 75% higher. A SADm below 1 m h⁻¹ doubled the transmembrane pressure values, without recovery after restoring the initial conditions.
Abstract:
A rapid HPLC analytical method was developed and validated for the determination of the N-phenylpiperazine derivative LASSBio-579 in rat plasma. Analyses were performed using a C18 column and elution with 20 mM sodium dihydrogen phosphate monohydrate - methanol. The analyte was monitored using a photodiode array detector (257 nm). Calibration curves in spiked plasma were linear over the concentration range of 0.3-8 mg/mL, with a determination coefficient > 0.99. The lower limit of quantification was 0.3 mg/mL. The applicability of the HPLC method to pharmacokinetic studies was tested using plasma samples obtained after administration of LASSBio-579 to Wistar rats, demonstrating the specificity of the method.
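A minimal sketch of the calibration step described above: a straight line is fitted to spiked-plasma standards and unknown concentrations are back-calculated from peak area. The peak areas are hypothetical; only the 0.3-8 mg/mL range is taken from the abstract.

```python
# Sketch of the calibration step: fit a straight line to spiked-plasma standards
# and back-calculate unknown concentrations from peak area. The peak areas below
# are hypothetical; only the 0.3-8 mg/mL range comes from the abstract.
import numpy as np

standards_conc = np.array([0.3, 0.5, 1.0, 2.0, 4.0, 8.0])           # mg/mL
standards_area = np.array([15.1, 25.3, 50.8, 101.2, 203.5, 405.9])  # detector response

slope, intercept = np.polyfit(standards_conc, standards_area, deg=1)
r = np.corrcoef(standards_conc, standards_area)[0, 1]
print(f"r^2 = {r**2:.4f}")  # should exceed 0.99 for an acceptable curve

def concentration(peak_area: float) -> float:
    """Back-calculate concentration (mg/mL) from the fitted line."""
    return (peak_area - intercept) / slope

print(concentration(120.0))
```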
Abstract:
The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complicated substance that is recalcitrant to most treatment technologies, seriously hampers waste management in the pulp and paper industry. Therefore, lignin degradation by wet oxidation (WO) is studied here. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since these are of particular importance for any subsequent biological treatment. In most cases wet oxidation is not used as a complete mineralization method but as a pre-treatment in order to eliminate toxic components and to reduce the high load of organics. The combination of wet oxidation with a biological treatment can be a good option due to its effectiveness and its relatively low technology cost. The literature part gives an overview of Advanced Oxidation Processes (AOPs). A hot oxidation process, wet oxidation, is investigated in detail and is the AOP used in the research. The background and main principles of wet oxidation, its industrial applications, the combination of wet oxidation with other water treatment technologies, the principal reactions in WO, and key aspects of modelling and reaction kinetics are presented. Wood composition and lignin characterization (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters are also described. The aim of the research was to investigate the effect of the WO operating conditions, such as temperature, partial pressure of oxygen, pH and initial concentration of the wastewater, on the efficiency, and to enhance the process and estimate optimal conditions for WO of recalcitrant lignin waters. Two different waters were studied (a lignin water model solution and debarking water from the paper industry) to give as representative conditions as possible. Because of the great importance of reusing and minimizing industrial residues, further research was carried out using residual ash from an Estonian power plant as a catalyst in wet oxidation of lignin-containing water. Developing a kinetic model that includes parameters such as TOC in the prediction makes it possible to estimate the amount of emerging inorganic substances (the degradation rate of the waste) and not only the decrease of COD and BOD. The target compound, lignin, is included in the model through its COD value (CODlignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters, or for other wastewaters containing one or more target compounds. In the first article, wet oxidation of "pure" lignin water was investigated as a model case with the aim of degrading lignin and enhancing the biodegradability of the water. The experiments were performed at various temperatures (110-190°C), partial oxygen pressures (0.5-1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the process efficiency: 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190°C. The effect of temperature on the COD removal rate was smaller but clearly detectable; 53% of the organics were oxidized at 190°C. The effect of pH was seen mostly in lignin removal: increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved.
The aim of the second article was to develop a mathematical model for wet oxidation of "pure" lignin water using lumped characteristics of the water (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R2 = 0.93 at pH 5 and 12), and the concentration changes during wet oxidation followed the experimental results adequately. The model also correctly showed the trend of biodegradability (BOD/COD) changes. In the third article, the purpose of the research was to estimate optimal conditions for wet oxidation of debarking water from the paper industry. The WO experiments were performed at various temperatures, partial oxygen pressures and pH values. The experiments showed that lignin degradation and organics removal are affected remarkably by temperature and pH: 78-97% lignin reduction was detected under different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in organic substance concentrations occurred in the first 60 minutes. The aim of the fourth article was to compare the behaviour of two reaction kinetic models, based on experiments of wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results of the two models, the second model was found to be more suitable for describing the kinetics of wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products. Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO showed good performance: using 1 g/L of residual ash gave lignin removal of 86% and COD removal of 39% at 150°C (a lower temperature and pressure than with WO). It was noted that the ash catalyst caused a remarkable lignin degradation rate already during pre-heating; at 'zero' time, 58% of the lignin was degraded. In general, wet oxidation is not recommended for use as a complete mineralization method, but as a pre-treatment phase to eliminate toxic or poorly biodegradable components and to reduce the high load of organics. Biological treatment is an appropriate post-treatment method, since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with subsequent biological treatment can be an effective option for the treatment of lignin-containing waters.
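A minimal sketch of a lumped first-order kinetic scheme of the kind discussed in the second and fourth articles: lignin, tracked through its COD contribution, degrades to other chemically oxidizable compounds, which in turn become biodegradable matter. The rate constants and initial values are assumptions for illustration, not the fitted parameters of the dissertation's model.

```python
# Lumped first-order kinetic sketch for wet oxidation: CODlignin -> other
# oxidizable COD -> biodegradable matter (tracked as BOD). All rate constants
# and initial values are hypothetical, not the model fitted in the thesis.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.020, 0.008  # 1/min, assumed rate constants

def rhs(t, y):
    cod_lignin, cod_other, bod = y
    d_lignin = -k1 * cod_lignin                  # lignin oxidized to intermediates
    d_other = k1 * cod_lignin - k2 * cod_other   # intermediates formed, then degraded
    d_bod = k2 * cod_other                       # biodegradable fraction accumulates
    return [d_lignin, d_other, d_bod]

sol = solve_ivp(rhs, t_span=(0, 120), y0=[600.0, 400.0, 50.0],
                t_eval=np.linspace(0, 120, 5))
for t, lig, oth, bod in zip(sol.t, *sol.y):
    print(f"t={t:5.1f} min  COD_lignin={lig:6.1f}  COD_other={oth:6.1f}  BOD={bod:6.1f}")
```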
Abstract:
Virtually every cell and organ in the human body depends on a proper oxygen supply. This is taken care of by the cardiovascular system, which supplies tissues with oxygen precisely according to their metabolic needs. Physical exercise is one of the most demanding challenges the human circulatory system can face. During exercise, skeletal muscle blood flow can easily increase some 20-fold, and its proper distribution to and within muscles is important for optimal oxygen delivery. The local regulation of skeletal muscle blood flow during exercise remains little understood, but adenosine and nitric oxide may take part in this process. In addition to acute exercise, long-term vigorous physical conditioning also induces changes in the cardiovasculature, which lead to improved maximal physical performance. The changes are largely central, such as structural and functional changes in the heart. The function and reserve of the heart's own vasculature can be studied by adenosine infusion, which according to animal studies evokes vasodilation via its A2A receptors. This has, however, never been addressed in humans in vivo, and studies in endurance athletes have shown inconsistent results regarding the effects of sports training on myocardial blood flow. This study was performed on healthy young adults and endurance athletes, and local skeletal and cardiac muscle blood flow was measured by positron emission tomography. In the heart, myocardial blood flow reserve and adenosine A2A receptor density were measured; in skeletal muscle, oxygen extraction and consumption were also measured. The role of adenosine in the control of skeletal muscle blood flow during exercise, and its vasodilator effects, were addressed by infusing competitive inhibitors and adenosine into the femoral artery. The formation of nitric oxide in skeletal muscle was also inhibited by a drug, with and without prostanoid blockade. As a result and conclusion, it can be said that skeletal muscle blood flow heterogeneity decreases with increasing exercise intensity, most likely due to increased vascular unit recruitment, but exercise hyperemia is a very complex phenomenon that cannot be mimicked by pharmacological infusions, and no single regulatory factor (e.g. adenosine or nitric oxide) accounts for a significant part of exercise-induced muscle hyperemia. However, in the present study it was observed for the first time in humans that nitric oxide is an important regulator not only of the basal level of muscle blood flow but also of oxygen consumption, and together with prostanoids it affects muscle blood flow and oxygen consumption during exercise. Finally, even vigorous endurance training does not seem to lead to a supranormal myocardial blood flow reserve, and receptors other than A2A also mediate the vasodilator effects of adenosine. With respect to cardiac work, the athlete's heart seems to be luxuriously perfused at rest, which may result from reduced oxygen extraction or impaired efficiency due to the pronouncedly enhanced myocardial mass developed to excel in strenuous exercise.
Abstract:
The purpose of this study is to view credit risk from the financier's point of view within a theoretical framework. Results and aspects of previous studies on measuring credit risk with accounting-based scoring models are also examined. The theoretical framework and previous studies are then used to support the empirical analysis, which aims to develop a credit risk measure for a bank's internal use, or a risk management tool a company can use to indicate its credit risk to the financier. The study covers a sample of Finnish companies from 12 different industries and four different company categories and employs their accounting information from 2004 to 2008. The empirical analysis consists of a six-stage methodology that uses measures of profitability, liquidity, capital structure and cash flow to determine the financier's credit risk, define five significant risk classes and produce a risk classification model. The study is confidential until 15.10.2012.
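Since the model itself is confidential, the sketch below only illustrates the general shape of an accounting-based risk classification: a handful of profitability, liquidity, capital structure and cash flow ratios are combined into a score that is binned into five risk classes. All weights, ratios and boundaries are invented for the example, not taken from the study.

```python
# Generic sketch of an accounting-based risk score: weight a few financial ratios
# into a single score and bin it into five risk classes. Weights, ratios and
# class boundaries are illustrative assumptions, not the confidential model.
weights = {
    "return_on_assets": 3.0,    # profitability
    "quick_ratio": 1.5,         # liquidity
    "equity_ratio": 2.0,        # capital structure
    "cash_flow_to_debt": 2.5,   # cash flow
}
class_boundaries = [0.5, 1.0, 1.5, 2.0]  # score cut-offs between classes 5..1

def risk_class(ratios: dict[str, float]) -> int:
    """Return a risk class from 1 (lowest risk) to 5 (highest risk)."""
    score = sum(weights[name] * ratios.get(name, 0.0) for name in weights)
    risk = 5
    for boundary in class_boundaries:
        if score >= boundary:
            risk -= 1
    return risk

print(risk_class({"return_on_assets": 0.08, "quick_ratio": 1.1,
                  "equity_ratio": 0.45, "cash_flow_to_debt": 0.30}))  # -> 1
```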
Abstract:
The paper industry is constantly looking for new ideas for improving paper products while competition and raw material prices are increasing. Many paper products are pigment coated. The coating layer is the top layer of the paper; thus, by modifying the coating pigment, the paper itself can be altered and value added to the final product. In this thesis, the synthesis of new plastic and hybrid pigments and their performance in paper and paperboard coating is reported. Two types of plastic pigments were studied: core-shell latexes and solid beads of maleimide copolymers. Core-shell latexes with a partially crosslinked hydrophilic polymer core of poly(n-butyl acrylate-co-methacrylic acid) and a hard hydrophobic polystyrene shell were prepared to improve the optical properties of coated paper. In addition, the effect of different crosslinkers was analyzed, and the best overall performance was achieved with ethylene glycol dimethacrylate (EGDMA). Furthermore, the possibility of modifying the core-shell latex was investigated by introducing a new polymerizable optical brightening agent, 1-[(4-vinylphenoxy)methyl]-4-(2-phenylethylenyl)benzene, which gave promising results. The prepared core-shell latex pigments also performed smoothly in pilot coating and printing trials. The results demonstrated that by optimizing the polymer composition, the optical and surface properties of coated paper can be significantly enhanced. Optimal reaction conditions were established for the thermal imidization of poly(styrene-co-maleimide) (SMI) and poly(octadecene-co-maleimide) (OMI) from the respective maleic anhydride copolymer precursors and ammonia in a solvent-free process. The obtained aqueous dispersions of nanoparticle copolymers exhibited glass transition temperatures (Tg) between 140 and 170 °C and particle sizes from 50 to 230 nm. Furthermore, the maleimide copolymers were evaluated as additional pigments in paperboard coating. The maleimide copolymer nanoparticles were partly embedded in the porous coating structure, and therefore the full potential for enhancing the optical properties of the paperboard was not achieved with this method. The possibility of modifying the maleimide copolymers was also studied. Modifications were carried out via N-substitution by replacing part of the ammonia in the imidization reaction with amines such as triacetonediamine (TAD), aspartic acid (ASP) and fluorinated amines (2,2,2-trifluoroethylamine, TFEA, and 2,2,3,3,4,4,4-heptafluorobutylamine, HFBA). The obtained functional nanoparticles varied in size between 50 and 217 nm and in Tg from 150 to 180 °C. During the coating process, the produced plastic pigments exhibited good runnability. No significant improvements in light stability were achieved with the TAD-modified copolymers, whereas nanoparticles modified with aspartic acid and those containing fluorinated groups showed the desired changes in the surface properties of the coated paperboard. Finally, preliminary studies with organic-inorganic hybrids are reported. The hybrids, prepared by an in situ polymerization reaction, consisted of 30 wt% poly(styrene-co-maleimide) (SMI) and high levels (70 wt%) of the inorganic components kaolin and/or alumina trihydrate. Scanning Electron Microscopy (SEM) images and characterization by Fourier Transform Infrared Spectroscopy (FTIR) and X-Ray Diffraction (XRD) revealed that the hybrids had a conventional composite structure in which the inorganic components were covered with precipitated SMI nanoparticles attached to the surface via hydrogen bonding.
In paper coating, the hybrids had a beneficial effect on increasing gloss levels.
Abstract:
In this study the recently developed concept of strategic entrepreneurship was addressed with the aim of investigating the underlying factors and components constituting the concept and their influence on firm performance. Based on an analysis of the existing literature and empirical studies, a model of strategic entrepreneurship was developed for the current study, with an emphasis on the exploration and exploitation parts of the concept. The research model was tested on data collected in the project "Factors of growth and success of entrepreneurial firms in Russia" by the Center for Entrepreneurship of GSOM in 2007, containing answers from owners and managers of 500 firms operating in St. Petersburg and Moscow. Multiple regression analysis showed that exploration and exploitation, represented by entrepreneurial values, investments in internal resources, knowledge management and developmental changes, are significant factors constituting strategic entrepreneurship and are positively related to firm performance. The theoretical contribution of the work lies in the development and testing of the model of strategic entrepreneurship. The results can be applied in the management practices of companies willing to engage in strategic entrepreneurship and increase their firm performance.
Abstract:
The purpose of this study is to examine the ability of macroeconomic indicators and technical analysis to signal market crashes. The macroeconomic indicators examined were the yield spread, the Purchasing Managers' Index and the Consumer Confidence Index. The technical analysis indicators were the moving average, Moving Average Convergence-Divergence (MACD) and the Relative Strength Index (RSI). We studied whether commonly used macroeconomic indicators can also serve as a warning system for stock market crashes. The hypothesis is that signals of recession can be used as signals of a stock market crash and thereby as the basis for a hedging strategy. The data are collected from the U.S. markets for the years 1983-2010. The empirical results show that the macroeconomic indicators were able to explain future GDP development in the U.S. over the research period and were statistically significant. A hedging strategy that combined the signals of the yield spread and the Consumer Confidence Index gave the most useful results as a basis for a hedging strategy in the selected time period. It was able to outperform the buy-and-hold strategy as well as all of the technical-indicator-based hedging strategies.
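For reference, the three technical indicators named above have standard textbook definitions; the sketch below computes them with conventional default window lengths, which are not necessarily the parameters used in the study.

```python
# Standard textbook definitions of the three technical indicators, computed
# with pandas; the window lengths are conventional defaults, not necessarily
# the parameters used in the study.
import pandas as pd

def moving_average(prices: pd.Series, window: int = 200) -> pd.Series:
    return prices.rolling(window).mean()

def macd(prices: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    macd_line = (prices.ewm(span=fast, adjust=False).mean()
                 - prices.ewm(span=slow, adjust=False).mean())
    signal_line = macd_line.ewm(span=signal, adjust=False).mean()
    return macd_line, signal_line

def rsi(prices: pd.Series, window: int = 14) -> pd.Series:
    delta = prices.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)

prices = pd.Series([100, 101, 99, 102, 103, 105, 104, 106,
                    108, 107, 109, 111, 110, 112, 114, 113])
print(rsi(prices).iloc[-1], macd(prices)[0].iloc[-1])
```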
Abstract:
Longitudinal studies are quite rare in the area of Operations Management (OM). One reason might be the time needed to conduct such studies, and the consequent lack of experience, real-life examples and results. The aim of the thesis is to examine longitudinal studies in the area of OM and the possible advantages, challenges and pitfalls of such studies. A longitudinal benchmarking study, Made in Finland, was analyzed in terms of the study methodology and its outcomes. The timeline of this longitudinal study is interesting: the first study was made in 1993, the second in 2004 and the third in 2010. Between these studies, some major changes occurred in the Finnish business environment. Between the first and second studies, Finland joined the European Economic Area and the EU, and globalization took off with the rise of the Internet era, while between the second and third studies the financial turmoil that began in 2007 took place. The sample used in this study originally consisted of 23 manufacturing sites in Finland. These sites were interviewed in 1993, 2004 and 2010. One important and interesting aspect is that all the original sites participated in 2004, and 19 sites were still able to participate in 2010; four sites had been closed and/or moved abroad. All of this gave a good opportunity to study the changes that occurred in the Finnish manufacturing sites and their environment, how they reacted to these changes, and the effects on their performance. It is very seldom, if ever, that the same manufacturing sites have been studied in a longitudinal setting using three data points. The results of this study are thus unique, and the experience gained is valuable for practitioners.