15 results for improving standards
in Helda - Digital Repository of the University of Helsinki
Abstract:
Drug Analysis without Primary Reference Standards: Application of LC-TOFMS and LC-CLND to Biofluids and Seized Material
Primary reference standards for new drugs, metabolites, designer drugs or rare substances may not be obtainable within a reasonable period of time, or their availability may be hindered by extensive administrative requirements. Standards are usually costly and may have a limited shelf life. Finally, many compounds are not available commercially, and some are not available at all. A new approach within forensic and clinical drug analysis involves substance identification based on accurate mass measurement by liquid chromatography coupled with time-of-flight mass spectrometry (LC-TOFMS) and quantification by LC coupled with chemiluminescence nitrogen detection (LC-CLND), which possesses an equimolar response to nitrogen. Formula-based identification relies on the fact that the accurate mass of an ion from a chemical compound corresponds to the elemental composition of that compound. Single-calibrant nitrogen-based quantification is feasible with a nitrogen-specific detector, since approximately 90% of drugs contain nitrogen. A method was developed for toxicological drug screening in 1 ml urine samples by LC-TOFMS. A large target database of exact monoisotopic masses was constructed, representing the elemental formulae of reference drugs and their metabolites. Identification was based on matching the sample component's measured parameters with those in the database, including accurate mass and retention time, if available. In addition, an algorithm for isotopic pattern match (SigmaFit) was applied. Differences in ion abundance in urine extracts did not affect the mass accuracy or the SigmaFit values. For routine screening practice, a mass tolerance of 10 ppm and a SigmaFit tolerance of 0.03 were established. Seized street-drug samples were analysed directly by LC-TOFMS and LC-CLND, using a dilute-and-shoot approach.
In the quantitative analysis of amphetamine, heroin and cocaine findings, the mean relative difference between the results of LC-CLND and the reference methods was only 11%. In blood specimens, liquid-liquid extraction recoveries for basic lipophilic drugs were first established and the validity of the generic extraction recovery-corrected single-calibrant LC-CLND was then verified with proficiency test samples. The mean accuracy was 24% and 17% for plasma and whole blood samples, respectively, all results falling within the confidence range of the reference concentrations. Further, metabolic ratios for the opioid drug tramadol were determined in a pharmacogenetic study setting. Extraction recovery estimation, based on model compounds with similar physicochemical characteristics, produced clinically feasible results without reference standards.
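The two numerical rules above, the 10 ppm mass tolerance for formula-based identification and the single-calibrant equimolar nitrogen response of CLND, can be sketched as follows. This is a minimal illustration, not the thesis software; the function names and the response-factor handling are assumptions.

```python
# Minimal sketch of the two ideas (assumed names, not the thesis software).

def ppm_error(measured_mz, theoretical_mz):
    """Relative mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

def matches(measured_mz, theoretical_mz, tol_ppm=10.0):
    """Formula-based identification: accept a database candidate only if
    the measured accurate mass is within the tolerance (10 ppm in routine use)."""
    return abs(ppm_error(measured_mz, theoretical_mz)) <= tol_ppm

def clnd_concentration(peak_area, response_per_mol_n, n_atoms, molar_mass_g_per_mol):
    """Single-calibrant CLND quantification: the detector responds (approximately)
    equimolarly to nitrogen, so one nitrogen-containing calibrant fixes the
    response factor for any analyte whose formula (hence nitrogen count) is known."""
    mol_n = peak_area / response_per_mol_n          # moles of nitrogen detected
    mol_analyte = mol_n / n_atoms                   # moles of the analyte
    return mol_analyte * molar_mass_g_per_mol       # mass of analyte, in grams

# Protonated amphetamine (C9H13N + H) has a theoretical m/z of about 136.1121;
# a measurement of 136.1130 is roughly 6.6 ppm off and would be accepted.
```

A SigmaFit-style isotopic-pattern score would act as a second filter on top of the mass match; it is omitted here because its definition is vendor-specific.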
Abstract:
Although the treatment of most cancers has improved steadily, only a few metastatic solid tumors can be cured. Despite initial responses, resistant clones often emerge and the disease becomes refractory to the available treatment modalities. Furthermore, resistance factors are shared between different treatment regimens, so loss of response typically occurs rapidly and there is a tendency for cross-resistance between agents. Therefore, new agents with novel mechanisms of action, lacking cross-resistance to currently available approaches, are needed. Modified oncolytic adenoviruses, featuring cancer-selective cell lysis and spread, constitute an interesting drug platform towards the goals of tumor specificity and the implementation of potent multimodal treatment regimens. In this work, we demonstrate the applicability of capsid-modified, transcriptionally targeted oncolytic adenoviruses in targeting gastric, pancreatic and breast cancer. A variety of capsid-modified adenoviruses were tested for transductional specificity, first in gastric and pancreatic cancer cells and patient tissues and then in mice. Oncolytic viruses featuring the same capsid modifications were then tested to confirm that successful transductional targeting translates into enhanced oncolytic potential. Capsid-modified oncolytic viruses also prolonged the survival of tumor-bearing orthotopic models of gastric and pancreatic cancer. Taken together, oncolytic adenoviral gene therapy could be a potent drug for gastric and pancreatic cancer, and its specificity, potency and safety can be modulated by means of capsid modification. We also characterized a new intraperitoneal virus delivery method designed to improve the persistence of gene delivery to intraperitoneal gastric and pancreatic cancer tumors. With a silica implant, steady and sustained virus release in the vicinity of the tumor improved the survival of orthotopic tumor-bearing mice.
Furthermore, silica gel-based virus delivery lowered the toxicity-mediating proinflammatory cytokine response and the production of total and anti-adenovirus neutralizing antibodies (NAbs). On the other hand, silica shielded the virus against pre-existing NAbs, resulting in a more favourable biodistribution in preimmunized mice. The silica implant might therefore be of interest in treating intraperitoneally disseminated disease. Cancer stem cells are thought to be resistant to conventional cancer drugs and might play an important role in cancer relapse and the formation of metastases. Therefore, we examined whether transcriptionally modified oncolytic adenoviruses are able to kill these cells. Complete eradication of CD44+CD24-/low putative breast cancer stem cells was seen in vitro, and significant antitumor activity was detected in mice bearing CD44+CD24-/low-derived tumors. Thus, genetically engineered oncolytic adenoviruses have potential in destroying cancer-initiating cells, which may have relevance for the elimination of cancer stem cells in humans.
Abstract:
When widely metastatic, cancer is a devastating disease with a poor prognosis and no curative treatment. Conventional therapies, such as chemotherapy and radiotherapy, have efficacy but are not curative, and their systemic toxicity can be considerable. Almost all cancers are caused by changes in the genetic material of the transformed cells. Cancer gene therapy has emerged as a new treatment option, and the past decades have brought new insights into the development of new therapeutic drugs for cancer. Oncolytic viruses constitute a novel therapeutic approach given their capacity to replicate in and specifically kill tumor cells, as well as to reach distant metastases. Adenoviral gene therapy has been suggested to cause liver toxicity. This study shows that newly developed adenoviruses, in particular Ad5/19p-HIT, can be redirected towards the kidney while adenovirus uptake by the liver remains minimal. Moreover, low liver transduction resulted in a favorable tumor-to-liver ratio of virus load. Further, we established a new immunocompetent animal model, the Syrian hamster. Wild-type adenovirus 5 was found to replicate in Hap-T1 hamster tumors and normal tissues. No antiviral drugs are available to inhibit adenovirus replication. In our study, chlorpromazine and cidofovir efficiently abrogated virus replication in vitro and significantly reduced it in vivo in tumors and liver. With safety concerns addressed by these new antiviral treatment options, we further improved oncolytic adenoviruses for better tumor penetration, local amplification and host-system modulation. We created Ad5/3-9HIF-Δ24-VEGFR-1-Ig, an oncolytic adenovirus with improved infectivity and an antiangiogenic effect for the treatment of renal cancer. This virus exhibited an increased anti-tumor effect and specific replication in kidney cancer cells. A key determinant of the efficacy of oncolytic virotherapy is the host immune response.
Thus, we engineered a triple-targeted adenovirus, Ad5/3-hTERT-E1A-hCD40L, intended to achieve tumor elimination through tumor-specific oncolysis and apoptosis together with an anti-tumor immune response prompted by the immunomodulatory molecule. In conclusion, the results presented in this thesis advance our understanding of oncolytic virotherapy through successful tumor targeting, antiviral treatment options as a safety switch in case of replication-associated side effects, and modulation of the host immune system towards tumor elimination.
Abstract:
Accurate and stable time series of geodetic parameters can be used to help understand the dynamic Earth and its response to global change. The Global Positioning System, GPS, has proven invaluable in modern geodynamic studies. In Fennoscandia the first GPS networks were set up in 1993. These networks form the basis of the national reference frames in the area, but they also provide long and important time series for crustal deformation studies. These time series can be used, for example, to better constrain the ice history of the last ice age and the Earth's structure, via existing glacial isostatic adjustment models. To improve the accuracy and stability of the GPS time series, the possible nuisance parameters and error sources need to be minimized. We have analysed GPS time series to study two phenomena: first, the refraction of the GPS signal in the neutral atmosphere, and, second, the surface loading of the crust by environmental factors, namely the non-tidal Baltic Sea, the atmospheric load and varying continental water reservoirs. We studied the atmospheric effects on the GPS time series by comparing the standard method with slant delays derived from a regional numerical weather model. We have presented a method for correcting the atmospheric delays at the observational level. The results show that both standard atmosphere modelling and atmospheric delays derived from a numerical weather model by ray-tracing provide a stable solution. The advantage of the latter is that the number of unknowns used in the computation decreases; thus, the computation may become faster and more robust. The computation can also be done with any processing software that allows the atmospheric correction to be turned off. The crustal deformation due to loading was computed by convolving Green's functions with surface load data, that is to say, global hydrology models, global numerical weather models and a local model for the Baltic Sea.
The loading signals can indeed be seen in the GPS coordinate time series. Subtracting the computed deformation from the vertical GPS coordinate time series reduces their scatter; however, the long-term trends are not influenced. We show that global hydrology models and the local sea surface can explain up to 30% of the variation in the GPS time series. On the other hand, the admittance of atmospheric loading in the GPS time series is low, and the different hydrological surface load models could not be validated in the present study. In order to be used for GPS corrections in the future, both the atmospheric loading and the hydrological models need further analysis and improvement.
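The convolution of Green's functions with surface load data can be sketched as a discrete sum over load cells. The kernel below is a toy placeholder, not a real elastic loading Green's function (those are tabulated from an Earth model, e.g. Farrell-type load Love numbers), and the magnitudes it produces are meaningless; only the qualitative behaviour (subsidence under load, decay with distance) is illustrated.

```python
import math

def angular_distance(lat1, lon1, lat2, lon2):
    """Great-circle angular distance in radians between two points (degrees in)."""
    p1, l1, p2, l2 = map(math.radians, (lat1, lon1, lat2, lon2))
    return math.acos(min(1.0, max(-1.0,
        math.sin(p1) * math.sin(p2) +
        math.cos(p1) * math.cos(p2) * math.cos(l1 - l2))))

def toy_greens_function(psi):
    """Placeholder vertical-displacement kernel (NOT real Farrell values):
    negative (downward) response that decays with angular distance."""
    return -1.0e-12 / (psi + 1.0e-3)

def vertical_deformation(site, load_cells):
    """Convolve the kernel with point-mass loads given as (lat, lon, mass_kg)."""
    lat0, lon0 = site
    return sum(toy_greens_function(angular_distance(lat0, lon0, lat, lon)) * mass
               for lat, lon, mass in load_cells)
```

In practice the load cells would come from gridded hydrology, weather and sea-level models, and the site displacement series would be subtracted from the GPS vertical time series as described above.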
Abstract:
One of the central issues in making efficient use of IT in the design, construction and maintenance of buildings is the sharing of digital building data across disciplines and lifecycle stages. One technology which enables data sharing is CAD layering, which requires agreed standards to be of real use. This paper focuses on the background, objectives and effectiveness of the international standard ISO 13567, Organisation and naming of layers for CAD. The focus is on the efficiency and effectiveness of the standardisation and standard-implementation process, rather than on the technical details. The study was conducted as a qualitative study with a number of experts who responded to a semi-structured mail questionnaire, supplemented by personal interviews. The main results were that CAD layer standards based on the ISO standard have been implemented, particularly in northern European countries, but are not very widely used. A major problem identified was the lack of resources for marketing and implementing the standard, in its national variants, once it had been formally accepted.
Abstract:
Markov random fields (MRF) are popular in image processing applications for describing spatial dependencies between image units. Here, we take a look at the theory and models of MRFs, with an application to improving forest inventory estimates. Typically, autocorrelation between study units is a nuisance in statistical inference, but we take advantage of the dependencies to smooth noisy measurements by borrowing information from the neighbouring units. We build a stochastic spatial model, which we estimate with a Markov chain Monte Carlo simulation method. The smoothed values are validated against another data set, increasing our confidence that the estimates are more accurate than the originals.
Improving outcome of childhood bacterial meningitis by simplified treatment : Experience from Angola
Abstract:
Background: Acute bacterial meningitis (BM) continues to be an important cause of childhood mortality and morbidity, especially in developing countries. Prognostic scales and the identification of risk factors for adverse outcome both aid in assessing disease severity. New antimicrobial agents and adjunctive treatments - except for oral glycerol - have essentially failed to improve the prognosis of BM. A retrospective observational analysis found paracetamol beneficial in adult bacteraemic patients, and some experts recommend slow β-lactam infusion. We examined these treatments in a prospective, double-blind, placebo-controlled clinical trial. Patients and methods: A retrospective analysis included 555 children treated for BM in 2004 in the infectious disease ward of the Paediatric Hospital of Luanda, Angola. Our prospective study randomised 723 children into four groups, receiving cefotaxime as an infusion or as boluses every 6 hours for the first 24 hours, combined with oral paracetamol or placebo for 48 hours. The primary endpoints were 1) death or severe neurological sequelae (SeNeSe), and 2) deafness. Results: In the retrospective study, the mortality of children receiving a blood transfusion was 23% (30 of 128) vs. 39% (109 of 282) without one (p=0.004). In the prospective study, 272 (38%) of the children died. Of the 451 survivors, 68 (15%) showed SeNeSe, and 12% (45 of 374) were deaf. Whereas no difference between treatment groups was observable in the primary endpoints, early mortality in the infusion-paracetamol group was lower, the difference (Fisher's exact test) from the other groups at 24, 48 and 72 hours being significant (p=0.041, 0.0005 and 0.005, respectively). Prognostic factors for adverse outcome were impaired consciousness, dyspnoea, seizures, delayed presentation and absence of electricity at home (Simple Luanda Scale, SLS); the Bayesian Luanda Scale (BLS) also included abnormally low or high blood glucose.
Conclusions: New studies are warranted concerning the possible beneficial effect of blood transfusion and concerning longer treatment with cefotaxime infusion and oral paracetamol, as well as a study to validate our simple prognostic scales.
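The group comparisons above rely on Fisher's exact test. The sketch below shows the standard two-sided version for a 2x2 table, applied here to the retrospective transfusion figures (30/128 deaths with transfusion vs. 109/282 without); it illustrates the test itself and is not the study's analysis code.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same margins
    that are no more probable than the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(k):                      # probability that cell (1,1) equals k
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Died/survived by transfusion status: 30 of 128 vs. 109 of 282.
p = fisher_exact_two_sided(30, 98, 109, 173)
```

The resulting p-value is of the same order as the reported p=0.004 (the thesis may have used a different test variant, so exact agreement is not guaranteed).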
Abstract:
Gene mapping is a systematic search for genes that affect the observable characteristics of an organism. In this thesis we offer computational tools to improve the efficiency of (disease) gene-mapping efforts. In the first part of the thesis we propose an efficient simulation procedure for generating realistic genetic data from isolated populations. Simulated data are useful for evaluating hypothesised gene-mapping study designs and computational analysis tools. As an example of such evaluation, we demonstrate how a population-based study design can be a powerful alternative to traditional family-based designs in association-based gene-mapping projects. In the second part of the thesis we consider the prioritisation of a (typically large) set of putative disease-associated genes acquired from an initial gene-mapping analysis. Prioritisation is necessary in order to focus on the most promising candidates. We show how to harness current biomedical knowledge for the prioritisation task by integrating various publicly available biological databases into a weighted biological graph. We then demonstrate how to find and evaluate connections between entities, such as genes and diseases, in this unified schema by graph mining techniques. Finally, in the last part of the thesis, we define the concept of a reliable subgraph and the corresponding subgraph extraction problem. Reliable subgraphs concisely describe strong and independent connections between two given vertices in a random graph, and hence they are especially useful for visualising such connections. We propose novel algorithms for extracting reliable subgraphs from large random graphs. The efficiency and scalability of the proposed graph mining methods are backed by extensive experiments on real data. While our application focus is in genetics, the concepts and algorithms can be applied to other domains as well.
We demonstrate this generality by considering coauthor graphs in addition to biological graphs in the experiments.
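The notion of connection strength between two vertices in a random graph can be made concrete as two-terminal network reliability: the probability that the vertices are connected when each edge exists independently with its own probability. The Monte Carlo estimator below illustrates the quantity a reliable subgraph aims to preserve; it is a sketch with assumed names, not the thesis algorithms (which extract a small subgraph rather than merely estimate the probability).

```python
import random

def connection_probability(edges, s, t, n_samples=2000, seed=1):
    """edges: list of (u, v, p_edge). Estimate P(s and t are connected) by
    repeatedly sampling a realisation of the random graph and checking
    s-t connectivity with a depth-first search."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        adj = {}                                  # sample one realisation
        for u, v, p in edges:
            if rng.random() < p:
                adj.setdefault(u, []).append(v)
                adj.setdefault(v, []).append(u)
        seen, stack = {s}, [s]                    # DFS from s
        while stack:
            u = stack.pop()
            for w in adj.get(u, []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        hits += t in seen
    return hits / n_samples
```

A reliable subgraph is then a small edge subset whose two-terminal reliability stays close to that of the full graph, which is what makes it a faithful, visualisable summary of the gene-disease connections.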
Abstract:
This thesis attempts to improve the models used for predicting forest stand structure for practical purposes, e.g. forest management planning (FMP), in Finland. Comparisons were made between the Weibull and Johnson's SB distributions and between alternative regression estimation methods. The data used for the preliminary studies were local, but the final models were based on representative data. Models were validated mainly in terms of the bias and RMSE of the main stand characteristics (e.g. volume) using independent data. The bivariate SBB distribution model was used to mimic realistic variation in tree dimensions by including within-diameter-class height variation. With the traditional method, applying the diameter distribution with the expected height resulted in reduced height variation, whereas the alternative bivariate method utilized the error term of the height model. The lack of models for FMP was covered to some extent by the models for peatland and juvenile stands. The validation of these models showed that the more sophisticated regression estimation methods provided slightly improved accuracy. A flexible prediction and application framework for stand structure consisted of seemingly unrelated regression models for eight stand characteristics, the parameters of three optional distributions and Näslund's height curve. The cross-model covariance structure was used in a linear prediction application, in which the expected values of the models were calibrated with the known stand characteristics. This provided a framework for validating the optional distributions and the optional sets of stand characteristics. The height distribution is recommended for the earliest stages of a stand because height is a continuous variable even for the smallest trees. From a mean height of about 4 m onwards, the Weibull dbh-frequency distribution is recommended in young stands if the input variables consist of arithmetic stand characteristics. In advanced stands, basal area-dbh distribution models are recommended. Näslund's height curve proved useful.
Some efficient transformations of stand characteristics are introduced, e.g. the shape index, which combines the basal area, the stem number and the median diameter. The shape index enabled the SB model for peatland stands to capture the large variation in stand densities. This model also behaved reasonably for stands on mineral soils.
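As an illustration of how a predicted dbh-frequency distribution yields stand characteristics, the sketch below integrates a two-parameter Weibull over 1 cm diameter classes to obtain basal area per hectare. The parameter values are invented for illustration and do not come from the thesis models.

```python
import math

def weibull_pdf(d, shape, scale):
    """Two-parameter Weibull density, here over tree dbh in centimetres."""
    if d <= 0:
        return 0.0
    z = d / scale
    return (shape / scale) * z ** (shape - 1) * math.exp(-z ** shape)

def basal_area_per_ha(stems_per_ha, shape, scale, d_max=80.0, step=1.0):
    """Sum over 1 cm diameter classes: class stem frequency times the
    basal area of a single tree of the class midpoint diameter."""
    g = 0.0
    d = step / 2.0                                       # class midpoint, cm
    while d < d_max:
        freq = stems_per_ha * weibull_pdf(d, shape, scale) * step
        g += freq * math.pi * (d / 200.0) ** 2           # pi*(d/2 in m)^2
        d += step
    return g                                             # m^2 per hectare

# Illustrative stand: 900 stems/ha, Weibull shape 3.0, scale 18.0 cm.
g = basal_area_per_ha(stems_per_ha=900, shape=3.0, scale=18.0)
```

Going the other way, recovering the Weibull parameters from predicted stand characteristics such as basal area and median diameter, is the parameter-recovery step that the regression models in the thesis address.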
Abstract:
People with coeliac disease have to maintain a gluten-free diet, which means excluding wheat, barley and rye prolamin proteins from their diet. Immunochemical methods are used to analyse the harmful proteins and to control the purity of gluten-free foods. In this thesis, the behaviour of prolamins in immunological gluten assays and with different prolamin-specific antibodies was examined. The immunoassays were also used to detect residual rye prolamins in sourdough systems after enzymatic hydrolysis and wheat prolamins after deamidation. The aim was to characterize the ability of the gluten analysis assays to quantify different prolamins in varying matrices in order to improve the accuracy of the assays. Prolamin groups of cereals consist of a complex mixture of proteins that vary in their size and amino acid sequences. Two common characteristics distinguish prolamins from other cereal proteins. Firstly, they are soluble in aqueous alcohols, and secondly, most of the prolamins are mainly formed from repetitive amino acid sequences containing high amounts of proline and glutamine. The diversity among prolamin proteins sets high requirements for their quantification. In the present study, prolamin contents were evaluated using enzyme-linked immunosorbent assays based on ω- and R5 antibodies. In addition, assays based on A1 and G12 antibodies were used to examine the effect of deamidation on prolamin proteins. The prolamin compositions and the cross-reactivity of antibodies with prolamin groups were evaluated with electrophoretic separation and Western blotting. The results of this thesis research demonstrate that the currently used gluten analysis methods are not able to accurately quantify barley prolamins, especially when hydrolysed or mixed in oats. However, more precise results can be obtained when the standard more closely matches the sample proteins, as demonstrated with barley prolamin standards. The study also revealed that all of the harmful prolamins, i.e. 
wheat, barley and rye prolamins, are most efficiently extracted with 40% 1-propanol containing 1% dithiothreitol at 50 °C. The extractability of barley and rye prolamins was considerably higher with 40% 1-propanol than with 60% ethanol, which is typically used for prolamin extraction. The prolamin levels of rye were lowered by 99.5% from the original levels when an enzyme-active rye-malt sourdough system was used for prolamin degradation. Such extensive degradation of rye prolamins supports the use of sourdough as a part of gluten-free baking. Deamidation increases the diversity of prolamins and improves their solubility and their ability to form structures such as emulsions and foams. Deamidation changes the protein structure, which has consequences for antibody recognition in gluten analysis. According to the results of the present work, the analysis methods were not able to quantify wheat gluten after deamidation except at very high concentrations. Consequently, deamidated gluten peptides can exist in food products and remain undetected, and thus pose a risk to people with gluten intolerance. The results of this thesis demonstrate that current gluten analysis methods cannot accurately quantify prolamins in all food matrices. This thesis also provides new information on the prolamins of rye and barley in addition to wheat prolamins, which is essential for improving gluten analysis methods so that they can more accurately quantify prolamins from harmful cereals.