34 results for Timed and Probabilistic Automata
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
A protein of a biological sample is usually quantified by immunological techniques based on antibodies. Mass spectrometry offers alternative approaches that are not dependent on antibody affinity and avidity, protein isoforms, quaternary structures, or steric hindrance of antibody-antigen recognition in the case of multiprotein complexes. One approach is the use of stable isotope-labeled internal standards; another is the direct exploitation of mass spectrometric signals recorded by LC-MS/MS analysis of protein digests. Here we assessed the peptide match score summation index, based on probabilistic peptide scores calculated by the PHENYX protein identification engine, for absolute protein quantification in accordance with the protein abundance index as proposed by Mann and co-workers (Rappsilber, J., Ryder, U., Lamond, A. I., and Mann, M. (2002) Large-scale proteomic analysis of the human spliceosome. Genome Res. 12, 1231-1245). Using synthetic protein mixtures, we demonstrated that this approach works well, although proteins can have different response factors. Applied to high density lipoproteins (HDLs), this new approach compared favorably to alternative protein quantitation methods such as UV detection of protein peaks separated by capillary electrophoresis or quantitation of protein spots on SDS-PAGE. We compared the protein composition of a well defined HDL density class isolated from plasma of seven hypercholesterolemia subjects having low or high HDL cholesterol with HDL from nine normolipidemia subjects. The quantitative protein patterns distinguished individuals according to the corresponding concentration and distribution of cholesterol from serum lipid measurements of the same samples and revealed that hypercholesterolemia in unrelated individuals is the result of different deficiencies. The presented approach is complementary to HDL lipid analysis; does not rely on complicated sample treatment, e.g. chemical reactions, or antibodies; and can be used for prospective clinical studies of larger patient groups.
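The score-summation idea lends itself to a brief illustration. Below is a minimal sketch; the protein names and score values are invented, and the actual PHENYX scoring model and any response-factor calibration are not reproduced here:

```python
# Hypothetical illustration of score-summation quantification.

def pmss_index(peptide_scores):
    """Peptide match score summation: sum the probabilistic scores
    of all peptides matched to one protein."""
    return sum(peptide_scores)

def relative_abundance(protein_scores):
    """Convert per-protein score sums into relative abundances."""
    total = sum(protein_scores.values())
    return {p: s / total for p, s in protein_scores.items()}

# Example: three proteins with the scores of their matched peptides.
scores = {
    "APOA1": [92.1, 88.4, 75.0, 63.2],
    "APOA2": [81.7, 70.3],
    "APOC3": [55.0],
}
abundances = relative_abundance({p: pmss_index(s) for p, s in scores.items()})
print(abundances)
```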
Abstract:
BACKGROUND Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, like a social security number, is not available, or non-unique person-identifiable information, like names, is privacy protected and cannot be accessed. A solution to protect privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, the encrypted hash codes of two names differ completely if the plain names differ by only a single character. Therefore, standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS In this Privacy Preserving Probabilistic Record Linkage method we apply a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. Our method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. data structure) needed to create the templates without ever accessing plain person-identifiable information, we introduced a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with encrypted person-identifiable information and plain non-sensitive variables. RESULTS In this paper we describe step by step how to link existing health-related data using encryption methods to preserve the privacy of persons in the study. CONCLUSION Privacy Preserving Probabilistic Record Linkage expands record linkage facilities in settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. This method is suitable not just for epidemiological research but also for any setting with similar challenges.
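Bloom-filter encoding is what makes similarity computation possible on encrypted names. A minimal sketch of the general technique follows; the filter size, number of hash functions, and example names are illustrative assumptions, not the P3RL production settings:

```python
import hashlib

FILTER_BITS = 256  # assumed filter length
NUM_HASHES = 4     # assumed number of hash functions

def bigrams(name):
    """Split a padded, lower-cased name into overlapping bigrams."""
    padded = f"_{name.lower()}_"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

def bloom_encode(name):
    """Map each bigram to NUM_HASHES bit positions of a Bloom filter
    (represented here as a set of set-bit indices)."""
    bits = set()
    for gram in bigrams(name):
        for k in range(NUM_HASHES):
            digest = hashlib.sha256(f"{k}:{gram}".encode()).hexdigest()
            bits.add(int(digest, 16) % FILTER_BITS)
    return bits

def dice_similarity(a, b):
    # Similar names share most bigrams, so their filters overlap
    # even when single characters differ.
    return 2 * len(a & b) / (len(a) + len(b))

print(dice_similarity(bloom_encode("Meier"), bloom_encode("Meyer")))
```

Unlike a plain hash of the full name, this encoding degrades gracefully: a one-character difference changes only a few bigrams, so the Dice coefficient stays high.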
Abstract:
OBJECTIVE: To evaluate effects of racemic ketamine and S-ketamine in gazelles. ANIMALS: 21 male gazelles (10 Rheem gazelles [Gazella subgutturosa marica] and 11 subgutturosa gazelles [Gazella subgutturosa subgutturosa]), 6 to 67 months old and weighing (mean ± SD) 19 ± 3 kg. PROCEDURES: In a randomized, blinded crossover study, a combination of medetomidine (80 μg/kg) with racemic ketamine (5 mg/kg) or S-ketamine (3 mg/kg) was administered i.m. Heart rate, blood pressure, respiratory rate, rectal temperature, and oxygen saturation (determined by means of pulse oximetry) were measured. An evaluator timed and scored induction of, maintenance of, and recovery from anesthesia. Medetomidine was reversed with atipamezole. The alternate combination was used after a 4-day interval. Comparisons between groups were performed with Wilcoxon signed rank and paired t tests. RESULTS: Anesthesia induction was poor in 2 gazelles receiving S-ketamine, but other phases of anesthesia were uneventful. A dominant male required an additional dose of S-ketamine (0.75 mg/kg, i.m.). After administration of atipamezole, gazelles were uncoordinated for a significantly shorter period with S-ketamine than with racemic ketamine. Recovery quality was poor in 3 gazelles with racemic ketamine. No significant differences between treatments were found for any other variables. Time from drug administration to antagonism was similar between racemic ketamine (44.5 to 53.0 minutes) and S-ketamine (44.0 to 50.0 minutes). CONCLUSIONS AND CLINICAL RELEVANCE: Administration of S-ketamine at a dose 60% that of racemic ketamine resulted in poorer induction of anesthesia, an analogous degree of sedation, and better recovery from anesthesia in gazelles, with unremarkable alterations in physiologic variables, compared with racemic ketamine.
Abstract:
QUESTION UNDER STUDY The aim of this study was to evaluate the cost-effectiveness of ticagrelor and generic clopidogrel as add-on therapy to acetylsalicylic acid (ASA) in patients with acute coronary syndrome (ACS), from a Swiss perspective. METHODS Based on the PLATelet inhibition and patient Outcomes (PLATO) trial, one-year mean healthcare costs per patient treated with ticagrelor or generic clopidogrel were analysed from a payer perspective in 2011. A two-part decision-analytic model estimated treatment costs, quality-adjusted life years (QALYs), life years, and the cost-effectiveness of ticagrelor and generic clopidogrel in patients with ACS over up to a lifetime horizon, at a discount rate of 2.5% per annum. Sensitivity analyses were performed. RESULTS Over a patient's lifetime, treatment with ticagrelor generates an additional 0.1694 QALYs and 0.1999 life years at an additional cost of CHF 260 compared with generic clopidogrel. This results in an incremental cost-effectiveness ratio (ICER) of CHF 1,536 per QALY and CHF 1,301 per life year gained. Ticagrelor dominated generic clopidogrel over the five-year and one-year horizons, with treatment generating cost savings of CHF 224 and CHF 372 while gaining 0.0461 and 0.0051 QALYs as well as 0.0517 and 0.0062 life years, respectively. Univariate sensitivity analyses confirmed the dominance of ticagrelor in the first five years, and probabilistic sensitivity analyses showed a high probability of cost-effectiveness over a lifetime. CONCLUSION During the first five years after ACS, treatment with ticagrelor dominates generic clopidogrel in Switzerland. Over a patient's lifetime, ticagrelor is highly cost-effective compared with generic clopidogrel, as shown by ICERs well below commonly accepted willingness-to-pay thresholds.
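The reported ICERs follow directly from the incremental cost and incremental effect; a short arithmetic check using the lifetime figures quoted above:

```python
# ICER = incremental cost / incremental effect.
delta_cost = 260.0        # CHF, ticagrelor vs. generic clopidogrel
delta_qaly = 0.1694       # additional QALYs over a lifetime
delta_life_years = 0.1999  # additional life years over a lifetime

icer_per_qaly = delta_cost / delta_qaly          # ~CHF 1,535 per QALY
icer_per_ly = delta_cost / delta_life_years      # ~CHF 1,301 per life year
print(round(icer_per_qaly), round(icer_per_ly))
```

This reproduces CHF 1,301 per life year exactly; the per-QALY figure comes out at ~CHF 1,535, with the reported CHF 1,536 presumably reflecting unrounded model inputs.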
Abstract:
Derivation of probability estimates complementary to geophysical data sets has gained special attention in recent years. Information about the confidence level of provided physical quantities is required to construct an error budget of higher-level products and to correctly interpret the final results of a particular analysis. For the generation of products based on satellite data, a common input is a cloud mask, which allows discrimination between surface and cloud signals. Further, the surface information is divided between snow and snow-free components. At any step of this discrimination process, a misclassification in a cloud/snow mask propagates to higher-level products and may reduce their usability. Within this scope, a novel probabilistic cloud mask (PCM) algorithm suited for the 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data is proposed, which provides three types of probability estimates: cloudy/clear-sky, cloudy/snow and clear-sky/snow. In contrast to the majority of available techniques, which are usually based on a decision-tree approach, the PCM algorithm uses all spectral, angular and ancillary information in a single step to retrieve probability estimates from precomputed look-up tables (LUTs). Moreover, the problem of deriving a single threshold value for each spectral test is avoided by the concept of a multidimensional information space, which is divided into small bins by an extensive set of intervals. The discrimination between snow and ice clouds and the detection of broken, thin clouds were enhanced by means of an invariant coordinate system (ICS) transformation. The study area covers a wide range of environmental conditions, spanning from Iceland through central Europe to the northern parts of Africa, which pose diverse difficulties for cloud/snow masking algorithms. The retrieved PCM cloud classification was compared to the Polar Platform System (PPS) version 2012 and Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 cloud masks, SYNOP (surface synoptic observations) weather reports, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask version 3, and the MODIS collection 5 snow mask. The analyses demonstrated the good detection skill of the PCM method, with results comparable to or better than the reference PPS algorithm.
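The LUT-based retrieval can be sketched schematically. In the sketch below the feature space, bin edges, and per-bin counts are invented for illustration; the real PCM LUTs span many spectral, angular, and ancillary dimensions:

```python
import numpy as np

# Two illustrative features: a reflectance and a brightness temperature.
edges = [np.linspace(0.0, 1.0, 11), np.linspace(220.0, 320.0, 21)]

# Precomputed per-bin counts of cloudy and clear training pixels
# (random here; in reality derived from labelled observations).
cloudy_counts = np.random.default_rng(0).integers(1, 100, (10, 20))
clear_counts = np.random.default_rng(1).integers(1, 100, (10, 20))

def cloud_probability(refl, bt):
    """Look up the empirical cloudy/clear probability for one pixel."""
    i = int(np.clip(np.searchsorted(edges[0], refl, side="right") - 1, 0, 9))
    j = int(np.clip(np.searchsorted(edges[1], bt, side="right") - 1, 0, 19))
    n_cloud, n_clear = cloudy_counts[i, j], clear_counts[i, j]
    # Probability estimate from the occupancy of the selected bin.
    return n_cloud / (n_cloud + n_clear)

print(cloud_probability(0.55, 255.0))
```

Because the probability is read from the joint bin rather than from a chain of per-test thresholds, no single threshold value ever has to be fixed.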
Abstract:
We investigated whether a pure perceptual stream is sufficient for probabilistic sequence learning to occur within a single session or whether correlated streams are necessary, whether learning is affected by the transition probability between sequence elements, and how sequence length influences learning. In each of three experiments, we used six horizontally arranged stimulus displays consisting of the randomly ordered bigrams xo and ox. The probability of the next possible target location (out of two) was either .50/.50 or .75/.25, and the target was marked by an underline. In Experiment 1, a left vs. right key response to the x of the marked bigram was required in the pure perceptual learning condition, and a key press corresponding to the marked bigram location (out of six) was required in the correlated streams condition (i.e., the ring, middle, or index finger of the left and right hand, respectively). The same probabilistic 3-element sequence was used in both conditions. Learning occurred only in the correlated streams condition. In Experiment 2, we investigated whether sequence length affected the learning of correlated sequences by contrasting the 3-element sequence with a 6-element sequence. Significant sequence learning occurred in all conditions. In Experiment 3, we removed a potential confound, namely the sequence of hand changes. Under these conditions, learning occurred for the 3-element sequence only, and transition probability did not affect the amount of learning. Together, these results indicate that correlated streams are necessary for probabilistic sequence learning within a single session and that sequence length can reduce the chances for learning to occur.
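A probabilistic target sequence of the kind described can be sketched as follows; the transition table and the specific locations are illustrative assumptions, and only the .75/.25 transition structure is taken from the abstract:

```python
import random

# Map each location to its (probable, improbable) successor.
transitions = {0: (2, 4), 2: (4, 0), 4: (0, 2)}

def next_target(current, p_probable=0.75):
    """Draw the next target: the probable successor with p = .75,
    otherwise the improbable one (p = .25)."""
    probable, improbable = transitions[current]
    return probable if random.random() < p_probable else improbable

seq, loc = [], 0
for _ in range(20):
    loc = next_target(loc)
    seq.append(loc)
print(seq)
```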
Abstract:
The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and the mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly, and ocean–sediment fluxes are considered. For local dissolution rates, either a strong or a weak dependency on CaCO3 saturation is assumed. In addition, there is the option of saturation-independent dissolution above the saturation horizon. The median (and 68 % confidence interval) of the constrained model ensemble for global biogenic CaCO3 export is 0.90 (0.72–1.05) Gt C yr⁻¹, which is within the lower half of previously published estimates (0.4–1.8 Gt C yr⁻¹). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo–Pacific and the northern Pacific, and relatively small in the Atlantic. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. The modelled ocean circulation and transport timescales for the different set-ups were further evaluated with CFC-11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements, such that model realisations with and without saturation-dependent dissolution achieve skill. We suggest applying saturation-independent dissolution rates in Earth system models to minimise computational costs.
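The ensemble-constraint procedure can be illustrated schematically. In the sketch below a toy scalar model stands in for Bern3D, and the parameter names, ranges, and observational target are invented; only the sample-score-constrain logic and the median/68 % reporting mirror the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)
n_members = 1000

# Sample two hypothetical parameters controlling export and dissolution.
export_scale = rng.uniform(0.5, 2.0, n_members)
dissolution_rate = rng.uniform(0.1, 10.0, n_members)

def toy_model(e, d):
    # Placeholder for a simulated TA*-like field, reduced to one value.
    return e * (1.0 - np.exp(-d))

obs, obs_err = 1.0, 0.2  # observation-based target and its uncertainty
simulated = toy_model(export_scale, dissolution_rate)

# Constrain: keep members whose mismatch lies within observational error.
accepted = np.abs(simulated - obs) < obs_err
export_constrained = export_scale[accepted]

# Report the median and 68 % confidence interval of the accepted members.
print(np.percentile(export_constrained, [16, 50, 84]))
```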
Abstract:
Diabetic nephropathy and end-stage renal failure are still a major cause of mortality among patients with diabetes mellitus (DM). In this study, we evaluated the Clinitek-Microalbumin (CM) screening test strip for the detection of microalbuminuria (MA) in a random morning spot urine in comparison with the quantitative assessment of albuminuria in a timed overnight urine collection ("gold standard"). One hundred thirty-four children, adolescents, and young adults with insulin-dependent DM type 1 were studied at 222 outpatient visits. Because of urinary tract infection and/or haematuria, the data of 13 visits were excluded. Finally, 165 timed overnight urine collections were obtained at the remaining 209 visits (79% sample-per-visit rate). Ten (6.1%) patients presented MA of ≥15 μg/min. In comparison, however, 200 spot urine samples could be screened (96% sample-per-visit rate), yielding a significant increase in compliance and screening rate (P<.001, McNemar test). Furthermore, on 156 occasions the gold standard and CM could be directly compared. The sensitivity and specificity for CM in the spot urine (cut-off ≥30 mg albumin/l) were 0.89 [95% confidence interval (CI) 0.56-0.99] and 0.73 (CI 0.66-0.80), respectively. The positive and negative predictive values were 0.17 (CI 0.08-0.30) and 0.99 (CI 0.95-1.00), respectively. With the CM albumin-to-creatinine ratio, the results were poorer than with the albumin concentration alone. Using CM instead of quantitative assessment of albuminuria is not cost-effective (USD 35 versus USD 60 per patient per year). In conclusion, to exclude MA, CM used on a random spot urine is reliable and easy to handle, but positive screening results of ≥30 mg albumin/l must be confirmed by analysis of the timed overnight collected urine. Although screening compliance is improved, we cannot recommend CM for analysing random morning spot urine for MA in a paediatric diabetic outpatient setting because the specificity is far too low.
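The reported test characteristics follow from a standard 2 × 2 table. The cell counts below are hypothetical, chosen only to be consistent with the 156 direct comparisons and the reported rates:

```python
# Hypothetical 2x2 table (tp + fn + fp + tn = 156 comparisons).
tp, fn = 8, 1     # MA-positive by gold standard: detected vs. missed
fp, tn = 40, 107  # MA-negative: false alarms vs. correctly negative

sensitivity = tp / (tp + fn)   # 8/9   ~ 0.89
specificity = tn / (tn + fp)   # 107/147 ~ 0.73
ppv = tp / (tp + fp)           # 8/48  ~ 0.17, positive predictive value
npv = tn / (tn + fn)           # 107/108 ~ 0.99, negative predictive value
print(sensitivity, specificity, ppv, npv)
```

The high NPV with low PPV is exactly the pattern described: a negative strip reliably excludes MA, while a positive strip needs quantitative confirmation.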
Abstract:
BACKGROUND: Many parasitic organisms, eukaryotes as well as bacteria, possess surface antigens with amino acid repeats. Making up the interface between host and pathogen, such repetitive proteins may be virulence factors involved in immune evasion or cytoadherence. They find immunological applications in serodiagnostics and vaccine development. Here we use proteins which contain perfect repeats as a basis for comparative genomics between parasitic and free-living organisms. RESULTS: We have developed Reptile (http://reptile.unibe.ch), a program for proteome-wide probabilistic description of perfect repeats in proteins. Parasite proteomes exhibited a large variance in the proportion of repeat-containing proteins. Interestingly, there was a good correlation between the percentage of highly repetitive proteins and mean protein length in parasite proteomes, but not at all in the proteomes of free-living eukaryotes. Reptile, combined with programs for the prediction of transmembrane domains and GPI anchoring, provides an effective tool for the in silico identification of potential surface antigens and virulence factors from parasites. CONCLUSION: Systematic surveys for perfect amino acid repeats allowed basic comparisons between free-living and parasitic organisms that were directly applicable to predicting proteins of serological and parasitological importance. An on-line tool is available at http://genomics.unibe.ch/dora.
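Perfect tandem-repeat detection itself is straightforward; a minimal sketch follows. Reptile's probabilistic significance scoring is not reproduced here, and the example sequence is invented:

```python
import re

def perfect_repeats(seq, min_unit=2, min_copies=3):
    """Yield (position, unit, copies) for perfect tandem repeats:
    a unit of at least min_unit residues repeated at least
    min_copies times back to back."""
    pattern = re.compile(r"(.{%d,})\1{%d,}" % (min_unit, min_copies - 1))
    for m in pattern.finditer(seq):
        unit = m.group(1)
        copies = len(m.group(0)) // len(unit)
        yield m.start(), unit, copies

# Example: an EENV repeat reminiscent of a parasite surface antigen.
demo = "MKLS" + "EENV" * 5 + "GGTA"
print(list(perfect_repeats(demo)))  # [(4, 'EENV', 5)]
```

A probabilistic treatment would then ask how unlikely such a repeat is given the proteome's amino acid composition, which is the part Reptile adds on top of plain detection.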
Abstract:
Amyloids and prion proteins are clinically and biologically important beta-structures, whose supersecondary structures are difficult to determine by standard experimental or computational means. In addition, significant conformational heterogeneity is known or suspected to exist in many amyloid fibrils. Recent work has indicated the utility of pairwise probabilistic statistics in beta-structure prediction. We develop here a new strategy for beta-structure prediction, emphasizing the determination of beta-strands and pairs of beta-strands as fundamental units of beta-structure. Our program, BETASCAN, calculates likelihood scores for potential beta-strands and strand-pairs based on correlations observed in parallel beta-sheets. The program then determines the strands and pairs with the greatest local likelihood for all of the sequence's potential beta-structures. BETASCAN suggests multiple alternate folding patterns and assigns relative a priori probabilities based solely on amino acid sequence, probability tables, and pre-chosen parameters. The algorithm compares favorably with the results of previous algorithms (BETAPRO, PASTA, SALSA, TANGO, and Zyggregator) in beta-structure prediction and amyloid propensity prediction. Accurate prediction is demonstrated for experimentally determined amyloid beta-structures, for a set of known beta-aggregates, and for the parallel beta-strands of beta-helices, amyloid-like globular proteins. BETASCAN is able both to detect beta-strands with higher sensitivity and to detect the edges of beta-strands in a richly beta-like sequence. For two proteins (Abeta and Het-s), there exist multiple sets of experimental data implying contradictory structures; BETASCAN is able to detect each competing structure as a potential structure variant. The ability to correlate multiple alternate beta-structures to experiment opens the possibility of computational investigation of prion strains and structural heterogeneity of amyloid. BETASCAN is publicly accessible on the Web at http://betascan.csail.mit.edu.
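The pairwise scoring idea can be illustrated with a toy example. The propensity values below are invented for illustration; BETASCAN derives its tables from correlations observed in parallel beta-sheets:

```python
import math

# Ratio of P(residues a, b face each other in a parallel sheet)
# to the background expectation (invented values).
pair_propensity = {
    ("V", "V"): 1.8, ("I", "V"): 1.6, ("F", "I"): 1.5,
    ("N", "Q"): 1.2, ("G", "P"): 0.3,
}

def pair_score(a, b):
    """Log-odds for one facing residue pair; unknown pairs are neutral."""
    p = pair_propensity.get((a, b)) or pair_propensity.get((b, a)) or 1.0
    return math.log(p)

def strand_pair_loglik(seg1, seg2):
    """Log-likelihood score that two equal-length segments pair up,
    summed over aligned positions."""
    return sum(pair_score(a, b) for a, b in zip(seg1, seg2))

print(strand_pair_loglik("VIVIF", "VVVIN"))
```

Scoring every candidate strand-pair this way and keeping the locally best-scoring ones naturally yields multiple alternate folding patterns, which is how competing structure variants can each surface as predictions.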