33 results for cryptographic pairing computation, elliptic curve cryptography
Abstract:
Age-related changes in lumbar vertebral microarchitecture are evaluated, as assessed by trabecular bone score (TBS), in a cohort of 5,942 French women. The magnitude of TBS decline between 45 and 85 years of age is piecewise linear in the spine and averages 14.5%. The rate of TBS decline increases by 50% after 65 years. INTRODUCTION: This study aimed to evaluate age-related changes in lumbar vertebral microarchitecture, as assessed by TBS, in a cohort of French women aged 45-85 years. METHODS: An all-comers cohort of French Caucasian women was selected from two clinical centers. Data obtained from these centers were cross-calibrated for TBS and bone mineral density (BMD). BMD and TBS were evaluated at L1-L4 and for all lumbar vertebrae combined using GE-Lunar Prodigy densitometer images. Weight, height, and body mass index (BMI) also were determined. To validate our all-comers cohort, the BMD normative data of our cohort and French Prodigy data were compared. RESULTS: A cohort of 5,942 French women aged 45 to 85 years was created. Dual-energy X-ray absorptiometry normative data obtained for BMD from this cohort were not significantly different from French Prodigy normative data (p = 0.15). TBS values at L1-L4 were poorly correlated with BMI (r = -0.17) and weight (r = -0.14) and not correlated with height. TBS values obtained for all lumbar vertebrae combined (L1, L2, L3, L4) decreased with age. The magnitude of TBS decline at L1-L4 between 45 and 85 years of age was piecewise linear and averaged 14.5%, with the rate of decline increasing by 50% after 65 years. Similar results were obtained for other regions of interest in the lumbar spine. Unlike BMD, TBS was not affected by spinal osteoarthrosis. CONCLUSION: The age-specific reference curve for TBS generated here could therefore be used to help clinicians improve osteoporosis patient management and monitor microarchitectural changes related to treatment or other diseases in routine clinical practice.
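The reported trajectory pins down a simple piecewise-linear reference curve: if the annual decline is r before 65 and 1.5r after, then 20r + 20(1.5r) = 14.5%, giving r = 0.29%/year before 65 and 0.435%/year after. A minimal Python sketch of that curve follows; the baseline TBS of 1.35 at age 45 is an illustrative placeholder, not a value reported by the study.

```python
# Piecewise-linear TBS reference curve implied by the abstract's figures:
# 14.5% total decline over ages 45-85, with the rate rising 50% after 65.
# Solving 20*r + 20*(1.5*r) = 14.5 gives r = 0.29 %/year before age 65.

def tbs_reference(age: float, tbs_45: float = 1.35) -> float:
    """Expected TBS at `age` (45-85); tbs_45 is an assumed baseline."""
    r_before = 0.29           # % decline per year, ages 45-65
    r_after = 1.5 * r_before  # % decline per year, ages 65-85
    if age <= 65:
        pct = r_before * (age - 45)
    else:
        pct = r_before * 20 + r_after * (age - 65)
    return tbs_45 * (1 - pct / 100)

for a in (45, 65, 85):
    print(a, round(tbs_reference(a), 3))  # total decline at 85 is 14.5%
```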
Abstract:
The process of DNA strand exchange during general genetic recombination is initiated within protein-stabilized synaptic filaments containing homologous regions of interacting DNA molecules. The RecA protein in bacteria and its analogs in eukaryotic organisms start this process by forming helical filamentous complexes on single-stranded or partially single-stranded DNA molecules. These complexes then progressively bind homologous double-stranded DNA molecules so that homologous regions of single- and double-stranded DNA molecules become aligned in register while presumably winding around a common axis. The topological assay presented herein allows us to conclude that in synaptic complexes containing homologous single- and double-stranded DNA molecules, all three DNA strands have a helicity of approximately 19 nt per turn.
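The reported helicity translates directly into turn counts. A small sketch, with an arbitrary 300-nt homologous region chosen purely for illustration:

```python
# Turns made by the three aligned strands at ~19 nt per turn, compared with
# relaxed B-form DNA at ~10.5 bp per turn; the synaptic filament is thus
# substantially underwound. The 300-nt region length is illustrative only.

SYNAPTIC_NT_PER_TURN = 19.0
B_DNA_BP_PER_TURN = 10.5

def helical_turns(region_length_nt: float, nt_per_turn: float) -> float:
    return region_length_nt / nt_per_turn

print(helical_turns(300, SYNAPTIC_NT_PER_TURN))  # ~15.8 turns in the complex
print(helical_turns(300, B_DNA_BP_PER_TURN))     # ~28.6 turns in B-DNA
```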
Abstract:
The question concerning whether all membranes fuse according to the same mechanism has yet to be answered satisfactorily. During fusion of model membranes or viruses, membranes dock, the outer membrane leaflets mix (termed hemifusion), and finally the fusion pore opens and the contents mix. Viral fusion proteins consist of a membrane-disturbing 'fusion peptide' and a helical bundle that pin the membranes together. Although SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) complexes form helical bundles with similar topology, it is unknown whether SNARE-dependent fusion events on intracellular membranes proceed through a hemifusion state. Here we identify the first hemifusion state for SNARE-dependent fusion of native membranes, and place it into a sequence of molecular events: formation of helical bundles by SNAREs precedes hemifusion; further progression to pore opening requires additional peptides. Thus, SNARE-dependent fusion may proceed along the same pathway as viral fusion: both use a docking mechanism via helical bundles and additional peptides to destabilize the membrane and efficiently induce lipid mixing. Our results suggest that a common lipidic intermediate may underlie all fusion reactions of lipid bilayers.
Abstract:
Superantigens (SAg) are proteins of bacterial or viral origin able to activate T cells by forming a trimolecular complex with both MHC class II molecules and the T cell receptor (TCR), leading to clonal deletion of reactive T cells in the thymus. SAg interact with the TCR through the beta chain variable region (Vbeta), but the TCR alpha chain has been shown to influence T cell reactivity. We have investigated here the role of the TCR alpha chain in the modulation of T cell reactivity to Mtv-7 SAg by comparing the peripheral usage of Valpha2 in Vbeta6(+) (SAg-reactive) and Vbeta8.2(+) (SAg non-reactive) T cells, in either BALB/D2 (Mtv-7(+)) or BALB/c (Mtv-7(-)) mice. The results show, first, that pairing of Vbeta6 with certain Valpha2 family members prevents T cell deletion by Mtv-7 SAg. Second, there is a strikingly different distribution of the Valpha2 family members in the CD4 and CD8 populations of Vbeta6 but not of Vbeta8.2 T cells, irrespective of the presence of Mtv-7 SAg. Third, the alpha chain may play a role in the overall stability of the TCR/SAg/MHC complex. Taken together, these results suggest that the Valpha domain contributes to the selective process by its role in the TCR reactivity to SAg/MHC class II complexes, most likely by influencing the orientation of the Vbeta domain in the TCR alphabeta heterodimer.
Abstract:
BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and into a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
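For concreteness, here is a minimal sketch of the quantities involved, using the standard decision-curve definitions of net benefit at a threshold probability pt; the pairing of a "treated" and an "untreated" net benefit, and their sum as an overall net benefit, follows the description above, while the function names and toy data are ours.

```python
import numpy as np

def net_benefits(y_true, risk, pt):
    """Net benefit for the treated, for the untreated, and overall,
    at threshold probability pt (standard decision-curve definitions)."""
    y_true = np.asarray(y_true, dtype=bool)
    treat = np.asarray(risk) >= pt
    n = y_true.size
    tp = np.sum(treat & y_true) / n    # per-subject classification rates
    fp = np.sum(treat & ~y_true) / n
    tn = np.sum(~treat & ~y_true) / n
    fn = np.sum(~treat & y_true) / n
    nb_treated = tp - fp * pt / (1 - pt)
    nb_untreated = tn - fn * (1 - pt) / pt
    return nb_treated, nb_untreated, nb_treated + nb_untreated

# Toy example: 1,000 subjects, ~20% prevalence, an informative risk score.
rng = np.random.default_rng(1)
y = rng.random(1000) < 0.2
risk = np.clip(0.2 + 0.4 * (y - 0.2) + rng.normal(0, 0.15, 1000), 0.01, 0.99)
print(net_benefits(y, risk, pt=0.25))
```

For case-control data, as the abstract notes, the empirical prevalence is meaningless; the modification amounts to reweighting the per-class rates by an externally supplied prevalence before combining them.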
Abstract:
The value of various indexes to characterize the stimulus-response curve of human motor nerves was assessed in 40 healthy subjects recruited from four European centers of investigation (Créteil, Lausanne, Liège, Marseille). Stimulus-response curves were established by stimulating the right median and ulnar motor nerves at the wrist, with stimulus durations of 0.05 and 0.5 ms. The following parameters were studied: the threshold intensity of stimulation to obtain 10% (I10), 50% (I50), and 90% (I90) of the maximal compound muscle action potential, the ratios I10/I50, I90/I50, (I90 - I10)/I10, (I90 - I50)/I50, and (I50 - I10)/I10, and the slopes of the stimulus-response curves with or without normalization to I50. For each parameter, within-center variability and reproducibility (in a test-retest study) were assessed and between-center comparisons were made. For most of the parameters, the results varied significantly within and between the centers. Within the centers, only the ratios I10/I50 and I90/I50 were found constant and reproducible. Between the centers, the absolute intensity thresholds (I10, I50, I90) and the ratio I90/I50 did not show significant differences at a stimulus duration of 0.5 ms, whatever the stimulated nerve. The reduced variability and good reproducibility of the ratios I10/I50 and I90/I50 open perspectives in neurophysiological practice for the use of these indexes of the stimulus-response curve, a rapid and noninvasive test.
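A short sketch of how these indexes could be extracted from recorded data by linear interpolation on the normalized curve; the sample intensities and CMAP amplitudes are invented for illustration, and real curves would typically be fitted (e.g., to a sigmoid) rather than interpolated.

```python
import numpy as np

def intensity_at(frac, intensities, amplitudes):
    """Stimulus intensity evoking `frac` of the maximal CMAP amplitude.
    Assumes amplitudes increase monotonically with intensity."""
    rel = np.asarray(amplitudes, float) / np.max(amplitudes)
    return float(np.interp(frac, rel, intensities))

def sr_indexes(intensities, amplitudes):
    i10 = intensity_at(0.10, intensities, amplitudes)
    i50 = intensity_at(0.50, intensities, amplitudes)
    i90 = intensity_at(0.90, intensities, amplitudes)
    return {"I10/I50": i10 / i50, "I90/I50": i90 / i50}

intensities = np.array([4, 6, 8, 10, 12, 14, 16, 18])             # mA, invented
amplitudes = np.array([0.1, 0.5, 2.0, 5.5, 8.0, 9.2, 9.8, 10.0])  # mV, invented
print(sr_indexes(intensities, amplitudes))  # the two reproducible ratios
```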
Abstract:
Extensive gene flow between wheat (Triticum sp.) and several wild relatives of the genus Aegilops has recently been detected despite notoriously high levels of selfing in these species. Here, we assess and model the spread of wheat alleles into natural populations of the barbed goatgrass (Aegilops triuncialis), a wild wheat relative prevailing in the Mediterranean flora. Our sampling, based on an extensive survey of 31 Ae. triuncialis populations collected across a 60 km × 20 km area in southern Spain (Grazalema Mountain chain, Andalusia, totalling 458 specimens), is complemented with 33 wheat cultivars representative of the European domesticated pool. All specimens were genotyped with amplified fragment length polymorphism (AFLP) markers with the aim of estimating wheat admixture levels in Ae. triuncialis populations. This survey first confirmed extensive hybridization and backcrossing of wheat into the wild species. We then used explicit modelling of populations and approximate Bayesian computation to estimate the selfing rate of Ae. triuncialis along with the magnitude, the tempo and the geographical distance over which wheat alleles introgress into Ae. triuncialis populations. These simulations confirmed that extensive introgression of wheat alleles (2.7 × 10⁻⁴ wheat immigrants for each Ae. triuncialis resident, at each generation) into Ae. triuncialis occurs despite a high selfing rate (FIS ≈ 1 and selfing rate = 97%). These results are discussed in the light of risks associated with the release of genetically modified wheat cultivars in Mediterranean agrosystems.
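The inference step lends itself to a compact illustration. Below is a bare-bones rejection-ABC sketch for the selfing rate alone, standing in for the study's full population model: the simulator here just maps a selfing rate s to the classical equilibrium expectation FIS = s/(2 - s) plus noise, and prior draws are kept when the simulated summary lands near the observed one. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed_fis, n_draws=100_000, tol=0.005):
    """Rejection ABC with a uniform prior on the selfing rate s and the
    equilibrium expectation FIS = s / (2 - s) (+ noise) as a toy simulator."""
    prior = rng.uniform(0, 1, n_draws)
    sims = prior / (2 - prior) + rng.normal(0, 0.02, n_draws)
    return prior[np.abs(sims - observed_fis) < tol]

posterior = abc_rejection(observed_fis=0.94)
print(posterior.mean())                       # ~0.97, i.e. ~97% selfing
print(np.percentile(posterior, [2.5, 97.5]))  # posterior credible interval
```

With FIS near 1 the posterior concentrates near s = 0.97, matching the order of magnitude quoted in the abstract.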
Abstract:
The decision-making process regarding drug dose, regularly used in everyday medical practice, is critical to patients' health and recovery. It is a challenging process, especially for a drug with a narrow therapeutic range, in which a medical doctor decides the quantity (dose amount) and frequency (dose interval) on the basis of a set of available patient features and the doctor's clinical experience (a priori adaptation). Computer support in drug dose administration makes the prescription procedure faster, more accurate, objective, and less expensive, with a tendency to reduce the number of invasive procedures. This paper presents an advanced integrated Drug Administration Decision Support System (DADSS) to help clinicians/patients with dose computation. Based on a support vector machine (SVM) algorithm, enhanced with the random sample consensus technique, this system is able to predict drug concentration values and compute the ideal dose amount and dose interval for a new patient. With an extension to combine the SVM method and the explicit analytical model, the advanced integrated DADSS system is able to compute drug concentration-time curves for a patient under different conditions. A feedback loop is enabled to update the curve with a newly measured concentration value to make it more personalized (a posteriori adaptation).
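The abstract's core pairing, an SVM regressor made robust with random sample consensus, can be sketched compactly. The RANSAC loop below is hand-rolled around scikit-learn's SVR so the mechanics are explicit; features, hyperparameters, and the synthetic data are placeholders, not the paper's actual model.

```python
import numpy as np
from sklearn.svm import SVR

def ransac_svr(X, y, n_iter=50, sample_frac=0.6, inlier_tol=0.5, seed=0):
    """Fit SVR on random subsets, keep the largest inlier consensus,
    then refit on those inliers (a simple RANSAC wrapper)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    best_inliers, best_count = None, -1
    for _ in range(n_iter):
        idx = rng.choice(n, size=int(sample_frac * n), replace=False)
        model = SVR(kernel="rbf", C=10.0).fit(X[idx], y[idx])
        inliers = np.abs(model.predict(X) - y) < inlier_tol
        if inliers.sum() > best_count:
            best_count, best_inliers = inliers.sum(), inliers
    return SVR(kernel="rbf", C=10.0).fit(X[best_inliers], y[best_inliers])

# Synthetic demo: concentration driven by one feature, plus gross outliers.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 3))               # toy patient features
y = 2.0 + 0.5 * X[:, 0] + rng.normal(0, 0.2, 200)
y[:20] += 5.0                                  # corrupted measurements
model = ransac_svr(X, y)                       # consensus model ignores them
```

Dose selection would then scan candidate (amount, interval) pairs and keep the one whose predicted concentration-time curve stays inside the therapeutic range; the feedback loop refits once a new measured concentration arrives.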
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, which means that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed for illustrating and apprehending the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem contradictory with those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
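As a concrete anchor for the classical result recalled above, here is a toy sketch of the trusted-third-party solution: every process deposits its item with a single a-priori-trusted process, which releases the items atomically, all or nothing. Class and method names are ours, and the synchronous-round and Byzantine-failure machinery of the actual protocol is deliberately elided.

```python
class TrustedThirdParty:
    """All-or-nothing exchange through one a-priori-trusted process."""

    def __init__(self, parties):
        self.parties = set(parties)
        self.deposits = {}

    def deposit(self, party, item):
        assert party in self.parties, "unknown process"
        self.deposits[party] = item

    def settle(self):
        """Release every item or none, so no party gains an advantage."""
        if set(self.deposits) != self.parties:
            return None  # abort: nobody learns anything about others' inputs
        return {p: {q: item for q, item in self.deposits.items() if q != p}
                for p in self.parties}

ttp = TrustedThirdParty({"A", "B"})
ttp.deposit("A", "item_a")
ttp.deposit("B", "item_b")
print(ttp.settle())  # {'A': {'B': 'item_b'}, 'B': {'A': 'item_a'}}
```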
Abstract:
PURPOSE: We examined the role of smoking in the two dimensions behind the time trends in adult mortality in European countries, that is, rectangularization of the survival curve (mortality compression) and longevity extension (an increase in the age at death). METHODS: Using data on national sex-specific populations aged 50 years and older from Denmark, Finland, France, West Germany, Italy, the Netherlands, Norway, Sweden, Switzerland, and the United Kingdom, we studied trends in life expectancy, rectangularity, and longevity from 1950 to 2009 for both all-cause and nonsmoking-related mortality and correlated them with trends in lifetime smoking prevalence. RESULTS: For all-cause mortality, rectangularization accelerated around 1980 among men in all the countries studied, and more recently among women in Denmark and the United Kingdom. Trends in lifetime smoking prevalence correlated negatively with both rectangularization and longevity extension, but more negatively with rectangularization. For nonsmoking-related mortality, rectangularization among men did not accelerate around 1980. Among women, the differences between all-cause mortality and nonsmoking-related mortality were small, but larger for rectangularization than for longevity extension. Rectangularization contributed less to the increase in life expectancy than longevity extension, especially for nonsmoking-related mortality among men. CONCLUSIONS: Smoking affects rectangularization more than longevity extension, both among men and women.
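One common way to make the two dimensions operational, offered here only as an illustration since the paper's exact measures are not given in the abstract, is to track the interquartile range of ages at death (a shrinking IQR indicates compression/rectangularization) and the mean age at death (a rising mean indicates longevity extension) from a death distribution:

```python
import numpy as np

def compression_and_longevity(ages, deaths):
    """IQR of ages at death (rectangularization shrinks it) and mean age
    at death (longevity extension raises it) from a death distribution."""
    d = np.asarray(deaths, float)
    cdf = np.cumsum(d) / d.sum()
    q25, q75 = (np.interp(q, cdf, ages) for q in (0.25, 0.75))
    return {"IQR": q75 - q25, "mean_age_at_death": np.average(ages, weights=d)}

ages = np.arange(50, 101)
deaths = np.exp(-0.5 * ((ages - 82) / 7.0) ** 2)  # stylized death counts
print(compression_and_longevity(ages, deaths))
```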
Abstract:
Drug metabolism can produce metabolites with physicochemical and pharmacological properties that differ substantially from those of the parent drug, and consequently has important implications for both drug safety and efficacy. To reduce the risk of costly clinical-stage attrition due to the metabolic characteristics of drug candidates, there is a need for efficient and reliable ways to predict drug metabolism in vitro, in silico and in vivo. In this Perspective, we provide an overview of the state of the art of experimental and computational approaches for investigating drug metabolism. We highlight the scope and limitations of these methods, and indicate strategies to harvest the synergies that result from combining measurement and prediction of drug metabolism.