20 results for Coding and Information Theory
at Université de Lausanne, Switzerland
Abstract:
Classical treatments of sequential mate choice assume that the distribution of the quality of potential mates is known a priori. This assumption, made for analytical tractability, may seem unrealistic, as it conflicts with empirical data as well as evolutionary arguments. Using stochastic dynamic programming, we develop a model that allows searching individuals to learn about the distribution, in particular by updating its mean and variance during the search. In a constant environment, a priori knowledge of the parameter values brings strong benefits in both the time needed to make a decision and the average value of the mate obtained. Knowing the variance yields more benefits than knowing the mean, and the benefits increase with variance. However, the costs of learning become progressively lower as more time is available for choice. When parameter values differ between demes and/or searching periods, a strategy relying on fixed a priori information may lead to erroneous decisions, which confers an advantage on the learning strategy. Yet time for choice plays an important role as well: if a decision must be made rapidly, a fixed strategy may do better even when the fixed image does not coincide with the local parameter values. These results help delineate the ecological and behavioral context in which learning strategies may spread.
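The abstract does not give the model's equations, but the trade-off it describes can be illustrated with a toy simulation (not the authors' stochastic dynamic program): a fixed-threshold searcher that already knows the quality distribution is compared against a learner that updates the running mean and standard deviation of sampled qualities and accepts candidates sufficiently far above its current estimate. All distributions, thresholds, and parameter values below are invented for illustration.

```python
import random
import statistics

def fixed_strategy(qualities, threshold):
    """Accept the first candidate at or above a fixed a priori threshold."""
    for t, q in enumerate(qualities):
        if q >= threshold:
            return q, t + 1
    return qualities[-1], len(qualities)  # horizon reached: take the last candidate

def learning_strategy(qualities, k=1.0, warmup=3):
    """Estimate the quality distribution during the search; accept a candidate
    more than k running standard deviations above the running mean."""
    seen = []
    for t, q in enumerate(qualities):
        seen.append(q)
        if len(seen) >= warmup and q >= statistics.mean(seen) + k * statistics.stdev(seen):
            return q, t + 1
    return qualities[-1], len(qualities)

random.seed(1)
horizon, n_runs = 20, 5000
# Candidate qualities drawn from an (invented) Normal(10, 2) distribution.
runs = [[random.gauss(10, 2) for _ in range(horizon)] for _ in range(n_runs)]

for name, strategy in [("fixed (knows mu, sigma)", lambda qs: fixed_strategy(qs, 12.0)),
                       ("learning", learning_strategy)]:
    picks = [strategy(qs) for qs in runs]
    print(f"{name:24s} mean quality {statistics.mean(q for q, _ in picks):5.2f}, "
          f"mean search time {statistics.mean(t for _, t in picks):4.1f}")
```

With these invented parameters, the learner pays an early-search cost (the warm-up samples) that shrinks in relative terms as the horizon grows, mirroring the abstract's point that the cost of learning declines when more time is available for choice.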
Abstract:
BACKGROUND: Knowledge of their past medical history is essential for childhood cancer survivors to make informed decisions about their health management, yet evidence on information provision and information needs in this population is still scarce. We thus aimed to assess: (1) the information survivors reported having received on disease, treatment, follow-up, and late effects; (2) their information needs in these four domains and the format in which they would like it provided; (3) the association of information needs with psychological distress and quality of life (QoL). PROCEDURE: As part of the Follow-up survey of the Swiss Childhood Cancer Survivor Study, we sent a questionnaire to all survivors (≥18 years) who had previously participated in the baseline survey and had been diagnosed with cancer after 1990 at an age of <16 years. RESULTS: Most survivors had received oral information only (illness: oral 82%, written 38%; treatment: oral 79%, written 36%; follow-up: oral 77%, written 23%; late effects: oral 68%, written 14%). Most survivors who had not previously received any information rated it as important, especially information on late effects (71%). A large proportion of survivors reported current information needs and would like to receive personalized information, especially on late effects (44%). CONCLUSIONS: Survivors want to be better informed, especially about possible late effects, and want to receive personalized information. Improving information provision, both qualitatively and quantitatively, will allow survivors to take better control of their health and to become better decision makers.
Abstract:
Gene expression changes may underlie much of phenotypic evolution. The development of high-throughput RNA sequencing protocols has opened the door to unprecedented large-scale and cross-species transcriptome comparisons by allowing accurate and sensitive assessments of transcript sequences and expression levels. Here, we review the initial wave of the new generation of comparative transcriptomic studies in mammals and vertebrate outgroup species in the context of earlier work. Together with various large-scale genomic and epigenomic data, these studies have unveiled commonalities and differences in the dynamics of gene expression evolution for various types of coding and non-coding genes across mammalian lineages, organs, developmental stages, chromosomes and sexes. They have also provided intriguing new clues to the regulatory basis and phenotypic implications of evolutionary gene expression changes.
Abstract:
In the 1920s, Ronald Fisher developed the theory behind the p value, and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers with important quantitative tools to confirm or refute their hypotheses. The p value is the probability of obtaining an effect equal to or more extreme than the one observed, presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators select a threshold p value below which they reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose levels for Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) and determine a critical region; if the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative. Despite their similarities, the p value and the theory of hypothesis testing are different theories that are often misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to treat the p value as the probability that the null hypothesis is true, rather than the probability of obtaining the observed difference, or one more extreme, given that the null is true. Another concern is the risk that a substantial proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
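As a worked numerical illustration of the two frameworks (the numbers are invented, not taken from the article): under a null hypothesis of μ = 50 with known σ = 5, a sample of n = 25 with mean 52 gives z = 2.0 and a two-sided p ≈ 0.0455, Fisher's graded strength of evidence; a Neyman-Pearson test instead fixes α = 0.05 in advance, derives the critical region, and computes the Type II error β for a specific alternative such as μ = 53.

```python
from statistics import NormalDist

z = NormalDist()  # standard normal distribution

# Fisher: observed data -> strength of evidence against H0.
n, xbar, mu0, sigma = 25, 52.0, 50.0, 5.0
se = sigma / n ** 0.5
z_obs = (xbar - mu0) / se                 # = 2.0
p_value = 2 * (1 - z.cdf(abs(z_obs)))     # two-sided p ~ 0.0455
print(f"z = {z_obs:.2f}, p = {p_value:.4f}")

# Neyman-Pearson: fix alpha before seeing data, derive the critical region,
# and compute the Type II error beta under a specific alternative mu = 53.
alpha = 0.05
z_crit = z.inv_cdf(1 - alpha / 2)         # ~ 1.96
mu_alt = 53.0
shift = (mu_alt - mu0) / se
# Probability the statistic stays inside the acceptance region under mu_alt:
beta = z.cdf(z_crit - shift) - z.cdf(-z_crit - shift)
print(f"critical |z| > {z_crit:.2f}, beta = {beta:.3f}, power = {1 - beta:.3f}")
```

The key distinction the abstract draws is visible here: the p value is computed from the data after the fact, whereas α, the critical region, and β are fixed properties of the test chosen before the data are seen.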
Abstract:
The aim of this paper is to describe the process and challenges in building exposure scenarios for engineered nanomaterials (ENM), using an exposure scenario format similar to that used for the European Chemicals regulation (REACH). Over 60 exposure scenarios were developed based on information from publicly available sources (literature, books, and reports), publicly available exposure estimation models, occupational sampling campaign data from partnering institutions, and industrial partners regarding their own facilities. The primary focus was on carbon-based nanomaterials, nano-silver (nano-Ag) and nano-titanium dioxide (nano-TiO2), and included occupational and consumer uses of these materials with consideration of the associated environmental release. The process of building exposure scenarios illustrated the availability and limitations of existing information and exposure assessment tools for characterizing exposure to ENM, particularly as it relates to risk assessment. This article describes the gaps in the information reviewed, recommends future areas of ENM exposure research, and proposes types of information that should, at a minimum, be included when reporting the results of such research, so that the information is useful in a wider context.
Abstract:
Our lives and careers are becoming ever more unpredictable. The "life-design paradigm" described in detail in this ground-breaking handbook helps counselors and others meet people's increasing need to develop and manage their own lives and careers. Life-design interventions, suited to a wide variety of cultural settings, help individuals become actors in their own lives and careers by activating, stimulating, and developing their personal resources. This handbook first addresses life-design theory, then shows how to apply life designing to different age groups and with more at-risk people, and finally looks at how to train life-design counselors.
Abstract:
There is no doubt about the necessity of protecting digital communication: citizens entrust their most confidential and sensitive data to digital processing and communication, as do governments, corporations, and armed forces. Digital communication networks are also an integral component of many critical infrastructures on which we depend in our daily lives; transportation services, financial services, energy grids, and food production and distribution networks are only a few examples. Protecting digital communication means protecting confidentiality and integrity by encrypting and authenticating its contents. Yet most digital communication is not secure today, even though some of the most pressing problems could be solved with a more stringent use of current cryptographic technologies. Quite surprisingly, a new cryptographic primitive emerges from the application of quantum mechanics to information and communication theory: Quantum Key Distribution (QKD). QKD is difficult to understand, complex, technically challenging, and costly, yet it enables two parties to share a secret key for use in any subsequent cryptographic task, with unprecedented long-term security. Whether technically and economically feasible applications can be found remains disputed. Our vision is that, despite its technical difficulty and inherent limitations, Quantum Key Distribution has great potential and fits well with other cryptographic primitives, enabling the development of highly secure new applications and services. In this thesis we take a structured approach to analyzing the practical applicability of QKD and present several use cases of different complexity for which it can be a technology of choice, either because of its unique forward-security features or because of its practicability.
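The abstract does not name a specific protocol, but the canonical QKD scheme is BB84; the sketch below simulates only its basis-sifting step, omitting eavesdropper detection, error correction, and privacy amplification. It is a conceptual toy, not a security implementation, and all names and counts are illustrative.

```python
import random

def bb84_sift(n_qubits, rng):
    """Toy BB84 sifting without an eavesdropper: Alice sends random bits
    encoded in random bases, Bob measures in random bases, and the two
    keep only the positions where their bases happen to agree."""
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.choice("+x") for _ in range(n_qubits)]  # rectilinear / diagonal
    bob_bases   = [rng.choice("+x") for _ in range(n_qubits)]
    key = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if a_basis == b_basis:   # matching basis: Bob reads the bit correctly
            key.append(bit)
        # mismatched basis: Bob's outcome is random, so the position is discarded
    return key

rng = random.Random(0)
key = bb84_sift(1000, rng)
print(f"sifted key length: {len(key)} of 1000 (expected ~500)")
```

Because the two random basis choices agree about half the time, roughly half of the transmitted qubits survive sifting; an eavesdropper measuring in the wrong basis would disturb the states and show up as an elevated error rate in a subsequent comparison step, which is the physical source of QKD's security.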
Abstract:
Continuing developments in science and technology mean that the amount of information forensic scientists are able to provide for criminal investigations is ever increasing. The commensurate increase in complexity creates difficulties for scientists and lawyers with regard to evaluation and interpretation, notably with respect to issues of inference and decision. Probability theory, implemented through graphical methods, and specifically Bayesian networks, provides powerful tools to deal with this complexity. Extensions of these methods to elements of decision theory provide further support and assistance to the judicial system. Bayesian Networks for Probabilistic Inference and Decision Analysis in Forensic Science provides a unique and comprehensive introduction to the use of Bayesian decision networks for the evaluation and interpretation of scientific findings in forensic science, and for the support of decision-makers in their scientific and legal tasks. The book:
- Includes self-contained introductions to probability and decision theory.
- Develops the characteristics of Bayesian networks, object-oriented Bayesian networks and their extension to decision models.
- Features implementation of the methodology with reference to commercial and academically available software.
- Presents standard networks and their extensions that can be easily implemented and that can assist in the reader's own analysis of real cases.
- Provides a technique for structuring problems and organizing data based on methods and principles of scientific reasoning.
- Contains a method for the construction of coherent and defensible arguments for the analysis and evaluation of scientific findings and for decisions based on them.
- Is written in a lucid style, suitable for forensic scientists and lawyers with minimal mathematical background.
- Includes a foreword by Ian Evett.
The clear and accessible style of this second edition makes this book ideal for all forensic scientists, applied statisticians and graduate students wishing to evaluate forensic findings from the perspective of probability and decision analysis. It will also appeal to lawyers and other scientists and professionals interested in the evaluation and interpretation of forensic findings, including decision making based on scientific information.
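The book's central machinery is Bayesian inference propagated through a network; the smallest possible case is a single likelihood-ratio update in the odds form of Bayes' theorem, sketched below. The numbers are hypothetical and are not an example from the book.

```python
# Toy likelihood-ratio update of the kind a forensic Bayesian network
# formalizes and scales up. All values are invented for illustration.
prior_odds = 1 / 1000    # odds of prosecution vs. defence proposition before the evidence
p_e_given_hp = 0.99      # P(matching trace | suspect is the source)
p_e_given_hd = 1e-6      # P(matching trace | unknown person is the source)

likelihood_ratio = p_e_given_hp / p_e_given_hd
posterior_odds = prior_odds * likelihood_ratio          # odds form of Bayes' theorem
posterior_prob = posterior_odds / (1 + posterior_odds)  # convert odds back to probability

print(f"LR = {likelihood_ratio:.3g}, posterior odds = {posterior_odds:.3g}, "
      f"P(Hp | E) = {posterior_prob:.4f}")
```

A Bayesian network generalizes this two-node calculation to many interdependent propositions and items of evidence, which is where graphical structure and software support become indispensable.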
Abstract:
Investigating the macro-geographical genetic structure of animal populations is crucial for reconstructing population histories and identifying significant units for conservation. This approach may also provide information about the intraspecific flexibility of social systems. We investigated the history and current structure of a large number of populations of the communally breeding Bechstein's bat (Myotis bechsteinii). Our aim was to understand which factors shape the species' social system over a large ecological and geographical range. Using sequence data from one coding and one noncoding mitochondrial DNA region, we identified the Balkan Peninsula as the main and probably only glacial refugium of the species in Europe. The sequence data also suggest the presence of a cryptic taxon in the Caucasus and Anatolia. In a second step, we used seven autosomal and two mitochondrial microsatellite loci to compare population structures inside and outside of the Balkan glacial refugium. Both Central European and Balkan populations were more strongly differentiated for mitochondrial DNA than for nuclear DNA, had higher genetic diversities and lower levels of relatedness at swarming (mating) sites than in maternity (breeding) colonies, and showed more differentiation between colonies than between swarming sites. All of these patterns suggest that populations are shaped by strong female philopatry, male dispersal, and outbreeding throughout the species' European range. We conclude that Bechstein's bats have a stable social system that is independent of the postglacial history and location of the populations. Our findings have implications for understanding the benefits of sociality in female Bechstein's bats and for the conservation of this endangered species.