950 results for Probabilistic metrics
Abstract:
Wireless “MIMO” systems, employing multiple transmit and receive antennas, promise a significant increase in channel capacity, while orthogonal frequency-division multiplexing (OFDM) is attracting a good deal of attention due to its robustness to multipath fading. Thus, the combination of both techniques is an attractive proposition for radio transmission. The goal of this paper is the description and analysis of a novel pilot-aided estimator of multipath block-fading channels. Typical models leading to estimation algorithms assume the number of multipath components and delays to be constant (and often known), while their amplitudes are allowed to vary with time. Our estimator is focused instead on the more realistic assumption that the number of channel taps is also unknown and varies with time following a known probabilistic model. The estimation problem arising from these assumptions is solved using Random-Set Theory (RST), whereby one regards the multipath-channel response as a single set-valued random entity. Within this framework, Bayesian recursive equations determine the evolution with time of the channel estimator. Due to the lack of a closed form for the solution of the Bayesian equations, a (Rao–Blackwellized) particle filter (RBPF) implementation of the channel estimator is advocated. Since the resulting estimator exhibits a complexity which grows exponentially with the number of multipath components, a simplified version is also introduced. Simulation results describing the performance of our channel estimator demonstrate its effectiveness.
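The paper's RBPF estimator itself has no closed form to reproduce here, but the underlying particle-filtering idea can be illustrated with a minimal bootstrap particle filter tracking a single fading tap from pilot observations. The scalar Gauss-Markov model and every parameter below are invented for illustration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all values hypothetical): one channel tap following a
# Gauss-Markov fading model x_t = a*x_{t-1} + w_t, observed through a
# known pilot symbol p with noise: y_t = p*x_t + v_t.
a, sigma_w, p, sigma_v, T, N = 0.95, 0.1, 1.0, 0.2, 50, 500

# Simulate a ground-truth tap and its noisy pilot observations.
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t-1] + sigma_w * rng.standard_normal()
    y[t] = p * x_true[t] + sigma_v * rng.standard_normal()

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.standard_normal(N)
estimates = np.zeros(T)
for t in range(T):
    particles = a * particles + sigma_w * rng.standard_normal(N)
    weights = np.exp(-0.5 * ((y[t] - p * particles) / sigma_v) ** 2)
    weights /= weights.sum()
    estimates[t] = np.dot(weights, particles)
    # Multinomial resampling to avoid weight degeneracy.
    particles = rng.choice(particles, size=N, p=weights)

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
```

The Rao-Blackwellization in the paper goes further, marginalizing the linear-Gaussian part analytically; this sketch only shows the sampling-based backbone.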
Abstract:
User-generated content shared in online communities is often described using collaborative tagging systems, where users assign labels to content resources. As a result, a folksonomy emerges that relates a number of tags with the resources they label and the users that have used them. In this paper we analyze the folksonomy of Freesound, an online audio clip sharing site which contains more than two million users and 150,000 user-contributed sound samples covering a wide variety of sounds. By following methodologies taken from similar studies, we compute some metrics that characterize the folksonomy both at the global level and at the tag level. In this manner, we are able to better understand the behavior of the folksonomy as a whole, and also obtain some indicators that can be used as metadata for describing the tags themselves. We expect that such a methodology for characterizing folksonomies can be useful to support processes such as tag recommendation or automatic annotation of online resources.
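Tag-level metrics of the kind this abstract describes can be sketched from the tripartite (user, resource, tag) structure of a folksonomy. The toy assignments and the two specific metrics below (raw tag frequency and co-occurrence degree) are illustrative choices, not the exact metrics computed in the paper:

```python
from collections import Counter, defaultdict

# Hypothetical toy folksonomy: (user, resource, tag) assignments
# (all names invented).
assignments = [
    ("u1", "clip1", "field-recording"), ("u1", "clip1", "birds"),
    ("u2", "clip1", "birds"), ("u2", "clip2", "synth"),
    ("u3", "clip2", "synth"), ("u3", "clip2", "bass"),
]

# Tag-level metric 1: raw frequency (how often each tag is applied).
tag_freq = Counter(tag for _, _, tag in assignments)

# Tag-level metric 2: co-occurrence degree, i.e. the number of distinct
# tags that appear together with a given tag on at least one resource.
tags_by_resource = defaultdict(set)
for _, resource, tag in assignments:
    tags_by_resource[resource].add(tag)

co_sets = defaultdict(set)
for tags in tags_by_resource.values():
    for t in tags:
        co_sets[t] |= tags - {t}
co_degree = {t: len(others) for t, others in co_sets.items()}
```

Metrics like these can serve directly as the per-tag metadata indicators mentioned above.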
Abstract:
Silver Code (SilC) was originally discovered in [1–4] for 2×2 multiple-input multiple-output (MIMO) transmission. It has a non-vanishing minimum determinant of 1/7, slightly lower than that of the Golden code, but it is fast-decodable, i.e., it allows reduced-complexity maximum-likelihood decoding [5–7]. In this paper, we present a multidimensional trellis-coded modulation scheme for MIMO systems [11] based on set partitioning of the Silver Code, named Silver Space-Time Trellis Coded Modulation (SST-TCM). This lattice set partitioning is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding, while the branch metrics are computed by using a sphere-decoding algorithm. It is shown that the proposed SST-TCM performs very closely to the Golden Space-Time Trellis Coded Modulation (GST-TCM) scheme, yet with a much reduced decoding complexity thanks to its fast-decoding property.
Abstract:
Almost 30 years ago, Bayesian networks (BNs) were developed in the field of artificial intelligence as a framework that should assist researchers and practitioners in applying the theory of probability to inference problems of more substantive size and, thus, to more realistic and practical problems. Since the late 1980s, Bayesian networks have also attracted researchers in forensic science and this tendency has considerably intensified throughout the last decade. This review article provides an overview of the scientific literature that describes research on Bayesian networks as a tool that can be used to study, develop and implement probabilistic procedures for evaluating the probative value of particular items of scientific evidence in forensic science. Primary attention is drawn here to evaluative issues that pertain to forensic DNA profiling evidence because this is one of the main categories of evidence whose assessment has been studied through Bayesian networks. The scope of topics is large and includes almost any aspect that relates to forensic DNA profiling. Typical examples are inference of source (or, 'criminal identification'), relatedness testing, database searching and special trace evidence evaluation (such as mixed DNA stains or stains with low quantities of DNA). The perspective of the review presented here is not exclusively restricted to DNA evidence, but also includes relevant references and discussion on both the concept of Bayesian networks and their general usage in legal sciences as one among several different graphical approaches to evidence evaluation.
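The core Bayesian updating step that such networks operationalize is the odds form of Bayes' theorem: posterior odds equal prior odds times the likelihood ratio. A minimal sketch, with purely illustrative numbers rather than case data:

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Update the odds on a proposition (e.g. 'the suspect is the
    source') given evidence E with LR = P(E | Hp) / P(E | Hd)."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    """Convert odds on a proposition to a probability."""
    return odds / (1.0 + odds)

# Illustrative example: prior odds of 1 to 1000 and a DNA likelihood
# ratio of one million.
post = posterior_odds(1 / 1000, 1e6)   # posterior odds of 1000 to 1
prob = odds_to_prob(post)
```

A Bayesian network generalizes this single update to a whole graph of dependent propositions, with the same calculus applied locally at each node.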
Abstract:
This paper discusses five strategies to deal with five types of errors in Qualitative Comparative Analysis (QCA): condition errors, systematic errors, random errors, calibration errors, and deviant case errors. These strategies are the comparative inspection of complex, intermediary, and parsimonious solutions; the use of an adjustment factor; the use of probabilistic criteria; the test of the robustness of calibration parameters; and the use of a frequency threshold for observed combinations of conditions. The strategies are systematically reviewed, assessed, and evaluated as regards their applicability, advantages, limitations, and complementarities.
Abstract:
Sitting between your past and your future doesn't mean you are in the present. (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural or mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected of the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations when applied to transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, relying on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one.
The outcome is remarkable, as the resulting topologies find themselves sharing properties of both regular and random networks, and display similarities to the Watts–Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time-evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by the recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to that of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of the previous GRN model, yet with superior resistance to perturbations. We believe they are one step closer to the biological reality.
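The synchronous Kauffman model that this thesis starts from is compact enough to sketch: N Boolean nodes, each reading K randomly chosen inputs through a random truth table, updated in lockstep until the trajectory revisits a state (an attractor). This is a minimal illustration of the original model, not of the thesis's improved cascading or promoter/repressor variants:

```python
import random

random.seed(42)

# Minimal Kauffman-style random Boolean network: N nodes, each with K
# randomly chosen inputs and a random Boolean update rule.
N, K, STEPS = 8, 2, 300

inputs = [random.sample(range(N), K) for _ in range(N)]
# One random truth table (2**K entries) per node.
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronous update: every node fires at once from the old state."""
    nxt = []
    for i in range(N):
        idx = sum(state[j] << b for b, j in enumerate(inputs[i]))
        nxt.append(tables[i][idx])
    return tuple(nxt)

# Iterate from a random initial state; since the dynamics are
# deterministic on 2**N states, a state must repeat, closing an attractor.
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
for t in range(STEPS):
    if state in seen:
        attractor_len = t - seen[state]
        break
    seen[state] = t
    state = step(state)
```

Replacing the synchronous `step` with an asynchronous or cascading schedule, as the thesis proposes, changes which attractors are reachable and how robust they are.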
Abstract:
The objective of this study consists in quantifying in money terms the potential reduction in the usage of public health care outlets associated with the tenure of double (public plus private) insurance. In order to address the problem, a probabilistic model for visits to physicians is specified and estimated using data from the Catalonian Health Survey. Also, a model for the marginal cost of a visit to a physician is estimated using data from a representative sample of fee-for-service payments from a major insurer. Combining the estimates from the two models, it is possible to quantify in money terms the costs/savings of alternative policies which bear an impact on the adoption of double insurance by the population. The results suggest that the private sector absorbs an important volume of demand which would be re-directed to the public sector if consumers ceased to hold double insurance.
Abstract:
Prior probabilities represent a core element of the Bayesian probabilistic approach to relatedness testing. This letter offers an opinion on the commentary 'Use of prior odds for missing persons identifications' by Budowle et al. (2011), published recently in this journal. Contrary to Budowle et al. (2011), we argue that the concept of prior probabilities (i) is not endowed with the notion of objectivity, (ii) is not a case for computation and (iii) does not require new guidelines edited by the forensic DNA community, as long as probability is properly considered as an expression of personal belief. Please see related article: http://www.investigativegenetics.com/content/3/1/3
The hematology laboratory in blood doping (BD): 2014 update on the Athlete Biological Passport (ABP)
Abstract:
Introduction: Blood doping (BD) is the use of Erythropoietic Stimulating Agents (ESAs) and/or transfusion to increase aerobic performance in athletes. Direct toxicologic techniques are insufficient to unmask sophisticated doping protocols. The Hematological module of the ABP (World Anti-Doping Agency) associates decision-support technology and expert assessment to indirectly detect the hematological effects of BD. Methods: The ABP module is based on blood parameters, under strict pre-analytical and analytical rules for collection, storage and transport at 2-12°C, and internal and external QC. Accuracy, reproducibility and interlaboratory harmonization fulfill forensic standards. Blood samples are collected in competition and out-of-competition. Primary parameters for longitudinal monitoring are: - hemoglobin (HGB); - reticulocyte percentage (RET%); - OFF score, an indicator of suppressed erythropoiesis, calculated as [HGB (g/L) − 60·√(RET%)]. Statistical calculation predicts individual expected limits by probabilistic inference. Secondary parameters are RBC, HCT, MCHC, MCH, MCV, RDW and IRF. ABP profiles flagged as atypical are reviewed by experts in hematology, pharmacology, sports medicine or physiology, and classified as: - normal; - suspect (to target); - likely due to BD; - likely due to pathology. Results: Thousands of athletes worldwide are currently monitored. Since 2010, at least 35 athletes have been sanctioned and others are being prosecuted on the sole basis of an abnormal ABP, with a 240% increase in positivity to direct tests for ESAs, thanks to improved targeting of suspicious athletes (WADA data). Specific doping scenarios have been identified by the Experts (Table and Figure). Figure. Typical HGB and RET profiles in two highly suspicious athletes. A. Sample 2: simultaneous increases in HGB and RET (likely ESA stimulation) in a male. B. Samples 3, 6 and 7: "OFF" picture, with high HGB and low RET in a female. Sample 10: normal HGB and increased RET (ESA or blood withdrawal).
Conclusions: ABP is a powerful tool for indirect doping detection, based on the recognition of specific, unphysiological changes triggered by blood doping. The effect of factors of heterogeneity, such as sex and altitude, must also be considered. Schumacher YO, et al. Drug Test Anal 2012, 4:846-853. Sottas PE, et al. Clin Chem 2011, 57:969-976.
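The OFF score used for longitudinal monitoring, HGB (g/L) minus 60 times the square root of the reticulocyte percentage, can be sketched directly. The example values below are invented for illustration and are not athlete data:

```python
import math

def off_score(hgb_g_per_l: float, ret_pct: float) -> float:
    """OFF score: HGB (g/L) - 60 * sqrt(RET%). High HGB combined with
    low reticulocytes (suppressed erythropoiesis) drives the score up."""
    return hgb_g_per_l - 60.0 * math.sqrt(ret_pct)

# A suppressed-erythropoiesis ("OFF") picture: high HGB, low RET.
off_high = off_score(170.0, 0.2)
# A more typical picture: moderate HGB, normal RET.
off_norm = off_score(145.0, 1.0)   # 145 - 60 = 85.0
```

In the ABP itself this score is not compared to a fixed cutoff; the probabilistic inference mentioned above sets individual expected limits for each athlete's longitudinal profile.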
Abstract:
In a series of three experiments, participants made inferences about which one of a pair of objects scored higher on a criterion. The first experiment was designed to contrast the prediction of Probabilistic Mental Model theory (Gigerenzer, Hoffrage, & Kleinbölting, 1991) concerning sampling procedure with the hard-easy effect. The experiment failed to support the theory's prediction that a particular pair of randomly sampled item sets would differ in percentage correct; but the observation that German participants performed practically as well on comparisons between U.S. cities (many of which they did not even recognize) as on comparisons between German cities (about which they knew much more) ultimately led to the formulation of the recognition heuristic. Experiment 2 was a second, this time successful, attempt to unconfound item difficulty and sampling procedure. In Experiment 3, participants' knowledge and recognition of each city was elicited, and how often this could be used to make an inference was manipulated. Choices were consistent with the recognition heuristic in about 80% of the cases when it discriminated and people had no additional knowledge about the recognized city (and in about 90% when they had such knowledge). The frequency with which the heuristic could be used affected the percentage correct, mean confidence, and overconfidence as predicted. The size of the reference class, which was also manipulated, modified these effects in meaningful and theoretically important ways.
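The recognition heuristic described above has a one-rule decision logic: if exactly one object is recognized, infer that it scores higher; if both or neither are recognized, the heuristic does not discriminate. A minimal sketch, with an invented recognition set and city names used only as placeholders:

```python
# Hypothetical recognition set for one participant (invented).
recognized = {"Munich", "Hamburg", "Berlin"}

def recognition_heuristic(a: str, b: str):
    """Return the object inferred to score higher on the criterion,
    or None when the heuristic does not discriminate (both or
    neither object recognized)."""
    ra, rb = a in recognized, b in recognized
    if ra and not rb:
        return a
    if rb and not ra:
        return b
    return None

choice = recognition_heuristic("Munich", "Gütersloh")   # discriminates
no_call = recognition_heuristic("Munich", "Berlin")     # both recognized
```

Experiment 3's manipulation amounts to varying how often pairs fall into the discriminating case versus the `None` case.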
Abstract:
Quantitative research that aimed to identify the mean total cost (MTC) of connecting, maintaining and disconnecting a patient-controlled analgesia (PCA) pump in the management of pain. The non-probabilistic sample corresponded to the observation of 81 procedures in 17 units of the Central Institute of the Clinics Hospital, Faculty of Medicine, University of São Paulo. We calculated the MTC by multiplying the time spent by nurses by the unit cost of direct labor, adding the cost of materials and medications/solutions. The MTC of connecting was R$ 107.91; of maintenance, R$ 110.55; and of disconnecting, R$ 4.94. The results will support discussions about the need to transfer money from the Unified Health System to the hospital units that perform this technique of analgesic therapy, and will contribute to cost management aimed at efficient and effective decision-making in the allocation of available resources.
Abstract:
Sobriety checkpoints are not usually randomly located by traffic authorities. As such, information provided by non-random alcohol tests cannot be used to infer the characteristics of the general driving population. In this paper a case study is presented in which the prevalence of alcohol-impaired driving is estimated for the general population of drivers. A stratified probabilistic sample was designed to represent vehicles circulating in non-urban areas of Catalonia (Spain), a region characterized by its complex transportation network and dense traffic around the metropolis of Barcelona. Random breath alcohol concentration tests were performed during spring 2012 on 7,596 drivers. The estimated prevalence of alcohol-impaired drivers was 1.29%, which is roughly a third of the rate obtained in non-random tests. Higher rates were found on weekends (1.90% on Saturdays, 4.29% on Sundays) and especially at night. The rate is higher for men (1.45%) than for women (0.64%) and the percentage of positive outcomes shows an increasing pattern with age. In vehicles with two occupants, the proportion of alcohol-impaired drivers is estimated at 2.62%, but when the driver was alone the rate drops to 0.84%, which might reflect the socialization of drinking habits. The results are compared with outcomes in previous surveys, showing a decreasing trend in the prevalence of alcohol-impaired drivers over time.
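The design logic of the study above, correcting for non-random checkpoint placement by weighting each stratum's positive rate by its share of the circulating vehicle population, can be sketched as follows. The strata, shares, counts and positives below are invented and do not reproduce the study's data:

```python
# Hypothetical stratified sample: (stratum label, population share of
# circulating vehicles, drivers tested, positive breath tests).
strata = [
    ("weekday day",   0.55, 3000, 20),
    ("weekend day",   0.25, 2500, 45),
    ("weekend night", 0.20, 2096, 90),
]

# Stratified estimator: weight each stratum's observed positive rate
# by that stratum's share of the driving population.
prevalence = sum(
    share * (positives / tested)
    for _, share, tested, positives in strata
)
pct = 100 * prevalence
```

Because high-alcohol strata (e.g. weekend nights) are oversampled by targeted checkpoints, the unweighted pooled rate overstates prevalence; the weighting is what lets a random roadside design speak for the general driving population.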
Abstract:
Objective To analyze the determinants of emergency contraception non-use among women in unplanned and ambivalent pregnancies. Method Cross-sectional study with a probabilistic sample of 366 pregnant women from 12 primary health care units in the city of São Paulo, Brazil. A multinomial logistic regression was performed, comparing three groups: women who used emergency contraception to prevent ongoing pregnancies (reference); women who made no use of emergency contraception, but used other contraceptive methods; and women who made no use of any contraceptive method at all. Results Cohabitation with a partner was the common determinant of emergency contraception non-use. No pregnancy risk awareness, ambivalent pregnancies and no previous use of emergency contraception also contributed to emergency contraception non-use. Conclusion Contrary to what is pointed out in the literature, knowledge of emergency contraception and of the fertile period was not associated with its use.
Abstract:
The visual cortex in each hemisphere is linked to the opposite hemisphere by axonal projections that pass through the splenium of the corpus callosum. Visual-callosal connections in humans and macaques are found along the V1/V2 border where the vertical meridian is represented. Here we identify the topography of V1 vertical midline projections through the splenium within six human subjects with normal vision using diffusion-weighted MR imaging and probabilistic diffusion tractography. Tractography seed points within the splenium were classified according to their estimated connectivity profiles to topographic subregions of V1, as defined by functional retinotopic mapping. First, we report a ventral-dorsal mapping within the splenium with fibers from ventral V1 (representing the upper visual field) projecting to the inferior-anterior corner of the splenium and fibers from dorsal V1 (representing the lower visual field) projecting to the superior-posterior end. Second, we also report an eccentricity gradient of projections from foveal-to-peripheral V1 subregions running in the anterior-superior to posterior-inferior direction, orthogonal to the dorsal-ventral mapping. These results confirm and add to a previous diffusion MRI study (Dougherty et al., 2005) which identified a dorsal/ventral mapping of human splenial fibers. These findings yield a more detailed view of the structural organization of the splenium than previously reported and offer new opportunities to study structural plasticity in the visual system.