46 results for probabilistic refinement calculus
Behind the curtains of privacy calculus on social networking sites: the study of Germany and the USA
Abstract:
As social networking sites (SNSs) become increasingly global, the issue of cultural differences in participation patterns becomes acute. However, current research offers only limited insights into the role of culture behind SNS usage. Aiming to fill this gap, this study adopts a ‘privacy calculus’ perspective to study the differences between German and American SNS users. Results of structural equation modeling and multi-group analysis reveal distinct variability in the cognitive patterns of American and German subjects. We contribute to theory by rejecting the assumption that privacy-calculus processes are universal. From a practical standpoint, our results signal that SNS providers cannot rely on “proven” means of ensuring user participation when crossing geographic boundaries. When financial means are limited, SNS providers should direct their investments into enhancing platform enjoyment, granting users more control, and, paradoxically, lobbying for stronger legal safeguards of user privacy.
Abstract:
Worldwide social networks, like Facebook, face fierce competition from local platforms when expanding globally. To remain attractive, social network providers need to encourage user self-disclosure. Yet little research exists on how cultural differences affect self-disclosure on these platforms. Addressing this gap, this study explores the differences in perceptions of disclosure-relevant determinants between German and US users. A survey of Facebook members indicates that German users expect more damage from, and attribute a higher probability to, privacy-related violations. On the other hand, even though American users show a higher level of privacy concern, they extract more benefits from their social networking activities, place more trust in the service provider and in legal assurances, and perceive more control. These factors may explain the higher level of self-disclosure reported by American users. Our results provide relevant insights for social network providers, who can adjust their expansion strategies to cultural differences.
Abstract:
How do probabilistic models represent their targets and how do they allow us to learn about them? The answer to this question depends on a number of details, in particular on the meaning of the probabilities involved. To classify the options, a minimalist conception of representation (Suárez 2004) is adopted: modelers devise substitutes (“sources”) of their targets and investigate them to infer something about the target. Probabilistic models allow us to infer probabilities about the target from probabilities about the source. This leads to a framework in which we can systematically distinguish between different models of probabilistic modeling. I develop a fully Bayesian view of probabilistic modeling, but I argue that, as an alternative, Bayesian degrees of belief about the target may be derived from ontic probabilities about the source. Remarkably, some accounts of ontic probabilities can avoid problems if they are supposed to apply to sources only.
Abstract:
Derivation of probability estimates complementary to geophysical data sets has gained particular attention in recent years. Information about the confidence level of the provided physical quantities is required to construct an error budget for higher-level products and to correctly interpret the final results of a particular analysis. For the generation of products based on satellite data, a common input is a cloud mask, which allows discrimination between surface and cloud signals. The surface information is further divided into snow and snow-free components. At any step of this discrimination process, a misclassification in the cloud/snow mask propagates to higher-level products and may alter their usability. Within this scope, a novel probabilistic cloud mask (PCM) algorithm suited to 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data is proposed, which provides three types of probability estimates: between cloudy and clear-sky, cloudy and snow, and clear-sky and snow conditions. Unlike the majority of available techniques, which are usually based on a decision-tree approach, the PCM algorithm uses all spectral, angular, and ancillary information in a single step to retrieve probability estimates from precomputed look-up tables (LUTs). Moreover, the need to derive a single threshold value for each spectral test is avoided by treating the inputs as a multidimensional information space divided into small bins by an extensive set of intervals. The discrimination between snow and ice clouds and the detection of broken, thin clouds were enhanced by means of the invariant coordinate system (ICS) transformation. The study area covers a wide range of environmental conditions, spanning from Iceland through central Europe to the northern parts of Africa, which pose diverse difficulties for cloud/snow masking algorithms. The retrieved PCM cloud classification was compared to the Polar Platform System (PPS) version 2012 and Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 cloud masks, SYNOP (surface synoptic observations) weather reports, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask version 3, and the MODIS collection 5 snow mask. The analyses demonstrate the good detection skill of the PCM method, with results comparable to or better than those of the reference PPS algorithm.
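The retrieval step described above, mapping a multidimensional observation to a binned look-up table, can be sketched as follows. This is a minimal Python illustration under assumed inputs: the feature names, bin edges, and placeholder LUT values are hypothetical, since the actual PCM channels, intervals, and precomputed probabilities are not given in the abstract.

```python
import numpy as np

# Hypothetical bin edges for each input dimension (the real PCM divides
# its spectral, angular and ancillary inputs by an extensive set of intervals).
BIN_EDGES = {
    "reflectance_06": np.linspace(0.0, 1.0, 21),      # 20 bins
    "brightness_t11": np.linspace(200.0, 320.0, 25),  # 24 bins
    "sun_zenith":     np.linspace(0.0, 90.0, 10),     # 9 bins
}

# Precomputed LUT: P(cloudy | bin) for every cell of the binned
# multidimensional information space; placeholder values here.
rng = np.random.default_rng(0)
LUT_CLOUDY = rng.uniform(0.0, 1.0, size=(20, 24, 9))

def cloud_probability(reflectance_06, brightness_t11, sun_zenith):
    """Map one observation to its bin indices and return the stored probability."""
    idx = []
    for value, edges in zip(
        (reflectance_06, brightness_t11, sun_zenith), BIN_EDGES.values()
    ):
        # searchsorted finds the enclosing bin; clip keeps edge values in range.
        i = np.clip(np.searchsorted(edges, value) - 1, 0, len(edges) - 2)
        idx.append(int(i))
    return LUT_CLOUDY[tuple(idx)]

print(cloud_probability(0.42, 265.0, 55.0))
```

The point of the single-step design is visible in the sketch: no per-test threshold is ever compared, only a joint bin index into a precomputed probability table.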
Abstract:
A fundamental capacity of the human brain is to learn relations (contingencies) between environmental stimuli and the consequences of their occurrence. Some contingencies are probabilistic; that is, they predict an event in some situations but not in all. Animal studies suggest that damage to limbic structures or the prefrontal cortex may disturb probabilistic learning. The authors studied the learning of probabilistic contingencies in amnesic patients with limbic lesions, patients with prefrontal cortex damage, and healthy controls. Across 120 trials, participants learned contingent relations between spatial sequences and a button press. Amnesic patients had learning comparable to that of control subjects but failed to indicate what they had learned. Across the last 60 trials, amnesic patients and control subjects learned to avoid a noncontingent choice better than frontal patients. These results indicate that probabilistic learning does not depend on the brain structures supporting declarative memory.
Abstract:
Prolonged ischemia of skeletal muscle tissue, followed by reperfusion, leads to ischemia/reperfusion injury (IRI), a feared local and systemic inflammatory reaction. With respect to the 3Rs, we wanted to determine which parameters for the assessment of IRI require a reperfusion time of 24 h and for which 2 h of reperfusion are sufficient. Rats were subjected to 3 h of hind limb ischemia followed by 2 h or 24 h of reperfusion. Human plasma-derived C1 inhibitor was used as a drug to prevent reperfusion injury. For 2 h of reperfusion the rats stayed under anesthesia throughout (severity grade 1), whereas for 24 h they were awake under analgesia during reperfusion (grade 2). The femoral artery was clamped and a tourniquet was placed, while venous return was maintained. C1 esterase inhibitor was administered systemically 5 min before the induction of ischemia. No differences in local muscle edema formation or in depositions of immunoglobulin G and immunoglobulin M were observed between 2 h and 24 h (P > 0.05), whereas lung edema was observed only after 24 h. Muscle viability was significantly lower after 24 h than after 2 h of reperfusion (P < 0.05). Increased plasma creatine kinase MM (CK-MM) and platelet-derived growth factor BB (PDGF-BB) could be detected after 2 h, but not after 24 h, of reperfusion. By contrast, depositions of C3b/c and fibrin in muscle were detected only after 24 h (P < 0.001). In conclusion, for a first screening of drug candidates to reduce IRI, 2 h of reperfusion is sufficient, and this reduces the severity of the animal experiment. Twenty-four-hour reperfusion is needed only for in-depth analysis of the mechanisms of IRI, including lung damage.
Abstract:
The scientific literature of laboratory animal research is replete with papers reporting poor reproducibility of results as well as failures to translate results to clinical trials in humans. This may stem in part from poor design and conduct of animal experiments. Despite widespread recognition of these problems and the implementation of guidelines to attenuate them, a review of the literature suggests that the design and conduct of laboratory animal research are still in need of refinement. This paper reviews and discusses possible sources of bias, highlights the advantages and limitations of strategies proposed to alleviate them, and provides a conceptual framework for improving the reproducibility of laboratory animal research.
Abstract:
BACKGROUND Record linkage of existing individual health care data is an efficient way to answer important epidemiological research questions. Reuse of individual health-related data faces several problems: either a unique personal identifier, such as a social security number, is not available, or non-unique person-identifiable information, such as names, is privacy protected and cannot be accessed. A solution for protecting privacy in probabilistic record linkage is to encrypt this sensitive information. Unfortunately, the encrypted hash codes of two names differ completely even if the plain names differ by only a single character. Therefore, standard encryption methods cannot be applied. To overcome these challenges, we developed the Privacy Preserving Probabilistic Record Linkage (P3RL) method. METHODS In the Privacy Preserving Probabilistic Record Linkage method we apply a three-party protocol, with two sites collecting individual data and an independent trusted linkage center as the third partner. Our method consists of three main steps: pre-processing, encryption and probabilistic record linkage. Data pre-processing and encryption are done at the sites by local personnel. To guarantee a similar quality and format of variables and an identical encryption procedure at each site, the linkage center generates semi-automated pre-processing and encryption templates. To retrieve the information (i.e. the data structure) needed to create the templates without ever accessing plain person-identifiable information, we introduce a novel method of data masking. Sensitive string variables are encrypted using Bloom filters, which enables the calculation of similarity coefficients. For date variables, we developed special encryption procedures to handle the most common date errors. The linkage center performs probabilistic record linkage with the encrypted person-identifiable information and the plain non-sensitive variables. RESULTS In this paper we describe step by step how to link existing health-related data using encryption methods that preserve the privacy of the persons in the study. CONCLUSION Privacy Preserving Probabilistic Record Linkage expands record linkage facilities to settings where a unique identifier is unavailable and/or regulations restrict access to the non-unique person-identifiable information needed to link existing health-related data sets. Automated pre-processing and encryption fully protect sensitive information, ensuring participant confidentiality. This method is suitable not just for epidemiological research but for any setting with similar challenges.
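The string encryption step named in the abstract, Bloom filters that still permit similarity coefficients, can be illustrated in Python as follows. This is a minimal sketch under assumed parameters (bigram tokenization, a 1000-bit filter, 15 keyed hash positions per bigram, and a Dice coefficient); the actual P3RL parameters, hash choices, and key management are not specified here.

```python
import hashlib
import hmac

FILTER_BITS = 1000             # assumed filter length
NUM_HASHES = 15                # assumed hash positions per bigram
SECRET_KEY = b"shared-secret"  # key shared by the two data-holding sites

def bigrams(name: str):
    """Split a name into overlapping 2-grams, padded at the ends."""
    padded = f"_{name.lower().strip()}_"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

def bloom_encode(name: str) -> set[int]:
    """Map each bigram to NUM_HASHES bit positions via keyed double hashing."""
    bits = set()
    for gram in bigrams(name):
        h1 = int(hmac.new(SECRET_KEY, gram.encode(), hashlib.sha1).hexdigest(), 16)
        h2 = int(hmac.new(SECRET_KEY, gram.encode(), hashlib.md5).hexdigest(), 16)
        for i in range(NUM_HASHES):
            bits.add((h1 + i * h2) % FILTER_BITS)
    return bits

def dice_similarity(a: set[int], b: set[int]) -> float:
    """Dice coefficient on set bits: tolerant of single-character differences."""
    return 2 * len(a & b) / (len(a) + len(b))

# Similar names keep a high score even though a plain hash of the full
# name would differ completely (the problem the abstract describes).
print(dice_similarity(bloom_encode("Meier"), bloom_encode("Meyer")))
print(dice_similarity(bloom_encode("Meier"), bloom_encode("Smith")))
```

Because similar names share most of their bigrams, they set mostly the same bits, so the linkage center can score candidate pairs without ever seeing a plain name.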
Abstract:
In this article, we introduce the probabilistic justification logic PJ, a logic in which we can reason about the probability of justification statements. We present its syntax and semantics, and establish a strong completeness theorem. Moreover, we investigate the relationship between PJ and the logic of uncertain justifications.
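A micro-example of the kind of statement PJ is built to express, in LaTeX, under assumed notation (the abstract does not fix the symbols):

```latex
% Assumed notation: $t:\alpha$ reads "term $t$ justifies formula $\alpha$"
% (a formula of the base justification logic J); $P_{\geq s}$ is a
% probability operator. A typical PJ formula is then
\[
  P_{\geq 0.9}\,(t:\alpha),
\]
% "with probability at least $0.9$, $t$ is a justification for $\alpha$".
% The probability operators are noniterated: they apply to J-formulas,
% not to other probability formulas.
```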
Abstract:
The logic PJ is a probabilistic logic defined by adding (noniterated) probability operators to the basic justification logic J. In this paper we establish upper and lower bounds for the complexity of the derivability problem in the logic PJ. The main result of the paper is that the complexity of the derivability problem in PJ remains the same as the complexity of the derivability problem in the underlying logic J, which is Π₂ᵖ-complete. This implies that the probability operators do not increase the complexity of the logic, although they arguably enrich the expressiveness of the language.
Abstract:
We present a probabilistic justification logic, PPJ, to study rational belief, degrees of belief and justifications. We establish soundness and completeness for PPJ and show that its satisfiability problem is decidable. In the last part we use PPJ to provide a solution to the lottery paradox.
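For context, the lottery paradox that PPJ is used to resolve can be stated in standard probabilistic terms; this is the classical formulation, not PPJ's specific machinery.

```latex
% For a fair lottery with 1000 tickets, let $w_i$ mean "ticket $i$ wins".
% Each individual loss claim is highly probable,
\[
  P(\neg w_i) = 0.999 \quad \text{for each } i \in \{1,\dots,1000\},
\]
% yet the conjunction of all loss claims contradicts the certainty
% that some ticket wins:
\[
  P\Big(\textstyle\bigwedge_{i=1}^{1000} \neg w_i\Big) = 0 .
\]
% Accepting every "ticket $i$ loses" while rejecting "no ticket wins" is
% the tension a logic of rational belief such as PPJ must dissolve.
```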
Abstract:
We investigated whether a pure perceptual stream is sufficient for probabilistic sequence learning to occur within a single session or whether correlated streams are necessary, whether learning is affected by the transition probability between sequence elements, and how sequence length influences learning. In each of three experiments, we used six horizontally arranged stimulus displays consisting of the randomly ordered bigrams xo and ox. The probability of the next target location, out of two possible locations, was either .50/.50 or .75/.25, and the target was marked by an underline. In Experiment 1, a left vs. right key response to the x of the marked bigram was required in the pure perceptual learning condition, whereas a key press corresponding to the marked bigram's location (out of six) was required in the correlated streams condition (i.e., the ring, middle, or index finger of the left or right hand, respectively). The same probabilistic 3-element sequence was used in both conditions. Learning occurred only in the correlated streams condition. In Experiment 2, we investigated whether sequence length affected the learning of correlated sequences by contrasting the 3-element sequence with a 6-element sequence. Significant sequence learning occurred in all conditions. In Experiment 3, we removed a potential confound, namely the sequence of hand changes. Under these conditions, learning occurred for the 3-element sequence only, and transition probability did not affect the amount of learning. Together, these results indicate that correlated streams are necessary for probabilistic sequence learning within a single session and that sequence length can reduce the chances for learning to occur.
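One plausible reading of the probabilistic sequence manipulation is an underlying repeating sequence of locations that the target follows with probability .75, deviating to an alternative location otherwise. The Python sketch below illustrates that reading with hypothetical location codes; the abstract does not specify the exact generation procedure.

```python
import random

# Hypothetical 3-element sequence of display locations (six positions,
# coded 0-5) and an assumed alternative location for each position.
SEQUENCE = [0, 2, 5]
DEVIANTS = [4, 1, 3]

def generate_targets(n_trials: int, p_sequence: float = 0.75, seed: int = 1):
    """Return n_trials target locations; each trial follows the sequence
    with probability p_sequence, otherwise jumps to the deviant location."""
    rng = random.Random(seed)
    targets = []
    for trial in range(n_trials):
        pos = trial % len(SEQUENCE)  # the underlying sequence keeps advancing
        follow = rng.random() < p_sequence
        targets.append(SEQUENCE[pos] if follow else DEVIANTS[pos])
    return targets

print(generate_targets(12))
```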