906 results for Timed and Probabilistic Automata


Relevance:

100.00%

Publisher:

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer executing, as a subroutine, a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to incorporate more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge-learning task, however, a scalar-valued trust model of a worker's performance is not sufficient to reflect the worker's trustworthiness in each of the domains.
To address this issue, we propose a probabilistic model to jointly infer the multi-dimensional trust of workers, the multi-domain properties of questions, and the true labels of questions. Our model is flexible and can be extended to incorporate metadata associated with questions. To demonstrate this, we further propose two extended models, one handling tasks with real-valued features and the other handling tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be useful for future task assignment that adapts to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine-learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions in real-life applications are often not independent; instead, there are logical relations between them. Similarly, the workers that provide answers are not independent of each other: answers given by workers with similar attributes tend to be correlated. We therefore propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we devise a scalable joint inference algorithm based on the alternating direction method of multipliers. The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost.
The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completing a challenging task is more costly than a click-away question. We address the problem of optimally assigning heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates budget, the trustworthiness of the crowd, and the cost of obtaining labels: a higher budget, more trustworthy crowds, and less costly tasks result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component and can therefore be combined with generic trust evaluation algorithms.
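The trust-weighted aggregation idea above can be illustrated with a deliberately simplified single consensus step. All node names, values, and trust scores below are hypothetical, and the thesis's actual two-layer algorithm (with header-propagated global trust updates) is far more elaborate; this is only a minimal sketch of the intuition.

```python
# Minimal sketch of one trust-weighted consensus step (hypothetical
# simplification of the two-layer algorithm described in the abstract).

def consensus_step(values, trust):
    """Move every node to the trust-weighted average of all reported values.

    values: dict node -> current scalar state
    trust:  dict node -> trust score in [0, 1]; adversaries should sit near 0
    """
    total = sum(trust[n] for n in values)
    weighted = sum(trust[n] * values[n] for n in values)
    avg = weighted / total
    return {n: avg for n in values}

# Three normal nodes plus one adversary reporting an outlier; the
# adversary's zero trust score removes its influence on the average.
values = {"a": 1.0, "b": 1.2, "c": 0.9, "adv": 100.0}
trust = {"a": 1.0, "b": 1.0, "c": 1.0, "adv": 0.0}
updated = consensus_step(values, trust)  # every node lands near 1.033
```

With a nonzero but small adversary trust score, the outlier would pull the average only slightly, which is the essential robustness property the trust layer provides.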


Objective: Cost-effectiveness analysis of a 6-month treatment with apixaban (10 mg/12h for the first 7 days; 5 mg/12h afterwards) for the treatment of a first event of venous thromboembolism (VTE) and the prevention of recurrences, versus treatment with low-molecular-weight heparins/vitamin K antagonists (LMWH/VKA). Material and methods: A lifetime Markov model with 13 health states was used to describe the course of the disease. Efficacy and safety data were obtained from the AMPLIFY and AMPLIFY-EXT clinical trials; health outcomes were measured as life-years gained (LYG) and quality-adjusted life-years (QALYs). The analysis was conducted from the perspective of the Spanish National Health System (NHS). Costs of drugs, VTE management, and complications were obtained from several Spanish data sources (€, 2014). A 3% discount rate was applied to both health outcomes and costs. Univariate and probabilistic sensitivity analyses (SA) were performed to assess the robustness of the results. Results: Apixaban was the most effective therapy, with 7.182 LYG and 5.865 QALYs, versus 7.160 LYG and 5.838 QALYs with LMWH/VKA. Furthermore, apixaban had a lower total cost (€13,374.70 vs €13,738.30). Probabilistic SA confirmed the dominance of apixaban (better health outcomes at lower cost) in 89% of the simulations. Conclusions: From the NHS perspective, apixaban 5 mg/12h was an efficient therapeutic strategy for the treatment of VTE and prevention of recurrences, compared with LMWH/VKA.
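For readers unfamiliar with the modelling approach, the mechanics of a discounted Markov cohort model can be sketched with a toy two-state version. The state structure, transition probability, utility, and cost values below are hypothetical, not the study's 13-state model or its inputs.

```python
# Toy two-state (alive/dead) Markov cohort model with 3% discounting.
# Illustrative only: all parameter values are hypothetical.

def markov_qalys(p_stay, utility, cost_per_cycle, cycles, discount=0.03):
    alive = 1.0            # fraction of the cohort still in the "alive" state
    qalys = costs = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t   # discount factor for cycle t
        qalys += d * alive * utility      # discounted quality-adjusted time
        costs += d * alive * cost_per_cycle
        alive *= p_stay                   # transition: survive to next cycle
    return qalys, costs

q, c = markov_qalys(p_stay=0.97, utility=0.8, cost_per_cycle=500.0, cycles=10)
```

A real model of this kind tracks many health states (e.g. recurrence, bleeding, post-thrombotic complications) with a full transition matrix, but the accumulate-then-discount loop is the same.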


We consider binary infinite order stochastic chains perturbed by a random noise. This means that at each time step, the value assumed by the chain can be randomly and independently flipped with a small fixed probability. We show that the transition probabilities of the perturbed chain are uniformly close to the corresponding transition probabilities of the original chain. As a consequence, in the case of stochastic chains with unbounded but otherwise finite variable length memory, we show that it is possible to recover the context tree of the original chain, using a suitable version of the algorithm Context, provided that the noise is small enough.
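The perturbation model in the abstract, each symbol independently flipped with a small fixed probability, is easy to simulate. The chain, flip probability, and random seed below are arbitrary choices for illustration.

```python
# Sketch of the noise model: each binary symbol is independently flipped
# with a small fixed probability eps (all values here are arbitrary).
import random

def perturb(chain, eps, rng=random.Random(0)):
    return [1 - x if rng.random() < eps else x for x in chain]

original = [0, 1, 1, 0, 1, 0, 0, 1] * 100   # 800 symbols
noisy = perturb(original, eps=0.05)

# With small eps the noisy chain agrees with the original almost everywhere,
# which is the intuition behind the transition probabilities of the
# perturbed chain staying uniformly close to those of the original.
agreement = sum(a == b for a, b in zip(original, noisy)) / len(original)
```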


This article examines Simpson's paradox as applied to the theory of probabilities and percentages. The author discusses possible flaws in the paradox and compares it to the Sure Thing Principle, statistical inference, causal inference, and probabilistic analyses of causation.
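The paradox itself is easy to reproduce numerically. The counts below are the classic kidney-stone textbook example, not data from this article: treatment A wins within each severity subgroup yet loses in the aggregate, because the subgroup sizes are unbalanced.

```python
# Classic numeric instance of Simpson's paradox (textbook kidney-stone
# counts, not data from the article): (successes, trials) per arm.
groups = {
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Within every subgroup, A has the higher success rate...
for arms in groups.values():
    assert rate(*arms["A"]) > rate(*arms["B"])

# ...yet aggregated over subgroups, B comes out ahead.
a_succ = sum(arms["A"][0] for arms in groups.values())
a_tot = sum(arms["A"][1] for arms in groups.values())
b_succ = sum(arms["B"][0] for arms in groups.values())
b_tot = sum(arms["B"][1] for arms in groups.values())
overall_a, overall_b = a_succ / a_tot, b_succ / b_tot   # 0.78 vs ~0.83
```

The reversal arises because A is applied mostly to the hard (severe) cases and B mostly to the easy ones, so the aggregate comparison mixes unlike populations.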


This paper presents results on the simulation of the solid state sintering of copper wires using Monte Carlo techniques based on elements of lattice theory and cellular automata. The initial structure is superimposed onto a triangular, two-dimensional lattice, where each lattice site corresponds to either an atom or vacancy. The number of vacancies varies with the simulation temperature, while a cluster of vacancies is a pore. To simulate sintering, lattice sites are picked at random and reoriented in terms of an atomistic model governing mass transport. The probability that an atom has sufficient energy to jump to a vacant lattice site is related to the jump frequency, and hence the diffusion coefficient, while the probability that an atomic jump will be accepted is related to the change in energy of the system as a result of the jump, as determined by the change in the number of nearest neighbours. The jump frequency is also used to relate model time, measured in Monte Carlo Steps, to the actual sintering time. The model incorporates bulk, grain boundary and surface diffusion terms and includes vacancy annihilation on the grain boundaries. The predictions of the model were found to be consistent with experimental data, both in terms of the microstructural evolution and in terms of the sintering time. (C) 2002 Elsevier Science B.V. All rights reserved.
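The acceptance rule described above, always accept energy-lowering jumps and otherwise accept with a probability set by the energy change, is a standard Metropolis criterion. The bond-energy bookkeeping and parameter values below are hypothetical simplifications of the paper's model.

```python
# Metropolis-style acceptance of an attempted atom jump: always accept
# moves that lower the energy (gain nearest neighbours), otherwise accept
# with probability exp(-dE / kT).  Parameter values are hypothetical.
import math
import random

def accept_jump(nn_before, nn_after, kT=1.0, bond_energy=1.0,
                rng=random.Random(0)):
    # Each lost nearest-neighbour bond raises the system energy.
    dE = (nn_before - nn_after) * bond_energy
    if dE <= 0:
        return True
    return rng.random() < math.exp(-dE / kT)
```

A jump that gains two neighbours is always accepted, while one that loses two is accepted roughly exp(-2) ≈ 13.5% of the time at kT = 1, which is how the simulation temperature controls microstructural evolution.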


International Scientific Forum, ISF 2013, 12-14 December 2013, Tirana.


3rd SMTDA Conference Proceedings, 11-14 June 2014, Lisbon, Portugal.


BACKGROUND: High-grade gliomas are aggressive, incurable tumors characterized by extensive diffuse invasion of the normal brain parenchyma. Novel therapies at best prolong survival; their costs are formidable and the benefit is marginal. Economic restrictions thus require knowledge of the cost-effectiveness of treatments. Here, we show the cost-effectiveness of enhanced resections in malignant glioma surgery using a well-characterized tool for intraoperative tumor visualization, 5-aminolevulinic acid (5-ALA). OBJECTIVE: To evaluate the cost-effectiveness of 5-ALA fluorescence-guided neurosurgery compared with white-light surgery in adult patients with newly diagnosed high-grade glioma, adopting the perspective of the Portuguese National Health Service. METHODS: We used a Markov model (cohort simulation). Transition probabilities were estimated using data from 1 randomized clinical trial and 1 noninterventional prospective study. Utility values and resource use were obtained from the published literature and expert opinion. Unit costs were taken from official Portuguese reimbursement lists (2012 values). The health outcomes considered were quality-adjusted life-years, life-years, and progression-free life-years. Extensive 1-way and probabilistic sensitivity analyses were performed. RESULTS: The incremental cost-effectiveness ratios are below €10,000 for all evaluated outcomes: around €9,100 per quality-adjusted life-year gained, €6,700 per life-year gained, and €8,800 per progression-free life-year gained. The probability that 5-ALA fluorescence-guided surgery is cost-effective at a threshold of €20,000 is 96.0% for quality-adjusted life-years, 99.6% for life-years, and 98.8% for progression-free life-years. CONCLUSION: 5-ALA fluorescence-guided surgery appears to be cost-effective in newly diagnosed high-grade gliomas compared with white-light surgery. This example demonstrates that cost-effectiveness analyses for malignant glioma surgery are feasible on the basis of existing data.
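The figures reported above are incremental cost-effectiveness ratios (ICERs), i.e. incremental cost divided by incremental health effect. A minimal sketch of the calculation, using hypothetical numbers rather than the study's actual cost and QALY inputs:

```python
# Incremental cost-effectiveness ratio: extra cost per extra unit of
# health effect (e.g. per QALY).  All numbers below are hypothetical.

def icer(cost_new, cost_old, effect_new, effect_old):
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical arms: 1,820 EUR more spent for 0.2 extra QALYs gives
# 9,100 EUR per QALY gained, well under a 20,000 EUR threshold.
value = icer(21_820.0, 20_000.0, 5.2, 5.0)
```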


Dissertation submitted for the degree of Doctor in Civil Engineering


The European Court of Justice has held that, as from 21 December 2012, insurers may no longer charge men and women differently on the basis of scientific evidence that is statistically linked to their sex, effectively prohibiting the use of sex as a factor in the calculation of premiums and benefits for the purposes of insurance and related financial services throughout the European Union. This ruling marks a sharp turn away from the traditional view that insurers should be allowed to apply just about any risk assessment criterion, so long as it is sustained by the findings of actuarial science. It exposed the naïveté behind the assumption that insurers' recourse to statistical data and probabilistic analysis, given their scientific nature, would suffice to keep them out of harm's way. In this article I look at the flaws of this assumption and ask whether this judicial decision, whilst constituting a most welcome landmark in the pursuit of equality between men and women, has nonetheless gone too far by saying too little on the million-dollar question of what separates admissible criteria of differentiation from inadmissible forms of discrimination.


Machine ethics is an interdisciplinary field of inquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. While some approaches provide implementations in Logic Programming (LP) systems, they have not exploited LP-based reasoning features that appear essential for moral reasoning. This PhD thesis investigates further the appropriateness of LP, notably a combination of LP-based reasoning features, including techniques available in LP systems, for machine ethics. Moral facets, as studied in moral philosophy and psychology, that are amenable to computational modeling are identified and mapped to appropriate LP concepts for representing and reasoning about them. The main contributions of the thesis are twofold. First, novel approaches are proposed for employing tabling in contextual abduction and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the aforementioned combined abduction and updating technique with tabling. They are all important for modeling various issues of the aforementioned moral facets. Second, a variety of LP-based reasoning features are applied to model the identified moral facets, through moral examples taken off the shelf from the morality literature.
These applications include: (1) Modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) Modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) Modeling moral updating (that allows other – possibly overriding – moral rules to be adopted by an agent, on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) Modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used for formulating DDE.


Background The 'database search problem', that is, the strengthening of a case, in terms of probative value, against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and sharply opposing conclusions. This represents a challenging obstacle in teaching, but it also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular, Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions.
The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
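As a point of orientation for the quantities such networks manipulate, the simplest 'island problem' style posterior can be written down directly. N and gamma below are hypothetical, and this sketch deliberately omits the database-search-specific corrections debated in the literature, which is precisely what the paper's Bayesian networks are built to handle.

```python
# Back-of-the-envelope posterior for "the matching individual is the
# source": uniform prior over N potential sources, one true source that
# matches for certain, and each of the other N-1 individuals matching by
# chance with probability gamma (the profile frequency).  This is the
# plain island problem, not the refined database-search analyses that
# the paper models with Bayesian networks.

def posterior_source(N, gamma):
    return 1.0 / (1.0 + (N - 1) * gamma)

# Hypothetical: 10,000 potential sources, profile frequency 1e-5.
p = posterior_source(10_000, 1e-5)   # about 0.91
```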


The visual cortex in each hemisphere is linked to the opposite hemisphere by axonal projections that pass through the splenium of the corpus callosum. Visual-callosal connections in humans and macaques are found along the V1/V2 border where the vertical meridian is represented. Here we identify the topography of V1 vertical midline projections through the splenium within six human subjects with normal vision using diffusion-weighted MR imaging and probabilistic diffusion tractography. Tractography seed points within the splenium were classified according to their estimated connectivity profiles to topographic subregions of V1, as defined by functional retinotopic mapping. First, we report a ventral-dorsal mapping within the splenium with fibers from ventral V1 (representing the upper visual field) projecting to the inferior-anterior corner of the splenium and fibers from dorsal V1 (representing the lower visual field) projecting to the superior-posterior end. Second, we also report an eccentricity gradient of projections from foveal-to-peripheral V1 subregions running in the anterior-superior to posterior-inferior direction, orthogonal to the dorsal-ventral mapping. These results confirm and add to a previous diffusion MRI study (Dougherty et al., 2005) which identified a dorsal/ventral mapping of human splenial fibers. These findings yield a more detailed view of the structural organization of the splenium than previously reported and offer new opportunities to study structural plasticity in the visual system.


This paper analyses the predictive ability of quantitative precipitation forecasts (QPF) and the so-called "poor-man" rainfall probabilistic forecasts (RPF). With this aim, the full set of warnings issued by the Meteorological Service of Catalonia (SMC) for potentially dangerous events due to severe precipitation has been analysed for the year 2008. For each of the 37 warnings, the QPFs obtained from the limited-area model MM5 have been verified against hourly precipitation data provided by the rain-gauge network covering Catalonia (NE Spain), managed by the SMC. For a group of five selected case studies, a QPF comparison has been undertaken between the MM5 and COSMO-I7 limited-area models. Although MM5's predictive ability has been examined for these five cases by making use of satellite data, this paper only shows in detail the heavy precipitation event of 9–10 May 2008. Finally, the "poor-man" rainfall probabilistic forecasts (RPF) issued by the SMC at regional scale have also been tested against hourly precipitation observations. Verification results show that for long events (>24 h) MM5 tends to overestimate total precipitation, whereas for short events (≤24 h) the model tends instead to underestimate it. The analysis of the five case studies concludes that most of MM5's QPF errors are mainly triggered by a very poor representation of some of its cloud microphysical species, particularly cloud liquid water and, to a lesser degree, water vapor. The models' performance comparison demonstrates that MM5 and COSMO-I7 are at the same level of QPF skill, at least for the intense-rainfall events dealt with in the five case studies, whilst the warnings based on RPF issued by the SMC have proven fairly correct when tested against hourly observed precipitation over 6-h intervals and at a small regional scale.
Throughout this study we have dealt only with (SMC-issued) warning episodes in order to analyse deterministic (MM5 and COSMO-I7) and probabilistic (SMC) rainfall forecasts, so episodes that might (or might not) have been missed by the official SMC warnings have not been taken into account. Therefore, whenever we talk about "misses", it is always in relation to the deterministic LAMs' QPFs.
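Verification of warnings against observations of the kind reported here typically reduces to 2x2 contingency-table scores. A minimal sketch with hypothetical counts (the paper does not publish this exact breakdown):

```python
# Standard categorical verification scores from a 2x2 contingency table;
# the counts below are hypothetical.

def pod_far(hits, misses, false_alarms):
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    return pod, far

pod, far = pod_far(hits=30, misses=5, false_alarms=10)  # 0.857..., 0.25
```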


Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest-neighbour (NN) methods, which are known to have limitations when dealing with high-dimensional data. We apply Support Vector Machines (SVMs) to a dataset from Lochaber, Scotland to assess their applicability to avalanche forecasting. SVMs belong to a family of theoretically grounded techniques from machine learning and are designed to deal with high-dimensional data. Initial experiments showed that SVMs gave results comparable to those of NN methods for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to handle high dimensionality in producing a spatial forecast show promise, but require further work.
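The nearest-neighbour baseline that the SVMs are compared against can be sketched as follows: the forecast probability is the fraction of avalanche days among the k most similar historical days. The features, distance metric, and data below are hypothetical toy values, not the Lochaber dataset.

```python
# "Poor-man's" probabilistic forecast via k nearest neighbours: the
# avalanche probability is the avalanche fraction among the k historical
# days most similar to today.  All data here are hypothetical.

def nn_forecast(today, history, k=3):
    """history: list of (feature_vector, avalanche_occurred: bool)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda h: dist(today, h[0]))[:k]
    return sum(occurred for _, occurred in nearest) / k

history = [((0.1, 0.2), False), ((0.9, 0.8), True),
           ((0.8, 0.9), True), ((0.2, 0.1), False)]
p = nn_forecast((0.85, 0.85), history, k=3)   # 2 of 3 neighbours -> 2/3
```

With many weather and snowpack variables per day, the Euclidean distance above becomes unreliable, which is the high-dimensionality limitation of NN methods that motivates trying SVMs.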