973 results for probability models
Abstract:
In a digital world, users’ Personally Identifiable Information (PII) is normally managed with a system called an Identity Management System (IMS). There are many types of IMSs. There are situations when two or more IMSs need to communicate with each other (such as when a service provider needs to obtain some identity information about a user from a trusted identity provider). There could be interoperability issues when communicating parties use different types of IMSs. To facilitate interoperability between different IMSs, an Identity Meta System (IMetS) is normally used. An IMetS can, at least theoretically, join various types of IMSs to make them interoperable and give users the illusion that they are interacting with just one IMS. However, due to the complexity of an IMS, attempting to join various types of IMSs is a technically challenging task, let alone assessing how well an IMetS manages to integrate these IMSs. The first contribution of this thesis is the development of a generic IMS model called the Layered Identity Infrastructure Model (LIIM). Using this model, we develop a set of properties that an ideal IMetS should provide. This idealized form is then used as a benchmark to evaluate existing IMetSs. Different types of IMSs provide varying levels of privacy protection support. Unfortunately, as observed by Jøsang et al. (2007), there is insufficient privacy protection in many of the existing IMSs. In this thesis, we study and extend a type of privacy enhancing technology known as an Anonymous Credential System (ACS). In particular, we extend the ACS built on the cryptographic primitives proposed by Camenisch, Lysyanskaya, and Shoup. We call this system the Camenisch, Lysyanskaya, Shoup - Anonymous Credential System (CLS-ACS). The goal of CLS-ACS is to let users be as anonymous as possible.
Unfortunately, CLS-ACS has problems, including (1) the concentration of power in a single entity - known as the Anonymity Revocation Manager (ARM) - who, if malicious, can trivially reveal a user’s PII (resulting in an illegal revocation of the user’s anonymity), and (2) poor performance due to the resource-intensive cryptographic operations required. The second and third contributions of this thesis are the proposal of two protocols that reduce the trust dependencies on the ARM during users’ anonymity revocation. Both protocols distribute trust from the ARM to a set of n referees (n > 1), resulting in a significant reduction of the probability of an anonymity revocation being performed illegally. The first protocol, called the User Centric Anonymity Revocation Protocol (UCARP), allows a user’s anonymity to be revoked in a user-centric manner (that is, the user is aware that his/her anonymity is about to be revoked). The second protocol, called the Anonymity Revocation Protocol with Re-encryption (ARPR), allows a user’s anonymity to be revoked by a service provider in an accountable manner (that is, there is a clear mechanism to determine which entity can eventually learn - and possibly misuse - the identity of the user). The fourth contribution of this thesis is the proposal of a protocol called the Private Information Escrow bound to Multiple Conditions Protocol (PIEMCP). This protocol is designed to address the performance issue of CLS-ACS by applying the CLS-ACS in a federated single sign-on (FSSO) environment. Our analysis shows that PIEMCP can both reduce the number of expensive modular exponentiation operations required and lower the risk of illegal revocation of users’ anonymity. Finally, the protocols proposed in this thesis are complex and need to be formally evaluated to ensure that their required security properties are satisfied. In this thesis, we use Coloured Petri nets (CPNs) and their corresponding state space analysis techniques.
All of the protocols proposed in this thesis have been formally modeled and verified using these formal techniques. Therefore, the fifth contribution of this thesis is a demonstration of the applicability of CPNs and their corresponding analysis techniques to modeling and verifying privacy enhancing protocols. To our knowledge, this is the first time that CPNs have been comprehensively applied to model and verify privacy enhancing protocols. From our experience, we also propose several CPN modeling approaches, including the modeling of complex cryptographic primitives (such as zero-knowledge proof protocols), attack parameterization, and others. The proposed approaches can be applied to other security protocols, not just privacy enhancing protocols.
Abstract:
Continuum diffusion models are often used to represent the collective motion of cell populations. Most previous studies have simply used linear diffusion to represent collective cell spreading, while others found that degenerate nonlinear diffusion provides a better match to experimental cell density profiles. In the cell modeling literature there is no guidance available with regard to which approach is more appropriate for representing the spreading of cell populations. Furthermore, there is no knowledge of particular experimental measurements that can be made to distinguish between situations where these two models are appropriate. Here we provide a link between individual-based and continuum models using a multi-scale approach in which we analyze the collective motion of a population of interacting agents in a generalized lattice-based exclusion process. For round agents that occupy a single lattice site, we find that the relevant continuum description of the system is a linear diffusion equation, whereas for elongated rod-shaped agents that occupy L adjacent lattice sites we find that the relevant continuum description is connected to the porous media equation (PME). The exponent in the nonlinear diffusivity function is related to the aspect ratio of the agents. Our work provides a physical connection between modeling collective cell spreading and the use of either the linear diffusion equation or the PME to represent cell density profiles. Results suggest that when using continuum models to represent cell population spreading, we should take care to account for variations in the cell aspect ratio because different aspect ratios lead to different continuum models.
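The round-agent (single-site) case described in the abstract can be sketched as a 1D symmetric exclusion process. This is a minimal illustrative sketch, not the paper's generalized model: all parameter values and names are ours, and the rod-shaped (L-site) generalization that leads to the PME is omitted.

```python
import random

def simulate_exclusion(num_sites=100, num_agents=40, steps=5000, seed=0):
    # 1D symmetric exclusion process: each round agent occupies one
    # lattice site and attempts unbiased left/right moves; a move is
    # aborted if the target site is occupied (the exclusion rule).
    # Periodic boundaries. In the dilute, coarse-grained limit the
    # density of such agents obeys a linear diffusion equation.
    rng = random.Random(seed)
    occupied = [False] * num_sites
    positions = rng.sample(range(num_sites), num_agents)
    for p in positions:
        occupied[p] = True
    for _ in range(steps):
        i = rng.randrange(num_agents)
        target = (positions[i] + rng.choice((-1, 1))) % num_sites
        if not occupied[target]:
            occupied[positions[i]] = False
            occupied[target] = True
            positions[i] = target
    return positions
```

Averaging many such realizations gives the density profiles that the continuum equations are compared against.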
Abstract:
Chapter 2 of 'International Journalism and Democracy' provides examples of what the author dubs "deliberative journalism". Following a definition of deliberative journalism in Chapter 1, the book's second chapter examines major models of deliberative journalism that are in operation around the world. These models include public journalism, citizen journalism, community and alternative media, development journalism and peace journalism. The author argues that when these new forms of journalism are practiced well, they extend people's ability to identify, express, understand and respond to politics and issues affecting their communities. However, the main models of deliberative journalism all have contentious elements. Many deliberative journalism practitioners have been subjected to criticism for lack of objectivity and poor professional standards. Many of their activities have clearly been ill-conceived. The author also finds that neither professional nor citizen journalists have a strong understanding of what constitutes "good practice" in deliberative journalism. Furthermore, there is much debate as to whether the type of "citizen journalism" that is posted intermittently on Facebook, Twitter, blogs and other social media can even be defined as "journalism". The practice of deliberative journalism can potentially contribute to public deliberation, but it does not always do so in any immediate or obvious way. The author finds that even so, deliberative journalism indirectly strengthens the environments that support fertile deliberation and decision making. (See the Extended Abstract for further details.)
Abstract:
Over the last three years, in our Early Algebra Thinking Project, we have been studying Years 3 to 5 students’ ability to generalise in a variety of situations, namely, compensation principles in computation, the balance principle in equivalence and equations, change and inverse change rules with function machines, and pattern rules with growing patterns. In these studies, we have attempted to involve a variety of models and representations and to build students’ abilities to switch between them (in line with the theories of Dreyfus, 1991, and Duval, 1999). The results have shown the negative effect of closure on generalisation in symbolic representations, the predominance of single variance generalisation over covariant generalisation in tabular representations, and the reduced ability to readily identify commonalities and relationships in enactive and iconic representations. This chapter uses the results to explore the interrelation between generalisation and verbal and visual comprehension of context. The studies evidence the importance of understanding and communicating aspects of representational forms which allow commonalities to be seen across or between representations. Finally, the chapter explores the implications of the studies for a theory that describes a growth in integration of models and representations that leads to generalisation.
Abstract:
The purpose of this article is to examine how a consumer’s weight control beliefs (WCB), a female advertising model’s body size (slim or large) and product type influence consumer evaluations and consumer body perceptions. The study uses an experiment with 371 consumers. The design of the experiment was a 2 (weight control belief: internal, external) × 2 (model size: larger sized, slim) × 2 (product type: weight controlling, non-weight controlling) between-participants factorial design. Results reveal two key contributions. First, larger sized models result in consumers feeling less pressure from society to be thin, viewing their actual shape as slimmer relative to viewing a slim model, and wanting a thinner ideal body shape. Slim models result in the opposite effects. Second, this research reveals a boundary condition for the extent to which endorser–product congruency theory can be generalized to endorsers of a larger body size. Results indicate that consumer WCB may be a useful variable to consider when marketers consider the use of larger models in advertising.
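The eight between-participants cells of the 2 × 2 × 2 design can be enumerated directly. Factor labels are taken from the abstract; the variable names are ours:

```python
import itertools

# The three two-level factors of the between-participants design.
factors = {
    "weight_control_belief": ("internal", "external"),
    "model_size": ("larger sized", "slim"),
    "product_type": ("weight controlling", "non-weight controlling"),
}

# Cartesian product gives the eight experimental cells; each of the
# 371 participants is assigned to exactly one cell.
conditions = list(itertools.product(*factors.values()))
```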
Abstract:
"International Journalism and Democracy" explores a new form of journalism that has been dubbed ‘deliberative journalism’. As the name suggests, these forms of journalism support deliberation — the processes in which citizens recognize and discuss the issues that affect their communities, appraise the potential responses to those issues, and make decisions about whether and how to take action. Authors from across the globe identify the types of journalism that assist deliberative politics in different cultural and political contexts. Case studies from 15 nations spotlight different approaches to deliberative journalism, including strategies that have been sometimes been labeled as public or civic journalism, peace journalism, development journalism, citizen journalism, the street press, community journalism, social entrepreneurism, or other names. Countries that are studied in-depth include the United States, the United Kingdom, Germany, Finland, China, India, Japan, Indonesia, Australia, New Zealand, South Africa, Nigeria, Brazil, Colombia and Puerto Rico. Each of the approaches that are described offers a distinctive potential to support deliberative democracy. However, the book does not present any of these models or case studies as examples of categorical success. Instead, it explores different elements of the nature, strengths, limitations and challenges of each approach, as well as issues affecting their longer-term sustainability and effectiveness. The book also describes the underlying principles of deliberation, the media’s potential role in deliberation from a theoretical and practical perspective, and ongoing issues for deliberative media practitioners.
Abstract:
From a ‘cultural science’ perspective, this paper traces one aspect of a more general shift, from the realist representational regime of modernity to the productive DIY systems of the internet era. It argues that collecting and archiving is transformed by this change. Modern museums – and also broadcast television – were based on determinist or ‘essence’ theory; while internet archives like YouTube (and the internet as an archive) are based on ‘probability’ theory. The paper goes through the differences between modernist ‘essence’ and postmodern ‘probability’; starting from the obvious difference that in a museum each object is selected by experts for its intrinsic properties, while on the internet you don’t know what you will find. The status of individual objects is uncertain, although the productivity of the overall archive is unlimited. The paper links these differences with changes in contemporary culture – from a Newtonian to a quantum universe, progress to risk, institutional structure to evolutionary change, objectivity to uncertainty, identity to performance. Borrowing some of its methodology from science fiction, the paper uses examples from museums and online archives, ranging from the oldest stone tool in the world to the latest tribute vid on the net.
Abstract:
It is recognized that, in general, the performance of construction projects does not meet optimal expectations. One aspect of this is the performance of each participant, which is interdependent and makes a significant impact on overall project outcomes. Of these, the client is traditionally the owner of the project, the architect or engineer is engaged as the lead designer and a contractor is selected to construct the facilities. Generally, the performance of the participants is gauged by considering three main factors, namely time, cost and quality. As the level of satisfaction is a subjective measurement, it is rarely used in the performance evaluation of construction work. Recently, various approaches to the measurement of satisfaction have been made in attempting to determine the performance of construction project outcomes – for instance client satisfaction, consultant satisfaction, contractor satisfaction, customer satisfaction and home buyer satisfaction. These not only identify the performance of the construction project, but are also used to improve and maintain relationships. In addition, these assessments are necessary for continuous improvement and enhanced cooperation between participants. The measurement of satisfaction levels primarily involves expectations and perceptions. An expectation can be regarded as a comparison standard of different needs, motives and beliefs, while a perception is a subjective interpretation that is influenced by moods, experiences and values. This suggests that the disparity between perceptions and expectations may be used to represent different levels of satisfaction. However, this concept is rather new and in need of further investigation. This paper examines the current methods commonly practiced in measuring satisfaction levels and the advantages of promoting these methods.
The results provide a preliminary review of the advantages of satisfaction measurement in the construction industry, and recommendations are made concerning the most appropriate methods for identifying the performance of project outcomes.
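The perception-expectation disparity described above can be made concrete with a minimal sketch, assuming simple numeric rating scales for each survey item (the scoring scheme here is a generic gap-score illustration in the spirit of SERVQUAL-style instruments, not a method taken from the paper):

```python
def gap_scores(perceptions, expectations):
    # Item-level disconfirmation: perception rating minus expectation
    # rating. Positive -> expectations exceeded; negative -> shortfall.
    return [p - e for p, e in zip(perceptions, expectations)]

def mean_satisfaction(perceptions, expectations):
    # A single satisfaction index: the average gap across all items.
    gaps = gap_scores(perceptions, expectations)
    return sum(gaps) / len(gaps)
```

For example, a client rating three project attributes 4, 3 and 5 against expectations of 5, 3 and 4 has gaps of -1, 0 and +1, and a neutral overall index.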
Abstract:
This paper investigates how software designers use their knowledge during the design process. The research is based on the analysis of the observational and verbal data from three software design teams generated during the conceptual stage of the design process. The knowledge captured from the analysis of the mapped design team data is utilized to generate descriptive models of novice and expert designers. These models contribute to a better understanding of the connections between, and integration of, designer variables, and to a better understanding of software design expertise and its development. The models are transferable to other domains.
Abstract:
Purpose: One strategy to minimize bacteria-associated adverse responses such as microbial keratitis, contact lens-induced acute red eye (CLARE), and contact lens-induced peripheral ulcers (CLPUs) that occur with contact lens wear is the development of an antimicrobial or antiadhesive contact lens. Cationic peptides represent a novel approach for the development of antimicrobial lenses. Methods: A novel cationic peptide, melimine, was covalently incorporated into silicone hydrogel lenses. Confirmation tests to determine the presence of peptide and antimicrobial activity were performed. Cationic lenses were then tested for their ability to prevent CLPU in the Staphylococcus aureus rabbit model and CLARE in the Pseudomonas aeruginosa guinea pig model. Results: In the rabbit model of CLPU, melimine-coated lenses resulted in significant reductions in ocular symptom scores and in the extent of corneal infiltration (P < 0.05). Evaluation of the performance of melimine lenses in the CLARE model showed significant improvement in all ocular response parameters measured, including the percentage of eyes with corneal infiltrates, compared with those observed in the eyes fitted with the control lens (P ≤ 0.05). Conclusions: Cationic coating of contact lenses with the peptide melimine may represent a novel method of prevention of bacterial growth on contact lenses and consequently result in reduction of the incidence and severity of adverse responses due to Gram-positive and -negative bacteria during lens wear.
Abstract:
The health of tollbooth workers is seriously threatened by long-term exposure to polluted air from vehicle exhausts. Using traffic data collected at a toll plaza, vehicle movements were simulated by a system dynamics model with different traffic volumes and toll collection procedures. This allowed the average travel time of vehicles to be calculated. A three-dimensional Computational Fluid Dynamics (CFD) model was used with a k–ε turbulence model to simulate pollutant dispersion at the toll plaza for different traffic volumes and toll collection procedures. It was shown that pollutant concentration around tollbooths increases as traffic volume increases. Whether traffic volume is low or high (1500 vehicles/h or 2500 vehicles/h), pollutant concentration decreases if electronic toll collection (ETC) is adopted. In addition, pollutant concentration around tollbooths decreases as the proportion of ETC-equipped vehicles increases. However, if the proportion of ETC-equipped vehicles is very low and the traffic volume is not heavy, then pollutant concentration increases as the number of ETC lanes increases.
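The paper's system dynamics model is not reproduced here. As a hedged illustration of the underlying mechanism — faster per-vehicle service at ETC lanes reduces the time vehicles idle near the booths — a textbook M/M/1 queue with hypothetical per-lane rates suffices (all rates below are assumptions for illustration, not data from the study):

```python
def mm1_time_in_system(arrival_rate, service_rate):
    # Mean time a vehicle spends at a single booth modeled as an
    # M/M/1 queue: W = 1 / (mu - lambda), in hours when rates are
    # vehicles/hour. Valid only when the booth is not saturated.
    if service_rate <= arrival_rate:
        raise ValueError("queue is unstable: service_rate must exceed arrival_rate")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical per-lane rates (vehicles/hour): ETC processes vehicles
# faster than manual collection at the same arrival rate.
manual_time = mm1_time_in_system(arrival_rate=500, service_rate=600)
etc_time = mm1_time_in_system(arrival_rate=500, service_rate=1200)
```

Shorter time-in-system near the booths means less exhaust emitted at the plaza, consistent with the lower pollutant concentrations reported for ETC adoption.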
Abstract:
In this paper, two ideal formation models of serrated chips, the symmetric formation model and the unilateral right-angle formation model, have been established for the first time. Based on the ideal models and related adiabatic shear theory of serrated chip formation, the theoretical relationships among average tooth pitch, average tooth height and chip thickness are obtained. Further, the theoretical relation between the passivation coefficient of the chip's sawtooth and the chip thickness compression ratio is deduced as well. The comparison between these theoretical prediction curves and experimental data shows good agreement, which validates the robustness of the ideal chip formation models and the correctness of the theoretical analysis. The proposed ideal models may provide a simple but effective theoretical basis for succeeding research on serrated chip morphology. Finally, the influences of the principal cutting factors on serrated chip formation are discussed on the basis of a series of finite element simulation results, to provide practical advice for controlling serrated chips in engineering applications.
Abstract:
Autonomous underwater gliders are robust and widely-used ocean sampling platforms that are characterized by their endurance, and are one of the best approaches to gathering subsurface data at the appropriate spatial resolution to advance our knowledge of the ocean environment. Gliders generally do not employ sophisticated sensors for underwater localization, but instead dead-reckon between set waypoints. Thus, these vehicles are subject to large positional errors between prescribed and actual surfacing locations. Here, we investigate the incorporation of a large-scale, regional ocean model into the trajectory design for autonomous gliders to improve their navigational accuracy. We compute the dead-reckoning error for our Slocum gliders, and compare this to the average positional error recorded from multiple deployments conducted over the past year. We then compare trajectory plans computed on-board the vehicle during recent deployments to our prediction-based trajectory plans for 140 surfacing occurrences.
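The positional-error comparison described above reduces to averaging the distance between each prescribed and actual surfacing location. A minimal sketch using the standard haversine great-circle distance follows; the function names and data layout are our assumptions, not the authors' code:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two (lat, lon) points
    # in decimal degrees, using a mean Earth radius of 6371 km.
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def mean_surfacing_error_m(predicted, actual):
    # Average distance between prescribed and actual surfacing
    # locations, each given as a list of (lat, lon) pairs.
    dists = [haversine_m(p[0], p[1], a[0], a[1])
             for p, a in zip(predicted, actual)]
    return sum(dists) / len(dists)
```

Applied over the 140 surfacing occurrences, this yields the average positional error used to judge whether model-informed trajectory plans outperform plain dead-reckoning.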