896 results for Autoepistemic Logic
Abstract:
This paper examines the contribution of aspects of critical and referential realism to the “logic” of structural explanation through an analysis of Erik Olin Wright’s Classes and the debate surrounding this work. Wright’s Classes has been selected as a case study because it offers an opportunity to examine issues pertaining to “objective” and “subjective” determinations of class and related questions of agency and structure at the level of actual methodological strategies. A close examination of the structure of Wright’s inquiry reveals a number of places where Harré’s and Bhaskar’s approaches may contribute to the prescription of methodological strategies which could overcome some of the antinomies on which the debate on Classes is based. As a case study, the paper underlines the important “underlabourer” role of critical and referential realism and their contribution to questions of agency and structure in the context of the actual stages involved in structural explanation.
Abstract:
We present an automated verification method for security of Diffie–Hellman–based key exchange protocols. The method includes a Hoare-style logic and syntactic checking. The method is applied to protocols in a simplified version of the Bellare–Rogaway–Pointcheval model (2000). The security of the protocol in the complete model can be established automatically by a modular proof technique of Kudla and Paterson (2005).
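As background to the protocols being verified, the Diffie–Hellman construction they build on can be sketched in a few lines. This is a toy illustration only: the prime is far too small to be secure, and none of the authentication machinery the analysed protocols add is shown.

```python
# Toy Diffie-Hellman key exchange (illustrative only: the modulus p is
# demonstration-sized; real deployments use large, standardised groups).
import secrets

p, g = 23, 5  # small prime modulus and generator, for demonstration

# Each party picks a private exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

A = pow(g, a, p)  # Alice's public value
B = pow(g, b, p)  # Bob's public value

# Each side combines its own private exponent with the other's public
# value; both arrive at the same shared secret g^(a*b) mod p.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

The security properties the paper verifies (e.g. in the Bellare–Rogaway–Pointcheval model) concern what an active adversary can learn about such shared secrets, which the bare exchange above does not protect on its own.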
Abstract:
Genetic research of complex diseases is a challenging, but exciting, area of research. The early development of the research was limited, however, until the completion of the Human Genome and HapMap projects, along with the reduction in the cost of genotyping, which paved the way for understanding the genetic composition of complex diseases. In this thesis, we focus on the statistical methods for two aspects of genetic research: phenotype definition for diseases with complex etiology and methods for identifying potentially associated Single Nucleotide Polymorphisms (SNPs) and SNP-SNP interactions. With regard to phenotype definition for diseases with complex etiology, we first investigated the effects of different statistical phenotyping approaches on the subsequent analysis. In light of the findings, and the difficulties in validating the estimated phenotype, we proposed two different methods for reconciling phenotypes of different models using Bayesian model averaging as a coherent mechanism for accounting for model uncertainty. In the second part of the thesis, the focus turns to methods for identifying associated SNPs and SNP interactions. We review the use of Bayesian logistic regression with variable selection for SNP identification and extend the model to detect interaction effects in population-based case-control studies. In this part of the study, we also develop a machine learning algorithm to cope with large-scale data analysis, namely modified Logic Regression with Genetic Program (MLR-GEP), which is then compared with the Bayesian model, Random Forests and other variants of logic regression.
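Bayesian model averaging weights each candidate model by its posterior probability rather than committing to a single model. One common approximation (a generic sketch using BIC, not the thesis's own implementation) takes the weight of model i as proportional to exp(−BIC_i / 2):

```python
# BIC-based approximation to Bayesian model averaging weights
# (generic sketch; the thesis's actual method is not reproduced here).
import math

def bma_weights(bics):
    """Approximate posterior model probabilities: w_i ∝ exp(-BIC_i / 2),
    normalised over all candidate models."""
    # Subtract the minimum BIC first for numerical stability.
    m = min(bics)
    raw = [math.exp(-(b - m) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical BICs for three candidate phenotype models: lower BIC
# (better fit per parameter) receives more weight.
weights = bma_weights([100.0, 102.0, 110.0])
assert abs(sum(weights) - 1.0) < 1e-12
assert weights[0] > weights[1] > weights[2]
```

Averaging quantities of interest under these weights is what lets downstream analysis account for model uncertainty rather than conditioning on one chosen phenotype model.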
Abstract:
Business practices vary from one company to another, and they often need to be changed in response to changes in the business environment. To satisfy different business practices, enterprise systems need to be customized; to keep up with ongoing business practice changes, they need to be adapted. Because of rigidity and complexity, the customization and adaptation of enterprise systems often take excessive time, with potential failures and budget shortfalls. Moreover, enterprise systems often hold business back because they cannot be rapidly adapted to support business practice changes. Extensive literature has addressed this issue by identifying success or failure factors, implementation approaches, and project management strategies. Those efforts were aimed at learning lessons from post-implementation experiences to help future projects. This research looks into the issue from a different angle. It attempts to address it by delivering a systematic method for developing flexible enterprise systems which can be easily tailored for different business practices or rapidly adapted when business practices change. First, this research examines the role of system models in the context of enterprise system development, and the relationship of system models with software programs in the contexts of computer-aided software engineering (CASE), model driven architecture (MDA) and workflow management systems (WfMS). Then, by applying the analogical reasoning method, this research initiates a concept of model driven enterprise systems. The novelty of model driven enterprise systems is that system models are extracted from software programs and kept independent of them. In the paradigm of model driven enterprise systems, system models act as instructors that guide and control the behavior of software programs, and software programs function by interpreting the instructions in system models.
This mechanism exposes the opportunity to tailor such a system by changing its system models. To make this possible, system models should be represented in a language which can be easily understood by human beings and effectively interpreted by computers. In this research, various semantic representations are investigated to support model driven enterprise systems. The significance of this research is 1) the transplantation of the successful structure for flexibility in modern machines and WfMS to enterprise systems; and 2) the advancement of MDA by extending the role of system models from guiding system development to controlling system behaviors. This research contributes to the area of enterprise systems from three perspectives: 1) a new paradigm of enterprise systems, in which enterprise systems consist of two essential elements, system models and software programs, which are loosely coupled and can exist independently; 2) semantic representations, which can effectively represent business entities, entity relationships, business logic and information processing logic in a semantic manner, and which are the key enabling techniques of model driven enterprise systems; and 3) a brand new role for system models: traditionally the role of system models is to guide developers in writing system source code, whereas this research promotes the role of system models to controlling the behaviors of enterprise systems.
Abstract:
A forced landing is an unscheduled event in flight requiring an emergency landing, and is most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator for safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, there is no commercial system available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification, artificial intelligence for data assessment and evaluation, and path planning, guidance and control techniques to actualize the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site, following an engine failure. First, two algorithms are developed that adapt manned aircraft forced landing techniques to the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of both algorithms in over 200 simulated forced landings found that using Algorithm 2, twice as many landings were within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results present a baseline for further refinements to the planning algorithms.
A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods in testing for path traversability, in losing excess altitude, and in the actual path formation to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point, in wind speeds of up to 9 m/s. This is greater than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst, and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm in following a circular path, yet without sacrificing performance. Finally, a specific method of supplying the correct turning direction is also used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared to over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and the MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated in simulation vertical miss distances of under 2 m in changing winds. A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods of its own, to calculate the required pitch to fly.
This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
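For reference, the nonlinear guidance law of Park, Deyst, and How that the ENG algorithm builds on commands a lateral acceleration of 2V²sin(η)/L1, where η is the angle between the velocity vector and the line to a reference point a fixed distance L1 ahead on the path. A minimal sketch with hypothetical numbers (no wind terms, which are the ENG enhancement) is:

```python
# Lateral acceleration command from the nonlinear guidance law of
# Park, Deyst, and How (sketch with hypothetical parameters; the ENG
# wind enhancements described in the thesis are not included).
import math

def l1_lateral_accel(speed, eta, l1_dist):
    """a_cmd = 2 * V^2 * sin(eta) / L1.

    speed   -- ground speed V in m/s
    eta     -- angle (rad) between velocity vector and the line to the
               reference point L1 ahead on the desired path
    l1_dist -- look-ahead distance L1 in metres
    """
    return 2.0 * speed**2 * math.sin(eta) / l1_dist

# Example: 20 m/s airspeed, 10 degrees off the reference line, L1 = 50 m.
a_cmd = l1_lateral_accel(20.0, math.radians(10.0), 50.0)  # ~2.78 m/s^2
```

The commanded acceleration steers the aircraft back onto the path; as η shrinks, the command smoothly goes to zero, which is what makes the law well suited to circular-path following.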
Abstract:
In an era of complex challenges that draw sustained media attention and entangle multiple organisational actors, this thesis addresses the gap between current trends in society and business, and existing scholarship in public relations and crisis communication. By responding to calls from crisis communication researchers to develop theory (Coombs, 2006a), to examine the interdependencies of crises (Seeger, Sellnow, & Ulmer, 1998), and to consider variation in crisis response (Seeger, 2002), this thesis contributes to theory development in crisis communication and public relations. Through transformative change, this thesis extends existing scholarship built on a preservation or conservation logic where public relations is used to maintain stability by incrementally responding to changes in an organisation’s environment (Cutlip, Center, & Broom, 2006; Everett, 2001; Grunig, 2000; Spicer, 1997). Based on the opportunity to contribute to ongoing theoretical development in the literature, the overall research problem guiding this thesis asks: How does transformative change during crisis influence corporate actors’ communication? This thesis adopts punctuated equilibrium theory, which describes change as alternating between long periods of stability and short periods of revolutionary or transformative change (Gersick, 1991; Romanelli & Tushman, 1994; Siggelkow, 2002; Tushman, Newman, & Romanelli, 1986; Tushman & Romanelli, 1985). As a theory for change, punctuated equilibrium provides an opportunity to examine public relations and transformative change, building on scholarship that is based primarily on incremental change. Further, existing scholarship in public relations and crisis communication focuses on the actions of single organisations in situational or short-term crisis events. Punctuated equilibrium theory enables the study of multiple crises and multiple organisational responses during transformative change.
In doing so, punctuated equilibrium theory provides a framework to explain both the context for transformative change and actions or strategies enacted by organisations during transformative change (Tushman, Newman, & Romanelli, 1986; Tushman & Romanelli, 1985; Tushman, Virany, & Romanelli, 1986). The connections between context and action inform the research questions that guide this thesis: RQ1: What symbolic and substantive strategies persist and change as crises develop from situational events to transformative and multiple linked events? RQ2: What features of the crisis context influence changes in symbolic and substantive strategies? To shed light on these research questions, the thesis adopts a qualitative approach guided by process theory and methods to explicate the events, sequences and activities that were essential to change (Pettigrew, 1992; Van de Ven, 1992). Specifically, the thesis draws on an alternative template strategy (Langley, 1999) that provides several alternative interpretations of the same events (Allison, 1971; Allison & Zelikow, 1999). Following Allison (1971) and Allison and Zelikow (1999), this thesis uses three alternative templates of crisis or strategic response typologies to construct three narratives using media articles and organisational documents. The narratives are compared to identify and draw out different patterns of crisis communication strategies that operate within different crisis contexts. The thesis is based on the crisis events that affected three organisations within the pharmaceutical industry for four years. The primary organisation is Merck, as its product recall crisis triggered transformative change affecting, in different ways, the secondary organisations of Pfizer and Novartis. Three narratives are presented based on the crisis or strategic response typologies of Coombs (2006b), Allen and Caillouet (1994), and Oliver (1991). 
The findings of this thesis reveal different stories about crisis communication under transformative change. By zooming in to a micro perspective (Nicolini, 2009) to focus on the crisis communication and actions of a single organisation and zooming out to a macro perspective (Nicolini, 2009) to consider multiple organisations, new insights about crisis communication, change and the relationships among multiple organisations are revealed at context and action levels. At the context level, each subsequent narrative demonstrates greater connections among multiple corporate actors. By zooming out from Coombs’ (2006b) focus on single organisations to consider Allen and Caillouet’s (1994) integration of the web of corporate actors, the thesis demonstrates how corporate actors add accountability pressures to the primary organisation. Next, by zooming further out to the macro perspective by considering Oliver’s (1991) strategic responses to institutional processes, the thesis reveals a greater range of corporate actors that are caught up in the process of transformative change and accounts for their varying levels of agency over their environment. By zooming in to a micro perspective and out to a macro perspective (Nicolini, 2009) across alternative templates, the thesis sheds light on sequences, events, and actions of primary and secondary organisations. Although the primary organisation remains the focus of sustained media attention across the four-year time frame, the secondary organisations, even when one faced a similar starting situation to the primary organisation, were buffered by the process of transformative change. This understanding of crisis contexts in transforming environments builds on existing knowledge in crisis communication. At the action level, the thesis also reveals different interpretations from each alternative template.
Coombs’ (2006b) narrative shows persistence in the primary organisation’s crisis or strategic responses over the four-year time frame of the thesis. That is, the primary organisation consistently applies a diminish crisis response. At times, the primary organisation drew on denial responses when corporate actors questioned its legitimacy or actions. To close the crisis, the primary organisation uses a rebuild crisis posture (Coombs, 2006b). These findings are replicated in Allen and Caillouet’s (1994) narrative, noting this template’s limitation to communication messages only. Oliver’s (1991) narrative is consistent with Coombs’ (2006b) but also demonstrates a shift from a strategic response that signals conformity to the environment to one that signals more active resistance to the environment over time. Specifically, the primary organisation’s initial response demonstrates conformity, but these same messages were used some three years later to set new expectations in the environment in order to shape criteria and build acceptance for future organisational decisions. In summary, the findings demonstrate the power of crisis or strategic responses when considered over time and in the context of transformative change. The conclusions of this research contribute to scholarship in the public relations and management literatures. Based on the significance of organisational theory, the primary contribution of the theory relates to the role of interorganisational linkages or legitimacy buffers that form during the punctuation of equilibrium. The network of linkages among the corporate actors is significant also to the crisis communication literature, as it forms part of the process model of crisis communication under punctuated equilibrium.
This model extends existing research that focuses on crisis communication of single organisations to consider the emergent context that incorporates secondary organisations as well as the localised contests of legitimacy and buffers from regulatory authorities. The thesis also provides an empirical base for punctuated equilibrium in public relations and crisis communication, extending Murphy’s (2000) introduction of the theory to the public relations literature. In doing this, punctuated equilibrium theory reinvigorates theoretical development in crisis communication by extending existing scholarship around incrementalist approaches and demonstrating how public relations works in the context of transformative change. Further research in this area could consider using alternative templates to study transformative change caused by a range of crisis types from natural disasters to product tampering, and to add further insight into the dynamics between primary and secondary organisations. This thesis contributes to practice by providing guidelines for crisis response strategy selection and indicators related to the emergent context for crises under transformative change that will inform primary and secondary organisations’ responses to crises.
Abstract:
The OED reminds us as surely as Ovid that a labyrinth is a “structure consisting of a number of intercommunicating passages arranged in bewildering complexity, through which it is difficult or impossible to find one’s way without guidance”. Both Shaun Tan’s The Arrival (2006) and Matt Ottley’s Requiem for a Beast: A Work for Image, Word and Music (2007) mark a kind of labyrinthine watershed in Australian children’s literature. Deploying complex, intercommunicating logics of story and literacy, these books make high demands of their reader but also offer guidance for the successful navigation of their stories; for their protagonists as surely as for readers. That the shared logic of navigation in each book is literacy as privileged form of meaning-making is not surprising in the sense that within “a culture deeply invested in myths of individualism and self-sufficiency, it is easy to see why literacy is glorified as an attribute of individual control and achievement” (Williams and Zenger 166). The extent to which these books might be read as exemplifying desired norms of contemporary Australian culture seems to be affirmed by the fact of Tan and Ottley winning the Australian “Picture Book of the Year” prize awarded by the Children’s Book Council of Australia in 2007 and 2008 respectively. However, taking its cue from Ottley’s explicit intertextual use of the myth of Theseus and from Tan’s visual rhetoric of lostness and displacement, this paper reads these texts’ engagement with tropes of “literacy” in order to consider the ways in which norms of gender and culture seemingly circulated within these texts might be undermined by constructions of “nation” itself as a labyrinth that can only partly be negotiated by a literate subject.
In doing so, I argue that these picture books, to varying degrees, reveal a perpetuation of the “literacy myth” (Graff 12) as a discourse of safety and agency but simultaneously bear traces of Ariadne’s story, wherein literacy alone is insufficient for safe navigation of the labyrinth of culture.
Abstract:
As the graphics race subsides and gamers grow weary of predictable and deterministic game characters, game developers must put aside their “old faithful” finite state machines and look to more advanced techniques that give users the gaming experience they crave. The next industry breakthrough will be with characters that behave realistically and that can learn and adapt, rather than more polygons, higher resolution textures and more frames per second. This paper explores the various artificial intelligence techniques that are currently being used by game developers, as well as techniques that are new to the industry. The techniques covered in this paper are finite state machines, scripting, agents, flocking, fuzzy logic and fuzzy state machines, decision trees, neural networks, genetic algorithms and extensible AI. This paper introduces each of these techniques, explains how they can be applied to games, and describes how commercial games are currently making use of them. Finally, the effectiveness of these techniques and their future role in the industry are evaluated.
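As a point of reference for the baseline technique the paper surveys, a finite state machine for a game character can be as little as a transition table keyed by (state, event) pairs. The states and events below are hypothetical, not drawn from any particular game:

```python
# Minimal finite state machine for a game character: event-driven
# transitions stored in a dictionary (hypothetical states and events).
class CharacterFSM:
    def __init__(self):
        self.state = "patrol"
        self.transitions = {
            ("patrol", "see_player"): "chase",
            ("chase", "lost_player"): "patrol",
            ("chase", "in_range"): "attack",
            ("attack", "player_fled"): "chase",
        }

    def handle(self, event):
        # Move to the next state if a transition exists; otherwise stay put.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

fsm = CharacterFSM()
fsm.handle("see_player")   # patrol -> chase
fsm.handle("in_range")     # chase -> attack
```

The determinism of this table (the same event in the same state always produces the same behaviour) is precisely what the paper argues makes characters predictable, and what fuzzy logic, learning and adaptive techniques aim to move beyond.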
Abstract:
The primary goal of the Vehicular Ad Hoc Network (VANET) is to provide real-time safety-related messages to motorists to enhance road safety. Accessing and disseminating safety-related information through the use of wireless communications technology in VANETs should be secured, as motorists may make critical decisions in dealing with an emergency situation based on the received information. If security concerns are not addressed in developing VANET systems, an adversary can tamper with, or suppress, the unprotected message to mislead motorists to cause traffic accidents and hazards. Current research on secure messaging in VANETs focuses on employing the certificate-based Public Key Infrastructure (PKI) scheme to support message encryption and digital signing. The security overhead of such a scheme, however, creates a transmission delay and introduces a time-consuming verification process to VANET communications. This thesis has proposed a novel public key verification and management approach for VANETs; namely, the Public Key Registry (PKR) regime. Compared to the VANET PKI scheme, this new approach can satisfy necessary security requirements with improved performance and scalability, and at a lower cost by reducing the security overheads of message transmission and eliminating digital certificate deployment and maintenance issues. The proposed PKR regime consists of the required infrastructure components, rules for public key management and verification, and a set of interactions and associated behaviours to meet these rule requirements. This is achieved through a system design as a logic process model with functional specifications. The PKR regime can be used as development guidelines for conforming implementations. An analysis and evaluation of the proposed PKR regime includes security features assessment, analysis of the security overhead of message transmission, transmission latency, processing latency, and scalability of the proposed PKR regime. 
Compared to certificate-based PKI approaches, the proposed PKR regime can maintain the necessary security requirements, significantly reduce the security overhead by approximately 70%, and improve performance by 98%. Meanwhile, the result of the scalability evaluation shows that the latency of the proposed PKR regime stays low, at approximately 15 milliseconds, whether operating in a large or small environment. It is therefore believed that this research will create a new dimension to the provision of secure messaging services in VANETs.
Abstract:
This work examines the effect of landmark placement on the efficiency and accuracy of risk-bounded searches over probabilistic costmaps for mobile robot path planning. In previous work, risk-bounded searches were shown to offer in excess of 70% efficiency increases over normal heuristic search methods. The technique relies on precomputing distance estimates to landmarks, which are then used to produce probability distributions over exact heuristics for use in heuristic searches such as A* and D*. The location and number of these landmarks therefore greatly influence the efficiency of the search and the quality of the risk bounds. Here four new methods of selecting landmarks for risk-based search are evaluated. Results are shown which demonstrate that landmark selection needs to take into account the centrality of the landmark, and that diminishing returns are obtained from using large numbers of landmarks.
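The underlying landmark idea is closely related to ALT-style heuristics: a precomputed shortest distance from each node to a landmark L yields the admissible lower bound |d(L, goal) − d(L, node)| on the node-to-goal distance via the triangle inequality. A minimal deterministic sketch (hypothetical distances; the probabilistic costmap machinery of the paper is not reproduced) is:

```python
# Landmark-based lower bound for heuristic search (ALT-style sketch).
# dist_to_landmark maps node -> precomputed shortest distance to one
# landmark; the distances below are hypothetical.
def landmark_heuristic(node, goal, dist_to_landmark):
    # Triangle inequality: |d(L, goal) - d(L, node)| <= d(node, goal),
    # so the bound never overestimates and is admissible for A*.
    return abs(dist_to_landmark[goal] - dist_to_landmark[node])

# Precomputed distances from a single landmark to each node.
dist = {"A": 0, "B": 4, "C": 7, "G": 10}
h = landmark_heuristic("B", "G", dist)  # lower bound on d(B, G): 6
```

With several landmarks, the tightest bound is the maximum over landmarks; this is why landmark placement (e.g. centrality, as the paper finds) matters so much to how tight, and thus how useful, the heuristic is.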
Abstract:
Marketers spend considerable resources to motivate people to consume their products and services as a means of goal attainment (Bagozzi and Dholakia, 1999). Why people increase, decrease, or stop consuming some products is based largely on how well they perceive they are doing in pursuit of their goals (Carver and Scheier, 1992). Yet despite the importance for marketers in understanding how current performance influences a consumer’s future efforts, this topic has received little attention in marketing research. Goal researchers generally agree that feedback about how well or how poorly people are doing in achieving their goals affects their motivation (Bandura and Cervone, 1986; Locke and Latham, 1990). Yet there is less agreement about whether positive and negative performance feedback increases or decreases future effort (Locke and Latham, 1990). For instance, while a customer of a gym might cancel his membership after receiving negative feedback about his fitness, the same negative feedback might cause another customer to visit the gym more often to achieve better results. A similar logic can apply to many products and services from the use of cosmetics to investing in mutual funds. The present research offers managers key insights into how to engage customers and keep them motivated. Given that connecting customers with the company is a top research priority for managers (Marketing Science Institute, 2006), this article provides suggestions for performance metrics including four questions that managers can use to apply the findings.
Abstract:
The recent exponential rise in the number of behaviour disorders has been the focus of a wide range of commentaries, ranging from the pedagogic and the administrative to the sociological, and even the legal. This book will be the first to apply, in a systematic and thorough manner, the ideas of the foundational discipline of philosophy. A number of philosophical tools are applied here, tools arising through the medium of the traditional philosophical debates, such as those concerning governance, truth, logic, ethics, free will, law and language. Each forms a separate chapter, but together they constitute a comprehensive, rigorous and original insight into what is now an important set of concerns for all those interested in the governance of children. The intention is threefold: first, to demonstrate the utility, accessibility and effectiveness of philosophical ideas within this important academic area; philosophy does not have to be regarded as an arcane and esoteric discipline with only limited contemporary application, far from it. Second, the book offers a new set of approaches and ideas for both researchers and practitioners within education, a field in danger of continually using the same ideas to endlessly repeat the same conclusions. Third, the book offers a viable alternative to the dominant psychological model, which increasingly employs pathology as its central rationale for conduct. The book will be of interest not only to mainstream educators and to those students and academics interested in philosophy, and more specifically the application of philosophical ideas to educational issues; it will also be an appropriate text for courses on education and difference and, due to the breadth of the philosophical issues addressed, courses on applied philosophy.
Abstract:
Variants of the same process can be encountered within one organization or across different organizations. For example, different municipalities, courts, and rental agencies all need to support highly similar processes. In fact, procurement and sales processes can be found in almost any organization. However, despite these similarities, there is also the need to allow for local variations in a controlled manner. Therefore, many academics and practitioners have advocated the use of configurable process models (sometimes referred to as reference models). A configurable process model describes a family of similar process models in a given domain. Such a model can be configured to obtain a specific process model that is subsequently used to handle individual cases, for instance, to process customer orders. Process configuration is notoriously difficult as there may be all kinds of interdependencies between configuration decisions. In fact, an incorrect configuration may lead to behavioral issues such as deadlocks and livelocks. To address this problem, we present a novel verification approach inspired by the “operating guidelines” used for partner synthesis. We view the configuration process as an external service, and compute a characterization of all such services which meet particular requirements via the notion of configuration guideline. As a result, we can characterize all feasible configurations (i.e., configurations without behavioral problems) at design time, instead of repeatedly checking each individual configuration while configuring a process model.