475 results for Jason, BDI, AgentSpeak, Agenti
Abstract:
The Time magazine ‘Person of the Year’ award is a venerable institution. Established by Time’s founder Henry Luce in 1927 as ‘Man of the Year’, it is an annual award given to ‘a person, couple, group, idea, place, or machine that “for better or for worse ... has done the most to influence the events of the year”’ (Time 2002, p. 1). In 2010, the award was given to Mark Zuckerberg, the founder and CEO of the social networking site Facebook. There was, however, a strong campaign for the ‘People’s Choice’ award to be given to Julian Assange, the founder and editor-in-chief of Wikileaks, the online whistleblowing site. Earlier in the year Wikileaks had released more than 250 000 US government diplomatic cables through the internet, and the subsequent controversies around the actions of Wikileaks and Assange came to be known worldwide as ‘Cablegate’. The focus of this chapter is not on the implications of ‘Cablegate’ for international diplomacy, which continue to have great significance, but rather upon what the emergence of Wikileaks has meant for journalism, and whether it provides insights into the future of journalism. Both Facebook and Wikileaks, as well as social media platforms such as Twitter and YouTube, and independent media practices such as blogging, citizen journalism and crowdsourcing, are manifestations of the rise of social media, or what has also been termed web 2.0. The term ‘web 2.0’ was coined by Tim O’Reilly, and captures the rise of online social media platforms and services that better realise the collaborative potential of digitally networked media. They do this by moving from the relatively static, top-down notions of interactivity that informed early internet development, towards more open and evolutionary models that better harness collective intelligence by enabling users to become the creators and collaborators in the development of online media content (Musser and O’Reilly 2007; Bruns 2008).
Abstract:
Web 2.0 is a new generation of online applications on the web that permit people to collaborate and share information online. The use of such applications by employees enhances knowledge management (KM) in organisations. Employee involvement is a critical success factor, as the concept is based on openness, engagement and collaboration between people, where organisational knowledge is derived from employees' experience, skills and best practices. Consequently, the employee's perception is recognised as an important factor in web 2.0 adoption for KM and worthy of investigation. Few studies define and explore employees' acceptance of enterprise 2.0 for KM. This paper provides a systematic review of the literature before presenting the findings as a preliminary conceptual model, representing the first stage of an ongoing research project that will culminate in an empirical study. Reviewing available studies in the technology acceptance, knowledge management and enterprise 2.0 literatures helps to identify all potential user acceptance factors for enterprise 2.0. The preliminary conceptual model is a refinement of the theory of planned behaviour (TPB): the user acceptance factors have been mapped onto the TPB's main components, including behavioural attitude, subjective norms and behavioural control, which are the determinants of an individual's intention to perform a particular behaviour.
Abstract:
Lipopolysaccharide-activated macrophages rapidly synthesize and secrete tumor necrosis factor α (TNFα) to prime the immune system. Surface delivery of membrane carrying newly synthesized TNFα is controlled and limited by the level of the soluble N-ethylmaleimide-sensitive factor attachment protein receptor (SNARE) proteins syntaxin 4 and SNAP-23. Many functions in immune cells are coordinated from lipid rafts in the plasma membrane, and we investigated a possible role for lipid rafts in TNFα trafficking and secretion. TNFα surface delivery and secretion were found to be cholesterol-dependent. Upon macrophage activation, syntaxin 4 was recruited to cholesterol-dependent lipid rafts, whereas its regulatory protein, Munc18c, was excluded from the rafts. Syntaxin 4 in activated macrophages localized to discrete cholesterol-dependent puncta on the plasma membrane, particularly on filopodia. Imaging the early stages of TNFα surface distribution revealed these puncta to be the initial points of TNFα delivery. During the early stages of phagocytosis, syntaxin 4 was recruited to the phagocytic cup in a cholesterol-dependent manner; insertion of VAMP3-positive recycling endosome membrane is required for efficient ingestion of a pathogen. Without this recruitment, syntaxin 4 is not incorporated into the plasma membrane, and phagocytosis is greatly reduced. Thus, relocation of syntaxin 4 into lipid rafts in macrophages is a critical and rate-limiting step in initiating an effective immune response.
Abstract:
Membrane traffic in activated macrophages is required for two critical events in innate immunity: proinflammatory cytokine secretion and phagocytosis of pathogens. We found a joint trafficking pathway linking both actions, which may economize membrane transport and augment the immune response. Tumor necrosis factor α (TNFα) is trafficked from the Golgi to the recycling endosome (RE), where vesicle-associated membrane protein 3 mediates its delivery to the cell surface at the site of phagocytic cup formation. Fusion of the RE at the cup simultaneously allows rapid release of TNFα and expands the membrane for phagocytosis.
Abstract:
Divergence dating studies, which combine temporal data from the fossil record with branch length data from molecular phylogenetic trees, represent a rapidly expanding approach to understanding the history of life. The National Evolutionary Synthesis Center hosted the first Fossil Calibrations Working Group (3–6 March 2011, Durham, NC, USA), bringing together palaeontologists, molecular evolutionists and bioinformatics experts to present perspectives from the disciplines that generate, model and use fossil calibration data. Presentations and discussions focused on channels for interdisciplinary collaboration; best practices for justifying, reporting and using fossil calibrations; and roadblocks to the synthesis of palaeontological and molecular data. Bioinformatics solutions were proposed, with the primary objective being a new database of vetted fossil calibrations, with linkages to existing resources, targeted for a 2012 launch.
Abstract:
This nuts-and-bolts session discusses QUT Library's Study Solutions service, which is staffed by academic skills advisors and librarians as the second tier of its learning and study support model. Firstly, it will discuss the rationale behind the Study Solutions model and provide a brief profile of the service. Secondly, it will outline what distinguishes it from other modes of one-to-one learning support. Thirdly, it will report findings from a student perception study conducted to determine what difference this model of individual study assistance made to academic confidence, the ability to transfer academic skills and the capacity to assist peers. Finally, the session will include small-group discussions to consider the feasibility of this model as best practice for other tertiary institutions, and student perception as a valuable measure of the impact of learning support services.
Abstract:
This paper presents the flight trials of an electro-optical (EO) sense-and-avoid system onboard a Cessna host aircraft (camera aircraft). We focus on the autonomous collision avoidance capability of the sense-and-avoid system; that is, closed-loop integration with the onboard aircraft autopilot. We also discuss the system’s approach to target detection and avoidance control, as well as the methodology of the flight trials. The results demonstrate the ability of the sense-and-avoid system to automatically detect potential conflicting aircraft and engage the host Cessna autopilot to perform an avoidance manoeuvre, all without any human intervention.
Abstract:
This paper presents a survey of previously presented vision-based aircraft detection flight tests, and then presents new flight test results examining the impact of camera field-of-view choice on the detection range and false alarm rate characteristics of a vision-based aircraft detection technique. Using data collected from approaching aircraft, we examine the impact of camera field-of-view choice and confirm that, when aiming for similar levels of detection confidence, an improvement in detection range can be obtained by choosing a smaller effective field-of-view (in terms of degrees per pixel).
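The degrees-per-pixel trade-off described above can be sketched with some back-of-envelope geometry. The wingspan, range and sensor width below are hypothetical illustration values, not figures from the flight tests:

```python
import math

def pixels_on_target(wingspan_m, range_m, fov_deg, sensor_px=1024):
    """Approximate pixels subtended by a target of given wingspan at a given range."""
    # Angular size of the target as seen from the camera.
    angular_size_deg = math.degrees(2 * math.atan(wingspan_m / (2 * range_m)))
    # A narrower FOV spread over the same sensor means fewer degrees per pixel.
    deg_per_px = fov_deg / sensor_px
    return angular_size_deg / deg_per_px

# A 10 m wingspan aircraft at 5 km range:
wide = pixels_on_target(10, 5000, fov_deg=60)
narrow = pixels_on_target(10, 5000, fov_deg=30)
print(wide, narrow)  # halving the FOV exactly doubles the pixels on target
```

With detection confidence tied to pixels on target, the same target becomes detectable at a proportionally greater range under the narrower field-of-view, consistent with the abstract's conclusion.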
Abstract:
The integration of unmanned aircraft into civil airspace is a complex issue. One key question is whether unmanned aircraft can operate just as safely as their manned counterparts. The absence of a human pilot in unmanned aircraft points to an obvious deficiency: the lack of an inherent see-and-avoid capability. To date, regulators have mandated that an “equivalent level of safety” be demonstrated before UAVs are permitted to routinely operate in civil airspace. This chapter proposes techniques, methods, and hardware integrations for a “sense-and-avoid” system designed to address the lack of a see-and-avoid capability in UAVs.
Abstract:
Monitoring environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data manipulation and analysis. We present our current research into techniques for analysing large volumes of acoustic data efficiently. We provide an overview of a novel online acoustic environmental workbench and discuss a number of approaches to scaling the analysis of acoustic data: online collaboration, and manual, automatic and human-in-the-loop analysis.
Abstract:
This paper investigates a mixed centralised-decentralised air traffic separation management system, which combines the best features of the centralised and decentralised systems whilst ensuring the reliability of the air traffic management system during degraded conditions. To overcome communication bandwidth limits, we propose a mixed separation manager based on a robust decision (or min-max) problem posed on a reduced set of admissible flight avoidance manoeuvres (a FAM alphabet). We also present a design method for selecting an appropriate FAM alphabet for use in the mixed separation management system. Simulation studies are presented to illustrate the benefits of our proposed FAM alphabet based mixed separation manager.
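The min-max decision over a FAM alphabet can be illustrated with a toy sketch: pick the manoeuvre whose worst-case cost across uncertain scenarios is smallest. The alphabet, scenarios and cost values below are invented for illustration; the paper's actual formulation operates over flight dynamics and separation constraints:

```python
# Hypothetical FAM alphabet and intruder scenarios (illustration only).
FAM_ALPHABET = ["hold", "climb", "turn_left", "turn_right"]
SCENARIOS = ["intruder_ahead", "intruder_left", "intruder_right"]

# cost[(manoeuvre, scenario)]: lower is better (made-up numbers).
COST = {
    ("hold", "intruder_ahead"): 9, ("hold", "intruder_left"): 2,
    ("hold", "intruder_right"): 2, ("climb", "intruder_ahead"): 3,
    ("climb", "intruder_left"): 3, ("climb", "intruder_right"): 3,
    ("turn_left", "intruder_ahead"): 2, ("turn_left", "intruder_left"): 8,
    ("turn_left", "intruder_right"): 1, ("turn_right", "intruder_ahead"): 2,
    ("turn_right", "intruder_left"): 1, ("turn_right", "intruder_right"): 8,
}

def minmax_fam(alphabet, scenarios, cost):
    """Return the manoeuvre minimising the worst-case (max over scenarios) cost."""
    return min(alphabet, key=lambda m: max(cost[(m, s)] for s in scenarios))

print(minmax_fam(FAM_ALPHABET, SCENARIOS, COST))  # -> climb
```

Restricting the decision to a small discrete alphabet is what keeps the amount of information that must be communicated between aircraft and the separation manager low.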
Abstract:
Several track-before-detection approaches for image-based aircraft detection have recently been examined in an important automated aircraft collision detection application. A particularly popular approach is a two-stage processing paradigm that involves: a morphological spatial filter stage (which aims to emphasize the visual characteristics of targets) followed by a temporal or track filter stage (which aims to emphasize the temporal characteristics of targets). In this paper, we propose new spot detection techniques for this two-stage processing paradigm that fuse together raw and morphological images, or fuse together various different morphological images (we call these approaches morphological reinforcement). On the basis of flight test data, the proposed morphological reinforcement operations are shown to offer superior signal-to-noise characteristics when compared to standard spatial filter options (such as the close-minus-open and adaptive contour morphological operations). However, system operating characteristic curves, which examine detection versus false alarm characteristics after both processing stages, illustrate that system performance is very data-dependent.
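The close-minus-open (CMO) baseline mentioned above can be sketched in one dimension. This is a pure-Python toy; the paper operates on 2-D image data, and the structuring-element size here is a hypothetical choice:

```python
# Grayscale morphology on a 1-D intensity profile.
def erode(f, k):
    r = k // 2
    return [min(f[max(0, i - r):i + r + 1]) for i in range(len(f))]

def dilate(f, k):
    r = k // 2
    return [max(f[max(0, i - r):i + r + 1]) for i in range(len(f))]

def close_minus_open(f, k=5):
    """CMO = closing(f) - opening(f): responds to spots smaller than the element."""
    closing = erode(dilate(f, k), k)   # fills small dark gaps
    opening = dilate(erode(f, k), k)   # removes small bright spots
    return [c - o for c, o in zip(closing, opening)]

# A small bright spot on a flat background.
signal = [10] * 20
signal[10] = 30
response = close_minus_open(signal)
print(max(response))  # -> 20, peaked exactly at the spot
```

Because the flat background produces zero response, a small target-like spot stands out sharply; the morphological-reinforcement idea in the abstract fuses several such filtered images (or a filtered image with the raw image) before the temporal filter stage.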
Abstract:
The quick detection of abrupt (unknown) parameter changes in an observed hidden Markov model (HMM) is important in several applications. Motivated by the recent application of relative entropy concepts in the robust sequential change detection problem (and the related model selection problem), this paper proposes a sequential unknown change detection algorithm based on a relative entropy based HMM parameter estimator. Our proposed approach is able to overcome the lack of knowledge of post-change parameters, and is illustrated to have similar performance to the popular cumulative sum (CUSUM) algorithm (which requires knowledge of the post-change parameter values) when examined, on both simulated and real data, in a vision-based aircraft manoeuvre detection problem.
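For reference, a minimal sketch of the classical CUSUM detector that the paper compares against, for Gaussian observations with known pre- and post-change means. The threshold is a hypothetical tuning parameter, and this is not the paper's relative-entropy HMM estimator, which avoids needing the post-change parameters:

```python
def cusum(samples, mu0, mu1, sigma=1.0, h=5.0):
    """Return the index at which a change from mean mu0 to mu1 is declared, or None."""
    g = 0.0
    for k, x in enumerate(samples):
        # Log-likelihood ratio of post-change vs pre-change Gaussian.
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        # Accumulate evidence, clipped at zero.
        g = max(0.0, g + llr)
        if g > h:
            return k
    return None

# Mean shifts from 0 to 2 at index 50; detection occurs shortly after.
data = [0.0] * 50 + [2.0] * 50
print(cusum(data, mu0=0.0, mu1=2.0))  # -> 52
```

The delay between the true change point (50) and the alarm (52) is the price paid for controlling false alarms via the threshold, which is the trade-off the abstract's performance comparison measures.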
Abstract:
Hybrid system representations have been exploited in a number of challenging modelling situations, including situations where the original nonlinear dynamics are too complex (or too imprecisely known) to be directly filtered. Unfortunately, the question of how to best design suitable hybrid system models has not yet been fully addressed, particularly in situations involving model uncertainty. This paper proposes a novel joint state-measurement relative entropy rate based approach for the design of hybrid system filters in the presence of (parameterised) model uncertainty. We also present a design approach suitable for suboptimal hybrid system filters. The benefits of our proposed approaches are illustrated through design examples and simulation studies.
In the pursuit of effective affective computing: the relationship between features and registration
Abstract:
For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms have demonstrated both high reliability and accuracy for unseen subjects [e.g., constrained local models (CLMs)]. This raises the question: aside from countering against illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment (subject-dependent active appearance models versus subject-independent CLMs) on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
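The abstract's central point, that histogram-style appearance descriptors buy robustness to small alignment errors at the cost of spatial precision, can be shown with a toy 1-D example. This is purely illustrative (not HOG or Gabor): a one-sample misalignment changes raw values substantially but leaves a pooled histogram unchanged:

```python
def histogram(signal, bins=4, lo=0, hi=16):
    """Pool values into coarse bins, discarding their positions."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in signal:
        counts[min(bins - 1, int((v - lo) / width))] += 1
    return counts

def l1(a, b):
    """L1 distance between two equal-length sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

patch = [0, 0, 8, 15, 15, 8, 0, 0]     # a "feature" centred in the patch
shifted = [0, 8, 15, 15, 8, 0, 0, 0]   # same feature, misaligned by one sample

print(l1(patch, shifted))                        # raw-pixel distance: 30
print(l1(histogram(patch), histogram(shifted)))  # descriptor distance: 0
```

With perfect alignment the raw pixels already match, so the descriptor adds nothing; under misalignment the pooling absorbs the shift, which mirrors the paper's finding.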