438 results for distributed computing
Abstract:
The ability to detect unusual events in surveillance footage as they happen is a highly desirable feature for a surveillance system. However, this problem remains challenging in crowded scenes due to occlusions and the clustering of people. In this paper, we propose using the Distributed Behavior Model (DBM), which has been widely used in computer graphics, for video event detection. Our approach does not rely on object tracking and is robust to camera movements. We use sparse coding for classification, and test our approach on various datasets. Our proposed approach outperforms a state-of-the-art method that uses the social force model and Latent Dirichlet Allocation.
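The sparse-coding classification step mentioned in this abstract can be illustrated with a reconstruction-error score: features of normal events form a dictionary, and a test feature whose sparse reconstruction from that dictionary is poor receives a high anomaly score. The sketch below is illustrative only (toy 3-D features, a greedy matching-pursuit solver), not the authors' implementation:

```python
# A minimal sketch (not the authors' implementation) of sparse-coding-based
# scoring: normal-event features form a dictionary, and a test feature is
# anomalous when it cannot be sparsely reconstructed from that dictionary.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit_residual(x, dictionary, n_atoms=2):
    """Greedy matching pursuit: norm of the residual left after
    approximating x with at most n_atoms dictionary atoms."""
    residual = list(x)
    for _ in range(n_atoms):
        # pick the atom most correlated with the current residual
        best = max(dictionary, key=lambda d: abs(dot(residual, d)))
        coef = dot(residual, best) / dot(best, best)
        residual = [r - coef * b for r, b in zip(residual, best)]
    return sum(r * r for r in residual) ** 0.5

# toy 3-D "normal event" dictionary; real features would be motion descriptors
D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
normal_score = matching_pursuit_residual([0.9, 0.4, 0.0], D)   # reconstructs
unusual_score = matching_pursuit_residual([0.0, 0.0, 1.0], D)  # cannot
```

A feature lying in the span of the dictionary yields a near-zero residual, while one orthogonal to it keeps most of its energy as residual, so thresholding the score separates the two.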
Abstract:
In this conceptual article, we extend earlier work on Open Innovation and Absorptive Capacity. We suggest that the literature on Absorptive Capacity does not place sufficient emphasis on distributed knowledge and learning or on the application of innovative knowledge. To accomplish physical transformations, organisations need specific Innovative Capacities that extend beyond knowledge management. Accessive Capacity is the ability to collect, sort and analyse knowledge from both internal and external sources. Adaptive Capacity is needed to ensure that new pieces of equipment are suitable for the organisation's own purposes even though they may have been originally developed for other uses. Integrative Capacity makes it possible for a new or modified piece of equipment to be fitted into an existing production process with a minimum of inessential and expensive adjustment elsewhere in the process. These Innovative Capacities are controlled and coordinated by Innovative Management Capacity, a higher-order dynamic capability.
Abstract:
The intensity pulsations of a cw 1030 nm Yb:Phosphate monolithic waveguide laser with distributed feedback are described. We show that the pulsations could result from the coupling of the two orthogonal polarization modes through the two photon process of cooperative luminescence. The predictions of the presented theoretical model agree well with the observed behaviour.
Abstract:
Software as a Service (SaaS) has recently been gaining more and more attention from software users and providers. This has raised many new challenges for SaaS providers in delivering better SaaSes that suit everyone's needs at minimum cost. One of the emerging approaches to tackling this challenge is delivering the SaaS as a composite SaaS. Delivering it in this way has a number of benefits, including flexible offering of the SaaS functions and decreased subscription costs for users. However, this approach also introduces new problems for SaaS resource management in a Cloud data centre. We present the problem of composite SaaS resource management in a Cloud data centre, specifically its initial placement and resource optimization problems, which aim to improve the SaaS performance based on its execution time as well as to minimize resource usage. Our approach differs from the existing literature because it addresses the problems arising from composite SaaS characteristics, focusing on the SaaS requirements, constraints and interdependencies. The problems are tackled using evolutionary algorithms. Experimental results demonstrate the efficiency and the scalability of the proposed algorithms.
Abstract:
Recently, Software as a Service (SaaS) in Cloud computing has become more and more significant among software users and providers. To offer a SaaS with flexible functions at a low cost, SaaS providers have focused on the decomposition of the SaaS functionalities, also known as composite SaaS. This approach has introduced new challenges in SaaS resource management in data centres. One of the challenges is managing the resources allocated to the composite SaaS. Due to the dynamic environment of a Cloud data centre, resources that have been initially allocated to SaaS components may become overloaded or wasted. As such, reconfiguration of the components' placement is triggered to maintain the performance of the composite SaaS. However, existing approaches often ignore the communication or dependencies between SaaS components in their implementation. In a composite SaaS, it is important to include these elements, as they directly affect the performance of the SaaS. This paper proposes a Grouping Genetic Algorithm (GGA) for multiple composite SaaS application component clustering in Cloud computing to address this gap. To the best of our knowledge, this is the first attempt to handle multiple composite SaaS reconfiguration placement in a dynamic Cloud environment. The experimental results demonstrate the feasibility and the scalability of the GGA.
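The grouping encoding that gives a GGA its name can be sketched as follows. This is a hypothetical toy (invented component names and traffic volumes, a bare (1+1) search loop, capacity constraints omitted), not the paper's algorithm: a chromosome is a partition of components into groups (one group per server), and fitness penalises communication between components placed in different groups.

```python
import random

# Hypothetical sketch of a grouping encoding (not the paper's GGA): a
# chromosome is a partition of SaaS components into groups, and fitness
# penalises traffic between components split across groups.

# dependency graph: (component, component) -> traffic volume (assumed data)
TRAFFIC = {("web", "logic"): 10, ("logic", "db"): 8, ("web", "cache"): 2}

def comm_cost(groups):
    """Sum the traffic between component pairs split across groups."""
    where = {c: i for i, g in enumerate(groups) for c in g}
    return sum(v for (a, b), v in TRAFFIC.items() if where[a] != where[b])

def mutate(groups, rng):
    """Group-oriented move: relocate one random component to another group."""
    groups = [list(g) for g in groups]
    src = rng.choice([g for g in groups if g])
    comp = rng.choice(src)
    src.remove(comp)
    rng.choice(groups).append(comp)
    return [g for g in groups if g]     # drop emptied groups

def search(initial, steps=200, seed=1):
    """Toy (1+1) evolutionary search over groupings."""
    rng = random.Random(seed)
    best = initial
    for _ in range(steps):
        cand = mutate(best, rng)
        if comm_cost(cand) <= comm_cost(best):
            best = cand
    return best

start = [["web"], ["logic"], ["db"], ["cache"]]  # every component on its own
final = search(start)   # heavily-communicating components drift together
```

A real reconfiguration algorithm would add server capacity constraints and migration costs to the fitness; the point here is only that the gene is a whole group, so variation operators move components between groups rather than flipping independent bits.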
Abstract:
Although mobile phones are often used in public urban places to interact with one's geographically dispersed social circle, they can also facilitate interactions with people in the same public urban space. The PlaceTagz study investigates how physical artefacts in public urban places can be utilised and combined with mobile phone technologies to facilitate such interactions. Printed on stickers, PlaceTagz are QR codes linking to a digital message board that enables collocated users to interact with each other over time, resulting in a place-based digital memory. This exploratory project set out to investigate if and how PlaceTagz are used by urban dwellers in a real-world deployment. We present findings from analysing content received through PlaceTagz and interview data from application users. The QR codes, which do not contain any contextual information, piqued the curiosity of users wondering about the embedded link's destination and provoked comments with regard to people, place and technology.
In the pursuit of effective affective computing: the relationship between features and registration
Abstract:
For facial expression recognition systems to be applicable in the real world, they need to be able to detect and track a previously unseen person's face and its facial movements accurately in realistic environments. A highly plausible solution involves performing a "dense" form of alignment, where 60-70 fiducial facial points are tracked with high accuracy. The problem is that, in practice, this type of dense alignment had so far been impossible to achieve in a generic sense, mainly due to poor reliability and robustness. Instead, many expression detection methods have opted for a "coarse" form of face alignment, followed by an application of a biologically inspired appearance descriptor such as the histogram of oriented gradients or Gabor magnitudes. Encouragingly, recent advances in a number of dense alignment algorithms, e.g., constrained local models (CLMs), have demonstrated both high reliability and accuracy for unseen subjects. This begs the question: aside from countering illumination variation, what do these appearance descriptors do that standard pixel representations do not? In this paper, we show that, when close to perfect alignment is obtained, there is no real benefit in employing these different appearance-based representations (under consistent illumination conditions). In fact, when misalignment does occur, we show that these appearance descriptors do work well, precisely by encoding robustness to alignment error. For this work, we compared two popular methods for dense alignment, subject-dependent active appearance models versus subject-independent CLMs, on the task of action-unit detection. These comparisons were conducted through a battery of experiments across various publicly available data sets (i.e., CK+, Pain, M3, and GEMEP-FERA). We also report our performance in the recent 2011 Facial Expression Recognition and Analysis Challenge for the subject-independent task.
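The "robustness to alignment error" argument can be seen in a toy example of ours (not from the paper): a pooled histogram representation, in the spirit of the spatially pooled descriptors mentioned, is unchanged by a small shift that alters many raw pixel values.

```python
from collections import Counter

# Toy illustration (ours, not from the paper): a pooled histogram
# representation is unchanged by a one-pixel shift that alters many
# raw pixel values -- the robustness the descriptors are said to encode.
row = [0, 0, 1, 1, 0, 0]           # a 1-D "image" strip
shifted = row[1:] + row[:1]        # the same strip, misaligned by one pixel

# raw pixel representation: the shift looks like a large change
pixel_dist = sum(abs(a - b) for a, b in zip(row, shifted))

# pooled histogram representation: the shift is invisible
diff = (Counter(row) - Counter(shifted)) + (Counter(shifted) - Counter(row))
hist_dist = sum(diff.values())
```

The raw pixel distance is nonzero while the histogram distance is zero, because pooling discards exactly the positional information that misalignment corrupts.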
Abstract:
Building Web 2.0 sites does not necessarily ensure the success of the site. We aim to better understand what improves the success of a site by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction, enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics, using models to describe the associated social networking behaviour. Some of these models attempt to show how the volume of users provides self-organisation and self-contextualisation of content. One such model of coordinated environments is stigmergy, a term originally describing coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, particularly when user freedom of choice is restricted by the provided web site functionality. This will aid us in building better collaborative Web sites, improving the collaborative processes they support.
Abstract:
Privacy is an important component of freedom and plays a key role in protecting fundamental human rights. It is becoming increasingly difficult to ignore the fact that, without appropriate levels of privacy, a person's rights are diminished. Users want to protect their privacy, particularly in "privacy invasive" areas such as social networks. However, Social Network users seldom know how to protect their own privacy through online mechanisms. What is required is an emerging concept that provides users with legitimate control over their own personal information, whilst preserving and maintaining the advantages of engaging with online services such as Social Networks. This paper reviews "Privacy by Design (PbD)" and shows how it applies to diverse privacy areas. Such an approach will move towards mitigating many of the privacy issues in online information systems and can be a potential pathway for protecting users' personal information. The research has also posed many questions in need of further investigation for different open source distributed Social Networks. Findings from this research will lead to a novel distributed architecture that provides more transparent and accountable privacy for the users of online information systems.
Abstract:
This paper develops a framework for classifying term dependencies in query expansion with respect to the role terms play in structural linguistic associations. The framework is used to classify and compare the query expansion terms produced by the unigram and positional relevance models. As the unigram relevance model does not explicitly model term dependencies in its estimation process it is often thought to ignore dependencies that exist between words in natural language. The framework presented in this paper is underpinned by two types of linguistic association, namely syntagmatic and paradigmatic associations. It was found that syntagmatic associations were a more prevalent form of linguistic association used in query expansion. Paradoxically, it was the unigram model that exhibited this association more than the positional relevance model. This surprising finding has two potential implications for information retrieval models: (1) if linguistic associations underpin query expansion, then a probabilistic term dependence assumption based on position is inadequate for capturing them; (2) the unigram relevance model captures more term dependency information than its underlying theoretical model suggests, so its normative position as a baseline that ignores term dependencies should perhaps be reviewed.
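The unigram relevance model this abstract compares against can be sketched in a few lines. This is an illustrative RM1-style weighting (our toy corpus, smoothing omitted), not the paper's implementation: each candidate term's document probability is weighted by the document's query likelihood, with no position or term-dependence information used.

```python
from collections import Counter

# Hedged sketch of unigram relevance-model (RM1) expansion-term weighting:
# P(w|R) is approximated by summing P(w|d) * P(Q|d) over feedback documents.
# No positional or dependence information enters; smoothing is omitted.

def p_w_given_d(word, doc):
    return Counter(doc)[word] / len(doc)

def query_likelihood(query, doc):
    p = 1.0
    for q in query:
        p *= p_w_given_d(q, doc)
    return p

def rm1_weights(query, docs):
    weights = Counter()
    for doc in docs:
        ql = query_likelihood(query, doc)
        for word in set(doc):
            weights[word] += p_w_given_d(word, doc) * ql
    return weights

docs = [["cloud", "computing", "cloud"], ["cloud", "storage"]]
weights = rm1_weights(["cloud"], docs)   # candidate expansion-term weights
```

Even though the estimator is "bag of words", terms that co-occur with the query in high-likelihood documents receive high weight, which is one route by which such a model can pick up syntagmatic associations despite its independence assumptions.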
Abstract:
This paper develops and evaluates an enhanced corpus-based approach for semantic processing. Corpus-based models that build representations of words directly from text do not require pre-existing linguistic knowledge, and have demonstrated psychologically relevant performance on a number of cognitive tasks. However, they have been criticised in the past for not incorporating sufficient structural information. Using ideas underpinning recent attempts to overcome this weakness, we develop an enhanced tensor encoding model to build representations of word meaning for semantic processing. Our enhanced model demonstrates superior performance when compared to a robust baseline model on a number of semantic processing tasks.
Abstract:
The presence of a large number of single-phase distributed energy resources (DERs) can cause severe power quality problems in distribution networks. DERs can be installed at random locations, which may cause the generation in a particular phase to exceed the load demand in that phase; the excess power in that phase is then fed back into the transmission network. To avoid this problem, the paper proposes the use of a distribution static compensator (DSTATCOM) connected at the first bus following a substation. When operated properly, the DSTATCOM can facilitate a balanced set of currents flowing from the substation, even when excess power is generated by the DERs. The proposals are validated through extensive digital computer simulation studies using PSCAD and MATLAB.
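The balancing idea can be shown with simplified per-phase power bookkeeping. The figures and phase labels below are our toy illustration, not the paper's PSCAD/MATLAB model: the compensator covers each phase's deviation from the balanced average, so the substation supplies equal power on every phase even when one phase exports excess single-phase DER generation.

```python
# Simplified per-phase bookkeeping (toy kW figures, our illustration, not
# the paper's PSCAD/MATLAB model): the compensator absorbs the per-phase
# imbalance so the substation sees an equal draw on every phase.

net = {"a": 5.0, "b": -2.0, "c": 3.0}   # load minus DER output per phase
avg = sum(net.values()) / 3             # target draw for each phase

# DSTATCOM covers each phase's deviation from the balanced average
injection = {ph: p - avg for ph, p in net.items()}
seen_by_substation = {ph: p - injection[ph] for ph, p in net.items()}
```

Note the injections sum to zero: in this simplified view the compensator only redistributes power between phases, which is consistent with it drawing a balanced set of currents from the substation.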
Abstract:
Digital information that is place- and time-specific is increasingly becoming available on all aspects of the urban landscape. People (cf. the Social Web), places (cf. the Geo Web), and physical objects (cf. ubiquitous computing, the Internet of Things) are increasingly infused with sensors and actuators, and tagged with a wealth of digital information. Urban informatics research explores these emerging digital layers of the city at the intersection of people, place and technology. However, little is known about the challenges and new opportunities that these digital layers may offer to road users driving through today's mega cities. We argue that this aspect is worth exploring, in particular with regard to Auto-UI's overarching goal of making cars both safer and more enjoyable. This paper presents the findings of a pilot study in which 14 urban informatics research experts participated in a guided ideation (idea creation) workshop within a simulated environment. They were immersed in different driving scenarios to imagine novel urban informatics applications specific to the driving context.