960 results for User-centric API Framework
Abstract:
Adaptability in distributed object-oriented enterprise frameworks for multimedia technology is critical to system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in distributed computing systems. In this paper, we propose a Metalevel Component-Based Framework which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our approach of combining a meta-architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed multimedia applications. The proposed architecture of the pattern-oriented framework can dynamically adopt new design patterns to address issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in the future. © 2011 Springer Science+Business Media B.V.
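The run-time adaptation idea described above can be illustrated with a toy meta-level registry, in which design-pattern components are registered ("woven in") and replaced while the system runs. The class and role names below are invented for illustration and are not the paper's actual architecture.

```python
# Minimal sketch of a meta-level registry: pattern components are woven
# in at run time, so the framework's behaviour can evolve without
# redeployment. Names here are hypothetical.

class MetaFramework:
    def __init__(self):
        self._patterns = {}

    def weave(self, name, component):
        """Register (or replace) a pattern component under a role name."""
        self._patterns[name] = component

    def invoke(self, name, *args):
        """Dispatch a call through the currently woven component."""
        return self._patterns[name](*args)

fw = MetaFramework()
fw.weave("proxy", lambda request: f"proxied:{request}")
print(fw.invoke("proxy", "getFrame"))   # proxied:getFrame

# Later, the framework adapts by weaving in a replacement component:
fw.weave("proxy", lambda request: f"cached:{request}")
print(fw.invoke("proxy", "getFrame"))   # cached:getFrame
```

The point of the sketch is that adaptation happens through the registry rather than through code changes, which is the mechanism the meta-architecture is meant to provide.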
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content. The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework that bridges the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images/videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) For general content-based image retrieval, a stochastic mechanism is utilized to enable the long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns of images. (2) In addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, by which a user is allowed to more effectively search for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is further proposed to detect events in soccer videos, fully utilizing the multi-modality features and object information obtained through video shot/scene detection.
Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications. This is demonstrated by their utilization in traffic video surveillance applications. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
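The histogram-comparison step of shot/scene segmentation can be sketched as follows. The dissertation's actual method combines low-level visual features with object tracking and audio analysis; this simplified sketch covers only the visual-feature comparison, with frames modelled as flat lists of pixel intensities and an illustrative threshold.

```python
# Simplified shot-boundary detection: compare normalized intensity
# histograms of consecutive frames and flag large jumps as cuts.

def histogram(frame, bins=4, levels=256):
    """Normalized intensity histogram for a frame (list of pixel values)."""
    counts = [0] * bins
    for px in frame:
        counts[px * bins // levels] += 1
    total = len(frame)
    return [c / total for c in counts]

def hist_distance(h1, h2):
    """L1 distance between two normalized histograms (0 = identical, 2 = disjoint)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_shot_boundaries(frames, threshold=0.5):
    """Return frame indices where the histogram difference exceeds the threshold."""
    boundaries = []
    prev = histogram(frames[0])
    for i in range(1, len(frames)):
        cur = histogram(frames[i])
        if hist_distance(prev, cur) > threshold:
            boundaries.append(i)
        prev = cur
    return boundaries

# Two "dark" frames followed by two "bright" frames -> one cut at index 2.
dark = [10] * 100
bright = [240] * 100
print(detect_shot_boundaries([dark, dark, bright, bright]))  # [2]
```

In a real pipeline the threshold would be tuned per corpus, and the candidate cuts would be verified against tracking and audio cues as the dissertation describes.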
Abstract:
This dissertation discusses resource allocation mechanisms in several network topologies, including infrastructure wireless networks, non-infrastructure wireless networks and wire-cum-wireless networks. Different networks may have different resource constraints. Based on actual technologies and implementation models, utility functions, game theory and a modern control algorithm are introduced to balance power, bandwidth and customer satisfaction in the system. In infrastructure wireless networks, utility functions have been used in Third Generation (3G) cellular networks, with the network trying to maximize total utility. In this dissertation, revenue maximization is set as the objective. Compared with previous work on utility maximization, revenue maximization is more practical for cellular network operators to implement. Pricing strategies are studied and algorithms are given to find the optimal price combination of power and rate that maximizes profit without degrading Quality of Service (QoS) performance. In non-infrastructure wireless networks, power capacity is limited by the small size of the nodes. In such a network, nodes need to transmit traffic not only for themselves but also for their neighbors, so power management becomes the most important issue for overall network performance. Our innovative routing algorithm, based on a utility function, sets up a flexible framework for different users with different concerns in the same network. This algorithm allows users to make trade-offs between multiple resource parameters. Its flexibility makes it a suitable solution for large-scale non-infrastructure networks. This dissertation also covers non-cooperation problems. By combining game theory and utility functions, equilibrium points can be found among rational users, which can enhance cooperation in the network. Finally, a wire-cum-wireless network architecture is introduced.
This network architecture can support multiple services over multiple networks with smart resource allocation methods. Although a SONET-to-WiMAX case was used for the analysis, the mathematical procedure and resource allocation scheme could be universal solutions for all infrastructure, non-infrastructure and combined networks.
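The revenue-maximization idea can be sketched as a search over candidate (power price, rate price) pairs. The demand curves, parameters and grid below are invented for illustration; the dissertation's actual model also enforces QoS constraints, which this toy omits.

```python
# Hedged sketch: grid search for the revenue-maximizing pair of prices
# (for power and for rate), assuming simple linear demand curves.

def demand(price, a, b):
    """Linear demand: units requested at a given price (never negative)."""
    return max(0.0, a - b * price)

def best_prices(a_pow=10, b_pow=1, a_rate=8, b_rate=2, grid=None):
    """Search a price grid for the (power, rate) prices maximizing revenue."""
    if grid is None:
        grid = [i * 0.5 for i in range(1, 21)]   # candidate prices 0.5 .. 10.0
    best = (0.0, None, None)
    for p_pow in grid:
        for p_rate in grid:
            revenue = (p_pow * demand(p_pow, a_pow, b_pow)
                       + p_rate * demand(p_rate, a_rate, b_rate))
            if revenue > best[0]:
                best = (revenue, p_pow, p_rate)
    return best

revenue, p_pow, p_rate = best_prices()
print(revenue, p_pow, p_rate)   # 33.0 5.0 2.0 for these toy parameters
```

With linear demand the optimum can of course be found analytically; the grid search is just the simplest way to show an operator choosing a price combination to maximize profit rather than total utility.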
Abstract:
With the recent explosion in the complexity and amount of digital multimedia data, there has been a huge impact on the operations of various organizations in distinct areas, such as government services, education, medical care, business, entertainment, etc. To satisfy the growing demand for multimedia data management systems, an integrated framework called DIMUSE is proposed and deployed for distributed multimedia applications to offer a full scope of multimedia-related tools and provide appealing experiences for users. This research mainly focuses on video database modeling and retrieval by addressing a set of core challenges. First, a comprehensive multimedia database modeling mechanism called Hierarchical Markov Model Mediator (HMMM) is proposed to model high-dimensional media data including video objects, low-level visual/audio features, as well as historical access patterns and frequencies. The associated retrieval and ranking algorithms are designed to support not only general queries but also complicated temporal event pattern queries. Second, system training and learning methodologies are incorporated so that user interests are mined efficiently to improve retrieval performance. Third, video clustering techniques are proposed to continually improve search speed and accuracy by architecting a more efficient multimedia database structure. A distributed video management and retrieval system is designed and implemented to demonstrate the overall performance. The proposed approach is further customized for a mobile-based video retrieval system to address the perception subjectivity issue by considering individual users' profiles. Moreover, to deal with security and privacy issues and concerns in distributed multimedia applications, DIMUSE also incorporates a practical framework called SMARXO, which supports multilevel multimedia security control.
SMARXO efficiently combines role-based access control (RBAC), XML and an object-relational database management system (ORDBMS) to achieve proficient security control. A distributed multimedia management system named DMMManager (Distributed MultiMedia Manager) is developed with the proposed framework DIMUSE to support multimedia capturing, analysis, retrieval, authoring and presentation in a single framework.
Abstract:
Collaborative sharing of information is an increasingly needed technique for achieving complex goals in today's fast-paced, tech-dominant world. The Personal Health Record (PHR) system has become a popular research area for sharing patient information quickly among health professionals. PHR systems store and process sensitive information, so they should have proper security mechanisms to protect patients' private data. Thus, access control mechanisms of the PHR should be well-defined. Secondly, PHRs should be stored in encrypted form. Cryptographic schemes offering a more suitable solution for enforcing access policies based on user attributes are needed for this purpose. Since attribute-based encryption can resolve these problems, we propose a patient-centric framework that protects PHRs against untrusted service providers and malicious users. In this framework, we use the Ciphertext-Policy Attribute-Based Encryption scheme as an efficient cryptographic technique, enhancing the security and privacy of the system, as well as enabling access revocation. Patients can encrypt their PHRs and store them on untrusted storage servers. They also maintain full control over access to their PHR data by assigning attribute-based access control to selected data users, and revoking unauthorized users instantly. In order to evaluate our system, we implemented a CP-ABE library and web services as part of our framework. We also developed an Android application based on the framework that allows users to register with the system, encrypt their PHR data and upload it to the server, while authorized users can download and decrypt PHR data. Finally, we present experimental results and a performance analysis, which show that deploying the proposed system would be practical.
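The access-control logic behind CP-ABE can be illustrated without any cryptography: a record carries a policy over attributes, a user's key carries an attribute set, and access succeeds only if the attributes satisfy the policy. This is a mock of the policy evaluation only, not an encryption scheme, and the attribute names are hypothetical.

```python
# Not real cryptography: a minimal mock of CP-ABE's access rule.
# A policy is either a leaf attribute (string) or a tuple
# ('AND'|'OR', subpolicy, subpolicy, ...).

def satisfies(policy, attributes):
    """True if the user's attribute set satisfies the policy tree."""
    if isinstance(policy, str):          # leaf: a single required attribute
        return policy in attributes
    op, *children = policy
    results = (satisfies(c, attributes) for c in children)
    return all(results) if op == "AND" else any(results)

# Policy attached to a PHR record: (doctor AND cardiology) OR patient
policy = ("OR", ("AND", "doctor", "cardiology"), "patient")

print(satisfies(policy, {"doctor", "cardiology"}))  # True
print(satisfies(policy, {"doctor", "radiology"}))   # False
print(satisfies(policy, {"patient"}))               # True
```

In actual CP-ABE the policy is embedded in the ciphertext and the attributes in the decryption key, so this check is enforced mathematically rather than by trusted code; the sketch only shows which combinations of attributes would be able to decrypt.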
Abstract:
This thesis explores the idea of a new type of user interface, designed specifically for hands-free wearable devices (more precisely, for the pairing of an Android-based smart glass with a gesture recognizer). To ease the development of applications for these devices, a framework was built that allows developers to construct, in a relatively simple way, innovative user interfaces that let users interact with digital content without breaking contact with reality and without forcing them to use their hands.
Abstract:
INTRODUCTION: The ability to reproducibly identify clinically equivalent patient populations is critical to the vision of learning health care systems that implement and evaluate evidence-based treatments. The use of common or semantically equivalent phenotype definitions across research and health care use cases will support this aim. Currently, there is no single consolidated repository for computable phenotype definitions, making it difficult to find all definitions that already exist, and also hindering the sharing of definitions between user groups. METHOD: Drawing from our experience in an academic medical center that supports a number of multisite research projects and quality improvement studies, we articulate a framework that will support the sharing of phenotype definitions across research and health care use cases, and highlight gaps and areas that need attention and collaborative solutions. FRAMEWORK: An infrastructure for re-using computable phenotype definitions and sharing experience across health care delivery and clinical research applications includes: access to a collection of existing phenotype definitions, information to evaluate their appropriateness for particular applications, a knowledge base of implementation guidance, supporting tools that are user-friendly and intuitive, and a willingness to use them. NEXT STEPS: We encourage prospective researchers and health administrators to re-use existing EHR-based condition definitions where appropriate and share their results with others to support a national culture of learning health care. There are a number of federally funded resources to support these activities, and research sponsors should encourage their use.
Abstract:
Economic policy-making has long been more integrated than social policy-making, in part because the statistics and much of the analysis that support economic policy are based on a common conceptual framework – the system of national accounts. People interested in economic analysis and economic policy share a common language of communication, one that includes both concepts and numbers. This paper examines early attempts to develop a system of social statistics that would mirror the system of national accounts, particularly the work on the development of social accounts that took place mainly in the 1960s and 1970s. It explores the reasons why these early initiatives failed, but argues that the preconditions now exist to develop a new conceptual framework to support integrated social statistics – and hence a more coherent, effective social policy. Optimism is warranted for two reasons. First, we can make use of the radical transformation that has taken place in information technology, both in processing data and in providing wide access to the knowledge that can flow from the data. Second, the conditions exist to begin to shift away from the straitjacket of government-centric social statistics, with its implicit assumption that governments must be the primary actors in finding solutions to social problems. By supporting the decision-making of all the players (particularly individual citizens) who affect social trends and outcomes, we can start to move beyond the sterile, ideological discussions that have dominated much social discourse in the past and begin to build social systems and structures that evolve, almost automatically, based on empirical evidence of ‘what works best for whom’. The paper describes a Canadian approach to developing a framework, or common language, to support the evolution of an integrated, citizen-centric system of social statistics and social analysis. This language supports the traditional social policy that we have today; nothing is lost.
However, it also supports a quite different social policy world, one where individual citizens and families (not governments) are seen as the central players – a more empirically-driven world that we have referred to as the ‘enabling society’.
Abstract:
The Olivia framework is a set of concepts and measures that, when mature, will allow users to describe, in a consistent and integrated manner, everything about individuals and institutions that is of potential interest to social policy. The present paper summarizes the current stage of development toward this highly ambitious goal. The current version of the framework supports analysis of social trends and policy responses from many perspectives:
• The point-in-time, resource-flow perspectives that underlie most traditional, economics-based policy analysis.
• Life-course perspectives, including both transitions/trajectories analysis and asset-based analysis.
• Spatial perspectives that anchor people in space and history and that provide a link to macro-analysis.
• The perspective of the purposes/goals of individuals and institutions, including the objectives of different types of government programming.
The concepts of the framework, which are all potentially measurable, provide a language that can support integrated analysis in all these areas at a much finer level of description than is customary. It is a language especially well suited to analysing the incremental policy changes that are typical of a mature welfare state. It supports both qualitative and quantitative analysis, enabling some integration between the two. It supports a citizen-centric as well as a government-centric view of social policy. In the current version, the concepts are most highly developed as they relate to social policy on labour markets, equality and social integration, care-giving, immigration, income security, sustainability, and social and economic well-being more generally. However, the paper points to likely extensions in the areas of health, justice and safety.
Abstract:
Revenue and production output of the United Kingdom’s Aerospace Industry (AI) is growing year on year and the need to develop new products and innovative enhancements to existing ranges is creating a critical need for the increased utilisation and sharing of employee knowledge. The capture of employee knowledge within the UK’s AI is vital if it is to retain its pre-eminent position in the global marketplace. Crowdsourcing, as a collaborative problem solving activity, allows employees to capture explicit knowledge from colleagues and teams and also offers the potential to extract previously unknown tacit knowledge in a less formal virtual environment. By using micro-blogging as a mechanism, a conceptual framework is proposed to illustrate how companies operating in the AI may improve the capture of employee knowledge to address production-related problems through the use of crowdsourcing. Subsequently, the framework has been set against the background of the product development process proposed by Maylor in 1996 and illustrates how micro-blogging may be used to crowdsource ideas and solutions during product development. Initial validation of the proposed framework is reported, using a focus group of 10 key actors from the collaborating organisation, identifying the perceived advantages, disadvantages and concerns of the framework; results indicate that the activity of micro-blogging for crowdsourcing knowledge relating to product development issues would be most beneficial during product conceptualisation due to the requirement for successful innovation.
Abstract:
Android is becoming ubiquitous and currently has the largest share of the mobile OS market, with billions of application downloads from the official app market. It has also become the platform most targeted by mobile malware, which is becoming more sophisticated in order to evade state-of-the-art detection approaches. Many Android malware families employ obfuscation techniques to avoid detection, and this may defeat static-analysis-based approaches. Dynamic analysis, on the other hand, may be used to overcome this limitation. Hence, in this paper we propose DynaLog, a dynamic-analysis-based framework for characterizing Android applications. The framework provides the capability to analyse the behaviour of applications based on an extensive number of dynamic features. It provides an automated platform for the mass analysis and characterization of apps that is useful for quickly identifying and isolating malicious applications. The DynaLog framework leverages existing open-source tools to extract and log high-level behaviours, API calls, and critical events that can be used to explore the characteristics of an application, thus providing an extensible dynamic analysis platform for detecting Android malware. DynaLog is evaluated using real malware samples and clean applications, demonstrating its capabilities for effective analysis and detection of malicious applications.
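The kind of feature extraction such a framework performs can be sketched as turning a run's logged API calls into a feature vector for later classification. The log format and the list of monitored APIs below are hypothetical, not DynaLog's actual output.

```python
# Hedged sketch: map a dynamic-analysis log to a binary feature vector,
# one slot per monitored API. Log lines and API list are illustrative.

MONITORED_APIS = ["sendTextMessage", "getDeviceId", "openConnection", "exec"]

def extract_features(log_lines):
    """Return 1 per monitored API that appears in the run's log, else 0."""
    seen = set()
    for line in log_lines:
        for api in MONITORED_APIS:
            if api in line:
                seen.add(api)
    return [1 if api in seen else 0 for api in MONITORED_APIS]

log = [
    "API_CALL: android.telephony.SmsManager.sendTextMessage(...)",
    "API_CALL: android.telephony.TelephonyManager.getDeviceId()",
]
print(extract_features(log))  # [1, 1, 0, 0]
```

A vector like this, built per app over many monitored behaviours and events, is what makes mass characterization and downstream malware classification possible.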
Proposal of new WikiSIG functionalities to support collaborative work in Geodesign
Abstract:
The emergence of Web 2.0 has materialized in new technologies (APIs, Ajax, etc.), new practices (mashups, geotagging, etc.) and new tools (wikis, blogs, etc.). It rests principally on the principles of participation and collaboration. Within this dynamic, the spatial and cartographic Web – the geospatial Web, or GeoWeb – is also undergoing major technological and social transformations. The participatory GeoWeb 2.0 materializes in particular through mashups of wikis and geobrowsers (ArgooMap, Geowiki, WikiMapia, etc.). The new applications born of these mashups are evolving toward more interactive forms of collective intelligence. But these applications do not take into account the specific requirements of collaborative work, in particular traceability management and dynamic access to the history of contributions. Geodesign is a new field, the product of combining GIS and design, that allows a multidisciplinary team to work together. Given its emerging character, Geodesign is not yet fully defined, and it requires an innovative theoretical basis as well as new tools, supports, technologies and practices to meet its complex requirements. In this thesis we propose new WikiSIG functionalities, built on the principles and technologies of GeoWeb 2.0 and aimed in particular at supporting the collaborative dimension of the Geodesign process. The WikiSIG is equipped with wiki functionalities dedicated to geospatial data (including its geometric component: shape and location), dynamically ensuring documented management of object versions and access to those versions (and their metadata), thus facilitating collaborative work in Geodesign. We also propose deltification, the capability of comparing and displaying the differences between two versions of a project.
Finally, the relevance of certain geoprocessing and sketching tools is discussed. The main contributions of this thesis are, on the one hand, to identify the needs, requirements and constraints of the collaborative Geodesign process and, on the other, to propose new WikiSIG functionalities that best address the collaborative dimension of that process. To this end, a theoretical framework is laid out in which we identify the requirements of collaborative Geodesign work and propose innovative WikiSIG functionalities, which are then formalized in UML diagrams. A software prototype is also developed to implement these functionalities, which are illustrated through a simulated case study treated as a proof of concept. The relevance of the proposed functionalities is finally validated by experts through a questionnaire and interviews. In summary, this thesis shows the importance of traceability management and of dynamic access to history in a Geodesign process. We also propose other functionalities such as deltification, a multimedia component supporting argumentation, parameters qualifying the data produced, and collective decision-making by consensus.
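The deltification idea, comparing two versions of a project and displaying the differences, can be sketched on a toy data model. Here objects are id-to-attribute dicts with a simplified geometry; the real WikiSIG functionality also tracks version metadata, which this sketch omits.

```python
# Illustrative "deltification": compute the delta between two versions
# of a set of geospatial objects (added, removed, changed attributes).

def deltify(old, new):
    """Return the delta between two project versions keyed by object id."""
    delta = {"added": [], "removed": [], "changed": {}}
    for oid in new:
        if oid not in old:
            delta["added"].append(oid)
        elif new[oid] != old[oid]:
            # record (old, new) for every attribute that differs
            delta["changed"][oid] = {
                k: (old[oid].get(k), new[oid].get(k))
                for k in set(old[oid]) | set(new[oid])
                if old[oid].get(k) != new[oid].get(k)
            }
    delta["removed"] = [oid for oid in old if oid not in new]
    return delta

v1 = {"park1": {"geometry": [(0, 0), (0, 1), (1, 1)], "name": "North Park"}}
v2 = {"park1": {"geometry": [(0, 0), (0, 2), (1, 1)], "name": "North Park"},
      "path7": {"geometry": [(2, 2), (3, 3)], "name": "New Path"}}

d = deltify(v1, v2)
print(d["added"])                     # ['path7']
print(sorted(d["changed"]["park1"]))  # ['geometry']
```

A viewer built on such a delta can then highlight, per object, exactly which geometric or attribute changes separate two versions, which is the traceability support the thesis argues for.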
Abstract:
With an ever-growing number of citizens living in large urban areas, cities need to adapt and become smarter in order to be sustainable. The smart city concept thus implies the integration of several dimensions of city management through an integrated, sustained approach, creating a new market in itself. But to meet these needs and win this new market, companies have to organize themselves so as to support their strategic decisions with tools that allow the analysis and evaluation of this new paradigm. Based on smart city concepts, this work develops a set of tools that allow the company PTInovação to analyse and evaluate new markets, creating a model for the implementation of a heat map that presents the cities with the greatest market potential worldwide. Based on this model, an instantiation is then carried out to analyse 7 different cases of cities located in America, Africa, Asia and Europe. From this analysis, a case study is conducted for the city of Cartagena de Indias, Colombia. This case study analyses PTInovação's offering portfolio, studies the specific needs of local users and analyses potential competitors in the local market, enabling a SWOT/TOWS analysis that leads to an action plan mapping the company's entry into this market.
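A heat-map model of this kind can be sketched as scoring cities on weighted market-potential indicators and normalizing the scores to heat values. The indicators, weights and figures below are invented for illustration; the thesis derives its own from PTInovação's market analysis.

```python
# Hedged sketch: weighted scoring of cities, normalized to [0, 1] heat
# values (1 = highest-potential city). Indicators are hypothetical.

WEIGHTS = {"population_m": 0.4, "gdp_per_capita_k": 0.3, "ict_index": 0.3}

def score(city):
    """Weighted sum of a city's indicator values."""
    return sum(WEIGHTS[k] * city[k] for k in WEIGHTS)

def heat_map(cities):
    """Map each city name to a heat value in [0, 1]."""
    scores = {name: score(c) for name, c in cities.items()}
    top = max(scores.values())
    return {name: s / top for name, s in scores.items()}

cities = {
    "Cartagena": {"population_m": 1.0, "gdp_per_capita_k": 6, "ict_index": 5},
    "Lisbon":    {"population_m": 0.5, "gdp_per_capita_k": 23, "ict_index": 8},
}
print(heat_map(cities))
```

Rendering these values on a world map yields the heat map described above; the substance of the model lies in choosing indicators and weights that actually track market potential.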
Abstract:
Future pervasive environments will take into consideration not only individual users' interests, but also social relationships. In this way, pervasive communities can lead users to participate beyond traditional pervasive spaces, enabling cooperation among groups and taking into account not only individual interests, but also the collective and social context. Social applications in the CSCW (Computer Supported Cooperative Work) field present new challenges and possibilities in the use of social context information for adaptability in pervasive environments. In particular, this research describes the design and development of a context-aware framework for collaborative applications (CAFCA), which utilizes users' social context information for proactive adaptations in pervasive environments. In order to validate the proposed framework, an evaluation was conducted with a group of users based on an enterprise scenario. The analysis verified the impact of the framework in terms of functionality and efficiency in real-world conditions. The main contribution of this thesis is a context-aware framework to support collaborative applications in pervasive environments. The research focused on providing an innovative socio-technical approach to exploit collaboration in pervasive communities. Finally, the main results reside in social matching capabilities for session formation, communication and coordination of groupware for collaborative activities.
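The social-matching-for-session-formation idea can be sketched as grouping users whose interest overlap passes a threshold. CAFCA's actual matching uses richer social-context information; the profiles, similarity measure and greedy grouping below are invented for illustration.

```python
# Toy social matching: group users into sessions when their interest
# overlap with a session's founder exceeds a threshold.

def similarity(a, b):
    """Jaccard overlap of two users' interest sets."""
    return len(a & b) / len(a | b)

def form_sessions(profiles, threshold=0.5):
    """Greedily assign each user to the first session whose founder is similar enough."""
    sessions = []
    for user, interests in profiles.items():
        for session in sessions:
            founder = session[0]
            if similarity(profiles[founder], interests) >= threshold:
                session.append(user)
                break
        else:
            sessions.append([user])   # no match: start a new session
    return sessions

profiles = {
    "ana":  {"gis", "design"},
    "ben":  {"gis", "design", "sketching"},
    "carl": {"databases"},
}
print(form_sessions(profiles))  # [['ana', 'ben'], ['carl']]
```

A proactive framework would recompute such groupings as users' context changes, triggering session invitations rather than waiting for explicit user action.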
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages it provides, such as scalability, elasticity, availability, low cost of ownership and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
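The neighborhood-centric unit of computation can be illustrated as extracting the k-hop neighborhood around a vertex, here as a plain breadth-first search on an in-memory adjacency list, with none of NSCALE's distributed subgraph packing.

```python
# Sketch: the k-hop neighborhood of a vertex, the subgraph unit that
# neighborhood-centric programs (e.g. ego-network analysis) operate on.

from collections import deque

def k_hop_neighborhood(adj, source, k):
    """Return the set of vertices within k hops of source (BFS to depth k)."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        v, depth = frontier.popleft()
        if depth == k:
            continue                      # do not expand past depth k
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                frontier.append((w, depth + 1))
    return seen

# Toy graph: 1-hop and 2-hop neighborhoods of vertex 'a'.
adj = {"a": ["b", "c"], "b": ["d"], "c": [], "d": ["e"]}
print(sorted(k_hop_neighborhood(adj, "a", 1)))  # ['a', 'b', 'c']
print(sorted(k_hop_neighborhood(adj, "a", 2)))  # ['a', 'b', 'c', 'd']
```

A vertex-centric framework would force the analysis to be rewritten as per-vertex message passing; letting the user program receive whole neighborhoods like these, and declaratively specify which ones to extract, is the abstraction the paragraph describes.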