762 results for Trusted computing platform
Abstract:
Sharing sensor data between multiple devices and users can be challenging for naive users: it requires programming knowledge and the use of different communication channels and/or development tools, leading to non-uniform solutions. This thesis proposes a system that allows users to access sensors, share sensor data and manage sensors. With this system we intend to manage devices, share sensor data, compare sensor data, and set policies to act based on rules. This thesis presents the design and implementation of the system, as well as three case studies of its use.
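A minimal sketch of the kind of rule-based policy the abstract mentions, assuming hypothetical sensor names and a simple threshold trigger rather than the thesis' actual API:

    # Minimal sketch of a rule-based sensor policy (hypothetical names, not the thesis' API).

    class Policy:
        def __init__(self, sensor, predicate, action):
            self.sensor = sensor          # sensor identifier, e.g. "living_room/temperature"
            self.predicate = predicate    # function: reading -> bool
            self.action = action          # function called when the rule fires

        def evaluate(self, readings):
            value = readings.get(self.sensor)
            if value is not None and self.predicate(value):
                self.action(value)

    # Example: notify the owner when a shared temperature sensor exceeds 30 degrees.
    policy = Policy("living_room/temperature",
                    lambda v: v > 30.0,
                    lambda v: print(f"alert: temperature is {v} C"))
    policy.evaluate({"living_room/temperature": 31.5})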
Abstract:
Spontaneous volunteers always emerge in emergency scenarios and are vital to a successful community response, yet some uncertainty remains about their role and its acceptance by official entities. In our research we have identified that most spontaneous volunteers have little or no support from official entities, and so they end up facing critical problems such as situational awareness, safety instructions and guidance, motivation and group organization. We argue that official entities still play a crucial role and should change some of their behaviors regarding spontaneous volunteerism. With this thesis we aim to design a software architecture and a framework to implement a solution that supports spontaneous volunteerism in emergency scenarios, along with a set of guidelines for the design of open information management systems. Through collaboration with both citizens and emergency professionals we have attained several important contributions, such as a clear identification of the roles taken by both spontaneous volunteers and professionals, the importance of volunteerism in the overall community response, and the role that open collaborative information management systems play in community volunteering efforts. These conclusions directly supported the design guidelines of our proposed software solution. Regarding methodology, we first review the literature on technological support for emergencies and how spontaneous volunteers actually challenge these systems. Next, we performed field research in which we observed that the emergence of spontaneous volunteer efforts imposes new requirements on the design of such systems, which led to a cluster of design guidelines that supported our software solution proposal to address the volunteers' requirements. Finally, we architected and developed an online open information management tool, which has been evaluated via usability engineering methods, usability user tests and heuristic evaluations.
Abstract:
Ubiquitous computing raises new usability challenges that cut across design and development. We are particularly interested in environments enhanced with sensors, public displays and personal devices. How can prototypes be used to explore the users' mobility and interaction, both explicit and implicit, to access services within these environments? Because of the potential cost of development and design failure, these systems must be explored using early assessment techniques, before deploying versions of the system that could be disruptive in the target environment. These techniques are required to evaluate alternative solutions before making the decision to deploy the system on location. This is crucial for a successful development that anticipates potential user problems and reduces the cost of redesign. This thesis reports on the development of a framework for the rapid prototyping and analysis of ubiquitous computing environments that facilitates the evaluation of design alternatives. It describes APEX, a framework that brings together an existing 3D Application Server with a modelling tool. APEX-based prototypes enable users to navigate a virtual world simulation of the envisaged ubiquitous environment. By this means users can experience many of the features of the proposed design. Prototypes and their simulations are generated in the framework to help the developer understand how the user might experience the system. These are supported through three different layers: a simulation layer (using a 3D Application Server), a modelling layer (using a modelling tool) and a physical layer (using external devices and real users). APEX allows the developer to move between these layers to evaluate different features. It supports exploration of the user experience through observation of how users might behave with the system, as well as exhaustive analysis based on models. The models support checking of properties based on patterns. These patterns are based on ones that have been used successfully in interactive system analysis in other contexts. They help the analyst to generate and verify relevant properties. Where these properties fail, the scenarios suggested by the failure provide an important aid to redesign.
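As a loose illustration of the property patterns mentioned above, the sketch below checks a simple "response" pattern (whenever a trigger event occurs, a response event eventually follows) over a finite trace of simulated events; the event names and the trace-based check are assumptions, not APEX's modelling notation:

    # Loose illustration of a "response" property pattern checked over a finite trace
    # of simulated events; not APEX's own modelling notation.

    def check_response(trace, trigger, response):
        pending = False
        for event in trace:
            if event == trigger:
                pending = True
            elif event == response:
                pending = False
        return not pending   # holds only if no trigger is left without a later response

    # Example: every time a user enters a sensed area, the public display should update.
    trace = ["enter_area", "display_update", "enter_area", "leave_area"]
    print(check_response(trace, "enter_area", "display_update"))   # False: second entry unanswered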
Abstract:
With the current proliferation of sensor-equipped mobile devices such as smartphones and tablets, location-aware services are expanding beyond the mere efficiency- and work-related needs of users, evolving to incorporate fun, culture and the social life of users. Today, people on the move have more and more connectivity and are expected to be able to communicate with their usual and familiar social networks. That means communicating not only with their peers, colleagues, friends and family, but also with unknown people who might share their interests or curiosities, or who happen to use the same social network. Through social networks, location-aware blogging and cultural mobile applications, relevant information is now available at specific geographical locations and open to feedback and conversations among friends as well as strangers. In fact, smartphone technologies nowadays allow users to post and retrieve content while on the move, often relating to specific physical landmarks or locations, engaging and being engaged in conversations with strangers as much as with their own social network. The use of such technologies and applications while on the move can often lead people to serendipitous discoveries and interactions. Throughout this thesis we engage in a two-fold investigation: how can we foster and support serendipitous discoveries, and what are the best interfaces for doing so? Reading and writing content while on the move is a cognitively intensive task. While the map serves the function of orienting the user, it also absorbs most of the user's concentration. To address this kind of cognitive overload, with Breadcrumbs we propose a 360-degree interface that enables users to find content around them by scanning the surrounding space with the mobile device. Using a loose metaphor of a periscope, and harnessing the power of the smartphone's sensors, we designed an interactive interface capable of detecting content around the user and displaying it in the form of two-dimensional bubbles whose diameter depends on their distance from the user. Users navigate the space in relation to the content they are curious about, rather than in relation to a traditional geographical map. Through this model we envisage alleviating part of the cognitive overload generated by having to continuously confront a two-dimensional map with the real three-dimensional space surrounding the user, while also using the content as a navigational filter. Furthermore, this alternative means of navigating space might bring serendipitous discoveries about places that users were not aware of or intending to reach. We conclude the thesis with an evaluation of the Breadcrumbs application and a comparison of the 360-degree interface with a traditional two-dimensional map displayed on the device screen. Results from the evaluation are compiled into findings and insights for future use in designing and developing context-aware mobile applications.
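A rough sketch, under assumed formulas and constants, of two ingredients such a periscope-style interface needs: deciding whether a content item lies in the direction the device is pointing (from the compass heading and the bearing to the item) and sizing its bubble inversely to its distance. Names and numbers are illustrative, not Breadcrumbs' actual implementation:

    import math

    # Rough sketch (illustrative constants, not Breadcrumbs' implementation): decide
    # whether a content item falls inside the device's field of view and size its bubble.

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial bearing from point 1 to point 2, in degrees from north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360

    def in_view(device_heading, item_bearing, fov=60):
        """True if the item is within the field of view centred on the compass heading."""
        diff = (item_bearing - device_heading + 180) % 360 - 180
        return abs(diff) <= fov / 2

    def bubble_diameter(distance_m, max_px=120, min_px=20, max_range_m=500):
        """Closer content gets a bigger bubble; clamp to a usable pixel range."""
        if distance_m >= max_range_m:
            return min_px
        return min_px + (max_px - min_px) * (1 - distance_m / max_range_m)

    # Example: user at (32.650, -16.910), item slightly north-east, device pointing ~40 degrees.
    b = bearing_deg(32.650, -16.910, 32.652, -16.908)
    print(in_view(40, b), bubble_diameter(150))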
Abstract:
Advances in the Internet and telecommunications have been changing Information Technology (IT) concepts, especially with regard to outsourcing services, where organizations seek cost-cutting and a better focus on the business. Along with the development of such outsourcing, a new model named Cloud Computing (CC) has evolved. It proposes to migrate both data processing and information storage to the Internet. Among the key points of Cloud Computing are cost-cutting, benefits, risks and changes to IT paradigms. Nonetheless, the adoption of this model creates difficulties for decision-making by IT managers, mainly with regard to which solutions may go to the cloud and which service providers are more appropriate to the organization's reality. The overall aim of this research is to apply the AHP (Analytic Hierarchy Process) method to decision-making in Cloud Computing. To that end, the methodology was exploratory, with a case study applied to a nationwide organization (Federation of Industries of RN). Data collection was performed through two structured questionnaires answered electronically by IT technicians and the company's Board of Directors. The data analysis was carried out in a qualitative and comparative way, using Web-HIPRE, an AHP software tool. The results obtained show the importance of applying the AHP method to decision-making about the adoption of Cloud Computing, mainly because, at the time the research was carried out, the studied company already showed interest in and a need for adopting CC, considering the internal problems with infrastructure and availability of information that the company currently faces. The organization sought to adopt CC but had doubts regarding the cloud model and which service provider would better meet its real necessities. The application of AHP thus worked as a guiding tool for choosing the best alternative, which points to the Hybrid Cloud as the ideal choice for starting with Cloud Computing, considering the following aspects: the Infrastructure as a Service (IaaS) layer (processing and storage) should stay partly in the Public Cloud and partly in the Private Cloud; the Platform as a Service (PaaS) layer (software development and testing) showed a preference for the Private Cloud; and the Software as a Service (SaaS) layer (emails/applications) was divided into emails in the Public Cloud and applications in the Private Cloud. The research also identified the factors that are important when hiring a Cloud Computing provider.
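For readers unfamiliar with AHP, its core step derives a priority vector from a pairwise comparison matrix. The sketch below uses the common row-geometric-mean approximation with invented comparison values; it is not Web-HIPRE's implementation or the study's actual data:

    import math

    # Sketch of the core AHP step: derive criterion weights from a pairwise comparison
    # matrix using the row geometric mean approximation (values are illustrative only).

    def ahp_weights(matrix):
        n = len(matrix)
        geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
        total = sum(geo_means)
        return [g / total for g in geo_means]

    # Pairwise comparisons (Saaty's 1-9 scale) for three hypothetical criteria:
    # cost, security, availability. matrix[i][j] = how much criterion i dominates j.
    comparisons = [
        [1,   3,   5],    # cost vs (cost, security, availability)
        [1/3, 1,   2],    # security
        [1/5, 1/2, 1],    # availability
    ]
    weights = ahp_weights(comparisons)
    print([round(w, 3) for w in weights])   # cost carries the largest weight in this toy example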
Abstract:
Simulations based on cognitively rich agents can become a very intensive computing task, especially when the simulated environment represents a complex system. This situation becomes worse when time constraints are present. This kind of simulation would benefit from a mechanism that improves the way agents perceive and react to changes in these types of environments. In other words, an approach to improve the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation would be useful. In complex environments full of variables, it is possible that not all information available to the agent is necessary for its decision-making process, depending on the task being performed. The agent would then need to filter the incoming perceptions in the same way we do with our attention focus. By using a focus of attention, only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process. The architecture proposed herein presents a structure for cognitive agents divided into two parts: 1) the main part, containing the reasoning/planning process, knowledge and affective state of the agent, and 2) a set of behaviors that are triggered by planning in order to achieve the agent's goals. Each of these behaviors has a focus of attention that is dynamically adjustable at runtime, according to the variation of the agent's affective state. The focus of each behavior is divided into a qualitative focus, which is responsible for the quality of the perceived data, and a quantitative focus, which is responsible for the quantity of the perceived data. Thus, a behavior is able to filter the information sent by the agent's sensors and build a list of perceived elements containing only the information necessary to the agent, according to the context of the behavior that is currently running. Besides the human-inspired attention focus, the agent is also endowed with an affective state. The agent's affective state is based on theories of human emotion, mood and personality. This model serves as a basis for the mechanism of continuous adjustment of the agent's attention focus, in both its qualitative and quantitative dimensions. With this mechanism, the agent can adjust its focus of attention during the execution of a behavior in order to become more efficient in the face of environmental changes. The proposed architecture can be used very flexibly. The focus of attention can work in a fixed way (neither the qualitative nor the quantitative focus changes), as well as with different combinations of qualitative and quantitative focus variation. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation is made only in the perception layer of the agent. In order to evaluate the contribution proposed in this work, an extensive series of experiments was conducted on an agent-based simulation of a fire-growing scenario. In the simulations, agents using the architecture proposed in this work are compared with similar agents (with the same reasoning model) that are able to process all the information sent by the environment. Intuitively, one would expect the omniscient agents to be more efficient, since they can consider all possible options before making a decision. However, the experiments showed that attention-focus-based agents can be as efficient as the omniscient ones, with the advantage of being able to solve the same problems in a significantly reduced time. Thus, the experiments indicate the efficiency of the proposed architecture.
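A simplified sketch of the perception-filtering idea described above: a behavior keeps a qualitative focus (which kinds of percepts matter) and a quantitative focus (how many percepts to process), with the latter scaled by the agent's affective state. The names and the arousal-based scaling rule are assumptions for illustration, not the thesis' BDI implementation:

    # Simplified sketch of qualitative + quantitative attention filtering
    # (names and the arousal-based scaling are illustrative assumptions).

    def filter_percepts(percepts, relevant_types, capacity, arousal):
        """Keep only percepts in the behavior's qualitative focus, then cap how many
        are processed (quantitative focus), tightened by the affective state."""
        relevant = [p for p in percepts if p["type"] in relevant_types]
        relevant.sort(key=lambda p: p["distance"])                 # nearer percepts first
        effective_capacity = max(1, int(capacity * (1 - arousal)))  # high arousal narrows focus
        return relevant[:effective_capacity]

    # Example: a fire-fighting behavior only cares about fire and victim percepts.
    percepts = [
        {"type": "fire",   "distance": 12},
        {"type": "tree",   "distance": 3},
        {"type": "victim", "distance": 40},
        {"type": "fire",   "distance": 90},
    ]
    print(filter_percepts(percepts, {"fire", "victim"}, capacity=3, arousal=0.5))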
Abstract:
This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, on a programmable device such as a field-programmable gate array (FPGA). The work explored different implementations, described in VHDL, of multilayer perceptron ANNs. Despite the parallelism inherent to ANNs, software implementations are constrained by the sequential nature of Von Neumann architectures. As an alternative to this problem, a hardware implementation allows all the parallelism implicit in this model to be exploited. Currently, FPGAs are increasingly used as a platform for implementing neural networks in hardware, exploiting their high processing power, low cost, ease of programming and ability to reconfigure the circuit, allowing the network to adapt to different applications. In this context, the aim is to develop arrays of neural networks in hardware with a flexible architecture, in which it is possible to add or remove neurons and, above all, to modify the network topology, enabling a modular, fixed-point arithmetic network in an FPGA. Five VHDL descriptions were synthesized: two for the neuron, with one or two inputs, and three for different ANN architectures. The descriptions of the architectures used are very modular, easily allowing the number of neurons to be increased or decreased. As a result, some complete neural networks were implemented in an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
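The fixed-point arithmetic the hardware neurons rely on can be illustrated in software. The sketch below models a single multiply-accumulate neuron in a fixed-point format, with the fractional word size and the activation chosen arbitrarily for the example rather than taken from the thesis' VHDL descriptions:

    # Software illustration of a fixed-point neuron (format and activation are
    # arbitrary choices for the example, not the thesis' VHDL parameters).

    FRAC_BITS = 8                      # 8 fractional bits
    SCALE = 1 << FRAC_BITS

    def to_fixed(x):
        return int(round(x * SCALE))

    def fixed_mul(a, b):
        return (a * b) >> FRAC_BITS    # rescale after multiplication

    def neuron(inputs, weights, bias):
        acc = to_fixed(bias)
        for x, w in zip(inputs, weights):
            acc += fixed_mul(to_fixed(x), to_fixed(w))
        return max(acc, 0) / SCALE     # ReLU-like activation, back to float for display

    # Two-input neuron, as in one of the synthesized descriptions.
    print(neuron([0.5, -0.25], [0.75, 0.5], bias=0.1))   # ~0.35 in floating point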
Abstract:
The fission-track method (FTM) in apatite was applied to 45 samples collected in the Serra da Mantiqueira (Mantiqueira mountain range), the Serra do Mar (Mar mountain range), regions next to these mountain ranges, and the coastal region between Ubatuba and Santos in the State of São Paulo, Brazil, to study the thermochronology of the South American Platform in southeast Brazil and its influence on the Santos and Campos basins. The data presented in this work complement previously presented data on the same region (Tello Saenz et al., 2003. J. S. Am. Earth Sci. 15, 765-774), with 31 new samples analyzed. The weighted mean of the corrected ages from the high Mantiqueira (around 1000 m), (121 +/- 6) Ma, coincides with the South Atlantic opening. The fact that its thermal history starts at a relatively low temperature (~80 °C) suggests that the age of ~120 Ma would be the formation age of the Serra da Mantiqueira due to a rapid pulse, in which tracks had no time to be retained at the closure temperature of ~120 °C. The Serra do Mar presents a more complicated thermal history, with several reactivations indicated by changes in the slope of its cooling curve. The thermal histories obtained in the regions next to these mountain ranges are compatible with the results mentioned above. The Santos Basin has unconformities that agree with changes in the slope of the thermal histories of the studied region.
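The quoted (121 +/- 6) Ma value is a weighted mean of individual corrected ages; a standard inverse-variance weighting, assumed here since the abstract does not state the exact scheme, would be computed as in this sketch with illustrative sample ages:

    # Standard inverse-variance weighted mean (assumed scheme; the abstract does not
    # state how the (121 +/- 6) Ma value was weighted). Ages in Ma, illustrative values.

    def weighted_mean(ages, errors):
        weights = [1.0 / e**2 for e in errors]
        mean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
        err = (1.0 / sum(weights)) ** 0.5
        return mean, err

    print(weighted_mean([118, 125, 120], [8, 10, 7]))   # illustrative sample ages, not the paper's data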
Abstract:
Increasing the offer of higher education is a basic need of developed and emerging countries, and it requires increasing and ongoing investment. Offering higher education through Internet-based Distance Learning is one of the most efficient ways to massify this offer, as it allows broad coverage at lower cost. In this scenario, we highlight Moodle, an open and low-cost environment for Distance Learning. Its use can be amplified through the adoption of an emerging Information and Communication Technology (ICT), Cloud Computing, which allows the virtualization of Moodle sites, cutting costs, facilitating management and increasing service capacity. This article presents a public, open and free tool for the automatic conversion of Moodle sites so that they can be hosted on Azure, Microsoft's Cloud Computing environment.
Abstract:
One of the current challenges of Ubiquitous Computing is the development of complex applications, which are more than simple alarms triggered by sensors or simple systems that configure the environment according to user preferences. Such applications are hard to develop because they are composed of services provided by different middleware, and developers need to know the peculiarities of each of them, mainly their communication and context models. This thesis presents OpenCOPI, a platform that integrates various service providers, including context provision middleware. It provides a unified ontology-based context model, as well as an environment that enables the easy development of ubiquitous applications via the definition of semantic workflows containing an abstract description of the application. These semantic workflows are converted into concrete workflows, called execution plans. An execution plan consists of a workflow instance containing activities that are automated by a set of Web services. OpenCOPI supports automatic Web service selection and composition, enabling the use of services provided by distinct middleware in an independent and transparent way. Moreover, the platform also supports execution adaptation in case of service failures, user mobility and degradation of service quality. OpenCOPI is validated through the development of case studies, specifically applications from the oil industry. In addition, this work evaluates the overhead introduced by OpenCOPI, compares it with the benefits provided, and assesses the efficiency of OpenCOPI's selection and adaptation mechanisms.
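A toy sketch of the central idea of turning an abstract semantic workflow into an execution plan by selecting one concrete service per activity; the activity names, registry shape and QoS-based choice are invented for illustration and are not OpenCOPI's actual interfaces:

    # Toy sketch: map each abstract activity of a semantic workflow to one concrete
    # Web service, producing an execution plan (names invented, not OpenCOPI's API).

    registry = {
        "get_well_pressure": [
            {"provider": "middlewareA", "endpoint": "http://a.example/pressure", "qos": 0.9},
            {"provider": "middlewareB", "endpoint": "http://b.example/pressure", "qos": 0.7},
        ],
        "notify_engineer": [
            {"provider": "middlewareC", "endpoint": "http://c.example/notify", "qos": 0.8},
        ],
    }

    def build_execution_plan(abstract_workflow):
        plan = []
        for activity in abstract_workflow:
            candidates = registry.get(activity, [])
            if not candidates:
                raise LookupError(f"no service realizes activity '{activity}'")
            best = max(candidates, key=lambda s: s["qos"])   # simple QoS-based selection
            plan.append((activity, best["endpoint"]))
        return plan

    print(build_execution_plan(["get_well_pressure", "notify_engineer"]))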
Abstract:
With the advance of the Cloud Computing paradigm, a single service offered by a cloud platform may not be enough to meet all the requirements of an application. To fulfill such requirements it may be necessary to use, instead of a single service, a composition of services that aggregates services provided by different cloud platforms. In order to generate aggregated value for the user, this composition of services provided by several Cloud Computing platforms requires a solution in terms of platform integration, which encompasses the manipulation of a wide number of non-interoperable APIs and protocols from different platform vendors. In this scenario, this work presents Cloud Integrator, a middleware platform for composing services provided by different Cloud Computing platforms. Besides providing an environment that facilitates the development and execution of applications that use such services, Cloud Integrator works as a mediator by providing mechanisms for building applications through the composition and selection of semantic Web services, taking into account metadata about the services, such as QoS (Quality of Service) and prices. Moreover, the proposed middleware platform provides an adaptation mechanism that can be triggered in case of failure or quality degradation of one or more services used by the running application, in order to ensure its quality and availability. Through a case study consisting of an application that uses services provided by different cloud platforms, Cloud Integrator is evaluated in terms of the efficiency of the service composition, selection and adaptation processes it performs, as well as the potential of using this middleware in heterogeneous cloud computing scenarios.
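Cloud Integrator's selection and adaptation can be pictured as a weighted QoS scoring step plus a failover re-selection; the metadata fields, weights and failure model below are assumptions for illustration, not the middleware's actual mechanism:

    # Illustration of weighted QoS-based selection with a simple failover step
    # (fields, weights and failure model are assumptions, not Cloud Integrator's).

    services = [
        {"name": "storageA", "price": 0.10, "availability": 0.999, "latency_ms": 120, "up": True},
        {"name": "storageB", "price": 0.08, "availability": 0.990, "latency_ms": 200, "up": True},
    ]

    def score(s, weights):
        # Higher availability is better; lower price and latency are better.
        return (weights["availability"] * s["availability"]
                - weights["price"] * s["price"]
                - weights["latency"] * s["latency_ms"] / 1000.0)

    def select_service(candidates, weights):
        alive = [s for s in candidates if s["up"]]
        if not alive:
            raise RuntimeError("no live candidate: replanning of the composition is needed")
        return max(alive, key=lambda s: score(s, weights))

    weights = {"availability": 1.0, "price": 0.5, "latency": 0.2}
    chosen = select_service(services, weights)
    print(chosen["name"])

    # Adaptation: if the chosen service fails, re-select among the remaining candidates.
    chosen["up"] = False
    print(select_service(services, weights)["name"])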