897 results for Dominic Interactive
Abstract:
Several Web-based online judges or online programming trainers have been developed to allow students to train their programming skills. However, their pedagogical functionalities in the learning of programming have not been clearly defined. EduJudge is a project which aims to integrate the “UVA On-line Judge”, an existing online programming trainer with a large number of problems and users, into an effective educational environment consisting of the e-learning platform Moodle and the competitive learning tool QUESTOURnament. The result is the EduJudge system, which allows teachers to apply different pedagogical approaches using a proven e-learning platform, makes problems easy to find through an effective search engine, and provides automated evaluation of the solutions submitted to these problems. The final objective is to provide new learning strategies to motivate students and present programming as an easy and attractive challenge. EduJudge has been tried and tested in three algorithms and programming courses in three different Engineering degrees. The students’ motivation and satisfaction levels were analysed alongside the effects of the EduJudge system on students’ academic outcomes. Results indicate that both students and teachers found that, among other benefits, the EduJudge system facilitates the learning process. Furthermore, the experiment also showed an improvement in students’ academic outcomes. It must be noted that the students’ level of satisfaction did not depend on their computer skills or their gender.
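The automated evaluation the abstract describes can be pictured as a minimal judging loop that runs a submission against known test cases and returns a verdict. This is only an illustrative sketch; the names and verdict strings are hypothetical and do not come from EduJudge's actual interface:

```python
# Minimal sketch of the automated evaluation step of an online judge.
# All names (judge, verdict strings) are illustrative, not EduJudge's API.

def judge(solution, test_cases):
    """Run a candidate solution against (input, expected_output) pairs
    and return a verdict string, as an online judge would."""
    for given, expected in test_cases:
        try:
            produced = solution(given)
        except Exception:
            return "Runtime Error"
        if produced != expected:
            return "Wrong Answer"
    return "Accepted"

# Example: judging submissions that should double their input.
cases = [(1, 2), (5, 10), (0, 0)]
print(judge(lambda x: 2 * x, cases))   # Accepted
print(judge(lambda x: x + 2, cases))   # Wrong Answer
```

A real judge would additionally sandbox the submission and enforce time and memory limits, which this sketch omits.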
Abstract:
With the advent of Web 2.0, new kinds of tools became available which are no longer seen as novel but are widely used. For instance, according to Eurostat data, in 2010 32% of individuals aged 16 to 74 used the Internet to post messages to social media sites or instant messaging tools, ranging from 17% in Romania to 46% in Sweden (Eurostat, 2012). Web 2.0 applications have been used in technology-enhanced learning environments, and Learning 2.0 is a concept that has been used to describe the use of social media for learning. Many Learning 2.0 initiatives have been launched by educational and training institutions in Europe, and Web 2.0 applications have also been used for informal learning. Web 2.0 tools can be used in classrooms, virtual or not, not only to engage students but also to support collaborative activities. Many of these tools allow users to apply tags to organize resources and facilitate their later retrieval. The aim of this chapter is to describe how tagging has been used in systems that support formal or informal learning and to summarize the functionalities common to these systems. In addition, common and unusual tagging applications used in some Learning Objects Repositories are analysed.
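The core tagging functionality the chapter surveys, attaching free-form tags to resources and retrieving resources by tag, can be sketched with a small inverted index. The class and resource names below are hypothetical, chosen only for illustration:

```python
from collections import defaultdict

# Sketch of tag-based organization and retrieval, the common functionality
# of the Learning 2.0 tools discussed above. The data model is hypothetical.

class TagIndex:
    def __init__(self):
        self._by_tag = defaultdict(set)   # tag -> set of resources

    def tag(self, resource, *tags):
        """Attach one or more tags to a resource."""
        for t in tags:
            self._by_tag[t.lower()].add(resource)

    def find(self, *tags):
        """Resources carrying all of the given tags."""
        sets = [self._by_tag[t.lower()] for t in tags]
        return set.intersection(*sets) if sets else set()

idx = TagIndex()
idx.tag("lecture-3.pdf", "python", "loops")
idx.tag("quiz-1.html", "python", "quiz")
print(idx.find("python"))            # both resources
print(idx.find("python", "quiz"))    # {'quiz-1.html'}
```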
Abstract:
Virtual Reality (VR) has grown to become state-of-the-art technology in many business- and consumer-oriented E-Commerce applications. One of the major design challenges of VR environments is the placement of the rendering process, which converts the abstract description of a scene, as contained in an object database, into an image. This process is usually done at the client side, as in VRML [1], a technology that requires the client’s computational power for smooth rendering. The vision of VR is also strongly connected to the issue of Quality of Service (QoS), as the perceived realism depends on an interactive frame rate ranging from 10 to 30 frames per second (fps), real-time feedback mechanisms, and realistic image quality. These requirements push traditional home computers, and even sophisticated graphical workstations, beyond their limits. Our work therefore introduces an approach for a distributed rendering architecture that gracefully balances the workload between the client and a cluster-based server. We believe that a distributed rendering approach as described in this paper has three major benefits: it reduces the client’s workload, it decreases the network traffic, and it allows already rendered scenes to be re-used.
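The placement decision at the heart of such an architecture can be sketched as a simple heuristic: render on the client while it can sustain an interactive frame rate, otherwise offload to the cluster-based server. The cost model and function names below are illustrative assumptions, not taken from the paper:

```python
# Sketch of the client/server rendering split motivated above: keep the
# rendering on the client while it can hold an interactive frame rate.
# The per-frame cost model is a hypothetical simplification.

INTERACTIVE_FPS = 10  # lower bound of the 10-30 fps range cited above

def place_rendering(scene_cost, client_ops_per_sec):
    """Return 'client' or 'server' for a scene whose per-frame rendering
    cost is scene_cost operations."""
    achievable_fps = client_ops_per_sec / scene_cost
    return "client" if achievable_fps >= INTERACTIVE_FPS else "server"

print(place_rendering(1_000, 50_000))    # client: 50 fps achievable
print(place_rendering(10_000, 50_000))   # server: only 5 fps achievable
```

A production balancer would also weigh network latency and the reuse of already rendered scenes, the other two benefits the abstract names.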
Abstract:
Project work presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Audiovisual and Multimedia.
Abstract:
OBJECTIVES: To estimate the frequency of online searches on the topic of smoking and to analyze the quality of online resources available to smokers interested in giving up smoking. METHODS: Search engines were used to survey searches and online resources related to stopping smoking in Brazil in 2010. The number of searches was determined using analytical tools available on Google Ads; the number and type of sites were determined by replicating the search patterns of internet users. The sites were classified according to content (advertising, library of articles, and other). The quality of the sites was analyzed using the Smoking Treatment Scale - Content (STS-C) and the Smoking Treatment Scale - Rating (STS-R). RESULTS: A total of 642,446 searches were carried out. Around a third of the 113 sites encountered were of the 'library' type, i.e. they only contained articles, followed by sites containing clinical advertising (18.6%) and professional education (10.6%). Thirteen of the sites offered advice on quitting directed at smokers. The majority of the sites did not contain evidence-based information, were not interactive, and offered no possibility of communicating with users after the first contact. Other limitations we came across were a lack of financial disclosure, no guarantee of privacy concerning information obtained, and no distinction made between editorial content and advertisements. CONCLUSIONS: There is a disparity between the high demand for online support in giving up smoking and the scarcity of quality online resources for smokers. It is necessary to develop interactive, customized online resources based on evidence and randomized clinical trials in order to improve the support available to Brazilian smokers.
Abstract:
In this paper, we present some of the fault tolerance management mechanisms being implemented in the Multi-μ architecture, namely its support for replica non-determinism. In this architecture, fault tolerance is achieved by node active replication, with software-based replica management and fault-tolerance-transparent algorithms. A software layer implemented between the application and the real-time kernel, the Fault Tolerance Manager (FTManager), is responsible for the transparent incorporation of the fault tolerance mechanisms. The active replication model can be implemented either by imposing replica determinism or by keeping replica consistency at critical points, by means of interactive agreement mechanisms. One of the Multi-μ architecture goals is to identify such critical points, relieving the underlying system from performing the interactive agreement at every Ada dispatching point.
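The agreement step at a critical point can be pictured as replicas exchanging their possibly non-deterministic values and adopting the majority one. This is only a sketch of the general technique; the FTManager's real interface and protocol are not given in the abstract:

```python
from collections import Counter

# Sketch of interactive agreement at a critical point: active replicas
# exchange their (possibly non-deterministic) values and all adopt the
# majority value, so a single faulty replica is outvoted. Illustrative
# only; not the Multi-μ FTManager API.

def agree(replica_values):
    """Majority vote over the values proposed by the active replicas."""
    value, count = Counter(replica_values).most_common(1)[0]
    if count <= len(replica_values) // 2:
        raise RuntimeError("no majority: replicas disagree")
    return value

print(agree([42, 42, 41]))  # 42: the divergent replica is outvoted
```

Restricting this exchange to identified critical points, rather than every Ada dispatching point, is precisely the saving the abstract describes.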
Abstract:
This paper describes how MPEG-4 object-based video (OBV) can be used to allow selected objects to be inserted into the play-out stream to a specific user, based on a profile derived for that user. The application scenario described here is personalized product placement, and the paper considers the value of this application in the current and evolving commercial media distribution market, given the huge emphasis media distributors are currently placing on targeted advertising. This level of application of video content requires a sophisticated content description and metadata system (e.g., MPEG-7). The scenario considers the requirement for global libraries to provide the objects to be inserted into the streams. The paper then considers the commercial trading of objects between the libraries, video service providers, advertising agencies and other parties involved in the service. Consequently, a brokerage of video objects is proposed, based on negotiation and trading using intelligent agents representing the various parties. The proposed Media Brokerage Platform is a multi-agent system structured in two layers. In the top layer, there is a collection of coarse-grain agents representing the real-world players – the providers and deliverers of media contents and the market regulator profiler – and, in the bottom layer, there is a set of finer-grain agents constituting the marketplace – the delegate agents and the market agent. For knowledge representation (domain, strategic and negotiation protocols) we propose a Semantic Web approach based on ontologies. The media component contents should be represented in MPEG-7 and the metadata describing the objects to be traded should follow a specific ontology. The top-layer content providers and deliverers are modelled by intelligent autonomous agents that express their will to transact – buy or sell – media components by registering at a service registry.
The market regulator profiler creates, according to the selected profile, a market agent, which, in turn, checks the service registry for potential trading partners for a given component and invites them for the marketplace. The subsequent negotiation and actual transaction is performed by delegate agents in accordance with their profiles and the predefined rules of the market.
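The registration-and-matching step of this two-layer design can be sketched as a service registry that the market agent queries for trading partners to invite to the marketplace. All class, field, and agent names below are hypothetical illustrations of the described flow:

```python
# Sketch of the brokerage flow described above: agents register their will
# to buy or sell a media component, and the market agent queries the
# registry for partners to invite. Names are illustrative only.

class ServiceRegistry:
    def __init__(self):
        self._entries = []          # (agent, role, component)

    def register(self, agent, role, component):
        """An agent declares its will to transact a given component."""
        assert role in ("buy", "sell")
        self._entries.append((agent, role, component))

    def partners(self, component):
        """Agents to invite to the marketplace for this component."""
        return [a for a, _role, c in self._entries if c == component]

reg = ServiceRegistry()
reg.register("ContentProviderA", "sell", "logo-clip-mpeg4")
reg.register("AdAgencyB", "buy", "logo-clip-mpeg4")
reg.register("DelivererC", "sell", "other-clip")
print(reg.partners("logo-clip-mpeg4"))  # ['ContentProviderA', 'AdAgencyB']
```

In the platform as described, the subsequent negotiation between the invited parties is then carried out by the finer-grain delegate agents under the market's predefined rules.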
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering, Energy Branch.
Abstract:
Internship report presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Strategic Public Relations Management.
Abstract:
Doctoral Thesis in Information Systems and Technologies, Area of Engineering and Management of Information Systems.
Abstract:
Master's Degree in Early English Teaching.
Abstract:
OBJECTIVE To analyze the dynamics of operation of the Bipartite Committees in health care in the Brazilian states. METHODS The research included visits to 24 states, direct observation, document analysis, and semi-structured interviews with state and local leaders. The characterization of each committee was performed between 2007 and 2010, and four dimensions were considered: (i) level of institutionality, classified as advanced, intermediate, or incipient; (ii) agenda of intergovernmental negotiations, classified as diversified/restricted, adapted/not adapted to the reality of each state, and shared/unshared between the state and municipalities; (iii) political processes, considering the character and scope of intergovernmental relations; and (iv) capacity of operation, assessed as high, moderate, or low. RESULTS Ten committees had an advanced level of institutionality. The agenda of the negotiations was diversified in all states, and most agendas were adapted to the state reality. However, one-third of the committees showed power inequalities between the government levels. Cooperative and interactive intergovernmental relations predominated in 54.0% of the states. The level of institutionality, scope of negotiations, and political processes influenced the Bipartite Committees’ ability to formulate policies and coordinate health care at the federal level. Bipartite Committees with a high capacity of operation predominated in the South and Southeast regions, while those with a low capacity of operation predominated in the North and Northeast. CONCLUSIONS The regional differences in operation among Bipartite Interagency Committees suggest the influence of historical-structural variables (socioeconomic development, geographic barriers, characteristics of the health care system) on their capacity for intergovernmental health care management. However, structural problems can be overcome in some states through institutional and political changes.
Federal investments, differentiated by region and state, are critical to overcoming the structural inequalities that affect political institutions. The operation of the Bipartite Committees is a step forward; however, strengthening their ability to coordinate health care is crucial for the regional organization of the health care system in the Brazilian states.
Abstract:
In an increasingly competitive and globalized world, companies need effective training methodologies and tools for their employees. However, selecting the most suitable ones is not an easy task. It depends on the requirements of the target group (namely time restrictions), on the specificities of the contents, etc. This is typically the case for training in Lean, the waste elimination manufacturing philosophy. This paper presents and compares two different approaches to lean training methodologies and tools: a simulation game based on a single realistic manufacturing platform, involving production and assembly operations that allows learning by playing; and a digital game that helps understand lean tools. This paper shows that both tools have advantages in terms of trainee motivation and knowledge acquisition. Furthermore, they can be used in a complementary way, reinforcing the acquired knowledge.
Abstract:
MSc Dissertation in Computer Engineering.
Abstract:
In this paper the authors demonstrate the use of remote experimentation (RE) through mobile computing devices in elementary-school science, with the purpose of developing practices that help students assimilate the subjects taught in the classroom by linking them to their daily activities. By allying mobility with RE, we intend to minimize the space-time barrier, giving greater availability and speed of access to information. The implemented architecture uses freely distributed, open-source software, together with remote experiments developed at the Remote Experimentation Laboratory (RExLab) of the Federal University of Santa Catarina (UFSC), Brazil, on an open-hardware physical computing platform of our own construction. The use of open-source computational tools and the integration of hardware with 3D virtual worlds accessible through mobile devices give the project an innovative character with high potential for reproducibility and reusability.