989 results for Personal Computing


Relevance: 60.00%

Abstract:

Currently, a learning management system (LMS) plays a central role in any e-learning environment. These environments include systems that handle the pedagogic aspects of the teaching–learning process (e.g. specialized tutors, simulation games) and the academic aspects (e.g. academic management systems). Thus, the potential for interoperability is an important, although overlooked, aspect of an LMS. In this paper, we make a comparative study of the interoperability level of the most relevant LMSs. We start by defining an application model and a specification model. For the application model, we create a basic application that acts as a tool provider for LMS integration. The specification model acts as the API that the LMS should implement to communicate with the tool provider; based on our research, we select the Learning Tools Interoperability (LTI) specification from IMS. Finally, we compare the interoperability level of the LMSs, defined as the effort required to integrate the application with each LMS under study.
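As a rough illustration of the tool-provider integration style that LTI prescribes, the sketch below builds the form parameters of an LTI 1.1 basic launch and signs them with OAuth 1.0a. The endpoint URL, consumer key, and secret are made-up placeholders, and the parameter set is trimmed to the essentials; this is not the application used in the paper.

```python
# Minimal sketch of an LTI 1.1 basic launch request (tool-consumer side).
# The URL, consumer key, and secret are hypothetical placeholders.
from urllib.parse import urlencode
import oauthlib.oauth1

launch_params = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course42-activity7",  # required: unique link id
    "user_id": "student-001",
    "roles": "Learner",
}

client = oauthlib.oauth1.Client(
    "consumer-key",                      # key shared with the tool provider
    client_secret="consumer-secret",
    signature_type=oauthlib.oauth1.SIGNATURE_TYPE_BODY,
)
uri, headers, body = client.sign(
    "https://tool.example.org/lti/launch",
    http_method="POST",
    body=urlencode(launch_params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# POSTing `body` to `uri` with `headers` would launch the tool for this user.
```

The effort needed to wire such a launch into each LMS (configuring keys, placement, and roles) is the kind of integration cost the comparison in the paper measures.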

Relevance: 60.00%

Abstract:

The introduction of affordable, consumer-oriented 3-D printers is a milestone in the current "maker movement," which has been heralded as the next industrial revolution. Combined with free and open sharing of detailed design blueprints and accessible development tools, rapid prototypes of complex products can now be assembled in one's own garage--a game-changer reminiscent of the early days of personal computing. At the same time, 3-D printing has also allowed the scientific and engineering community to build the "little things" that help a lab get up and running much faster and more easily than ever before.

Relevance: 60.00%

Abstract:

Advances in informatics now make it possible to develop new techniques and methodologies for studies in all areas of human knowledge. In addition, the capacity of current personal computers to handle large volumes of data makes it easy to create and apply new analysis tools. This paper applied a fuzzy partition matrix to data from the Landsat 5 TM sensor in order to produce a supervised classification of land use in the Arroio das Pombas microbasin in Botucatu, SP, Brazil. Because weights are assigned when the spectral signatures are created, a single training area can contribute to more than one land-cover class. The classification result also changed relative to maximum-likelihood classification, mainly by producing more uniform classes and better-defined class edges.
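A fuzzy partition matrix assigns each pixel a degree of membership in every class rather than a single label. Below is a minimal numpy sketch of the usual fuzzy c-means membership rule; the class centroids and fuzzifier m are illustrative values, not the signatures or parameters used in the paper.

```python
# Fuzzy membership of pixels in spectral classes (fuzzy c-means style).
# Centroids and the fuzzifier m are illustrative, not the paper's values.
import numpy as np

def fuzzy_memberships(pixels, centroids, m=2.0):
    """pixels: (n, bands); centroids: (c, bands) -> memberships (n, c)."""
    # Euclidean distance of every pixel to every class centroid.
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                     # avoid division by zero
    # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

pixels = np.array([[0.30, 0.42], [0.70, 0.15]])      # two pixels, two bands
centroids = np.array([[0.25, 0.40], [0.75, 0.10]])   # e.g. "water", "bare soil"
u = fuzzy_memberships(pixels, centroids)
print(u)   # each row sums to 1: degrees of membership per class
```

Hardening the memberships (taking the arg-max per row) yields a crisp map comparable to a maximum-likelihood result, while the memberships themselves carry the per-class weights the abstract describes.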

Relevance: 60.00%

Abstract:

ICTs are inseparable from on-site museography and indispensable in fixed and mobile online museography. In too many cases, technological prostheses have been installed to varnish the cultural space with modernity, forgetting that technology must serve the content in such a way that it becomes invisible and perfectly interwoven with traditional museography. Mobile interfaces can merge the on-site and online museum and accompany people beyond the physical space. That fusion must be built on a narrative database, open to material and immaterial works from other museums, so that the limitations of the physical museum are not carried over to the virtual one. In the on-site museum, immersive hypermedia installations that enable innovative cultural experiences make sense. Interactivity (virtual relations) must coexist with interaction (physical and personal relations) and be at the service of everyone, starting from the premise that we all have limitations. Working across disciplines helps us understand the museum better so as to put it at the service of people.

Relevance: 60.00%

Abstract:

In 2013, a series of posters began appearing in Washington, DC's Metro system. Each declared "The internet: Your future depends on it" next to a photo of a middle-aged black Washingtonian and an advertisement for the municipal government's digital training resources. This hopeful discourse is familiar, but where exactly does it come from? And how are our public institutions reorganized to approach the problem of poverty as a problem of technology? The Clinton administration's 'digital divide' policy program popularized this hopeful discourse about personal computing powering social mobility, positioned internet startups as the 'right' side of the divide, and charged institutions of social reproduction such as schools and libraries with closing the gap and upgrading themselves in the image of internet startups. After introducing the development regime that builds this idea into the urban landscape through what I call the 'political economy of hope', and tracing the origin of the digital divide frame, this dissertation draws on three years of comparative ethnographic fieldwork in startups, schools, and libraries to explore how this hope is reproduced in daily life, becoming the common sense that drives our understanding of and interaction with economic inequality and reproduces that inequality in turn. I show that the hope in personal computing to power social mobility becomes a method of securing legitimacy and resources for both white émigré technologists and institutions of social reproduction struggling to understand and manage the persistent poverty of the information economy. I track the movement of this common sense between institutions, showing how the political economy of hope transforms them as part of a larger development project. This dissertation models a new, relational direction for digital divide research that grounds the politics of economic inequality with an empirical focus on technologies of poverty management. It demands a conceptual shift that sees the digital divide not as a bug within the information economy, but as a feature of it.

Relevance: 30.00%

Abstract:

This study aimed to verify the factorial validity and internal consistency of the Brazilian version of the Exercise Motivation Inventory-2 (EMI-2) and to compare the main motives for exercising between the gym and personal-training contexts. A total of 588 exercisers from the city of Pelotas/RS/Brazil (405 gym members and 183 personal-training clients) completed the EMI-2, which consists of 51 items grouped into 14 motives (factors) for physical exercise. The factorial validity of the EMI-2 was tested through confirmatory factor analyses and the internal consistency through Cronbach's alpha. To verify the effect of context on the motives, a MANOVA was conducted and effect sizes were computed. The results support the original 14-factor structure of the EMI-2 in this sample. A significant multivariate effect of context on the motives was found [Wilks' λ = 0.912, F(14, 573) = 3.9, p < 0.001, η² = 0.088]. The motives of "Enjoyment", "Strength and Endurance", "Challenge", "Affiliation", "Competition", and "Social Recognition" were significantly higher in the gym context, while "Nimbleness" and "Ill-Health Avoidance" were significantly higher in the personal-training context.
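For readers unfamiliar with the internal-consistency statistic used here, a minimal numpy sketch of Cronbach's alpha follows; the item-response matrix is fabricated for illustration and has no relation to the EMI-2 data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total)).
# The response matrix below is fabricated purely for illustration.
import numpy as np

def cronbach_alpha(items):
    """items: (respondents, k items) -> alpha for the scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]])
print(round(cronbach_alpha(responses), 3))       # closer to 1 = more consistent
```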

Relevance: 30.00%

Abstract:

Urban Computing (UrC) provides users with situation-appropriate information by considering the context of users, devices, and the social and physical environment in urban life. Combined with social network services, UrC makes it possible for people with common interests to organize a virtual society by exchanging context information. In these settings, people and personal devices are vulnerable to fake and misleading context information transferred from unauthorized and unauthenticated servers by attackers. So-called smart devices, which act automatically on context events, are even more vulnerable if they are not prepared for such attacks. In this paper, we illustrate several UrC service scenarios and describe the important context information, possible threats, protection methods, and secure context management for people.
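One standard way to reject context events from unauthenticated sources, in the spirit of the protection the paper calls for, is to authenticate each message with a shared-key MAC. The sketch below is a generic illustration, not the paper's protocol; the shared key and the event fields are invented.

```python
# Authenticating context events with HMAC-SHA256 so devices can reject
# context pushed by unauthenticated servers. Generic sketch only; the
# shared key and event fields are invented for illustration.
import hmac, hashlib, json

SHARED_KEY = b"provisioned-out-of-band"   # hypothetical device/server key

def sign_event(event: dict) -> str:
    payload = json.dumps(event, sort_keys=True).encode()
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_event(event: dict, tag: str) -> bool:
    # constant-time comparison guards against timing attacks
    return hmac.compare_digest(sign_event(event), tag)

event = {"type": "traffic_alert", "zone": "district-9", "ts": 1700000000}
tag = sign_event(event)
assert verify_event(event, tag)                                # authentic: accepted
assert not verify_event({**event, "zone": "district-1"}, tag)  # forgery: rejected
```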

Relevance: 30.00%

Abstract:

The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite their special-purpose design, GPUs have been increasingly used for general computations, with very good results. Hence, there is a growing effort in the community to seamlessly integrate these devices into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of the power of CPU and GPU devices is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Moreover, the most efficient work balance among devices is highly dependent on the computations to be performed and the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of the power of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPUs, to support CPU computations and to efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configurations for a given application and input data size. The evaluation of this work shows that the combination of CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments, when compared to GPU-only executions.
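The offline-training idea can be pictured as a small search over candidate CPU/GPU split ratios: time each device on its share of the work and keep the ratio that minimizes the slower side. The sketch below is a schematic illustration with stand-in cost models, not Marrow's actual API or training procedure.

```python
# Offline search for a CPU/GPU work split: try candidate ratios, time each
# device on its share, keep the split that minimises the slower device.
# The timing functions are stand-in cost models, not real kernel launches.

def time_cpu(n_items: int) -> float:    # hypothetical measured cost (seconds)
    return 2e-6 * n_items

def time_gpu(n_items: int) -> float:    # hypothetical: fast, but fixed launch cost
    return 1e-3 + 2e-7 * n_items

def train_split(total_items: int, steps: int = 20) -> float:
    best_ratio, best_time = 0.0, float("inf")
    for i in range(steps + 1):
        ratio = i / steps               # fraction of work given to the CPU
        cpu_items = int(ratio * total_items)
        t = max(time_cpu(cpu_items), time_gpu(total_items - cpu_items))
        if t < best_time:
            best_ratio, best_time = ratio, t
    return best_ratio

# The trained ratio would be cached per (application, input size) pair.
print(train_split(10_000_000))
```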

Relevance: 30.00%

Abstract:

This work develops a web application for managing the port personnel of Barcelona.

Relevance: 30.00%

Abstract:

An analytic method to evaluate nuclear contributions to the electrical properties of polyatomic molecules is presented. Such contributions control the changes induced by an electric field on the equilibrium geometry (nuclear relaxation contribution) and the vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, electric field intensity, or both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To demonstrate the efficiency of the analytical evaluation of electrical properties (the so-called AEEP method), results of calculations on water and pyridine at the SCF/TZ2P and MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.
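As a concrete instance of the kind of expression such a power-series expansion yields, the familiar double-harmonic formula for the nuclear relaxation contribution to the static polarizability is sketched below. This is the textbook lowest-order form, shown only for orientation; the paper's own expressions may include higher-order anharmonic terms.

```latex
% Double-harmonic nuclear relaxation contribution to the static
% polarizability: dipole derivatives over harmonic force constants.
% Textbook lowest-order form, for illustration only.
\alpha^{\mathrm{nr}}_{\alpha\beta}
  = \sum_{i}
    \frac{1}{k_i}
    \left(\frac{\partial \mu_\alpha}{\partial Q_i}\right)
    \left(\frac{\partial \mu_\beta}{\partial Q_i}\right)
```

Here the Q_i are normal coordinates and the k_i the corresponding harmonic force constants; dipole derivatives divided by force constants are exactly the kind of energy derivatives at the field-free geometry that the abstract refers to.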

Relevance: 30.00%

Abstract:

The alignment between competences, teaching-learning methodologies, and assessment is a key element of the European Higher Education Area. This paper presents the efforts carried out by six Telematics, Computer Science, and Electronic Engineering Education teachers towards achieving this alignment in their subjects. In joint work with pedagogues, a set of recommended actions was identified. A selection of these actions was applied and evaluated in the six subjects. The cross-analysis of the results indicates that the actions allow students to better understand the methodologies and assessment planned for the subjects, facilitate (self-)regulation, and increase students' involvement in the subjects.

Relevance: 30.00%

Abstract:

The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data present a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm-related data thanks to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data distributed across multiple sites in a secure way, in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.

Relevance: 30.00%

Abstract:

Video transcoding refers to the process of converting a digital video from one format into another format. It is a compute-intensive operation. Therefore, transcoding of a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus the IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see if the current computing resources are adequate for the video requests. Therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitters in a video. Furthermore, in a cloud computing environment such as Amazon EC2, the computing resources are more expensive as compared with the storage resources. Therefore, to avoid repetition of transcoding operations, a transcoded video needs to be stored for a certain time. To store all videos for the same amount of time is also not cost-efficient, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface based video transcoder, which uses a coarse-grain parallel processing approach where video is segmented at the group-of-pictures level.
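The computation/storage trade-off can be reduced to a simple per-video decision rule: keep a transcoded copy while the expected cost of storing it until its next access is below the cost of transcoding it again. The sketch below illustrates that rule with invented prices and a naive popularity estimate; it is not the thesis's actual algorithm.

```python
# Keep a transcoded video while the expected storage cost until its next
# access stays below the cost of re-transcoding it on demand.
# Prices and the popularity estimate are invented for illustration.

STORAGE_COST_PER_GB_HOUR = 0.0001   # hypothetical storage price
COMPUTE_COST_PER_HOUR = 0.10        # hypothetical VM price

def keep_in_repository(size_gb: float,
                       transcode_hours: float,
                       accesses_per_day: float) -> bool:
    if accesses_per_day <= 0:
        return False                         # never re-accessed: drop it
    hours_until_next_access = 24.0 / accesses_per_day
    storage_cost = STORAGE_COST_PER_GB_HOUR * size_gb * hours_until_next_access
    retranscode_cost = COMPUTE_COST_PER_HOUR * transcode_hours
    return storage_cost < retranscode_cost

# A popular video is kept; a rarely watched one is re-transcoded on demand.
print(keep_in_repository(2.0, 0.5, accesses_per_day=40))    # True
print(keep_in_repository(2.0, 0.5, accesses_per_day=0.01))  # False
```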

Relevance: 30.00%

Abstract:

Biometrics deals with the physiological and behavioral characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate. The minutiae feature map consists of about 70-100 minutia points, and matching accuracy drops as the size of the database grows. Hence, it is essential to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, which is the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vectors comprise the polygon's angles, sides, area, and type, and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristic (ROC), and feature vector length. Speech is a behavioral biometric modality and can be used to identify a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation-based artificial neural network is trained to identify the clustered speech code. The performance of the neural network classifier is compared with that of the VQ-based Euclidean minimum-distance classifier. Biometric systems that use a single modality are usually affected by problems like noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. A multi-finger, feature-level fusion based fingerprint recognition system is developed, and its performance is measured in terms of the ROC curve. Score-level fusion of the fingerprint-based and speech-based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
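Score-level fusion of two matchers is typically a normalize-then-combine step: map each matcher's score onto [0, 1] and threshold a weighted sum. The sketch below illustrates this generic recipe with made-up score ranges, weights, and threshold; it is not the exact fusion rule tuned in the thesis.

```python
# Generic score-level fusion: min-max normalise each matcher's score,
# then threshold a weighted sum. Ranges, weights, and the threshold are
# made-up illustration values, not the thesis's tuned parameters.

def min_max(score: float, lo: float, hi: float) -> float:
    return (score - lo) / (hi - lo)

def fused_decision(fp_score: float, sp_score: float,
                   w_fp: float = 0.6, threshold: float = 0.5) -> bool:
    fp = min_max(fp_score, lo=0.0, hi=100.0)   # fingerprint matcher range
    sp = min_max(sp_score, lo=0.0, hi=10.0)    # speech matcher range
    fused = w_fp * fp + (1.0 - w_fp) * sp
    return fused >= threshold                  # accept or reject the claim

print(fused_decision(fp_score=82.0, sp_score=7.5))  # strong on both -> accept
print(fused_decision(fp_score=30.0, sp_score=2.0))  # weak on both -> reject
```

Sweeping the threshold over the fused score is what traces out the ROC curve used to compare the single-modality and fused systems.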