937 results for Virtual storage (Computer science)


Relevance:

100.00%

Publisher:

Abstract:

The development of new learning models has been of great importance in recent years, with a focus on advances in the area of deep learning. Deep learning first gained prominence in 2006 and has since become a major area of research across a number of disciplines. This paper delves into the area of deep learning to present its current limitations and to propose a new, fully integrated deep and dynamic probabilistic system. The new model will be applicable to a wide range of areas, focusing initially on medical image analysis, with the overall goal of utilising this approach for prediction purposes in computer-based medical systems.

Relevance:

100.00%

Publisher:

Abstract:

COSTA, Umberto Souza da; MOREIRA, Anamaria Martins; MUSICANTE, Martin A. Specification and Runtime Verification of Java Card Programs. Electronic Notes in Theoretical Computer Science. [S.l.: s.n.], 2009.

Relevance:

100.00%

Publisher:

Abstract:

The generation of heterogeneous big data sources with ever-increasing volumes, velocities and veracities over the last few years has inspired the data science and research community to address the challenge of extracting knowledge from big data. Such a wealth of generated data can be intelligently exploited to advance our knowledge of our environment, public health, critical infrastructure and security. In recent years we have developed generic approaches to process such big data at multiple levels to advance decision support, specifically data processing with semantic harmonisation, low-level fusion, analytics, and knowledge modelling with high-level fusion and reasoning. These approaches will be introduced and presented in the context of the TRIDEC project's results on critical oil and gas industry drilling operations, and of the ongoing large eVacuate project on critical crowd behaviour detection in confined spaces.

Relevance:

100.00%

Publisher:

Abstract:

Heading into the 2020s, physics and astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will take a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. The LHC and LSST share the 5-dimensional nature of their data, with position, energy and time as the fundamental axes. This talk presents an overview of the experiments and the data they gather, and outlines the challenges in extracting information. The strategies employed are very similar to those of industrial data science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry.

Speaker Biography: Professor Mark Sullivan. Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, with significant challenges in data handling, processing, classification and analysis.
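The kind of data filtering used to find transient objects in repeated sky images can be illustrated with a toy difference-imaging sketch. This is a hypothetical `detect_transients` helper for illustration only, not the pipeline used by LSST or the speaker's group:

```python
import numpy as np

def detect_transients(reference, new_image, threshold=5.0):
    """Flag pixels that brightened significantly between two epochs.

    A toy version of difference imaging: subtract a reference image
    from a new exposure and report pixels more than `threshold`
    standard deviations above the residual background.
    """
    diff = new_image - reference
    sigma = diff.std()
    if sigma == 0:
        return []
    return [tuple(idx) for idx in np.argwhere(diff > threshold * sigma)]

# A flat 8x8 sky with background noise and one new bright source at (3, 4).
ref = np.zeros((8, 8))
rng = np.random.default_rng(0)
new = ref + rng.normal(0, 0.1, size=ref.shape)  # background noise
new[3, 4] += 50.0                               # the transient
print(detect_transients(ref, new))
```

Real survey pipelines must also align and PSF-match the two images before subtracting; this sketch assumes perfectly registered frames.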

Relevance:

100.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

100.00%

Publisher:

Abstract:

We present an Integrated Environment for learning and teaching computer programming, designed both for students of specialised Computer Science courses and for non-specialist students such as those in the Liberal Arts. The environment is rich enough to allow exploration of concepts from robotics, artificial intelligence, social science and philosophy, as well as the specialist areas of operating systems and the various computer programming paradigms.

Relevance:

100.00%

Publisher:

Abstract:

Abstract not available

Relevance:

100.00%

Publisher:

Abstract:

This article aims to show that it is possible to encourage learning in museums through the construction of virtual communities, based on repositories of learning objects (LOs), communication tools and the production of LOs by the visitors themselves. The focus is on encouraging learning in the sense of motivating the visitor's participation and involvement in the activities of the virtual community. From this perspective, we start from the assumption that information, communication, interaction and collaboration are essential to the learning process in the informal context of museums. We believe that interaction and collaboration are integral parts of the learning process afforded by virtual communities, and that the main learning resource offered in these communities is the learning object. Thus, through an understanding of learning grounded in communication and language, we see interactive museums as discursive spaces in which visitors immerse themselves and by which they are changed. In this sense, we argue that virtual learning communities, with their ability to virtualise language, are excellent mechanisms for extending the communicative reach of museums, creating new communication strategies. To achieve this goal, it was necessary to bring together four technical concepts from computer science: virtual learning communities, learning objects, metadata and topic maps. Combining these concepts enabled the construction of the virtual community environment named CV-Muzar. Finally, in order to identify the means by which visitors can be motivated to actually produce new knowledge during an informal museum visit, examining this question both quantitatively and qualitatively, the environment was tested with a group of people invited according to their areas of training.

Relevance:

100.00%

Publisher:

Abstract:

This article aims to show that it is possible to encourage informal learning in museums through the construction of virtual communities, based on repositories of learning objects, communication tools and the production of LOs by the visitors themselves. The focus is on encouraging learning in the sense of motivating the visitor's participation and involvement in the activities of the virtual community. From this perspective, we start from the assumption that information, communication, interaction and cooperation are essential to the learning process in the informal context of museums. We believe that interaction and cooperation are integral parts of the learning process afforded by virtual communities, and that the main learning resource offered in these communities is the learning object. Accordingly, we built the Muzar Virtual Community and ran an experiment with the environment to verify the extent to which visitors are encouraged to produce new knowledge.

Relevance:

100.00%

Publisher:

Abstract:

As robot imitation learning begins to replace conventional hand-coded approaches to programming robot behaviors, much work focuses on learning from the actions of demonstrators. We hypothesize that in many situations, procedural tasks can be learned more effectively by observing object behaviors while completely ignoring the demonstrator's motions. To support the study of this hypothesis, and of robot imitation learning in general, we built a software system named SMILE, a simulated 3D environment in which both a simulated robot and a user-controlled demonstrator can manipulate various objects on a tabletop. The demonstrator is not embodied in SMILE, so a recorded demonstration appears as if the objects move on their own. In addition to recording demonstrations, SMILE also allows programming the simulated robot via Matlab scripts, as well as creating highly customizable objects for task scenarios via XML. This report describes the features and usage of SMILE.
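The object-centric view of a demonstration can be sketched in a few lines. This is a hypothetical `moved_objects` helper over an invented trace format (a list of per-frame object positions), not SMILE's actual recording API:

```python
def moved_objects(trace, tol=1e-6):
    """Given a demonstration trace of object positions over time,
    return the names of objects whose position changed.

    `trace` is a list of {object_name: (x, y, z)} snapshots, one per
    frame -- mimicking how a recorded demonstration with no embodied
    demonstrator shows only the objects moving on their own.
    """
    if not trace:
        return set()
    first = trace[0]
    changed = set()
    for frame in trace[1:]:
        for name, pos in frame.items():
            ref = first.get(name, pos)
            if any(abs(a - b) > tol for a, b in zip(pos, ref)):
                changed.add(name)
    return changed

demo = [
    {"block": (0.0, 0.0, 0.0), "cup": (1.0, 0.0, 0.0)},
    {"block": (0.0, 0.5, 0.0), "cup": (1.0, 0.0, 0.0)},
]
print(sorted(moved_objects(demo)))  # the block moved, the cup did not
```

A learner built on this view would treat the set of moved objects, and the order in which they move, as the task specification, with no reference to demonstrator kinematics.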

Relevance:

100.00%

Publisher:

Abstract:

Teachers must use the curriculum to foster environments in which students take ownership of ICT (Information and Communication Technologies) and bring them into the teaching-learning process, thereby supporting the development of content across all subject areas. When ICT is used to improve students' meaningful learning, a weakness becomes evident: teachers struggle to track the informal activities a student carries out to understand a topic or acquire a competency. A first step towards addressing this problem is to integrate an educational standard that can log and track the informal activities students perform on the Internet, using the countless applications the web offers. To improve the assessment process for students of the Institución Educativa San José, the Tin Can API standard will be integrated into a prototype that tracks students' informal activities in order to keep a record of their learning experiences. With this in place, teachers must be prepared to use methodologies supported by these technological tools, so that learning is achieved with greater enjoyment and quality.
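A Tin Can (xAPI) statement records one learning experience as an actor/verb/object triple. The sketch below builds a minimal statement of that shape; the helper name, the example URIs and the email address are illustrative, not tied to the prototype described above or to any particular learning record store:

```python
import json

def make_statement(actor_email, verb, activity_id, activity_name):
    """Build a minimal Tin Can (xAPI) statement recording one informal
    learning activity. The top-level structure (actor / verb / object)
    follows the xAPI statement format; the IDs used here are
    illustrative URIs.
    """
    return {
        "actor": {"mbox": f"mailto:{actor_email}"},
        "verb": {
            "id": f"http://adlnet.gov/expapi/verbs/{verb}",
            "display": {"en-US": verb},
        },
        "object": {
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

stmt = make_statement("student@example.org", "experienced",
                      "http://example.org/videos/fractions-intro",
                      "Introductory video on fractions")
print(json.dumps(stmt, indent=2))
```

A tracking prototype would POST statements like this to a learning record store as the student watches videos, reads pages or completes exercises, giving the teacher a queryable record of informal activity.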

Relevance:

100.00%

Publisher:

Abstract:

In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that uses workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which gives the data scientist user control, repeatable semantics and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose NOW!, a new progressive data-parallel computation framework that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task, and loading them into distributed memory, leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
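The core progressive-sampling idea, refining an estimate over increasingly large samples rather than scanning all the data at once, can be sketched in a few lines. This is a toy illustration with an invented `progressive_mean` helper, not the NOW! framework's actual semantics or API:

```python
import random

def progressive_mean(values, sample_sizes, seed=42):
    """Report a sequence of estimates of the mean over progressively
    larger random samples, each refining the last. A fixed seed keeps
    the run deterministic, echoing the repeatable semantics that
    progressive analytics aims to provide.
    """
    rng = random.Random(seed)
    estimates = []
    for n in sample_sizes:
        sample = rng.sample(values, min(n, len(values)))
        estimates.append(sum(sample) / len(sample))
    return estimates

data = list(range(1, 10_001))                    # true mean is 5000.5
ests = progressive_mean(data, [10, 100, 1000, 10_000])
print(ests)  # early estimates are cheap and rough; the last is exact
```

A real system additionally reuses the work done on earlier samples when computing later ones, which is what a naive re-sampling loop like this cannot do.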

Relevance:

100.00%

Publisher:

Abstract:

This work presents a study of the Baars-Franklin architecture, which defines a model of computational consciousness, and its use in a mobile robot navigation task. The insertion of mobile robots into dynamic environments makes navigation tasks highly complex; to deal with constant environmental change, it is essential that the robot can adapt to this dynamism. The approach in this work is to make the execution of these tasks closer to how human beings react to the same conditions, by means of a model of computational consciousness. The LIDA architecture (Learning Intelligent Distribution Agent) is a cognitive system that seeks to model some aspects of human cognition, from low-level perception to decision making, including an attention mechanism and episodic memory. In the present work, a computational implementation of the LIDA architecture was evaluated by means of a case study, aiming to assess the capabilities of a cognitive approach to navigation of a mobile robot in dynamic and unknown environments, using experiments both in virtual environments (simulation) and with a real robot in a realistic environment. The study concluded that conscious cognitive models can bring benefits to mobile robot navigation tasks, and presents the positive and negative aspects of this approach.
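The perceive, attend, act flow that LIDA structures into a cognitive cycle can be caricatured in a few lines. This sketch is only an illustration of that flow; real LIDA involves codelets, a global workspace and episodic memory, none of which appear here, and all names below are invented:

```python
def cognitive_cycle(percepts, goals):
    """One highly simplified LIDA-style cycle: perceive, let the most
    salient percept win the attention competition, then select the
    action associated with it.
    """
    if not percepts:
        return "wait"
    # Attention: the percept with the highest activation reaches
    # "consciousness" and is broadcast to action selection.
    salient = max(percepts, key=lambda p: p["activation"])
    return goals.get(salient["label"], "explore")

percepts = [
    {"label": "obstacle_ahead", "activation": 0.9},
    {"label": "charging_dock",  "activation": 0.4},
]
goals = {"obstacle_ahead": "turn_left", "charging_dock": "dock"}
print(cognitive_cycle(percepts, goals))
```

In a navigation setting, such a cycle runs continuously, so a sudden obstacle can out-compete an ongoing goal for attention, which is the adaptivity the dissertation evaluates.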


Relevance:

100.00%

Publisher:

Abstract:

The large number of opinions generated by online users has brought the former "word of mouth" into the virtual world. Besides being numerous, many useful reviews are mixed in with a large number of fraudulent, incomplete or duplicate ones. How, then, can we find the features that influence the number of votes an opinion receives, and so find useful reviews? The literature on opinion mining offers several studies and techniques that can analyze properties found in the text of reviews. This paper applies a methodology for evaluating the usefulness of opinions, with the aim of identifying which characteristics have the greatest influence on the number of votes: basic utility features (e.g., ratings of the product and/or service, date of publication), textual features (e.g., size of words and paragraphs) and semantic features (e.g., the meaning of the words in the text). The evaluation was performed on a database extracted from TripAdvisor containing opinions about hotels written in Portuguese. Results show that users pay more attention to recent opinions with higher scores for the hotel's value and location and lower scores for sleep quality, service and cleanliness. Texts with positive opinions, short words, and few adjectives and adverbs increase the chances of receiving more votes.
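The textual features studied (word size, adjective and adverb usage) can be extracted with a simple sketch. The function name and the tiny hand-made word lists below are hypothetical stand-ins for real part-of-speech tagging, which the study would need for Portuguese text:

```python
def textual_features(review):
    """Extract simple textual features of the kind related to
    helpfulness votes: review length, average word size, and the
    share of words from small illustrative adjective/adverb lists.
    """
    words = review.lower().split()
    adjectives = {"great", "clean", "small", "noisy", "comfortable"}
    adverbs = {"very", "really", "quite", "extremely"}
    n = len(words)
    return {
        "num_words": n,
        "avg_word_len": sum(len(w) for w in words) / n if n else 0.0,
        "adj_ratio": sum(w in adjectives for w in words) / n if n else 0.0,
        "adv_ratio": sum(w in adverbs for w in words) / n if n else 0.0,
    }

feats = textual_features("Great location and very clean rooms")
print(feats)
```

Features like these, combined with the basic utility features (rating, publication date), would feed a model predicting the vote count of each review.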