918 results for one-to-one computing


Relevance: 100.00%

Abstract:

The integration of ICT has grown considerably in recent years, and researchers around the world attach ever-increasing importance to it; ICT in education has thus been a widespread topic in the literature for several years now (Istance & Kools, 2013; Storz & Hoffman, 2013). In a world where technology is ubiquitous in most spheres of activity, the question is no longer whether technology should be integrated into teaching and learning activities, but how. Since ICT offers many benefits, notably for academic motivation and for reducing the digital divide, education stakeholders are generally aware of the importance of using information and communication technologies (ICT) well in education, but do not always know where to start. This research focuses on a particular form of ICT integration in education: one-to-one laptop programs, which are distinguished by the fact that the teacher and each student have their own laptop for pedagogical use. This doctoral thesis sets out, in clear and accessible language, the challenges that can arise in such programs, as well as what can be done to limit their impact. To determine the conditions that can foster the overall success of laptop programs in Quebec and beyond, an exhaustive literature review identified four main categories of factors into which all of the identified challenges appear to fall: factors related to project management, factors internal to the teacher, factors related to the work environment, and factors related to infrastructure and equipment. These categories are discussed in detail in the theoretical framework of the thesis. To meet the objectives, a questionnaire was developed and answered by more than 300 teachers from a school board running a large-scale laptop program. The mixed data (quantitative and qualitative) were analyzed with specialized software, which made it possible to verify the relevance of the elements found in the literature review and to uncover new ones. Many challenges were found to be likely. The most important concern the quality of the equipment used, the importance of teacher training in ICT, and the importance of developing a clear vision that secures teachers' full buy-in. It was also determined that teachers must have easy access to both pedagogical and technical support. Finally, it was found that the nature of large-scale programs makes it important to pay particular attention to teachers' local needs, which can vary with their work context.

Relevance: 100.00%

Abstract:

Space applications are challenged by the reliability of the parallel computing systems (FPGAs) employed in spacecraft, owing to Single-Event Upsets. The work reported in this paper aims to achieve self-managing systems that are reliable for space applications by applying autonomic computing constructs to parallel computing systems. A novel technique, 'Swarm-Array Computing', inspired by swarm robotics and built on the foundations of autonomic and parallel computing, is proposed as a path to achieving autonomy. The constitution of swarm-array computing, comprising four constituents, namely the computing system, the problem/task, the swarm and the landscape, is considered. Three approaches that bind these constituents together are proposed. The feasibility of one of the three proposed approaches is validated on the SeSAm multi-agent simulator, with landscapes representing the computing space and the problem generated using MATLAB.
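
As an illustration only (this is not the paper's actual model), a toy version of the swarm-array idea in Python: the landscape is a grid of cores, single-event upsets knock cores out at random, and each task is carried by an agent that autonomously migrates to a healthy neighboring core, with no central manager involved.

    import random

    GRID = 8  # the landscape: an 8x8 array of cores

    def healthy_neighbors(x, y, healthy):
        """4-connected neighboring cores that are still operational."""
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < GRID and 0 <= ny < GRID and healthy[nx][ny]:
                yield nx, ny

    class TaskAgent:
        """A swarm agent carrying one task across the core landscape."""
        def __init__(self, x, y):
            self.x, self.y = x, y

        def step(self, healthy):
            # Autonomic self-management: if the hosting core was upset,
            # migrate the task to a healthy neighbor.
            if not healthy[self.x][self.y]:
                options = list(healthy_neighbors(self.x, self.y, healthy))
                if options:
                    self.x, self.y = random.choice(options)

    healthy = [[True] * GRID for _ in range(GRID)]
    agents = [TaskAgent(random.randrange(GRID), random.randrange(GRID))
              for _ in range(10)]
    for tick in range(50):
        healthy[random.randrange(GRID)][random.randrange(GRID)] = False  # an upset
        for agent in agents:
            agent.step(healthy)
    print(sum(healthy[a.x][a.y] for a in agents), "of 10 tasks on healthy cores")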

Relevance: 100.00%

Abstract:

In order to accelerate computing the convex hull of a set of n points, a heuristic procedure is often applied to reduce the number of points to a set of s points, s ≤ n, that contains the same hull. We present an algorithm to precondition 2D data with integer coordinates bounded by a box of size p × q before building a 2D convex hull, with three distinct advantages. First, we prove that under the condition min(p, q) ≤ n the algorithm executes in O(n) time; second, no explicit sorting of the data is required; and third, the reduced set of s points forms a simple polygonal chain and thus can be pipelined directly into an O(n)-time convex hull algorithm. This paper empirically evaluates and quantifies the speedup gained by preconditioning a set of points with a method based on the proposed algorithm before using common convex hull algorithms to build the final hull. A speedup factor of at least four is consistently found in experiments on various datasets when the condition min(p, q) ≤ n holds; the smaller the ratio min(p, q)/n in the dataset, the greater the speedup factor achieved.
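
A minimal sketch of one plausible preconditioning of this kind, assuming integer x-coordinates in [0, p) with p ≤ n (the paper's own algorithm may differ in detail): keep only the extreme y values in each x-column, then emit the survivors as a simple polygonal chain that an O(n) hull algorithm for simple chains (such as Melkman's) can consume directly.

    def precondition(points, p):
        """Reduce `points` (integer coords with 0 <= x < p) to a simple
        polygonal chain containing the same convex hull, in O(n + p) time
        and without any explicit sorting."""
        ymin = [None] * p
        ymax = [None] * p
        for x, y in points:                 # single O(n) bucketing pass
            if ymin[x] is None or y < ymin[x]:
                ymin[x] = y
            if ymax[x] is None or y > ymax[x]:
                ymax[x] = y
        # Left-to-right along the column maxima, right-to-left along the
        # minima: an x-monotone closed chain (a duplicated point in a
        # single-point column is harmless for chain-based hull algorithms).
        upper = [(x, ymax[x]) for x in range(p) if ymax[x] is not None]
        lower = [(x, ymin[x]) for x in range(p - 1, -1, -1) if ymin[x] is not None]
        return upper + lower

    # Every hull vertex is the min or max y of its column, so the chain
    # preserves the hull while keeping at most 2p of the n points.
    print(precondition([(0, 0), (1, 5), (1, -2), (2, 1), (0, 3)], p=3))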

Relevance: 100.00%

Abstract:

In the last 10 years the number of mobile devices has grown rapidly. Each person usually carries at least two personal devices, and researchers say that in the near future this number could rise to as many as ten devices per person. Moreover, devices are becoming more integrated into our lives than in the past, so the amount of data exchanged grows along with improvements in people's lifestyles. This is what researchers call the Internet of Things. In the future there will thus be more than 60 billion nodes, and the current infrastructure is not ready to keep track of all the data exchanged between them. Infrastructure improvements such as Mobile IP and HIP have therefore been proposed in recent years to facilitate the exchange of packets under mobility, but none of them has been optimized for this purpose. In recent years, researchers at Mid Sweden University created the MediaSense Framework. Initially this framework was based on the Chord protocol to route packets in a large network, but the most important change has been the introduction of P-Grid to create the overlay and provide persistence. Thanks to this technology, a lookup in the trie takes up to 0.5*log(N) hops, where N is the total number of nodes in the network. This result could be improved by further optimizations in the management of nodes, for example the dynamic creation of groups of nodes. Moreover, since nodes move, underlying support for connectivity management is needed; SCTP has been selected as one of the most promising upcoming standards for managing multiple simultaneous connections.
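
A toy sketch of P-Grid-style prefix routing (illustrative only, not MediaSense code): each peer is responsible for a binary key prefix and holds one routing reference per prefix level, so every hop resolves at least one more bit of the key, and a lookup in a balanced trie costs on the order of 0.5*log2(N) hops on average.

    import random

    class Peer:
        """A peer responsible for a binary key prefix. `network` maps every
        prefix to some peer whose identifier extends that prefix, standing
        in for the per-level routing references a real P-Grid peer keeps."""
        def __init__(self, prefix, network):
            self.prefix = prefix
            self.network = network

        def lookup(self, key, hops=0):
            for i, bit in enumerate(self.prefix):
                if key[i] != bit:
                    # First mismatching bit: forward to the level-i reference,
                    # a peer agreeing with the key on at least i+1 bits.
                    return self.network[key[: i + 1]].lookup(key, hops + 1)
            return self.prefix, hops  # the key falls in our region

    DEPTH = 10  # a balanced trie with 2**10 peers
    network = {}
    peers = [Peer(format(n, f"0{DEPTH}b"), network) for n in range(2 ** DEPTH)]
    for peer in peers:
        for level in range(1, DEPTH + 1):
            network[peer.prefix[:level]] = peer  # any peer under the prefix serves it

    key = format(random.randrange(2 ** DEPTH), f"0{DEPTH}b")
    print(random.choice(peers).lookup(key))  # (owning prefix, hop count)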

Relevance: 100.00%

Abstract:

Higher education in the twenty-first century goes beyond the simple transfer of knowledge in which the student is a passive receiver and the active role belongs to the teacher. Today, co-constructing knowledge with students is an enriching strategy for both, all the more so when the pedagogical activities have the community itself among their recipients. The experience reported here attempts to complement university knowledge, mediate it for different audiences, and develop the capacity to produce it and apply it appropriately to defined groups that need the information being delivered. Digital technology in pedagogical-didactic settings is not limited to computing, virtual, or e-learning environments; it is an excellent tool for reaching diverse population groups. This article thus describes an experience carried out jointly by the Clínica del Paciente Discapacitado and Práctica Profesional Supervisada courses of the Facultad de Odontología, Universidad Nacional de Cuyo, in Mendoza, Argentina. A proposal for promoting oral health through the use of information and communication technologies (ICT) was designed: it consisted of students building web pages as an oral health education tool aimed at at-risk groups with disabilities, their families, and the dentists who treat these patients.

Relevance: 100.00%

Abstract:

The defining feature of Natural Computing is the use of concepts, principles and mechanisms inspired by Nature. Natural Computing, and within it Membrane Computing, emerges as a potential alternative to conventional computing and as a result of the search for new models of computation that may overcome the limitations of conventional models. Specifically, Membrane Computing was created to formulate a new computational paradigm inspired by the structure and functioning of biological cells: systems based on this model consist of a membrane structure whose membranes act both as separators and as communication channels, and within this structure reside multisets of objects that evolve according to given evolution rules. The computing devices addressed by Membrane Computing are generically known as P systems. Up to now, P systems have only been studied theoretically; they have been simulated or partially implemented, but never fully implemented in either electronic or biochemical media. The implementation of these systems is therefore an open research challenge. This thesis addresses one of the problems that must be solved in order to deploy P systems on hardware platforms. The specific problem concerns the Transition P System model and arises from the need for rule application algorithms that, regardless of the hardware platform on which they are implemented, meet the requirements of being non-deterministic, massively parallel, and statically bounded in execution time. As a result, a set of algorithms has been obtained, for both sequential and parallel platforms, suited to the different configurations of P systems.
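
A minimal software sketch, under simplifying assumptions (a single region, rules as multiset rewrites, no membrane dissolution), of the non-deterministic, maximally parallel rule application step that such algorithms must realize; the hardware-oriented algorithms of the thesis additionally guarantee a statically bounded execution time, which this naive loop does not.

    import random
    from collections import Counter

    def applicability(lhs, resources):
        """How many times a rule's left-hand side fits in the multiset."""
        return min(resources[obj] // k for obj, k in lhs.items())

    def max_parallel_step(resources, rules):
        """One non-deterministic, maximally parallel step: keep firing
        randomly chosen applicable rules until none applies, then add
        all produced objects at once."""
        produced = Counter()
        while True:
            live = [(lhs, rhs) for lhs, rhs in rules if applicability(lhs, resources) > 0]
            if not live:
                break
            lhs, rhs = random.choice(live)                 # non-deterministic choice
            times = random.randint(1, applicability(lhs, resources))
            for obj, k in lhs.items():
                resources[obj] -= k * times                # consume inputs
            for obj, k in rhs.items():
                produced[obj] += k * times
        resources.update(produced)                         # products appear after the step
        return resources

    # Example: rules a^2 -> b and ab -> c competing for the same objects.
    rules = [(Counter(a=2), Counter(b=1)), (Counter(a=1, b=1), Counter(c=1))]
    print(max_parallel_step(Counter(a=7, b=2), rules))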

Relevance: 100.00%

Abstract:

Membrane computing has emerged as an alternative to traditional computing. Within this field sit the so-called Transition P Systems, which are based on regions that contain resources together with rules that make those resources evolve, taking each region to a new situation called a configuration; the succession of configurations constitutes the computation. In this field, the Natural Computing Group of the Universidad Politécnica de Madrid carries out extensive research, from which numerous papers and several doctoral theses have resulted. The main research lines so far have been the study of the theoretical model on which Transition P Systems are defined, the study of the algorithms used to apply the evolution rules in the regions, the design of new architectures that improve communication among the different membranes (regions) composing the system, and the implementation of these systems on hardware devices that could define future machines based on this model. Within this last line, that is, within the goal of finally building machines able to carry out computation with P Systems, this doctoral thesis focuses on the design of two parallel processors that, by applying variants of existing algorithms, increase the level of intra-region parallelism in the rule application phase. The design and creation of both processors contribute novel elements to Transition P System research, in that they bring into hardware concepts that, although previously defined theoretically, had not been introduced into the hardware designed for these systems. Both processors share the following characteristics:
- They deliver high performance in the rule application phase, while keeping a moderate flexibility and scalability that depend on the final technology on which the processors are synthesized.
- They achieve a high level of intra-region parallelism by allowing rules to be applied simultaneously.
- They are universal, in that they do not depend on the nature of the rules composing the P System.
- They exhibit the non-deterministic behavior that is inherent to the very nature of these systems.
The first circuit uses the power set of the set of application rules together with the concept of maximal applicability to increase parallelism; the second also includes the concept of applicability domain to determine the set of rules that are applicable at each moment with the existing resources. Both processors are designed and tested with electronic design tools (Altera design software) and are ready to be synthesized on FPGAs.
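
A small software analogue (hypothetical, for illustration) of the two concepts the processors bring into hardware: the maximal applicability number of a rule given the region's resources, and the applicability domain, the subset of rules that can fire at all with those resources; the first processor's power-set view is sketched as the enumeration of candidate rule subsets.

    from collections import Counter
    from itertools import combinations

    def maximal_applicability(lhs, resources):
        """Largest k such that a rule with left-hand side `lhs` can fire k times."""
        return min(resources[obj] // need for obj, need in lhs.items())

    def applicability_domain(rules, resources):
        """Indices of the rules that can fire at least once with the current
        resources (the concept the second processor evaluates)."""
        return [i for i, (lhs, _) in enumerate(rules)
                if maximal_applicability(lhs, resources) > 0]

    def candidate_rule_subsets(rules, resources):
        """First-processor view: the power set of the rule set, pruned to
        non-empty subsets of individually applicable rules."""
        dom = applicability_domain(rules, resources)
        return [subset for k in range(1, len(dom) + 1)
                for subset in combinations(dom, k)]

    rules = [(Counter(a=2), Counter(b=1)), (Counter(b=1), Counter(c=2))]
    print(applicability_domain(rules, Counter(a=5)))          # -> [0]
    print(candidate_rule_subsets(rules, Counter(a=5, b=1)))   # subsets of {0, 1}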

Relevance: 100.00%

Abstract:

The results presented in this doctoral thesis fall within cellular computing with membranes, a research branch within natural computing created by Gh. Paun in 1998, whose devices are hence usually called P systems. This new distributed computing model is inspired by the structure and functioning of the cell. The aim of this thesis is to analyze the computational power and efficiency of these cellular computing systems. Specifically, two classes of P systems are analyzed: on the one hand, spiking neural P systems, and on the other, P systems with proteins on membranes. For the first class, the results show that these systems remain universal even when many of their features are limited or removed. For the second class, computational efficiency is analyzed and it is shown that they can solve problems in the complexity class PSPACE in polynomial time. Computational power analysis: spiking neural P systems (SN P systems for short) are inspired by the way neurons operate and by the way spikes propagate through synaptic networks. Bio-inspired SN P systems possess a wide range of features that make them universal and therefore equivalent, in computational power, to a Turing machine. Such systems are computationally powerful, but as defined they incorporate many features, perhaps too many. In (Ibarra et al., 2007) it was shown that their functionality can be limited without compromising universality. The results presented here continue that line of work and contribute new normal forms: new simplified variants of SN P systems with a minimal set of features that nevertheless keep universal computational power. Computational efficiency analysis: this thesis studies the computational efficiency of P systems with proteins on membranes. It is shown that this computing model is equivalent to parallel random access machines (PRAM) or alternating Turing machines, since a P system with proteins can solve a PSPACE-complete problem, QSAT (the quantified Boolean satisfiability problem), in polynomial time. This variant of P systems with proteins is highly efficient thanks to the power of proteins to catalyze inter-cellular communication processes.
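
A tiny illustrative simulator, drastically simplified relative to the thesis's constructions (one rule per neuron, no delays, no forgetting rules): neurons hold spike counts, and a rule fires when its threshold is met, consuming spikes and sending one spike along every outgoing synapse.

    # Toy SN P system: a rule (threshold, consumed) fires when the neuron
    # holds at least `threshold` spikes, consumes `consumed` of them, and
    # emits one spike along each outgoing synapse.
    neurons = {"n1": 2, "n2": 0, "n3": 1}
    synapses = {"n1": ["n2", "n3"], "n2": ["n3"], "n3": []}
    rules = {"n1": (2, 2), "n2": (1, 1), "n3": (2, 2)}

    def step(neurons):
        fired = [n for n, (thr, _) in rules.items() if neurons[n] >= thr]
        nxt = dict(neurons)
        for n in fired:                    # all enabled neurons fire in parallel
            nxt[n] -= rules[n][1]          # consume spikes
        for n in fired:
            for target in synapses[n]:
                nxt[target] += 1           # deliver one spike per synapse
        return nxt

    for t in range(3):
        print(t, neurons)
        neurons = step(neurons)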

Relevance: 100.00%

Abstract:

This paper proposes an architecture for pervasive computing that utilizes context information to provide adaptations based on vertical handovers (handovers between heterogeneous networks) while supporting application Quality of Service (QoS). The future of mobile computing will see an increase in ubiquitous network connectivity that allows users to roam freely between heterogeneous networks. One of the requirements for pervasive computing is to adapt computing applications or their environment if current applications can no longer be provided with the requested QoS. One possible adaptation is a vertical handover to a different network. Vertical handover operations include changing network interfaces on a single device or changing between different devices. Such handovers should be performed with minimal user distraction and minimal violation of communication QoS for user applications. The solution utilises context information regarding user devices, user location, application requirements, and the network environment. The paper shows how vertical handover adaptations are incorporated into the whole infrastructure of a pervasive system.
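
A minimal sketch of the kind of context-driven handover decision such an architecture makes; the names, attributes, and policy below are illustrative assumptions, not the paper's API. Candidate networks are filtered by the application's QoS requirements, a handover is triggered only when the current network no longer satisfies them, and the cheapest admissible candidate wins.

    from dataclasses import dataclass

    @dataclass
    class Network:
        name: str
        bandwidth_mbps: float   # measured network context
        latency_ms: float
        cost_per_mb: float

    @dataclass
    class AppQoS:
        min_bandwidth_mbps: float
        max_latency_ms: float

    def choose_network(current, candidates, qos):
        """Pick the cheapest candidate meeting the application's QoS; hand
        over only if the current network no longer satisfies the QoS."""
        def ok(net):
            return (net.bandwidth_mbps >= qos.min_bandwidth_mbps
                    and net.latency_ms <= qos.max_latency_ms)
        if ok(current):
            return current                 # avoid needless user disruption
        admissible = [n for n in candidates if ok(n)]
        if not admissible:
            return current                 # degrade gracefully, no handover
        return min(admissible, key=lambda n: n.cost_per_mb)

    wlan = Network("wlan0", 0.8, 40, 0.0)  # congested WLAN
    lte = Network("lte0", 20.0, 60, 0.01)
    print(choose_network(wlan, [lte], AppQoS(2.0, 100)).name)  # -> lte0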

Relevance: 100.00%

Abstract:

Highlights of Data Expedition:
• Students explored daily observations of local climate data spanning the past 35 years.
• Topological Data Analysis, or TDA for short, provides cutting-edge tools for studying the geometry of data in arbitrarily high dimensions.
• Using TDA tools, students discovered intrinsic dynamical features of the data and learned how to quantify periodic phenomena in a time series.
• Since nature invariably produces noisy data which rarely has exact periodicity, students also considered the theoretical basis of almost-periodicity and even invented and tested new mathematical definitions of almost-periodic functions.

Summary: The dataset we used for this data expedition comes from the Global Historical Climatology Network. "GHCN (Global Historical Climatology Network)-Daily is an integrated database of daily climate summaries from land surface stations across the globe." Source: https://www.ncdc.noaa.gov/oa/climate/ghcn-daily/ We focused on the daily maximum and minimum temperatures from January 1, 1980 to April 1, 2015 collected at RDU International Airport. Through a guided series of exercises designed to be performed in Matlab, students explore these time series, initially by direct visualization and basic statistical techniques. Then students are guided through a special sliding-window construction which transforms a time series into a high-dimensional geometric curve. These high-dimensional curves can be visualized by projecting down to lower dimensions (Figure 1); however, our focus here was to use persistent homology to study the high-dimensional embedding directly. The shape of these curves carries meaningful information, but how one describes the "shape" of data depends on the scale at which the data is considered, and choosing the appropriate scale is rarely an obvious choice. Persistent homology overcomes this obstacle by allowing us to quantitatively study geometric features of the data across multiple scales. Through this data expedition, students are introduced to numerically computing persistent homology using the Rips collapse algorithm and interpreting the results. In the specific context of sliding-window constructions, 1-dimensional persistent homology can reveal the nature of periodic structure in the original data. I created a special technique to study how these high-dimensional sliding-window curves form loops in order to quantify the periodicity. Students are guided through this construction and learn how to visualize and interpret this information. Climate data is extremely complex (as anyone who has suffered from a bad weather prediction can attest) and numerous variables play a role in determining our daily weather and temperatures. This complexity, coupled with imperfections of measuring devices, results in very noisy data, which causes the annual seasonal periodicity to be far from exact. To this end, I have students explore existing theoretical notions of almost-periodicity and test them on the data. They find that some existing definitions are inadequate in this context. Hence I challenged them to invent new mathematics by proposing and testing their own definitions, and these students rose to the challenge with a number of creative definitions. While autocorrelation and spectral methods based on Fourier analysis are often used to explore periodicity, the construction here provides an alternative paradigm for quantifying periodic structure in almost-periodic signals using tools from topological data analysis.
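
A minimal sketch of the sliding-window construction, assuming an evenly sampled series (the symbols dim and tau below are illustrative parameter names): each window of dim+1 samples, spaced tau apart, becomes one point in R^(dim+1), and the loops of the resulting curve can then be measured with 1-dimensional persistent homology (for example with the ripser package, left commented out below).

    import numpy as np

    def sliding_window(series, dim, tau=1):
        """Map a 1-D time series to points in R^(dim+1): the i-th point is
        (f(i), f(i+tau), ..., f(i+dim*tau)). Periodicity in the series
        shows up as loops in this high-dimensional curve."""
        series = np.asarray(series, dtype=float)
        n = len(series) - dim * tau
        return np.stack([series[i : i + dim * tau + 1 : tau] for i in range(n)])

    # A noisy annual-like signal: one cycle per 365 samples plus noise.
    t = np.arange(5 * 365)
    temps = 10 * np.sin(2 * np.pi * t / 365) + np.random.randn(t.size)
    cloud = sliding_window(temps, dim=20, tau=10)

    # 1-D persistent homology of the point cloud (requires `pip install ripser`):
    # from ripser import ripser
    # diagrams = ripser(cloud, maxdim=1)["dgms"]  # a long H1 bar signals strong periodicity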

Relevance: 100.00%

Abstract:

This article discusses the potential of audio games based on the evaluation of three projects: a story-driven audio role-playing game (RPG), an interactive audiobook with RPG elements, and a set of casual sound-based games. Potential is understood in terms of both popularity and playability: the first factor concerns the degree of players' interest, the second the degree of their engagement in sound-based game worlds. Although the presented projects are embedded within the landscape of past and contemporary audio games and gaming platforms, the authors reach into the near future, concluding with possible development directions for this non-visual interactive entertainment.

Relevance: 100.00%

Abstract:

Elasticity is one of the best-known capabilities of cloud computing, and it is largely deployed reactively using thresholds: maximum and minimum limits drive resource allocation and deallocation actions. This leads to the following problem statements: how can cloud users set threshold values to enable elasticity in their cloud applications, and what is the impact of the application's load pattern on elasticity? This article tries to answer these questions for iterative high performance computing applications, showing the impact of both thresholds and load patterns on application performance and resource consumption. To accomplish this, we developed a reactive, PaaS-based elasticity model called AutoElastic and employed it on a private cloud to execute a numerical integration application. We present an analysis of best practices and possible optimizations regarding the elasticity and HPC pairing. Considering the results, we observed that the maximum threshold influences application time more than the minimum one. We concluded that threshold values close to 100% of CPU load are directly related to weaker reactivity, postponing resource reconfiguration when activating it in advance could be pertinent for reducing the application runtime.
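
A minimal sketch of the reactive, threshold-based elasticity loop the article analyzes (the parameter names and values are illustrative, not AutoElastic's): average CPU load above the upper threshold allocates a node, load below the lower threshold releases one.

    def reactive_elasticity(loads, nodes, upper=0.85, lower=0.30,
                            min_nodes=1, max_nodes=16):
        """One controller decision per observation: scale out when average
        CPU load crosses `upper`, scale in when it drops below `lower`.
        Upper thresholds near 1.0 react late and can stretch the runtime."""
        history = []
        for load in loads:
            if load > upper and nodes < max_nodes:
                nodes += 1          # allocate: reconfigure before saturation
            elif load < lower and nodes > min_nodes:
                nodes -= 1          # deallocate: release idle resources
            history.append(nodes)
        return history

    print(reactive_elasticity([0.5, 0.9, 0.95, 0.7, 0.2, 0.1], nodes=2))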

Relevance: 100.00%

Abstract:

To analyze the characteristics and predict the dynamic behaviors of complex systems over time, comprehensive research is crucially needed to enable the development of systems that can intelligently adapt to evolving conditions and infer new knowledge with algorithms that are not predesigned. This dissertation studies the integration of techniques and methodologies from the fields of pattern recognition, intelligent agents, artificial immune systems, and distributed computing platforms to create technologies that can more accurately describe and control the dynamics of real-world complex systems. The need for such technologies is emerging in manufacturing, transportation, hazard mitigation, weather and climate prediction, homeland security, and emergency response. Motivated by the ability of mobile agents to dynamically incorporate additional computational and control algorithms into executing applications, mobile agent technology is employed in this research for adaptive sensing and monitoring in a wireless sensor network. Mobile agents are software components that can travel from one computing platform to another in a network, carrying the programs and data states needed to perform their assigned tasks. To support the generation, migration, communication, and management of mobile monitoring agents, an embeddable mobile agent system (Mobile-C) is integrated with sensor nodes. Mobile monitoring agents visit distributed sensor nodes, read real-time sensor data, and perform anomaly detection using the equipped pattern recognition algorithms. Optimal control of the agents is achieved by mimicking the adaptive immune response and applying multi-objective optimization algorithms. The mobile agent approach has the potential to reduce the communication load and energy consumption in monitoring networks. The major research work of this dissertation project includes: (1) studying effective feature extraction methods for time series measurement data; (2) investigating the impact of feature extraction methods and dissimilarity measures on the performance of pattern recognition; (3) researching the effects of environmental factors on the performance of pattern recognition; (4) integrating an embeddable mobile agent system with wireless sensor nodes; (5) optimizing agent generation and distribution using artificial immune system concepts and multi-objective algorithms; (6) applying mobile agent technology and pattern recognition algorithms to adaptive structural health monitoring and driving cycle pattern recognition; (7) developing a web-based monitoring network to enable the visualization and analysis of real-time sensor data remotely. The techniques and algorithms developed in this dissertation project will contribute to research advances in networked distributed systems operating under changing environments.
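
A minimal sketch of the monitoring pattern described above, with hypothetical features, dissimilarity measure, and threshold: an agent extracts a feature vector from a window of sensor readings and flags an anomaly when the dissimilarity to a baseline exceeds the threshold, so only the verdict, not the raw data, needs to leave the node.

    import math

    def extract_features(window):
        """Simple time-series features: mean, standard deviation, peak value."""
        n = len(window)
        mean = sum(window) / n
        std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
        return [mean, std, max(abs(x) for x in window)]

    def dissimilarity(a, b):
        """Euclidean distance between feature vectors; the dissertation notes
        this choice materially affects recognition performance."""
        return math.dist(a, b)

    def agent_check(window, baseline, threshold=2.0):
        """What a visiting mobile agent would run on a sensor node."""
        return dissimilarity(extract_features(window), baseline) > threshold

    baseline = extract_features([0.1, -0.2, 0.05, 0.0, -0.1, 0.15])
    print(agent_check([3.0, -2.5, 2.8, -3.1, 2.9, -2.7], baseline))  # -> True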

Relevance: 100.00%

Abstract:

In Fiscal Year 2014-15 the General Assembly funded the K-12 Technology Initiative with lottery fund revenues. The Initiative has three objectives: to improve external connections to schools; to improve internal connections within schools; and to develop or expand one-to-one computing. Also included is a summary of school district and school responses to questions on the South Carolina Technology Counts Survey for the 2015-16 reporting period that pertain directly to the K-12 Technology Initiative.

Relevance: 90.00%

Abstract:

The main problem with current approaches to quantum computing is the difficulty of establishing and maintaining entanglement. A Topological Quantum Computer (TQC) aims to overcome this by using different physical processes that are topological in nature and which are less susceptible to disturbance by the environment. In a (2+1)-dimensional system, pseudoparticles called anyons have statistics that fall somewhere between bosons and fermions. The exchange of two anyons, an effect called braiding from knot theory, can occur in two different ways. The quantum states corresponding to the two elementary braids constitute a two-state system, allowing the definition of a computational basis. Quantum gates can be built up from patterns of braids, and for quantum computing it is essential that the operator describing the braiding, the R-matrix, be unitary. The physics of anyonic systems is governed by quantum groups, in particular the quasi-triangular Hopf algebras obtained from finite groups by the application of the Drinfeld quantum double construction. Their representation theory has been described in detail by Gould and Tsohantjis, and in this review article we relate the work of Gould to TQC schemes, particularly that of Kauffman.
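
For gate construction, the braid operator must satisfy two standard constraints, stated here in general form (R acts on V ⊗ V, with I the identity on V): the Yang-Baxter (braid) relation, so that topologically equivalent braid patterns implement the same gate, and unitarity, so that each braid is a physical quantum gate.

    % Braid relation (Yang-Baxter): equivalent braids give equal operators
    (R \otimes I)(I \otimes R)(R \otimes I) = (I \otimes R)(R \otimes I)(I \otimes R)
    % Unitarity, required for R to define a quantum gate
    R R^{\dagger} = R^{\dagger} R = I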