824 results for Computer Communication Networks
Abstract:
In-Band Full-DupleX (IB-FDX) is defined as the ability of nodes to transmit and receive signals simultaneously on the same channel. Conventional digital wireless networks do not implement it, since a node's own transmission signal interferes with the signal it is trying to receive. However, recent studies attempt to overcome this obstacle, since IB-FDX can potentially double the spectral efficiency of current wireless networks. Several mechanisms exist today that can remove a significant part of the Self-Interference (SI), although specially tuned Medium Access Control (MAC) protocols are required to make the best use of them. One of IB-FDX's biggest problems is that it extends the nodes' interference range, broadening the space that is unusable for other transmissions and receptions. This dissertation proposes using MultiPacket Reception (MPR) to address this issue and adapts an existing Single-Carrier with Frequency-Domain Equalization (SC-FDE) receiver to IB-FDX. The performance analysis suggests that MPR and IB-FDX have a strong synergy and achieve higher data rates when used together. Using analytical models, the transmission patterns and transmission power that maximize channel capacity with minimal energy consumption were identified. These results were used to define a new MAC protocol, named Full-duplex Multipacket reception Medium Access Control (FM-MAC). FM-MAC was designed for a single-hop cellular infrastructure where the Access Point (AP) and the terminals implement both IB-FDX and MPR. It divides the coverage range of the AP into a closer Full-DupleX (FDX) zone and a farther Half-DupleX (HDX) zone, and adds a tunable fairness mechanism to avoid terminal starvation. Simulation results show that the protocol provides efficient support for both HDX and FDX terminals, maximizing capacity when more FDX terminals are used.
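The "potentially double" claim above can be illustrated with a rough Shannon-capacity comparison (this sketch is not taken from the dissertation; the bandwidth, SNR and residual self-interference values are assumed). A half-duplex link carries one direction at a time, while an IB-FDX link carries both directions at an SNR degraded by the residual SI:

```python
import math

def capacity(bandwidth_hz, snr_linear):
    # Shannon capacity of a single link, in bits per second.
    return bandwidth_hz * math.log2(1 + snr_linear)

B, snr = 1e6, 100.0   # 1 MHz channel, 20 dB SNR (assumed values)
si_residual = 1.0     # residual self-interference power relative to noise

hdx = capacity(B, snr)                          # one direction at a time
fdx = 2 * capacity(B, snr / (1 + si_residual))  # both directions, SI-degraded SNR

print(fdx / hdx)  # ≈ 1.7 with these assumed values
```

As the SI cancellation improves (`si_residual` approaching zero), the gain approaches the theoretical factor of 2.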
Abstract:
This research addresses the problem of creating interactive experiences that encourage people to explore spaces. Besides the obvious spaces to visit, such as museums or art galleries, the spaces people visit can also be, for example, a supermarket or a restaurant. As technology evolves, people become more demanding in the way they use it and expect better forms of interaction with the space that surrounds them. Interaction with the space allows information to be transmitted to visitors in a friendly way, leading them to explore it and gain knowledge. Systems that provide better experiences while exploring spaces demand hardware and software that is not within the reach of every space owner, either because of the cost or the inconvenience of an installation that can damage artefacts or the space environment. We propose a system adaptable to spaces that uses a video camera network and the Wi-Fi network present at the space (or one that can be installed) to support interactive experiences on the visitor's mobile device. The system is composed of an infrastructure (called vuSpot), a language grammar used to describe interactions at a space (called XploreDescription), a visual tool used to design interactive experiences (called XploreBuilder) and a tool used to create interactive experiences (called urSpace). By using XploreBuilder, a tool built on top of vuSpot, a user with little or no experience in programming can define a space and design interactive experiences. This tool generates a description of the space and of the interactions at that space (complying with the XploreDescription grammar). These descriptions can be given to urSpace, another tool built on top of vuSpot, which creates the interactive experience application. With this system we explore new forms of interaction and use mobile devices and pico projectors to deliver additional information to users, leading to the creation of interactive experiences.
The components are presented, as well as the results of the respective user tests, which were positive. Design and implementation become cheaper, faster and more flexible and, since they do not depend on knowledge of a programming language, accessible to the general public.
Abstract:
Existing wireless networks are characterized by a fixed spectrum assignment policy. However, the scarcity of available spectrum and its inefficient usage demand a new communication paradigm that exploits the existing spectrum opportunistically. Future Cognitive Radio (CR) devices should be able to sense unoccupied spectrum and will allow the deployment of real opportunistic networks. Still, traditional Physical (PHY) and Medium Access Control (MAC) protocols are not suitable for this new type of network because they are optimized to operate over fixed assigned frequency bands. Therefore, novel PHY-MAC cross-layer protocols should be developed to cope with the specific features of opportunistic networks. This thesis focuses mainly on the design and evaluation of MAC protocols for Decentralized Cognitive Radio Networks (DCRNs). It starts with a characterization of the spectrum sensing framework based on the Energy-Based Sensing (EBS) technique, considering multiple scenarios. Then, guided by the sensing results obtained with this technique, we present two novel decentralized CR MAC schemes: the first designed to operate in single-channel scenarios and the second in multichannel scenarios. Analytical models for the network goodput, packet service time and individual transmission probability are derived and used to compute the performance of both protocols. Simulation results assess the accuracy of the analytical models as well as the benefits of the proposed CR MAC schemes.
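The Energy-Based Sensing decision the abstract refers to can be sketched minimally as follows (illustrative only; the window length and threshold are assumed here, whereas a real EBS detector derives the threshold from a target false-alarm probability and the noise floor):

```python
import random

def energy_detect(samples, threshold):
    # Test statistic: average signal energy over the sensing window.
    # Declare the channel occupied when the energy exceeds the threshold.
    energy = sum(s * s for s in samples) / len(samples)
    return energy > threshold

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]   # idle channel
signal = [n + 2.0 for n in noise]                       # noise plus a strong component

# With unit-variance noise the average energy is close to 1,
# so a threshold of 2 cleanly separates the two cases here.
print(energy_detect(noise, 2.0))   # → False (channel free)
print(energy_detect(signal, 2.0))  # → True  (channel occupied)
```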
Abstract:
Wireless Sensor Networks (WSN) are networks of sensing and actuating devices that use wireless radios to communicate. A successful implementation of a wireless device must take into consideration the wide variety of radios available, the large number of communication parameters (payload, duty cycle, etc.) and the environmental conditions that may affect the device's behaviour. However, evaluating a specific radio for a particular application may require trial experiments, and with such a vast number of devices, communication parameters and environmental conditions to take into account, the number of trial cases generated can be surprisingly high. Manual validation of wireless communication technologies through field trials therefore becomes unsuitable. To overcome this issue, an automated test methodology was introduced, making it possible to acquire data on a device's behaviour when testing the technologies and parameters relevant to a specific analysis. This method speeds up the validation and analysis of wireless radios and allows validation to be carried out without specific, in-depth knowledge of wireless devices.
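The combinatorial blow-up that motivates the automated methodology is easy to see directly (the radio models and parameter values below are hypothetical, not taken from the thesis):

```python
from itertools import product

# Hypothetical test dimensions for a WSN radio evaluation campaign.
radios = ["radio_a", "radio_b", "radio_c"]
payloads = [16, 32, 64, 128]          # bytes
duty_cycles = [0.1, 0.5, 1.0]         # fraction of time transmitting
environments = ["indoor", "outdoor", "industrial"]

# Every combination is one trial case; the count is the product of the sizes.
trial_cases = list(product(radios, payloads, duty_cycles, environments))
print(len(trial_cases))  # → 108 cases from just four parameters
```

Adding even one more parameter with a handful of values multiplies the count again, which is why running and logging these trials by hand quickly becomes impractical.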
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems could be the same for all applications, which facilitates implementation. For hand posture recognition, an SVM (Support Vector Machine) model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM (Hidden Markov Model) was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications. To validate the proposed framework, two applications were implemented. The first is a real-time system able to interpret Portuguese Sign Language. The second is an online system that helps a robotic soccer referee judge a game in real time.
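As a toy illustration of training a linear classifier on hand-posture feature vectors, the sketch below uses a simple perceptron as a stand-in for the SVM actually used in the work (the two-dimensional features and the two posture labels are invented for the example):

```python
def train_perceptron(X, y, epochs=100):
    # Classic perceptron update rule; guaranteed to converge when the
    # classes are linearly separable, as this toy data is.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                updated = True
        if not updated:
            break  # every training point is classified correctly
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical 2-D posture features: "closed fist" (-1) vs "open palm" (+1).
X = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3], [1.0, 1.0], [0.9, 1.2], [1.1, 0.8]]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_perceptron(X, y)
preds = [predict(w, b, x) for x in X]
print(preds)  # → [-1, -1, -1, 1, 1, 1]
```

An SVM additionally maximizes the margin between the classes (and, with kernels, handles non-linear boundaries), which is what makes it a stronger choice for the real posture features described in the abstract.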
Abstract:
Hand gestures are a powerful means of human communication, with many potential applications in the area of human-computer interaction. Vision-based hand gesture recognition techniques have many proven advantages over traditional devices, giving users a simpler and more natural way to communicate with electronic devices. This work proposes a generic system architecture based on computer vision and machine learning, able to be used with any interface for human-computer interaction. The proposed solution is mainly composed of three modules: a pre-processing and hand segmentation module, a static gesture interface module and a dynamic gesture interface module. The experiments showed that the core of vision-based interaction systems can be the same for all applications, which facilitates implementation. In order to test the proposed solutions, three prototypes were implemented. For hand posture recognition, an SVM model was trained and used, achieving a final accuracy of 99.4%. For dynamic gestures, an HMM model was trained for each gesture the system can recognize, with a final average accuracy of 93.7%. The proposed solution has the advantage of being generic, with the trained models able to work in real time, allowing its application in a wide range of human-machine applications.
Abstract:
Liver diseases have severe consequences for patients, being one of the main causes of premature death. These facts reveal the centrality of one's daily habits and how important the early diagnosis of these kinds of illnesses is, not only for the patients themselves but also for society in general. Therefore, this work focuses on the development of a diagnosis support system for these kinds of maladies, built under a formal framework based on Logic Programming, in terms of its knowledge representation and reasoning procedures, complemented with an approach to computing grounded on Artificial Neural Networks.
Abstract:
About 90% of breast cancers do not cause death if detected at an early stage and treated properly. Indeed, no specific cause for the illness is yet known; it may not be a single origin but a set of associations that determines the onset of the disease. Undeniably, there are some factors that seem to be associated with an increased risk of the malady. For the present study, different breast cancer risk assessment models were considered. Our intention is to develop a hybrid decision support system under a formal framework based on Logic Programming for knowledge representation and reasoning, complemented with an approach to computing centred on Artificial Neural Networks, to evaluate the risk of developing breast cancer and the respective Degree-of-Confidence in such an event.
Abstract:
In view of the major social and environmental problems we face nowadays, we notice a certain absence of values in society, where man draws many more resources than nature can replace in the short or medium term. Within the framework of fashion, ethical fashion emerges as a movement in this direction, intending to change the current paradigm. Ethical fashion encompasses different concepts such as fair trade, sustainability, working conditions, raw materials, social responsibility and the protection of animals. This study aims to determine which types of communication fashion brands are using in this context, and whether this communication aims at educating the consumer towards more ethical consumer behaviour. For this study, 44 fashion brands associated with the Ethical Trade Initiative were selected. The research method was content analysis, for which data was first collected from the information provided on the websites and social networks of the selected fashion brands. The data was analysed taking into account the quality and type of information published related to ethical fashion, for which an ordinal scale was created as a way of measuring and comparing results.
Abstract:
This book was produced in the scope of a research project entitled "Navigating with 'Magalhães': Study on the Impact of Digital Media in Schoolchildren". The study was conducted between May 2010 and May 2013 at the Communication and Society Research Centre, University of Minho, Portugal, and was funded by the Portuguese Foundation for Science and Technology (PTDC/CCI-COM/101381/2008).
Abstract:
This book was produced in the scope of a research project entitled "Navigating with 'Magalhães': Study on the Impact of Digital Media in Schoolchildren". The study was conducted between May 2010 and May 2013 at the Communication and Society Research Centre, University of Minho, Portugal, and was funded by the Portuguese Foundation for Science and Technology (PTDC/CCI-COM/101381/2008). As we shall explain in more detail later in this book, the main objective of that research project was to analyse the impact of the Portuguese government programme named 'e-escolinha', launched in 2008 within the Technological Plan for Education. This Plan responds to the principles of the Lisbon Strategy, signed in 2000 and relaunched at the Spring European Council of 2005.
Abstract:
The postcard is a kaleidoscope of views, ornaments and colours that devotes only a very small space to the message. It is to photography and to photomechanical reproduction processes that the merit of having industrialised postcard production is owed. And it is the shots of cities, with their monuments and landscapes, that give the postcard its status as a mass communication medium and grant it an affinity with the tourism industry. The postcard thus seized upon photography's ambition to reproduce the world, allying itself with the "needs of exploration, expeditions and topographic surveys" of the photographic medium in its early days. Taking the postcard as our starting point, our objective is to show the cultural consequences of the optical revolution that began in the mid-nineteenth century with the invention of the camera and was completed in the second half of the twentieth century with the appearance of the computer. Indeed, from the appearance of the camera and of postcards to the pixel streams of Google Images and the satellite images of Google Earth, an interweaving of territory, power and technique has been at work, the earth consequently being ever more closely scrutinised by vision devices, which affects the perception of space. We hope to show with this study that the traditional letter is to e-mail what the postcard is to the post one publishes on a blog or on networks such as Facebook and Twitter. In our view, postcards correspond to the maximum opening of the modern postal system, which, from being universal, becomes dependent on and an integral part of telematic delivery networks.
They announce, in effect, the speed of information transmission, the brevity of speech and the hegemony of the imagistic dimension of the message and, finally, the unease provoked by the fusion of public and private space.
Abstract:
The exponential growth of data traffic is one of the greatest challenges communication systems currently face, as they must be able to support ever-higher data processing speeds. In particular, power consumption has become one of the most critical design parameters, creating the need to investigate new architectures and algorithms for digital information processing. Moreover, the analysis and evaluation of new processing techniques is difficult given the high speeds at which they must operate, and software-based simulation is frequently an inefficient method. In this context, programmable electronics offers a low-cost opportunity not only to evaluate new high-speed design techniques but also to validate their implementation in technological developments. The main objective of this project is the study and development of new architectures and algorithms in programmable electronics for high-speed data processing. The method will be programming FPGA (Field-Programmable Gate Array) devices, which offer a good cost-benefit ratio and great flexibility for integration with other communication devices. For the design, simulation and programming stages, CAD (Computer-Aided Design) tools oriented towards digital electronic systems will be used. The project will benefit undergraduate and postgraduate students in fields related to informatics and telecommunications, contributing to the development of final-year projects and doctoral theses. The project's results will be published in national and international conferences and/or journals and disseminated through outreach talks and/or meetings.
The project falls within an area of great importance to the Province of Córdoba, namely informatics and telecommunications, and promises to generate knowledge of high added value that can be transferred to technology companies in the Province of Córdoba through consultancy or product development.
Abstract:
This is a study of a state-of-the-art implementation of a new computer integrated testing (CIT) facility within a company that designs and manufactures transport refrigeration systems. The aim was to use state-of-the-art hardware, software and planning procedures in the design and implementation of three CIT systems. Typical CIT system components include data acquisition (DAQ) equipment, application and analysis software, communication devices, computer-based instrumentation and computer technology. It is shown that the introduction of computer technology into the area of testing can have a major effect on issues such as efficiency, flexibility, data accuracy, test quality and data integrity. The findings reaffirm how computer integration continues to benefit any organisation; with recent advances in computer technology, communication methods and software capabilities, less expensive and more sophisticated test solutions are now possible, allowing more organisations to benefit from the many advantages associated with CIT. Examples of computer-integrated test set-ups and their associated benefits are discussed.
Abstract:
Network protection, distribution networks, decentralised energy resources, communication links, IEC Communication and Substation Control Standards