935 results for Realtà aumentata Pervasive computing Internet Things Augmented Worlds
Abstract:
The Internet of Things is a technological innovation built on consolidated artifacts and concepts such as the Internet and Smart Objects. Its growing business application makes it necessary to evaluate the strategy, benefits and challenges of applying this technology. The main objective of this paper is to present a definition of the Internet of Things based on the most cited articles; as a secondary objective, it presents publication statistics classified by year and by related terms, such as ubiquitous computing. One of the conclusions is that business-related papers represent only 5% of all the papers analyzed by this research, considering only papers published in journals. This shows that there is a broad field for research in Business Administration.
Abstract:
Starting from the evolutionary dynamics of the Information and Communication Technologies economy and the establishment of minimum speed standards in different regulatory contexts worldwide, particularly in Colombia, this article presents several empirical approaches to evaluate the real effects of establishing broadband service definitions in the fixed Internet market. Based on the data available for Colombia on fixed Internet service plans offered during the 2006-2012 period, a modified logistic diffusion process and a strategic interaction model are estimated for the residential and corporate segments, in order to identify the impacts on the massification of the service at the municipal level and on the strategic decisions adopted by operators, respectively. Regarding the results, it is found, on the one hand, that the two regulatory measures established in Colombia in 2008 and 2010 have significant and positive effects on the displacement and growth of the diffusion processes at the municipal level. On the other hand, strategic substitutability is observed in the download-speed supply decisions of corporate operators, while the analysis of the distance between offered speeds and the minimum broadband standard shows that residential service providers tend to cluster their speed decisions around the levels established by regulation.
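As a rough illustration of the kind of estimation the article describes, the sketch below fits a logistic diffusion curve with a regulatory-shift dummy to a synthetic municipal penetration series. The functional form, the 2010 shift point, and all values are assumptions for illustration, not the authors' actual specification or data.

import numpy as np
from scipy.optimize import curve_fit

# Modified logistic diffusion: saturation K, growth rate r, inflection t0,
# plus a level shift d applied after an assumed regulatory measure in 2010.
T_REG = 2010

def diffusion(t, K, r, t0, d):
    shift = d * (t >= T_REG)  # regulatory dummy (assumed specification)
    return (K + shift) / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(2006, 2013)
subs = np.array([0.05, 0.09, 0.16, 0.27, 0.42, 0.55, 0.63])  # synthetic penetration

params, _ = curve_fit(diffusion, years, subs, p0=[0.7, 0.8, 2009.5, 0.1], maxfev=10000)
K, r, t0, d = params
print(f"saturation={K:.2f}, growth={r:.2f}, inflection={t0:.1f}, shift={d:.2f}")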
Abstract:
Acute Coronary Syndrome (ACS) affects a broad and heterogeneous population and poses a serious diagnosis and risk stratification problem. Although different tools, such as biomarkers, are available for the diagnosis and prognosis of ACS, they must first be evaluated and validated in different scenarios and patient cohorts. Besides ensuring that a diagnosis is correct, attention should also be directed to ensuring that therapies are applied correctly and safely. Indeed, this work focuses on the development of a diagnosis decision support system in terms of its knowledge representation and reasoning mechanisms, given here as a formal framework based on Logic Programming, complemented with a problem-solving methodology for computing anchored on Artificial Neural Networks. On the one hand, it caters for the evaluation of ACS predisposing risk and the respective Degree-of-Confidence that one has in such an outcome. On the other hand, it may be seen as a major development of Multi-Value Logics for understanding these phenomena and one's behavior. Undeniably, the proposed model allows for an improvement of the diagnosis process, properly classifying the patients who present the pathology (sensitivity ranging from 89.7% to 90.9%) as well as classifying the absence of ACS (specificity ranging from 88.4% to 90.2%).
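To make the reported metrics concrete, the sketch below trains a small neural network on synthetic patient features and derives sensitivity and specificity from the confusion matrix. The features, cohort size, and network shape are illustrative assumptions; the abstract does not specify the authors' architecture.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Synthetic cohort: 500 patients, 6 features (e.g., biomarkers, age) - assumed.
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")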
Abstract:
Geographers now have at their disposal the worldwide INTERNET network. This network is much more than a giant repository of data and programs. It is an accumulation of human experience that includes text, articles, images, video, and discussion forums. It is a new way of processing information in ways previously considered impossible. The professional who continues to process and obtain information in the traditional manner will be left on the margins of the new knowledge that becomes available daily on the INTERNET. Today's professional is not limited to compiling information in a library or bookstore, but accesses search sites directly to quickly find the data sought. Meteorologists, for example, have in the INTERNET their best tool, since they can retrieve weather images almost immediately after they are stored from the satellite, which allows them to evaluate and assess the current state of the weather (Aberdeen University Computing Center, 1996). They can view the images and download them to their personal computers for their own use. Professors now offer students all their course material by storing it on the INTERNET. The professor-student relationship is no longer the same: students are expected to find the information on their computers and assimilate it. The old notebook is no longer necessary, since lessons can be retrieved for study without the professor having to deliver them, as is done in most universities in the United States (Ohio State University, 1996). In general, this article aims to show professionals in the geographical sciences where to find the information they are looking for, and how to locate more than they imagine, on the INTERNET network.
Abstract:
Early definitions of Smart Building focused almost entirely on the technology aspect and did not consider user interaction at all; today we would attribute that view more to the concept of the automated building. In this sense, control of comfort conditions inside buildings is a well-investigated problem, since it has a direct effect on users' productivity and an indirect effect on energy saving. From the users' perspective, a typical environment can be considered comfortable if it is capable of providing adequate thermal comfort, visual comfort, indoor air quality, and acoustic comfort. In recent years, the scientific community has dealt with many challenges, especially from a technological point of view. For instance, smart sensing devices, the Internet, and communication technologies have enabled a new paradigm called Edge Computing, which brings computation and data storage closer to the location where they are needed in order to improve response times and save bandwidth. This has made it possible to improve services, sustainability, and decision making. Many solutions have been implemented, such as smart classrooms, control of the thermal condition of the building, and monitoring of HVAC data for the energy efficiency of the campus. Although these projects contribute to the realization of a smart campus, a framework for the smart campus is yet to be determined. These new technologies have also introduced new research challenges: in this thesis work, some of the principal open challenges are faced, proposing a new conceptual framework, technologies, and tools to move the actual implementation of smart campuses forward. With this in mind, several problems known in the literature have been investigated: occupancy detection, noise monitoring for acoustic comfort, context awareness inside the building, indoor wayfinding, strategic sensor deployment for air quality, and book preservation.
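As a minimal illustration of one of the listed problems, the sketch below infers room occupancy at the edge from a CO2 sensor stream using a threshold with hysteresis. The readings and thresholds are assumptions for illustration; the thesis does not prescribe this particular method.

# Minimal edge-side occupancy detection from CO2 readings (ppm).
# Thresholds and hysteresis band are illustrative assumptions.
OCCUPIED_ABOVE = 800   # ppm: likely occupied
VACANT_BELOW = 600     # ppm: likely vacant

def detect_occupancy(readings, initially_occupied=False):
    occupied = initially_occupied
    states = []
    for ppm in readings:
        if ppm > OCCUPIED_ABOVE:
            occupied = True
        elif ppm < VACANT_BELOW:
            occupied = False
        states.append(occupied)  # hysteresis: state kept between thresholds
    return states

print(detect_occupancy([450, 620, 850, 790, 610, 560]))
# -> [False, False, True, True, True, False]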
Abstract:
Nowadays, cities deal with unprecedented pollution and overpopulation problems, and Internet of Things (IoT) technologies are supporting them in facing these issues and becoming increasingly smart. IoT sensors embedded in public infrastructure can provide granular data on the urban environment and help public authorities make their cities more sustainable and efficient. Nonetheless, this pervasive data collection also raises high surveillance risks, jeopardizing privacy and data protection rights. Against this backdrop, this thesis addresses how IoT surveillance technologies can be implemented in a legally compliant and ethically acceptable fashion in smart cities. An interdisciplinary approach is embraced to investigate this question, combining doctrinal legal research (on privacy, data protection, and criminal procedure) with insights from philosophy, governance, and urban studies. The fundamental normative argument of this work is that surveillance constitutes a necessary feature of modern information societies. Nonetheless, as the complexity of surveillance phenomena increases, there emerges a need to develop more finely attuned proportionality assessments to ensure a legitimate implementation of monitoring technologies. This research tackles this gap from different perspectives, analyzing EU data protection legislation and United States and European case law on privacy expectations and surveillance. Specifically, a coherent multi-factor test assessing privacy expectations in public IoT environments and a surveillance taxonomy are proposed to inform proportionality assessments of surveillance initiatives in smart cities. These insights are also applied to four use cases: facial recognition technologies, drones, environmental policing, and smart nudging. Lastly, the investigation examines competing data governance models in the digital domain and the smart city, reviewing the EU's upcoming data governance framework. It is argued that, despite the stated policy goals, the balance of interests may often favor corporate strategies in data sharing, to the detriment of common-good uses of data in the urban context.
Abstract:
Machine (and deep) learning technologies are more and more present in several fields. It is undeniable that many aspects of our society are empowered by such technologies: web searches, content filtering on social networks, recommendations on e-commerce websites, mobile applications, etc., in addition to academic research. Moreover, mobile devices and Internet sites, e.g., social networks, support the collection and sharing of information in real time. The pervasive deployment of these technological instruments, both hardware and software, has led to the production of huge amounts of data. Such data has become more and more unmanageable, posing challenges to conventional computing platforms and paving the way for the development and widespread use of machine and deep learning. Nevertheless, machine learning is not only a technology. Given a task, machine learning is a way of proceeding (a way of thinking), and as such it can be approached from different perspectives (points of view). This, in particular, is the focus of this research. The entire work concentrates on machine learning, starting from different sources of data, e.g., signals and images, applied to different domains, e.g., Sport Science and Social History, and analyzed from different perspectives: from a non-data-scientist point of view through tools and platforms; setting up a problem from scratch; implementing an effective application for classification tasks; and improving the user interface experience through Data Visualization and eXtended Reality. In essence, not only in a quantitative task, not only in a scientific environment, and not only from a data scientist's perspective, machine (and deep) learning can make the difference.
Abstract:
The recent trend of moving Cloud Computing capabilities to the edge of the network is reshaping how applications and their middleware support are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, impose heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT); ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard; iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-Low Latency (ULL) constraints in virtual and 5G environments; and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middlewares: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that leverages the coordination of various QoS mechanisms and a virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
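As a toy illustration of QoS-aware routing across the cloud-edge continuum, the sketch below probes an edge and a cloud endpoint and directs a request to whichever currently meets a latency budget. The endpoint URLs and the budget are hypothetical; the dissertation's middleware manages far more than this single mechanism.

import time
import statistics
import urllib.request

# Hypothetical endpoints; the real architecture manages many more resources.
ENDPOINTS = {"edge": "http://edge.local/api", "cloud": "http://cloud.example.com/api"}
LATENCY_BUDGET_MS = 20.0  # assumed ULL-style latency budget

def probe(url, samples=3):
    # Mean round-trip time of small HTTP requests, in milliseconds.
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        try:
            urllib.request.urlopen(url, timeout=1).read()
        except OSError:
            return float("inf")  # unreachable endpoint never wins
        times.append((time.perf_counter() - t0) * 1000)
    return statistics.mean(times)

def select_endpoint():
    measured = {name: probe(url) for name, url in ENDPOINTS.items()}
    within_budget = {n: m for n, m in measured.items() if m <= LATENCY_BUDGET_MS}
    candidates = within_budget or measured  # fall back to best effort
    return min(candidates, key=candidates.get)

print("routing to:", select_endpoint())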
Abstract:
The purpose of this research study is to discuss privacy and data protection-related regulatory and compliance challenges posed by digital transformation in healthcare in the wake of the COVID-19 pandemic. The public health crisis accelerated the development of patient-centred remote/hybrid healthcare delivery models that make increased use of telehealth services and related digital solutions. The large-scale uptake of IoT-enabled medical devices and wellness applications, and the offering of healthcare services via healthcare platforms (online doctor marketplaces), have catalysed these developments. However, the use of new enabling technologies (IoT, AI) and the platformisation of healthcare pose complex challenges to the protection of patients' privacy and personal data. This happens at a time when the EU is drawing up a new regulatory landscape for the use of data and digital technologies. Against this background, the study presents an interdisciplinary (normative and technology-oriented) critical assessment of how the new regulatory framework may affect privacy and data protection requirements regarding the deployment and use of Internet of Health Things (hardware) devices and interconnected software (AI systems). The study also assesses key privacy and data protection challenges that affect healthcare platforms (online doctor marketplaces) in their offering of video-API-enabled teleconsultation services and their (anticipated) integration into the European Health Data Space. The overall conclusion of the study is that regulatory deficiencies may create integrity risks for the protection of privacy and personal data in telehealth, owing to uncertainties about the proper interplay, legal effects and effectiveness of (existing and proposed) EU legislation. The proliferation of normative measures may increase compliance costs, hinder innovation and, ultimately, deprive European patients of state-of-the-art digital health technologies, which is, paradoxically, the opposite of what the EU plans to achieve.
Abstract:
Recent technological advancements have played a key role in seamlessly integrating cloud, edge, and Internet of Things (IoT) technologies, giving rise to the Cloud-to-Thing Continuum paradigm. This cloud model connects many heterogeneous resources that generate large amounts of data and collaborate to deliver next-generation services. While it has the potential to reshape several application domains, the number of connected entities remarkably broadens the security attack surface. One of the main problems is the lack of security measures able to adapt to the dynamic and evolving conditions of the Cloud-to-Thing Continuum. To address this challenge, this dissertation proposes novel adaptable security mechanisms. Adaptable security is the capability of security controls, systems, and protocols to dynamically adjust to changing conditions and scenarios. Since the design and development of novel security mechanisms can be explored from different perspectives and levels, we place our attention on threat modeling and access control. The contributions of the thesis can be summarized as follows. First, we introduce a model-based methodology that secures the design of edge and cyber-physical systems; this solution identifies threats, security controls, and moving target defense techniques based on system features. Then, we focus on access control management: since access control policies are subject to modification, we evaluate how they can be efficiently shared among distributed areas, highlighting the effectiveness of distributed ledger technologies. Furthermore, we propose a risk-based authorization middleware that adjusts permissions based on real-time data, and a federated learning framework that enhances trustworthiness by weighting each client's contributions according to the quality of their partial models. Finally, since authorization revocation is another critical concern, we present an efficient revocation scheme for verifiable credentials in IoT networks, featuring decentralization and requiring minimal storage and computing capabilities. All the mechanisms have been evaluated under different conditions, proving their adaptability to the Cloud-to-Thing Continuum landscape.
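As a minimal sketch of the risk-based authorization idea, the code below computes a risk score from real-time context signals and maps it to a permission level. The signal names, weights, and thresholds are illustrative assumptions, not the thesis's actual model.

# Risk-based authorization sketch: weights and thresholds are assumed.
WEIGHTS = {"unusual_location": 0.4, "new_device": 0.3,
           "failed_logins": 0.2, "off_hours": 0.1}

def risk_score(context):
    # Weighted sum of boolean risk signals, in [0, 1].
    return sum(w for signal, w in WEIGHTS.items() if context.get(signal))

def decide(context):
    score = risk_score(context)
    if score < 0.3:
        return "grant"             # low risk: full permissions
    if score < 0.6:
        return "grant-restricted"  # medium risk: reduced permissions
    return "deny"                  # high risk: revoke access

print(decide({"unusual_location": True, "off_hours": True}))  # -> grant-restricted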
Abstract:
The Internet of Vehicles (IoV) paradigm has emerged in recent times, in which, with the support of technologies like the Internet of Things and V2X, Vehicular Users (VUs) can access different services through Internet connectivity. With the support of 6G technology, the IoV paradigm will evolve further and converge into a fully connected and intelligent vehicular system. However, this brings new challenges to dynamic and resource-constrained vehicular systems, and advanced solutions are in demand. This dissertation analyzes the demands of future 6G-enabled IoV systems and the corresponding challenges, and provides various solutions to address them. Vehicular services and application requests demand proper data processing solutions with the support of distributed computing environments such as Vehicular Edge Computing (VEC). When analyzing the performance of VEC systems, it is important to take limited resources, coverage, and vehicular mobility into account. Recently, Non-Terrestrial Networks (NTN) have gained huge popularity for boosting the coverage and capacity of terrestrial wireless networks. Integrating such NTN facilities into the terrestrial VEC system can address the above-mentioned challenges. Additionally, such integrated Terrestrial and Non-Terrestrial Networks (T-NTN) can also be considered to provide advanced intelligent solutions with the support of the edge intelligence paradigm. In this dissertation, we propose an edge computing-enabled joint T-NTN-based vehicular system architecture to serve VUs. We then analyze the terrestrial VEC system's performance on VUs' data processing problems and propose solutions to improve performance in terms of latency and energy costs. Next, we extend the scenario to the joint T-NTN system and address the problem of distributed data processing through ML-based solutions. We also propose advanced distributed learning frameworks supported by a joint T-NTN framework with edge computing facilities. In the end, conclusive remarks and several future directions are provided for the proposed solutions.
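To make the latency/energy trade-off concrete, the sketch below compares local execution on a vehicle with offloading to an edge server using a standard first-order cost model. All parameter values are illustrative assumptions; the dissertation's models and solutions are more elaborate.

# Classic offloading trade-off: local compute vs. upload + edge compute.
# All values are illustrative assumptions.
TASK_CYCLES = 2e9      # CPU cycles required by the task
DATA_BITS = 4e6        # input size to upload
F_LOCAL = 1e9          # vehicle CPU speed (cycles/s)
F_EDGE = 8e9           # edge server CPU speed (cycles/s)
RATE = 20e6            # uplink rate (bit/s)
P_CPU, P_TX = 0.9, 1.3 # local compute / radio transmit power (W)

t_local = TASK_CYCLES / F_LOCAL
e_local = P_CPU * t_local

t_up = DATA_BITS / RATE
t_edge = t_up + TASK_CYCLES / F_EDGE
e_edge = P_TX * t_up   # vehicle-side energy: transmission only

print(f"local:   {t_local:.2f} s, {e_local:.2f} J")
print(f"offload: {t_edge:.2f} s, {e_edge:.2f} J")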
Abstract:
Pervasive and distributed Internet of Things (IoT) devices demand ubiquitous coverage, even beyond no-man's land. To provide a plethora of IoT devices with resilient connectivity, Non-Terrestrial Networks (NTN) will be pivotal in assisting and complementing terrestrial systems. In a massive MTC scenario over NTN, characterized by sporadic uplink data reports, all the terminals within a satellite beam must be served during the short visibility window of the flying platform, generating congestion due to simultaneous access attempts by IoT devices on the same radio resource. The more terminals collide, the longer the average time needed to complete an access, owing to the decreased number of successful attempts caused by the back-off commands of legacy methods. A possible countermeasure is a Non-Orthogonal Multiple Access scheme, which requires knowledge of the number of superimposed NPRACH preambles. This work addresses the problem by proposing a Neural Network (NN) algorithm to cope with the uncoordinated random access performed by a massive number of Narrowband-IoT devices. The proposed method classifies the number of colliding users and, for each, estimates the Time of Arrival (ToA). The performance assessment, under Line-of-Sight (LoS) and Non-LoS conditions in sub-urban environments with two different satellite configurations, shows significant benefits of the proposed NN algorithm with respect to traditional methods for ToA estimation.
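The sketch below shows the general shape of such a classifier: a small network mapping features of the received NPRACH window (here, synthetic stand-ins for correlator outputs) to the number of superimposed preambles. The feature construction, class range, and architecture are placeholders, not the paper's actual design.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
MAX_USERS = 4   # classify 0..4 superimposed preambles (assumed range)
N_FEAT = 32     # e.g., magnitudes of a correlator output (placeholder)

# Synthetic training set: the collision count shifts the feature level.
y = rng.integers(0, MAX_USERS + 1, size=2000)
X = rng.normal(size=(2000, N_FEAT)) + y[:, None] * 0.8

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1)
clf.fit(X, y)
print("predicted collision counts:", clf.predict(X[:5]), "true:", y[:5])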
Abstract:
The goal of this thesis is to analyze which frameworks are currently best for developing Mixed Reality software and to study the design patterns most useful to a developer in this field. The first chapter introduces the concepts of extended, virtual, augmented, and mixed reality and the differences between them. It also describes the various devices that enable mixed reality, in particular the two most widely used headsets: Microsoft Hololens 2 and Magic Leap 1. The same chapter also presents the key aspects of mixed reality development, i.e., all the elements that make a Mixed Reality experience possible. The second chapter describes the frameworks and kits useful for developing cross-platform mixed reality applications. In particular, it introduces the two most widely used development environments, Unity and Unreal Engine, which already exist and are not specific to MR development but become functional when integrated with dedicated kits such as the Mixed Reality ToolKit. The third chapter covers design patterns, whether common or native to extended reality applications, that are useful for the sound development of MR applications. In addition, some of the main patterns most used in object-oriented programming are examined, verifying whether and how they can be correctly implemented in Unity in a mixed reality scenario. This analysis is useful for understanding whether the use of development frameworks, the most common approach, limits the programmer's freedom of development.
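As one example of the object-oriented patterns the thesis examines, the sketch below shows the Observer pattern, often used in MR scenes to notify UI elements when a tracked object changes. It is written in Python for brevity; Unity implementations would use C# (e.g., events or UnityEvents), and the class names here are illustrative.

# Observer pattern sketch (Python for brevity; Unity would use C#).
class TrackedObject:
    """Subject: e.g., a hologram whose pose other components react to."""
    def __init__(self):
        self._observers = []
        self.position = (0.0, 0.0, 0.0)

    def subscribe(self, observer):
        self._observers.append(observer)

    def move_to(self, position):
        self.position = position
        for obs in self._observers:       # notify all registered observers
            obs.on_moved(self, position)

class LabelOverlay:
    """Observer: e.g., a UI label that follows the hologram."""
    def on_moved(self, subject, position):
        print(f"label repositioned to {position}")

hologram = TrackedObject()
hologram.subscribe(LabelOverlay())
hologram.move_to((1.0, 0.5, 2.0))  # -> label repositioned to (1.0, 0.5, 2.0)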
Abstract:
With the launch of new technological applications such as the Internet of Things, Big Data, Cloud Computing, and mobile technologies, which are accelerating the pace of change enormously, behaviors, habits, and ways of living have changed completely in favor of a world of digital technologies that ease everyday operations. These advances are rapidly changing the way companies do business, with major repercussions throughout the business context and beyond. The advent of Digital Transformation has amplified these phenomena, and it could be defined as the trigger of all the changes we are experiencing. The speed and intensity of change have disruptive effects compared with the past, affecting numerous economic sectors and consumer habits. The objective of this work is to analyze digital transformation as applied to the case of the company Alfa, understanding its potential. In particular, it studies the main implications of this innovation, the most important initiatives adopted regarding the new technologies implemented, and the benefits they bring in terms of strategy, business, and corporate culture.
Abstract:
With the advent of Industry 4.0, the use of Internet of Things (IoT) devices is continuously increasing. Companies are pushing ever harder toward innovation, introducing new methods capable of renewing existing IoT systems and creating new ones with state-of-the-art performance. One example of an emerging innovative technique is the use of Digital Twins (DT). These are logical entities able to simulate the real behavior of a physical IoT device; they can be used in various scenarios: data monitoring, anomaly detection, what-if analysis, or predictive analytics. The integration of such technologies with new innovative paradigms is developing rapidly; one of these is the Web of Things (WoT). The Web of Things is a term used to describe a paradigm that allows real-world objects to be managed through interfaces on the World Wide Web, making communication possible between multiple devices with different hardware and software characteristics. Although it is a technology still under development, the Web of Things is already starting to be used in many companies today. The goal of this work is to define a framework capable of integrating a mechanism for the automatic generation of Digital Twins in a Web of Things context. By combining these technologies, the interoperability advantages of the Web of Things could be exploited to generate a Digital Twin regardless of the hardware and software characteristics of the objects to be replicated.
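As a minimal sketch of what such automatic generation could look like, the code below reads a W3C WoT Thing Description (JSON) and builds a twin object exposing the described properties. The fields used (title, properties) follow the W3C WoT TD vocabulary; the twin class itself is a hypothetical simplification of the framework the thesis proposes.

import json

# A minimal W3C WoT Thing Description (properties only, for illustration).
TD = json.loads("""
{
  "title": "TempSensor01",
  "properties": {
    "temperature": {"type": "number", "unit": "celsius"},
    "batteryLevel": {"type": "integer"}
  }
}
""")

class DigitalTwin:
    """Hypothetical twin: mirrors the last known state of each TD property."""
    def __init__(self, td):
        self.name = td["title"]
        self.state = {prop: None for prop in td.get("properties", {})}

    def update(self, prop, value):
        if prop not in self.state:
            raise KeyError(f"{prop} not in Thing Description")
        self.state[prop] = value  # in practice: fed by the WoT protocol binding

twin = DigitalTwin(TD)
twin.update("temperature", 21.5)
print(twin.name, twin.state)  # -> TempSensor01 {'temperature': 21.5, 'batteryLevel': None}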