875 results for regulatory and signaling networks
Abstract:
Power grids are critical infrastructures on which everything else relies, and their correct behavior is of the highest priority. New smart devices are being deployed to manage and control power grids more efficiently and avoid instability. However, the deployment of smart devices such as Phasor Measurement Units (PMUs) and Phasor Data Concentrators (PDCs) opens new opportunities for cyber attackers to exploit network vulnerabilities. If a PDC is compromised, all data coming from PMUs to that PDC is lost, reducing network observability. Our approach to this problem is to develop an Intrusion Detection System (IDS) in a Software-Defined Network (SDN), allowing the IDS to detect compromised devices and use that information as input for a self-healing SDN controller, which redirects the data of the PMUs to a new, uncompromised PDC, maintaining the maximum possible network observability at every moment. During this research, we successfully implemented self-healing in an example network with an SDN controller based on the Ryu controller. We also assessed intrinsic vulnerabilities of Wide Area Measurement Systems (WAMS) and SCADA networks, and developed rules for the intrusion detection system that specifically protect the vulnerabilities of these networks. The integration of the IDS and the SDN controller was also successful. To achieve this goal, the first steps were to implement an existing self-healing SDN controller and assess the intrinsic vulnerabilities of WAMS and SCADA networks; after that, we integrated the Ryu controller with Snort and created Snort rules specific to SCADA and WAMS systems and protocols.
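The abstract above describes the self-healing loop at a high level; the following is a minimal, hypothetical Python sketch of the rerouting decision only. It is controller-agnostic (the thesis uses Ryu and Snort, whose APIs are not reproduced here), and the PMU/PDC names and alert format are invented for illustration.

```python
# Hypothetical sketch of the self-healing failover logic described above.
# In a real deployment, the IDS alert would come from Snort and the
# redirection would be installed as flow rules by the Ryu controller.

# Map each PMU to an ordered list of candidate PDCs (primary first).
PDC_CANDIDATES = {
    "pmu1": ["pdc_a", "pdc_b"],
    "pmu2": ["pdc_a", "pdc_b"],
    "pmu3": ["pdc_b", "pdc_a"],
}

compromised = set()  # PDCs flagged by IDS alerts


def on_ids_alert(pdc_id):
    """React to an IDS alert by rerouting every PMU that feeds the
    compromised PDC to its next healthy candidate."""
    compromised.add(pdc_id)
    for pmu, candidates in PDC_CANDIDATES.items():
        target = next((p for p in candidates if p not in compromised), None)
        if target is None:
            print(f"{pmu}: no healthy PDC left, observability lost")
        elif candidates[0] in compromised:
            # A real SDN controller would install new flow rules here.
            print(f"{pmu}: redirecting stream to {target}")


on_ids_alert("pdc_a")  # pmu1 and pmu2 fail over to pdc_b; pmu3 is unaffected
```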
Abstract:
This work addresses the university-firm relationship and aims to understand the model of shared R&D management in petroleum established between Petrobras and UFRN. It is a case study that investigated whether the cooperation model established by the two institutions generates innovation and technical-scientific knowledge and contributes to coordination with other actors in the promotion of technological innovation. In addition to desk research, the data required for analysis were obtained by sending questionnaires to the coordinators of R&D projects at the company and the university. Interviews were also conducted with subjects who have participated in the study from its inception to the present day. The case study was analysed through the lenses of the Resource-Based View and Interorganizational Networks theory. The data show that the research projects were aligned with strategic planning and that 29% of the R&D projects succeeded in meeting their proposed objectives (of which 11% were incorporated into business processes); that the technical and scientific knowledge produced is characterized by hundreds of national and international publications, theses, dissertations, eleven patents, and radical and incremental innovations; and that the partnership has also benefited academic processes through improved infrastructure at UFRN and a change in the "attitude" of the university (which now enjoys national prominence in research and staff training for the oil sector). As for the model, from the technical point of view, although it has some problems, it proves appropriate. From the viewpoint of management, the model is criticized for excessive bureaucracy. From the standpoint of strategic resource allocation, the legal framework needs to be reassessed, because it is focused only on the higher-education level, whereas it should also reach secondary education given the new reality of the oil sector in Brazil; to this end, it is desirable to bring local government into the partnership. Taken together, the findings lead to the conclusion that the model constitutes an organizational-arrangement innovation, here named Shared Management of R&D in petroleum between Petrobras and UFRN. The shared management model is shown to be feasible, offering a simple and effective way to manage partnerships between firms and Science and Technology Institutions. It was created by contingencies arising from regulatory standpoints and resource dependence. The partnership is the result of a process of Convergence, Construction and Evaluation supported by the tripod of Simplicity, Systematization and Continuity, important factors for its consolidation. In practice, an organizational arrangement was built to manage an innovative university-industry partnership, defined by a dyadic relationship on two levels (institutional and technical, hence hybrid governance), by systematic quarterly meetings, and by standardized financial contributions proportional to the advancement of the research. These features established a point of interaction between the scientific and the technological-business dimensions, demystifying the idea that they are two worlds apart.
Abstract:
Efficient and reliable techniques for power delivery and utilization are needed to account for the increased penetration of renewable energy sources in electric power systems. Such methods are also required for current and future demands of plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management issue with a high level of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, within the operating AC power system. To this end, an energy system architecture was developed involving AC and DC networks, both with distributed generation and demands. The various components of the DC microgrid were designed and built, including DC-DC converters, voltage source inverters (VSI) and AC-DC rectifiers featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms are based on intelligent decision-making elements along with an optimization process, aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads with variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied. Fault analysis, a protection scheme and coordination, and ideas on how to retrofit currently available protection concepts and devices for AC systems in a DC network were presented. A study was also conducted on how changing the distribution architecture and distributing the storage assets across the various zones of the network affect the system's dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, along with most of the ideas, controls and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed of the Energy Systems Research Laboratory.
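As a concrete illustration of the kind of real-time energy-management decision described above, here is a hypothetical rule-based dispatch step in Python. The thresholds, power ratings and decision rules are invented for this sketch; the dissertation's actual algorithms combine intelligent decision-making with an optimization process.

```python
# Hypothetical single-step energy-management rule for a DC microgrid:
# serve the residual load from the battery first, then from the AC grid.

def ems_step(pv_kw, load_kw, soc, soc_min=0.2, soc_max=0.9, batt_kw=5.0):
    """Return (battery_kw, grid_kw): battery_kw > 0 discharges the battery,
    grid_kw > 0 imports from the AC grid (negative values export/charge)."""
    net = load_kw - pv_kw  # residual demand after renewable generation
    if net > 0:  # deficit: discharge battery first, then import
        batt = min(net, batt_kw) if soc > soc_min else 0.0
        return batt, net - batt
    else:        # surplus: charge battery, export the remainder
        charge = min(-net, batt_kw) if soc < soc_max else 0.0
        return -charge, net + charge


# Example: 3 kW of PV, 7 kW load, battery at 60% state of charge.
print(ems_step(pv_kw=3.0, load_kw=7.0, soc=0.6))  # -> (4.0, 0.0)
```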
Abstract:
Thanks to advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. In response to such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and to incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. The TMCA algorithm is then proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis. In this framework, an affinity propagation-based summarization method is also proposed to transform unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
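As an illustration of the sampling-based ensemble idea mentioned for imbalanced datasets, here is a minimal Python sketch (not the dissertation's IF-TMCA/HCFG pipeline): each base model is trained on all rare-event samples plus a random subset of the majority class, and scores are averaged at prediction time. The function names and interfaces are assumptions for this sketch.

```python
# Minimal sampling-based ensemble for a rare-event (imbalanced) problem.
import random


def train_ensemble(pos, neg, train_model, n_models=5, seed=0):
    """pos/neg: lists of labelled samples (rare-event vs majority class).
    train_model: any function mapping a training list to a scoring model."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Keep all rare-event samples; under-sample the majority class.
        sample = pos + rng.sample(neg, k=min(len(pos), len(neg)))
        rng.shuffle(sample)
        models.append(train_model(sample))
    return models


def predict(models, x):
    # Average the base models' scores (each assumed to return P(event)).
    return sum(m(x) for m in models) / len(models)
```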
Abstract:
With hundreds of millions of users reporting locations and embracing mobile technologies, Location Based Services (LBSs) are raising new challenges. In this dissertation, we address three emerging problems in location services, where geolocation data plays a central role. First, to handle the unprecedented growth of generated geolocation data, existing location services rely on geospatial database systems. However, their inability to leverage combined geographical and textual information in analytical queries (e.g. spatial similarity joins) remains an open problem. To address this, we introduce SpsJoin, a framework for computing spatial set-similarity joins. SpsJoin handles combined similarity queries that involve textual and spatial constraints simultaneously. LBSs use this system to tackle different types of problems, such as deduplication, geolocation enhancement and record linkage. We define the spatial set-similarity join problem in a general case and propose an algorithm for its efficient computation. Our solution utilizes parallel computing with MapReduce to handle scalability issues in large geospatial databases. Second, applications that use geolocation data are seldom concerned with ensuring the privacy of participating users. To motivate participation and address privacy concerns, we propose iSafe, a privacy preserving algorithm for computing safety snapshots of co-located mobile devices as well as geosocial network users. iSafe combines geolocation data extracted from crime datasets and geosocial networks such as Yelp. In order to enhance iSafe's ability to compute safety recommendations, even when crime information is incomplete or sparse, we need to identify relationships between Yelp venues and crime indices at their locations. To achieve this, we use SpsJoin on two datasets (Yelp venues and geolocated businesses) to find venues that have not been reviewed and to further compute the crime indices of their locations. Our results show a statistically significant dependence between location crime indices and Yelp features. Third, review-centered LBSs (e.g., Yelp) are increasingly becoming targets of malicious campaigns that aim to bias the public image of represented businesses. Although Yelp actively attempts to detect and filter fraudulent reviews, our experiments showed that Yelp is still vulnerable. Fraudulent LBS information also impacts the ability of iSafe to provide correct safety values. We take steps toward addressing this problem by proposing SpiDeR, an algorithm that takes advantage of the richness of information available in Yelp to detect abnormal review patterns. We propose a fake venue detection solution that applies SpsJoin on Yelp and U.S. housing datasets. We validate the proposed solutions using ground truth data extracted through our experiments and reviews filtered by Yelp.
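To make the combined textual/spatial join concrete, here is a minimal single-machine Python sketch in the spirit of SpsJoin; the real system distributes the computation with MapReduce, and the thresholds and record layout below are invented for illustration.

```python
# Naive combined spatial/textual similarity join: a pair matches only if the
# token sets are similar (Jaccard) AND the locations are close.
from math import hypot


def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0


def sps_join(r1, r2, sim_min=0.6, dist_max=0.01):
    """Records are (name_tokens, (lat, lon)); returns index pairs that are
    both textually similar and spatially close (planar distance for brevity)."""
    return [
        (i, j)
        for i, (t1, (x1, y1)) in enumerate(r1)
        for j, (t2, (x2, y2)) in enumerate(r2)
        if jaccard(t1, t2) >= sim_min and hypot(x1 - x2, y1 - y2) <= dist_max
    ]


venues = [(["joes", "pizza"], (25.770, -80.190))]
businesses = [(["joes", "pizza", "inc"], (25.771, -80.190))]
print(sps_join(venues, businesses))  # [(0, 0)]: same venue in both datasets
```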
Abstract:
How does an archaeological museum understand its function in a digital environment? Consumer expectations are rapidly shifting from what used to be a passive relationship with exhibition contents towards a different one, in which interaction, individuality and proactivity define the visitor experience. This consumer paradigm is much studied in fast-moving markets, where it provokes immediately measurable impacts. In other fields, such as tourism and regional development, the very heterogeneous nature of the product to be branded makes it nearly impossible for a single player to engage successfully. This systemic feature implies that museums, acting as major stakeholders, often anchor a regional brand around which SMEs tend to cluster, and thus assume responsibilities in constructing marketable identities. As such, the archaeological element becomes a very useful trademark. On the other hand, it also emerges erratically on the Internet, in personal blogs, commercial websites, and social networks. This forces museums to step in as mediators, authenticating contents and providing credibility. What might be called the digital pull factor poses specific challenges to museum management: what is to be promoted, and how, in order to create and maintain a coherent presence in social media? The underlying issue this paper tries to address is how museums perceive their current and future role in digital communication.
Abstract:
Rare or orphan diseases are those with low prevalence in the population, and several countries define them differently according to the number of patients affected. The World Health Organization (WHO) defines them as disorders affecting 650 to 1,000 people per million inhabitants, of which around 7,000 have been identified. In Colombia their prevalence is below 1 in 5,000 people, and they comprise rare, ultra-orphan and neglected diseases. Patients with these diseases pose challenges to health systems: although they affect a small percentage of the population, their care implies a high economic burden owing to the costs involved and to the complexity of diagnosis, treatment, follow-up and rehabilitation. Addressing rare diseases requires interdisciplinary and intersectoral management, which entails organizing each actor of the health system through a model that encompasses the possible dynamics among them and the competencies of each one. Accordingly, and given the need to formulate specific health policies for the management of these diseases, this work presents an approach to the formulation of a management model for the comprehensive care of patients with rare diseases in Colombia. The research describes the elements and characteristics of clinical management models and of rare diseases through a literature review, including a description of the actors of the Colombian Health System involved in the comprehensive care of these patients, in order to document a comprehensive management model.
Abstract:
Procambarus clarkii is currently recorded from 16 European territories. On top of being a vector of crayfish plague, which is responsible for the large-scale disappearance of native crayfish species, it causes severe impacts on diverse aquatic ecosystems owing to its rapid life cycle, dispersal capacities, burrowing activities and high population densities. The species has even been discovered recently in caves. This invasive crayfish is a polytrophic keystone species that can exert multiple pressures on ecosystems. Most studies deal with the decline of macrophytes and predation on several species (amphibians, molluscs, and macroinvertebrates), highlighting how this biodiversity loss leads to unbalanced food chains. At a management level, the species is considered (a) a devastating digger of water drainage systems in southern and central Europe, (b) an agricultural pest in Mediterranean territories, consuming, for example, young rice plants, and (c) a threat to the restoration of water bodies in north-western Europe. Indeed, among high-risk species, P. clarkii consistently attained the highest risk rating. Its negative impacts on ecosystem services were evaluated: these may include the loss of provisioning services, such as reductions in valued edible native species, and of regulatory and supporting services, inducing wide changes in ecological communities and increased costs to agriculture and water management. Finally, cultural services may be lost. The species fulfils the criteria of Article 4(3) of Regulation (EU) No 1143/2014 of the European Parliament (species widely spread in Europe and impossible to eradicate in a cost-effective manner) and has been included in the "Union List". In particular, awareness of the ornamental trade through the internet must be reinforced within the European Community, and import and trade regulations should be imposed to reduce the availability of this high-risk species.
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes outperform many existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme using a deep learning based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep learning based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised deep learning strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
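As a worked example of a regularized variational reconstruction of the kind discussed above, here is a short Python sketch of ISTA (proximal gradient) for the sparsity-promoting model min_x (1/2)||Ax - b||^2 + lam*||x||_1. This is a generic illustration, not the thesis's gradient-domain or Plug-and-Play schemes; the problem sizes and lam are arbitrary.

```python
# ISTA: gradient step on the data-fidelity term, then soft-thresholding
# (the proximal operator of the l1 regularizer).
import numpy as np


def ista(A, b, lam=0.1, n_iter=200):
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2  # step size <= 1/L, L = ||A||_2^2
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)            # gradient of 0.5*||Ax - b||^2
        z = x - t * g
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # prox step
    return x


rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(40)
# Should recover (approximately) the 3-element sparse support.
print(np.nonzero(np.round(ista(A, b), 1))[0])
```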
Abstract:
Our study focused on Morocco, investigating the dissemination of PBs amongst farmers belonging to the first pillar of the GMP, located in the Fès-Meknès region, and assessing how innovation adoption is influenced by the network of relationships in which farmers are involved. We adopted an "ego network" approach to identify the primary stakeholders responsible for the diffusion of PBs. We collected data through face-to-face interviews with 80 farmers in April and May 2021. The data were processed with the aim of: 1) analysing the total number of main and specific topics discussed between egos and their alters with respect to the variation of some ego attributes; 2) analysing the egos' network characteristics using the E-Net software; and 3) identifying the significant variables that influence farmers' access to knowledge about, use of, and reuse of PBs, for which a Binary Logistic Regression (LR) was applied. The first result disclosed that the main PB topics discussed were technical positioning, the need to use PBs, knowledge of PBs, and organic PBs. We noted that these farmers share specific features: they hold a high school diploma or a bachelor's degree, they are specialised in fruit and cereal farming, and they are managers and members of a professional organisation. The second result, from the SNA, showed that: 1) PBs seem to have become a common topic for farmers who had already exchanged fertiliser information with their alters; 2) there is moderate heterogeneity in the networks, with farmers accessing information mainly from acquaintances and professionals; and 3) the networks have a relatively low density and alters are not tightly connected to each other, so farmers hold a brokerage position in the networks, controlling the flow of information about PBs. The LR revealed that both the farmers' attributes and the networks' characteristics influence growers' knowledge, use and reuse of PBs.
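To illustrate two of the ego-network measures such studies rely on, here is a small hypothetical Python sketch computing the density of the alter-alter network and the ego's brokerage opportunities (pairs of alters connected only through the ego). The toy network is invented; the study itself used the E-Net software.

```python
# Ego-network density and brokerage on a toy alter set.
from itertools import combinations


def ego_density(alters, ties):
    """ties: set of frozensets, each an alter pair that know each other."""
    possible = len(alters) * (len(alters) - 1) / 2
    return len(ties) / possible if possible else 0.0


def brokerage_pairs(alters, ties):
    # Alter pairs with no direct tie: the ego bridges these pairs.
    return [p for p in combinations(sorted(alters), 2)
            if frozenset(p) not in ties]


alters = {"coop_agent", "input_dealer", "neighbour", "agronomist"}
ties = {frozenset(("neighbour", "coop_agent"))}
print(round(ego_density(alters, ties), 2))  # 0.17 -> a sparse ego network
print(len(brokerage_pairs(alters, ties)))   # 5 alter pairs bridged by the ego
```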
Abstract:
In its open and private-based dimension, the Internet is the epitome of the Liberal International Order in its global spatial dimension. Normative questions therefore arise from the emergence of powerful non-liberal actors, such as China, in Internet governance. In particular, China has supported a UN-based multilateral Internet governance model grounded in state sovereignty and aimed at replacing the existing ICANN-based multistakeholder model. While persistent, this debate has become less dualistic over time. However, fear of Internet fragmentation has increased as US-China technological competition has grown harsher. This thesis asks: "(To what extent) are Chinese stakeholders reshaping the rules of Global Internet Governance?". This is further unpacked into three smaller questions: (i) (to what extent) are Chinese stakeholders contributing to increased state influence in multistakeholder fora?; (ii) (how) is China contributing to Internet fragmentation?; and (iii) what are the main drivers of Chinese stakeholders' stances? To answer these questions, Chinese stakeholders' actions are observed in the making and management of critical Internet resources at the IETF and ICANN, respectively, and in mobile connectivity standard-making at 3GPP. Through the lens of norm entrepreneurship in regime complexes, this thesis interprets change and persistence in the Internet governance normative order and in Chinese attitudes towards it. Three research methods are employed: network analysis, semi-structured expert interviews, and thematic document analysis. While China has enhanced state intervention in several technological fields, fostering debates on digital sovereignty, this research finds that the Chinese government does not exert full control over its domestic private actors, and concludes that Chinese stakeholders have increasingly adapted to multistakeholder Internet governance as they grew influential within it. To enhance control over Internet-based activities, the Chinese government has resorted to regulatory and technical control domestically rather than establishing a splinternet, owing to Chinese stakeholders' interest in retaining the network benefits of global interconnectivity.
Abstract:
Massive Internet of Things is expected to play a crucial role in Beyond 5G (B5G) wireless communication systems, offering seamless connectivity among heterogeneous devices without human intervention. However, with the exponential proliferation of smart devices and IoT networks, relying solely on terrestrial networks may not fully meet the demanding IoT requirements in terms of bandwidth and connectivity, especially in areas where terrestrial infrastructures are not economically viable. To unleash the full potential of 5G and B5G networks and enable seamless connectivity everywhere, the 3GPP envisions the integration of Non-Terrestrial Networks (NTNs) into terrestrial ones starting from Release 17. However, this integration process requires modifications to the 5G standard to ensure reliable communications despite typical satellite channel impairments. In this framework, this thesis proposes techniques at the Physical and Medium Access Control layers that require minimal adaptations of the current NB-IoT standard via NTN. First, the satellite channel impairments are evaluated and a detailed link budget analysis is provided. Analyses at the link and system levels are then conducted. At the link level, a novel algorithm leveraging time-frequency analysis is proposed to detect orthogonal preambles and estimate the signals' arrival time; in addition, the effects of collisions on the detection probability and Bit Error Rate are investigated, and Non-Orthogonal Multiple Access approaches are proposed for the random access and data phases. The system-level analysis evaluates the performance of random access under congestion: various access parameters are tested in different satellite scenarios, and performance is measured in terms of access probability and the time required to complete the procedure. Finally, a heuristic algorithm is proposed to jointly design the access and data phases, determining the number of satellite passages, the Random Access Periodicity, and the number of uplink repetitions that maximize the system's spectral efficiency.
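As a small illustration of the link budget analysis mentioned above, here is a generic Python sketch computing free-space path loss and a received C/N0 estimate. All figures are placeholder assumptions (a 600 km S-band LEO link), not values from the thesis.

```python
# Generic satellite link budget: FSPL plus a C/N0 estimate in dB-Hz.
from math import log10


def fspl_db(d_m, f_hz):
    """Free-space path loss: 20*log10(4*pi*d*f/c), with the constant
    20*log10(4*pi/c) = -147.55 dB folded in."""
    return 20 * log10(d_m) + 20 * log10(f_hz) - 147.55


def cn0_dbhz(eirp_dbw, gt_dbk, d_m, f_hz, losses_db=3.0):
    # C/N0 = EIRP + G/T - FSPL - extra losses - k
    # where k is Boltzmann's constant, -228.6 dBW/K/Hz.
    return eirp_dbw + gt_dbk - fspl_db(d_m, f_hz) - losses_db + 228.6


# Illustrative numbers: 600 km slant range, 2 GHz carrier,
# 33 dBW EIRP, receiver G/T of 1.1 dB/K, 3 dB of misc. losses.
print(round(fspl_db(600e3, 2e9), 1))              # ~154.0 dB
print(round(cn0_dbhz(33.0, 1.1, 600e3, 2e9), 1))  # ~105.7 dB-Hz
```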
Abstract:
The purpose of this research study is to discuss privacy and data protection-related regulatory and compliance challenges posed by digital transformation in healthcare in the wake of the COVID-19 pandemic. The public health crisis accelerated the development of patient-centred remote/hybrid healthcare delivery models that make increased use of telehealth services and related digital solutions. The large-scale uptake of IoT-enabled medical devices and wellness applications, and the offering of healthcare services via healthcare platforms (online doctor marketplaces), have catalysed these developments. However, the use of new enabling technologies (IoT, AI) and the platformisation of healthcare pose complex challenges to the protection of patients' privacy and personal data. This comes at a time when the EU is drawing up a new regulatory landscape for the use of data and digital technologies. Against this background, the study presents an interdisciplinary (normative and technology-oriented) critical assessment of how the new regulatory framework may affect privacy and data protection requirements for the deployment and use of Internet of Health Things (hardware) devices and interconnected software (AI systems). The study also assesses key privacy and data protection challenges that affect healthcare platforms (online doctor marketplaces) in their offering of video API-enabled teleconsultation services and their (anticipated) integration into the European Health Data Space. The overall conclusion of the study is that regulatory deficiencies may create integrity risks for the protection of privacy and personal data in telehealth, owing to uncertainties about the proper interplay, legal effects and effectiveness of existing and proposed EU legislation. The proliferation of normative measures may increase compliance costs, hinder innovation and, ultimately, deprive European patients of state-of-the-art digital health technologies, which is, paradoxically, the opposite of what the EU aims to achieve.
Abstract:
Natural Language Processing (NLP) has seen tremendous improvements over the last few years. Transformer architectures have achieved impressive results in almost any NLP task, such as Text Classification, Machine Translation, and Language Generation. Over time, transformers have continued to improve thanks to larger corpora and bigger networks, reaching hundreds of billions of parameters. Training and deploying such large models has become prohibitively expensive, to the point that only big high-tech companies can afford to train them. Therefore, a lot of research has been dedicated to reducing model size. In this thesis, we investigate the effects of Vocabulary Transfer and Knowledge Distillation for compressing large Language Models. The goal is to combine these two methodologies to compress models further without significant loss of performance. In particular, we designed different combination strategies and conducted a series of experiments on different vertical domains (medical, legal, news) and downstream tasks (Text Classification and Named Entity Recognition). Four different methods involving Vocabulary Transfer (VIPI), with and without a Masked Language Modelling (MLM) step and with and without Knowledge Distillation, are compared against a baseline that assigns random vectors to new elements of the vocabulary. Results indicate that VIPI effectively transfers information from the original vocabulary and that MLM is beneficial. We also note that vocabulary transfer and knowledge distillation are orthogonal to one another and may be applied jointly; applying knowledge distillation first and vocabulary transfer afterwards is recommended. Finally, model performance under vocabulary transfer does not always show a consistent trend as the vocabulary size is reduced; hence, the vocabulary size should be selected empirically by evaluating on the downstream task, similarly to hyperparameter tuning.
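As an illustration of the knowledge-distillation component discussed above, here is a minimal Python/NumPy sketch of the standard distillation loss: a temperature-softened cross-entropy against the teacher's distribution blended with the hard-label loss. The temperature, mixing weight and shapes are assumptions for this sketch, not the thesis's exact setup.

```python
# Standard distillation loss: alpha * soft-target term + (1-alpha) * hard term.
import numpy as np


def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    # Cross-entropy against softened teacher outputs, scaled by T^2 so the
    # gradient magnitude is comparable across temperatures.
    soft = -(p_teacher * log_p_student).sum(axis=-1).mean() * T * T
    # Ordinary cross-entropy against the ground-truth labels.
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels]).mean()
    return alpha * soft + (1 - alpha) * hard


s = np.array([[2.0, 0.5, -1.0]])  # hypothetical student logits
t = np.array([[3.0, 0.2, -2.0]])  # hypothetical teacher logits
print(round(distillation_loss(s, t, labels=np.array([0])), 3))
```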