890 results for SCTP (Protocol of Communication Network)
Abstract:
My dissertation emphasizes a cognitive account of multimodality that explicitly integrates experiential knowledge work into the rhetorical pedagogy that informs so many composition and technical communication programs. In these disciplines, multimodality is widely conceived in terms of what Gunther Kress calls “social semiotic” modes of communication shaped primarily by culture. In the cognitive and neurolinguistic theories of Vittorio Gallese and George Lakoff, however, multimodality is described as a key characteristic of our bodies’ sensory-motor systems, which link perception to action and action to meaning, grounding all communicative acts in knowledge shaped through body-engaged experience. I argue that this “situated” account of cognition – which closely approximates Maurice Merleau-Ponty’s phenomenology of perception, a major framework for my study – has pedagogical precedence in the mimetic pedagogy that informed ancient Sophistic rhetorical training, and I reveal that training’s multimodal dimensions through a phenomenological exegesis of the concept of mimesis. Plato’s denigration of the mimetic tradition and his elevation of conceptual contemplation through reason, out of which developed the classic Cartesian separation of mind from body, resulted in a general degradation of experiential knowledge in Western education. But with the recent introduction into college classrooms of digital technologies and multimedia communication tools, renewed emphasis is being placed on the “hands-on” nature of inventive and productive praxis, necessitating a revision of methods of instruction and assessment that have traditionally privileged the acquisition of conceptual over experiential knowledge. The model of multimodality I construct from Merleau-Ponty’s phenomenology, ancient Sophistic rhetorical pedagogy, and current neuroscientific accounts of situated cognition insists on recognizing the significant role that knowledges we acquire experientially play in our reading and writing, speaking and listening, discerning and designing practices.
Abstract:
This dissertation introduces a new approach for assessing the effects of pediatric epilepsy on the language connectome. Two novel data-driven network construction approaches are presented. These methods rely on connecting different brain regions using either the extent or the intensity of language-related activations as identified by independent component analysis of fMRI data. An auditory description decision task (ADDT) paradigm was used to activate the language network for 29 patients and 30 controls recruited from three major pediatric hospitals. Empirical evaluations illustrated that pediatric epilepsy can cause, or be associated with, a reduction in network efficiency. Patients showed a propensity to inefficiently employ the whole brain network to perform the ADDT language task; in contrast, controls seemed to efficiently use smaller segregated network components to achieve the same task. To explain the causes of the decreased efficiency, graph theoretical analysis was carried out. The analysis revealed no substantial global network feature differences between the patient and control groups. It also showed that for both subject groups the language network exhibited small-world characteristics; however, the patients’ extent-of-activation network showed a tendency towards more random networks. It was also shown that the intensity-of-activation network displayed ipsilateral hub reorganization at the local level. The left hemispheric hubs displayed greater centrality values for patients, whereas the right hemispheric hubs displayed greater centrality values for controls. This hub hemispheric disparity was not correlated with the atypical right language laterality found in six patients. Finally, it was shown that a multi-level unsupervised clustering scheme based on self-organizing maps (a type of artificial neural network) and k-means was able to blindly separate the subjects into their respective patient or control groups with fair accuracy. The clustering was initiated using the local nodal centrality measurements only. Compared to the extent-of-activation network, the intensity-of-activation network clustering demonstrated better precision. This outcome supports the assertion that the local centrality differences presented by the intensity-of-activation network can be associated with focal epilepsy.
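A minimal sketch of the general idea of clustering subjects from local nodal centrality, assuming networkx and scikit-learn; the graphs, centrality choice, and two-stage SOM/k-means pipeline of the dissertation are not reproduced, and the data below are synthetic placeholders.

```python
# Sketch only: per-subject nodal centrality features, then unsupervised
# two-group separation. Not the dissertation's exact pipeline.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def nodal_centrality(adj: np.ndarray) -> np.ndarray:
    """Betweenness centrality of every node in a weighted activation network."""
    g = nx.from_numpy_array(adj)
    return np.array(list(nx.betweenness_centrality(g, weight="weight").values()))

# Synthetic stand-ins for per-subject intensity-of-activation networks (10 regions).
subjects = [rng.random((10, 10)) for _ in range(20)]
features = np.vstack([nodal_centrality((a + a.T) / 2) for a in subjects])

# Blind separation into two groups using local centrality measurements only.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```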
Abstract:
Over the last decade, there has been a trend in which water utility companies aim to make water distribution networks more intelligent by incorporating IoT technologies, in order to improve their quality of service, reduce water waste, minimize maintenance costs, etc. Current state-of-the-art solutions use expensive, power-hungry deployments to monitor and transmit water network states periodically in order to detect anomalous behaviors such as water leakage and bursts. However, more than 97% of water network assets are remote from power sources and are often in geographically remote, underpopulated areas, facts that make current approaches unsuitable for the next generation of more dynamic, adaptive water networks. Battery-driven wireless sensor/actuator based solutions are theoretically the perfect choice to support next-generation water distribution. In this paper, we present an end-to-end water leak localization system, which exploits edge processing and enables the use of battery-driven sensor nodes. Our system combines a lightweight edge anomaly detection algorithm based on compression rates with an efficient localization algorithm based on graph theory. The edge anomaly detection and localization elements of the system produce a timely and accurate localization result and reduce communication by 99% compared to traditional periodic communication. We evaluated our schemes by deploying non-intrusive sensors measuring vibrational data on a real-world water test rig on which controlled leakage and burst scenarios were implemented.
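A hedged sketch of one way a compression-rate anomaly check could work at the edge: compress a quantised window of vibration samples and compare the compression ratio against a quiet baseline. The window size, quantisation, threshold, and signals below are illustrative assumptions, not the paper's parameters.

```python
# Illustrative only: compressibility of a sample window as an anomaly signal.
import zlib
import numpy as np

def compression_ratio(window: np.ndarray) -> float:
    """Ratio of compressed to raw size for a quantised window of samples."""
    raw = np.asarray(window * 1000, dtype=np.int16).tobytes()
    return len(zlib.compress(raw)) / len(raw)

rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 0.01, 512)   # background vibration (placeholder)
leak = rng.normal(0.0, 0.2, 512)     # hypothetical leak-induced broadband noise

baseline = compression_ratio(quiet)
current = compression_ratio(leak)
print(f"baseline ratio {baseline:.2f}, current ratio {current:.2f}")
# An edge node would raise (and transmit) an alarm only when the current
# ratio drifts from the baseline by more than a tuned threshold, which is
# what keeps routine communication to a minimum.
```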
Abstract:
This study aims to characterize the users of the National Long-Term Care Network (NL-TCN). The Portuguese National Health Service was restructured in 2006 with the creation of the National Long-Term Care Network to respond to new health and social needs concerning the continuity of care. Objectives: to analyse the sociodemographic profile of the network users and to review hospital, local and regional management procedures. Methods: we used various methods of an observational or experimental nature, with data processing and presentation of results carried out in the Statistical Package for the Social Sciences, version 20, using descriptive statistics (frequencies, crosstabs and the chi-square test) and the Pearson correlation test. Results: from a sample of 805 cases, 595 (74%) were admitted to the NL-TCN, a rate lower than the national average (86%). Almost half of the sample was admitted to Rehabilitation Units (46%), whereas nationally the highest number of admissions was to Home Care Teams (30%). The average time from hospital referral to network admission was 9.73 days, with a positive correlation between the duration of network management referral procedures and hospital length of stay. Conclusions: for specialized units, the maximum waiting times were for the Long-Term and Support Units (mean 30.27 days) and the minimum waiting times were for Home Care Teams (mean 5.57 days). The average time between the local and regional management stages was 3.59 days. Almost 90% of referrals came from orthopaedics, internal medicine and neurology, and network users were mostly elderly (average 75 years old), female and married. Most users were admitted to inpatient units (78%) and only 15% remained in their home town.
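For illustration of the statistics named in the methods, a small sketch of a Pearson correlation and a chi-square test with SciPy; the numbers are hypothetical toy values, not the study's data, and the crosstab categories are assumed for the example.

```python
# Toy example of the reported statistical tests (hypothetical data only).
import numpy as np
from scipy.stats import pearsonr, chi2_contingency

procedure_days = np.array([2, 5, 9, 12, 20, 31])    # days from referral to admission
length_of_stay = np.array([8, 11, 15, 18, 27, 40])  # hospital length of stay (days)
r, p = pearsonr(procedure_days, length_of_stay)
print(f"Pearson r={r:.2f}, p={p:.3f}")

# Crosstab of admission unit type by sex, tested with chi-square.
crosstab = np.array([[120, 150],    # rehabilitation units: male, female
                     [60, 90]])     # home care teams: male, female
chi2, p, dof, _ = chi2_contingency(crosstab)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```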
Abstract:
The concept of SG (Smart Grids) encompasses a set of technologies that raise the intelligence of electrical networks, such as smart meters or instruments for communication, sensing and auto-correction of networks. Nevertheless, cost is still an important obstacle to the transformation of the current electricity system into a smarter one. Regulation can have an important role in setting up a favorable framework that fosters investment. However, the novelty with SG is the disembodied character of the technology, which may change the incentives of the regulated network companies to invest, affecting the effectiveness of the regulatory instruments (“cost plus” or “price cap”). This paper demonstrates that the solution to this “Smart” paradox requires strong incentive regulation mechanisms able to stimulate the adoption of SG technologies. Moreover, regulation should not jeopardize conventional investments that cannot be substituted by SG. Thus, a combination of performance regulation and efficiency obligations may be necessary.
Abstract:
We measure quality of service (QoS) in a wireless network architecture for transoceanic aircraft. A distinguishing characteristic of the network scheme we analyze is that it combines Delay Tolerant Networking (DTN), exploited through opportunistic contacts, with direct satellite access at a limited number of nodes. We provide a graph sparsification technique for deriving a network model that satisfies the key properties of a real aeronautical opportunistic network while enabling scalable simulation. This reduced model allows us to analyze the QoS impact of introducing Internet-like traffic in the form of outgoing data from passengers. Promoting QoS in DTNs is usually challenging due to their long delays and scarce resources. The availability of satellite communication links offers a chance to provide an improved degree of service compared with a purely opportunistic approach, and this improvement therefore needs to be properly measured and quantified. Our analysis focuses on several QoS indicators such as delivery time, delivery ratio, and bandwidth allocation fairness. The obtained results show significant improvements in all QoS indicators, improvements not usually achievable in the field of DTNs.
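As a hedged illustration of graph sparsification in general, a sketch that keeps a connected backbone plus the strongest remaining contacts; the paper's own technique preserves specific properties of the aeronautical contact graph and is not reproduced here, and the graph, weights, and keep fraction below are assumptions.

```python
# Sketch of one common sparsification idea: spanning backbone + heaviest edges.
import networkx as nx

def sparsify(g: nx.Graph, keep_fraction: float = 0.2) -> nx.Graph:
    """Keep a maximum spanning tree over contact weight, plus the heaviest
    fraction of the remaining opportunistic contacts."""
    backbone = nx.maximum_spanning_tree(g, weight="weight")
    extra = sorted(
        (e for e in g.edges(data=True) if not backbone.has_edge(e[0], e[1])),
        key=lambda e: e[2].get("weight", 0.0),
        reverse=True,
    )
    backbone.add_edges_from(extra[: int(keep_fraction * len(extra))])
    return backbone

# Placeholder contact graph; real inputs would be aircraft contact traces.
g = nx.gnm_random_graph(50, 400, seed=2)
nx.set_edge_attributes(g, {e: 1.0 for e in g.edges()}, "weight")
print(sparsify(g).number_of_edges(), "edges kept of", g.number_of_edges())
```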
Abstract:
Jean Monnet, possibly the most important actor during the first post-war decades of European integration, is constantly described in the literature as part of a network that included several influential individuals in Europe and in the United States who, at different moments, held key positions. An important aspect in this regard is that some of Monnet’s transatlantic friends promoted European integration and contributed to a cross-fertilization process across the Atlantic. Considering that most of the authors either list a number of people as being part of this network or focus on particular individuals’ relationships with Monnet, it is fair to ask to what extent his network helped him pursue his goals; whether, and why, Monnet was simply accepted into already existing networks; whether his can be considered a transatlantic working group; and whether we can retrace in this story elements of continuity and longue durée that can contribute to the historiography of early European integration. Considering new trends and interpretations that highlight the role played by networks, examination of Monnet’s techniques and his reliance on his transatlantic connections reveals important findings about his relationship with policymakers, also shedding light on important features of twentieth-century diplomatic and transatlantic history. This dissertation therefore attempts to define these as elements of continuity throughout the formative years of one of the founding fathers of the integration process.
Abstract:
In this thesis we will see that the DNA sequence is constantly shaped by interactions with its environment at multiple levels, showing footprints of DNA methylation, of its 3D organization and, in the case of bacteria, of the interaction with host organisms. In the first chapter, we will see that, by analyzing the distribution of distances between consecutive dinucleotides of the same type along the sequence, we can detect epigenetic and structural footprints. In particular, we will see that the CG distance distribution allows us to distinguish among organisms of different biological complexity, depending on the extent to which CG sites are involved in DNA methylation. Moreover, we will see that CG and TA can be described by the same fitting function, suggesting a relationship between the two. We will also provide an interpretation of the observed trend, simulating a positioning process guided by the presence and absence of memory. Finally, we will focus on the TA distance distribution, characterizing deviations from the trend predicted by the best fitting function and identifying specific patterns that might be related to peculiar mechanical properties of the DNA and also to epigenetic and structural processes. In the second chapter, we will see how we can map the 3D structure of the DNA onto its sequence. In particular, we devised a network-based algorithm that produces a genome assembly starting from its 3D configuration, using Hi-C contact maps as inputs. Specifically, we will see how we can identify the different chromosomes and reconstruct their sequences by exploiting the spectral properties of the Laplacian operator of a network. In the third chapter, we will see a novel method for source clustering and source attribution, based on a network approach, that makes it possible to identify host-bacteria interactions starting from the detection of single-nucleotide polymorphisms along the sequences of bacterial genomes.
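A minimal sketch of the first-chapter measurement, assuming a simple overlapping-match scan: the histogram of gaps between consecutive occurrences of a given dinucleotide (CG here). The random sequence is a placeholder for a real genome, and the thesis's fitting and memory-model steps are not reproduced.

```python
# Distance distribution between consecutive CG dinucleotides along a sequence.
import re
import random
from collections import Counter

random.seed(3)
sequence = "".join(random.choice("ACGT") for _ in range(100_000))  # placeholder genome

def distance_distribution(seq: str, dinucleotide: str = "CG") -> Counter:
    """Histogram of gaps between consecutive (possibly overlapping) occurrences."""
    positions = [m.start() for m in re.finditer(f"(?={dinucleotide})", seq)]
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    return Counter(gaps)

dist = distance_distribution(sequence, "CG")
print(sorted(dist.items())[:10])  # shortest gaps and their counts
```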
Abstract:
Water Distribution Networks (WDNs) play a vitally important role in communities, ensuring well-being and supporting economic growth and productivity. The need for greater investment requires design choices that will impact the efficiency of management in the coming decades. This thesis proposes an algorithmic approach to address two related problems: (i) identifying the fundamental assets of large WDNs in terms of main infrastructure; (ii) sectorizing large WDNs into isolated sectors in order to respect the minimum service to be guaranteed to users. Two methodologies have been developed to meet these objectives, and they were subsequently integrated to guarantee an overall process that optimizes the sectorized configuration of the WDN while addressing problems (i) and (ii) within a single global vision. With regard to problem (i), the methodology developed introduces the concept of a primary network and follows a dual approach: connecting the main nodes of the WDN in terms of hydraulic infrastructure (reservoirs, tanks, pump stations) and identifying hypothetical paths with minimal energy losses. The primary network thus identified can be used as an initial basis for designing the sectors. The sectorization problem (ii) has been addressed with optimization techniques, through the development of a new dedicated Tabu Search algorithm able to deal with real WDN case studies. For this reason, three new large WDN models have been developed in order to test the capabilities of the algorithm on different and complex real cases. The developed methodology also automatically identifies the deficient parts of the primary network and dynamically includes new edges in order to support a sectorized configuration of the WDN. The application of the overall algorithm to the new real case studies and to others from the literature has produced applicable solutions even in specific complex situations.
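A hedged sketch of the primary-network idea: connect the main hydraulic nodes along pipe paths of minimal energy loss using shortest paths. The tiny graph, node names, and head-loss weights are hypothetical, and the thesis's actual path-identification and Tabu Search sectorization are not reproduced.

```python
# Candidate primary network as minimal-head-loss paths between main nodes.
import networkx as nx

wdn = nx.Graph()
# Edges are pipes; 'head_loss' is a hypothetical per-pipe energy-loss weight.
wdn.add_weighted_edges_from(
    [("reservoir", "j1", 0.8), ("j1", "tank", 1.2), ("reservoir", "j2", 2.5),
     ("j2", "tank", 0.4), ("j1", "j2", 0.3), ("tank", "pump", 0.9)],
    weight="head_loss",
)

main_nodes = ["reservoir", "tank", "pump"]   # reservoirs, tanks, pump stations
primary_edges = set()
for i, src in enumerate(main_nodes):
    for dst in main_nodes[i + 1:]:
        path = nx.dijkstra_path(wdn, src, dst, weight="head_loss")
        primary_edges.update(zip(path, path[1:]))

print(sorted(primary_edges))  # edge set of the candidate primary network
```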
Abstract:
The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By executing totally or partially closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP, etc.). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. That risks undermining the principle of generality that underlies the economies of scale of cloud computing, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by adopting them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable for enabling the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
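A conceptual sketch (not the thesis's actual API) of what an acceleration-agnostic asynchronous I/O layer might look like: applications code against an abstract channel, while interchangeable backends (a plain-socket fallback here; RDMA, DPDK, or XDP backends in principle) implement it. All names below are hypothetical.

```python
# Illustration of decoupling application I/O from the acceleration backend.
import abc
import asyncio

class AcceleratedChannel(abc.ABC):
    """Backend-agnostic high-performance I/O channel exposed to applications."""

    @abc.abstractmethod
    async def send(self, buf: memoryview) -> None: ...

    @abc.abstractmethod
    async def recv(self, nbytes: int) -> bytes: ...

class SocketChannel(AcceleratedChannel):
    """Fallback backend built on ordinary asyncio streams."""

    def __init__(self, reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
        self._reader, self._writer = reader, writer

    async def send(self, buf: memoryview) -> None:
        # memoryview hints at zero-copy intent; this fallback still copies.
        self._writer.write(bytes(buf))
        await self._writer.drain()

    async def recv(self, nbytes: int) -> bytes:
        return await self._reader.readexactly(nbytes)
```

The point of the abstraction is that swapping the backend does not require changing application code, which is the generality argument the abstract makes.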
Abstract:
Most cognitive functions require the encoding and routing of information across distributed networks of brain regions. Information propagation is typically attributed to physical connections existing between brain regions and contributes to the formation of spatially correlated activity patterns, known as functional connectivity. While structural connectivity provides the anatomical foundation for neural interactions, the exact manner in which it shapes functional connectivity is complex and not yet fully understood. Additionally, traditional measures of directed functional connectivity only capture the overall correlation between neural activity and provide no insight into the content of the transmitted information, limiting their usefulness for understanding the neural computations underlying the distributed processing of behaviorally relevant variables. In this work, we first study the relationship between structural and functional connectivity in simulated recurrent spiking neural networks with spike-timing-dependent plasticity. We use established measures of time-lagged correlation and overall information propagation to infer the temporal evolution of synaptic weights, showing that measures of dynamic functional connectivity can be used to reliably reconstruct the evolution of structural properties of the network. Then, we extend current methods for assessing directed causal communication between brain areas by deriving an information-theoretic measure, Feature-specific Information Transfer (FIT), quantifying the amount, content and direction of information flow. We test FIT on simulated data, showing its key properties and advantages over traditional measures of overall propagated information. We show applications of FIT to several neural datasets obtained with different recording methods (magneto- and electro-encephalography, spiking activity, local field potentials) during various cognitive functions, ranging from sensory perception to decision making and motor learning. Overall, these analyses demonstrate the ability of FIT to advance the investigation of communication between brain regions, uncovering the previously unaddressed content of directed information flow.
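As a toy illustration of the "established measures of time-lagged correlation" mentioned above, a sketch that recovers the driving lag between two simulated signals; the signals are synthetic, and the FIT measure itself is information-theoretic and is not reproduced here.

```python
# Time-lagged correlation as a simple proxy for directed functional connectivity.
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=2000)
y = np.roll(x, 5) + 0.5 * rng.normal(size=2000)  # y follows x with a 5-sample lag

def lagged_correlation(source, target, max_lag=20):
    """Correlation of target with source shifted by each candidate lag."""
    return [np.corrcoef(source[:-lag], target[lag:])[0, 1] for lag in range(1, max_lag)]

corrs = lagged_correlation(x, y)
print("best lag:", int(np.argmax(corrs)) + 1)   # expected to recover the lag of 5
```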
Abstract:
To assess the quality of care of women with severe maternal morbidity and to identify associated factors. This is a national multicenter cross-sectional study performing surveillance of severe maternal morbidity, using the World Health Organization criteria. The expected number of maternal deaths was calculated with the maternal severity index (MSI), based on the severity of complications, and the standardized mortality ratio (SMR) for each center was estimated. Analyses of the adequacy of care were performed. Seventeen hospitals were classified as providing adequate care and 10 as providing nonadequate care. Besides an almost twofold increase in the maternal mortality ratio, the main factors associated with nonadequate performance were geographic difficulty in accessing health services (P < 0.001), delays related to quality of medical care (P = 0.012), absence of blood derivatives (P = 0.013), difficulties of communication between health services (P = 0.004), and any delay during the whole process (P = 0.039). This is an example of how the performance of health services can be evaluated using a benchmarking tool specific to obstetrics. In this study, the MSI was a useful tool for identifying differences in maternal mortality ratios and factors associated with nonadequate performance of care.
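A worked toy example of the benchmarking arithmetic: the standardized mortality ratio of a centre is observed deaths divided by the deaths expected from case severity. The per-case probabilities below are placeholders, not the actual MSI model coefficients.

```python
# SMR = observed deaths / expected deaths predicted from case severity.
expected_death_probability = [0.02, 0.10, 0.35, 0.05, 0.60]  # one value per severe case (placeholder)
observed_deaths = 2

expected_deaths = sum(expected_death_probability)
smr = observed_deaths / expected_deaths
print(f"expected={expected_deaths:.2f}, SMR={smr:.2f}")  # SMR > 1 suggests worse-than-expected performance
```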
Abstract:
The present study sought to assess the impact of an intervention to reduce weight and control risk factors for noncommunicable chronic diseases in overweight or obese adults who are users of primary and secondary healthcare units of the public health system of Pelotas, Brazil. We hypothesized that individuals who received an educational intervention on how to lose weight and prevent other noncommunicable chronic disease risk factors through nutrition would lose weight and acquire active leisure-time habits more frequently than individuals under regular care. Two hundred forty-one participants from the Nutrition Outpatient Clinic of the Medical Teaching Hospital of the Federal University of Pelotas, Brazil, aged 20 years or older and classified as overweight or obese, were randomly allocated to either the intervention group (IG; n = 120) or the control group (CG; n = 121). The IG received individualized nutritional care for 6 months, and the CG received the individualized usual care of the health services. Intention-to-treat analyses showed that at 6 months, mean fasting glycemia and daily consumption of sweet foods and sodium were reduced, and the time spent on physical leisure activity was increased, in the IG. Analysis of adherence to the protocol of the study revealed that individuals in the IG lost more body weight and showed greater reductions in waist circumference and fasting glucose than those in the CG. Leisure-time physical activity increased in the IG. Individuals adhered equally to the dietetic recommendations, irrespective of the nutrition approach that was used.
Abstract:
Understanding the emergence of extreme opinions, and in what kind of environment they might become less extreme, is a central theme in our modern globalized society. A model combining continuous opinions and observed discrete actions (CODA), capable of addressing the important issue of measuring how extreme opinions might become, has recently been proposed. In this paper I show that extreme opinions arise in a ubiquitous manner in the CODA model for a multitude of social network structures. Depending on network details, reducing extremism seems to be possible; however, a large number of agents with extreme opinions is always observed. A significant decrease in the number of extremists can be observed by allowing agents to change their positions in the network.
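A minimal sketch of CODA-style dynamics on a social network, under simplifying assumptions: each agent holds a continuous (log-odds) opinion but neighbours observe only its discrete action (the opinion's sign), and each observation nudges the observer's opinion by a fixed step. The network, parameters, and extremism cutoff are illustrative, not those of the paper.

```python
# Illustrative CODA-style update: continuous opinions, observed discrete actions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
g = nx.watts_strogatz_graph(200, k=6, p=0.1, seed=5)   # placeholder social network
opinion = rng.normal(0, 0.1, g.number_of_nodes())       # log-odds that "A is better"
alpha = 1.0                                              # step per observed action

for _ in range(50_000):
    i = rng.integers(g.number_of_nodes())
    neighbours = list(g.neighbors(i))
    if neighbours:
        j = rng.choice(neighbours)
        opinion[i] += alpha * np.sign(opinion[j])        # only the action is observed, not the opinion

extremists = np.mean(np.abs(opinion) > 10 * alpha)       # opinions many steps from neutral
print(f"fraction of extreme agents: {extremists:.2f}")
```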