973 results for Network Flow Interpretation
Abstract:
Waterpower: A Geophysical and Archaeological Investigation of the Waterpower System at the West Point Foundry, Cold Spring, New York, describes the results of ground penetrating radar surveys and archaeological excavation undertaken by Michigan Technological University (MTU) archaeologists during the summer of 2003 at the West Point Foundry, Cold Spring, New York. 2003 constituted MTU's second field season at the foundry. Fieldwork concentrated on the foundry's waterpower system, an intricate network of surface and subsurface drains, races, flumes, waterwheels, turbines, dams, and ponds that powered operations and regulated water flow throughout the site. Archaeologists utilized non-destructive geophysical technology, which expedited survey, facilitated placement of excavation units, and provided a model for future archaeogeophysical research at industrial sites. Features discovered during excavation provided valuable information pertaining to the waterpower system's construction and its functions. Data from ground penetrating radar surveys, archaeological excavation, historical photographs, documents, and maps permitted the development of a provisional chronology for the various components of the West Point Foundry's waterpower system. Information gathered during this project serves as an aid in site interpretation and rehabilitation.
Abstract:
Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consume significant amounts of energy. Even though servers become more energy efficient with various energy saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves the switch throughput by avoiding the padding bits of SAR. Second, since certain resource demands of VMs are bursty and stochastic in nature, to satisfy both deterministic and stochastic demands in VM placement we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow. Finally, although DCNs are typically provisioned with full bisection bandwidth, DCN traffic exhibits fluctuating patterns, so we propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem into a routing problem and employs depth-first and best-fit search to find efficient paths for flows.
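The abstract describes M3SBP only at a high level; the Python sketch below is a minimal illustration of the general idea (turn a stochastic demand into an equivalent deterministic value, then greedily place each VM where the minimum post-placement utilization ratio is largest). The mean + k·std equivalence, the function names, and the data layout are illustrative assumptions, not the dissertation's actual formulation.

```python
def equivalent_demand(mean, std, k=2.0):
    """Illustrative equivalent deterministic value for a stochastic demand.
    (Assumption: mean + k*std; the dissertation derives its own equivalence.)"""
    return mean + k * std

def place_vms(vms, servers):
    """Greedy max-min heuristic: place each VM on the feasible server where the
    minimum post-placement utilization ratio across resources is largest."""
    for vm in vms:                          # vm: {resource: (mean, std)}
        demand = {r: equivalent_demand(m, s) for r, (m, s) in vm.items()}
        best, best_score = None, -1.0
        for srv in servers:                 # srv: {resource: [used, capacity]}
            if any(srv[r][0] + demand[r] > srv[r][1] for r in demand):
                continue                    # infeasible on some dimension
            score = min((srv[r][0] + demand[r]) / srv[r][1] for r in demand)
            if score > best_score:
                best, best_score = srv, score
        if best is None:
            raise RuntimeError("no feasible server for this VM")
        for r in demand:
            best[r][0] += demand[r]
    return servers

# Toy usage with hypothetical capacities and demands
servers = [{"cpu": [0.0, 8.0], "mem": [0.0, 32.0]} for _ in range(2)]
vms = [{"cpu": (2.0, 0.5), "mem": (4.0, 1.0)},
       {"cpu": (1.0, 0.2), "mem": (8.0, 2.0)}]
print(place_vms(vms, servers))
```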
Abstract:
Power flow calculations are one of the most important tools for power system planning and operation. The need to account for uncertainties when performing power flow studies led, among other methods, to the development of the fuzzy power flow (FPF). This kind of model is especially interesting when information is scarce, which is a common situation in liberalized power systems (where generation and commercialization of electricity are market activities). In this framework, the symmetric/constrained fuzzy power flow (SFPF/CFPF) was proposed in order to avoid some of the problems of the original FPF model. The SFPF/CFPF models are suitable to quantify the adequacy of the transmission network to satisfy “reasonable demands for the transmission of electricity” as defined, for instance, in the European Directive 2009/72/EC. In this work we illustrate how the SFPF/CFPF may be used to evaluate the impact on the adequacy of a transmission system caused by specific investments in new network elements.
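The SFPF/CFPF models themselves are not spelled out in the abstract. As a rough illustration of how fuzzy injections propagate to branch flows, the sketch below evaluates a DC power flow on alpha-cuts of triangular fuzzy injections using interval arithmetic with a PTDF matrix. This is a generic fuzzy DC power flow sketch with made-up numbers, not the SFPF/CFPF formulation.

```python
import numpy as np

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def fuzzy_dc_flows(ptdf, injections, alpha=0.0):
    """Interval branch flows from fuzzy nodal injections via a PTDF matrix.
    Each bound picks, per node, the injection bound that pushes the flow
    in the corresponding direction (standard interval arithmetic)."""
    lows, highs = zip(*(alpha_cut(t, alpha) for t in injections))
    lows, highs = np.array(lows), np.array(highs)
    flow_lo = ptdf.clip(min=0) @ lows + ptdf.clip(max=0) @ highs
    flow_hi = ptdf.clip(min=0) @ highs + ptdf.clip(max=0) @ lows
    return flow_lo, flow_hi

# Hypothetical example: 2 branches, 3 non-slack buses, injections in p.u.
ptdf = np.array([[0.6, 0.3, 0.1],
                 [-0.4, 0.2, 0.5]])
injections = [(-1.2, -1.0, -0.8), (0.4, 0.5, 0.7), (0.3, 0.5, 0.6)]
print(fuzzy_dc_flows(ptdf, injections, alpha=0.5))
```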
Abstract:
In restructured power systems, generation and commercialization activities became market activities, while transmission and distribution activities continue as regulated monopolies. As a result, the adequacy of the transmission network should be evaluated independently of the generation system. Having introduced the constrained fuzzy power flow (CFPF) as a suitable tool to quantify the adequacy of the transmission network to satisfy 'reasonable demands for the transmission of electricity' (as stated, for instance, in European Directive 2009/72/EC), the aim is now to show how this approach can be used in conjunction with probabilistic criteria in security analysis. Classical security analysis models of power systems consider the composite system (generation plus transmission). The state of system components is usually modeled with probabilities, while loads (and generation) are modeled by crisp numbers, probability distributions, or fuzzy numbers. In the case of the CFPF, failures of the transmission network components have been investigated. In this framework, probabilistic methods are used to model failures of the transmission system components, and possibility models are used to deal with 'reasonable demands'. The enhanced version of the CFPF model is applied to an illustrative case.
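As a rough sketch of how probabilistic component models can be combined with a fuzzy adequacy evaluation, the snippet below samples line outages by Monte Carlo and re-evaluates a placeholder adequacy function for each sampled topology. The adequacy function here is only a stub; the actual CFPF-based evaluation is the subject of the abstract above, and the outage probabilities are hypothetical.

```python
import random

def sample_state(lines, rng):
    """Sample the in-service status of each line from its outage probability."""
    return {name: rng.random() >= p_out for name, p_out in lines.items()}

def expected_adequacy(lines, fuzzy_adequacy, n_samples=10_000, seed=0):
    """Monte Carlo expectation of an adequacy index over random outages.
    `fuzzy_adequacy(state)` stands in for the CFPF-based evaluation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += fuzzy_adequacy(sample_state(lines, rng))
    return total / n_samples

# Hypothetical 3-line system with per-line outage probabilities
lines = {"L1": 0.01, "L2": 0.02, "L3": 0.005}
# Stub adequacy index: fraction of lines in service (placeholder only)
adequacy = lambda state: sum(state.values()) / len(state)
print(expected_adequacy(lines, adequacy))
```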
Abstract:
Network monitoring is of paramount importance for effective network management: it allows operators to constantly observe the network's behavior to ensure it is working as intended, and it can trigger both automated and manual remediation procedures in case of failures and anomalies. The concept of SDN decouples the control logic from the legacy network infrastructure to perform centralized control over multiple switches in the network; in this context, the responsibility of the switches is only to forward packets according to the flow control instructions provided by the controller. However, as current SDN switches only expose simple per-port and per-flow counters, the controller has to do almost all the processing to determine the network state, which causes significant communication overhead and excessive latency for monitoring purposes. The absence of programmability in the data plane of SDN prompted the advent of programmable switches, which allow developers to customize the data-plane pipeline and implement novel programs operating directly in the switches. This means that certain monitoring tasks can be offloaded to programmable data planes, to perform fine-grained monitoring even at very high packet processing speeds. Given the central importance of network monitoring exploiting programmable data planes, the goal of this thesis is to enable a wide range of monitoring tasks in programmable switches, with a specific focus on those equipped with programmable ASICs. Indeed, most network monitoring solutions available in the literature do not take the computational and memory constraints of programmable switches into due account, preventing, de facto, their successful implementation in commodity switches. This thesis shows that such network monitoring tasks can nonetheless be executed in programmable switches. Our evaluations show that the contributions in this thesis could be used by network administrators as well as network security engineers to better understand the network status with respect to different monitoring metrics, and thus prevent network infrastructure and service outages.
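One concrete way to fit per-flow measurements into the limited memory of a programmable ASIC is a sketch data structure such as a count-min sketch. The Python model below is a generic illustration of that memory/accuracy trade-off, not code from the thesis.

```python
import hashlib

class CountMinSketch:
    """Approximate per-flow byte/packet counters in fixed memory.
    Overestimates counts with bounded error, which is the usual trade-off
    accepted when offloading monitoring to switch data planes."""
    def __init__(self, depth=4, width=1024):
        self.depth, self.width = depth, width
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, row, key):
        digest = hashlib.blake2b(key.encode(), digest_size=8,
                                 salt=bytes([row])).digest()
        return int.from_bytes(digest, "little") % self.width

    def update(self, flow_key, count=1):
        for row in range(self.depth):
            self.table[row][self._index(row, flow_key)] += count

    def estimate(self, flow_key):
        return min(self.table[row][self._index(row, flow_key)]
                   for row in range(self.depth))

cms = CountMinSketch()
cms.update("10.0.0.1->10.0.0.2:443", count=1500)   # bytes of one packet
print(cms.estimate("10.0.0.1->10.0.0.2:443"))
```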
Abstract:
In this thesis we will see that the DNA sequence is constantly shaped by interactions with its environment at multiple levels, showing footprints of DNA methylation, of its 3D organization and, in the case of bacteria, of the interaction with host organisms. In the first chapter, we will see that by analyzing the distribution of distances between consecutive dinucleotides of the same type along the sequence, we can detect epigenetic and structural footprints. In particular, we will see that the CG distance distribution allows us to distinguish among organisms of different biological complexity, depending on how many CG sites are involved in DNA methylation. Moreover, we will see that CG and TA can be described by the same fitting function, suggesting a relationship between the two. We will also provide an interpretation of the observed trend, simulating a positioning process guided by the presence or absence of memory. Finally, we will focus on the TA distance distribution, characterizing deviations from the trend predicted by the best fitting function and identifying specific patterns that might be related to peculiar mechanical properties of the DNA as well as to epigenetic and structural processes. In the second chapter, we will see how we can map the 3D structure of the DNA onto its sequence. In particular, we devised a network-based algorithm that produces a genome assembly starting from its 3D configuration, using Hi-C contact maps as inputs. Specifically, we will see how we can identify the different chromosomes and reconstruct their sequences by exploiting the spectral properties of the Laplacian operator of a network. In the third chapter, we will see a novel method for source clustering and source attribution, based on a network approach, that allows us to identify host-bacteria interactions starting from the detection of Single-Nucleotide Polymorphisms along the sequences of bacterial genomes.
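The assembly step described above relies on spectral properties of a network Laplacian. The snippet below is a minimal, generic illustration of that idea on a toy contact matrix: build the Laplacian and order nodes by the Fiedler vector, a common spectral seriation heuristic. It is not the thesis's algorithm, and the toy matrix is invented.

```python
import numpy as np

def laplacian(adj):
    """Unnormalized graph Laplacian L = D - A of a symmetric contact matrix."""
    return np.diag(adj.sum(axis=1)) - adj

def order_by_fiedler(adj):
    """Order nodes by the eigenvector of the second-smallest Laplacian
    eigenvalue (Fiedler vector)."""
    vals, vecs = np.linalg.eigh(laplacian(adj))
    fiedler = vecs[:, np.argsort(vals)[1]]
    return np.argsort(fiedler)

# Toy symmetric "contact map": two blocks joined by a weak link
adj = np.array([[0, 5, 4, 0, 0],
                [5, 0, 6, 1, 0],
                [4, 6, 0, 0, 0],
                [0, 1, 0, 0, 7],
                [0, 0, 0, 7, 0]], dtype=float)
print(order_by_fiedler(adj))
```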
Abstract:
This thesis studies the state of the art of phasor measurement units (PMUs) as well as their metrological requirements stated in the IEEE C37.118.1 and C37.118.2 standards for guaranteeing correct measurement performance. Communication systems among PMUs and their possible applicability in the field of power quality (PQ) assessment are also investigated. This preliminary study is followed by an analysis of the working principle of real-time (RT) simulators and of the importance of hardware-in-the-loop (HIL) implementation, examining possible case studies specific to PMUs, including compliance tests, which are one of the most important parts. The core of the thesis is the implementation of a PMU model in the IEEE 5-bus network in Simulink and the validation of the results using the OPAL RT-4510 real-time simulator. An initial check gives an idea of the quality of the Simulink results by comparing the PMU data with the load-flow steady-state information. In this part, accuracy indices are also calculated for both voltage and current synchrophasors. The following part consists of the implementation of the same code on the OPAL-RT 4510 simulator, after which an initial qualitative analysis is carried out to assess the quality of the outcomes. Finally, the results are confirmed by examining the voltage and current synchrophasors and accuracy indices obtained from the Simulink models and from the OPAL system, using a Matlab script. This work also proposes suggestions for an upcoming operation of PMUs in a more complex system, such as a Digital Twin (DT), in order to improve the performance of the existing protection devices of the distribution system operator (DSO) and thus enhance power system reliability in the future.
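A commonly used accuracy index in IEEE C37.118 compliance testing is the total vector error (TVE), the magnitude of the complex phasor error relative to the reference phasor magnitude. The short sketch below computes it for a measured versus reference synchrophasor; whether the thesis uses exactly this index among its accuracy indices is an assumption.

```python
import cmath

def total_vector_error(measured, reference):
    """TVE per IEEE C37.118: |measured - reference| / |reference|, in percent."""
    return abs(measured - reference) / abs(reference) * 100.0

ref = cmath.rect(1.0, 0.0)                         # 1 p.u. at 0 rad
meas = cmath.rect(1.005, cmath.pi / 180 * 0.3)     # 0.5% magnitude, 0.3 deg error
print(f"TVE = {total_vector_error(meas, ref):.3f} %")
```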
Abstract:
Intelligent systems are currently inherent to society, supporting a synergistic human-machine collaboration. Beyond economic and climate factors, energy consumption is strongly affected by the performance of computing systems. The quality of software functioning may invalidate any improvement attempt. In addition, data-driven machine learning algorithms are the basis for human-centered applications, with interpretability being one of the most important features of computational systems. Software maintenance is a critical discipline to support automatic and life-long system operation. As most software registers its inner events by means of logs, log analysis is an approach to keep systems operating. Logs are characterized as Big Data assembled in large-flow streams, being unstructured, heterogeneous, imprecise, and uncertain. This thesis addresses fuzzy and neuro-granular methods to provide maintenance solutions applied to anomaly detection (AD) and log parsing (LP), dealing with data uncertainty and identifying ideal time periods for detailed software analyses. LP provides a deeper semantic interpretation of the anomalous occurrences. The solutions evolve over time and are general-purpose, being highly applicable, scalable, and maintainable. Granular classification models, namely the Fuzzy set-Based evolving Model (FBeM), the evolving Granular Neural Network (eGNN), and the evolving Gaussian Fuzzy Classifier (eGFC), are compared on the AD problem. The evolving Log Parsing (eLP) method is proposed to approach the automatic parsing of system logs. All the methods perform recursive mechanisms to create, update, merge, and delete information granules according to the data behavior. For the first time in the evolving intelligent systems literature, the proposed method, eLP, is able to process streams of words and sentences. Regarding AD accuracy, FBeM achieved (85.64 ± 3.69)%, eGNN reached (96.17 ± 0.78)%, eGFC obtained (92.48 ± 1.21)%, and eLP reached (96.05 ± 1.04)%. Besides being competitive, eLP in particular generates a log grammar and presents a higher level of model interpretability.
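The evolving granular methods above recursively create and update information granules from data streams. The toy sketch below conveys the flavor of such an update loop (create a granule when no existing one covers a sample, otherwise expand the covering granule); it is a didactic caricature under simplifying assumptions, not FBeM, eGNN, eGFC, or eLP.

```python
class Granule:
    """Hyperbox granule with a class label."""
    def __init__(self, x, label):
        self.lo, self.hi, self.label = list(x), list(x), label

    def covers(self, x, slack=0.1):
        return all(l - slack <= v <= h + slack
                   for v, l, h in zip(x, self.lo, self.hi))

    def expand(self, x):
        self.lo = [min(l, v) for l, v in zip(self.lo, x)]
        self.hi = [max(h, v) for h, v in zip(self.hi, x)]

def learn_stream(stream):
    """Single-pass granular learning: expand a covering granule or create one."""
    granules = []
    for x, label in stream:
        hit = next((g for g in granules if g.label == label and g.covers(x)), None)
        if hit:
            hit.expand(x)
        else:
            granules.append(Granule(x, label))
    return granules

granules = learn_stream([([0.1, 0.2], "normal"), ([0.15, 0.25], "normal"),
                         ([0.9, 0.8], "anomaly")])
print(len(granules), [g.label for g in granules])
```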
Abstract:
Rapidity-odd directed flow (v1) measurements for charged pions, protons, and antiprotons near midrapidity (y = 0) are reported in √sNN = 7.7, 11.5, 19.6, 27, 39, 62.4, and 200 GeV Au+Au collisions as recorded by the STAR detector at the Relativistic Heavy Ion Collider. At intermediate impact parameters, the proton and net-proton slope parameter dv1/dy|y=0 shows a minimum between 11.5 and 19.6 GeV. In addition, the net-proton dv1/dy|y=0 changes sign twice between 7.7 and 39 GeV. The proton and net-proton results qualitatively resemble predictions of a hydrodynamic model with a first-order phase transition from hadronic matter to deconfined matter, and differ from hadronic transport calculations.
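For context, directed flow is conventionally defined as the first Fourier harmonic of the azimuthal distribution of emitted particles relative to the reaction plane, and the slope quoted above is its rapidity derivative at midrapidity. These are the standard definitions, stated here as background rather than taken from this particular analysis:

```latex
\frac{dN}{d\phi} \;\propto\; 1 + 2\sum_{n \ge 1} v_n \cos\!\big[n\,(\phi - \Psi_{\mathrm{RP}})\big],
\qquad
v_1 = \big\langle \cos(\phi - \Psi_{\mathrm{RP}}) \big\rangle,
\qquad
\text{slope} \equiv \left.\frac{dv_1}{dy}\right|_{y=0}.
```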
Abstract:
Disconnectivity between the Default Mode Network (DMN) nodes can cause clinical symptoms and cognitive deficits in Alzheimer's disease (AD). We aimed to examine the structural connectivity between DMN nodes, to verify the extent to which white matter disconnection affects cognitive performance. MRI data of 76 subjects (25 mild AD patients, 21 amnestic Mild Cognitive Impairment subjects, and 30 controls) were acquired on a 3.0 T scanner. ExploreDTI software (fractional anisotropy threshold = 0.25, angular threshold = 60°) was used to calculate axial, radial, and mean diffusivities, fractional anisotropy, and streamline count. AD patients showed lower fractional anisotropy (P=0.01) and streamline count (P=0.029), and higher radial diffusivity (P=0.014) than controls in the cingulum. After correction for white matter atrophy, only fractional anisotropy and radial diffusivity remained significantly different between AD patients and controls (P=0.003 and P=0.05). In the parahippocampal bundle, AD patients had lower mean and radial diffusivities (P=0.048 and P=0.013) compared to controls, of which only radial diffusivity survived adjustment for white matter atrophy (P=0.05). Regression models revealed that cognitive performance is also accounted for by white matter microstructural values. Structural connectivity within the DMN is important to the execution of high-complexity tasks, probably due to its relevant role in the integration of the network.
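For reference, the diffusivity and anisotropy measures reported above are standard functions of the diffusion tensor eigenvalues λ1 ≥ λ2 ≥ λ3 (textbook definitions, not taken from the article):

```latex
\mathrm{AD} = \lambda_1, \quad
\mathrm{RD} = \tfrac{1}{2}(\lambda_2 + \lambda_3), \quad
\mathrm{MD} = \bar{\lambda} = \tfrac{1}{3}(\lambda_1 + \lambda_2 + \lambda_3), \quad
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\,
\frac{\sqrt{\sum_i (\lambda_i - \bar{\lambda})^2}}{\sqrt{\sum_i \lambda_i^2}}.
```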
Abstract:
The article investigates patterns of performance in grip strength, gait speed, and self-rated health, and the relationships between them, considering the variables of gender, age, and family income. The study was conducted on a probabilistic sample of community-dwelling elderly people aged 65 and over who were members of a population study on frailty. A total of 689 elderly people without cognitive deficit suggestive of dementia underwent tests of gait speed and grip strength. Comparisons between groups were based on low, medium, and high speed and strength. Self-rated health was assessed using a 5-point scale. The males and the younger elderly individuals scored significantly higher on grip strength and gait speed than the females and the oldest did; the richest scored higher than the poorest on grip strength and gait speed; women and men aged over 80 had weaker grip strength and lower gait speed; slow gait speed and low income arose as risk factors for a worse health evaluation. Lower muscular strength affects the self-rated assessment of health because it results in a reduction in functional capacity, especially in the presence of poverty and a lack of compensatory factors.
Abstract:
The search for an Alzheimer's disease (AD) biomarker is one of the most relevant contemporary research topics due to the high prevalence and social costs of the disease. Functional connectivity (FC) of the default mode network (DMN) is a plausible candidate for such a biomarker. We evaluated 22 patients with mild AD and 26 age- and gender-matched healthy controls. All subjects underwent resting functional magnetic resonance imaging (fMRI) in a 3.0 T scanner. To identify the DMN, seed-based FC of the posterior cingulate was calculated. We also measured the sensitivity/specificity of the method, and verified a correlation with cognitive performance. We found a significant difference between patients with mild AD and controls in average z-scores: DMN, whole cortical positive (WCP) and absolute values. DMN individual values showed a sensitivity of 77.3% and specificity of 70%. DMN and WCP values were correlated to global cognition and episodic memory performance. We showed that individual measures of DMN connectivity could be considered a promising method to differentiate AD, even at an early phase, from normal aging. Further studies with larger numbers of participants, as well as validation of normal values, are needed for more definitive conclusions.
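Seed-based functional connectivity of the posterior cingulate, as used above, amounts to correlating the seed's BOLD time series with every voxel's time series and Fisher z-transforming the result. The snippet below is a generic sketch of that computation; the array shapes and toy data are illustrative assumptions, not the article's pipeline.

```python
import numpy as np

def seed_connectivity(voxel_ts, seed_mask):
    """Seed-based FC map: Pearson correlation of the mean seed time series
    with every voxel's time series, Fisher z-transformed.
    voxel_ts: (n_voxels, n_timepoints); seed_mask: boolean (n_voxels,)."""
    seed = voxel_ts[seed_mask].mean(axis=0)
    vx = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
    sd = seed - seed.mean()
    r = (vx @ sd) / (np.linalg.norm(vx, axis=1) * np.linalg.norm(sd) + 1e-12)
    return np.arctanh(np.clip(r, -0.999999, 0.999999))   # Fisher z

rng = np.random.default_rng(0)
ts = rng.standard_normal((100, 180))          # 100 voxels, 180 volumes (toy data)
mask = np.zeros(100, dtype=bool); mask[:5] = True
print(seed_connectivity(ts, mask).shape)
```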
Abstract:
Response surface methodology based on a Box-Behnken design (BBD) was successfully applied to optimize the operating conditions of the electrochemical oxidation of sanitary landfill leachate, aiming to make this method feasible for scale-up. Landfill leachate was treated in a continuous batch-recirculation system, in which a dimensionally stable anode (DSA©) coated with a Ti/TiO2 and RuO2 oxide film was used. The effects of three variables, current density (milliamperes per square centimeter), treatment time (minutes), and supporting electrolyte dosage (moles per liter), upon total organic carbon removal were evaluated. Optimized conditions for the highest desirability were obtained at 244.11 mA/cm2, 41.78 min, and 0.07 mol/L of NaCl, and at 242.84 mA/cm2, 37.07 min, and 0.07 mol/L of Na2SO4. Under the optimal conditions, 54.99% chemical oxygen demand (COD) and 71.07% ammonia nitrogen (NH3-N) removal was achieved with NaCl, and 45.50% COD and 62.13% NH3-N removal with Na2SO4. A new predictive kinetic model, obtained from the relation between the BBD and the kinetic model, was suggested.
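Response surface methodology with a Box-Behnken design typically fits a second-order polynomial in the coded factors and then locates the optimum on the fitted surface. The sketch below fits such a quadratic model by least squares to a hypothetical set of coded runs and responses (the numbers are invented for illustration and are not the article's data).

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, x1..x3, x1^2..x3^2, x1x2, x1x3, x2x3 (full second-order model)."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

def fit_rsm(X, y):
    """Least-squares coefficients of the second-order response surface."""
    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    return beta

# Hypothetical 3-factor Box-Behnken runs (coded levels) and responses (% removal)
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([40., 52., 45., 58., 38., 50., 44., 60.,
              41., 46., 47., 53., 55., 54., 56.])
print(fit_rsm(X, y).round(2))
```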