982 results for Network re-configuration
Abstract:
Graduate Program in Letters (Pós-graduação em Letras) - FCLAS
Abstract:
Graduate Program in Science Education (Pós-graduação em Educação para a Ciência) - FC
Abstract:
We present the results of a search for gravitational waves associated with 223 gamma-ray bursts (GRBs) detected by the InterPlanetary Network (IPN) in 2005–2010 during LIGO's fifth and sixth science runs and Virgo's first, second, and third science runs. The IPN satellites provide accurate times of the bursts and sky localizations that vary significantly, from degree scale to hundreds of square degrees. We search for both a well-modeled binary coalescence signal, the favored progenitor model for short GRBs, and for generic, unmodeled gravitational wave bursts. Both searches use the event time and sky localization to improve the gravitational wave search sensitivity as compared to corresponding all-time, all-sky searches. We find no evidence of a gravitational wave signal associated with any of the IPN GRBs in the sample, nor do we find evidence for a population of weak gravitational wave signals associated with the GRBs. For all IPN-detected GRBs for which a sufficient duration of quality gravitational wave data is available, we place lower bounds on the distance to the source under an optimistic assumption of gravitational wave emission energy of 10⁻² M☉c² at 150 Hz, and find a median of 13 Mpc. For the 27 short-hard GRBs we place 90% confidence exclusion distances for two source models: a binary neutron star coalescence, with a median distance of 12 Mpc, or the coalescence of a neutron star and black hole, with a median distance of 22 Mpc. Finally, we combine this search with previously published results to provide a population statement for GRB searches in first-generation LIGO and Virgo gravitational wave detectors and a resulting examination of prospects for the advanced gravitational wave detectors.
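For context, exclusion distances of this kind are conventionally derived from the standard relation for isotropic gravitational wave emission, which ties the root-sum-squared strain upper limit h_rss measured at the detector to the energy radiated at a fiducial frequency f₀ (a standard relation shown here for illustration; the abstract itself does not spell it out):

$$E_{\mathrm{GW}} \simeq \frac{\pi^2 c^3}{G}\, D^2 f_0^2\, h_{\mathrm{rss}}^2 \quad\Longrightarrow\quad D \gtrsim \left(\frac{G\, E_{\mathrm{GW}}}{\pi^2 c^3 f_0^2\, h_{\mathrm{rss}}^2}\right)^{1/2}$$

With E_GW = 10⁻² M☉c² and f₀ = 150 Hz, each burst's h_rss upper limit converts directly into the quoted lower bound on the source distance D.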
Abstract:
Concept drift, which refers to learning problems that are non-stationary over time, is of increasing importance in machine learning and data mining. Many concept drift applications require fast response, which means an algorithm must always be (re)trained with the latest available data. But the process of data labeling is usually expensive and/or time-consuming compared to the acquisition of unlabeled data, so usually only a small fraction of the incoming data can be effectively labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them assume that the data are static. Semi-supervised learning under concept drift therefore remains an open and challenging task in machine learning. Recently, a particle competition and cooperation approach was developed to realize graph-based semi-supervised learning from static data. We have extended that approach to handle data streams and concept drift. The result is a passive algorithm that uses a single-classifier approach and adapts naturally to concept changes without any explicit drift detection mechanism. Its built-in mechanisms provide a natural way of learning from new data, gradually "forgetting" older knowledge as older data items become useless for classifying newer ones. The proposed algorithm is applied to the KDD Cup 1999 network intrusion data, showing its effectiveness.
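The particle competition and cooperation mechanism itself is beyond a short snippet, but the passive, forgetting-based adaptation the abstract describes can be sketched with a much simpler stand-in: a windowed nearest-neighbor classifier whose training set slides with the stream, so old items drop out automatically. This is illustrative only; class and parameter names are ours, and this is not the authors' algorithm.

```python
# Minimal sketch of passive adaptation on a stream: a windowed 1-NN
# classifier that "forgets" old items as the window slides. The paper's
# particle competition and cooperation mechanism is considerably richer.
from collections import deque
import numpy as np

class ForgettingStreamClassifier:
    def __init__(self, window=500):
        self.window = deque(maxlen=window)   # old items drop out automatically

    def observe(self, x, y=None):
        """Add a stream item; y may be None for unlabeled data (self-training)."""
        if y is None:
            y = self.predict(x)
        if y is not None:
            self.window.append((np.asarray(x, dtype=float), y))

    def predict(self, x):
        if not self.window:
            return None
        xs = np.array([item[0] for item in self.window])
        ys = [item[1] for item in self.window]
        d = np.linalg.norm(xs - np.asarray(x, dtype=float), axis=1)
        return ys[int(np.argmin(d))]         # label of nearest stored item
```

Because the model is rebuilt from whatever currently sits in the window, a change in the data distribution is absorbed passively as newer items displace older ones, with no explicit drift detector.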
Abstract:
Traditional supervised data classification considers only the physical features (e.g., distance or similarity) of the input data. Here, this type of learning is called low level classification. The human (animal) brain, on the other hand, performs both low and high orders of learning, and readily identifies patterns according to the semantic meaning of the input data. Data classification that considers not only physical attributes but also pattern formation is here referred to as high level classification. In this paper, we propose a hybrid classification technique that combines both types of learning. The low level term can be implemented by any classification technique, while the high level term is realized by extracting features of the underlying network constructed from the input data. Thus, the former classifies test instances by their physical features or class topologies, while the latter measures the compliance of test instances with the pattern formation of the data. Our study shows that the proposed technique not only realizes classification according to pattern formation, but can also improve the performance of traditional classification techniques. Furthermore, as the complexity of the class configuration increases, for example through greater mixing among classes, a larger portion of the high level term is required for correct classification. This confirms that high level classification is especially important in complex classification settings. Finally, we show how the proposed technique can be employed in a real-world application, where it is capable of identifying variations and distortions of handwritten digit images, improving the overall pattern recognition rate.
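A minimal sketch of the hybrid scheme may help fix ideas. Below, the low level term is an off-the-shelf kNN probability and the high level term measures how little a class's kNN network changes when the test instance joins it; the mixing weight lam and this particular conformity measure are illustrative choices of ours, not necessarily those of the paper.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import KNeighborsClassifier, kneighbors_graph

def high_level_score(X_class, x, k=3):
    """Structural conformity: how little the class network changes when x joins."""
    g_before = nx.from_scipy_sparse_array(kneighbors_graph(X_class, k))
    g_after = nx.from_scipy_sparse_array(kneighbors_graph(np.vstack([X_class, x]), k))
    delta = abs(nx.average_clustering(g_after) - nx.average_clustering(g_before))
    return 1.0 / (1.0 + delta)          # small structural change -> high conformity

def hybrid_predict(X, y, x, lam=0.3, k=3):
    X, y = np.asarray(X), np.asarray(y)
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    low = knn.predict_proba([x])[0]     # low level term: physical features
    high = np.array([high_level_score(X[y == c], x, k) for c in knn.classes_])
    high /= high.sum()                  # normalize to a distribution
    return knn.classes_[np.argmax((1 - lam) * low + lam * high)]
```

As in the abstract, increasing lam shifts weight toward pattern conformity, which matters most when classes are heavily mixed in feature space.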
Abstract:
We describe an approach to ion implantation in which the plasma and its electronics are held at ground potential and the ion beam is injected into a space held at high negative potential, allowing considerable savings both economically and technologically. We used an “inverted ion implanter” of this kind to carry out implantation of gold into alumina, with Au ion energy 40 keV and dose (3–9) × 10¹⁶ cm⁻². Resistivity was measured in situ as a function of dose and compared with predictions of a model based on percolation theory, in which electron transport in the composite is explained by conduction through a random resistor network formed by Au nanoparticles. Excellent agreement is found between the experimental results and the theory.
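The percolation-theory model referred to here typically rests on the standard conductivity scaling law (stated for context; the abstract does not reproduce it): above the percolation threshold, the composite's conductivity grows as a power law in the metal filling fraction,

$$\sigma(x) \propto (x - x_c)^{t}, \qquad x > x_c,$$

where x is the Au volume fraction set by the implanted dose, x_c is the threshold at which the nanoparticle resistor network first spans the sample, and t is a critical exponent (t ≈ 2 in three dimensions). Below x_c the measured resistivity is dominated by the insulating alumina matrix.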
Abstract:
The re-identification problem has commonly been tackled using appearance features based on salient points and color information. In this paper, we focus on the possibilities that simple geometric features obtained from depth images captured with RGB-D cameras may offer for the task, particularly when working under severe illumination conditions. The results achieved with different sets of simple geometric features extracted in a top-view setup suggest that they provide useful descriptors for the re-identification task, and they can be integrated into an ambient intelligence environment as part of a sensor network.
Abstract:
Multi-Processor SoC (MPSoC) design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently in the tens, and quickly approaching the hundreds, the question of how best to provide on-chip communication resources is keenly felt. The scaling down of process technologies has increased process and dynamic variations as well as transistor wearout. As a consequence, delay variations grow and impact the performance of MPSoCs. The interconnect architecture in MPSoCs becomes a single point of failure, as it connects all other components of the system together. A faulty processing element may be shut down entirely, but the interconnect architecture must be able to tolerate partial failure and variations and continue operating, at some cost in performance, power, or latency. This dissertation focuses on techniques at different levels of abstraction to address reliability and variability issues in on-chip interconnection networks. By presenting test results from a GALS NoC test chip, it motivates the need for techniques to detect and work around manufacturing faults and process variations in the MPSoC's interconnection infrastructure. As a physical design technique, we propose the bundle routing framework as an effective way to route the global links of Networks-on-Chip. At the architecture level, two cases are addressed: (i) intra-cluster communication, where we propose a low-latency interconnect with robustness to variability; and (ii) inter-cluster communication, where online functional testing with a reliable NoC configuration is proposed. We also propose dual Vdd as an orthogonal way of compensating variability at the post-fabrication stage. This is an alternative to the design-time techniques, since it enforces the compensation post silicon.
Abstract:
A central design challenge facing network planners is how to select a cost-effective network configuration that can provide uninterrupted service despite edge failures. In this paper, we study the Survivable Network Design (SND) problem, a core model underlying the design of such resilient networks that incorporates complex cost and connectivity trade-offs. Given an undirected graph with specified edge costs and (integer) connectivity requirements between pairs of nodes, the SND problem seeks the minimum cost set of edges that interconnects each node pair with at least as many edge-disjoint paths as the connectivity requirement of the nodes. We develop a hierarchical approach for solving the problem that integrates ideas from decomposition, tabu search, randomization, and optimization. The approach decomposes the SND problem into two subproblems, Backbone design and Access design, and uses an iterative multi-stage method for solving the SND problem in a hierarchical fashion. Since both subproblems are NP-hard, we develop effective optimization-based tabu search strategies that balance intensification and diversification to identify near-optimal solutions. To initiate this method, we develop two heuristic procedures that can yield good starting points. We test the combined approach on large-scale SND instances, and empirically assess the quality of the solutions vis-à-vis optimal values or lower bounds. On average, our hierarchical solution approach generates solutions within 2.7% of optimality even for very large problems (that cannot be solved using exact methods), and our results demonstrate that the performance of the method is robust for a variety of problems with different size and connectivity characteristics.
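The iterative multi-stage structure described above can be summarized in a short control-flow sketch. Every helper below is a hypothetical placeholder standing in for the paper's components (the starting heuristics, the two tabu search phases, randomized perturbation), not the authors' implementation.

```python
def hierarchical_snd(graph, requirements, max_stages=10):
    """Sketch of the decompose-and-iterate scheme for Survivable Network Design."""
    best, best_cost = None, float("inf")
    solution = initial_heuristic(graph, requirements)        # one of the two starting procedures
    for _ in range(max_stages):
        backbone = tabu_search_backbone(graph, requirements, solution)
        access = tabu_search_access(graph, requirements, backbone)
        candidate = merge(backbone, access)
        if is_feasible(candidate, requirements):             # enough edge-disjoint paths per pair?
            cost = total_cost(candidate)
            if cost < best_cost:
                best, best_cost = candidate, cost
        solution = perturb(candidate)                        # randomization: diversify next stage
    return best, best_cost
```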
Abstract:
As research on Wireless Sensor Networks (WSNs) has matured over the past five years, researchers at universities all over the world have set up WSN testbeds, in most cases to test and evaluate the real-world behavior of newly developed WSN protocol mechanisms. Although these testbeds differ heavily in the sensor node types employed and in their general architecture, they all have similar requirements with respect to management and scheduling functionality: like every shared resource, a testbed requires a notion of users, resource reservation features, support for reprogramming and reconfiguring the nodes, provisions to debug and remotely reset sensor nodes in case of node failures, as well as a solution for collecting and storing experimental data. The TARWIS management architecture presented in this paper aims to provide these functionalities independently of node type and node operating system. TARWIS has been designed as a re-usable management solution for research- and/or education-oriented WSN testbeds, relieving researchers who intend to deploy a testbed of the burden of implementing their own scheduling and testbed management solutions from scratch.
Abstract:
Tracking, or target localization, is used in a wide range of important tasks, from knowing when your flight will arrive to ensuring your mail is received on time. Tracking provides the location of resources, enabling solutions to complex logistical problems. Wireless Sensor Networks (WSNs) create new opportunities when applied to tracking, such as more flexible deployment and real-time information. When radar is used as the sensing element in a tracking WSN, better results can be obtained, because radar has a comparatively larger range, in both distance and angle, than other sensors commonly used in WSNs. This allows fewer deployed nodes to cover larger areas, saving money. In this report I implement a tracking WSN platform similar to the one developed by Lim, Wang, and Terzis. It consists of several sensor nodes, each with a radar, a sink node connected to a host PC, and a Matlab program to fuse the sensor data. I have re-implemented their experiment on my WSN platform, tracking a non-cooperative target to verify their results, and have also run simulations for comparison. The results of these tests are discussed and some future improvements are proposed.
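The sensor fusion step, in which range measurements from several radar nodes are combined into a target position, can be illustrated with a small stand-alone sketch (Python rather than the report's Matlab, and not its actual code):

```python
# Illustrative sketch: estimating a 2-D target position from range
# measurements taken by radar nodes at known positions, via nonlinear
# least squares (multilateration).
import numpy as np
from scipy.optimize import least_squares

def locate(node_positions, ranges, guess=(1.0, 1.0)):
    """node_positions: (n, 2) array; ranges: (n,) measured distances."""
    nodes = np.asarray(node_positions, dtype=float)
    ranges = np.asarray(ranges, dtype=float)

    def residuals(p):
        # difference between predicted and measured range at each node
        return np.linalg.norm(nodes - p, axis=1) - ranges

    return least_squares(residuals, np.asarray(guess, dtype=float)).x

# Example: three nodes, target near (4, 3)
nodes = [(0, 0), (10, 0), (0, 10)]
target = np.array([4.0, 3.0])
meas = [np.linalg.norm(np.array(n) - target) for n in nodes]
print(locate(nodes, meas))   # ~ [4. 3.]
```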
Abstract:
Central learning management platforms are now standard at many universities. For these platforms to be used sustainably, their assessment must take into account the diverse interests of instructors, students, central institutions, and university leadership. This applies both to the evaluation processes for introducing learning platforms and to the re-evaluation processes that are necessary to adapt a university's infrastructure to changing needs and conditions. At the University of Trier, (re-)evaluation procedures have been and are being carried out in which all stakeholders of the university are systematically involved. The basis for this is a network of all e-learning support and development units of the university, established as part of an e-learning integration project. As a case study, this article presents the concepts for the evaluation and re-evaluation processes at the University of Trier. The focus is less on the procedure itself in terms of criteria selection, assessment, and results, and more on the roles and tasks of the actors in these decision processes.
Abstract:
BACKGROUND Previous meta-analyses comparing the efficacy of psychotherapeutic interventions for depression were clouded by a limited number of within-study treatment comparisons. This study used network meta-analysis, a novel methodological approach that integrates direct and indirect evidence from randomised controlled studies, to re-examine the comparative efficacy of seven psychotherapeutic interventions for adult depression. METHODS AND FINDINGS We conducted systematic literature searches in PubMed, PsycINFO, and Embase up to November 2012, and identified additional studies through earlier meta-analyses and the references of included studies. We identified 198 studies, including 15,118 adult patients with depression, and coded moderator variables. Each of the seven psychotherapeutic interventions was superior to a waitlist control condition with moderate to large effects (range d = -0.62 to d = -0.92). Relative effects of different psychotherapeutic interventions on depressive symptoms were absent to small (range d = 0.01 to d = -0.30). Interpersonal therapy was significantly more effective than supportive therapy (d = -0.30, 95% credibility interval [CrI] [-0.54 to -0.05]). Moderator analysis showed that patient characteristics had no influence on treatment effects, but identified aspects of study quality and sample size as effect modifiers. Smaller effects were found in studies of at least moderate size (Δd = 0.29 [-0.01 to 0.58]; p = 0.063) and of large size (Δd = 0.33 [0.08 to 0.61]; p = 0.012), and in those with adequate outcome assessment (Δd = 0.38 [-0.06 to 0.87]; p = 0.100). Stepwise restriction of the analyses by sample size showed robust effects for cognitive-behavioural therapy, interpersonal therapy, and problem-solving therapy (all d > 0.46) compared to waitlist. Empirical evidence from large studies was unavailable or limited for the other psychotherapeutic interventions. CONCLUSIONS Overall, our results are consistent with the notion that different psychotherapeutic interventions for depression have comparable benefits. However, the robustness of the evidence varies considerably between treatments.
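The core mechanism of network meta-analysis mentioned above can be stated in one line (a standard textbook relation, not quoted from this abstract): when treatments A and C have each been compared with a common comparator B in randomised trials, an indirect estimate of their relative effect is

$$d_{AC}^{\mathrm{indirect}} = d_{AB} - d_{CB},$$

and the network model pools such indirect estimates with any available direct A-versus-C trials, subject to a consistency assumption.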
Abstract:
In this paper, we present statistical analyses of several types of traffic sources in a 3G network, namely voice, video, and data sources. For each traffic source type, measurements were collected in order to, on the one hand, gain a better understanding of the statistical characteristics of the sources and, on the other hand, enable forecasting of traffic behaviour in the network. The latter can be used to estimate service times and quality of service parameters. The probability density function, mean, variance, mean square deviation, skewness, and kurtosis of the interarrival times are estimated using the Wolfram Mathematica and Crystal Ball statistical tools. Based on the evaluation of packet interarrival times, we show how the gamma distribution can be used in network simulations and in the evaluation of available capacity in opportunistic systems. Our analyses yield shape and scale parameters for the gamma distribution. The data can also be applied in dynamic network configuration in order to avoid potential network congestion or overflows. Copyright © 2013 John Wiley & Sons, Ltd.
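The kind of fit described, estimating gamma shape and scale parameters from packet interarrival times, can be reproduced in a few lines (a sketch using synthetic data and SciPy in place of the paper's measured 3G traces and tools):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
interarrivals = rng.gamma(shape=0.8, scale=12.0, size=10_000)  # stand-in for measured traces

# Fit a gamma distribution; loc pinned at 0 since interarrival times are non-negative.
shape, loc, scale = stats.gamma.fit(interarrivals, floc=0)
print(f"shape={shape:.3f}, scale={scale:.3f}")   # should recover roughly 0.8 and 12

# The fitted distribution can then drive network simulations, e.g. sampling
# synthetic interarrival times for a simulator or a capacity estimate:
synthetic = stats.gamma.rvs(shape, loc=loc, scale=scale, size=100, random_state=1)
```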
Abstract:
The prenatal development of neural circuits must provide sufficient configuration to support at least a set of core postnatal behaviors. Although knowledge of various genetic and cellular aspects of development is accumulating rapidly, there is less systematic understanding of how these various processes play together to construct such functional networks. Here we take some steps toward such understanding by demonstrating through detailed simulations how a competitive-cooperative ('winner-take-all', WTA) network architecture can arise by development from a single precursor cell. This precursor is granted a simplified gene regulatory network that directs cell mitosis, differentiation, migration, neurite outgrowth and synaptogenesis. Once initial axonal connection patterns are established, their synaptic weights undergo homeostatic unsupervised learning that is shaped by wave-like input patterns. We demonstrate how this autonomous, genetically directed developmental sequence can give rise to self-calibrated WTA networks, and compare our simulation results with biological data.
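For readers unfamiliar with the target architecture, a soft winner-take-all network can be sketched in a few lines: recurrently self-excited units share a global inhibitory signal, so the unit with the strongest input suppresses the rest. The parameters here are illustrative; in the paper the equivalent circuit is grown and calibrated by the simulated developmental process rather than wired by hand.

```python
import numpy as np

def wta_step(x, inp, w_exc=1.2, w_inh=1.5, dt=0.1):
    """One Euler step of rectified rate dynamics with global inhibition."""
    drive = inp + w_exc * x - w_inh * x.sum()   # self-excitation minus shared inhibition
    return x + dt * (-x + np.maximum(drive, 0.0))

x = np.zeros(5)
inp = np.array([0.9, 1.0, 1.1, 0.8, 0.95])      # unit 2 receives the strongest input
for _ in range(200):
    x = wta_step(x, inp)
print(int(np.argmax(x)))                        # -> 2: the strongest input wins
```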