930 results for Irreducible polynomial
Abstract:
The aim of this work is to illuminate philosophically the meaning of Schauder (shudder). Using a single phenomenon that has not previously been treated in this way, the web of affect/emotion/feeling is critically reflected in ever-new approaches, drawing on Hegel, Heidegger, Husserl, Freud and Lacan, and deconstructed step by step with Derrida. In a textual procedure that is also graphically distinguished by juxtaposed columns, heterogeneous approaches to the meaning of Schauder are confronted with deconstructivist insights. The attempts to determine Schauder via datum, concept, phenomenon or structural element thereby interpenetrate with those that withdraw from any such determination (Hegel's negativity, Heidegger's withdrawal of Being, or Lacan's lack of the signifier). At the focus of Schauder, where the fictive character of a presence identical with itself shows itself with particular force, specific aporias of the metaphysics of presence are unfolded, and the closedness of logocentric systems is transformed into the movement of another opening and closing in the sense of writing, or of the general text. Besides différance, the withdrawal of metaphor, the supplement and the spectral, the work draws on iterability, in which the identity of meaning is founded and dispersed in the same move (dissemination). The chapter Piloerection demonstrates ambivalences and paradoxes of the shudder using computer-aided empirical-psychological studies. The chapter Atopology of the Shudder analyzes and deconstructs predicative, propositional and topological conditions of the meaning of Schauder; the following chapter, Etymon, does the same for etymological and technical-medial conditions.
In the concluding chapter, Measure, Presumption and Excess of the Sensation of the Shudder, the concrete contributions on the shudder by Aristotle, Kant, Fechner, Otto, Klages, Lorenz and Adorno are used to show (1) that a shudder cannot be measured from an outside; (2) that in the shudder the metaphysical opposition of fiction and reality is dispersed into an undecidability; (3) that, despite the heterogeneity of the approaches, these contributions express a complicity: a desire for presence which, through the exclusion of the Other, at the same time produces the violence of the One; (4) that the signifier Schauder, which functions even in the absence of a referent, of a determinate signified, of an actual intention of meaning, of a sender or a receiver, must be regarded as the altered remainder of a differential sign, as an effect of traces that occur only in their own erasure. The work closes with the proposal to think Spüren (sensing) beyond arché, telos or eschaton, beyond a phallogocentrism, in the Derridean trace. Not least through this grafting, which is not possible in this way in the French [trace], it connects, as a German-language contribution, to Derrida's work.
Abstract:
The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US postal service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
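A minimal sketch of the kind of comparison described above, using scikit-learn's bundled digits data as a stand-in for the USPS database. The k-means centers with a ridge read-out stand in for backpropagation-trained output weights; the data set, center count and kernel width are illustrative assumptions, not the paper's setup.

```python
# Sketch: classical RBF network (k-means centers) vs. SVM with Gaussian kernel.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Shared Gaussian width (the 'scale' heuristic); an assumption, not tuned.
gamma = 1.0 / (X_tr.shape[1] * X_tr.var())

# Classical RBF machine: k-means picks the centers, a linear classifier is
# trained on the Gaussian activations (ridge stands in for backpropagation).
centers = KMeans(n_clusters=50, n_init=10, random_state=0).fit(X_tr).cluster_centers_

def rbf_features(X):
    # Gaussian activation of each sample w.r.t. each center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rbf_net = RidgeClassifier().fit(rbf_features(X_tr), y_tr)

# SV machine with Gaussian kernel: centers (the support vectors), weights
# and threshold all come out of the margin optimization.
svm = SVC(kernel="rbf", gamma=gamma).fit(X_tr, y_tr)

acc_rbf = rbf_net.score(rbf_features(X_te), y_te)
acc_svm = svm.score(X_te, y_te)
print(f"classical RBF: {acc_rbf:.3f}  SVM: {acc_svm:.3f}")
```

On this small data set the SVM typically comes out ahead, consistent with the paper's finding on USPS, though the numbers here are not comparable to the published ones.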
Abstract:
Impressive claims have been made for the performance of the SNoW algorithm on face detection tasks by Yang et al. [7]. In particular, by looking at both their results and those of Heisele et al. [3], one could infer that the SNoW system performed substantially better than an SVM-based system, even when the SVM used a polynomial kernel and the SNoW system used a particularly simplistic 'primitive' linear representation. We evaluated the two approaches in a controlled experiment, looking directly at performance on a simple, fixed-sized test set, isolating out 'infrastructure' issues related to detecting faces at various scales in large images. We found that SNoW performed about as well as linear SVMs, and substantially worse than polynomial SVMs.
Abstract:
The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Labs. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Training an SVM is equivalent to solving a quadratic programming problem with linear and box constraints in a number of variables equal to the number of data points. When the number of data points exceeds a few thousand, the problem is very challenging, because the quadratic form is completely dense, so the memory needed to store the problem grows with the square of the number of data points. Therefore, training problems arising in some real applications with large data sets are impossible to load into memory, and cannot be solved using standard non-linear constrained optimization algorithms. We present a decomposition algorithm that can be used to train SVMs over large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterates and to establish the stopping criteria for the algorithm. We present previous approaches, as well as results and important details of our implementation of the algorithm, using a second-order variant of the Reduced Gradient Method as the solver of the sub-problems. As an application of SVMs, we present preliminary results obtained applying SVMs to the problem of detecting frontal human faces in real images.
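The decomposition idea, optimizing a small sub-problem while holding the remaining dual variables fixed, can be sketched on a toy problem. The sketch below drops the bias term so the equality constraint disappears and each sub-problem is one-dimensional (the working-set-of-one variant used by dual coordinate-descent solvers, not the paper's second-order Reduced Gradient solver); the data and parameters are illustrative assumptions.

```python
# Sketch: coordinate ascent on the (bias-free) SVM dual
#   max  sum(alpha) - 0.5 * sum_ij alpha_i alpha_j y_i y_j K_ij
#   s.t. 0 <= alpha_i <= C       (equality constraint dropped with the bias)
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-class data: two well-separated Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

C = 1.0
K = X @ X.T                          # dense linear kernel matrix
alpha = np.zeros(len(y))

for _ in range(50):                  # sweeps over the data
    for i in range(len(y)):          # one-variable sub-problem, rest fixed
        f_i = (alpha * y) @ K[:, i]  # current decision value at x_i
        g = 1.0 - y[i] * f_i         # dual gradient w.r.t. alpha_i
        # Closed-form solution of the 1-D sub-problem, clipped to the box.
        alpha[i] = np.clip(alpha[i] + g / K[i, i], 0.0, C)

w = (alpha * y) @ X                  # primal weights recovered from the dual
train_acc = np.mean(np.sign(X @ w) == y)
print(f"support vectors: {(alpha > 1e-8).sum()}, training accuracy: {train_acc:.2f}")
```

Each inner step solves its sub-problem exactly, so the dual objective never decreases; a real solver would also use the optimality conditions to select the working set and to decide when to stop, as the abstract describes.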
Abstract:
We consider the optimization problem of safety stock placement in a supply chain, as formulated in [1]. We prove that this problem is NP-Hard for supply chains modeled as general acyclic networks. Thus, we do not expect to find a polynomial-time algorithm for safety stock placement for a general-network supply chain.
Abstract:
In this article, a new technique for grooming low-speed traffic demands into high-speed optical routes is proposed. This enhancement allows a transparent wavelength-routing switch (WRS) to aggregate traffic en route over existing optical routes without incurring expensive optical-electrical-optical (OEO) conversions. This implies that: a) an optical route may be considered as having more than one ingress node (all inline), and b) traffic demands can partially use optical routes to reach their destination. The proposed optical routes are named "lighttours", since the traffic originating from different sources can be forwarded together in a single optical route, i.e., as taking a "tour" over different sources towards the same destination. The possibility of creating lighttours is the consequence of a novel WRS architecture proposed in this article, named "enhanced grooming" (G+). The ability to groom more traffic in the middle of a lighttour is achieved with the support of a simple optical device named the lambda-monitor (previously introduced in the RingO project). In this article, we present the new WRS architecture and its advantages. To compare the advantages of lighttours with respect to classical lightpaths, an integer linear programming (ILP) model is proposed for the well-known multilayer problem: traffic grooming, routing and wavelength assignment. The ILP model may be used for several objectives; however, this article focuses on two: maximizing the network throughput, and minimizing the number of OEO conversions used. Experiments show that G+ can route all the traffic using only half of the total OEO conversions needed by classical grooming. A heuristic is also proposed, aiming at achieving near-optimal results in polynomial time.
Abstract:
Most network operators have considered reducing Label Switched Router (LSR) label spaces (i.e., the number of labels that can be used) as a means of simplifying the management of underlying Virtual Private Networks (VPNs) and, hence, reducing operational expenditure (OPEX). This letter discusses the problem of reducing the label spaces in Multiprotocol Label Switching (MPLS) networks using label merging, better known as MultiPoint-to-Point (MP2P) connections. Because of their origins in IP, MP2P connections have been considered to have tree shapes with Label Switched Paths (LSPs) as branches. Due to this fact, previous works by many authors affirm that the problem of minimizing the label space using MP2P in MPLS - the Merging Problem - cannot be solved optimally with a polynomial algorithm (NP-complete), since it involves a hard decision problem. However, in this letter the Merging Problem is analyzed from the perspective of MPLS, and it is deduced that tree shapes in MP2P connections are irrelevant. By overriding this tree-shape consideration, it is possible to perform label merging in polynomial time. Based on how MPLS signaling works, this letter proposes an algorithm to compute the minimum number of labels using label merging: the Full Label Merging algorithm. In conclusion, we reclassify the Merging Problem as polynomial-solvable, instead of NP-complete. In addition, simulation experiments confirm that, without the tree-branch selection problem, more labels can be reduced.
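The intuition that merging is about sharing labels per destination rather than about tree shapes can be illustrated with a toy count. The topology, paths and per-link counting model below are illustrative assumptions, not the letter's Full Label Merging algorithm.

```python
# Sketch: label-space size without vs. with MP2P label merging.
# Without merging, every LSP consumes its own label on each link it
# traverses; with merging, all LSPs sharing a link and a destination
# can carry the same label.

# Each LSP is (destination, path as a list of nodes).
lsps = [
    ("D", ["A", "B", "C", "D"]),
    ("D", ["E", "B", "C", "D"]),   # joins the first LSP at B
    ("D", ["F", "C", "D"]),        # joins at C
    ("G", ["A", "B", "G"]),        # different destination: no sharing
]

def label_space_no_merging(lsps):
    # One label per LSP per traversed link.
    return sum(len(path) - 1 for _, path in lsps)

def label_space_with_merging(lsps):
    # One label per distinct (link, destination) pair: merged LSPs share.
    used = {((path[i], path[i + 1]), dst)
            for dst, path in lsps for i in range(len(path) - 1)}
    return len(used)

print(label_space_no_merging(lsps), label_space_with_merging(lsps))
```

Counting distinct (link, destination) pairs needs only a pass over the LSPs, which is the sense in which no hard tree-branch selection is involved once the tree-shape assumption is dropped.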
Abstract:
All-optical label swapping (AOLS) forms a key technology towards the implementation of all-optical packet switching (AOPS) nodes for the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e., the number of labels used), since a special optical device is needed for each recognized label on every node. Label space sizes are affected by the way in which demands are routed. For instance, while shortest-path routing leads to the usage of fewer labels but high link utilization, minimum interference routing leads to the opposite. This paper studies all-optical label stacking (AOLStack), an extension of the AOLS architecture. AOLStack aims at reducing label spaces while easing the compromise with link utilization. In this paper, an integer linear program is proposed with the objective of analyzing the softening of the aforementioned trade-off due to AOLStack. Furthermore, a heuristic aiming at finding good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either a) reduces the label spaces with a low increase in link utilization or, similarly, b) makes better use of the residual bandwidth to decrease the number of labels even further.
Abstract:
Exercises and solutions in LaTeX
Abstract:
Exam questions and solutions in PDF
Abstract:
Exam questions and solutions in LaTeX. Diagrams for the questions are all together in the support.zip file, as .eps files
Abstract:
Exercises and solutions in PDF
Abstract:
Exam questions and solutions in LaTeX. Diagrams for the questions are all together in the support.zip file, as .eps files
Abstract:
Exam questions and solutions in PDF