903 results for Linux kernel


Relevance:

10.00%

Publisher:

Abstract:

Software Defined Radio (SDR) hardware platforms use parallel architectures. Current approaches to developing applications (such as WLAN) for these platforms are complex because developers must describe an application in terms of hardware specifics relevant to parallelism, such as mapping and scheduling. To reduce this complexity, we have developed a new programming approach for SDR applications, called the Virtual Radio Engine (VRE). VRE defines a language for describing applications and a tool chain consisting of a compiler kernel and other tools (such as a code generator) that generate executables. The thesis presents this concept and describes the language and the compiler kernel developed by the author. The language is hardware-independent: developers describe tasks and the dependencies between them. The compiler kernel performs automatic parallelization, i.e., it transforms a hardware-independent program into a hardware-specific one by resolving the hardware specifics, in particular mapping, scheduling and synchronization. VRE thus simplifies programming, as developers no longer handle hardware specifics manually.
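
The abstract gives no concrete syntax for the VRE language, but the workflow it describes (declare tasks and their dependencies hardware-independently, let the compiler kernel derive mapping and scheduling) can be sketched generically. The following Python sketch is purely illustrative: all names are invented, and a greedy list scheduler stands in for whatever the actual compiler kernel does.

```python
# Hypothetical sketch of a task/dependency description plus automatic
# mapping -- illustrative only, not the actual VRE language or tool chain.
from collections import deque

tasks = {"src": 2, "fft": 5, "demod": 4, "sink": 1}   # task -> cost estimate
deps = [("src", "fft"), ("fft", "demod"), ("demod", "sink")]

def schedule(tasks, deps, n_pes=2):
    """Greedy list scheduling: process tasks in topological order and map
    each ready task to the processing element that becomes free earliest."""
    succs = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for a, b in deps:
        succs[a].append(b)
        indeg[b] += 1
    ready = deque(t for t in tasks if indeg[t] == 0)
    pe_free = [0.0] * n_pes          # time at which each PE becomes available
    finish = {}                      # task -> finish time
    placement = {}                   # task -> assigned PE
    while ready:
        t = ready.popleft()
        pe = min(range(n_pes), key=lambda i: pe_free[i])
        start = max([pe_free[pe]] + [finish[p] for p, s in deps if s == t])
        finish[t] = start + tasks[t]
        pe_free[pe] = finish[t]
        placement[t] = pe
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return placement, finish

print(schedule(tasks, deps))
```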

Relevance:

10.00%

Publisher:

Abstract:

The aim of this paper is the numerical treatment of a boundary value problem for the system of Stokes equations. To this end we extend the method of approximate approximations to boundary value problems. This method was introduced by V. Maz'ya in 1991 and has until now been used for the approximation of smooth functions defined on the whole space and for the approximation of volume potentials. In the present paper we develop an approximation procedure for the solution of the interior Dirichlet problem for the system of Stokes equations in two dimensions. The procedure is based on potential-theoretic considerations in connection with a boundary integral equation method and consists of three approximation steps. In the first step, the unknown source density in the potential representation of the solution is replaced by approximate approximations. In the second step, the decay behavior of the generating functions is used to obtain a suitable approximation of the potential kernel, and in the third step Nyström's method leads to a linear algebraic system for the approximate source density. A convergence analysis is established for every step, and corresponding error estimates are given.
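
As a generic illustration of the third step (not the paper's actual Stokes kernel, quadrature or error analysis), the following sketch applies Nyström's method with the trapezoidal rule to a second-kind integral equation on a periodic boundary; the kernel and right-hand side are placeholder choices.

```python
# Nystrom discretization of a second-kind integral equation
#   sigma(t) + int_0^{2pi} k(t, s) sigma(s) ds = f(t)
# using the trapezoidal rule (spectrally accurate on periodic domains).
# Kernel and right-hand side are illustrative placeholders, not the
# Stokes potential kernel treated in the paper.
import numpy as np

n = 64
t = 2 * np.pi * np.arange(n) / n          # quadrature nodes
w = 2 * np.pi / n                          # trapezoidal weight

k = lambda x, y: 0.1 * np.cos(x - y)       # placeholder smooth kernel
f = np.sin(t)                              # placeholder boundary data

# Linear system: (I + K W) sigma = f, with K[i, j] = k(t_i, t_j)
A = np.eye(n) + k(t[:, None], t[None, :]) * w
sigma = np.linalg.solve(A, f)              # approximate source density
print(sigma[:4])
```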

Relevance:

10.00%

Publisher:

Abstract:

The script, freely available here, together with the collection of exams and model solutions from the years 2004 to 2015, goes back to an e-learning course with ten lessons that Prof. Dr. Wegner originally wrote from 1985 onwards at the FH Fulda for Prof. Maurer's COSTOC project. The COSTOC courseware was intended for distribution via Bildschirmtext, a precursor of today's Internet, if you will. With essentially unchanged content, the course was ported to new platforms several times, most recently to the web with SVG for the animated graphics, and offered as an elective in the bachelor's programme at the University of Kassel. The script can thus be used alongside the still-available electronic lecture and gives a basic overview of Unix/Linux, covering the process concept, the file system, shell programming and the 50+ essential commands one usually has in mind when working with Unix.

Relevance:

10.00%

Publisher:

Abstract:

The script, freely available here, belongs to a lecture of the same name given by Prof. Dr. Lutz Wegner until the summer semester of 2007. Before that, until 1999, it ran under the somewhat misleading title "Selected Topics in Computer Networks" ("Ausgewählte Themen zu Rechnernetzen"). It covers IPC in UNIX-based networks: general knowledge of the process environment, the fork and exec system calls, lock files, signals, pipes, message queues, semaphores, shared memory, remote procedure calls, sockets and threads. Each concept is discussed with small examples written in C; the source code is available on our systems (for AIX, LINUX, Solaris). The lecture and script are based on John Shapley Gray's excellent book "Interprocess Communications in UNIX" from 1998 and the Linux-adapted edition of the same book, "Interprocess Communications in LINUX", from 2003.
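
The course examples are written in C; as a small taste of the pipe and fork concepts it covers, here is a minimal Python analogue using the standard os module (Unix only, and not one of the course's own examples).

```python
# Minimal parent/child pipe demo (Unix only) -- a Python analogue of the
# fork/pipe concepts the course teaches in C, not one of its examples.
import os

r, w = os.pipe()                 # create a unidirectional pipe
pid = os.fork()                  # duplicate the process

if pid == 0:                     # child: write a message and exit
    os.close(r)
    os.write(w, b"hello from the child\n")
    os.close(w)
    os._exit(0)
else:                            # parent: read the child's message
    os.close(w)
    with os.fdopen(r, "rb") as pipe_in:
        print(pipe_in.read().decode(), end="")
    os.waitpid(pid, 0)           # reap the child
```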

Relevance:

10.00%

Publisher:

Abstract:

Presentation at the 1997 Dagstuhl Seminar "Evaluation of Multimedia Information Retrieval", Norbert Fuhr, Keith van Rijsbergen, Alan F. Smeaton (eds.), Dagstuhl Seminar Report 175, 14.04. - 18.04.97 (9716). - Abstract: This presentation introduces ESCHER, a database editor which supports visualization in non-standard applications in engineering, science, tourism and the entertainment industry. It was originally based on the extended nested relational data model and is currently being extended to include object-relational properties such as inheritance, object types, integrity constraints and methods. It serves as a research platform for areas such as multimedia and visual information systems, QBE-like queries, computer-supported cooperative work (CSCW) and novel storage techniques. In its role as a visual information system, a database editor must support browsing and navigation. ESCHER provides this access to data by means of so-called fingers. They generalize the cursor paradigm of graphical and text editors. On the graphical display, a finger is reflected by a colored area corresponding to the object the finger is currently pointing at. In a table, more than one finger may point to objects; one of them is the active finger and is used for navigating through the table. The talk concentrates mostly on examples of this type of navigation and discusses some of the architectural needs for fast object traversal and display. ESCHER is available as public domain software from our ftp site in Kassel. The portable C source can easily be compiled for any machine running UNIX and OSF/Motif, in particular our working environments, IBM RS/6000 and Intel-based LINUX systems. A port to Tcl/Tk is under way.
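
The abstract describes fingers only informally. A purely hypothetical sketch in Python (not ESCHER's actual C implementation) might render the idea as follows: a finger is a persistent pointer into a nested table, several fingers may coexist, and the active one drives navigation.

```python
# Hypothetical sketch of the "finger" navigation idea: a finger is a
# persistent pointer into a nested table, generalizing an editor cursor.
# Illustrative Python only, not ESCHER's actual C implementation.

class Finger:
    def __init__(self, table, row=0, active=False):
        self.table = table           # the (nested) table being browsed
        self.row = row               # index of the object pointed at
        self.active = active         # the active finger drives navigation

    def target(self):
        return self.table[self.row]

    def move(self, delta):
        """Navigate to a neighbouring object, clamping at the table ends."""
        self.row = max(0, min(len(self.table) - 1, self.row + delta))

# A nested relation: each row may itself contain a subtable.
trips = [
    {"city": "Kassel", "hotels": ["Reiss", "Schweizer Hof"]},
    {"city": "Girona", "hotels": ["Ultonia"]},
]

outer = Finger(trips, active=True)        # active finger on the outer table
inner = Finger(outer.target()["hotels"])  # second finger on a subtable
outer.move(+1)                            # navigate; a display would highlight
print(outer.target()["city"], inner.target())
```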

Relevance:

10.00%

Publisher:

Abstract:

The Support Vector (SV) machine is a novel type of learning machine, based on statistical learning theory, which contains polynomial classifiers, neural networks, and radial basis function (RBF) networks as special cases. In the RBF case, the SV algorithm automatically determines centers, weights and threshold so as to minimize an upper bound on the expected test error. The present study is devoted to an experimental comparison of these machines with a classical approach, where the centers are determined by k-means clustering and the weights are found using error backpropagation. We consider three machines, namely a classical RBF machine, an SV machine with Gaussian kernel, and a hybrid system with the centers determined by the SV method and the weights trained by error backpropagation. Our results show that on the US postal service database of handwritten digits, the SV machine achieves the highest test accuracy, followed by the hybrid approach. The SV approach is thus not only theoretically well-founded, but also superior in a practical application.
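
The comparison can be reproduced in spirit with off-the-shelf tools. The sketch below is only an approximation of the study's setup: it uses scikit-learn's bundled digits data as a stand-in for the US postal service database, and the classical RBF network fits its output weights by least squares rather than error backpropagation.

```python
# Sketch of the comparison on a stand-in dataset (scikit-learn's digits,
# since the USPS set is not bundled). The "classical" RBF network uses
# k-means centers and, for brevity, least-squares output weights instead
# of the paper's error backpropagation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# --- SV machine with Gaussian (RBF) kernel: centers chosen automatically
svm = SVC(kernel="rbf", gamma="scale").fit(Xtr, ytr)
print("SVM accuracy:", svm.score(Xte, yte))

# --- classical RBF network: k-means centers, linear readout
centers = KMeans(n_clusters=60, n_init=10,
                 random_state=0).fit(Xtr).cluster_centers_

def phi(X, gamma=0.001):                             # illustrative width
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)                       # Gaussian activations

Y = np.eye(10)[ytr]                                  # one-hot targets
W, *_ = np.linalg.lstsq(phi(Xtr), Y, rcond=None)     # least-squares weights
acc = ((phi(Xte) @ W).argmax(1) == yte).mean()
print("RBF-net accuracy:", acc)
```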

Relevance:

10.00%

Publisher:

Abstract:

Impressive claims have been made for the performance of the SNoW algorithm on face detection tasks by Yang et al. [7]. In particular, comparing their results with those of Heisele et al. [3], one could infer that the SNoW system performed substantially better than an SVM-based system, even when the SVM used a polynomial kernel and the SNoW system used a particularly simplistic 'primitive' linear representation. We evaluated the two approaches in a controlled experiment, looking directly at performance on a simple, fixed-size test set and factoring out 'infrastructure' issues related to detecting faces at various scales in large images. We found that SNoW performed about as well as linear SVMs, and substantially worse than polynomial SVMs.

Relevance:

10.00%

Publisher:

Abstract:

We derive a new representation of a function as a linear combination of local correlation kernels at optimal sparse locations and discuss its relation to PCA, regularization, sparsity principles and Support Vector Machines. We first review previous results on the approximation of a function from discrete data (Girosi, 1998) in the context of Vapnik's feature space and dual representation (Vapnik, 1995). We apply them to show 1) that a standard regularization functional with a stabilizer defined in terms of the correlation function induces a regression function in the span of the feature space of classical Principal Components, and 2) that there exists a dual representation of the regression function in terms of a regularization network with a kernel equal to a generalized correlation function. We then describe the main observation of the paper: the dual representation in terms of the correlation function can be sparsified using the Support Vector Machine technique (Vapnik, 1982), and this operation is equivalent to sparsifying a large dictionary of basis functions adapted to the task using a variation of Basis Pursuit De-Noising (Chen, Donoho and Saunders, 1995; see also related work by Donahue and Geiger, 1994; Olshausen and Field, 1995; Lewicki and Sejnowski, 1998). In addition to extending the close relations between regularization, Support Vector Machines and sparsity, our work also illuminates and formalizes the LFA concept of Penev and Atick (1996). We discuss the relation between our results, which concern regression, and the different problem of pattern classification.
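
The objects behind this construction can be written down explicitly. In the standard regularization-network form (notation chosen here for illustration, not copied from the paper):

```latex
% Standard regularization-network form; notation is illustrative.
% Regularized empirical risk over functions f with stabilizer \|f\|_K^2:
\min_{f}\; \frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - f(\mathbf{x}_i)\bigr)^2
          + \lambda\,\|f\|_K^2
% Its minimizer admits the dual (kernel) representation, with the
% generalized correlation function K as the kernel:
f(\mathbf{x}) \;=\; \sum_{i=1}^{N} c_i\, K(\mathbf{x}, \mathbf{x}_i)
```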

Relevance:

10.00%

Publisher:

Abstract:

In the first part of this paper we show a similarity between the principle of Structural Risk Minimization (SRM) (Vapnik, 1982) and the idea of Sparse Approximation, as defined by Chen, Donoho and Saunders (1995) and Olshausen and Field (1996). We then focus on two specific (approximate) implementations of SRM and Sparse Approximation that have been used to solve the problem of function approximation. For SRM we consider the Support Vector Machine technique proposed by V. Vapnik and his team at AT&T Bell Labs, and for Sparse Approximation we consider a modification of the Basis Pursuit De-Noising algorithm proposed by Chen, Donoho and Saunders (1995). We show that, under certain conditions, these two techniques are equivalent: they give the same solution and require the solution of the same quadratic programming problem.
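
In standard form (symbols chosen here for illustration, following the usual presentations of the two methods), the two programs being related are:

```latex
% Basis Pursuit De-Noising: sparse coefficients a over a dictionary \Phi
\min_{\mathbf{a}}\; \|\mathbf{y} - \Phi\mathbf{a}\|_2^2
          + \lambda\,\|\mathbf{a}\|_1
% SVM function approximation with the \varepsilon-insensitive loss
\min_{f}\; \frac{1}{2}\,\|f\|_K^2
          + C \sum_{i=1}^{N} \bigl|\,y_i - f(\mathbf{x}_i)\bigr|_{\varepsilon}
% Under the conditions stated in the paper, both lead to the same
% quadratic program and the same solution.
```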

Relevance:

10.00%

Publisher:

Abstract:

To analyse the software training received by the new-technologies coordinators in the public non-university schools of Asturias, in terms of the free/proprietary dichotomy. To determine the type of software these coordinators use in different contexts: the classroom, teaching administration and personal use. To assess their opinion on which software is most suitable for the education system. To identify the reasons given for choosing one type of software or the other. To explain, where applicable, the differences found with respect to the identification variables (sex, age, years of experience as a new-technologies coordinator, etc.). To explore the outlines of a free-software training process with education professionals in order to identify the difficulties and opportunities of training in this field. The work is structured in two parts: first, a theoretical foundation comprising two chapters that review the theoretical elements of the research, and second, the field study, which clarifies all the technical aspects of the research process, its objectives, its conclusions and the proposed lines of action. The theoretical part addresses the relations between technological and social development and the way this relation shapes the discourse on including the new information and communication technologies in schools. It reviews the relation between ICT and the education system, concluding with an outline of the existing curricular rationales and the place that media in general, and software in particular, occupy within them. It analyses the concepts of digital literacy, free software and operating system, explains the concept of open source and the Linux system, and finally surveys the arguments against free software, attempting to debunk those that spread myths or falsehoods. The reasons justifying the spread of free software in the education system are then set out. After the theoretical foundation, the field-study section describes the study, its methodology and the conclusions of the research. A quantitative methodology was used in the first part of the research and qualitative methodologies for the free-software training experience. The research technique was the survey, carried out by questionnaire. The sample comprised 307 subjects, of whom only 38.8 per cent answered the questionnaire. The typical profile of the sampled subjects is a 47-year-old man with 22 years of experience in education and 4 as a new-technologies coordinator. The study concludes that although most of the respondents state that free software is what should be used in the education system, its use in Asturian schools is far from a reality, both in the classroom and elsewhere; proprietary software occupies the leading position, favoured by the administration's own policies. Finally, a proposal for action is made, aimed at drawing up a plan for implementing free software in the education system, with short-, medium- and long-term procedures outlined.

Relevance:

10.00%

Publisher:

Abstract:

A problem in the archaeometric classification of Catalan Renaissance pottery is the fact that the clay supply of the pottery workshops was centrally organized by guilds, so that usually all potters of a single production centre produced chemically similar ceramics. However, when the glazes of the ware are analysed, a large number of inclusions is usually found, and these reveal technological differences between individual workshops. The potters used these inclusions to opacify the transparent glaze and to achieve a white background for further decoration. In order to distinguish the different technological preparation procedures of the individual workshops, the chemical composition of these inclusions, as well as their size in the two-dimensional cut, is recorded with a scanning electron microscope. Based on the latter, a frequency distribution of the apparent diameters is estimated for each sample and type of inclusion. Following an approach by S.D. Wicksell (1925), it is in principle possible to transform the distributions of the apparent 2D diameters back to those of the true three-dimensional bodies. The applicability of this approach and its practical problems are examined using different ways of kernel density estimation and Monte Carlo tests of the methodology. Finally, it is tested to what extent the obtained frequency distributions can be used to classify the pottery.
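
As a generic illustration of the kernel density estimation step (synthetic diameters generated from the classical sphere-section model, not the actual SEM measurements):

```python
# Kernel density estimate of apparent (2D-cut) inclusion diameters.
# The diameters here are synthetic placeholders, not the paper's SEM data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
true_radius = 5.0                                  # hypothetical sphere radius
# For equal spheres cut by a random plane at uniform offset u in [0, 1),
# the apparent radius of the circular section is R * sqrt(1 - u^2).
u = rng.uniform(0, 1, size=500)
apparent_d = 2 * true_radius * np.sqrt(1 - u**2)   # apparent diameters

kde = gaussian_kde(apparent_d)                     # Gaussian kernel, Scott's rule
grid = np.linspace(0, 2 * true_radius, 200)
density = kde(grid)                                # estimated frequency density
print("mode near:", grid[density.argmax()])
```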

Relevance:

10.00%

Publisher:

Abstract:

The aim of this work is to design a general model of an IP telephony system for a small or medium-sized company. The model must take into account the current characteristics of the company's network and propose a suitable solution. A further requirement is the use of freely distributed software, from the operating system up to the VoIP components, and specifically the Asterisk VoIP PBX software on GNU/Linux. First, the basic concepts of IP telephony (protocols, codecs, servers, etc.) are studied. Second, the different possible scenarios are analysed and appropriate solutions are proposed for each of them. The operation of Asterisk PBXs and their configuration in each scenario are then studied. Finally, this study is applied to a specific company.

Relevance:

10.00%

Publisher:

Abstract:

This work describes the solution devised for the deployment of a Geographic Information System to serve the University Institute for Water and the Environment of the University of Murcia and the Euro-Mediterranean Water Institute. Given the nature of both institutions, the tool is oriented primarily towards the study of water resources and hydrological processes. The process began with an identification of the users' needs (users with different profiles and requirements) and the subsequent development of a conceptual design able to satisfy those needs. Because the user requirements demanded it, both users working in a Linux environment and those working under Windows have been taken into account. A system based on free software was chosen: GRASS for raster data handling and modelling; PostGIS (on PostgreSQL) and GRASS for vector data management; and QGIS, gvSIG and Kosmo as graphical user interfaces. Other programs used for specific purposes were R, MapServer and GMT.
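
As a sketch of how such a shared vector store might be queried from either platform, the snippet below uses psycopg2 and the PostGIS functions ST_Area and ST_Intersects, which are real; the database, table and column names are invented for illustration.

```python
# Hypothetical query against the shared PostGIS store -- the database,
# table and column names are invented; psycopg2 and the PostGIS functions
# (ST_Area, ST_Intersects) are real.
import psycopg2

conn = psycopg2.connect(dbname="hydro_gis", user="gis", host="localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT b.name, ST_Area(b.geom) / 1e6 AS area_km2
        FROM   basins AS b
        JOIN   aquifers AS a ON ST_Intersects(b.geom, a.geom)
        WHERE  a.status = %s
        ORDER  BY area_km2 DESC;
        """,
        ("overexploited",),
    )
    for name, area in cur.fetchall():
        print(f"{name}: {area:.1f} km2")
conn.close()
```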

Relevance:

10.00%

Publisher:

Abstract:

In this paper a colour texture segmentation method which unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of a region is modelled by combining non-parametric kernel density estimation (which captures the colour behaviour) with classical co-occurrence-matrix-based texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. Regions concurrently compete for the image pixels in order to segment the whole image, taking both information sources into account. Experimental results are presented which demonstrate the performance of the proposed method.
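
As a generic sketch of the two kinds of region descriptors combined here (synthetic image patch and illustrative parameters, not the paper's implementation):

```python
# Sketch of the two region descriptors the method combines: a kernel
# density estimate of colour and co-occurrence-based texture features.
# Synthetic image and parameters are illustrative, not the paper's setup.
import numpy as np
from scipy.stats import gaussian_kde
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
region = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # RGB patch

# Colour model: non-parametric KDE over the region's RGB samples
rgb = region.reshape(-1, 3).T.astype(float)          # shape (3, n_pixels)
colour_kde = gaussian_kde(rgb)                        # p(colour | region)
print("density at mid-grey:", colour_kde([[128], [128], [128]])[0])

# Texture model: classical grey-level co-occurrence features
grey = region.mean(axis=2).astype(np.uint8)
glcm = graycomatrix(grey, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
print("contrast:", graycoprops(glcm, "contrast")[0, 0])
print("homogeneity:", graycoprops(glcm, "homogeneity")[0, 0])
```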

Relevance:

10.00%

Publisher:

Abstract:

A pioneering team of students at the University of Girona decided to design and develop an autonomous underwater vehicle (AUV) called ICTINEU-AUV to compete in the Student Autonomous Underwater Challenge-Europe (SAUC-E). The prototype evolved from the initial computer-aided design (CAD) model into an operative AUV in the short period of seven months. Open-frame and modular design principles, together with compatibility with other robots previously developed at the lab, provided the main design philosophy. At the robot's core, two networked computers give access to a wide set of sensors and actuators. The Gentoo/Linux distribution was chosen as the onboard operating system. A software architecture based on a set of distributed objects with soft real-time capabilities was developed, and a hybrid control architecture including mission control, a behavioural layer and a robust map-based localization algorithm made ICTINEU-AUV the winning entry.