903 results for Linux kernel


Relevance:

20.00%

Publisher:

Abstract:

Creation of a network structure that allows traffic to be redirected between the WLAN networks and towards the Internet from a central point of an educational institution, so that configuration changes can be made dynamically from a single machine.

Relevance:

20.00%

Publisher:

Abstract:

A presentation of the steps to follow to build a GNU/Linux system aimed at secondary school classrooms, taking Linux From Scratch version 7.5 as a base.

Relevance:

20.00%

Publisher:

Abstract:

This project aims to study and analyse three Linux distributions specialised in network security. The study describes the technical characteristics of each distribution as well as a classification of its main tools.

Relevance:

20.00%

Publisher:

Abstract:

The goal of this work was to investigate the possibilities of using a Linux environment for the real-time simulation of mechatronic machines. The work studied the solving of a real-time simulation model written in C on the Linux operating system using the RTLinux real-time extension. Real-time simulation with RTLinux was efficient and, within the limits of the modelling methods, accurate. Adding I/O functionality by means of separate I/O cards was not examined in this work. Real-time Linux has not previously been used for simulating mechatronic machines, and therefore no ready-made tools exist. As a consequence, the Linux environment is not very well suited to general machine design, because modelling is laborious.

Relevance:

20.00%

Publisher:

Abstract:

We prove upper pointwise estimates for the Bergman kernel of the weighted Fock space of entire functions in $L^{2}(e^{-2\phi}) $ where $\phi$ is a subharmonic function with $\Delta\phi$ a doubling measure. We derive estimates for the canonical solution operator to the inhomogeneous Cauchy-Riemann equation and we characterize the compactness of this operator in terms of $\Delta\phi$.
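For context, a brief reminder of the objects involved, stated in a generic form whose normalizations may differ from the paper's: the weighted Fock space, its reproducing (Bergman) kernel, and the canonical solution operator for the inhomogeneous Cauchy-Riemann equation.

```latex
% Generic definitions; normalizations may differ from the paper's.
% Weighted Fock space of entire functions:
\[
  \mathcal{F}_{\phi} = \Big\{ f \text{ entire} : \ \|f\|_{\phi}^{2}
    = \int_{\mathbb{C}} |f(z)|^{2} e^{-2\phi(z)} \, dm(z) < \infty \Big\}.
\]
% The Bergman kernel K(z,w) reproduces the values of f in this space:
\[
  f(z) = \int_{\mathbb{C}} f(w)\, K(z,w)\, e^{-2\phi(w)} \, dm(w),
  \qquad f \in \mathcal{F}_{\phi}.
\]
% The canonical solution operator assigns to a datum g the solution u of
% \bar\partial u = g that is orthogonal to \mathcal{F}_{\phi},
% i.e. the solution of minimal norm in L^{2}(e^{-2\phi}).
```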

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this work is to study how attacks based on stack overwriting operate and to demonstrate experimentally that current protection techniques are insufficient. The study is carried out by testing how selected security products behave in different test situations. The tested products are Openwall, PaX, Libsafe 2.0 and Immunix 6.2. The tests are performed mainly in a RedHat 7.0 environment using a test program, and they measure both the products' ability to detect attacks and their impact on performance. The operating principles of different types of attacks, and of the methods developed against them, are also presented in detail and illustrated with simplified examples. The techniques presented include buffer overflows, format string vulnerabilities, unterminated strings and array overflows. The tests show that the selected products do not prevent all attacks, so the work concludes by considering how to minimise the damage caused by successful attacks.

Relevance:

20.00%

Publisher:

Abstract:

Background: Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Kernel-based methods are among the most powerful approaches for integrating heterogeneous data types. Kernel-based data integration consists of two basic steps: first, a suitable kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task.

Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables belonging to each dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify the samples with higher or lower values of the variables analyzed.

Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
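As a rough illustration of the two-step scheme described above (not the authors' implementation), the following Python sketch combines RBF kernels computed on two synthetic data blocks by simple summation and projects the samples with kernel PCA; the array names X_expr and X_clin are made-up placeholders.

```python
# Illustrative sketch: combine kernels from two data sources by summation
# and apply kernel PCA for dimensionality reduction.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X_expr = rng.normal(size=(100, 500))   # hypothetical expression-like data
X_clin = rng.normal(size=(100, 20))    # hypothetical clinical-like data

# Step 1: choose a kernel for each data set.
K1 = rbf_kernel(X_expr, gamma=1.0 / X_expr.shape[1])
K2 = rbf_kernel(X_clin, gamma=1.0 / X_clin.shape[1])

# Step 2: combine the kernels (here an unweighted sum) and project with kernel PCA.
K = K1 + K2
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
print(embedding.shape)  # (100, 2)
```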

Relevance:

20.00%

Publisher:

Abstract:

Let $Q$ be a suitable real function on $C$. An $n$-Fekete set corresponding to $Q$ is a subset $\{z_{n1},\dots,z_{nn}\}$ of $C$ which maximizes the expression $\prod_{i<j}^{n} |z_{ni}-z_{nj}|^{2}\, e^{-n\left(Q(z_{ni})+Q(z_{nj})\right)}$.

Relevance:

20.00%

Publisher:

Abstract:

Recent advances in machine learning increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious for human experts to program. The tasks for which such tools are needed arise in many areas, especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question; however, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task at hand in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented.

In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take account of the positional information and the mutual similarities of words, and show that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display.

We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, we propose an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used efficiently in learning algorithms.
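As an illustration of the general idea of task-specific kernels combined with regularized least-squares training (not the thesis's actual kernels or algorithms), the following Python sketch defines a toy "positional" kernel over fixed-length token windows and fits a regularized least-squares model in closed form; all names and data are invented for the example.

```python
# Toy positional kernel plus closed-form regularized least-squares (RLS).
import numpy as np

def positional_kernel(a, b, decay=0.5):
    """Compare two equal-length token windows; matches near the centre
    (the word to disambiguate) count more than matches far from it."""
    centre = len(a) // 2
    weights = np.array([decay ** abs(i - centre) for i in range(len(a))])
    matches = np.array([1.0 if x == y else 0.0 for x, y in zip(a, b)])
    return float(np.dot(weights, matches))

def rls_fit(K, y, lam=1.0):
    """Closed-form RLS: alpha = (K + lam * I)^{-1} y."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

windows = [["the", "old", "bank", "of", "the"],
           ["the", "big", "bank", "raised", "rates"],
           ["a", "muddy", "bank", "by", "the"]]
y = np.array([0.0, 1.0, 0.0])            # toy sense labels (river vs. finance)
K = np.array([[positional_kernel(a, b) for b in windows] for a in windows])
alpha = rls_fit(K, y, lam=0.1)
print(K @ alpha)                          # fitted scores on the training windows
```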

Relevance:

20.00%

Publisher:

Abstract:

Given the current strong interest in installing clusters dedicated to data processing with Hadoop, a Linux distribution has been designed that automates all the associated tasks. This distribution allows deployment onto a cluster and a basic configuration of it in the most unattended way possible.

Relevance:

20.00%

Publisher:

Abstract:

We propose a new kernel estimator of the cumulative distribution function based on transformation and bias-reducing techniques. We derive the optimal bandwidth that minimises the asymptotic integrated mean squared error. The simulation results show that our proposed kernel estimator outperforms alternative approaches when the variable has a heavy-tailed extreme value distribution and the sample size is small.
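For reference, a plain kernel estimator of the distribution function, obtained by integrating a Gaussian kernel; the transformation, the bias-reduction step and the optimal bandwidth derived in the paper are not reproduced here, and the rule-of-thumb bandwidth below is only a placeholder.

```python
# Plain kernel estimator of the CDF: F_hat(x) = (1/n) * sum_i Phi((x - X_i) / h).
import numpy as np
from scipy.stats import norm

def kernel_cdf(x_eval, sample, bandwidth=None):
    sample = np.asarray(sample, dtype=float)
    if bandwidth is None:
        # Simple rule-of-thumb bandwidth, used only as a placeholder.
        bandwidth = 1.06 * sample.std(ddof=1) * sample.size ** (-1 / 5)
    z = (np.asarray(x_eval)[:, None] - sample[None, :]) / bandwidth
    return norm.cdf(z).mean(axis=1)

data = np.random.default_rng(1).standard_normal(50)
grid = np.linspace(-3, 3, 7)
print(kernel_cdf(grid, data))
```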

Relevance:

20.00%

Publisher:

Abstract:

The learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from a vast amount of user-generated feedback.

In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.

Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
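The reduction of preference learning to regression mentioned above can be sketched as follows; this is only the generic pairwise reduction, fitted with plain ridge regression on feature differences, not the specific regularized least-squares algorithms or kernels proposed in the thesis.

```python
# Pairwise reduction: a preference "i is ranked above j" becomes a training
# example on the feature difference X[i] - X[j], fitted with ridge regression.
import numpy as np

rng = np.random.default_rng(2)
w_true = rng.normal(size=5)
X = rng.normal(size=(40, 5))
scores = X @ w_true

# Build pairwise training data: label +1 if item i is preferred over item j.
pairs = [(i, j) for i in range(len(X)) for j in range(len(X)) if i != j]
D = np.array([X[i] - X[j] for i, j in pairs])
y = np.array([1.0 if scores[i] > scores[j] else -1.0 for i, j in pairs])

lam = 1.0
w_hat = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

# Rank new items by their learned scores.
ranking = np.argsort(-(X @ w_hat))
print(ranking[:5])
```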

Relevance:

20.00%

Publisher:

Abstract:

Software integration is the stage in a software development process in which separate components are assembled into a single product. It is important to manage the risks involved and to be able to integrate smoothly, because software cannot be released without integrating it first. Furthermore, it has been shown that the integration and testing phase can make up 40% of the overall project costs. These issues can be mitigated by using a software engineering practice called continuous integration. This thesis presents how continuous integration was introduced into the author's employer organisation. This includes studying how the continuous integration process works and creating the technical basis for using the process on future projects. The implemented system supports software written in the C and C++ programming languages on the Linux platform, but the general concepts can be applied to any programming language and platform by selecting the appropriate tools. The results demonstrate in detail which issues need to be solved when the process is adopted in a corporate environment. Additionally, they provide an implementation and a process description suitable for the organisation. The results show that continuous integration can reduce the risks involved in a software process and also increase the quality of the product.
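As a rough sketch of what a minimal continuous-integration step runner can look like (not the system described in the thesis), the following Python script runs a build and a test stage for a C/C++ project and stops at the first failing step; the make targets are hypothetical placeholders.

```python
# Minimal CI step runner: execute build and test stages, fail fast on error.
import subprocess
import sys

STEPS = [
    ["make", "clean"],
    ["make", "all"],    # compile the C/C++ sources (placeholder target)
    ["make", "test"],   # run the automated test suite (placeholder target)
]

def run_pipeline(steps):
    for cmd in steps:
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print("step failed, stopping the build")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline(STEPS) else 1)
```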

Relevance:

20.00%

Publisher:

Abstract:

Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have received the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of this setting, we can recover the bipartite ranking problem, which corresponds to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently.

The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using this approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis.

The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results; Part II consists of the five original research articles that are the main contribution of this thesis.
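To make the leave-pair-out idea concrete, the following Python sketch estimates AUC by holding out each positive-negative pair in turn, training on the remaining data and checking whether the positive example receives the higher score; a plain ridge regressor stands in for the kernel methods developed in the thesis, and the data are synthetic.

```python
# Leave-pair-out cross-validation estimate of AUC with a simple ridge model.
import numpy as np

def ridge_fit(X, y, lam=1.0):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def leave_pair_out_auc(X, y, lam=1.0):
    pos = np.flatnonzero(y == 1)
    neg = np.flatnonzero(y == 0)
    correct, ties, total = 0.0, 0.0, 0
    for i in pos:
        for j in neg:
            keep = np.ones(len(y), dtype=bool)
            keep[[i, j]] = False                 # hold out the pair (i, j)
            w = ridge_fit(X[keep], y[keep].astype(float), lam)
            si, sj = X[i] @ w, X[j] @ w
            correct += si > sj                   # positive ranked above negative
            ties += si == sj
            total += 1
    return (correct + 0.5 * ties) / total

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=30) > 0).astype(int)
print(leave_pair_out_auc(X, y))
```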