937 results for Rough Kernels
Abstract:
Background: Combining different sources of information to improve the available biological knowledge is a current challenge in bioinformatics. One of the most powerful approaches for integrating heterogeneous data types is the family of kernel-based methods. Kernel-based data integration consists of two basic steps: first, a suitable kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of dimensionality reduction. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables belonging to each dataset. In particular, for each input variable or linear combination of input variables, we can represent the local direction of maximum growth, which allows us to identify the samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give a better understanding of the biological knowledge.
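As an illustration of the two-step scheme described in this abstract, the following sketch combines per-source kernels by a simple sum and applies kernel PCA to the result; the data, kernel choices and (unweighted) combination are illustrative assumptions, not the authors' pipeline.

```python
# Sketch of kernel-based data integration followed by kernel PCA.
# Assumptions: scikit-learn is available, two toy data sources X1 and X2
# describe the same samples, and the per-source kernels are combined by
# a simple unweighted sum -- weights and kernel choices are illustrative.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X1 = rng.normal(size=(50, 20))   # e.g. expression data for 50 samples
X2 = rng.normal(size=(50, 5))    # e.g. clinical variables for the same samples

# Step 1: choose a kernel for each data source.
K1 = rbf_kernel(X1, gamma=0.05)
K2 = linear_kernel(X2)

# Step 2: combine the kernels into one representation of all available data.
K = K1 + K2

# Kernel PCA on the combined (precomputed) kernel for dimensionality reduction.
kpca = KernelPCA(n_components=2, kernel="precomputed")
Z = kpca.fit_transform(K)
print(Z.shape)  # (50, 2): samples projected onto the first two kernel PCs
```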
Abstract:
This paper analyzes applications of cumulant analysis in speech processing. A special focus is placed on various second-order statistics. A dominant role is played by an integral representation of cumulants in terms of integrals involving cyclic products of kernels.
Abstract:
An Enantiornithes specimen from El Montsec was initially described as an immature individual on the basis of qualitative traits such as its relatively large orbit and the overall proportions of the skull and postcranium. In this study we re-evaluate the ontogenetic stage of this individual by cross-checking taphonomic, anatomical, and morphometric data. The exceptional preservation of the specimen makes it possible to weigh ontogenetic influence against preservational bias in features such as the external texture of the bone surfaces, instead of attributing these a priori to taphonomic alteration alone. The rough texture of the periosteal bone, together with pores in the distal, proximal and mid-shaft regions of the humerus, indicates a subadult stage when compared with the long bones of modern birds. Forelimb proportions of embryonic and juvenile Enantiornithes are equivalent to those of adult individuals of other taxa within this clade, although this is not a reliable criterion for establishing a precise ontogenetic stage. The El Montsec specimen may be considered close to adulthood, but only if growth regimes in Enantiornithes are assumed to be equivalent to those of neornithine birds.
Abstract:
The primary aim of this study was to examine how trust is built in a virtual team. Central to the examination were identifying the sources of trust, the building of the relationship, and technology-mediated communication. Practical means and applications were also sought. In this study, trust was seen as an important enabler of cooperation and as a central element in the building of interpersonal relationships. The study was an empirical and descriptive case study. Qualitative data were collected mainly by means of a web-based questionnaire and telephone interviews, so data collection was carried out largely virtually. The collected data were analysed thematically; themes were sought from the text mainly on the basis of assumptions derived from theory. The study found that, roughly classified, the mechanisms that build trust are shared goals and responsibilities, communication, social interaction and information sharing, consideration for others, and personal characteristics. These mechanisms did not differ greatly from the trust-building mechanisms observed in a traditional context. In the early stages of virtual teamwork, trust was based on perceptions of the other team members' competence. Institutional identification also laid a foundation for trust in the early stages. Otherwise, trust was built gradually through task-related and social communication, and the importance of actions became emphasized over time. The work also presented practical means for building trust. Existing technologies were found to support relationship building well in tasks related to sharing and storing information, whereas from the interaction point of view the support was not seen as equally comprehensive. All in all, however, improvements in social relationships are likely to achieve more than improvements in technology.
Abstract:
By an exponential sum of the Fourier coefficients of a holomorphic cusp form we mean the sum formed by taking the Fourier series of the form, cutting away the beginning and the tail, and considering the remaining sum on the real axis. For simplicity's sake, the coefficients are typically normalized; this is not essential, however, since the normalization can be introduced and removed simply by partial summation. We improve the approximate functional equation for the exponential sums of the Fourier coefficients of holomorphic cusp forms by giving an explicit upper bound for the error term appearing in the equation. The approximate functional equation is originally due to Jutila [9] and is a crucial tool for transforming sums into shorter sums; the transformation changes the point of the real axis at which the sum is considered. We also improve the known upper bounds for the size of the exponential sums. For very short sums we do not obtain anything better than the very easy estimate obtained by multiplying the upper bound for a single Fourier coefficient (they are bounded by the divisor function, as Deligne [2] showed) by the number of coefficients. This estimate is extremely rough, as no possible cancellation is taken into account. For short sums, however, it is unclear whether any appreciable amount of cancellation occurs.
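For orientation, the object discussed above can be written in the following standard notation (assumed here for illustration, not quoted from the thesis), together with the "very easy" trivial estimate that follows from Deligne's divisor-function bound:

```latex
% Illustrative notation only (assumed, not quoted from the thesis):
% an exponential sum of normalized Fourier coefficients a(n) of a
% holomorphic cusp form, restricted to a finite range on the real axis.
\[
  S(M_1, M_2; \alpha) \;=\; \sum_{M_1 \le n \le M_2} a(n)\, e(n\alpha),
  \qquad e(x) = e^{2\pi i x}.
\]
% The trivial estimate: Deligne's bound |a(n)| \le d(n) gives, without
% exploiting any cancellation,
\[
  |S(M_1, M_2; \alpha)| \;\le\; \sum_{M_1 \le n \le M_2} d(n)
  \;\ll\; (M_2 - M_1 + 1)\, M_2^{\varepsilon}.
\]
```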
Abstract:
With the aim of monitoring the dynamics of the Livingston Island ice cap, the Departament de Geodinàmica i Geofísica of the Universitat de Barcelona began yearly surveys on Johnsons Glacier in the austral summer of 1994-95. During this field campaign 10 shallow ice cores were sampled with a manual vertical ice-core drilling machine. The objectives were: i) to detect the tephra layer accumulated on the glacier surface, attributed to the 1970 Deception Island pyroclastic eruption and today interstratified; ii) to verify whether this layer might serve as a reference level; iii) to measure the 137Cs radio-isotope concentration accumulated in the 1965 snow stratum; iv) to use this isochrone layer as a means of verifying the age of the 1970 tephra layer; and v) to calculate both the equilibrium line of the glacier and the average mass balance over the last 28 years (1965-1993). The stratigraphy of the cores, their cumulative density curves and the isothermal ice temperatures recorded confirm that Johnsons Glacier is a temperate glacier. Wind, solar radiation heating and liquid water are the main agents controlling the vertical and horizontal redistribution of the volcanic and cryoclastic particles that are sedimented and remain interstratified within the glacier. Because of this redistribution, the 1970 tephra layer does not always serve as a very good reference level. The position of the equilibrium line altitude (ELA) in 1993, obtained by the 137Cs spectrometric analysis, varies from about 200 m a.s.l. to 250 m a.s.l. This indicates a rising trend in the equilibrium line altitude from the beginning of the 1970s to the present day. The varying slope orientation of Johnsons Glacier relative to the prevailing NE wind gives rise to large local differences in snow accumulation, which locally modify the equilibrium line altitude. In the cores studied, 137Cs appears to be associated with the 1970 tephra layer. This indicates an intense ablation episode throughout the sampled area (at least up to 330 m a.s.l.), which probably occurred synchronously with the 1970 tephra deposition or later. A rough estimate of the specific mass balance reveals a considerable accumulation gradient, with accumulation increasing with altitude.
Abstract:
Recent advances in machine learning increasingly enable the automatic construction of various computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which such tools are needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question, and their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers the development of kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, prior knowledge is often incorporated by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
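As a hedged illustration of one ingredient mentioned above, the sketch below shows kernel regularized least-squares together with the classical closed-form leave-one-out shortcut; the kernel, data and parameter values are assumptions for illustration, not the thesis implementation.

```python
# Sketch of kernel regularized least-squares (RLS) with the classical
# closed-form leave-one-out (LOO) shortcut. The kernel choice, data and
# variable names are illustrative assumptions, not the thesis code.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X[:, 0] + 0.1 * rng.normal(size=100)

lam = 1.0                     # regularization parameter
K = rbf_kernel(X, gamma=0.1)  # kernel matrix
n = K.shape[0]

# Dual solution: alpha = (K + lam*I)^{-1} y, predictions f = K alpha = H y.
G_inv = np.linalg.inv(K + lam * np.eye(n))
alpha = G_inv @ y
H = K @ G_inv                 # "hat" matrix mapping y to fitted values
f = H @ y

# Fast LOO residuals without retraining n times:
# (y_i - f_i) / (1 - H_ii) equals the residual of the model trained without i.
loo_residuals = (y - f) / (1.0 - np.diag(H))
print("LOO MSE:", np.mean(loo_residuals ** 2))
```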
Abstract:
This article describes a photocatalytic nanostructured anatase coating deposited by cold gas spray (CGS) and supported on titanium sub-oxide (TiO2-x) coatings obtained by atmospheric plasma spray (APS) onto stainless steel cylinders. The photocatalytic coating was homogeneous and preserved the composition and nanostructure of the starting powder. The inner titanium sub-oxide coating favored the deposition of anatase particles in the solid state. Agglomerated nano-TiO2 particles fragmented on impact with the hard surface of the APS TiO2-x bond coat. The rough surface provided by APS offered an ideal scenario for entrapping the nanostructured particles, which may adhere to the bond coat through chemical bonding; a possible bonding mechanism is described. Photocatalytic experiments showed that the CGS nano-TiO2 coating was active for photodegrading phenol and formic acid under aqueous conditions. The results were similar to the performance obtained by competing technologies and materials such as dip-coated P25 photocatalysts. Disparities in the final performance of the photoactive materials may have been caused by differences in the grain size and crystalline composition of the titanium dioxide.
Abstract:
Sixty-nine entire male pigs of different halothane genotypes (homozygous halothane-positive, nn, n=36; and homozygous halothane-negative, NN, n=33) were fed a supplementation of magnesium sulphate (Mg) and/or L-tryptophan (Trp) in the diet for 5 days before slaughter. Animals were housed individually and were submitted to stressful ante mortem conditions (mixed in the lorry according to treatments and transported for 1 h on rough roads). Individual feed intake was recorded during the 5-day treatment. At the abattoir, pig behaviour was assessed in the raceway to the stunning system and during the stunning period by exposure to CO2. Muscle pH, colour, water holding capacity, texture and cathepsin activities were determined to assess meat quality. The number of pigs with an individual feed intake lower than 2 kg/day differed significantly among diets (P<0.05; Control: 8.7%; Mg&Trp: 43.5%; Trp: 17.4%), and these pigs were considered to have inadequate supplement intake. During the ante mortem period, 15.2% of the pigs included in the experiment died; this percentage decreased to 8.7% among pigs with a feed intake above 2 kg/day, all of them stress-sensitive (nn) pigs. In general, no differences were observed in the behaviour of pigs along the corridor leading to the stunning system or inside the CO2 stunning system. During the stunning procedure, the Trp diet showed shorter periods of muscular excitation than the control and Mg&Trp diets. The combination of a stressful ante mortem treatment and Mg&Trp supplementation led to carcasses with a high incidence of severe skin lesions. Different meat quality results were found when considering all pigs or only those with adequate supplement intake. In this latter case, Trp increased pH45 (6.15) vs the control diet (5.96) in the Longissimus thoracis (LT) muscle (P<0.05), and the pH at 24 h (Trp: 5.59 vs C: 5.47) led to a higher incidence of dark, firm and dry (DFD) traits in the SM muscle (P<0.05). Genotype negatively affected all meat quality traits: 75% of the LT and 60.0% of the SM muscles from nn pigs were classified as pale, soft and exudative (PSE), while none of the NN pigs showed these traits (P<0.0001). No significant differences were found between genotypes in the incidence of DFD meat. Owing to the negative effects observed in the Mg&Trp group on feed intake and carcass quality, the use of a mixture of magnesium sulphate and tryptophan is not recommended.
Abstract:
More than a million people in Finland live in sparsely populated areas and use property-specific wastewater treatment systems. An estimated 350,000-400,000 properties will have to renovate their systems by the end of 2013 to comply with the Government Decree. Renovating a wastewater system usually requires an action permit, accompanied by a plan for the treatment of the wastewater. An information system is needed to manage the data of the design and implementation process. This Master's thesis examined database-driven Internet applications and extranet systems and, on that basis, designed the most practicable information system solution, with which actors in the wastewater sector can share information, prepare wastewater reports and plans, pass assignments on to subcontractors, and report data to the authorities electronically. The result of the work was a study to serve as a basis for the design of the information system, together with a plan for its implementation. The system makes it possible to offer the customer, i.e. the property owner, a comprehensive solution for the renovation and maintenance of the wastewater system.
Abstract:
The purpose of this Master's thesis is to gather the theoretical knowledge needed for developing the warehouse and procurement modules of an ERP system, together with the requirements of Nestix Oy's customers and some other companies for such software. The theory and the requirements are combined into a rough requirements specification, on the basis of which Nestix Oy will implement the new modules in its ERP system. The study found that companies have very similar needs for managing warehouses and procurement: warehousing and procurement processes differ very little between companies, and the same traditional basic logistics tools are used in almost all of them. The most attractive future development area is web-based software, which makes it possible to use the software from anywhere in the world without separate installations. After the study, integrated procurement and warehousing modules were implemented in the Nestix product family. The software has since been developed further and has been sold to several European countries as part of the Nestix ERP system.
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved for a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. To improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points; we also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
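To make the regularized least-squares flavour of preference learning concrete, the following sketch fits a linear scoring function to pairwise preferences with a squared loss on score differences; it is a generic illustration of the idea under assumed toy data, not the specific ranking algorithm proposed in the thesis.

```python
# Generic sketch of pairwise preference learning with a least-squares loss:
# for each observed preference (i preferred over j), the score difference
# f(x_i) - f(x_j) is pushed towards 1. Illustration only, not the specific
# regularized least-squares ranker proposed in the thesis.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))
true_w = rng.normal(size=15)
scores = X @ true_w

# Sample preference pairs (i, j) meaning "item i is preferred to item j".
pairs = []
for _ in range(500):
    i, j = rng.integers(0, 200, size=2)
    if scores[i] > scores[j]:
        pairs.append((i, j))
    elif scores[j] > scores[i]:
        pairs.append((j, i))
pairs = np.array(pairs)

# Least squares on pairwise differences with L2 regularization:
# minimize sum ((x_i - x_j) @ w - 1)^2 + lam * ||w||^2, solved in closed form.
D = X[pairs[:, 0]] - X[pairs[:, 1]]           # difference vectors
lam = 1.0
w = np.linalg.solve(D.T @ D + lam * np.eye(15), D.T @ np.ones(len(pairs)))

# Fraction of the sampled pairs ranked correctly by the learned scoring function.
pred = X @ w
print("pairwise accuracy:", np.mean(pred[pairs[:, 0]] > pred[pairs[:, 1]]))
```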
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application, distance transforms are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modifications, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been adapted to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms, such as GRAYMAT, find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them in that way: it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way, propagating local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning, whereas their use in image compression is very rare. This thesis introduces a new application area for distance transforms: three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8-kernels method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of DCT images with a 4 x 4
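The following sketch illustrates a two-pass, chessboard-like gray-level transform in the spirit of the DTOCS, where the local step cost between neighbouring pixels is the gray-value difference plus one; the exact local cost, neighbourhood and fixed number of iterations are simplifying assumptions, not the thesis algorithm verbatim.

```python
# Two-pass, chessboard-like gray-level distance transform in the spirit of
# the DTOCS. The local step cost |G(p) - G(q)| + 1 and the fixed number of
# forward/backward sweeps are simplifying assumptions for illustration; the
# abstract notes that complicated images may require several iterations.
import numpy as np

def dtocs_like(gray, seeds, n_iter=3):
    """gray: 2D array of gray values; seeds: boolean mask of zero-distance pixels."""
    d = np.where(seeds, 0.0, np.inf)
    h, w = gray.shape
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # forward-pass neighbours
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # backward-pass neighbours

    for _ in range(n_iter):
        # Forward raster scan (top-left to bottom-right).
        for y in range(h):
            for x in range(w):
                for dy, dx in fwd:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        step = abs(float(gray[y, x]) - float(gray[ny, nx])) + 1.0
                        d[y, x] = min(d[y, x], d[ny, nx] + step)
        # Backward raster scan (bottom-right to top-left).
        for y in range(h - 1, -1, -1):
            for x in range(w - 1, -1, -1):
                for dy, dx in bwd:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        step = abs(float(gray[y, x]) - float(gray[ny, nx])) + 1.0
                        d[y, x] = min(d[y, x], d[ny, nx] + step)
    return d

# Toy example: distances from the image centre over a horizontal gradient image.
img = np.tile(np.arange(16, dtype=float), (16, 1))
mask = np.zeros_like(img, dtype=bool)
mask[8, 8] = True
print(dtocs_like(img, mask)[0, 0])
```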
Abstract:
One main assumption in the theory of rough sets applied to information tables is that elements exhibiting the same information are indiscernible (similar) and form blocks that can be understood as elementary granules of knowledge about the universe. We propose a variant of this concept by defining a measure of similarity between the elements of the universe, so that two objects can be considered indiscernible even though they do not share all attribute values, because the knowledge is partial or uncertain. The set of similarities defines the matrix of a fuzzy relation satisfying reflexivity and symmetry but not transitivity, so a partition of the universe is not attained. This problem can be solved by calculating the transitive closure of the relation, which ensures a partition for each level in the unit interval [0,1]. This procedure allows the theory of rough sets to be generalized depending on the minimum level of similarity accepted. This new point of view increases the rough character of the data because it enlarges the set of indiscernible objects. Finally, we apply our results to a synthetic (non-real) application in order to highlight the differences and improvements between this methodology and the classical one.
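A minimal sketch of the procedure described above, assuming a max-min notion of transitivity: the reflexive and symmetric similarity matrix is completed to its transitive closure, and an alpha-cut then yields a partition at the chosen similarity level (the matrix and the level are illustrative).

```python
# Sketch of the construction described above: a reflexive and symmetric
# fuzzy similarity matrix is made transitive via its max-min transitive
# closure, and an alpha-cut then yields a partition (indiscernibility
# classes) at each similarity level. The matrix and alpha are illustrative.
import numpy as np

def maxmin_compose(R, S):
    """Max-min composition of two fuzzy relations given as matrices."""
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def transitive_closure(R):
    """Iterate R := max(R, R o R) until the relation stabilizes."""
    while True:
        R2 = np.maximum(R, maxmin_compose(R, R))
        if np.allclose(R2, R):
            return R2
        R = R2

def alpha_cut_classes(R, alpha):
    """Equivalence classes of the crisp relation R >= alpha (R must be transitive)."""
    n = R.shape[0]
    labels, classes = [-1] * n, []
    for i in range(n):
        if labels[i] == -1:
            members = [j for j in range(n) if R[i, j] >= alpha]
            for j in members:
                labels[j] = len(classes)
            classes.append(members)
    return classes

# Reflexive, symmetric (but not transitive) similarity matrix for 4 objects.
R = np.array([[1.0, 0.8, 0.3, 0.2],
              [0.8, 1.0, 0.5, 0.2],
              [0.3, 0.5, 1.0, 0.6],
              [0.2, 0.2, 0.6, 1.0]])
T = transitive_closure(R)
print(alpha_cut_classes(T, alpha=0.6))  # e.g. [[0, 1], [2, 3]]
```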
Abstract:
In the quality management of flash welding, a process-oriented approach and a comprehensive view of the activities are essential. When flash welding is part of an endless rolling process, weld quality management can be divided into three main phases: quality factors acting before welding, during welding, and after welding. In this work, the quality factors of flash welding were identified and organized into charts. The technical quality of a flash weld is affected by many phenomena and parameters during welding; the most important welding parameters are the welding voltage, the platform travel speed, the upset distance, and the flashing time. Incorrect welding conditions or parameters cause various mechanical and metallurgical defects in the flash weld. One metallurgical defect type is oxide inclusions in the weld area. These oxidized regions can have various origins; for example, a connection between the flashing time, an uneven joint surface, and the formation of inclusions has been suspected. Based on welding experiments performed with extended flashing times and on destructive testing of the welds, it was concluded that the oxide inclusions in the test welds studied here form by oxidation during flashing or just before upsetting, and that humid conditions around the welding atmosphere additionally degrade the quality of the final result. The flashing time plays the most important role in leveling the surfaces to be flashed: the more uneven the joint surface, the longer the flashing time it requires.