857 results for clustering and QoS-aware routing


Relevância: 100.00%

Resumo:

A well-known drawback of IP networks is that they cannot guarantee a specific Quality of Service (QoS) for the packets they carry. The following two techniques are considered the most promising ways of providing QoS: Differentiated Services (DiffServ) and QoS routing. DiffServ is a fairly recent QoS mechanism for the Internet defined by the IETF. It provides scalable service differentiation without per-hop signalling or per-flow state management, and it is a good example of decentralized network design: the goal of this QoS mechanism is to simplify the design of communication systems, so that a network node can be built from a small set of well-defined building blocks. QoS routing is a routing mechanism in which traffic routes are determined from the resources available in the network. This thesis examines a new QoS routing approach called Simple Multipath Routing. The aim of the work is to design a QoS controller for DiffServ; the controller proposed here is an attempt to combine DiffServ with QoS routing mechanisms. The experimental part of the work focuses in particular on QoS routing algorithms.
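The abstract does not spell out the Simple Multipath Routing algorithm, but the core idea of QoS routing, choosing routes from the resources currently available, can be sketched with a widest-path variant of Dijkstra's algorithm, which selects the route with the largest bottleneck bandwidth. The topology and numbers below are invented for illustration.

```python
import heapq

def widest_path(graph, src, dst):
    """Return the path from src to dst that maximizes the bottleneck
    (minimum available bandwidth along the path), a common QoS-routing
    metric. `graph` maps node -> {neighbour: available_bandwidth}."""
    best = {src: float("inf")}     # widest bottleneck known so far
    prev = {}
    heap = [(-float("inf"), src)]  # max-heap via negated widths
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u == dst:
            break
        if width < best.get(u, 0.0):
            continue  # stale queue entry
        for v, bw in graph[u].items():
            cand = min(width, bw)  # bottleneck if we go through u
            if cand > best.get(v, 0.0):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], best[dst]

# Two candidate routes from A to D; the wider one (via C) should win.
net = {
    "A": {"B": 10, "C": 100},
    "B": {"D": 10},
    "C": {"D": 80},
    "D": {},
}
route, bottleneck = widest_path(net, "A", "D")
```

A real QoS controller would combine such route selection with the DiffServ per-hop behaviours; this sketch shows only the resource-aware path choice.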

Relevância: 100.00%

Resumo:

BACKGROUND: Many species contain evolutionarily distinct groups that are genetically highly differentiated but morphologically difficult to distinguish (i.e., cryptic species). The presence of cryptic species poses significant challenges for the accurate assessment of biodiversity and, if unrecognized, may lead to erroneous inferences in many fields of biological research and conservation. RESULTS: We tested for cryptic genetic variation within the broadly distributed alpine mayfly Baetis alpinus across several major European drainages in the central Alps. Bayesian clustering and multivariate analyses of nuclear microsatellite loci, combined with phylogenetic analyses of mitochondrial DNA, were used to assess population genetic structure and diversity. We identified two genetically highly differentiated lineages (A and B) that had no obvious differences in regional distribution patterns, and occurred in local sympatry. Furthermore, the two lineages differed in relative abundance, overall levels of genetic diversity as well as patterns of population structure: lineage A was abundant, widely distributed and had a higher level of genetic variation, whereas lineage B was less abundant, more prevalent in spring-fed tributaries than glacier-fed streams and restricted to high elevations. Subsequent morphological analyses revealed that traits previously acknowledged as intraspecific variation of B. alpinus in fact segregated these two lineages. CONCLUSIONS: Taken together, our findings indicate that even common and apparently ecologically well-studied species may consist of reproductively isolated units, with distinct evolutionary histories and likely different ecology and evolutionary potential. These findings emphasize the need to investigate hidden diversity even in well-known species to allow for appropriate assessment of biological diversity and conservation measures.
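The study's Bayesian clustering of microsatellite genotypes is far richer than can be shown here; as a toy stand-in, the sketch below splits invented two-locus allele profiles into two groups with single-linkage agglomerative clustering, just to illustrate how two strongly differentiated lineages separate in genotype space.

```python
def two_lineages(samples):
    """Split samples into two groups by single-linkage agglomerative
    clustering: repeatedly merge the two closest clusters until only
    two remain (a toy stand-in for the Bayesian clustering of
    microsatellite genotypes used in the study)."""
    clusters = [[s] for s in samples]
    while len(clusters) > 2:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge the closest pair
    return clusters

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Invented microsatellite profiles (allele sizes at two loci) for the
# two hypothetical lineages.
lineage_a = [(100, 200), (101, 201), (99, 199), (100, 202)]
lineage_b = [(120, 230), (121, 231), (119, 229)]
groups = two_lineages(lineage_a + lineage_b)
sizes = sorted(len(g) for g in groups)
```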

Relevância: 100.00%

Resumo:

Technological developments in microprocessors and the ICT landscape have shifted us into a new era in which computing power is embedded in numerous small distributed objects and devices in our everyday lives. These small computing devices are fine-tuned to perform a particular task and are increasingly reaching our society at every level. For example, home appliances such as programmable washing machines and microwave ovens employ several sensors to improve performance and convenience. Similarly, cars have on-board computers that use information from many different sensors to control components such as fuel injectors and spark plugs so that they perform their tasks efficiently. These individual devices make life easier by helping to take decisions and removing the burden from their users. All these objects and devices obtain some piece of information about the physical environment, yet each device is an island, with no proper connectivity or information sharing between them. Sharing information between these heterogeneous devices could enable a whole new universe of innovative and intelligent applications, but it is a difficult task because of the heterogeneity and limited interoperability of the devices. The Smart Space vision is to overcome these issues of heterogeneity and interoperability so that devices can understand each other and use each other's services through information sharing. This enables innovative local mashup applications based on data shared between heterogeneous devices. Smart homes are one example of Smart Spaces: through the intelligent interconnection of resources and their collective behaviour, they make it possible to bring the health-care system to the patient, as opposed to bringing the patient into the health system. In addition, the use of mobile handheld devices has risen at a tremendous rate during the last few years, and they have become an essential part of everyday life.
Mobile phones offer a wide range of services to their users, including text and multimedia messaging, Internet access, audio, video, email applications and, most recently, TV services. Interactive TV provides a variety of applications for viewers. Combining interactive TV with Smart Spaces could yield innovative applications that are personalized, context-aware, ubiquitous and intelligent, by enabling heterogeneous systems to collaborate with each other by sharing information. There are many challenges in designing the frameworks and application-development tools for rapid and easy development of such applications. The research work presented in this thesis addresses these issues. The original publications presented in the second part of this thesis propose architectures and methodologies for interactive and context-aware applications, and tools for the development of these applications. We demonstrate the suitability of our ontology-driven application-development tools and rule-based approach for the development of dynamic, context-aware, ubiquitous iTV applications.

Relevância: 100.00%

Resumo:

According to the working memory model, the phonological loop is the component of working memory specialized in processing and manipulating limited amounts of speech-based information. The Children's Test of Nonword Repetition (CNRep) is a suitable measure of phonological short-term memory for English-speaking children, and the Brazilian Children's Test of Pseudoword Repetition (BCPR) was validated as its Portuguese-language version. The objectives of the present study were: i) to investigate developmental aspects of phonological memory processing by error analysis in the nonword repetition task, and ii) to examine phoneme (substitution, omission and addition) and order (migration) errors made in the BCPR by 180 normal Brazilian children of both sexes aged 4-10, from preschool to 4th grade. The dominant error was substitution [F(3,525) = 180.47; P < 0.0001]. Performance was age-related [F(4,175) = 14.53; P < 0.0001]. A length effect, i.e., more errors in long than in short items, was observed [F(3,519) = 108.36; P < 0.0001]. In 5-syllable pseudowords, errors occurred mainly in the middle of the stimuli, before the syllabic stress [F(4,16) = 6.03; P = 0.003]; substitutions appeared more often at the end of the stimuli, after the stress [F(12,48) = 2.27; P = 0.02]. In conclusion, the BCPR error analysis supports the idea that phonological loop capacity is relatively constant during development, although school learning increases the efficiency of this system. Moreover, there are indications that long-term memory contributes to holding the memory trace. The findings are discussed in terms of the distinctiveness, clustering and redintegration hypotheses.
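The BCPR scoring scheme itself is not reproduced in the abstract; as an illustrative sketch, phoneme substitution, omission and addition errors between a target pseudoword and a child's repetition can be counted with a standard edit-distance alignment. Order/migration errors are not handled here, and the pseudoword is invented.

```python
def classify_errors(target, response):
    """Count substitution, omission and addition errors between a
    target pseudoword and its repetition, using an edit-distance
    alignment (a simplified stand-in for the BCPR scoring scheme)."""
    n, m = len(target), len(response)
    # dp[i][j] = minimal edit cost between target[:i] and response[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if target[i - 1] == response[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # omission
                           dp[i][j - 1] + 1,          # addition
                           dp[i - 1][j - 1] + cost)   # match/substitution
    # trace back through the alignment to count each error type
    errors = {"substitution": 0, "omission": 0, "addition": 0}
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                dp[i][j] == dp[i - 1][j - 1]
                + (0 if target[i - 1] == response[j - 1] else 1)):
            if target[i - 1] != response[j - 1]:
                errors["substitution"] += 1
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            errors["omission"] += 1
            i -= 1
        else:
            errors["addition"] += 1
            j -= 1
    return errors

# Invented example: "banilata" repeated as "banidata" -- one
# substituted phoneme in the middle of the item.
counts = classify_errors("banilata", "banidata")
```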

Relevância: 100.00%

Resumo:

The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and works as a pipeline with five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or in a local version of the web service. All scripts in PMmA were developed in the PERL programming language, and the statistical analysis functions were implemented in the R statistical language; consequently, our package is platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing superior performance compared to other methods of analysis. The pipeline software has been applied to public macroarray data of 1536 expressed sequence tags from sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset: fourteen genes were up-regulated, two showed variable expression, and the other fourteen were down-regulated across the treatments. These new findings are certainly a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA.
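PMmA's statistics are not detailed in the abstract; the sketch below only illustrates the closing point, that log-ratio variability depends on average signal intensity, by flagging genes against per-intensity-bin rather than global variability. Gene names and values are invented.

```python
def intensity_dependent_calls(genes, n_bins=2, z_cut=2.0):
    """Flag differentially expressed genes with a per-intensity-bin
    z-score (a simplified stand-in for PMmA's statistics). `genes` is
    a list of (name, avg_intensity, log_ratio) tuples."""
    ranked = sorted(genes, key=lambda g: g[1])
    size = max(1, len(ranked) // n_bins)
    flagged = []
    for b in range(0, len(ranked), size):
        bin_genes = ranked[b:b + size]
        ratios = [g[2] for g in bin_genes]
        mu = sum(ratios) / len(ratios)
        var = sum((r - mu) ** 2 for r in ratios) / len(ratios)
        sd = var ** 0.5 or 1e-9
        for name, _, r in bin_genes:
            if abs(r - mu) > z_cut * sd:
                flagged.append(name)
    return flagged

genes = [
    # low-intensity genes: large log-ratios here are just noise
    ("g1", 1.0, 1.0), ("g2", 1.5, -1.0), ("g3", 2.0, 0.8),
    ("g4", 2.5, -0.8), ("g5", 3.0, 0.9), ("g6", 3.5, -0.9),
    # high-intensity genes: ratios are tight, so 2.0 stands out
    ("g7", 10.0, 0.05), ("g8", 10.5, -0.05), ("g9", 11.0, 0.1),
    ("g10", 11.5, -0.1), ("g11", 12.0, 0.0), ("cold1", 12.5, 2.0),
]
flagged = intensity_dependent_calls(genes)
```

Under a single global cutoff the noisy low-intensity ratios would swamp the variance estimate; binning by intensity lets the tight high-intensity gene stand out.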

Relevância: 100.00%

Resumo:

Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the planet's surface. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within the data. Identifying minerals, rocks, vegetation and other materials in such images is one application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two classification and clustering techniques on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as the classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data for analysis; its purpose is to reduce the amount of data that needs to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada, and a more complex dataset from Baffin Island were used to assess the performance of these techniques. The main goal of this research is to evaluate the advantage of training a classifier on a small amount of data compared to an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal. This thesis concludes that using PCA increases the learning accuracy, especially for classification: SVM classifies the Cuprite data with high precision, while the SOM challenges SVM on datasets with a high level of noise (such as Baffin Island).
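A full SVM is beyond a short sketch, but the PCA step can be illustrated directly: the leading principal component is found by power iteration on the covariance matrix, and a nearest-centroid rule in the reduced space stands in for the classifier. The three-band "spectra" below are invented.

```python
def pca_first_component(X, iters=100):
    """Leading principal component of the rows of X, via power
    iteration on the covariance matrix."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[x - m for x, m in zip(row, means)] for row in X]  # centred
    cov = [[sum(C[k][i] * C[k][j] for k in range(n)) / n
            for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return means, v

def project(row, means, v):
    return sum((x - m) * c for x, m, c in zip(row, means, v))

# Toy 3-band "spectra" for two mineral classes; the informative
# variation lies along one direction, which PCA recovers.
class_a = [[1.0, 2.0, 1.0], [1.1, 2.1, 1.1], [0.9, 1.9, 0.9]]
class_b = [[3.0, 4.0, 3.0], [3.1, 4.1, 3.1], [2.9, 3.9, 2.9]]
means, pc1 = pca_first_component(class_a + class_b)
# nearest-centroid rule in the 1-D PCA space (a stand-in for the SVM)
cen_a = sum(project(r, means, pc1) for r in class_a) / 3
cen_b = sum(project(r, means, pc1) for r in class_b) / 3
sample = [2.9, 4.0, 3.0]
p = project(sample, means, pc1)
label = "A" if abs(p - cen_a) < abs(p - cen_b) else "B"
```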

Relevância: 100.00%

Resumo:

Magnetic Resonance Imaging (MRI) is a multi-sequence medical imaging technique in which stacks of images are acquired with different tissue contrasts. Simultaneous observation and quantitative analysis of normal brain tissues and small abnormalities across this large number of sequences is a great challenge in clinical applications. Multispectral MRI analysis can simplify the job considerably by combining any number of available co-registered sequences in a single suite. However, the poor performance of multispectral systems with conventional image classification and segmentation methods makes them inappropriate for clinical analysis. Recent work in multispectral brain MRI analysis has attempted to resolve this issue with improved feature extraction approaches, such as transform-based methods, fuzzy approaches and algebraic techniques. Transform-based feature extraction methods such as Independent Component Analysis (ICA) and its extensions have been used effectively in recent studies to improve the performance of multispectral brain MRI analysis. However, these global transforms were found to be inefficient and inconsistent in identifying less frequently occurring features, such as small lesions, in large amounts of MR data. The present thesis focuses on improving ICA-based feature extraction techniques to enhance the performance of multispectral brain MRI analysis. Methods using spectral clustering and wavelet transforms are proposed to resolve the inefficiency of ICA in identifying small abnormalities, as well as problems due to ICA over-completeness. The effectiveness of the new methods in brain tissue classification and segmentation is confirmed by a detailed quantitative and qualitative analysis with synthetic and clinical data, both normal and abnormal. In comparison to conventional classification techniques, the proposed algorithms provide better performance in the classification of normal brain tissues and of significant small abnormalities.
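ICA itself is too involved for a short sketch; the toy below only illustrates the motivating problem stated above, that statistics computed globally over an image can miss a small abnormality which a locally computed (per-block) analysis detects. The image values are invented.

```python
def anomalies_global(image, z=3.0):
    """Flag pixels more than z global standard deviations from the
    global mean intensity."""
    vals = [p for row in image for p in row]
    n = len(vals)
    mu = sum(vals) / n
    sd = (sum((v - mu) ** 2 for v in vals) / n) ** 0.5
    return {(i, j) for i, row in enumerate(image)
            for j, p in enumerate(row) if abs(p - mu) > z * sd}

def anomalies_local(image, block=4, z=3.0):
    """Same test computed per block, so a small lesion is judged
    against its local tissue statistics."""
    hits = set()
    rows, cols = len(image), len(image[0])
    for bi in range(0, rows, block):
        for bj in range(0, cols, block):
            coords = [(i, j)
                      for i in range(bi, min(bi + block, rows))
                      for j in range(bj, min(bj + block, cols))]
            vals = [image[i][j] for i, j in coords]
            mu = sum(vals) / len(vals)
            sd = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5 or 1e-9
            hits |= {(i, j) for i, j in coords
                     if abs(image[i][j] - mu) > z * sd}
    return hits

# 8x8 toy "MR slice": two tissue classes plus one subtle lesion at (1, 1).
img = [[10] * 4 + [50] * 4 for _ in range(8)]
img[1][1] = 14
```

The between-tissue contrast dominates the global variance, so the lesion vanishes globally but stands out within its own block.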

Relevância: 100.00%

Resumo:

Many recent Web 2.0 resource-sharing applications can be subsumed under the "folksonomy" moniker. Regardless of the type of resource shared, all of them share a common structure describing the assignment of tags to resources by users. In this report, we generalize the notions of clustering and characteristic path length, which play a major role in current research on networks, where they are used to describe small-world effects in many observed network datasets. To that end, we show that the notion of clustering has two facets which are not equivalent in the generalized setting. The new measures are evaluated on two large-scale folksonomy datasets from resource-sharing systems on the web.
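The generalized measures are defined in the report itself; for reference, the classical notion being generalized, the local clustering coefficient of a graph vertex, can be computed as follows (toy graph invented).

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of vertex v: the fraction of pairs
    of v's neighbours that are themselves connected. The report
    generalizes this notion (and characteristic path length) from
    graphs to the tripartite user-tag-resource structure of
    folksonomies; this is only the classical graph version."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Small example: a triangle (a, b, c) plus a pendant vertex d.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
cc_a = clustering_coefficient(graph, "a")  # neighbours b, c, d; only b-c linked
```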

Relevância: 100.00%

Resumo:

A series of vectors for the over-expression of tagged proteins in Dictyostelium were designed, constructed and tested. These vectors allow the addition of an N- or C-terminal tag (GFP, RFP, 3xFLAG, 3xHA, 6xMYC and TAP) with an optimized polylinker sequence and no additional amino acid residues at the N or C terminus. Different selectable markers (blasticidin and gentamicin) are available, as well as an extrachromosomal version; these allow copy number, and thus expression level, to be controlled, and provide more options for complementation, co- and super-transformation. Finally, the vectors share standardized cloning sites, allowing a gene of interest to be easily transferred between the different versions of the vectors as experimental requirements evolve. The organisation and dynamics of the Dictyostelium nucleus during the cell cycle were investigated. The centromeric histone H3 (CenH3) variant serves to target the kinetochore to the centromeres and thus ensures correct chromosome segregation during mitosis and meiosis. A number of Dictyostelium histone H3 domain-containing proteins were expressed as GFP-tagged fusions, and one of them was found to function as CenH3 in this species. Like CenH3 from some other species, Dictyostelium CenH3 has an extended N-terminal domain with no similarity to any other known proteins. The targeting domain, comprising α-helix 2 and loop 1 of the histone fold, is required for targeting CenH3 to centromeres. Compared to the targeting domains of other known and putative CenH3 proteins, Dictyostelium CenH3 has a shorter loop 1 region. The localisation of a variety of histone modifications and histone-modifying enzymes was examined. Using fluorescence in situ hybridisation (FISH) and CenH3 chromatin immunoprecipitation (ChIP), it was shown that the six telocentric centromeres contain all of the DIRS-1 and most of the DDT-A and skipper transposons.
During interphase the centromeres remain attached to the centrosome, resulting in a single CenH3 cluster which also contains the putative histone H3K9 methyltransferase SuvA, H3K9me3 and HP1 (heterochromatin protein 1). Except for the centromere cluster and a number of small foci at the nuclear periphery opposite the centromeres, the rest of the nucleus is largely devoid of transposons and heterochromatin-associated histone modifications. At least some of the small foci correspond to the distal telomeres, suggesting that the chromosomes are organised in a Rabl-like manner. It was found that, in contrast to metazoans, loading of CenH3 onto Dictyostelium centromeres occurs in late G2 phase. Transformation of Dictyostelium with vectors carrying the G418 resistance cassette typically results in the vector integrating into the genome in one or a few tandem arrays of approximately a hundred copies; in contrast, plasmids containing a blasticidin resistance cassette integrate as one or a few copies. The behaviour of transgenes in the nucleus was examined by FISH, and it was found that low-copy transgenes show an apparently random distribution within the nucleus, while transgenes with more than approximately 10 copies cluster at or immediately adjacent to the centromeres in interphase cells, regardless of the actual integration site along the chromosome. During mitosis the transgenes show centromere-like behaviour, and ChIP experiments show that they contain the heterochromatin marker H3K9me2 and the centromeric histone variant H3v1. This clustering and centromere-like behaviour were not observed for extrachromosomal transgenes, nor in a line where the transgene had integrated into the extrachromosomal rDNA palindrome. This suggests that it is the repetitive nature of the transgenes that causes the centromere-like behaviour.
A Dictyostelium homolog of DET1, a protein largely restricted to multicellular eukaryotes, where it has a role in developmental regulation, was identified. As in other species, Dictyostelium DET1 is nuclear-localised. In ChIP experiments, DET1 was found to bind the promoters of a number of developmentally regulated loci. In contrast to other species, where it is an essential protein, loss of DET1 is not lethal in Dictyostelium, although viability is greatly reduced. Loss of DET1 results in delayed and abnormal development with enlarged aggregation territories. Mutant slugs displayed apparent cell-type patterning with a bias towards pre-stalk cell types.

Relevância: 100.00%

Resumo:

Network management is a very broad field that covers many different aspects. This doctoral thesis focuses on resource management in broadband networks that provide mechanisms for resource reservation, such as Asynchronous Transfer Mode (ATM) or Multi-Protocol Label Switching (MPLS). Logical networks can be established using ATM Virtual Paths (VPs) or MPLS Label Switched Paths (LSPs), which we refer to generically as logical paths. Network users then use these logical paths, which may have resources assigned to them, to establish their communications. Moreover, logical paths are very flexible, and their characteristics can be changed dynamically. This work focuses, in particular, on the dynamic management of this logical network in order to maximize its performance and adapt it to the offered connections. In this scenario, several mechanisms can affect and modify the characteristics of the logical paths (bandwidth, route, etc.). These include load-balancing mechanisms (bandwidth reallocation and rerouting) and fault-restoration mechanisms (the use of backup logical paths). Both of these can modify the logical network and manage the resources (bandwidth) of the physical links; there is therefore a need to coordinate them in order to avoid interference. Conventional resource management based on the logical network periodically recalculates (for example, every hour or every day) the entire logical network in a centralized manner. This introduces the problem that adjustments to the logical network are not made at the moment when problems actually arise, and it also requires maintaining a centralized view of the whole network. In this thesis, a distributed architecture based on a multi-agent system is proposed.
The main objective of this architecture is to carry out, in a joint and coordinated way, resource management at the logical-network level, integrating the bandwidth-readjustment mechanisms with the preplanned restoration mechanisms, including the management of the bandwidth reserved for restoration. This management is proposed to be carried out continuously rather than periodically, acting when a problem is detected (when a logical path is congested, i.e., when it is rejecting user connection requests because it is saturated) and in a completely distributed way, that is, without maintaining a global view of the network. The proposed architecture thus makes small adjustments to the logical network, continuously adapting it to user demand. The architecture also takes into account other objectives, such as scalability, modularity, robustness, flexibility and simplicity. The proposed multi-agent system is structured in two layers of agents: monitoring (M) agents and performance (P) agents. These agents are located at the different nodes of the network: there is one P agent and several M agents at each node, the latter subordinated to the former, so the proposed architecture can be seen as a hierarchy of agents. Each agent is responsible for monitoring and controlling the resources to which it is assigned. Several experiments were carried out using a connection-level distributed simulator that we developed ourselves. The results show that the proposed architecture is able to perform its assigned tasks of congestion detection, dynamic bandwidth reallocation and rerouting, in coordination with the preplanned restoration mechanisms and the management of the bandwidth reserved for restoration. The distributed architecture offers acceptable scalability and robustness thanks to its flexibility and modularity.
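The thesis's agent protocol is not reproduced here; as a hedged sketch, a performance (P) agent's congestion rule might look like the following: detect congestion from the rejection ratio of a logical path, and grow its bandwidth only while spare capacity remains on the physical link. All names, thresholds and numbers are illustrative, not the thesis's actual protocol.

```python
class LogicalPath:
    """A logical path (VP/LSP) with assigned bandwidth and counters
    for offered and rejected connection requests."""
    def __init__(self, name, assigned_bw):
        self.name = name
        self.assigned_bw = assigned_bw
        self.used_bw = 0.0
        self.offered = 0
        self.rejected = 0

    def request(self, bw):
        self.offered += 1
        if self.used_bw + bw <= self.assigned_bw:
            self.used_bw += bw
            return True
        self.rejected += 1
        return False

def monitor_and_rebalance(path, link_capacity, other_assigned,
                          step=10.0, threshold=0.2):
    """Illustrative P-agent rule: if the rejection ratio exceeds a
    threshold, grow the path's bandwidth by `step`, but only while
    spare capacity remains on the physical link."""
    ratio = path.rejected / max(1, path.offered)
    spare = link_capacity - other_assigned - path.assigned_bw
    if ratio > threshold and spare >= step:
        path.assigned_bw += step
        return True
    return False

# A 100-unit logical path saturates; the agent detects congestion and
# borrows 10 spare units from the 150-unit physical link.
vp = LogicalPath("VP1", assigned_bw=100.0)
for _ in range(13):
    vp.request(10.0)   # 10 accepted, 3 rejected
acted = monitor_and_rebalance(vp, link_capacity=150.0, other_assigned=30.0)
```

In the thesis this kind of local action is coordinated with preplanned restoration, so that bandwidth reserved for backup paths is not silently consumed.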

Relevância: 100.00%

Resumo:

In this paper, we introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by the image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.

Relevância: 100.00%

Resumo:

A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
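The collateral-context idea can be made concrete: the abstract describes it as a co-occurrence matrix of visual keywords. A minimal sketch (with invented keyword lists, not the paper's actual data) builds such a matrix from per-image region labels.

```python
from collections import defaultdict

def cooccurrence(labelled_images):
    """Build a 'collateral context' co-occurrence matrix: counts of
    how often two visual keywords label regions of the same image
    (a minimal sketch of the notion used in the CCL framework)."""
    matrix = defaultdict(int)
    for keywords in labelled_images:
        uniq = sorted(set(keywords))
        for i in range(len(uniq)):
            for j in range(i + 1, len(uniq)):
                matrix[(uniq[i], uniq[j])] += 1
    return dict(matrix)

# Region keywords per image: "sea" and "sky" co-occur in two images.
images = [
    ["sky", "sea", "boat"],
    ["sky", "sea"],
    ["grass", "sky"],
]
ctx = cooccurrence(images)
```

In the full framework such context counts would then bias the mapping from region-level visual primitives to keywords; only the matrix construction is shown here.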

Relevância: 100.00%

Resumo:

Radial basis functions can be combined into a network structure that has several advantages over conventional neural network solutions. However, to operate effectively, the number and positions of the basis function centres must be carefully selected. Although no rigorous algorithm exists for this purpose, several heuristic methods have been suggested. In this paper a new method is proposed in which radial basis function centres are selected by the mean-tracking clustering algorithm. The mean-tracking algorithm is compared with k-means clustering, and it is shown to achieve significantly better results in terms of radial basis function performance. As well as being computationally simpler, the mean-tracking algorithm in general selects better centre positions, thus providing the radial basis functions with better modelling accuracy.

Relevância: 100.00%

Resumo:

This piece is a contribution to the exhibition catalogue of Barbadian/Canadian artist Joscelyn Gardner's exhibition 'Bleeding & Breeding', curated by Olexander Wlasenko, January 14-February 12, 2012 at the Station Gallery, Whitby, Ontario, Canada. The piece examines the ways in which Gardner's Creole Portraits II (2007) and Creole Portraits III (2009) issue a provocative and carefully crafted contestation to the journals of the slave-owner and amateur botanist Thomas Thistlewood. It argues that while Thistlewood's journals make raced and gendered bodies seemingly available to knowledge, incorporating them within the colonial archive as signs of subjection, Gardner's portraits disrupt these acts of history and knowledge. Her artistic response marks a radical departure from the significant body of scholarship that has drawn on the Thistlewood journals to date. Creatively contesting his narratives' dispossession of Creole female subjects, and yet aware of the problems of innocent recovery, her works style representations that retain the consciousness and effect of historical erasure. Through an oxymoronic aesthetic that assembles a highly crafted verisimilitude alongside the condition of invisibility, and brings atrocity into the orbit of the aesthetic, these portraits force us to question what is at stake in bringing the lives of the enslaved and violated back into regimes of representation.