992 results for OPEN CLUSTERS


Relevance: 30.00%

Abstract:

This dissertation proposes a high-performance, message-passing communication library specifically designed to exploit efficiently the capabilities of SCI (Scalable Coherent Interface) technology. At the core of this library, named DECK/SCI, are three distinct communication protocols: a low-latency, minimal-overhead protocol specialized in the exchange of small messages; a general-purpose protocol; and a communication protocol that employs a zero-copy technique, also devised in this work, in order to raise the maximum achievable bandwidth when transmitting large messages. The research carried out in this dissertation aims to provide an environment for the development of parallel applications that demand high computational performance on clusters that use SCI as their communication network. The main motivation for this effort lies in the consolidation of clusters as architectures that are at once technologically comparable to dedicated parallel machines and economically viable. The programming interface that DECK/SCI exposes to users comprises the same set of primitives as the DECK (Distributed Execution Communication Kernel) library, originally conceived to achieve high performance over Myrinet technology. The results obtained with DECK/SCI show the efficiency of the designed mechanisms and the effective use of the high-performance characteristics intrinsic to the SCI network, since the measured performance came very close to the technological limits imposed by the underlying architecture. Furthermore, the execution of a classic parallel application, for validation purposes, shows that the primitives and abstractions provided by DECK/SCI strictly preserve the semantics of the original DECK programming interface.
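
Purely as an illustration of how a library with these three protocols might dispatch on message size (the class, method names and thresholds below are hypothetical, not the DECK/SCI API):

```python
# Illustrative sketch, not the real DECK/SCI API: a message-passing channel
# that picks one of three protocols by message size, mirroring the
# low-latency / general-purpose / zero-copy split described in the abstract.
# All names, thresholds and print statements are hypothetical.

SMALL_MSG_LIMIT = 1 * 1024    # assumed cutoff for the low-latency protocol
LARGE_MSG_LIMIT = 64 * 1024   # assumed cutoff for the zero-copy protocol


class Channel:
    def send_eager(self, payload: bytes) -> None:
        print(f"eager send of {len(payload)} bytes (small-message protocol)")

    def send_buffered(self, payload: bytes) -> None:
        print(f"buffered send of {len(payload)} bytes (general-purpose protocol)")

    def send_zero_copy(self, payload: bytes) -> None:
        print(f"zero-copy send of {len(payload)} bytes (large-message protocol)")

    def send(self, payload: bytes) -> None:
        size = len(payload)
        if size <= SMALL_MSG_LIMIT:
            self.send_eager(payload)       # minimize latency and overhead
        elif size < LARGE_MSG_LIMIT:
            self.send_buffered(payload)    # default, flow-controlled path
        else:
            self.send_zero_copy(payload)   # maximize bandwidth, avoid copies


if __name__ == "__main__":
    ch = Channel()
    for n in (256, 16 * 1024, 1 << 20):
        ch.send(b"x" * n)
```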

Relevance: 30.00%

Abstract:

This work aims to develop and employ techniques and data structures to parallelize Krylov subspace methods, using several tools and approaches, and to carry out a comparative performance analysis of these tools and approaches based on the results. The parallelizations developed here were designed to run on an architecture formed by an aggregate of independent, multiprocessor machines (a cluster); that is, both inter-node and intra-node parallelism are considered. To support parallel programming on clusters, different tools (libraries) have been, and are being, developed to exploit the two levels of parallelism present in this type of architecture. In this work, different message-passing and thread-creation libraries are employed to exploit inter-node and intra-node parallelism; the libraries adopted are DECK, MPICH and Pthreads. One of the items analyzed in this work is the comparison of the performance obtained with these libraries; the other is the analysis of how performance is affected when multiple threads are used for parallelism on multiprocessor clusters. The methods parallelized in this work are the Conjugate Gradient (CG) and the Generalized Minimal Residual (GMRES), which can be adopted, respectively, for the solution of symmetric positive definite and non-symmetric systems of linear equations. Such systems arise, for example, from the discretization of the hydrodynamics and mass-transport models being developed at GMCPAD. The use of these methods is justified by the fact that they are iterative, which makes them well suited to the solution of large, sparse systems of equations. Solving these systems with the parallelized iterative methods requires partitioning the problem domain, which must be done so as to achieve good load balancing and to minimize the boundaries between subdomains. The data structure developed for the parallelized methods allows them to be used for systems of equations generated from any type of partitioning, since the adopted data-storage format handles any kind of data dependency. In addition, two ordering strategies for the communications are adopted; these strategies can be important when considering the portability of the parallelizations to machines interconnected by networks whose buffers are too small to prevent deadlock. The results obtained in this dissertation contribute to the work of GMCPAD, since the parallelizations are used in applications being developed by the group.
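
For reference, the sequential kernel that such parallelizations distribute across nodes and threads can be sketched as a textbook Conjugate Gradient iteration for a sparse symmetric positive definite system; this generic version is not the DECK/MPICH/Pthreads implementation described above.

```python
# Minimal textbook Conjugate Gradient for a sparse symmetric positive
# definite system A x = b. The matrix-vector product is the dominant cost
# that cluster parallelizations distribute (message passing between nodes,
# threads within a node); this is not the dissertation's implementation.
import numpy as np
import scipy.sparse as sp


def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                        # sparse matrix-vector product
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p     # update search direction
        rs_old = rs_new
    return x


if __name__ == "__main__":
    # 1-D Poisson matrix: tridiagonal, symmetric positive definite.
    n = 100
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x = conjugate_gradient(A, b)
    print("residual norm:", np.linalg.norm(b - A @ x))
```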

Relevance: 30.00%

Abstract:

The transformation of technology, both in informatics and in telecommunications, has facilitated access to information and reduced its cost. As a result, the networks of relations among economic agents have gained greater agility and geographic reach, tightening the interaction between the local and the global. Thus, in the search for insertion into the international market, organization has turned toward a regional process in which the concept of cluster becomes a useful tool for answering distinct questions. These questions range from the business cycle and the management of firms to the use of resources such as space, labor and inputs and, above all, the dissemination of knowledge. In this new environment created by technological progress, industrial agglomerations act as facilitators in the creation of innovations, which appear as positive externalities in the generation of regional economic development, to the point of prompting, in certain situations, quite active participation by governments in order to promote and sustain industrial clustering. The central theme of this work is therefore clusters, their decisive role in obtaining competitive advantages in industry, and their relation to regional development. To this end, Chapter 5 also presents an analysis of the footwear cluster of the Vale dos Sinos in the State of Rio Grande do Sul, using the shift-share (structural-differential) method, with data provided by the Relação Anual de Informações Sociais (RAIS) for the period from 1990 to 2001.

Relevance: 30.00%

Abstract:

Due to their low communication latency, clusters equipped with SCI adapters are an alternative for distributed real-time systems. This work presents the design and implementation of a real-time communication platform over SCI clusters. Standard SCI hardware is not suitable for the transmission of real-time traffic because of medium-access contention, which causes priority inversion; therefore, a medium-access discipline is implemented as part of the platform. The implemented architecture makes it possible to establish communication channels with guaranteed bandwidth, so that multimedia applications, for example, can communicate at a constant rate. Each message is sent only once, so messages with event semantics can be transmitted, and both the order and the size of the messages are guaranteed. Besides traffic with guaranteed bandwidth, the platform also allows the exchange of IP packets between different machines of the cluster: these packets are embedded in the data field of the platform's own packets and then transmitted. This functionality also allows communication libraries based on TCP/IP, such as MPI, to run over the SCI cluster. The communication platform is implemented as kernel modules for the Linux operating system with the RTAI real-time extension. The evaluation of the platform showed that, even in scenarios with heavy communication among all nodes, the bandwidth reserved for each channel was maintained.
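
As a generic illustration of a medium-access discipline with per-channel bandwidth guarantees (not the RTAI/SCI kernel modules described above), a time-division round that serves each channel up to a fixed reservation might look like this; all numbers are hypothetical.

```python
# Illustrative sketch of a time-division medium-access discipline that gives
# each communication channel a guaranteed share of the bandwidth in every
# round. This is a generic TDMA-style example, not the platform described
# in the abstract.
from dataclasses import dataclass


@dataclass
class Channel:
    name: str
    reserved_bytes_per_round: int   # bandwidth guarantee for this channel
    backlog: int = 0                # bytes waiting to be sent

    def enqueue(self, nbytes: int) -> None:
        self.backlog += nbytes


def run_round(channels):
    """Serve each channel up to its reservation; leftover capacity could be
    handed to best-effort traffic (e.g. encapsulated IP packets)."""
    sent = {}
    for ch in channels:
        quota = min(ch.backlog, ch.reserved_bytes_per_round)
        ch.backlog -= quota
        sent[ch.name] = quota
    return sent


if __name__ == "__main__":
    chans = [Channel("audio", 8_000), Channel("video", 64_000)]
    chans[0].enqueue(20_000)
    chans[1].enqueue(200_000)
    for r in range(3):
        print(f"round {r}:", run_round(chans))
```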

Relevance: 30.00%

Abstract:

The evolution of cluster computing, driven by technological progress and by the relatively low cost of PC hardware, has led to the emergence of ever larger parallel machines, reaching hundreds and even thousands of processing nodes. One of the main problems in deploying clusters of this size is I/O management, since centralized file-storage solutions such as NFS quickly become the bottleneck of that part of the system. Over the last few years, several solutions to this problem have been proposed, either through the use of specialized storage and communication technologies, such as RAID and optical fiber, or through the distribution of the file-server functionality across several machines in order to parallelize its operations. Following the latter approach, the NFSP (NFS Parallèle) project proposes a distributed file system that extends standard NFS so as to increase the performance of data-read operations by distributing the service across several nodes of the cluster. With this approach, NFSP aims to combine performance and scalability with the benefits of NFS, such as the stability of its implementation and the familiarity of users and administrators with its usage semantics and its configuration and management procedures. The proposal presented here, called dNFSP, is an extension of NFSP whose main goal is to provide better performance to applications that perform both reads and writes, since the latter is not addressed by the original model. The system is based on a distributed metadata-management model, which improves scalability and reduces the computational load on the original NFSP meta-server, and on a relaxed coherence-maintenance mechanism based on LRC (Lazy Release Consistency), which allows the service to be distributed without incurring costly data-synchronization operations. A prototype of the dNFSP model was implemented and evaluated with a series of tests, benchmarks and applications. The results obtained show that the model can be used as a file system for a cluster, effectively providing better performance to applications while maintaining a high level of compatibility with the usual cluster-administration tools and procedures, in particular the use of standard NFS clients available in virtually every current operating system.

Relevance: 30.00%

Abstract:

This thesis aims to open a theoretical discussion on the importance of clusters for the development of Small and Medium Enterprises (SMEs). The methodology applied was based on bibliographical and qualitative research. The basic question raised by this study can be summarized as follows: how does the creation of a cluster help in the development of SMEs? Consequently, the final objective of the work is to identify the characteristics that contribute to the success of Small and Medium Enterprises inserted in a cluster. Answering this question led the research to a better understanding of (i) the characterization of Small and Medium Enterprises; (ii) cluster theory; and (iii) evidence from a developed country (Italy) and a developing country (Chile) that supports the proposition under examination. The main results confirm the relevance of the cluster for the development of Small and Medium Enterprises because of the collective efficiency it generates, improving funding conditions and export capacity and reducing the activity costs of the small companies that are part of the conglomerate.

Relevance: 30.00%

Abstract:

The physical properties of small rhodium clusters, Rh-n, have been under debate due to shortcomings of density functional theory (DFT). To help resolve these problems, we obtained a set of putative lowest-energy structures for small Rh-n (n = 2-15) clusters employing hybrid DFT and the generalized gradient approximation (GGA). For n = 2-6, both hybrid and GGA functionals yield similar (compact) ground-state structures; for n = 7-15, however, hybrid favors compact structures, while GGA favors open structures based on simple cubic motifs. Experimental results are therefore crucial to indicate the correct ground-state structures; however, we found that a single set of structures (compact or open) is unable to explain all available experimental data. For example, the GGA (open) structures yield total magnetic moments in excellent agreement with experimental data, while the hybrid (compact) structures have larger magnetic moments than experiment due to the increased localization of the 4d states. We would thus conclude that GGA provides a better description of the Rh-n clusters; however, a recent experimental-theoretical study [Harding et al., J. Chem. Phys. 133, 214304 (2010)] found that only compact structures are able to explain experimental vibrational data, while open structures cannot. This indicates that the study of Rh-n clusters is a challenging problem; further experimental studies are required to help solve this conundrum, as is a better description of exchange and correlation effects in the Rh-n clusters using theoretical methods such as the quantum Monte Carlo method.

Relevance: 30.00%

Abstract:

Background: Heavy metal Resistance-Nodulation-Division (HME-RND) efflux systems help Gram-negative bacteria to maintain intracellular homeostasis under high metal concentrations. These proteins constitute the cytoplasmic membrane channel of the tripartite RND transport systems. Caulobacter crescentus NA1000 possesses two HME-RND proteins, and the aim of this work was to determine their involvement in the response to cadmium, zinc, cobalt and nickel, and to analyze the phylogenetic distribution and characteristic signatures of orthologs of these two proteins. Results: Expression assays of the czrCBA operon showed significant induction in the presence of cadmium and zinc, and moderate induction by cobalt and nickel. The nczCBA operon is highly induced in the presence of nickel and cobalt, moderately induced by zinc and not induced by cadmium. Analysis of the resistance phenotype of mutant strains showed that the ΔczrA strain is highly sensitive to cadmium, zinc and cobalt, but resistant to nickel. The ΔnczA strain and the double mutant strain showed reduced growth in the presence of all metals tested. Phylogenetic analysis of the C. crescentus HME-RND proteins showed that CzrA-like proteins, in contrast to those similar to NczA, are found almost exclusively in the Alphaproteobacteria group, and the characteristic protein signatures of each group were highlighted. Conclusions: The czrCBA efflux system is involved mainly in the response to cadmium and zinc, with a secondary role in the response to cobalt. The nczCBA efflux system is involved mainly in the response to nickel and cobalt, with a secondary role in the response to cadmium and zinc. CzrA belongs to the HME2 subfamily, which is found almost exclusively in the Alphaproteobacteria group, as shown by the phylogenetic analysis. NczA belongs to the HME1 subfamily, which is more widespread among diverse Proteobacteria groups. Each of these subfamilies presents distinctive amino acid signatures.

Relevance: 30.00%

Abstract:

In the past decade, the advent of efficient genome-sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology, which allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes, and are hybridized to microarrays in an effort to find a small number of genes strongly correlated with the group of individuals. Even though the methods to analyze these data are now well developed and close to reaching a standard organization (through the efforts of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1 starts from the necessary biological introduction and reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality control to the preprocessing steps used in the data analyses in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing the problems that remain open. Chapter 3 introduces a method to address the issue of unbalanced designs in microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results; in a two-class problem, an equal or similar number of samples should be collected for each class. In some cases, however, e.g. rare pathologies, the approach to take is less evident. We propose to address this issue with a modified version of SAM [2]. MultiSAM consists of a reiterated application of SAM, comparing the less populated class (LPC) with 1,000 random samplings of the same size drawn from the more populated class (MPC). A list of differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000 based on how often it recurs as differentially expressed in the 1,000 lists. The performance of MultiSAM was compared with that of SAM and LIMMA [3] on two data sets simulated with beta and exponential distributions. The results of all three algorithms on low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe and SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score > 300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well on low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to evaluate similarities in a three-class problem by means of the Relevance Vector Machine [4]. Indeed, when looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can play a crucial role; in some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal may be to establish, with a certain level of confidence, whether the third class is more similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) could be a solution to the limitations of standard supervised classification. RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM); among these advantages, the estimate of the posterior probability of class membership is a key feature for addressing the similarity issue, and a highly important but often overlooked option in any practical pattern-recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3 and then evaluate the third class, G2, as a test set, obtaining for each G2 sample the probability of belonging to class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to that of grade 1 samples. This result had been conjectured in the literature, but no measure of significance had been given before.
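
The resampling scheme behind MultiSAM, as described above, can be sketched generically as follows; a Welch t-test stands in for the SAM statistic, and the data, significance level and score threshold are illustrative only.

```python
# Generic sketch of the MultiSAM resampling idea: repeatedly compare the
# less populated class (LPC) against random subsamples of the same size
# drawn from the more populated class (MPC), and score each probe by how
# often it comes out as differentially expressed. A Welch t-test is used
# here as a stand-in for the SAM statistic; data and thresholds are made up.
import numpy as np
from scipy import stats


def multisam_scores(lpc, mpc, n_iter=1000, alpha=0.05, rng=None):
    """lpc, mpc: arrays of shape (n_samples, n_probes). Returns per-probe
    counts (0..n_iter) of how often the probe was called significant."""
    rng = np.random.default_rng(rng)
    n_lpc = lpc.shape[0]
    scores = np.zeros(lpc.shape[1], dtype=int)
    for _ in range(n_iter):
        idx = rng.choice(mpc.shape[0], size=n_lpc, replace=False)
        _, pvals = stats.ttest_ind(lpc, mpc[idx], axis=0, equal_var=False)
        scores += (pvals < alpha).astype(int)
    return scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_probes = 500
    mpc = rng.normal(size=(60, n_probes))   # more populated class
    lpc = rng.normal(size=(8, n_probes))    # less populated class
    lpc[:, :10] += 2.0                      # 10 truly changed probes
    s = multisam_scores(lpc, mpc, n_iter=200, rng=1)
    print("probes with score > 150:", np.flatnonzero(s > 150))
```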

Relevance: 30.00%

Abstract:

In this Thesis we have presented our work on the analysis of galaxy clusters through their X-ray emission and the gravitational lensing effect that they induce. Our research work was mainly aimed at verifying and possibly explaining the observed mismatch between the galaxy cluster mass distributions estimated through two of the most promising techniques, i.e. X-ray and gravitational lensing analyses. Moreover, it is well established that combined, multi-wavelength analyses are extremely effective in addressing and explaining the open issues in astronomy; however, in order to follow this approach, it is crucial to test the reliability and the limitations of the individual analysis techniques. In this Thesis we also assessed the impact of some factors that could affect both the X-ray and the strong lensing analyses.

Relevance: 30.00%

Abstract:

A novel design based on electric-field-free open microwell arrays for the automated, continuous-flow sorting of single cells or small clusters of cells is presented. The main feature of the proposed device is the parallel analysis of cell-cell and cell-particle interactions in each microwell of the array. High-throughput sample recovery, with fast and separate transfer from the microsites to standard microtiter plates, is also possible thanks to the flexible printed circuit board technology, which makes it possible to produce cost-effective, large-area arrays with geometries compatible with standard laboratory equipment. Particle isolation is performed via negative dielectrophoretic forces, which convey the particles into the microwells. Particles such as cells and beads flow in electrically active microchannels on whose substrate the electrodes are patterned. The introduction of particles into the microwells is performed automatically, with the required feedback signal generated by a microscope-based optical counting and detection routine. In order to isolate a controlled number of particles, we created two configurations of the electric field within the structure: the first permits their isolation, whereas the second creates a net force that repels the particles from the microwell entrance. To increase the parallelism at which the cell-isolation function is implemented, a new technique based on coplanar electrodes was developed to detect particle presence. A lock-in amplification scheme was used to monitor the impedance of the channel as it is perturbed by particles flowing in high-conductivity suspension media. The impedance-measurement module was also combined with the dielectrophoretic focusing stage situated upstream of the measurement stage, to limit the dispersion of the measured signal amplitude due to variations in particle position within the microchannel. In conclusion, the designed system complies with the initial specifications, making it suitable for cellomics and biotechnology applications.
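
For background, the time-averaged dielectrophoretic force on a spherical particle of radius r suspended in a medium of permittivity εm is conventionally written as below; negative DEP corresponds to Re[K(ω)] < 0 and drives particles toward low-field regions. This is the standard dipole approximation, not a relation taken from the thesis.

```latex
% Time-averaged DEP force on a spherical particle (dipole approximation);
% K(\omega) is the Clausius-Mossotti factor built from the complex
% permittivities of particle and medium.
\langle \mathbf{F}_{\mathrm{DEP}} \rangle
  = 2\pi \varepsilon_m r^{3}\,
    \mathrm{Re}\!\left[ K(\omega) \right]\,
    \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^{2},
\qquad
K(\omega) = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}
                 {\varepsilon_p^{*}+2\varepsilon_m^{*}}
```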

Relevance: 30.00%

Abstract:

Small clusters of gallium oxide, a technologically important high-temperature ceramic, together with the interaction of nucleic acid bases with graphene and a small-diameter carbon nanotube, are the focus of the first-principles calculations in this work. A high-performance parallel computing platform was also developed to perform these calculations at Michigan Tech. The first-principles calculations are based on density functional theory, employing either the local density or the gradient-corrected approximation together with plane-wave and Gaussian basis sets. Bulk Ga2O3 is known to be a very good candidate for fabricating electronic devices that operate at high temperatures. To explore the properties of Ga2O3 at the nanoscale, we have performed a systematic theoretical study of small polyatomic gallium oxide clusters. The calculated results show that all lowest-energy isomers of GamOn clusters are dominated by Ga-O bonds over metal-metal or oxygen-oxygen bonds. Analysis of atomic charges suggests the clusters to be highly ionic, similar to the case of bulk Ga2O3. In the study of the sequential oxidation of these clusters starting from Ga2O, it is found that the most stable isomers display up to four different backbones of constituent atoms. Furthermore, the predicted configuration of the ground state of Ga2O was recently confirmed by an experimental result of Neumark's group. Guided by the results of the calculations on gallium oxide clusters, the performance-related challenge of computational simulations, namely that of producing high-performance computing platforms, has been addressed. Several engineering aspects were thoroughly studied during the design, development and implementation of the high-performance parallel computing platform, rama, at Michigan Tech. In an attempt to stay true to the principles of the Beowulf revolution, the rama cluster was extensively customized to make it easy to understand and use, for administrators as well as end-users. Following the results of benchmark calculations, and to keep up with the complexity of the systems under study, rama has been expanded to a total of sixty-four processors. Interest in the non-covalent interaction of DNA with carbon nanotubes has steadily increased during the past several years. This hybrid system, at the junction of the biological regime and the nanomaterials world, possesses features which make it very attractive for a wide range of applications. Using the in-house computational power available, we have studied the details of the interaction of nucleic acid bases with a graphene sheet as well as with a high-curvature, small-diameter carbon nanotube. The calculated trend in the binding energies strongly suggests that the polarizability of the base molecules determines the interaction strength of the nucleic acid bases with graphene. When comparing the results obtained here for physisorption on the small-diameter nanotube with those from the study on graphene, it is observed that the interaction strength of the nucleic acid bases is smaller for the tube. Thus, these results show that the effect of introducing curvature is to reduce the binding energy. The binding energies for the two extreme cases of negligible curvature (i.e. a flat graphene sheet) and of very high curvature (i.e. a small-diameter nanotube) may be considered as upper and lower bounds. This finding represents an important step towards a better understanding of the experimentally observed sequence-dependent interaction of DNA with carbon nanotubes.
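
The binding energies compared above are conventionally defined as total-energy differences of the following form; this is the generic definition, not notation taken from the thesis.

```latex
% Generic definition of the binding (adsorption) energy of a nucleic acid
% base physisorbed on a substrate (graphene sheet or nanotube); a positive
% E_b indicates a bound configuration.
E_b = E_{\mathrm{substrate}} + E_{\mathrm{base}}
      - E_{\mathrm{substrate+base}}
```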

Relevance: 30.00%

Abstract:

The majority of global ocean production and total export production is attributed to oligotrophic oceanic regions due to their vast regional expanse. However, energy transfers, food-web structures and trophic relationships in these areas remain largely unknown. Regional and vertical inter- and intra-specific differences in trophic interactions and dietary preferences of calanoid copepods were investigated in four different regions in the open eastern Atlantic Ocean (38°N to 21°S) in October/November 2012 using a combination of fatty acid (FA) and stable isotope (SI) analyses. Mean carnivory indices (CI) based on FA trophic markers generally agreed with trophic positions (TP) derived from d15N analysis. Most copepods were classified as omnivorous (CI ~0.5, TP 1.8 to ~2.5) or carnivorous (CI >=0.7, TP >=2.9). Herbivorous copepods showed typical CIs of <=0.3. Geographical differences in d15N values of epi- (200-0 m) to mesopelagic (1000-200 m) copepods reflected corresponding spatial differences in baseline d15N of particulate organic matter from the upper 100 m. In contrast, species restricted to lower meso- and bathypelagic (2000-1000 m) layers did not show this regional trend. FA compositions were species-specific without distinct intra-specific vertical or spatial variations. Differences were only observed in the southernmost region influenced by the highly productive Benguela Current. Apparently, food availability and dietary composition were widely homogeneous throughout the oligotrophic oceanic regions of the tropical and subtropical Atlantic. Four major species clusters were identified by principal component analysis based on FA compositions. Vertically migrating species clustered with epi- to mesopelagic, non-migrating species, of which only Neocalanus gracilis was moderately enriched in lipids with 16% of dry mass (DM) and stored wax esters (WE) with 37% of total lipid (TL). All other species of this cluster had low lipid contents (< 10% DM) without WE. Of these, the tropical epipelagic Undinula vulgaris showed highest portions of bacterial markers. Rhincalanus cornutus, R. nasutus and Calanoides carinatus formed three separate clusters with species-specific lipid profiles, high lipid contents (>=41% DM), mainly accumulated as WE (>=79% TL). C. carinatus and R. nasutus were primarily herbivorous with almost no bacterial input. Despite deviating feeding strategies, R. nasutus clustered with deep-dwelling, carnivorous species, which had high amounts of lipids (>=37% DM) and WE (>=54% TL). Tropical and subtropical calanoid copepods exhibited a wide variety of life strategies, characterized by specialized feeding. This allows them, together with vertical habitat partitioning, to maintain high abundance and diversity in tropical oligotrophic open oceans, where they play an essential role in the energy flux and carbon cycling.
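
The trophic positions mentioned above are typically obtained from nitrogen isotope ratios (written d15N in the abstract) with a formulation along the following lines; λ, the baseline trophic level, and the per-level enrichment of roughly 3.4 per mil are conventional assumptions, not values restated from this study.

```latex
% Commonly used formulation for trophic position from nitrogen isotopes
% (generic form; the study's exact baseline and enrichment factor are not
% restated here). \lambda is the trophic level of the baseline and
% \Delta\delta^{15}\mathrm{N} the assumed per-level enrichment (~3.4 permil).
\mathrm{TP} = \lambda +
  \frac{\delta^{15}\mathrm{N}_{\mathrm{consumer}}
        - \delta^{15}\mathrm{N}_{\mathrm{baseline}}}
       {\Delta\delta^{15}\mathrm{N}}
```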

Relevance: 30.00%

Abstract:

Traditional Text-To-Speech (TTS) systems have been developed using specially designed, non-expressive scripted recordings. In order to develop a new generation of expressive TTS systems in the Simple4All project, real recordings from the media should be used for training new voices with a whole new range of speaking styles. However, for processing this more spontaneous material, the new systems must be able to deal with imperfect data (multi-speaker recordings, background and foreground music and noise), filtering out low-quality audio segments and creating mono-speaker clusters. In this paper we compare several architectures for combining speaker diarization with music and noise detection, which improve the precision and overall quality of the segmentation.
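
A minimal sketch of the kind of post-filtering such a combined architecture enables, keeping only clean-speech segments and grouping them into mono-speaker clusters, might look like this; the segment fields and thresholds are assumptions, not the Simple4All pipeline.

```python
# Illustrative sketch of combining speaker-diarization output with music/
# noise detection to keep only clean, mono-speaker segments for voice
# building. The segment fields and thresholds are assumptions, not the
# actual Simple4All pipeline.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Segment:
    start: float          # seconds
    end: float
    speaker: str          # label from the diarization stage
    music_prob: float     # from the music/noise detection stage
    noise_prob: float


def clean_speaker_clusters(segments, max_music=0.2, max_noise=0.3,
                           min_duration=1.0):
    """Drop segments dominated by music or noise, then group the remaining
    ones by diarization speaker label (one cluster per speaker)."""
    clusters = defaultdict(list)
    for seg in segments:
        if seg.end - seg.start < min_duration:
            continue
        if seg.music_prob > max_music or seg.noise_prob > max_noise:
            continue
        clusters[seg.speaker].append(seg)
    return dict(clusters)


if __name__ == "__main__":
    segs = [
        Segment(0.0, 4.2, "spk1", 0.05, 0.10),
        Segment(4.2, 5.0, "spk1", 0.70, 0.10),   # background music: dropped
        Segment(5.0, 9.5, "spk2", 0.02, 0.05),
    ]
    for spk, kept in clean_speaker_clusters(segs).items():
        print(spk, [(s.start, s.end) for s in kept])
```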