936 results for BIMETALLIC CLUSTERS
Abstract:
Data analytic applications are characterized by large data sets that are subject to a series of processing phases. Some of these phases are executed sequentially, but others can be executed concurrently or in parallel on clusters, grids or clouds. The MapReduce programming model has been applied to process large data sets in cluster and cloud environments. Developing an application with MapReduce requires installing, configuring or accessing specific frameworks such as Apache Hadoop or Elastic MapReduce in the Amazon Cloud. It would be desirable to have more flexibility in adjusting such configurations to the application's characteristics. Furthermore, composing the multiple phases of a data analytic application requires the specification of all the phases and their orchestration. The original MapReduce model and environment lack flexible support for such configuration and composition. Recognizing that scientific workflows have been successfully applied to modeling complex applications, this paper describes our experiments on implementing MapReduce as subworkflows in the AWARD framework (Autonomic Workflow Activities Reconfigurable and Dynamic). A text mining data analytic application is modeled as a complex workflow with multiple phases, where individual workflow nodes support MapReduce computations. As in typical MapReduce environments, the end user only needs to define the application algorithms for input data processing and for the map and reduce functions. In the paper we present experimental results from using the AWARD framework to execute MapReduce workflows deployed over multiple Amazon EC2 (Elastic Compute Cloud) instances.
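As a rough illustration of the user-supplied part of such an application, here is a minimal word-count sketch in plain Python; the function names and the sequential driver are illustrative assumptions, not the AWARD or Hadoop API:

```python
from collections import defaultdict

# Illustrative map function: emit (word, 1) pairs for each input line.
def map_fn(line):
    for word in line.lower().split():
        yield word, 1

# Illustrative reduce function: sum the counts emitted for each word.
def reduce_fn(word, counts):
    return word, sum(counts)

# Minimal sequential driver standing in for the framework's orchestration
# (grouping map output by key, then reducing each group).
def run_mapreduce(lines):
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_fn(line):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(run_mapreduce(["the cat sat", "the cat ran"]))
# {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```

In a framework such as the one described, only map_fn and reduce_fn would be written by the end user; the grouping and distribution over workers is the framework's job.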
Abstract:
ABSTRACT OBJECTIVE To estimate the prevalence of arterial hypertension and obesity and the population attributable fraction of hypertension that is due to obesity in Brazilian adolescents. METHODS Data from participants in the Brazilian Study of Cardiovascular Risks in Adolescents (ERICA), the first national school-based, cross-sectional study performed in Brazil, were evaluated. The sample was divided into 32 geographical strata and into clusters of schools and classes, with regional and national representation. Obesity was classified using the body mass index according to age and sex. Arterial hypertension was defined as an average systolic or diastolic blood pressure greater than or equal to the 95th percentile of the reference curve. Prevalences and 95% confidence intervals (95%CI) of arterial hypertension and obesity, both on a national basis and in the macro-regions of Brazil, were estimated by sex and age group, as were the fractions of hypertension attributable to obesity in the population. RESULTS We evaluated 73,399 students, 55.4% female, with an average age of 14.7 years (SD = 1.6). The prevalence of hypertension was 9.6% (95%CI 9.0-10.3), with the lowest in the North, 8.4% (95%CI 7.7-9.2), and Northeast regions, 8.4% (95%CI 7.6-9.2), and the highest in the South, 12.5% (95%CI 11.0-14.2). The prevalence of obesity was 8.4% (95%CI 7.9-8.9), lower in the North region and higher in the South region. The prevalences of arterial hypertension and obesity were higher in males. Obese adolescents presented a higher prevalence of hypertension, 28.4% (95%CI 25.5-31.2), than overweight adolescents, 15.4% (95%CI 13.8-17.0), or eutrophic adolescents, 6.3% (95%CI 5.6-7.0). The fraction of hypertension attributable to obesity was 17.8%. CONCLUSIONS ERICA was the first nationally representative Brazilian study providing prevalence estimates of hypertension in adolescents. Regional and sex differences were observed. The study indicates that controlling obesity would lower the prevalence of hypertension among Brazilian adolescents by about one fifth.
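For reference, population attributable fractions of this kind are commonly computed with Levin's formula; whether ERICA used exactly this estimator is an assumption, but it shows how exposure prevalence and relative risk combine:

```latex
\[
\mathrm{PAF} = \frac{p\,(RR - 1)}{1 + p\,(RR - 1)},
\]
```

where \(p\) is the prevalence of obesity in the population and \(RR\) is the relative risk of hypertension in obese versus non-obese adolescents. Plugging the crude prevalences above into this formula will not reproduce the published 17.8% exactly, since published attributable fractions are typically derived from adjusted estimates.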
Abstract:
A number of novel, water-stable, redox-active cobalt complexes of the C-functionalized tripodal ligands tris(pyrazolyl)methane, XC(pz)3 (X = HOCH2, CH2OCH2Py or CH2OSO2Me), are reported along with their effects on DNA. The compounds were isolated as air-stable solids and fully characterized by IR and FIR spectroscopies, ESI-MS(+/-), cyclic voltammetry, controlled potential electrolysis, elemental analysis and, in a number of cases, also by single-crystal X-ray diffraction. They showed moderate in vitro cytotoxicity towards the HCT116 colorectal carcinoma and HepG2 hepatocellular carcinoma human cancer cell lines. This loss of viability correlates with increased apoptosis in the tumour cell lines. Reactivity studies with biomolecules such as reducing agents, H2O2 and plasmid DNA, as well as UV-visible titrations, were also performed to provide tentative insights into the complexes' mode of action. Incubation of the Co(II) complexes with pDNA induced double-strand breaks, without requiring the presence of any activator. This pDNA cleavage appears to be mediated by O-centred radical species.
Abstract:
Searching for patterns in data in order to form groups is known as data clustering, one of the most common tasks in data mining and pattern recognition. This dissertation addresses the concept of entropy and uses algorithms with entropic criteria to perform clustering on biomedical data. The use of entropy for clustering is relatively recent and arises from an attempt to exploit entropy's ability to extract higher-order information from the distribution of the data, using it either as the criterion for forming groups (clusters) or to complement and improve existing algorithms in pursuit of better results. Some studies involving algorithms based on entropic criteria have shown positive results in the analysis of real data. In this work, several algorithms based on entropic criteria were explored, along with their applicability to biomedical data, in an attempt to assess how well suited these algorithms are to this type of data. The results of the tested algorithms are compared with those obtained by more "conventional" algorithms such as k-means, spectral clustering algorithms and a density-based algorithm.
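As a rough illustration of an entropic clustering criterion (in the spirit of entropy-minimisation clustering for categorical data, not the specific algorithms evaluated in the dissertation), the sketch below scores a candidate partition by the expected within-cluster entropy; lower scores mean more homogeneous clusters:

```python
import math
from collections import Counter

# Shannon entropy (in bits) of a sequence of discrete values.
def entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

# Expected within-cluster entropy of a partition: the weighted sum, over
# clusters and categorical attributes, of each attribute's entropy inside
# the cluster. Homogeneous clusters contribute little to the score.
def partition_entropy(clusters):
    total = sum(len(c) for c in clusters)
    score = 0.0
    for cluster in clusters:
        for attr in range(len(cluster[0])):
            column = [row[attr] for row in cluster]
            score += (len(cluster) / total) * entropy(column)
    return score

# Toy categorical records: a clean split scores lower than a mixed one.
a = [("high", "yes"), ("high", "yes")]
b = [("low", "no"), ("low", "no")]
print(partition_entropy([a, b]))              # 0.0 (perfectly homogeneous)
print(partition_entropy([a + b[:1], b[1:]]))  # > 0 (one mixed cluster)
```

A clustering algorithm built on this criterion would search over partitions (or assign points greedily) so as to minimise this score.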
Abstract:
One of the world's greatest concerns at the moment is its heavy dependence on oil and oil-derived products. This dependence causes two problems: recent studies point to the onset of scarcity of this product, driving up the price of this precious material, and there is the pollution it causes. One of the most dependent and most polluting sectors is transport. In recent years the world has finally become aware of this problem, and one of the bets in this sector is the development of the fuel cell, a technology that uses water as fuel, which can be reused. It is a technology still in its introduction phase, so in the medium term it will not yet be the solution. An intermediate solution is the use of electrical energy as "fuel". Although a large part of electricity production comes from burning oil derivatives, electric motors are in themselves much more efficient than combustion engines. Whether they are a viable solution will not be debated here, given the issue of merely transferring the oil dependence from the transport sector to the electricity production sector. The goal of this work is to develop a system that manages the "fuel" of electric vehicles, that is, the batteries. This management aims to increase the vehicle's range and to extend the batteries' lifetime. The first phase is an introduction to the current state of electric vehicles, analysing the different solutions. The different types of batteries and their characteristics are presented, followed by examples of battery management systems. The idea behind the system is explained in the project chapter, with the implementation left for the following chapter.
Abstract:
Master's Dissertation in Informatics Engineering
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management
Abstract:
Dissertation presented as a partial requirement for obtaining the Master's degree in Statistics and Information Management
Abstract:
A novel two-component enzyme system from Escherichia coli involving a flavorubredoxin (FlRd) and its reductase was studied in terms of the spectroscopic, redox, and biochemical properties of its constituents. FlRd contains one FMN and one rubredoxin (Rd) center per monomer. To assess the role of the Rd domain, FlRd and a truncated form lacking the Rd domain (FlRdΔRd) were characterized. FlRd contains 2.9 ± 0.5 iron atoms/subunit, whereas FlRdΔRd contains 2.1 ± 0.6 iron atoms/subunit. While for FlRd one iron atom corresponds to the Rd center, the other two irons, also present in FlRdΔRd, are most probably due to a di-iron site. Redox titrations of FlRd using EPR and visible spectroscopies allowed us to determine that the Rd site has a reduction potential of -140 ± 15 mV, whereas the FMN undergoes reduction via a red semiquinone, at -140 ± 15 mV (Flox/Flsq) and -180 ± 15 mV (Flsq/Flred), at pH 7.6. The Rd site has the lowest potential ever reported for a Rd center, which may be correlated with specific amino acid substitutions close to both cysteine clusters. The gene adjacent to that encoding FlRd was found to code for an FAD-containing protein, (flavo)rubredoxin reductase (FlRd-reductase), which is capable of mediating electron transfer from NADH to Desulfovibrio gigas Rd as well as to E. coli FlRd. Furthermore, electron donation was found to proceed through the Rd domain of FlRd, as the Rd-truncated protein does not react with FlRd-reductase. In vitro, this pathway links NADH oxidation with dioxygen reduction. The possible function of this chain is discussed considering the presence of FlRd homologues in all known genomes of anaerobes and facultative aerobes.
Abstract:
Final Master's project for obtaining the Master's degree in Chemical Engineering
Abstract:
Consolidation consists in scheduling multiple virtual machines onto fewer servers in order to improve resource utilization and to reduce operational costs due to power consumption. However, virtualization technologies do not offer performance isolation, causing applications to slow down. In this work, we propose a performance-enforcing mechanism composed of a slowdown estimator and an interference- and power-aware scheduling algorithm. The slowdown estimator determines, based on noisy slowdown data samples obtained from state-of-the-art slowdown meters, whether tasks will complete within their deadlines, invoking the scheduling algorithm if needed. When invoked, the scheduling algorithm builds performance- and power-aware virtual clusters to successfully execute the tasks. We conduct simulations injecting synthetic jobs whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our strategy can be efficiently integrated with state-of-the-art slowdown meters to fulfil contracted SLAs in real-world environments, while reducing operational costs by about 12%.
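A highly simplified sketch of the enforcement check described above: noisy slowdown samples are smoothed, and the scheduler is invoked only when the projected completion time would miss the deadline. All names, the smoothing constant, and the projection rule are illustrative assumptions, not the paper's estimator:

```python
# Smooth noisy slowdown samples with an exponential moving average.
def ema(samples, alpha=0.3):
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

# Trigger rescheduling when the projected runtime under interference
# (isolated runtime scaled by the smoothed slowdown) exceeds the deadline.
def needs_rescheduling(slowdown_samples, isolated_runtime, deadline):
    slowdown = ema(slowdown_samples)  # smoothed slowdown factor >= 1
    return isolated_runtime * slowdown > deadline

# A task expected to take 100 s alone, suffering ~1.4x interference
# slowdown, with a 120 s deadline: projected ~137 s misses the deadline.
print(needs_rescheduling([1.3, 1.5, 1.4], 100.0, 120.0))  # True
```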
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can substantially degrade system performance, preventing the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy-optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
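The work-per-Joule figure used in the evaluation can be read as completed work divided by energy drawn. Below is a minimal sketch of such a metric under a common linear server power model (idle power plus a utilisation-proportional term); the model constants are illustrative assumptions, not values from the paper:

```python
# Linear server power model: P(u) = P_idle + (P_peak - P_idle) * u,
# a common first-order approximation; the constants are assumptions.
P_IDLE, P_PEAK = 100.0, 250.0  # watts

def energy_joules(utilisation_trace, dt=1.0):
    """Energy drawn over a trace of per-interval CPU utilisations in [0, 1]."""
    return sum((P_IDLE + (P_PEAK - P_IDLE) * u) * dt for u in utilisation_trace)

def work_per_joule(completed_work_units, utilisation_trace):
    return completed_work_units / energy_joules(utilisation_trace)

# Consolidation intuition: the same work on one busy server beats two
# half-idle ones, because the idle power is paid once instead of twice.
busy_one = [0.8] * 100                 # one server at 80% for 100 s
half_two = [0.4] * 100 + [0.4] * 100   # two servers at 40% for 100 s each
print(work_per_joule(80.0, busy_one))  # higher ratio
print(work_per_joule(80.0, half_two))  # lower ratio
```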
Abstract:
Dissertation submitted for obtaining the Master's degree in Biotechnology, by the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Eukaryotic Cell, Vol. 8, No. 3
Abstract:
Applied Physics Letters, 89