857 results for Complex network analysis. Time-varying graph mining (TVG). Slow-wave sleep (SWS). Fault tolerance
Abstract:
OBJECTIVE: To analyze whether metabolic syndrome and its altered components are associated with demographic, socioeconomic and behavioral factors in fixed-shift workers. METHODS: A cross-sectional study was conducted on a sample of 902 shift workers of both sexes in a poultry processing plant in Southern Brazil in 2010. The diagnosis of metabolic syndrome was determined according to the recommendations of Harmonizing the Metabolic Syndrome. Its frequency was evaluated according to the demographic (sex, skin color, age and marital status), socioeconomic (educational level, income and work shift), and behavioral characteristics (smoking, alcohol intake, leisure-time physical activity, number of meals and sleep duration) of the sample. The multivariate analysis followed a theoretical framework for identifying metabolic syndrome in fixed-shift workers. RESULTS: The prevalence of metabolic syndrome in the sample was 9.3% (95%CI 7.4;11.2). The most frequently altered component was waist circumference (PR 48.4%; 95%CI 45.5;51.2), followed by high-density lipoprotein. Work shift was not associated with metabolic syndrome or its altered components. After adjustment, the prevalence of metabolic syndrome was positively associated with women (PR 2.16; 95%CI 1.28;3.64), workers aged over 40 years (PR 3.90; 95%CI 1.78;8.93) and those who reported sleeping five hours or less per day (PR 1.70; 95%CI 1.09;2.24). Conversely, metabolic syndrome was inversely associated with educational level and with having more than three meals per day (PR 0.43; 95%CI 0.26;0.73). CONCLUSIONS: Being female, older and sleep-deprived are probable risk factors for metabolic syndrome, whereas a higher educational level and a higher number of daily meals are protective factors for metabolic syndrome in fixed-shift workers.
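The abstract reports adjusted prevalence ratios (PR) with 95% confidence intervals but does not state how they were estimated. As a minimal illustrative sketch only, prevalence ratios for a cross-sectional binary outcome are commonly obtained from Poisson regression with robust standard errors; the variable names and data below are hypothetical, not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per worker, binary metabolic syndrome indicator.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "mets": rng.binomial(1, 0.09, 902),
    "female": rng.binomial(1, 0.5, 902),
    "age_over_40": rng.binomial(1, 0.3, 902),
    "short_sleep": rng.binomial(1, 0.2, 902),
})

# Poisson regression with robust (HC1) standard errors: exponentiated
# coefficients are interpreted as adjusted prevalence ratios.
model = smf.glm("mets ~ female + age_over_40 + short_sleep",
                data=df, family=sm.families.Poisson()).fit(cov_type="HC1")

print(np.exp(model.params))      # adjusted prevalence ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals on the PR scale
```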
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
A new method, based on linear correlation and phase diagrams, was successfully developed for processes such as sedimentation, where the deposition phase can have a variable duration - represented by repeated values in a series - and where erosion can play an important role by deleting values from the series. The sampling process itself can also produce repeated values - a large stratum sampled twice - or deleted values - a tiny stratum falling between two consecutive samples. We developed a mathematical procedure which, based on the evolution of chemical composition with depth, allows the boundaries and the periodicity of different sedimentary environments to be established. The basic tool is no more than a linear correlation analysis that allows us to detect possible evolution rules, connected with cyclical phenomena within time series (with space treated as time), with the final objective of prediction. A very interesting discovery was the phenomenon of repeated sliding windows, which represent quasi-cycles of a series of quasi-periods. An accurate forecast can be obtained if we are inside a quasi-cycle: it is possible to predict the remaining elements of the cycle with a probability related to the number of repeated and deleted points. Because this is an innovative methodology, its efficiency is being tested in several case studies, with remarkable results that show its efficacy. Keywords: sedimentary environments, sequence stratigraphy, data analysis, time series, conditional probability.
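The abstract describes finding repeated sliding windows that mark quasi-cycles in a depth series, without giving the exact procedure. The sketch below is only a minimal illustration of that idea: it scans a series for later windows that are strongly linearly correlated with an earlier one. The function name, window length and threshold are arbitrary choices, not the authors' parameters.

```python
import numpy as np

def find_repeated_windows(series, window, min_corr=0.95):
    """Return (i, j, r) triples where the windows starting at i and j are
    strongly linearly correlated, i.e. candidate quasi-cycle repetitions."""
    series = np.asarray(series, dtype=float)
    last_start = len(series) - window
    matches = []
    for i in range(last_start + 1):
        a = series[i:i + window]
        for j in range(i + window, last_start + 1):   # later, non-overlapping windows
            r = np.corrcoef(a, series[j:j + window])[0, 1]
            if r >= min_corr:
                matches.append((i, j, r))
    return matches

# Example: a noisy quasi-periodic "composition vs. depth" signal.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
for i, j, r in find_repeated_windows(signal, window=25, min_corr=0.97)[:5]:
    print(f"windows at {i} and {j} repeat (r={r:.3f}); quasi-period ~ {j - i} samples")
```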
Abstract:
Proceedings of the International Conference on Computational Intelligence in Medicine and Healthcare, CIMED 2005, Costa da Caparica, June 29 - July 1, 2005
Abstract:
Project work presented as a partial requirement for obtaining the degree of Master in Geographic Information Systems and Science.
Abstract:
Dissertation submitted to Faculdade de Ciências e Tecnologia - Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Doctor of Philosophy (Biochemistry - Biotechnology)
Abstract:
The complexity of systems is considered an obstacle to the progress of the IT industry. Autonomic computing is presented as an alternative for coping with this growing complexity. It is a holistic approach in which systems are able to configure, heal, optimize, and protect themselves. Web-based applications are an example of systems where the complexity is high. The number of components, their interoperability, and workload variations are factors that may lead to performance failures or unavailability scenarios. The occurrence of these scenarios affects the revenue and reputation of businesses that rely on these types of applications. In this article, we present a self-healing framework for Web-based applications (SHõWA). SHõWA is composed of several modules, which monitor the application, analyze the data to detect and pinpoint anomalies, and execute recovery actions autonomously. The monitoring is done by a small aspect-oriented programming agent. This agent does not require changes to the application source code and includes adaptive and selective algorithms to regulate the level of monitoring. Anomalies are detected and pinpointed by means of statistical correlation. The data analysis detects changes in the server response time and checks whether those changes are correlated with the workload or are due to a performance anomaly. In the presence of performance anomalies, the data analysis pinpoints the anomaly, and SHõWA then executes a recovery procedure. We also present a study of the detection and localization of anomalies, the accuracy of the data analysis, and the performance impact induced by SHõWA. Two benchmarking applications, exercised through dynamic workloads, and different types of anomaly were considered in the study. The results reveal that (1) SHõWA detects and pinpoints anomalies while the number of end users affected is still low; (2) SHõWA was able to detect anomalies without raising any false alarm; and (3) SHõWA does not induce a significant performance overhead (throughput was affected by less than 1%, and the response time delay was no more than 2 milliseconds).
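SHõWA's data analysis distinguishes workload-driven slowdowns from genuine performance anomalies by statistical correlation. The snippet below is a deliberately simplified sketch of that idea, not the framework's actual algorithm: it checks whether the response-time trend in a monitoring window tracks the request rate. The threshold and the sample data are hypothetical.

```python
import numpy as np

def classify_slowdown(response_time, request_rate, corr_threshold=0.7):
    """If the response-time trend tracks the workload, treat the slowdown as
    load-induced; otherwise flag it as a candidate performance anomaly."""
    r = np.corrcoef(response_time, request_rate)[0, 1]
    return "load-induced" if r >= corr_threshold else "performance anomaly"

# Hypothetical monitoring intervals (mean response time in ms, requests/s).
rt_load   = [120, 150, 180, 220, 260, 300]   # grows together with the load
rt_anom   = [120, 125, 400, 410, 415, 420]   # grows although the load is flat
rate_up   = [50, 70, 90, 110, 130, 150]
rate_flat = [50, 52, 51, 50, 53, 52]

print(classify_slowdown(rt_load, rate_up))     # load-induced
print(classify_slowdown(rt_anom, rate_flat))   # performance anomaly
```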
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
The research presented in this paper proposes a novel quantitative model for decomposing and assessing the Value for the Customer. The proposed approach builds on the different dimensions of the Value Network analysis proposed by Verna Allee, with the concept of Value for the Customer proposed by Woodall as background. In this context, the Value for the Customer is modelled as a relationship established between the exchanged deliverables and a combination of tangible and intangible assets projected into their endogenous or exogenous dimensions. The Value Network analysis of the deliverables exchange enables an in-depth understanding of this frontier and the implicit modelling of co-creation scenarios. The proposed Conceptual Model for Decomposing Value for the Customer combines several concepts: from the marketing area, the concept of Value for the Customer; from the intellectual capital area, the concept of Value Network Analysis; from the collaborative networks area, the perspective of the enterprise life cycle and the endogenous and exogenous perspectives; finally, the proposed model is supported by a formal mathematical description that stems from the area of Multi-Criteria Decision Making. The whole concept is illustrated in the context of a case study of an enterprise in the footwear industry (Pontechem). The merits of this approach seem evident from the contact with Pontechem, as it provides a structured approach for enterprises to assess the adequacy of their value proposition to customer needs and how these relate to their endogenous and/or exogenous, tangible or intangible assets. The proposed model may therefore be a useful instrument in supporting the commercialisation of new products and/or services.
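The paper's formal Multi-Criteria Decision Making formulation is not reproduced in the abstract. Purely as an illustrative sketch of how value components could be decomposed across the tangible/intangible and endogenous/exogenous dimensions with a weighted sum (all names, scores and weights below are invented, not the authors' model):

```python
from dataclasses import dataclass

@dataclass
class ValueComponent:
    name: str
    tangible: bool      # tangible vs. intangible asset
    endogenous: bool    # endogenous vs. exogenous dimension
    score: float        # normalized benefit/sacrifice score in [0, 1]
    weight: float       # relative importance elicited from the customer

def decompose(components):
    """Weighted-sum aggregation, reported per (tangibility, dimension) quadrant."""
    per_quadrant, total = {}, 0.0
    for c in components:
        key = ("tangible" if c.tangible else "intangible",
               "endogenous" if c.endogenous else "exogenous")
        contribution = c.weight * c.score
        per_quadrant[key] = per_quadrant.get(key, 0.0) + contribution
        total += contribution
    return total, per_quadrant

components = [
    ValueComponent("product quality",      True,  True,  0.8, 0.4),
    ValueComponent("delivery reliability", True,  False, 0.6, 0.3),
    ValueComponent("brand reputation",     False, False, 0.7, 0.2),
    ValueComponent("design know-how",      False, True,  0.9, 0.1),
]
total, per_quadrant = decompose(components)
print(total, per_quadrant)
```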
Abstract:
Nowadays, Information Technologies (IT) are increasingly vital within organizations. IT is the engine that supports the business. For most organizations, the operation and development of IT rely on dedicated infrastructures (internal or external) called Data Centers (DC). These infrastructures concentrate an organization's data processing and storage equipment and are therefore increasingly challenged with respect to factors such as scalability, availability, fault tolerance, performance, available or provisioned resources, security, energy efficiency and, inevitably, the associated costs. With the emergence of technologies based on cloud computing and virtualization, a whole range of new ways of addressing these challenges opens up. Faced with this new paradigm, new opportunities for DC consolidation arise, which may in turn pose new challenges for DC managers. It is therefore unrealistic, to say the least, for organizations simply to eliminate their DCs or to transform them according to the highest quality standards. Organizations must optimize their DCs; however, an efficient project of this nature, capable of supporting the demands imposed by the market, the needs of the business and the speed of technological evolution, requires complex and costly solutions both to implement and to manage. It is in this context that the present work arises. With the objective of studying DCs, a study of the subject is undertaken in which the concept, its historical evolution, topology, architecture and the existing standards that govern it are detailed. The study then examines some of the main trends shaping the future of DCs. Building on the theoretical knowledge resulting from that study, a DC evaluation methodology based on decision criteria is developed. The work culminates in an analysis of a new technological solution and the evaluation of three possible implementation scenarios: the first based on maintaining the current DC; the second based on implementing the new solution in another DC under an external hosting arrangement; and the third based on an IaaS implementation.
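The dissertation evaluates three implementation scenarios against decision criteria. As a hypothetical illustration of such a criteria-based comparison (the criteria, weights and scores below are invented, not taken from the work):

```python
import numpy as np

# Hypothetical decision matrix: rows = criteria, columns = the three scenarios
# (keep the current DC, external hosting, IaaS). Scores 1-5, weights sum to 1.
criteria = ["scalability", "availability", "fault tolerance",
            "security", "energy efficiency", "cost"]
weights = np.array([0.15, 0.20, 0.15, 0.20, 0.10, 0.20])
scores = np.array([
    [2, 4, 5],   # scalability
    [3, 4, 4],   # availability
    [3, 4, 4],   # fault tolerance
    [4, 3, 3],   # security
    [2, 3, 4],   # energy efficiency
    [3, 3, 2],   # cost (higher score = lower cost)
])

for scenario, value in zip(["current DC", "external hosting", "IaaS"], weights @ scores):
    print(f"{scenario}: {value:.2f}")
```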
Abstract:
Publications are often used as a measure of the success of research work. Human T-lymphotropic virus (HTLV) types 1 and 2 are human retroviruses, discovered in the early 1980s, and it is estimated that 15-20 million people are infected worldwide. This article describes a bibliometric review and a co-authorship network analysis of the literature on HTLV indexed in PubMed over a 24-year period. A total of 7,564 documents were retrieved, showing a decrease in the number of documents from 1996 to 2007. HTLV manuscripts were published in 1,074 journals. Japan and the USA were the countries with the highest contribution in this field (61%), followed by France (8%). The production ranking changed when the number of publications was normalized by population (Dominican Republic and Japan), by gross domestic product (Guinea-Bissau and Gambia), and by gross national income per capita (Brazil and Japan). The present study sheds light on some of the defining features of the scientific collaboration performed by the HTLV research community, such as the existence of core researchers responsible for articulating the development of research in the area, facilitating wider collaborative relationships and the integration of new authors into the research groups.
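A co-authorship network analysis of this kind can be illustrated with a small sketch: authors become nodes, co-authored papers create weighted edges, and "core researchers" that articulate collaboration between groups can be surfaced by betweenness centrality. The author names and papers below are placeholders, and networkx is assumed only for illustration.

```python
import itertools
import networkx as nx

# Placeholder records: each entry is the author list of one retrieved paper.
papers = [
    ["Author A", "Author B", "Author C"],
    ["Author A", "Author D"],
    ["Author B", "Author C"],
    ["Author D", "Author E", "Author A"],
]

G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(authors, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # one more co-authored paper
        else:
            G.add_edge(a, b, weight=1)

# Candidate "core researchers": authors bridging otherwise separate groups,
# ranked here by betweenness centrality.
centrality = nx.betweenness_centrality(G)
for author, c in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(author, round(c, 3))
```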
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management
Abstract:
Transthyretin amyloidosis is a conformational pathology characterized by the extracellular formation of amyloid deposits and the progressive impairment of the peripheral nervous system. Point mutations in this tetrameric plasma protein decrease its stability and are linked to disease onset and progression. Since non-mutated transthyretin also forms amyloid in systemic senile amyloidosis and some mutation bearers remain asymptomatic throughout their lives, non-genetic factors must also be involved in transthyretin amyloidosis. Using a differential proteomics approach, we discovered that extracellular chaperones such as fibrinogen, clusterin, haptoglobin, alpha-1-antitrypsin and alpha-2-macroglobulin are overrepresented in transthyretin amyloidosis. Our data show that a complex network of extracellular chaperones is overrepresented in human plasma, and we speculate that they act synergistically to cope with amyloid-prone proteins. Proteostasis may thus be as important as point mutations in transthyretin amyloidosis.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
Thesis submitted in partial fulfillment of the requirements for the Degree of Doctor of Statistics and Information Management