983 results for Computing models
Abstract:
The Graphics Processing Unit (GPU) is present in almost every modern-day personal computer. Despite its special-purpose design, it has been increasingly used for general computations, with very good results. Hence, there is a growing effort from the community to seamlessly integrate this kind of device into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of the power of CPU and GPU devices is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Also, the most efficient work balance among devices is highly dependent on the computations to be performed and the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of the power of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPU environments, to support CPU computations and efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configurations for a given application and input data size. The evaluation of this work shows that the combination of CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments when compared to GPU-only executions.
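The following is a minimal sketch, in Python rather than the framework's native language, of the kind of static partitioning such offline training can produce: per-device throughputs measured during training determine each device's share of the input. All names and values are illustrative, not Marrow's API.

```python
import numpy as np

def partition(data, cpu_throughput, gpu_throughput):
    """Split the input between CPU and GPU proportionally to the
    throughputs (items/s) measured during the offline training run."""
    share = cpu_throughput / (cpu_throughput + gpu_throughput)
    cut = int(len(data) * share)
    return data[:cut], data[cut:]

# Hypothetical throughputs from a training execution for one data size.
data = np.arange(1_000_000)
cpu_part, gpu_part = partition(data, cpu_throughput=2.0e6, gpu_throughput=6.0e6)
# The CPU processes 25% of the elements; the GPU the remaining 75%.
```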
Abstract:
Breast cancer is the most common cancer among women and a major public health problem. Worldwide, X-ray mammography is the current gold standard for medical imaging of breast cancer. However, it has some well-known limitations. The false-negative rates, up to 66% in symptomatic women, and the false-positive rates, up to 60%, are a continued source of concern and debate. These drawbacks prompt the development of other imaging techniques for breast cancer detection, among which is Digital Breast Tomosynthesis (DBT). DBT is a 3D radiographic technique that reduces the obscuring effect of tissue overlap and appears to address both the false-negative and the false-positive rates. The 3D images in DBT are only achieved through image reconstruction methods. These methods play an important role in a clinical setting, since there is a need for a reconstruction process that is both accurate and fast. This dissertation deals with the optimization of iterative algorithms, using parallel computing on Graphics Processing Units (GPUs) with the Compute Unified Device Architecture (CUDA) to make the 3D reconstruction faster. Iterative algorithms have been shown to produce the highest-quality DBT images, but since they are computationally intensive, their clinical use is currently impractical. These algorithms have the potential to reduce patient dose in DBT scans. A method of integrating CUDA into Interactive Data Language (IDL) is proposed in order to accelerate the DBT image reconstructions; this method has never been attempted before for DBT. In this work the system matrix calculation, the most computationally expensive part of the iterative algorithms, is accelerated. A speedup of 1.6 is achieved, showing that GPUs can accelerate the IDL implementation.
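As an illustration of why the system matrix dominates the cost, the sketch below implements one SIRT-style iteration in Python/NumPy (a generic algebraic reconstruction update, not necessarily the dissertation's algorithm): both matrix products per iteration involve the system matrix A, which is what makes them the natural target for GPU offloading.

```python
import numpy as np

def sirt_step(x, A, b, lam=0.5):
    """One SIRT-style update: x <- x + lam * C A^T R (b - A x),
    where R and C normalize by the row and column sums of A.
    The products A @ x and A.T @ residual dominate the runtime."""
    row_sums = np.maximum(A.sum(axis=1), 1e-12)
    col_sums = np.maximum(A.sum(axis=0), 1e-12)
    residual = (b - A @ x) / row_sums
    return x + lam * (A.T @ residual) / col_sums

# Toy geometry: 16 projection readings, 8 voxels (illustrative sizes only).
rng = np.random.default_rng(0)
A = rng.random((16, 8))
x_true = rng.random(8)
b = A @ x_true
x = np.zeros(8)
for _ in range(200):
    x = sirt_step(x, A, b)
```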
Abstract:
Theoretical epidemiology aims to understand the dynamics of diseases in populations and communities. Biological and behavioral processes are abstracted into mathematical formulations which aim to reproduce epidemiological observations. In this thesis a new system for the self-reporting of syndromic data — Influenzanet — is introduced and assessed. The system is currently being extended to address greater challenges of monitoring the health and well-being of tropical communities. (...)
Abstract:
Amyotrophic Lateral Sclerosis (ALS) is the most severe and common adult-onset disorder affecting motor neurons in the spinal cord, brainstem and cortex, resulting in progressive weakness and death from respiratory failure within two to five years of symptom onset. (...)
Abstract:
Nowadays, a significant increase in the demand for interoperable systems for exchanging data in collaborative business environments has been observed. Consequently, cooperation agreements between the involved enterprises have been brought to light. However, even within the same community or domain there is a wide variety of knowledge representations that are not semantically coincident, which gives rise to interoperability problems in enterprise information systems that need to be addressed. Moreover, most organizations face other problems with their information systems, such as: 1) domain knowledge not being easily accessible by all the stakeholders (even intra-enterprise); 2) domain knowledge not being represented in a standard format; 3) and, even when it is available in a standard format, not being supported by semantic annotations or described using a common and understandable lexicon. This dissertation proposes an approach for the establishment of an enterprise reference lexicon from business models. It addresses the automation of the information model mapping for the construction of the reference lexicon. It aggregates a formal and conceptual representation of the business domain with a clear definition of the lexicon used, to facilitate an overall understanding by all the involved stakeholders, including non-IT personnel.
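One simple ingredient of such automation, sketched below in Python as an assumption about how lexicon candidates could be harvested (not the dissertation's actual mapping method), is tokenizing model element names from different business models and keeping the terms they share. All element names are hypothetical.

```python
import re
from collections import Counter

def terms(name):
    """Split a model element name such as 'CustomerOrder_Line'
    into lower-cased candidate lexicon terms."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", name.replace("_", " "))
    return [p.lower() for p in parts]

# Illustrative element names from two hypothetical business models.
model_a = ["CustomerOrder", "OrderLine", "InvoiceAddress"]
model_b = ["Customer_Order", "Billing_Address", "Order_Line"]

lexicon = Counter(t for name in model_a + model_b for t in terms(name))
# Terms occurring in both models ('customer', 'order', 'line', 'address')
# become strong candidates for the enterprise reference lexicon.
```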
Abstract:
Ontologies formalized by means of Description Logics (DLs) and rules in the form of Logic Programs (LPs) are two prominent formalisms in the field of Knowledge Representation and Reasoning. While DLs adhere to the Open World Assumption and are suited for taxonomic reasoning, LPs implement reasoning under the Closed World Assumption, so that default knowledge can be expressed. However, for many applications it is useful to have a means that allows reasoning over an open domain and expressing rules with exceptions at the same time. Hybrid MKNF knowledge bases make such a means available by formalizing DLs and LPs in a common logic, the Logic of Minimal Knowledge and Negation as Failure (MKNF). Since rules and ontologies are used in open environments such as the Semantic Web, inconsistencies cannot always be avoided. This poses a problem due to the Principle of Explosion, which holds in classical logics. Paraconsistent logics offer a solution to this issue by assigning meaningful models even to contradictory sets of formulas. Consequently, paraconsistent semantics for DLs and LPs have been investigated intensively. Our goal is to apply the paraconsistent approach to the combination of DLs and LPs in hybrid MKNF knowledge bases. In this thesis, a new six-valued semantics for hybrid MKNF knowledge bases is introduced, extending the three-valued approach by Knorr et al., which is based on the well-founded semantics for logic programs. Additionally, a procedural way of computing paraconsistent well-founded models for hybrid MKNF knowledge bases by means of an alternating fixpoint construction is presented, and it is proven that the algorithm is sound and complete w.r.t. the model-theoretic characterization of the semantics. Moreover, it is shown that the new semantics is faithful w.r.t. well-studied paraconsistent semantics for DLs and LPs, respectively, and maintains the efficiency of the approach it extends.
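For readers unfamiliar with alternating fixpoints, the block below recalls the classic construction for the well-founded semantics of a normal logic program P, the scheme that such MKNF computations generalize (the thesis's actual operators over hybrid knowledge bases differ in detail). Here \(\Gamma_P(I)\) denotes the least model of the reduct \(P/I\) and \(\mathcal{HB}_P\) the Herbrand base.

```latex
% Alternating fixpoint for the well-founded semantics of a normal
% logic program P; the MKNF algorithm generalizes this scheme.
\begin{align*}
  \mathbf{P}_0      &= \emptyset,              & \mathbf{N}_0      &= \mathcal{HB}_P,\\
  \mathbf{P}_{i+1}  &= \Gamma_P(\mathbf{N}_i), & \mathbf{N}_{i+1}  &= \Gamma_P(\mathbf{P}_i),\\
  \mathbf{P}_\omega &= \textstyle\bigcup_i \mathbf{P}_i \text{ (true atoms)}, &
  \mathbf{N}_\omega &= \textstyle\bigcap_i \mathbf{N}_i \text{ (non-false atoms)}.
\end{align*}
```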
Abstract:
Computational power is increasing day by day. Despite that, there are some tasks that are still difficult or even impossible for a computer to perform. For example, while identifying a facial expression is easy for a human, for a computer it is still an area under development. To tackle this and similar issues, crowdsourcing has grown as a way to use human computation on a large scale. Crowdsourcing is a novel approach to collecting labels in a fast and cheap manner, by sourcing the labels from the crowd. However, these labels lack reliability, since annotators are not guaranteed to have any expertise in the field. This fact has led to a new research area in which annotation models must be created or adapted to handle such weakly-labeled data. Current techniques explore the annotators' expertise and the task difficulty as variables that influence label correctness. Other specific aspects are also considered by noisy-label analysis techniques. The main contribution of this thesis is the process to collect reliable crowdsourcing labels for a facial expression dataset. This process consists of two steps: first, we design our crowdsourcing tasks to collect annotators' labels; next, we infer the true label from the collected labels by applying state-of-the-art crowdsourcing algorithms. At the same time, a facial expression dataset is created, containing 40,000 images and the respective labels. In the end, we publish the resulting dataset.
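As a baseline for the inference step, the sketch below shows the simplest aggregation scheme, majority voting, in Python. The state-of-the-art algorithms the thesis refers to (e.g. Dawid-Skene-style EM, which additionally estimates per-annotator confusion matrices) refine exactly this step. The labels and class names are illustrative.

```python
import numpy as np

def majority_vote(labels):
    """labels: (n_items, n_annotators) integer matrix, -1 = no answer.
    Returns the most frequent label per item (ties broken arbitrarily)."""
    n_classes = labels.max() + 1
    out = np.empty(labels.shape[0], dtype=int)
    for i, row in enumerate(labels):
        counts = np.bincount(row[row >= 0], minlength=n_classes)
        out[i] = counts.argmax()
    return out

# Three annotators label four images with expressions 0=neutral, 1=happy, 2=sad.
L = np.array([[1, 1, 2],
              [0, 0, 0],
              [2, -1, 2],
              [1, 2, 2]])
print(majority_vote(L))  # -> [1 0 2 2]
```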
Abstract:
Real-time collaborative editing systems are common nowadays, and their advantages are widely recognized. Examples of such systems include Google Docs and ShareLaTeX, among others. This thesis aims to adopt this paradigm in a software development environment. The OutSystems visual language lends itself well to this kind of collaboration, since the visual code enables a natural flow of knowledge between developers regarding the developed code. Furthermore, communication and coordination are simplified. This proposal explores the field of collaboration on a very structured and rigid model, where collaboration is currently made through the copy-modify-merge paradigm, in which a developer gets their own private copy from the shared repository, modifies it in isolation and later uploads the changes to be merged with modifications concurrently produced by other developers. To this end, we designed and implemented an extension to the OutSystems Platform in order to enable real-time collaborative editing. The solution guarantees consistency among the artefacts distributed across several developers working on the same project. We believe that it is possible to achieve a much more intense collaboration over the same models with a low negative impact on the individual productivity of each developer.
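To make the consistency problem concrete, here is a minimal Python sketch of operational transformation for two concurrent text insertions, a classic ingredient of real-time editors such as Google Docs. It is an illustration of the general technique only; the thesis's merge operates over structured OutSystems models, not raw text.

```python
def transform_insert(op_a, op_b):
    """Shift concurrent insertion op_a = (pos, text) so it can be
    applied after op_b = (pos, text) issued against the same state.
    Ties at equal positions are broken by comparing the inserted text."""
    pos_a, text_a = op_a
    pos_b, text_b = op_b
    if pos_a < pos_b or (pos_a == pos_b and text_a <= text_b):
        return (pos_a, text_a)                # op_a unaffected by op_b
    return (pos_a + len(text_b), text_a)      # op_b lands before op_a

doc = "color"
a = (5, "ful")    # developer A appends "ful"
b = (0, "multi")  # developer B concurrently prepends "multi"
a2 = transform_insert(a, b)
doc = doc[:b[0]] + b[1] + doc[b[0]:]          # apply b
doc = doc[:a2[0]] + a2[1] + doc[a2[0]:]       # apply transformed a
print(doc)  # -> "multicolorful", identical on every replica
```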
Abstract:
In the current context of innovation, a large number of studies have analyzed the potential of the Open Innovation model. In this regard, Henry Chesbrough (2003), considered the father of Open Innovation, states that companies are experiencing a "paradigm shift" in the way they develop their innovation processes and commercialize technology and knowledge. The Open Innovation model thus argues that companies can and should use resources available outside their boundaries, this combination of internal and external ideas and technologies being crucial to reaching a market-leading position. Chesbrough (2003) had already stated that innovation is not done in isolation, and the very dynamism of the current scenario reinforces this idea. Thus, the risks inherent to the innovation process can be mitigated through partnerships between companies and institutions. The adoption of the Open Innovation model is perceived on the basis of the abundance of available knowledge, which may also provide value to the company that created it, as in the case of patent licensing. The present study aimed to identify Open Innovation practices among the partnerships mentioned by companies providing Cloud Computing services. Using Social Network Analysis, matrices were built from the partnerships mentioned by the companies and from information obtained from secondary sources (Sousa, 2012). These relationship matrices (networks) were analyzed and represented as diagrams. In this way, it was possible to outline an overview of the partnerships considered strategic by the interviewed companies and to identify which of them in fact constitute Open Innovation practices. Of the 26 strategic partnerships mentioned in the interviews, only 11 were characterized as practices of the open model. The analysis of the practices conducted by the interviewed companies reveals some limitations in the exploitation of the Open Innovation model. Finally, some recommendations are made on the implementation of this model by small and medium-sized enterprises based on emerging technologies, such as the concept of cloud computing.
Abstract:
INTRODUCTION: Malaria is a serious problem in the Brazilian Amazon region, and the detection of possible risk factors could be of great interest for public health authorities. The objective of this article was to investigate the association between environmental variables and the yearly registers of malaria in the Amazon region using Bayesian spatiotemporal methods. METHODS: We used Poisson spatiotemporal regression models to analyze malaria counts in the Brazilian Amazon forest for the period from 1999 to 2008. In this study, we included some covariates that could be important in the yearly prediction of malaria, such as the deforestation rate. We obtained the inferences using a Bayesian approach and Markov Chain Monte Carlo (MCMC) methods to simulate samples from the joint posterior distribution of interest. The discrimination of different models was also discussed. RESULTS: The model proposed here suggests that the deforestation rate, the number of inhabitants per km², and the human development index (HDI) are important in the prediction of malaria cases. CONCLUSIONS: It is possible to conclude that human development, population growth, deforestation, and their associated ecological alterations are conducive to increasing malaria risk. We conclude that the use of Poisson regression models that capture the spatial and temporal effects under the Bayesian paradigm is a good strategy for modeling malaria counts.
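The Python sketch below shows the core of such an MCMC inference on a deliberately stripped-down model: a Metropolis sampler for a Poisson regression with a single covariate, omitting the spatial and temporal random effects of the article's models. Data and coefficient values are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(beta, X, y):
    """Log-posterior of a Poisson GLM with log(mu) = X @ beta and
    independent N(0, 10^2) priors on the coefficients."""
    eta = X @ beta
    return np.sum(y * eta - np.exp(eta)) - np.sum(beta**2) / (2 * 10**2)

def metropolis(X, y, n_iter=5000, step=0.02):
    """Random-walk Metropolis: propose, accept with prob. min(1, ratio)."""
    beta = np.zeros(X.shape[1])
    lp = log_post(beta, X, y)
    samples = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.size)
        lp_prop = log_post(prop, X, y)
        if np.log(rng.random()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        samples.append(beta.copy())
    return np.array(samples)

# Toy data: intercept plus a standardized 'deforestation rate' covariate.
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = rng.poisson(np.exp(X @ np.array([1.0, 0.5])))
draws = metropolis(X, y)
print(draws[2500:].mean(axis=0))  # posterior means near (1.0, 0.5)
```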
Abstract:
This study discusses some fundamental issues affecting whether the development and diffusion of services based on cloud computing happen positively in several countries. To present this subject, public initiatives by the countries most advanced in cloud computing adoption are discussed, along with the Brazilian position in this context. Based on the evidence presented here, it appears that the essential elements for the development and diffusion of cloud computing in Brazil have taken important steps and show signs of maturity, such as the cybercrime legislation. However, other elements still require analysis and adaptations specific to the cloud computing case, such as Intellectual Property Rights. Although broadband services are still lacking, one cannot disregard the government's effort to facilitate access for all of society. In addition, the large volume of the Brazilian IT market is a factor of interest for companies seeking to invest in the country.
Abstract:
This paper analyses the boundaries of the simplified wind turbine models used to represent the behavior of wind turbines in power system stability studies. Based on experimental measurements, the response of the recent simplified (also known as generic) wind turbine models currently being developed under International Standard IEC 61400-27 is compared to that of the complex detailed models elaborated by wind turbine manufacturers. This International Standard, whose Technical Committee was convened in October 2009, is focused on defining generic simulation models for both wind turbines (Part 1) and wind farms (Part 2). The results of this work provide an improved understanding of the usability of generic models for conducting power system simulations.
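The essence of such a comparison is an error metric between the generic model's simulated response and the measured (or detailed-model) response over a test window. The Python sketch below uses a plain RMSE on toy post-fault active-power curves; note that IEC 61400-27 defines its own error measures and evaluation windows, so this is an assumption-laden illustration only.

```python
import numpy as np

def validation_error(measured, simulated):
    """RMSE between measured and simulated responses over a test window
    (illustrative metric; the standard defines its own error measures)."""
    return float(np.sqrt(np.mean((measured - simulated) ** 2)))

t = np.linspace(0.0, 2.0, 401)           # seconds after fault clearance
measured = 1.0 - 0.8 * np.exp(-5.0 * t)  # toy active-power recovery, p.u.
generic  = 1.0 - 0.8 * np.exp(-4.5 * t)  # toy generic-model response, p.u.
print(f"RMSE = {validation_error(measured, generic):.4f} p.u.")
```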
Abstract:
The development of human cell models that recapitulate hepatic functionality allows the study of metabolic pathways involved in toxicity and disease. The increased biological relevance, cost-effectiveness and high throughput of cell models can contribute to increasing the efficiency of drug development in the pharmaceutical industry. Recapitulation of liver functionality in vitro requires the development of advanced culture strategies to mimic in vivo complexity, such as 3D culture, co-cultures or biomaterials. However, complex 3D models are typically associated with poor robustness, limited scalability and limited compatibility with screening methods. In this work, several strategies were used to develop highly functional and reproducible spheroid-based in vitro models of human hepatocytes and HepaRG cells using stirred culture systems. In chapter 2, the isolation of human hepatocytes from resected liver tissue was implemented and a liver tissue perfusion method was optimized towards the improvement of hepatocyte isolation and aggregation efficiency, resulting in an isolation protocol compatible with 3D culture. In chapter 3, human hepatocytes were co-cultivated with mesenchymal stem cells (MSC) and the phenotype of both cell types was characterized, showing that MSC acquire a supportive stromal function and that hepatocytes retain differentiated hepatic functions, stability of drug metabolism enzymes and higher viability in co-cultures. In chapter 4, a 3D alginate microencapsulation strategy for the differentiation of HepaRG cells was evaluated and compared with the standard 2D DMSO-dependent differentiation, yielding higher differentiation efficiency, comparable levels of drug metabolism activity and significantly improved biosynthetic activity. The work developed in this thesis provides novel strategies for the 3D culture of human hepatic cell models which are reproducible, scalable and compatible with screening platforms. The phenotypic and functional characterization of the in vitro systems performed contributes to the state of the art of human hepatic cell models and can be applied to improve the efficiency of pre-clinical drug development, to model disease and, ultimately, to develop cell-based therapeutic strategies for liver failure.
Abstract:
This paper develops the model of Bicego, Grosso, and Otranto (2008) and applies Hidden Markov Models to predict market direction. The paper draws an analogy between financial markets and speech recognition, seeking inspiration from the latter to solve common issues in quantitative investing. Whereas previous works focus mostly on very complex modifications of the original Hidden Markov Model algorithm, the current paper provides an innovative methodology by drawing inspiration from thoroughly tested, yet simple, speech recognition methodologies. By grouping returns into sequences, Hidden Markov Models can predict market direction the same way they are used to identify phonemes in speech recognition. The model proves highly successful in identifying market direction but fails to consistently identify whether a trend is in place. All in all, the current paper seeks to bridge the gap between speech recognition and quantitative finance and, even though the model is not fully successful, several refinements are suggested and the room for improvement is significant.
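A minimal Python sketch of the recognition-style classification the analogy suggests: score a discretized return sequence under one HMM per market direction with the forward algorithm and pick the likelier model, just as competing phoneme models are compared in speech recognition. All parameters are illustrative, not estimated from data, and this is not the paper's exact model.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probs, A: transitions, B[state, symbol]: emissions),
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

# Returns discretized into symbols 0 = down, 1 = up; one toy model per
# market direction, scored like phoneme models over an audio frame sequence.
pi = np.array([0.5, 0.5])
A  = np.array([[0.9, 0.1], [0.1, 0.9]])
B_up   = np.array([[0.3, 0.7], [0.4, 0.6]])  # emits 'up' more often
B_down = np.array([[0.7, 0.3], [0.6, 0.4]])  # emits 'down' more often
seq = np.array([1, 1, 0, 1, 1, 1])
better_up = forward_loglik(seq, pi, A, B_up) > forward_loglik(seq, pi, A, B_down)
print("up" if better_up else "down")  # -> "up"
```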
Abstract:
Natural disasters are events that cause general and widespread destruction of the built environment and are becoming increasingly recurrent. They are a product of vulnerability and community exposure to natural hazards, generating a multitude of social, economic and cultural issues, of which the loss of housing and the subsequent need for shelter is one of the major consequences. Nowadays, numerous factors contribute to increased vulnerability and exposure to natural disasters, such as climate change, whose impacts are felt across the globe and which is currently seen as a worldwide threat to the built environment. The abandonment of disaster-affected areas can also push populations towards regions where natural hazards are felt more severely. Although several actors in the post-disaster scenario provide for shelter needs and recovery programs, housing is often inadequate and unable to resist the effects of future natural hazards. Resilient housing is commonly not addressed due to the urgency of sheltering affected populations. However, by neglecting risks of exposure in construction, houses become vulnerable and are likely to be damaged or destroyed in future natural hazard events. It therefore becomes fundamental to include resilience criteria in housing, which in turn will allow new houses to better withstand the passage of time and natural disasters in the safest way possible. This master thesis is intended to provide guiding principles for housing recovery after natural disasters, particularly in the form of flood-resilient construction, considering that floods are responsible for the largest number of natural disasters. To this purpose, the main structures that house affected populations were identified and analyzed in depth. After assessing the risks and damages that flood events can cause to housing, a methodology was proposed for flood-resilient housing models, in which key criteria that housing should meet were identified. This methodology is based on the US Federal Emergency Management Agency requirements and recommendations for specific flood zones. Finally, a case study in the Maldives — one of the countries most vulnerable to sea level rise resulting from climate change — was analyzed in light of housing recovery in a post-disaster scenario. This analysis was carried out using the proposed methodology, with the intent of assessing the resilience to floods of the housing newly built in the aftermath of the 2004 Indian Ocean Tsunami.