861 results for multiple data sources
Abstract:
Introduction Leprosy remains a relevant public health issue in Brazil. The objective of this study was to analyze the spatial distribution of new cases of leprosy and to detect areas at higher risk of disease in the City of Vitória. Methods This was an ecological study of the spatial distribution of leprosy in the City of Vitória, State of Espírito Santo, between 2005 and 2009. The data came from the available records of the State Health Secretariat of Espírito Santo. Global and local empirical Bayesian methods were used in the spatial analysis to estimate leprosy risk and to smooth random fluctuation in the detection coefficients. Results The study used thematic maps to illustrate that leprosy is distributed heterogeneously across neighborhoods and that it is possible to identify areas with a high risk of disease. The Pearson correlation coefficient of 0.926 (p = 0.001) for the local method indicated highly correlated coefficients. The Moran index was calculated to evaluate the correlation between the incidence rates of adjoining districts. Conclusions We identified the spatial contexts with the highest incidence rates of leprosy in Vitória during the study period. The results contribute to knowledge of the spatial distribution of leprosy in the City of Vitória and can help establish more cost-effective control strategies, because they indicate specific regions and priority planning activities that can interfere with the transmission chain.
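To make the two analysis steps concrete, here is a minimal sketch (not from the paper) of global empirical Bayesian smoothing of detection coefficients followed by a global Moran index, using plain NumPy; the case counts, populations and contiguity matrix are hypothetical placeholders.

import numpy as np

# Hypothetical new-case counts and populations for 5 neighborhoods
cases = np.array([4, 0, 7, 2, 11], dtype=float)
pop   = np.array([8000, 3500, 12000, 5000, 15000], dtype=float)
rates = cases / pop  # crude detection coefficients

# Global empirical Bayes smoothing: shrink each crude rate toward the
# overall mean, more strongly where the population (prior weight) is small.
m = cases.sum() / pop.sum()                          # global mean rate
s2 = np.sum(pop * (rates - m) ** 2) / pop.sum()      # between-area variability
a = max(s2 - m / (pop.sum() / len(pop)), 0.0)        # prior variance, truncated at 0
w = a / (a + m / pop)                                # shrinkage weight per area
smoothed = w * rates + (1 - w) * m

# Global Moran's I on the smoothed rates, with a row-standardized
# hypothetical contiguity matrix W between adjoining neighborhoods.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)
z = smoothed - smoothed.mean()
moran_I = (len(z) / W.sum()) * (z @ W @ z) / (z @ z)
print(smoothed, moran_I)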
Abstract:
With the growth of the internet and the semantic web, together with improvements in communication speed and the rapid growth of storage capacity, the volume of data and information rises considerably every day. Because of this, in recent years there has been growing interest in structures for formal knowledge representation with suitable characteristics, such as the ability to organize data and information and to reuse their content for the generation of new knowledge. Controlled vocabularies, and ontologies in particular, stand out as representation structures with high potential: they allow not only the representation of data but also its reuse for knowledge extraction, with subsequent storage through relatively simple formalisms. However, to ensure that the knowledge in an ontology is always up to date, ontologies need maintenance. Ontology Learning is the area that studies the update and maintenance of ontologies. The literature already reports first results on automatic ontology maintenance, but these are still at a very early stage; updating and maintaining an ontology is still largely a human-driven, and therefore cumbersome, task. The generation of new knowledge to grow an ontology can be based on Data Mining, the area that studies techniques for data processing, pattern discovery and knowledge extraction in information systems. This work proposes a novel semi-automatic method for knowledge extraction from unstructured data sources, using Data Mining techniques, namely pattern discovery, aimed at improving the precision of the concepts and semantic relations present in an ontology. To verify the applicability of the proposed method, a proof of concept was developed and applied in the building and construction sector, and its results are presented.
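As a rough illustration of the pattern-discovery step such a method builds on, the sketch below mines co-occurrence patterns between known concepts in unstructured text and proposes candidate semantic relations for expert review; the corpus, seed concepts and support threshold are invented for the example and do not reproduce the authors' implementation.

from collections import Counter
from itertools import combinations
import re

# Hypothetical corpus from the building and construction domain
documents = [
    "The concrete slab rests on the foundation and the steel beam.",
    "A steel beam transfers loads from the slab to the column.",
    "The foundation supports the column and distributes loads to the soil.",
]
# Seed concepts assumed to exist already in the ontology
concepts = {"slab", "beam", "column", "foundation", "soil", "load"}

def terms(doc):
    tokens = re.findall(r"[a-z]+", doc.lower())
    return {t.rstrip("s") for t in tokens} & concepts   # crude normalization

# Count how often pairs of known concepts co-occur in the same document;
# frequent pairs become candidate semantic relations for the expert to review,
# which is where the semi-automatic character comes in.
pair_counts = Counter()
for doc in documents:
    for a, b in combinations(sorted(terms(doc)), 2):
        pair_counts[(a, b)] += 1

min_support = 2
candidates = [pair for pair, n in pair_counts.items() if n >= min_support]
print(candidates)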
Abstract:
The main objective of this pedagogical case study is to analyse the market entry dynamics of innovative pharmaceutical drugs in Portugal and the role and impact of the different stakeholders in this process. The case focuses on the market entry of Vyndaqel (Tafamidis), Pfizer's innovative orphan product to treat TTR-FAP ("paramiloidose"), a highly incapacitating rare disease with more than 2,000 diagnosed patients in Portugal, one of the highest prevalences worldwide, and an incidence of 100 new patients every year. In terms of methodology, two main sources of information were used. For secondary data, an exhaustive search was made using the main specialty search engines covering the Tafamidis case, market access, orphan drugs and the market entry context in Portugal and Europe. For primary data, 7 direct interviews were conducted with the main case stakeholders. The pedagogical case study focuses on 5 main questions that provide the basis for class discussion. First, it analyses the rationale behind the introduction of Tafamidis in Portugal and its relevance for Pfizer, namely given the previous investment made in the acquisition of FoldRX, the company that originally developed the product, for $400M. It also analyses the point of view of the NHS and the reasoning behind the drug reimbursement decision, which considered not only the technical (efficacy and safety) and financial benefits of the drug but also its social impact, given the major role played by patient associations and the media coverage that influenced the reimbursement decision. Finally, it analyses the vertical financing methodology selected by the Ministry of Health for drug acquisition by the 2 public hospitals that served as reference centres for the treatment of this disease.
Abstract:
The objective of this work is to present the results of an analysis of the conceptions of two protagonists of a curricular reform being implemented in an engineering school. The main characteristic of the new curriculum is the use of projects and workshops as complementary activities to be carried out by the students. These complementary activities take place in parallel with the work done in the regular courses, without an interdisciplinary relationship between them. The new curriculum has been in place since February 2015. According to Pacheco (2005), there are two moments, among others, in the process of curricular change: the "ideal" curriculum, shaped by epistemological, political, economic, ideological, technical, aesthetic and historical dimensions and directly influenced by those who conceive and create the new curriculum; and the "formal" curriculum, which translates into the practice implemented in the school. These are the two stages studied in this research. To this end, two protagonists were considered as data sources, one more closely linked to the conception of the curriculum and the other to its implementation, from whom we seek to understand the motivations, beliefs and perceptions that, in turn, shape the curricular reform. Semi-structured interviews were used as the research technique, with the purpose of understanding the genesis of the proposal and the changes between these two stages. The data reveal that changes occurred from the idealization to the formalization of the curriculum, motivated by demands of the implementation process, and also reveal differences in the vision of curriculum and in the motivation to break with established patterns in the education of engineers in Brazil.
Abstract:
Master's dissertation in Informatics Engineering.
Abstract:
Usually, data warehouse populating processes are data-oriented workflows composed of dozens of granular tasks that are responsible for integrating data coming from different data sources. Specific subsets of these tasks can be grouped into a collection, together with their relationships, in order to form higher-level constructs. Increasing task granularity allows for the generalization of processes, simplifying their views and providing a way to carry expertise over to new applications. Well-proven practices can be used to describe general solutions that rely on basic skeletons configured and instantiated according to a set of specific integration requirements. Patterns can be applied to ETL processes with the aim of simplifying not only their conceptual representation but also reducing the gap that often exists between the two design perspectives. In this paper, we demonstrate the feasibility and effectiveness of an ETL pattern-based approach using task clustering, analyzing a real-world ETL scenario through the definition of two commonly used clusters of tasks: a data lookup cluster and a data conciliation and integration cluster.
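A minimal sketch of what a data lookup cluster could look like once expressed as a reusable, configurable pattern; the class and field names are illustrative assumptions, not the notation used in the paper.

from dataclasses import dataclass
from typing import Callable, Iterable

# A reusable "data lookup" cluster: enrich incoming records with a surrogate
# key from a dimension table, routing unmatched records to an error flow.
@dataclass
class LookupPattern:
    dimension: dict                      # natural key -> surrogate key
    key_of: Callable[[dict], object]     # extracts the natural key from a record
    on_miss: Callable[[dict], None]      # what to do when no match is found

    def apply(self, records: Iterable[dict]):
        for rec in records:
            sk = self.dimension.get(self.key_of(rec))
            if sk is None:
                self.on_miss(rec)        # e.g. quarantine for later conciliation
            else:
                yield {**rec, "customer_sk": sk}

# Instantiating the same skeleton for one specific integration requirement
dim_customer = {"C-001": 10, "C-002": 11}
rejected = []
lookup = LookupPattern(dim_customer, key_of=lambda r: r["customer_id"],
                       on_miss=rejected.append)
rows = [{"customer_id": "C-001", "amount": 5.0}, {"customer_id": "C-999", "amount": 2.5}]
print(list(lookup.apply(rows)), rejected)

The same skeleton would be re-instantiated for each integration requirement, which is the kind of generalization and view simplification that grouping granular tasks into clusters aims for.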
Abstract:
In Intensive Medicine, medical information is presented in many ways, depending on the type of data collected and stored. The way in which the information is presented can make it difficult for intensivists to quickly understand the patient's condition, and the situation is even worse when several types of clinical data sources need to be crossed. This research explores a new way of presenting information about patients, based on the timeframe in which events occur. By developing an interactive Patient Timeline, intensivists will have access to a new real-time environment where they can consult the patient's clinical history and the data collected up to that moment. The medical history will be available from the moment the patient is admitted to the ICU until discharge, allowing intensivists to examine data regarding vital signs, medication, exams, and more. Through the use of information and models produced by the INTCare system, the timeline also intends to combine several clinical data sources in order to help anticipate the patient's future condition. This platform will help intensivists make more accurate decisions. This paper presents the first approach to the designed solution.
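A minimal sketch of the underlying idea, assuming a simple event record per clinical source; the event types, fields and values are placeholders rather than the INTCare data model.

from dataclasses import dataclass
from datetime import datetime
from heapq import merge

@dataclass(frozen=True)
class TimelineEvent:
    when: datetime
    source: str      # e.g. "vital_signs", "medication", "lab_exam"
    detail: str

# Hypothetical events coming from separate clinical data sources,
# each already sorted by time within its own source.
vitals = [TimelineEvent(datetime(2024, 1, 5, 8, 0), "vital_signs", "HR 92 bpm"),
          TimelineEvent(datetime(2024, 1, 5, 9, 0), "vital_signs", "HR 110 bpm")]
meds   = [TimelineEvent(datetime(2024, 1, 5, 8, 30), "medication", "noradrenaline started")]
exams  = [TimelineEvent(datetime(2024, 1, 5, 8, 45), "lab_exam", "lactate 3.1 mmol/L")]

# Merge the per-source streams into one chronological patient timeline,
# the kind of view an interactive tool would render from admission to discharge.
timeline = list(merge(vitals, meds, exams, key=lambda e: e.when))
for ev in timeline:
    print(ev.when.isoformat(timespec="minutes"), ev.source, ev.detail)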
Abstract:
PhD thesis in Educational Sciences (specialization in Politics of Education).
Abstract:
The MAP-i Doctoral Programme in Informatics, of the Universities of Minho, Aveiro and Porto
Abstract:
Under the framework of constraint-based modeling, genome-scale metabolic models (GSMMs) have been used for several tasks, such as metabolic engineering and phenotype prediction. More recently, their application in health-related research has spanned drug discovery, biomarker identification and host-pathogen interactions, targeting diseases such as cancer, Alzheimer's disease, obesity or diabetes. In recent years, the development of novel techniques for genome sequencing and other high-throughput methods, together with advances in Bioinformatics, has allowed the reconstruction of GSMMs for human cells. Considering the diversity of cell types and tissues present in the human body, it is imperative to develop tissue-specific metabolic models. Methods to automatically generate these models, based on generic human metabolic models and a plethora of omics data, have been proposed. However, their results have not yet been adequately and critically evaluated and compared. This work presents a survey of the most important tissue- or cell-type-specific metabolic model reconstruction methods, which use literature, transcriptomics, proteomics and metabolomics data, together with a global template model. As a case study, we analyzed the consistency between several omics data sources and reconstructed distinct metabolic models of hepatocytes using different methods and data sources as inputs. The results show that the omics data sources overlap poorly and, in some cases, are even contradictory. Additionally, the generated hepatocyte metabolic models are in many cases unable to perform metabolic functions known to be present in liver tissue. We conclude that reliable methods for a priori omics data integration are required to support the reconstruction of complex models of human cells.
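A minimal sketch of the kind of consistency check described, assuming each omics source yields a set of genes considered active in hepatocytes; the identifiers below are placeholders, not data from the case study.

from itertools import combinations

# Hypothetical sets of genes called active in hepatocytes
# according to three different omics evidence sources.
evidence = {
    "transcriptomics": {"CYP3A4", "ALB", "APOA1", "G6PC", "PCK1"},
    "proteomics":      {"CYP3A4", "ALB", "F2", "TTR"},
    "literature":      {"ALB", "APOA1", "F2", "CPS1"},
}

# Pairwise Jaccard indices quantify how well the sources agree; low values
# flag the kind of disagreement reported for the hepatocyte case study.
for (a, sa), (b, sb) in combinations(evidence.items(), 2):
    jaccard = len(sa & sb) / len(sa | sb)
    print(f"{a} vs {b}: Jaccard = {jaccard:.2f}")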
Abstract:
Over the last decades, the development and use of Geographic Information Systems (GIS) and satellite positioning systems (GPS) has been promoted with the aim of improving the productive efficiency of extensive cropping systems in agronomic, economic and environmental terms. These new technologies make it possible to measure the spatial variability of site properties, such as apparent electrical conductivity and other terrain attributes, as well as their effect on the spatial distribution of yields. Site-specific management can then be applied within fields to improve the efficiency of agrochemical use, the protection of the environment, and the sustainability of rural life. Currently, there is a wide range of precision agriculture technologies for capturing spatial variation across sites within a field. The optimal use of the large volume of data generated by precision agriculture machinery depends strongly on the capacity to explore the information about the complex interactions that underlie productive outcomes. The spatial covariation between site properties and crop yield has been studied with classical geostatistical models based on the theory of regionalized variables. New developments in contemporary statistical modeling, notably linear mixed models, constitute promising tools for handling spatially correlated data. Moreover, given the multivariate nature of the many variables recorded at each site, multivariate analysis techniques could provide valuable information for the visualization and exploitation of georeferenced data. Understanding the agronomic basis of the complex interactions that occur at the scale of production fields is now possible with the use of these new technologies. The objectives of this project are: (1) to develop methodological strategies based on the complementarity of multivariate and geostatistical analysis techniques, for the classification of within-field sites and the study of the interdependencies between site and yield variables; (2) to propose alternative mixed models, based on spatial correlation functions for the error terms, that allow the exploration of spatial correlation patterns of within-field yields and of soil properties in the delimited sites. From the last decades, the use and development of Geographic Information Systems (GIS) and Satellite Positioning Systems (GPS) has been strongly promoted in cropping systems. Such technologies allow measuring the spatial variability of site properties, including electrical conductivity and other soil features, as well as their impact on the spatial variability of yields. Therefore, site-specific management could be applied to improve the efficiency in the use of agrochemicals, the protection of the environment, and the sustainability of rural life. Currently, there is a wide offer of technological resources to capture spatial variation across sites within a field. However, the optimum use of data coming from precision agriculture machinery strongly depends on the capability to explore the information about the complex interactions underlying the productive outputs.
The covariation between spatial soil properties and yields from georeferenced data has been treated in a graphical manner or with standard geostatistical approaches. New statistical modeling capabilities from the linear mixed model framework are promising for dealing with correlated data such as those produced by precision agriculture. Moreover, rescuing the multivariate nature of the multiple data collected at each site, several multivariate statistical approaches could be crucial tools for the analysis of georeferenced data. Understanding the basis of the complex interactions at the scale of production fields is now within reach with the use of these new techniques. Our main objectives are: (1) to develop new statistical strategies, based on the complementarity of geostatistics and multivariate methods, useful to classify sites within fields grown with grain crops and to analyze the interrelationships of several soil and yield variables; (2) to propose linear mixed models to predict yield according to spatial soil variability and to build contour maps to promote a more sustainable agriculture.
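A minimal sketch of how the two proposed directions could be combined in practice, using simulated data: k-means on standardized site properties stands in for the multivariate classification of within-field sites, and a Gaussian process over field coordinates stands in for a spatially correlated yield model used to build a prediction map; none of the variables, parameters or models are taken from the project itself.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Simulated georeferenced field data: coordinates, two site properties
# (apparent electrical conductivity, elevation) and grain yield.
xy   = rng.uniform(0, 100, size=(200, 2))
ec_a = 20 + 0.1 * xy[:, 0] + rng.normal(0, 1, 200)
elev = 50 - 0.05 * xy[:, 1] + rng.normal(0, 0.5, 200)
yield_t = 6 + 0.08 * ec_a - 0.03 * elev + rng.normal(0, 0.3, 200)

# (1) Multivariate classification of within-field sites into zones
soil = StandardScaler().fit_transform(np.column_stack([ec_a, elev]))
zones = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(soil)

# (2) A spatially correlated model of yield: a Gaussian process over the
# field coordinates plays the role of the spatially structured error term.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(), normalize_y=True)
gp.fit(xy, yield_t)
grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)])
yield_map = gp.predict(grid)          # values for a contour/prediction map
print(zones[:10], yield_map[:5].round(2))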
Abstract:
Most modern computer systems are processor-dominant: memory is treated as a slave element whose major task is to serve the data requirements of the execution units. This organization is based on the classical von Neumann computer model, proposed more than seven decades ago, and it suffers from a substantial processor-memory bottleneck because of the huge disparity between processor and memory speeds. To address this problem, in this paper we propose a novel architecture and organization of processors and computers that attempts to provide a stronger match between the processing and memory elements in the system. The proposed model uses a memory-centric architecture in which execution hardware is added to the memory code blocks, allowing them to perform instruction scheduling and execution, manage data requests and responses, and communicate directly with the data memory blocks without using registers. This organization allows concurrent execution of all threads, processes, or program segments that fit in memory at a given time. We therefore describe several possibilities for organizing the proposed memory-centric system with multiple merged data and logic-memory blocks, using a high-speed interconnection switching network.
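Purely as a conceptual toy (software, not the proposed hardware), the sketch below models memory blocks that carry their own execution logic and advance their resident program segments concurrently, without a central register file mediating every access; the instruction format and block layout are invented for illustration.

from dataclasses import dataclass, field

# Toy model: each memory block holds both a program segment and the logic
# needed to execute it, so blocks advance their own threads locally.
@dataclass
class LogicMemoryBlock:
    name: str
    data: dict                                     # the block's local data memory
    program: list = field(default_factory=list)    # simple (op, key, value) instructions
    pc: int = 0                                    # per-block program counter

    def step(self):
        if self.pc >= len(self.program):
            return False
        op, key, value = self.program[self.pc]
        if op == "add":
            self.data[key] = self.data.get(key, 0) + value
        self.pc += 1
        return True

blocks = [
    LogicMemoryBlock("B0", {"x": 1}, [("add", "x", 2), ("add", "x", 3)]),
    LogicMemoryBlock("B1", {"y": 10}, [("add", "y", 5)]),
]

# Every resident segment makes progress in each cycle: concurrent execution
# of everything that fits in memory, which is the behaviour the paper targets.
running = True
while running:
    running = False
    for b in blocks:
        running = b.step() or running
print({b.name: b.data for b in blocks})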
Abstract:
This technical background paper describes the methods applied and the data sources used in the compilation of the 1980-2003 data set for material flow accounts of the Mexican economy, and presents the data set. It is organised in four parts: the first part gives an overview of the Material Flow Accounting (MFA) methodology. The second part presents the main material flows of the Mexican economy, including biomass, fossil fuels, metal ores, industrial minerals and construction minerals; the aim of this part is to explain the procedures and methods followed and the data sources used, as well as to provide a brief evaluation of the quality and reliability of the information used and the accounts established. Finally, some conclusions are provided.
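For orientation only, a minimal sketch of the core economy-wide MFA indicator (domestic material consumption = domestic extraction + imports - exports) computed over the material categories listed above; the tonnages are placeholders, not figures from the Mexican data set.

# Economy-wide material flow indicator in the standard MFA tradition:
# domestic material consumption (DMC) = domestic extraction + imports - exports.
flows_mt = {  # million tonnes per year, by material category (placeholder values)
    "biomass":               {"extraction": 300.0, "imports": 20.0, "exports": 15.0},
    "fossil_fuels":          {"extraction": 220.0, "imports": 10.0, "exports": 90.0},
    "metal_ores":            {"extraction": 40.0,  "imports": 5.0,  "exports": 12.0},
    "industrial_minerals":   {"extraction": 25.0,  "imports": 3.0,  "exports": 2.0},
    "construction_minerals": {"extraction": 400.0, "imports": 1.0,  "exports": 0.5},
}

def dmc(f):
    return f["extraction"] + f["imports"] - f["exports"]

total_dmc = sum(dmc(f) for f in flows_mt.values())
print({k: round(dmc(f), 1) for k, f in flows_mt.items()}, round(total_dmc, 1))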
Abstract:
BACKGROUND: Lipid-lowering therapy is costly but effective at reducing coronary heart disease (CHD) risk. OBJECTIVE: To assess the cost-effectiveness and public health impact of Adult Treatment Panel III (ATP III) guidelines and compare with a range of risk- and age-based alternative strategies. DESIGN: The CHD Policy Model, a Markov-type cost-effectiveness model. DATA SOURCES: National surveys (1999 to 2004), vital statistics (2000), the Framingham Heart Study (1948 to 2000), other published data, and a direct survey of statin costs (2008). TARGET POPULATION: U.S. population age 35 to 85 years. TIME HORIZON: 2010 to 2040. PERSPECTIVE: Health care system. INTERVENTION: Lowering of low-density lipoprotein cholesterol with HMG-CoA reductase inhibitors (statins). OUTCOME MEASURE: Incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Full adherence to ATP III primary prevention guidelines would require starting (9.7 million) or intensifying (1.4 million) statin therapy for 11.1 million adults and would prevent 20,000 myocardial infarctions and 10,000 CHD deaths per year at an annual net cost of $3.6 billion ($42,000/QALY) if low-intensity statins cost $2.11 per pill. The ATP III guidelines would be preferred over alternative strategies if society is willing to pay $50,000/QALY and statins cost $1.54 to $2.21 per pill. At higher statin costs, ATP III is not cost-effective; at lower costs, more liberal statin-prescribing strategies would be preferred; and at costs less than $0.10 per pill, treating all persons with low-density lipoprotein cholesterol levels greater than 3.4 mmol/L (>130 mg/dL) would yield net cost savings. RESULTS OF SENSITIVITY ANALYSIS: Results are sensitive to the assumptions that LDL cholesterol becomes less important as a risk factor with increasing age and that little disutility results from taking a pill every day. LIMITATION: Randomized trial evidence for statin effectiveness is not available for all subgroups. CONCLUSION: The ATP III guidelines are relatively cost-effective and would have a large public health impact if implemented fully in the United States. Alternate strategies may be preferred, however, depending on the cost of statins and how much society is willing to pay for better health outcomes. FUNDING: Flight Attendants' Medical Research Institute and the Swanson Family Fund. The Framingham Heart Study and Framingham Offspring Study are conducted and supported by the National Heart, Lung, and Blood Institute.
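The headline figures can be related through the standard incremental cost-effectiveness ratio; the short sketch below only rearranges the numbers quoted in the abstract and adds no new results.

# Incremental cost-effectiveness ratio (ICER): incremental net cost divided by
# incremental QALYs gained, using the base-case figures quoted above.
net_cost_per_year = 3.6e9          # dollars per year, ATP III full adherence
icer = 42_000                      # dollars per QALY, as reported

# The implied annual QALY gain is just the quotient of the two reported figures;
# this is illustrative arithmetic, not an additional result of the model.
qalys_gained = net_cost_per_year / icer
print(f"about {qalys_gained:,.0f} QALYs gained per year")

# A strategy is "preferred" at a willingness-to-pay threshold when its ICER
# falls below that threshold, e.g. $42,000/QALY < $50,000/QALY.
willingness_to_pay = 50_000
print(icer < willingness_to_pay)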
Abstract:
OBJECTIVE: To investigate the planning of subgroup analyses in protocols of randomised controlled trials and the agreement with corresponding full journal publications. DESIGN: Cohort of protocols of randomised controlled trials and subsequent full journal publications. SETTING: Six research ethics committees in Switzerland, Germany, and Canada. DATA SOURCES: 894 protocols of randomised controlled trials involving patients approved by participating research ethics committees between 2000 and 2003, and 515 subsequent full journal publications. RESULTS: Of 894 protocols of randomised controlled trials, 252 (28.2%) included one or more planned subgroup analyses. Of those, 17 (6.7%) provided a clear hypothesis for at least one subgroup analysis, 10 (4.0%) anticipated the direction of a subgroup effect, and 87 (34.5%) planned a statistical test for interaction. Industry sponsored trials planned subgroup analyses more often than investigator sponsored trials (195/551 (35.4%) v 57/343 (16.6%), P<0.001). Of 515 identified journal publications, 246 (47.8%) reported at least one subgroup analysis. In 81 (32.9%) of the 246 publications reporting subgroup analyses, the authors stated that the subgroup analyses were prespecified, but this was not supported by the corresponding protocol in 28 (34.6%) cases. In 86 publications the authors claimed a subgroup effect, but only 36 (41.9%) of the corresponding protocols reported a planned subgroup analysis. CONCLUSIONS: Subgroup analyses are insufficiently described in the protocols of randomised controlled trials submitted to research ethics committees, and investigators rarely specify the anticipated direction of subgroup effects. More than one third of statements about subgroup prespecification in publications of randomised controlled trials had no documentation in the corresponding protocols. Definitive judgments about the credibility of claimed subgroup effects are not possible without access to the protocols and analysis plans of randomised controlled trials.
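As a quick check of the sponsor comparison reported above, the sketch below runs a chi-squared test (one reasonable choice; the abstract does not state which test was used) on the stated 2x2 counts with SciPy, and is consistent with the reported P<0.001.

from scipy.stats import chi2_contingency

# 2x2 table from the figures above: trials with at least one planned subgroup
# analysis versus trials without, split by sponsor type.
table = [[195, 551 - 195],    # industry sponsored: 195/551 planned
         [57,  343 - 57]]     # investigator sponsored: 57/343 planned
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # p is far below 0.001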