944 results for discovery driven analysis
Abstract:
Health workforce planning is a relevant policy-making issue, given the significant changes in care, needs, and demographic and socioeconomic characteristics. Such planning consists of estimating the number of professionals needed to achieve given objectives, and several methods exist for carrying it out. According to the Direção Geral de Saúde, one Speech Therapist per 60,000 inhabitants is considered adequate, a value derived from disease prevalence studies. However, staffing levels are closely tied to productivity, which is measured through units such as procedures. In this field, factors such as patient complexity and indirect work can influence the final output. This study set out to assess the need for human resources in Speech Therapy by analysing the activity of these services in hospitals of the Lisboa e Vale do Tejo region and applying the staffing formula proposed by the Ministry of Health, which is based on a supply model. Twenty-three Speech Therapists from 9 hospital institutions took part in the study. A daily work record sheet was designed and completed over five non-consecutive days, capturing the time spent on the different activities. It was found that 63.21% of working hours are spent on direct acts and 36.76% on patient-related indirect acts, which are not counted in the proposed formula. When both components (direct and indirect acts) are included, the number of professionals in the Lisboa e Vale do Tejo region proves adequate, although an institution-by-institution analysis contradicts this result.
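A minimal sketch of the two staffing estimates discussed above: the population-ratio rule of thumb and a workload-based alternative that discounts for indirect acts. The function names, default values other than the quoted 60,000 ratio and 63.21% direct share, and the example inputs are all illustrative assumptions, not study data:

```python
# Sketch of the staffing arithmetic; all example figures are illustrative.

def therapists_by_population(population: float, ratio: float = 60_000) -> float:
    """DGS rule of thumb: one speech therapist per `ratio` inhabitants."""
    return population / ratio

def therapists_by_workload(direct_hours_needed: float,
                           weekly_hours: float = 35.0,
                           direct_share: float = 0.6321) -> float:
    """Workload-based estimate: only `direct_share` of the working week is
    available for direct acts; the remainder goes to patient-related
    indirect work not counted in the proposed formula."""
    return direct_hours_needed / (weekly_hours * direct_share)

print(therapists_by_population(3_600_000))                # population-ratio estimate
print(therapists_by_workload(direct_hours_needed=1400))   # FTEs to cover 1400 direct hours/week
```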
Abstract:
Obesity has become a major worldwide challenge to public health, owing to an interaction between the Western 'obesogenic' environment and a strong genetic contribution. Recent extensive genome-wide association studies (GWASs) have identified numerous single nucleotide polymorphisms associated with obesity, but these loci together account for only a small fraction of the known heritable component. Thus, the 'common disease, common variant' hypothesis is increasingly coming under challenge. Here we report a highly penetrant form of obesity, initially observed in 31 subjects who were heterozygous for deletions of at least 593 kilobases at 16p11.2 and whose ascertainment included cognitive deficits. Nineteen similar deletions were identified from GWAS data in 16,053 individuals from eight European cohorts. These deletions were absent from healthy non-obese controls and accounted for 0.7% of our morbid obesity cases (body mass index (BMI) ≥ 40 kg m⁻² or BMI standard deviation score ≥ 4; P = 6.4 × 10⁻⁸, odds ratio 43.0), demonstrating the potential importance in common disease of rare variants with strong effects. This highlights a promising strategy for identifying missing heritability in obesity and other complex traits: cohorts with extreme phenotypes are likely to be enriched for rare variants, thereby improving power for their discovery. Subsequent analysis of the loci so identified may well reveal additional rare variants that further contribute to the missing heritability, as recently reported for SIM1 (ref. 3). The most productive approach may therefore be to combine the 'power of the extreme' in small, well-phenotyped cohorts, with targeted follow-up in case-control and population cohorts.
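A hedged sketch of the kind of 2×2 association test behind a reported odds ratio and P value. The counts below are placeholders, not the cohort data; note that a zero cell (as with the study's controls, where no carriers were found) would make the sample odds ratio infinite, so a nonzero placeholder is used to keep the example finite:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = deletion carrier yes/no,
# columns = morbid obesity cases vs. non-obese controls.
# These counts are illustrative placeholders only.
table = [[19, 1],          # carriers among cases / among controls
         [2681, 13352]]    # non-carriers among cases / among controls

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"OR = {odds_ratio:.1f}, P = {p_value:.1e}")
```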
Abstract:
The human auditory cortex comprises the supratemporal plane and large parts of the temporal and parietal convexities. We have investigated the relevant intrahemispheric cortico-cortical connections using in vivo DSI tractography combined with landmark-based registration, automatic cortical parcellation and whole-brain structural connection matrices in 20 right-handed male subjects. On the supratemporal plane, the pattern of connectivity was related to the architectonically defined early-stage auditory areas. It revealed a three-tier architecture characterized by a cascade of connections from the primary auditory cortex to six adjacent non-primary areas and from there to the superior temporal gyrus. Graph theory-driven analysis confirmed the cascade-like connectivity pattern and demonstrated a strong degree of segregation and hierarchy within early-stage auditory areas. Putative higher-order areas on the temporal and parietal convexities had more widely spread local connectivity and long-range connections with the prefrontal cortex; analysis of optimal community structure revealed five distinct modules in each hemisphere. The pattern of temporo-parieto-frontal connectivity was partially asymmetrical. In conclusion, the human early-stage auditory cortical connectivity, as revealed by in vivo DSI tractography, has strong similarities with that of non-human primates. The modular architecture and hemispheric asymmetry in higher-order regions are compatible with segregated processing streams and lateralization of cognitive functions.
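A small sketch, in Python with networkx, of the graph-theoretic step described above (community detection by modularity on a structural connection matrix). The random matrix is a stand-in for the real DSI-derived, parcellation-based connectome, and modularity maximization is one standard way to find optimal community structure:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Placeholder connection matrix standing in for a DSI-derived,
# parcellation-based structural connectome (symmetric, weighted, no self-loops).
rng = np.random.default_rng(0)
w = rng.random((40, 40))
conn = (w + w.T) / 2
np.fill_diagonal(conn, 0.0)

G = nx.from_numpy_array(conn)

# Modularity-based community structure (the abstract reports five
# modules per hemisphere on the real data).
communities = greedy_modularity_communities(G, weight="weight")
print(f"{len(communities)} modules found")
for i, members in enumerate(communities):
    print(i, sorted(members))
```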
Abstract:
The growing population in cities increases energy demand and affects the environment through increased carbon emissions. Information and communications technology solutions that enable energy optimization are needed to address this growing energy demand in cities and to reduce carbon emissions. District heating systems optimize energy production by reusing waste energy with combined heat and power plants. Forecasting the heat load demand in residential buildings assists in optimizing energy production and consumption in a district heating system. However, the presence of a large number of factors, such as weather forecasts, district heating operational parameters and user behavioural parameters, makes heat load forecasting a challenging task. This thesis proposes a probabilistic machine learning model using a Naive Bayes classifier to forecast the hourly heat load demand for three residential buildings in the city of Skellefteå, Sweden over the winter and spring seasons. The district heating data collected from sensors installed at the residential buildings in Skellefteå is used to build the Bayesian network to forecast the heat load demand for horizons of 1, 2, 3, 6 and 24 hours. The proposed model is validated using four cases that study the influence of various parameters on the heat load forecast, through trace-driven analysis in Weka and GeNIe. Results show that current heat load consumption and the outdoor temperature forecast are the two parameters with the most influence on the heat load forecast. The proposed model achieves average accuracies of 81.23% and 76.74% for a forecast horizon of 1 hour in the three buildings for the winter and spring seasons, respectively. The model also achieves an average accuracy of 77.97% for the three buildings across both seasons for the 1-hour forecast horizon while using only 10% of the training data. The results indicate that even a simple model like a Naive Bayes classifier can forecast heat load demand using little training data.
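As a rough illustration of this setup, here is a minimal sketch using scikit-learn's Gaussian Naive Bayes in place of the Weka/GeNIe models; the feature names, value ranges and synthetic data are assumptions for illustration, not the thesis's data:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic stand-in for district heating sensor data:
# features = [current heat load, outdoor temperature forecast],
# target = discretised heat load class 1 hour ahead.
rng = np.random.default_rng(42)
n = 2000
current_load = rng.uniform(50, 300, n)     # kW, hypothetical range
temp_forecast = rng.uniform(-25, 15, n)    # degrees C, hypothetical range
future_load = current_load * 0.8 - temp_forecast * 4 + rng.normal(0, 15, n)
y = np.digitize(future_load, bins=[100, 200, 300])  # four demand classes

X = np.column_stack([current_load, temp_forecast])

# Train on only 10% of the data, mirroring the thesis's observation
# that little training data is needed.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.9, random_state=0)

model = GaussianNB().fit(X_train, y_train)
print(f"1-hour-ahead accuracy: {model.score(X_test, y_test):.2%}")
```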
Abstract:
This thesis deals with the cultural chronology of the Amerindians of the American Northeast. It aims to document one of the cultural episodes of the prehistory of the Estrie region, the Early Middle Woodland, between 400 BCE and 500 CE. Pottery typical of this period has been collected at the Vieux-Pont site (BiEx-1) in Lennoxville by amateur and professional archaeologists since its discovery. The analysis of the pottery sherds carried out in this project chiefly revealed a strong homogeneity of rocker stamping, a decorative application technique, on the interior wall and body of the vessels. It also made it possible to propose a late occupation within the Early Middle Woodland, between 1 and 500-600 CE. The comparative analysis suggests that the Vieux-Pont groups participated in the same interaction and exchange networks as those of the Montreal, Quebec City, Haut-Richelieu and New England regions.
Abstract:
Essay presented for the degree of Doctorate in Psychology, clinical psychology option (D.Psy.)
Abstract:
Climate-G is a large-scale distributed testbed devoted to climate change research. It is an unfunded effort started in 2008 and involving a wide community in both Europe and the US. The testbed is an interdisciplinary effort involving partners from several institutions and joining expertise in the fields of climate change and computational science. Its main goal is to allow scientists to carry out geographical and cross-institutional data discovery, access, analysis, visualization and sharing of climate data. It represents an attempt to address, in a real environment, challenging data and metadata management issues. This paper presents a complete overview of the Climate-G testbed, highlighting the most important results achieved since the beginning of the project.
Abstract:
Whereas there is substantial scholarship on formulaic language in L1 and L2 English, there is less research on formulaicity in other languages. The aim of this paper is to contribute to learner corpus research into formulaic language in native and non-native German. To this end, a corpus of argumentative essays written by advanced British students of German (WHiG) was compared with a corpus of argumentative essays written by German native speakers (Falko-L1). A corpus-driven analysis reveals a larger number of 3-grams in WHiG than in Falko-L1, which suggests that advanced British learners of German are more likely to use formulaic language in argumentative writing than their native-speaker counterparts. Secondly, by classifying the formulaic sequences according to their functions, this study finds that native speakers of German prefer discourse-structuring devices to stance expressions, whilst advanced British learners display the opposite preference. Thirdly, the results show that learners of German make greater use of macro-discourse-structuring devices and cautious language, whereas native speakers favour micro-discourse-structuring devices and tend to use more direct language. This study increases our understanding of the formulaic language typical of advanced British learners of German and reveals how diverging cultural paradigms can shape written native-speaker and learner output.
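A short sketch of the 3-gram extraction that underlies this kind of corpus-driven comparison; the whitespace tokeniser and the toy sentence are simplifications, and a real study would run this over the WHiG and Falko-L1 essays:

```python
from collections import Counter

def trigrams(tokens):
    """Yield contiguous 3-grams from a token sequence."""
    return zip(tokens, tokens[1:], tokens[2:])

# Toy example in place of a real learner essay.
text = "es ist wichtig zu betonen dass es ist wichtig zu sagen dass"
tokens = text.split()

counts = Counter(trigrams(tokens))
for gram, freq in counts.most_common(3):
    print(" ".join(gram), freq)
```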
Abstract:
Effective public policy to mitigate climate change footprints should build on data-driven analysis of firm-level strategies. This article's conceptual approach augments the resource-based view (RBV) of the firm and identifies investments in four firm-level resource domains (Governance, Information management, Systems, and Technology [GISTe]) to develop capabilities in climate change impact mitigation. The authors denote the resulting framework the GISTe model, which frames their analysis and public policy recommendations. This research uses the 2008 Carbon Disclosure Project (CDP) database, with high-quality information on firm-level climate change strategies for 552 companies from North America and Europe. Contrary to the widely accepted view that European firms perform better than North American ones, the authors find a more nuanced picture. Many firms, whether European or North American, do not just "talk" about climate change impact mitigation, but actually do "walk the talk." European firms appear to be better than their North American counterparts at "walk I," denoting attention to governance, information management, and systems. But when it comes down to "walk II," meaning actual Technology-related investments, North American firms' performance is equal or superior to that of the European companies. The authors formulate public policy recommendations to accelerate firm-level, sector-level, and cluster-level implementation of climate change strategies.
Abstract:
Human parasitic diseases are among the foremost threats to human health and welfare around the world. Trypanosomiasis is a very serious infectious disease against which the currently available drugs are limited and not effective. Therefore, there is an urgent need for new chemotherapeutic agents. One attractive drug target is cruzain, the major cysteine protease from Trypanosoma cruzi. In the present work, comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA) studies were conducted on a series of thiosemicarbazone and semicarbazone derivatives as inhibitors of cruzain. Molecular modeling studies were performed in order to identify the preferred binding mode of the inhibitors in the enzyme active site, and to generate structural alignments for the three-dimensional quantitative structure-activity relationship (3D QSAR) investigations. Statistically significant models were obtained (CoMFA: r² = 0.96 and q² = 0.78; CoMSIA: r² = 0.91 and q² = 0.73), indicating their predictive ability for untested compounds. The models were externally validated employing a test set, and the predicted values were in good agreement with the experimental results. The final QSAR models and the information gathered from the 3D CoMFA and CoMSIA contour maps provided important insights into the chemical and structural basis involved in the molecular recognition process of this family of cruzain inhibitors, and should be useful for the design of new structurally related analogs with improved potency.
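The q² statistic quoted above is a cross-validated analogue of r². A hedged sketch of how it might be computed, using PLS regression (the method conventionally underlying CoMFA/CoMSIA) on placeholder descriptors rather than actual molecular field data:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# Placeholder descriptor matrix standing in for CoMFA/CoMSIA field
# descriptors; y stands in for the measured activities of the inhibitors.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 200))
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.3, 40)

pls = PLSRegression(n_components=3)

# r2: fit on all data; q2: leave-one-out cross-validated predictions,
# as is conventional for 3D QSAR internal validation.
r2 = r2_score(y, pls.fit(X, y).predict(X))
y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut())
q2 = r2_score(y, y_loo)
print(f"r2 = {r2:.2f}, q2 = {q2:.2f}")
```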
Abstract:
Graduate program in Mechanical Engineering - FEIS
Abstract:
We investigated how well structural features such as note density or the relative number of changes in the melodic contour could predict success in implicit and explicit memory for unfamiliar melodies. We also analyzed which features are more likely to elicit increasingly confident judgments of "old" in a recognition memory task. An automated analysis program computed structural aspects of the melodies, both independent of any context and with reference to the other melodies in the test set and the parent corpus of pop music. A few features predicted success in both memory tasks, which points to a shared memory component. However, motivic complexity compared to a large corpus of pop music had different effects on explicit and implicit memory. We also found that just a few features are associated with different rates of "old" judgments, whether the items were old or new. Rarer motives relative to the test set predicted hits, and rarer motives relative to the corpus predicted false alarms. This data-driven analysis provides further support for both shared and separable mechanisms in implicit and explicit memory retrieval, as well as for the role of distinctiveness in true and false judgments of familiarity.
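A sketch of this kind of data-driven step: predicting "old" judgments from structural melody features with a logistic regression. The feature names, data and labels below are hypothetical placeholders, not the study's materials:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per melody:
# [note density, contour-change rate, motif rarity vs. test set, motif rarity vs. corpus]
rng = np.random.default_rng(7)
X = rng.normal(size=(120, 4))
judged_old = (X[:, 2] * 0.9 + rng.normal(0, 1, 120)) > 0  # toy "old" judgments

model = LogisticRegression().fit(X, judged_old)
print("feature weights:", model.coef_.round(2))  # which features drive "old" responses
```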
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous measurements made with a Biospherical Instruments Inc. QCR-2150 surface PAR sensor mounted on a sensor mast at the stern of the ship (ca. 8 m above deck) and time-synchronized with the CTD recording unit. The sensor consists of a cosine collector and was also used to correct the CTD PAR sensor data. The dark offset was computed as the lowest 0.01% of the voltage signal, which was found to be very stable (0.00965 V) for all legs except the second leg of the polar circle, where there was no complete night (the manufacturer dark was 0.0097 V). The manufacturer calibration slope from 12/2012 was used to transform the data to scientific units.
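A minimal numpy sketch of the dark-offset and slope-calibration steps described above; the variable names, placeholder data and the slope value are assumptions (the real slope comes from the 12/2012 manufacturer calibration):

```python
import numpy as np

def calibrate_par(voltage: np.ndarray, slope: float) -> np.ndarray:
    """Dark-correct and scale raw PAR voltages to scientific units.

    The dark offset is taken as the lowest 0.01% of the voltage signal,
    mirroring the processing described above (found stable at ~0.00965 V).
    """
    dark = np.quantile(voltage, 0.0001)
    return slope * (voltage - dark)

# Placeholder raw voltages and a placeholder calibration slope.
v = np.random.default_rng(3).uniform(0.0097, 4.5, 100_000)
par = calibrate_par(v, slope=300.0)  # e.g. umol photons m-2 s-1 per volt (illustrative)
print(par.min(), par.max())
```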
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous measurements made with a WETLabs Eco-FL sensor mounted on the flow-through system between June 4th, 2011 and March 30th, 2012. Data were recorded approximately every 10 s. Two issues affected the data: 1. periods when 0.2 µm filtered water was used as a blank, and 2. periods when fluorescence was affected by non-photochemical quenching (NPQ; chlorophyll fluorescence is reduced when cells are exposed to light, e.g. Falkowski and Raven, 1997). Median data and their standard deviation were binned to 5 min bins, with periods of light/dark indicated by an added variable (so that NPQ-affected data can be excluded if the user so chooses). Data were first calibrated using HPLC data collected on the Tara (there were 36 match-ups within 30 min of each other). Fewer were available when there was no evident NPQ, and the resulting scale factor was 0.0106 mg Chl m-3/count. To increase the number of calibration match-ups, we used the AC-S data, which provide a robust estimate of chlorophyll (e.g. Boss et al., 2013). The scale factor computed over this much larger range of values was 0.0088 mg Chl m-3/count (compared to 0.0079 mg Chl m-3/count based on the manufacturer calibration). In the archived data the fluorometer data are merged with the TSG data; the raw data are provided, as well as the manufacturer calibration constants, the blank computed from filtered measurements, and chlorophyll calibrated using the AC-S. For a full description of the processing of the Eco-FL, please see Taillandier, 2015.
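A pandas sketch of the 5-minute binning and scale-factor calibration described above. The column names, synthetic counts and blank value are placeholders; the 0.0088 scale factor is the AC-S-derived value quoted in the text, and the light/dark flag is a crude hour-of-day stand-in for the actual day/night variable:

```python
import numpy as np
import pandas as pd

# Placeholder raw fluorometer counts sampled every 10 s for one day.
idx = pd.date_range("2011-06-04", periods=8640, freq="10s")
raw = pd.DataFrame(
    {"counts": np.random.default_rng(5).uniform(50, 400, len(idx))}, index=idx
)

blank = 49.0    # placeholder blank from the 0.2 um filtered-water periods
scale = 0.0088  # mg Chl m-3 per count (AC-S-based calibration quoted above)

# Median and standard deviation in 5 min bins, then calibrate.
binned = raw["counts"].resample("5min").agg(["median", "std"])
binned["chl"] = scale * (binned["median"] - blank)

# Flag daytime bins so NPQ-affected data can be excluded by the user.
binned["daylight"] = (binned.index.hour >= 6) & (binned.index.hour < 18)
print(binned.head())
```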
Abstract:
The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous pH measurements made during the 2013 expedition with a Satlantic SeaFET instrument connected to the flow-through system. Data calibration was performed according to Bresnahan et al. (2014), using spectrophotometric pH measurements on discrete samples (Clayton and Byrne, 1993). pH_internal values were used to calibrate the data (rather than pH_external) because of the better calibration coefficient (there was no trend associated with it). The equations of Clayton and Byrne (1993) were used to compute pH from the measured absorbance values at the temperature of measurement. The data were converted to in situ temperature using the "CO2-sys" program, which can be downloaded from http://cdiac.ornl.gov/ftp/co2sys/.
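A sketch of the Clayton and Byrne (1993) spectrophotometric pH calculation referenced above, for the m-cresol purple indicator. The constants follow the published equation, but treat this as an illustrative assumption rather than the exact Tara processing code (which also involves the Bresnahan et al. calibration and the CO2-sys temperature conversion):

```python
import numpy as np

def ph_clayton_byrne(a434: float, a578: float, t_kelvin: float,
                     salinity: float = 35.0) -> float:
    """Total-scale pH from m-cresol purple absorbances (Clayton & Byrne, 1993).

    R is the absorbance ratio at the two indicator peaks; e1, e2, e3 are the
    published extinction-coefficient ratios.
    """
    R = a578 / a434
    pK = 1245.69 / t_kelvin + 3.8275 + 0.00211 * (35.0 - salinity)
    e1, e2, e3 = 0.00691, 2.2220, 0.1331
    return pK + np.log10((R - e1) / (e2 - R * e3))

# Illustrative absorbances at lab temperature, not a real Tara sample.
print(ph_clayton_byrne(a434=0.45, a578=0.40, t_kelvin=298.15))
```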