969 results for on-disk data layout


Abstract:

Climate change has the potential to affect global, regional, and national disease burdens both directly and indirectly. Projecting and valuing these health impacts is important not only in terms of assessing the overall impact of climate change on various parts of the world, but also in terms of ensuring that national and regional decision-making institutions have access to the data necessary to guide investment decisions and future policy design. This report contributes to the research on projecting and valuing the impacts of climate change in the Caribbean by projecting the climate change-induced excess disease burden under two climate change scenarios in Montserrat for the period 2010-2050, and by estimating the monetary value associated with this excess burden. The diseases considered are a variety of vector- and water-borne diseases and other miscellaneous conditions; specifically, malaria, dengue fever, gastroenteritis/diarrheal disease, schistosomiasis, leptospirosis, ciguatera poisoning, meningococcal meningitis, and cardio-respiratory diseases. Disease projections were based on derived baseline incidence and mortality rates, dose-response relationships from the published literature, population projections for the A2 and B2 IPCC SRES scenario families, and annual temperature and precipitation anomalies as projected by the downscaled ECHAM4 global climate model. Monetary valuation was based on a value-of-statistical-life transfer approach with a modification for morbidity. Using discount rates of 1%, 2%, and 4%, results show mean annual costs (morbidity and mortality) ranging from $0.61 million (B2 scenario, discounted at 4% annually) to $1 million (A2 scenario, discounted at 1% annually) for Montserrat. These costs are compared to adaptation cost scenarios involving increased direct spending on per capita health care.
This comparison reveals a high benefit-cost ratio, suggesting that moderate costs will deliver significant benefits in terms of avoided health burdens over 2010-2050. The methodology and results suggest that a focus on coordinated data collection and improved monitoring represents a potentially important no-regrets adaptation strategy for Montserrat. The report also highlights the need for this to be part of a coordinated regional response that avoids duplication of spending.
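The discounting step behind the valuation above can be sketched in a few lines. The annual cost figure and the 41-year horizon (2010-2050 inclusive) are illustrative assumptions, not values from the report:

```python
# Sketch: present value of a constant annual health-cost stream at the
# discount rates used in the report (1%, 2%, 4%). The $1 million/year
# figure is hypothetical, chosen only to illustrate the mechanics.

def present_value(annual_cost, rate, years):
    """Discount a constant annual cost back to year 0."""
    return sum(annual_cost / (1 + rate) ** t for t in range(years))

annual_cost = 1.0  # hypothetical: $1 million per year
for rate in (0.01, 0.02, 0.04):
    pv = present_value(annual_cost, rate, 41)  # 2010-2050 inclusive
    print(f"rate={rate:.0%}: PV = ${pv:.1f} million")
```

As expected, the higher the discount rate, the smaller the present value of the same cost stream, which is why the reported cost ranges depend on the rate chosen.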

Abstract:

Climate change has the potential to affect global, regional, and national disease burdens both directly and indirectly. Projecting and valuing these health impacts is important not only in terms of assessing the overall impact of climate change on various parts of the world, but also in terms of ensuring that national and regional decision-making institutions have access to the data necessary to guide investment decisions and future policy design. This report contributes to the research on projecting and valuing the impacts of climate change in the Caribbean by projecting the climate change-induced excess disease burden under two climate change scenarios in Saint Lucia for the period 2010-2050, and by estimating the non-market, statistical-life-based costs associated with this excess burden. The diseases considered are a variety of vector- and water-borne diseases and other miscellaneous conditions; specifically, malaria, dengue fever, gastroenteritis/diarrhoeal disease, schistosomiasis, leptospirosis, ciguatera poisoning, meningococcal meningitis, and cardio-respiratory diseases. Disease projections were based on derived baseline incidence and mortality rates, dose-response relationships from the published literature, population projections for the A2 and B2 IPCC SRES scenario families, and annual temperature and precipitation anomalies as projected by the downscaled ECHAM4 global climate model. Monetary valuation was based on a value-of-statistical-life transfer approach with a modification for morbidity. Using discount rates of 1%, 2%, and 4%, results show mean annual costs (morbidity and mortality) ranging from $80.2 million (B2 scenario, discounted at 4% annually) to $182.4 million (A2 scenario, discounted at 1% annually) for Saint Lucia. These costs are compared to adaptation cost scenarios involving direct and indirect interventions in health care.
This comparison reveals a high benefit-cost ratio, suggesting that moderate costs will deliver significant benefits in terms of avoided health costs over 2010-2050. In this context, indirect interventions target sectors other than healthcare (e.g. water supply). It is also important to highlight that interventions can target both the supply of health infrastructure (including health status and disease monitoring) and households. It is suggested that a focus on coordinated data collection and improved monitoring represents a potentially important no-regrets adaptation strategy for Saint Lucia. The report also highlights the need for this to be part of a coordinated regional response that avoids duplication of spending.

Abstract:

Howler monkeys, genus Alouatta, the New World primates with the widest geographic distribution, have been placed in three species groups: the Central American Alouatta palliata group, and the South American Alouatta seniculus and Alouatta caraya groups. The latter is monotypic, while the A. seniculus group includes at least three species (A. seniculus, A. belzebul and A. fusca). In this study, approximately 600 base pairs of the g1 globin pseudogene were sequenced in the four Brazilian species (A. seniculus, A. belzebul, A. fusca and A. caraya). Maximum parsimony and maximum likelihood methods produced phylogenetic trees with the same arrangement: {A. caraya [A. seniculus (A. fusca, A. belzebul)]}. The most parsimonious tree showed bootstrap values above 82% for all groupings, and branch support (decay) values of at least 2, supporting the sister grouping of A. fusca and A. belzebul. The study also confirmed the presence in A. fusca of the 150-base-pair Alu insertion element and of a 1.8 kb deletion in the g1 globin pseudogene, both already known in the other howler species. The cladistic classification based on molecular data is congruent with those from morphological studies, with a clear separation of the monospecific A. caraya group from the A. seniculus group.

Abstract:

This work presents practical results of a systematic approach to the processing and seismic interpretation of selected land lines from the Tacutu graben (Brazil) data set, to which key steps of the data-driven WIT CRS (Common Reflection Surface) stack imaging system were applied. As a result, we aim to establish a workflow for the seismic reevaluation of sedimentary basins. Based on the wavefront attributes resulting from the CRS stack, a smooth macro-velocity model was obtained through tomographic inversion. Using this macro-model, pre- and post-stack depth migration was carried out. In addition, other CRS-based techniques were applied in parallel, such as residual static correction and limited-aperture migration based on the projected Fresnel zone. A geological interpretation of the stacked and migrated sections was outlined. From the visual details of the panels it is possible to interpret unconformities, thinning, and a main faulted anticline with sets of horsts and grabens. Part of the selected line still requires more detailed processing to better reveal the structures present in the subsurface.

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Abstract:

Smoking cue-provoked craving is an intricate behavior associated with strong changes in neural networks. Craving is one of the main reasons subjects continue to smoke; therefore, interventions that can modify activity in the neural networks associated with craving can be useful tools in future research investigating novel treatments for smoking cessation. The goal of this study was to use a neuromodulatory technique with a powerful effect on spontaneous neuronal firing, transcranial direct current stimulation (tDCS), to modify cue-provoked smoking craving. Based on preliminary data showing that craving can be modified after a single tDCS session, here we investigated the effects of repeated tDCS sessions on craving behavior. Twenty-seven subjects were randomized to receive sham or active tDCS (anodal tDCS of the left DLPFC). Our results show a significant cumulative effect of tDCS on modifying smoking cue-provoked craving. In fact, in the active stimulation group, smoking cues had the opposite effect on craving after stimulation (craving decreased), compared with sham stimulation, after which craving decreased slightly or increased. In addition, during these 5 days of stimulation there was a small but significant decrease in the number of cigarettes smoked in the active as compared to the sham tDCS group. Our findings extend the results of our previous study: they confirm that tDCS has a specific effect on craving behavior and that several sessions can increase the magnitude of this effect. These results open avenues for exploring this method as a therapeutic alternative for smoking cessation and as a means to change stimulus-induced behavior. (C) 2009 Elsevier Ireland Ltd. All rights reserved.

Abstract:

We estimate the impact of regulatory heterogeneity on agri-food trade using a gravity analysis that relies on detailed data on non-tariff measures (NTMs) collected by the NTM-Impact project. The data cover a broad range of import requirements for agricultural and food products for the EU and nine of its major trade partners. We find that trade is significantly reduced when importing countries have stricter maximum residue limits (MRLs) for plant products than exporting countries. For most other measures, due to their qualitative nature, we were unable to infer whether the importer has stricter standards relative to the exporter, and we do not find a robust relationship between these measures and trade. Our findings suggest that, at least for some import standards, harmonising regulations will increase trade. We also conclude that tariff reductions remain an effective means to increase trade even when NTMs abound.
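The gravity analysis described above can be sketched as a log-linear regression on simulated data. Everything below is an illustrative assumption, not the NTM-Impact data or the paper's actual specification: the sample is drawn at random, and the "true" coefficient of -0.5 on the stricter-MRL dummy simply encodes the qualitative finding that stricter importer MRLs reduce trade.

```python
import numpy as np

# Sketch of a gravity regression: log trade on log GDPs, log distance, and
# a dummy for the importer having stricter MRLs than the exporter.
rng = np.random.default_rng(0)
n = 500
ln_gdp_i = rng.normal(10, 1, n)    # exporter GDP (log), simulated
ln_gdp_j = rng.normal(10, 1, n)    # importer GDP (log), simulated
ln_dist = rng.normal(8, 0.5, n)    # bilateral distance (log), simulated
stricter_mrl = rng.integers(0, 2, n).astype(float)  # importer stricter MRLs?

# "True" data-generating process: stricter importer MRLs reduce trade.
ln_trade = (1.0 + 0.8 * ln_gdp_i + 0.8 * ln_gdp_j - 1.0 * ln_dist
            - 0.5 * stricter_mrl + rng.normal(0, 0.1, n))

X = np.column_stack([np.ones(n), ln_gdp_i, ln_gdp_j, ln_dist, stricter_mrl])
beta, *_ = np.linalg.lstsq(X, ln_trade, rcond=None)
print("estimated MRL coefficient:", round(beta[4], 2))  # close to -0.5
```

With enough observations the OLS estimate recovers the negative MRL effect built into the simulation, which is the shape of the inference the paper draws from the real data.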

Abstract:

The aim of this research was to evaluate the economic costs of respiratory and circulatory diseases in the municipality of Cubatão, in the state of São Paulo, Brazil. Data on hospital admissions and on missed working days due to hospitalization (for the 14-70 age group) from the database of the Sistema Único de Saúde (SUS, the Brazilian National Health System) were used. Based on these data, it was calculated that R$ 22.1 million were spent in the period 2000 to 2009 due to diseases of the respiratory and circulatory systems. Part of these expenses can be directly related to the emission of atmospheric pollutants in the city. In order to estimate the costs related to air pollution, data on Cubatão were compared to data from two other coastal municipalities (Guarujá and Peruíbe) that have little industrial activity in comparison to Cubatão. In both, average per capita costs were lower than in Cubatão, although this difference has been decreasing in recent years.

Abstract:

Empirical and, more recently, physical approaches have grounded the establishment of logical connections between radiometric variables derived from remotely sensed data and biophysical variables of vegetation cover. This study evaluated correlations of dendrometric and canopy density data from Eucalyptus spp. stands, collected in the Capão Bonito forest unit, with radiometric data from imagery acquired by the TM/Landsat-5 sensor on two orbital passes over the study site (on dates close to the field data collection). Results indicate that the strongest correlations were found between crown dimensions and canopy height and the near-infrared spectral band (Spearman's ρs for band 4), irrespective of the satellite pass date. Estimates of the spatial distribution of dendrometric data and canopy density (D) using spectral characterization were consistent with the spatial distribution of tree ages during the study period. Statistical tests applied to evaluate performance disparities of the empirical models depending on the acquisition date indicated a significant difference between models based on distinct dates.
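The rank correlations reported above (ρs) can be illustrated with a small pure-Python Spearman implementation. The canopy-height and band-4 reflectance values below are made up for illustration, not field data from the study:

```python
# Sketch: Spearman rank correlation, the statistic relating crown/canopy
# measurements to the TM/Landsat-5 near-infrared band in the study.

def rank(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the ranks of x and y."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

canopy_height = [12.0, 15.5, 18.2, 21.0, 24.3]  # hypothetical field data (m)
nir_band = [0.21, 0.25, 0.28, 0.33, 0.36]       # hypothetical reflectance
print(spearman(canopy_height, nir_band))  # 1.0 for perfectly monotone data
```

Because Spearman's ρ depends only on ranks, it captures the monotone association between structural variables and reflectance without assuming linearity.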

Abstract:

Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge about the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single one. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools, and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines that prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving different tools for the analysis of different types of gene expression data.
Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus ensuring accurate data exchange and correct interpretation of exchanged data.
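A minimal sketch of the software-connector idea follows: a wrapper that exposes a heterogeneous source through a shared vocabulary and applies a transformation rule on the exchanged data. The class, field names, term map, and toy source are illustrative assumptions, not the connectors or the reference ontology defined by the methodology.

```python
import math

# Sketch: a connector maps local field names of a data source onto shared
# (reference-ontology) terms and applies per-term transformation rules.

class Connector:
    def __init__(self, source, term_map, transforms):
        self.source = source          # callable yielding raw records
        self.term_map = term_map      # source field -> shared term
        self.transforms = transforms  # shared term -> value transformation

    def fetch(self):
        for record in self.source():
            out = {}
            for field, value in record.items():
                term = self.term_map.get(field, field)
                out[term] = self.transforms.get(term, lambda v: v)(value)
            yield out

# Toy source: expression values reported as raw intensities under local names.
def microarray_source():
    yield {"gene_id": "BRCA1", "intensity": 1024.0}

connector = Connector(
    microarray_source,
    term_map={"gene_id": "gene", "intensity": "log2_expression"},
    transforms={"log2_expression": lambda v: math.log2(v)},
)
print(list(connector.fetch()))  # [{'gene': 'BRCA1', 'log2_expression': 10.0}]
```

The point of the pattern is that consumers only ever see the shared terms and transformed values, so tools built against the reference vocabulary need no knowledge of each source's local schema.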

Abstract:

Semi-supervised learning is a classification paradigm in which just a few labeled instances are available for the training process. To overcome this small amount of initial label information, the information provided by the unlabeled instances is also considered. In this paper, we propose a nature-inspired semi-supervised learning technique based on attraction forces. Instances are represented as points in a k-dimensional space, and the movement of data points is modeled as a dynamical system. As the system runs, data items with the same label cooperate with each other, and data items with different labels compete with one another to attract unlabeled points by applying a specific force function. In this way, all unlabeled data items can be classified when the system reaches its stable state. A stability analysis of the proposed dynamical system is performed, and heuristics are proposed for parameter setting. Simulation results show that the proposed technique achieves good classification results on artificial data sets and is comparable to well-known semi-supervised techniques on benchmark data sets.
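The force-based classification can be sketched as follows. This is a stand-in for the idea, not the paper's algorithm: the force law, step size, and iteration count are assumptions, and the cooperation/competition dynamics between labeled items are omitted, leaving only attraction plus nearest-attractor labeling.

```python
import math

# Sketch: labeled points pull unlabeled ones with a softened attraction
# force; each unlabeled point takes the label of its nearest attractor
# once the system settles.

def attract(labeled, unlabeled, steps=200, dt=0.01):
    """labeled: list of ((x, y), label); unlabeled: list of (x, y)."""
    points = [list(p) for p in unlabeled]
    for _ in range(steps):
        for p in points:
            fx = fy = 0.0
            for (x, y), _label in labeled:
                dx, dy = x - p[0], y - p[1]
                d = math.hypot(dx, dy)
                w = 1.0 / (d * d + 1.0)    # softened, bounded attraction
                fx += w * dx / (d + 1e-9)  # unit direction towards source
                fy += w * dy / (d + 1e-9)
            p[0] += dt * fx
            p[1] += dt * fy
    # Classify each settled point by its nearest labeled attractor.
    return [min(labeled, key=lambda lp: math.hypot(lp[0][0] - p[0],
                                                   lp[0][1] - p[1]))[1]
            for p in points]

labeled = [((0.0, 0.0), "A"), ((10.0, 0.0), "B")]
print(attract(labeled, [(1.5, 0.5), (8.5, -0.5)]))  # ['A', 'B']
```

Even in this stripped-down form, each unlabeled point drifts towards the dominant nearby source of force and inherits its label at the stable state, which is the core mechanism of the technique.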

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, but their use in clustering has not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions, and microarray experiments. The attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974), and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data-generating process. Different simulations are performed by varying the conditions (e.g., the kind of margins, whether distinct, overlapping, or nested, and the value of the dependence parameter), and the results are evaluated by means of different performance measures. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' for short) is proposed. The basic idea, the iterative procedure of the CoClust, and a description of the R functions written, with their output, are given.
The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value, and the degree of overlap of the margins) and compared with model-based clustering using different performance measures, such as the percentage of correctly identified numbers of clusters and the percentage of non-rejections of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Several distinctive characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
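The copula log-likelihood criterion underlying CoClust can be illustrated for the bivariate Gaussian copula: observations that are strongly dependent (as pseudo-observations, i.e. ranks scaled to (0,1)) score a higher log-likelihood than weakly or oppositely dependent ones, and CoClust groups observations so as to maximize such a likelihood. The pseudo-observations and the choice of ρ = 0.8 below are illustrative assumptions.

```python
import math
from statistics import NormalDist

# Sketch: log-density of a bivariate Gaussian copula evaluated on
# pseudo-observations u, v in (0,1), summed over the sample.
_PHI_INV = NormalDist().inv_cdf

def gaussian_copula_loglik(u, v, rho):
    total = 0.0
    for ui, vi in zip(u, v):
        z1, z2 = _PHI_INV(ui), _PHI_INV(vi)
        total += (-0.5 * math.log(1 - rho ** 2)
                  - (rho ** 2 * (z1 ** 2 + z2 ** 2) - 2 * rho * z1 * z2)
                  / (2 * (1 - rho ** 2)))
    return total

# Comonotone pseudo-observations score higher than anti-monotone ones
# under a positive-dependence copula (rho = 0.8).
u = [0.1, 0.3, 0.5, 0.7, 0.9]
dependent = gaussian_copula_loglik(u, [0.15, 0.25, 0.55, 0.65, 0.95], 0.8)
shuffled = gaussian_copula_loglik(u, [0.95, 0.65, 0.55, 0.25, 0.15], 0.8)
print(dependent > shuffled)  # True
```

A clustering built on this criterion pairs up observations whose joint ranks are consistent with the fitted dependence structure, regardless of how much the margins overlap, which is the property the dissertation emphasizes.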

Abstract:

Owing to its particular position and complex geological history, the Northern Apennines has been considered a natural laboratory for several kinds of investigation. It is, however, complicated to join all the knowledge about the Northern Apennines into a unique picture that explains the structural and geological setting that produced it. The main goal of this thesis is to bring together all information on the deformation of this region, in the crust and at depth, and to describe a geodynamic model that accounts for it. To do so, we analyzed the pattern of deformation in the crust and in the mantle. In both cases the deformation was studied using information recovered from earthquakes, although with different techniques. The shallower deformation was studied using seismic moment tensors. We used the method of Arvidsson and Ekstrom (1998), which, by allowing the use of surface waves in the inversion (and not only body waves, as in the Centroid Moment Tensor method of Dziewonski et al., 1981), makes it possible to determine seismic source parameters for earthquakes with magnitudes as small as 4.0. We applied this tool to the Northern Apennines and through this activity built up the Italian CMT dataset (Pondrelli et al., 2006) and the pattern of seismic deformation, using the Kostrov (1974) method on a regular grid of 0.25-degree cells. We obtained a map of lateral variations of the pattern of seismic deformation at different depth layers, taking into account the fact that shallow earthquakes (within 15 km depth) occur everywhere in the region, while most events with deeper hypocenters (15-40 km) occur only in the outer part of the belt, on the Adriatic side. For the analysis of the deep deformation, i.e. that occurring in the mantle, we used the anisotropy information characterizing the structure below the Northern Apennines.
Anisotropy is an Earth property that, in the crust, is due to the presence of aligned fluid-filled cracks or alternating isotropic layers with different elastic properties, while in the mantle its most important cause is the lattice-preferred orientation (LPO) of mantle minerals such as olivine. Olivine is a highly anisotropic mineral and tends to align its fast crystallographic axis (a-axis) parallel to the asthenospheric flow, in response to the finite strain induced by geodynamic processes. The seismic anisotropy pattern of a region is measured using the shear-wave splitting phenomenon (the seismological analogue of optical birefringence). Here we apply the approach of Sileny and Plomerova (1996) to teleseismic earthquakes recorded at stations located in the study region. The results are analyzed on the basis of their lateral and vertical variations to better define the Earth structure beneath the Northern Apennines. We find two anisotropic domains, a Tuscany one and an Adria one, with a pattern of seismic anisotropy that varies laterally in a way similar to the seismic deformation. Moreover, beneath the Adriatic region the distribution of the splitting parameters is complex enough to require a dedicated analysis. We therefore applied the code of Menke and Levin (2003), which searches for models of structures with multilayer anisotropy, and found that the structure beneath the Po Plain is probably even more complicated than expected. On the basis of the results obtained for this thesis, together with those from previous works, we suggest that the slab roll-back that created the Apennines and opened the Tyrrhenian Sea evolved at the northern boundary of the Northern Apennines differently from its southern part. In particular, trench retreat developed primarily south of our study region, with an eastward roll-back.
In the northern portion of the orogen, after a first stage during which the retreat was perpendicular to the trench, it became oblique with respect to the structure.
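The Kostrov (1974) summation used for the seismic-deformation maps can be sketched as follows: the average strain rate in a cell is the sum of the moment tensors of the earthquakes in that cell, divided by 2μVT (μ: rigidity, V: cell volume, T: catalogue length). The rigidity, volume, catalogue length, and the two moment tensors below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Sketch: Kostrov summation of moment tensors in one grid cell.
def kostrov_strain_rate(moment_tensors, mu=3.0e10, volume=1.0e13, years=30.0):
    """Average strain rate (1/s): sum(M) / (2 * mu * V * T)."""
    seconds = years * 365.25 * 24 * 3600
    return sum(moment_tensors) / (2.0 * mu * volume * seconds)

# Two hypothetical normal-faulting events in one 0.25-degree cell
# (double-couple moment tensors, in N*m; trace is zero).
m1 = np.array([[1.0e17, 0, 0], [0, 0, 0], [0, 0, -1.0e17]])
m2 = np.array([[0.5e17, 0, 0], [0, 0, 0], [0, 0, -0.5e17]])
rate = kostrov_strain_rate([m1, m2])
print(rate[0, 0])  # positive: horizontal extension dominates in this cell
```

Repeating this cell by cell over the 0.25-degree grid, and separately for the shallow (0-15 km) and deeper (15-40 km) layers, yields the maps of lateral variations in seismic deformation described above.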

Abstract:

The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi, and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs, compared for example with a prohibitively expensive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect model setting, and also with two types of model error: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand.
In Numerical Weather Prediction models, tuning parameters, and in particular estimating the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
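The forecast-analysis cycle on the Lorenz 1963 model can be sketched as follows. Note the AUS step itself, confining the analysis increment to the unstable subspace estimated by breeding, is replaced here by plain nudging towards noisy observations of the full state: a stand-in that shows only the cycle structure, not the AUS-BDAS scheme. The integration scheme, observation interval, noise level, and gain are all assumptions.

```python
import math
import random

# Sketch: a forecast-analysis cycle on the Lorenz 1963 convective model,
# with simple nudging standing in for the AUS analysis step.

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

random.seed(1)
truth = (1.0, 1.0, 1.0)
analysis = (4.0, -2.0, 25.0)  # badly wrong initial condition
gain = 0.5                    # weight given to each observation

for step in range(2000):
    truth = lorenz63_step(truth)        # nature run
    analysis = lorenz63_step(analysis)  # forecast
    if step % 5 == 0:                   # noisy observation of every component
        obs = [t + random.gauss(0.0, 0.1) for t in truth]
        analysis = tuple(a + gain * (o - a) for a, o in zip(analysis, obs))

error = math.dist(truth, analysis)
print(f"analysis error after 2000 steps: {error:.3f}")
```

Despite the model's chaos, repeated analyses keep the error near the observation-noise level; AUS improves on such naive updating by directing the increment along the dominant instabilities, so fewer observed components suffice.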