945 results for Point density analysis
Abstract:
The purpose of this paper is to study the possible differences among countries as CO2 emitters and to examine the underlying causes of these differences. The starting point of the analysis is the Kaya identity, which allows us to break down per capita emissions into four components: an index of carbon intensity, transformation efficiency, energy intensity and social wealth. Through a cluster analysis we have identified five groups of countries with different behavior according to these four factors. One significant finding is that these groups are stable over the period analyzed. This suggests that a study based on these components can characterize quite accurately the polluting behavior of individual countries; that is to say, the classification found in the analysis could be used in other studies that seek to examine the behavior of countries in terms of CO2 emissions in homogeneous groups. In this sense, it represents an advance over the traditional regional or rich-poor country classifications.
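The Kaya-style decomposition described above is a multiplicative identity: the four components multiply back to per capita emissions. A minimal sketch, with hypothetical function and variable names and invented figures (nothing here is taken from the paper):

```python
# Hedged sketch of a Kaya-style decomposition of per capita CO2 emissions.
# All names and numbers are illustrative, not from the paper.

def kaya_decomposition(co2, energy_primary, energy_final, gdp, population):
    """Break per capita emissions into four multiplicative components:
    carbon intensity, transformation efficiency, energy intensity, wealth."""
    carbon_intensity = co2 / energy_final              # CO2 per unit of final energy
    transformation_efficiency = energy_final / energy_primary
    energy_intensity = energy_primary / gdp            # primary energy per unit of GDP
    wealth = gdp / population                          # GDP per capita
    per_capita_emissions = (carbon_intensity * transformation_efficiency
                            * energy_intensity * wealth)
    return (carbon_intensity, transformation_efficiency,
            energy_intensity, wealth, per_capita_emissions)
```

By construction the four factors multiply to CO2/population, so differences between country groups can be attributed factor by factor.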
Abstract:
INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate sharply increases after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD of all women, and one measuring BMD only of those having at least one risk factor, were compared with the reference strategy "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost effective than "screen women at risk." For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratio of these two strategies compared with the reference was 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. 
Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on cost-effectiveness ratios of the usual screening strategies for osteoporosis.
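The incremental cost-effectiveness comparison used in the abstract reduces to a ratio of cost and effect differences against the reference strategy ("no screening"). A minimal sketch, with a hypothetical function name and illustrative numbers:

```python
# Hedged sketch of an incremental cost-effectiveness ratio (ICER) against a
# reference strategy. The figures used in the test are illustrative only.

def icer(cost, effect, ref_cost, ref_effect):
    """Incremental cost per extra unit of effect (e.g. per year gained
    without hip fracture) versus the reference strategy."""
    return (cost - ref_cost) / (effect - ref_effect)
```

A strategy dominates when it is both cheaper and more effective; otherwise the ICER is compared with a willingness-to-pay threshold.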
Abstract:
BACKGROUND: A reorganization of healthcare systems is required to meet the challenge of the increasing prevalence of chronic diseases, e.g. diabetes. In North America and Europe, several countries have thus developed national or regional chronic disease management programs. In Switzerland, such initiatives have only emerged recently. In 2010, the canton of Vaud set up the "Diabetes Cantonal Program", within the framework of which we conducted a study designed to ascertain the opinions of both diabetic patients and healthcare professionals on the elements that could be integrated into this program, the barriers and facilitators to its development, and the incentives that could motivate these actors to participate. METHODS: We organized eight focus groups: one with diabetic patients and one with healthcare professionals in each of the four sanitary areas of the canton of Vaud. The discussions were recorded, transcribed and submitted to a thematic content analysis. RESULTS: Patients and healthcare professionals were rather in favour of the implementation of a cantonal program, although patients were more cautious concerning its necessity. All participants envisioned a set of elements that could be integrated into this program. They also considered that the program could be developed more easily if it were adapted to patients' and professionals' needs and if it used existing structures and professionals. However, the difficulty of motivating both patients and professionals to participate was mentioned as a barrier to the development of this program. Quality or financial incentives could therefore be created to overcome this potential problem. CONCLUSION: The identification of the elements to consider, and of the barriers, facilitators and incentives to participate in a chronic disease management program, obtained by exploring the opinions of patients and healthcare professionals, should favour its further development and implementation.
Abstract:
This article sets out a theoretical framework for the study of organisational change within political alliances. To achieve this objective it uses as a starting point a series of premises, the most notable of which include the definition of organisational change as a discrete, complex and focussed phenomenon of changes in power within the party. In accordance with these premises, it analyses the synthetic model of organisational change proposed by Panebianco (1988). After examining its limitations, a number of amendments are proposed to adapt it to the way political alliances operate. The above has resulted in the design of four new models. In order to test its validity and explanatory power in a preliminary manner, the second part looks at the organisational change of the UDC within the CiU alliance between 1978 and 2001. The discussion and conclusions reached demonstrate the problems of determinism of the Panebianco model and suggest, tentatively, the importance of the power balance within the alliance as a key factor.
Abstract:
In this paper we examine whether variations in the level of public capital across Spain's provinces affected productivity levels over the period 1996-2005. The analysis is motivated by contemporary urban economics theory, involving a production function for the competitive sector of the economy ('industry') which includes the level of composite services derived from 'service' firms under monopolistic competition. The outcome is potentially increasing returns to scale resulting from pecuniary externalities deriving from internal increasing returns in the monopolistic competition sector. We extend the production function by also making (log) labour efficiency a function of (log) total public capital stock and (log) human capital stock, leading to a simple and empirically tractable reduced form linking productivity level to density of employment, human capital and public capital stock. The model is further extended to include technological externalities or spillovers across provinces. Using panel data methodology, we find significant elasticities for total capital stock and for human capital stock, and a significant impact for employment density. The finding that the effect of public capital is significantly different from zero, indicating that it has a direct effect even after controlling for employment density, is contrary to some of the earlier research findings which leave the question of the impact of public capital unresolved.
Abstract:
The distribution of larvae of Simulium goeldii was studied in four streams in upland tropical forest near Manaus, Amazonas, Brazil. In each month 32 points were sampled, each with an area of 30 x 50 cm. The areas of all substrates available were measured at each point. The larvae of S. goeldii were collected and later counted for all substrate types where larvae of this species were found. The available substrates were classified into eight types: roots, dry leaves, green leaves, branches, fruits, detritus, rocks and sand; only the first four types had larvae present. The Kruskal-Wallis test and analysis of variance indicated that the larvae occupy these substrates differently; the Newman-Keuls test identified the following differences in intensity of occupation of the substrates: branches differ from roots, dry leaves and green leaves, and green leaves differ from roots and dry leaves. The highest density of larvae was observed on green leaves. However, because the most abundant substrates in the study area were roots and dry leaves, I suggest that the latter two substrates are the most important ones for the establishment of this population of S. goeldii.
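The Kruskal-Wallis comparison named in the abstract can be illustrated with a minimal pure-Python computation of the H statistic (average ranks for ties, without the tie-correction factor). The function name and sample counts are invented for illustration:

```python
# Hedged sketch of the Kruskal-Wallis H statistic for comparing larval
# counts across substrate types. Data in the test are illustrative only.

def kruskal_wallis_h(*groups):
    """H statistic over k groups: rank all values jointly (average ranks
    for ties), then H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)."""
    pooled = sorted((value, gi) for gi, group in enumerate(groups)
                    for value in group)
    n = len(pooled)
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0        # average of ranks i+1 .. j (ties)
        for k in range(i, j):
            ranks[k] = avg
        i = j
    rank_sums = [0.0] * len(groups)
    for (value, gi), r in zip(pooled, ranks):
        rank_sums[gi] += r
    return (12.0 / (n * (n + 1))
            * sum(rs * rs / len(g) for rs, g in zip(rank_sums, groups))
            - 3 * (n + 1))
```

Under the null hypothesis of identical distributions, H is approximately chi-squared with k-1 degrees of freedom, so large H indicates that the groups (here, substrate types) differ.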
Abstract:
After a large-scale field trial performed in central Brazil aimed at the control of Chagas' disease vectors in an endemic area colonized by Triatoma infestans and T. sordida, a cost-effectiveness analysis for each insecticide/formulation was performed. It considered the operational costs and the prices of insecticides and formulations, related to the activity and persistence of each one. The end point was considered to be less than 90% of domiciliary units (house + annexes) free of infestation. The results showed good cost-effectiveness for a slow-release emulsifiable suspension (SRES) based on PVA and containing malathion as the active ingredient, as well as for the pyrethroids tested in this assay: cyfluthrin, cypermethrin, deltamethrin and permethrin.
Abstract:
The measurement of inter-connectedness in an economy using input-output tables is not new; however, much of the previous literature has not had any explicit dynamic dimension. Studies have tried to estimate the degree of inter-relatedness for an economy at a given point in time using one input-output table, and some have compared different economies at a point in time, but few have looked at the question of how inter-connectedness within an economy changes over time. The publication in 2010 of a consistent series of input-output tables for Scotland offers the researcher the opportunity to track changes in the degree of inter-connectedness over the seven-year period 1998 to 2007. The paper is in two parts. A simple measure of inter-connectedness is introduced in the first part of the paper and applied to the Scottish tables. In the second part of the paper an extraction method is applied sector by sector to the tables in order to estimate how inter-connectedness has changed over time for each industrial sector.
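One simple inter-connectedness measure of the kind such studies use is the share of intermediate (inter-industry) transactions in total gross output, computed directly from an input-output transactions matrix. This is only an illustrative measure, not necessarily the one used in the paper; the matrix and names are invented:

```python
# Hedged sketch: share of intermediate transactions in total gross output,
# a simple interconnectedness indicator from an input-output table.
# Matrix and figures are invented for illustration.

def intermediate_share(transactions, total_output):
    """transactions: square matrix (list of lists) of inter-industry flows;
    total_output: gross output by sector. A higher share means sectors
    buy more of each other's output relative to final demand."""
    intermediate = sum(sum(row) for row in transactions)
    return intermediate / sum(total_output)
```

Tracking this share across the annual tables would show whether the economy's sectors became more or less intertwined over the period.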
Abstract:
This study presents a first attempt to extend the "Multi-scale integrated analysis of societal and ecosystem metabolism" (MuSIASEM) approach to a spatial dimension using GIS techniques in the metropolitan area of Barcelona. We use a combination of census and commercial databases, along with a detailed land cover map, to create a layer of Common Geographic Units that we populate with the local values of human time spent in different activities according to the MuSIASEM hierarchical typology. In this way, we mapped the hours of available human time against the working hours spent in different locations, highlighting the gradients in spatial density between the residential locations of workers (generating the work supply) and the places where the working hours actually take place. We found a strongly trimodal pattern of clumps of areas with different combinations of values of time spent on household activities and on paid work. We also measured and mapped the spatial segregation between these two activities and put forward the conjecture that this segregation increases with higher energy throughput, as the size of the functional units must be able to cope with the flow of exosomatic energy. Finally, we discuss the effectiveness of the approach by comparing our geographic representation of exosomatic throughput to the one derived from conventional methods.
Abstract:
Aging is ubiquitous to the human condition. The MRI correlates of healthy aging have been extensively investigated using a range of modalities, including volumetric MRI, quantitative MRI (qMRI), and diffusion tensor imaging. Despite this, reported brainstem-related changes remain sparse. This is, in part, due to the technical and methodological limitations in quantitatively assessing and statistically analyzing this region. By utilizing a new method of brainstem segmentation, a large cohort of 100 healthy adults was assessed in this study for the effects of aging within the human brainstem in vivo. Using qMRI, tensor-based morphometry (TBM), and voxel-based quantification (VBQ), the volumetric and quantitative changes across healthy adults between 19 and 75 years of age were characterized. In addition to the increased R2* in the substantia nigra, corresponding to increasing iron deposition with age, several novel findings are reported in the current study. These include selective volumetric loss of the brachium conjunctivum, with a corresponding decrease in magnetization transfer and increase in proton density (PD), accounting for the previously described "midbrain shrinkage." Additionally, we found increases in R1 and PD in several pontine and medullary structures. We consider these changes in the context of well-characterized functional age-related changes, and propose potential biophysical mechanisms. This study provides a detailed quantitative analysis of the internal architecture of the brainstem and provides a baseline for further studies of neurodegenerative diseases that are characterized by early, pre-clinical involvement of the brainstem, such as Parkinson's and Alzheimer's diseases.
Abstract:
Pond-breeding amphibians are affected by site-specific factors and by regional and landscape-scale patterns of land use. Recent anthropogenic landscape modifications (drainage, agriculture intensification, larger road networks, and increased traffic) affect species by reducing the suitable habitat area and fragmenting the remaining populations. Using a robust concentric approach based on permutation tests, we evaluated the impact of recent landscape changes on the presence of the endangered European tree frog (Hyla arborea) in wetlands. We analyzed the frequency of 1 traffic and 14 land-use indices at 20 circular ranges (from 100-m up to 2-km radii) around 76 ponds identified in western Switzerland. Urban areas and road surfaces had a strong adverse effect on tree frog presence, even at relatively great distances (from 100 m up to 1 km). When traffic measurements were considered instead of road surfaces, the effect increased, suggesting a negative vehicle-induced impact. Altogether, our results indicate that urbanization and traffic must be taken into account when pond creation is an option in conservation management plans, as is the case for the European tree frog in western Switzerland. We conclude that our easy-to-use and robust concentric method of analysis can successfully assist managers in identifying potential sites for pond creation, where the probability of the presence of tree frogs is maximized.
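The permutation-test logic underlying a concentric analysis can be sketched generically: compare a land-use index between ponds with and without the species, and assess significance by shuffling the presence labels. This is an invented illustration of the general technique, not the study's actual procedure:

```python
# Hedged sketch of a two-sided permutation test on a difference of group
# means (e.g. an urban-area index around occupied vs. unoccupied ponds).
# All data and names are illustrative placeholders.
import random

def permutation_test(values_present, values_absent, n_perm=10000, seed=0):
    """Return the permutation p-value for the observed mean difference."""
    rng = random.Random(seed)
    n_p = len(values_present)
    observed = (sum(values_present) / n_p
                - sum(values_absent) / len(values_absent))
    pooled = list(values_present) + list(values_absent)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                    # break the label-value link
        diff = (sum(pooled[:n_p]) / n_p
                - sum(pooled[n_p:]) / (len(pooled) - n_p))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_perm
```

Repeating such a test at each buffer radius (100 m up to 2 km) yields the distance profile of an index's effect, in the spirit of the concentric approach.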
Abstract:
South Peak is a 7-Mm3 potentially unstable rock mass located adjacent to the 1903 Frank Slide on Turtle Mountain, Alberta. This paper presents three-dimensional numerical rock slope stability models and compares them with a previous conceptual slope instability model based on discontinuity surfaces identified using an airborne LiDAR digital elevation model (DEM). Rock mass conditions at South Peak are described using the Geological Strength Index and point load tests, whilst the mean discontinuity set orientations and characteristics are based on approximately 500 field measurements. A kinematic analysis was first conducted to evaluate probable simple discontinuity-controlled failure modes. The potential for wedge failure was further assessed by considering the orientation of wedge intersections over the airborne LiDAR DEM and through a limit equilibrium combination analysis. Block theory was used to evaluate the finiteness and removability of blocks in the rock mass. Finally, the complex interaction between discontinuity sets and the topography within South Peak was investigated through three-dimensional distinct element models using the code 3DEC. The influence of individual discontinuity sets, scale effects, friction angle and persistence along the discontinuity surfaces on the slope stability conditions was investigated using this code.
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences based on steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by a high-throughput DNA sequencing procedure. ChIP-Seq is a novel technique with great potential to replace older techniques for the mapping of protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome does not follow a uniform distribution, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I DNA binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with the DNA wrapped around the nucleosome.
We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
Abstract:
What genotype should the scientist specify for conducting a database search to try to find the source of a low-template-DNA (lt-DNA) trace? When the scientist answers this question, he or she makes a decision. Here, we approach this decision problem from a normative point of view by defining a decision-theoretic framework for answering this question for one locus. This framework combines the probability distribution describing the uncertainty over the trace's donor's possible genotypes with a loss function describing the scientist's preferences concerning false exclusions and false inclusions that may result from the database search. According to this approach, the scientist should choose the genotype designation that minimizes the expected loss. To illustrate the results produced by this approach, we apply it to two hypothetical cases: (1) the case of observing one peak for allele xi on a single electropherogram, and (2) the case of observing one peak for allele xi on one replicate, and a pair of peaks for alleles xi and xj, i ≠ j, on a second replicate. Given that the probabilities of allele drop-out are defined as functions of the observed peak heights, the threshold values marking the turning points when the scientist should switch from one designation to another are derived in terms of the observed peak heights. For each case, sensitivity analyses show the impact of the model's parameters on these threshold values. The results support the conclusion that the procedure should not focus on a single threshold value for making this decision for all alleles, all loci and in all laboratories.
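The decision rule described above (choose the genotype designation that minimizes expected loss under the posterior over the donor's genotype) can be sketched generically. The posterior probabilities and loss values below are illustrative placeholders, not the paper's model:

```python
# Hedged sketch of expected-loss minimization over genotype designations.
# 'posterior' and 'loss' contents are invented for illustration; the paper
# derives probabilities from peak heights and drop-out models.

def best_designation(posterior, loss):
    """posterior: {true_genotype: probability};
    loss: {(designation, true_genotype): loss for that outcome, e.g.
    penalizing false exclusions and false inclusions differently}.
    Returns the designation minimizing expected loss."""
    designations = {d for (d, _) in loss}
    def expected_loss(d):
        return sum(p * loss[(d, g)] for g, p in posterior.items())
    return min(designations, key=expected_loss)
```

As in the abstract, varying the drop-out probability in the posterior shifts the point at which the minimizing designation switches, which is what produces the peak-height threshold values rather than a single universal cut-off.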