Abstract:
The identification of plausible causes for water body status deterioration is much easier when it can build on available, reliable, extensive, and comprehensive biogeochemical monitoring data (preferably aggregated in a database). A plausible identification of such causes is a prerequisite for well-informed decisions on which mitigation or remediation measures to take. In this chapter, a rationale for an extended monitoring programme is first provided and compared to the one required by the Water Framework Directive (WFD); this proposal includes a list of relevant parameters that are needed for an integrated, a priori status assessment. Second, a few sophisticated statistical tools are described that subsequently allow for the estimation of the magnitude of impairment as well as the likely relative importance of different stressors in a multiple-stressor environment. The advantages and restrictions of these rather complicated analytical methods are discussed. Finally, the use of Decision Support Systems (DSS) is advocated with regard to the specific WFD implementation requirements.
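The chapter's specific statistical tools are not named in this summary; as a loose illustration of the kind of stressor-ranking analysis described, the sketch below fits a multiple regression on standardized predictors and compares coefficient magnitudes. All variable names and data are hypothetical.

```python
# Hypothetical sketch: rank candidate stressors by the magnitude of
# their standardized regression coefficients against a status metric.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# One row per monitoring site; columns are invented stressor candidates
# plus an ecological status metric (e.g., a biotic index).
df = pd.DataFrame({
    "nutrients": rng.random(50),
    "fine_sediment": rng.random(50),
    "temperature": rng.random(50),
})
df["status_metric"] = (-0.6 * df["nutrients"]
                       - 0.3 * df["fine_sediment"]
                       + 0.1 * rng.standard_normal(50))

stressors = ["nutrients", "fine_sediment", "temperature"]
X = StandardScaler().fit_transform(df[stressors])
model = LinearRegression().fit(X, df["status_metric"])

# A larger |coefficient| suggests greater relative importance of that
# stressor for the modeled status metric.
for name, coef in sorted(zip(stressors, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:>14s}: {coef:+.2f}")
```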
Abstract:
BACKGROUND Patent foramen ovale (PFO) is associated with cryptogenic stroke (CS), although the pathogenicity of a discovered PFO in the setting of CS is typically unclear. Transesophageal echocardiography features such as PFO size, an associated hypermobile septum, and the presence of a right-to-left shunt at rest have all been proposed as markers of risk. The association of these transesophageal echocardiography features with other markers of pathogenicity has not been examined. METHODS AND RESULTS We used a recently derived score based on clinical and neuroimaging features to stratify patients with PFO and CS by the probability that their stroke is PFO-attributable. We examined whether high-risk transesophageal echocardiography features are seen more frequently in patients more likely to have had a PFO-attributable stroke (n=637) than in those less likely to have had one (n=657). A large physiologic shunt was not seen more frequently among those with probable PFO-attributable strokes (odds ratio [OR], 0.92; P=0.53). Neither a hypermobile septum nor a right-to-left shunt at rest was detected more often in those with a probable PFO-attributable stroke (OR, 0.80; P=0.45; OR, 1.15; P=0.11, respectively). CONCLUSIONS We found no evidence that the proposed transesophageal echocardiography risk markers of large PFO size, hypermobile septum, and presence of a right-to-left shunt at rest are associated with clinical features suggesting that a CS is PFO-attributable. Additional tools to describe PFOs may be useful in helping to determine whether an observed PFO is incidental or pathogenically related to CS.
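As background for readers unfamiliar with these statistics, the sketch below shows how an odds ratio and P value are obtained from a 2×2 contingency table with scipy; the counts are invented for illustration and do not reproduce the study's data.

```python
# Hypothetical 2x2 table: rows are stroke groups, columns are presence/
# absence of a transesophageal echocardiography feature. Counts invented.
from scipy.stats import fisher_exact

table = [
    [210, 427],  # probable PFO-attributable stroke (n = 637)
    [225, 432],  # less likely PFO-attributable stroke (n = 657)
]

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, P = {p_value:.2f}")
```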
Abstract:
Desertification research conventionally focuses on the problem – that is, degradation – while neglecting the appraisal of successful conservation practices. Based on the premise that Sustainable Land Management (SLM) experiences are not sufficiently or comprehensively documented, evaluated, and shared, the World Overview of Conservation Approaches and Technologies (WOCAT) initiative (www.wocat.net), in collaboration with FAO’s Land Degradation Assessment in Drylands (LADA) project (www.fao.org/nr/lada/) and the EU’s DESIRE project (http://www.desire-project.eu/), has developed standardised tools and methods for compiling and evaluating the biophysical and socio-economic knowledge available about SLM. The tools allow SLM specialists to share their knowledge and assess the impact of SLM at the local, national, and global levels. As a whole, the WOCAT–LADA–DESIRE methodology comprises tools for documenting, self-evaluating, and assessing the impact of SLM practices, as well as for knowledge sharing and decision support in the field, at the planning level, and in scaling up identified good practices. SLM depends on flexibility and responsiveness to the changing, complex ecological and socio-economic causes of land degradation, and the WOCAT tools are designed to reflect and capture this capacity of SLM. To take account of new challenges and meet the emerging needs of WOCAT users, the tools are continually developed further and adapted. Recent enhancements include tools for improved data analysis (impact and cost/benefit), cross-scale mapping, climate change adaptation and disaster risk management, and easier reporting on SLM best practices to the UNCCD and other national and international partners. Moreover, WOCAT has begun to give land users a voice by complementing conventional documentation with video clips straight from the field. To promote the scaling up of SLM, WOCAT works with key institutions and partners at the local and national levels, for example advisory services and implementation projects. Keywords: Sustainable Land Management (SLM), knowledge management, decision-making, WOCAT–LADA–DESIRE methodology.
Abstract:
Previous studies have shown that collective property rights offer higher flexibility than individual property and improve sustainable community-based forest management. Our case study, carried out in the Beni department of Bolivia, does not contradict this assertion, but it shows that collective rights have been granted in areas where ecological contexts and market facilities were less favourable to intensive land use. Previous experiences suggest investigating political processes in order to understand the criteria according to which access rights were distributed. Based on remote sensing and on a multi-level land governance framework, our research confirms that land placed under collective rights, compared to individual property, is less affected by deforestation among Andean settlements. However, analysis of the historical process of land distribution in the area shows that the distribution of property rights is the result of a political process based on economic, spatial, and environmental strategies defined by multiple stakeholders. Collective titles were established in the more remote areas and distributed to communities with lower productive potential. Land rights are thus a secondary factor in forest cover change, which results from diverse political compromises based on population distribution, accessibility, environmental perceptions, and expected production or extraction incomes.
Abstract:
Detector uniformity is a fundamental performance characteristic of all modern gamma camera systems, and ensuring a stable, uniform detector response is critical for maintaining clinical images that are free of artifact. For these reasons, the assessment of detector uniformity is one of the most common activities associated with a successful clinical quality assurance program in gamma camera imaging. The evaluation of this parameter, however, is often unclear because it is highly dependent upon acquisition conditions, reviewer expertise, and the application of somewhat arbitrary limits that do not characterize the spatial location of the non-uniformities. Furthermore, as the goal of any robust quality control program is the determination of significant deviations from standard or baseline conditions, clinicians and vendors often neglect the temporal nature of detector degradation (1). This thesis describes the development and testing of new methods for monitoring detector uniformity. These techniques provide more quantitative, sensitive, and specific feedback to the reviewer, so that he or she may be better equipped to identify performance degradation before it manifests in clinical images. The methods exploit the temporal nature of detector degradation and spatially segment distinct regions of non-uniformity using multi-resolution decomposition. These techniques were tested on synthetic phantom data using different degradation functions, as well as on experimentally acquired time series of floods with induced, progressively worsening defects present within the field-of-view. The sensitivity of conventional, global figures-of-merit for detecting changes in uniformity was evaluated and compared to that of these new image-space techniques. The image-space algorithms provide a reproducible means of detecting regions of non-uniformity before any single flood image has a NEMA uniformity value in excess of 5%. The sensitivity of these image-space algorithms was found to depend on the size and magnitude of the non-uniformities, as well as on the nature of the cause of the non-uniform region. A trend analysis of the conventional figures-of-merit demonstrated their sensitivity to shifts in detector uniformity. Because the image-space algorithms are computationally efficient, they should be used concomitantly with the trending of the global figures-of-merit in order to provide the reviewer with a richer assessment of gamma camera detector uniformity characteristics.
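The NEMA figure-of-merit mentioned above can be stated concretely: integral uniformity is (max − min)/(max + min) × 100, computed on a flood image after the standard nine-point smoothing. The sketch below follows that definition but, for brevity, omits the useful-field-of-view masking and count-density requirements of the full standard.

```python
# Sketch of NEMA integral uniformity on a (synthetic) flood image.
import numpy as np
from scipy.ndimage import convolve

def nema_integral_uniformity(flood: np.ndarray) -> float:
    # Nine-point smoothing kernel applied before scoring.
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float) / 16.0
    smoothed = convolve(flood.astype(float), kernel, mode="nearest")
    lo, hi = smoothed.min(), smoothed.max()
    return (hi - lo) / (hi + lo) * 100.0

# Uniform Poisson flood with a small induced cold defect.
rng = np.random.default_rng(0)
flood = rng.poisson(10_000, size=(64, 64)).astype(float)
flood[30:34, 30:34] *= 0.9
print(f"Integral uniformity: {nema_integral_uniformity(flood):.2f}%")
```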
Abstract:
Quantitative measures of polygon shape and orientation are important elements of geospatial analysis. Such measures are particularly valuable in the case of lakes, where shape and orientation patterns can help identify the geomorphological agents behind lake formation and evolution. However, the lack of built-in tools designed for this kind of analysis in commercial geographic information system (GIS) software packages has meant that researchers often rely on tools and workarounds that are not always accurate. Here, an easy-to-use method to measure rectangularity R, ellipticity E, and orientation O is developed. In addition, a new rectangularity vs. ellipticity index, REi, is defined. Following a step-by-step process, it is shown how these measures and the index can be easily calculated using a combination of GIS built-in functions. The identification of shapes and estimation of orientations performed by this method is applied to the case study of the geometric and oriented lakes of the Llanos de Moxos, in the Bolivian Amazon, where shape and orientation have been the two most important elements studied to infer possible formation mechanisms. It is shown that these new measures unveil shape and orientation patterns that would otherwise have been hard to identify.
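Outside commercial GIS packages, such measures can also be scripted. The sketch below uses the shapely library; the paper's exact definitions of R, E, O, and REi are not given here, so the area-ratio formulas below (against the minimum rotated rectangle and its inscribed ellipse) are common proxies rather than the published method.

```python
# Sketch of lake shape measures with shapely; formulas are proxies.
import math
from shapely.geometry import Polygon

lake = Polygon([(0, 0), (10, 1), (11, 5), (1, 4)])  # hypothetical outline

mrr = lake.minimum_rotated_rectangle  # minimum rotated bounding rectangle
c = list(mrr.exterior.coords)
e1, e2 = math.dist(c[0], c[1]), math.dist(c[1], c[2])
length, width = max(e1, e2), min(e1, e2)

R = lake.area / mrr.area                        # 1.0 for a perfect rectangle
E = lake.area / (math.pi * length * width / 4)  # 1.0 for a perfect ellipse

# Orientation: azimuth of the rectangle's long axis, degrees in [0, 180).
(ax, ay), (bx, by) = (c[0], c[1]) if e1 >= e2 else (c[1], c[2])
O = math.degrees(math.atan2(by - ay, bx - ax)) % 180

print(f"R = {R:.2f}, E = {E:.2f}, O = {O:.1f} deg")
```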
Abstract:
PURPOSE Autografts are considered to support bone regeneration. Paracrine factors released from cortical bone might contribute to the overall process of graft consolidation. The aim of this study was to characterize these paracrine factors by means of proteomic analysis. MATERIALS AND METHODS Bone-conditioned medium (BCM) was prepared from fresh bone chips of porcine mandibles and subjected to proteomic analysis. Proteins were categorized and clustered using the bioinformatic tools UNIPROT and PANTHER, respectively. RESULTS Proteomic analysis showed that BCM contains more than 150 proteins, of which 43 were categorized as "secreted" and "extracellular matrix." Growth factors were discovered that are not only detectable in BCM but potentially also target cellular processes involved in bone regeneration, e.g., pleiotrophin, galectin-1, transforming growth factor beta (TGF-β)-induced gene (TGFBI), lactotransferrin, insulin-like growth factor (IGF)-binding protein 5, latency-associated peptide forming a complex with TGF-β1, and TGF-β2. CONCLUSION The present results demonstrate that cortical bone chips release a large spectrum of proteins capable of modulating cellular aspects of bone regeneration. The data provide the basis for future studies to understand how these paracrine factors may contribute to the complex process of graft consolidation.
Abstract:
The brain is a complex neural network with a hierarchical organization, and the mapping of its elements and connections is an important step towards understanding its function. Recent developments in diffusion-weighted imaging have provided the opportunity to reconstruct the whole-brain structural network in vivo at a large scale and to study the brain's structural substrate in a framework that is close to the current understanding of brain function. However, methods to construct the connectome are still under development and should be carefully evaluated. To this end, the first two studies included in my thesis aimed at improving the analytical tools specific to the methodology of brain structural networks. The first of these papers assessed the repeatability of the most common global and local network metrics used in the literature to characterize the connectome, while in the second paper the validity of further metrics based on the concept of communicability was evaluated. Communicability is a broader measure of connectivity which also accounts for parallel and indirect connections. These additional paths may be important for reorganizational mechanisms in the presence of lesions as well as for enhancing integration in the network. These studies showed good to excellent repeatability of global network metrics when the same methodological pipeline was applied, but more variability was detected when considering local network metrics or when using different thresholding strategies. In addition, communicability metrics were found to add insight into the integration properties of the network by detecting subsets of nodes that were highly interconnected or vulnerable to lesions. The other two studies used methods based on diffusion-weighted imaging to investigate the relationship between functional and structural connectivity and the etiology of schizophrenia. The third study integrated functional oscillations measured using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) as well as diffusion-weighted imaging data. This multimodal approach revealed a positive relationship between individual fluctuations of the EEG alpha-frequency and diffusion properties of specific connections of two resting-state networks. Finally, in the fourth study diffusion-weighted imaging was used to probe for a relationship between the underlying white matter tissue structure and season of birth in schizophrenia patients. The results are in line with the neurodevelopmental hypothesis of early pathological mechanisms as the origin of schizophrenia. The different analytical approaches selected in these studies also provide arguments for discussing the current limitations in the analysis of brain structural networks. To sum up, the first two studies presented in this thesis illustrated the potential of brain structural network analysis to provide useful information on features of brain functional segregation and integration using reliable network metrics. In the other two studies alternative approaches were presented. The common discussion of the four studies enabled us to highlight the benefits and possibilities of connectome analysis as well as some current limitations.
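Communicability, in the sense of Estrada and Hatano, weighs walks of all lengths between two nodes via the matrix exponential of the adjacency matrix, so parallel and indirect connections contribute alongside the shortest path. Below is a minimal sketch on a stand-in graph, with no claim that this is the thesis's exact pipeline.

```python
# Communicability accounts for indirect and parallel walks between nodes.
import networkx as nx

G = nx.karate_club_graph()    # stand-in for a brain structural network
comm = nx.communicability(G)  # based on the matrix exponential of A

# Non-adjacent nodes can still communicate through indirect paths:
print(f"edge 0-9 present: {G.has_edge(0, 9)}")
print(f"communicability(0, 9) = {comm[0][9]:.3f}")
```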
Abstract:
BACKGROUND Precise detection of volume change allows for better estimation of the biological behavior of lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological workflows and give additional confidence to radiologists. PURPOSE To compare two different postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) in CT volumetric measurement and to analyze the effect of a soft (B30) and a hard (B70) reconstruction filter on automated volume measurement. MATERIAL AND METHODS Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT exam was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24 × 1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time product, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two postprocessing algorithms based on the two reconstruction filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between the two methods were used for comparison. RESULTS With the soft reconstruction filter, the LMS system produced mean nodule volumes that were 34.1% (P < 0.0001) larger than those produced by the LungCARE® system. The VME% was 42.2%, with limits of agreement between -53.9% and 138.4%. Volume measurements with the soft filter (B30) were significantly larger than with the hard filter (B70): 11.2% for LMS and 1.6% for LungCARE®, respectively (both with P < 0.05). LMS measured greater volumes with both filters, 13.6% for the soft and 3.8% for the hard filter, respectively (P < 0.01 and P > 0.05). CONCLUSION There is substantial inter-software (LMS/LungCARE®) as well as intra-software (B30/B70) variability in lung nodule volume measurement; it is therefore mandatory to use the same equipment with the same reconstruction filter for the follow-up of lung nodule volume.
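For readers unfamiliar with these agreement statistics, the sketch below computes a per-nodule relative volume difference and Bland-Altman-style limits of agreement; whether the paper normalized differences by the mean of the two measurements, as assumed here, is not stated, and the volumes are invented.

```python
# Hypothetical volumes (mm^3) for the same nodules from two algorithms.
import numpy as np

v_lms = np.array([310.0, 150.0, 520.0, 95.0])
v_lungcare = np.array([240.0, 118.0, 400.0, 70.0])

# Per-nodule relative difference, expressed against the pairwise mean.
vme = (v_lms - v_lungcare) / ((v_lms + v_lungcare) / 2) * 100

mean_vme = vme.mean()
half_width = 1.96 * vme.std(ddof=1)  # Bland-Altman limits of agreement
print(f"mean VME% = {mean_vme:.1f}%")
print(f"limits of agreement: {mean_vme - half_width:.1f}% to {mean_vme + half_width:.1f}%")
```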
Abstract:
We present the results of an investigation into the nature of information needs of software developers who work in projects that are part of larger ecosystems. This work is based on a quantitative survey of 75 professional software developers. We corroborate the results identified in the survey with needs and motivations proposed in a previous survey and discover that tool support for developers working in an ecosystem context is even more meager than we thought: mailing lists and internet search are the most popular tools developers use to satisfy their ecosystem-related information needs.
Abstract:
This review reports on the application of charge density analysis in the field of crystal engineering, which is one of the fastest-growing and most productive areas of the entire field of crystallography. While methods to calculate or measure electron density are not discussed in detail, the derived quantities and tools useful for crystal engineering analyses are presented, and their applications in the recent literature are illustrated. Potential developments and future perspectives are also highlighted and critically discussed.
Abstract:
Sequence analysis and optimal matching are useful heuristic tools for the descriptive analysis of heterogeneous individual pathways such as educational careers, job sequences, or patterns of family formation. However, to date it remains unclear how to handle the inevitable problems caused by missing values in such analyses. Multiple Imputation (MI) offers a possible solution to this problem, but it has not been tested in the context of sequence analysis. Against this background, we contribute to the literature by assessing the potential of MI in the context of sequence analysis using an empirical example. Methodologically, we draw upon the work of Brendan Halpin and extend it to additional types of missing value patterns. Our empirical case is a sequence analysis of panel data with substantial attrition that examines the typical patterns and the persistence of sex segregation in school-to-work transitions in Switzerland. The preliminary results indicate that MI is a valuable methodology for handling missing values due to panel mortality in the context of sequence analysis. MI is especially useful in facilitating a sound interpretation of the resulting sequence types.
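For readers unfamiliar with optimal matching, it is essentially an edit distance between state sequences with insertion/deletion and substitution costs; pairwise distances computed this way are then typically clustered into sequence types. A minimal sketch with invented costs and sequences:

```python
# Optimal matching distance between two state sequences via dynamic
# programming (edit distance with indel and substitution costs).
def optimal_matching(a: list[str], b: list[str],
                     indel: float = 1.0, sub: float = 2.0) -> float:
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * indel
    for j in range(1, n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + indel,     # deletion
                          d[i][j - 1] + indel,     # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n]

# Monthly states: E = education, W = work, U = unemployment.
print(optimal_matching(list("EEEWWW"), list("EEUWWW")))  # -> 2.0
```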
Abstract:
The nematode Caenorhabditis elegans is characterized by many features that make it highly attractive for studying nuclear pore complexes (NPCs) and nucleocytoplasmic transport. NPC composition and structure are highly conserved in nematodes, and because the worm is amenable to a variety of genetic manipulations, key aspects of nuclear envelope dynamics can be observed in great detail during breakdown, reassembly, and interphase. In this chapter, we provide an overview of some of the most relevant modern techniques that allow researchers unfamiliar with C. elegans to embark on studies of nucleoporins in an intact organism through its development from zygote to aging adult. We focus on methods relevant to generating loss-of-function phenotypes and their analysis by advanced microscopy. Extensive references to available reagents, such as mutants, transgenic strains, and antibodies, are equally useful to scientists with or without prior C. elegans or nucleoporin experience.
Abstract:
The nematode Caenorhabditis elegans is a well-known model organism used to investigate fundamental questions in biology. Motility assays of this small roundworm are designed to study the relationships between genes and behavior. Commonly, motility analysis is used to classify nematode movements and characterize them quantitatively. Over the past years, C. elegans' motility has been studied across a wide range of environments, including crawling on substrates, swimming in fluids, and locomoting through microfluidic substrates. However, each environment often requires customized image processing tools that rely on heuristic parameter tuning. In the present study, we propose a novel Multi-Environment Model Estimation (MEME) framework for automated image segmentation that is versatile across various environments. The MEME platform is constructed around the concept of Mixture of Gaussians (MOG) models, in which statistical models for both the background environment and the nematode appearance are explicitly learned and used to accurately segment a target nematode. Our method is designed to reduce the burden often imposed on users; here, only a single image that includes a nematode in its environment must be provided for model learning. In addition, our platform enables the extraction of nematode ‘skeletons’ for straightforward motility quantification. We test our algorithm on various locomotive environments and compare its performance with an intensity-based thresholding method. Overall, MEME outperforms the threshold-based approach in the overwhelming majority of cases examined. Ultimately, MEME provides researchers with an attractive platform for C. elegans segmentation and ‘skeletonizing’ across a wide range of motility assays.
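The core statistical step behind a Mixture of Gaussians segmentation can be sketched as follows: fit a two-component mixture to pixel intensities and take the darker component as the worm. This illustrates only the mixture idea, not the full MEME pipeline, which learns separate models for background and nematode appearance; the image here is synthetic.

```python
# Two-component Gaussian mixture over pixel intensities: the darker
# component is taken to be the nematode.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
frame = rng.random((120, 160))   # stand-in for a microscopy frame
frame[50:60, 40:100] *= 0.3      # darker, worm-like region

gmm = GaussianMixture(n_components=2, random_state=0).fit(frame.reshape(-1, 1))
labels = gmm.predict(frame.reshape(-1, 1)).reshape(frame.shape)

worm_component = int(np.argmin(gmm.means_.ravel()))
mask = labels == worm_component  # binary segmentation of the worm
print(f"segmented worm pixels: {mask.sum()}")
```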
Abstract:
The adenosine receptors are members of the G-protein coupled receptor (GPCR) family, which represents the largest class of cell-surface proteins mediating cellular communication. As a result, GPCRs are formidable drug targets, and it is estimated that approximately 30% of marketed drugs act through members of this receptor class. There are four known subtypes of adenosine receptors: A1, A2A, A2B, and A3. The adenosine A1 receptor, which is the subject of this presentation, mediates the physiological effects of adenosine in various tissues including the brain, heart, kidney, and adipocytes. In the brain, for instance, its role in epilepsy and ischemia has been the focus of many studies. Previous attempts to study the biosynthesis, trafficking, and agonist-induced internalisation of the adenosine A1 receptor in neurons using fluorescent protein-receptor fusion constructs have been hampered by the sheer size of the fluorescent protein (GFP), which ultimately affected the function of the receptor. We have therefore initiated a research programme to develop small-molecule fluorescent agonists that selectively activate the adenosine A1 receptor. Our probe design is based on the endogenous ligand adenosine and the known unselective adenosine receptor agonist NECA. We have synthesised a small library of non-fluorescent adenosine derivatives that bear different cyclic and bicyclic moieties at the 6-position of the purine ring and have evaluated the pharmacology of these compounds using a yeast-based assay. This analysis revealed compounds with interesting behaviour, i.e., exhibiting subtype-selectivity and biased signalling, that could potentially be used as tool compounds in their own right for cellular studies of the adenosine A1 receptor. Furthermore, we have also linked fluorescent dyes to the purine ring and discovered fluorescent compounds that can activate the adenosine A1 receptor.