953 results for Graph-Based Metrics


Relevance: 30.00%

Abstract:

Computational network analysis provides new methods to analyze the brain's structural organization based on diffusion imaging tractography data. Networks are characterized by global and local metrics that have recently given promising insights into diagnosis and the further understanding of psychiatric and neurologic disorders. Most of these metrics are based on the idea that information in a network flows along the shortest paths. In contrast to this notion, communicability is a broader measure of connectivity which assumes that information could flow along all possible paths between two nodes. In our work, the features of network metrics related to communicability were explored for the first time in the healthy structural brain network. In addition, the sensitivity of such metrics was analysed using simulated lesions to specific nodes and network connections. Results showed advantages of communicability over conventional metrics in detecting densely connected nodes as well as subsets of nodes vulnerable to lesions. In addition, communicability centrality was shown to be widely affected by the lesions, and the changes were negatively correlated with the distance from the lesion site. In summary, our analysis suggests that communicability metrics may provide insight into the integrative properties of the structural brain network and that these metrics may be useful for the analysis of brain networks in the presence of lesions. Nevertheless, the interpretation of communicability is not straightforward; hence these metrics should be used as a supplement to the more standard connectivity network metrics.
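For context, communicability is commonly defined (following Estrada and Hatano) as the walk-weighted sum C[p, q] = sum over k of (A^k)[p, q] / k! = (e^A)[p, q]. The sketch below is a minimal illustration of that standard definition; the example graph and node indices are placeholders, not data or code from the study above.

# Minimal sketch of graph communicability (standard Estrada-Hatano definition);
# the example graph and node indices are placeholders, not the study's data.
import numpy as np
import networkx as nx
from scipy.linalg import expm

G = nx.karate_club_graph()               # stand-in for a structural brain network
A = nx.to_numpy_array(G)

# Communicability matrix: C = e^A, i.e. C[p, q] sums contributions from walks
# of every length k between p and q, down-weighted by 1/k!.
C = expm(A)
print("communicability between nodes 0 and 33:", C[0, 33])

# One simple per-node summary ("how reachable a node is over all walks"):
node_comm = C.sum(axis=1) - np.diag(C)   # exclude self-communicability
print("most communicable node:", int(np.argmax(node_comm)))

Unlike shortest-path metrics, every parallel and indirect walk contributes here, which is why lesioning one node can change the scores of many others.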

Relevance: 30.00%

Abstract:

The brain is a complex neural network with a hierarchical organization, and the mapping of its elements and connections is an important step towards the understanding of its function. Recent developments in diffusion-weighted imaging have provided the opportunity to reconstruct the whole-brain structural network in vivo at a large scale and to study the brain structural substrate in a framework that is close to the current understanding of brain function. However, methods to construct the connectome are still under development and they should be carefully evaluated. To this end, the first two studies included in my thesis aimed at improving the analytical tools specific to the methodology of brain structural networks. The first of these papers assessed the repeatability of the most common global and local network metrics used in the literature to characterize the connectome, while in the second paper the validity of further metrics based on the concept of communicability was evaluated. Communicability is a broader measure of connectivity which also accounts for parallel and indirect connections. These additional paths may be important for reorganizational mechanisms in the presence of lesions as well as for enhancing integration in the network. These studies showed good to excellent repeatability of global network metrics when the same methodological pipeline was applied, but more variability was detected when considering local network metrics or when using different thresholding strategies. In addition, communicability metrics were found to add some insight into the integration properties of the network by detecting subsets of nodes that were highly interconnected or vulnerable to lesions. The other two studies used methods based on diffusion-weighted imaging to investigate the relationship between functional and structural connectivity and the etiology of schizophrenia. The third study integrated functional oscillations measured using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) as well as diffusion-weighted imaging data. The multimodal approach that was applied revealed a positive relationship between individual fluctuations of the EEG alpha-frequency and diffusion properties of specific connections of two resting-state networks. Finally, in the fourth study, diffusion-weighted imaging was used to probe for a relationship between the underlying white matter tissue structure and season of birth in schizophrenia patients. The results are in line with the neurodevelopmental hypothesis of early pathological mechanisms as the origin of schizophrenia. The different analytical approaches selected in these studies also provide arguments for discussion of the current limitations in the analysis of brain structural networks. To sum up, the first two studies presented in this thesis illustrated the potential of brain structural network analysis to provide useful information on features of brain functional segregation and integration using reliable network metrics. In the other two studies, alternative approaches were presented. The common discussion of the four studies enabled us to highlight the benefits and possibilities for the analysis of the connectome as well as some current limitations.
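As background on the kind of metrics whose repeatability is discussed above, the sketch below computes a few global and local network measures commonly reported for structural connectomes; it is a generic illustration with a synthetic graph, not the pipeline evaluated in the thesis.

# Generic examples of common global and local network metrics (not the thesis pipeline).
import networkx as nx

G = nx.connected_watts_strogatz_graph(90, 6, 0.1, seed=1)  # placeholder 90-node connectome

# Global metrics: one number summarizing the whole network
global_clustering = nx.average_clustering(G)
char_path_length = nx.average_shortest_path_length(G)      # defined for connected graphs
global_efficiency = nx.global_efficiency(G)

# Local metrics: one number per node
degree = dict(G.degree())
local_clustering = nx.clustering(G)
betweenness = nx.betweenness_centrality(G)

print(global_clustering, char_path_length, global_efficiency)
print(degree[0], local_clustering[0], betweenness[0])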

Relevance: 30.00%

Abstract:

This paper addresses the issue of fully automatic segmentation of a hip CT image with the goal of preserving the joint structure for clinical applications in hip disease diagnosis and treatment. For this purpose, we propose a Multi-Atlas Segmentation Constrained Graph (MASCG) method. The MASCG method uses multi-atlas based mesh fusion results to initialize a bone-sheetness-based multi-label graph cut for an accurate hip CT segmentation, which has the inherent advantage of automatically separating the pelvic region from the bilateral proximal femoral regions. We then introduce a graph cut constrained graph search algorithm to further improve the segmentation accuracy around the bilateral hip joint regions. Taking manual segmentation as the ground truth, we evaluated the present approach on 30 hip CT images (60 hips) with a 15-fold cross validation. When the present approach was compared to manual segmentation, an average surface distance error of 0.30 mm, 0.29 mm, and 0.30 mm was found for the pelvis, the left proximal femur, and the right proximal femur, respectively. A further look at the bilateral hip joint regions demonstrated an average surface distance error of 0.16 mm, 0.21 mm and 0.20 mm for the acetabulum, the left femoral head, and the right femoral head, respectively.
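For readers unfamiliar with the evaluation metric quoted above, the sketch below shows one common way an average (symmetric) surface distance between an automatic and a manual segmentation surface can be computed; the exact evaluation protocol of the paper may differ, and the point clouds here are synthetic stand-ins.

# Hedged sketch: average symmetric surface distance between two point-sampled
# surfaces (automatic vs. manual segmentation); details may differ from the paper.
import numpy as np
from scipy.spatial import cKDTree

def average_surface_distance(surf_a, surf_b):
    """surf_a, surf_b: (N, 3) arrays of surface points in millimetres."""
    d_ab = cKDTree(surf_b).query(surf_a)[0]   # nearest manual point for each automatic point
    d_ba = cKDTree(surf_a).query(surf_b)[0]   # and vice versa
    return (d_ab.mean() + d_ba.mean()) / 2.0

# Toy example with random point clouds standing in for extracted bone surfaces.
rng = np.random.default_rng(0)
auto = rng.normal(size=(1000, 3))
manual = auto + rng.normal(scale=0.3, size=(1000, 3))
print("average surface distance (mm):", average_surface_distance(auto, manual))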

Relevance: 30.00%

Abstract:

The healthcare industry spends billions on worker injury and employee turnover. Hospitals and healthcare settings have one of the highest rates of lost days due to injuries. The occupational hazards for healthcare workers can be classified into biological, chemical, ergonomic, physical, organizational, and psychosocial. Therefore, interventions addressing a range of occupational health risks are needed to prevent injuries, reduce turnover, and reduce costs. The Sacred Vocation Program (SVP) seeks to change the content of work, i.e., the meaningfulness of work, to improve work environments. The SVP intervenes at both the individual and organizational levels. First, the SVP attempts to connect healthcare workers with meaning from their work through a series of five self-discovery group sessions. In a sixth session the graduates take an oath recommitting them to do their work as a vocation. Once employees are motivated to connect with meaning in their work, a representative employee group meets in a second set of five meetings. This representative group suggests organizational changes to create a culture that supports employees in their calling. The employees present their plan to management in the twelfth session, beginning a new phase in the existing dialogue between employees and management. The SVP was implemented in a large Dallas hospital (almost 1000 licensed beds). The Baylor University Medical Center (BUMC) Pastoral Care department invited front-line caregivers (primarily Patient Care Assistants, PCAs, or Patient Care Technicians, PCTs) to participate in the SVP. Participants completed SVP questionnaires at the beginning and following SVP implementation. Following implementation, employer records on injury, absence and turnover were collected to further evaluate the program's effectiveness on metrics that are meaningful to managers in assessing organizational performance. This provided an opportunity to perform an epidemiological evaluation of the intervention using two sources of information: employee self-reports and employer administrative data. The ability to evaluate the effectiveness of the SVP on program outcomes could be limited by the strength of the measures used. An ordinal confirmatory factor analysis (CFA) performed on baseline SVP questionnaire measurements examined the construct validity and reliability of the SVP scales. Scales whose item-factor structure was confirmed in the ordinal CFA were evaluated for their psychometric properties (i.e., reliability, mean, and ceiling and floor effects). The CFA supported the construct validity of six of the proposed scales: blocks to spirituality, meaning at work, work satisfaction, affective commitment, collaborative communication, and MHI-5. Five of the six confirmed scales had acceptable reliability (all but MHI-5 had α > 0.7). All six scales had a high percentage (>30%) of scores at the ceiling. These findings supported the use of these items in the evaluation of change, although strong ceiling effects may hinder discerning change. Next, the confirmed SVP scales were used to evaluate whether the intervention improved program constructs. To evaluate the SVP, a one-group pretest-posttest design compared participants' self-reports before and after the intervention. It was hypothesized that measurements of reduced blocks to spirituality (α = 0.76), meaning at work (α = 0.86), collaborative communication (α = 0.67) and SVP job tasks (α = 0.97) would improve following SVP implementation. The SVP job tasks scale was included even though it was not part of the ordinal CFA analysis, due to a limited sample and high inter-item correlation. Changes in scaled measurements were assessed using multilevel linear regression methods. All post-intervention measurements increased (increases < 0.28 points), but only reduced blocks to spirituality was statistically significant (0.22 points on a scale from 1 to 7, p < 0.05) after adjustment for covariates. Stratifying on high-participation units to assess the intensity of the intervention strengthened the effects, but these were not statistically significant. The findings provide preliminary support for the hypothesis that meaning in work can be improved and, importantly, lend greater credence to any observed improvements in the outcomes. (Abstract shortened by UMI.)
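The reliability values quoted above are Cronbach's alpha coefficients. As a reference for how that coefficient is obtained, the sketch below computes alpha from item-level responses; the data are simulated and purely illustrative, not the SVP questionnaire responses.

# Illustrative computation of Cronbach's alpha for a multi-item scale
# (simulated data; not the SVP questionnaire).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                       # one underlying trait
responses = latent + rng.normal(scale=0.7, size=(200, 5))  # five noisy items
print("alpha:", round(cronbach_alpha(responses), 2))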

Relevance: 30.00%

Abstract:

Health Information Exchange (HIE) will play a key part in our nation's effort to improve healthcare. The evidence of HIEs' transformational role in healthcare delivery systems is quite limited. The lack of such evidence led us to explore what exists in the healthcare industry that may provide evidence of the effectiveness and efficiency of HIEs. The objective of the study was to find out how many fully functional HIEs are using any measurements or metrics to gauge the impact of HIE on quality improvement (QI) and on return on investment (ROI). A web-based survey was used to determine the number of operational HIEs using metrics for QI and ROI. Our study highlights the fact that only 50 percent of the HIEs that responded use or plan to use metrics. However, 95 percent of the respondents believed HIEs improve quality of care, while only 56 percent believed HIE showed a positive ROI. Although operational HIEs present numerous opportunities to demonstrate the business model for improving healthcare quality, evidence documenting the impact of HIEs is lacking.

Relevance: 30.00%

Abstract:

We propose a weakly supervised method to arrange images of a given category based on the relative pose between the camera and the object in the scene. Relative poses are points on a sphere centered at the object in a given canonical pose, which we call object viewpoints. Our method builds a graph on this sphere by assigning images with similar viewpoints to the same node and by connecting nodes if they are related by a small rotation. The key idea is to exploit a large unlabeled dataset to validate the likelihood of dominant 3D planes of the object geometry. A number of 3D plane hypotheses are evaluated by applying small 3D rotations to each hypothesis and by measuring how well the deformed images match other images in the dataset. Correct hypotheses will result in deformed images that correspond to plausible views of the object, and thus will likely match other images in the same category well. The identified 3D planes are then used to compute affinities between images related by a change of viewpoint. We then use the affinities to build a view graph via a greedy method and the maximum spanning tree.
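As a generic illustration of the last step mentioned above (building a view-graph skeleton from pairwise affinities via a maximum spanning tree), the sketch below uses networkx; the affinity values are random placeholders, not the paper's learned affinities.

# Sketch: view-graph skeleton as the maximum spanning tree of an affinity matrix
# (placeholder affinities, not the paper's learned ones).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_views = 8
affinity = rng.random((n_views, n_views))
affinity = (affinity + affinity.T) / 2            # symmetric pairwise affinities

G = nx.Graph()
for i in range(n_views):
    for j in range(i + 1, n_views):
        G.add_edge(i, j, weight=affinity[i, j])

# The maximum spanning tree connects all viewpoint nodes while keeping the
# strongest affinities as edges.
mst = nx.maximum_spanning_tree(G, weight="weight")
print(sorted(mst.edges(data="weight")))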

Relevance: 30.00%

Abstract:

This paper reports a learning experience related to the acquisition of project management competences. Students from three different universities and backgrounds cooperate in a common project that drives the learning-teaching process. Previous work on this initiative has already evaluated the value of this multidisciplinary, project-based learning approach in the context of a new educational paradigm. The innovative experience has also allowed the authors to define a rubric in order to measure specific project management competences. The study presents the rubric's main aspects as well as alternatives for evaluating competence acquisition, based on the metrics defined. Key indicators and specific reports obtained from database fields in the web tool support this work. As a result, new competences can be assessed, such as teamwork, problem solving, communication and leadership. The final goal is to provide an overall competence map to the students while they improve their skills.

Relevance: 30.00%

Abstract:

Static analyses of object-oriented programs usually rely on intermediate representations that respect the original semantics while having a more uniform and basic syntax. Most of the work involving object-oriented languages and abstract interpretation omits the description of that intermediate language or just refers to the Control Flow Graph (CFG) it represents. However, this lack of formalization results, on the one hand, in an absence of assurances regarding the correctness of the transformation and, on the other, typically strongly couples the analysis to the source language. In this work we present a framework for the analysis of object-oriented languages in which, in a first phase, we transform the input program into a representation based on Horn clauses. This allows, on the one hand, proving the transformation correct by attending to a simple condition and, on the other, applying an existing analyzer for (constraint) logic programming to automatically derive a safe approximation of the semantics of the original program. The approach is flexible in the sense that the first phase decouples the analyzer from most language-dependent features, and correct because the set of Horn clauses returned by the transformation phase safely approximates the standard semantics of the input program. The resulting analysis is also reasonably scalable due to the use of mature, modular (C)LP-based analyzers. The overall approach allows us to report results for medium-sized programs.
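As a schematic illustration of what a Horn-clause representation of object-oriented code can look like (a generic example, not the exact intermediate form used in this framework), a simple method such as int abs(int x) { if (x >= 0) return x; else return -x; } can be encoded as two constrained Horn clauses:

\[
\mathit{abs}(x, r) \leftarrow x \ge 0 \land r = x
\qquad\qquad
\mathit{abs}(x, r) \leftarrow x < 0 \land r = -x
\]

A (constraint) logic programming analyzer can then infer, for example, that r >= 0 holds in every answer, which is a safe approximation of the method's semantics.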

Relevance: 30.00%

Abstract:

Based on the empirical evidence that the ratio of email messages in public mailing lists to versioning system commits has remained relatively constant throughout the history of the Apache Software Foundation (ASF), this paper aims to study what can be inferred from such a metric for ASF projects. We have found that it appears to be an intensive metric, as it is independent of the size of the project, its activity, or the number of developers, and remains relatively independent of the technology or functional area of the project. Our analysis provides evidence that the metric is related to the technical effervescence and popularity of the project, and as such it may be a good candidate for measuring its healthy evolution. Other similar metrics, such as the ratio of developer messages to commits and the ratio of issue-tracker messages to commits, are studied for several projects as well, in order to see whether they have similar characteristics.

Relevance: 30.00%

Abstract:

INTRODUCTION: Motion metrics have become an important source of information when addressing the assessment of surgical expertise. However, their direct relationship with the different surgical skills has not been fully explored. The purpose of this study is to investigate the relevance of motion-related metrics in the evaluation of basic psychomotor laparoscopic skills, as well as their correlation with the different abilities they seek to measure. METHODS: A framework for task definition and metric analysis is proposed. An explorative survey was first conducted with a board of experts to identify metrics to assess basic psychomotor skills. Based on the output of that survey, three novel tasks for surgical assessment were designed. A face and construct validation study was performed, with a focus on motion-related metrics. Tasks were performed by 42 participants (16 novices, 22 residents and 4 experts). Movements of the laparoscopic instruments were registered with the TrEndo tracking system and analyzed. RESULTS: Time, path length and depth showed construct validity for all three tasks. Motion smoothness and idle time also showed validity for tasks involving bi-manual coordination and tasks requiring a more tactical approach, respectively. Additionally, motion smoothness and average speed showed a high internal consistency, proving them to be the most task-independent of all the metrics analyzed. CONCLUSION: Motion metrics are complementary and valid for assessing basic psychomotor skills, and their relevance depends on the skill being evaluated. A larger clinical implementation, combined with quality performance information, will give more insight into the relevance of the results shown in this study.
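For reference on how such motion-derived metrics are typically computed from tracked instrument positions, the sketch below derives path length, average speed, a jerk-based smoothness score, and idle time from a sampled 3D trajectory. These are common definitions and are not necessarily the exact ones used with the TrEndo data; the trajectory and the idle-speed threshold are illustrative placeholders.

# Sketch: typical motion metrics from a sampled 3D instrument trajectory
# (common definitions; they may differ from those used in the study).
import numpy as np

def motion_metrics(positions, dt):
    """positions: (T, 3) array of instrument tip positions; dt: sampling interval (s)."""
    vel = np.gradient(positions, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)

    path_length = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    avg_speed = speed.mean()
    duration = dt * (len(positions) - 1)
    # One common smoothness measure: dimensionless integrated squared jerk
    # (lower values indicate smoother motion).
    smoothness = np.sqrt(np.sum(np.sum(jerk ** 2, axis=1)) * dt
                         * duration ** 5 / path_length ** 2)
    idle_time = dt * np.sum(speed < 0.005)   # time below a small speed threshold (placeholder)
    return path_length, avg_speed, smoothness, idle_time

t = np.linspace(0, 10, 1001)
traj = np.column_stack([np.sin(t), np.cos(t), 0.05 * t])   # toy trajectory (metres)
print(motion_metrics(traj, dt=t[1] - t[0]))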

Relevance: 30.00%

Abstract:

Collaborative filtering recommender systems contribute to alleviating the problem of information overload that exists on the Internet as a result of the mass use of Web 2.0 applications. The use of an adequate similarity measure becomes a determining factor in the quality of the prediction and recommendation results of the recommender system, as well as in its performance. In this paper, we present a memory-based collaborative filtering similarity measure that provides extremely high-quality and balanced results, complemented by a low processing time (high performance) similar to that required to execute traditional similarity metrics. The experiments have been carried out on the MovieLens and Netflix databases, using a representative set of information retrieval quality measures.
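As a point of reference for the traditional similarity metrics such a measure is compared against, the sketch below implements a standard memory-based similarity (Pearson correlation over co-rated items) between two users; it illustrates the baseline family of metrics, not the measure proposed in the paper, and the users and ratings are made up.

# Baseline memory-based CF similarity: Pearson correlation over co-rated items
# (a traditional metric, not the measure proposed in the paper; toy data).
import numpy as np

def pearson_similarity(ratings_u, ratings_v):
    """ratings_u, ratings_v: dicts mapping item id -> rating for two users."""
    common = set(ratings_u) & set(ratings_v)
    if len(common) < 2:
        return 0.0                      # not enough co-rated items
    u = np.array([ratings_u[i] for i in common], dtype=float)
    v = np.array([ratings_v[i] for i in common], dtype=float)
    u -= u.mean()
    v -= v.mean()
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom > 0 else 0.0

alice = {"item1": 5, "item2": 3, "item3": 4}
bob = {"item1": 4, "item2": 2, "item4": 5}
print(pearson_similarity(alice, bob))   # similarity from the two co-rated items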

Relevance: 30.00%

Abstract:

E-learning systems output a huge quantity of data on the learning process. However, it takes a lot of specialist human resources to manually process these data and generate an assessment report. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of Dijkstra's algorithm learning using a visual simulation-based graph algorithm learning environment called GRAPHs.
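As a toy illustration of the kind of linguistic-rule-based inference GLMP builds on (this is not the paper's GLMP model; the membership functions, labels and thresholds are invented for the example), the sketch below maps an objective score from correct/incorrect responses onto fuzzy linguistic labels and emits a short natural-language statement.

# Toy fuzzy-linguistic assessment: map a score in [0, 1] to labels via
# triangular membership functions (illustrative only; not the paper's GLMP).
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def describe(correct, total):
    score = correct / total
    memberships = {
        "low proficiency": triangular(score, -0.01, 0.0, 0.5),
        "medium proficiency": triangular(score, 0.25, 0.5, 0.75),
        "high proficiency": triangular(score, 0.5, 1.0, 1.01),
    }
    label = max(memberships, key=memberships.get)
    return f"The student shows {label} ({correct}/{total} correct responses)."

print(describe(correct=17, total=20))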

Relevance: 30.00%

Abstract:

Non-invasive quantitative assessment of the right ventricular anatomical and functional parameters is a challenging task. We present a semi-automatic approach for right ventricle (RV) segmentation from 4D MR images in two variants, which differ in the amount of user interaction. The method consists of three main phases: first, foreground and background markers are generated from the user input; next, an over-segmented region image is obtained by applying a watershed transform; finally, these regions are merged using 4D graph cuts with an intensity-based boundary term. For the first variant, the user outlines the inside of the RV wall in a few end-diastole slices; for the second, two marker pixels serve as the starting point for a statistical atlas application. Results were obtained by blind evaluation on 16 testing 4D MR volumes. They show our method to be robust against marker location and place it favourably among existing approaches.
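For context, a typical intensity-based boundary term in graph-cut segmentation penalizes cuts between neighbouring voxels of similar intensity; one common form (a generic choice, not necessarily the exact term used here) is

\[
B(p, q) = \exp\!\left(-\frac{(I_p - I_q)^2}{2\sigma^2}\right) \cdot \frac{1}{\mathrm{dist}(p, q)}
\]

where I_p and I_q are the intensities of neighbouring voxels p and q, sigma controls the sensitivity to intensity differences, and dist(p, q) is the spatial distance between the voxels.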

Relevance: 30.00%

Abstract:

Today's motivation for autonomous systems research stems from the fact that networked environments have reached a level of complexity and heterogeneity that makes their control and management by human administrators alone more and more difficult. The optimisation of performance metrics for the air traffic management system, as in other networked systems, has become more complex with the increasing number of flights, capacity constraints, environmental factors and safety regulations. It is anticipated that a new structure of planning layers and the introduction of higher levels of automation will reduce complexity and optimise the performance metrics of the air traffic management system. This paper discusses the complexity of optimising air traffic management performance metrics and proposes a way forward based on higher levels of automation.

Relevance: 30.00%

Abstract:

As with all forms of transport, the safety of air travel is of paramount importance. With the projected increases in European air traffic in the next decade and beyond, it is clear that the risk of accidents needs to be assessed and carefully monitored on a continuing basis. The present thesis is aimed at the development of a comprehensive collision risk model as a method of assessing the European en-route risk, due to all causes and across all dimensions within the airspace. The major constraint in developing appropriate monitoring methodologies and tools to assess the level of safety in en-route airspaces, where controllers monitor air traffic by means of radar surveillance and provide aircraft with tactical instructions, lies in the estimation of the operational risk. The operational risk estimate normally relies on incident reports provided by the air navigation service providers (ANSPs). This thesis proposes a new and innovative approach to assessing aircraft safety levels based exclusively upon the processing and analysis of radar tracks. The proposed methodology has been designed to complement the information collected in the accident and incident databases, thereby providing robust information on air traffic factors and safety metrics inferred from the in-depth assessment of proximate events. The 3-D CRM methodology is implemented in a prototype tool in MATLAB in order to automatically analyze recorded aircraft tracks and flight plan data from the Radar Data Processing systems (RDP) and to identify and analyze all proximate events (conflicts, potential conflicts and potential collisions) within a time span and a given volume of airspace. Currently, the 3-D CRM prototype is being adapted and integrated into AENA's performance monitoring tool (PERSEO) to complement the information provided by the ATM accident and incident databases and to enhance monitoring and provide evidence of safety levels.
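As a simplified illustration of how proximate events can be flagged from processed radar tracks, the sketch below screens a pair of synchronized tracks against typical en-route radar separation minima (5 NM horizontally, 1000 ft vertically). The thresholds, the event definition and the tracks are illustrative; the actual 3-D CRM event taxonomy is more detailed.

# Simplified proximity screening between two aircraft tracks (illustrative only;
# the 3-D CRM event definitions and thresholds are more detailed than this).
import numpy as np

H_MIN_NM = 5.0      # typical en-route horizontal radar separation minimum
V_MIN_FT = 1000.0   # typical vertical separation minimum

def proximate_events(track_a, track_b):
    """track_a, track_b: (T, 3) arrays of synchronized samples (x_nm, y_nm, alt_ft)."""
    horiz = np.linalg.norm(track_a[:, :2] - track_b[:, :2], axis=1)
    vert = np.abs(track_a[:, 2] - track_b[:, 2])
    return np.flatnonzero((horiz < H_MIN_NM) & (vert < V_MIN_FT))

# Toy head-on scenario: two aircraft closing along the x axis at nearby levels.
t = np.arange(60)
a = np.column_stack([t * 0.1, np.zeros(60), np.full(60, 35000.0)])
b = np.column_stack([6.0 - t * 0.1, np.zeros(60), np.full(60, 35500.0)])
print("samples with loss of separation:", proximate_events(a, b))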