55 results for Context-based
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has changed or not. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows a very accurate mapping without any user intervention, which is particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches to transform the pair of bi-temporal images and reduce their differences unrelated to changes in land cover are studied. The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
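To make the last step concrete, the following is a minimal, hypothetical sketch of distribution alignment followed by pixel-wise comparison: band-wise histogram matching stands in for the (unspecified) alignment methods of the thesis, and all array names, shapes and the threshold are illustrative assumptions.

```python
import numpy as np

def histogram_match(source, reference):
    """Map the values of `source` so its empirical CDF matches `reference`."""
    s_values, s_idx, s_counts = np.unique(source.ravel(),
                                          return_inverse=True,
                                          return_counts=True)
    r_values, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_values)
    return matched[s_idx].reshape(source.shape)

def change_map(img_t1, img_t2, threshold):
    """Align img_t2 to img_t1 band by band, then threshold the per-pixel
    spectral difference magnitude to obtain a binary change mask."""
    aligned = np.stack([histogram_match(img_t2[..., b], img_t1[..., b])
                        for b in range(img_t1.shape[-1])], axis=-1)
    magnitude = np.linalg.norm(aligned - img_t1, axis=-1)
    return magnitude > threshold

# Toy example: two random 3-band "acquisitions" with a global radiometric shift
rng = np.random.default_rng(0)
t1 = rng.normal(0.4, 0.10, (64, 64, 3))
t2 = rng.normal(0.6, 0.15, (64, 64, 3))
mask = change_map(t1, t2, threshold=0.3)
print("changed pixels:", int(mask.sum()))
```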
Abstract:
Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p.37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy-relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns of external validity. In the context of this thesis, potential concerns are, for example, endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2, or non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful for gaining insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis. In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit a quasi-experimental change in the entitlement to the maximum unemployment benefit duration to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly through containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that makes it possible to learn about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative about reservation wage movements over the unemployment spell.
The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and survivor functions. I find evidence that calls for an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies. The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency on top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches - such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we discuss the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.
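As an illustration of the difference-in-differences strategy described for the first chapter, the sketch below fits the canonical two-group, two-period regression on simulated data; the variable names (treated, post, log_earnings) and the simulated effect are purely hypothetical and do not reproduce the actual reform or data used in the thesis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: 'treated' marks the group whose maximum benefit
# duration was changed, 'post' marks observations after the reform.
rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
true_effect = 0.08  # hypothetical effect on log earnings
df["log_earnings"] = (10.0
                      + 0.05 * df["treated"]
                      + 0.02 * df["post"]
                      + true_effect * df["treated"] * df["post"]
                      + rng.normal(0, 0.3, n))

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("log_earnings ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])
```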
Abstract:
In vivo dosimetry is a way to verify the radiation dose delivered to the patient by measuring the dose, generally during the first fraction of the treatment. It is the only dose delivery control based on a measurement performed during the treatment. In today's radiotherapy practice, the dose delivered to the patient is planned using 3D dose calculation algorithms and volumetric images representing the patient. Due to the high accuracy and precision necessary in radiation treatments, national and international organisations like the ICRU and the AAPM recommend the use of in vivo dosimetry. It is also mandatory in some countries, like France. Various in vivo dosimetry methods have been developed during the past years. These methods are point-, line-, plane- or 3D dose controls. A 3D in vivo dosimetry provides the most information about the dose delivered to the patient, with respect to 1D and 2D methods. However, to our knowledge, it is generally not routinely applied to patient treatments yet. The aim of this PhD thesis was to determine whether it is possible to reconstruct the 3D delivered dose using transmitted beam measurements in the context of narrow beams. An iterative dose reconstruction method has been described and implemented. The iterative algorithm includes a simple 3D dose calculation algorithm based on the convolution/superposition principle. The methodology was applied to narrow beams produced by a conventional 6 MV linac. The transmitted dose was measured using an array of ion chambers, so as to simulate the linear nature of a tomotherapy detector. We showed that the iterative algorithm converges quickly and reconstructs the dose with good agreement (at least 3% / 3 mm locally), which is within the 5% recommended by the ICRU. Moreover, it was demonstrated on phantom measurements that the proposed method allows us to detect some set-up errors and interfraction geometry modifications. We have also discussed the limitations of the 3D dose reconstruction for dose delivery error detection. Afterwards, stability tests of the tomotherapy built-in MVCT onboard detector were performed in order to evaluate whether such a detector is suitable for 3D in vivo dosimetry. The detector showed short- and long-term stability comparable to other imaging devices, such as EPIDs, which are also used for in vivo dosimetry. Subsequently, a methodology for dose reconstruction using the tomotherapy MVCT detector is proposed in the context of static irradiations. This manuscript is composed of two articles and a script providing further information related to this work. In the latter, the first chapter introduces the state of the art of in vivo dosimetry and adaptive radiotherapy, and explains why we are interested in performing 3D dose reconstructions. In chapter 2, the dose calculation algorithm implemented for this work is reviewed, with a detailed description of the physical parameters needed for calculating 3D absorbed dose distributions. The tomotherapy MVCT detector used for transit measurements and its characteristics are described in chapter 3. Chapter 4 contains a first article entitled '3D dose reconstruction for narrow beams using ion chamber array measurements', which describes the dose reconstruction method and presents tests of the methodology on phantoms irradiated with 6 MV narrow photon beams. Chapter 5 contains a second article, 'Stability of the Helical TomoTherapy HiArt II detector for treatment beam irradiations'.
A dose reconstruction process specific to the use of the tomotherapy MVCT detector is presented in chapter 6. A discussion and the perspectives of the PhD thesis are presented in chapter 7, followed by a conclusion in chapter 8. The tomotherapy treatment device is described in appendix 1 and an overview of 3D conformal and intensity-modulated radiotherapy is presented in appendix 2.
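For illustration only, the sketch below shows one generic way an iterative transit-dosimetry loop can be organized: a multiplicative ratio update of a fluence estimate until the modelled transit signal matches the measurement, followed by a convolution with a dose kernel. It is a 1D toy example, not the algorithm published in the articles cited above; the forward model, kernel and all parameters are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def reconstruct_dose(measured_transit, transmission_operator, dose_kernel,
                     n_iter=20, tol=1e-3):
    """Iteratively estimate the fluence from a transit signal, then convert
    it to a dose profile with a convolution-type kernel.

    measured_transit      : 1D array, signal at the detector
    transmission_operator : function mapping a fluence estimate to the
                            modelled transit signal
    dose_kernel           : 1D dose deposition kernel
    """
    fluence = np.ones_like(measured_transit)
    for _ in range(n_iter):
        modelled = transmission_operator(fluence)
        ratio = measured_transit / np.maximum(modelled, 1e-12)
        fluence *= ratio                        # multiplicative update
        if np.max(np.abs(ratio - 1.0)) < tol:   # modelled signal matches measurement
            break
    # "Dose" as the convolution of the reconstructed fluence with the kernel
    return fftconvolve(fluence, dose_kernel, mode="same")

# Toy 1D example: transmission modelled as a simple scaling plus blur
kernel = np.exp(-np.linspace(-3, 3, 31) ** 2)
kernel /= kernel.sum()
true_fluence = np.zeros(200)
true_fluence[90:110] = 1.0                      # narrow beam
forward = lambda f: 0.5 * fftconvolve(f, kernel, mode="same")
transit = forward(true_fluence)
dose = reconstruct_dose(transit, forward, kernel)
print(dose.shape)
```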
Abstract:
BACKGROUND: Migration is considered a depression risk factor when associated with psychosocial adversity, but its impact on depression's clinical characteristics has not been specifically studied. We compared 85 migrants to 34 controls, examining depression's severity, symptomatology, comorbidity profile and clinical course. METHOD: A MINI interview modified to assess course characteristics was used to assign DSM-IV axis I diagnoses; medical files were used for Somatoform Disorders. Severity was assessed with the Montgomery-Asberg scale. Wherever possible, we adjusted comparisons for age and gender using logistic and linear regressions. RESULTS: Depression in migrants was characterized by higher comorbidity (mostly somatoform and anxiety disorders), higher severity, and a non-recurrent, chronic course. LIMITATIONS: Our sample comes from a single center; the findings should be replicated in other health care facilities and other countries. Somatoform disorder diagnoses were based solely on file content. CONCLUSION: Depression in migrants presented as a complex, chronic clinical picture. Most of our migrant patients experienced significant psychosocial adversity before and after migration: beyond cultural issues, our results suggest that psychosocial adversity impacts the clinical expression of depression. Our study also suggests that migration associated with psychosocial adversity might play a specific etiological role, resulting in a distinct clinical picture and questioning the DSM-IV unitary model of depression. The chronic course might indicate resistance to standard therapeutic regimens and hints at the necessity of developing specific treatment strategies, adapted to the individual patients and their specific context.
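The age- and gender-adjusted comparisons mentioned in the methods can be sketched, for illustration, as a covariate-adjusted logistic regression; the outcome variable, covariate values and effect sizes below are simulated and hypothetical, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: comparing a binary clinical characteristic between
# migrants and controls while adjusting for age and gender.
rng = np.random.default_rng(2)
n = 119  # 85 migrants + 34 controls, as in the study design
df = pd.DataFrame({
    "migrant": np.r_[np.ones(85), np.zeros(34)].astype(int),
    "age": rng.normal(42, 12, n),
    "female": rng.integers(0, 2, n),
})
df["chronic_course"] = rng.binomial(1, 0.3 + 0.25 * df["migrant"])

# Logistic regression: the 'migrant' coefficient is the adjusted group effect.
fit = smf.logit("chronic_course ~ migrant + age + female", data=df).fit(disp=0)
print(fit.summary().tables[1])
```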
Abstract:
PURPOSE: We evaluated the feasibility of biomarker development in the context of multicenter clinical trials. EXPERIMENTAL DESIGN: Formalin-fixed, paraffin-embedded (FFPE) tissue samples were collected from a prospective adjuvant colon cancer trial (PETACC3). DNA was isolated from tumor as well as normal tissue and used for analysis of microsatellite instability, KRAS and BRAF genotyping, UGT1A1 genotyping, and loss of heterozygosity at 18q loci. Immunohistochemistry was used to test the expression of TERT, SMAD4, p53, and TYMS. Messenger RNA was retrieved and tested for use in expression profiling experiments. RESULTS: Of the 3,278 patients entered in the study, FFPE blocks were obtained from 1,564 patients coming from 368 different centers in 31 countries. In over 95% of the samples, genomic DNA tests yielded a reliable result. Of the immunohistochemical tests, p53 and SMAD4 staining performed best, with reliable results in over 85% of the cases. TERT was the most problematic test, with 46% failures, mostly due to insufficient tissue processing quality. Good-quality mRNA was obtained, usable in expression profiling experiments. CONCLUSIONS: Prospective clinical trials can be used as a framework for biomarker development using routinely processed FFPE tissues. Our results support the notion that, as a rule, translational studies based on FFPE tissue should be included in prospective clinical trials.
Abstract:
"Sitting between your past and your future doesn't mean you are in the present." (Dakota Skye) Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural and mathematical sciences. The emergence of a higher-order organization or behavior, transcending that expected from the linear addition of the parts, is a key factor shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. In addition to the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by the systems. Moreover, the topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations when applied to transmission and diffusion phenomena. In this work, we study the effect of different network structures on the performance and on the fault tolerance of systems in two different contexts. In the first part, we study cellular automata, which are a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, relying on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. In order to increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure starting from either a regular or a random one. The outcome is remarkable, as the resulting topologies share properties of both regular and random networks, and display similarities with Watts-Strogatz's small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones. In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffered from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. Firstly, we have used artificial topologies believed to be closer to those of gene regulatory networks. We have also studied actual biological organisms, and used parts of their genetic regulatory networks in our models. Secondly, we have addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models demonstrate the expected, biologically sound behavior of the previous GRN model, yet with superior resistance to perturbations. We believe they are one step closer to biological reality.
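As a point of reference for the model being extended, here is a minimal sketch of Kauffman's classical random Boolean network with fully synchronous updates (the assumption the cascading scheme above relaxes); the network size, connectivity K and all parameters are illustrative.

```python
import numpy as np

def random_boolean_network(n_genes=20, k=2, seed=0):
    """Build a classical Kauffman NK random Boolean network: every gene reads
    K randomly chosen regulators through a random Boolean lookup table."""
    rng = np.random.default_rng(seed)
    inputs = np.array([rng.choice(n_genes, size=k, replace=False)
                       for _ in range(n_genes)])
    tables = rng.integers(0, 2, size=(n_genes, 2 ** k))  # one row per gene
    return inputs, tables

def step(state, inputs, tables):
    """Fully synchronous update: every gene fires at the same time step."""
    idx = np.zeros(len(state), dtype=int)
    for j in range(inputs.shape[1]):
        idx = idx * 2 + state[inputs[:, j]]   # encode the regulators' states
    return tables[np.arange(len(state)), idx]

inputs, tables = random_boolean_network()
state = np.random.default_rng(1).integers(0, 2, 20)
for _ in range(50):                           # iterate towards an attractor
    state = step(state, inputs, tables)
print(state)
```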
Abstract:
Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization. This widespread use has resulted in the rapid development of the Internet. Nowadays, the Internet is the biggest container of resources. Information databases such as Wikipedia, Dmoz and the open data available on the net represent a great informational potential for mankind. Easy and free web access is one of the major features characterizing Internet culture. Ten years ago, the web was completely dominated by English. Today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We thus present the theoretical foundations as well as the ontology itself, named the Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aiming at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval. VIKI is a supporting system for the Linguistic Meta-Model and its main task is to give some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be thorough.
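To illustrate the semiotic-triangle assumption underlying the Linguistic Meta-Model, the toy sketch below encodes a sign as a symbol linked to a concept and a referent, with multilingual lexicalizations; it is not the actual LMM ontology, and the class names, labels and URIs are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """The 'thought or reference' vertex of the semiotic triangle."""
    identifier: str
    labels: dict = field(default_factory=dict)   # language -> lexicalization

@dataclass
class Sign:
    """A symbol (term) linked to a concept and to the referent it stands for."""
    symbol: str
    language: str
    concept: Concept
    referent: str   # e.g. a URI identifying the real-world entity

# A concept lexicalized in two languages, both pointing at the same referent
city = Concept("ex:City", labels={"en": "city", "it": "città"})
signs = [
    Sign("city", "en", city, "http://dbpedia.org/resource/City"),
    Sign("città", "it", city, "http://dbpedia.org/resource/City"),
]
for s in signs:
    print(f"{s.symbol} ({s.language}) -> {s.concept.identifier} -> {s.referent}")
```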
Abstract:
The first AO comprehensive pediatric long-bone fracture classification system has been proposed following a structured path of development and validation with experienced pediatric surgeons. A Web-based multicenter agreement study involving 70 surgeons in 15 clinics and 5 countries was conducted to assess the reliability and accuracy of this classification when used by a wide range of surgeons with various levels of experience. Training was provided at each clinic before the session. Using the Internet, participants could log in at any time and classify 275 supracondylar, radius, and tibia fractures at their own pace. The fracture diagnosis was made following the hierarchy of the classification system, using both clinical terminology and codes. Kappa coefficients for the single-surgeon diagnosis of epiphyseal, metaphyseal, or diaphyseal fracture type were 0.66, 0.80, and 0.91, respectively. Median accuracy estimates for each bone and type were all greater than 80%. Depending on their experience and specialization, surgeons varied greatly in their ability to classify fractures. Pediatric training and at least 2 years of experience were associated with significant improvements in reliability and accuracy. Kappa coefficients for the diagnosis of specific child patterns were 0.51, 0.63, and 0.48 for epiphyseal, metaphyseal, and diaphyseal fractures, respectively. Identified reasons for coding discrepancies were related to different understandings of terminology and definitions, as well as poor-quality radiographic images. The results supported some minor adjustments in the coding of fracture type and child patterns. This classification system received wide acceptance and support among the surgeons involved. As long as appropriate training could be provided, the classification system was reliable, especially among surgeons with a minimum of 2 years of clinical experience. We encourage broad-based consultation between surgeons' international societies and the use of this classification system in the context of clinical practice as well as prospectively in clinical studies.
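For readers unfamiliar with the agreement statistic reported above, the following sketch computes a two-rater Cohen's kappa on made-up fracture classifications; the study's multi-surgeon design would call for a multi-rater variant, so this is illustrative only and the ratings are invented.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    categories = np.union1d(rater_a, rater_b)
    # Confusion matrix of the two ratings, normalized to proportions
    conf = np.zeros((len(categories), len(categories)))
    for a, b in zip(rater_a, rater_b):
        conf[np.searchsorted(categories, a), np.searchsorted(categories, b)] += 1
    conf /= conf.sum()
    p_observed = np.trace(conf)
    p_expected = conf.sum(axis=1) @ conf.sum(axis=0)
    return (p_observed - p_expected) / (1.0 - p_expected)

# Two surgeons classifying the same ten fractures as E(piphyseal),
# M(etaphyseal) or D(iaphyseal)
a = ["E", "M", "M", "D", "D", "E", "M", "D", "E", "M"]
b = ["E", "M", "D", "D", "D", "E", "M", "D", "M", "M"]
print(round(cohens_kappa(a, b), 2))
```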
Abstract:
The delivery kinetics of growth factors has been suggested to play an important role in the regeneration of peripheral nerves following axotomy. In this context, we designed a nerve conduit (NC) with adjustable release kinetics of nerve growth factor (NGF). A multi-ply system was designed in which an NC consisting of a polyelectrolyte alginate/chitosan complex was coated with layers of poly(lactide-co-glycolide) (PLGA) to control the release of embedded NGF. Prior to assessing the in vitro NGF release from the NC, various release test media, with and without stabilizers for NGF, were evaluated to ensure adequate quantification of NGF by ELISA. Citrate (pH 5.0) and acetate (pH 5.5) buffered saline solutions containing 0.05% Tween 20 yielded the most reliable results for ELISA-active NGF. The in vitro release experiments revealed that the best results in terms of reproducibility and release control were achieved when the NGF was embedded between two PLGA layers and the ends of the NC were tightly sealed by the PLGA coatings. The release kinetics could be efficiently adjusted by accommodating NGF at different radial locations within the NC. A sustained release of bioactive NGF in the low nanogram-per-day range was obtained for at least 15 days. In conclusion, the developed multi-ply NGF-loaded NC is considered a suitable candidate for future implantation studies to gain insight into the relationship between local growth factor availability and nerve regeneration.
Abstract:
BACKGROUND CONTEXT: Studies involving factor analysis (FA) of the items in the North American Spine Society (NASS) outcome assessment instrument have revealed inconsistent factor structures for the individual items. PURPOSE: This study examined whether the factor structure of the NASS varied in relation to the severity of the back/neck problem and differed from that originally recommended by the developers of the questionnaire, by analyzing data before and after surgery in a large series of patients undergoing lumbar or cervical disc arthroplasty. STUDY DESIGN/SETTING: Prospective multicenter observational case series. PATIENT SAMPLE: Three hundred ninety-one patients with low back pain and 553 patients with neck pain completed questionnaires preoperatively and again at 3- to 6- and 12-month follow-ups (FUs), in connection with the SWISSspine disc arthroplasty registry. OUTCOME MEASURES: North American Spine Society outcome assessment instrument. METHODS: First an exploratory FA without a priori assumptions and subsequently a confirmatory FA were performed on the 17 items of the NASS-lumbar and the 19 items of the NASS-cervical collected at each assessment time point. The item-loading invariance was tested in the German version of the questionnaire for baseline and FU. RESULTS: Both the NASS-lumbar and NASS-cervical factor structures differed between the baseline and postoperative data sets. The confirmatory analysis and item-loading invariance showed a better fit for a three-factor (3F) structure for NASS-lumbar, containing items on "disability," "back pain," and "radiating pain, numbness, and weakness (leg/foot)," and for a 5F structure for NASS-cervical, including "disability," "neck pain," "radiating pain and numbness (arm/hand)," "weakness (arm/hand)," and "motor deficit (legs)." CONCLUSIONS: The best-fitting factor structure at both baseline and FU was selected for both the lumbar and cervical NASS questionnaires. It differed from that proposed by the originators of the NASS instruments. Although the NASS questionnaire represents a valid outcome measure for degenerative spine diseases and is able to distinguish among all major symptom domains (factors) in patients undergoing lumbar and cervical disc arthroplasty, overall the item structure could be improved. Any potential revision of the NASS should consider its factorial structure; factorial invariance over time should be aimed for, to allow for more precise interpretations of treatment success.
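A hedged sketch of the kind of exploratory factor analysis described above, using simulated stand-ins for the 17 NASS-lumbar items; the number of factors, loadings and scores are invented, and scikit-learn's FactorAnalysis is only one possible tool (the study's actual software is not stated).

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative stand-in for 17 NASS-lumbar items (Likert-type scores);
# real data would come from the questionnaires, not random numbers.
rng = np.random.default_rng(3)
latent = rng.normal(size=(391, 3))               # 3 hypothetical factors
loadings = rng.uniform(0.3, 0.9, size=(3, 17))
items = latent @ loadings + rng.normal(scale=0.5, size=(391, 17))

fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(items)
# Items loading most strongly on each factor suggest the factor's meaning
for f, row in enumerate(fa.components_):
    top_items = np.argsort(np.abs(row))[::-1][:5] + 1
    print(f"factor {f + 1}: items {top_items.tolist()}")
```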
Abstract:
A urology nursing team examined its perioperative practices in the light of scientific data and implemented updated care practices adapted to this context. This experience favours the development of skills essential for interdisciplinary collaboration, drawing on the resources of each profession to work towards a common goal for the benefit of the patient.
Abstract:
Atlas registration is a recognized paradigm for the automatic segmentation of normal MR brain images. Unfortunately, atlas-based segmentation has been of limited use in the presence of large space-occupying lesions. In fact, brain deformations induced by such lesions are added to normal anatomical variability and they may dramatically shift and deform anatomically or functionally important brain structures. In this work, we chose to focus on the problem of inter-subject registration of MR images with large tumors, inducing a significant shift of surrounding anatomical structures. First, a brief survey of the existing methods that have been proposed to deal with this problem is presented. This introduces the discussion about the requirements and desirable properties that we consider necessary to be fulfilled by a registration method in this context: to have a dense and smooth deformation field and a model of lesion growth, to model different deformability for some structures, to introduce more prior knowledge, and to use voxel-based features with a similarity measure robust to intensity differences. In the second part of this work, we propose a new approach that overcomes some of the main limitations of the existing techniques while complying with most of the desired requirements above. Our algorithm combines the mathematical framework for computing a variational flow proposed by Hermosillo et al. [G. Hermosillo, C. Chefd'Hotel, O. Faugeras, A variational approach to multi-modal image matching, Tech. Rep., INRIA (February 2001).] with the radial lesion growth pattern presented by Bach et al. [M. Bach Cuadra, C. Pollo, A. Bardera, O. Cuisenaire, J.-G. Villemure, J.-Ph. Thiran, Atlas-based segmentation of pathological MR brain images using a model of lesion growth, IEEE Trans. Med. Imag. 23 (10) (2004) 1301-1314.]. Results on patients with a meningioma are visually assessed and compared to those obtained with the most similar method from the state of the art.
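As a toy illustration of the radial lesion-growth idea referenced above (Bach Cuadra et al.), the sketch below builds a 2D displacement field that pushes voxels away from a lesion seed with a distance-dependent decay; the decay profile and all parameters are assumptions, not the published model.

```python
import numpy as np

def radial_growth_field(shape, seed_point, radius, strength):
    """Toy radial deformation: pixels are pushed away from a lesion seed,
    with a displacement magnitude that decays with distance from the seed."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    dy, dx = yy - seed_point[0], xx - seed_point[1]
    dist = np.sqrt(dy ** 2 + dx ** 2) + 1e-9
    # Displacement magnitude: maximal at the seed, ~0 well beyond `radius`
    mag = strength * np.exp(-dist / radius)
    return mag * dy / dist, mag * dx / dist   # (row, col) displacement maps

uy, ux = radial_growth_field((128, 128), seed_point=(64, 80),
                             radius=20.0, strength=8.0)
print(float(np.hypot(uy, ux).max()))   # peak displacement in pixels
```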
Abstract:
Until recently, the hard X-ray, phase-sensitive imaging technique called grating interferometry was thought to provide information only in real space. However, by utilizing an alternative approach to data analysis we demonstrated that the angular-resolved ultra-small-angle X-ray scattering distribution can be retrieved from experimental data. Thus, reciprocal space information is accessible by grating interferometry in addition to real space. Naturally, the quality of the retrieved data strongly depends on the performance of the employed analysis procedure, which in this context involves deconvolution of periodic and noisy data. The aim of this article is to compare several deconvolution algorithms for retrieving the ultra-small-angle X-ray scattering distribution in grating interferometry. We quantitatively compare the performance of three deconvolution procedures (Wiener, iterative Wiener and Lucy-Richardson) in the case of realistically modeled, noisy and periodic input data. The simulations showed that the Lucy-Richardson algorithm is the most reliable and most efficient, given the characteristics of the signals in this context. The availability of a reliable data analysis procedure is essential for future developments in grating interferometry.
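To make the comparison concrete, here is a minimal Richardson-Lucy (Lucy-Richardson) deconvolution for 1D periodic signals of the kind produced by phase stepping; the point-spread function, noise level and iteration count are illustrative assumptions, not the parameters used in the article.

```python
import numpy as np

def richardson_lucy_periodic(measured, kernel, n_iter=50):
    """Richardson-Lucy deconvolution for 1D periodic signals, using circular
    convolution via the FFT (all signals assumed non-negative)."""
    kernel = kernel / kernel.sum()
    kernel_adj = np.roll(kernel[::-1], 1)      # adjoint of circular convolution
    estimate = np.full_like(measured, measured.mean())
    k_fft, kadj_fft = np.fft.fft(kernel), np.fft.fft(kernel_adj)
    for _ in range(n_iter):
        blurred = np.real(np.fft.ifft(np.fft.fft(estimate) * k_fft))
        ratio = measured / np.maximum(blurred, 1e-12)
        estimate *= np.real(np.fft.ifft(np.fft.fft(ratio) * kadj_fft))
    return estimate

# Toy example: a narrow scattering peak blurred by a broad periodic kernel
n = 64
x = np.arange(n)
true_signal = np.exp(-0.5 * ((x - 20) / 2.0) ** 2)
kernel = np.exp(-0.5 * (np.minimum(x, n - x) / 6.0) ** 2)   # periodic PSF centred at 0
measured = np.real(np.fft.ifft(np.fft.fft(true_signal) * np.fft.fft(kernel / kernel.sum())))
measured = np.clip(measured + np.random.default_rng(4).normal(0, 1e-3, n), 0, None)
recovered = richardson_lucy_periodic(measured, kernel)
print(int(np.argmax(recovered)))   # peak position, expected near 20
```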
Abstract:
The mitogen-activated protein kinase (MAPK) pathways are highly organized signaling systems that transduce extracellular signals into a variety of intracellular responses. In this context, it is currently poorly understood how the kinases constituting these signaling cascades are assembled and activated in response to receptor stimulation to generate specific cellular responses. Here, we show that AKAP-Lbc, an A-kinase anchoring protein (AKAP) with intrinsic Rho-specific guanine nucleotide exchange factor activity, is critically involved in the activation of the p38α MAPK downstream of α(1b)-adrenergic receptors (α(1b)-ARs). Our results indicate that AKAP-Lbc can assemble a novel transduction complex containing the RhoA effector PKNα, MLTK, MKK3, and p38α, which integrates signals from α(1b)-ARs to promote RhoA-dependent activation of p38α. In particular, silencing of AKAP-Lbc expression or disrupting the formation of the AKAP-Lbc·p38α signaling complex specifically reduces α(1)-AR-mediated p38α activation without affecting receptor-mediated activation of other MAPK pathways. These findings provide a novel mechanistic hypothesis explaining how the assembly of macromolecular complexes can specify MAPK signaling downstream of α(1)-ARs.