Abstract:
Diffuse large B-cell lymphoma (DLBCL) with MYC rearrangement (MYC-R) carries an unfavorable outcome. We explored the prognostic value of the MYC translocation partner gene in a series of MYC-R de novo DLBCL patients enrolled in first-line prospective clinical trials (Groupe d'Etudes des Lymphomes de l'Adulte/Lymphoma Study Association) and treated with rituximab-anthracycline-based chemotherapy. A total of 774 DLBCL cases characterized for cell of origin by the Hans classifier were analyzed using fluorescence in situ hybridization with BCL2, BCL6, MYC, immunoglobulin (IG)K, and IGL break-apart and IGH/MYC, IGK/MYC, and IGL/MYC fusion probes. MYC-R was observed in 51/574 (8.9%) evaluable DLBCL cases. MYC-R cases were predominantly of the germinal center B-cell-like subtype (37/51 [74%]), with no distinctive morphologic or phenotypic features. Nineteen cases were MYC single-hit and 32 cases were MYC double-hit (MYC plus BCL2 and/or BCL6) DLBCL. The MYC translocation partner was an IG gene in 24 cases (MYC-IG) and a non-IG gene (MYC-non-IG) in 26 of 50 evaluable cases. Notably, MYC-IG patients had shorter overall survival (OS) (P = .0002) compared with MYC-negative patients, whereas no survival difference was observed between MYC-non-IG and MYC-negative patients. In multivariate analyses, MYC-IG predicted poor progression-free survival (P = .0051) and OS (P = .0006) independently of the International Prognostic Index and the Hans classifier. In conclusion, we show in these prospective trials that the adverse prognostic impact of MYC-R is correlated with the MYC-IG translocation partner gene in DLBCL patients treated with immunochemotherapy. These results may have an important impact on the clinical management of DLBCL patients with MYC-R, who should be routinely characterized according to the MYC partner gene.
These trials are individually registered at www.clinicaltrials.gov as #NCT00144807, #NCT01087424, #NCT00169143, #NCT00144755, #NCT00140660, #NCT00140595, and #NCT00135499.
Abstract:
Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The use of the ideal fibre model to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed and the TFS and WFS are defined, leading to the necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on the analysis, a modification of the model's muscle fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% activation.
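The torque-feasibility argument can be made concrete for a single degree of freedom. The sketch below is an illustrative simplification, not the paper's shoulder model: muscles act like cables that can only pull, each force is bounded by a hypothetical maximum, and the TFS reduces to an interval of reachable joint torques.

```python
def torque_feasible_interval(moment_arms, f_max):
    """1-DOF torque-feasible space (TFS): the set of joint torques
    reachable with muscle-tendon forces f_i in [0, f_max_i],
    since muscles can only pull (f_i >= 0)."""
    lo = sum(min(0.0, a * fm) for a, fm in zip(moment_arms, f_max))
    hi = sum(max(0.0, a * fm) for a, fm in zip(moment_arms, f_max))
    return lo, hi

def in_tfs(tau, moment_arms, f_max):
    """Necessary and sufficient condition (1-DOF case) for the
    musculo-tendon force coordination problem to have a solution."""
    lo, hi = torque_feasible_interval(moment_arms, f_max)
    return lo <= tau <= hi

# Hypothetical numbers: an abductor (moment arm +0.02 m) and an
# adductor (-0.015 m), each limited to 1000 N.
print(torque_feasible_interval([0.02, -0.015], [1000.0, 1000.0]))  # (-15.0, 20.0)
```

A requested torque outside this interval has no admissible muscle-force solution, which is the kind of infeasibility a TFS/WFS analysis exposes; with several degrees of freedom the same membership question becomes a linear feasibility problem over the fibre force bounds.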
Abstract:
PURPOSE: Pediatric rhabdomyosarcoma (RMS) has two common histologic subtypes: embryonal (ERMS) and alveolar (ARMS). PAX-FOXO1 fusion gene status is a more reliable prognostic marker than alveolar histology, whereas fusion gene-negative (FN) ARMS patients are clinically similar to ERMS patients. A five-gene expression signature (MG5) previously identified two diverse risk groups within the fusion gene-negative RMS (FN-RMS) patients, but this has not been independently validated. The goal of this study was to test whether expression of the MG5 metagene, measured using a technical platform that can be applied to routine pathology material, would correlate with outcome in a new cohort of patients with FN-RMS. EXPERIMENTAL DESIGN: Cases were taken from the Children's Oncology Group (COG) D9803 study of children with intermediate-risk RMS, and gene expression profiling for the MG5 genes was performed using the nCounter assay. The MG5 score was correlated with clinical and pathologic characteristics as well as overall and event-free survival. RESULTS: MG5 standardized score showed no significant association with any of the available clinicopathologic variables. The MG5 signature score showed a significant correlation with overall (N = 57; HR, 7.3; 95% CI, 1.9-27.0; P = 0.003) and failure-free survival (N = 57; HR, 6.1; 95% CI, 1.9-19.7; P = 0.002). CONCLUSIONS: This represents the first, validated molecular prognostic signature for children with FN-RMS who otherwise have intermediate-risk disease. The capacity to measure the expression of a small number of genes in routine pathology material and apply a simple mathematical formula to calculate the MG5 metagene score provides a clear path toward better risk stratification in future prospective clinical trials. Clin Cancer Res; 21(20); 4733-9. ©2015 AACR.
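The abstract refers to "a simple mathematical formula" for the MG5 score without reproducing it. As a generic, hypothetical illustration of how a small expression signature is commonly turned into a standardized metagene score (the average of per-gene z-scores; the actual MG5 weighting may differ), one might write:

```python
from statistics import mean, pstdev

def metagene_score(sample, cohort):
    """Average of per-gene z-scores for one sample.
    `sample` maps gene -> expression value for the sample being scored;
    `cohort` maps gene -> list of values across all samples, used to
    standardize each gene.
    NOTE: generic illustration only, not the published MG5 formula."""
    z_scores = []
    for gene, value in sample.items():
        mu = mean(cohort[gene])
        sd = pstdev(cohort[gene])
        z_scores.append((value - mu) / sd)
    return mean(z_scores)
```

A continuous score of this kind can then be dichotomized into risk groups or entered directly into a Cox model, as the standardized MG5 score was for overall and failure-free survival.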
Abstract:
NOD-like receptors (NLR) are a family of cytosolic pattern recognition receptors that include many key drivers of innate immune responses. NLRP12 is an emerging member of the NLR family that is closely related to the well-known inflammasome scaffold, NLRP3. Since its discovery, various functions have been proposed for NLRP12, including the positive regulation of dendritic cell (DC) and neutrophil migration and the inhibition of NF-κB and ERK signalling in DC and macrophages. We show here that NLRP12 is poorly expressed in murine macrophages and DC, but is strongly expressed in neutrophils. Using myeloid cells from WT and Nlrp12(-/-) mice, we show that, contrary to previous reports, NLRP12 does not suppress LPS- or infection-induced NF-κB or ERK activation in myeloid cells, and is not required for DC migration in vitro. Surprisingly, we found that Nlrp12 deficiency caused increased rather than decreased neutrophil migration towards the chemokine CXCL1 and the neutrophil parasite Leishmania major, revealing NLRP12 as a negative regulator of directed neutrophil migration under these conditions.
Abstract:
Gram-negative bacteria represent a major group of pathogens that infect all eukaryotes from plants to mammals. Gram-negative microbe-associated molecular patterns include lipopolysaccharides and peptidoglycans, major immunostimulatory determinants across phyla. Recent advances have furthered our understanding of Gram-negative detection beyond the well-defined pattern recognition receptors such as TLR4. A B-type lectin receptor for LPS and Lysine-motif containing receptors for peptidoglycans were recently added to the plant arsenal. Caspases join the ranks of mammalian cytosolic immune detectors by binding LPS, and make TLR4 redundant for septic shock. Fascinating bacterial evasion mechanisms lure the host into tolerance or promote inter-bacterial competition. Our review aims to cover recent advances on bacterial messages and host decoding systems across phyla, and highlight evolutionarily recurrent strategies.
Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown rapidly since its introduction in the early 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular with follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism.
In addition, harmful consequences are more likely to occur because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in finding a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was continuously performed in close collaboration with radiologists. The work began by tackling the way to characterise image quality when dealing with musculo-skeletal examinations. We focused, in particular, on image noise and spatial resolution behaviours when iterative image reconstruction was used. The analyses of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Because alternatives to classical Fourier-space metrics were needed to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters determined the type of model to use.
Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, thus taking advantage of their incorporation of human visual-system elements. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has clarified the role of medical physicists in CT imaging: the standard metrics of the field remain important for assessing unit compliance with legal requirements, but model observers are indispensable when optimising imaging protocols.
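As a minimal illustration of the task-based approach (a simple non-prewhitening matched filter on flattened image vectors, far cruder than the ideal and anthropomorphic model observers actually used in the thesis), signal detectability can be scored from image samples as follows:

```python
from statistics import mean, pstdev

def npw_statistic(image, template):
    """Non-prewhitening (NPW) observer statistic: correlate the image
    (flattened to a list of pixel values) with the expected signal
    template."""
    return sum(p * t for p, t in zip(image, template))

def detectability(signal_imgs, noise_imgs, template):
    """Task-based figure of merit d': separation of the observer
    statistic between signal-present and signal-absent images."""
    ls = [npw_statistic(im, template) for im in signal_imgs]
    ln = [npw_statistic(im, template) for im in noise_imgs]
    pooled_sd = ((pstdev(ls) ** 2 + pstdev(ln) ** 2) / 2) ** 0.5
    return (mean(ls) - mean(ln)) / pooled_sd
```

Comparing d' across dose levels and reconstruction algorithms is what lets a protocol be optimised for the diagnostic task itself, rather than for raw noise or resolution numbers.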
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Normally, environmental phenomena can be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, landuse, topographic, climatic and meteorological features, etc., can also be used to characterise the studied phenomenon. Thereby, the space-time pattern characterisation represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance, socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider the complex spatial constraints, high variability and multivariate nature of the events.
Therefore, we proposed a statistical framework that takes into account the complexities of the geographical space in which phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, exclusively for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and one for predicting fire ignition susceptibility. In this regard, the main objective of this Thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular forest fire occurrences. Thus, this Thesis provides a response to the increasing demand for environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
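Of the global measures mentioned in the abstract, the Morisita index is the simplest to sketch. The version below is the basic grid-count formulation (the thesis adapts such measures to constrained validity domains, which this sketch ignores): partition the bounding box into q x q cells and compare the observed pairwise co-occurrence of points within cells against the random expectation.

```python
def morisita_index(points, bbox, q):
    """Morisita index for a 2-D point pattern.
    `points`: list of (x, y); `bbox`: (xmin, ymin, xmax, ymax);
    `q`: number of grid cells per axis. Requires at least two points.
    I = 1: random; I > 1: clustered (up to q*q); I < 1: regular."""
    xmin, ymin, xmax, ymax = bbox
    counts = [0] * (q * q)
    for x, y in points:
        i = min(int((x - xmin) / (xmax - xmin) * q), q - 1)
        j = min(int((y - ymin) / (ymax - ymin) * q), q - 1)
        counts[j * q + i] += 1
    n = len(points)
    pairs_in_cells = sum(c * (c - 1) for c in counts)
    return (q * q) * pairs_in_cells / (n * (n - 1))
```

Scanning the index over a range of cell sizes q reveals the scale dependence of clustering, which is how such global measures are typically used in point-pattern analysis.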
Abstract:
Mutations of the huntingtin protein (HTT) gene underlie both adult-onset and juvenile forms of Huntington's disease (HD). HTT modulates mitotic spindle orientation and cell fate in mouse cortical progenitors from the ventricular zone. Using human embryonic stem cells (hESC) identified during pre-implantation genetic diagnosis as carrying mutations associated with adult-onset disease, we investigated the influence of human HTT and of an adult-onset HD mutation on mitotic spindle orientation in human neural stem cells (NSCs) derived from hESCs. The RNAi-mediated silencing of both HTT alleles in neural stem cells derived from hESCs disrupted spindle orientation and led to the mislocalization of dynein, the p150Glued subunit of dynactin and the large nuclear mitotic apparatus (NuMA) protein. We also investigated the effect of the adult-onset HD mutation on the role of HTT during spindle orientation in NSCs derived from HD-hESCs. By combining SNP-targeting allele-specific silencing and gain-of-function approaches, we showed that a 46-glutamine expansion in human HTT was sufficient for a dominant-negative effect on spindle orientation and changes in the distribution within the spindle pole and the cell cortex of dynein, p150Glued and NuMA in neural cells. Thus, neural derivatives of disease-specific human pluripotent stem cells constitute a relevant biological resource for exploring the impact of adult-onset HD mutations of the HTT gene on the division of neural progenitors, with potential applications in HD drug discovery targeting HTT-dynein-p150Glued complex interactions.
Abstract:
The extension of traditional data mining methods to time series has been effectively applied to a wide range of domains such as finance, econometrics, biology, security, and medicine. Many existing mining methods deal with the task of change-point detection, but very few provide a flexible approach. Querying specific change points with linguistic variables is particularly useful in crime analysis, where intuitive, understandable, and appropriate detection of changes can significantly improve the allocation of resources for timely and concise operations. In this paper, we propose an on-line method for detecting and querying change points in crime-related time series with the use of a meaningful representation and a fuzzy inference system. Change-point detection is based on a shape-space representation, and linguistic terms describing geometric properties of the change points are used to express queries, offering the advantage of intuitiveness and flexibility. An empirical evaluation is first conducted on a crime data set to confirm the validity of the proposed method and then on a financial data set to test its general applicability. A comparison to a similar change-point detection algorithm and a sensitivity analysis are also conducted. Results show that the method is able to accurately detect change points at very low computational costs. More broadly, the detection of specific change points within time series of virtually any domain is made more intuitive and more understandable, even for experts not related to data mining.
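A minimal sketch of the idea, assuming a slope-only shape-space description and illustrative fuzzy thresholds (the paper's actual representation and calibration are richer):

```python
def window_slope(values):
    """Least-squares slope of a window -- one coordinate of a simple
    'shape space' description of the local trend."""
    n = len(values)
    xm = (n - 1) / 2
    ym = sum(values) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(values))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def steep_increase(slope, lo=0.5, hi=2.0):
    """Fuzzy membership of the linguistic term 'steep increase':
    0 below `lo`, 1 above `hi`, linear in between (thresholds are
    illustrative, not the paper's calibration)."""
    if slope <= lo:
        return 0.0
    if slope >= hi:
        return 1.0
    return (slope - lo) / (hi - lo)

def detect_changes(series, width=4, min_membership=0.5):
    """Flag window start indices whose local slope has a high
    membership in the linguistic term 'steep increase'."""
    hits = []
    for t in range(len(series) - width + 1):
        s = window_slope(series[t:t + width])
        if steep_increase(s) >= min_membership:
            hits.append(t)
    return hits
```

Richer queries combine several such geometric terms (slope, curvature, amplitude) with fuzzy connectives, which is what makes the querying linguistic and flexible rather than threshold-bound.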
Abstract:
Since routine eubacterial 16S rRNA PCR does not amplify members of the Chlamydiales order, we tested all samples received in our laboratory during a 10-month period using a pan-Chlamydiales real-time PCR. Three of 107 samples (2.8%) were positive, suggesting a role for some Chlamydiales in the pathogenesis of chronic bronchial stenosis or bronchial stenosis superinfection and as agents of orthopaedic prosthesis infections.