128 results for CPR artifacts
Abstract:
PURPOSE: At high magnetic field strengths (B0 ≥ 3 T), the shorter radiofrequency wavelength produces an inhomogeneous distribution of the transmit magnetic field. This can lead to variable contrast across the brain, which is particularly pronounced in T2-weighted imaging that requires multiple radiofrequency pulses. To obtain T2-weighted images with uniform contrast throughout the whole brain at 7 T, short (2-3 ms) 3D tailored radiofrequency pulses (kT-points) were integrated into a 3D variable flip angle turbo spin echo sequence. METHODS: The excitation and refocusing "hard" pulses of a variable flip angle turbo spin echo sequence were replaced with kT-point pulses. Spatially resolved extended phase graph simulations and in vivo acquisitions at 7 T, utilizing both single-channel and parallel-transmit systems, were used to test different kT-point configurations. RESULTS: Simulations indicated that an extended optimized k-space trajectory ensured a more homogeneous signal throughout images. In vivo experiments showed that high-quality T2-weighted brain images with uniform signal and contrast were obtained at 7 T by using the proposed methodology. CONCLUSION: This work demonstrates that T2-weighted images devoid of artifacts resulting from B1+ inhomogeneity can be obtained at high field through the optimization of extended kT-point pulses. Magn Reson Med 71:1478-1488, 2014. © 2013 Wiley Periodicals, Inc.
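To make the simulation step concrete, here is a minimal, hedged sketch of the extended phase graph (EPG) machinery for a CPMG echo train, showing how a local transmit (B1+) scale factor modulates T2-weighted echo amplitudes; equalizing this effect across the brain is what kT-point pulses aim at. All sequence and tissue parameters, and the 0.6 B1+ scale, are illustrative assumptions, not the paper's optimization.

```python
# Minimal extended-phase-graph (EPG) sketch of a CPMG echo train, illustrating how a
# spatially varying transmit field (B1+ scale) modulates T2-weighted signal.
# Hypothetical parameters; not the paper's kT-point optimization.
import numpy as np

def epg_rf(Q, alpha, phi):
    """Apply an RF pulse of flip angle alpha (rad) and phase phi (rad) to EPG states Q."""
    T = np.array([
        [np.cos(alpha / 2) ** 2, np.exp(2j * phi) * np.sin(alpha / 2) ** 2,
         -1j * np.exp(1j * phi) * np.sin(alpha)],
        [np.exp(-2j * phi) * np.sin(alpha / 2) ** 2, np.cos(alpha / 2) ** 2,
         1j * np.exp(-1j * phi) * np.sin(alpha)],
        [-0.5j * np.exp(-1j * phi) * np.sin(alpha),
         0.5j * np.exp(1j * phi) * np.sin(alpha), np.cos(alpha)],
    ])
    return T @ Q

def epg_relax_grad(Q, dt, T1, T2):
    """Relax for dt, then dephase by one gradient unit (shift the F+ / F- states)."""
    E1, E2 = np.exp(-dt / T1), np.exp(-dt / T2)
    Q = Q * np.array([[E2], [E2], [E1]])
    Q[2, 0] += 1 - E1                 # longitudinal regrowth toward M0 = 1
    Q[0, 1:] = Q[0, :-1]              # F+ states dephase up
    Q[1, :-1] = Q[1, 1:]              # F- states dephase down
    Q[1, -1] = 0
    Q[0, 0] = np.conj(Q[1, 0])        # lowest F+ state from F-
    return Q

def echo_train(b1_scale, n_echoes=30, esp=5e-3, T1=1.5, T2=0.08):
    Q = np.zeros((3, 2 * n_echoes + 1), dtype=complex)
    Q[2, 0] = 1.0                                   # thermal equilibrium
    Q = epg_rf(Q, b1_scale * np.pi / 2, np.pi / 2)  # excitation, CPMG phase
    echoes = []
    for _ in range(n_echoes):
        Q = epg_relax_grad(Q, esp / 2, T1, T2)
        Q = epg_rf(Q, b1_scale * np.pi, 0.0)        # nominal 180 deg refocusing pulse
        Q = epg_relax_grad(Q, esp / 2, T1, T2)
        echoes.append(abs(Q[0, 0]))                 # echo amplitude = F+(0)
    return np.array(echoes)

# An assumed 40% B1+ drop (plausible at 7 T) visibly changes the echo amplitudes:
print(echo_train(1.0)[:5].round(3))
print(echo_train(0.6)[:5].round(3))
```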
Free-breathing whole-heart coronary MRA with 3D radial SSFP and self-navigated image reconstruction.
Abstract:
Respiratory motion is a major source of artifacts in cardiac magnetic resonance imaging (MRI). Free-breathing techniques with pencil-beam navigators efficiently suppress respiratory motion and minimize the need for patient cooperation. However, the correlation between the measured navigator position and the actual position of the heart may be adversely affected by hysteretic effects, navigator position, and temporal delays between the navigators and the image acquisition. In addition, irregular breathing patterns during navigator-gated scanning may result in low scan efficiency and prolonged scan time. The purpose of this study was to develop and implement a self-navigated, free-breathing, whole-heart 3D coronary MRI technique that would overcome these shortcomings and improve the ease-of-use of coronary MRI. A signal synchronous with respiration was extracted directly from the echoes acquired for imaging, and the motion information was used for retrospective, rigid-body, through-plane motion correction. The images obtained from the self-navigated reconstruction were compared with the results from conventional, prospective, pencil-beam navigator tracking. Image quality was improved in phantom studies using self-navigation, while equivalent results were obtained with both techniques in preliminary in vivo studies.
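A minimal sketch of the self-navigation principle, assuming a radial acquisition in which every readout crosses the k-space centre: the per-spoke DC amplitude traces respiration, and a rigid through-plane shift can then be undone retrospectively with the Fourier shift theorem. The array layout, detrending window, and navigator-to-millimetre calibration are illustrative assumptions, not the authors' implementation.

```python
# Sketch of self-navigation for radial acquisitions: derive a respiratory signal from
# the k-space centre of every spoke, then undo a rigid through-plane (z) shift via the
# Fourier shift theorem. Names and the shift model are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter1d

def respiratory_signal(spokes):
    """spokes: (n_spokes, n_samples) complex readouts with the centre at n_samples//2."""
    k0 = np.abs(spokes[:, spokes.shape[1] // 2])  # DC amplitude of each spoke
    trend = uniform_filter1d(k0, size=51)         # slow drift (heating, T1 recovery, ...)
    return k0 - trend                             # residual ~ respiratory modulation

def correct_through_plane(spokes, kz, dz):
    """Shift spoke i by dz[i] mm in z: multiply by exp(-2*pi*i*kz*dz), kz in 1/mm."""
    return spokes * np.exp(-2j * np.pi * kz * dz[:, None])

# Synthetic demo: 400 spokes whose DC term is modulated by ~0.25 Hz breathing.
t = np.arange(400) * 0.5                          # assumed 0.5 s per spoke
spokes = (1 + 0.05 * np.sin(2 * np.pi * 0.25 * t))[:, None] * np.ones((400, 256), complex)
resp = respiratory_signal(spokes)
dz = 3.0 * resp / np.abs(resp).max()              # assumed calibration to mm
```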
Abstract:
The "one-gene, one-protein" rule, coined by Beadle and Tatum, has been fundamental to molecular biology. The rule implies that the genetic complexity of an organism depends essentially on its gene number. The discovery, however, that alternative gene splicing and transcription are widespread phenomena dramatically altered our understanding of the genetic complexity of higher eukaryotic organisms; in these, a limited number of genes may potentially encode a much larger number of proteins. Here we investigate yet another phenomenon that may contribute to generate additional protein diversity. Indeed, by relying on both computational and experimental analysis, we estimate that at least 4%-5% of the tandem gene pairs in the human genome can be eventually transcribed into a single RNA sequence encoding a putative chimeric protein. While the functional significance of most of these chimeric transcripts remains to be determined, we provide strong evidence that this phenomenon does not correspond to mere technical artifacts and that it is a common mechanism with the potential of generating hundreds of additional proteins in the human genome.
Abstract:
Résumé - Architectural features of bacterial genomes and their applications. Bacteria generally possess a single circular chromosome. At each generation, this chromosome is replicated bidirectionally by two enzymatic replication complexes moving in opposite directions from the origin of replication to the terminus, located opposite it. This mode of replication governs the architecture of the chromosome - notably the orientation of genes with respect to replication - and is largely at the origin of the pressures that drive variation in the nucleotide composition of the genome, beyond the constraints imposed by the structure and function of the proteins encoded on the chromosome. The aim of this thesis is to help quantify the effects of replication on chromosome architecture, focusing in particular on the ribosomal RNA genes, which are crucial for the bacterium. This architecture is, moreover, species-specific and thus confers a "genomic identity" on genes. It is demonstrated here that "naive" markers of this identity can be used to detect, notably in the genome of Staphylococcus aureus, pathogenicity islands, which concentrate a large number of the bacterium's virulence factors. These pathogenicity islands are mobile and can pass from one bacterium to another, but they retain for some time the genomic identity of their previous host, which makes them recognizable in their new host. These simple, fast and reliable methods will be of the utmost importance once whole-genome sequencing is rapid and available at very low cost; it will then be possible to analyze a pathogen's determinants of pathogenicity and antibiotic resistance instantly. Summary: The bacterial genome is a highly organized structure, which may be referred to as the genome architecture, and is mainly directed by DNA replication. This thesis provides significant insights into the forces that shape bacterial chromosomes, which differ in each genome and contribute to conferring an identity on it. First, it shows the importance of replication in directing the orientation of prokaryotic ribosomal RNAs, and how it shapes their nucleotide composition in a taxon-specific manner. Second, it highlights the pressure acting on the orientation of genes in general, a majority of which are transcribed in the same direction as replication. Consequently, apparent intra-arm genome rearrangements, involving an exchange of the leading/lagging strands and shown to reduce growth rate, are very likely artifacts due to incorrect contig assembly. Third, it shows that this genomic identity can be used to detect foreign parts of genomes, by establishing this identity for a given host and identifying the regions that deviate from it. This property is notably illustrated with Staphylococcus aureus: known pathogenicity islands and phages, and putative ancient pathogenicity islands concentrating many known pathogenicity-related genes, are highlighted; the analysis also detects, incidentally, proteins responsible for the adhesion of S. aureus to its hosts' cells. In conclusion, the study of the nucleotide composition of bacterial genomes provides the opportunity to better understand the genome-level pressures that shape DNA sequences, and to identify genes and regions potentially related to pathogenicity with fast, simple and reliable methods.
This will be of crucial importance once whole-genome sequencing becomes a rapid, inexpensive and routine tool.
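The idea of "naive" genomic-identity markers lends itself to a compact sketch: score each window of a chromosome by how far its tetranucleotide composition departs from the genome-wide signature, and flag outlier windows as candidate horizontally acquired regions such as pathogenicity islands. Window size, step, and the L1 distance are illustrative choices, not the thesis's exact method.

```python
# Sketch of a "naive" genomic-identity marker: slide a window along the chromosome and
# score how far its tetranucleotide composition deviates from the genome-wide
# signature. High-scoring windows are candidate foreign (e.g., pathogenicity) islands.
import numpy as np
from itertools import product
from collections import Counter

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def kmer_freqs(seq, k=4):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts[m] for m in KMERS) or 1
    return np.array([counts[m] / total for m in KMERS])

def deviation_profile(genome, window=20_000, step=5_000):
    ref = kmer_freqs(genome)                      # host-wide signature
    return [(start, float(np.abs(kmer_freqs(genome[start:start + window]) - ref).sum()))
            for start in range(0, len(genome) - window + 1, step)]

# Demo: a compositionally distinct 20 kb insert in a 220 kb pseudo-chromosome.
rng = np.random.default_rng(0)
host = "".join(rng.choice(list("ACGT"), size=200_000, p=[0.2, 0.3, 0.3, 0.2]))
island = "".join(rng.choice(list("ACGT"), size=20_000, p=[0.4, 0.1, 0.1, 0.4]))
genome = host[:100_000] + island + host[100_000:]
print(max(deviation_profile(genome), key=lambda s: s[1]))  # peaks inside the island
```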
Abstract:
The Cancer Vaccine Consortium of the Sabin Vaccine Institute (CVC/SVI) is conducting an ongoing large-scale immune monitoring harmonization program through its members and affiliated associations. This effort was brought to life as an external validation program by conducting an international Elispot proficiency panel with 36 laboratories in 2005, and was followed by a second panel with 29 participating laboratories in 2006, allowing for application of learnings from the first panel. Critical protocol choices, as well as standardization and validation practices among laboratories, were assessed through detailed surveys. Although panel participants had to follow general guidelines in order to allow comparison of results, each laboratory was able to use its own protocols, materials and reagents. The second panel recorded an overall significantly improved performance, as measured by the ability to detect all predefined responses correctly. Protocol choices and laboratory practices that can have a dramatic effect on the overall assay outcome were identified and led to the following recommendations: (A) establish a laboratory SOP for Elispot testing procedures, including (A1) a counting method for apoptotic cells for determining adequate cell dilution for plating and (A2) overnight rest of cells prior to plating and incubation; (B) use only pre-tested serum optimized for a low background to high signal ratio; (C) establish a laboratory SOP for plate reading, including (C1) human auditing during the reading process and (C2) adequate adjustments for technical artifacts; and (D) allow only trained personnel, certified per laboratory SOPs, to conduct assays. The recommendations described under (A) were found to make a statistically significant difference in assay performance, while the remaining recommendations are based on practical experience confirmed by the panel results, which could not be statistically tested. These results provide the immunotherapy community with initial harmonization guidelines for optimizing Elispot assay performance. Further optimization is in process with ongoing panels.
Abstract:
For the development and evaluation of cardiac magnetic resonance (MR) imaging sequences and methodologies, the availability of a periodically moving phantom to model respiratory and cardiac motion would be of substantial benefit. Given the specific physical boundary conditions in an MR environment, the choice of materials and power source for such phantoms is heavily restricted. Sophisticated commercial solutions are available; however, they are often relatively costly, and user-specific modifications may not easily be implemented. We therefore sought to construct a low-cost, MR-compatible motion phantom that could be easily reproduced and had design flexibility. A commercially available K'NEX construction set (Hyper Space Training Tower, K'NEX Industries, Inc., Hatfield, PA) was used to construct a periodically moving phantom head. The phantom head performs a translation with a superimposed rotation, driven by a motor over a 2-m rigid rod. To synchronize the MR data acquisition with phantom motion (without introducing radiofrequency-related image artifacts), a fiberoptic control unit generates periodic trigger pulses synchronized to the phantom motion. Total material costs of the phantom are less than US$200, and a total of 80 man-hours were required to design and construct the original phantom. With schematics of the present solution, the phantom reproduction may be achieved in approximately 15 man-hours. The presented MR-compatible periodically moving phantom can easily be reproduced, and user-specific modifications may be implemented. Such an approach allows a detailed investigation of motion-related phenomena in MR images.
Abstract:
This paper presents a validation study on statistical nonsupervised brain tissue classification techniques in magnetic resonance (MR) images. Several image models assuming different hypotheses regarding the intensity distribution model, the spatial model and the number of classes are assessed. The methods are tested on simulated data for which the classification ground truth is known. Different noise and intensity nonuniformities are added to simulate real imaging conditions. No enhancement of the image quality is considered either before or during the classification process. This way, the accuracy of the methods and their robustness against image artifacts are tested. Classification is also performed on real data where a quantitative validation compares the methods' results with an estimated ground truth from manual segmentations by experts. Validity of the various classification methods in the labeling of the image as well as in the tissue volume is estimated with different local and global measures. Results demonstrate that methods relying on both intensity and spatial information are more robust to noise and field inhomogeneities. We also demonstrate that partial volume is not perfectly modeled, even though methods that account for mixture classes outperform methods that only consider pure Gaussian classes. Finally, we show that simulated data results can also be extended to real data.
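For reference, a minimal sketch of the simplest model family compared in such studies: an unsupervised Gaussian mixture fitted to intensities alone, with three pure-tissue classes and neither a spatial prior nor partial-volume classes, the very ingredients the study finds important for robustness. The synthetic intensities stand in for brain-masked voxel data.

```python
# Minimal intensity-only baseline: an unsupervised three-class Gaussian mixture on
# voxel intensities (CSF/GM/WM). Spatial regularization (e.g., an MRF prior) and
# partial-volume classes are deliberately omitted from this sketch.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical 1D intensities standing in for brain-masked voxels of a T1-w volume.
intensities = np.concatenate([
    rng.normal(60, 8, 2000),    # CSF-like
    rng.normal(110, 10, 5000),  # GM-like
    rng.normal(150, 9, 4000),   # WM-like
])[:, None]

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(intensities)

# Name components by increasing mean intensity, then count voxels per class.
names = dict(zip(np.argsort(gmm.means_.ravel()), ["CSF", "GM", "WM"]))
print({names[k]: int(np.sum(labels == k)) for k in range(3)})
```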
Abstract:
Here we describe a method for measuring tonotopic maps and estimating bandwidth for voxels in human primary auditory cortex (PAC) using a modification of the population receptive field (pRF) model, developed for retinotopic mapping in visual cortex by Dumoulin and Wandell (2008). The pRF method reliably estimates tonotopic maps in the presence of acoustic scanner noise, and has two advantages over phase-encoding techniques. First, the stimulus design is flexible and need not be a frequency progression, thereby reducing biases due to habituation, expectation, and estimation artifacts, as well as reducing the effects of spatio-temporal BOLD nonlinearities. Second, the pRF method can provide estimates of bandwidth as a function of frequency. We find that bandwidth estimates are narrower for voxels within the PAC than in surrounding auditory responsive regions (non-PAC).
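A schematic sketch of a tonotopic pRF fit under simplifying assumptions (pure-tone stimuli, no hemodynamic convolution, grid-search estimation): each voxel is modelled as a Gaussian in log-frequency whose centre gives the preferred frequency and whose width serves as the bandwidth estimate. Note that the stimulus order below is arbitrary rather than a frequency progression, which is the flexibility noted above.

```python
# Sketch of the pRF idea transposed to tonotopy: fit a Gaussian in log-frequency
# (centre mu = preferred frequency, width sigma = bandwidth proxy) to each voxel's
# responses by grid search. The HRF convolution used in practice is omitted.
import numpy as np

def prf_response(stim_freqs, mu, sigma):
    x = np.log2(stim_freqs)
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_voxel(stim_freqs, bold, mus, sigmas):
    best, best_err = None, np.inf
    for mu in mus:
        for sigma in sigmas:
            pred = prf_response(stim_freqs, mu, sigma)
            beta = pred @ bold / (pred @ pred + 1e-12)   # least-squares amplitude
            err = np.sum((bold - beta * pred) ** 2)
            if err < best_err:
                best, best_err = (mu, sigma, beta), err
    return best

# The stimulus need not sweep in order -- any frequency sequence works.
freqs = np.array([0.2, 8.0, 1.0, 4.0, 0.5, 2.0, 16.0]) * 1e3   # Hz, arbitrary order
truth = prf_response(freqs, mu=np.log2(2e3), sigma=0.8)        # synthetic voxel
print(fit_voxel(freqs, truth,
                mus=np.log2(np.logspace(2, 4.3, 40)),
                sigmas=np.linspace(0.2, 2.0, 20)))
```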
Abstract:
In this article we propose a novel method for calculating cardiac 3-D strain. The method requires the acquisition of myocardial short-axis (SA) slices only and produces the 3-D strain tensor at every point within every pair of slices. Three-dimensional displacement is calculated from SA slices using zHARP, which is then used for calculating the local displacement gradient and thus the local strain tensor. There are three main advantages of this method. First, the 3-D strain tensor is calculated for every pixel without interpolation; this is unprecedented in cardiac MR imaging. Second, this method is fast, in part because there is no need to acquire long-axis (LA) slices. Third, the method is accurate because the 3-D displacement components are acquired simultaneously, which reduces motion artifacts without the need for registration. This article presents the theory of computing 3-D strain from two slices using zHARP, the imaging protocol, and both phantom and in-vivo validation.
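The last step of such a pipeline, going from a per-voxel displacement field to the local strain tensor, fits in a few lines. The sketch below is an illustration rather than the zHARP implementation: it forms the deformation gradient F = I + du/dX by finite differences and returns the Green-Lagrange strain E = 0.5 (F^T F - I) at every voxel.

```python
# Given a 3-D displacement field u(x) on a voxel grid, compute the deformation
# gradient F = I + du/dX and the Green-Lagrange strain E = 0.5 * (F^T F - I).
import numpy as np

def strain_tensor(u, spacing=(1.0, 1.0, 1.0)):
    """u: (3, nx, ny, nz) displacement components; returns E: (nx, ny, nz, 3, 3)."""
    grads = np.stack([np.stack(np.gradient(u[i], *spacing), axis=-1)
                      for i in range(3)], axis=-2)           # [..., i, j] = du_i/dX_j
    F = grads + np.eye(3)                                    # deformation gradient
    return 0.5 * (np.swapaxes(F, -1, -2) @ F - np.eye(3))    # Green-Lagrange strain

# Example: a uniform 1% stretch along x gives E_xx ~ 0.01005 everywhere.
nx = ny = nz = 8
u = np.zeros((3, nx, ny, nz))
u[0] = 0.01 * np.arange(nx, dtype=float)[:, None, None]
print(strain_tensor(u)[4, 4, 4].round(5))
```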
Abstract:
Breathing-induced bulk motion of the myocardium during data acquisition may cause severe image artifacts in coronary magnetic resonance angiography (MRA). Current motion compensation strategies include breath-holding or free-breathing MR navigator gating and tracking techniques. Navigator-based techniques have been further refined by the applications of sophisticated 2D k-space reordering techniques. A further improvement in image quality and a reduction of relative scanning duration may be expected from a 3D k-space reordering scheme. Therefore, a 3D k-space reordered acquisition scheme including a 3D navigator gated and corrected segmented k-space gradient echo imaging sequence for coronary MRA was implemented. This new zonal motion-adapted acquisition and reordering technique (ZMART) was developed on the basis of a numerical simulation of the Bloch equations. The technique was implemented on a commercial 1.5T MR system, and first phantom and in vivo experiments were performed. Consistent with the results of the theoretical findings, the results obtained in the phantom studies demonstrate a significant reduction of motion artifacts when compared to conventional (non-k-space reordered) gating techniques. Preliminary in vivo findings also compare favorably with the phantom experiments and theoretical considerations. Magn Reson Med 45:645-652, 2001.
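As a toy 2D analogue of motion-adapted reordering (the actual ZMART scheme is 3D and was derived from Bloch-equation simulations), each phase-encode line below is given a motion-acceptance threshold that tightens toward the k-space centre, and each shot acquires the most central line whose threshold admits the current navigator displacement. The zone boundaries and thresholds are invented for illustration.

```python
# Toy motion-adapted k-space reordering: central lines (which dominate contrast and
# are most motion-sensitive) are filled only during near-reference navigator shots.
import numpy as np

rng = np.random.default_rng(1)
nav = np.abs(rng.normal(0.0, 2.0, size=400))   # |diaphragm displacement| per shot, mm

def threshold(ky):
    """Acceptance window (mm) tightens toward the centre of k-space."""
    a = abs(ky)
    return 1.0 if a < 8 else 2.0 if a < 16 else 4.0

def acquire(nav):
    pending = set(range(-32, 32))
    order = []
    for shot, d in enumerate(nav):
        admissible = [ky for ky in pending if d <= threshold(ky)]
        if admissible:
            ky = min(admissible, key=abs)      # fill the most central admissible line
            pending.remove(ky)
            order.append((shot, ky))
    return order, sorted(pending)              # leftover lines would be reacquired

order, leftover = acquire(nav)
print(len(order), "lines placed;", len(leftover), "awaiting quieter shots")
```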
Abstract:
Literature on medical dispatch is growing, focusing mainly on efficiency (under- and overtriage) and dispatch-assisted CPR, but the issue of population catchment size, functional costs and rationalization is rarely addressed. Although a trend toward a decreasing number of dispatch centres can be observed in many European countries, there is currently no evidence on the right catchment size to achieve the best balance between quality of service and costs.
Abstract:
We evaluated the performance of an optical camera-based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field-of-view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2×2×3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution, comparing PMC on versus PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value for each image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no-motion conditions and improved the time-series temporal signal-to-noise ratio by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p<0.001, uncorrected) in both task-based experiments were comparable for the no-motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow-motion cases. The PMC system is a robust solution to decrease the motion sensitivity of multi-shot 3D EPI sequences and thereby overcome one of the main roadblocks to their widespread use in fMRI studies.
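A hedged sketch of what a per-volume motion metric of this kind can look like: 6-DOF camera poses are mapped to the displacement of a point on an assumed 80 mm head-radius sphere, and frame-to-frame movement is integrated within each volume's acquisition window, collapsing the tracking stream to one scalar per volume. The radius, the small-angle mapping, and the absence of k-space-centre weighting are simplifying assumptions, not the authors' exact metric.

```python
# Collapse 6-DOF camera tracking into one motion score per fMRI volume.
import numpy as np

HEAD_RADIUS = 80.0  # mm; maps rotations to approximate scalp displacement (assumed)

def pose_to_point(pose):
    """pose: (tx, ty, tz, rx, ry, rz), small rotations in radians."""
    t, r = np.asarray(pose[:3]), np.asarray(pose[3:])
    return t + HEAD_RADIUS * r            # small-angle surface displacement, mm

def motion_score_per_volume(poses, t_samples, t_edges):
    """poses: (n, 6); t_samples: (n,) s; t_edges: (n_vol+1,) volume boundaries, s."""
    pts = np.array([pose_to_point(p) for p in poses])
    step = np.linalg.norm(np.diff(pts, axis=0), axis=1)    # mm moved between samples
    t_mid = 0.5 * (t_samples[1:] + t_samples[:-1])
    vol = np.searchsorted(t_edges, t_mid) - 1              # volume index of each step
    n_vol = len(t_edges) - 1
    ok = (vol >= 0) & (vol < n_vol)
    return np.bincount(vol[ok], weights=step[ok], minlength=n_vol)

# Demo: 1000 pose samples over 10 volumes of 2 s each, with a nod at t ~ 12 s.
t = np.linspace(0.0, 20.0, 1000)
poses = np.zeros((1000, 6))
poses[:, 3] = 0.02 / (1 + np.exp(-(t - 12.0) * 10))        # 0.02 rad pitch step
print(motion_score_per_volume(poses, t, np.arange(11) * 2.0).round(2))
```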
Abstract:
Engineered nanomaterials (ENMs) exhibit special physicochemical properties and thus are finding their way into an increasing number of industries, enabling products with improved properties. Their increased use brings a greater likelihood of exposure to the nanoparticles (NPs) that could be released during the life cycle of nano-enabled products. The field of nanotoxicology has emerged as a consequence of the development of these novel materials, and it has gained ever more attention due to the urgent need to gather information on exposure to them and to understand the potential hazards they engender. However, current studies on nanotoxicity tend to focus on pristine ENMs, and they use these toxicity results to generalize risk assessments on human exposure to NPs. ENMs released into the environment can interact with their surroundings, change characteristics and exhibit toxicity effects distinct from those of pristine ENMs. Furthermore, NPs' large surface areas provide extra-large potential interfaces, thus promoting more significant interactions between NPs and other co-existing species. In such processes, other species can attach to a NP's surface and modify its surface functionality, in addition to the toxicity it normally exhibits. One particular occupational health scenario involves NPs and low-volatility organic compounds (LVOC), a common type of pollutant existing around many potential sources of NPs. LVOC can coat a NP's surface and then dominate its toxicity. One important mechanism in nanotoxicology is the creation of reactive oxygen species (ROS) on a NP's surface; LVOC can modify the production of these ROS. In summary, nanotoxicity research should not be limited to the toxicity of pristine NPs, nor use their toxicity to evaluate the health effects of exposure to environmental NPs. Instead, the interactions that NPs have with other environmental species should also be considered and researched. The potential health effects of exposure to NPs should be derived from these real-world NPs, with characteristics modified by the environment and their distinct toxicity. Failure to suitably address toxicity results could lead to an inappropriate treatment of nano-release, affect the environment and public health, and put a blemish on the development of sustainable nanotechnologies as a whole. The main objective of this thesis is to demonstrate a process for coating NP surfaces with LVOC using a well-controlled laboratory design and, with regard to these NPs' capacity to generate ROS, explore the consequences of changing particle toxicity. The dynamic coating system developed yielded stable and replicable coating performance, simulating an important realistic scenario. Clear changes in the size distribution of airborne NPs were observed using a scanning mobility particle sizer, were confirmed using both liquid nanotracking analyses and transmission electron microscopy (TEM) imaging, and were verified to arise from the LVOC coating. Coating thicknesses corresponded to the amount of coating material used and were controlled using the parameters of the LVOC generator. The capacity of pristine silver NPs (Ag NPs) to generate ROS was reduced when they were given a passive coating of inert paraffin: this coating blocked the reactive zones on the particle surfaces. In contrast, a coating of active reduced anthraquinone contributed to redox reactions and generated ROS itself, despite the fact that ROS generation due to oxidation by the Ag NPs themselves was quenched.
Further objectives of this thesis included the development of ROS methodology and the analysis of ROS case studies. Since the capacity of NPs to create ROS is an important effect in nanotoxicity, we attempted to refine and standardize the use of 2′,7′-dichlorodihydrofluorescin (DCFH) as a chemical tailored for the characterization of NPs' capacity for ROS generation. Previous studies had reported a wide variety of results, owing to a number of insufficiently well-controlled factors. We therefore cross-compared chemicals and concentrations, explored ways of dispersing NP samples in liquid solutions, identified sources of contradictions in the literature and investigated ways of reducing artifactual results. The most robust results were obtained by sonicating an optimal sample of NPs in a DCFH-HRP solution made of 5 µM DCFH and 0.5 unit/ml horseradish peroxidase (HRP). Our findings explained how the major reasons for previously conflicting results were the different experimental approaches used and the potential artifacts appearing when using high sample concentrations. Applying our advanced DCFH protocol along with other physicochemical characterizations and biological analyses, we conducted several case studies characterizing aerosols and NP samples. Exposure to aged brake-wear dust engenders a risk of potentially deleterious health effects in occupational scenarios. We performed microscopy and elemental analyses, as well as ROS measurements, with acellular and cellular DCFH assays. TEM images revealed the samples to be heterogeneous mixtures with few particles at the nanoscale. Metallic and non-metallic elements were identified, primarily iron, carbon and oxygen. Moderate amounts of ROS were detected in the cell-free fluorescence tests; however, exposed cells were not dramatically activated. The reason aged brake-wear samples caused less oxidative stress than fresh brake-wear samples may be their larger size, and thus smaller relative reactive surface area, in addition to their highly aged state due to oxidation. Other case studies involving welding fumes and differently charged NPs confirmed the performance of our DCFH assay and found ROS generation linked to varying characteristics, especially the surface functionality of the samples.

Engineered nanomaterials (ENM) exhibit particular physicochemical properties and have therefore found applications in a growing number of sectors, enabling products with improved properties. Their increased use brings a greater risk that humans will be exposed to the nanoparticles (NP) released over their life cycle. As a consequence, nanotoxicology has emerged and gained ever more attention, given the need to gather the required information on exposure to these new materials and the risks associated with them. However, current nanotoxicity studies tend to focus on ENMs and to use these toxicological results to generalize risk assessments of human exposure to NPs. ENMs released into the environment can interact with their surroundings, change their characteristics and show toxicity effects distinct from those of the original ENMs. Moreover, the large surface area of NPs provides a large interface with their surroundings, promoting interactions between NPs and the other species present. In this process, other species can attach to the surface of NPs and modify their surface functionality as well as their toxicity. One particular occupational exposure scenario involves both NPs and low-volatility organic compounds (LVOC), a common type of pollutant associated with many sources of NPs. LVOC can deposit on the surface of NPs and thus dominate the overall toxicity of the particle. An important mechanism in nanotoxicology is the creation of reactive oxygen species (ROS) at the particle surface, and LVOC can modify this ROS production. In summary, nanotoxicity research should not be limited to the toxicity of pristine ENMs, nor use their toxicity to assess the health effects of exposure to environmental NPs; rather, the interactions that NPs have with other environmental species must be considered and studied. The possible health effects of exposure to NPs should be derived from these NPs with modified characteristics and distinct toxicity. Using inappropriate toxicity results can lead to poor management of NP releases, harm the environment and public health, and hinder the sustainable development of the nanotechnology industries as a whole. The main objective of this thesis is to demonstrate the process of LVOC deposition onto NP surfaces in a well-controlled laboratory setting and to explore the consequences of the resulting change in particle toxicity for their capacity to generate ROS. The dynamic coating system developed achieved stable and reproducible coating performance, simulating an important realistic scenario. Clear changes in the size distribution of airborne NPs were observed by scanning mobility particle sizing, confirmed both by liquid nanotracking analysis and by transmission electron microscopy (TEM), and verified to originate from the LVOC coating. The correspondence between the coating thickness and the amount of coating material available was demonstrated and could be controlled through the parameters of the LVOC generator. ROS generation by silver NPs (Ag NPs) was reduced by a passive coating of inert paraffin that blocked the reactive zones on the particle surface. In contrast, an active coating of reduced anthraquinone contributed to redox reactions and generated ROS, even when ROS production through oxidation of the Ag NPs by oxygen was quenched. The associated objectives included the development of ROS methodology and ROS-specific case studies. Given that the capacity of NPs to generate ROS contributes greatly to nanotoxicity, we attempted to define a standard for the use of 2′,7′-dichlorodihydrofluorescin (DCFH) suited to characterizing ROS generation by NPs. Previous studies had reported a wide variety of differing results, owing to insufficient control of several factors. We therefore compared the chemicals and concentrations used, explored means of dispersing NP samples in liquid solution, investigated the sources of the contradictions identified in the literature and studied ways of reducing artifactual results. Very good results were obtained by sonicating an optimal quantity of NP sample in a DCFH-HRP solution made of 5 µM DCFH and 0.5 unit/ml horseradish peroxidase (HRP). Our study showed that the main causes of the conflicts between previous studies in the literature were the different experimental approaches used and the potential artifacts arising from high NP concentrations in the samples. Using our advanced DCFH protocol together with other physicochemical characterizations and biological analyses, we conducted several case studies characterizing aerosol and NP samples. Aged brake-wear dust in particular presents a high exposure risk in occupational scenarios, with potentially harmful health effects. We performed elemental and microscopy analyses as well as ROS measurements with cellular and acellular DCFH. TEM results revealed that the samples are heterogeneous mixtures of particles, of which only a small proportion lies at the nanoscale. Metallic and non-metallic elements were identified, mainly iron, carbon and oxygen. A moderate amount of ROS was detected in the acellular fluorescence test; however, the exposed cells were not strongly activated. The reason aged brake-wear samples cause less oxidative stress than fresh brake dust may be their larger size, giving a proportionally smaller reactive surface area, together with their advanced state of oxidation, which lowers reactivity. Other case studies on welding fumes and on differently charged NPs confirmed the performance of our DCFH assay and found that ROS generation is linked to particular characteristics, notably the surface functionality of the samples.
Abstract:
Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), a software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth.
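One of the curved-surface operations mentioned above, fluorescence signal quantification, can be sketched compactly: sample the 3D stack along each mesh-vertex normal within a thin band just below the surface and average, yielding one value per vertex. The band depths, step count, and trilinear interpolation below are illustrative assumptions, not MorphoGraphX's implementation.

```python
# Quantify a 3-D fluorescence stack on a curved surface mesh by sampling the image
# along each vertex normal within a thin band just below the surface.
import numpy as np
from scipy.ndimage import map_coordinates

def surface_signal(stack, vertices, normals, band=(1.0, 4.0), n_steps=6):
    """stack: 3-D image (z, y, x); vertices/normals: (n, 3) in voxel coordinates."""
    depths = np.linspace(band[0], band[1], n_steps)
    samples = []
    for d in depths:
        pts = vertices - d * normals           # step inward along the outward normal
        samples.append(map_coordinates(stack, pts.T, order=1, mode="nearest"))
    return np.mean(samples, axis=0)            # one signal value per vertex

# Demo: a spherical "organ" whose signal sits in a bright shell beneath the surface.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
r = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2 + (zz - 32.0) ** 2)
stack = np.exp(-((r - 20.0) ** 2) / 8.0)       # bright shell at radius 20
theta = np.linspace(0, np.pi, 50)
verts = np.stack([32 + 24 * np.cos(theta), 32 + 24 * np.sin(theta),
                  np.full(50, 32.0)], axis=1)  # points on the surface at radius 24
norms = (verts - 32.0) / 24.0                  # outward unit normals
print(surface_signal(stack, verts, norms).round(2)[:5])
```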