921 results for visualisation formalism
Abstract:
PURPOSE: To assess the pattern of failure observed after (18)F-fluoroethyltyrosine (FET) PET-based planning and chemo- and radiotherapy (RT) for high-grade glioma. METHODS: All patients prospectively underwent RT planning using morphological gross tumour volumes (GTVs) and biological tumour volumes (BTVs). The post-treatment recurrence tumour volumes (RTVs) of 10 patients were transferred onto their planning CTs. First, failure patterns were defined in terms of the percentage of the RTV located outside the GTV and the BTV. Second, the location of the RTV with respect to the delivered dose distribution was assessed using the RTVs' dose-volume histograms (DVHs). Recurrences with >95% of their volume within the 95% isodose line were considered central recurrences. Finally, the relationship between survival and GTV/BTV mismatch was assessed. RESULTS: The median percentages of RTV outside the GTV and the BTV were 41.8% (range, 10.5-92.4) and 62.8% (range, 34.2-81.1), respectively. The majority of recurrences (90%) were centrally located. Using a composite target volume planning formalism, the degree of GTV and BTV mismatch did not correlate with survival. CONCLUSIONS: The observed failure pattern after FET-PET planning and chemo-RT is primarily central. The target mismatch-survival data suggest that FET-PET planning may counteract the possibility of BTV-related progression, which may have a detrimental effect on survival.
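Once all volumes are resampled onto the same planning-CT grid, the mismatch and centrality measures described above reduce to simple voxel arithmetic. A minimal sketch (not the authors' code; function names and the boolean-mask representation are our own assumptions):

```python
import numpy as np

def percent_outside(rtv_mask: np.ndarray, target_mask: np.ndarray) -> float:
    """Percentage of the recurrence volume (RTV) lying outside a target volume.
    Both arguments are boolean voxel masks on the same planning-CT grid."""
    rtv_voxels = rtv_mask.sum()
    if rtv_voxels == 0:
        raise ValueError("empty RTV mask")
    outside = np.logical_and(rtv_mask, ~target_mask).sum()
    return 100.0 * outside / rtv_voxels

def is_central_recurrence(rtv_mask: np.ndarray, dose: np.ndarray,
                          prescription_dose: float) -> bool:
    """'Central' if >95% of the RTV lies within the 95% isodose."""
    inside_95 = dose[rtv_mask] >= 0.95 * prescription_dose
    return inside_95.mean() > 0.95
```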
Abstract:
To determine the complete spatio-temporal dynamics of a three-dimensional quantum system of N particles, the Schrödinger equation must be integrated in 3N dimensions. The capacity of current computers allows this to be done in at most 3 dimensions. In order to reduce the computation time needed to integrate the multidimensional Schrödinger equation, a series of approximations is usually made, such as the Born–Oppenheimer or mean-field approximations. In general, the price paid for these approximations is the loss of quantum correlations (or entanglement). It is therefore necessary to develop numerical methods that make it possible to integrate and study the dynamics of mesoscopic systems (systems of between three and some ten particles) while taking into account, even if only approximately, the quantum correlations between particles. Recently, in the context of electron transport by tunnelling in semiconductor materials, X. Oriols has developed a new method [Phys. Rev. Lett. 98, 066803 (2007)] for the treatment of quantum correlations in mesoscopic systems. This new proposal is based on the de Broglie–Bohm formulation of quantum mechanics. We note that the approach taken by X. Oriols, which we intend to follow here, is not pursued in order to have an interpretive tool, but to obtain a numerical computation tool with which to integrate more efficiently the Schrödinger equation for quantum systems of few particles. Within the framework of this doctoral thesis project, we intend to extend the algorithms developed by X. Oriols to quantum systems made up of both fermions and bosons, and to apply these algorithms to different mesoscopic quantum systems in which quantum correlations play an important role. Specifically, the problems to be studied are the following: (i) photoionization of the helium and lithium atoms by an intense laser; (ii) study of the relationship between X. Oriols' formulation and the Born–Oppenheimer approximation; (iii) study of quantum correlations in bi- and tripartite systems in the configuration space of the particles using the de Broglie–Bohm formulation.
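For illustration only, and not Oriols' algorithm itself: a one-dimensional sketch of the de Broglie–Bohm ingredient the proposal builds on, namely advancing trajectories along the velocity field v = (ħ/m) Im(∂ψ/∂x / ψ). Units, grid, and the frozen wave packet are assumptions; in a real calculation ψ would be propagated by a Schrödinger solver.

```python
import numpy as np

HBAR, M = 1.0, 1.0  # illustrative units (assumption)

def bohmian_step(x, psi, grid, dt):
    """Advance trajectory positions x one step along the de Broglie-Bohm
    velocity field v = (hbar/m) * Im( (d psi/dx) / psi )."""
    dpsi = np.gradient(psi, grid)              # numerical derivative of psi
    v_grid = (HBAR / M) * np.imag(dpsi / psi)  # velocity field on the grid
    v = np.interp(x, grid, v_grid)             # velocity at the particle positions
    return x + v * dt

# Example: trajectories guided by a Gaussian packet with momentum k = 2.
grid = np.linspace(-10, 10, 2048)
psi = np.exp(-grid**2 + 1j * 2.0 * grid)
x = np.array([-0.5, 0.0, 0.5])                 # three trajectory seeds
print(bohmian_step(x, psi, grid, dt=0.01))
```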
Abstract:
Synthesis report: Research context and stakes: coarctation of the aorta, a narrowing of the descending thoracic aorta, is one of the most frequent congenital heart malformations. Its diagnosis nevertheless remains difficult, especially when it is associated with a patent ductus arteriosus or with a more complex cardiac malformation. In these settings, the classic echocardiographic signs that usually establish the diagnosis (visualisation of a juxtaductal narrowing and Doppler flow acceleration at the aortic isthmus) may be absent or difficult to image. Validating indices based on anatomical measurements that are easy to acquire by echocardiography and independent of age, haemodynamic status and associated cardiac malformations would provide significant help in diagnosing coarctation of the aorta. We therefore set out to validate, in a retrospective study, the reliability of two indices for this indication: the carotid-subclavian artery index (CSA index; ratio of the diameter of the distal transverse aortic arch to the distance between the left carotid and subclavian arteries) and the isthmus/descending aorta index (I/D index; ratio of the diameter of the aortic isthmus to that of the descending aorta). Our article: we retrospectively computed both indices (CSA and I/D) in a group of 68 children with coarctation and a group of 24 age- and sex-matched controls. Children with coarctation had significantly lower CSA and I/D indices than the control group (CSA index: 0.84 ± 0.39 vs. 2.65 ± 0.82, p<0.0001; I/D index: 0.58 ± 0.18 vs. 0.98 ± 0.19, p<0.0001). For both indices, neither the presence of another cardiac malformation nor age affected the significant difference between the two groups. Conclusions: our research validated that a CSA index below 1.5 is strongly suggestive of coarctation, independently of the patient's age and of the presence of another cardiac malformation. The I/D index (cut-off 0.64) is less specific than the CSA index. Combining the two indices increases sensitivity and retrospectively allowed the diagnosis of all coarctation cases on the sole basis of a standard bedside echocardiogram. Perspectives: this retrospective study deserves to be verified by a prospective study to confirm the contribution of these indices to the management of patients with suspected coarctation of the aorta. However, the setting in which these indices could have the greatest impact remains to be explored. Indeed, if the postnatal diagnosis of coarctation is sometimes difficult, its prenatal diagnosis is markedly more so. The obligatory presence of the ductus arteriosus and the physiological isthmic hypoplasia of the fetus force the fetal cardiologist to rely on indirect, poorly sensitive signs of a possible coarctation (right-heart dominance). Validating the CSA and/or I/D index in the fetus would therefore be a major advance in the prenatal diagnosis of coarctation of the aorta.
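As a worked example of the two indices and the cut-offs reported above (function and variable names are our own; measurements in any consistent length unit):

```python
def csa_index(distal_transverse_arch_diam: float,
              carotid_subclavian_distance: float) -> float:
    """CSA index: distal transverse aortic arch diameter divided by the
    distance between the left carotid and left subclavian arteries."""
    return distal_transverse_arch_diam / carotid_subclavian_distance

def id_index(isthmus_diam: float, descending_aorta_diam: float) -> float:
    """I/D index: aortic isthmus diameter over descending aorta diameter."""
    return isthmus_diam / descending_aorta_diam

def suggests_coarctation(csa: float, i_d: float) -> bool:
    # Cut-offs reported in the study: CSA < 1.5 strongly suggestive,
    # I/D < 0.64 supportive but less specific.
    return csa < 1.5 or i_d < 0.64

# The coarctation-group mean indices reported above flag positive:
print(suggests_coarctation(0.84, 0.58))  # True
```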
Abstract:
ABSTRACT: q-Space-based techniques such as diffusion spectrum imaging, q-ball imaging, and their variations have been used extensively in research for their capability to delineate complex neuronal architectures, such as multiple fibre crossings, within each image voxel. The purpose of this article is to provide an introduction to the q-space formalism and to the principles of the basic q-space techniques, together with a discussion of the advantages of, and challenges in, translating these techniques into the clinical environment. A review of the q-space-based protocols currently used in clinical research is also provided.
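A sketch of the basic q-space bookkeeping under the narrow-pulse approximation, where q = γδG/(2π) and the signal E(q) is the Fourier transform of the ensemble-average displacement propagator P(R). The functions and normalization are illustrative, not taken from the article:

```python
import numpy as np

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def q_value(G, delta):
    """q = gamma * delta * G / (2 pi), in m^-1, for gradient amplitude G (T/m)
    and pulse duration delta (s), under the narrow-pulse approximation."""
    return GAMMA * delta * G / (2 * np.pi)

def displacement_propagator(E_of_q):
    """E(q) and the displacement propagator P(R) form a Fourier pair, so P is
    recovered by an inverse FFT of the signal sampled on a regular q grid."""
    P = np.real(np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(E_of_q))))
    return P / P.sum()  # normalized to a discrete probability distribution

print(f"q = {q_value(G=0.04, delta=10e-3):.0f} m^-1")  # 40 mT/m, 10 ms pulse
```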
Abstract:
Specific properties emerge from the structure of large networks, such as that of worldwide air traffic, including a highly hierarchical node structure and multi-level small world sub-groups that strongly influence future dynamics. We have developed clustering methods to understand the form of these structures, to identify structural properties, and to evaluate the effects of these properties. Graph clustering methods are often constructed from different components: a metric, a clustering index, and a modularity measure to assess the quality of a clustering method. To understand the impact of each of these components on the clustering method, we explore and compare different combinations. These different combinations are used to compare multilevel clustering methods to delineate the effects of geographical distance, hubs, network densities, and bridges on worldwide air passenger traffic. The ultimate goal of this methodological research is to demonstrate evidence of combined effects in the development of an air traffic network. In fact, the network can be divided into different levels of "cohesion", which can be qualified and measured by comparative studies (Newman, 2002; Guimera et al., 2005; Sales-Pardo et al., 2007).
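A minimal illustration of the modularity component using networkx (a built-in toy graph stands in for the air-traffic network; this is not the authors' pipeline):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy stand-in for an air-traffic graph: nodes would be airports and edge
# weights passenger volumes (unweighted here for brevity).
G = nx.karate_club_graph()

communities = greedy_modularity_communities(G)
Q = modularity(G, communities)
print(f"{len(communities)} clusters, modularity Q = {Q:.3f}")
```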
Abstract:
Network analysis naturally relies on graph theory and, more particularly, on the use of node and edge metrics to identify the salient properties in graphs. When building visual maps of networks, these metrics are turned into useful visual cues or are used interactively to filter out parts of a graph while querying it, for instance. Over the years, analysts from different application domains have designed metrics to serve specific needs. Network science is an inherently cross-disciplinary field, which leads to the publication of metrics with similar goals; different names and descriptions of their analytics often mask the similarity between two metrics that originated in different fields. Here, we study a set of graph metrics and compare their relative values and behaviors in an effort to survey their potential contributions to the spatial analysis of networks.
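One way to make such a comparison concrete, sketched here on a toy random graph rather than the study's data: compute several standard node metrics on the same graph and correlate their values across nodes.

```python
import networkx as nx
import numpy as np

G = nx.erdos_renyi_graph(200, 0.05, seed=1)

metrics = {
    "degree":      dict(G.degree()),
    "betweenness": nx.betweenness_centrality(G),
    "closeness":   nx.closeness_centrality(G),
    "pagerank":    nx.pagerank(G),
}

names = list(metrics)
values = np.array([[metrics[n][v] for v in G.nodes] for n in names])
corr = np.corrcoef(values)  # pairwise similarity of metric behaviours
for i, a in enumerate(names):
    for j, b in enumerate(names[i + 1:], start=i + 1):
        print(f"{a:11s} vs {b:11s}: r = {corr[i, j]:+.2f}")
```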
Abstract:
Research project carried out during a stay at the Center for European Integration of the Freie Universität Berlin, Germany, between 2007 and 2009. The central topic of the project is the mathematical description of spatio-temporal processes by means of the theory of Continuous-Time Random Walks. The most significant contribution of our work in this field consists in considering, for the first time, the interaction between several processes acting in a coupled manner, since existing models had so far been limited to the study of individual or independent processes. This idea makes it possible, for example, to pose a transport process in space together with a reaction process (a chemical reaction, say) and to study statistically how each can alter the behaviour of the other. This represents an important qualitative leap in the description of reaction-dispersal processes, since our models can incorporate rather realistic dispersal patterns and temporal behaviours (life cycles) compared with conventional models. To complete this theoretical work it has also been necessary to develop some numerical tools (lattice models) to facilitate the implementation of the models. On the practical side, we have applied these ideas to the dynamics between viruses and the immune system that takes place when an infection occurs in the organism. Various experimental studies carried out in recent years show that the immune response of higher organisms exhibits rather complex temporal dynamics (for example, in the case of the programmed response). For this reason, our mathematical techniques are especially useful for the analysis of these systems. Finally, other possible applications of the models, such as the study of biological invasions, have also been considered.
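A toy Monte Carlo sketch of a CTRW coupled to a reaction, in the spirit of (though far simpler than) the models described above; all parameters and the exponential waiting-time choice are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ctrw(n_walkers=10_000, t_max=100.0, rate=1.0, jump_sigma=1.0, p_react=0.05):
    """CTRW with exponential waiting times, coupled to a reaction that
    removes each walker with probability p_react at every renewal event."""
    positions = []
    for _ in range(n_walkers):
        t, x = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / rate)   # waiting time before next event
            if t > t_max:
                break
            if rng.random() < p_react:         # reaction removes the walker
                x = None
                break
            x += rng.normal(0.0, jump_sigma)   # spatial jump
        if x is not None:
            positions.append(x)
    return np.asarray(positions)

survivors = ctrw()
print(f"survival fraction = {len(survivors)/10_000:.2f}, "
      f"MSD = {np.mean(survivors**2):.1f}")
```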
Abstract:
ABSTRACT: Although the physical properties of DNA structure have been intensively studied for over 50 years, there are still many important questions that need to be answered. For example, what happens to protein-free double-stranded DNA when it is strongly bent, as in DNA forming nucleosomes? Is such protein-free DNA smoothly bent (i.e. it remains within the elastic limits of DNA rigidity) or does it release its bending stress by forming sharp kinks (i.e. it exits the elastic regime and breaks the stacking between neighbouring base pairs in localized regions)? Electron microscopy can provide an answer to this question by directly visualizing DNA minicircles that have the size of nucleosome gyres (ca 90 bp). For the answer to be scientifically valid, one needs to observe DNA molecules while they are still suspended in the solution of interest, without added stains, chemicals or fixatives, since these can change the properties of the DNA.
CryoEM techniques developed by Jacques Dubochet's group beginning in the 1980s permit direct visualization of DNA molecules suspended in cryo-vitrified layers of aqueous solutions. However, the relatively weak contrast of cryo-EM preparations, combined with the very small size of the DNA minicircles, made it necessary to optimize many of the steps and parameters of cryo-EM specimen preparation and image acquisition in order to obtain stereo-pairs of images that permit the 3-D reconstruction of the observed DNA minicircles. In the first part of my thesis I describe the optimization of the cryo-EM preparation and image acquisition processes using plasmid-size DNA molecules as a test object. In the second part, I describe how I formed the 94 bp DNA minicircles and how I introduced structural modifications such as nicks or gaps. In the third part, I describe the cryo-EM analysis of the constructed DNA minicircles. That analysis, supported by biochemical tests, strongly indicates that DNA minicircles as small as 94 bp remain within the elastic limits of DNA structure, i.e. the minicircles adopt a regular circular shape in which bending is redistributed along the molecules.
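As a back-of-the-envelope check on the elasticity question (our own illustration, not the thesis' analysis): the worm-like-chain energy of closing DNA into a smooth circle, assuming the standard persistence length of about 50 nm:

```python
import numpy as np

KT = 1.0     # energies expressed in units of k_B * T
L_P = 50.0   # DNA bending persistence length, nm (standard literature value)
RISE = 0.34  # helical rise per base pair, nm

def circle_bending_energy(n_bp: int) -> float:
    """Worm-like-chain energy of closing n_bp of DNA into a smooth circle:
    E = (1/2) kT * L_p * curvature^2 * L = 2 pi^2 kT * L_p / L."""
    L = n_bp * RISE
    return 2 * np.pi**2 * KT * L_P / L

print(f"94 bp minicircle: ~{circle_bending_energy(94):.0f} kT of smooth bending")
```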
Abstract:
Taking as a starting point the seeming inconsistency of late-medieval romances notoriously 'run wild' (verwildert), this article is concerned with the description of an abstract form of narrative coherence that is based on the notion of the diagrammatic. In a first section, this concept is illustrated in a simplified manner by an analysis of Boccaccio's Decameron based on two levels of spatial structure: that of the autograph Berlin manuscript (Codex Hamilton 90) and that of the recipient's mental visualisation of the relations between the frame and the tales of the work. It is argued that the connectivity of the work as a whole depends on the perception of those two spatial representations of the plot. A second section develops this concept in a more theoretical fashion, drawing on Charles Sanders Peirce's notion of diagrammatic reasoning as a way of perceiving relations through mental and material topological representations. Correspondingly, a view of narrative is proposed that does not depend on the traditional perspective of temporal sequence but emphasizes the spatial structure of literary narrative. It is argued that these conditions form the primary ontological mode of narrative, whereas the temporal development of a story is an aesthetic illusion that has been specifically stimulated by the narrative conventions of approximately the past three centuries and must thus be considered a secondary effect. To conclude, an interpretation in miniature of an aspect of Heinrich von Neustadt's Apollonius von Tyrland that seems to have 'run wild' is undertaken from a diagrammatic perspective.
Abstract:
Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in Pamphlet No. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are high enough to cause camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. During WB planar (sweep) imaging, however, a variable amount of imaged activity is in the detector's field of view as a function of time, so camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed that takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion, including build-up and attenuation coefficients, was also obtained in order to convert corrected count-rate values to activity. The algorithm successfully accounted for motion- and time-dependent saturation effects in both the simulated and the measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of samarium-153 is 46.3 h. Accurate WB planar dosimetry of high activities relies on compensating for camera saturation in a way that accounts for the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
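A static building block of such a correction, sketched here under an assumed paralyzable dead-time model m = n·e^(−nτ): invert the saturation curve for the true rate with Newton's method. The paper's algorithm additionally makes this inversion time-dependent as the detector sweeps; the specific model choice is our assumption.

```python
import math

def true_rate_paralyzable(measured, tau, tol=1e-9, max_iter=100):
    """Invert the paralyzable dead-time model m = n * exp(-n * tau) for the
    true count rate n, given a measured rate m, using Newton's method."""
    n = measured  # measured rate is a natural starting guess
    for _ in range(max_iter):
        f = n * math.exp(-n * tau) - measured
        fprime = (1.0 - n * tau) * math.exp(-n * tau)
        step = f / fprime
        n -= step
        if abs(step) < tol:
            return n
    raise RuntimeError("Newton iteration did not converge")

# e.g. ~5% dead-time losses at tau = 1 microsecond:
print(f"{true_rate_paralyzable(5.0e4, 1e-6):.0f} counts/s")
```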
Abstract:
OBJECTIVE: To compare three spin-echo sequences, transverse T1-weighted (T1WI), transverse fat-saturated (FS) T2-weighted (T2WI), and transverse gadolinium-enhanced (Gd) FS T1WI, for the visualisation of the normal and abnormal finger A2 pulley with magnetic resonance (MR) imaging at 3 tesla (T). MATERIALS AND METHODS: Sixty-three fingers from 21 patients were consecutively investigated. Two musculoskeletal radiologists retrospectively compared all sequences to assess the visibility of normal and abnormal A2 pulleys and the presence of motion or ghost artefacts. RESULTS: Normal and abnormal A2 pulleys were visible in 94% (59/63) and 95% (60/63) of fingers on T1WI sequences, in 63% (40/63) and 60% (38/63) on FS T2WI sequences, and in 87% (55/63) and 73% (46/63) on Gd FS T1WI sequences, as read by the first and second observer, respectively. Motion and ghost artefacts were more frequent on FS T2WI sequences. Seven of eight abnormal A2 pulleys were detected; they were best depicted with Gd FS T1WI sequences in 71% (5/7) and 86% (6/7) of cases by the first and second observer, respectively. CONCLUSION: At 3-T MRI, the comparison between transverse T1WI, FS T2WI, and Gd FS T1WI sequences shows that transverse T1WI allows excellent depiction of the A2 pulley, that FS T2WI suffers from a higher rate of motion and ghost artefacts, and that transverse Gd FS T1WI is the best sequence for depicting an abnormal A2 pulley.
Abstract:
InterPro, an integrated documentation resource of protein families, domains and functional sites, was created in 1999 as a means of amalgamating the major protein signature databases into one comprehensive resource. PROSITE, Pfam, PRINTS, ProDom, SMART and TIGRFAMs have been manually integrated and curated and are available in InterPro for text- and sequence-based searching. The results are provided in a single format that rationalises the results that would be obtained by searching the member databases individually. The latest release of InterPro contains 5629 entries describing 4280 families, 1239 domains, 95 repeats and 15 post-translational modifications. Currently, the combined signatures in InterPro cover more than 74% of all proteins in SWISS-PROT and TrEMBL, an increase of nearly 15% since the inception of InterPro. New features of the database include improved searching capabilities and enhanced graphical user interfaces for visualisation of the data. The database is available via a webserver (http://www.ebi.ac.uk/interpro) and anonymous FTP (ftp://ftp.ebi.ac.uk/pub/databases/interpro).
Abstract:
INTRODUCTION: Gamma Knife surgery (GKS) is a non-invasive stereotactic neurosurgical procedure, increasingly used as an alternative to open functional procedures. This includes targeting of the ventro-intermediate nucleus of the thalamus (Vim) for tremor. We currently perform indirect targeting, as the Vim is not visible on current 3-T MRI acquisitions. Our objective was to enhance anatomical imaging (aiming at refining the precision of anatomical target selection by direct visualisation) in patients treated for tremor with Vim GKS, by using high-field 7-T MRI. MATERIALS AND METHODS: Five young healthy subjects were scanned at 3 T (T1-weighted and diffusion tensor imaging) and 7 T (high-resolution susceptibility-weighted imaging, SWI) in Lausanne. All images were then integrated for the first time into the Gamma Plan software (Elekta Instruments AB, Sweden) and co-registered (with T1 as the reference). Targeting of the Vim was simulated on the 3-T images using various methods, and the position of the resulting target was correlated with the 7-T SWI. The atlas of Morel et al. (Zurich, CH) was used to confirm the findings in a detailed analysis inside and outside the Gamma Plan. RESULTS: SWI provided superior resolution and improved image contrast within the basal ganglia. This allowed in vivo visualisation and direct delineation of some subgroups of thalamic nuclei, including the Vim. The target position, as assessed at 3 T, matched the presumed position of the Vim on the SWI very closely. Furthermore, a three-dimensional model of the Vim target area was created on the basis of the obtained images. CONCLUSION: This is the first report of the integration of high-field SWI MRI into the LGP, aiming at improved validation of Vim targeting in tremor. The anatomical correlation between direct visualisation at 7 T and the current targeting methods at 3 T (e.g. the quadrilatère of Guyot, histological atlases) shows very good anatomical matching. Further studies are needed to validate this technique, both to improve the accuracy of targeting the Vim (and potentially other thalamic nuclei) and to perform clinical assessment.
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases where the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are typically generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the incorporation of real-space constraints such as geomorphology, networks and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Finally, the results of environmental mapping are discussed and compared with traditional geostatistical models.
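A minimal sketch of the regression/mapping task with an SVM, using scikit-learn on synthetic stand-in data (not the report's case studies; all names and parameters are illustrative):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for geo-spatial measurements: inputs are coordinates
# plus one extra geo-feature (e.g. elevation), target a noisy nonlinear field.
X = rng.uniform(0, 10, size=(300, 3))  # x, y, elevation
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.1 * X[:, 2] + rng.normal(0, 0.05, 300)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)

# Predict on a regular grid (fixed elevation) for mapping/visualisation.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel(), np.full(2500, 5.0)])
print(model.predict(grid).reshape(50, 50).shape)  # (50, 50) map of estimates
```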
Abstract:
ABSTRACT: This dissertation investigates the nature of space-time as described by the theory of general relativity. It mainly argues that space-time can be naturally interpreted as a physical structure in the precise sense of a network of concrete space-time relations among concrete space-time points that possess no intrinsic properties and no intrinsic identity. Such an interpretation is fundamentally based on two related key features of general relativity, namely substantive general covariance and background independence, where substantive general covariance is understood as a gauge-theoretic invariance under active diffeomorphisms, and background independence in the sense that the metric (or gravitational) field is dynamical and, strictly speaking, cannot be uniquely split into a purely gravitational part and a fixed, purely inertial part or background. More broadly, a precise notion of (physical) structure is developed within the framework of a moderate version of structural realism, understood as a metaphysical claim about what there is in the world. The development of this moderate structural realism pursues two main aims. The first is purely metaphysical: to develop a coherent metaphysics of structures and of objects (particular attention is paid to the questions of the identity and individuality of the latter within this structural realist framework). The second is to argue that moderate structural realism provides a convincing interpretation of the world as described by fundamental physics, and in particular of space-time as described by general relativity. This structuralist interpretation of space-time is discussed within the traditional substantivalist-relationalist debate, which is best understood within the broader framework of the question of the relationship between space-time on the one hand and matter on the other. In particular, it is claimed that space-time structuralism does not constitute a 'tertium quid' in the traditional debate. Some new light on the question of the nature of space-time may be shed by the fundamental foundational issue of space-time singularities. Their possibly 'non-local' (or global) character is discussed in some detail, and it is argued that a broad structuralist conception of space-time may provide a physically meaningful understanding of space-time singularities that is not plagued by the conceptual difficulties of the usual atomistic framework. Indeed, part of these difficulties may come from the standard differential-geometric description of space-time, which encodes this atomistic framework to some extent; this raises the question of the importance of the mathematical formalism for the interpretation of space-time.