22 results for Hamiltonian formalism
at Université de Lausanne, Switzerland
Abstract:
Textual autocorrelation is a broad and pervasive concept, referring to the similarity between nearby textual units: lexical repetitions along consecutive sentences, semantic association between neighbouring lexemes, persistence of discourse types (narrative, descriptive, dialogical...) and so on. Textual autocorrelation can also be negative, as illustrated by alternating phonological or morpho-syntactic categories, or the succession of word lengths. This contribution proposes a general Markov formalism for textual navigation, inspired by spatial statistics. The formalism can express well-known constructs in textual data analysis, such as term-document matrices, reference and hyperlink navigation, (web) information retrieval, and in particular textual autocorrelation, as measured by Moran's I relative to the exchange matrix associated with neighbourhoods of various possible types. Four case studies (word-length alternation, lexical repulsion, part-of-speech autocorrelation, and semantic autocorrelation) illustrate the theory. In particular, one observes a short-range repulsion between nouns together with a short-range attraction between verbs, both at the lexical and semantic levels.
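As a rough illustration of the autocorrelation index mentioned above, Moran's I for a vector of textual values under a given neighbourhood (weight/exchange) matrix can be computed in a few lines. The weight matrix below is a toy four-unit ring neighbourhood of our own, not one of the article's exchange matrices:

```python
import numpy as np

def morans_i(x, w):
    """Moran's I: I = n * z'Wz / (sum(W) * z'z), with z the centred values."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n = len(x)
    return n * (z @ w @ z) / (w.sum() * (z @ z))

# Toy ring neighbourhood: each textual unit is linked to its two neighbours.
w = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

print(morans_i([1, 2, 1, 2], w))   # -1.0: strict alternation, negative autocorrelation
print(morans_i([1, 1, 2, 2], w))   # 0.0 on this ring: no net autocorrelation
```

Strict alternation of values (as with word lengths in the first case study) yields the most negative index on this ring, matching the "repulsion" reading of negative autocorrelation in the abstract.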
Abstract:
PURPOSE: To assess the failure pattern observed after (18)F-fluoroethyltyrosine (FET)-based planning and chemo- and radiotherapy (RT) for high-grade glioma. METHODS: All patients prospectively underwent RT planning using morphological gross tumour volumes (GTVs) and biological tumour volumes (BTVs). The post-treatment recurrence tumour volumes (RTVs) of 10 patients were transferred onto their planning CTs. First, failure patterns were defined in terms of the percentage of RTV located outside the GTV and BTV. Second, the location of the RTV with respect to the delivered dose distribution was assessed using the RTVs' DVHs. Recurrences with >95% of their volume within the 95% isodose line were considered central recurrences. Finally, the relationship between survival and GTV/BTV mismatches was assessed. RESULTS: The median percentages of RTV outside the GTV and BTV were 41.8% (range, 10.5-92.4) and 62.8% (range, 34.2-81.1), respectively. The majority of recurrences (90%) were centrally located. Using a composite target volume planning formalism, the degree of GTV and BTV mismatch did not correlate with survival. CONCLUSIONS: The observed failure pattern after FET-PET planning and chemo-RT is primarily central. The target mismatch-survival data suggest that using FET-PET planning may counteract the possibility of BTV-related progression, which may have a detrimental effect on survival.
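The central-recurrence criterion stated in the methods is simple enough to express directly; a minimal sketch (the function name and input convention are ours, not the study's code):

```python
def recurrence_pattern(frac_within_95_isodose):
    """Classify a recurrence as 'central' when more than 95% of the
    recurrence tumour volume (RTV) lies within the 95% isodose line,
    following the criterion stated in the abstract."""
    return "central" if frac_within_95_isodose > 0.95 else "non-central"

print(recurrence_pattern(0.98))  # central
print(recurrence_pattern(0.60))  # non-central
```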
Abstract:
ABSTRACT: q-Space-based techniques such as diffusion spectrum imaging, q-ball imaging, and their variations have been used extensively in research for their desired capability to delineate complex neuronal architectures, such as multiple fiber crossings within a single image voxel. The purpose of this article was to provide an introduction to the q-space formalism and the principles of basic q-space techniques, together with a discussion of the advantages and challenges of translating these techniques into the clinical environment. A review of the q-space-based protocols currently used in clinical research is also provided.
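For readers new to the formalism, the q-value itself is fixed by the diffusion-encoding gradient via the standard definition q = γδG/(2π), with γ the gyromagnetic ratio, δ the gradient pulse duration, and G the gradient amplitude. A minimal sketch with hypothetical scanner settings (this is textbook q-space, not a protocol from the article):

```python
import math

GAMMA_PROTON = 2.675e8  # gyromagnetic ratio of 1H, in rad s^-1 T^-1

def q_value(delta_s, g_tesla_per_m):
    """q = gamma * delta * G / (2*pi), in cycles per metre."""
    return GAMMA_PROTON * delta_s * g_tesla_per_m / (2 * math.pi)

# Hypothetical settings: 20 ms gradient pulse at 40 mT/m
q = q_value(0.020, 0.040)
print(f"q = {q:.0f} m^-1")  # on the order of 3.4e4 m^-1
```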
Abstract:
Whole-body (WB) planar imaging has long been one of the staple methods of dosimetry, and its quantification has been formalized by the MIRD Committee in pamphlet no. 16. One issue not specifically addressed in the formalism occurs when the count rates reaching the detector are sufficiently high to result in camera count saturation. Camera dead-time effects have been extensively studied, but all of the developed correction methods assume static acquisitions. However, during WB planar (sweep) imaging, a variable amount of imaged activity exists in the detector's field of view as a function of time, and therefore the camera saturation is time dependent. A new time-dependent algorithm was developed to correct for dead-time effects during WB planar acquisitions that accounts for relative motion between the detector heads and the imaged object. Static camera dead-time parameters were acquired by imaging decaying activity in a phantom and obtaining a saturation curve. Using these parameters, an iterative algorithm akin to Newton's method was developed, which takes into account the variable count rate seen by the detector as a function of time. The algorithm was tested on simulated data as well as on a whole-body scan of high-activity Samarium-153 in an ellipsoid phantom. A complete set of parameters from unsaturated phantom data necessary for count-rate-to-activity conversion was also obtained, including build-up and attenuation coefficients, in order to convert corrected count-rate values to activity. The algorithm proved successful in accounting for motion- and time-dependent saturation effects in both the simulated and measured data and converged to any desired degree of precision. The clearance half-life calculated from the ellipsoid phantom data was 45.1 h after dead-time correction and 51.4 h with no correction; the physical decay half-life of Samarium-153 is 46.3 h.
Accurate WB planar dosimetry of high activities relies on successfully compensating for camera saturation in a way that takes into account the variable activity in the field of view, i.e. time-dependent dead-time effects. The algorithm presented here accomplishes this task.
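The static building block of such a correction can be sketched as follows: inverting the standard paralyzable dead-time model m = n·exp(−nτ) by Newton's method, a simplified single-time-point version of the time-dependent algorithm described above (the dead-time value and count rates are hypothetical, and the paralyzable model is an assumption on our part):

```python
import math

def true_rate(observed, tau, tol=1e-9, max_iter=100):
    """Invert the paralyzable dead-time model m = n * exp(-n * tau) for the
    true count rate n, given the observed rate m, by Newton's method.
    Static sketch only; the article's algorithm lets the rate vary in time."""
    n = observed  # starting guess: the observed rate itself
    for _ in range(max_iter):
        f = n * math.exp(-n * tau) - observed
        fp = (1 - n * tau) * math.exp(-n * tau)  # d/dn of the model
        step = f / fp
        n -= step
        if abs(step) < tol:
            break
    return n

tau = 5e-6                            # hypothetical dead time, seconds
n_true = 30000.0                      # true counts per second
m = n_true * math.exp(-n_true * tau)  # what a saturating camera reports
print(round(true_rate(m, tau)))       # 30000: the true rate is recovered
```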
Abstract:
ABSTRACT: This dissertation investigates the nature of space-time as described by the theory of general relativity. It mainly argues that space-time can be naturally interpreted as a physical structure in the precise sense of a network of concrete space-time relations among concrete space-time points that do not possess any intrinsic properties or any intrinsic identity. Such an interpretation is fundamentally based on two related key features of general relativity, namely substantive general covariance and background independence, where substantive general covariance is understood as a gauge-theoretic invariance under active diffeomorphisms and background independence is understood in the sense that the metric (or gravitational) field is dynamical and that, strictly speaking, it cannot be uniquely split into a purely gravitational part and a fixed purely inertial part or background. More broadly, a precise notion of (physical) structure is developed within the framework of a moderate version of structural realism understood as a metaphysical claim about what there is in the world. The development of this moderate structural realism pursues two main aims. The first is purely metaphysical: to develop a coherent metaphysics of structures and of objects (particular attention is paid to the questions of identity and individuality of the latter within this structural realist framework). The second is to argue that moderate structural realism provides a convincing interpretation of the world as described by fundamental physics, and in particular of space-time as described by general relativity. This structuralist interpretation of space-time is discussed within the traditional substantivalist-relationalist debate, which is best understood within the broader framework of the question about the relationship between space-time on the one hand and matter on the other.
In particular, it is claimed that space-time structuralism does not constitute a 'tertium quid' in the traditional debate. Some new light on the question of the nature of space-time may be shed by the fundamental foundational issue of space-time singularities. Their possible 'non-local' (or global) character is discussed in some detail, and it is argued that a broad structuralist conception of space-time may provide a physically meaningful understanding of space-time singularities, one that is not plagued by the conceptual difficulties of the usual atomistic framework. Indeed, part of these difficulties may come from the standard differential-geometric description of space-time, which encodes to some extent this atomistic framework; this raises the question of the importance of the mathematical formalism for the interpretation of space-time.
Parts, places, and perspectives: a theory of spatial relations based on mereotopology and convexity
Abstract:
This thesis proposes to carry on the philosophical work begun in Casati's and Varzi's seminal book Parts and Places, by extending their general reflections on the basic formal structure of spatial representation beyond mereotopology and absolute location to the question of perspectives and perspective-dependent spatial relations. We show how, on the basis of a conceptual analysis of such notions as perspective and direction, a mereotopological theory with convexity can express perspectival spatial relations in a strictly qualitative framework. We start by introducing a particular mereotopological theory, AKGEMT, and argue that it constitutes an adequate core for a theory of spatial relations. Two features of AKGEMT are of particular importance: AKGEMT is an extensional mereotopology, implying that sameness of proper parts is a necessary and sufficient condition for identity, and it allows for (lower-dimensional) boundary elements in its domain of quantification. We then discuss an extension of AKGEMT, AKGEMTS, which results from the addition of a binary segment operator whose interpretation is that of a straight line segment between mereotopological points. Based on existing axiom systems in standard point-set topology, we propose an axiomatic characterisation of the segment operator and show that it is strong enough to sustain complex properties of a convexity predicate and a convex hull operator. We compare our segment-based characterisation of the convex hull to Cohn et al.'s axioms for the convex hull operator, arguing that our notion of convexity is significantly stronger. The discussion of AKGEMTS defines the background theory of spatial representation on which the developments in the second part of this thesis are built.
The second part deals with perspectival spatial relations in two-dimensional space, i.e., such relations as those expressed by 'in front of', 'behind', 'to the left/right of', etc., and develops a qualitative formalism for perspectival relations within the framework of AKGEMTS. Two main claims are defended in part 2: that perspectival relations in two-dimensional space are four-place relations of the kind R(x, y, z, w), to be read as x is R-related to y as z looks at w; and that these four-place structures can be satisfactorily expressed within the qualitative theory AKGEMTS. To defend these two claims, we start by arguing for a unified account of perspectival relations, thus rejecting the traditional distinction between 'relative' and 'intrinsic' perspectival relations. We present a formal theory of perspectival relations in the framework of AKGEMTS, deploying the idea that perspectival relations in two-dimensional space are four-place relations having a locational and a perspectival part, and show how this four-place structure leads to a unified framework of perspectival relations. Finally, we present a philosophical motivation for the idea that perspectival relations are four-place, cashing out the thesis that perspectives are vectorial properties and arguing that vectorial properties are relations between spatial entities. Using Fine's notion of "qua objects" for an analysis of points of view, we show at last how our four-place approach to perspectival relations compares to more traditional understandings.
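As an informal illustration of the four-place reading R(x, y, z, w), and emphatically not the qualitative AKGEMTS formalism (which avoids coordinates), here is a coordinate-based sketch of "x is to the left of y as z looks at w" in two-dimensional space, using the sign of a cross product:

```python
# All names and the coordinate encoding are our own illustrative choices.

def cross(u, v):
    """2-D cross product: positive when v points to the left of u."""
    return u[0] * v[1] - u[1] * v[0]

def left_of(x, y, z, w):
    """Four-place relation: x is to the left of y as z looks at w.
    The perspectival part is the gaze direction w - z; the locational
    part is the offset of x from y."""
    gaze = (w[0] - z[0], w[1] - z[1])
    offset = (x[0] - y[0], x[1] - y[1])
    return cross(gaze, offset) > 0

# Observer at the origin looking along the positive x-axis:
print(left_of((0, 1), (0, 0), (0, 0), (1, 0)))   # True: above the gaze axis
print(left_of((0, -1), (0, 0), (0, 0), (1, 0)))  # False: below it
```

Note how the same two arguments z, w serve every instance of the relation: this is the "unified account" in which relative and intrinsic uses differ only in which objects supply the perspectival part.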
Abstract:
Natural populations are of finite size, and organisms carry multilocus genotypes. There are, nevertheless, few results on multilocus models when both random genetic drift and natural selection affect the evolutionary dynamics. In this paper we describe a formalism to calculate systematic perturbation expansions of moments of allelic states around neutrality in populations of constant size. This allows us to evaluate multilocus fixation probabilities (long-term limits of the moments) under arbitrary strength of selection and gene action. We show that such fixation probabilities can be expressed in terms of selection coefficients weighted by mean first-passage times of ancestral gene lineages within a single ancestor. These passage times extend the coalescence times that weight selection coefficients in one-locus perturbation formulas for fixation probabilities. We then apply these results to investigate the Hill-Robertson effect and the coevolution of helping and punishment. Finally, we discuss limitations and strengths of the perturbation approach. In particular, it provides accurate approximations for fixation probabilities only in weak-selection regimes (Ns ≤ 1), but it generally provides good predictions for the direction of selection under frequency-dependent selection.
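The weak-selection caveat can be illustrated with the classic one-locus diffusion result (Kimura's fixation probability, standard population genetics rather than the paper's multilocus formalism): the first-order perturbation around neutrality tracks the exact expression for Ns of order 1 or less, but degrades under strong selection.

```python
import math

def fixation_exact(N, s, p):
    """Kimura's diffusion approximation for the fixation probability of an
    allele at frequency p under genic selection s in a diploid population."""
    a = 4 * N * s
    return (1 - math.exp(-a * p)) / (1 - math.exp(-a))

def fixation_first_order(N, s, p):
    """First-order perturbation around neutrality: u ~ p + 2*N*s*p*(1 - p)."""
    return p + 2 * N * s * p * (1 - p)

N, p = 1000, 1 / 2000  # a single new mutant in a diploid population of N = 1000

for s in (1e-4, 5e-3):  # Ns = 0.1 (weak) versus Ns = 5 (strong)
    exact = fixation_exact(N, s, p)
    approx = fixation_first_order(N, s, p)
    print(f"Ns = {N * s:g}: exact = {exact:.3e}, first-order = {approx:.3e}")
```

At Ns = 0.1 the two values agree to about one percent; at Ns = 5 the first-order value is off by roughly a factor of two, mirroring the accuracy regime stated in the abstract.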
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the sizes of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise to the weighted case the concept of spatial filtering for connectivity matrices. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
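A sketch of the construction of such spatial modes (with a toy exchange matrix of our own, not the article's data): take the eigenvectors of the standardised exchange matrix E_ij / sqrt(f_i f_j), where f are the margins of the symmetric, unit-sum exchange matrix E. The leading eigenvector, sqrt(f), carries the trivial constant field with eigenvalue 1; the remaining orthonormal modes are the candidates for exchangeable permutation.

```python
import numpy as np

# Hypothetical 3-region exchange matrix: symmetric, non-negative, sums to 1.
E = np.array([[0.10, 0.15, 0.00],
              [0.15, 0.20, 0.10],
              [0.00, 0.10, 0.20]])
f = E.sum(axis=1)  # regional weights (margins)

standardised = E / np.sqrt(np.outer(f, f))
eigvals, U = np.linalg.eigh(standardised)  # ascending eigenvalues

# Columns of U are orthonormal spatial modes; the trivial mode sqrt(f)
# corresponds to the largest eigenvalue, which equals 1 since the
# associated row-normalised matrix is a reversible Markov transition matrix.
print(np.allclose(U.T @ U, np.eye(3)))  # True: modes are orthonormal
print(np.isclose(eigvals[-1], 1.0))     # True: trivial eigenvalue is 1
```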
Abstract:
BACKGROUND: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. RESULTS: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. CONCLUSIONS: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
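As a reminder of what such qualitative models compute, here is a minimal example of the logical (Boolean) formalism that SBML qual is designed to serialise: a hypothetical two-gene toggle switch under synchronous updating, not a model from the paper. Components take discrete levels {0, 1}, and dynamics are given by logical rules rather than rate equations.

```python
def step(state):
    """Synchronous logical update for a hypothetical toggle switch:
    each gene is ON iff the gene repressing it is OFF."""
    a, b = state
    return (0 if b else 1, 0 if a else 1)

state = (0, 0)
for _ in range(4):
    state = step(state)
    print(state)  # oscillates (1, 1) -> (0, 0) under synchronous update

# The two states with exactly one gene ON are the stable states
# (the "toggle" positions): step((1, 0)) == (1, 0).
```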
Abstract:
A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become increasingly complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are, therefore, of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), is not satisfactory. The use of a dynamic modeling approach in order to allow multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized on an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an OH&S perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected in the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework.
The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. But, most of all, it opens perspectives in the field of risk comparisons and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
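A bare place/transition sketch conveys the underlying Petri-net mechanics (token-based enabling and firing); the CO-OPN formalism used in the article layers object-orientation and data types on top of this. The net below, a machine cycling between idle and busy, is hypothetical:

```python
# Each transition maps to a pair (consumed tokens, produced tokens) per place.
transitions = {
    "start": ({"idle": 1, "part": 1}, {"busy": 1}),
    "finish": ({"busy": 1}, {"idle": 1, "done": 1}),
}

def enabled(marking, name):
    """A transition is enabled when every input place holds enough tokens."""
    consumes, _ = transitions[name]
    return all(marking.get(p, 0) >= n for p, n in consumes.items())

def fire(marking, name):
    """Fire an enabled transition: consume input tokens, produce output tokens."""
    assert enabled(marking, name), f"{name} is not enabled"
    consumes, produces = transitions[name]
    marking = dict(marking)
    for p, n in consumes.items():
        marking[p] -= n
    for p, n in produces.items():
        marking[p] = marking.get(p, 0) + n
    return marking

m = {"idle": 1, "part": 2}
m = fire(m, "start")    # machine picks up a part
m = fire(m, "finish")   # machine completes it
print(m)                # one part processed, machine idle again
```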
Abstract:
We investigate the selective pressures on a social trait when evolution occurs in a population of constant size. We show that any social trait that is spiteful simultaneously qualifies as altruistic. In other words, any trait that reduces the fitness of less related individuals necessarily increases that of related ones. Our analysis demonstrates that the distinction between "Hamiltonian spite" and "Wilsonian spite" is not justified on the basis of fitness effects. We illustrate this general result with an explicit model for the evolution of a social act that reduces the recipient's survival ("harming trait"). This model shows that the evolution of harming is favoured if local demes are of small size and migration is low (philopatry). Further, deme size and migration rate determine whether harming evolves as a selfish strategy by increasing the fitness of the actor, or as a spiteful/altruistic strategy through its positive effect on the fitness of close kin.
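The zero-sum bookkeeping behind this result can be sketched with standard inclusive-fitness accounting (a textbook-style illustration of ours, not the paper's island model): in a population of constant size, fitness lost by the actor and the harmed recipient is redistributed to the rest of the population, so harming less-related individuals simultaneously benefits more-related ones.

```python
def inclusive_fitness_effect(c, d, r_recipient, r_others):
    """Net inclusive-fitness effect of a harming act in a constant-size
    population: the actor pays cost c, the harmed recipient (relatedness
    r_recipient) loses d, and the lost fitness (c + d) accrues to the rest
    of the population, whose average relatedness to the actor is r_others.
    Harming is favoured when the effect is positive."""
    return -c - r_recipient * d + r_others * (c + d)

# Harming a non-relative where the remaining neighbours are kin
# (philopatry; all numbers hypothetical):
print(inclusive_fitness_effect(c=0.1, d=0.5, r_recipient=0.0, r_others=0.25) > 0)

# Harming a close relative in an unrelated population is disfavoured:
print(inclusive_fitness_effect(c=0.1, d=0.5, r_recipient=0.5, r_others=0.0) > 0)
```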