8 results for SYMMETRIC SPACE-TIMES
in Aston University Research Archive
Abstract:
This thesis is concerned with exact solutions of Einstein's field equations of general relativity, in particular, when the source of the gravitational field is a perfect fluid with a purely electric Weyl tensor. General relativity, cosmology and computer algebra are discussed briefly. A mathematical introduction to Riemannian geometry and the tetrad formalism is then given. This is followed by a review of some previous results and known solutions concerning purely electric perfect fluids. In addition, some orthonormal and null tetrad equations of the Ricci and Bianchi identities are displayed in a form suitable for investigating these space-times. Conformally flat perfect fluids are characterised by the vanishing of the Weyl tensor and form a sub-class of the purely electric fields in which all solutions are known (Stephani 1967). The number of Killing vectors in these space-times is investigated and results presented for the non-expanding space-times. The existence of stationary fields that may also admit 0, 1 or 3 spacelike Killing vectors is demonstrated. Shear-free fluids in the class under consideration are shown to be either non-expanding or irrotational (Collins 1984) using both orthonormal and null tetrads. A discrepancy between Collins (1984) and Wolf (1986) is resolved by explicitly solving the field equations to prove that the only purely electric, shear-free, geodesic but rotating perfect fluid is the Gödel (1949) solution. The irrotational fluids with shear are then studied and solutions due to Szafron (1977) and Allnutt (1982) are characterised. The metric is simplified in several cases where new solutions may be found. The geodesic space-times in this class and all Bianchi type I perfect fluid metrics are shown to have a metric expressible in a diagonal form. The position of spherically symmetric and Bianchi type I space-times in relation to the general case is also illustrated.
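For reference, the purely electric condition delimiting the class of space-times studied here can be stated in a few lines (a standard definition; sign and normalisation conventions vary between authors):

```latex
% Electric and magnetic parts of the Weyl tensor C_{abcd} relative to
% the fluid 4-velocity u^a (normalisation conventions vary):
E_{ac} = C_{abcd}\,u^{b}u^{d}, \qquad
H_{ac} = \tfrac{1}{2}\,\varepsilon_{ab}{}^{ef}\,C_{efcd}\,u^{b}u^{d}.
% "Purely electric" means H_{ab} = 0; conformal flatness is the stronger
% condition C_{abcd} = 0, which is why the conformally flat perfect
% fluids form a sub-class of the purely electric ones.
```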
Abstract:
Doubt is cast on the much-quoted results of Yakupov that the torsion vector in embedding class two vacuum space-times is necessarily a gradient vector and that class 2 vacua of Petrov type III do not exist. The first result is equivalent to the fact that the two second fundamental forms associated with the embedding necessarily commute and has been assumed in most later investigations of class 2 vacuum space-times. Yakupov stated the result without proof, but hinted that it followed purely algebraically from his identity: $R_{ijkl}C^{kl} = 0$, where $C_{ij}$ is the commutator of the two second fundamental forms of the embedding. From Yakupov's identity, it is shown that the only class two vacua with non-zero commutator $C_{ij}$ must necessarily be of Petrov type III or N. Several examples are presented of non-commuting second fundamental forms that satisfy Yakupov's identity and the vacuum condition following from the Gauss equation; both Petrov type N and type III examples occur. Thus it appears unlikely that his results could follow purely algebraically. The results obtained so far do not constitute definite counter-examples to Yakupov's results, as the non-commuting examples could turn out to be incompatible with the Codazzi and Ricci embedding equations. This question is currently being investigated.
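To make the algebraic setting concrete, the quantities involved can be stated schematically (standard embedding-class-two formalism; the signs ε₁, ε₂ = ±1 depend on the character of the two extra dimensions, and conventions vary):

```latex
% Gauss equation for a class-two embedding of a vacuum space-time,
% with second fundamental forms b_{ij} and c_{ij}:
R_{ijkl} = \varepsilon_1\,(b_{ik}b_{jl} - b_{il}b_{jk})
         + \varepsilon_2\,(c_{ik}c_{jl} - c_{il}c_{jk}),
% the commutator of the two forms, and Yakupov's identity:
C_{ij} = b_{i}{}^{k}c_{kj} - c_{i}{}^{k}b_{kj}, \qquad
R_{ijkl}\,C^{kl} = 0 .
```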
Abstract:
To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
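A minimal one-dimensional sketch of the two-stage cascade described above follows; the luminance profile, the scale range, the sign convention and the s**1.5 scale-normalisation are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def halfwave(x):
    """Half-wave rectification: keep one polarity, zero the rest."""
    return np.maximum(x, 0.0)

def edge_response(luminance, scale):
    """Two-stage cascade at one scale: Gaussian 1st-derivative filter,
    rectify, Gaussian 2nd-derivative filter, rectify.  The sign flip is
    chosen here so that an edge yields a positive peak after stage two."""
    stage1 = halfwave(gaussian_filter1d(luminance, scale, order=1))
    stage2 = halfwave(-gaussian_filter1d(stage1, scale, order=2))
    # s**1.5 is an illustrative scale-normalisation making responses
    # comparable across scales; the published model may normalise differently.
    return scale**1.5 * stage2

# Illustrative stimulus: a step edge blurred by a Gaussian of sigma = 20 samples.
profile = gaussian_filter1d((np.arange(1000) > 500).astype(float), 20)

# Edges are found as peaks in the scale-space response map: the peak's
# position estimates the edge location, its scale tracks the edge blur.
scales = np.arange(4, 60, 2, dtype=float)
scale_space = np.stack([edge_response(profile, s) for s in scales])
k, i = np.unravel_index(np.argmax(scale_space), scale_space.shape)
print(f"edge located at sample {i}, peak scale {scales[k]:.0f} samples")
```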
Abstract:
Computer simulated trajectories of bulk water molecules form complex spatiotemporal structures at the picosecond time scale. This intrinsic complexity, which underlies the formation of molecular structures at longer time scales, has been quantified using a measure of statistical complexity. The method estimates the information contained in the molecular trajectory by detecting and quantifying temporal patterns present in the simulated data (velocity time series). Two types of temporal patterns are found. The first, defined by the short-time correlations corresponding to the velocity autocorrelation decay times (≈0.1 ps), remains asymptotically stable for time intervals longer than several tens of nanoseconds. The second is caused by previously unknown longer-time correlations (found at longer than the nanosecond time scales) leading to a value of statistical complexity that slowly increases with time. A direct measure, based on the notion of statistical complexity, is introduced that describes how the trajectory explores the phase space and that is independent of the particular molecular signal used as the observed time series. © 2008 The American Physical Society.
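The paper's estimator is not reproduced here, but the general recipe (coarse-grain the velocity signal into symbols, then quantify the information in its temporal patterns) can be sketched as follows; the AR(1) surrogate signal and the block-entropy proxy are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def symbolise(series, n_bins=4):
    """Coarse-grain a continuous signal into symbols via quantile bins."""
    edges = np.quantile(series, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(series, edges)

def block_entropy(symbols, block_len):
    """Shannon entropy (bits) of the empirical distribution of words
    of length block_len."""
    counts = Counter(tuple(symbols[i:i + block_len])
                     for i in range(len(symbols) - block_len + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# Illustrative stand-in for one velocity component: an AR(1) process whose
# short-time correlation mimics a velocity-autocorrelation decay.
rng = np.random.default_rng(0)
noise = rng.normal(size=100_000)
v = np.empty_like(noise)
v[0] = 0.0
for t in range(1, v.size):
    v[t] = 0.9 * v[t - 1] + noise[t]

s = symbolise(v)
# Conditional block entropies h_L = H(L) - H(L-1) fall as longer temporal
# patterns are captured; the information stored in those patterns is a
# crude proxy for the statistical-complexity measure used in the paper.
H = [block_entropy(s, L) for L in (1, 2, 3, 4)]
for L in (1, 2, 3):
    print(f"h_{L+1} = {H[L] - H[L-1]:.3f} bits/symbol")
```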
Abstract:
A framework that connects computational mechanics and molecular dynamics has been developed and described. As the key parts of the framework, the problem of symbolising the molecular trajectory and the associated interrelation between microscopic phase space variables and macroscopic observables of the molecular system are considered. Following Shalizi and Moore, it is shown that causal states, the constituent parts of the main construct of computational mechanics, the ε-machine, define areas of the phase space that are optimal in the sense of transferring information from the micro-variables to the macro-observables. We have demonstrated that, based on the decay of their Poincaré return times, these areas can be divided into two classes that characterise the separation of the phase space into resonant and chaotic areas. The first class is characterised by predominantly short-time returns, typical of quasi-periodic or periodic trajectories. This class includes a countable number of areas corresponding to resonances. The second class includes trajectories with chaotic behaviour characterised by the exponential decay of return times in accordance with the Poincaré theorem.
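A small sketch of the return-time analysis underlying this classification; the standard-map trajectory and the cell geometry are illustrative assumptions standing in for the molecular phase-space areas:

```python
import numpy as np

def return_times(trajectory, cell_center, radius):
    """Poincaré return times: gaps between successive visits of the
    trajectory to a ball (a phase-space cell) around cell_center."""
    inside = np.linalg.norm(trajectory - cell_center, axis=1) < radius
    visits = np.flatnonzero(inside)
    return np.diff(visits)

# Illustrative chaotic trajectory: the Chirikov standard map on the unit
# torus with a large kick strength, so most of phase space is chaotic.
K, n = 6.0, 200_000
x = np.empty((n, 2))
x[0] = (0.3, 0.6)
for t in range(1, n):
    p = (x[t-1, 1] + K / (2*np.pi) * np.sin(2*np.pi * x[t-1, 0])) % 1.0
    x[t] = ((x[t-1, 0] + p) % 1.0, p)

taus = return_times(x, np.array([0.5, 0.5]), 0.05)
# For a chaotic cell the return-time law has an exponential tail, so the
# standard deviation is close to the mean; resonant (quasi-periodic)
# cells instead show predominantly short, regular returns.
print(f"mean return time {taus.mean():.1f} steps over {taus.size} returns")
print(f"std/mean = {taus.std()/taus.mean():.2f} (≈1 for an exponential law)")
```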
Abstract:
Purpose: It is widely accepted that pupil responses to visual stimuli are determined by the ambient illuminance, and recently it has been shown that changes in stimulus color also contribute to a pupillary control mechanism. However, the role of pupillary responses to chromatic stimuli is not clear. The aim of this study was to investigate how color and luminance signals contribute to the pupillary control mechanism. Methods: We measured pupillary iso-response contours in M- and L-cone contrast space. The iso-response contours in cone-contrast space were determined in order to examine which mechanisms contribute to the pupillary pathway; the shape of the iso-response contour changes when different mechanisms determine the response. Results: For all subjects, the pupillary iso-response contours form an ellipse with positive slope in cone-contrast space, indicating that the sensitivities to chromatic stimuli are higher than those to luminance stimuli. The pupil responds maximally to a grating that has a stronger L-cone modulation than the red-green isoluminant grating. Conclusions: The sensitivity of the chromatic pathway, in terms of pupillary response, is three times that of the luminance pathway, a property that might have utility in clinical applications. Copyright © Taylor & Francis Group, LLC.
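A toy two-mechanism model illustrates why an elliptical iso-response contour with positive slope implies higher chromatic than luminance sensitivity; the gains below are illustrative assumptions, chosen to match the reported factor of about three:

```python
import numpy as np

# Minimal two-mechanism account of the pupillary iso-response contour in
# (L-cone, M-cone) contrast space.
g_chrom, g_lum = 3.0, 1.0   # illustrative mechanism gains

def response(dL, dM):
    """Quadratic combination of a chromatic (L-M) and a luminance (L+M)
    mechanism; iso-response contours of this form are ellipses."""
    return np.hypot(g_chrom * (dL - dM), g_lum * (dL + dM))

# Threshold contrast in a stimulus direction = 1 / response(unit vector).
u = 1 / np.sqrt(2)
t_lum = 1.0 / response(u, u)      # along L = M (luminance direction)
t_chrom = 1.0 / response(u, -u)   # along L = -M (red-green direction)
print(f"threshold along L=M : {t_lum:.3f} cone contrast")
print(f"threshold along L=-M: {t_chrom:.3f} cone contrast")
print(f"axis ratio = {t_lum / t_chrom:.1f} -> the ellipse's long axis "
      "lies along L = M, i.e. it has positive slope")
```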
Abstract:
This work contributes to the development of search engines that self-adapt their size in response to fluctuations in workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. In this paper, we focus on the problem of regrouping the metric-space search index when the number of virtual machines used to run the search engine is modified to reflect changes in workload. We propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. We tested its performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud, while calibrating the results to compensate for the performance fluctuations of the platform. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance.
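The paper's algorithm is not reproduced here, but the general idea of incremental regrouping (as opposed to rebuilding the whole index) can be sketched as follows; in a real metric-space index the splits would follow the cluster structure rather than a plain halving:

```python
def regroup_incremental(groups, new_p):
    """Adjust a partition of index entries from len(groups) machines to
    new_p machines without rebuilding from scratch: split the largest
    groups to scale out, merge the smallest to scale in."""
    groups = sorted((list(g) for g in groups), key=len)
    while len(groups) < new_p:                        # scale out
        big = groups.pop()
        groups += [big[:len(big) // 2], big[len(big) // 2:]]
        groups.sort(key=len)
    while len(groups) > new_p:                        # scale in
        groups = sorted([groups[0] + groups[1]] + groups[2:], key=len)
    return groups

index = [list(range(i * 100, (i + 1) * 100)) for i in range(4)]  # 4 VMs
print([len(g) for g in regroup_incremental(index, 6)])           # grow to 6
print([len(g) for g in regroup_incremental(index, 3)])           # shrink to 3
```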
Abstract:
This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computational resources to or from the engine. Our solution is to contribute an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in the light of changes in the query workload, the ideal number of processors p to keep active in the search engine at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined how the groups of search data are distributed across the processors. ROP is the problem of redistributing this data onto the processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP, we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, when compared with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine a new size for the search engine. When combined, these algorithms yield an adaptive algorithm that adjusts the search engine size under a variable workload.
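As an illustration of the CNP-style decision step, a deterministic sizing rule with a hysteresis band can be sketched as follows; the target utilisation, per-processor capacity and dead band are illustrative assumptions, not values from the thesis:

```python
import math

def choose_processors(current_p, query_rate, capacity_per_proc,
                      target_util=0.7, dead_band=0.15):
    """Deterministic sizing rule (a sketch, not the thesis algorithm):
    keep the current size while estimated utilisation stays inside a
    dead band around target_util, so small workload fluctuations do not
    trigger costly regroupings; otherwise resize to hit the target."""
    util = query_rate / (current_p * capacity_per_proc)
    if abs(util - target_util) <= dead_band:
        return current_p
    return max(1, math.ceil(query_rate / (capacity_per_proc * target_util)))

p = 8
for rate in (500, 900, 1400, 600):        # queries per second, illustrative
    p = choose_processors(p, rate, capacity_per_proc=100)
    print(f"rate={rate:4d} q/s -> p={p}")
```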