931 results for Space – time blocks coding
Abstract:
This thesis is concerned with the interaction between literature and abstract thought. More specifically, it studies the epistemological charge of the literary, the type of knowledge that is carried by elements proper to fictional narratives into different disciplines. By concentrating on two different theoretical methods, the creation of thought experiments and the framing of possible worlds, methods which were elaborated and are still used today in spheres as varied as modal logics, analytic philosophy and physics, and by following their reinsertion within literary theory, the research develops the theory that both thought experiments and possible worlds are in fact short narrative stories that inform knowledge through literary means. By using two novels, Abbott’s Flatland and Vonnegut’s The Sirens of Titan, that describe extra-dimensional existence in radically different ways, respectively as a phenomenologically unknowable space and as an outward perspective on time, it becomes clear that literature is constitutive of the way in which worlds, fictive, real or otherwise, are constructed and understood. Thus dimensions, established through extensional analogies as either experimental knowledge or modal possibility for a given world, generate new directions for thought, which can then take part in the inductive/deductive process of scientia. By contrasting the dimensions of narrative with the way that dimensions were historically constituted, the research also establishes that the literary opens up an infinite potential of abstract space-time domains, defined by their specific rules and limits, and that these different experimental folds are themselves partaking in a dimensional process responsible for new forms of understanding. Over against science fiction literary theories of speculation that posit an equation between the fictive and the real, this thesis examines the complex structure of many overlapping possibilities that can organise themselves around larger compossible wholes, thus offering a theory of reading that is both non-mimetic and non-causal. It consequently examines a dynamic process whereby literature is always reconceived through possibilities actualised by reading, while never defining how the reader will ultimately understand the overarching structure. In this context, the thesis argues that a causal story can be construed out of any one interaction with a given narrative, underscoring, for example, the divinatory strength of a particular vision of the future, even as this narrative represents only a fraction of the potential knowledge of any particular literary text. Ultimately, the study concludes by tracing out how novel comprehensions of the literary, framed by the material conditions of their own space and time, endlessly renew themselves through multiple interactions, generating analogies and speculations that facilitate the creation of new knowledge.
Abstract:
This thesis reports results obtained from a detailed analysis of the fluctuations of the rheological parameters, viz. the shear and normal stresses, of a macroscopically homogeneous sheared suspension of neutrally buoyant, non-Brownian identical spheres in the Couette gap between two parallel walls, in the limit of vanishingly small Reynolds numbers, simulated by means of the Stokesian Dynamics method. The fluctuations are analysed with the tools of nonlinear dynamics and chaos theory, viz. average mutual information, space-time separation plots, visual recurrence analysis, principal component analysis, the false nearest-neighbour technique, correlation integrals and the computation of Lyapunov exponents, for a range of area fractions of particles and for different Couette gaps. It is observed that one stress component can be predicted using another stress component at the same area fraction, which implies a type of synchronization between stress components. This finding motivates further analysis of the synchronization of stress components at the same or at different area fractions of particles. The different model equations of the stress components for different area fractions of particles hint at the possible existence of a general formula for stress fluctuations with the area fraction of particles as a parameter.
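As an illustration of the kind of chaos-theory diagnostics listed above, the sketch below (Python, not the thesis code) builds a time-delay embedding and evaluates the Grassberger-Procaccia correlation sum; the input signal, embedding dimension and delay are placeholder assumptions, standing in for the simulated stress time series.

```python
# Minimal sketch (illustrative only): delay embedding plus the correlation
# sum C(r), one of the nonlinear-dynamics tools named in the abstract.
import numpy as np

def delay_embed(x: np.ndarray, dim: int, tau: int) -> np.ndarray:
    """Embed a scalar series into `dim`-dimensional delay vectors with lag `tau`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def correlation_sum(points: np.ndarray, r: float) -> float:
    """Fraction of distinct point pairs closer than radius r (correlation integral C(r))."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.mean(d[iu] < r)

# Placeholder "stress fluctuation" signal; in the thesis this would be the
# simulated shear or normal stress time series.
t = np.linspace(0, 100, 1000)
signal = np.sin(t) + 0.5 * np.sin(2.3 * t)

emb = delay_embed(signal, dim=3, tau=10)
for r in (0.05, 0.1, 0.2, 0.4):
    print(f"C({r}) = {correlation_sum(emb, r):.4f}")
# The slope of log C(r) against log r over the scaling range estimates the
# correlation dimension of the underlying attractor.
```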
Abstract:
In this thesis we study possible invariants in hydrodynamics and hydromagnetics. The concepts of flux preservation and line preservation of vector fields, especially vorticity fields, have been studied since the very beginning of fluid mechanics by Helmholtz and others. In ideal magnetohydrodynamic flows the magnetic field satisfies the same conservation laws as the vorticity field in ideal hydrodynamic flows. Apart from these, there are many other fields in ideal hydrodynamic and magnetohydrodynamic flows which preserve flux across a surface or whose vector lines are preserved. A general study using this analogy had not been made for a long time. Moreover, there are other physical quantities which are also invariant under the flow, such as the Ertel invariant. Using the calculus of differential forms, Tur and Yanovsky classified the possible invariants in hydrodynamics. This mathematical abstraction of physical quantities to topological objects is needed for an elegant and complete analysis of invariants. Many authors have used a four-dimensional space-time manifold for analysing fluid flows. We have also used such a space-time manifold to obtain invariants of the usual three-dimensional flows. In chapter one we discuss the invariants related to the vorticity field using the vorticity two-form w2 in E4. Corresponding to the invariance of the four-form w2 ∧ w2 we obtain the invariance of the quantity E·w, and we show that in an isentropic flow this quantity is an invariant over an arbitrary volume. In chapter three we extend this method to any divergence-free frozen-in field. In a four-dimensional space-time manifold we define a closed differential two-form and its potential one-form corresponding to such a frozen-in field. Using this potential one-form w1, it is possible to define the forms dw1, w1 ∧ dw1 and dw1 ∧ dw1. Corresponding to the invariance of the four-form we obtain an additional invariant of the usual hydrodynamic flows, which cannot be obtained by considering three-dimensional space alone. In chapter four we classify the possible integral invariants associated with physical quantities which can be expressed using a one-form or a two-form in a three-dimensional flow. After deriving some general results which hold for a manifold of arbitrary dimension, we illustrate them in the context of flows in three-dimensional Euclidean space R3. If the Lie derivative of a differential p-form w does not vanish, then the surface integral of w over all p-surfaces need not be a constant of the flow. Even then, there exist some special p-surfaces over which the integral is a constant of the motion, provided the Lie derivative of w satisfies certain conditions. Such surfaces can be utilised for investigating the qualitative properties of a flow in the absence of invariance over all p-surfaces. We have also discussed the conditions for line preservation and surface preservation of vector fields, and we see that surface preservation need not imply line preservation. Examples are given which illustrate the above results. The study in this thesis is a continuation of the one started by Vedan et al. As mentioned earlier, they used a four-dimensional space-time manifold to obtain invariants of flow from a variational formulation and an application of Noether's theorem, from the point of view of hydrodynamic stability studies using Arnold's method. The use of a four-dimensional manifold has great significance in the study of knots and links.
In the context of hydrodynamics, helicity is a measure of the knottedness of vortex lines. We are interested in the use of differential forms in E4 in the study of vortex knots and links. The knowledge of surface invariants given in chapter 4 may also be utilised for the analysis of vortex and magnetic reconnections.
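For orientation, the flux-transport condition underlying these invariants can be stated in conventional three-dimensional notation (a standard result, written here with generic symbols; the thesis itself works with the corresponding differential forms in E4). For a divergence-free field B advected by a velocity field v,

\[
\frac{d}{dt}\int_{S(t)} \mathbf{B}\cdot d\mathbf{S}
= \int_{S(t)} \left( \frac{\partial \mathbf{B}}{\partial t} - \nabla\times(\mathbf{v}\times\mathbf{B}) \right)\cdot d\mathbf{S},
\]

so the flux through every material surface S(t) is preserved exactly when \(\partial_t \mathbf{B} = \nabla\times(\mathbf{v}\times\mathbf{B})\); for \(\mathbf{B}=\boldsymbol{\omega}\) this is Helmholtz's equation for the vorticity of an ideal flow, and in the language of forms the same condition reads \(\partial_t \omega + \mathcal{L}_v\,\omega = 0\) for the associated two-form.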
Abstract:
One of the interesting consequences of Einstein's General Theory of Relativity is the existence of black hole solutions. Until the observation made by Hawking in the 1970s, it was believed that black holes are perfectly black. The General Theory of Relativity says that black holes are objects which absorb both matter and radiation crossing the event horizon. The event horizon is a surface through which even light is not able to escape; it acts as a one-sided membrane that allows the passage of particles in only one direction, towards the centre of the black hole. All the particles absorbed by a black hole increase its mass, and thus the size of the event horizon also increases. Hawking showed in the 1970s that, when quantum mechanical laws are applied to black holes, they are not perfectly black but can emit radiation. Thus a black hole can have a temperature, known as the Hawking temperature. In this thesis we have studied some aspects of black holes in the f(R) theory of gravity and in Einstein's General Theory of Relativity. The scattering of a scalar field in this background space-time, studied in the first chapter, shows that the extended black hole scatters scalar waves and has a scattering cross section; applying the tunneling mechanism, we have obtained the Hawking temperature of this black hole. In the following chapter we have investigated the quasinormal properties of the extended black hole. We have studied the electromagnetic and scalar perturbations in this space-time and find that the black hole frequencies are complex and show exponential damping, indicating that the black hole is stable against these perturbations. In the present study we show that black holes not only exist in modified gravities but also have properties similar to those of black hole space-times in the General Theory of Relativity. 2+1 dimensional, or three-dimensional, black holes are simplified examples of the more complicated four-dimensional black holes, and these models are therefore known as toy models for the four-dimensional black holes of the General Theory of Relativity. We have studied some properties of these black holes in the Einstein model (General Theory of Relativity). A three-dimensional black hole known as the MSW black hole is taken for our study. The thermodynamics and spectroscopy of the MSW black hole are studied: the area spectrum is obtained and found to be equally spaced, and different thermodynamic properties are derived. The Dirac perturbation of this three-dimensional black hole is studied and the resulting quasinormal spectrum is obtained. The different quasinormal frequencies are tabulated, and these values show an exponential damping of oscillations, indicating that the black hole is stable against the massless Dirac perturbation. In the General Theory of Relativity almost all solutions contain singularities; the cosmological solution and the different black hole solutions of Einstein's field equations contain singularities. Regular black hole solutions are those which are solutions of Einstein's equations and have no singularity at the origin: they possess an event horizon but no central singularity. Such a solution was first put forward by Bardeen, and Hayward proposed a similar regular black hole solution. We have studied the thermodynamics and spectroscopy of Hayward regular black holes, and have obtained the different thermodynamic properties and the area spectrum, which is a function of the horizon radius.
The entropy-heat capacity curve has a discontinuity at some value of the entropy, indicating a phase transition.
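For background, the standard semiclassical relation behind the Hawking temperatures computed in the thesis (the general result, not the specific expressions derived there for the f(R) or regular black holes) is

\[
T_H = \frac{\hbar\,\kappa}{2\pi c\,k_B},
\qquad\text{e.g. for a Schwarzschild hole } \kappa = \frac{c^4}{4GM}
\;\Rightarrow\;
T_H = \frac{\hbar c^3}{8\pi G M k_B},
\]

where \(\kappa\) is the surface gravity of the horizon, so more massive black holes are colder.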
Abstract:
This thesis presents a method for the numerical solution of the two-dimensional shallow water equations, which model the flow behaviour of bodies of water whose surface extent is much larger than their depth. These equations describe the gravity-driven evolution in time of a given initial state of a free-surface flow. This class includes problems such as the behaviour of waves on shallow beaches or the movement of a flood wave along a river. These examples clearly show the need to account in the scheme for the influence of topography and for the treatment of wet/dry transitions. The present dissertation introduces a finite volume scheme, of high order of accuracy in regions of sufficient water depth, for numerically computing the time evolution of the solution of the two-dimensional shallow water equations from given initial and boundary conditions on an unstructured grid; the scheme is able to account for the influence of topographic source terms on the flow and to balance this influence exactly against the numerical fluxes in so-called "lake at rest" steady states. The basis of the scheme is a first-order finite volume approach, which is extended by a WENO reconstruction using the least-squares method and a so-called space-time expansion, with the aim of obtaining a scheme of arbitrarily high order. The Riemann problems arising in the scheme are solved with the Riemann solver of Chinnayya, LeRoux and Seguin from 1999, which takes the influence of the topography on the flow into account. It is proved in the thesis that the coefficients of the reconstruction polynomials computed by the WENO method approximate the spatial derivatives of the function being reconstructed to a degree of accuracy consistent with the order of the scheme. It is likewise proved that the coefficients of the polynomial resulting from the space-time expansion approximate the spatial and temporal derivatives of the solution of the initial value problem. Furthermore, the well-balanced property of the scheme is proved for arbitrarily high numerical order. For the treatment of wet/dry transitions, a method of order reduction depending on the water depth and the cell size is proposed. This is necessary to avoid negative values of the water depth in the computation, which can occur as a consequence of oscillations of the space-time polynomial. Numerical results confirming the theoretical order of the scheme are presented, as well as examples demonstrating the excellent properties of the overall scheme in the computation of challenging problems.
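For reference, the two-dimensional shallow water equations with bottom topography that such a scheme discretizes read, in a standard form (notation chosen here, not quoted from the thesis),

\[
\partial_t h + \nabla\cdot(h\mathbf{u}) = 0,
\qquad
\partial_t(h\mathbf{u}) + \nabla\cdot\!\left(h\,\mathbf{u}\otimes\mathbf{u} + \tfrac{1}{2}\,g\,h^2\,\mathbf{I}\right) = -\,g\,h\,\nabla b,
\]

where \(h\) is the water depth, \(\mathbf{u}\) the depth-averaged velocity, \(g\) the gravitational acceleration and \(b\) the bottom topography. The "lake at rest" steady state that a well-balanced scheme must reproduce exactly is \(\mathbf{u}=\mathbf{0}\) with \(h+b=\text{const}\).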
Abstract:
Changes in space-time behaviour in the course of life transitions and their implications for urban and transport planning, using the example of entry into retirement. This dissertation investigated whether, and to what extent, everyday space-time behaviour changes with the transition into retirement. The investigation is a multi-year panel study conducted in the Hamburg and Kassel regions. Comprehensive interviews were conducted with a total of 50 study participants before and after they left working life. The "HATS" method ("Household Activity Travel Simulator"), developed at Oxford University, was used; it provides deep insight into the structures of everyday life and the spatio-temporal behaviour that results from them. One focus of the investigation was mode choice. Based on the findings, recommendations for urban and transport planning were derived. It emerged that, after entering retirement, the study participants generally appear as travellers considerably later in the course of the day. In addition, the urban quarter and local mobility gained in importance; walking and cycling become more important for everyday mobility. Shopping and service facilities in one's own residential quarter, and thus mixed-use urban quarters, therefore prove particularly relevant for retirees. Despite the growing importance of walking and cycling, the study shows that the car (still) plays a dominant role in everyday mobility, a development that is likely to intensify because of the cohort effect. In the discussion of suitable courses of action for urban and transport planning to strengthen the environmentally friendly modes, various measures for improving public transport services are considered, alongside interventions to reduce the attractiveness of the car. Among other things, the behavioural relevance of cost perceptions is examined. It also becomes clear that, when establishing transport planning measures, the growing importance of trips made in the company of other household members (household mobility) must be taken into account. Entry into retirement proves in general to be a transitional situation in the life course that favours the breaking of everyday (travel) habits and makes those affected particularly receptive to information about different transport services and behavioural alternatives. With regard to possible courses of action, the study discusses, among other things, how target-group-specific communication in public transport can use this window of time to win over, or retain, people on the threshold of retirement as regular users of bus and rail.
Abstract:
The information and genetic data emerging today from human genome research demand the development of software tools capable of processing the large amount of available information. The growing volume of genetic data is the result of equipment that simultaneously analyses hundreds or thousands of polymorphisms or genetic variants, and of new, higher-throughput laboratory techniques which, taken together, make more information available in a short period of time. This situation leads to the need to develop new software tools capable of dealing with this larger volume of genetic data. In the case of population genetics, although software tools exist that can process the data and facilitate their analysis, they have limitations: users may lack knowledge of the programming languages needed to enter the information, some tools do not perform all the required estimates, and others are limited in the amount of data they can load or handle. In some cases there is redundancy, as two or more tools have to be used to process a single genetic data set. The aim of the present work is to develop a software tool based on common desktop applications, in this case Microsoft Excel®, that resolves all the problems and limitations described above. The set of subroutines that make up Lustro overcomes these issues and presents the results in a simple, familiar and easy-to-use environment, thereby simplifying the adaptation process for users of the program, without prior training, so that the genetic information of interest can be processed in a short time.
Abstract:
The aim of this work is to implement a logistics improvement that adds value, increases efficiency and improves the storage and distribution, inventory control and industrial safety processes of the company YOKOMOTOS. An in-depth study of the company's current situation and problems was carried out, with the goal of delivering results that set this company apart in the motorcycle spare parts market and earn it greater prestige and recognition at the Latin American level. Likewise, possible solutions were identified to mitigate these problems, improving the processes in the areas of storage, inventory systems and industrial safety. Several pilot tests were run to assess the feasibility of our solutions, analysing space, time and costs. Finally, the best solution was implemented and adjusted to meet the company's requirements, thereby improving the storage and distribution processes and adding value to its supply chain.
Abstract:
This paper presents the results of research that I have been carrying out for some years on El Otoño del Patriarca. This research underwent an important shift concerning the aims of a philosophical reading of a literary text, as a consequence of, on the one hand, a displacement in how the relations between the dimensions of language, power, time and space opened up by the narration are determined, and, on the other, a different conception of the discursive formation of these relations in El Otoño del Patriarca as a narrative, oral and poetic work.
Abstract:
Abstract taken from the publication
Abstract:
The transition from the 1st to the 2nd cycle of schooling implies, in many cases, not only a change in the models of organisation (spaces, times and people...) but also a change of school itself. And while timely preparation can ease this transition process (between schools, school/family and family/pupil), the truth is that the new cycle brings new problems and new challenges that daily test and mobilise, in the pupils in transit, their capacity to adapt to new situations. Accepting that for a large number of children this period is short and easily overcome, it is recognised that for others inclusion in the new cycle requires more time and specific adaptations, depending on the special educational needs they display. The work presented here, guided by an ecological perspective and developed on the basis of an action research methodology, allowed us a better knowledge of "Pedro" (a fictitious name) as a person (young man, pupil, classmate, son, grandson, neighbour and friend) and of the contexts in which he moves; the identification of his potential and educational needs; and the definition, implementation and evaluation of the educational responses that made his inclusion in the 2nd cycle school possible and optimised it. The intervention involved teamwork characterised by regular collaboration and coordination among the different participants and by persistence and coherence in the action carried out. It also fostered greater awareness that, whatever the characteristics that make us unique, it is possible to progress from one's starting point if, in each person's different life contexts, conditions are created that enable and stimulate developmental paths. The work carried out strengthened interpersonal relationships and deepened collaboration between peers, between the School and the Family, and between the Parents and "Pedro", and it developed the learning of the different participants, as the evaluation carried out attests.
Abstract:
The author carries out a sociological analysis of Pareja's novel, drawing on the criteria of the critic Lucien Goldmann. The narratological categories that serve as guidelines are: the narrator, space, time and the characters. He dwells on the analysis of the three spaces that Pareja projects in the novel: the countryside, the city (a setting identified with poverty, theft and abuse by the authorities) and the outside world (barely referential, but influential in the beginnings of the country's modernisation). He reviews the characters and the author's mental structure. He concludes that the text proposes a different conception of the art of novel-writing, in tune with the European avant-garde, even though it depicts a social reality that does not change (Baldomera was born poor and moves between the brothel, the cantina, the hospital and, finally, the jail). The text moves beyond the simple testimonial document thanks to the design of the character: despite her enormous physique, which personifies misfortune, Baldomera expresses at the same time the values of fidelity, maternal love and solidarity.
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcings rather than a few highly replicated ensembles, as is more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, the latter itself containing some forcing from anthropogenic and natural influences, have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. It was also found that an interaction between these two anthropogenic effects exists in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, these show that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model is suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
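To illustrate the factorial idea behind the design, here is a minimal sketch (hypothetical numbers and names, not the paper's data or code) of how main effects and a pairwise interaction of two anthropogenic forcings can be estimated by ordinary least squares when every on/off combination of the forcings is run:

```python
# Minimal sketch: estimating main effects and a pairwise interaction of two
# "forcings" from a full-factorial set of runs with ordinary least squares.
# Data and names are hypothetical placeholders.
import numpy as np

# Design: every on/off combination of two anthropogenic forcings (GHG, aerosol),
# each run once (coded -1 = off, +1 = on).
ghg     = np.array([-1, -1, +1, +1])
aerosol = np.array([-1, +1, -1, +1])

# Hypothetical global-mean temperature responses for the four runs (K).
response = np.array([0.0, -0.3, 0.9, 0.4])

# Columns: intercept, GHG main effect, aerosol main effect, GHG x aerosol interaction.
X = np.column_stack([np.ones(4), ghg, aerosol, ghg * aerosol])
coeffs, *_ = np.linalg.lstsq(X, response, rcond=None)

print("main effect GHG:     %+.3f" % coeffs[1])
print("main effect aerosol: %+.3f" % coeffs[2])
print("interaction term:    %+.3f" % coeffs[3])
```

With the full factorial of runs the columns of the design matrix are orthogonal, which is what makes the estimates efficient despite the collinearity of the forcings in time.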
Abstract:
When the orthogonal space-time block code (STBC), or the Alamouti code, is applied to a multiple-input multiple-output (MIMO) communications system, optimum reception can be achieved by a simple signal decoupling at the receiver. The performance, however, deteriorates significantly in the presence of co-channel interference (CCI) from other users. In this paper, the CCI problem is overcome by applying independent component analysis (ICA), a blind source separation algorithm. This is based on the fact that, if the transmission data from every transmit antenna are mutually independent, they can be effectively separated at the receiver on the principle of blind source separation, and the CCI is thereby suppressed. Although not required by the ICA algorithm itself, a small number of training data are necessary to eliminate the phase and order ambiguities at the ICA outputs, leading to a semi-blind approach. Numerical simulations are also presented to verify the proposed ICA approach in the multiuser MIMO system.
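As background for the decoupling property that the paper builds on, the sketch below (Python/NumPy, illustrative only; the ICA-based interference suppression itself is not shown) encodes two symbols with the 2x1 Alamouti scheme and recovers them by the standard linear combining:

```python
# Minimal sketch: Alamouti 2x1 space-time block coding and linear combining.
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols to send over two time slots from two transmit antennas.
qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
s1, s2 = rng.choice(qpsk, 2)

# Flat-fading gains from the two transmit antennas to the single receive antenna.
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

# Alamouti transmission: slot 1 sends (s1, s2), slot 2 sends (-s2*, s1*).
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Linear combining decouples the symbols (up to the channel gain |h1|^2 + |h2|^2).
gain = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / gain
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / gain

print("sent     :", s1, s2)
print("recovered:", s1_hat, s2_hat)
```

The combining yields (|h1|^2 + |h2|^2) times each symbol plus noise, which is the simple decoupling mentioned in the abstract; co-channel interference from other users breaks this structure, which is where the ICA stage of the paper comes in.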
Abstract:
This paper introduces the perspex algebra, which is being developed as a common representation of geometrical knowledge. A perspex can currently be interpreted in one of four ways. First, the algebraic perspex is a generalization of matrices; it provides the most general representation for all of the interpretations of a perspex and can be used to describe arbitrary sets of coordinates. The remaining three interpretations of the perspex are all related to square matrices and operate in a Euclidean model of projective space-time, called perspex space. Perspex space differs from the usual Euclidean model of projective space in that it contains the point at nullity. It is argued that the point at nullity is necessary for a consistent account of perspective in top-down vision. Second, the geometric perspex is a simplex in perspex space. It can be used as a primitive building block for shapes, or as a way of recording landmarks on shapes. Third, the transformational perspex describes linear transformations in perspex space that provide the affine and perspective transformations in space-time. It can be used to match a prototype shape to an image, even in so-called 'accidental' views where the depth of an object disappears from view, or an object stays in the same place across time. Fourth, the parametric perspex describes the geometric and transformational perspexes in terms of parameters that are related to everyday English descriptions. The parametric perspex can be used to obtain both continuous and categorical perception of objects. The paper ends with a discussion of issues related to using a perspex to describe logic.
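The following is only an illustrative sketch (Python, with names assumed for the example; it is not the paper's perspex implementation). It applies an ordinary 4x4 projective transformation to homogeneous points and shows where the usual perspective divide breaks down when the weight becomes zero, the gap that the point at nullity is introduced to close.

```python
# Illustrative sketch: a 4x4 matrix acting on homogeneous points with the usual
# perspective divide. Ordinary homogeneous coordinates leave the w = 0 case
# undefined; perspex space gives such points a consistent meaning.
import numpy as np

def apply_projective(T: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Apply a 4x4 projective transformation T to a homogeneous point p."""
    q = T @ p
    w = q[3]
    if w == 0.0:
        # Usual projective geometry: a point at infinity, no finite image.
        raise ZeroDivisionError("weight is zero: no finite Euclidean image")
    return q / w

# A toy perspective transformation (camera looking along the z-axis).
T = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])   # w' = z: perspective divide by depth

point = np.array([2.0, 3.0, 4.0, 1.0])   # ordinary point at depth 4
print(apply_projective(T, point))        # -> [0.5, 0.75, 1.0, 1.0]

flat = np.array([2.0, 3.0, 0.0, 1.0])    # depth 0: an "accidental" view
# apply_projective(T, flat) would raise; perspex space retains such points.
```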