41 results for Threshold concept theory
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
Doctoral thesis in Philosophy
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
A PhD Dissertation, presented as part of the requirements for the Degree of Doctor of Philosophy from the NOVA - School of Business and Economics
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, such cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players she believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all the possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller one may also win, provided its relative size is not too small; a higher degree of self-delusion among the minority party's voters decreases this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
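To make the thinking-cost mechanism concrete, below is a minimal numerical sketch, in Python, of a symmetric linear Cournot duopoly in which each firm either pays a thinking cost to best-respond or produces a default quantity for free. The demand and cost parameters, function names, and deviation checks are illustrative assumptions for this sketch, not the thesis's own notation or proofs.

```python
def profit(q_own, q_other, a, b, c):
    """Profit in a linear Cournot market with inverse demand P = a - b*Q."""
    return q_own * (a - c - b * (q_own + q_other))


def best_response(q_other, a, b, c):
    """Profit-maximising quantity against a rival producing q_other."""
    return max(0.0, (a - c - b * q_other) / (2 * b))


def equilibria(q_default, kappa, a=10.0, b=1.0, c=0.0):
    """Thinking profiles that survive unilateral deviation checks.

    Each firm either pays the thinking cost kappa and best-responds,
    or produces the default quantity q_default at no cost.
    """
    found = []

    # Both think: the ordinary Cournot-Nash outcome, each paying kappa.
    q_star = (a - c) / (3 * b)
    if profit(q_star, q_star, a, b, c) - kappa >= profit(q_default, q_star, a, b, c):
        found.append(("think", "think"))

    # One thinks, one defaults: the asymmetric profile described above.
    q_think = best_response(q_default, a, b, c)
    thinker_ok = (profit(q_think, q_default, a, b, c) - kappa
                  >= profit(q_default, q_default, a, b, c))
    defaulter_ok = (profit(q_default, q_think, a, b, c)
                    >= profit(best_response(q_think, a, b, c), q_think, a, b, c) - kappa)
    if thinker_ok and defaulter_ok:
        found.append(("think", "default"))

    # Neither thinks: both firms stay at the default quantity.
    if (profit(q_default, q_default, a, b, c)
            >= profit(best_response(q_default, a, b, c), q_default, a, b, c) - kappa):
        found.append(("default", "default"))

    return found


for kappa in (0.01, 0.5, 2.0):
    print(kappa, equilibria(q_default=4.0, kappa=kappa))
```

With these illustrative parameters the output reproduces the pattern described in the abstract: a low cost sustains the both-think (Cournot-Nash) equilibrium, an intermediate cost sustains the asymmetric think/default equilibrium, and a high cost leaves only the no-thinking profile.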
Abstract:
In the past years, Software Architecture has attracted increased attention from academia and industry as the unifying concept for structuring the design of complex systems. One particular research area deals with the possibility of reconfiguring architectures to adapt the systems they describe to new requirements. Reconfiguration amounts to adding and removing components and connections, and may have to occur without stopping the execution of the system being reconfigured. This work contributes to the formal description of such a process. Taking as a premise that a single formalism hardly ever satisfies all requirements in every situation, we present three approaches, each one with its own assumptions about the systems it can be applied to and with different advantages and disadvantages. Each approach is based on work of other researchers and has the aesthetic concern of changing the original formalism as little as possible, keeping its spirit. The first approach shows how a given reconfiguration can be specified in the same manner as the system it is applied to, and in a way that can be efficiently executed. The second approach explores the Chemical Abstract Machine, a formalism for rewriting multisets of terms, to describe architectures, computations, and reconfigurations in a uniform way. The last approach uses a UNITY-like parallel programming design language to describe computations, represents architectures by diagrams in the sense of Category Theory, and specifies reconfigurations by graph transformation rules.
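As a toy illustration of the multiset-rewriting idea behind the second approach (not the thesis's actual Chemical Abstract Machine formalism), the Python sketch below represents an architecture as a multiset of component and connection terms, and a reconfiguration as a rewrite rule that consumes and produces terms; the term syntax is invented for the example.

```python
from collections import Counter

def applicable(rule, solution):
    """A rule fires only if the solution contains every term it consumes."""
    consumed, _ = rule
    return all(solution[term] >= n for term, n in consumed.items())

def apply_rule(rule, solution):
    """One rewrite step: remove the consumed terms, add the produced ones."""
    consumed, produced = rule
    if not applicable(rule, solution):
        raise ValueError("rule not applicable to this solution")
    return solution - consumed + produced

# A client-server architecture as a multiset ("solution") of terms.
arch = Counter({"client(c1)": 1, "server(s1)": 1, "link(c1,s1)": 1})

# Reconfiguration: replace server s1 by s2 and rewire the connection in a
# single rewrite, without touching (stopping) the client term.
replace_server = (
    Counter({"server(s1)": 1, "link(c1,s1)": 1}),
    Counter({"server(s2)": 1, "link(c1,s2)": 1}),
)

print(apply_rule(replace_server, arch))
# Counter({'client(c1)': 1, 'server(s2)': 1, 'link(c1,s2)': 1})
```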
Abstract:
The paper presents the central discourse of the knowledge-based society. Already in the 1960s, the debate on the industrial society raised the question of whether a paradigm shift towards a knowledge-based society could be discerned. Some prominent authors foresaw ‘knowledge’ displacing ‘labour’ and ‘capital’ as the main driving force of capitalist development. Today, at the political level and in many scientific disciplines, the assumption that we are already living in a knowledge-based society seems obvious. Although we still lack a theory of the knowledge-based society, and a methodological gap remains regarding empirical indicators, the vision of a knowledge-based society shapes at least the self-perception of Western societies. In a first step, the author pinpoints the assumptions about the knowledge-based society on three levels: the societal, the organisational, and the individual. These assumptions rest on the following topics: a) the role of information and communication technologies; b) the dynamic development of globalisation as an ‘evolutionary’ process; c) the increasing importance of knowledge management within organisations; d) the changing role of the state within economic processes. Not only the differentiation between the levels but also the revision of the assumptions of a knowledge-based society shows that the topics raised in the debates cannot be considered the results of a profound societal paradigm shift. What seems striking, however, is the normative and virtual shift towards a concept of modernity that focuses strongly on the role of technology as a driving force, as well as on the global economic markets, which has to be accepted. Therefore, according to the official debate, successfully adapting to these processes seems the only way to reach the knowledge-based society. Analysing the societal changes on the three levels, the label ‘knowledge-based society’ can be viewed critically. The main question Theodor W. Adorno posed at the 16th Congress of Sociology in 1968 has therefore not lost its relevance: facing the societal changes of his time, he asked whether we were still living in the industrial society or already in a post-industrial state. Thinking about the knowledge-based society in terms of these two options would enrich the whole debate with respect to social inequality, political and economic exclusion processes, and not least the power relationships between social groups.
Abstract:
Banks that have introduced a CRM system have had to make some difficult changes in their organization in order to become more customer-oriented. Besides pure CRM, banks try to adopt other innovative tools related to the core CRM. Some of these solutions are constructed in such a way that access to the information can also be ensured outside the bank's organization.
Abstract:
This Thesis describes the application of automatic learning methods to a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds. NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set); Counter-Propagation Neural Networks (CPNNs) gave similar results. The use of supervised learning techniques improved the results: to 77% of correct assignments when an ensemble of ten FFNNs was used, and to 80% when Random Forests were used. This study was performed with NMR data simulated from the molecular structure by the SPINUS program. In the design of one test set, simulated data were combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for automatic classification of reactions and mixtures of reactions. Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and genome-scale reconstruction (or comparison) of metabolic pathways, to the classification of enzymatic mechanisms.
Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Different methods should be available to automatically compare metabolic reactions and to automatically assign EC numbers to reactions not yet officially classified. In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number level, respectively. Experiments for the classification of reactions from the main reactants and products were performed with RFs; EC numbers were assigned at the class, subclass and sub-subclass levels with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways. Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway, i.e. a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers.
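As a hedged sketch of the difference-descriptor idea, the snippet below encodes each reaction as the product-minus-reactant descriptor vector and trains a Random Forest to predict a top-level class; it uses scikit-learn and synthetic stand-in vectors, not actual MOLMAPs or KEGG reactions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for MOLMAP-style fixed-length bond-type maps (e.g. a 7x7 SOM
# flattened to 49 features). Real descriptors would come from a Kohonen SOM
# trained on physico-chemical bond properties; here they are synthetic.
n_reactions, n_features, n_classes = 600, 49, 6
reactants = rng.random((n_reactions, n_features))
ec_class = rng.integers(0, n_classes, size=n_reactions)

# Make the product map depend on the class so there is a signal to learn.
class_shift = rng.random((n_classes, n_features))
products = (reactants + class_shift[ec_class]
            + 0.1 * rng.standard_normal((n_reactions, n_features)))

# The reaction descriptor: the difference map, encoding bonds made/broken.
X = products - reactants

X_tr, X_te, y_tr, y_te = train_test_split(X, ec_class, test_size=0.25,
                                          random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print(f"top-level class accuracy on held-out reactions: {model.score(X_te, y_te):.2f}")
```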
Mapping of PES by neural networks (NNs). In a first series of experiments, ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-PES and from the LJ function. The results indicated that, for LJ-type potentials, NNs can be trained to generate accurate PES to be used in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs, and the NN models showed a remarkable ability to interpolate between distant curves and to accurately reproduce potentials for use in molecular simulations. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task. The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover well the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform equally well or even better than analytical functions. Therefore, they can be used in molecular simulations, particularly for the ethanol/Au(111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
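The first series of PES experiments can be mimicked in a few lines: the sketch below fits the Lennard-Jones pair potential from sampled energies and checks interpolation at distances the network never saw. It uses a single scikit-learn MLP rather than the ensembles (EnsFFNNs) or ASNNs used in the Thesis, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """LJ pair potential V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Training data: energies sampled along the pair distance.
r_train = np.linspace(0.9, 3.0, 200)
E_train = lennard_jones(r_train)

net = MLPRegressor(hidden_layer_sizes=(40, 40), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(r_train.reshape(-1, 1), E_train)

# Interpolation check on unseen distances.
r_test = np.linspace(0.95, 2.9, 77)
errors = np.abs(net.predict(r_test.reshape(-1, 1)) - lennard_jones(r_test))
print(f"max absolute interpolation error: {errors.max():.4f}")
```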
Abstract:
In this thesis we implement estimating procedures in order to estimate threshold parameters for continuous time threshold models driven by stochastic differential equations. The first procedure is based on the EM (expectation-maximization) algorithm applied to the threshold model built from the Brownian motion with drift process. The second procedure mimics one of the fundamental ideas in the estimation of thresholds in the time series context, that is, conditional least squares estimation. We implement this procedure not only for the threshold model built from the Brownian motion with drift process but also for more generic models such as the ones built from the geometric Brownian motion or the Ornstein-Uhlenbeck process. Both procedures are implemented for simulated data, and the least squares estimation procedure is also implemented for real data of daily prices from a set of international funds. The first fund is the PF-European Sustainable Equities-R fund from the Pictet Funds company and the second is the Parvest Europe Dynamic Growth fund from the BNP Paribas company. The data for both funds are daily prices from the year 2004. The last fund to be considered is the Converging Europe Bond fund from the Schroder company and the data are daily prices from the year 2005.
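A minimal sketch of the conditional least squares idea, under illustrative assumptions (a threshold Brownian motion with drift observed on a fine grid, an Euler discretization, and a grid search over candidate thresholds; none of the names or values are the thesis's own): for each candidate threshold, estimate one drift per regime from the increments and pick the threshold that minimizes the conditional sum of squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a threshold Brownian motion with drift (Euler scheme):
# dX = mu1*dt + sigma*dW below the threshold r_true, mu2*dt + sigma*dW above.
mu1, mu2, r_true, sigma, dt, n = 1.5, -1.0, 0.0, 0.5, 0.01, 20_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    mu = mu1 if x[t] <= r_true else mu2
    x[t + 1] = x[t] + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def cls_objective(r, x, dt):
    """Conditional sum of squares for a candidate threshold r."""
    dx, state = np.diff(x), x[:-1]
    total = 0.0
    for mask in (state <= r, state > r):
        if mask.any():
            mu_hat = dx[mask].mean() / dt        # regime-wise drift estimate
            total += np.sum((dx[mask] - mu_hat * dt) ** 2)
    return total

# Grid search between inner quantiles so each regime keeps enough data.
grid = np.linspace(np.quantile(x, 0.1), np.quantile(x, 0.9), 400)
r_hat = grid[np.argmin([cls_objective(r, x, dt) for r in grid])]
print(f"estimated threshold: {r_hat:.3f} (true value {r_true})")
```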
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Dissertation presented at the Faculty of Sciences and Technology of the New University of Lisbon to obtain the degree of Doctor in Electrical Engineering, specialty of Robotics and Integrated Manufacturing
Abstract:
Since 1989, five parliamentary elections have been the stage for the foundation and demise of political parties aspiring to govern the new democratic Polish state. The demise of the AWS before the 2001 elections, after ten years of attempts to create a centre-right core party, resulted in a new splintering of the right wing, and the centre-right again became devoid of a pivotal formation. While Eurosceptic parties on average gain 8 percent of the vote, in the 2001 Polish parliamentary elections Eurosceptic parties gained around 20 percent of the vote. In Poland, right-wing parties show an unusual propensity for Euroscepticism. The persistence and increased importance of nationalism in Poland, which has prevented the development of a strong Christian democratic party, effectively explains the levels of Euroscepticism on the right. After the autumn 2005 parliamentary elections, the national conservative party, Law and Justice, formed a governing coalition with the national Catholic League of Polish Families, creating one of the first Eurosceptic governments. Although this work does not intend to provide a theorisation of party system development, it shows that the context of European integration fostered nationalist divisiveness and provoked the splitting of the right. The unusual propensity of parties for Euroscepticism makes Poland a paradigmatic case of the kind of conflicts over European integration emerging in Central and Eastern European party systems.
Abstract:
On the basis of Gollwitzer's (1993, 1999) implementation intentions concept and Kirsch & Lynn's (1997) response set theory, this dissertation tested the effectiveness of a combined intervention of implementation intentions and hypnosis with posthypnotic suggestion in enhancing adherence to a simple (mood report) and a difficult (physical activity) health-related task. Participants were enrolled in a university in New Jersey (N=124, Study 1, USA) and in two universities in Lisbon (N=323, Study 2, Portugal). In both studies participants were selected from a broader sample based on their suggestibility scores on the Waterloo-Stanford Group C (WSGC) scale of hypnotic susceptibility and then randomly assigned to the experimental groups. Study 1 used a 2x2x3 factorial design (instruction x hypnosis x level of suggestibility) and Study 2 used a 2x2x2x4 factorial design (task x instruction x hypnosis x level of suggestibility). In Study 1, participants were asked to run in place for 5 minutes each day for a three-week period, to take their pulse rate before and after the activity, and to send a daily e-mail report to the experimenter, thus providing both a self-report and a behavioral measure of adherence. Participants in the goal intention condition were simply asked to run in place and send the e-mail once a day. Those in the implementation intention condition were further asked to specify the exact place and time at which they would perform the physical activity and send the e-mail. In addition, half of the participants were hypnotized and given a posthypnotic suggestion indicating that the thought of running in place would come to mind without effort at the appropriate moment; the other half did not receive a posthypnotic suggestion. Study 2 followed the same procedure, but additionally half of the participants were instructed to send a daily mood report by SMS (easy task) and half were assigned to the physical activity task described above (difficult task). Study 1 results showed a significant interaction between participants' suggestibility level and posthypnotic suggestion (p<.01), indicating that the posthypnotic suggestion enhanced adherence among highly suggestible participants but lowered it among low suggestible individuals. No differences between the goal intention and implementation intention groups were found. In Study 2, participants adhered significantly more (p<.001) to the easy task than to the difficult task. Results did not reveal significant differences between the implementation intentions condition, the hypnosis condition, and the two combined, indicating that implementation intentions were not effective in enhancing adherence to either task and did not benefit from the combination with posthypnotic suggestions. Hypnosis with posthypnotic suggestion alone significantly reduced adherence to both tasks in comparison with participants who did not receive hypnosis. Since there were no Portuguese-language instruments to assess hypnotic suggestibility, the Waterloo-Stanford Group C (WSGC) scale of hypnotic susceptibility was translated and adapted into Portuguese and used to screen a sample of college students from Lisbon (N=625). The Portuguese sample showed distribution shapes and item-difficulty patterns of hypnotic suggestibility scores similar to those of the reference samples, except that the proportion of Portuguese students scoring in the high range of hypnotic suggestibility was lower than in the reference samples. To shed some light on the reasons for this finding, participants were asked about their attitudes toward hypnosis using a Portuguese translation and adaptation of the Escala de Valencia de Actitudes y Creencias Hacia la Hipnosis, Versión Cliente, and were compared with participants with no prior hypnosis experience (N=444). Significant differences were found between the two groups, with participants without hypnosis experience scoring higher on factors indicating misconceptions and negative attitudes about hypnosis.
Abstract:
The concept of species in Paleontology is of paramount importance, since correct taxonomic determinations are essential to establish the age of the beds where fossils are collected. Particularly since 1940, the concept of species in a biological context, corresponding to the variability of a set of reproductively compatible populations, has led to a new approach, in which the typological conception has been replaced by a populationist one. If the notion of species is not necessarily identical for all living organisms, the difficulties of interpretation are all the greater in the particular world of fossil cephalopods. The latter lend themselves well to population systematics, in which the concept of species rests primarily on morphological similarities. Thus, the introductory general ideas analyse "typological species", "biological species", the problem of the definition of a "population" in Paleontology, and also the importance of the biometric analysis of fossil associations. The classic examples of polymorphism and polytypism, in extant or extinct organisms, show that the concept of a fossil species, observed in a well-defined period of its lifetime, is no different from that of a biological species. The study of the evolution of fossil organisms allows us to understand the modalities of evolution and the mechanisms of speciation, here synthesized and fully documented, namely anagenesis (sequential evolution) and cladogenesis (divergent evolution); these mechanisms are the basis of the synthetic or gradualist theory of evolution developed by Dobzhansky, Mayr, Huxley, Rensch and Simpson. This summary ends with a reference to the theory of punctuated (or intermittent) equilibria proposed by Gould and Eldredge, who presented a more objective interpretation of morphological gaps, considered as elements of evolution itself. Interdisciplinary collaboration between zoologists, geneticists and paleontologists is indispensable in this domain. Paleozoology has a key role, since it conveys dynamism and depth to the space-time dimension.