Abstract:
The inherent stochastic character of most of the physical quantities involved in engineering models has led to an ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo Simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability distribution. The computational efficiency and robustness are also very good, even when dealing with strongly non-Gaussian distributions. Moreover, the resulting samples possess all the relevant, well-defined and desired properties of “translation fields”, including crossing rates and distributions of extremes. The topic of the second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars of existing truss structures, using static loads and strain measurements. In the cases of missing data and of damage affecting only a small portion of a bar, Genetic Algorithms have proved to be an effective tool for solving the problem.
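For context, the “translation field” construction referred to above maps a Gaussian field through the standard normal CDF and then through the inverse CDF of the target marginal, so that the transformed field has the prescribed non-Gaussian marginal distribution. The minimal sketch below illustrates only this memoryless transform, not the spectrum-matching iteration developed in the thesis; the Gaussian sample and the lognormal target are illustrative stand-ins.

    # Memoryless translation of a Gaussian sample field to a prescribed
    # non-Gaussian marginal: f(x) = F^{-1}( Phi( g(x) ) ).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Stand-in Gaussian sample field; in practice it would be generated from a
    # target power spectrum (e.g. by the spectral representation method).
    g = rng.standard_normal((256, 256))

    # Illustrative target marginal: a lognormal distribution.
    target = stats.lognorm(s=0.8)

    # Map through the Gaussian CDF and the inverse target CDF.
    f = target.ppf(stats.norm.cdf(g))

    print(f.mean(), f.std())  # sample moments of the translated field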
Abstract:
The primary goals of this study were to develop a cell-free in vitro assay for the assessment of nonthermal electromagnetic (EMF) bioeffects and to develop theoretical models in accord with current experimental observations. Based upon the hypothesis that EMF effects operate by modulating Ca2+/CaM binding, an in vitro nitric oxide (NO) synthesis assay was developed to assess the effects of a pulsed radiofrequency (PRF) signal used for treatment of postoperative pain and edema. No effects of PRF on NO synthesis were observed. Effects of PRF on Ca2+/CaM binding were also assessed using a Ca2+-selective electrode, which likewise showed no EMF effect on Ca2+/CaM binding. However, a PRF effect was observed on the interaction of hemoglobin (Hb) with tetrahydrobiopterin, leading to the development of an in vitro Hb deoxygenation assay, which showed a reduction in the rate of Hb deoxygenation for exposures to both PRF and a static magnetic field (SMF). Structural studies using pyranine fluorescence, Gd3+ vibronic sideband luminescence and attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy were conducted in order to ascertain the mechanism of this EMF effect on Hb. Also, the effect of SMF on Hb oxygen saturation (SO2) was assessed under gas-controlled conditions. These studies showed no definitive changes in protein/solvation structure or SO2 under equilibrium conditions, suggesting the need for real-time instrumentation or other means of observing out-of-equilibrium Hb dynamics. Theoretical models were developed for EMF transduction, effects on ion binding, neuronal spike timing, and dynamics of Hb deoxygenation. The EMF sensitivity and simplicity of the Hb deoxygenation assay suggest a new tool to further establish basic biophysical EMF transduction mechanisms. If an EMF-induced increase in the rate of deoxygenation can be demonstrated in vivo, then enhancement of oxygen delivery may be a new therapeutic method by which clinically relevant EMF-mediated enhancement of growth and repair processes can occur.
Abstract:
The purpose of this thesis is to investigate the strength and structure of the magnetized medium surrounding radio galaxies via observations of the Faraday effect. This study is based on an analysis of the polarization properties of radio galaxies selected to have a range of morphologies (elongated tails, or lobes with small axial ratios) and to be located in a variety of environments (from rich cluster cores to small groups). The targets include famous objects like M84 and M87. A key aspect of this work is the combination of accurate radio imaging with high-quality X-ray data for the gas surrounding the sources. Although the focus of this thesis is primarily observational, I developed analytical models and performed two- and three-dimensional numerical simulations of magnetic fields. The steps of the thesis are: (a) to analyze new and archival observations of Faraday rotation measure (RM) across radio galaxies and (b) to interpret these and existing RM images using two- and three-dimensional Monte Carlo simulations. The approach has been to select a few bright, very extended and highly polarized radio galaxies. This is essential in order to have high signal-to-noise polarization data over areas large enough to allow the computation of spatial statistics such as the structure function (and hence the power spectrum) of the rotation measure, which requires a large number of independent measurements. New and archival Very Large Array observations of the target sources have been analyzed in combination with high-quality X-ray data from the Chandra, XMM-Newton and ROSAT satellites. The work has been carried out by making use of: 1) analytical predictions of the RM structure functions to quantify the RM statistics and to constrain the power spectra of the RM and magnetic field; 2) two-dimensional Monte Carlo simulations to address the effect of incomplete sampling of the RM distribution and thus to determine errors for the power spectra; 3) methods to combine measurements of RM and depolarization in order to constrain the magnetic-field power spectrum on small scales; 4) three-dimensional models of the group/cluster environments, including different magnetic-field power spectra and gas density distributions. This thesis has shown that the magnetized medium surrounding radio galaxies appears more complicated than was apparent from earlier work. Three distinct types of magnetic-field structure are identified: an isotropic component with large-scale fluctuations, plausibly associated with the intergalactic medium not affected by the presence of a radio source; a well-ordered field draped around the front ends of the radio lobes; and a field with small-scale fluctuations in rims of compressed gas surrounding the inner lobes, perhaps associated with a mixing layer.
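As a minimal illustration of the spatial statistic mentioned above, the second-order RM structure function S(r) = <[RM(x + r) − RM(x)]²> can be estimated from an RM image by averaging squared differences of pixel pairs binned by separation. The sketch below uses a random stand-in image and ignores blanking, noise correction and window effects, which a real analysis would need to handle.

    # Crude estimate of the RM structure function from a 2D image.
    import numpy as np

    rng = np.random.default_rng(1)
    rm = rng.standard_normal((32, 32))     # stand-in RM image [rad m^-2]

    ny, nx = rm.shape
    y, x = np.indices((ny, nx))
    pos = np.column_stack([x.ravel(), y.ravel()]).astype(float)
    val = rm.ravel()

    # All pixel-pair separations and squared RM differences
    # (fine at this size; sample pairs for large maps).
    d = np.hypot(pos[:, None, 0] - pos[None, :, 0],
                 pos[:, None, 1] - pos[None, :, 1])
    dv2 = (val[:, None] - val[None, :]) ** 2

    bins = np.arange(1.0, 30.0, 2.0)       # separation bins [pixels]
    idx = np.digitize(d, bins)
    sf = [dv2[idx == k].mean() for k in range(1, len(bins))]
    print(sf)                              # S(r) per separation bin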
Abstract:
A composite is a material made out of two or more constituents (phases) combined together in order to achieve desirable mechanical or thermal properties. Such innovative materials have been widely used in a large variety of engineering fields in the past decades. The design of a composite structure requires the resolution of a multiscale problem that involves a macroscale (i.e. the structural scale) and a microscale. The latter plays a crucial role in the determination of the material behavior at the macroscale, especially when dealing with constituents characterized by nonlinearities. For this reason, numerical tools are required in order to design composite structures by taking their microstructure into account. These tools need to provide an accurate yet efficient solution in terms of time and memory requirements, because of the large number of internal variables of the problem. This issue is addressed by different methods that overcome the problem by reducing the number of internal variables. Within this framework, this thesis focuses on the development of a new homogenization technique named Mixed TFA (MxTFA) for solving the homogenization problem for nonlinear composites. This technique is based on a mixed-stress variational approach involving self-equilibrated stresses and the plastic multiplier as independent variables on the Reference Volume Element (RVE). The MxTFA is developed for the cases of elastoplasticity and viscoplasticity, and it is implemented into a multiscale analysis for nonlinear composites. Numerical results show the efficiency of the presented techniques, both at the microscale and at the macroscale level.
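The abstract presupposes the standard first-order homogenization setting behind TFA-type methods. For reference (and under the usual assumptions, which the abstract does not spell out), the macroscopic strain and stress are the volume averages of their microscopic counterparts over the RVE, and energetic consistency between the scales is expressed by the Hill-Mandel condition, written here in terms of work conjugacy without rate notation:

    \bar{\boldsymbol{\varepsilon}} = \frac{1}{|V|} \int_{V} \boldsymbol{\varepsilon}(\mathbf{x})\,\mathrm{d}V , \qquad
    \bar{\boldsymbol{\sigma}} = \frac{1}{|V|} \int_{V} \boldsymbol{\sigma}(\mathbf{x})\,\mathrm{d}V , \qquad
    \bar{\boldsymbol{\sigma}} : \bar{\boldsymbol{\varepsilon}} = \frac{1}{|V|} \int_{V} \boldsymbol{\sigma} : \boldsymbol{\varepsilon}\,\mathrm{d}V .

Reduced-order techniques of the TFA family keep these relations while approximating the inelastic fields on the RVE with a small set of modes, which is how the number of internal variables is cut down.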
Abstract:
The main object of this thesis is the analysis and the quantization of spinning particle models which employ extended "one-dimensional supergravity" on the worldline, and their relation to the theory of higher spin fields (HS). In the first part of this work we have described the classical theory of massless spinning particles with an SO(N) extended supergravity multiplet on the worldline, in flat and, more generally, in maximally symmetric backgrounds. These (non)linear sigma models describe, upon quantization, the dynamics of particles with spin N/2. We have then carefully analyzed the quantization of spinning particles with SO(N) extended supergravity on the worldline, for every N and in every dimension D. The physical sector of the Hilbert space reveals an interesting geometrical structure: the generalized higher spin curvature (HSC). We have shown, in particular, that these models of spinning particles describe a subclass of HS fields whose equations of motion are conformally invariant at the free level; in D = 4 this subclass describes all massless representations of the Poincaré group. In the third part of this work we have considered the one-loop quantization of SO(N) spinning particle models by studying the corresponding partition function on the circle. After the gauge fixing of the supergravity multiplet, the partition function reduces to an integral over the corresponding moduli space, which has been computed by using orthogonal polynomial techniques. Finally, we have extended our canonical analysis, described previously for flat space, to maximally symmetric target spaces (i.e. (A)dS backgrounds). The quantization of these models produces the (A)dS HSC as the physical states of the Hilbert space; we have used an iterative procedure and Pochhammer functions to solve the differential Bianchi identity in maximally symmetric spaces. Motivated by the correspondence between SO(N) spinning particle models and HS gauge theory, and by the notorious difficulty one finds in constructing an interacting theory for fields with spin greater than two, we have used these one-dimensional supergravity models to study and extract information on HS. In the last part of this work we have constructed spinning particle models with sp(2) R-symmetry, coupled to Hyper-Kähler and Quaternionic-Kähler (QK) backgrounds.
Abstract:
Singularities of robot manipulators have been intensely studied in the last decades by researchers from many fields. Serial singularities produce some local loss of dexterity of the manipulator, so it might be desirable to search for singularity-free trajectories in the joint space. On the other hand, parallel singularities are very dangerous for parallel manipulators, for they may provoke the local loss of platform control and jeopardize the structural integrity of links or actuators. It is therefore extremely important to avoid parallel singularities while operating a parallel machine. Furthermore, there might be some configurations of a parallel manipulator that are allowed by the constraints but are nevertheless unreachable by any feasible path. The present work proposes a numerical procedure based upon Morse theory, an important branch of differential topology. This procedure counts and identifies the singularity-free regions that are cut out of the configuration space by the singularity locus, as well as the disjoint regions composing the configuration space of a parallel manipulator. Moreover, given any two configurations of a manipulator, a feasible or a singularity-free path connecting them can always be found, or it can be proved that none exists. Examples of applications to 3R and 6R serial manipulators, to 3UPS and 3UPU parallel wrists, to 3UPU parallel translational manipulators, and to 3RRR planar manipulators are reported in the work.
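As a minimal illustration of the kind of check that underlies such procedures (not the Morse-theory construction developed in the thesis), a joint-space segment can be tested for singularity crossings by monitoring the sign of the Jacobian determinant along it. The jacobian() function below is a hypothetical stand-in for a manipulator-specific Jacobian.

    # Reject a straight joint-space segment if det(J) vanishes or changes sign
    # along it, i.e. if the segment crosses the singularity locus.
    import numpy as np

    def jacobian(q):
        # placeholder 2-DOF "Jacobian", used only to make the sketch runnable
        return np.array([[np.cos(q[0]), -np.sin(q[1])],
                         [np.sin(q[0]),  np.cos(q[1])]])

    def segment_is_singularity_free(q_start, q_end, steps=200, tol=1e-9):
        qs = np.linspace(q_start, q_end, steps)
        dets = np.array([np.linalg.det(jacobian(q)) for q in qs])
        if np.any(np.abs(dets) < tol):
            return False        # segment passes (numerically) through a singularity
        return bool(np.all(np.sign(dets) == np.sign(dets[0])))

    print(segment_is_singularity_free(np.array([0.1, 0.2]), np.array([1.0, 1.2])))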
Abstract:
The main reasons for the attention focused on ceramics as possible structural materials are their wear resistance and their ability to operate with limited oxidation and ablation at temperatures above 2000°C. Hence, this work is devoted to the study of two classes of materials which can satisfy these requirements: silicon carbide (SiC)-based ceramics for wear applications, and borides and carbides of transition metals for ultra-high temperature applications (UHTCs). SiC-based materials: Silicon carbide is a hard ceramic, which finds applications in many industrial sectors, from heat production to automotive engineering and metals processing. In view of new fields of use, SiC-based ceramics were produced with the addition of 10-30 vol% of MoSi2, in order to obtain electroconductive ceramics. MoSi2, indeed, is an intermetallic compound which possesses high-temperature oxidation resistance, high electrical conductivity (resistivity of 21·10⁻⁶ Ω·cm), relatively low density (6.31 g/cm³), a high melting point (2030°C) and high stiffness (440 GPa). The SiC-based ceramics were hot pressed at 1900°C with the addition of Al2O3-Y2O3 or Y2O3-AlN as sintering additives. The microstructures of the composites and of the reference materials, SiC and MoSi2, were studied by means of conventional analytical techniques, such as X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy dispersive spectroscopy (SEM-EDS). The composites showed a homogeneous microstructure, with good dispersion of the secondary phases and low residual porosity. The following thermo-mechanical properties of the SiC-based materials were measured: Vickers hardness (HV), Young's modulus (E), fracture toughness (KIc) and room- to high-temperature flexural strength (σ). The mechanical properties of the composites were compared to those of the two monolithic SiC and MoSi2 materials, and the composites showed higher stiffness, higher fracture toughness and slightly higher flexural strength. Tribological tests were also performed in two configurations, disc-on-pin and slider-on-cylinder, aiming at studying the wear behaviour of SiC-MoSi2 composites with Al2O3 as the counterface material. The tests pointed out that the addition of MoSi2 was detrimental, owing to a lower hardness in comparison with the pure SiC matrix. On the contrary, electrical measurements revealed that the addition of 30 vol% of MoSi2 rendered the composite electroconductive, lowering the electrical resistance by three orders of magnitude. Ultra High Temperature Ceramics: Carbides, borides and nitrides of transition metals (Ti, Zr, Hf, Ta, Nb, Mo) possess very high melting points and interesting engineering properties, such as high hardness (20-25 GPa), high stiffness (400-500 GPa), flexural strengths which remain unaltered from room temperature to 1500°C and excellent corrosion resistance in aggressive environments. All these properties make the UHTCs potential candidates for the development of manoeuvrable hypersonic flight vehicles with sharp leading edges. To this end, Zr- and Hf-carbide and boride materials were produced with the addition of 5-20 vol% of MoSi2. This secondary phase enabled the achievement of fully dense composites at temperatures lower than 2000°C and without the application of pressure. Besides the conventional microstructural analyses (XRD and SEM-EDS), transmission electron microscopy (TEM) was employed to explore the microstructure on a small length scale and disclose the effective densification mechanisms.
A thorough literature analysis revealed that neither detailed TEM work nor reports on densification mechanisms are available for this class of materials, although they are essential to optimize the sintering aids utilized and the processing parameters applied. Microstructural analyses, along with thermodynamic and crystallographic considerations, made it possible to disclose the effective role of MoSi2 during sintering of Zr- and Hf-carbides and borides. Among the investigated mechanical properties (HV, E, KIc, σ from room temperature to 1500°C), the high-temperature flexural strength was improved, owing to the protective and sealing effect of a silica-based glassy phase, especially for the borides. Nanoindentation tests were also performed on HfC-MoSi2 composites in order to extract the hardness and elastic modulus of the single phases. Finally, arc-jet tests on HfC- and HfB2-based composites confirmed the excellent oxidation behaviour of these materials at temperatures exceeding 2000°C; no cracking or spallation occurred, and the modified layer was only 80-90 μm thick.
Abstract:
Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on the way nanoscale events are discovered, monitored and controlled. The key point of this thesis is to evaluate the impact of such an approach on applications of ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable and cheap sensors. There are numerous advantages in embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling. On the other, the readout miniaturization allows the organization of sensors into arrays, increasing the capability of the platform in terms of the number of acquired data points, as required in the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic-based and electronic-based signals. The work presented in this thesis shows a first example of a complete parallel readout system with single-ion-channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the signal-to-noise ratio and bandwidth trade-off. More specifically, an array of microfluidic polymer structures, hosting artificial lipid bilayer blocks in which single ion-channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a working demonstration, the platform was used to acquire the ultra-small currents arising from single non-covalent molecular binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
Abstract:
Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, represent an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow electronic systems to be introduced in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means to simulate and analyze the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues due to the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. More in detail, the following activities have been carried out. First, reliability issues in terms of the security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the developed scheme can be used to evaluate both jitter and process parameter variations at low cost. Then, reliability issues in the field of “energy scavenging systems” are analyzed, with an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations. Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed with the development of an electrical and thermal model.
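Where the abstract mentions an electrical model of PV cells, a common starting point (the thesis' actual model is not given in the abstract) is the single-diode equation I = Iph − I0·(exp((V + I·Rs)/(n·VT)) − 1) − (V + I·Rs)/Rsh. The sketch below solves it numerically for the current at a given voltage; all parameter values are illustrative, and describing hot-spot behavior additionally requires a reverse-bias characteristic and a coupled thermal model.

    # Solve the implicit single-diode equation for the cell current at voltage V.
    import numpy as np
    from scipy.optimize import brentq

    Iph, I0, Rs, Rsh, n, Vt = 5.0, 1e-9, 0.02, 50.0, 1.3, 0.02585  # illustrative

    def cell_current(V):
        f = lambda I: (Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0)
                       - (V + I * Rs) / Rsh - I)
        return brentq(f, -2.0 * Iph, 2.0 * Iph)   # bracketed root in current

    print(cell_current(0.5))   # current [A] at 0.5 V for these parameters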
Abstract:
A two-dimensional model to analyze the distribution of the magnetic field in the airgap of PM electrical machines is studied. A numerical algorithm for the non-linear magnetic analysis of multiphase surface-mounted PM machines with semi-closed slots is developed, based on the equivalent magnetic circuit method. By using a modular geometry, whose basic element can be duplicated, it allows any winding-distribution topology to be designed. In comparison with FEA, it permits a reduction in computing time and allows the parameter values to be changed directly in a user interface, without re-designing the model. The output torque and the radial forces acting on the moving part of the machine can be calculated. In addition, an analytical model for the calculation of radial forces in multiphase bearingless Surface-Mounted Permanent Magnet Synchronous Motors (SPMSM) is presented. It predicts the amplitude and direction of the force as functions of the torque current, the levitation current and the rotor position. It is based on the space-vector method, allowing the machine to be analyzed during transients as well. The calculations are carried out by expanding the analytical functions in Fourier series, taking into account all the possible interactions between stator and rotor mmf harmonic components; since the model is parametrized, the effects of the electrical and geometrical quantities of the machine can be analyzed. The model is implemented in the design of a control system for bearingless machines, as an accurate electromagnetic model integrated in a three-dimensional mechanical model, where one end of the motor shaft is constrained to simulate the presence of a mechanical bearing, while the other end is free, supported only by the radial forces developed by the interactions between the magnetic fields, so as to realize a bearingless system with three degrees of freedom. The complete model represents the design of the experimental system to be realized in the laboratory.
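As a minimal sketch of how a net radial force arises in a bearingless machine (this is not the space-vector model developed in the thesis), the normal Maxwell stress of the airgap flux density can be integrated around the gap; superposing a p-pole-pair torque field and a (p+1)-pole-pair suspension field yields a nonzero resultant. The tangential field component is neglected and all values below are illustrative.

    # Net radial force on the rotor from the normal Maxwell stress B^2 / (2*mu0).
    import numpy as np

    mu0 = 4e-7 * np.pi
    r, L = 0.05, 0.10                  # airgap radius and stack length [m]
    p = 2                              # torque-winding pole pairs
    theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
    dtheta = theta[1] - theta[0]

    # Airgap normal flux density: main field plus a small suspension harmonic.
    B = 0.8 * np.cos(p * theta) + 0.05 * np.cos((p + 1) * theta + 0.3)

    sigma_r = B ** 2 / (2.0 * mu0)     # radial Maxwell stress [N/m^2]
    Fx = r * L * np.sum(sigma_r * np.cos(theta)) * dtheta
    Fy = r * L * np.sum(sigma_r * np.sin(theta)) * dtheta
    print(Fx, Fy)                      # net radial force components [N]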
Abstract:
The aim of this research study is to explore the opportunity to set up Performance Objective (PO) parameters for specific risks in RTE products, to be proposed to food industries and food authorities. In fact, even if microbiological criteria for Salmonella and Listeria monocytogenes in Ready-to-Eat (RTE) products are included in the European Regulation, these parameters are not risk-based, and no microbiological criterion for Bacillus cereus in RTE products is present. For these reasons the behaviour of Salmonella enterica in RTE mixed salad, the microbiological characteristics of RTE spelt salad, and the definition of POs for Bacillus cereus and Listeria monocytogenes in RTE spelt salad have been assessed. Based on the data produced, the following conclusions can be drawn: 1. A rapid growth of Salmonella enterica may occur in mixed-ingredient salads, and strict temperature control during the production chain of the product is critical. 2. Spelt salad is characterized by the presence of high numbers of lactic acid bacteria (LAB). Listeria spp. and Enterobacteriaceae, on the contrary, did not grow during the shelf life, probably due to the relevant metabolic activity of LAB. 3. The use of spelt and cheese compliant with the suggested POs might significantly reduce the incidence of foodborne intoxications due to Bacillus cereus and Listeria monocytogenes, as well as the proportion of recalls, which cause huge economic losses for food companies commercializing RTE products. 4. The approach to calculating the PO values reported in my work can easily be adapted to different food/risk combinations as well as to any changes in the formulation of the same food products. 5. Optimized sampling plans, in terms of the number of samples to collect, can be derived in order to verify compliance with the selected PO values.
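For context (the abstract does not reproduce the calculation behind the proposed PO values), the standard ICMSF relation linking a Performance Objective to the initial contamination level and to the reductions and increases of the hazard along the food chain, with all terms expressed in log10 cfu/g, is

    H_0 - \Sigma R + \Sigma I \le \mathrm{PO}

where H_0 is the initial level of the hazard, ΣR the sum of the reductions (e.g. processing or decontamination steps) and ΣI the sum of the increases (growth and recontamination) up to the point in the chain at which the PO applies.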
Abstract:
Large-scale wireless ad-hoc networks of computers, sensors, PDAs etc. (i.e. nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. An example of ad-hoc networks are sensor networks, which are usually composed of small units able to sense and transmit to a sink elementary data which are successively processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction of energy consumption, are rapidly changing the potential of such systems, moving the attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad-hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without reliance on either flooding/broadcasting operations or GPS. The resulting ad-hoc network does not suffer from the dead-end problem, which happens in geographic-based routing when a node is unable to locate a neighbor closer to the destination than itself. WGrid allows multidimensional data management, since the nodes' virtual coordinates can act as a distributed database without needing any special implementation or reorganization. Any kind of data (both single- and multidimensional) can be distributed, stored and managed. We show how a location service can be easily implemented, so that any search is reduced to a simple query, as for any other data type. WGrid has then been extended by adopting a replication methodology; we called the resulting algorithm WRGrid. Just like WGrid, WRGrid acts as a distributed database without needing any special implementation or reorganization, and any kind of data can be distributed, stored and managed. We have evaluated the benefits of replication on data management, finding, from experimental results, that it can halve the average number of hops in the network. The direct consequences of this are a significant improvement in energy consumption and workload balancing among sensors (number of messages routed by each node). Finally, thanks to the replicas, whose number can be chosen arbitrarily, the resulting sensor network can withstand sensor disconnections/connections, due to failures of sensors, without data loss. Another extension of WGrid is W*Grid, which strongly improves network recovery performance after link and/or device failures that may happen due to crashes or battery exhaustion of devices or to temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each pair of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing the energy consumption. An extensive number of simulations shows the efficiency, robustness and traffic load of the resulting networks under several scenarios of device density and of number of coordinates. Performance has been compared with existing algorithms in order to validate the results.
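As a minimal illustration of the dead-end problem mentioned above (and not of WGrid's virtual-coordinate scheme, which avoids it by construction), the sketch below performs plain greedy forwarding on node coordinates and stops whenever no neighbor is closer to the destination than the current node; the nodes, links and coordinates are made up.

    # Greedy geographic-style forwarding; returns the path and whether it
    # reached the destination or hit a dead end.
    from math import dist

    def greedy_route(nodes, links, src, dst):
        """nodes: id -> coordinate tuple; links: id -> list of neighbor ids."""
        path, current = [src], src
        while current != dst:
            closer = [n for n in links[current]
                      if dist(nodes[n], nodes[dst]) < dist(nodes[current], nodes[dst])]
            if not closer:
                return path, False       # dead end: no neighbor is closer
            current = min(closer, key=lambda n: dist(nodes[n], nodes[dst]))
            path.append(current)
        return path, True

    nodes = {0: (0, 0), 1: (1, 0), 2: (2, 1), 3: (3, 0)}
    links = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(greedy_route(nodes, links, 0, 3))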
Abstract:
This work deals with some classes of linear second-order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}, j = 1,...,m, is the control distance obtained by minimizing the time needed to go between two points along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling “X-ellipticity” and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second-order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields be left-invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity result for mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition of regularity of boundary points for the Dirichlet problem on an open subset of R^N related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator, and introduce the notion of “quasi-boundedness”. Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
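For reference, the control distance described in words above has the standard formulation

    d_X(x, y) = \inf \Big\{ T > 0 \;:\; \exists\, \gamma : [0, T] \to \mathbb{R}^N,\ \gamma(0) = x,\ \gamma(T) = y,\ \dot{\gamma}(t) = \sum_{j=1}^{m} a_j(t)\, X_j(\gamma(t)),\ \sum_{j=1}^{m} a_j(t)^2 \le 1 \Big\} ,

i.e. the infimum of the times needed to connect x and y along absolutely continuous curves whose velocity is a sub-unit combination of the vector fields X_1, ..., X_m.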
Abstract:
Humans process numbers in a way similar to animals. There are countless studies in which similar performance between animals and humans (adults and/or children) is reported. Three models have been developed to explain the cognitive mechanisms underlying number processing. The triple-code model (Dehaene, 1992) posits a mental number line as the preferred way to represent magnitude. The mental number line gives rise to three particular effects: the distance, magnitude and SNARC effects. The SNARC effect shows a spatial association between number and space representations: small numbers are associated with the left side of space, while large numbers are associated with the right side. Recently a vertical SNARC effect has been found (Ito & Hatta, 2004; Schwarz & Keus, 2004), reflecting a space-related bottom-to-top representation of numbers. Horizontal and vertical magnitude representations could influence subjects' performance in explicit and implicit digit tasks. This research project aimed to investigate the spatial components of number representation using different experimental designs and tasks. Experiment 1 focused on horizontal and vertical number representations, in within- and between-subjects designs, in parity and magnitude comparison tasks, presenting positive or negative Arabic digits (1-9 without 5). Experiment 1A replicated the SNARC and distance effects in both spatial arrangements. Experiment 1B showed a horizontal reversed SNARC effect in both tasks, while a vertical reversed SNARC effect was found only in the comparison task. In Experiment 1C two groups of subjects performed both tasks under two different instruction-responding hand assignments with positive numbers. The results did not show any significant difference between the two assignments, even if the vertical number line seemed to be more flexible than the horizontal one. On the whole, Experiment 1 seemed to demonstrate a contextual (i.e. task-set) influence on the nature of the SNARC effect. Experiment 2 focused on the effect of horizontal and vertical number representations on spatial biases in paper-and-pencil bisection tasks. In Experiment 2A the participants were requested to bisect physical and number (2 or 9) lines, horizontally and vertically. The findings demonstrated that digit 9 strings tended to generate a more rightward bias than digit 2 strings horizontally. However, in the vertical condition the digit 2 strings generated a greater upward bias than the digit 9 strings, suggesting a top-to-bottom number line. In Experiment 2B the participants were asked to bisect lines flanked by numbers (i.e. 1 or 7) in four spatial arrangements: horizontal, vertical, right-diagonal and left-diagonal lines. Four number conditions were created according to congruent or incongruent number-line representation: 1-1, 1-7, 7-1 and 7-7. The main results showed a more reliable rightward bias in the horizontal congruent condition (1-7) than in the incongruent condition (7-1). Vertically, the incongruent condition (1-7) determined a significant bias towards the bottom side of the line compared with the congruent condition (7-1). Experiment 2 suggested a more rigid horizontal number line, while in the vertical condition the number representation could be more flexible. In Experiment 3 we adopted the materials of Experiment 2B in order to look for a number-line effect on temporal (motor) performance.
The participants were presented with horizontal, vertical, right-diagonal and left-diagonal lines flanked by the same digits (i.e. 1-1 or 7-7) or by different digits (i.e. 1-7 or 7-1). The digits were spatially congruent or incongruent with their respective hypothesized mental representations. Participants were instructed to touch the lines either close to the large digit, or close to the small digit, or to bisect the lines. Number processing influenced movement execution more than movement planning. Number congruency influenced spatial biases mostly along the horizontal but also along the vertical dimension. These results support a two-dimensional magnitude representation. Finally, Experiment 4 addressed the visuo-spatial manipulation of number representations for accessing and retrieving arithmetic facts. The participants were requested to perform a number-matching task and an addition-verification task. The findings showed an interference effect between sum-nodes and neutral-nodes only with a horizontal presentation of the digit cues in the number-matching task. In the addition-verification task, performance was similar for horizontal and vertical presentations of the arithmetic problems. In conclusion, the data seem to show an automatic activation of the horizontal number line, which is also used to retrieve arithmetic facts. The horizontal number line seems to be more rigid and the preferred way to order numbers, from left to right. A possible explanation could be the left-to-right direction of reading and writing. The vertical number line seems to be more flexible and more task-dependent, perhaps reflecting the many examples in the environment that represent numbers either from bottom to top or from top to bottom. However, the bottom-to-top number line seemed to be activated by explicit task demands.
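For readers unfamiliar with how the SNARC effect discussed above is usually quantified (this is the conventional individual-regression approach, not necessarily the analysis used in these experiments), the right-minus-left response-time difference (dRT) is regressed on number magnitude for each participant, and a negative slope indicates the canonical left-to-right association. The data below are made up.

    # Per-participant SNARC slope: regress dRT = RT_right - RT_left on magnitude.
    import numpy as np

    digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
    rt_left = np.array([520, 515, 510, 512, 525, 530, 534, 540], dtype=float)   # ms
    rt_right = np.array([545, 538, 530, 528, 515, 512, 508, 505], dtype=float)  # ms

    drt = rt_right - rt_left                       # dRT per digit
    slope, intercept = np.polyfit(digits, drt, 1)  # linear fit: dRT ~ magnitude
    print(slope)                                   # negative slope -> SNARC effect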
Abstract:
This doctoral research aims to examine the evolution of Byzantine political theology and its reflections in imperial propaganda in the period between the thirteenth and fourteenth centuries, through the study of the manifestations of that ideology in the iconography and numismatics of the period under examination. The interdisciplinary interweaving of these fields of research, iconography and numismatics, with an innovative methodology whose results promise to be extremely fruitful, makes it possible to grasp the concrete, though perhaps more hidden, ways in which political ideology and imperial propaganda were realized in a Byzantine empire by then reduced to a constellation of individual potentates of limited extent. The specific theme of this study concerns certain iconographies regarded as unpublished, or less traditional, in the Byzantine numismatic panorama, issued in particular by the mint of Thessalonica between the thirteenth and fourteenth centuries, which are examined here in relation to the evolution of imperial representation. Among them, the hitherto unpublished iconography of the winged (pterophoros) emperor stands out for its semantic interchangeability with the image of the archangel. The main objective of the study has been to trace iconological elements common, as far as possible, to all the iconographic subjects examined, assessing the ideological and propagandistic substratum underlying the iconological significance of each numismatic type.