913 results for cyber-physical systems (CPS)
Abstract:
To study electromagnetic geophysical prospecting problems through analogue modelling, the full-scale field conditions are reproduced in the laboratory at reduced scale according to the theory of similitude. To investigate problems in the VLF, AFMAG and MT techniques, it is therefore often necessary to create a uniform field in the experimental set-up. Several physical systems for generating uniform fields are analysed theoretically in this thesis: the circular coil, the Helmholtz coil pair, the solenoid, a single current sheet and two parallel current sheets. Analytical expressions were obtained for the magnetic field at a point in space and, from them, the conditions for field uniformity. In the cases where these conditions could not be obtained analytically, the percentage deviation of the field with respect to a pre-selected point was calculated. Magnetic field maps, as well as percentage-deviation maps, are presented for all the systems studied, and the areas and spatial volumes corresponding to various percentage deviations from the uniform field were also computed. A comparative study of these systems shows that the solenoid is the most efficient way to create a uniform field, followed by the Helmholtz coil system. However, the field created inside a solenoid lies in a closed space where it is difficult to place models and exchange them between experiments. The use of Helmholtz coils is therefore recommended: this system provides a uniform field with enough open space to facilitate the experiments.
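As a pointer to how such uniformity figures can be checked, the sketch below computes the on-axis field of a Helmholtz pair from the standard circular-loop formula and prints the percentage deviation from the centre value. It is a minimal illustration only: the radius, current and evaluation grid are invented here, and the thesis maps the full off-axis field as well.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T m / A]

def loop_axial_field(z, a=0.5, current=1.0):
    """On-axis field of a single circular loop of radius a [m], offset z [m]."""
    return MU0 * current * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

def helmholtz_axial_field(z, a=0.5, current=1.0):
    """Pair of coaxial loops at z = -a/2 and z = +a/2 (separation = radius)."""
    return (loop_axial_field(z - a / 2.0, a, current)
            + loop_axial_field(z + a / 2.0, a, current))

# Percentage deviation of the field from its value at the centre of the
# pair -- the figure of merit used to map the uniform-field region.
b0 = helmholtz_axial_field(0.0)
for z in np.linspace(-0.2, 0.2, 9):
    dev = 100.0 * (helmholtz_axial_field(z) - b0) / b0
    print(f"z = {z:+.3f} m   deviation = {dev:+.5f} %")
```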
Abstract:
Optics is the study of the interaction of light with physical systems. Human vision is a product of the interaction of light with the eye, a very peculiar physical system. Here we present a basic study of the relationship between optics and human vision, covering: the fundamentals and physical properties that characterize light and colour, and the characteristics of the media in which light propagates; the basic laws of geometrical optics, founded on the rectilinear propagation of light in the form of rays, the independence of light rays and the principle of reversibility of light paths, which underlie image formation in lenses and mirrors and are applied here to image formation in the human eye; the laws of refraction and reflection and the types of lenses, which permit the construction of optical instruments for studying the physical universe and of appliances to correct vision defects; and the human vision process itself, understood as the reception of light (electromagnetic radiation in the wavelength range visible to us) by the eye and the transmission of the information gathered by the retina to the brain, where it is interpreted. Vision involves a biophysical relation between light and the biological structure of the eye, which consists of the cornea, iris, crystalline lens and retina; we analyse how each of these parts functions in receiving images and sending their information to the brain.
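The two relations this material leans on can be stated compactly; these are the textbook forms (sign conventions vary between references):

\[
n_1 \sin\theta_1 = n_2 \sin\theta_2 \quad\text{(refraction)},
\qquad
\frac{1}{s_o} + \frac{1}{s_i} = \frac{1}{f} \quad\text{(thin lens)} .
\]

In the eye the image distance $s_i$ (cornea to retina) is essentially fixed, so focusing on objects at different distances $s_o$ is achieved by changing the focal length $f$ of the crystalline lens (accommodation).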
Abstract:
Synchronization in nonlinear dynamical systems, especially chaotic ones, is a field of research in several areas of knowledge, such as Mechanical and Electrical Engineering, Biology and Physics, among others. In simple terms, two systems are synchronized if, after a certain time, they display similar behaviour or behaviour occurring at the same time; the sound and image in a film are an everyday example of this phenomenon. Studies of synchronization include continuous dynamical systems, governed by differential equations, and discrete-time dynamical systems, also called maps. Maps correspond, in general, to discretizations of differential equations and are widely used to model physical systems, mainly because of their computational convenience: it is enough to iterate them from given initial conditions to obtain the trajectories of the system. This final-year project is based on the study of the map known as the Zaslavsky web map, which results from combining the motion of a particle in a constant magnetic field with an electrostatic wave propagating perpendicular to that field. Besides the interest in the particularities of this map, the objective was to deepen concepts of nonlinear dynamics such as equilibrium points, linear stability, nonlinear stability, bifurcations and chaos.
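For reference, a commonly quoted form of the web map is a kick followed by a rotation; the sketch below uses one standard parameterization (conventions for the kick strength K and the rotation angle alpha = 2*pi/q differ between references), with q = 4 giving the four-fold stochastic web.

```python
import numpy as np

def web_map(u, v, K=1.0, q=4):
    """One iteration of the web map: a kick of strength K followed by a
    rotation through alpha = 2*pi/q (q = 4 gives the four-fold web)."""
    alpha = 2.0 * np.pi / q
    kicked = u + K * np.sin(v)
    return (kicked * np.cos(alpha) + v * np.sin(alpha),
            -kicked * np.sin(alpha) + v * np.cos(alpha))

# Iterate a few orbits from random initial conditions; to render the
# stochastic web one would store and plot every visited point.
rng = np.random.default_rng(0)
for u0, v0 in rng.uniform(-3.0, 3.0, size=(5, 2)):
    u, v = u0, v0
    for _ in range(10_000):
        u, v = web_map(u, v)
    print(f"orbit from ({u0:+.2f}, {v0:+.2f}) is at ({u:+.2f}, {v:+.2f})")
```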
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Bio-physical and bio-chemical systems are often based on nanoscale phenomena in different host environments; because they involve many particles, they can often not be solved explicitly, and a physicist, biologist or chemist has to rely on approximate or numerical methods instead. For a certain class of systems, called integrable, there exist particular mathematical structures and symmetries that permit an exact and explicit description. Most integrable systems we come across are low-dimensional, for instance a one-dimensional chain of coupled atoms in a DNA molecular system with a particular direction, or existing as a vector in the environment. This theoretical research paper addresses one of the pioneering 'Reaction-Diffusion' aspects of the DNA-plasma material system through an integrable lattice-model approach utilizing quantized functional algebras, with the aim of disseminating these new developments and initiating novel computational and design paradigms.
Abstract:
In this work we present the idea of how generalized ensembles can be used to simplify the operational study of non-additive physical systems. As an alternative to the usual methods of direct integration or mean-field theory, we show how the solution of the Ising model with infinite-range interactions is obtained by using a generalized canonical ensemble. We describe how the thermodynamic properties of this model in the presence of an external magnetic field follow from simple parametric equations. Without impairing the usual interpretation, we obtain the same critical behaviour as observed in traditional approaches.
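For orientation, in the standard treatment the infinite-range (Curie-Weiss) Ising model reduces to the self-consistency condition m = tanh(beta*(J*m + h)); the sketch below solves it by damped fixed-point iteration. This is the textbook route, not the generalized-ensemble parametrization proposed in the paper.

```python
import math

def curie_weiss_magnetization(beta, h=0.0, J=1.0, tol=1e-12, max_iter=100_000):
    """Solve m = tanh(beta*(J*m + h)) by damped fixed-point iteration."""
    m = 1.0 if h >= 0.0 else -1.0          # saturated starting guess
    for _ in range(max_iter):
        m_new = math.tanh(beta * (J * m + h))
        if abs(m_new - m) < tol:
            return m_new
        m = 0.5 * (m + m_new)              # damping for stability
    return m

# The infinite-range model orders spontaneously below T_c, i.e. for beta*J > 1.
for beta in (0.8, 1.0, 1.2, 2.0):
    print(f"beta*J = {beta:.1f}   m = {curie_weiss_magnetization(beta):+.6f}")
```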
Abstract:
Since the development of quantum mechanics it has been natural to analyze the connection between the classical and quantum mechanical descriptions of physical systems. In particular, one expects that, in some sense, when quantum mechanical effects become negligible the system will behave as dictated by classical mechanics. One famous relation between classical and quantum theory is due to Ehrenfest. This result was later developed and put on firm mathematical foundations by Hepp. He proved that matrix elements of bounded functions of quantum observables between suitable coherent states (depending on Planck's constant h) converge to classical values evolving according to the expected classical equations as h goes to zero. His results were later generalized by Ginibre and Velo to bosonic systems with infinitely many degrees of freedom and to scattering theory. In this thesis we study the classical limit of the Nelson model, which describes non-relativistic particles, whose evolution is dictated by the Schrödinger equation, interacting through a Yukawa-type potential with a scalar relativistic field, whose evolution is dictated by the Klein-Gordon equation. The classical limit is a mean-field and weak-coupling limit. We prove that the transition amplitude of a creation or annihilation operator between suitable coherent states converges, in the classical limit, to the solution of the system of differential equations that describes the classical evolution of the theory. The quantum evolution operator converges to the evolution operator of the fluctuations around the classical solution. Transition amplitudes of normal-ordered products of creation and annihilation operators between coherent states converge to the corresponding products of the classical solutions. Transition amplitudes of normal-ordered products of creation and annihilation operators between fixed-particle states converge to an average of products of classical solutions, corresponding to different initial conditions.
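Schematically, and with constants, form factors and sign conventions suppressed (this is a structural sketch, not the precise system proved in the thesis), the classical equations coupling the particle wave function $u$ to the real scalar field $A$ have the form

\[
i\,\partial_t u = -\tfrac{1}{2}\Delta u + (\chi * A)\,u,
\qquad
(\Box + m^2)\,A = -\,\chi * |u|^2 ,
\]

where $\chi$ is the form factor regularizing the Yukawa coupling, $*$ denotes convolution, and the mean-field and weak-coupling scalings are absorbed into the units.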
Abstract:
The formation of a market price for an asset can be understood as the superposition of the individual actions of the market participants, which cumulatively generate supply and demand. This is comparable, in statistical physics, to the emergence of macroscopic properties from microscopic interactions between the components of a system. The distribution of price changes on financial markets differs markedly from a Gaussian. This leads to empirical peculiarities of the price process which include, besides its scaling behaviour, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method is developed for quantifying pattern-based complex correlations in a time series. With this methodology, significant evidence is found that typical behavioural patterns of financial market participants manifest themselves on short time scales: the reaction to a given price history is not purely random; rather, similar price histories provoke similar reactions. Starting from the study of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive trend to a negative one. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time these properties are also independent of the market analysed: trends that last only seconds show the same characteristics as trends on time scales of months. This opens the possibility of learning more about financial market bubbles and their collapse, since trends on small time scales occur far more frequently. In addition, a Monte Carlo based simulation of the financial market is analysed and extended in order to reproduce the empirical properties and to gain insight into their causes, which are to be sought partly in the market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive procedures, a substantial reduction in computing time is achieved by parallelization on a graphics-card architecture. To illustrate the wide range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant run-time gains. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
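A toy version of the pattern idea (not the estimator developed in the thesis; function and parameter names are invented for the sketch) is to condition the next price move on the sign pattern of the preceding moves. On i.i.d. noise every conditional mean should vanish, so persistent deviations on real data signal the non-random reactions described above.

```python
import numpy as np

def pattern_conditioned_means(returns, k=3):
    """Average the return that follows each length-k sign pattern.

    If reactions to a given price history were purely random, every
    conditional mean would match the unconditional one; systematic
    deviations are the kind of pattern-based correlation targeted here.
    """
    signs = np.sign(returns)
    buckets = {}
    for t in range(k, len(returns)):
        key = tuple(signs[t - k:t])
        buckets.setdefault(key, []).append(returns[t])
    return {key: (np.mean(vals), len(vals)) for key, vals in buckets.items()}

# Control experiment on i.i.d. noise: all conditional means should be ~0.
rng = np.random.default_rng(1)
r = rng.normal(0.0, 1.0, 100_000)
for pattern, (mean, n) in sorted(pattern_conditioned_means(r, k=3).items()):
    print(pattern, f"mean = {mean:+.4f}", f"n = {n}")
```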
Abstract:
A quantum critical point (QCP) is a singularity in the phase diagram arising because of quantum mechanical fluctuations. The exotic properties of some of the most enigmatic physical systems, including unconventional metals and superconductors, quantum magnets and ultracold atomic condensates, have been related to the importance of critical quantum and thermal fluctuations near such a point. However, direct and continuous control of these fluctuations has been difficult to realize, and complete thermodynamic and spectroscopic information is required to disentangle the effects of quantum and classical physics around a QCP. Here we achieve this control in a high-pressure, high-resolution neutron scattering experiment on the quantum dimer material TlCuCl3. By measuring the magnetic excitation spectrum across the entire quantum critical phase diagram, we illustrate the similarities between quantum and thermal melting of magnetic order. We prove the critical nature of the unconventional longitudinal (Higgs) mode of the ordered phase by damping it thermally. We demonstrate the development of two types of criticality, quantum and classical, and use their static and dynamic scaling properties to conclude that quantum and thermal fluctuations can behave largely independently near a QCP.
Abstract:
Transaction costs, one often hears, are the economic equivalent of friction in physical systems. Like physicists, economists can sometimes neglect friction in formulating theories; but like engineers, they can never neglect friction in studying how the system actually does (let alone should) work. Interestingly, however, the present-day economics of organization also ignores friction. That is, almost single-mindedly, the literature analyzes transactions from the point of view of misaligned incentives and (especially) transaction-specific assets. The costs involved are certainly costs of running the economic system in some sense, but they are not obviously frictions. Stories about frictions in trade are not nearly as intriguing as stories about guileful trading partners and expensive assets placed at risk. But I will argue that these seemingly dull categories of cost, what Baldwin and Clark (2003) call mundane transaction costs, actually have a secret life. They are at least as important as, and quite probably far more important than, the more glamorous costs of asset specificity in explaining the partition between firm and market. These costs also have a secret life in another sense: they have a secret life cycle. I will argue that these mundane transaction costs provide much better material for helping us understand how the boundaries among firms, markets, and hybrid forms change over time.
Abstract:
A 30-year series (1978-2007) of photographic records was analysed to determine changes in lake ice cover, local (low-elevation) and montane (high-elevation) snow cover, and the phenological stages of mountain birch (Betula pubescens ssp. czerepanovii) at the Abisko Scientific Research Station, Sweden. In most cases, the photograph-derived data showed no significant difference in phenophase score from manually observed field records of the same period, demonstrating the accuracy and potential of weekly repeat photography as a quicker, cheaper and more adaptable tool for remotely studying phenology in both biological and physical systems. Overall, increases in ambient temperature coupled with decreases in winter ice and snow cover, and the earlier appearance of birch foliage, signal a reduction in the length of winter, a shift towards earlier springs and a lengthening of the available growing season in the Swedish sub-arctic.
Abstract:
Non-linear physical systems of infinite extent are conveniently modelled using FE–BE coupling methods, which combine the advantages of each approach. Several possibilities of FEM–BEM coupling and their performance in some practical cases are discussed in this paper. Parallelizable coupling algorithms based on domain decomposition are developed and compared with the most traditional coupling methods.
Abstract:
Engineering career models were diverse in Europe, and Spain is now adopting the Bologna process for European universities. Separate from the older universities, which remain partly technically active, civil engineering (Caminos, Canales y Puertos) started in Spain at the end of the 18th century, adopting the French model of Upper Schools for state civil servants with an entrance examination. After the intense wars around 1800, the Ingenieros de Montes appeared as an Upper School to conserve the forest regions, and in 1855 the Ingenieros Agrónomos followed, to advance the related techniques and practices. Other engineering disciplines appeared as Upper Schools oriented more towards private industry. All these Upper Schools acquired associated Lower Schools of Ingeniero Técnico. Recently both grew considerably in number and evolved, linked also to the recognized professions. Spanish society, within the European Community, evolved markedly around the year 2000, in part very successfully but with severe imbalances that caused high youth unemployment in the 2008-2011 crisis. With the Bologna process, substantial formal changes arrived from 2010-11 and were accepted with intense adaptation. The Lower Schools are converging towards the Upper Schools, and since 2010-11 both have shifted to various four-year degrees (Grado), some tied to the pre-existing professions, and to diverse Masters. Their acceptance by incoming students has started relatively well and will evolve, and the acceptance of the new degrees for employment in Spain, Europe or elsewhere will be essential. Each Grado now has a fairly rigid curriculum and programme; MOODLE was introduced to connect students, and specific uses of personal computers are taught in each subject. The Escuela de Agrónomos centre, reorganized under its old name in its original buildings at the entrance of the Moncloa campus, offers Grados in Agronomic Engineering and Science for various public and private agricultural activities, Alimentary Engineering for food activities and control, Agro-Environmental Engineering more related to environmental activities, and in part Biotechnology, also in laboratories on the Monte-Gancedo campus for Plant Biotechnology and Computational Biotechnology. The curricula include basics, engineering, practical work, visits, English, a "project of end of career" and work placements. Some Masters will lead to specific professional diplomas; the list now includes Agro-Engineering, Agro-Forestal Biotechnology, Agro and Natural Resources Economy, Complex Physical Systems, Gardening and Landscaping, Rural Genie, Phytogenetic Resources, Plant Genetic Resources, Environmental Technology for Sustainable Agriculture, and Technology for Human Development and Cooperation.
Abstract:
Autonomous mobile robots and remotely operated mobile robots are currently used successfully in a large number of fields, some as diverse as home cleaning, the movement of goods in warehouses and space exploration. However, it is difficult to guarantee the absence of defects in the programs controlling these devices, as in other computing sectors.
There exist different measures of the quality of a system in performing the functions for which it was designed, among them reliability. In most physical systems, reliability degrades as the system ages, generally due to wear. In software systems this does not usually happen: the defects they contain are typically not acquired over time but inserted during development. If, within the software development process, we focus on the coding stage, we can consider a study that tries to determine the reliability of different algorithms, all valid for the same task, according to the defects that programmers may introduce. This basic study may have several applications, such as choosing the algorithm least sensitive to programming defects for the development of a critical system, or establishing more demanding verification and validation procedures when an algorithm with high sensitivity to defects must be used. In this thesis we studied the influence of certain types of software defects on the reliability of three multivariable speed controllers (PID, Fuzzy and LQR) acting on a specific mobile robot. The hypothesis is that the controllers studied offer different reliability when affected by similar defect patterns, and it has been confirmed by the results. From the viewpoint of experimental planning, we proceeded in two steps. First, we conducted the tests necessary to determine whether controllers of the same family (PID, Fuzzy or LQR) offered similar reliability under the same experimental conditions. Once this was confirmed, a class representative was chosen at random from each controller family to perform a more comprehensive test set, in order to obtain data for a fuller comparison of the reliability of the controllers under study. The impossibility of performing a large number of tests with a real robot, and the need to avoid damaging a device that generally has a significant cost, led us to build a multicomputer simulator of the robot. This simulator has been used both to obtain well-tuned controllers and to carry out the various tests required for the reliability experiment.
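A toy illustration of the fault-injection idea (gains, plant dynamics and the seeded defect are all invented here; the thesis uses a full multicomputer simulator of the robot): seed a typical coding defect into one controller term, run repeated trials, and take the fraction of in-tolerance runs as the reliability estimate.

```python
import random

def pid_step(state, error, dt=0.01, kp=2.0, ki=1.0, kd=0.1, defect=False):
    """One PID update; `defect` flips the sign of the integral term,
    standing in for one injected coding fault."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    i_term = -ki * integral if defect else ki * integral
    return kp * error + i_term + kd * derivative, (integral, error)

def run_trial(defect, target=1.0, steps=2000, dt=0.01):
    """Crude first-order plant with sensor noise; a trial succeeds if the
    speed settles within tolerance of the target."""
    speed, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        measured = speed + random.gauss(0.0, 0.01)
        u, state = pid_step(state, target - measured, dt=dt, defect=defect)
        speed += dt * (u - speed)
    return abs(speed - target) < 0.05

random.seed(0)
for defect in (False, True):
    ok = sum(run_trial(defect) for _ in range(20))
    print(f"defect injected: {defect}  ->  {ok}/20 trials within tolerance")
```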
Abstract:
When non-linear physical systems of infinite extent, such as tunnels and perforations, are modelled, it is necessary to simulate suitably both the solution at infinity and the non-linearity. The finite element method (FEM) is a well-known procedure for simulating non-linear behaviour; however, its treatment of the infinite domain through truncation is often questionable. The boundary element method (BEM), on the other hand, is suitable for representing the infinite domain without truncation. By combining both methods, the advantages of each can therefore be exploited. Several possibilities of FEM-BEM coupling and their performance in some practical cases are discussed in this paper. Parallelizable coupling algorithms based on domain decomposition are developed and compared with the most traditional coupling methods.
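As a schematic of the domain-decomposition flavour of coupling (stand-in diagonal operators and invented names; the paper's algorithms operate on real FE and BE discretizations), the interface problem can be written as (S_fem + S_bem) lam = g_fem + g_bem and solved by a relaxed iteration in which each sweep requires only a solve with one subdomain's operator:

```python
import numpy as np

def coupled_interface_solve(S_fem, S_bem, g_fem, g_bem,
                            theta=0.2, tol=1e-10, max_iter=500):
    """Relaxed (Richardson-type) iteration on the interface unknowns.

    Each sweep needs only a solve with the BE-side operator, so the FE and
    BE solvers stay separate -- the property that makes domain-decomposition
    couplings easy to parallelize, in contrast to a monolithic assembly.
    """
    lam = np.zeros_like(g_fem)
    for it in range(max_iter):
        residual = g_fem + g_bem - (S_fem + S_bem) @ lam
        if np.linalg.norm(residual) < tol:
            return lam, it
        lam = lam + theta * np.linalg.solve(S_bem, residual)
    return lam, max_iter

# Tiny synthetic interface problem with SPD stand-ins for the Schur complements.
S_fem = np.diag([4.0, 6.0, 8.0, 10.0])
S_bem = 2.0 * np.eye(4)
g_fem = np.array([1.0, -2.0, 0.5, 3.0])
g_bem = np.zeros(4)

lam, iters = coupled_interface_solve(S_fem, S_bem, g_fem, g_bem)
print(f"converged in {iters} sweeps, residual = "
      f"{np.linalg.norm((S_fem + S_bem) @ lam - (g_fem + g_bem)):.2e}")
```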