876 results for dS vacua in string theory
Abstract:
This resource can be particularly helpful to students taking the Intermediate Macroeconomics course, which corresponds to the second year of the current Degree in Economics at the University of the Basque Country (UPV/EHU). The resource consists of a collection of eight chapters of multiple-choice questions. For each question the user is asked to select the correct answer. At the end, the tool returns all the correct answers for the whole test, allowing the user to check the validity of his or her answers. A remarkable feature of the tool is that it has been edited in three versions, one for each of the three languages (Spanish, Basque and English) in which the subject is taught at the UPV/EHU.
Abstract:
Computer science and electrical engineering have been the great success story of the twentieth century. The neat modularity and mapping of a language onto circuits has led to robots on Mars, desktop computers and smartphones. But these devices are not yet able to do some of the things that life takes for granted: repair a scratch, reproduce, regenerate, or grow exponentially fast, all while remaining functional.
This thesis explores and develops algorithms, molecular implementations, and theoretical proofs in the context of “active self-assembly” of molecular systems. The long-term vision of active self-assembly is the theoretical and physical implementation of materials that are composed of reconfigurable units with the programmability and adaptability of biology’s numerous molecular machines. En route to this goal, we must first find a way to overcome the memory limitations of molecular systems, and to discover the limits of complexity that can be achieved with individual molecules.
One of the main thrusts in molecular programming is to use computer science as a tool for figuring out what can be achieved. While molecular systems that are Turing-complete have been demonstrated [Winfree, 1996], these systems still cannot achieve some of the feats biology has achieved.
One might think that because a system is Turing-complete, capable of computing “anything,” it can do any arbitrary task. But while it can simulate any digital computational problem, there are many behaviors that are not “computations” in a classical sense, and cannot be directly implemented. Examples include exponential growth and molecular motion relative to a surface.
Passive self-assembly systems cannot implement these behaviors because (a) molecular motion relative to a surface requires a source of fuel that is external to the system, and (b) passive systems are too slow to assemble exponentially-fast-growing structures. We call these behaviors “energetically incomplete” programmable behaviors. This class of behaviors includes any behavior where a passive physical system simply does not have enough physical energy to perform the specified tasks in the requisite amount of time.
As we will demonstrate and prove, a sufficiently expressive implementation of an “active” molecular self-assembly approach can achieve these behaviors. Using an external source of fuel solves part of the problem, so the system is not “energetically incomplete.” But the programmable system also needs to have sufficient expressive power to achieve the specified behaviors. Perhaps surprisingly, some of these systems do not even require Turing completeness to be sufficiently expressive.
Building on a large variety of work by other scientists in the fields of DNA nanotechnology, chemistry and reconfigurable robotics, this thesis introduces several research contributions in the context of active self-assembly.
We show that simple primitives such as insertion and deletion are able to generate complex and interesting results, such as the growth of a linear polymer in logarithmic time and the ability of a linear polymer to treadmill. To this end we developed a formal model for active self-assembly that is directly implementable with DNA molecules. We show that this model is computationally equivalent to a machine capable of producing string languages strictly stronger than the regular languages and, at most, as strong as context-free grammars. This is a great advance in the theory of active self-assembly, as prior models were either entirely theoretical or only implementable in the context of macro-scale robotics.
We developed a chain reaction method for the autonomous exponential growth of a linear DNA polymer. Our method is based on the insertion of molecules into the assembly, which generates two new insertion sites for every initial one employed. The building of a line in logarithmic time is a first step toward building a shape in logarithmic time. We demonstrate the first construction of a synthetic linear polymer that grows exponentially fast via insertion. Using spectrofluorimetry and gel electrophoresis experiments, we show that monomer molecules are converted into the polymer in logarithmic time. We also demonstrate the division of these polymers via the addition of a single DNA complex that competes with the insertion mechanism. This shows the growth of a population of polymers in logarithmic time. We characterize the DNA insertion mechanism that we utilize in Chapter 4. We experimentally demonstrate that we can control the kinetics of this reaction over at least seven orders of magnitude by programming the sequences of DNA that initiate the reaction.
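As a rough illustration of why insertion-based growth reaches a target length in logarithmic time, the toy simulation below tracks polymer length over synchronous rounds of insertions. The rule that every insertion event exposes two new active sites follows the description above; the synchronous rounds and the initial dimer are simplifying assumptions made for the sketch, not the chemical model of the thesis.

```python
# Toy model of insertion-driven polymer growth: each active insertion site
# accepts one monomer per round, and every insertion exposes two new active
# sites (assumption).  Under these rules the length roughly doubles per round,
# so reaching a target length takes O(log N) rounds rather than the O(N)
# rounds needed when monomers can only attach at a single growing end.

import math

def rounds_to_reach(target_length: int) -> int:
    length = 2          # initial dimer with one insertion site between its ends
    active_sites = 1
    rounds = 0
    while length < target_length:
        length += active_sites      # one monomer inserted per active site
        active_sites *= 2           # each insertion exposes two new sites
        rounds += 1
    return rounds

if __name__ == "__main__":
    for n in (10, 100, 1000, 10**6):
        print(f"target {n:>8}: {rounds_to_reach(n):>3} rounds "
              f"(log2 ~ {math.log2(n):.1f})")
```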
In addition, we review co-authored work on programming molecular robots using prescriptive landscapes of DNA origami; this was the first microscopic demonstration of programming a molecular robot to walk on a 2-dimensional surface. We developed a snapshot method for imaging these random walking molecular robots and a CAPTCHA-like analysis method for difficult-to-interpret imaging data.
Abstract:
Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.
This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
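For concreteness, requirements of the kind that such a domain-specific language might compile into LTL could look as sketched below; the signal names (gen_fail, bus_powered, c1, c2) are illustrative assumptions, not specifications taken from the thesis.

```latex
% Hypothetical aircraft electric power requirements expressed in LTL
% (signal names are illustrative only, not taken from the thesis).
\begin{align*}
  &\square\,\bigl(\mathit{gen\_fail} \rightarrow \lozenge\,\mathit{bus\_powered}\bigr)
  && \text{a failed generator is eventually compensated for,}\\
  &\square\,\neg\bigl(c_1 \wedge c_2\bigr)
  && \text{two AC sources never feed the same bus at the same time.}
\end{align*}
```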
The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with the control logic to infer the state of the system.
Abstract:
This thesis introduces fundamental equations and numerical methods for manipulating surfaces in three dimensions via conformal transformations. Conformal transformations are valuable in applications because they naturally preserve the integrity of geometric data. To date, however, there has been no clearly stated and consistent theory of conformal transformations that can be used to develop general-purpose geometry processing algorithms: previous methods for computing conformal maps have been restricted to the flat two-dimensional plane, or other spaces of constant curvature. In contrast, our formulation can be used to produce, for the first time, general surface deformations that are perfectly conformal in the limit of refinement. It is for this reason that we commandeer the title Conformal Geometry Processing.
The main contribution of this thesis is analysis and discretization of a certain time-independent Dirac equation, which plays a central role in our theory. Given an immersed surface, we wish to construct new immersions that (i) induce a conformally equivalent metric and (ii) exhibit a prescribed change in extrinsic curvature. Curvature determines the potential in the Dirac equation; the solution of this equation determines the geometry of the new surface. We derive the precise conditions under which curvature is allowed to evolve, and develop efficient numerical algorithms for solving the Dirac equation on triangulated surfaces.
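In the notation commonly used for spin transformations of surfaces (an assumption about conventions, not a quotation from the thesis), the setup can be sketched as follows, with ψ a quaternion-valued function on the surface, ρ the real potential encoding the prescribed change in curvature, D the intrinsic Dirac operator, and f, f̃ the old and new immersions.

```latex
% Sketch only: the symbols follow the usual spin-transformation notation
% and are assumed, not quoted from the thesis.
(D - \rho)\,\psi = 0,
\qquad
d\tilde{f} = \overline{\psi}\, df\, \psi .
```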
From a practical perspective, this theory has a variety of benefits: conformal maps are desirable in geometry processing because they do not exhibit shear, and therefore preserve textures as well as the quality of the mesh itself. Our discretization yields a sparse linear system that is simple to build and can be used to efficiently edit surfaces by manipulating curvature and boundary data, as demonstrated via several mesh processing applications. We also present a formulation of Willmore flow for triangulated surfaces that permits extraordinarily large time steps and apply this algorithm to surface fairing, geometric modeling, and construction of constant mean curvature (CMC) surfaces.
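A minimal sketch of the "factor once, edit interactively" workflow that a sparse discretization enables is given below; the matrix is a generic sparse symmetric positive-definite stand-in, not the thesis's discrete operator, and the "edits" simply change the right-hand side.

```python
# Minimal sketch of building a sparse system once and reusing its
# factorization while boundary/curvature data change.  The matrix is a
# generic SPD stand-in, NOT the thesis's discrete Dirac operator.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Stand-in sparse SPD matrix (think: a discrete operator on a mesh).
A = sp.random(n, n, density=0.005, format="csr", random_state=0)
A = A @ A.T + sp.identity(n)

solve = spla.factorized(A.tocsc())    # sparse LU, computed once

for edit in range(3):                 # each "edit" only changes the data vector
    b = np.random.default_rng(edit).standard_normal(n)
    x = solve(b)                      # fast back-substitution per edit
    print(f"edit {edit}: residual {np.linalg.norm(A @ x - b):.2e}")
```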
Abstract:
The aim of this paper is to investigate to what extent the known theory of subdifferentiability and generic differentiability of convex functions defined on open sets can be carried out in the context of convex functions defined on not necessarily open sets. Among the main results obtained I would like to mention a Kenderov-type theorem (the subdifferential at a generic point is contained in a sphere), a generic Gâteaux differentiability result in Banach spaces of class S, and a generic Fréchet differentiability result in Asplund spaces. At least two methods can be used to prove these results: first, a direct one, and second, a more general one based on the theory of monotone operators. Since this last theory was previously developed essentially for monotone operators defined on open sets, it was necessary to extend it to the context of monotone operators defined on a larger class of sets, our "quasi open" sets. This is done in Chapter III. As a matter of fact, most of these results have an even more general nature and have roots in the theory of minimal usco maps, as shown in Chapter II.
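For reference, the standard object these results concern is the subdifferential of a convex function f defined on a convex set C in a Banach space X; the definition below is classical, not one of the new results.

```latex
% Classical definition (not a result of the paper): the subdifferential of a
% convex function f on a convex set C in a Banach space X, at a point x in C.
\partial f(x) = \bigl\{\, x^{*} \in X^{*} \;:\;
  f(y) \ge f(x) + \langle x^{*},\, y - x \rangle \ \text{ for all } y \in C \,\bigr\}.
```

For a convex function continuous at an interior point x, Gâteaux differentiability at x amounts to ∂f(x) being a singleton, which is the classical link between generic single-valuedness of the subdifferential, viewed as a monotone operator, and generic differentiability.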
Abstract:
In this thesis, we provide a statistical theory for the vibrational pooling and fluorescence time dependence observed in infrared laser excitation of CO on an NaCl surface. The pooling is seen in experiment and in computer simulations. In the theory, we assume a rapid equilibration of the quanta in the substrate and minimize the free energy subject to the constraint at any time t of a fixed number of vibrational quanta N(t). At low incident intensity, the distribution is limited to one-quantum exchanges with the solid, and so the Debye frequency of the solid plays a key role in limiting the range of this one-quantum domain. The resulting inverted vibrational equilibrium population depends only on fundamental parameters of the oscillator (ωe and ωeχe) and the surface (ωD and T). Possible applications and the relation to the Treanor gas-phase treatment are discussed. Unlike the solid-phase system, the gas-phase system has no Debye-constraining maximum. We discuss the possible distributions for arbitrary N-conserving diatom-surface pairs, and include application to H:Si(111) as an example.
Computations are presented to describe and analyze the high levels of infrared laser-induced vibrational excitation of a monolayer of adsorbed 13CO on an NaCl(100) surface. The calculations confirm that, for situations where the Debye-frequency-limited n-domain restriction approximately holds, the vibrational state population deviates from a Boltzmann population linearly in n. Nonetheless, the full kinetic calculation is necessary to capture the result in detail.
We discuss the one-to-one relationship between N and γ and examine the state space of the new distribution function for varied γ. We derive the free energy, F = NγkT − kT ln(∑Pn), and the effective chemical potential, μn ≈ γkT, for the vibrational pool. We also find that the anticorrelation of neighboring vibrations leads to an emergent correlation that appears to extend further than nearest neighbors.
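As an illustrative sketch of the constrained-equilibrium idea, one can take a Treanor-like form Pn ∝ exp(γn − En/kT) for an anharmonic oscillator and choose the Lagrange multiplier γ so that the mean number of quanta matches the pooled value N; the functional form and the numerical constants below are assumptions made for the example, not values taken from the thesis.

```python
# Sketch of a constrained vibrational distribution (Treanor-like form,
# assumed here): fix the mean number of quanta per oscillator N and solve
# for the Lagrange multiplier gamma such that P_n ∝ exp(gamma*n - E_n/kT).

import numpy as np
from scipy.optimize import brentq

# Morse-like level energies in cm^-1: E_n = we*(n+1/2) - wexe*(n+1/2)^2
we, wexe = 2100.0, 13.0          # rough CO-like constants (assumption)
kT = 208.5                       # ~300 K expressed in cm^-1
n = np.arange(40)
E = we * (n + 0.5) - wexe * (n + 0.5) ** 2

def populations(gamma: float) -> np.ndarray:
    logw = gamma * n - (E - E[0]) / kT
    logw -= logw.max()           # log-sum-exp shift to avoid overflow
    w = np.exp(logw)
    return w / w.sum()

def distribution_for(N: float) -> np.ndarray:
    """Populations whose mean quantum number equals the pooled value N."""
    gamma = brentq(lambda g: float(n @ populations(g)) - N, -50.0, 50.0)
    return populations(gamma)

p = distribution_for(N=3.0)
print("most populated level:", int(np.argmax(p)))
```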
Abstract:
In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.
The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
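To make the cost argument concrete: a standard Monte Carlo estimate of a failure probability pF needs on the order of 10/pF samples to reach a coefficient of variation of roughly 30%, which is what makes small probabilities expensive. The toy estimator below illustrates that baseline; the linear single-degree-of-freedom "structure", the white-noise excitation, and the threshold choice are placeholders, not the dissertation's models.

```python
# Baseline (standard) Monte Carlo estimate of a first-excursion probability:
# the fraction of random excitations whose peak response exceeds a threshold.
# Toy model only; the point is that the sample count needed grows like 1/p_F.

import numpy as np

rng = np.random.default_rng(0)
WN, ZETA, DT, STEPS = 2.0 * np.pi, 0.05, 0.01, 500   # toy SDOF parameters

def peak_response(excitation: np.ndarray) -> float:
    """Peak |displacement| of a damped linear SDOF oscillator (Euler stepping)."""
    x = v = peak = 0.0
    for a in excitation:
        acc = a - 2.0 * ZETA * WN * v - WN**2 * x
        v += acc * DT
        x += v * DT
        peak = max(peak, abs(x))
    return peak

def sample_peaks(n_samples: int) -> np.ndarray:
    return np.array([peak_response(rng.standard_normal(STEPS))
                     for _ in range(n_samples)])

# Calibrate a threshold so the true failure probability is roughly 1%.
threshold = np.quantile(sample_peaks(1000), 0.99)

# Standard Monte Carlo: indicator average and its coefficient of variation.
n = 5000
p_hat = float(np.mean(sample_peaks(n) > threshold))
cov = np.sqrt((1.0 - p_hat) / (n * max(p_hat, 1e-12)))
print(f"p_F ~ {p_hat:.4f}, c.o.v. ~ {cov:.2f}  ({n} simulations)")
```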
Abstract:
In his theory of knowledge, whose definitive formulation is found in the second part of the Ethica, Spinoza states that knowledge acquired through signs belongs to the Imagination, that is, to the first kind of knowledge, which is essentially inadequate since it cannot grasp the nature of things but merely knows them in a mutilated and confused way. However, assigning knowledge ex signis to the imaginative realm cannot imply that Spinoza rejects any and every use of signs to communicate true knowledge, on pain of the very text of the Ethica delegitimizing its own claims to truth at the very moment they are announced. Starting from the principle that there must be a certain way of using signs that manages to circumvent, to some extent, their essentially inadequate constitution in order to communicate adequate ideas, the present investigation reconstructs a theory of language underlying the doctrine of the Ethica, in an attempt to establish by what means a philosophical use of signs can be carried out.
Abstract:
In this work we address the Ginzburg-Landau theory of superconductivity (GL theory). We present its origins, its main features, and its most important results. The fundamental idea of this theory is to describe the phase transition that some metals undergo from a normal phase to a superconducting phase. During a phase transition in type-II superconductors, magnetic flux lines characteristically appear in certain regions of finite size, commonly called vortices. The dynamics of these topological structures is of great interest to the current scientific community and drives countless research groups in the field of superconductivity. Based on this, we study how these topological structures influence a phase transition in a two-dimensional model known as the XY model. In the XY model we see that the main drivers of the phase transition are the vortices (in fact, vortex-antivortex pairs). Villain, observing this fact, realized that the contribution of these topological defects could be made explicit in the partition function of the XY model by performing a duality transformation. This model serves as inspiration for the proposal of this work. We present here a model based on physical considerations about condensed-matter systems and, at the same time, we use a formalism recently developed in reference [29] that makes it possible to render explicit the contribution of the topological defects in the original action proposed in our theory. After that, we analyze some classical limits and finally carry out the quantum fluctuations in order to obtain the complete expression for the vortex correlation function, which can be very useful in theories of interacting vortices (vortex dynamics).
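To make the role of the vortices concrete, the short sketch below counts the winding number of the phase field around each plaquette of a periodic square lattice; plaquettes with winding ±1 are the vortices and antivortices whose pairs drive the transition. The random configuration is a generic lattice XY illustration, not the action actually proposed in this work.

```python
# Locate vortices and antivortices in a 2D XY configuration by measuring
# the winding of the phase around each plaquette of a periodic square lattice.

import numpy as np

rng = np.random.default_rng(1)
L = 32
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, L))   # phase angle on each site

def wrap(dphi: np.ndarray) -> np.ndarray:
    """Map phase differences into [-pi, pi)."""
    return (dphi + np.pi) % (2.0 * np.pi) - np.pi

A = theta
B = np.roll(theta, -1, axis=1)          # neighbor to the right
C = np.roll(theta, -1, axis=(0, 1))     # diagonal neighbor
D = np.roll(theta, -1, axis=0)          # neighbor above

# Winding number of each plaquette: +1 vortex, -1 antivortex, 0 otherwise.
winding = np.rint((wrap(B - A) + wrap(C - B) + wrap(D - C) + wrap(A - D))
                  / (2.0 * np.pi)).astype(int)

print("vortices:", int((winding == 1).sum()),
      "antivortices:", int((winding == -1).sum()))
```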
Abstract:
The term Theory of Mind refers to the ability that human beings acquire to understand their own mental states and those of others and to predict actions and behaviors within a social interaction. The main questions in Theory of Mind research are: determining what kind of knowledge underlies this ability, what its origin and development are, and at what point it manifests itself (Astington and Gopnik, 1991). Taking into account that language can be seen as an instrument of cognition (Spelke, 2003), through which the speaker acquires support for planning actions, contributing to the performance of complex cognitive tasks (Corrêa, 2006), de Villiers (2004, 2005, 2007 and subsequent work) argues, with respect to Theory of Mind, that its development depends on linguistic development, being directly linked to the acquisition of mental-state verbs, such as think, because these verbs subcategorize for an embedded clause. For her, mastery of this structure makes it possible for the false-belief reasoning of Theory of Mind to be effectively carried out. The present dissertation aims to verify to what extent there is a direct and necessary influence of language on the performance of Theory of Mind tasks. To this end, we focus our attention on people who are, for some reason, partially deprived of linguistic capacity but whose cognitive capacity remains intact: aphasics. Through a study conducted with two Broca's aphasic patients, selected by the classical criteria, we sought to understand whether the ability to predict actions is intact in these patients or whether this ability was lost along with language. To this end, we administered two false-belief Theory of Mind tests. The first relied on verbal support: a narration of events and the expectations of the characters involved. The test question was manipulated for degree of complexity by crossing two factors: simple versus complex sentences and WH-in situ versus moved WH. The second test followed a non-verbal format, consisting of a sequence of images in which the subject had to decide which of the last two images presented coherently concluded the story. If there were a direct influence of language on the performance of Theory of Mind tasks, one would expect the difficulty in the verbal test to reflect the degree of complexity of the questions presented. Additionally, performance on the non-verbal test should also be unsatisfactory, given the linguistic impairment presented by the subjects tested. For the first test, the patients' performance was at chance level and inferior to that of the control group; for the second test, performance was 100%. Overall, the results suggest that false-belief reasoning in Theory of Mind is achieved by these subjects, given the fully satisfactory performance on the non-verbal test. The results of the verbal test, on the other hand, attest only to the linguistic difficulty characteristic of this population. Thus, it is concluded that once the Theory of Mind ability has developed, it remains intact in the minds of these patients, even if they are partially deprived of linguistic capacity.
Abstract:
This study addresses perversion, a topic of great political and clinical relevance within psychoanalysis, which reveals itself as one of the crucial problems of psychoanalysis. Its political importance becomes evident when it emphasizes, with Lacan, that the politics of psychoanalysis is the politics of the lack-of-being correlated with the ethics of desire, and when it proposes that accepting the diversity of jouissance, with its multiple modalities, and leading the subject to unveil and confront his surplus-jouissance is an ethical indication that should guide the politics and practice of the psychoanalyst. The research, developed from the theoretical-clinical foundations of Freud and Lacan, highlights the logical movement that delimits perversion in the Freudian work and identifies a convergence in the theses of both authors regarding the differentiation between perversity and perversion as a clinical structure. The construction of a series of clinical cases and the study of the life and work of five famous writers (the Marquis de Sade, Sacher-Masoch, André Gide, Jean Genet, Yukio Mishima) illustrate that practices of perverse jouissance do not determine the perverse structure. The matheme of the Sadean fantasy, forged by Lacan, is taken up to demonstrate the Freudian Verleugnung, the way perverse subjects find to deal with the castration of the mother/woman. The quantic formulas of sexuation, formulated by Lacan, are used to show that the difference between neurosis and perversion becomes explicit in the strategy of jouissance that the subject employs in the relationship with his partner. The research, which began as a bibliographical study and developed along theoretical-clinical lines, reveals that the clinic of perversion has much to teach psychoanalysts about the four fundamental concepts of psychoanalysis, about the relationship between the fetish, the mask and the semblant, about sublimation, and about strategies of jouissance, in particular masochism. This study calls the analytic community to debate, since it sets out, through the presentation of cases of perversion, the impasses of the clinic, in addition to making present the real of the clinic in the conduct of the treatment of a case of perversion.