Abstract:
In this thesis, we consider N quantum particles coupled to collective thermal quantum environments. The coupling is energy conserving and scaled in the mean-field way. There is no direct interaction between the particles; they interact only via the common reservoir. It is well known that an initially disentangled state of the N particles will remain disentangled at all times in the limit N → ∞. In this thesis, we evaluate the η-body reduced density matrix (tracing over the reservoirs and the N − η remaining particles). We identify the main disentangled part of the reduced density matrix and obtain the first-order correction term in 1/N. We show that this correction term is entangled. We also estimate the speed of convergence of the reduced density matrix as N → ∞. Our model is exactly solvable, and our results are not based on numerical approximation.
Abstract:
An object-based image analysis (OBIA) approach was used to create a habitat map of the Lizard Reef. Briefly, georeferenced dive and snorkel photo-transect surveys were conducted at different locations surrounding Lizard Island, Australia. For the surveys, a snorkeler or diver swam over the bottom at a depth of 1-2 m in the lagoon, One Tree Beach and Research Station areas, and at 7 m depth in Watson's Bay, taking photos of the benthos at a set height with a standard digital camera while towing a GPS on a surface float that logged its track every five seconds. The camera lens provided a 1.0 m x 1.0 m footprint at 0.5 m height above the benthos. Horizontal distance between photos was estimated by fin kicks and corresponded to a surface distance of approximately 2.0-4.0 m. The coordinates of each benthic photo were approximated from the photo timestamp and the GPS coordinate timestamps using GPS Photo Link Software (www.geospatialexperts.com): the coordinates of each photo were interpolated from the GPS fixes logged at a set time before and after the photo was captured. A dominant benthic or substrate cover type was assigned to each photo by placing 24 random points over each image using the Coral Point Count with Excel extensions program (CPCe; Kohler and Gill, 2006). Each point was then assigned a dominant cover type using a benthic cover type classification scheme containing nine first-level categories: seagrass high (>=70%), seagrass moderate (40-70%), seagrass low (<=30%), coral, reef matrix, algae, rubble, rock and sand. Benthic cover composition summaries of each photo were generated automatically in CPCe. The resulting benthic cover data for each photo were linked to the GPS coordinates, saved as an ArcMap point shapefile, and projected to Universal Transverse Mercator WGS84 Zone 56 South. The OBIA class assignment followed a hierarchical scheme based on membership rules, with levels for "reef", "geomorphic zone" and "benthic community".
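The timestamp-based interpolation described above (matching each photo's timestamp to the GPS fixes logged just before and after it) can be sketched in a few lines. This is an illustrative reconstruction, not the GPS Photo Link implementation; the function name and sample coordinates are hypothetical:

```python
from bisect import bisect_left

def interpolate_photo_coord(track, photo_time):
    """Estimate a photo's position by linear interpolation between the
    GPS fixes logged just before and just after the photo was taken.
    track: list of (unix_time, lat, lon) tuples sorted by time."""
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    if i == 0:
        return track[0][1:]    # photo precedes the track: use the first fix
    if i == len(track):
        return track[-1][1:]   # photo follows the track: use the last fix
    t0, lat0, lon0 = track[i - 1]
    t1, lat1, lon1 = track[i]
    w = (photo_time - t0) / (t1 - t0)  # fractional position between the two fixes
    return (lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0))

# Track logged every five seconds; photo taken between two fixes.
track = [(0, -14.6870, 145.4470), (5, -14.6872, 145.4474)]
print(interpolate_photo_coord(track, 2.5))
```

A production workflow would also need to reconcile camera-clock drift against GPS time before interpolating.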
Abstract:
An accurate and simple technique for determining the focal length of a lens is presented. It consists of measuring the period of the fringes produced by a diffraction grating in the near field when it is illuminated with a beam focused by the unknown lens. In the paraxial approximation, the period of the fringes varies linearly with distance. After some calculations, a simple extrapolation of the data yields the locations of the principal plane and the focal plane of the lens; the focal length is then obtained as the distance between these two planes. The accuracy of the method is limited by the degree of collimation of the incident beam and by the algorithm used to extract the period of the fringes. We have checked the technique with two commercial lenses, one convergent and one divergent, with nominal focal lengths of (+100±1) mm and (−100±1) mm, respectively. The focal lengths we obtained experimentally fall within the interval given by the manufacturer, but with an uncertainty of 0.1%, one order of magnitude smaller than the manufacturer's.
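The extrapolation step can be illustrated with a toy least-squares fit. Purely for illustration, assume the measured fringe period p grows linearly with the distance z, p(z) = a·z + b, and that the two reference planes sit where the fitted line extrapolates to zero and to a reference (grating) period, respectively; these anchor choices and all data below are fabricated, not the paper's actual calibration:

```python
def linear_fit(zs, ps):
    """Ordinary least-squares fit of p = a*z + b."""
    n = len(zs)
    mz = sum(zs) / n
    mp = sum(ps) / n
    a = sum((z - mz) * (p - mp) for z, p in zip(zs, ps)) \
        / sum((z - mz) ** 2 for z in zs)
    b = mp - a * mz
    return a, b

# Synthetic measurements: fringe period (mm) vs. distance (mm) along the bench.
zs = [20.0, 40.0, 60.0, 80.0]
ps = [0.24, 0.28, 0.32, 0.36]        # fabricated data lying on p = 0.002*z + 0.20
a, b = linear_fit(zs, ps)

p_grating = 0.20                      # hypothetical reference (grating) period, mm
z_focal = -b / a                      # plane where the fitted period extrapolates to 0
z_principal = (p_grating - b) / a     # plane where it equals the reference period
focal_length = z_principal - z_focal  # distance between the two planes
print(round(focal_length, 1))         # 100.0 for this synthetic data
```

The point of the sketch is only the arithmetic of the extrapolation: two plane positions read off one fitted line, with the focal length as their separation.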
Abstract:
Historically, the concepts of field-independence, closure flexibility, and weak central coherence have been used to denote a locally, rather than globally, dominated perceptual style. To date, there has been little attempt to clarify the relationship between these constructs, or to examine the convergent validity of the various tasks purported to measure them. To address this, we administered 14 tasks that have been used to study visual perceptual styles to a group of 90 neurotypical adults. The data were subjected to exploratory factor analysis. We found evidence for the existence of a narrowly defined weak central coherence (field-independence) factor that received loadings from only a few of the tasks used to operationalise this concept. This factor is most aptly described as representing the ability to dis-embed a simple stimulus from a more complex array. The results suggest that future studies of perceptual styles should include tasks whose theoretical validity has been empirically verified, as such validity cannot be established merely on the basis of a priori task analysis. Moreover, multiple indices are required to capture the latent dimensions of perceptual styles reliably.
Abstract:
The aim of this study is to explore the suitability of chromospheric images for magnetic modeling of active regions. We use high-resolution images (≈0.2"-0.3") from the Interferometric Bidimensional Spectrometer in the Ca II 8542 Å line, the Rapid Oscillations in the Solar Atmosphere instrument in the Hα 6563 Å line, and the Interface Region Imaging Spectrograph in the 2796 Å line, and compare non-potential magnetic field models obtained from those chromospheric images with models obtained from images of the Atmospheric Imaging Assembly at coronal (171 Å, etc.) and chromospheric (304 Å) wavelengths. Curvilinear structures are automatically traced in those images with the OCCULT-2 code, to which we forward-fitted magnetic field lines computed with the Vertical-Current Approximation Nonlinear Force-Free Field code. We find that: (1) the chromospheric images reveal crisp curvilinear structures (fibrils, loop segments, spicules) that are extremely well-suited for constraining magnetic modeling; (2) these curvilinear structures are field-aligned with the best-fit solution to within a median misalignment angle of μ2 ≈ 4°-7°; (3) the free energy computed from coronal data may underestimate that obtained from chromospheric data by a factor of ≈2-4; (4) the height range of chromospheric features is confined to h ≲ 4000 km, while coronal features are detected up to h = 35,000 km; and (5) the plasma-β parameter is β ≈ 10^-5 - 10^-1 for all traced features. We conclude that chromospheric images reveal important magnetic structures that are complementary to coronal images and need to be included in comprehensive magnetic field models, something that is currently not accommodated in standard NLFFF codes.
Abstract:
The industrial production of aluminium is an electrolysis process in which two superposed horizontal liquid layers are subjected to a mainly vertical electric current supplied by carbon electrodes. The lower layer consists of molten aluminium and lies on the cathode; the upper layer is the electrolyte and is covered by the anode. The interface between the two layers is often perturbed, leading to oscillations, or waves, similar to those on the surface of seas or lakes. The electric currents and the resulting magnetic field give rise to electromagnetic (Lorentz) forces within the fluid, which can amplify these oscillations and adversely affect the process. The vertical-to-horizontal aspect ratio of the electrolytic bath is such that it is advantageous to model the interface motion with the shallow water equations, obtained by depth-averaging the Navier-Stokes equations so that nonlinear and dispersion terms can be taken into account. Although these terms are essential to the prediction of wave dynamics, they are neglected in most of the literature on interface instabilities in aluminium reduction cells, where usually only the linear theory is considered. The unknown variables are the two horizontal components of the fluid velocity, the height of the interface and the electric potential. In this application, a finite volume solution of the double-layer shallow water equations including the electromagnetic sources has been developed, for incorporation into a generic three-dimensional computational fluid dynamics code that also handles heat transfer within the cell.
Abstract:
This thesis presents approximation algorithms for some NP-Hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely-believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: An approximation algorithm for an NP-Hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u,v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes.
We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. 
Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each pair u,v, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + ε)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and for other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
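The k-edge-connectivity notion used above can be checked directly by brute force on tiny graphs (a real implementation would use max-flow rather than enumerate deletions); this sketch is illustrative only and is not one of the thesis's algorithms:

```python
from itertools import combinations

def connected(n, edges, u, v):
    """Reachability test from u to v in an undirected graph on vertices 0..n-1."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {u}, [u]
    while stack:
        x = stack.pop()
        if x == v:
            return True
        for y in adj[x] - seen:
            seen.add(y)
            stack.append(y)
    return False

def k_edge_connected(n, edges, u, v, k):
    """u and v are k-edge-connected iff deleting any set of k-1 edges
    leaves them connected (checked by brute force over all deletions)."""
    return all(
        connected(n, [e for e in edges if e not in set(removed)], u, v)
        for r in range(k)                      # all deletion sets of size < k
        for removed in combinations(edges, r)
    )

# A 4-cycle: opposite vertices are 2-edge-connected but not 3-edge-connected.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(k_edge_connected(4, cycle, 0, 2, 2))   # True
print(k_edge_connected(4, cycle, 0, 2, 3))   # False
```

The brute force costs O(|E|^(k-1)) reachability tests, which is exactly why the thesis's polynomial-time algorithms matter for large k.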
Abstract:
This dissertation concerns the well-posedness of the Navier-Stokes-Smoluchowski system. The system models a mixture of fluid and particles in the so-called bubbling regime. The compressible Navier-Stokes equations governing the evolution of the fluid are coupled to the Smoluchowski equation for the particle density at a continuum level. First, working on fixed domains, the existence of weak solutions is established using a three-level approximation scheme and based largely on the Lions-Feireisl theory of compressible fluids. The system is then posed over a moving domain. By utilizing a Brinkman-type penalization as well as penalization of the viscosity, the existence of weak solutions of the Navier-Stokes-Smoluchowski system is proved over moving domains. As a corollary the convergence of the Brinkman penalization is proved. Finally, a suitable relative entropy is defined. This relative entropy is used to establish a weak-strong uniqueness result for the Navier-Stokes-Smoluchowski system over moving domains, ensuring that strong solutions are unique in the class of weak solutions.
Abstract:
Neural field models of firing-rate activity typically take the form of integral equations with space-dependent axonal delays. Under natural assumptions on the synaptic connectivity we show how one can derive an equivalent partial differential equation (PDE) model that properly treats the axonal delay terms of the integral formulation. Our analysis avoids the so-called long-wavelength approximation that has previously been used to formulate PDE models for neural activity in two spatial dimensions. Direct numerical simulations of this PDE model show instabilities of the homogeneous steady state that are in full agreement with a Turing instability analysis of the original integral model. We discuss the benefits of such a local model and its usefulness in modeling electrocortical activity. In particular, we are able to treat "patchy" connections, whereby a homogeneous and isotropic system is modulated in a spatially periodic fashion. In this case the emergence of a "lattice-directed" traveling wave predicted by a linear instability analysis is confirmed by the numerical simulation of an appropriate set of coupled PDEs. Article published and © American Physical Society 2007.
Abstract:
It has recently been shown that the double exchange Hamiltonian with weak antiferromagnetic interactions has a richer variety of first- and second-order transitions than previously anticipated, and that such transitions are consistent with the magnetic properties of manganites. Here we present a thorough discussion of the variational mean-field approach that leads to these results. We also show that the effect of the Berry phase turns out to be crucial in producing first-order paramagnetic-ferromagnetic transitions near half filling with transition temperatures compatible with the experimental situation. The computation relies on two crucial ingredients: a mean-field ansatz that retains the complexity of a system of electrons with off-diagonal disorder, which is not fully captured by standard mean-field techniques, and the small but significant antiferromagnetic superexchange interaction between the localized spins.
Abstract:
This PhD thesis focuses on the classical scattering of massive/massless particles off black holes, and on double copy relations between classical observables in gauge theories and gravity. This is done in the Post-Minkowskian approximation, i.e. a perturbative expansion of observables controlled by the gravitational coupling constant κ = √(32πG_N), with G_N the Newtonian coupling constant. The investigation uses the Worldline Quantum Field Theory (WQFT), which combines a worldline path integral describing the scattering objects with a QFT path integral, in the Born approximation, describing the intermediate bosons exchanged by the massive/massless particles in the scattering event. We introduce the WQFT by deriving a relation between the Kosower-Maybee-O'Connell (KMOC) limit of amplitudes and worldline path integrals, and then use it to study the classical Compton amplitude and higher-point amplitudes. We also present an application of our formulation to Hard Thermal Loops (HTL), explicitly evaluating hard thermal currents in gauge theory and gravity. Next we move to the classical double copy (CDC), a powerful tool for generating integrands for classical observables related to the binary inspiral problem in General Relativity. In order to use a Bern-Carrasco-Johansson (BCJ)-like prescription directly at the classical level, one has to identify a double copy (DC) kernel encoding the locality structure of the classical amplitude. Such a kernel is evaluated using a theory in which scalar particles interact through bi-adjoint scalars. We show how to extend the classical double copy to account for spinning particles in the framework of the WQFT, where the quantization procedure on the worldline allows us to fully reconstruct the quantum theory on the gravitational side. Finally, we investigate how to describe the scattering of massless particles off black holes in the WQFT.
Abstract:
Ultracold dilute gases occupy an important role in modern physics, and they are employed to verify fundamental quantum theories in most branches of theoretical physics. The scope of this thesis is the study of Bose-Fermi (BF) mixtures at zero temperature with a tunable pairing between bosons and fermions. The mixtures are treated with diagrammatic quantum many-body methods based on the so-called T-matrix formalism. Starting from the Fermi-polaron limit, I explore various values of the relative concentration up to mixtures with a majority of bosons, a case barely considered in previous works. An unexpected quantum phase transition is found to occur in a certain range of BF coupling for mixtures with a slight majority of bosons. The mechanical stability of the mixtures is analysed as the boson-fermion interaction is varied from weak to strong, in the light of experimental results recently obtained for a doubly degenerate Bose-Fermi mixture of 23Na-40K. A possible improvement in the description of the boson-boson repulsion based on Popov's theory is proposed. Finally, the effects of a harmonic trapping potential are described and compared with the experimental condensate-fraction data recently obtained for a trapped 23Na-40K mixture.
Abstract:
The existing models for teaching the social sciences and clinical practice are insufficient for the needs of practical-reflective teaching of the social sciences applied to health. The scope of this article is to reflect on the challenges and perspectives of social science education for health professionals. The important movement bringing together the social sciences and the field of health began in the 1950s, yet weak credentials still prevail. This is due to the low professional status of social scientists in health and the ill-defined position of social science professionals in the health field. It is also due to the scant importance attributed by students to the social sciences, the small number of professionals, and the colonization of the social sciences by the biomedical culture in the health field. Thus, professionals of the social sciences applied to health are still faced with the need to build an identity, even after six decades of presence in the field of health. This is because their ambivalent status has established them as a partial, incomplete and virtual presence, requiring a complex survival strategy in the nebulous area between the social sciences and health.
Abstract:
Local parity-odd domains are theorized to form inside the quark-gluon plasma produced in high-energy heavy-ion collisions. These domains manifest themselves as charge separation along the magnetic field axis via the chiral magnetic effect. The experimental observation of charge separation has previously been reported for heavy-ion collisions at the top RHIC energies. In this Letter, we present results on the beam-energy dependence of the charge correlations in Au+Au collisions at midrapidity for center-of-mass energies of 7.7, 11.5, 19.6, 27, 39, and 62.4 GeV from the STAR experiment. After background subtraction, the signal gradually decreases with decreasing beam energy and tends to vanish by 7.7 GeV. This implies the dominance of hadronic interactions over partonic ones at lower collision energies.