921 results for classical conditioning, mere exposure effect, classical conditioning of preferences.
Abstract:
Authority files serve to uniquely identify real-world ‘things’ or entities, like documents, persons and organisations, and their properties, like relations and features. Already important in the classical library world, authority files are indispensable for adequate information retrieval and analysis in the computer age. This is because, even more than humans, computers are poor at handling ambiguity. Through authority files, people tell computers which terms, names or numbers refer to the same thing or have the same meaning by giving equivalent notions the same identifier. Authority files thus signpost the internet, where these identifiers are interlinked on the basis of relevance. When executing a query, computers can navigate from identifier to identifier by following these links and collect the queried information along these so-called ‘crosswalks’. In this context, identifiers also go under the name controlled access points. Identifiers become even more crucial now that massive data collections such as library catalogues and research datasets are releasing their hitherto contained data directly to the internet. This development has been coined Linked Open Data. The corresponding name for the internet is the Web of Data, in contrast to the classical Web of Documents.
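As an illustration of the identifier ‘crosswalks’ described above, the minimal Python sketch below builds a small graph of equivalence links between authority identifiers and collects every record reachable from a queried identifier. All identifiers, links and records are invented for illustration; this is a toy model of the idea, not any specific authority-file system.

```python
from collections import defaultdict, deque

# Hypothetical equivalence links between identifiers for the same entity
# in different authority files (all names are invented for illustration).
SAME_AS = [
    ("authfileA:12345", "authfileB:98765"),
    ("authfileB:98765", "authfileC:55555"),
    ("authfileC:55555", "authfileD:77777"),
]

# Records attached to individual identifiers in different datasets.
RECORDS = {
    "authfileA:12345": ["catalogue entry"],
    "authfileC:55555": ["biographical note"],
    "authfileD:77777": ["subject heading"],
}

def crosswalk(start):
    """Follow equivalence links from `start` and gather all attached records."""
    graph = defaultdict(set)
    for a, b in SAME_AS:
        graph[a].add(b)
        graph[b].add(a)
    seen, queue, found = {start}, deque([start]), []
    while queue:
        identifier = queue.popleft()
        found.extend(RECORDS.get(identifier, []))
        for neighbour in graph[identifier] - seen:
            seen.add(neighbour)
            queue.append(neighbour)
    return found

print(crosswalk("authfileA:12345"))
# -> ['catalogue entry', 'biographical note', 'subject heading']
```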
Abstract:
ICINCO 2010
Abstract:
Background: Sporadic Creutzfeldt-Jakob disease (sCJD) is a rare neurodegenerative disorder in humans included in the group of Transmissible Spongiform Encephalopathies or prion diseases. The vast majority of sCJD cases are molecularly classified according to the abnormal prion protein (PrPSc) conformations along with the polymorphism of codon 129 of the PRNP gene. Recently, a novel human disease, termed "protease-sensitive prionopathy", has been described. This disease shows a distinct clinical and neuropathological phenotype and is associated with an abnormal prion protein more sensitive to protease digestion. Case presentation: We report the case of a 75-year-old man who developed a clinical course and presented pathologic lesions compatible with sporadic Creutzfeldt-Jakob disease, and biochemical findings reminiscent of "protease-sensitive prionopathy". Neuropathological examination revealed spongiform change mainly affecting the cerebral cortex, putamen/globus pallidus and thalamus, accompanied by mild astrocytosis and microgliosis, with slight involvement of the cerebellum. Confluent vacuoles were absent. Diffuse synaptic PrP deposits in these regions were largely removed following proteinase treatment. PrP deposition, as revealed with 3F4 and 1E4 antibodies, was markedly sensitive to pre-treatment with proteinase K. Molecular analysis of PrPSc showed an abnormal prion protein more sensitive to proteinase K digestion, with a five-band pattern of 28, 24, 21, 19, and 16 kDa, and three aglycosylated isoforms of 19, 16 and 6 kDa. This PrPSc was estimated to be 80% susceptible to digestion, whereas the pathogenic prion proteins associated with classical forms of sporadic Creutzfeldt-Jakob disease were only 2% (type VV2) and 23% (type MM1) susceptible. No mutations in the PRNP gene were found, and the genotype at codon 129 was heterozygous methionine/valine. Conclusions: A novel form of human disease with an abnormal prion protein sensitive to protease and MV heterozygosity at codon 129 is described. Although the clinical signs were compatible with sporadic Creutzfeldt-Jakob disease, the molecular subtype, with abnormal prion protein isoforms showing enhanced protease sensitivity, was reminiscent of "protease-sensitive prionopathy". It remains to be established whether the differences between the latter and this case are due to the polymorphism at codon 129. Different degrees of proteinase K susceptibility were easily determined with the chemical polymer detection system, which could help to detect proteinase-susceptible pathologic prion protein in diseases other than the classical ones.
Abstract:
Three different portable power measurement systems were developed and tested for monitoring at sea the engine power of fishing vessels permitted to fish in the plaice box with an engine power of 300 HP or less. A system measuring the twist of the propeller shaft by means of two divisible gearwheels mounted on the shaft worked well on shafts with roller bearings on both sides of the measured interval of 100–300 mm length. However, this system is applicable on only very few fishing vessels and is therefore not suitable for monitoring purposes. The application of a commercially available system measuring the stress at the surface of the shaft was simplified for use by non-experts. The torque is measured by strain gauges, and calibration of the system as well as measurement and recording of the power are carried out automatically by a PC. A small polished facet on the shaft, protected against oxidation, is needed for easy and quick application. In that case the system can be used by the technical personnel of supervision boats to monitor engine power at sea in a short time. A third power measurement system determines the torque by measuring the displacement of two supports clamped on the shaft at a distance of 100 mm. The displacement is measured by a micrometer gauge mounted on one of the supports. Readout of the rotating gauge display is possible by taking advantage of the stroboscopic effect. The system needs no conditioning of the shaft and can be used by non-technicians. Its development is not yet finished, and some additional investigations and tests are required. Additional measures for monitoring the power of fishing vessels, namely self-recording power measurement systems and sealed fuel racks with limited injection, are reported and discussed.
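For orientation, the quantities these systems measure are related by standard shaft mechanics. The following is a brief sketch under textbook assumptions (a uniform solid circular shaft of diameter d and shear modulus G), not the calibration actually used in the reported systems:

```latex
% Torque from the measured twist angle \varphi over a gauge length L
% (uniform solid circular shaft, polar second moment of area J_p):
T = \frac{G \, J_p \, \varphi}{L}, \qquad J_p = \frac{\pi d^{4}}{32}
% Shaft power from torque and rotational speed n (revolutions per second):
P = 2\pi \, n \, T
```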
Abstract:
In Part I the kinetic theory of excitations in flowing liquid He II is developed to a higher order than that carried out previously by Landau and Khalatnikov, in order to demonstrate the existence of non-equilibrium terms of a new nature in the hydrodynamic equations. It is then shown that these terms can lead to spontaneous destabilization of counterflow when the relative velocity of the normal and superfluid components exceeds a critical value that depends on the temperature, but not on geometry. There are no adjustable parameters in the theory. The critical velocities are estimated to be in the 14-20 m/sec range for T ≤ 2.0° K, but tend to zero as T → T_λ. The possibility that these critical velocities may be related to the experimentally observed "intrinsic" critical velocities is discussed.
Part II consists of a semi-classical investigation of the interactions between rotons and quantized vortex lines. An essentially classical model is used for the collision, and the behavior of the roton in the vortex field is investigated in detail. From this model it is possible to derive the HVBK mutual friction terms that appear in the phenomenological equations of motion for rotating liquid He II. Estimates of the Hall and Vinen B and B' coefficients are in good agreement with experiments. The claim is made that the theory does not contain any arbitrary adjustable parameters.
Abstract:
This thesis presents recent research into analytic topics in the classical theory of General Relativity. It is a thesis in two parts. The first part features investigations into the spectrum of perturbed, rotating black holes. These include the study of near-horizon perturbations, leading to a new generic frequency mode for black hole ringdown; a treatment of high-frequency waves using WKB methods for Kerr black holes; and the discovery of a bifurcation of the quasinormal mode spectrum of rapidly rotating black holes. These results represent new discoveries in the field of black hole perturbation theory, and rely on additional approximations to the linearized field equations around the background black hole. The second part of this thesis presents a recently developed method for the visualization of curved spacetimes, using field lines called the tendex and vortex lines of the spacetime. The works presented here both introduce these visualization techniques and explore them in simple situations. These include the visualization of asymptotic gravitational radiation; weak-gravity situations with and without radiation; stationary black hole spacetimes; and some preliminary study of numerically simulated black hole mergers. The second part of the thesis culminates in the investigation of perturbed black holes using these field-line methods, which have uncovered new insights into the dynamics of curved spacetime around black holes.
Abstract:
Separating the dynamics of variables that evolve on different timescales is a common assumption in exploring complex systems, and a great deal of progress has been made in understanding chemical systems by treating the fast processes of an activated chemical species independently of the slower processes that precede activation. Protein motion underlies all biocatalytic reactions, and understanding the nature of this motion is central to understanding how enzymes catalyze reactions with such specificity and such rate enhancement. This understanding is challenged by evidence of breakdowns in the separability of the timescales of dynamics in the active site from the motions of the solvating protein. Quantum simulation methods that bridge these timescales by simultaneously evolving quantum and classical degrees of freedom provide an important means of exploring this breakdown. In the following dissertation, three problems of enzyme catalysis are explored through quantum simulation.
Abstract:
Computational general relativity is a field of study which has reached maturity only within the last decade. This thesis details several studies that elucidate phenomena related to the coalescence of compact object binaries. Chapters 2 and 3 recount work towards developing new analytical tools for visualizing and reasoning about dynamics in strongly curved spacetimes. In both studies, the results employ analogies with the classical theory of electricity and magnetism, first (Ch. 2) in the post-Newtonian approximation to general relativity and then (Ch. 3) in full general relativity, though in the absence of matter sources. In Chapter 4, we examine the topological structure of absolute event horizons during binary black hole merger simulations conducted with the SpEC code. Chapter 6 reports on the progress of the SpEC code in simulating the coalescence of neutron star-neutron star binaries, while Chapter 7 tests the effects of various numerical gauge conditions on the robustness of black hole formation from stellar collapse in SpEC. In Chapter 5, we examine the nature of pseudospectral expansions of non-smooth functions, motivated by the need to simulate the stellar surface in Chapters 6 and 7. In Chapter 8, we study how thermal effects in the nuclear equation of state affect the equilibria and stability of hypermassive neutron stars. Chapter 9 presents supplements to the work in Chapter 8, including an examination of the stability question raised in Chapter 8 in greater mathematical detail.
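As a toy illustration of the pseudospectral issue examined in Chapter 5 (this is not code from SpEC or the thesis; the functions and resolutions are chosen only for illustration), the following Python sketch compares how quickly Chebyshev expansions converge for a smooth function versus a function with a kink such as |x|, which mimics the loss of smoothness at a stellar surface:

```python
import numpy as np

# Sample both functions on points clustered toward the interval ends.
x = np.cos(np.pi * np.arange(201) / 200)       # sample points in [-1, 1]
smooth = np.exp(x)                             # analytic function
kinked = np.abs(x)                             # non-smooth at x = 0

xx = np.linspace(-1.0, 1.0, 2001)              # fine grid for error measurement
for deg in (8, 16, 32, 64):
    fit_smooth = np.polynomial.Chebyshev.fit(x, smooth, deg)
    fit_kinked = np.polynomial.Chebyshev.fit(x, kinked, deg)
    err_smooth = np.max(np.abs(fit_smooth(xx) - np.exp(xx)))
    err_kinked = np.max(np.abs(fit_kinked(xx) - np.abs(xx)))
    print(f"degree {deg:3d}: smooth error {err_smooth:.1e}, kinked error {err_kinked:.1e}")

# The error for the smooth function drops roughly exponentially with degree,
# while the error for the kinked function decays only algebraically -- the
# loss of spectral convergence that motivates special treatment of surfaces.
```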
Abstract:
We address the influence of the orbital symmetry and the molecular alignment with respect to the laser-field polarization on laser-induced nonsequential double ionization of diatomic molecules, in the length and velocity gauges. We work within the strong-field approximation and assume that the second electron is dislodged by electron-impact ionization, and also consider the classical limit of this model. We show that the electron-momentum distributions exhibit interference maxima and minima due to electron emission at spatially separated centers. The interference patterns survive integration over the transverse momenta for a small range of alignment angles, and are sharpest for parallel-aligned molecules. Due to the contributions of the transverse-momentum components, these patterns become less defined as the alignment angle increases, until they disappear for perpendicular alignment. This behavior influences the shapes and the peaks of the electron-momentum distributions.
Abstract:
This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.
The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.
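To make the automaton-based approach concrete, here is a minimal sketch of the feasibility step only: a toy grid-world abstraction and a hand-coded two-state automaton, both invented for illustration (the thesis's constructions and its optimality and robustness machinery are not reproduced). The product of the abstraction and the automaton is searched for a run satisfying "never visit the obstacle, and eventually reach the goal".

```python
from collections import deque

# Toy discrete abstraction: a 2 x 3 grid of cells (all labels invented).
# One cell is labelled 'obstacle' and one 'goal'; other cells are unlabelled.
ROWS, COLS = 2, 3
LABEL = {(0, 1): "obstacle", (1, 2): "goal"}

def moves(cell):
    """Transitions of the abstraction: stay put or move to a 4-neighbour."""
    r, c = cell
    steps = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < ROWS and 0 <= c + dc < COLS]

# Hand-coded automaton for "never visit the obstacle, and eventually reach
# the goal": 'q0' = goal not yet seen, 'acc' = accepting, 'dead' = violated.
def automaton_step(q, label):
    if label == "obstacle":
        return "dead"
    if label == "goal":
        return "acc"
    return q

def synthesize(start):
    """Breadth-first search over the product of abstraction and automaton;
    returns a finite run of cells reaching an accepting product state."""
    init = (start, automaton_step("q0", LABEL.get(start)))
    parent, queue = {init: None}, deque([init])
    while queue:
        state = queue.popleft()
        cell, q = state
        if q == "acc":                      # specification satisfied on this run
            run = []
            while state is not None:
                run.append(state[0])
                state = parent[state]
            return list(reversed(run))
        for nxt_cell in moves(cell):
            nxt = (nxt_cell, automaton_step(q, LABEL.get(nxt_cell)))
            if nxt[1] != "dead" and nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                             # no run satisfies the specification

print(synthesize((0, 0)))    # e.g. [(0, 0), (1, 0), (1, 1), (1, 2)]
```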
The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.
The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
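As a hedged illustration of the second method, the sketch below encodes the single requirement "eventually reach the goal interval within the horizon" as mixed-integer linear constraints for a toy one-dimensional integrator, using big-M indicator constraints and scipy's milp interface. The dynamics, horizon, bounds, and objective are all invented for illustration and are far simpler than the specifications and systems treated in the thesis.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Toy 1-D dynamics x_{t+1} = x_t + u_t with |u_t| <= 1 and x_0 = 0.
T = 6
GOAL_LO, GOAL_HI = 4.0, 5.0
M = 20.0                                  # big-M constant (states stay in [-10, 10])

# Decision vector: [x_0..x_T, u_0..u_{T-1}, z_0..z_T], with z_t binary.
nx, nu, nz = T + 1, T, T + 1
n = nx + nu + nz
ix = lambda t: t                          # index helpers into the decision vector
iu = lambda t: nx + t
iz = lambda t: nx + nu + t

# Dynamics and initial condition as equality constraints.
rows_eq = []
for t in range(T):
    r = np.zeros(n); r[ix(t + 1)] = 1; r[ix(t)] = -1; r[iu(t)] = -1
    rows_eq.append(r)                     # x_{t+1} - x_t - u_t = 0
r = np.zeros(n); r[ix(0)] = 1
rows_eq.append(r)                         # x_0 = 0

# Big-M linking: z_t = 1 forces GOAL_LO <= x_t <= GOAL_HI.
rows_bigm, ub_bigm = [], []
for t in range(T + 1):
    r = np.zeros(n); r[ix(t)] = 1; r[iz(t)] = M       #  x_t + M z_t <= GOAL_HI + M
    rows_bigm.append(r); ub_bigm.append(GOAL_HI + M)
    r = np.zeros(n); r[ix(t)] = -1; r[iz(t)] = M      # -x_t + M z_t <= M - GOAL_LO
    rows_bigm.append(r); ub_bigm.append(M - GOAL_LO)

# "Eventually": at least one z_t must be 1.
r_evt = np.zeros(n); r_evt[nx + nu:] = 1

constraints = [
    LinearConstraint(np.array(rows_eq), 0, 0),
    LinearConstraint(np.array(rows_bigm), -np.inf, np.array(ub_bigm)),
    LinearConstraint(r_evt[None, :], 1, np.inf),
]

# Objective: reach the goal as early as possible (weight z_t by t).
c = np.zeros(n); c[nx + nu:] = np.arange(T + 1)
integrality = np.zeros(n); integrality[nx + nu:] = 1   # z_t integer, rest continuous
bounds = Bounds(
    lb=np.concatenate([-10 * np.ones(nx), -np.ones(nu), np.zeros(nz)]),
    ub=np.concatenate([10 * np.ones(nx), np.ones(nu), np.ones(nz)]),
)

res = milp(c, constraints=constraints, integrality=integrality, bounds=bounds)
print("goal reached at t =", int(np.argmax(res.x[nx + nu:] > 0.5)))
print("states:", np.round(res.x[:nx], 2))
```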
Abstract:
Accurate simulation of quantum dynamics in complex systems poses a fundamental theoretical challenge with immediate application to problems in biological catalysis, charge transfer, and solar energy conversion. The varied length- and timescales that characterize these kinds of processes necessitate development of novel simulation methodology that can both accurately evolve the coupled quantum and classical degrees of freedom and also be easily applicable to large, complex systems. In the following dissertation, the problems of quantum dynamics in complex systems are explored through direct simulation using path-integral methods as well as application of state-of-the-art analytical rate theories.
Abstract:
While some of the deepest results in science are those that give explicit bounds relating important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium nor an adequate explanation of how a system with underlying microscopically Hamiltonian (reversible) dynamics settles into a fixed distribution.
Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how even the very simplest of systems, with no physical constraints, are characterized by bounds that limit our ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device or from those of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero lower bounds on uncertainty, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. Finally, we revisit the murky question of how macroscopic dissipation emerges from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.
Abstract:
Proton-coupled electron transfer (PCET) reactions are ubiquitous throughout chemistry and biology. However, challenges arise in both the experimental and theoretical investigation of PCET reactions; the rare-event nature of the reactions and the coupling of quantum mechanical electron and proton transfer to the slower classical dynamics of the surrounding environment necessitate the development of robust simulation methodology. In the following dissertation, novel path-integral-based methods are developed and employed for the direct simulation of the reaction dynamics and mechanisms of condensed-phase PCET.
Abstract:
The Talbot effect is one of the most basic optical phenomena and has received extensive investigation, both because new results provide a deeper understanding of fundamental Fresnel diffraction and because of its wide applications. We summarize our recent results on this subject. The symmetry of the Talbot effect, which was reported in Optics Communications in 1995, is now recognized as the key to revealing further rules that explain the Talbot effect for array illumination. The regularly rearranged-neighboring-phase-differences (RRNPD) rule, a completely new set of analytic phase equations (Applied Optics, 1999), and the prime-number decomposing rule (Applied Optics, 2001) are newly obtained results that in essence reflect the symmetry of the Talbot effect. We also report our results on applications of the Talbot effect. Talbot phase codes are orthogonal codes that can be used for phase coding in holographic storage. A new optical scanner based on the phase codes for Talbot array illumination has unique advantages. Furthermore, a novel two-layered multifunctional computer-generated hologram based on the fractional Talbot effect was proposed and implemented (Optics Letters, 2003). We believe that these results bring new understanding of the Talbot effect and will help in designing novel optical devices that benefit practical applications. (C) 2004 Society of Photo-Optical Instrumentation Engineers.
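For reference, the self-imaging distance underlying these effects is the paraxial Talbot length for a grating of period d illuminated at wavelength λ (a textbook relation, not a result of the paper):

```latex
z_T = \frac{2 d^{2}}{\lambda}
% Example: a grating with period d = 100 \,\mu\mathrm{m} illuminated at
% \lambda = 632.8 \,\mathrm{nm} self-images at
% z_T = 2 (100\,\mu\mathrm{m})^{2} / 632.8\,\mathrm{nm} \approx 31.6\,\mathrm{mm};
% fractional Talbot images appear at rational fractions of this distance.
```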
Abstract:
Experimental results on the Talbot effect of an amplitude grating under femtosecond laser illumination are reported. Compared with the Talbot image under continuous-wave (CW) illumination, Talbot images under femtosecond laser illumination differ because of the wide spectral bandwidth, and the images are more distorted at longer Talbot distances. The spectra and pulse widths of the femtosecond laser pulses are measured with a frequency-resolved optical gating (FROG) apparatus. The experimental results are in good agreement with the theoretical analysis. (c) 2005 Elsevier B.V. All rights reserved.