971 results for Reinforcement Sensitivity Theory
Abstract:
We address under what conditions a magma generated by partial melting at 100 km depth in the mantle wedge above a subduction zone can reach the crust in dikes before stalling. We also address under what conditions primitive basaltic magma (Mg# > 60) can be delivered from this depth to the crust. We employ linear elastic fracture mechanics with magma solidification theory and perform a parametric sensitivity analysis. All dikes are initiated at a depth of 100 km in the thermal core of the wedge, and the Moho is fixed at 35 km depth. We consider a range of melt solidus temperatures (800-1100 °C), viscosities (10-100 Pa s), and densities (2400-2700 kg m^-3). We also consider a range of host rock fracture toughness values (50-300 MPa m^1/2) and dike lengths (2-5 km) and two thermal structures for the mantle wedge (1260 and 1400 °C at 100 km depth and 760 and 900 °C at 35 km depth). For the given parameter space, many dikes can reach the Moho in less than a few hundred hours, well within the time constraints provided by U-series isotope disequilibria studies. Increasing the temperature in the mantle wedge, or increasing the dike length, allows additional dikes to propagate to the Moho. We conclude that some dikes with vertical lengths near their critical lengths and relatively high solidus temperatures will stall in the mantle before reaching the Moho, and these may be returned by corner flow to depths where they can melt under hydrous conditions. Thus, a chemical signature in arc lavas suggesting partial melting of slab basalts may be partly influenced by these recycled dikes. Alternatively, dikes with lengths well above their critical lengths can easily deliver primitive magmas to the crust, particularly if the mantle wedge is relatively hot. Dike transport remains a viable primary mechanism of magma ascent in convergent tectonic settings, but the potential for less rapid mechanisms making an important contribution increases as the mantle temperature at the Moho approaches the solidus temperature of the magma.
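A back-of-the-envelope way to see the parametric sensitivity described above is to sweep the stated ranges against a buoyancy-driven propagation scaling. The sketch below is hypothetical and not the authors' model: it assumes a host-rock density of roughly 3300 kg m^-3, uses the common LEFM scaling K ~ Δρ g L^(3/2) with the order-one geometric prefactor omitted, and ignores the solidification and wedge thermal structure that the paper actually resolves, so the printed critical lengths are indicative only.

```python
# Hypothetical scaling sketch, not the authors' model: for a buoyancy-driven
# dike, LEFM gives a tip stress intensity K ~ drho * g * L**1.5 (an order-one
# geometric prefactor is omitted), so the critical length at which K = Kc
# scales as Lc ~ (Kc / (drho * g))**(2/3).  Solidification, viscous losses and
# the wedge thermal structure treated in the paper are ignored here.
from itertools import product

G = 9.81                 # m s^-2
RHO_MANTLE = 3300.0      # kg m^-3, assumed host-rock density

magma_densities = [2400.0, 2550.0, 2700.0]   # kg m^-3  (range in the abstract)
toughnesses = [50e6, 150e6, 300e6]           # Pa m^0.5 (50-300 MPa m^1/2)

def critical_length(k_c, drho):
    """Length scale at which the buoyancy-driven K reaches the fracture toughness."""
    return (k_c / (drho * G)) ** (2.0 / 3.0)

for rho_magma, k_c in product(magma_densities, toughnesses):
    drho = RHO_MANTLE - rho_magma
    l_c = critical_length(k_c, drho)
    print(f"rho_magma={rho_magma:6.0f} kg/m^3  Kc={k_c/1e6:5.0f} MPa m^0.5  "
          f"-> Lc ~ {l_c/1e3:4.2f} km (prefactor of order one omitted)")
```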
Abstract:
This study examined the relation between ethnically based rejection sensitivity and academic achievement in a sample of 936 immigrant students in Germany and Switzerland. The theory of race-based rejection sensitivity that originated in North America was extended to immigrant students in Europe. The harsh political climate against immigrants in Europe makes it probable that immigrant youth face particular difficulties and are affected by ethnically based rejection sensitivity at least as much as, if not more than, minority youth in the United States. Using a standardized literacy performance test and multilevel analyses, we found that ethnically based rejection sensitivity was negatively related to academic achievement for immigrant students. This relation was partially mediated by a strong contingency of the students' self-worth on the heritage culture, as well as by a low number of native German or Swiss majority-group friends. We interpret these processes as immigrant students' efforts to cope with ethnically based rejection sensitivity by retreating into their heritage culture and avoiding majority-group contact, which, unfortunately, also results in lower academic achievement. Copyright © 2014 John Wiley & Sons, Ltd.
Abstract:
If quantum interference patterns in the hearts of polycyclic aromatic hydrocarbons (PAHs) could be isolated and manipulated, then a significant step towards realizing the potential of single-molecule electronics would be achieved. Here we demonstrate experimentally and theoretically that a simple, parameter-free, analytic theory of interference patterns evaluated at the mid-point of the HOMO-LUMO gap (referred to as M-functions) correctly predicts conductance ratios of molecules with pyrene, naphthalene, anthracene, anthanthrene or azulene hearts. M-functions provide new design strategies for identifying molecules with phase-coherent logic functions and enhancing the sensitivity of molecular-scale interferometers.
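The mid-gap evaluation described above can be illustrated with a minimal Hückel (tight-binding) sketch. The code below is an assumption-laden toy, not the authors' parameter-free theory: it builds the connectivity matrix of a naphthalene heart, evaluates the Green's function at the gap centre (E = 0 for a bipartite lattice with zero on-site energy and unit hoppings), and estimates a conductance ratio for two hypothetical pairs of anchor sites from the squared matrix elements, in the spirit of the M-function picture.

```python
# Toy Hückel sketch of mid-gap interference ratios; the connection sites,
# numbering and normalisation are illustrative assumptions, not from the paper.
import numpy as np

# Naphthalene heart: 10 carbon sites, 11 bonds (atoms 4 and 9 are the
# bridgehead carbons in this arbitrary numbering).
bonds = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 9), (9, 0),
         (4, 5), (5, 6), (6, 7), (7, 8), (8, 9)]

H = np.zeros((10, 10))
for i, j in bonds:
    H[i, j] = H[j, i] = -1.0             # unit hopping, zero on-site energy

E_mid = 0.0                              # mid-point of the HOMO-LUMO gap
G = np.linalg.inv(E_mid * np.eye(10) - H)    # mid-gap Green's function

def conductance_proxy(i, j):
    """Weak-coupling proxy: transmission ~ |G_ij|^2 at the gap centre."""
    return abs(G[i, j]) ** 2

# Ratio for two hypothetical pairs of anchor sites on the naphthalene core.
pair_a, pair_b = (1, 6), (2, 7)
ratio = conductance_proxy(*pair_a) / conductance_proxy(*pair_b)
print(f"mid-gap conductance ratio {pair_a} vs {pair_b}: {ratio:.2f}")
```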
Abstract:
The construction of a Gothic vault implied the solution of several technical challenges. The literature on Gothic vault construction is quite large and its growth continues steadily. The main challenge of any structure is that, during and after construction, it must be "safe", that is, it must not collapse. Indeed, it must be amply safe, able to support different loads for long periods of time. Masonry architecture has shown its structural safety for centuries or millennia. The Pantheon of Rome stands today after almost 2,000 years without having needed any structural reinforcement (of course, the survival of any building implies continuous maintenance). Hagia Sophia in Istanbul, finished in the 6th century AD, has withstood not only the dead loads but also many severe earthquakes. Finally, the Gothic cathedrals, with their appearance of weakness, are more than half a millennium old. The question arises of what the source of this amazing strength is and how the illiterate master masons were able to design such daring and safe structures. This question is usually evaded in manuals of Gothic architecture. This is quite surprising, the structure being a fundamental part of Gothic buildings. The present article aims to give such an explanation, which has been studied in detail elsewhere. In the first part, the Gothic design methods will be discussed. In the second part, the validity of these methods will be verified within the frame of the modern theory of masonry structures. References have been reduced to a minimum to make the text simpler and more direct.
Abstract:
This work is focused on the problem of performing multi-robot patrolling for infrastructure security applications in order to protect a known environment at critical facilities. Given a set of robots and a set of points of interest, the patrolling task consists of constantly visiting these points at irregular time intervals for security purposes. Existing solutions for these types of applications are predictable and inflexible. Moreover, most previous solutions are centralized and deterministic, and only a few efforts have been made to integrate dynamic methods. This work therefore develops new dynamic and decentralized collaborative approaches that solve the aforementioned problem by implementing learning models from Game Theory. The model selected in this work, which includes belief-based and reinforcement models as special cases, is called Experience-Weighted Attraction. The problem has been defined using concepts of Graph Theory to represent the environment in order to work with such Game Theory techniques. Finally, the proposed methods have been evaluated experimentally by using a patrolling simulator, and the results obtained have been compared with previously available approaches.
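The Experience-Weighted Attraction (EWA) rule mentioned above has a standard form (due to Camerer and Ho) in which an experience weight N(t) and per-action attractions A_j(t) are updated each round, with belief-based learning and pure reinforcement recovered at particular parameter settings. The sketch below is a generic, self-contained illustration of that update with softmax action selection; the payoff routine, parameter values and two-action "patrolling" state are placeholders, not details taken from the paper.

```python
# Generic Experience-Weighted Attraction (EWA) update with softmax choice.
# Parameters and the payoff routine are illustrative placeholders.
import numpy as np

class EWAAgent:
    def __init__(self, n_actions, phi=0.9, rho=0.9, delta=0.5, lam=2.0):
        self.phi, self.rho, self.delta, self.lam = phi, rho, delta, lam
        self.N = 1.0                          # experience weight
        self.A = np.zeros(n_actions)          # attractions

    def choose(self, rng):
        p = np.exp(self.lam * self.A)
        p /= p.sum()                          # softmax (logit) response
        return rng.choice(len(self.A), p=p)

    def update(self, chosen, payoffs):
        """payoffs[j] = payoff action j would have earned this round."""
        N_prev = self.N
        self.N = self.rho * N_prev + 1.0
        for j, pi_j in enumerate(payoffs):
            weight = self.delta + (1.0 - self.delta) * (j == chosen)
            self.A[j] = (self.phi * N_prev * self.A[j] + weight * pi_j) / self.N

# Minimal usage: two patrolling "actions" with made-up payoffs.
rng = np.random.default_rng(0)
agent = EWAAgent(n_actions=2)
for t in range(50):
    a = agent.choose(rng)
    hypothetical_payoffs = rng.uniform(0.0, 1.0, size=2)   # placeholder payoffs
    agent.update(a, hypothetical_payoffs)
print("final attractions:", agent.A)
```

With delta = 0 only the chosen action is reinforced (a reinforcement model), while delta = 1 weights forgone payoffs fully (a belief-based model), which is the sense in which EWA nests both families as special cases.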
Abstract:
Ripple-based controls can strongly reduce the required output capacitance in PowerSoC converters thanks to a very fast dynamic response. Unfortunately, these controls are prone to sub-harmonic oscillations, and several parameters affect the stability of these systems. This paper derives and validates a simulation-based modeling and stability analysis of a closed-loop V²Ic control applied to a 5 MHz buck converter, using discrete modeling and Floquet theory to predict stability. This allows a sensitivity analysis to be derived for the design of robust systems. The work is extended to different V² architectures using the same methodology.
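In discrete (sampled-data) modeling of a switching converter, the Floquet criterion amounts to checking that all eigenvalues of the cycle-to-cycle Jacobian (the monodromy matrix) lie inside the unit circle; sub-harmonic oscillation corresponds to a multiplier leaving the circle through -1. The sketch below assumes a user-supplied one-switching-period map x[n+1] = f(x[n]) and estimates that Jacobian by finite differences around the periodic orbit; it is a generic illustration, not the paper's V²Ic model.

```python
# Generic Floquet-style stability check for a sampled-data converter model.
# `cycle_map` (one switching period of the closed-loop converter) is assumed
# to be supplied by the user; the one used here is a stable toy placeholder.
import numpy as np

def monodromy(cycle_map, x_star, eps=1e-6):
    """Finite-difference Jacobian of the cycle-to-cycle map at the fixed point."""
    n = len(x_star)
    J = np.zeros((n, n))
    f0 = cycle_map(x_star)
    for k in range(n):
        dx = np.zeros(n)
        dx[k] = eps
        J[:, k] = (cycle_map(x_star + dx) - f0) / eps
    return J

def is_stable(cycle_map, x_star):
    mults = np.linalg.eigvals(monodromy(cycle_map, x_star))
    return np.max(np.abs(mults)) < 1.0, mults

# Toy linear map standing in for the sampled closed-loop converter dynamics.
A_toy = np.array([[0.6, 0.2], [-0.3, 0.5]])
toy_map = lambda x: A_toy @ x
stable, multipliers = is_stable(toy_map, np.zeros(2))
print("Floquet multipliers:", np.round(multipliers, 3), "stable:", stable)
```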
Abstract:
We study a parabolic–elliptic chemotactic system describing the evolution of a population’s density “u” and a chemoattractant’s concentration “v”. The system considers a non-constant chemotactic sensitivity given by “χ(N−u)”, for N≥0, and a source term of logistic type “λu(1−u)”. The existence of global bounded classical solutions is proved for any χ>0, N≥0 and λ≥0. By using a comparison argument we analyze the stability of the constant steady state u=1, v=1, for a range of parameters. – For N>1 and Nλ>2χ, any positive and bounded solution converges to the steady state. – For N≤1 the steady state is locally asymptotically stable and for χN<λ, the steady state is globally asymptotically stable.
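For readers who prefer the equations spelled out, one plausible Keller-Segel-type form consistent with the quantities named in the abstract is written below; the precise chemotactic flux and the form of the elliptic equation for v are assumptions, not taken verbatim from the paper.

```latex
% A plausible form of the parabolic-elliptic system (assumed, not verbatim):
\begin{cases}
  u_t = \Delta u - \nabla \cdot \big( \chi (N - u)\, \nabla v \big) + \lambda u (1 - u), & x \in \Omega,\ t > 0,\\
  0 = \Delta v - v + u, & x \in \Omega,\ t > 0,
\end{cases}
% with homogeneous Neumann boundary conditions; note that (u, v) = (1, 1) is
% indeed a spatially constant steady state of this system.
```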
Abstract:
A discrete framework for computing the global stability and sensitivity analysis to external perturbations for any set of partial differential equations is presented. In particular, a complex-step approximation is used to achieve near-analytical accuracy for the evaluation of the Jacobian matrix. Sensitivity maps for the sensitivity to base-flow modifications and to a steady force are computed to identify regions of the flow field where an input could have a stabilising effect.
Four test cases are presented: (1) an analytical test case to prove the theory of the discrete framework, (2) a lid-driven cavity at low Reynolds number to show the improved accuracy in the calculation of the eigenvalues when using the complex-step approximation, (3) the 2D flow past a circular cylinder at just below the critical Reynolds number to validate the methodology, and finally (4) the flow past an open cavity to give an example of the discrete method applied to a convectively unstable case. The latter three cases (2-4) were solved with the 2D compressible Navier-Stokes equations using a Discontinuous Galerkin Spectral Element Method. Good agreement was obtained for the validation test case (3) when the new method was compared with results from the literature. Furthermore, it is shown that for the calculation of the direct and adjoint eigenmodes and their sensitivity maps to external perturbations, the use of complex variables is paramount for obtaining an accurate prediction. An analysis for stabilising the wake past an actuator disc, which represents a simple model for propellers, helicopter rotors or wind turbines, is also presented. We explore the first flow bifurcation for an actuator disc and suggest that it is associated with a Kelvin-Helmholtz-type instability whose stability depends on the Reynolds number and the flow resistance applied through the disc (or actuator forcing). First, we report that decreasing the disc resistance has a stabilising effect similar to a decrease in the Reynolds number. Second, a discrete sensitivity analysis identifies two regions for suitable placement of flow-control forcing, one close to the disc and one far downstream where the instability originates. Third, we show that adding a localised forcing close to the actuator provides more stabilisation than forcing far downstream. The analysis of the controlled flow fields confirms that modifying the velocity gradient close to the actuator is more efficient at stabilising the wake than controlling the sheared flow far downstream. An interesting application of these results is to provide guidelines for stabilising the wake of wind or tidal turbines when placed in an energy farm, to minimise unsteady interactions between turbines.
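The complex-step approximation responsible for the near-analytical accuracy mentioned above is the identity f'(x) ≈ Im f(x + ih)/h, which involves no subtractive cancellation and therefore tolerates extremely small steps. A minimal column-by-column complex-step Jacobian is sketched below; the residual function is a made-up placeholder, not the compressible Navier-Stokes operator used in the thesis.

```python
# Minimal complex-step Jacobian; the residual function here is a toy example.
import numpy as np

def complex_step_jacobian(residual, x, h=1e-30):
    """J[i, j] = d residual_i / d x_j via Im(residual(x + i*h*e_j)) / h."""
    x = np.asarray(x, dtype=complex)
    n = x.size
    m = residual(x).size
    J = np.zeros((m, n))
    for j in range(n):
        xp = x.copy()
        xp[j] += 1j * h
        J[:, j] = residual(xp).imag / h
    return J

# Placeholder nonlinear residual standing in for a discretised flow operator.
def toy_residual(q):
    return np.array([q[0] ** 2 + np.sin(q[1]), q[0] * q[1] - 3.0])

q0 = np.array([1.0, 2.0])
J = complex_step_jacobian(toy_residual, q0)
print(J)   # analytic Jacobian: [[2*q0, cos(q1)], [q1, q0]] = [[2, -0.416], [2, 1]]
```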
Abstract:
Based on Tversky and Kahneman's Prospect Theory, we test the existence of reference dependence, loss aversion and diminishing sensitivity in Spanish tourism. To do this, we incorporate the reference-dependent model into a Multinomial Logit Model with Random Parameters (which controls for heterogeneity) and apply it to a sample of vacation choices made by Spaniards. We find that the difference between the reference price and the actual price is taken into account when making decisions, confirming that reference dependence exists; that people react more strongly to price increases than to price decreases relative to their reference price, which represents evidence in favor of the loss aversion phenomenon; and that there is diminishing sensitivity for losses only, showing convexity for these negative values.
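The three ingredients tested above (reference dependence, loss aversion, diminishing sensitivity) are usually encoded in a single piecewise value function applied to the gap between the reference price and the actual price. The sketch below is a generic Tversky-Kahneman-style value function with illustrative parameter values, not the estimates from this study; the finding of diminishing sensitivity for losses only would correspond to a curvature exponent below one on the loss branch with a roughly linear gain branch.

```python
# Generic prospect-theory value function over price deviations from a reference.
# alpha/beta < 1 give diminishing sensitivity; loss_aversion > 1 makes losses
# loom larger than equal-sized gains.  Parameter values are illustrative only.
def value(price_ref, price, alpha=0.88, beta=0.88, loss_aversion=2.25):
    x = price_ref - price        # x > 0: paying less than the reference (a gain)
    if x >= 0:
        return x ** alpha
    return -loss_aversion * (-x) ** beta

# A price 20 above the reference hurts more than a price 20 below it helps:
print(value(100.0, 80.0))    # gain of 20  -> about 13.96
print(value(100.0, 120.0))   # loss of 20  -> about -31.4
```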
Abstract:
Foreign exchange trading has recently emerged as a significant activity in many countries. As with most forms of trading, the activity is influenced by many random parameters, so a system that effectively emulates the trading process would be very helpful. A major issue for traders in the deregulated Foreign Exchange Market is when to sell and when to buy a particular currency in order to maximize profit. This paper presents novel trading strategies based on the machine learning methods of genetic algorithms and reinforcement learning.
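As a concrete reference point for the reinforcement-learning side, the sketch below shows a bare tabular Q-learning loop for a buy/hold/sell decision; the state discretisation, reward definition and parameters are placeholders and do not reproduce the paper's strategies (which also involve genetic algorithms).

```python
# Bare tabular Q-learning for a toy buy/hold/sell decision; everything here
# (states, rewards, parameters) is an illustrative placeholder.
import numpy as np

ACTIONS = ("buy", "hold", "sell")
N_STATES = 8                      # e.g. coarse bins of recent price movement

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """Placeholder environment: random next state and a made-up reward."""
    next_state = rng.integers(N_STATES)
    reward = rng.normal(0.0, 1.0)       # stands in for realised profit or loss
    return next_state, reward

state = rng.integers(N_STATES)
for t in range(10_000):
    if rng.random() < epsilon:                      # epsilon-greedy exploration
        action = int(rng.integers(len(ACTIONS)))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```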
Abstract:
The theoretical impacts of anthropogenic habitat degradation on genetic resources have been well articulated. Here we use a simulation approach to assess the magnitude of expected genetic change, and review 31 studies of 23 neotropical tree species to assess whether empirical case studies conform to theory. Major differences in the sensitivity of measures to detect the genetic health of degraded populations were obvious. Most studies employing genetic diversity (nine out of 13) found no significant consequences, yet most that assessed progeny inbreeding (six out of eight), reproductive output (seven out of 10) and fitness (all six) highlighted significant impacts. These observations are in line with theory, where inbreeding is observed immediately following impact, but genetic diversity is lost slowly over subsequent generations, which for trees may take decades. Studies also highlight the ecological, not just genetic, consequences of habitat degradation that can cause reduced seed set and progeny fitness. Unexpectedly, two studies examining pollen flow using paternity analysis highlight an extensive network of gene flow at smaller spatial scales (less than 10 km). Gene flow can thus mitigate against loss of genetic diversity and assist in long-term population viability, even in degraded landscapes. Unfortunately, the surveyed studies were too few and heterogeneous to examine concepts of population size thresholds and genetic resilience in relation to life history. Future suggested research priorities include undertaking integrated studies on a range of species in the same landscapes; better documentation of the extent and duration of impact; and most importantly, combining neutral marker, pollination dynamics, ecological consequences, and progeny fitness assessment within single studies.
Abstract:
We construct a simple growth model where agents with uncertain survival choose schooling time, life-cycle consumption and the number of children. We show that rising longevity reduces fertility but raises saving, schooling time and the growth rate at a diminishing rate. Cross-section analyses using data from 76 countries support these propositions: life expectancy has a significant positive effect on the saving rate, secondary school enrollment and growth but a significant negative effect on fertility. Sensitivity analyses indicate that the effect on the saving rate is inconclusive, while the effects on the other variables are robust and consistent. These estimated effects are decreasing in life expectancy. Copyright The editors of the Scandinavian Journal of Economics 2005.
Abstract:
Fuzzy signal detection analysis can be a useful complementary technique to traditional signal detection theory analysis methods, particularly in applied settings. For example, traffic situations are better conceived as being on a continuum from no potential for hazard to high potential, rather than either having potential or not having potential. This study examined the relative contribution of sensitivity and response bias to explaining differences in the hazard perception performance of novices and experienced drivers, and the effect of a training manipulation. Novice drivers and experienced drivers were compared (N = 64). Half the novices received training, while the experienced drivers and half the novices remained untrained. Participants completed a hazard perception test and rated potential for hazard in occluded scenes. The response latency of participants to the hazard perception test replicated previous findings of experienced/novice differences and trained/untrained differences. Fuzzy signal detection analysis of both the hazard perception task and the occluded rating task suggested that response bias may be more central to hazard perception test performance than sensitivity, with trained and experienced drivers responding faster and with a more liberal bias than untrained novices. Implications for driver training and the hazard perception test are discussed.
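In fuzzy signal detection analysis, each trial carries a degree of membership in the "signal" set (for example, rated hazard potential scaled to [0, 1]) together with a graded response, and hits, misses, false alarms and correct rejections become partial memberships rather than counts; sensitivity and response bias are then computed from the aggregated rates in the usual way. The sketch below follows the commonly used min/max mapping (after Parasuraman, Masalonis and Hancock); the example data are invented.

```python
# Fuzzy signal detection sketch with the min/max mapping of trials onto
# hits, misses, false alarms and correct rejections; example data invented.
import numpy as np
from scipy.stats import norm

def fuzzy_sdt(signal, response):
    """signal, response: arrays of memberships in [0, 1], one entry per trial."""
    s, r = np.asarray(signal, float), np.asarray(response, float)
    hits = np.minimum(s, r)
    misses = np.maximum(s - r, 0.0)                 # completes the fuzzy 2x2 table
    false_alarms = np.maximum(r - s, 0.0)
    correct_rejections = np.minimum(1.0 - s, 1.0 - r)
    hit_rate = hits.sum() / s.sum()
    fa_rate = false_alarms.sum() / (1.0 - s).sum()
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion_c = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion_c

# Invented example: graded hazard potential of six scenes and graded responses.
hazard = [0.9, 0.7, 0.6, 0.3, 0.2, 0.1]
rating = [0.8, 0.9, 0.4, 0.4, 0.1, 0.2]
print(fuzzy_sdt(hazard, rating))
```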
Abstract:
Speed's theory makes two predictions for the development of analogical reasoning. Firstly, young children should not be able to reason analogically due to an undeveloped PFC neural network. Secondly, category knowledge enables the reinforcement of structural features over surface features, and thus the development of sophisticated analogical reasoning. We outline existing studies that support these predictions and highlight some critical remaining issues. Specifically, we argue that the development of inhibition must be examined directly alongside the development of reasoning strategies in order to support Speed's account. © 2010 Psychology Press.
Abstract:
Contrast sensitivity improves with the area of a sine-wave grating, but why? Here we assess this phenomenon against contemporary models involving spatial summation, probability summation, uncertainty, and stochastic noise. Using a two-interval forced-choice procedure we measured contrast sensitivity for circular patches of sine-wave gratings with various diameters that were blocked or interleaved across trials to produce low and high extrinsic uncertainty, respectively. Summation curves were steep initially, becoming shallower thereafter. For the smaller stimuli, sensitivity was slightly worse for the interleaved design than for the blocked design. Neither area nor blocking affected the slope of the psychometric function. We derived model predictions for noisy mechanisms and extrinsic uncertainty that was either low or high. The contrast transducer was either linear (c^1.0) or nonlinear (c^2.0), and pooling was either linear or a MAX operation. There was either no intrinsic uncertainty, or it was fixed or proportional to stimulus size. Of these 10 canonical models, only the nonlinear transducer with linear pooling (the noisy energy model) described the main forms of the data for both experimental designs. We also show how a cross-correlator can be modified to fit our results and provide a contemporary presentation of the relation between summation and the slope of the psychometric function.
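The pooling comparison at the heart of the model selection above can be illustrated with a small Monte Carlo: each stimulated mechanism applies a transducer c^p and adds independent late noise, and the decision variable is either the sum over mechanisms (linear pooling) or the maximum (MAX rule). The sketch below is a generic illustration of how sensitivity grows with stimulus area under each rule, not the authors' fitted noisy energy model.

```python
# Monte Carlo comparison of linear vs MAX pooling of noisy transduced responses.
# A generic illustration (late additive Gaussian noise of unit variance), not
# the paper's fitted model.
import numpy as np

rng = np.random.default_rng(1)

def dprime(n_mech, contrast, exponent=2.0, pooling="linear", trials=50_000):
    drive = contrast ** exponent                    # transduced response per mechanism
    resp_sig = drive + rng.normal(size=(trials, n_mech))
    resp_noise = rng.normal(size=(trials, n_mech))
    pool = np.sum if pooling == "linear" else np.max
    dec_sig, dec_noise = pool(resp_sig, axis=1), pool(resp_noise, axis=1)
    sd = np.sqrt(0.5 * (dec_sig.var() + dec_noise.var()))
    return (dec_sig.mean() - dec_noise.mean()) / sd

for n in (1, 4, 16):                                # "area" = number of mechanisms
    print(n,
          round(dprime(n, 0.8, pooling="linear"), 2),
          round(dprime(n, 0.8, pooling="max"), 2))
```

With the squaring transducer and linear pooling, d' grows roughly as the square root of the number of mechanisms, which is the familiar fourth-root improvement of contrast threshold with area; the MAX rule improves more slowly, which is one way the two pooling schemes can be told apart.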