902 results for Higher-order Shear Deformation Theory


Relevance: 100.00%

Publisher:

Abstract:

A change in synaptic strength arising from the activation of two neuronal pathways at approximately the same time is a form of associative plasticity and may underlie classical conditioning. Previously, a cellular analog of a classical conditioning protocol had been demonstrated to produce short-term associative plasticity at the connections between sensory and motor neurons in Aplysia. A similar training protocol produced long-term (24-hour) enhancement of excitatory postsynaptic potentials (EPSPs). EPSPs produced by sensory neurons in which activity was paired with a reinforcing stimulus were significantly larger than those of unpaired controls 24 hours after training. To examine whether the associative plasticity observed at these synapses may be involved in higher-order forms of classical conditioning, a neural analog of contingency was developed. In addition, computer simulations were used to analyze whether the associative plasticity observed in Aplysia could, in theory, account for second-order conditioning and blocking.
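
The abstract does not give the simulation equations, so the following is only a minimal, hypothetical illustration of how a simple error-correcting associative rule of the Rescorla-Wagner type reproduces blocking, one of the higher-order phenomena examined; the learning rate and trial numbers are arbitrary assumptions, not parameters from the original simulations.

```python
# Minimal Rescorla-Wagner-style simulation of blocking (illustrative only;
# not the model used in the original Aplysia study).
alpha = 0.3   # learning rate (assumed)
lam = 1.0     # asymptotic associative strength supported by the reinforcer (assumed)

def train(weights, present, trials):
    """Update associative strengths for the stimuli marked True in `present`."""
    for _ in range(trials):
        prediction = sum(w for w, p in zip(weights, present) if p)
        error = lam - prediction
        weights = [w + alpha * error if p else w for w, p in zip(weights, present)]
    return weights

# Phase 1: stimulus A alone is paired with the reinforcer.
w = train([0.0, 0.0], present=[True, False], trials=20)
# Phase 2: A and B are reinforced in compound; A's strength blocks learning to B.
w = train(w, present=[True, True], trials=20)
print(f"V(A) = {w[0]:.2f}, V(B) = {w[1]:.2f}")  # V(B) stays near zero: blocking
```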

Relevance: 100.00%

Publisher:

Abstract:

A well developed theoretical framework is available in which paleofluid properties, such as chemical composition and density, can be reconstructed from fluid inclusions in minerals that have undergone no ductile deformation. The present study extends this framework to encompass fluid inclusions hosted by quartz that has undergone weak ductile deformation following fluid entrapment. Recent experiments have shown that such deformation causes inclusions to become dismembered into clusters of irregularly shaped relict inclusions surrounded by planar arrays of tiny, new-formed (neonate) inclusions. Comparison of the experimental samples with a naturally sheared quartz vein from Grimsel Pass, Aar Massif, Central Alps, Switzerland, reveals striking similarities. This strong concordance justifies applying the experimentally derived rules of fluid inclusion behaviour to nature. Thus, planar arrays of dismembered inclusions defining cleavage planes in quartz may be taken as diagnostic of small amounts of intracrystalline strain. Deformed inclusions preserve their pre-deformation concentration ratios of gases to electrolytes, but their H2O contents typically have changed. Morphologically intact inclusions, in contrast, preserve the pre-deformation composition and density of their originally trapped fluid. The orientation of the maximum principal compressive stress (σ1) at the time of shear deformation can be derived from the pole to the cleavage plane within which the dismembered inclusions are aligned. Finally, the density of neonate inclusions is commensurate with the pressure value of σ1 at the temperature and time of deformation. This last rule offers a means to estimate magnitudes of shear stresses from fluid inclusion studies. Application of this new paleopiezometer approach to the Grimsel vein yields a differential stress (σ1–σ3) of ∼300 MPa at 390 ± 30 °C during late Miocene NNW–SSE orogenic shortening and regional uplift of the Aar Massif. This differential stress resulted in strain-hardening of the quartz at very low total strain (<5%) while nearby shear zones were accommodating significant displacements. Further implementation of these experimentally derived rules should provide new insight into processes of fluid–rock interaction in the ductile regime within the Earth's crust.
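
The final rule above can be stated compactly. The notation below is assumed for illustration only (the abstract does not specify how σ3 is constrained):

```latex
% P_neo(T_d): pressure given by the neonate-inclusion isochore at the
% deformation temperature T_d (notation assumed, not from the paper).
\[
  \sigma_1 \simeq P_{\mathrm{neo}}(T_d), \qquad
  \Delta\sigma = \sigma_1 - \sigma_3 \approx 300\ \mathrm{MPa}
  \quad \text{at } T_d = 390 \pm 30\,^{\circ}\mathrm{C} .
\]
```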

Relevance: 100.00%

Publisher:

Abstract:

Interface discontinuity factors based on the Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that the average diffusion values approximate the heterogeneous higher-order solutions. In this paper, an additional form of interface correction factors is presented within the framework of the Analytic Coarse Mesh Finite Difference Method (ACMFD), based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in the COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Then, the physical fluxes are transformed into modal fluxes in the eigenspace of the diffusion matrix. It is thus possible to introduce interface flux discontinuity jumps as the difference of the heterogeneous and homogeneous modal fluxes, instead of introducing interface discontinuity factors as the ratio of the heterogeneous and homogeneous physical fluxes. The formulation in the modal space has been implemented in the COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space.
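
A minimal sketch of the modal transformation described above, assuming hypothetical two-group data (the actual ACMFD/COBAYA3 implementation is considerably more involved):

```python
# Sketch of the modal-space interface correction idea (illustrative only;
# the 2-group data and flux values below are made-up assumptions).
import numpy as np

# Assumed 2-group homogenized diffusion matrix for one node (hypothetical data).
A = np.array([[0.12, -0.002],
              [-0.015, 0.10]])

# Diagonalize: A = V @ diag(lam) @ inv(V), decoupling the multigroup equations.
lam, V = np.linalg.eig(A)
V_inv = np.linalg.inv(V)

phi_het = np.array([1.05, 0.93])   # heterogeneous interface fluxes (assumed)
phi_hom = np.array([1.00, 1.00])   # homogeneous interface fluxes (assumed)

# Classical interface discontinuity factors: ratio of physical fluxes.
idf = phi_het / phi_hom

# Modal alternative: jump (difference) of modal fluxes in the eigenspace of A.
psi_het = V_inv @ phi_het
psi_hom = V_inv @ phi_hom
modal_jump = psi_het - psi_hom

print("physical-space factors:", idf)
print("modal-space jumps:     ", modal_jump)
```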

Relevance: 100.00%

Publisher:

Abstract:

Tree-reweighted belief propagation is a message-passing method that has certain advantages compared to traditional belief propagation (BP). However, it fails to outperform BP in a consistent manner, does not lend itself well to distributed implementation, and has not been applied to distributions with higher-order interactions. We propose a method called uniformly-reweighted belief propagation that mitigates these drawbacks. Having shown in previous work that this method can substantially outperform BP in distributed inference with pairwise interaction models, in this paper we extend it to higher-order interactions and apply it to LDPC decoding, leading to performance gains over BP.
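
As a rough illustration of the reweighted update on a pairwise model (the paper's higher-order/LDPC extension is not shown), the sketch below runs tree-reweighted style message passing with a single, uniform edge appearance probability rho; the toy model and the value of rho are assumptions.

```python
# Uniformly reweighted BP on a tiny pairwise model (illustrative only).
import numpy as np

# Hypothetical 3-node binary chain 0 -- 1 -- 2 with made-up potentials.
edges = [(0, 1), (1, 2)]
phi = [np.array([0.6, 0.4]), np.array([0.5, 0.5]), np.array([0.3, 0.7])]
psi = {e: np.array([[1.2, 0.8], [0.8, 1.2]]) for e in edges}
nbrs = {0: [1], 1: [0, 2], 2: [1]}
rho = 0.8  # uniform edge appearance probability; rho = 1.0 recovers plain BP

msg = {(i, j): np.ones(2) for (a, b) in edges for (i, j) in ((a, b), (b, a))}

def pairwise(i, j):
    """Pairwise potential indexed as [x_i, x_j]."""
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

for _ in range(100):
    new = {}
    for (i, j) in msg:                      # message from i to j
        incoming = phi[i].copy()
        for k in nbrs[i]:
            if k != j:
                incoming *= msg[(k, i)] ** rho
        incoming /= msg[(j, i)] ** (1.0 - rho)
        m = (pairwise(i, j) ** (1.0 / rho)).T @ incoming
        new[(i, j)] = m / m.sum()
    msg = new

# Beliefs: local potential times incoming messages, each raised to rho.
beliefs = [phi[i] * np.prod([msg[(k, i)] ** rho for k in nbrs[i]], axis=0) for i in range(3)]
print([b / b.sum() for b in beliefs])
```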

Relevance: 100.00%

Publisher:

Abstract:

El cálculo de relaciones binarias fue creado por De Morgan en 1860 para ser posteriormente desarrollado en gran medida por Peirce y Schröder. Tarski, Givant, Freyd y Scedrov demostraron que las álgebras relacionales son capaces de formalizar la lógica de primer orden, la lógica de orden superior así como la teoría de conjuntos. A partir de los resultados matemáticos de Tarski y Freyd, esta tesis desarrolla semánticas denotacionales y operacionales para la programación lógica con restricciones usando el álgebra relacional como base. La idea principal es la utilización del concepto de semántica ejecutable, semánticas cuya característica principal es que la ejecución es posible utilizando el razonamiento estándar del universo semántico, en este caso, razonamiento ecuacional. En el caso de este trabajo, se muestra que las álgebras relacionales distributivas con un operador de punto fijo capturan toda la teoría y metateoría estándar de la programación lógica con restricciones, incluyendo los árboles utilizados en la búsqueda de demostraciones. La mayor parte de las técnicas de optimización de programas, evaluación parcial e interpretación abstracta pueden ser llevadas a cabo utilizando las semánticas aquí presentadas. La demostración de la corrección de la implementación resulta extremadamente sencilla. En la primera parte de la tesis, un programa lógico con restricciones es traducido a un conjunto de términos relacionales. La interpretación estándar en la teoría de conjuntos de dichas relaciones coincide con la semántica estándar para CLP. Las consultas contra el programa traducido son llevadas a cabo mediante la reescritura de relaciones. Para concluir la primera parte, se demuestra la corrección y equivalencia operacional de esta nueva semántica, así como se define un algoritmo de unificación mediante la reescritura de relaciones. La segunda parte de la tesis desarrolla una semántica para la programación lógica con restricciones usando la teoría de alegorías (versión categórica del álgebra de relaciones) de Freyd. Para ello, se definen dos nuevos conceptos, Categoría Regular de Lawvere y _-Alegoría, en las cuales es posible interpretar un programa lógico. La ventaja fundamental que el enfoque categórico aporta es la definición de una máquina categórica que mejora el sistema de reescritura presentado en la primera parte. Gracias al uso de relaciones tabulares, la máquina modela la ejecución eficiente sin salir de un marco estrictamente formal. Utilizando la reescritura de diagramas, se define un algoritmo para el cálculo de pullbacks en Categorías Regulares de Lawvere. Los dominios de las tabulaciones aportan información sobre la utilización de memoria y variables libres, mientras que el estado compartido queda capturado por los diagramas. La especificación de la máquina induce la derivación formal de un juego de instrucciones eficiente. El marco categórico aporta otras importantes ventajas, como la posibilidad de incorporar tipos de datos algebraicos, funciones y otras extensiones a Prolog, a la vez que se conserva el carácter 100% declarativo de nuestra semántica. ABSTRACT The calculus of binary relations was introduced by De Morgan in 1860 and later greatly developed by Peirce and Schröder, as well as many others in the twentieth century. Using different formulations of relational structures, Tarski, Givant, Freyd, and Scedrov have shown how relation algebras can provide a variable-free way of formalizing first-order logic, higher-order logic and set theory, among other formal systems.
Building on those mathematical results, we develop denotational and operational semantics for Constraint Logic Programming using relation algebra. The idea of executable semantics plays a fundamental role in this work, both as a philosophical and technical foundation. We call a semantics executable when program execution can be carried out using the regular theory and tools that define the semantic universe. Throughout this work, the use of pure algebraic reasoning is the basis of denotational and operational results, eliminating all the classical non-equational meta-theory associated to traditional semantics for Logic Programming. All algebraic reasoning, including execution, is performed in an algebraic way, to the point we could state that the denotational semantics of a CLP program is directly executable. Techniques like optimization, partial evaluation and abstract interpretation find a natural place in our algebraic models. Other properties, like correctness of the implementation or program transformation are easy to check, as they are carried out using instances of the general equational theory. In the first part of the work, we translate Constraint Logic Programs to binary relations in a modified version of the distributive relation algebras used by Tarski. Execution is carried out by a rewriting system. We prove adequacy and operational equivalence of the semantics. In the second part of the work, the relation algebraic approach is improved by using allegory theory, a categorical version of the algebra of relations developed by Freyd and Scedrov. The use of allegories lifts the semantics to typed relations, which capture the number of logical variables used by a predicate or program state in a declarative way. A logic program is interpreted in a _-allegory, which is in turn generated from a new notion of Regular Lawvere Category. As in the untyped case, program translation coincides with program interpretation. Thus, we develop a categorical machine directly from the semantics. The machine is based on relation composition, with a pullback calculation algorithm at its core. The algorithm is defined with the help of a notion of diagram rewriting. In this operational interpretation, types represent information about memory allocation and the execution mechanism is more efficient, thanks to the faithful representation of shared state by categorical projections. We finish the work by illustrating how the categorical semantics allows the incorporation into Prolog of constructs typical of Functional Programming, like abstract data types, and strict and lazy functions.
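
As a toy illustration of reading a clause as a relational term (far simpler than the distributive relation algebras with a fixed-point operator developed in the thesis), composition of set-based binary relations already captures a conjunctive clause body; the facts below are hypothetical.

```python
# Toy illustration of translating a logic-program clause into a relational
# expression (a sketch only; not the thesis's actual semantics or machine).

def compose(r, s):
    """Relational composition r ; s on binary relations given as sets of pairs."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

# parent/2 as a binary relation (hypothetical facts).
parent = {("ana", "berta"), ("berta", "clara")}

# The clause  grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# corresponds to the relational term  grandparent = parent ; parent.
grandparent = compose(parent, parent)
print(grandparent)  # {('ana', 'clara')}
```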

Relevance: 100.00%

Publisher:

Abstract:

En este proyecto se estudian y analizan las diferentes técnicas de procesado digital de señal aplicadas a acelerómetros. Se hace uso de una tarjeta de prototipado, basada en DSP, para realizar las diferentes pruebas. El proyecto se basa, principalmente, en realizar filtrado digital en señales provenientes de un acelerómetro en concreto, el 1201F, cuyo campo de aplicación es básicamente la automoción. Una vez estudiadas la teoría de procesado y las características de los filtros, diseñamos una aplicación basándonos sobre todo en el entorno en el que se desarrollaría una aplicación de este tipo. A lo largo del diseño, se explican las diferentes fases: diseño por ordenador (Matlab), diseño de los filtros en el DSP (C), pruebas sobre el DSP sin el acelerómetro, calibración del acelerómetro, pruebas finales sobre el acelerómetro... Las herramientas utilizadas son: la plataforma Kit de evaluación 21-161N de Analog Devices (equipada con el entorno de desarrollo Visual DSP 4.5++), el acelerómetro 1201F, el sistema de calibración de acelerómetros CS-18-LF de Spektra y los programas software MATLAB 7.5 y CoolEditPRO 2.0. Se realizan únicamente filtros IIR de 2º orden, de todos los tipos (Butterworth, Chebyshev I y II y Elípticos). Realizamos filtros de banda estrecha, paso-banda y banda eliminada, de varios tipos, dentro del fondo de escala que permite el acelerómetro. Una vez realizadas todas las pruebas, tanto simuladas como físicas, se seleccionan los filtros que presentan un mejor funcionamiento y se analizan para obtener conclusiones. Como se dispone de un entorno adecuado para ello, se combinan los filtros entre sí de varias maneras, para obtener filtros de mayor orden (estructura en paralelo). De esta forma, a partir de filtros paso-banda, podemos obtener otras configuraciones que nos darán mayor flexibilidad. El objetivo de este proyecto no se basa sólo en obtener buenos resultados en el filtrado, sino también en aprovechar las facilidades del entorno y las herramientas de las que disponemos para realizar el diseño más eficiente posible. In this project, we study and analyze digital signal processing techniques in order to design an accelerometer-based application. We use a hardware evaluation card, based on a DSP, to carry out the different tests. The project is based on designing digital filters for an automotive application. The accelerometer used is the 1201F. First, we study digital processing theory and the main parameters of real filters, so as to base the design on the application environment. Throughout the design, we describe the different steps: computer design (Matlab), filter design on the DSP (C language), simulation tests on the DSP without the accelerometer, accelerometer calibration, final tests on the accelerometer... The hardware and software tools used are: the Evaluation Kit 21-161N of Analog Devices, based on a DSP (equipped with the Visual DSP 4.5++ development tool), the 1201F accelerometer, the CS-18-LF accelerometer calibration system from SPEKTRA, and the software tools MATLAB 7.5 and CoolEditPRO 2.0. We implement only 2nd-order IIR filters of all types: Butterworth, Chebyshev I and II, and elliptic. We design band-pass and band-stop filters with very narrow bands, taking advantage of the accelerometer's full scale. Once all the tests, both simulated and physical, are finished, the filters with the best performance are analyzed and selected in order to draw conclusions. As a suitable environment is available, the filters are combined with each other in different ways to obtain higher-order filters (parallel structure).
Thus, from band-pass filters we can obtain other configurations that give us greater flexibility. The purpose of this project is not only to obtain good filtering results, but also to exploit the facilities of the environment and the available tools to make the most efficient design possible.
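
A hedged sketch of this kind of narrow-band IIR design, done in SciPy rather than on the DSP board; the sample rate, band edges and ripple are assumed values, not those of the original project.

```python
# Chebyshev type I band-pass design in SciPy (illustrative assumptions only).
# Note: N=2 with btype="bandpass" yields a fourth-order filter; the exact
# orders and specifications of the original project are not reproduced here.
import numpy as np
from scipy import signal

fs = 4000.0            # sampling rate in Hz (assumed)
band = (40.0, 60.0)    # narrow pass band in Hz (assumed)

# Output as second-order sections, which map naturally onto cascaded biquads
# when the design is ported to a DSP.
sos = signal.cheby1(N=2, rp=1.0, Wn=band, btype="bandpass", fs=fs, output="sos")

# Apply it to a synthetic accelerometer-like signal: a 50 Hz tone plus noise.
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
y = signal.sosfilt(sos, x)
print(sos.shape, y[:5])
```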

Relevance: 100.00%

Publisher:

Abstract:

There have been several previous proposals for the integration of Object-Oriented Programming features into Logic Programming, resulting in a considerable body of supporting theory and several language proposals. However, none of these proposals seems to have made it into the mainstream. Perhaps one of the reasons for this is that the resulting languages depart too much from standard logic programming languages to entice the average Prolog programmer. Another reason may be that most of what can be done with object-oriented programming can already be done in Prolog through the meta- and higher-order programming facilities that the language includes, albeit sometimes in a more cumbersome way. In light of this, in this paper we propose an alternative solution that is driven by two main objectives. The first one is to include only those characteristics of object-oriented programming which are cumbersome to implement in standard Prolog systems. The second one is to do this in such a way that there is minimal impact on the syntax and complexity of the language, i.e., to introduce the minimum number of new constructs, declarations, and concepts to be learned. Finally, we would like the implementation to be as straightforward as possible, ideally based on simple source-to-source expansions.

Relevance: 100.00%

Publisher:

Abstract:

El hormigón estructural sigue siendo sin duda uno de los materiales más utilizados en construcción debido a su resistencia, rigidez y flexibilidad para diseñar estructuras. El cálculo de estructuras de hormigón, utilizando vigas y vigas-columna, es complejo debido a los fenómenos de acoplamiento entre esfuerzos y al comportamiento no lineal del material. Los modelos más empleados para su análisis son el de Bernoulli-Euler y el de Timoshenko, indicándose en la literatura la conveniencia de usar el segundo cuando la relación canto/luz no es pequeña o los elementos están fuertemente armados. El objetivo fundamental de esta tesis es el análisis de elementos viga y viga-columna en régimen no lineal con deformación por cortante, aplicando el concepto de Pieza Lineal Equivalente (PLE). Concepto éste que consiste básicamente en resolver el problema de una pieza en régimen no lineal, transformándolo en uno lineal equivalente, de modo que ambas piezas tengan la misma deformada y los mismos esfuerzos. Para ello, se hizo en primer lugar un estudio comparado de las distintas propuestas que aplican la deformación por cortante, de los distintos modelos constitutivos y seccionales del hormigón estructural y de los métodos de cálculo no lineal aplicando el método de elementos finitos (MEF). Teniendo en cuenta que la resolución del problema no lineal se basa en la resolución de sucesivos problemas lineales empleando un proceso de homotopía, los problemas lineales de la viga y viga-columna de Timoshenko se resuelven mediante MEF, utilizando soluciones nodalmente exactas (SNE) y acción repartida equivalente de cualquier orden. Se obtiene así, con muy pocos elementos finitos, una excelente aproximación de la solución, no sólo en los nodos sino en el interior de los elementos. Se introduce el concepto PLE para el análisis de una barra, de material no lineal, sometida a acciones axiales, y se extiende el mismo para el análisis no lineal de vigas y vigas-columna con deformación por cortante. Cabe señalar que para estos últimos, la solución de una pieza en régimen no lineal es igual a la de una en régimen lineal, cuyas rigideces son constantes a trozos, y donde además hay que añadir momentos y cargas puntuales ficticias en los nodos, así como un momento distribuido ficticio en toda la pieza. Se han desarrollado dos métodos para el análisis: uno para problemas isostáticos y otro general, aplicable tanto a problemas isostáticos como hiperestáticos. El primero determina de entrada la PLE, realizándose a continuación el cálculo por MEF-SNE de dicha pieza, que ahora está en régimen lineal. El general utiliza una homotopía que transforma de manera iterativa unas leyes constitutivas lineales en las leyes no lineales del material. Cuando se combina con el MEF, la pieza lineal equivalente y la solución del problema original quedan determinadas al final de todo el proceso. Si bien el método general es un procedimiento próximo al de Newton-Raphson, presenta sobre éste la ventaja de permitir visualizar las deformaciones de la pieza en régimen no lineal, de manera tanto cualitativa como cuantitativa, ya que es posible observar en cada paso del proceso la modificación de rigideces (a flexión y cortante) y asimismo la evolución de las acciones ficticias.
Por otra parte, los resultados obtenidos, comparados con los publicados en la literatura, indican que el concepto PLE ofrece una forma directa y eficiente para analizar con muy buena precisión los problemas asociados a vigas y vigas-columna en las que, por su tipología, los efectos del cortante no pueden ser despreciados. ABSTRACT Structural concrete clearly remains one of the most widely used materials in construction due to its strength, rigidity and flexibility in structural design. The calculation of concrete structures using beams and beam-columns is complex as a consequence of the coupling between stresses and the nonlinear behaviour of the material. The models most commonly used for analysis are those of Bernoulli-Euler and Timoshenko, the second being strongly recommended when the depth-to-span ratio is not small or when the elements are heavily reinforced. The main objective of this thesis is the analysis of beam and beam-column elements in the nonlinear regime with shear deformation, applying the concept of the Equivalent Linear Structural Element (ELSE). This concept basically consists of solving the problem of a structural element in the nonlinear regime by transforming it into an equivalent linear element, so that both elements have the same deformations and the same stresses. Firstly, a comparative study was carried out of the various proposals for including shear deformation, of the constitutive and sectional models of structural concrete, and of nonlinear calculation methods based on the finite element method (FEM). Considering that the nonlinear problem is solved through a succession of linear problems using a homotopy process, the linear Timoshenko beam and beam-column problems are solved by FEM using exact nodal solutions (ENS) and equivalent distributed loads of any order. Thus, an accurate approximation of the solution is obtained with very few finite elements, not only at the nodes but also inside the elements. The ELSE concept is introduced for the analysis of a bar of nonlinear material subjected to axial forces, and is then extended to the nonlinear analysis of beams and beam-columns with shear deformation. For the latter, the solution of a structural element in the nonlinear regime equals that of a linear element with piecewise-constant stiffnesses, to which fictitious point moments and loads at the nodes, as well as a fictitious distributed moment along the element, must be added. Two methods have been developed for the analysis: one for isostatic problems and a more general one, applicable to both isostatic and hyperstatic problems. The first method determines the ELSE at the outset; the calculation of this element, now in the linear regime, is then performed by FEM-ENS. The general method uses a homotopy that iteratively transforms linear constitutive laws into the nonlinear laws of the material. When combined with FEM, the equivalent linear element and the solution of the original problem are determined at the end of the whole process. Although the general method is a procedure close to Newton-Raphson, it has the advantage of allowing the deformations of the element in the nonlinear regime to be visualized, both qualitatively and quantitatively, since the modification of the flexural and shear stiffnesses and the evolution of the fictitious actions can be observed at each step of the process.
Moreover, comparison of the results with those published in the literature indicates that the ELSE concept offers a direct and efficient way to analyze, with very good accuracy, the problems associated with beams and beam-columns whose typology does not allow shear effects to be neglected.
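
For reference, the linear shear-deformable problems solved at each homotopy step are of the Timoshenko type; a standard static form is shown below, with notation and sign convention assumed here rather than taken from the thesis.

```latex
% Static Timoshenko beam with distributed load q(x); w = deflection,
% \theta = cross-section rotation, EI = flexural stiffness,
% GA_s = shear stiffness (notation assumed).
\[
  \frac{d}{dx}\!\left( EI \,\frac{d\theta}{dx} \right)
  + G A_s \left( \frac{dw}{dx} - \theta \right) = 0,
  \qquad
  \frac{d}{dx}\!\left[ G A_s \left( \frac{dw}{dx} - \theta \right) \right] + q = 0 .
\]
```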

Relevance: 100.00%

Publisher:

Abstract:

Averaged event-related potential (ERP) data recorded from the human scalp reveal electroencephalographic (EEG) activity that is reliably time-locked and phase-locked to experimental events. We report here the application of a method based on information theory that decomposes one or more ERPs recorded at multiple scalp sensors into a sum of components with fixed scalp distributions and sparsely activated, maximally independent time courses. Independent component analysis (ICA) decomposes ERP data into a number of components equal to the number of sensors. The derived components have distinct but not necessarily orthogonal scalp projections. Unlike dipole-fitting methods, the algorithm does not model the locations of their generators in the head. Unlike methods that remove second-order correlations, such as principal component analysis (PCA), ICA also minimizes higher-order dependencies. Applied to detected—and undetected—target ERPs from an auditory vigilance experiment, the algorithm derived ten components that decomposed each of the major response peaks into one or more ICA components with relatively simple scalp distributions. Three of these components were active only when the subject detected the targets, three other components only when the target went undetected, and one in both cases. Three additional components accounted for the steady-state brain response to a 39-Hz background click train. Major features of the decomposition proved robust across sessions and changes in sensor number and placement. This method of ERP analysis can be used to compare responses from multiple stimuli, task conditions, and subject states.
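
A minimal sketch of this kind of decomposition using scikit-learn's FastICA; the original work used an infomax ICA algorithm, and the data below are synthetic stand-ins rather than real ERP recordings.

```python
# ICA decomposition of multichannel "ERP-like" data (illustrative only).
import numpy as np
from sklearn.decomposition import FastICA

n_sensors, n_times = 31, 600          # assumed montage and epoch length
rng = np.random.default_rng(0)

# Synthetic data: a few sparse time courses mixed into sensor space.
sources = rng.laplace(size=(3, n_times))
mixing = rng.normal(size=(n_sensors, 3))
X = mixing @ sources + 0.1 * rng.normal(size=(n_sensors, n_times))

# ICA recovers maximally independent time courses, each paired with a fixed
# scalp projection (a column of the estimated mixing matrix).
ica = FastICA(n_components=3, random_state=0)
time_courses = ica.fit_transform(X.T).T    # shape: (components, time)
scalp_maps = ica.mixing_                   # shape: (sensors, components)
print(time_courses.shape, scalp_maps.shape)
```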

Relevance: 100.00%

Publisher:

Abstract:

The higher education systems throughout the continent of Africa are facing unprecedented challenges and are considered to be in crisis. African countries, including Ghana, share ties to a colonial legacy and are confronted with weak policies put in place by their colonizers. Having gained their independence, Africans should now take responsibility for the task of reforming their higher education systems. To date, nothing substantial has been accomplished, with serious consequences that weaken and damage the foundations of their educational systems. This qualitative, single case study utilized a postcolonial theory-critical pedagogy framework, providing guidance for coming to grips with the mindset posed by Ghana's colonial heritage in the postcolonial era, especially in terms of its damaging effects on Ghana's higher education system. The study explores alternative pathways for secondary school students to transition to tertiary education, a problematic transition that currently hinders open access to all and equality in educational opportunity, resulting in a tremendous pool of discontinued students. This transitional problem is directly related to Ghana's crisis in higher education, with far-reaching consequences. The alternative pathway considered in this study is an adaptation of the U.S. community college model, or an integration of its applicable aspects into the current structures of the higher education system already in place. In-depth interviews were conducted with 5 Ghanaian professors teaching at community colleges in the United States, 5 Ghanaian professors teaching at universities in Ghana, and 2 educational consultants from the Ghanaian Ministry of Education. Based on their perspectives on the current state of Ghanaian higher education, analyzed in terms of pedagogy, structure/infrastructure, and curriculum, the participants provided their perceptions of salient aspects of the U.S. community college model that would be applicable to Ghana's situation, along with other recommendations. Access to all, including equality of educational opportunity, was considered essential, followed by adaptability, affordability, practicality, and quality of curriculum content and delivery. Canada's successful adaptation of the U.S. model was also discussed. Findings can help guide consideration of alternative pathways to higher education in Ghana and Africa as a whole.

Relevance: 100.00%

Publisher:

Abstract:

Dual-phase-lagging (DPL) models constitute a family of non-Fourier models of heat conduction that allow for the presence of time lags in the heat flux and the temperature gradient. These lags may need to be considered when modeling microscale heat transfer, and thus DPL models have found application in recent years in a wide range of theoretical and technical heat transfer problems. Consequently, analytical solutions and methods for computing numerical approximations have been proposed for particular DPL models in different settings. In this work, a compact difference scheme for second-order DPL models is developed, providing higher-order accuracy than a previously proposed method. The scheme is shown to be unconditionally stable and convergent, and its accuracy is illustrated with numerical examples.
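
For reference, the lagging constitutive relation behind DPL models, together with the Taylor-expanded form commonly used to obtain second-order DPL models, can be written as follows; the notation is assumed and the paper's exact formulation may differ.

```latex
% q = heat flux, T = temperature, k = conductivity,
% \tau_q and \tau_T = phase lags of the flux and the temperature gradient.
\[
  \mathbf{q}(\mathbf{x}, t + \tau_q) = -k\,\nabla T(\mathbf{x}, t + \tau_T),
\]
\[
  \mathbf{q} + \tau_q \frac{\partial \mathbf{q}}{\partial t}
    + \frac{\tau_q^{2}}{2}\frac{\partial^{2} \mathbf{q}}{\partial t^{2}}
  \;\approx\;
  -k\left( \nabla T + \tau_T \frac{\partial}{\partial t}\nabla T
    + \frac{\tau_T^{2}}{2}\frac{\partial^{2}}{\partial t^{2}}\nabla T \right).
\]
```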

Relevance: 100.00%

Publisher:

Abstract:

Analogue model experiments using both brittle and viscous materials were performed to investigate the development and interaction of strike-slip faults in zones of distributed shear deformation. At low strain, bulk dextral shear deformation of an initial rectangular model is dominantly accommodated by left-stepping, en echelon strike-slip faults (Riedel shears, R) that form in response to the regional (bulk) stress field. Push-up zones form in the area of interaction between adjacent left-stepping Riedel shears. In cross sections, faults bounding push-up zones have an arcuate shape or merge at depth. Adjacent left-stepping R shears merge by sideways propagation or link by short synthetic shears that strike subparallel to the bulk shear direction. Coalescence of en echelon R shears results in major, through-going faults zones (master faults). Several parallel master faults develop due to the distributed nature of deformation. Spacing between master faults is related to the thickness of the brittle layers overlying the basal viscous layer. Master faults control to a large extent the subsequent fault pattern. With increasing strain, relatively short antithetic and synthetic faults develop mostly between old, but still active master faults. The orientation and evolution of the new faults indicate local modifications of the stress field. In experiments lacking lateral borders, closely spaced parallel antithetic faults (cross faults) define blocks that undergo clockwise rotation about a vertical axis with continuing deformation. Fault development and fault interaction at different stages of shear strain in our models show similarities with natural examples that have undergone distributed shear.

Relevance: 100.00%

Publisher:

Abstract:

We present a fully quantum-mechanical treatment of the nondegenerate optical parametric oscillator both below and near threshold. This is a nonequilibrium quantum system with a critical-point phase transition, which is also known to exhibit strong yet easily observed squeezing and quantum entanglement. Our treatment makes use of the positive-P representation and goes beyond the usual linearized theory. We compare our analytical results with numerical simulations and find excellent agreement. We also carry out a detailed comparison of our results with those obtained from stochastic electrodynamics, a theory obtained by truncating the equation of motion for the Wigner function, with a view to locating regions of agreement and disagreement between the two. We calculate commonly used measures of quantum behavior including entanglement, squeezing, and Einstein-Podolsky-Rosen (EPR) correlations as well as higher-order tripartite correlations, and show how these are modified as the critical point is approached. These results are compared with those obtained using two degenerate parametric oscillators, and we find that in the near-critical region the nondegenerate oscillator has stronger EPR correlations. In general, the critical fluctuations represent an ultimate limit to the possible entanglement that can be achieved in a nondegenerate parametric oscillator.
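
For orientation only, a commonly used interaction Hamiltonian for the nondegenerate parametric oscillator and the Reid inferred-variance criterion for EPR correlations are sketched below; the normalization and notation are assumptions here and may differ from the paper's conventions.

```latex
% Signal (a), idler (b) and pump (c) modes; \chi is the effective nonlinearity.
\[
  H_{\mathrm{int}} = i\hbar\chi\left( \hat{a}^{\dagger}\hat{b}^{\dagger}\hat{c}
                      - \hat{a}\hat{b}\hat{c}^{\dagger} \right)
\]
% With quadratures normalized so that V(X)V(Y) >= 1 for any single mode,
% EPR correlations are demonstrated when the product of variances of X_a and
% Y_a, inferred from measurements on mode b, violates this bound:
\[
  V_{\mathrm{inf}}(X_a)\, V_{\mathrm{inf}}(Y_a) < 1 .
\]
```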

Relevance: 100.00%

Publisher:

Abstract:

This paper develops an evolutionary theory of adaptive growth, understood as a product of structural change and economic self-transformation, based upon processes that are closely connected with, but not reducible to, the growth of knowledge. The dominant connecting theme is enterprise, the innovative variations it generates, and the multiple connections between investment, innovation, demand and structural transformation in the market process. The paper explores the dependence of macroeconomic productivity growth on the diversity of technical progress functions and income elasticities of demand at the industry level, and the resolution of this diversity into patterns of economic change through market processes. It is shown how industry growth rates are constrained by higher-order processes of emergence that convert an ensemble of industry growth rates into an aggregate rate of growth. The growth rates of productivity, output and employment are determined mutually and endogenously, and their values depend on the variation in the primary causal influences in the system.
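
The resolution of an ensemble of industry growth rates into an aggregate rate can be made concrete with two accounting identities; the symbols are assumed for illustration (s_i output shares, g_i industry growth rates, g the aggregate rate) and are not the paper's own formulation.

```latex
\[
  g = \sum_i s_i\, g_i ,
  \qquad
  \frac{ds_i}{dt} = s_i \left( g_i - g \right),
\]
% so industries growing faster than the aggregate gain output share, and the
% aggregate rate is continuously re-weighted by this selection process.
```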

Relevance: 100.00%

Publisher:

Abstract:

The practice of career counseling has been derived from principles of career theory and counseling theory. In recent times, the fields of both career and counseling theory have undergone considerable change. This article details the move toward convergence in career theory, and the subsequent development of the Systems Theory Framework in this domain. The importance of this development to connecting theory and practice in the field of career counseling is discussed.