Abstract:
Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial patterns as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary part of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. 
These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansätze to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect our results to have practical relevance for a more general class of systems than those belonging to Axiom A.
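The spectral relation stated above can be summarised compactly. A sketch in LaTeX, in notation of our own choosing rather than the paper's (S for the power spectrum of an observable A, σ² for the noise variance, χ_A for the linear susceptibility to forcings with the spatial pattern of the noise):

```latex
S^{(\sigma)}_{A}(\omega) \;=\; S^{(0)}_{A}(\omega) \;+\; \sigma^{2}\,\bigl|\chi_{A}(\omega)\bigr|^{2},
\qquad\text{so that}\qquad
S^{(\sigma)}_{A}(\omega) \;\ge\; S^{(0)}_{A}(\omega) \quad \forall\,\omega .
```

The inequality is the positive-definiteness result quoted in the abstract: adding noise can only raise the power spectrum, at every frequency.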
Abstract:
We reconsider the theory of the linear response of non-equilibrium steady states to perturbations. We first show that by using a general functional decomposition for space-time dependent forcings, we can define elementary susceptibilities that allow us to construct the response of the system to general perturbations. Starting from the definition of the SRB measure, we then study the consequences of adopting different sampling schemes for analysing the response of the system. We show that only a specific choice of the time horizon for evaluating the response of the system to a general time-dependent perturbation yields the formula first presented by Ruelle. We also discuss the special case of periodic perturbations, showing that when these are considered the sampling can be fine-tuned to make the definition of the correct time horizon immaterial. Finally, we discuss the implications of our results for strategies for analysing the outputs of numerical experiments, providing a critical review of a formula proposed by Reick.
Abstract:
The separate effects of ozone depleting substances (ODSs) and greenhouse gases (GHGs) on forcing circulation changes in the Southern Hemisphere extratropical troposphere are investigated using a version of the Canadian Middle Atmosphere Model (CMAM) that is coupled to an ocean. Circulation-related diagnostics include zonal wind, tropopause pressure, Hadley cell width, jet location, annular mode index, precipitation, wave drag, and eddy fluxes of momentum and heat. As expected, the tropospheric response to the ODS forcing occurs primarily in austral summer, with past (1960-99) and future (2000-99) trends of opposite sign, while the GHG forcing produces more seasonally uniform trends with the same sign in the past and future. In summer the ODS forcing dominates past trends in all diagnostics, while the two forcings contribute nearly equally but oppositely to future trends. The ODS forcing produces a past surface temperature response consisting of cooling over eastern Antarctica, and is the dominant driver of past summertime surface temperature changes when the model is constrained by observed sea surface temperatures. For all diagnostics, the response to the ODS and GHG forcings is additive: that is, the linear trend computed from the simulations using the combined forcings equals (within statistical uncertainty) the sum of the linear trends from the simulations using the two separate forcings. Space-time spectra of eddy fluxes and the spatial distribution of transient wave drag are examined to assess the viability of several recently proposed mechanisms for the observed poleward shift in the tropospheric jet.
The Asian summer monsoon: an intercomparison of CMIP5 vs. CMIP3 simulations of the late 20th century
Abstract:
The boreal summer Asian monsoon has been evaluated in 25 Coupled Model Intercomparison Project-5 (CMIP5) and 22 CMIP3 GCM simulations of the late 20th Century. Diagnostics and skill metrics have been calculated to assess the time-mean, climatological annual cycle, interannual variability, and intraseasonal variability. Progress has been made in modeling these aspects of the monsoon, though there is no single model that best represents all of these aspects of the monsoon. The CMIP5 multi-model mean (MMM) is more skillful than the CMIP3 MMM for all diagnostics in terms of the skill of simulating pattern correlations with respect to observations. Additionally, for rainfall/convection the MMM outperforms the individual models for the time mean, the interannual variability of the East Asian monsoon, and intraseasonal variability. The pattern correlation of the time (pentad) of monsoon peak and withdrawal is better simulated than that of monsoon onset. The onset of the monsoon over India is typically too late in the models. The extension of the monsoon over eastern China, Korea, and Japan is underestimated, while it is overestimated over the subtropical western/central Pacific Ocean. The anti-correlation between anomalies of all-India rainfall and Niño-3.4 sea surface temperature is overly strong in CMIP3 and typically too weak in CMIP5. For both the ENSO-monsoon teleconnection and the East Asian zonal wind-rainfall teleconnection, the MMM interannual rainfall anomalies are weak compared to observations. Though simulation of intraseasonal variability remains problematic, several models show improved skill at representing the northward propagation of convection and the development of the tilted band of convection that extends from India to the equatorial west Pacific. The MMM also well represents the space-time evolution of intraseasonal outgoing longwave radiation anomalies. 
Caution is necessary when using GPCP and CMAP rainfall to validate (1) the time-mean rainfall, as there are systematic differences over ocean and land between these two data sets, and (2) the timing of monsoon withdrawal over India, where the smooth southward progression seen in India Meteorological Department data is better realized in CMAP data compared to GPCP data.
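The pattern-correlation skill metric used throughout the comparison above can be sketched in a few lines; a minimal illustration under our own naming (the function name and the optional area weighting are our assumptions, not the study's code):

```python
import numpy as np

def pattern_correlation(model, obs, weights=None):
    """Centered spatial (pattern) correlation between two gridded fields.

    Both fields are flattened, their (optionally area-weighted) means are
    removed, and the weighted Pearson correlation of the anomalies is
    returned. Typical area weights would be cos(latitude) per grid cell.
    """
    m = np.asarray(model, dtype=float).ravel()
    o = np.asarray(obs, dtype=float).ravel()
    w = np.ones_like(m) if weights is None else np.asarray(weights, dtype=float).ravel()
    m = m - np.average(m, weights=w)   # remove area-weighted mean
    o = o - np.average(o, weights=w)
    return np.average(m * o, weights=w) / np.sqrt(
        np.average(m * m, weights=w) * np.average(o * o, weights=w))
```

Applied to, say, a model's climatological rainfall map against GPCP or CMAP observations, this yields the kind of pattern-correlation score used to rank individual models against the multi-model mean.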
Abstract:
Drought characterisation is an intrinsically spatio-temporal problem. A limitation of previous approaches to characterisation is that they discard much of the spatio-temporal information by reducing events to a lower-order subspace. To address this, an explicit 3-dimensional (longitude, latitude, time) structure-based method is described in which drought events are defined by a spatially and temporally coherent set of points displaying standardised precipitation below a given threshold. Geometric methods can then be used to measure similarity between individual drought structures. Groupings of these similarities provide an alternative to traditional methods for extracting recurrent space-time signals from geophysical data. The explicit consideration of structure encourages the construction of summary statistics which relate to the event geometry. Example measures considered are the event volume, centroid, and aspect ratio. The utility of a 3-dimensional approach is demonstrated by application to the analysis of European droughts (15°W to 35°E, and 35°N to 70°N) for the period 1901–2006. Large-scale structure is found to be abundant, with 75 events identified lasting for more than 3 months and spanning at least 0.5 × 10⁶ km². Near-complete dissimilarity is seen between the individual drought structures, and little or no regularity is found in the time evolution of even the most spatially similar drought events. The spatial distribution of the event centroids and the time evolution of the geographic cross-sectional areas strongly suggest that large-area, sustained droughts result from the combination of multiple small-area (∼10⁶ km²) short-duration (∼3 months) events. The small events are not found to occur independently in space. This leads to the hypothesis that local water feedbacks play an important role in the aggregation process.
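The 3-dimensional event definition lends itself to connected-component labelling; a minimal sketch, assuming a gridded standardised-precipitation array indexed (lon, lat, time) (the names and the SciPy-based approach are our assumptions, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def drought_events(spi, threshold=-1.0):
    """Label 3-D (lon, lat, time) drought structures: maximal connected
    sets of grid points with standardised precipitation below `threshold`.

    The default labelling uses 6-connectivity, so points join the same
    event if they are adjacent in space or in consecutive time steps.
    Returns the label array plus each event's volume (cell count) and
    centroid in (lon, lat, time) index coordinates.
    """
    mask = spi < threshold
    labels, n_events = ndimage.label(mask)
    idx = np.arange(1, n_events + 1)
    volumes = ndimage.sum(mask, labels, index=idx)
    centroids = ndimage.center_of_mass(mask, labels, index=idx)
    return labels, list(zip(volumes, centroids))
```

Summary statistics such as the aspect ratio, or thresholds like the 3-month / 0.5 × 10⁶ km² cut-off used above, can then be computed per labelled region.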
Abstract:
A fingerprint method for detecting anthropogenic climate change is applied to new simulations with a coupled ocean-atmosphere general circulation model (CGCM) forced by increasing concentrations of greenhouse gases and aerosols covering the years 1880 to 2050. In addition to the anthropogenic climate change signal, the space-time structure of the natural climate variability for near-surface temperatures is estimated from instrumental data over the last 134 years and two 1000 year simulations with CGCMs. The estimates are compared with paleoclimate data over 570 years. The space-time information on both the signal and the noise is used to maximize the signal-to-noise ratio of a detection variable obtained by applying an optimal filter (fingerprint) to the observed data. The inclusion of aerosols slows the predicted future warming. The probability that the observed increase in near-surface temperatures in recent decades is of natural origin is estimated to be less than 5%. However, this number is dependent on the estimated natural variability level, which is still subject to some uncertainty.
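The optimal-filter construction described above has a compact linear-algebra form; a minimal sketch under illustrative names (the fingerprint is the signal pattern rotated by the inverse noise covariance, which maximises the signal-to-noise ratio of the detection variable):

```python
import numpy as np

def optimal_fingerprint(signal, noise_cov, observed):
    """Project observations onto the optimal fingerprint f = C^{-1} g.

    signal    : expected anthropogenic pattern g (1-D vector over grid points)
    noise_cov : covariance C of natural variability over the same points
    observed  : observed anomaly vector
    Returns the detection variable and its signal-to-noise ratio.
    """
    f = np.linalg.solve(noise_cov, signal)   # optimal filter (fingerprint)
    detection = f @ observed                 # detection variable
    signal_amp = f @ signal                  # expected response to the signal
    noise_var = f @ noise_cov @ f            # variance of detection under noise
    return detection, signal_amp / np.sqrt(noise_var)
```

With `noise_cov` the identity, the fingerprint reduces to the raw signal pattern; in practice a poorly conditioned covariance estimated from control runs is regularised (for example by truncating to leading EOFs), a detail omitted here.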
Abstract:
In the 1960s and early 1970s sea surface temperatures in the North Atlantic Ocean cooled rapidly. There is still considerable uncertainty about the causes of this event, although various mechanisms have been proposed. In this observational study it is demonstrated that the cooling proceeded in several distinct stages. Cool anomalies initially appeared in the mid-1960s in the Nordic Seas and Gulf Stream Extension, before spreading to cover most of the Subpolar Gyre. Subsequently, cool anomalies spread into the tropical North Atlantic before retreating, in the late 1970s, back to the Subpolar Gyre. There is strong evidence that changes in atmospheric circulation, linked to a southward shift of the Atlantic ITCZ, played an important role in the event, particularly in the period 1972-76. Theories for the cooling event must account for its distinctive space-time evolution. Our analysis suggests that the most likely drivers were: 1) The “Great Salinity Anomaly” of the late 1960s; 2) An earlier warming of the subpolar North Atlantic, which may have led to a slow-down in the Atlantic Meridional Overturning Circulation; 3) An increase in anthropogenic sulphur dioxide emissions. Determining the relative importance of these factors is a key area for future work.
Abstract:
Understanding how and why one set of business resources, with its structural arrangements and mechanisms, outperforms another can provide competitive advantage in terms of new business processes and product and service development. However, most business models of capability are descriptive and lack a formal modelling language with which to compare capabilities qualitatively and quantitatively. Gibson’s theory of affordance, the potential for action, provides a formal basis for a more robust and quantitative model, but most formal affordance models are complex and abstract and lack support for real-world applications. We aim to understand the ‘how’ and ‘why’ of business capability by developing a quantitative and qualitative model that underpins earlier work on Capability-Affordance Modelling (CAM). This paper integrates an affordance-based capability model with the formalism of Coloured Petri Nets to develop a simulation model. Using the model, we show how capability depends on the space-time path of interacting resources, the mechanism of transition, and specific critical affordance factors relating to the values of the variables for resources, people, and physical objects. We show how the model can identify the capabilities of resources to enable the capability to inject a drug and anaesthetise a patient.
Abstract:
Research on invention has focused on business invention, and little work has been conducted on the process and capability required of the individual inventor, or on the capabilities required for a device to be considered an invention. This paper synthesises the results of an empirical survey of ten inventor case studies with current research on invention and recent capability-affordance research to develop an integrated capability process model of the human capabilities for invention and the specific capabilities of an invented device. We identify eight necessary human effectivities required for individual invention capability and six key functional activities using these effectivities to deliver the functional capability of invention. We also identify key differences between invention and general problem-solving processes. Results suggest that inventive-step capability relies on a unique application of principles relating to a new combination of affordance chain with a new mechanism and/or space-time (affordance) path representing the novel way the device works, in conjunction with defined critical affordance operating factors that are the subject of the patent claims.
Abstract:
In numerical weather prediction, parameterisations are used to simulate missing physics in the model. This missing physics can be due to a lack of scientific understanding or to a lack of the computing power needed to address all the known physical processes. Parameterisations are a source of large uncertainty in a model, as the parameter values used in them cannot be measured directly and hence are often not well known, and the parameterisations themselves are also approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation (DA), such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential DA methods to estimate errors in the numerical models at each space-time point for each model equation. These errors are then fitted to pre-determined functional forms of missing physics or parameterisations that are based on prior information. The method is applied to a one-dimensional advection model with additive model error, and it is shown that it can accurately estimate parameterisations, with consistent error estimates. Furthermore, it is shown how the method depends on the quality of the DA results. The results indicate that this new method is a powerful tool for systematic model improvement.
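The fitting step, taking per-point model-error estimates from the DA system and regressing them onto candidate functional forms, can be sketched as follows (a minimal illustration under our own naming; the actual choice of candidate terms would come from the prior physical knowledge the authors mention):

```python
import numpy as np

def fit_parameterisation(errors, predictors):
    """Fit estimated model errors to candidate functional forms.

    errors     : shape (n_points,) model-error estimates, one per
                 space-time point, produced by the sequential DA step
    predictors : shape (n_points, n_terms) candidate terms evaluated at
                 the same points (e.g. u, du/dx, u**2 for an advection model)
    Returns the least-squares coefficients and the residual RMS, which
    serves as a consistency check on the fitted parameterisation.
    """
    coeffs, *_ = np.linalg.lstsq(predictors, errors, rcond=None)
    residual = errors - predictors @ coeffs
    return coeffs, np.sqrt(np.mean(residual ** 2))
```

A small residual RMS relative to the DA error estimates indicates that the chosen functional forms capture the missing physics at those points.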
Abstract:
Animals faced with conflicting cues, such as predatory threat and a given rewarding stimulus, must make rapid decisions to engage in defensive versus other appetitive behaviors. The brain mechanisms mediating such responses are poorly understood. However, the periaqueductal gray (PAG) seems particularly suitable for accomplishing this task. The PAG is thought to have, at least, two distinct general roles on the organization of motivated responses, i.e., one on the execution of defensive and reproductive behaviors, and the other on the motivational drive underlying adaptive responses. We have presently examined how the PAG would be involved in mediating the behavioral choice between mutually incompatible behaviors, such as reproduction or defense, when dams are exposed to pups and cat odor. First, we established the behavioral protocol and observed that lactating rats, simultaneously exposed to pups and cat odor, inhibited maternal behavior and expressed clear defensive responses. We have further revealed that cat odor exposure up-regulated Fos expression in the dorsal PAG, and that NMDA cytotoxic lesions therein were able to restore maternal responses, and, at the same time, block defensive responsiveness to cat odor. Potential paths mediating the dorsal PAG influences on the inhibition of appetitive (i.e., retrieving behavior) and consummatory (i.e., nursing) maternal responses are discussed. Overall, we were able to confirm the dual role of the PAG, where, in the present case, the dorsal PAG, apart from organizing defensive responses, also appears to account for the behavioral inhibition of non-defensive responses.
Abstract:
In a 2D parameter space, using nine experimental time series of a Clitia's circuit, we characterized three codimension-1 chaotic fibers parallel to a period-3 window. To show the local preservation of the properties of the chaotic attractors in each fiber, we applied the close-returns technique and two distinct topological methods. With the first topological method we calculated the linking numbers of the sets of unstable periodic orbits, and with the second we obtained the symbolic planes and the topological entropies through symbolic dynamics analysis.
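Topological entropy extracted from symbolic dynamics can be illustrated with the standard subshift-of-finite-type formula: the log of the spectral radius of the 0/1 transition matrix between symbols. A minimal sketch (the matrices below are textbook examples, not data from the circuit experiment):

```python
import numpy as np

def topological_entropy(transition):
    """Topological entropy of a subshift of finite type: the natural log
    of the largest-modulus eigenvalue of the 0/1 transition matrix,
    whose entry (i, j) is 1 iff symbol i may be followed by symbol j."""
    eigvals = np.linalg.eigvals(np.asarray(transition, dtype=float))
    return float(np.log(np.max(np.abs(eigvals))))

# Full 2-symbol shift: every transition allowed, entropy log 2
full_shift = [[1, 1], [1, 1]]
# Golden-mean shift: the word "11" forbidden, entropy log((1 + sqrt(5)) / 2)
golden_mean = [[1, 1], [1, 0]]
```

In practice the transition matrix would be estimated from the symbolic plane built out of the experimental time series.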
Abstract:
It is known that the actions of field theories on a noncommutative space-time can be written as certain modified (we call them theta-modified) classical actions already on the commutative space-time (by introducing a star product). The quantization of such modified actions then reproduces both the space-time noncommutativity and the usual quantum-mechanical features of the corresponding field theory. In the present article, we discuss the problem of constructing theta-modified actions for relativistic QM. We construct such actions for relativistic spinless and spinning particles. The key idea is to extract the theta-modified actions of the relativistic particles from path-integral representations of the corresponding noncommutative field theory propagators. We consider the Klein-Gordon and Dirac equations for the causal propagators in such theories, and then construct path-integral representations for these propagators. The effective actions in these representations we treat as theta-modified actions of the relativistic particles. To confirm this interpretation, we canonically quantize these actions and thus obtain the Klein-Gordon and Dirac equations of the noncommutative field theories. The theta-modified action of the relativistic spinning particle is a generalization of the Berezin-Marinov pseudoclassical action to the noncommutative case.
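The star product referred to above is, in the standard construction, the Moyal product; in the usual convention (our transcription, so the paper's θ^{μν} sign and factor conventions may differ):

```latex
(f \star g)(x)
 = f(x)\,\exp\!\Bigl(\tfrac{i}{2}\,\theta^{\mu\nu}\,
   \overleftarrow{\partial}_{\mu}\,\overrightarrow{\partial}_{\nu}\Bigr)\,g(x)
 = f g + \tfrac{i}{2}\,\theta^{\mu\nu}\,(\partial_{\mu} f)(\partial_{\nu} g)
   + \mathcal{O}(\theta^{2}),
\qquad
x^{\mu} \star x^{\nu} - x^{\nu} \star x^{\mu} = i\,\theta^{\mu\nu}.
```

The last relation shows how the commutative theory with the star product reproduces the space-time noncommutativity.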
Abstract:
We consider a four-dimensional field theory with target space CP(N), which constitutes a generalization of the usual Skyrme-Faddeev model defined on CP(1). We show that it possesses an integrable sector presenting an infinite number of local conservation laws, which are associated with the hidden symmetries of the zero-curvature representation of the theory in loop space. We construct an infinite class of exact solutions for that integrable submodel in which the fields are meromorphic functions of the combinations (x(1) + i x(2)) and (x(3) + x(0)) of the Cartesian coordinates of four-dimensional Minkowski space-time. Among those solutions we have static vortices and also vortices with waves traveling along them at the speed of light. The energy per unit length of the vortices shows an interesting and intricate interaction between the vortices and the waves.
Abstract:
Comprehension of the processes of formation of new organizational fields is the main objective that stimulated the theoretical reflection and empirical research that I present in this paper. My intention here is to uphold the potential for the application of seemingly dichotomous perspectives in terms of the objectivity/subjectivity dimension in the comprehension of the objective in question. The contribution of Foucault, with his concept of discourse, is linked to the proposal of critical constructivism represented by Latour and studies of science and technology. Juxtaposing these perspectives, I examined the dynamics of the biotechnological field on the basis of the dialectic of movements of demarcation/circularity, which is basically a simultaneous movement of (dis)construction of the boundaries of a field. The dialectic of demarcation/circularity is made up of the set of relations established between heterogeneous elements (institutions, economic and social processes, behavioral patterns, systems of norms, techniques, types of classification, forms of characterization); in other words, it finds ways of emerging in the course of discursive formations. This theoretical proposal, which incorporates a dimension overlooked in institutional analysis, especially in organization studies (power), has the advantage of contributing to enhancing comprehension of the dynamics of institutionalization. By proposing that institutional processes arise within discursive fields, the argument put forward is that such processes contribute to the productivity of the power relations in these fields. In empirical terms, I conducted descriptive and exploratory research directed at the biotechnology sector. The research was based on a historical perspective, since the analysis spans the period from the origins of genetic science (beginning of the 20th century) through to recent developments in biotechnology in the USA (beginning of the 21st century).
The USA was chosen as the locus of research principally because the structuring of the field of biotechnology originated in that country, subsequently spreading to other countries around the world. Starting from this theoretical and methodological framework, three discursive formations are highlighted: organization, information, and network. Each of the discursive formations is characterized by a dominant set of discourses that prepare the ground for the appearance and (trans)formation of the focus-objects under analysis. In this process, organizations appear in at least two ways: as boundary-organizations, which are important for understanding the movement of approximation between different discursive domains, and as new organizations, which accompany the (trans)formation of new fields, whereby prevailing discourses materialize at a given historical moment and contribute to breathing life into new discourses, which in turn spark off new power relations. Among the conclusions of this work, I would highlight the following: questioning the 'organizational' dimension of the fields; the relationship revealed not only between the discourses and institutionalized practices but also with the process of construction of legitimacy; and the redefinition of the concept of organizations, based on new conceptions relating to the limits of the topic and to the objectivity/subjectivity and space/time dichotomies.