88 results for 3-DIMENSIONAL ARCHITECTURE
Abstract:
We study two-dimensional (2D) turbulence in a doubly periodic domain driven by a monoscale-like forcing and damped by various dissipation mechanisms of the form ν_μ(−Δ)^μ. By “monoscale-like” we mean that the forcing is applied over a finite range of wavenumbers k_min ≤ k ≤ k_max, and that the ratio of enstrophy injection η ≥ 0 to energy injection ε ≥ 0 is bounded by k_min²ε ≤ η ≤ k_max²ε. Such a forcing is frequently considered in theoretical and numerical studies of 2D turbulence. It is shown that for μ ≥ 0 the asymptotic behaviour satisfies ∥u∥₁² ≤ k_max²∥u∥², where ∥u∥² and ∥u∥₁² are the energy and enstrophy, respectively. If the condition of monoscale-like forcing holds only in a time-mean sense, then the inequality holds in the time mean. It is also shown that for Navier–Stokes turbulence (μ = 1), the time-mean enstrophy dissipation rate is bounded from above by 2ν₁k_max². These results place strong constraints on the spectral distribution of energy and enstrophy and of their dissipation, and thereby on the existence of energy and enstrophy cascades, in such systems. In particular, the classical dual-cascade picture is shown to be invalid for forced 2D Navier–Stokes turbulence (μ = 1) when it is forced in this manner. Inclusion of Ekman drag (μ = 0) along with molecular viscosity permits a dual cascade, but is incompatible with the log-modified −3 power law for the energy spectrum in the enstrophy-cascading inertial range. In order to achieve the latter, it is necessary to invoke an inverse viscosity (μ < 0). These constraints on permissible power laws apply for any spectrally localized forcing, not just for monoscale-like forcing.
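Both the forcing condition and the enstrophy–energy inequality quoted above are spectral comparisons of the same Poincaré type; a minimal sketch, assuming the standard energy-spectrum notation E(k) (which the abstract does not spell out):

```latex
% Energy and enstrophy in spectral form:
\|u\|^2 = \int_0^\infty E(k)\,dk, \qquad
\|u\|_1^2 = \int_0^\infty k^2 E(k)\,dk .
% For any spectrum supported on k \le k_{\max} (as the injection spectrum is,
% for forcing confined to k_{\min} \le k \le k_{\max}):
\int_0^{k_{\max}} k^2 E(k)\,dk \;\le\; k_{\max}^2 \int_0^{k_{\max}} E(k)\,dk .
% Applied to the injection range this gives the forcing bound
% k_{\min}^2 \varepsilon \le \eta \le k_{\max}^2 \varepsilon ;
% the theorem transfers the same comparison to the solution norms,
% \|u\|_1^2 \le k_{\max}^2 \|u\|^2 .
```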
Abstract:
A parallel pipelined array of cells suitable for real-time computation of histograms is proposed. The cell architecture builds on previous work, now allowing operation on a stream of data at one pixel per clock cycle. The new cell is better suited to interfacing with camera sensors or with microprocessors with 8-bit data buses, which are common in consumer digital cameras. Arrays using the proposed cells are obtained via C-slow retiming techniques and can be clocked at a 65% higher frequency than previous arrays, achieving over 80% of the performance of two-pixels-per-clock-cycle parallel pipelined arrays.
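In software terms, the computation that the cell array performs is a streaming histogram update, one pixel consumed per clock cycle; a minimal behavioural model (the function name and default bin count are illustrative, not taken from the paper):

```python
def stream_histogram(pixels, bins=256):
    """Behavioural model of a one-pixel-per-clock histogram array:
    each 'cycle' consumes one 8-bit pixel and increments exactly one
    bin counter, as the hardware cell addressed by that value would."""
    hist = [0] * bins
    for p in pixels:      # one pixel per clock cycle
        hist[p] += 1      # the addressed cell increments its counter
    return hist

# Example: a tiny 5-pixel "frame" of 8-bit values.
frame = [0, 255, 0, 128, 128]
h = stream_histogram(frame)
```

The hardware version pipelines the compare-and-increment across the cell array so the increment latency never stalls the input stream; the model above captures only the functional behaviour, not the timing.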
Abstract:
The Kagome lattice, comprising a two-dimensional array of corner-sharing equilateral triangles, is central to the exploration of magnetic frustration. In such a lattice, antiferromagnetic coupling between ions in triangular plaquettes prevents all of the exchange interactions from being simultaneously satisfied, and a variety of novel magnetic ground states may result at low temperature. Experimental realization of a Kagome lattice remains difficult. The jarosite family of materials, of nominal composition AM₃(SO₄)₂(OH)₆ (A = monovalent cation; M = Fe³⁺, Cr³⁺), offers perhaps one of the most promising manifestations of the phenomenon of magnetic frustration in two dimensions. The magnetic properties of jarosites are, however, extremely sensitive to the degree of coverage of magnetic sites. Consequently, there is considerable interest in the use of soft chemical techniques for the design and synthesis of novel materials in which to explore the effects of spin, degree of site coverage and connectivity on magnetic frustration.
Abstract:
The energy-Casimir stability method, also known as the Arnold stability method, has been widely used in fluid dynamical applications to derive sufficient conditions for nonlinear stability. The most commonly studied system is two-dimensional Euler flow. It is shown that the set of two-dimensional Euler flows satisfying the energy-Casimir stability criteria is empty for two important cases: (i) domains having the topology of the sphere, and (ii) simply-connected bounded domains with zero net vorticity. The results apply to both the first and the second of Arnold’s stability theorems. In the spirit of Andrews’ theorem, this puts a further limitation on the applicability of the method. © 2000 American Institute of Physics.
Abstract:
The non-quadratic conservation laws of the two-dimensional Euler equations are used to show that the gravest modes in a doubly-periodic domain with aspect ratio L = 1 are stable up to translations (or structurally stable) for finite-amplitude disturbances. This extends a previous result based on conservation of energy and enstrophy alone. When L ≠ 1, a saturation bound is established for the mode with wavenumber |k| = L⁻¹ (the next-gravest mode), which is linearly unstable. The method is applied to prove the nonlinear structural stability of planetary wave two on a rotating sphere.
Abstract:
The quantitative effects of uniform strain and background rotation on the stability of a strip of constant vorticity (a simple shear layer) are examined. The thickness of the strip decreases in time under the strain, so it is necessary to formulate the linear stability analysis for a time-dependent basic flow. The results show that even a strain rate γ (scaled with the vorticity of the strip) as small as 0.25 suppresses the conventional Rayleigh shear instability mechanism, in the sense that the r.m.s. wave steepness cannot amplify by more than a certain factor, and must eventually decay. For γ < 0.25 the amplification factor increases as γ decreases; however, it is only 3 when γ = 0.065. Numerical simulations confirm the predictions of linear theory at small steepness and predict a threshold value necessary for the formation of coherent vortices. The results help to explain the impression, from numerous simulations of two-dimensional turbulence reported in the literature, that filaments of vorticity only infrequently roll up into vortices. The stabilization effect may be expected to extend to two- and three-dimensional quasi-geostrophic flows.
Abstract:
Faced with the strongly nonlinear and apparently random behaviour of the energy-containing scales in the atmosphere, geophysical fluid dynamicists have attempted to understand the synoptic-scale atmospheric flow within the context of two-dimensional homogeneous turbulence theory (e.g. FJØRTOFT [1]; LEITH [2]). However, atmospheric observations (BOER and SHEPHERD [3] and Fig. 1) show that the synoptic-scale transient flow evolves in the presence of a planetary-scale, quasi-stationary background flow which is approximately zonal (east-west). Classical homogeneous 2-D turbulence theory is therefore not strictly applicable to the transient flow. One is led instead to study 2-D turbulence in the presence of a large-scale (barotropically stable) zonal jet inhomogeneity.
Abstract:
In recent years, the importance of the corporate brand has grown significantly, and companies increasingly seek to strengthen their corporate brands. The corporate brand image can be strengthened through portfolio advertising as a technique of impression management. This mechanism works only if important variables are considered, such as the fit between product brands, the number of product brands, and the consumers' depth of processing. Based on three experiments, the benefits of portfolio advertising for the corporate brand and its product brands are shown, and practical implications are discussed.
Abstract:
We study the degree to which Kraichnan–Leith–Batchelor (KLB) phenomenology describes two-dimensional energy cascades in α-turbulence, governed by ∂θ/∂t + J(ψ, θ) = ν∇²θ + f, where θ = (−Δ)^(α/2)ψ is the generalized vorticity and ψ̂(k) = k^(−α)θ̂(k) in Fourier space. These models differ in spectral non-locality, and include surface quasigeostrophic flow (α = 1), regular two-dimensional flow (α = 2) and rotating shallow flow (α = 3), which is the isotropic limit of a mantle convection model. We re-examine arguments for dual inverse energy and direct enstrophy cascades, including Fjørtoft analysis, which we extend to general α, and point out their limitations. Using an α-dependent eddy-damped quasinormal Markovian (EDQNM) closure, we seek self-similar inertial-range solutions and study their characteristics. Our present focus is not on coherent structures, which the EDQNM filters out, but on any self-similar and approximately Gaussian turbulent component that may exist in the flow and be described by KLB phenomenology. For this, the EDQNM is an appropriate tool. Non-local triads contribute increasingly to the energy flux as α increases. More importantly, the energy cascade is downscale in the self-similar inertial range for 2.5 < α < 10. At α = 2.5 and α = 10, the KLB spectra correspond, respectively, to enstrophy and energy equipartition, and the triad energy transfers and flux vanish identically. Eddy-turnover-time and strain-rate arguments suggest that the inverse energy cascade should obey KLB phenomenology and be self-similar for α < 4. However, the downscale energy flux in the EDQNM self-similar inertial range for α > 2.5 leads us to predict that any inverse cascade for α ≥ 2.5 will not exhibit KLB phenomenology, and specifically the KLB energy spectrum. Numerical simulations confirm this: the inverse-cascade energy spectrum for α ≥ 2.5 is significantly steeper than the KLB prediction, while for α < 2.5 we obtain the KLB spectrum.
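The relation between ψ and θ fixes the two quadratic invariants that underlie the Fjørtoft-type dual-cascade argument; a brief sketch in the abstract's notation (the spectral forms are a reconstruction from the stated definitions, not quoted from the paper):

```latex
% From \theta = (-\Delta)^{\alpha/2}\psi, i.e. \hat\theta(\mathbf k) = k^{\alpha}\hat\psi(\mathbf k):
E_\alpha = \tfrac12 \int k^{-\alpha}\,|\hat\theta(\mathbf k)|^2 \, d\mathbf k
  \quad\text{(generalized energy)},
\qquad
Z_\alpha = \tfrac12 \int |\hat\theta(\mathbf k)|^2 \, d\mathbf k
  \quad\text{(generalized enstrophy)}.
```

Since the generalized-enstrophy spectrum is k^α times the energy spectrum, simultaneous conservation of both invariants forces transfers of E_α and Z_α to opposite ends of the spectrum, which is the argument the abstract extends to general α.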
Abstract:
An open-framework indium selenide, [C7H10N][In9Se14], has been prepared under solvothermal conditions in the presence of 3,5-dimethylpyridine, and characterized by single-crystal diffraction, thermogravimetry, elemental analysis, FTIR spectroscopy and UV-Vis diffuse reflectance. The crystal structure of [C7H10N][In9Se14] contains an unusual building unit, in which corner-linked and edge-linked InSe₄⁵⁻ tetrahedra coexist. The presence of one-dimensional circular channels, of ca. 6 Å diameter, results in approximately 25% of solvent-accessible void space.
Abstract:
The self-assembly of proteins and peptides into β-sheet-rich amyloid fibers is a process that has gained notoriety because of its association with human diseases and disorders. Spontaneous self-assembly of peptides into nonfibrillar supramolecular structures can also provide a versatile and convenient mechanism for the bottom-up design of biocompatible materials with functional properties favoring a wide range of practical applications.[1] One subset of these fascinating and potentially useful nanoscale constructions are the peptide nanotubes, elongated cylindrical structures with a hollow center bounded by a thin wall of peptide molecules.[2] A formidable challenge in optimizing and harnessing the properties of nanotube assemblies is to gain atomistic insight into their architecture, and to elucidate precisely how the tubular morphology is constructed from the peptide building blocks. Some of these fine details have been elucidated recently with the use of magic-angle-spinning (MAS) solid-state NMR (SSNMR) spectroscopy.[3] MAS SSNMR measurements of chemical shifts and through-space interatomic distances provide constraints on peptide conformation (e.g., β-strands and turns) and quaternary packing. We describe here a new application of a straightforward SSNMR technique which, when combined with FTIR spectroscopy, reports quantitatively on the orientation of the peptide molecules within the nanotube structure, thereby providing an additional structural constraint not accessible to MAS SSNMR.
Abstract:
We propose, first, a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk–return trade-offs. Using Principal Component Analysis, we show that the SGG lottery-panel task is capable of capturing two dimensions of individual risky decision making, i.e. subjects' average risk taking and their sensitivity to variations in risk–return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared to low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted by PT; and gender differences, i.e. males being consistently less risk averse than females, but both genders being similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk–returns. We therefore conclude from our data that an “economic anomaly” emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that “although in many domains, paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms” (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only to describe behavior under risk but also to explain behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high-stakes and mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and the within-subjects level. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments, females prefer safer lotteries than males do. Regarding our second dimension of risk attitudes, we observe, in all treatments, an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments compared to low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is not so clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, whilst at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that sensitivity is lower in the mixed-lottery treatments (SL and LL) than in the gains-only treatments. In general, sensitivity to risk–return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk-elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while at the same time allowing for a robust context, compatible with present and even more complex future descriptions of human attitudes towards risk.
Abstract:
A set of high-resolution radar observations of convective storms has been collected to evaluate such storms in the UK Met Office Unified Model during the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms). The 3-GHz Chilbolton Advanced Meteorological Radar was set up with a scan-scheduling algorithm to automatically track convective storms identified in real time from the operational rainfall radar network. More than 1,000 storm observations gathered over fifteen days in 2011 and 2012 are used to evaluate the model under various synoptic conditions supporting convection. In terms of the detailed three-dimensional morphology, storms in the 1500-m grid-length simulations are shown to produce horizontal structures a factor of 1.5–2 wider than in the radar observations. A set of nested model runs at grid lengths down to 100 m shows that the models converge in terms of storm width, but the storm structures in the simulations with the smallest grid lengths are too narrow and too intense compared to the radar observations. The modelled storms were surrounded by a region of drizzle without ice reflectivities above 0 dBZ aloft, which was related to the dominance of ice crystals and was improved by restricting the ice particle habit to aggregates only. Simulations with graupel outperformed the standard configuration for heavy-rain profiles, but the storm structures were a factor of 2 too wide and the convective cores 2 km too deep.
Abstract:
Changes in users' requirements drive the evolution of an information system. Such evolution in turn moves the atomic services that provide functional operations from one state of composition to another. A challenging issue associated with this evolution is to ensure that the resulting service composition remains rational. This paper presents a method based on a Service Composition Atomic-Operation Set (SCAOS). SCAOS defines 2 classes of atomic operations and 13 kinds of basic service compositions to support the state-change process using Workflow Nets. The workflow net provides the algorithmic capability to compose the required services rationally and to keep any subsequent changes to the services in a different composition rational as well. The method can improve the adaptability of information systems to ever-changing business requirements in dynamic environments.