992 results for Fuzzy Closure Space
Abstract:
Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information using various combination rules. It makes use of both the low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed by the SVT into low-frequency component images and support value image sequences. The low-frequency component images are then combined using a fuzzy combination rule that blends three sub-combination methods: (1) region feature maximum, (2) region feature weighted average, and (3) pixel value maximum; the support value image sequences are merged using a fuzzy combination rule that fuses two sub-combination methods: (1) pixel energy maximum and (2) region feature weighting. Trapezoidal membership functions defined over two newly introduced features, namely the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, are used to tune the fuzzy fusion process. Finally, the fused image is obtained by the inverse SVT. Experimental results from visual inspection and quantitative evaluation both indicate the superiority of the proposed method over its counterparts in fusing infrared polarization and intensity images.
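As a rough illustration of the kind of fuzzy combination described above, the sketch below implements a trapezoidal membership function and uses it to blend simplified stand-ins for the three low-frequency sub-rules. The breakpoints, the feature map and the specific blending weights are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises over [a, b], equals 1 on [b, c], falls over [c, d]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

def fuse_low_frequency(lf_pol, lf_int, diff_feature, breakpoints=(0.1, 0.3, 0.6, 0.8)):
    """Blend three simplified sub-rules using a trapezoidal membership of the
    low-frequency difference feature (all choices here are illustrative)."""
    mu = trapezoid(diff_feature, *breakpoints)           # membership in [0, 1]
    rule_max = np.maximum(lf_pol, lf_int)                # stand-in for region feature maximum
    rule_avg = 0.5 * (lf_pol + lf_int)                   # stand-in for region feature weighted average
    rule_pix = np.where(np.abs(lf_pol) > np.abs(lf_int), lf_pol, lf_int)  # pixel value maximum
    # Large difference feature -> favour the selecting rules; small -> favour averaging.
    return mu * 0.5 * (rule_max + rule_pix) + (1.0 - mu) * rule_avg
```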
Abstract:
The concept of slow vortical dynamics and its role in theoretical understanding is central to geophysical fluid dynamics. It leads, for example, to “potential vorticity thinking” (Hoskins et al. 1985). Mathematically, one imagines an invariant manifold within the phase space of solutions, called the slow manifold (Leith 1980; Lorenz 1980), to which the dynamics are constrained. Whether this slow manifold truly exists has been a major subject of inquiry over the past 20 years. It has become clear that an exact slow manifold is an exceptional case, restricted to steady or perhaps temporally periodic flows (Warn 1997). Thus the concept of a “fuzzy slow manifold” (Warn and Ménard 1986) has been suggested. The idea is that nearly slow dynamics will occur in a stochastic layer about the putative slow manifold. The natural question then is, how thick is this layer? In a recent paper, Ford et al. (2000) argue that Lighthill emission, the spontaneous emission of freely propagating acoustic waves by unsteady vortical flows, is applicable to the problem of balance, with the Mach number Ma replaced by the Froude number F, and that it is a fundamental mechanism for this fuzziness. They consider the rotating shallow-water equations and find emission of inertia–gravity waves at O(F²). This is rather surprising at first sight, because several studies of balanced dynamics with the rotating shallow-water equations have gone beyond second order in F and found only an exponentially small unbalanced component (Warn and Ménard 1986; Lorenz and Krishnamurthy 1987; Bokhove and Shepherd 1996; Wirosoetisno and Shepherd 2000). We have no technical objection to the analysis of Ford et al. (2000), but wish to point out that it depends crucially on R ≳ 1, where R is the Rossby number. This condition requires the ratio of the characteristic length scale of the flow L to the Rossby deformation radius L_R to go to zero in the limit F → 0. This is the low Froude number scaling of Charney (1963), which, while originally designed for the Tropics, has been argued to be also relevant to mesoscale dynamics (Riley et al. 1981). If L/L_R is fixed, however, then F → 0 implies R → 0, which is the standard quasigeostrophic scaling of Charney (1948; see, e.g., Pedlosky 1987). In this limit there is reason to expect the fuzziness of the slow manifold to be “exponentially thin,” and balance to be much more accurate than is consistent with (algebraic) Lighthill emission.
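For readers tracing the scaling argument, the relation between F, R and L/L_R follows from the standard shallow-water definitions; the lines below are a reconstruction under those definitions, not a quotation from the comment.

```latex
% Assuming F = U/\sqrt{gH}, \; R = U/(fL), \; L_R = \sqrt{gH}/f:
\[
  R = \frac{U}{fL}
    = \frac{U}{\sqrt{gH}} \cdot \frac{\sqrt{gH}/f}{L}
    = F\,\frac{L_R}{L},
  \qquad\text{so}\qquad
  \frac{L}{L_R} = \frac{F}{R}.
\]
% Hence L/L_R -> 0 as F -> 0 only if R stays of order one or larger,
% whereas fixed L/L_R forces R -> 0 with F (the quasigeostrophic limit).
```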
Abstract:
Backtracks aimed to investigate critical relationships between audio-visual technologies and live performance, emphasising technologies producing sound, contrasted with non-amplified bodily sound. Drawing on methodologies for studying avant-garde theatre, live performance and the performing body, it was informed by work in critical and cultural theory by, for example, Steven Connor and Jonathan Rée, on the body's experience and interpretation of sound. The performance explored how shifting national boundaries, mobile workforces, complex family relationships, cultural pluralities and possibilities for bodily transformation have compelled a re-evaluation of what it means to feel 'at home' in modernity. Using montages of live and mediated images, disrupted narratives and sound, it evoked the destabilised identities which characterise contemporary lived experience, and enacted the displacement of certainties provided by family and nation, community and locality, body and selfhood. Homer's Odyssey framed the performance: elements could be traced in the mise-en-scène; in the physical presence of Athene, the narrator and Penelope weaving mementoes from the past into her loom; and in voice-overs from Homer's work. The performance drew on personal experiences and improvisations, structured around notions of journey. It presented incomplete narratives, memories, repressed anxieties and dreams through different combinations of sounds, music, mediated images, movement, voice and bodily sound. The theme of travel was intensified by performers carrying suitcases and umbrellas, by soundtracks incorporating travel effects, and by the distorted video images of forms of transport playing across 'screens' which proliferated across the space (sails, umbrellas, the loom, actors' bodies). The performance experimented with giving sound and silence performative dimensions, including presenting sound in visual and imagistic ways, for example by using signs from deaf sign language. Through-composed soundtracks of live and recorded song, music, voice-over and noise exploited the viscerality of sound and disrupted cognitive interpretation through phenomenological, somatic experience, thereby displacing the impulse for closure/destination/home.
Abstract:
We study the degree to which Kraichnan–Leith–Batchelor (KLB) phenomenology describes two-dimensional energy cascades in α turbulence, governed by ∂θ/∂t + J(ψ, θ) = ν∇²θ + f, where θ = (−Δ)^(α/2)ψ is generalized vorticity, and ψ̂(k) = k^(−α) θ̂(k) in Fourier space. These models differ in spectral non-locality, and include surface quasigeostrophic flow (α = 1), regular two-dimensional flow (α = 2) and rotating shallow flow (α = 3), which is the isotropic limit of a mantle convection model. We re-examine arguments for dual inverse energy and direct enstrophy cascades, including Fjørtoft analysis, which we extend to general α, and point out their limitations. Using an α-dependent eddy-damped quasinormal Markovian (EDQNM) closure, we seek self-similar inertial range solutions and study their characteristics. Our present focus is not on coherent structures, which the EDQNM filters out, but on any self-similar and approximately Gaussian turbulent component that may exist in the flow and be described by KLB phenomenology. For this, the EDQNM is an appropriate tool. Non-local triads contribute increasingly to the energy flux as α increases. More importantly, the energy cascade is downscale in the self-similar inertial range for 2.5 < α < 10. At α = 2.5 and α = 10, the KLB spectra correspond, respectively, to enstrophy and energy equipartition, and the triad energy transfers and flux vanish identically. Eddy turnover time and strain rate arguments suggest the inverse energy cascade should obey KLB phenomenology and be self-similar for α < 4. However, downscale energy flux in the EDQNM self-similar inertial range for α > 2.5 leads us to predict that any inverse cascade for α ≥ 2.5 will not exhibit KLB phenomenology, and specifically the KLB energy spectrum. Numerical simulations confirm this: the inverse cascade energy spectrum for α ≥ 2.5 is significantly steeper than the KLB prediction, while for α < 2.5 we obtain the KLB spectrum.
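As context for the equipartition statement above, the generalized KLB inverse-cascade spectrum can be reconstructed by the usual dimensional argument; the form below is a standard estimate under that argument, not quoted from the paper.

```latex
% With generalized energy E = \tfrac{1}{2}\langle\psi\theta\rangle and
% upscale energy flux \varepsilon, the KLB-style dimensional estimate is
\[
  E(k) \sim C_\alpha\,\varepsilon^{2/3}\,k^{(\alpha-7)/3},
\]
% which recovers k^{-5/3} at \alpha = 2, matches the enstrophy-equipartition
% slope k^{1-\alpha} at \alpha = 5/2, and the energy-equipartition slope
% k^{+1} at \alpha = 10, consistent with the statement above.
```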
Abstract:
The ability of six scanning cloud radar scan strategies to reconstruct cumulus cloud fields for radiation study is assessed. Utilizing snapshots of clean and polluted cloud fields from large eddy simulations, an analysis is undertaken of error in both the liquid water path and monochromatic downwelling surface irradiance at 870 nm of the reconstructed cloud fields. Error introduced by radar sensitivity, choice of radar scan strategy, retrieval of liquid water content (LWC), and reconstruction scheme is explored. Given an infinitely sensitive radar and perfect LWC retrieval, domain-average surface irradiance biases are typically less than 3 W m−2 μm−1, corresponding to 5–10% of the cloud radiative effect (CRE). However, when using a realistic radar sensitivity of −37.5 dBZ at 1 km, optically thin areas and edges of clouds are difficult to detect due to their low radar reflectivity; in clean conditions, overestimates are of order 10 W m−2 μm−1 (~20% of the CRE), but in polluted conditions, where the droplets are smaller, this increases to 10–26 W m−2 μm−1 (~40–100% of the CRE). Drizzle drops are also problematic; if treated as cloud droplets, reconstructions are poor, leading to large underestimates of 20–46 W m−2 μm−1 in domain-average surface irradiance (~40–80% of the CRE). Nevertheless, a synergistic retrieval approach combining the detailed cloud structure obtained from scanning radar with the droplet-size information and location of cloud base gained from other instruments would potentially make accurate solar radiative transfer calculations in broken cloud possible for the first time.
Abstract:
The retrieval (estimation) of sea surface temperatures (SSTs) from space-based infrared observations is increasingly performed using retrieval coefficients derived from radiative transfer simulations of top-of-atmosphere brightness temperatures (BTs). Typically, an estimate of SST is formed from a weighted combination of BTs at a few wavelengths, plus an offset. This paper addresses two questions about the radiative transfer modeling approach to deriving these weighting and offset coefficients. How precisely specified do the coefficients need to be in order to obtain the required SST accuracy (e.g., scatter <0.3 K in week-average SST, bias <0.1 K)? And how precisely is it actually possible to specify them using current forward models? The conclusions are that weighting coefficients can be obtained with adequate precision, while the offset coefficient will often require an empirical adjustment of the order of a few tenths of a kelvin against validation data. Thus, a rational approach to defining retrieval coefficients is one of radiative transfer modeling followed by offset adjustment. The need for this approach is illustrated from experience in defining SST retrieval schemes for operational meteorological satellites. A strategy is described for obtaining the required offset adjustment, and the paper highlights some of the subtler aspects involved with reference to the example of SST retrievals from the imager on the geostationary satellite GOES-8.
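The "weighted combination of BTs plus an offset" takes the schematic form below; the symbols are illustrative notation, not necessarily the paper's.

```latex
\[
  \widehat{\mathrm{SST}} = a_0 + \sum_{i=1}^{n} a_i\,\mathrm{BT}(\lambda_i),
\]
% where the weights a_i are derived from radiative transfer simulations and
% the offset a_0 typically needs an empirical adjustment of a few tenths of
% a kelvin against validation data.
```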
Abstract:
In homogeneous environments, by overturning the possibility of competitive exclusion among phytoplankton species, and by regulating the dynamics of overall plankton population, toxin-producing phytoplankton (TPP) potentially help in maintaining plankton diversity—a result shown recently. Here, I explore the competitive effects of TPP on phytoplankton and zooplankton species undergoing spatial movements in the subsurface water. The spatial interactions among the species are represented in the form of reaction-diffusion equations. Suitable parametric conditions under which Turing patterns may or may not evolve are investigated. Spatiotemporal distributions of species biomass are simulated using the diffusivity assumptions realistic for natural planktonic systems. The study demonstrates that spatial movements of planktonic systems in the presence of TPP generate and maintain inhomogeneous biomass distribution of competing phytoplankton, as well as grazer zooplankton, thereby ensuring the persistence of multiple species in space and time. The overall results may potentially explain the sustainability of biodiversity and the spatiotemporal emergence of phytoplankton and zooplankton species under the influence of TPP combined with their physical movement in the subsurface water.
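The reaction-diffusion setting referred to above has the generic form sketched below; the number of species and the functional forms are left abstract because the abstract does not specify them.

```latex
% Generic reaction-diffusion system for n interacting plankton biomasses u_i
% (competing phytoplankton, TPP and grazer zooplankton); f_i are the local
% competition/grazing kinetics and D_i the diffusivities:
\[
  \frac{\partial u_i}{\partial t}
    = f_i(u_1,\dots,u_n) + D_i\,\nabla^2 u_i,
  \qquad i = 1,\dots,n,
\]
% with Turing patterns arising when the kinetics and the ratio of the
% diffusivities satisfy the usual diffusion-driven instability conditions.
```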
Abstract:
Aim Earth observation (EO) products are a valuable alternative to spectral vegetation indices. We discuss the availability of EO products for analysing patterns in macroecology, particularly related to vegetation, on a range of spatial and temporal scales. Location Global. Methods We discuss four groups of EO products: land cover/cover change, vegetation structure and ecosystem productivity, fire detection, and digital elevation models. We address important practical issues arising from their use, such as assumptions underlying product generation, product accuracy and product transferability between spatial scales. We investigate the potential of EO products for analysing terrestrial ecosystems. Results Land cover, productivity and fire products are generated from long-term data using standardized algorithms to improve reliability in detecting change of land surfaces. Their global coverage renders them useful for macroecology. Their spatial resolution (e.g. GLOBCOVER vegetation, 300 m; MODIS vegetation and fire, ≥ 500 m; ASTER digital elevation, 30 m) can be a limiting factor. Canopy structure and productivity products are based on physical approaches and thus are independent of biome-specific calibrations. Active fire locations are provided in near-real time, while burnt area products show the actual area burnt by fire. EO products can be assimilated into ecosystem models, and their validation information can be employed to calculate uncertainties during subsequent modelling. Main conclusions Owing to their global coverage and long-term continuity, EO end products can significantly advance the field of macroecology. EO products allow analyses of spatial biodiversity, seasonal dynamics of biomass and productivity, and consequences of disturbances on regional to global scales. Remaining drawbacks include limited interoperability between products from different sensors and accuracy issues arising from differences in the assumptions and models underlying the generation of different EO products. Our review explains the nature of EO products and how they relate to particular ecological variables across scales to encourage their wider use in ecological applications.
Abstract:
Practical realisation of Cyborgs opens up significant new opportunities in many fields. In particular, when it comes to space travel, many of the limitations faced by humans in stand-alone form are transposed by the adoption of a cyborg persona. In this article a look is taken at different types of Brain-Computer interface which can be employed to realise Cyborgs, biology-technology hybrids. The approach taken is a practical one with applications in mind, although some wider implications are also considered. In particular, results from experiments are discussed in terms of their meaning and application possibilities. The article is written from the perspective of scientific experimentation opening up realistic possibilities to be faced in the future, rather than giving conclusive comments on the technologies employed. Human implantation and the merger of biology and technology are, though, important elements.
Abstract:
The discrete Fourier transform spread OFDM (DFTS-OFDM) based single-carrier frequency division multiple access (SC-FDMA) has been widely adopted due to the lower peak-to-average power ratio (PAPR) of its transmit signals compared with OFDM. However, offset modulation, which has lower PAPR than general modulation, cannot be directly applied to the existing SC-FDMA. When pulse-shaping filters are employed to further reduce the envelope fluctuation of SC-FDMA transmit signals, the spectral efficiency degrades as well. In order to overcome these limitations of conventional SC-FDMA, this paper for the first time investigates cyclic prefixed OQAM-OFDM (CP-OQAM-OFDM) based SC-FDMA transmission with adjustable user bandwidth and space-time coding. Firstly, we propose CP-OQAM-OFDM transmission with unequally spaced subbands. We then apply it to SC-FDMA transmission and propose an SC-FDMA scheme with the following features: a) the transmit signal of each user is offset-modulated single-carrier with frequency-domain pulse-shaping; b) the bandwidth of each user is adjustable; c) the spectral efficiency does not decrease with increasing roll-off factors. To combat both inter-symbol interference and multiple access interference in frequency-selective fading channels, a joint linear minimum mean square error frequency-domain equalization using a priori information is developed with low complexity. Subsequently, we construct space-time codes for the proposed SC-FDMA. Simulation results confirm that the proposed CP-OQAM-OFDM scheme is effective yet of low complexity.
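The PAPR advantage that motivates SC-FDMA can be seen in a toy comparison of plain OFDM against DFT-spread OFDM with localized subcarrier mapping. The subcarrier counts and QPSK mapping below are illustrative choices, and the sketch deliberately omits the CP-OQAM, pulse-shaping and space-time coding parts of the proposed scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

M, N = 64, 256   # user subcarriers and total subcarriers (illustrative sizes)
qpsk = ((2 * rng.integers(0, 2, M) - 1) + 1j * (2 * rng.integers(0, 2, M) - 1)) / np.sqrt(2)

# Plain OFDM: QPSK symbols mapped directly onto M of the N subcarriers.
X_ofdm = np.zeros(N, dtype=complex)
X_ofdm[:M] = qpsk
x_ofdm = np.fft.ifft(X_ofdm) * np.sqrt(N)

# DFT-spread OFDM (SC-FDMA-like): M-point DFT first, then localized mapping.
X_dfts = np.zeros(N, dtype=complex)
X_dfts[:M] = np.fft.fft(qpsk) / np.sqrt(M)
x_dfts = np.fft.ifft(X_dfts) * np.sqrt(N)

print(f"OFDM PAPR:       {papr_db(x_ofdm):.1f} dB")
print(f"DFT-S-OFDM PAPR: {papr_db(x_dfts):.1f} dB")  # typically a few dB lower
```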
Abstract:
Changes in the cultures and spaces of death during the Victorian era reveal the shifting conceptualisations and mobilisations of class in this period. Using the example of Brookwood Necropolis, established 1852 in response to the contemporary burial reform debate, the paper explores tensions within the sanitary reform movement, 1853–1903. Whilst reformist ideology grounded the cemetery's practices in a discourse of inclusion, one of the consequences of reform was to reinforce class distinctions. Combined with commercial imperatives and the modern impulse towards separation of living and dead, this aspect of reform enacted a counter-discourse of alienation. The presence of these conflicting strands in the spaces and practices of the Necropolis and their changes during the time period reflect wider urban trends.
Abstract:
The anthropogenic heat emissions generated by human activities in London are analysed in detail for 2005–2008 and considered in the context of long-term past and future trends (1970–2025). Emissions from buildings, road traffic and human metabolism are finely resolved in space (200 × 200 m²) and time (30 min). Software to compute and visualize the results is provided. The annual mean anthropogenic heat flux for Greater London is 10.9 W m−2 for 2005–2008, with the highest peaks in the central activities zone (CAZ) associated with extensive service industry activities. Towards the outskirts of the city, emissions from the domestic sector and road traffic dominate. Anthropogenic heat is mostly emitted as sensible heat, with a latent heat fraction of 7.3% and a heat-to-wastewater fraction of 12%; the implications related to the use of evaporative cooling towers are briefly addressed. Projections indicate a further increase of heat emissions within the CAZ in the next two decades, related to further intensification of activities within this area.
Abstract:
What is it that gives celebrities the voice and authority to do and say the things they do in the realm of development politics? Asked another way, how is celebrity practised and, simultaneously, how does this praxis make celebrity, personas, politics and, indeed, celebrities themselves? In this article, we explore this ‘celebrity praxis’ through the lens of the creation of the contemporary ‘development celebrity’ in those stars working for development writ large in the so-called Third World. Drawing on work in science studies, material cultures and the growing geo-socio-anthropologies of things, the key to understanding the material practices embedded in and creating development celebrity networks is the multiple and complex circulations of the everyday and bespectacled artefacts of celebrity. Conceptualised as the ‘celebrity–consumption–compassion complex’, the performances of development celebrities are as much about everyday events, materials, technologies, emotions and consumer acts as they are about the mediated and liquidised constructions of the stars who now ‘market’ development. Moreover, this complex is constructed by and constructs what we are calling ‘star/poverty space’ that works to facilitate the ‘expertise’ and ‘authenticity’ and, thus, elevated voice and authority, of development celebrities through poverty tours, photoshoots, textual and visual diaries, websites and tweets. In short, the creation of star/poverty space is performed through a kind of ‘materiality of authenticity’ that is at the centre of the networks of development celebrity. The article concludes with several brief observations about the politics, possibilities and problematics of development celebrities and the star/poverty spaces that they create.