982 results for Oxygénation Sensible
Abstract:
At its core, this research explores how different hegemonic discourses about the body are constructed in society, and examines how the body is represented and observed, both materially and discursively, through a processual analysis of three works from the late period of the artist Lygia Clark. The fundamental aim was to understand how this artist's aesthetic practice emerged as a meaning-making practice that decolonizes the notions of body and art built under the parameters of Eurocentric reason, opening the way to other modes of understanding and making art. The thesis comprises three parts. The first chapter situates the political, social, cultural and aesthetic context of Lygia Clark's work, an important reference for understanding her aesthetic practice; it develops an analysis of Clark's proposal through a corpus of three works, works that question the aesthetic regime by investigating subjectivity and the complexity of human corporality. This theoretical frame of reference leads to the second chapter, which examines border thinking as the theoretical-methodological framework. To engage with Lygia Clark's proposal, it is essential to understand this category as critically situated in and from the epistemologies of the South, as one of the places of enunciation, which invites us to reflect on and explore symbolic borders as a zone of fusion and encounter. The same chapter traces the origin of the term aesthetics, and of the so-called "other" aesthetics, from the matrices of power generated by colonization. Finally, the third chapter maps a route or cartography of the elements that make up her work, as well as the convergences and divergences between the spaces of artistic creativity and psychotherapeutic spaces. The underlying premise is that aesthetic experience is not limited exclusively to artists and art specialists, and that the body, perception and the sensible are not matters for scientists and philosophers alone, but concern the forms of life of each and every one of us, individually and collectively.
Abstract:
We investigate the question of how many facets are needed to represent the energy balance of an urban area by developing simplified 3-, 2- and 1-facet versions of a 4-facet energy balance model of two-dimensional streets and buildings. The 3-facet model simplifies the 4-facet model by averaging over the canyon orientation, which results in similar net shortwave and longwave balances for both wall facets, but maintains the asymmetry in the heat fluxes within the street canyon. For the 2-facet model, on the assumption that the wall and road temperatures are equal, the road and wall facets can be combined mathematically into a single street-canyon facet with effective values of the heat transfer coefficient, albedo, emissivity and thermodynamic properties, without further approximation. The 1-facet model requires the additional assumption that the roof temperature is also equal to the road and wall temperatures. Idealised simulations show that the geometry and material properties of the walls and road lead to a large heat capacity of the combined street canyon, whereas the roof behaves like a flat surface with low heat capacity. This means that the magnitudes of the diurnal temperature variations of the street-canyon facets are broadly similar and much smaller than that of the roof facet. Consequently, the approximation that the street-canyon facets have similar temperatures is sound, and the road and walls can be combined into a single facet. The roof behaves very differently and a separate roof facet is required. Consequently, the 2-facet model performs similarly to the 4-facet model, while the 1-facet model does not. The models are compared with previously published observations collected in Mexico City. Although the 3- and 2-facet models perform better than the 1-facet model, the present models are unable to represent the phase of the sensible heat flux. This result is consistent with previous model comparisons, and we argue that this feature of the data cannot be produced by a single-column model. We conclude that a 2-facet model is necessary, and for numerical weather prediction sufficient, to model an urban surface, and that this conclusion is robust and therefore applicable to more general geometries.
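The combination step admits a compact illustration. As a minimal sketch, assuming simple area weighting over the facet areas (the paper derives its effective values exactly from the equal-temperature assumption, and its weights may differ, e.g. by incident radiation):

```latex
% Illustrative only: with the equal-temperature assumption
% T_road = T_wall = T_c, an effective canyon-facet property x_c
% (albedo, emissivity, heat-transfer coefficient or heat capacity)
% can be formed as an area-weighted combination of the facet values.
\[
  x_c \;=\; \frac{A_{\mathrm{road}}\, x_{\mathrm{road}} \;+\; A_{\mathrm{wall}}\, x_{\mathrm{wall}}}
                 {A_{\mathrm{road}} + A_{\mathrm{wall}}},
  \qquad x \in \{\alpha,\ \varepsilon,\ h,\ C\}.
\]
```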
Abstract:
The entropy budget of the coupled atmosphere–ocean general circulation model HadCM3 is calculated. Estimates of the different entropy sources and sinks of the climate system are obtained directly from the diabatic heating terms, and an approximate estimate of the planetary entropy production is also provided. The rate of material entropy production of the climate system is found to be ∼50 mW m−2 K−1, a value intermediate in the range 30–70 mW m−2 K−1 previously reported from different models. The largest part of this is due to sensible and latent heat transport (∼38 mW m−2 K−1). Another 13 mW m−2 K−1 is due to dissipation of kinetic energy in the atmosphere by friction and Reynolds stresses. Numerical entropy production in the atmosphere dynamical core is found to be about 0.7 mW m−2 K−1. The material entropy production within the ocean due to turbulent mixing is ∼1 mW m−2 K−1, a very small contribution to the material entropy production of the climate system. The rate of change of entropy of the model climate system is about 1 mW m−2 K−1 or less, which is comparable with the typical size of the fluctuations of the entropy sources due to interannual variability, and a more accurate closure of the budget than achieved by previous analyses. Results are similar for FAMOUS, which has a lower spatial resolution but similar formulation to HadCM3, while more substantial differences are found with respect to other models, suggesting that the formulation of the model has an important influence on the climate entropy budget. Since this is the first diagnosis of the entropy budget in a climate model of the type and complexity used for projection of twenty-first century climate change, it would be valuable if similar analyses were carried out for other such models.
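The quoted components close the budget to within rounding, as a quick sum of the figures above shows:

```latex
% Worked sum of the entropy sources quoted in the abstract
% (all in mW m^{-2} K^{-1}):
\[
  \underbrace{38}_{\text{sensible + latent transport}}
  \;+\; \underbrace{13}_{\text{frictional dissipation}}
  \;+\; \underbrace{\sim 1}_{\text{ocean mixing}}
  \;\approx\; 52\ \mathrm{mW\,m^{-2}\,K^{-1}},
\]
```

consistent with the quoted total of ∼50 mW m−2 K−1 given the rounding of the individual terms, and with the numerical entropy production (∼0.7 mW m−2 K−1) and the residual rate of change (∼1 mW m−2 K−1) of the same order as the interannual fluctuations.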
Abstract:
Twenty-five small soil-filled Perspex boxes arranged in a square, with dwarf sunflowers growing in them, were used to study micro-scale advection. Hydrological heterogeneity was introduced by applying two different amounts of irrigation water (low-irrigation, L, versus high-irrigation, H). The nine central boxes (4 H, 4 L and 1 bare box) were precision weighing lysimeters, yielding diurnal measurements of evaporation. After the onset of soil water stress, a large difference in latent heat flux (up to 4-fold) was observed between the lysimeters of the H and L treatments, mainly caused by large differences between H and L stomatal conductance values. This resulted in micro-advection, causing the H soil-sunflower systems to evaporate well above the equilibrium latent heat flux. The occurrence of micro-advective enhancement was reflected in large values of the Priestley-Taylor constant (often larger than 2.0) and generally negative values of sensible heat flux for the H treatment.
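For reference, the Priestley-Taylor coefficient against which the enhancement is judged is defined through the standard relation (symbols here are the conventional ones, not taken from the paper):

```latex
% Priestley-Taylor relation: lambda E = latent heat flux, R_n = net
% radiation, G = soil heat flux, Delta = slope of the saturation
% vapour-pressure curve, gamma = psychrometric constant.
\[
  \lambda E \;=\; \alpha_{\mathrm{PT}}\,\frac{\Delta}{\Delta + \gamma}\,(R_n - G).
\]
```

A value of α_PT ≈ 1.26 characterises equilibrium evaporation from well-watered surfaces, so the values above 2.0 reported for the H treatment indicate evaporation well above the equilibrium rate, consistent with micro-advective enhancement.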
Abstract:
There are now considerable expectations that semi-distributed models are useful tools for supporting catchment water quality management. However, insufficient attention has been given to evaluating the uncertainties inherent to this type of model, especially those associated with the spatial disaggregation of the catchment. The Integrated Nitrogen in Catchments model (INCA) is subjected to an extensive regionalised sensitivity analysis in application to the River Kennet, part of the groundwater-dominated upper Thames catchment, UK. The main results are: (1) model output was generally insensitive to land-phase parameters, very sensitive to groundwater parameters, including initial conditions, and significantly sensitive to in-river parameters; (2) INCA was able to produce good fits simultaneously to the available flow, nitrate and ammonium in-river data sets; (3) representing parameters as heterogeneous over the catchment (206 calibrated parameters) rather than homogeneous (24 calibrated parameters) produced a significant improvement in fit to nitrate but no significant improvement to flow, and caused a deterioration in ammonium performance; (4) the analysis indicated that calibrating the flow-related parameters first, then calibrating the remaining parameters (as opposed to calibrating all parameters together), was not a sensible strategy in this case; (5) even the parameters to which the model output was most sensitive suffered from high uncertainty due to spatial inconsistencies in the estimated optimum values, parameter equifinality and the sampling error associated with the calibration method; (6) soil and groundwater nutrient and flow data are needed to reduce uncertainty in initial conditions, residence times and nitrogen transformation parameters, and long-term historic data are needed so that key responses to changes in land-use management can be assimilated. The results indicate the general difficulty of reconciling the questions which catchment nutrient models are expected to answer with typically limited data sets and limited knowledge about suitable model structures. The results demonstrate the importance of analysing semi-distributed model uncertainties prior to model application, and illustrate the value and limitations of using Monte Carlo-based methods for doing so.
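The Monte Carlo machinery behind such an analysis is compact. Below is a minimal sketch of a regionalised sensitivity analysis in the Hornberger-Spear style; the parameter names, ranges and the toy model are hypothetical stand-ins, not the actual INCA interface:

```python
# Sketch of Monte Carlo regionalised sensitivity analysis (RSA):
# sample parameter sets, score each model run, split runs into
# behavioural / non-behavioural, and flag parameters whose marginal
# distributions differ between the two groups.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
n_runs = 10_000
param_ranges = {                            # assumed illustrative ranges
    "gw_residence_time": (10.0, 500.0),     # days
    "denitrification_rate": (0.0, 0.5),     # per day
}

def run_model(p):
    """Toy stand-in for a catchment-model run: returns a goodness-of-fit
    score that peaks near some 'true' parameter values, plus noise."""
    return (-abs(p["gw_residence_time"] - 120.0) / 120.0
            - abs(p["denitrification_rate"] - 0.1) / 0.1
            + rng.normal(scale=0.5))

samples = [{k: rng.uniform(*r) for k, r in param_ranges.items()}
           for _ in range(n_runs)]
scores = np.array([run_model(p) for p in samples])

# Behavioural = best 10% of runs; a large Kolmogorov-Smirnov distance
# between the two groups marks a sensitive parameter.
behavioural = scores >= np.percentile(scores, 90)
for name in param_ranges:
    vals = np.array([p[name] for p in samples])
    d, _ = ks_2samp(vals[behavioural], vals[~behavioural])
    print(f"{name}: KS distance = {d:.2f}")
```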
Abstract:
A surface forcing response framework is developed that enables an understanding of time-dependent climate change from a surface energy perspective. The framework allows the separation of fast responses that are unassociated with global-mean surface air temperature change (ΔT), which is included in the forcing, and slow feedbacks that scale with ΔT. The framework is illustrated primarily using 2 × CO2 climate model experiments and is robust across the models. For CO2 increases, the positive downward radiative component of forcing is smaller at the surface than at the tropopause, and so a rapid reduction in the upward surface latent heat (LH) flux is induced to conserve the tropospheric heat budget; this reduces the precipitation rate. Analysis of the time-dependent surface energy balance over sea and land separately reveals that land areas rapidly regain energy balance, and significant land surface warming occurs before global sea temperatures respond. The 2 × CO2 results are compared to a solar increase experiment and show that some fast responses are forcing dependent. In particular, a significant forcing from the fast hydrological response found in the CO2 experiments is much smaller in the solar experiment. The different fast response explains why previous equilibrium studies found differences in the hydrological sensitivity between these two forcings. On longer time scales, as ΔT increases, the net surface longwave and LH fluxes provide positive and negative surface feedbacks, respectively, while the net surface shortwave and sensible heat fluxes change little. It is found that in contrast to their fast responses, the longer-term response of both surface energy fluxes and the global hydrological cycle are similar for the different forcing agents.
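The framework's central decomposition can be written compactly; a sketch in generic notation (not necessarily the paper's symbols):

```latex
% Any change in a surface energy-balance component Delta Q splits into
% a fast adjustment F (independent of Delta T, counted as forcing) and
% a slow response scaling with the global-mean surface warming Delta T
% through a surface feedback parameter lambda.
\[
  \Delta Q \;=\; F \;+\; \lambda\,\Delta T .
\]
```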
Abstract:
Anomalous heavy snow during winter or spring has long been regarded as a possible precursor of deficient Indian monsoon rainfall during the subsequent summer. However, previous work in this field is inconclusive, in terms of the mechanism that communicates snow anomalies to the subsequent summer monsoon, and even the region from which snow has the most impact. In this study we explore these issues in coupled and atmosphere-only versions of the Hadley Centre model. A 1050-year control integration of the HadCM3 coupled model, which represents the seasonal cycle of snow cover over the Eurasian continent well, is analysed and shows evidence for weakened monsoons being preceded by strong snow forcing (in the absence of ENSO) over either the Himalaya/Tibetan Plateau or north/west Eurasia regions. However, empirical orthogonal function (EOF) analysis of springtime interannual variability in snow depth shows the leading mode to have opposite signs between these two regions, suggesting that competing mechanisms may be possible. To determine the dominant region, ensemble integrations are carried out using HadAM3, the atmospheric component of HadCM3, and a variety of anomalous snow forcing initial conditions obtained from the control integration of the coupled model. Forcings are applied during spring in separate experiments over the Himalaya/Tibetan Plateau and north/west Eurasia regions, in conjunction with climatological SSTs in order to avoid the direct effects of ENSO. With the aid of idealized forcing conditions in sensitivity tests, we demonstrate that forcing from the Himalaya region is dominant in this model via a Blanford-type mechanism involving reduced surface sensible heat and longwave fluxes, reduced heating of the troposphere over the Tibetan Plateau and consequently a reduced meridional tropospheric temperature gradient, which weakens the monsoon during early summer. Snow albedo is shown to be key to the mechanism, explaining around 50% of the perturbation in sensible heating over the Tibetan Plateau, and accounting for the majority of cooling through the troposphere.
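The EOF step referenced above is standard and easy to reproduce. A minimal sketch via singular value decomposition; the data array is synthetic, and real use would substitute the model's snow-depth field (typically with area weights, omitted here for brevity):

```python
# Sketch of an EOF (empirical orthogonal function) analysis of
# springtime snow-depth interannual variability via SVD.
import numpy as np

rng = np.random.default_rng(0)
snow = rng.standard_normal((1050, 2048))   # (years, grid points), toy data

anom = snow - snow.mean(axis=0)            # interannual anomalies
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

eofs = Vt                                  # rows = spatial patterns (EOFs)
pcs = U * s                                # principal-component time series
var_frac = s**2 / np.sum(s**2)             # variance explained per mode

# Per the abstract, the leading pattern eofs[0] would show opposite signs
# over the Himalaya/Tibetan Plateau and north/west Eurasia.
print(f"EOF1 explains {var_frac[0]:.1%} of the variance")
```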
Abstract:
The structure and size of the eyes generated in numerically simulated tropical cyclones and polar lows have been studied. A primitive-equation numerical model simulated systems in which the structures of the eyes formed were consistent with available observations. Whilst the tropical cyclone eyes generated were usually rapidly rotating, it appeared impossible for an eye formed in a system with a polar environment to develop this type of structure. The polar low eyes were found to be unable to warm through the subsidence of air with high values of potential temperature, as the environment was approximately statically neutral. Factors affecting the size of the eye were investigated through a series of controlled experiments. In mature tropical cyclone systems the size of the eye was insensitive to small changes in initial conditions, surface friction and latent and sensible heating from the ocean. In contrast, the eye size was strongly dependent on these parameters in the mature polar lows. Consistent with the findings, a mechanism is proposed in which the size of the eye in simulated polar lows is controlled by the strength of subsidence within it.
Abstract:
The interface between humans and technology is a rapidly changing field. In particular, as technological methods have improved dramatically, interaction has become possible that could only be speculated about even a decade earlier. This interaction can though take on a wide range of forms. Indeed standard buttons and dials with televisual feedback are perhaps a common example. But now virtual reality systems, wearable computers and, most of all, implant technology are throwing up a completely new concept, namely a symbiosis of human and machine. No longer is it sensible simply to consider how a human interacts with a machine, but rather how the human-machine symbiotic combination interacts with the outside world. In this paper we take a look at some of the recent approaches, putting implant technology in context. We also consider some specific practical examples which may well alter the way we look at this symbiosis in the future. The main area of interest as far as symbiotic studies are concerned is clearly the use of implant technology, particularly where a connection is made between technology and the human brain and/or nervous system. Often pilot tests and experimentation have been carried out a priori to investigate the eventual possibilities before human subjects are themselves involved. Some of the more pertinent animal studies are discussed briefly here. The paper however concentrates on human experimentation, in particular that carried out by the authors themselves, firstly to indicate what possibilities exist as of now with available technology, but perhaps more importantly to also show what might be possible with such technology in the future and how this may well have extensive social effects. The driving force behind the integration of technology with humans on a neural level has historically been to restore lost functionality in individuals who have suffered neurological trauma such as spinal cord damage, or who suffer from a debilitating disease such as amyotrophic lateral sclerosis. Very few would argue against the development of implants to enable such people to control their environment, or some aspect of their own body functions. Indeed this technology in the short term has applications for amelioration of symptoms for the physically impaired, such as alternative senses being bestowed on a blind or deaf individual. However the issue becomes distinctly more complex when it is proposed that such technology be used on those with no medical need, but instead who wish to enhance and augment their own bodies, particularly in terms of their mental attributes. These issues are discussed here in the light of practical experimental test results and their ethical consequences.
Abstract:
A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have best overall capability to model net all-wave radiation and least capability to model latent heat flux.
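For reference, the skill metric quoted is the conventional root-mean-square error between modelled and observed fluxes (generic notation, not necessarily the comparison's own symbols):

```latex
% RMSE over N observation times for a modelled flux Q_mod against
% observations Q_obs.
\[
  \mathrm{RMSE} \;=\; \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(Q_{\mathrm{mod},i} - Q_{\mathrm{obs},i}\bigr)^{2}} .
\]
```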
Abstract:
This paper presents a two-stage image restoration framework designed for a novel rectangular poor-pixels detector which, with its miniature size, light weight and low power consumption, has great value in micro vision systems. To meet the demand for fast processing, only a few measured images, shifted relative to one another at the subpixel level, are needed for the fusion operation, fewer than are required in traditional approaches. A preliminary restored image is obtained by linear interpolation following maximum likelihood estimation with a least squares method. After noise removal via Canny-operator-based level set evolution, the final high-quality restored image is achieved. Experimental results demonstrate the effectiveness of the proposed framework, a sensible step towards subsequent image understanding and object identification.
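The fusion step can be illustrated in miniature. Below is a sketch of recovering a higher-resolution signal from a few subpixel-shifted, downsampled observations by least squares (the maximum-likelihood estimate under Gaussian noise); this is a 1-D toy under assumed known shifts, not the paper's actual detector model:

```python
# Least-squares fusion of shifted, downsampled frames (1-D toy).
import numpy as np

rng = np.random.default_rng(1)
n_hi, factor = 64, 4                  # high-res length, downsampling factor
x_true = np.sin(np.linspace(0, 4 * np.pi, n_hi))

def observation_matrix(shift):
    """Shift by `shift` high-res samples, then average each block of `factor`."""
    S = np.roll(np.eye(n_hi), shift, axis=1)                     # subpixel shift
    D = np.kron(np.eye(n_hi // factor), np.ones((1, factor)) / factor)  # downsample
    return D @ S

shifts = [0, 1, 2, 3]                 # assumed known subpixel shifts of the frames
A = np.vstack([observation_matrix(s) for s in shifts])
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])          # noisy frames

# Least-squares (maximum-likelihood) estimate of the high-res signal
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"reconstruction RMSE: {np.sqrt(np.mean((x_hat - x_true)**2)):.4f}")
```

With shifts covering all phases of the downsampling grid, the stacked system is well determined for smooth signals, which is why only a few measured frames suffice.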
Abstract:
We present simulations of London's meteorology using the Met Office Unified Model with a new, sophisticated surface energy-balance scheme to represent the urban surfaces, called MORUSES. Simulations are performed with the urban surfaces represented and with the urban surfaces replaced with grass in order to calculate the urban increment on the local meteorology. The local urban effects were moderated to some extent by the passage of an onshore flow that propagated up the Thames estuary and across the city, cooling London slightly in the afternoon. Validations of screen-level temperature show encouraging agreement to within 1–2 K, when the urban increment is up to 5 K. The model results are then used to examine factors shaping the spatial and temporal structure of London's atmospheric boundary layer. The simulations reconcile the differences in the temporal evolution of the urban heat island (UHI) shown in various studies and demonstrate that the variation of UHI with time depends strongly on the urban fetch. The UHI at a location downwind of the city centre shows a decrease in UHI during the night, while the UHI at the city centre stays constant. Finally, the UHI at a location upwind of the city centre increases continuously. The magnitude of the UHI by the time of the evening transition increases with urban fetch. The urban increments are largest at night, when the boundary layer is shallow. The boundary layer experiences continued warming after sunset, as the heat from the urban fabric is released, and a weakly convective boundary layer develops across the city. The urban land-use fraction is the dominant control on the spatial structure in the sensible heat flux and the resulting urban increment, although even the weak advection present in this case study is sufficient to advect the peak temperature increments downwind of the most built-up areas.
Abstract:
The whole concept of just what is, and what is not, intelligence is a vitally important one. As humans interact more with machines, so the similarities and differences between human and machine intelligence need to be looked at in a sensible, scientific way. This paper considers human and machine intelligence and links them closely to physical characteristics, as exhibited by robots. Potential interfaces between humans and machines are also considered, as is the state of the art in direct physical links between humans and machines.
Abstract:
Scintillometry is an established technique for determining areal-average sensible heat fluxes over large areas. The scintillometer measurement is related to sensible heat flux via Monin–Obukhov similarity theory, which was developed for ideal homogeneous land surfaces. In this study it is shown that judicious application of scintillometry over heterogeneous mixed agriculture on undulating topography yields valid results when compared to eddy covariance (EC). A large aperture scintillometer (LAS) over a 2.4 km path was compared with four EC stations measuring sensible (H) and latent (LvE) heat fluxes over different vegetation (cereals and grass) which, when aggregated, were representative of the LAS source area. The partitioning of available energy into H and LvE varied strongly for different vegetation types, with H varying by a factor of three between senesced winter wheat and grass pasture. The LAS-derived H agrees (one-to-one within the experimental uncertainty) with H aggregated from EC, with a high coefficient of determination of 0.94. Chronological analysis shows individual fields may have a varying contribution to the areal average sensible heat flux on short (weekly) time scales due to phenological development and changing soil moisture conditions. Using spatially aggregated measurements of net radiation and soil heat flux with H from the LAS, the areal averaged latent heat flux (LvE_LAS) was calculated as the residual of the surface energy balance. The regression of LvE_LAS against aggregated LvE from the EC stations has a slope of 0.94, close to ideal, and demonstrates that this is an accurate method for the landscape-scale estimation of evaporation over heterogeneous complex topography.
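The residual step at the end is the closure of the surface energy balance; in conventional notation (symbols generic rather than necessarily the paper's):

```latex
% Latent heat flux as the residual of the surface energy balance:
% R_n = net radiation, G = soil heat flux, H_LAS = scintillometer-
% derived sensible heat flux, all aggregated over the LAS source area.
\[
  L_v E_{\mathrm{LAS}} \;=\; R_n \;-\; G \;-\; H_{\mathrm{LAS}} .
\]
```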
Abstract:
Reader Response Theory remains popular within Children's Literature Criticism. It seems to offer a sensible resolution to the question of whether meaning derives from text or reader. Through a close reading of one example of this criticism, I suggest that its dualisms are constantly collapsing into appeals to singular authority: at various stages the text or the reader is wholly responsible for meaning. I further suggest that the criticism bypasses the question of interpretation through claiming knowledge of a child reader whose opinions and reactions can be unproblematically accessed. We do not have to worry about reading texts, because we can, apparently, know the child's response to them with certainty. Anything other than this claim to certainty is taken to be a failure of responsibility, a wallowing in the subjective, obscure and perverse. My intention is to reinstate reading as the responsibility of criticism.