98 results for State-space modeling
Abstract:
Even minor changes in user activity can bring about significant energy savings within built space. Many building performance assessment methods have been developed; however, these often disregard the impact of user behavior (i.e. the social, cultural and organizational aspects of the building). Building users currently have limited means of determining how sustainable they are, either in the context of the specific building structure or when compared to other users performing similar activities; it is therefore easy for users to dismiss their energy use. To support sustainability, buildings must be able to monitor energy use, identify areas of potential change in the context of user activity and provide contextually relevant information to facilitate persuasion management. If the building is able to provide users with detailed information about which specific user activities are wasteful, this should provide considerable motivation to implement positive change. This paper proposes using a dynamic and temporal semantic model to populate information within a model of persuasion, in order to manage user change. By semantically mapping a building and linking this to persuasion management, we suggest that: i) building energy use can be monitored and analyzed over time; ii) persuasive management can be facilitated to move user activity towards sustainability.
Abstract:
This paper examines the interaction of spatial and dynamic aspects of resource extraction from forests by local people. Highly cyclical and varied across space and time, the patterns of resource extraction resulting from the spatial–temporal model bear little resemblance to the patterns drawn from focusing either on spatial or temporal aspects of extraction alone. Ignoring this variability inaccurately depicts villagers’ dependence on different parts of the forest and could result in inappropriate policies. Similarly, the spatial links in extraction decisions imply that policies imposed in one area can have unintended consequences in other areas. Combining the spatial–temporal model with a measure of success in community forest management—the ability to avoid open-access resource degradation—characterizes the impact of incomplete property rights on patterns of resource extraction and stocks.
Abstract:
This article aims to create intellectual space in which issues of social inequality and education can be analyzed and discussed in relation to the multifaceted and multi-levelled complexities of the modern world. It is divided into three sections. Section One locates the concept of social class in the context of the modern nation state during the period after the Second World War. Focusing particularly on the impact of ‘Fordism’ on social organization and cultural relations, it revisits the articulation of social justice issues in the United Kingdom, and the structures put into place at the time to alleviate educational and social inequalities. Section Two problematizes the traditional concept of social class in relation to economic, technological and sociocultural changes that have taken place around the world since the mid-1980s. In particular, it charts some of the changes to the international labour market and global patterns of consumption, and their collective impact on the re-constitution of class boundaries in ‘developed countries’. This is juxtaposed with some of the major social effects of neo-classical economic policies in recent years on the sociocultural base in developing countries. It discusses some of the ways these inequalities are reflected in education. Section Three explores tensions between the educational ideals of the ‘knowledge economy’ and the discursive range of social inequalities that are emerging within and beyond the nation state. Drawing on key motifs identified throughout, the article concludes with a reassessment of the concept of social class within the global cultural economy. This is discussed in relation to some of the major equity and human rights issues in education today.
Abstract:
Attempts to estimate photosynthetic rate or gross primary productivity from remotely sensed absorbed solar radiation depend on knowledge of the light use efficiency (LUE). Early models assumed LUE to be constant, but now most researchers try to adjust it for variations in temperature and moisture stress. However, more exact methods are now required. Hyperspectral remote sensing offers the possibility of sensing changes in the xanthophyll cycle, which is closely coupled to photosynthesis. Several studies have shown that an index (the photochemical reflectance index) based on the reflectance at 531 nm is strongly correlated with the LUE over hours, days and months. A second hyperspectral approach relies on the remote detection of fluorescence, which is directly related to the efficiency of photosynthesis. We discuss the state of the art of the two approaches. Both have been demonstrated to be effective, but we specify seven conditions required before the methods can become operational.
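The photochemical reflectance index mentioned above is, in its commonly used form, a normalized difference between reflectance at 531 nm and a reference band near 570 nm. A minimal sketch of that computation follows; the 570 nm reference band, the function name and the sample reflectance values are illustrative assumptions, not taken from the abstract.

```python
# Minimal sketch of a photochemical reflectance index (PRI) computation,
# assuming the common 531 nm / 570 nm band pairing (the abstract only names
# the 531 nm band). Reflectance values below are made up for illustration.
import numpy as np

def pri(r531: np.ndarray, r570: np.ndarray) -> np.ndarray:
    """Normalized difference of reflectance at 531 nm and a 570 nm reference band."""
    return (r531 - r570) / (r531 + r570)

# Hypothetical per-pixel reflectances from a hyperspectral scene.
r531 = np.array([0.042, 0.050, 0.047])
r570 = np.array([0.048, 0.051, 0.053])
print(pri(r531, r570))  # index values of the kind studies correlate with LUE
```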
Abstract:
The Fredholm properties of Toeplitz operators on the Bergman space A^2 have been well-known for continuous symbols since the 1970s. We investigate the case p=1 with continuous symbols under a mild additional condition, namely that of the logarithmic vanishing mean oscillation in the Bergman metric. Most differences are related to boundedness properties of Toeplitz operators acting on A^p that arise when we no longer have 1 < p < ∞.
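For orientation, the objects involved can be written in their standard form on the unit disc with normalized area measure; this notational sketch is not reproduced from the paper, and the paper's precise hypotheses (such as the log-VMO condition) are not restated here.

```latex
% Standard definitions on the unit disc \mathbb{D}, for orientation only.
P u(z) = \int_{\mathbb{D}} \frac{u(w)}{(1 - z\bar{w})^{2}}\, dA(w),
\qquad
T_f u = P(f u), \qquad u \in A^{p},
```

where P is the Bergman projection and T_f is the Toeplitz operator with symbol f.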
Abstract:
The Earth’s climate, as well as planetary climates in general, is broadly regulated by three fundamental parameters: the total solar irradiance, the planetary albedo and the planetary emissivity. Observations from a series of different satellites during the last three decades indicate that these three quantities are generally very stable. The total solar irradiance of some 1,361 W/m2 at 1 A.U. varies within 1 W/m2 during the 11-year solar cycle (Fröhlich 2012). The albedo is close to 29 % with minute changes from year to year but with marked zonal differences (Stevens and Schwartz 2012). The only exception to the overall stability is a minor decrease in the planetary emissivity (the ratio between the radiation to space and the radiation from the surface of the Earth). This is a consequence of the increase in atmospheric greenhouse gas amounts making the atmosphere gradually more opaque to long-wave terrestrial radiation. As a consequence, radiation processes are slightly out of balance, as less heat leaves the Earth in the form of thermal radiation than arrives as incoming solar radiation. Present space-based systems cannot yet measure this imbalance, but the effect can be inferred from the increase in heat in the oceans where most of the heat accumulates. Minor amounts of heat are used to melt ice and to warm the atmosphere and the surface of the Earth.
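As a quick back-of-the-envelope check on the quoted figures (ours, not the authors'), the globally averaged balance implied by a total solar irradiance of about 1,361 W/m2 and an albedo of about 29 % works out as follows.

```latex
% Zero-order global-mean balance from the quoted numbers (S ~ 1361 W m^-2, \alpha ~ 0.29):
(1-\alpha)\,\frac{S}{4} \approx 0.71 \times \frac{1361}{4} \approx 242\ \mathrm{W\,m^{-2}},
\qquad
\Delta F = (1-\alpha)\,\frac{S}{4} - \mathrm{OLR},
```

where OLR is the outgoing long-wave radiation to space and ΔF is the small positive imbalance described above, currently inferred from ocean heat content rather than measured directly from space.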
Abstract:
Rising sea level is perhaps the most severe consequence of climate warming, as much of the world’s population and infrastructure is located near current sea level (Lemke et al. 2007). A major rise of a metre or more would cause serious problems. Such possibilities have been suggested by Hansen and Sato (2011) who pointed out that sea level was several metres higher than now during the Holsteinian and Eemian interglacials (about 250,000 and 120,000 years ago, respectively), even though the global temperature was then only slightly higher than it is nowadays. It is consequently of the utmost importance to determine whether such a sea level rise could occur and, if so, how fast it might happen. Sea level undergoes considerable changes due to natural processes such as the wind, ocean currents and tidal motions. On longer time scales, the sea level is influenced by steric effects (sea water expansion caused by temperature and salinity changes of the ocean) and by eustatic effects caused by changes in ocean mass. Changes in the Earth’s cryosphere, such as the retreat or expansion of glaciers and land ice areas, have been the dominant cause of sea level change during the Earth’s recent history. During the glacial cycles of the last million years, the sea level varied by a large amount, of the order of 100 m. If the Earth’s cryosphere were to disappear completely, the sea level would rise by some 65 m. The scientific papers in the present volume address the different aspects of the Earth’s cryosphere and how the different changes in the cryosphere affect sea level change. It represents the outcome of the first workshop held within the new ISSI Earth Science Programme. The workshop took place from 22 to 26 March, 2010, in Bern, Switzerland, with the objective of providing an in-depth insight into the future of mountain glaciers and the large land ice areas of Antarctica and Greenland, which are exposed to natural and anthropogenic climate influences, and their effects on sea level change. The participants of the workshop are experts in different fields including meteorology, climatology, oceanography, glaciology and geodesy; they use advanced space-based observational studies and state-of-the-art numerical modelling.
Abstract:
We study a two-way relay network (TWRN), where distributed space-time codes are constructed across multiple relay terminals in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result contrasts with one-way relay networks, where a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using the Dinkelbach-type procedure. We also prove that, if the sum-power of the relay terminals is constrained, then the OPA will activate at most two relays.
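The Dinkelbach-type procedure referred to above is, in its basic form, an iteration that reduces a fractional objective to a sequence of parametric subproblems. The sketch below illustrates that basic iteration on a generic ratio f(x)/g(x) over a finite candidate set; it is not the paper's OPA formulation (which is a generalized linear fractional program over relay powers), and the function names and toy objective are assumptions.

```python
# Illustrative Dinkelbach-type iteration for maximizing f(x)/g(x), with g(x) > 0,
# over a finite candidate set. Generic sketch of the procedure named in the
# abstract, not the paper's specific optimal power allocation problem.
from typing import Callable, Sequence

def dinkelbach(f: Callable[[float], float],
               g: Callable[[float], float],
               candidates: Sequence[float],
               tol: float = 1e-9,
               max_iter: int = 100) -> float:
    """Maximize f(x)/g(x) over a finite candidate set by Dinkelbach iteration."""
    lam = 0.0
    for _ in range(max_iter):
        # Inner (parametric) problem: maximize f(x) - lam * g(x) over the candidates.
        x_star = max(candidates, key=lambda x: f(x) - lam * g(x))
        value = f(x_star) - lam * g(x_star)
        lam = f(x_star) / g(x_star)   # update the ratio estimate
        if abs(value) < tol:          # optimality condition F(lam) = 0
            break
    return lam

# Toy usage with a discretized feasible interval (0, 1].
xs = [i / 1000 for i in range(1, 1001)]
print(dinkelbach(lambda x: 1 + x, lambda x: 1 + x * x, xs))
```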
Abstract:
This paper presents an image motion model for airborne three-line-array (TLA) push-broom cameras. Both aircraft velocity and attitude instability are taken into account in modeling image motion. The effects of aircraft pitch, roll, and yaw on image motion are analyzed based on geometric relations in designated coordinate systems. The image motion is mathematically modeled as image motion velocity multiplied by exposure time. Quantitative analysis of image motion velocity is then conducted in simulation experiments. The results show that image motion caused by aircraft velocity is space invariant, while image motion caused by aircraft attitude instability is more complicated. Pitch, roll, and yaw all contribute to image motion to different extents: pitch dominates the along-track image motion, and both roll and yaw contribute greatly to the cross-track image motion. These results provide a valuable basis for image motion compensation to ensure high-accuracy imagery in aerial photogrammetry.
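For the along-track (forward-motion) component, the "velocity multiplied by exposure time" model reduces to a simple relation; the symbols and numbers below are illustrative assumptions rather than values or notation from the paper.

```latex
% Along-track forward-motion component only; f = focal length, V = ground speed,
% H = flying height above ground, t_e = exposure time (illustrative symbols).
v_{\mathrm{img}} \approx \frac{f\,V}{H},
\qquad
\delta = v_{\mathrm{img}}\, t_{e},
```

so that, for instance, f = 100 mm, V = 60 m/s, H = 2000 m and t_e = 1/500 s give an image displacement of about 6 µm at the focal plane.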
Abstract:
Despite many decades investigating scalp-recordable 8–13-Hz (alpha) electroencephalographic activity, no consensus has yet emerged regarding either its physiological origins or its functional role in cognition. Here we outline a detailed, physiologically meaningful theory for the genesis of this rhythm that may provide important clues to its functional role. In particular, we find that electroencephalographically plausible model dynamics, obtained with physiologically admissible parameterisations, reveal a cortex perched on the brink of stability, which when perturbed gives rise to a range of unanticipated complex dynamics that include 40-Hz (gamma) activity. Preliminary experimental evidence, involving the detection of weak nonlinearity in resting EEG using an extension of the well-known surrogate data method, suggests that nonlinear (deterministic) dynamics are more likely to be associated with weakly damped alpha activity. Thus, rather than the “alpha rhythm” being an idling rhythm, it may be more profitable to conceive of it as a readiness rhythm.
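The "well-known surrogate data method" referred to above tests for nonlinearity by comparing a discriminating statistic on the recorded signal against its distribution over linearized surrogates. The sketch below shows the basic phase-randomization version with a toy statistic; the authors' extension of the method is not reproduced, and the synthetic signal, the choice of statistic and the surrogate count are illustrative assumptions.

```python
# Basic phase-randomization surrogate test: surrogates share the power spectrum
# of the original signal but have randomized Fourier phases, destroying any
# nonlinear (deterministic) structure while preserving linear correlations.
import numpy as np

def phase_randomized_surrogate(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Surrogate with the same power spectrum as x but randomized phases."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
    phases[0] = 0.0                  # keep the mean (DC term) real
    if n % 2 == 0:
        phases[-1] = 0.0             # keep the Nyquist term real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=n)

def nonlinear_stat(x: np.ndarray) -> float:
    """Toy nonlinear discriminating statistic (time-reversal asymmetry)."""
    return float(np.mean((x[1:] - x[:-1]) ** 3))

rng = np.random.default_rng(0)
eeg = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.3 * rng.standard_normal(2000)
original = nonlinear_stat(eeg)
surrogates = [nonlinear_stat(phase_randomized_surrogate(eeg, rng)) for _ in range(99)]
# The rank of the original statistic among the surrogates indicates evidence
# of nonlinearity (a small fraction below means the original is an outlier).
print(original, np.mean(np.abs(surrogates) >= abs(original)))
```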
Abstract:
The self-assembly of proteins and peptides into β-sheet-rich amyloid fibers is a process that has gained notoriety because of its association with human diseases and disorders. Spontaneous self-assembly of peptides into nonfibrillar supramolecular structures can also provide a versatile and convenient mechanism for the bottom-up design of biocompatible materials with functional properties favoring a wide range of practical applications.[1] One subset of these fascinating and potentially useful nanoscale constructions is the peptide nanotubes, elongated cylindrical structures with a hollow center bounded by a thin wall of peptide molecules.[2] A formidable challenge in optimizing and harnessing the properties of nanotube assemblies is to gain atomistic insight into their architecture, and to elucidate precisely how the tubular morphology is constructed from the peptide building blocks. Some of these fine details have been elucidated recently with the use of magic-angle-spinning (MAS) solid-state NMR (SSNMR) spectroscopy.[3] MAS SSNMR measurements of chemical shifts and through-space interatomic distances provide constraints on peptide conformation (e.g., β-strands and turns) and quaternary packing. We describe here a new application of a straightforward SSNMR technique which, when combined with FTIR spectroscopy, reports quantitatively on the orientation of the peptide molecules within the nanotube structure, thereby providing an additional structural constraint not accessible to MAS SSNMR.
Abstract:
Global wetlands are believed to be climate sensitive, and are the largest natural emitters of methane (CH4). Increased wetland CH4 emissions could act as a positive feedback to future warming. The Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP) investigated our present ability to simulate large-scale wetland characteristics and corresponding CH4 emissions. To ensure inter-comparability, we used a common experimental protocol driving all models with the same climate and carbon dioxide (CO2) forcing datasets. The WETCHIMP experiments were conducted for model equilibrium states as well as transient simulations covering the last century. Sensitivity experiments investigated model response to changes in selected forcing inputs (precipitation, temperature, and atmospheric CO2 concentration). Ten models participated, covering the spectrum from simple to relatively complex, including models tailored either for regional or global simulations. The models also varied in methods to calculate wetland size and location, with some models simulating wetland area prognostically, while other models relied on remotely sensed inundation datasets, or an approach intermediate between the two. Four major conclusions emerged from the project. First, the suite of models demonstrates extensive disagreement in their simulations of wetland areal extent and CH4 emissions, in both space and time. Simple metrics of wetland area, such as the latitudinal gradient, show large variability, principally between models that use inundation dataset information and those that independently determine wetland area. Agreement between the models improves for zonally summed CH4 emissions, but large variation between the models remains. For annual global CH4 emissions, the models vary by ±40% of the all-model mean (190 Tg CH4 yr−1). Second, all models show a strong positive response to increased atmospheric CO2 concentrations (857 ppm) in both CH4 emissions and wetland area. In response to increasing global temperatures (+3.4 °C globally spatially uniform), on average, the models decreased wetland area and CH4 fluxes, primarily in the tropics, but the magnitude and sign of the response varied greatly. Models were least sensitive to increased global precipitation (+3.9 % globally spatially uniform) with a consistent small positive response in CH4 fluxes and wetland area. Results from the 20th century transient simulation show that interactions between climate forcings could have strong non-linear effects. Third, we presently lack wetland methane observation datasets adequate to evaluate model fluxes at a spatial scale comparable to model grid cells (commonly 0.5°). This limitation severely restricts our ability to model global wetland CH4 emissions with confidence. Our simulated wetland extents are also difficult to evaluate due to extensive disagreements between wetland mapping and remotely sensed inundation datasets. Fourth, the large range in predicted CH4 emission rates leads to the conclusion that there is both substantial parameter and structural uncertainty in large-scale CH4 emission models, even after uncertainties in wetland areas are accounted for.
Abstract:
Sea ice friction models are necessary to predict the nature of interactions between sea ice floes. These interactions are of interest on a range of scales, for example, to predict loads on engineering structures in icy waters or to understand the basin-scale motion of sea ice. Many models use Amontons' friction law due to its simplicity. More advanced models allow for hydrodynamic lubrication and refreezing of asperities; however, modeling these processes leads to greatly increased complexity. In this paper we propose, by analogy with rock physics, that a rate- and state-dependent friction law allows us to incorporate memory (and thus the effects of lubrication and bonding) into ice friction models without a great increase in complexity. We support this proposal with experimental data on both the laboratory (∼0.1 m) and ice tank (∼1 m) scales. These experiments show that the effects of static contact under normal load can be incorporated into a friction model. We find the parameters for a first-order rate and state model to be A = 0.310, B = 0.382, and μ0 = 0.872. Such a model then allows us to make predictions about the nature of memory effects in moving ice-ice contacts.
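Written out, the standard first-order (Dieterich-type) rate- and state-dependent law takes the form below; the reference velocity V* and critical slip distance D_c belong to that general form and are not reported in the abstract, so this is a notational sketch rather than the authors' exact fitted model.

```latex
% Standard first-order rate-and-state friction form (notation assumed, not the paper's):
\mu = \mu_0 + A \ln\frac{V}{V^{*}} + B \ln\frac{V^{*}\theta}{D_c},
\qquad
\frac{d\theta}{dt} = 1 - \frac{V\theta}{D_c},
\qquad
\mu_{ss}(V) = \mu_0 + (A - B)\ln\frac{V}{V^{*}},
```

so with the quoted values (A = 0.310, B = 0.382, μ0 = 0.872), A − B < 0 and the steady-state friction in this form decreases with increasing sliding velocity, consistent with a memory (healing) effect during static contact.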
Abstract:
Nocturnal cooling of air within a forest canopy and the resulting temperature profile may drive local thermally driven motions, such as drainage flows, which are believed to impact measurements of ecosystem–atmosphere exchange. To model such flows, it is necessary to accurately predict the rate of cooling. Cooling occurs primarily due to radiative heat loss. However, much of the radiative loss occurs at the surface of canopy elements (leaves, branches, and boles of trees), while radiative divergence in the canopy air space is small due to high transmissivity of air. Furthermore, sensible heat exchange between the canopy elements and the air space is slow relative to radiative fluxes. Therefore, canopy elements initially cool much more quickly than the canopy air space after the switch from radiative gain during the day to radiative loss during the night. Thus in modeling air cooling within a canopy, it is not appropriate to neglect the storage change of heat in the canopy elements or even to assume equal rates of cooling of the canopy air and canopy elements. Here a simple parameterization of radiatively driven cooling of air within the canopy is presented, which accounts implicitly for radiative cooling of the canopy volume, heat storage in the canopy elements, and heat transfer between the canopy elements and the air. Simulations using this parameterization are compared to temperature data from the Morgan–Monroe State Forest (IN, USA) FLUXNET site. While the model does not perfectly reproduce the measured rates of cooling, particularly near the top of the canopy, the simulated cooling rates are of the correct order of magnitude.
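To make the physical argument concrete, a minimal two-box sketch (canopy elements and canopy air as separate heat reservoirs) reproduces the qualitative behaviour described above: after the switch to radiative loss, the elements cool faster than the air until sensible heat exchange limits the temperature difference. This is a generic illustration under assumed values, not the paper's parameterization (which treats these effects implicitly).

```python
# Minimal two-box sketch of the reasoning above: canopy elements lose heat
# radiatively and exchange sensible heat with the canopy air, so the elements
# cool faster than the air just after the switch from radiative gain to loss.
# Generic illustration only, NOT the paper's parameterization; every symbol and
# value below is an assumption chosen just to make the example run.
import numpy as np

def simulate(hours=3.0, dt=60.0,
             rad_loss=60.0,     # W m-2, net radiative loss from canopy elements
             h_exchange=10.0,   # W m-2 K-1, element-air sensible heat exchange
             c_elem=5.0e4,      # J m-2 K-1, heat capacity of canopy elements
             c_air=2.4e4):      # J m-2 K-1, heat capacity of the canopy air layer
    n_steps = int(hours * 3600 / dt)
    t_elem = t_air = 20.0       # degC at the radiative gain -> loss transition
    history = np.empty((n_steps, 2))
    for i in range(n_steps):
        flux = h_exchange * (t_air - t_elem)   # sensible heat from air to elements
        t_elem += dt * (-rad_loss + flux) / c_elem
        t_air += dt * (-flux) / c_air
        history[i] = (t_elem, t_air)
    return history

result = simulate()
print(result[-1])  # elements end up several degrees colder than the canopy air
```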