782 results for accounting model design
Abstract:
This contribution introduces a new digital predistorter to compensate for the serious distortions caused by high power amplifiers (HPAs) with memory which exhibit output saturation characteristics. The proposed design is based on direct learning using a data-driven B-spline Wiener system modeling approach. The nonlinear HPA with memory is first identified based on the B-spline neural network model using the Gauss-Newton algorithm, which incorporates the efficient De Boor algorithm with both B-spline curve and first derivative recursions. The estimated Wiener HPA model is then used to design the Hammerstein predistorter. In particular, the inverse of the amplitude distortion of the HPA's static nonlinearity can be calculated effectively using the Newton-Raphson formula based on the inverse of the De Boor algorithm. A major advantage of this approach is that both the Wiener HPA identification and the Hammerstein predistorter inverse can be achieved very efficiently and accurately. Simulation results are presented to demonstrate the effectiveness of this novel digital predistorter design.
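The Newton-Raphson inversion of a monotonic static amplitude nonlinearity can be sketched in a few lines. This is a minimal illustration, not the paper's B-spline model: the saturating tanh nonlinearity, the starting point and the tolerance are stand-in assumptions.

```python
import math

def invert_static_nonlinearity(g, g_prime, y, x0=0.0, tol=1e-10, max_iter=50):
    """Solve g(x) = y by Newton-Raphson for a monotonic amplitude nonlinearity g."""
    x = x0
    for _ in range(max_iter):
        step = (g(x) - y) / g_prime(x)   # Newton update: f(x)/f'(x) with f = g - y
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative saturating nonlinearity (stand-in for the B-spline static block)
g = lambda x: math.tanh(x)               # amplitude compression with saturation
g_prime = lambda x: 1.0 - math.tanh(x) ** 2
x = invert_static_nonlinearity(g, g_prime, 0.5)
```

In a predistorter, this inverse maps the desired output amplitude back to the input amplitude that produces it; Newton-Raphson converges quadratically as long as the nonlinearity is monotonic over the operating range.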
Abstract:
The fourth assessment report of the Intergovernmental Panel on Climate Change (IPCC) includes a comparison of observation-based and modeling-based estimates of the aerosol direct radiative forcing. In this comparison, satellite-based studies suggest a more negative aerosol direct radiative forcing than modeling studies. A previous satellite-based study, part of the IPCC comparison, uses aerosol optical depths and accumulation-mode fractions retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) at collection 4. The latest version of MODIS products, named collection 5, improves aerosol retrievals. Using these products, the direct forcing in the shortwave spectrum defined with respect to present-day natural aerosols is now estimated at −1.30 and −0.65 W m−2 on a global clear-sky and all-sky average, respectively, for 2002. These values are still significantly more negative than the numbers reported by modeling studies. By accounting for differences between present-day natural and preindustrial aerosol concentrations, sampling biases, and investigating the impact of differences in the zonal distribution of anthropogenic aerosols, good agreement is reached between the direct forcing derived from MODIS and the Hadley Centre climate model HadGEM2-A over clear-sky oceans. Results also suggest that satellite estimates of anthropogenic aerosol optical depth over land should be coupled with a robust validation strategy in order to refine the observation-based estimate of aerosol direct radiative forcing. In addition, the complex problem of deriving the aerosol direct radiative forcing when aerosols are located above cloud still needs to be addressed.
Response of the middle atmosphere to CO2 doubling: results from the Canadian Middle Atmosphere Model
Abstract:
The Canadian Middle Atmosphere Model (CMAM) has been used to examine the middle atmosphere response to CO2 doubling. The radiative-photochemical response induced by doubling CO2 alone and the response produced by changes in prescribed SSTs are found to be approximately additive, with the former effect dominating throughout the middle atmosphere. The paper discusses the overall response, with emphasis on the effects of SST changes, which allow a tropospheric response to the CO2 forcing. The overall response is a cooling of the middle atmosphere accompanied by significant increases in the ozone and water vapor abundances. The ozone radiative feedback occurs through both an increase in solar heating and a decrease in infrared cooling, with the latter accounting for up to 15% of the total effect. Changes in global mean water vapor cooling are negligible above ~30 hPa. Near the polar summer mesopause, the temperature response is weak and not statistically significant. The main effects of SST changes are a warmer troposphere, a warmer and higher tropopause, cell-like structures of heating and cooling at low and middle latitudes in the middle atmosphere, warming in the summer mesosphere, water vapor increase throughout the domain, and O3 decrease in the lower tropical stratosphere. No noticeable change in upward-propagating planetary wave activity in the extratropical winter–spring stratosphere and no significant temperature response in the polar winter–spring stratosphere have been detected. Increased upwelling in the tropical stratosphere has been found to be linked to changed wave driving at low latitudes.
Abstract:
Inspired by Habermas’ works, we develop a prescriptive conceptual model of stakeholder engagement and corporate social responsibility (CSR) reporting against which empirical descriptions can be compared and contrasted. We compare the high profile case of Kraft's takeover of Cadbury with the conceptual model to illustrate the gap between an ideal speech situation and practice. The paper conducts a desk study of documents relating to the takeover and interviews with stakeholders from the local community to gauge their views of stakeholder engagement and CSR reporting by Cadbury/Kraft. The findings lead to policy recommendations for enhancing stakeholder accountability through improved steering mechanisms.
Abstract:
As laid out in its convention, there are eight different objectives for ECMWF. One of the major objectives will consist of the preparation, on a regular basis, of the data necessary for the preparation of medium-range weather forecasts. The interpretation of this item is that the Centre will make forecasts once a day for a prediction period of up to 10 days. It is also evident that the Centre should not carry out any real weather forecasting but merely disseminate to the Member Countries the basic forecasting parameters with an appropriate resolution in space and time. It follows from this that the forecasting system at the Centre must, from the operational point of view, be functionally integrated with the Weather Services of the Member Countries. The operational interface between ECMWF and the Member Countries must be properly specified in order to obtain reasonable flexibility for both systems. The problem of making numerical atmospheric predictions for periods beyond 4-5 days differs substantially from 2-3 day forecasting. From the physical point of view, we can define a medium-range forecast as a forecast where the initial disturbances have lost their individual structure. However, we are still interested in predicting the atmosphere in a similar way as in short-range forecasting, which means that the model must be able to predict the dissipation and decay of the initial phenomena and the creation of new ones. With this definition, medium-range forecasting is indeed very difficult and generally regarded as more difficult than extended forecasting, where we usually predict only time and space mean values. The predictability of atmospheric flow has been extensively studied in recent years in theoretical investigations and by numerical experiments. As has been discussed elsewhere in this publication (see pp 338 and 431), a 10-day forecast is apparently on the fringe of predictability.
Abstract:
Current methods and techniques used in designing organisational performance measurement systems do not consider the multiple aspects of business processes or the semantics of data generated during the lifecycle of a product. In this paper, we propose an organisational performance measurement systems design model that is based on the semantics of an organisation, business process and products lifecycle. Organisational performance measurement is examined from academic and practice disciplines. The multi-discipline approach is used as a research tool to explore the weaknesses of current models that are used to design organisational performance measurement systems. This helped in identifying the gaps in research and practice concerning the issues and challenges in designing information systems for measuring the performance of an organisation. The knowledge sources investigated include on-going and completed research project reports; scientific and management literature; and practitioners’ magazines.
Abstract:
The behavior of the ensemble Kalman filter (EnKF) is examined in the context of a model that exhibits a nonlinear chaotic (slow) vortical mode coupled to a linear (fast) gravity wave of a given amplitude and frequency. It is shown that accurate recovery of both modes is enhanced when covariances between fast and slow normal-mode variables (which reflect the slaving relations inherent in balanced dynamics) are modeled correctly. More ensemble members are needed to recover the fast, linear gravity wave than the slow, vortical motion. Although the EnKF tends to diverge in the analysis of the gravity wave, the filter divergence is stable and does not lead to a great loss of accuracy. Consequently, provided the ensemble is large enough and observations are made that reflect both time scales, the EnKF is able to recover both time scales more accurately than optimal interpolation (OI), which uses a static error covariance matrix. For OI it is also found to be problematic to observe the state at a frequency that is a subharmonic of the gravity wave frequency, a problem that is in part overcome by the EnKF. However, error in the modeled gravity wave parameters can be detrimental to the performance of the EnKF and remove its implied advantages, suggesting that a modified algorithm or a method for accounting for model error is needed.
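The EnKF analysis step that underlies this comparison can be sketched in a few lines. This is a generic stochastic-EnKF sketch under toy assumptions (a two-variable state, a single observed component, illustrative error covariances), not the paper's experimental setup; the key difference from OI is that the forecast covariance Pf is estimated from the ensemble anomalies rather than held static.

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_members); H: (n_obs, n_state) observation operator;
    y: (n_obs,) observation vector; R: (n_obs, n_obs) observation-error covariance."""
    n = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    Pf = anomalies @ anomalies.T / (n - 1)          # flow-dependent forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    # Perturb the observations so the analysis ensemble has the correct spread
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    return ensemble + K @ (y_pert - H @ ensemble)

# Toy two-variable example: observe only the first state component
rng = np.random.default_rng(0)
ensemble = np.array([[1.0], [2.0]]) + rng.normal(0.0, 1.0, (2, 200))
H = np.array([[1.0, 0.0]])
y = np.array([3.0])
R = np.array([[0.1]])
analysis = enkf_update(ensemble, H, y, R, rng)
```

Because the unobserved second component is updated through its sample covariance with the observed one, a large enough ensemble is essential, which is consistent with the finding that more members are needed to recover the fast mode.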
Abstract:
Key point summary
• Cerebellar ataxias are progressive debilitating diseases with no known treatment and are associated with defective motor function and, in particular, abnormalities to Purkinje cells.
• Mutant mice with deficits in Ca2+ channel auxiliary α2δ-2 subunits are used as models of cerebellar ataxia.
• Our data in the du2J mouse model shows an association between the ataxic phenotype exhibited by homozygous du2J/du2J mice and increased irregularity of Purkinje cell firing.
• We show that both heterozygous +/du2J and homozygous du2J/du2J mice completely lack the strong presynaptic modulation of neuronal firing by cannabinoid CB1 receptors which is exhibited by litter-matched control mice.
• These results show that the du2J ataxia model is associated with deficits in CB1 receptor signalling in the cerebellar cortex, putatively linked with compromised Ca2+ channel activity due to reduced α2δ-2 subunit expression. Knowledge of such deficits may help design therapeutic agents to combat ataxias.
Abstract
Cerebellar ataxias are a group of progressive, debilitating diseases often associated with abnormal Purkinje cell (PC) firing and/or degeneration. Many animal models of cerebellar ataxia display abnormalities in Ca2+ channel function. The ‘ducky’ du2J mouse model of ataxia and absence epilepsy represents a clean knock-out of the auxiliary Ca2+ channel subunit, α2δ-2, and has been associated with deficient Ca2+ channel function in the cerebellar cortex. Here, we investigate effects of the du2J mutation on PC layer (PCL) and granule cell (GC) layer (GCL) neuronal spiking activity and, also, inhibitory neurotransmission at interneurone–Purkinje cell (IN-PC) synapses.
Increased neuronal firing irregularity was seen in the PCL and, to a less marked extent, in the GCL in du2J/du2J, but not +/du2J, mice; these data suggest that the ataxic phenotype is associated with lack of precision of PC firing, that may also impinge on GC activity and requires expression of two du2J alleles to manifest fully. du2J mutation had no clear effect on spontaneous inhibitory postsynaptic current (sIPSC) frequency at IN-PC synapses, but was associated with increased sIPSC amplitudes. du2J mutation ablated cannabinoid CB1 receptor (CB1R)-mediated modulation of spontaneous neuronal spike firing and CB1R-mediated presynaptic inhibition of synaptic transmission at IN-PC synapses in both +/du2J and du2J/du2J mutants; effects that occurred in the absence of changes in CB1R expression. These results demonstrate that the du2J ataxia model is associated with deficient CB1R signalling in the cerebellar cortex, putatively linked with compromised Ca2+ channel activity and the ataxic phenotype.
Abstract:
Two sources of bias arise in conventional loss predictions in the wake of natural disasters. One source of bias stems from neglecting to account for animal genetic resource loss. A second source of bias stems from failure to identify, in addition to the direct effects of such loss, the indirect effects arising from impacts on animal–human interactions. We argue that, in some contexts, the magnitude of bias imputed by neglecting animal genetic resource stocks is substantial. We show, in addition, and contrary to popular belief, that the biases attributable to losses in distinct genetic resource stocks are very likely to be the same. We derive the formal equivalence across the distinct resource stocks by deriving an envelope result in a model that forms the mainstay of enquiry in subsistence farming, and we validate the theory, empirically, in a World-Society-for-the-Protection-of-Animals application.
The capability-affordance model: a method for analysis and modelling of capabilities and affordances
Abstract:
Existing capability models lack qualitative and quantitative means to compare business capabilities. This paper extends previous work and uses affordance theories to consistently model and analyse capabilities. We use the concept of objective and subjective affordances to model capability as a tuple of a set of resource affordance system mechanisms and action paths, dependent on one or more critical affordance factors. We identify an affordance chain of subjective affordances by which affordances work together to enable an action, and an affordance path that links action affordances to create a capability system. We define the mechanism and path underlying capability. We show how affordance modelling notation, AMN, can represent the affordances comprising a capability. We propose a method to quantitatively and qualitatively compare capabilities using efficiency, effectiveness and quality metrics. The method is demonstrated by a medical example comparing the capability of syringe and needle-free anaesthetic systems.
Abstract:
Purpose – This paper seeks to problematise “accounting for biodiversity” and to provide a framework for analysing and understanding the role of accounting in preserving and enhancing biodiversity on Planet Earth. The paper aims to raise awareness of the urgent need to address biodiversity loss and extinction and the need for corporations to discharge accountability for their part in the current biodiversity crisis by accounting for their biodiversity-related strategies and policies. Such accounting is, it is believed, emancipatory and leads to engendering change in corporate behaviour and attitudes. Design/methodology/approach – The authors reviewed the literature relating to biodiversity across a wide array of disciplines including anthropology, biodiversity, ecology, finance, philosophy, and of course, accounting, in order to build an image of the current state of biodiversity and the role which accounting can and “should” play in the future of biodiversity. Findings – It is found that the problems underlying accounting for biodiversity fall into four broad categories: philosophical and scientific problems, accountability problems, technical accounting problems, and problems of accounting practice. Practical implications – Through establishing a framework problematising biodiversity, a roadmap is laid out for researchers and practitioners to navigate a route for future research and policymaking in biodiversity accounting. It is concluded that an interdisciplinary approach to accounting for biodiversity is crucial to ensuring effective action on biodiversity and for accounting for biodiversity to achieve its emancipatory potential. Originality/value – Although there is a wealth of sustainability reporting research, there is hardly any work exploring the role of accounting in preserving and enhancing biodiversity. There is no research exploring the current state of accounting for biodiversity. 
This paper summarises the current state of biodiversity using an interdisciplinary approach and introduces a series of papers devoted to the role of accounting in biodiversity accepted for this AAAJ special issue. The paper also provides a framework identifying the diverse problems associated with accounting for biodiversity.
Abstract:
Fresh water hosing simulations, in which a fresh water flux is imposed in the North Atlantic to force fluctuations of the Atlantic Meridional Overturning Circulation (AMOC), have been routinely performed, first to study the climatic signature of different states of this circulation, then, under present or future conditions, to investigate the potential impact of a partial melting of the Greenland ice sheet. The most compelling examples of climatic changes potentially related to AMOC abrupt variations, however, are found in high resolution palaeo-records from around the globe for the last glacial period. To study those more specifically, more and more fresh water hosing experiments have been performed under glacial conditions in recent years. Here we compare an ensemble of 11 such simulations run with 6 different climate models. Each simulation follows a slightly different design, but all are sufficiently similar to be compared. They all study the impact of a fresh water hosing imposed in the extra-tropical North Atlantic. Common features in the model responses to hosing are the cooling over the North Atlantic, extending along the sub-tropical gyre in the tropical North Atlantic, the southward shift of the Atlantic ITCZ and the weakening of the African and Indian monsoons. On the other hand, the expression of the bipolar see-saw, i.e., warming in the Southern Hemisphere, differs from model to model, with some restricting it to the South Atlantic and specific regions of the southern ocean while others simulate a widespread southern ocean warming. The relationships between the features common to most models, i.e., climate changes over the north and tropical Atlantic, African and Asian monsoon regions, are further quantified.
These suggest a tight correlation between the temperature and precipitation changes over the extra-tropical North Atlantic, but different pathways for the teleconnections between the AMOC/North Atlantic region and the African and Indian monsoon regions.
Abstract:
The solar and longwave environmental irradiance geometry (SOLWEIG) model simulates spatial variations of 3-D radiation fluxes and mean radiant temperature (Tmrt) as well as shadow patterns in complex urban settings. In this paper, a new vegetation scheme is included in SOLWEIG and evaluated. The new shadow casting algorithm for complex vegetation structures makes it possible to obtain continuous images of shadow patterns and sky view factors taking both buildings and vegetation into account. For the calculation of 3-D radiation fluxes and Tmrt, SOLWEIG only requires a limited number of inputs, such as global shortwave radiation, air temperature, relative humidity, geographical information (latitude, longitude and elevation) and urban geometry represented by high-resolution ground and building digital elevation models (DEM). Trees and bushes are represented by separate DEMs. The model is evaluated using 5 days of integral radiation measurements at two sites within a square surrounded by low-rise buildings and vegetation in Göteborg, Sweden (57°N). There is good agreement between modelled and observed values of Tmrt, with an overall correspondence of R2 = 0.91 (p < 0.01, RMSE = 3.1 K). A small overestimation of Tmrt is found at locations shadowed by vegetation. Given this good performance, a number of suggestions for future development are identified for applications including human comfort, building design, planning and evaluation of instrument exposure.
Abstract:
Urbanization, the expansion of built-up areas, is an important yet less-studied aspect of land use/land cover change in climate science. To date, most global climate models used to evaluate effects of land use/land cover change on climate do not include an urban parameterization. Here, the authors describe the formulation and evaluation of a parameterization of urban areas that is incorporated into the Community Land Model, the land surface component of the Community Climate System Model. The model is designed to be simple enough to be compatible with structural and computational constraints of a land surface model coupled to a global climate model yet complex enough to explore physically based processes known to be important in determining urban climatology. The city representation is based upon the “urban canyon” concept, which consists of roofs, sunlit and shaded walls, and canyon floor. The canyon floor is divided into pervious (e.g., residential lawns, parks) and impervious (e.g., roads, parking lots, sidewalks) fractions. Trapping of longwave radiation by canyon surfaces and solar radiation absorption and reflection is determined by accounting for multiple reflections. Separate energy balances and surface temperatures are determined for each canyon facet. A one-dimensional heat conduction equation is solved numerically for a 10-layer column to determine conduction fluxes into and out of canyon surfaces. Model performance is evaluated against measured fluxes and temperatures from two urban sites. Results indicate the model does a reasonable job of simulating the energy balance of cities.
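The 10-layer conduction step described above can be sketched with a simple explicit finite-difference scheme. The diffusivity, layer thickness, time step and boundary temperatures below are illustrative assumptions, not values from the Community Land Model urban parameterization.

```python
def step_heat_conduction(T, kappa, dz, dt, T_top, T_bot):
    """One explicit finite-difference step of dT/dt = kappa * d2T/dz2 for a
    layered column with fixed surface (T_top) and deep (T_bot) temperatures.
    Requires kappa * dt / dz**2 < 0.5 for numerical stability."""
    n = len(T)
    T_new = T[:]
    for i in range(n):
        above = T_top if i == 0 else T[i - 1]
        below = T_bot if i == n - 1 else T[i + 1]
        T_new[i] = T[i] + kappa * dt / dz ** 2 * (above - 2.0 * T[i] + below)
    return T_new

# 10-layer column relaxing toward a warm canyon surface (illustrative values:
# kappa = 1e-6 m2/s, 2 cm layers, 60 s steps, 300 K surface over 283 K ground)
T = [283.0] * 10
for _ in range(5000):
    T = step_heat_conduction(T, kappa=1e-6, dz=0.02, dt=60.0, T_top=300.0, T_bot=283.0)
```

After enough steps the column approaches the expected near-linear steady profile between the two boundary temperatures; a production model would instead use an implicit solver with flux boundary conditions from the facet energy balances.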
Abstract:
Medium range flood forecasting activities, driven by various meteorological forecasts ranging from high resolution deterministic forecasts to low spatial resolution ensemble prediction systems, share a major challenge in the appropriateness and design of performance measures. In this paper possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied in order to circumvent the problem of the autocorrelation dominating river discharge time-series and in order to create a benchmark model enabling the decision makers to evaluate the forecast quality and the model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
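The cost-loss function referred to here can be sketched with the standard static cost-loss decision model: a user pays a protection cost C whenever action is taken and suffers a loss L when an unprotected event occurs, so the rational rule is to protect whenever the forecast probability exceeds C/L. The probabilities and costs below are made-up illustrative values.

```python
def expected_expense(prob_forecasts, outcomes, cost, loss):
    """Total expense for a user who protects whenever the forecast
    probability of the event meets or exceeds the cost/loss ratio.
    prob_forecasts: forecast event probabilities in [0, 1];
    outcomes: 1 if the event occurred, else 0."""
    threshold = cost / loss
    total = 0.0
    for p, event in zip(prob_forecasts, outcomes):
        if p >= threshold:
            total += cost          # protection taken, cost paid regardless
        elif event:
            total += loss          # unprotected event: full loss incurred
    return total

# Three forecast days with cost 2, loss 10 (threshold 0.2): protect on days 1
# and 3; the day-2 event probability is below threshold and no event occurs.
expense = expected_expense([0.9, 0.1, 0.5], [1, 0, 1], cost=2.0, loss=10.0)
```

Comparing this expense against the expense of a climatological or persistence benchmark gives the kind of decision-oriented forecast value measure the abstract advocates.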