147 results for Money generation model
in University of Queensland eSpace - Australia
Abstract:
This paper focuses on measuring the extent to which market power has been exercised in a recently deregulated electricity generation sector. Our study emphasises the need to consider the concept of market power in a long-run dynamic context. A market power index is constructed focusing on differences between actual market returns and long-run competitive returns, estimated using a programming model devised by the authors. The market power implications of hedge contracts are briefly considered. The state of Queensland, Australia, is used as the context for the analysis. The results suggest that generators have exercised significant market power since deregulation.
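The abstract does not give the functional form of the index; as a minimal sketch, one might express it as the relative markup of observed returns over the estimated long-run competitive benchmark (all names and numbers below are illustrative assumptions, not the authors' model):

    # Hypothetical sketch: market power index as the relative markup of
    # actual returns over estimated long-run competitive returns, per period.
    actual_returns = [52.0, 61.5, 58.3]        # observed margins (illustrative)
    competitive_returns = [40.0, 41.2, 39.8]   # long-run competitive benchmark (illustrative)

    index = [
        (a - c) / c  # relative markup; 0 under perfect competition
        for a, c in zip(actual_returns, competitive_returns)
    ]
    print(index)  # values persistently above 0 would suggest exercised market power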
Abstract:
The detection of seizure in the newborn is a critical aspect of neurological research. Current automatic detection techniques are difficult to assess due to the problems associated with acquiring and labelling newborn electroencephalogram (EEG) data. A realistic model for newborn EEG would allow confident development, assessment, and comparison of these detection techniques. This paper presents a model for newborn EEG that accounts for its self-similar and non-stationary nature. The model consists of background and seizure sub-models. The newborn EEG background model is based on the short-time power spectrum with a time-varying power law. The relationship between the fractal dimension and the power law of a power spectrum is utilized for accurate estimation of the short-time power law exponent. The newborn EEG seizure model is based on a well-known time-frequency signal model. This model addresses all significant time-frequency characteristics of newborn EEG seizure, which include multiple components or harmonics, piecewise-linear instantaneous frequency laws, and harmonic amplitude modulation. The parameters of both models are shown to be random and are modelled using data from a total of 500 background epochs and 204 seizure epochs. The newborn EEG background and seizure models are validated against real newborn EEG data using the correlation coefficient. The results show that the output of the proposed models has a higher correlation with real newborn EEG than currently accepted models (a 10% and 38% improvement for the background and seizure models, respectively).
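A minimal sketch of one ingredient of such a background model: synthesizing a signal whose power spectrum follows a power law 1/f^gamma by spectral shaping of white noise. The exponent, epoch length, and sampling rate below are illustrative; the paper's time-varying short-time estimation scheme is not reproduced here.

    import numpy as np

    def power_law_noise(n, gamma, fs=256.0, seed=0):
        """Synthesize a signal with power spectrum S(f) ~ 1/f**gamma
        by shaping the spectrum of white Gaussian noise."""
        rng = np.random.default_rng(seed)
        white = rng.standard_normal(n)
        spectrum = np.fft.rfft(white)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        freqs[0] = freqs[1]                # avoid division by zero at DC
        spectrum *= freqs ** (-gamma / 2)  # amplitude ~ f^(-gamma/2) => power ~ f^-gamma
        return np.fft.irfft(spectrum, n)

    # Illustrative: one 'epoch' of background with a fixed exponent;
    # the paper lets the exponent vary over time (short-time estimation).
    epoch = power_law_noise(n=2048, gamma=1.5)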
Abstract:
The effect of temperature-dependent viscosity on fully developed forced convection in a duct of rectangular cross-section occupied by a fluid-saturated porous medium is investigated analytically. The Darcy flow model is applied, and the viscosity-temperature relation is assumed to be inverse-linear. The case of uniform heat flux on the walls, i.e. the H boundary condition in the terminology of Kays and Crawford, is treated. For a fluid whose viscosity decreases with temperature, it is found that the effect of the variation is to increase the Nusselt number for heated walls. Having found the velocity and the temperature distribution, the second law of thermodynamics is invoked to find the local and average entropy generation rate. Expressions for the entropy generation rate, the Bejan number, the heat transfer irreversibility, and the fluid flow irreversibility are presented in terms of the Brinkman number, the Péclet number, the viscosity variation number, the dimensionless wall heat flux, and the aspect ratio (width-to-height ratio). These expressions permit a parametric study of the problem, from which it is observed that the entropy generated due to flow in a duct of square cross-section exceeds that of its rectangular counterparts, while increasing the aspect ratio decreases the entropy generation rate, similar to what was previously reported for the clear-flow case.
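For reference, these quantities are conventionally related as follows (standard definitions; the paper's specific expressions are not given in the abstract). The local entropy generation rate is the sum of heat-transfer and fluid-friction irreversibilities, and the Bejan number is the heat-transfer share of the total:

\[
S'''_{gen} = S'''_{heat} + S'''_{fluid}, \qquad
\mathrm{Be} = \frac{S'''_{heat}}{S'''_{gen}},
\]

so that Be tends to 1 when heat transfer irreversibility dominates and to 0 when fluid friction dominates.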
Abstract:
We investigate analytically the first and the second law characteristics of fully developed forced convection inside a porous-saturated duct of rectangular cross-section. The Darcy-Brinkman flow model is employed. Three different types of thermal boundary conditions are examined. Expressions for the Nusselt number, the Bejan number, and the dimensionless entropy generation rate are presented in terms of the system parameters. The conclusions of this analytical study will make it possible to compare, evaluate, and optimize alternative rectangular duct design options in terms of heat transfer, pressure drop, and entropy generation. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
A numerical study is reported to investigate both the First and the Second Law of Thermodynamics for thermally developing forced convection in a circular tube filled with a saturated porous medium, with uniform wall temperature and with the effects of viscous dissipation included. A theoretical analysis is also presented to study the problem in the asymptotic region, applying the perturbation solution of the Brinkman momentum equation reported by Hooman and Kani [1]. Expressions are reported for the temperature profile, the Nusselt number, the Bejan number, and the dimensionless entropy generation rate in the asymptotic region. The numerical results are found to be in good agreement with their theoretical counterparts.
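For reference, the Nusselt number reported here follows the standard definition (the paper's asymptotic expressions are not reproduced in the abstract); for a tube of diameter D with wall temperature T_w and bulk temperature T_b,

\[
\mathrm{Nu} = \frac{h D}{k_{eff}}, \qquad h = \frac{q''_w}{T_w - T_b},
\]

where q''_w is the wall heat flux and k_eff is the effective thermal conductivity of the saturated medium.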
Abstract:
Heat transfer and entropy generation analysis of thermally developing forced convection in a porous-saturated duct of rectangular cross-section, with walls maintained at a constant and uniform heat flux, is investigated based on the Brinkman flow model. The classical Galerkin method is used to obtain the fully developed velocity distribution. To solve the thermal energy equation, with the effects of viscous dissipation included, the Extended Weighted Residuals Method (EWRM) is applied. The local (three-dimensional) temperature field is solved by utilizing the Green's function solution based on the EWRM, with symbolic algebra used for convenience of presentation. Following the computation of the temperature field, expressions are presented for the local Nusselt number and the bulk temperature as functions of the dimensionless longitudinal coordinate, the aspect ratio, the Darcy number, the viscosity ratio, and the Brinkman number. With the velocity and temperature fields determined, the Second Law (of Thermodynamics) aspect of the problem is also investigated. Approximate closed-form solutions are also presented for two limiting cases of MDa values. It is observed that decreasing the aspect ratio and MDa values increases the entropy generation rate.
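A minimal sketch of the kind of Galerkin computation used for the fully developed velocity: with a double sine basis on the rectangular cross-section, the Brinkman equation yields a diagonal system. The parameter values and dimensional form below are illustrative assumptions, not the paper's nondimensionalization.

    import numpy as np

    # Sketch: Galerkin solution of the Brinkman equation for fully developed
    # flow in a rectangular duct 0 <= y <= a, 0 <= z <= b:
    #   mu_eff * (u_yy + u_zz) - (mu/K) * u = -G   with u = 0 on the walls,
    # where G = -dp/dx is the constant driving pressure gradient. With the
    # basis sin(m*pi*y/a) * sin(n*pi*z/b), only odd (m, n) modes are forced.
    a, b = 1.0, 2.0                  # duct side lengths (illustrative)
    mu, mu_eff, K = 1.0, 1.0, 1e-2   # viscosity, effective viscosity, permeability
    G = 1.0                          # pressure gradient magnitude
    M = N = 49                       # truncation order (odd modes up to M, N)

    def velocity(y, z):
        u = 0.0
        for m in range(1, M + 1, 2):
            for n in range(1, N + 1, 2):
                lam = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2
                c = 16.0 * G / (np.pi ** 2 * m * n * (mu_eff * lam + mu / K))
                u += c * np.sin(m * np.pi * y / a) * np.sin(n * np.pi * z / b)
        return u

    print(velocity(a / 2, b / 2))    # centreline velocity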
Abstract:
The acceptance-probability-controlled simulated annealing algorithm with an adaptive move generation procedure, an optimization technique derived from simulated annealing, is presented. The adaptive move generation procedure was compared against a random move generation procedure on seven multiminima test functions, as well as on synthetic data resembling the optical constants of a metal. In all cases the algorithm proved to have faster convergence and superior escape from local minima. The algorithm was then applied to fit the model dielectric function to data for platinum and aluminum.
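A minimal sketch of simulated annealing with an adaptive move-generation step. The control law here, which scales the step size to track a target acceptance rate, is an illustrative stand-in for the authors' acceptance-probability-controlled scheme, which the abstract does not specify.

    import math, random

    def anneal(f, x0, t0=1.0, cooling=0.95, sweeps=200, target_accept=0.4):
        x, fx = x0, f(x0)
        best, fbest = x, fx
        step, temp = 1.0, t0
        for _ in range(sweeps):
            accepted = 0
            for _ in range(50):                      # moves per temperature
                cand = x + random.uniform(-step, step)
                fc = f(cand)
                # Metropolis acceptance criterion
                if fc < fx or random.random() < math.exp((fx - fc) / temp):
                    x, fx = cand, fc
                    accepted += 1
                    if fc < fbest:
                        best, fbest = cand, fc
            rate = accepted / 50
            # Adaptive move generation: widen steps when acceptance is high,
            # shrink them when it is low, tracking the target rate.
            step *= 1.0 + 0.5 * (rate - target_accept)
            temp *= cooling
        return best, fbest

    # Illustrative multiminima test function.
    best, val = anneal(lambda x: x * x + 10 * math.sin(3 * x), x0=5.0)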
Abstract:
Pasminco Century Mine has developed a geophysical logging system to provide new data for ore mining/grade control and the generation of Short Term Models for mine planning. Previous work indicated the applicability of petrophysical logging for lithology prediction; however, the automation of the method was not considered reliable enough for the development of a mining model. A test survey was undertaken using two diamond-drilled control holes and eight percussion holes. All holes were logged with natural gamma, magnetic susceptibility, and density. Calibration of the LogTrans auto-interpretation software using only natural gamma and magnetic susceptibility indicated that both lithology and stratigraphy could be predicted. Development of a capability to enforce stratigraphic order within LogTrans increased the reliability and accuracy of interpretations. After the completion of a feasibility program, Century Mine has invested in a dedicated logging vehicle to log blast holes as well as for use in in-fill drilling programs. Future refinement of the system may lead to the development of GPS-controlled excavators for mining ore.
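The abstract does not describe how LogTrans enforces stratigraphic order. As an illustrative sketch of the general idea, per-sample lithology misfits can be combined with a dynamic program that only allows units to appear in their known stratigraphic sequence down-hole; all data and the misfit choice below are hypothetical.

    import numpy as np

    # Hypothetical per-sample misfit: squared distance of (natural gamma,
    # magnetic susceptibility) readings from each unit's calibrated mean.
    logs = np.array([[80, 5], [78, 6], [40, 20], [42, 22], [15, 3]], float)
    unit_means = np.array([[79, 5], [41, 21], [14, 3]], float)  # in stratigraphic order
    misfit = ((logs[:, None, :] - unit_means[None, :, :]) ** 2).sum(axis=2)

    n, k = misfit.shape
    best = np.full((n, k), np.inf)
    best[0, 0] = misfit[0, 0]          # assume the hole starts in the uppermost unit
    for i in range(1, n):
        for j in range(k):
            stay = best[i - 1, j]                       # remain in the same unit
            step = best[i - 1, j - 1] if j else np.inf  # advance one unit down
            best[i, j] = misfit[i, j] + min(stay, step)

    # Backtrack to recover the stratigraphically ordered unit assignment.
    path = [int(np.argmin(best[-1]))]
    for i in range(n - 1, 0, -1):
        j = path[-1]
        path.append(j if j == 0 or best[i - 1, j] <= best[i - 1, j - 1] else j - 1)
    path.reverse()
    print(path)   # [0, 0, 1, 1, 2]: units only ever advance down-hole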
Abstract:
Management are keen to maximize the life span of an information system because of the high cost, organizational disruption, and risk of failure associated with redeveloping or replacing it. This research investigates the effects that various factors have on an information system's life span by understanding how the factors affect an information system's stability. The research builds on a previously developed two-stage model of information system change, whereby an information system is either in a stable state of evolution, in which the information system's functionality is evolving, or in a state of revolution, in which the information system is being replaced because it is not providing the functionality expected by its users. A case study surveyed a number of systems within one organization. The aim was to test whether a relationship existed between the base value of the volatility index (a measure of the stability of an information system) and certain system characteristics. Data relating to some 3000 user change requests covering 40 systems over a 10-year period were obtained. The following factors were hypothesized to have significant associations with the base value of the volatility index: language level (generation of the language of construction), system size, system age, and the timing of changes applied to a system. Significant associations were found in the hypothesized directions, except that the timing of user changes was not associated with any change in the value of the volatility index. Copyright (C) 2002 John Wiley & Sons, Ltd.
Abstract:
For dynamic simulations to be credible, verification of the computer code must be an integral part of the modelling process. This two-part paper describes a novel approach to verification through program testing and debugging. In Part 1, a methodology is presented for detecting and isolating coding errors using back-to-back testing. Residuals are generated by comparing the output of two independent implementations, in response to identical inputs. The key feature of the methodology is that a specially modified observer is created using one of the implementations, so as to impose an error-dependent structure on these residuals. Each error can be associated with a fixed and known subspace, permitting errors to be isolated to specific equations in the code. It is shown that the geometric properties extend to multiple errors in either one of the two implementations. Copyright (C) 2003 John Wiley & Sons, Ltd.
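A minimal sketch of the back-to-back residual generation described here, using two hypothetical independent implementations of the same model driven by identical inputs; the paper's error-isolating observer structure is not reproduced.

    import numpy as np

    def sim_a(x, u, dt):   # reference implementation (illustrative model)
        return x + dt * (-0.5 * x + u)

    def sim_b(x, u, dt):   # independent re-implementation under test
        return x + dt * (-0.5 * x + u)   # a coding error here would show up below

    dt, x_a, x_b = 0.01, 1.0, 1.0
    residuals = []
    for k in range(1000):
        u = np.sin(0.1 * k)          # identical input to both implementations
        x_a, x_b = sim_a(x_a, u, dt), sim_b(x_b, u, dt)
        residuals.append(x_a - x_b)  # nonzero residual => coding discrepancy

    print(max(abs(r) for r in residuals))   # 0.0 when the codes agree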
Abstract:
A mathematical model that describes the operation of a sequential leach bed process for anaerobic digestion of the organic fraction of municipal solid waste (MSW) is developed and validated. This model assumes that ultimate mineralisation of the organic component of the waste occurs in three steps, namely solubilisation of particulate matter, fermentation to volatile organic acids (modelled as acetic acid) along with liberation of carbon dioxide and hydrogen, and methanogenesis from acetate and hydrogen. The model incorporates the ionic equilibrium equations arising from the dissolution of carbon dioxide, the generation of alkalinity from the breakdown of solids, and the dissociation of acetic acid. Rather than a charge balance, a mass balance on the hydronium and hydroxide ions is used to calculate pH. The flow of liquid through the bed is modelled as occurring through two zones: a permeable zone with high flushing rates and a more stagnant zone. Some of the kinetic parameters for the biological processes were obtained from batch MSW digestion experiments. The parameters for the flow model were obtained from residence time distribution studies conducted using tritium as a tracer. The model was validated using data from leach bed digestion experiments in which a leachate volume equal to 10% of the fresh waste bed volume was sequenced. The model was then tested, without altering any kinetic or flow parameters, by varying the volume of leachate sequenced between the beds. Simulations for sequencing/recirculating 5 and 30% of the bed volume are presented and compared with experimental results. (C) 2002 Elsevier Science B.V. All rights reserved.
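A minimal sketch of the three-step kinetic core (first-order solubilisation, then Monod-type acidogenesis and methanogenesis); the rate forms and all parameter values below are illustrative assumptions, not the paper's calibrated model, and the ionic equilibrium and two-zone flow sub-models are omitted.

    import numpy as np
    from scipy.integrate import solve_ivp

    # States: particulate COD P, soluble COD S, acetate A, cumulative methane M.
    kh = 0.1               # 1/d, first-order hydrolysis/solubilisation (illustrative)
    mu1, K1 = 2.0, 0.5     # acidogenesis: max rate (1/d), half-saturation (g/L)
    mu2, K2 = 0.4, 0.15    # methanogenesis from acetate
    X1, X2 = 0.1, 0.1      # fixed biomass concentrations (g/L)

    def rhs(t, y):
        P, S, A, M = y
        r_hyd = kh * P                     # solubilisation of particulate matter
        r_acid = mu1 * X1 * S / (K1 + S)   # fermentation to acetic acid
        r_meth = mu2 * X2 * A / (K2 + A)   # methanogenesis from acetate
        return [-r_hyd, r_hyd - r_acid, r_acid - r_meth, r_meth]

    sol = solve_ivp(rhs, (0, 60), [50.0, 0.0, 0.0, 0.0])
    print(sol.y[:, -1])    # state after 60 days (illustrative)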
Abstract:
Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
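A minimal sketch of the testgraph idea: a directed graph whose edges drive the specification under test through states while generic properties are checked, with a mutant used to show how property violations are surfaced. The specification, invariant, and mutation below are hypothetical, not drawn from the paper's case studies.

    import copy

    # Hypothetical model-based specification of a bounded counter.
    class CounterSpec:
        def __init__(self, limit=3):
            self.n, self.limit = 0, limit
        def inc(self):
            if self.n < self.limit:
                self.n += 1
        def reset(self):
            self.n = 0

    def invariant(spec):        # generic property checked in every reached state
        return 0 <= spec.n <= spec.limit

    # Testgraph: a directed graph whose edges are labelled with operations.
    testgraph = {"empty": [("inc", "mid")],
                 "mid": [("inc", "mid"), ("reset", "empty")]}

    def traverse(spec, node="empty", depth=4):
        """Walk the testgraph, applying each edge's operation to an
        independent copy of the state and checking the invariant."""
        if depth == 0:
            return True
        for op, nxt in testgraph.get(node, []):
            branch = copy.deepcopy(spec)   # independent state per branch
            getattr(branch, op)()
            if not invariant(branch):
                print(f"property violated by '{op}' on the path to '{nxt}'")
                return False
            if not traverse(branch, nxt, depth - 1):
                return False
        return True

    class Mutant(CounterSpec):  # mutant: boundary operator '<' changed to '<='
        def inc(self):
            if self.n <= self.limit:
                self.n += 1

    print(traverse(CounterSpec()))  # True: the original satisfies the property
    print(traverse(Mutant()))       # False: the mutant is caught by the testgraph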
Abstract:
We investigated whether the protection from graft-versus-host disease (GVHD) afforded by donor treatment with granulocyte colony-stimulating factor (G-CSF) could be enhanced by dose escalation. Donor treatment with human G-CSF prevented GVHD in the B6 --> B6D2F1 murine model in a dose-dependent fashion, and murine G-CSF provided equivalent protection from GVHD at 10-fold lower doses. Donor pretreatment with a single dose of pegylated G-CSF (peg-G-CSF) prevented GVHD to a significantly greater extent than standard G-CSF (survival, 75% versus 11%, P < .001). Donor T cells from peg-G-CSF-treated donors failed to proliferate to alloantigen and inhibited the responses of control T cells in an interleukin 10 (IL-10)-dependent fashion in vitro. T cells from peg-G-CSF-treated IL-10(-/-) donors induced lethal GVHD; T cells from peg-G-CSF-treated wild-type (wt) donors promoted long-term survival. Whereas T cells from peg-G-CSF-treated wt donors were able to regulate GVHD induced by T cells from control-treated donors, T cells from G-CSF-treated wt donors and peg-G-CSF-treated IL-10(-/-) donors did not prevent mortality. Thus, peg-G-CSF is markedly superior to standard G-CSF for the prevention of GVHD following allogeneic stem cell transplantation (SCT), due to the generation of IL-10-producing regulatory T cells. These data support prospective clinical trials of peg-G-CSF-mobilized allogeneic blood SCT. (C) 2004 by The American Society of Hematology.