788 results for Rectifiability of demand


Relevance:

30.00%

Publisher:

Abstract:

Principal Topic: For forward-thinking companies, the environment may represent the "biggest opportunity for enterprise and invention the industrial world has ever seen" (Cairncross 1990). Increasing awareness of environmental and sustainability issues through the media, including the promotion of Al Gore's "An Inconvenient Truth", has driven demand for business processes that reduce the detrimental environmental impacts of global development (Dean & McMullen 2007). The increased demand for more environmentally sensitive products and services represents an opportunity for the development of ventures that seek to satisfy this demand through entrepreneurial action. As a consequence, recent market developments in renewable energy, carbon emissions, fuel cells, green building, and other sectors suggest that opportunities for environmental entrepreneurship are of growing importance (Dean and McMullen 2007) and that this is an increasingly important area of business activity (Schaper 2005). In the last decade in particular, big business has sought to develop a more "sustainability/green friendly" orientation in response to public pressure and increased government legislation and policy to improve environmental performance (Cohen and Winn 2007). Whilst the literature and media are littered with examples of the sustainability practices of large firms, nascent and young sustainability firms have only recently begun generating strong research and policy interest (Shepherd, Kuskova and Patzelt 2009): not only for their potential to generate above-average financial performance and returns owing to the growing popularity of, and demand for, sustainability products and service offerings, but also for their intent to lessen environmental impacts and to provide a more accurate reflection of the "true cost" of market offerings, taking into account carbon and environmental impacts. More specifically, researchers have suggested that although the previous focus has been on large firms and their impact on the environment, the estimated collective impact of entries and exits of nascent and young firms in development is substantial and could outweigh the combined environmental impact of large companies (Hillary, 2000). Therefore, it may be argued that greater attention should be paid to nascent and young firms and to research on their sustainability practices, both for their potential to reduce environmental impacts and for their potential to deliver higher financial performance. Whilst this research uses only the first wave of a four-year longitudinal study of nascent and young firms, it can still provide an initial analysis on which to base further research. The aim of this paper therefore is to provide an overview of the emerging literature on sustainable entrepreneurship and to present selected preliminary results from the first wave of the data collection, with comparison, where appropriate, between sustainable firms and firms that do not fulfil these criteria. "One of the key challenges in evaluating sustainability entrepreneurship is the lack of agreement in how it is defined" (Schaper, 2005: 10). Some evaluate sustainable entrepreneurs simply as one category of entrepreneurs, with little difference between them and traditional entrepreneurs (Dees, 1998). Other research recognises values-based sustainable enterprises as requiring a unique perspective (Parrish, 2005).
Some see environmental or sustainable entrepreneurship as a subset of social entrepreneurship (Cohen & Winn, 2007; Dean & McMullen, 2007), whilst others see it as a separate, distinct theory (Archer 2009). Following one of the first definitions of sustainability, developed by the Brundtland Commission (1987), we define sustainable entrepreneurship as firms which "seek to meet the needs and aspirations of the present without compromising the ability to meet those of the future". ---------- Methodology/Key Propositions: In this exploratory paper we investigate sustainable entrepreneurship using Cohen et al.'s (2008) framework to identify strategies of nascent and young entrepreneurial firms. We use data from the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE). This study shares its general empirical approach with the PSED studies in the US (Reynolds et al 1994; Reynolds & Curtin 2008). The overall study uses samples of 727 nascent (not yet operational) firms and another 674 young firms, the latter being operational but less than four years old. To generate the sub-sample of sustainability firms, we used content analysis techniques on the firm titles, firm descriptions and product descriptions provided by respondents. Two independent coders used a predefined codebook developed from our review of the sustainability entrepreneurship literature (Cohen et al. 2009) to evaluate the content based on terms such as "sustainable", "eco-friendly", "renewable energy" and "environment", amongst others. Inter-rater reliability was checked and the kappa coefficient (0.746) was found to be within the acceptable range. In total, 85 firms fulfilled the criteria for inclusion in the sustainability cohort. ---------- Results and Implications: The results for this paper are based on Wave One of the CAUSEE survey, which has been completed; the data are available for analysis. It is expected that the findings will assist in beginning to develop an understanding of nascent and young firms that are driven to contribute to a society which is sustainable not just from an economic perspective (Cohen et al 2008), but from an environmental and social perspective as well. The CAUSEE study provides an opportunity to compare the characteristics of sustainability entrepreneurs with those of entrepreneurial firms without a stated environmental purpose, which constitute the majority of the new firms created each year, using a large-scale, novel longitudinal dataset. The results have implications for government in designing better conditions for the creation of new businesses; for organisations that assist sustainability firms in developing better advice programs in line with a better understanding of their needs and requirements; for individuals who may be considering becoming entrepreneurs in high-potential arenas; and for existing entrepreneurs seeking to make better decisions.
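The inter-rater reliability check described above reduces to a standard Cohen's kappa computation over the two coders' classifications. The sketch below shows, on purely hypothetical labels (not the CAUSEE data), how such a kappa coefficient can be computed.

```python
# Minimal sketch: Cohen's kappa for two independent coders classifying firms
# as "sustainability" vs "other". The labels below are illustrative only.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

coder_a = ["sust", "other", "sust", "other", "other", "sust", "other", "other"]
coder_b = ["sust", "other", "other", "other", "other", "sust", "other", "sust"]
print(round(cohens_kappa(coder_a, coder_b), 3))
```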

Relevance:

30.00%

Publisher:

Abstract:

It is widely held that strong relationships exist between housing, economic status, and well-being. This is exemplified by widespread housing stock surpluses in many countries, which threaten to destabilise numerous aspects of individual and community life. However, the balance of housing demand and supply is not consistent across countries. The Australian position provides a distinct contrast, whereby seemingly inexorable housing demand remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of immigration, and further buoyed by sustained historically low interest rates, increasing income levels, and increased government assistance for first home buyers, this strong housing demand ensures that issues related to housing affordability continue to gain prominence. A significant but less visible factor affecting housing affordability – particularly for new housing development – is holding costs. These costs are in many ways "hidden" and cannot always be easily identified. Although they are only one contributor, the nature and extent of their impact require elucidation. In its simplest form, the analysis commences with a calculation of the interest or opportunity cost of holding land. However, there is significantly more complexity for major new developments – particularly greenfield property development. Preliminary analysis conducted by the author suggests that even small shifts in the primary factors driving holding costs can appreciably affect housing affordability – and notably, to a greater extent than commonly held. Even so, their importance and perceived high-level impact can be gauged from the unprecedented level of attention policy makers have given them over recent years. This may be evidenced by the embedding of specific strategies to address burgeoning holding costs (and particularly the cost savings associated with streamlining regulatory assessment) within statutory instruments such as the Queensland Housing Affordability Strategy and the South East Queensland Regional Plan. However, several key issues require investigation. Firstly, the computation of and methodology behind holding costs vary widely; in fact, holding costs are not only treated inconsistently but in some instances are ignored completely. Secondly, some ambiguity exists in terms of which elements of holding costs are included, thereby affecting the assessment of their relative contribution. Perhaps this may in part be explained by their nature: such costs are not always immediately apparent. Some forms of holding costs are not as visible as the more tangible cost items associated with greenfield development such as regulatory fees, government taxes, acquisition costs, selling fees, commissions and others. Holding costs are also more difficult to evaluate since, for the most part, they must ultimately be assessed over time in an ever-changing environment, based on their strong relationship with opportunity cost, which is in turn dependent, inter alia, upon prevailing inflation and/or interest rates. By extending research in the general area of housing affordability, this thesis seeks to provide a more detailed investigation of the elements related to holding costs and, in so doing, to determine the size of their impact specifically on the end user. This will involve the development of soundly based economic and econometric models which seek to clarify the component impacts of holding costs. Ultimately, there are significant policy implications for the frameworks used in Australian jurisdictions to promote, retain, or otherwise maximise opportunities for affordable housing.
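As a purely illustrative sketch of the "simplest form" mentioned above (the interest or opportunity cost of holding land), the calculation below compounds a hypothetical financing rate over the holding period; the land value, rate and holding periods are assumptions, not figures from the thesis.

```python
# Illustrative sketch only: the simplest form of a holding cost, the interest
# (opportunity) cost of holding land during the approval/development period.
# All figures are hypothetical.

def holding_cost(land_value, annual_rate, months_held):
    """Opportunity cost of capital tied up in land, compounded monthly."""
    monthly_rate = annual_rate / 12.0
    return land_value * ((1 + monthly_rate) ** months_held - 1)

land_value = 250_000     # purchase price of a greenfield lot (hypothetical)
annual_rate = 0.07       # financing or opportunity cost of capital (hypothetical)

for months in (12, 18, 24):
    cost = holding_cost(land_value, annual_rate, months)
    print(f"{months:2d} months held: holding cost = ${cost:,.0f} "
          f"({cost / land_value:.1%} of land value)")
```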

Relevance:

30.00%

Publisher:

Abstract:

Research on student learning has shown that students' learning approaches are influenced by the learning context (Evans, Kirby, & Fabrigar, 2003). Of the many contextual factors, assessment has been found to have the most important influence on the way students go about learning. For example, assessment that is perceived to require a low level of cognitive ability is more likely to elicit a learning approach that concentrates on reproductive learning activities. Moreover, assessment demands also interact with learning approach to determine academic performance. In this paper an assessment-specific model of learning comprising presage, process and product variables (Biggs, 2001) was proposed and tested against data obtained from a sample of introductory economics students (n=434). The model was used to empirically investigate the influence of learning inputs and learning approaches on academic performance across assessment types (essay assignment, multiple-choice exam and exam essay). By including learning approaches in the learning model, the mechanism through which learning inputs determine academic performance was examined. Methodological limitations of the study will also be discussed.
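To make the presage-process-product structure concrete, the hedged sketch below estimates a two-step path of this kind with ordinary least squares on synthetic data; the variable names, coefficients and data are illustrative assumptions, not the study's measures or results.

```python
# Hedged sketch of a presage-process-product style analysis: a learning input
# (presage) predicts a learning approach (process), which in turn predicts the
# mark on one assessment type (product). Data are synthetic and illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 434
prior_ability = rng.normal(0, 1, n)                          # presage (assumed proxy)
deep_approach = 0.5 * prior_ability + rng.normal(0, 1, n)    # process
exam_mark = 60 + 4 * deep_approach + 3 * prior_ability + rng.normal(0, 5, n)  # product

df = pd.DataFrame({"prior_ability": prior_ability,
                   "deep_approach": deep_approach,
                   "exam_mark": exam_mark})

# Step 1: presage -> process
process_model = smf.ols("deep_approach ~ prior_ability", data=df).fit()
# Step 2: presage + process -> product (direct and mediated effects)
product_model = smf.ols("exam_mark ~ prior_ability + deep_approach", data=df).fit()

print(process_model.params)
print(product_model.params)
```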

Relevance:

30.00%

Publisher:

Abstract:

The standard Blanchard-Quah (BQ) decomposition forces aggregate demand and supply shocks to be orthogonal. However, this assumption is problematic for a nation with an inflation target. The very notion of inflation targeting means that monetary policy reacts to changes in aggregate supply. This paper employs a modification of the BQ procedure that allows for correlated shifts in aggregate supply and demand. It is found that shocks to Australian aggregate demand and supply are highly correlated. The estimated shifts in the aggregate demand and supply curves are then used to measure the effects of inflation targeting on the Australian inflation rate and level of GDP.
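For readers unfamiliar with the baseline being modified, the sketch below implements the standard Blanchard-Quah identification (orthogonal shocks, with the demand shock restricted to have no long-run effect on output) on synthetic bivariate data. It does not implement the paper's correlated-shock modification, and the data are stand-ins, not Australian series.

```python
# Minimal sketch of the standard Blanchard-Quah long-run identification for a
# bivariate system of output growth and inflation. Data are synthetic stand-ins.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T = 300
data = np.column_stack([rng.normal(0.8, 1.0, T),   # "output growth" (hypothetical)
                        rng.normal(2.5, 0.7, T)])  # "inflation" (hypothetical)

res = VAR(data).fit(2)
sigma_u = res.sigma_u                     # reduced-form residual covariance
A1 = np.eye(2) - sum(res.coefs)           # A(1) = I - A_1 - ... - A_p
C1 = np.linalg.inv(A1)                    # long-run cumulative impact matrix

# Long-run restriction: the demand shock (second shock) has no permanent effect
# on output, so the long-run impact matrix F = C(1) @ B is lower triangular.
F = np.linalg.cholesky(C1 @ sigma_u @ C1.T)
B = A1 @ F                                # contemporaneous impact of structural shocks

print("Contemporaneous impact matrix B:\n", B)
print("Long-run impact matrix C(1) @ B:\n", C1 @ B)
```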

Relevance:

30.00%

Publisher:

Abstract:

Objectives: The Nurse Researcher Project (NRP) was initiated to support the development of a nursing research and evidence-based practice culture in Cancer Care Services (CCS) in a large tertiary hospital in Australia. The position was established and evaluated to inform future directions in the organisation.---------- Background: The demand for quality cancer care has been expanding over the past decades. Nurses are well placed to make an impact on improving the health outcomes of people affected by cancer. At the same time, there is a robust body of literature documenting the barriers to undertaking and utilising research by and for nurses and nursing. A number of strategies have been implemented to address these barriers, including a range of staff researcher positions, but scant attention has been paid to evaluating the outcomes of these strategies. The role of nurse researcher has been documented in the literature, with the aim of providing support to nurses in the clinical setting. There is, to date, little information on the design, implementation and evaluation of this role.---------- Design: Donabedian's model of program evaluation was used to implement and evaluate this initiative.---------- Methods: The NRP outlined the steps needed to implement the nurse researcher role in a clinical setting. These steps involved the design of the role, planning for the support system for the role, and evaluation of the outcomes of the role over two years.---------- Discussion: This paper proposes an innovative and feasible model to support clinical nursing research which would be relevant to a range of service areas.---------- Conclusion: Nurse researchers are able to play a crucial role in advancing nursing knowledge and facilitating evidence-based practice, especially when placed to support a specialised team of nurses at a service level. This role can be implemented through appropriate planning of the position, building a support system and incorporating an evaluation plan.

Relevance:

30.00%

Publisher:

Abstract:

Previous investigations have shown that the modal strain energy correlation (MSEC) method can successfully identify damage in truss bridge structures. However, it has to incorporate a sensitivity matrix to estimate damage and is not reliable in certain damage detection cases. This paper presents an improved MSEC method in which the predicted modal strain energy change vector is instead obtained by running the eigensolutions on-line within the optimisation iterations. The trial damage treatment group that maximises the fitness function, to a value close to unity, is identified as the detected damage location. This improvement is then compared with the original MSEC method, along with other typical correlation-based methods, on the finite element model of a simple truss bridge. The contribution of each considered mode to damage detection accuracy is also weighed and discussed. The iterative search process is performed using a genetic algorithm. The results demonstrate that the improved MSEC method meets the demands of detecting damage in truss bridge structures, even when noisy measurements are considered.
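The sketch below illustrates the core correlation idea on a small spring-mass chain standing in for the truss finite element model: for each trial damage case the eigenproblem is re-solved and the predicted modal strain energy change vector is correlated with the "measured" one, with the highest fitness flagging the damage location. The structure, damage level and use of an exhaustive search (in place of the paper's genetic algorithm) are simplifying assumptions.

```python
# Simplified sketch of the modal-strain-energy-correlation idea on a fixed-free
# spring-mass chain with unit masses. For each trial damage case the eigenproblem
# is re-solved "on-line" and the predicted modal strain energy change vector is
# correlated with the "measured" one; the trial with fitness closest to unity is
# flagged as the damage location. Exhaustive search replaces the paper's GA here.
import numpy as np

def chain_stiffness(stiffness):
    """Assemble the stiffness matrix of a fixed-free spring-mass chain."""
    n = len(stiffness)
    K = np.zeros((n, n))
    for i, k in enumerate(stiffness):
        K[i, i] += k
        if i > 0:
            K[i - 1, i - 1] += k
            K[i - 1, i] -= k
            K[i, i - 1] -= k
    return K

def modal_strain_energy(stiffness, n_modes=3):
    """Element-wise modal strain energy of the first n_modes modes (stacked)."""
    K = chain_stiffness(stiffness)
    _, phi = np.linalg.eigh(K)                  # unit masses -> ordinary eigenproblem
    mse = np.zeros((n_modes, len(stiffness)))
    for m in range(n_modes):
        disp = np.concatenate([[0.0], phi[:, m]])   # prepend the fixed support
        for e, k in enumerate(stiffness):
            mse[m, e] = 0.5 * k * (disp[e + 1] - disp[e]) ** 2
    return mse.ravel()

n_elem = 8
healthy = np.full(n_elem, 1000.0)
damaged = healthy.copy()
damaged[4] *= 0.7                               # "true" damage: 30% loss in element 4

measured_change = modal_strain_energy(damaged) - modal_strain_energy(healthy)

best = None
for trial in range(n_elem):                     # each trial damage treatment
    trial_stiff = healthy.copy()
    trial_stiff[trial] *= 0.7
    predicted = modal_strain_energy(trial_stiff) - modal_strain_energy(healthy)
    fitness = np.dot(predicted, measured_change) / (
        np.linalg.norm(predicted) * np.linalg.norm(measured_change))
    if best is None or fitness > best[1]:
        best = (trial, fitness)

print(f"Detected damage at element {best[0]} (fitness {best[1]:.3f})")
```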

Relevance:

30.00%

Publisher:

Abstract:

Standardization is critical to scientists and regulators to ensure the quality and interoperability of research processes, as well as the safety and efficacy of the attendant research products. This is perhaps most evident in the case of “omics science,” which is enabled by a host of diverse high-throughput technologies such as genomics, proteomics, and metabolomics. But standards are of interest to (and shaped by) others far beyond the immediate realm of individual scientists, laboratories, scientific consortia, or governments that develop, apply, and regulate them. Indeed, scientific standards have consequences for the social, ethical, and legal environment in which innovative technologies are regulated, and thereby command the attention of policy makers and citizens. This article argues that standardization of omics science is both technical and social. A critical synthesis of the social science literature indicates that: (1) standardization requires a degree of flexibility to be practical at the level of scientific practice in disparate sites; (2) the manner in which standards are created, and by whom, will impact their perceived legitimacy and therefore their potential to be used; and (3) the process of standardization itself is important to establishing the legitimacy of an area of scientific research.

Relevance:

30.00%

Publisher:

Abstract:

Creating sustainable urban environments is a challenging issue that requires a clear vision and implementation strategies involving changes in governmental values and in the decision-making processes of local governments. In particular, internalisation of the environmental externalities of daily urban activities (e.g. manufacturing and transportation) is immensely important, and local policies are formulated to provide better living conditions for the people inhabiting urban areas. Even if environmental problems are defined succinctly by various stakeholders, the complicated nature of sustainability issues demands a structured evaluation strategy and well-defined sustainability parameters for efficient and effective policy making. Following this reasoning, this study assesses the sustainability performance of urban settings, focusing mainly on environmental problems caused by rapid urban expansion and transformation. By taking into account the land-use and transportation interaction, it seeks to reveal how future urban developments would alter people's daily urban travel behaviour and affect the urban and natural environments. The paper introduces a grid-based indexing method developed for this research and trialled as a GIS-based decision support tool to analyse and model selected spatial and aspatial indicators of sustainability in the Gold Coast. This process reveals site-specific relationships among the selected indicators, which are used to evaluate the index-based performance characteristics of the area. The evaluation is made through an embedded decision support module by assigning relative weights to the indicators. The resolution of the selected grid-based unit of analysis provides insights into the service levels of projected urban development proposals at a disaggregate level, such as accessibility to transportation and urban services, and pollution. The paper concludes by discussing the findings, including the capacity of the decision support system to assist decision-makers in identifying problematic areas and developing intervention policies for sustainable outcomes of future developments.
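A minimal sketch of the index-based evaluation step is shown below: each grid cell carries several indicator scores that are normalised and combined with relative weights into a single performance index. The indicators, values and weights are illustrative assumptions, not the Gold Coast data or the study's actual indicator set.

```python
# Minimal sketch of a grid-based composite indexing step: indicator scores per
# grid cell are min-max normalised and combined with relative weights assigned
# through the decision-support module. All names and numbers are hypothetical.
import numpy as np

# Rows = grid cells, columns = indicators
# [accessibility to public transport, access to urban services, air pollution load]
indicators = np.array([
    [0.80, 0.60, 0.30],
    [0.40, 0.20, 0.70],
    [0.65, 0.90, 0.20],
    [0.10, 0.30, 0.90],
])
benefit = np.array([True, True, False])     # pollution is a "cost" indicator
weights = np.array([0.4, 0.3, 0.3])         # relative weights assigned by analysts

# Min-max normalise each column; invert cost indicators so higher is better
norm = (indicators - indicators.min(axis=0)) / (indicators.max(axis=0) - indicators.min(axis=0))
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

index = norm @ weights
for cell, score in enumerate(index):
    print(f"grid cell {cell}: sustainability index = {score:.2f}")
```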

Relevance:

30.00%

Publisher:

Abstract:

Historically, distance education consisted of a combination of face-to-face blocks of time and surface-mailed packages. However, advances in information technology literacy and the abundance of personal computers have placed e-learning in increased demand. The authors describe the planning, implementation, and evaluation of the blending of e-learning with face-to-face education in the postgraduate nursing forum. Experiences of this particular student group are also discussed.

Relevance:

30.00%

Publisher:

Abstract:

The analysis of investment in electric power has been the subject of intensive research for many years. The efficient generation and distribution of electrical energy is a difficult task involving the operation of a complex network of facilities, often located over very large geographical regions. Electric power utilities have made use of an enormous range of mathematical models. Some models address time spans which last for a fraction of a second, such as those that deal with lightning strikes on transmission lines, while at the other end of the scale there are models which address time horizons of ten or twenty years; these usually involve long-range planning issues. This thesis addresses the optimal long-term capacity expansion of an interconnected power system. The aim of this study has been to derive a new long-term planning model which recognises the regional differences that exist in energy demand and that are present in the construction and operation of power plant and transmission line equipment. Perhaps the most innovative feature of the new model is the direct inclusion of regional energy demand curves in nonlinear form. This results in a nonlinear capacity expansion model. After a review of the relevant literature, the thesis first develops a model for the optimal operation of a power grid. This model directly incorporates the regional demand curves. The model is a nonlinear programming problem containing both integer and continuous variables. A solution algorithm is developed which is based upon a resource decomposition scheme that separates the integer variables from the continuous ones. The decomposition of the operating problem leads to an iterative scheme which employs a mixed integer programming problem, known as the master, to generate trial operating configurations. The optimum operating conditions of each trial configuration are found using a smooth nonlinear programming model. The dual vector recovered from this model is subsequently used by the master to generate the next trial configuration. The solution algorithm progresses until lower and upper bounds converge. A range of numerical experiments is conducted and these experiments are included in the discussion. Using the operating model as a basis, a regional capacity expansion model is then developed. It determines the type, location and capacity of the additional power plants and transmission lines required to meet predicted electricity demands. A generalised resource decomposition scheme, similar to that used to solve the operating problem, is employed. The solution algorithm is used to solve a range of test problems and the results of these numerical experiments are reported. Finally, the expansion problem is applied to the Queensland electricity grid in Australia.
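The full decomposition requires MIP and NLP solvers, but the distinguishing feature described above (the direct inclusion of regional demand curves in nonlinear form) can be sketched with the small operating model below, which maximises total surplus, the area under two hypothetical regional inverse demand curves minus quadratic generation costs, subject to generation and transmission limits. All coefficients are illustrative assumptions; the master MIP and the bounding loop are not shown.

```python
# Hedged sketch of a two-region nonlinear dispatch with demand curves in the
# objective. Coefficients are hypothetical, not the Queensland system data.
import numpy as np
from scipy.optimize import minimize

# Inverse demand p_r(q) = a_r - b_r * q and quadratic generation costs (assumed)
a = np.array([100.0, 80.0])
b = np.array([0.5, 0.4])
c = np.array([20.0, 30.0])
d = np.array([0.10, 0.05])
g_max = np.array([120.0, 150.0])
f_max = 40.0                                # transmission capacity between the regions

def negative_surplus(x):
    g1, g2, f = x                           # f > 0 means flow from region 1 to region 2
    q = np.array([g1 - f, g2 + f])          # regional consumption
    gross_benefit = np.sum(a * q - 0.5 * b * q ** 2)   # area under the demand curves
    cost = np.sum(c * np.array([g1, g2]) + 0.5 * d * np.array([g1, g2]) ** 2)
    return -(gross_benefit - cost)

x0 = np.array([50.0, 50.0, 0.0])
bounds = [(0.0, g_max[0]), (0.0, g_max[1]), (-f_max, f_max)]
constraints = [{"type": "ineq", "fun": lambda x: x[0] - x[2]},   # q1 >= 0
               {"type": "ineq", "fun": lambda x: x[1] + x[2]}]   # q2 >= 0
res = minimize(negative_surplus, x0, bounds=bounds,
               constraints=constraints, method="SLSQP")

g1, g2, f = res.x
print(f"generation: {g1:.1f}, {g2:.1f}; interregional flow: {f:.1f}; surplus: {-res.fun:.1f}")
```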

Relevance:

30.00%

Publisher:

Abstract:

Service bundling can be regarded as an option for service providers to strengthen their competitive advantage and to cope with dynamic market conditions and heterogeneous consumer demand. Despite these positive effects, actual guidance for the identification of service bundles, and for the act of bundling itself, remains a gap. To fill this gap, previous research has conceptualized a service bundling method that relies on a structured service description. This method supports reasoning about the suitability of services to be part of a bundle by analyzing the existing relationships between services captured in a description language. This paper extends that research by presenting an initial set of empirically derived relationships between services in existing bundles that can subsequently be utilized to identify potential new bundles. Additionally, a gap analysis points out to what extent prominent ontologies and service description languages accommodate the identified relationships.
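As a purely hypothetical illustration of how captured relationships between services might be used to propose bundle candidates, the sketch below stores pairwise relationship types and selects pairs linked by a bundling-relevant relationship. The service names and relationship types are invented for illustration and are not the paper's codebook or description language.

```python
# Illustrative sketch only: services and pairwise relationships (as a description
# language might capture them) used to propose candidate bundles.
from itertools import combinations

services = ["lawn_mowing", "garden_waste_removal", "pool_cleaning", "tax_return"]

# (service_a, service_b) -> set of relationship types between them (hypothetical)
relationships = {
    ("lawn_mowing", "garden_waste_removal"): {"complements", "shared_consumer_segment"},
    ("lawn_mowing", "pool_cleaning"): {"shared_consumer_segment"},
    ("pool_cleaning", "tax_return"): set(),
}

BUNDLING_RELATIONS = {"complements", "shared_consumer_segment"}

def bundle_candidates(services, relationships):
    """Return service pairs linked by at least one bundling-relevant relationship."""
    candidates = []
    for pair in combinations(services, 2):
        rels = relationships.get(pair, set()) | relationships.get(pair[::-1], set())
        if rels & BUNDLING_RELATIONS:
            candidates.append((pair, sorted(rels & BUNDLING_RELATIONS)))
    return candidates

for (s1, s2), rels in bundle_candidates(services, relationships):
    print(f"candidate bundle: {s1} + {s2}  (via {', '.join(rels)})")
```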

Relevance:

30.00%

Publisher:

Abstract:

Providing water infrastructure in times of accelerating climate change presents interesting new problems. Expanding demands must be met or managed in contexts of increasingly constrained sources of supply, raising ethical questions of equity and participation. The loss of agricultural land and natural habitats, the coastal impacts of desalination plants and concerns over the re-use of waste water must be weighed alongside demand management issues of water rationing, pricing mechanisms and inducing behaviour change. This case study examines how these factors impact on infrastructure planning in South East Queensland, Australia: a region with one of the developed world's most rapidly growing populations, which has recently experienced the most severe drought in its recorded history. Proposals to match forecast demands and potential supplies for water over a 20-year period are reviewed by applying ethical principles to evaluate practical plans to meet the water needs of the region's activities and settlements.

Relevance:

30.00%

Publisher:

Abstract:

This work seeks to fill some of the gaps in the economics and behavioural economics literature pertaining to the decision making of individuals under extreme environmental situations (life and death events). These essays examine the sinkings of the R.M.S. Titanic, on 14 April 1912, and the R.M.S. Lusitania, on 7 May 1915, using econometric (multivariate) analysis techniques. The results show that, even under extreme life and death conditions, social norms matter and are reflected in the survival probabilities of individuals on board the Titanic. However, results from the comparative analysis of the Titanic and Lusitania show that social norms take time to organise and become effective. In the presence of such time constraints, the traditional “homo economicus” model of individual behaviour becomes evident in a survival-of-the-fittest competition.
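A hedged sketch of the kind of multivariate technique this describes is given below: a logit model of survival on passenger characteristics, fitted to synthetic data. The covariates, coefficients and data are illustrative assumptions only, not the Titanic or Lusitania passenger records or the essays' estimates.

```python
# Hedged sketch: logit model of survival on passenger characteristics.
# The data are simulated for illustration ("women and children first" style effects
# are assumed) and do not reproduce the essays' results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1200
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "child": rng.integers(0, 2, n),
    "first_class": rng.integers(0, 2, n),
})
logit_p = -1.2 + 1.8 * df["female"] + 1.0 * df["child"] + 0.8 * df["first_class"]
df["survived"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("survived ~ female + child + first_class", data=df).fit(disp=False)
print(model.params)            # log-odds effects
print(np.exp(model.params))    # odds ratios
```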

Relevance:

30.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One key limitation of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on the rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stably operating system. The new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration), and a detection threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the change. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
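The sketch below illustrates the whiteness idea behind the Kalman Innovation Detector in a deliberately simplified form: a fixed AR(2) model of a healthy mode plays the role of the Kalman filter, its inverse-filter output stands in for the innovation, and normalised periodogram ordinates are compared against a chi-squared threshold to flag and locate a change. The sampling rate, modal frequencies, damping ratios and alarm level are illustrative assumptions, not the thesis's models or data.

```python
# Simplified KID-style sketch: when the healthy model is valid the "innovation"
# is white and its normalised periodogram ordinates are approximately chi-squared
# with 2 degrees of freedom; a spectral peak above a chi-squared threshold flags
# a change and its frequency. An AR(2) whitening filter stands in for the Kalman
# filter; all parameters are illustrative.
import numpy as np
from scipy.signal import lfilter, periodogram
from scipy.stats import chi2

fs = 10.0                                   # Hz, assumed monitoring sample rate
rng = np.random.default_rng(7)

def ar2_coeffs(freq_hz, damping_ratio, fs):
    """AR(2) denominator for a decaying sinusoid (mode) at freq_hz."""
    wn = 2 * np.pi * freq_hz
    wd = wn * np.sqrt(1 - damping_ratio ** 2)
    r = np.exp(-damping_ratio * wn / fs)
    return np.array([1.0, -2 * r * np.cos(wd / fs), r ** 2])

a_healthy = ar2_coeffs(0.70, 0.10, fs)      # well-damped mode (assumed healthy model)
a_changed = ar2_coeffs(0.55, 0.01, fs)      # deteriorated: lower damping, shifted frequency

# "Measured" response of the changed system to random load disturbances
y = lfilter([1.0], a_changed, rng.normal(0, 1, 6000))

# Whitening with the healthy model plays the role of the Kalman innovation
innovation = lfilter(a_healthy, [1.0], y)

freqs, pxx = periodogram(innovation, fs=fs)
normalised = 2 * pxx[1:] / np.mean(pxx[1:])              # ~ chi-squared(2) if white
threshold = chi2.ppf(1 - 0.01 / len(normalised), df=2)   # Bonferroni-style alarm level

alarms = freqs[1:][normalised > threshold]
if alarms.size:
    print(f"alarm: spectral peaks near {alarms.min():.2f}-{alarms.max():.2f} Hz")
else:
    print("no change detected")
```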