685 results for physical asset specificity
at Queensland University of Technology - ePrints Archive
Abstract:
Sales growth and employment growth are the two most widely used growth indicators for new ventures; yet, sales growth and employment growth are not interchangeable measures of new venture growth. Rather, they are related, but somewhat independent constructs that respond differently to a variety of criteria. Most of the literature treats this as a methodological technicality. However, sales growth with or without accompanying employment growth has very different implications for managers and policy makers. A better understanding of what drives these different growth metrics has the potential to lead to better decision making. To improve that understanding we apply transaction cost economics reasoning to predict when sales growth will be or will not be accompanied by employment growth. Our results indicate that our predictions are borne out consistently in resource-constrained contexts but not in resource-munificent contexts.
Abstract:
New venture growth is a central topic in entrepreneurship research. Although sales growth is emerging as the most commonly used measure of growth for emerging ventures, employment growth has also been used frequently. However, empirical research demonstrates that there are only very low to moderately sized correlations between the two (Delmar et al., 2003; Weinzimmer et al., 1998). In addition, sales growth and employment growth respond differently to a wide variety of criteria (Baum et al., 2001; Delmar et al., 2003). In this study we use transaction cost economics (Williamson, 1996) as a theoretical base to examine transaction cost influences on the addition of new employees as emerging ventures experience sales growth. We theorize that transaction cost economics variables will moderate the relationship between sales growth and employment growth. We develop and test hypotheses related to asset specificity, behavioral uncertainty, and the influence of resource munificence on the strength of the sales growth/employment growth relationship. Asset specificity is theorized to be a positive moderator of the relationship between sales growth and employment growth. When the behavioral uncertainty associated with adding new employees is greater than that of outsourcing or subcontracting, it is hypothesized to be a negative moderator of the sales growth/employment growth relationship. We also hypothesize that resource scarcity will strengthen those relationships.
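As a rough illustration only (not the authors' actual model), the moderation argument above could be expressed as interaction terms in a growth regression; the data file, variable names and controls below are invented for the sketch.

```python
# Hypothetical sketch of the moderation test described above: employment
# growth regressed on sales growth, with asset specificity and behavioural
# uncertainty as moderators. All column names and the data file are invented.
import pandas as pd
import statsmodels.formula.api as smf

ventures = pd.read_csv("ventures.csv")  # hypothetical sample of new ventures

model = smf.ols(
    "emp_growth ~ sales_growth * asset_specificity"
    " + sales_growth * behavioural_uncertainty"
    " + firm_age + C(industry)",
    data=ventures,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
# Under these assumptions, a positive sales_growth:asset_specificity
# coefficient and a negative sales_growth:behavioural_uncertainty
# coefficient would point in the directions hypothesised above.
```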
Abstract:
The concept of asset management is not a new idea but an evolving one that has been attracting the attention of many organisations operating and/or owning some kind of infrastructure assets. The term asset management has been used widely, with fundamental differences in interpretation and usage. Regardless of the context in which the term is used, asset management implies the process of optimising return by scrutinising performance and making key strategic decisions throughout all phases of an asset's lifecycle (Sarfi and Tao, 2004). Hence, asset management is a philosophy and discipline through which organisations are enabled to deploy their resources more effectively to provide higher levels of customer service and reliability while balancing financial objectives. In Australia, asset management made its way into public works in 1993 when the Australian Accounting Standards Board issued Australian Accounting Standard 27 (AAS27). AAS27 required government agencies to capitalise and depreciate assets rather than expense them against earnings. This development indirectly forced organisations managing infrastructure assets to consider the useful life and cost effectiveness of asset investments. The Australian State Treasuries and the Australian National Audit Office were the first organisations to formalise the concepts and principles of asset management in Australia, defining asset management as “a systematic, structured process covering the whole life of an asset” (Australian National Audit Office, 1996). This initiative led other government bodies and industry sectors to develop, refine and apply the concept of asset management in the management of their respective infrastructure assets. Hence, it can be argued that asset management emerged as a separate and recognised field of management during the late 1990s. In comparison to other disciplines such as construction, facilities, maintenance, project management, economics and finance, to name a few, asset management is a relatively new discipline and is clearly a contemporary topic. The primary contributors to the literature in asset management are largely government organisations and industry practitioners, and these contributions take the form of guidelines and reports on asset management best practice. More recently, some of these best practices have been formalised into standards, such as PAS 55 (IAM, 2004; IAM, 2008b) in the UK. As such, current literature in this field tends to lack well-grounded theories. To date, while the field has been receiving more interest and attention from empirical researchers, its advancement, particularly in terms of the volume of academic and theoretical development, is at best moderate. A plausible reason for the lack of advancement is that many researchers and practitioners are still unaware of, or unimpressed by, the contribution that asset management can make to the performance of infrastructure assets. This paper seeks to explore the practices of organisations that manage infrastructure assets in order to develop a framework of strategic infrastructure asset management processes. It begins by examining the development of asset management. This is followed by a discussion of the method adopted for this paper, and then by the results of the case studies. It first describes the goals of infrastructure asset management and how they can support broader business goals.
Following this, a set of core processes that can support the achievement of business goals are provided. These core processes are synthesised based on the practices of asset managers in the case study organisations.
Abstract:
The upstream oil & gas industry has been contending with massive data sets and monolithic files for many years, but “Big Data”—that is, the ability to apply more sophisticated types of analytical tools to information in a way that extracts new insights or creates new forms of value—is a relatively new concept that has the potential to significantly re-shape the industry. Despite the impressive amount of value that is being realized by Big Data technologies in other parts of the marketplace, however, much of the data collected within the oil & gas sector tends to be discarded, ignored, or analyzed in a very cursory way. This paper examines existing data management practices in the upstream oil & gas industry, and compares them to practices and philosophies that have emerged in organizations that are leading the Big Data revolution. The comparison shows that, in companies that are leading the Big Data revolution, data is regarded as a valuable asset. The presented evidence also shows, however, that this is usually not true within the oil & gas industry insofar as data is frequently regarded there as descriptive information about a physical asset rather than something that is valuable in and of itself. The paper then discusses how upstream oil & gas companies could potentially extract more value from data, and concludes with a series of specific technical and management-related recommendations to this end.
Abstract:
The upstream oil and gas industry has been contending with massive data sets and monolithic files for many years, but “Big Data” is a relatively new concept that has the potential to significantly re-shape the industry. Despite the impressive amount of value that is being realized by Big Data technologies in other parts of the marketplace, however, much of the data collected within the oil and gas sector tends to be discarded, ignored, or analyzed in a very cursory way. This viewpoint examines existing data management practices in the upstream oil and gas industry, and compares them to practices and philosophies that have emerged in organizations that are leading the way in Big Data. The comparison shows that, in companies that are widely considered to be leaders in Big Data analytics, data is regarded as a valuable asset—but this is usually not true within the oil and gas industry insofar as data is frequently regarded there as descriptive information about a physical asset rather than something that is valuable in and of itself. The paper then discusses how the industry could potentially extract more value from data, and concludes with a series of policy-related questions to this end.
Abstract:
The following paper considers the question of where office property is heading. In doing so, it focuses, in the first instance, on identifying and describing a selection of key forces for change present within the contemporary operating environment in which office property functions. Given the increasingly complex, dynamic and multi-faceted character of this environment, the paper seeks to identify only the primary forces for change, within the context of the future of office property. These core drivers of change have, for the purposes of this discussion, been characterised as including a range of economic, demographic and socio-cultural factors, together with developments in information and communication technology. Having established this foundation, the paper proceeds to consider the manner in which these forces may, in the future, be manifested within the office property market. Comment is offered regarding the potential future implications of these forces for change, together with their likely influence on the nature and management of the physical asset itself. Whilst no explicit time horizon has been envisioned in the preparation of this paper, particular attention has been accorded to short to medium-term trends, that is, those likely to emerge in the office property marketplace over the coming two decades. Further, the paper considers the question posed, in respect of the future of office property, in the context of developed western nations. The degree of commonality seen in these mature markets is such that generalisations may more appropriately and robustly be applied. Whilst some of the comments offered with respect to the target market may find application in other arenas, it is beyond the scope of this paper to explicitly consider highly heterogeneous markets. Given also the wide scope of this paper, key drivers for change and their likely implications for the commercial office property market are identified at a global level (within the above-established parameters). Accordingly, the focus is necessarily such that it serves to reflect overarching directions at a universal level (with the effect being that direct applicability to individual markets, when viewed in isolation on a geographic or property-type-specific basis, may not be fitting in all instances).
Abstract:
The benefits of using eXtensible Business Reporting Language (XBRL) as a business reporting standard have been widely canvassed in the extant literature, in particular as the enabling technology for standard business reporting tools. One of the key benefits noted is the ability of standard business reporting to create significant efficiencies in the regulatory reporting process. Efficiency-driven cost reductions are highly desirable to data and report producers. However, they may not have the same potential to create long-term firm value as improved effectiveness of decision making. This study assesses the perceptions of Australian business stakeholders in relation to the benefits of the Australian standard business reporting instantiation (SBR) for financial reporting. These perceptions were drawn from interviews with persons knowledgeable in XBRL-based standard business reporting and from submissions to Treasury regarding SBR reporting options. The combination of interviews and submissions permits insights into the views of various groups of stakeholders in relation to the potential benefits. In line with predictions based on a transaction-cost economics perspective, interviewees who primarily came from a data and report-producer background mentioned benefits that centre largely on asset specificity and efficiency. The interviewees who principally came from a data and report-consumer background mentioned benefits that centre on reducing decision-making uncertainty and decision-making effectiveness. The data and report consumers also took a broader view of the benefits of SBR to the financial reporting supply chain. Our research suggests that advocates of SBR have successfully promoted its efficiency benefits to potential users. However, the effectiveness benefits of SBR, for example the decision-making benefits offered to investors via standardised reports, while becoming more broadly acknowledged, are still not a priority for all stakeholders.
Abstract:
This document provides a review of international and national practices in investment decision-support tools in road asset management. Efforts were concentrated on identifying the analytic frameworks, evaluation methodologies and criteria adopted by current tools. Emphasis was also given to how current approaches support Triple Bottom Line decision-making. Benefit Cost Analysis and Multiple Criteria Analysis are the principal methodologies supporting decision-making in Road Asset Management. The complexity of the applications shows significant differences in international practices. There is continuing discussion amongst practitioners and researchers regarding which one is more appropriate in supporting decision-making. It is suggested that the two approaches should be regarded as complementary rather than competitive means. Multiple Criteria Analysis may be particularly helpful in the early stages of project development, such as strategic planning. Benefit Cost Analysis is used most widely for project prioritisation and for selecting the final project from amongst a set of alternatives. The Benefit Cost Analysis approach is a useful tool for investment decision-making from an economic perspective. An extension of the approach, which includes social and environmental externalities, is currently used to support Triple Bottom Line decision-making in the road sector. However, attention should be given to several issues in its application. First of all, there is a need to reach a degree of commonality in considering social and environmental externalities, which may be achieved by aggregating the best practices. At different decision-making levels, the detail of consideration of the externalities should differ. It is intended to develop a generic framework to coordinate the range of existing practices. The standard framework will also be helpful in reducing double counting, which appears in some current practices. Caution is also needed regarding the methods of determining the value of social and environmental externalities. A number of methods, such as market prices, resource costs and Willingness to Pay, are found in the review. The use of unreasonable monetisation methods in some cases has discredited Benefit Cost Analysis in the eyes of decision makers and the public. Some social externalities, such as employment and regional economic impacts, are generally omitted in current practices. This is due to the lack of information and credible models. It may be appropriate to consider these externalities in qualitative form in a Multiple Criteria Analysis. Consensus has been reached on considering noise and air pollution in international practices; however, Australian practices have generally omitted these externalities. Equity is an important consideration in Road Asset Management. The considerations are either between regions or between social groups, defined by income, age, gender, disability, etc. In current practice, there is not a well-developed quantitative measure for equity issues. More research is needed to address this issue. Although Multiple Criteria Analysis has been used for decades, there is not a generally accepted framework for the choice of modelling methods and the treatment of the various externalities. The result is that different analysts are unlikely to reach consistent conclusions about a policy measure. In current practices, some favour using methods which are able to prioritise alternatives, such as Goal Programming, the Goal Achievement Matrix and the Analytic Hierarchy Process.
Others simply present the various impacts to decision-makers to characterise the projects. Weighting and scoring systems are critical in most Multiple Criteria Analysis applications. However, the processes of assessing weights and scores have been criticised as highly arbitrary and subjective, so it is essential that the process be as transparent as possible. Obtaining weights and scores by consulting local communities is a common practice, but is likely to result in bias towards local interests. An interactive approach has the advantage of helping decision-makers elaborate their preferences; however, the computational burden may cause decision-makers to lose interest during the solution process of a large-scale problem, say a large state road network. Current practices tend to use cardinal or ordinal scales to measure non-monetised externalities. Distorted valuations can occur where variables measured in physical units are converted to scales. For example, if decibels of noise are converted to a scale of -4 to +4 with a linear transformation, the difference between 3 and 4 may represent a far greater increase in discomfort to people than the increase from 0 to 1. It is therefore suggested that different weights be assigned to individual scores. Due to overlapping goals, the problem of double counting also appears in some Multiple Criteria Analysis applications. The situation can be improved by carefully selecting and defining investment goals and criteria. Other issues, such as the treatment of time effects and the incorporation of risk and uncertainty, have been given scant attention in current practices. This report suggests establishing a common analytic framework to deal with these issues.
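To make the distinction between the two methodologies concrete, the toy sketch below contrasts a discounted benefit-cost ratio with a weighted-score Multiple Criteria Analysis; all projects, criteria, weights and cash flows are invented and are not drawn from the review.

```python
# Toy illustration of the two approaches discussed above: a simple
# benefit-cost ratio for project prioritisation, and a weighted-score
# Multiple Criteria Analysis. All figures and weights are invented.

def benefit_cost_ratio(benefits, costs, discount_rate=0.07):
    """Discount annual benefit and cost streams and return the BCR."""
    pv_b = sum(b / (1 + discount_rate) ** t for t, b in enumerate(benefits, start=1))
    pv_c = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs, start=1))
    return pv_b / pv_c

def weighted_score(scores, weights):
    """Aggregate normalised criterion scores (0-1) using criterion weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical road upgrade options scored against hypothetical criteria
projects = {
    "Option A": {"safety": 0.8, "travel_time": 0.6, "noise": 0.4, "equity": 0.7},
    "Option B": {"safety": 0.5, "travel_time": 0.9, "noise": 0.6, "equity": 0.3},
}
weights = {"safety": 0.4, "travel_time": 0.3, "noise": 0.2, "equity": 0.1}

for name, scores in projects.items():
    print(name, round(weighted_score(scores, weights), 3))

# Invented benefit and cost streams (e.g. $m per year over three years)
print("BCR:", round(benefit_cost_ratio([30, 30, 30], [60, 5, 5]), 2))
```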
Abstract:
Historically, asset management focused primarily on the reliability and maintainability of assets; organisations have since accepted the notion that a much larger array of processes governs the life and use of an asset. Accordingly, asset management's new paradigm seeks a holistic, multi-disciplinary approach to the management of physical assets. A growing number of organisations now seek to develop integrated asset management frameworks and bodies of knowledge. This research seeks to complement the existing outputs of these organisations through the development of an asset management ontology. Ontologies define a common vocabulary for both researchers and practitioners who need to share information in a chosen domain. A by-product of ontology development is the realisation of a process architecture, of which there is also no evidence in the published literature. To develop the ontology and the subsequent asset management process architecture, a standard knowledge-engineering methodology is followed. This involves text analysis, definition and classification of terms, and visualisation through an appropriate tool (in this case, the Protégé application was used). The result of this research is the first attempt at developing an asset management ontology and process architecture.
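Purely as an invented illustration of what a small fragment of such an ontology might look like in code (the study itself used the Protégé application; the classes and properties below are not taken from its ontology), expressed with the owlready2 Python library:

```python
# Invented fragment of an asset management ontology. Class and property
# names are illustrative only and do not reflect the study's ontology.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/asset-management.owl")

with onto:
    class Asset(Thing): pass
    class PhysicalAsset(Asset): pass
    class Process(Thing): pass
    class MaintenancePlanning(Process): pass
    class LifecyclePhase(Thing): pass

    class governs(ObjectProperty):
        # a process governs one or more physical assets
        domain = [Process]
        range = [PhysicalAsset]

    class occursIn(ObjectProperty):
        # a process occurs in a lifecycle phase
        domain = [Process]
        range = [LifecyclePhase]

onto.save(file="asset_management.owl", format="rdfxml")
```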
Abstract:
Physical infrastructure assets are important components of our society and our economy. They are usually designed to last for many years, are expected to be heavily used during their lifetime, carry considerable load, and are exposed to the natural environment. They are also normally major structures, and therefore present a heavy investment, requiring constant management over their life cycle to ensure that they perform as required by their owners and users. Given a complex and varied infrastructure life cycle, constraints on available resources, and continuing requirements for effectiveness and efficiency, good management of infrastructure is important. While there is often no one best management approach, the choice of options is improved by better identification and analysis of the issues, by the ability to prioritise objectives, and by a scientific approach to the analysis process. The abilities to better understand the effect of inputs in the infrastructure life cycle on results, to minimise uncertainty, and to better evaluate the effect of decisions in a complex environment, are important in allocating scarce resources and making sound decisions. Through the development of an infrastructure management modelling and analysis methodology, this thesis provides a process that assists the infrastructure manager in the analysis, prioritisation and decision-making process. This is achieved through the use of practical, relatively simple tools, integrated in a modular, flexible framework that aims to provide an understanding of the interactions and issues in the infrastructure management process. The methodology uses a combination of flowcharting and analysis techniques. It first charts the infrastructure management process and its underlying infrastructure life cycle through the time interaction diagram, a graphical flowcharting methodology that is an extension of methodologies for modelling data flows in information systems. This process divides the infrastructure management process over time into self-contained modules that are based on a particular set of activities, with the information flows between them defined by their interfaces and relationships. The modular approach also permits more detailed analysis, or aggregation, as the case may be. It also forms the basis of extending the infrastructure modelling and analysis process to infrastructure networks, through using individual infrastructure assets and their related projects as the basis of the network analysis process. It is recognised that the infrastructure manager is required to meet, and balance, a number of different objectives, and therefore a number of high-level outcome goals for the infrastructure management process have been developed, based on common purpose or measurement scales. These goals form the basis of classifying the larger set of multiple objectives for analysis purposes. A two-stage approach that rationalises then weights objectives, using a paired comparison process, ensures that the objectives required to be met are both kept to the minimum number required and are fairly weighted. Qualitative variables are incorporated into the weighting and scoring process, with utility functions proposed where there is risk or a trade-off situation applies. Variability is considered important in the infrastructure life cycle, the approach used being based on analytical principles but incorporating randomness in variables where required.
The modular design of the process permits alternative processes to be used within particular modules, if this is considered a more appropriate way of analysis, provided boundary conditions and requirements for linkages to other modules are met. Development and use of the methodology has highlighted a number of infrastructure life cycle issues, including data and information aspects and the consequences of change over the life cycle, as well as variability and the other matters discussed above. It has also highlighted the requirement to use judgment where required, and for organisations that own and manage infrastructure to retain intellectual knowledge regarding that infrastructure. It is considered that the methodology discussed in this thesis, which to the author's knowledge has not been developed elsewhere, may be used for the analysis of alternatives, planning, the prioritisation of a number of projects, and the identification of the principal issues in the infrastructure life cycle.
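The paired-comparison weighting step described above might, in spirit, resemble the following sketch; the objectives, judgements and the geometric-mean approximation are illustrative assumptions rather than the thesis's actual procedure.

```python
# Rough sketch of paired-comparison weighting of objectives. Each cell of
# the matrix says how many times more important the row objective is judged
# to be than the column objective; all values are invented.
import numpy as np

objectives = ["cost", "safety", "service_level", "environment"]
comparison = np.array([
    [1.0, 1/3, 1/2, 2.0],
    [3.0, 1.0, 2.0, 4.0],
    [2.0, 1/2, 1.0, 3.0],
    [1/2, 1/4, 1/3, 1.0],
])

# Geometric-mean approximation of the principal eigenvector, normalised
# so that the resulting weights sum to one.
geo_means = comparison.prod(axis=1) ** (1.0 / len(objectives))
weights = geo_means / geo_means.sum()

for name, w in zip(objectives, weights):
    print(f"{name}: {w:.3f}")
```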
Abstract:
The United States Supreme Court has handed down a once in a generation patent law decision that will have important ramifications for the patentability of non-physical methods, both internationally and in Australia. In Bilski v Kappos, the Supreme Court considered whether an invention must either be tied to a machine or apparatus, or transform an article into a different state or thing to be patentable. It also considered for the first time whether business methods are patentable subject matter. The decision will be of particular interest to practitioners who followed the litigation in Grant v Commissioner of Patents, a Federal Court decision in which a Brisbane-based inventor was denied a patent over a method of protecting an asset from the claims of creditors.
Abstract:
The demands and responsibilities placed on schools in contemporary education systems are vast. However, with growing obesity levels and physical inactivity, chronic disease prevention has focused on youth populations, with schools serving as the focal educative asset in this strategy. Parents play a decisive role in their child's educational setting and, as fee and tax payers, are ultimately consumers. Parents (82 males and 208 females) of secondary school children were recruited from three private (n=151) and two government schools (n=150) in Brisbane, Australia. The mean (standard deviation) age was 44.57 (6.21) years. Participants responded to a series of questions about physical activity at their child's school, in addition to completing the International Physical Activity Questionnaire. Data were analysed using descriptive statistics, frequency distributions and logistic regressions. Parents were deemed sufficiently physically active if they participated in at least 150 minutes of moderate-to-vigorous physical activity per week. Overall, 83 (59.7%) parents from private and 60 (50.8%) parents from government schools were deemed sufficiently physically active. Concerning whether physical activity promotion should be a priority at their child's school, 111 (73.5%) parents from private schools either agreed or strongly agreed, as opposed to 97 (64.7%) parents from government schools. Logistic regressions indicated that agreement that physical activity promotion should be prioritised at schools depended on whether the child attended a private school (OR = 1.34, z = 2.30, p = 0.02) and on whether the participant was sufficiently active (OR = 0.71, z = -2.48, p = 0.01). Physical activity promotion within schools may provide substantial future benefits on a population scale. The demands on schools may need to be addressed to meet the needs of students and the desires of their parents.
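A hedged sketch of the kind of logistic regression reported above; the survey file, variable names and covariates are hypothetical, not the study's data.

```python
# Hypothetical re-creation of the analysis described above: whether a parent
# agrees that physical activity promotion should be a school priority,
# modelled on school type and the parent's own activity level.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

parents = pd.read_csv("parent_survey.csv")  # hypothetical survey extract

model = smf.logit(
    "agrees_priority ~ private_school + sufficiently_active + age + female",
    data=parents,
).fit()

odds_ratios = np.exp(model.params)  # exponentiated coefficients give ORs
print(pd.DataFrame({"OR": odds_ratios, "p": model.pvalues}).round(3))
```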
Abstract:
Background: The purposes of this study were 1) to establish accelerometer count cutoffs to categorize the activity intensity of 3- to 5-y-old children and 2) to evaluate the accelerometer as a measure of children's physical activity in preschool settings. Methods: While wearing an ActiGraph accelerometer, 16 preschool children performed five 3-min structured activities. Receiver Operating Characteristic (ROC) curve analyses identified count cutoffs for four physical activity intensities. In 9 preschools, 281 children wore an ActiGraph during observations performed by three trained observers (interobserver reliability = 0.91 to 0.98). Results: Separate count cutoffs for 3-, 4-, and 5-y-olds were established. Sensitivity and specificity for the count cutoffs ranged from 86.7% to 100.0% and 66.7% to 100.0%, respectively. ActiGraph counts/15 s differed among all activities (P < 0.05) except the two sitting activities. Correlations between observed and ActiGraph intensity categorizations at the preschools ranged from 0.46 to 0.70 (P < 0.001). Conclusions: The ActiGraph count cutoffs established and validated in this study can be used to objectively categorize the time that preschool-age children spend at different physical activity intensity levels.
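For illustration, a ROC-based count cutoff of the kind described above could be derived along the following lines; the counts, labels and the Youden-index selection rule are assumptions for the sketch, not the study's exact procedure.

```python
# Sketch of choosing an activity-count cutoff from a ROC curve.
# The observed labels and count values below are invented.
import numpy as np
from sklearn.metrics import roc_curve

# 1 = observed moderate-to-vigorous activity, 0 = lighter activity
observed_mvpa = np.array([0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0])
counts_per_15s = np.array([40, 120, 300, 900, 150, 1100, 750, 980, 210, 860, 1300, 95])

fpr, tpr, thresholds = roc_curve(observed_mvpa, counts_per_15s)
youden = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
best = youden.argmax()

print(f"candidate cutoff: {thresholds[best]:.0f} counts/15 s")
print(f"sensitivity: {tpr[best]:.2f}, specificity: {1 - fpr[best]:.2f}")
```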
Abstract:
We report a tunable alternating current electrohydrodynamic (ac-EHD) force which drives lateral fluid motion within a few nanometers of an electrode surface. Because the magnitude of this fluid shear force can be tuned externally (e.g., via the application of an ac electric field), it provides a new capability to physically displace weakly (nonspecifically) bound cellular analytes. To demonstrate the utility of the tunable nanoshearing phenomenon, we present data on purpose-built microfluidic devices that employ ac-EHD force to remove nonspecific adsorption of molecular and cellular species. Here, we show that an ac-EHD device containing asymmetric planar and microtip electrode pairs resulted in a 4-fold reduction in nonspecific adsorption of blood cells and also captured breast cancer cells in blood, with high efficiency (approximately 87%) and specificity. We therefore feel that this new capability of externally tuning and manipulating fluid flow could have wide applications as an innovative approach to enhance the specific capture of rare cells such as cancer cells in blood.
Abstract:
OBJECTIVE: Public health organizations recommend that preschool-aged children accumulate at least 3 h of physical activity (PA) daily. Objective monitoring using pedometers offers an opportunity to measure preschoolers' PA and assess compliance with this recommendation. The purpose of this study was to derive step-based recommendations consistent with the 3 h PA recommendation for preschool-aged children. METHOD: The study sample comprised 916 preschool-aged children, aged 3 to 6 years (mean age = 5.0 ± 0.8 years). Children were recruited from kindergartens located in Portugal between 2009 and 2013. Children wore an ActiGraph GT1M accelerometer that measured PA intensity and steps per day simultaneously over a 7-day monitoring period. Receiver operating characteristic (ROC) curve analysis was used to identify the daily step count threshold associated with meeting the daily 3-hour PA recommendation. RESULTS: A significant correlation was observed between minutes of total PA and steps per day (r = 0.76, p < 0.001). The optimal step count for ≥3 h of total PA was 9099 steps per day (sensitivity 90%, specificity 66%), with an area under the ROC curve of 0.86 (95% CI: 0.84 to 0.88). CONCLUSION: Preschool-aged children who accumulate fewer than 9000 steps per day may be considered insufficiently active.
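As a minimal sketch, applying the roughly 9000 steps/day threshold reported above to classify children might look like this; the step counts and observed labels below are invented.

```python
# Illustration of applying the reported step threshold: children accumulating
# fewer steps per day than the threshold are flagged as insufficiently active.
import numpy as np

STEP_THRESHOLD = 9099          # steps/day associated with >= 3 h of total PA (as reported)
daily_steps = np.array([4200, 11350, 8800, 9600, 13400, 7100])   # invented
meets_3h_pa = np.array([0, 1, 0, 1, 1, 0])                       # invented observations

predicted_active = daily_steps >= STEP_THRESHOLD

sensitivity = (predicted_active & (meets_3h_pa == 1)).sum() / (meets_3h_pa == 1).sum()
specificity = (~predicted_active & (meets_3h_pa == 0)).sum() / (meets_3h_pa == 0).sum()

print(predicted_active)
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```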