244 results for regression discrete models
Abstract:
Background: SEQ Catchments Ltd and QUT are collaborating on groundwater investigations in the SE Qld region, which utilise community engagement and 3D visualisation methodologies. The projects, which have been funded by the Australian Government’s NHT and Caring for our Country programmes, were initiated in response to local community concerns regarding groundwater sustainability and quality in areas where little was previously known. ----- Objectives: Engage local and regional stakeholders to tap all available sources of information; establish on-going (2 years +) community-based groundwater / surface water monitoring programmes; develop 3D visualisation from all available data; and involve, train and inform the local community for improved on-ground land and water use management. ----- Results and findings: Respectful community engagement yielded information, access to numerous monitoring sites and education opportunities at low cost, which would otherwise be unavailable. A Framework for Community-Based Groundwater Monitoring has been documented (Todd, 2008). 3D visualisation models have been developed for basaltic settings, which relate surface features familiar to the local community with the interpreted sub-surface hydrogeology. Groundwater surface movements have been animated and compared to local rainfall using the time-series monitoring data. An important 3D visualisation feature of particular interest to the community was the interaction between groundwater and surface water. This factor was crucial in raising awareness of potential impacts of land and water use on groundwater and surface water resources.
Abstract:
Studies have examined the associations between cancers and circulating 25-hydroxyvitamin D [25(OH)D], but little is known about the impact of different laboratory practices on 25(OH)D concentrations. We examined the potential impact of delayed blood centrifuging, choice of collection tube, and type of assay on 25(OH)D concentrations. Blood samples from 20 healthy volunteers underwent alternative laboratory procedures: four centrifuging times (2, 24, 72, and 96 h after blood draw); three types of collection tubes (red top serum tube, two different plasma anticoagulant tubes containing heparin or EDTA); and two types of assays (DiaSorin radioimmunoassay [RIA] and chemiluminescence immunoassay [CLIA/LIAISON®]). Log-transformed 25(OH)D concentrations were analyzed using generalized estimating equation (GEE) linear regression models. We found no difference in 25(OH)D concentrations by centrifuging times or type of assay. There was some indication of a difference in 25(OH)D concentrations by tube type in CLIA/LIAISON®-assayed samples, with concentrations in heparinized plasma (geometric mean, 16.1 ng/ml) higher than those in serum (geometric mean, 15.3 ng/ml) (p = 0.01), but the difference was significant only after substantial centrifuging delays (96 h). Our study suggests that immediate processing of blood samples after collection is not necessary, nor is a particular tube type or assay required.
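As a rough illustration of the analysis described above, the sketch below fits a GEE linear regression to log-transformed 25(OH)D values with an exchangeable within-subject correlation structure, using statsmodels. The data frame, column names and effect sizes are invented for the example; they are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical design: 20 subjects, 3 tube types x 4 centrifuging delays each.
subject = np.repeat(np.arange(20), 12)
tube = np.tile(np.repeat(["serum", "heparin", "edta"], 4), 20)
delay_h = np.tile([2, 24, 72, 96], 60)
log_25ohd = (np.log(16.0)
             + 0.05 * (tube == "heparin")          # invented tube effect
             + rng.normal(0.0, 0.1, size=240))
df = pd.DataFrame({"subject": subject, "tube": tube,
                   "delay_h": delay_h, "log_25ohd": log_25ohd})

# GEE linear model on log concentrations; repeated measures per subject are
# handled with an exchangeable working correlation structure.
model = smf.gee("log_25ohd ~ C(tube) + C(delay_h)", groups="subject",
                data=df, cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```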
Abstract:
This article presents a survey of authorisation models and considers their ‘fitness-for-purpose’ in facilitating information sharing. Network-supported information sharing is an important technical capability that underpins collaboration in support of dynamic and unpredictable activities such as emergency response, national security, infrastructure protection, supply chain integration and emerging business models based on the concept of a ‘virtual organisation’. The article argues that present authorisation models are inflexible and poorly scalable in such dynamic environments due to their assumption that the future needs of the system can be predicted, which in turn justifies the use of persistent authorisation policies. The article outlines the motivation and requirement for a new flexible authorisation model that addresses the needs of information sharing. It proposes that a flexible and scalable authorisation model must allow an explicit specification of the objectives of the system, and that access decisions must be made through a late trade-off analysis between these explicit objectives. A research agenda for the proposed Objective-based Access Control concept is presented.
Abstract:
Games and related virtual environments have been a much-hyped area of the entertainment industry. The classic quote is that games are now approaching the size of Hollywood box office sales [1]. Books are now appearing that talk up the influence of games on business [2], and gaming is one of the key drivers of present hardware development. Some of this 3D technology is now embedded right down at the operating system level via the Windows Presentation Foundation – hit Windows/Tab on your Vista box to find out... In addition to this continued growth in the area of games, there are a number of factors that impact its development in the business community. Firstly, the average age of gamers is approaching the mid thirties. Therefore, a number of people who are in management positions in large enterprises are experienced in using 3D entertainment environments. Secondly, due to the demand for more computational power in both CPUs and Graphical Processing Units (GPUs), the average desktop, and indeed any decent laptop, can run a game or virtual environment. In fact, the demonstrations at the end of this paper were developed at the Queensland University of Technology (QUT) on a standard Software Operating Environment, with an Intel Dual Core CPU and a basic Intel graphics option. What this means is that the potential exists for the easy uptake of such technology because: (1) a broad range of workers is regularly exposed to 3D virtual environment software via games; and (2) present desktop computing power is now strong enough to roll out a virtual environment solution across an entire enterprise. We believe such visual simulation environments can have a great impact in the area of business process modeling. Accordingly, in this article we outline the communication capabilities of such environments, which offer fantastic possibilities for business process modeling applications, where enterprises need to create, manage, and improve their business processes, and then communicate their processes to stakeholders, both process and non-process cognizant. The article then concludes with a demonstration of the work we are doing in this area at QUT.
Abstract:
Purpose: To investigate the impact of glaucomatous visual impairment on postural sway and falls among older adults. Methods: The sample comprised 72 community-dwelling older adults with open-angle glaucoma, aged 74.0 ± 5.8 years (range 62 to 90 years). Measures of visual function included binocular visual acuity (high-contrast), binocular contrast sensitivity (Pelli-Robson) and binocular visual fields (merged monocular HFA 24-2 SITA-Std). Postural stability was assessed under four conditions: eyes open and closed, on a firm and on a foam surface. Falls were monitored for six months with prospective falls diaries. Regression models, adjusting for age and gender, examined the association between vision measures and postural stability (linear regression) and the number of falls (negative binomial regression). Results: Greater visual field loss was significantly associated with poorer postural stability with eyes open, both on firm (r = 0.34, p < 0.01) and foam (r = 0.45, p < 0.001) surfaces. Eighteen (25 per cent) participants experienced at least one fall: 12 (17 per cent) participants fell only once and six (eight per cent) participants fell two or more times (up to five falls). Visual field loss was significantly associated with falling; the rate of falls approximately doubled for every 10 dB reduction in field sensitivity (rate ratio per dB = 1.08, 95% CI = 1.02–1.13). Importantly, in a model comprising upper and lower field sensitivity, only lower field loss was significantly associated with the number of falls (rate ratio = 1.17, 95% CI = 1.04–1.33). Conclusions: Binocular visual field loss was significantly associated with postural instability and falls among older adults with glaucoma. These findings provide valuable directions for developing falls risk assessment and falls prevention strategies for this population.
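A minimal sketch of the falls analysis described above, using a negative binomial regression in statsmodels. The simulated data, variable names and coefficients are hypothetical; the example only illustrates how a per-10 dB rate ratio can be derived from a fitted per-dB coefficient.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 72

# Hypothetical data standing in for the study variables.
df = pd.DataFrame({
    "age": rng.normal(74, 6, n),
    "female": rng.integers(0, 2, n),
    "field_loss_db": rng.uniform(0, 25, n),   # binocular field loss (dB)
})
df["falls"] = rng.poisson(np.exp(-2.0 + 0.07 * df["field_loss_db"]))

# Negative binomial regression of fall counts, adjusted for age and gender.
res = smf.negativebinomial("falls ~ field_loss_db + age + female", data=df).fit()
print(np.exp(res.params["field_loss_db"]))        # rate ratio per dB
print(np.exp(10 * res.params["field_loss_db"]))   # rate ratio per 10 dB
```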
Abstract:
The performance of iris recognition systems is significantly affected by segmentation accuracy, especially for non-ideal iris images. This paper proposes an improved method to localise non-circular irises quickly and accurately. Shrinking and expanding active contour methods are combined when localising the inner and outer iris boundaries. First, the pupil region is roughly estimated based on histogram thresholding and morphological operations. Thereafter, a shrinking active contour model is used to precisely locate the inner iris boundary. Finally, the estimated inner iris boundary is used as an initial contour for an expanding active contour scheme to find the outer iris boundary. The proposed scheme is robust in finding the exact iris boundaries of non-circular and off-angle irises. In addition, occlusions of the iris images from eyelids and eyelashes are automatically excluded from the detected iris region. Experimental results on the CASIA v3.0 iris databases indicate the accuracy of the proposed technique.
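The pipeline sketched in this abstract (histogram thresholding and morphology for a rough pupil estimate, then active contours for the precise boundaries) can be approximated with off-the-shelf scikit-image tools. The sketch below is not the authors' algorithm: scikit-image's snake has no balloon force, so both boundaries are found by initialising a circle outside the expected edge and letting it contract, and all radii and parameters are guesses.

```python
import numpy as np
from skimage import filters, morphology, segmentation

def rough_pupil(gray):
    """Rough pupil estimate via histogram thresholding + morphology."""
    mask = gray < 0.5 * filters.threshold_otsu(gray)   # pupil: darkest compact region
    mask = morphology.binary_opening(mask, morphology.disk(5))
    rows, cols = np.nonzero(mask)
    radius = np.sqrt(mask.sum() / np.pi)
    return rows.mean(), cols.mean(), radius

def fit_circle_snake(gray, centre_r, centre_c, init_radius):
    """Contract a circular snake onto the nearest strong image edge."""
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([centre_r + init_radius * np.sin(theta),
                            centre_c + init_radius * np.cos(theta)])
    return segmentation.active_contour(filters.gaussian(gray, 3), init,
                                       alpha=0.015, beta=10, gamma=0.001)

# Example usage on a grayscale eye image `eye` (float array in [0, 1]):
# r, c, pupil_r = rough_pupil(eye)
# inner = fit_circle_snake(eye, r, c, 1.3 * pupil_r)   # settles on the pupil edge
# outer = fit_circle_snake(eye, r, c, 4.0 * pupil_r)   # settles on the limbus
```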
Abstract:
The term structure of interest rates is often summarized using a handful of yield factors that capture shifts in the shape of the yield curve. In this paper, we develop a comprehensive model for volatility dynamics in the level, slope, and curvature of the yield curve that simultaneously includes level and GARCH effects along with regime shifts. We show that the level of the short rate is useful in modeling the volatility of the three yield factors and that there are significant GARCH effects present even after including a level effect. Further, we find that allowing for regime shifts in the factor volatilities dramatically improves the model’s fit and strengthens the level effect. We also show that a regime-switching model with level and GARCH effects provides the best out-of-sample forecasting performance of yield volatility. We argue that the auxiliary models often used to estimate term structure models with simulation-based estimation techniques should be consistent with the main features of the yield curve that are identified by our model.
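As a toy illustration of the regime-switching component of the volatility model described above, the snippet below fits a two-regime Markov-switching model with regime-dependent variance to simulated changes in a yield factor using statsmodels. The data are synthetic, and the paper's full specification (short-rate level effects plus GARCH terms within each regime) would require a custom likelihood on top of this.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic weekly changes in a yield factor, drawn from a calm regime and an
# occasional volatile regime (purely illustrative).
volatile = rng.random(500) < 0.2
dy = rng.normal(0.0, np.where(volatile, 0.30, 0.05))

# Two-regime model with a switching intercept and switching variance.
mod = sm.tsa.MarkovRegression(dy, k_regimes=2, trend="c",
                              switching_variance=True)
res = mod.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:10, 1])  # Pr(volatile regime)
```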
Abstract:
This paper first presents an extended ambiguity resolution model that deals with an ill-posed problem and constraints among the estimated parameters. In the extended model, a regularization criterion is used instead of traditional least squares in order to better estimate the float ambiguities. The existing models can be derived from the general model. Second, the paper examines existing ambiguity searching methods from four aspects: exclusion of nuisance integer candidates based on the available integer constraints; integer rounding; integer bootstrapping; and integer least squares estimation. Finally, the paper systematically addresses the similarities and differences between the generalized TCAR and decorrelation methods from both theoretical and practical aspects.
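To make the contrast between the searching strategies concrete, here is a small numerical sketch of two of them: plain integer rounding and a brute-force integer least-squares search over a neighbourhood of the rounded solution. The float ambiguities and covariance matrix are made up, and a real implementation would use LAMBDA-style decorrelation and a proper search rather than this exhaustive scan.

```python
import itertools
import numpy as np

def integer_rounding(a_float):
    # Simplest fix: round each float ambiguity independently.
    return np.rint(a_float).astype(int)

def integer_least_squares(a_float, Q, width=2):
    # Brute-force ILS over a small grid around the rounded solution:
    # minimise (a - z)^T Q^{-1} (a - z). Fine for a handful of ambiguities.
    Qinv = np.linalg.inv(Q)
    base = np.rint(a_float).astype(int)
    best, best_cost = None, np.inf
    for offset in itertools.product(range(-width, width + 1), repeat=len(a_float)):
        z = base + np.array(offset)
        r = a_float - z
        cost = r @ Qinv @ r
        if cost < best_cost:
            best, best_cost = z, cost
    return best

a_hat = np.array([3.42, -1.57, 7.11])        # hypothetical float ambiguities
Q = np.array([[0.09, 0.05, 0.02],
              [0.05, 0.08, 0.03],
              [0.02, 0.03, 0.10]])           # hypothetical covariance matrix
print(integer_rounding(a_hat), integer_least_squares(a_hat, Q))
```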
Abstract:
In this paper, the problems of three-carrier phase ambiguity resolution (TCAR) and position estimation (PE) are generalized as real-time GNSS data processing problems for a large-scale, continuously observing network. To describe these problems, a general linear equation system is presented to unify the various geometry-free, geometry-based and geometry-constrained TCAR models, along with the state transition equations between observation times. With this general formulation, generalized TCAR solutions are given to cover different real-time GNSS data processing scenarios, together with various simplified integer solutions, such as geometry-free rounding and geometry-based LAMBDA solutions with single- and multiple-epoch measurements. In fact, the various ambiguity resolution (AR) solutions differ in their float ambiguity estimation and integer ambiguity search processes, but they remain theoretically equivalent under the same observational models and statistical assumptions. TCAR performance benefits reported in data analyses in the recent literature are reviewed, showing profound implications for future GNSS development from both technology and application perspectives.
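As one concrete example of a geometry-free combination of the kind used in rounding-based solutions, the sketch below forms the classic Melbourne-Wübbena wide-lane combination from dual-frequency phase and code measurements and rounds the resulting float ambiguity. The measurement values are invented, and in practice the float ambiguity would be averaged over many epochs before rounding.

```python
import numpy as np

C = 299_792_458.0
F1, F2 = 1_575.42e6, 1_227.60e6          # GPS L1 / L2 frequencies (Hz)
LAM_WL = C / (F1 - F2)                    # wide-lane wavelength, ~0.862 m

def widelane_ambiguity(L1, L2, P1, P2):
    """Geometry-free wide-lane float ambiguity from the Melbourne-Wubbena
    combination (carrier phases L1, L2 and pseudoranges P1, P2 in metres)."""
    phi_wl = (F1 * L1 - F2 * L2) / (F1 - F2)      # wide-lane phase (m)
    p_nl = (F1 * P1 + F2 * P2) / (F1 + F2)        # narrow-lane code (m)
    return (phi_wl - p_nl) / LAM_WL

# Hypothetical single-epoch measurements (metres).
n_wl = widelane_ambiguity(L1=21_000_000.123, L2=21_000_000.876,
                          P1=21_000_000.456, P2=21_000_000.321)
print(n_wl, round(n_wl))
```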
Abstract:
The focus of this thesis is discretionary work effort, that is, work effort that is voluntary, is above and beyond what is minimally required or normally expected to avoid reprimand or dismissal, and is organisationally functional. Discretionary work effort is an important construct because it is known to affect individual performance as well as organisational efficiency and effectiveness. To optimise organisational performance and ensure their long-term competitiveness and sustainability, firms need to be able to induce their employees to work at or near their peak level. To work at or near their peak level, individuals must be willing to supply discretionary work effort. Thus, managers need to understand the determinants of discretionary work effort. Nonetheless, despite many years of scholarly investigation across multiple disciplines, considerable debate still exists concerning why some individuals supply only minimal work effort whilst others expend effort well above and beyond what is minimally required of them (i.e. they supply discretionary work effort). Even though it is well recognised that discretionary work effort is important for promoting organisational performance and effectiveness, many authors claim that too little is being done by managers to increase the discretionary work effort of their employees. In this research, I have adopted a multi-disciplinary approach towards investigating the role of monetary and non-monetary work environment characteristics in determining discretionary work effort. My central research questions were "What non-monetary work environment characteristics do employees perceive as perks (perquisites) and irks (irksome work environment characteristics)?" and "How do perks, irks and monetary rewards relate to an employee's level of discretionary work effort?" My research took a unique approach in addressing these research questions. By bringing together the economics and organisational behaviour (OB) literatures, I identified problems with the current definition and conceptualisations of the discretionary work effort construct. I then developed and empirically tested a more concise and theoretically-based definition and conceptualisation of this construct. In doing so, I disaggregated discretionary work effort to include three facets - time, intensity and direction - and empirically assessed if different classes of work environment characteristics have a differential pattern of relationships with these facets. This analysis involved a new application of a multi-disciplinary framework of human behaviour as a tool for classifying work environment characteristics and the facets of discretionary work effort. To test my model of discretionary work effort, I used a public sector context in which there has been limited systematic empirical research into work motivation. The program of research undertaken involved three separate but interrelated studies using mixed methods. Data on perks, irks, monetary rewards and discretionary work effort were gathered from employees in 12 organisations in the local government sector in Western Australia. Non-monetary work environment characteristics that should be associated with discretionary work effort were initially identified through a review of the literature. Then, a qualitative study explored what work behaviours public sector employees perceive as discretionary and what perks and irks were associated with high and low levels of discretionary work effort.
Next, a quantitative study developed measures of these perks and irks. A Q-sort-type procedure and exploratory factor analysis were used to develop the perks and irks measures. Finally, a second quantitative study tested the relationships amongst perks, irks, monetary rewards and discretionary work effort. Confirmatory factor analysis was first used to confirm the factor structure of the measurement models. Correlation analysis, regression analysis and effect-size correlation analysis were used to test the hypothesised relationships in the proposed model of discretionary work effort. The findings confirmed five hypothesised non-monetary work environment characteristics as common perks and two of three hypothesised non-monetary work environment characteristics as common irks. Importantly, they showed that perks, irks and monetary rewards are differentially related to the different facets of discretionary work effort. The convergent and discriminant validities of the perks and irks constructs as well as the time, intensity and direction facets of discretionary work effort were generally confirmed by the research findings. This research advances the literature in several ways: (i) it draws on the economics and OB literatures to redefine and reconceptualise the discretionary work effort construct to provide greater definitional clarity and a more complete conceptualisation of this important construct; (ii) it builds on prior research to create a more comprehensive set of perks and irks for which measures are developed; (iii) it develops and empirically tests a new motivational model of discretionary work effort that enhances our understanding of the nature and functioning of perks and irks and advances our ability to predict discretionary work effort; and (iv) it fills a substantial gap in the literature on public sector work motivation by revealing what work behaviours public sector employees perceive as discretionary and what work environment characteristics are associated with their supply of discretionary work effort. Importantly, by disaggregating discretionary work effort this research provides greater detail on how perks, irks and monetary rewards are related to the different facets of discretionary work effort. Thus, from a theoretical perspective this research also demonstrates the conceptual meaningfulness and empirical utility of investigating the different facets of discretionary work effort separately. From a practical perspective, identifying work environment factors that are associated with discretionary work effort enhances managers' capacity to tap this valuable resource. This research indicates that to maximise the potential of their human resources, managers need to address perks, irks and monetary rewards. It suggests three different mechanisms through which managers might influence discretionary work effort and points to the importance of training for both managers and non-managers in cultivating positive interpersonal relationships.
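A compact sketch of the kind of exploratory factor analysis used to develop the perks and irks measures, using scikit-learn; the survey responses, number of items and number of factors here are entirely hypothetical.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)

# Hypothetical survey: 300 employees rating 12 candidate perk/irk items (1-5).
X = rng.integers(1, 6, size=(300, 12)).astype(float)

# Three-factor solution with varimax rotation; the loadings suggest which items
# cluster together into candidate perk/irk scales.
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
print(np.round(fa.components_.T, 2))   # rows = items, columns = factors
```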
Abstract:
The main objective of this PhD was to further develop Bayesian spatio-temporal models (specifically the Conditional Autoregressive (CAR) class of models), for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analyzing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and health service planning requirements. Data from a large probabilistically-linked database from 1990 to 2004, consisting of fields from two separate registries: the Birth Defect Registry (BDR) and Midwives Data Collection (MDC) were used in the analyses in this thesis. The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix will affect the smoothing properties of the CAR model, and this is the focus of chapter 6. Secondly, I hoped to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model in terms of modeling a sparse outcome, and this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes designed to select individual level data for a hybrid ecological spatial model, and this is done in chapter 8. Finally, I wanted to put together the earlier improvements to the CAR model, and along with demographic projections, provide forecasts for birth defects at the SLA level. Chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices, and showed how smoothing the relative risk estimates according to similarity by an important covariate (i.e. maternal age) helped improve the model’s ability to recover the underlying risk, as compared to the traditional adjacency (specifically the Queen) method of applying weights. Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared a few models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared component model to improve on the estimation of sparse counts through borrowing strength across a shared component (e.g. latent risk factor/s) with the referent outcome (caesarean section was used in this example). Using the Deviance Information Criteria (DIC), I showed how the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes, as well as accommodating both individual and areal measures of covariates. Using the Average Mean Squared Error (AMSE), I showed how a simple random sample of 20% of the SLAs, followed by selecting all cases in the SLAs chosen, along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. 
women) forecasts, to provide 30-year annual estimates of birth defects at the Statistical Local Area (SLA) level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projection was also undertaken. By the end of the thesis, I will show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed, by specifically formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), incorporating a ZIP component to model excess zeros in outcomes and borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy to sample individual-level data and sample size considerations for rare disease will also be presented. Finally, projections in birth defect categories at the SLA level will be made.
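To illustrate the zero-inflated Poisson idea used for the sparse birth-defect counts, here is a maximum-likelihood ZIP regression fitted with statsmodels on simulated area-level data. The covariate, expected counts and zero-inflation fraction are invented, and the thesis itself embeds the ZIP component in a Bayesian CAR / shared-component model rather than in this simple frequentist fit.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(4)
n_areas = 200

# Hypothetical area-level data: expected counts and one covariate.
expected = rng.uniform(0.5, 5.0, n_areas)            # expected counts per area
maternal_age = rng.normal(30.0, 4.0, n_areas)
lam = expected * np.exp(0.03 * (maternal_age - 30.0))
counts = np.where(rng.random(n_areas) < 0.3, 0, rng.poisson(lam))  # excess zeros

X = sm.add_constant(maternal_age)
# ZIP regression with an offset for the expected counts and a constant-only
# inflation model.
zip_model = ZeroInflatedPoisson(counts, X,
                                exog_infl=np.ones((n_areas, 1)),
                                offset=np.log(expected))
print(zip_model.fit(maxiter=200).summary())
```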
Abstract:
Recent years have seen an increased uptake of business process management technology in industry. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories. For example, in some cases new process models may be derived from existing models, so finding and adapting these models may be more effective than developing them from scratch. As process model repositories may be large, query evaluation may be time consuming. Hence, we investigate the use of indexes to speed up this evaluation process. Experiments are conducted to demonstrate that our proposal achieves a significant reduction in query evaluation time.
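The indexing idea can be illustrated with a toy label-based inverted index over a process model repository: each task label maps to the set of models containing it, so a query for models containing particular tasks intersects a few small sets instead of scanning every model. The repository contents below are invented, and the paper's actual index structures and query language are more sophisticated than this.

```python
from collections import defaultdict

# Toy repository: model id -> task labels it contains (hypothetical content).
repository = {
    "claim_handling_v1": ["register claim", "assess damage", "pay claim"],
    "claim_handling_v2": ["register claim", "assess damage", "reject claim"],
    "order_to_cash": ["receive order", "ship goods", "send invoice"],
}

# Build the inverted index once.
index = defaultdict(set)
for model_id, labels in repository.items():
    for label in labels:
        index[label].add(model_id)

def query(required_labels):
    """Return the models containing all required task labels."""
    sets = [index.get(label, set()) for label in required_labels]
    return set.intersection(*sets) if sets else set()

print(query(["register claim", "assess damage"]))
```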
Abstract:
An Approach with Vertical Guidance (APV) is an instrument approach procedure which provides horizontal and vertical guidance to a pilot on approach to landing in reduced visibility conditions. APV approaches can greatly reduce the safety risk to general aviation by improving the pilot’s situational awareness. In particular, the incidence of Controlled Flight Into Terrain (CFIT), which has occurred in a number of fatal air crashes in general aviation in Australia over the past decade, can be reduced. APV approaches can also improve general aviation operations. If implemented at Australian airports, APV approach procedures are expected to bring a cost saving of millions of dollars to the economy due to fewer missed approaches, diversions and an increased safety benefit. The provision of accurate horizontal and vertical guidance is achievable using the Global Positioning System (GPS). Because aviation is a safety-of-life application, an aviation-certified GPS receiver must have integrity monitoring or augmentation to ensure that its navigation solution can be trusted. However, the difficulty of the current GPS satellite constellation alone meeting APV integrity requirements, the susceptibility of GPS to jamming or interference, and the potential shortcomings of proposed augmentation solutions for Australia, such as the Ground-based Regional Augmentation System (GRAS), justify the investigation of Aircraft-Based Augmentation Systems (ABAS) as an alternative integrity solution for general aviation. ABAS augments GPS with other sensors at the aircraft to help it meet the integrity requirements. Typical ABAS designs assume high-quality inertial sensors to provide an accurate reference trajectory for Kalman filters. Unfortunately, high-quality inertial sensors are too expensive for general aviation. In contrast to these approaches, the purpose of this research is to investigate fusing GPS with lower-cost Micro-Electro-Mechanical System (MEMS) Inertial Measurement Units (IMU) and a mathematical model of aircraft dynamics, referred to as an Aircraft Dynamic Model (ADM) in this thesis. The use of a model of aircraft dynamics in navigation systems has been studied before in the literature and shown to be useful, particularly for aiding inertial coasting or attitude determination. In contrast to these applications, this thesis investigates its use in ABAS. This thesis presents an ABAS architecture concept which makes use of a MEMS IMU and ADM, named the General Aviation GPS Integrity System (GAGIS) for convenience. GAGIS includes a GPS, MEMS IMU, ADM, a bank of Extended Kalman Filters (EKF) and uses the Normalized Solution Separation (NSS) method for fault detection. The GPS, IMU and ADM information is fused together in a tightly-coupled configuration, with frequent GPS updates applied to correct the IMU and ADM. The use of both IMU and ADM allows for a number of different possible configurations. Three are investigated in this thesis: a GPS-IMU EKF, a GPS-ADM EKF and a GPS-IMU-ADM EKF. The integrity monitoring performance of the GPS-IMU EKF, GPS-ADM EKF and GPS-IMU-ADM EKF architectures is compared against each other and against a stand-alone GPS architecture in a series of computer simulation tests of an APV approach. Typical GPS, IMU, ADM and environmental errors are simulated. The simulation results show the GPS integrity monitoring performance achievable by augmenting GPS with an ADM and low-cost IMU for a general aviation aircraft on an APV approach.
A contribution to research is made in determining whether a low-cost IMU or ADM can provide improved integrity monitoring performance over stand-alone GPS. It is found that a reduction of approximately 50% in protection levels is possible using the GPS-IMU EKF or GPS-ADM EKF as well as faster detection of a slowly growing ramp fault on a GPS pseudorange measurement. A second contribution is made in determining how augmenting GPS with an ADM compares to using a low-cost IMU. By comparing the results for the GPS-ADM EKF against the GPS-IMU EKF it is found that protection levels for the GPS-ADM EKF were only approximately 2% higher. This indicates that the GPS-ADM EKF may potentially replace the GPS-IMU EKF for integrity monitoring should the IMU ever fail. In this way the ADM may contribute to the navigation system robustness and redundancy. To investigate this further, a third contribution is made in determining whether or not the ADM can function as an IMU replacement to improve navigation system redundancy by investigating the case of three IMU accelerometers failing. It is found that the failed IMU measurements may be supplemented by the ADM and adequate integrity monitoring performance achieved. Besides treating the IMU and ADM separately as in the GPS-IMU EKF and GPS-ADM EKF, a fourth contribution is made in investigating the possibility of fusing the IMU and ADM information together to achieve greater performance than either alone. This is investigated using the GPS-IMU-ADM EKF. It is found that the GPS-IMU-ADM EKF can achieve protection levels approximately 3% lower in the horizontal and 6% lower in the vertical than a GPS-IMU EKF. However this small improvement may not justify the complexity of fusing the IMU with an ADM in practical systems. Affordable ABAS in general aviation may enhance existing GPS-only fault detection solutions or help overcome any outages in augmentation systems such as the Ground-based Regional Augmentation System (GRAS). Countries such as Australia which currently do not have an augmentation solution for general aviation could especially benefit from the economic savings and safety benefits of satellite navigation-based APV approaches.
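The fusion architecture described above can be sketched, in a very simplified form, as a standard extended Kalman filter predict/update cycle plus a solution-separation test between a full filter and a sub-filter. The functions below are generic numpy implementations of those steps, not the GAGIS design itself; the state layout, process/measurement models and noise matrices are left to the caller.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate the state with the IMU- or ADM-driven process model f,
    its Jacobian F, and process noise Q."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Tightly coupled update: z holds GPS pseudoranges, h(x) the predicted
    ranges from the current state, H the measurement Jacobian, R the noise."""
    innovation = z - h(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ innovation
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def solution_separation_stat(x_full, P_full, x_sub, P_sub):
    """Simplified solution-separation statistic between the full-filter state
    and a sub-filter state that excludes one measurement source."""
    d = x_sub - x_full
    dP = P_sub - P_full
    return float(d @ np.linalg.pinv(dP) @ d)
```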
Abstract:
Principal Topic: In this study we investigate how strategic orientation moderates the impact of growth on profitability for a sample of Danish high-growth (Gazelle) firms. ---------- Firm growth has been an essential part of both management research and entrepreneurship research for decades (e.g. Penrose 1959, Birch 1987, Storey 1994). From a societal point of view, firm growth has been perceived as an economic generator and job creator. In entrepreneurship research, growth has been an important part of the field (Davidsson, Delmar and Wiklund 2006), and many have used growth as a measure of success. In strategic management, growth has been seen as an approach to achieving competitive advantages and a way of becoming increasingly profitable (e.g. Russo and Fouts 1997, Cho and Pucic 2005). However, although firm growth used to be perceived as a natural pathway to profitability, more skepticism has recently emerged due to both new theoretical developments and new empirical insights. Empirically, studies show inconsistent and inconclusive evidence regarding the impact of growth on profitability. Our review reveals that some studies find a substantial positive relationship, some find a weak positive relationship, some find no relationship and some find a negative relationship. Overall, two dominant yet divergent theoretical positions can be identified. The first position, mainly focusing on environmental fit, argues that firms are likely to become more profitable if they enter a market quickly and on a larger scale, due to first-mover advantages and economies of scale. The second position, mainly focusing on internal fit, argues that growth may lead to a range of internal challenges and difficulties, including rapid change in structure, reward systems, decision making, communication and management style. The inconsistent empirical results, together with the two divergent theoretical positions, call for further investigation into the circumstances under which growth does and does not generate profitability. In this project, we investigate how strategic orientations influence the impact of growth on profitability by asking the following research question: How is the impact of growth on profitability moderated by strategic orientation? Based on a literature review of how growth impacts profitability in areas such as entrepreneurship, strategic management and strategic entrepreneurship, we develop three hypotheses regarding the growth-profitability relationship and strategic orientation as a potential moderator. ---------- Methodology/Key Propositions: The three hypotheses are tested on data collected in 2008. All firms in Denmark, including all listed and non-listed (VAT-registered) firms, that experienced 100% growth and had positive sales or gross profit over a four-year period (2004-2007) were surveyed. In total, 2,475 firms fulfilled the requirements. Among those, 1,107 firms returned usable questionnaires, giving us a response rate of 45%. The financial data, together with data on the number of employees, were obtained from D&B (previously Dun & Bradstreet). The remaining data were obtained through the survey. Hierarchical regression models with ROA (return on assets) as the dependent variable were used to test the hypotheses. In the first model, control variables including region, industry, firm age, CEO age, CEO gender, CEO education and number of employees were entered.
In the second model, growth, measured as growth in employees, was entered. Strategic orientation (differentiation, cost leadership, focus differentiation and focus cost leadership) was then entered, followed by the interaction effects of strategic orientation and growth. ---------- Results and Implications: The results show a positive impact of firm growth on profitability and, further, that this impact is moderated by strategic orientation. Specifically, it was found that growth has a larger impact on profitability when firms do not pursue a focus strategy (either focus differentiation or focus cost leadership). Our preliminary interpretation of the results suggests that the value of growth depends on the circumstances and, more specifically, on 'how much is left to fight for'. It seems that firms which target a narrow segment are less likely to gain value from growth. For these firms, the remaining market share to fight for is not large enough to compensate for the cost of growing. Based on our findings, it therefore seems that growth has a more positive relationship with profitability for firms that approach a broad market segment. Furthermore, we argue that firms pursuing a focus strategy will have more specialised assets, which decreases the possibilities for further profitable expansion. For firms, CEOs, boards of directors and so on, the study shows that high growth is not necessarily something worth aiming for. It is a trade-off between the cost of growing and the value of growing. For many firms, there might be better ways of generating profitability in the long run; it depends on the strategic orientation of the firm. For advisors and consultants, the conditional value of growth implies that in-depth knowledge of their clients' situation is necessary before any advice can be given. And finally, for policy makers, it means they have to be careful when initiating new policies to promote firm growth. They need to take into consideration firm strategy and industry conditions.
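The hierarchical regression described above can be reproduced in outline with statsmodels formulas: controls first, then growth, then the strategy categories and their interaction with growth. The data frame, column names and the reduced control set below are hypothetical placeholders for the survey and D&B variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 1107

# Hypothetical firm-level data standing in for the survey / D&B variables.
df = pd.DataFrame({
    "roa": rng.normal(0.08, 0.05, n),
    "employee_growth": rng.uniform(0.0, 3.0, n),
    "strategy": rng.choice(["differentiation", "cost_leadership",
                            "focus_differentiation", "focus_cost"], n),
    "firm_age": rng.integers(4, 40, n),
    "ceo_age": rng.integers(30, 70, n),
    "employees": rng.integers(5, 500, n),
})

controls = "firm_age + ceo_age + employees"
m1 = smf.ols(f"roa ~ {controls}", data=df).fit()                                  # step 1: controls
m2 = smf.ols(f"roa ~ {controls} + employee_growth", data=df).fit()                # step 2: + growth
m3 = smf.ols(f"roa ~ {controls} + employee_growth * C(strategy)", data=df).fit()  # step 3: + interactions
print(m1.rsquared_adj, m2.rsquared_adj, m3.rsquared_adj)
```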
Abstract:
Business Process Modelling is a fast-growing field in business and information technology, which uses visual grammars to model and execute the processes within an organisation. However, many analysts present such models in a 2D, static and iconic manner that many stakeholders find difficult to understand. Difficulties in understanding such grammars can impede the improvement of processes within an enterprise due to communication problems. In this chapter we present a novel framework for intuitively visualising animated business process models in interactive Virtual Environments. We also show that virtual environment visualisations can be performed with present 2D business process modelling technology, thus providing a low barrier to entry for business process practitioners. Two case studies are presented, from the film production and healthcare domains, that illustrate the ease with which these visualisations can be created. This approach can be generalised to other executable workflow systems, for any application domain being modelled.