326 results for Original model

in Queensland University of Technology - ePrints Archive


Relevance:

70.00%

Publisher:

Abstract:

In an empirical test and extension of Klein, Conn and Sorra's model of innovation implementation effectiveness, we apply structural equation modelling to assess the generalizability of their data-modified model in comparison with their theorised model. We examined the implementation of various types of innovations in a sample of 135 organizations. We found that the data supported the original model rather than the data-modified model: implementation climate mediated the relationship between implementation policies and practices and implementation effectiveness, while implementation effectiveness partially mediated the relationship between implementation climate and innovation effectiveness. Furthermore, we extend their model to suggest that the availability of non-financial resources plays a critical role in shaping implementation policies and practices.
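
A minimal sketch of how such a mediation chain could be specified, assuming organization-level scores for each construct and using the Python package semopy with lavaan-style syntax; the variable names and data file are illustrative, not the authors' instrument:

    # Illustrative only: the hypothesised mediation chain described above
    # (policies and practices -> implementation climate -> implementation
    # effectiveness -> innovation effectiveness, with a direct climate ->
    # innovation effectiveness path to test partial mediation).
    # Variable names and the data file are hypothetical.
    import pandas as pd
    from semopy import Model

    spec = """
    implementation_climate ~ policies_practices
    implementation_effectiveness ~ implementation_climate
    innovation_effectiveness ~ implementation_effectiveness + implementation_climate
    """

    data = pd.read_csv("implementation_survey.csv")  # one row per organization
    model = Model(spec)
    model.fit(data)
    print(model.inspect())  # path estimates used to judge full vs. partial mediation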

Relevance:

70.00%

Publisher:

Abstract:

The present study aims to validate the current best-practice model of implementation effectiveness in small and mid-sized businesses. Data from 135 organizations largely confirm the original model across various types of innovation. In addition, we extended this work by highlighting the importance of human resources in implementation effectiveness and the consequences of innovation effectiveness for future adoption attitudes. We found that the availability of skilled employees was positively related to implementation effectiveness. Furthermore, organizations that perceived a high level of benefits from implemented innovations were likely to have a positive attitude towards future innovation adoption. The implications of our improvements to the original model of implementation effectiveness are discussed.

Relevance:

70.00%

Publisher:

Abstract:

All civil and private aircraft are required to comply with the airworthiness standards set by their national airworthiness authority and throughout their operational life must be in a condition of safe operation. Aviation accident data shows that over twenty percent of all fatal accidents in aviation are due to airworthiness issues, specifically aircraft mechanical failures. Ultimately it is the responsibility of each registered operator to ensure that their aircraft remain in a condition of safe operation, and this is done through both effective management of airworthiness activities and the effective program governance of safety outcomes. Typically, the projects within these airworthiness management programs are focused on acquiring, modifying and maintaining the aircraft as a capability supporting the business. Program governance provides the structure through which the goals and objectives of airworthiness programs are set, along with the means of attaining them. Whilst the principal causes of failures in many programs can be traced to inadequate program governance, many of the failures in large-scale projects can have their root causes in the organisational culture and, more specifically, in the organisational processes related to decision-making. This paper examines the primary theme of project- and program-based enterprises, and introduces a model for measuring organisational culture in airworthiness management programs using measures drawn from 211 respondents in Australian airline programs. The paper describes the theoretical perspectives applied to modifying an original model to focus it specifically on measuring the organisational culture of programs for managing airworthiness, identifying the most important factors needed to explain the relationships between the measures collected, and providing a description of the nature of these factors. The paper concludes by identifying a model that best describes the organisational culture data collected from seven airworthiness management programs.
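
A minimal sketch, under the assumption of Likert-scored survey items, of the kind of exploratory factor analysis used to identify a small number of factors underlying such culture measures; the item names, data file and factor count are illustrative, not those of the study:

    # Illustrative exploratory factor analysis over organisational culture
    # survey items (hypothetical data, item names and factor count; not the
    # study's instrument or results).
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    responses = pd.read_csv("culture_survey.csv")   # e.g. one row per respondent
    items = responses.filter(like="item_")          # Likert-scored culture items

    fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
    fa.fit(items)

    # Loadings show which items group together into candidate cultural factors.
    loadings = pd.DataFrame(fa.components_.T, index=items.columns)
    print(loadings.round(2))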

Relevance:

70.00%

Publisher:

Abstract:

All civil and private aircraft are required to comply with the airworthiness standards set by their national airworthiness authority and throughout their operational life must be in a condition of safe operation. Aviation accident data shows that over 20% of all fatal accidents in aviation are due to airworthiness issues, specifically aircraft mechanical failures. Ultimately it is the responsibility of each registered operator to ensure that their aircraft remain in a condition of safe operation, and this is done through both effective management of airworthiness activities and the effective programme governance of safety outcomes. Typically, the projects within these airworthiness management programmes are focused on acquiring, modifying and maintaining the aircraft as a capability supporting the business. Programme governance provides the structure through which the goals and objectives of airworthiness programmes are set along with the means of attaining them. Whilst the principal causes of failures in many programmes can be traced to inadequate programme governance, many of the failures in large-scale projects can have their root causes in the organizational culture and more specifically in the organizational processes related to decision-making. This paper examines the primary theme of project and programme-based enterprises, and introduces a model for measuring organizational culture in airworthiness management programmes using measures drawn from 211 respondents in Australian airline programmes. The paper describes the theoretical perspectives applied to modifying an original model to focus it specifically on measuring the organizational culture of programmes for managing airworthiness, identifying the most important factors needed to explain the relationship between the measures collected, and providing a description of the nature of these factors. The paper concludes by identifying a model that best describes the organizational culture data collected from seven airworthiness management programmes.

Relevance:

60.00%

Publisher:

Abstract:

This paper addresses the following problem: given two or more business process models, create a process model that is the union of the process models given as input. In other words, the behavior of the produced process model should encompass that of the input models. The paper describes an algorithm that produces a single configurable process model from an arbitrary collection of process models. The algorithm works by extracting the common parts of the input process models, creating a single copy of them, and appending the differences as branches of configurable connectors. This way, the merged process model is kept as small as possible while still capturing all the behavior of the input models. Moreover, analysts are able to trace each element in the merged model back to the original model(s) it comes from. The algorithm has been prototyped and tested against process models taken from several application domains.
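
A toy sketch of the core idea, under the simplifying assumption that each process model is reduced to a set of directed edges; shared edges are kept once, and model-specific edges are annotated with their origin, standing in for the branches of configurable connectors. This is an illustration, not the paper's algorithm:

    # Toy illustration: merge two process models given as sets of directed
    # edges. Common edges are kept once; model-specific edges are annotated
    # with their origin, mimicking branches of configurable connectors and
    # preserving traceability to the original model(s).
    def merge(model_a, model_b):
        common = model_a & model_b
        merged = {edge: {"A", "B"} for edge in common}        # shared behaviour
        merged.update({e: {"A"} for e in model_a - common})   # branch only in A
        merged.update({e: {"B"} for e in model_b - common})   # branch only in B
        return merged

    a = {("receive order", "check stock"), ("check stock", "ship goods")}
    b = {("receive order", "check stock"), ("check stock", "back-order")}

    for edge, origin in merge(a, b).items():
        print(edge, "<- from model(s)", sorted(origin))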

Relevance:

60.00%

Publisher:

Abstract:

As order dependencies between process tasks can become complex, it is easy to make mistakes in process model design, especially behavioral mistakes such as deadlocks. Notions such as soundness formalize behavioral errors, and tools exist that can identify such errors. However, these tools do not provide assistance with the correction of the process models. Error correction can be very challenging, as the intentions of the process modeler are not known and there may be many ways in which an error can be corrected. We present a novel technique for automatic error correction in process models based on simulated annealing. Using this technique, a number of alternative process models are identified that resolve one or more errors in the original model. The technique is implemented and validated on a sample of industrial process models. The tests show that at least one sound solution can be found for each input model and that the response times are short.
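
A generic simulated-annealing skeleton of the kind such a repair technique can build on; the perturbation and error-count functions are placeholders, and this is not the paper's implementation or its soundness checker:

    # Generic simulated-annealing skeleton for model repair (illustrative
    # only). `perturb` would apply a small change to the model (e.g. add,
    # remove or redirect an edge); `count_errors` would count behavioral
    # errors such as deadlocks.
    import math
    import random

    def anneal(model, count_errors, perturb, t0=1.0, cooling=0.95, steps=1000):
        current, current_cost = model, count_errors(model)
        best, best_cost = current, current_cost
        t = t0
        for _ in range(steps):
            candidate = perturb(current)
            cost = count_errors(candidate)
            # Accept improvements always; accept worse candidates with a
            # probability that shrinks as the temperature cools.
            if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
                current, current_cost = candidate, cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
            if best_cost == 0:          # a sound alternative has been found
                break
            t *= cooling
        return best, best_cost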

Relevance:

60.00%

Publisher:

Abstract:

Recently, an analysis of the response curve of the vascular endothelial growth factor (VEGF) receptor and its application to cancer therapy was described in [T. Alarcón, and K. Page, J. R. Soc. Lond. Interface 4, 283–304 (2007)]. The analysis is significantly extended here by demonstrating that an alternative computational strategy, namely the Krylov FSP algorithm for the direct solution of the chemical master equation, is feasible for the study of the receptor model. The new method allows us to further investigate the hypothesis of symmetry in the stochastic fluctuations of the response. Also, by augmenting the original model with a single reversible reaction we formulate a plausible mechanism capable of realizing a bimodal response, which is reported experimentally but which is not exhibited by the original model. The significance of these findings for mechanisms of tumour resistance to antiangiogenic therapy is discussed.
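
As a pointer to what a finite state projection (FSP) solution of a chemical master equation looks like in practice, here is a minimal sketch for a one-species birth-death process, truncated at N molecules and propagated with SciPy's sparse matrix exponential. The toy system and its rate constants are illustrative; this is neither the VEGF receptor model nor the Krylov FSP implementation discussed in the paper:

    # Minimal finite-state-projection illustration: one-species birth-death
    # process with production rate k and degradation rate g, truncated at N
    # molecules. The truncated CME generator A is propagated with a sparse
    # matrix exponential. Not the receptor model from the paper.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import expm_multiply

    k, g, N = 10.0, 1.0, 100              # hypothetical rates and truncation size
    n = np.arange(N + 1)

    production = np.full(N, k)            # transitions n -> n + 1
    degradation = g * n[1:]               # transitions n -> n - 1
    diagonal = -(np.append(production, 0.0) + g * n)   # outflow from each state
    A = diags([production, diagonal, degradation], offsets=[-1, 0, 1], format="csc")

    p0 = np.zeros(N + 1)
    p0[0] = 1.0                           # start with zero molecules
    p_t = expm_multiply(A * 5.0, p0)      # probability distribution at t = 5

    print("probability mass retained by the truncation:", p_t.sum())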

Relevance:

60.00%

Publisher:

Abstract:

The most common software analysis tools available for measuring fluorescence images are designed for two-dimensional (2D) data; they rely on manual settings for inclusion and exclusion of data points and on computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology(1), even in complex tissue sections(2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of the complex changes in cell morphology, protein localization and receptor trafficking. Current techniques available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures. Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (a tree-like structure). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells(3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enforces biological constraints, by requiring one nucleus per cell and using cell membranes to segment cells, it cannot be utilized to analyze fluorescence data that are not continuous, because it ideally builds a cell surface without void spaces. To our knowledge, no user-modifiable automated approach has yet been developed that provides morphometric information from 3D fluorescence images and captures the spatial information of cells of undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.). These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but limited familiarity with computer applications, to quantify morphological changes in cell dynamics.
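
A minimal sketch, in Python with NumPy/SciPy rather than the Imaris XT/MATLAB environment described above, of the kind of automated 3D measurement such a platform performs: threshold a fluorescence stack, label connected objects of arbitrary shape, and report their volumes. The file name and threshold choice are illustrative:

    # Illustrative 3D morphometry sketch (not the Imaris XT/MATLAB pipeline
    # described above): threshold a 3D fluorescence stack, label connected
    # objects of arbitrary shape, and measure their volumes in voxels.
    import numpy as np
    from scipy import ndimage

    stack = np.load("fluorescence_stack.npy")        # hypothetical 3D array (z, y, x)
    foreground = stack > np.percentile(stack, 95)    # simple intensity threshold

    labels, n_objects = ndimage.label(foreground)    # 3D connected components
    volumes = ndimage.sum(foreground, labels, index=range(1, n_objects + 1))

    for obj_id, voxels in zip(range(1, n_objects + 1), volumes):
        print(f"object {obj_id}: {int(voxels)} voxels")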

Relevance:

60.00%

Publisher:

Abstract:

This paper explores grassroots leadership, an under-researched and often side-lined approach to leadership that operates outside of formal bureaucratic structures. The paper's central purpose is the claim that an understanding of grassroots leadership and the tactics used by grassroots leaders provides valuable insights for the study of school leadership. In this paper, we present and discuss an original model of grassroots leadership based on the argument that this under-researched area can further our understanding of school leadership. Drawing upon the limited literature in the field, we present a model consisting of two approaches to change (i.e. conflict and consensus) and two categories of change (i.e. reform and refinement) and then provide illustrations of how the model works in practice. We make the argument that the model has much merit for conceptualizing school leadership, and this is illustrated by applying the model to formal bureaucratic leadership within school contexts. Given the current climate in education, where business and management language is pervasive within leadership-preparation programs, we argue that it is timely for university academics who are responsible for preparing school leaders to consider broadening their approach by exposing school leaders to a variety of change-based strategies and tactics used by grassroots leaders.

Relevance:

60.00%

Publisher:

Abstract:

In many factories, the feed chute of the first mill is operated with a high chute level in order to maximise the cane rate through the mill. There is a trend towards controlling chute level within a small control range near the top of the chute, which can result in rapid changes in cane feeding rate to maintain the chute level set point. This paper reviews the theory that predicts higher cane rate with higher chute level and discusses its main weakness: it does not consider the beneficial effect on capacity of cane falling from the top of the chute onto the top surface of the cane mat. An extension to the chute theory model is described that predicts higher capacity with lower chute level because of the effect of the falling cane. The original model and this extended model are believed to be the upper and lower limits of the true effect. The paper reports an experiment that measured the real effect of chute level on capacity and finds that increasing chute level does lead to higher capacity, but that the trend is only about one-third as strong as the original theory predicted. The paper questions whether the benefits of slightly greater capacity outweigh the costs of operating with a small control range near the top of the chute.

Relevance:

30.00%

Publisher:

Abstract:

This program of research examines the experience of chronic pain in a community sample. While it is clear that, as in patient samples, chronic pain in non-patient samples is associated with psychological distress and physical disability, the experience of pain across the total spectrum of pain conditions (including acute and episodic pain conditions) and during the early course of chronic pain is less clear. Information about these aspects of the pain experience is important because effective early intervention for chronic pain relies on identifying people who are likely to progress to chronicity post-injury. A conceptual model of the transition from acute to chronic pain was proposed by Gatchel (1991a). In brief, Gatchel's model describes three stages that individuals who have a serious pain experience move through, each with worsening psychological dysfunction and physical disability. The aims of this program of research were to describe the experience of pain in a community sample, in order to obtain pain-specific data on the problem of pain in Queensland, and to explore the usefulness of Gatchel's model in a non-clinical sample. Additionally, five risk factors and six protective factors were proposed as possible extensions to Gatchel's model. To address these aims, a prospective longitudinal mixed-method research design was used. Quantitative data were collected in Phase 1 via a comprehensive postal questionnaire. Phase 2 consisted of a follow-up questionnaire 3 months post-baseline. Phase 3 consisted of semi-structured interviews with a subset of the original sample 12 months post follow-up, which used qualitative data to provide a further in-depth examination of the experience and process of chronic pain from respondents' point of view. The results indicate that chronic pain is associated with high levels of anxiety and depressive symptoms. However, the levels of disability reported by this Queensland sample were generally lower than those reported by clinical samples and consistent with disability data reported in a New South Wales population-based study. With regard to the second aim of this program of research, while some elements of the pain experience of this sample were consistent with those described by Gatchel's model, overall the model was not a good fit with the experience of this non-clinical sample. The findings indicate that passive coping strategies (minimising activity), catastrophising, self-efficacy, optimism, social support, active strategies (use of distraction) and the belief that emotions affect pain may be important to consider in understanding the processes that underlie the transition to and continuation of chronic pain.

Relevance:

30.00%

Publisher:

Abstract:

With the advent of Service Oriented Architecture, Web services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service that meets the requirements of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods that improve the accuracy of Web service discovery in matching the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of the words used to describe the services, as well as their input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content of the Web service description language documents, a support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirements of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase. Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, system integration, integrates the results from the preceding two phases using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of a standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services found in Phase I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (Phase I) and the link analysis (Phase II) in a systematic fashion.
Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
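
A toy sketch of the link-analysis idea: candidate Web services as nodes of a weighted graph, feasible input/output links as edges, and a minimum-cost path as the suggested composition. The service names, weights and cost notion are illustrative, not the thesis's implementation (which applies an all-pairs shortest-path algorithm over the matched services):

    # Toy illustration of the link-analysis phase using networkx: services
    # are graph nodes, feasible links are weighted edges, and a cheapest
    # path is the suggested composition. All names and weights are
    # hypothetical.
    import networkx as nx

    g = nx.DiGraph()
    g.add_weighted_edges_from([
        ("QueryTerms", "FlightSearch", 1.0),
        ("FlightSearch", "HotelSearch", 2.0),
        ("FlightSearch", "CarRental", 3.0),
        ("HotelSearch", "PaymentService", 1.5),
        ("CarRental", "PaymentService", 1.0),
    ])

    path = nx.shortest_path(g, "QueryTerms", "PaymentService", weight="weight")
    cost = nx.shortest_path_length(g, "QueryTerms", "PaymentService", weight="weight")
    print(" -> ".join(path), f"(total cost {cost})")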

Relevance:

30.00%

Publisher:

Abstract:

The increasing prevalence of International New Ventures (INVs) during the past twenty years has been highlighted by numerous studies (Knight and Cavusgil, 1996; Moen, 2002). International New Ventures are firms, typically small to medium enterprises, that internationalise within six years of inception (Oviatt and McDougall, 1997). To date there has been no general consensus within the literature on a theoretical framework of internationalisation to explain the internationalisation process of INVs (Madsen and Servais, 1997). However, some researchers have suggested that the innovation diffusion model may provide a suitable theoretical framework (Chetty & Hamilton, 1996; Fan & Phan, 2007). The proposed model was based on existing and well-established innovation diffusion theories drawn from the consumer behaviour and internationalisation literature to explain the internationalisation process of INVs (Lim, Sharkey, and Kim, 1991; Reid, 1981; Robertson, 1971; Rogers, 1962; Wickramasekera and Oczkowski, 2006). The results of this analysis indicated that the synthesised model of export adoption was effective in explaining the internationalisation process of INVs within the Queensland Food and Beverage Industry. Significantly, the results of the analysis also indicated that features of the original I-models developed in the consumer behaviour literature, which had received limited examination within the internationalisation literature, were confirmed. This includes the ability of firms, or specifically decision-makers, to skip stages based on previous experience.

Relevance:

30.00%

Publisher:

Abstract:

Despite more than three decades of research, there is a limited understanding of the transactional processes of appraisal, stress and coping. This has led to calls for more focused research on the entire process that underlies these variables. To date, there remains a paucity of such research. The present study examined Lazarus and Folkman's (1984) transactional model of stress and coping. One hundred and twenty-nine Australian participants in full-time employment (nurses and administration employees) were recruited. There were 49 male (mean age = 34, SD = 10.51) and 80 female (mean age = 36, SD = 10.31) participants. The analysis of three path models indicated that, in addition to the original paths found in Lazarus and Folkman's transactional model (primary appraisal → secondary appraisal → stress → coping), there were also direct links between primary appraisal and stress at Time 1, and between stress at Time 1 and stress at Time 2. This study provides additional insights into the transactional process, which will extend our understanding of how individuals appraise, cope with and experience occupational stress.
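
A minimal sketch of how the path coefficients in such a chain could be estimated as a series of regressions with statsmodels; the variable names and data file are illustrative, and this is not the study's analysis:

    # Illustrative path analysis for the chain reported above (primary
    # appraisal -> secondary appraisal -> stress at Time 1 -> coping, plus
    # the direct links primary appraisal -> stress T1 and stress T1 ->
    # stress T2), estimated as a series of OLS regressions. Hypothetical
    # variable names and data.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("stress_coping.csv")   # one row per participant

    paths = [
        "secondary_appraisal ~ primary_appraisal",
        "stress_t1 ~ secondary_appraisal + primary_appraisal",  # includes the direct link
        "coping ~ stress_t1",
        "stress_t2 ~ stress_t1",
    ]
    for formula in paths:
        fit = smf.ols(formula, data=df).fit()
        print(formula, "->", fit.params.round(2).to_dict())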

Relevance:

30.00%

Publisher:

Abstract:

Existing literature has failed to find robust relationships between individual differences and the ability to fake psychological tests, possibly due to limitations in how successful faking is operationalised. In order to fake, individuals must alter their original profile to create a particular impression. Currently, successful faking is operationalised through statistical definitions, informant ratings, known-groups comparisons, the use of archival and baseline data, and breaches of validity indexes. However, there are many methodological limitations to these approaches. This research proposed a three-component model of successful faking to address this, in which an original response is manipulated into a strategic response, which must match a criterion target. Further, by operationalising successful faking in this manner, this research takes into account the fact that individuals may have been successful in reaching their implicitly created profile, but that this may not have matched the criteria they were instructed to fake.

Participants (N=48, 22 students and 26 non-students) completed the BDI-II honestly. Participants then faked the BDI-II as if they had no, mild, moderate and severe depression, as well as completing a checklist revealing which symptoms they thought indicated each level of depression. Findings were consistent with a three-component model of successful faking, in which individuals effectively changed their profile to what they believed was required; however, this profile differed from the criteria defined by the psychometric norms of the test.

One of the foremost issues for research in this area is the inconsistent manner in which successful faking is operationalised. This research allowed successful faking to be operationalised in an objective, quantifiable manner. Using this model as a template may give researchers a better understanding of the processes involved in faking, including the role of strategies and abilities in determining the outcome of test dissimulation.