663 results for systematically conferred advantages

in Queensland University of Technology - ePrints Archive


Relevance:

20.00%

Publisher:

Abstract:

Enterprise Application Integration (EAI) is a challenging area that is attracting growing attention from the software industry and the research community. A landscape of languages and techniques for EAI has emerged and is continuously being enriched with new proposals from different software vendors and coalitions. However, little or no effort has been dedicated to systematically evaluating and comparing these languages and techniques. The work reported in this paper is a first step in that direction. It presents an in-depth analysis of the Business Modeling Language, a language specifically developed for EAI. The framework used for this analysis is based on a number of workflow and communication patterns, and it provides a basis for evaluating the advantages and drawbacks of EAI languages with respect to recurrent problems and situations.

Relevance:

20.00%

Publisher:

Abstract:

Principal Topic: It is well known that most new ventures suffer from a significant lack of resources, which increases the risk of failure (Shepherd, Douglas and Shanley, 2000) and makes it difficult to attract stakeholders and financing for the venture (Bhide & Stevenson, 1999). The Resource-Based View (RBV) (Barney, 1991; Wernerfelt, 1984) is a dominant theoretical base increasingly drawn on within strategic management. While theoretical contributions applying RBV in the domain of entrepreneurship can arguably be traced back to Penrose (1959), there has been renewed attention recently (e.g. Alvarez & Busenitz, 2001; Alvarez & Barney, 2004). That said, empirical work is in its infancy, in part because of a lack of well-developed measurement instruments for testing ideas derived from RBV. The purpose of this study is to develop measurement scales that can assist such empirical investigations. In doing so, we try to overcome three deficiencies in the empirical measures currently used to apply RBV in the entrepreneurship arena. First, measures need to be developed for the resource characteristics and configurations associated with the competitive advantages typically found in entrepreneurial firms. These include alertness and industry knowledge (Kirzner, 1973), flexibility (Ebben & Johnson, 2005), strong networks (Lee et al., 2001) and, within knowledge-intensive contexts, unique technical expertise (Wiklund & Shepherd, 2003). Second, the RBV has the important limitations of being relatively static and modelled on large, established firms. In that context, traditional RBV focuses on competitive advantages. However, newly established firms often face disadvantages, especially those associated with the liabilities of newness (Aldrich & Auster, 1986). It is therefore important in entrepreneurial contexts to also investigate responses to competitive disadvantage through an RBV lens. Conversely, recent research has suggested that resource constraints can actually have a positive effect on firm growth and performance under some circumstances (e.g. George, 2005; Katila & Shane, 2005; Mishina et al., 2004; Mosakowski, 2002; cf. also Baker & Nelson, 2005). Third, current empirical applications of RBV measure the levels or amounts of particular resources available to a firm and infer that these resources deliver competitive advantage by establishing a relationship between resource levels and performance (e.g. via regression on profitability). However, there is an opportunity to directly measure the characteristics of the resource configurations that deliver competitive advantage, such as Barney's well-known VRIO (Valuable, Rare, Inimitable and Organized) framework (Barney, 1997).

Key Propositions and Methods: The aim of our study is to develop and test scales for measuring resource advantages (and disadvantages) and inimitability for entrepreneurial firms. The study proceeds in three stages. The first stage developed our initial scales based on earlier literature, adapting existing scales where possible. The first block of the scales related to the level of resource advantages and disadvantages. Respondents were asked the degree to which each resource category represented an advantage or disadvantage relative to other businesses in their industry on a 5-point response scale: Major Disadvantage, Slight Disadvantage, No Advantage or Disadvantage, Slight Advantage and Major Advantage. Items were developed as follows. Network capabilities (3 items) were adapted from Madsen, Alsos, Borch, Ljunggren and Brastad (2006). Knowledge resources for marketing expertise / customer service (3 items) and technical expertise (3 items) were adapted from Wiklund and Shepherd (2003). Flexibility (2 items) and costs (4 items) were adapted from JIBS B97. New scales were developed for industry knowledge / alertness (3 items) and product / service advantages. The second block asked the respondent to nominate the most important resource advantage (and disadvantage) of the firm. For the advantage, they were then asked four questions, on a 5-point Likert scale, to determine how easy it would be for other firms to imitate and/or substitute this resource. For the disadvantage, they were asked corresponding questions about overcoming it. The second stage involved two pre-tests of the instrument to refine the scales: an online convenience sample of 38 respondents, followed by telephone interviews with a random sample of 31 nascent firms and 47 young firms (< 3 years in operation) generated using a PSED method of randomly calling households (Gartner et al., 2004). Several items were dropped or reworded based on the pre-tests. The third stage (currently in progress) is part of Wave 1 of CAUSEE (nascent firms) and FEDP (young firms), PSED-type studies being conducted in Australia. The scales will be tested and analysed with random samples of approximately 700 nascent and young firms respectively. In addition, a judgement sample of approximately 100 high-potential businesses in each category will be included.

Findings and Implications: The results of the main study (stage 3; data collection is currently in progress) will allow comparison of the level of resource advantage / disadvantage across various sub-groups of the population. Of particular interest will be a comparison of the high-potential firms with the random sample. In the smaller pre-tests (N=38 and N=78), the factor structure of the items confirmed the distinctiveness of the constructs, and the reliabilities were within an acceptable range: Cronbach's alpha ranged from 0.701 to 0.927. The study will give researchers an opportunity to better operationalize RBV theory in studies within the domain of entrepreneurship. This is a fundamental requirement for testing hypotheses derived from RBV in systematic, large-scale research studies.
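
For reference, the Cronbach's alpha reliabilities quoted above follow the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). The sketch below is a minimal illustration of that computation on a hypothetical 38-respondent, 3-item, 5-point response matrix; it is not the CAUSEE/FEDP data or analysis code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 38 respondents answering a correlated 3-item, 5-point scale
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(38, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(38, 3)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.3f}")
```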

Relevance:

20.00%

Publisher:

Abstract:

A common optometric problem is to specify the eye's ocular aberrations in terms of Zernike coefficients and to reduce that specification to a prescription for the optimum sphero-cylindrical correcting lens. The typical approach is first to reconstruct wavefront phase errors from measurements of wavefront slopes obtained by a wavefront aberrometer. This paper applies a new method to this clinical problem that does not require wavefront reconstruction. Instead, we base our analysis on axial wavefront vergence as inferred directly from wavefront slopes. The result is a wavefront vergence map that is similar to the axial power maps used in corneal topography and hence has the potential to be favoured by clinicians. We use our new set of orthogonal Zernike slope polynomials to systematically analyse details of the vergence map, analogous to Zernike analysis of wavefront maps. The result is a vector of slope coefficients that describe fundamental aberration components. Three different methods for reducing slope coefficients to a sphero-cylindrical prescription in power vector form are compared and contrasted. When the original wavefront contains only second-order aberrations, the vergence map is a function of meridian only and the power vectors from all three methods are identical. Differences between the methods begin to appear as higher-order aberrations are included, in which case the wavefront vergence map is more complicated. Finally, we discuss the advantages and limitations of the vergence map representation of ocular aberrations.
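
For orientation, the sketch below shows the conventional reduction of second-order Zernike wavefront coefficients to a power vector (M, J0, J45), i.e. the widely used second-order conversion in visual optics rather than any of the paper's three slope-based methods; the coefficient values and pupil radius used are hypothetical.

```python
import math

def zernike_to_power_vector(c20: float, c22: float, c2m2: float, r: float):
    """Convert second-order Zernike coefficients (microns, OSA convention)
    to a power vector (M, J0, J45) in dioptres for a pupil radius r in mm.

    M   = -4*sqrt(3) * c20  / r^2   (spherical equivalent)
    J0  = -2*sqrt(6) * c22  / r^2   (with/against-the-rule astigmatism)
    J45 = -2*sqrt(6) * c2m2 / r^2   (oblique astigmatism)
    """
    M = -4 * math.sqrt(3) * c20 / r**2
    J0 = -2 * math.sqrt(6) * c22 / r**2
    J45 = -2 * math.sqrt(6) * c2m2 / r**2
    return M, J0, J45

# Hypothetical coefficients for a 3 mm pupil radius
M, J0, J45 = zernike_to_power_vector(c20=0.8, c22=-0.15, c2m2=0.05, r=3.0)
print(f"M = {M:+.2f} D, J0 = {J0:+.2f} D, J45 = {J45:+.2f} D")
```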

Relevance:

20.00%

Publisher:

Abstract:

In the past 20 years, mesoporous materials have attracted great attention due to their significant features of large surface area, ordered mesoporous structure, tunable pore size and volume, and well-defined surface properties. They have many potential applications, such as catalysis, adsorption/separation and biomedicine [1]. Recently, studies of the applications of mesoporous materials have expanded into the field of biomaterials science. A new class of bioactive glass, referred to as mesoporous bioactive glass (MBG), was first developed in 2004. This material has a highly ordered mesopore channel structure with a pore size ranging from 5 to 20 nm [1]. Compared to non-mesoporous bioactive glass (BG), MBG possesses a more optimal surface area and pore volume and improved in vitro apatite mineralization in simulated body fluids [1,2]. Vallet-Regí et al. have systematically investigated the in vitro apatite formation of different types of mesoporous materials and demonstrated that an apatite-like layer can be formed on the surfaces of Mobil Composition of Matter (MCM)-48, hexagonal mesoporous silica (SBA-15), phosphorus-doped MCM-41, bioglass-containing MCM-41 and ordered mesoporous MBG, allowing their use in biomedical engineering for tissue regeneration [2-4]. Chang et al. have found that MBG particles can be used as a bioactive drug-delivery system [5,6]. Our own study has shown that MBG powders, when incorporated into a poly(lactide-co-glycolide) (PLGA) film, significantly enhance the apatite-mineralization ability and cell response of PLGA films compared to BG [7]. These studies suggest that MBG is a very promising bioactive material for bone regeneration. It is known that, for bone defect repair, tissue engineering offers an alternative approach based on three-dimensional (3D) porous scaffolds, which have advantages over powders or granules because they provide an interconnected macroporous network to allow cell migration, nutrient delivery, bone ingrowth and, eventually, vascularization [8]. For this reason, we have sought to apply MBG to bone tissue engineering by developing MBG scaffolds. However, one of the main disadvantages of MBG scaffolds is their low mechanical strength and high brittleness; another is their very quick degradation, which leads to an unstable surface for bone cell growth and limits their applications. Silk fibroin, a family of native biomaterials, has been widely studied for bone and cartilage repair applications in the form of pure silk or silk composite scaffolds [9-14]. Compared to traditional synthetic polymer materials, such as PLGA and poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV), the chief advantage of silk fibroin is its water-soluble nature, which eliminates the need for the organic solvents that tend to be highly cytotoxic during scaffold preparation [15]. Other advantages of silk scaffolds are their excellent mechanical properties, controllable biodegradability and cytocompatibility [15-17]. However, for the purposes of bone tissue engineering, the osteoconductivity of pure silk scaffolds is suboptimal. It is expected that combining MBG with silk to produce MBG/silk composite scaffolds would greatly improve their physicochemical and osteogenic properties for bone tissue engineering applications. Therefore, in this chapter, we introduce the research development of MBG/silk scaffolds for bone tissue engineering.

Relevance:

20.00%

Publisher:

Abstract:

A better understanding of Open Source Innovation in Physical Products (OSIP) might allow project managers to mitigate the risks associated with this innovation model and process, while developing the right strategies to maximise OSIP outputs. In the software industry, firms have been highly successful using Open Source Innovation (OSI) strategies. However, OSI in the physical world has not been studied, leading to the research question: what advantages and disadvantages do organisations incur from using OSI in physical products? An exploratory research methodology supported by thirteen semi-structured interviews helped us build a seven-theme framework to categorise the advantage and disadvantage elements linked with the use of OSIP. In addition, the following factors were identified as impacting advantage and disadvantage elements for firms using OSIP:

- degree of openness in OSIP projects;
- time of release of OSIP into the public domain;
- use of Open Source Innovation in Software (OSIS) in conjunction with OSIP;
- project management elements (project oversight, scope and modularity);
- firms' Corporate Social Responsibility (CSR) values;
- value of the OSIP project to the community.

This thesis makes a contribution to the body of innovation theory by identifying advantage and disadvantage elements of OSIP. Then, from a contingency perspective, it identifies factors that enhance or decrease advantages and mitigate or increase disadvantages of OSIP. Finally, the research clarifies the understanding of OSI by clearly setting OSIP apart from OSIS. The main practical contribution of this thesis is to provide managers with a framework to better understand OSIP, as well as a model that identifies the contingency factors that increase advantages and decrease disadvantages. Overall, the research allows managers to make informed decisions about when they can use OSIP and how they can develop strategies to make OSIP a viable proposition. In addition, this thesis demonstrates that the advantages identified for OSIS cannot all be transferred to OSIP, and thus OSIP decisions should not be based upon OSIS knowledge.

Relevance:

20.00%

Publisher:

Abstract:

Objective: To critically appraise the Biodex System 4 isokinetic dynamometer for strength assessment in children. Methods: The appraisal was based on experiences from two independent laboratories involving the testing of 213 children. Issues were recorded and the manufacturer was consulted regarding appropriate solutions. Results: The dynamometer had insufficient height adjustment to align the knee for some children, requiring the construction of padding to better fit the child within the dynamometer. Potential for entrapment of the non-testing leg was evident in the passive and eccentric modes, and a leg bracket restraint was constructed. Automated gravity correction did not operate when protocols were linked or data were exported to an external device. Conclusions: Limitations were noted, some applicable to knee strength testing in general and others specific to use with children. However, most of these obstacles could be overcome, making the Biodex System 4 suitable for the assessment of knee strength in children.

Relevance:

20.00%

Publisher:

Abstract:

The belief that regions play a role in determining national economic development, and that advantages are found at the local and regional level, has been a focus of economic geography and development studies over the last 10 years. However, this issue has historically been dominated by economic perspectives, industrial firms and public bodies. In recent years the social economy has started to receive greater attention for its role in creating regional advantage as well as ameliorating regional disadvantage. The social economy includes the impact of the third sector, such as social enterprises. This paper proposes that understanding the role and function of social enterprise will enable a more nuanced understanding of the socio-economic aspects of regional development. Drawing upon Oliver's (1997) framework for sustainable competitive advantage, it is argued that this established management framework provides a valuable foundation for examining the organisational resources that social enterprises need to operate effectively, as well as the socio-economic resources they produce for regional communities.

Relevance:

20.00%

Publisher:

Abstract:

Increased or fluctuating resources may facilitate opportunities for invasive exotic plants to dominate. This hypothesis does not, however, explain how invasive species succeed in regions characterized by low resource conditions, or how these species persist in the lulls between high-resource periods. We compared the growth of three co-occurring C4 perennial bunchgrasses under low resource conditions: an exotic grass, Eragrostis curvula (African lovegrass), and two native grasses, Themeda triandra (kangaroo grass) and Eragrostis sororia. We grew each species over 12 weeks under low nutrients and three low-water regimes differentiated by timing: continuous, pulsed, and mixed treatments (switched from continuous to pulsed and back to continuous). Over time, we measured germination rates, time to germination (first and second generations), height, root biomass, vegetative biomass, and reproductive biomass. Contrary to our expectation that the pulsed watering regime would favor the invader, the water-supply treatments had little significant effect on plant growth. We did find inherent advantages in a suite of early colonization traits that likely favor African lovegrass over the natives, including faster germination, earlier flowering, faster growth rates, and greater height from 2 weeks onward. African lovegrass also showed growth allocation strategies similar to the native grasses in terms of belowground biomass, but produced more vegetative biomass than kangaroo grass. Overall, our results suggest that even under low resource conditions invasive plant species like African lovegrass can grow similarly to native grasses and, for some key colonization traits such as germination rate, perform better than the natives.

Relevance:

20.00%

Publisher:

Abstract:

Client owners usually need an estimate or forecast of their likely building costs in advance of detailed design in order to confirm the financial feasibility of their projects. Because of their timing in the project life cycle, these early-stage forecasts are characterized by the minimal amount of information available concerning the new (target) project, to the point that often only its size and type are known. One approach is to use the mean contract sum of a sample, or base group, of previous projects of a similar type and size to the project for which the estimate is needed. Bernoulli's law of large numbers implies that this base group should be as large as possible. However, increasing the size of the base group inevitably involves including projects that are less and less similar to the target project. Deciding on the optimal number of base group projects is known as the homogeneity or pooling problem. A method of solving the homogeneity problem is described which uses closed-form equations to compare three different sampling arrangements of previous projects for their simulated forecasting ability by a cross-validation method, in which a series of targets is extracted, with replacement, from the groups and compared with the mean value of the projects in the base groups. The procedure is then demonstrated with 450 Hong Kong projects (covering different project types: residential, commercial centre, car parking, social community centre, school, office, hotel, industrial, university and hospital) clustered into base groups according to their type and size.
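
The cross-validation idea can be illustrated with a short sketch (this is not the authors' closed-form procedure): targets are drawn with replacement from a hypothetical project portfolio, each target's contract sum is forecast as the mean of a base group of the most similar remaining projects, and the error is tracked as the base-group size grows. The portfolio, the similarity measure (size difference) and the error metric here are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical portfolio: each project has a size (gross floor area) and a contract sum
size = rng.uniform(1_000, 50_000, 450)             # m^2
cost = size * rng.normal(2_500, 400, 450)          # contract sum

def cv_error(base_group_size: int, n_trials: int = 2_000) -> float:
    """Mean absolute percentage error when the target's forecast is the mean
    contract sum of the base_group_size projects closest in size."""
    errors = []
    for _ in range(n_trials):
        target = rng.integers(len(cost))            # draw a target, with replacement
        others = np.delete(np.arange(len(cost)), target)
        # base group = the remaining projects most similar in size to the target
        nearest = others[np.argsort(np.abs(size[others] - size[target]))[:base_group_size]]
        forecast = cost[nearest].mean()
        errors.append(abs(forecast - cost[target]) / cost[target])
    return float(np.mean(errors))

# Trade-off: larger base groups reduce noise but include less similar projects
for n in (5, 20, 50, 200, 449):
    print(f"base group of {n:3d}: MAPE = {cv_error(n):.1%}")
```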

Relevance:

20.00%

Publisher:

Abstract:

Learning can allow individuals to increase their fitness in particular environments. The advantage of learning depends on the predictability of the environment and the extent to which animals can adjust their behaviour. Earlier general models have investigated when environmental predictability might favour the evolution of learning in foraging animals. Here, we construct a theoretical model that predicts the advantages of learning using a specific biological example: oviposition in the Lepidoptera. Our model includes environmental and behavioural complexities relevant to host selection in these insects and tests whether the predictions of the general models still hold. Our results demonstrate how the advantage of learning is maximised when within-generation variability is minimised (the local environment consists mainly of a single host plant species) and between-generation variability is maximised (different host plant species are the most common in different generations). We discuss how our results (a) can be applied to recent empirical work in different lepidopteran species and (b) predict an important role for learning in lepidopteran agricultural pests.
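
As a rough illustration of that qualitative result (not the paper's model), the toy simulation below assumes a hypothetical two-host environment: a "learner" shifts its oviposition preference toward the locally common host after a few encounters, a "non-learner" keeps a fixed preference, and fitness is scored as the fraction of eggs laid on the common host. The learning advantage is largest when the common host dominates within a generation (high p_within) but changes identity between generations (high p_switch).

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_fitness(learner: bool, p_within: float, p_switch: float,
                 generations: int = 2_000, eggs: int = 50) -> float:
    """Fraction of eggs laid on the locally common (high-quality) host.

    p_within : proportion of the common host within each generation
               (high value -> low within-generation variability).
    p_switch : probability that the common host's identity changes between
               generations (high value -> high between-generation variability).
    """
    common_host = 0
    fixed_pref = 0                                 # the non-learner always prefers host 0
    on_common = 0
    for _ in range(generations):
        if rng.random() < p_switch:
            common_host = 1 - common_host
        for encounter in range(eggs):
            if learner:
                # after a few encounters the learner tracks the locally common host
                pref = common_host if encounter > 5 else int(rng.integers(2))
            else:
                pref = fixed_pref
            host = common_host if rng.random() < p_within else 1 - common_host
            if pref == host == common_host:        # egg laid on the common, high-quality host
                on_common += 1
    return on_common / (generations * eggs)

# Advantage of learning = learner fitness minus non-learner fitness
for p_within, p_switch in [(0.9, 0.5), (0.9, 0.0), (0.6, 0.5)]:
    advantage = (mean_fitness(True, p_within, p_switch)
                 - mean_fitness(False, p_within, p_switch))
    print(f"p_within={p_within}, p_switch={p_switch}: learning advantage = {advantage:+.3f}")
```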

Relevance:

20.00%

Publisher:

Abstract:

In this study, an MRM-based tandem LC-MS method (Waters Xevo TQ) was developed for rapid, broad profiling of hydrophilic metabolites from biological samples, in either positive or negative ion mode, without the need for an ion-pairing reagent, using a reversed-phase pentafluorophenylpropyl (PFPP) column. The developed method was successfully applied to analyze various biological samples from C57BL/6 mice, including urine, duodenum, liver, plasma, kidney, heart, and skeletal muscle. As a result, a total of 112 hydrophilic metabolites were detected within an 8 min run time to obtain a metabolite profile of the biological samples. The analysis of this number of hydrophilic metabolites is significantly faster than in previous studies. Class separation of metabolites from different tissues was analyzed globally with the PCA, PLS-DA and HCA biostatistical methods. Overall, most of the hydrophilic metabolites were found to show a "fingerprint" characteristic of tissue dependency. In general, higher levels of most metabolites were found in urine, duodenum, and kidney. Altogether, these results suggest that this method has potential application for targeted metabolomic analyses of hydrophilic metabolites in a wide range of biological samples.
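
As an illustration of the unsupervised classification step (not the study's actual processing pipeline), the sketch below runs PCA on a hypothetical tissue-by-metabolite intensity matrix, after log transformation and autoscaling, and reports the tissue centroids in the first two principal components. The data, scaling choices and group sizes are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Hypothetical intensity matrix: 6 samples per tissue x 112 metabolites
tissues = ["urine", "duodenum", "liver", "plasma", "kidney", "heart", "muscle"]
n_per_tissue, n_metabolites = 6, 112
X = np.vstack([
    rng.lognormal(mean=rng.normal(0, 1, n_metabolites), sigma=0.3,
                  size=(n_per_tissue, n_metabolites))
    for _ in tissues
])
labels = np.repeat(tissues, n_per_tissue)

# Log-transform and autoscale before PCA, as is common for metabolomics intensities
X_scaled = StandardScaler().fit_transform(np.log(X))

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)
print("explained variance:", pca.explained_variance_ratio_)
for tissue in tissues:
    centroid = scores[labels == tissue].mean(axis=0)
    print(f"{tissue:9s} PC1={centroid[0]:+7.2f}  PC2={centroid[1]:+7.2f}")
```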

Relevance:

20.00%

Publisher:

Abstract:

Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach, motivated by the need for faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in time that is independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbors (FLANN) [Muja and Lowe, 2009].
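
The underlying idea can be sketched as follows (this is a simplified illustration under assumed camera intrinsics, not the authors' implementation): for an organised depth image, a query point is projected back to pixel coordinates through the pinhole model, and candidate neighbours are read from a fixed-size pixel window, so the lookup cost does not grow with the number of points in the cloud.

```python
import numpy as np

# Hypothetical pinhole intrinsics of the depth camera
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # focal lengths and principal point (pixels)

def neighbours_by_projection(query_xyz, organised_cloud, window=2):
    """Return candidate neighbours of a 3D query point by projecting it into the
    depth image and reading a (2*window+1)^2 pixel neighbourhood.

    organised_cloud : (H, W, 3) array of 3D points aligned with the depth image grid
                      (NaN where no depth was measured).
    """
    x, y, z = query_xyz
    u = int(round(FX * x / z + CX))            # project to pixel column
    v = int(round(FY * y / z + CY))            # project to pixel row
    h, w, _ = organised_cloud.shape
    u0, u1 = max(u - window, 0), min(u + window + 1, w)
    v0, v1 = max(v - window, 0), min(v + window + 1, h)
    patch = organised_cloud[v0:v1, u0:u1].reshape(-1, 3)
    return patch[~np.isnan(patch).any(axis=1)]  # drop invalid (NaN) points

# Usage: a cloud from a 480x640 depth frame, querying around one of its points
cloud = np.full((480, 640, 3), np.nan)
cloud[240, 320] = [0.0, 0.0, 1.0]
cloud[240, 321] = [0.002, 0.0, 1.0]
print(neighbours_by_projection([0.0, 0.0, 1.0], cloud))
```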

Relevance:

20.00%

Publisher:

Abstract:

The use of Mahalanobis squared distance (MSD)–based novelty detection in statistical damage identification has become increasingly popular in recent years. The merit of the MSD-based method is that it is simple and requires low computational effort, enabling the use of higher-dimensional damage-sensitive features, which are generally more sensitive to structural changes. MSD-based damage identification is also believed to be one of the most suitable methods for modern sensing systems such as wireless sensors. Despite these advantages, the method is rather strict in its input requirements, as it assumes the training data to be multivariate normal, something that is not always available, particularly at an early monitoring stage. As a consequence, it may result in an ill-conditioned training model with erroneous novelty detection and damage identification outcomes. To date, there appears to be no study on how to systematically cope with such practical issues, especially in the context of a statistical damage identification problem. To address this need, this article proposes a controlled data generation scheme based upon the Monte Carlo simulation methodology, with the addition of several controlling and evaluation tools to assess the condition of the output data. By evaluating the convergence of the data condition indices, the proposed scheme is able to determine the optimal setup for the data generation process and subsequently avoid unnecessarily excessive data. The efficacy of the scheme is demonstrated via application to data from a benchmark structure in the field.
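
For background on the underlying detector (this is not the article's data generation scheme), a minimal MSD novelty detector is sketched below: training features from the healthy state define a mean and covariance, each test feature vector is scored by its Mahalanobis squared distance, and a chi-squared quantile provides the novelty threshold. All data here are synthetic and the feature dimension is hypothetical.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)

# Synthetic damage-sensitive features (e.g. vectors of modal parameters)
d = 6                                                                # feature dimension
train = rng.multivariate_normal(np.zeros(d), np.eye(d), size=500)   # healthy (training) state
test_healthy = rng.multivariate_normal(np.zeros(d), np.eye(d), size=100)
test_damaged = rng.multivariate_normal(np.full(d, 2.0), np.eye(d), size=100)

# Training model: mean vector and inverse covariance of the healthy features
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def msd(x):
    """Mahalanobis squared distance of feature vector(s) x from the training model."""
    diff = x - mu
    return np.einsum("...i,ij,...j->...", diff, cov_inv, diff)

threshold = chi2.ppf(0.99, df=d)                # 99% confidence novelty threshold
print("false alarm rate:", np.mean(msd(test_healthy) > threshold))
print("detection rate  :", np.mean(msd(test_damaged) > threshold))
```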