109 results for Distinguishing guise
at Queensland University of Technology - ePrints Archive
Abstract:
There is increasing agreement that understanding complexity is important for project management because of the difficulties associated with decision-making and goal attainment that appear to stem from complexity. However, the current operational definitions of complex projects, based upon size and budget, have been challenged, and questions have been raised about how complexity can be measured in a robust manner that takes account of structural, dynamic and interaction elements. Thematic analysis of data from 25 in-depth interviews of project managers involved with complex projects, together with an exploration of the literature, reveals a wide range of factors that may contribute to project complexity. We argue that these factors contributing to project complexity may be defined in terms of dimensions, or source characteristics, which are in turn subject to a range of severity factors. In addition to investigating definitions and models of complexity from the literature and in the field, this study also explores the problematic issues of ‘measuring’ or assessing complexity. A research agenda is proposed to further the investigation of the phenomena reported in this initial study.
Abstract:
In many product categories of durable goods, such as TVs, PCs, and DVD players, the largest component of sales is generated by consumers replacing existing units. Aggregate sales models proposed by diffusion-of-innovation researchers for the replacement component of sales have incorporated several different replacement distributions, such as the Rayleigh, Weibull, Truncated Normal and Gamma. Although these alternative replacement distributions have been tested using both time-series sales data and individual-level actuarial “life-tables” of replacement ages, there is no consensus on which distributions are more appropriate to model replacement behaviour. In the current study we were motivated to develop a new “modified gamma” distribution for two reasons. First, we recognise that replacements have two fundamentally different drivers – those forced by failure and early, discretionary replacements. The replacement distribution for each of these drivers is expected to be quite different. Second, we observed a poor fit of other distributions to our empirical data. We conducted a survey of 8,077 households to empirically examine models of replacement sales for six electronic consumer durables – TVs, VCRs, DVD players, digital cameras, and personal and notebook computers. These data allow us to construct individual-level “life-tables” for replacement ages. We demonstrate that the new modified gamma model fits the empirical data better than existing models for all six products, using both a primary and a hold-out sample.
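The abstract does not give the functional form of the modified gamma model. As a minimal sketch of the two-driver idea, the example below mixes one gamma distribution for failure-forced replacements with an earlier one for discretionary replacements; all parameter values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical two-driver replacement-age model: a mixture of
# failure-forced replacements and earlier, discretionary ones.
# The paper's actual "modified gamma" form is not given in the
# abstract; all parameters here are made up for illustration.
p_discretionary = 0.4                          # share of early replacements
failure = stats.gamma(a=9.0, scale=1.0)        # mean ~9 years to failure
discretionary = stats.gamma(a=4.0, scale=1.0)  # mean ~4 years, upgrades

def replacement_pdf(age):
    """Density of replacement age under the two-driver mixture."""
    return (p_discretionary * discretionary.pdf(age)
            + (1 - p_discretionary) * failure.pdf(age))

ages = np.linspace(0.5, 20, 40)
print(replacement_pdf(ages)[:5])
```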
Abstract:
The "standard" procedure for calibrating the Vesuvio eV neutron spectrometer at the ISIS neutron source, forming the basis for data analysis over at least the last decade, was recently documented in considerable detail by the instrument’s scientists. Additionally, we recently derived analytic expressions of the sensitivity of recoil peak positions with respect to fight-path parameters and presented neutron–proton scattering results that together called in to question the validity of the "standard" calibration. These investigations should contribute significantly to the assessment of the experimental results obtained with Vesuvio. Here we present new results of neutron–deuteron scattering from D2 in the backscattering angular range (theata > 90 degrees) which are accompanied by a striking energy increase that violates the Impulse Approximation, thus leading unequivocally the following dilemma: (A) either the "standard" calibration is correct and then the experimental results represent a novel quantum dynamical effect of D which stands in blatant contradiction of conventional theoretical expectations; (B) or the present "standard" calibration procedure is seriously deficient and leads to artificial outcomes. For Case(A), we allude to the topic of attosecond quantumdynamical phenomena and our recent neutron scattering experiments from H2 molecules. For Case(B),some suggestions as to how the "standard" calibration could be considerably improved are made.
Abstract:
Mathematical descriptions of birth–death–movement processes are often calibrated to measurements from cell biology experiments to quantify tissue growth rates. Here we describe and analyze a discrete model of a birth–death–movement process applied to a typical two-dimensional cell biology experiment. We present three different descriptions of the system: (i) a standard mean-field description, which neglects correlation effects and clustering; (ii) a moment dynamics description, which approximately incorporates correlation and clustering effects; and (iii) averaged data from repeated discrete simulations, which directly incorporate correlation and clustering effects. Comparing these three descriptions indicates that the mean-field and moment dynamics approaches are valid only for certain parameter regimes, and that both descriptions fail to make accurate predictions of the system for sufficiently fast birth and death rates, where the effects of spatial correlations and clustering are sufficiently strong. Without any method to distinguish between the parameter regimes where these descriptions are valid, it is possible that either the mean-field or moment dynamics model could be calibrated to experimental data under inappropriate conditions, leading to errors in parameter estimation. In this work we demonstrate that a simple measurement of agent clustering and correlation, based on coordination number data, provides an indirect measure of agent correlation and clustering effects, and can therefore be used to distinguish between the parameter regimes in which the different descriptions of the birth–death–movement process are valid.
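A minimal sketch of a coordination-number measurement on a lattice snapshot (the occupancy pattern here is random, standing in for simulation output; this is illustrative only, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random occupancy of a 2D lattice, standing in for a snapshot of a
# birth-death-movement simulation (illustrative only; not the paper's
# exact model). Periodic boundaries are assumed via np.roll.
occupied = rng.random((100, 100)) < 0.3   # ~30% of sites occupied

def mean_coordination_number(occ):
    """Average number of occupied nearest neighbours per occupied site."""
    neighbours = (np.roll(occ, 1, axis=0).astype(int)
                  + np.roll(occ, -1, axis=0)
                  + np.roll(occ, 1, axis=1)
                  + np.roll(occ, -1, axis=1))
    return neighbours[occ].mean()

# For spatially random occupancy this is ~4 * density (~1.2 here);
# values above that indicate clustering.
print(mean_coordination_number(occupied))
```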
Abstract:
Many cell types form clumps or aggregates when cultured in vitro through a variety of mechanisms, including rapid cell proliferation, chemotaxis, or direct cell-to-cell contact. In this paper we develop an agent-based model to explore the formation of aggregates in cultures where cells are initially distributed uniformly, at random, on a two-dimensional substrate. Our model includes unbiased random cell motion, together with two mechanisms which can produce cell aggregates: (i) rapid cell proliferation, and (ii) a biased cell motility mechanism whereby cells can sense other cells within a finite range and tend to move towards areas with higher numbers of cells. We then introduce a pair-correlation function which allows us to quantify aspects of the spatial patterns produced by our agent-based model. In particular, these pair-correlation functions are able to detect differences between domains populated uniformly at random (i.e. at the exclusion complete spatial randomness (ECSR) state) and those where the proliferation and biased motion rules have been employed – even when such differences are not obvious to the naked eye. The pair-correlation function can also detect the emergence of a characteristic inter-aggregate distance, which occurs when the biased motion mechanism is dominant and is not observed when cell proliferation is the main mechanism of aggregate formation. This suggests that applying the pair-correlation function to experimental images of cell aggregates may provide information about the mechanism associated with the observed aggregates. As a proof of concept, we perform such an analysis for images of cancer cell aggregates, which are known to be associated with rapid proliferation. The results of our analysis are consistent with the predictions of the proliferation-based simulations, which supports the potential usefulness of pair-correlation functions for providing insight into the mechanisms of aggregate formation.
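A minimal sketch of a pair-correlation estimate for a 2D point pattern, normalised so that g(r) ~ 1 under complete spatial randomness while g(r) > 1 at short range signals aggregation (edge effects are ignored, so this is a rough estimator, not the paper's exact method):

```python
import numpy as np

rng = np.random.default_rng(1)
points = rng.random((500, 2))   # point pattern on the unit square

def pair_correlation(pts, bins=25, r_max=0.25):
    """Rough estimate of the pair-correlation function g(r) in 2D.

    Normalised so g(r) ~ 1 under complete spatial randomness;
    g(r) > 1 at short range signals aggregation. Edge effects are
    ignored, so this is a sketch, not the paper's exact estimator.
    """
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]             # distinct pair distances
    edges = np.linspace(0.0, r_max, bins + 1)
    counts, _ = np.histogram(d, bins=edges)
    shell_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    expected = 0.5 * n * (n - 1) * shell_area  # pairs expected under CSR
    return 0.5 * (edges[1:] + edges[:-1]), counts / expected

r, g = pair_correlation(points)
print(np.round(g[:5], 2))   # ~1 for a random pattern
```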
Abstract:
Aim. This paper is a report of the development and validation of a new job performance scale based on an established job performance model. Background. Previous measures of nursing quality are atheoretical and fail to incorporate the complete range of behaviours performed. Thus, an up-to-date measure of job performance is required for assessing nursing quality. Methods. Test construction involved systematic generation of test items using focus groups, a literature review, and an expert review of test items. A pilot study was conducted to determine the multidimensional nature of the taxonomy and its psychometric properties. All data were collected in 2005. Findings. The final version of the nursing performance taxonomy included 41 behaviours across eight dimensions of job performance. Results from preliminary psychometric investigations suggest that the nursing performance scale has good internal consistency, good convergent validity and good criterion validity. Conclusion. The findings give preliminary support for the new job performance scale as a reliable and valid tool for assessing nursing quality. However, further research using a larger sample and nurses from a broader geographical region is required to cross-validate the measure. This scale may be used to guide hospital managers regarding the quality of nursing care within units and to guide future research in the area.
Abstract:
We present a distinguishing attack against SOBER-128 with linear masking. We found a linear approximation which has a bias of 2^-8.8 for the non-linear filter. The attack applies the observation made by Ekdahl and Johansson that there is a sequence of clocks for which the linear combination of some states vanishes. This linear dependency allows the linear masking method to be applied. We also show that the bias of the distinguisher can be improved (or estimated more precisely) by considering quadratic terms of the approximation. The probability bias of the quadratic approximation used in the distinguisher is estimated to be O(2^-51.8), so we claim that SOBER-128 is distinguishable from a truly random cipher by observing O(2^103.6) keystream words.
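The stated data complexity follows from the standard rule of thumb that a linear distinguisher needs on the order of 1/bias^2 samples:

```python
# Rule of thumb for a linear distinguisher: distinguishing a biased
# keystream from random needs on the order of 1/bias^2 samples.
# With the quadratic approximation's bias of 2^-51.8, this reproduces
# the O(2^103.6) keystream words quoted above.
bias_log2 = -51.8
samples_log2 = -2 * bias_log2
print(f"~2^{samples_log2:.1f} keystream words")   # ~2^103.6
```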
Abstract:
Kimberlite drill core from the Muskox pipe (Northern Slave Province, Nunavut, Canada) highlights the difficulties in distinguishing coherent from fragmental kimberlite and in assessing the volcanological implications of the apparent gradational contact between the two facies. Using field log data, petrography, and several methods to quantify crystal and xenolith sizes and abundances, the pipe is divided into two main facies: dark-coloured massive kimberlite (DMK) and light-coloured fragmental kimberlite (LFK). DMK is massive and homogeneous, containing country-rock lithic clasts (~ 10%) and olivine macrocrysts (~ 15%) set in a dark, typically well-crystallised, interstitial medium containing abundant microphenocrysts of olivine (~ 15%), opaques and locally monticellite, all of which are enclosed by mostly serpentine. In general, LFK is also massive and structureless, containing ~ 20% country-rock lithic clasts and ~ 12% olivine macrocrysts. These framework components are supported in a matrix of serpentinized olivine microphenocrysts (10%), microlites of clinopyroxene, and phlogopite, all of which are enclosed by serpentine. The contact between the DMK and LFK facies is rarely sharp and is more commonly gradational (over 5 cm to ~ 10 m). The contact divides the pipe roughly in half and is sub-vertical with an irregular shape, locally placing DMK facies both above and below the fragmental rocks. Most features of DMK are consistent with a fragmental origin, particularly its crystal- and xenolith-rich nature (~ 55-65%), but there are some similarities with rocks described as coherent kimberlite in the literature. We discuss possible origins of gradational contacts and consider the significance for understanding the origin of the DMK facies, with an emphasis on the complications of alteration overprinting primary textures.
Abstract:
Knowledge cities are seen as fundamental to the economic growth and development of 21st-century cities. The purpose of this paper is to explore the knowledge city concept in depth. The paper discusses the principles of a knowledge city and portrays its distinguishing characteristics and processes. It relates and analyses Melbourne’s experience as a knowledge city and scrutinises Melbourne’s initiatives on science, technology and innovation and its policies for economic and social development. It also illustrates how the city administration played a key role in developing Melbourne as a globally recognised, entrepreneurial and competitive knowledge city. Finally, the paper identifies key success factors and provides insights for policy makers of MENA-region cities in designing knowledge cities.
Abstract:
The host specificity of the five published sewage-associated Bacteroides markers (i.e., HF183, BacHum, HuBac, BacH and Human-Bac) was evaluated in Southeast Queensland, Australia by testing fecal DNA samples (n = 186) from 11 animal species, including human fecal samples collected via the influent to a sewage treatment plant (STP). All human fecal samples (n = 50) were positive for all five markers, indicating 100% sensitivity of these markers. The overall specificity of the HF183 marker in differentiating between humans and animals was 99%. The specificities of the BacHum and BacH markers were > 94%, suggesting that these markers are suitable for detecting sewage pollution in environmental waters in Australia. The HuBac (i.e., 63% specificity) and Human-Bac (i.e., 79% specificity) markers performed poorly in distinguishing between the sources of human and animal fecal samples. It is recommended that the specificity of sewage-associated markers be rigorously tested prior to their application to identify the sources of fecal pollution in environmental waters.
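For reference, sensitivity and specificity here reduce to simple count ratios; a sketch with hypothetical counts consistent with the figures above (50 human samples, 136 animal samples among the 186 tested):

```python
# Sensitivity and specificity as simple count ratios; counts below are
# hypothetical but consistent with the figures above (50 human samples,
# 136 animal samples among the 186 tested).
def sensitivity(true_pos, false_neg):
    """Fraction of human/sewage samples the marker detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of animal samples the marker correctly rejects."""
    return true_neg / (true_neg + false_pos)

print(sensitivity(50, 0))              # 1.0  -> 100% sensitivity
print(round(specificity(135, 1), 2))   # 0.99 -> ~99% specificity
```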
Abstract:
This study explored kindergarten students’ intuitive strategies and understandings of probability. The paper aims to provide an in-depth insight into the levels of probability understanding across the four constructs proposed by Jones (1997) for kindergarten students. Qualitative evidence from two students revealed that, even before instruction, pupils have a good capacity for predicting most and least likely events, for distinguishing fair probability situations from unfair ones, for comparing the probability of an event in two sample spaces, and for recognizing conditional probability events. These results contribute to the growing evidence on kindergarten students’ intuitive probabilistic reasoning. The potential of this study for improving the learning of probability, as well as suggestions for further research, are discussed.
Abstract:
This article describes the linguistic and semantic features of technocratic discourse using a Systemic Functional Linguistics (SFL) framework. The article further asserts that the function of technocratic discourse in public policy is to advocate and promulgate a highly contentious political and economic agenda under the guise of scientific objectivity and political impartiality. We provide strong evidence to support the linguistic description, and the claims of political advocacy, by analyzing a 900-word document about globalization produced by the Australian Department of Foreign Affairs and Trade (DFAT).
Abstract:
As a consequence of the increased incidence of collaborative arrangements between firms, the competitive environment characterising many industries has undergone profound change. It is suggested that rivalry is not necessarily enacted by individual firms according to the traditional mechanisms of direct confrontation in factor and product markets, but rather as collaborative orchestration between a number of participants or network members. Strategic networks are recognised as sets of firms within an industry that exhibit denser strategic linkages among themselves than with other firms within the same industry. On this basis, strategic networks are determined according to evidence of strategic alliances between the firms comprising the industry; a single strategic network thus represents a group of firms closely linked by collaborative ties. Arguably, the collective outcome of these strategic relationships engineered between firms suggests that the collaborative benefits attributed to interorganisational relationships require closer examination with respect to their propensity to influence rivalry in intraindustry environments. Derived largely from the social sciences, network theory allows for the micro and macro examination of the opportunities and constraints inherent in the structure of relationships in strategic networks, establishing a relational approach upon which the conduct and performance of firms can be more fully understood. Research to date has yet to empirically investigate the relationship between strategic networks and rivalry. The limited research that has utilised a network rationale to investigate competitive patterns in contemporary industry environments has been characterised by a failure to directly measure rivalry. Further, this prior research has typically been embedded in industry settings dominated by technological or regulatory imperatives, such as the microprocessor and airline industries. These industries, due to the presence of such imperatives, are arguably more inclined to support the realisation of network rivalry, through subscription to prescribed technological standards (e.g., the microprocessor industry) or by being bound by regulatory constraints dictating operation within particular market segments (the airline industry). To counter these weaknesses, the proposition guiding this research – are patterns of rivalry predicted by strategic network membership? – is examined in the United States Light Vehicles Industry, an industry not dominated by technological or regulatory imperatives. Further, rivalry is directly measured and utilised in the research, thus distinguishing this investigation from prior research efforts. The timeframe of investigation is 1993–1999, with all research data derived from secondary sources. Strategic networks were defined within the United States Light Vehicles Industry based on evidence of horizontal strategic relationships between the firms comprising the industry. The measure of rivalry used to directly ascertain the competitive patterns of industry participants was derived from the traditional Herfindahl Index, modified to account for patterns of rivalry observed at the market segment level. Statistical analyses of the strategic network and rivalry constructs found little evidence to support the contention of network rivalry; indeed, greater levels of rivalry were observed between firms comprising the same strategic network than between firms participating in opposing network structures.
Based on these results, patterns of rivalry evidenced in the United States Light Vehicles Industry over the period 1993–1999 were not found to be predicted by strategic network membership. The findings generated by this research contrast with current theorising in the strategic network and rivalry realm and are, in this respect, surprising. The relevance of industry type, in conjunction with prevailing network methodology, provides the basis upon which these findings are contemplated. Overall, this study raises some important questions in relation to the relevance of the network rivalry rationale, establishing a fruitful avenue for further research.
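The thesis's exact modification of the Herfindahl Index is not detailed in the abstract; as a minimal sketch, the classical index (the sum of squared market shares) computed per market segment, with hypothetical shares:

```python
# Classical Herfindahl Index: the sum of squared market shares, here
# computed per market segment. The thesis's modification of the index
# is not detailed in the abstract; shares below are hypothetical.
def herfindahl(shares):
    total = sum(shares)
    return sum((s / total) ** 2 for s in shares)

segments = {
    "small_car": [0.40, 0.35, 0.25],
    "pickup":    [0.55, 0.30, 0.15],
}
for name, shares in segments.items():
    print(name, round(herfindahl(shares), 3))   # higher -> more concentrated
```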
Abstract:
Dragon is a word-based stream cipher. It was submitted to the eSTREAM project in 2005 and has advanced to Phase 3 of the software profile. This paper discusses the Dragon cipher from three perspectives: design, security analysis and implementation. The design of the cipher incorporates a single word-based non-linear feedback shift register and a non-linear filter function with memory. This state is initialized with 128- or 256-bit key-IV pairs. Each clock of the stream cipher produces 64 bits of keystream, using simple operations on 32-bit words. This provides the cipher with a high degree of efficiency in a wide variety of environments, making it highly competitive relative to other symmetric ciphers. The components of Dragon were designed to resist all known attacks. Although the design has been open to public scrutiny for several years, the only published attacks to date are distinguishing attacks, which require keystream lengths greatly exceeding the stated 2^64-bit maximum permitted keystream length for a single key-IV pair.
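To make the structure concrete, a toy skeleton of a word-based non-linear feedback shift register with an output filter is sketched below; the register size, feedback and filter functions are placeholders for illustration, not Dragon's actual components.

```python
# Toy skeleton of a word-based non-linear feedback shift register with
# an output filter. The register size, feedback and filter functions
# are placeholders for illustration -- NOT Dragon's actual components.
MASK32 = 0xFFFFFFFF

def toy_keystream(initial_state, n_words):
    """Yield n_words of 32-bit keystream from a toy word-based NLFSR."""
    state = [w & MASK32 for w in initial_state]   # register of 32-bit words
    for _ in range(n_words):
        # Placeholder non-linear feedback over a few register words.
        fb = ((state[0] + (state[3] ^ state[7])) * 0x9E3779B9) & MASK32
        # Placeholder non-linear filter producing the output word.
        out = (fb ^ (state[5] >> 7) ^ ((state[1] << 5) & MASK32)) ^ state[12]
        state = state[1:] + [fb]                  # shift the register
        yield out & MASK32

print([hex(w) for w in toy_keystream(range(1, 17), 4)])
```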