957 results for information gap
Abstract:
An estuary forms at the mouth of a river where the tides meet a freshwater flow, and it may be classified as a function of its salinity distribution and density stratification. An overview of the broad characteristics of the estuaries of South-East Queensland (Australia) is presented herein, where the small peri-urban estuaries may provide a useful indicator of potential changes which might occur in larger systems with growing urbanisation. Small peri-urban estuaries exhibit many key hydrological features and associated ecosystem types of larger estuaries, albeit at smaller scales, often with a greater extent of urban development as a proportion of catchment area. We explore the potential for some smaller peri-urban estuaries to be used as natural laboratories to gain much-needed information on estuarine processes, although any dynamic similarity is presently limited by the critical absence of in-depth physical investigations in larger estuarine systems. The absence of detailed turbulence and sedimentary data hampers the understanding and modelling of estuarine zones. The interactions between the various stakeholders are likely to define the vision for the future of South-East Queensland's peri-urban estuaries. This will require a solid understanding of the bio-physical function and capacity of the peri-urban estuaries. Given this knowledge gap, it is recommended that an adaptive trial-and-error approach be adopted for future investigation and management strategies.
Abstract:
The development of user expertise is a strategic imperative for organizations in hyper-competitive markets. This paper conceptualizes, operationalizes, and validates user expertise in contemporary Information Systems (IS) as a formative, multidimensional index. Such a validated and widely accepted index would facilitate the progression of past research on user competence and efficacy of IS to complex contemporary IS, while at the same time providing a benchmark for organizations to track their user expertise. The validation involved three separate studies, including exploratory and confirmatory phases, using data from 244 respondents.
Abstract:
This paper takes its root in a trivial observation: management approaches are unable to provide relevant guidelines for coping with the uncertainty and trust issues of our modern world. Thus, managers seek to reduce uncertainty through information-supported decision-making, sustained by ex-ante rationalization. They strive to achieve the best possible solution, stability, predictability, and control of the “future”. Hence, they turn to a plethora of “prescriptive panaceas” and “management fads” promising simple solutions through best practices. However, these solutions are ineffective. They address only one part of a system (e.g. an organization) instead of the whole, and they miss the interactions and interdependencies with other parts, leading to “suboptimization”. Further, classical cause-effect investigations and research are not very helpful in this regard. Where do we go from there? In this conversation, we want to challenge the assumptions supporting traditional management approaches and shed some light on the problem of management-discourse fads, using the concept of maturity and maturity models in the context of temporary organizations as a support for reflection. The global economy is characterized by the use and development of standards, and compliance with standards as a practice is said to enable better decision-making by managers under uncertainty, control of complexity, and higher performance. Amongst the plethora of standards, organizational maturity and maturity models hold a specific place, due to a general belief in organizational performance as a dependent variable of continuous (business) process improvement, grounded on a kind of evolutionary metaphor. Our intention is neither to offer a new “evidence-based management fad” for practitioners, nor to suggest a research gap to scholars.
Rather, we want to open an assumption-challenging conversation with regard to mainstream approaches (neo-classical economics and organization theory), turning “our eyes away from the blinding light of eternal certitude towards the refracted world of turbid finitude” (Long, 2002, p. 44), generating what Bernstein has named “Cartesian Anxiety” (Bernstein, 1983, p. 18), and to revisit the conceptualization of maturity and maturity models. We rely on conventions theory and a systemic-discursive perspective. These two lenses have both information & communication and self-producing systems as common threads. Furthermore, the narrative approach is well suited to exploring complex ways of thinking about organizational phenomena as complex systems. This approach is relevant to our object of curiosity, i.e. the concept of maturity and maturity models, as maturity models (as standards) are discourses and systems of regulations. The main contribution of this conversation is the suggestion to move from a neo-classical “theory of the game”, aiming at making the complex world simpler in playing the game, to a “theory of the rules of the game”, aiming at influencing and challenging the rules of the game constitutive of maturity models – conventions, governing systems – making individual calculation compatible with social context, and making possible the coordination of relationships and cooperation between agents with divergent or potentially divergent interests and values. A second contribution is the reconceptualization of maturity as a structural coupling between conventions, rather than as an independent variable leading to organizational performance.
Abstract:
Each year The Australian Centre for Philanthropy and Nonprofit Studies (ACPNS) at QUT analyses statistics on tax-deductible donations made by Australians in their individual income tax returns to Deductible Gift Recipients (DGRs). The information presented below is based on the amount and type of tax-deductible donations made by Australian taxpayers to DGRs for the period 1 July 2010 to 30 June 2011, extracted from the Australian Taxation Office's publication Taxation Statistics 2010-2011.
Abstract:
Social Media (SM) is increasingly being integrated with business information in decision making. Unique characteristics of social media (e.g. wide accessibility, permanence, global audience, recency, and ease of use) raise new issues with information quality (IQ), quite different from traditional considerations of IQ in information systems (IS) evaluation. This paper presents a preliminary conceptual model of information quality in social media (IQnSM), derived through directed content analysis and employing characteristics of analytic theory in the study protocol. Based on the notion of ‘fitness for use’, IQnSM is highly use- and user-centric and is defined as “the degree to which information is suitable for performing a specified task by a specific user, in a certain context”. IQnSM is operationalised as hierarchical, formed by three dimensions (18 measures): intrinsic quality, contextual quality and representational quality. A research plan for empirically validating the model is proposed.
Abstract:
It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. This survey examines existing approaches in this field based on a common set of criteria and illustrates their key concepts using a running example. The analysis shows that existing approaches share a common trait: they extend a conventional process modeling language with constructs that make it able to capture customizable process models. A customizable process model represents a family of process variants in such a way that each variant can be derived by adding or deleting fragments according to configuration parameters or according to a domain model. The survey puts into evidence an abundance of customizable process modeling languages, embodying a diverse set of constructs. In contrast, there is comparatively little tool support for analyzing and constructing customizable process models, as well as a scarcity of empirical evaluations of languages in the field.
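The derivation of a variant by deleting fragments according to configuration parameters can be sketched in a few lines. The process steps and configuration options below are hypothetical, not taken from any surveyed language; this is a minimal sketch of the individualization step the survey describes, not any particular approach's semantics.

```python
# Hypothetical customizable model: each fragment is guarded by a
# configuration option; None marks a fragment mandatory in every variant.
customizable_model = [
    ("check_credit", "b2b"),
    ("verify_identity", None),
    ("apply_discount", "retail"),
    ("ship_goods", None),
]

def derive_variant(model, enabled_options):
    """Derive a process variant by deleting fragments whose
    configuration option is not enabled."""
    return [step for step, option in model
            if option is None or option in enabled_options]

print(derive_variant(customizable_model, {"retail"}))
# ['verify_identity', 'apply_discount', 'ship_goods']
```

Real customizable languages additionally enforce domain constraints (e.g. two options being mutually exclusive) before a derived variant is accepted, which this sketch omits.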
Abstract:
With the increasing popularity and adoption of building information modeling (BIM), the amount of digital information available about a building is overwhelming. Enormous challenges remain, however, in identifying the meaningful and required information from a complex BIM model to support a particular construction management (CM) task. Detailed specifications of the information required by different construction domains, together with expressive and easy-to-use BIM reasoning mechanisms, are seen as important means of addressing these challenges. This paper analyzes some of the characteristics and requirements of component-specific construction knowledge in relation to current work practice and BIM-based applications. It is argued that domain ontologies and information extraction approaches, such as queries, could bring much-needed support for knowledge sharing and integration of information between design, construction and facility management.
Abstract:
In this study, we explore the relationship between the qualities of the information system environment and management accounting adaptability. The information system environment refers to three distinct elements: the degree of information system integration, system flexibility, and shared knowledge between business unit managers and the IT function. We draw on the literature on integrated information systems (IIS) and management accounting change and propose a model to test the hypothesized relationships. The sample for this study consists of Australian companies from all industries.
Abstract:
Aims: Wellness assessments can determine adolescent lifestyle behaviors. A better understanding of wellness differences between high- and low-SES adolescents could assist policy makers in developing improved strategies to bridge the gap between these two groups. The aim of this investigation was to explore wellness differences between high- and low-SES adolescents. Methods: In total, 241 (125 high- and 116 low-SES) adolescents completed the 5-Factor Wellness Inventory (5F-Wel). The 5F-Wel comprises 97 items contributing to 17 subscales, 5 dimensions, 4 contexts, total wellness, and a life satisfaction index, with scores ranging from 0 to 100. Independent-samples t-tests were performed, with Levene's test for equality of variances used to check the assumption of homogeneity of variances. Results: Overall, 117 (94%) and 112 (97%) high- and low-SES participants had complete data and were included in the analysis. The high-SES group scored higher for total wellness (M = 81.09, SE = .61) than the low-SES group (M = 75.73, SE = .99). This difference was significant, t(186) = 4.635, p < .05, with a medium effect size, r = .32. The high-SES group scored higher on 23 of 27 scales (21 scales, p < .05), while the low-SES group scored higher on the remaining scales (all non-significant). Conclusion: These results contribute empirical data to the body of literature, indicating a large wellness discrepancy between high and low SES youth. Deficient areas can be targeted by policymakers to assist in bridging the gap between these groups.
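The medium effect size reported in this abstract follows from the standard conversion of a t statistic to the correlation coefficient r for an independent-samples t-test, r = sqrt(t²/(t² + df)), which can be checked directly (a minimal sketch reproducing the reported numbers, not the authors' analysis code):

```python
import math

def effect_size_r(t: float, df: int) -> float:
    """Convert a t statistic to the correlation effect size r
    via r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t**2 / (t**2 + df))

# Reproduce the abstract's effect size from t(186) = 4.635.
r = effect_size_r(4.635, 186)
print(round(r, 2))  # 0.32, a medium effect by the usual benchmarks
```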
Abstract:
This thesis develops the hardware and software framework for an integrated navigation system. Dynamic data fusion algorithms are used to develop a system with a high level of resistance to the typical problems that affect standard navigation systems.
Abstract:
The article focuses on how the information seeker makes decisions about relevance. It employs a novel decision theory based on quantum probabilities. This direction derives from mounting research within the field of cognitive science showing that decision theory based on quantum probabilities is superior to standard probability models for modelling human judgements [2, 1]. By quantum probabilities, we mean that the decision event space is modelled as a vector space rather than as the usual Boolean algebra of sets. In this way, incompatible perspectives around a decision can be modelled, leading to an interference term which modifies the law of total probability. The interference term is crucial in modifying the probability judgements made by current probabilistic systems so that they align better with human judgement. The goal of this article is thus to model the information seeker as a decision maker. For this purpose, signal detection models are sketched which are in principle applicable in a wide variety of information seeking scenarios.
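The modification of the law of total probability mentioned above is usually written with an explicit interference term. A sketch of the standard quantum-cognition form, for a decision D and two mutually exclusive perspectives A and B (the phase angle θ is a free parameter capturing the incompatibility of the perspectives; the notation is the generic one, not necessarily the article's):

```latex
% Classical law of total probability:
%   P(D) = P(A)P(D \mid A) + P(B)P(D \mid B)
% Quantum probability adds an interference term:
P(D) = P(A)\,P(D \mid A) + P(B)\,P(D \mid B)
     + 2\sqrt{P(A)\,P(D \mid A)\,P(B)\,P(D \mid B)}\,\cos\theta
```

When cos θ = 0 the classical law is recovered; a nonzero θ lets the model over- or under-weight an outcome relative to the classical prediction, which is how such models are fitted to systematic deviations in human judgement.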
Abstract:
In this paper, a two-dimensional (2-D) numerical investigation of flow past four square cylinders in an in-line square configuration is performed using the lattice Boltzmann method. The gap spacing g = s/d is set at 1, 3 and 6, with the Reynolds number ranging from Re = 60 to 175. We observed four distinct wake patterns: (i) a steady wake pattern (Re = 60 and g = 1); (ii) a stable shielding wake pattern (80 ≤ Re ≤ 175 and g = 1); (iii) a wiggling shielding wake pattern (60 ≤ Re ≤ 175 and g = 3); and (iv) a vortex shedding wake pattern (60 ≤ Re ≤ 175 and g = 6). At g = 1, the Reynolds number is observed to have a strong effect on the wake patterns. It is also found that at g = 1, the secondary cylinder interaction frequency contributes significantly to the drag and lift coefficient signals. At g = 6, the primary vortex shedding frequency dominates the flow and the role of the secondary cylinder interaction frequency almost vanishes. It is observed that the jet between the gaps strongly influences the wake interaction for different combinations of gap spacing and Reynolds number. To fully understand the wake transformations, detailed vorticity contour visualisations, power spectra of the lift coefficient signal, and time-signal analyses of the drag and lift coefficients are also presented in this paper.
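The power-spectrum analysis of the lift coefficient signal mentioned above amounts to locating the dominant peak in the spectrum, which identifies the primary shedding frequency even when a weaker cylinder-interaction frequency is present. A minimal sketch on a synthetic signal (the 5 Hz primary and 12 Hz secondary frequencies and their amplitudes are hypothetical, chosen only for illustration; a production analysis would use an FFT rather than this naive DFT):

```python
import math

def dominant_frequency(signal, dt):
    """Return the frequency (Hz) with the largest spectral power,
    via a naive DFT; adequate for short demonstration signals."""
    n = len(signal)
    mean = sum(signal) / n
    best_k, best_power = 0, 0.0
    for k in range(1, n // 2):
        re = sum((s - mean) * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(signal))
        im = sum((s - mean) * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(signal))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k / (n * dt)

# Synthetic lift coefficient: a strong primary shedding component plus a
# weaker secondary cylinder-interaction component (hypothetical values).
dt = 0.01
cl = [0.8 * math.sin(2 * math.pi * 5 * i * dt)
      + 0.2 * math.sin(2 * math.pi * 12 * i * dt) for i in range(200)]
print(dominant_frequency(cl, dt))  # 5.0
```

Normalising the recovered frequency by d/U would give the Strouhal number conventionally reported in such studies.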
Abstract:
This paper presents research findings and design strategies that illustrate how digital technology can be applied as a tool for hybrid placemaking in ways that would not be possible in purely digital or physical space. Digital technology has revolutionised the way people learn and gather new information. This trend has challenged the role of the library as a physical place, as well as the interplay of the digital and physical aspects of the library. The paper provides an overview of how the penetration of digital technology into everyday life has affected the library as a place, both as designed by placemakers and as perceived by library users. It then identifies a gap in current library research about the use of digital technology as a tool for placemaking, and reports results from a study of Gelatine – a custom-built user check-in system that displays real-time user information on a set of public screens. Gelatine and its evaluation at The Edge, at the State Library of Queensland, illustrate how combining the affordances of social, spatial and digital space can improve the connected learning experience among on-site visitors. Future design strategies involving gamifying the user experience in libraries are described, based on Gelatine's infrastructure. The presented design ideas and concepts are relevant for managers and designers of libraries as well as other informal, social learning environments.
Abstract:
This study is the first to employ an epidemiological framework to evaluate the ‘fit-for-purpose’ of ICD-10-AM external cause of injury codes, ambulance and hospital clinical documentation for injury surveillance. Importantly, this thesis develops an evidence-based platform to guide future improvements in routine data collections used to inform the design of effective injury prevention strategies. Quantification of the impact of ambulance clinical records on the overall information quality of Queensland hospital morbidity data collections for injury causal information is a unique and notable contribution of this study.
Abstract:
The Web is a steadily evolving resource comprising much more than mere HTML pages. With its ever-growing data sources in a variety of formats, it provides great potential for knowledge discovery. In this article, we shed light on some interesting phenomena of the Web: the deep Web, which surfaces database records as Web pages; the Semantic Web, which defines meaningful data exchange formats; XML, which has established itself as a lingua franca for Web data exchange; and domain-specific markup languages, which are designed based on XML syntax with the goal of preserving semantics in targeted domains. We detail these four developments in Web technology, and explain how they can be used for data mining. Our goal is to show that all these areas can be as useful for knowledge discovery as the HTML-based part of the Web.
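As a concrete illustration of XML as a lingua franca for data exchange, records can be flattened out of an XML document into mining-ready tuples with standard tooling (the catalog document and its field names are invented for illustration, not taken from the article):

```python
import xml.etree.ElementTree as ET

# A toy XML document standing in for a domain-specific markup feed.
xml_doc = """<catalog>
  <book genre="data-mining"><title>Web Mining</title><year>2010</year></book>
  <book genre="databases"><title>Deep Web Surfacing</title><year>2008</year></book>
</catalog>"""

root = ET.fromstring(xml_doc)
# Flatten the tree into (genre, title, year) tuples for a mining algorithm.
records = [(b.get("genre"), b.findtext("title"), int(b.findtext("year")))
           for b in root.iter("book")]
print(records)
# [('data-mining', 'Web Mining', 2010), ('databases', 'Deep Web Surfacing', 2008)]
```

The same flattening step applies whether the XML arrives from a surfaced deep-Web record, an RDF/XML Semantic Web export, or a domain-specific markup language.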