129 results for Two Approaches
Abstract:
A kinetic spectrophotometric method with the aid of chemometrics is proposed for the simultaneous determination of norfloxacin and rifampicin in mixtures. The proposed method was applied to the simultaneous determination of these two compounds in a pharmaceutical formulation and in human urine samples, and the results obtained are similar to those obtained by high performance liquid chromatography.
Abstract:
The co-authors raise two matters they consider essential for the future development of ECEfS. The first is the need to create deep foundations based in research. At a time of increasing practitioner interest, research in ECEfS is meagre. A robust research community is crucial to support quality in curriculum and pedagogy, and to promote learning and innovation in thinking and practice. The second 'essential' for the expansion and uptake of ECEfS is broad systemic change. All levels within the early childhood education system - individual teachers and classrooms, whole centres and schools, professional associations and networks, accreditation and employing authorities, and teacher educators - must work together to create and reinforce the cultural and educational changes required for sustainability. This chapter provides explanations of processes to engender systemic change. It illustrates a systems approach, with reference to a recent study focused on embedding EfS into teacher education. This study emphasises the apparent paradox that the answer to large-scale reform lies in small-scale reforms that build capacity and make connections.
Abstract:
Developmental progression and differentiation of distinct cell types depend on the regulation of gene expression in space and time. Tools that allow spatial and temporal control of gene expression are crucial for the accurate elucidation of gene function. Most systems to manipulate gene expression allow control of only one factor, space or time, and currently available systems that control both temporal and spatial expression of genes have their limitations. We have developed a versatile two-component system that overcomes these limitations, providing reliable, conditional gene activation in restricted tissues or cell types. This system allows conditional tissue-specific ectopic gene expression and provides a tool for conditional cell type- or tissue-specific complementation of mutants. The chimeric transcription factor XVE, in conjunction with Gateway recombination cloning technology, was used to generate a tractable system that can efficiently and faithfully activate target genes in a variety of cell types. Six promoters/enhancers, each with different tissue specificities (including vascular tissue, trichomes, root, and reproductive cell types), were used in activation constructs to generate different expression patterns of XVE. Conditional transactivation of reporter genes was achieved in a predictable, tissue-specific pattern of expression, following the insertion of the activator or the responder T-DNA in a wide variety of positions in the genome. Expression patterns were faithfully replicated in independent transgenic plant lines. Results demonstrate that we can also induce mutant phenotypes using conditional ectopic gene expression. One of these mutant phenotypes could not have been identified using noninducible ectopic gene expression approaches.
Abstract:
The robust economic growth across South East Asia and the significant advances in nanotechnologies in the past two decades have resulted in the creation of intelligent urban infrastructures. Cities like Seoul, Tokyo and Hong Kong have been competing against each other to develop the first ‘ubiquitous city’, a strategic global node of science and technology that provides all municipal services for residents and visitors via ubiquitous infrastructures. This chapter scrutinises the development of ubiquitous and smart infrastructure in Korea, Japan and Hong Kong. These cases provide invaluable lessons for policy-makers and urban and infrastructure planners when considering adopting these systems approaches in their cities.
Abstract:
Information overload and mismatch are two fundamental problems affecting the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address the problems of overload and mismatch, neither of these approaches alone can provide a satisfactory solution to these problems. This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter a sheer volume of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. The experimental results based on the RCV1 corpus show that the proposed two-stage filtering model significantly outperforms both term-based and pattern-based information filtering models.
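The two-stage idea can be illustrated with a minimal sketch: a cheap term-based pass discards clearly irrelevant documents (the overload problem), and a pattern-based pass ranks the survivors by matched term-sets (the mismatch problem). The term weights, patterns, and threshold below are illustrative placeholders, not the paper's actual rough analysis or pattern taxonomy models.

```python
# Sketch of a two-stage filter: stage 1 is a rough term-based screen,
# stage 2 re-ranks the remaining documents with term-set "patterns".
term_weights = {"filtering": 0.9, "pattern": 0.8, "taxonomy": 0.7, "mining": 0.6}
patterns = [({"pattern", "taxonomy"}, 1.5), ({"filtering", "mining"}, 1.2)]  # (term-set, support)
STAGE1_THRESHOLD = 0.5

def stage1_score(doc_terms):
    # Rough term-based relevance: sum of profile-term weights present in the document.
    return sum(w for t, w in term_weights.items() if t in doc_terms)

def stage2_score(doc_terms):
    # Pattern-based relevance: reward documents that cover a whole term-set.
    return sum(support for termset, support in patterns if termset <= doc_terms)

def filter_documents(docs):
    relevant = []
    for doc in docs:
        terms = set(doc.lower().split())
        if stage1_score(terms) < STAGE1_THRESHOLD:
            continue  # stage 1: drop clearly irrelevant documents
        relevant.append((stage2_score(terms), doc))  # stage 2: rank the rest
    return [doc for score, doc in sorted(relevant, reverse=True)]

if __name__ == "__main__":
    docs = ["pattern taxonomy mining for filtering", "cooking recipes and gardening tips"]
    print(filter_documents(docs))
```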
Abstract:
Micro-businesses, those with fewer than five employees, have a significant impact on the economy. These very small players represent 89% of all Australian businesses and, collectively, they provide 17% of the nation’s private sector employment. They are ubiquitous in Australia as in many other nations, embedded in local communities and therefore well placed to influence community wellbeing. Surprisingly, very little is known about micro-Business Community Responsibility (mBCR), the micro-business equivalent of Small Business Social Responsibility (SBSR) and Corporate Social Responsibility (CSR). Most national data available on business support for community wellbeing does not separately identify micro-business contributions. In this study an exploratory approach informed by business ethics theory was taken. Data from 36 semi-structured interviews were analysed to examine perceived mBCR approaches, motivations and barriers. The sample for this study was a mix of micro-business owner-operators situated in suburban shopping areas in Brisbane. Three types of mBCR emerged. All types are at least partly driven by enlightened self-interest (ESI). However, of the three mBCR types, two combine ESI with other approaches. One type combines ESI and philanthropic approaches to mBCR, and the other combines ESI with social entrepreneurial approaches to mBCR. The combination of doing business and doing good for many micro-business owner-operators suggests that mBCR may be a significant, yet unrecognised, component of the third sector social economy.
Abstract:
This paper presents a novel two-stage information filtering model which combines the merits of term-based and pattern-based approaches to effectively filter a sheer volume of information. In particular, the first filtering stage is supported by a novel rough analysis model which efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model which effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. Experiments were conducted to compare the proposed two-stage filtering (T-SM) model with other possible "term-based + pattern-based" or "term-based + term-based" IF models. The results based on the RCV1 corpus show that the T-SM model significantly outperforms other types of "two-stage" IF models.
Abstract:
Stochastic models for competing clonotypes of T cells by multivariate, continuous-time, discrete state, Markov processes have been proposed in the literature by Stirk, Molina-París and van den Berg (2008). A stochastic modelling framework is important because of rare events associated with small populations of some critical cell types. Usually, computational methods for these problems employ a trajectory-based approach, based on Monte Carlo simulation. This is partly because the complementary probability density function (PDF) approaches can be expensive, but here we describe some efficient PDF approaches by directly solving the governing equations, known as the Master Equation. These computations are made very efficient through an approximation of the state space by the Finite State Projection and through the use of Krylov subspace methods when computing the action of the matrix exponential. These computational methods allow us to explore the evolution of the PDFs associated with these stochastic models, and bimodal distributions arise in some parameter regimes. Time-dependent propensities naturally arise in immunological processes due to, for example, age-dependent effects. Incorporating time-dependent propensities into the framework of the Master Equation significantly complicates the corresponding computational methods, but here we describe an efficient approach via Magnus formulas. Although this contribution focuses on the example of competing clonotypes, the general principles are relevant to multivariate Markov processes and provide fundamental techniques for computational immunology.
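The Finite State Projection plus Krylov-exponential approach can be sketched for a much simpler system than the competing-clonotype model: a single birth-death population. The rates, truncation level, and time grid below are illustrative placeholders only; the sketch uses SciPy's Krylov-based action of the matrix exponential to evolve the truncated Master Equation.

```python
# Sketch: Finite State Projection (FSP) of a birth-death master equation,
# evolved with a Krylov-based matrix-exponential action (expm_multiply).
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import expm_multiply

birth, death, N = 5.0, 0.5, 200          # birth rate, per-capita death rate, FSP truncation

# Generator A of the truncated master equation dp/dt = A p, states n = 0..N.
A = lil_matrix((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        A[n + 1, n] += birth              # birth: n -> n+1
        A[n, n] -= birth
    if n > 0:
        A[n - 1, n] += death * n          # death: n -> n-1
        A[n, n] -= death * n
A = A.tocsc()

p0 = np.zeros(N + 1)
p0[0] = 1.0                               # start with an empty population

p_t = expm_multiply(A, p0, start=0.0, stop=10.0, num=5)  # PDFs at 5 time points
print(p_t.sum(axis=1))                    # probability retained inside the projection
```

The column sums indicate how much probability mass the projection retains; in a full FSP scheme this error is monitored and the state space enlarged when it grows too large.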
Abstract:
This chapter focuses on the interactions between delays and intrinsic noise effects, and their roles, within cellular pathways and regulatory networks. We address these aspects by focusing on genetic regulatory networks that share a common network motif, namely the negative feedback loop, leading to oscillatory gene expression and protein levels. In this context, we discuss computational simulation algorithms for addressing the interplay of delays and noise within the signaling pathways based on biological data. We address implementation issues associated with efficiency and robustness. In a molecular biology setting we present two case studies of temporal models for the Hes1 gene (Monk, 2003; Hirata et al., 2002), known to act as a molecular clock, and the Her1/Her7 regulatory system controlling the periodic somite segmentation in vertebrate embryos (Giudicelli and Lewis, 2004; Horikawa et al., 2006).
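A minimal sketch of the kind of simulation algorithm involved is a stochastic simulation algorithm with delays (a "delay SSA") for a Hes1-like negative feedback loop: protein represses transcription of its own mRNA, and translation completes only after a fixed delay tau. The parameters and the simplified reaction set are illustrative assumptions, not fitted to the Hes1 or Her1/Her7 data cited above.

```python
# Delay SSA sketch: protein-repressed transcription with delayed translation.
import heapq
import random
import math

def delay_ssa(t_end=500.0, tau=20.0):
    m, p = 0, 0                                   # mRNA and protein copy numbers
    k_m, k_p, d_m, d_p, K, h = 1.0, 2.0, 0.03, 0.03, 100.0, 4
    t, pending, history = 0.0, [], []             # pending: (finish_time, product) events
    while t < t_end:
        a = [k_m / (1.0 + (p / K) ** h),          # repressed transcription
             k_p * m,                             # translation (completes after tau)
             d_m * m,                             # mRNA decay
             d_p * p]                             # protein decay
        a0 = sum(a)
        dt = math.inf if a0 == 0 else random.expovariate(a0)
        if pending and pending[0][0] <= t + dt:
            t, _ = heapq.heappop(pending)         # a delayed translation finishes first
            p += 1
        else:
            t += dt
            r, acc, i = random.uniform(0, a0), 0.0, 0
            while acc + a[i] < r:                 # pick the reaction channel
                acc += a[i]
                i += 1
            if i == 0:
                m += 1
            elif i == 1:
                heapq.heappush(pending, (t + tau, "protein"))  # schedule delayed product
            elif i == 2:
                m -= 1
            else:
                p -= 1
        history.append((t, m, p))
    return history

print(delay_ssa()[-1])
```

With a sufficiently long delay relative to the decay times, trajectories from this kind of model show the sustained oscillations associated with delayed negative feedback.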
Abstract:
Over the last two decades, particularly in Australia and the UK, the doctoral landscape has changed considerably with increasingly hybridised approaches to methodologies and research strategies as well as greater choice of examinable outputs. This paper provides an overview of doctoral practices that are emerging in the context of the creative industries, with a focus on practice-led approaches within the Doctor of Philosophy and recent developments in professional doctorates, from a predominantly Australian perspective. In interrogating what constitutes ‘doctorateness’ in this context, the paper examines some of the diverse theoretical principles which foreground the practitioner/researcher, methodological approaches that incorporate tacit knowledge and reflective practice together with qualitative strategies, blended learning delivery modes, and flexible doctoral outputs; and how these are shaping this shifting environment. The paper concludes with a study of the Doctor of Creative Industries at Queensland University of Technology as one model of an interdisciplinary professional research doctorate.
Abstract:
This study seeks to analyse the adequacy of the current regulation of the payday lending industry in Australia, and consider whether there is a need for additional regulation to protect consumers of these services. The report examines the different regulatory approaches adopted in comparable OECD countries, and reviews alternative models for payday regulation, in particular, the role played by responsible lending. The study also examines the consumer protection mechanisms now in existence in Australia in the National Consumer Credit Protection Act 2009 (Cth) (NCCP) and the National Credit Code (NCC) contained in Schedule 1 of that Act and in the Australian Securities and Investments Commission Act 2001 (Cth).
Abstract:
Unstructured text data, such as emails, blogs, contracts, academic publications, organizational documents, transcribed interviews, and even tweets, are important sources of data in Information Systems research. Various forms of qualitative analysis of the content of these data exist and have revealed important insights. Yet, to date, these analyses have been hampered by limitations of human coding of large data sets, and by bias due to human interpretation. In this paper, we compare and combine two quantitative analysis techniques to demonstrate the capabilities of computational analysis for content analysis of unstructured text. Specifically, we seek to demonstrate how two quantitative analytic methods, viz., Latent Semantic Analysis and data mining, can aid researchers in revealing core content topic areas in large (or small) data sets, and in visualizing how these concepts evolve, migrate, converge or diverge over time. We exemplify the complementary application of these techniques through an examination of a 25-year sample of abstracts from selected journals in Information Systems, Management, and Accounting disciplines. Through this work, we explore the capabilities of two computational techniques, and show how these techniques can be used to gather insights from a large corpus of unstructured text.
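The Latent Semantic Analysis step described above can be sketched in a few lines: abstracts are mapped to a TF-IDF term-document matrix and reduced with a truncated SVD, so each latent dimension can be read as a candidate topic area. The toy abstracts and the number of topics are illustrative placeholders, not the paper's 25-year journal sample.

```python
# LSA sketch: TF-IDF weighting followed by truncated SVD over a toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

abstracts = [
    "information systems adoption in organisations",
    "management accounting and organisational control",
    "decision support systems for management",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)            # documents x terms

lsa = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsa.fit_transform(X)             # documents x latent topics

terms = tfidf.get_feature_names_out()
for k, component in enumerate(lsa.components_):
    top = component.argsort()[-3:][::-1]      # highest-loading terms per topic
    print(f"topic {k}:", [terms[i] for i in top])
```

Tracking the document-topic coordinates over publication year is one way such concepts can be visualised as they evolve, converge, or diverge over time.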
Abstract:
Information mismatch and overload are two fundamental issues influencing the effectiveness of information filtering systems. Even though both term-based and pattern-based approaches have been proposed to address the issues, neither of these approaches alone can provide a satisfactory decision for determining the relevant information. This paper presents a novel two-stage decision model for solving the issues. The first stage is a novel rough analysis model to address the overload problem. The second stage is a pattern taxonomy mining model to address the mismatch problem. The experimental results on RCV1 and TREC filtering topics show that the proposed model significantly outperforms the state-of-the-art filtering systems.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains amongst others. Currently, the burden of validating both the interactive functionality and visual consistency of virtual environment content is entirely carried out by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date, only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness too, can be quantified for debugging purposes. Since computer games heavily rely on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
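As a rough, much-simplified sketch of how visual correctness might be quantified for automated testing, one can compare a rendered frame against a reference frame pixel-wise, and treat excess high-frequency gradient energy as a crude aliasing indicator. The frame source, metrics, and thresholds below are illustrative assumptions, not the thesis' actual model-based or connectionist detectors.

```python
# Sketch: pixel-wise consistency score and a crude high-frequency aliasing measure.
import numpy as np

def consistency_score(frame, reference):
    # Mean absolute per-pixel difference; 0 means the renderings are identical.
    return float(np.abs(frame.astype(float) - reference.astype(float)).mean())

def aliasing_energy(frame):
    # High-frequency energy: mean gradient magnitude of the luminance channel.
    gray = frame.astype(float).mean(axis=2)
    gy, gx = np.gradient(gray)
    return float(np.hypot(gx, gy).mean())

if __name__ == "__main__":
    reference = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in frames
    frame = reference.copy()
    print(consistency_score(frame, reference), aliasing_energy(frame))
```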
Abstract:
The literature supporting the notion that active, student-centered learning is superior to passive, teacher-centered instruction is encyclopedic (Bonwell & Eison, 1991; Bruning, Schraw, & Ronning, 1999; Haile, 1997a, 1997b, 1998; Johnson, Johnson, & Smith, 1999). Previous action research demonstrated that introducing a learning activity in class improved the learning outcomes of students (Mejias, 2010). People acquire knowledge and skills through practice and reflection, not by watching and listening to others telling them how to do something. In this context, this project aims to provide further insight into how much interactivity a class curriculum should include, and how it should align with assessment, so that the intended learning outcomes (ILOs) are achieved. In this project, interactivity is implemented in the form of problem-based learning (PBL). I present the argument that more continuous formative feedback, when implemented with the right amount of PBL, stimulates student engagement, bringing substantial benefits to student learning. Different levels of practical work (PBL) were implemented together with two different assessment approaches in two subjects. The outcomes were measured using qualitative and quantitative data to evaluate the levels of student engagement and satisfaction in terms of the ILOs.