506 results for conceptual space
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students and the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each of these. Key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
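The expected-vs-demonstrated comparison at the heart of the model can be sketched as a simple mapping exercise. The following is a minimal illustration, not the paper's implementation; the topic names, the integer Bloom indices, and the gap convention are all hypothetical:

```python
# Bloom's Taxonomy as an ordered mastery scale (index = level).
BLOOM = ["Remember", "Understand", "Apply", "Analyse", "Evaluate", "Create"]

def compare_learning(expected, demonstrated):
    """Compare an intended learning specification against demonstrated
    performance, topic by topic. Both arguments map topic names to an
    index into the BLOOM scale; the result maps each specified topic to
    its gap (positive = students exceeded the specification)."""
    return {topic: demonstrated.get(topic, 0) - level
            for topic, level in expected.items()}

# Hypothetical example: two CS2013-style topics.
expected = {"Recursion": 2, "Sorting": 3}        # intended mastery
demonstrated = {"Recursion": 3, "Sorting": 1}    # from itemised grades
gaps = compare_learning(expected, demonstrated)
```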
Abstract:
A body of research in conversation analysis has identified a range of structurally-provided positions in which sources of trouble in talk-in-interaction can be addressed using repair. These practices are contained within what Schegloff (1992) calls the repair space. In this paper, I examine a rare instance in which a source of trouble is not resolved within the repair space and comes to be addressed outside of it. The practice by which this occurs is a post-completion account; that is, an account that is produced after the possible completion of the sequence containing a source of trouble. Unlike fourth position repair, the final repair position available within the repair space, this account is not made in preparation for a revised response to the trouble-source turn. Its more restrictive aim, rather, is to circumvent an ongoing difference between the parties involved. I argue that because the trouble is addressed in this manner, and in this particular position, the repair space can be considered as being limited to the sequence in which a source of trouble originates.
Abstract:
Articular cartilage is a complex structure in which fluid-swollen proteoglycans are constrained within a 3D network of collagen fibrils. Because of the complexity of the cartilage structure, the relationship between its mechanical behaviours at the macro-scale level and its components at the micro-scale level is not completely understood. The research objective in this thesis is to create a new model of articular cartilage that can be used to simulate and obtain insight into the micro-macro interaction and mechanisms underlying its mechanical responses during physiological function. The new model of articular cartilage has two characteristics, namely: i) it does not use the fibre-reinforced composite material idealisation; and ii) it provides a framework for probing the micro-mechanism of the fluid-solid interaction underlying the deformation of articular cartilage, using simple rules of repartition instead of constitutive/physical laws and intuitive curve-fitting. Even though there are various microstructural and mechanical behaviours that can be studied, the scope of this thesis is limited to osmotic pressure formation and distribution and their influence on cartilage fluid diffusion and percolation, which in turn govern the deformation of the compression-loaded tissue. The study can be divided into two stages. In the first stage, the distributions and concentrations of proteoglycans, collagen and water were investigated using histological protocols. Based on this, the structure of cartilage was conceptualised as microscopic osmotic units consisting of these constituents, distributed according to the histological results. These units were repeated three-dimensionally to form the structural model of articular cartilage.
In the second stage, cellular automata were incorporated into the resulting matrix (lattice) to simulate the osmotic pressure of the fluid and the movement of water within and out of the matrix, following the osmotic pressure gradient in accordance with the chosen rule of repartition of the pressure. The outcome of this study is a new model of articular cartilage that can be used to simulate and study the micromechanical behaviours of cartilage under different conditions of health and loading. These behaviours are illuminated at the micro-scale level using the so-called neighbourhood rules developed in the thesis in accordance with the typical requirements of cellular automata modelling. Using these rules and relevant boundary conditions to simulate pressure distribution and related fluid motion produced significant results that provided insight into the relationships between the osmotic pressure gradient, the associated fluid micro-movement, and the deformation of the matrix. For example, it could be concluded that: 1. It is possible to model articular cartilage with the agent-based model of cellular automata and the Margolus neighbourhood rule. 2. The concept of 3D interconnected osmotic units is a viable structural model for the extracellular matrix of articular cartilage. 3. Different rules of osmotic pressure advection lead to different patterns of deformation in the cartilage matrix, enabling insight into how this micro-mechanism influences macro-mechanical deformation. 4. When features such as the transition coefficient (representing permeability) are altered due to changes in the concentrations of collagen and proteoglycans (i.e. degenerative conditions), the deformation process is impacted. 5. The boundary conditions also influence the relationship between the osmotic pressure gradient and fluid movement at the micro-scale level.
The outcomes are important to cartilage research, since they can be used to study micro-scale damage in the cartilage matrix. From this, related diseases and their progression can be monitored, leading to potential insight into drug-cartilage interaction for treatment. This innovative model is an incremental step in ongoing efforts to create computational modelling approaches for cartilage research and for other fluid-saturated tissues and material systems.
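The Margolus-neighbourhood cellular automaton named in conclusion 1 can be sketched in a few lines. This is a minimal illustration assuming a simple equalising repartition rule on a periodic grid; it is not the thesis's actual rule set for osmotic pressure advection:

```python
import numpy as np

def margolus_step(grid, offset):
    """One Margolus update: partition the (even-sized, periodic) grid
    into 2x2 blocks, shifted by `offset` on alternate steps, and apply
    a repartition rule inside each block. Here the rule simply
    equalises the quantity (e.g. osmotic pressure) within the block,
    a stand-in for the thesis's advection rules."""
    g = np.roll(grid, (-offset, -offset), axis=(0, 1))
    n, m = g.shape
    blocks = g.reshape(n // 2, 2, m // 2, 2)
    blocks[:] = blocks.mean(axis=(1, 3), keepdims=True)  # equalise block
    return np.roll(blocks.reshape(n, m), (offset, offset), axis=(0, 1))

def simulate(grid, steps):
    """Alternate the block partition each step, as Margolus requires."""
    for t in range(steps):
        grid = margolus_step(grid, offset=t % 2)
    return grid
```

Because each block update conserves its total, the scheme conserves the overall quantity while redistributing it locally, which is the property that makes this neighbourhood attractive for fluid-movement rules.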
Abstract:
We develop a fast Poisson preconditioner for the efficient numerical solution of a class of two-sided nonlinear space fractional diffusion equations in one and two dimensions using the method of lines. Using the shifted Grünwald finite difference formulas to approximate the two-sided (i.e. the left and right Riemann-Liouville) fractional derivatives, the resulting semi-discrete nonlinear systems have dense Jacobian matrices owing to the non-local property of fractional derivatives. We employ a modern initial value problem solver utilising backward differentiation formulas and Jacobian-free Newton-Krylov methods to solve these systems. For efficient performance of the Jacobian-free Newton-Krylov method it is essential to apply an effective preconditioner to accelerate the convergence of the linear iterative solver. The key contribution of our work is to generalise the fast Poisson preconditioner, widely used for integer-order diffusion equations, so that it applies to the two-sided space fractional diffusion equation. A number of numerical experiments are presented to demonstrate the effectiveness of the preconditioner and the overall solution strategy.
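The shifted Grünwald approximation referred to above can be illustrated with a short sketch; the dense inner loop makes explicit why the resulting Jacobians are dense. This is the generic textbook form of the one-sided formula, not the paper's code:

```python
import numpy as np

def grunwald_weights(alpha, n):
    """Weights g_k = (-1)^k * binom(alpha, k) of the Grunwald-Letnikov
    formula, computed by the standard recurrence."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def left_frac_derivative(u, alpha, h):
    """Shifted Grunwald approximation to the left Riemann-Liouville
    derivative on a uniform grid:
        D^alpha u(x_i) ~ h^(-alpha) * sum_k g_k * u[i - k + 1].
    Every node couples to all nodes to its left, so the discretisation
    matrix (and hence the Jacobian) is dense."""
    n = len(u)
    g = grunwald_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        for k in range(i + 2):          # shift of one node to the right
            j = i - k + 1
            if 0 <= j < n:
                d[i] += g[k] * u[j]
    return d / h**alpha
```

As a sanity check, for alpha = 1 the weights collapse to (1, -1, 0, ...) and the formula reduces to a first-order forward difference.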
Abstract:
The method of lines is a standard method for advancing the solution of partial differential equations (PDEs) in time. In one sense, the method applies equally well to space-fractional PDEs as it does to integer-order PDEs. However, there is a significant challenge when solving space-fractional PDEs in this way, owing to the non-local nature of the fractional derivatives. Each equation in the resulting semi-discrete system involves contributions from every spatial node in the domain. This has important consequences for the efficiency of the numerical solver, especially when the system is large. First, the Jacobian matrix of the system is dense, and hence methods that avoid the need to form and factorise this matrix are preferred. Second, since the cost of evaluating the discrete equations is high, it is essential to minimise the number of evaluations required to advance the solution in time. In this paper, we show that an effective preconditioner is essential for improving the efficiency of the method of lines for solving a quite general two-sided, nonlinear space-fractional diffusion equation. A key contribution is to show how to construct suitable banded approximations to the system Jacobian for preconditioning purposes that permit high orders and large stepsizes to be used in the temporal integration, without requiring dense matrices to be formed. The results of numerical experiments are presented that demonstrate the effectiveness of this approach.
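The two ingredients discussed above, a Jacobian-free matrix-vector product and a banded approximation retained for preconditioning, can be sketched generically as follows. This is an illustration under stated assumptions, not the paper's implementation, and the bandwidth choice is hypothetical:

```python
import numpy as np

def jacobian_free_matvec(F, u, v, eps=1e-7):
    """Finite-difference Jacobian-vector product used by Jacobian-free
    Newton-Krylov methods: J(u) v ~ (F(u + eps*v) - F(u)) / eps.
    The dense Jacobian is never formed or factorised."""
    return (F(u + eps * v) - F(u)) / eps

def banded_approximation(J, bandwidth):
    """Keep only entries within `bandwidth` of the diagonal: the kind
    of banded Jacobian approximation usable as a preconditioner, since
    banded systems can be factorised cheaply."""
    n = J.shape[0]
    i, j = np.indices((n, n))
    return np.where(np.abs(i - j) <= bandwidth, J, 0.0)

# Hypothetical linear example: F(u) = A u, so J(u) = A exactly.
A = np.array([[2.0, -1.0, 0.1],
              [-1.0, 2.0, -1.0],
              [0.1, -1.0, 2.0]])
F = lambda u: A @ u
```

The Krylov solver only ever calls `jacobian_free_matvec`, while the banded matrix is what gets factorised, which is exactly the division of labour that keeps dense matrices out of the computation.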
Abstract:
We consider a two-dimensional space-fractional reaction diffusion equation with a fractional Laplacian operator and homogeneous Neumann boundary conditions. The finite volume method is used with the matrix transfer technique of Ilić et al. (2006) to discretise in space, yielding a system of equations that requires the action of a matrix function to solve at each timestep. Rather than form this matrix function explicitly, we use Krylov subspace techniques to approximate the action of this matrix function. Specifically, we apply the Lanczos method, after a suitable transformation of the problem to recover symmetry. To improve the convergence of this method, we utilise a preconditioner that deflates the smallest eigenvalues from the spectrum. We demonstrate the efficiency of our approach for a fractional Fisher’s equation on the unit disk.
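The Lanczos approach to approximating the action of a matrix function, as used above, can be sketched as follows. This minimal version omits the deflation preconditioner and the symmetrising transformation described in the abstract, and assumes a symmetric matrix:

```python
import numpy as np

def lanczos_fA_b(A, b, f, m):
    """Approximate f(A) b for symmetric A with an m-step Lanczos
    process: f(A) b ~ ||b|| * V_m f(T_m) e_1, where T_m is the small
    tridiagonal matrix built from the recurrence. f(A) is never
    formed explicitly; only matrix-vector products with A are used."""
    n = len(b)
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    norm_b = np.linalg.norm(b)
    V[:, 0] = b / norm_b
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0.0:          # breakdown: invariant subspace found
                m = j + 1
                break
            V[:, j + 1] = w / beta[j]
    T = (np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1)
         + np.diag(beta[:m - 1], -1))
    evals, evecs = np.linalg.eigh(T)    # f(T) e_1 via eigendecomposition
    fT_e1 = evecs @ (f(evals) * evecs[0, :])
    return norm_b * V[:, :m] @ fT_e1
```

For m equal to the dimension of A the approximation is exact (up to rounding); in practice m is kept small, and a deflation preconditioner of the kind described above improves convergence when small eigenvalues slow it down.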
Abstract:
Supervision in the creative arts is a topic of growing significance, given the increase in creative practice PhDs across universities in Australasia. This presentation will provide context for existing discussions in creative practice and supervision. Creative practice – encompassing practice-based or practice-led research – now has a rich history of research surrounding it. Although it is a comparatively new area of knowledge, great advances have been made in terms of how practice can influence, generate, and become research. The practice of supervision is also a topic of interest, perhaps unsurprisingly considering its necessity within the university environment. Scholars have written extensively about supervision practices and the importance of the supervisory role, in both academic and more informal forms. However, there is an obvious space in between: there is very little research on supervision practices within creative practice higher degrees, especially at PhD or doctorate level. Despite the existence of creative practice PhD programs, and thus the inherent need for successful supervisors, few publications and resources are available. Creative Intersections explores the existing publications and resources, and illustrates that a space for new published knowledge and tools exists.
Abstract:
The ageing population is increasing worldwide, as are a range of chronic diseases, conditions, and physical and cognitive disabilities associated with later life. The older population is also neurologically diverse, with unique and specific challenges around mobility and engagement with the urban environment. Older people tend to interact less with cities and neighbourhoods, putting them at risk of further illnesses and co-morbidities associated with being less physically and socially active. Empirical evidence has shown that reduced access to healthcare services, health-related resources and social interaction opportunities is associated with increases in morbidity and premature mortality. While it is crucial to respond to the needs of this ageing population, there is insufficient evidence for interventions regarding their experiences of public space from the vantage point of neurodiversity. This paper provides a conceptual and methodological framework to investigate relationships between the sensory and cognitive abilities of older people, and their use and negotiation of the urban environment. The paper will refer to a case example of the city of Logan, an urban area in Queensland, Australia, where current urban development provides opportunities for the design of spaces that take experiences of neurodiversity into account. The framework will inform the development of principles for urban design for increasingly neurologically diverse populations.
Abstract:
The configuration of comprehensive Enterprise Systems to meet the specific requirements of an organisation still consumes significant resources today. The consequences of failed implementation projects are severe and may even threaten the organisation’s existence. This paper proposes a method which aims at increasing the efficiency of Enterprise Systems implementations. First, we argue that process modelling languages featuring different degrees of abstraction exist for different user groups and purposes, which makes it necessary to integrate them. We describe how to do this using the meta models of the involved languages. Second, we motivate that an integrated process model based on the integrated meta model needs to be configurable, and elaborate on the mechanisms by which this model configuration can be achieved. We introduce a business example using SAP modelling techniques to illustrate the proposed method.
Abstract:
The success of contemporary organizations depends on their ability to make appropriate decisions. Making appropriate decisions is inevitably bound to the availability and provision of relevant information. Information systems should be able to provide information in an efficient way. Thus, within information systems development, a detailed analysis of information supply and information demands has to prevail. Based on Szyperski’s information set and subset model, we will give an epistemological foundation of information modeling in general and show why conceptual modeling in particular is capable of specifying effective and efficient information systems. Furthermore, we derive conceptual modeling requirements based on our findings. A short example illustrates the usefulness of a conceptual data modeling technique for the specification of information systems.
Abstract:
In a business environment, making the right decisions is vital for the success of a company. Making the right decisions is inevitably bound to the availability and provision of relevant information. Information systems are supposed to provide this information in an efficient way. Thus, within information systems development, a detailed analysis of information supply and information demands has to prevail. Based on Szyperski’s information set and subset model, we will give an epistemological foundation of information modeling in general and show why conceptual modeling in particular is capable of developing effective and efficient information systems. Furthermore, we derive conceptual modeling requirements based on our findings.
Abstract:
The issue of firm growth - how it is achieved and managed, and what consequences it has for different stakeholders - is both theoretically interesting and practically important. It is also an area of scholarly enquiry that has expanded very significantly since we started doing research on it in the 1980s and 1990s. In this volume we present and comment upon the most recent contributions we have made to this field of inquiry - separately, jointly, and with various colleagues (who are included in the 'we'/'us' used in the remainder of this introduction). While the chapters have been published before in various places, we think it valuable to gather them in one easily accessible place, which also allows space for our reflective commentary across the individual chapters. We hope readers will find the work a useful and worthwhile addition to the extant body of knowledge about firm growth. We also hope they will find that it, as its title suggests, brings new perspectives on firm growth and its study, and that it can inspire future contributions by other researchers. This is important, because despite the growing volume of research on firm growth, many important questions still lack satisfactory answers. The current volume may be regarded as a follow-up of a previous collection where we, together with Frederic Delmar, presented and commented on eight articles on (mostly small) firm growth that we had jointly or separately published up until that time (Davidsson et al., 2006). In that volume we organised the works under three broad themes: the conceptual and empirical complexity of the firm growth phenomenon; growth aspirations and motivations; and patterns and determinants of actual growth. The current volume builds on and extends these themes. Only one of the chapters in the previous volume directly addressed the issue of drivers of actual growth.
We add three more in this book, two of which expand on the 'aspirations and motivations' theme by relating the growth aspirations and motivations (or lack thereof) of the owner-manager to the actual growth achieved in the subsequent period.
Abstract:
Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients - from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories, and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data, and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument of empiricists and deductionists is that, as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship - accuracy, and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs.
Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimize the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible to present results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date. Knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
Abstract:
Although there is an increasing recognition of the impacts of climate change on communities, residents often resist changing their lifestyle to reduce the effects of the problem. By using a landscape architectural design medium, this paper argues that public space, when designed as an ecological system, has the capacity to create social and environmental change and to increase the quality of the human environment. At the same time, this ecological system can engage residents, enrich the local economy, and increase the social network. Through methods of design, research and case study analysis, an alternative master plan is proposed for a sustainable tourism development in Alacati, Turkey. Our master plan uses local geographical, economic and social information within a sustainable landscape architectural design scheme that addresses the key issues of ecology, employment, public space and community cohesion. A preliminary community empowerment model (CEM) is proposed to manage the designs. The designs address: the coexistence of local agricultural and sustainable energy generation; state of the art water management; and the functional and sustainable social and economic interrelationship of inhabitants, NGOs, and local government.
Abstract:
Better management of knowledge assets has the potential to improve business processes and increase productivity. This fact has led to considerable interest in recent years in the knowledge management (KM) phenomenon, and in the main dimensions that can impact on its application in construction. However, a lack of a systematic way of assessing KM initiatives’ contribution towards achieving organisational business objectives is evident. This paper describes the first stage of a research project intended to develop, and empirically test, a KM input-process-output framework comprising unique and well-defined theoretical constructs representing the KM process and its internal and external determinants in the context of construction. The paper presents the underlying principles used in operationally defining each construct through the use of extant KM literature. The KM process itself is explicitly modelled via a number of clearly articulated phases that ultimately lead to knowledge utilisation and capitalisation, which in turn adds value or otherwise to meeting defined business objectives. The main objective of the model is to reduce the impact of subjectivity in assessing the contribution made by KM practices and initiatives toward achieving performance improvements.